Oireachtas Joint and Select Committees
Tuesday, 16 April 2024
Joint Committee On Children, Equality, Disability, Integration And Youth
Protection of Children in the Use of Artificial Intelligence: Discussion (Resumed)
Mr. Dualta Ó Broin:
I thank members for the invitation to appear before the committee today to discuss the subject of the protection of children in the use of AI. My name is Dualta Ó Broin. I am head of public policy for Meta in Ireland. I am joined by my colleague David Miles, who is safety policy director for Europe, Middle East and Africa with Meta.
While Meta believes in freedom of expression, we also want our platforms, Facebook and Instagram, to be safe places where people, particularly young people, do not have to see content meant to intimidate, exclude or silence them. We take a comprehensive approach to achieving this by: writing clear policies, known as community standards in the case of Facebook and community guidelines in the case of Instagram, about what is and is not allowed on our platforms; developing sophisticated technology to detect and prevent abuse from happening in the first place; and providing helpful tools and resources for people to control their experience or get help. We regularly consult with experts, advocates and communities around the world to write our rules and we constantly re-evaluate where we need to strengthen them.
AI plays a central role in reducing the volume of harmful online content on Facebook and Instagram. Our online and publicly accessible transparency centre contains quarterly reports on how we are faring in addressing harmful content on our platforms, in addition to a range of other data. This includes how much content we remove, across a broad range of violations, and how much of that content was removed before any user reported it to us.
There are some violation areas where AI is extremely effective. I refer, for example, to fake accounts, where over 99% of violations are identified by our AI systems. An example of a more difficult violation area for AI is bullying and harassment. In this area we removed 7.7 million posts from Facebook and 8.8 million posts from Instagram in the fourth quarter of 2023. Of these posts, 86.5% on Facebook and 95.3% on Instagram were identified by our AI systems and removed before they were reported to us by a user. One of the reasons AI is not yet as effective in this harm area is that bullying and harassment can be quite contextual and not as immediately apparent as a fake account. That said, our systems are constantly improving. The same metric for the bullying and harassment violation for the fourth quarter of 2022 was 61% in the case of Facebook and 85.4% in the case of Instagram.
In addition to the actions we take to remove harmful content, we have built over 30 tools and features that help teens have safe, positive experiences and give parents simple ways to set boundaries for their teens. We have included a link to the timeline of these tools in our written submission. Further information about these tools and features and how they work can be found in our Instagram Parent Guide and our Family Centre. Additional resources on supportive online experiences can be found in our Education Hub for Parents and Guardians. While these centres and guides give parents the ability and resources to navigate our tools and products, we understand that it can be overwhelming for parents to stay on top of every new feature and product across every application.
In the US, the average teenager uses 44 applications on their phone.
We believe that a significant step forward can be taken at a European level to ensure that parents only need to verify the age of their child once and that their child will then be placed into an age-appropriate experience on every single app. In Meta's view, the most efficient and effective way to achieve this would be at the operating system or app store level, although there are other alternatives. This would not remove responsibility from every app to have processes in place to manage age effectively, and my colleague, Mr. Miles, can go into the steps that we at Meta take. The question of age verification is complicated; however, we believe that the time has come to move forward with an effective solution that addresses the concerns of all stakeholders, including parents.
I will skip the section on education in the interests of time but am happy to answer any questions on it.
As set out in our submission to the Justice Committee in March, as part of Meta's commitment to transparency, we have published more than 20 AI system cards that explain how artificial intelligence powers recommendation experiences on Facebook and Instagram. In that submission, we described the way in which we use these systems to improve the user experience and make it safer, and we also described the tools and controls available to users to control their experiences.
Finally, it is sometimes claimed that Meta is financially motivated to promote harmful or hateful content on our platforms to increase engagement. This is simply untrue. This content violates our policies and prevents our users from having enjoyable experiences. As a company, the vast majority of our revenue comes from advertising. Our advertisers do not want to see such content next to their ads. It is clear therefore that we are financially motivated to remove content which violates our policies as quickly as possible once we become aware of it.
I hope this gives members of the committee an overview of some of the uses of AI by Meta, and we look forward to their questions.