Oireachtas Joint and Select Committees
Tuesday, 16 April 2024
Joint Committee On Children, Equality, Disability, Integration And Youth
Protection of Children in the Use of Artificial Intelligence: Discussion (Resumed)
Ms Susan Moss:
I thank the Cathaoirleach and all the members of the joint committee for the invitation to attend. I am head of public policy at TikTok and I am joined by my colleague, Ms Chloe Setter, child safety public policy lead. We appreciate the opportunity to appear before the committee today on this important topic of the protection of children in the use of artificial intelligence. At TikTok, we strive to foster an inclusive environment where people can create, find community and be entertained. More than 2 million people in Ireland use TikTok every month, which not only demonstrates how popular the platform is, but also underlines the responsibility we have to keep TikTok safe. Safety is a core priority that defines TikTok. We have more than 40,000 trust and safety professionals working to protect our community globally. We expect to invest €2 billion in trust and safety efforts this year alone, with the majority of our European trust and safety professionals based here in Ireland.
Artificial intelligence, AI, plays an integral role in our trust and safety work. We know that content moderation is most effective when cutting-edge technology is combined with human oversight and judgment. The adoption and evolution of AI in our processes enables us to spot and stop threats quickly, allows us to better understand online behaviour and improves the efficacy, speed and consistency of our enforcement. Nowhere is that more important than in the protection of teenagers.
Leveraging advanced technology, all content uploaded to our platform undergoes moderation to swiftly identify and address potential instances of harmful content. Automated systems work to prevent violative content from ever appearing on TikTok in the first place, while also flagging content for human review for context and closer scrutiny. We make careful product design choices to help to make our app inhospitable to those who may seek to cause harm. For example, we meticulously monitor for child sexual abuse material, CSAM, and related materials, employing third-party tools such as PhotoDNA to combat and prevent its dissemination on our platform.
Developing and maintaining TikTok's recommendation system, which powers our For You feed, is a continuous process as we work to refine accuracy, adjust models and reassess the factors that contribute to recommendations based on feedback from users, research and data. TikTok's For You feed is designed to help people to discover original and entertaining content. A number of safeguards are in place to support this aim. For example, our safety team takes additional precautions to review videos as they rise in popularity to reduce the likelihood of content that may not be appropriate for a general audience entering our recommendation system. Getting these systems and tools right takes time and iteration. We will continue to explore how we can ensure our system is making a diversity of recommendations.

I understand that the introduction of new disruptive technologies inevitably triggers unease, and artificial intelligence is no exception to this rule, prompting legitimate concerns around the legal system, privacy and bias. It is, therefore, incumbent on all of us to play our part in ensuring that AI reduces inequity and does not contribute to it.
We have robust community guidelines in place governing the use of AI-generated content on our platform. TikTok supports transparent and responsible content creation practices through our AI labelling tool for creators. The policy requires people to label AI-generated content that contains realistic images, audio or video, in order to help viewers contextualise the videos they see and prevent the potential spread of misleading content.
We are currently in the process of testing the automatic labelling of AI-generated content.
Listening to the experiences of teenagers is one of the most important steps we can take to build a safe platform for them and their families. It helps us avoid designing safety solutions that may be ineffective or inadequate for the actual community they are meant to protect. Last month we launched TikTok's Global Youth Council, a new initiative that further strengthens how we build our app to be safe for teens by design. The launch comes as new global research with over 12,000 teenagers and parents reveals a desire for more opportunities to work alongside platforms like TikTok.
At TikTok, we aim to build responsibly and equitably. We work to earn and maintain trust through ongoing transparency into the actions we take to safeguard our platform because we know that saying "trust us" is just not enough. For example, we have established a dedicated transparency centre here in Ireland where vetted experts can securely review TikTok's algorithm source code in full. We also recognise the need to empower independent critical assessment and research of our platform. TikTok provides transparent access to our research API in Europe which is designed to make it easier to independently research our platform and is informed by feedback that we are hearing from researchers and civil society. To empower continued discovery on TikTok, we recently announced a dedicated STEM feed that will give our younger community a dedicated space to explore a wide range of enriching videos related to science, technology, engineering, and mathematics.
Protecting teenagers online necessitates a concerted and collective endeavour, and we share the committee's dedication to protecting young people online. For our part, we will strive to continuously improve our efforts to address harms facing young people online through dedicated policies, 24-7 monitoring, the use of innovative technology and significant ongoing investments in trust and safety to achieve this goal. We thank members for their time and consideration today and welcome any questions they may have.