Oireachtas Joint and Select Committees

Tuesday, 13 February 2024

Joint Committee On Children, Equality, Disability, Integration And Youth

Protection of Children in the Use of Artificial Intelligence: Discussion

Ms Caoilfhionn Gallagher:

I thank the committee for extending the invitation to appear before it today in my capacity as special rapporteur on child protection. I thank it, first and foremost, for considering this important topic. To follow Professor O'Sullivan, I will quote from the UNICEF document he referred to, which states:

Today’s children are the first generation that will never remember a time before smartphones. They are the first generation whose health care and education are increasingly mediated by AI-powered applications and devices, and some will be the first to regularly ride in self-driving cars. They are also the generation for which AI-related risks, such as an increasing digital divide, job automation and privacy infringements, must be addressed before becoming even more entrenched in the future.

That is why UNICEF says it is essential that child-specific considerations are front and centre in AI development. As special rapporteur, and bearing in mind the special expertise of my fellow witnesses, a key focus of my role is ensuring that children’s rights principles are embedded in legislative and policy frameworks to comply with the UNCRC and with Article 42A(1) of the Constitution with respect to child protection.

On many other issues which fall within my mandate, there is an abundance of international material. There is often very clear guidance from the UN Committee on the Rights of the Child on a topic, but this is one topic in respect of which, in the international policy debate, there has long been a clear gap at the intersection of children’s rights and artificial intelligence, AI, resulting in children’s rights often being overlooked or added as a belated afterthought in guidance and policy documents. All too often, children are simply left out of the policy conversation entirely. Although the rights of children are recognised by the UN Secretary-General as needing "acute attention in the digital age", I agree with and adopt UNICEF’s criticism that this is "not being reflected in the global policy and implementation efforts to make AI systems serve society better".

I will focus on three topics in my opening statement, bearing in mind the detailed and specific expertise of my colleagues. First, I will focus on gaps in the policy discourse concerning AI and children’s rights. Second, I will highlight a number of key international materials that may assist the committee in considering the issues before it; I note that there is some overlap between my remarks and Professor O'Sullivan's on that point. Finally, I will briefly note a number of specific issues arising in the Irish context that require careful consideration. I welcome the views of the subsequent witnesses on those issues.

As I indicated at the outset, in the international policy debate there has long been a clear gap at the intersection of children’s rights and AI. The UN Secretary-General’s remarks to the Security Council on AI last July, for example, contained no mention of the rights of the child or the threats posed to those rights by the proliferation of artificial intelligence technologies. This is quite a stark example of children being left out of the AI conversation at the highest level internationally. In 2021, we saw the publication of the UN Committee on the Rights of the Child’s general comment No. 25, which addresses children’s rights in respect of the digital environment but fails to comprehensively address the unique threats posed to children by AI or the unique opportunities that arise for children in respect of AI. I recognise that the UN special rapporteur on the right to privacy specifically addressed AI and children in his 2021 report but that, of course, rightly reflects the limits of his mandate, with the focus being upon privacy and data protection issues.

The gap I have referred to is also apparent in the most recent draft of the Council of Europe’s Framework Convention on AI from December 2023, which includes only a generic, catch-all reference to "the rights of persons with disabilities and of children". The Council of Europe has taken steps towards rectifying this gap by adding a supplementary chapter on AI to its 2020 Handbook for Policy Makers on the Rights of the Child in the Digital Environment. That handbook did not originally contain such a chapter; it was added later, so this is another example of the afterthought approach to children's rights on this issue.

Following what is a clear international pattern, in the Government of Ireland’s 2021 AI strategy the section dedicated to "risks and concerns" is brief and there is no dedicated focus upon child protection issues or children’s rights. The overall focus of the document is upon building public trust in and engagement with AI. From its review of AI policies in over 60 countries worldwide, UNICEF says this is a common theme: the focus is upon the economy and the opportunities presented by AI, and children's rights are largely sidelined.

I recognise, of course, that this concern at domestic level has to an extent been overtaken by the extensive consultation of young people at the National Youth Assembly on Artificial Intelligence in October 2022, which I welcome and support. The involvement of young people in AI policy, as literate yet vulnerable users of digital technologies, is crucial, and I welcome further consultation and established pathways for the integration of young people’s perspectives on these issues. I also acknowledge and welcome the work of Coimisiún na Meán, which I addressed when I appeared before the committee previously. I also recognise and welcome the EU work, and the superb work of Professor O'Sullivan and colleagues in that regard, including the very recent EU developments.

While many relevant international guidance and policy documents concerning AI fail to deal with children’s rights and AI’s impact on them, those that do address children’s rights often take a restricted approach, considering only the potential threats AI may pose to children’s privacy, their exposure to harmful content and the risk of online exploitation. These are, of course, vitally important issues that need to be explored, but they are far from the only issues arising. In order to ensure that the best interests of the child are at the heart of the development of policy, legislation and practice concerning AI, it is vital that the full breadth of both the risks AI poses and the opportunities it presents is considered through a children’s rights lens. I recognise that the gaps in the international discourse on this topic pose unique challenges for the Government, the Legislature, policymakers and this committee in ensuring that both risks and opportunities are considered in a child-centred way, because there is no ready international yardstick to which they can point.

Following on from what Professor O'Sullivan said, I commend to the committee three international policy documents that buck the trend I have identified. The first is the Policy Guidance on AI for Children from UNICEF and the Ministry for Foreign Affairs of Finland, from November 2021, which is a superb and very helpful document. The second is the JRC Science for Policy Report from the European Commission, Artificial Intelligence and the Rights of the Child, from 2022. It is also important to have regard to the Council of Europe Draft Framework Convention on AI, Human Rights, Democracy and the Rule of Law from 2023. I flag in particular the importance of the UNICEF policy guidance because it takes as its basis the UNCRC, which sets out the rights that must be realised for every child to develop to his or her full potential. Importantly, this guidance recognises that AI systems can uphold or undermine children’s rights depending on how they are used, and it addresses both risks and opportunities - how to minimise the former and leverage the latter - in ways that recognise the unique position of children and, importantly, the different contexts for certain groups of children, particularly those from marginalised groups and communities. There are specific sections in it concerning girls, LGBTQI+ teenagers and children from ethnic minorities. The guidance uses three child-specific lenses when considering how to develop child-centred AI: protection, provision and participation. As Professor O'Sullivan said, it sets out nine requirements for child-centred AI, which he has addressed. It is a helpful and important document and I hope it will be useful to the committee. The documents I have referenced recognise the importance of protective measures - ensuring that children are safe - but also the importance of ensuring non-discriminatory inclusion for children in technology that already profoundly affects their lives and will have unknown and far-reaching ramifications for their futures, and the importance of respect for children’s agency.

I am conscious that Ms Daly and Dr. Ryan are going to address some of these issues in more detail. Finally, I note three issues in particular in the Irish context that merit careful consideration as this topic is explored by the committee. First, as my opening statement makes clear, it is important that the full range of risks posed by AI is considered within the framework set out by UNICEF, in particular in the document I have referenced. This must include the risks of systemic and automated discrimination and exclusion through bias, and the limitation of children’s opportunities and development by AI-based predictive analytics and profiling. I note in particular UNICEF’s warning that profiling and micro-targeting based upon past data sets "can reinforce, if not amplify, historical patterns of systemic bias and discrimination". UNICEF gives the example of AI systems that may "reinforce stereotypes for children and limit the full set of possibilities which should be made available to every child, including for girls and LGBT children. This can result in, or reinforce, negative self-perceptions, which can lead to self-harm or missed opportunities." Any of us who are parents of teenagers may well have seen examples of that, where a child who is interested in gaming or military history may then receive material that suggests that he or she is going to be interested in white supremacy or racism. This is something I have seen with my 13-year-old son. That is a very important topic and one to bear in mind.

Second, a specific issue of serious concern relates to "recommender algorithms". I am conscious that other witnesses are going to deal with this in more detail. This includes social media algorithmic recommender systems that may "push" harmful content to children. In my opening statement, I referred to the 2022 study by the Center for Countering Digital Hate, CCDH, on TikTok’s recommendation algorithm. That study concluded that the algorithm pushes self-harm and eating disorder content to teenagers within minutes of them expressing interest in those topics. The study is worth looking at. It found that TikTok promoted content that included dangerously restrictive diets, pro-self-harm content and content romanticising suicide to users showing a preference for the material, even if they were registered as under 18. The study was based on accounts registered as age 13 in the US, UK, Canada and Australia. The researchers set up both "standard" and "vulnerable" accounts; the "vulnerable" accounts included the term "loseweight" in their usernames. Over an initial 30-minute period after the accounts were launched, the accounts "paused briefly" on videos about body image, eating disorders and mental health and liked them.

On the standard accounts, content about suicide followed within three minutes and eating disorder material was shown within eight minutes. That research also found that accounts registered for 13-year-olds were proactively shown videos advertising weight loss drinks and tummy tuck surgery. For the vulnerable accounts, the researchers found that the content was even more extreme, including detailed methods of self-harm and young people discussing plans to kill themselves. CCDH said that a video related to mental health or body image was shown to the vulnerable accounts every 27 seconds. This requires urgent attention and I welcome the attention that other witnesses will bring to it. I also welcome Coimisiún na Meán’s detailed focus on the issue of the use of recommender algorithms.

Finally, I emphasise that AI systems also have great potential to safeguard children. Dedicated services and products using AI technologies plainly have the potential to protect children; I have seen some of that in my work involving children who were abused across borders in south-east Asia and Uganda. UNICEF has highlighted, for example, the ability to identify abducted children, to detect known child abuse material, and to detect and block livestreamed abuse and potentially identify the perpetrators, users and affected children. When considering the issue of AI and child protection, it is vitally important to consider AI’s potential to proactively vindicate children’s rights, and not only defensive concerns regarding how AI can threaten children’s rights. That reflects the provisions of the UNCRC and of Article 42A(1) of the Constitution, which states that the State shall “as far as practicable, by its laws protect and vindicate” children’s rights. It is important to look at the defensive issues and how to protect children from risks, but it is also important to look proactively at AI's potential to vindicate children's rights and to give them greater protection.