Oireachtas Joint and Select Committees

Tuesday, 13 February 2024

Joint Committee On Children, Equality, Disability, Integration And Youth

Protection of Children in the Use of Artificial Intelligence: Discussion

Professor Barry O'Sullivan:

I am honoured to appear as a witness today. I am a full professor in the school of computer science at UCC and have worked in the field of artificial intelligence for more than 25 years. I am founding director of the Insight SFI research centre for data analytics at UCC and the Science Foundation Ireland centre for research training in AI. I served as vice chair of the European Commission high-level expert group on AI from 2018 to 2020, which formulated the EU's ethical approach to artificial intelligence. I currently represent the European Union at the global partnership on artificial intelligence. I am a fellow and a past president of the European Association for Artificial Intelligence and a fellow of the Association for the Advancement of Artificial Intelligence, as well as a member of the Royal Irish Academy. I hold a number of ministerial appointments, including chair of the national research ethics committee for medical devices and membership of the Government's recently constituted AI advisory council. In 2016, I was recognised as SFI researcher of the year and I also received its best international engagement award in 2021. In 2023, I was the first Irish person to receive the European AI association's distinguished service award. In addition to my academic work, I contribute to several global track two diplomacy efforts and related activities at the interface of military, defence, intelligence and the geopolitics of AI. I am, for example, senior technology advisor to INHR in Geneva, New York and Washington DC. I serve on the AI governance forum at the centre for new American security in Washington DC, and I am one of three polymath fellows at the Geneva centre for security policy.

The term "artificial intelligence" was coined in 1955 by John McCarthy - the son of a Kerry immigrant - Marvin Minsky, and others. They proposed the term in the context of the Dartmouth summer research project on AI, which took place in 1956. The field of AI is challenging to define and there is no agreed definition. I normally define it as a system that performs tasks normally associated with requiring human intelligence. These include, for example, the abilities to learn, reason, plan, understand language and see. Much recent interest in AI has been a result of the success of a subfield of AI called machine learning, and specifically the success of deep learning, a subfield of machine learning. The general public has become aware of specific recent success stories in AI through systems such as ChatGPT, one of many large language models, LLMs. LLMs are one of many forms of generative AI, which are systems that can generate text, images, audio, video, and so on, in response to prompts or requests. Despite the hype, while the field of AI has made progress over the past decade or so, major obstacles still exist to building systems that really compete with the capabilities of human beings.

Over the past decade there has been considerable focus on the governance and oversight of AI systems. As part of our work at the European Commission's high-level expert group on AI, for example, we developed the EU's approach to trustworthy AI, built on a set of strong ethical principles. We also proposed a risk-based approach to the regulation of AI. Over the past few weeks, the European Union has finalised the AI Act, which will govern all AI systems deployed in the Union. The Act builds strongly upon our work at the high-level expert group on AI, HLEG-AI. There are specific considerations regarding the protection of children in the AI Act, including some specific use cases that will be prohibited in the EU. I had the pleasure of participating in the national youth assembly on AI in October 2022, which was hosted by the Departments of Children, Equality, Disability, Integration and Youth, and Enterprise, Trade and Employment in partnership with the National Participation Office. The assembly brought together a diverse group of 41 young people from across the country aged between 12 and 24 years. At the national youth assembly on AI, delegates considered the issues affecting young people and provided a set of recommendations to the Minister of State, Deputy Calleary, and the Department of Enterprise, Trade and Employment on Government policy on AI. A key objective of the assembly was to discuss the role, impact and understanding of AI in the lives of children and young people, and their opinions, thoughts and possible fears about the technology and its potential. Recommendations are available along four dimensions, which are AI and society, governance and trust, AI serving the public, and AI education, skills and talent. They have produced a nice poster.

Children encounter AI systems every day when they are working online, using smart devices or gaming, and there are many other modalities. The content they are presented with on their social media accounts, for example, is recommended to them using AI technology known as recommender systems. The movies suggested to them on Netflix and other platforms are curated using AI methods. Smartphones are packed with AI systems such as image editing, image filtering, video production, facial recognition and voice assistant technology. The technology itself is not problematic per se, but it is powerful and can, therefore, be abused in ways that are extremely impactful. Combined with the reach of social media, the effects can be devastating. Children can also encounter AI-generated content. This can range from harmless memes to more sinister uses of deep-fake technology. A deep fake is essentially a piece of content, often generated using AI methods, that does not correspond to something real and may be generated for nefarious purposes. Nudify apps, for example, are becoming readily available. These generate fake images of people in the nude that are often impossible to recognise as fake. Technology to create pornographic videos from input images of a third party is also available and is among the most concerning and harmful uses of AI technology. It is also possible to encounter fake content designed to deceive, such as by making a user believe an online profile belongs to a person known to them, or to someone else they might be comfortable interacting with.

UNICEF issued its policy guidance on AI for children in 2021, building on the UN Convention on the Rights of the Child. This guidance proposed nine requirements for child-centred AI: support children's development and well-being; ensure inclusion of and for all children; prioritise fairness and non-discrimination for children; protect children's data and privacy; ensure safety for children; provide transparency, explainability and accountability for children; empower governments and businesses with knowledge of AI and children's rights; prepare children for present and future developments in AI; and create an enabling environment.

Educating children, parents, guardians and wider society on the responsible use of AI technology and how AI might be encountered is key. I chaired a committee for the expert group on future skills needs focused on AI skills, which reported in May 2022. Our report assesses the skills that are required by a variety of personas in respect of AI and how skills development initiatives could be delivered. At UCC we host a free online course called Elements of AI, which teaches the basics of AI to anyone interested in the topic. It is our aim to educate at least 1% of the Irish population on the basics of AI. Both an English- and an Irish-language version of the course are available. There are, of course, many educational benefits to AI. Personalised learning experiences can help students achieve higher grades and competence. AI technology can be used to search for additional relevant material and to search through vast sources of information. We often do not regard the Google search engine as an AI system, but that is exactly what it is, so people have been using AI for a very long time. However, AI technology also has the potential to undermine the integrity of assessment processes. It is, for example, becoming trivial to use AI to produce content that can be submitted as part of an assessment at school or university. Dealing with these issues can be challenging.

Finally, while not an instance of children using AI, it is important to note that AI is also widely used to protect children. There are, for example, many systems that filter out harmful content before it reaches children, and several AI content moderation platforms are available. I point to one of those in my notes to this statement. AI systems are also used in the detection of child sexual abuse material, CSAM, online. I previously chaired an advisory board for a project at the invitation of Europol. The Global Response Against Child Exploitation, GRACE, project was aimed at equipping European law enforcement agencies with advanced analytical and investigative capabilities to respond to the spread of online child sexual exploitation material. The project was successful.

I look forward to answering questions.
