Oireachtas Joint and Select Committees

Tuesday, 13 February 2024

Joint Committee On Children, Equality, Disability, Integration And Youth

Protection of Children in the Use of Artificial Intelligence: Discussion

Ms Clare Daly:

I am on the board of directors of CyberSafeKids. My colleague Ms Alex Cooney, who is our CEO, joins us online. We thank the Cathaoirleach and members of the committee for inviting us here today. We welcome the opportunity to talk about this very important topic.

Established in 2015, CyberSafeKids is the only Irish charity dedicated to enhancing online safety for children nationwide. Our mission is to ensure that children are safer online and that the online world is made safer for children. At our core is an education and research programme for primary and post-primary schools, providing expert guidance to pupils aged eight to 16 and to teachers and parents. We also publish trends and usage data annually, which helps to paint a picture of what children are actually doing online, the levels of access they have and the areas of vulnerability. Our education programme has directly reached 65,000 children and 15,000 parents and educators across Ireland.

I will begin by acknowledging the highly important role the Internet plays in all of our lives and recognising that it is a very beneficial resource for children for learning, creating, socialising and entertainment purposes. In 2021, the UN Committee on the Rights of the Child formally adopted general comment No. 25. This recognised children’s rights in the digital environment to be the same as their rights offline, including the right to participate, the right to access accurate information, the right not to be exploited and the right to be protected from harm. While the Internet brings us opportunities that we could not have imagined 20 years ago, it also brings risks, particularly for children. The Internet was not designed with children in mind. These are environments that many adults struggle to understand and manage effectively, let alone children and young people.

While much of the current discussion around AI focuses on the latest developments in generative AI, the technology has been around for years and has been actively shaping children's use of technology over the past decade. AI is behind machine learning, which drives the algorithmic recommender systems that dominate feeds across social media. The likes of Facebook, Instagram, Snapchat, X, YouTube and TikTok rely heavily on AI algorithms to rank and recommend content to their users. The main aim is to keep eyes on screens. While social media and gaming companies might argue that it is all interest-driven and designed to ensure that we are getting the best content and targeted ads for us, it can be deeply problematic for children when inappropriate content relating to self-harm, suicide, pro-anorexia material and sexual content is recommended to them.

Frances Haugen, the ex-Facebook employee turned whistleblower, said Instagram's algorithms can lead to addiction in its young users by creating "little dopamine loops”. Children get caught in the crosshairs of the algorithm and sent down rabbit holes, engaging with sometimes frightening or enraging content because, as Haugen further stated, “it's easier to inspire people to anger than it is to other emotions”. One mother we recently worked with in regard to her 13-year-old daughter said:

As a mother I have huge concerns for our teenage children. Last summer it was brought to my attention that my 13-year-old daughter had been bullied during First Year and by expressing her sadness in a video posted on TikTok, the app started flooding her daily feed with images of other sad teenage girls referencing suicide, eating disorders & self-harm. The damage & sadness this has caused my family has been immense as we discovered that my daughter saw self-harm as a release from the pain she was suffering from the bullying through the information this app is openly allowing. Anti-bullying efforts by schools are of no use unless these social media platforms are held responsible for openly sharing all this hugely damaging content with children.

Cybercriminals seeking to sexually extort online users, including children, are using advanced social engineering tactics to coerce their victims into sharing compromising content. A recent report from the Network Contagion Research Institute noted an exponential increase in this type of criminal activity over the past 18 months and further found that generative AI apps were being used to target minors for exploitation. We know that this is impacting children in this country because we have had calls from families whose children have been affected. One such case involved a teenage boy who thought he was talking to a girl of his own age in a different county. He was persuaded to share intimate images and was immediately told in the aftermath that if he did not pay several thousand euro, the images would be shared in a private Instagram group of his peers and younger siblings. The threat is very real and terrifying and has led, in some cases, to truly tragic consequences.

To make matters worse, there are new apps that are facilitating such efforts, including ones that remove clothing from photographs, which bypass the need to put people in compromising positions. The photos can be taken from social media accounts and then sent to the individual to begin the process of extorting him or her. Such sophisticated technology is greatly increasing the proliferation and distribution of what the UK’s Internet Watch Foundation describes as "AI-generated child sexual abuse material". There is a real fear, highlighted in the Internet Watch Foundation report, that this technology will evolve to be able to create video content too.

We know from recent headlines regarding celebrity deepfakes that the problem is becoming more widespread. Deepfake software can take a person’s photos and face-swap them onto pornographic videos, making it appear as if the subject is partaking in sexual acts. Research in this area points out that while much of the abuse is image-based, such as exploiting broadly-shared open-source content to generate CSAM, it can also be used in grooming and sexual extortion texts, which pose significant risks to children.

The rise in AI technology also poses risks as regards peer-on-peer abuse, which, according to figures from CARI, has grown into a very significant area of risk in recent years. Peer-on-peer abuse is already increasing dramatically, and the courts in Ireland have reported underage access to online pornography as being a major contributing factor in serious crimes. In September 2023, 28 Spanish girls between the ages of 11 and 17 were subjected to peer abuse when their social media images were altered to depict them as nude and these nude images were then circulated on social media. The reports suggest these images were created and circulated by 11 boys from their own school.

Over the past year, we have seen new AI features being rolled out into the hands of children with little thought as to the consequences. Snapchat added its "My AI" feature onto every subscriber’s account in March 2023. It should be borne in mind that 37% of eight to 12-year-olds in Ireland have Snapchat accounts. It was touted as being like a friend of whom you could ask anything. If you read the small print, you could see that it was still being tested and might return wrong or misleading information. Further testing by external experts found that it very quickly forgot it was talking to a child and started returning inappropriate information. Nine months later, in January 2024, Snapchat added a parental control to restrict the use of My AI.

Children are being treated like guinea pigs in the digital world. This was put succinctly by the Harvard professor and author of The Age of Surveillance Capitalism, Shoshana Zuboff, who wrote:

Each day we send our children into this cowardly new world of surveillance economics, like innocent canaries into Big Tech’s coal mines. Citizens and lawmakers have stood silent, as hidden systems of tracking, monitoring, and manipulation ravage the private lives of unsuspecting kids and their families, challenging vital democratic principles for the sake of profits and power. This is not the promising digital century that we signed up for.

Why do the companies behind these services not do more to protect children using them? One simple answer is money. They would need to invest a lot more money to bring about real change and in the meantime, they are making billions of dollars of profit off the back of advertising to children. A recent Harvard study found that collectively in 2022, Meta, X, Snapchat and TikTok made $11 billion from advertising to children in the US, $2 billion of which was to children under the age of 12.

We acknowledge that there are no easy solutions and this is further complicated by the fact that the EU and the US have very different regulatory approaches, with the former being more bureaucratic and heavily protective of the individual’s right to privacy. That said, we do have some suggestions. First, we should try to harness the power of AI to better protect children in online spaces, such as by relying on age assurance to determine the age of child users. We know that technology companies are able to market to users based on age. Further investment in accuracy could see this technology being used to better safeguard children. It could be used, for example, to prevent underage users from accessing the platforms. We know from our trends and usage data that 84% of eight- to 12-year-olds in Ireland have their own social media profile. AI can be used to better protect child users on platforms from exposure to harmful content, targeted advertising and data profiling.

Second, we must ask how well existing legislation mitigates the risks. Does existing law include artificially created images? The emergence of deepfake technology means there is no longer a requirement for a perpetrator to possess real intimate images of a victim. Non-consensual pornographic deepfakes are alarmingly easy to access and create. A report by Sensity AI found that 96% of deepfakes were non-consensual sexual deepfakes. Of those, 99% were of women. The Harassment, Harmful Communications and Related Offences Act 2020 was enacted, in part, to criminalise non-consensual intimate image abuse. Section 3 prohibits the recording, distribution or publishing of an intimate image of another person without that other person’s consent. The definition of intimate image in relation to a person means any visual representation, including any accompanying sound or document, made by any means including any photographic, film, video or digital representation. Section 3 does not appear to clearly extend to images generated without consent. Notably the new EU directive on child sexual abuse will revise the 2011 directive to update the definition of the crime to include child sexual abuse material in deepfakes or AI-generated material. Ireland should be leading the charge in this arena given that we are regarded as one of Europe's leading tech hubs. Our legislation needs to match this status.

In terms of policy, regulation and enforcement, safety by design is a key criterion in devising technologies that are being accessed by children. We know that technology companies are compliance oriented but generally speaking, as commercial entities, they will not go beyond basic compliance where legislation does not demand they do so. How can the powerful concept of safety by design be included in regulation? Coimisiún na Meán is currently drafting binding safety codes and in our response to its public consultation, CyberSafeKids recommended that definitions be extended to include AI-generated images. We also suggest that the regulations in this area be brought into line with any such definitions. Algorithm-based recommender systems should not be allowed to serve content to child users. Regulation is only beneficial when it is properly enforced, and there needs to be greater focus on how to do so.

In terms of finding fresh perspectives, we need new thinking and the confidence to believe we can make real progress on this very tough issue. As with the 2015 Paris climate agreement, which has been made to work, there needs to be skin in the game and a financial incentive. We suggest that the Government set up and fund a research and development laboratory with representatives from academia, industry and the not-for-profit sector to look at how to better protect users in meaningful ways. Ireland can and should be a trailblazer in online child protection given our data protection status, our Europe, Middle East and Africa headquarters status and the strides taken to protect children through legislation over the past 20 years. This could include economic incentives that will change the behaviours of tech companies. For example, if the companies collaborate with academics and other stakeholders, they could get some kind of financial reward or grant based on outcomes, not just on participation.

Policymakers are faced with an enormous and urgent challenge that is growing at pace. There are no quick fixes, but a meaningful solution will involve legislation, regulation, education and innovative approaches. Nothing that we have in place currently is good enough or strong enough to take on this challenge properly. We remain hopeful that this will change, but it needs to happen quickly. None of the digital rights referenced in general comment No. 25 is being upheld for children in the current online environment. They are exposed to a wealth of misinformation and disinformation and are being bombarded with harmful content. This is having a genuine impact on their mental health, contrary to what Mr. Zuckerberg said in the congressional hearing just two weeks ago. At that hearing he was shamed into making an apology to parents who can testify to the tragic consequences for their children of insufficient regulation and oversight. AI offers children opportunities but if it is not properly regulated from the outset, we will see similar scenarios play out, where children are unwittingly testing dangerous, unregulated products for the profit of corporations.

I thank members for their time and look forward to answering any questions they may have.