Oireachtas Joint and Select Committees

Tuesday, 13 February 2024

Joint Committee On Children, Equality, Disability, Integration And Youth

Protection of Children in the Use of Artificial Intelligence: Discussion

Professor Barry O'Sullivan:

I will make a couple of points. Ireland has a good story on privacy, to some extent, when it comes to children's data. I was before the committee six years ago to the very day with a colleague, Professor Mary Aiken, when we argued that Ireland should set the digital age of consent at 16. Deputy Sherlock was very helpful in that respect and argued very strongly for it in the Dáil. We have a digital age of consent that is among the highest in Europe. The great thing about that is it forces these companies to ensure they are not processing the data of children inappropriately. We need to make sure that parents and guardians behave in the right way in that respect, but the challenge always is how we know that someone online is the age he or she claims to be. As a country, we should be figuring out how to require these companies to demonstrate they have rigorous age verification techniques in place. I encourage the committee to bring representatives from those companies in to ask them to show members how they verify that a 13-year-old is a 13-year-old or a nine-year-old is a nine-year-old. There are many ways of doing that, but it would be interesting to ask them how they do it. I think it will be found that they simply ask someone to declare his or her date of birth, so that, for example, a 72-year-old can come onto the platform. Age verification is very challenging but those companies must take it seriously.

As was mentioned, the AI Act requires that deep fakes be labelled, but that does not address the generation issue. I previously said that as well as looking at the role the technology companies play, society also needs to take responsibility for the production of this content. We need to come up with some mechanism whereby it is effectively a crime against society to generate content that is harmful. If I am at home, sitting down and producing a deep fake that will impact an election, or target some person in school, or whatever the case might be, and it is a harmful image, I am effectively guilty of some crime against the fabric of society. How one frames that is not my area of expertise, but we need to take it seriously. We do not have to wait for Europe to do it. We can do it ourselves. That is something we should be doing here. I encourage the committee to look into that.

When it comes to inappropriate content, one of the things we have not discussed is why these systems do what they do. This is what is called a value alignment problem. What do social media platforms want to do? They want to make money. What does Google want to do? It wants to make money. It does that by getting people to engage. In the process of engagement, the content that generates the interest and the money is not aligned with what we consider consistent with our values. There also needs to be some regulatory instrument that ensures there is a value alignment between what these companies do, their algorithms and business models, and the well-being of society. That has to be demonstrated in some way. It is specifically called the value alignment problem.

We need to deal with the value alignment problem and age verification. We need to think about ways in which the production of harmful content is in some sense a crime.
