Oireachtas Joint and Select Committees

Tuesday, 13 February 2024

Joint Committee On Children, Equality, Disability, Integration And Youth

Protection of Children in the Use of Artificial Intelligence: Discussion

Professor Barry O'Sullivan:

UNICEF guideline F relates to providing transparency, explainability and accountability for children, and this requirement is common to almost all ethical guidance on AI. The explainability part is the easiest one to explain, if you pardon the expression. A user can ask why they are getting certain content, where it came from and why the company is showing it to them. It is a surprisingly difficult problem, and we can explore that in greater detail if the Senator wishes. Under the GDPR, there are all sorts of rights to explanation, but the explanations have become quite diluted because they are technologically very difficult to generate.

Transparency relates to issues such as what information there is about me and others, the provenance of the content and so on. It is about being able to look into the system and get a sense of why somebody is seeing something. Accountability concerns both what happens within the system and what mechanisms there are for seeking redress. Ireland is doing fantastically well on issues of redress but, unfortunately, our Online Safety and Media Regulation Act does not do a good job of this in the digital space because it says that if someone is wronged, they need to get in line with everybody else. Of course, the scale of these technologies means 200 million people can be harmed in a heartbeat, so it is a rather long line. We need to think of creative ways of dealing with that. Transparency, explainability and accountability are really about understanding where the content is coming from, why the person is getting it, what data was used and so on.

These are not easy problems to solve but they should be solved.

A couple of things are often overlooked in the context of recommender systems. Everybody uses recommender systems every day of the week. When people go home and use Netflix, the reason they get a movie is that the platform thinks they will like it. One of the real challenges around children in the context of recommender systems is that essentially there is no serious age verification technique online. If the online world were a nightclub, someone could simply come up and say, "I am 18, boss", and in he or she would go. Companies do not verify that people are the age they say they are.

The other problem is that there is no technique for ensuring that the content children get is age appropriate. We really need to try to look at that. Of course, the technology companies are a problem, but we also need to reflect on ourselves and ask the question: where does the content come from? I agree with everything that has been said about the recommender system challenges, but society as a whole is in some sense complicit in this kind of thing because, unfortunately, everybody, younger and older people alike, is generating content that is very poisonous. We very much need to look at that issue as well.

Recommender systems are a ubiquitous technology, and we come into contact with them constantly. In every Google search, we are using a recommender system of some sort. I am happy to come back to discuss those issues later, if necessary.
