Oireachtas Joint and Select Committees

Tuesday, 13 February 2024

Joint Committee On Children, Equality, Disability, Integration And Youth

Protection of Children in the Use of Artificial Intelligence: Discussion

Professor Barry O'Sullivan:

The Senator referred to lethal autonomous weapons systems. I do a significant amount of work with militaries throughout the world in a Track II capacity. I agree that militaries are very cautious about these systems. For many reasons, including those referred to by the Senator, I do not think we will ever see lethal autonomous weapons systems come into existence. In the context of this debate, we need to be careful with regard to the significant focus on social media. AI does not equal social media. People certainly experience it on social media, but that is not the only place it exists. We need to be careful about that. Amara's law, which is well known, states that we overestimate the impact of a technology in the short term but underestimate its impact in the long term. That holds true in this case. Despite the huge amount of hype around AI, we are at risk of underestimating its impact in the longer term.

You do not have to look any further than what happened in the 1970s with US manufacturing and the measures that were introduced, the long-term impact of which was misunderstood. I would argue that the long-term impact of that was Trumpism and the fragmentation of American society. It created a new world of have-nots, people who used to have but no longer do.

Regarding the questions around moral agency and these sorts of things, we have to ask ourselves what the purpose of a technology is. The purpose of any technology is to remove friction. The thing about AI is that it removes huge amounts of friction. People have access to information they never had access to before. You can now scale up things that you could never scale up before. All you need is an Internet connection and a computer, and you can have a global audience. We have not only a technology that removes friction but one that can potentially scale to global levels. With that comes fantastic opportunity but also enormous risk. We need to be worried about that.

The technology itself is just a lump that has a battery. If the battery runs out, it does not work anymore. However, the morality and the ethics come when the technology meets the use, and that is us. That is the human being, and we really need to start taking responsibility and not talk about AI as if it is some sort of thing that exists on its own. It does not exist on its own. It is an amplifier and a technology that removes frictions that are sometimes useful to have. We need to be very cautious about how children can access information and about all the harmful things we mentioned today, but also about things like sitting at home and buying something you really do not need, placing a bet you did not want to place, or buying a drink you did not want to have. These kinds of frictions are removed by technology, and AI is one of these technologies that is so commoditised now that a 14-year-old with a computer and an Internet connection can literally change the world. If we look at Mark Zuckerberg, God bless him, when he was in college, that is where Facebook came from. Who knew that what he was producing would have such an impact? That is where Amara's law comes in. We overestimate the impact in the short term but we underestimate the dramatic impact in the long term, and we really need to address that.

The question is where the morality and the values come from. Whether or not we believe that AI is this fantastic genie in the bottle, we need to self-reflect on our own values, on what we want as a society and on what we teach our children. That is where the morality, the data and the ideas come from. Unless those children are supported, and society is supported, in having a good values system, the technology can become very dangerous.
