Oireachtas Joint and Select Committees

Tuesday, 16 April 2024

Joint Committee on Children, Equality, Disability, Integration and Youth

Protection of Children in the Use of Artificial Intelligence: Discussion (Resumed)

Kathleen Funchion (Carlow-Kilkenny, Sinn Féin)

Thank you. I have a few questions I want to ask. This is more for TikTok and it relates to the algorithms. It is true of all platforms that if you post something and it gets a large number of negative comments, the algorithm picks this up. It almost seems to want negative feedback. How are children being protected from that? Part of what is going to be covered in the programme to be broadcast on RTÉ tonight is that we could have young people with something going on in their lives, whether it is worry about school or whatever, and all of a sudden it goes from that to them seeing really negative content. I understand what all the platforms have said about the wider, really serious things around child sex abuse and all of that. What we are trying to get to the bottom of here is the underlying negativity. The more screen time children and young people have, the greater the chance that they are exposed to these negative algorithms. It becomes a vicious circle and they kind of go down a rabbit hole. That certainly seems to be the case, not just from what is to be reported tonight but from what this committee has heard from other groups on this topic. They were saying there tends to be a ripple effect and that it gets out of control.

Teenagers and pre-teens in particular are at a very vulnerable age, and let us be honest, there are many people under the age of 13 who should not be on the platforms but who are. There is all the stuff around body image. As someone in my age category, I often think that we have to try to get through to younger people that this stuff online is not the real world and that nobody is walking around looking like that. You often have to get to a certain age in your life to understand that. We have to make sure that all the protections that can be put in place for younger people and teenagers are there. Specifically on the algorithms, when an algorithm is identified as driving this negativity, why can it not be stopped or banned? Maybe it is not as simple as that, but I cannot understand why it would not be the case.

There has been a good bit of discussion on age verification and ID verification. One thing I have always thought, for all platforms, is that people should have to provide some level of ID. We could avoid all those bot accounts if people had to say who they are, but it is also relevant in respect of age. I accept what has been said about people not wanting to put forward their documents, but I do not know if we should be given that choice. It is not as if it is going to be shared in the public domain. It would be between the company and the person who wants to sign up that some level of ID would have to be provided. It would eliminate a huge number of the problems.

Specific to X, I wanted to pick up on a question Senator Clonan asked earlier around information being shared with the Garda in certain cases. If someone is sharing very harmful content, but the account is anonymous or it is not obvious from their picture or username who they are, and I screenshot that content and go to the Garda, and let us say it comes into a case, at the point the Garda contacts X, does X have to reveal who that person is? If the person tries to remove their account or content, is it still accessible in some wider system, cloud or whatever? I would just like to get some clarity on that. Perhaps someone would like to respond first on the algorithms.
