Oireachtas Joint and Select Committees

Tuesday, 23 September 2025

Joint Oireachtas Committee on Artificial Intelligence

Artificial Intelligence and Children and Young People: Discussion

2:00 am

Mr. Bernard Joyce:

The Irish Traveller Movement welcomes the opportunity to be here today to discuss artificial intelligence, to inform evolving decisions on Ireland’s approach, especially in the area of regulation, and to stress the importance for Travellers of a human rights-based framing as part of any ethical considerations. We have raised these matters with Coimisiún na Meán and, as a member of its youth advisory committee, we have engaged with the coimisiún on the development of the online safety code for video-sharing platform services and on the safeguarding required for Traveller children.

Traveller children are particularly vulnerable to hate-based harms online. Examples include dedicated social media bots created to look like real Traveller accounts and given stereotyping names; racist live streaming, where complaints are inadequate because the damage is done in real time; and platforms such as TikTok and Facebook, which facilitate pages established solely to negatively stereotype Travellers, including children and young people. We have also raised concerns about the absence of safeguarding against automated discrimination and about social media algorithms that act as vectors of harmful content, especially for children, by reinforcing and amplifying content attuned to a systemic bias against, and stereotyping of, minority groups such as Travellers. We have expanded in our paper to the committee on some of the matters to which I will refer further here.

Our primary recommendations for the committee’s consideration concern the need for an ethical and human rights standard, and we strongly endorse UNICEF’s 2021 policy guidance on AI for children and the UN Convention on the Rights of the Child. We also raise the need for very specific protection for Travellers as a group most vulnerable to racism, hate and discriminatory bias in every setting, and in digital spaces in particular. We further recommend that digital platforms build safety by design into all automated and AI-generated interactive tools and systems.

There are three welcome defined protected grounds for Travellers in Ireland’s online safety code for video-sharing platforms, which is underpinned by Article 21 of the European Union Charter of Fundamental Rights. This prohibits incitement to violence or hatred directed against groups or persons, including on the basis of ethnic or social origin. However, we remain concerned where digital platforms facilitate identity-based harmful content that may not incite hatred but is pervasively stereotyping and racist, while technically remaining in compliance with the code.

The code does require platforms to operate recommender systems in a way that does not result in a user being exposed to content which, in aggregate, causes harm. For the purposes of the protection of minors, the coimisiún also relies on Article 28b(1)(a) of the audiovisual media services directive, citing that the most harmful content shall be subject to the strictest measures. However, harmful content as described in the code is only that which meets the threshold of incitement, which does not take account of the systemic discrimination and racism being streamed. The coimisiún’s guidelines aligned to the code also do not go far enough to mitigate pervasively harmful content. We strongly recommend greater attention to recommender systems. In this regard, as digital services coordinator, an coimisiún is well placed to promote higher standards for replication across Europe.

Many digital platforms use artificial intelligence for content moderation, but those systems are not dealing sufficiently with harmful content on ethnic identity grounds. For Travellers, this is a safety failure with multiple effects. It starts with human-generated harmful content, which is amplified by automated algorithmic systems, under-moderated by AI systems and then referred to human complaints moderators who are not trained to understand the nature of identity-based harms, with the result that complaints are not upheld. The element of human interaction in content moderation is critical to complaints processes; however, without explicitly defining groups in the AI step so as to trigger human involvement in the process, these harms are left unattended. Moderators also generally receive no culturally specific training.

This issue is not being tackled at the level of the code for platforms. We have referred in our submission to a case involving SoundCloud. The chilling effect of deficient online protection for Travellers grossly impacts mental health and well-being. It also perpetuates a pervasive status for Travellers as being left without guardianship by procedures designed to protect everyone by default, without specific attention to their need for specialised protection.

Our concerns are also focused on large language models, LLMs, which generate text, images, audio and video.
