Oireachtas Joint and Select Committees

Tuesday, 21 October 2025

Joint Oireachtas Committee on Artificial Intelligence

Artificial Intelligence and the State: Discussion

2:00 am

Mr. Liam Herrick:

I thank the Cathaoirleach and members of the committee for the invitation to appear before them. The Irish Human Rights and Equality Commission is Ireland's national human rights institution and its national equality body. We have a broad mandate to protect and promote human rights and equality.

Advances in artificial intelligence come with opportunities but also with risks, many of them to human rights and equality. While Ireland and the EU will, of course, strive for competitiveness in the area of technological developments, Ireland's strategic priorities must also include building robust safeguards against harms, adapting and applying existing human rights and equality protections, and ensuring alignment with both domestic law and European Union standards. We must be wary of a self-interested anti-regulation discourse that would strip away fundamental rights protection in the interests of corporate profits. Indeed, we have seen how such an approach has had disastrous consequences in other, parallel areas.

There is a strong public appetite for effective regulation. Our 2025 polling found that 73% of people are concerned about the societal impacts of AI. Only 22% of Irish people believe the Government has effectively regulated technology companies to date. Furthermore, 68% expressed concern about the use of AI by the Government and public services.

The Council of Europe's Framework Convention on Artificial Intelligence sets out core principles, including human dignity, individual autonomy, equality, privacy, accountability, reliability and safe innovation. AI systems that are designed and deployed in compliance with these principles throughout their life cycle have the potential to do good, including, for example, to significantly improve healthcare, widen access to justice, promote independent living for people with disabilities and help to address complex global challenges, including the climate crisis.

However, as we witness AI systems being integrated into daily life, we are also seeing clear impacts on rights, such as privacy, dignity, non-discrimination, education, work and access to justice. Specific concerns, which are already manifesting, include devastating impacts on children and young people in many instances, harmful stereotypes that are reinforced through media and online platforms, negative impacts on both youth and adult mental health, and impacts on the wider workforce, including socioeconomic discrimination.

The risks associated with AI are not evenly distributed. Discriminatory outcomes are being documented across a range of protected characteristics, including gender, disability, race, family status and age. AI technology poses a variety of risks to children, ranging from radicalisation to social withdrawal. Large language models can replicate and amplify sexist and racist narratives in public discourse. AI is also being used to spread misinformation and hate more effectively than previously.

Looking specifically at the area of disability, our disability advisory committee has highlighted the ableism embedded in many AI systems and raised concerns about discriminatory outcomes in educational assessment tools and in the automation of services and supports, including in the area of mental health.

It is essential, then, that any approach to AI is intersectional and inclusive. It must involve the people who are most affected in the design, development and deployment of regulatory frameworks around AI systems.

Particular attention must be paid to the use of AI in the public sector. The deployment of AI in public services carries particularly high risks, especially when used to make decisions about essential entitlements and supports. We have seen examples from other jurisdictions, including the Netherlands, where flaws in the design of AI systems led to serious and systemic rights violations in the provision of welfare protections. In this context, we have raised concerns with the Department of Public Expenditure that its recently issued Guidelines for the Responsible Use of AI in the Public Service do not reference the pre-existing public sector human rights and equality duty. This is a critical omission. That duty should be the core framework guiding public bodies in their adoption of AI, thereby ensuring systems are rights compliant from the start.

As Ireland moves towards a multi-authority regulatory model for AI in its transposition of the EU's Artificial Intelligence Act, it is essential, to ensure accountability and effective enforcement, that the roles, responsibilities and powers of the relevant designated bodies are clearly defined and supported. Structured co-ordination of these mechanisms must be established to ensure collaboration and information sharing between all the relevant authorities. The model of regulation must embed human rights and equality standards and expertise from the outset.

The Irish Human Rights and Equality Commission has been designated, along with eight other public bodies, as one of the fundamental rights regulators of high-risk artificial intelligence under Article 77 of the EU AI Act. This role will carry additional and significant responsibilities. To fulfil the role effectively, in line with established UN and EU standards, we and the other designated bodies must be provided with ring-fenced, multi-annual resourcing, including financial capacity and, in particular, technical and human capacity, to enable us to perform this regulatory function.

We greatly welcome the establishment of this committee, which can play a key role in the design and oversight of Ireland's approach to artificial intelligence. We aim to support the work of the committee in any way we can. We look forward to engaging with members today.
