Written answers

Tuesday, 15 July 2025

Department of Enterprise, Trade and Employment

Departmental Meetings

Sinéad Gibney (Dublin Rathdown, Social Democrats)

424. To ask the Minister for Enterprise, Trade and Employment the contact his Department has had with social media companies over the potential for AI chatbots on social media platforms to generate antisemitic or racist content; and if he will make a statement on the matter. [39470/25]

Niamh Smyth (Cavan-Monaghan, Fianna Fáil)

There is now a significant body of legislation providing the foundation for Ireland’s online safety framework, including the regulation of social media.

Coimisiún na Meán, as Ireland's new online safety and media regulator, is at the heart of that framework. It was established under the Online Safety and Media Regulation (OSMR) Act, which transposed the EU Audiovisual Media Services Directive and provides that An Coimisiún is independent in the exercise of its functions. The OSMR Act, the EU Digital Services Act (DSA), under which An Coimisiún is Ireland's Digital Services Coordinator, and the EU Terrorist Content Online Regulation, for which An Coimisiún is a competent authority, are the main elements of Ireland's online safety framework.

An Coimisiún's role is not to moderate individual pieces of content but to ensure that regulated platforms have the correct safety measures in place to prevent illegal or harmful content being shown.

Under the framework, it is for the regulated platforms to demonstrate that those safety measures are in place. A failure to adequately address these requirements can lead to significant financial sanctions and, under the OSMR Act, continued non-compliance can lead to criminal sanctions for senior management.

As provided for under the OSMR Act, An Coimisiún has adopted and applied a new Online Safety Code for designated video-sharing platforms established in Ireland, including TikTok, Facebook, Instagram and X. Part A of the Code, which has applied since November 2024, requires designated services to protect the general public from harmful online content, including content which incites hatred or violence or is racist or xenophobic, and to establish and operate age verification systems for content which may impair the physical, mental or moral development of minors.

Part B of the Code, which applies from July 2025, contains specific obligations for designated services to prohibit the uploading or sharing of harmful content constituting incitement to hatred or violence, terrorism, child sexual abuse material, racism or xenophobia, to use age assurance to prevent children from encountering pornography or gratuitous violence online, and to provide parental controls.

An Coimisiún is responsible for the implementation and enforcement of the Code.

The specific use of AI by deployers (which may include social media companies) is subject to the EU Artificial Intelligence (AI) Act, an EU regulation which entered into force on 2 August 2024 and is directly applicable across the EU. The regulation applies in a phased manner over 36 months from entry into force. A key objective of the regulation is to protect against harmful effects of AI systems in the Union in terms of health, safety and fundamental rights. The Act is not a blanket regulation applying to all AI systems; rather, it adopts a risk-based approach, built on four risk categories ranging from unacceptable to minimal risk, to ensure that its measures are targeted and proportionate. While initial phases of the Act, including the rules on prohibited AI practices, are now in effect, the provisions on market surveillance, penalties and enforcement are yet to come into effect.

My Department has been working with stakeholders both nationally and across the EU over the last number of years in preparation for the implementation of the AI Act. Part of this work has been to consult and communicate with the wide range of entities covered by the Act, including providers and deployers of AI, and to raise awareness of the obligations arising from the Act and of the provisions they will need to make.

As the market surveillance, penalties and enforcement phases of the AI Act come into effect in 2026 and 2027, formal regulatory mechanisms will be available to the relevant market surveillance authorities to engage with individual companies specifically in relation to their AI systems.
