Oireachtas Joint and Select Committees
Tuesday, 21 October 2025
Joint Oireachtas Committee on Artificial Intelligence
Artificial Intelligence and the State: Discussion
2:00 am
Ms Rebecca Keatinge:
The examples from other jurisdictions are really arresting and should be at the forefront of everybody’s consideration in respect of this. They emerged from our research into social welfare systems, because these systems affect such a wide proportion of the population.
In the Netherlands, it was the infamous childcare benefit scandal, which came to light in 2018, that ultimately led to the fall of the government in 2021. The tax authorities used what turned out to be a highly discriminatory algorithm to identify fraud within the childcare benefit system. It had an indicator of citizenship as one of its risk factors. It was developed as a black box and was a self-learning algorithm, so it effectively developed its own decision-making criteria. That led to whole nationalities being essentially blacklisted over very small administrative errors in their applications for the benefit. The human impact was devastating: people lost their homes and suffered serious damage to their mental health from the stress of the experience. It affected tens of thousands of people. That was the starting point for the public discourse.
Amnesty International has produced a very wide-ranging report on the Danish example. Denmark has a very high level of social welfare provision, such that half the population is in receipt of some form of social welfare support. I think 26% of its GDP goes on social welfare. It is a highly digitised state and it employs something in the order of 60 different algorithms to help run its social welfare system, including fraud control. A similar pattern emerges in the Danish context. One model uses citizenship as an indicator. Another picks out atypical characteristics that do not conform to Danish social norms for household size or residence patterns, and identifies people as risks on that basis. There is an argument to be made that, under the EU AI Act, this could constitute social scoring, but there is probably a lack of legal clarity about exactly how those terms will apply in practice. Those are some examples.

Amnesty has written a more recent report, which I think was just published this year, on the UK Department for Work and Pensions, which is similarly employing a digital-first approach. This extends to some of its very sensitive and nuanced areas, such as the personal independence payment for people with disabilities and universal credit. Serious concerns were raised about similar algorithmic discrimination in the processing of those claims.