Oireachtas Joint and Select Committees
Tuesday, 30 September 2025
Joint Oireachtas Committee on Artificial Intelligence
Artificial Intelligence and Older People: Discussion
2:00 am
Ms Camille Loftus:
I thank the committee for this invitation to address it. We very much welcome the fact that it has taken this focus on older people.
Age Action works to promote equality for all of us as we age, combat ageism and build an inclusive and age-positive society to enable us all to live full and independent lives as we age and in our older years. We recognise that AI brings both potential benefits and risks for older people. Digitalisation has too often made older people feel excluded and left behind. We need to learn from that experience now in approaching artificial intelligence.
Ageism is the process whereby stereotypes influence how we think, leading to prejudice in how we feel and resulting in discrimination in how we act. Ageism is pervasive in society and is often reflected in how we have approached digitalisation. For example, particularly in the private sector, which is the sole provider of many essential goods and services, digitalisation often seems to prioritise cost reductions without considering the needs of older customers, who may no longer be able to access a service, may have to pay more for the service, or may be exposed to safeguarding risks as a consequence. I am sure members will all have received representations, for example, in relation to the recent decision by Ryanair to stop accepting anything other than a digital boarding pass.
Many ICT education and training initiatives are focused on younger people. Even the data we use to measure the impact of digitalisation, the EU's Digital Economy and Society Index, excludes people aged 75 or older. We have no data on that group at all. The digital divide is particularly acute for older people. Those managing on low fixed incomes find investment in expensive devices, such as smartphones, challenging, along with the cost of maintaining broadband connections to their homes. The only State support in this regard is the telephone allowance, and only about a sixth of older people get it. It is paid at a rate of a mere €2.50 per week, which has not been increased since 2018.
Digital skills among older people correlate very strongly with levels of educational attainment. It is important to remember that the current generation of older people did not get access to the kind of educational opportunities that are taken for granted by young people now. If they are among the very small minority who got a third level education, their skills are good. If they are among the vast majority who finished school much earlier, went out into the workforce and made a contribution, their digital skills are often poor. My colleague, Mr. Carroll, works indefatigably on a whole range of dimensions within Age Action and also volunteers as a tutor on our Getting Started programme, which to date has helped over 48,000 older people to build their digital skills and confidence. Over 40% of participants report improved confidence online and growth in their knowledge as a consequence of participation. Our volunteers on that programme are now being trained to support older people to engage with AI in a safe and beneficial way.
I will make a couple of points on how we can learn from digitalisation in approaching AI. We urgently require some regulation in this regard. That is one of the key learnings we can take from our experience with digitalisation to date. We know that when artificial intelligence is being designed, older people are often excluded entirely from the datasets used to develop and train applications. When they are included, they are often treated as one homogenous group. This encodes ageism within artificial intelligence. Age-friendly artificial intelligence is built on a clear understanding of older people’s needs, values and priorities, the issues it is intended to address, and detailed consideration of both positive and negative outcomes. The only way of accomplishing this is to include older people, their families and those who provide care for them, and to consult them throughout AI product development, including in datasets, product design and testing and roll-out.
Data should be disaggregated in order to understand and analyse the diversity of older people, taking account of differences such as differing physical, sensory and cognitive capacities, levels of education, familiarity with digital technology, socioeconomic profile, urban-rural location, and living and care arrangements, etc. Older people are a diverse population with a wide range of experience and needs, and that diversity is only likely to increase in the future.
Data privacy and informed consent are critical elements of any AI technology. Protection of user data and individual privacy must be put at a premium in any regulatory regime, with clear communication of what data is being collected, who has access to it and for what purpose. Users should easily be able to turn features on and off and decide who has access to their data. We are all familiar with the legalistic and lengthy consent messages we get when we are trying to access any app. Few of us read them, and those of us who do often have no idea what the consent actually provides for. We need succinct, clear, plain English communication to enable users to understand and exercise free choice and to control the way data on them is collected and used.
AI applications should be transparent and accountable, communicating clearly what they can and cannot do and that they will make mistakes. Safeguards to protect users from harm as a consequence of errors should be an integral part of AI design requirements.
Finally, I note that older people who need assistance to negotiate digital platforms must often divulge very sensitive information to those who are helping them. This exposes them to a critical safeguarding risk. Adult safeguarding legislation is long overdue. From a financial and health perspective, it needs to take account of the safeguarding risks that are created by these new technologies.
AI also creates the risk of more intense social isolation for some older people, who may be susceptible to AI companions that can mimic empathy. People who are experiencing cognitive decline, dementia or grief, for example, can be left very vulnerable to potential exploitation and manipulation through such applications, so post-deployment monitoring should also include the impact of AI applications on quality of life, loneliness and well-being.