Oireachtas Joint and Select Committees
Wednesday, 24 May 2023
Joint Oireachtas Committee on Jobs, Enterprise and Innovation
Employee Experiences of Technological Surveillance in the Financial Services Sector: Discussion
We are now in public session. I remind members who are attending the meeting remotely that they need to do so from within the Leinster House complex. Today we will discuss employee experiences of technological surveillance in the financial services sector. Advances in technology have brought many opportunities for positive change in the workplace. However, developments in technology have also brought new risks and challenges that require appropriate scrutiny to ensure that the rights of businesses and staff are sufficiently protected through robust legislation and policy. One area where workplace technology has advanced in recent years is in respect of technological surveillance of employees. I am pleased that we have the opportunity to consider this and related matters with representatives of the Financial Services Union, FSU, Mr. John O'Connell, general secretary, and Mr. Brian McDowell, head of communications and public affairs. We are also joined by Dr. Michelle O'Sullivan from the department of work and employment studies in the Kemmy Business School at the University of Limerick, UL.
Before we start, I will explain some of the limitations to parliamentary privilege and the practices of the Houses as regards references made to other persons in evidence. The evidence of witnesses physically present or who give evidence from within the parliamentary precincts is protected, pursuant to both the Constitution and statute, by absolute privilege. Witnesses are reminded of the long-standing parliamentary practice to the effect that they should not criticise or make charges against any person or entity by name or in such a way as to make him, her or it identifiable or otherwise engage in speech that may be regarded as damaging to the good name of the person or entity. If their statements are potentially defamatory in relation to an identifiable person or entity, they will be directed by me to discontinue their remarks and it is imperative that they comply with any such direction.
The opening statements have been circulated to all members. To commence our consideration of this matter, I invite Mr. O'Connell to make some opening remarks on behalf of the Financial Services Union.
Mr. John O'Connell:
Thank you, Chairman. I express my sincere gratitude to all members of this committee for facilitating this meeting. I know the committee's schedule is hectic so the fact that members are taking the time today to listen to the reports of workers in the finance sector on their experiences and attitudes towards technological surveillance by their employers is much appreciated by the FSU and the staff working in the sector.
I would like to introduce Dr. Michelle O'Sullivan, one of our research partners in the department of work and employment studies at the Kemmy Business School in UL, who collaborated with us on this research. Without the work and professionalism of Dr. O'Sullivan and the team from UL, this report would not have happened. The report was published just four weeks ago but even since then, the developments in artificial intelligence, AI, have been dramatic. Recent advances in technology and, in particular, the creation of ChatGPT, have brought technological advances in the workplace to a new level. Indeed, two Government Departments have seen fit to use ChatGPT for a short while to see what benefits it might bring. It is claimed that recent developments in AI will transform our healthcare and education sectors. I mention this to emphasise the ever-changing nature of technology and the rapid advances that have been made and to warn about the shortage of research on the benefits or detriments these advances may bring to the workplace and to workers.
Workplace technology will advance and be enhanced over the next few years. It is vital that research and legislation keep pace with these changes so society, businesses and staff can both benefit and be protected from any unforeseen consequences.
The FSU first commissioned this research in partnership with the University of Limerick in 2019 and again in 2021, with a follow-up survey carried out by the FSU in 2023. The research project sought to establish the experiences and attitudes of financial services employees to technological change in their job and this report focuses on employees’ experiences and attitudes towards technological surveillance by their employer in particular. Globally, this took a massive leap forward as millions of office-based workers moved overnight to being home workers due to the Covid-19 pandemic. The findings are based on interviews conducted with financial services employees by the University of Limerick pre-Covid in 2019, followed by a large-scale survey of employees during Covid-19 restrictions in 2021, with a further survey of members conducted in February 2023.
Surveillance of staff by their employers and how data on staff are collected, stored and used were always issues that needed to be addressed. However, it seems clear from these findings there is a need for regulation and legislation to keep pace with the changing nature of technology. A notable finding in the report is the extent to which respondents to the survey were unaware of the level of employer tracking and monitoring, with over half indicating they did not know if their office or home computer was monitored. Almost one quarter of respondents reported that their employer had increased data collection on their work since they started home working, while 28% said data collection had stayed at the same level. Two thirds of respondents felt surveillance was demoralising and indicated that surveillance increased their levels of stress, while over half felt that surveillance at work was a violation of privacy. A majority, or 60%, felt that surveillance indicated a lack of trust on the part of their employer, while an even larger percentage, 63%, felt that the use of surveillance erodes trust.
Survey respondents reported having some experience with technological surveillance of their devices, but significant proportions were unaware if their devices were monitored or not. Interviewees spoke of varying levels of employer technological surveillance pre-Covid, from very little to very extensive depending on where they worked and their role. Overall, employees had negative attitudes towards technological surveillance, viewing it as demoralising, stressful and indicating, as has been mentioned, a lack of trust by employers.
The report makes recommendations for employers and the Government. For employers, it is the view of the FSU that the concerns of employees evident in the survey findings can be addressed through collective bargaining between employers and trade unions. The FSU calls on employers to prioritise the following: providing evidence on the necessity and proportionality of technological surveillance in all its forms; introducing surveillance functions only with the agreement of the union and affected employees; negotiating the collection of data from surveillance functions, and the nature of their usage, with the agreement of the union and affected employees; addressing, with the union, the impact of surveillance on employees, such as their stress and trust levels; and, finally, providing evidence to the union on the organisation's cybersecurity measures to protect the employee data collected, in particular biometric data.
For the Government and legislators, we ask this committee to commission a report into the use of surveillance in the workplace with a view to exploring what, if any, legislative changes may be required to best protect workers and provide for workers' voices on this issue. The research for the report would concentrate on possible legislative changes that may be required to keep pace with technology advances while ensuring proper regulation of employers’ collection and use of data from surveillance functions. This may involve stronger legislative mechanisms that support trade unions' access to, and representation of, workers for feedback on these issues; requiring employers to meaningfully negotiate with unions on technological surveillance; legislation regulating employers’ collection and use of data from surveillance functions with the aim of ensuring procedural justice, distributive justice, interpersonal justice and informational justice within organisations; and empowering the Data Protection Commission to proactively inspect employments to ensure compliance with current GDPR legislation.
I again thank the committee for inviting us to address the meeting. I hope members find the report insightful and worthy of consideration.
I thank the witnesses, specifically for this report and for giving us the opportunity to have this important discussion this morning. This surveillance has been happening quite a lot without remarks being made about it. There is something a little sneaky about it, although there is probably another word for it. It has been creeping in. I have a speaking slot in the Dáil Chamber shortly so I will have to leave a bit early. I apologise for that but I will come back if I can.
Yesterday, Bloomberg reported that the White House is hosting a forum for workers whose employers use automated systems to monitor them. It is planning a broader effort to ask Americans what priorities the US Administration should pursue regarding artificial intelligence, AI, in terms of new regulations on emerging workplace technologies. The Administration is seeking to better understand how companies use automated technologies. Is a similar forum needed here or is there a danger that such a forum could just become a talking shop when we actually need to stop talking, accept the evidence and move on to legislation?
Mr. John O'Connell:
I honestly think it would be of benefit to have an open debate to first establish what the floor is and what is the current level of surveillance taking place. Employers could come forward to openly and transparently share with us information on that. Equally, we must recognise that the pathway out of that is to determine where we go then and how quickly we can respond. We would be afraid that this will end up like social media, in that it will go so far and so fast that by the time we legislate for it and regulate it, it will have moved away from us and the future entities, whichever they may be, will be all-powerful, just as social media companies are today.
That is the danger. This has already started and goes largely unremarked, although I know Mr. O'Connell and others have been talking about it. In the broader sense of discussions at the level of the workplace, and even beyond, workers may be talking among themselves, asking what they think this means and how much information they think employers have. We have not had that broader conversation and we did not have one with social media, which got ahead of everything.
Mr. O'Connell made a good point. I am wary of something that may become a talking shop and interfere with the regulation that could be needed. However, we need to establish what the levels of workplace surveillance are in the first instance.
We all know workers have been surveilled. There has been surveillance of workers since the Industrial Revolution. There can be a requirement to clock in and clock out. I am a former union official, as was Senator Gavan, and we clocked in and clocked out on our phones. I did not consider it surveillance, although I did sometimes wonder why nobody looked at the hours that were worked, as the old European Union working time directive could have been invoked. That never happened and it is unlikely to happen for FSU members either.
Employers now have a technological way of gauging what workers are doing without the knowledge of the worker, to a large extent. To me, this represents a fundamental change. My employer knew when I came in and left and when I started and finished. There was not constant monitoring. However, AI in particular facilitates constant monitoring. It is not just a matter of someone clocking in, clocking out for a break and then clocking back in, and someone seeing that the person took 16 minutes instead of 15 minutes. This is constant monitoring.
Specifically, what legislative changes would the FSU like to see to protect workers? Nobody wants to interfere with an employer's right to know that the worker is doing his or her work. To me, the output tells us that but there may be a burning need to do that. Specifically, what legislative changes do we need to consider?
Dr. Michelle O'Sullivan:
There are two approaches. The first is that regulation has to address the question of whether monitoring should take place at all. There is legislation in other countries where, for example, there are very strict limits on the level of technological surveillance that can take place. For example, in Germany, legislation provides that employer monitoring of the Internet and emails can only take place where there is a suspicion of criminal activity. In Portugal, biometrics, which are physical or behavioural characteristics that are unique to the employee such as fingerprints or eye scans, can only be used to record attendance or to get access to buildings. Here, we are all familiar with instances of waitresses and waiters using their fingerprints to access tills.
Legislation has to address whether there should be monitoring in the first place. If a level of flexibility is given to monitoring, a lot of international policy documents talk about the need for co-governance with worker representatives. For example, the European Agency for Safety and Health at Work talks about co-governance with worker representatives on the development, acquisition, introduction and implementation of any kind of artificial intelligence systems. There is a range of difficult legal questions. A lot of companies would buy in third-party software, and there are legal questions as to who is responsible. A lot of legal scholarship says that in regulation there has to be accountability to the companies who are the employers of the employees, and not diverse responsibility to the software companies. There are more far-reaching proposals. For example, some law experts in the field of technology argue that there should be a licensing of technologies involved in surveillance to ensure they are compliant with employment laws before they are introduced into organisations. Most international policy documents talk about co-governance of these systems. For example, unions and worker representatives should be involved. We have GDPR and there are impact assessments. However, GDPR would not go far enough and is not specific enough when it comes to employment relations issues. For example, should worker representatives be involved in the development of impact assessments? There is currently no transparency about when companies have to develop impact assessments on data protection. The International Labour Organization, ILO, code of practice on the processing of personal data says that the extent of people's protections against risks to their personal data depends on collective rights. It is important. There are a lot of developments at EU level. The EU is developing an AI Act. It would categorise employee monitoring through technology as a high-risk category.
The current difficulty with the AI Act is that it only places responsibility on the developer of the technology to assess itself. There is no emphasis on workers and little on employers. This is one of the first studies in Ireland about technological surveillance, so we are behind other EU member states in terms of regulation.
I thank Dr. O'Sullivan. She is correct that we are behind. That is why this research is so important. At the very least, it will kick-start the conversations we need to be having. The phrase "better late than never" is probably applicable, but at least we are starting. Monitoring is more than just monitoring the number of keystrokes or the screen activity. It has gone very deep. There is software capable of picking up keywords such as "interview", and of detecting when job search websites are accessed. Is this the level of surveillance now? Is it already much deeper than just monitoring the keystrokes? Is that already happening? What kinds of information are employers storing about workers? Obviously, GDPR applies, and for the moment that seems to be the only protection we have for our own data. Does Dr. O'Sullivan believe employers are using these monitoring tools, and also collecting and storing data in certain ways? Are they monitoring the worker while also keeping that information without any guidance at all as to how it is being kept? Can Dr. O'Sullivan give the committee an idea of the type of information that is being held, and where it is being held, even in terms of her broader experience outside the financial area?
Dr. Michelle O'Sullivan:
Mr. O'Connell was correct that what we need is more information on what is happening. As I have mentioned, this is one of the first studies on surveillance in Ireland. The level of surveillance very much depends on the organisation and the type of job you are doing. Even within the financial services sector, some companies have invested enormous amounts in technology. The difference now is that the technology provides constant and granular data compared with previously. The other difference is that, whereas in the past, from the Industrial Revolution onwards, a lot of automation was about the processes of work, it is now about the management of workers and managerial decision making. In some other organisations, such as some banks involved in retail banking, there is an enormous amount of data collection particularly focused on sales targets. Every aspect of what employees do is collected into score cards. In some organisations their conversations are recorded, both within retail banks and obviously within call centres. Other organisations have invested far less in technology and people have a lot more autonomy. It depends on the occupation someone is in. Internationally there is a growing, though not yet large, amount of data on what kinds of data are collected. A European Parliament resolution on mental health in the digital world of work states that 40% of HR departments in international companies are using artificial intelligence. That can be anything from the hiring phase, to the monitoring, to the termination phase. It varies. In some organisations, for example in warehousing, it is well known that people have hand-held devices attached to them tracking their productivity and what they do. It varies a lot across organisations.
Without regulation, it is possible that some employers could be storing data without necessarily understanding how they should be doing so. Everything Dr. O'Sullivan said points to the need for us to regulate this area fairly heavily. I have one final question as my time is tight. I come from the perspective that workers are adults. We all know that if people want to earn their wages, they have to do the work. Most people come to work to do their work, either willingly or disgruntled because they are not multimillionaires or whatever, but they have to work. They have to get up, go to work and do their work. Most people understand that, but the current level and the depth of surveillance is driving that "always on" culture. That is my opinion, and I would welcome Dr. O'Sullivan's views on this. We need to legislate for the right to disconnect. What is now happening is that people are being driven by the apps, by surveillance and by monitoring. Clearly, employers believe they can get more productivity out of their workers. I do not know if it is proven that the more monitoring you do the more productive your workers are. I doubt if that is the case, but I am willing to be corrected if it is. There are many reasons why employers are monitoring.
I apologise. Some of them are doing it because they feel they should. In terms of the right to disconnect, are the two related? Is one without the other going to work or do we need to have regulation plus the right to disconnect?
Mr. John O'Connell:
They are complementary. We argued at the time in favour of legislation. An assessment of the effectiveness of the code of practice could be part of it, because our feeling was that good employers will adhere to the code of practice. Other employers will not, and it is those other employers we are interested in.
I welcome the witnesses and thank them for a most interesting piece of work. It is fascinating, and timely as well. I have gone through it in detail. I am on another committee dealing with foreign affairs and defence. In the past number of months, we have been working on cybersecurity, hybrid threats and that kind of stuff. It is a whole different world and it is moving so fast. It is good they have brought this today, and for commissioning the research. It is a responsible thing to do. One thing that jumped out at me, which I want to bring to their attention, is the use of the word "surveillance". When I looked it up, one definition was, "close observation, especially of a suspected spy or criminal." The word "monitoring" has also been used. I came across the word "oversight". I wonder, if somebody were to be asked if they were happy about surveillance in their workplace, whether that word itself is a bit loaded.
That is just one thing that struck me when I saw the word itself. If the word was “oversight” or “monitoring”, I wonder if we would have received the same kind of responses from the people who were questioned. It is just a comment. That is not at all to take from the value or importance of what is here. I am minded of George Orwell’s Nineteen Eighty-Four and everybody being watched all the time. There has been lots of work done in that space, in the world of media, film and so on over the years. It is moving so fast now. Artificial intelligence has been mentioned on a few occasions. I think somebody last week said that the development of artificial intelligence is going to be more groundbreaking than the development of the Internet itself. That is pretty scary and it is moving so quickly. I have looked at ChatGPT. It is just unbelievable how we can put something into it and straight away it will write an essay or a poem. It can do all this stuff for us, no problem at all. God knows what is happening.
Dr. O’Sullivan referred a few times to co-governance with worker representatives. I think that is good and I agree with it. She has also been talking about the limitations of GDPR. We have had GDPR mentioned on a few occasions. That is having an impact on all of us as well. She might say a little more about the limitation and how it pertains to this.
Previously we did a lot of work on remote working, the right to request remote working and all that kind of stuff. We had a report on it with which Dr. O’Sullivan is probably familiar. One of the things that came up was that an employer has responsibility for the safety of a worker at work, whether that is at home or in the office. We were wondering how that might happen. In one instance, it was said to us that the worker could take the employer on a virtual journey through his or her home to make sure it was safe. There is a whole area here about who is responsible for the safety of a worker and for ensuring staff have the proper chair and desk and that their workspace is actually safe. There is a certain amount of supervision and monitoring going on there and there might be a little tension in that. I do feel that the idea of a code of conduct is important, as Deputy O’Reilly said a while ago. We have a lot of work to do to catch up. It is moving so fast; it is moving away from us as we are working. This place moves very slowly as we know. Legislation takes a long time to come through.
I think one of the witnesses said at one stage that workers did not know if the computers were monitored or not. If the computer is supplied by the employer, is there not some form of responsibility on the employer to ensure it is used for the purpose for which it was supplied, in other words, that somebody would not be doing private business on a work-supplied computer or doing something illegal or immoral, for instance? How does an employer ensure that nothing illegal happens? The employer might be liable as well if something was going on with a computer that was supplied by the employer, without having some form of monitoring of the device. I am not going to use the word “surveillance”. I will give the witnesses a chance to address those few points as I am conscious of time. I will contribute again.
Dr. Michelle O'Sullivan:
I thank the Deputy. The issue of the definition of surveillance is interesting. The definition can be different in employment relations as opposed to other fields. Monitoring and surveillance would have an interchangeable meaning in labour law and employment relations. We provided a definition in the survey so people would have understood it is about tracking of employee performance. We then asked them specific questions on it. It is an interesting issue.
On GDPR, there is a lot of debate about it and while it has taken privacy and transparency a step further from where they were, there are questions as to how applicable it is in the area of artificial intelligence in the workplace. For example, it mentions automated decision-making. There is a legal question as to whether people have a right to an explanation of decisions made by automated decision-making. There is a legal question about semi-automated decision-making, which the GDPR does not mention. There are different levels of automation of decision-making. One of the criticisms of GDPR is that there is a lot of focus on the procedural aspects of what should be done as opposed to the rights of workers. As I mentioned, there is no right to transparency over impact assessments that are conducted in the workplace. There are some ambiguities as to the level of applicability of GDPR in the area of artificial intelligence.
In terms of monitoring and what employers are allowed to monitor, there is a whole range of issues. I mentioned health and safety. There is a really excellent report by the European Agency for Safety and Health at Work about artificial intelligence in the workplace. The issue of health and safety is quite broad. We are talking first of all about physical safety but mental health is now a big issue when it comes to digitalisation and remote work. On things like bullying, one of the supposed benefits of monitoring or surveillance in the workplace is that it might help reduce bullying. Not everything is included in the report that we have. In our survey for the Financial Services Union Ireland we asked whether employees felt that having extra technological surveillance helped reduce bullying and only about 11% of employees believed that it did. We have to define what employers should be allowed to do. International documents refer to the threats to employee privacy and autonomy, which also comes into the area of health and safety. It is not a given in international legislation that employers should be allowed to monitor. I refer to the examples I mentioned earlier. In some countries there are very strict limits about any sort of level of monitoring, including remote surveillance. For example, in Portugal there are strict limits on the level of technological surveillance of remote working.
There is a legal question as to the legal basis upon which employers should be able to collect data on workers. Employers might be mistakenly thinking that having the consent of employees is enough as a legal basis. There is strong legal opinion that this is incorrect because of the inequality in power between employers and employees. For example, the European Data Protection Board, which is responsible for monitoring data protection, would say there is never a situation in which employees can freely give consent to offering data. That is an issue as well.
Mr. John O'Connell:
The intention, beyond publishing the report, is to engage with employers. In other areas where we have published reports and engaged with employers, we have achieved progress. However, we do so in the absence of a legislative framework. We would argue that the starting point is a report to see the extent of this practice. We know the position from the employee’s point of view but we do not know it from the employer’s point of view, hence the need for a report into the extent of it. Then it is about understanding the scale of it and the legislative requirements that need to come into place. Part of that is the issue of collective bargaining, to empower workers to have that dialogue and to restore some of the power imbalance, starting with transparency in terms of having it explained in plain language what is being monitored, why it is being monitored and the legal basis of the monitoring.
In the absence of legislation or a code of conduct, I get the sense that this whole area is drifting and developing on its own and that it probably does need to have parameters put on it, which would include the union, employers and the Oireachtas working together to develop structures and boundaries as they have done in Portugal, as Dr. O’Sullivan suggested, and other places.
When we make a phone call we often get a message that the call is being recorded for training and monitoring purposes. I often wondered what happens to that information when it is recorded. Where does it go and who is in charge of it? There is the issue of personal data, and some of the stuff we might be saying over the telephone could be quite personal. The message says the recording is also done for training purposes with respect to the employee, which could be good and bad. If an employee is making a mistake and treating a customer in a way that could be better, there might be room for oversight and training to ensure the employee has a good experience with the customer and vice versa.
Will Dr. O'Sullivan comment on this space? Does it relate to what she referenced?
Dr. Michelle O'Sullivan:
There are benefits in terms of helping quality of work – it depends on the job – but there can be excessive levels of monitoring as well. We do not know what the rights of the worker are in that situation. We conducted interviews as part of the study and a bank manager provided an example. Customers who come into banks now are often people who do not have access to the Internet or find it difficult, so some employees might go to great lengths to explain to them what the services are, but if their conversations are recorded and they are found to have gone off script, they can be disciplined or receive a heightened level of monitoring even though they feel they were trying to help the customer as much as possible. Employees want to give a good service. In some of our interviews, they felt that technology was working against them or that it was not there to help the customer as much as it could. They do not mind technology that helps them in their work, but when it works against them and they feel it is leading to inefficiencies and unwarranted disciplining, that is when they are concerned.
If someone is put into a job without proper training, it can have an impact as well. I thank Dr. O'Sullivan for her remarks and feedback. This has been interesting and I believe we have a fair bit to go on. Perhaps we should invite the employers to attend and give their views in a little while after we have had a chance to digest this information.
This is an interesting subject. Some of the report's conclusions are dynamite. I would support progressing legislation on this matter but I note the point that has been made about consent where there is a major power imbalance. People have to sign the contracts to get these jobs, so if they do not consent, they do not get the jobs. This points towards the need for workers to organise and join unions so as to ensure that any legislation is respected and implemented.
Interesting points have been raised about debates on whether to meet this challenge by, for example, regulating surveillance or almost banning it. The latter seems to be the German case, which I will revisit. I support the proposal that we commission a report.
I wish to start with some questions for Dr. O'Sullivan. How many people were surveyed as part of the study and do we have a rough idea of how many workers in society are being affected by this situation?
Dr. Michelle O'Sullivan:
We had two surveys. The one in 2019 was part of a broader survey that the FSU did. We had a few questions relating to surveillance in that and it received more than 1,000 responses. The main survey from which these data are taken received just over 1,000 responses. We also conducted 23 interviews with people in different occupations, for example, IT specialists and bank managers.
Wider data are limited in Ireland. I have not seen other data about the extent of surveillance. I mentioned how 40% of HR departments in international companies were using AI. According to the European Social Partners Autonomous Framework Agreement on Digitalisation, approximately 8% of companies are using machines to monitor performance. The Chartered Institute of Personnel and Development, CIPD, which is the HR professional body for the UK and Ireland, conducted a survey in the UK recently. According to that, approximately 45% of employees indicated that monitoring of performance was taking place in their workplaces.
Dr. Michelle O'Sullivan:
The UK. Those are the kinds of data we are discussing. Eurofound did some analysis of the European company surveys and found that half of establishments were using data analytics for monitoring employee performance or improving the processes of work or both. This is a bit of a blurred area. In banking, for example, there is direct surveillance like monitoring emails and Internet usage. From what we have seen in the financial services sector, I do not believe that there is much recording of keystrokes or webcams, but emails, logging in times and phone calls are recorded.
I referred to process improvements. Sometimes, there are indirect forms of surveillance. An employee might say that it is not that his or her performance is being monitored, but that he or she has to fill out an Excel sheet that is part of a process improvement issue and his or her performance is monitored as part of that. People will then tell the employee that they could see from the system that it took him or her X number of hours to fill out the Excel sheet and ask why the employee is not doing it in less time. It also muddies the waters for employees because they do not know whether their performance is being monitored or whether they are just working on a system, an offshoot of which sees their performance being monitored as well.
We have varied levels of surveillance. We know it is growing and that some companies in other fields are using it to a great extent, but we have limited data on its prevalence in Ireland.
I will ask a short tranche of questions before making my final point. Do we have any sense as to whether this is mainly affecting people working at home or in an office environment? Will Dr. O'Sullivan take us through the different forms of surveillance? She mentioned that there was not major webcam surveillance currently, but can there be a situation where someone is working at home and is not even aware that he or she is being monitored by a webcam, which does not just see what he or she is doing at the desk, but what is happening in the home? Is that happening in Ireland?
Dr. Michelle O'Sullivan:
The webcams were specific to this study. People did not mention them in the interviews and so on that we did. However, the technology is available and can be used in other sectors.
As to other types of surveillance, there is something called keystroke logging. It is difficult to know how extensive that is and we did not see much evidence of it in the financial services sector. Some banks have systems that record every technological interaction, for example, phone systems, logging in times, how long people are spending on calls or emails, etc. Internationally, other technologies being used include badges that can detect emotions and how employees talk to people. One of the most talked about examples is the Humanyze badge. If every employee wears the badge, it records every interaction they have with other employees. There are some organisations in Japan where you cannot enter a building unless you are smiling because they want to promote enthusiasm among employees.
There is a good American website, coworker.org, which has a database of all the technology apps that are available and the types of data they collect. This is now known as little tech. Big tech is the Googles and so on of this world, but "little tech" is the term being used to describe companies – many of them are start-ups – that are developing these apps and software to record everything from your conversations and emotions to your technological interactions.
I will make a brief point. I was interested in what Dr. O'Sullivan said about Germany. It is not only a country that has a comparatively strong trade union movement.
It is also a country which, at a certain point in the 20th century, was under a dictatorship. Then the east of the country experienced communism and the Stasi. There is a real sensitivity about these issues in Germany. It seems that what Dr. O'Sullivan described in Germany is essentially illegal and banned, the one exception being where there is a criminal investigation. That is a real opt-in scenario. The other option is to say this is the reality and we will deal with it, but it has to be covered by legislation, there has to be collective bargaining and we will work out what can and cannot be done in terms of consent. I would lean more closely towards the German model. Will the witnesses comment on that?
Dr. Michelle O'Sullivan:
In the German model the issue about criminal activity is specific to the monitoring of the Internet and emails. Outside of that, in other areas of surveillance of employees, there are strong co-determination rights so that works councils have co-determination powers in regard to the introduction and use of technology to monitor work. Countries such as Austria, Denmark, Norway and France all have laws that are about co-determination or consultation of worker representatives on anything regarding technology and monitoring performance.
I thank the FSU and Dr. O'Sullivan for being here today. I commend the FSU on its work. I think back to 2020 and the unit's work on the right to switch off and disconnect, which set the agenda then and brings us on to this conversation today. We are all very aware that the use of technology is transforming the concept of the workplace. The FSU's recommendations are very clear on the need to rebalance power in the workplace between workers and employers.
I want to ask about technology being used as a management tool. Technology is used for performance assessment, which is obviously related to management. Dr. O'Sullivan might be best able to answer this. To what extent is technology used to manage workers now, thereby displacing managers? The financial services sector is ripe for this kind of technology. Are there also other sectors where technology is displacing human managers?
Dr. Michelle O'Sullivan:
The financial services sector comes out internationally as one of the sectors with the highest levels of technological surveillance of employees. That has been found from a survey done by the Trades Union Congress, TUC, in the UK and another by the European Agency for Safety & Health at Work, EU-OSHA. The transport and storage sector scores quite highly as well. In banking, there is no doubt that the introduction of technology has been used to contribute to job losses. Some bank managers we interviewed said that in one particular branch there might be a reduction of five bank tellers because of the introduction of self-service machines. It is also part of the wider restructuring of banks, which Mr. O'Connell could speak more about. The work that managers do also becomes downgraded and in some banks the pay is not as high as it used to be. Mr. O'Connell can speak more about the efforts that banks are making regarding employee well-being.
Some sectors are much more ripe for technological displacement than others. Technology has benefits as well. In some sectors, it is not about the wiping out of jobs but really about displacing tasks as opposed to whole jobs. Some employees we spoke to appreciated the application of technology in trying to get rid of what they might term repetitive or boring work. The other side of the coin is that while it has helped to make work more interesting, it has also led to greater expectations about the quantity of work that employees are able to do. Employees feel that it has enabled banks to increase the intensity and the pace of work. That is the downside.
Mr. John O'Connell:
To complement what Dr. O'Sullivan said, the overriding principle has to be human intervention. Whether people are being hired into the sector or their performance is being managed, there has to be human oversight. There has to be a pathway for workers to be able to have a conversation. Everything Dr. O'Sullivan has outlined concerns the tools and the data that then feed into decision-making. If there is a bias in the system, it propagates on an ongoing basis with no human oversight or right of appeal. It extends the limitations because it does not have boundaries. For our people who are working in call centres, thousands of whom do so remotely now, the boundaries that existed in a physical space are no longer there. If we have a private conversation, it is a private conversation. However, if I have a private conversation with the Senator and it is being monitored without us being aware, we can find ourselves being performance managed, or something more serious than that, as a result of that conversation. If people do not know and they are busy, they can have interactions and conversations. From our point of view, it is about human oversight in all forms. No form of AI should have the independence to operate with no human oversight. How it is then used has to be subject to human interaction, appeal and oversight.
I want to get the witnesses' perspective on the software being used. To what extent are we seeing the software being developed in-house by organisations or contracted out to third party providers? That can give rise to a series of issues with three parties now involved in a workplace: employer, employee and the third party providing the service.
Mr. John O'Connell:
What we are finding is layers. We talked earlier about call centres and the level of monitoring. That is then overlaid with AI and other software technologies that look at keystrokes and downtime. The system then starts to disaggregate things. It looks at it from the point of view of the customer interaction. For example, if a lot of customers are calling about a certain issue, the time being spent on it can be analysed. The issue can then be restricted or limited to better utilise the workers' time. We have all encountered the frustration of wanting to do something and not being allowed to by the system. In that case, one has to find another method, which usually means doing it for oneself. That leads to a disaggregation of roles, where the employer looks at the roles and decides that workers do not need to do certain tasks any more, either by stopping the task or by forcing the customer to deal with it themselves.
Dr. Michelle O'Sullivan:
The simple answer is that in Ireland we do not know. That would be the next stage for the FSU's conversations with employers. Internationally, a lot of the software is third party, with some level of customisation for different employers. That is why there is this question about who is responsible. We do not have regulation in Ireland and even internationally there is still not strong regulation about ensuring that the employer is responsible. There are some proposals that there should be joint liability and that this might incentivise tech companies to ensure that their technology is compliant with national employment laws. If they have joint responsibility or liability, in the way in which GDPR does in some instances, this might be a good thing for regulation to look at. However, at the moment the answer is that we do not know.
This discussion is very timely. The difficulty we have is that the speed of technological change is so fast that we are finding it impossible to keep up. It is funny when we talk about people's data privacy. I am on to my kids the whole time about mobile phones. They are tracking devices. They have a person's biometrics, retinal scan and fingerprints. People are uploading all their personal data.
It will not be long before they will have a DNA profile on people using phones as a security lock. That is where technology is going. It is difficult to see how we will push back on all of this without almost draconian legislation, particularly in the financial services sector, where so many people are using digital platforms. It is so easy to embed monitoring as part of performance reviews so that, as the clock ticks, it monitors the number of customers approached.
I am also interested in what was said about conversations being recorded, let us say, in terms of banking performance. We have seen the move in Irish banks to digitisation. It is ridiculous that you can barely get a teller in a bank any more, which is very difficult for older people. The banks have obviously done the metrics and decided the money is ultimately in fully digitised platforms. It will be difficult to stop or arrest the development of all of this. The witnesses' work is timely.
I also wonder about the monitoring of company resources. If I take on a job and I am given a phone and a laptop, the first question I should ask myself is, "Is there software on this which is monitoring me?" What right does someone have to ask an employer what monitoring systems there are in addition to what comes on the phone as provided by Apple, and what the employer has put on the phone or laptop to monitor his or her time? Have employees a right to seek that information and can they get it in a form they can understand?
Mr. John O'Connell:
I suppose that is why we would argue for workplace agreements underpinned by collective bargaining. If you are an employee who has these concerns and you are approaching your employer, there is a real power imbalance. We need legislation around collective bargaining to underpin people's right to be represented and then to have these issues thrashed out on the basis of a workplace agreement.
As Dr. O'Sullivan has outlined, co-governance is a feature in other countries. That should not be a threat to employers. Co-governance around workplace policies will lead to better workplace policies but it should initially lead to a clear understanding from employees of what is being monitored and then that transparency piece in terms of people understanding the boundaries and the boundaries being respected.
I would agree with that. I merely wonder about one's computer's camera, for instance. Essentially, if we get into this idea of legislation in financial services, I am sure there will be strong pushback from large financial concerns, which will say they need a certain amount of data profiling and must have it, in terms of time spent, activity, accounts monitored or whatever, because otherwise they cannot monitor what their employees are doing and cannot say whether one is doing a good job and another is not, as they do not know what metrics people are producing.
If someone is at home and is being monitored in that way in the financial services sector, I would have a particular problem with cameras. Is the camera being used to record time? Is it recording people when they are moving around their houses? These are significant questions. This is where legislation needs to target. There is a certain amount that will have to happen. It is already happening.
In terms of artificial intelligence, AI, how is it being used to supplement this monitoring that they are talking about? We all know how AI, and ChatGPT, is being used. We have seen it being used in Departments here on a trial basis. How is it being used to effect greater monitoring of employees?
Dr. Michelle O'Sullivan:
On remote work, for example, one is talking about different types of monitoring. On webcams, for example, there is a long history of decisions from the Data Protection Commission on video surveillance in the workplace. I am not aware of it in terms of remote work.
Regardless of what the system is, a key issue is the legitimacy and the proportionality as to whether the technology is being used for a legitimate purpose but, more than that, it is about proportionality. A lot of international proposals about regulation would say that a key issue is that the employer must show that he or she could not use less intrusive methods. They have to justify why a particular method is being used to record what employees do and if they are using videos, biometrics or whatever, they have to explain why they could not use something less intrusive to achieve the same objective. Proportionality is a key issue there.
In terms of artificial intelligence, some organisations do not use it at all. Others, the ones putting more money into technology and surveillance, might be using it to develop, for example, score cards. In some organisations internationally, there is a level of almost complete automation where some employees would, for example, be terminated purely on the basis of an algorithm saying that they have had X number of transgressions. We do not know, but I do not suspect, that that level of use is widespread yet. What we have is more an assistance level of artificial intelligence, where it helps management to make decisions but managers are still, at present, the key decision-makers. They use AI to develop data about the level, quantity or quality of employees' work, but at present humans still make the key decisions or have the ability to override what AI might be telling them. The potential for that data to be used in the future can go further.
I would say it probably is the case - I imagine it is in financial services - that a qualitative score, let us say, in terms of people's interaction with technology, is part of an employee review. If it relates to business activity, etc., that is not something you will take out or manage easily.
The witnesses said that the European Commission is looking at a new AI Bill. Do they know what the scope of that is? Where is it at and what are they trying to encapsulate in it?
Dr. Michelle O'Sullivan:
It is not specific to employment. It is AI in general. They have a draft AI regulation or so-called "AI Act".
On the weaknesses, technology law experts internationally would say that it is very much about getting developers of technology to assess the extent to which they are transparent in the way they work, but that it is not specific to employment. Even though it mentions that its use in employment is high risk, it places very little emphasis on what employers should be doing and makes very little mention, if any at all, of workers or workers' rights. There are a lot of gaps in it. The strongly held view internationally is that it will not be strong enough in its present form to protect workers in the workplace, or even to give guidance to employers, in terms of artificial intelligence.
I thank the Chair and thank our guests.
This is one of the most important meetings that we have held. Huge credit is due to Dr. O'Sullivan and her team on the level of research conducted and the findings.
I will read a couple of the quotes from that research into the record because they are so stark: "Everything you type is being recorded in instant messaging."; "Up the line they want oversight of everything, centralisation of tasks brings control. Automation brings control." and "Sales targets in the biggest things like mortgage products are crazy, are unattainable and that adds pressure to you when technology breaks down. That takes time out of your day."
My first job was in a call centre. It was a long time ago. We had a lot of monitoring then. I remember colleagues being called out to be effectively disciplined because they were spending too long talking to customers on calls, etc. I agree that tone can be important, as Deputy Stanton said, but, to be frank, the level of micro-management in those call centres, even then, was quite frightening.
I happened to have occasion to telephone an insurance company yesterday and my heart went out to the person I was dealing with, given their desperation at the end of the call in asking whether I would give them a good review when the email came through. Can the committee think about the pressures that young people are facing?
Dr. O'Sullivan mentioned a power imbalance. The elephant in the room here, in my opinion, is the fact that co-governance, for example, as Dr. O'Sullivan has raised, only works if workers have a right to collective bargaining. This country, 100 years after independence, remains an outlier in that we do not have a right to collective bargaining.
Reference was made to the importance of trade unions having access to employers.
I will declare an interest. I was a trade union official. I believe in workers' rights. When I was a trade union official, it was incredibly hard to access employers when they did not want me there. I ask the guests to deal with an issue that is the elephant in the room, namely, the fact we do not have collective bargaining rights. All the concerns all of us have expressed will not be meaningful unless we address that power imbalance. If that power imbalance is not addressed, we are kidding ourselves. Clearly, we are heading in a very dark direction right now. How do we fix the issue of collective bargaining? How urgent a task is it, given this report?
Mr. John O'Connell:
It is an extremely urgent issue for workers to be able to have those conversations in the context of the workplace changing so fast. AI is one aspect of it in addition to, as Dr. O'Sullivan said, digitisation in the sector, the use of analytics and so forth.
There are two important developments in the area of collective bargaining for which we seek support from the Oireachtas. One is the minimum wage directive coming from Europe, which will set a target of 80% collective bargaining for each member state. That has to be embraced and not just adhered to. Recognising that collective bargaining leads to a stable and fairer society needs to be embraced. The second piece is good-faith bargaining which, equally, has come through the Labour Employer Economic Forum, LEEF, subcommittee involving the Government, employers and trade unions and is coming forward before the end of the year. Those are two very important developments that require the support of the full Oireachtas in getting us to the starting block for collective bargaining.
It is then about permeating it down on issues such as this, where workers and their representatives have access - not just physical but digital access - which is now a major issue for our members. We have had collective meetings of 1,200 people online and on screen regarding issues. Digital and physical access are two key elements of building a society where workers can highlight issues like these, without fear or favour, knowing they will be addressed by their employers in a responsible way. That would start to rebalance that power imbalance we talk about.
Both those points are very important. I have a concern. Officials from the Department and the Minister were before the committee two weeks ago to talk about the works council directive which, as the representatives know, has been appallingly transposed here. Effectively, it is useless for workers, but the attitude of the Minister was that there is nothing to see here. How much confidence does Mr. O'Connell have regarding those key points, especially regarding the minimum wage directive? There is a great tradition in this country of implementing EU directives on workers' rights in the most minimalist way possible, so that those rights are not actually enhanced. Is Mr. O'Connell concerned or confident that the Government will deliver as it should on that directive?
Mr. John O'Connell:
I think it has to. The mood music has changed. The President and the President of the United States are publicly saying that the recognition of workers, trade unions and collective bargaining are key tenets of a civil society. There is not an option of adopting a minimalist approach. We have to adopt an approach that actually goes for the 80% target for collective bargaining. We would have a fairer society as an output of that. Given the societal challenges we see on a daily basis, workers are at the stage where they believe they are entitled to fair representation. We have had that from workers across the tech sector recently, where there have been redundancies and they have turned to us regarding their representation. It is a key issue and one that needs to be grasped by the Oireachtas.
I pay tribute to the FSU in particular because it is a great example of a union that goes out to support workers and to address their needs directly. That is a huge credit to it as regards the way it is growing. We need to support a report being written by this committee to follow through on the excellent work of the FSU and Dr. O'Sullivan and her team.
I have one or two brief questions. I join with Senator Gavan in congratulating the representatives on the work they have done on this issue, the responsible approach they have taken to it, and the leadership they have given. It is very important. Dr. O'Sullivan mentioned inter-logical surveillance. Is that correct?
I thought Dr. O'Sullivan said "inter-logical". I am sorry. I misheard. I just wanted to be clear.
She also mentioned another topic that we also are interested in, that of bullying. She asked whether surveillance in some shape or form would have an impact on bullying in a negative or positive way. If I heard Dr. O'Sullivan correctly, she said 11% of people stated it helped. I am interested in the extent of the bullying that is going on, which she might have come across either informally or formally. Not everybody is impacted by bullying and we hope it is at a very low level, but if 11% of people felt that they were helped by surveillance or monitoring in a bullying situation, that is interesting.
To get back to AI, and it is a little off today's theme, we have been told that AI has the potential to have a massive impact on employment and jobs in certain areas, in particular banking. Does Dr. O'Sullivan have any views on that? Does she plan to do any work on that or has she done any work on it? It is moving so quickly now but its impact on jobs is something I am quite concerned about. There are positives to it, like everything else, but it is also something we need to keep an eye on.
Dr. Michelle O'Sullivan:
On bullying, research in a number of areas suggests there are benefits to surveillance in respect of, for example, bullying, productivity and the idea of having a more accurate measure of an employee's performance. We asked people about those things in the survey but low proportions agreed with the statements that there were benefits. Less than one fifth agreed that technological surveillance provides an accurate measure of performance, leads to fairer decisions or helps with bullying. Bullying was not the focus of this research so I do not know its prevalence. Mr. O'Connell might have a greater sense of it in banking. In other studies we have done, for example, in nursing, it is quite significant.
My colleague, Dr. Sarah MacCurtain, did a larger study on incivility and bullying in Ireland that showed quite significant levels, in certain sectors more than others. For example, financial services and the health sector are two such areas, especially where there are customer interactions, because these can be quite difficult jobs. It is an unknown area, particularly as regards remote working. On the one hand, it could be argued remote working could help with bullying because there is less contact with people who might be the source of bullying but, on the other hand, people are more isolated in remote work, which might lead to increased bullying because of their isolation. We do not know enough about that area yet.
Mr. John O'Connell:
I will complement that. We have done studies on stress at work, which scores very high in the financial services sector. The Irish Banking Culture Board, of which I am a member, has also done surveys showing that, over a number of years, stress continues to be an issue for workers in the sector. One of the key issues that comes across to us is work intensification. That feeling of work intensification comes with surveillance or monitoring and that oversight piece, where workers feel they have to do certain things because they are being monitored or analytics are being developed regarding what is occurring. Work intensification is a huge issue for the sector. In fairness to workers in financial services, they transitioned overnight to being remote workers and now there is a hybrid element for lots of them. Many workers are still working remotely. That is something people value as a new way of working, but it is about that work intensification piece and the expectation and ask continually increasing, which is shown by the quotes in the report, particularly around sales. That led to bad things in banking previously, when it went unchecked. That is a concern.
Mr. John O'Connell:
If it is properly managed, AI can complement what is happening. Yes, jobs will change but jobs are changing. It is a question of how they change. That has been our argument all along in terms of the financial services sector, and particularly digitisation. It is not that we are opposed to change; it is how change comes about. We saw communities being left behind due to branch closures. People pushed back when they were told that cash was not wanted in their communities. It is a question of how the change comes about. This is no different except that it is an extremely fast-paced change. It seems to be like the wild west in that there are no rules. We need to get to a situation where we have a sheriff in town and it is regulated properly to protect workers and consumers.
Mr. Brian McDowell:
There are many key findings in the report but one that stands out to me is that more than half of respondents did not know that they were being monitored or that there was any surveillance taking place on their home or work computers. That goes back to the collective bargaining element of it. Those discussions should happen with the trade union movement prior to anything going on employees' computers or phones. That is the important element and it comes back to collective bargaining.
The other part, as Dr. O'Sullivan mentioned, is that this is one of the first pieces of research in Ireland on this issue. That is why we feel today is really important and why we asked the committee if it could commission a report into this. We need to know what is happening out there. There is a lack of data and knowledge of what is happening with employers, where data are being stored, and so forth. That is why it would be really important if the committee agreed to commission a report into this issue. I ask it to get employers in to find out exactly where data are being stored, what data are being stored and how long they are being stored for. Discussions should happen with the trade union movement prior to anything happening.
The FSU's recommendations include a series of calls for legislative change, and I appreciate it is asking this committee to detail that in some ways. I want to tease that out. Ireland does not have a right to be recognised for collective bargaining purposes, and I want to be clear that this needs to change. The framework will change because of the EU directive. Obviously, that in itself will not be sufficient to improve workers' power within organisations. Regarding the right to organise, there is then the co-determination with regard to surveillance policies and all the rest. From the FSU's perspective, are there other legislative changes, particularly in the bullying and harassment sphere, that are needed? Will other complementary pieces of legislation be needed to secure that right to organise and to engage in collective bargaining when the EU directive comes in?
Mr. John O'Connell:
For us, aside from the legislation to enshrine collective bargaining as a right, one of the key aspects is around real, physical access to workers in their workplace. Equally, we need to know that we can digitally visit workers in the workplace and that those conversations will be private in terms of the dialogue that takes place. We are a successful country in terms of the economics and so forth. It is a matter of how we enshrine and embed that going forward. We believe one of the best ways to do this is by legislating for collective bargaining and creating a workplace environment where workers feel the empowerment Senator Sherlock spoke about. Workers need to feel it is okay to improve the workplace by highlighting issues like the use of AI, digitisation and so forth, and how that is impacting on them. Again, we stress we are not opposed to change. We are not even opposed to AI but it has to be used in a context that is for the benefit of consumers and staff, and used in a way that is ethical. Countries such as Australia are developing ethical AI frameworks and so forth. There are issues around copyright that the music industry is lobbying hard on. There are many aspects to this. For us, the issue of access enshrined in a right to collectively bargain for workers that is supported and recognised by the State is key.
Mr. Brian McDowell:
On a connected point, the Workplace Relations Commission has a consultation out on remote working and flexible working. Obviously, remote working comes into this. It ties into the right to disconnect that we spoke about before. They are all intertwined because when someone who is physically in their workplace and in front of their computer or on their phone gets a message informing them that they are being monitored, it is a little different from when the same thing happens to someone who is sitting at home in their bedroom, if they live in a one-bedroom apartment, or in their box room. These matters are all intertwined. There is a consultation process out now on flexible and remote working. Perhaps the question of surveillance of someone's computer or phone, in particular when working from home, should be included in that code. It would be worthwhile to explore that.
I will pick up on the other legislative changes required. We spoke about how data protection is really important. Health and safety is perhaps an area of concern as well. Are there other legislative areas we need to look at here? I am thinking of, for instance, how this discussion touches on the growth of platform work. We have seen the enormous use of AI to facilitate the growth of platform work. There is a demand that there is a right to explanation with regard to algorithmic change and how it is designed. Are there other legislative areas I have not thought of that we need to look at in the context of this committee's deliberations?
Dr. Michelle O'Sullivan:
What Senator Sherlock mentioned about the numbers, privacy, and health and safety is correct. Regardless of whether the technology should be used at all as part of monitoring or surveillance of employees, it is particularly important to consider the stages at which rights kick in. At the moment, many data protection rights might kick in after the fact, as opposed to before the technology is developed at all. Some international legislation provides that worker representatives should have a lot of input at the development stage and that no technology should be introduced before they are involved. In the survey, we asked employees about their level of influence over workplace policies on the introduction of technology. Three quarters of respondents said they had no influence at all. Research shows that if people have an influence, they are more likely to be accepting of technology, including technology related to surveillance. That is a big issue as well. The platform economy has a very sophisticated level of algorithmic activity. The EU is bringing in a directive on platform work, part of which is about the use of algorithms. Even though it is specific to platform work, some of it is instructive as to how it could be applied in other workplaces in terms of the proportionality of AI and so on.
I thank the witnesses again. I have a final question. I think we are all getting to the point where we understand that what is really required here is legislation. If Dr. O'Sullivan has done any work on this, can she outline who should be the final arbiter of deciding what monitoring or surveillance should take place in terms of employees across different work platforms? Where would that responsibility lie? Do we have the technological capability in public administration to get to the kernel of these issues and understand what it is we are trying to legislate for?
Dr. Michelle O'Sullivan:
Some of the legislative models that rely on co-determination and co-governance say that technology should not be implemented until it has been agreed, particularly where there is an absence of law on the issue. The question the Deputy asked has not been answered yet. The difficulty with answering a question like that is that the companies themselves, particularly if they are buying in the software, will not necessarily know how the algorithm works if they are using it to a sophisticated level.
Employees, likewise, do not understand how it works. Even with a level of training, they could not necessarily be expected to understand the technical aspects of how the algorithms work. Regardless of that, we have to consider what it is that any arbiter would be deciding. It would be quite difficult for any kind of arbiter to make decisions on the technical aspects and what the technical bits should and should not be allowed to do. As I said, employers might not be able to answer that question. One then has to consider, regardless of the technical aspects, what kinds of rights employees should have in any case. Should there even be a level of technological monitoring of their work? That is the first question. Then there are varying levels of flexibility to consider if such monitoring is allowed.
Does Dr. O'Sullivan think any consensus on that would radiate across sectors? It seems to me it will be very hard to come to a blanket position on this because different sectors will have different requirements and will argue for different approaches to monitoring. I am trying to understand how we might look at this if we want to underpin it legislatively. I accept that co-operative agreement is the best agreement, but it is a question of getting everyone to agree. It is unlikely there will be total agreement across different sectors. Some employers will insist they need certain levels of monitoring, which some employee groups may find intrusive. How best can we arrive at an understanding around that? Will it be a case of individual sectoral bargaining? We are all a bit afraid of big tech. It is probably easier to do some kind of determination around small tech and try to manage those operators. The big tech stuff is really hard to manage and it is not based here. Another consideration is that legislation we introduce might be overridden by legislation in the jurisdictions in which those companies are incorporated.
Dr. Michelle O'Sullivan:
There are some good proposals from law experts internationally, such as Valerio De Stefano and Jeremias Adams-Prassl. They have pointed out that there is scope for more specific regulation of AI in the context of employment than in society in general. There is potential for legislation to be quite specific. For example, the equality legislation is quite specific on the rules according to which people should be treated in the workplace. There is a level of specificity that is possible there. However, most of the legislation on AI we have seen so far puts a lot of emphasis on trying to let workers and employers sort it out together. It is not overly specific because there is a recognition that there may be specifics in particular sectors. In some instances, blanket bans have been introduced. However, in areas where there is flexibility, the legislation provides that workers and employers should work together, rather than there being unilateral decision-making. After that, what would be up for discussion is what we would ask an arbiter to do and what kinds of complaints he or she would be asked to look at.
I apologise for missing the start of the discussion, which was because I had to attend another meeting. If I am repeating a question that was already answered, the witnesses need not answer it again.
I am interested to know what the precedents are for proportionality in this sphere. Obviously, employers have to monitor for any illegal activity that could occur on their network. There is legitimate oversight of work. Are there good models for distinguishing what the principles might be, even if they apply only in a negotiated settlement?
Allied to that, I want to ask about recruitment. I am sure it is already happening that AI is being used to sort through applications for jobs. God only knows what biases are built into the algorithms those applications are using to select candidates. Where is that heading? Does equality legislation apply to the sorts of sifting processes that are undoubtedly happening when loads of CVs are submitted? I am interested to know what we ought to start thinking about in that regard, as regulators in this sphere.
Dr. Michelle O'Sullivan:
There are some good proposals internationally on what proportionality means. A number of guiding principles have been proposed and developed. One is that employers should not use technology to answer a question that could not credibly be answered by technology. For example, it would not be credible for an organisation to make a 30-second recording of an individual and have an algorithm decide, based on those 30 seconds, whether he or she is suitable for a job. Principles are being proposed that could help with the issues around proportionality.
The difficulty at the moment is that regulation is generally very far behind internationally. People had an assumption that AI would be unbiased. There is now lots of research showing that, while equality legislation is useful, the information being fed into recruitment systems is itself biased and, therefore, the algorithmic management systems are themselves biased. We know from research that AI systems are more likely to select men for certain types of jobs, for example, because they are picking up on particular words men are more likely than women to use in CVs. That is because of the information the systems are being fed and what they are looking for. Legislation is only useful if people know about and can identify the source of the problem to begin with. The complexity of AI systems means we do not know what they are deciding or what information they are being fed. That is why having people involved at the development stages of technology, before it is introduced in the workplace, is important, rather than involving them after a decision has been made, when it is too late to know what is being done. There are some very useful proposals on these issues. I am happy to forward some of them to the committee if members feel they might be useful.
The implications of this are potentially far-reaching. There is the potential use of personal data, profiling people and going into people's private lives to decide suitability for promotion, and so on. It is a minefield. I do not know where we should stick our oar in to try to guide the boat. I thank the witnesses for their engagement, which has been very thought-provoking.
Mr. John O'Connell:
That is why our request to the committee in the first instance is to consider working on a report. We need to know the extent of the issues, looking across a broader scope than recruitment and taking into account performance and so forth. We need to have an understanding of the extent of what is currently in play and the potential for where it might go, including the extent of the algorithms, the impact and so on. We are saying we do not have all the answers. We are giving the workers' perspective. There is a need for an extensive study over a number of years to look at the potential of AI, both good and bad, into the future. We are asking the committee to peel the issue back a bit, look at the extent of it and have employers in to talk about it, including how they currently self-regulate and what level of transparency there is at the moment. That type of investigation could start to guide our direction of travel in regard to legislation.
I have been given a note that relates to this topic. I do not know whether what is claimed in it is accurate. The note states:
Are you aware of an AI data collection tool called Workplace Activity Data Utility, or WADU for short? This AI tool, it is alleged, monitors workers' activity and body language through cameras in the office space to deduce their performance, their attention, their focus, whether they are sad, stressed, happy and so forth, and the data is provided to managers. Additionally, the system allegedly monitors online activity from what the worker watches, as well as recording them through the camera and microphone on a work-issued device. I am told that this tool was specifically built for a bank [it is one of the major banks, but I will not name it] and the topic has been covered on several occasions by Business Insider.
The note seemed to be worth highlighting. I am interested to hear the witnesses' response to it.
Dr. Michelle O'Sullivan:
That is right. It is adding to the database all the time but it concentrates on America. The level of information that can potentially be used is limitless. There are extreme examples and it is not all employers. It takes a significant level of a company's resources to develop technology. In the FSU study, we found that many employers are putting more money into customer-facing technology than into the background systems that employees use. That is a hindrance for employees. However, the potential is there. That is not unusual in terms of the kinds of apps that are available. Of course, companies can develop such apps and employers can use them in the absence of regulation or a collective agreement.
That concludes our consideration of the matter. I thank all the representatives for assisting the committee in its consideration of this important matter. As everyone has agreed, the committee will further consider the matter as soon as possible.