Oireachtas Joint and Select Committees
Tuesday, 13 February 2024
Joint Oireachtas Committee on Justice, Defence and Equality
General Scheme of the Garda Síochána (Recording Devices) (Amendment) Bill: Discussion
Dr. Daragh Murray:
I thank Senator Ruane for the questions and I will try to answer the first couple of them. With regard to the existence of a reference database, for facial recognition to be effective there has to be a reference database. If a person in a crowd is identified, the benefit of facial recognition lies in being able to match that face against a database in order to establish who that person is. To say no such database exists seems difficult to believe. In the UK the reference database is the police national database, which is a database of custody images containing approximately 19 million images. In the UK we have a problem whereby, in my opinion, there is not an adequate legal basis for this. Other databases have also been drawn on under the radar; for example, there have been reports that the passport database is being used. At a minimum it is very important to define in the draft Bill what the reference database would be and how it would be used.
With regard to the human operator, a human safeguard is often presented as a fail-safe: if the automated algorithm produces an inaccurate result, the human operator is supposed to catch it. There is a large body of literature on automation bias and the risk of deference to an algorithm. In our experience, when we reviewed the London Metropolitan Police's use of live facial recognition we found a presumption to intervene. Because of the nature of the deployment, the pressure of the environment and how the camera is set up, there is often no scope for an effective adjudication of the image; it often happens very quickly. There are very few cases where the operator has overturned the algorithm's match and where the fail-safe was, in effect, a fail-safe.
With regard to the point on accuracy, I am not sure where the 99% figure comes from either. In the UK, when we refer to deployments, it has all been about the metrics that are chosen. If 1,000 people are scanned over the course of an operation, an alert is generated for five of them and one of those alerts is a false positive, that is one false positive in 1,000 scans, so you could say the system is 99.9% accurate, but equally it is one false positive in five alerts, so it is potentially 20% inaccurate. It all depends on how it is evaluated. What matters most is not the people who are scanned and against whom no action is taken, but the people who are engaged with by the police and who may face consequences.
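To make the arithmetic in the witness's hypothetical concrete, the following minimal Python sketch (not part of the evidence given; the figures simply mirror the illustrative 1,000 scans, five alerts and one false positive) shows how the choice of denominator produces the two very different headline rates.

```python
# Illustrative only: figures mirror the hypothetical example above
# (1,000 people scanned, 5 alerts generated, 1 alert a false positive).
people_scanned = 1000
alerts_generated = 5
false_positives = 1

# Metric 1: false positives as a share of everyone scanned.
# This is the framing that yields a "99.9% accurate" headline.
rate_vs_scanned = false_positives / people_scanned
print(f"False positives vs. people scanned: {rate_vs_scanned:.1%}")   # 0.1%

# Metric 2: false positives as a share of alerts generated,
# i.e. of the people the police actually engage with.
rate_vs_alerts = false_positives / alerts_generated
print(f"False positives vs. alerts generated: {rate_vs_alerts:.0%}")  # 20%
```

The same deployment data can therefore be reported as 99.9% accurate or 20% inaccurate depending on whether the denominator is everyone scanned or only those flagged for police engagement.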