The technology has already been used at high-profile events but concerns about privacy, accuracy and bias remain.

By Dr Peter Bentley

Published: Monday, 31 July 2023 at 3:00 PM


Government ministers in the UK are pushing for the Met Police to use automated facial recognition in routine law enforcement.

As police officers already wear body cameras, it would be possible to send the images they record directly to live facial recognition (LFR) systems. This would mean everyone they encounter could be instantly checked to see if they match the data of someone on a watchlist – a database of offenders wanted by the police and courts.

The technology has already been used for high-profile gatherings such as the King’s Coronation, but could rolling it out more widely lead to a rise in distrust of the police force due to concerns about accuracy and privacy?

What is LFR and how does it work?

Artificial intelligences trained to perform face recognition were among the first practical machine learning systems developed by computer scientists.

They are commonly used alongside ‘dot projector’ lasers, which can map thousands of points on a human face, to create the highly accurate biometric readers that we use to unlock our phones.

LFR used by the police is much simpler. It relies on a camera to scan the surroundings and create a flat image. The AI then splits this image into segments, maps any faces it finds to measure key features – such as the distances between the eyes, nose and mouth – and builds simple biometric records.
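To make this concrete, here is a minimal sketch of how such geometric features might be turned into a record. Everything in it – the landmark coordinates and the choice of distances – is invented for illustration; real systems obtain landmarks from a face-detection model.

```python
import numpy as np

# Hypothetical 2D landmark coordinates (in pixels) that a
# face-detection model might return for one face in an image.
landmarks = {
    "left_eye":  np.array([120.0, 95.0]),
    "right_eye": np.array([180.0, 96.0]),
    "nose_tip":  np.array([150.0, 130.0]),
    "mouth":     np.array([151.0, 165.0]),
}

def distance(a, b):
    """Euclidean distance between two landmark points."""
    return float(np.linalg.norm(a - b))

# A simple biometric record: key distances, normalised by the
# gap between the eyes so the record stays the same whether the
# face appears large or small in the frame.
eye_gap = distance(landmarks["left_eye"], landmarks["right_eye"])
record = np.array([
    distance(landmarks["left_eye"], landmarks["nose_tip"]) / eye_gap,
    distance(landmarks["right_eye"], landmarks["nose_tip"]) / eye_gap,
    distance(landmarks["nose_tip"], landmarks["mouth"]) / eye_gap,
])
print(record)  # approximately [0.77 0.76 0.58]
```

Normalising by the inter-eye distance is one common trick for making such records independent of how far the person is from the camera.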

These records can then be compared to those stored in a database of known offenders using a neural network – a type of AI inspired by the human brain.
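As a rough sketch of this comparison step, the open-source face_recognition Python library (which wraps a neural network from the dlib toolkit) can encode faces and measure how dissimilar two encodings are. The filenames below are hypothetical placeholders and 0.6 is simply the library's default tolerance – none of this reflects the actual systems used by police.

```python
import face_recognition

# Build a toy "watchlist" of encodings from a known image
# (the filename is a hypothetical placeholder).
watchlist_image = face_recognition.load_image_file("watchlist/suspect.jpg")
watchlist_encodings = face_recognition.face_encodings(watchlist_image)

# Encode every face found in one frame from a camera.
frame = face_recognition.load_image_file("camera_frame.jpg")
for encoding in face_recognition.face_encodings(frame):
    # face_distance returns how dissimilar each watchlist face is;
    # lower means more alike. 0.6 is the library's default tolerance.
    distances = face_recognition.face_distance(watchlist_encodings, encoding)
    if len(distances) and distances.min() < 0.6:
        print("Possible match - flag for a human operator to review")
    else:
        print("No match - the record should be discarded")
```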

However, this method of using images alone is far less accurate than the laser-mapping method used by phones, as it does not consider the three-dimensional shape of faces.

It also relies on the neural network to be able to match faces correctly. The AI has to be trained on enough examples of faces to enable it to distinguish properly between them. If the data it was trained on is biased towards certain types of face, then it will be biased in its ability to classify faces.

What is it currently used for and what is its legal standing?

LFR has been used in England and Wales at a number of events, including protests, concerts, Notting Hill Carnival and Remembrance Sunday, and also on busy shopping streets such as Oxford Street in London.

While the UK government is pushing for increased use of this AI surveillance tech, many other countries – with the notable exception of China – are not.

In fact, in October 2021, the European Parliament called for a ban on the use of LFR in public places. MEPs also asked for a ban on private facial recognition databases and supported an AI bill which aims to ban the type of social scoring systems used in China, where citizens are given a ‘trustworthiness’ rating based on their observed behaviour.

In the UK, the use of LFR has been successfully challenged in the courts, by the Information Commissioner and by civil liberty groups, who all argue that the technology can infringe on privacy and data protection laws, and can be discriminatory.

Liberty claims that because of its legal victory against South Wales Police, the use of this technology is unlawful in the UK as it “violates human rights, equality and data protection laws”. However, the UK government disputes this claim.

Why is it controversial?

There are two main reasons. The first is the argument that we should have a right to privacy. If cameras scan our faces and read our biometric data without our consent, then some argue that our rights are being infringed.

Proponents say that, as the images are deleted immediately after being scanned, this minor loss of privacy is a price worth paying.

But when the images used to train the AIs may have been scraped from the internet – including your social media – it is harder to argue that data protection rights are being maintained.

The other important reason for the controversy is that LFR and similar technologies have previously been found to be inaccurate and biased.

Often the neural network trained to distinguish faces has been given biased data – typically because it is trained on more white male faces than faces of other races and genders.

Researchers have shown that while the accuracy of detecting white male faces is impressive, the biased training means that the AI is much less accurate when attempting to match female faces and the faces of people of colour.
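One way researchers expose this kind of bias is to measure the error rate separately for each demographic group in a labelled test set. The sketch below uses invented results purely to show the idea; real audits use thousands of test images.

```python
from collections import defaultdict

# Hypothetical evaluation results: (demographic group, was the
# system's match decision correct?) for a labelled test set.
results = [
    ("white male", True), ("white male", True), ("white male", True),
    ("black female", True), ("black female", False),
    ("black female", False), ("white female", True),
    ("white female", False),
]

totals = defaultdict(int)
errors = defaultdict(int)
for group, correct in results:
    totals[group] += 1
    if not correct:
        errors[group] += 1

# A large gap in error rates between groups signals biased training data.
for group in totals:
    rate = errors[group] / totals[group]
    print(f"{group}: error rate {rate:.0%}")
```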

How could it be used less intrusively?

As LFR technology continues to develop, its accuracy will improve. This may mean that concerns about bias will one day disappear. However, police training should make it clear that a face match performed by LFR will never be as accurate as simpler technologies such as ANPR (automatic number plate recognition).

Also, to build trust, the use of LFR should be clearly signposted, and members of the public given the right to ask for it not to be switched on if they feel it violates their privacy.
