by Scott Faulds

Across the UK, it is estimated that there are 1.85 million CCTV cameras, approximately one camera for every 36 people. From shopping centres to railway stations, CCTV cameras have become a normal part of modern life and modern policing, with research from the College of Policing indicating that CCTV modestly reduces overall crime. Currently, most CCTV cameras are passive: they act as a deterrent or provide evidence of an individual’s past location or of a crime that has been committed.

However, advances in artificial intelligence have allowed for the development of facial recognition systems, which could enable CCTV cameras to proactively identify suspects, or crimes in progress, in real time. So far, limited pilots of facial recognition systems have received a mixed reaction: the Metropolitan Police argue that it is their duty to use new technologies to keep people safe, while privacy campaigners argue that the technology poses a serious threat to civil liberties and are concerned that facial recognition systems contain gender and racial bias.

How does it work?

Facial recognition systems operate in a similar way to how humans recognise faces, by identifying familiar facial characteristics, but at a much larger scale and in a data-driven way. Whilst there are a variety of different types of facial recognition system, the basic steps, illustrated in the code sketch below, are as follows:

An image of a face is captured in a photograph, video or live footage. The face can be within a crowd and does not necessarily have to be directly facing a camera.

Facial recognition software biometrically scans the face and converts unique facial characteristics (the distance between the eyes, the distance from forehead to chin, etc.) into a mathematical representation known as a facial signature.

The facial signature can then be compared to faces stored within a database (such as a police watchlist) or faces previously flagged by the system.

The system then determines whether it has identified a match; in most systems, the level of confidence required before a match is flagged can be adjusted.
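For illustration, these steps can be sketched in a few lines of Python using the open-source face_recognition library. This is a minimal sketch, not the software used by UK police forces, and the image file names are placeholders:

```python
import face_recognition

# Step 1: capture an image containing one or more faces
# (file names here are placeholders).
probe_image = face_recognition.load_image_file("crowd_photo.jpg")

# Step 2: encode each detected face into a numeric "facial signature"
# (in this library, a 128-dimensional vector).
probe_signatures = face_recognition.face_encodings(probe_image)

# Step 3: build a watchlist of signatures from images of known faces.
wanted_image = face_recognition.load_image_file("wanted_person.jpg")
watchlist = face_recognition.face_encodings(wanted_image)

# Step 4: flag a match if a scanned signature is close enough to a
# watchlist entry. The tolerance parameter is the adjustable confidence
# threshold mentioned above: lower values demand a closer match.
for signature in probe_signatures:
    if any(face_recognition.compare_faces(watchlist, signature, tolerance=0.6)):
        print("Potential match flagged for human review")
```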

Facial recognition and the police

Over the past twelve months, the Metropolitan Police and South Wales Police have both operated pilots of facial recognition systems designed to identify individuals wanted for serious and violent offences. These pilots involved placing facial recognition cameras in busy central locations, such as Westfield Shopping Centre, where the faces of people in large crowds were scanned and compared to a police watchlist. If the system flagged a match, police officers would ask the potential match to confirm their identity and, if the match was correct, they would be detained. Police forces have argued that the public broadly support the deployment of facial recognition and believe that the right balance has been found between keeping the public safe and protecting individual privacy.

The impact of the police’s deployment of facial recognition has been compared by some to the introduction of fingerprint identification. However, it is difficult to determine how successful these pilots have been, as the reported accuracy of these facial recognition systems varies widely depending on the source. According to the Metropolitan Police, 70% of wanted suspects would be identified walking past facial recognition cameras, whilst only one in 1,000 people would generate a false alert: an error rate of 0.1%. Conversely, independent analysis commissioned by the Metropolitan Police found that only eight out of 42 matches were verified as correct: an error rate of 81%.

The massive discrepancy in error rates can be explained by the way in which the accuracy of a facial recognition system is assessed. The Metropolitan Police measure accuracy by comparing the number of false alerts against the total number of faces scanned by the system. Independent researchers, on the other hand, assess the accuracy of the alerts themselves: the proportion of flagged matches that turn out to be correct. It is therefore unclear how accurate facial recognition truly is; nevertheless, the Metropolitan Police have now begun to use live facial recognition cameras operationally.
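A rough worked example helps show how the same deployment can produce both figures. The number of faces scanned below is an assumption chosen purely so that the arithmetic lines up with the reported rates:

```python
# Hypothetical deployment figures: 42 alerts raised, of which 8 were
# verified as correct matches. The number of faces scanned is an
# assumption chosen for illustration.
faces_scanned = 34_000
alerts_raised = 42
correct_matches = 8
false_alerts = alerts_raised - correct_matches  # 34

# The Metropolitan Police's measure: false alerts per face scanned.
met_error_rate = false_alerts / faces_scanned          # ~0.1%

# The independent researchers' measure: false alerts per alert raised.
independent_error_rate = false_alerts / alerts_raised  # ~81%

print(f"{met_error_rate:.1%} vs {independent_error_rate:.0%}")
```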

Privacy and bias

Civil liberties groups, such as Liberty and Big Brother Watch, have raised a variety of concerns regarding the police’s use of facial recognition. These groups argue that the deployment of facial recognition systems presents a clear threat to individual privacy and privacy as a social norm. Although facial recognition systems used by the police are designed to flag those on watchlists, every person who comes into the range of a camera will automatically have their face biometrically scanned. In particular, privacy groups have raised concerns about the use of facial recognition systems during political protests, arguing that their use may constitute a threat to the right to freedom of expression and may even represent a breach of human rights law.

Additionally, concerns have been raised regarding the racial and gender bias found to be prevalent in facial recognition systems across the world. A recent evaluation by the US Government’s National Institute of Standards and Technology of 189 facial recognition algorithms found that most exhibit “demographic differentials”: a system’s ability to match two images of the same person varies depending on demographic group. The study found that facial recognition systems were less effective at identifying BAME and female faces, which means that these groups are statistically more likely to be falsely flagged and potentially questioned by the police.
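To make “demographic differentials” concrete, the sketch below computes a false match rate separately for two hypothetical demographic groups; the outcome counts are invented purely for illustration. A system with no differential would produce similar rates for every group:

```python
# Hypothetical alert outcomes broken down by demographic group,
# invented to illustrate how a demographic differential is measured.
alerts_by_group = {
    "group_a": {"alerts": 20, "correct": 16},
    "group_b": {"alerts": 20, "correct": 9},
}

for group, counts in alerts_by_group.items():
    false_match_rate = 1 - counts["correct"] / counts["alerts"]
    print(f"{group}: false match rate {false_match_rate:.0%}")

# A gap between the groups' rates is a demographic differential: the
# group with the higher rate is more likely to be falsely flagged.
```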

Final thoughts

From DNA to fingerprint identification, the police are constantly looking for new and innovative ways to help keep the public safe. In theory, the use of facial recognition is no different: the police argue that the ability to quickly identify a person of interest will make the public safer. However, unlike previous advancements, the effectiveness of facial recognition is largely unproven.

Civil liberties groups are increasingly concerned that facial recognition systems may infringe on the right to privacy and worry that their use will turn the public into walking biometric ID cards. Furthermore, research has indicated that the vast majority of facial recognition systems exhibit racial and gender bias, which could lead to women and BAME individuals experiencing repeated contact with the police due to false matches.

In summary, facial recognition systems provide the police with a new tool to help keep the public safe. However, in order to be effective and gain the trust of the public, it will be vital for the police to set out the safeguards put in place to prevent privacy violations and the steps taken to ensure that the systems do not feature racial and gender bias.  

