Facial recognition systems: ready for prime time?

by Scott Faulds

Across the UK, it is estimated that there are 1.85 million CCTV cameras, approximately one camera for every 36 people.  From shopping centres to railway stations, CCTV cameras have become a normal part of modern life and modern policing, with research from the College of Policing indicating that CCTV modestly reduces overall crime. Currently, most of the cameras utilised within the CCTV system are passive; they act as a deterrent or provide evidence of an individual’s past location or of a crime committed.

However, advances in artificial intelligence have allowed for the development of facial recognition systems which could enable CCTV cameras to proactively identify suspects or active crime in real time. So far, the use of facial recognition systems in limited pilots has received a mixed reaction, with the Metropolitan Police arguing that it is their duty to use new technologies to keep people safe. But privacy campaigners argue that the technology poses a serious threat to civil liberties and are concerned that facial recognition systems contain gender and racial bias.

How does it work?

Facial recognition systems operate in a similar way to how humans recognise faces, by identifying familiar facial characteristics, but on a much larger scale and in a data-driven way. Whilst there are a variety of different types of facial recognition system, the basic steps are as follows:

1. An image of a face is captured within a photograph, video or live footage. The face can be within a crowd and does not necessarily have to be directly facing a camera.

2. Facial recognition software biometrically scans the face and converts unique facial characteristics (the distance between the eyes, the distance from forehead to chin, etc.) into a mathematical representation known as a facial signature.

3. The facial signature can then be compared to faces stored within a database (such as a police watchlist) or faces previously flagged by the system.

4. The system then determines whether it believes it has identified a match; in most systems, the level of confidence required before the system flags a match can be adjusted.
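The matching step can be illustrated with a toy sketch. In a real system a deep neural network produces the facial signature as a high-dimensional vector; the made-up four-number signatures, watchlist names and threshold below are purely illustrative, but the comparison logic (similarity score against each watchlist entry, flag only above a confidence threshold) follows the steps above.

```python
import math

# Hypothetical watchlist: each entry maps a name to a facial signature.
# Real signatures are vectors with hundreds of dimensions; these are toys.
watchlist = {
    "suspect_A": [0.12, 0.85, 0.33, 0.47],
    "suspect_B": [0.91, 0.10, 0.64, 0.22],
}

def similarity(sig1, sig2):
    """Cosine similarity between two facial signatures (1.0 = identical direction)."""
    dot = sum(a * b for a, b in zip(sig1, sig2))
    norm1 = math.sqrt(sum(a * a for a in sig1))
    norm2 = math.sqrt(sum(b * b for b in sig2))
    return dot / (norm1 * norm2)

def check_face(signature, threshold=0.95):
    """Return the best watchlist match if it clears the confidence threshold."""
    best_name, best_score = None, 0.0
    for name, listed in watchlist.items():
        score = similarity(signature, listed)
        if score > best_score:
            best_name, best_score = name, score
    if best_score >= threshold:
        return best_name, best_score
    return None, best_score  # below threshold: no alert is raised

# A scanned signature very close to suspect_A's triggers a flag;
# an unrelated signature falls below the threshold and is discarded.
match, score = check_face([0.13, 0.84, 0.34, 0.46])
no_match, low_score = check_face([0.50, 0.50, 0.50, 0.50])
```

Lowering `threshold` makes the system raise more alerts, including more false ones; raising it does the opposite. That is the "level of confidence" dial referred to in step 4.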

Facial recognition and the police

Over the past twelve months, the Metropolitan Police and South Wales Police have both operated pilots of facial recognition systems, designed to identify individuals wanted for serious and violent offences. These pilots involved placing facial recognition cameras in busy central areas, such as Westfield Shopping Centre, where the faces of people in large crowds were scanned and compared to a police watch-list. If the system flagged a match, police officers would ask the potential match to confirm their identity and, if the match was correct, they would be detained. Police forces have argued that the public broadly support the deployment of facial recognition and believe that the right balance has been found between keeping the public safe and protecting individual privacy.

The impact of the deployment of facial recognition by the police has been compared by some to the introduction of fingerprint identification. However, it is difficult to determine how successful these pilots have been, as there has been a discrepancy in the reporting of the accuracy of these facial recognition systems. According to the Metropolitan Police, 70% of wanted suspects would be identified walking past facial recognition cameras, whilst only one in 1,000 people would generate a false alert, an error rate of 0.1%. Conversely, independent analysis commissioned by the Metropolitan Police found that only eight out of 42 matches were verified as correct, an error rate of 81%.

The massive discrepancy in error rates can be explained by the way in which the accuracy of a facial recognition system is assessed. The Metropolitan Police measure accuracy by comparing false alerts with the total number of faces scanned by the facial recognition system. Independent researchers, on the other hand, assess the accuracy of the flags generated by the facial recognition system. It is therefore unclear how accurate facial recognition truly is; nevertheless, the Metropolitan Police have now begun to use live facial recognition cameras operationally.
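The two error rates are not contradictory: they divide the same false alerts by different denominators. The short calculation below makes this concrete; the number of faces scanned is an assumed illustrative figure (the article does not report it), chosen so that the two measures reproduce the roughly 0.1% and 81% rates quoted above.

```python
# Illustrative figures mirroring the reported pilot results.
faces_scanned = 42_000   # assumed: everyone who walked past the cameras
total_flags = 42         # alerts raised by the system
correct_flags = 8        # alerts verified as genuine matches
false_flags = total_flags - correct_flags  # 34 false alerts

# Met-style measure: false alerts as a share of ALL faces scanned.
error_vs_all_scans = false_flags / faces_scanned   # ~0.0008, i.e. roughly 0.1%

# Independent-researcher measure: false alerts as a share of the FLAGS raised.
error_vs_flags = false_flags / total_flags         # ~0.81, i.e. 81%
```

Both numbers describe the same deployment: the system rarely bothers an innocent passer-by relative to the whole crowd, yet most of the alerts it actually raises are wrong.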

Privacy and bias

Civil liberties groups, such as Liberty and Big Brother Watch, have raised a variety of concerns regarding the police’s use of facial recognition. These groups argue that the deployment of facial recognition systems presents a clear threat to individual privacy and privacy as a social norm. Although facial recognition systems used by the police are designed to flag those on watch-lists, every single person that comes into the range of a camera will automatically have their face biometrically scanned. In particular, privacy groups have raised concerns about the use of facial recognition systems during political protests, arguing that their use may constitute a threat to the right to freedom of expression and may even represent a breach of human rights law.

Additionally, concerns have been raised regarding the racial and gender bias that has been found to be prevalent in facial recognition systems across the world. A recent evaluative study conducted by the US Government’s National Institute of Standards and Technology on 189 facial recognition algorithms found that most algorithms exhibit “demographic differentials”. This means that a facial recognition system’s ability to match two images of the same person varies depending on demographic group. The study found that facial recognition systems were less effective at identifying BAME and female faces; as a result, these groups are statistically more likely to be falsely flagged and potentially questioned by the police.

Final thoughts

From DNA to fingerprint identification, the police are constantly looking for new and innovative ways to help keep the public safe. In theory, the use of facial recognition is no different: the police argue that the ability to quickly identify a person of interest will make the public safer. However, unlike previous advancements, the effectiveness of facial recognition is largely unproven.

Civil liberties groups are increasingly concerned that facial recognition systems may infringe on the right to privacy and worry that their use will turn the public into walking biometric ID cards. Furthermore, research has indicated that the vast majority of facial recognition systems feature racial and gender bias, which could lead to women and BAME individuals experiencing repeated contact with the police due to false matches.

In summary, facial recognition systems provide the police with a new tool to help keep the public safe. However, in order to be effective and gain the trust of the public, it will be vital for the police to set out the safeguards put in place to prevent privacy violations and the steps taken to ensure that the systems do not feature racial and gender bias.  



Why UK-sourced evidence matters … and why it is so often ignored

By Morwen Johnson

If you follow our blog, you’ll know that we care passionately about promoting the uptake of evidence and research by policymakers and practitioners. It’s easy to be complacent and assume that when public money is at stake, decisions are made on the basis of evaluations and reviews. Unfortunately, this is still not always the case.

The current evidence-based policy agenda in the UK encompasses initiatives such as the What Works network, the Local Government Knowledge Navigators and independent organisations such as the Alliance for Useful Evidence. They are working on fostering demand for evidence, as well as linking up academics with those in the public sector to ensure that the research community is responsive to the needs of those making decisions and designing/delivering services.

A recent article in Health Information and Libraries Journal, however, highlights another challenge in evidence-based policy. A mapping exercise has found that literature reviews often ignore specialist databases in favour of the large, well-known databases produced by major commercial publishers. Within the health and social care field (the focus of the article), literature reviews tend to use databases such as Medline, Embase and Cinahl – and overlook independent UK-produced databases, even when they are more relevant to the research question.

Why does it matter?

Research has shown that how (and why) databases are chosen for literature searching can “dramatically influence the research upon which reviews, and, in particular, systematic review, rely upon to create their evidence base”.

To generate useful evidence for the UK context (relating to UK policy issues or populations), researchers need to understand the most appropriate databases to search – but unfortunately our own experience of looking at the detail of methodologies in evidence reviews suggests that in many cases the only databases searched are those produced by American or international publishers.

Grey literature is a valuable source in evidence reviews – and again this is often overlooked in the major databases, which tend to focus only on peer-reviewed journal content. A recent Australian report, ‘Where is the evidence?‘, argued that grey literature is a key part of the evidence base and is valuable for public policy, because it addresses the perspectives of different stakeholder groups, tracks changes in policy and implementation, and supports knowledge exchange between sectors (academic, government and third sector).

Another benefit of UK-produced databases is that they will make use of UK terminology in abstracts and keywords.

Social Policy and Practice – a unique resource

At this point I should declare a vested interest – The Knowledge Exchange is a member of a UK consortium which produces the Social Policy and Practice (SPP) database. The SPP database was created in 2005 after five UK organisations, each with a library focused on sharing knowledge in community health and social care, agreed to merge their individual content in order to make it available to the widest possible audience.

The current members of the SPP consortium – the National Children’s Bureau, the Idox Knowledge Exchange, the Centre for Policy on Ageing and the Social Care Institute for Excellence – have just been joined by the National Society for the Prevention of Cruelty to Children. Inclusion of the NSPCC’s bibliographic data greatly enhances the coverage of child protection research in the database. SPP has been identified by NICE, the National Institute for Health and Care Excellence, as a key resource for those involved in research into health and social care.

We want the UK research community to understand what SPP offers, and to use it when undertaking literature reviews or evidence searches. This process of awareness raising should start with students – librarians in universities and the UK doctoral training centres have a key role in this as it ties in with the development of information literacy and critical appraisal skills. Ignoring specialist sources such as SPP risks introducing bias – at a time when initiatives are attempting to embed research and analytics in local government and the wider public sector.


Information on the coverage of Social Policy and Practice is available here and the distributor Ovid is offering a free 30-day trial.