Facial recognition systems: ready for prime time?

by Scott Faulds

Across the UK, it is estimated that there are 1.85 million CCTV cameras, approximately one camera for every 36 people. From shopping centres to railway stations, CCTV cameras have become a normal part of modern life and modern policing, with research from the College of Policing indicating that CCTV modestly reduces overall crime. Currently, most of the cameras used within the CCTV system are passive: they act as a deterrent, or provide evidence of an individual's past location or of a crime committed.

However, advances in artificial intelligence have allowed for the development of facial recognition systems, which could enable CCTV cameras to proactively identify suspects or crimes in progress in real time. So far, the use of facial recognition systems in limited pilots has received a mixed reaction: the Metropolitan Police argue that it is their duty to use new technologies to keep people safe, while privacy campaigners argue that the technology poses a serious threat to civil liberties and are concerned that facial recognition systems contain gender and racial bias.

How does it work?

Facial recognition systems operate in a similar way to how humans recognise faces, by identifying familiar facial characteristics, but on a much larger, data-driven scale. Whilst there are a variety of different types of facial recognition system, the basic steps are as follows:

An image of a face is captured either within a photograph, video or live footage. The face can be within a crowd and does not necessarily have to be directly facing a camera.

Facial recognition software biometrically scans the face and converts unique facial characteristics (the distance between the eyes, the distance from forehead to chin, and so on) into a numerical representation known as a facial signature.

The facial signature can then be compared to faces stored within a database (such as a police watchlist) or faces previously flagged by the system.

The system then determines whether it believes it has identified a match; in most systems, the level of confidence required before the system flags a match can be adjusted, as in the sketch below.
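To make these steps concrete, here is a minimal sketch in Python. It assumes a hypothetical extract_signature() function standing in for the trained model a real system would use to turn a face image into a fixed-length vector, and it compares signatures against a watchlist using cosine similarity with an adjustable confidence threshold. It illustrates the idea only; it is not an implementation of any police system.

```python
# Minimal sketch of the matching pipeline described above.
# extract_signature() is a hypothetical stand-in for a trained embedding
# model; random vectors are used purely so the sketch runs end to end.
import numpy as np

rng = np.random.default_rng(0)

def extract_signature(face_image) -> np.ndarray:
    # Hypothetical: a real system would run a trained model on the image.
    return rng.normal(size=128)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_against_watchlist(signature, watchlist, threshold=0.8):
    """Return the best-scoring watchlist entry that clears the threshold.

    Raising the threshold demands more confidence before a match is
    flagged; lowering it flags more candidates but more false alerts.
    """
    best_name, best_score = None, threshold
    for name, stored in watchlist.items():
        score = cosine_similarity(signature, stored)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name, best_score

watchlist = {"suspect_a": rng.normal(size=128),
             "suspect_b": rng.normal(size=128)}
name, score = match_against_watchlist(extract_signature(None), watchlist)
print(name or "no match flagged", round(score, 3))
```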

Facial recognition and the police

Over the past twelve months, the Metropolitan Police and South Wales Police have both run pilots of facial recognition systems designed to identify individuals wanted for serious and violent offences. These pilots involved placing facial recognition cameras in busy central areas, such as Westfield Shopping Centre, where the faces of people in large crowds were scanned and compared against a police watchlist. If the system flagged a match, police officers would ask the potential match to confirm their identity and, if the match was correct, they would be detained. Police forces have argued that the public broadly support the deployment of facial recognition and believe that the right balance has been found between keeping the public safe and protecting individual privacy.

The impact of the deployment of facial recognition by the police has been compared by some to the introduction of fingerprint identification. However, it is difficult to determine how successful these pilots have been, as there are discrepancies in the reported accuracy of these facial recognition systems. According to the Metropolitan Police, 70% of wanted suspects would be identified walking past facial recognition cameras, whilst only one in 1,000 people would generate a false alert, an error rate of 0.1%. Conversely, independent analysis commissioned by the Metropolitan Police found that only eight out of 42 matches were verified as correct, an error rate of 81%.

The massive discrepancy in error rates can be explained by the way in which the accuracy of a facial recognition system is assessed. The Metropolitan Police measure accuracy by comparing the number of false alerts with the total number of faces scanned by the facial recognition system. Independent researchers, on the other hand, assess the accuracy of the flags generated by the system: what proportion of alerts turn out to be correct. The worked example below shows how the same pilot can yield both figures. It therefore remains unclear how accurate facial recognition truly is; nevertheless, the Metropolitan Police have now begun to use live facial recognition cameras operationally.
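Here is a short worked example of that arithmetic. The flag counts come from the independent analysis quoted above; the number of faces scanned is an assumption chosen purely so that the two published figures can coexist, not a reported operational statistic.

```python
# Illustrative arithmetic only: faces_scanned is assumed, not published.
flags = 42                # alerts generated by the system
correct_flags = 8         # alerts verified as genuine matches
false_flags = flags - correct_flags      # 34 false alerts
faces_scanned = 34_000    # assumed, so false alerts hit 1 in 1,000

# Metropolitan Police measure: false alerts as a share of everyone scanned.
per_face_error = false_flags / faces_scanned     # 0.1%
# Independent researchers' measure: the share of flags that were wrong.
per_flag_error = false_flags / flags             # ~81%

print(f"False alerts per face scanned: {per_face_error:.1%}")   # 0.1%
print(f"Share of flags that were wrong: {per_flag_error:.0%}")  # 81%
```

Both measures are computed from the same underlying events; they differ only in the denominator, which is why the headline error rates diverge so sharply.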

Privacy and bias

Civil liberties groups, such as Liberty and Big Brother Watch, have raised a variety of concerns regarding the police's use of facial recognition. These groups argue that the deployment of facial recognition systems presents a clear threat to individual privacy and to privacy as a social norm. Although the facial recognition systems used by the police are designed to flag those on watchlists, every single person who comes into the range of a camera will automatically have their face biometrically scanned. In particular, privacy groups have raised concerns about the use of facial recognition systems during political protests, arguing that their use may threaten the right to freedom of expression and may even represent a breach of human rights law.

Additionally, concerns have been raised regarding the racial and gender bias that has been found to be prevalent in facial recognition systems across the world. A recent evaluative study by the US Government's National Institute of Standards and Technology (NIST) of 189 facial recognition algorithms found that most algorithms exhibit "demographic differentials": a facial recognition system's ability to match two images of the same person varies depending on demographic group. The study found that facial recognition systems were less effective at identifying BAME and female faces, meaning that these groups are statistically more likely to be falsely flagged and potentially questioned by the police, as the sketch below illustrates.
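As a hedged illustration of what a demographic differential means in practice, the sketch below compares false-match rates for two hypothetical groups evaluated at the same threshold. The counts are invented for illustration and are not figures from the NIST study.

```python
# Invented counts to illustrate a "demographic differential":
# the same matching threshold yields different false-match rates per group.
results = {
    # group: (false matches, comparison trials) -- hypothetical numbers
    "group_a": (10, 100_000),
    "group_b": (45, 100_000),
}

for group, (false_matches, trials) in results.items():
    print(f"{group}: false-match rate = {false_matches / trials:.3%}")

# group_b's rate is 4.5x higher at the same threshold, so its members are
# correspondingly more likely to be wrongly flagged and stopped.
```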

Final thoughts

From DNA to fingerprint identification, the police are constantly looking for new and innovative ways to help keep the public safe. In theory, the use of facial recognition is no different: the police argue that the ability to quickly identify a person of interest will make the public safer. However, unlike previous advancements, the effectiveness of facial recognition is largely unproven.

Civil liberties groups are increasingly concerned that facial recognition systems may infringe on the right to privacy and worry that their use will turn the public into walking biometric ID cards. Furthermore, research has indicated that the vast majority of facial recognition systems feature racial and gender bias, which could lead to women and BAME individuals experiencing repeated contact with the police due to false matches.

In summary, facial recognition systems provide the police with a new tool to help keep the public safe. However, in order to be effective and gain the trust of the public, it will be vital for the police to set out the safeguards put in place to prevent privacy violations and the steps taken to ensure that the systems do not feature racial and gender bias.  



Assistive digital technology and older people: technology “bricolage” in dementia care

A key focus of social care teams today is helping people to grow old at home, safely, with dignity and with appropriate levels of care if needed, without breaking the budget. Increasingly, local authorities are looking to advances in technology to facilitate this “growing old in place”.

Telecare packages and assistive technologies are often the preferred way for care teams to deliver social care in a home setting. And in situations where care is required around the clock (for example, support for people with dementia and other life-limiting degenerative diseases), families and carers are adapting everyday technology and integrating it into their care-giving in order to supplement the telecare provided by local authorities.


Bricolage in dementia and elderly care

Bricolage means adapting an object to carry out a function that was not necessarily its original intended purpose. Relatives who care for loved ones with dementia often adapt everyday objects to help them with their day-to-day caring, finding new, innovative and often unconventional uses for technology.


One example from dementia care was a man who bought a chicken ornament with a sensor which "crowed" whenever anyone walked past it. He placed it beside the front door so that if his wife, who had dementia, walked up to the door to go out, he would hear it and be able to go to her.

Other examples of technology being adapted include setting alarms and reminders on mobile devices to prompt people to take medication, and using webcams as personal CCTV so that family carers can monitor loved ones when they go out or move into the next room.

These examples show that objects don't have to be digital in order to be effective. However, the rising capability of digital technologies and their relative decrease in cost mean it is often quicker and easier for families to invest in additional technologies themselves, rather than waiting for an assessment and an allocation of additional technology from their council.


Ethical challenges

Although there may be practical motivations, some charities have expressed concern about the ethics of some of the practices involved in adapting digital technology to form part of an assistive care package. While they recognise that the strain of caring is significant for many people, rigging up a webcam in each room to "monitor" a loved one, or attaching a GPS tracking bracelet, while often done with the best of intentions, could be interpreted as a breach of human rights.

Active assistive technology (technology which requires an active call for assistance), rather than passive technology (which constantly monitors), may be a more ethical way of using technology. It may also be used as an additional stimulant or an interactive tool to help patients communicate. Apps and interactive devices, such as tablet computers, can inform a carer or loved one that someone has been using the app (providing a form of reassurance and monitoring), and the activities the app promotes might also act as a visual stimulant and a communicative tool; a minimal sketch of this pattern follows. The Dementia Citizens project has adopted this approach and aims to help people with dementia, and those who care for them, using apps on smartphones and tablets.
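As a rough sketch of that "active" pattern, the snippet below records a single event when a person uses an app and passes it on to a carer, rather than monitoring continuously. notify_carer() is a hypothetical stand-in for a real delivery channel such as an SMS or push notification; it is not part of any named product.

```python
# Minimal sketch of the "active" pattern: one reassurance event per
# interaction, rather than constant surveillance.
from datetime import datetime

def notify_carer(message: str) -> None:
    # Hypothetical channel: print in place of a real notification service.
    print(message)

def record_app_use(user: str) -> None:
    # Emit a single event only when the person actually uses the app.
    timestamp = datetime.now().strftime("%H:%M")
    notify_carer(f"{user} opened the app at {timestamp}")

record_app_use("Margaret")
```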

(Video: "Dementia Citizens", Nesta UK, on Vimeo.)

Final thoughts

If we are mindful of the ethical challenges of integrating more technology into care, it might be possible for families and carers to work with social care and assistive technology development teams to adapt the tools available in a more empowering way. It might also mean that the onus is not on carers and their loved ones to build what they can from the standardised telecare provided by local authorities.

Bricolage in assistive care has, for many families, become the norm without them realising it. By adapting and supplementing assistive technology, like telecare packages, with non-assistive technologies or adapted additional digital technologies, families and carers can create a bespoke and personalised care package.

In future, understanding the extent to which families and carers adapt the technology given to them could help create more flexible care packages which can be more easily adapted to suit individual needs.