On the Face of It: The Implications of Facial Recognition

Facial recognition technology, particularly in law enforcement, is spreading rapidly across the globe. It has become especially prevalent in China, as well as in some US states, and, more recently, it has been introduced into the UK on a trial basis.  

In simple terms, the technology works by matching the faces of people in public areas, usually as they move past special cameras, against images of people on watch lists. Technically, this is accomplished by scanning distinct facial points and creating a uniquely identifiable biometric map, which is more readily compared to a fingerprint than to a two-dimensional photograph. As with any rapidly advancing technology, however, there is potential for abuse and for questions of legality. According to Liberty, watch lists can currently include “pictures of anyone, including people who are not suspected of any wrongdoing, and the images can come from anywhere – even from our social media accounts.” 
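
To make the fingerprint analogy concrete, below is a minimal sketch of how watch-list matching of this kind might work. The embedding vectors, the match_watchlist function and the distance threshold are illustrative assumptions for explanation only, not a description of any deployed police system.

```python
import numpy as np

# Illustrative only: in a real system, each "biometric map" would be an
# embedding vector produced by a trained face-recognition model from the
# distinct facial points described above. Random vectors stand in here.

def distance(a: np.ndarray, b: np.ndarray) -> float:
    """Euclidean distance between two face embeddings; smaller means more alike."""
    return float(np.linalg.norm(a - b))

def match_watchlist(probe: np.ndarray, watchlist: dict, threshold: float = 0.6) -> list:
    """Return the IDs of watch-list entries whose embedding lies within
    `threshold` of the face captured by the camera."""
    return [pid for pid, emb in watchlist.items() if distance(probe, emb) <= threshold]

rng = np.random.default_rng(seed=1)
watchlist = {"entry_a": rng.normal(size=128), "entry_b": rng.normal(size=128)}
probe = watchlist["entry_a"] + rng.normal(scale=0.001, size=128)  # near-identical face
print(match_watchlist(probe, watchlist))  # ['entry_a']
```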

Facial recognition in use in a CCTV monitoring room

Key issues raised by this technology include human rights concerns, an unclear or absent regulatory framework, and issues of bias against, and abuse of, vulnerable groups. Debate on this topic by the UK government has so far been limited; however, a recent case may provide some guidance as to the direction the law may take in future. It is these issues and their legal ramifications that are discussed here.  

Is Facial Recognition Legal in the UK? 

Facial recognition has been trialled in the UK since as early as 2016, at events such as the Notting Hill Carnival in 2016 and 2017 and, more recently, in South Wales. There is currently no specific legislation regulating the use of facial recognition technology; however, it can be said to be indirectly regulated by the Data Protection Act 2018, which governs how the images gathered are handled, and by the Protection of Freedoms Act 2012, which contains a code regulating security surveillance devices, with further protections available under the Human Rights Act 1998 (HRA).  

With regard to human rights protections, the Metropolitan Police have self-imposed conditions on the technology’s use to ensure compliance with the HRA, in particular the right to respect for private and family life (Article 8 of the ECHR). Under these conditions, the technology will only be used if the following criteria are met: 

  1. The overall benefits to public safety must be great enough to outweigh any potential public distrust in the technology 
  2. It can be evidenced that using the technology will not generate gender or racial bias in policing operations 
  3. Each deployment must be assessed and authorised to ensure that it is both necessary and proportionate for a specific policing purpose 
  4. Operators are trained to understand the risks associated with use of the software and understand they are accountable 
  5. Both the Met and the Mayor’s Office for Policing and Crime develop strict guidelines to ensure that deployments balance the benefits of this technology with the potential intrusion on the public. 

The issue clearly presented by the above is that, as there is no specific law, challenging decisions to install or use facial recognition technology may be rather difficult. While the trials, at least in London, have ended pending review, if more were to be conducted there would be no legal basis on which to appeal an installation, with only general ethical principles set out by the Biometrics Commissioner offering safeguards: 

  1. The public should be informed and consulted when a trial, or series of trials, is to be conducted and the purpose and general approach to evaluation explained. 
  2. The public should be informed at a trial location that a trial is in progress and given a contact where further information can be found/requested.  

Hannah Parsons of DAS Law summarises: 

… if the police can show that they have a legitimate aim and meet the oversight and regulation framework outlined by the Surveillance Camera Commissioner, Biometrics Commissioner and Information Commissioner’s Office, their use of the technology is likely to be justified. 

There is a further issue here, however, concerning necessity and proportionality. The very use of the technology is arguably at odds with these key human rights principles, suggesting an underlying mistrust, on the part of the State, of its citizens.  

Legal Concerns 

According to the Metropolitan Police’s post-trial advice and information, the system keeps faces matching the watch list for only 30 days, while all other images are deleted immediately. This seems to be in line with the Data Protection Act’s requirements; however, a 2018 government report stated that the “deletion of images of innocent people is unacceptable” and questioned the legality of the police’s ‘deletion on application’ (rather than automatic) process. This is clearly at odds with legal requirements, as no distinction is made between those who are convicted and those who are not with regard to custody images. Arguably, an automatic deletion system is needed but has not been implemented (a sketch of such a scheme follows the quotation below). Norman Lamb, Chair of the Science and Technology Committee, offers a salient argument: 

Large scale retention of the facial images of innocent people amounts to a significant infringement of people’s liberty without any national framework in place and without a public debate about the case for it. The Government must urgently set out the legal basis for its current on-request process of removing images of innocent people. It is unjustifiable to treat facial recognition data differently to DNA or fingerprint data. 
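
For illustration only, the difference between a ‘deletion on application’ regime and the automatic deletion the Committee argues for can be sketched in a few lines of code. The function names and data structures here are hypothetical, assumed purely to make the distinction concrete.

```python
from datetime import datetime, timedelta

RETENTION = timedelta(days=30)  # stated retention period for watch-list matches

def ingest(image_id: str, matched: bool, store: dict, now: datetime) -> None:
    """Automatic scheme: unmatched faces are never stored, and matched
    faces receive a hard expiry stamped at the moment of capture."""
    if matched:
        store[image_id] = now + RETENTION

def purge_expired(store: dict, now: datetime) -> None:
    """Runs on a schedule; no application by the individual is required."""
    for image_id in [i for i, expiry in store.items() if expiry <= now]:
        del store[image_id]

store: dict = {}
t0 = datetime(2019, 9, 1)
ingest("img-001", matched=True, store=store, now=t0)   # kept for 30 days
ingest("img-002", matched=False, store=store, now=t0)  # never stored
purge_expired(store, now=t0 + timedelta(days=31))
print(store)  # {} – the matched image has expired without any request
```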

Further concerns were highlighted by an independent review of the Metropolitan Police’s trial of facial recognition, which found the governance of the technology to be clearly inadequate owing to the lack of clear, publicly available online guidelines. This speaks to issues of consent and of the ‘in accordance with the law’ requirement under human rights law, and draws further attention to the clear need for an explicit legal basis, in place of the current implicit one, so that challenges can be brought and the public can know the law clearly.  

Returning to the key Article 8 (ECHR) issue raised earlier: while the Metropolitan Police’s conditions could be said to be in line with it, facial recognition technology nonetheless involves the biometric processing of images taken in public for the purpose of identifying individuals. Both this biometric processing and the retention of video footage could constitute interferences with the right to private life under S. and Marper v United Kingdom. Biometric processing is not the same as simple camera surveillance, which is not in itself an interference (P.G. and J.H. v United Kingdom): rather than a public scene merely being passively monitored, it is recorded and subjected to processing. It is this latter element that raises concerns of interference with rights.  

Further to the above legal concerns, there are clear issues of vulnerable groups being disproportionately affected. For instance, studies have noted that the technology is less accurate for women and for BAME subjects. This is a criticism both of the technology itself and of its deployment: the police would need to compensate for the inherent machine bias, and a failure to do so could render decisions unlawful.  
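
To make the accuracy concern concrete, the sketch below shows one way differential error rates might be measured across demographic groups; the records and group labels are entirely invented for illustration and do not reflect any real study.

```python
from collections import defaultdict

# Invented audit data: (group, system_flagged_match, actually_on_watchlist)
records = [
    ("group_a", True, False), ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, False), ("group_b", False, False),
]

flags = defaultdict(int)
innocents = defaultdict(int)
for group, flagged, on_list in records:
    if not on_list:            # only innocent passers-by can be falsely matched
        innocents[group] += 1
        flags[group] += flagged

for group in sorted(innocents):
    print(f"{group}: false-match rate {flags[group] / innocents[group]:.0%}")
# group_a: false-match rate 50%
# group_b: false-match rate 67%
```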

Key Case 

In September 2019, the High Court in Cardiff found that South Wales Police’s use of facial recognition was consistent with the Human Rights Act. The challenge was brought by Ed Bridges, a former Liberal Democrat councillor, but was dismissed. Lord Justice Haddon-Cave said: 

“We are satisfied both that the current legal regime is adequate to ensure appropriate and non-arbitrary use of AFR Locate, and that South Wales police’s use to date of AFR Locate has been consistent with the requirements of the Human Rights Act and the data protection legislation.” 

While this may provide some clarification as to the current regime, it is far from settling whether further legislation is required, and debate on this is sorely needed. There is also a clear lacuna in the discussion: little debate is taking place on the use of facial recognition by private bodies. While much is left to be decided in this arena, the potential issues have been highlighted, and it is now for the government to debate further and to heed reports from bodies such as the Science and Technology Committee and The Human Rights, Big Data and Technology Project. As ever, we will endeavour to provide updates on legal developments and debates around facial recognition technology in the UK as they unfold.