Image: A black and white CCTV camera. Credit: cottonbro, Pexels

In this new report, researchers develop “minimum ethical and legal standards” for the governance of facial recognition technology and test them against three British police deployments – all three of which fail to meet them.

Police use of facial recognition technology can pose serious threats to the fundamental rights to privacy, equality, and freedom of expression and assembly, especially for marginalised communities.

We propose this sociotechnical audit as a tool to help outside stakeholders evaluate the ethics and legality of police use of facial recognition.

The adoption of facial recognition by police has been the subject of significant debate. Police forces often advocate for the use of this technology to help prevent crime and threats to public security. However, there have been calls for greater accountability and legislation on police use of the technology. Given that police forces continue to deploy facial recognition, we need to assess how police are using the technology today.

Developed by Evani Radiya-Dixit for England and Wales, the audit covers all types of facial recognition used for identification, including live, retrospective, and mobile phone facial recognition. We developed it through a review of the existing literature and feedback from academia, government, civil society, and police organisations on the ethics and legality of adopting facial recognition technology.

Based on this research, we designed the audit as a tool to help:

- Reveal the risks and harms of police use of facial recognition
- Evaluate compliance with the law and national guidance
- Inform policy, advocacy, and ethics scrutiny on police use of facial recognition

Read the full report via the links below.

Additional resources:

- Report press release
- Audit scorecard for scrutiny of police use of facial recognition technology