Assessing police use of facial recognition technology

Minderoo Centre for Technology and Democracy, University of Cambridge

17:00 BST

27 October 2022

Speakers

Gina Neff, Director, Minderoo Centre for Technology and Democracy (chair)
Evani Radiya-Dixit, Visiting Fellow, Minderoo Centre for Technology and Democracy (report author)
Fraser Sampson, Biometrics and Surveillance Camera Commissioner
Nour Haider, Legal Officer, Privacy International

GINA: Hello, welcome, thank you for joining us today. This is the latest event from the Minderoo Centre for Technology and Democracy. I'm Gina Neff, the Director of the Centre. Before I introduce tonight's guests, I would like to let you know that this event is being captioned live by a professional human captioner, Andrew from MyClearText. If you want captioning, you can turn it on by selecting the captions option in the Zoom toolbar at the bottom of the screen. Additionally, we will provide a link to Streamtext captioning, a fully adjustable live transcription of the event in your browser. If you want to see this, you can see the link now. A transcript will be available online after tonight's event. Some housekeeping before we begin: the event will be recorded on Zoom and streamed to an online audience, and by attending the event you're giving your consent to be included in the recording. Our guest tonight will speak to us for about 40 minutes, and then we will have a discussion with the audience and will take questions. Questions can be asked through the Q&A function on Zoom; you will see it at the bottom of the screen, and I will share those questions during our discussion. Please do not place questions in the chat. You can follow us on Twitter and other platforms at @MCTDCambridge. I'm delighted that you can join us tonight for the launch of a new socio-technical audit to assess police use of facial recognition technology, developed by Evani Radiya-Dixit, Visiting Fellow at the Minderoo Centre for Technology and Democracy. The report comes from a year-long engagement that Evani had with us, and her fellowship was funded by a Rotary Foundation Global Grant Scholarship. Over the last few years, police forces around the world, including here in England and in Wales, have deployed facial recognition technologies for their work. Our goal in this report, and in this audit tool, was to assess whether the deployments that we know about used the known practices for safe and ethical use of facial recognition technologies. The report builds on an existing body of research about these best practices and about the use of data-intensive technologies in public, and what Evani does in the report is examine the complexities and challenges that exist when police forces use these technologies. Here at the Minderoo Centre for Technology and Democracy at the University of Cambridge, we are studying how digital technologies are transforming society, and we are doing that to ensure democratic accountability over the increasing power of tech across the globe. My hope for the audit tool and the report is that they will be useful to a wide range of stakeholders, different kinds of people who are scrutinising how the police are using these technologies and evaluating the use of biometric technologies more generally. Tonight, I'm delighted to be joined by our speakers, and I will invite them all to come on camera. First, we have Evani Radiya-Dixit. Evani is the report author and Visiting Fellow at the Minderoo Centre for Technology and Democracy. She was a 2021/22 Rotary Scholar. She's interested in questions of technology, accountability, and power. She completed her bachelor's degree in computer science at Stanford, and is now pursuing her Master's degree in sociology at the same university.
We are also joined by Fraser Sampson, the Biometrics and Surveillance Camera Commissioner, who has 40 years of experience in the criminal justice sector, having served as a police officer for 19 years before becoming a solicitor specialising in policing law, conduct, and governance. We're also joined by Nour Haider, Legal Officer at Privacy International. Nour is a solicitor working across Privacy International's programmes on surveillance, social protection, and the protection of civic spaces. Now, I hand the virtual mic over to Evani to introduce the report, and the unique audit tool she has developed, with her presentation. I will take a step back and turn my camera off, and invite our co-panellists to do the same. Evani?

EVANI: Thank you so much for that introduction. As Gina mentioned, we are really excited to launch our report today on assessing police use of facial recognition. I will start this talk with some background. As many of you might know, police use of facial recognition has been heavily debated, and we looked at this issue in the context of the UK, given that the UK is known to be one of the leading countries in terms of tech policy. In the UK, police often advocate for facial recognition to prevent threats to public security. Facial recognition is seen as an important tool that can help address crime and identify missing and wanted people. At the same time, police use of this technology can pose threats to human rights, especially for marginalised communities. For example, facial recognition can impact the rights to privacy, equality, and freedom of expression. In 2017, the London Metropolitan Police tested facial recognition at a Black Caribbean festival in the UK, and many innocent people were stopped as a result of being misidentified. Now, with the increasing use of facial recognition, there have been calls for greater accountability and legislation. So, to better assess how police are using the technology today, we explore the question: how ethical and lawful is police use of facial recognition? Here, "legality" refers to compliance with the law in England and Wales. For example, has there been an adequate human rights assessment, as required by the Human Rights Act? Does the use of facial recognition comply with the principles established in the Data Protection Act? There are also important principles that go beyond the scope of law, which is why we also consider ethics. For example, do police engage with the community for feedback? Are police held accountable if people are harmed? For our research, we developed a tool that brings together ethical and legal standards for the use of facial recognition. We then tested this tool on three British police deployments and found that all three failed to meet the standards. Today, I will be talking about this research in more detail. I will start with our motivation and lead up to our results and recommendations. Our research focused on assessing how police are using facial recognition. What exactly is this technology? Facial recognition is a technology that identifies images of human faces. An image of a person is captured, and this image is then compared with a watchlist, a database of known images, to see if there is a match. In our report, we launch a socio-technical audit, which is a tool that can help outside stakeholders evaluate the ethics and legality of police use of facial recognition. This audit brings together social and technical aspects of facial recognition in policing, and has been developed for England and Wales. To build this audit, we consolidated standards from a variety of perspectives, including academia, government, civil society, and police organisations. For this we used current guidelines, including UK legislation, outcomes from court cases, and recommendations from the Information Commissioner's Office. We also got feedback from more than 30 stakeholders across these different perspectives.
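To make the matching step just described concrete, here is a minimal, illustrative Python sketch of a one-to-many comparison of a captured probe image against a watchlist. The toy embedding, the cosine-similarity measure, and the threshold value are assumptions for illustration only; they are not drawn from the report or from any police system discussed here.

```python
import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    # Toy stand-in for a face-embedding model. Real systems use a trained
    # neural network; here we simply flatten and normalise the pixels so the
    # example runs end to end.
    vec = image.astype(float).ravel()
    return vec / (np.linalg.norm(vec) + 1e-9)

def match_against_watchlist(probe_image, watchlist_images, threshold=0.9):
    # One-to-many comparison: the captured probe image is compared with every
    # image on the watchlist, and matches above the threshold are returned.
    probe = embed(probe_image)
    alerts = []
    for idx, ref in enumerate(watchlist_images):
        similarity = float(np.dot(probe, embed(ref)))  # cosine similarity
        if similarity >= threshold:
            alerts.append((idx, similarity))
    return sorted(alerts, key=lambda a: a[1], reverse=True)

# Usage with random stand-in images: entry 2 is a near-duplicate of the probe.
rng = np.random.default_rng(0)
watchlist = [rng.random((32, 32)) for _ in range(5)]
probe = watchlist[2] + 0.01 * rng.random((32, 32))
print(match_against_watchlist(probe, watchlist))  # e.g. [(2, 0.99...)]
```

In real deployments the embedding comes from a trained face-recognition model, and any alert above the threshold is passed to a human operator for review, a step the audit scrutinises in its section on police decision-making.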
In terms of who this audit is for, we designed the audit to be administered by outside stakeholders independent of police, and this can help provide impartial scrutiny, so regulatory and oversight bodies, policy-makers, and civil society groups in the UK might be well positioned to either administer the audit or use the findings. How can this audit help these groups? For one, the audit can reveal the risks and harms of police using facial recognition. For example, a risk might be an impact on the right to protest, or the lack of oversight over how the technology is being used. The audit can also help evaluate compliance with the law and national guidance in the UK, and the audit was informed by the UK Human Rights Act, the Equality Act, and the Data Protection Act. Finally, the audit can inform policy, advocacy, and scrutiny from an ethics perspective. So what gap does this audit fill? Current work such as impact assessments and guidance is often broad and not tailored to a specific technology. Many also focus only on the legal aspects, and they are often aimed at police forces. This audit complements such work, and it is unique in that it is tailored to the specific context of police use of facial recognition in England and Wales. It addresses both legal and ethical standards, and it is a tool for external scrutiny. This makes the audit helpful in improving accountability to the public. Now that I've shared our motivation behind this audit, I will next talk about our audit scorecard. This scorecard is a tool that other people can use, and it can be applied to other places as well. The audit scorecard is composed of a comprehensive set of questions that can be scored, and the questions are divided into four sections. The questions are not exhaustive, but they provide a set of important considerations. The first section of the audit scorecard evaluates how police show their compliance with the law. This section includes questions related to the Human Rights Act, the Data Protection Act, and the Equality Act, as the use of facial recognition engages with the rights protected by these acts. The second section evaluates how reliable facial recognition is in terms of its accuracy. Facial recognition has been shown to perform worse on people of colour and other groups, so we consider questions about algorithmic fairness. We also include questions about whether police use robust practices when using the technology. The third section evaluates how the technology shifts police decisions, including questions about police review of the facial recognition output, questions about preparation and police training, and questions about accountability when people are harmed. The fourth section evaluates expertise and oversight over facial recognition, and this includes oversight from an independent ethics committee, civil society, and, importantly, the broader community. After developing this audit, we applied it to three deployments in the UK. The first deployment we examined was South Wales Police's live trial of facial recognition from 2017 to 2019. These operations were ruled unlawful in the Bridges court case. Our second case study was the Metropolitan Police's trial of live facial recognition from 2016 to 2019. And, finally, our third case study was a recent trial in which South Wales Police deployed facial recognition using a mobile phone app. This trial ran from December of last year to March of this year.
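As a rough illustration of how the scorecard's structure lends itself to tallying, here is a hedged Python sketch of a four-section scorecard. The section names follow the talk, and the example questions are paraphrased from issues Evani raises, but the yes/no scoring and the per-section percentages are assumptions for illustration, not the report's actual scoring scheme.

```python
from dataclasses import dataclass, field

@dataclass
class Section:
    name: str
    questions: list[str]
    answers: dict = field(default_factory=dict)  # question -> True if the standard was met

    def score(self) -> float:
        met = sum(1 for q in self.questions if self.answers.get(q) is True)
        return met / len(self.questions) if self.questions else 0.0

scorecard = [
    Section("Compliance with the law", [
        "Was an adequate human rights assessment carried out?",
        "Was a data protection impact assessment published?",
    ]),
    Section("Reliability and accuracy", [
        "Was demographic bias in the algorithm evaluated and the results published?",
    ]),
    Section("Human decision-making", [
        "Is every match reviewed by a trained officer before any action is taken?",
    ]),
    Section("Expertise and oversight", [
        "Did an independent ethics committee with relevant expertise review the deployment?",
        "Were affected communities consulted before the trial?",
    ]),
]

# Usage: record answers for a hypothetical deployment and report per-section scores.
scorecard[0].answers = {scorecard[0].questions[0]: True,
                        scorecard[0].questions[1]: False}
for section in scorecard:
    print(f"{section.name}: {section.score():.0%} of standards met")
```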
After applying our audit to these three deployments, we found that all three failed to meet the ethical and legal standards for the governance of facial recognition. These deployments didn't incorporate many of the known practices for the safe and ethical use of the technology, and I will next talk about the key reasons behind this, including issues related to discrimination and oversight. First, we found these deployments might have infringed upon privacy rights, because they may not have been in accordance with the law, or necessary in a democratic society. For example, South Wales Police used live facial recognition at a peaceful protest, and this interferes with the right to freedom of assembly. The watchlist that South Wales Police used included innocent people who were arrested but not convicted. This broad scope raises concerns about whether their deployments met the requirements of human rights law. Secondly, we found that there was a lack of transparent evaluations of discrimination, both in terms of bias in the facial recognition algorithm and discrimination in how it was used. For example, the Metropolitan Police did not publish an evaluation of the racial or gender bias in their technology. Although they conducted an evaluation, they did not make the results public. They also didn't publish demographic data on the arrests resulting from the use of facial recognition, and this makes it hard to assess whether there was racial profiling. This highlights the importance of transparency, but transparency does not equate to accountability, and police forces are not necessarily held responsible for the harms. This connects to our third concern, which is that there is no clear framework to ensure accountability when facial recognition doesn't work or when it is misused. There is a lack of robust remedy measures for those harmed. We also note that police force documents were not fully accessible to those with disabilities, or provided in the languages of immigrant communities. This can make it hard for certain groups to understand how they are being impacted. Finally, we found that the three deployments lacked regular oversight from an independent ethics committee, as well as from the broader public, especially the communities most affected. For example, the ethics body overseeing South Wales Police's trials didn't have independent experts in human rights or data protection, even though such expertise has been cited as crucial for oversight. And South Wales Police didn't consult the public or civil society for feedback before their trials. Now that I've highlighted some of the key issues that we found, I will next share the recommendations that we make in our report. First, the audit can be used to understand the extent to which facial recognition deployments meet ethical and legal standards, and we encourage others to engage with the broad range of questions that this audit brings together. The audit can also be adapted to assess the risks that technologies pose in different environments, and we encourage others to examine how technologies are used in specific geographical and historical contexts, including outside the UK. Finally, our report shows that some facial recognition deployments fail to meet ethical and legal standards. There is also a growing consensus that the current legal framework for governing this technology is not sufficient. Given this gap in legislation, and the failure to meet standards, we support calls for a ban on police use of facial recognition in public spaces.
Now, as I near the end of this talk, I would like to share a few final thoughts. First, our report focuses on the adoption of facial recognition by police, but this work can also be applied to its use by private companies. The line between the public and private sectors is becoming increasingly blurred, as police and private companies often collaborate, and many of the concerns that we raise in our report extend to the use of surveillance by private companies. I also want to highlight one of the big takeaways from this work, which is that the harms go well beyond issues of bias in the algorithm. In our report, we highlight important issues including accountability and oversight, and we need to consider not only harms related to the technology, but also the broader structures in our society. We hope this work is a step in that direction. Finally, I would like to express some thanks to everyone who helped make this work possible, especially those who provided feedback, my supervisor, Gina, and the Rotary Foundation. Please check out our report on the MCTD website, and I will pass it back to Gina for the panel discussion.

GINA: Thank you, Evani. I will bring Fraser and Nour back to join us for our discussion. First, to bring in Fraser Sampson, the Biometrics and Surveillance Camera Commissioner: there are many ways to think about how to situate this report and this audit that we've done. What do you want people to take away from the concern and focus on police use of facial recognition technologies?

FRASER: Thank you very much for inviting me to this discussion. Evani, thank you for the report, which I think is very welcome. It is very timely, and it is necessary, because this is a very fast-moving area, but also an area where there can be a fairly narrow focus, possibly just on the legal questions or the mechanics of the algorithms and the technical things themselves, so I really welcome the approach you've taken: this socio-technical focus and the wider issues around ethics. There are probably three points I would make on first reading this. The first one is that, from my perspective, public-space surveillance is no longer about where police put a camera; it is about what they will do with the petabytes of images on everybody's camera, and regulating that use will require something very different from applying the old rules that we have to a few free-standing CCTV cameras. When it needed a human to process it all, there was simply too much data to make it useful to the police or anybody else, really. Now video analytics mean they can process it, edit it, and synthesise it in a way that will transform the surveillance capability, making a lot of good things possible that weren't before, but also raising a number of very significant questions about how we move regulation, monitoring, and review on. The second one is that early experimentation by police forces has certainly set back issues of trust and confidence, and not just for the public, also for the police themselves, so when they are considering investment and innovation in the legitimate and lawful use of available surveillance technology, the police will still be met nowadays with statistical evidence that is four or five years old in relation to their technology, and of course in technological terms that is the Pleistocene era, so much has moved on. As you say in the report, just because the algorithms have moved on significantly doesn't mean the usage will. Designing out bias from an algorithm won't cure continued biased use where it is to be found, and one of the key questions for me is how do we know where it will be found? What are the signs of that? Who is watching? And what will they do about it? The third issue for me, which comes from the ethical oversight point, is that it is not only important to have an ethical element within a structure (and a lot of places will have them, as individuals or a committee), but, for me, it will be really important that they have a so-what capability: an ability to intervene, or, if necessary, to halt something until there has been further evidence put forward around the issue that they've raised. All of those things came out very strongly in the report when I looked through it, and I think they're much needed at the moment, and I would be really interested to see how they get taken up and developed elsewhere.

GINA: Thank you for that. Nour, we spoke about the risks that remain around the use of facial recognition technologies by police. What would you like people to take away from this report, and from the attention on this issue?

NOUR: Yes, thanks, Gina, and again, thank you to the Minderoo Centre and Evani for putting this together. I would encourage everyone who is here today to have a look at the report, because it presents quite an innovative way to think about who should be consulted, and how they should be consulted, before biometric surveillance is rolled out simply because it meets policing needs, which is often what happens when surveillance is pushed out. To answer the question which was posed to me, I will highlight three issues we think are continuing risks that come out of the report, or that build on the report. The first is quite specific, so I apologise to anyone in the audience who doesn't have a very strong grasp of how facial recognition works; we can maybe go into it in the Q&A. We have thought a lot at PI -- Privacy International, which is the organisation I work at -- about issues with watchlists, which are the lists that get fed into facial recognition technology. In addition to the issues raised in the report, when we think about auditing this kind of technology, we are interested in raising questions about the source of the data that is fed into the watchlists. Is it only people who have been convicted of crimes? Is it anyone who has been arrested or come into contact with the police? Is it anyone who has a government ID? Can the NHS database be accessed? Can the DWP's database be accessed? All of these are questions that we think are still to be addressed in this context, which goes to the bigger issue of what the rationale is for including people in watchlists, and what happens when this becomes indiscriminate surveillance. The second issue we've looked at a lot is that, in general, police forces are not necessarily trained to have an in-depth understanding of how this technology works, and quite often, as Evani mentioned, and as I'm sure Fraser will discuss in depth, there is an outsourcing of the technology to third-party contractors, which brings up the issue of public-private partnerships. This means that third parties and private companies have a huge amount of power when there is no expertise in-house in the police. One thing I wanted to raise is that, within the police in the UK at least, we should not forget there are issues of institutional racism that have been raised by the Assistant Commissioner of the Met himself, and credible and long-term accusations of misogyny and sexism. These issues matter when we are talking about the kinds of technologies that are being rolled out, especially imperfect technologies. The final risk we would raise is the exporting of this kind of surveillance from the UK to the rest of the world. When we are thinking about the safeguards that can be put in place in the UK, we can imagine an ideal scenario where deployments get perfect scores on the socio-technical audit scorecard, but what happens when this technology becomes commonplace and is exported all around the world? We've already seen in Buenos Aires, in Argentina, that the equivalent of a High Court has ruled that facial recognition technology is unconstitutional; it remains to be seen whether that ruling is challenged, and the technology is receiving a lot of pushback internationally as well. I will stop there, and we can continue with the discussion.

GINA: Thanks, both, for that. One of the things we talked about was the notion of what transparency and accountability look like, and they look very different when we are talking about large-scale data systems, automated systems, and surveillance systems in public agencies, versus pushing for transparency from private companies. The blurring of the line between public and private, though, is quite interesting, because, as you just pointed out, Nour, these are not tools that are being developed by police forces or even public agencies; there is a whole lot of the data pipeline and the supply chain of how these tools get built that is complicated. One thing I would like to note is how much work it took Evani this year to really understand what principles should apply, and then to apply them to these cases; it is not a straightforward task to think through all of these implications. So, to what extent is the goal to have trusted surveillance partnerships, or minimal involvement from companies? This is a question that came up in our pre-meet discussion. How should we on the outside of this be pushing? I think, Nour, you and Fraser might have pretty different expectations of that. Would you like to take a start on that?

NOUR: Sure. Given our perspective, and given the current climate of the relationship between data-driven technologies and surveillance, we would be highly sceptical of a context where we are thinking of building trusted surveillance partnerships with the private sector. We've seen time and time again that the primary interest, when the private sector has control of intellectual property related to surveillance technology, is keeping those secrets so they can protect their profits. That is in direct opposition to the need for transparency and accountability which we see in the public sector. Just to give a very brief example, when we make Freedom of Information requests to understand how certain algorithms are being used by the DWP, for example, which is the Department for Work and Pensions, they often claim they're exempt from disclosing the information for the purpose of preventing crime, but also because there are trade secrets within those algorithms. Now, when you allow a private company to hide not only behind its own trade secrets but also behind national security risks, that makes transparency a lot more difficult. We don't think the state should necessarily become a conduit for the growth of surveillance technology merely because that is what is being pushed. We think there needs to be a much stronger resistance, so our answer would be that involvement should be minimal, and we can go into details about that.

FRASER: Thank you. So I think ideally, you might want to, or we might want to, minimise the amount of commercial ownership, control, direction, and motivation in this area, for the reasons that Nour has just set out, but I think the reality is that almost all our surveillance and biometric capability is already in private ownership, and it is accessed through properly drafted and monitored legal agreements. Now, whether those legal agreements are up to the job in new areas such as this, only time will tell, but things such as the audit tool section of the report take us quite a bit further down that line. If you look at how we currently manage fingerprint databases, DNA, and the forensic science laboratories that are working day and night for the criminal justice system across the UK, those are by and large private organisations, private entities. But I think there are a number of very real risks. One is being able to say we are absolutely certain we're not simply off-shoring legitimate areas of accountability that would otherwise fall on the police. Another is not allowing a commercial entity simply to hide various things that are centrally important to accountability, as set out in this report, and I think the algorithm one and the AI black box discussions are really good examples of that. I have had conversations with people who say, look, our black box technology is so complicated, even we don't understand it and can't explain it. If you've got a statutory duty, as public bodies like the police have, to demonstrate that you are complying with your public sector equality duty, that won't do. So you need to think about that before you (a) bid for the contracts, or (b) sign them on the police side. If you are unable to demonstrate to your communities that there is no inherent bias in the technology that you're buying and then employing in their name, that won't do. The onus is on you to demonstrate it. I've even had some very public correspondence with surveillance technology companies where I've questioned their involvement in human rights atrocities in other countries, and I've asked: do you acknowledge that those have taken place, and, if so, would you explain the extent of your involvement in them? And they have sought to characterise those questions and answers as being somehow commercially confidential, which is plainly nonsense. I think we have to be very clear about an irreducible minimum of public expectation and transparency, and, if you can't get beyond that in the way you operate, then don't put your hand up to bid for large amounts of public money. But I think what we can do, and there are very good examples of this elsewhere in policing, is have trusted surveillance partnerships with trusted surveillance partners, because, if we can't, the reality is that we will be in a lot of trouble, not just as a sector, but as a society.

GINA: So, taking that -- thank you both for that. I've got two reminders: first, there is a question-and-answer box down below, and there are a couple of questions there already. If you have a question, please type it into that Q&A box. And the second thing is that this report includes an audit tool, and I would love to hear first from you, Nour, and then Fraser: from your worlds, how would you see the audit tool being used? What use could you see this tool being put to?

NOUR: Thank you, Gina. In terms of the audit tool being used, I think from our perspective, as a civil society organisation, it is a really helpful tool for organising thinking, beyond just questions of lawfulness, about the ethics of any kind of biometric surveillance deployment, which we touched on at the beginning and can expand on now. When we are trying to advocate in this space for increased transparency, and increased oversight and scrutiny of any kind of acquisition of new technologies which are supposed to enable enhanced policing, we constantly run up against the problem that new technologies are being developed much faster than we can think about these things. I think one of the really good uses of this tool is to give us a mechanism to think about biometrics specifically. So this relates not just to facial recognition technology but to any system where your biometrics can be captured, stored, and then used to identify you in a range of contexts. One of the examples where I would say we would be interested in using it in more depth is the protest space. We have done a lot of work on what it means to be subjected to high levels of indiscriminate surveillance in a protest context, and what it does not just to the right to freedom of expression and the right to protest, but, when people who are fighting the very system that is surveilling them feel that they can't go out and protest anonymously, what happens to civic space? What happens to public space? So, from our perspective, it would be interesting to apply it to these more specific case studies, bringing it down into the areas that we work on.

GINA: Fraser?

FRASER: Thank you, yes. I completely endorse that. I mean, one of the things that I found immediately attractive about this report was that there are a number of practical uses that I could see immediately. One of them is as Nour just described, whereby you can use it not just for this biometric technology and its use, but for others -- for example, internal checking and guidance, in the way that you might do internal audit for other risk-based upstream monitoring. Earlier this week, from the office, we have been at the Biometrics Institute Congress in London, and there was a clear need recognised to provide people externally with information about the technology, the way it is used, what the policing purposes are, how you can push back against that, where you can complain about it, and how it fits in with their wider statutory and common law duties. But I think it is just as important to arm people with questions as much as information, and what this audit tool does is provide you with a series of assurance questions. A lot of people that I speak to, when they're discussing their views about, and their approach to, the adoption of new technology in this type of setting, will say, "I think it's all right," which is kind of a felt response, which is fine up to a point but not very far. So what I encourage them to do is go through a series of challenge questions and then see if they've assured themselves by the end of it. If they have, fine; if they haven't, there are areas for feedback. It has been hard to reach for those challenge questions to get self-assurance, even internally, because they haven't really existed, and I think they now exist very clearly in this setting, so it has a very practical application that I think will certainly move a lot of the conversations on from a felt response of "I think that is all right". Or, absolute horror of horrors, the view that "if you've done nothing wrong, you've nothing to worry about". If anybody has any residual sympathies with that awful view, the worst possible surveillance advice you could give anyone, look on my website, where I set out some very compelling reasons why that is about the worst approach you can have to any form of biometrics, but that is a different point. This is a practical toolkit, then, that can be applied across a whole series of emerging biometrics.

NOUR: One thing I would add, which I remembered as Fraser was speaking, and which I think the socio-technical audit is really great for, is that it shows, given how the technology stands at this point, how many considerations need to be taken into account before anything even close to "lawful" or "ethical" can be used to describe this kind of technology, specifically because of how intrusive it is, how inaccurate it can be, and how it can be deployed. So one of the things it also helps us to do is to say: sure, if you think that you can meet all these different things in all these different ways, maybe we can have a conversation about when it can be used. But given that that is highly unlikely, and really difficult to do, we need to restrict the use of facial recognition technology to instances where it is not live, where it is retrospective, where it is one-to-one, and in a very restricted context -- so, for example, within a police station, rather than having these open scenarios where you can be building a watchlist from private companies and government databases, and so on and so forth, and deploying it without safeguards and in real time. Sorry, I just wanted to add that.
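For readers less familiar with the terminology, here is a minimal Python sketch of the distinction Nour draws between one-to-one and one-to-many use. The cosine similarity and threshold are illustrative assumptions; the point is only that verification answers a narrow question about a single claimed identity, while identification searches an entire watchlist.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between two face-embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def verify_one_to_one(probe: np.ndarray, claimed: np.ndarray, threshold: float = 0.9) -> bool:
    # Verification: "is this person the single individual they are claimed to be?"
    return cosine(probe, claimed) >= threshold

def identify_one_to_many(probe: np.ndarray, watchlist: list, threshold: float = 0.9) -> list:
    # Identification: "does this person match anyone on the watchlist?"
    # This broader, one-to-many mode is what live public-space deployments use.
    return [i for i, ref in enumerate(watchlist) if cosine(probe, ref) >= threshold]

# Usage with random stand-in embeddings:
rng = np.random.default_rng(1)
watchlist = [rng.standard_normal(128) for _ in range(4)]
probe = watchlist[1] + 0.05 * rng.standard_normal(128)  # close to entry 1
print(verify_one_to_one(probe, watchlist[1]))            # True: narrow check against one identity
print(identify_one_to_many(probe, watchlist))            # [1]: search of the whole list
```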

GINA: That's great. That leads on to the next question, which is that we really saw the goal as moving from general principles about large-scale data into practice, right? It is time to get real about a lot of these technologies: how do we put those principles into practice? That has been the work that Evani has done. Evani, I would love to invite you to reflect on how you think this tool can be adapted for use in other jurisdictions. What do you think it would take?

EVANI: Thanks for that question, and thank you, Fraser and Nour, for your perspectives on the audit and its connection to use by private companies. In terms of adapting the audit to other countries and jurisdictions, I think there is a lot of potential in how the audit can be used. As you mentioned, the audit is built on basic principles of accountability, transparency, and fairness, and those principles can definitely extend to other countries. A few considerations would need to be made if the tool were adapted: first, the law in that jurisdiction; second, how policing is structured; and third, the history and culture of policing in that region. With regard to law, this audit was developed using specific legislation in the UK, so the relevant legislation would need to be incorporated depending on the jurisdiction where the tool is being adopted. With regard to the structure of policing, in the UK, policing is fairly decentralised and police forces often have local ethics committees, so we used this feature of British policing in our audit to assess whether those local committees provide independent oversight. When applying this audit to other areas, the specific structure of policing would need to be considered. Third, as Nour mentioned, the history and culture of policing is really important. In the UK, there are known to be discriminatory policing practices; for example, Black people are more likely to be stopped and searched. In other regions, for example in India, other factors like religion or caste might be more relevant, so history and culture need to be incorporated. But overall, I do encourage others to use this audit, both in the UK and in other regions, to help bring more accountability to how the technology is used.

GINA: That is a great launching point for our first set of audience questions. I'm going to turn to a question from Mary-Ann Oswald: how should the police themselves use this audit tool? Could you see a link with the government's algorithmic transparency standard? And, in a quirk of Zoom, I scrolled down, because someone asked a great related question: are there any relevant factors, issues, or questions that police might think are missing from the tool? So: how might the police use it? How might it link to the UK government's algorithmic transparency standard? And what might police think is missing?

EVANI: Thanks for that question. In terms of how police themselves should use this tool, the audit is primarily aimed at outside stakeholders, and the reason behind this is that stakeholders independent of police can help provide more meaningful and impartial scrutiny. I think police, though, can definitely engage with the broad range of questions in the audit about ethics and legality, and Fraser talked about this as well, in terms of those being items that can be considered especially as police are deciding on or proposing the use of this technology, and going through those questions can be helpful in assessing whether the structures are in place to use this technology in a way that is accountable and has oversight. Then, to the second question about the government's algorithmic transparency standard: we definitely used that in developing this audit. I think that standard can apply to technologies more broadly, while this audit is tailored to police use of this technology in a specific context, so in that way they're complementary to each other. Finally, in terms of relevant factors that police might think are missing, those are definitely important to consider, in terms of policing priorities and risks to the public, and we hope that releasing this report today opens the dialogue for some of those conversations and for feedback from many stakeholders, including police organisations.

GINA: Great, thank you. We have a question from, and forgive me, Jake from CDT. How does the use of facial recognition with images taken from smartphones work in practice, Jake asks? Maybe Nour or Fraser, do you want to pop in? How does that work?

NOUR: I can try to start answering the question, but Fraser and Evani, please jump in with corrections to my description of the technology. Evani, because one of the case studies included South Wales Police's use of that, I think you might be able to correct us. When we think about facial recognition with images taken from smartphones, we think about the images that are taken, extracted from the phone, and then stored by police. I'm not sure about taking the photo from a distance prior to interacting with someone; I don't think they would need phones to do that, because they would mount surveillance cameras to capture those images. But that would be an instance where the data that you hold is being used to inform the watchlist that they are using. Evani, do you want to jump in?

EVANI: Yes, in the case where South Wales Police used a mobile phone app, an individual would be stopped first, and I think South Wales Police's procedures set certain requirements in terms of reasons for stopping that individual. Once the individual was stopped, an officer would take a photo of them and compare it to their watchlist, so it would entail stopping or questioning them first and then taking an image of them.

GINA: So we have another pair of related questions. Kevin asks: can you give an example of a use of facial recognition that would pass the audit? And an anonymous attendee asks: there are open source researchers like Bellingcat who use the same facial recognition technologies for justice and accountability purposes. Can we ever justify good use of facial recognition technologies? Maybe, Evani, can you suggest a use that might pass the audit?

EVANI: I think that is a really good question. We applied the audit to three case studies in England and Wales. I think there is potential for a use of this technology to pass the audit, but, as Nour highlighted, there is a really long list of questions that are critical to consider before such a technology is deployed. I'm not sure whether the way the technology is currently being used comes close to meeting the requirements or standards established by the audit, and moreover, this audit tool has limitations. There are risks that go beyond the specific questions that we consider in the audit -- for example, the export risk that Nour talked about: even if we are able to get a technology, say, to pass this audit in the UK, what happens when that technology is exported to other areas? So I think, while we can maybe work towards improving accountability and oversight and the standards in the audit, we also need to consider what lies beyond this audit, and what questions and considerations still might need to be addressed before deploying such a technology.

FRASER: Gina, I think it's a very good question. I think that there has to be a use of this, or a series of uses of this, whereby it would pass the audit; otherwise we are creating a list of ideals we know are unattainable, against a technology which is, if used properly, the first true game-changer, I think, in some areas of policing since DNA profiling. And what that means is that it is not only the police who will be very keen to use it appropriately, proportionately, and very much accountably, with an eye on learning all the lessons from it; there is a legitimate expectation from the public that they will do so. I think that is a general expectation, particularly when people know what they can do in their own lives -- if they were to go out looking for their own missing children, or their own vulnerable relatives, and so forth, and then think, well, why can't our police do this? And I think there will be areas where we very quickly move into a world where the police are spending at least some of their time explaining why they haven't used some of this technology. The third point related to that, which picks up on the Bellingcat point, is that increasingly the police will rely on the images they've been sent by the public. They've gone from a world where they needed images of the citizen to one where they also now heavily rely on images from the citizen, and that creates an entirely new surveillance relationship with the system, but it also ramps up some of the expectation, because, if people are submitting images, whether following a specific request or of their own volition, there will be an expectation that something will be done with them. Now, whether that is a legitimate expectation or not is still some way down the line, and I think that is another area that we are only just beginning to understand, in the context of how much the police rely on images that somebody else has, which goes back to my first point: public-space surveillance is about what they do with available images, and those images will be available in yottabytes by the time we finish the year.

GINA: I think we are bumping up right against time, but there is another great question, and I want to use a minute for it; I will paraphrase in the interests of time. Does our understanding of, and jurisprudence around, rights and ethics need to evolve? For example, established understandings of human rights such as the right to dignity, or the privilege against self-incrimination. What is our fast-moving technology doing to our notion of rights?

FRASER: Can I pick up on that really quickly? I think it's doing a great deal. One is that it is making it much more than data protection. Some of the reasons outlined by Evani, like the chilling effect on your right to protest, have very profoundly constitutional effects which are nothing to do with protecting the data. I also think, when you look at cases where things have happened involving people who have died, and taking images of them, or photographing yourself doing things to them, there are gross invasions of dignity and respect involved there, but, again, the data protection laws that we currently have only protect the living, so they don't extend to that. The more that those things are shared, the more we realise that we need to expand some of the legislative framework to take account of that, if we are really serious about dignity, respect, human rights, and ethics.

NOUR: One thing I would add to that is that we definitely address the human rights issues, but, from a criminal justice perspective as well, we have to understand what it means when we are subjected to indiscriminate surveillance, and we've compared it to, for example, being constantly stopped and searched, which is something that nobody, at this stage at least, is okay with. So when we allow this kind of technology to proliferate, we have to understand that it in effect amounts to that unlawful and continuous stop-and-search; whether it is with or without cause is another question, but that's an example of how our rights do need to move on.

GINA: Thank you, Nour; thank you, Fraser; and thank you, Evani, for your tremendous work on this report. We have a new set of tools for advancing conversations about the values we want and the technologies that we have, and about the values that we as a society should seek to protect, including these notions of human rights. I want to thank all of you for attending today. You can download a copy of the report on our website at www.mctd.ac.uk. Details of future seminars can also be found on our website; in particular, our next event will be the launch of a new report on 21 November, written by Dr Margie Cheesman, an affiliate of the Centre, on the problems of Web 3.0 technologies, especially when used with at-risk communities around the world. We hope you can join us for that launch. We would very much enjoy hearing from you, so please do send back the feedback questionnaire via Eventbrite. Please continue to follow our work, and the great work of Evani and other scholars who work with us, on Twitter and other platforms; the handle is @MCTDCambridge. Thank you all again for joining us today, thank you to our speakers, and please continue to be invested in, and to follow, this kind of important work. Thank you.

FRASER: Thank you very much.

EVANI: Thank you so much.