Exclusive: Changing the narrative around facial recognition
Rob Watts, CEO of Corsight AI, argues the case for facial recognition technology to be viewed as a force for good in society.
The concept of monitoring buildings, assets and even people is not new. CCTV is a well-established method, with many of us au fait with cameras on the sides of buildings or reception areas plastered with screens showing activity across the building – inside and out. Here we can often look up and instantly see our face displayed on one of the monitors, as people watch us badge in or out.
So why is it, then, despite this familiarity and acceptance that monitoring people’s movements is legitimate, that facial recognition – an integration into video surveillance – is so divisive?
Tony Porter, the outgoing Surveillance Camera Commissioner, is in the process of creating guidelines for its use and, I have to say, they can’t come quickly enough. If this technology is going to be allowed to have the impact it could, we need to change the narrative surrounding it. In the US, for example, we hear about jurisdictions banning government employees from using the technology through fears (not proof) that it could lead to an overly surveillant state. In the UK, people can read headlines following the Court of Appeal case in which South Wales Police’s use of live facial recognition technology was found to have broken equality law and breached privacy rights. Porter does point to ‘sunny uplands’ for the use of such technology following the judgment; with a closer focus on why people are placed on a watchlist and where the equipment can be deployed, the Court firmly indicated that use of this technology can be lawful. That is why the drafting of police guidance following the Court’s judgment can lay a route-map for its successful deployment.
This will help address the fact that there are few positive stories demonstrating successful use cases of facial recognition; without them, there is no fair narrative from which people can make up their own minds. A skewed argument tends to lead to misunderstanding, which is why we so often hear that the technology invades people’s civil liberties and puts us on the precipice of an Orwellian society or Stalinist Russia, with our every movement recorded in a central government database. This misunderstanding needs to be confronted with a comprehensive education programme, so that everyone is aware of the technology’s use, application and parameters. Wouldn’t it be nice if facial recognition could be seen as the hero and not the bad guy? But how?
In my view, this can be achieved in three steps.
We need to talk about watchlists
Part of the fear surrounding facial recognition is the misunderstanding of who is being watched and why. Part of Tony Porter’s current engagement with the Home Office is reviewing watchlists and how they are compiled and audited. I have to say this is a good move. For us as technology and security professionals and providers, it’s only right that we are up front about the technology, including what it does and the parameters in which it works. Whilst private and public sector uses will vary, it’s time we educate the public on why watchlists exist.
This education programme has two prongs: first, who is on watchlists – i.e. only people whose offences warrant monitoring; and second, what happens to their data. It’s important to stress that the best facial recognition systems have privacy baked in, so if a camera is monitoring a crowded street, the operator will see only blurred-out faces until a known perpetrator is flagged, alongside an indication of how confident the technology is in the match. Based on this information – the match level and the displayed reason the person is on the watchlist – the operator can then decide whether or not to unmask them.
The technology doesn’t follow “normal” people around and it doesn’t need to store their data – for example, our software can delete unmatched people and their data within 0.6 seconds.
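The workflow described above – every face masked by default, a human deciding whether to unmask only after a confident watchlist match – can be sketched in a few lines. This is a hypothetical illustration only: the class, function and threshold below are invented for the sketch and do not represent any vendor’s actual API.

```python
from dataclasses import dataclass

# Assumed confidence level above which a watchlist match is flagged
# to the operator. Real systems would tune this per deployment.
UNMASK_THRESHOLD = 0.90

@dataclass
class Detection:
    face_id: str           # anonymous tracker id, not an identity
    match_score: float     # similarity against the watchlist, 0..1
    watchlist_reason: str  # why the matched entry is listed ("" if none)

def review(detection: Detection, operator_approves: bool) -> str:
    """Return what the operator's screen shows for one detected face."""
    if detection.match_score < UNMASK_THRESHOLD:
        # No qualifying match: the face stays blurred and its data is
        # discarded immediately (the article cites ~0.6 s for deletion).
        return "blurred"
    if not operator_approves:
        # Even a flagged match stays masked until a human decides.
        return f"flagged ({detection.match_score:.0%}): {detection.watchlist_reason}"
    return "unmasked for review"

print(review(Detection("f1", 0.42, ""), operator_approves=False))  # blurred
print(review(Detection("f2", 0.95, "outstanding warrant"), operator_approves=False))
```

The key design point is that a high match score alone never reveals an identity; it only surfaces a flag, and the unmasking step remains a deliberate human action.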
Another issue we as an industry need to counteract is how the technology is applied – why we’re monitoring people in the first place. We have covered criminals, but how about health and safety post COVID-19? The technology could be used by retailers to identify those who are not wearing a mask properly and are therefore endangering others, helping to drive compliance. Alternatively, it could be used to control ingress and egress. We have seen how many retailers are currently forced to have members of staff manning doorways to control the flow of people. Facial recognition, integrated with automatic doors, can do this instead: if someone is wearing a mask, they can go in, with staff freed up to add value elsewhere. It boosts productivity and helps many retailers at a difficult time.
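The mask-gated entry control described above reduces to a simple decision rule once a camera-side classifier has done its work. The sketch below assumes that classifier’s output arrives as a boolean; the function name and the occupancy rule are invented for illustration, not taken from any real product.

```python
# Hypothetical gate logic for an automatic door fed by a mask-detection
# camera. `mask_detected` stands in for the classifier's output, which
# is out of scope here; occupancy limits mirror the staff-at-the-door
# role the article says this could replace.

def door_should_open(mask_detected: bool, occupancy: int, capacity: int) -> bool:
    """Open only for masked visitors while the store is under capacity."""
    return mask_detected and occupancy < capacity

# A masked shopper may enter while there is room...
assert door_should_open(mask_detected=True, occupancy=30, capacity=50)
# ...but an unmasked visitor, or a full store, keeps the door closed.
assert not door_should_open(mask_detected=False, occupancy=30, capacity=50)
assert not door_should_open(mask_detected=True, occupancy=50, capacity=50)
```

Keeping the rule this explicit is also what makes the deployment auditable: the conditions under which the door opens are inspectable, rather than buried in the recognition model itself.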
Facial recognition can be a force for good
Like any technology, part of the education programme is about showing how it can make life better. Just as Alexa makes it easier to track orders, facial recognition can be a force for good. For example, the NHS could leverage it to identify people with Alzheimer’s. If they are alone, seemingly lost or in a confused state, the technology could alert local shopkeepers, medics or police to who they are, their condition and where they live. Yes, this is a lot of data, but it could also lead to the best course of action for them. Normally, they would likely be taken to hospital and held there – alone – until they are identified. This way, provided they have actively opted in to dedicated, specialist watchlists, they could be taken home or their loved ones contacted more quickly, resolving the issue faster and in a humane way.
This technology is a game-changer and, through its speed and accuracy, can be a force for good. Now we just need a sense of digital responsibility. If people are willing to give their personal data to their phone provider to unlock their device, they should understand that it can be used for good in other areas of society too. To gain trust, we do need to talk about application, and regulation would be no bad thing, to stop potential abuses and help the public trust what we are trying to achieve. If COVID-19 has taught us anything, it’s that technology is a help, not a hindrance, so let’s step forth and unleash the power for good that some of these integrations can harness, to make our society safer.
This article was published in the December 2020 edition of International Security Journal.