TPSO magazine asked Adrian Timberlake, chief technical director of 7TG and expert in security and surveillance solutions for the police, to tell us about biases in facial recognition technology, public trust in the police, and whether widespread facial recognition cameras may help to reduce discrimination. Food for thought about an ethical issue growing in importance every day…
Britain isn’t at war with new technologies, but with both real and perceived power imbalances between authorities and the public. Those in society who already feel discriminated against fear that facial recognition will hand even more power to authorities, but what if these systems could be used to ensure that the scales of power are balanced?
Adrian Timberlake, chief technical director of Seven Technologies Group and specialist in security and surveillance solutions for the military and police, takes a candid look at the pain points of facial recognition and the need for ethical guidelines.
Innovation and power
The apparent ‘bias’ of facial recognition in mismatching people with darker skin, and women, more often than people with lighter skin, and men, has been widely reported. Experts have yet to agree on an explanation, but one of the reasons could be that the technology has been exposed to lighter-skinned faces more often during development and trials and has therefore ‘learnt’ to recognise lighter-skinned faces better than darker-skinned ones.
In the UK, this explanation would make sense. A Gov.uk report published in August 2018 reveals that, according to the most recent Census, “the total population of England and Wales was 56.1 million, and 86.0% of the population was white”.1 This means that in facial recognition trials scanning the general public, if we were to generalise, roughly 86% of the people the camera ‘saw’ would have been white.
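It is worth remembering that claims of bias are, at root, measurable: a trial produces match decisions that can be broken down by demographic group and compared. As a purely illustrative sketch, and not a description of any deployed police system, the Python snippet below shows how per-group false match and false non-match rates might be tallied from trial results; the field names, group labels and data are invented for the example.

```python
# Illustrative only: tallying a face matcher's error rates per demographic group.
# The field names ('group', 'predicted_match', 'true_match') and the sample data
# are hypothetical, invented for this sketch.
from collections import defaultdict

def per_group_error_rates(trials):
    """trials: iterable of dicts with keys 'group', 'predicted_match', 'true_match'."""
    counts = defaultdict(lambda: {"fm": 0, "fnm": 0, "impostor": 0, "genuine": 0})
    for t in trials:
        c = counts[t["group"]]
        if t["true_match"]:
            c["genuine"] += 1
            if not t["predicted_match"]:
                c["fnm"] += 1          # false non-match: a genuine match was missed
        else:
            c["impostor"] += 1
            if t["predicted_match"]:
                c["fm"] += 1           # false match: the wrong person was flagged
    return {
        group: {
            "false_match_rate": c["fm"] / c["impostor"] if c["impostor"] else None,
            "false_non_match_rate": c["fnm"] / c["genuine"] if c["genuine"] else None,
        }
        for group, c in counts.items()
    }

# Tiny, made-up evaluation set: two trials per group.
trials = [
    {"group": "A", "predicted_match": True,  "true_match": False},
    {"group": "A", "predicted_match": True,  "true_match": True},
    {"group": "B", "predicted_match": False, "true_match": False},
    {"group": "B", "predicted_match": True,  "true_match": True},
]
print(per_group_error_rates(trials))
```

The point of such a tally is simply that any difference in error rates between groups can be measured, reported and tracked as the technology improves.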
The debate over whether facial recognition could worsen racism has highlighted a far bigger issue and ultimately points to a societal problem: Britain is less diverse than most people would like to believe, and ethnic minority groups and women are still under-represented in positions of authority, such as in Government and the police.
The House of Commons Library reported that, as of September 2019, “52 or just over 8% of Members of the House of Commons were from non-white ethnic backgrounds”; however, “if the ethnic make-up of the House of Commons reflected that of the UK population, there would be about 90 non-white members.”2 Additionally, in September 2019, the number of women Members of the House of Commons was reported to be “an all-time high”, but is still only 211 members, or 32%.3
There is a similar lack of representation in the UK police, with Gov.uk reporting that, at the end of March 2019, “93.1% of police officers were from the white ethnic group and 6.9% were from other ethnic groups.”4 The representation of women in policing, at 30.4%, is slightly lower than in the House of Commons: as of 31st March 2019, there were 37,428 female police officers across the 43 police forces in England and Wales.5
While developers must strive to improve the accuracy of facial recognition in correctly identifying ethnic minority groups and women, it seems that too much focus has been placed on the inaccuracies of a still-developing technology, masking a greater issue. The demographics that facial recognition is reportedly biased against also have the least representation in the two authorities most likely to use and control the technology. The conscious debate focuses on the failings of the technology but, subconsciously, this is a conversation about unfavourable power dynamics. The barrier to gaining public trust in the technology is perhaps not that it may make a mistake, but that certain demographics lack trust in how the authorities would handle such a mistake.
As developers, we know that the systems we build today could vastly improve security and public safety in the future, if used ethically. When we look at the statistics, it’s easier to understand why under-represented groups of people may be wary of facial recognition.
However, advanced and accurate facial recognition may help to pave the way towards ending discrimination against ethnic minorities and women. Victims of crime face being judged by a predominantly white, male system, in which a lack of evidence leaves room for biases and prejudices to skew judgement. This has been seen many times in court cases where violence against women has not been taken seriously enough.
Facial recognition can potentially identify perpetrators of crime and provide irrefutable proof that a crime occurred and exactly how it played out, making it harder for people in authority to bring individual biases into a case. With police use of automated facial recognition (AFR) body cameras, the scrutiny will go both ways: it may help to deter horrific attacks on police officers, and may also encourage officers to conduct themselves correctly.
Facial recognition trials have not benefited from being kept secret, as at King’s Cross. The lack of transparency around trials appears to have increased fear of facial recognition, but while headlines tend to focus on mistrust of the technology itself, what is actually suffering is trust in the authorities: the technology is only the catalyst for the discussion.
It’s not necessarily a problem for facial recognition to misidentify someone, provided the person is treated in a fair and respectful manner and suffers no consequences once the mismatch is discovered.
A potential problem with a mismatch lies in watch list data. The data behind facial recognition camera watch lists could include information from arrest records, criminal records or any other police involvement. If an innocent person is arrested because of a mismatch, an arrest record will then exist for them. That record could place them on police watch lists, leading to unjustified suspicion and perhaps further arrests, each of which raises their apparent risk level.
This is why we need ethical guidelines and regulations on how data is stored and used. The above scenario would look far less ugly with a regulation requiring that all data connected to a misidentified person be deleted as soon as their innocence is ascertained.
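To make the idea concrete, here is a minimal, hypothetical sketch of such a deletion rule in code; the class, record fields and the “afr_camera_07” source name are invented for illustration and do not describe any real police system or API.

```python
# Illustrative only: a toy store of match records that enforces a
# "delete everything on confirmed misidentification" rule.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class MatchRecord:
    person_id: str
    source: str          # e.g. a hypothetical camera identifier
    created_at: datetime

class WatchlistStore:
    def __init__(self):
        self._records = []

    def add(self, record):
        self._records.append(record)

    def confirm_misidentification(self, person_id):
        """Delete every record linked to a person once they are confirmed to
        have been misidentified; return how many records were removed."""
        before = len(self._records)
        self._records = [r for r in self._records if r.person_id != person_id]
        return before - len(self._records)

store = WatchlistStore()
store.add(MatchRecord("person-123", "afr_camera_07", datetime.now()))
print(store.confirm_misidentification("person-123"))  # prints 1: the record is gone
```

In practice, such a rule would only be meaningful if it reached every copy of the data, wherever it is held, which is why clear regulation matters more than the goodwill of individual system designers.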
Much of the discussion around facial recognition technology has been about privacy and the function of the technology itself. We should really be talking about the existing power imbalances in society, how we can ensure they are not perpetuated or worsened by advances in technology, and what ethical guidelines are needed to protect and use data.
References:
- https://www.wired.com/story/best-algorithms-struggle-recognize-black-faces-equally/
- https://researchbriefings.parliament.uk/ResearchBriefing/Summary/SN01156
- https://researchbriefings.parliament.uk/ResearchBriefing/Summary/SN01250
- https://www.ethnicity-facts-figures.service.gov.uk/workforce-and-business/workforce-diversity/police-workforce/latest
- https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/831726/police-workforce-mar19-hosb1119.pdf
Adrian Timberlake – CTO at Seven Technologies Group

Adrian spent nearly 20 years as a Scientific Officer with the UK Ministry of Defence, developing technologies and systems that secrecy prevents discussion of…
In 2007, Adrian joined Seven Technologies Group, a UK defence manufacturer specialising in the provision of Intelligence, Surveillance, Target Acquisition & Reconnaissance (ISTAR) systems, as a Senior Software Architect, and rose rapidly to his current role of Chief Technology Officer.