AI, privacy and biometrics are high on every business agenda, but together they form a complex governance area, as demonstrated in the UK by the case against South Wales Police – a case the UK Surveillance Camera Commissioner stayed in post specifically to see through. So, what comes next for AI regulation?
The European Commission’s highly anticipated proposal for a new Regulation[i] on artificial intelligence (AI) was released in draft on 21st April 2021. As anticipated, in the face of rapid technological development in AI, the Commission has decided that strong regulation is necessary to address the potential risks to the fundamental rights of data subjects. Though parts of the proposed Regulation seem contradictory – supporting innovation via ‘AI regulatory sandboxes’, for instance – and it will be subject to some criticism, it nevertheless makes provision for stricter rules relating to high-risk systems, including ‘real-time’ and ‘post’ remote biometric identification systems (RBIS)[ii]. The definition set out in the proposed Regulation would encompass automatic facial recognition (AFR) as well as ‘post’ identification, where images of faces are matched against existing ones in a database. In recent years, some law enforcement agencies (LEAs) have used RBIS discreetly in publicly accessible spaces to enhance their ability to detect and prevent crime, though the Home Office insists that it wants new crime-reducing technology to be used while ‘maintaining public trust’[iii]. There are critics who believe that the processing of biometric data in this way amounts to unlawful surveillance that infringes on the rights and freedoms of citizens, and who would support new regulations to disallow such processing.
As was the case with the introduction of the General Data Protection Regulation (GDPR) in 2018, the new Regulation has been broadly welcomed, though its success will depend on how well it is enforced. The tougher rules relating to the implementation of AI systems will only succeed in protecting European citizens if the relevant supervisory authorities in the member states are able to ensure widespread compliance. The GDPR does mandate that biometric data – where it is used for identification – should be subject to more rigorous controls and considerations, although some data controllers have been found lacking in their due diligence. The AFR implementation by South Wales Police[iv] over the last few years was recently found to be unlawful, and their data protection impact assessment was ‘deficient’. Examples like this raise questions over whether – regardless of the regulations in place – supervisory authorities are properly resourced and able to enforce the law. If LEAs and other organisations cannot ensure that what they are doing is lawful, the project in question should not go ahead; yet there is currently nothing to stop it from doing so, and any reprimand or remedial action is reactive. Supervisory authorities and those responsible for enforcing the new Regulation must be proactive in their oversight. That said, with the advent of such powerful AI systems and rapid technological progress, it is difficult for regulators to keep up, particularly in the case of the European Commission and Parliament, where regulations are instituted at a glacial pace due to the way the legislative procedure works. Where technical innovation is allowed to interfere directly with society, without oversight or regulation, a gulf opens up between those who control the technology and those who should democratically govern its usage – and into that gulf the data subjects fall.
If the controllers of the latest technology are ungovernable and beyond the reach of supervisory authorities, has society already crossed the Rubicon in relation to artificial intelligence? Ideally, lawmakers would be able to evaluate the latest technology and its potential for misuse before it was made available on the market, but that is unfortunately not how the developed world works. Since that is not the case, and because it is useful to be able to regulate based on real-world examples, the aforementioned ‘sandboxes’ appear to be a middle ground that will afford the Commission that opportunity. Where regulators act only after the fact (e.g., investigations into biometric identification providers[v]), the privacy of data subjects is not upheld in the first instance, when the processing takes place – though the further society moves down this road, the greater the body of legal precedent available to regulators. As biometric identification and the ethical issues surrounding AI systems move further into the public consciousness, there will surely be a sea change in public opinion too, likely from ambivalence towards a desire to protect the rights and freedoms of the citizen. Subsequently, lawmakers – in democratic states and unions – will be compelled to regulate strongly against abuses of technological power. Analogously, though the introduction of the GDPR did not bring about a total cultural shift in how data is stewarded, it has undoubtedly encouraged data controllers and processors alike to take more care.
The journey into the age of artificial intelligence and ‘smart’ technology has only just begun, and as such, global citizens are still learning about the prospects and pitfalls. In countries with authoritarian governments, pervasive technology like AFR is already used as a tool to control the lives of their citizens[vi], so it is incumbent upon developed, democratic countries to legislate wisely and uphold societal rights and freedoms. The proposal of the new Regulation by the European Commission is a good start, though it remains to be seen whether the supervisory authorities that will enforce it will be able to effectively moderate the people and organisations who may seek to provide and sell AI as a service. In its scope, the Regulation is ambitious and, like the GDPR, extraterritorial: systems and providers based in ‘third countries’ but operating in the EU (or with EU citizen data), and systems operating in ‘third countries’ whose output is processed in the EU, will be subject to the new rules. The burgeoning relationship between artificial intelligence and humanity will likely be a turbulent one, fraught with missteps and oversights, but with informed decision-making and good sense, AI can be a powerful tool for the good of society.
Liam is currently a Security Consultant at independent information security consultancy, Advent IM, specialising in data protection and cyber/information security.
He works with clients in both the public and private sectors, advising on and ensuring compliance with industry standards and regulation. In the public sector, Liam is involved in projects with central government departments, local government and police forces. In the private sector, he leads projects with clients that typically need guidance to prepare for public body tenders.
Liam has wide experience and a comprehensive understanding of the UK GDPR (and EU GDPR), DPA, ISO/IEC 27001, HMG IA & SPF, NCSC CAF, CloudSec Principles, NIST CSF, PCI DSS and NHS DSPT. He is interested in the fair and ethical use of emerging technologies.
[ii] Annex III – 1(a).