Ursula von der Leyen, who will soon take up her new role as President of the European Commission, has pledged action in a range of areas within her first one hundred days in office. This includes introducing new legislation on artificial intelligence (AI).
A number of EU countries, including Germany, have already adopted national strategies in this area and are now pushing for greater cross-border cooperation. Chancellor Angela Merkel is a vocal proponent of a legally binding body of European rules to ensure that AI ‘serves humanity’ – something which is urgently needed.
AI is already embedded in tools we use every day, from the navigation systems in our cars that show us how to avoid traffic jams, to the streaming services that suggest what movie to watch next. All of us make daily choices based on offers pre-curated by AI. The fact that these services make our lives so much easier means that we rarely question them. But perhaps we should.
As beneficial as AI can be at both an individual and a societal level – from helping with early cancer detection and tackling climate change, to assisting law enforcement with the prosecution of serious crimes such as child sexual abuse – it also raises serious questions about the protection of fundamental rights. This is particularly true when it comes to facial recognition technology.
Around the world, activists have been warning about the misuse of facial recognition technology for mass surveillance and censorship. In the UK, the increased use of facial recognition in shopping centres, museums and conference centres has been deemed an ‘epidemic’ by privacy campaigners. In Germany, a facial recognition pilot deployed in Berlin’s Südkreuz railway station has been fiercely criticised by lawyers and data protection groups.
The debate around facial recognition demonstrates the need for regulators to take the issue firmly in hand. This is not just about what the technology can do, but what it should do, and how it should do it. High-level principles can provide a roadmap for an ethical approach, but they aren’t enough. When it comes to facial recognition, we need binding regulation – and we need it now.
Some people will ask ‘What’s the rush? After all, facial recognition is only in its infancy’. That is true, but it is also why we shouldn’t wait to set standards for responsible use. The stakes are too high to allow a race to the bottom without any regulatory floor in place.
The areas we need to pay the most attention to are privacy, bias, and the protection of democratic freedoms.
The EU has already established extensive privacy and data protection standards for its citizens. These standards, which also cover the use of biometric data, form the foundation for any future regulation of facial recognition technology.
But this may not be quite enough to ensure that the technology is trustworthy. Researchers have documented higher inaccuracy rates for racial minorities and women, increasing risks of misidentification and bias. Facial recognition technology is only as good as the data it learns from. If the training data is inaccurate or unrepresentative, the output will be skewed.
While this is a problem that technology can address, there is currently no way to check how facial recognition services are performing. That’s why we need new transparency laws requiring technology companies that develop such services to disclose their capabilities and limitations: what they can do and, perhaps even more importantly, what they cannot do.
These technologies should also undergo thorough, independent and external testing to check for accuracy and human bias, so that those who want to procure and deploy facial recognition services can rely on their trustworthiness. If such technologies advance to the point where they can reliably be used in high-stakes scenarios – policing, criminal sentencing, parole hearings, or decisions about jobs, education or mortgage applications – then data documentation must be in place as a safety net, and such decisions must be subject to human review.
Finally, the rights and freedoms that form the basis of our democratic societies must remain protected. Facial recognition has the potential to be misused to follow anyone, anywhere, without any oversight. The seriousness of this threat cannot be overstated. It requires our politicians and societies to think carefully about how, where and whether this technology should be used, weighing the need for public safety against the protection of fundamental freedoms.
We cannot afford a ‘wait and see’ approach on facial recognition. As always, prevention is better than cure. By the time we see the first dystopian applications of this technology, the ship will have sailed. We need to anticipate and regulate for these risks today to prevent dangerous societal repercussions from emerging tomorrow. AI technologies will transform our society. The imperative we face is to ensure that they serve humanity and reflect our fundamental social values.
This blog was translated from an op-ed that first appeared on FOCUS Online.