The EU is experimenting with an AI-powered lie detector in a bid to ease its border control and policing demands.
For some, gaining EU citizenship is a dream, as a passport allows free movement between all member states. However, that same appeal makes those ineligible for citizenship prime targets for criminals.
Earlier this week, Bulgarian officials were arrested for selling fake EU passports ‘to 30 people a week’ at £4,445 each.
According to The Times:
“Over the past ten years, 115,000 people were given citizenship based on Bulgarian origin certificates, with a huge spike in applications after Bulgaria joined the EU in 2007 — which allowed them to live and work in Britain and other European countries.”
This is just one example of the problems faced by EU border controls.
Once documents have been issued, it’s difficult to determine whether they were obtained legitimately. A lie detector may be able to help in these situations.
The EU and Hungary’s police will run a six-month pilot project called iBorderCtrl that will help screen travellers in Hungary, Greece, and Latvia. Visitors will be required to upload photos of their passport, visa, and proof of funds in advance.
Using a webcam, visitors will then answer basic questions from an AI border agent. Such questions include “What’s in your suitcase?” and “If you open the suitcase and show me what is inside, will it confirm that your answers were true?”
The system will analyse facial micro-expressions that can reveal when someone is lying.
Once at the physical border, human officers will perform a more stringent check if there are signs of lying or a mismatch in the provided information. Those who pass the pre-check will be subject only to the usual basic checks.
George Boultadakis of European Dynamics in Luxembourg told the European Commission:
“We’re employing existing and proven technologies—as well as novel ones—to empower border agents to increase the accuracy and efficiency of border checks.
iBorderCtrl’s system will collect data that will move beyond biometrics and on to biomarkers of deceit.”
Early tests of the system have achieved 76 percent accuracy, well below where it needs to be. iBorderCtrl hopes to increase that figure to 85 percent.
The real concern here is the current well-documented bias of AI-powered facial recognition systems.
A 2010 study by researchers at NIST and the University of Texas at Dallas found that algorithms designed and tested in East Asia are better at recognising East Asians, while those designed in Western countries are more accurate at recognising Caucasians.
Earlier this year, the ACLU found that Amazon’s facial recognition software erroneously matched people with darker skin tones to criminal mugshots more often.
Whether the developers of iBorderCtrl have managed to remove such automated discrimination remains to be seen.