The European Union proposed rules that would restrict or prohibit certain uses of artificial intelligence within its borders, including by tech giants based in the United States and China.
The rules are the most significant international effort to regulate AI to date, covering facial recognition, autonomous driving, and the algorithms behind online advertising, automated hiring and credit scoring. The proposed rules could help shape global standards and regulations around a promising but controversial technology.
“There is a very important message globally that some applications of AI are not allowed in a society based on democracy, the rule of law and fundamental rights,” says
Daniel Leufer, a Europe policy analyst at Access Now, a European non-profit organization focused on digital rights. Leufer says the proposed rules are vague, but represent an important step toward curbing potentially harmful uses of the technology.
The debate will likely be followed closely abroad. The rules would apply to any company selling products or services in the EU.
Other advocates say there are too many loopholes in the EU’s proposals to protect citizens from many misuses of AI. “The fact that there is some sort of prohibition is positive,” says Ella Jakubowska, a policy and campaigns officer at European Digital Rights (EDRi), based in Brussels. But she says certain provisions would allow companies and government authorities to continue using AI in questionable ways.
The proposed regulations would, for example, ban “high-risk” applications of AI, including the use of facial recognition by law enforcement agencies – but only when the technology is used to identify people in real time in public spaces. And even that provision carries potential exceptions, such as when police are investigating a crime that carries a minimum sentence of three years.
Jakubowska notes that the technology could therefore still be used retrospectively, and in schools, businesses or shopping malls, as well as in a range of police investigations. “There are a lot of things that don’t go far enough when it comes to fundamental digital rights,” she says. “We wanted them to take a bolder stance.”
Facial recognition, which has become far more effective thanks to recent advances in AI, is highly controversial. It is widely used in China and by many law enforcement agencies in the United States, through commercial tools such as Clearview AI; some American cities have banned police use of the technology in response to public outcry.
The proposed EU rules would also ban “general purpose AI-based social scoring performed by public authorities”, as well as AI systems that target “specific vulnerable groups” in ways that “materially distort their behavior” to cause harm. This could potentially restrict the use of AI for credit scoring, hiring or some forms of targeted advertising – for example, if an algorithm placed ads for betting sites in front of people with a gambling addiction.
The regulations would require companies using AI for high-risk applications to provide regulators with risk assessments demonstrating the safety of their systems. Those that break the rules could be fined up to 6 percent of global sales.
The proposed rules would also require companies to notify users when they attempt to use AI to detect people’s emotions, or to classify people according to biometric characteristics such as sex, age, race, sexual orientation or political orientation – applications that are themselves technically dubious.
Leufer, the digital rights analyst, says the rules could discourage certain areas of investment, shaping the direction of the AI industry in the EU and beyond. “There is a narrative that there is an AI race, and that’s absurd,” he says. “We must not compete with China for forms of artificial intelligence that enable mass surveillance.”
A draft of the regulation, dated January, leaked last week. The final version contains notable changes – for example, removing a section that would have banned high-risk AI systems that might cause people to “behave, form an opinion or take a decision to their detriment that they would not have taken otherwise”.