As governments around the world weigh how to regulate AI, the European Union is considering one-of-a-kind legislation that would place strict limits on the technology. On Wednesday, the European Commission, the bloc's executive branch, detailed a regulatory approach built around a four-tier system that groups AI software into distinct risk categories and applies a corresponding level of regulation to each.
At the top are systems that pose an “unacceptable” risk to human rights and safety. The EU would ban these types of algorithms outright under the legislation the Commission has proposed. One example of software that would fall into this category is any AI that allows governments or businesses to implement social scoring systems.
Below that is a category for so-called high-risk AI. This section is the most comprehensive in terms of both the variety of software it covers and the restrictions it imposes. The Commission says these systems will be subject to strict rules governing everything from the datasets used to train them, to what constitutes an appropriate level of human oversight, to how they relay information to the end user. The category includes AI used in law enforcement and all forms of remote biometric identification. Police would not be allowed to use the latter in public spaces, although the EU carves out some exceptions for reasons such as national security.
Then there is a category for limited-risk AI, such as chatbots. The legislation would require these programs to disclose that users are talking to an AI, so they can make an informed decision about whether to continue using them. Finally, there is a section for programs that pose minimal risk to people. The Commission says the “vast” majority of AI systems will fall into this category, which includes things like spam filters. Here, the body does not plan to impose any regulations.
“AI is a means, not an end,” Internal Market Commissioner Thierry Breton said in a statement. “Today’s proposals aim to strengthen Europe’s position as a global hub of excellence in AI from laboratory to market, to ensure that AI in Europe respects our values and rules, and to harness the potential of AI for industrial purposes.”
The legislation, which the EU will likely take years to debate and implement, could see companies fined up to 6 percent of their global sales for breaking the rules. With the GDPR, the EU already has some of the strictest data privacy rules in the world, and it envisions similarly far-reaching measures on content moderation and antitrust.