In two years, if all goes according to plan, EU residents will be protected by law against some of the most controversial uses of AI, such as street cameras that identify and track people, or systems used by government agencies to assess an individual’s behaviour.
This week, Brussels presented its plans to become the world’s first bloc with rules governing the use of artificial intelligence, in an attempt to place European values at the heart of a rapidly developing technology.
Over the past decade, AI has become a strategic priority for countries around the world, and the two world leaders – the United States and China – have taken very different approaches.
China’s state-led approach has seen it invest heavily in the technology and rapidly deploy applications that have helped the government strengthen surveillance and control of the population. In the United States, the development of AI has been left to the private sector, which has focused on commercial applications.
“The United States and China have been the innovators and leaders in investing in AI,” said Anu Bradford, professor of European law at Columbia University.
“But this regulation aims to put the EU back in the game. It tries to balance the idea that the EU needs to become more of a technological superpower and put itself in the game with China and the United States, without compromising its European values or fundamental rights.”
EU officials hope the rest of the world will follow suit and say Japan and Canada are already scrutinizing the proposals.
While the EU wants to limit how governments can use AI, it also wants to encourage start-ups to experiment and innovate.
Officials said they hoped the clarity of the new framework would give these start-ups confidence. “We will be the first continent where we will give guidelines. So now if you want to use AI applications, go to Europe. You will know what to do and how to do it,” said Thierry Breton, the French commissioner in charge of the bloc’s digital policy.
In an attempt to be pro-innovation, the proposals recognise that regulation often falls most heavily on small businesses, and so incorporate support measures. These include “sandboxes” where start-ups can use data to test new programs to improve the justice system, healthcare and the environment without fear of heavy fines if they make mistakes.
Along with the regulation, the commission published a detailed road map to increase investment in the sector and pool public data across the bloc to help train machine learning algorithms.
The proposals are likely to be hotly debated by both the European Parliament and member states – the two groups that will have to approve the bill for it to become law. Legislation is expected by 2023 at the earliest, according to people closely following the process.
But critics say that in trying to support commercial AI, the bill does not go far enough to ban discriminatory applications of AI such as predictive policing, migration control at borders and biometric categorisation of race, sex and sexuality. These are currently marked as “high risk” applications, which means anyone who deploys them will need to inform the people on whom they are used and provide transparency on how the algorithms made their decisions – but their widespread use will still be permitted, particularly by private companies.
Other high-risk, but not banned, applications include the use of AI in recruiting and managing workers, as currently practised by companies such as HireVue and Uber; AI that assesses and monitors students; and the use of AI in granting and revoking public benefits and support services.
Access Now, a Brussels-based digital rights group, also pointed out that the outright bans on live facial recognition and credit scoring affect only public authorities, leaving untouched companies such as the facial recognition company Clearview AI or AI credit-scoring start-ups such as Lenddo and ZestFinance, whose products are available worldwide.
Others pointed out the glaring absence of citizens’ rights in the legislation. “The entire proposal governs the relationship between providers (those who develop [AI technologies]) and users (those who deploy). Where do the people go?” Sarah Chander and Ella Jakubowski of European Digital Rights, an advocacy group, wrote on Twitter. “There appear to be very few mechanisms by which those directly affected or harmed by AI systems can seek redress. This is a huge gap for civil society, discriminated groups, consumers and workers.”
On the other hand, lobby groups representing Big Tech’s interests have also criticized the proposals, saying they will stifle innovation.
The Center for Data Innovation, a think tank whose parent organisation receives funding from Apple and Amazon, said the bill had dealt a “damaging blow” to the EU’s plans to become a world leader in AI, and that “a thicket of new rules” would cripple tech companies hoping to innovate.
In particular, it challenged the ban on AI that “manipulates” people’s behaviour and the regulatory burden on “high-risk” AI systems, such as mandatory human oversight and proof of safety and efficacy.
Despite these criticisms, the EU fears that if it does not act now to set rules around AI, it will allow the global spread of technologies that run contrary to European values.
“The Chinese have been very active in the applications that worry Europeans. These are actively exported, especially for law enforcement purposes, and there is a lot of demand for this among illiberal governments,” Bradford said. “The EU is very concerned that it must do its part to stop the global adoption of these deployments that compromise fundamental rights, so there is definitely a race for values.”
Petra Molnar, an associate director at York University in Canada, agreed, saying the bill has more depth and focuses more on human values than the first proposals in the United States and Canada.
“There is a lot of hand-waving at ethics and AI; the United States and Canada [proposals] are more superficial.”
Ultimately, the EU is betting that the development and commercialization of AI will be driven by public trust.
“If we can have better regulated AI that consumers trust, that also creates a market opportunity, because . . . it will be a source of competitive advantage for European systems [as] they are considered trustworthy and of high quality,” said Bradford of Columbia University. “You are not just competing on price.”