EU’s proposed AI laws would regulate robot surgeons but not the military


As U.S. lawmakers convene yet another congressional hearing on the dangers posed by algorithmic bias in social media, the European Commission (essentially the EU's executive arm) has unveiled a broad regulatory framework that, if adopted, could have global implications for the future of AI development.

This is not the Commission’s first attempt at guiding the growth and evolution of this emerging technology. After lengthy meetings with advocacy groups and other stakeholders, the EC released the first European AI Strategy and Coordinated Plan on AI in 2018. Those were followed in 2019 by the Guidelines for Trustworthy AI, then again in 2020 by the Commission’s White Paper on AI and Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and Robotics. As with its ambitious General Data Protection Regulation (GDPR) in 2018, the Commission is seeking to establish a baseline level of public trust in the technology, based on strict user and data privacy protections as well as guards against its potential misuse.


“Artificial intelligence should not be an end in itself, but a tool that has to serve people with the ultimate aim of increasing human well-being. Rules for artificial intelligence available on the Union market or otherwise affecting Union citizens should therefore put people at the center (be human-centric), so that they can trust that the technology is used in a way that is safe and compliant with the law, including the respect of fundamental rights,” the Commission writes in its draft regulations. “At the same time, such rules for artificial intelligence should be balanced, proportionate and not unnecessarily constrain or hinder technological development. This is of particular importance because, although artificial intelligence is already present in many aspects of people’s daily lives, it is not possible to anticipate all possible uses or applications thereof that may happen in the future.”

Indeed, artificial intelligence systems are already ubiquitous in our lives – from the recommendation algorithms that help us decide what to watch on Netflix and whom to follow on Twitter, to the digital assistants in our phones and the driver-assistance systems that watch the road for us (or don’t) when we drive.

“The European Commission has once again taken a bold step to address emerging technology, just as it had done with data privacy through GDPR,” Dr. Brandie Nonnecke, Director of the CITRIS Policy Lab at UC Berkeley, told Engadget. “The proposed regulation is quite interesting in that it is attacking the problem from a risk-based approach,” similar to that used in Canada’s proposed AI regulatory framework.

These new rules would divide the EU’s AI development efforts into a four-tier system – minimal risk, limited risk, high risk, and outright banned – based on their potential harms to the public good. “The risk framework they’re operating under is really oriented around risk to society, whereas whenever you hear about risk [in the US], it’s pretty much in the context of ‘what’s my liability, what’s my exposure,’” Dr. Jennifer King, a privacy and data policy researcher at the Stanford University Institute for Human-Centered Artificial Intelligence, told Engadget. “And somehow, if that encompasses human rights as part of that risk, then it’s built in, but to the extent that it can be externalized, it’s not included.”

Outright banned uses of the technology will include any applications that manipulate human behavior to circumvent users’ free will – especially those that exploit the vulnerabilities of a specific group of people due to their age or physical or mental disability – as well as biometric identification systems and those that allow “social scoring” by governments, according to the 108-page proposal. It’s a direct nod to China’s social credit system, and given that these regulations would, in theory, still govern technologies that impact EU citizens whether or not those people are physically within the EU’s borders, it could lead to some interesting international incidents in the near future. “There’s a lot of work to be done in implementing the guidelines,” King noted.

Three robotic surgical arms at work in an operating theater during a media presentation at the Leipzig Heart Center: one arm holds a miniature camera, the other two hold standard surgical instruments, while the surgeon watches a monitor and steers the arms with two handles that translate large natural movements into precise micro-movements. (Jochen Eckel / Reuters)

High-risk applications, on the other hand, are defined as any product in which the AI is “intended to be used as a safety component of a product,” or where the AI is the safety component itself (think of your car’s collision-avoidance feature). Additionally, AI applications destined for any of eight specific markets, including critical infrastructure, education, legal/judicial matters and employee hiring, are considered part of the high-risk category. These can come to market but are subject to stringent regulatory requirements before they go on sale, such as requiring the AI developer to maintain compliance with EU regulations throughout the product’s lifecycle, ensure strict privacy guarantees, and perpetually keep a human in the control loop. Sorry, that means no fully autonomous robot surgeons for the foreseeable future.

“The reading I got from it was that the Europeans seem to be envisioning oversight – I don’t know if it’s an overstatement to say cradle to grave,” King said. “But there seems to be a sense that there needs to be ongoing monitoring and evaluation, especially of hybrid systems.” Part of that oversight is the EU’s push for regulatory AI sandboxes, which will allow developers to build and test high-risk systems under real-world conditions but without the real-world consequences.

These sandboxes, in which any non-governmental entity – not just those big enough to have independent R&D budgets – is free to develop its AI systems under the watchful eyes of EC regulators, “are intended to prevent the kind of chilling effect that was seen as a result of the GDPR, which led to a 17 percent increase in market concentration after it was introduced,” Jason Pilkington recently argued for Truth on the Market. “But it’s unclear that they would accomplish this goal.” The EU also plans to establish a European Artificial Intelligence Board to oversee compliance efforts.

Nonnecke also points out that many of the areas addressed by these high-risk rules are the same ones academic researchers and journalists have been scrutinizing for years. “I think this really underscores the importance of empirical research and investigative journalism in enabling our lawmakers to better understand what the risks of these AI systems are, and also what the benefits of these systems are,” she said. One area these regulations will not explicitly apply to is AI built specifically for military purposes, so bring on the killbots!

The cannon and sighting equipment atop a Titan Strike unmanned ground vehicle, fitted with a .50 caliber machine gun, secures ground on Salisbury Plain during Exercise Autonomous Warrior 18, a groundbreaking exercise in which military personnel, government departments, industry partners and NATO allies explore how the military can exploit robotic and autonomous technology. (Ben Birchall – PA Images via Getty Images)

Limited-risk applications include things like chatbots on service websites or the presence of deepfake content. In those cases, the AI’s maker simply has to inform users upfront that they’ll be interacting with a machine rather than another person or, for that matter, a dog. And for minimal-risk products, like the AI in video games and really the vast majority of applications the EC expects to see, the regulations impose no particular restrictions or additional requirements that would need to be met before going to market.
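For readers who think in code, here is a minimal sketch of the four-tier triage described above. The tier names and obligations are paraphrased from the draft; the example applications and the mapping itself are our own illustrative simplification, not anything the Commission publishes.

```python
from enum import Enum


class RiskTier(Enum):
    """The four tiers in the EC's draft, ordered by regulatory burden."""
    MINIMAL = "minimal risk"
    LIMITED = "limited risk"
    HIGH = "high risk"
    PROHIBITED = "banned"


# Hypothetical mapping for illustration only; the draft defines these
# categories in legal text and annexes, not in code.
EXAMPLES = {
    "video game NPC behavior": RiskTier.MINIMAL,
    "customer service chatbot": RiskTier.LIMITED,
    "car collision-avoidance system": RiskTier.HIGH,
    "resume-screening tool for hiring": RiskTier.HIGH,
    "government social scoring system": RiskTier.PROHIBITED,
}


def obligations(tier: RiskTier) -> str:
    """Summarize the draft's obligations per tier (simplified paraphrase)."""
    return {
        RiskTier.MINIMAL: "no additional requirements",
        RiskTier.LIMITED: "must disclose to users that they're interacting with AI",
        RiskTier.HIGH: "lifecycle compliance, privacy guarantees, human in the loop",
        RiskTier.PROHIBITED: "may not be placed on the EU market",
    }[tier]


if __name__ == "__main__":
    for app, tier in EXAMPLES.items():
        print(f"{app}: {tier.value} -> {obligations(tier)}")
```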

And should a company or developer dare to ignore these regulations, they’ll find that violating them comes with a hefty fine – one measured not as a flat sum but as a percentage of revenue. Specifically, fines for non-compliance can run up to 30 million euros or 4 percent of the entity’s global annual revenue, whichever is higher.
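As a quick back-of-the-envelope illustration of that “whichever is higher” rule (the revenue figures below are invented):

```python
def max_fine(annual_revenue_eur: float) -> float:
    """The draft's penalty ceiling: the greater of a flat 30M EUR
    or 4 percent of the entity's global annual revenue."""
    return max(30_000_000.0, 0.04 * annual_revenue_eur)


# A hypothetical firm with 2 billion EUR in annual revenue:
# 4% of 2B EUR = 80M EUR, which exceeds the 30M EUR floor.
print(f"{max_fine(2_000_000_000):,.0f} EUR")  # 80,000,000 EUR

# A smaller firm with 100M EUR in revenue: 4% is only 4M EUR,
# so the 30M EUR floor applies instead.
print(f"{max_fine(100_000_000):,.0f} EUR")  # 30,000,000 EUR
```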

“It is important for us at a European level to send a very strong message and to set the standards in terms of how far these technologies should be allowed to go,” Dragoș Tudorache, Member of the European Parliament and head of its committee on artificial intelligence, told Bloomberg in a recent interview. “Putting a regulatory framework around them is a necessity and it is good that the European Commission is taking this direction.”

Whether the rest of the world will follow Brussels’ lead on this remains to be seen. Given how the regulations currently define what an AI even is – and they do so in very broad terms – we can likely expect this legislation to impact nearly every sector of the global economy, not just the digital realm. Of course, these regulations will first have to survive a rigorous (and often contentious) parliamentary process that could take years before they’re enacted.

And as for America’s chances of enacting similar regulations of its own, well. “I think we’ll see something proposed at the federal level, yes,” Nonnecke said. “Do I think it will pass? Those are two different things.”
