The fight to define when AI is “at high risk”

EU leaders insist that addressing ethical issues surrounding AI will lead to a more competitive market for AI goods and services, increase adoption of AI and help the region compete with China and the United States. Regulators hope high-risk labels encourage more professional and responsible business practices.

Businesses polled say the bill goes too far, with costs and rules that will stifle innovation. Meanwhile, many human rights, AI ethics, and anti-discrimination groups argue that the AI law does not go far enough, leaving people vulnerable to businesses and powerful governments with the resources to deploy advanced AI systems. (Notably, the bill does not cover military uses of AI.)

(Mostly) strictly business

While some public comments on the AI law came from individual European citizens, responses came mainly from professional groups of radiologists and oncologists, Irish and German educators’ unions, and large European companies like Nokia, Philips, Siemens, and the BMW Group.

US companies are also well represented, with comments from Facebook, Google, IBM, Intel, Microsoft, OpenAI, Twilio, and Workday. In fact, according to data collected by European Commission staff, the United States was the fourth-largest source of comments, after Belgium, France, and Germany.

Many companies expressed concern over the costs of the new regulations and asked how their own AI systems would be labeled. Facebook wanted the European Commission to be more explicit about whether the AI law’s mandate to ban subliminal techniques that manipulate people extends to targeted advertising. Equifax and MasterCard each opposed a blanket high-risk designation for any AI that judges a person’s creditworthiness, saying it would increase costs and reduce the accuracy of credit scores. However, many studies have found instances of discrimination involving algorithms, financial services, and loans.

NEC, the Japanese facial recognition company, argued that the AI law places undue liability on the providers of AI systems rather than on their users, and that the bill’s proposal to label all remote biometric identification systems as high risk would carry high compliance costs.

One of the main disputes companies have with the bill is how it treats general-purpose or pretrained models capable of performing a range of tasks, such as OpenAI’s GPT-3 or Google’s experimental multimodal model MUM. Some of these models are open source; others are proprietary creations sold to customers by cloud services companies that possess the AI talent, data, and computing resources needed to train such systems. In a 13-page response to the AI law, Google argued that it would be difficult or impossible for creators of general-purpose AI systems to comply with the rules.

Other companies working on general-purpose or artificial general intelligence systems, like Google’s DeepMind, IBM, and Microsoft, also suggested changes to accommodate AI that can multitask. OpenAI urged the European Commission to avoid banning general-purpose systems in the future, even if some of their use cases may fall into a high-risk category.

Businesses also want to see the creators of the AI Act change definitions of critical terminology. Companies like Facebook argued that the bill uses overly broad terminology to define high-risk systems, resulting in overregulation. Others suggested more technical changes. Google, for example, wants a new definition added to the bill that distinguishes “deployers” of an AI system from “providers,” “distributors,” or “importers” of AI systems. According to the company, doing so would place responsibility for modifications made to an AI system on the business or entity making the change, rather than on the company that created the original. Microsoft made a similar recommendation.

The costs of high-risk AI

Then there is the question of what a high-risk label will cost businesses.

A study by European Commission staff estimates the compliance costs for a single AI project under the AI law at around €10,000 and finds that companies can expect initial overall costs of about €30,000. As businesses develop professional approaches and compliance becomes business as usual, it expects costs to fall closer to €20,000. The study used a model created by the Federal Statistical Office in Germany and acknowledges that costs can vary depending on the size and complexity of a project. Since developers acquire and customize AI models, then embed them into their own products, the study concludes that a “complex ecosystem would potentially involve complex sharing of responsibilities.”
