It’s been a big week for government crackdowns on the misuse of artificial intelligence.
Today the EU released its long-awaited set of AI regulations, an early draft of which leaked last week. The regulations are wide-ranging, with restrictions on mass surveillance and on the use of AI to manipulate people.
But a statement of intent from the United States Federal Trade Commission, set out in a short blog post by staff attorney Elisa Jillson on April 19, could have more teeth in the immediate future. According to the post, the FTC plans to go after companies that use and sell biased algorithms.
A number of companies will be scared right now, says Ryan Calo, a professor at the University of Washington who works on technology and law. “It’s not really just this one blog post,” he says. “This blog post is a very stark example of what appears to be a sea change.”
The EU is known for its hard line against Big Tech, but the FTC has taken a softer approach, at least in recent years. The agency’s role is to police unfair and deceptive business practices. Its remit is narrow: it has no jurisdiction over government agencies, banks, or nonprofits. But it can step in when companies misrepresent the capabilities of a product they are selling, which means firms that claim their facial recognition systems, predictive policing algorithms, or health tools are free of bias may now find themselves in the crosshairs. “Where they do have power, they have tremendous power,” Calo says.
Ready to act
The FTC has not always been willing to wield this power. After being criticized in the ’80s and ’90s for being too aggressive, it backed down and picked fewer fights, especially against technology companies. That appears to be changing.
In the blog post, the FTC warns vendors that AI claims must be “true, not misleading, and supported by evidence.”
“For example, let’s say an AI developer tells clients that its product will provide ‘100% unbiased hiring decisions,’ but the algorithm was built with data that lacked racial or gender diversity. The result may be deception, discrimination—and an FTC law enforcement action.”
FTC action has bipartisan support in the Senate, where commissioners were asked yesterday what more they could do and what they needed to do it. “There’s wind behind the sails,” Calo says.
Meanwhile, although they draw a clear line in the sand, the EU’s AI regulations are for now only proposals. As with the GDPR rules introduced in 2018, it will be up to EU member states to decide how to implement them. Some of the language is also vague and open to interpretation. Take its stand against “subliminal techniques beyond a person’s consciousness” used to “materially distort a person’s behavior” in a way that could cause psychological harm. Does that apply to social media news feeds and targeted advertising? “We can expect many lobbyists to attempt to explicitly exclude advertising or recommender systems,” says Michael Veale, a faculty member at University College London who studies law and technology.
It will take years of legal challenges in the courts to hammer out the details and definitions. “That will only be after an extremely lengthy process of investigation, complaint, fine, appeal, counter-appeal, and referral to the European Court of Justice,” says Veale. “At which point the cycle will begin again.” But the FTC, despite its narrower remit, has the autonomy to act now.
One big limitation common to both the FTC and the European Commission is a limited ability to rein in government use of harmful AI technologies. The EU regulations, for example, carve out exemptions for state use of surveillance. And the FTC is authorized only to go after companies. It could intervene by stopping private vendors from selling biased software to law enforcement agencies. But enforcement will be hard, given the secrecy surrounding such sales and the lack of rules about what government agencies must disclose when they acquire technology.
Still, this week’s announcements reflect a major global shift toward serious regulation of AI, a technology that has so far been developed and deployed with little oversight. Ethics watchdogs have been calling for restrictions on unfair and harmful AI practices for years.
The EU sees its regulations as bringing AI under existing protections for human liberties. “Artificial intelligence must serve people, and therefore artificial intelligence must always comply with people’s rights,” said Ursula von der Leyen, president of the European Commission, in a speech ahead of the release.
The regulations could also help AI with its image problem. As von der Leyen put it: “We want to encourage our citizens to feel confident to use it.”