A sign at the headquarters of the Consumer Financial Protection Bureau (CFPB) in Washington, D.C., on August 29, 2020. Reuters/Andrew Kelly
NEW YORK (AP) – Amid growing concerns over increasingly powerful artificial intelligence systems like ChatGPT, the country's financial watchdog says it is working to ensure that companies comply with the law when they use AI.
Already, automated systems and algorithms help determine credit ratings, loan terms, bank account fees, and other aspects of our financial lives. AI also affects hiring, housing, and working conditions.
Ben Winters, senior adviser to the Electronic Privacy Information Center, said the joint enforcement statement issued by federal agencies last month was a positive first step.
"There is a notion that AI is not regulated at all, but that is not really true," he said. "They're saying, 'Just because you use AI to make a decision doesn't mean you're exempt from responsibility for the impact of that decision. This is our view, and we're watching.'"
The Consumer Financial Protection Bureau said that over the past year it has fined banks over mismanaged automated systems that resulted in wrongful home foreclosures, car repossessions, and lost benefit payments, after the institutions relied on new technology and flawed algorithms.
Regulators say there are no “AI exceptions” to consumer protection, citing these enforcement actions as examples.
Consumer Financial Protection Bureau Director Rohit Chopra said the agency has "already started some work to bring on board data scientists, technologists and others to make sure we can confront these challenges," and that it continues to identify potentially illegal activity.
Representatives from the Federal Trade Commission, the Equal Employment Opportunity Commission, and the Department of Justice, as well as the CFPB, all say they are directing resources and staff to take aim at new technology and identify the negative ways it can affect consumers' lives.
"One of the things we're trying to make crystal clear is that if companies don't even understand how their AI is making decisions, they can't really use it," Chopra said. "In other cases, we're looking at how fair lending laws are being complied with when it comes to the use of all of this data."
For example, under the Fair Credit Reporting Act and the Equal Credit Opportunity Act, financial providers are legally obligated to explain adverse credit decisions. Those regulations likewise apply to decisions about housing and employment. Where AI makes decisions in ways that are too opaque to explain, regulators say the algorithms should not be used.
"I think there was a sense of, 'Oh, if we just hand it off to the robots, there will be no more discrimination,'" Chopra said. "I think we've learned that that isn't true at all. In some ways, the bias is built into the data."
EEOC Chair Charlotte Burrows said the agency will enforce the law against AI hiring technology that screens out job applicants with disabilities, for example, as well as so-called "bossware" that illegally monitors workers.
Burrows also described ways that algorithms can dictate when and how employees work in ways that violate existing law.
"If you need a break because you have a disability, or you're pregnant, you need a break," she said. "The algorithm doesn't necessarily take that accommodation into account. Those are the kinds of things we're watching closely ... We recognize the technology is evolving, but the underlying message here is that the laws still apply, and we want to be clear that we have the tools to enforce them."
At a conference earlier this month, OpenAI's top lawyer proposed an industry-led approach to regulation.
"I think it starts with trying to get to some kind of standards," OpenAI general counsel Jason Kwon said at a technology summit in Washington, D.C., hosted by the software industry group BSA. "Those could start as industry standards, with some sort of coalescing around them. And then decisions about whether to make them mandatory, and what the process for updating them would be, those things are probably fertile ground for further discussion."
Sam Altman, the head of OpenAI, which makes ChatGPT, has said government intervention will be critical to mitigating the risks of "increasingly powerful" AI systems, and has suggested the creation of a U.S. or global agency to license and regulate the technology.
While there is no immediate sign that Congress will craft sweeping new AI rules, as European lawmakers are doing, public concern brought Altman and other tech CEOs to the White House this month to answer hard questions about the implications of these tools.
Winters, of the Electronic Privacy Information Center, said the agencies could do more to study and publish information on the relevant AI markets, how the industry works, who the biggest players are, and how the information collected is being used, the way regulators have done in the past with new consumer finance products and technologies.
"The CFPB has done a pretty good job on this with the 'buy now, pay later' companies," he said. "There are still so many parts of the AI ecosystem that remain unknown."
Technology reporter Matt O’Brien contributed to this report.