A nonprofit AI research group wants the Federal Trade Commission to investigate OpenAI, Inc. and stop it from releasing GPT-4.
OpenAI “has released a product, GPT-4, for the consumer market that is biased, deceptive, and a risk to privacy and public safety,” said the complaint filed with the FTC today by the Center for Artificial Intelligence and Digital Policy (CAIDP).
Calling for “independent oversight and evaluation of commercial AI products offered in the United States,” CAIDP asked the FTC to “open an investigation into OpenAI, enjoin further commercial releases of GPT-4, and ensure the establishment of necessary guardrails to protect consumers, businesses, and the commercial marketplace.”
The nonprofit noted that the FTC has declared that the use of AI should be “transparent, explainable, fair, and empirically sound while fostering accountability,” adding that “OpenAI’s product GPT-4 satisfies none of these requirements.”
GPT-4 was released by OpenAI on March 14 and is available to ChatGPT Plus subscribers. Microsoft’s Bing already uses GPT-4. OpenAI touted GPT-4 as a major advance, saying it “passes a simulated bar exam with a score around the top 10% of test takers,” compared to the bottom 10% for GPT-3.5.
OpenAI says it had outside experts assess the potential risks posed by GPT-4, but CAIDP is not the first group to express concern that the AI field is moving too fast. As we reported yesterday, the Future of Life Institute issued an open letter urging AI labs to “pause for at least 6 months the training of AI systems more powerful than GPT-4.” The letter’s long list of signatories included many professors, along with prominent tech industry names such as Elon Musk and Steve Wozniak.
Group claims GPT-4 violates FTC Act
CAIDP said the FTC should use its authority under Section 5 of the Federal Trade Commission Act to investigate and sanction OpenAI. “The commercial release of GPT-4 violates Section 5 of the FTC Act, the FTC’s well-established guidance to businesses on the use and advertising of AI products, as well as the emerging norms for the governance of AI that the United States government has formally endorsed and leading experts and scientific societies have recommended,” the group claimed.
The FTC should “halt further commercial deployment of GPT by OpenAI,” require independent assessment of GPT products before deployment and “throughout the GPT AI lifecycle,” and require compliance with FTC AI guidance before future deployments, the group said. It also called for a publicly accessible incident-reporting mechanism for GPT-4, similar to the FTC’s mechanisms for reporting consumer fraud.
More broadly, CAIDP called on the FTC to issue a rule establishing “baseline standards for products in the generative AI market sector.”
We have reached out to OpenAI and will update this article with a response.
“OpenAI has not disclosed any details.”
The president and founder of CAIDP is Marc Rotenberg, who previously co-founded and led the Electronic Privacy Information Center. Rotenberg is an adjunct professor at Georgetown Law and was a member of the AI expert group run by the Organisation for Economic Co-operation and Development. Rotenberg also signed the Future of Life Institute open letter cited in the CAIDP complaint.
The CAIDP chair and research director is Merve Hickok, who is also a data ethics lecturer at the University of Michigan. She testified at a congressional hearing on AI on March 8. CAIDP’s list of team members includes many others working in technology, academia, privacy, law, and research.
The FTC last month warned companies to analyze “reasonably foreseeable risks and impact before putting any AI product on the market.” In a report to Congress last year, the FTC also expressed a range of concerns about AI harms, including inaccuracy, bias, discrimination, and creeping commercial surveillance.
CAIDP told the FTC that GPT-4 poses many kinds of risks and that the technology underlying it is poorly explained. “OpenAI has not disclosed any details regarding the architecture, model size, hardware, computing resources, training techniques, dataset construction, or training methods,” the CAIDP complaint said. “It has been the practice in the research community to document training data and training techniques for large language models, but OpenAI chose not to do this for GPT-4.”
“Generative AI models are unusual consumer products because they may exhibit behavior that the company selling them could not previously identify,” the group said.
OpenAI released GPT-4 with full knowledge of risks
CAIDP’s complaint points to some of OpenAI’s own statements about the risks of GPT-4. “OpenAI expressly acknowledges the risk of bias, or more precisely, of ‘harmful stereotypical and demeaning associations for certain marginalized groups,’” the complaint states.
For example, OpenAI said in its GPT-4 system card that the model “has the potential to reinforce and reproduce specific biases and worldviews, including harmful stereotypical and demeaning associations for certain marginalized groups.” CAIDP also cited an OpenAI blog post acknowledging that ChatGPT can “respond to harmful instructions or exhibit biased behavior.”
“With full knowledge of these risks, OpenAI released GPT-4 for commercial use,” the FTC complaint states. Raising concerns about children using GPT-4, the group said the GPT-4 system card does not detail the safety checks OpenAI performed during the testing period, nor the measures OpenAI has taken to safeguard children.
CAIDP noted concerns raised by the European consumer organization BEUC. In a tweet cited in the CAIDP complaint, BEUC asked whether ChatGPT, if used to score consumer credit or insurance, could produce unfair and biased results, impede access to credit, or raise the price of certain types of consumer health and life insurance, and how that could be prevented.
Security and privacy concerns
Turning to cybersecurity, CAIDP noted Europol’s warning that ChatGPT’s ability to “create highly realistic text” makes it useful for phishing, propaganda, and disinformation, and that its proficiency in various programming languages could also be exploited.
Regarding privacy, CAIDP pointed to an incident reported earlier this month in which OpenAI exposed users’ private chat histories to other users, with conversation titles appearing in the wrong users’ sessions.
In another case, an AI researcher “explained how he was able to ‘hijack someone’s account, view their chat history, and access their billing information without their knowledge,’” the complaint states. The researcher said last week that OpenAI fixed the vulnerability after he reported it.
GPT-4’s ability to provide text responses from photo inputs has “surprising implications for individual privacy and individual autonomy,” CAIDP said, because it allows users to “link images of people to detailed personal data” and to ask GPT-4 to make conversational assessments and judgments about the person.
OpenAI has reportedly paused the release of the image-to-text feature, known as Visual GPT-4, though its current status is difficult to determine, the complaint states.