In a statement on its website, the Italian data protection authority described the block as an interim measure “until ChatGPT respects your privacy”. The watchdog’s measures include temporarily limiting the company’s processing of Italian users’ data.
US-based OpenAI, which developed ChatGPT, did not immediately return a request for comment on Friday.
The move is also unlikely to affect applications from companies that already hold licenses with OpenAI to use the same technology that powers the chatbot, such as Microsoft’s Bing search engine.
The AI systems powering such chatbots, known as large language models, are able to mimic human writing styles based on the huge volumes of digital books and online text they have ingested.
The Italian watchdog said OpenAI must report within 20 days on the steps it has taken to ensure user data privacy, or face a fine of up to €20 million (around $22 million) or 4% of annual global turnover.
According to the agency’s statement, ChatGPT suffered a data breach on March 20 involving “user conversations and information related to the payments of subscribers to the service.”
OpenAI previously announced that ChatGPT had to be taken offline on March 20 to fix a bug that allowed some users to see the titles or subjects of other users’ chat histories.
“Our research also found that 1.2% of ChatGPT Plus users may have exposed their personal data to another user,” the company said. “We believe the number of users whose data was actually exposed to others is very small, and we have reached out to potentially affected users.”
The Italian privacy watchdog lamented “the lack of notification to users and all parties whose data is collected by OpenAI” and, “above all, the lack of legal grounds to justify the large-scale collection and retention of personal data” used to “train” the algorithms underlying the platform’s operation.
The agency also said that the information provided by ChatGPT “does not always correspond to actual data,” meaning that inaccurate personal data is being processed.
Finally, it pointed out that “the absence of any filter to verify users’ age” exposes minors “to responses that are completely inappropriate for their degree of development and self-awareness.”
The San Francisco-based company’s CEO, Sam Altman, announced this week that he will travel to six continents in May to discuss the technology with users and developers. That includes a planned stop in Brussels, where European Union lawmakers are negotiating sweeping new rules to limit risky AI tools.
Altman said his European stops include Madrid, Munich, London and Paris.
O’Brien reported from Providence, Rhode Island. AP Business Writer Kelvin Chan contributed from London.
Copyright 2023 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.