An open letter signed by technology leaders and prominent AI researchers has called on AI labs and companies to “immediately pause” their work. Signatories including Steve Wozniak and Elon Musk argue that the risks warrant pausing work on AI systems more powerful than GPT-4 for at least six months, giving society time to take advantage of existing AI systems, allowing people to adapt and ensuring that the technology benefits everyone. The letter adds that care and foresight are needed to keep AI systems safe, but are being ignored.
The reference to GPT-4, OpenAI's model that can respond with text to written or image prompts, comes as companies race to build complex chat systems on top of the technology. Microsoft, for example, recently confirmed that its revamped Bing search engine has been powered by GPT-4 for more than seven weeks, while Google recently launched Bard, its own generative AI system built on LaMDA. Unease about AI has circulated for a long time, but the apparent race to deploy the most advanced AI technology first has made those concerns more pressing.
“Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control,” the letter reads.
The letter was published by the Future of Life Institute (FLI), an organization dedicated to minimizing the risks and misuse of new technologies. Musk previously donated $10 million to FLI for use in AI safety research. Beyond him and Wozniak, the signatories include a slew of global AI leaders, such as Center for AI and Digital Policy President Marc Rotenberg, MIT physicist and Future of Life Institute President Max Tegmark, and author Yuval Noah Harari. Harari also co-wrote an op-ed in The New York Times last week warning about the risks of AI, along with Center for Humane Technology founders and fellow signatories Tristan Harris and Aza Raskin.
This call looks like the next step following a 2022 survey of more than 700 machine learning researchers, in which nearly half of participants said there was a 10% chance of an “extremely bad outcome” from AI, including human extinction. When asked about safety in AI research, 68% of respondents said more, or much more, should be done.
Anyone concerned about the speed and safety of AI development can add their name to the letter. However, new signatures aren't necessarily verified, so any notable names added after the letter's initial publication may be fake.