An open letter signed by hundreds of eminent artificial intelligence experts, tech entrepreneurs, and scientists calls for a pause in the development and testing of AI technologies more powerful than OpenAI's language model GPT-4, so that the risks they may present can be properly studied.
The letter warns that language models like GPT-4 can already compete with humans at a growing range of tasks, and could be used to automate jobs and spread misinformation. It also raises the more distant prospect of AI systems that could replace humans and remake civilization.
“We call on all AI labs to immediately suspend training of AI systems more powerful than GPT-4 (including the GPT-5 currently in training) for at least 6 months,” the letter reads. Signatories include Yoshua Bengio, a University of Montreal professor considered a pioneer of modern AI, historian Yuval Noah Harari, Skype co-founder Jaan Tallinn, and Twitter CEO Elon Musk.
The letter, written by the Future of Life Institute, an organization focused on technological risks to humanity, adds that the pause should be “public and verifiable” and should involve everyone working on advanced AI models like GPT-4. It does not suggest how a halt in development could be verified, but adds that “if such a pause cannot be enacted quickly, governments should step in and institute a moratorium” — something that seems unlikely to happen within six months.
Microsoft and Google did not respond to requests for comment on the letter. The signatories apparently include people from numerous tech companies that build advanced language models, including Microsoft and Google. Hannah Wong, a spokesperson for OpenAI, said the company spent more than six months working on the safety and alignment of GPT-4 after training the model. She added that OpenAI is not currently training GPT-5.
The letter comes as AI systems make increasingly bold and impressive leaps. GPT-4 was announced only two weeks ago, but its capabilities have stirred significant enthusiasm and a fair amount of concern. The language model, available via ChatGPT, OpenAI's popular chatbot, performs strongly on many academic tests and can correctly answer tricky questions generally thought to require more advanced intelligence than AI systems have previously demonstrated. Yet GPT-4 also makes plenty of trivial logical errors. And, like its predecessors, it sometimes “hallucinates” incorrect information, betrays ingrained societal biases, and can be tricked into saying hateful or potentially harmful things.
Part of the concern expressed by the signatories of the letter is that OpenAI, Microsoft, and Google have entered a profit-driven race to develop and release new AI models as quickly as possible. At such a pace, the letter argues, developments are happening faster than society and regulators can keep up with.
The pace of change and the scale of investment are significant. Microsoft has poured $10 billion into OpenAI and is using its AI in its Bing search engine as well as other applications. Although Google developed some of the AI techniques needed to create GPT-4 and previously built its own powerful language models, until this year it chose not to release them, citing ethical concerns.