
Alphabet and Google CEO Sundar Pichai isn’t ready to pause work on advanced artificial intelligence. Mateusz Wlodarczyk — NurPhoto via Getty Images
Artificial intelligence poses a huge threat to society and humanity, so AI labs should stop working on advanced systems for at least six months. So says an open letter signed this week by tech luminaries including Apple co-founder Steve Wozniak and Tesla CEO Elon Musk.
But the CEO of Google, one of the most advanced AI companies, isn’t committed to the idea.
Sundar Pichai addressed the open letter in an interview with the Hard Fork podcast published Friday. He believes the idea of companies collectively taking such action is problematic.
“As for the actual details, it’s not entirely clear to me how you would do something like that today,” he said. Asked why he couldn’t simply pause, he replied: “At least for me, there is no way to do this effectively without government involvement. So I think we need to think more about it.”
The open letter calls on all AI labs to suspend training of systems more powerful than GPT-4 in particular, rather than development in general. Microsoft-backed OpenAI released GPT-4 earlier this month as the successor to ChatGPT, the AI chatbot that took the world by storm when it was released in late November.
OpenAI itself said last month that at some point it may be important to get independent review before starting to train future systems. The open letter released this week claims that “some point” is now. It warns of the risks posed by AI systems with “human-competitive intelligence” and asks:
“Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”
Such decisions should not be delegated to unelected tech leaders, the letter claims, and powerful AI systems should be developed “only once we are confident that their effects will be positive and their risks will be manageable.”
Pichai acknowledged on Hard Fork that AI systems have the potential to “cause massive disinformation.” And in a hint of the malicious uses that may follow, phone scammers are already using AI voice-cloning tools to trick people into believing that their relatives are in urgent need of money.
As for jobs being automated, a University of Pennsylvania business professor said last weekend that he had recently given an AI tool 30 minutes to work on a business project, and he called the results “superhuman.”
Asked on the podcast whether AI could lead to the doom of humanity, Pichai replied, “There are many possibilities, and what you’re talking about is one of a range of possibilities.”
The open letter warns of “an out-of-control race to develop and deploy ever more powerful digital minds that no one (not even their creators) can understand, predict, or reliably control.” During the pause, it adds, AI labs and independent experts should jointly develop and implement a set of shared safety protocols for advanced AI design and development, rigorously audited and overseen by independent outside experts.
If such a pause cannot be enacted quickly, the letter argues, “governments should step in and institute a moratorium.”
Pichai agreed with the need for regulation, if not a moratorium. “AI is too important an area not to regulate,” he said. “It’s also too important an area not to regulate well.”
He said of the open letter:
Meanwhile, Google, Microsoft, OpenAI, and others are pressing ahead.
Fortune reached out to Google and OpenAI for comment but did not immediately hear back. Microsoft declined to comment.