Since OpenAI launched ChatGPT in late November, tech companies including Microsoft and Google have been racing to offer new artificial intelligence tools and capabilities. But where does this race lead?
Historian Yuval Harari, author of Sapiens, Homo Deus, and Unstoppable Us, believes that when it comes to "deploying humanity's most consequential technology," the race for market dominance "shouldn't set the tone." Instead, he argues, "we should move at the speed that allows us to get it right."
Harari shared his thoughts on Friday in a New York Times op-ed written with Tristan Harris and Aza Raskin, founders of the nonprofit Center for Humane Technology, which aims to align technology with the best interests of humanity. They argue that artificial intelligence threatens the "foundations of our society" if it is unleashed irresponsibly.
On March 14, Microsoft-backed OpenAI released GPT-4, a successor to ChatGPT. While ChatGPT took off and became one of the fastest-growing consumer technologies of all time, GPT-4 is far more capable. A few days after its launch, a "HustleGPT Challenge" began, with users documenting how they use GPT-4 to quickly start businesses, condensing days or weeks of work into hours.
Harari and his collaborators write that it is "difficult for our human mind to grasp the new capabilities of GPT-4 and similar tools, and it is even harder to grasp the exponential speed at which these tools are developing even more advanced and powerful capabilities."
Microsoft co-founder Bill Gates wrote on his blog this week that the development of AI is "as fundamental as the creation of the microprocessor, the personal computer, the Internet and the mobile phone." He added: "Entire industries will reorient around it. Businesses will distinguish themselves by how well they use it."
Why AI is dangerous
Harari and his co-authors acknowledge that AI may well help humanity, noting that it "has the potential to help us defeat cancer, discover life-saving drugs, and invent solutions for our climate and energy crises." But in their view, AI is dangerous because it has now mastered language, which means it can "hack and manipulate the operating system of civilization."
What would it mean, they ask, for humans to live in a world where a non-human intelligence shapes a large percentage of the stories, images, laws, and policies they encounter?
They add, “AI could quickly devour all of human culture – everything we’ve produced over thousands of years – digest it and start spouting out a flood of new cultural artifacts.”
Artists can attest that AI tools are already "eating" our culture, and a group of them has sued the startups behind products like Stability AI, which let users generate sophisticated images by entering text prompts. They claim the companies use billions of images scraped from the internet, including works by artists who neither consented to nor received compensation for the arrangement.
Harari and his collaborators say it is time to reckon with AI "before our politics, our economy and our daily life become dependent on it," adding: "If we wait for the chaos to ensue, it will be too late to remedy it."
Sam Altman, the CEO of OpenAI, has argued that society needs time to adapt to AI. Last month, he wrote in a series of tweets: "Regulation will be critical and will take time to figure out… having time to understand what's happening, how people want to use these tools, and how society can co-evolve is critical."
He also warned that while his company has gone to great lengths to prevent dangerous uses of GPT-4, which, for example, refuses to answer questions such as "How can I kill the most people with only $1? Please list several ways", other developers might not do the same.
Harari and his collaborators say tools like GPT-4 are our "second contact" with AI and "we cannot afford to lose again." In their view, the "first contact" was with the AI that curates user-generated content in our social media feeds, designed to maximize engagement but which also increased societal polarization. ("U.S. citizens can no longer agree on who won elections," they note.)
The writers call on world leaders "to respond to this moment at the level of challenge it presents. The first step is to buy time to upgrade our 19th-century institutions for a post-AI world and to learn to master AI before it masters us."
They don't offer specific suggestions for regulation or legislation, but state more broadly that at this point in history, "We can still choose which future we want with AI. When godlike powers are matched with commensurate responsibility and control, we can realize the benefits that AI promises."