Power struggle
When Anton Korinek, an economist at the University of Virginia and a fellow at the Brookings Institution, got access to the new generation of large language models like ChatGPT, he did what many of us have done: he started playing with them to see how they could help his work. He carefully documented their performance in a post in February, noting how well they handled 25 “use cases,” from brainstorming and text editing (very helpful) to coding (good enough with a little help) to math (not great).
ChatGPT misrepresented one of the most fundamental principles of economics, says Korinek: “It got really bad.” But the error, easily spotted, was quickly forgiven in view of the benefits. “I can tell you it makes me, as a cognitive worker, more productive,” he says. “There is no doubt for me that I am more productive when I use a language model.”
When GPT-4 came out, Korinek tested its performance on the same 25 questions he had documented in February, and it performed much better. There were fewer instances of making stuff up; it also did much better on the math problems, he says.
Since ChatGPT and other AI bots automate cognitive work, as opposed to physical tasks that require investments in equipment and infrastructure, an increase in economic productivity could occur much faster than in past technological revolutions, Korinek explains. “I think we could see a bigger increase in productivity by the end of the year, certainly by 2024,” he says.
Plus, he says, in the longer term, the ways AI models can make researchers like him more productive could help drive technological progress.
This potential of large language models is already being revealed in research in the physical sciences. Berend Smit, who heads a chemical engineering lab at EPFL in Lausanne, Switzerland, is an expert in using machine learning to discover new materials. Last year, after one of his graduate students, Kevin Maik Jablonka, showed interesting results using GPT-3, Smit asked him to demonstrate that GPT-3 is, in fact, useless for the kinds of sophisticated machine-learning studies his group does to predict the properties of compounds.
“He completely failed,” Smit jokes.
It turns out that after being fine-tuned for a few minutes with a handful of relevant examples, the model performs as well as advanced machine-learning tools specially developed for chemistry at answering basic questions about things like a compound’s solubility or reactivity. Just give it the name of a compound and it can predict various properties based on its structure.
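To make the idea concrete, here is a minimal sketch of what preparing such a fine-tuning set might look like. It assumes the prompt/completion JSONL format commonly used for fine-tuning GPT-3-style models; the compound names, solubility labels, and file name are illustrative placeholders, not the data or code from Jablonka and Smit’s work.

```python
import json

# Illustrative (made-up) training examples: a few compounds labeled with a
# coarse solubility class. A real study would use many measured data points.
examples = [
    ("acetone", "high"),
    ("naphthalene", "low"),
    ("ethanol", "high"),
    ("anthracene", "low"),
]

# Write prompt/completion pairs in the JSONL format used for fine-tuning
# GPT-3-style models on simple question-answer tasks.
with open("solubility_finetune.jsonl", "w") as f:
    for name, label in examples:
        record = {
            "prompt": f"What is the water solubility of {name}?\n\n###\n\n",
            "completion": f" {label}",
        }
        f.write(json.dumps(record) + "\n")

# The resulting file would then be uploaded to a provider's fine-tuning API.
# After a few minutes of training, the model can be asked about a new compound,
# e.g. "What is the water solubility of benzene?", and returns a predicted class.
```

The point of the sketch is only to show how little task-specific machinery is involved: the “training data” is a short list of question-and-answer pairs rather than hand-engineered chemical descriptors.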