A number of open source projects, such as LangChain and LlamaIndex, are also exploring ways to build applications on top of the capabilities provided by large language models. The launch of OpenAI's plugins threatens to torpedo those efforts, Guo says.
Plugins could also introduce risks for complex AI models. Members of ChatGPT's plugin red team found that they could "send fraudulent or spam emails, bypass safety restrictions, or misuse information sent to the plugin," according to Emily Bender, a linguistics professor at the University of Washington. "Letting automated systems take action in the world is a choice that we make," Bender adds.
Dan Hendrycks, director of the Center for AI Safety, a nonprofit organization, believes that plugins make language models riskier at a time when companies like Google, Microsoft, and OpenAI are lobbying aggressively to limit their liability through AI legislation. He calls the release of ChatGPT plugins a bad precedent and suspects it could lead other makers of large language models down a similar path.
And while the selection of plugins may be limited today, competition could push OpenAI to expand its offerings. Hendrycks sees a distinction between ChatGPT plugins and earlier efforts by tech companies to build developer ecosystems around conversational AI, such as Amazon's Alexa voice assistant.
GPT-4 can, for example, execute Linux commands, and the GPT-4 red-teaming process revealed that the model can explain how to make biological weapons, synthesize bombs, or buy ransomware on the dark web. Hendrycks suspects that extensions inspired by ChatGPT plugins could make tasks like spear phishing or writing phishing emails much easier.
Going from generating text to taking action on a person's behalf erodes an air gap that has so far kept language models from acting in the world. "We know models can be jailbroken and now we're connecting them to the internet so they can potentially take action," Hendrycks says. "It doesn't mean that of its own accord ChatGPT is going to build bombs or anything like that, but it does make it much easier to do that kind of stuff."
Part of the problem with plugins for language models is that they could make it easier to jailbreak such systems, says Ali Alkhatib, acting director of the Center for Applied Data Ethics at the University of San Francisco. Since users interact with the AI using natural language, there are potentially millions of undiscovered vulnerabilities. Alkhatib believes plugins have far-reaching implications at a time when companies like Microsoft and OpenAI are clouding public perception with recent claims of advances toward artificial general intelligence.
“Things are moving fast enough to be not only dangerous, but actually harmful to a lot of people,” he says, voicing concern that companies eager to use new AI systems will rush plugins into sensitive contexts, such as counseling services.
Adding new capabilities to AI programs like ChatGPT could also have unintended consequences, says Kanjun Qiu, CEO of Generally Intelligent, an AI company working on AI-powered agents. A chatbot could, for example, book an overpriced flight or be used to distribute spam, and Qiu says we will have to determine who is responsible for such misbehavior.
But Qiu adds that the utility of internet-connected AI programs means the technology is unstoppable. “Over the next few months and years, we can expect a large portion of the internet to be connected to large language models,” Qiu says.