Over the past eight months, ChatGPT has impressed millions of people with its ability to generate realistic-looking text, writing everything from stories to code. But the chatbot, developed by OpenAI, is still relatively limited in what it can do.
The large language model (LLM) takes “prompts” from users that it uses to generate ostensibly related text. These responses are created partly from data scraped from the internet in September 2021, and it doesn’t pull in new data from the web. Enter plugins, which add functionality but are available only to people who pay for access to GPT-4, the updated version of OpenAI’s model.
Since OpenAI launched plugins for ChatGPT in March, developers have raced to create and publish plugins that allow the chatbot to do a lot more. Existing plugins let you search for flights and plan trips, and make it so ChatGPT can access and analyze text on websites, in documents, and on videos. Other plugins are more niche, promising you the ability to chat with the Tesla owner’s manual or search through British political speeches. There are currently more than 100 pages of plugins listed on ChatGPT’s plugin store.
But amid the explosion of these extensions, security researchers say there are some problems with the way that plugins operate, which can put people’s data at risk or potentially be abused by malicious hackers.
Johann Rehberger, a red team director at Electronic Arts and security researcher, has been documenting issues with ChatGPT’s plugins in his spare time. The researcher has documented how ChatGPT plugins could be used to steal someone’s chat history, obtain personal information, and allow code to be remotely executed on someone’s machine. He has mostly been focusing on plugins that use OAuth, a web standard that allows you to share data across online accounts. Rehberger says he has been in touch privately with around a half-dozen plugin developers to raise issues, and has contacted OpenAI a handful of times.
“ChatGPT cannot trust the plugin,” Rehberger says. “It fundamentally cannot trust what comes back from the plugin because it could be anything.” A malicious website or document could, through the use of a plugin, attempt to run a prompt injection attack against the large language model (LLM). Or it could insert malicious payloads, Rehberger says.
Data could also potentially be stolen through cross plugin request forgery, the researcher says. A website could include a prompt injection that makes ChatGPT open another plugin and perform extra actions, which he has shown through a proof of concept. Researchers call this “chaining,” where one plugin calls another one to operate. “There are no real security boundaries” within ChatGPT plugins, Rehberger says. “It is not very well defined, what the security and trust, what the actual responsibilities [are] of each stakeholder.”
Since they launched in March, ChatGPT’s plugins have been in beta—essentially an early experimental version. When using plugins on ChatGPT, the system warns that people should trust a plugin before they use it, and that for the plugin to work ChatGPT may need to send your conversation and other data to the plugin.
Niko Felix, a spokesperson for OpenAI, says the company is working to improve ChatGPT against “exploits” that can lead to its system being abused. It currently reviews plugins before they are included in its store. In a blog post in June, the company said it has seen research showing how “untrusted data from a tool’s output can instruct the model to perform unintended actions.” And that it encourages developers to make people click confirmation buttons before actions with “real-world impact,” such as sending an email, are done by ChatGPT.
Source