ChatGPT is no BFF.
An Oxford University computer science professor has sounded the alarm on why it’s an awful idea to confide personal information and deep, dark secrets to large language models such as ChatGPT.
“The technology is basically designed to try to tell you what you want to hear – that’s literally all it’s doing,” Mike Wooldridge told the Daily Mail. “It has no empathy. It has no sympathy.”
While the human-trained artificial intelligence may at times mirror authentic emotion in its responses, users should not be fooled by the seemingly sympathetic cyber ear.
“That’s absolutely not what the technology is doing and crucially, it’s never experienced anything,” he added.
What’s worse, Wooldridge warned, users should be concerned about where their innermost confessions are actually going.
“You should assume that anything you type into ChatGPT is just going to be fed directly into future versions of ChatGPT,” he said.
So, just as we were warned with platforms like Facebook over a decade ago, it’s “extremely unwise to start having personal conversations or complaining about your relationship with your boss, or expressing your political opinions” on ChatGPT, he said.
There are no retractions in cyberspace, after all.
Beyond information being worked into future training data, there have also been instances when private chat histories were accidentally exposed.
Last March, an estimated 1.2 million users had their prior prompts revealed by a massive bug.
Italy temporarily banned ChatGPT over the data breach.
After the incident, OpenAI, the company behind ChatGPT, implemented a way for users to disable chat history, though conversations are still stored for 30 days.
OpenAI will “review them only when needed to monitor for abuse, before permanently deleting,” the Microsoft-backed company stated at the time.
Still, almost a year later, experts fear the risks that come with such weak protections for user data.
This month, security researcher Johann Rehberger flagged “a well-known data exfiltration vulnerability” that persists in ChatGPT even as OpenAI works to fix the glaring issue.
“The data exfiltration vulnerability was first reported to OpenAI early April 2023, but remained unaddressed,” he wrote, adding that mitigations are finally being put in place, even though a complete fix has yet to arrive.
“It’s not a perfect fix.”