Hacker creates false memories in ChatGPT to steal victim data but it might not be as bad as it sounds
Date:
Thu, 26 Sep 2024 04:28:00 +0000
Description:
Researchers have found an exploit which could allow hackers to steal information from ChatGPT users.
FULL STORY ======================================================================
Security researchers have exposed a vulnerability which could allow threat actors to store malicious instructions in a users memory settings in the ChatGPT MacOS app.
A report from Johann Rehberger at Embrace The Red noted how an attacker could trigger a prompt injection to take control of ChatGPT, and can then insert a memory into its long-term storage and persistence mechanism. This leads to
the exfiltration of the conversation on both sides straight to the attackers server.
From then on, the prompt is stored as memory persistent, so any future conversations with the chatbot will have the same vulnerability. Because ChatGPT remembers things about its users, like names, ages, locations, likes and dislikes, and previous searches, this exploit presents serious risk for users. Staying safe
In response, OpenAI had introduced an API which means the exploit is no
longer possible through ChatGPTs web interface, and has also launched a fix
to prevent memories from being used as an exfiltration vector. However, researchers say that untrusted third-party content can still inject prompts that could exploit the memory tool.
The good news is, whilst the memory tool is automatically turned on by
default in ChatGPT, but can be turned off by the user. The feature is great for those who want a more personalized experience using the chatbot , as it can listen to your wants and needs and make suggestions based on the info - but clearly there are dangers.
To mitigate the risks from this, users should be alert when using the
chatbot, and particularly look at the new memory added messages. By reviewing the stored memories regularly, users can examine for any potentially planted memories.
This isn't the first security flaw that researchers have discovered in ChatGPT, with concerns over the plugins allowing threat actors to take over users' other accounts and potentially access sensitive data. More from TechRadar Pro Take a look at some of the best AI chatbots for business Telegram says it will provide user data to authorities in major policy shift Check out our pick for best monitors
======================================================================
Link to news story:
https://www.techradar.com/pro/hacker-creates-false-memories-in-chatgpt-to-stea l-victim-data-but-it-might-not-be-as-bad-as-it-sounds
--- Mystic BBS v1.12 A47 (Linux/64)
* Origin: tqwNet Technology News (1337:1/100)