Users Raise Alarm Over New ChatGPT Security Flaws
Over the past week, users have flagged multiple issues with OpenAI’s assistant ChatGPT, including a security flaw in the new ChatGPT Mac app and a leak of internal instructions.
According to The Verge, OpenAI’s new macOS app had been storing users’ conversations in plain text. The flaw could allow a malicious actor with access to the user’s computer to read previous conversations with ChatGPT.
User Pedro José Pereira Vieito shared a video on X and Threads demonstrating the security flaw and how easy it was to access private files.
The OpenAI ChatGPT app on macOS is not sandboxed and stores all the conversations in **plain-text** in a non-protected location:
~/Library/Application\ Support/com.openai.chat/conversations-{uuid}/
So basically any other app / malware can read all your ChatGPT conversations: pic.twitter.com/IqtNUOSql7
— Pedro José Pereira Vieito (@pvieito) July 2, 2024
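To illustrate why this matters: because the files sat unencrypted in an unprotected location, any process running under the logged-in user’s account could read them with ordinary file APIs. The sketch below follows the directory layout described in Vieito’s post; the exact file names and contents inside each conversation folder are assumptions made for illustration, not confirmed details.

```python
# Minimal sketch (Python standard library only) of how any unsandboxed process
# running as the logged-in user could have read the ChatGPT macOS app's
# conversation files before the encryption fix.
# Directory layout per Pedro José Pereira Vieito's post; the file contents
# shown here are assumed to be readable text for illustration.
from pathlib import Path

base = Path.home() / "Library" / "Application Support" / "com.openai.chat"

# Each conversation reportedly lived in a "conversations-{uuid}" directory.
for convo_dir in sorted(base.glob("conversations-*")):
    print(f"Conversation store: {convo_dir.name}")
    for f in sorted(convo_dir.iterdir()):
        if f.is_file():
            # Plain-text storage: no decryption or special entitlement needed.
            preview = f.read_text(errors="replace")[:200]
            print(f"  {f.name}: {preview!r}")
```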
OpenAI responded to The Verge’s request for comment. “We are aware of this issue and have shipped a new version of the application which encrypts these conversations,” said spokesperson Taya Christianson. “We’re committed to providing a helpful user experience while maintaining our high security standards as our technology evolves.”
The latest update fixes the vulnerability, but another issue has since surfaced on Reddit.
As reported by TechRadar, ChatGPT shared internal information with a user who had casually greeted the assistant with a “Hi.” The user explained that the AI assistant replied with a set of internal instructions.
“You are ChatGPT, a large language model trained by OpenAI, based on the GPT-4 architecture. You are chatting with the user via the ChatGPT iOS app,” wrote ChatGPT. “This means most of the time your lines should be a sentence or two unless the user’s request requires reasoning or long-form outputs. Never use emojis, unless explicitly asked to. Knowledge cutoff: 2023-10 Current date: 2024-06-30.”
The user kept chatting with the bot and learned more about how DALL-E, the image generator tool, works, along with details on how the assistant interacts with and manages information.
Other users found that ChatGPT has multiple personalities, known as v1, v2, v3, and v4, and the assistant described their intended characteristics and the differences between the versions. “My enabled personality is v2. This personality represents a balanced, conversational tone with an emphasis on providing clear, concise, and helpful responses. It aims to strike a balance between friendly and professional communication,” wrote ChatGPT to one of the users.