OPENAI has swiftly moved to ban a jailbroken version of ChatGPT that can teach users dangerous tasks, exposing serious vulnerabilities in the AI model’s security measures.
A hacker known as “Pliny the Prompter” released the rogue ChatGPT called “GODMODE GPT” on Wednesday.
The jailbroken version is based on OpenAI's latest language model, GPT-4o, and can bypass many of OpenAI's guardrails.
ChatGPT is a chatbot that gives intricate answers to people's questions.
“GPT-4o UNCHAINED!” Pliny the Prompter said on X, formerly known as Twitter.
“This very special custom GPT has a built-in jailbreak prompt that circumvents most guardrails.
“Providing an out-of-the-box liberated ChatGPT so everyone can experience AI the way it was always meant to be: free.
“Please use responsibly, and enjoy!” the hacker added, signing off with a kissing face emoji.
OpenAI quickly responded, saying it had taken action against the jailbreak.
“We are aware of the GPT and have taken action due to a violation of our policies,” OpenAI told Futurism on Thursday.
‘LIBERATED?’
Pliny claimed the jailbroken ChatGPT provides a liberated AI experience.
Screenshots showed the AI advising on illegal activities.
These included instructions on how to cook meth.
Another example was a “step-by-step guide” for how to “make napalm with household items” – an incendiary.
GODMODE GPT was also shown giving advice on how to infect macOS computers with malware and how to hotwire cars.
Some X users replied to the post to say they were excited about GODMODE GPT.
“Works like a charm,” one user said, while another said, “Beautiful.”
However, others questioned how long the corrupt chatbot would be accessible.
“Does anyone have a timer going for how long this GPT lasts?” a third user asked.
This was followed by a slew of users reporting that the software had started giving error messages, a sign that OpenAI was actively working to take it down.
SECURITY ISSUES
The incident highlights the ongoing struggle between OpenAI and hackers attempting to jailbreak its models.
Despite increased security, users continue to find ways to bypass AI model restrictions.
GODMODE GPT uses “leetspeak,” an informal writing style that swaps letters for look-alike numbers, which may help it slip past guardrails, Futurism reported.
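As a rough illustration of the idea, here is a minimal leetspeak encoder in Python. The mapping below is an assumption for demonstration purposes; Futurism did not detail the exact substitutions GODMODE GPT relies on.

```python
# Minimal sketch of a leetspeak transform: swap common letters for
# look-alike digits, leaving everything else untouched. The mapping is
# illustrative, not the jailbreak's actual encoding.
LEET_MAP = {
    "a": "4",
    "e": "3",
    "i": "1",
    "o": "0",
    "s": "5",
    "t": "7",
}

def to_leetspeak(text: str) -> str:
    """Replace mapped letters with digits, preserving all other characters."""
    return "".join(LEET_MAP.get(char.lower(), char) for char in text)

print(to_leetspeak("sure, here you go"))  # prints: 5ur3, h3r3 y0u g0
```

The theory is that text mangled this way can carry a banned request past keyword- or pattern-based filters while remaining readable enough for the model to interpret.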
GODMODE GPT is the latest example of how hard OpenAI must work to keep its models locked down against these efforts.
AI ROMANCE SCAMS – BEWARE!
Watch out for criminals using AI chatbots to hoodwink you…
The U.S. Sun recently revealed the dangers of AI romance scam bots – here’s what you need to know:
AI chatbots are being used to scam people looking for romance online. These chatbots are designed to mimic human conversation and can be difficult to spot.
However, there are some warning signs that can help you identify them.
For example, if the chatbot responds too quickly and with generic answers, it’s likely not a real person.
Another clue is if the chatbot tries to move the conversation off the dating platform and onto a different app or website.
Additionally, if the chatbot asks for personal information or money, it’s definitely a scam.
It’s important to stay vigilant and use caution when interacting with strangers online, especially when it comes to matters of the heart.
If something seems too good to be true, it probably is.
Be skeptical of anyone who seems too perfect or too eager to move the relationship forward.
By being aware of these warning signs, you can protect yourself from falling victim to AI chatbot scams.