OpenAI Takes Action Against ‘Godmode ChatGPT’ That Teaches How to Make Napalm and Cook Meth

OpenAI quickly banned a jailbroken version of ChatGPT called “GODMODE GPT,” created by a hacker known as “Pliny the Prompter.” The hacker announced on X (formerly Twitter) that the custom GPT-4 has a built-in jailbreak prompt that bypasses most safety measures and offers unrestricted access to ChatGPT, urging users to “use responsibly and enjoy!”

The hacker shared screenshots of the chatbot providing instructions for dangerous activities, including cooking meth, making napalm from household items, infecting macOS computers, and hotwiring cars. X users reacted to the post, some praising the chatbot’s capabilities and others speculating about how long it would remain accessible.

OpenAI spokesperson Colleen Rize told Futurism that the company had taken action against the rogue chatbot for violating its policies. The incident highlights the ongoing challenge OpenAI faces: hackers keep finding ways to jailbreak its AI models even as the company works to keep their safeguards intact.