Hackers are taking advantage of a new, uncensored AI chatbot that “pushes the boundaries of ethical AI use”.
In 2023, we saw the emergence of the first criminally focused generative AI models, with WormGPT grabbing headlines for its ability to assist hackers in creating malicious software.
WolfGPT and EscapeGPT soon followed, and now security researchers have discovered a new AI-based tool helping hackers create malware – GhostGPT.
According to security experts at Abnormal Security, GhostGPT most likely takes advantage of a jailbroken version of OpenAI’s ChatGPT chatbot, or a similar large language model, with all the ethical safeguards removed.
“By eliminating the ethical and safety restrictions typically built into AI models, GhostGPT can provide direct, unfiltered answers to sensitive or harmful queries that would be blocked or flagged by traditional AI systems,” Abnormal Security said in a 23 January blog post.
According to its promotional spiel, GhostGPT offers four key features: uncensored AI, lightning-fast processing, no log-keeping to reduce the evidence trail, and – probably most important of all to its customers – ease of use.
“Access and use our AI directly through our easy-to-use Telegram bot,” GhostGPT said.
“Purchase and start using it right within Telegram.”
While the makers of GhostGPT attempt to frame it as a legitimate cyber security tool, it is largely advertised on criminal hacking forums and has a clear focus on creating business email compromise (BEC) scams.
“To test its capabilities, Abnormal Security researchers asked GhostGPT to create a DocuSign phishing email,” Abnormal Security said.
“The chatbot produced a convincing template with ease, demonstrating its ability to trick potential victims.”
GhostGPT can also be used to write malware and develop exploits.
One of the major implications GhostGPT shares with its predecessors is that it significantly lowers the barrier to entry for criminal cyber activity, while also making scams such as BEC harder to detect. Many scammers speak English as a second language, which has historically made their scams easier to spot in the wild – but generative AI's stronger command of the language removes that tell, making scam content harder to identify.
It’s also faster and more convenient to use.
“The convenience of GhostGPT also saves time for users. Because it’s available as a Telegram bot, there is no need to jailbreak ChatGPT or set up an open-source model,” Abnormal Security said.
“Users can pay a fee, gain immediate access, and focus directly on executing their attacks.”
David Hollingworth has been writing about technology for over 20 years, and has worked for a range of print and online titles in his career. He is enjoying getting to grips with cyber security, especially when it lets him talk about Lego.