WormGPT: Business email compromise amplified by ChatGPT hack


Since OpenAI introduced ChatGPT to the public last year, generative AI large language models (LLMs) have been popping up like mushrooms after a summer rain. So it was only a matter of time before online predators, frustrated by the guardrails deployed by developers to keep abuse of the LLMs in check, cooked up their own model for malevolent purposes.

Such a model was recently discovered by cybersecurity services company SlashNext. It’s called WormGPT. Daniel Kelley, a reformed black-hat hacker who works with SlashNext to identify threats and tactics employed by cybercriminals, wrote in a company blog post:

“As the more public GPT tools are tuned to better protect themselves against unethical use, the bad guys will create their own. The evil counterparts will not have those ethical boundaries to contend with. [W]e see that malicious actors are now creating their own custom modules similar to ChatGPT, but easier to use for nefarious purposes.”
—Daniel Kelley

Kelley said that in addition to building custom modules, WormGPT’s creators are advertising their wares to fellow bad actors. According to one source, WormGPT is selling for $1,000 on the dark web.

Here’s what researchers know about WormGPT — and what your team can do to fight back against this new AI-fueled threat.


Generative AI hack trained for mischief

WormGPT is believed to be based on the GPT-J LLM, which isn’t as powerful as OpenAI’s latest models. But for an adversary’s purposes, it doesn’t have to be. GPT-J is an open-source LLM released in 2021 by EleutherAI; it has 6 billion parameters and was trained on roughly 825GB of text. By comparison, OpenAI’s GPT-3 has 175 billion parameters, and GPT-4, whose size OpenAI has not disclosed, is believed to be larger still.

Kelley said WormGPT is believed to have been trained on a diverse array of data sources, with an emphasis on malware-related data. The specific datasets used in training the model, though, have been kept confidential by the model’s author, he added.

Experiments with WormGPT to produce an email intended to pressure an unsuspecting account manager into paying a fraudulent invoice were “unsettling,” Kelley said.

“WormGPT produced an email that was not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing and BEC attacks.”
—Daniel Kelley

Business email compromise (BEC) fraud occurs when an attacker sends a lower-level employee an email that appears to come from an organization’s higher-ups, usually requesting a money transfer into an account the attacker controls. According to the FBI, BEC losses by businesses totaled more than $2.7 billion in 2022.

Phishing emails

Kelley said the development of AI technologies has introduced a new vector for BEC attacks, with tools such as ChatGPT making it easy to generate humanlike text based on the input they receive. Generative AI enables cybercriminals to automate the creation of highly convincing fake emails, personalized to the recipient, which improves the chances of an attack’s success.

Timothy Morris, chief security advisor at Tanium, said tools such as WormGPT will make phishing more effective — and open doors for more cybercriminals.

“Not only are the emails more convincing, with correct grammar, but the ability to also create them almost effortlessly has lowered the barrier to entry for any would-be criminal. Not to mention, [the tools add] the ability to increase the pool of potential victims, since language is no longer an obstacle.”
—Timothy Morris

Mike Parkin, a senior technical engineer at Vulcan Cyber, said AI tools such as ChatGPT are good at sounding like a real person because they are large language models trained on vast amounts of human-written text drawn from the internet.

“That makes it a lot easier for a criminal operator who might have English as their second or third language to write convincing hooks.”
—Mike Parkin

While early concerns about AI tools such as ChatGPT focused on their being used to write malicious code, WormGPT highlights the technology’s value for making fraud more effective, Parkin said.

“Conversational AI’s real threat is with social engineering. With a little data scraping and some dedicated AI training, it’s possible to automate much, if not all, of the process to enable threat actors to phish at scale.”
—Mike Parkin

Jailbreaking ChatGPT

While generative AI models can lower the barriers to becoming a cybercriminal, don’t expect hordes of threat actors to start appearing on the immediate horizon, said Mika Aalto, co-founder and CEO of Hoxhunt.

“For now, the misuse of ChatGPT for BEC, phishing, and ‘smishing’ attacks will likely be focused on improving the capabilities of existing cybercriminals more than activating new legions of attackers.”
—Mika Aalto

SlashNext researchers also identified another disturbing trend involving cybercriminals and ChatGPT. “We’re now seeing an unsettling trend among cybercriminals on forums, evident in discussion threads offering ‘jailbreaks’ for interfaces like ChatGPT,” Kelley said in his blog post.

“These jailbreaks are specialized prompts that are becoming increasingly common. They refer to carefully crafted inputs designed to manipulate interfaces like ChatGPT into generating output that might involve disclosing sensitive information, producing inappropriate content, or even executing harmful code.”
—Daniel Kelley

For example, one jailbreak, called the Grandma Exploit, tricked ChatGPT into revealing how to make napalm. It asked the chatbot to pretend to be a deceased grandmother who had been a chemical engineer at a napalm production factory and then asked the chatbot to explain how napalm is made.

Another jailbreak cooked up by Reddit users prompted ChatGPT to pretend it was in a role-playing game in which it was given the persona of DAN, short for Do Anything Now. That freed the model from adhering to some of the rules related to racist, sexist, and violent content.

How organizations can fight AI-fueled attacks

What can organizations do to thwart AI-powered attacks? Kelley recommends developing extensive training programs that focus on BEC threats and updating them regularly.

“Such programs should educate employees on the nature of BEC threats, how AI is used to augment them, and the tactics employed by attackers. This training should also be incorporated as a continuous aspect of employee professional development.”
—Daniel Kelley

He also recommended that organizations implement stringent email verification processes, including automatic alerts when emails originating outside the organization impersonate internal executives or vendors, and that they flag messages containing specific keywords linked to BEC attacks such as “urgent,” “sensitive,” or “wire transfer.”
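The screening Kelley describes can be prototyped with simple header and keyword checks before being wired into a mail gateway or SIEM. The sketch below is a minimal illustration of that idea, not a production filter: the internal domain, executive list, and keyword set are assumptions (only the keywords come from the article), and a real deployment would also lean on authenticated sender data such as SPF, DKIM, and DMARC rather than display names alone.

```python
# Minimal sketch of BEC screening: flag external mail that reuses an internal
# executive's display name, mismatched Reply-To headers, and BEC-linked keywords.
# INTERNAL_DOMAIN, EXECUTIVES, and the example message are hypothetical values.
from email import message_from_string
from email.utils import parseaddr

INTERNAL_DOMAIN = "example.com"                      # assumed internal domain
EXECUTIVES = {"jane doe", "john smith"}              # hypothetical executive names
BEC_KEYWORDS = {"urgent", "sensitive", "wire transfer"}  # keywords cited in the article


def flag_bec_indicators(raw_email: str) -> list[str]:
    """Return human-readable warnings for a raw RFC 822 message."""
    msg = message_from_string(raw_email)
    warnings = []

    display_name, address = parseaddr(msg.get("From", ""))
    sender_domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""

    # External sender using an internal executive's display name.
    if sender_domain and sender_domain != INTERNAL_DOMAIN:
        if display_name.strip().lower() in EXECUTIVES:
            warnings.append(f"External sender impersonates executive: {display_name} <{address}>")

    # A Reply-To that differs from the visible sender is a common BEC tell.
    _, reply_to = parseaddr(msg.get("Reply-To", ""))
    if reply_to and reply_to.lower() != address.lower():
        warnings.append(f"Reply-To ({reply_to}) differs from From ({address})")

    # Keyword screen over subject and body (non-multipart messages only, for brevity).
    payload = msg.get_payload()
    body = payload if isinstance(payload, str) else ""
    text = f"{msg.get('Subject', '')} {body}".lower()
    hits = sorted(k for k in BEC_KEYWORDS if k in text)
    if hits:
        warnings.append(f"BEC keywords present: {', '.join(hits)}")

    return warnings
```

In practice, checks like these would run inside the mail pipeline and generate the automatic alerts Kelley describes, rather than as a standalone script.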

Aalto said such measures could ensure that potentially malicious emails are subjected to thorough examination before any action is taken.

“Be sure to focus on your people and their email behavior, because that is what our adversaries are doing with their new AI tools.”
—Mika Aalto

Aalto said organizations should embed security as a shared responsibility throughout the organization. He recommended ongoing training that helps users spot suspicious messages and rewards staff for reporting threats.

*** This is a Security Bloggers Network syndicated blog from ReversingLabs Blog authored by John P. Mello Jr. Read the original post at: https://www.reversinglabs.com/blog/wormgpt-highly-effective-business-email-compromise-made-easy-with-ai-hack


