A black-hat hacker circulated a malicious ChatGPT-style chatbot called WormGPT, which was then used to launch effective email phishing attacks on thousands of victims.
WormGPT is based on EleutherAI’s 2021 large language model GPT-J and was specifically designed for malicious activity, according to a report by cybersecurity firm SlashNext. Its features include unlimited character support, chat memory, and code formatting, and it has been trained on malware-related datasets.
Cybercriminals are now using WormGPT to launch a type of phishing attack known as Business Email Compromise (BEC).
“The difference [from WormGPT] is that ChatGPT has guardrails to protect against illegitimate or nefarious use cases,” David Schwed, chief operating officer at blockchain security company Halborn, told Decrypt on Telegram. “[WormGPT] does not have those guardrails, so you can ask it to develop malware for you.”
Phishing attacks are one of the oldest, as well as most common, forms of cyberattacks and are typically carried out via emails, text messages or posts on social media under a false name. In a business email attack, an attacker poses as a company executive or employee and tricks the target into sending money or confidential information.
Thanks to rapid advances in generative AI, chatbots like ChatGPT or WormGPT can compose convincing, human-like emails, making fraudulent messages harder to detect.
SlashNext says technologies like WormGPT lower the barrier to effective BEC attacks, empowering less experienced attackers and creating a larger pool of would-be cybercriminals.
To protect against corporate email attacks, SlashNext recommends organizations improve email scanning, including automated alerts for emails impersonating internal individuals and flagging emails with keywords such as “urgent” or “referral” that are typically associated with BEC.
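The kind of scanning SlashNext describes could be sketched roughly as follows. This is a minimal illustration, not SlashNext's actual product logic: the keyword list (beyond the "urgent" and "referral" examples above), the function name, and the internal-name/domain comparison are all assumptions made for the example.

```python
# Hypothetical BEC-style email flagging sketch. Keyword list and the
# impersonation heuristic are illustrative assumptions, not a vendor's logic.
BEC_KEYWORDS = {"urgent", "referral", "wire transfer", "payment"}

def flag_bec_email(sender_name: str, sender_email: str, subject: str,
                   body: str, internal_names: set, company_domain: str) -> list:
    """Return a list of reasons an email looks like a possible BEC attempt."""
    reasons = []
    text = f"{subject} {body}".lower()

    # Flag keywords commonly associated with BEC attacks.
    for keyword in sorted(BEC_KEYWORDS):
        if keyword in text:
            reasons.append(f"keyword: {keyword}")

    # Flag external senders whose display name matches an internal employee,
    # a common impersonation pattern in BEC.
    domain = sender_email.rsplit("@", 1)[-1].lower()
    if (sender_name.lower() in {n.lower() for n in internal_names}
            and domain != company_domain.lower()):
        reasons.append(f"external sender impersonating internal name: {sender_name}")

    return reasons
```

A real system would layer this with sender-authentication checks (SPF, DKIM, DMARC) and machine-learned classifiers rather than relying on simple keyword matching, which attackers can trivially evade.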
With the ever-increasing threat from cybercriminals, organizations are constantly looking for ways to protect themselves and their customers.
In March, Microsoft — one of the largest investors in ChatGPT developer OpenAI — launched a security-focused generative AI tool called Security Copilot. Security Copilot harnesses AI to improve cybersecurity defenses and threat detection.
“In a world where there are 1,287 password attacks per second, fragmented tools and infrastructure have not been enough to stop attackers,” Microsoft said in its announcement. “And even though attacks have increased 67% in the last five years, the security industry has not been able to hire enough cyber risk professionals to keep up.”