August 2023

New ‘WormGPT’ crimeware generates malicious code and phishing lures

A new artificial intelligence tool called ‘WormGPT,’ spreading on the underground, claims it can create malware and produce convincing phishing lures. The tool allegedly offers all the benefits of legitimate AI models, such as ChatGPT, without restrictions on malicious activities. Cybersixgill observed threat actors advertising WormGPT on a Russian cybercrime forum, in addition to extensive chatter on one of the top English-language underground forums.

THE HEADLINE

Since the roll-out of OpenAI’s ChatGPT[1] and similar deep language learning models,[2] threat actors have attempted to abuse legitimate artificial intelligence (AI) platforms, which block illegal and harmful requests,[3] such as producing malicious code or creating phishing emails. While some threat actors have devised workarounds for ChatGPT,[4] others have developed alternate AI tools that are designed with malicious activities in mind.

To that end, Cybersixgill recently observed threat actors advertising a new AI-based cybercrime tool called WormGPT, which is capable of facilitating phishing campaigns and business email compromise[5] (BEC) attacks. WormGPT’s developers also claim the tool can generate malware code, a feature that could improve attackers’ efficiency and significantly lower the barrier to entry for novice threat actors.

Touted by its developers as the “biggest enemy of ChatGPT” and capable of “doing all sorts of illegal stuff,” WormGPT could prove to be a major asset in the BEC attack chain. Among the primary obstacles in launching BEC campaigns are errors in syntax, word choice, sentence structure, and spelling, which tip off recipients that messages are not from their alleged senders. Language models like WormGPT eliminate these issues by producing natural-sounding, realistic emails that can be hard to distinguish from genuine human-written messages. The ultimate result could be higher success rates for phishing and BEC campaigns, increasing the risk posed by these attack vectors.

In addition to aiding individual threat actors, WormGPT could significantly improve the products and services sold by Malware-as-a-Service[6] (MaaS) and Phishing-as-a-Service[7] (PaaS) operations. Such services could use the tool in the manner described above to assist attacks launched by affiliates and licensees. These operations could also use WormGPT’s malware code generation capabilities to help develop cyber weapons.

Beyond purpose-built tools such as WormGPT, threat actors continue to develop workarounds for legitimate AI tools’ safety features, and new schemes to abuse tools like ChatGPT and Google Bard are regularly discussed on the underground. For example, threat actors were observed earlier this year abusing ChatGPT’s API,[8] hacking ChatGPT accounts, and developing prompts to trick AI models into producing malicious content.

DIVING DEEPER

In the July 14, 2023 post below (Figure 1), a member of a popular Russian-language underground forum advertised the aforementioned ChatGPT knockoff, providing a lengthy list of features but emphasizing its suitability for cybercrime (blackhat) activity and malware coding. The forum member also highlighted the product’s prioritization of privacy and operational security (OPSEC), claiming the tool does not store conversations that could otherwise be collected by law enforcement if the operation were infiltrated.

In response to a reply asking whether WormGPT is based on LLaMA[9] or Alpaca,[10] the forum member identified the open-source AI model GPT-J-6B[11] as its foundation. Researchers recently demonstrated how GPT-J-6B can be modified to spread disinformation as part of a technique called supply chain poisoning. In that proof of concept, dubbed PoisonGPT, a tampered model was uploaded under a typosquatted[12] version of a legitimate entity’s name, diverting unsuspecting users to malicious assets.
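As a defensive illustration, the minimal Python sketch below shows one way consumers of open-source models might screen a repository’s publisher before downloading weights; it assumes the huggingface_hub package, and the repository name is illustrative.

    # Minimal sketch: verify a model repository's publisher before download.
    # Assumes the huggingface_hub package; the repo name is illustrative.
    from huggingface_hub import model_info

    EXPECTED_AUTHOR = "EleutherAI"  # the legitimate publisher of GPT-J-6B

    def publisher_matches(repo_id: str) -> bool:
        """Return True only if the repo is owned by the expected publisher."""
        return model_info(repo_id).author == EXPECTED_AUTHOR

    # A typosquatted repository (e.g., one dropped letter in the org name)
    # fails this check even if its model card looks identical to the original.
    print(publisher_matches("EleutherAI/gpt-j-6b"))

Checking downloaded weights against hashes published by the legitimate source would provide a stronger guarantee than a name comparison alone.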

The ad below includes WormGPT’s subscription model, with monthly (€100 for WormGPT v1) and annual (€550 for WormGPT v2) licenses. A private build of WormGPT v2[13] is also available for €5,000, with payment requested in five enumerated cryptocurrencies. To illustrate the tool’s value, the advertiser also provided a link to images of sample prompts (Figure 2), which appeared to show the AI platform generating malicious code and answering questions about malware development.

The contact provided by the advertiser linked to a channel on an encrypted instant messaging platform that offers assistance with distributed denial-of-service[14] (DDoS) attacks, commonly referred to as a DDoS-for-hire service. The channel listed prices, durations, and samples of successful DDoS attacks. Given the multiple attack types in which WormGPT’s promoters/developers appear to be involved, the operation bears the hallmarks of a sophisticated cybercrime organization.

In general, the response to WormGPT on the forum was positive, with members expressing interest in using the tool. One member also congratulated the tool’s developers for attracting coverage from open-source intelligence (OSINT) news sites.

In addition to the forum ad for WormGPT, Cybersixgill also observed a July 15, 2023 discussion about the tool on an English-language forum (Figure 3). Forum members appeared highly interested in trying the new tool and sought details about where it could be purchased. In response, a member shared the ad containing details about the tool.

This discussion reflects intense interest in ChatGPT-like products on the underground. It also shows the speed with which news travels among cybercriminals when a tool like WormGPT is launched. Notably, neither of the forum posts in this report attracted replies with explicit endorsements of WormGPT. While the lack of testimonials from users at this stage may merely be a function of the tool’s recent release, it may also cast doubt on WormGPT’s ability to deliver on its promises.

TAKEAWAYS

With the emergence of tools like WormGPT, organizations must prepare for a heightened risk of phishing campaigns, BEC attacks, and other malicious activities. In addition, malicious actors will likely continue attempting to abuse ChatGPT’s capabilities to produce malicious content. With these risks in mind, organizations should remain vigilant against legitimate-looking phishing emails and other social engineering schemes.


1 ChatGPT was introduced in November 2022 by OpenAI, with a conversational, human-like style that produces customized outputs to user prompts. Unlike previous AI models, ChatGPT can write software in different programming languages, debug code, and explain complex topics, among other capabilities.

2 Deep language learning models are a type of machine learning model (MLM) that processes human language, training artificial neural networks to learn complex patterns and identify relationships between data.

3 ChatGPT is equipped with security protocols to identify inappropriate requests from users.

4 Among the workarounds is omitting direct mentions of harmful terms (such as malware, hacking, phishing, etc.), which can potentially trick the ChatGPT bot into answering prompts without flagging the malicious request.

5 In business email compromise (BEC) attacks, threat actors gain access to companies’ email accounts, often through phishing or other methods, using them to send emails to employees, clients, or suppliers requesting money transfers, payment of invoices, or disclosure of sensitive information. The emails impersonate legitimate entities and are designed to appear authentic, often including precise instructions, company logos, and other convincing elements. BEC attacks can result in significant financial losses for businesses and individuals.

6 Malware-as-a-service (MaaS) offers malware for sale or rent to cybercriminals of all proficiency levels, who can then use it to launch attacks on targeted systems.

7 As its name implies, Phishing-as-a-Service (PaaS) operations provide ready-to-go phishing tools for a fee.

8 ChatGPT’s application programming interface (API) allows developers to interact with the model programmatically to integrate ChatGPT into applications, products, or services.
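For context, the following is a minimal sketch of such programmatic access using OpenAI’s 2023-era Python SDK; the model name, prompt, and key handling are illustrative.

    # Minimal sketch of calling ChatGPT through OpenAI's API (openai<1.0 SDK).
    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]  # key supplied by the caller

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Explain business email compromise."}],
    )
    print(response["choices"][0]["message"]["content"])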

9 LLaMA (Large Language Model Meta AI) is a collection of large language models developed by Meta, ranging from 7 billion to 65 billion parameters, which are trained on publicly available datasets. LLaMA models can be applied to various language-related tasks.

10 Alpaca is a smaller open-source AI language model based on LLaMA and developed by computer scientists at Stanford University. Alpaca has around 7 billion parameters and is designed to be more accessible and cost-effective. It can be fine-tuned for specific tasks and can run on devices like Raspberry Pi computers and smartphones.

11 GPT-J-6B (aka GPT-J) is EleutherAI’s open-source generative pre-trained transformer language model, which is designed to produce human-like text that continues from prompts.
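A minimal sketch of this prompt-continuation behavior via the Hugging Face transformers library; the prompt is illustrative, and loading the full model requires roughly 24 GB of memory in 32-bit precision.

    # Minimal sketch: continue a prompt with GPT-J-6B via transformers.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6b")
    model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6b")

    inputs = tokenizer("Open-source language models are", return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))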

12 Typosquatting, also known as URL hijacking, involves registering domain names that are deceptively similar to legitimate websites to trick site visitors into performing actions that capture data or result in malware downloads.
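For illustration, a minimal defensive sketch that enumerates single-edit look-alike variants of a domain so defenders can monitor for suspicious registrations; the domain and edit rules are illustrative, not exhaustive.

    # Minimal sketch: enumerate single-edit typo variants of a domain for
    # monitoring. Real tooling also covers homoglyphs, hyphenation, and
    # alternate TLDs; this handles only simple name.tld domains.
    import string

    def typo_variants(domain: str) -> set:
        name, _, tld = domain.partition(".")
        variants = set()
        for i in range(len(name)):
            variants.add(name[:i] + name[i + 1:] + "." + tld)  # deletion
            for c in string.ascii_lowercase:
                variants.add(name[:i] + c + name[i + 1:] + "." + tld)  # substitution
        variants.discard(domain)
        return variants

    print(sorted(typo_variants("example.com"))[:10])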

13 According to the developers, “WormGPT V2 is the second version of the GPT, [with] a lot of improvements and upgrades. WormGPT v2 will always be 1/2 months ahead [of] WormGPT v1 in terms of dataset updates and neural updates.”

14 Distributed denial-of-service (DDoS) attacks flood targeted computers with traffic, preventing legitimate users from accessing websites and disrupting internet connections.
