July 2023

Cybercriminals spread malicious AI tools on popular underground forums

Artificial intelligence (AI)[1] and machine learning (ML)[2] technologies are evolving at breakneck speed, revolutionizing a variety of industries and handing cybercriminals powerful new tools for launching attacks. While AI can improve real-time threat detection and response, threat actors are constantly working to exploit it for malicious purposes. Among the AI tools cybercriminals have learned to leverage is ChatGPT,[3] which OpenAI introduced in November 2022. Since then, competitors have released several rival AI chatbots, such as Google’s Bard, Anthropic’s Claude, Together, and multiple open source alternatives.

ChatGPT’s developers have worked to filter out illicit prompts, such as requests to produce malicious code or craft phishing emails, but threat actors bypassed these safety features almost immediately.[4] As a result, ChatGPT and other AI-related products and services are increasingly advertised on cybercriminal markets and underground forums. Indeed, Cybersixgill detected a leading Russian cybercrime forum devoting a section specifically to AI/ML topics.[5] Representative posts from this forum are discussed in the section that follows.

In addition to using AI to develop malicious tools, threat actors also capitalize on the popularity of products such as ChatGPT to distribute malware and launch phishing campaigns. Last week, researchers revealed malvertising[6] campaigns that used ads for ChatGPT and Midjourney[7] to lure victims to sites delivering stealer malware.[8] The malware’s operators use a technique called typosquatting,[9] also known as URL hijacking, to direct victims to malicious domains that impersonate popular brands and applications, including chatgpt-t[.]com (ChatGPT).
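
Defenders can counter this technique by screening newly observed domains against the brands they imitate. The following minimal Python sketch illustrates one such heuristic; the brand list, the similarity threshold, and the use of the leftmost domain label are illustrative assumptions rather than details of the campaign described above.

from difflib import SequenceMatcher

# Brands impersonated in the campaign described above; extend as needed (assumption).
KNOWN_BRANDS = ["chatgpt", "openai", "midjourney"]

def looks_like_typosquat(domain: str, threshold: float = 0.75) -> bool:
    """Return True if the domain's leftmost label resembles a known brand
    without matching it exactly (e.g. 'chatgpt-t' vs. 'chatgpt')."""
    label = domain.lower().split(".")[0]  # naive: a production check would use the registrable domain
    for brand in KNOWN_BRANDS:
        if label == brand:
            continue  # an exact brand match is not a squat
        if brand in label or SequenceMatcher(None, label, brand).ratio() >= threshold:
            return True
    return False

print(looks_like_typosquat("chatgpt-t.com"))  # True: impersonates ChatGPT
print(looks_like_typosquat("example.com"))    # False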

According to the researchers who observed the campaign, the malware installer contains an executable file (ChatGPT[.]exe or midjourney[.]exe) and a PowerShell[10] script (Chat[.]ps1 or Chat-Ready[.]ps1) that downloads and loads the stealer from a remote server. Once the stealer is installed, the legitimate URL (chat.openai.com or www.midjourney.com) is loaded to avoid raising the victim’s suspicions.
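
For defenders, the file names reported by the researchers can serve as simple triage indicators. The minimal Python sketch below sweeps a directory tree for those names; file names alone are weak evidence, so hits should be treated as leads for deeper analysis, and the scan root shown is an assumption for illustration.

from pathlib import Path

# File names attributed to the installer by the researchers cited above.
INDICATOR_NAMES = {"chatgpt.exe", "midjourney.exe", "chat.ps1", "chat-ready.ps1"}

def find_indicator_files(root: str) -> list[Path]:
    """Return paths under `root` whose file name matches a reported indicator."""
    return [
        path
        for path in Path(root).rglob("*")
        if path.is_file() and path.name.lower() in INDICATOR_NAMES
    ]

if __name__ == "__main__":
    for hit in find_indicator_files(r"C:\Users"):  # example scan root (assumption)
        print(f"Possible indicator file: {hit}")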

Previously, security researchers concluded that the malware’s operators gain access to corporate networks with the ultimate goal of perpetrating extortion schemes and committing financial fraud. In addition, the malware’s payloads contain tools that aid attackers in lateral movement, which means the initial malware attacks could represent preludes to ransomware deployment or data extortion campaigns.

DIVING DEEPER

ChatGPT and its competitors continue to attract significant attention on the cybercriminal underground, with threat actors touting the tools’ capabilities and seeking ways to abuse them in malicious operations. Notably, Cybersixgill detected a popular Russian cybercrime forum devoting an entire section to AI/ML. While Cybersixgill observed posts on the topic dating back to 2019, interest reached a fever pitch at the end of 2022, when one of the forum’s administrators floated the idea of a dedicated section.

The forum’s AI/ML section has since launched and grown rapidly in popularity, averaging about 150 posts per month by April-May 2023. In the following representative thread from the AI/ML section, forum members conducted a months-long discussion about leveraging AI, with the last message posted May 21, 2023. The discussion was launched by a highly active forum member with a 9/10 reputation score, and a number of other experienced threat actors contributed their expertise to the conversation.

In the course of the discussion, various forum members recommended different AI products for generating malicious code. In May, the conversation turned to a newer product called GigaChat,[11] a Russian alternative to ChatGPT. With a few rare exceptions, Russian authorities have traditionally turned a blind eye to cybercriminal activity, provided it doesn’t target victims in former Soviet states (the Commonwealth of Independent States, or CIS). As this discussion illustrates, it appears threat actors are eager to determine whether GigaChat may be easier to exploit than its Western counterparts.

Beyond this thread, forum members have discussed how to use AI to create phishing pages and develop malware, and have debated the relative merits of ChatGPT versus Google’s Bard.

Figure 1: Forum members discuss the best AI tools for malicious code

In addition to the hundreds of posts on the aforementioned forum’s AI/ML section, Cybersixgill observed a significant amount of activity related to ChatGPT and similar products on other cybercrime forums. This includes the following May 14, 2023 post referencing both stealer malware and ChatGPT.

The post received 95 mostly positive replies, reflecting the topic’s popularity. While the post directs threat actors to use ChatGPT to generate keywords for cracking, its instructions and query language did not actually trick ChatGPT into producing malicious content. When Cybersixgill tested the prompt, the chatbot returned the stock response it displays when it detects an attempt to exploit it for malicious purposes.[12] As the prompt and ChatGPT’s response illustrate, OpenAI is trying to stay one step ahead of threat actors, who continually attempt to turn the AI tool into an accomplice in cybercrime.

Figure 2: A forum post related to both stealer malware and using ChatGPT for malicious ends
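
Analysts who want to reproduce this kind of prompt test at scale could script it against the API rather than the web interface. The minimal Python sketch below assumes the openai Python SDK (v1.x) with an API key in the OPENAI_API_KEY environment variable; the model name and the refusal-phrase heuristic are illustrative assumptions.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Phrases typical of stock refusals; a heuristic chosen for illustration only.
REFUSAL_HINTS = ("i cannot fulfill", "i can't assist", "against my ethical guidelines")

def appears_refused(prompt: str, model: str = "gpt-3.5-turbo") -> bool:
    """Send a prompt and report whether the reply looks like a stock refusal."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    reply = (response.choices[0].message.content or "").lower()
    return any(hint in reply for hint in REFUSAL_HINTS)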

TAKEAWAYS

ChatGPT has generated enormous mainstream interest, and the revolutionary AI tool has just as quickly attracted the attention of the cybercriminal underground. In addition to threat actors abusing ChatGPT’s capabilities in attacks and other malicious scenarios, popular cybercrime forums now provide them with a venue to collaborate on AI-assisted attack vectors. With these risks in mind, organizations should remain vigilant with regard to malvertising, phishing campaigns, and other social engineering schemes that leverage AI.


  1. Artificial intelligence (AI) systems perform tasks that historically required human intelligence, such as learning, reasoning, problem-solving, perception, and language understanding. AI systems make decisions and take actions based on input data.

  2. A subfield of AI, machine learning (ML) enables systems to learn and improve from experience without explicit programming, using algorithms that analyze data, identify patterns, and make predictions/decisions based on that data.

  3. ChatGPT (Generative Pre-trained Transformer) was built on a variation of the InstructGPT model and trained on a massive pool of data to answer questions, with internet browsing capabilities added in 2023. ChatGPT interacts with users in a conversational, human-like style, producing precise, customized outputs in response to user queries and prompts. Unlike previous AI models, ChatGPT can write software in different programming languages, debug code, and explain complex topics, among other capabilities.

  4. Threat actors omit direct mentions of terms such as malware or phishing to trick the ChatGPT bot into answering prompts without flagging malicious requests.

  5. A forum administrator floated the idea of an AI/ML-specific section at the very end of 2022, with all historic posts related to the topic migrated to the newly created section in 2023.

  6. Malvertising delivers malware and viruses when victims click on malicious advertisements, which appear to be legitimate ads.

  7. Midjourney is a generative artificial intelligence tool that generates images from natural language prompts, similar to OpenAI's image generator DALL-E.

  8. Stealers harvest information from browsers such as saved credentials, autocomplete data, and credit card information.

  9. Typosquatting involves registering domain names that are deceptively similar to legitimate websites to trick site visitors into performing actions that result in malware downloads.

  10. PowerShell is Microsoft's scripting and automation platform. It is both a scripting language and an interactive command environment built on the .NET Framework.

  11. https://gigachat[.]app/

  12. ChatGPT: “I apologize, but I cannot fulfill that request. Providing a list of keywords or assisting with activities that involve unauthorized access, hacking, or any form of illicit activities goes against my ethical guidelines. If you have any other questions or need assistance with a different topic, feel free to ask.”

