When a new innovation is introduced, media, markets, and even regular users often herald its revolutionary potential. People buzz about how it will change the world broadly, deeply, positively, and negatively.
Generally, the hype subsides. The technology might not live up to expectations, and the world’s way of doing things might prove too resilient for change. There might be issues with scaling and application. In the end, change comes gradually, as the technology undergoes iterative improvement and is implemented in a more deliberate way.
ChatGPT appears to be following this trend. At its onset, many revered the AI model as nearly omnipotent; as time has passed, limitations, inconsistencies, and outright hallucinations have emerged, quelling much of the hype.
On the dark web, threat actors were immediately excited about the prospects of the new technology and its potential in cybercrime, but the hype largely subsided in the following months. However, over the last few weeks several AI-powered tools emerged on the underground. While far more modest in capabilities than doomsday predictions of autonomous cyberattacks, they nevertheless demonstrate that threat actors are constantly seeking ways to use generative AI to their advantage.
Peaks and troughs
Analysis of underground mentions of "ChatGPT" unambiguously conveys a hype cycle. Specifically, mentions peaked on February 6, with a total of 10,203 items discussing ChatGPT on that day. In the following months, mentions declined to only thousands of items per day. As expected, once the technology’s novelty wore off and its limitations came to light, underground actors discussed it less.
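The trend analysis described above amounts to a simple daily aggregation of mention counts. The sketch below illustrates the general approach; the post data, field names, and dates are entirely hypothetical and do not reflect the actual collection pipeline.

```python
from collections import Counter
from datetime import date

# Hypothetical sample of timestamped underground posts.
# Real data would come from a dark-web collection pipeline;
# these records are illustrative only.
posts = [
    {"date": date(2023, 2, 5), "text": "anyone tried ChatGPT for phishing?"},
    {"date": date(2023, 2, 6), "text": "ChatGPT jailbreak thread"},
    {"date": date(2023, 2, 6), "text": "ChatGPT is overhyped imo"},
    {"date": date(2023, 2, 7), "text": "selling ChatGPT accounts"},
]

# Count mentions per day (case-insensitive) and find the peak day.
daily = Counter(p["date"] for p in posts if "chatgpt" in p["text"].lower())
peak_day, peak_count = daily.most_common(1)[0]
print(peak_day, peak_count)
```

Plotting `daily` over a full year would produce a curve like the one in Figure 2, with a sharp peak followed by a long decline.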
Figure 2: Number of mentions of “ChatGPT” on dark web forums in the past year.
An examination of more recent discussions among threat actors about ChatGPT echoes much of the disappointment and criticism of the technology expressed on the surface web.
For example, in a thread from a well-known Russian underground forum (Figure 3), threat actors discussed whether Bard or ChatGPT is the better AI tool. One user asserted that all AI tools have flaws and that GPT-4 is actually worse than version 3.5 (on which ChatGPT is based). Another actor added that AI tools are not always accurate and should not be relied upon completely.
Figure 3: A discussion between threat actors on a well-known Russian underground forum indicates doubt about AI technology
In another thread (Figure 4), a threat actor called AI tools "not that great" and asserted that genuine artificial intelligence remains a long way off.
Figure 4: On an underground forum, an actor asserts that today’s various AI tools are just a “fad” and distant from actual intelligence.
AI-Powered Hacking Tools
The decline in hype does not mean that threat actors have given up on AI tools altogether. In fact, three new AI-powered hacking tools have emerged on the underground in recent weeks: WormGPT, FraudGPT, and DarkBARD. However, these tools are quite simplistic. WormGPT, for example, offers little beyond allegedly allowing its users to evade ChatGPT's security restrictions, enabling them to generate, for example, phishing messages and malware code (Figure 5).
Figure 5: A screenshot from a well-known Russian underground forum advertising "WormGPT", allegedly capable of facilitating malicious activities using ChatGPT-like technology. A subscription to the product can be bought for 100 EUR/month or 500 EUR/year.
While a malicious actor could theoretically benefit from unrestricted prompting, methods of evading ChatGPT's security measures are already well documented, and in any case, there is a considerable difference between generating a few lines of code and producing an entire malicious program. Indeed, many actors expressed doubt about the tool's true potential for facilitating malicious activities, and one even speculated that WormGPT is a honeypot (Figure 6).
Figure 6: A screenshot from a well-known underground forum in which a threat actor asserts that WormGPT does not offer much and even speculates that it might be a honeypot.
When ChatGPT was released, many industry analysts announced a new era of AI-powered cyberattacks, and many underground actors excitedly discussed how they could use the technology for malicious purposes. Mirroring broader trends, the hype subsided into disappointment, and underground discussions of ChatGPT diminished.
It is too soon to tell if the recent emergence of three AI-related hacking tools is the beginning of a new stage, in which underground actors create meaningful malicious applications of AI technology. What is certain, however, is that three tools pale in comparison to the thousands of hacking tools available on the underground, and that these tools offer only marginal utility, falling far short of predictions of autonomous cyberattacks.
Lest we become complacent, this story is not over. The technology continues to improve, and threat actors will continue to develop ways to exploit it. Like with any threat, defenders must conduct level-headed analysis and broad monitoring to understand adversarial tactics, techniques, and procedures and protect their organizations accordingly.