January 18, 2023, by Dov Lerner

Artificial intelligence is creating a new class of hacking tools

While companies and governments have been using machine learning for a while, 2022 may be remembered as the year this artificial intelligence technology became available to the masses. DALL-E, Stable Diffusion, and Midjourney enable users to input a text prompt and generate an image. ChatGPT enables users to enter text prompts and receive a generated response. These programs are free to use and feature convenient user interfaces.

Excited users worldwide have shared the results of their work. Image generators have created beautiful artwork, and ChatGPT has written works of fiction, self-help guides, and code. The only limit to these tools, it seems, is the human imagination.

These tools have prompted vast speculation about their applications and impact. Many have predicted that they will eliminate or fundamentally alter many professions. Others have sounded the alarm that we are entering a new era of cyberattacks in which threat actors can unleash their AI to attack our systems. Threat actors are already trying to test the limits and figure out what is possible.

Are we doomed to an AI cyber armageddon?

Let’s not get carried away. Instead, let’s perform a level-headed analysis.

First, it is critical to distinguish between automation and autonomy. In automation, a machine receives an input and a set of instructions, from which it produces an output. Humans have long been using tools for automation, from the printing press to computers.

Just about all cyberattacks rely on automated tools. Threat actors use scanners to perform reconnaissance and discover vulnerable IP addresses and software. They use malware to infect victim machines. And they use hacking tools for purposes ranging from spamming to cracking password hashes to launching DDoS attacks.
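To make the distinction concrete, here is a minimal sketch of automation in code, in the spirit of the scanners described above: the human supplies every input and every rule, and the machine simply executes them. The host and port list are illustrative; scanme.nmap.org is a test host whose operators explicitly permit benign connection attempts.

```python
# A minimal sketch of automation, not a real attack tool: the human supplies
# every input (host and ports) and every rule; the machine only executes.
# scanme.nmap.org is a test host whose operators permit benign scans.
import socket

HOST = "scanme.nmap.org"
PORTS = [22, 80, 443]  # the fixed instructions: exactly which ports to probe

for port in PORTS:
    try:
        # Attempt a plain TCP connection; success means the port is open.
        with socket.create_connection((HOST, port), timeout=2):
            print(f"{HOST}:{port} is open")
    except OSError:
        print(f"{HOST}:{port} is closed or filtered")
```

Nothing here decides anything: change the inputs and the program behaves exactly as instructed, no more and no less.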

In autonomy, however, a machine makes decisions according to an abstract goal. For example, a human can instruct a self-driving car to drive safely from point A to point B. It is up to the system to choose which route to take and how to avoid obstacles.

Image and text generators are autonomous. The user provides input without rules dictating the choice of any particular color or word. Their input is an abstract idea, which the machine must process to create something original yet grounded in the constraints the user has set.
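By contrast, here is a hedged sketch of what autonomy looks like at the code level: the caller supplies only an abstract goal, and the system decides everything else. It assumes the pre-1.0 openai Python client that was current when this article was written and an API key in the OPENAI_API_KEY environment variable; the prompt and model name are illustrative.

```python
# Contrast: autonomy. The user supplies only an abstract goal; the model
# decides every detail of the output. Sketch assumes the pre-1.0 openai
# Python client and an API key in the OPENAI_API_KEY environment variable.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# No rules about wording, structure, or style -- only an abstract prompt.
response = openai.Completion.create(
    model="text-davinci-003",
    prompt="Write a four-line poem about a lighthouse keeper.",
    max_tokens=100,
)
print(response["choices"][0]["text"])
```

The same one-line goal can yield countless different outputs; every decision beyond the goal is the machine's.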

What would an autonomous cyberattack look like? Imagine if a threat actor could simply enter a prompt, such as “launch a ransomware attack against Org A,” or even “gain access to a system with administrator privileges in Bank B.” The system would understand the full process of an attack and make decisions about the tactics, techniques, and procedures to use at each stage. As with a self-driving car, the attacker inputs the destination but not how to get there or how to avoid obstacles along the way.

Indeed, tools such as ChatGPT are incapable of this, largely because they were not designed for it. However, even if someone set out to create an autonomous hacking tool, they would face significant challenges:

Ethics: The organizations behind current public AI tools place limitations in their terms of service to prevent various types of harm. While imperfect, these do demonstrate an effort by providers to prevent abuse. The AI industry would generally be averse to creating an automated hacking tool. While a state actor might be able to design one, it is improbable that such a tool would ever become public and fall into the hands of a run-of-the-mill threat actor.

Training data: Autonomous AI “learns” how to make decisions by ingesting vast corpora of training data. For example, the model underlying ChatGPT has on the order of 175 billion parameters and was trained on hundreds of billions of words of text. Tesla uses the billions of miles driven by its entire fleet to train its self-driving systems. To our understanding, there is no single, central, labeled repository of cyberattacks covering the full menu of TTPs and the variety of targeted systems. Without this, an end-to-end autonomous hacking tool is essentially a non-starter.

Rage along with the machines

Even though a fully autonomous hacking tool fortunately remains untenable, autonomous systems can still perform many functions that facilitate cyberattacks. For example:

Reconnaissance and research: AI-powered searching promises to return answers to sophisticated questions. Threat actors can use AI tools to perform deep reconnaissance to select their targets and discover their weaknesses. For example, someone could query, “Find me a list of financial services companies based in New York that recently recruited junior IT engineers,” and thus produce data that could help in a social engineering attack.

Similarly, actors can use AI search functions to discover which malware may be the most effective for a specific type of attack. For example, “Where can I find a RAT that has command-and-control functions via DNS tunneling?”

Text generation: Very often, a potential victim can identify a social engineering attempt because the attacker simply does not speak their language well enough. That is, awkward phrasing, weird punctuation, and random capitalization are red flags for an English speaker. However, text output by ChatGPT sounds like it originated from a native speaker.

Moreover, users can direct AI to assume a particular style and tone. Thus, it can create an urgent request from the CEO asking for a password reset or an invoice from a vendor instructing the recipient to change a payment account. Altogether, AI-generated text can already create more convincing phishing pretexts.

Deepfakes: An attacker can use AI to create fake images, videos, and voice notes. They can use these in a well-placed social engineering attack, such as generating an official email, document, website, or voice note impersonating a company or one of its members.

Malware coding: AI is already adept at writing short scripts. An actor posted on a prominent underground forum that they used ChatGPT to code a Python infostealer.

In its current form, AI cannot create complex software (in the same way that it can write short stories but not full novels with intricate plots and developed characters). However, an attacker can daisy-chain several scripts to effective ends. In any case, we can presume that AI’s capabilities to create more complex software will improve over time.

Vulnerability research: AI excels at finding patterns and aberrations in large datasets. Attackers might be able to use it to discover bugs and zero-day vulnerabilities in software, and they might even be able to develop exploits. A vulnerability-finding AI could theoretically train on the National Vulnerability Database’s vast library of CVEs to understand how software vulnerabilities arise, and it could use repositories like Exploit-DB to understand how exploits take advantage of these flaws. A minimal sketch of this pattern-mining idea follows.
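As a rough, defensive-leaning illustration of that pattern-mining step, the sketch below tallies the most common weakness classes (CWEs) across a year of CVE records. It assumes a locally downloaded NVD JSON 1.1 feed file, as NVD published them at the time of writing; the filename is a placeholder.

```python
# A hedged sketch of pattern-finding over vulnerability data: tally the most
# common weakness classes (CWEs) in a local copy of an NVD JSON 1.1 feed,
# e.g. nvdcve-1.1-2022.json from https://nvd.nist.gov/vuln/data-feeds.
# The filename is an assumption for illustration.
import json
from collections import Counter

with open("nvdcve-1.1-2022.json", encoding="utf-8") as f:
    feed = json.load(f)

cwe_counts = Counter()
for item in feed["CVE_Items"]:
    # Each CVE record lists zero or more weakness classifications.
    for ptype in item["cve"]["problemtype"]["problemtype_data"]:
        for desc in ptype["description"]:
            if desc["value"].startswith("CWE-"):
                cwe_counts[desc["value"]] += 1

# The ten most frequent weakness classes in that year's disclosures
for cwe, count in cwe_counts.most_common(10):
    print(f"{cwe}: {count}")
```

Counting weakness classes is, of course, a far cry from discovering new flaws, but it shows the kind of structured signal a model could learn from.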

These five functions ought to be concerning. Better reconnaissance, more convincing social engineering attempts, and easily produced malware can inflict huge damage. They can enable attacks that are better targeted and more effective.

And if AI can successfully discover vulnerabilities better than a human researcher, we might experience a torrent of critical zero-days, destabilizing systems worldwide. (Fortunately, defenders can also use AI, but we’ll save this for another article.)

Even so, all of these functions require a considerable level of prior expertise in hacking. Just as a bulldozer amplifies a construction worker and a calculator amplifies a mathematician, more advanced actors are poised to reap the most significant benefits from AI. AI will not give a script kiddie APT-level capabilities. Instead, those with the greatest understanding of systems, processes, and networks will be able to direct and wield AI to the greatest effect.

The bottom line is that we should not be worried about malicious AI launching cyberattacks on its own. However, we should be concerned that malicious actors will use AI to assist them in launching cyberattacks. Providers of AI platforms must do their utmost to prevent abuse. The rest of us must follow technological developments and understand to what extent AI becomes part of the threat actor's toolbox of tactics, techniques, and procedures. We must continuously assess the risks and prepare our defenses accordingly.
