November 27, 2023 by Cybersixgill

2024 Predictions: AI Will Be Used as an Attack Tool and Target

Powered by Cybersixgill IQ

In the second installment of our blog series on 2024 predictions, we explore how threat actors will turn artificial intelligence (AI) to adversarial ends, from automating large-scale cyberattacks and crafting human-like phishing email campaigns to developing malicious content targeting companies, employees, and customers.

While AI’s potential is exciting, the technology has also become a game-changer for cybercriminals, enabling them to mount attacks faster and at a grander scale. Its misuse spans industries, leaving organizations vulnerable to increasingly sophisticated attacks such as speech synthesis that impersonates people and companies, spam emails built from information pulled from social media, and even the exploitation of AI systems themselves.

In our recently released Cybersecurity in 2024: Predicting the Next Generation of Threats and Strategies, we reveal how AI will shape the industry, influence attackers, and change security strategies. In the coming year, our experts predict that AI will be used as both an attack tool and a target, as black hat hackers use AI to improve their effectiveness and legitimate uses of AI become a prominent attack vector. For instance, attackers will use AI to target users’ credentials, which can then be compromised and sold in underground markets. Additionally, malicious attacks such as data poisoning and vulnerability exploitation in AI models will gain momentum.

Of particular concern is the rise of AI-enabled social engineering, such as pretexting. Why? Generative AI can quickly and convincingly mimic the writing styles of legitimate organizations and individuals, making phishing emails seem more credible. Threat actors can also now pretext in multiple languages, targeting a larger pool of victims and conducting attacks across an expanding attack surface.

Across the globe, governments, technology companies, and industry thought leaders are growing increasingly concerned over the uncertainty and risks presented by AI’s many unknowns. For instance, Europol, the European Union’s law enforcement agency, released a report earlier this year on the impact of large language models (LLMs) such as ChatGPT on law enforcement. The report noted that ChatGPT makes it possible for those with limited English proficiency to realistically impersonate native English-speaking organizations and individuals.

As AI models become more sophisticated and the call for regulation gathers momentum, technology companies and governments must work together to minimize AI’s risks. One example of this much-needed collaboration took place at this year’s DEFCON, which hosted the largest red-teaming exercise ever conducted against a group of AI models. Supported by the White House and technology leaders including OpenAI, Google, and Meta, the event challenged hackers to coax LLMs into producing discriminatory statements, false information, and more. Additionally, in November 2023, the Federal Trade Commission (FTC) announced the Voice Cloning Challenge to encourage the development of ways to prevent, monitor, and evaluate the malicious use of voice cloning technology. The results of this type of collaboration can significantly enhance AI safety and help solve AI security challenges.

Want to learn more about Cybersixgill’s insights and predictions for 2024 to keep your assets and stakeholders safe? Download Cybersecurity in 2024: Predicting the Next Generation of Threats and Strategies.

This article was created using Cybersixgill IQ, our generative AI capability that supports teams with instant report writing, simplifies complex threat data, and provides 24/7 assistance, transforming cybersecurity for every industry and every individual, at every level.
