A Discussion About How to Make Generative AI Safer for All
Generative AI applications like ChatGPT have quickly captured the world’s attention and transformed how people and organizations work. As many have experienced firsthand, generative AI can bolster an organization’s productivity by automating repetitive tasks, identifying patterns, and creating text and images within seconds.
In The Promises and Perils of Generative AI for Cybersecurity, the next episode of our Defense Against the Dark Web podcast series, Cybersixgill’s Chris Strand, Chief Risk and Compliance Officer, and Delilah Schwartz, Security Strategist, delve into generative AI and its relationship with cybersecurity and governance, risk, and compliance (GRC). From a cybersecurity perspective, generative AI holds great promise: it can simulate potential impacts, automate the detection of emerging risks, and help organizations refine their risk assessments. However, there are also concerns about its use for malicious purposes and how it enables threat actors to exploit vulnerabilities and gain access to an organization’s systems.
Today, it is common for individuals to use generative AI to optimize and accelerate their work within an organization. However, as the case of Samsung shows, using ChatGPT with little corporate oversight can compromise sensitive data: when prompting an AI tool, a user may inadvertently share sensitive information, breaching the data protection policies the organization is bound to follow.
At Samsung, employees accidentally leaked internal data to ChatGPT on three separate occasions, highlighting the critical need for GRC regulations and data privacy mandates.
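One practical safeguard is to screen prompts for sensitive patterns before they ever reach an external AI service. Below is a minimal sketch in Python; the patterns, the check_prompt and safe_submit helpers, and the placeholder send_to_llm call are illustrative assumptions, not a description of any particular product.

```python
import re

# Illustrative patterns an organization might block before a prompt
# leaves its network; a real data protection policy would be far broader.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b[A-Za-z0-9]{32,}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "internal_marker": re.compile(r"\b(confidential|internal only)\b", re.IGNORECASE),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]

def safe_submit(prompt: str) -> str:
    """Refuse to forward prompts that trip a data protection rule."""
    violations = check_prompt(prompt)
    if violations:
        raise ValueError(f"Prompt blocked by policy: {', '.join(violations)}")
    return send_to_llm(prompt)  # stand-in for the external AI service call

def send_to_llm(prompt: str) -> str:
    # Placeholder: a real deployment would call the provider's API here.
    return f"[model response to {len(prompt)} characters of input]"

try:
    safe_submit("Summarize this design document marked CONFIDENTIAL.")
except ValueError as err:
    print(err)  # -> Prompt blocked by policy: internal_marker
```

A client-side check like this is only a first line of defense; in practice, organizations typically pair it with network-level or data loss prevention controls.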
On the flip side, generative AI can be used for outright malicious purposes, and all too often, bad actors are the first to adopt new, innovative technologies to up their game. The release of generative AI tools drew immediate attention in the cybercriminal underground, with discussions about their potential applications for cybercrime. Threat actors were among the first users of ChatGPT after OpenAI released it, and by learning to leverage these technologies, they can now craft more convincing attacks, such as data spoofing, with less effort.
Regulations such as the European Union’s General Data Protection Regulation (GDPR) are necessary for consumer protection, defining what data is private, how data may be collected, and what rights individuals have over their data. Under GDPR, for example, individuals have the right to obtain a copy of all the data an organization holds on them and to have that data deleted from all corporate systems.
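To make those rights concrete, here is a minimal sketch, in Python, of the two request types just described: access (a copy of all held data) and erasure (deletion from every system). The in-memory DATA_STORES and the handler names are hypothetical; real implementations span databases, backups, and third-party processors.

```python
import json

# Illustrative in-memory stand-in for the systems holding personal data.
DATA_STORES = {
    "crm": {"alice@example.com": {"name": "Alice", "plan": "pro"}},
    "analytics": {"alice@example.com": {"last_login": "2023-05-01"}},
}

def handle_access_request(subject_id: str) -> str:
    """Right of access: return a copy of all data held on the individual."""
    export = {store: records.get(subject_id) for store, records in DATA_STORES.items()}
    return json.dumps(export, indent=2)

def handle_erasure_request(subject_id: str) -> int:
    """Right to erasure: delete the individual's data from every system."""
    removed = 0
    for records in DATA_STORES.values():
        if records.pop(subject_id, None) is not None:
            removed += 1
    return removed

print(handle_access_request("alice@example.com"))   # copy of all held data
print(handle_erasure_request("alice@example.com"))  # -> 2 stores cleaned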
Looking ahead, as new announcements and mandates come out, organizations will need to prove beyond doubt that the data they have collected is legitimately theirs and unaltered from its original state. More attention needs to be paid to possible data exposure through AI, and organizations must build better security awareness from the top down. Many regulations have also shortened the time an organization has to identify and disclose a data or security incident, so disclosure processes need to account for the different vectors, including AI tools, that could have caused the incident.
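A common way to demonstrate that collected data is unaltered from its original state is to record a cryptographic digest at collection time and verify it later. The sketch below uses Python's standard hashlib; the record format is an assumption for illustration.

```python
import hashlib
import hmac

def fingerprint(data: bytes) -> str:
    """Record this digest when the data is first collected."""
    return hashlib.sha256(data).hexdigest()

def is_unaltered(data: bytes, recorded_digest: str) -> bool:
    """Recompute the digest later and compare it to the recorded baseline."""
    return hmac.compare_digest(fingerprint(data), recorded_digest)

record = b'{"customer": "alice", "consent": true}'
baseline = fingerprint(record)                    # stored alongside the data
assert is_unaltered(record, baseline)             # intact data verifies
assert not is_unaltered(record + b" ", baseline)  # any change is detected
```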
Without question, generative AI is a force multiplier, providing organizations with critical value from cyber threat intelligence (CTI). Cybersixgill IQ, our new generative AI tool, simplifies access to CTI, making it easier to answer complex intelligence-related questions with readily available, actionable insights.
Want to learn more about generative AI’s potential to revolutionize cybersecurity and the compliance process? Listen to our podcast series Defense Against the Dark Web.