by Benji Preminger & Gilad Rosenthal

Cybersixgill IQ: Leveraging generative AI to secure customer data

At Cybersixgill, our highest priority is protecting the privacy and security of our customers' assets. That is why we never share customer asset information with OpenAI. Any generative AI process that involves an external service, such as OpenAI's GPT/ChatGPT, uses safeguards such as masking sensitive data and processing data locally, ensuring that your data remains secure with us.

How Cybersixgill leverages generative AI without compromising customer safety

Generative AI is a promising field with exciting potential for cybersecurity. However, it is still an emerging technology, and we have already seen some of the risks involved in relying on third-party vendors such as OpenAI to handle sensitive customer information. Cybersixgill takes these risks very seriously and has implemented a range of measures to ensure your data security and privacy:

  • Signing a Data Processing Addendum: We have signed a Data Processing Addendum (DPA) with OpenAI to safeguard our data and IP.

  • Minimizing Data Transfer: As a principle, we transfer only the minimum data needed. By employing efficient data reduction strategies and making smart use of local resources, we ensure that only the most essential, non-sensitive information is shared.

  • Masking Sensitive Data: We run a data masking process before sharing data with OpenAI or any other third party, replacing actual values with randomized characters or other data ‘noise’. The structure of the data remains intact for analysis while the sensitive information in the dataset stays protected (see the masking sketch after this list).

  • Sending Metadata Only: In some scenarios, we send only metadata to OpenAI. Metadata is the 'data about the data' – it doesn't include the actual content, only details about it. Our leaked credentials module is a good example: its data is stored in a dataframe, and instead of sending the data itself, we send only the column names (metadata) to OpenAI. We then receive code from OpenAI, which we run locally on our machines, extracting the necessary information without exposing sensitive credential information to the external party (see the metadata sketch after this list).

  • Using Differential Privacy: With this technique, we publicly share information about a dataset by describing group patterns within it while withholding individual-specific information. This way, each individual's privacy risk is mathematically bounded, even in the presence of external information (a minimal sketch follows this list).

  • Local Processing: We always prioritize local data processing to limit the amount of data transferred over the Internet. Before anything is sent to OpenAI, we may extract features from the data, convert it into lower-dimensional representations, or use local models to anonymize it (see the feature-extraction sketch after this list).

  • Developing Our Proprietary Models: To further tighten our data security, we build our own proprietary machine learning models. These models are trained on sensitive data, but only on our own secure servers, guaranteeing that we maintain control and ownership of both the data and the insights we derive from it.
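
To make the masking idea above concrete, here is a minimal sketch (hypothetical code, not our production pipeline) that replaces each letter and digit with a random character of the same class, so lengths, separators, and formats survive for analysis while the underlying values are destroyed:

```python
import random
import string

def mask_value(value: str) -> str:
    """Replace each letter/digit with a random character of the same
    class, preserving length, case, and separators like '@' or '.'."""
    masked = []
    for ch in value:
        if ch.isdigit():
            masked.append(random.choice(string.digits))
        elif ch.isupper():
            masked.append(random.choice(string.ascii_uppercase))
        elif ch.islower():
            masked.append(random.choice(string.ascii_lowercase))
        else:
            masked.append(ch)  # keep structural characters intact
    return "".join(masked)

# Example: an email keeps its shape but loses its content.
print(mask_value("jane.doe42@example.com"))  # e.g. "qwpr.kxu17@tmzgkqp.vbn"
```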
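
The metadata-only pattern from the leaked credentials example might look roughly like the following sketch, where the dataframe contents, the prompt, and the ask_llm() helper are all hypothetical: only column names leave the machine, and the code that comes back is reviewed and executed locally against the full data.

```python
import pandas as pd

# The sensitive rows stay local; only the schema is shared externally.
df = pd.DataFrame({
    "username": ["jdoe"],
    "password_hash": ["5f4dcc3b..."],
    "source_site": ["example.com"],
})

column_names = list(df.columns)  # metadata only, no row values

prompt = (
    "Write a pandas snippet that counts rows per source site for a "
    f"dataframe named df with columns {column_names}."
)

# ask_llm() stands in for a call to the OpenAI API (hypothetical helper);
# the code it returns is reviewed before being executed locally.
# generated_code = ask_llm(prompt)
generated_code = "result = df.groupby('source_site').size()"

exec(generated_code)  # runs on our machines; raw rows never leave them
print(result)
```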
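
For differential privacy, one standard building block is the Laplace mechanism: add noise calibrated to a query's sensitivity before releasing an aggregate. The sketch below uses illustrative data and parameters, not our production settings:

```python
import numpy as np

def dp_count(values, predicate, epsilon: float = 1.0) -> float:
    """Release a count with epsilon-differential privacy via the
    Laplace mechanism. A count query has sensitivity 1: adding or
    removing one individual changes the result by at most 1."""
    true_count = sum(1 for v in values if predicate(v))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Share "how many leaked passwords are shorter than 8 characters"
# without letting any single record be pinned down.
passwords = ["hunter2", "correcthorse", "123456", "p@ssw0rd!"]
print(dp_count(passwords, lambda p: len(p) < 8, epsilon=0.5))
```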
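
Finally, local processing often means reducing records to coarse, non-reversible features before anything crosses the wire. This sketch (with a hypothetical feature set and salt) shows the kind of summary that could be shared in place of raw data:

```python
import hashlib

def local_features(record: dict) -> dict:
    """Reduce a sensitive record to coarse, non-reversible features
    locally; only this summary would ever be sent to a third party."""
    email = record["email"]
    password = record["password"]
    return {
        # A salted hash allows joining records without revealing the email.
        "email_hash": hashlib.sha256(
            b"per-tenant-salt" + email.encode()
        ).hexdigest()[:12],
        "email_domain_length": len(email.split("@")[-1]),
        "password_length": len(password),
        "has_special_chars": any(not c.isalnum() for c in password),
    }

print(local_features({"email": "jane.doe@example.com", "password": "p@ssw0rd!"}))
```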

The protection of sensitive customer data is a paramount objective at Cybersixgill. The measures we've put in place not only safeguard the privacy and security of that data but also ensure the responsible and efficient use of AI technology.
