Rise of the Machines: A CISO's Perspective on Generative AI

Here are three risks leaders should consider — plus, how to mitigate them.

Humans have been interacting with a version of AI through voice assistants, facial recognition software and phone photo apps for years. AI's progress in the last few months, however, has been nothing short of mind-blowing. These enhanced capabilities have fueled a meteoric rise in AI's popularity, and new generative AI services are quickly becoming essential tools for users of all kinds.

This rapidly changing landscape is making the job of a CISO increasingly challenging, with pressure from the business to use AI to increase productivity while ensuring the company's safety and security. Generative AI has the potential to revolutionize how we live and work, and companies are scrambling not to be left behind by their competitors in this shift. For example, today's AI models enable companies and employees to automate tasks, generate reports and even create and modify code. These capabilities can significantly increase productivity, both for individuals and for the whole organization. Generative AI, including ChatGPT, is here to stay, and banning it, as some have suggested, is not a viable option.

Thus, CISOs must focus on managing risks associated with AI rather than trying to restrict its use. The relevant risks with generative AI can broadly be categorized into three areas: legal and compliance, ethics and security.


Legal and compliance risks arise from the fact that the legal and regulatory landscape surrounding generative AI is still nascent. Consequently, companies may not be aware of all the legal requirements they need to comply with when using this technology. Additionally, there are concerns around data privacy, intellectual property rights and liability issues.

Ethical risks relate to the potential for generative AI to be used in harmful or discriminatory ways. For example, there are concerns around bias in training data sets, which can lead to output that misrepresents the real world or fails to meet ethical standards. There is also a risk that generative AI could be used to create deepfakes or other forms of manipulated content to spread misinformation or harm individuals.

Security risks, the third category, merit a closer look and are covered in the sections that follow.

Data leakage risk

According to recent reports, employees of a global industrial conglomerate inadvertently leaked sensitive data by using ChatGPT to check source code for errors and to summarize meeting minutes, exactly the tasks that Large Language Models (LLMs) like ChatGPT excel at. While the sensitive data entered into ChatGPT was not directly disclosed to the public, it could be used by ChatGPT's creator, OpenAI, to train future models, which in turn could disclose it indirectly in replies to future prompts.

In the specific case of ChatGPT, the retention period for prompts is 30 days. Using your prompts to train future models is enabled by default for free accounts and disabled for fee-based accounts. OpenAI also recently introduced a feature that lets free accounts turn off chat history for specific conversations. Conversations started while chat history is turned off won't be used to train the models, nor will they appear in the history sidebar.

There is, of course, also the immediate risk of accidental disclosure by ChatGPT itself. For a brief period, a bug caused ChatGPT to expose other users' prompts in its interface.

To mitigate the data leakage risks associated with LLMs when dealing with critical and sensitive information, use self-hosted copies of the models, or cloud-provided ones whose terms of use and privacy policy more closely match your organization's risk appetite. If this is not an option, enforcing limits on the amount of data fed into public models can lower the risk of accidental data leakage. This allows the use of LLMs like ChatGPT for most tasks while preventing a user from copying and pasting large amounts of proprietary data into the web form for summarization or review by the model.
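
As an illustration of the second approach, the following is a minimal Python sketch of a prompt-size guard that an internal gateway or plugin might apply before forwarding text to a public model. The character limit and the forward_to_llm callable are illustrative assumptions, not features of any particular product.

```python
# Minimal sketch of a prompt-size guard in front of a public LLM.
# MAX_PROMPT_CHARS and forward_to_llm are illustrative assumptions;
# tune the limit to your organization's risk appetite.

MAX_PROMPT_CHARS = 4_000


class PromptTooLargeError(Exception):
    """Raised when a prompt exceeds the allowed size for public models."""


def submit_prompt(prompt: str, forward_to_llm) -> str:
    """Forward a prompt to a public LLM only if it stays under the size cap."""
    if len(prompt) > MAX_PROMPT_CHARS:
        raise PromptTooLargeError(
            f"Prompt is {len(prompt)} characters; the limit is {MAX_PROMPT_CHARS}. "
            "Consider a self-hosted model for large or sensitive inputs."
        )
    return forward_to_llm(prompt)
```

A guard like this does not replace user education or contractual controls, but it makes bulk copy-and-paste of proprietary data into a public model an explicit, auditable exception rather than the default.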

Vulnerability exploitation risk

There are multiple use cases for generative AI that scans source code for vulnerabilities and produces reports. This is incredibly useful for developers and vulnerability specialists in finding and fixing issues quickly. The same capability, however, can allow malicious actors to find vulnerabilities before the defenders do. On top of that, LLMs can also generate exploit code for the vulnerabilities they discover, effectively becoming zero-day creation machines. New vulnerabilities could therefore be exploited in seconds or minutes instead of the days or weeks it took before. The zero-day attack window is expected to grow in favor of threat actors unless defenders also increase the speed at which they can produce and publish patches.

To lower the risk of vulnerabilities in your environment, it is always prudent to put extra effort into keeping your systems patched and up to date. In light of the current advances in AI, it may be worth using its productivity-enhancing capabilities to shorten the time it takes to find vulnerabilities and remediate them. To keep track of your software risk exposure, it is also advisable to implement and maintain a software bill of materials (SBoM). This should drastically reduce the time and effort required to determine whether you are exposed to emerging vulnerabilities and exploits.
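
To make the SBoM point concrete, below is a minimal Python sketch that checks a CycloneDX-style JSON SBoM against a list of packages named in a hypothetical advisory. The file name and the affected-package pairs are illustrative assumptions.

```python
import json

# Minimal sketch: match components in a CycloneDX-style JSON SBoM
# against (name, version) pairs taken from a vulnerability advisory.
# The AFFECTED set and the "sbom.json" path are illustrative assumptions.

AFFECTED = {("log4j-core", "2.14.1")}


def exposed_components(sbom_path: str) -> list[str]:
    """Return "name version" strings for SBoM components on the affected list."""
    with open(sbom_path) as f:
        sbom = json.load(f)
    hits = []
    for component in sbom.get("components", []):
        key = (component.get("name"), component.get("version"))
        if key in AFFECTED:
            hits.append(f"{key[0]} {key[1]}")
    return hits


if __name__ == "__main__":
    for hit in exposed_components("sbom.json"):
        print(f"Exposed component: {hit}")
```

With an up-to-date SBoM on hand, a check like this turns "are we affected?" from a multi-day inventory exercise into a query you can run minutes after an advisory is published.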

Phishing risk

Phishing has been one of the most common starting points for breaches for many years, so mitigating and managing this risk should be top of mind for every CISO. Spear phishing, a targeted subset of phishing, requires two components to be successful: advanced research on the target and a customized attack email. Together, these efforts typically lead to higher response rates. Both tasks have historically been labor-intensive for attackers but, unfortunately for defenders, are now easy to automate with AI, and the quality of the output is high, further lowering the effort required for spear phishing. Mitigating phishing risks requires deploying anti-phishing software, educating employees and making the reporting of suspicious emails to "phishing ponds" a functional part of your email software.
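
One of the many heuristics anti-phishing tooling applies is flagging sender domains that closely resemble your own. The short Python sketch below illustrates the idea; the domain names and the edit-distance threshold are illustrative assumptions rather than recommended values.

```python
# Minimal sketch: flag sender domains that look like, but are not,
# our own domain, a common spear-phishing indicator. The domain
# "example.com" and the distance threshold are illustrative assumptions.


def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]


def is_lookalike(sender_domain: str, own_domain: str = "example.com") -> bool:
    """Flag domains within a small edit distance of our own, excluding exact matches."""
    return sender_domain != own_domain and edit_distance(sender_domain, own_domain) <= 2


if __name__ == "__main__":
    print(is_lookalike("examp1e.com"))  # True: likely lookalike domain
    print(is_lookalike("example.com"))  # False: our own domain
```

A heuristic like this covers only one narrow signal; it complements, rather than replaces, the user education and reporting workflows described above.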

Looking forward

While generative AI has the potential to increase productivity and competitiveness for companies, CISOs must be aware of its strategic, legal, ethical and cybersecurity-related implications. Proper safeguards and countermeasures must be put in place to mitigate the risks associated with AI use. Opting for self-hosted or cloud-based language models that support protective measures can help ensure that generative AI is used safely and securely.
