Humans have been interacting with forms of AI through voice assistants, facial recognition software and phone photo apps for years. AI's progress in the last few months, however, has been nothing short of mind-blowing. With these enhanced capabilities came a meteoric rise in AI's popularity, and the new generative AI services are quickly becoming essential tools for users of all kinds.
This rapidly changing landscape is making the job of a CISO increasingly challenging, with pressure from the business to use AI to increase productivity while ensuring the company's safety and security. Generative AI has the potential to revolutionize how we live and work, and companies are scrambling not to be left behind by their competitors in this shift. For example, the AI models of today enable companies and employees to automate tasks, generate reports, and even create and modify code. These capabilities can significantly increase productivity, both for individuals and for the organization as a whole. Generative AI, including ChatGPT, is here to stay, and banning it, as some have suggested, is not a viable option.
Thus, CISOs must focus on managing the risks associated with AI rather than trying to restrict its use. The risks of generative AI fall broadly into three areas: legal and compliance, ethics and security.
Legal and compliance risks arise because the legal and regulatory landscape surrounding generative AI is still nascent. Consequently, companies may not be aware of all the legal requirements they must comply with when using this technology. There are also concerns around data privacy, intellectual property rights and liability.
Ethical risks relate to the potential for generative AI to be used in harmful or discriminatory ways. For example, bias in training data sets can lead to output that neither reflects the real world nor meets basic ethical standards. There is also a risk that generative AI could be used to create deepfakes or other forms of manipulated content to spread misinformation or harm individuals.
We will go into more detail on the security risks below.
Data Leakage Risk
It was recently reported that employees of a global industrial conglomerate inadvertently leaked sensitive data by using ChatGPT to check source code for errors and to summarize meeting minutes. These are exactly the kinds of tasks that large language models (LLMs) like ChatGPT excel at. While the data entered into ChatGPT was not directly disclosed to the public, it could be used by ChatGPT's creator OpenAI to train future models, which could in turn disclose it indirectly in replies to other users' prompts.
In the specific case of ChatGPT, prompts are retained for 30 days. Using your prompts to train future models is enabled by default for free accounts and disabled for fee-based accounts. OpenAI also recently introduced a feature that lets you disable chat history: conversations started while chat history is disabled won't be used to train the models and won't appear in the history sidebar.
There is, of course, also the immediate risk of accidental disclosure by ChatGPT itself. For a brief period, a bug caused ChatGPT to expose the titles of other users' conversations in the user interface.
Vulnerability Exploitation Risk
Generative AI can scan source code for vulnerabilities and produce reports, which is incredibly useful for developers and vulnerability specialists who need to find and fix issues quickly. The same capability, however, allows malicious actors to find vulnerabilities before the defenders do. On top of that, LLMs can also generate exploit code for the vulnerabilities they discover, effectively becoming zero-day creation machines. New vulnerabilities could thus be exploited within seconds or minutes instead of the days or weeks it used to take. The zero-day attack window is expected to grow in favor of threat actors unless defenders also increase the speed at which they produce and publish patches.
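To make the defensive use concrete, here is a minimal sketch of LLM-assisted code review, assuming the OpenAI Python client (openai>=1.0) and an API key in the OPENAI_API_KEY environment variable; the model name and prompt wording are illustrative assumptions, not recommendations.

```python
# Minimal sketch: ask an LLM to flag potential vulnerabilities in a snippet.
# Assumes the OpenAI Python client (openai>=1.0); model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def review_for_vulnerabilities(source_code: str) -> str:
    """Return the model's assessment of potential vulnerabilities."""
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model; substitute whatever you have access to
        messages=[
            {"role": "system",
             "content": "You are a security reviewer. List potential "
                        "vulnerabilities in the code and suggest fixes."},
            {"role": "user", "content": source_code},
        ],
    )
    return response.choices[0].message.content

print(review_for_vulnerabilities(
    'query = "SELECT * FROM users WHERE id=" + user_input'))
```

Keep in mind that the data leakage concerns discussed above apply here as well: code sent to a third-party API leaves your environment, so sensitive code bases are better reviewed against a self-hosted model.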
To lower the risk of vulnerabilities in your environment, it is always prudent to put extra effort into keeping your systems patched and up to date. Given the current advances in AI, it makes sense to use the same productivity-enhancing capabilities to shorten the time it takes to find vulnerabilities and remediate them. To keep track of your software risk exposure, it is also advisable to implement and maintain a software bill of materials (SBoM). This drastically reduces the time and effort needed to determine whether you are exposed to emerging vulnerabilities and exploits, as the sketch below illustrates.
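The snippet below sketches what that lookup can look like: it walks a CycloneDX-format SBOM and checks each component against a list of packages named in a new advisory. The file name sbom.json and the affected-package entries are hypothetical placeholders.

```python
# Minimal sketch: check a CycloneDX JSON SBOM for known-affected packages.
# The file name and the (name, version) pairs below are hypothetical.
import json

AFFECTED = {("log4j-core", "2.14.1")}  # packages named in the advisory

with open("sbom.json") as f:  # assumed CycloneDX JSON export
    sbom = json.load(f)

for component in sbom.get("components", []):
    key = (component.get("name"), component.get("version"))
    if key in AFFECTED:
        print(f"Exposed: {key[0]} {key[1]}")
```

In practice the affected-package list would come from a vulnerability feed rather than being hard-coded, but the point stands: with a current SBoM, answering "are we exposed?" is a query, not a project.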
Phishing has been one of the most common starting points for breaches for many years, so mitigating and managing this risk should be top of mind for every CISO. Spear phishing, a targeted subset of phishing, requires two components to succeed: in-depth research on the target and a customized attack email. Together, these efforts lead to markedly higher response rates. Both tasks have historically been labor-intensive for attackers, but, unfortunately for defenders, they are now easy to automate with AI, and the quality of the output is high, lowering the effort of spear phishing even further. Mitigating the risk of phishing requires deploying anti-phishing software, educating employees and making it easy to report suspicious emails to a dedicated mailbox directly from your email software.
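As an illustration of the kind of heuristic anti-phishing software applies, the sketch below flags messages whose Reply-To domain differs from the From domain, a common spear-phishing tell. It is an example only, not a substitute for a dedicated product, and the sample message is fabricated for the demonstration.

```python
# Minimal sketch of one anti-phishing heuristic: flag messages whose
# Reply-To domain differs from the From domain.
from email import message_from_string
from email.utils import parseaddr

def reply_to_mismatch(raw_message: str) -> bool:
    """Return True if the Reply-To domain differs from the From domain."""
    msg = message_from_string(raw_message)
    _, from_addr = parseaddr(msg.get("From", ""))
    _, reply_addr = parseaddr(msg.get("Reply-To", ""))
    if not reply_addr:
        return False  # no Reply-To header, nothing to compare
    from_domain = from_addr.rsplit("@", 1)[-1].lower()
    reply_domain = reply_addr.rsplit("@", 1)[-1].lower()
    return from_domain != reply_domain

sample = ("From: CEO <ceo@example.com>\n"
          "Reply-To: attacker@evil.example\n"
          "Subject: Urgent\n\nPlease wire funds.")
print(reply_to_mismatch(sample))  # True: replies would go elsewhere
```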
While generative AI has the potential to increase productivity and competitiveness, CISOs must be aware of its strategic, legal, ethical and cybersecurity-related implications. Proper safeguards and countermeasures must be put in place to mitigate the risks associated with AI use. Opting for self-hosted models, or cloud-based language models that support protective measures, can help ensure that generative AI is used safely and securely.
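As one illustration of that last point, here is a minimal sketch of running a model entirely in-house with the Hugging Face transformers library. The model name gpt2 is an illustrative assumption; any locally hosted instruction-tuned model would serve the same purpose.

```python
# Minimal sketch: a locally hosted model keeps prompts inside your own
# infrastructure. The model choice is illustrative only.
from transformers import pipeline

# Weights are downloaded once, then inference runs locally; prompts are
# never sent to a third-party service.
generator = pipeline("text-generation", model="gpt2")

result = generator("Summarize the incident report:", max_new_tokens=50)
print(result[0]["generated_text"])
```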