Skip to main content

Perspectives Home / EXECUTIVE STRATEGY

Rogue AI or Jedi Ally: On the Impact of Artificial Intelligence on Enterprise Security

AI is heading towards ubiquity, presenting both new vulnerabilities and opportunities. When you must protect and innovate in already complex environments, it’s important to consider how AI can impact your organization’s security posture.

A women sitting on her desk, places her palm on a monitor while a circle gradient encompasses it.

Richard M Marshall, Founder of Concept Gap Ltd
January 10, 2024  •  4 minute read

AI killing off the human race may not have CIOs waking up in a cold sweat (yet). But the very real risks of how both colleagues and bad actors can use AI should. Equally, in the light of day, technology executives should consider how AI will help defend their business against ever-expanding threat profiles. Like the Force, AI has both light and a dark side — and every enterprise is going to have to face up to this dichotomy.

While advancing AI is democratizing the creation and operation of cyber attacks, it’s also enabling the creation of new tools to help secure our infrastructure. Technology is the enabler, but people are the key. Whether it’s training staff to recognize ever more credible phishing attacks or to know when it is safe to use a public generative AI service, or simply keeping humans in the loop, it’s essential that your SOC team is equipped with a new generation of AI-enabled tools.

AI with ill will

AI-powered tools, whether machine-learning-powered analytics or transformer-based large language models, are incredibly powerful, and criminal networks are just as ready to take advantage of that power as regular businesses. They already have the technology talent and compute capacity to build their own dark AI tooling, and this is creating a whole new set of threats.

Black-hat AIs never sleep and are infinitely patient.

Black-hat AIs never sleep and are infinitely patient. AI-powered tools looking for vulnerabilities in your network will not get bored and switch to easier targets. They will keep going until they exhaust all possibilities so we have to be equally assiduous in locking everything down.

While malware creation tools have been around for a long time, AI-based tools are now simplifying the creation of viruses and similar, allowing relatively-unskilled hackers to create enterprise-specific attack tools. Custom attack code clearly is much more difficult to detect, meaning we need to raise our game to meet that.

Phishing is the ransomware operator’s weapon of choice for getting into a network, but they are often handicapped by poor language and design skills. Not anymore. Generative AI is particularly good at writing persuasive emails with perfect fluency in dozens of languages. No longer can we use bad grammar as an indicator of potentially harmful messages, and we need to train our colleagues about the threat of ever more plausible phishing and spam messages.

Talking of colleagues, they too can unintentionally become a threat. It’s all too easy to put confidential material into a public AI service such as ChatGPT without realizing that it may surface in a competitor’s generated text. Indeed, given that a competitor will be asking about the same topics, that is exactly where it will appear.

Accidentally training AI models with company data is only one side of the risk, however, as data poisoning is a nascent danger. Generative AI makes it easy to flood the internet with fake content, for example trashing your products. The AI scooping that up doesn’t know it is not true and will offer them up to users. Without proper protection, a public-facing chat bot may be turned, as Microsoft found to their cost with Tay. It’s essential to test your chat bots, both internal and external, regularly to see that they’re not developing bias or other undesirable behaviors.

But AIs can still be one with the Force

Fortunately, the white hat AI is as unwavering in its mission to keep out the bad guys. Awake, vigilant and trained, AI monitoring tools are not new in cybersecurity, but they are getting better. The key is the quality of the training data. A domain-specific training corpus is essential; this is not a general problem that OpenAI can solve. For companies that have been using machine learning for many years, their data models may be already well refined, as well as continuously updated for the evolving threat landscape.

Better detection reduces false positives and alarm fatigue, helping SOC staff focus on important work in securing the infrastructure. AI tools can also be invaluable in helping train new staff joining a security team, making it easier to bring on new talent that is in painfully short supply.

While machine-learning enhanced detection is well established, AI is now moving to help create customer filters, rules and alert definitions based on natural language requirements. Faster, better coding of these essential components is critical for strong infrastructure protection. Providing tools that not only accelerate their creation, but broaden the range of people who can create them, is a great benefit, enabling businesses to scale up their security teams and allowing more people to develop valuable cybersecurity skills. And will soon be non-negotiable to keep up with the pace of threats.

Decision responsibility must remain with the people in the SOC.

But it is important to remember that while AI may seem miraculous, it’s just another tool in the box. The people in the SOC must remain responsible for decision making. Keeping a human in the loop is a vital element of governance — no matter how smart the tools become, we still need to trust the human instinct for distinguishing trouble from normal operation. The last thing we need is adding internal AI overreach to the ever-growing array of risks.

Read more Perspectives by Splunk

NOVEMBER 2, 2023 • 18 minute watch

How the C-Suite Should Think About AI Today

Here’s why being methodical in an approach to AI adoption will increase efficiency and deliver more value to customers.

January 5, 2024 • 2 minute read

Data Privacy in the Era of AI

What impacts will new generative AI advancements have on data privacy regulation in 2024? And how should companies prepare?

July 31, 2023 • 5 minute read

Top 3 Strategies for Tech Leaders To Thrive in the AI Revolution

AI is changing the workplace. Your leadership strategy should change, too. These three best practices will set you up for success.

Get more perspectives from security, IT and engineering leaders delivered straight to your inbox.