
CISO Q&A: Walking the risk tightrope to drive innovation

Understanding the evolution of threats as generative AI ups the stakes for defenders

In the era of AI, security executives and their teams are coming to terms with new data privacy regulations, new opportunities, and the advanced threats the technology will surely introduce. Alongside the meteoric rise of AI comes a host of new business risks, and security professionals will often have to adjust their risk tolerance if they want to innovate and stay competitive.

 

During Splunk’s .conf24 in Las Vegas, a Perspectives editor sat down with Leonard Wall, current Observability Advisor at Splunk and former Deputy CISO of Clayton Homes, to discuss how AI will compel organizations to reevaluate their risk posture, how it will help teams uplevel their skills and capabilities, and how threat actors will leverage AI to execute new, malicious attacks.

 

The following is an edited excerpt of the discussion. 

 

 

Perspectives: With the surge of large language models (LLMs), there are going to be a lot of data privacy implications around their adoption and creation. What do you see as the biggest risk, and what are you most afraid of?

 

Leonard: When we, as security practitioners, talk about this, we always talk about risk thresholds. We don't want to exceed the top end of the risk threshold, but at the same time, we often fail to address the lower end. In my opinion, the risk with AI is that companies are going to be too conservative. Look at cloud computing. One of the successful use cases I've seen is to start with the things that give you back time, because that's low risk. So don't start with your most critical data, or anything that will cause you problems with regulators. Maybe it's public data, marketing, or other use cases. And then allow an AI task force, maybe a combination of innovators, business stakeholders, and security professionals, to come in and build those policies and build those security controls.

 

But you also have to be innovative. If the upper limit of your risk appetite and the lower limit are too far apart, you're leading the business off a cliff. If they're too close together, you're not innovating. Security practitioners and security leaders have to be the enablers. We can't be the ones preventing this. We have to work with our stakeholders. We have to work with creators and find ways to build the roads to use this technology. That's part of the role.

 

 

Perspectives: Security professionals tend to be very conservative about risk. Do you think that you have to take a certain amount of risk because you have to survive and innovate?

 

Leonard: Yeah, and that's why you've seen people in leadership roles and non-security folks distance themselves from security professionals. Security folks are not viewed as business leaders, and I see it all the time. I've seen it on my own team; we say, "No, we can't do that; it's too risky." We, as security leaders, should be enabling the business to go fast. The business should be able to make informed decisions; we're not the business decision makers. They should be educated. They should be informed. And we should be able to talk through those decisions with them. But on the flip side, security professionals should be the enablers. Security leaders need to keep business outcomes at the forefront of their decision-making process.

 

Think about it like a car. It's the business's job to go fast, and it's our job to be the brakes. Brakes are in a car so that it can go fast. We should be able to pull the lever and have that conversation at any point when things get risky, but when shareholders or senior business leaders want to take things in a new direction or go a little faster, it's their job to make that decision.

 

It's our job to deliver the information so they can make those decisions. As security leaders, we need to have the mindset that there's a certain level of risk we need to take. That's how you get more leadership buy-in: by managing those outcomes and then being able to map security to them.

 

 

Perspectives: How should organizations be thinking about their AI strategy and their AI policies? What do they need to consider?

 

Leonard: There are builders of AI and there are consumers of AI. If you're consuming AI, you'll need to determine what AI guardrails are necessary for business associates and third parties, like supply chain partners, as that will have legal and compliance consequences. You'll need to understand what data is being used and what their AI policies are. It's also important to understand what rights these AI vendors have to your data: not just what they're doing today, but what they have the ability to do in the future. Does the language in their contracts allow them to use AI or large language models for future use cases? They may not be able to implement AI now, but they might a year from now. Businesses need to focus on what's in their contracts and what their third parties are planning to do. Spend time with them. Have them explain what their large language model is. And don't be afraid to say no, push back, and put additional controls in place.

Now, if you're building AI, that opens a whole new can of worms. That's a long conversation with your legal team to understand your exposure.
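On the consumer side, that vendor review can be made concrete. Below is a minimal sketch of how a team might encode those contract questions as a checklist; the VendorAIReview structure and its field names are illustrative assumptions, not a standard or any particular framework.

```python
# A minimal, illustrative checklist for third-party AI review.
# Field names are hypothetical, not an industry standard.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class VendorAIReview:
    name: str
    trains_on_customer_data: bool         # what the vendor does today
    contract_permits_future_ai_use: bool  # what the contract lets them do later
    data_categories_shared: Tuple[str, ...]

    def flags(self) -> List[str]:
        issues = []
        if self.trains_on_customer_data:
            issues.append("vendor currently trains models on our data")
        if self.contract_permits_future_ai_use:
            issues.append("contract language allows future LLM use of our data")
        if not set(self.data_categories_shared) <= {"public", "marketing"}:
            issues.append("sharing more than low-risk data categories")
        return issues

review = VendorAIReview("ExampleVendor", False, True, ("marketing", "public"))
print(review.flags())  # ['contract language allows future LLM use of our data']
```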

 

 

Perspectives: What do you see as the biggest opportunity with AI? How do you see it being used in cyber defense, or as a business enabler in general?

 

Leonard: On the cyber defense side, I think the biggest opportunity is upskilling talent. It's going to allow us to be more valuable. One example is the Splunk AI Assistant: my team can use it to get context and understand the platform faster than ever, which allows analysts and engineers to focus more on outcomes and value than on administering the Splunk platform. I think we have a long road ahead of us with AI, just as we did with cloud computing. We have to learn it, and AI's not always going to be right. I'm also not the type to have my head in the sand, saying it won't replace jobs. But it will help us improve our teams' performance, and it will reduce the time we as security practitioners spend on routine jobs.

 

 

Perspectives: What about filling talent gaps that exist in security teams?

 

Leonard: Going back to upskilling, where AI will be most useful is as an assistant and an enabler. If you have generative AI as an overlay on Splunk, for example, an analyst now has something they can use in a number of ways to navigate Splunk quickly and easily. They can do it inside the platform instead of opening another tab or an untrusted generative AI tool. And the fidelity of what comes out of a platform-specific generative AI tool is much better than that of an open-source AI, which makes these tasks more effective. There's also a risk in not using it: if you're not adopting it now, what does that look like six months, or six years, from now?
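As a concrete illustration of that workflow, here is a minimal sketch that runs the kind of SPL search an in-platform assistant might draft for an analyst, using the splunk-sdk Python package. The host, credentials, index and field names, and the SPL itself are placeholders; the query is an assumption about what an assistant could produce, not actual Splunk AI Assistant output.

```python
# A minimal sketch: executing an assistant-drafted SPL search with the
# Splunk Python SDK (splunk-sdk). Connection details and the query are
# illustrative placeholders.
import splunklib.client as client
import splunklib.results as results

service = client.connect(
    host="splunk.example.com", port=8089,
    username="analyst", password="changeme",
)

# The kind of query an in-platform assistant might draft from
# "show me failed logins by user over the last 24 hours":
spl = (
    "search index=security sourcetype=auth action=failure earliest=-24h "
    "| stats count by user | sort - count"
)

for row in results.JSONResultsReader(service.jobs.oneshot(spl, output_mode="json")):
    if isinstance(row, dict):  # skip diagnostic messages
        print(row.get("user"), row.get("count"))
```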

 

 

Perspectives: How do you see AI presenting new and unexpected threats to organizations?

 

Leonard: The number one thing we've seen is attackers who may not be proficient in the English language using AI to generate an email or message for a phishing or smishing attack. Security practitioners have long relied on certain greetings, wording, and phrasing to help identify these attacks, but generative AI adds a whole new level of complexity.
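To see why those wording cues stop working, here is a toy version of the legacy heuristic; the phrase list and scoring are invented for illustration, not a real detection rule.

```python
# A toy version of the wording heuristic defenders have leaned on:
# flag emails containing stock phishing phrases. Fluent, AI-generated
# lures sail past checks like this one.
SUSPICIOUS_PHRASES = (
    "dear customer",
    "kindly verify",
    "click here immediately",
    "your account will be suspend",
)

def crude_phishing_score(body: str) -> int:
    """Count how many known-bad phrases appear in an email body."""
    text = body.lower()
    return sum(phrase in text for phrase in SUSPICIOUS_PHRASES)

legacy_lure = "Dear Customer, kindly verify your account details."
fluent_lure = "Hi Sam, following up on the invoice from Tuesday's call."
print(crude_phishing_score(legacy_lure))  # 2 -- caught by wording cues
print(crude_phishing_score(fluent_lure))  # 0 -- nothing for the rule to see
```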

 

The other threat we're seeing is AI bots that tell you how to exploit a vulnerability. Before generative AI, you didn't know what the tool was or where to find it. Now you're going to see attackers leveraging threat intelligence to access mobile assets and then using generative AI to write the script to exploit them.

 

In the early 2000s, there were lots of attacks, but the impact of those attacks was very small because they were more of an annoyance. Now, while the attacks aren't going away, the likelihood and impact of a successful attack are increasing. Instead of fighting off a bunch of small animals, you're fighting off a bunch of large animals. Instead of fighting off ducks, you're fighting off grizzly bears. Attackers are going to continue to find new ways to leverage AI and drive these threats wherever they are.

 

So realistically, what we're going to see is an increase in the impact and volume of attacks. And going back to the regulation around risk, a lot of organizations aren't going to leverage AI because they view it as too risky or their CISO says no.

 

 

Perspectives: So whereas before the attacks were perhaps smaller in impact but large in number, now they're going to be even more voluminous?

 

Leonard: Yes, they're going to be more voluminous, and they're also going to be more impactful. Going back to speed of delivery and value: attackers are not going to spend a week or two learning how to write a script. They can ask AI.

 

We don't know what legislation or regulation is going to come out in the next five years. And the other thing, too, with this new SEC regulation: it doesn't matter how you think a breach got out. It just matters that it's out, and it's your fault.

 

 

Perspectives: Security professionals are in a frenzy about the SEC cybersecurity ruling. How's AI going to complicate that even more?

 

Leonard: With AI and large language models, if your data ends up on the internet, how do you determine where it came from? Right now you have multiple organizations leveraging OpenAI, Microsoft…so how do you determine whose fault it is? Just using AI could be a material event. And on top of that, it adds another layer of complexity.

 

We must have regulations, and it's a matter of time before somebody, specifically in the United States, does something about regulating AI. But what makes that really hard is defining it: what is AI? Are we talking about large language models? Machine learning? Machine learning we've had for a long time. That's also why I think we don't have a dedicated federal AI regulation or privacy policy.

 

 

Perspectives: Right, do we even have a general, agreed-upon definition of privacy?

 

Leonard: Well, the thing is, you have the National Institute of Standards and Technology (NIST) at the federal level, and then each state has its own rules. But that's one thing that makes security really difficult: we don't use the same nomenclature, and we don't use the same definitions. Going back to board conversations for CISOs: accounting is accounting. Shareholders are shareholders. In accounting, you have GAAP rules; it's static. But security is very subjective because the technology keeps changing.

 

 

Perspectives: I think people's and governments' relationships with technology differ across the US, which makes it more complicated to unify any kind of policy or legislation.

 

Leonard: Right. The definitions written at that level are more applicable to government agencies. Also, I think one of the hardest things to measure is how AI changes risk. Are you actually decreasing or increasing risk with AI? We can easily look at a project and say, "Okay, I believe we're going to make X dollars on this." Security is more qualitative.

 

If you take any of the major security metrics, people still have a hard time articulating them because they're arbitrary numbers. When a stock price goes up, that's pretty clear. But measuring quantitative and qualitative risk is extremely subjective because it's based on context, and that's hard to understand.
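One way to see why the numbers feel arbitrary is a standard quantitative formula, annualized loss expectancy (ALE = single loss expectancy × annual rate of occurrence). The sketch below uses invented figures purely to show how much the output swings with the analyst's assumptions.

```python
# ALE = SLE x ARO: the output is only as objective as its two
# estimated inputs. All figures below are invented for illustration.
def ale(single_loss_expectancy: float, annual_rate_of_occurrence: float) -> float:
    """Annualized loss expectancy for one risk scenario."""
    return single_loss_expectancy * annual_rate_of_occurrence

# Two analysts, same scenario, both defensible:
print(ale(250_000, 0.5))  # 125000.0 -- "a breach every two years, modest cost"
print(ale(400_000, 2.0))  # 800000.0 -- "two breaches a year, higher cost"
```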

 

 

Perspectives: Looking ahead, how do you see AI two to five years from now, especially in security applications? What’s on your wishlist? 

 

Leonard: My wishlist for five years from now is that AI becomes more context-dependent. I want to leverage the power of a large language model and generative AI, but only in the context of my data. I'd want it to be more exact and much quicker. The other thing I want to see is better security and threat analytics based on what AI models observe. That doesn't mean it will be accurate all the time, but AI should be coming to me with data and with ideas. The AI of the future will include a human in the loop and will give that human enough context and detail to make an informed decision quickly, reducing the potential impact of an incident.
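The pattern Leonard describes, an LLM grounded only in an organization's own data with an analyst approving the output, resembles retrieval-augmented generation. Below is a hedged sketch of that flow; retrieve() and draft_recommendation() are hypothetical stand-ins, not any product's API.

```python
# A hedged sketch of "LLM grounded in my data, human in the loop"
# (retrieval-augmented generation). All functions are stand-ins.
from typing import List

def retrieve(query: str, corpus: List[str], k: int = 2) -> List[str]:
    """Naive keyword retrieval over internal documents only."""
    words = query.lower().split()
    return sorted(corpus, key=lambda d: -sum(w in d.lower() for w in words))[:k]

def draft_recommendation(query: str, context: List[str]) -> str:
    # Stand-in for an LLM call grounded strictly in retrieved context.
    return f"Drawing on {len(context)} internal docs, investigate: {query}"

corpus = [
    "Failed VPN logins spiked 4x on the edge gateway last night.",
    "Patch window for the edge gateway is Sunday 02:00 UTC.",
]
draft = draft_recommendation("vpn login anomalies",
                             retrieve("vpn login anomalies", corpus))

# Human-in-the-loop gate: an analyst reviews before anything executes.
analyst_approved = True  # in practice, a real review step
if analyst_approved:
    print(draft)
```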

 

 

As shared in our report Splunk 2024 State of Security: The Race to Harness AI, CISOs are constantly re-evaluating their relationship with AI. For many, that means reassessing the parameters of their risk appetite, strengthening their risk posture, creating new policies — and new defenses — to address related threats, and deepening their understanding of the opportunities it will create for their teams. 

 

For more security thought leadership perspectives on AI, download Splunk's 2024 State of Security report. And for more insights into Splunk's AI strategy in our products and guidance on AI policy, check out our AI Philosophy Powering Digital Resilience e-book.
