Perspectives Home / CISO CIRCLE

When Should You Trust AI For Mission-Critical Tasks in the SOC?

AI can transform security operations, but human intuition still reigns in mission-critical SOC decisions.

AI and mission-critical. Many security professionals would argue the two phrases should never be uttered in the same sentence. Yet 11% of respondents say they trust AI completely to perform mission-critical activities in the SOC, according to State of Security 2025: The smarter, stronger SOC of the future. Are these respondents overly optimistic, or incredibly advanced?

 

Security teams shouldn’t overlook the massive productivity benefits AI can deliver to the SOC, a department that tends to suffer from inefficiencies, talent gaps, and alert storms. In fact, 59% of State of Security 2025 respondents say they’ve moderately or significantly boosted efficiency in the SOC with AI. For some projects, though, relying on AI for mission-critical tasks in the SOC is too much of a risk, especially if a failure could result in reputational loss, regulatory fines, or even the loss of human life. Life isn’t black and white, so it’s important to take a nuanced view of trust in AI.

 

Shades of trust in AI 

Not all AI is created equal. When considering AI for a task in the SOC, teams should consider the ‘flavor’ of AI: generative AI or traditional AI. Generative AI is well-suited for answering questions and helping with research, but it generally produces too many hallucinations and errors to be trusted with mission-critical tasks. It’s like a friend who will never admit they’re wrong, even when presented with evidence that proves otherwise.

 

Differences between general generative AI, which is trained on large sets of data, and domain-specific generative AI, which is trained on data sets specific to a domain like cybersecurity, can influence trustworthiness. A general generative AI tool like ChatGPT, for example, introduces a greater risk of hallucinations because it’s trained to predict text, not verify facts, and isn’t typically connected to real-time databases that would provide more accurate information. Domain-specific AI, on the other hand, tends to produce better results because it’s trained on a narrow set of data. Nearly two-thirds of State of Security 2025 respondents (63%) agree that domain-specific AI significantly or extremely enhances security operations compared to publicly available tools.

 

Traditional AI analyzes datasets to deliver concrete results from defined data. It’s trained to follow specific rules and perform a particular task rather than create something new, resulting in fewer hallucinations. If you’re considering incorporating AI into a mission-critical task within the SOC, understand your goals and expected outcomes, and weigh the pros and cons of each approach. Using generative AI for its own sake may not produce the results you’re looking for.

 

No matter what type of AI you decide to rely on, leaning on a human-in-the-loop approach is crucial for providing oversight, intuition, and common sense.

 


 

When to trust (and not to trust) AI in the SOC

AI can be a force multiplier for effectiveness and efficiency in the SOC, but only if used properly. When used for the wrong purpose, its errors can have long-lasting downstream effects within the SOC and even across the business, such as widespread outages of internal systems.

 

Let’s dig into the scenarios AI can help with, and the ones best left to people.

 

Don’t: Close out alerts. Alert volume continues to be a pain point in the SOC; 59% say they have too many alerts, according to State of Security 2025 research. But solely relying on AI to automatically identify suspicious alerts and close them out before a person reviews them may be putting too much faith in that system. Human intuition and previous experience play a big role in the investigation process, too — a subtle gut feeling or your inner voice saying ‘That doesn’t seem right’ could lead to detecting suspicious activity. 

 

In a previous role, my team discovered a web shell on one of our servers because when we happened to look at the logs, more data was going out than coming in — which was abnormal for that particular server. Depending on the model, AI could flag that as suspicious activity, or simply write it off as a spike. AI models in security often lack deep contextual awareness, nuanced behavioral baselines, and intent understanding — all of which are critical in catching subtle or novel threats like a web shell exfiltrating data.
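The heuristic behind that catch, outbound volume far out of proportion to inbound for a given server, is simple enough to encode as a deterministic guardrail alongside any AI model. A minimal sketch in Python (the baseline ratio, tolerance, and traffic numbers are illustrative assumptions, not figures from this article):

```python
def egress_anomaly(bytes_in, bytes_out, baseline_ratio, tolerance=3.0):
    """Flag when outbound/inbound traffic far exceeds this host's baseline.

    baseline_ratio: historical bytes_out / bytes_in for this host.
    tolerance: multiple of the baseline before we alert (assumed value).
    """
    if bytes_in == 0:
        return bytes_out > 0  # pure egress with no ingress is suspicious
    return (bytes_out / bytes_in) > tolerance * baseline_ratio

# A web server normally sends about 2x what it receives (responses vs. requests).
# Seeing 20x the inbound volume is worth a human look.
print(egress_anomaly(bytes_in=1_000, bytes_out=20_000, baseline_ratio=2.0))  # True
print(egress_anomaly(bytes_in=1_000, bytes_out=2_500, baseline_ratio=2.0))   # False
```

The point isn't that a one-liner replaces the model; it's that per-server baselines and simple ratios encode exactly the contextual awareness the paragraph above says generic AI models lack.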

 

Don’t: Automatically block an IP address or domain. Similarly, using AI to automatically block certain IP addresses or domains without any safeguards is risky. For example, say your AI model has been trained to recognize Okta as your identity and access management (IAM) provider, but your organization decides to switch to a different authentication provider. The AI model could flag the new provider as abnormal and automatically block all authentication, effectively halting productivity across the organization. Generally, letting AI make major changes without considering the downstream effects can lead to a really bad day.
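One safeguard is to let the model propose blocks into a review queue rather than act directly, so an analyst approves before anything is enforced. A rough sketch of that human-in-the-loop pattern (the class and field names are invented for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class BlockQueue:
    """AI proposes blocks; a human must approve before enforcement."""
    pending: dict = field(default_factory=dict)   # indicator -> reason
    enforced: set = field(default_factory=set)

    def propose(self, indicator: str, reason: str):
        # The model can only suggest; nothing is blocked yet.
        self.pending[indicator] = reason

    def approve(self, indicator: str) -> bool:
        # Called by an analyst after review.
        if indicator in self.pending:
            del self.pending[indicator]
            self.enforced.add(indicator)
            return True
        return False

queue = BlockQueue()
queue.propose("login.newprovider.example", "auth traffic to unrecognized IdP")
# An analyst recognizes this as the planned provider migration and declines
# to approve, so the domain never reaches the enforced blocklist.
print("login.newprovider.example" in queue.enforced)  # False
```

In the Okta-migration scenario above, this gate is the difference between a queued suggestion an analyst dismisses and an organization-wide authentication outage.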

 

Do: Improve detections. Detections are an area of opportunity for many SOCs, according to State of Security 2025 research. A mere 8% rate their detection quality as excellent, and 53% say their SOC doesn’t have the skills or expertise to create effective detections. Generative AI can help here by improving the efficiency of detections and closing knowledge gaps for less seasoned analysts. For example, a SOC analyst tasked with cleaning up detections could input a detection’s rule and signature into a generative AI assistant and ask it to suggest tweaks that produce fewer false positives. Again, domain-specific generative AI would be particularly powerful in this scenario.
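Whatever tweak an assistant suggests, it’s worth validating against labeled historical events before shipping it. A minimal sketch of that check (the rule thresholds and sample events are invented for illustration):

```python
def false_positive_rate(rule, events):
    """Fraction of known-benign events a detection rule still fires on."""
    benign = [e for e in events if not e["malicious"]]
    hits = sum(1 for e in benign if rule(e))
    return hits / len(benign) if benign else 0.0

# Labeled historical events (illustrative data).
events = [
    {"src": "10.0.0.5", "failed_logins": 12, "malicious": True},
    {"src": "10.0.0.7", "failed_logins": 4,  "malicious": False},
    {"src": "10.0.0.9", "failed_logins": 6,  "malicious": False},
]

original = lambda e: e["failed_logins"] > 3    # fires on every benign event here
tuned    = lambda e: e["failed_logins"] > 10   # assistant-suggested threshold

print(false_positive_rate(original, events))  # 1.0
print(false_positive_rate(tuned, events))     # 0.0
```

A replay like this keeps the human in the loop: the assistant drafts the tweak, but measured false-positive and detection rates decide whether it ships.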

 

Do: Decipher logs. A search engine is every SOC analyst’s best friend, especially in the earlier stages of their career. But it’s often difficult to uncover the right information. Junior analysts might not be experienced enough to craft an effective search string, or they might not know the right industry terms to input. Generative AI can act as a supercharged search engine, helping junior personnel understand the meaning behind logs and bringing more context to their research. 

 

A junior analyst could ask an AI assistant to tell them more about a weird string they’re seeing in a malicious binary, or a Windows event log they’ve never seen before. Asking AI is faster and drains fewer resources than asking a senior analyst — especially for smaller, overworked teams that may not have a senior analyst readily available. However, SOC leaders need to ensure that generative AI doesn’t replace the need for analysts to think for themselves. Tinkering and problem solving, although sometimes time-consuming and frustrating, are important for developing crucial skills within the SOC.

 

Adopting AI optimistically but cautiously

When it comes to leaning on generative AI for better understanding and context, logs are just the tip of the iceberg. Analysts can use AI to synthesize research on certain vulnerabilities and attack techniques, as well as summarize long-form documents such as executive orders or advisories.

 

Ultimately, trusting in AI comes down to understanding its risks. Implementing AI shouldn’t be a matter of finding a cool new vendor, plugging it in, and letting the AI model do its thing. For every task or decision you’d like to entrust AI with, complete a cost-benefit analysis. If AI fails at that task, how much money would it cost the company in a worst-case scenario? Could it result in downtime? Knowing AI’s impact and understanding what could go wrong if you put too much trust in a system ensures that you’re not sacrificing accuracy and security for quick wins.
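That cost-benefit analysis can start as a back-of-the-envelope expected-loss calculation. A sketch, with all figures purely illustrative:

```python
def expected_failure_cost(failure_prob, downtime_hours, cost_per_hour, other_costs=0.0):
    """Expected cost of entrusting a task to AI, per incident opportunity."""
    return failure_prob * (downtime_hours * cost_per_hour + other_costs)

# Illustrative worst case: a 2% chance the model wrongly blocks authentication,
# causing 4 hours of downtime at $50,000/hour plus $100,000 in response costs.
risk = expected_failure_cost(0.02, 4, 50_000, other_costs=100_000)
print(f"${risk:,.0f}")  # $6,000
```

If the expected failure cost exceeds the productivity gain the automation buys you, that task belongs with a human, or at least behind a human approval gate.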


To learn more about AI’s role in the SOC of the future, download the State of Security 2025 report.
