When to trust (and not to trust) AI in the SOC
AI can be a force multiplier for effectiveness and efficiency in the SOC — but only if used properly. When used for the wrong purpose, its errors can have long-lasting downstream effects within the SOC and even across the business, such as widespread outages of internal systems.
Let’s dig into which scenarios AI can help with, and which are best left to people.
Don’t: Close out alerts. Alert volume continues to be a pain point in the SOC; 59% say they have too many alerts, according to State of Security 2025 research. But relying solely on AI to triage alerts and close out the ones it deems benign before a person reviews them may be putting too much faith in that system. Human intuition and previous experience play a big role in the investigation process, too — a subtle gut feeling or your inner voice saying ‘That doesn’t seem right’ could be what leads to detecting suspicious activity.
In a previous role, my team discovered a web shell on one of our servers because when we happened to look at the logs, more data was going out than coming in — which was abnormal for that particular server. Depending on the model, AI could flag that as suspicious activity, or simply write it off as a spike. AI models in security often lack deep contextual awareness, nuanced behavioral baselines, and intent understanding — all of which are critical in catching subtle or novel threats like a web shell exfiltrating data.
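To make that concrete, here is a minimal sketch of the kind of per-server context a detection would need: comparing today’s outbound-to-inbound ratio against that server’s own history, and routing anomalies to an analyst rather than letting anything auto-close. The field values, thresholds, and seven-day minimum below are all hypothetical placeholders, not a recommended detection.

```python
from statistics import mean, stdev

def outbound_ratio(bytes_out: int, bytes_in: int) -> float:
    """Ratio of outbound to inbound traffic; guards against divide-by-zero."""
    return bytes_out / max(bytes_in, 1)

def needs_review(today: float, history: list[float], sigmas: float = 3.0) -> bool:
    """Flag when today's ratio sits far outside this server's own baseline."""
    if len(history) < 7:   # not enough history to trust a baseline
        return True        # err on the side of human review
    return today > mean(history) + sigmas * stdev(history)

# Hypothetical daily out/in ratios for one web server
history = [0.42, 0.39, 0.45, 0.41, 0.44, 0.40, 0.43]
today = outbound_ratio(bytes_out=9_800_000_000, bytes_in=1_200_000_000)

if needs_review(today, history):
    print("Queue for analyst review: outbound volume far above this server's norm")
```

The point of the sketch is the hand-off at the end: the check surfaces the anomaly, and a person decides what it means.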
Don’t: Automatically block an IP address or domain. Similarly, using AI to automatically block certain IP addresses or domains without any safeguards is risky. For example, let’s say your AI model has been trained to recognize Okta for your identity and access management (IAM), but your organization decides to switch to a different authentication provider. The AI model could then flag the new provider as abnormal and automatically block all authentication, effectively halting productivity across the organization. Generally, making major changes without considering the downstream effects on any AI you’re leveraging is a recipe for a really bad day.
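One lightweight safeguard is to let the model recommend a block but force anything touching critical infrastructure, or anything low-confidence, through a human first. A minimal sketch, where the allow-list entries and thresholds are hypothetical placeholders:

```python
# Hypothetical guardrail: the model can recommend a block, but anything that
# touches critical infrastructure (or is low confidence) goes to a human first.
CRITICAL_DESTINATIONS = {
    "okta.com",              # current IdP
    "login.newidp.example",  # placeholder for a replacement provider
}

def handle_block_recommendation(domain: str, confidence: float) -> str:
    if any(domain == d or domain.endswith("." + d) for d in CRITICAL_DESTINATIONS):
        return f"ESCALATE: {domain} is critical infrastructure; analyst approval required"
    if confidence < 0.9:
        return f"REVIEW: confidence {confidence:.2f} too low to block {domain} automatically"
    return f"BLOCK: {domain}"

print(handle_block_recommendation("okta.com", 0.97))             # never auto-blocked
print(handle_block_recommendation("malware-c2.example", 0.97))   # auto-block allowed
```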
Do: Improve detections. Detections are an area of opportunity for many SOCs, according to State of Security 2025 research. A mere 8% rate their detection quality as excellent, and 53% say their SOC doesn’t have the skills or expertise to create effective detections. Generative AI could help here by improving the efficiency of detections and closing knowledge gaps for less seasoned analysts. For example, a SOC analyst tasked with cleaning up detections could input the rule and signature of a detection into a generative AI assistant and ask it to suggest tweaks that produce fewer false positives. Again, domain-specific generative AI would be particularly powerful in this scenario.
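In practice that interaction can be as simple as the sketch below, which assumes an OpenAI-compatible chat client; the Sigma-style rule and the model name are illustrative, not a recommended detection or a specific product.

```python
from openai import OpenAI  # assumes an OpenAI-compatible endpoint is configured

client = OpenAI()

# A generic, illustrative Sigma-style rule - not a recommended detection.
detection_rule = """
title: Suspicious PowerShell EncodedCommand
detection:
  selection:
    Image|endswith: '\\powershell.exe'
    CommandLine|contains: '-EncodedCommand'
  condition: selection
"""

prompt = (
    "You are helping a SOC analyst tune a detection rule. "
    "Suggest specific changes that would reduce false positives from "
    "legitimate administrative tooling, and explain the trade-offs.\n\n"
    f"Rule:\n{detection_rule}"
)

response = client.chat.completions.create(
    model="gpt-4o",  # substitute whatever model your organization has approved
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The analyst still owns the change: the suggestions go into review and testing, not straight into production.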
Do: Decipher logs. A search engine is every SOC analyst’s best friend, especially in the earlier stages of their career. But it’s often difficult to uncover the right information. Junior analysts might not be experienced enough to craft an effective search string, or they might not know the right industry terms to input. Generative AI can act as a supercharged search engine, helping junior personnel understand the meaning behind logs and bringing more context to their research.
A junior analyst could ask an AI assistant to tell them more about a weird string they’re seeing in a malicious binary, or a Windows event log they’ve never seen before. Asking AI is faster and drains fewer resources than asking a senior analyst — especially for smaller, overworked teams that may not have a senior analyst readily available. However, SOC leaders need to ensure that generative AI doesn’t replace the need for analysts to think for themselves. Tinkering and problem solving, although sometimes time-consuming and frustrating, are important to developing crucial skills within the SOC.
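The same kind of assistant call shown in the detection-tuning sketch works for decoding an unfamiliar log. The event snippet below is illustrative and anonymized, and the client and model name are again assumptions rather than specific recommendations.

```python
from openai import OpenAI  # same assumption of an OpenAI-compatible endpoint

client = OpenAI()

# Illustrative, anonymized Windows process-creation event - not from a real incident.
event = (
    "EventID=4688 NewProcessName=C:\\Windows\\System32\\rundll32.exe "
    "CommandLine=rundll32.exe javascript:..\\mshtml,RunHTMLApplication"
)

prompt = (
    "Explain in plain language what this Windows event shows, why it might be "
    "suspicious, and what an analyst should check next:\n\n" + event
)

answer = client.chat.completions.create(
    model="gpt-4o",  # substitute your model
    messages=[{"role": "user", "content": prompt}],
)
print(answer.choices[0].message.content)
```

The value is the context it returns and the follow-up questions it prompts, not a verdict the analyst accepts at face value.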
Adopting AI optimistically but cautiously
When it comes to leaning on generative AI for better understanding and context, logs are just the tip of the iceberg. Analysts can use AI to synthesize research on certain vulnerabilities and attack techniques, as well as summarize long-form documents such as executive orders or advisories.
Ultimately, trusting in AI comes down to understanding its risks. Implementing AI shouldn’t be a matter of finding a cool new vendor, plugging it in, and letting the AI model do its thing. For every task or decision you’d like to entrust to AI, complete a cost-benefit analysis. If AI fails at that task, how much money would it cost the company in a worst-case scenario? Could it result in downtime? Knowing AI’s impact and understanding what could go wrong if you put too much trust in a system ensures that you’re not sacrificing accuracy and security for quick wins.
To learn more about AI’s role in the SOC of the future, download the State of Security 2025 report.