The promise of agentic AI is enormous, delivering faster insights, autonomous response, and relentless monitoring. The question is, how much can you trust it?
The notion of “agents” in technology may evoke spy-movie imagery, with intelligent actors working around the clock, executing missions, and reporting back with precision. But the move from automation to agentic intelligence is not just a technological shift; it is also a cultural one. It requires leaders to relinquish control with confidence.
Only a few years ago, many SOC teams were still wary of automation. Full adoption was slowed by concerns about accountability, visibility, and control. But then large language models (LLMs) upended the trust equation in a major way. Unlike fixed and scripted automation, AI adapts to context and exercises judgment amid uncertainty - which can be both a benefit and a drawback.
Now, agentic AI represents the next evolution. Instead of short bursts of assistance, such as answering a query or summarizing a report, agents can operate autonomously for longer periods of time. It is a new era of security resilience that requires new dimensions of trust.
While agents are here, the fully autonomous SOC is still some way off (if it ever comes at all). Even today, there is still distrust of automation in the SOC. What will it do? Who will get blamed? How can we keep track of it? Automation will always run, even in unexpected edge cases, so some control must be relinquished, but the benefits outweigh the risks. By adopting automation in a modular fashion, leaders saw their time free up, the quality and consistency of response improve, and responses happen at machine speed.
Fast-forward to LLMs, where this technological leap made even the automation laggards more comfortable, though still distrustful of AI. “At least automation will take the specific action I tell it to” was the sentiment. But even that certainty is no longer enough. Security leaders want more adaptability to the situation in front of them.
The trend will continue with agentic AI, and our comfort level will shift further as we let AI do more, for longer. Agentic AI moves us from 20-second or 2-minute queries (or 20 minutes if you use deep research) to 2 hours, 2 days, even 2 months of autonomous work.
Technology adoption is often about discomfort; we have to try something new and trust that it will improve capability, minimise risk, or make us happier. But there’s always a leap to take, and depending on your resources and what risk you are trying to manage, you might take smaller or bigger leaps than your peers. If you don’t have enough people to manage the alerts you face, you might leverage an imperfect AI because the overall risk is still reduced. If you have plentiful people resources, you might face more challenges with fast adoption, as you have to quickly rebalance rewarding your teams and measuring productivity.
When we're taking 'trust leaps' from known to unknown technology and outcomes, there’s often a trust deficit. Trust is not only about digital signatures and cryptography; the definition is maturing.
Trust often comes down to two key factors: capability and character. We tend to trust others both for what they can do (their skills or reliability) and who they are (their integrity or intentions). For example, I trust my friends to watch my drink in a bar, but not to fix my car.
But with AI and the humanisation of technology, character comes into play too. Does it have integrity? Is it aligned with what I’m trying to achieve? This might remind you of “the value alignment problem”, but it’s not theoretical. Trust in technology is no longer only about its reliability and competence, but about its integrity and empathy too. This is the shift from capability alone to capability plus character.
So consider your AI usage or integration with these two aspects of “character”: integrity and alignment. If agents mainly provide capability, you need to refocus analysts on providing more of the “character” aspect, mentoring and shepherding the agents that they oversee. Perhaps the future role of an analyst is to ensure that agents are empathetic and aligned to the goals of the SOC, and that they are truthful, acting with integrity in their responses and actions.
Technology used to be just about capability. Does it work? Does it work consistently? But now leaders are faced with an AI trust deficit.
Technologists need to codify alignment into their AI and be specific about what it should optimize for: true positives, caution, risk minimisation, and so on. Leverage metrics to test for integrity, build feedback loops to check alignment at considered intervals, and ensure checks for veracity are pervasive. Ensure that the business logic surrounding the AI is documented, to cope with any lapses in integrity or alignment, or to clearly accept the risk if that happens.
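As a thought experiment, here is a minimal Python sketch of what such a feedback loop could look like. The record format, metric names, and thresholds are all hypothetical; your own integrity and alignment measures, and your risk appetite, will differ.

```python
from dataclasses import dataclass

@dataclass
class AgentVerdict:
    """One reviewed agent decision (hypothetical record format)."""
    alert_id: str
    agent_action: str         # e.g. "close_as_false_positive"
    analyst_action: str       # what the human reviewer would have done
    citations_verified: bool  # did the agent's cited evidence check out?

# Illustrative thresholds; set these from your own documented risk appetite.
THRESHOLDS = {"alignment_rate": 0.95, "integrity_rate": 0.98}

def review_sample(sample: list[AgentVerdict]) -> dict:
    """Score a reviewed sample for alignment (agreement with analysts) and
    integrity (veracity of cited evidence), and flag any threshold breaches."""
    total = len(sample)
    metrics = {
        "alignment_rate": sum(v.agent_action == v.analyst_action for v in sample) / total,
        "integrity_rate": sum(v.citations_verified for v in sample) / total,
    }
    metrics["breaches"] = [k for k, floor in THRESHOLDS.items() if metrics[k] < floor]
    return metrics
```

Run a review like this at the considered intervals mentioned above; any breach is the trigger to escalate to human review or fall back to the documented business logic.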
One way to build trust is to demonstrate capability and character consistently over time, just as with a person. That’s where AI can be difficult: we can’t always see how it reasons, it’s hard to demonstrate “character”, and it can be impractical to check all of its thinking.
It makes sense to use AI to help us understand and trust AI, while avoiding a “who watches the watchers” situation by keeping a human in the loop. Just as online chat functionality helped us to understand generative AI, generative UI[1] (no, that’s not a typo) will help us to build trust in agents, by showing visually what they are doing.

Agents can “report” back visually while working through a longer task, showing outputs along the investigative journey. When building AI, create hooks that allow this reporting, a clear user interface, auditability, and the ability to interrupt and course-correct. This is how the trust cycle gets built.
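To make that concrete, here is a minimal sketch of such hooks, assuming a hypothetical step-based agent runner: each step’s output is reported (for a generative UI to render), appended to an audit trail, and a human-in-the-loop callback decides whether the agent continues.

```python
import json
import time
from typing import Callable, Iterable

def run_agent_task(
    steps: Iterable[Callable[[], dict]],
    report: Callable[[dict], None],
    should_continue: Callable[[dict], bool],
) -> list[dict]:
    """Run a long agent task step by step: report each intermediate output
    (for a generative UI to render), keep an audit trail, and let a
    human-in-the-loop callback interrupt or course-correct at any point."""
    audit_trail = []
    for i, step in enumerate(steps):
        record = {"step": i, "time": time.time(), "output": step()}
        audit_trail.append(record)        # auditability
        report(record)                    # hook for visual progress reporting
        if not should_continue(record):   # interrupt / course-correct hook
            record["interrupted"] = True
            break
    return audit_trail

# Placeholder steps standing in for a real agent's investigation:
steps = [
    lambda: {"summary": "Collected related alerts for the affected host"},
    lambda: {"summary": "Checked file hash reputation: no detections"},
]
trail = run_agent_task(
    steps,
    report=lambda r: print(json.dumps(r)),  # a real UI would render this
    should_continue=lambda r: True,         # or prompt the analyst
)
```

The point of the design is not the specific code but the contract: every intermediate result is visible, logged, and interruptible.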
AI that you can interact with puts you at ease; it keeps your trust by building up new capability and consistently demonstrating its alignment to your goals. And this is just the beginning.
We can learn lessons from the adoption of automation. Once we get past the initial fear of being replaced, we start to see the strategic advantage, the limitations, and where we need to redirect human effort.
In the early stages of adoption, document where AI is built into your business logic. It makes it easier to fully or partially roll back changes if the value is not proven.
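One lightweight way to do that documentation, sketched below with hypothetical stage and agent names, is a simple integration map that records where each agent sits in the workflow and what the fallback path is, so a single stage can be rolled back without unpicking everything.

```python
# Hypothetical stage and agent names: record where each agent sits in the
# workflow and what the documented fallback is, so a stage can be rolled
# back on its own if the value is not proven.
AI_INTEGRATION_MAP = {
    "alert_triage":       {"agent": "triage_agent_v1",   "enabled": True,
                           "fallback": "analyst_queue"},
    "incident_reporting": {"agent": "report_drafter_v1", "enabled": True,
                           "fallback": "manual_template"},
    "containment":        {"agent": None,                "enabled": False,
                           "fallback": "human_approval_required"},
}

def route(stage: str) -> str:
    """Return what handles a stage: the agent if enabled, otherwise the
    documented fallback (i.e. the partial rollback path)."""
    entry = AI_INTEGRATION_MAP[stage]
    return entry["agent"] if entry["enabled"] and entry["agent"] else entry["fallback"]
```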
Start small, and iterate over successes. Try leveraging an agent that does something tightly scoped, such as malware reverse engineering, or familiar, such as a triage agent in the SOC. Whatever agent you choose, quantify what success would be upfront, whether that is a reduction in time (equating to risk reduction), a certain false positive rate, or access to new services (extending capability). Be sure to also timebound the expected ROI.
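For illustration only, with made-up names, thresholds, and dates, success criteria like these can be captured in a few lines so the pilot is judged against what was agreed upfront rather than against whatever it happens to deliver:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class SuccessCriteria:
    """Define what success means before the pilot starts (numbers are illustrative)."""
    agent_name: str
    max_false_positive_rate: float  # acceptable rate of wrong agent verdicts
    min_time_reduction: float       # fraction of triage time saved vs. baseline
    review_by: date                 # timebound for the expected ROI

    def met(self, false_positive_rate: float, time_reduction: float) -> bool:
        return (false_positive_rate <= self.max_false_positive_rate
                and time_reduction >= self.min_time_reduction)

criteria = SuccessCriteria("soc_triage_agent",
                           max_false_positive_rate=0.05,
                           min_time_reduction=0.30,
                           review_by=date(2026, 6, 30))
print(criteria.met(false_positive_rate=0.03, time_reduction=0.35))  # True in this example
```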
Once the agents have been selected and parameters chosen, ensure tight feedback loops and focus first on the most time-consuming but simple tasks. Common time drains in security that unfortunately require more ‘intelligence’ than automation include writing incident reports, determining false positives, and quickly surfacing relevant information about alerts for triage.
Gradually, as the programme builds, you’ll make mistakes but hopefully also make huge gains, building confidence and understanding of the limitations and potential of agentic AI along the way. Once you automate all of your Tier 1 security operations, the benefits realised will encourage you to look at what you can automate in Tier 2 as well, and the cycle becomes virtuous. Bringing your analysts along on the journey is crucial too.
If you haven't already, be bold and push ahead with your automation journey. Not only will it help with quality, consistency and speed, but the discomfort is a good muscle to exercise for adopting assistants and agentic AI in the future.
Even if you’re not there yet, let’s hold this agentic vision as an aspiration and ambition. To me, this really sounds like the SOC of the future.
Subscribe to the Perspectives by Splunk newsletter and get actionable executive insights delivered straight to your inbox to stay ahead of trends shaping security, IT, engineering, and AI.
[1] Generative UI is a user interface that is dynamically generated in real time by AI.