“Oh, that view is tremendous.”
It’s easy to forget that when John Glenn marveled at the Earth from orbit, only two other humans had ever shared that view. In the decades since, we’ve watched spaceflight move from miracle to milestone to routine. A perspective once held by a handful of people is now shared by millions. Documentaries, social feeds, and astronaut livestreams show how radically a single breakthrough can reshape our sense of what’s possible.
Today’s SOCs are stepping into a new way of seeing — and running — security operations. Analysts bring deep intuition, pattern recognition, and grit to every incident. But even world-class humans can only go so far inside an operational model built on fragmented tools and manual triage. The next leap forward requires something different — a shift in how work is done, how decisions are made, and how humans and machines team up under pressure.
That pressure continues to build. Data volumes explode, workflows grow more tangled, and AI-enabled attacks accelerate faster than analysts can respond. More tools, more dashboards, and more alerts aren’t merely insufficient — they overwhelm. So much so that more than 50% of security leaders and practitioners report they’ll likely quit their role in the next 12 months.[1]
Agentic AI offers a new operational model — one where autonomous agents tackle the scale problem, combing through unstructured telemetry and running investigations with precision and speed while humans steer alignment, oversight, and guardrails.
Breakthroughs never reshape the world on technology alone — they rely on the people who use them. The hybrid human-agent SOC becomes the foundation for 2026, empowering analysts to focus on strategy, creativity, and the decisions that matter most — a shift that promises to be as transformative in its own way as our first astronauts’ look back at Earth.
And that’s where our predictions begin.
The next wave of SOC maturity won’t be defined by who has AI — it will be defined by who knows how to enable it. Many teams will continue to “use AI” to summarize alerts, pull context, or draft reports on command. Helpful, yes. But ultimately still bolt-ons. The breakout SOCs will operate differently: built around agents as teammates rather than tools, with processes, oversight, and workflows designed to let AI take on real operational weight.
This shift doesn’t diminish analysts. It elevates them. As SOCs adopt human-agent teaming, analysts begin supervising autonomous workflows instead of manually executing every investigative step. They orchestrate how agents collaborate, set boundaries around what agents should and shouldn’t do, and ensure system behavior reflects the SOC’s mission. As AI systems grow more interconnected, analysts take on a more architectural role: designing workflows, tuning policies, and shaping how agents think and work together.
Trust becomes the foundation of this new operating model. Agents can absorb the repetitive, low-risk work that weighs teams down, freeing analysts to focus on judgment-heavy tasks like spotting deception, weighing risk, and steering strategy. But analysts still guide when to rely on agents, when to intervene, and how to maintain alignment as autonomy increases. This human-on-the-loop oversight transforms analysts into stewards of autonomous decision-making.
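To make that oversight model concrete, here is a minimal sketch, in Python, of what a human-on-the-loop gate could look like: agents act autonomously on low-risk work and queue anything riskier for an analyst. The risk tiers, the `AgentAction` shape, and the threshold are illustrative assumptions, not a reference to any particular product API.

```python
from dataclasses import dataclass, field
from enum import Enum


class Risk(Enum):
    LOW = 1     # e.g., enrich an alert with threat intel
    MEDIUM = 2  # e.g., quarantine a single file on one host
    HIGH = 3    # e.g., disable an account or isolate a subnet


@dataclass
class AgentAction:
    agent_id: str
    description: str
    risk: Risk


@dataclass
class HumanOnTheLoopGate:
    """Agents proceed autonomously below a risk threshold; anything
    above it is escalated to an analyst instead of executed."""
    auto_approve_up_to: Risk = Risk.LOW
    escalation_queue: list[AgentAction] = field(default_factory=list)

    def review(self, action: AgentAction) -> bool:
        if action.risk.value <= self.auto_approve_up_to.value:
            return True  # agent proceeds; the action is still logged
        self.escalation_queue.append(action)  # an analyst decides
        return False


gate = HumanOnTheLoopGate()
gate.review(AgentAction("triage-agent-01", "add WHOIS context to alert", Risk.LOW))   # True
gate.review(AgentAction("response-agent-02", "isolate host FIN-LAP-113", Risk.HIGH))  # False, escalated
```

The design choice worth noticing is the threshold itself: deciding when to raise `auto_approve_up_to` as trust in the agents grows is exactly the kind of call that stays with humans.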
This shift also reshapes what experience looks like inside the SOC. As automation levels the execution layer, traditional markers of seniority begin to blur. As Kirsty Paine, Splunk Field CTO, explains: “I suggest building a new set of responsibilities that we designate to a ‘Tier 4’ in the SOC as Tier 1 gets automated away. We can start to do more with agents and AI, and humans can elevate to strategic programs managing agents, rather than doing the work themselves.” With entry-level analysts stepping directly into Tier 2-equivalent workflows, junior analysts may appear as proficient as veterans — not because they’re more seasoned, but because AI accelerates execution for everyone. “Then the question becomes how to still support career progression and account for these new tasks that humans will have time to do thanks to AI,” Paine continues. “In the same way we currently have detection engineers write detections, will we need AI engineers to run agents?”
For SOC leaders, this opens the door to redefine excellence. Success becomes less about volume and velocity, and more about how well analysts supervise agents, tune automation, and guide decisions with long-term impact. Experience is measured not in keystrokes, but in oversight — in the ability to steer a system that learns alongside its human counterparts.
And as we rethink how people work, the next step becomes clear: reimagining how we measure the performance of the SOC itself.
For years, MTTR has been the SOC’s North Star — the metric that promised clarity, accountability, and a sense of progress. But as autonomous agents take on more of the detection-investigation-response cycle, time-based metrics start to lose their meaning. Speed only tells part of the story. A future defined by agentic systems demands new measurements — ones that capture quality, context, prevention, and the business impact of decisions made at machine scale.
As agents analyze unstructured telemetry, surface high-fidelity findings, and automate the bulk of Tier 1 work, MTTR becomes less a measure of performance and more a snapshot of how late we were in the process. If an agent neutralizes an issue before it ever becomes an incident, what exactly are we “mean-timing” to? It’s akin to early automation rollouts: response times drop at first because automation is handling all the easy tickets, flattering the legacy metrics. Then, as AI adoption matures, those KPIs climb back up, because what’s left is manual or simply harder. SOC directors will look toward outcome-based measures instead: reduction in false positives, precision of autonomous triage, risk avoided rather than risk responded to, and alignment to business-critical KPIs like downtime avoided or cost per prevented breach. These are the metrics that reflect what agentic systems actually deliver — continuous insight, not just rapid cleanup.
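As one illustration of what outcome-based measurement could look like, the sketch below computes two of those measures, precision of autonomous triage and false-positive reduction, from hypothetical counts. The numbers, function names, and baseline are invented for the example.

```python
def triage_precision(true_positives: int, escalated_total: int) -> float:
    """Of everything the agent escalated to humans, how much was real?"""
    return true_positives / escalated_total if escalated_total else 0.0


def false_positive_reduction(fp_baseline: int, fp_current: int) -> float:
    """Drop in false positives reaching analysts vs. a pre-agent baseline."""
    return (fp_baseline - fp_current) / fp_baseline if fp_baseline else 0.0


# Hypothetical month: the agent escalated 120 findings and 102 were real;
# analysts saw 1,400 false positives before agents, 310 after.
print(f"Autonomous triage precision: {triage_precision(102, 120):.0%}")           # 85%
print(f"False-positive reduction:    {false_positive_reduction(1400, 310):.0%}")  # 78%
```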
As these new signals emerge, the role of the analyst changes with them. Instead of racing the clock, analysts guide system behavior: tuning agents, validating reasoning, and ensuring each autonomous action mirrors the organization’s appetite for risk. Their impact becomes less about the speed of a task and more about the quality of judgment they bring to shaping the system itself. This shift challenges longstanding assumptions about what “good” looks like. Teams accustomed to celebrating faster MTTR will need a new vocabulary to describe excellence — perhaps one rooted in effectiveness, resilience, and foresight rather than pure reaction time.
This evolution opens the door to stronger alignment with business leaders, too. When KPIs reflect avoided loss, reduced operational risk, and measurable improvements in resilience, security becomes far easier to explain — and to justify.
And as SOCs begin adopting these new measures, a deeper operational change follows: the need for AI systems that can connect insights across tools, platforms, and environments. A shift that leads directly to the next prediction — the rise of the connected, multi-agent ecosystem.
The first wave of AI in security arrived as isolated assistants living inside individual tools — helpful, but boxed in. In 2026, the value will shift from single-purpose assistants to coordinated ecosystems of agents that can reason together, share context, and take action across the entire stack. The SOC has always been a team sport, and now the AI side of that team is beginning to operate the same way: distributed, collaborative, and deeply interconnected.
Instead of hopping between one embedded assistant in a SIEM, another in an EDR, and a third in a cloud console, SOCs will rely on networks of agents that can move fluidly across platforms and workflows. These agents will tap into open standards — the connective “fabric” that lets them speak the same language. Protocols like the Model Context Protocol give them a consistent way to gather context across environments, while emerging agent-to-agent collaboration frameworks let the agents in a multi-agent system sequence tasks, hand off insights, and form a shared understanding of risk. The result is not just speed, but coherence: a system that sees the whole picture, not just one dashboard at a time.
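As a sketch of that shared-language idea (not the Model Context Protocol’s actual wire format), the snippet below shows agents enriching a common context object through a sequenced handoff. The agent names, fields, and scores are invented for illustration.

```python
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class SharedContext:
    """A common picture of risk that every agent reads and enriches."""
    entity: str  # e.g., a host or account under investigation
    findings: list[str] = field(default_factory=list)
    risk_score: float = 0.0


def siem_agent(ctx: SharedContext) -> SharedContext:
    ctx.findings.append("SIEM: anomalous logon volume for entity")
    ctx.risk_score += 0.3
    return ctx


def edr_agent(ctx: SharedContext) -> SharedContext:
    ctx.findings.append("EDR: unsigned binary spawned by an office doc")
    ctx.risk_score += 0.5
    return ctx


def run_pipeline(entity: str,
                 agents: list[Callable[[SharedContext], SharedContext]]) -> SharedContext:
    """Sequence the agents so each hands off an enriched view, not a silo."""
    ctx = SharedContext(entity=entity)
    for agent in agents:
        ctx = agent(ctx)
    return ctx


result = run_pipeline("host-FIN-LAP-113", [siem_agent, edr_agent])
print(result.risk_score, result.findings)  # one coherent picture, two sources
```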
But this evolution also changes the terrain. Older systems may not speak the same language. Legacy tools may not have the hooks needed for insight sharing. And during the transition, teams may find themselves navigating an environment where not every piece fits cleanly with the next.
This is where leadership makes the difference. SOC directors who prioritize interoperability early — whether through standards adoption, phased modernization, or thoughtful integration roadmaps — will see the fastest gains. They’ll create environments where agents can operate safely, predictably, and with a full view of risk, rather than working in narrow silos.
And once these interconnected systems are in place, they unlock something even more powerful: the ability to simulate attacks, test defenses, and train both humans and agents using the same collaborative fabric. Which brings us directly to the next shift — the rise of continuous adversarial simulation powered by autonomous red-team agents.
Attackers aren’t waiting for defenders to catch up — and in 2026, their AI won’t either. Last year’s AI-enabled malware and automated attack chains showed just how quickly adversaries can scale reconnaissance, payload development, and lateral movement with minimal human input. What emerged wasn’t just faster attacks, but more consistent ones — patterns, behaviors, and decision paths that can be studied, anticipated, and replicated. This creates a new opportunity for defenders: to turn the attackers’ predictability into a training ground. “AI-powered malware may get smarter,” says Splunk Senior Security Strategist Ryan Fetterman, “but it also becomes more detectable. Each adaptive technique — from on-device script generation to prompt-driven control — gives defenders new patterns to find and stop.”
SOCs will begin embracing continuous adversarial simulation powered by autonomous red-team agents. These systems probe defenses, stress-test detection models, and expose weak points long before a real campaign hits production. “AI now lets attackers automate entire campaigns with almost no human effort,” continues Fetterman. “But autonomy cuts both ways — these repeatable patterns give defenders the chance to simulate attacks and strengthen our own agents.” Instead of waiting for alerts to tell them where they’re vulnerable, SOC teams will generate their own pressure — running agent-driven attack sequences that mirror the very tactics adversaries are automating. It turns readiness into a living, breathing discipline rather than a once-a-year exercise.
For analysts and SOC leaders, the impact is profound. Instead of learning from incidents after the fact, they learn from controlled simulations that unfold at machine speed. Agents can replay patterns seen in emerging AI-enhanced malware, mimic the logic of “vibe-hacking” campaigns, and generate new variants that push defenses harder than any manual red team could. Analysts then step into the loop to interpret results, refine detection logic, and guide agents on how to push or probe next. The SOC becomes an environment of continuous rehearsal — where humans and agents sharpen each other, day after day.
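A minimal sketch of that rehearsal loop might look like the following: a red-team agent replays techniques against detection logic inside hard boundaries, and whatever slips through is queued for analyst review. The technique names, the scope check, and the stand-in detector are all placeholders, not real tooling.

```python
import random

# Hypothetical technique catalog the red-team agent can draw from.
TECHNIQUES = ["credential_stuffing", "lateral_movement", "script_generation"]
ALLOWED_SCOPE = {"staging-env"}  # rules of engagement: never touch production


def detector(technique: str) -> bool:
    """Stand-in for real detection logic; it only flags some techniques."""
    return technique in {"credential_stuffing", "lateral_movement"}


def run_simulation(rounds: int, target: str) -> list[str]:
    """Replay techniques against the detector, inside defined boundaries."""
    if target not in ALLOWED_SCOPE:
        raise PermissionError(f"{target} is outside the rules of engagement")
    gaps = []
    for _ in range(rounds):
        technique = random.choice(TECHNIQUES)  # the red-team agent picks a play
        if not detector(technique):            # the blue side missed it
            gaps.append(technique)             # queue it for analyst review
    return gaps


print(sorted(set(run_simulation(50, "staging-env"))))  # e.g., ['script_generation']
```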
This evolution introduces a new kind of challenge as well. As adversarial agents begin testing defensive agents, the oversight loop grows in complexity. Who reviews what? How do you ensure simulations don’t drift beyond defined boundaries? But SOCs that establish clear supervisory patterns — where analysts govern the rules of engagement, verify system behavior, and anchor simulations to their risk posture — will unlock the full value of this new readiness cycle.
And with continuous simulation comes something even more important: clarity. Analysts and their agentic counterparts begin to understand not just whether defenses work, but why — and how to adapt together. That demand for traceability, reasoning, and auditability leads directly into the next frontier: ensuring every autonomous action leaves a trail that humans can trust.
As autonomous agents take on more investigative and response work, accountability becomes just as important as speed. In the same way analysts track the actions of human teammates, SOCs will expect agents to show not only what they did, but why they did it. Every decision, every step taken, every signal referenced will need to be captured, explainable, and reviewable. Agents aren’t simply new tools — they’re new digital identities with privileges, responsibilities, and governance requirements of their own. As Rod Soto, principal software engineer for the Splunk Threat Research Team, puts it: “Agents effectively function as new digital identities, requiring the same governance you apply to users and applications — including defined privileges, oversight, and strict least-privilege access.”
This shift marks a new era of operational transparency. As agents run triage, correlate signals, or automate containment, SOC leaders will rely on richer, high-resolution audit trails that expose reasoning paths and alignment to policy. Analysts will use this visibility to validate conclusions, retrace actions, and correct course when an agent misunderstands context. Compliance teams will use it to meet regulatory expectations as autonomous decisions play a larger role in incident response. And leadership will use it to build trust — ensuring the SOC never becomes a black box, even as machine-speed workflows become standard.
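One way to picture such an audit trail is a structured, reviewable record per autonomous decision, as in the hedged sketch below. The fields are illustrative, not a compliance schema or a Splunk format.

```python
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone


@dataclass
class AgentAuditRecord:
    """One entry per autonomous decision: which identity acted, what it did,
    why, under which policy, and on what evidence."""
    agent_id: str        # the agent's own digital identity
    action: str
    reasoning: str       # the agent's stated rationale, kept verbatim
    policy_ref: str      # the policy clause that authorized the action
    evidence: list[str]  # the signals the agent consulted
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


record = AgentAuditRecord(
    agent_id="containment-agent-07",
    action="quarantined host FIN-LAP-113",
    reasoning="EDR hit correlated with anomalous logons; score above threshold",
    policy_ref="IR-POLICY-4.2 (least privilege: endpoint scope only)",
    evidence=["edr:alert:9912", "siem:search:48812"],
)
print(json.dumps(asdict(record), indent=2))  # reviewable by analysts and auditors
```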
But as autonomy scales, the oversight loop becomes more complex. In some workflows, agents will supervise other agents, reviewing logs, checking outputs, or coordinating escalations. This introduces new questions about hierarchy, supervision, and the boundary where human review must always remain. As Soto observed: “When agents start supervising other agents, the oversight loop can grow beyond human reach unless we anchor the system to clear policies and defined points of human review.” The SOC’s job will be to design those boundaries with intention — ensuring humans stay in control of the mission, even as agents handle more of the execution.
The SOCs that get this right will create something entirely new: an environment where autonomy and auditability reinforce each other. Analysts gain clearer insight into how decisions are made. Leaders gain confidence in scaling automation. Regulators gain the transparency they need for trust. And the entire organization moves toward a model where agents don’t replace human judgment — they strengthen it.
Just as spaceflight reshaped how we see our world, agentic AI is reshaping how we see the SOC, what it means to be truly secure, and how security can catalyze innovation. The breakthroughs ahead won’t come from technology alone, but from how people use it — how analysts guide agents, how leaders set boundaries, and how teams build trust in systems designed to think alongside them.
This next era of security operations belongs to the humans who bring context, judgment, and creativity to every mission. With agents operating as trusted partners, the SOC’s horizon expands. And the view ahead is tremendous.
[1] https://www.iansresearch.com/resources/ians-cybersecurity-staff-compensation-report
The world’s leading organizations rely on Splunk, a Cisco company, to continuously strengthen digital resilience with our unified security and observability platform, powered by industry-leading AI.
Our customers trust Splunk’s award-winning security and observability solutions to secure and improve the reliability of their complex digital environments, at any scale.