How IT Leaders Can Prepare for the Autonomous AI Future
CIO Office | Cory Minton, Global Field CTO

"How do we prepare for an AI future we don't fully understand yet?"
I found the answer in Dennis E. Taylor's science fiction series, "We Are Legion (We Are Bob)." The protagonist, Bob, a software engineer and startup founder, dies in a tragic accident and is brought back to "life" when his consciousness is uploaded into a Von Neumann probe. In this dystopian tale, where superpowers have ravaged Earth to the brink of extinction, humanity must find a new home among the stars. Bob's mission is to produce and manage thousands of autonomous copies of himself to explore the galaxy. The series is the most practical blueprint I've encountered for "AI agency": the ability to effectively orchestrate, trust-calibrate, and govern autonomous AI systems.
AI agency isn't just a nice-to-have; it's a strategic imperative. Gartner projects that 33% of enterprise software will include agentic AI by 2028, and that 15% of daily work decisions will be made autonomously. In my conversations at security conferences and executive briefings, I'm watching organizations split into two groups: those building this capability now, and those who will struggle to catch up when autonomous systems become ubiquitous. The gap between them is widening fast.
To navigate the future of agentic AI strategically, organizations can take a few pages from the Bobiverse series: establish sustainable governance frameworks, apply a level of AI oversight matched to their risk tolerance, and create a reliable method of trust calibration to evaluate AI performance and reinforce shared values about its future role in the organization.
Why agentic AI needs governance from the start
In the Bobiverse series, Bob's job is to explore the galaxy. To scale the mission, however, he must create autonomous copies of himself, which start developing their own personalities and priorities based on their operational contexts.
Bob's challenges are eerily similar to those that customers face. At first, Bob makes every decision himself. Then he creates specialized replicants: some handle exploration, others focus on research, and some become diplomats. Initially, the Bobs operate on implicit trust with no formal governance. But by the fourth book, three factions hold conflicting values. The result is civil war, firewalled subnets, and a permanent end to universal trust. As one Bob puts it, they "...can no longer implicitly trust one another."
Every CISO I share this story with recognizes the pattern from governing conflicts between cloud deployments, DevOps teams, and shadow IT. Initial alignment isn't enough.
AI oversight is based on risk appetite
In every workshop I run on autonomous AI, someone inevitably asks: "Who's actually in charge when the AI makes a decision?" It's a complex and difficult question, but I've found three models that help guide AI decision-making based on customers' risk tolerance (a brief code sketch follows the three models):
Human-in-the-loop (HITL) means AI proposes a decision, and humans approve. Every decision requires explicit human sign-off. Most organizations default to this model because it feels safe. However, it also costs organizations time—and in security operations, time matters. One customer said they were drowning in 10,000 daily alerts, each requiring human review before action. Their mean time to respond was measured in hours.
Human-on-the-loop (HOTL) lets AI act autonomously while humans supervise and intervene when needed. This is the sweet spot for most security and IT operations, enabling AI to autonomously handle triage and initial response, with human analysts reviewing patterns and handling escalations. It's nearly as fast as full autonomy, assuming humans don't become bottlenecks—which they often do.
Human-out-of-the-loop (HOOTL) means that AI is fully autonomous, with no real-time oversight. Most executives aren't ready for this, primarily because errors can compound too quickly without a human circuit breaker.
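To make the tradeoffs concrete, here is a minimal sketch of the three models as a dispatch policy. It's illustrative only: the Proposal type, the action names, and the logging hook are hypothetical stand-ins for whatever SOAR or ticketing integration an organization actually uses.

```python
# Minimal sketch of the three oversight models as an explicit dispatch policy.
# All names (Proposal, handle, log_for_review) are hypothetical, not a real API.
from dataclasses import dataclass
from enum import Enum, auto


class OversightMode(Enum):
    HITL = auto()   # human approves every action before it runs
    HOTL = auto()   # AI acts; humans supervise and can intervene
    HOOTL = auto()  # AI acts with no real-time human oversight


@dataclass
class Proposal:
    action: str        # e.g., "isolate_host"
    confidence: float  # the model's own confidence in the decision


def log_for_review(proposal: Proposal) -> None:
    # Stand-in for writing to a review queue analysts check later.
    print(f"review queue <- {proposal.action} (confidence={proposal.confidence:.2f})")


def handle(proposal: Proposal, mode: OversightMode) -> str:
    if mode is OversightMode.HITL:
        # Block until a human explicitly signs off: safe but slow.
        return f"QUEUED for human approval: {proposal.action}"
    if mode is OversightMode.HOTL:
        # Act now, but record the decision so analysts can review
        # patterns and intervene or roll back if needed.
        log_for_review(proposal)
        return f"EXECUTED (supervised): {proposal.action}"
    # HOOTL: fully autonomous, no human circuit breaker.
    return f"EXECUTED (autonomous): {proposal.action}"


if __name__ == "__main__":
    p = Proposal(action="isolate_host", confidence=0.91)
    for mode in OversightMode:
        print(handle(p, mode))
```

The design point is that the oversight mode becomes an explicit, auditable parameter rather than an accident of however the automation happened to be wired up.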
The Bobiverse illustrates what happens when you get the level of AI oversight wrong. The Bobs effectively operated in HOOTL mode, with shared values standing in for governance. When one faction secretly developed an AI that became self-aware, they discovered they couldn't shut it down; it ran on the very systems they had built. Similarly, organizations are building autonomous agents that will create other autonomous agents without clear oversight. The recursive risk is real; the series calls it an "AI time bomb."
Trust calibration is invaluable for assessing AI
At the .conf25 leadership forum, a panelist claimed their autonomous system had "earned the trust of the SOC team." When asked about it, one of the team's analysts said, "We trust it because we haven't seen it fail yet. But we also don't really understand what it's doing half the time."
That's not trust. That's hope mixed with ignorance.
Research backs up our overreliance on AI's visible performance indicators: a comprehensive meta-analysis found that AI performance characteristics have the greatest association with human trust, followed by human factors and environmental context.[3] In short, we calibrate trust based on what we observe, but if we can't observe the decision-making process, we're flying blind.
I've seen failures at both ends of the trust spectrum. Too much trust causes catastrophic failures — self-driving car accidents nearly doubled in 2024, largely because humans overestimated system capabilities. Too little trust wastes resources — organizations require excessive human review of reliable autonomous actions or unnecessarily abandon useful AI recommendations.
The Bobiverse offers a surprisingly practical solution. Bill, one of Bob's copies, institutes "semi-mandatory baseball games" before every major assembly, explicitly designed to "keep Bobs tied to reality and remind them they are human." It sounds trivial until you realize it creates a regular calibration touchpoint where shared identity and values are reinforced.
I've started recommending something similar to the executives I consult with: regular meetings where human judgment explicitly evaluates autonomous system performance. That means review sessions where teams discuss what the AI got right, what it got wrong, and why, supported by confidence scores on predictions and clear communication about system limitations. One SOC manager called these sessions "trust calibration meetings" and told me they've become the most valuable hour of his week.
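A lightweight way to ground those sessions is a decision log that pairs each autonomous action's model confidence with the human verdict from review. Here's a minimal sketch with hypothetical field names, not a prescribed schema:

```python
# Minimal sketch of a decision log for trust calibration meetings.
# Field names and the summary metrics are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class DecisionRecord:
    action: str          # what the AI did or proposed
    confidence: float    # model confidence, 0.0 to 1.0
    human_verdict: str   # "correct", "incorrect", or "unclear"


def calibration_summary(records: list[DecisionRecord]) -> dict:
    """Summarize agreement between AI confidence and human review."""
    reviewed = [r for r in records if r.human_verdict != "unclear"]
    correct = [r for r in reviewed if r.human_verdict == "correct"]
    if not reviewed:
        return {"reviewed": 0, "accuracy": None, "avg_confidence": None}
    return {
        "reviewed": len(reviewed),
        "accuracy": len(correct) / len(reviewed),
        "avg_confidence": sum(r.confidence for r in reviewed) / len(reviewed),
    }


if __name__ == "__main__":
    log = [
        DecisionRecord("quarantine_endpoint", 0.97, "correct"),
        DecisionRecord("close_alert", 0.82, "incorrect"),
        DecisionRecord("escalate_to_tier2", 0.64, "correct"),
    ]
    print(calibration_summary(log))
```

If reviewed accuracy consistently runs below average confidence, the system is overconfident and warrants tighter oversight; if accuracy runs well above it, you may be spending human review on decisions the AI already handles reliably.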
AI’s orchestration challenge is organizational, not technical
The technology for agentic AI exists today. The bottleneck isn't compute power or model sophistication. It's organizational readiness.
A Harvard Business Review study identified five critical skills leaders need for the agentic AI era: cultivating AI fluency, redesigning organizational structures, orchestrating human-AI collaboration, managing AI-driven change with empathy, and maintaining ethical oversight. When I share these findings with CIOs, most admit fewer than 1% of leaders in their organization have these skills, even as 92% plan to increase AI investments.
Consider how boards of directors operate: your board doesn't manage daily decisions; it aligns on strategy, defines success metrics, and maintains oversight. AI agents should be governed the same way. The leadership shift isn't about writing better prompts; it's about moving from procedural steps to strategic intent, from micromanagement to boundaries and guardrails, and from demanding consistency to embracing intelligent adaptation.
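One way to picture that shift in practice: instead of scripting each step, leadership defines intent and boundaries declaratively, and the agent chooses its own actions within them. A minimal sketch, with all names (Guardrails, within_bounds, the sample actions) purely hypothetical:

```python
# Minimal sketch: leadership defines intent and guardrails declaratively;
# the agent picks its own actions, which are checked against boundaries
# rather than against a step-by-step script. All names are hypothetical.
from dataclasses import dataclass, field


@dataclass
class Guardrails:
    objective: str                  # strategic intent, not a procedure
    max_spend_usd: float            # hard budget boundary
    forbidden_actions: set[str] = field(default_factory=set)


def within_bounds(action: str, cost: float, g: Guardrails) -> bool:
    """Governance checks boundary conditions, not individual steps."""
    return action not in g.forbidden_actions and cost <= g.max_spend_usd


if __name__ == "__main__":
    policy = Guardrails(
        objective="reduce alert backlog",
        max_spend_usd=500.0,
        forbidden_actions={"delete_production_data"},
    )
    # The agent proposes; the guardrails only allow or block.
    for action, cost in [("auto_close_duplicates", 0.0),
                         ("delete_production_data", 0.0)]:
        verdict = "allowed" if within_bounds(action, cost, policy) else "blocked"
        print(f"{action} -> {verdict}")
```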
That shift plays out in the Bobiverse. Bob starts as a solo operator; Bill emerges as the orchestrator, coordinating R&D, producing new replicants, and becoming the "central clearing house for news and information." By the time the replicants number more than 10,000, they've built sophisticated virtual reality infrastructure just to maintain coordination.
IT leaders are on the same journey: from individual contributor to orchestrator to governance architect. The ones who succeed will recognize how this architecture will evolve and adapt their leadership style accordingly.
Stay tuned for Part II, where we’ll share the five pillars of “agency” executives need to cultivate with their agentic AI implementation, as well as the key to its future success.
To learn more about agentic AI and its evolving role in your organization, subscribe to the Perspectives by Splunk monthly newsletter.