The 5 Pillars of Successful Agentic AI Adoption
Cory Minton, Global Field CTO, CIO Office

In light of a confusing and fast-moving AI landscape, leaders need to start plotting a course to prepare for agency. But how can they effectively lay the groundwork to get ahead of the agency curve before the window for profitable advantage closes?
In Part I, I referenced the science fiction book series “We Are Legion (We Are Bob),” by Dennis E. Taylor to illuminate the importance of establishing governance frameworks, the various levels of AI oversight based on an organization’s priorities and risk appetite, and how organizations can infuse trust calibration processes into their systems of AI checks and balances.
But the lessons imparted by “Bobiverse” hardly stop there. Enter one of the Bob replicants named Bill. Bill's emergence as the successful orchestrator of the Bobiverse offers a powerful lesson in why foundation matters more than speed. While other Bobs rushed off to explore and improvise solutions on the fly, Bill stayed put and deliberately built infrastructure: the Skunk Works for R&D, the SCUT communication network, standardized replication procedures, and the moot governance structure. When replicative drift eventually created factional splits, the Bobs who had operated ad hoc for decades struggled to retrofit governance and alignment mechanisms; some factions literally had to firewall themselves off from each other. The Bobs who succeeded long-term were those who, like Bill, invested early in the architecture and processes that enable coordination at scale. Their story proves that the investment in people and processes isn't overhead; it's the foundation that determines whether the other efforts ever deliver value.
Amid the agentic AI gold rush, many organizations will stumble and fall short of material impact unless they lay a strong foundation for its implementation and sustained success over time.
Lessons I’ve learned from field experience and hundreds of executive conversations point to five critical pillars leaders will need to develop, from understanding limitations and thoughtfully designing architecture to adapting to the unexpected. Together, these pillars allow organizations to implement agency effectively, get ahead of the curve, and position themselves for future success.
Why many will adopt agentic AI, but few will succeed
The remarkable thing about agentic AI is how much consensus this topic elicits across firms — not just about its upward momentum, but about its potential to fall short of expectations. McKinsey documented what they call "the gen AI paradox"—78% of companies use generative AI, yet roughly 80% report no material earnings impact.
BCG found that only 5% of companies have achieved value-generating AI capabilities. These "future-built" organizations follow what they call the "10-20-70" formula: 10% algorithms, 20% technology and data, and 70% people and processes. When I share these stats with executives, I watch them mentally audit their current AI spend. Almost everyone has inverted this ratio, spending 70% on technology and 10% on the people and processes that determine success.
Forrester provides the reality check, noting that 75% of firms will fail at building advanced agentic architectures independently due to governance gaps. The transition from generative AI to agentic AI isn't a natural evolution — it's a difficult leap across a canyon of complexity. Unlike generative AI, which you can trial in isolation, agentic AI requires deep integration with enterprise workflows and robust governance from day one.
However, a 75% failure rate isn't inevitable. It's a choice. Organizations that treat agentic AI as a technology problem will fail. Those that treat it as a capability-building challenge will succeed.
The 5 pillars of developing AI agency
After watching both successes and failures in the field, and synthesizing analyst research, I've identified five pillars that define the "agency" capabilities executives need to develop: awareness, architecture, alignment, accountability, and adaptation.
- Awareness means understanding both what autonomous systems can do and their limitations — and keeping that understanding current as capabilities evolve. Implementing agentic AI isn't a one-time training exercise. It's ongoing operational intelligence about model capabilities, failure modes, and drift patterns. The humans in the loop, on the loop, and out of the loop framework I described in Part I provides the foundation for matching human oversight to capability, but you need awareness to make those choices intelligently.
- Architecture is about designing systems for orchestration from the start, not as an afterthought post-deployment. Infrastructure should include multi-agent orchestration platforms, and the mesh architectures recommended by analysts. Architecture must also include observability, security, and governance as foundational elements, not afterthoughts.
- Alignment addresses the hard truth that alignment degrades over time. The early implicit trust mentioned in Part I worked until values diverged through operational context, what the series calls "replicative drift." Trust calibration research provides a toolkit that incorporates confidence scoring, explainability, performance monitoring, and regular human-oriented touchpoints. You're not trying to prevent drift, which is impossible with systems that learn and adapt. You're trying to detect drift and continuously recalibrate trust.
- Accountability establishes clear responsibility chains for autonomous decisions. Industry governance models emphasize reliability, audit trails, transparency, and fairness. The "Guardian Agent" concept — autonomous systems that monitor other autonomous systems — provides an architectural pattern, although many are still determining who watches the watchers.
- Adaptation recognizes that agentic transformation requires organizational redesign, not just technology deployment. McKinsey describes M-shaped supervisors orchestrating hybrid workforces, T-shaped experts handling exceptions and reimagining workflows, and AI-augmented frontline workers, which illustrate how jobs must evolve.
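To make the alignment and accountability pillars concrete, here is a minimal sketch of a trust-calibration loop. All names (`TrustCalibrator`, `AgentAction`, the 0.8 threshold, the 90% success-rate trigger) are illustrative assumptions, not a prescribed design: low-confidence actions are routed to a human, every decision lands in an audit trail, and a drop in recent outcomes automatically tightens the confidence threshold rather than trying to prevent drift outright.

```python
from dataclasses import dataclass
from collections import deque

@dataclass
class AgentAction:
    description: str
    confidence: float  # model-reported confidence in [0, 1]; an assumption of this sketch

class TrustCalibrator:
    """Illustrative trust-calibration loop: route low-confidence actions to a
    human, keep an audit trail, and widen oversight when outcomes degrade."""

    def __init__(self, threshold: float = 0.8, window: int = 50):
        self.threshold = threshold
        self.outcomes = deque(maxlen=window)  # rolling record of recent pass/fail
        self.audit_log: list[str] = []        # accountability: who decided what

    def route(self, action: AgentAction) -> str:
        # Human-on-the-loop gate: autonomous only above the current threshold.
        decision = "auto" if action.confidence >= self.threshold else "human_review"
        self.audit_log.append(f"{decision}: {action.description}")
        return decision

    def record_outcome(self, success: bool) -> None:
        self.outcomes.append(success)
        # Recalibrate rather than prevent drift: if the recent success rate
        # slips below 90%, raise the bar so more actions get human review.
        if len(self.outcomes) >= 10:
            rate = sum(self.outcomes) / len(self.outcomes)
            if rate < 0.9:
                self.threshold = min(0.99, self.threshold + 0.05)
```

The same wrapper pattern generalizes to the "Guardian Agent" idea: the calibrator itself is a small supervising component that observes another autonomous system and adjusts its autonomy, which is why its own audit log matters for answering who watches the watchers.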
The agentic AI window for adoption is closing
Gartner found that only 19% of organizations have made significant agentic AI investments as of January 2025, with 31% taking a wait-and-see approach. Meanwhile, AI task completion capabilities are doubling every four months. In my view, we're potentially four months away from AI systems that can work reliably unsupervised for days. The wait-and-see organizations think they're being prudent; they're irreversibly falling behind.
For IT and cybersecurity executives, the message is urgent. The time to develop agency is before autonomous systems are everywhere, not after. Those who invest now in awareness, architecture, alignment, accountability, and adaptation—learning from both science fiction and emerging practice—will orchestrate the autonomous future rather than be overwhelmed by it.
As Bob learns over the course of his journey from software engineer to galactic coordinator: "How are you supposed to feel if you are forced to do what you would have done anyway?" The executives developing agency now aren't being forced into laying that foundation. They’re choosing to build the capability to thrive in an autonomous future.
The alternative isn't avoiding that future. It's arriving there unprepared.
To learn more about the future of agentic AI and how you can set the stage for a successful rollout, subscribe to the Perspectives by Splunk monthly newsletter.