Breaking Down Silos to Become AI-ready

Data silos have always existed, but AI has turned them from headaches into existential threats. What once felt like an inconvenience now erodes the trust and transparency required for AI to work effectively.

Disconnected tools, fragmented ownership, and competing priorities are no longer acceptable inefficiencies; they're critical risks. The rise of artificial intelligence is forcing organizations to take a hard look at how their systems and teams operate.

To unpack how leaders can identify and dissolve these silos from both technical and organizational perspectives, we spoke with Keith McClellan, Field CTO at Splunk with over 20 years’ experience in digital resilience and transformation, and Craig Robin, Field CTO at Splunk with a focus on complex technical challenges and strategic growth. Their conversation explores how alignment across people, processes, and technology will define AI-ready organizations.

Perspectives Editor: Why do data silos become mission-critical risks once AI enters the picture?

Craig Robin: Every organization is trying to leverage AI to make faster, more informed decisions, but that depends entirely on the quality, consistency, and availability of data. When your tools and systems are fragmented, your insights are too.

What we’re seeing now is that leaders can’t afford to have disconnected technology or unclear ownership. You need a complete picture of how data moves across the organization: what’s duplicated, what’s outdated, and what’s missing. Without that clarity, you risk feeding poor data into your AI models, and that’s a fast track to unreliable outcomes.

Keith McClellan: I would add that silos aren't just technical; they're cultural. When teams operate independently with different incentives and goals, you lose cohesion. AI success requires alignment across the entire business, including shared objectives, consistent governance, and trust in the data.

Perspectives Editor: What are the first steps leaders should take to identify and dissolve organizational silos?

Craig Robin: From a tooling perspective, a systems inventory is foundational for successful AI adoption. You have to know what you have before you can improve it. This means cataloging every tool, every data source, and every integration. You want to know who owns it, what it’s used for, and what value it delivers.

The mistake organizations often make is treating inventory like a one-time cleanup project. It’s not. It’s a strategic discipline that needs to be maintained over time. The best organizations build a continuous process of review, refinement, and rationalization because your environment is constantly changing.
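The continuous inventory discipline Craig describes can be sketched in code. The record fields and the 180-day review window below are illustrative assumptions, not a Splunk-prescribed schema; the point is that every entry has an owner, a purpose, and a review date that can be checked automatically.

```python
from datetime import date, timedelta

# A minimal systems-inventory record; field names are illustrative assumptions.
inventory = [
    {"tool": "log-pipeline", "owner": "platform-team",
     "purpose": "ingest application logs", "last_reviewed": date(2024, 1, 15)},
    {"tool": "legacy-bi", "owner": None,
     "purpose": "ad-hoc reports", "last_reviewed": date(2022, 6, 1)},
]

def needs_review(entry, max_age_days=180, today=None):
    """Flag entries with no owner or a stale review date --
    the continuous review-and-rationalization loop the text describes."""
    today = today or date.today()
    stale = (today - entry["last_reviewed"]) > timedelta(days=max_age_days)
    return entry["owner"] is None or stale

# Run the check against a fixed date so the result is reproducible.
flagged = [e["tool"] for e in inventory if needs_review(e, today=date(2024, 3, 1))]
```

Running a check like this on a schedule, rather than as a one-off audit, is what turns inventory from a cleanup project into the ongoing discipline described above.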

Keith McClellan: An inventory audit should also look at organizational structures. Ask how teams are organized, where the handoffs are breaking down, and what processes depend on outdated or redundant systems.

Technology and team structures have to evolve together, or the same silos will just resurface in new forms.

Perspectives Editor: How can executives objectively evaluate what systems and tools to keep, consolidate, or retire?

Craig Robin: From my experience, a balanced scorecard approach will help leaders evaluate systems objectively. This process looks at three main areas: performance, alignment, and risk.

Performance is about how well the tool does what it’s supposed to do. Alignment measures how closely it supports broader company goals. Risk looks at security, redundancy, and integration dependencies. The most impactful of these categories can vary from organization to organization depending on maturity. Some have massive data volumes that need real-time performance, while others are more focused on de-risking the environment. All three should be considered when building a solid data foundation.

By scoring tools across these dimensions, you can compare them apples to apples and make informed decisions about what to keep, consolidate, or retire. It’s not about cutting costs; it’s about optimizing for value and efficiency. Over time, this creates a more stable, transparent, and trusted environment that’s much easier to scale for AI adoption.
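One way to picture the balanced scorecard is as a weighted score per tool, with thresholds that bucket each tool into keep, consolidate, or retire. The weights and thresholds below are hypothetical; in practice an organization would tune them to its own maturity, as Craig notes.

```python
from dataclasses import dataclass

@dataclass
class ToolScore:
    name: str
    performance: float  # 0-10: how well the tool does its job
    alignment: float    # 0-10: how closely it supports company goals
    risk: float         # 0-10: higher means lower risk (security, redundancy, dependencies)

def weighted_score(tool, weights=(0.4, 0.3, 0.3)):
    """Combine the three scorecard dimensions into one comparable number."""
    wp, wa, wr = weights
    return wp * tool.performance + wa * tool.alignment + wr * tool.risk

def triage(tools, keep_threshold=7.0, retire_threshold=4.0):
    """Bucket each tool into keep / consolidate / retire by its score."""
    decisions = {}
    for tool in tools:
        score = weighted_score(tool)
        if score >= keep_threshold:
            decisions[tool.name] = "keep"
        elif score < retire_threshold:
            decisions[tool.name] = "retire"
        else:
            decisions[tool.name] = "consolidate"
    return decisions
```

The value of the exercise is less in the exact numbers than in forcing every tool through the same three questions, which is what makes the comparison apples to apples.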

Perspectives Editor: Practically speaking, how do organizations build the connections between siloed teams and systems?

Keith McClellan: Cross-functional teams are one solution. You can create rotation programs that let employees experience different parts of the business, which builds empathy and understanding across departments. But ultimately, it’s about culture. You can’t mandate collaboration; you have to model it. That starts with shared objectives that link team goals to shared outcomes, joint planning sessions where leaders review priorities together, and post-incident reviews that include every function involved, not just one team.

Embedding collaboration into existing workflows makes it habitual, not optional. Leaders need to reward people who help others succeed, not just those who hit individual KPIs. When recognition aligns with shared goals, the organization naturally becomes more cohesive.

Perspectives Editor: What tensions can arise in organizations seeking to dissolve silos, and how can this tension impact AI readiness?

Craig Robin: A common area of tension is the choice between giving teams broad access to data so they can experiment with AI and analytics, and maintaining the controls needed to protect the business. One side pushes for speed and innovation; the other ensures compliance, privacy, and risk management. This kind of tension is actually healthy.

The challenge is that too much control can stifle progress, and too little can create exposure. The goal should be to share data by default and restrict by exception.

Keith McClellan: I agree that data and insights need to flow more freely across teams in order to operationalize AI effectively. The real challenge is making that flow seamless without creating friction or risk. Think of the “rules” as a background framework that keeps everything safe. When the right structures are in place, teams can collaborate confidently without having to second-guess whether they’re breaking something or exposing sensitive information.

Craig Robin: Exactly. Technology can do a lot of the heavy lifting here. If access controls and policies are embedded directly into systems, collaboration happens naturally. People don’t have to pause and think about every rule or permission, which keeps work moving quickly.

Perspectives Editor: What’s a practical path forward in addressing this tension?

Keith McClellan: I would say it’s about manual versus automated safeguards. As organizations adopt AI-driven systems, these boundaries need to be adaptive. Approaches like attribute-based access control (ABAC) are becoming essential. They automatically consider who the user is, what they’re trying to do, when, and even what device they’re using. This makes permissions context-aware and secure while keeping collaboration smooth.
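An ABAC decision of the kind Keith describes can be sketched as a policy function over user, action, resource, and context attributes. The specific rules, attribute names, and business-hours window below are illustrative assumptions, not a real policy.

```python
from datetime import datetime, time

def abac_decision(user, action, resource, context):
    """Evaluate an access request against attribute-based rules.
    All rules and attribute names here are illustrative assumptions."""
    # Rule 1: the user's department must match the resource's owning
    # department, unless the resource is explicitly shared org-wide.
    if not resource.get("org_wide") and user["department"] != resource["department"]:
        return False
    # Rule 2: writes require completed data-handling training.
    if action == "write" and not user.get("training_complete"):
        return False
    # Rule 3: high-sensitivity data only from managed devices, during business hours.
    if resource.get("sensitivity") == "high":
        if not context.get("managed_device"):
            return False
        if not time(8) <= context["request_time"].time() <= time(18):
            return False
    return True
```

Because the decision considers who, what, when, and which device at request time, permissions adapt automatically as those attributes change, which is what keeps collaboration smooth without manual gatekeeping.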

Craig Robin: And automation is a huge part of that. Some organizations are already implementing systems where completing privacy or compliance training automatically adjusts a user’s access level. That’s governance that scales, empowering users while keeping compliance intact.
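The automation Craig mentions, where completing a course adjusts access, amounts to an event handler that updates a user attribute. The tier names and course-to-tier mapping below are hypothetical.

```python
# Ordered access tiers, lowest to highest; names are illustrative assumptions.
TIERS = ["public", "internal", "confidential"]

# Hypothetical mapping from completed compliance courses to access tiers.
TIER_FOR_COURSE = {
    "privacy_basics": "internal",
    "data_handling_advanced": "confidential",
}

def on_training_completed(user, course):
    """Event handler: completing a compliance course automatically
    upgrades the user's access tier (never downgrades it)."""
    new_tier = TIER_FOR_COURSE.get(course)
    if new_tier and TIERS.index(new_tier) > TIERS.index(user["access_tier"]):
        user["access_tier"] = new_tier
    return user
```

Wiring a handler like this into the learning-management system is what makes the governance scale: access follows demonstrated compliance, with no ticket queue in between.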

Perspectives Editor: As AI adoption accelerates, how should leaders rethink their data, systems, and operating models?

Craig Robin: In terms of AI readiness, trustworthy AI depends on trustworthy data. If your data is inconsistent, incomplete, or siloed, your AI outputs will reflect that. Every AI model is only as good as the information feeding it.

I often see organizations deploying AI across dozens of tools, each generating its own insights. The problem is that those insights don’t always connect. The real power of AI comes when you unify those systems, so that all of your data feeds into one consistent strategy. That’s how you move from reactive firefighting to proactive decision-making.

Keith McClellan: And from an organizational standpoint, AI will force simplification. We’ve spent decades decentralizing IT, giving teams their own tools and autonomy. That made sense at the time, but now we’re seeing the limits of that model. AI works best in environments where information flows freely and decisions are connected.

This is going to lead to a wave of consolidation: fewer systems, more alignment, and a renewed focus on shared governance.

Major transitions such as leadership changes, mergers, or technology migrations are perfect opportunities to reassess systems and structures. When an organization is already in motion, people are more open to change. That’s when you can introduce new governance models, consolidate tools, and reset expectations around collaboration.

Craig Robin: Absolutely. Those events create momentum. If you frame inventory and rationalization as value-creating exercises rather than cost-cutting, people get behind it. Streamlined systems reduce risk, improve data trustworthiness, and free up teams to focus on innovation.

Perspectives Editor: If you could leave executives facing these challenges with one piece of advice, what would it be?

Keith McClellan: Remember that technology follows culture. You can buy the best tools in the world, but if your teams aren’t aligned, they won’t deliver value. AI readiness isn’t a technical milestone; it’s an organizational one. The companies that succeed will be the ones that treat data, tools, and teams as part of a single, unified ecosystem.

Craig Robin: I would say make transparency your north star. Regular inventories, clear governance, and open communication build trust. When you can show stakeholders and regulators that your data is auditable, resilient, and well-managed, you’re not just ready for AI, you’re ready for whatever comes next.

