AI’s just another workload, until it isn’t
One of the best parts of my job is conversations with CIOs, CISOs, and tech leaders across every vertical. Everyone’s excited about AI, but the real conversation we need to be having isn’t about which model to use. Nor is it about how fast you can fine-tune your LLM. It’s this: are your AI initiatives being treated with the same governance, security, and visibility as your most critical digital assets, or are they becoming your next blind spot?
Take, for example, the case of a global e-commerce company that deployed an AI-powered hiring system designed to identify top talent faster. Without proper observability into how the model was trained, it began prioritizing male candidates over equally qualified female applicants. The lack of transparency and monitoring in the AI workflow meant that bias wasn’t detected early, leading to public backlash and reputational damage when the issue came to light. Ultimately, the company had to scrap the system and implement costly remediation measures to rebuild trust.
Here’s my take: AI doesn’t get a free pass just because it’s the new “it” thing. Like any modern digital workload, whether it lives on-prem, in the cloud, or sprawled across containers and APIs, AI must be monitored, secured, and governed. Period. That means applying the same rigor we bring to traditional infrastructure: observability pipelines, threat detection, anomaly baselines, and governance policies that scale.
AI is a workload. It’s also a risk amplifier. If AI isn’t properly instrumented, it can expose data, accelerate vulnerabilities, and create blind spots. That’s no surprise: 77% of security professionals anticipate an increase in data leakage due to generative AI usage, according to the Splunk State of Security 2024 report. The irony is that AI needs observability more than most systems do, because its behavior isn’t always deterministic. It learns, it evolves, and it can be exploited in unexpected ways.
If you’re serious about building digitally resilient AI, you can’t just build smarter systems. You have to build safer, more observable systems.
AI moves fast, but the risks move faster
AI workloads don’t live in silos. They stretch across GPU clusters, vector databases, APIs, SaaS endpoints, and cloud-native platforms, often deployed by data science teams that may not fully align with your core security practices. Every new integration, every inference endpoint, and every data pipeline becomes another entry point for a potential breach. And most of those entry points aren’t showing up in your traditional security dashboards.
This sprawl creates a perfect storm of unmonitored assets, data movement you can’t trace, and applications that change dynamically as models learn and adapt. You can’t defend what you can’t see. That’s why observability is now table stakes.
Threat actors aren’t sitting still. They’re using the same tools we are, often better than we do, unencumbered by any regulatory or moral responsibility. They’re generating malware, crafting highly targeted phishing campaigns, and even probing LLMs for misconfigurations and prompt injection vulnerabilities. And unlike traditional exploits, these attacks evolve with each iteration.
This isn’t theoretical. According to Splunk’s State of Security 2024 report, 77% of security leaders say data leakage will rise as their organizations adopt generative AI. It’s not a matter of if; it’s when. If your AI systems aren’t part of your incident response planning, threat modeling exercises, and security telemetry, you’re already behind. These aren’t side projects. They’re active attack surfaces.
AI often blurs the line between PII, intellectual property, and public data. Whether it’s scraping unstructured customer data to train a model or logging user prompts in generative tools, AI can easily cross regulatory boundaries. Without clear visibility, you won’t even know it’s happening.
Security and compliance leaders urgently need traceability. That is, they need to know what data went into a model, what decisions were made, and who had access. Most AI stacks today aren’t built with that level of transparency. That’s a risk, especially in heavily regulated industries.
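What does that traceability look like in practice? Here’s a minimal sketch, assuming a Python inference service; the field names, model version, and dataset reference are hypothetical, but the idea is that every model decision emits a structured audit record tying together the model version, the training data snapshot, the caller, and the outcome:

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("model_audit")

def log_inference(model_version: str, training_data_ref: str,
                  user_id: str, model_input: str, decision: str) -> None:
    """Emit one structured audit record per model decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,            # which model made the call
        "training_data_ref": training_data_ref,    # which dataset snapshot it was trained on
        "user_id": user_id,                        # who invoked the model
        "input_sha256": hashlib.sha256(model_input.encode()).hexdigest(),  # traceable without storing raw input
        "decision": decision,                      # what the model returned
    }
    audit_log.info(json.dumps(record))

# Example: a hypothetical hiring-screen model scoring a candidate
log_inference("resume-ranker-v7", "s3://datasets/2024-06-hiring-snapshot",
              "recruiter-142", "candidate profile text ...", "advance_to_interview")
```

Records like these are exactly what lets a compliance team answer “what data went into this decision, and who asked for it?” months after the fact.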
There’s a talent gap, and it’s growing
The tech community is moving faster than the security community can keep up. We’ve seen it before, with cloud, containers, and DevOps. Now, it’s AI’s turn.
Data scientists know how to build models, but they often aren’t trained to think like attackers or incident responders. Meanwhile, security teams don’t always have the tools or context to monitor AI pipelines. According to Cybersecurity Dive, more than 40% of IT leaders say their teams aren’t ready to secure AI. That’s a huge blind spot.
As a field CTO, I see this gap playing out in real time. Smart companies are bridging it by upskilling their teams, embedding security into the AI lifecycle, and leaning into tools that give the SOC clear visibility into model behavior. With GenAI automating tier-1 triage and serving as an on-demand mentor, 86% of security leaders say they can now hire more entry-level analysts, and 65% believe their senior staff will be more productive.
Why security and observability are non-negotiable
I’ll be blunt: If you don’t know how your AI systems are behaving, they’re already a liability.
We’ve spent years telling organizations to break down silos between IT and security. AI introduces a new silo between data science and security, and it’s even more dangerous, because it’s often invisible. Observability isn’t just about uptime or latency anymore. It’s about intent, trust, and control.
Think about the questions you should be able to answer:
Is your LLM leaking customer data through responses?
Are unauthorized users probing your inference endpoints?
Did your model’s behavior suddenly drift after a retraining?
Without observability, you’re left guessing. And when AI systems go off the rails, they do it fast.
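Take the retraining question above. As a simplified illustration, not a production monitor, a population stability index (PSI) computed over the model’s score distribution before and after a retraining run turns “did my model drift?” into a number you can alert on; the 0.2 threshold below is a common rule of thumb, and the data here is synthetic:

```python
import math
import random

def psi(baseline, current, bins=10):
    """Population Stability Index between two score samples.
    Rule of thumb: a PSI above ~0.2 suggests meaningful drift worth an alert."""
    lo, hi = min(baseline), max(baseline)
    span = (hi - lo) or 1.0

    def proportions(sample):
        counts = [0] * bins
        for x in sample:
            idx = min(max(int((x - lo) / span * bins), 0), bins - 1)
            counts[idx] += 1
        # A small floor avoids log-of-zero for empty bins
        return [max(c / len(sample), 1e-6) for c in counts]

    expected, actual = proportions(baseline), proportions(current)
    return sum((a - e) * math.log(a / e) for e, a in zip(expected, actual))

# Example: confidence scores logged before and after a retraining run
before = [random.gauss(0.60, 0.10) for _ in range(5000)]
after = [random.gauss(0.72, 0.15) for _ in range(5000)]  # the distribution has shifted
score = psi(before, after)
print(f"PSI = {score:.3f}", "-> drift alert" if score > 0.2 else "-> stable")
```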
This is why I’m bullish on building AI security into day-zero architecture discussions, not bolting it on after deployment. Start with a posture of “AI is just another workload,” then harden it like you would your production stack.
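One concrete piece of that hardening is an anomaly baseline on your inference endpoints. Here’s a minimal sketch, assuming you can pull per-caller request logs from your API gateway; the caller IDs and endpoint path are made up, and a real deployment would use rolling windows rather than a single snapshot:

```python
import statistics
from collections import Counter

def baseline_stats(history_log):
    """Per-caller request counts from a known-good window become the baseline."""
    counts = list(Counter(caller for caller, _ in history_log).values())
    return statistics.mean(counts), statistics.pstdev(counts)

def flag_probing(current_log, baseline_mean, baseline_stdev, sigma=3.0):
    """Flag callers whose volume in the current window exceeds the baseline by sigma stdevs."""
    cutoff = baseline_mean + sigma * baseline_stdev
    volume = Counter(caller for caller, _ in current_log)
    return [caller for caller, n in volume.items() if n > cutoff]

# Example: yesterday's traffic sets the baseline; today an unknown key hammers the endpoint
history = [("svc-app", "/v1/infer")] * 50 + [("svc-batch", "/v1/infer")] * 60
today = [("svc-app", "/v1/infer")] * 55 + [("unknown-key-9f2", "/v1/infer")] * 400
mean, stdev = baseline_stats(history)
print(flag_probing(today, mean, stdev))   # -> ['unknown-key-9f2']
```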
AI success begins with, and depends on, visibility
AI will reshape how we work and compete, but without proper oversight its benefits can quickly become liabilities. The companies that win with AI will be the ones that understand it’s not just about building models. It’s about monitoring them, spotting trouble the moment it begins, and staying resilient, not just innovative. If you’re leading enterprise AI adoption, ask yourself: does your SOC have a clear view of every action your LLMs take? Can your IT team detect data drift in real time? And when regulators come knocking, can your risk and compliance teams trace each model decision back to its source? If the answer to any of these is no, now is the time to strengthen your visibility layer.
Keep your AI in clear view. Get weekly executive insights and actionable tips in the Splunk Perspectives newsletter, and explore how Splunk powers digitally resilient AI.