AI Should Guide, Not Decide

CISO Circle | Shannon Davis, Principal AI Security Researcher at Cisco

A funny thing happens the moment you add the letters “AI” to a problem: perfectly sensible people start treating it like the universal solvent.

I’m not anti-AI. I build in this space. I’m excited about what’s possible. I’m also old enough (and have the scars) to know that enthusiasm is not a strategy.

Here’s the line I keep coming back to: AI should guide, not decide.

Early in my career as a Windows NT administrator, I tried to “harden” security and accidentally locked every user out of the system. We spent the day troubleshooting and recovering backups, losing a full day of productivity.

I made that decision without full context, and we paid the price. That’s why I get nervous when people want AI to decide and execute instead of inform. That lockout is what happens when you let a system take irreversible action without guardrails.

The fastest way to sour an organization on AI isn’t choosing the wrong model. It’s choosing the wrong lane. It’s applying AI where the system needs to be repeatable, auditable, and boring — and then acting surprised when “clever” becomes “chaos.”

Balancing AI automation and human oversight

Automation is attractive because it promises consistency, speed, and scale. But not all automation is created equal. There’s a spectrum: at one end, systems that decide and execute, taking direct action within well-defined rules; at the other, systems that inform and guide, synthesizing context so a human can act.

LLMs are strong and efficient at the second category. They are not inherently designed for the first. When people push AI into “decide and execute” roles, they usually do it because they confuse fluency with reliability, overvalue a confidence signal, or are trying to skip the hard part of designing guardrails.

But the catch is that using AI as a shortcut often turns into taking the long way around.

A simple way to choose the right AI lane

If you’re making decisions about where AI belongs, here’s a framework I keep coming back to. It’s a 2×2 matrix based on two questions:

  1. How well-defined is the task?
  2. What’s the cost of being wrong? (i.e. blast radius, reversibility, compliance, trust)

Here’s how “guide vs decide” maps onto those two questions:

  - Well-defined task, low cost of error: let automation decide.
  - Well-defined task, high cost of error: AI can act, but only inside deterministic guardrails with audit trails.
  - Context-heavy task, low cost of error: AI guides and drafts; humans review.
  - Context-heavy task, high cost of error: AI guides, a human decides.

This isn’t about whether AI is “good” or “bad.” It’s about fit: aligning AI to environments or situations where the policy, rule set, or acceptable action path is well-defined enough that the system can reliably operate within known boundaries. Mapping your use case onto those two questions is how you find that fit.
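To make the matrix concrete, here is a minimal sketch of the two questions as a triage helper. The function name and lane labels are illustrative, not a real tool:

```python
def choose_lane(well_defined: bool, high_cost_of_error: bool) -> str:
    """Map the two questions onto an operating mode for AI.

    well_defined: is the task governed by a clear policy or rule set?
    high_cost_of_error: is being wrong expensive (blast radius,
    reversibility, compliance, trust)?
    """
    if well_defined and not high_cost_of_error:
        return "decide: safe to automate end-to-end"
    if well_defined and high_cost_of_error:
        return "decide with guardrails: deterministic constraints, audit trail"
    if not well_defined and not high_cost_of_error:
        return "guide: AI drafts, a human reviews"
    return "guide only: AI informs, a human makes the call"


# A context-heavy, high-consequence decision lands in the guide-only lane.
print(choose_lane(well_defined=False, high_cost_of_error=True))
```

The point of writing it down this way is that the lane is chosen by the properties of the task, never by how capable the model seems.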

Most security problems live in that context-heavy, high-consequence quadrant more often than people like to admit. When organizations treat those decisions as if they were simple automation problems, they create risk in two directions: they can overreact to weak signals, or they can miss genuine threats because the model lacks sufficient context.

For organizations, that means operational disruption, mistaken enforcement actions, damaged trust, delayed response to incidents, and leadership making decisions based on outputs that might look authoritative, but in reality are not well-grounded.

AI can tell a story, but not take action

Security already has a long history of successful automation. And we should do more of it. Successful security automation is built on well-defined inputs, predictable outputs, and explicit constraints.

Where AI brings the most value is where those systems start to struggle: when the environment is messy, the evidence is incomplete, context matters, and the consequences of action are shaped by human, technical, legal, and business factors all at once. That’s when the real challenge is making sense of the situation quickly enough to act.

AI is brilliant at taking scattered signals and turning them into a coherent story and a set of plausible next steps. But stories are not actions. Stories without actions create gaps in accountability, escalation, review, testing, and ownership. It’s easy for organizations to say they use AI responsibly at the level of principles, presentations, and strategy documents, while failing to translate that into concrete operating controls. In practice, that means teams may deploy AI into important workflows without clear guardrails specifying when human intervention is required, how outputs are validated, and who is responsible when the system gets something wrong.

Also, confidence is not correctness — and that matters more than people think. A lot of modern interfaces make AI feel more reliable than it is. The output is polished. The tone is authoritative. The system may even provide a confidence indicator.

The problem is that “confidence” in many AI contexts often reflects something like internal consistency or pattern familiarity, not truth.

In security, truth can be annoying to find. It’s buried in logs, it’s conditional on the surrounding environment, it’s dependent on baselines that change, and it’s full of edge cases. It also requires provenance and verification, which can be time consuming and cumbersome.

An AI model can’t “remember” that your environment has a weird legacy rule from 2019 unless you feed it that context. It can’t know your organization’s risk appetite unless you encode it. It can’t guarantee that the action it suggested won’t hit the wrong system unless you constrain what it’s allowed to do.

So, the question you ask shouldn’t be: “Is the model confident?” It should be: “Can we verify this, and what happens if it’s wrong?”
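One way to operationalize that question is to make verification, not confidence, the gate on whether a finding is accepted. A hedged sketch, where the `verify` callback stands in for any deterministic check against logs, baselines, or inventory:

```python
def accept_finding(finding: str, model_confidence: float, verify) -> str:
    """Accept a model finding only when it can be independently verified.

    verify: a deterministic check against ground truth (logs, baselines,
    asset inventory). model_confidence is deliberately ignored as a
    gating signal, because confidence is not correctness.
    """
    if verify(finding):
        return f"accepted: {finding} (verified against ground truth)"
    return f"held for review: {finding} (unverified, confidence={model_confidence})"


# Even a 0.99-confidence finding is held if the evidence doesn't back it up.
print(accept_finding("lateral movement on host-42", 0.99, lambda f: False))
```

Notice that the confidence score never appears in the branching logic; it is recorded for the reviewer, not trusted by the system.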

If you’re a leader driving AI adoption, or you’re the person being asked to “add AI” to everything, here are a few practical guidelines I’d recommend.

Use AI to:

  - Turn scattered signals into a coherent picture of what’s happening
  - Compress time-to-context during triage and investigation
  - Draft plausible next steps and hypotheses for a human to validate
  - Scale scarce expertise across the team

But be cautious using AI to take irreversible actions, enforce policy without human oversight, make final judgments when the ground truth is hard to verify, or act directly on production systems without deterministic guardrails — these are the situations where model error becomes organizational error.

A simple incident review scenario might be this: an AI system interprets a cluster of unusual administrative activity as malicious, automatically disables critical accounts, and blocks systems in production. Hours later, the organization discovers it was an urgent, but legitimate maintenance action during an outage. The review then is not just about whether the model was “wrong,” but about why it was allowed to take high-impact action without deterministic constraints, staged escalation, or human confirmation.

Adopting the right AI framework for security leaders

At Cisco Foundation AI, we focus on how to apply security models where they genuinely move the needle — not as a novelty feature, but as a force multiplier.

We don’t aim for “AI everywhere.” Instead, we put AI in the right lane: it guides teams through complex situations, compresses time-to-context, and scales expertise without replacing human judgement. Adoption sticks when AI is placed where it performs best and is surrounded by the same engineering discipline you demand of any critical system.

If you’re leading AI adoption, here’s the challenge I’d offer: Before you green-light the next “let’s add AI” initiative, ask two questions:

  1. What decision are we trying to improve and who is accountable for it?
  2. If the system is wrong, what’s the blast radius and can we recover cleanly?

Map your use case against the matrix we discussed. Be honest about whether the task requires context or follows a well-defined rule set, and acknowledge the true cost of failure. AI doesn’t need to be the hero of every story. Sometimes the best outcome is a boring workflow that runs perfectly for three years, and nobody talks about it again. However, when the environment becomes messy — where the facts are incomplete, the signals are ambiguous, and the context changes quickly — that’s where AI earns its keep. In those settings, the “right” answer is often not obvious from the data alone.

To learn more about how to create AI guidelines and guardrails, please subscribe to the Perspectives by Splunk monthly newsletter.
