How CISOs are Building Resilience for a New Era of Supercharged AI Cyber Threats
CISO Circle | Ryan Fetterman, Lead of AI Security Research Strategy at Cisco

Until recently, layers of defensive security helped CISOs keep pace with cyber threats. Now, a new era of AI-driven cyber threats has some CISOs white-knuckling their keyboards.
Over the past few years, CISOs have invested heavily in machine learning, anomaly detection, automation (for example, automated correlation), and behavioral and predictive analytics to close detection gaps and alleviate the constant strain on their teams. Yet even as defensive AI becomes more capable, security leaders are confronting an uncomfortable truth: Adversaries have access to the same tools, but they don't have to play by the same rules. Without significant changes to their SecOps programs, security leaders will see the scales tip dangerously in favor of the bad guys.
It’s not necessarily new categories of cyberattacks that could tip the scales. According to Splunk’s latest CISO Report, security leaders are more concerned that AI will amplify existing threats by accelerating how quickly attackers can operate, how convincingly they can impersonate trusted sources, and how efficiently they can iterate their techniques.
Across this precarious new security landscape, where do CISOs see the biggest AI-driven risks emerging? What are the operational implications for enterprise security? And how are leaders fortifying their resilience for an era of supercharged threat activity?
Bad-acting AI will fuel a surge of social engineering
You might not think of AI as having an anti-social element, but CISOs feel a palpable sense of urgency regarding social engineering. An overwhelming 86% believe agentic AI will greatly increase the realism of phishing and impersonation attempts.
Gone are the days of obvious phishing emails riddled with spelling and grammar mistakes. Today's attacks are far more bespoke and convincing, whether it's a text from a bad actor posing as your CEO, urgently requesting sensitive data, or an email purportedly from your HR department, asking for confidential information. Generative AI can now produce communications and materials indistinguishable from those written by trusted, authorized humans. But the quality of AI-driven attacks isn't the only concern for CISOs; as Google's Threat Analysis Group has noted, attackers are moving beyond single-shot attempts to multi-turn, "rapport-building" conversations that leverage AI to sustain sophisticated, long-term manipulation.
The exponential uptick in quantity of threats poses a very serious challenge. In the past, threat actors had to spend significant time crafting convincing phishing messages, which limited their total output. Today, that constraint is gone. Attackers can instantly generate endless variations, each tailored to a specific role, department, or region, and then iteratively optimize their approach based on which versions receive engagement. Social engineering has become an algorithmic engine, capable of running experiments at scale.
CISOs must pivot from traditional, static awareness programs to a resilience-based operating model that assumes the initial breach is inevitable. Because AI now enables attackers to run hyper-personalized social engineering experiments at scale, human-centered defenses must be reinforced with identity-centric guardrails. This includes moving beyond basic multi-factor authentication (MFA) to phishing-resistant MFA and implementing strict, out-of-band verification protocols for high-value transactions, such as fund transfers or credential changes. By embedding these procedural circuit breakers, organizations can prevent a successful social engineering attempt from escalating into a catastrophic system compromise.
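One of those procedural circuit breakers can be sketched in a few lines. This is an illustrative policy check only, with hypothetical action names and a made-up amount threshold; a real implementation would hook into an identity provider and a verified secondary channel:

```python
from dataclasses import dataclass

# Hypothetical policy: action names and the amount threshold are
# illustrative assumptions, not drawn from any specific product.
HIGH_RISK_ACTIONS = {"fund_transfer", "credential_change", "payroll_update"}

@dataclass
class Request:
    action: str
    amount: float = 0.0
    oob_verified: bool = False  # confirmed via a separate, trusted channel

def requires_out_of_band(req: Request, amount_threshold: float = 10_000) -> bool:
    """Return True when the request must pause for out-of-band verification."""
    return req.action in HIGH_RISK_ACTIONS or req.amount >= amount_threshold

def process(req: Request) -> str:
    # The "circuit breaker": a high-risk request halts until a human
    # confirms it through a channel the attacker does not control.
    if requires_out_of_band(req) and not req.oob_verified:
        return "HELD: awaiting out-of-band verification"
    return "APPROVED"
```

The design point is that the hold is procedural, not probabilistic: even a perfectly convincing AI-generated message cannot complete the transaction without the second channel.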
In addition, CISOs should deploy agentic AI defense systems that mirror the speed and scale of AI-driven threats. Manual triage is too slow to counter algorithmic attacks, requiring internal AI agents to autonomously monitor for behavioral anomalies and execute immediate containment actions, such as isolating a compromised user session in real-time. This strategy shifts the security team's role from reactive fire-fighting to adversarial AI engineering, continuously red-teaming the security team’s own AI models and processes to ensure they can withstand the same machine-speed experimentation used by modern attackers.
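The triage-and-contain loop described above can be approximated with a minimal sketch. Event fields, anomaly types, and the containment threshold here are all assumptions for illustration; a production agent would call an EDR or identity-provider API to revoke tokens and quarantine hosts:

```python
from collections import defaultdict

ANOMALY_THRESHOLD = 3  # distinct anomaly types before auto-containment (assumed)

def triage(events):
    """Score sessions by distinct anomaly types; return session IDs to isolate."""
    anomalies = defaultdict(set)
    for e in events:
        if e.get("anomalous"):
            anomalies[e["session_id"]].add(e["type"])
    return {sid for sid, kinds in anomalies.items()
            if len(kinds) >= ANOMALY_THRESHOLD}

def contain(session_id):
    # Placeholder: a real agent would revoke the session's tokens and
    # isolate the endpoint; here we only record the decision.
    return f"isolated:{session_id}"
```

The key property is that the decision loop runs without waiting for an analyst, matching machine-speed attacks with machine-speed containment.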
What would Darwin think? The rise of self-evolving malware.
While social engineering dominates immediate concerns, AI-enhanced malware represents a quieter but equally significant shift. CISOs point to experimental samples like BlackMamba as early indicators of where this trend is heading. These proof-of-concepts demonstrate how generative AI can be embedded into malicious code to dynamically modify behavior, rewrite sections of the payload, or alter signatures with each execution.
For defenders who rely on endpoint detection and response (EDR) and extended detection and response (XDR) platforms, this adaptability presents an alarming obstacle. Traditional detection often depends on known patterns, static indicators, or observable malicious behavior. But if malware can change itself in the moment, adjusting to evade whatever detection logic it encounters, those foundational methodologies begin to crumble.
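The limitation can be shown in a few lines. This is a benign, conceptual illustration (plain strings stand in for payload bytes): a hash-based signature matches exactly one byte sequence, so any per-execution rewrite of the payload defeats it:

```python
import hashlib

def signature(payload: bytes) -> str:
    """Static signature: a hash of the exact payload bytes."""
    return hashlib.sha256(payload).hexdigest()

# A defender fingerprints the sample they captured...
known_bad = signature(b"payload-variant-1")

def static_match(payload: bytes) -> bool:
    return signature(payload) == known_bad
```

The captured sample matches, but a trivially mutated variant does not, which is why the article's point stands: detection must shift toward behavior rather than static indicators.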
Even though AI-adaptive malware isn’t widespread today, CISOs view it as a harbinger of what’s possible. This represents a significant shift in the attacker–defender dynamic where adversaries are no longer limited by their own time or creativity. The code does the work for them.
CISOs feel the squeeze as AI compresses the cyberattack timeline
CISOs also anticipate that AI will reshape not just how attacks begin, but how they progress. AI significantly compresses the cyberattack timeline by automating the initial reconnaissance and delivery phases, which previously required extensive manual effort.
Reconnaissance, once a slow and uncertain process, is becoming far more efficient. Modern attackers use agentic AI: self-directed systems capable of autonomously scraping open-source intelligence, identifying weak targets, and tailoring hyper-personalized phishing campaigns in seconds rather than days. These autonomous agents can chain reconnaissance tools and vulnerability scanners to find and prioritize targets with minimal human oversight, effectively reducing the time from target identification to initial compromise from weeks to mere minutes. For example, AI tools can analyze stolen credentials to predict which identities provide the highest-value access, map internal systems with minimal trial and error, and identify exploitable configurations that would be difficult for a human operator to notice quickly.
Inside the environment, attackers can use AI to generate tailored scripts or payloads in real time, adjusting to the defender’s architecture and tools. As a result, the timeline from initial compromise to impactful action is shrinking. Where attackers once made mistakes during manual exploration, AI now reduces the likelihood of those errors. For defenders, this means the window for detection and response is narrowing.
Once an environment is breached, AI further accelerates the attack lifecycle by automating lateral movement and payload execution. AI-assisted tools can identify exposed services, escalate privileges, and adjust tactics in real time to bypass defensive controls such as anomaly detection systems. While traditional ransomware might take days to fully encrypt a system, optimized AI-driven payloads are expected to shrink this window to under 15 minutes, often completing an entire data exfiltration cycle 100 times faster than a human operator. This machine-speed execution collapses the traditional response window, often allowing attacks to end before a security ticket is even generated.
Adopt AI-driven detection to prepare for faster, smarter adversaries
In response to this acceleration, CISOs are rethinking the fundamentals of their programs and investing in capabilities that match the speed and adaptability AI brings to attackers.
They’re deploying AI-driven detection and response tools that can correlate signals faster than human analysts, automate triage, and contain threats without waiting for manual intervention. They’re reinforcing identity as the perimeter and adopting phishing-resistant MFA, tightening access governance, and monitoring sessions with greater scrutiny.
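A minimal sketch of that faster-than-human correlation step, under assumed field names and an arbitrary five-minute window (real platforms correlate far richer telemetry):

```python
from collections import defaultdict

def correlate(alerts, window_seconds=300):
    """Group alerts by user within a time window; clusters spanning the
    most distinct sources rank first for automated triage."""
    buckets = defaultdict(list)
    for a in alerts:
        # Bucket key: same user, same time window.
        key = (a["user"], a["timestamp"] // window_seconds)
        buckets[key].append(a)
    # A cluster touching many sources (EDR, identity, email, network)
    # is more likely a real incident than any single alert alone.
    return sorted(buckets.values(),
                  key=lambda c: len({a["source"] for a in c}),
                  reverse=True)
```

The point of the sketch is the shape of the workflow: signals that would each look benign in isolation are joined and prioritized before an analyst ever opens a ticket.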
Awareness training is also being rewritten for the AI era. Instead of teaching employees to spot awkward grammar or mismatched logos, training now focuses on verification workflows, behavioral red flags, and resilience against voice or video-based impersonation.
Across the board, CISOs are accelerating incident response. Tabletop exercises now include scenarios involving AI-driven threats, and many organizations are redesigning escalation pathways to eliminate delays, with the goal being to react at a pace that anticipates how quickly AI-enabled attackers can move.
Underpinning all of this is a renewed focus on data: where it lives, who can access it, how it’s classified, and how well it’s protected.
Going from cyber defense to cyber sprinting in the age of AI threats
The next wave of cyber threats will bring new scale, speed, and sophistication amplified by AI. CISOs are entering a world where attackers can iterate their techniques at machine speed, test thousands of variations simultaneously, and deploy increasingly convincing social engineering campaigns with virtually no technical effort.
Defensive AI will be essential, but so will the fundamentals: strong identity controls, rapid incident response, and a modern understanding of how employees can be manipulated at scale.
The organizations that thrive in this new environment will be the ones that build resilient systems, adaptive processes, and creative humans.
Your adversaries are moving at machine speed. Learn how to keep pace with the breakneck speed of AI-driven cyber threats, including perspectives and insights from fellow security leaders — get The CISO Report: From Risk to Resilience in the AI Era.