Addressing CISOs AI Anxieties Through Resilience

Through my career, I have been “fortunate” to work on some of the greatest national security challenges, ranging from weapons of mass destruction proliferation to terrorism to cyber insecurity. AI represents our newest challenge but will most comprehensively impact society.  

The rollout of ChatGPT/Bard is creating anxiety among CISOs as we begin to face decision-making via “non-human” logic, which will accelerate the pace and pervasiveness of attacks, including attacks that could slowly bias sets.1

Anxiety extends beyond CISOs, as we have seen in the Future of Life’s open letter in the Financial Times from tech titans like Elon Musk and AI thought leaders like Max Tegmark at MIT. In addition, security luminaries like Dan Geer from In-Q-Tel highlight the challenges of AI with onsite/edge computing and the blend of software with data.

As the Asilomar AI Principles state, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. 

CISOs will undoubtedly encounter pressure from CIOs and CTOs to adopt AI to increase efficiency. As a result, CISOs’ jobs will become more complex as they address AI-driven attacks, automated vulnerability exploitation, battle data poisoning, or deep fakes that make current phishing tactics look quaint.

The concept of computer-driven automated attacks is not necessarily new or fictional.

The Peacemaker, a fascinating read by William Inboden about Ronald Reagan and the Cold War, recounts an exchange between President Reagan and Michael Gorbachev at their Geneva Summit in 1985 over abolishing the Strategic Defense Initiative (SDI). Gorbachev angrily reacted when Reagan said he would not cancel SDI. He made an ominous threat; the Kremlin would adopt “automation which would place important decisions in the hands of computers and political leaders [would] be in the bunkers with computers making the decisions. (emphasis added) This could unleash an uncontrollable process.” Gorbachev exposed that the Soviets were already working on a system called the “Dead Hand.” “Dead Hand would automatically launch all of the USSR’s ICBMs upon detecting an American attack—placing the fate of the world in the hands of machines rather than men.”2

Fortunately, Reagan and Gorbachev negotiated an agreement to dramatically reduce strategic nuclear weapons (START) and pulled the world from the brink of disaster.

Society’s challenges with AI are broader than the existential impact of nuclear weapons. Kissinger, Schmidt, and Huttenlucher detail in their book, The Age of AI and Our Human Future, that our perceptions of reality may change given AI-inspired insight.

CISOs will serve as the gatekeeper for AI, given AI’s capability to disrupt operations potentially.

So for cybersecurity, what strategy can CISOs use today? 

A clear-headed, grounded collaborative framework is required to help CISOs traverse the accelerated adoption of AI.

A doomsday analysis is easy, but perhaps we start by leveraging Reagan's quotation: “Trust but verify” AI, particularly given recent reports of ChatGPT “hallucinating.”3

The 2017 Asilomar principles are starting point to flag research issues, ethics, values, and longer-term issues. For example, “AI research should be to create not undirected intelligence, but beneficial intelligence.” Or “Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards. Ethics and values cover issues of safety, failure transparency, and judicial transparency. Or, under long-term issues, like AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity, must be subject to strict safety and control measures. Finally, “superintelligence” should only be developed in the service of widely shared ethical ideals and for the benefit of humanity rather than one state or organization.

I find an eerie similarity between the pause proposed in the Future of Life Foundation’s Open Letter and the ethical and humanitarian issues scientists like Robert Oppenheimer experienced high on the plateau in Los Alamos as they raced to develop the atomic bomb. As written in American Prometheus by Kai Bird and Martin Sherman, most scientists believed the target was The Third Reich, only to learn the real focus in ‘45 was checking the Soviet Union by way of Japan. Once the Soviet Union conducted an atomic test in 1949, it was game on for the H-bomb or fusion-based weapon.

We can see this moment in AI time as Julius Caesar crossing the Rubicon, signaling the end of the Roman Republic, or maybe keeping it simple: “The proverbial cat is out of the bag.” 

It is complicated, like Schroedinger’s cat: is the cat alive or dead? Generative AI can produce content indistinguishable from human output. Is it AI or not? How do we know? 

AI’s ambiguity poses various concerns, from bias to security challenges.

Some good work is underway on “AI audits and detecting bias. Several organizations are addressing AI ethics, including the World Economic Forum, The Partnership on AI, and Equal AI. The Data and Trust Alliance, established in 2020, has many nontech employers among its members.4 However, implementation is unclear. Will it be voluntary or mandatory?  

MIT’s technology review documents several ways AI chatbots are a “security disaster,” including jailbreaking, assisting scamming and phishing, and data poisoning.5

  • Jailbreaking, where a user, through a “prompt injection,” prompts the language model to ignore its previous directions and safety guardrails.
  • Assisting with scamming or phishing represents a significant challenge, given that people can integrate ChatGPT into products that browse and interact with the internet.  This will enable attackers to point ChatGPT at spoofed data, which can be used to facilitate phishing and scams. 
  • Data poisoning occurs when false or malicious data is inserted into a large language model before deployment. For example, After a certain number of steps, it is proven impossible to reverse how it reached its current state. And, to revisit Schrodinger, in a system of concurrent computation, you can only know the state by ruining it.

Now what?

According to Goldman Sachs, the latest breakthroughs in generative AI could automate a quarter of the work done in the US and the eurozone.   AI could spark a productivity boom that would eventually raise annual gross GDP by 7% over ten years. At the same time, it would bring “significant disruption” to the labor market.6 In the US, 63% of the workforce could be affected, with 30% of those working physical or outdoor jobs unaffected.  About 7% of workers in the US are in jobs where at least half of their tasks could be done by generative AI and are vulnerable to replacement.  

A paper published by OpenAI, the creator of Chat GPT-4, “found 80 percent of the US workforce could see at least 10 percent of their tasks performed by generative AI, based on analysis by human researchers and the company’s large language model.” Occupations with higher wages generally present higher exposure, contrary to similar evaluations of overall exposure to machine learning.7

I find a reason for optimism, but we must move forward carefully. Many of the companies here today are or will be contemplating leveraging AI.

The Wall Street Journal, on March 31, offered a compelling interview with Bill Braun, CIO of Chevron. He said, “Doing AI responsibly is critical.”  When asked where you would like to see AI embedded that you haven’t seen yet, he answered: “Everywhere. It should be part of every workflow, every product stream. But anything that looks like the more routine part or, the less value-adding part…helping take those aspects out of every worker’s interaction with technology should be the goal.”8

So, where do you start? All In on AI offers valuable guidance, including use cases covering Toyota, Morgan Stanley, Airbus, Shell, Anthem, Kroger, and Progressive. 

There are three archetypes associated with AI adoption:

  1. Creating something new, including new businesses or markets, new business models or ecosystems, products, and services. MIT Sloan Management Review analysis found that companies that use AI primarily to explore and create new forms of business value are 2.7 times more likely to improve their ability to compete with AI than those that use AI primarily to enhance existing processes.

    Loblaw, a grocery store chain in Canada, is using AI to expand into healthcare.

    launched Skywise a few years ago to improve operational performance. Current commercial aircraft can produce more than thirty gigabytes daily, measuring 40,000 operational parameters around the aircraft. Skywise has more than 140 airlines and 9500 connected aircraft. 

    . Anthem’s CEO leveraged AI to build their portfolio, including pharmacy, behavioral, clinical, and complex care assets, and algorithms, to deliver integrated whole-person health solutions.
  2. Transforming operations, becoming dramatically more efficient and effective at the company’s existing strategy. Influencing customer behavior using AI to influence a critical behavior of customers, such as how they socialize, maintain their health, live their financial lives, drive their vehicles, and so forth.9

    For example, Kroger, in partnership with UK’s Ocado, uses a variety of AI programs, including:


    • Predicting when food should arrive at the distribution center for optimal freshness
    • Spotting food close to expiration dates for discounting and donation
    • Hyper-personalized digital ordering experience
    • AI-driven air traffic control system for warehouse robots
    • Computer vision and planning systems for bag-packing robots
  3. Influencing customer behavior. Companies like John Hancock use machine learning to monitor and change health behavior.   

I want to circle back to “Trust but Verify” to delineate three essential points:

First, the most critical step is for the CEO to designate the executive in charge of AI adoption. This executive should lead a process for reviewing all potential AI applications.   

Second, adopt a basic framework to help with the process. For example, Deloitte’s “Trustworthy AI Framework” calls out six areas to help clients with their policy development:

  1. Fair and impartial. Assess whether AI systems include internal and external checks to help enable equitable application across all participants.
  2. Transparent and explainable. Help participants understand how their data can be used and how AI systems make decisions. Algorithms, attributes, and correlations are open to inspection.
  3. Responsible and accountable. Put an organizational structure and policies in place that can help determine who is responsible for the output of AI systems.
  4. Safe and secure. Protect AI systems from potential risks (including cyber risks) that may cause physical or digital harm.
  5. Respectful of privacy. Resect data privacy and avoid using AI to leverage customer data beyond its intended and state use. Allow customers to opt-in and out of sharing their data.
  6. Robust and reliable. Confirm that AI can learn from humans and other systems and produce consistent and reliable outputs.  

Third, double down on a resilience strategy. McKinsey released “A technology survival guide for resilience.”  The good news is that many companies are already pursuing resilient infrastructure. McKinsey underscores the importance of understanding “criticality.” Simply, what is most critical to business operations? As McKinsey says, “This requires a resilient infrastructure with heightened visibility and transparency across the technology stack to keep an organization functioning in the event of a cyber attack, data corruption, catastrophic system failure, or other types of incidents.”

McKinsey has also established a maturity model for resilience.  

  • Level One consists of basic capabilities where resilience is left to individual users and system owners, and monitoring involves users and customers reporting system outages.
  • Level Two consists of passive capabilities where resilience is through manual backups, duplicate systems, and daily data replication. System outages are also monitored at the platform and data center level.
  • Level Three consists of active resilience through failover. Resilience exists through active synchronization of applications, systems, and databases and active monitoring at the application level for each indicator of performance and stability issues.
  • Level Four consists of inherent resilience by design.  Resilience is architected into the technology stack from the start through inherent redundancy and active monitoring at the data level, which includes anomaly detection and mitigation. 

At Splunk, we want to unpack this more. 

Digital resilience covers five areas: visibility, detection, investigation, response, and collaboration. In the context of AI:


How well teams can see across their technology environment, including quality and fidelity of data and completeness of coverage.

Application to AI: Given AI applications will extend across security, DevOps, and observability, visibility must encompass each area. This will require the integration of data workflows into dashboards.


How well organizations leverage data to identify potential issues, including detection coverage and alerting.

Application to AI: CISOs must leverage and integrate detection tools to address AI application security. Devices must detect data and algorithm poisoning. As new AI capabilities come out, each should be gated until tools can detect AI tampering. 


How well organizations use data to search for potential issues and accelerate analysis, including enrichment, threat hunting, and searching logs, metrics, and traces.

Application to AI: Threat hunting among AI applications may require special tools, for example, “sandboxes,” to allow operators to understand how an AI application works.


How quickly security, IT, and DevOps teams respond to day-to-day issues or incidents.

Application to AI: As with existing security operations, detecting and responding to AI-related threats, disruptions, and vulnerabilities is critical.


How well teams and their tools facilitate working cross-functionality across security, IT and DevOps.

Application to AI: Collaboration will be critical across security, IT, and DevOps, each area leveraging automated sharing and pooling insights with peers inside companies and with others.

While I am optimistic, we also need to be realistic. Unlike the beginning of the Cold War, this time, the geopolitical stakes involve China, not the Soviet Union, in the context of the atomic age. In Fortune, Tom Siebel, founder, and CEO of, said:

“These tensions between China and the United States, in both the geopolitical and military realm, are very real. Enterprise A.I. will be at the heart of the design of the next kill chain. Whether you’re dealing with hypersonics, whether you are dealing with swarms, whether you are dealing with sub-surface autonomous vehicles, whether you are dealing with space, A.I. is very much at the heart of that. So we are in, I would say, open hostile warfare with China, as it relates to A.I. right now. And whoever wins that battle will probably dominate the world.”

I wrestle with the implications of what is ahead of us as AI will inevitably grow. 

We know this is a critical time. I seek not to hype the issue but to keep our feet on the ground and acknowledge while AI brings tremendous opportunity, much remains unknown; we must step carefully.  Splunk is serious about working together. This forum is an excellent opportunity to link and think.

Thank you.

[1] See The Age of AI And Our Human Future, Henry Kissinger, Eric Schmidt, Daniel Huttenlocher, 2021
[2] The Peacemaker, Ronald Reagan, The Cold War and the World on the Brink, William Inboden, p.375, 2022
[3] Lets cast a critical eye over business ideas from ChatGPT, Financial Times, March 12, 2023
[4] All in on AI,Thomas Davenport and Nitin Mittal, 2023 p 118
[5] “Three ways AI chatbots are a security disaster,” Melissa Heikkila, MIT Technology Review, Apr 4, 2023
[6] Generative AI set to affect 300mn jobs across major economies, The Financial Times, March 27, 2023
[7] GPTs are GPTs: An Early Look at the Labor Market Impact Potenital of Large Language Models, Tyna Eloundou, Sam Manning, Pamela Mishkin, and Daniel Rock, OpenAI, OpenResearch, University of Pennsylvania, Mar 27, 2023
[8] Download Extra, Chevron’s Bill Braun calls generative AI a ‘wake up call’ for traditional IT vendors, Wall Street Journal, Mar 31, 2023
[9] All in on AI, How Smart Companies Win Big with Artificial Intelligence, Thomas Davenport and Nitin Mittal. p. 48
[10] “A technology survival guide for resilience.” McKinsey & Company, March 20, 2023

 Paul Kurtz
Posted by

Paul Kurtz

Paul has led organizations involved in the most pressing national security issues, ranging from counter-terrorism, weapons nonproliferation, critical infrastructure protection, and cybersecurity.

His management experience spans government, non-profits, to the private sector, ranging from Special Assistant to the President on the White House's National Security Council to managing partner for a consulting company's overseas operations to founding a venture-funded cybersecurity company.

Career highlights include experience on the ground as a weapons inspector in Iraq and North Korea, serving as Political Advisor in Northern Iraq, coordinating the immediate response to the attacks of September 11, planning a national cyber defense program, to turning an idea to automate cyber intelligence management into a successful company (TruSTAR) ultimately acquired by Splunk.

Show All Tags
Show Less Tags