The Lessons Learned in Cybersecurity 25 Years Ago Are Still Applicable to AI Today

Artificial Intelligence (AI) is a technology that is both exciting and worrisome. It recalls past events in which computer systems were attacked, raising alarm about their vulnerability. In 1997, a Department of Defense exercise called Eligible Receiver showed that defense systems could be hacked, which led to the creation of the Joint Task Force for Computer Network Operations. In 1998, an attack on critical infrastructure known as Solar Sunrise turned out to be the work of teenagers, but it still raised serious security concerns and prompted the creation of the National Infrastructure Protection Center. Around the same time, L0pht Heavy Industries testified before Congress about cybersecurity, the first-ever hearing of its kind; they even claimed they could shut down the entire internet. So what can we learn from these three seminal events as AI accelerates into the future? Quite a bit.

Eligible Receiver  

Eligible Receiver was a security exercise that showed how cyber attacks could create chaos. The red team charged with causing that chaos said they had the defenders running scared by the third day. It was a wake-up call for national security experts, who realized that the cyber domain could be a powerful tool for disruption. The exercise was classified for many years, but it is now clear that we must do more to prepare for the challenges posed by artificial intelligence. A group of scientists and scholars has warned that AI could be an existential threat. We need to run more exercises to understand the implications of AI, but we also need to broaden our focus: in the past, humans were in charge of command and control, but that may not be the case in the future. We must be ready for whatever challenges come our way, which means preparing for the unexpected.

Solar Sunrise

A key question raised by Eligible Receiver was borne out by Solar Sunrise: who coordinates response and recovery? The event left both government and the private sector confused about who was in charge of cybersecurity, as the role of the CISO had not yet been established. Figuring out how attackers could breach or disrupt computer systems was complex, but it was usually possible through a combination of human and technological analysis.

The attacks of September 11 accelerated the U.S. Government’s efforts to organize, first through the creation of the Department of Homeland Security and, years later, the Cybersecurity and Infrastructure Security Agency (CISA). AI poses the same question, but with an added twist: not who, but what is “in charge” of AI? Unfortunately, the arrival of AI exacerbates these leadership challenges at three levels:

  1. Who trains, maintains, and monitors the Large Language Models (LLMs) that support AI? Human feedback to the models can be helpful or harmful. For example, the deliberate infusion of incorrect information can poison a model.
  2. While AI can boost productivity, it leaves the question of AI’s decision-making process open. LLMs like Generative Pre-trained Transformer 4 (GPT-4), which powers ChatGPT, leave the “decision” calculus unknown. A recent legal brief drafted with AI cited cases that were entirely fabricated to support the lawsuit.
  3. Given our highly connected and Internet-dependent society, what are the limits or guardrails to keep AI from taking control? For example, individual networks could quietly combine over time into an uber-AI.

Paul Scharre’s book on autonomous weapons, Army of None, opens with a harrowing story relevant to AI’s potential dangers: on September 26, 1983, “the world almost ended.” Three weeks earlier, the Soviet Union had shot down a civilian airliner that strayed off course en route from Alaska to Seoul. It was the height of the Cold War, with Reagan pursuing the Strategic Defense Initiative (SDI), a system designed to intercept Soviet missiles from space. The Soviets had deployed Oko to watch for U.S. missile launches. Just after midnight, in a bunker in the Soviet Union, Lt. Col. Stanislav Petrov received a warning that the U.S. had launched a nuclear missile at the Soviet Union. Because Oko was new, he waited, thinking the launch report was an error. Then came another launch, followed by three more, according to Oko. The screen switched from a missile launch warning to a “missile strike.” There was “no ambiguity,” according to Scharre.

Petrov still held and did not report the launches to higher authorities; launching just five missiles didn’t make sense to him. He decided to confirm with ground-based radars, which showed nothing. The warning turned out to be a malfunction: sunlight reflecting off clouds. Perhaps this could be labeled a “hallucination,” a computer drawing an incorrect conclusion. Had Petrov followed protocol and notified his superiors, World War III would surely have ensued. How do we verify conclusions drawn from AI? A human in the loop is clearly critical in the context of nuclear weapons, but what should the guidelines or circuit breakers be for other weapons systems?

Congressional Testimony

As history often repeats itself, Congress is concerned and has called on AI leaders to testify. Echoing L0pht Heavy Industries’ testimony warning of things to come in cybersecurity, Sam Altman, CEO of OpenAI, testified before a Senate committee and openly asked Congress to regulate AI. Adding to Mr. Altman’s testimony was a warning from more than 350 scientists and computer science experts on the existential implications of AI, equating the threat to pandemics and nuclear war. We need a “whole of government” approach to grapple with AI’s implications and build guardrails and tripwires. For example, the European Commission is moving swiftly on an article that would force providers of foundation models like ChatGPT to assess their systems for potential impacts on fundamental rights, health and safety, the environment, and democracy.

What Can Organizations Do Now?

Contrary to popular opinion, companies can do quite a bit now to prepare to adopt Generative AI. Organizations should double down on resilience, ensuring a unified understanding of security, DevOps, and observability; it is no longer desirable or acceptable to silo each. Resilience must be a top priority. Why? AI presents an opportunity, but for now significant unknowns remain, given our nascent understanding of AI’s decision-making processes. Integrating security, DevOps, and observability will enable companies to discern the root cause of a disruption more quickly. Several sources provide excellent insight on how best to proceed, including “All in on AI” by Thomas Davenport and Nitin Mittal. McKinsey’s “A Technology Survival Guide for Resilience” is another helpful resource.

To implement Generative AI, leaders should use the resources and staff already available to move forward with adoption. To learn more, McKinsey’s “What CEOs Need to Know About Generative AI” (May 22, 2023) is an excellent source of information. It recommends weighing Generative AI use cases across two scenarios: those with no significant impact on the company’s operations and those that could have a significant impact. Splunk understands the power of AI and how it can be leveraged to increase customer efficiency; for example, customers today can use AI to help write SPL queries. Splunk works closely with our customers to identify opportunities for applying assistive intelligence and streamlining operations. Learn more about how Splunk can help your strategy.

Posted by

Paul Kurtz

Paul has led organizations involved in the most pressing national security issues, including counter-terrorism, weapons nonproliferation, critical infrastructure protection, and cybersecurity.

His management experience spans government, non-profits, and the private sector, ranging from Special Assistant to the President on the White House’s National Security Council, to managing partner for a consulting company’s overseas operations, to founder of a venture-funded cybersecurity company.

Career highlights include serving on the ground as a weapons inspector in Iraq and North Korea, serving as Political Advisor in Northern Iraq, coordinating the immediate response to the attacks of September 11, planning a national cyber defense program, and turning an idea to automate cyber intelligence management into a successful company (TruSTAR), ultimately acquired by Splunk.
