How to Use Splunk to Monitor Security of Local LLMs (Part II)

In Part II of "How to Use Splunk to Monitor Security of Local LLMs," we break down the main areas of focus for defending Local Large Language Models (LLMs) and the current frameworks that provide guidance on LLM threat prevention and defense. We will use log data and Splunk to accomplish this.

Defending LLMs Using Splunk

Let’s take a look at the possible log sources for monitoring and defending LLMs. We are going to use Ollama as our framework to showcase possible ways of addressing the risks, attack tactics, techniques, and procedures (TTPs), and mitigations we use throughout this blog. It is important to note that these models interact within very large and complex architectures involving components that are outside the scope of this blog. Here are some resources you may want to explore to expand your knowledge of threats against AI.

OWASP AI Security Testing

OWASP Periodic Table of AI Security

We can use the above guidance to map log sources to threat categories for monitoring and defense.

Log Sources to Monitor LLM Security

Based on the above diagrams, there are four categories in which threats to AI can be approached or assessed.

  1. General controls, such as AI governance
    • Lack of standards, compliance security posture
  2. Threats through use, such as Evasion attacks
    • Malicious Inputs, Unintended Outputs, Evasion
  3. Development-time threats, such as data poisoning
    • Data, training, feedback, Supply Chain components
  4. Runtime security threats, such as insecure output
    • Denial of service, Model theft, Injection Attacks

From the above categories, our next step is to map our log and alert sources to these categories so we have a structured approach to monitoring and defending LLMs, especially if we are running them locally. We are going to focus on:

Threats During Use of Model

Threats During Runtime (Systems that Host Model)

As seen in the above categories, many of the log sources can be used across the three threat categories. The three categories differ based on the stage of operation of the model: the use perspective, the system perspective, and the development perspective.

Splunking Ollama

In the following examples, we are going to use Ollama as an example of how to apply the above categories of threats, obtain logs, and monitor a framework running a local LLM to address security and possible threats.

We are going to monitor the following logs:

Use & Runtime Monitoring (In this example we will not be including the development category)

Here are some examples of Ollama logs, located at ~/.ollama/logs/server.log.

In order to get the most information from Ollama, we have to set the OLLAMA_DEBUG=1 environment variable; then, as we can see in the following screenshot, we can get plenty of information, including prompt information.

You can then run tail -f ~/.ollama/logs/server.log. As can be seen in the screenshot below, there is prompt information, including the actual prompt.
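If you want to watch the log programmatically instead of with tail -f, a minimal Python sketch like the one below follows the file and surfaces only prompt-related lines. The log path and the "prompt" keyword filter are assumptions; adjust both to your platform and to whatever your Ollama build actually emits.

```python
import time
from pathlib import Path

# Default server log location on macOS/Linux; adjust per platform.
LOG_PATH = Path.home() / ".ollama" / "logs" / "server.log"

def matches(line: str, keyword: str = "prompt") -> bool:
    """Return True if a log line mentions the keyword (case-insensitive)."""
    return keyword in line.lower()

def follow(path, keyword: str = "prompt"):
    """Yield new matching lines as they are appended, tail -f style."""
    with open(path, "r", errors="replace") as f:
        f.seek(0, 2)  # jump to end of file so we only see new lines
        while True:
            line = f.readline()
            if not line:
                time.sleep(0.5)  # wait for the server to write more
                continue
            if matches(line, keyword):
                yield line.rstrip()

# Usage (runs until interrupted):
# for line in follow(LOG_PATH):
#     print(line)
```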

Let’s look at how these logs would look in Splunk. For this demo I created several scripts that interact with Ollama, with the objective of obtaining parseable logs that give us useful information on System Messages, Prompts, API calls, and Model performance, written to CSV and JSON files so we can easily import them into Splunk. Dealing with Ollama logs is not easy and requires a bit of maneuvering. You can check my GitHub to see the scripts I wrote for managing these logs.
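As a rough sketch of what such a conversion script can do, the snippet below splits key=value style log lines into fields and writes them out as CSV and newline-delimited JSON, both formats Splunk ingests easily. The key=value pattern is an assumption about the server.log format; verify it against your own Ollama version (the author's actual scripts are on their GitHub).

```python
import csv
import json
import re

# Assumed key=value line format (e.g. time=... level=... msg="...");
# check your own server.log before relying on this pattern.
PAIR_RE = re.compile(r'(\w+)=("(?:[^"\\]|\\.)*"|\S+)')

def parse_entry(line: str) -> dict:
    """Split one log line into a field dict, unquoting quoted values."""
    fields = {}
    for key, value in PAIR_RE.findall(line):
        if value.startswith('"') and value.endswith('"'):
            value = value[1:-1]
        fields[key] = value
    return fields

def export(lines, csv_path: str, json_path: str) -> None:
    """Write parsed entries as CSV and newline-delimited JSON for Splunk import."""
    entries = [parse_entry(l) for l in lines if l.strip()]
    keys = sorted({k for e in entries for k in e})
    with open(csv_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=keys)
        writer.writeheader()
        writer.writerows(entries)
    with open(json_path, "w") as f:
        for e in entries:
            f.write(json.dumps(e) + "\n")
```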

Ollama System Messages

Ollama Prompt Metrics

Ollama Prompt Inputs

Ollama Output Metrics

Ollama Performance Metrics by Model

Ollama API Calls
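For the performance and API-call categories above, the final response from Ollama's generate endpoint includes timing fields (total_duration, eval_count, eval_duration, with durations in nanoseconds) that can be flattened into per-model metric events. The helper name below is ours; the field names come from the Ollama API.

```python
def perf_event(resp: dict) -> dict:
    """Derive per-model performance metrics from an Ollama /api/generate response.

    Ollama reports durations in nanoseconds; eval_count is the number of
    generated tokens, so tokens/sec = eval_count / (eval_duration in seconds).
    """
    eval_s = resp.get("eval_duration", 0) / 1e9
    return {
        "model": resp.get("model"),
        "total_s": resp.get("total_duration", 0) / 1e9,
        "tokens": resp.get("eval_count", 0),
        "tokens_per_s": round(resp.get("eval_count", 0) / eval_s, 2) if eval_s else 0.0,
    }

# Example final-response fields from /api/generate:
sample = {"model": "llama3", "total_duration": 5_000_000_000,
          "eval_count": 100, "eval_duration": 4_000_000_000}
print(perf_event(sample))  # tokens_per_s: 25.0
```

Events in this shape can be indexed in Splunk and charted by model to spot performance anomalies (e.g., a sudden drop in tokens/sec).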

In some of the screenshots shown in this blog, WebUI was used alongside Ollama for visualization purposes. If you wish to monitor WebUI logs, in this case from a Docker container, you can do so by converting them or pushing them directly into Splunk.
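One way to push such events directly is Splunk's HTTP Event Collector (HEC), which accepts JSON POSTs at the /services/collector/event endpoint with a `Authorization: Splunk <token>` header. The sketch below builds and sends a single event; the URL, token, sourcetype, and index values are placeholders you would replace with your own.

```python
import json
from urllib import request

def hec_payload(event, sourcetype: str = "ollama:server", index: str = "main") -> dict:
    """Wrap an event in the envelope the Splunk HEC event endpoint expects."""
    return {"event": event, "sourcetype": sourcetype, "index": index}

def send_to_hec(event, hec_url: str, token: str) -> int:
    """POST one event to Splunk HEC.

    hec_url is typically https://<splunk-host>:8088/services/collector/event.
    Returns the HTTP status code (200 on success).
    """
    req = request.Request(
        hec_url,
        data=json.dumps(hec_payload(event)).encode(),
        headers={"Authorization": f"Splunk {token}",
                 "Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:  # network call; needs a reachable HEC endpoint
        return resp.status
```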

As we can see from the above screenshots, we can confidently use Splunk to gain visibility into local LLMs and approach their monitoring and defense. As this technology becomes more prevalent, it is likely that standard log formats will be established and more verbosity added to these logs.

These logs, in combination with other host security logs, application security logs, and enterprise security products such as Cisco AI Defense, should complement and enhance security monitoring and defense of locally hosted LLMs, especially for companies that seek to run LLMs locally to avoid any internet interaction. The Splunk Threat Research Team has developed content applicable to this product, which can be found at research.splunk.com.
