CI/CD Detection Engineering: Splunk's Security Content, Part 1

It's been a while since I've had the opportunity to take a break, come up for air, and write a blog about some of the amazing work the Splunk Threat Research Team has done. We have kept busy shipping new detections under security-content (via Splunk ES Content Update and our API), and we have improved the Attack Range project to let us test detections described as unit test files. Today I would like to share a 3-part series that includes a step-by-step walkthrough of using these projects for detection development, continuous testing, and deployment as a workflow in your security operations center.

If we visualize the workflow to follow for detection engineering, it looks something like this:

At the end of this guide you will have a clear understanding of how to:

  1. Customize and add detections to your own private fork of security-content
  2. Convert the detections into Splunk configuration files
  3. Package these configuration files into your very own Splunk App for distribution
  4. Test each detection committed using the Attack Range
  5. Run all of these steps automatically on every PR via Continuous Integration (CI)

Forking Security-Content 🍴 

To get started, you need your own private copy of Splunk’s security-content. If you are not familiar with security-content, it is where all of our detections live. The Threat Research Team open-sourced the project in October 2019, but it has been freely available for a long time as an extension of Splunk’s Enterprise Security SIEM called ES Content Update. To get a copy/fork of security-content, simply visit the GitHub page and click on the fork button as shown below:

You can find more information on forking GitHub projects here.

Once you have a personal fork of the content, you can start customizing it!

Customizing Security-Content 🏗

The first step towards customizing the project is to clone it locally so you can start editing. In the following examples, I will be using the git command-line tool, but there are various amazing IDEs and UIs out there that work just as well. To clone the project, run:

git clone <your fork's GitHub URL>

Your specific GitHub URL is displayed by clicking on the code button in your fork of security-content, as seen below:

This is a good time to explain the various parts of security-content that you can customize. Below is a printout of all the files and directories under the folder security-content/:

Starting from the left: the folder .circleci contains the CI configuration files; we will come back to this later in the testing portion of the guide. Next is .dependabot, which contains the configuration for dependabot, a tool that keeps dependencies up-to-date for open source projects. The .git and .github folders and the .gitignore file are used by GitHub to track changes locally, configure the project, and other things. ⚠️ I do not recommend you change these. security-content uses pre-commit hooks, configured by .pre-commit-config.yaml, to check for common mistakes during commits, like invalid YAML or JSON and requirements.txt typos, among other things. The most important parts 🧩 of security-content are:

  1. detections/: Contains all 209 detection searches to date, and growing.
  2. stories/: All Analytic Stories, which group detections together and are also known as Use Cases.
  3. deployments/: Configuration for the schedule and alert action for all content.
  4. responses/: Incident Response Playbooks/Workflows for responding to a specific Use Case or Threat.
  5. response_tasks/: Individual steps in responses that help the user investigate threats via a Splunk search, automate via a Phantom playbook, and visualize via dashboards.
  6. baselines/: Searches that must be executed before a detection runs. They are specifically useful for collecting data on a system before running your detection on the collected data.
  7. dashboards/: JSON definitions of Mission Control dashboards, to be used as a response task. Currently not used.
  8. macros/: Implements Splunk’s search macros, shortcuts to commonly used search patterns like the sysmon source type. More on how macros are used to customize content below.
  9. lookups/: Implements Splunk’s lookups, usually to provide a list of static values like commonly used ransomware extensions.

Feel free to change, add, or remove any of these parts 🧩; they are what makes up content! Note that they are all loosely coupled via tagging. For example, detections are associated with Analytic Stories via the tag analytic_story: <name>, and responses are associated with Analytic Stories via tags as well. All tags are expressed in a key/value format.
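To make the tagging model concrete, here is a minimal Python sketch of how pieces of content end up associated through matching tag values. The detection names and tag layout below are illustrative stand-ins for parsed YAML files, not the project's actual schema:

```python
# Sketch of the loose coupling described above: content pieces reference
# each other only through matching tag values (key/value format).
detections = [
    {"name": "Detect Mimikatz", "tags": {"analytic_story": ["Credential Dumping"]}},
    {"name": "Suspicious wevtutil Usage", "tags": {"analytic_story": ["Windows Log Manipulation"]}},
    {"name": "Access LSASS Memory", "tags": {"analytic_story": ["Credential Dumping"]}},
]

def detections_for_story(story, detections):
    """Return the names of all detections tagged with a given Analytic Story."""
    return [d["name"] for d in detections
            if story in d["tags"].get("analytic_story", [])]

print(detections_for_story("Credential Dumping", detections))
# → ['Detect Mimikatz', 'Access LSASS Memory']
```

Because the association is just a shared tag value, adding a detection to a story never requires editing the story file itself.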

I did not mention the folders bin/, tests/, and package/ because we will be using them later on to validate our content, generate a package for Splunk, and test our detections. A final note: the folder spec/ contains the latest (version 3.0) specifications that define each component in the above list. I do not recommend reading them directly; instead, check out docs/, which contains detailed pages for each specification. Now that you have an understanding of the different parts that make up the security content machine 🚜, let's dig into how you can customize it.

Customizing Source Types with Macros

When customizing security-content to fit your organization and Splunk deployment, one of the key things to change is the names of your different source types. A great example of this is sysmon data. If collected using the latest Splunk Add-on for Microsoft Sysmon, it will automatically be source typed with:


This might not be the exact source type used in your organization and Splunk deployment. To set your own, simply modify the file security-content/macros/sysmon.yml and change the value of definition. Other great examples are okta, wmi, and streams http.
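For illustration, a customized macros/sysmon.yml might look something like the sketch below. The field names follow the macro spec, but the definition value here is a placeholder; substitute whatever sourcetype your own deployment actually assigns to sysmon data:

```yaml
# macros/sysmon.yml -- sketch; the definition value below is a placeholder
name: sysmon
definition: sourcetype=my_org:sysmon    # replace with your real sysmon sourcetype
description: Customer-specific Splunk configuration (index, source, sourcetype).
  Replace the macro definition with the values used in your environment.
```

Once the macro is updated, every detection that references `sysmon` picks up your sourcetype automatically, with no per-detection edits.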

Customizing Scheduling with Deployments

To customize how often detections run and what alert action they automatically perform, a deployment configuration must be created. By default, security-content includes a deployment called Enterprise Security deployment configuration, which schedules all Analytic Stories to run hourly and search over the last 60 minutes. The default deployment also has an alert action configured that creates a notable event in Enterprise Security if anything is detected. This can easily be customized per Analytic Story or Detection by setting a matching tag value. For example:

name: Schedule Credential Dumping Daily
id: bc91a8cd-35e7-4bb2-6140-e756cc46f214
date: '2020-04-27'
description: Schedule Credential Dumping Daily with Email notification to the SOC
author: Jose Hernandez
scheduling:
  cron_schedule: '0 0 * * *'
  earliest_time: -1d@d
  latest_time: -10m@m
  schedule_window: auto
alert_action:
  email:
    message: Splunk Alert $name$ triggered %fields%
    subject: Splunk Alert $name$
tags:
  analytics_story: Credential Dumping

This schedules the Credential Dumping Story to be executed daily and sends the results via email to the SOC. The deployment and Analytic Story are linked by the matching tag both share: analytics_story: Credential Dumping. By default, we also ship deployments for short (runs hourly, searches back 60 minutes) and long (runs daily, searches back 90 days) running baseline searches. It is important to note that security-content uses the alphanumeric order of file names to resolve any conflicts in tags. In other words, if you want the Credential Dumping deployment example above to take precedence over the default configuration file 10_enterprise_security_deployment_configuration.yml, you would name its file 11_credential_dumping_daily.yml or similar, so it sorts after the default.
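That precedence rule can be sketched in a couple of lines of Python. This is an illustration of the alphanumeric ordering described above (the filenames are from the example; the assumption is that files applied later override earlier ones for the same tag):

```python
# Deployment files are applied in alphanumeric filename order, so a file
# that sorts later can override an earlier one for a matching tag.
deployment_files = [
    "11_credential_dumping_daily.yml",
    "10_enterprise_security_deployment_configuration.yml",
]

apply_order = sorted(deployment_files)
print(apply_order)
# → ['10_enterprise_security_deployment_configuration.yml', '11_credential_dumping_daily.yml']
# The default configuration is applied first; the credential dumping
# deployment is applied afterwards, so its settings win for its tag.
```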

Now that you have a basic understanding of how to configure the security content for our organization, you can validate that all looks good, and generate a package to deploy.

Validating Content ✅

Because we are humans and humans often make mistakes, validating our work is important to avoid common errors. Like all lazy & good engineers 😇, we have scripted a solution that helps us with this; like all our scripts for the project, it lives under bin/. Let me introduce you to a script that automatically checks that:

  • Content is valid YAML
  • Content adheres to the spec
  • Lookups in searches are defined under lookups/
  • Macros in searches are defined under macros/
  • No UUID fields are blank
  • No UUIDs are duplicated
  • No special characters are in the content name
  • Description and How to Implement fields are ASCII
  • Dates use the correct format
  • Macros are used in searches in place of 'eventtype=', 'sourcetype=', 'source=', 'index='
  • Detection names match their file names, with spaces replaced by _
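As a rough illustration of what two of these checks could look like, here is a Python sketch. This is not the project's actual validation code (the real script lives under bin/ and does much more), and details like the lowercasing of file names are assumptions for the example:

```python
def check_unique_ids(contents):
    """Flag blank or duplicate UUIDs across a list of parsed content dicts."""
    seen, errors = {}, []
    for c in contents:
        if not c.get("id"):
            errors.append(f"{c['name']}: blank UUID field")
        elif c["id"] in seen:
            errors.append(f"{c['name']}: duplicate UUID shared with {seen[c['id']]}")
        else:
            seen[c["id"]] = c["name"]
    return errors

def check_name_matches_filename(name, filename):
    """Detection names should match the file name, spaces replaced by _."""
    return filename == name.lower().replace(" ", "_") + ".yml"

print(check_name_matches_filename("Detect Mimikatz", "detect_mimikatz.yml"))
# → True
```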

You can easily run validate by first installing the requirements of the project. To do this, run the following in security-content/:

pip install virtualenv && virtualenv venv && source venv/bin/activate && pip install -r requirements.txt

Then run validate against your project:

 python bin/ --path . --verbose

The path above assumes that you are in the security-content repository. If all is fine, at the end of its execution you will get a No Errors found message. Otherwise, expect an error message specifying what needs to be resolved.

Generating a Splunk App with your Content 🧬⚗️

Now that we have customized the content to our organization and validated that all our changes are correct, we are ready to generate a Splunk App for deployment. I bet you can guess we also wrote a script for this 😉; like all our scripts for the project, it lives under bin/. The script generates all the dynamic components that make up content in a Splunk app, specifically configuration files such as analytic_stories.conf, savedsearches.conf, and macros.conf.

To run it, you must have the requirements of the project installed as described in the Validating Content section, but in case you missed it, here is the command:

pip install virtualenv && virtualenv venv && source venv/bin/activate && pip install -r requirements.txt

Then run generate to build a Splunk App in a folder:

python bin/ --path . --output package --verbose

Just like before, the path above assumes that you are in the security-content repository. At the end of its execution, you should expect output similar to:

69 stories have been successfully written to package/default/analytic_stories.conf
206 detections have been successfully written to package/default/savedsearches.conf
71 response tasks have been successfully written to package/default/savedsearches.conf
46 baselines have been successfully written to package/default/savedsearches.conf
65 macros have been successfully written to package/default/macros.conf
workbench panels were generated
security content generation completed..

Note that we use the folder package/ to store the static pieces of our Splunk App, like the different dashboards, views, and other components. From here, you might consider modifying the static pieces of the app, such as the name, author, and version under the app.manifest and default/app.conf files. Next, let's talk about packaging our application for deployment and committing any changes made to our fork of security-content.
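Conceptually, the generation step renders each content YAML file into a .conf stanza. Here is a toy Python sketch of that idea; the detection name and fields are illustrative, and the real script uses templates and the full spec rather than this hand-rolled string building:

```python
def detection_to_stanza(detection):
    """Render a parsed detection YAML dict as a savedsearches.conf-style stanza."""
    lines = [f"[{detection['name']}]"]
    lines.append(f"search = {detection['search']}")
    lines.append(f"description = {detection['description']}")
    return "\n".join(lines)

detection = {
    "name": "Detect Mimikatz Via PowerShell",  # illustrative, not real ESCU content
    "search": "`sysmon` Image=*mimikatz* | stats count by host",
    "description": "Looks for mimikatz execution.",
}
print(detection_to_stanza(detection))
```

Note how the search references the `sysmon` macro rather than a hard-coded sourcetype, which is what makes the macro customization from earlier carry through to the generated app.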

Packaging and Committing Changes 📦

Because the package/ directory now contains all the pieces of a working Splunk App, we can simply tar.gz it and upload it to the Splunk server. This is not the recommended way to package Splunk Apps, but it is a quick solution for testing. We will cover the official way to package Splunk apps using slim in detail during part 3 of the series. To quickly build a package, run:

tar -zcvf mysplunkdetections.spl package

You can now distribute and/or upload mysplunkdetections.spl to any Splunk instance. If you did not change the package's app.manifest or default/app.conf file, once uploaded the App will be called ES Content Updates, as seen below.

All your new detections, stories, and details can be viewed using the ESCU Analytic Story page.

If you customized the detections, macros, or app, let's commit the changes back to our security-content fork. To do this, run the following under the security-content folder:

  1. git status to view your current modifications
  2. git add <file name> to include the files in your commit
  3. git commit -m <commit message> to make a commit
  4. git push to push the changes up to your fork

You should see a new commit posted under your fork of security content:

Summary 📓

To quickly recap, let's summarize the basic steps to get started on using security-content for your organization.

  1. Fork the security-content project
  2. Customize deployments and macros to fit your environment
  3. Add/Edit/Remove any piece of content 🧩
  4. Validate that your changes are correct
  5. Generate configuration files for a Splunk App
  6. Quickly create a package 📦 via tar.gz that can be installed in Splunk
  7. Commit changes back to the fork

A detection developer’s mission does not stop at shipping a detection 🙅‍♂️; this is just the beginning of detection engineering. A good detection engineering workflow should always be able to answer the question: “Are my detections actually working?” 🤔. To answer this question, follow me to part 2 of the series, where we cover the attack_range project and how we can use it to test the detections created under your fork of security-content.

José is a Principal Security Researcher at Splunk. He started his professional career at Prolexic Technologies (now Akamai), fighting DDoS attacks from “anonymous” and “lulzsec” against Fortune 100 companies. As an engineering co-founder of Zenedge Inc. (acquired by Oracle Inc.), José helped build technologies to fight bots and web-application attacks. While working at Splunk as a Security Architect, he built and released an auto-mitigation framework that has been used to automatically fight attacks in large organizations. He has also built security operation centers and run a public threat-intelligence service. Although information security has been the focus of his career, José has found that his true passion is in solving problems and creating solutions. As an example, he built an underwater remote-control vehicle called the SensorSub, which was used to test and measure toxicity in Miami's waterways.

Join the Discussion