Deploying Splunk Securely with Ansible Config Management – Part 2

automate all the things

In part one we covered generic deployment of Ansible with a static inventory list. This time, we are going to raise the complexity bar a bit and show you how you can use Ansible to deploy the Splunk environment with a dynamic inventory. Keep in mind that not only can you use this for Splunk, but for other deployable server types in your organization.

Dynamic Inventory

What is a dynamic inventory and when do I use it?

Dynamic inventory, in our case, means the list of servers and server types is generated at runtime, because hosts are being created and destroyed too quickly to maintain a static list. A scenario where this is needed is an auto-scaling environment like AWS EC2, where you elastically adjust your infrastructure for demand or performance. For us, it means auto scaling our Splunk deployments in our cloud environment depending on load or processing need! This blog will give you the tools to do this with Ansible on AWS EC2.

If you read through part one, the major changes we are making are to the inventory and how we determine which server is listed in which group. As you can see, we have replaced the hosts file and linked it to a script:

Ansible Root Directory Structure
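To make the contract concrete: Ansible invokes an inventory script with `--list` and expects a JSON mapping of groups to hosts on stdout. Here is a minimal sketch of that protocol (the group names and hostnames are hypothetical, not from the repo):

```shell
#!/bin/sh
# Minimal dynamic-inventory sketch. Ansible runs the script with --list
# and reads a JSON mapping of groups to hosts from stdout.
# Group names and hostnames below are made-up examples.
inventory_list() {
  cat <<'EOF'
{
  "splunk_indexers": {"hosts": ["idx1.example.com", "idx2.example.com", "idx3.example.com"]},
  "splunk_searchheads": {"hosts": ["sh1.example.com", "sh2.example.com"]}
}
EOF
}

case "$1" in
  --list) inventory_list ;;
  *) echo '{}' ;;
esac
```

A real script (like the stock EC2 one) builds this JSON by querying the cloud API instead of hardcoding hosts, which is what makes the inventory "dynamic".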



Take the following hosts, created with our customer/buttercupgames.yml playbook on EC2:

Instances created with Ansible


These could have been created as part of an auto scaling job or another process (like a new business unit that needs a Splunk cluster). Ansible automatically spins up the servers (instances), then configures and hardens the installation.

Ansible Run


By the end of the playbook run we have a new cluster with 3 indexers and 2 search heads. All we needed was the EC2 API key and secret. Keep in mind that the Ansible command above can run under cron every 5 minutes to make sure any newly created servers get provisioned and configured correctly. This also prevents anyone from changing Splunk configs on the instances without notifying the Ansible manager or going through a change process. Changes can be forced and orchestrated only through Ansible, keeping tight control over the configuration of your AWS Splunk deployments.
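A crontab entry along these lines would do it; the inventory script path and playbook location here are assumptions for illustration, so adjust them to wherever yours actually live:

```
# Re-run the playbook every 5 minutes against the dynamic inventory
# (paths are examples -- substitute your own)
*/5 * * * * ansible-playbook -i /etc/ansible/ec2.py /opt/ansible/customer/buttercupgames.yml
```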


The installation process is very similar to part 1 of the series, with just the added step of configuring your EC2 API keys.

1. Install git/pip for your distribution
apt-get install git python-pip python-dev
2. Clone ansible repo/submodules and splunk-ansible-advance
cd /opt/
sudo git clone
sudo git submodule update --init --recursive
cd /etc/
sudo git clone ansible

3. Install pip modules (or install them from your distro package manager)
pip install boto jinja2
4. Configure AWS credentials for Ansible (remember to use IAM to create the API user):

cat ~/.boto
[Credentials]
aws_access_key_id = XXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxx
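If you prefer not to keep credentials in a file, boto also reads them from environment variables; the values below are placeholders for your IAM user's keys:

```shell
# Placeholder values -- substitute your IAM user's API keys.
# boto picks these up when no ~/.boto file is present.
export AWS_ACCESS_KEY_ID=XXXXXXXXXXXXXXXX
export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxx
```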

5. Test your dynamic inventory script.
/etc/ansible/ --list
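If the script is working, `--list` prints a JSON document grouping your instances. With the stock EC2 inventory script, groups are derived from attributes such as region, instance type, security group, and tags, so the output looks roughly like this (hostnames and tag values below are made up):

```
{
  "us-east-1": ["ec2-54-x-x-x.compute-1.amazonaws.com"],
  "type_m3_medium": ["ec2-54-x-x-x.compute-1.amazonaws.com"],
  "security_group_splunk": ["ec2-54-x-x-x.compute-1.amazonaws.com"],
  "tag_Name_idx1_buttercupgames": ["ec2-54-x-x-x.compute-1.amazonaws.com"]
}
```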
6. Configure splunk-ansible-advance.
7. Set up your credentials.
8. Copy the credentials just generated into an Amazon key pair.
9. Configure the key pair.

Remember to also copy your selected EC2 key pair to .ssh/id_rsa on the machine Ansible will run from, so Ansible has the credentials to log in, and set the permissions on the keys to 400. This gets me every time :-).

Supporting Multiple Splunk Environments

Under the splunk-ansible-advance repo you might have noticed a new folder called customer. This was built with the intention of allowing you to not only spin up but also manage multiple deployments of Splunk. In large organizations these are typically separate business units; or you may be a managed service provider offering Splunk as a service. Either way, this folder contains the logic to manage multiple deployments from a single place.


It’s a great working example of a standard customer deployment (2 search heads and 3 indexers) in AWS. In this playbook we first define the AWS infrastructure and provision the instances:

# creates 2 search heads
- name: Launching search heads
  ec2:
    key_name: "{{ keypair }}"
    group: "{{ security_group }}"
    instance_type: "{{ instance_type }}"

more at

and then we define which roles apply to each newly spun-up instance:

- name: apply common configuration to all nodes
connection: local
hosts: '*_buttercupgames'

more at
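Putting the two plays together, the second play attaches roles to the hosts matched by the wildcard pattern. A sketch of what that looks like, with hypothetical role names (the actual role names live in the repo):

```
- name: apply common configuration to all nodes
  connection: local
  hosts: '*_buttercupgames'
  roles:
    - common   # example role name, not from the repo
    - splunk   # example role name, not from the repo
```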

A Few Added Bells and Whistles

We made major changes to our original Ansible repository. Because part 2 of the series changes the expected architecture of Ansible, I decided to create a new repository for the playbooks under the name splunk-ansible-advance.

Posted by Jose Hernandez

