Running Splunk in Amazon’s Elastic Container Service

With Splunk now supported in Docker, it’s a great time to look at running Splunk in Amazon’s ECS. For those who tuned in to or attended this year’s AWS re:Invent, you’ll have also heard that the Docker image is now available via the AWS Marketplace.

In this blog, we’ll take a step-by-step look at setting up a simple ECS cluster to run a standalone instance of Splunk. For those new to Splunk, ECS and the world of containers, I would recommend this approach for prototypes, testing and development. I’ve not delved into the details of more complex Splunk Validated Architectures, or advanced configurations of ECS – that’s another blog for another day. For those wishing to get adventurous, there are more details in our GitHub repo.

So here’s how you get started in just four easy steps:

Step 1 – ECS Cluster

With AWS ECS, you have a choice of running containers on two platforms – Fargate and EC2. Fargate allows you to run containers without worrying about EC2 servers and clusters, and is great for running smaller services. Using EC2 gives a greater level of control over the base compute and storage platform, which we will need for Splunk, so we will use the EC2 cluster option.

Navigate to ECS in your AWS Console and select Create Cluster – we will use the “EC2 Linux + Networking” option here.
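If you prefer the CLI, the cluster itself can be created with a single command. A sketch only, with a placeholder cluster name – note that, unlike the console wizard, this command does not provision the EC2 instances that back the cluster; you would attach that capacity separately.

```shell
# Creates an empty ECS cluster; the backing EC2 instances are provisioned
# separately (the "EC2 Linux + Networking" console wizard does both for you).
aws ecs create-cluster --cluster-name splunk-ec2-cluster
```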

Now run through the cluster settings. These include the EC2 instance type for the containers, the number of instances in the cluster, networking, etc. For the purposes of this blog, we’ll take most of the defaults, except for:

  • Storage: Ensure you have enough capacity for the Splunk instance(s) that you wish to run in the containers – note that this is only for the root filesystem of the instances, as we will be setting up separate volumes for $SPLUNK_HOME/etc and $SPLUNK_HOME/var to keep these persistent. We will do this setup later.

  • Key Pair: Either create a new key pair or select an existing one for your Region. This will allow you to ssh into the EC2 instance(s) if you need to do so at a later stage (for example, to access docker commands).

  • Security Group: Ensure that all the ports that Splunk will need are open (notably the essential ones – 8000, 9997, etc.)
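As a rough CLI sketch, the security group rules above could be added like this – the group ID and CIDR are placeholders for your environment, and 8089 (splunkd’s management port) is included alongside the ports mentioned above:

```shell
# Open the ports a standalone Splunk instance typically needs:
# 8000 = Splunk Web, 8089 = splunkd management, 9997 = forwarder receiving.
SG_ID="sg-0123456789abcdef0"   # placeholder security group ID
for port in 8000 8089 9997; do
  aws ec2 authorize-security-group-ingress \
    --group-id "$SG_ID" \
    --protocol tcp \
    --port "$port" \
    --cidr 10.0.0.0/16         # adjust to your VPC / source range
done
```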

The EC2 instance(s) provisioned are allocated EBS storage when they are created. One volume is used for the root filesystems of the containers, and the other by Docker itself. This second EBS volume is also used for any volumes you create within the containers. Its default size is only 8G, so we will need to extend it to a larger size.

To change the volume size, select the cluster and then ECS Instances. Click on each instance link (one at a time); this will open the EC2 dashboard with that instance’s details.

Click on the root device link /dev/xvda to go to the EBS volume, and modify the volume to a larger size (from 8G to, say, 300G). Note that a 300G gp2 EBS volume gives us 900 IOPS, which is near the recommended minimum for a single-instance deployment.
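The IOPS figure follows from the gp2 baseline of 3 IOPS per provisioned GiB (with a 100 IOPS floor and a 16,000 IOPS cap), which you can sanity-check:

```shell
# gp2 baseline performance scales at 3 IOPS per provisioned GiB,
# so a 300 GiB volume gets a 900 IOPS baseline.
size_gib=300
iops=$(( size_gib * 3 ))
echo "gp2 baseline for ${size_gib} GiB: ${iops} IOPS"
```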

Save and return to the EC2 Dashboard, then reboot the EC2 instance. (Repeat for the other EC2 instances/EBS volumes in the cluster.)
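The same resize can be scripted if you prefer – a sketch, with placeholder volume and instance IDs:

```shell
# Grow the EBS volume, then reboot the instance so the change takes effect.
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 300
aws ec2 reboot-instances --instance-ids i-0123456789abcdef0
```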

Step 2 – Docker Image

Here we will create a single-server Splunk install, not a clustered deployment. Back in the main AWS ECS menu, select Task Definitions and create a new Task Definition. Select EC2 as the launch type.

Enter a task name, and select ecsTaskExecutionRole as the Task Role. Keep the default setting for the Network Mode.

For the Task Memory and CPU, select the sizing appropriate to your needs.

For a small test instance I set this to 8GB and 4 vCPUs. (Note that this is below the recommended sizing for Splunk; note also the ECS limit of 10 vCPUs per task.)

Before we add the container, we will create two volumes, for $SPLUNK_HOME/etc and $SPLUNK_HOME/var.

Create a volume for each of these (for example, name them splunk-etc-vol and splunk-var-vol), click the “Specify a volume driver” checkbox so that you can change the Scope of the volume to “shared”, and enable auto-provisioning.
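In task definition JSON terms, those console settings correspond roughly to the fragment below (a sketch, using the example volume names above and the default local driver):

```json
"volumes": [
  {
    "name": "splunk-etc-vol",
    "dockerVolumeConfiguration": {
      "scope": "shared",
      "autoprovision": true,
      "driver": "local"
    }
  },
  {
    "name": "splunk-var-vol",
    "dockerVolumeConfiguration": {
      "scope": "shared",
      "autoprovision": true,
      "driver": "local"
    }
  }
]
```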

Now click on “Add Container”, enter a name, and set the image to read from the Splunk image on Docker Hub: use splunk/splunk:latest to do this. Note that if you wish to use the Marketplace version, you should use the Marketplace image here instead.

Other standard settings can be kept as default, but make sure you add port mappings for the host and container; I’ve used 8000 for the Search Head UI and 9997 for forwarding (add others as you need).
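In the container definition JSON, those mappings look something like this (a sketch; I’ve mapped host and container ports one-to-one):

```json
"portMappings": [
  { "containerPort": 8000, "hostPort": 8000, "protocol": "tcp" },
  { "containerPort": 9997, "hostPort": 9997, "protocol": "tcp" }
]
```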

If you are creating more than one instance of Splunk in the cluster, you will need different port mappings, as the instances will be sharing the same EC2 host IP and ports.

Open up the Advanced container configuration. The Healthcheck settings can all be left empty. (Optional: change the Hostname to a friendly name.)

In the Storage and Logging section, add two mount points, for /opt/splunk/etc and /opt/splunk/var, selecting as sources the volumes set up earlier.
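As a task definition JSON fragment, the mount points look roughly like this (a sketch, tying the container paths back to the example volume names):

```json
"mountPoints": [
  { "sourceVolume": "splunk-etc-vol", "containerPath": "/opt/splunk/etc" },
  { "sourceVolume": "splunk-var-vol", "containerPath": "/opt/splunk/var" }
]
```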

For the Environment settings, we will need to set the following environment variables:

SPLUNK_START_ARGS = --accept-license

SPLUNK_PASSWORD = <password of your choice for admin>

Note that you can alternatively enter the password environment variable when you run the task.
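These map to the container’s environment array in the task definition JSON – a sketch, with a placeholder password value:

```json
"environment": [
  { "name": "SPLUNK_START_ARGS", "value": "--accept-license" },
  { "name": "SPLUNK_PASSWORD", "value": "changeme-please" }
]
```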

The remainder of the settings here can be kept as default, so save and create the Task.

Step 3 – Run the Task

Select the Task (from the Task Definition list), and click on Actions, Run Task.

Select EC2 and the cluster you created earlier, and run 1 task only. The Advanced Options let you override some settings, such as the password.
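The same launch, including the password override mentioned above, can be done from the CLI – a sketch, assuming the placeholder cluster, task definition and container names used in this walkthrough:

```shell
# Run one copy of the task on the EC2 cluster, overriding SPLUNK_PASSWORD
# at launch time instead of baking it into the task definition.
aws ecs run-task \
  --cluster splunk-ec2-cluster \
  --launch-type EC2 \
  --task-definition splunk-standalone \
  --count 1 \
  --overrides '{"containerOverrides":[{"name":"splunk","environment":[{"name":"SPLUNK_PASSWORD","value":"changeme-please"}]}]}'
```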

Once done, the container task will run, and Splunk will start up.

Step 4 – Log into Splunk

If you click on the container instance, you should now see the IP address and public DNS. Copy this into your browser and append port 8000 (or whatever port you mapped) to the URL. This should take you to the Splunk login page, and you’re ready to go!
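You can also check from a terminal before opening a browser – a sketch, with a placeholder public DNS name; an HTTP 200 or a redirect status from Splunk Web means the instance is up:

```shell
# Print just the HTTP status code from Splunk Web on the mapped port.
curl -s -o /dev/null -w "%{http_code}\n" \
  "http://ec2-xx-xx-xx-xx.compute-1.amazonaws.com:8000"
```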

Now that you’re up and running, why not try Splunking your AWS environment with this instance – the Insights App for Infrastructure is a simple install! For those wanting to know more about what else Splunk can do with containers, catch up on this short webcast from this year’s AWS re:Invent and hear all about it from our own James Hodge.



Posted by

Paul Davies

Paul is an Architect in EMEA, responsible for working closely with Splunk customers and partners to help them deliver solutions relating to ingesting data or running Splunk in the cloud. Previously, Paul worked at Cisco as a BDM for big data solutions, an Enterprise Architect at Oracle, and Consultant at Hitachi.
