Deploy Splunk Enterprise on Kubernetes: Splunk Connect for Kubernetes and Splunk Insights for Containers (BETA) - Part 2

Welcome back to our Splunk on Kubernetes walkthrough!

In our previous post, "Deploy Splunk Enterprise on Kubernetes: Splunk Connect for Kubernetes and Splunk Insights for Containers (BETA) - Part 1," we covered some prerequisites and concepts behind getting started with running a distributed Splunk deployment in a Kubernetes environment.

If you followed along, you should now have a Kubernetes Namespace called splunk with an NGINX deployment serving config files to our Splunk pods, a standalone Search Head pod where we will install some Splunk apps and make use of our data, and a 3-member Indexer Cluster StatefulSet governed by our Master pod, which also serves as our License Master.

Now that we have an operational Splunk environment, let’s prepare to fill it with Kubernetes data! We will deploy configurations for SmartStore-enabled indexes that Splunk Connect for Kubernetes will use. Then, we’ll create and verify a HEC token and a Kubernetes Service.

Configure SmartStore

We will use Amazon S3 as our remote store, so let’s head over to the AWS console and create an S3 bucket. If you use another S3-compliant storage endpoint, please consult our documentation to determine whether your endpoint is compatible with Splunk SmartStore.

For info on how to create an AWS S3 bucket, consult the AWS documentation.

Once completed, you will see the bucket in your AWS S3 console.

Let’s test write access!

From your terminal, with awscli and your AWS credentials configured, copy a text file to your remote store:

aws s3 cp test.txt s3://splunk-smartstore-demo

You should now see that file in your AWS console!
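You can also confirm the upload from the CLI, using the same bucket name as the copy command above:

```shell
# List the bucket contents to confirm the test file landed
aws s3 ls s3://splunk-smartstore-demo/
```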

Now that we have our S3 bucket and have tested our ability to write to it, we’re ready to set up our SmartStore-enabled indexes.

Deploy Configurations to the Splunk Cluster

We’re going to take two demo Splunk Apps, copy them into the master-apps directory on the Master Pod, then push them out to the Indexer Cluster via the Master GUI.

The k8s_smartstore_indexes app below contains an indexes.conf that makes all of our indexes SmartStore-enabled by default. See Splunk Docs for more.

Create a folder that contains the following files and matches this directory tree:

├── local
│   └── indexes.conf
└── metadata
   └── default.meta

2 directories, 2 files


# local/indexes.conf

[default]
# Configure all indexes to use the SmartStore remote volume called
# "smartstore".
# Note: If you want only some of your indexes to use SmartStore,
# place these settings under the individual stanzas for each of the
# SmartStore indexes, rather than here.
remotePath = volume:smartstore/$_index_name
repFactor = auto

# Configure the remote volume
[volume:smartstore]
storageType = remote

# The path attribute points to the remote storage location where indexes
# reside. Each SmartStore index resides directly below the location
# specified by the path attribute. The <scheme> identifies a supported
# remote storage system type, such as S3. The <remote-location-specifier>
# is a string specific to the remote storage system that specifies the
# location of the indexes inside the remote system.
# This is an S3 example: "path = s3://mybucket/some/path".
path = s3://<yourBucketNameHere>/

# The following S3 settings are required only if you're using the access
# and secret keys. They are not needed if you are using AWS IAM roles.
remote.s3.access_key = <putYourAccessKeyHere!>
remote.s3.secret_key = <putYourSecretKeyHere!>
remote.s3.endpoint = https://s3.<region>.amazonaws.com

# Disable replication for _introspection, to avoid bundle errors
[_introspection]
repFactor = 0

# metadata/default.meta

# Application-level permissions
[]
access = read : [ * ], write : [ admin ]
export = system

# Restrict index definitions to admins
[indexes]
access = read : [ admin ], write : [ admin ]

The ta_containers sample app below creates a HEC token, three event indexes, and one metric index. We also configure some simple props, transforms, and permissions. Create this app locally, matching the directory tree that follows.

├── default
│   ├── indexes.conf
│   ├── inputs.conf
│   ├── props.conf
│   └── transforms.conf
└── metadata
    └── default.meta

2 directories, 5 files


# default/indexes.conf

[cm_metrics]
coldPath = $SPLUNK_DB/cm_metrics/colddb
homePath = $SPLUNK_DB/cm_metrics/db
thawedPath = $SPLUNK_DB/cm_metrics/thaweddb
datatype = metric

[cm_meta]
coldPath = $SPLUNK_DB/cm_meta/colddb
homePath = $SPLUNK_DB/cm_meta/db
thawedPath = $SPLUNK_DB/cm_meta/thaweddb
datatype = event

[cm_events]
coldPath = $SPLUNK_DB/cm_events/colddb
homePath = $SPLUNK_DB/cm_events/db
thawedPath = $SPLUNK_DB/cm_events/thaweddb
datatype = event

[cm_alerts]
coldPath = $SPLUNK_DB/cm_alerts/colddb
homePath = $SPLUNK_DB/cm_alerts/db
thawedPath = $SPLUNK_DB/cm_alerts/thaweddb
datatype = event

# default/inputs.conf

# "<yourTokenNameHere>" is a placeholder; name the HEC input whatever you like
[http://<yourTokenNameHere>]
disabled = 0
index = cm_metrics
sourcetype = cm_metrics
token = 00000000-0000-0000-0000-000000000000

# default/props.conf

# Apply the host-extraction transform below to the metrics sourcetype
# ("metrics_host" is a placeholder name; it must match the transforms.conf stanza)
[cm_metrics]
TRANSFORMS-extract_host = metrics_host

# default/transforms.conf

########### Metrics ######################
[metrics_host]
DEST_KEY = MetaData:Host
REGEX = host=(\S+)
FORMAT = host::$1

# metadata/default.meta

# Application-level permissions
[]
access = read : [ * ], write : [ admin ]
export = system

# Restrict index definitions to admins
[indexes]
access = read : [ admin ], write : [ admin ]

We’ll use the kubectl cp command to move the folders into place on the Master, then use the UI to push the bundle.

This is simply to show off some of the capabilities kubectl provides and to reinforce that once we spin up the Splunk cluster on Kubernetes, it’s all just the Splunk cluster management you already know!

In a production scenario, we’d likely use some of the more advanced features available in our Docker image and other devops tools to orchestrate the apps into place at runtime. Touching the pods by hand will be a “no-go” in most production environments. In the pursuit of Science and Learnings however, it’ll do…

kubectl -n splunk cp k8s_smartstore_indexes/ master-6d7b98f8f5-tb7sh:/tmp/k8s_smartstore_indexes
kubectl -n splunk cp ta_containers/ master-6d7b98f8f5-tb7sh:/tmp/ta_containers

"Exec" into the Master pod and run the following commands to copy the apps into the master-apps directory and chown them to the splunk user.

kubectl -n splunk exec -it master-6d7b98f8f5-tb7sh bash
cd /tmp
sudo mv k8s_smartstore_indexes ta_containers /opt/splunk/etc/master-apps
cd /opt/splunk/etc/master-apps
sudo chown -R splunk:splunk k8s_smartstore_indexes ta_containers

Our Splunk apps are now in master-apps, ready to be distributed:

ls -la /opt/splunk/etc/master-apps
total 20
drwxr-xr-x  5 splunk splunk 4096 Dec  6 17:29 .
drwxr-xr-x 16 splunk splunk 4096 Dec  6 15:06 ..
drwxr-xr-x  4 splunk splunk 4096 Nov 19 12:28 _cluster
drwxr-xr-x  3 splunk splunk 4096 Dec  6 17:28 k8s_smartstore_indexes
drwxr-xr-x  4 splunk splunk 4096 Dec  6 17:29 ta_containers

From here, you can push the bundle via Splunk Web or the CLI, your choice.
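If you’d rather stay in the terminal, the same validate-and-push flow is available from the Master’s CLI. A sketch, assuming the demo admin/helloworld credentials used throughout this walkthrough:

```shell
# From inside the Master pod: validate the bundle first
/opt/splunk/bin/splunk validate cluster-bundle -auth admin:helloworld

# Check validation/push progress at any time
/opt/splunk/bin/splunk show cluster-bundle-status -auth admin:helloworld

# Distribute the validated bundle; --answer-yes skips the restart prompt
/opt/splunk/bin/splunk apply cluster-bundle --answer-yes -auth admin:helloworld
```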

We will use the UI, so let’s port-forward back to our master UI:

kubectl -n splunk port-forward master-6d7b98f8f5-tb7sh 9999:8000

Log in with the default credentials admin/helloworld and navigate to Settings > Indexer Clustering.

Hit the Edit button and select “Cluster Bundle Actions”.

Then select “Validate and Check Restart”.

Once validated, we’re ready to push our bundle out to the Indexer Cluster! If you see errors, re-visit the previous step.

Once you push the bundle, wait a few moments while the cluster distributes the new configs. Once completed, you should see that all your data is replicated and searchable.

You’ll also see buckets are now available in S3! You may see only indexes that contain data, such as Splunk’s _internal indexes. This is expected, as indexes only appear in the Master GUI once they receive data. Also, for data to reach the remote store, buckets must roll, so it may take time to see all indexes represented in S3. If you are impatient, try a rolling restart!

Deploy a Kubernetes Service for HEC

In order to send data in a distributed fashion across our indexing cluster, we will deploy a Kubernetes Service named hec that balances traffic across our Indexer StatefulSet.

apiVersion: v1
kind: Service
metadata:
  name: hec
  labels:
    app: splunk
    role: splunk_indexer
    tier: indexer
spec:
  selector:
    app: splunk
    role: splunk_indexer
    tier: indexer
  ports:
    - name: splunk-hec
      port: 8088
      targetPort: 8088

Save the manifest as splunk-indexer-hec-service.yaml and deploy it:

kubectl -n splunk apply -f splunk-indexer-hec-service.yaml

Confirm the hec service has a CLUSTER-IP:

kubectl -n splunk get svc
NAME              TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                               AGE
hec               ClusterIP   10.x.x.x     <none>        8088/TCP                              1h
indexer           ClusterIP   None         <none>        8000/TCP,8089/TCP,4001/TCP,9997/TCP   13h
master            ClusterIP   None         <none>        8000/TCP,8089/TCP                     13h
search            ClusterIP   None         <none>        8000/TCP,8089/TCP,8191/TCP            13h
splunk-defaults   ClusterIP   None         <none>        80/TCP                                13h
tiller-deploy     ClusterIP   10.x.x.x     <none>        44134/TCP                             1d

Good to go!

Send test data to HEC

Now that we have exposed HEC to the cluster, exec into your search head pod, and let’s send some data to the Splunk HEC service we have running in Kubernetes:

kubectl -n splunk exec -it search-5944fc8696-57jzh bash

Run this command:

curl -k "https://hec:8088/services/collector" -H "Authorization: Splunk 00000000-0000-0000-0000-000000000000" -d '{ "time": 1545316835, "host": "search-5944fc8696-5n7l4", "index": "cm_events", "source": "hec-test", "event": { "message": "Something happened", "severity": "INFO"}}'

Run it a few more times. As long as you get {"text":"Success","code":0} back, you will see the events spread across our indexers when you search from the Search pod or the Master in the Search & Reporting app.
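You can also confirm the distribution from the terminal by running the search via the Search Head pod’s CLI. A sketch, reusing the pod name and demo credentials from this post:

```shell
# Count the test events per indexer; seeing multiple splunk_server
# values confirms the hec service is balancing across the StatefulSet
kubectl -n splunk exec -it search-5944fc8696-57jzh -- \
  /opt/splunk/bin/splunk search \
  'index=cm_events source=hec-test | stats count by splunk_server' \
  -auth admin:helloworld
```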

If you get errors when sending the test events, review your HEC token and Kubernetes Service configurations from the previous steps.

Receiving these test events will cause the cm_events index to show up in the Master GUI.
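The HEC token we deployed defaults to the cm_metrics metric index, so you can exercise that too. A sketch using HEC’s JSON metric format, still from the Search Head pod (the metric_name and dimension values here are made up for illustration):

```shell
# Send a single metric data point to the cm_metrics metric index;
# metric events use "event": "metric" plus metric_name/_value in "fields"
curl -k "https://hec:8088/services/collector" \
  -H "Authorization: Splunk 00000000-0000-0000-0000-000000000000" \
  -d '{ "time": 1545316835, "host": "search-5944fc8696-5n7l4", "index": "cm_metrics", "source": "hec-test", "event": "metric", "fields": { "metric_name": "demo.cpu.usage", "_value": 42.0, "region": "us-west-1" } }'
```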

Let's recap!

We have our Splunk cluster up and running, we’ve distributed some custom configs, and we’ve got our SmartStore indexes tested and ready for some real data!

In the next installment of this series, we will deploy Splunk Connect for Kubernetes and explore the data it collects with Splunk Enterprise and the App for Containers BETA!

Thanks for checking out Part 2 of our series on Splunk & Kubernetes. For more on monitoring your Kubernetes stack, check out our Beginner’s Guide to Kubernetes Monitoring. And if you’re interested in gaining immediate insight into your Kubernetes stack, including performance metrics for your clusters, pods, containers, and namespaces, as well as logs, metrics, events, and metadata, sign up for Splunk Insights for Containers (BETA) to test this out.

Posted by Matthew Modestino


