
Welcome back to the finale of our blog series exploring Splunk on Kubernetes!
In this final post, we’ll deploy Splunk Connect for Kubernetes, giving us full visibility of our Kubernetes logs, metadata and metrics. We’ll install some apps, including a sneak peek into the newly announced Splunk for Container Monitoring (BETA). Once complete, we’ll help you clean up all the resources created in this walkthrough!
If you have followed Part 1 & Part 2 in this series, your Splunk namespace should be looking like this:
Let’s continue!
Deploy Helm & Tiller
We’re going to deploy using Helm, so let’s get Tiller deployed to our Splunk namespace.
If you do not use Helm, you can deploy manually using the manifests here (we won’t cover the manifest customizations in this post, though we recommend checking out the helm template command as a way to render the templates locally).
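If you take the manifest route, here’s a sketch of rendering the chart locally with Helm v2 (the chart URL and values.yaml are the ones used later in this post):

```shell
# Download the chart, then render its templates locally (Helm v2) so the
# resulting manifests can be applied without Tiller
helm fetch https://github.com/splunk/splunk-connect-for-kubernetes/releases/download/v1.0.1/splunk-connect-for-kubernetes-1.0.1.tgz
helm template -f values.yaml splunk-connect-for-kubernetes-1.0.1.tgz > sck-rendered.yaml
kubectl -n splunk apply -f sck-rendered.yaml
```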
To make things easy in dev, I’ve provided YAML to create a service account for Tiller:
tiller-rbac-config.yaml
# https://docs.helm.sh/using_helm/#example-service-account-with-cluster-admin-role
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: splunk
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
  namespace: splunk
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: splunk
This will give Tiller cluster-admin rights, allowing it to deploy apps to our cluster. Tiller permissions are something you’ll need to balance against security requirements in real-world deployments, but for the scope of this demo and in labs this should be fine. See the Helm docs for more!
kubectl -n splunk apply -f tiller-rbac-config.yaml
Now that we have a service account for Tiller to use, we’ll install Helm on our local machine as outlined here.
Then run the following command, making sure you have access to your Kubernetes cluster from the console session you’re using:
helm init --service-account tiller --tiller-namespace splunk
If deployed successfully, you’ll see a Tiller pod in your namespace:
kubectl -n splunk get pods
NAME READY STATUS RESTARTS AGE
indexer-0 1/1 Running 0 2h
indexer-1 1/1 Running 0 2h
indexer-2 1/1 Running 0 2h
master-6d7b98f8f5-nnl9n 1/1 Running 0 2h
search-5944fc8696-hpj5m 1/1 Running 0 2h
splunk-defaults-686b5885f6-k7crh 1/1 Running 0 5h
tiller-deploy-79f76f8b55-96tq4 1/1 Running 0 8s
Now copy this sample values.yaml to your local machine, and ensure the HEC token and indexes match the ones from the ta_containers in Part 2 of this walkthrough!
values.yaml
# global settings
global:
  logLevel: info
  splunk:
    hec:
      protocol: https
      insecureSSL: true
      host: hec
      token: 00000000-0000-0000-0000-000000000000
# local config for logging chart
splunk-kubernetes-logging:
  journalLogPath: /run/log/journal
  splunk:
    hec:
      indexName: cm_events
# local config for objects chart
splunk-kubernetes-objects:
  rbac:
    create: true
  serviceAccount:
    create: true
    name: splunk-kubernetes-objects
  kubernetes:
    insecureSSL: true
  objects:
    core:
      v1:
        - name: pods
          interval: 30s
        - name: namespaces
          interval: 30s
        - name: nodes
          interval: 30s
        - name: services
          interval: 30s
        - name: config_maps
          interval: 30s
        - name: persistent_volumes
          interval: 30s
        - name: service_accounts
          interval: 30s
        - name: persistent_volume_claims
          interval: 30s
        - name: resource_quotas
          interval: 30s
        - name: component_statuses
          interval: 30s
        - name: events
          mode: watch
    apps:
      v1:
        - name: deployments
          interval: 30s
        - name: daemon_sets
          interval: 30s
        - name: replica_sets
          interval: 30s
        - name: stateful_sets
          interval: 30s
  splunk:
    hec:
      indexName: cm_meta
# local config for metrics chart
splunk-kubernetes-metrics:
  rbac:
    create: true
  serviceAccount:
    create: true
    name: splunk-kubernetes-metrics
  splunk:
    hec:
      indexName: cm_metrics
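Before deploying, you can optionally smoke-test the HEC service from inside the cluster. This is just a sketch: the hec service name and all-zeros token are the placeholders from Part 2, so substitute your real token.

```shell
# Send a test event to HEC from a throwaway curl pod; a valid token and
# index should return a {"text":"Success","code":0}-style response
kubectl -n splunk run hec-test --rm -i --restart=Never --image=curlimages/curl -- \
  curl -sk https://hec:8088/services/collector/event \
  -H "Authorization: Splunk 00000000-0000-0000-0000-000000000000" \
  -d '{"event":"hec smoke test","index":"cm_events"}'
```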
Deploy Splunk Connect for Kubernetes with the following command, making sure to point to the values.yaml we just created.
helm install --name kubecon-demo-2018 --tiller-namespace splunk --namespace splunk -f values.yaml https://github.com/splunk/splunk-connect-for-kubernetes/releases/download/v1.0.1/splunk-connect-for-kubernetes-1.0.1.tgz
If successful, you should now see your Splunk Connect for Kubernetes pods running in the splunk namespace:
kubectl -n splunk get pods
NAME READY STATUS RESTARTS AGE
indexer-0 1/1 Running 0 2h
indexer-1 1/1 Running 0 2h
indexer-2 1/1 Running 0 2h
kubecon-demo-2018-splunk-kubernetes-logging-f6n2z 1/1 Running 0 17s
kubecon-demo-2018-splunk-kubernetes-logging-hqrlb 1/1 Running 0 17s
kubecon-demo-2018-splunk-kubernetes-logging-rv78f 1/1 Running 0 17s
kubecon-demo-2018-splunk-kubernetes-metrics-5695b46d58-6fgb4 2/2 Running 0 17s
kubecon-demo-2018-splunk-kubernetes-objects-7f7f47f6f-qkkzg 1/1 Running 0 17s
master-6d7b98f8f5-nnl9n 1/1 Running 0 2h
search-5944fc8696-hpj5m 1/1 Running 0 2h
splunk-defaults-686b5885f6-k7crh 1/1 Running 0 6h
tiller-deploy-79f76f8b55-96tq4 1/1 Running 0 11m
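You can also confirm the release from Helm’s point of view:

```shell
# List releases managed by our Tiller, then show this release's resources
helm ls --tiller-namespace splunk
helm status kubecon-demo-2018 --tiller-namespace splunk
```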
You can also check the pod logs to ensure there are no errors:
kubectl -n splunk logs -f kubecon-demo-2018-splunk-kubernetes-logging-f6n2z
kubectl -n splunk logs -f kubecon-demo-2018-splunk-kubernetes-objects-7f7f47f6f-qkkzg
kubectl -n splunk logs -f kubecon-demo-2018-splunk-kubernetes-metrics-5695b46d58-6fgb4 -c splunk-heapster
If you run into any errors, let us know on the GitHub repo or come join us in the #kubernetes channel on Slack (https://splk.it/slack). On newer Kubernetes deployments in particular, you may need to patch the metrics deployment and ClusterRoleBinding; see this GitHub issue for a workaround.
Otherwise, assuming all went well, we should be able to go back into the master and check that we see our new indexes:
kubectl -n splunk port-forward master-6d7b98f8f5-nnl9n 9999:8000
Success!
Next, let’s port-forward into our search pod and make sure we can search our Kubernetes data.
kubectl -n splunk port-forward search-5944fc8696-hpj5m 9999:8000
Run the following searches and ensure you see results. If you can’t see logs, metrics or metadata, check your configs or come join us in the #kubernetes room on Slack.
Search 1: index=cm_events
Search 2: index=cm_meta
Search 3: | mcatalog values(metric_name) WHERE index=cm_metrics by host
It’s alive!
Now you can install your favorite apps on the standalone Search pod, right through the GUI. From the Splunk Enterprise homepage, press the gear icon in the Apps bar:
Then select Install app from file:
We’ll download the Splunk Metrics Workspace app and install it using the method above, enhancing your ability to explore the Kubernetes metrics coming from your cluster.
As long as your metrics are flowing in, you should see Splunk has discovered your metrics and you should be able to begin exploring!
For those who have registered for Splunk Insights for Containers / Splunk for Container Monitoring (BETA) and received the app, we’ll install it via the search pod’s GUI.
After installing, navigate to the App for Containers BETA and use the Investigate tab to view and filter on dimensions such as namespace and entity_type, then create a group called splunk-pods.
Now, anytime I want to investigate the pods in our splunk deployment from Part 1 & 2 of this series, I can view them as a group, with all metrics and logs and alerts available in one place.
Quite a powerful way to correlate resources across datasets and get started with Kubernetes data! From metrics to application logging to API auditing and more, there are many insights to be gleaned when your data is all in one place.
Time to take a step back and look at what we’ve built!
kubectl -n splunk get pods
NAME READY STATUS RESTARTS AGE
indexer-0 1/1 Running 0 3h
indexer-1 1/1 Running 0 3h
indexer-2 1/1 Running 0 3h
kubecon-demo-2018-splunk-kubernetes-logging-hnghs 1/1 Running 0 6m
kubecon-demo-2018-splunk-kubernetes-logging-p5s8q 1/1 Running 0 6m
kubecon-demo-2018-splunk-kubernetes-logging-xkg57 1/1 Running 0 6m
kubecon-demo-2018-splunk-kubernetes-metrics-dfd4c76c9-946r5 2/2 Running 0 6m
kubecon-demo-2018-splunk-kubernetes-objects-67d8dc7c7-92rgt 1/1 Running 0 6m
master-6d7b98f8f5-nnl9n 1/1 Running 0 3h
search-5944fc8696-hpj5m 1/1 Running 0 3h
splunk-defaults-686b5885f6-k7crh 1/1 Running 0 6h
tiller-deploy-79f76f8b55-96tq4 1/1 Running 0 28m
You’re now the captain of a fully self-contained Splunk demo environment: a Splunk indexer cluster deployed via Kubernetes, indexing all of your Splunk Connect for Kubernetes logs, metadata, and metrics into SmartStore-enabled indexes, all searchable in one place with some of our newest Splunk apps!
Awesome, right?
We’re so excited to see where our passionate user community will go with Splunk & Kubernetes!
Be sure to join the conversation and share your journey in the #kubernetes channel on our Slack (register at splk.it/slack).
Contribute with us on GitHub!
- https://github.com/splunk/docker-splunk
- https://github.com/splunk/splunk-connect-for-kubernetes
- https://github.com/splunk/splunk-ansible
Thanks again for checking out our test scenarios. For more on monitoring your Kubernetes stack, check out our Beginner’s Guide to Kubernetes Monitoring.
And if you’re interested in gaining immediate insight into your Kubernetes stack, including performance metrics for your clusters, pods, containers, and namespaces, as well as logs, metrics, events, and metadata, sign up for Splunk Insights for Containers (BETA) to test it out. Please inquire if you might be a good candidate for this beta program; you must have a Kubernetes deployment. Both existing and new Splunk customers are welcome!
If you are ready to clean it all up, see the following steps to tear down your resources.
Clean Up
Once you’re done exploring, you can clean up all your resources with the following commands:
Remove the Splunk Connect for Kubernetes helm deploy
helm delete kubecon-demo-2018 --tiller-namespace splunk --purge
Remove Tiller
helm reset --tiller-namespace splunk
Delete the HEC service:
kubectl -n splunk delete service hec
Delete the Splunk Cluster
Navigate to the docker-splunk/test_scenarios/kubernetes directory and run:
kubectl -n splunk delete -f 3idx1sh1cm
Delete nginx
Navigate to docker-splunk/test_scenarios/kubernetes/nginx directory and run:
kubectl -n splunk delete -f manifests
Delete your configmaps:
kubectl -n splunk delete cm --all
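A final check helps confirm nothing was left behind. Note that deleting a StatefulSet does not delete the PersistentVolumeClaims it created, so depending on your storage class you may need to remove those explicitly:

```shell
# List anything still living in the splunk namespace
kubectl -n splunk get all,pvc

# PVCs from the indexer StatefulSet survive deletion of the StatefulSet;
# remove them (and, per your reclaim policy, their backing volumes) explicitly
kubectl -n splunk delete pvc --all
```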
Be sure to return to AWS/your storage provider and delete your S3 bucket when complete!
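If your SmartStore bucket lives in S3, one way to remove it is with the AWS CLI. The bucket name below is a placeholder for whatever you created earlier in the series, and --force also deletes the objects inside:

```shell
# Delete the SmartStore bucket and all of its contents
aws s3 rb s3://my-smartstore-bucket --force
```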
----------------------------------------------------
Thanks!
Matthew Modestino