With Kubernetes emerging as the container orchestration platform of choice for many organizations, monitoring Kubernetes environments is essential to application performance. Kubernetes allows developers to build applications as distributed microservices, introducing new challenges not present in traditional monolithic environments. Understanding a microservices environment means understanding how requests traverse the different layers of the stack and cross multiple services. Modern monitoring tools must observe these interrelated layers while efficiently correlating application and infrastructure behavior to streamline troubleshooting. In the era of cloud computing and as-a-service delivery models, the impact of poor application or infrastructure performance is more significant than ever. With such vast competition, customers are quickly drawn to the solutions that work and perform best.
Here is where Splunk and OpenTelemetry come in. OpenTelemetry is a collection of tools, APIs, and SDKs used to instrument, generate, collect, and export telemetry data (metrics, logs, and traces) to help you analyze your application’s performance and behavior. OpenTelemetry is not an observability back end – that’s where back-end solutions like Splunk, Prometheus, and Jaeger are helpful. These back ends are where your application’s collected telemetry is exported and then reviewed for analysis.
In this post, I will walk through the basic configuration steps to deploy the Splunk OpenTelemetry Collector to gather Kubernetes metrics so you can begin analyzing the performance of your Kubernetes workloads.
How Is My Application’s Telemetry Collected?
To begin collecting your application’s telemetry data and understanding your Kubernetes workloads, you’ll need to deploy the OpenTelemetry Collector. The OpenTelemetry Collector is a vendor-agnostic implementation of how to receive, process, and export telemetry data. It removes the need to run, operate, and maintain multiple agents/collectors and instead provides one collector for all your metrics, traces, and logs to help you understand every aspect of how your Kubernetes workloads and application are performing. Splunk offers its own distribution of the OpenTelemetry Collector, built on the upstream open-source OpenTelemetry Collector core and bundled with Fluentd for log collection, for a more robust experience when using the Splunk Observability back end to analyze your Kubernetes workloads.
How Is the Splunk OpenTelemetry Collector for Kubernetes Deployed?
The Splunk OpenTelemetry Connector for Kubernetes installs the Splunk OpenTelemetry Collector on your Kubernetes cluster and is deployed using a Helm chart. Helm charts are Kubernetes YAML manifests combined into a single package for easy installation of multiple components into your Kubernetes clusters. Once packaged, installing a Helm chart into your cluster is as easy as running a single helm install, which simplifies the deployment of containerized applications. Be sure to install Helm on the host running your Kubernetes cluster before you begin.
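As a quick sketch, the general Helm workflow looks like the following (the repo and release names in angle brackets are placeholders, not real values):

```shell
# Confirm Helm is available on the host managing your cluster
helm version

# The general pattern: register a chart repository, refresh its index,
# then install a chart from it as a named release
helm repo add <repo-name> <repo-url>
helm repo update
helm install <release-name> <repo-name>/<chart-name>
```

The specific repository URL and chart values for the Splunk OpenTelemetry Connector are generated for you by the data setup wizard described below.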
To begin the deployment of the Splunk OpenTelemetry Connector for Kubernetes, log into the Splunk Observability console. Once logged in, navigate to the hamburger menu in the top left-hand corner and click "Data Setup".
In the Connect Your Data window, select Kubernetes and click Add Connection. This presents the data setup wizard, which walks you through the various installation requirements.
For step one, input your custom settings about the cluster into the connection wizard.
- Access Token - The token used to authenticate the integration with Splunk. More on access tokens here.
- Cluster Name - The name used to identify the Kubernetes cluster in Splunk Observability.
- Provider - The cloud provider hosting the Kubernetes cluster. (Use “other” for local on-premises installations)
- Distribution - The Kubernetes distribution type. (Use “other” for local on-premises installations)
- Add Gateway - Assigns a gateway to run on one node. We recommend enabling this if your cluster is larger than 25 hosts, as a gateway will improve performance in this scenario.
For step two, the data setup wizard presents the steps necessary to install the Splunk OpenTelemetry Connector using Helm, based on the details you entered about your Kubernetes cluster. The installation begins by adding and updating the Helm chart repository. Once that completes, use Helm to install the Splunk OpenTelemetry Connector for Kubernetes. Simply copy the code in each section to complete the installation.
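The wizard-generated commands typically look like the sketch below. The repository URL is the chart’s public repo; the access token, realm, and cluster name placeholders stand in for the values you entered in step one, and the exact `--set` keys can vary by chart version, so always prefer the commands the wizard outputs:

```shell
# Add and refresh the Splunk OpenTelemetry Collector chart repository
helm repo add splunk-otel-collector-chart https://signalfx.github.io/splunk-otel-collector-chart
helm repo update

# Install the chart, substituting the values from step one of the wizard
# (<ACCESS_TOKEN>, <REALM>, and <CLUSTER_NAME> are placeholders)
helm install splunk-otel-collector \
  --set="splunkObservability.accessToken=<ACCESS_TOKEN>" \
  --set="splunkObservability.realm=<REALM>" \
  --set="clusterName=<CLUSTER_NAME>" \
  splunk-otel-collector-chart/splunk-otel-collector
```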
To confirm the installation was successful, run kubectl get pods on your Kubernetes cluster to list all of its pods. The output will show that both the collector agent and the collector receiver have been deployed in your cluster.
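For example (pod names, counts, and ages are illustrative and will differ in your cluster):

```shell
# List the pods in the current namespace; the collector agent runs as a
# DaemonSet, so expect one agent pod per node in the cluster
kubectl get pods

# Illustrative output:
# NAME                                    READY   STATUS    RESTARTS   AGE
# splunk-otel-collector-agent-7xkqj       1/1     Running   0          2m
# splunk-otel-collector-5c9b8f6d4-m2wpt   1/1     Running   0          2m
```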
After about 90 seconds, metrics from your cluster begin to populate in Splunk Observability Cloud. To verify, navigate to the infrastructure dashboard by clicking the hamburger menu and selecting Infrastructure.
Click on Kubernetes found under the Containers section of the dashboard.
The dashboard now shows the cluster map presenting you with all nodes and pods in your environment.
Now that the Splunk OpenTelemetry Collector is exporting metrics from your Kubernetes cluster to Splunk Observability Cloud, we can use the collected metrics to identify any infrastructure issues affecting our Kubernetes workloads, and we unlock the ability to collect data from applications that have been instrumented with OpenTelemetry.
Kubernetes has changed how applications are deployed, bringing new challenges to the table. Understanding how your Kubernetes workloads are performing has never been so important. I hope that this walkthrough helped you become more successful in your Kubernetes journey.
Want to try working with deploying the Splunk OpenTelemetry collector using Kubernetes yourself? You can sign up to start a free trial of the suite of products – from Infrastructure Monitoring and APM to Real User Monitoring and Log Observer. Get a real-time view of your infrastructure and start solving problems with your microservices faster today. If you’re an existing customer who wants to learn more about OpenTelemetry setup, check out our documentation.