
Introducing Splunk Operator for Kubernetes 2.0

The Splunk Operator for Kubernetes team is extremely pleased to announce the release of version 2.0! This represents the culmination of many months of work by our team and continues to deliver on our commitment to provide a high-quality experience for our customers wishing to deploy Splunk on the Kubernetes platform.

The showcase feature for this release, and the reason we bumped the version to 2.0, is the evolution of our Splunk Operator App Framework. When Splunk Enterprise is deployed in containers on Kubernetes, feeding configuration and content to Splunk looks very different from working with Splunk on, say, bare metal or VMs. Instead of administering Splunk through direct manipulation of the App filesystems, we acquire Apps and configuration externally via S3.
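For concreteness, here is a minimal sketch of what declaring an S3 App source on a Custom Resource might look like. The bucket name, paths, endpoint, and secret are placeholders, and the field names here are illustrative; check the Splunk Operator documentation for the exact schema:

```yaml
apiVersion: enterprise.splunk.com/v2
kind: Standalone
metadata:
  name: example-splunk
spec:
  appRepo:
    appsRepoPollIntervalSeconds: 600
    defaults:
      volumeName: volume_apps
      scope: local
    appSources:
      - name: my-apps
        location: apps/          # prefix within the bucket, placeholder
    volumes:
      - name: volume_apps
        storageType: s3
        provider: aws
        path: my-app-bucket/     # placeholder bucket name
        endpoint: https://s3-us-west-2.amazonaws.com
        secretRef: s3-secret     # placeholder Kubernetes Secret
```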

This is especially relevant when we are dealing with our scale-out features such as Search Head and Indexer Clustering. Acquiring Apps externally allows the Splunk Admin to declaratively define the state of the containerized Splunk deployment from the outside, and it gives us maximum flexibility for integrating Splunk into customer CI/CD pipelines and with version control tools like Git. We arrived at this juncture after several years of observing how customers maintain and deploy Splunk in production environments and talking with members of our Admin community.

Our first evolution of this feature added the ability to acquire configuration and Apps from S3 on a per-Custom-Resource basis. This was very cool as it abstracted away the various content distribution methods we have, such as the Search Head Cluster Deployer and the Cluster Manager. But the delivery of Apps in this iteration suffered from a couple of drawbacks that could be undesirable in Day 2 administration of your containerized Splunk.

First, applying new or updated Apps required that we recycle the pods responsible for content distribution. So, if you were running a single-instance Search Head or needed to update your Indexer Cluster, the Search Head or Cluster Manager would need to be bounced. This could obviously be an impediment to users or disruptive to inbound data.

The second problem was scale: customers with large deployments may have hundreds of Apps per Splunk environment, and we needed a more comprehensive method for installing and updating Apps at that scale.

Enter the new App Framework. We’ve completely overhauled how Apps are acquired and installed in the Splunk Operator for Kubernetes. In this approach, the Splunk Operator pod itself becomes the waypoint for App packages as we detect changes on the S3 bucket. This means we no longer need to use init containers; instead, we rely upon a persistent volume mounted to the Operator pod. While we still use Ansible for much of the automation in setting up the various Splunk components and Custom Resources, we are putting more of the logic for how Apps are handled directly into the Operator. This allows us to better handle failure scenarios and scale, as mentioned above, and it permits us to update Apps without restarting the underlying pod!
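The change-detection idea can be sketched in a few lines. This is a hypothetical illustration of comparing object checksums (such as S3 ETags) between polls of the bucket, not the Operator’s actual internals; the function and variable names are my own:

```python
def detect_app_changes(previous, current):
    """Return (new_or_updated, removed) app package names.

    `previous` and `current` map package name -> ETag/checksum,
    e.g. as gathered from an S3 object listing on each poll.
    """
    # A package is new or updated if its checksum differs from last poll
    # (or it was absent last poll).
    new_or_updated = [name for name, etag in current.items()
                      if previous.get(name) != etag]
    # A package was removed if it no longer appears in the listing.
    removed = [name for name in previous if name not in current]
    return new_or_updated, removed


# Example: since the last poll, app_a was updated, app_c was added,
# and app_b was removed from the bucket.
before = {"app_a.tgz": "etag-1", "app_b.tgz": "etag-2"}
after = {"app_a.tgz": "etag-9", "app_c.tgz": "etag-3"}
changed, removed = detect_app_changes(before, after)
# changed == ["app_a.tgz", "app_c.tgz"], removed == ["app_b.tgz"]
```

Driving installs from a diff like this, rather than from pod restarts, is what lets updated Apps flow to running instances.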

One of the best resources for digging into how all of this works in more detail is a joint session at Splunk .conf 2022 given by Subba Gontla, one of the engineers on the Splunk Operator team, and Tanguy Maltaverne, a customer of ours from Amadeus. In addition to providing more detail on the implementation of our new App Framework, Subba demos a simple integration between GitHub and the Splunk Operator: he uses GitHub Actions to move changes to Splunk configuration to the S3 bucket that the Operator is monitoring. This is another cool byproduct of having integration points that deal with distributed, scaled-out Splunk Enterprise in a more cloud-native way.
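To illustrate the shape of such an integration (this is not the workflow from the session), a GitHub Actions job along these lines could push App packages from a repository to the bucket the Operator watches. The action versions, region, bucket name, and secret names are placeholders:

```yaml
# Hypothetical workflow: on every push to main, sync the apps/ directory
# to the S3 bucket monitored by the Splunk Operator.
name: sync-splunk-apps
on:
  push:
    branches: [main]
jobs:
  upload:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-west-2
      - run: aws s3 sync ./apps s3://my-splunk-apps/apps/
```

From here the Operator’s polling picks up the changed packages with no manual intervention.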

I also wanted to call out another cool session in which a couple of our extremely large customers talk about using the Splunk Operator for Kubernetes as part of their current or future deployment strategy.

Congratulations to the Splunk Engineering and extended team for this milestone!

Posted by

Patrick Ogdin