As part of the Cloud Adoption team, I work with Splunk Cloud (and Splunk Enterprise) customers on a daily basis, and I frequently get asked how to optimize, and effectively reduce, administration overhead. This becomes especially relevant when I am talking with new or relatively new customers that are expanding from a handful of forwarders to hundreds or thousands of them. And I always say: start with a Deployment Server.
For larger customers that have trained and experienced Splunk Administrators, or have engaged with Professional Services, this is a given and typically already exists in their deployments.
For everyone else, this article is for you.
I won’t go into full detail on how and why this works, but I will outline what configurations are needed and how this will scale, based on my field experience and what our best practices outline. The configurations here are based upon Splunk’s Professional Services Base Configurations toolset.
This outlines how to configure a DS to deploy apps on your local network. From an architecture point of view, the Cloud Forwarder App contains the configs that send your data to your Splunk Cloud instance. This could be interchanged with an app that forwards to on-premises indexers or an HF/UF aggregation tier, but that’s a different discussion…
Let’s get some terminology out of the way…
Deployment Server (DS) – A Splunk Enterprise instance that acts as a centralized configuration manager. It deploys configuration updates to other instances. Also refers to the overall configuration update facility comprising deployment server, clients, and apps.
Deployment Client – A remotely configured Splunk Enterprise instance. It receives updates from the deployment server. Typically these are Splunk Universal Forwarders or Heavy Forwarders.
Server Class – A deployment configuration category shared by a group of deployment clients. A deployment client can belong to multiple server classes.
Deployment App – A unit of content deployed to the members of one or more server classes.
So let’s dig in!
First off, we need a dedicated Splunk Heavy Forwarder (HF/HWF) instance that will be the DS. This instance should be configured and already sending its data to your Splunk Cloud instance, and this document assumes this is installed in /opt/splunk.
Here, a virtual machine is more than sufficient, and preferred. But follow the recommended spec for this: 4 cores, 8 GB of RAM, and sufficient disk space to handle your deployment apps. (Typically 50 GB is more than enough!) Additionally, while not required, a 64-bit Linux host is ideal and you will get the most mileage out of it.
This server also needs to be placed on the network in such a way that all the hosts can communicate with it. This means that firewalls will need to be opened up for the Splunk Management Port to the DS host (TCP:8089 by default) or multiple DS’s deployed.
Additionally, we need our “Apps”.
In this article we will deploy three apps: Splunk_TA_nix, “100_demostack_splunkcloud” from our Splunk Cloud stack, and org_deployment_client. (More on this one later!)
These apps all need to be placed in the /opt/splunk/etc/deployment-apps/ directory. Once they are placed there, they will be visible in the Splunk Web interface, on the Forwarder Management page.
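With the three example apps in place, the directory would look roughly like this (app contents omitted):

```
/opt/splunk/etc/deployment-apps/
├── 100_demostack_splunkcloud/
├── Splunk_TA_nix/
└── org_deployment_client/
```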
From here, we are able to build our Server Classes. To do this, we want to consider our Deployment Topology. In a nutshell, a DS can filter based on hostname, IP address, or machine type. So we have a few options for deploying to all of our Clients.
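To get a feel for how hostname filtering behaves, here is a small illustrative sketch: server class whitelists use glob-style patterns, which match much like Python’s fnmatch. The function and host names here are made up for illustration and are not part of Splunk.

```python
# Illustrative sketch only: Splunk serverclass whitelists use glob-style
# patterns, which behave much like Python's fnmatch. Names are made up.
from fnmatch import fnmatch

def matches_serverclass(hostname, whitelist):
    # A host belongs to the server class if ANY whitelist pattern matches it.
    return any(fnmatch(hostname, pattern) for pattern in whitelist)

nix_whitelist = ["nix-*", "ubuntu*"]
print(matches_serverclass("nix-web01", nix_whitelist))    # True
print(matches_serverclass("ubuntu-db02", nix_whitelist))  # True
print(matches_serverclass("win-dc01", nix_whitelist))     # False
```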
Now we will setup our Server Classes..
First we setup a Server Class for All Clients. We are going to call this “All_Hosts”.
Once we create this, we can add Apps and Clients to the Server Class.
Let’s add our org_deployment_client and 100_demostack_splunkcloud Apps to the All_Hosts serverclass.
And next, we need to add clients. At this point, there are no clients connecting to this DS. However, since this class is for all clients, we add an include whitelist of ‘*’.
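Behind the scenes, what the Forwarder Management UI writes to serverclass.conf on the DS looks roughly like this (based on the class and app names we chose above):

```
[serverClass:All_Hosts]
whitelist.0 = *

[serverClass:All_Hosts:app:org_deployment_client]
[serverClass:All_Hosts:app:100_demostack_splunkcloud]
```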
Next, repeat the creation of a server class, but with the Splunk_TA_nix app added. For filtering, you cannot filter on machine type until a client has connected, so you need to filter on machine name or IP address initially. In this example, I created a filter for host names of “nix-*, ubuntu*”.
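Again, the UI is just writing serverclass.conf entries for us. Assuming we call this class “Nix_Hosts” (the name is my choice, not a requirement), the result looks roughly like:

```
[serverClass:Nix_Hosts]
whitelist.0 = nix-*
whitelist.1 = ubuntu*

[serverClass:Nix_Hosts:app:Splunk_TA_nix]
```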
Once this is done, your DS is ready and awaiting clients to connect!
Previously I mentioned the “org_deployment_client” app. Let’s revisit this now.
Typically, to configure a client to connect to a DS, we either add it through the CLI (via splunk set deploy-poll servername.mydomain.com:8089) or we edit the deploymentclient.conf file in /opt/splunk/etc/system/local and restart.
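For reference, those two traditional methods look like this (the server name is a placeholder for your own DS):

```
# Option 1: from $SPLUNK_HOME/bin on the forwarder, then restart
./splunk set deploy-poll servername.mydomain.com:8089

# Option 2: edit /opt/splunk/etc/system/local/deploymentclient.conf
[target-broker:deploymentServer]
targetUri = servername.mydomain.com:8089
```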
That’s fine! It works… BUT… it is local. Once you put it there, you have to manually change it (or, if you’re lucky, automate it). But I digress.
From the start, let’s make an app that connects to the DS. Here’s where the “org_deployment_client” app comes into play.
Taken from the Splunk PS Base Configs, here is the deploymentclient.conf template:

[deployment-client]
# Set the phoneHome at the end of the PS engagement
# 10 minutes
# phoneHomeIntervalInSecs = 600

[target-broker:deploymentServer]
# Change the targetUri
targetUri = deploymentserver.splunk.mycompany.com:8089
As you can guess, we update the targetUri to point to the address and management port of our DS. It’s highly recommended to use a DNS name for this, not an IP address. And as of 6.3, this can also be a load balancer (finally… woot!!).
Now, the most difficult part: the org_deployment_client app needs to be deployed to all our UFs at install time, or after deployment. This gives us the ability to change the targetUri and phoneHomeIntervalInSecs in the future without having to touch every forwarder! There are many ways to accomplish this: some use git/mercurial/cvs or scripts to deliver it, some build custom install packages that include it automatically, and others manually deploy it after installation. However you want to do it, do it!
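As one example of scripting the delivery, here is a minimal sketch that stages the org_deployment_client app locally so it can be packaged into your UF install media or pushed by your tool of choice. The DS hostname is a placeholder; adjust paths and names to your environment.

```shell
# Minimal sketch: stage org_deployment_client for packaging/distribution.
# The targetUri hostname below is a placeholder for your own DS.
APP=org_deployment_client
mkdir -p "$APP/default"
cat > "$APP/default/deploymentclient.conf" <<'EOF'
[deployment-client]

[target-broker:deploymentServer]
targetUri = deploymentserver.splunk.mycompany.com:8089
EOF
# Package it; unpack this into $SPLUNK_HOME/etc/apps/ on each forwarder.
tar -czf "$APP.tgz" "$APP"
```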
Back on track: once this is deployed, we install our clients (with the org_deployment_client app). In this case, I don’t have the apps configured to restart Splunk once they are downloaded from the DS, so a manual restart is required. Afterwards, we can check the Forwarder Management GUI and confirm our hosts and the apps deployed.
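If you prefer the CLI over the GUI, the DS can also list the clients that have phoned home (run this on the DS itself):

```
# From $SPLUNK_HOME/bin on the Deployment Server
./splunk list deploy-clients
```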
From here, we have our hosts sending their logs to Splunk Cloud, including data from enabled TAs and modular inputs.
There are “gotchas”… please don’t do these!
Here are a few things to take into consideration, and not to do.
1) Search Head Cluster (SHC) members – These cannot be managed by a DS; the Deployer node handles this functionality
2) Indexer Cluster members – These cannot be managed by a DS; the Cluster Master node handles deployment of configurations
3) Using automation (Puppet / Chef / Ansible, etc.) – Be careful when using these in conjunction with the DS; configs can disappear and things can break…
4) Test your serverclasses.conf changes in a DEV environment!!
5) Standardize on a naming convention for your Server Classes and app names. Here I used org_deployment_client, but for your company it might be something like mycompany_deploymentclient_securelan or mycompany_deploymentclient_dmz1.
There are a lot of features and functionality available in the Deployment Server that I didn’t cover here. Our Education team does a wonderful job of teaching this, and Splunk PS can also spend a wonderful amount of time going over the different features of the DS and how to get it to scale. Please reach out if you want to learn more!
Further reading:
Capacity Planning Manual for Splunk Enterprise
Updating Splunk Enterprise Instances – Deployment server architecture
Updating Splunk Enterprise Instances – Plan a deployment
Updating Splunk Enterprise Instances – Configure deployment clients
Eric Six and Dennis Bourg