Deployment Server Goodies

How would you answer the following:

  • Change looks good to me. (true/false)
  • Change looks good on systems. (true/false)

I’m not sure which is easier to answer, especially since it depends on the change. Yet no matter what the change looks like, sometimes it’s not the change itself, but the rate of change, that is more difficult to manage, and that rate differs in every environment. I’ve seen deployment changes happen anywhere from very rarely to dozens of times a day. Whatever the shape and rate of change in your environment, did you know Splunk can help manage it?

Splunk’s Deployment Server can shepherd the menagerie of its own configuration files. If you are already using Puppet or Chef or some other tool of choice, this discussion is likely not relevant to you. If you are planning to implement Deployment Server or want to improve your current setup, however, let’s roll. After all, it’s that time of the year to work with change (the kind that looks good, of course).

Please note: the intent of this post is not to extol the virtues of Deployment Server over any other tool for managing config file changes. We at Splunk do not feel strongly if you do or you don’t use Deployment Server. Licensing does not change as the Deployment Server is a core Splunk feature. And we understand there are plenty of considerations to weigh when choosing an update tool, the most obvious being you already have a change control process with associated tool. For the souls who have decided to use Splunk’s Deployment Server, the intent is to present gotchas and tips along these lines:

  • Designing Deployment Apps
  • Apps by Environment
  • Where to Place Configuration
  • Using Default vs. Local
  • Extra Considerations for Search Head Pooling
  • Must Reload

As a sideline, if you are curious about why customers decide to use Deployment Server, here are reasons cited by clients I’ve worked with:

  • A change propagation tool does not currently exist in the environment.
  • Deployment Server does not require the creation of code as it’s driven off the same Splunk configuration model.
  • Propagating change to major platforms is supported (use it with any platform where you can install Splunk).
  • Pushing changes is lightweight vs. the usual change controls involving trouble tickets, change windows and long lag times.

What is the Deployment Server?

Chances are if you are considering Deployment Server or have already implemented it, you’ve already trawled Splunk documentation on what it is and how to set it up. If you have not discovered this gem, it contains all the basics on laying the foundation for Deployment Server and deployment clients. There is also a great Splunk Wiki write-up covering more configuration and even troubleshooting. Start with the docs and wiki if you haven’t already.

Sizing It Up

It is possible to have any instance of Splunk serve double duty as a Deployment Server. Theoretically this should work fine. Theoretically, CERN’s Large Hadron Collider should not open a black hole. In practice, it has proven problematic in some environments to have a shared Search Head-Deployment Server or Indexer-Deployment Server. If possible, just go with a standalone Deployment Server.

Since it will do very little or no indexing, a Deployment Server can be installed on a small physical server or a VM. Something like a 2 CPU/vCPU system will do. It does not need to be super fast. If you are using an OS where you can set the ulimit, after you are given the keys to the system, estimate the number of deployment clients you will need to support, then set the ulimit to four times the number of deployment clients. More details on setting ulimit here. This is mildly important, as an undersized ulimit combined with hundreds of deployment clients checking in at the same time will bring down the Deployment Server.
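As a quick sketch of that sizing rule (the client count here is a made-up example):

```shell
#!/bin/sh
# Hypothetical sizing: suppose ~500 deployment clients will check in.
CLIENTS=500
FDS=$((4 * CLIENTS))   # rule of thumb: 4 x number of deployment clients
echo "target ulimit -n: $FDS"
# Raise the soft limit for this shell; to make it permanent on Linux,
# set nofile in /etc/security/limits.conf for the user running Splunk.
ulimit -n "$FDS" 2>/dev/null || echo "could not raise limit; check 'ulimit -Hn'"
```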

Additionally, installing the default Splunk distribution will get you an indexer. This means Splunk will generate and index the usual internal metrics on data throughput and license auditing. If you don’t have the storage capacity or don’t want to manage the Deployment Server as a standalone entity, then simply enable forwarding on your Deployment Server and have the internal metrics whisked off to the main Splunk indexing cluster.
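A minimal outputs.conf along those lines might look like this (the group and indexer names are hypothetical):

```
# outputs.conf on the Deployment Server
[tcpout]
defaultGroup = primary_indexers
indexAndForward = false

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
```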

If the forwarding option is not preferred, then be sure to jigger the retention policy and/or ensure there is enough storage to accommodate the _internal and history indexes. The _internal index is currently set to retain data for 28 days and the history index for 7 days. Don’t worry: if you forget to do this, Splunk will eventually send a message that it has stopped indexing because disk space is low. If you prefer this second option, you should also have the Deployment Server draw its license usage from the License Manager.

Finally, consider changing the interval at which deployment clients phone home to something north of the 30 second default. Unless you anticipate a lot of immediate changes downstream, most of the time a phone home interval in minutes is acceptable. This value is set low initially so you can see the effects of installing a new deployment client right away. We don’t want to leave the uninitiated wondering why a client is not syncing new content. For the more experienced admin, a higher phone home interval will ease the chattiness of server-client communication.

A higher phone home interval will also allow for a higher deployment client:deployment server ratio. The recommended ratio is 300 clients per deployment server. Adjusting the phone home interval up can scale this comfortably to 1000:1 or even higher.
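The interval lives in deploymentclient.conf; for example, bumping the 30-second default to 10 minutes (the value here is illustrative):

```
[deployment-client]
phoneHomeIntervalInSecs = 600
```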

Designing Deployment Apps

Beyond the basics, as with many things Splunk, there are some structural choices which can minimize risk by minimizing change when config is pushed to deployment clients. Therefore, designing deployment apps to corral changes as much as possible will only help. Plan apps as you would plan for weathering the microclimates of San Francisco–in layers.

  1. Start with a base deployment app. This app will be pushed to all deployment clients and contain just the basics–where to find the Deployment Server and how often to phone home.
  2. Next, layer by Splunk function. Design a base deployment app for Indexers, Search Heads, Universal Forwarders, others. This will ensure all Splunk instances with the same function receive a minimum bootstrap configuration.
  3. Finally, add server/application/environment specific deployment apps. Let’s face it, this is where most configuration changes will happen. Whether you need to add a new data source, update a Splunk App or roll out knowledge objects, logical groupings here will help maintain order and localize change to only the systems which need it. This increases in importance in a distributed Splunk implementation.

In general, this is the proposed structure:

Here’s a sample of the defined deployment apps for a simple J2EE environment like our Flower Shop demo:
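One way the layering might look on the deployment server (all app names are hypothetical):

```
$SPLUNK_HOME/etc/deployment-apps/
    Base_DeploymentClient/       # layer 1: pushed to every client
    Base_Indexer/                # layer 2: one base app per Splunk function
    Base_SearchHead/
    Base_Forwarder/
    FlowerShop_Web_Inputs/       # layer 3: server/application/environment specific
    FlowerShop_DB_Inputs/
    FlowerShop_SearchKnowledge/
```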

Developing an app naming convention for each of the tiers and segregating configuration by Splunk function can help with identifying systems which need a particular change.

Of course, you could simply create a single Flower Shop deployment app containing all manner of config for the Flower Shop. This lone app would be blind to whether it is deployed to a search head, indexer or forwarder; you would simply deploy it to all tiers when making a change. This saves you from having to understand what configuration goes where (see Where Do I Configure My Splunk Settings for more details). I don’t blame you, as this can be unintuitive and confusing on a good day.

The disadvantage is that .conf files will be propagated to places where the settings have no effect. This can lead to unexpected conflicts which are difficult to troubleshoot. It also expands the scope of change: whenever something is adjusted in the app, it will need to go out to all indexers/search heads/forwarders where the app currently resides… unless you are okay with having different versions of the same app deployed… and who would be okay with that??

Apps by Environment

Creating a server class for environment/OS-specific configuration can be done via serverclass.conf using the handy machineTypes parameter. This is especially useful if you want to enable the *Nix App or the Windows App, which are included in the default Splunk distributions for these platforms. For example (the server class names are arbitrary):

[serverClass:windows_hosts]
# Deploy this app only to Windows boxes.
machineTypes=windows-x64, windows-intel

[serverClass:unix_hosts]
# Deploy this app only to unix boxes - 32/64 bit.
machineTypes=linux-i686, linux-x86_64

Why the Heck Do We Do That?

Do you mean, why the heck do we write stuff to etc/system/local? Out of dementia is my best guess. It’s really not clear to me why we persist the deployment client’s settings to etc/system/local, but it’s poor practice. Hopefully this will change in future releases, but in the meantime, please be aware of this and work around it.

Why should you care? At first, it might seem that the location of these config settings is not significant, but as many small businesses know, it’s about location, location, location. When setting up a deployment client for the first time, if you use the CLI in the installation script (as is commonly done), the Deployment Server URI will be written to etc/system/local/deploymentclient.conf. If you specify a phone home interval at the same time, it is also written to the same file in the same location. Because config in etc/system/local takes precedence over all others, if any of these scenarios present themselves, pain will come along for the ride:

  • the Deployment Server ever decides on a change of scenery by relocating to a different server/VM/data center,
  • you need to adjust the phone home interval,
  • you want to add a Deployment Server instance and need to reroute a subset of the deployment clients.

It is not possible to override the initial settings of the Deployment Server URI or the phone home interval using an app propagated by the Deployment Server. Every app in etc/apps has a lower precedence than anything in etc/system/local. To realize any of the above scenarios, you must manually edit etc/system/local/deploymentclient.conf, remove it, or move it into an app. Since you’ve deployed hundreds or thousands of clients to systems you may or may not govern directly, let’s imagine the glamour and magnitude of this task for a second… *gulp*. More on configuration precedence here.

What’s an IT Superhero to do? When configuring deployment clients for the first time, instead of using the CLI, drop a standard deploymentclient.conf in an app that will be managed by the Deployment Server. For example, write deploymentclient.conf in etc/apps/Base_DeploymentClient/default.

Here is a sample shell script which demonstrates installing deploymentclient.conf outside etc/system/local to an app which can then be managed by Deployment Server should any changes be required down the road.
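A minimal sketch of such a script follows; the app name, phone home interval, deployment server URI, and the SPLUNK_HOME default are all placeholders to adjust for your environment.

```shell
#!/bin/sh
# Sketch only: write deploymentclient.conf into an app instead of
# etc/system/local, so the Deployment Server can manage it later.
SPLUNK_HOME="${SPLUNK_HOME:-/tmp/splunkforwarder}"   # e.g. /opt/splunkforwarder
APP_DEFAULT="$SPLUNK_HOME/etc/apps/Base_DeploymentClient/default"

mkdir -p "$APP_DEFAULT"
cat > "$APP_DEFAULT/deploymentclient.conf" <<'EOF'
[deployment-client]
phoneHomeIntervalInSecs = 600

[target-broker:deploymentServer]
targetUri = deploymentserver.example.com:8089
EOF

# Restart so the client starts phoning home (skipped when Splunk is not
# actually installed at this SPLUNK_HOME).
if [ -x "$SPLUNK_HOME/bin/splunk" ]; then
    "$SPLUNK_HOME/bin/splunk" restart
fi
```

Because the file lives under the app’s default folder, a later push from the Deployment Server can override it, which is exactly the flexibility etc/system/local denies you.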

Battle Default vs. Local

When creating a deployment app (or any Splunk app), configuration files need to live under one of two subfolders: default or local. Functionally, in all likelihood it will make little difference whether default or local is used, unless you are planning to share the app outside your own Splunk deployment. In that case, the configuration should live in default so that any would-be recipients can override default settings using the local subfolder. If you are not planning to release the app, then battle it out between default and local and stick to your decision.

Extra Considerations for Search Head Pooling

For the brave employing search head pooling, there are a few special twists and turns to get your pool working with Deployment Server.

  • Designate only one of the search heads to be the deployment client. Since all search heads in the pool retrieve their configuration from the same repository it is not necessary for all of them to receive updates to this repository from the Deployment Server. It is actually quite dangerous as the deployment clients will become vulnerable to race conditions.
  • The targetRepositoryLocation for the search head class in the serverclass.conf must be set to the shared mount point for the pool.
  • Set serverRepositoryLocationPolicy in deploymentclient.conf so that your target repository location will be honored appropriately.

    From deploymentclient.conf.spec:

    serverRepositoryLocationPolicy = [acceptSplunkHome|acceptAlways|rejectAlways]
    * Defaults to acceptSplunkHome.
    * acceptSplunkHome – accept the repositoryLocation supplied by the deployment server, only if it is rooted by $SPLUNK_HOME.
    * acceptAlways – always accept the repositoryLocation supplied by the deployment server.
    * rejectAlways – reject the server supplied value and use the repositoryLocation specified in the local deploymentclient.conf.

Regarding the last point, setting the server repository location policy is a must if the target repository is not under SPLUNK_HOME. This is often the case, as mount points generally start at /mnt. In this case, set the policy to acceptAlways. Alternatively, you can set it to rejectAlways, and the repository location specified locally on the deployment client will be honored. If you overlook this, you may be left wondering why the search head deployment client does not phone home and update itself.
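Putting those pooling points together, the relevant stanzas might look like this (the class name, host name, and mount point are hypothetical):

```
# serverclass.conf on the deployment server
[serverClass:pooled_searchheads]
whitelist.0 = searchhead01.example.com
targetRepositoryLocation = /mnt/shpool/etc/apps

# deploymentclient.conf on the one designated search head client
[deployment-client]
serverRepositoryLocationPolicy = acceptAlways
```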

Here’s an excellent example of configuration provided by one of our stellar Solution Architects, Terry Griffey.

Some of this is covered in the Splunk Documentation, but perhaps not where you would expect. It lives with the Search Head Pooling docs (not Deployment Server docs).

Oh, One Last Thing (This is a Loooong Post!)

Deployment Server uses a polling model, so you will be waiting a long time if you expect changes to be auto-detected and deployment clients to get their marching orders. When updating serverclass.conf or making changes to deployment apps, remember to run the reload command on the CLI (optionally scoped to a single server class with -class) to notify the Splunk Universe there’s change to propagate:

# splunk reload deploy-server
# splunk reload deploy-server -class <serverClassName>


If Deployment Server could dish up changes in the real world, 2012 could be a much more orderly year. Happy Changes to your Splunk config files, and here’s to health and happiness in the new year!

Vi Ly
