Best Practices for Using Splunk Workload Management

Workload management is a powerful Splunk Enterprise feature that lets you assign system resources to Splunk workloads based on business priorities. In this blog, I will describe four best practices for using workload management. If you want to refresh your knowledge of this feature or the use cases it solves, please read through our recent series of workload management blogs: part 1, part 2, and part 3.

Allocate Resources to Workload Categories Appropriately

Splunk workloads fall into three categories: Ingest, Search, and Misc. The processes that run in each category are assigned by default and cannot be changed. Core system processes and the data ingestion workload run in the Ingest category. All searches run in the Search category. Scripted and modular inputs run in the Misc category.

We recommend the following resource allocation for each category. Because Splunk core processes run in the Ingest category, set its memory limit to 100%. The resource allocation can differ between indexers and search heads if they have vastly different CPU and memory resources or if the ingestion rate is high.


The Misc category is optional to configure. You may want to use it if you have many modular or scripted inputs and want to isolate them from the rest of the workloads. If you are using Splunk Cloud, each resource category is pre-allocated by default and cannot be altered.
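On Splunk Enterprise, category-level allocation is defined in workload_pools.conf (or through the workload management pages in Splunk Web). The snippet below is only a sketch of what a split along these lines might look like; the weights are illustrative numbers rather than a recommendation, and you should verify the exact attribute names against the workload_pools.conf spec for your version.

```
# workload_pools.conf -- illustrative category allocation (weights are examples only)
[workload_category:ingest]
cpu_weight = 20
mem_weight = 100   # Ingest also hosts core Splunk processes, so give it full memory headroom

[workload_category:search]
cpu_weight = 70
mem_weight = 70

[workload_category:misc]
cpu_weight = 10
mem_weight = 30    # only relevant if you isolate scripted/modular inputs here
```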

Ensure the High Priority Search Pool Is Not Overused

The Search category can be further divided into multiple pools. If you are creating a workload pool for high priority searches, allocate 60-70% of the CPU resources to it. Memory can be shared across all search pools. Below is a typical resource allocation. Be very selective in assigning searches to your high priority pool: we recommend assigning no more than 10-20% of your total search volume to it, otherwise it loses its 'high priority' nature.
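Along those lines, a sketch of such a split in workload_pools.conf might look like the following. The pool names and weights are hypothetical, and one pool per category must be marked as the default.

```
# workload_pools.conf -- illustrative search pools (names and weights are examples only)
[workload_pool:standard_perf]
category = search
cpu_weight = 30
default_category_pool = true   # searches that match no rule run here

[workload_pool:high_perf]
category = search
cpu_weight = 70                # roughly 60-70% of the Search category's CPU share
```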

If you are using Splunk Cloud, three search pools are automatically configured for you to use and cannot be altered.

Think Through Mixed Deployments Carefully

Each search head or search head cluster enforces workload management independently, which means workload pools and rules are defined and applied separately on each search head cluster. You need to plan carefully, however, if multiple search head clusters share the same indexer cluster.

On a search head, a search starts in the workload pool specified by the workload rules. On the indexers, the search looks for a pool with the same name as the one specified on the search head. If that pool does not exist on the indexers, the search runs in the indexers' default search pool. The example below shows how searches placed in different workload pools on the search heads map to workload pools on the indexers; default search pools are denoted by the suffix (d). Because AdhocPool does not exist on the IDX cluster, any search placed in that pool on SHC2 runs in the Standard pool (the default) on IDX.
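To make the fallback concrete, here is a hedged sketch of how the search pool definitions might differ between SHC2 and the IDX cluster in this example; the stanzas are illustrative and simply reuse the pool names from the example above.

```
# workload_pools.conf on SHC2 (illustrative)
[workload_pool:Standard]
category = search
default_category_pool = true

[workload_pool:AdhocPool]
category = search              # defined only on the search head cluster

# workload_pools.conf on the IDX cluster (illustrative)
[workload_pool:Standard]
category = search
default_category_pool = true   # AdhocPool is not defined here, so searches placed in
                               # AdhocPool on SHC2 fall back to Standard on the indexers
```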

Onboard One Use Case at a Time

As a best practice to get started with workload management, begin with a single use case. Generally, the first use case falls into one of two buckets:

  1. High priority search execution — searches from certain users or groups need to be placed in a high priority pool.
  2. Low priority search isolation — certain types of searches need to be isolated in a limited resource pool.

Write simple workload rules to achieve your first use case, then verify that the behavior matches your expectations before implementing other use cases. Keeping the workload rules simple also helps with troubleshooting later; a sketch of such starter rules follows below.
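Here is a rough sketch of what starter rules for these two buckets could look like in workload_rules.conf. The rule names, the exec_users role, and the pool names are hypothetical, and the exact stanza and predicate fields should be checked against the workload_rules.conf spec for your Splunk version.

```
# workload_rules.conf -- illustrative starter rules (names and predicates are examples only)
[high_priority_execs]
predicate = role=exec_users             # hypothetical role whose searches get priority
workload_pool = high_perf

[isolate_alltime_searches]
predicate = search_time_range=alltime   # expensive all-time searches go to a limited pool
workload_pool = low_perf
```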

Follow the best practices above to configure workload management correctly and quickly extract value from your data with this feature.

For formal training on workload management, please join this course.
