Splunk Cheat Sheet: Query, SPL, RegEx, & Commands
This Splunk Quick Reference Guide describes key concepts and features, SPL (Search Processing Language) basics, and commonly used commands and functions for Splunk Cloud and Splunk Enterprise.
Concepts
Events
An event is a set of values associated with a timestamp. It is a single entry of data and can have one or multiple lines. An event can be a text document, a configuration file, an entire stack trace, and so on. This is an example of an event in a web activity log:
173.26.34.223 - - [01/Mar/2021:12:05:27 -0700] "GET /trade/app?action=logout HTTP/1.1" 200 2953
You can also define transactions to search for and group together events that are conceptually related but span a duration of time. Transactions can represent a multistep business-related activity, such as all events related to a single customer session on a retail website.
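For example, a minimal sketch that groups web events into per-session transactions, assuming the events carry a session ID field named JSESSIONID (a hypothetical field name here):
sourcetype=access_combined | transaction JSESSIONID maxspan=30m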
Metrics
A metric data point consists of a timestamp and one or more measurements. It can also contain dimensions. A measurement is a metric name and corresponding numeric value. Dimensions provide additional information about the measurements. Sample metric data point:
Timestamp: 08-05-2020 16:26:42.025-0700
Measurement: metric_name:os.cpu.user=42.12, metric_name:max.size.kb=345
Dimensions: hq=us-west-1, group=queue, name=azd
Metric data points and events can be searched and correlated together, but are stored in separate types of indexes.
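Metric data points are searched with the mstats command rather than the regular event pipeline. A minimal sketch, assuming a metric index named my_metrics (a hypothetical name):
| mstats avg(os.cpu.user) WHERE index=my_metrics span=1m BY hq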
Host, Source, and Source Type
A host is the name of the physical or virtual device where an event originates. It can be used to find all data originating from a specific device. A source is the name of the file, directory, data stream, or other input from which a particular event originates. Sources are classified into source types, which can be either well-known formats or formats defined by the user. Some common source types are HTTP web server logs and Windows event logs.
Events with the same source type can come from different sources. For example, events from the file source=/var/log/messages and from a syslog input port source=UDP:514 often share the source type sourcetype=linux_syslog.
Fields
Fields are searchable name and value pairings that distinguish one event from another. Not all events have the same fields and field values. Using fields, you can write tailored searches to retrieve the specific events that you want. When Splunk software processes events at index-time and search-time, the software extracts fields based on configuration file definitions and user-defined patterns.
Use the Field Extractor tool to automatically generate and validate field extractions at search time using regular expressions or delimiters such as spaces, commas, or other characters.
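You can also extract fields inline with the rex search command. A minimal sketch, assuming events contain text such as user=alice (a hypothetical pattern):
sourcetype=linux_syslog | rex "user=(?P<user>\w+)" | top user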
Tags
A tag is a knowledge object that enables you to search for events that contain particular field values. You can assign one or more tags to any field/value combination, including event types, hosts, sources, and source types. Use tags to group related field values together, or to track abstract field values such as IP addresses or ID numbers by giving them more descriptive names.
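Tags are searched by name. For example, assuming a tag named authentication has been assigned to the relevant field values (a hypothetical tag name), this search retrieves tagged events that also contain the term "failed":
tag=authentication failed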
Index-Time and Search-Time
During index-time processing, data is read from a source on a host and is classified into a source type. Timestamps are extracted, and the data is parsed into individual events. Line-breaking rules are applied to segment the events to display in the search results. Each event is written to an index on disk, where the event is later retrieved with a search request.
When a search starts, referred to as search-time, indexed events are retrieved from disk. Fields are extracted from the raw text for the event.
Indexes
When data is added, Splunk software parses the data into individual events, extracts the timestamp, applies line-breaking rules, and stores the events in an index. You can create new indexes for different inputs. By default, data is stored in the “main” index. Events are retrieved from one or more indexes during a search.
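For example, assuming an index named web exists, this search retrieves events only from that index:
index=web sourcetype=access_combined error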
Core Features
Search
Search is the primary way users navigate data in Splunk software. You can write a search to retrieve events from an index, use statistical commands to calculate metrics and generate reports, search for specific conditions within a rolling time window, identify patterns in your data, predict future trends, and so on. You transform the events using the Splunk Search Processing Language (SPL™). Searches can be saved as reports and used to power dashboards.
Reports
Reports are saved searches. You can run reports on an ad hoc basis, schedule reports to run on a regular interval, or set a scheduled report to generate alerts when the results meet particular conditions. Reports can be added to dashboards as dashboard panels.
Dashboards
Dashboards are made up of panels that contain modules such as search boxes, fields, and data visualizations. Dashboard panels are usually connected to saved searches. They can display the results of completed searches, as well as data from real-time searches.
Alerts
Alerts are triggered when search results meet specific conditions. You can use alerts on historical and real-time searches. Alerts can be configured to trigger actions such as sending alert information to designated email addresses or posting alert information to a web resource.
Additional Features
Datasets
Splunk allows you to create and manage different kinds of datasets, including lookups, data models, and table datasets. Table datasets are focused, curated collections of event data that you design for a specific business purpose. You can define and maintain powerful table datasets with Table Views, a tool that translates sophisticated search commands into simple UI editor interactions. It’s easy to use, even if you have minimal knowledge of Splunk SPL.
Data Model
A data model is a hierarchically organized collection of datasets. You can reference entire data models or specific datasets within data models in searches. In addition, you can apply data model acceleration to data models. Accelerated data models offer dramatic gains in search performance, which is why they are often used to power dashboard panels and essential on-demand reports.
Apps
Apps are collections of configurations, knowledge objects, and custom-designed views and dashboards. Apps extend the Splunk environment to fit the specific needs of organizational teams such as Unix or Windows system administrators, network security specialists, website managers, business analysts, and so on. A single Splunk Enterprise or Splunk Cloud installation can run multiple apps simultaneously.
Distributed Search
Distributed search provides a way to scale your deployment by separating the search management and presentation layer from the indexing and search retrieval layer. You use distributed search to facilitate horizontal scaling for enhanced performance, to control access to indexed data, and to manage geographically dispersed data.
System Components
Forwarders
A Splunk instance that forwards data to another Splunk instance is referred to as a forwarder.
Indexer
An indexer is the Splunk instance that indexes data. The indexer transforms the raw data into events and stores the events into an index. The indexer also searches the indexed data in response to search requests. The search peers are indexers that fulfill search requests from the search head.
Search Head
In a distributed search environment, the search head is the Splunk instance that directs search requests to a set of search peers and merges the results back to the user. If the instance does only search and not indexing, it is usually referred to as a dedicated search head.
Search Processing Language (SPL)
A Splunk search is a series of commands and arguments. Commands are chained together with a pipe “|” character to indicate that the output of one command feeds into the next command on the right.
search | command1 arguments1 | command2 arguments2 | ...
At the start of the search pipeline is an implied search command that retrieves events from the index. Search requests are written with keywords, quoted phrases, Boolean expressions, wildcards, field name/value pairs, and comparison expressions. The AND operator is implied between search terms. For example:
sourcetype=access_combined error | top 5 uri
This search retrieves indexed web activity events that contain the term “error”. For those events, it returns the top 5 most common URI values.
Search commands are used to filter unwanted events, extract more information, calculate values, transform, and statistically analyze the indexed data. Think of the search results retrieved from the index as a dynamically created table. Each indexed event is a row. The field values are columns. Each search command redefines the shape of that table. For example, search commands that filter events will remove rows, search commands that extract fields will add columns.
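For example, a minimal sketch that removes rows, adds a computed column, and then chooses which columns to display, assuming the usual web access-log fields status, bytes, clientip, and uri are extracted:
sourcetype=access_combined | search status=404 | eval kb=bytes/1024 | table clientip, uri, kb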
Time Modifiers
You can specify a time range to retrieve events inline with your search by using the latest and earliest search modifiers. The relative times are specified with a string of characters to indicate the amount of time (integer and unit) and an optional “snap to” time unit. The syntax is:
[+|-]<integer><unit>@<snap_time_unit>
The search
error earliest=-1d@d latest=@h
retrieves events containing "error" that occurred from the beginning of yesterday (00:00:00) through the most recent hour of today, snapping on the hour.
The snap to time unit rounds the time down. For example, if it is 11:59:00 and you snap to hours (@h), the time used is 11:00:00 not 12:00:00. You can also snap to specific days of the week using @w0 for Sunday, @w1 for Monday, and so on.
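For example, this search retrieves events containing "error" from the start of the most recent Monday up to now:
error earliest=@w1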
Subsearches
A subsearch runs its own search and returns the results to the parent command as the argument value. The subsearch is run first and is contained in square brackets. For example, the following search uses a subsearch to find all syslog events from the user that had the last login error:
sourcetype=syslog [ search login error | return 1 user ]
Optimizing Searches
The key to fast searching is to limit the data that needs to be pulled off disk to an absolute minimum. Then filter that data as early as possible in the search so that processing is done on the minimum data necessary.
Partition data into separate indexes, if you will rarely perform searches across multiple types of data. For example, put web data in one index, and firewall data in another.
Limit the time range to only what is needed. For example, use -1h, not -1w, or specify earliest=-1d.
Search as specifically as you can. For example, fatal_error, not *error*.
Use post-processing searches in dashboards.
Use summary indexing, and report and data model acceleration features.
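Putting several of these tips together, a minimal sketch, assuming web data has been routed to a dedicated index named web (a hypothetical name):
index=web sourcetype=access_combined fatal_error earliest=-1h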
Machine Learning Capabilities
Splunk’s machine learning capabilities are integrated across our portfolio and embedded in our solutions through offerings such as the Splunk Machine Learning Toolkit, the Streaming ML framework, and the Splunk Machine Learning Environment.
SPL2
Several Splunk products use a new version of SPL, called SPL2, which makes the search language easier to use, removes infrequently used commands, and improves the consistency of the command syntax. See the SPL2 Search Reference for details and for the differences between SPL and SPL2.
Search Examples
Group results that have the same "host" and "cookie", occur within 30 seconds of each other, and do not have a pause greater than 5 seconds between each event into a transaction.
Calculate the average value of "CPU" each minute for each "host".
Regular Expressions
Regular expressions are useful for extracting fields at search time, for example with the rex command or the Field Extractor tool. Some common patterns:
\d\d\d-\d\d-\d\d\d\d
Matches a Social Security number format, such as 123-45-6789.
\d\d\d-?\d\d-?\d\d\d\d
Matches the same digits whether or not the dashes are present; -? makes each dash optional.
(?P<var> ... )
A named capturing group; whatever matches inside is extracted into a field named var.
(?P<ssn>\d\d\d-\d\d-\d\d\d\d)
Extracts a Social Security number format into a field named ssn.
(?: ... )
A non-capturing group; groups a pattern without extracting a field.
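For example, a minimal sketch that applies the ssn pattern with the rex command, assuming such values appear in the raw event text:
... | rex "(?P<ssn>\d\d\d-\d\d-\d\d\d\d)" | table ssn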