Introducing Edge Processor: Next Gen Data Transformation

We get it: not only can it take a lot of time, money, and resources to get data into Splunk, but it also takes effort to shape that data in a way that provides the most value. It doesn’t have to anymore, thanks to Splunk’s latest innovation in data processing.

Splunk is pleased to announce the general availability of Splunk Edge Processor, a service offering within Splunk Cloud Platform designed to help customers achieve greater efficiency in data transformation close to the data source, along with improved visibility into data in motion. Edge Processor gives customers new abilities to filter, mask, and otherwise transform their data before routing it to supported destinations. Edge Processor joins Ingest Actions as part of Splunk’s pre-ingest data transformation capabilities. All current Edge Processor features are free to all Splunk Cloud customers.

What gives Edge Processor its data transformation power is Splunk’s next-generation data search and preparation language, SPL2. With SPL2, customers have much more flexibility to shape data so that it is formatted exactly how they want before sending it on to be indexed.
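To give a sense of the syntax, a minimal SPL2 pipeline might look like the sketch below. The $source and $destination parameters follow the pattern used in the examples later in this post, but the log_level field and the "DEBUG" filter are purely illustrative placeholders rather than a prescribed configuration:

    $pipeline = | from $source
        // Drop low-value debug events before they are indexed (hypothetical field name)
        | where log_level != "DEBUG"
        | into $destination;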

Unique to Edge Processor is its architecture, chiefly the cloud-based control plane. Edge Processor nodes are installed and configured on customer servers or customer cloud infrastructure with a single command, and are managed entirely from Splunk Cloud Platform. These nodes act as an intermediate forwarding tier, receiving data from edge sources. Customers manage their entire fleet of edge processors and have visibility into both inbound and outbound data volumes across their edge processor network, all from a single place. The tier can also scale horizontally to handle increasing processing or data volume requirements simply by adding instances.

Customers have detailed metrics to view the impact of their pipelines on data flowing through each of their edge processors, and can closely track unexpected spikes or troughs in their data.

From the central cloud control plane, customers define data processing pipelines that dictate their desired filtering, masking, and routing behavior, and can apply those pipelines to any or all edge processors in their network. Edge Processor pipelines are constructed with SPL2 in the new pipeline editor experience, where users can preview how a pipeline affects their data before committing a change.

The data plane remains completely within the customer’s control: customers point data sources to an Edge Processor node installed on their own hosts, and that data is sent only where customers direct it. At launch, Edge Processor can receive data from Splunk Universal and Heavy Forwarders, and route data to Splunk Enterprise, Splunk Cloud Platform, and Amazon S3.[1]
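Because Edge Processor nodes accept standard Splunk-to-Splunk forwarder traffic, pointing a Universal Forwarder at a node looks much like pointing it at any other Splunk receiver. The outputs.conf sketch below is illustrative only: the hostname ep-node1.example.com is a placeholder, and it assumes the Edge Processor instance has been configured to listen for forwarder traffic on port 9997.

    [tcpout]
    defaultGroup = edge_processors

    [tcpout:edge_processors]
    server = ep-node1.example.com:9997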

Customers have a guided pipeline editor experience with the ability to preview the effect of their pipeline on sample data that they provide.

Edge Processor, with SPL2, makes data transformation easy and flexible. One of the most common use cases for Edge Processor is to filter verbose data sources, such as Windows event logs, to retain only selected events or content within an event. A concrete set of examples for this use case: retain only Windows events that match a certain event code, mask the lengthy Message field at the end of Windows events, and route an unfiltered copy of the data to an AWS S3 bucket. The pipelines below show how these examples are constructed; the user controls which data the pipeline applies to, how that data is processed, and where the processed data is routed.

Filter Windows system events on event ID, route to Splunk Cloud index “Security”
$source: sourcetype = winEventLog:system
$destination: Splunk index “Security”
Pipeline definition (SPL2):
    $pipeline = | from $source
        // Extract the event code field
        | rex field=_raw /EventCode=(?P<event_code>\d+)/
        // Retain all events with Windows event code = 9
        | where event_code = 9
        | into $destination;

Mask Windows system events to remove the final “Message” contents, route this copy to Splunk Cloud index “Main”
$source: sourcetype = winEventLog:system
$destination: Splunk index “Main”
Pipeline definition (SPL2):
    $pipeline = | from $source
        // Mask the multi-line Message contents at the end of the event
        | eval _raw=replace(_raw, /(Message=.*[\r\n?|\n])((?:.|\r\n?|\n)*)/, "\\...")
        | into $destination;

Route an unfiltered copy of ALL Windows events to AWS S3 bucket “Windows”
$source: sourcetype = winEventLog*
$destination: S3 bucket “Windows”
Pipeline definition (SPL2):
    $pipeline = | from $source
        | into $destination;

With Edge Processor, customers will experience increased visibility into data in motion and improved productivity, simplicity, and control of data transformations, all at scale. What’s more, Edge Processor is another capability that helps customers manage costs and boost the value of their Splunk investment, serving as a forcing function to organize and prioritize data according to use case so that they work with just the data they want, in the location they need it.

If you are a current Splunk Cloud Platform customer hosted in the US or Dublin Splunk Cloud regions, you can get access to Edge Processor today. Contact your Splunk sales representative, or send an email to EdgeProcessor@splunk.com with your company name, Splunk Cloud stack name, and Splunk Cloud region. If you are a Splunk Cloud Platform customer hosted in another Splunk Cloud region, contact your Splunk sales representative or send an email to get on the list to be enabled once Edge Processor is available in your region.

For more about Edge Processor, including release plans for additional sources, destinations, and new functionality, see the release notes and documentation.

[1] See release notes for updates on new features, including additional supported sources and destinations.

----------------------------------------------------
Thanks!
Jodee Varney
