Introducing Edge Processor: Next Gen Data Transformation

We get it: not only does it take time, money, and resources to get data into Splunk, it also takes effort to shape that data so it delivers the most value. It doesn't have to anymore, thanks to Splunk's latest innovation in data processing.

Splunk is pleased to announce the general availability of Splunk Edge Processor, a service offering within Splunk Cloud Platform designed to help customers transform data more efficiently, close to the data source, and gain improved visibility into data in motion. Edge Processor gives customers new capabilities to filter, mask, and otherwise transform their data before routing it to supported destinations. Edge Processor joins Ingest Actions as part of Splunk's pre-ingest data transformation capabilities. All current Edge Processor features are free to all Splunk Cloud customers.

What gives Edge Processor its data transformation power is Splunk's next-generation data search and preparation language, SPL2. With SPL2, customers have much more flexibility to shape data so that it is formatted exactly how they want before it is sent to be indexed.
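As a taste of that flexibility, here is a minimal SPL2 sketch that extracts and normalizes a field before indexing. It assumes a hypothetical log format containing a severity=<value> token in the raw event, and it follows the same shape as the examples later in this post:

$pipeline =
| from $source
// Extract a severity field from the raw event (hypothetical log format)
| rex field=_raw /severity=(?P<severity>\w+)/
// Normalize severity values to lowercase before the event is indexed
| eval severity=lower(severity)
| into $destination;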

Unique to Edge Processor is its architecture, chiefly the cloud-based control plane. Edge Processor nodes are installed and configured on customer servers or customer cloud infrastructure with a single command, and they are managed entirely from Splunk Cloud Platform. These nodes form an intermediate forwarding tier that receives data from edge sources. Customers manage their entire fleet of Edge Processors, with visibility into both inbound and outbound data volumes across the network, all from a single place. Each Edge Processor can scale horizontally to handle growing processing or data volume requirements, simply by adding instances.

Customers have detailed metrics to view the impact of their pipelines on the data flowing through each of their Edge Processors, and can closely track unexpected spikes or troughs in their data.

From the central cloud control plane, customers define data processing logic, called pipelines, that dictates their desired filtering, masking, and routing, and they can apply their pipelines to any or all Edge Processors in their network. Edge Processor pipelines are constructed with SPL2 in the new pipeline editor experience, where users can preview the impact of applying a pipeline to their data before committing a change.
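Whatever the logic, every pipeline follows the same basic shape: read events from a source, apply SPL2 commands, and write the results to a destination. A minimal sketch:

$pipeline =
| from $source
// Filtering, masking, and other transformation commands go here
| into $destination;

The third example later in this post is exactly this pass-through pipeline, with the source and destination bound to a sourcetype and an S3 bucket.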

The data plane remains completely within the customer's control: customers point data sources at an Edge Processor node installed on their hosts, and that data is sent only where customers direct it. At launch, Edge Processor can receive data from Splunk Universal and Heavy Forwarders, and route data to Splunk Enterprise, Splunk Cloud Platform, and Amazon S3.[1]

Customers have a guided pipeline editor experience, with the ability to preview the effect of a pipeline on sample data that they provide.

Edge Processor, powered by SPL2, makes data transformation easy and flexible. One of the most common use cases is filtering verbose data sources, such as Windows event logs, to retain selected events or selected content within an event. Concrete examples of this use case include retaining only Windows events that match a certain event code, masking the lengthy message field at the end of Windows events, and routing an unfiltered copy of the data to an Amazon S3 bucket. The pipelines below show how these examples are constructed; the user controls which data the pipeline applies to, how that data is processed, and where the processed data is routed.

Example 1: Filter Windows system events on event code, and route them to the Splunk Cloud index "Security".

Source: sourcetype = winEventLog:system
Destination: Splunk index "Security"
Pipeline definition (SPL2):

$pipeline =
| from $source
// Extract the event code field
| rex field=_raw /EventCode=(?P<event_code>\d+)/
// Retain all events with Windows event code 9
| where event_code = 9
| into $destination;

Example 2: Mask Windows system events to remove the final "Message" contents, and route this copy to the Splunk Cloud index "Main".

Source: sourcetype = winEventLog:system
Destination: Splunk index "Main"
Pipeline definition (SPL2):

$pipeline =
| from $source
// Keep the first line of the Message field and replace the remainder with "..."
| eval _raw=replace(_raw, /(Message=.*[\r\n?|\n])((?:.|\r\n?|\n)*)/, "\\1...")
| into $destination;

Example 3: Route an unfiltered copy of ALL Windows events to the AWS S3 bucket "Windows".

Source: sourcetype = winEventLog*
Destination: S3 bucket "Windows"
Pipeline definition (SPL2):

$pipeline =
| from $source
| into $destination;

With Edge Processor, customers get increased visibility into data in motion and improved productivity, simplicity, and control of data transformations, all at scale. What's more, Edge Processor helps customers manage costs and get more value from their Splunk investment, acting as a forcing function to organize and prioritize data according to use case, so that they work with just the data they want, in the location they need it.

If you are a current Splunk Cloud Platform customer hosted in the US or Dublin Splunk Cloud regions, you can get access to Edge Processor today. Contact your Splunk sales representative, or send an email to EdgeProcessor@splunk.com with your company name, Splunk Cloud stack name, and Splunk Cloud region. If you are hosted in another Splunk Cloud region, contact your Splunk sales representative or send an email to get on the list to be enabled once Edge Processor becomes available in your region.

For more about Edge Processor, including release plans to support additional sources, destinations, and new functionality, see release notes and documentation.

[1] See release notes for updates on new features, including additional supported sources and destinations.

----------------------------------------------------
Thanks!
Jodee Varney
