It’s April, and that means it’s Mathematics and Statistics Awareness Month. In our everyday world of monitoring and observability, both play an ever-increasing role in how we keep track of our environments, both our apps and our infrastructure.
Our world is no longer just about pinging the server or app to make sure “It’s alive!” We make use of standard and complex statistical methods to determine impact, including sudden-change detection, seasonal and historical analysis, and linear and non-linear predictive methods. We use metrics, traces, and/or log data in our analysis.
With the Splunk observability products, there are a number of built-in statistical functions that allow you to tailor your analysis and align your alerting and monitoring to your precise requirements. Our real-time data streaming empowers you to get the alerts and visualizations you need, when you need them, rather than relying on stale dashboards and charts that someone else may have set up.
Built-In Statistical Analytics:
But in a world powered by data science, new techniques and approaches are constantly evolving. That’s where SignalFlow excels.
SignalFlow, the statistical computation engine, is the driver of Splunk Infrastructure Monitoring. Powered by a Python-like language, SignalFlow programs accept streaming input and produce output in real time. SignalFlow is accessible via an API, which gives you the ability to create custom solutions. Its features go far beyond analysis, including the ability to send data, update data and metadata, work with charts and dashboards, and create, modify, and maintain detectors (alerts), and more.
SignalFlow Consists of the Following:
- SignalFlow background computation engine: Runs SignalFlow programs in the background and streams results to charts and detectors
- SignalFlow programming language: Python-like language that you use to write SignalFlow programs
- SignalFlow library: Functions and methods that you call from a SignalFlow program
Basically, any action that analyzes or displays data is driven by SignalFlow, including charts and detectors. Using the SignalFlow API, those computations run continually on real-time streams of data. Each program has a start time, a stop time, and a specific resolution. Start times can be current or historical. A stop time can be specified; if it is not, the program continues indefinitely. Resolution is the time interval at which the data is processed and results are delivered.
As you would expect, SignalFlow supports filtering on input streams. Filters select the data points you want in your input stream; you can specify as many query arguments as you like and combine them with boolean keywords (AND, OR, and NOT). Here is an example:
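A sketch in the SignalFlow language, with illustrative metric and dimension names:

```
# Select CPU utilization only from production hosts,
# excluding one availability zone (names are illustrative)
data('cpu.utilization',
     filter=filter('env', 'production') and not filter('aws_availability_zone', 'us-east-1a')
).publish()
```

Each `filter()` call matches a dimension/value pair, and the `and`, `or`, and `not` keywords combine them into arbitrarily complex queries.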
SignalFlow lets you go beyond those built-in capabilities and extend them in new ways, which is especially important in our new “unknown-unknowns” observability world. SignalFlow, as mentioned earlier, has a number of built-in functions and methods. These take our data stream as input, perform our computations, and output the result, also as a stream. Methods and functions are nearly identical; however, functions can be passed as arguments to other functions or to methods, while methods can only be applied to a stream object. Think of the difference this way: functions take input arguments and return values, while methods act only on the stream they are called on. These cover analysis topics such as aggregations and transformations (including calendar window transformations).
Aggregations apply a calculation across all the data in the stream at a point in time. For example, you can compute the average CPU utilization across a set of computing resources at a single point in time.
Transformations apply a calculation across each metric time series within a window of time. This gives you the capability to calculate, for example, a moving average of CPU utilization.
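To make the distinction concrete, here is a sketch of both in the SignalFlow language (the metric name is illustrative):

```
# Aggregation: average across all time series at each point in time
data('cpu.utilization').mean().publish('avg_cpu')

# Transformation: moving average of each time series over a trailing 10-minute window
data('cpu.utilization').mean(over='10m').publish('moving_avg_cpu')
```

The same analytic function, `mean()`, acts as an aggregation when called with no window and as a transformation when given an `over` duration.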
Calendar window transformations allow the comparison of one date/time block to another, opening up historical comparisons. You could use these to compare metrics week over week, for example. Seasonal and historical comparisons help reduce potential false alerts and, in visualizations, aid in a better understanding of business KPIs over time.
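A sketch of a week-over-week comparison in SignalFlow, using a calendar `cycle` window together with a time shift (metric name is illustrative):

```
# Average CPU utilization per calendar week
weekly = data('cpu.utilization').mean(cycle='week')

weekly.publish('this_week')
# Shift the same computation back one week for side-by-side comparison
weekly.timeshift('1w').publish('last_week')
```

Charting or alerting on the difference between the two published streams surfaces seasonal deviations rather than raw values.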
Much more detail on these built-in functions and methods can be found in the documentation.
As with simply using the tool and the UI, working with the SignalFlow API is easy. Programs run asynchronously as background jobs. Splunk Infrastructure Monitoring starts the jobs for charts and detectors automatically; however, you can also run your own programs as background jobs using a REST HTTP endpoint or a WebSocket connection. Where possible, WebSocket is recommended for its lower overhead and potentially substantially lower latency.
SignalFlow is also supported by several client libraries, including Python, Node.js, Java, and Ruby. As a quick example, here’s a Python snippet that calculates the average CPU utilization across all servers.
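A minimal sketch using the open-source signalfx Python client library (the token is a placeholder, and the metric name is illustrative):

```python
import signalfx

# Placeholder: substitute an access token for your own organization
TOKEN = 'YOUR_ACCESS_TOKEN'

# SignalFlow program: average CPU utilization across all servers
program = "data('cpu.utilization').mean().publish()"

with signalfx.SignalFx().signalflow(TOKEN) as flow:
    computation = flow.execute(program)
    # Stream results as they arrive; each DataMessage carries
    # the computed values for one resolution window
    for msg in computation.stream():
        if isinstance(msg, signalfx.signalflow.messages.DataMessage):
            print('{0}: {1}'.format(msg.logical_timestamp_ms, msg.data))
```

The computation runs as a background job on the server; the client simply opens a stream and consumes messages until you stop it or the program’s stop time is reached.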
So there you have it: a brief intro to SignalFlow and how you can use it to power your own analysis. There’s plenty more information available:
- The topic SignalFlow API describes the SignalFlow REST API.
- SignalFlow WebSocket API Request Messages describes the JSON request messages for the SignalFlow WebSocket API.
- SignalFlow Stream Messages Reference describes the REST and WebSocket response messages.
- SignalFlow Information Messages Reference provides more information about information messages you receive while your SignalFlow program is running.
So jump in and find out how to deliver the analysis you need from your data, in real time.
Try it out for yourself. Get started with a free trial and start monitoring with Splunk Infrastructure Monitoring today.