Recently, Splunk released the Cisco Time Series Model, an open-weight, Transformer-based foundation model for time series analysis, available on Hugging Face. The Cisco Time Series Model is set to revolutionize how enterprises understand, forecast, and act on machine data. Designed specifically for the complex demands of observability and security data, it brings a new level of precision, adaptability, and usability to time series analysis. In this blog post, we explore how the model integrates seamlessly with your data in Splunk by leveraging the Splunk App for Data Science and Deep Learning.
One of the core challenges in time series modeling is balancing the need to understand long-term historical patterns with the ability to detect timely, detailed events. The Cisco Time Series Model addresses this with a novel multi-resolution approach. It processes data at different granularities—low resolution to capture broad historical trends and high resolution to provide detailed, real-time insights. This capability ensures that users do not miss critical anomalies or patterns that occur at varying time scales, enabling more accurate forecasting and anomaly detection.
Typically, the Cisco Time Series Model processes input time series with data points recorded at 1-minute intervals. A month of historical data at this granularity amounts to 43,200 data points (30 days × 24 hours × 60 minutes), a volume that often exceeds the typical context window of many time series models. To manage this, the model retains the most recent 512 data points at their native 1-minute resolution while aggregating older history into 1-hour intervals, collapsing the remaining ~42,700 minute-level points into roughly 712 hourly ones. This significantly reduces the input length without sacrificing crucial information. Experiments also demonstrate the model's flexibility: it capably processes input at 5-minute granularity as well. This multi-resolution approach enables long-range forecasts; at 5-minute granularity, for example, a standard forecasting window of 128 steps reaches more than 10 hours into the future while still leveraging extensive historical context.
The Splunk App for Data Science and Deep Learning (DSDL) bridges Splunk search heads and customer-managed container environments, facilitating advanced deep learning model training, inference, and Large Language Model (LLM) integrations. This architecture makes DSDL an ideal platform for deploying and using the Cisco time series foundation model (TSFM) directly within Splunk search.
With our latest build, the Cisco Time Series Model is fully integrated, allowing you to leverage its powerful forecasting capabilities for any time series data within your Splunk environment. Once the latest DSDL is installed, you can launch the Golden Transformers GPU (5.2.2) container from the Container Management page within DSDL, putting advanced time series analysis at your fingertips.

A new notebook for forecasting has been added to this container image. You can view this notebook by clicking on the JupyterLab link and navigating to notebooks/tsfm_forecast.ipynb.

This new notebook empowers Splunk users to perform forecasting directly from the search bar using the Fit command. The comprehensive descriptions within the notebook guide users through using the Fit command effectively, including how to apply various parameters to tailor their use cases. By default, the model file is downloaded automatically from the Hugging Face repository. If you choose to download the model file manually and place it in a directory inside the notebook container, specify the file path in the Fit command using the local_path parameter (for example: local_path="/srv/app/model/data/torch_model.ckpt").
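For reference, a minimal invocation with a manually downloaded checkpoint might look like the following sketch, where my_metrics.csv and my_metric are placeholders for your own lookup and field:
| inputlookup my_metrics.csv
| fit MLTKContainer algo=tsfm_forecast value_field="my_metric" forecast_steps=128 local_path="/srv/app/model/data/torch_model.ckpt" * into app:tsfm_forecast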
To showcase forecasting capabilities via DSDL, we will use the readily available internet_traffic.csv dataset from the Splunk AI Toolkit. The following Splunk search aggregates the average internet traffic over 5-minute intervals for an entire month and converts the values from bits to megabytes.
| inputlookup internet_traffic.csv
| head 10000
| timechart span=5min avg("bits_transferred") as bits_transferred
| eval bits_transferred = bits_transferred / 8 / 1024 / 1024
The aggregated internet traffic data, represented as a time series, is shown in the line chart below:

To evaluate the model's accuracy, we employ the following SPL command. This command is designed to forecast the final 128 steps of our time series, leveraging all preceding historical data. We then compare these forecasts directly against the actual values for that same 128-step period, providing a clear measure of the model's performance.
| inputlookup internet_traffic.csv
| head 10000
| timechart span=5min avg("bits_transferred") as bits_transferred
| eval bits_transferred = bits_transferred / 8 / 1024 / 1024
| fit MLTKContainer algo=tsfm_forecast value_field="bits_transferred" forecast_steps=128 * into app:tsfm_forecast
| tail 800
| table _time bits_transferred predicted_p50
The core of our evaluation is the Fit command, which invokes the tsfm_forecast algorithm. Within this command, we explicitly specify bits_transferred as the field to forecast and set forecast_steps to 128.
Following the execution of this command, we focus our visualization on the last 800 steps of the time series. The resulting chart, displayed below, visually compares the ground truth (bits_transferred) against the median forecast (predicted_p50) generated by the model.

The chart below offers a zoomed-in look at the forecasting area.

As depicted in the charts, the model (represented by the orange line) effectively forecasted the last 128 steps by leveraging the preceding historical data within the time series. It successfully captured the overall trend and the significant peaks of the ground truth (the blue line), demonstrating robust forecasting performance.
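If you would like a quantitative complement to this visual check, you can compute an error metric directly in SPL. The sketch below, substituted for the final tail and table commands of the evaluation search above, computes the mean absolute percentage error of the median forecast over the 128 forecast steps (it assumes the actual values are non-zero):
| tail 128
| eval abs_pct_err = abs(bits_transferred - predicted_p50) / bits_transferred * 100
| stats avg(abs_pct_err) as mape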
It's important to note, however, that forecasting models, particularly over multi-step horizons, inherently tend to produce a smoother curve. This means the model focuses on identifying and projecting the underlying patterns and general direction of the data rather than attempting to replicate every minor fluctuation or "noise" present in the raw time series. This smoothing is a common and often desirable trait in forecasting, as it helps in understanding the broader trajectory and making strategic decisions, even if it doesn't perfectly mirror every instantaneous data point.
Beyond the median forecast, the Fit command also outputs quantile forecasts from P10 to P90, offering users a clearer understanding of the forecast's uncertainty and enabling more informed decision-making. The raw output generated by the Fit command, illustrating these detailed forecasts, is shown below:

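If you prefer to chart this uncertainty band rather than read the raw rows, you can include the quantile fields in the final table command of the evaluation search. The sketch below assumes the quantile columns follow the same naming convention as predicted_p50 (for example, predicted_p10 and predicted_p90); check the raw Fit output above for the exact field names:
| tail 800
| table _time bits_transferred predicted_p10 predicted_p50 predicted_p90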
Having thoroughly examined the model's performance, let's now apply it to forecast future values. Generating forecasts for upcoming timestamps within a time series requires a slight modification to our existing SPL. The following command provides a practical example of how to achieve this:
| inputlookup internet_traffic.csv
| head 10000
| timechart span=5min avg("bits_transferred") as bits_transferred
| eval bits_transferred = bits_transferred / 8 / 1024 / 1024
| sort _time
```Adding data point padding to continue the time series for forecasting```
| append [| makeresults count=128 | eval bits_transferred=0, _time = 0 | streamstats count as pad ]
| eventstats latest(_time) as latest_timestamp
| eval _time=if(pad>0, latest_timestamp + pad*300, _time)
| table _time bits_transferred
```Forecasting the padded time series```
| fit MLTKContainer algo=tsfm_forecast value_field="bits_transferred" forecast_steps=128 * into app:tsfm_forecast
| tail 5000
| table _time bits_transferred predicted_p50
The initial segment of this SPL largely mirrors our previous examples. However, to enable future forecasting, we must first extend the time series: since the original data does not contain future timestamps, we use the append command to add 128 placeholder rows, then use eventstats and eval to assign each one a timestamp that advances in 300-second (5-minute) increments from the last recorded timestamp, effectively creating placeholders for our future forecasts.
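As an optional sanity check before running the Fit command, you can verify that exactly 128 future rows were appended and that their timestamps extend past the original series, for example by temporarily replacing the table command with the following sketch:
| where pad > 0
| stats count as padded_rows min(_time) as first_forecast_time max(_time) as last_forecast_time
| eval first_forecast_time = strftime(first_forecast_time, "%F %T"), last_forecast_time = strftime(last_forecast_time, "%F %T")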
Subsequently, we re-execute the Fit command. This time, it leverages the entire historical time series to forecast values for these newly padded 128 future timestamps. To provide a comprehensive view and clearly display the trends captured by the model over an extended period, we visualize the last 5000 steps in the chart below:

The chart below offers a zoomed-in look at the forecasting area.

As shown in the charts, the orange line extending at the end of the series represents the model's future forecasting results. It adeptly captures the historical trends and projects them forward, providing a clear forecast for the next 10 hours. Such forecasts offer immense value across various domains, including enhanced IT monitoring, optimized resource planning, and even proactive security detections.
The Cisco Time Series Model, leveraging its zero-shot forecasting capabilities and seamless integration via Splunk DSDL, offers significant value for IT operations and observability. This includes enhancing proactive capacity planning by predicting future resource utilization, improving SLA management by anticipating performance trends, and bolstering security analytics by forecasting typical system behaviors. The model excels at capturing the long-term context and underlying trends of metrics, providing a smooth and interpretable forecast. While this approach is highly valuable for strategic planning and for understanding the overall trajectory, the model inherently smooths its forecasts rather than replicating every instantaneous fluctuation in the raw data, allowing teams to focus on significant shifts and sustained patterns.
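As a concrete example of the proactive-detection idea above, a scheduled search could flag actual values that escape the forecast band on the evaluation window. This sketch builds on the evaluation search and the assumed predicted_p10/predicted_p90 quantile fields introduced earlier:
| tail 128
| where bits_transferred < predicted_p10 OR bits_transferred > predicted_p90
| table _time bits_transferred predicted_p10 predicted_p90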
For more examples of using this model, please also check out our cookbooks on GitHub.
Huaibo & Philipp