How do you pass context from events that concern Security teams to the Development teams who can actually make changes and address those events? Often this involves a series of meetings and discussions that take days or weeks to filter down from security event to developer awareness. Compounding the problem, developers generally do not have access to the Splunk Cloud or Splunk Enterprise indexes used by security teams; indeed, they may use only Splunk Observability for their metrics, traces and even logs.
Now you can streamline the process of passing the context of events in Splunk to Splunk Observability Events with the Splunk Observability Cloud Alert Action for Splunk!
Splunk Observability Events can then be overlaid on the dashboards developers use most and notify them about interesting or important events being tracked in Splunk. Simply set up the alert action in Splunk Cloud or Splunk Enterprise the way you’d set up any other alert, choose the fields you’d like to pass as context to Splunk Observability and watch the event context traverse between your tools!
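Under the hood, the alert action forwards your chosen fields as a custom event to the Splunk Observability (SignalFx) event-ingest API. As a rough sketch of the shape of that payload, here is what such an event might look like if you built one yourself; the realm (`us0`), field names, and values below are illustrative placeholders, not output from the alert action itself:

```python
import json
import time

# Example realm and endpoint -- substitute your own realm and access token.
INGEST_URL = "https://ingest.us0.signalfx.com/v2/event"

def build_event(event_type, dimensions, properties):
    """Build one custom event in the shape the v2 event-ingest API expects."""
    return {
        "category": "USER_DEFINED",            # custom, user-generated event
        "eventType": event_type,               # the event name shown on charts
        "dimensions": dimensions,              # keys used to match chart filters
        "properties": properties,              # extra context fields from the alert
        "timestamp": int(time.time() * 1000),  # milliseconds since the epoch
    }

# Hypothetical event mirroring the code-scan example in Figure 1-1.
event = build_event(
    "code_scan_vulnerability",
    {"service": "Vulnerabilty_app", "environment": "prod"},
    {"cve": "cve-2021-43138", "package": "async"},
)

# The API accepts a JSON array of events, authenticated with an access token:
#   requests.post(INGEST_URL, headers={"X-SF-Token": "<ACCESS_TOKEN>"},
#                 data=json.dumps([event]))
print(json.dumps([event], indent=2))
```

Events sent this way land in Splunk Observability just like the alert action's, ready to be overlaid on charts whose dimensions match.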
Figure 1-1. Example Splunk Observability Event notifying developers of the code scanning results in Splunk regarding their “Vulnerabilty_app”. In this case, the code is using an async package that has a vulnerability (cve-2021-43138) and should be addressed.
Events are everywhere in Splunk products! We love events! They help us pass context with minimal overhead. Splunk Cloud and Splunk Enterprise generally use log events as their basic building blocks.
Splunk Observability uses events with its Detectors and as a useful means of passing non-time-series context to users. I’ve talked about events on this blog before, and I definitely will again! They provide a handle for understanding point-in-time occurrences, they integrate seamlessly into Observability dashboards, and they’re “free real estate” with no associated charge.
Need examples of the sort of questions that can be solved by passing context from Splunk indexes to Splunk Observability?
“Our code vulnerability scans of internal repositories are turning up new vulnerabilities. We need to notify the developers who own those repos!”
“A failure in our firewall settings has been detected in Splunk and is degrading the network. Let’s notify users in all of our other monitoring tools.”
“Critical business processes we track in Splunk ITSI are failing. We need to get this information in front of the developers who write the software involved.”
“I wish I could see software deployments for the services we care about overlaid on our O11y dashboards. We track that in Splunk. Can we get it into Observability?”
These are only a few of the possible use cases. In reality, if something is happening in Splunk and you need to notify users of Splunk Observability, an email alert from Splunk may be ignored or filtered into an unchecked inbox folder. Sending that information directly into Splunk Observability creates a record of when it happened and puts it in front of developer and SRE eyes in their own tools.
Who Needs Alert Actions?
The most important thing about using Alert Actions to tie Splunk Enterprise/Cloud and Splunk Observability together is that the use cases and teams who care about such integrations cover a wide swath of the organization:
- IT Operations / Support Analysts: Support analysts are often looking at the bigger picture. They may be watching ITSI glass tables or highly customized aggregations of services defined in Splunk. When issues arise, an email alert can now be complemented by a Splunk Observability event notifying developers in their own tools.
- Software Developers / DevOps / SRE: These teams often live in Splunk Observability and may not have access to Splunk or to certain Splunk indexes (or even know SPL). Yet they can generally benefit from the context of events contained in those indexes. Now that data can be passed seamlessly to Splunk Observability to provide that context.
- Security Teams: Providing the context of security events and vulnerabilities to Software, DevOps and SRE teams is important, and getting that context noticed can be difficult. Getting those details into Splunk Observability provides that notification, along with a handy record for historical reporting and reference. Stop haggling over the time from vulnerability discovery to developer notification and start automating the process with the Splunk Observability Cloud Alert Action for Splunk.
Make everyone’s life easier! Do less chair swiveling! Knit Splunk and Splunk Observability together by passing context. Your Security, Development and Support teams will thank you for it!
Download and start using the Splunk Observability Cloud Alert Action for Splunk today!
And Back Again!
Now, what if you want some data and context passed back into Splunk from Splunk Observability? The “return home” is a bit less interesting but very easy.
Set up Splunk HEC (with SSL, of course) and get a Splunk HEC token. Then set up a webhook integration in Splunk Observability that sends to your HEC’s ‘services/collector/raw’ endpoint.
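To make the moving pieces concrete, here is a minimal sketch of how a detector alert reaches HEC: the webhook POSTs the alert body to the raw collector endpoint, authenticated with a `Splunk <token>` header. The host, token, and alert fields below are placeholder assumptions, not values from any real deployment:

```python
import json

# Placeholder HEC endpoint and token -- substitute your own values.
HEC_URL = "https://splunk.example.com:8088/services/collector/raw"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"

# HEC authenticates with a "Splunk <token>" Authorization header.
headers = {"Authorization": f"Splunk {HEC_TOKEN}"}

# A trimmed, hypothetical example of the alert body a detector webhook
# might deliver; the exact fields depend on your detector configuration.
alert = {
    "detector": "High error rate",
    "status": "anomalous",
    "severity": "Critical",
}

# With SSL enabled on HEC, the webhook (or a script like this) POSTs the
# raw body, which Splunk indexes as an event:
#   requests.post(HEC_URL, headers=headers, data=json.dumps(alert), verify=True)
print(json.dumps(alert))
```

Once indexed, those alerts are searchable with SPL like any other events, so they can feed ITSI services or correlation searches.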
Or, for a more customizable process that sends Observability alert data into Splunk ITSI using AWS Lambda, check out this Splunk Lantern article!
Your Observability webhook can then be added to a detector as a notification method, and it will send those alerts into Splunk Enterprise or Splunk Cloud. This pattern can be useful when using Splunk and/or Splunk ITSI as a “monitor of monitors”.
If you’re a Splunk user but not yet using Splunk Observability, sign up for a free trial today and start digging into near-real-time software monitoring!
Not yet using Splunk? Check out free trials of Splunk Cloud or Splunk Enterprise (on-premises) today!
This blog post was authored by Jeremy Hicks, Solutions Innovation Engineer at Splunk with special thanks to: Doug Erkkila, Joel Schoenberg and the rest of the Solutions Innovation Engineering team!