If making dashboards or detectors is a great way to protect and understand your stack, imagine how amazing it would be if you didn’t have to make them and they just popped into existence, curated by experts in your organization. It’s like a cooking show where the host pulls a baked and iced cake from under the counter. Voila!
This, friends, is one of the benefits of monitoring as code. You, your dinner guests, and your coworkers can use the Splunk Terraform provider to cook up delicious detectors from scratch, reuse great work, or create whole new delicacies.
Alas, how can a vendor like Splunk support such a cornucopia of choice? Surely our excellent UI for detector creation is enough? We certainly spent a lot of time and effort making a wonderful UI for creating assets in Splunk, but we also recognize your need to automate, tinker, and generally get flour all over everything. You want your cake and you also want to eat it. That’s why we support toggling between our UI and SignalFlow so that you don’t have to choose.
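To give a taste of what that toggle actually exposes, here is a minimal SignalFlow program of the sort the detector UI works with under the hood. The metric name, threshold, and duration are illustrative placeholders, not a recommendation:

```
# Compute a signal: average CPU utilization across the fleet (metric name is a placeholder).
signal = data('cpu.utilization').mean().publish(label='A')
# Fire an alert when the signal stays above 90 for 10 minutes.
detect(when(signal > 90, lasting='10m')).publish('CPU is high')
```

Because the UI and SignalFlow describe the same detector, you can start in whichever kitchen you prefer and switch later.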
Today, we’d like to talk about an automation-aware tool you know and love. Detectors created via our API are now united with those created via the UI. Go ahead and have a slice of cake with that detector.
Detectors and the API, or Before and After
Having a single way to create detectors (via the UI) really helps some users, but Splunk also values having a powerful API. As we explained when describing our efforts to allow toggling charts between the UI and a domain-specific language, this creates some difficulties to work through. Swapping out techniques can confuse a lot of tools, especially when they have a lot of power!
Delighted users can now treat their API-created detectors nearly identically to those made via the UI. While some minor differences remain to accommodate the power of your detectors, we offer near parity with those generated from the UI. This means you can use alert preview when adjusting detectors, as well as empower teams to own and edit detectors that were previously built with the API only.
More Than Good Practices: Best Practices
The crux of this work, just like the earlier post about charts, is that we needed to accommodate not only our UI but also the wide variety of possibilities unlocked by our users, the API, and SignalFlow. Therefore, it was important that detectors created via tools like Terraform not only work in the UI, but also enable teams to spin up assets for reuse and best practices.
With this improvement, teams that manage their detectors via Terraform can preview alerts before they actually trigger, customize alert messages to include variables such as host and service name, provide runbook URLs and tips using the UI, and open and investigate a detector in the UI with no limitations. This allows teams of subject matter experts to contribute best-practice monitoring tooling to other teams with all the ergonomics of Splunk's UI.
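To make that concrete, here is a sketch of what such a Terraform-managed detector might look like with the Splunk (SignalFx) Terraform provider. The metric, threshold, runbook URL, tip, and message text are all illustrative placeholders; check the provider documentation for the full schema of the `signalfx_detector` resource:

```hcl
# Hypothetical example — names and values are placeholders.
resource "signalfx_detector" "high_cpu" {
  name        = "High CPU utilization"
  description = "Average CPU above 90% for 5 minutes"

  # The detector logic itself is plain SignalFlow.
  program_text = <<-EOF
    signal = data('cpu.utilization').mean(by=['host', 'service']).publish(label='signal')
    detect(when(signal > 90, lasting='5m')).publish('CPU high')
  EOF

  rule {
    detect_label = "CPU high"
    severity     = "Major"
    # Runbook URL and tip show up alongside the alert in the UI.
    runbook_url  = "https://example.com/runbooks/high-cpu"
    tip          = "Check for runaway processes before restarting the host."
    # Customized alert message using dimension variables (placeholder wording).
    parameterized_body = "CPU on {{dimensions.host}} ({{dimensions.service}}) exceeded 90%"
  }
}
```

Once applied, a detector like this behaves in the UI just like one built there by hand, so teammates can open it, preview its alerts, and tune it without ever reading the Terraform.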
Using Terraform and Git to manage your detectors can bring some excellent social perks. It’s definitely a good practice to make it easy for any team member to improve and tune the alerts they might be receiving. While making the change in Splunk's UI is straightforward, having Git’s history and the mechanism of a pull request means that every member of the team is familiar with the process of adjustment, can count on feedback from the team, and can receive credit for investing in the tuning and pruning that makes for an informative and attentive on-call rotation.
So, my baking friends, as we look into the fall and winter and begin thinking about hibernation, having that cake and eating it too is important.
Knowing you can leverage our helpful UI for building detectors is reassuring, like that stand mixer that revolutionized your kitchen. But now you can supercharge your efforts with the additional power of Terraform, or a recipe of your own devising! Thanks to the server-to-table, non-organic power of Splunk's API and SignalFlow, you’ll be wowing them with confections worthy of cable competition shows.