The inhabitants of the modern data center face a dilemma of the unknown unknowns:
“…There are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns — the ones we don’t know we don’t know.” – Former Defense Secretary Donald Rumsfeld, February 12, 2002.
I am often asked what differentiates Splunk from our competitors. If I had to boil it down to a single item, I would say it is the capability to turn unknown unknowns into actionable knowledge with a business impact. The problem is that there is so much going on inside the data center these days that no one actually knows everything that is happening in there. If they tell you they do, they're lying (or delusional). Further complicating matters, most software you can buy to help manage your data center assumes that you already know what you're looking for – that you know what you don't know. Most software in this category has connectors, agents, and/or plugins geared specifically to pull data from a particular piece of technology, which is fine if you know exactly what exists in your data center. Therein lies the problem. With today's modern data center – complete with virtualized environments, hot deployments of servers and applications, offramps to clouds, mobile devices, and patches of skunkworks software installed without approval – it's impossible to know what is going on at any given time. And, much like the Heisenberg Uncertainty Principle suggests, the act of finding out would affect the very business transactions you're trying to watch.

Splunk began with a different approach: how do you first turn unknown unknowns into known unknowns, and then ultimately into streamlined processes beneficial to your business? Our conclusion was that you simply cannot assume you can know everything, so the best approach was to glean intelligence from the data center "exhaust" – logs, application data streams, monitoring data, and any other serialized, machine-generated data source.
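The "exhaust" approach can be sketched in miniature: collect raw machine data as-is, without a format-specific connector, and extract fields only at search time (often called schema-on-read). The log formats, field names, and the `extract_fields` helper below are all hypothetical illustrations, not Splunk's actual implementation:

```python
import re

def extract_fields(raw_event):
    """Pull key=value pairs out of a raw log line at search time,
    rather than requiring a connector that knows the format up front.
    Events with no such pairs simply yield an empty dict."""
    return dict(re.findall(r'(\w+)=("[^"]*"|\S+)', raw_event))

# Heterogeneous "exhaust" from different sources; nobody wrote a
# connector for any of these formats (all lines are made up).
events = [
    'ts=2023-01-01T12:00:00Z level=ERROR service=checkout latency_ms=4120',
    'ts=2023-01-01T12:00:01Z level=INFO service=search latency_ms=35',
    'kernel: usb 1-1: new high-speed USB device',  # no key=value pairs at all
]

# A search-time question you didn't know to ask at ingest time:
# which events show abnormally slow transactions?
slow = [
    e for e in events
    if int(extract_fields(e).get('latency_ms', 0)) > 1000
]
for e in slow:
    print(e)
```

The point of the sketch is that the question about slow transactions is posed after the data was collected; nothing about the ingest step needed to anticipate it.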
I’ve seen it too often to count. Customer X buys Splunk for a specific use in a single department and then quickly expands the deployment to multiple departments, groups, and uses because of things they started seeing that they didn’t even know were there, such as abnormally slow transactions. I can think of a few customers who brought in Splunk to help with website security, started noticing interesting patterns in their web application data, and began Splunking ever more data streams. We saw the potential for this long ago – the word “Splunk” is derived from the term “spelunk,” to explore caves. When you spelunk, you don’t necessarily know what you might find, and the same goes for exploring the cavernous reaches of data centers. Our customers vouch for and wholeheartedly approve of that approach. It is precisely because of it that they are able to achieve better operational intelligence and, ultimately, higher operational efficiency. To get to the point where they could improve their efficiency – and by extension, profitability – they first had to understand what wasn’t working optimally. Not broken, per se, but working sub-optimally. Understanding the difference between those two things is critical in a world where higher operational efficiency can mean the difference between succeeding and dying.
The operators of the modern data center can never assume that all of their unknowns are known – to do so is to invite future calamity. This is why I describe the Splunk experience as “The Joy of Happenstance”: the moment of bliss when you find a previously undiscovered problem and know how to build a solution. Splunk helps you future-proof the data center. You don’t have to wait for us to create new connectors for the thousands of technologies you’ll be adding to your environment – just Splunk IT.