More than 200 people joined us last week in Washington, DC for our largest SplunkLive ever, doubling the number of attendees from the 2009 SplunkLive DC event! Representatives from great companies like BAE, Comcast, Lockheed Martin, McAfee, Qwest Communications and Verizon, along with attendees from nearly every branch of the Federal government, were there.
Splunk’s Co-Founders Erik Swan and Rob Das started the day by explaining why they created Splunk. Everyone knew there was value in IT data, but searching and understanding it was complex and troublesome. Google made it easy and logical to find information on the World Wide Web, so why not apply the same thinking to our log files and IT data? And just look at what you can do with that data once it’s in Splunk.
So we got the basics down and helped a lot of people manage their infrastructure better. Now that we’re on version 4.1, we’re focusing on three key areas:
For Service Oriented Architectures, Splunk is particularly strong in providing visibility across every element of the transaction layer. A transaction didn’t go through? A customer didn’t receive an email confirmation? Need to prove or enforce a service level agreement? Splunk can help you see across the transaction to find and fix the problem—or create a dashboard to keep tabs on overall system throughput and health.
As we move to mixed and virtualized environments, we’re adding more operating systems, more applications per host, and even more windows and consoles to manage it all as VMs go up and down and change. But Splunk gives you one view into your data, configurations and metrics across both virtual and physical machines, so you can understand who is doing what, when, and on which machines.
OK, so now that we’ve got virtualization, SOA and transaction tracing covered, what else can Splunk do?
Well, it can tackle any data set you throw at it.
We’ve reached a tipping point in history: more data is being manufactured by machines — servers, cell phones, GPS-enabled cars — than by people.
Splunk can help us see patterns and really make the most of that data. As customers have told us, everything you need to know about your business is in your data; you just need a logical way to sort through it.
Splunk is great for compiling unstructured data from many heterogeneous sources. You don’t have to force-fit anything into a pre-designed schema or pattern of querying, and you don’t have to be a SQL expert to create ad-hoc reports, dashboards or graphs.
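As an illustration of what “ad-hoc” means in practice, here’s a quick search in Splunk’s search language over web access logs (the source type and field names are assumptions for this sketch, not something shown at the event):

```
sourcetype=access_combined status>=500
| timechart span=1h count by host
```

One line filters the raw events, and one pipe charts server errors per host per hour; no up-front schema design, no SQL.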
Customers are using Splunk to design variable pricing structures, plan new service offerings, understand usage and customer consumption patterns across dynamic demographics, mitigate revenue leakage and better correlate cause and effect across complex networks.
Correlation is, of course, one of the core tenets of successfully protecting against hackers, malware and other security risks, which brings me to the next presentation from Terry Brugger, PhD, a contractor for a large Federal agency. Read Tony’s blog posts Splunk’s Hot in the Federal Government and Splunk for FISMA to understand how Federal organizations are deploying Splunk.
And last but not least, Alan Thompson, Technology Infrastructure Manager for The Washington Post, stepped up to detail the ways Splunk helps them stay in business.
For the Washington Post, the website is the business, and every hour of downtime could cost $50K–$100K…and that doesn’t even factor in disappointed advertisers and subscribers.
They brought in Splunk to get a single point of visibility across their heterogeneous environment—and displace a homegrown logging solution that wasn’t scaling and delivered security alerts days later.
- SysAdmins are saving 30% of their time by not acting as log butlers
- Security teams have dashboards that make it easy to identify anomalies, reducing MTTR
- An Ops team dashboard details top referrers to anticipate peak-traffic pages
- Business teams gathered better web analytics for mobile sites than their packaged web analytics application delivered, which informed capacity and hardware planning
And they’re just getting warmed up. Next, Alan’s team plans to use Splunk to:
- Determine the top 10 most-read and most-emailed articles
- Track clickstreams
- Trigger SNMP alerts from Splunk into existing tools, with automated ticket creation
- Automate Sarbanes-Oxley reporting
Thanks for taking your Splunk implementation from 0 to 60 in just a few months, Alan. We can’t wait to see what you do next!
And remember, if you’re inspired by Alan’s and Terry’s presentations, be sure to join us for The First Splunk Users’ Conference, August 9–11, 2010 in San Francisco. We’ve got 40 sessions in 7 tracks featuring customer presenters from Autodesk, Cisco, Pegasus Solutions, VeriSign and Voxeo.
Register for the Users’ Conference by June 25 to save $100. See you there!