Late last year we announced a planned integration layer between Splunk and Hadoop, which we called the Shep project. We saw a tremendous response, signifying the pent-up interest in a Splunk-Hadoop integration. To me, it indicated that people just “got it.” What do I mean by that? They got that bringing Splunk technology to the Hadoop ecosystem meant a leap forward in making the promise of Big Data a reality for a huge segment of the industry.
For example, areas where people saw immediate value included:
- Opening up Splunk-ingested data to a variety of groups building analytics on Hadoop in the Enterprise
- Using Splunk as a way to search and visualize data contained in Hadoop
- Using Splunk to bring real-time search and analysis to Hadoop
- Federating data analytics between Splunk indexed data and data in Hadoop
As promised, we delivered the Shep beta to customers earlier this year and got some great feedback. We also learned a great deal that we’re now productizing. As a result, the Shep project has evolved into several initiatives and is no longer a standalone project. These initiatives all fall under a common theme: Splunk and the Hadoop stack working in ever-closer concert to enable the handling of ever-larger data sets arriving at ever-faster rates. To stay updated as things develop, register here: www.splunk.com/bigdata.
Today, it’s widely acknowledged that Hadoop itself is complex and difficult, and that much of the struggle is with the technology itself rather than the problems it was meant to solve. By using Splunk in conjunction with Hadoop, the goal is to directly address the challenges of high-velocity analytics. Over the coming months (and years), we expect the Splunk and Hadoop ecosystems to blend in various ways due to the natural gravitational pull of the synergies between them — not just via Splunk-created products, but through third-party initiatives as well.
Much more on this subject will be featured at our annual users’ conference, .conf2012. We look forward to seeing you there.