Regardless of your title, if your job involves preparing data stored in HDFS and other stores so that your end users can query and visualize it, Hunk is probably right for you. Two common themes among data officers are:
1. We are building a data lake.
2. It takes too long to prepare the data.
So, you’ve built your data lake. Now what?
If you’re using one of the many point-solution tools, you must first run the data through an ETL process before you can gain any insights from it. That means applying expertise in a programming language to impose structure on the data, loading it into Hive or a relational database, and then connecting your favorite visualization tool. As the data changes and additional sources must be brought in, you find yourself adopting more tools and repeating this process, adding complexity, and soon relying on a single technical expert.
From a management perspective, this is a precarious position to be in. As the technical expert, it’s an enviable position until you realize you are stuck maintaining this hodgepodge.
Hunk leverages the same easy-to-use Splunk Processing Language (SPL) to explore, query, and visualize data stored in HDFS and other data stores. Users get quick insights into their data, and developers can reduce delivery time by using the Splunk SDK of their choice and the REST API for integration. This lets many organizations offer Hadoop as a self-service and deliver fast time to value.
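To give a flavor of what that looks like, here is a minimal SPL sketch of the kind of query Hunk runs directly against data in HDFS. The virtual index name `hadoop_weblogs` and the field names are hypothetical placeholders, not part of any actual deployment:

```
# Query a Hunk virtual index backed by raw web logs in HDFS:
# count requests by HTTP status over the last day, no ETL required.
index=hadoop_weblogs earliest=-1d
| stats count by status
| sort -count
```

The point is that the same search-time schema approach Splunk Enterprise uses applies here: structure is imposed when the query runs, so new or changed data sources don't require reloading a warehouse first.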
Below is a diagram depicting a single pane of glass into your entire data farm, comprising Splunk Enterprise, Hadoop, an RDBMS, and a NoSQL database.
When you need to gain insights from data quickly and efficiently, Hunk delivers. So if a lack of resources forces me to treat every data problem like a nail, let Hunk be my hammer.