- Archive your existing Splunk indexers' data with Hunk 6.2.1
- Search archived data in place from the Hunk search head
- Documentation here!
Archive Splunk Data
Hunk 6.2.1 enables you to continuously archive your Splunk data to Hadoop by pointing a Hunk search head at your Splunk indexers and configuring new archive indexes.
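As a rough sketch of what that configuration might look like (the stanza and setting names below follow Hunk's `indexes.conf` conventions but are assumptions on our part; check the documentation for the exact names and defaults):

```
# indexes.conf on the Hunk search head -- illustrative sketch only
[my_archive]                                    # the new archive index
vix.output.buckets.from.indexes = main          # which Splunk index(es) to archive
vix.output.buckets.path         = /archive/main # destination path in Hadoop
vix.provider                    = my_hadoop_provider
```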
Searching archived data
You can search archived data in place on Hadoop just as easily as you would search any other Splunk index. There’s no need to move data more than once. This works because Hunk already knows how to efficiently search data in Hadoop. We just had to archive the data in a file structure such that Hunk could efficiently prune the data by time.
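For example, a search over an archive index (here assuming one named `my_archive`) reads just like any other Splunk search, and the `earliest`/`latest` bounds are what let Hunk skip whole time ranges of archived files:

```
index=my_archive sourcetype=access_combined earliest=-30d@d latest=-7d@d
| stats count by status
```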
Most of the configuration goes on the Hunk search head, but you need Hadoop client libraries and Java on all nodes. For more information about configuration, see the documentation.
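On the provider side, Hunk needs to know where Java and the Hadoop client libraries live. A provider stanza along these lines is typical (the paths and host names are placeholders, not defaults):

```
# indexes.conf -- Hadoop provider stanza (example values only)
[provider:my_hadoop_provider]
vix.family          = hadoop
vix.env.JAVA_HOME   = /usr/java/latest
vix.env.HADOOP_HOME = /opt/hadoop
vix.fs.default.name = hdfs://namenode:8020
```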
Copy, not delete
The archiver copies data to Hadoop; it does not delete data from the indexer. Your Splunk index configuration continues to govern when data is deleted. Data can be copied from the Splunk indexer as soon as it reaches a warm state, and we copy both warm and cold bucket data. You can configure how old your data must be before it is copied to Hadoop.
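For instance, to copy buckets only once their data is at least a week old, you might set an age threshold on the archive index (the setting name below is our assumption; verify it against the documentation):

```
[my_archive]
# only archive buckets whose data is older than 7 days (value in seconds)
vix.output.buckets.older.than = 604800
```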
What about precious network bandwidth?
We understand that network bandwidth is a limited resource in your cluster, so we've made it easy to configure network bandwidth throttling for the data transfers between your indexers and Hadoop. You can limit the transfer rate, in bits per second, per indexer. See the documentation.
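A throttle of roughly 50 Mbit/s per indexer could then look like the fragment below (again, treat the setting name as an assumption and confirm it in the documentation):

```
[my_archive]
# cap the transfer rate from each indexer to Hadoop (bits per second)
vix.output.buckets.max.network.bandwidth = 50000000
```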
We've made it easy for you to monitor your newly set up archive system through a dashboard built on the logs from the new archiver feature. To view the dashboard, on your Hunk search head go to: "Settings -> Virtual Index -> Archived Indexes -> View Dashboards".
We hope you will enjoy this new feature! Happy archiving!