Best For

Business analysts, security analysts, and IT analysts who need to analyze more than ~1TB of data per day and whose data lives across various systems throughout their organizations.

Project Description

Splunk Data Fabric Search (DFS) quickly weaves together massive datasets living across multiple data stores into a single view. With DFS, organizations can analyze high-cardinality data with up to billions of unique values, and datasets spanning hours, months, or even years, across any number of Splunk® deployments and, eventually, any third-party data store*, providing comprehensive insights from every event or record across the entire enterprise.

*Analyzing data across third-party data stores is currently in pre-release. Please sign up for the pre-release program here if interested.

Technical Requirements

  • Deployments with at least 1TB of data
  • Upgrade to Splunk Enterprise 8.0

Spark Requirements

  • Dedicated Spark cluster for Splunk
    • 4:1 ratio of Splunk indexers to Spark nodes (see the sizing sketch after this list)
    • Minimum of 5 Spark nodes (for deployments with 20 or fewer indexers)
  • Minimum Spark node requirements
    • CPU: 8 cores (16 recommended)
    • Memory: 64GB (128GB recommended)
    • Network: 10Gbps
    • Storage: 500GB+ (1,200+ IOPS)

Recommended Model for Spark Configuration: Download the Splunk DFS Manager app from Splunkbase. The app provides a user interface for managing and monitoring the Spark master and Spark workers, giving you a high-level view of your DFS deployment and its available resources.

I'm Interested!