
Unleashing Data Ingestion from Apache Kafka

Whether it’s familiar data-driven tech giants or hundred-year-old companies that are adapting to the new world of real-time data, organizations are increasingly building their data pipelines with Apache Kafka. Confluent has an impressive catalog of these use cases.

Splunk Connect for Kafka introduces a scalable approach to tap into the growing volume of data flowing into Kafka. With the largest Kafka clusters processing over one trillion messages per day and Splunk deployments reaching petabytes ingested per day, this scalability is critical.

Overall, the new connector provides:

  1. High scalability, allowing linear scaling, limited only by the hardware supplied to the Kafka Connect environment
  2. High reliability by ensuring at-least-once delivery of data to Splunk
  3. Ease of data onboarding and simple configuration with the Kafka Connect framework and Splunk's HTTP Event Collector

How it Works

As a sink connector, Splunk Connect for Kafka takes advantage of the Kafka Connect framework to horizontally scale workers to push data from Kafka topics to Splunk Enterprise or Splunk Cloud. Once the plugin is installed and configured on a Kafka Connect cluster, new tasks will run to consume records from the selected Kafka topics and send them to your Splunk indexers through HTTP Event Collector, either directly or through a load balancer.
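To make the deployment model concrete, here is a minimal sketch of how such a sink connector is typically configured. Kafka Connect accepts connector definitions as JSON posted to its REST API; the property names below (`connector.class`, `splunk.hec.uri`, `splunk.hec.token`) follow the connector's documented conventions, but the topic names, URI, and connector name are illustrative assumptions you should adapt to your environment:

```python
import json

# Illustrative Splunk sink connector definition. Property names follow
# the Splunk Connect for Kafka conventions -- verify them against the
# documentation for your connector version before use.
connector_config = {
    "name": "splunk-sink",  # hypothetical connector name
    "config": {
        "connector.class": "com.splunk.kafka.connect.SplunkSinkConnector",
        "tasks.max": "4",                # tasks scale out across Connect workers
        "topics": "web_logs,app_logs",   # hypothetical Kafka topics to consume
        "splunk.hec.uri": "https://hec.example.com:8088",  # HEC endpoint or load balancer
        "splunk.hec.token": "REPLACE-WITH-HEC-TOKEN",
    },
}

# This payload would normally be POSTed to the Connect REST API,
# e.g. http://<connect-worker>:8083/connectors.
payload = json.dumps(connector_config, indent=2)
print(payload)
```

Raising `tasks.max` is what lets the framework fan consumption out across the cluster: Connect assigns topic partitions to tasks, and tasks to workers, so throughput scales with the hardware you give it.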

Key configurations include:

  1. Load Balancing by specifying a list of indexers, or using a load balancer
  2. Indexer Acknowledgements for guaranteed at-least-once delivery (if using a load balancer, sticky sessions must be enabled)
  3. Index Routing via connector configuration by specifying a 1-to-1 mapping of topics to indexes, or using props.conf on indexers for record-level routing
  4. Metrics collection using collectd, raw mode, and the collectd_http pre-trained sourcetype
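As a sketch of how the reliability and routing options above come together, the snippet below layers acknowledgement and index-routing settings onto a base connector config. Property names such as `splunk.hec.ack.enabled` and `splunk.indexes` follow the connector's documented style but are assumptions to confirm in the docs, as are the topic and index names:

```python
# Hypothetical overrides: enable indexer acknowledgements and route each
# topic to a matching Splunk index. Property names are assumptions --
# confirm them against the Splunk Connect for Kafka documentation.
routing_overrides = {
    "splunk.hec.ack.enabled": "true",   # at-least-once delivery via indexer acks
    "topics": "web_logs,app_logs",      # topics align positionally with indexes
    "splunk.indexes": "web,app",        # 1-to-1 topic-to-index mapping
}

def apply_overrides(base: dict, overrides: dict) -> dict:
    """Return a new config dict with routing/ack overrides applied."""
    merged = dict(base)          # copy so the base config is untouched
    merged.update(overrides)
    return merged

base = {"splunk.hec.token": "REPLACE-ME", "tasks.max": "2"}
merged = apply_overrides(base, routing_overrides)
```

The positional mapping means the first topic lands in the first index and so on; for per-record routing decisions, props.conf on the indexers is the alternative the list above mentions.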

Getting Started

To get started, download Splunk Connect for Kafka from Splunkbase. Install the JAR file across your Kafka Connect nodes, and restart these nodes with updated properties to enable the Splunk connector. Please consult our documentation for additional instructions, configuration options, and troubleshooting. Also, check out Don Tregonning's blog post for a full guide to getting set up and tips for configuring your end-to-end pipeline.
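Concretely, "install the JAR and update properties" usually means placing the connector JAR in each worker's plugin directory and pointing the worker at it. A minimal sketch of the relevant worker settings, with paths and hostnames as illustrative assumptions:

```properties
# connect-distributed.properties (illustrative values -- adjust for your install)
bootstrap.servers=kafka1:9092,kafka2:9092
# Directory where the Splunk connector JAR was placed on this worker
plugin.path=/opt/kafka/connect-plugins
```

After restarting each worker with these settings, the connector class becomes available to the cluster and can be instantiated through the Connect REST API.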

With the introduction of Splunk Connect for Kafka, we recommend shifting existing use of the legacy Splunk Add-on for Kafka for consuming Kafka topics to the new connector. The add-on will continue to be supported for monitoring your Kafka environment using JMX.

And in case you're wondering: yes, Splunk Connect for Kafka is open source! You can access the source code in our GitHub repo.

Posted by Michael Lin

Michael Lin is a Senior Product Manager at Splunk focused on data onboarding. He has worked across Splunk's core platform, including creating its internal product analytics framework and data pipeline. He previously worked in consumer tech, including a mobile content services startup and Samsung Mobile, based in its South Korean headquarters. Michael has a B.S. in Computer Science from the University of Maryland and an M.B.A. from the University of Chicago Booth School of Business.
