TIPS & TRICKS

Splunk Java SDK Updates Now Available

Recent updates to our Java SDK, currently in “Preview”, make it even easier to build Splunk applications in Java.  Get the Java SDK now on GitHub!

Enhanced Events Processing

Updates to the Splunk Java SDK include XML, JSON, and CSV streaming results readers that parse events into key-value pairs. The previous version of the SDK provided only raw streaming of event data. The XML streaming reader uses the JDK's built-in XML tokenizing support, while the JSON and CSV readers require separate .jar files, which are included in the SDK.

The following example uses the built-in XML streaming reader:

Job job = service.getJobs().create(query, queryArgs);
...

HashMap<String, String> map;
InputStream stream = job.getResults(outputArgs);
ResultsReader resultsReader = new ResultsReaderXml(stream);
while ((map = resultsReader.getNextEvent()) != null) {
    for (String key: map.keySet())
        System.out.println(key + " --> " + map.get(key));
}
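
The JSON and CSV readers follow the same pattern. Here is a minimal sketch using ResultsReaderJson; it assumes the results are requested with output_mode=json and that the JSON support .jar included with the SDK is on the classpath:

// request the search results in JSON format
Args outputArgs = new Args();
outputArgs.put("output_mode", "json");

HashMap<String, String> map;
InputStream stream = job.getResults(outputArgs);
ResultsReader resultsReader = new ResultsReaderJson(stream);
while ((map = resultsReader.getNextEvent()) != null) {
    for (String key: map.keySet())
        System.out.println(key + " --> " + map.get(key));
}
resultsReader.close();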

Splunk Storm support

Splunk Storm is a cloud service-based version of Splunk for users who want a turnkey, fully managed and hosted service for their machine data. Developers can build statistical analysis into applications, find and isolate bugs or performance problems in code, and record and analyze events using semantic logging. This update adds support for Splunk Storm. To work with Storm, you simply create a StormService rather than a Service:

// the storm token provided by Splunk
Args loginArgs = new Args("StormToken",
    "p-n8SwuWEqPlyOXdDU4PjxavFdAn1CnJea9LirgTvzmIhMEBys6w7UJUCtxp_7g7Q9XopR5dW0w=");
StormService service = StormService.connect(loginArgs);

// get the receiver object
Receiver receiver = service.getReceiver();

// index and source type are required for storm event submission
Args logArgs = new Args();
logArgs.put("index", "0e8a2df0834211e1a6fe123139335741");
logArgs.put("sourcetype", "yoursourcetype");

// log an event.
receiver.log("This is a test event from the SDK", logArgs);

Extending the API for full REST endpoint coverage

This update includes a pagination feature for data returned from Splunk. You can page through Splunk metadata with count and offset arguments rather than retrieving all of the data at once.

ConfCollection confs;
Args args = new Args();
int offset = 0;
args.put("count", 30);
args.put("offset", offset);

confs = service.getConfs(args);
// ... operate on the first 30 elements
offset = offset + 30;
args.put("offset", offset);
confs = service.getConfs(args);
// ... operate on the next 30 elements

The Java SDK also includes a namespace feature, exposed as optional arguments (app, owner, sharing) to the collections' create and get methods. Namespace specifications allow you to apply access control to saved searches and dashboards, making it easier to manage how Splunk data is shared. The following example creates a saved search in the namespace "owner = magilicuddy, app = oneMeanApp":

String searchName = "My scoped search";
String search = "index=main * | head 10";
Args args = new Args();
args.put("owner", "magilicuddy");
args.put("app", "oneMeanApp");

// ... other creation arguments also get set into the args map

savedSearches.create(searchName, search, args);

This example shows how to return all saved searches within the same scoped namespace:

Args args = new Args();
args.put("owner", "magilicuddy");
args.put("app", "oneMeanApp");
SavedSearchCollection mySavedSearches = service.getSavedSearches(args);

Making it Easier to Get Data into Splunk

The new Receiver class makes it easier to get your data into Splunk. Support has been added for a default index, along with optional parameters for streaming connections.
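
Here is a minimal sketch of how this might look. It assumes a Service connected as shown earlier, a Receiver with submit() for one-shot events and attach() for an open streaming socket (as in the preview SDK), and a hypothetical sourcetype value:

// get the receiver for the connected service
Receiver receiver = service.getReceiver();

// submit a single event to the default index
receiver.submit("This event goes to the default index");

// optional parameters (for example, a sourcetype) for a streaming connection
// (assumes attach() returns a java.net.Socket, as in the preview SDK)
Args streamArgs = new Args();
streamArgs.put("sourcetype", "yoursourcetype");
Socket socket = receiver.attach("main", streamArgs);
OutputStream stream = socket.getOutputStream();
stream.write("This is a streamed event\r\n".getBytes("UTF-8"));
stream.close();
socket.close();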

Getting Started & Staying Connected

Watch and fork Splunk’s Java SDK on GitHub.  Learn more about how to get started with the Java SDK on our developer site.  Stay up to date on the latest developments by following us on Twitter at @splunkdev.

----------------------------------------------------
Thanks!
Jon Rooney

Splunk