Tips and Tricks with ServiceNow for Splunk

Given Splunk’s release of a full integration with ServiceNow, I thought it might be nice to describe some of the functions and possibilities available within the app.  If you download and deploy it today, you’ll be able to generate events or incidents within ServiceNow (with event generation being a relatively new offering).  You can also track the events and incidents that have been generated, via the feed coming back from ServiceNow into Splunk.  We’ve also included several ‘basic’ dashboards to give users a taste of what they can do.  So let’s explore what you can do, and what you can splunk, beyond the configuration that is included out of the box.

Within the app, there are three very important files.  The first is the Python polling script, which lives under the app’s bin/ directory.  This script is responsible for establishing communication and passing data from ServiceNow into Splunk (*hint, hint* if you want to set up a distributed deployment, this will live on a universal forwarder).  The second is one you should be familiar with, inputs.conf, and the third is snow.conf.  Inputs.conf is familiar and controls the execution rate of the scripted input, along with the target indexes and sourcetypes.  The file snow.conf contains all of the configuration information and parameters that will be passed to the polling script, making snow.conf the file you will want to focus on first.  If you look at it out of the box, you’ll notice a few stanzas, such as the incident stanza (this is simply a sample configuration):
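To make the division of labor concrete, here is a minimal Python sketch of what a scripted input like this does conceptually: build a query URL against the configured ServiceNow endpoint, then render each returned record as a key=value event on stdout for Splunk to index. This is an illustration only, not the app’s actual code; the function names are mine, and the URL scheme assumes the ServiceNow Table API, so check your instance’s API before borrowing it.

```python
from urllib.parse import urlencode

def build_query(instance, endpoint, timefield, last_run, limit):
    """Construct a ServiceNow Table API URL fetching records updated
    since the last poll, ordered by the configured time field."""
    params = {
        "sysparm_query": "{0}>{1}^ORDERBY{0}".format(timefield, last_run),
        "sysparm_limit": limit,
    }
    return "https://{0}/api/now/table/{1}?{2}".format(
        instance, endpoint, urlencode(params))

def format_event(record, exclude=()):
    """Render one record as a key=value line for Splunk to index,
    skipping any fields named in the exclude list."""
    return ", ".join('{0}="{1}"'.format(k, v)
                     for k, v in sorted(record.items())
                     if k not in exclude)

if __name__ == "__main__":
    url = build_query("myinstance.service-now.com", "incident",
                      "sys_updated_on", "2013-01-01 00:00:00", 1000)
    print(url)
    print(format_event({"sys_id": "abc123", "state": "2",
                        "description": "a very long text field"},
                       exclude=("description",)))
```

The real script also has to authenticate, checkpoint the last poll time, and handle pagination, but the shape above is the core loop.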

[incident]
endpoint =
limit = 1000
exclude = close_notes,description,comments,comments_and_work_notes
timefield = sys_updated_on
keyfield = sys_id
lookup = incident.csv

Let’s take a look at what this is doing. Be mindful that there are NO default settings for these configurations, so you will need to set each one explicitly.

[MyStanza] Your stanza name, which is also the argument appended to the Python script, e.g. ‘/apps/bin/<script_name> MyStanza’
endpoint = The endpoint (table) to bind to within ServiceNow
limit = 0000 Sets a cap on how many records to retrieve during a single poll
exclude = my_excluded_fields,in_a_comma_separated_list A comma-separated list of fields that will be ignored during the poll
timefield = my_time_field Which field Splunk should bind to as the event time (a few endpoints have many time fields to choose from)
keyfield = your_key A unique field that can be used for de-duplication, and, more importantly, is used if you are writing to a lookup
lookup = "stream" -or- lookupfile.csv Tells the script to either stream the events directly into the Splunk index, or write them to a lookup file stored in snow/lookups.  Streaming is usually preferred, but lookups have their advantages when sorting through massive amounts of data (e.g. 1M records). **More on this in the documentation.
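Putting those settings together, a hypothetical stanza for pulling change records might look like the following. The field names shown are common ServiceNow defaults, but treat them as an example and check your own instance’s table schema before copying:

```
[change_request]
endpoint = change_request
limit = 1000
exclude = description,work_notes
timefield = sys_updated_on
keyfield = sys_id
lookup = stream
```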

Following this, you would need to add a matching stanza in inputs.conf, similar to those already defined for incident and event.
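As a sketch, such an inputs.conf stanza might resemble the one below. The script name, interval, index, and sourcetype here are placeholders for illustration, so mirror whatever the shipped incident and event stanzas use in your deployment:

```
[script://./bin/<script_name> change_request]
interval = 300
index = snow
sourcetype = snow:change_request
disabled = 0
```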

Internally, we use these feeds from ServiceNow to analyze projects, detect stale incidents, and discover incidents that are about to violate their SLA response time.
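As a rough illustration of the stale-incident case, a search along these lines could surface incidents that have gone untouched for more than a week. The index, sourcetype, state value, timestamp format, and seven-day threshold are all assumptions for the example, not searches shipped with the app:

```
index=snow sourcetype=snow:incident state!=7
| stats latest(sys_updated_on) AS last_update BY number
| eval age_days=(now()-strptime(last_update,"%Y-%m-%d %H:%M:%S"))/86400
| where age_days > 7
```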

Armed with this information, you can easily pull data from any endpoint within ServiceNow into Splunk.  A very valuable use case for this is tracking changes to your environment.  You could easily pull in change records, as well as splunk particular configuration files.  By building a search that detects changes on a system and correlates that information against change records, discovering unauthorized changes is no longer a challenge; you can use Splunk to automate it for you! Why not close the loop and use Splunk to create an incident for an unauthorized change, or even take it a step further and use ServiceNow Orchestration to revert the change?
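One hedged sketch of that correlation, assuming you used the lookup mode described above to write change records to a lookup file: flag any file-change event on a host that has no corresponding change record. Every name below (the index, sourcetype, lookup file, and field names) is hypothetical and would need to match your own data:

```
index=os sourcetype=fs_change
| lookup change_request.csv host OUTPUT number AS change_number
| where isnull(change_number)
```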


Posted by Dennis Bourg