How to Monitor and Validate Data from API Endpoints

In a previous post, we covered four areas that are important to test when monitoring APIs: availability, response time, data validation, and multi-step processes. This post will focus specifically on availability and data validation for responses from API endpoints.

When we’re monitoring a website in a browser, we want to go beyond checking the response code and confirm that content and images actually load on the page. If the page returns a “200 OK” but is completely blank, that’s something we’ll want to investigate right away. The same concept applies to API endpoints: we want to confirm not only that the response code is what we expect but also that the right data comes back in the right format.

Let’s walk through a simple use case for how to use basic Extract and Assert options to validate that an API returns data in the correct format.

Monitor and Validate Data

The Splunk Synthetic Monitoring App has an open API that Splunk Synthetic Monitoring users rely on to regularly pull data for reporting. When a user hits the endpoint for their check with an API key, it’s important that we return a “200 OK” response code along with the data set for the check ID that matches the endpoint.

We can create an external, synthetic test to hit the check endpoint at a set frequency from multiple locations and confirm that:

  • Response Code = 200, and
  • The check ID included in the JSON output matches the URL endpoint we’re hitting

API Check Steps Example

In the example above we’re using a Splunk Synthetic Monitoring API Check to:

  • Make a request with an API Key to Splunk Synthetic Monitoring API’s endpoint for Real Browser Check data
  • Assert that the Response Code contains the value ‘200’
  • Extract the check ID from the JSON using JSON path
  • Assert that the check ID extracted from the JSON path is the expected value
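The four steps above can be sketched in Python. This is a minimal, self-contained illustration, not the Splunk Synthetic Monitoring implementation: the payload shape (a top-level `id` field) is an assumption, and the dotted-path `extract` helper is a tiny stand-in for a full JSON path engine.

```python
import json

def extract(data, json_path):
    """Walk a dotted path (e.g. 'check.id') through parsed JSON.
    A minimal stand-in for a real JSON path expression engine."""
    value = data
    for key in json_path.split("."):
        value = value[key]
    return value

def validate_check_response(status_code, body, expected_id):
    """Mirror the four API Check steps: assert the status code,
    parse the body, extract the check ID, and assert its value."""
    # Step 2: assert that the response code is 200
    assert status_code == 200, f"unexpected status: {status_code}"
    # Step 3 implicitly requires the body to be valid JSON at all
    data = json.loads(body)
    # Step 3: extract the check ID via a JSON path
    check_id = extract(data, "id")
    # Step 4: assert that the extracted ID matches the endpoint's ID
    assert check_id == expected_id, f"id mismatch: {check_id}"
    return check_id

# Simulated response for check ID 12345 (hypothetical payload shape)
body = '{"id": 12345, "name": "Homepage check", "frequency": 5}'
validate_check_response(200, body, expected_id=12345)
```

In a real monitor, the status code and body would come from an authenticated HTTP request to the check endpoint rather than a hard-coded string.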

This very simple user flow helps us test:

  • Availability – the check will fail if the API returns a response code that’s not 200 OK
  • Data Format – if the data comes back from the API in a format other than JSON then the step to extract a value using JSON path will fail the check
  • Data Quality – if we’re able to extract a value for the ID but it doesn’t match the expected value, then the Assert step will fail the check
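To make the three test dimensions concrete, here is a hedged sketch of how each failure mode would surface when checking a response. The function names and the assumption that the check ID lives in a top-level `id` field are illustrative, not part of the Splunk API:

```python
import json

def classify_check_result(status_code, body, expected_id):
    """Label which test dimension a response fails, mirroring the
    Availability / Data Format / Data Quality breakdown above."""
    if status_code != 200:
        return "availability"      # non-200 response code fails the check
    try:
        data = json.loads(body)
    except json.JSONDecodeError:
        return "data format"       # body is not JSON, so extraction fails
    if data.get("id") != expected_id:
        return "data quality"      # ID extracted, but not the expected value
    return "pass"

print(classify_check_result(503, "", 42))                   # availability
print(classify_check_result(200, "<html>oops</html>", 42))  # data format
print(classify_check_result(200, '{"id": 99}', 42))         # data quality
print(classify_check_result(200, '{"id": 42}', 42))         # pass
```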


This example alert shows which step failed and who was notified.

For example, if we receive an alert that our external monitor was unable to extract the check ID from the JSON, we would want to visit the alert and inspect the response body from the API endpoint. By looking at the response body we could quickly see whether the format was incorrect or the id value was missing from the output, which would help us start troubleshooting right away.

This is just one simple example of how to implement robust monitoring for an API. If your current API tests only monitor response code and response time, it might be time to consider adding additional criteria for data format and quality.


Learn more about Digital Experience Monitoring