Synthetic Monitoring for CI/CD Pipelines

For DevOps teams, delivering quality software has long meant reconciling a major tension: in a perfect world, you’d catch every issue in each new release of your application before you deployed the release into production. But in the real world, doing so is tricky, not least because it’s hard to collect data about application performance before the application is actually deployed.

To put this another way: until your application is being used by real users, and you can trace actual transactions, how can you be sure the application actually performs as required?

The answer is synthetic monitoring. With synthetic monitoring, you can detect problems in an application release pre-deployment, using data from synthetic transactions rather than real-user transactions. Not only does this mean that fewer bugs reach your end-users, but it also makes issues easier and faster to resolve.

Here’s a primer on how synthetic monitoring works and how to implement it in order to catch application problems early in the CI/CD pipeline.

Why Synthetic Monitoring?

Synthetic monitoring is one of a few methods of monitoring for CI/CD pipelines. In a nutshell, synthetic monitoring is a monitoring technique in which engineers run scripts that simulate user transactions. Then, they monitor and analyze the transactions to determine how the application would respond if a real user initiated the same transaction.
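To make the idea concrete, here is a minimal sketch of a scripted synthetic transaction. All names and the 2-second budget are hypothetical; a real setup would drive a browser or API client against a staging environment rather than the throwaway local stub used here.

```python
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class StubApp(BaseHTTPRequestHandler):
    """Stand-in for the application under test."""
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"checkout ok")

    def log_message(self, *args):
        pass  # keep the check's output quiet

def run_synthetic_check(url, budget_seconds=2.0):
    """Execute one scripted transaction and time it against a budget."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        body = resp.read()
        status = resp.status
    elapsed = time.perf_counter() - start
    return {
        "status": status,
        "elapsed_s": round(elapsed, 3),
        "ok": status == 200 and elapsed <= budget_seconds,
        "body": body.decode(),
    }

# Spin up the stub app on a free port and run the scripted transaction.
server = HTTPServer(("127.0.0.1", 0), StubApp)
threading.Thread(target=server.serve_forever, daemon=True).start()
result = run_synthetic_check(f"http://127.0.0.1:{server.server_port}/checkout")
server.shutdown()
print(result["status"], result["ok"])
```

The point is not the specific request, but the pattern: the transaction is initiated by a script rather than a person, so it can run on demand, long before real traffic exists.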

This provides many advantages:

  • Validate application behavior before releasing the application to real users.
  • Compare how changes to an existing page affect key performance metrics relative to the page in production. For example, make a change to the JavaScript of an image carousel and see how it impacts page metrics.
  • Understand the impact of adding third-party code. For example, see the effect of a new A/B testing library or chat widget in staging.
  • Experiment with different options or techniques to see if performance improves. For example, implement different Resource Hints to see if critical metrics like Web Vitals improve.

While all these advantages are helpful, the value of synthetic monitoring lies in more than just the ability to collect monitoring data without waiting on real users to initiate certain types of requests. The most important benefit of synthetic monitoring is that it allows you to validate application behavior earlier in the CI/CD pipeline. Instead of having to wait until the application is deployed to get feedback about its performance, you can use synthetic monitoring to evaluate its performance during the development and testing stages of the CI/CD pipeline.

This is important not just because you get earlier alerts about problems in your application, but also because most issues are much easier to fix early on than after your release is already in production. Having to roll back a problematic release is a big deal that may disrupt users, especially if it means taking away new functionality that has already been deployed. When you catch issues pre-deployment, you can fix them more smoothly, without disrupting the production environment.

How Does Synthetic Monitoring Work?

To perform synthetic monitoring, engineers use frameworks that allow them to script application requests and then automatically execute and monitor the transactions. Selenium is probably the most popular open source framework for synthetic testing, although it’s often used in conjunction with proprietary tools that make it easier to orchestrate tests and analyze results.
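Whatever framework scripts the transactions, it is typically wrapped in a runner that fails the CI stage when a check misses its budget. The sketch below shows that gate logic with stub transaction functions standing in for real Selenium scripts; the flow names and 1-second budget are illustrative assumptions.

```python
import time

def login_flow():
    """Placeholder transaction -- in practice this would drive a real
    browser via a framework such as Selenium WebDriver."""
    time.sleep(0.01)  # simulate the work of a scripted login
    return {"status": 200}

def search_flow():
    time.sleep(0.01)  # simulate a scripted search transaction
    return {"status": 200}

def run_suite(checks, budget_seconds=1.0):
    """Run each scripted transaction and time it against a budget."""
    results = {}
    for name, fn in checks.items():
        start = time.perf_counter()
        outcome = fn()
        elapsed = time.perf_counter() - start
        results[name] = (outcome["status"] == 200
                         and elapsed <= budget_seconds)
    return results

results = run_suite({"login": login_flow, "search": search_flow})
# A nonzero exit code is what actually fails the pipeline stage.
exit_code = 0 if all(results.values()) else 1
print(results, exit_code)
```

In a real pipeline, the script would exit with `exit_code` so the build server marks the stage failed the moment any synthetic check breaks its budget.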

Best Practices for Synthetic Monitoring

Simply writing the first types of synthetic monitoring tests that come to mind and running them pre-deployment won’t guarantee meaningful visibility into your application release before your end-users encounter it. Instead, it’s important to keep several factors in mind as you plan a synthetic monitoring strategy.

Test Broadly

One is to ensure that your synthetic monitoring tests cover a wide variety of transaction types and variables. While it’s tempting to focus only on the most common transaction types or configurations that are likely to align with real users, it’s equally important to test for niche cases – such as transaction types that represent only a small fraction of overall traffic, or configurations that are uncommon among your user base.

You want to understand how your application will behave for all of your users, and you can only do that effectively if you perform synthetic monitoring across a wide variety of user profiles and use cases. Web analytics make this easier by revealing your users’ behavior, geographic locations, and common browsers and connection speeds.
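One simple way to get that breadth is to expand your checks over a matrix of user profiles. The dimensions below are hypothetical; in practice you would derive them from your own analytics data.

```python
from itertools import product

# Hypothetical profile dimensions, ideally drawn from web analytics.
browsers = ["chrome", "firefox", "safari"]
regions = ["us-east", "eu-west", "ap-south"]
connections = ["cable", "3g"]

# Build the full matrix of synthetic-test configurations, so niche
# combinations are exercised alongside the common ones.
matrix = [
    {"browser": b, "region": r, "connection": c}
    for b, r, c in product(browsers, regions, connections)
]
print(len(matrix))  # 3 browsers x 3 regions x 2 connections = 18 profiles
```

Each entry in the matrix then parameterizes one run of your scripted transactions, so an uncommon pairing like Safari on a 3G connection is tested just as routinely as the majority profile.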

Test All Application Components

Along similar lines, synthetic monitoring is most effective when it’s used to monitor all application components and services instead of just the most important ones. Ideally, you’ll integrate synthetic monitoring into your CI/CD pipeline so that all code – every release of every microservice – is monitored synthetically as soon as it’s built and ready to test.

Orchestrate Your Tests

When you have a large number of synthetic tests to run, keeping track of them all and executing them effectively becomes a challenge. Don’t take an ad hoc approach where you simply keep a library of tests on hand and try to remember which ones need to run when. Instead, use a synthetic monitoring tool that allows you to orchestrate test execution as well as keep track of changes to your tests.
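A dedicated tool handles this for you, but the core idea can be sketched as a small registry that tracks named checks in one place instead of relying on memory. The check names and tags here are hypothetical.

```python
class SyntheticSuite:
    """Tiny registry that tracks named checks and runs them together,
    rather than remembering ad hoc which scripts to run when."""

    def __init__(self):
        self._checks = {}

    def register(self, name, tags=()):
        def decorator(fn):
            self._checks[name] = (fn, set(tags))
            return fn
        return decorator

    def run(self, tag=None):
        """Run every check, or only those carrying the given tag."""
        results = {}
        for name, (fn, tags) in self._checks.items():
            if tag is None or tag in tags:
                results[name] = fn()
        return results

suite = SyntheticSuite()

@suite.register("homepage", tags=["smoke"])
def check_homepage():
    return True  # placeholder: would script and time a real transaction

@suite.register("checkout", tags=["smoke", "revenue"])
def check_checkout():
    return True  # placeholder for a scripted checkout flow

print(suite.run(tag="smoke"))
```

Tagging lets you run a quick smoke subset on every commit and the full suite nightly, and because the registry lives in code, changes to your tests are version-controlled alongside everything else.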

Test for Geographic Variables

One of the most powerful benefits of synthetic monitoring – but one that is also easy to overlook – is its ability to evaluate application behavior for users who are located in different geographic regions. Will users who are located far from your data centers experience unacceptable latency? Synthetic monitoring will help you answer that question before your release goes live.

So, be sure to test for geographic variables as well as the more obvious ones (like browser and operating system configurations). You can also use synthetic monitoring to compare how applications perform with and without the use of CDNs, which will also help you anticipate different types of user experiences. This is a great example of the “experiment” advantage discussed earlier.
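Evaluating those geographic results usually comes down to comparing per-region measurements against per-region budgets. The figures below are invented for illustration; real numbers would come from synthetic agents running in each region (for example, with and without a CDN in front of the application).

```python
# Hypothetical page-load times (seconds) reported by synthetic agents
# in each region, compared against per-region latency budgets.
measurements = {"us-east": 0.9, "eu-west": 1.4, "ap-south": 3.2}
budgets = {"us-east": 2.0, "eu-west": 2.5, "ap-south": 3.0}

# Collect every region whose observed time exceeds its budget.
violations = {
    region: (observed, budgets[region])
    for region, observed in measurements.items()
    if observed > budgets[region]
}
for region, (observed, budget) in violations.items():
    print(f"{region}: {observed}s exceeds budget of {budget}s")
```

A result like this flags that users far from your data centers would see unacceptable latency – exactly the kind of finding you want before the release goes live, while adding a CDN or an edge region is still a pre-deployment decision.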

Use Real User Monitoring, Too

While synthetic monitoring offers unique visibility into the performance of your applications prior to deployment, it should be augmented in production with real user monitoring (RUM). Synthetics in pre-production help forecast what users will experience, but only RUM, which analyzes actual transactions in production, can tell you what users actually experienced. Synthetic monitoring is one part of a broader performance and reliability management strategy, not a standalone solution.


In a world where customer expectations are higher than ever, synthetic monitoring helps you find and fix problems before they reach your end-users. Learn how Splunk offers a complete digital experience monitoring platform for integrating end-to-end synthetic monitoring into your reliability engineering and performance management operations.

By enabling you to evaluate application behavior across a wide variety of use cases and user configurations, synthetic monitoring maximizes your ability to catch issues early in the CI/CD pipeline, when they are easier to fix and they have not yet been inflicted upon real users.


This is a guest blog post from Chris Tozzi, Senior Editor of content and a DevOps Analyst at Fixate IO. Chris has worked as a journalist and Linux systems administrator, with particular interests in open source, agile infrastructure, and networking. This posting does not necessarily represent Splunk's position, strategies, or opinion.

Posted by

Stephen Watts

Stephen Watts works in growth marketing at Splunk. Stephen holds a degree in Philosophy from Auburn University and is an MSIS candidate at UC Denver. He contributes to a variety of publications including CIO.com, Search Engine Journal, ITSM.Tools, IT Chronicles, DZone, and CompTIA.