
How to Leverage Synthetic Monitoring to Get a Full View of Performance in a Cloud-Native World

By Billy Hoffman

For reliability engineering and performance management teams, it can be easy to think of synthetic monitoring as a mere dress rehearsal rather than something you should use for the real show. SRE teams are aware of other approaches to performance monitoring, such as real user monitoring (RUM). Compared to these alternative approaches, synthetic monitoring, which uses data from simulated transactions instead of actual user requests, can feel less capable or valuable. Synthetic monitoring may seem useful for performing basic analysis of certain key applications or services pre-deployment, but not once applications are in production.

In reality, teams need to make full use of every monitoring technique at their disposal. Modern application environments and user engagements have grown so complex that using just one approach — such as only synthetic monitoring or only RUM — leads to blindspots and incomplete data. With respect to synthetic monitoring, this means not only that synthetic data collection and analysis need to happen at all stages of the software delivery process — including in production — but also that they should apply to all applications and services, not just a sampling.

This whitepaper explains why synthetic monitoring is a critical component of any holistic monitoring approach. It also describes best practices for achieving end-to-end synthetic monitoring, even in a large-scale environment that includes dozens of different services and transaction types to monitor.

What is synthetic monitoring?

Synthetic monitoring refers to the analysis of data from simulated transactions to evaluate the health or performance of an application or service. Synthetic monitoring uses an active monitoring approach to understand application behavior: It impersonates a client, such as a web browser or API client, and generates its own traffic as it interacts with a service.

In contrast, real user monitoring, or RUM, takes a passive approach by recording the experience that happens from real-world transactions driven by live users.

When teams perform synthetic monitoring, they typically use scripts or emulated environments to mimic the actions that real users might perform when using an application or service. Then, they use tools to collect and analyze the resulting transaction data. To do this efficiently, synthetic monitoring tools should be automated to provide feedback early in the application delivery lifecycle. Manual synthetic monitoring may suffice for one-off testing of small-scale environments, but it doesn’t deliver the systematic insights that teams need to find issues early and fix them before they impact end users.
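The scripted-transaction loop described above can be sketched in a few lines of Python. This is a minimal illustration, not a specific product's API: the function name `run_synthetic_check` and the result dictionary shape are assumptions, and the `transaction` argument stands in for whatever scripted user journey (for example, a Selenium script) your tooling runs.

```python
import time


def run_synthetic_check(transaction, latency_budget_s=2.0):
    """Run one scripted transaction and judge it against a latency budget.

    `transaction` is any zero-argument callable that performs the simulated
    user actions (e.g. a Selenium login flow) and raises on failure.
    """
    start = time.monotonic()
    try:
        transaction()
    except Exception as exc:
        # The simulated user journey failed outright.
        return {"ok": False, "error": str(exc),
                "elapsed_s": time.monotonic() - start}
    elapsed = time.monotonic() - start
    # Succeeding too slowly still counts as a failed check.
    return {"ok": elapsed <= latency_budget_s, "error": None,
            "elapsed_s": elapsed}


# A trivial stand-in for a scripted user journey:
result = run_synthetic_check(lambda: None)
```

In practice a scheduler or CI pipeline would invoke such a check on a fixed cadence, which is what turns one-off testing into systematic monitoring.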

How synthetic monitoring complements other approaches

In many ways, synthetic monitoring is the opposite of RUM. RUM involves collecting and analyzing data from live transactions that are generated by real users rather than scripts running in simulated environments.

Because synthetic monitoring drives its own traffic, it excels in several ways that RUM does not:

1. Synthetic monitoring makes it possible to detect availability issues. Since RUM merely records the experiences of real users, it cannot detect problems in contexts where there are no real users to observe. If a user cannot access a service at all, for instance, there is no transaction data to record. Because synthetic monitoring provides its own traffic, it can detect these availability issues.

2. Synthetic monitoring makes it possible to analyze application behavior without having to expose an untested application to end users, whose experience will suffer if the release turns out to have flaws. This means that synthetic monitoring can be used in pre-production environments, which RUM cannot.

3. Synthetic monitoring makes it easy to simulate a multitude of different variables. Since you can control the settings and environment of the synthetic tests — such as different web browser versions or network connections — synthetic monitoring allows you to experiment and test different scenarios.

4. Synthetic monitoring may be faster and more efficient than RUM, especially when using techniques such as “headless” mode, which simulates only the necessary parts of an application environment when running test transactions. By avoiding the need to run unnecessary services or components for each test, headless mode leads to faster tests while consuming fewer system resources.
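The availability point (item 1) can be sketched as a minimal active probe using only Python's standard library. The function name and the injectable `opener` parameter are assumptions made here so the logic can be exercised without real network access; in production it defaults to urllib's `urlopen`.

```python
from urllib.request import urlopen


def check_availability(url, timeout_s=5.0, opener=urlopen):
    """Actively probe `url`, returning True only for an HTTP 2xx/3xx reply.

    Unlike RUM, this generates its own traffic, so it can report an outage
    even when no real user ever reaches the service.
    """
    try:
        with opener(url, timeout=timeout_s) as response:
            return 200 <= response.status < 400
    except OSError:
        # DNS failure, refused connection, timeout: the service is unreachable.
        return False
```

A real synthetic monitoring tool layers scheduling, alerting, and geographic distribution on top of this basic probe, but the active-traffic principle is the same.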

In each of these ways, synthetic monitoring delivers crucial insights that typically can’t be obtained efficiently in other ways. It’s an essential ingredient in a modern, holistic monitoring strategy.
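The ability to control test variables (item 3 above) is often implemented as a scenario matrix: the cross product of every browser, network condition, and region you want to simulate. The dimension values below are hypothetical placeholders; real values depend on your tooling.

```python
import itertools

# Hypothetical test dimensions; substitute whatever your tooling supports.
BROWSERS = ["chrome-120", "firefox-121", "safari-17"]
NETWORKS = ["broadband", "4g", "3g"]
REGIONS = ["us-east", "eu-west"]

# Every combination becomes one synthetic test configuration.
SCENARIOS = [
    {"browser": b, "network": n, "region": r}
    for b, n, r in itertools.product(BROWSERS, NETWORKS, REGIONS)
]
# 3 browsers x 3 networks x 2 regions = 18 configurations
```

Running the same scripted transaction across all 18 configurations surfaces problems, such as a slow third-party asset on 3G connections, that a single default-environment test would miss.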

Why end-to-end synthetic monitoring?

Historically, teams tended to leverage synthetic monitoring most often as part of their pre-deployment testing routine. They used automated testing frameworks like Selenium to script simulated transactions within their applications, then analyzed the data to validate that their release performed as required prior to deploying a new release into production. Once the release was in production, they would shift to RUM and rely on monitoring data from actual transactions.

That made sense when monitoring needs were straightforward and application hosting environments were relatively simple.

If you’re deploying a monolithic web application hosted in a virtual machine, for instance, the variables at play in each transaction are relatively few. Each transaction will involve the same application frontend and backend, and the same server. Under these circumstances, RUM may be enough to achieve accurate observability into production environments.

Synthetic monitoring in a cloud-native world

In today’s cloud-native world, however, applications tend not to be so simple. They are more likely to be composed of a dozen or more microservices, each hosted in a different set of container instances. The containers may be distributed across a large cluster of servers. A user accessing the app may interact with a variety of different services during each engagement.

In this type of environment, transactions come in a variety of constantly changing forms. User requests could be routed in a multitude of ways across the sprawling microservices environment. The set of services and service integrations triggered by each request is likely to vary from one user to another. And because environment conditions constantly change as container instances spin up and down and individual services are updated, data collected from one transaction may not be representative of another, even if the request is identical in both cases.

RUM plays an important role in tracking these complex transactions. But so does synthetic monitoring, which reinforces your team’s ability to analyze the reliability and performance of each service not just pre-deployment, but also in production. By using synthetic monitoring in production to evaluate the behavior of every microservice under every likely condition within the environment, your team is in a much stronger position to identify outliers or unexpected problems that may not be evident from real-user transactions until it’s too late.
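Covering every microservice, rather than a sampling, amounts to sweeping one probe across a full service catalog. Everything in this sketch is illustrative: the catalog entries are made-up URLs, and `probe` stands in for whatever health or latency check your tooling performs per endpoint.

```python
# Hypothetical service catalog for a microservices app; in practice these
# endpoints would come from service discovery or configuration.
SERVICES = {
    "catalog":  "https://shop.example.com/api/catalog/health",
    "cart":     "https://shop.example.com/api/cart/health",
    "checkout": "https://shop.example.com/api/checkout/health",
}


def sweep(services, probe):
    """Run one synthetic probe against every service and collect results.

    `probe` is any callable that takes a URL and returns a health verdict,
    so no service is left out of the monitoring rotation.
    """
    return {name: probe(url) for name, url in services.items()}
```

Because the sweep is driven by the catalog rather than by user traffic, a service that users rarely touch still gets probed on every cycle, which is exactly the coverage gap RUM leaves open.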

Building an end-to-end synthetic monitoring strategy

Getting the most from synthetic monitoring in modern environments requires more than simply deploying synthetic monitoring tools at all stages of the application delivery pipeline. Teams must take additional steps to ensure that they optimize the visibility that synthetic monitoring provides as part of their broader performance management strategy.

Use application-agnostic monitoring tools

A basic first step toward end-to-end synthetic monitoring is to ensure that the monitoring tools you use can support any type of application or service that you need to monitor.

Today’s applications come in an array of forms, from web apps, to native mobile and server applications, to hybrid applications. They can be built using an assortment of different cloud and on-premises services, and they can be deployed using bare-metal servers, virtual machines, containers, serverless functions, or (as is common in modern environments) a combination of these various technologies.

For this reason, monitoring solutions that work with only one kind of application or only one cloud platform are not enough to guarantee holistic visibility via synthetic monitoring. Look instead for tools that are compatible with whatever you throw at them. Even if you don’t need to monitor a certain type of application or service today, you may in the future as your applications evolve.

Monitor for uptime and performance

Delighting users means ensuring not just that applications and services remain available, but also that they perform adequately. An unacceptably slow transaction is just as bad as a transaction that doesn’t work at all because a service is down.
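The point that slow is as bad as down can be encoded directly in how a check classifies its result. This is a sketch with assumed names (`classify`, a default one-second SLO), not a standard API.

```python
def classify(reachable, elapsed_s, slo_s=1.0):
    """Combine availability and latency into a single health verdict.

    A transaction that completes but blows its latency SLO is treated as
    a failure mode in its own right, not a success.
    """
    if not reachable:
        return "down"
    if elapsed_s > slo_s:
        return "degraded"
    return "healthy"
```

Alerting on "degraded" as well as "down" means uptime and performance are monitored with the same synthetic checks rather than as separate concerns.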

 

Monitor interactions, not individual services

When you set up synthetic monitoring routines, you typically collect data from all of the applications or services you need to monitor. In order to make full use of that data, it’s critical to analyze how an event in one service impacts the availability or performance of another service.

Holistic visibility into service interactions is crucial for microservices environments where the user experience may be poor, even if each service is performing adequately on an individual basis. Data that takes too long to move from one service to another, for example, or problems associated with service discovery and orchestration during certain types of transactions could lead to major disruptions for users. Synthetic monitoring operations can surface such problems before they impact actual users.
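One way to monitor interactions rather than individual services is to time each hop of a scripted multi-service transaction, so that a slow hand-off between two otherwise healthy services stands out. The step names and helper functions below are hypothetical.

```python
import time


def timed_transaction(steps):
    """Run the named steps of a multi-service transaction in order,
    recording how long each step takes.

    `steps` is a list of (name, callable) pairs; each callable performs
    one hop of the scripted transaction (auth, add-to-cart, pay, ...).
    """
    timings = {}
    for name, step in steps:
        start = time.monotonic()
        step()
        timings[name] = time.monotonic() - start
    return timings


def slowest_step(timings):
    """Name the step contributing the most latency to the transaction."""
    return max(timings, key=timings.get)
```

Per-step timings turn a vague "checkout is slow" alert into "the cart-to-payment hand-off is slow," which is the interaction-level insight this section argues for.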

Leverage synthetic monitoring as part of a broader toolset

Synthetic monitoring is one powerful tool in the modern reliability engineer’s toolbox, but it’s not the only one. To deliver the greatest value, synthetic monitoring should be paired with other monitoring and observability techniques, including not just RUM, but also log analysis, metrics collection and tracing.

Focus on leveraging synthetic monitoring to gain insights that these other techniques can’t deliver with the same efficacy, such as application behavior under unusual conditions that real users trigger only occasionally.

Be sure to aggregate and analyze all of the data produced by your various monitoring systems and routines in a single place. It’s only by integrating and correlating all of the information at your disposal that you can gain complete contextual insight into the state of your systems and total visibility into problems.

Conclusion

The complexity and scale of today’s application environments and user requests demand a monitoring strategy that offers more precision and broader coverage than ever before. End-to-end synthetic monitoring that delivers visibility into all applications and services, both pre- and post-deployment, is an essential component of such a strategy. By allowing you to gain insights that other monitoring techniques don’t provide, end-to-end synthetic monitoring delivers vital context that your team needs when observing its systems as a whole.

Splunk offers a complete digital experience monitoring platform for integrating end-to-end synthetic monitoring into your broader reliability engineering and performance management operations. Using Splunk Synthetic Monitoring, you can simulate transactions via Selenium to test all of the conditions and variables that you need to evaluate, in any application or service, at all stages of the application lifecycle. You can then correlate and analyze your synthetic monitoring data easily by integrating with Splunk Observability Cloud. Watch this demo and start your free trial of Splunk Observability Cloud today.

 
