Test Before You Ship: Run Synthetics in Pre-Prod and Watch for Drift

It is 5 p.m. on a Friday. The deployment just wrapped up, the maintenance window is closing, and everyone is ready to start their weekend. Why oh why would you deploy at 5 p.m. on a Friday?

At the very next synthetic test interval, production alarms fire. Login is failing. Checkout is timing out. Your 24x7 synthetics, the same ones built to tell you when it is go time, are suddenly lighting up.

Now the executives want answers: How did this slip through? Why was it not caught earlier?
And just like that, all the hard work you put into making your synthetic monitoring reliable and trusted feels at risk.

The truth is simple. The break happened in pre-production first. It was detectable. It was measurable. It was avoidable. But nobody was watching that environment the way they watch production.

You may have heard me say that I slept well at night knowing that when synthetics tripped, we moved with confidence. That confidence comes from the discipline behind all the best practices in this series, including treating pre-production as a first-class citizen in your observability strategy.

This article covers the next best practice in the Getting Synthetics Right Series: Using synthetics in pre-production environments. If you’re new to the series, check out the introduction article to learn how these best practices come together to make your synthetic browser tests reliable and actionable.

What is testing before you ship?

Testing before you ship means treating your pre-production or prod-minus-one environment with the same seriousness as your production environment. It is not a smoke test or a quick manual click-through. It is full synthetic coverage running continuously against the same critical user journeys you monitor in production.

In most delivery pipelines, pre-production is the last stop before changes reach customers. Code, configuration, feature flags, dependencies, and UI updates all land here first. If something is going to break, slow down, or drift, this is usually where it shows up first.

When you run production-like synthetics in pre-production, you gain early warning for issues that would otherwise become outages, escalations, or late night bridge calls. Whether that is an actual breaking change or just a test that needs tweaking (selectors, assertions), you find it before it ever impacts real users.

Put simply, testing before you ship ensures that your synthetic monitors, your application, and your delivery process are all aligned before anything touches customers.

Learn more: See how tiered observability helps you prioritize and mature your observability practices.

Why it matters

Pre-production is where issues surface first. If you are not watching it, you are promoting changes into production without the one signal that could have warned you proactively. Running production-like synthetics in pre-production protects your customers, your teams, and your delivery pipeline.

If you are following a structured SDLC and promoting changes through environments, pre-production is the last opportunity to catch drift or failures before customers see them. (If you are not following an SDLC… well, that is beyond the scope of this article.)

Here is what consistent pre-production monitoring helps you catch:

| Area | Why it matters | What synthetics reveal early |
| --- | --- | --- |
| Deployment quality | Most issues appear in pre-production before reaching customers | Misconfigured secrets, missing flags, broken paths |
| Zero-downtime validation | Zero-downtime claims need actual verification | Brief spikes, rollout instability, slow post-deploy recovery |
| Developer productivity | Pre-production outages block builds, tests, and pipelines | Unstable or unusable environments that waste engineering hours |
| Test and selector drift | Drift in test assets leads to false positives in production | Broken selectors, stale assertions, UI changes |
| Performance regression | Small regressions become SLO burn in production | Early increases in LCP, TTFB, or step durations |
| Observability plumbing | Dashboards, spans, detectors, and routing need a safe proving ground | Broken detectors, missing tags, incorrect routing |
| Resiliency readiness | Failover and chaos should be tested somewhere safe | HA gaps, slow detection, unclear response workflows |

Putting it into practice: How to monitor pre-production the right way

Follow these steps to monitor your pre-production environments effectively.

1. Mirror production tests into pre-production using consistent tagging

Start by cloning your most important production synthetic tests into pre-production. The goal is parity. Keep everything the same: the journey steps, selectors, assertions, wait strategies, and run frequency.

Only update what is absolutely required, such as the target URL and the environment tag.

Everything else should match your production tests. This gives you clean side-by-side comparisons across environments, makes drift easy to spot, and keeps test maintenance low by preventing definitions from diverging over time.
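The parity idea can be sketched as a simple config diff: flag any field that differs between the production and pre-production copies beyond the few you intentionally changed. This is a minimal sketch with invented, simplified test definitions; real Splunk Synthetics browser test configurations carry many more fields, but the check generalizes.

```python
# Fields that are ALLOWED to differ between prod and pre-prod clones.
# Everything else differing is drift worth investigating.
ALLOWED_DIFFS = {"url", "environment_tag"}

def parity_violations(prod_test: dict, preprod_test: dict) -> set:
    """Return the set of fields that differ outside the allowed list."""
    all_keys = prod_test.keys() | preprod_test.keys()
    differing = {k for k in all_keys if prod_test.get(k) != preprod_test.get(k)}
    return differing - ALLOWED_DIFFS

# Illustrative (hypothetical) test definitions.
prod = {
    "name": "checkout-journey",
    "url": "https://shop.example.com",
    "environment_tag": "env:prod",
    "steps": ["login", "add_to_cart", "checkout"],
    "frequency_minutes": 5,
}
preprod = {
    "name": "checkout-journey",
    "url": "https://preprod.shop.example.com",
    "environment_tag": "env:preprod",
    "steps": ["login", "add_to_cart", "checkout"],
    "frequency_minutes": 5,
}

print(parity_violations(prod, preprod))  # empty set means the clone is in parity
```

A check like this can run on a schedule so the two definitions cannot quietly diverge over time.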

Learn more:

2. Keep tests running through deployments using downtime configurations

When you deploy into pre-production, do not turn off your synthetic tests. You want visibility into how the journey behaves before, during, and after a rollout. Instead, use a downtime configuration in Splunk Synthetic Monitoring to control how those runs are treated.

Splunk provides two rule types: one that pauses test runs entirely during the downtime window, and one that augments the data, letting tests keep running while labeling their results as occurring during maintenance.

For pre-production and any environment where you want to understand deployment impact, augmenting the data is the better choice. It keeps test execution uninterrupted while clearly marking that the results occurred during a known maintenance period. You get a complete picture of how the system behaved without polluting your core metrics or triggering notifications.

This lets teams analyze how critical journeys behaved before, during, and after the rollout, whether zero-downtime claims actually held, and how quickly the system recovered, all without waking responders unnecessarily.

Splunk’s downtime configuration UI makes it easy to schedule these windows, add the appropriate buffers, and ensure results are properly labeled. Any tests executed during an augment rule will include the under_maintenance:true dimension automatically.
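To illustrate how that automatic under_maintenance:true label keeps maintenance-window runs out of your core metrics, here is a minimal sketch using invented run records. In practice you would filter on the dimension in your charts and detectors rather than in application code; this just makes the idea concrete.

```python
def success_rate(runs, include_maintenance=False):
    """Fraction of successful runs, optionally ignoring maintenance windows."""
    considered = [
        r for r in runs
        if include_maintenance or not r.get("under_maintenance", False)
    ]
    if not considered:
        return None
    return sum(r["success"] for r in considered) / len(considered)

# Hypothetical run records around a pre-prod deployment window.
runs = [
    {"success": True},
    {"success": False, "under_maintenance": True},  # failed during the deploy
    {"success": True},
    {"success": True, "under_maintenance": True},
]

print(success_rate(runs))                            # core metric ignores the window
print(success_rate(runs, include_maintenance=True))  # full picture keeps every run
```

The same data supports both views: a clean core success rate for SLO-style reporting, and the complete series when you want to study deployment impact.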

Learn more: How to manage planned downtime with synthetics.

3. Make pre-production test health a promotion gate

If your synthetic tests are failing in pre-production, you should not be promoting anything to production. This applies whether the failure is caused by the application itself or by a synthetic test that needs attention.

Pre-production synthetics act as your last quality checkpoint before changes reach customers. If those tests are failing, it means one of two things: the application itself has a real problem, or the synthetic test has drifted and needs attention.

Either way, pushing forward introduces risk. Broken application behavior becomes a production incident. Broken tests become false positives that erode trust or hide real failures.

Treating pre-production test health as a promotion gate creates discipline in the delivery process. Teams learn to investigate failures promptly, fix broken tests instead of ignoring them, and promote only when the signal is clean.

This is not about slowing teams down. It is about ensuring that your production synthetics remain the reliable, trusted signal you intend them to be.

Pro tip: You can validate test execution directly in the Splunk Observability UI or by querying the Splunk Observability Cloud API. Both methods give you quick confirmation that tests are running as expected and returning usable results.
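As a sketch of what such an automated gate might look like in a pipeline: the endpoint path, auth header, and response shape below are assumptions modeled on the Splunk Observability Cloud REST API, so confirm the exact synthetics routes and fields in the official API documentation before relying on them.

```python
import os

# Your Observability Cloud realm; "us1" here is just a placeholder default.
REALM = os.environ.get("SFX_REALM", "us1")

def build_request(test_id: str) -> dict:
    """Assemble the pieces of an authenticated API call (not executed here)."""
    return {
        # Assumed route shape; verify against the official API reference.
        "url": f"https://api.{REALM}.signalfx.com/v2/synthetics/tests/{test_id}",
        "headers": {"X-SF-TOKEN": os.environ.get("SFX_API_TOKEN", "")},
    }

def gate_ok(last_runs: list) -> bool:
    """Promotion gate: every recent pre-prod run must have succeeded."""
    return bool(last_runs) and all(r.get("success") for r in last_runs)

# Simulated API payload; a real pipeline would fetch this over HTTPS.
last_runs = [{"success": True}, {"success": True}, {"success": False}]
if not gate_ok(last_runs):
    print("Pre-production synthetics failing; blocking promotion.")
```

In CI, a nonzero exit from a check like this is what actually stops the promotion; the gate is only as strong as the pipeline's willingness to honor it.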

4. Respond to pre-production failures on a predictable timeline

Pre-production does not need a full 24x7 on-call rotation, but it does need consistent and predictable attention. When pre-production is unhealthy, developers cannot develop, QA cannot validate, and pipelines cannot promote. It is a productivity outage, even if customers never see it.

Set a response expectation that aligns with how your engineering organization operates. For example:

If most of your teams work in IST, then a failing pre-production synthetic at 10 a.m. IST should be treated as blocking. The goal is not to escalate everything urgently, but to ensure failures do not linger unnoticed for days and slow down delivery.

5. Tune thresholds for the reality of pre-production

Pre-production changes more frequently than production. New builds, feature flags, config changes, dependency updates, and infrastructure tweaks all land here first. That natural churn can create noise if your thresholds mirror production too closely.

The goal is not to soften expectations, but to tune for the reality of the environment: allow longer evaluation windows or slightly wider thresholds so routine churn does not page anyone, while still catching genuine regressions.

Splunk Observability Cloud gives you flexibility in how you notify teams. You can route pre-production alerts to collaboration tools such as Slack, Jira, and ServiceNow. This keeps issues visible without overwhelming responders during normal development cycles.
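One way to picture this tuning is a per-environment alert policy: the same detection logic everywhere, but pre-production waits longer before firing and routes to a collaboration tool instead of a pager. The durations, severities, and route names below are illustrative assumptions, not Splunk defaults.

```python
# Hypothetical per-environment policies. In Splunk Observability Cloud the
# equivalent knobs live in detector rules and notification integrations.
POLICIES = {
    "prod":    {"lasting_minutes": 5,  "severity": "Critical", "route": "pagerduty"},
    "preprod": {"lasting_minutes": 30, "severity": "Minor",    "route": "slack"},
}

def alert_decision(env: str, minutes_failing: int):
    """Return an alert (severity + route) once a failure persists long enough."""
    policy = POLICIES[env]
    if minutes_failing < policy["lasting_minutes"]:
        return None  # tolerate churn below the environment's duration window
    return {"severity": policy["severity"], "route": policy["route"]}

print(alert_decision("preprod", 10))  # None: normal pre-prod churn
print(alert_decision("preprod", 45))  # fires, but to Slack, not a pager
```

The point of the structure is symmetry: both environments share one decision function, so tightening or loosening a policy is a data change, not a logic change.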

Learn more: Check out Splunk Observability Cloud’s OOTB Notification Service integrations.

6. Do your synthetic test development and tuning in pre-production, not production

Synthetic test development takes iteration. Selectors change, assertions evolve, and timing strategies often need refinement as applications mature. Pre-production is the right place for that work.

Tuning tests directly in production introduces blind spots, false positives, and alert suppression. That erodes the confidence you rely on when a production synthetic fires and it is supposed to be go time. It also means your customers are being affected while you scramble to tune tests.

By refining your test logic in pre-production first, you keep production synthetics clean, stable, and trusted. That is exactly how they should operate.

7. Combine synthetics with chaos, failover, and load testing

Pre-production is the safest place to understand how your application behaves under stress. If your organization runs chaos experiments, load tests, or failover scenarios, keep your synthetic tests active throughout these events. Synthetics give you a consistent measurement of how critical journeys respond when the system is pushed.

Use these runs to understand how critical user journeys degrade under stress, how quickly failover restores service, and whether your detection and alerting fire as expected.

These exercises give you more than application resiliency. They help you validate operational readiness: detection, response, communication, and recovery. Pre-production synthetics make these drills measurable and repeatable.
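As a rough sketch of how such a drill becomes measurable: compare synthetic step durations captured during a chaos or failover exercise against a steady-state baseline. The durations and tolerance below are invented for illustration.

```python
from statistics import mean

def regression_ratio(baseline_ms, drill_ms):
    """How much slower the journey ran during the drill (1.0 = no change)."""
    return mean(drill_ms) / mean(baseline_ms)

# Hypothetical checkout-step durations from synthetic runs, in milliseconds.
baseline = [1200, 1150, 1300]      # steady state
during_drill = [2400, 2600, 2500]  # same step while a node was killed

ratio = regression_ratio(baseline, during_drill)
print(f"{ratio:.1f}x slower during the drill")
if ratio > 2.0:
    print("Journey degraded beyond tolerance; review failover behavior.")
```

Because the same synthetic test produces both series, the comparison is apples to apples, which is what makes these drills repeatable from quarter to quarter.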

Wrap up and next steps

Testing before you ship strengthens everything that comes after it. When your pre-production environment is monitored with the same discipline as production, issues surface earlier, drift is caught sooner, and your production synthetics remain clean and trusted. You gain a reliable signal before customers ever feel the impact, and your teams move faster because they are not slowed down by unstable environments or broken tests.

Pre-production is not a lesser environment. It is where your delivery pipeline proves its work. Treat it as a first-class citizen, keep your synthetic tests aligned across environments, and use the insights to improve both your releases and your monitoring strategy.

If this practice resonates with you, follow along with the rest of the series as we continue breaking down what makes synthetic monitoring reliable and actionable. Start your trial of Splunk Observability Cloud today to see how you can apply these ideas in your environment.
