How To Choose the Best Synthetic Test Locations

Everything looks perfect in your synthetic browser tests. Every run is green. Yet, within minutes, your customers in Frankfurt start reporting slow logins and checkout timeouts. After a quick check, Real User Monitoring (RUM) confirms the impact across Europe and parts of Asia.

While unfortunate, it’s an all-too-familiar story for teams just beginning their synthetic browser testing journey. The tests are healthy, but they all run from the wrong places: locations that don’t reflect where your customers actually connect from. The tests are accurate for what they measure, but not for what your customers experience.

Where your tests run directly impacts what they reveal. If every test originates from one region, you only see one perspective of the user experience. That narrow view can hide regional performance issues, routing problems, or CDN inconsistencies that your customers feel first.

In this article, we’ll explore how location strategy influences what your synthetic browser tests reveal and how to measure their effectiveness across customer regions.

This article covers Best Practice #3 in the Getting Synthetics Right Series: Testing Synthetics Near Users. If you’re new to the series, check out the intro article to learn how these best practices come together to make your synthetic browser tests reliable and actionable.

What is the role of location in synthetic testing?

Every synthetic browser test runs from one or more locations, the geographic points where the browser session originates. That could be a Splunk-managed public runner in a major cloud region or a private runner deployed inside your own network (more on that a bit later).

Location determines the context of what your test measures.

For globally accessible applications, running tests from multiple regions is essential to understand latency and performance variance. For internal tools, external locations often add noise. Understanding which locations are relevant and using them consistently turns your synthetic checks into a critical pillar of your Digital Experience Monitoring (DEM) strategy.

Why it matters

Every synthetic browser test has three key factors: what you test, how you measure it, and where you run it. The “where” defines the perspective of your results.

If your tests all run from one region, you miss how performance differs elsewhere. A test that looks great in Virginia may tell a different story in Frankfurt or Singapore. Geography naturally affects metrics like Time to First Byte (TTFB) and page load time; it is physics, not failure.

Understanding this context is essential for thresholding and alerting. Comparing results across regions without accounting for location can trigger false positives. Tuning begins by recognizing how geography shapes your baselines.

Consistent location usage helps you see these patterns and measure across global regions.
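To make the idea of geography-aware baselines concrete, here is a minimal sketch (not Splunk detector logic; region names and timings are invented) that computes a baseline per location and flags a run only against its own region’s history:

```python
# Location-aware baselining sketch. Assumes per-run TTFB samples (ms)
# tagged with the test location; all values here are illustrative.
from statistics import mean, stdev

samples = {
    "Frankfurt": [210, 225, 218, 230, 221],
    "Virginia":  [85, 90, 88, 92, 87],
}

def regional_baselines(samples):
    """Compute a (mean, stdev) baseline per location."""
    return {loc: (mean(vals), stdev(vals)) for loc, vals in samples.items()}

def is_anomalous(location, value, baselines, n_sigmas=3):
    """Flag a run only if it deviates from ITS OWN region's baseline."""
    mu, sigma = baselines[location]
    return abs(value - mu) > n_sigmas * sigma

baselines = regional_baselines(samples)
# 240 ms is unremarkable for Frankfurt but would be alarming for Virginia.
print(is_anomalous("Frankfurt", 240, baselines))  # False
print(is_anomalous("Virginia", 240, baselines))   # True
```

The point of the sketch: the same 240 ms reading is normal from one geography and a genuine anomaly from another, which is why a single global threshold produces false positives.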

Putting it into practice: How to manage synthetic test locations

Match test locations to real traffic and usage patterns

Splunk lets you leverage multiple observability data sources in a unified way to understand and measure performance against your business goals.

Use RUM data, analytics, or web access logs (default fields include Region) to identify your primary customer markets and critical application vantage points. Prioritize those regions in your synthetic tests so the data mirrors real customer experience and validates performance where it matters most.
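As a simple illustration of that prioritization step, the sketch below ranks regions by real traffic volume from log-like events (the field names and counts are invented, standing in for the Region field mentioned above) so the top markets become candidate synthetic test locations:

```python
# Hypothetical sketch: rank customer regions by traffic from access-log
# events, then test from the busiest markets first.
from collections import Counter

log_events = [
    {"region": "eu-central", "status": 200},
    {"region": "eu-central", "status": 200},
    {"region": "us-east", "status": 200},
    {"region": "ap-southeast", "status": 504},
    {"region": "eu-central", "status": 200},
]

def top_regions(events, n=2):
    """Return the n regions that generate the most real traffic."""
    counts = Counter(e["region"] for e in events)
    return [region for region, _ in counts.most_common(n)]

print(top_regions(log_events))  # ['eu-central', 'us-east']
```

In practice you would feed this from RUM or web access log exports rather than a hardcoded list, but the prioritization logic is the same: test where your customers actually are.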

Splunk Observability Cloud RUM automatically enriches data with geographic tags such as country, city, and region. These default tags are indexed and visible in Tag Spotlight, making it easy to correlate synthetic browser test results with real user activity and application performance.

Location-based testing also helps confirm that your application performs consistently across your global footprint. Running tests from diverse locations can uncover issues as your customers experience them, such as regional latency spikes, CDN inconsistencies, or routing problems that a single vantage point would miss.

You can use Splunk’s out-of-the-box visualizations or create your own dashboards to include geographic context across metrics such as latency, success rate, and page load time. This turns location data into an early warning signal for your customers’ experience and your global application health.

Learn more: Analyze RUM geography insights in Splunk Observability Cloud.

Use the right runner type for your audience

Public runners simulate customer access from the open internet. Private runners replicate internal access behind VPNs or corporate firewalls. Together, they provide full coverage from customer-facing journeys to employee workflows.

| Location Type | Description | Example Use Cases |
| --- | --- | --- |
| Public Locations | Splunk-managed runners hosted in global cloud regions. Ideal for testing customer-facing sites and APIs over the public internet. | Verify checkout flow performance across North America, Europe, and Asia. Detect CDN, DNS, or routing issues affecting international customers. |
| Private Locations | Self-hosted runners deployed inside your organization’s network or VPC. Used for apps behind VPNs, internal portals, or restricted domains. | Test an internal HR site, employee portal, or app accessible only over corporate network routes. Test a staging or development environment as part of a CI/CD pipeline. |

Using both types helps you detect external CDN latency and internal authentication issues in one monitoring strategy.
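The decision rule is simple enough to sketch. This is an illustrative helper (not the real Splunk configuration schema; app names and fields are invented) that maps each monitored app to the runner type its audience implies:

```python
# Simplified illustration of pairing each app with a runner type.
# "internal_only" is a made-up attribute standing in for your own inventory.
def pick_runner_type(app):
    """Public runners for internet-facing apps, private runners for internal ones."""
    return "private" if app["internal_only"] else "public"

apps = [
    {"name": "checkout", "internal_only": False},   # customer-facing -> public
    {"name": "hr-portal", "internal_only": True},   # behind VPN -> private
]
for app in apps:
    print(app["name"], "->", pick_runner_type(app))
```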

Pro tip: Using AWS Local Zones (LZ) in Synthetic Tests

Some Splunk public locations are hosted in AWS Local Zones, identified by names prefixed with AWS LZ. These smaller AWS deployments are located closer to metro areas to reduce latency and provide more realistic regional testing.

Because Local Zones have less redundancy than full AWS regions, they are best suited for performance and latency benchmarking, not as your only source for uptime testing. If availability monitoring is required, run the same test concurrently from at least one standard region.

Using a mix of Local Zone and regional runners helps you:

  • Measure true user-experience latency in specific cities.
  • Detect network path or CDN edge issues unique to a geography.
  • Maintain redundant coverage for accurate uptime monitoring.
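The redundancy rule above can be sketched as a small classifier. This is a hedged illustration with made-up runner names: a Local Zone failure alone becomes a latency/path investigation signal, while a failure confirmed from a standard region is treated as an outage:

```python
# Sketch: only escalate to "outage" when a standard-region runner that ran
# the same test concurrently also failed; an LZ-only failure is a metro
# network-path signal, not confirmed downtime.
def classify_failure(results):
    """results maps runner name -> (is_local_zone, test_passed)."""
    lz_failed = any(is_lz and not ok for is_lz, ok in results.values())
    regional_failed = any(not is_lz and not ok for is_lz, ok in results.values())
    if regional_failed:
        return "outage"             # confirmed from a full AWS region
    if lz_failed:
        return "investigate-metro"  # likely LZ- or path-specific
    return "healthy"

runs = {"AWS LZ Chicago": (True, False), "us-east-1": (False, True)}
print(classify_failure(runs))  # investigate-metro
```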

Learn more: AWS Local Zones overview and location list.

Establish consistent location assignments

Keep test origins stable over time. Moving them around introduces variance and makes trend analysis unreliable. If you expand coverage, add new locations gradually and document why.

Consistency is what makes performance data meaningful. When tests always run from the same regions, you can trust that changes in response time reflect the application, not the geography.

Shifting test origins midstream can mask real regressions or create the illusion of improvement. For example, if your checkout test runs from California one month and Virginia the next, time to first byte might appear faster simply because the route is shorter. Without consistent origins, it becomes difficult to tell whether your application optimization worked or the test just moved closer to the data center.

If connection timings suddenly spike for a specific test, compare those metrics against another test running from the same location that targets an application hosted in the same region or data center. If both show elevated timings, the issue may be location-specific or network-related rather than an application defect.
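That cross-check can be expressed as a quick comparison. Here is a hedged sketch (all timings invented) that compares a suspect test’s recent connect times against a co-located reference test targeting an app in the same region:

```python
# If both a suspect test and a same-location reference test spike together,
# suspect the location or network path rather than the application.
def likely_location_issue(suspect_ms, reference_ms, spike_factor=2.0):
    """True when BOTH tests' latest samples exceed spike_factor x their prior mean."""
    def spiked(series):
        baseline = sum(series[:-1]) / len(series[:-1])
        return series[-1] > spike_factor * baseline
    return spiked(suspect_ms) and spiked(reference_ms)

suspect = [40, 42, 41, 180]    # checkout test, connect time in ms
reference = [35, 36, 34, 160]  # unrelated app, same runner location
print(likely_location_issue(suspect, reference))  # True
```

When only the suspect test spikes, the function returns False, pointing you back toward the application itself.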

Treat location as a first-class dimension

Location isn’t just metadata; it’s context. When you treat location as a first-class dimension in your synthetic data, it becomes a key factor in how you define thresholds, interpret anomalies, and decide how to respond.

Metrics such as latency, time to first byte, or total duration naturally vary by geography. Factoring location into your alert logic prevents false positives caused by normal regional differences. For example, a U.S.-hosted site will almost always respond faster from Virginia than from Singapore.

Correlate failures by scope

When failures happen, correlation and context matter. Location context helps you determine whether the issue is isolated to a single location, regional, or global.

Use location in thresholds and response

In Splunk Observability Cloud, location is a default dimension and can be included directly in detector logic. Use it to tune alerts and responses more intelligently.
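As a plain-Python stand-in for that detector logic (the threshold values and location names are invented, not recommendations), a per-location budget table avoids applying one global limit everywhere:

```python
# Illustrative per-location page load budgets (ms). A single global limit
# would either page constantly for Singapore or miss regressions in Virginia.
THRESHOLDS_MS = {
    "Virginia": 800,    # close to the origin: tight budget
    "Frankfurt": 1500,  # cross-Atlantic path: looser budget
    "Singapore": 2200,  # longest route: loosest budget
}

def should_alert(location, page_load_ms, default_ms=2000):
    """Alert against the location's own budget, not a global one."""
    return page_load_ms > THRESHOLDS_MS.get(location, default_ms)

print(should_alert("Singapore", 1800))  # False: normal for that geography
print(should_alert("Virginia", 1800))   # True: regression near the origin
```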

By treating location as a core part of your correlation, thresholding, and response strategy, your detectors stay grounded in real-world context. You can separate geographic variance from genuine performance regressions and focus your response on what truly impacts customers.

Learn more: Configuring Detectors and Alerts in Splunk Observability Cloud Synthetics

Location-aware observability for better decisions

Running synthetic browser tests from the right locations gives you realistic visibility into how customers experience your applications around the world. By selecting locations intentionally and using them consistently, you can detect regional issues faster, validate resiliency, and make more confident operational decisions.

Next step: Review your existing synthetic browser tests. Identify where your customers connect and adjust test locations to match. You can try it yourself right now with this free trial of Splunk Observability Cloud.
