How To Choose the Best Synthetic Test Locations
Everything looks perfect in your synthetic browser tests. Every run is green. Yet, within minutes, your customers in Frankfurt start reporting slow logins and checkout timeouts. After a quick check, Real User Monitoring (RUM) confirms the impact across Europe and parts of Asia.
While unfortunate, it’s an all-too-familiar story for teams just beginning their synthetic browser testing journey. The tests are healthy, but they’re all running from the wrong places: locations that don’t reflect where your customers actually connect from. The tests are accurate for what they measure, but not for what your customers experience.
Where your tests run directly impacts what they reveal. If every test originates from one region, you only see one perspective of the user experience. That narrow view can hide regional performance issues, routing problems, or CDN inconsistencies that your customers feel first.
In this article, we’ll explore how location strategy influences what your synthetic browser tests reveal and how to measure their effectiveness across customer regions.
This article covers Best Practice #3 in the Getting Synthetics Right Series: Testing Synthetics Near Users. If you’re new to the series, check out the intro article to learn how these best practices come together to make your synthetic browser tests reliable and actionable.
What is the role of location in synthetic testing?
Every synthetic browser test runs from one or more locations, the geographic points where the browser session originates. That could be a Splunk-managed public runner in a major cloud region or a private runner deployed inside your own network (more on that a bit later).
Location determines the context of what your test measures:
- A test that runs from the same region as your customers reflects their real experience.
- A test that runs from somewhere else might still pass, but it is measuring a completely different journey: one your customers may never take.
For globally accessible applications, running tests from multiple regions is essential to understand latency and performance variance. For internal tools, external locations often add noise. Understanding which locations are relevant and using them consistently turns your synthetic checks into a critical pillar of your Digital Experience Monitoring (DEM) strategy.
Why it matters
Every synthetic browser test has three key factors: what you test, how you measure it, and where you run it. The “where” defines the perspective of your results.
If your tests all run from one region, you miss how performance differs elsewhere. A test that looks great in Virginia may tell a different story in Frankfurt or Singapore. Geography naturally affects metrics like Time to First Byte (TTFB) and page load time; it is physics, not failure.
Understanding this context is essential for thresholding and alerting. Comparing results across regions without accounting for location can trigger false positives. Tuning begins by recognizing how geography shapes your baselines.
Consistent location usage helps you see these patterns and measure across global regions.
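To make the baseline idea concrete, here is a minimal sketch (not a Splunk feature, just illustrative Python with made-up TTFB numbers) of deriving a per-region threshold from each location's own history instead of applying one global number:

```python
from statistics import median

# Hypothetical TTFB samples (ms) per test location; geography naturally
# shifts the center of each distribution.
ttfb_samples = {
    "virginia":  [85, 90, 95, 88, 92],
    "frankfurt": [160, 170, 155, 165, 172],
    "singapore": [240, 255, 248, 252, 260],
}

def regional_baselines(samples, headroom=1.5):
    """Derive a per-region alert threshold as median * headroom, so each
    region is compared against its own normal rather than a global value."""
    return {region: median(vals) * headroom for region, vals in samples.items()}

print(regional_baselines(ttfb_samples))
```

With a single global threshold tuned for Virginia, the Singapore samples above would page constantly; with per-region baselines, each location only alerts when it deviates from its own normal.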
Putting it into practice: How to manage synthetic test locations
Match test locations to real traffic and usage patterns
Splunk lets you bring multiple observability data sources together so you can understand and measure performance against what matters to your business.
Use RUM data, analytics, or web access logs (default fields include Region) to identify your primary customer markets and critical application vantage points. Prioritize those regions in your synthetic tests so the data mirrors real customer experience and validates performance where it matters most.
Splunk Observability Cloud RUM automatically enriches data with geographic tags such as country, city, and region. These default tags are indexed and visible in Tag Spotlight, making it easy to correlate synthetic browser test results with real user activity and application performance.
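As a rough sketch of the prioritization step, the logic is simply "rank regions by real-user volume and test from the top ones first." The records below are hypothetical; in practice you would pull region counts from RUM Tag Spotlight or web access logs:

```python
from collections import Counter

# Hypothetical session records exported from RUM or access logs,
# each carrying a geographic Region tag.
sessions = [
    {"region": "eu-central", "city": "Frankfurt"},
    {"region": "eu-central", "city": "Frankfurt"},
    {"region": "us-east", "city": "Ashburn"},
    {"region": "ap-southeast", "city": "Singapore"},
    {"region": "eu-central", "city": "Munich"},
]

def rank_regions(records, top_n=3):
    """Rank customer regions by session volume; the busiest regions are
    the first candidates for synthetic test locations."""
    counts = Counter(r["region"] for r in records)
    return counts.most_common(top_n)

print(rank_regions(sessions))
```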
Location-based testing also helps confirm that your application performs consistently across your global footprint. Running tests from diverse locations can uncover issues as your customers experience them, such as:
- Regional latency
- Routing differences
- Peering issues
- CDN inconsistencies (for example, a misrouted DNS entry, a caching gap, or an uneven failover path)
You can use Splunk’s out-of-the-box visualizations or create your own dashboards to include geographic context across metrics such as latency, success rate, and page load time. This turns location data into an early warning signal for your customers’ experience and your global application health.
Learn more: Analyze RUM geography insights in Splunk Observability Cloud.
Use the right runner type for your audience
Public runners simulate customer access from the open internet. Private runners replicate internal access behind VPNs or corporate firewalls. Together, they provide full coverage from customer-facing journeys to employee workflows.
Examples:
- Use public locations to test your e-commerce checkout and APIs.
- Use private locations to test your internal HR or finance portals.
Using both types helps you detect external CDN latency and internal authentication issues in one monitoring strategy.
Learn more:
- Check out this video on private locations.
- Watch this video to learn about configuring locations in Splunk Synthetics.
Pro tip: Using AWS Local Zones (LZ) in Synthetic Tests
Some Splunk public locations are hosted in AWS Local Zones, identified by names prefixed with AWS LZ. These smaller AWS deployments are located closer to metro areas to reduce latency and provide more realistic regional testing.
Because Local Zones have less redundancy than full AWS regions, they are best suited for performance and latency benchmarking, not as your only source for uptime testing. If availability monitoring is required, run the same test concurrently from at least one standard region.
Using a mix of Local Zone and regional runners helps you:
- Measure true user-experience latency in specific cities.
- Detect network path or CDN edge issues unique to a geography.
- Maintain redundant coverage for accurate uptime monitoring.
Learn more: AWS Local Zones overview and location list.
Establish consistent location assignments
Keep test origins stable over time. Moving them around introduces variance and makes trend analysis unreliable. If you expand coverage, add new locations gradually and document why.
Consistency is what makes performance data meaningful. When tests always run from the same regions, you can trust that changes in response time reflect the application, not the geography.
Shifting test origins midstream can mask real regressions or create the illusion of improvement. For example, if your checkout test runs from California one month and Virginia the next, the time to first byte might appear faster simply because the route is shorter. Without consistent origins, it becomes difficult to quickly tell whether your application optimization worked or if the test just moved closer to the data center.
If connection timings suddenly spike for a specific test, compare those metrics against another test running from the same location that targets an application hosted in the same region or data center. If both show elevated timings, the issue may be location-specific or network-related rather than an application defect.
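The comparison described above can be sketched as a small heuristic. This is illustrative Python (the function name and the 1.5x factor are assumptions, not a Splunk API): compare a suspect test and a co-located control test against their own baselines, and only blame the application when the control stays healthy.

```python
def classify_spike(suspect_ms, control_ms,
                   baseline_suspect_ms, baseline_control_ms, factor=1.5):
    """If both the suspect test and a control test running from the same
    location (against a co-located target) are elevated versus their
    baselines, suspect the location or network path, not the application."""
    suspect_elevated = suspect_ms > baseline_suspect_ms * factor
    control_elevated = control_ms > baseline_control_ms * factor
    if suspect_elevated and control_elevated:
        return "location-or-network"
    if suspect_elevated:
        return "application"
    return "normal"
```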
Treat location as a first-class dimension
Location isn’t just metadata; it’s context. When you treat location as a first-class dimension in your synthetic data, it becomes a key factor in how you define thresholds, interpret anomalies, and decide how to respond.
Metrics such as latency, time to first byte, or total duration naturally vary by geography. Factoring location into your alert logic prevents false positives caused by normal regional differences. For example, a U.S.-hosted site will almost always respond faster from Virginia than from Singapore.
Correlate failures by scope
When failures happen, correlation and context matter! Location context helps you determine whether the issue is isolated, regional, or global. Check out these examples:
- A single test failing in a single location could indicate a network routing or DNS issue.
- Multiple tests failing in the same region suggest a regional infrastructure or CDN edge problem.
- Failures everywhere (a single test failing from all locations) usually indicate an application or backend dependency issue.
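The three scopes above amount to a simple classification over which (test, location) pairs failed. Here is a minimal sketch in Python (illustrative only; the names and return labels are assumptions):

```python
def failure_scope(failures, all_locations):
    """Classify blast radius from a set of (test_name, location) pairs
    that failed this cycle. all_locations lists every test origin."""
    if not failures:
        return "healthy"
    tests = {t for t, _ in failures}
    locations = {loc for _, loc in failures}
    if len(tests) == 1 and locations >= set(all_locations):
        return "global"    # one test failing everywhere: app or backend dependency
    if len(locations) == 1 and len(tests) > 1:
        return "regional"  # several tests failing in one region: infra or CDN edge
    if len(tests) == 1 and len(locations) == 1:
        return "isolated"  # one test, one location: routing or DNS
    return "mixed"         # needs a closer look
```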
Use location in thresholds and response
In Splunk Observability Cloud, location is a default dimension and can be included directly in detector logic. Use it to tune alerts and responses more intelligently:
- Baseline thresholds by region. Expect higher TTFB from distant geographies.
- Route alerts to the right teams. If only internal locations fail, notify your network or VPN team, not customer support.
- Require multi-location validation. Confirm an alert by checking that the issue appears in at least two regions before paging on-call responders.
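The routing and validation rules above can be expressed as a short decision function. This is a hedged sketch, not detector syntax; the location set names and routing labels are hypothetical:

```python
def route_alert(failing_locations, private_locations, min_regions=2):
    """Decide alert routing from which locations are failing.

    failing_locations / private_locations: sets of location names.
    """
    if failing_locations and failing_locations <= private_locations:
        return "notify-network-team"  # only internal runners failing: VPN/network scope
    if len(failing_locations) >= min_regions:
        return "page-oncall"          # confirmed in multiple regions: page responders
    if failing_locations:
        return "low-urgency-ticket"   # single-location anomaly: investigate, don't page
    return "no-action"
```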
By treating location as a core part of your correlation, thresholding, and response strategy, your detectors stay grounded in real-world context. You can separate geographic variance from genuine performance regressions and focus your response on what truly impacts customers.
Learn more: Configuring Detectors and Alerts in Splunk Observability Cloud Synthetics
Location-aware observability for better decisions
Running synthetic browser tests from the right locations gives you realistic visibility into how customers experience your applications around the world. By selecting locations intentionally and using them consistently, you can detect regional issues faster, validate resiliency, and make more confident operational decisions.
Next step: Review your existing synthetic browser tests. Identify where your customers connect and adjust test locations to match. You can try it yourself right now with this free trial of Splunk Observability Cloud.