Understanding Where You Fit in the Web Performance Maturity Curve

We all know that faster is better. Research and results clearly indicate that faster experiences with fewer errors result in increased usage, conversion, and revenue. With the desire to improve business metrics in mind, organizations often seek immediate improvements in customer experience across digital properties. However, without proper planning and coordination, these attempts consistently fail. In this post we will discuss the Performance Maturity Curve, and how organizations can use this to systematically improve their performance and business in a sustainable way.

Why Does a Maturity Curve Exist?

Optimizing web performance is a journey that puts user and customer experience first. It starts with recognizing the need to go beyond basic uptime, leverages Digital Experience Monitoring to establish benchmarks for page performance and end-user experience, and ultimately incorporates performance measurements into the software development release cycle. Progressing on this journey requires increasing levels of maturity, both technical and organizational, with each phase building on the one before it. For example:

  • Companies cannot track how poor performance impacts their critical business flows without first adopting availability monitoring for key pages and tracking SLOs.
  • Companies cannot track the impact of their optimizations if they only monitor uptime and not key UX metrics such as Core Web Vitals.
  • Companies cannot prioritize engineering work to optimize their site alongside new features or bug fixes unless everyone agrees on performance’s value to the business.

Much like Maslow's Hierarchy of Needs, companies must adopt the technical and organizational practices of one phase before attempting the next. 

Let’s take a closer look at the performance maturity curve.

Phase One: Reactive, Uptime-Focused

You cannot improve what you aren’t measuring. You can’t resolve what you don’t know about. The focus of this first phase is for organizations to gain basic visibility into availability and site stability as a catch-all for performance. In this phase, achieving SLAs for business services is the top priority, and customer experience is managed reactively, often after support tickets have been submitted and escalated.

Often, IT or SRE teams are strapped for resources, both people and time. IT and engineering teams everywhere are undergoing massive cloud migrations and modernizations, and simply maintaining SLAs for uptime may be seen as good enough. Digital businesses overhauling their infrastructure and backend services also may not be aware of how simple benchmarking and measuring end-user experience can be. This pattern is common in organizations early in the process of modernizing their architecture: teams work in silos and don’t have the time or ability to look at the bigger picture of user experience. While individual pages and APIs may be measured for general uptime and performance, the complete picture of customer happiness is missed.

APM solutions aren’t always enough to quantify user experience


What to do:

  • Create an inventory of all critical services (e.g. websites, apps, and APIs).
  • Use synthetic monitoring to monitor the availability of each service, and alert on outages so they can be quickly resolved.
  • Build dashboards and reports to communicate service availability and trends to the business.
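As a sketch of how these steps fit together, results from periodic availability checks can be rolled up into an uptime figure and compared against an SLA target. This is a minimal illustration, not tied to any particular monitoring product; the function names and the 99.9% target are assumptions for the example:

```python
# Roll up synthetic availability-check results and compare against an SLA.
# Names and the 99.9% target are illustrative, not from a specific tool.

def uptime_percent(check_results: list[bool]) -> float:
    """Percentage of synthetic checks that succeeded."""
    if not check_results:
        return 100.0
    return 100.0 * sum(check_results) / len(check_results)

def breaches_sla(check_results: list[bool], target: float = 99.9) -> bool:
    """True when measured uptime falls below the SLA target."""
    return uptime_percent(check_results) < target

# e.g. 1 failed check out of 1,000 is 99.9% uptime, meeting a 99.9% target
results = [True] * 999 + [False]
print(uptime_percent(results))   # 99.9
print(breaches_sla(results))     # False
```

A real deployment would feed this from scheduled HTTP checks against each service in the inventory and wire the breach condition into an alert.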

Phase Two: Establishing Benchmarks and Trends

A user’s experience is more nuanced than “Is a service available or not?” Phase two of the maturity curve starts with the understanding that, for the business, a slow service is just as bad as an unavailable one. Teams need to grow from a reactive, uptime-based approach to a user-experience-focused approach, using richer, industry-researched measurements for page performance like Google’s Core Web Vitals. Since web vitals are now standard in modern Real User Monitoring and synthetic monitoring solutions, businesses can measure user experience alongside uptime and alert on poorly performing pages.

What to do:

  • Track performance metrics across critical pages and services and alert based on slow experiences.
  • Monitor key business flows and transactions, in addition to individual pages.
  • Measure trends across users.
  • Use lab and field data from synthetic and Real User Monitoring solutions to measure impact of performance improvements.

Let’s explore these in depth.

Tracking Performance Metrics

Teams establish baseline measurements of user experience across critical pages and services. Instead of traditional “page load time,” Google’s Core Web Vitals provide three measurements that more accurately capture how an end-user experiences a page: Largest Contentful Paint, First Input Delay, and Cumulative Layout Shift. These three metrics quantify how quickly a page displays content, how responsive the page is to user interaction, and the page’s visual stability. Having more precise, quantifiable measurements for user experience helps teams quantify, trend, and move toward optimizing digital experience with their service level objectives (SLOs).

Core Web Vitals provide three core measurements for end-user experience 
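To make these metrics actionable, each measurement can be rated against Google’s published thresholds (the “good” / “needs improvement” / “poor” bands documented on web.dev). A minimal sketch:

```python
# Rate a measurement against Google's published Core Web Vitals thresholds:
# values at or below the first number are "good", values above the second
# are "poor", and anything between "needs improvement".

CWV_THRESHOLDS = {
    "lcp": (2500, 4000),   # Largest Contentful Paint, milliseconds
    "fid": (100, 300),     # First Input Delay, milliseconds
    "cls": (0.1, 0.25),    # Cumulative Layout Shift, unitless score
}

def rate(metric: str, value: float) -> str:
    good, poor = CWV_THRESHOLDS[metric]
    if value <= good:
        return "good"
    if value <= poor:
        return "needs improvement"
    return "poor"

print(rate("lcp", 2100))  # good
print(rate("fid", 180))   # needs improvement
print(rate("cls", 0.3))   # poor
```

An SLO could then be framed as, for example, “75% of page views rate ‘good’ on all three vitals.”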


Monitoring Business Transactions

While individual pages (e.g. a home page or product page) are critical, business transactions like the user log-in flow, user authentication, or checkout process often have several technological dependencies. Modern pages rely on multiple APIs and third parties to fetch data or move a user through a service. Monitoring business transactions enables teams to detect availability or performance issues that arise as data flows through a transaction, which cannot be seen when monitoring the individual parts. Teams can leverage synthetic monitoring to simulate user behavior and measure page performance and functionality across the whole flow.

Example of simulating and measuring multiple pages of the entire business transaction
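A transaction monitor can be sketched as an ordered list of timed steps, so a slow or failing step is visible on its own rather than hidden inside an end-to-end total. The step names below are hypothetical stand-ins for real browser or API actions:

```python
# Hypothetical sketch: a "business transaction" as ordered, individually
# timed steps. A real runner would drive a browser or call live APIs.
import time

def run_transaction(steps):
    """Run named step callables in order; return per-step timings in ms."""
    timings = {}
    for name, step in steps:
        start = time.perf_counter()
        step()  # e.g. load page, submit login form, call checkout API
        timings[name] = (time.perf_counter() - start) * 1000
    return timings

# Stand-in steps simulated with short sleeps for illustration.
steps = [
    ("load_login_page", lambda: time.sleep(0.01)),
    ("submit_credentials", lambda: time.sleep(0.02)),
    ("load_account_home", lambda: time.sleep(0.01)),
]
for name, ms in run_transaction(steps).items():
    print(f"{name}: {ms:.0f} ms")
```

Because each step is timed separately, a regression in one dependency (say, the authentication API) shows up as a spike in one named step rather than a vague slowdown of the whole flow.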


Establishing User Trends

Application teams trending performance across users group traffic into specific buckets using dimensional data like location, browser, or connection type, and then use percentiles to measure performance. Percentiles help identify how the majority of end-users experience page performance. The 75th percentile, for example, is helpful for characterizing page performance and user experience without being overly influenced by slow outliers. Dimensional data helps teams understand the characteristics of users navigating their site. Collecting this data enables richer insight and more accurate measurement of the user’s experience.
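The bucketing-plus-percentile approach can be sketched in a few lines; the sample RUM data and field names below are illustrative:

```python
# Bucket RUM samples by a dimension (here, connection type) and report the
# 75th percentile of LCP per bucket. Sample data is made up for illustration.
from collections import defaultdict

def percentile(values, pct):
    """Nearest-rank percentile of a non-empty list of values."""
    ordered = sorted(values)
    rank = max(0, int(round(pct / 100 * len(ordered))) - 1)
    return ordered[rank]

samples = [
    {"connection": "4g",   "lcp_ms": 1800},
    {"connection": "4g",   "lcp_ms": 2200},
    {"connection": "4g",   "lcp_ms": 2600},
    {"connection": "4g",   "lcp_ms": 9000},   # slow outlier
    {"connection": "wifi", "lcp_ms": 1200},
    {"connection": "wifi", "lcp_ms": 1500},
]

buckets = defaultdict(list)
for s in samples:
    buckets[s["connection"]].append(s["lcp_ms"])

for conn, values in buckets.items():
    print(conn, "p75 LCP:", percentile(values, 75), "ms")
```

Note how the 9,000 ms outlier barely moves the 4g bucket’s p75, which is exactly why the 75th percentile is favored over the average for user-experience reporting.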

Measuring Impact of Performance Optimizations

Now that they can quantify performance and user experience across their audience, phase-two companies can establish benchmarks and baselines, which help IT and engineering teams measure the impact of different performance optimizations on application KPIs. What was the impact of adopting that CDN or refactoring how JavaScript loads? Now you know.

Phase Three: Performance and UX as Release Criteria

In phase three, companies understand at an organizational level that performance and UX have a direct impact on their business. Knowing this, companies can use the performance data from their established baselines and benchmarks for key pages and business transactions during the development process to ensure they are only improving performance, not regressing it.

What to do:

  • Use performance and user-experience metrics as release criteria.
  • A/B test to measure the performance impact of changes (new features or code).
  • Track performance and user-experience alongside business outcomes.

Example of performance within the deployment and DevOps process


Performance and User-Experience as Release Criteria

Since automation is such a large and critical component of modern enterprises, embedding performance into CI/CD practices helps engineering teams identify and resolve performance problems earlier in the development lifecycle. Frontend development or engineering teams who have agreed on performance budgets can automatically pass or fail builds based on specific page criteria such as page weight, the size of images or scripts, or the total number of external resources.
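A budget gate of this kind can be sketched as a check that fails the build when any measured value exceeds its agreed budget. The budget values and metric names below are hypothetical, not from any specific tool:

```python
# Hypothetical performance-budget gate for a CI pipeline: fail the build
# when a measured page exceeds any agreed budget.

BUDGETS = {
    "page_weight_kb": 1500,
    "script_kb": 350,
    "image_kb": 800,
    "external_requests": 50,
}

def check_budgets(measurements: dict) -> list[str]:
    """Return one violation message per metric that exceeds its budget."""
    return [
        f"{metric}: {measurements[metric]} > budget {limit}"
        for metric, limit in BUDGETS.items()
        if measurements.get(metric, 0) > limit
    ]

build = {"page_weight_kb": 1620, "script_kb": 300,
         "image_kb": 900, "external_requests": 41}
violations = check_budgets(build)
for v in violations:
    print("FAIL:", v)
exit_code = 1 if violations else 0  # a non-zero exit fails the CI step
```

Tools like Lighthouse CI offer this kind of budget enforcement out of the box; the sketch just shows the shape of the pass/fail decision.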

A/B Testing to Measure the Impact of Change

Prior to pushing a new deployment, engineering teams can measure the impact of new code, feature improvements, or services against previous builds or versions. A/B testing with synthetic monitoring helps identify the performance impact of new code from one build to another, which is helpful for catching problems before pushing to production. Combining synthetic monitoring with Real User Monitoring extends this comparison, measuring performance under optimal, fast-network lab conditions against what real users actually experience.
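The comparison itself can be sketched as a regression check between the same synthetic measurement taken against two builds; the numbers and the 5% tolerance below are illustrative assumptions:

```python
# Compare the same synthetic measurement (e.g. LCP in ms) across build A
# (current) and build B (candidate), flagging regressions beyond a tolerance.

def compare_builds(a_ms: float, b_ms: float, tolerance: float = 0.05) -> str:
    """Report whether build B regressed versus build A."""
    delta = (b_ms - a_ms) / a_ms
    if delta > tolerance:
        return f"regression: {delta:+.1%}"
    return f"ok: {delta:+.1%}"

print(compare_builds(a_ms=2400, b_ms=2900))  # regression: +20.8%
print(compare_builds(a_ms=2400, b_ms=2350))
```

In practice each build’s number would itself be a percentile over repeated synthetic runs, since a single measurement is too noisy to gate a release on.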

Tracking Performance and UX Alongside Business Outcomes

The larger business goal with any Digital Experience Monitoring solution is to help correlate web performance to business outcomes. While there is no easy way to exactly correlate page speed or user-experience to revenue, conversions, or usage, tracking performance alongside business results helps provide guidance and direction to engineering, IT, and digital business leaders. Increasingly, digital businesses are aligning on Google’s Core Web Vitals as a baseline for user-experience. Going even further, some companies set up “speed teams” or “centers of excellence” to benchmark page performance across their user journey, benchmark against industry standards and competitors, and continuously improve deployments.

Going All in on Web Performance

For time and resource constrained IT and engineering teams, establishing performance best practices may feel daunting. However, simple steps like establishing baselines for user-experience, standardizing on Core Web Vitals, and tracking the performance of new features and improvements, can result in less abandonment, increased usage, and higher revenues. 

Ready to take the next step in your performance maturity? Start your free trial of Splunk Synthetic Monitoring today.

Posted by Mat Ball

Mat Ball leads marketing for Splunk's Digital Experience Monitoring (DEM) products, with the goal of educating digital teams on web performance optimization, specifically the art and science of measuring and improving end-user experience across web and mobile. He's worked in web performance since 2013, and previously led product marketing for New Relic's DEM suite.
