What is Applied Observability?

There’s a new term on the technology block: Applied Observability. Gartner estimates that 70% of organizations will successfully adopt applied observability capabilities in the coming years. The most common use cases of applied observability will include:

  • Asset discovery
  • Cloud resource management and cost optimization
  • Improved SLA compliance
  • System dependability
  • Cybersecurity and incident management

But exactly what is applied observability? We’ve got the answers and more here for you to get a full understanding. Read on!

Observability overview

Observability is the practice of inferring internal states of a system from the knowledge of its external outputs. If that sounds heavy, it doesn’t have to be.

Observability is extremely useful and valuable. It has applications in all scientific and technological domains, in any area where discovering what is going on inside a system is a challenging task.

To understand what goes on between systems, applications and software, you use the external outputs of a system to model the characteristics and behavior of that system. If the current state and the state evolution of the system can be determined solely by analyzing the system outputs, it is said to be an observable system. These output characteristics are then attributed to the system components.

Observability is used to discover assets and instrumentation, identify active components, their performance and health, and contextual knowledge on the system.
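The core idea of inferring internal state from external outputs can be sketched with a toy example. Everything here is hypothetical: the latency samples, the thresholds and the two-state model are made up purely for illustration:

```python
from statistics import mean

# Hypothetical external outputs (response latencies in ms) sampled
# from a service whose internals we cannot inspect directly.
latency_samples = [102, 98, 110, 480, 95, 105, 530, 101]

def infer_state(samples, slow_threshold_ms=300, degraded_ratio=0.2):
    """Infer an internal state ("healthy"/"degraded") purely from outputs."""
    slow = sum(1 for s in samples if s > slow_threshold_ms)
    return "degraded" if slow / len(samples) >= degraded_ratio else "healthy"

state = infer_state(latency_samples)
print(state)                            # state inferred from outputs alone
print(round(mean(latency_samples), 1))  # supporting evidence for the inference
```

The point is that nothing about the service's internals was examined; the state label is derived entirely from its measurable outputs, which is the essence of an observable system.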

(Check out our monitoring vs. observability explainer.)

What’s applied observability?

Applied observability is defined as the analysis of data artifacts as an evidence-based asset discovery and optimization process for system components in an enterprise IT network.

Modern enterprise IT architecture is complex, built from multiple hierarchical, orchestrated layers of technology. These layers may be:

  • Virtual, running in separate microservice instances, in an ephemeral state.
  • Cloud-based environments where resources are provisioned dynamically by ITOps but offer limited visibility into, and control over, the underlying layers of the technology stack.

These layers may run in parallel, producing observability data of high dimensionality, meaning the data may represent a wide variety of feature attributes. Since traditional system modeling techniques may be ineffective at mapping known system components to the observable data, engineers need to search in the opposite direction: mapping the high-dimensional observability data back to system components that may produce only intricate differences in their output.

As long as these underlying components comply with the principle of observability, and sufficient output data is available, it is possible to infer the system state despite the high dimensionality and complexity of the observable data itself.
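One simple way to picture mapping observability data back to system components is nearest-signature matching in feature space. This is only an illustrative sketch, and the component names, feature vectors and telemetry sample below are invented:

```python
import math

# Hypothetical per-component output "signatures": feature vectors of
# (avg latency ms, error rate, CPU %). All names and numbers are made up.
signatures = {
    "api-gateway": (120.0, 0.01, 35.0),
    "auth-service": (80.0, 0.002, 20.0),
    "billing-worker": (400.0, 0.05, 70.0),
}

def attribute(observation, sigs):
    """Map an observed output vector back to the most likely component."""
    return min(sigs, key=lambda name: math.dist(observation, sigs[name]))

# An anonymous telemetry sample with no component label attached.
sample = (390.0, 0.04, 68.0)
print(attribute(sample, signatures))
```

Real systems would use far richer features and statistical models, but the direction of inference is the same: from observed output back to the producing component.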

(Check out this video all about applied observability: what to do with telemetry to actually improve work for your teams.)

Use cases for applied observability

We can say that applied observability uses the measurable output data of system components in the technology stack, along with the principle of observability, to infer the state of the resources responsible for producing the observable data.

Once the observable data is acquired, it is used to analyze the health, performance and security metrics of the associated system components. As Gartner Distinguished VP Analyst David Groombridge explained:


“Applied observability is about clarity rather than creativity as it is based on confirmed stakeholder actions, rather than intentions. Even if we don’t know what the decision was, or if it was implemented differently than what we planned, we can see the actual outcomes in data.”


Applied observability vs. prediction vs. monitoring

It’s important to highlight the key differences between applied observability, prediction and monitoring:

Applied observability is a highly data-driven and entirely evidence-based practice.

In many cases, you may have sufficient knowledge of the source system and its state, which makes it possible to map observable outputs to their respective system states. This is because ITOps and InfoSec teams extensively test and measure the performance of these systems before making them accessible to users.

The resulting knowledge base is complemented by vendor specifications and data resources, which can further assist in correctly modeling an applied observability process.


The outcome of a prediction can be, and often is, one that has not occurred previously. Predictions therefore rely on data from a wide variety of external data sources. In order to establish a truly holistic evidence-based practice for forecasting or predicting, such as anticipating market trends or cybersecurity risks to your systems, you need an ever-growing pool of information.

The same is not true for applied observability, where only the measured output of system components is required.

In practice, applied observability also requires a vast pool of data, since the evolution of source system states is only captured accurately by data collected over a long period (or at least in large volumes where states change rapidly, so that the data captures their intricate differences in the form of high-dimensional data).


Monitoring is the process of aggregating, transforming and analyzing information from telemetry sources across the network, such as logs, metrics and events.

This provides visibility into the system at individual endpoints and nodes, as well as at the level of system components. These data assets are compared against predefined thresholds, and alerts are triggered to invoke security controls through integrated Security Information and Event Management (SIEM) tools.

The application of monitoring is not to infer system states (observability) or to forecast what’s about to happen next (prediction). Instead, monitoring is about making sense of what’s already happening in real-time between all components of the connected network.
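The threshold-and-alert pattern at the heart of monitoring is easy to sketch. The metric names and limits below are hypothetical, standing in for whatever a real monitoring pipeline would collect:

```python
# Minimal monitoring sketch: compare collected metrics against
# predefined thresholds and emit alerts. Names and limits are illustrative.
thresholds = {"cpu_pct": 85.0, "error_rate": 0.05, "disk_pct": 90.0}

def check(metrics, limits):
    """Return an alert for any metric that exceeds its threshold."""
    return [
        f"ALERT {name}={value} exceeds limit {limits[name]}"
        for name, value in metrics.items()
        if name in limits and value > limits[name]
    ]

snapshot = {"cpu_pct": 91.2, "error_rate": 0.01, "disk_pct": 40.0}
for alert in check(snapshot, thresholds):
    print(alert)  # in practice, alerts would feed an integrated SIEM tool
```

Note that nothing here infers internal state or forecasts the future; the code only flags what is already happening, which is exactly the distinction drawn above.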

The power of observability

Observability is not only useful for associating a system's outputs with its components, but also for mapping a measured output to the underlying system process. For example, observability data can be analyzed to identify the system changes that led to an IT incident, outage or security attack. This knowledge helps organizations trace back to the root cause and issue lasting resolutions for the cybersecurity and IT incidents facing the organization.
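One common way to trace back from an incident to a candidate root cause is to correlate the incident timestamp with recent change events. The change log, timestamps and 30-minute window below are all hypothetical:

```python
from datetime import datetime, timedelta

# Hypothetical change log and incident timestamp; in practice these come
# from deployment tooling and observability data respectively.
changes = [
    ("deploy auth-service v2.3", datetime(2023, 5, 1, 9, 15)),
    ("config change: cache TTL",  datetime(2023, 5, 1, 13, 40)),
    ("deploy billing v1.8",       datetime(2023, 5, 1, 13, 55)),
]
incident_at = datetime(2023, 5, 1, 14, 5)

def candidate_causes(events, incident, window=timedelta(minutes=30)):
    """Return changes that landed shortly before the incident."""
    return [name for name, ts in events
            if timedelta(0) <= incident - ts <= window]

print(candidate_causes(changes, incident_at))
```

Real root-cause analysis weighs many more signals, but narrowing the search to changes inside a time window is a typical first step when observability data points to *when* the system's behavior shifted.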


This posting does not necessarily represent Splunk's position, strategies or opinion.

Posted by

Muhammad Raza

Muhammad Raza is a technology writer who specializes in cybersecurity, software development and machine learning and AI.