The Next Step in your Metric Data Optimization Starts Now

In the world of observability, managing metrics effectively is paramount. While our Metrics Usage Analytics (MUA) has empowered you to keep your data volume in check by identifying and optimizing unused metrics, the journey to a lean, cost-efficient, and insightful telemetry pipeline doesn't end there. We're excited to introduce the next evolution in smart telemetry management: Dimension Utilization, designed to tackle the often-hidden culprit of escalating costs and data bloat – high-cardinality dimensions.

The Silent Drain: When Dimensions Become a Burden

You've successfully dropped or archived unused metrics with Metric Pipeline Management but still find your telemetry bill higher than expected. The likely culprit? High-cardinality dimensions. These are dimensions that generate a vast number of unique values, leading to an explosion in Metric Time Series (MTS) count. Imagine a dimension like user_id in a system with millions of users, or request_timestamp that changes with every single event. While seemingly useful, such dimensions can quietly multiply your MTS count, drive up costs, and add little analytical value in return.
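To make the math concrete, here is a rough back-of-the-envelope sketch in Python, with made-up cardinalities, showing how a single high-cardinality dimension multiplies the upper bound on a metric's MTS count:

```python
# Rough illustration with hypothetical numbers: the upper bound on MTS count
# grows with the product of distinct values across a metric's dimensions.
from math import prod

# Distinct values observed for each dimension on a single metric (made up).
dimension_cardinality = {
    "region": 5,
    "service": 40,
    "http_status": 8,
}

baseline_mts = prod(dimension_cardinality.values())
print(f"Upper bound without user_id: {baseline_mts:,}")   # 1,600

# Attach one seemingly harmless dimension with many unique values.
dimension_cardinality["user_id"] = 1_000_000
bloated_mts = prod(dimension_cardinality.values())
print(f"Upper bound with user_id:    {bloated_mts:,}")    # 1,600,000,000
```

In practice the realized MTS count is lower than this upper bound, since not every combination of values actually occurs, but the multiplicative effect is the same.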

Identifying these problematic dimensions, understanding their impact on specific metrics, and making informed optimization decisions in Pipeline Management isn’t always straightforward. Without clear visibility, it’s easy to overlook sources of inefficiency and unnecessary expense.

Unveiling Dimension Utilization: Your New Cardinality Compass

Dimension Utilization is a powerful new feature within Usage Analytics that provides unparalleled visibility and control over your dimensional data. It empowers you to pinpoint, analyze, and manage high-cardinality dimensions, transforming them from a cost liability into a strategic asset.

Key Features

Dimension Utilization Page: The main view lists every dimension across your organization, along with its average hourly MTS count and Utilization ranking, so you can quickly see which dimensions drive your MTS footprint and which are barely used.

Dimension Profile Page: A dedicated view for each dimension, providing detailed information about where exactly the dimension is used (lists of associated metrics, dashboards, charts, detectors) along with crucial sample values.

Note:

Both views are empowered by:

Learn more: Analyze your dimensions with Usage Analytics

From Insight to Action: Optimize Your Dimensions with Confidence

Dimension Utilization turns deep visibility into measurable action. Here are a few practical ways it helps you make smarter, faster, and more cost-effective decisions.

1. Identify Dimensions That Contribute the Most to Your MTS Count:

Challenge: "As a user, I want to know which dimensions have the highest MTS count so that I can determine if they should be kept or dropped."

Solution: The main Dimension Utilization page provides a comprehensive view of all dimensions across your organization, making it easy to spot the dimensions that contribute the most to your MTS count.

The table is sorted by default in descending order by "Average Hourly MTS count", bringing the most impactful dimensions to your attention immediately.
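Conceptually, that default sort boils down to ranking dimensions by their average hourly MTS count. The snippet below is only an illustration of the idea with invented numbers, not the product's API:

```python
# Conceptual sketch only: rank dimensions by average hourly MTS count,
# descending, the way the Dimension Utilization table presents them.
from statistics import mean

# Hypothetical hourly MTS counts per dimension (abbreviated samples).
hourly_mts = {
    "user_id":    [950_000, 1_020_000, 990_000],
    "region":     [1_200, 1_150, 1_180],
    "request_id": [2_400_000, 2_350_000, 2_500_000],
}

ranked = sorted(hourly_mts.items(), key=lambda kv: mean(kv[1]), reverse=True)

for name, samples in ranked:
    print(f"{name:<12} avg hourly MTS ~ {mean(samples):,.0f}")
```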

2. Analyze Dimension Utility with Smart Ranking:

Challenge: “As a user, I need to understand if the dimension is used and how it’s being used, to be sure that it can be safely removed.”

Solution: We introduce a clear Utilization ranking system (R5 - R0) for each dimension.

This ranking, visible in the main table and drill-down views, provides an immediate understanding of a dimension's active use, guiding decisions on whether to keep, optimize, or drop it.
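The exact criteria behind R5 - R0 are defined by Usage Analytics and aren't reproduced here. Purely as an illustration of the idea, a rank of this kind could be derived from how many component types actively reference a dimension; everything in the sketch below (the mapping, the names, the thresholds) is hypothetical:

```python
# Purely illustrative: the real R5-R0 criteria are defined by Usage Analytics.
# This sketch assigns a made-up rank from how many component types
# (charts, dashboards, detectors) actively reference the dimension.
from dataclasses import dataclass

@dataclass
class DimensionUsage:
    name: str
    charts: int
    dashboards: int
    detectors: int

def hypothetical_rank(usage: DimensionUsage) -> str:
    # Made-up mapping: more component types referencing the dimension means
    # a higher rank; a completely unreferenced dimension lands at R0.
    used_in = sum(count > 0 for count in (usage.charts, usage.dashboards, usage.detectors))
    return "R0" if used_in == 0 else f"R{used_in + 2}"

print(hypothetical_rank(DimensionUsage("region", charts=12, dashboards=3, detectors=2)))     # R5
print(hypothetical_rank(DimensionUsage("request_id", charts=0, dashboards=0, detectors=0)))  # R0
```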

3. Evaluate Dimensions by Metric & Utility:

Challenge: “As a user, I want to know if a dimension exists in multiple metrics so that I can analyze its utility."

Solution: Our new Dimension Profile page (accessed by clicking on a dimension name) offers a deep dive. Here, you can see all the metrics a specific dimension is tied to, along with its utilization across various observability components. This helps you understand its true value and impact.
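Conceptually, the question the profile page answers is an inverted lookup from dimension to metrics. Here is a minimal sketch of that idea, using hypothetical metric names and a hand-built mapping rather than any real API:

```python
# Minimal sketch: invert a metric -> dimensions mapping so you can ask
# "which metrics carry this dimension?" (metric names are hypothetical).
from collections import defaultdict

metric_dimensions = {
    "http.server.duration": ["region", "service", "http_status", "user_id"],
    "checkout.orders":      ["region", "service", "user_id"],
    "jvm.memory.used":      ["region", "service", "pool"],
}

dimension_metrics = defaultdict(list)
for metric, dims in metric_dimensions.items():
    for dim in dims:
        dimension_metrics[dim].append(metric)

print(dimension_metrics["user_id"])   # ['http.server.duration', 'checkout.orders']
```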

4. Troubleshoot Dimension Patterns:

Challenge: "As a user, I would like to know how the number of dimension values changes over time, so that I can identify spikes contributing to unnecessary Metric Time Series (MTS) bloat."

Solution: Detailed insights like "Unique values", "Percentage of total values", and "MTS with dimension" give you the data you need to reduce MTS bloat and manage costs. The trend charts show how hourly dimensional values evolve over time, enabling quick detection of spikes and attribution to the responsible parties.
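As a rough illustration of the kind of pattern those trend charts surface, the sketch below (with synthetic counts) flags hours where a dimension's unique-value count jumps well above its trailing average:

```python
# Synthetic illustration: flag hours where a dimension's unique-value count
# spikes well above the trailing average, the pattern the trend charts expose.
from statistics import mean

hourly_unique_values = [1_050, 990, 1_020, 1_000, 980, 9_800, 1_010, 1_030]

WINDOW = 4        # trailing hours to average over
THRESHOLD = 3.0   # flag anything more than 3x the trailing average

for hour, count in enumerate(hourly_unique_values):
    if hour < WINDOW:
        continue
    baseline = mean(hourly_unique_values[hour - WINDOW:hour])
    if count > THRESHOLD * baseline:
        print(f"Hour {hour}: {count:,} unique values (~{count / baseline:.1f}x the trailing average)")
```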

5. Uncover Problematic Dimensional Values with Sample Data:

Challenge: "As a user, I want to know what kind of values a dimension has, to get an idea of what the dimension is doing and therefore understand its function, use, and how valuable it is."

Solution: The Dimension Profile page includes a Sample Values tab, displaying up to 10 random dimensional values. This crucial insight helps you identify dimensions that might be generating unique values like timestamps, which often contribute to bloat without providing significant analytical value. It also helps highlight dimensions that might be sending the same value as another dimension, indicating potential redundancy.
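As an illustration of how you might reason about those samples outside the UI, the sketch below (synthetic values, hypothetical dimension names) applies two quick heuristics: spotting timestamp-like values and spotting dimensions that mirror each other:

```python
# Heuristic sketch over synthetic sample values: spot dimensions whose samples
# look like timestamps, and dimensions that duplicate each other's values.
import re

sample_values = {
    "request_timestamp": ["2025-06-01T12:00:01Z", "2025-06-01T12:00:02Z", "2025-06-01T12:00:03Z"],
    "env":               ["prod", "staging", "prod"],
    "environment":       ["prod", "staging", "prod"],
}

TIMESTAMP_RE = re.compile(r"^\d{4}-\d{2}-\d{2}[T ]\d{2}:\d{2}:\d{2}")

# Dimensions whose samples all look like timestamps rarely add analytical value.
for name, values in sample_values.items():
    if all(TIMESTAMP_RE.match(v) for v in values):
        print(f"{name}: values look like timestamps -> likely cardinality bloat")

# Dimensions carrying identical samples may be redundant copies of each other.
names = list(sample_values)
for i, first in enumerate(names):
    for second in names[i + 1:]:
        if sample_values[first] == sample_values[second]:
            print(f"{first} / {second}: identical sample values -> possible redundancy")
```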

Empowering Smarter Telemetry Management

Dimension Utilization is more than just a new feature; it's a strategic tool that empowers platform engineers and data managers to gain granular control over their observability data. By understanding the true impact and utility of each dimension, you can decide with confidence which dimensions to keep, optimize, or drop.

Stop letting hidden cardinality bloat your telemetry! Embrace Dimension Utilization and unlock a new level of efficiency and insight in your observability practice. Take control of your data, optimize your costs, and ensure your metrics are always working for you.

If you’d like to dive deeper into how to use Dimension Utilization, check out our documentation:

If you want to refresh your knowledge of your metrics usage and cost management in Splunk Observability Cloud - or explore more practical ways to optimize your telemetry data - take a look at these related resources:

...and selected sessions from .conf24 and .conf25:

For other recent Splunk Observability releases, check out the updates here.
