
With Observability and AI, If Data Is the New Oil, What Is Its Pipeline?

As with oil, data is informational energy that must be found, extracted, refined, and transported to the location of consumption. Here's how it's done.


As businesses enter the AI age, the engine of digital business relies ever more on access to data. Data powers the decisions needed to innovate with the speed and resiliency required to thrive in the modern economy, today and tomorrow.


Within digital enterprises, observability data is being used to drive operational excellence and digital resilience. Executive teams are being asked to do more with less, and they are mining their internal reserves for any telemetry that can be leveraged to power operations.


However, as with oil, this informational energy must be found, extracted, refined, and transported to the location of consumption. Whether our telemetry is destined for personal devices, a machine learning algorithm, or an LLM, the data is rendered useless if it’s locked away in a forgotten toolset.


The good news? The pipeline of telemetry is here.


Sprawl keeps us stuck in the past, and paying dearly


Ever since the first mainframe computers became commercially available in the 1950s, businesses have been finding novel ways to leverage system telemetry to improve both system resiliency and business performance. Each subsequent revolution in computing, from mainframe to client-server to cloud, has scattered an ever-larger sprawl of proprietary telemetry datasets across the enterprise.


While most organizations are not carrying the burden of 70-year-old mainframes (although some are!), modern enterprises bounce between dozens of siloed systems to detect, investigate, and respond to issues. This sprawl impedes human workflows and can significantly obstruct the implementation of AI automation.


I was struck by this during a recent visit to a Fortune 100 company that runs its entire operation on the shoulders of 4,000 crucial applications built over the past 50 years. To monitor the systems that existed 50 years ago, their teams still log into tools written for the platforms of that era. The CIO, CTO, CISO, and their respective teams were tasked with building a plan to leverage GenAI, but their own data was trapped inside dozens of proprietary, legacy systems. The business needed to consolidate its data before GenAI could take advantage of it.


This isn’t uncommon. As companies move through these different generations of tool sets, specialized tools pop up to monitor each of them. Then there’s commercial off-the-shelf software like SAP, with important, unique components; maybe you have to go to a specific vendor that specializes just in handling that. Add in the work of an application development team, essentially an entire organization developing new, more modern software, and you’re left with a lot of tools that the business must be able to access, monitor, and secure.


Companies that don’t set their telemetry free end up stuck in the past and are often unable to take advantage of AI-driven operational efficiencies. Organizational tool sprawl keeps telemetry data locked away deep in the Permian Basin, and standards are needed to unleash the value of that data.


OTel unlocks the pipeline we’ve been waiting for


That’s where OpenTelemetry comes in. An open-source project of the Cloud Native Computing Foundation, OpenTelemetry is a collection of APIs, SDKs, and other tools that gives you the power to centralize and standardize the telemetry data describing what your software is doing and how well it’s doing it.
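To make that concrete, here is a minimal sketch of application-side instrumentation with the OpenTelemetry Python SDK. The service name, OTLP endpoint, and attribute values are illustrative assumptions, not anything prescribed by the project:

```python
# Minimal sketch: emit one standardized trace span with the OpenTelemetry Python SDK.
# The service name, endpoint, and attributes below are illustrative placeholders.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# Describe the service emitting the telemetry.
resource = Resource.create({"service.name": "checkout-service"})

# Ship spans over OTLP, the vendor-neutral wire protocol, to any compatible backend.
provider = TracerProvider(resource=resource)
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4317", insecure=True))
)
trace.set_tracer_provider(provider)

# Instrument a unit of work: the span follows OTel's shared schema, so any
# OTLP-aware tool can consume it without a proprietary agent.
tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("process-order") as span:
    span.set_attribute("order.id", "A-1001")
```

Because the span travels over OTLP in OpenTelemetry’s shared schema, any OTLP-compatible backend can ingest it, which is exactly what frees telemetry from a single proprietary toolset.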


Here’s where the power of community comes into play: Because OpenTelemetry is open source, everyone can contribute to it. More than 12,000 people have contributed to the project, making OTel the CNCF’s second most popular project behind Kubernetes. A telemetry project of this scale creates a window of opportunity for enterprises to consolidate the monitoring systems that have been sprawling for decades.


OpenTelemetry lays the groundwork for a centralized telemetry pipeline, one that routes telemetry to the engines of operational excellence across an entire organization.
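What does such a pipeline look like in practice? The sketch below shows a minimal OpenTelemetry Collector configuration that accepts OTLP data from instrumented services, batches it, and routes it onward. The endpoints and backend host are placeholder assumptions, not references to any particular product:

```yaml
# Sketch of an OpenTelemetry Collector pipeline (endpoints are placeholders).
receivers:
  otlp:                      # accept OTLP telemetry from instrumented applications
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

processors:
  batch: {}                  # batch telemetry before export to reduce overhead

exporters:
  otlphttp:
    endpoint: https://observability-backend.example.com:4318   # any OTLP-compatible backend

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
```

The same collector can fan telemetry out to multiple exporters at once, which is what makes it a practical consolidation point for the tool sprawl described above.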


Future-proof operations with open standards


For some, the first step to tool consolidation may be to consolidate dashboards from legacy monitoring systems. That might be right for human agents in a legacy workflow. But to power operational excellence in the AI age, OpenTelemetry standards will enable you to unearth your telemetry data and pipe it to the engines that run your business.


To find out what other observability trends will shape the coming years, read Splunk’s 2024 Observability Predictions.

