It’s becoming increasingly difficult to keep up with observability data, thanks to the ongoing data explosion, the GenAI application mandate, and hybrid, containerized deployments. But retreat is not an option, even in the face of overwhelming odds, so onward we march in 2025 with observability predictions for the new year, as well as a few observations for good measure.
The biggest advance in observability over the past few years, arguably, has been the adoption of OpenTelemetry, which establishes standards for the “Holy Trinity” of observability data types: logs, metrics, and traces. That OpenTelemetry adoption trend will continue in 2025, predicts Andreas Prins, the vice president of product marketing for Linux distributor SUSE.
“OpenTelemetry will cement its place as the standard for telemetry data collection, embraced not only by open-source contributors but also by major commercial players,” Prins says. “This will drastically simplify integration, enabling teams to adopt observability practices more easily. The unified approach will lower barriers for new entrants, leading to a proliferation of innovative observability tools tailored to specific use cases.”
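For readers who have not worked with it yet, the sketch below shows roughly what OpenTelemetry instrumentation looks like in Python. It is a minimal sketch assuming the opentelemetry-api and opentelemetry-sdk packages are installed; the service, span, and metric names are made up for illustration, and a real deployment would export to an OpenTelemetry Collector via OTLP rather than to the console.

```python
# Minimal OpenTelemetry sketch (illustrative names; console export for demo only).
from opentelemetry import trace, metrics
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Wire up tracing once at startup; spans printed here would normally go to a collector.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")
meter = metrics.get_meter("checkout-service")  # no-op meter unless an SDK MeterProvider is configured
orders_processed = meter.create_counter("orders_processed")

def process_order(order_id: str) -> None:
    # One trace span per order; attributes become searchable metadata downstream.
    with tracer.start_as_current_span("process_order") as span:
        span.set_attribute("order.id", order_id)
        orders_processed.add(1)

process_order("1234")
```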
Programmers often rely on profiling to analyze how an application is running and consuming resources. In 2025, profiling will merge with tracing to provide even more observability for developers, predicts Ryan Perry, the principal product manager at Grafana Labs.
“While traces and profiles have their unique benefits, 2025 will see their increasing convergence as organizations seek deeper insights into application performance,” Perry prognosticates to BigDATAwire. “Traces excel at showing end-to-end request flows, while profiles reveal detailed system resource usage. By combining these tools, teams gain visibility into their applications that manually added spans never could.
“For example, when a trace shows a 400ms span, corresponding profile data can reveal exactly which code executed during that time period, down to the specific functions and their resource consumption,” Perry continues. “This allows teams to pinpoint performance bottlenecks with surgical precision, leading to more efficient optimization efforts and reduced operational costs. In the coming years, especially as profiling becomes stable in OpenTelemetry, forward-thinking organizations won’t just be collecting traces and profiles – they’ll be treating them as interconnected, contextual data streams that provide a holistic view of system performance and efficiency.”
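To illustrate the kind of correlation Perry describes, the hedged sketch below tags a profile with the trace and span IDs of the span it ran under. In practice continuous profilers do this automatically; the standard-library cProfile, and the handle_request, do_expensive_work, and store_profile functions, are stand-ins used only to show the idea, and the sketch assumes a tracer provider has been configured as in the earlier example.

```python
# Hedged sketch: key a profile by the trace/span it ran under (illustrative names).
import cProfile
import io
import pstats

from opentelemetry import trace

tracer = trace.get_tracer("checkout-service")

def do_expensive_work() -> int:
    # Stand-in for whatever a slow 400ms span would actually be doing.
    return sum(i * i for i in range(1_000_000))

def store_profile(trace_id: str, span_id: str, profile_text: str) -> None:
    # Stand-in for shipping the profile to a profiling backend, keyed by IDs.
    print(trace_id, span_id, profile_text[:120])

def handle_request() -> None:
    with tracer.start_as_current_span("handle_request") as span:
        ctx = span.get_span_context()
        profiler = cProfile.Profile()
        profiler.enable()
        do_expensive_work()
        profiler.disable()

        buf = io.StringIO()
        pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
        # Keying the profile by trace and span ID is what lets a slow span be
        # matched to the exact functions that executed during it.
        store_profile(format(ctx.trace_id, "032x"), format(ctx.span_id, "016x"), buf.getvalue())

handle_request()
```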
We all get attached to the tools that we use, whether or not they’re the best tools for the job. In 2025, we will wake up, smell the coffee burning, and say “goodbye” to our old monitoring tools forever, says Chrystal Taylor, an evangelist at SolarWinds.
“Traditional monitoring tools just don’t cut it anymore,” Taylor tells us. “The shift to observability is well underway, and we’re also seeing a big decline in homegrown apps. Open-source tooling is so robust and readily available now that spending the time and resources to build and maintain your own solution rarely makes sense. Add to that the growing expectation for IT pros to take on more roles, and it’s clear that the tools we use need to help bridge those gaps and support us as we upskill.”
One of the other big trends that’s currently unfolding is the boom in artificial intelligence, or AI (perhaps you’ve read something about it). As organizations build AI applications, keeping tabs on all of the data, software, and systems is becoming difficult. That’s fueling the rise of a new phenomenon called AI observability, says Baris Gultekin, head of AI at Snowflake.
“The emerging field of AI observability examines not only the performance of the system itself, but the quality of the outputs of a large language model–including accuracy, ethical and bias issues, and security problems such as data leakage,” Gultekin says. “I view AI observability as the missing puzzle piece to building explainability into the development process, giving enterprises faith in their AI demos to get them across the finish line.
“Although AI observability is a fairly new conversation, 2025 is the year it goes mainstream,” Gultekin continues. “We’ll see more and more vendors come out with AI observability features to meet the growing demand in the market. However, while there will be many AI observability startups, observability will ultimately end up in the hands of data platforms and the large cloud providers. It’s hard to do observability as a standalone startup, and companies that adopt AI models are going to need AI observability solutions, so big cloud providers will be adding the capability.”
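One narrow slice of what an AI observability check might look like is sketched below: scanning an LLM response for obvious data leakage before it is logged or returned. The regex patterns and the record_llm_event function are illustrative assumptions, not any particular vendor’s API; real platforms also evaluate accuracy, bias, toxicity, latency, and cost.

```python
# Hedged sketch of a single AI observability check: data-leakage flags on LLM output.
import re

LEAK_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"\b(?:sk-[A-Za-z0-9]{16,}|AKIA[A-Z0-9]{16})\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def leakage_flags(response_text: str) -> list[str]:
    # Return the names of every pattern found in the model's response.
    return [name for name, pattern in LEAK_PATTERNS.items() if pattern.search(response_text)]

def record_llm_event(prompt: str, response: str) -> None:
    flags = leakage_flags(response)
    # In a real system this event would flow into the observability pipeline
    # alongside latency, token counts, and quality scores.
    print({"prompt_chars": len(prompt), "response_chars": len(response), "leakage_flags": flags})

record_llm_event("Summarize the customer record", "Contact jane.doe@example.com about renewal")
```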
In a tech version of “I’ll scratch your back if you scratch mine,” AI and machine learning will also drive the observability ball forward with better data governance and algorithmic forecasting, even as observability tech (logs, metrics, traces) helps piece together exactly what’s going on inside GenAI and AI apps. Kunju Kashalikar, senior director of product management at Pentaho, is particularly bullish on AI in data observability.
“Data observability, when implemented correctly, will be the best tool for an organization to stay on the right track with data,” Kashalikar says. “Bringing observability for data and AI together is crucial for any business that wants to fully benefit from AI. Observability will help with security and governance and allow organizations to stay ahead of any issues whether data is at rest, in motion with ETL, used in applications, BI reports or ML/AI pipelines. Observability, however, will need to be active. For example, it won’t be sufficient to understand that data freshness has fallen and just see that in a static display. Observability will need to trigger action, either via intelligent automation or via a human who’s notified of what needs to be done.”
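A minimal sketch of that “active” posture might look like the following: a freshness check that triggers a backfill and pages a human instead of just updating a dashboard. The table name, SLA threshold, and the trigger_backfill and notify_on_call hooks are illustrative assumptions standing in for an orchestrator and a paging tool.

```python
# Hedged sketch of active data observability: a freshness check that acts, not just displays.
from datetime import datetime, timedelta, timezone

FRESHNESS_SLA = timedelta(hours=6)

def trigger_backfill(table: str) -> None:
    print(f"re-running ingestion for {table}")      # stand-in for an orchestrator call

def notify_on_call(message: str) -> None:
    print(f"paging on-call: {message}")             # stand-in for PagerDuty/Slack/etc.

def check_freshness(table: str, last_loaded_at: datetime) -> None:
    lag = datetime.now(timezone.utc) - last_loaded_at
    if lag <= FRESHNESS_SLA:
        return
    # Freshness has fallen behind the SLA: act on it rather than just chart it.
    trigger_backfill(table)
    notify_on_call(f"{table} is {lag} behind its freshness SLA")

check_freshness("orders_daily", datetime(2025, 1, 1, tzinfo=timezone.utc))
```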
Retrofitting observability into applications is so 2024. In 2025, developers will build observability directly into their apps, says Jacob Rosenberg, senior leader for infrastructure and platform engineering at observability firm Chronosphere.
“We need to shift observability left, the way we have with security and many other areas of IT, so that it’s actually being done as part of the design of an application,” Rosenberg tells us. “Right now, engineers aren’t thinking about the metrics, data, and observability that they need as they’re building things–it’s almost always retrofitted afterwards. We’ve done test-driven development; why not observability-driven development?”
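What observability-driven development could look like in practice is sketched below: the metrics a feature must emit are declared up front, next to the code, rather than bolted on after the fact. The service and metric names, and the use of OpenTelemetry metrics, are illustrative assumptions rather than a prescribed pattern.

```python
# Hedged sketch of observability-driven development: telemetry declared with the feature.
import time

from opentelemetry import metrics

meter = metrics.get_meter("payments-service")  # no-op unless an SDK MeterProvider is configured

# Declared during design, alongside the feature spec, not retrofitted later.
payment_attempts = meter.create_counter("payments_attempted")
payment_failures = meter.create_counter("payments_failed")
payment_latency = meter.create_histogram("payment_latency_ms")

def call_payment_gateway(card_token: str, amount_cents: int) -> bool:
    return True  # placeholder dependency so the sketch runs end to end

def charge(card_token: str, amount_cents: int) -> bool:
    start = time.monotonic()
    payment_attempts.add(1)
    try:
        ok = call_payment_gateway(card_token, amount_cents)
        if not ok:
            payment_failures.add(1)
        return ok
    finally:
        # Latency is recorded on every path, success or failure.
        payment_latency.record((time.monotonic() - start) * 1000)

charge("tok_test", 1999)
```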
AI-driven APIs exist at the programmatic edge of the wild frontier. In 2025, that frontier will be partially tamed thanks to adoption of observability technologies and techniques, predicts Rob Brazier, vice president of product at Apollo GraphQL.
“In 2025, the relationship between AI and APIs will enter uncharted territory, reshaping how systems are built and interact,” Brazier says. “AI will increasingly guide developers in crafting and consuming APIs, introducing new patterns and unpredictable usage scenarios. This shift will demand advanced observability tools to monitor and adapt to evolving behaviors, ensuring systems remain secure and efficient. As AI dynamically composes user experiences in real-time, APIs will need to be more robust, resilient, and flexible than ever before. Businesses must embrace this wild frontier with innovation and foresight, as the synergy between AI and APIs transforms digital ecosystems in ways we’re only beginning to understand.”
Data observability, which is a subset of observability focused on data supply chains, is also gaining steam. In 2025, data observability will become less reliant on manual intervention and more automated, predicts Egor Gryaznov, the CTO at data observability provider Bigeye.
“Now that data observability has reached a level of market maturity, automation will be essential to maximizing its value,” Gryaznov says. “Observability tools will increasingly focus on reducing user time in the platform by automating workflows for deployment, issue identification, triage, and resolution. As best practices become standardized, speeding up these processes will be key to delivering real ROI and enabling teams to resolve data issues with minimal manual intervention.”
Forget reactive observability and AIOps. In 2025, proactive AIOps will be the name of the game, says Phil Lenton, senior director of product management at Riverbed.
“By 2025, AIOps will transition from a reactive model, which fixes problems after they occur, to a proactive approach capable of predicting and resolving issues before they manifest,” Lenton says. “This evolution will leverage predictive analytics and advanced machine learning models to anticipate potential failures, optimizing operational efficiency and reducing downtime. Companies that embrace proactive AIOps will gain a significant edge, minimizing disruptions and improving user experiences across their IT ecosystems.”
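To make the “proactive” part concrete, here is a minimal sketch of the underlying idea: extrapolate a resource trend and act before the failure happens. The linear fit, thresholds, and sample data are illustrative assumptions; production AIOps platforms use far richer forecasting models than this.

```python
# Hedged sketch of proactive remediation: predict when a disk fills and act early.
def hours_until_full(samples: list[tuple[float, float]], capacity_gb: float) -> float | None:
    # Ordinary least-squares slope of disk usage (GB) over time (hours).
    n = len(samples)
    mean_t = sum(t for t, _ in samples) / n
    mean_u = sum(u for _, u in samples) / n
    denom = sum((t - mean_t) ** 2 for t, _ in samples)
    if denom == 0:
        return None
    slope = sum((t - mean_t) * (u - mean_u) for t, u in samples) / denom
    if slope <= 0:
        return None  # usage is flat or shrinking; nothing to predict
    _, latest_u = samples[-1]
    return (capacity_gb - latest_u) / slope

# Hourly usage samples for one volume: (hours since start, GB used).
usage = [(0.0, 410.0), (1.0, 418.5), (2.0, 427.0), (3.0, 436.0)]
eta = hours_until_full(usage, capacity_gb=500.0)
if eta is not None and eta < 24:
    print(f"predicted disk-full in {eta:.1f}h -- remediating before it happens")
```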
Best-of-breed tools typically prevail when a new technology category emerges, as has been the case with data observability, but consolidation often takes hold as the market matures. Ashwin Rajeeva, co-founder and CTO of data observability provider Acceldata, sees unified data observability platforms emerging to serve a variety of needs as the category solidifies.
“In 2025, unified data observability platforms will emerge as essential tools for large enterprises, enabling comprehensive visibility into data quality, pipeline health, infrastructure performance, cost management, and user behavior to address complex governance and integration challenges,” he says. “By automating anomaly detection and enabling real-time insights, these platforms will support data reliability and streamline compliance efforts across industries.”