Why Continuous Profiling Matters More Than Ever
Continuous profiling is quickly becoming a cornerstone of observability—and for good reason. While metrics, logs, and traces tell you what is happening, only profiling reveals why your code is slow or expensive. Metrics might show elevated CPU usage; logs indicate a sluggish request; traces pinpoint a bottlenecked service. But a profile dives deeper, identifying the exact function and line of code consuming the resources. As systems grow more complex, this granular visibility is essential. The OpenTelemetry project recently marked its Profiles signal as alpha, signaling that profiling is on track to become a first-class observability signal.
With the release of Pyroscope 2.0, we've completely rearchitected our open source continuous profiling database to make it more cost-effective at scale. This version also introduces native support for the OpenTelemetry Protocol (OTLP) for profiling, letting you adopt the emerging standard from day one.
The Case for Always-On Profiling
Before diving into what's new, it's worth understanding why continuous profiling delivers outsized benefits—often more than teams initially expect.
Cut Infrastructure Costs with Data, Not Guesswork
Cloud spending is a major portion of engineering budgets, with CPU and memory as primary contributors. Many teams overprovision simply because they lack fine-grained insight into resource consumption. Continuous profiling changes that. By seeing which functions drive CPU and memory usage across every service in production over time, you can target optimizations rather than adding more hardware.
Faster Root Cause Analysis
When an incident occurs, the first question is always why. Metrics and traces narrow the scope—you know the service, endpoint, and possibly the deployment that introduced the regression. But the final stretch of root cause analysis often eats up hours. With continuous profiling, that last mile shrinks to minutes. Compare profiles captured before and after the regression, diff them, and see exactly which code paths changed. No need to reproduce the issue in staging, add ad-hoc logging, or guess.
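To make the diffing idea concrete, here is a deliberately simplified sketch of what comparing two flat profiles looks like. This is not Pyroscope's actual query engine—just the core idea of subtracting per-function sample counts, with hypothetical function names:

```go
package main

import "fmt"

// Profile is a flat profile: function name -> CPU samples attributed to it.
type Profile map[string]int64

// Diff returns the per-function sample delta between two profiles.
// Positive values mean the function got more expensive after the change.
func Diff(before, after Profile) map[string]int64 {
	delta := make(map[string]int64)
	for fn, n := range after {
		delta[fn] = n - before[fn]
	}
	for fn, n := range before {
		if _, ok := after[fn]; !ok {
			delta[fn] = -n // function disappeared entirely
		}
	}
	return delta
}

func main() {
	before := Profile{"parseRequest": 120, "compileRegex": 10}
	after := Profile{"parseRequest": 125, "compileRegex": 150}
	for fn, d := range Diff(before, after) {
		if d > 50 { // surface only large regressions
			fmt.Printf("%s grew by %d samples\n", fn, d)
		}
	}
	// prints: compileRegex grew by 140 samples
}
```

Real profiles are trees of stack traces rather than flat maps, but the principle—subtract, then rank by delta—is the same one a flame graph diff visualizes.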
Understand Latency at the Code Level
Distributed tracing shows where wall clock time is spent; profiling reveals where the CPU spends that time. Together, they close the observability gap. A trace might indicate your auth service added 200ms to a request, while a profile shows 150ms of that was in a regex compilation that could be cached. This is especially powerful for tail latency—p99 spikes that are hard to reproduce and diagnose. Continuous profiling captures these moments as they happen, so you aren't left hoping to catch one live under a debugger.
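The regex scenario above is worth seeing in code, because it is the archetypal fix a flame graph surfaces. A hypothetical Go handler that recompiles its pattern on every call versus one that compiles it once:

```go
package main

import (
	"fmt"
	"regexp"
)

// validateSlow recompiles the pattern on every request. A CPU profile
// attributes the time to regexp compilation inside the hot path—exactly
// the kind of finding a flame graph makes obvious.
func validateSlow(email string) bool {
	re := regexp.MustCompile(`^[^@\s]+@[^@\s]+$`) // compiled per call
	return re.MatchString(email)
}

// Compiling once at package init moves the cost off the hot path.
var emailRe = regexp.MustCompile(`^[^@\s]+@[^@\s]+$`)

func validateFast(email string) bool {
	return emailRe.MatchString(email)
}

func main() {
	fmt.Println(validateSlow("a@b.com"), validateFast("a@b.com"))
	// prints: true true
}
```

Both functions return the same results; only the cost profile differs. Without a profile, the per-call compilation is invisible—the trace just shows the auth span taking longer.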
Pyroscope 2.0: A Closer Look at What’s New
The original Pyroscope architecture was based on Cortex, the same foundation used by Mimir and Loki. For Pyroscope 2.0, we rebuilt the entire storage and ingestion layer from scratch. The new design prioritizes performance and cost efficiency, especially for large-scale deployments.
Ground-Up Rearchitecture for Scale
We moved away from the Cortex dependency to a purpose-built architecture. This reduces memory overhead, improves ingestion throughput, and makes queries faster. The result is a database that can handle millions of profiles per second while keeping infrastructure costs low.
Native OTLP Profiling Support
Pyroscope 2.0 now natively ingests profiles via the OpenTelemetry Protocol (OTLP). This means you can use the same pipeline you already have for traces and metrics to collect profiling data. No need for custom agents or protocols. As the OpenTelemetry Profiles signal matures, your investment in this standard pays off immediately.
More Cost-Effective Storage
Profiling data can be voluminous. The new architecture uses advanced compression and deduplication techniques to significantly reduce storage costs. We've also optimized the way data is indexed, making it cheaper to retain profiles for longer periods without blowing out your budget.
Faster Queries and Diffing
One of the most powerful features of continuous profiling is the ability to compare profiles across time. Pyroscope 2.0 introduces a new query engine that makes diffs nearly instantaneous, even across large datasets. This accelerates root cause analysis and helps teams rapidly identify performance regressions.
Getting Started with Pyroscope 2.0
If you're already using Pyroscope, upgrading is straightforward—we've provided migration guides and backward compatibility for existing data. For new users, getting started is as simple as spinning up the server and connecting your applications via the OTLP exporter or the existing language-specific agents.
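For the language-specific route, here is a minimal sketch using the Go SDK (`github.com/grafana/pyroscope-go`); the application name and server address are placeholders for your own setup:

```go
package main

import "github.com/grafana/pyroscope-go"

func main() {
	// Start the embedded agent; it continuously pushes Go runtime
	// profiles to the Pyroscope server in the background.
	_, err := pyroscope.Start(pyroscope.Config{
		ApplicationName: "example.backend",       // hypothetical service name
		ServerAddress:   "http://localhost:4040", // default Pyroscope port
	})
	if err != nil {
		panic(err)
	}
	// ... run your application as usual ...
}
```

Once the agent is running, profiles show up in the Pyroscope UI tagged by application name, ready to query and diff.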
Continuous profiling is becoming an indispensable part of the observability stack. With Pyroscope 2.0, we're making it faster, cheaper, and easier to adopt at scale. Try it out today.