gcx CLI: Terminal-Native Observability for Humans and AI Agents

The New Engineering Workflow Needs Production Context

The way engineers write code is evolving rapidly. With the rise of agentic coding tools like Cursor and Claude Code, much of the daily development work now happens directly in the command line. These AI agents excel at generating code, but they create a visibility gap: they can see your source files but remain blind to what’s happening in your production environment.

Without access to real-time operational data, agents rely solely on pattern matching. They don’t know about a sudden latency spike on checkout or whether your service is meeting its SLOs. This forces engineers to context-switch between their terminal and separate monitoring dashboards, slowing down the very workflow these tools are meant to accelerate.

To close this gap, Grafana has launched the public preview of gcx, a new CLI tool that brings Grafana Cloud and Grafana Assistant directly into the terminal. With gcx, both human engineers and AI agents can spot, diagnose, and resolve incidents in minutes rather than hours.

From Greenfield to Full Observability in Minutes

gcx is designed to handle the heavy lifting of setting up observability from scratch. Most services start with no instrumentation, alerts, or SLOs. Instead of treating this as a blocker, gcx treats it as a starting point. You simply point your agent at the service and ask it to bring the system up to standard. The tool exposes the necessary primitives across the full observability lifecycle.

Instrumentation with OpenTelemetry

gcx can wire OpenTelemetry directly into your codebase. It validates that metrics, logs, and traces are flowing correctly and confirms the data lands in the right backend—all without leaving the terminal. This automatic instrumentation ensures your agent can immediately start working with production signals.
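As a rough illustration, an instrumentation session might look like the following. The subcommand names and flags here are hypothetical placeholders, not confirmed gcx syntax; consult the official documentation for the actual interface.

```shell
# Hypothetical sketch -- command names are illustrative, not real gcx syntax.
gcx instrument ./checkout-service      # wire OpenTelemetry into the codebase
gcx validate telemetry                 # confirm metrics, logs, and traces are flowing
                                       # and landing in the right backend
```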

Alerting, SLOs, and Synthetic Checks

Once instrumentation is in place, gcx generates alert rules based on the signals your service actually emits. You can define an SLO against a real latency or availability indicator and push it live. It also sets up synthetic probes so that users aren’t the first to report an outage. Everything is driven from the command line.
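A session covering all three steps might resemble the sketch below. Every subcommand, flag, and value shown is an illustrative assumption rather than documented gcx syntax.

```shell
# Hypothetical sketch -- not actual gcx commands.
gcx alerts generate --service checkout           # derive alert rules from emitted signals
gcx slo create --indicator latency \
    --objective 99.9 --window 30d                # define and push an SLO live
gcx checks create --type http \
    --target https://shop.example.com/health     # synthetic probe, so users aren't
                                                 # the first to report an outage
```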

Frontend, Backend, and Kubernetes Monitoring

For frontend observability, gcx can onboard a Faro-instrumented application, create the corresponding app in Grafana Cloud, and manage sourcemaps so stack traces remain readable. For backend services and Kubernetes infrastructure, it uses Instrumentation Hub to enable monitoring quickly and consistently.
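For the frontend path, the onboarding flow could be sketched as follows; again, the command names and paths are hypothetical placeholders, not documented syntax.

```shell
# Hypothetical sketch -- names and flags are illustrative only.
gcx frontend onboard ./web-app       # register a Faro-instrumented app
gcx sourcemaps push ./web-app/dist   # upload sourcemaps so stack traces stay readable
```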

Everything as Code

gcx treats dashboards, alerts, SLOs, and checks as code. You can pull these resources as files, edit them locally with your agent, and push changes back. If a human needs to investigate further, gcx provides a deep link directly into Grafana Cloud, minimizing friction.
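The pull, edit, push loop described above might look something like this in practice. The subcommands and directory layout are assumptions for illustration, not confirmed gcx syntax.

```shell
# Hypothetical sketch of the as-code workflow -- illustrative syntax only.
gcx pull dashboards alerts slos ./observability/   # fetch resources as local files
# ...edit the files locally, by hand or with your agent...
gcx push ./observability/                          # apply the changes back to Grafana Cloud
```

Keeping these files in version control alongside the service's source lets the agent treat observability changes like any other code review.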

This unified approach turns what used to be a multi-day ticket into a single agent session.

Why This Matters for AI Agents

Giving agents access to production context transforms their decision-making. Without it, an agent is pattern-matching on source files and hoping to find the right answer. With gcx, the same agent can read the state of the running system—latency, error rates, SLO compliance—and make informed choices based on actual behavior.

For example, if an agent sees a checkout latency spike, it can immediately open the relevant dashboard, analyze the trace data, and suggest a fix that addresses the real problem rather than a hypothetical one. This reduces both the time to resolution and the number of human handoffs.

Get Started with gcx

The gcx CLI is now available in public preview. By bringing Grafana Cloud and Grafana Assistant into the terminal, it closes the observability gap for both humans and AI agents. To try it out, visit the official documentation and point your agent toward your service today.