Instant Navigation: How GitHub Issues Reimagined Performance with Client-Side Caching


The Problem: Every Millisecond Breaks Flow

For developers working through a backlog, every navigation—opening an issue, jumping to a linked thread, returning to the list—carries a hidden cost. It's not that GitHub Issues was ever truly slow in isolation, but that even small delays accumulate, forcing unnecessary context switches. When you're in the zone, a 200-millisecond wait can feel like a full stop. The real issue wasn't just backend latency; it was that too many navigation paths required redundant data fetches, shattering the developer's flow state over and over again.

Source: github.blog

The Solution: A Client-First Architecture

Earlier this year, the GitHub Issues team launched an ambitious redesign—not by chasing marginal backend improvements, but by rethinking how pages load from start to finish. The core idea: shift as much work as possible to the client, and optimize for perceived latency. Instead of waiting for the server to render and send everything, the client renders instantly from locally available data, then silently revalidates in the background. This required three key innovations:

  1. Client-side caching layer backed by IndexedDB
  2. Preheating strategy to boost cache hit rates without flooding the network
  3. Service worker to preserve cache usability on hard navigations

Client-Side Caching with IndexedDB

The foundation is a local cache stored in the browser's IndexedDB. When a developer opens an issue, the client first checks this cache. If the data is present and fresh, the page renders instantly, with no network request needed. If not, it fetches from the server while showing a smooth loading state. The cache stores issue details, lists, and related metadata, and it updates incrementally rather than invalidating everything at once.
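The read path described above amounts to a stale-while-revalidate pattern. A minimal sketch follows; the names (`CacheStore`, `readIssue`), the freshness window, and the Map-backed store standing in for IndexedDB are all illustrative assumptions, not GitHub's actual code:

```typescript
// Sketch of a cache-first, revalidate-in-background read path.
// A Map stands in for the IndexedDB-backed store so the logic stays clear.
interface Entry<T> { value: T; storedAt: number }

class CacheStore<T> {
  private map = new Map<string, Entry<T>>();
  get(key: string): Entry<T> | undefined { return this.map.get(key); }
  put(key: string, value: T): void {
    this.map.set(key, { value, storedAt: Date.now() });
  }
}

const MAX_AGE_MS = 60_000; // assumed freshness window, not a real tuning value

// Returns cached data immediately when available; when the entry is stale,
// a background refresh is scheduled instead of blocking the render.
function readIssue<T>(
  cache: CacheStore<T>,
  key: string,
  fetchFresh: () => Promise<T>,
  revalidate: (task: Promise<void>) => void,
): { value: T | undefined; hit: boolean } {
  const entry = cache.get(key);
  if (entry) {
    if (Date.now() - entry.storedAt > MAX_AGE_MS) {
      // Stale: render the cached copy now, refresh silently in the background.
      revalidate(fetchFresh().then((fresh) => cache.put(key, fresh)));
    }
    return { value: entry.value, hit: true };
  }
  // Miss: start the fetch in the background too; the caller shows a loading
  // state until the data arrives.
  revalidate(fetchFresh().then((fresh) => cache.put(key, fresh)));
  return { value: undefined, hit: false };
}
```

The `revalidate` callback keeps network scheduling out of the read path, so the UI can render from the cache synchronously.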

Preheating the Cache

But a cache is only as good as its hit rate. To avoid empty caches on first use, the team implemented a preheating strategy. As a developer browses a project—for example, scanning the issue list—the system proactively fetches data for the issues they're likely to open next. Prefetch decisions are driven by behavioral signals, such as hovering over a link or an issue's position in the backlog. The result: by the time you click an issue, its data is already waiting in the local cache, making navigation feel instant.
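The "without flooding the network" part matters as much as the prefetching itself. One way to sketch it, assuming a hover-triggered queue with a concurrency cap (the class name, cap, and wiring are hypothetical, not GitHub's implementation):

```typescript
// Illustrative prefetch scheduler: enqueue a fetch when the user hovers a
// link, but cap in-flight requests so preheating never floods the network.
type Fetcher = (url: string) => Promise<void>;

class Prefetcher {
  private pending: string[] = [];
  private inFlight = 0;
  private seen = new Set<string>();

  constructor(private fetcher: Fetcher, private maxConcurrent = 2) {}

  // Called on hover or list-order heuristics; duplicate URLs are ignored.
  enqueue(url: string): void {
    if (this.seen.has(url)) return;
    this.seen.add(url);
    this.pending.push(url);
    this.drain();
  }

  private drain(): void {
    while (this.inFlight < this.maxConcurrent && this.pending.length > 0) {
      const url = this.pending.shift()!;
      this.inFlight++;
      this.fetcher(url).finally(() => {
        this.inFlight--;
        this.drain(); // a finished request frees a slot for the next one
      });
    }
  }
}
```

In a page, this would be wired to `mouseover` events on issue links, with the fetcher writing responses into the local cache.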

Service Worker for Hard Navigations

A common performance pitfall is the hard navigation—when a user reloads the page or opens a link in a new tab, the entire client application restarts and the cache is lost. To fix this, GitHub Issues introduced a service worker that intercepts network requests and serves cached data even on fresh page loads. The service worker works hand-in-hand with IndexedDB to ensure that the cache survives browser restarts. This means that even when you navigate directly to an issue URL, you still get that instant render from local data.
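A service worker that answers hard navigations cache-first might look roughly like this. The cache name, route pattern, and registration guard are assumptions for the sketch; GitHub's actual worker is not public in this form:

```typescript
// Sketch of a cache-first service worker for issue pages.
const CACHE_NAME = "issues-shell-v1";

// Pure routing rule, kept separate so it can be tested: match
// /{owner}/{repo}/issues and /{owner}/{repo}/issues/{number}.
function isCacheFirst(url: string): boolean {
  const { pathname } = new URL(url);
  return /^\/[^/]+\/[^/]+\/issues(\/\d+)?$/.test(pathname);
}

// Register only where a worker-like global with Cache Storage exists.
const sw = (globalThis as any).self;
if (sw && typeof sw.addEventListener === "function" && "caches" in globalThis) {
  sw.addEventListener("fetch", (event: any) => {
    if (!isCacheFirst(event.request.url)) return;
    event.respondWith(
      (globalThis as any).caches.open(CACHE_NAME).then(async (cache: any) => {
        const cached = await cache.match(event.request);
        if (cached) {
          // Serve the cached copy instantly; refresh it in the background
          // so the next hard navigation gets fresher data.
          event.waitUntil(
            (globalThis as any)
              .fetch(event.request)
              .then((fresh: any) => cache.put(event.request, fresh.clone()))
          );
          return cached;
        }
        const fresh = await (globalThis as any).fetch(event.request);
        await cache.put(event.request, fresh.clone());
        return fresh;
      })
    );
  });
}
```

Because the Cache Storage API persists across tabs and browser restarts, this is what lets a direct URL hit render from local data before the main application has even booted.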

Measuring the Impact: The Metrics That Matter

The team focused on a single metric: time to interactive after navigation. This is the time between clicking a link and being able to interact with the new page content. By optimizing for this perceived latency, they achieved dramatic improvements:

  • 95th percentile navigation time dropped from over 1.2 seconds to under 300 milliseconds
  • Cache hit rate on navigation paths exceeded 80% after the first few minutes of use
  • User-reported flow interruptions decreased significantly in internal tests
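Measuring this kind of percentile figure requires little more than stamping each navigation and aggregating the samples. A minimal sketch, with hypothetical names and a nearest-rank percentile (not GitHub's telemetry pipeline):

```typescript
// Nearest-rank percentile over a set of timing samples.
function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const idx = Math.min(sorted.length - 1, Math.ceil((p / 100) * sorted.length) - 1);
  return sorted[Math.max(0, idx)];
}

// Stamp a navigation at click time, close it when the destination is
// interactive, and report the p95 over all recorded navigations.
class NavTimer {
  private start = 0;
  readonly samples: number[] = [];
  begin(now: number): void { this.start = now; }            // on link click
  end(now: number): void { this.samples.push(now - this.start); } // on interactive
  p95(): number { return percentile(this.samples, 95); }
}
```

In a real page the timestamps would come from `performance.now()`; they are passed in here so the aggregation logic stays testable.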

Tradeoffs and Considerations

No architecture is free, and this approach comes with its own set of tradeoffs:

  1. Increased client-side complexity—building and maintaining the caching layer, preheating logic, and service worker requires careful engineering.
  2. Memory and storage costs—IndexedDB consumption can grow, especially for large projects. The team implemented strict eviction policies based on recency and size.
  3. Stale data risks—background revalidation must handle conflicts where the cache is outdated. The system uses versioned schemas and optimistic UI updates to minimize confusion.
  4. Debugging difficulties—client-side caching can obscure network behavior, making it harder to trace issues. The team added logging and visual indicators for cache state.
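The eviction policy mentioned in item 2 can be sketched as a byte-budgeted LRU: entries are kept in access order, and the least recently used are dropped once a storage budget is exceeded. The class, budget, and bookkeeping below are illustrative assumptions:

```typescript
// Hypothetical recency-and-size eviction policy. A Map preserves insertion
// order, so re-inserting an entry on access keeps the oldest entries first.
class BoundedCache {
  private entries = new Map<string, { bytes: number }>();
  private totalBytes = 0;

  constructor(private maxBytes: number) {}

  put(key: string, bytes: number): void {
    if (this.entries.has(key)) this.remove(key);
    this.entries.set(key, { bytes });
    this.totalBytes += bytes;
    this.evict();
  }

  // Touching an entry moves it to the most-recently-used position.
  get(key: string): boolean {
    const entry = this.entries.get(key);
    if (!entry) return false;
    this.entries.delete(key);
    this.entries.set(key, entry);
    return true;
  }

  has(key: string): boolean { return this.entries.has(key); }

  private remove(key: string): void {
    const entry = this.entries.get(key);
    if (entry) {
      this.totalBytes -= entry.bytes;
      this.entries.delete(key);
    }
  }

  // Drop least recently used entries until the byte budget is met.
  private evict(): void {
    while (this.totalBytes > this.maxBytes && this.entries.size > 0) {
      const oldest = this.entries.keys().next().value as string;
      this.remove(oldest);
    }
  }
}
```

A production version would also persist the access order to IndexedDB so eviction decisions survive restarts.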

Despite these tradeoffs, the performance gains have been transformative. As GitHub Issues becomes the planning layer for AI-assisted coding, speed is no longer a luxury—it's a requirement.

A Pattern You Can Apply Today

The techniques used by GitHub Issues are not unique to that platform. Any data-heavy web application can benefit from this same model: a client-side cache, proactive preheating, and a service worker that extends caching to hard navigations. The key is to design for instant perceived latency rather than optimizing backend response times alone.

By shifting work to the client and being smart about what data to prefetch, you can reduce the feeling of "waiting" even in complex apps. The future of web performance is local-first, and GitHub Issues has shown us one clear path forward.

The Bottom Line

In 2026, "fast enough" is no longer a competitive bar. Developer tools must feel instant—every action, every navigation, every load. GitHub Issues' modernized navigation proves that with the right architecture, you can turn latency into a non-issue. The result? Less context switching, more flow, and a tool that respects the developer's time.