Caching Strategies: fetch(), react cache(), and unstable_cache

Caching in an RSC application is not one knob—it is a stack of mechanisms that operate at different scopes and lifetimes. Confusing them leads to subtle bugs: you might dedupe within a request but still hammer the database across requests, or you might cache an HTTP response beautifully while your Prisma query remains cold on every hit.

There are three ideas to keep separate:

  1. fetch with Next.js cache options — ties HTTP responses to time-based revalidation and tag/path invalidation for remote APIs you call with fetch.

  2. cache() from React — request-scoped memoization: many calls in one tree collapse to one execution; it does not persist across requests.

  3. unstable_cache (Next.js) — wraps arbitrary async work (database, ORM, internal services) in a durable application cache with revalidate windows and tags—fetch not required.

Invalidation then becomes a design choice: tags for logical groups (products, product-123), paths for concrete routes, and time for freshness when stale-while-revalidate behavior is acceptable.

Layer 1: fetch with next.revalidate and tags

When your Server Component loads data from a URL, Next can store that response in the Data Cache and reuse it according to your options. revalidate sets a TTL in seconds; tags give you named buckets to invalidate from Server Actions or route handlers.

// Method 1: fetch() with Next.js cache options
async function getProduct(id: string) {
  const res = await fetch(`https://api.example.com/products/${id}`, {
    next: {
      revalidate: 3600,
      tags: ["products", `product-${id}`],
    },
  });
  if (!res.ok) throw new Error(`Product ${id} fetch failed: ${res.status}`);
  return res.json();
}

Validate and type the JSON response if you control the API (e.g. ProductSchema.parse(await res.json()) with a Zod schema) so downstream components do not spread any-typed data through the tree.

When to use: external HTTP APIs, CDN-friendly GETs, and any read where fetch is the natural primitive. If the remote resource is user-specific or must never be shared, use cache: 'no-store' (or equivalent) so you do not accidentally serve one user’s payload to another.
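A minimal sketch of that per-user case (the cart endpoint and helper names are illustrative, not from the original): keeping the cache directive in one small pure helper makes the no-store policy visible and easy to test.

```typescript
// Hypothetical per-user loader: cache: "no-store" keeps the response out
// of the shared Data Cache entirely, so it can never leak between users.
type CartItem = { sku: string; qty: number };

// Pure helper (illustrative) so the cache policy lives in one place
function buildCartRequest(userId: string): [string, { cache: "no-store" }] {
  return [
    `https://api.example.com/users/${userId}/cart`,
    { cache: "no-store" }, // never stored, never shared across users
  ];
}

async function getCart(userId: string): Promise<CartItem[]> {
  const [url, init] = buildCartRequest(userId);
  const res = await fetch(url, init);
  if (!res.ok) throw new Error(`cart fetch failed: ${res.status}`);
  return res.json();
}
```

Centralizing the request options this way also gives you one obvious place to audit when reviewing what is and is not shareable.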

Layer 2: unstable_cache for non-fetch work

Most serious apps read from a database, not from a public URL. unstable_cache wraps a function so its result is cached like a fetch entry: revalidate for TTL, tags for selective invalidation. The name still carries unstable_ in many versions—track release notes—but the pattern is established: cache the unit of read you want to reuse.

// Method 2: unstable_cache for non-fetch (DB) operations
import { unstable_cache } from "next/cache";
import { db } from "@/lib/db";

const getCachedUser = unstable_cache(
  async (userId: string) => {
    return db.user.findUnique({
      where: { id: userId },
      include: { profile: true },
    });
  },
  ["user"],
  { revalidate: 900, tags: ["users"] },
);

The key parts array (['user']) participates in cache identity; include dynamic segments in keys or arguments so different users do not collide. Passing userId as an argument (as above) is the usual pattern—Next incorporates serialized args into the cache key for that invocation.

When to use: ORM queries, aggregation pipelines, and expensive transforms that are safe to share across users hitting the same key. For per-request personalization, either do not cache that segment or key the cache so tightly that sharing is impossible — a wrong choice here is a privacy defect, not a performance nit.
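To see why both the key parts and the arguments matter, here is a toy sketch of key derivation (an illustration of the idea, not Next's actual internal scheme):

```typescript
// Toy cache-key derivation: key parts plus serialized arguments.
// Distinct userIds produce distinct keys, so users never share an entry.
function cacheKey(keyParts: string[], args: unknown[]): string {
  return [...keyParts, JSON.stringify(args)].join(":");
}

const keyA = cacheKey(["user"], ["user-1"]);
const keyB = cacheKey(["user"], ["user-2"]);
// keyA !== keyB: the per-user argument keeps entries separate
```

If userId were baked into neither the key parts nor the arguments, both calls would collapse to one shared entry: exactly the privacy defect described above.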

Layer 3: React cache() for per-request deduplication

cache() solves a different problem: fan-out inside one render. Without it, ten components that each call getUser(session.id) might issue ten identical queries. Wrapping the loader dedupes to one.

// Method 3: React cache() for request-scoped deduplication
import { cache } from "react";
import { db } from "@/lib/db";

export const getUser = cache(async (id: string) => {
  return db.user.findUnique({ where: { id } });
});
// Multiple components calling getUser(id) → only ONE DB query per request

When to use: shared loaders imported across the tree, layout + page + sidebar all needing the same entity. Combine with unstable_cache when you also want cross-request reuse: cache() on the outside for dedupe, unstable_cache inside for persistence—or the other way around depending on how you structure modules; avoid double-wrapping without a clear reason.
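The layering can be illustrated framework-free (plain TypeScript with assumed stand-ins, not the real next/cache and react APIs): an inner durable map plays the unstable_cache role, and an outer per-request memo plays the cache() role.

```typescript
// Inner layer: "durable" cache shared across requests (stand-in for unstable_cache)
const durable = new Map<string, Promise<string>>();
let dbCalls = 0;

function durableGetUser(id: string): Promise<string> {
  if (!durable.has(id)) {
    dbCalls++; // stands in for the actual DB query
    durable.set(id, Promise.resolve(`user:${id}`));
  }
  return durable.get(id)!;
}

// Outer layer: a fresh memo per request (stand-in for React cache())
function makeRequestScopedGetUser() {
  const perRequest = new Map<string, Promise<string>>();
  return (id: string) => {
    if (!perRequest.has(id)) perRequest.set(id, durableGetUser(id));
    return perRequest.get(id)!;
  };
}

// Request 1: three components fan out, only one "DB" call happens
const getUserReq1 = makeRequestScopedGetUser();
getUserReq1("42"); getUserReq1("42"); getUserReq1("42");

// Request 2: the durable layer still holds the entry, so dbCalls stays at 1
const getUserReq2 = makeRequestScopedGetUser();
getUserReq2("42");
```

The outer memo absorbs fan-out within one render; the inner map absorbs repetition across renders. That division of labor is the same one the real APIs implement.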

Invalidation from Server Actions

Caches are only as good as your invalidation story. After a mutation, tags let you surgically drop related entries without flushing the world.

// Cache invalidation in Server Actions
"use server";
import { revalidateTag, revalidatePath } from "next/cache";
import { db } from "@/lib/db";

type ProductData = { title?: string; price?: number };

export async function updateProduct(productId: string, data: ProductData) {
  await db.product.update({ where: { id: productId }, data });
  revalidateTag(`product-${productId}`);
  revalidateTag("products");
  revalidatePath(`/products/${productId}`);
}

Tags align with read paths: whatever you tagged when caching reads should be revalidated when writes touch that data. revalidatePath is a blunter instrument, useful when a segment's composition is hard to tag exhaustively; reach for it when you know exactly which routes must refresh.

TanStack Query and the client layer

The TOC for this document also points at TanStack Query alongside RSC. The mental model: RSC + HTTP/Data cache own the initial read and server-driven freshness; TanStack Query still shines for client-side interactivity after hydration—polling, infinite scroll, optimistic updates on dense dashboards—provided you hydrate with server-fetched initial data where appropriate so you do not duplicate fetch logic blindly.

TypeScript helps you draw the line: server DTO types can feed initialData generics on the client, keeping a single source of truth for shapes while still allowing client-only fields (timestamps, UI flags) in separate types when needed.
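A minimal sketch of that split (type and field names are illustrative): the server DTO stays the single source of truth, and client-only fields live in a derived view type rather than leaking into the wire format.

```typescript
// Server DTO: the shape the RSC loader returns and the client hydrates from
type ProductDTO = { id: string; title: string; price: number };

// Client-only concerns stay out of the DTO
type ProductView = ProductDTO & { fetchedAt: number; isFavorite: boolean };

// Derive the client view from the server shape in one place
function toView(dto: ProductDTO): ProductView {
  return { ...dto, fetchedAt: Date.now(), isFavorite: false };
}
```

On the client, ProductDTO is the type you would feed to a typed query hook's initialData; ProductView never crosses the wire, so the server contract cannot drift just because the UI grew a flag.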

TTL versus on-demand invalidation

Time-based revalidate: N is simple operationally: stale entries expire without you remembering to tag every reader. Tag/path invalidation is precise: the moment a write completes, readers flip to fresh on the next hit. Mature apps mix both—short TTLs as a safety net, tags for correctness after mutations, and no-store for anything that must never be shared. Document the policy next to each loader function; future you will not remember whether getBilling was safe to cache for five minutes or five milliseconds.

Debugging: when the UI “should” update but does not

Start from the lowest layer that should have changed. If cache() alone backs the read, no cross-request freshness is expected — only dedupe within one render. If fetch cached the response, confirm the mutation calls revalidateTag for every tag the read used. If unstable_cache wrapped ORM access, confirm the key parts and arguments match between writer and reader; an off-by-one string in the key array creates a phantom second cache entry that never invalidates.
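A small assumed helper makes the tag-drift failure concrete (not a Next.js API, just a debugging aid you might write while auditing reads against writes):

```typescript
// Which tags did the cached reader declare that no mutation ever revalidates?
function unrevalidatedTags(readerTags: string[], revalidated: string[]): string[] {
  const done = new Set(revalidated);
  return readerTags.filter((tag) => !done.has(tag));
}

// One character of drift ("product-1" vs "products-1") and the entry
// tagged "product-1" never invalidates:
const stale = unrevalidatedTags(
  ["products", "product-1"],  // tags the cached read declared
  ["products", "products-1"], // tags the Server Action revalidates
);
// stale contains "product-1": that entry will outlive every mutation
```

Running a check like this against each loader/action pair turns a silent staleness bug into a visible diff.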

Choosing a strategy (cheat sheet)

  Need → Reach for
  Same DB row read 12 times in one request → cache()
  Same DB row read across many requests/users (safe) → unstable_cache + tags
  Third-party HTTP GET with TTL → fetch + next.revalidate
  After a mutation → revalidateTag / revalidatePath in Server Actions

Misapplying unstable_cache to highly dynamic or user-specific reads creates stale or leaked data; misapplying cache() alone under load still hits the database on every HTTP request. Layer them intentionally, document tags alongside schema changes, and treat cache keys as part of your public API—because debugging production oddities will start there.