The Metrics That Matter: INP, LCP, and CLS

If you optimize without a number on the screen, you are decorating code, not engineering performance. In 2026, the numbers that matter most for user experience are still the Core Web Vitals: a small set of field-measurable signals Google uses to approximate whether a page feels fast, stable, and responsive on real devices and networks. For React developers, these metrics are especially useful because they sit above framework trivia. They do not care whether you used hooks “correctly” in the abstract; they care whether your main thread, layout, and network story produce a page that responds when people tap and settles visually without surprises.

This section connects those vitals to React’s execution model and to the way you should prioritize work in a TypeScript codebase. It also makes the business case explicit, because performance budgets are easier to defend when they translate into retention and revenue, not only Lighthouse scores.

LCP: when the page looks loaded

Largest Contentful Paint (LCP) measures when the largest visible content element in the viewport finishes rendering. Think hero image, main headline block, or primary video poster—whatever dominates the user’s first impression of “something is here.” Google’s guidance treats 2.5 seconds as the threshold for a “good” experience at the 75th percentile of real users; beyond that, you are increasingly likely to lose people who interpret slowness as brokenness.

LCP is not purely a React problem. It is the intersection of server response time, critical path CSS, font loading, image sizing and format, client hydration if you ship HTML first, and how eagerly your JavaScript competes for the main thread. Still, React choices influence LCP in predictable ways. Shipping a monolithic bundle that must download, parse, and execute before the shell becomes interactive pushes LCP and interactivity in the wrong direction. Rendering strategies that defer below-the-fold work help, but only if the “largest” element is not accidentally blocked by a long task or a layout-unstable insertion.

For SPAs, one practical discipline is to ensure your LCP candidate is discoverable early: avoid hiding it behind client-only gates, avoid opacity: 0 tricks that delay paint without good reason, and give images explicit dimensions so the browser can reserve space while bytes arrive. Pair that with honest measurement: lab tools are great for iteration, but LCP is defined in the field; your laptop is not the median phone on a commuter train.
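As a concrete sketch of “give images explicit dimensions,” the attributes below let the browser reserve layout space and prioritize the hero fetch before any pixels arrive. The path and numbers are illustrative, not a prescription:

```typescript
// Hedged sketch: the attributes that make an LCP hero image discoverable and
// layout-stable. The src path and dimensions here are hypothetical examples.
interface HeroImageAttrs {
  src: string;
  width: number;   // intrinsic pixel width: lets the browser reserve space
  height: number;  // intrinsic pixel height: prevents a shift when bytes land
  fetchpriority: "high" | "low" | "auto"; // hints the preload scanner
  decoding: "async" | "sync" | "auto";
}

const heroAttrs: HeroImageAttrs = {
  src: "/images/hero.avif", // hypothetical asset path
  width: 1200,
  height: 630,
  fetchpriority: "high", // note: the React prop is camelCased as fetchPriority
  decoding: "async",
};

// In JSX this renders roughly as:
// <img src={heroAttrs.src} width={heroAttrs.width} height={heroAttrs.height}
//      fetchPriority="high" decoding="async" alt="…" />
```

The `width` and `height` attributes do not fix the rendered size; with CSS like `width: 100%; height: auto`, they simply establish the aspect ratio the browser reserves while the image downloads.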

CLS: trust is visual stability

Cumulative Layout Shift (CLS) quantifies how much visible content moves after it first appears—ads that push text, fonts that swap metrics, images without width and height, or list items that reorder without animation budgets. The “good” bar is below 0.1 at the 75th percentile. CLS is easy to dismiss until you watch someone mis-tap a button that jumped a fraction of a second earlier; it is a direct measure of whether your UI feels competent.

React can worsen CLS when render output changes structure without reserved space: skeleton screens that differ in height from final content, infinite lists that mount placeholders inconsistently, or conditionally rendered banners that appear above interactive controls. The fix is rarely “fewer re-renders” and more often deterministic layout: size placeholders to match final content, stabilize font fallbacks, and avoid inserting late-discovered media above the fold without aspect-ratio boxes.
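For the aspect-ratio box in particular, a small helper makes the idea concrete: derive a CSS `aspect-ratio` from known media dimensions so late-arriving content cannot push anything below it. The function name and dimensions are illustrative:

```typescript
// Hedged sketch: reserve space for media whose intrinsic size is known
// up front, so its arrival cannot shift surrounding content.
function aspectRatioStyle(width: number, height: number) {
  return {
    aspectRatio: `${width} / ${height}`, // browser reserves space at layout time
    width: "100%",
  };
}

// Hypothetical usage in JSX:
// <div style={aspectRatioStyle(1600, 900)}>
//   <img src="/media/chart.png" alt="…" />
// </div>
```

The same principle applies to skeletons: if the placeholder and the final card share one sizing function, they cannot disagree about height.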

INP: responsiveness for every interaction

Interaction to Next Paint (INP) replaced First Input Delay (FID) as a Core Web Vital in March 2024. FID measured only the delay before the browser could begin processing the first input, which made it blind to sluggishness on subsequent clicks, keystrokes, and route transitions: the places where modern apps live. INP closes that gap by observing every interaction over the page’s lifetime and reporting roughly the worst latency seen, ignoring a small number of outliers on interaction-heavy pages.

Conceptually, INP measures the time from a user interaction until the browser can paint the next frame that reflects that interaction’s handling. A “good” experience targets under 200 milliseconds at the 75th percentile. Slow INP almost always correlates with long tasks on the main thread: big React renders, expensive layout, synchronous JSON parsing, or third-party scripts that starve your handlers.
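One framework-agnostic mitigation is to break long loops into chunks that yield back to the event loop between batches, so pending input can be handled. This is a minimal sketch (the function name and chunk size are illustrative; newer browsers also offer `scheduler.yield()` for the same purpose, with `setTimeout` as the portable fallback used here):

```typescript
// Hedged sketch: process items in chunks, yielding between chunks so input
// handlers and the next paint are not starved by one long task.
async function processInChunks<T, R>(
  items: T[],
  work: (item: T) => R,
  chunkSize = 50,
): Promise<R[]> {
  const results: R[] = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) {
      results.push(work(item));
    }
    // Yield to the event loop so queued interactions can run between chunks.
    await new Promise((resolve) => setTimeout(resolve, 0));
  }
  return results;
}
```

The total work is unchanged; what improves is the scheduling, which is precisely what INP measures.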

This is where React’s concurrent features earn their keep in product conversations. useTransition lets you mark updates as non-urgent so urgent input can stay responsive. useDeferredValue lets you keep a fast, immediate representation for typing while deferring heavier derived views. Neither is magic—they do not shrink work—but they help the scheduler prioritize so the next paint after a tap is not stuck behind a whole-tree reconciliation you could have postponed.

Long synchronous renders still hurt INP even if they are “pure.” If a single interaction triggers hundreds of components to reconcile and each does a little too much work, the sum becomes a main-thread hog. That is why the earlier chapter’s “profile first” mantra matters: INP is the user’s stopwatch on your JavaScript budget.

The business case: latency is not neutral

Performance metrics are not abstract kindness. Google’s widely cited mobile research found that the probability of a bounce rises by about 32 percent as page load time grows from one second to three. The exact figure varies by industry and audience, but the directional lesson is stable: people leave when the UI feels stuck, and ad-driven or commerce funnels pay for that hesitation directly.

For engineering leads, the takeaway is to tie vitals to outcomes your stakeholders already track: conversion, support tickets, session length, or task completion in SaaS products. A Core Web Vital in the red is not merely an SEO checkbox; it is a proxy for friction that shows up in behavior.

Measuring vitals in production React

Lab tests catch regressions; production measurement catches reality. The web-vitals library exposes callbacks for each metric so you can forward values to analytics or your observability stack, and its attribution build adds fields that explain, for example, which interaction drove a poor INP. In TypeScript, keep the boundary typed so those attribution fields do not get silently dropped.

// Import from the attribution build so metrics carry an `attribution`
// object; the plain "web-vitals" entry point does not include it.
import { onLCP, onCLS, onINP } from "web-vitals/attribution";

type WebVitalMetric = {
  name: string;
  value: number;
  rating: "good" | "needs-improvement" | "poor";
  attribution?: unknown;
};

function sendToAnalytics(data: WebVitalMetric) {
  fetch("/api/metrics", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(data),
    keepalive: true,
  });
}

onLCP((metric) => {
  sendToAnalytics({ name: metric.name, value: metric.value, rating: metric.rating });
});

onCLS((metric) => {
  sendToAnalytics({ name: metric.name, value: metric.value, rating: metric.rating });
});

onINP((metric) => {
  sendToAnalytics({
    name: metric.name,
    value: metric.value,
    rating: metric.rating,
    // Attribution identifies which element and interaction were slow.
    attribution: metric.attribution,
  });
});

Using keepalive: true on the fetch call helps ensure the beacon still fires when the user navigates away or closes the tab—exactly when you most want to capture a final paint or interaction story.
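Where available, `navigator.sendBeacon` is an even more reliable delivery path during unload, with `keepalive` fetch as the fallback. A minimal sketch, reusing the hypothetical `/api/metrics` endpoint from above:

```typescript
// Hedged sketch: prefer sendBeacon for unload-safe delivery, fall back to
// fetch with keepalive. Returns true when a send was attempted.
function beacon(url: string, data: unknown): boolean {
  const body = JSON.stringify(data);
  if (typeof navigator !== "undefined" && "sendBeacon" in navigator) {
    // sendBeacon queues the request even as the page is being torn down.
    return (navigator as any).sendBeacon(url, body);
  }
  fetch(url, { method: "POST", body, keepalive: true }).catch(() => {
    // Metrics delivery is best-effort; never surface errors to the user.
  });
  return true;
}
```

Note that both paths cap the payload (roughly 64 KB), so keep beacon bodies small.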

Once data lands server-side, segment it: vitals differ by device class, geography, and route. A dashboard that only shows a global average will hide the INP regression that only hits your heaviest table view on Android.
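One cheap way to enable that segmentation is to attach the dimensions client-side, before the beacon leaves. The field names in this sketch are illustrative, not a fixed schema:

```typescript
// Hedged sketch: enrich each vital with segmentation dimensions so dashboards
// can slice by route, device class, and network instead of a global average.
interface SegmentedVital {
  name: string;
  value: number;
  rating: string;
  route: string;
  deviceMemoryGB?: number; // from navigator.deviceMemory, where supported
  connection?: string;     // e.g. "4g", from the Network Information API
}

function segment(metric: { name: string; value: number; rating: string }): SegmentedVital {
  const nav = typeof navigator !== "undefined" ? (navigator as any) : undefined;
  return {
    ...metric,
    route: typeof location !== "undefined" ? location.pathname : "unknown",
    deviceMemoryGB: nav?.deviceMemory,
    connection: nav?.connection?.effectiveType,
  };
}
```

Both `deviceMemory` and `connection` are optional because browser support varies; the server should treat them as best-effort hints, not required fields.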

Putting metrics ahead of micro-optimizations

Core Web Vitals reward outcomes, not patterns. The React Compiler may remove whole categories of manual memoization, but it cannot shrink your images, fix layout shift, or delete a synchronous megabyte of vendor code. Start from LCP, CLS, and INP in the field, reproduce issues under throttling, then change the smallest slice of code the profiler implicates. That is how you keep TypeScript React apps fast in 2026 without drowning them in optimization cargo cults.