The Rules of React and Escape Hatches
React Compiler is not a permission slip to treat render like a generic function that can do
anything. It is an optimizer that assumes your components already behave like pure, predictable
functions of props and state (plus context), with effects and event handlers as the sanctioned
homes for side effects. Those assumptions are the same ones that make Concurrent Features, Strict
Mode double-invocation, and future React versions safe. The difference is that now violations do not
merely produce philosophical debates in code review—they cause the compiler to bail out and
leave a component unoptimized, or they show up as ESLint errors if you are on the current
eslint-plugin-react-hooks recommended preset.
What “rules” mean in practice
No mutating props or state during render. If you write props.items.sort(...) or
state.count++ before return, you break referential transparency: two renders with the same inputs
can observe different worlds. TypeScript can help if you type incoming props as readonly arrays
and treat state updates as immutable copies, but the rule is semantic. The compiler’s mutability
analysis flags many patterns; when it cannot prove safety, it skips optimization rather than risk
reusing a stale reference.
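The sort case has a one-line fix: copy before sorting. A minimal sketch (the helper name is ours, not a React API):

```typescript
// Array.prototype.sort mutates in place, so calling props.items.sort() directly
// rewrites the caller's array during render. Spreading into a new array first
// keeps the input untouched, so the same props always observe the same world.
function sortedCopy(items: readonly string[]): string[] {
  return [...items].sort(); // sort the copy, never the prop
}
```

Typing the parameter as `readonly string[]` lets TypeScript reject the mutating version at compile time, which is the cheap half of the semantic rule.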
No side effects during render. Network calls, logging pipelines that mutate global singletons,
document.title assignment, subscribing to stores, and localStorage reads that you pretend are
pure—all of these poison memoization because the “output” is not a function of React inputs alone.
Colocate I/O in useEffect, event handlers, external stores with explicit subscription APIs, or
server-only loaders, depending on your architecture.
Deterministic output. Randomness, Date.now() without careful stabilization, and useRef
tricks that change what JSX returns without going through state are all non-deterministic from
React’s point of view. Sometimes you need time or randomness for UI; the fix is to route those
values through state set in an effect, or accept that specific subtrees will not be
compiler-optimized.
No mutating values that appear in JSX. A classic foot-gun is creating an object literal,
mutating a field before passing it to a child, then expecting a child wrapped in React.memo to
bail out. Even without memo, the compiler’s dependency graph assumes that values flowing into JSX
are not sneakily changed afterward. Immutable data patterns—spread updates, small value
objects—align naturally with both TypeScript and the analyzer.
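A spread update makes the contrast concrete (types and helper name are illustrative):

```typescript
type Row = { id: string; selected: boolean };

// Mutating row.selected in place keeps the same object reference, so a
// React.memo child and the compiler's dependency graph both conclude that
// nothing changed. Producing a new object makes the change visible.
function withSelection(row: Row, selected: boolean): Row {
  return { ...row, selected }; // new reference, old object stays frozen in the previous render
}
```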
When a component is skipped, React still renders it correctly. You lose automatic memoization for that component (and potentially for boundaries that depend on its behavior), not correctness. The performance cliff is silent until you profile, which is why lint rules matter: they turn “skipped” into something you see during development.
How violations surface in tooling
The compiler pipeline includes validation passes that mirror the Rules of React. ESLint closes the
feedback loop for humans. Rules with names in the family of set-state-in-render and
set-state-in-effect (exact rule IDs evolve with plugin versions) target patterns where state
updates happen in the wrong phase, creating render cascades or infinite loops. Ref-related rules
call out unsafe ref access during render—reading ref.current to decide JSX structure, for
example—because refs are intentionally outside React’s snapshot model.
A TypeScript-heavy example of a problematic pattern is using a ref as a cache for derived JSX without syncing through state:
```tsx
import { useRef, type ReactNode } from "react";

function BadPanel({ id }: { id: string }) {
  const cacheRef = useRef<Map<string, ReactNode>>(new Map());
  if (cacheRef.current.has(id)) {
    return cacheRef.current.get(id)!;
  }
  const node = <ExpensiveSubtree id={id} />;
  cacheRef.current.set(id, node);
  return node;
}
```

Even if this “works” in development, it violates the predictable-render contract: React may invoke
render more than once for a given commit, and you are storing mutable cache state outside the
fiber’s memoization machinery. The compiler cannot treat node as a pure function of id. A
rule-following version lifts caching to a parent that controls keys, uses useMemo with explicit
dependencies, or accepts remounting via key={id}.
Legitimate escape hatches: when manual memoization still wins
The compiler does not ban useMemo, useCallback, or React.memo. It makes them
special-purpose rather than default hygiene.
The most important remaining use case is effect dependencies. Effects run when their dependency
array changes. If an intermediate value is recreated every render—even if doing so is cheap—you can
force an effect to fire more often than intended. A stable callback reference still matters when a
child is a memo’d component that compares props by reference, or when passing a callback into a
dependency array of another hook.
```typescript
import { useEffect, useMemo, useState } from "react";

function useDebouncedIdentityToken(raw: string, delayMs: number) {
  const [token, setToken] = useState(raw);
  useEffect(() => {
    const id = setTimeout(() => setToken(raw), delayMs);
    return () => clearTimeout(id);
  }, [raw, delayMs]);
  const signature = useMemo(() => ({ token, delayMs }), [token, delayMs]);
  return signature;
}
```
Here useMemo is not primarily about saving work—it is about ensuring signature is referentially
stable across renders where raw has not yet settled, so downstream
useEffect(() => { … }, [signature]) does not chatter. The compiler may still optimize other parts
of the tree; you are narrowing a reference contract for synchronous consumers of that value.
Similarly, useCallback remains appropriate when integrating with libraries that use reference
identity in dependency lists, or when supplying handlers to highly optimized lists where you have
measured prop churn. React.memo on a large child can still be a blunt instrument you choose
after profiling, especially for components that receive many props where only a subset are stable.
Existing codebases: delete memoization slowly
For new code, the community guidance converges on trusting the compiler first and adding manual
memoization only when you have a measured or API-driven reason. For existing code, wholesale
deletion of useMemo/useCallback in a single pull request is risky: some of those hooks were
papering over unstable references for effects, or working around child components that are not yet
compiler-optimized. A sane approach is to enable the compiler, profile hot routes, then remove
redundant hooks in the components where the profiler shows no benefit and lint reports no bail-outs.
Because compiler versions can change memoization granularity, teams without strong visual regression and interaction tests should pin exact compiler package versions until upgrades are deliberate. A future release might merge or split memoization boundaries; behavior stays correct, but subtle timing around effects could shift if you relied on accidental referential stability from an older transform.
Skipped components and team process
When a component is skipped, treat it like a TypeScript error in spirit: find the suppressing pattern, fix the root cause, or isolate the untrusted subtree behind a small wrapper that follows the rules. Code review checklists can include “no ref reads for render branching,” “no prop mutation,” and “no fetch in render”—the same items Strict Mode already encouraged, now tied to measurable optimization.
The Rules of React were never optional for correct concurrent apps; they were just easier to ignore
when the only consequence was an occasional stale UI under race conditions. With the compiler in the
loop, they become the license to optimize. Follow them and you get granular automatic
memoization; break them and you get correct-but-slower code until you refactor. Manual useMemo and
useCallback remain the precision tools for reference identity and effect orchestration—not the tax
every component pays just to exist.