docs/runtime/deoptimization.md
Deoptimization (often called "deopt") is the process of moving execution from optimized code (generated by TurboFan or Maglev) back to unoptimized code (Ignition interpreter). This is necessary because optimized code makes optimistic assumptions about types and shapes of objects that may be violated at runtime.
V8 has three types of deoptimization (defined by DeoptimizeKind in src/common/globals.h): Eager, Lazy, and LazyAfterFastCall (a specialized variant of lazy deoptimization used for direct calls to C++).
Eager deoptimization occurs immediately when a check in the optimized code fails.
Eager deopts are synchronous and happen at the exact instruction that failed the check.
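A typical trigger can be sketched in plain JavaScript. The tiering thresholds and the deopt itself are engine-internal and invisible to the script; the iteration count below is illustrative, and the program's observable behavior is identical with or without optimization:

```javascript
// The engine speculates on the types it has seen. After warming up with
// small integers, the optimized code for add() guards on integer inputs.
function add(a, b) {
  return a + b;
}

// Warm up: the optimizer may compile add() assuming Smi (small integer)
// arguments (iteration count is illustrative, not a real threshold).
for (let i = 0; i < 100000; i++) {
  add(i, i + 1);
}

// A string violates the speculated type: the inlined check fails at that
// exact instruction, execution eagerly deopts to Ignition, and the call
// still completes correctly there.
const result = add("x", "y"); // "xy"
```

From the script's point of view nothing changed; the cost is that `add` falls back to (slower) interpreted execution until it is re-optimized with broader type feedback.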
Lazy deoptimization occurs when an assumption made by optimized code is invalidated by an external event, rather than a check failure within the code itself.
Instead of failing a check, the affected code object is marked for deoptimization (via `code->set_marked_for_deoptimization`). Lazy deopts are asynchronous relative to the code execution and are checked at specific safepoints or return sites.
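One common external event of this kind is a prototype mutation. The sketch below is illustrative: whether and when the engine optimizes `readX` is internal, but mutating `Point.prototype` is exactly the sort of invalidation that marks dependent optimized code for lazy deoptimization:

```javascript
// Optimized code may inline getX() under the assumption that the prototype
// chain of Point instances is stable.
class Point {
  getX() { return this.x; }
}

function readX(p) {
  return p.getX();
}

const p = new Point();
p.x = 1;

// Warm up so readX() may be optimized with getX() inlined
// (iteration count is illustrative).
for (let i = 0; i < 100000; i++) {
  readX(p);
}

// Redefining the method invalidates the inlining assumption. The optimized
// code is marked for deoptimization; it does not deopt at this instant, but
// lazily, at the next safepoint/return into that code.
Point.prototype.getX = function () { return this.x * 2; };

readX(p); // 2 — served by the deoptimized (interpreted) path
```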
The most complex part of deoptimization is reconstructing the interpreter frame(s) from the optimized frame state. This is handled by TranslatedState (defined in src/deoptimizer/translated-state.h).
Optimized code may have:
- inlined functions, so a single optimized frame can correspond to multiple interpreter frames;
- values held in registers or folded into constants rather than stored in the stack slots the interpreter expects;
- objects that escape analysis proved local and therefore never allocated, which must be rematerialized on the heap.
During compilation, TurboFan/Maglev generates Deoptimization Data (specifically a DeoptimizationFrameTranslation). This is a compact bytecode stream that describes how to reconstruct the unoptimized state for every point where a deopt can occur.
The translation data uses opcodes (defined in src/deoptimizer/translation-opcode.h) to instruct the deoptimizer on how to fill the slots of the reconstructed interpreter frame:
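Conceptually, the deoptimizer walks this opcode stream and fills interpreter-frame slots one by one. The toy decoder below illustrates the idea only; the opcode names and encoding are simplified stand-ins, not the actual opcodes from src/deoptimizer/translation-opcode.h:

```javascript
// Simplified stand-ins for translation opcodes (not V8's real encoding).
const STACK_SLOT = 0; // read a value from the optimized frame's stack
const REGISTER = 1;   // read a value that lived in a machine register
const LITERAL = 2;    // read a constant from the literal array

// Fill the slots of one reconstructed interpreter frame by interpreting a
// [opcode, operand] stream against the captured optimized frame state.
function reconstructFrame(translation, optimizedFrame, literals) {
  const slots = [];
  for (const [op, operand] of translation) {
    switch (op) {
      case STACK_SLOT: slots.push(optimizedFrame.stack[operand]); break;
      case REGISTER:   slots.push(optimizedFrame.regs[operand]);  break;
      case LITERAL:    slots.push(literals[operand]);             break;
      default: throw new Error(`unknown opcode ${op}`);
    }
  }
  return slots;
}

// Usage: three interpreter slots sourced from three different locations.
const slots = reconstructFrame(
  [[STACK_SLOT, 1], [LITERAL, 0], [REGISTER, 0]],
  { stack: [10, 20], regs: [7] },
  [42]
); // [20, 42, 7]
```

The real stream is considerably richer (it also describes frame kinds, bytecode offsets, and objects to materialize), but the slot-filling loop captures the core mechanism.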
When a deopt is triggered, the Deoptimizer reads this data and creates a TranslatedState object containing a list of TranslatedFrames. Each TranslatedFrame corresponds to one interpreter frame (including inlined ones).

```dot
digraph G {
  rankdir=TB;
  newrank=true;
  node [shape=record];
  edge [];
  subgraph cluster_opt {
    label="Optimized Stack";
    OF [label="Optimized Frame"];
  }
  subgraph cluster_table {
    label="Translation Table";
    TT [label="Deopt Data"];
  }
  subgraph cluster_recon {
    label="Reconstructed Stack";
    Stack [shape=record, label="{ <f1> Interpreter Frame 1\n(Inlined Function) | <f2> Interpreter Frame 2\n(Outer Function) }"];
  }
  Deopt [label="Deoptimizer"];
  Heap [label="Heap", shape=cylinder];
  Ignition [label="Ignition Interpreter"];
  OF -> Deopt [label="1. Trigger Deopt"];
  TT -> Deopt [label="2. Lookup PC"];
  Deopt -> Heap [label="3. Materialize Objects"];
  Deopt -> Stack [label="4. Construct Frames"];
  Stack -> Ignition [label="5. Resume"];
}
```
When deoptimization occurs while execution is inside a builtin function (e.g., a builtin written in CodeStubAssembler or Torque), V8 cannot simply resume at a bytecode offset in the interpreter, because the builtin execution was not driven by bytecode.
To handle this, V8 uses Builtin Continuations.
The deoptimizer constructs special continuation frames (BUILTIN_CONTINUATION_FRAME, JAVASCRIPT_BUILTIN_CONTINUATION_FRAME). These frames resume execution in dedicated trampoline builtins (ContinueToCodeStubBuiltin or ContinueToJavaScriptBuiltin). Continuation builtins can do more than just return; they can complete the execution of an inlined builtin before resuming the bytecode.
For example, if Array.prototype.forEach is inlined by TurboFan and a deopt occurs inside the callback, V8 cannot just jump back to the interpreter's Call bytecode (which invoked forEach), because we might be in the middle of the loop. Instead, execution resumes in ArrayForEachLoopLazyDeoptContinuation (defined in src/builtins/array-foreach.tq). This continuation knows how to resume the loop from the current index (initialK) and complete the remaining iterations before finally returning to the interpreter.

Continuations ensure that even if we deoptimize in the middle of a complex native operation, we don't lose the result and can safely transition back to interpreted JavaScript.
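The forEach scenario above can be sketched at the script level. The deopt and the continuation are invisible to JavaScript; what the sketch demonstrates is the guarantee they provide, namely that a mid-loop type surprise neither skips nor repeats any element:

```javascript
// Callback speculated on numbers; the string at the end of the final array
// would fail the speculation mid-loop in optimized code. The lazy-deopt
// continuation resumes the loop at the current index, so the observable
// result is identical to a purely interpreted run.
const seen = [];
function cb(x) {
  seen.push(typeof x === "number" ? x + 1 : x);
}

// Warm up with all-number arrays (iteration count is illustrative).
const arr = [1, 2, 3];
for (let i = 0; i < 10000; i++) {
  seen.length = 0;
  arr.forEach(cb);
}

// A string mid-array may trigger a deopt inside the inlined loop; every
// element is still visited exactly once, in order.
seen.length = 0;
[1, 2, "three"].forEach(cb); // seen: [2, 3, "three"]
```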