docs/compiler/maglev/compiler-maglev.md
Maglev is V8's mid-tier optimizing compiler: a "fast" optimizing compiler that provides a significant performance boost over Sparkplug while compiling much faster than TurboFan. It fills the gap between Sparkplug (very fast, no optimizations) and TurboFan (slow, highly optimized). To achieve this, Maglev makes several deliberate design choices:
- It builds its graph from bytecode in a single pass.
- Its IR nodes are already close to machine level, so little further lowering is needed.
- It emits machine code directly through the MacroAssembler, without a separate instruction selection phase.

Maglev is a graph-based compiler that uses a Static Single Assignment (SSA) style Intermediate Representation (IR). It relies on the type feedback that Ignition collects in the FeedbackVector. It uses this feedback to speculate on types and generate specialized code (e.g., assuming a property access is monomorphic).

The entire compilation pipeline in src/maglev/maglev-compiler.cc consists of only a few major steps:
1. Graph building: the MaglevGraphBuilder iterates over bytecode and builds the graph in a single pass, performing abstract interpretation and inserting checks based on type feedback.
2. Register allocation (the StraightForwardRegisterAllocator).
3. Code generation: the MaglevCodeGenerator iterates over the graph and emits code to a buffer.

The core of Maglev's frontend is the MaglevGraphBuilder (defined in src/maglev/maglev-graph-builder.h).
- It walks the function's BytecodeArray.
- It maintains an InterpreterFrameState which maps Ignition's virtual registers and accumulator to Maglev IR nodes (ValueNode).
- It translates each bytecode into IR nodes; an Add bytecode might create an Int32AddWithOverflow node if feedback suggests integer addition.
- It uses the JSHeapBroker to read feedback and make decisions about inlining and specialized operations.

In TurboFan, a graph starts with high-level JavaScript operators, gets lowered to "Simplified" operators (handling types like numbers and strings), and finally to "Machine" operators (raw pointers, integers).
Maglev bypasses this multi-tiered lowering. Its IR nodes represent operations that are already close to machine level but still retain enough high-level information to support deoptimization.
For example, Int32AddWithOverflow is a single node that represents an integer addition that might fail and trigger a deopt. It doesn't need to be lowered further; it knows how to generate the code to perform the addition and check the overflow flag.
The most striking difference between Maglev and TurboFan is how they generate machine code.
In TurboFan, the graph is passed to an Instruction Selector, which matches patterns of nodes to machine instructions, creating a new "Instruction" list, which is then scheduled and colored by the register allocator.
In Maglev, after register allocation, the MaglevCodeGenerator simply iterates through the basic blocks and the nodes within them. Each node implements a GenerateCode method.
Here is an example from src/maglev/x64/maglev-ir-x64.cc for Int32AddWithOverflow:
```cpp
void Int32AddWithOverflow::GenerateCode(MaglevAssembler* masm,
                                        const ProcessingState& state) {
  Register left = ToRegister(left_input());
  if (!right_input().operand().IsRegister()) {
    auto right_const = TryGetInt32ConstantInput(kRightIndex);
    DCHECK(right_const);
    __ addl(left, Immediate(*right_const));  // Emits x64 'add' with immediate
  } else {
    Register right = ToRegister(right_input());
    __ addl(left, right);  // Emits x64 'add'
  }
  // None of the mutated input registers should be a register input into the
  // eager deopt info.
  DCHECK_REGLIST_EMPTY(RegList{left} &
                       GetGeneralRegistersUsedAsInputs(eager_deopt_info()));
  // Emit eager deopt if the overflow flag is set.
  __ EmitEagerDeoptIf(overflow, DeoptimizeReason::kOverflow, this);
}
```
The __ macro expands to masm->, which is the MaglevAssembler (inheriting from MacroAssembler). The node directly emits the addl instruction and the conditional jump for deoptimization.
This direct approach eliminates the overhead of instruction selection and intermediate lists, making compilation extremely fast.
A key requirement for Maglev is the ability to deoptimize back to the interpreter. This means that even though Maglev IR nodes are close to machine level, they must retain enough information to reconstruct the interpreter state.
Maglev achieves this by attaching EagerDeoptInfo or LazyDeoptInfo to nodes that can fail or side-effect.
Nodes that can fail (such as checks) use EagerDeoptInfo to reconstruct the frame state at the point of failure; nodes with side effects use LazyDeoptInfo so execution can resume in the interpreter after the operation completes.

The DeoptFrame stored in these info structures captures the state of the interpreter frames (parameters, registers, accumulator) at that specific point in the execution. This allows the deoptimizer to translate the current machine state back into interpreter frames.
Maglev performs only a curated set of optimizations that are fast to execute, rather than the full battery of passes TurboFan runs.
Maglev sits between Sparkplug and the optimizing compilers (TurboFan and Turboshaft). It is triggered when a function becomes hot enough, but before it is deemed worthy of the full optimization power (and compilation time) of TurboFan or Turboshaft.
In the new Turboshaft pipeline, Maglev can also be used as a frontend. In this mode (known as Turbolev, enabled with --turbolev), Maglev builds the graph and performs initial optimizations; the resulting graph is then lowered into Turboshaft IR for further optimization and code generation. This allows reusing Maglev's fast graph building and speculatively optimized IR as a starting point for Turboshaft.