docs/concepts/execution.md
NautilusTrader can handle trade execution and order management for multiple strategies and venues simultaneously (per instance). Several interacting components are involved in execution, making it important to understand the possible flows of execution messages (commands and events).
The main execution-related components include:
- Strategy
- ExecAlgorithm (execution algorithms)
- OrderEmulator
- RiskEngine
- ExecutionEngine or LiveExecutionEngine
- ExecutionClient or LiveExecutionClient

The Strategy base class inherits from Actor and contains all common data methods.
It also provides methods for managing orders and trade execution:
- submit_order(...)
- submit_order_list(...)
- modify_order(...)
- cancel_order(...)
- cancel_orders(...)
- cancel_all_orders(...)
- close_position(...)
- close_all_positions(...)
- query_account(...)
- query_order(...)

These methods create the necessary execution commands and send them on the message bus to the relevant components (point-to-point). They also publish events such as OrderInitialized.
There is not a single linear path for every command:
- submit_order(...) routes to the OrderEmulator for emulated orders, to an ExecAlgorithm when exec_algorithm_id is set, and to the RiskEngine otherwise.
- submit_order_list(...) follows the same branching behavior based on emulation and exec_algorithm_id.
- modify_order(...) routes to the OrderEmulator for emulated orders and to the RiskEngine otherwise.
- Cancel commands route to the OrderEmulator, ExecAlgorithm, or ExecutionEngine, depending on the command and order state.

For new order submission, the typical flow looks like this:
```
Strategy -> OrderEmulator or ExecAlgorithm or RiskEngine
```
From there, the downstream flow is typically:
```
OrderEmulator -> ExecAlgorithm or ExecutionEngine
ExecAlgorithm -> RiskEngine -> ExecutionEngine -> ExecutionClient
```
This diagram illustrates message flow (commands and events) across the Nautilus execution components.
```mermaid
flowchart LR
    strategy[Strategy]
    emulator[OrderEmulator]
    algo[ExecAlgorithm]
    risk[RiskEngine]
    engine[ExecutionEngine]
    client[ExecutionClient]
    strategy --> emulator
    strategy --> algo
    strategy --> risk
    strategy --> engine
    emulator -. OrderReleased .-> risk
    emulator --> algo
    emulator --> engine
    algo --> risk
    risk <--> engine
    engine <--> client
```
An order management system (OMS) type refers to the method used for assigning orders to positions and tracking those positions for an instrument.
OMS types apply to both strategies and venues (simulated and real). Even if a venue doesn't explicitly
state the method in use, an OMS type is always in effect. The OMS type for a component can be specified
using the OmsType enum.
The OmsType enum has three variants:
- UNSPECIFIED: The OMS type defaults based on where it is applied (details below).
- NETTING: Positions are combined into a single position per instrument ID.
- HEDGING: Multiple positions per instrument ID are supported (both long and short).

The table below describes different configuration combinations and their applicable scenarios.
When the strategy and venue OMS types differ, the ExecutionEngine handles this by overriding or assigning position_id values for received OrderFilled events.
A "virtual position" refers to a position ID that exists within the Nautilus system but not on the venue in
reality.
| Strategy OMS | Venue OMS | Description |
|---|---|---|
| NETTING | NETTING | The strategy uses the venue's native OMS type, with a single position ID per instrument ID. |
| HEDGING | HEDGING | The strategy uses the venue's native OMS type, with multiple position IDs per instrument ID (both LONG and SHORT). |
| NETTING | HEDGING | The strategy overrides the venue's native OMS type. The venue tracks multiple positions per instrument ID, but Nautilus maintains a single position ID. |
| HEDGING | NETTING | The strategy overrides the venue's native OMS type. The venue tracks a single position per instrument ID, but Nautilus maintains multiple position IDs. |
:::note
Configuring OMS types separately for strategies and venues increases platform complexity but allows for a wide range of trading styles and preferences (see below).
:::
OMS config examples:
- A venue operating with a NETTING OMS type represents a single position per market, yet it may be desirable for a trader to track multiple "virtual" positions for a strategy.
- A venue operating with a HEDGING OMS type tracks multiple positions, both LONG and SHORT, yet the trader may only care about the NET position per currency pair.

:::info
Nautilus does not yet support venue-side hedging modes such as Binance BOTH vs. LONG/SHORT where the venue nets per direction.
It is advised to keep Binance account configurations as BOTH so that a single position is netted.
:::
If a strategy OMS type is not explicitly set using the oms_type configuration option,
it will default to UNSPECIFIED. This means the ExecutionEngine will not override any venue position_ids,
and the OMS type will follow the venue's OMS type.
:::tip
When configuring a backtest, you can specify the oms_type for the venue. For accuracy, match this with the OMS type used by the venue.
:::
The RiskEngine is a component of every Nautilus system, including backtest, sandbox, and live
environments. It sits on the submit and modify path, and it also receives order events such as
OrderReleased from the OrderEmulator. Cancel and query commands route directly to other
execution components and do not pass through the RiskEngine.
Unless specifically bypassed in the RiskEngineConfig, the engine validates:
- reduce_only orders do not increase the referenced position.
- max_notional_per_order limits and instrument max_notional limits.
- The current trading state (ACTIVE, HALTED, REDUCING).

If a submit-time risk check fails, the system generates an OrderDenied event with a human-readable reason. If a modify-time risk check fails, it generates an OrderModifyRejected event.
Additionally, the current trading state of a Nautilus system affects order flow.
The TradingState enum has three variants:
- ACTIVE: Submit and modify commands operate normally.
- HALTED: New submit and modify commands are denied. Cancels still pass through.
- REDUCING: Cancels are allowed, and only submit or modify commands that do not increase exposure are accepted.

See the RiskEngineConfig API Reference for further details.
The platform supports custom execution algorithm components and provides built-in algorithms such as TWAP (Time-Weighted Average Price).
The TWAP algorithm spreads execution evenly over a specified time horizon. It receives a primary order representing the total size and direction, then spawns smaller child orders executed at regular intervals.
This reduces the market impact of the full order size by spreading trade volume over time.
The algorithm will immediately submit the first order, with the final order submitted being the primary order at the end of the horizon period.
Using the TWAP algorithm (found in nautilus_trader/examples/algorithms/twap.py) as an example, the following demonstrates how to initialize and register it directly with a BacktestEngine (assuming an engine is already initialized):
```python
from nautilus_trader.examples.algorithms.twap import TWAPExecAlgorithm

# `engine` is an initialized BacktestEngine instance
exec_algorithm = TWAPExecAlgorithm()
engine.add_exec_algorithm(exec_algorithm)
```
For this particular algorithm, two parameters must be specified:

- horizon_secs
- interval_secs

The horizon_secs parameter determines the time period over which the algorithm will execute, while the interval_secs parameter sets the time between individual order executions. These parameters determine how a primary order is split into a series of spawned orders.
```python
from decimal import Decimal

from nautilus_trader.model.data import BarType
from nautilus_trader.test_kit.providers import TestInstrumentProvider
from nautilus_trader.examples.strategies.ema_cross_twap import EMACrossTWAP, EMACrossTWAPConfig

# Configure your strategy
config = EMACrossTWAPConfig(
    instrument_id=TestInstrumentProvider.ethusdt_binance().id,
    bar_type=BarType.from_str("ETHUSDT.BINANCE-250-TICK-LAST-INTERNAL"),
    trade_size=Decimal("0.05"),
    fast_ema_period=10,
    slow_ema_period=20,
    twap_horizon_secs=10.0,  # execution algorithm parameter (total horizon in seconds)
    twap_interval_secs=2.5,  # execution algorithm parameter (seconds between orders)
)

# Instantiate your strategy
strategy = EMACrossTWAP(config=config)
```
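As a rough sketch of the splitting arithmetic implied by the configuration above (plain Python, not the library's internal scheduler), a 10-second horizon with 2.5-second intervals yields four child orders:

```python
# Back-of-the-envelope sketch of the TWAP split implied by the configuration
# above. This is illustrative arithmetic, not library code.
from decimal import Decimal


def twap_schedule(
    total_qty: Decimal,
    horizon_secs: float,
    interval_secs: float,
) -> list[Decimal]:
    """Evenly split total_qty across the intervals in the horizon."""
    num_orders = int(horizon_secs / interval_secs)
    child_qty = total_qty / num_orders
    return [child_qty] * num_orders


schedule = twap_schedule(Decimal("0.05"), horizon_secs=10.0, interval_secs=2.5)
assert len(schedule) == 4
assert sum(schedule) == Decimal("0.05")
assert schedule[0] == Decimal("0.0125")
```

Recall from above that the actual algorithm submits the first order immediately and submits the remaining primary order as the final order at the end of the horizon, so the real schedule is not necessarily a perfectly even split when the quantity does not divide cleanly.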
Alternatively, you can specify these parameters dynamically per order, determining them based on actual market conditions. In this case, the strategy configuration parameters could be provided to an execution model which determines the horizon and interval.
:::info
There is no limit to the number of execution algorithm parameters you can create. The parameters must be a dictionary with string keys and primitive values (values that can be serialized over the wire, such as ints, floats, and strings).
:::
To build a custom execution algorithm, define a class that inherits from ExecAlgorithm.
An execution algorithm is a type of Actor, so it's capable of the following:

- Accessing the Cache.
- Using the Clock for time alerts and timers.

Additionally it can:

- Access the Portfolio.

Once an execution algorithm is registered and the system is running, it will receive orders off the message bus which are addressed to its ExecAlgorithmId via the exec_algorithm_id order parameter. The order may also carry exec_algorithm_params as a dict[str, Any].
:::warning
Because of the flexibility of the exec_algorithm_params dictionary, it's important to thoroughly validate all of the key-value pairs for correct operation of the algorithm (for starters, that the dictionary is not None and all necessary parameters actually exist).
:::
Received orders arrive via the on_order(...) method below. These received orders are known as "primary" (original) orders when being handled by an execution algorithm.

```python
from nautilus_trader.model.orders.base import Order


def on_order(self, order: Order) -> None:
    # Handle the order here
    ...
```
When the algorithm is ready to spawn a secondary order, it can use one of the following methods:
- spawn_market(...) (spawns a MARKET order)
- spawn_market_to_limit(...) (spawns a MARKET_TO_LIMIT order)
- spawn_limit(...) (spawns a LIMIT order)

:::note
Additional order types will be implemented in future versions, as the need arises.
:::
Each of these methods takes the primary (original) Order as the first argument. By default, the
primary order quantity is reduced by the spawned quantity. This can be disabled by passing
reduce_primary=False.
:::warning
When reduce_primary=True, the spawned quantity must not exceed the primary order's leaves_qty
(remaining unfilled quantity).
:::
:::note
If a spawned order is denied or rejected before acceptance, the deducted quantity is automatically restored to the primary order. Once accepted by the venue, the reduction is considered committed.
:::
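The quantity accounting described above can be sketched as a toy model. This is not the library's order implementation; the class and method names are invented for illustration:

```python
# Toy quantity accounting (NOT the library's order model) for spawning with
# reduce_primary=True and restoring the quantity if the spawn is denied.
from decimal import Decimal


class PrimaryOrder:
    def __init__(self, quantity: Decimal) -> None:
        self.quantity = quantity
        self.leaves_qty = quantity  # remaining unfilled quantity

    def spawn(self, spawn_qty: Decimal, reduce_primary: bool = True) -> Decimal:
        if reduce_primary:
            if spawn_qty > self.leaves_qty:
                raise ValueError("spawned quantity exceeds primary leaves_qty")
            self.leaves_qty -= spawn_qty
        return spawn_qty

    def restore(self, spawn_qty: Decimal) -> None:
        """Called when a spawned order is denied or rejected before acceptance."""
        self.leaves_qty += spawn_qty


primary = PrimaryOrder(Decimal("100"))
primary.spawn(Decimal("25"))
assert primary.leaves_qty == Decimal("75")
primary.restore(Decimal("25"))  # spawn denied -> quantity restored
assert primary.leaves_qty == Decimal("100")
```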
An execution algorithm can keep spawning secondary orders, submit the remaining primary order, or do both depending on its design. The built-in TWAP example submits the remaining primary order on the final interval.
All secondary orders spawned from an execution algorithm will carry an exec_spawn_id, which is the ClientOrderId of the primary (original) order, and a client_order_id derived from this original identifier with the following convention:

- exec_spawn_id (the primary order's client_order_id value)
- spawn_sequence (the sequence number for the spawned order)

{exec_spawn_id}-E{spawn_sequence}

e.g. O-20230404-001-000-E1 (for the first spawned order)
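The convention can be expressed as a one-line helper (illustrative, not a library function):

```python
# The spawned client_order_id naming convention, as a sketch.
def spawned_client_order_id(exec_spawn_id: str, spawn_sequence: int) -> str:
    """Derive a spawned order's client_order_id from the primary order's ID."""
    return f"{exec_spawn_id}-E{spawn_sequence}"


assert spawned_client_order_id("O-20230404-001-000", 1) == "O-20230404-001-000-E1"
```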
:::note
The "primary" and "secondary" / "spawn" terminology was specifically chosen to avoid conflict or confusion with the "parent" and "child" contingent orders terminology (an execution algorithm may also deal with contingent orders).
:::
The Cache provides several methods to aid in managing (keeping track of) the activity of
an execution algorithm. Calling the below method will return all execution algorithm orders
for the given query filters.
```python
def orders_for_exec_algorithm(
    self,
    exec_algorithm_id: ExecAlgorithmId,
    venue: Venue | None = None,
    instrument_id: InstrumentId | None = None,
    strategy_id: StrategyId | None = None,
    side: OrderSide = OrderSide.NO_ORDER_SIDE,
    account_id: AccountId | None = None,
) -> list[Order]:
```
You can also query more specifically for the orders of a certain execution series/spawn. Calling the below method will return all orders for the given exec_spawn_id (if found).

```python
def orders_for_exec_spawn(self, exec_spawn_id: ClientOrderId) -> list[Order]:
```
:::note
This also includes the primary (original) order.
:::
Own order books are L3 order books that track only your own (user) orders organized by price level, maintained separately from the venue's public order books.
Own order books serve several purposes, including tracking your open orders at each price level and supporting order-management queries (such as finding orders to cancel) during live trading. They are maintained per instrument and automatically updated as orders transition through their lifecycle: orders are added when submitted or accepted, updated when modified, and removed when filled, canceled, rejected, or expired.
Only orders with prices can be represented in own order books. Market orders and other order types without explicit prices are excluded since they cannot be positioned at specific price levels.
When querying own order books for orders to cancel, use a status filter that excludes PENDING_CANCEL to avoid processing orders already being cancelled.
:::warning
Including PENDING_CANCEL in status filters can cause redundant cancel attempts, because orders in PENDING_CANCEL remain "open" until confirmed canceled.
:::
The optional accepted_buffer_ns parameter that many methods expose is a time-based guard that only returns orders whose ts_accepted is at least that many nanoseconds in the past. When accepted_buffer_ns > 0, you must also provide ts_now. Orders that have not yet been accepted by the venue still have ts_accepted = 0, so they are included once the buffer window elapses. To exclude those inflight orders you must pair the buffer with an explicit status filter (for example, restrict to ACCEPTED / PARTIALLY_FILLED).
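A minimal sketch of these filter semantics (the function and field names are illustrative):

```python
# Sketch (assumed field names) of the accepted_buffer_ns filter semantics.
def passes_accepted_buffer(ts_accepted: int, ts_now: int, accepted_buffer_ns: int) -> bool:
    """True if the order was accepted at least accepted_buffer_ns ago."""
    return ts_now - ts_accepted >= accepted_buffer_ns


NOW = 1_000_000_000
assert passes_accepted_buffer(ts_accepted=NOW - 600, ts_now=NOW, accepted_buffer_ns=500)
assert not passes_accepted_buffer(ts_accepted=NOW - 100, ts_now=NOW, accepted_buffer_ns=500)
# Not-yet-accepted orders have ts_accepted == 0, so they pass the buffer check;
# an explicit status filter is needed to exclude them.
assert passes_accepted_buffer(ts_accepted=0, ts_now=NOW, accepted_buffer_ns=500)
```

The last assertion is the subtle case called out above: the buffer alone does not exclude inflight orders.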
During live trading, own order books can be periodically audited against the cache's open and inflight order indexes to ensure consistency. The audit verifies that closed orders are removed and that inflight orders (submitted but not yet accepted) remain tracked during venue latency windows.
The audit interval can be configured using the own_books_audit_interval_secs parameter in live
trading configurations.
An overfill occurs when the cumulative fill quantity for an order exceeds the original order quantity. For example, an order for 100 units that receives fills totaling 110 units has an overfill of 10 units.
Overfills can result from two fundamentally different causes: genuine overfills at the matching engine, and duplicate fill events.

Genuine overfills at the matching engine

In some cases, the matching engine actually executes more quantity than the order requested. This is a real execution outcome, not a duplicate event.
Duplicate fill events

Separate from genuine overfills, the same fill event may be delivered multiple times, for example via WebSocket reconnection replays or reconciliation races.
The system handles duplicate events via trade_id deduplication (see below), but duplicates with
different trade_id values require overfill handling.
Race conditions with reconciliation

During live trading, the system maintains state through two parallel channels: real-time execution events streamed from the venue, and periodic reconciliation against the venue's order state. If the same fill arrives through both channels with different identifiers before deduplication can occur, both may be applied to the order. This is particularly likely during startup and reconnection windows.
The likelihood of reconciliation race conditions increases when:

- The open_check_threshold_ms and inflight_check_threshold_ms settings (both default to 5,000 ms) define how long the engine waits before acting on discrepancies. Reducing these below the round-trip latency to your venue increases the chance of processing a fill via reconciliation before the real-time event arrives (or vice versa).
- Setting open_check_interval_secs to aggressive values (e.g., 1-2 seconds) increases how often the system polls the venue, creating more opportunities for race conditions with real-time events.
- The reconciliation_startup_delay_secs setting (default 10 seconds) provides time for WebSocket connections to stabilize before continuous reconciliation begins. Reducing this increases the chance of duplicate fills during the startup window.

See Continuous reconciliation for configuration details.
The ExecutionEngine checks for potential overfills before applying each fill event by comparing
the order's current filled_qty plus the incoming last_qty against the original quantity.
The allow_overfills configuration option (default: False) controls how overfills are handled:
| allow_overfills | Behavior |
|---|---|
| False | Logs and rejects the fill, preserving the order's current state. |
| True | Logs a warning, applies the fill, and tracks the excess in overfill_qty. |
When overfills are allowed, the order's overfill_qty field tracks the excess quantity.
The order transitions to FILLED status and leaves_qty is clamped to zero.
The Order model enforces that each trade_id can only be applied once. Inside Order.apply(),
a hard check raises an error if the incoming fill's trade_id already exists on the order.
This is the invariant that prevents double-counting executions.
Core engine path (backtest and real-time event processing)
In the core ExecutionEngine (used for backtests and processing real-time fill events), before
calling apply(), the engine checks Order.is_duplicate_fill() which compares:
- trade_id
- order_side
- last_px
- last_qty

If all fields match an existing fill exactly, the event is skipped gracefully with a warning log. This avoids raising an error for benign exact replays (e.g., from WebSocket reconnection).
If the trade_id matches but other fields differ (a "noisy replay"), the 4-field check passes but Order.apply() will raise an error due to the duplicate trade_id. The engine catches this error, logs the exception with full context, and drops the fill; it does not crash.
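The classification logic described above can be sketched as follows (the field names come from the text; the function and outcome labels are illustrative):

```python
# Sketch of duplicate-fill classification (field names from the text;
# the function name and outcome labels are illustrative).
from decimal import Decimal
from typing import NamedTuple


class Fill(NamedTuple):
    trade_id: str
    order_side: str
    last_px: Decimal
    last_qty: Decimal


def classify(existing: list[Fill], incoming: Fill) -> str:
    if incoming in existing:
        return "EXACT_REPLAY"   # skipped gracefully with a warning log
    if any(f.trade_id == incoming.trade_id for f in existing):
        return "NOISY_REPLAY"   # Order.apply() raises; engine logs and drops
    return "NEW_FILL"


fills = [Fill("T-1", "BUY", Decimal("100.0"), Decimal("5"))]
assert classify(fills, Fill("T-1", "BUY", Decimal("100.0"), Decimal("5"))) == "EXACT_REPLAY"
assert classify(fills, Fill("T-1", "BUY", Decimal("100.5"), Decimal("5"))) == "NOISY_REPLAY"
assert classify(fills, Fill("T-2", "SELL", Decimal("101.0"), Decimal("2"))) == "NEW_FILL"
```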
Live reconciliation sanitizer
During live reconciliation, LiveExecutionEngine pre-filters on trade_id alone before
generating fill events. This check runs before the 4-field check described above. If a fill
report arrives with a trade_id that already exists on the order, it is skipped regardless
of whether the price or quantity differs. When the data does differ, a warning is logged to
alert operators to potential venue data quality issues.
This pre-filtering ensures that "noisy duplicates" from venue replays or reconciliation races
are filtered out before they can trigger model integrity errors. If a venue legitimately needs
to correct fill data, it should use proper execution report semantics rather than resending
with the same trade_id.
Reconciliation-generated trade_id values are deterministic hashes of the reconciliation fill
inputs, so a restart that replays reconciliation produces the same trade_id and is deduped by
this sanitizer rather than being treated as a new fill.
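The determinism property can be illustrated with a toy hash. The actual inputs and format of the generated trade_id are not specified here; this sketch only demonstrates why a restart that replays reconciliation is deduplicated:

```python
# Sketch of deterministic trade_id generation from reconciliation fill inputs.
# The hash inputs and output format here are ASSUMPTIONS for illustration,
# not the actual scheme used by the engine.
import hashlib


def reconciliation_trade_id(
    venue_order_id: str,
    last_px: str,
    last_qty: str,
    ts_event: int,
) -> str:
    payload = f"{venue_order_id}|{last_px}|{last_qty}|{ts_event}".encode()
    return hashlib.sha256(payload).hexdigest()[:16]


a = reconciliation_trade_id("V-123", "100.5", "2", 1_700_000_000)
b = reconciliation_trade_id("V-123", "100.5", "2", 1_700_000_000)
assert a == b  # a restart replaying the same inputs yields the same trade_id
```

Because the ID is a pure function of the fill inputs, the sanitizer's trade_id pre-filter recognizes the replay and skips it instead of treating it as a new fill.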
For live trading, enable overfill tolerance in the LiveExecEngineConfig:
```python
from nautilus_trader.live.config import LiveExecEngineConfig

config = LiveExecEngineConfig(
    allow_overfills=True,  # Log warning instead of rejecting
)
```
:::tip
Enable allow_overfills=True when trading on venues known to emit duplicate fills or when
position reconciliation races with exchange fill events are expected. Monitor the logs for
overfill warnings to identify patterns that may require venue-specific handling.
:::
:::warning
When allow_overfills=False (the default), rejected fills may cause position discrepancies
between the system and the venue. Use the reconciliation
features to detect and resolve such discrepancies.
:::
The execution engine consumes four reconciliation report variants emitted by adapters in live trading. Each variant has a different role and a different fallback when the matching order is not yet in the local cache.
| Variant | Use case | Order missing from cache |
|---|---|---|
| OrderStatusReport | Standalone order state update. | External order created from the report; if status is PartiallyFilled/Filled, an inferred fill is synthesised from avg_px/filled_qty. |
| FillReport | Standalone execution. | External order is created from the fill (OrderType::Market, quantity last_qty); the real fill is then applied so its trade_id and commission are preserved. |
| OrderWithFills | Order status update bundled with the fills that produced it. | External order created without an inferred fill; the supplied fills are applied first; any residual gap between report.filled_qty and the sum of supplied last_qtys is closed with an inferred fill. |
| PositionStatusReport | Position snapshot from the venue. | Logged; positions are derived from fills, not bootstrapped here. |
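The residual-gap rule for OrderWithFills amounts to simple arithmetic (illustrative sketch with invented names):

```python
# Sketch of the residual-gap arithmetic for OrderWithFills.
# The function name is illustrative, not an engine API.
from decimal import Decimal


def residual_inferred_qty(
    report_filled_qty: Decimal,
    supplied_last_qtys: list[Decimal],
) -> Decimal:
    """Quantity an inferred fill must cover after applying the supplied fills."""
    return report_filled_qty - sum(supplied_last_qtys, Decimal("0"))


# Report says 10 filled; fills for 4 and 3 were supplied; infer a fill for 3.
assert residual_inferred_qty(Decimal("10"), [Decimal("4"), Decimal("3")]) == Decimal("3")
# No gap: the supplied fills fully account for the reported filled quantity.
assert residual_inferred_qty(Decimal("7"), [Decimal("4"), Decimal("3")]) == Decimal("0")
```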
Adapters choose the variant based on what the venue's wire format actually delivers for a given event:
- OrderStatusReport for ordinary order lifecycle updates (Accepted, PartiallyFilled, Canceled, Expired) where fill detail arrives separately on a different stream.
- FillReport for venues that only surface a fill for venue-initiated closures and never open a user-level order (the canonical example is Hyperliquid liquidations: the user receives a userFills entry with liquidation metadata but no entry on the orders stream).
- OrderWithFills when a single venue event maps to both a status update and one or more fills, and the adapter has both available at the same point in time. Bundling lets the engine apply real fill metadata (trade_id, commission) and only synthesise an inferred fill for the residual quantity. Binance Futures uses this for exchange-generated ADL, liquidation, and settlement orders via dispatch_exchange_generated_fill.

When a report references an order that is not in the cache (a venue-initiated ADL / liquidation / settlement, an order placed by a different process, or an order that has not yet been observed locally), the engine creates an external order and routes ownership to:

- A strategy registered via register_external_order_claims, or
- The EXTERNAL strategy as a default fallback.

The external order's client_order_id is taken from the report when
present, otherwise derived from the venue_order_id. The order is added to
the cache, the venue order ID index is registered, and the engine emits the
appropriate lifecycle events (OrderAccepted, OrderFilled,
OrderCanceled, OrderExpired) so positions update through the normal
event pipeline.
This means a Hyperliquid liquidation that arrives as a single FillReport
and a Binance ADL that arrives as a bundled OrderWithFills both update
the local position without any strategy-side handling.