docs/ContribOperators.md
This file is automatically generated from the registered contrib operator schemas by this script. Do not modify directly.
Multi-Head Attention that can be either unidirectional (like GPT-2) or bidirectional (like BERT).
The weights for input projection of Q, K and V are merged. The data is stacked on the second dimension. Its shape is (input_hidden_size, hidden_size + hidden_size + v_hidden_size). Here hidden_size is the hidden dimension of Q and K, and v_hidden_size is that of V.
The mask_index is optional. Besides the raw attention mask with shape (batch_size, total_sequence_length) or (batch_size, sequence_length, total_sequence_length), with value 0 for masked and 1 otherwise, we support two other formats: when the input has right-side padding, mask_index is one-dimensional with shape (batch_size), where each value is the actual sequence length excluding padding; when the input has left-side padding, mask_index has shape (2 * batch_size), where the values are the exclusive end positions followed by the inclusive start positions.
When unidirectional is 1, each token only attends to previous tokens.
Both past and present state are optional. They shall be used together; it is not allowed to provide only one of them. The qkv_hidden_sizes attribute is required only when K and V have different hidden sizes.
When there is past state, hidden dimension for Q, K and V shall be the same.
The total_sequence_length is past_sequence_length + kv_sequence_length. Here kv_sequence_length is the length of K or V. For self attention, kv_sequence_length equals sequence_length (the sequence length of Q). For cross attention, query and key might have different lengths.
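To make the three mask_index formats concrete, here is a small numpy sketch (illustrative only; shapes and values are hypothetical) relating a raw 2-D mask to the 1-D right-padding and (2 * batch_size) left-padding encodings:

```python
import numpy as np

batch_size, total_sequence_length = 2, 4

# Raw attention mask: 1 = attend, 0 = masked (right-side padding here).
raw_mask = np.array([[1, 1, 1, 0],
                     [1, 1, 0, 0]], dtype=np.int32)

# Right-side padding: mask_index holds the actual sequence lengths.
right_pad_index = raw_mask.sum(axis=1)              # [3, 2], shape (batch_size,)

# Left-side padding: exclusive end positions followed by inclusive starts.
left_mask = np.array([[0, 1, 1, 1],
                      [0, 0, 1, 1]], dtype=np.int32)
ends = np.array([4, 4], dtype=np.int32)             # exclusive end positions
starts = np.array([1, 2], dtype=np.int32)           # inclusive start positions
left_pad_index = np.concatenate([ends, starts])     # shape (2 * batch_size,)
```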
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Computes a one-layer RNN whose RNN cell is an AttentionWrapper wrapping an LSTM cell. The RNN layer contains the following basic components: LSTM Cell, Bahdanau Attention Mechanism, AttentionWrapper.
Activation functions:
Relu(x) - max(0, x)
Tanh(x) - (1 - e^{-2x})/(1 + e^{-2x})
Sigmoid(x) - 1/(1 + e^{-x})
(NOTE: Below are optional)
Affine(x) - alpha*x + beta
LeakyRelu(x) - x if x >= 0 else alpha * x
ThresholdedRelu(x) - x if x >= alpha else 0
ScaledTanh(x) - alpha*Tanh(beta*x)
HardSigmoid(x) - min(max(alpha*x + beta, 0), 1)
Elu(x) - x if x >= 0 else alpha*(e^x - 1)
Softsign(x) - x/(1 + |x|)
Softplus(x) - log(1 + e^x)
Softmax(x) - exp(x) / sum(exp(x))
Bahdanau Attention Mechanism:
M - Memory tensor.
`VALUES` - masked Memory by its real sequence length.
`MW` - Memory layer weight.
`KEYS` - Processed memory tensor by the memory layer.
KEYS = M * MW
`Query` - Query tensor, normally at specific time step in sequence.
`QW` - Query layer weight in the attention mechanism
`PQ` - processed query, = `Query` * `QW`
`V` - attention vector
`ALIGN` - calculated alignment based on Query and KEYS
ALIGN = softmax(reduce_sum(`V` * Tanh(`KEYS` + `PQ`)))
`CONTEXT` - context based on `ALIGN` and `VALUES`
CONTEXT = `ALIGN` * `VALUES`
LSTM Cell:
X - input tensor concat with attention state in the attention wrapper
`i` - input gate
`o` - output gate
`f` - forget gate
`c` - cell gate
`t` - time step (t-1 means previous time step)
`W[iofc]` - W parameter weight matrix for input, output, forget, and cell gates
`R[iofc]` - R recurrence weight matrix for input, output, forget, and cell gates
`Wb[iofc]` - W bias vectors for input, output, forget, and cell gates
`Rb[iofc]` - R bias vectors for input, output, forget, and cell gates
`P[iof]` - P peephole weight vector for input, output, and forget gates
`WB[iofc]` - W parameter weight matrix for backward input, output, forget, and cell gates
`RB[iofc]` - R recurrence weight matrix for backward input, output, forget, and cell gates
`WBb[iofc]` - W bias vectors for backward input, output, forget, and cell gates
`RBb[iofc]` - R bias vectors for backward input, output, forget, and cell gates
`PB[iof]` - P peephole weight vector for backward input, output, and forget gates
`H` - Hidden state
`num_directions` - 2 if direction == bidirectional else 1
Equations (Default: f=Sigmoid, g=Tanh, h=Tanh):
- it = f(Xt*(Wi^T) + Ht-1*(Ri^T) + Pi (.) Ct-1 + Wbi + Rbi)
- ft = f(Xt*(Wf^T) + Ht-1*(Rf^T) + Pf (.) Ct-1 + Wbf + Rbf)
- ct = g(Xt*(Wc^T) + Ht-1*(Rc^T) + Wbc + Rbc)
- Ct = ft (.) Ct-1 + it (.) ct
- ot = f(Xt*(Wo^T) + Ht-1*(Ro^T) + Po (.) Ct + Wbo + Rbo)
- Ht = ot (.) h(Ct)
AttentionWrapper Notations: `lstm()` - wrapped inner cell. Ht, Ct = lstm(concat(Xt, ATTNt-1), Ct-1)
`am()` - attention mechanism the wrapper used.
CONTEXTt, ALIGNt = am(Ht, ALIGNt-1)
`AW` - attention layer weights, optional.
`ATTN` - attention state, initial is zero. If `AW` provided, it is the output of the attention layer,
ATTNt = concat(Ht, CONTEXTt) * AW
otherwise,
ATTNt = CONTEXTt
RNN layer output:
`Y` - if needed, is the sequence of Ht from the LSTM cell.
`Y_h` - is the last valid H from lstm cell.
`Y_c` - is the last valid C from lstm cell.
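The following numpy sketch illustrates one step of the Bahdanau attention mechanism defined above (a minimal sketch with random weights; it assumes no memory masking, so VALUES equals M):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

seq_len, mem_dim, query_dim, attn_dim = 5, 8, 6, 4
rng = np.random.default_rng(0)
M = rng.standard_normal((seq_len, mem_dim))      # memory; VALUES = M (no masking)
MW = rng.standard_normal((mem_dim, attn_dim))    # memory layer weight
QW = rng.standard_normal((query_dim, attn_dim))  # query layer weight
V = rng.standard_normal(attn_dim)                # attention vector
query = rng.standard_normal(query_dim)           # query at one time step

KEYS = M @ MW                                    # processed memory
PQ = query @ QW                                  # processed query
ALIGN = softmax((V * np.tanh(KEYS + PQ)).sum(axis=-1))
CONTEXT = ALIGN @ M                              # weighted sum over VALUES
```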
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Beam Search for text generation. Supports GPT-2 decoder.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Add input with bias, then add residual inputs.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
output, dropout_mask = Dropout(data + bias, ratio) + residual. Intended to specialize the dropout pattern commonly found in transformer models.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Bias Gelu. It's an extension of Gelu. It takes the sum of input A and bias input B as the input of Gelu activation.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Y = softmax(scores + bias) with simple broadcast on bias. Intended to specialize softmax(scores + additive_mask) commonly found in transformer models.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
A fusion used in diffusion models: after adding bias, the hidden state is sliced into two tensors of the same size, and the left tensor is multiplied by the Gelu activation of the right tensor.
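A minimal numpy sketch of the computation (using the tanh approximation of Gelu; names and shapes are illustrative):

```python
import numpy as np

def gelu(x):
    # tanh approximation of Gelu
    return 0.5 * x * (1.0 + np.tanh(0.7978845608 * (x + 0.044715 * x ** 3)))

batch, seq, hidden = 2, 3, 8                # hidden must be even: it is split in half
x = np.random.randn(batch, seq, hidden).astype(np.float32)
bias = np.random.randn(hidden).astype(np.float32)

h = x + bias
left, right = np.split(h, 2, axis=-1)       # two tensors of the same size
y = left * gelu(right)                      # shape (batch, seq, hidden // 2)
```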
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Component for aggressive decoding. Finds the bifurcation index between the source tokens and the predicted tokens, starting from the previous suffix match index. Concatenates the predicted tokens, starting from the bifurcation index, to the back of the current tokens to form the output tokens. Then detects the suffix match index in the source tokens, between the source tokens and the output tokens; detection is based on finding the appearances of the last n-gram of the output tokens in the source tokens. A match is considered found if the source tokens contain a single matching n-gram, in which case the index of the start of the n-gram in the source tokens is returned. No match is found if the source tokens contain multiple or zero matching n-grams; in that case, -1 is returned.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
output, dropout_bitmask = Dropout(data + bias, ratio) + residual. Intended to specialize the dropout pattern commonly found in transformer models.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
BitmaskDropout takes an input floating-point tensor, an optional input ratio (floating-point scalar) and an optional input training_mode (boolean scalar).
It produces two tensor outputs: output (floating-point tensor) and mask (optional Tensor<uint32>). If training_mode is true then the output Y will be a random dropout.
Note that this Dropout scales the masked input data by the following equation, so to convert the trained model into inference mode, the user can simply not pass training_mode input or set it to false.
output = scale * data * mask,
where
scale = 1. / (1. - ratio).
This op functions in much the same way as Dropout-11 and Dropout-13 do, except that the mask is output as a bit-packed uint32 tensor instead of a boolean tensor.
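A sketch of how an inference-side consumer might expand the bit-packed mask and apply the scaling formula above (the bit order within each uint32 word is an assumption here, not specified by this excerpt):

```python
import numpy as np

def unpack_bitmask(packed: np.ndarray, num_elements: int) -> np.ndarray:
    """Expand a bit-packed uint32 mask into a flat boolean mask.
    Assumes the least-significant bit of each word holds the first element."""
    bits = np.arange(32, dtype=np.uint32)
    expanded = (packed[:, None] >> bits) & np.uint32(1)
    return expanded.reshape(-1)[:num_elements].astype(bool)

data = np.random.randn(70).astype(np.float32)
ratio = 0.25
scale = 1.0 / (1.0 - ratio)                        # scale = 1. / (1. - ratio)
packed = np.random.randint(0, 2 ** 32, size=(data.size + 31) // 32,
                           dtype=np.int64).astype(np.uint32)
mask = unpack_bitmask(packed, data.size)
output = scale * data * mask                       # output = scale * data * mask
```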
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Stateful causal depthwise convolution, generalized to N spatial dimensions.
Used by Gated DeltaNet (Qwen3.5) and Mamba (Jamba, FalconMamba) as a preprocessing step. Replaces the 3-op pattern (Concat + Conv + Slice) with a single fused operation.
The convolution is causal (looks only at current and past positions along the last spatial dimension) and depthwise (each channel is convolved independently with its own kernel).
Input layout is channels-first: (batch_size, channels, ...). Weight layout: (channels, 1, k_1, ...) for depthwise convolution. The carry state stores the last (k-1) positions along the causal axis for incremental decode.
The ndim attribute generalizes the op to 1D, 2D, or 3D spatial dimensions. Causality is enforced on the last spatial dimension only.
The optional activation attribute supports fused SiLU/Swish activation.
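A minimal numpy sketch of the 1-D case (ndim=1), showing both the causal depthwise convolution and the (k-1)-position carry state used for incremental decode (a reference sketch, not the kernel implementation; it assumes kernel size k > 1):

```python
import numpy as np

def causal_depthwise_conv1d(x, w, carry=None):
    """x: (batch, channels, seq); w: (channels, 1, k). Each channel is convolved
    with its own kernel and sees only current and past positions."""
    b, c, t = x.shape
    k = w.shape[-1]
    if carry is None:
        carry = np.zeros((b, c, k - 1), dtype=x.dtype)   # initial state
    ext = np.concatenate([carry, x], axis=-1)            # prepend last k-1 positions
    y = np.zeros_like(x)
    for i in range(t):
        y[:, :, i] = (ext[:, :, i:i + k] * w[:, 0, :]).sum(-1)
    new_carry = ext[:, :, -(k - 1):]                     # state for the next call
    return y, new_carry

# Optional fused SiLU/Swish activation: y = y * sigmoid(y)
```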
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Extracts crops from the input image tensor and resizes them using bilinear sampling or nearest neighbor sampling (possibly with aspect ratio change) to a common output size specified by crop_height and crop_width. Returns a tensor with crops from the input image at positions defined at the bounding box locations in boxes. The cropped boxes are all resized (with bilinear or nearest neighbor interpolation) to a fixed size = [crop_height, crop_width]. The result is a 4-D tensor [num_boxes, crop_height, crop_width, depth]. The resizing is corner aligned.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
This DecoderAttention supports self attention and cross attention, key and value cache, and key_padding_mask. The attention mask is not supported at the moment. Some boolean parameters are passed as runtime inputs for generality.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Multihead attention that supports input sequence length of 1. Similar to DecoderMaskedSelfAttention but this op excludes QKV MatMul and Bias. This op supports both Self and Cross Attention.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Self attention that supports input sequence length of 1.
The weights for input projection of Q, K and V are merged. The data is stacked on the second dimension. Its shape is (input_hidden_size, hidden_size + hidden_size + v_hidden_size). Here hidden_size is the hidden dimension of Q and K, and v_hidden_size is that of V.
The mask_index is optional. If it is provided, only raw attention mask with shape (batch_size, total_sequence_length) is supported currently.
Both past and present state need to be provided.
The qkv_hidden_sizes is required only when K and V have different hidden sizes.
The total_sequence_length is past_sequence_length + kv_sequence_length. Here kv_sequence_length is the length of K or V. Currently, only self attention is supported, which means that kv_sequence_length equals sequence_length (the sequence length of Q).
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
The BFP dequantization operator. It consumes the raw BFP data and some metadata such as the shape and strides of the original tensor and computes the dequantized tensor. More documentation on the BFP format can be found in this paper: https://www.microsoft.com/en-us/research/publication/pushing-the-limits-of-narrow-precision-inferencing-at-cloud-scale-with-microsoft-floating-point/
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
The linear dequantization operator. It consumes quantized data, a scale, and a zero point, and computes the full-precision data. The dequantization formula is y = (x - x_zero_point) * x_scale. Scale and zero point must have the same shape. They must be either a scalar (per tensor) or a 1-D tensor (per 'axis').
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Dequantize an input matrix from the specific layout used in cuBLASLt. An attribute specifies the output type, float16 or float32.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
The input is a cost matrix, where each value input[r][c] is the cost of passing through point (r, c). From the current point (r, c), the next move can reach points (r+1, c), (r+1, c+1) or (r, c+1). Given such a cost matrix, return a dynamic time warping path of shape [2, x], where the path made by all points (output[0][t], output[1][t]) has the lowest cost among all paths from (0, 0) to (M-1, N-1).
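A reference numpy sketch of the dynamic-programming recurrence and backtracking (illustrative only, not the op's implementation):

```python
import numpy as np

def dynamic_time_warping(cost: np.ndarray) -> np.ndarray:
    """Return the lowest-cost path from (0, 0) to (M-1, N-1) as a [2, x] array,
    moving only to (r+1, c), (r, c+1) or (r+1, c+1)."""
    m, n = cost.shape
    acc = np.full((m, n), np.inf)
    acc[0, 0] = cost[0, 0]
    for r in range(m):
        for c in range(n):
            if r == 0 and c == 0:
                continue
            best = min(acc[r - 1, c] if r else np.inf,
                       acc[r, c - 1] if c else np.inf,
                       acc[r - 1, c - 1] if r and c else np.inf)
            acc[r, c] = cost[r, c] + best
    path = [(m - 1, n - 1)]                  # backtrack to (0, 0)
    while path[-1] != (0, 0):
        r, c = path[-1]
        cands = [(i, j) for i, j in [(r - 1, c), (r, c - 1), (r - 1, c - 1)]
                 if i >= 0 and j >= 0]
        path.append(min(cands, key=lambda p: acc[p]))
    return np.array(path[::-1]).T            # shape [2, x]
```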
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Onnx node container for EP context.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
EmbedLayerNormalization is the fusion of the embedding layer in BERT models, with optional mask processing. The embedding layer takes input_ids (word IDs) and segment_ids (sentence IDs) to look up word_embedding, position_embedding, and segment_embedding; the embeddings are added, and layer normalization is then applied using the gamma and beta tensors. The last input, mask, is optional. If mask is provided, the mask index (that is, the position of the first 0 in the mask, or the number of words) will be calculated.
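A minimal numpy sketch of the fused computation (eps, shapes, and the right-side-padding assumption for the mask index are illustrative):

```python
import numpy as np

def embed_layer_norm(input_ids, segment_ids, word_emb, pos_emb, seg_emb,
                     gamma, beta, eps=1e-12):
    """Sum word, position and segment embeddings, then apply layer normalization."""
    seq = input_ids.shape[1]
    x = word_emb[input_ids] + pos_emb[np.arange(seq)] + seg_emb[segment_ids]
    mean = x.mean(-1, keepdims=True)
    var = x.var(-1, keepdims=True)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

def mask_index(mask):
    """Position of the first 0 in each row, i.e. the number of real tokens
    (assuming right-side padding)."""
    return mask.sum(axis=1).astype(np.int32)
```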
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
ExpandDims echo operator.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
GELU (Gaussian Error Linear Unit) approximation: Y = 0.5 * X * (1 + tanh(0.797885 * X + 0.035677 * X^3)), with an optional bias input that is added to X before GELU.
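The formula translates directly to numpy (a sketch; the constants are those given above):

```python
import numpy as np

def fast_gelu(x, bias=None):
    if bias is not None:
        x = x + bias                     # optional bias added to X before GELU
    return 0.5 * x * (1.0 + np.tanh(0.797885 * x + 0.035677 * x ** 3))
```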
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
The fused convolution operator schema is the same as Conv, except that it includes the attribute activation.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
The FusedGemm operator schema is the same as Gemm, except that it includes the attributes activation and leaky_relu_alpha.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Matrix product that behaves like numpy.matmul: https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.matmul.html
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Executes the same operation as FusedMatMul, but also has an activation function fused to its output.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
query_layer = (query_layer + query_bias).reshape(batch_size, seq_len, num_heads, head_size).transpose(1, 2)
gate_u, gate_r = torch.sigmoid(
    self.gate_ur_linear(query_layer).view(batch_size, num_heads, seq_len, 2, D/2).sum(-1, keepdim=False)
).chunk(2, dim=-1)
gate_u_1 = gate_u * (gate_r * self.eco_a - 1.0) + 2.0
rel_pos_bias = gate_u_1 * rel_pos
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
GatherBlockQuantized is a Gather with data quantized. It is similar to Gather (https://github.com/onnx/onnx/blob/main/docs/Operators.md#gather) with differences:
1. Input data is a constant. It is quantized block-wise along attribute quantize_axis with block size specified by attribute block_size.
block_size must be a power of 2 and not smaller than 16, like 16, 32, 64, 128, ...
2. Input data's scale and zero point are specified by input scales and zero_points. scales and zero_points are also constants.
If zero_points is not provided, the default value is 0 for int4/uint4, or 2^(bits-1) for uint8.
3. During the op execution, data and indices are first used to generate the quantized output. Then, scales and zero_points are used
to dequantize the output.
4. The output and scales have the same type. The data and zero_points have the same type.
5. For uint8 data, the gather_axis must be 0.
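A simplified numpy sketch of the gather-then-dequantize order described above, for 2-D data with 1-D indices, gather_axis=0 and quantize_axis=1 (shapes and packing are simplified: quantized values are assumed already unpacked to one element per array entry):

```python
import numpy as np

def gather_block_quantized(data, indices, scales, zero_points, block_size=16):
    """Gather quantized rows first, then dequantize block-wise along axis 1.
    data: (rows, cols); scales, zero_points: (rows, cols // block_size)."""
    q = np.take(data, indices, axis=0)             # quantized gather output
    s = np.take(scales, indices, axis=0)
    z = np.take(zero_points, indices, axis=0)
    s = np.repeat(s, block_size, axis=1)           # broadcast per-block scale
    z = np.repeat(z, block_size, axis=1)
    return (q.astype(np.float32) - z) * s          # dequantized output
```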
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Given data tensor of rank r >= 1, and indices tensor of rank q >= 1, gather
slices of data into an output tensor of rank q - 1 + r - indices[-1].
Example 1:
data = [[0,1],[2,3]]
indices = [[0,0],[1,1]]
output = [0,3]
Example 2:
data = [[0,1],[2,3]]
indices = [[1],[0]]
output = [[2,3],[0,1]]
Example 3:
data = [[[0,1],[2,3]],[[4,5],[6,7]]]
indices = [[0,1],[1,0]]
output = [[2,3],[4,5]]
Example 4:
data = [[[0,1],[2,3]],[[4,5],[6,7]]]
indices = [[[0,1]],[[1,0]]]
output = [[[2,3]],[[4,5]]]
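A small numpy reference sketch reproducing the examples above (for illustration only; the op itself is implemented by the runtime):

```python
import numpy as np

def gather_nd(data, indices):
    """Gather slices of `data` at the index tuples along the last axis of `indices`.
    Output rank is q - 1 + r - indices.shape[-1]."""
    data, indices = np.asarray(data), np.asarray(indices)
    last = indices.shape[-1]
    out_shape = indices.shape[:-1] + data.shape[last:]
    flat = indices.reshape(-1, last)
    return np.stack([data[tuple(i)] for i in flat]).reshape(out_shape)

assert gather_nd([[0, 1], [2, 3]], [[0, 0], [1, 1]]).tolist() == [0, 3]       # Example 1
assert gather_nd([[0, 1], [2, 3]], [[1], [0]]).tolist() == [[2, 3], [0, 1]]   # Example 2
```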
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Gaussian Error Linear Unit: a high-performing neural network activation function. The GELU nonlinearity is the expected transformation of a stochastic regularizer which randomly applies the identity or zero map to a neuron's input. The GELU nonlinearity weights inputs by their magnitude, rather than gating inputs by their sign as in ReLUs.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
It's a fusion of MatMul and FastGelu.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Generic Gemm for float and float8.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
GemmaRotaryEmbedding implements the part of rotary positional embeddings (RoPE) below, from modeling_gemma.py.
Here's the onnxscript that was tested:
from onnxscript import FLOAT, FLOAT16, script
from onnxscript import opset18 as op

@script()
def gemma_rotary_embedding(emb: FLOAT["bs", "seq_len", "dim"], q: FLOAT16["bs", "num_heads", "seq_len", "dim"], q_rot: FLOAT16["bs", "num_heads", "seq_len", "dim"], k: FLOAT16["bs", "num_heads", "seq_len", "dim"], k_rot: FLOAT16["bs", "num_heads", "seq_len", "dim"]):
    sin_val = op.Sin(emb)
    casted_sin = op.Cast(sin_val, to=10)  # for fp16 mix-precision training. Other types are not supported.
    cos_val = op.Cos(emb)
    casted_cos = op.Cast(cos_val, to=10)
    unsqueezed_sin = op.Unsqueeze(casted_sin, [1])
    unsqueezed_cos = op.Unsqueeze(casted_cos, [1])
    q_embed = (q * casted_cos) + (q_rot * casted_sin)
    k_embed = (k * casted_cos) + (k_rot * casted_sin)
    return q_embed, k_embed
onnx_model = gemma_rotary_embedding.to_model_proto()
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Greedy Search for text generation.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Given an input and a flow-field grid, computes the output using input values and pixel locations from grid.
Currently, only spatial (4-D) inputs are supported. For input with shape (N, C, H, W) and grid with shape (N, H_out, W_out, 2),
the output will have shape (N, C, H_out, W_out).
For each output location output[n, :, h, w], the size-2 vector grid[n, h, w] specifies input pixel locations x and y,
which are used to interpolate the output value output[n, :, h, w].
The GridSample operator is often used as the grid generator and sampler in Spatial Transformer Networks.
See also torch.nn.functional.grid_sample.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Applies Group Normalization over a mini-batch of inputs as described in the paper Group Normalization (https://arxiv.org/abs/1803.08494).
This operator transforms input according to y = gamma * (x - mean) / sqrt(variance + epsilon) + beta
The input channels are separated into num_groups groups, each containing num_channels / num_groups channels. num_channels must be divisible by num_groups. The mean and standard deviation are calculated separately over each group. The weight and bias are per-channel affine transform parameter vectors of size num_channels.
The activation attribute can be used to enable activation after group normalization.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Group Query Self/Cross Attention with KV Cache Quantization Support.
This operator implements causal grouped-query attention with past state (KV cache) support. It also supports optional float8, int8 or int4 quantization for the KV cache to reduce memory footprint.
Cache Format:
The past and present KV cache tensors are expected in a BNSH format: (batch_size, num_heads, cache_sequence_length, head_size), where cache_sequence_length is the length of the cached key/value sequences, or the maximum sequence length when past and present buffer sharing is used.
Quantization:
When quantization is enabled, past_key and past_value inputs can be of type float8e4m3fn, uint8 or int8. The corresponding k_scale and v_scale tensors must be provided.
The operator will output present_key and present_value in the same format as past_key and past_value.
For 4-bit quantization, the data type is uint8, where each byte contains two 4-bit values. The bit width of the quantized KV cache can be set using the kv_cache_bit_width attribute.
The shapes of the k_scale and v_scale tensors shall be broadcastable to the present_key shape.
Quantization Modes (k_quant_type, v_quant_type attributes):
The supported scale tensor shapes are [1] (per tensor) and [1, num_heads_k, 1, head_size].
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
This function computes the inverse of the one-dimensional n-point RFFT computed in 'com.microsoft.rfft'.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Unified linear attention operator for autoregressive decoding (T=1) and prefill (T>1).
All inputs use 3D packed format [B, T, H*D]; q_num_heads and kv_num_heads are always required. The op internally unpacks to 4D for computation.
The update_rule attribute selects the recurrence type, where g_t is the decay (in log-space), β_t is the update rate, and ⊗ denotes the outer product.
Semantics: Equivalent to running the recurrent update sequentially for each token, but may be implemented using chunk-parallel algorithms for GPU efficiency.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Longformer Self Attention with a local context and a global context. Tokens attend locally: Each token attends to its W previous tokens and W succeeding tokens with W being the window length. A selected few tokens attend globally to all other tokens.
The attention mask is of shape (batch_size, sequence_length), where sequence_length is a multiple of 2W after padding. Mask value < 0 (like -10000.0) means the token is masked, 0 otherwise.
Global attention flags have value 1 for tokens that attend globally and 0 otherwise.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
MatMulBnb4 is a MatMul with its weight quantized to 4 bits using either the FP4 or NF4 data type (https://arxiv.org/pdf/2305.14314.pdf). It does matrix multiplication like MatMul (https://github.com/onnx/onnx/blob/main/docs/Operators.md#matmul) with these differences: 1. Input B is a 2-D constant matrix. Its input feature count and output feature count are specified by attributes 'K' and 'N'. 2. Input B is quantized to 4 bits with the quantization data type specified by attribute 'quant_type'. It is transposed, flattened, and quantized block-wise with the block size specified by attribute 'block_size'. block_size is not an arbitrary number; it must be a power of 2 and not smaller than 16, like 16, 32, 64, 128, ... 3. Input B's quantization constants or scales are specified by input 'absmax'.
Input B is stored as uint8_t with shape: [(N * K + 1) / 2].
Input absmax is stored in the same type as the original type of B (float32, float16) with shape: [(N * K + block_size - 1) / block_size].
1. (Default) transB=True (mainly used for the forward pass)
Shape of A: [D0, D1, ..., Dn, K]
Shape of dequantized B: [N, K]; this is aligned with how PyTorch defines the linear weight, e.g., [out_features, in_features].
The computation math:
dequant_B = dequant(B, absmax, quant_type, block_size)
transposed_dequant_B = dequant_B^T
output = A @ transposed_dequant_B
Shape of output: [D0, D1, ..., Dn, N]
2. transB=False (mainly used for the backward pass)
Shape of A: [D0, D1, ..., Dn, N]
Shape of dequantized B: [N, K]; this is aligned with how PyTorch defines the linear weight, e.g., [out_features, in_features].
The computation math:
dequant_B = dequant(B, absmax, quant_type, block_size)
output = A @ dequant_B
Shape of output: [D0, D1, ..., Dn, K]
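A numpy sketch of the computation above (the nibble order of the packed data and the lookup-table dequantization are assumptions for illustration; `lut` would hold the 16 FP4 or NF4 code values, and N * K is assumed divisible by block_size):

```python
import numpy as np

def matmul_bnb4(A, packed_B, absmax, lut, N, K, block_size, transB=True):
    """Dequantize 4-bit B block-wise via the code lookup table, then matmul."""
    # Two 4-bit codes per byte; high-nibble-first packing is an assumption.
    codes = np.stack([(packed_B >> 4) & 0xF, packed_B & 0xF], axis=-1).reshape(-1)
    deq = lut[codes[:N * K]].reshape(-1, block_size) * absmax[:, None]
    B = deq.reshape(N, K)                    # [out_features, in_features]
    return A @ B.T if transB else A @ B      # transB=False: A is [..., N]
```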
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Matrix product with the right-hand matrix being a pre-packed and quantized int4 data blob. During quantization, the matrix is divided into blocks, where each block is a contiguous subset inside each column. Each block is quantized into a sequence of 4-bit integers with a scaling factor and an optional offset. Currently 3 quantization types are supported: (0): block size 32, no offset; (1): block size 32, with offset; (2): block size 64, no offset.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Matrix product that behaves like numpy.matmul: https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.matmul.html. The product MUST never overflow. The accumulation may overflow only if it is in 32 bits.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
MatMulNBits performs a matrix multiplication where the right-hand-side matrix (weights) is quantized to N bits.
It is a fusion of two operations: block-wise dequantization of the weights, followed by a MatMul.
The weight matrix is a 2D constant matrix with the input feature count and output feature count specified by attributes 'K' and 'N'. It is quantized block-wise along the K dimension with a block size specified by the 'block_size' attribute. The block size must be a power of 2 and not smaller than 16 (e.g., 16, 32, 64, 128). Each block has its own scale and zero-point. The quantization is performed using a bit-width specified by the 'bits' attribute, which can take values from 2 to 8.
The quantized weights are stored in a bit-packed format along the K dimension, with each block being represented by a blob of uint8. For example, for 4 bits, the first 4 bits are stored in the lower 4 bits of a byte, and the second 4 bits are stored in the higher 4 bits of a byte.
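A numpy sketch of unpacking and dequantizing a 4-bit weight matrix per the layout above (shapes are illustrative; scales and zero_points are assumed per-block with shape (N, K // block_size)):

```python
import numpy as np

def dequantize_4bit(packed, scales, zero_points, N, K, block_size):
    """packed: (N, K // 2) uint8; first value in the low nibble, second in the high."""
    low = packed & 0xF
    high = (packed >> 4) & 0xF
    q = np.stack([low, high], axis=-1).reshape(N, -1)[:, :K].astype(np.float32)
    blocks = q.reshape(N, K // block_size, block_size)
    w = (blocks - zero_points[..., None]) * scales[..., None]
    return w.reshape(N, K)                   # dequantized weights [N, K]
```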
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
For internal use.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Mixture of experts. Examples: Switch Transformer (https://arxiv.org/pdf/2101.03961.pdf) uses top-1, GLaM (https://arxiv.org/abs/2112.06905) activates the top-2 FFNs, Vision MoE (https://arxiv.org/pdf/2106.05974.pdf) usually uses top-32 experts, and Mixtral (https://huggingface.co/blog/mixtral) is another example.
The SwiGLU (Swish-Gated Linear Unit) activation function is like:
g = xW + b
l = xV + c
G = clamp(g, max=limit)
L = clamp(l, min=-limit, max=limit)
swiglu = G * sigmoid(alpha * G) * (L + beta)
where x is the input, W and V are weight matrices, b and c are bias vectors, and alpha, beta and limit are constant float parameters.
When swiglu_fusion=0, two GEMMs are not fused, and they are FC1 and FC3 in the inputs.
When swiglu_fusion=1, two GEMMs are fused so that g and l are computed in a single GEMM (FC1), and g and l are interleaved on each row of size 2 * inter_size.
When swiglu_fusion=2, two GEMMs are fused, and g and l are concatenated on each row.
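A numpy sketch of the activation and of how g and l are recovered for swiglu_fusion=1 (interleaved) and swiglu_fusion=2 (concatenated); parameter values passed in would come from the op's attributes:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def swiglu(gl, alpha, beta, limit, interleaved):
    """gl: fused FC1 output with rows of size 2 * inter_size."""
    if interleaved:                            # swiglu_fusion=1
        g, l = gl[..., 0::2], gl[..., 1::2]
    else:                                      # swiglu_fusion=2
        g, l = np.split(gl, 2, axis=-1)
    G = np.minimum(g, limit)                   # clamp(g, max=limit)
    L = np.clip(l, -limit, limit)              # clamp(l, min=-limit, max=limit)
    return G * sigmoid(alpha * G) * (L + beta)
```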
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Performs element-wise binary quantized multiplication (with Numpy-style broadcasting support). This operator supports multidirectional (i.e., Numpy-style) broadcasting. The output of this op is the int32 accumulated result of the mul operation:
C (int32) = (A - A_zero_point) * (B - B_zero_point)
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Multi-Head Self/Cross Attention. Bias from input projection is included.
The key padding mask is optional. When its shape is (batch_size, kv_sequence_length), a value of 0 means padding and 1 otherwise. When the key has right-side padding, its shape could be (batch_size): the actual length of each key sequence excluding padding.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
The underlying implementation is MurmurHash3_x86_32, generating a low-latency 32-bit hash suitable for implementing lookup tables, Bloom filters, count-min sketches, or feature hashing.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Enforce no repetition of n-grams. Scores are set to -inf for tokens that form a repeated n-gram if added to the back of the input_ids.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
NhwcFusedConv is a Conv operator with optional activation and add operators fused in. Only an fp16 implementation exists as of 2023/04/15.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
This is the packed version of Attention.
Sequences in one batch usually don't have the same length, so they are padded to the same length; e.g., below is a batch with 3 sequences where * denotes a padding token:
Sequence_0: 0, 1*, 2*, 3*
Sequence_1: 4, 5, 6*, 7*
Sequence_2: 8, 9, 10, 11
PackedAttention is designed to take in packed input, i.e., only the real tokens without padding. The input above will be packed into 3 tensors like below:
The input tensor contains the hidden embeddings of the real tokens. token_offset records the offset of each token in the unpacked input. cumulated_token_count records the cumulative length of each sequence.
The operator currently only supports BERT-like models with right-side padding.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
This is the packed version of MultiHeadAttention.
Sequences in one batch usually don't have the same length, so they are padded to the same length; e.g., below is a batch with 3 sequences where * denotes a padding token:
Sequence_0: 0, 1*, 2*, 3*
Sequence_1: 4, 5, 6*, 7*
Sequence_2: 8, 9, 10, 11
PackedMultiHeadAttention is designed to take in packed input, i.e., only the real tokens without padding. The input above will be packed into 3 tensors like below:
The query, key and value tensors contain the hidden embeddings of the real tokens after input projections. token_offset records the offset of each token in the unpacked input. cumulative_sequence_length records the cumulative length of each sequence.
The operator currently only supports BERT-like models with right-side padding.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Given a data tensor, pads, mode, and value, produces the padded tensor.
Example:
Insert 0-valued pads at the beginning of the second dimension.
data = [
[1.0, 1.2],
[2.3, 3.4],
[4.5, 5.7],
]
pads = [0, 2, 0, 0]
output = [
[0.0, 0.0, 1.0, 1.2],
[0.0, 0.0, 2.3, 3.4],
[0.0, 0.0, 4.5, 5.7],
]
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Paged Attention.
This op leverages a block-based KV cache to enable continuous batching for LLMs. Currently, it is designed to work with the CUDA Execution Provider only.
In other attention ops, batch entries typically aren't of the same length, so they are padded. Below is a batch with 3 sequences where * denotes a padding token:
Sequence_0: 0, 1*, 2*, 3*
Sequence_1: 4, 5, 6*, 7*
Sequence_2: 8, 9, 10, 11
PagedAttention is designed to take in packed input, i.e., only the real tokens without padding. For example, the input shown above will be packed into 3 tensors like below:
The query, key and value tensors contain the hidden embeddings of the real tokens after input projections. cumulative_sequence_length records the cumulative length of each sequence.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Quantization of Multi-Head Self Attention.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Quantized Gemm
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Performs element-wise binary addition on 8 bit data types (with Numpy-style broadcasting support).
C = (A_scale * (A - A_zero_point) + B_scale * (B - B_zero_point))/C_scale + C_zero_point
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
QLinearAveragePool consumes an input tensor X and applies average pooling across the tensor according to kernel sizes, stride sizes, and pad lengths. Average pooling consists of computing the average over all values of a subset of the input tensor according to the kernel size and downsampling the data into the output tensor Y for further processing. The output spatial shape will be the following:
output_spatial_shape[i] = floor((input_spatial_shape[i] + pad_shape[i] - kernel_spatial_shape[i]) / strides_spatial_shape[i] + 1)
or
output_spatial_shape[i] = ceil((input_spatial_shape[i] + pad_shape[i] - kernel_spatial_shape[i]) / strides_spatial_shape[i] + 1)
if ceil_mode is enabled
* pad_shape[i] is sum of pads along axis i
auto_pad is a DEPRECATED attribute. If you are still using it, the output spatial shape will be the following:
VALID: output_spatial_shape[i] = ceil((input_spatial_shape[i] - kernel_spatial_shape[i] + 1) / strides_spatial_shape[i])
SAME_UPPER or SAME_LOWER: output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides_spatial_shape[i])
And pad shape will be following if SAME_UPPER or SAME_LOWER:
pad_shape[i] = (output_spatial_shape[i] - 1) * strides_spatial_shape[i] + kernel_spatial_shape[i] - input_spatial_shape[i]
The output of each pooling window is divided by the number of elements (excluding padding when the attribute count_include_pad is zero).
Input and output scales and zero points are used to convert the output to a new quantization range. Output = Dequantize(Input) -> AveragePool on fp32 data -> Quantize(output)
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Concatenate a list of tensors into a single tensor. All input tensors must have the same shape, except for the dimension size of the axis to concatenate on.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
QLinearGlobalAveragePool consumes an input tensor X and applies Average pooling across the values in the same channel. This is equivalent to AveragePool with kernel size equal to the spatial dimension of input tensor. Input is of type uint8_t or int8_t.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
QLinearLeakyRelu takes quantized input data (Tensor<T>), an argument alpha, and quantization parameters for the output,
and produces one output data (Tensor<T>) where the function f(x) = quantize(alpha * dequantize(x)) for dequantize(x) < 0,
f(x) = quantize(dequantize(x)) for dequantize(x) >= 0, is applied to the data tensor elementwise.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Performs element-wise binary multiplication on 8 bit data types (with Numpy-style broadcasting support).
C = ((A - A_zero_point) * (B - B_zero_point)) * (A_scale * B_scale)/C_scale + C_zero_point
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Computes the mean of the low-precision input tensor's elements along the provided axes. The resulting tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, the resulting tensor has the reduced dimension pruned. The above behavior is similar to numpy, with the exception that numpy defaults keepdims to False instead of True. Input and output scales and zero points are used to requantize the output to a new range. This helps improve accuracy, as the range of the output is expected to decrease after the ReduceMean operation.
Output = Dequantize(Input) -> ReduceMean on fp32 data -> Quantize(output)
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
QLinearSigmoid takes quantized input data (Tensor<T>) and quantization parameters for the output, and produces one output data
(Tensor<T>) where the function f(x) = quantize(Sigmoid(dequantize(x))) is applied to the data tensor elementwise,
where Sigmoid(x) = 1 / (1 + exp(-x)).
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
QLinearSoftmax computes the normalized exponential values for the given input: Softmax(input, axis) = Exp(input) / ReduceSum(Exp(input), axis=axis, keepdims=1). The input does not need to be explicitly a 2-D vector. The "axis" attribute indicates the dimension along which QLinearSoftmax will be performed for ONNX opset 13 and later, or the dimension coerced to an NxD matrix for opset 12 and earlier. The output tensor has the same shape.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Return elements, either from X or Y, depending on condition.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Quantized mixture of experts (MoE).
The quantized weights are stored in column-major order per expert.
The quantization block size can be specified; if not provided, column-wise quantization is used.
The formula of linear dequantization of the quantized weights using scale and (optionally) zero-point is:
dequantized_weight = (quantized_weight - zero_point) * scale
When zero_point is not provided, the default value is 2^(bits-1): 8 for 4 bits, 128 for 8 bits.
If block_size is provided, both hidden_size and inter_size must be divisible by the block size, and
the dequantization is performed per block of size block_size along the K (input feature) dimension.
If block_size and zero_point are provided, both hidden_size and inter_size must be divisible by block_size * pack_size,
where pack_size = 8 / expert_weight_bits.
The SwiGLU (Swish-Gated Linear Unit) activation function is like:
g = xW + b
l = xV + c
G = clamp(g, max=limit)
L = clamp(l, min=-limit, max=limit)
swiglu = G * sigmoid(alpha * G) * (L + beta)
where x is the input, W and V are weight matrices, b and c are bias vectors, and alpha, beta and limit are constant float parameters.
When swiglu_fusion=0, two GEMMs are not fused, and they are FC1 and FC3 in the inputs.
When swiglu_fusion=1, two GEMMs are fused so that g and l are computed in a single GEMM (FC1), and g and l are interleaved on each row of size 2 * inter_size.
When swiglu_fusion=2, two GEMMs are fused, and g and l are concatenated on each row.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Quantized version of simplified Multi-Head Self Attention (using int8 with a specific matrix layout). Multi-Head Self Attention that can be either unidirectional (like GPT-2) or bidirectional (like BERT). The mask_index input is optional. Besides the raw attention mask with shape (batch_size, past_sequence_length + sequence_length) or (batch_size, sequence_length, past_sequence_length + sequence_length), with value 0 for masked and 1 otherwise, we also support two other formats: when the input has right-side padding, mask_index is one-dimensional with shape (batch_size), where each value is the end position, or valid length of the actual sequence excluding padding; when the input has left-side padding, mask_index has shape (2 * batch_size), where the values are the exclusive end positions followed by the inclusive start positions. When unidirectional is 1, each token only attends to previous tokens. For GPT-2, both past and present state are optional. Present state could appear in the output even when past state is not in the input. The current version does not support past/present, attention_bias, or qkv_hidden_sizes. TODO: support them if needed in the future.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Ordered Quantize Gelu.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
QOrderedLayerNormalization
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Quantized version of Longformer Self Attention (using int8 with specific matrix Layout).
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Quantized (int8) MatMul with layout order. Implements Y = alpha * A * B + bias + beta * C. Matrices A, B, C and Y are all int8 matrices. Two types of order combinations are supported:
*) When order_B is ORDER_COL, order_A must be ORDER_ROW. bias is a float32 vector of size {#cols of Y}; C should be of batch 1 or batch_A, and B could be of batch 1 or batch_A. Note that B is reordered to ORDER_COL, i.e., transposed; it is not transposed first and then reordered here.
*) When order_B is ORDER_COL4_4R2_8C or ORDER_COL32_2R_4R4, order_A must be ORDER_COL32. MatMul will be implemented as alpha * (A * B) + beta * C => Y. bias is not supported here. B is in fact transposed first and then reordered into ORDER_COL4_4R2_8C or ORDER_COL32_2R_4R4.
order_Y and order_C will be the same as order_A. Per-column quantized weight is supported, i.e., scale_B is a 1-D vector of size [#cols of matrix B].
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
The BFP quantization operator. It consumes a full-precision tensor and computes a BFP tensor. More documentation on the BFP format can be found in this paper: https://www.microsoft.com/en-us/research/publication/pushing-the-limits-of-narrow-precision-inferencing-at-cloud-scale-with-microsoft-floating-point/
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
The linear quantization operator. It consumes full-precision data, a scale, and a zero point to compute the low-precision / quantized tensor. The quantization formula is y = saturate((x / y_scale) + y_zero_point). For saturation, it saturates to [0, 255] if it's uint8, [-128, 127] if it's int8, [0, 65535] if it's uint16, and [-32768, 32767] if it's int16. For (x / y_scale), it rounds to the nearest, ties to even. Refer to https://en.wikipedia.org/wiki/Rounding for details. Scale and zero point must have the same shape. They must be either a scalar (per tensor) or a 1-D tensor (per 'axis').
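A numpy sketch of the formula (np.rint implements round-half-to-even, matching the rounding described above; the per-tensor uint8 case is shown):

```python
import numpy as np

def quantize_linear(x, y_scale, y_zero_point, dtype=np.uint8):
    """y = saturate(round(x / y_scale) + y_zero_point)."""
    info = np.iinfo(dtype)
    y = np.rint(x / y_scale) + y_zero_point
    return np.clip(y, info.min, info.max).astype(dtype)

x = np.array([-2.0, 0.0, 0.4999, 1.0], dtype=np.float32)
q = quantize_linear(x, y_scale=np.float32(0.5), y_zero_point=128)  # [124 128 129 130]
```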
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Quantize input matrix to specific layout used in cublaslt.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Compute x * Sigmoid(alpha * x).
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Creates a sequence of numbers that begins at start and extends by increments of delta
up to but not including limit.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Computes the sum of the low-precision input tensor's elements along the provided axes. The resulting tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, the resulting tensor has the reduced dimension pruned. The above behavior is similar to numpy, with the exception that numpy defaults keepdims to False instead of True.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Compute binned relative position bias for T5 model. ref: https://arxiv.org/abs/1803.02155v2
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Compress transformer input by removing paddings. It assumes padding is on the right side of sequence.
The input with padding has shape (batch_size, sequence_length, hidden_size). This generates two outputs: output with shape (total_tokens, hidden_size), and token_offset with shape (batch_size, sequence_length).
token_offset has offsets of all non-padding tokens first, then offset of all padding tokens. It is a list of batch_size * sequence_length elements, which is reshaped to 2D for convenience of shape inference.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Restore paddings and fill padding with zeros.
The input has shape (total_tokens, hidden_size), and token_offset has shape (batch_size, sequence_length). The output has shape (batch_size, sequence_length, hidden_size).
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
This function computes the n-point one dimensional Fourier transform for a real-valued input where n is an even number.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
RotaryEmbedding is the implementation of rotary positional embeddings (RoPE). The positions are represented as rotation matrices that are multiplied with query and key before the inner product of query and key is taken.
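A numpy sketch of RoPE applied to a (batch, num_heads, seq, head_size) tensor; the interleaved channel pairing and the theta base shown here are assumptions for illustration (the op's actual behavior depends on its attributes):

```python
import numpy as np

def apply_rope(x, positions, theta=10000.0):
    """Rotate channel pairs of x by position-dependent angles."""
    head_size = x.shape[-1]
    inv_freq = 1.0 / theta ** (np.arange(0, head_size, 2) / head_size)
    angles = positions[:, None] * inv_freq[None, :]     # (seq, head_size / 2)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = np.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out
```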
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Sample echo operator.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Greedy Sampling for text generation.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
This operator element-wise adds x, skip and bias, then applies group normalization and an optional activation.
This operator transforms the input according to:
s = x + skip + bias
y = gamma * (s - mean) / sqrt(variance + epsilon) + beta
The input channels are separated into num_groups groups, each containing num_channels / num_groups channels; num_channels must be divisible by num_groups. The mean and standard deviation of s are calculated separately over each group. The weight and bias are per-channel affine transform parameter vectors of size num_channels.
The activation attribute can be used to enable activation after group normalization.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Skip and Layer Normalization Fusion
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Skip and Root Mean Square Layer Normalization
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Onnx node for SNPE.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Block Sparse Attention used in Phi-3-small (https://arxiv.org/pdf/2404.14219).
It is inspired by Sparse Transformers (https://arxiv.org/pdf/1904.10509) and BigBird (https://arxiv.org/pdf/2007.14062).
block_mask can be used to configure the sparse layout for different heads. When the number of sparse layouts is 1, all heads share the same sparse layout; otherwise, the layouts are assigned cyclically. For example, given 4 layouts (S0, S1, S2, S3), 8 heads will have layouts (S0, S1, S2, S3, S0, S1, S2, S3).
block_row_indices and block_col_indices are the CSR representation of the block mask. block_col_indices might contain padding on the right side when different layouts have different numbers of non-zeros in the block mask.
An example of block mask with 2 layouts, where each layout is 4 x 4 blocks:
[[[1, 0, 0, 0],
[1, 1, 0, 0],
[0, 1, 1, 0],
[0, 1, 1, 1]],
[[1, 0, 0, 0],
[1, 1, 0, 0],
[1, 1, 1, 0],
[1, 0, 1, 1]]]
The corresponding CSR format:
block_row_indices = [[0, 1, 3, 5, 8], [0, 1, 3, 6, 9]]
block_col_indices = [[0, 0, 1, 1, 2, 1, 2, 3, -1], [0, 0, 1, 0, 1, 2, 0, 2, 3]]
When do_rotary is True, cos_cache and sin_cache are required. Note that the maximum sequence length supported by cos or sin cache can be different from the maximum sequence length used by kv cache.
Only supports unidirectional attention with cache of past key and value in linear buffers.
For performance, past_key and present_key share the same memory buffer, as do past_value and present_value.
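The block-mask-to-CSR conversion can be sketched in numpy as follows (it reproduces the example above, padding block_col_indices with -1 so all layouts share one width):

```python
import numpy as np

def block_mask_to_csr(block_mask):
    """block_mask: (num_layout, num_blocks, num_blocks) array of 0/1."""
    row_indices, col_indices = [], []
    for layout in block_mask:
        counts = layout.sum(axis=1)                       # non-zeros per block row
        row_indices.append(np.concatenate([[0], np.cumsum(counts)]))
        col_indices.append(np.concatenate([np.nonzero(row)[0] for row in layout]))
    width = max(len(c) for c in col_indices)
    col_indices = [np.pad(c, (0, width - len(c)), constant_values=-1)
                   for c in col_indices]
    return np.stack(row_indices), np.stack(col_indices)
```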
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Tokenizer divides each string in X into a vector of strings along the last axis. Allowed input shapes are [C] and [N, C]. If the maximum number of tokens found per input string is D, the output shape would be [N, C, D] when the input shape is [N, C]. Similarly, if the input shape is [C], then the output shape should be [C, D].
Tokenizer has two different operation modes. The first mode is selected when "tokenexp" is not set and "separators" is set. If "tokenexp" is set and "separators" is not set, the second mode will be used.
The first mode breaks each input string into tokens by matching and removing separators. "separators" is a list of strings which are regular expressions. "tokenexp" is a single regular expression. Let's assume "separators" is [" "] and consider an example. If the input is ["Hello World", "I love computer science !"], whose shape is [2], then the output would be [["Hello", "World", padvalue, padvalue, padvalue], ["I", "love", "computer", "science", "!"]], whose shape is [2, 5], because at most 5 tokens are found per input string. Note that the input can have at most two axes, so 3-D and higher-dimensional inputs are not supported. If "separators" contains a single empty string, the Tokenizer enters character tokenization mode, meaning all strings are broken apart into individual characters.
For each input string, the second mode searches for matches of "tokenexp", and each match becomes a token in Y. The matching of "tokenexp" is conducted greedily (i.e., a match should be as long as possible). This operator searches for the first match starting from the beginning of the considered string, and then launches another search starting from the first remaining character after the first matched token. If no match is found, this operator removes the first character from the remaining string and searches again. This procedure is repeated until the end of the considered string is reached.
Let's consider another example to illustrate the effect of setting "mark" to true. If the input is ["Hello", "World"], then the corresponding output would be [0x02, "Hello", "World", 0x03]. This implies that if mark is true, the output shape of a [C]/[N, C] input becomes [C, D+2]/[N, C, D+2]. If the tokenizer removes the entire content of a [C]-input, it produces [[]], i.e., the output shape should be [C][0], or [N][C][0] if the input shape was [N][C]. If the tokenizer receives empty input of shape [0], then the output is [0]; for empty input of shape [N, 0], the output is [N, 0].
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Based on Torch operator Embedding, creates a lookup table of embedding vectors of fixed size, for a dictionary of fixed size.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Duplicate of FusedMatMul. Going forward FusedMatMul should be used. This OP will be supported for backward compatibility. Matrix product that behaves like numpy.matmul: https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.matmul.html
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Returns the upper or lower triangular part of a 2-D matrix, or batches of 2-D matrices. If the attribute "upper" is set to true, the upper triangular matrix is retained. Lower triangular matrix is retained otherwise. Default value for upper is true. Trilu takes one input tensor of shape [*, N, M], where * is zero or more batch dimensions. The upper triangular part consists of the elements on and above the given diagonal (k). The lower triangular part consists of elements on and below the diagonal. All other elements in the matrix are set to zero. If k = 0, the triangular part on and above/below the main diagonal is retained. If upper is set to true, a positive k retains the upper triangular matrix excluding k diagonals above the main diagonal. A negative k value includes as many diagonals below the main diagonal. If upper is set to false, a positive k retains the lower triangular matrix including k diagonals above the main diagonal. A negative k value excludes as many diagonals below the main diagonal.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Returns a tensor which contains all slices of size size from input tensor in the dimension dim. Step between two slices is given by step. If sizedim is the size of dimension dim for input tensor, the size of dimension dim in the returned tensor will be (sizedim - size) / step + 1. An additional dimension of size size is appended in the returned tensor.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Finds all the unique values (deduped list) present in the given input tensor. This operator returns 3 outputs. The first output tensor 'uniques' contains all of the unique elements of the input, sorted in the same order that they occur in the input. The second output tensor 'idx' is the same size as the input and it contains the index of each value of the input in 'uniques'. The third output tensor 'counts' contains the count of each element of 'uniques' in the input.
Example:
input_x = [2, 1, 1, 3, 4, 3]
output_uniques = [2, 1, 3, 4]
output_idx = [0, 1, 1, 2, 3, 2]
output_counts = [1, 2, 2, 1]
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
Beam Search for the Whisper model, especially with cross_qk features etc.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
The WordConvEmbedding takes in a batch of word sequences and embeds each word into a vector.
This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
IsAllFinite
No versioning maintained for experimental ops.
QEmbedLayerNormalization is the quantized fusion of the embedding layer in BERT models, with optional mask processing. The embedding layer takes input_ids (word IDs) and segment_ids (sentence IDs) to look up word_embedding, position_embedding, and segment_embedding; the embeddings are added, and layer normalization is then applied using the gamma and beta tensors. The input_ids and segment_ids remain int32. All embeddings, gamma, and beta tensors are converted to int8/uint8. The last input, mask, is optional. If mask is provided, the mask index (that is, the position of the first 0 in the mask, or the number of words) will be calculated.
No versioning maintained for experimental ops.