Supported MLA-100 Operations

Tested with the qb compiler v0.12.0. Development flow: PyTorch -> ONNX -> MXQ.

Successfully Tested Ops

| Op Name | NPU Supported |
| --- | --- |
| AMax | Yes |
| Adding | CPU Fallback |
| AddingConstant | CPU Fallback |
| ArgMax | CPU Fallback |
| BatchNorm1d | Yes |
| Batchnorm | Yes |
| Cast | CPU Fallback |
| Celu | Yes |
| Clip | CPU Fallback |
| Concatenate | CPU Fallback |
| Convolution | Yes |
| Convolution1d | Yes |
| DepthToSpace | Yes |
| DepthwiseConvolution | Yes |
| Div | CPU Fallback |
| DivConstant | CPU Fallback |
| Einsum | Yes |
| Elu | Yes |
| Embedding | Yes |
| Erf | CPU Fallback |
| Exp | CPU Fallback |
| Expand | CPU Fallback |
| Flatten | CPU Fallback |
| FloorDiv | Yes |
| FloorMod | Yes |
| GLU | Yes |
| Gather | CPU Fallback |
| GatherND | CPU Fallback |
| Gelu | CPU Fallback |
| Gemm | Yes |
| GlobalAveragePooling | Yes |
| Greater | CPU Fallback |
| GroupConvolution | Yes |
| GroupNormalization | CPU Fallback |
| HardSigmoid | Yes |
| HardSwish | Yes |
| Hardtanh | Yes |
| Identity | Yes |
| InstanceNormalization | Yes |
| L1Normalization | Yes |
| LeakyRelu | Yes |
| MaskedFill | Yes |
| MatMul | CPU Fallback |
| MaximumConstant | Yes |
| MinimumConstant | CPU Fallback |
| Mish | Yes |
| Multiply | CPU Fallback |
| MultiplyConstant | CPU Fallback |
| Neg | Yes |
| Not | CPU Fallback |
| PRelu | Yes |
| Pad | Yes |
| Pooling | Yes |
| Pow | CPU Fallback |
| QuickGelu | Yes |
| Range | CPU Fallback |
| ReduceL2 | Yes |
| ReduceMax | CPU Fallback |
| ReduceMean | CPU Fallback |
| ReduceMin | CPU Fallback |
| ReduceProd | CPU Fallback |
| ReduceSum | CPU Fallback |
| Relu | CPU Fallback |
| Repeat | Yes |
| RepeatN | Yes |
| Reshape | CPU Fallback |
| Resize | CPU Fallback |
| Roll | Yes |
| ScatterND | CPU Fallback |
| Sigmoid | CPU Fallback |
| Slice | CPU Fallback |
| Softmax | CPU Fallback |
| Softplus | CPU Fallback |
| Split | CPU Fallback |
| SplitV2 | Yes |
| SquaredDifference | Yes |
| Squeeze | CPU Fallback |
| Sub | CPU Fallback |
| SubConstant | CPU Fallback |
| Swish | Yes |
| Tanh | CPU Fallback |
| Tile | CPU Fallback |
| TopK | Yes |
| TopKIndices | CPU Fallback |
| TopKValues | CPU Fallback |
| Transpose | CPU Fallback |
| TransposeConvolution | Yes |
| Unflatten | Yes |
| Unsqueeze | CPU Fallback |
| Upsampling | Yes |
| Where | CPU Fallback |
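To make the table easier to act on, here is a minimal, hypothetical helper that partitions a list of op names into NPU-resident ops, CPU fallbacks, and unknowns. The `SUPPORT` dict is a small excerpt of the table above, not a complete copy, and extracting op names from a compiled graph is left to the caller.

```python
# Excerpt of the support table above; extend with the remaining rows as needed.
SUPPORT = {
    "Convolution": "Yes",
    "Gemm": "Yes",
    "Pooling": "Yes",
    "Relu": "CPU Fallback",
    "Softmax": "CPU Fallback",
    "MatMul": "CPU Fallback",
}

def partition(ops):
    """Split op names into (NPU ops, CPU-fallback ops, unknown ops)."""
    npu, cpu, unknown = [], [], []
    for op in ops:
        status = SUPPORT.get(op)
        if status == "Yes":
            npu.append(op)
        elif status == "CPU Fallback":
            cpu.append(op)
        else:
            unknown.append(op)
    return npu, cpu, unknown

npu, cpu, unknown = partition(["Convolution", "Relu", "Abs"])
```

Unknown ops are worth checking against the failed-ops table below before committing to a model architecture.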

All Unsupported Tested Ops

Failed Ops

| Op Name | Fail Reason |
| --- | --- |
| Abs | Library bug: ONNX 'Abs' op not registered in OPTIONSMAPPER in subgraph_builder.py (KeyError: 'abs') |
| And | qubee compiler does not implement the ONNX And operator |
| Ceil | Library bug: ONNX 'Ceil' op listed in the parse_simple branch, but the 'ceil' key is missing from OPTIONSMAPPER (KeyError: 'ceil') |
| Convolution3d | Inherently unsupported: the Aries2 backend does not support Conv3d (ValueError: All subgraphs are unsupported) |
| Cos | Library bug: ONNX 'Cos' op not registered in OPTIONSMAPPER in subgraph_builder.py (KeyError: 'cos') |
| CumSum | Compiler OPTIONSMAPPER does not support the ONNX 'CumSum' op (KeyError: 'cumsum') |
| Equal | qubee compiler explicitly raises NotImplementedError in parse_equal |
| Flip | Qubee quantizer crashes on Flip: applySelect 'index -3 out of bounds for |
| Floor | Unsupported at quantization stage: quantizer rejects FloorOptions (RuntimeError: quantize failed, Unknown layer type: FloorOptions) |
| GRU | Unsupported at quantization stage: quantizer rejects StatefulGRUWrapperOptions |
| GatherElements | Qubee folds GatherElements with a constant index tensor into the internal op |
| GridSample | aten::grid_sampler cannot be exported to ONNX opset 13 (needs opset 16 for |
| L2Normalization | qubee quantizer does not support the L2Normalization layer type |
| LSTM | Unsupported at quantization stage: quantizer rejects StatefulLSTMWrapperOptions |
| LayerNormalization | qubee quantizer does not support SqrtOptions (Sqrt layer) |
| Less | qubee compiler maps ONNX Less to LessV2 internally, which is not |
| LessOrEqual | qubee compiler does not recognise the ONNX LessOrEqual operator at all |
| Log | qubee compiler does not support the ONNX 'Log' op at the quantizer stage |
| LogSigmoid | qubee compiler does not support the Log operator (LogOptions type unknown to quantizer) |
| Logit | torch.logit expands to Div/Greater/Less/Log/Where ops, most unsupported by qubee; the compiler also hits an internal TypeError on an empty shape in the comparison layer |
| Maximum | Library bug: element-wise ONNX 'Max' is mis-routed to parse_clip, which expects 3 Clip inputs (IndexError: list index out of range on node.input[2]) |
| Minimum | Library bug: element-wise ONNX 'Min' is mis-routed to parse_clip, which expects 3 Clip inputs (IndexError: list index out of range on node.input[2]) |
| NonMaxSuppression | The ONNX NonMaxSuppression op has a data-dependent output size and |
| NonZero | ONNX NonZero has a data-dependent output shape that qubee's static |
| Or | qubee compiler parser bug: OrOptions.inputs expects Tuple[int32] but |
| RNN | Unsupported at quantization stage: quantizer rejects StatefulRNNWrapperOptions |
| Reciprocal | Test model uses torch.abs internally; ONNX 'Abs' op not registered in OPTIONSMAPPER (KeyError: 'abs') |
| RmsNormalization | qubee quantizer does not support SqrtOptions (Sqrt layer) |
| Rsqrt | |
| Sin | Library bug: ONNX 'Sin' op not registered in OPTIONSMAPPER in subgraph_builder.py (KeyError: 'sin') |
| SpaceToDepth | Qubee's HL pass maps pixel_unshuffle/SpaceToDepth to OrderedPatchify (shows 100% |
| Sqrt | Test model uses torch.abs internally; ONNX 'Abs' op not registered in OPTIONSMAPPER (KeyError: 'abs') |
| Tril | aten::tril cannot be exported to ONNX opset 13 (needs opset 14 for the Trilu op) |
| Xor | qubee compiler parser bug: XorOptions.inputs expects Tuple[int32] but |
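For the Tril failure above, one common workaround (an assumption here, not something verified against the qb compiler) is to bake the lower-triangular mask in as a constant buffer. The export then emits a constant initializer plus a multiply rather than a Trilu node, so opset 13 suffices:

```python
import torch

class ConstMaskTril(torch.nn.Module):
    """Apply a fixed lower-triangular mask without calling torch.tril in forward()."""

    def __init__(self, n):
        super().__init__()
        # Computed once at construction time, so it exports as a constant
        # initializer in the ONNX graph instead of a Trilu op.
        self.register_buffer("mask", torch.tril(torch.ones(n, n)))

    def forward(self, x):
        return x * self.mask
```

Whether the resulting multiply runs on the NPU or falls back to CPU still follows the supported-ops table above (MultiplyConstant is listed as CPU Fallback).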