Tested with the qubee (qb) compiler v0.12.0. Development flow: PyTorch -> ONNX -> MXQ.

| Op Name | NPU Supported |
|---|---|
| AMax | Yes |
| Adding | CPU Fallback |
| AddingConstant | CPU Fallback |
| ArgMax | CPU Fallback |
| BatchNorm1d | Yes |
| Batchnorm | Yes |
| Cast | CPU Fallback |
| Celu | Yes |
| Clip | CPU Fallback |
| Concatenate | CPU Fallback |
| Convolution | Yes |
| Convolution1d | Yes |
| DepthToSpace | Yes |
| DepthwiseConvolution | Yes |
| Div | CPU Fallback |
| DivConstant | CPU Fallback |
| Einsum | Yes |
| Elu | Yes |
| Embedding | Yes |
| Erf | CPU Fallback |
| Exp | CPU Fallback |
| Expand | CPU Fallback |
| Flatten | CPU Fallback |
| FloorDiv | Yes |
| FloorMod | Yes |
| GLU | Yes |
| Gather | CPU Fallback |
| GatherND | CPU Fallback |
| Gelu | CPU Fallback |
| Gemm | Yes |
| GlobalAveragePooling | Yes |
| Greater | CPU Fallback |
| GroupConvolution | Yes |
| GroupNormalization | CPU Fallback |
| HardSigmoid | Yes |
| HardSwish | Yes |
| Hardtanh | Yes |
| Identity | Yes |
| InstanceNormalization | Yes |
| L1Normalization | Yes |
| LeakyRelu | Yes |
| MaskedFill | Yes |
| MatMul | CPU Fallback |
| MaximumConstant | Yes |
| MinimumConstant | CPU Fallback |
| Mish | Yes |
| Multiply | CPU Fallback |
| MultiplyConstant | CPU Fallback |
| Neg | Yes |
| Not | CPU Fallback |
| PRelu | Yes |
| Pad | Yes |
| Pooling | Yes |
| Pow | CPU Fallback |
| QuickGelu | Yes |
| Range | CPU Fallback |
| ReduceL2 | Yes |
| ReduceMax | CPU Fallback |
| ReduceMean | CPU Fallback |
| ReduceMin | CPU Fallback |
| ReduceProd | CPU Fallback |
| ReduceSum | CPU Fallback |
| Relu | CPU Fallback |
| Repeat | Yes |
| RepeatN | Yes |
| Reshape | CPU Fallback |
| Resize | CPU Fallback |
| Roll | Yes |
| ScatterND | CPU Fallback |
| Sigmoid | CPU Fallback |
| Slice | CPU Fallback |
| Softmax | CPU Fallback |
| Softplus | CPU Fallback |
| Split | CPU Fallback |
| SplitV2 | Yes |
| SquaredDifference | Yes |
| Squeeze | CPU Fallback |
| Sub | CPU Fallback |
| SubConstant | CPU Fallback |
| Swish | Yes |
| Tanh | CPU Fallback |
| Tile | CPU Fallback |
| TopK | Yes |
| TopKIndices | CPU Fallback |
| TopKValues | CPU Fallback |
| Transpose | CPU Fallback |
| TransposeConvolution | Yes |
| Unflatten | Yes |
| Unsqueeze | CPU Fallback |
| Upsampling | Yes |
| Where | CPU Fallback |
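
Before compiling, it can be useful to estimate how much of a model will stay on the NPU versus fall back to CPU. The sketch below is illustrative only (it is not part of the qubee toolchain, and the two sets are small excerpts transcribed from the table above, not the full lists):

```python
# Sketch: partition a model's op types into NPU-resident vs CPU-fallback,
# using excerpts from the support table above. Illustrative helper only.
NPU_OPS = {
    "Convolution", "Gemm", "Pooling", "LeakyRelu", "HardSwish", "Pad",
}
CPU_FALLBACK_OPS = {
    "Adding", "MatMul", "Softmax", "Reshape", "Transpose", "Sigmoid",
}

def summarize(op_types):
    """Split a list of op types into NPU, CPU-fallback, and unknown buckets."""
    npu = [op for op in op_types if op in NPU_OPS]
    cpu = [op for op in op_types if op in CPU_FALLBACK_OPS]
    unknown = [op for op in op_types
               if op not in NPU_OPS | CPU_FALLBACK_OPS]
    return {"npu": npu, "cpu_fallback": cpu, "unknown": unknown}

ops = ["Convolution", "LeakyRelu", "Softmax", "Gemm", "Reshape"]
print(summarize(ops))
```

A high `cpu_fallback` count suggests the model will spend much of its time shuttling tensors between the NPU and the host CPU.
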

The following ops failed to compile entirely; the observed failure reasons are listed below.

| Op Name | Fail Reason |
|---|---|
| Abs | Library bug: ONNX ‘Abs’ op not registered in OPTIONSMAPPER in subgraph_builder.py — KeyError: ‘abs’ |
| And | qubee compiler does not implement the ONNX And operator |
| Ceil | Library bug: ONNX ‘Ceil’ op listed in parse_simple branch but ‘ceil’ key missing from OPTIONSMAPPER — KeyError: ‘ceil’ |
| Convolution3d | Inherently unsupported: Aries2 backend does not support Conv3d — ValueError: All subgraphs are unsupported |
| Cos | Library bug: ONNX ‘Cos’ op not registered in OPTIONSMAPPER in subgraph_builder.py — KeyError: ‘cos’ |
| CumSum | Compiler OPTIONSMAPPER does not support the ONNX ‘CumSum’ op (KeyError: ‘cumsum’) |
| Equal | qubee compiler explicitly raises NotImplementedError in parse_equal. |
| Flip | Qubee quantizer crashes on Flip: applySelect ‘index -3 out of bounds for …’ |
| Floor | Unsupported at quantization stage: quantizer rejects FloorOptions — RuntimeError: quantize failed, Unknown layer type: FloorOptions |
| GRU | Unsupported at quantization stage: quantizer rejects StatefulGRUWrapperOptions |
| GatherElements | Qubee folds GatherElements with a constant index tensor into the internal op |
| GridSample | aten::grid_sampler cannot be exported to ONNX opset 13 (needs opset 16 for the GridSample op). |
| L2Normalization | qubee quantizer does not support the L2Normalization layer type. |
| LSTM | Unsupported at quantization stage: quantizer rejects StatefulLSTMWrapperOptions |
| LayerNormalization | qubee quantizer does not support SqrtOptions (Sqrt layer). |
| Less | qubee compiler maps ONNX Less to LessV2 internally, which is not implemented. |
| LessThanOrEqual | qubee compiler does not recognise the ONNX LessOrEqual operator at all. |
| Log | qubee compiler does not support the ONNX ‘Log’ op in the quantizer stage |
| LogSigmoid | qubee compiler does not support the Log operator (LogOptions type unknown to quantizer) |
| Logit | torch.logit expands to Div/Greater/Less/Log/Where ops, most unsupported by qubee; compiler also hits internal TypeError on empty shape in comparison layer |
| Maximum | Library bug: ONNX ‘Max’ (element-wise) is mis-routed to parse_clip which expects 3 Clip inputs — IndexError: list index out of range on node.input[2] |
| Minimum | Library bug: ONNX ‘Min’ (element-wise) is mis-routed to parse_clip which expects 3 Clip inputs — IndexError: list index out of range on node.input[2] |
| NonMaxSuppression | The ONNX NonMaxSuppression op has a data-dependent output size, which qubee’s static compiler cannot handle. |
| NonZero | ONNX NonZero has a data-dependent output shape that qubee’s static shape inference cannot handle. |
| Or | qubee compiler parser bug: OrOptions.inputs expects Tuple[int32] but |
| RNN | Unsupported at quantization stage: quantizer rejects StatefulRNNWrapperOptions |
| Reciprocal | Model uses torch.abs internally; ONNX ‘Abs’ op not registered in OPTIONSMAPPER — KeyError: ‘abs’ |
| RmsNormalization | qubee quantizer does not support SqrtOptions (Sqrt layer). |
| Rsqrt | |
| Sin | Library bug: ONNX ‘Sin’ op not registered in OPTIONSMAPPER in subgraph_builder.py — KeyError: ‘sin’ |
| SpaceToDepth | Qubee’s HL pass maps pixel_unshuffle/SpaceToDepth to OrderedPatchify (shows 100% |
| Sqrt | Model uses torch.abs internally; ONNX ‘Abs’ op not registered in OPTIONSMAPPER — KeyError: ‘abs’ |
| Tril | aten::tril cannot be exported to ONNX opset 13 (needs opset 14 for the Trilu op). |
| Xor | qubee compiler parser bug: XorOptions.inputs expects Tuple[int32] but |
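
Many of the failures above only surface deep in the compile or quantization pipeline, so a cheap pre-flight scan of a model's op types can save a full compile round-trip. A minimal stdlib-only sketch (the set below is transcribed from the table, mapping row names to their ONNX op types approximately; the helper itself is illustrative, not a qubee API):

```python
# Sketch: flag ONNX op types known to fail with qubee v0.12.0,
# transcribed (approximately) from the failure table above.
KNOWN_FAILING = {
    "Abs", "And", "Ceil", "CumSum", "Equal", "GridSample", "Less",
    "LessOrEqual", "Log", "Max", "Min", "NonMaxSuppression", "NonZero",
    "Or", "Sin", "Cos", "Sqrt", "Tril", "Xor",
}

def preflight(op_types):
    """Return the op types in the model that are known to fail to compile."""
    return sorted(set(op_types) & KNOWN_FAILING)

print(preflight(["Conv", "Relu", "NonZero", "Sin"]))  # -> ['NonZero', 'Sin']
```

If this returns a non-empty list, the model will need those ops rewritten or replaced before the qubee compiler can produce an MXQ artifact.
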