reference#

CReferenceEvaluator#

class onnx_extended.reference.CReferenceEvaluator(proto: Any, opsets: Dict[str, int] | None = None, functions: List[ReferenceEvaluator | FunctionProto] | None = None, verbose: int = 0, new_ops: List[OpRun] | None = None, save_intermediate: str | None = None, **kwargs)[source]#

This class replaces the Python implementation with a C implementation for a short list of operators that are quite slow in Python (such as Conv). The replacement happens automatically whenever a C implementation is available. See the example Using C implementation of operator Conv.

from onnx.reference import ReferenceEvaluator
from onnx_extended.reference.c_ops import Conv

# ... stands for the model to evaluate; Conv nodes are executed
# by the C implementation instead of the pure Python one
ref = ReferenceEvaluator(..., new_ops=[Conv])

See onnx.reference.ReferenceEvaluator for detailed documentation.
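A minimal sketch of using the class directly; the single-node Conv model below is illustrative and built with onnx.helper.

import numpy as np
from onnx import TensorProto
from onnx.helper import (
    make_graph,
    make_model,
    make_node,
    make_opsetid,
    make_tensor_value_info,
)
from onnx_extended.reference import CReferenceEvaluator

# illustrative model: Y = Conv(X, W) with 1-pixel padding
X = make_tensor_value_info("X", TensorProto.FLOAT, [1, 1, 8, 8])
W = make_tensor_value_info("W", TensorProto.FLOAT, [1, 1, 3, 3])
Y = make_tensor_value_info("Y", TensorProto.FLOAT, [None, None, None, None])
node = make_node("Conv", ["X", "W"], ["Y"], pads=[1, 1, 1, 1])
model = make_model(
    make_graph([node], "g", [X, W], [Y]),
    opset_imports=[make_opsetid("", 18)],
)

# Conv is evaluated by the C implementation when available
ref = CReferenceEvaluator(model)
x = np.random.rand(1, 1, 8, 8).astype(np.float32)
w = np.random.rand(1, 1, 3, 3).astype(np.float32)
got = ref.run(None, {"X": x, "W": w})[0]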

Additions

Parameter save_intermediate can be set to a folder name; intermediate results are then saved in that folder, following the same layout as the ONNX backend tests. Let’s consider a model with the following nodes:

<
    ir_version: 8,
    opset_import: [ "" : 18]
>
agraph (float[N, 128] X, float[128,10] W, float[10] B) => (float[N] C)
{
    T = MatMul(X, W)
    S = Add(T, B)
    C = Softmax(S)
}

After the model is run with CReferenceEvaluator(..., save_intermediate="modelrun"), the following files are produced.

modelrun
    +-- test_node_0_MatMul
    |       +-- model.onnx
    |       +-- test_data_set_0
    |               + input_0.pb
    |               + input_1.pb
    |               + output_0.pb
    +-- test_node_1_Add
    |       +-- model.onnx
    |       +-- test_data_set_0
    |               + input_0.pb
    |               + input_1.pb
    |               + output_0.pb
    +-- test_node_2_Softmax
            +-- model.onnx
            +-- test_data_set_0
                    + input_0.pb
                    + output_0.pb
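A sketch of the call that produces this layout; the textual graph above is parsed with onnx.parser and fed with random inputs of matching shapes.

import numpy as np
from onnx.parser import parse_model
from onnx_extended.reference import CReferenceEvaluator

model = parse_model("""
<ir_version: 8, opset_import: ["" : 18]>
agraph (float[N, 128] X, float[128, 10] W, float[10] B) => (float[N] C)
{
    T = MatMul(X, W)
    S = Add(T, B)
    C = Softmax(S)
}
""")

ref = CReferenceEvaluator(model, save_intermediate="modelrun")
x = np.random.rand(5, 128).astype(np.float32)
w = np.random.rand(128, 10).astype(np.float32)
b = np.random.rand(10).astype(np.float32)
ref.run(None, {"X": x, "W": w, "B": b})
# the folder "modelrun" now contains one backend test per node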

These files can then be run with a different runtime to look for discrepancies. The following example executes the model node by node with onnxruntime.

from onnx.backend.test.loader import load_model_tests
from onnx_extended.reference.c_reference_backend import (
    CReferenceEvaluatorBackend,
    create_reference_backend,
)
from onnxruntime import InferenceSession

root = "folder which contains the folder modelrun"
examples = load_model_tests(root, "modelrun")

class Wrapper(InferenceSession):

    def __init__(self, model, *args, providers=None, **kwargs):
        super().__init__(
            model.SerializeToString(),
            *args,
            providers=providers or ["CPUExecutionProvider"],
            **kwargs,
        )

    def run(self, *args, **kwargs):
        return InferenceSession.run(self, *args, **kwargs)

    @property
    def input_names(self):
        return [i.name for i in self.get_inputs()]

    @property
    def output_names(self):
        return [o.name for o in self.get_outputs()]

new_cls = CReferenceEvaluatorBackend[Wrapper]
backend = create_reference_backend(new_cls, path_to_test=root)
backend.run()

New in version 0.2.0.

property input_names#

Returns the input names.

property opsets#

Returns the opsets.

property output_names#

Returns the output names.

run(output_names, feed_inputs: Dict[str, Any], attributes: Dict[str, Any] | None = None)[source]#

Executes the onnx model.

Parameters:
  • output_names – requested outputs by name, None for all

  • feed_inputs – dictionary { input name: input value }

  • attributes – attribute values if the instance runs a FunctionProto

Returns:

list of requested outputs
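For instance, assuming ref is the CReferenceEvaluator built from the MatMul/Add/Softmax model above:

import numpy as np

x = np.random.rand(5, 128).astype(np.float32)
w = np.random.rand(128, 10).astype(np.float32)
b = np.random.rand(10).astype(np.float32)

# every output of the graph
everything = ref.run(None, {"X": x, "W": w, "B": b})
# only the outputs requested by name
only_c = ref.run(["C"], {"X": x, "W": w, "B": b})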

Backend#

onnx_extended.reference.c_reference_backend.create_reference_backend(backend: type[onnx.backend.base.Backend] | None = None, path_to_test: str | None = None, kind: str | None = None) → Runner[source]#
class onnx_extended.reference.c_reference_backend.CReferenceEvaluatorBackend[source]#

See onnx_extended.reference.CReferenceEvaluator for an example.

cls_inference#

alias of CReferenceEvaluator

classmethod create_inference_session(model: str | bytes | ModelProto | NodeProto | FunctionProto)[source]#

Creates an instance of the class running a model.

classmethod is_opset_supported(model)[source]#

Tells whether the opsets required by the model are supported.

classmethod run_model(model, inputs: List[Any], device: str | None = None, **kwargs: Dict[str, Any])[source]#

Called if the onnx proto is a ModelProto.

classmethod run_node(node, inputs, device=None, outputs_info=None, **kwargs)[source]#

Called if the onnx proto is a NodeProto.

classmethod supports_device(device: str) → bool[source]#

Tells if a specific device is supported.
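A sketch of calling the backend entry points directly; the Add node is illustrative, and the commented run_model call assumes a valid ModelProto named model with a matching input x.

import numpy as np
from onnx.helper import make_node
from onnx_extended.reference.c_reference_backend import CReferenceEvaluatorBackend

# run a single node without building a full model
node = make_node("Add", ["A", "B"], ["C"])
a = np.array([1.0, 2.0], dtype=np.float32)
b = np.array([3.0, 4.0], dtype=np.float32)
outputs = CReferenceEvaluatorBackend.run_node(node, [a, b])

# a full ModelProto goes through run_model instead, e.g.
# CReferenceEvaluatorBackend.run_model(model, [x], device="CPU")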

class onnx_extended.reference.c_reference_backend.CReferenceEvaluatorBackendRep(session: CReferenceEvaluator)[source]#

See onnx_extended.reference.CReferenceEvaluator for an example.

Parameters:

session – any runtime with the same interface as onnx.reference.ReferenceEvaluator

run(inputs: List[ndarray], **kwargs) → List[ndarray][source]#

Runs the wrapped session on the given inputs and returns the outputs.

class onnx_extended.reference.c_reference_backend.Runner(backend: type[onnx.backend.base.Backend], path_to_test: str | None = None, kind: str | List[str] | None = None, test_kwargs: dict[str, Any] | None = None)[source]#

Collects tests and run them as unit tests.

Parameters:
  • backend – a subclass of onnx.backend.base.Backend

  • path_to_test – folder to look at

  • kind – subfolder to test

  • test_kwargs – additional test parameters

run(verbose: int = 0, exc_cls: type | None = <class 'AssertionError'>) → Tuple[List[Tuple[str, Callable]], List[Tuple[str, Callable, Any]], List[Tuple[str, Callable, Exception]]][source]#

Runs all tests.

Parameters:
  • verbose – verbosity level; progress is displayed with tqdm

  • exc_cls – exception to raise when a test fails; if None, no exception is raised

Returns:

list of run tests, list of skipped tests, list of failed tests

tests(name: str = 'CustomTestCase') → type[unittest.case.TestCase][source]#

Returns a subclass of unittest.TestCase.

Parameters:

name – name of the subclass
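A sketch of how Runner can be combined with unittest, assuming the node-by-node tests saved above; "root_folder" is an illustrative path.

import unittest
from onnx_extended.reference.c_reference_backend import (
    CReferenceEvaluatorBackend,
    Runner,
)

# collect the backend tests stored under root_folder/modelrun
runner = Runner(
    CReferenceEvaluatorBackend, path_to_test="root_folder", kind="modelrun"
)

# run them programmatically: lists of run, skipped and failed tests
ran, skipped, failed = runner.run(verbose=1)

# or expose them as a unittest.TestCase subclass
CustomTestCase = runner.tests(name="CustomTestCase")

if __name__ == "__main__":
    unittest.main()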

Tools#

onnx_extended.reference.from_array_extended(tensor: ndarray, name: str | None = None) → TensorProto[source]#

Converts a numpy array into a TensorProto, including support for float 8 types.

Parameters:
  • tensor – numpy array

  • name – name

Returns:

TensorProto
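For example, with a regular float32 array:

import numpy as np
from onnx_extended.reference import from_array_extended

weights = np.array([[0.5, -1.25], [2.0, 0.0]], dtype=np.float32)
tensor = from_array_extended(weights, name="W")
print(tensor.dims, tensor.data_type)  # [2, 2] and 1 (TensorProto.FLOAT)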

Operators#

ai.onnx#

class onnx_extended.reference.c_ops.c_op_conv.Conv(onnx_node: NodeProto, run_params: Dict[str, Any], schema: Any | None = None)[source]#

ai.onnx.ml#

class onnx_extended.reference.c_ops.c_op_tree_ensemble_classifier.TreeEnsembleClassifier_1(onnx_node: NodeProto, run_params: Dict[str, Any], schema: Any | None = None)[source]#
class onnx_extended.reference.c_ops.c_op_tree_ensemble_classifier.TreeEnsembleClassifier_3(onnx_node: NodeProto, run_params: Dict[str, Any], schema: Any | None = None)[source]#
class onnx_extended.reference.c_ops.c_op_tree_ensemble_regressor.TreeEnsembleRegressor_1(onnx_node: NodeProto, run_params: Dict[str, Any], schema: Any | None = None)[source]#
class onnx_extended.reference.c_ops.c_op_tree_ensemble_regressor.TreeEnsembleRegressor_3(onnx_node: NodeProto, run_params: Dict[str, Any], schema: Any | None = None)[source]#
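These classes can also be registered manually with onnx.reference.ReferenceEvaluator through new_ops, in the same way as Conv at the top of this page. A sketch, assuming model is a ModelProto containing an ai.onnx.ml TreeEnsembleRegressor node:

from onnx.reference import ReferenceEvaluator
from onnx_extended.reference.c_ops.c_op_tree_ensemble_regressor import (
    TreeEnsembleRegressor_3,
)

# model is assumed to hold a TreeEnsembleRegressor node (ai.onnx.ml opset 3)
ref = ReferenceEvaluator(model, new_ops=[TreeEnsembleRegressor_3])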