yobx.reference.onnxruntime_evaluator#

class yobx.reference.onnxruntime_evaluator.OnnxList(itype: list | int)[source]#

Defines a list for the runtime.

clone() OnnxList[source]#

Clones the list (torch tensors).

get_device()[source]#

Returns the device of the first tensor.

numpy()[source]#

Creates a new list with all tensors converted to numpy, or returns self if that is already the case.

to(tensor_like) OnnxList[source]#

Creates a new list with all tensors on numpy or pytorch depending on tensor_like.
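A minimal sketch of the class, assuming the constructor accepts a list of tensors (the signature names the argument itype). The import is guarded because yobx may not be installed; the calls follow the documented API.

```python
# Sketch of OnnxList usage with numpy arrays (no torch needed).
import numpy as np

tensors = [np.zeros((2, 2), dtype=np.float32), np.ones(3, dtype=np.float32)]

try:
    from yobx.reference.onnxruntime_evaluator import OnnxList

    lst = OnnxList(tensors)        # assumption: the constructor takes a list of tensors
    as_np = lst.numpy()            # tensors already on numpy: may return self
    copy = lst.clone()             # independent copy of the tensors
    moved = lst.to(np.float32(0))  # a numpy tensor_like keeps the list on numpy
except ImportError:
    pass  # yobx not installed; the calls above mirror the documented methods
```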

OnnxruntimeEvaluator#

class yobx.reference.onnxruntime_evaluator.OnnxruntimeEvaluator(proto: str | FunctionProto | ModelProto | GraphProto | NodeProto | OnnxruntimeEvaluator | ExportArtifact, session_options: SessionOptions | None = None, providers: str | List[str] | None = None, nvtx: bool = False, enable_profiling: bool = False, graph_optimization_level: GraphOptimizationLevel | bool = None, log_severity_level: int | None = None, log_verbosity_level: int | None = None, optimized_model_filepath: str | None = None, disable_aot_function_inlining: bool | None = None, use_training_api: bool = False, verbose: int = 0, local_functions: Dict[Tuple[str, str], FunctionProto | ModelProto | GraphProto | NodeProto | OnnxruntimeEvaluator] | None = None, ir_version: int = 10, opsets: int | Dict[str, int] | None = None, whole: bool = False, torch_or_numpy: bool | None = None, function_kwargs: Dict[str, Any] | None = None, dump_onnx_model: str | None = None)[source]#

This class loads an ONNX model and then executes its nodes one by one with onnxruntime. It is mostly meant for debugging.

Parameters:
  • proto – proto or filename

  • session_options – options

  • nvtx – enable nvidia events

  • providers – None, “CPU”, “CUDA” or a list of providers

  • graph_optimization_level – see onnxruntime.SessionOptions

  • log_severity_level – see onnxruntime.SessionOptions

  • log_verbosity_level – see onnxruntime.SessionOptions

  • optimized_model_filepath – see onnxruntime.SessionOptions

  • disable_aot_function_inlining – see onnxruntime.SessionOptions

  • use_training_api – use onnxruntime-training API

  • verbose – verbosity

  • local_functions – additional local functions

  • ir_version – ir version to use when unknown

  • opsets – opsets to use when unknown

  • whole – if True, do not split node by node

  • torch_or_numpy – force the use of one of them, True for torch, False for numpy, None to let the class choose

  • dump_onnx_model – dumps the temporary onnx model created if whole is True

  • function_kwargs – a FunctionProto may have parameters; this dictionary contains their values
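To make the parameters concrete, here is a hedged sketch: it builds a tiny two-node model with onnx.helper (for illustration only) and evaluates it node by node. The onnx and yobx imports are guarded since neither package may be available here; the constructor call follows the documented signature.

```python
# Minimal sketch: evaluate a hand-built ONNX model with OnnxruntimeEvaluator.
import numpy as np

x = np.array([-1.0, 2.0], dtype=np.float32)
expected = np.abs(x + x)  # reference result for Y, computed with numpy

try:
    import onnx.helper as oh
    from onnx import TensorProto
    from yobx.reference.onnxruntime_evaluator import OnnxruntimeEvaluator

    # Y = Abs(X + X), built only for the demonstration.
    graph = oh.make_graph(
        [oh.make_node("Add", ["X", "X"], ["T"]),
         oh.make_node("Abs", ["T"], ["Y"])],
        "demo",
        [oh.make_tensor_value_info("X", TensorProto.FLOAT, [2])],
        [oh.make_tensor_value_info("Y", TensorProto.FLOAT, [2])],
    )
    model = oh.make_model(graph, opset_imports=[oh.make_opsetid("", 18)])

    ev = OnnxruntimeEvaluator(model, providers="CPU", verbose=1)
    (got,) = ev.run(None, {"X": x})
    assert np.allclose(got, expected)
except ImportError:
    pass  # onnx or yobx not installed; the sketch still shows the intended call
```

Passing whole=True would run the model in a single onnxruntime session instead of splitting it node by node.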

enumerate_nodes(nodes: List[NodeProto]) Iterator[NodeProto][source]#

Enumerates nodes recursively.

property input_names: List[str]#

Returns input names.

property input_types: List[TypeProto]#

Returns input types.

property output_names: List[str]#

Returns output names.

property output_types: List[TypeProto]#

Returns output types.

run(outputs: List[str] | None, feed_inputs: Dict[str, Any], intermediate: bool = False, report_cmp: ReportResultComparison | None = None) Dict[str, Any] | List[Any][source]#

Runs the model. It only works with numpy arrays.

Parameters:
  • outputs – required outputs or None for all

  • feed_inputs – inputs

  • intermediate – returns all outputs instead of only the last ones

  • report_cmp – used as a reference: every intermediate result is compared to the existing ones; if set, it is an instance of yobx.reference.ReportResultComparison

Returns:

outputs, as a list if intermediate is False, as a dictionary if intermediate is True
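Under the same assumptions as above (a small hand-built model, guarded imports since onnx and yobx may be missing), a sketch of intermediate=True, which returns every computed result as a dictionary rather than only the requested outputs:

```python
# Sketch: retrieve intermediate results with intermediate=True.
import numpy as np

x = np.array([3.0, -4.0], dtype=np.float32)
expected = np.maximum(-x, 0)  # reference for Y = Relu(Neg(X)), computed with numpy

try:
    import onnx.helper as oh
    from onnx import TensorProto
    from yobx.reference.onnxruntime_evaluator import OnnxruntimeEvaluator

    graph = oh.make_graph(
        [oh.make_node("Neg", ["X"], ["T"]),
         oh.make_node("Relu", ["T"], ["Y"])],
        "demo",
        [oh.make_tensor_value_info("X", TensorProto.FLOAT, [2])],
        [oh.make_tensor_value_info("Y", TensorProto.FLOAT, [2])],
    )
    model = oh.make_model(graph, opset_imports=[oh.make_opsetid("", 18)])

    ev = OnnxruntimeEvaluator(model)
    results = ev.run(None, {"X": x}, intermediate=True)
    # results should be a dictionary holding "T" and "Y", not just the final output
    assert np.allclose(results["Y"], expected)
except ImportError:
    pass  # onnx or yobx not installed; the call mirrors the documented signature
```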