onnx_extended.tools

onnx_extended.tools.ort_debug

enumerate_ort_run

onnx_extended.tools.ort_debug.enumerate_ort_run(onx: str | ModelProto, feeds: Dict[str, Any], verbose: int = 0, providers: List[str] | None = None, **kwargs: Dict[str, Any]) → Iterator[Tuple[List[str], List[Any], NodeProto]]

Yields all the intermediate results produced by onnxruntime.

Parameters:
  • onx – ONNX model or path to the model

  • feeds – input tensors

  • verbose – prints out a summary of the results

  • providers – if not specified, default is [“CPUExecutionProvider”]

  • kwargs – additional parameters given to InferenceSession when it is initialized

Returns:

names, intermediate results, and the node which produced them
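
A minimal sketch of how this function might be used, assuming a model stored in model.onnx with a single float input named X (both names are assumptions):

    import numpy as np
    from onnx_extended.tools.ort_debug import enumerate_ort_run

    # hypothetical model path and input name
    feeds = {"X": np.random.rand(2, 3).astype(np.float32)}
    for names, results, node in enumerate_ort_run("model.onnx", feeds, verbose=1):
        # names: output names of the node, results: the computed tensors,
        # node: the NodeProto which produced them
        print(node.op_type, names, [getattr(r, "shape", None) for r in results])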

onnx_extended.tools.js_profile

js_profile_to_dataframe

onnx_extended.tools.js_profile.js_profile_to_dataframe(filename: str, as_df: bool = True, first_it_out: bool = False, agg: bool = False, agg_op_name: bool = False, with_shape: bool = False) → List | DataFrame

Loads the profiling produced by onnxruntime when executing an onnx graph.

Parameters:
  • filename – filename holding the profiling stored in json format

  • as_df – returns the results as a DataFrame if True, as a list otherwise

  • first_it_out – if aggregated, leaves the first iteration out

  • agg – aggregate by event

  • agg_op_name – aggregate on operator name or operator index

  • with_shape – keep the shape before aggregating

Returns:

a DataFrame if as_df is True, a list otherwise
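
A small sketch, assuming a profiling file onnxruntime_profile.json written by onnxruntime with profiling enabled (the file name is an assumption):

    from onnx_extended.tools.js_profile import js_profile_to_dataframe

    # the json file is produced by onnxruntime when profiling is enabled
    # on the session (SessionOptions.enable_profiling = True)
    df = js_profile_to_dataframe("onnxruntime_profile.json", first_it_out=True)
    print(df.head())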

plot_ort_profile

onnx_extended.tools.js_profile.plot_ort_profile(df: DataFrame, ax0: matplotlib.axes.Axes | None = None, ax1: matplotlib.axes.Axes | None = None, title: str | None = None) → matplotlib.axes.Axes

Plots the time spent in computation based on a dataframe produced by function js_profile_to_dataframe().

Parameters:
  • df – dataframe

  • ax0 – first axis to draw time

  • ax1 – second axis to draw occurrences

  • title – graph title

Returns:

the graph
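
A sketch combining js_profile_to_dataframe() and this function; the profiling file name and the figure layout are assumptions:

    import matplotlib.pyplot as plt
    from onnx_extended.tools.js_profile import (
        js_profile_to_dataframe, plot_ort_profile)

    df = js_profile_to_dataframe("onnxruntime_profile.json", first_it_out=True)
    fig, ax = plt.subplots(1, 2, figsize=(10, 4))
    plot_ort_profile(df, ax[0], ax[1], title="profiling")  # time and occurrences
    fig.savefig("ort_profile.png")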

plot_ort_profile_timeline

onnx_extended.tools.js_profile.plot_ort_profile_timeline(df: DataFrame, ax: matplotlib.axes.Axes | None = None, iteration: int = -2, title: str | None = None, quantile: float = 0.5, fontsize: int = 12) → matplotlib.axes.Axes

Creates a timeline based on a dataframe produced by function js_profile_to_dataframe().

Parameters:
  • df – dataframe

  • ax – axis to draw the timeline on

  • iteration – iteration to plot, negative value to start from the end

  • title – graph title

  • quantile – draws the 10% least consuming operators in a different color

  • fontsize – font size

Returns:

the graph
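
A sketch, again with an assumed profiling file name, plotting the second-to-last iteration (the default iteration=-2):

    import matplotlib.pyplot as plt
    from onnx_extended.tools.js_profile import (
        js_profile_to_dataframe, plot_ort_profile_timeline)

    df = js_profile_to_dataframe("onnxruntime_profile.json")
    fig, ax = plt.subplots(1, 1, figsize=(10, 4))
    plot_ort_profile_timeline(df, ax, iteration=-2, title="timeline")
    fig.savefig("ort_timeline.png")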

onnx_extended.tools.run_onnx

save_for_benchmark_or_test

onnx_extended.tools.run_onnx.save_for_benchmark_or_test(folder: str, test_name: str, model: ModelProto, inputs: List[ndarray], outputs: List[ndarray] | None = None, data_set: int = 0) → str

Saves a model and its inputs and outputs on disk to later use them as a backend test or a benchmark.

Parameters:
  • folder – folder to save

  • test_name – test name or subfolder

  • model – model to save

  • inputs – inputs of the model

  • outputs – outputs of the model, supposedly the expected outputs; if not specified, they are computed with the reference evaluator

  • data_set – index of the data set, to store multiple tests with the same model

Returns:

test folder
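
A sketch saving one test case; the folder, the test name, the model path and the input shape are all assumptions, and the expected outputs are computed by the reference evaluator because outputs is left empty:

    import numpy as np
    import onnx
    from onnx_extended.tools.run_onnx import save_for_benchmark_or_test

    model = onnx.load("model.onnx")               # hypothetical model
    inputs = [np.random.rand(2, 3).astype(np.float32)]
    test_folder = save_for_benchmark_or_test("tests", "test_model", model, inputs)
    print(test_folder)                            # path of the created test folder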

bench_virtual

onnx_extended.tools.run_onnx.bench_virtual(test_path: str, virtual_path: str, runtimes: List[str] | str = 'ReferenceEvaluator', index: int = 0, warmup: int = 5, repeat: int = 10, modules: List[Dict[str, str | None]] | None = None, verbose: int = 0, save_as_dataframe: str | None = None, filter_fct: Callable[[str, Dict[str, str | None]], bool] | None = None) → List[Dict[str, float | Dict[str, Tuple[int, ...]]]]

Runs the same benchmark over different versions of the same packages in a virtual environment.

Parameters:
  • test_path – test path

  • virtual_path – path to the virtual environment

  • runtimes – runtimes to measure (ReferenceEvaluator, CReferenceEvaluator, onnxruntime)

  • index – test index to measure

  • warmup – number of iterations to run before starting to measure the model

  • repeat – number of iterations to measure

  • modules – modules to install, example: modules=[{"onnxruntime": "1.17.3", "onnx": "1.15.0"}]

  • filter_fct – function to disable some configurations based on the runtime and the installed modules

  • verbose – verbosity

  • save_as_dataframe – if specified, saves the results as a dataframe in this file

Returns:

list of statistics
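
A sketch reusing the test folder created above; the virtual environment path and the module versions are assumptions:

    from onnx_extended.tools.run_onnx import bench_virtual

    results = bench_virtual(
        "tests/test_model",      # folder created by save_for_benchmark_or_test
        "/tmp/venv_bench",       # virtual environment location (assumption)
        runtimes=["ReferenceEvaluator", "onnxruntime"],
        modules=[{"onnxruntime": "1.17.3", "onnx": "1.15.0"}],
        warmup=5,
        repeat=10,
        verbose=1,
    )
    print(results)               # list of dictionaries of statistics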

TestRun

class onnx_extended.tools.run_onnx.TestRun(folder: str)

Loads a test saved by save_for_benchmark_or_test().

Parameters:

folder – test folder

It has the following attributes:

  • folder: str

  • proto: ModelProto

  • datasets: Dict[int, List[Tuple[int, np.array]]]

bench(f_build: Callable[[ModelProto], Any], f_run: Callable[[Any, Dict[str, array]], List[array]], index: int = 0, warmup: int = 5, repeat: int = 10) → Dict[str, float | str | Any]

Runs the model on the given inputs.

Parameters:
  • f_build – function to call to build the inference class

  • f_run – function to call to run the inference

  • index – test index to measure

  • warmup – number of iterations to run before starting to measure the model

  • repeat – number of iterations to measure

Returns:

dictionary with many metrics; any metric ending with “_time” is a duration
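
A sketch of a benchmark with onnxruntime as the backend; the test folder is an assumption:

    from onnxruntime import InferenceSession
    from onnx_extended.tools.run_onnx import TestRun

    tr = TestRun("tests/test_model")
    stats = tr.bench(
        # f_build: creates the inference session from the ModelProto
        lambda proto: InferenceSession(
            proto.SerializeToString(), providers=["CPUExecutionProvider"]),
        # f_run: runs the session on a dictionary of inputs
        lambda sess, feeds: sess.run(None, feeds),
        warmup=5,
        repeat=10,
    )
    print({k: v for k, v in stats.items() if k.endswith("_time")})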

property input_names

Returns the input names of the model.

property output_names

Returns the output names of the model.

test(f_build: Callable[[ModelProto], Any], f_run: Callable[[Any, Dict[str, array]], List[array]], index: int = 0, exc: bool = True, atol: float = 1e-05, rtol: float = 1e-05) → Dict[str, Tuple[float, float, str]] | None

Runs the tests.

Parameters:
  • f_build – function to call to build the inference class

  • f_run – function to call to run the inference

  • index – test index

  • exc – raises an exception if the verification fails

  • atol – absolute tolerance

  • rtol – relative tolerance

Returns:

results with discrepancies, each one mapped to the absolute error, the relative error, and a reason for the failure
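
A sketch of a verification with the same backend; the test folder is an assumption and the returned value is interpreted as described above:

    from onnxruntime import InferenceSession
    from onnx_extended.tools.run_onnx import TestRun

    tr = TestRun("tests/test_model")
    issues = tr.test(
        lambda proto: InferenceSession(
            proto.SerializeToString(), providers=["CPUExecutionProvider"]),
        lambda sess, feeds: sess.run(None, feeds),
        exc=False,            # report discrepancies instead of raising
        atol=1e-5,
        rtol=1e-5,
    )
    print(issues)             # discrepancies with absolute and relative errors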