experimental_experiment.torch_models.dump_helper

experimental_experiment.torch_models.dump_helper.assert_all_close(v1: Any, v2: Any, atol: float | Tuple[float, float] = 1e-05, rtol: float = 1e-05, msg: str = '')[source]

Checks that the expected outputs and new outputs are the same.

Parameters:
  • v1 – tensor or tuple of tensors

  • v2 – tensor or tuple of tensors

  • atol – absolute tolerance, or a pair (absolute tolerance, quantile); when a quantile q is given, the function only checks that the error is below atol for a fraction q of the elements

  • rtol – relative error

  • msg – additional message displayed when the check fails

See 301: Compares LLAMA exporters for onnxrt backend for an example.
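A minimal sketch of the tolerance logic described above, written with plain Python lists instead of tensors (function names `check_close` and `check_close_quantile` are illustrative, not part of the module):

```python
def check_close(v1, v2, atol=1e-5, rtol=1e-5):
    """Element-wise check |a - b| <= atol + rtol * |b| on flat lists of
    floats; tuples of lists are compared pairwise, mirroring the
    tensor-or-tuple-of-tensors contract of assert_all_close."""
    if isinstance(v1, tuple):
        return all(check_close(a, b, atol, rtol) for a, b in zip(v1, v2))
    return all(abs(a - b) <= atol + rtol * abs(b) for a, b in zip(v1, v2))


def check_close_quantile(v1, v2, atol, quantile):
    """When atol is given as (atol, quantile), only the smallest `quantile`
    fraction of the absolute errors has to stay below atol, which makes the
    check robust to a few outliers."""
    errors = sorted(abs(a - b) for a, b in zip(v1, v2))
    k = int(len(errors) * quantile)  # number of elements that must pass
    return all(e < atol for e in errors[:k])
```

With `quantile=0.99`, a single large outlier among 100 values does not fail the check, while a systematic error on every element does.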

experimental_experiment.torch_models.dump_helper.build_matching_inputs(model1: str | ModelProto, feeds: Dict[str, Any], model2: str | ModelProto) Dict[str, Any][source]

Builds a list of inputs for a model based on the inputs made for another. We assume both models need the same inputs.

Parameters:
  • model1 – first model

  • feeds – inputs for the first model

  • model2 – second model, the one we need the inputs for

Returns:

new inputs

See 301: Compares LLAMA exporters for onnxrt backend for an example.
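One plausible way to match feeds across two models is to pair every input of the second model with a feed of the first one that has the same shape. The sketch below (function `match_inputs` and its arguments are hypothetical, not the module's actual algorithm) illustrates that idea with plain Python values:

```python
def match_inputs(feeds, model2_inputs, shape_of):
    """Pair every input (name, shape) of the second model with a feed of the
    first model that has the same shape; each feed is consumed at most once.

    feeds: dict name -> value for the first model
    model2_inputs: list of (name, shape) for the second model
    shape_of: callable returning the shape of a feed value
    """
    available = list(feeds.items())
    new_feeds = {}
    for name, shape in model2_inputs:
        for i, (_, value) in enumerate(available):
            if shape_of(value) == shape:
                new_feeds[name] = value
                del available[i]  # do not reuse the same feed twice
                break
        else:
            raise RuntimeError(f"no feed matches input {name!r} with shape {shape}")
    return new_feeds
```

Usage with lists standing in for tensors:

```python
feeds = {"a": [1, 2, 3], "b": [4, 5]}
new_feeds = match_inputs(feeds, [("x", (2,)), ("y", (3,))], lambda v: (len(v),))
# new_feeds maps "x" to the feed of shape (2,) and "y" to the feed of shape (3,)
```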

experimental_experiment.torch_models.dump_helper.dump_onnx(prefix: str, folder: str | None = None, clean: bool = False)[source]

Context manager enabling the dump of models generated by the onnxrt backend.

Parameters:
  • prefix – prefix for all files

  • folder – sub folder (created if it does not exist)

  • clean – if True, cleans the folder

See 301: Compares LLAMA exporters for onnxrt backend for an example.
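Such a context manager could work by preparing the folder and exposing the dump location through an environment variable for the duration of the block. The sketch below is an assumption about the mechanism; the variable name `ONNXRT_DUMP_PATH` and the function name are illustrative only:

```python
import contextlib
import os
import shutil


@contextlib.contextmanager
def dump_onnx_sketch(prefix, folder=None, clean=False):
    """Hypothetical sketch: create (and optionally clean) the folder, then
    publish the dump prefix through an environment variable (name assumed)
    so the backend knows where to write the generated models."""
    if folder:
        if clean and os.path.exists(folder):
            shutil.rmtree(folder)  # clean=True wipes previous dumps
        os.makedirs(folder, exist_ok=True)
    value = os.path.join(folder or ".", prefix)
    old = os.environ.get("ONNXRT_DUMP_PATH")
    os.environ["ONNXRT_DUMP_PATH"] = value
    try:
        yield value
    finally:
        # restore the previous state on exit
        if old is None:
            os.environ.pop("ONNXRT_DUMP_PATH", None)
        else:
            os.environ["ONNXRT_DUMP_PATH"] = old
```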

experimental_experiment.torch_models.dump_helper.inputs_from_onnx_model(model: str | ModelProto, init: bool = False) List[Tuple[str, int, Tuple[int, ...]]][source]

Returns the inputs for a model.

Parameters:
  • model – model or filename

  • init – include the initializers as well

Returns:

list of inputs and initializers

See 301: Compares LLAMA exporters for onnxrt backend for an example.

experimental_experiment.torch_models.dump_helper.reorder_functions_in_proto(proto: str | ModelProto) str | ModelProto[source]

The reference implementation expects every function to be defined before it is used, so the Rank function has to be placed in first position.

Parameters:

proto – a model

Returns:

the model, modified in place

See 301: Compares LLAMA exporters for onnxrt backend for an example.
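The reordering itself amounts to a stable sort that moves the function others depend on to the front. A minimal sketch over function names (the real helper operates on a ModelProto's function list; this simplification is an assumption):

```python
def reorder_functions(function_names):
    """Move the function others depend on (here assumed to be 'Rank') to the
    front, keeping the relative order of the remaining functions, so a
    sequential reference evaluator finds it already defined."""
    # sorted() is stable: key False (0) sorts before True (1)
    return sorted(function_names, key=lambda name: name != "Rank")
```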

experimental_experiment.torch_models.dump_helper.results_to_string(results: Any, indent: str = '') str[source]

Builds a string showing the type and shape of every tensor in results.
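A sketch of what such a recursive formatter could look like, using a namedtuple as a stand-in for a real tensor (the function `describe` and the output format are assumptions, not the module's exact rendering):

```python
from collections import namedtuple

# Stand-in for a real tensor: only dtype and shape matter here.
Tensor = namedtuple("Tensor", ["dtype", "shape"])


def describe(results, indent=""):
    """Recursively render a tensor or a tuple of tensors as one line per
    item, increasing the indentation at each nesting level."""
    if isinstance(results, tuple) and not isinstance(results, Tensor):
        lines = [f"{indent}tuple of {len(results)} elements"]
        for r in results:
            lines.append(describe(r, indent + "  "))
        return "\n".join(lines)
    return f"{indent}{results.dtype}:{results.shape}"
```

For example, `describe((Tensor("float32", (2, 3)), Tensor("int64", (4,))))` produces a three-line summary: the tuple header followed by one indented line per tensor.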