onnx_diagnostic.helpers.mini_onnx_builder

class onnx_diagnostic.helpers.mini_onnx_builder.MiniOnnxBuilder(target_opset: int = 18, ir_version: int = 10, sep: str = '___')[source]

Simplified builder for very simple models.

Parameters:
  • target_opset – target opset of the model

  • ir_version – IR version to use

  • sep – separator to build output names

append_output_dict(name: str, tensors: Dict[str, ndarray | Tensor])[source]

Adds two outputs: a string tensor for the keys and a sequence of tensors for the values.

The output names are name___keys and name___values.
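The naming convention can be sketched in plain Python (the helper below is illustrative, not part of the library):

```python
# Illustrative sketch of the output naming used by append_output_dict
# with the default separator sep='___'; dict_output_names is hypothetical.
SEP = "___"

def dict_output_names(name: str) -> tuple:
    """Return the two output names produced for a dictionary output."""
    return f"{name}{SEP}keys", f"{name}{SEP}values"

print(dict_output_names("state"))  # -> ('state___keys', 'state___values')
```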

append_output_initializer(name: str, tensor: ndarray | Tensor, randomize: bool = False)[source]

Adds an initializer as an output. The initializer name is prefixed by t_. The output name is name. If randomize is True, the tensor values are not stored; they are replaced by a random generator.
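The randomize option can be pictured with a small numpy sketch; the assumed behavior (keep shape and dtype, drop the values) is an interpretation, and the helper name is hypothetical:

```python
import numpy as np

def randomized_placeholder(tensor: np.ndarray) -> np.ndarray:
    # Sketch of the assumed randomize=True behavior: the stored data is
    # replaced by random values of the same shape and dtype.
    rng = np.random.default_rng(0)
    return rng.standard_normal(tensor.shape).astype(tensor.dtype)

t = np.arange(6, dtype=np.float32).reshape(2, 3)
r = randomized_placeholder(t)
assert r.shape == t.shape and r.dtype == t.dtype
```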

append_output_sequence(name: str, tensors: List[ndarray | Tensor])[source]

Adds a sequence of initializers as an output. The initializer names are prefixed by seq_. The output name is name.

to_onnx() → ModelProto[source]

Conversion to ONNX.

Returns:

the ModelProto

onnx_diagnostic.helpers.mini_onnx_builder.create_input_tensors_from_onnx_model(proto: str | ModelProto, device: str = 'cpu', engine: str = 'ExtendedReferenceEvaluator', sep: str = '___') → Any[source]

Deserializes tensors stored with create_onnx_model_from_input_tensors(). It relies on ExtendedReferenceEvaluator to restore the tensors.

Parameters:
  • proto – ModelProto or a path to the model file

  • device – moves the tensors to this device

  • engine – runtime to use, ExtendedReferenceEvaluator (the default) or onnxruntime

  • sep – separator

Returns:

restored data

See Dumps intermediate results of a torch model for an example.
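The overall idea behind the serialize/restore pair can be sketched in plain Python: nested structures are flattened into sep-joined names and rebuilt from them. The flatten/unflatten helpers below are illustrative, not the library's functions:

```python
SEP = "___"

def flatten(obj, prefix=""):
    # Flatten nested dicts into a single level of SEP-joined names,
    # mirroring how outputs of the dumped model could be named.
    if isinstance(obj, dict):
        out = {}
        for k, v in obj.items():
            key = f"{prefix}{SEP}{k}" if prefix else str(k)
            out.update(flatten(v, key))
        return out
    return {prefix: obj}

def unflatten(flat):
    # Rebuild the nested structure from the SEP-joined names.
    out = {}
    for name, v in flat.items():
        parts = name.split(SEP)
        d = out
        for p in parts[:-1]:
            d = d.setdefault(p, {})
        d[parts[-1]] = v
    return out

nested = {"a": 1, "b": {"c": 2, "d": 3}}
flat = flatten(nested)
assert flat == {"a": 1, "b___c": 2, "b___d": 3}
assert unflatten(flat) == nested
```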

onnx_diagnostic.helpers.mini_onnx_builder.create_onnx_model_from_input_tensors(inputs: Any, switch_low_high: bool | None = None, randomize: bool = False, sep: str = '___') → ModelProto[source]

Creates a ModelProto including all the values as initializers. They can be restored by executing the model. These inputs are assumed to be smaller than 2 GB, the protobuf limit; nothing is implemented yet to get around that limit.

Parameters:
  • inputs – anything

  • switch_low_high – if None, it defaults to sys.byteorder != "big"

  • randomize – if True, float tensors are not stored but randomized to save space

  • sep – separator

Returns:

ModelProto

The function raises an error if an input type is not supported.
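The switch_low_high default can be sketched with numpy; reading it as "swap bytes on little-endian hosts" is an assumption about the intent, not the library's code:

```python
import sys
import numpy as np

# Sketch of the default: bytes are swapped when the host byte order
# is little-endian (this reading of the intent is an assumption).
switch_low_high = sys.byteorder != "big"

t = np.array([1.0, 2.0, 3.0], dtype=np.float32)
stored = t.byteswap() if switch_low_high else t
# Applying the same swap on load restores the original values.
restored = stored.byteswap() if switch_low_high else stored
assert np.array_equal(restored, t)
```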

onnx_diagnostic.helpers.mini_onnx_builder.proto_from_array(arr: Tensor, name: str | None = None, verbose: int = 0) → TensorProto[source]

Converts a torch Tensor into a TensorProto.

Parameters:
  • arr – tensor

  • name – name of the tensor

  • verbose – display the type and shape

Returns:

a TensorProto
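What such a conversion must capture can be sketched with a numpy analogue: dtype, shape, and raw bytes are enough to rebuild the tensor. The to_raw/from_raw helpers are illustrative, not the library's API:

```python
import numpy as np

def to_raw(arr: np.ndarray) -> dict:
    # Capture what a TensorProto essentially stores:
    # the dtype, the shape, and the raw bytes.
    return {"dtype": str(arr.dtype), "shape": arr.shape, "raw": arr.tobytes()}

def from_raw(d: dict) -> np.ndarray:
    # Rebuild the tensor from the captured fields.
    return np.frombuffer(d["raw"], dtype=d["dtype"]).reshape(d["shape"])

a = np.arange(4, dtype=np.float32).reshape(2, 2)
assert np.array_equal(from_raw(to_raw(a)), a)
```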