experimental_experiment.mini_onnx_builder¶
- class experimental_experiment.mini_onnx_builder.MiniOnnxBuilder(target_opset: int = 18, ir_version: int = 10)[source]¶
Simplified builder to build very simple models.
- Parameters:
target_opset – opset to specify
ir_version – IR version to use
- append_output_dict(name: str, tensors: Dict[str, ndarray | torch.Tensor])[source]¶
Adds two outputs: a string tensor for the keys and a sequence of tensors for the values. The output names are name__keys and name__values.
- append_output_initializer(name: str, tensor: ndarray | torch.Tensor, randomize: bool = False)[source]¶
Adds an initializer as an output. The initializer name is prefixed with t_. The output name is name. If randomize is True, the tensor values are not stored but replaced by a random generator.
- append_output_sequence(name: str, tensors: List[ndarray | torch.Tensor])[source]¶
Adds a sequence of initializers as an output. The initializer names are prefixed with seq_. The output name is name.
- to_onnx() ModelProto [source]¶
Conversion to onnx.
- Returns:
the ModelProto
- experimental_experiment.mini_onnx_builder.create_input_tensors_from_onnx_model(proto: str | ModelProto, device: str = 'cpu', engine: str = 'ExtendedReferenceEvaluator') Tuple[Any, ...] | Dict[str, Any] [source]¶
Deserializes tensors stored with function
create_onnx_model_from_input_tensors()
. It relies on ExtendedReferenceEvaluator to restore the tensors.
- Parameters:
proto – ModelProto or the file itself
device – moves the tensor to this device
engine – runtime to use; ExtendedReferenceEvaluator, the default value, or onnxruntime
- Returns:
the restored tensors, as a tuple or a dictionary
- experimental_experiment.mini_onnx_builder.create_onnx_model_from_input_tensors(inputs: Any, switch_low_high: bool | None = None, randomize: bool = False) ModelProto [source]¶
Creates a model proto including all the values as initializers. They can be restored by executing the model. We assume these inputs are not bigger than 2 GB, the limit of protobuf.
- Parameters:
inputs – anything
switch_low_high – if None, it is equal to
switch_low_high=sys.byteorder != "big"
randomize – if True, float tensors are not stored but randomized to save space
- Returns:
ModelProto
The function raises an error if the input type is not supported.
- experimental_experiment.mini_onnx_builder.dtype_to_tensor_dtype(dt: dtype) int [source]¶
Converts a torch dtype or a numpy dtype into an onnx element type.
- Parameters:
dt – torch or numpy dtype
- Returns:
onnx type
- experimental_experiment.mini_onnx_builder.proto_from_array(arr: torch.Tensor, name: str | None = None, verbose: int = 0) TensorProto [source]¶
Converts a torch Tensor into a TensorProto.
- Parameters:
arr – tensor
name – name to give to the TensorProto
verbose – if positive, displays the type and shape
- Returns:
a TensorProto
- experimental_experiment.mini_onnx_builder.torch_dtype_to_onnx_dtype(to: torch.dtype) int [source]¶
Converts a torch dtype into an onnx element type.
- Parameters:
to – torch dtype
- Returns:
onnx type
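The correspondence this function implements, written out for a few common dtypes as a static table so that torch itself is not needed to run it (the codes are onnx.TensorProto element types):

```python
# torch dtype (as a string) -> onnx element type code
TORCH_TO_ONNX = {
    "torch.float32": 1,    # TensorProto.FLOAT
    "torch.float16": 10,   # TensorProto.FLOAT16
    "torch.bfloat16": 16,  # TensorProto.BFLOAT16
    "torch.float64": 11,   # TensorProto.DOUBLE
    "torch.int32": 6,      # TensorProto.INT32
    "torch.int64": 7,      # TensorProto.INT64
    "torch.bool": 9,       # TensorProto.BOOL
}
```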