yobx.torch.to_onnx#

yobx.torch.to_onnx(mod: torch.nn.Module | torch.fx.GraphModule, args: Sequence[torch.Tensor] | None = None, kwargs: Dict[str, torch.Tensor] | None = None, input_names: Sequence[str] | None = None, target_opset: int | Dict[str, int] | None = None, as_function: bool = False, options: OptimizationOptions | None = None, verbose: int = 0, return_builder: bool = False, raise_list: Set[str] | None = None, dynamic_shapes: Dict[str, Any] | Tuple[Any] | None = None, optimize: bool = True, dispatcher: Dispatcher | None = None, large_model: bool = False, external_threshold: int = 1024, export_options: str | ExportOptions | None = None, return_optimize_report: bool = False, filename: str | None = None, inline: bool = True, export_modules_as_functions: bool | Set[type[torch.nn.Module]] = False, function_options: FunctionOptions | None = None, output_names: List[str] | None = None, output_dynamic_shapes: Dict[str, Any] | Tuple[Any] | None = None, validate_onnx: bool | float = False) ExportArtifact[source]#

Exports a torch model into ONNX using the dynamo exporter.

Parameters:
  • mod – torch module

  • args – input arguments

  • kwargs – keyword arguments

  • input_names – input names

  • target_opset – targeted opset or targeted opsets as a dictionary

  • as_function – if True, exports as a FunctionProto rather than a ModelProto

  • options – optimization options

  • verbose – verbosity level

  • return_builder – if True, returns the builder as well

  • raise_list – the builder stops whenever a name falls into that list; this is a debugging tool

  • dynamic_shapes – see torch.export.export

  • optimize – optimize the model before exporting into onnx

  • dispatcher – see yobx.torch.interpreter.Dispatcher

  • large_model – if True, returns an onnx.model_container.ModelContainer, letting the user decide later whether the weights should be part of the model or saved as external data

  • external_threshold – if large_model is True, every tensor above this limit is stored as external data

  • return_optimize_report – returns statistics on the optimization as well; statistics are also available via artifact.report on the returned ExportArtifact

  • filename – if specified, stores the model into that file

  • inline – inlines the model before converting to ONNX; this happens before any optimization takes place

  • export_options – options applied before obtaining the exported program

  • export_modules_as_functions – exports submodules as local functions; this parameter can be a set of module classes to preserve, all others are exported as usual

  • function_options – specifies what to do with the initializers in local functions: add them as constants or as inputs

  • output_names – renames the outputs

  • output_dynamic_shapes – same as dynamic_shapes but for the output

  • validate_onnx – if a float or True, validates the ONNX model against the torch model using the inputs given for export; a float sets the tolerance, and True uses a tolerance of 1e-5

Returns:

ExportArtifact wrapping the exported ONNX proto and an ExportReport. When return_builder is True a tuple (artifact, builder) is returned instead; when return_optimize_report is also True the tuple is (artifact, builder, stats).

If the environment variable PRINT_GRAPH_MODULE is set to 1, information about the graph module is printed out. The environment variable ONNXVERBOSE=1 increases verbosity in this function, and ONNX_BUILDER_PROGRESS=1 shows a progress bar on big models. Other debugging options are available; see GraphBuilder.

Example:

import torch
from yobx.torch.interpreter import to_onnx

class Neuron(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(4, 2)

    def forward(self, x):
        return torch.relu(self.linear(x))

x = torch.randn(3, 4)
artifact = to_onnx(Neuron(), (x,))
artifact.save("model.onnx")