onnx_export¶
to_onnx¶
- experimental_experiment.torch_interpreter.to_onnx(mod: torch.nn.Module | torch.fx.GraphModule, args: Sequence[torch.Tensor], input_names: Sequence[str] | None = None, target_opset: int | Dict[str, int] | None = None, as_function: bool = False, options: OptimizationOptions | None = None, verbose: int = 0, return_builder: bool = False, raise_list: Set[str] | None = None, dynamic_shapes: Dict[str, Any] | Tuple[Any] | None = None, optimize: bool = True, dispatcher: Dispatcher | None = None, large_model: bool = False, external_threshold: int = 1024, api_two: bool = False) ModelProto | ModelContainer | Tuple[ModelProto | ModelContainer, GraphBuilder] [source]¶
Exports a torch model into ONNX using the dynamo exporter.
- Parameters:
mod – torch module
args – input arguments
input_names – input names
target_opset – targeted opset or targeted opsets as a dictionary
as_function – if True, exports as a FunctionProto instead of a ModelProto
options – optimization options
verbose – verbosity level
return_builder – returns the builder as well
raise_list – the builder stops whenever it produces a name in this list; this is a debugging tool
dynamic_shapes – see torch.export.export
optimize – optimize the model before exporting into onnx
dispatcher – see experimental_experiment.torch_interpreter.Dispatcher
large_model – if True, returns an onnx.model_container.ModelContainer, letting the user decide later whether the weights should be part of the model or saved as external weights
external_threshold – if large_model is True, every tensor above this size limit is stored as an external tensor
api_two – use torch._dynamo.export instead of torch.export.export
- Returns:
the ONNX model, or a (model, GraphBuilder) tuple if return_builder is True