experimental_experiment.torch_interpreter.onnx_export

class experimental_experiment.torch_interpreter.onnx_export.ParameterNaming(mod: torch.nn.Module, exported_program: torch.export.ExportedProgram | None = None)[source]

A class which maps the parameter names in the original module to the different names they have in the fx.graph.

The exported program and the original model may have different parameter names.
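The mismatch is easy to observe with torch.export alone. A minimal sketch (an illustration, output not reproduced here) prints the parameter names of a module next to the placeholder names the same parameters receive in the exported fx graph, then builds the mapping:

<<<

import torch
from experimental_experiment.torch_interpreter.onnx_export import ParameterNaming


class Neuron(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(5, 3)

    def forward(self, x):
        return torch.relu(self.linear(x))


model = Neuron()
ep = torch.export.export(model, (torch.rand(2, 5),))

# parameter names in the original module, e.g. 'linear.weight'
print([name for name, _ in model.named_parameters()])
# placeholder names in the exported fx graph, e.g. 'p_linear_weight'
print([n.name for n in ep.graph.nodes if n.op == "placeholder"])

# the class is built from both objects
naming = ParameterNaming(model, ep)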

class experimental_experiment.torch_interpreter.onnx_export.SubModuleNaming(mod: torch.nn.Module)[source]

A class which maps submodule class names to local function names in order to produce short but unique names.
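Short unique names matter as soon as a model holds two submodules of the same class, since the class name alone is ambiguous. A minimal sketch of that situation (only the constructor above is assumed, output not reproduced here):

<<<

import torch
from experimental_experiment.torch_interpreter.onnx_export import SubModuleNaming


class Block(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(4, 4)

    def forward(self, x):
        return torch.relu(self.linear(x))


class Model(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.block1 = Block()
        self.block2 = Block()

    def forward(self, x):
        return self.block2(self.block1(x))


model = Model()
# both submodules share the class name 'Block'...
print([(name, type(mod).__name__) for name, mod in model.named_modules()])
# ...so a naming helper is needed to assign short but unique names
naming = SubModuleNaming(model)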

experimental_experiment.torch_interpreter.onnx_export.build_source_lines(model: torch.nn.Module) Dict[str, Tuple[str, Tuple[int, int]]][source]

Extracts the source file and line numbers for each method of the model and its submodules.

Parameters:

model – model to investigate

Returns:

source files and lines
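A minimal call relying only on the signature above; each value pairs a source file with line numbers (output not reproduced here):

<<<

import pprint
import torch
from experimental_experiment.torch_interpreter.onnx_export import build_source_lines


class Neuron(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(5, 3)

    def forward(self, x):
        return torch.relu(self.linear(x))


# each value is a source file and a pair of line numbers
pprint.pprint(build_source_lines(Neuron()))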

experimental_experiment.torch_interpreter.onnx_export.get_default_aten_as_function(target_opset: int | None = None) Tuple[str][source]

Returns the list of aten functions to export as local functions, depending on the target opset. If the opset is not specified, it returns a minimal set of functions to keep.

<<<

import pprint
from experimental_experiment.torch_interpreter.onnx_export import (
    get_default_aten_as_function,
)

pprint.pprint(get_default_aten_as_function(23))

>>>

    ('aten.index_copy.default',
     'aten.index_put.default',
     'aten.setitem',
     <built-in function setitem>)

experimental_experiment.torch_interpreter.onnx_export.is_wrapped(model: Any, dynamic_shapes: Any | None = None) bool[source]

Tells whether a model is wrapped.
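A minimal sketch relying only on the signature above, probing a plain module (output not reproduced here):

<<<

import torch
from experimental_experiment.torch_interpreter.onnx_export import is_wrapped


class Neuron(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(5, 3)

    def forward(self, x):
        return torch.relu(self.linear(x))


# a regular torch.nn.Module given as such
print(is_wrapped(Neuron()))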

experimental_experiment.torch_interpreter.onnx_export.match_input_parameters(model: Any, names: List[str], args: Tuple[Any, ...] | None = None) Dict[str, Any][source]

Maps the given names to the parameter names in the model.

Parameters:
  • model – model

  • names – names to retrieve

  • args – available inputs

Returns:

dictionary mapping each name to its value

Example:

<<<

import torch
from torch._subclasses.fake_tensor import FakeTensorMode
from experimental_experiment.reference import ExtendedReferenceEvaluator
from experimental_experiment.torch_interpreter import to_onnx, match_input_parameters


class Neuron(torch.nn.Module):
    def __init__(self, n_dims: int, n_targets: int):
        super().__init__()
        self.linear = torch.nn.Linear(n_dims, n_targets)

    def forward(self, x):
        return torch.relu(self.linear(x))


fake_mode = FakeTensorMode()
converter = fake_mode.fake_tensor_converter

fake_x = converter.from_real_tensor(fake_mode, torch.rand(2, 5))
with fake_mode:
    model = Neuron(5, 3)
    onx = to_onnx(model, (fake_x,))

# expected values with a different model
not_fake_model = Neuron(5, 3)
x = torch.rand(2, 5)
expected = not_fake_model(x)
print(expected)

# feeds the onnx inputs with the weights of the second model and runs it
names = [i.name for i in onx.graph.input]
pfeeds = match_input_parameters(not_fake_model, names, (x,))
nfeeds = {k: v.detach().numpy() for k, v in pfeeds.items()}
ref = ExtendedReferenceEvaluator(onx)
got = ref.run(None, nfeeds)
print(got)

>>>

    tensor([[0., 0., 0.],
            [0., 0., 0.]], grad_fn=<ReluBackward0>)
    [array([[0., 0., 0.],
           [0., 0., 0.]], dtype=float32)]

experimental_experiment.torch_interpreter.onnx_export.validate_exported_onnx(model: torch.nn.Module, args, kwargs, filename, atol: float = 1e-05, verbose: int = 0)[source]

Validates the exported model with onnxruntime.
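A sketch of a possible round trip, assuming, as in the example above, that to_onnx returns an onnx ModelProto which can be saved to the file validate_exported_onnx presumably reads; the filename is only illustrative (output not reproduced here):

<<<

import onnx
import torch
from experimental_experiment.torch_interpreter import to_onnx
from experimental_experiment.torch_interpreter.onnx_export import validate_exported_onnx


class Neuron(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(5, 3)

    def forward(self, x):
        return torch.relu(self.linear(x))


model = Neuron()
x = torch.rand(2, 5)

# export the model, save it, then compare torch and onnxruntime outputs
onx = to_onnx(model, (x,))
onnx.save(onx, "neuron.onnx")
validate_exported_onnx(model, (x,), {}, "neuron.onnx", verbose=1)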