experimental_experiment.torch_interpreter.onnx_export

class experimental_experiment.torch_interpreter.onnx_export.ParameterNaming(mod: torch.nn.Module, exported_program: torch.export.ExportedProgram | None = None)[source]

A class which maps parameter names in the original module to the different names they have in the fx.graph.

The exported program and the original model may have different parameter names.
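Example (a minimal sketch, not executed here; only the constructor documented above is used, the accessors of ParameterNaming are not shown, and the placeholder name p_linear_weight is only indicative of how recent versions of torch.export rename parameters):

import torch
from experimental_experiment.torch_interpreter.onnx_export import ParameterNaming


class Neuron(torch.nn.Module):
    def __init__(self, n_dims: int, n_targets: int):
        super().__init__()
        self.linear = torch.nn.Linear(n_dims, n_targets)

    def forward(self, x):
        return torch.relu(self.linear(x))


model = Neuron(5, 3)
ep = torch.export.export(model, (torch.rand(2, 5),))

# The original module calls this parameter "linear.weight", but the exported
# fx.graph refers to it through a renamed placeholder (e.g. "p_linear_weight").
print([name for name, _ in model.named_parameters()])
print([n.name for n in ep.graph.nodes if n.op == "placeholder"])

# ParameterNaming keeps the correspondence between both naming schemes.
naming = ParameterNaming(model, exported_program=ep)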

class experimental_experiment.torch_interpreter.onnx_export.SubModuleNaming(mod: torch.nn.Module)[source]

A class which maps submodule class names to local functions in order to give them short but unique names.
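Example (a minimal sketch, not executed here; only the documented constructor is used, and how the resulting names are queried afterwards is not shown):

import torch
from experimental_experiment.torch_interpreter.onnx_export import SubModuleNaming


class Block(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(4, 4)

    def forward(self, x):
        return torch.relu(self.linear(x))


class Model(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.block1 = Block()
        self.block2 = Block()

    def forward(self, x):
        return self.block2(self.block1(x))


# Both submodules share the class name "Block"; the naming helper is expected
# to derive short but unique names when they become local functions.
naming = SubModuleNaming(Model())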

experimental_experiment.torch_interpreter.onnx_export.is_wrapped(model: Any, dynamic_shapes: Any | None = None) bool[source]

Tells if a model is wrapped.
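Example (a minimal sketch, not executed here; the exact criterion for a "wrapped" model is not detailed on this page, the calls only follow the signature above):

import torch
from experimental_experiment.torch_interpreter.onnx_export import is_wrapped


class Neuron(torch.nn.Module):
    def __init__(self, n_dims: int = 5, n_targets: int = 3):
        super().__init__()
        self.linear = torch.nn.Linear(n_dims, n_targets)

    def forward(self, x):
        return torch.relu(self.linear(x))


model = Neuron()

# dynamic_shapes is optional and, when given, mirrors the structure
# of the model inputs (one entry per positional argument).
print(is_wrapped(model))
print(is_wrapped(model, dynamic_shapes=({0: torch.export.Dim("batch")},)))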

experimental_experiment.torch_interpreter.onnx_export.match_input_parameters(model: Any, names: List[str], args: Tuple[Any, ...] | None = None) Dict[str, Any][source]

Maps the given names to the corresponding parameters in the model.

Parameters:
  • model – the model holding the parameters

  • names – names to retrieve

  • args – available inputs

Returns:

a dictionary mapping each retrieved name to the corresponding tensor

Example:

<<<

import torch
from torch._subclasses.fake_tensor import FakeTensorMode
from experimental_experiment.reference import ExtendedReferenceEvaluator
from experimental_experiment.torch_interpreter import to_onnx, match_input_parameters


class Neuron(torch.nn.Module):
    def __init__(self, n_dims: int, n_targets: int):
        super().__init__()
        self.linear = torch.nn.Linear(n_dims, n_targets)

    def forward(self, x):
        return torch.relu(self.linear(x))


fake_mode = FakeTensorMode()
converter = fake_mode.fake_tensor_converter

fake_x = converter.from_real_tensor(fake_mode, torch.rand(2, 5))
with fake_mode:
    model = Neuron(5, 3)
    onx = to_onnx(model, (fake_x,))

# expected values computed with a real (non-fake) model
not_fake_model = Neuron(5, 3)
x = torch.rand(2, 5)
expected = not_fake_model(x)
print(expected)

# match the ONNX input names with the real model's weights and run the graph
names = [i.name for i in onx.graph.input]
pfeeds = match_input_parameters(not_fake_model, names, (x,))
nfeeds = {k: v.detach().numpy() for k, v in pfeeds.items()}
ref = ExtendedReferenceEvaluator(onx)
got = ref.run(None, nfeeds)
print(got)

>>>

    tensor([[0.0000, 0.2166, 0.1305],
            [0.0000, 0.0000, 0.5688]], grad_fn=<ReluBackward0>)
    [array([[0.   , 0.217, 0.131],
           [0.   , 0.   , 0.569]], dtype=float32)]