yobx.torch.interpreter.onnx_export#
- class yobx.torch.interpreter.onnx_export.ParameterNaming(mod: torch.nn.Module, exported_program: torch.export.ExportedProgram | None = None)[source]#
A class which maps parameter names in the original module to the different names they have in the fx.graph.
The exported program and the original model may have different parameter names.
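As a rough illustration of why such a mapping is needed: `torch.export` commonly turns a parameter name like `linear.weight` into an fx.graph placeholder such as `p_linear_weight`. The sketch below assumes that prefix-and-underscore convention; it is not the class's actual implementation.

```python
# Hedged sketch: assume torch.export renames "linear.weight" to a
# placeholder like "p_linear_weight" (prefix and separator are assumptions).
def fx_graph_name(original_name: str) -> str:
    return "p_" + original_name.replace(".", "_")

mapping = {n: fx_graph_name(n) for n in ["linear.weight", "linear.bias"]}
print(mapping)
# {'linear.weight': 'p_linear_weight', 'linear.bias': 'p_linear_bias'}
```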
- class yobx.torch.interpreter.onnx_export.SubModuleNaming(mod: torch.nn.Module)[source]#
A class which maps submodule classes and local functions to short but unique names.
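One way such a mapping can work (a sketch under assumptions, not the class's actual algorithm) is to shorten each qualified class name to its last component and append a numeric suffix when two distinct classes collide:

```python
from collections import defaultdict

def short_unique_names(qualified_names):
    # shorten "pkg.Linear" to "Linear"; on a collision with a different
    # qualified name, append a counter so every short name stays unique
    seen = defaultdict(int)
    result = {}
    for qual in qualified_names:
        short = qual.rsplit(".", 1)[-1]
        n = seen[short]
        seen[short] += 1
        result[qual] = short if n == 0 else f"{short}_{n}"
    return result

print(short_unique_names(["pkg_a.Linear", "pkg_b.Linear", "pkg_a.ReLU"]))
# {'pkg_a.Linear': 'Linear', 'pkg_b.Linear': 'Linear_1', 'pkg_a.ReLU': 'ReLU'}
```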
- yobx.torch.interpreter.onnx_export.build_source_lines(model: torch.nn.Module) Dict[str, Tuple[str, Tuple[int, int]]][source]#
Extracts the source file and line numbers for the methods of the model and its submodules.
- Parameters:
model – model to investigate
- Returns:
source files and lines
- yobx.torch.interpreter.onnx_export.check_model_weights(model: torch.nn.Module, proto: ModelProto | ModelContainer) List[Tuple[str, str, Tuple[int, ...] | None]][source]#
After a model is exported to ONNX, checks that every initializer name in the ONNX model can be traced back to a parameter or buffer of the original PyTorch model. When a name is found but the shape differs only by a transposition (reversed dimension order), the initializer is reported as
"transposed"rather than"unknown".- Parameters:
model – original torch model
proto – exported ONNX model (ModelProto or ModelContainer); the check results are also written to proto.metadata_props under the key "check_model_weights" as a JSON string
- Returns:
list of 4-tuples (initializer_name, status, onnx_shape, original_shape) where status is one of:
"match" – name found in the original model and the shape is identical
"transposed" – name found in the original model but the ONNX shape is the reverse of the original shape (e.g. a weight that was folded with a Transpose node during optimization)
"unknown" – name not found among the original parameters or buffers at all
Example:
import torch
from yobx.torch.interpreter import to_onnx, check_model_weights


class Neuron(torch.nn.Module):
    def __init__(self, n_dims: int, n_targets: int):
        super().__init__()
        self.linear = torch.nn.Linear(n_dims, n_targets)

    def forward(self, x):
        return torch.relu(self.linear(x))


model = Neuron(5, 3)
x = torch.rand(2, 5)
onx = to_onnx(model, (x,))
issues = check_model_weights(model, onx)
for name, status, onnx_shape, orig_shape in issues:
    print(name, status, onnx_shape, orig_shape)
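The shape comparison behind the three statuses can be sketched in a few lines (a simplified re-implementation for illustration, not the function's actual code):

```python
from typing import Optional, Tuple

def weight_status(onnx_shape: Tuple[int, ...],
                  torch_shape: Optional[Tuple[int, ...]]) -> str:
    # "unknown": the initializer name has no counterpart in the torch model
    if torch_shape is None:
        return "unknown"
    # "match": same name, same shape
    if onnx_shape == torch_shape:
        return "match"
    # "transposed": same name, dimensions in reverse order
    # (e.g. a weight folded with a Transpose node during optimization)
    if tuple(reversed(onnx_shape)) == torch_shape:
        return "transposed"
    return "unknown"

print(weight_status((3, 5), (3, 5)))  # match
print(weight_status((5, 3), (3, 5)))  # transposed
print(weight_status((5, 3), None))    # unknown
```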
- yobx.torch.interpreter.onnx_export.get_default_aten_as_function(target_opset: int | None = None) Tuple[str][source]#
Returns the list of aten functions to export as local functions for the given opset. If the opset is not specified, it returns the minimal set of functions to keep as local functions.
<<<
import pprint
from yobx.torch.interpreter.onnx_export import (
    get_default_aten_as_function,
)

pprint.pprint(get_default_aten_as_function(23))
>>>
('aten.histc.default',
 'aten.index_copy.default',
 'aten.index_put.default',
 'aten._grouped_mm.default',
 'aten.setitem',
 <built-in function setitem>)
- yobx.torch.interpreter.onnx_export.is_wrapped(model: Any, dynamic_shapes: Any | None = None) bool[source]#
Tells if a model is wrapped.
- yobx.torch.interpreter.onnx_export.match_input_parameters(model: Any, names: List[str], args: Tuple[Any, ...] | None = None) Dict[str, Any][source]#
Maps the given names with the parameter names in the model.
- Parameters:
model – model
names – names to retrieve
args – available inputs
- Returns:
dictionary with values
Example:
<<<
import torch
from torch._subclasses.fake_tensor import FakeTensorMode
from yobx.reference import ExtendedReferenceEvaluator
from yobx.torch.interpreter import to_onnx, match_input_parameters


class Neuron(torch.nn.Module):
    def __init__(self, n_dims: int, n_targets: int):
        super(Neuron, self).__init__()
        self.linear = torch.nn.Linear(n_dims, n_targets)

    def forward(self, x):
        return torch.relu(self.linear(x))


fake_mode = FakeTensorMode()
converter = fake_mode.fake_tensor_converter
fake_x = converter.from_real_tensor(fake_mode, torch.rand(2, 5))

with fake_mode:
    model = Neuron(5, 3)
    onx = to_onnx(model, (fake_x,))

# expected values with a different model
not_fake_model = Neuron(5, 3)
x = torch.rand(2, 5)
expected = not_fake_model(x)
print(expected)

# converts the model, fills inputs with the weights
names = [i.name for i in onx.proto.graph.input]
pfeeds = match_input_parameters(not_fake_model, names, (x,))
nfeeds = {k: v.detach().numpy() for k, v in pfeeds.items()}
ref = ExtendedReferenceEvaluator(onx)
got = ref.run(None, nfeeds)
print(got)
>>>
tensor([[0.1718, 0.0000, 0.0000],
        [0.1048, 0.0000, 0.1768]], grad_fn=<ReluBackward0>)
[array([[0.172, 0.   , 0.   ],
        [0.105, 0.   , 0.177]], dtype=float32)]
- yobx.torch.interpreter.onnx_export.validate_exported_onnx(model: torch.nn.Module, args, kwargs, filename, atol: float = 1e-05, verbose: int = 0)[source]#
Validates the exported model with onnxruntime.
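The validation boils down to running the torch model and the exported ONNX model on the same inputs and comparing outputs within atol. The core tolerance check can be sketched without onnxruntime (an assumed behaviour for illustration, not the function's code):

```python
def within_atol(expected, got, atol=1e-5):
    # element-wise absolute difference must stay below the tolerance
    return max(abs(e - g) for e, g in zip(expected, got)) <= atol

expected = [0.1718, 0.0, 0.1048]   # e.g. flattened torch outputs
got = [0.171802, 0.0, 0.104798]    # e.g. flattened onnxruntime outputs
print(within_atol(expected, got))  # True
```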