.torch_interpreter.investigate_helper

experimental_experiment.torch_interpreter.investigate_helper.prepare_args_kwargs(torch_results: Dict[str, Any], node: Node) → Tuple[Tuple[Any, ...], Dict[str, Any]][source]

Prepares args and kwargs before executing an fx node.

Parameters:
  • torch_results – existing results

  • node – node to execute

Returns:

new args and kwargs
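
A minimal sketch of the intended usage, assuming torch_results maps node names to values already computed for earlier nodes; the model and the way the placeholder result is seeded are illustrative choices for the example, not part of the API:

import torch
from experimental_experiment.torch_interpreter.investigate_helper import (
    prepare_args_kwargs,
)


class Model(torch.nn.Module):
    def forward(self, x):
        return x.abs().exp()


x = torch.randn((3, 2))
ep = torch.export.export(Model(), (x,))

# Seed the results with the placeholder inputs (assumption: placeholders
# appear in the same order as the positional inputs).
torch_results = {}
inputs = iter((x,))
for node in ep.graph.nodes:
    if node.op == "placeholder":
        torch_results[node.name] = next(inputs)

# Resolve the arguments of the first call_function node: the Node references
# in node.args are expected to be replaced by the values stored in torch_results.
abs_node = next(n for n in ep.graph.nodes if n.op == "call_function")
args, kwargs = prepare_args_kwargs(torch_results, abs_node)
print(args, kwargs)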

experimental_experiment.torch_interpreter.investigate_helper.run_aligned(ep: ExportedProgram, onx: ModelProto | FunctionProto, args: Tuple[Tensor, ...], check_conversion_cls: Dict[str, Any] | type, kwargs: Dict[str, Any] | None = None, verbose: int = 0) → Iterator[Tuple[Any, ...]][source]

Runs the exported program and the onnx proto in parallel and looks for discrepancies. The function matches results by name, so it assumes the exported program and the onnx model use the same names for equivalent results.

Parameters:
  • ep – exported program

  • onx – model or function proto

  • args – input args

  • check_conversion_cls – defines the runtime to use for this task

  • kwargs – input kwargs

  • verbose – verbosity level

Returns:

an iterator of tuples containing the matched results and their discrepancies

Example:

<<<

import pprint
import pandas
import torch
from experimental_experiment.reference import (
    # This can be replaced by any runtime taking NodeProto as an input.
    ExtendedReferenceEvaluator as ReferenceEvaluator,
)
from experimental_experiment.torch_interpreter import to_onnx
from experimental_experiment.torch_interpreter.investigate_helper import run_aligned


class Model(torch.nn.Module):
    def forward(self, x):
        ry = x.abs()
        rz = ry.exp()
        rw = rz + 1
        ru = rw.log() + rw
        return ru


def post_process(obs):
    dobs = dict(zip(["ep_id_node", "onnx_id_node", "ep_name", "onnx_name"], obs))
    dobs["err_abs"] = obs[-1]["abs"]
    dobs["err_rel"] = obs[-1]["rel"]
    return dobs


x = torch.randn((5, 4))
Model()(x)  # to make sure the model runs
ep = torch.export.export(
    Model(), (x,), dynamic_shapes=({0: torch.export.Dim("batch")},)
)
onx = to_onnx(ep)
results = list(
    map(
        post_process,
        run_aligned(
            ep,
            onx,
            (x,),
            check_conversion_cls=dict(cls=ReferenceEvaluator, atol=1e-5, rtol=1e-5),
            verbose=1,
        ),
    ),
)
print("------------")
print("final results")
df = pandas.DataFrame(results)
print(df)

>>>

    [run_aligned] +torch-cst: _reshape_init1_s_0: T1s1[1.0,1.0:A1.0]
    [run_aligned] +torch-cst: p__reshape_init1_s_0: T1s1[1.0,1.0:A1.0]
    [run_aligned] +onnx-init: _reshape_init1_s_0: A1s1[1.0,1.0:A1.0]
    [run_aligned] +onnx-init: p__reshape_init1_s_0: A1s1[1.0,1.0:A1.0]
    [run_aligned] +onnx-input: x: T1s5x4[-2.107454299926758,1.9339978694915771:A-0.20542464833706617]
    [run_aligned] run ep.graph.nodes[0]: placeholder -> 'x'
    [run_aligned] +torch x=T1s5x4[-2.107454299926758,1.9339978694915771:A-0.20542464833706617]
    [run_aligned] run ep.graph.nodes[1]: call_function[aten.abs.default] -> 'abs_1'
    [run_aligned] +torch abs_1=T1s5x4[0.05657948926091194,2.107454299926758:A0.8859601126983762]
    [run_aligned] run onx.graph.node[0]: Abs(x) -> abs_1
    [run_aligned] +onnx abs_1=A1s5x4[0.05657948926091194,2.107454299926758:A0.8859601126983762]
    [run_aligned] =common results abs_1: abs=0.0, rel=0.0,amax=0,0
    [run_aligned] run ep.graph.nodes[2]: call_function[aten.exp.default] -> 'exp'
    [run_aligned] +torch exp=T1s5x4[1.0582107305526733,8.227270126342773:A3.088086408376694]
    [run_aligned] run onx.graph.node[1]: Exp(abs_1) -> exp
    [run_aligned] +onnx exp=A1s5x4[1.0582107305526733,8.227270126342773:A3.0880863964557648]
    [run_aligned] =common results exp: abs=4.76837158203125e-07, rel=1.672420544815129e-07, n=20.0,amax=1,0
    [run_aligned] run ep.graph.nodes[3]: call_function[aten.add.Tensor] -> 'add'
    [run_aligned] +torch add=T1s5x4[2.058210849761963,9.227270126342773:A4.088086438179016]
    [run_aligned] run onx.graph.node[2]: Add(exp, _reshape_init1_s_0) -> add
    [run_aligned] +onnx add=A1s5x4[2.058210849761963,9.227270126342773:A4.088086414337158]
    [run_aligned] =common results add: abs=9.5367431640625e-07, rel=1.1101327999203846e-07, n=20.0,amax=1,0
    [run_aligned] run ep.graph.nodes[4]: call_function[aten.log.default] -> 'log'
    [run_aligned] +torch log=T1s5x4[0.7218371033668518,2.222163200378418:A1.2751444399356842]
    [run_aligned] run ep.graph.nodes[5]: call_function[aten.add.Tensor] -> 'add_1'
    [run_aligned] +torch add_1=T1s5x4[2.78004789352417,11.449433326721191:A5.363230919837951]
    [run_aligned] run ep.graph.nodes[6]: output -> 'output'
    [run_aligned] +torch output=(T1s5x4[2.78004789352417,11.449433326721191:A5.363230919837951],)
    [run_aligned] run onx.graph.node[3]: Log(add) -> _onx_log_add0
    [run_aligned] +onnx _onx_log_add0=A1s5x4[0.7218371033668518,2.222163200378418:A1.2751444727182388]
    [run_aligned] run onx.graph.node[4]: Add(_onx_log_add0, add) -> output_0
    [run_aligned] +onnx output_0=A1s5x4[2.78004789352417,11.449433326721191:A5.363230895996094]
    [run_aligned] =common results* add_1/output_0: abs=9.5367431640625e-07, rel=1.5300586559369392e-07, n=20.0,amax=1,0
    ------------
    final results
       ep_id_node  onnx_id_node   ep_name onnx_name       err_abs       err_rel
    0           1             0     abs_1     abs_1  0.000000e+00  0.000000e+00
    1           2             1       exp       exp  4.768372e-07  1.672421e-07
    2           3             2       add       add  9.536743e-07  1.110133e-07
    3           6             4  output_0     add_1  9.536743e-07  1.530059e-07

experimental_experiment.torch_interpreter.investigate_helper.run_fx_node(node: Node, args: Tuple[Any, ...], kwargs: Dict[str, Any] | None = None) → Tuple[Any, ...][source]

Executes an fx node.

Parameters:
  • node – node to run

  • args – unnamed inputs to the node

  • kwargs – named inputs to the node

Returns:

results
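
The sketch below replays an exported graph node by node with prepare_args_kwargs, run_fx_node and validate_fx_outputs (documented below). The handling of placeholder and output nodes, and the way single-output results are stored, are assumptions made for the example rather than documented behaviour:

import torch
from experimental_experiment.torch_interpreter.investigate_helper import (
    prepare_args_kwargs,
    run_fx_node,
    validate_fx_outputs,
)


class Model(torch.nn.Module):
    def forward(self, x):
        return (x.abs().exp() + 1).log()


x = torch.randn((5, 4))
ep = torch.export.export(Model(), (x,))

torch_results = {}
inputs = iter((x,))
for node in ep.graph.nodes:
    if node.op == "placeholder":
        # assumption: placeholders come in the same order as the inputs
        torch_results[node.name] = next(inputs)
        continue
    if node.op == "output":
        break
    args, kwargs = prepare_args_kwargs(torch_results, node)
    outputs = run_fx_node(node, args, kwargs)
    validate_fx_outputs(node, outputs)
    # assumption: a single-output node is stored as the value itself
    torch_results[node.name] = outputs[0] if len(outputs) == 1 else outputs

print({name: tuple(t.shape) for name, t in torch_results.items()})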

experimental_experiment.torch_interpreter.investigate_helper.validate_fx_outputs(node: Node, outputs: Tuple[Any, ...]) → None[source]

Validates the outputs of a node using metadata stored in the node.

Parameters:
  • node – node

  • outputs – outputs

experimental_experiment.torch_interpreter.investigate_helper.validate_fx_tensor(node: Node, tensor: Tensor, expected_shape: Tuple[Any, ...]) → None[source]

Validates that the shape of the tensor matches the expected shape.

Parameters:
  • node – node

  • tensor – tensor

  • expected_shape – expected shape
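
A minimal usage sketch; taking the expected shape from the FakeTensor stored in the node metadata by torch.export is an assumption of this example, not a requirement of the function:

import torch
from experimental_experiment.torch_interpreter.investigate_helper import validate_fx_tensor


class Model(torch.nn.Module):
    def forward(self, x):
        return x.abs()


x = torch.randn((5, 4))
ep = torch.export.export(Model(), (x,))
abs_node = next(n for n in ep.graph.nodes if n.op == "call_function")

# Check a computed tensor against the shape recorded in the node metadata.
expected_shape = tuple(abs_node.meta["val"].shape)
validate_fx_tensor(abs_node, x.abs(), expected_shape)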