onnx_diagnostic.torch_export_patches.patches.patch_torch

onnx_diagnostic.torch_export_patches.patches.patch_torch.patch__check_input_constraints_for_graph(previous_function: Callable, input_placeholders: list[Node], flat_args_with_path, range_constraints, verbose: int = 0) → None[source]
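
No description is attached to this entry. As a rough illustration only, the snippet below sketches the generic wrapping pattern such a patch typically follows: the original torch helper is kept as the previous function and replaced by a thin wrapper that forwards to it, optionally logging when verbose is set. The target path torch._export.utils._check_input_constraints_for_graph and the wrapper body are assumptions, not the library's actual code.

```python
# Hypothetical sketch of the monkey-patching pattern; the real patch lives in
# onnx_diagnostic.torch_export_patches.patches.patch_torch and may differ.
import torch._export.utils as _export_utils

# keep a handle on the original helper (assumed location)
_previous_function = _export_utils._check_input_constraints_for_graph

def _wrapped_check_input_constraints_for_graph(*args, verbose: int = 0, **kwargs):
    if verbose:
        print("calling _check_input_constraints_for_graph")
    # forward to the original implementation
    return _previous_function(*args, **kwargs)

# install the wrapper (assumption: this is how the patch is applied)
_export_utils._check_input_constraints_for_graph = _wrapped_check_input_constraints_for_graph
```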
class onnx_diagnostic.torch_export_patches.patches.patch_torch.patched_ShapeEnv[source]
onnx_diagnostic.torch_export_patches.patches.patch_torch.patched__broadcast_shapes(*_shapes)[source]

Patches torch._refs._broadcast_shapes.
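
The patched helper computes the broadcast shape of several input shapes, the same semantics exposed by the public torch.broadcast_shapes(). The short example below only illustrates that semantics with the public API; it does not call the patched private helper.

```python
import torch

# Broadcasting semantics implemented by _broadcast_shapes:
# sizes are aligned from the right, dimensions of size 1 expand.
print(torch.broadcast_shapes((3, 1, 5), (4, 5)))  # torch.Size([3, 4, 5])
print(torch.broadcast_shapes((1,), (2, 3)))       # torch.Size([2, 3])
```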

onnx_diagnostic.torch_export_patches.patches.patch_torch.patched_infer_size(a, b)[source]

Patches torch._subclasses.fake_impls.infer_size.
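
infer_size determines the broadcast output size of two operand shapes, which is what fake tensors need to assign a shape to a pointwise result without real data. The hedged sketch below reproduces the same result through public operations only; the patched private helper is not called.

```python
import torch

a = torch.empty(2, 1, 4)
b = torch.empty(3, 1)
# The shape of a pointwise result is what infer_size computes from the two
# operand shapes: here (2, 1, 4) and (3, 1) broadcast to (2, 3, 4).
print((a + b).shape)  # torch.Size([2, 3, 4])
```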

onnx_diagnostic.torch_export_patches.patches.patch_torch.patched_vmap(func, in_dims=0, out_dims=0)[source]

Python implementation of torch.vmap(). The original implementation of torch.vmap() raises an issue when it is exported with torch.export.export() and the function is called with non-tensor arguments while the batch size is dynamic.
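
As a rough idea of what a "Python implementation" of vmap means, the sketch below maps a function over the batch dimension with an explicit loop and stacks the results. It only covers the simple case of a single tensor argument batched along one dimension and is not the library's actual implementation.

```python
import torch

def naive_vmap(func, in_dims=0, out_dims=0):
    # Minimal sketch: batch a single tensor argument along `in_dims`
    # with an explicit Python loop, then stack the results along `out_dims`.
    def wrapped(x: torch.Tensor) -> torch.Tensor:
        slices = [func(t) for t in x.unbind(dim=in_dims)]
        return torch.stack(slices, dim=out_dims)
    return wrapped

batched_sum = naive_vmap(lambda v: v.sum())
print(batched_sum(torch.arange(6.0).reshape(3, 2)))  # tensor([1., 5., 9.])
```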

onnx_diagnostic.torch_export_patches.patches.patch_torch.retrieve_stacktrace()[source]

Retrieves and prints the current stack trace, skipping every frame that comes from a torch file.
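
The snippet below is a hedged sketch of how such a filtered trace can be produced with the standard traceback module, dropping frames whose file path lies inside the torch package. It is an illustration of the idea, not the function's actual implementation.

```python
import os
import traceback

def print_stack_without_torch():
    # Illustration only: drop every frame coming from a file inside torch.
    for frame in traceback.extract_stack()[:-1]:
        if os.sep + "torch" + os.sep in frame.filename:
            continue
        print(f"{frame.filename}:{frame.lineno} in {frame.name}")
        if frame.line:
            print(f"    {frame.line}")

print_stack_without_torch()
```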