experimental_experiment.torch_interpreter.piece_by_piece

class experimental_experiment.torch_interpreter.piece_by_piece.CustomOpStrategy(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)[source]

Defines when to switch to a custom op in order to check whether the module exports successfully without any of its children.

  • NONE: tries to export the module

  • ONLY_IF_FAILING: looks into the submodules only if the export fails

  • ALWAYS: always exports as a custom op

  • LOCAL: exports every submodule as a custom op, then tries the conversion of the module itself once that is done
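
As a hedged sketch, a per-submodule selection can be expressed as a dictionary keyed by submodule or class name, which ModelDiagnoseOutput.try_export() accepts through its replace_by_custom_op argument; the class names below are made up.

from experimental_experiment.torch_interpreter.piece_by_piece import CustomOpStrategy

# Hypothetical mapping: always bypass attention blocks, bypass MLP blocks only
# when their own export fails, and export everything else normally.
strategies = {
    "Attention": CustomOpStrategy.ALWAYS,
    "MLP": CustomOpStrategy.ONLY_IF_FAILING,
}
# strategies would then be passed as
# diag.try_export(replace_by_custom_op=strategies, ...).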

class experimental_experiment.torch_interpreter.piece_by_piece.ModelDiagnoseOutput(parent: ModelDiagnoseOutput | None, name: str, model: Module, level: int = 0, method_name: str = 'forward')[source]

Contains the inputs and outputs traced when tracing intermediate results. An instance of this class is produced by trace_execution_piece_by_piece(). The example to_onnx, failures Phi-3.5-mini-instruct tells you more about how to use this class.

  • parent: parent owning this instance

  • name: module name

  • model: module

  • level: depth level

  • device: device

  • inputs: stored inputs, as pairs (args, kwargs)

  • outputs: stored outputs

  • signature: signature of the module or function

The default method spied on is forward but it can be changed. After the tracing:

  • inputs: traced inputs

  • outputs: traced outputs

Attributes added to store the export results:

  • forward: forward method of the module

  • forward_parameter_names

  • forward_ordered_parameter_names

  • forward_args

  • forward_kwargs

  • forward_custom_op_schema

  • forward_need_serialization

Results from the last status:

  • exporter: exporter name

  • last_error: last error

  • exporter_status: last exporter status

  • setattr(self, exporter, exported): whatever is exported

Debugging options:

self._debug_noquiet_name = os.environ.get("DIAGNAME", "")
self._debug_print_status = os.environ.get("DIAGPRINTSTATUS", "")
self._debug_print_export = os.environ.get("DIAGPRINTEXPORT", "")
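
Because the values are read from the environment, they can be set before tracing. A minimal sketch with a made-up module name; any non-empty string is assumed to enable the option.

import os

# Focus diagnostics on a single submodule and stop silencing its exception.
os.environ["DIAGNAME"] = "model.decoder.layers.0"   # hypothetical module name
# Print status and export information while the exploration runs.
os.environ["DIAGPRINTSTATUS"] = "1"
os.environ["DIAGPRINTEXPORT"] = "1"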

The class can be improved:

  • It cannot always infer how to produce outputs with the expected dynamic dimensions based on the input ones.

  • Custom ops do not work well yet with forward methods using *args or **kwargs in their signature. It is better to keep them empty.
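
A minimal end-to-end sketch of how an instance is obtained and queried, assuming a toy module and made-up shapes; only entry points documented on this page are used.

import torch
from experimental_experiment.torch_interpreter.piece_by_piece import (
    trace_execution_piece_by_piece,
)

class Toy(torch.nn.Module):  # hypothetical model
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(8, 4)

    def forward(self, x):
        return torch.sigmoid(self.linear(x))

# Two input sets with different batch sizes, stored as (args, kwargs) pairs.
inputs = [((torch.randn(2, 8),), {}), ((torch.randn(5, 8),), {})]
diag = trace_execution_piece_by_piece(Toy(), inputs, verbose=1)

print(diag.inputs)    # traced inputs
print(diag.outputs)   # traced outputs

status = diag.try_export(exporter="fx", verbose=1)
print(status.is_ok())
print(diag.get_export_report())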

add_child(diag: ModelDiagnoseOutput)[source]

Adds a submodule.

add_inputs(args: Tuple[Any, ...], kwargs: Dict[str, Any])[source]

Stores used inputs. Makes a copy.

add_outputs(args: Tuple[Any, ...])[source]

Stores returned outputs. Makes a copy.

build_c_schema(verbose: int = 0) str[source]

Returns a schema for the C function.

build_shape_mapping_indices(shape_functions: Dict[str, Callable] | None = None, verbose: int = 0) List[Tuple[int | Tuple[int, ...], dtype, Callable | None]][source]

Builds a mapping between output and input shapes so that a function returning dynamic shapes can be automatically inferred.

The main idea: knowing that everything is going to be serialized, inputs and outputs alike, we try to match the output shapes with the input ones.

It returns for every output:

  • a list of indices of the inputs to consider

  • an element type

  • if the output shape does not match any input shape, a function which can automatically create it

property custom_op_name

Returns a name and class name.

determine_shape_fct(output_index: int, flattened_inputs: List[Tuple[Tuple[Any, ...], Dict[str, Any]]], flattened_outputs: List[Tuple[Any, ...]], verbose: int = 0, shape_functions: Dict[str, Callable] | None = None) Callable[source]

Determines a function producing an output shape based on the inputs.

property dot_name

Returns a kind of indented name.

property full_name

Returns a name and class name.

get_debug_msg() str[source]

Returns information about this instance to help debugging.

get_export_report(exported_program: bool = False, fx: bool = False) str[source]

Returns a report status on the conversion.

Parameters:
  • exported_program – adds the exported program if available

  • fx – display the graph instead of the exported program

Returns:

string

guess_dynamic_dimensions(*tensors) Any[source]

Infers the dynamic dimensions from multiple shapes.

guess_dynamic_shape_object(*objs: Any, msg: Callable | None = None) Any[source]

Guesses the dynamic shapes for one argument.

guess_dynamic_shapes() Any[source]

Guesses the dynamic shapes for that module from two executions. If there is only one execution, the dimensions are considered static.
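
A small sketch, assuming the module was traced with two executions whose batch dimension differs, so that dimension can be recognized as dynamic; the module and shapes are made up.

import torch
from experimental_experiment.torch_interpreter.piece_by_piece import (
    trace_execution_piece_by_piece,
)

class Add(torch.nn.Module):  # hypothetical module
    def forward(self, x, bias=None):
        return x + (bias if bias is not None else 1.0)

# Two executions: the batch dimension varies, the feature dimension does not.
inputs = [
    ((torch.randn(2, 4),), {"bias": torch.randn(2, 4)}),
    ((torch.randn(7, 4),), {"bias": torch.randn(7, 4)}),
]
diag = trace_execution_piece_by_piece(Add(), inputs)
print(diag.guess_dynamic_shapes())
# With a single execution, every dimension would come back as static.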

is_customized()[source]

Tells if this module was bypassed.

property module_name_type

Returns name and module type.

pretty_text(with_dynamic_shape: bool = False, with_shape: bool = True, with_min_max: bool = True, with_device: bool = True, with_inputs: bool = True) str[source]

Renders the outputs.

Parameters:
  • with_dynamic_shape – displays the dynamic shapes

  • with_shape – displays the shapes

  • with_min_max – displays the minimum and maximum values

  • with_device – displays the device

  • with_inputs – displays the inputs

Returns:

text

put_custom_op_inplace(shape_functions: Dict[str, Callable] | None = None, verbose: int = 0)[source]

Replaces the submodule by a custom operator. It rewrites the forward method to call a function instead.

remove_custom_op_inplace(verbose: int = 0)[source]

Just puts the original forward method back, hoping the custom op registration does not have to be removed.
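
A hedged sketch of how the two calls pair up, assuming diag is a ModelDiagnoseOutput traced for a submodule that resists export; everything except the two documented methods is illustrative.

# diag is assumed to come from trace_execution_piece_by_piece (see above).
diag.put_custom_op_inplace(verbose=1)   # forward now routes through a custom op
try:
    ...  # export the parent model here: this submodule is treated as an opaque op
finally:
    diag.remove_custom_op_inplace(verbose=1)   # put the original forward back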

try_export(exporter: str = 'fx', exporter_kwargs: Dict[str, Any] | None = None, verbose: int = 0, quiet: bool = True, discrepancies: bool = True, use_dynamic_shapes: bool | None = None, replace_by_custom_op: bool | CustomOpStrategy | Dict[str, CustomOpStrategy] = CustomOpStrategy.NONE, atol: float = 0.01, rtol: float = 0.1, shape_functions: Dict[str, Callable] | None = None) StatusExport[source]

Tries to export a model. If not possible, tries every child until it is possible. The function stores the export and other results in the class itself, in attributes prefixed by forward_.

Parameters:
  • exporter – export backend: ‘fx’ calls torch.export.export(), ‘onnx_dynamo’ calls torch.onnx.export(..., dynamo=True), ‘torch_script’ calls torch.onnx.export(..., dynamo=False), ‘to_onnx’ calls experimental_experiment.torch_interpreter.to_onnx().

  • exporter_kwargs – argument for the export function

  • verbose – verbosity, to see what the function is doing

  • discrepancies – run the exported model to measure the discrepancies

  • quiet – do not catch the first exception

  • use_dynamic_shapes – use dynamic shapes

  • replace_by_custom_op – before exporting, replaces submodules by custom ops; it can be a boolean to replace all of them, a single CustomOpStrategy, or a dictionary mapping submodule or class names to a strategy

  • atol – absolute tolerance

  • rtol – relative tolerance

  • shape_functions – dictionary of functions computing the shape of an output; the expected signature is fct(_output_index: int, *args, **kwargs) -> Optional[Any]. If it returns None, the shape is automatically computed. The key of the dictionary is a class name, the class of the submodule handled by this function (see the sketch below).

Returns:

result of the export function

See to_onnx, failures Phi-3.5-mini-instruct for an example. Environment variable DIAGNAME=<name> can be set to increase the verbosity on a particular op and avoid catching the exception if any.
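
A hedged sketch combining a custom op strategy with a shape function; the classes, shapes and shape logic are made up, only the documented arguments of try_export() are relied on.

import torch
from experimental_experiment.torch_interpreter.piece_by_piece import (
    CustomOpStrategy,
    trace_execution_piece_by_piece,
)

class Head(torch.nn.Module):  # hypothetical submodule
    def forward(self, x):
        return x.mean(dim=-1)

class Wrapper(torch.nn.Module):  # hypothetical parent module
    def __init__(self):
        super().__init__()
        self.head = Head()

    def forward(self, x):
        return self.head(torch.relu(x))

def head_shape(_output_index, *args, **kwargs):
    # The output of Head drops the last dimension of its first input;
    # returning None would ask the library to compute the shape automatically.
    return args[0].shape[:-1]

inputs = [((torch.randn(2, 8),), {}), ((torch.randn(3, 8),), {})]
diag = trace_execution_piece_by_piece(Wrapper(), inputs)

status = diag.try_export(
    exporter="fx",
    replace_by_custom_op=CustomOpStrategy.ONLY_IF_FAILING,
    shape_functions={"Head": head_shape},   # keyed by the submodule class name
    atol=1e-2,
    rtol=1e-1,
    quiet=True,
)
print(status.is_ok(), status.short_reason)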

verifies(verbose: int = 0)[source]

Does some verifications. Raises an exception if it fails.

class experimental_experiment.torch_interpreter.piece_by_piece.StatusExport(status: StatusExportCode, step: str = '', reason: str = '', exported: Any | None = None)[source]

Defines the exporter status.

Parameters:
  • status – exporter status

  • step – step at which it fails

  • reason – details about the failure

  • exported – whatever is exported

is_ok() bool[source]

Returns True if the export succeeds.

pretty_text() str[source]

Returns a pretty text rendering of the status.

property short_reason: str

Shortened reason

class experimental_experiment.torch_interpreter.piece_by_piece.StatusExportCode(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)[source]

Defines the exporter status.

  • FAIL: exporter has failed

  • OK: export succeeds with all the submodules included

  • CHILDC: export succeeds with the submodules replaced by custom ops

  • CUSTOM: export succeeds with this module replaced by a custom op

  • DISC: fails due to discrepancy

These options can be combined.
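
A small sketch of what combining the codes could look like, assuming the enum behaves like a standard enum.Flag, which the ability to combine options suggests; the combination below is made up.

from experimental_experiment.torch_interpreter.piece_by_piece import StatusExportCode

# Hypothetical status: the export succeeded, but only after replacing the
# children by custom ops.
code = StatusExportCode.OK | StatusExportCode.CHILDC
print(StatusExportCode.OK in code)           # True under the Flag assumption
print(code.remove(StatusExportCode.CHILDC))  # documented method, drops the flag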

remove(a: StatusExportCode) StatusExportCode[source]

Composes: removes flag a from the status.

experimental_experiment.torch_interpreter.piece_by_piece.trace_execution_piece_by_piece(model: Module, inputs: List[Tuple[Tuple[Any, ...], Dict[str, Any]]], verbose: int = 0, traced_method: Dict[type[Module] | str, str] | None = None) ModelDiagnoseOutput[source]

Runs a model, traces the intermediate outputs, and infers dynamic shapes based on them.

Parameters:
  • model – model

  • inputs – list of input sets [(args, kwargs), (args, kwargs), ...] with different shapes (at least for the dynamic dimensions)

  • verbose – verbosity

  • traced_method – by default, the traced method is forward, but another one can be chosen per module class; if the traced method is an empty string, the module is not traced at all

Returns:

see ModelDiagnoseOutput

See to_onnx, failures Phi-3.5-mini-instruct for an example.
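
A hedged sketch of the traced_method argument, with made-up classes; an empty method name skips tracing for that class entirely.

import torch
from experimental_experiment.torch_interpreter.piece_by_piece import (
    trace_execution_piece_by_piece,
)

class Inner(torch.nn.Module):  # hypothetical submodule we do not want to trace
    def forward(self, x):
        return torch.relu(x)

class Outer(torch.nn.Module):  # hypothetical parent
    def __init__(self):
        super().__init__()
        self.inner = Inner()

    def forward(self, x):
        return self.inner(x) * 2

inputs = [((torch.randn(2, 4),), {}), ((torch.randn(3, 4),), {})]
diag = trace_execution_piece_by_piece(
    Outer(),
    inputs,
    traced_method={"Inner": ""},  # empty string: Inner is not traced at all
)
print(diag.pretty_text())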

experimental_experiment.torch_interpreter.piece_by_piece.trace_forward_execution(model: Module, verbose: int = 0, traced_method: Dict[type[Module] | str, str] | None = None) ModelDiagnoseOutput[source]

Replaces every forward method to store the inputs and outputs of the module and of every submodule. See to_onnx, failures Phi-3.5-mini-instruct for an example.