experimental_experiment.torch_interpreter.piece_by_piece¶
- class experimental_experiment.torch_interpreter.piece_by_piece.CustomOpStrategy(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)[source]¶
Defines when to switch to a custom op to see if the module exports successfully without its children.
NONE: tries to export the module
ONLY_IF_FAILING: looks into submodules only if the export fails
ALWAYS: always exports as a custom op
LOCAL: exports all submodules as custom ops and tries the conversion of the module itself after that is done
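For instance, try_export() (documented below) accepts one of these values, or a dictionary mapping submodule class names to strategies; a minimal sketch (the class names in the dictionary are purely illustrative):
from experimental_experiment.torch_interpreter.piece_by_piece import CustomOpStrategy

# One strategy applied everywhere: only fall back to custom ops on failure.
strategy = CustomOpStrategy.ONLY_IF_FAILING

# Or a per-class mapping, keyed by submodule class name (illustrative names).
per_class = {"Attention": CustomOpStrategy.ALWAYS, "MLP": CustomOpStrategy.NONE}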
- class experimental_experiment.torch_interpreter.piece_by_piece.ModelDiagnoseOutput(parent: ModelDiagnoseOutput | None, name: str, model: Module, level: int = 0, method_name: str = 'forward')[source]¶
Contains inputs and outputs, and traced results when tracing intermediate results. An instance of this class is produced by trace_execution_piece_by_piece(). Example l-plot-exporter-recipes-custom-phi35 tells you more about how to use this class.
parent: parent owning this instance
name: module name
model: module
level: depth level
device: device
inputs: stored inputs, like (args, kwargs)
outputs: stored outputs
signature: signature of the module or function
The default method spied on is forward but it can be changed. After the tracing:
inputs: traced inputs
outputs: traced outputs
Attributes added to store the export results:
forward: forward method of the module
forward_parameter_names
forward_ordered_parameter_names
forward_args
forward_kwargs
forward_custom_op_schema
forward_need_serialization
Results from the last status:
exporter: exporter name
last_error: last error
exporter_status: last exporter status
setattr(self, exporter, exported): whatever is exported
Debugging options:
self._debug_noquiet_name = os.environ.get("DIAGNAME", "")
self._debug_print_status = os.environ.get("DIAGPRINTSTATUS", "")
self._debug_print_export = os.environ.get("DIAGPRINTEXPORT", "")
The class can be improved:
It cannot infer in all cases how to produce outputs with the expected dynamic dimensions based on the input ones.
Custom ops do not work well yet with a forward method using *args or **kwargs in its signature. It is better to keep them empty.
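A minimal end-to-end sketch, assuming a toy module: it produces an instance of this class with trace_execution_piece_by_piece() (documented below) and inspects the traced attributes.
import torch
from experimental_experiment.torch_interpreter.piece_by_piece import (
    trace_execution_piece_by_piece,
)

class Sub(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x)

class Toy(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.sub = Sub()

    def forward(self, x):
        return self.sub(x) + 1

# Two input sets so that dynamic dimensions can be inferred later.
inputs = [((torch.randn(2, 4),), {}), ((torch.randn(3, 4),), {})]
diag = trace_execution_piece_by_piece(Toy(), inputs, verbose=1)

print(diag.full_name)      # name and class name
print(diag.pretty_text())  # renders the traced inputs and outputs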
- add_child(diag: ModelDiagnoseOutput)[source]¶
Adds a submodule.
- add_inputs(args: Tuple[Any, ...], kwargs: Dict[str, Any])[source]¶
Stores used inputs. Makes a copy.
- build_shape_mapping_indices(shape_functions: Dict[str, Callable] | None = None, verbose: int = 0) List[Tuple[int | Tuple[int, ...], dtype, Callable | None]] [source]¶
Builds a mapping between output and input shapes so that a function returning the dynamic shapes can be automatically inferred.
The main idea: since everything is going to be serialized, inputs and outputs included, we try to match the output shapes with the input ones.
It returns for every output:
a list of indices of inputs to consider
an element type
if the output shape is not one of the input shapes, a function which can automatically create it
- compute_onnx_discrepancies(onx: FunctionProto | ModelProto, check_conversion_cls: Dict[str, Any] | type, verbose: int = 0) List[Dict[str, Any]] [source]¶
Computes the discrepancies by using the intermediate inputs and outputs.
- Parameters:
onx – proto
check_conversion_cls – class to use to compute the discrepancies; it should follow the same API as onnx.reference.ReferenceEvaluator, and it can also be a dictionary specifying the atol and rtol to use after it runs
verbose – verbosity
- Returns:
discrepancies for each set of inputs
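A hedged sketch: the ONNX model produced by to_onnx_local() (documented below) can be checked against the traced inputs with onnx.reference.ReferenceEvaluator.
import torch
from onnx.reference import ReferenceEvaluator
from experimental_experiment.torch_interpreter.piece_by_piece import (
    trace_execution_piece_by_piece,
)

class Toy(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x) + 1

inputs = [((torch.randn(2, 4),), {}), ((torch.randn(3, 4),), {})]
diag = trace_execution_piece_by_piece(Toy(), inputs)
onx = diag.to_onnx_local()

# One dictionary of discrepancy metrics is returned per stored input set.
for d in diag.compute_onnx_discrepancies(onx, ReferenceEvaluator, verbose=1):
    print(d)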
- property custom_op_name¶
Returns a name and class name.
- determine_shape_fct(output_index: int, flattened_inputs: List[Tuple[Tuple[Any, ...], Dict[str, Any]]], flattened_outputs: List[Tuple[Any, ...]], verbose: int = 0, shape_functions: Dict[str, Callable] | None = None) Callable [source]¶
Determines a function producing an output shape based on the inputs.
- property dot_name¶
Returns a kind of indented name.
- draft_export_local(use_dynamic_shapes: bool | None = None, exporter_kwargs: Dict[str, Any] | None = None, verbose: int = 0, shape_functions: Dict[str, Callable] | None = None)[source]¶
Draft-exports the module with every submodule converted into a custom op. This can be used to understand where the conversion is failing.
- Parameters:
exporter_kwargs – arguments for the export function
verbose – verbosity, to see what the function is doing
use_dynamic_shapes – use dynamic shapes
shape_functions – dictionary of functions computing the shape of an output; the expected signature is fct(_output_index: int, *args, **kwargs) -> Optional[Any]. If it returns None, the shape is automatically computed. The key of the dictionary is a class name, the class of the submodule to handle with this function.
- Returns:
result of the export function
- export_local(use_dynamic_shapes: bool | None = None, exporter_kwargs: Dict[str, Any] | None = None, verbose: int = 0, shape_functions: Dict[str, Callable] | None = None)[source]¶
Exports the module with every submodule converted into a custom op.
- Parameters:
exporter_kwargs – arguments for the export function
verbose – verbosity, to see what the function is doing
use_dynamic_shapes – use dynamic shapes
shape_functions – dictionary of functions computing the shape of an output; the expected signature is fct(_output_index: int, *args, **kwargs) -> Optional[Any]. If it returns None, the shape is automatically computed. The key of the dictionary is a class name, the class of the submodule to handle with this function.
- Returns:
result of the export function
- property full_name¶
Returns a name and class name.
- get_export_report(exported_program: bool = False, fx: bool = False) str [source]¶
Returns a report status on the conversion.
- Parameters:
exported_program – adds the exported program if available
fx – display the graph instead of the exported program
- Returns:
string
- guess_dynamic_shape_object(*objs: Any, msg: Callable | None = None) Any [source]¶
Guesses the dynamic shapes for one argument.
- guess_dynamic_shapes() Any [source]¶
Guesses the dynamic shapes for that module from two executions. If there is only one execution, the dimensions are static.
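For instance, tracing the same module with two different batch sizes should let the first dimension be guessed as dynamic (a sketch under the same assumptions as above):
import torch
from experimental_experiment.torch_interpreter.piece_by_piece import (
    trace_execution_piece_by_piece,
)

class Toy(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x) + 1

# Two executions with different first dimensions...
inputs = [((torch.randn(2, 4),), {}), ((torch.randn(3, 4),), {})]
diag = trace_execution_piece_by_piece(Toy(), inputs)

# ...so the first dimension should come back as a dynamic dimension.
print(diag.guess_dynamic_shapes())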
- property module_name_type¶
Returns name and module type.
- pretty_text(with_dynamic_shape: bool = False, with_shape: bool = True, with_min_max: bool = True, with_device: bool = True, with_inputs: bool = True) str [source]¶
Renders the outputs.
- Parameters:
with_dynamic_shape – show dynamic shapes
with_shape – see experimental_experiment.helpers.string_type()
with_min_max – see experimental_experiment.helpers.string_type()
with_device – see experimental_experiment.helpers.string_type()
with_inputs – show input and output shapes
- Returns:
text
- put_custom_op_inplace(shape_functions: Dict[str, Callable] | None = None, verbose: int = 0)[source]¶
Replaces the submodule by a custom operator; it rewrites the forward method to call a function.
- remove_custom_op_inplace(verbose: int = 0)[source]¶
Just restores the original forward method, assuming the custom op registration does not have to be removed.
- to_onnx_local(target_opset: Dict[str, int] | int | None = None, as_function: bool = False, options: OptimizationOptions | None = None, optimize: bool = True, filename: str | None = None, inline: bool = False, input_names: Sequence[str] | None = None, output_names: List[str] | None = None, large_model: bool = False, verbose: int = 0, return_builder: bool = False, raise_list: Set[str] | None = None, external_threshold: int = 1024, return_optimize_report: bool = False, function_options: FunctionOptions | None = None, dispatcher: Dispatcher | None = None, output_dynamic_shapes: Dict[str, Any] | Tuple[Any] | None = None, export_options: str | ExportOptions | None = None, check_conversion_cls: Dict[str, Any] | type | None = None)[source]¶
Exports into ONNX with submodule as local functions.
- Parameters:
input_names – input names
target_opset – targeted opset or targeted opsets as a dictionary
as_function – export as a ModelProto or a FunctionProto
options – optimization options
verbose – verbosity level
return_builder – returns the builder as well
raise_list – the builder stops any time a name falls into that list; this is a debugging tool
optimize – optimize the model before exporting into onnx
large_model – if True, returns a onnx.model_container.ModelContainer, which lets the user decide later whether the weights should be part of the model or saved as external weights
external_threshold – if large_model is True, every tensor above this limit is stored as external data
return_optimize_report – returns statistics on the optimization as well
filename – if specified, stores the model into that file
inline – inline the model before converting to onnx, this is done before any optimization takes place
export_options – options to apply before getting the exported program
function_options – to specify what to do with the initializers in local functions, add them as constants or inputs
dispatcher – see experimental_experiment.torch_interpreter.Dispatcher
output_names – to rename the output names
output_dynamic_shapes – same as dynamic_shapes but for the output
check_conversion_cls – a runtime with the same API as onnx.reference.ReferenceEvaluator that can be used to check that the onnx models produce the same outputs; it can also be a dictionary specifying the atol and rtol to use after it runs
- Returns:
onnx model
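A minimal conversion sketch relying on the documented defaults (whether try_export() must be called first is not specified here, so treat the call order as an assumption):
import torch
from experimental_experiment.torch_interpreter.piece_by_piece import (
    trace_execution_piece_by_piece,
)

class Sub(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x)

class Toy(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.sub = Sub()

    def forward(self, x):
        return self.sub(x) + 1

inputs = [((torch.randn(2, 4),), {}), ((torch.randn(3, 4),), {})]
diag = trace_execution_piece_by_piece(Toy(), inputs)

# Submodules become local ONNX functions; the model is also written to disk.
onx = diag.to_onnx_local(filename="toy.onnx", verbose=1)
print(type(onx))  # onnx.ModelProto unless large_model=True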
- property true_model_name¶
Returns class name or module name.
- try_export(exporter: str = 'fx', exporter_kwargs: Dict[str, Any] | None = None, verbose: int = 0, quiet: bool = True, discrepancies: bool = True, use_dynamic_shapes: bool | None = None, replace_by_custom_op: bool | CustomOpStrategy | Dict[str, CustomOpStrategy] = CustomOpStrategy.NONE, atol: float = 0.01, rtol: float = 0.1, shape_functions: Dict[str, Callable] | None = None) StatusExport [source]¶
Tries to export a model. If not possible, tries every child until it is possible. The function stores the export and other results in the class itself, in attributes prefixed by forward_.
- Parameters:
exporter – export method: ‘fx’ for torch.export.export(), ‘onnx_dynamo’ to call torch.onnx.export(..., dynamo=True), ‘torch_script’ to call torch.onnx.export(..., dynamo=False), ‘to_onnx’ to call experimental_experiment.torch_interpreter.to_onnx()
exporter_kwargs – arguments for the export function
verbose – verbosity, to see what the function is doing
discrepancies – run the exported model to measure the discrepancies
quiet – if False, does not catch the first exception
use_dynamic_shapes – use dynamic shapes
replace_by_custom_op – before exporting, replaces submodules by custom ops; it can be a boolean to replace all of them, or selected classes (by name or type), or names
atol – absolute tolerance
rtol – relative tolerance
shape_functions – dictionary of functions computing the shape of an output; the expected signature is fct(_output_index: int, *args, **kwargs) -> Optional[Any]. If it returns None, the shape is automatically computed. The key of the dictionary is a class name, the class of the submodule to handle with this function.
- Returns:
result of the export function
See l-plot-exporter-recipes-custom-phi35 for an example. Environment variable DIAGNAME=<name> can be set to increase the verbosity on a particular op and avoid catching the exception if any.
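A hedged sketch combining try_export() with the DIAGNAME environment variable and the report helpers described above (the submodule name set in DIAGNAME is illustrative):
import os
import torch
from experimental_experiment.torch_interpreter.piece_by_piece import (
    CustomOpStrategy,
    trace_execution_piece_by_piece,
)

class Sub(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x)

class Toy(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.sub = Sub()

    def forward(self, x):
        return self.sub(x) + 1

# Increase verbosity on one particular submodule (illustrative name).
os.environ["DIAGNAME"] = "sub"

inputs = [((torch.randn(2, 4),), {}), ((torch.randn(3, 4),), {})]
diag = trace_execution_piece_by_piece(Toy(), inputs)

status = diag.try_export(
    exporter="fx",
    use_dynamic_shapes=True,
    replace_by_custom_op=CustomOpStrategy.ONLY_IF_FAILING,
)
print(status.status)             # a StatusExportCode
print(diag.get_export_report())  # conversion status per module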
- class experimental_experiment.torch_interpreter.piece_by_piece.StatusExport(status: StatusExportCode, step: str = '', reason: str = '', exported: Any | None = None)[source]¶
Defines the exporter status.
- Parameters:
status – exporter status
step – step at which it fails
reason – details about the failure
exported – whatever is exported
- class experimental_experiment.torch_interpreter.piece_by_piece.StatusExportCode(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)[source]¶
Defines the exporter status.
FAIL: exporter has failed
OK: export succeeds with all the submodules included
CHILDC: export succeeds with submodules replaced by custom ops
CUSTOM: export succeeds with this module replaced by a custom op
DISC: fails due to discrepancies
These options can be combined.
- remove(a: StatusExportCode) StatusExportCode [source]¶
Composes statuses by removing flag a from the current status.
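Assuming flag semantics (the values can be combined, as stated above), a short sketch:
from experimental_experiment.torch_interpreter.piece_by_piece import StatusExportCode

# OK combined with CHILDC: the export succeeded once the children were
# replaced by custom ops.
status = StatusExportCode.OK | StatusExportCode.CHILDC
print(bool(status & StatusExportCode.CHILDC))  # True
print(status.remove(StatusExportCode.CHILDC))  # back to OK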
- experimental_experiment.torch_interpreter.piece_by_piece.trace_execution_piece_by_piece(model: Module, inputs: List[Tuple[Tuple[Any, ...], Dict[str, Any]]], verbose: int = 0, traced_method: Dict[type[Module] | str, str] | None = None, trace_functions: bool = False, black_list_functions: Set[str] | None = None) ModelDiagnoseOutput [source]¶
Runs a model, traces the intermediate outputs, and infers dynamic shapes based on them.
- Parameters:
model – model
inputs – list of input sets [(args, kwargs), (args, kwargs), ...] with different shapes (at least for the dynamic dimensions)
verbose – verbosity
traced_method – by default the traced method of a class is forward, but another one can be traced; if the traced method is empty, the module is not traced at all
trace_functions – traces not only submodules but also functions called by the traced method or function, or inside those methods or functions
black_list_functions – if trace_functions is true, this option can be used to avoid tracing some functions
- Returns:
a ModelDiagnoseOutput instance
See l-plot-exporter-recipes-custom-phi35 for an example.
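A sketch of the tracing options: trace_functions extends the tracing to plain functions called by forward, and black_list_functions excludes some of them (helper and the excluded name are illustrative):
import torch
from experimental_experiment.torch_interpreter.piece_by_piece import (
    trace_execution_piece_by_piece,
)

def helper(x):
    return torch.relu(x)

class Toy(torch.nn.Module):
    def forward(self, x):
        return helper(x) + 1

inputs = [((torch.randn(2, 4),), {}), ((torch.randn(3, 4),), {})]

# Also trace helper(), not only submodules.
diag = trace_execution_piece_by_piece(
    Toy(),
    inputs,
    trace_functions=True,
    black_list_functions={"some_noisy_function"},  # illustrative exclusion
)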
- experimental_experiment.torch_interpreter.piece_by_piece.trace_forward_execution(model: Module, verbose: int = 0, traced_method: Dict[type[Module] | str, str] | None = None, trace_functions: bool = False, black_list_functions: Set[str] | None = None) ModelDiagnoseOutput [source]¶
Replaces every forward method to store the inputs and outputs of the module and every submodule. See l-plot-exporter-recipes-custom-phi35 for an example. torch.cond() is replaced by traced_cond() when tracing, otherwise no branch receives any input.
- experimental_experiment.torch_interpreter.piece_by_piece.traced_cond(pred: bool | int | float | Tensor, true_fn: Callable, false_fn: Callable, operands: tuple | list = ()) Any [source]¶
torch.cond() relies on torch.compile() and this does not work well with tracing, so before tracing, the function is replaced by another one. Every piece of code such as print must be avoided while the code is being compiled, by guarding it with if not torch.compiler.is_compiling(): .... See torch.compiler.is_compiling().
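The guard mentioned above looks like this; a generic sketch assuming a recent PyTorch where torch.cond() and torch.compiler.is_compiling() are available:
import torch

def true_fn(x):
    return x + 1

def false_fn(x):
    return x - 1

class WithBranch(torch.nn.Module):
    def forward(self, x):
        if not torch.compiler.is_compiling():
            # Safe: only runs in eager mode, never while compiling/tracing.
            print("eager call:", tuple(x.shape))
        return torch.cond(x.sum() > 0, true_fn, false_fn, (x,))

print(WithBranch()(torch.randn(2, 3)))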