.torch_dynamo.compiled_model

experimental_experiment.torch_dynamo.compiled_model.create_compiled_model(model: Any, backend: str, target_opset: int, use_dynamic: bool = False, verbose: int = 0, enable_pattern: str | List[str] = 'default', disable_pattern: str | List[str] | None = None, return_storage: bool = False, rename_inputs: bool = True, dump_prefix: str | None = None, dump_patterns: str | None = None, optimize: bool = True, ort_optimize: bool = True, use_fused_aten_ops: bool = False, processor: str = 'CPU', order_algorithm: str = 'NONE') → Any[source]

Creates the compiled model.

Parameters:
  • model – module

  • backend – kind of backend

  • use_dynamic – uses dynamic shapes

  • verbose – verbosity level

  • enable_pattern – optimization patterns to enable

  • disable_pattern – optimization patterns to disable

  • return_storage – returns a container for the models; only works with backends custom and debug

  • rename_inputs – renames the inputs into input_{i}

  • dump_prefix – dumps the models (backends custom and debug)

  • dump_patterns – dumps the applied optimization patterns if applicable

  • optimize – enables optimizations

  • ort_optimize – enables onnxruntime optimizations

  • use_fused_aten_ops – uses fused operators when converting the model; only works with the backend custom

  • processor – optimization is made for this processor or this list of processors (comma-separated values)

  • order_algorithm – algorithm optimizing the order of the onnx nodes, NONE by default

Returns:

compiled model
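A minimal usage sketch, assuming torch and experimental_experiment are installed. The module class Tiny and the opset value 18 are illustrative choices, not taken from the source; the import is guarded so the snippet degrades gracefully where the packages are absent.

```python
# Hedged sketch of create_compiled_model with the "custom" backend.
# Tiny and target_opset=18 are illustrative assumptions.
status = "skipped"
try:
    import torch
    from experimental_experiment.torch_dynamo.compiled_model import (
        create_compiled_model,
    )

    class Tiny(torch.nn.Module):
        # A trivial module, just enough to exercise the compiler.
        def forward(self, x):
            return torch.relu(x) + 1.0

    compiled = create_compiled_model(
        Tiny(),
        backend="custom",   # kind of backend
        target_opset=18,    # illustrative opset, not from the source
        verbose=0,
    )
    status = "compiled"
except ImportError:
    # torch or experimental_experiment is not available in this environment.
    pass
print(status)
```

With return_storage=True (backends custom and debug only), the function additionally exposes a container holding the generated models, which is useful for inspecting the produced ONNX graph.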

experimental_experiment.torch_dynamo.compiled_model.get_fused_aten_ops_dispatcher()[source]

Returns a dispatcher with additional converter functions to convert fused operators into ATen ops that onnxruntime can call.
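A short sketch of obtaining the dispatcher, assuming experimental_experiment is installed; the import is guarded so the snippet degrades gracefully where it is not.

```python
# Hedged sketch: fetch the dispatcher that maps fused operators to
# ATen ops callable by onnxruntime.
status = "skipped"
try:
    from experimental_experiment.torch_dynamo.compiled_model import (
        get_fused_aten_ops_dispatcher,
    )

    dispatcher = get_fused_aten_ops_dispatcher()
    status = "ok" if dispatcher is not None else "empty"
except ImportError:
    # experimental_experiment is not available in this environment.
    pass
print(status)
```

This pairs with use_fused_aten_ops=True in create_compiled_model, which only takes effect with the backend custom.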