experimental_experiment.xbuilder

GraphBuilder

class experimental_experiment.xbuilder.GraphBuilder(target_opset_or_existing_proto: int | Dict[str, int] | ModelProto | FunctionProto, input_names: Sequence[str] | None = None, as_function: bool = False, optimization_options: OptimizationOptions | None = None, args: List[Any] | None = None, kwargs: Dict[str, Any] | None = None, ir_version: int | None = None, verbose: int = 0, infer_shapes_options: InferShapesOptions = InferShapesOptions.NONE, raise_list: Set[str] | None = None, dynamic_shapes: Dict[str, Any] | Tuple[Any] | None = None, local_domain: str = 'local_function', signature: Any | None = None, check_empty_source: bool = False, graph_module: torch.fx.GraphModule | None = None)[source]

Simplifies the creation of a model.

Parameters:
  • target_opset_or_existing_proto – a ModelProto, an integer (target opset), or a dictionary mapping domains to versions

  • input_names – input names

  • as_function – export as a function or a model; there are fewer asserts when as_function is True

  • optimization_options – optimizations options, see OptimizationOptions

  • args – example of inputs

  • kwargs – example of inputs

  • ir_version – ir version when exporting

  • verbose – verbosity

  • infer_shapes_options – options when running shape inference for an existing model

  • raise_list – raise an exception if a new operator belongs to that list

  • dynamic_shapes – dynamic shapes

  • local_domain – domain name to use for local functions if not specified

  • signature – the signature is unused but helps for debugging purposes

  • check_empty_source – checks sources are not empty

  • graph_module – only used for debugging purposes

Important attributes:

  • input_names: List[str]: list of input names

  • as_function: bool: whether the model must be exported as a function or as a model; there are fewer asserts when as_function is True

  • optimization_options: OptimizationOptions:

  • nodes: List[NodeProto]: list of nodes

  • initializers_dict: Dict[str, Any]: initializers

  • initializers_dict_sources: Dict[str, InitializerInfo]: information about where the initializers were created

  • inputs: List[ValueInfoTensorProto]: inputs

  • outputs: List[ValueInfoTensorProto]: outputs

  • ir_version: int: ir version

  • opsets: Dict[str, int]: declared opsets

  • input_args: List[T]: input tensors when the class is used to convert an existing model

  • input_kwargs: Dict[str, T]: input tensors when the class is used to convert an existing model

  • functions: Dict[Tuple[str,str], FunctionProto]: dictionary of functions to add to the model

  • value_info: List[ValueInfoProto]: value info of the original model

  • dynamic_shapes: Union[Dict[str, Any], Tuple[Any]]: dynamic_shapes information

  • _parameter_renaming: Dict[str, str]: to rename parameters and give them a name which can be found in module.named_parameters

Computed attributes:

  • _unique_names: used to create unused result names

  • _unique_node_names: used to create unused node names

  • _known_names: set of existing results names

  • _known_shapes: Dict[str, DYNAMIC_SHAPE]: declared shapes

  • _known_types: Dict[str, int]: declared element types

  • _known_value_shape: Dict[str, Any]: if a result is a shape or not (for example the output of operator Shape)

  • _known_ranks: Dict[str, int]: declared ranks

  • _known_sequences: Dict[str, Dict[str, Any]]: known sequences

  • _dynamic_examples: Dict[str, Set[Union[int,float]]]: example of dynamic dimensions

  • constants_node_: Dict[bytes, NodeProto]: constant nodes

  • constants_alias_: Dict[str, str]: aliases for constants

  • constants_: Dict[str, Any]: constant values

  • constants_computed_: Dict[str, Any]: computed constant values

  • dynamic_objects: Dict[str, torch.SymInt]: dynamic dimensions

  • dynamic_objects_rev: Dict[str, str]: reverse dictionary to speed up lookups

  • _cache_shape: Dict[key,str]: caches concatenations of shapes

  • _values: Dict[key,str]: caches initializer values to merge those which are equal

  • _dynamic_alias: Dict[str,str]: used when the user gives a different name to the dynamic shapes

  • constraints_: Dict[str, Set[Any]]: if a broadcast implies a constraint on a dynamic shape, it is stored here

  • _events: used to retrieve any information useful for debugging

Debugging attributes:

  • _raise_list: Set[str]: the builder stops if a result falls in that list (debugging tool)

You can set the environment variables ONNXSTOP, ONNXSTOPSHAPE, ONNXSTOPTYPE to raise an exception when the type or shape of a given variable is set. Example: ONNXSTOP=attn_output python .... ONNXCST=1 shows which constant is computed, NULLSHAPE=1 raises an exception as soon as a null shape occurs. The code includes:

self._debug_null_shape = int(os.environ.get("NULLSHAPE", "0"))
self._debug_stop = os.environ.get("ONNXSTOP", "#?#")
self._debug_stop_shape = os.environ.get("ONNXSTOPSHAPE", "#?#")
self._debug_stop_type = os.environ.get("ONNXSTOPTYPE", "#?#")
self._debug_get_constant = int(os.environ.get("ONNXCST", "0"))
self._debug_local_function = int(os.environ.get("ONNXFUNC", "0"))
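
A minimal usage sketch (not taken from the package documentation; the opset value, tensor names, and shapes are illustrative, and the exact behaviour may differ in the installed version):

from onnx import TensorProto
from experimental_experiment.xbuilder import GraphBuilder

# build a small graph computing output_0 = X + Y, targeting opset 18 (illustrative)
gb = GraphBuilder(18)
gb.make_tensor_input("X", TensorProto.FLOAT, (4, 4))
gb.make_tensor_input("Y", TensorProto.FLOAT, (4, 4))
gb.make_node("Add", ["X", "Y"], ["output_0"], name="add_xy")
gb.make_tensor_output("output_0", TensorProto.FLOAT, (4, 4))
onx = gb.to_onnx()  # ModelProto
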
class InitializerInfo(name: str, source: str, same_as: str | None = None)[source]

Tracks the location where the initializer was created.

Parameters:
  • name – initializer name

  • source – information about where the initializer was created

  • same_as – same as an existing initializer

add_source(source: str)[source]

Adds other sources.

class ShapeConstant(name: str, shape: Tuple[int, ...], node: NodeProto)[source]

Wraps a constant shape even if the input producing the shape is not constant.

class WrapSym(sym: torch.SymInt | torch.SymFloat)[source]

Wraps a symbolic int (a dimension for example).

add_constant_node(node: NodeProto) bytes | None[source]

Adds a constant node. Any constant equivalent to this one will be fused. self.optimization_options.constant_fusing must be True.

add_domain(domain: str, version: int = 1)[source]

Adds a domain to the list of supported ones. Checks the version is the same if it exists.

add_dynamic_object(key: str, value: Any, name: str | None = None, dim: int | None = None, parse: bool = False, check_tokens: bool = True)[source]

Registers a dynamic object such as a dynamic dimension.

Parameters:
  • key – string

  • value – SymInt, Dim, _DerivedDim

  • name – input name it comes from

  • dim – dimension for this dimension in input

  • parse – parse the expression and add pieces of it as well

  • check_tokens – checks that the subtokens are registered prior to this addition

add_function(f: FunctionProto, rename_allowed: bool = False, merge_allowed: bool = False, builder: GraphBuilder | None = None) Tuple[str, str][source]

Adds a new local function.

Parameters:
  • f – new function to register

  • rename_allowed – the function can be renamed if a function with the same name already exists, the proto is modified inplace

  • merge_allowed – the function is not added if another function of the same name already exists and is the same

  • builder – GraphBuilder used to build the local function, it contains shape information the function does not have

Returns:

the function name as a pair (domain, name)

This function does not add the domain to the list of supported opsets. You should use method make_local_function() for this.

add_initializer(name: str, value: Any, itype: int | None = None, shape: Tuple[int, ...] | None = None, cst: Any | None = None, key: Any | None = None, existing: bool = False, allow_empty: bool = False, parameter_name: str | None = None, source: str = '')[source]

Adds an initializer.

Parameters:
  • name – constant name

  • value – initializer

  • itype – to overwrite the type

  • shape – to overwrite the shape

  • cst – value to send to update_node_constant

  • key – used to register the initializer

  • existing – if True, shape and type should already exist, if False, they should not exist, if None, both cases are allowed

  • allow_empty – allow empty tensor anyway

  • parameter_name – the parameter name differs from its name in the fx graph; the original names are restored when the model is finally exported into onnx, and until then the mapping is kept in attribute _parameter_renaming

  • source – any additional information, this field is usually used to let the user know where the initializer was created.

add_stat(kind: str, name: str)[source]

Increments a counter.

compute_constant(name: str, exc: bool = True, only_array: bool = False, allow_empty: bool = False) Tuple[ndarray, Dict[str, ndarray] | None][source]

Computes a constant.

Parameters:
  • name – constant name

  • exc – raises an exception if any failure

  • only_array – do not return TensorProto

  • allow_empty – allow empty result

Returns:

constant

It returns None if the constant is a FakeTensor.

constant_folding(convert_into_initializer: bool = True) Dict[str, float][source]

Folds all constants. Constants are marked during the creation of the graph. There is no need to propagate this information.

Parameters:

convert_into_initializer – moves the constant as an initializer, otherwise, just evaluates it

Returns:

dictionary of statistics

do_not_remove(node: NodeProto) bool[source]

Tells if a node must not be removed.

elem_size(elem_type: int) int[source]

Returns the size in bytes of an element of this type.

empty_copy(as_function: bool = False, constant_size: int = 16777216) GraphBuilder[source]

Creates an empty copy but with the same opsets.

get_attribute(node: NodeProto, att_name: str, exc: bool = True) AttributeProto | None[source]

Returns an attribute for a node.

get_attributes_with_default(node: NodeProto, **default_values) Dict[str, Any][source]

Returns int or float attributes. If missing, the default value is returned.

Parameters:
  • node – node

  • default_values – default values
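
A hedged usage sketch for this method (gb is assumed to be an existing GraphBuilder; the node and attribute names are illustrative):

import onnx

node = onnx.helper.make_node("Flatten", ["X"], ["Y"], axis=1)
atts = gb.get_attributes_with_default(node, axis=0, missing_attr=2)
# expected: atts["axis"] is read from the node (1), while "missing_attr"
# falls back to the provided default (2)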

get_constant(name: str, exc: bool = True, computed_value: bool = False, as_shape: bool = False, multiple_outputs: bool = False) ndarray | NodeProto[source]

The method returns the constant stored under name. It is a tensor (numpy array) or a NodeProto which must be evaluated. If computed_value is True, the NodeProto is evaluated with the ReferenceEvaluator.

Parameters:
  • name – constant name

  • exc – raise an exception if anything is impossible to do

  • computed_value – compute the value if not a constant

  • as_shape – returns a tuple for a shape

  • multiple_outputs – allow multiple outputs

Returns:

value

get_constant_or_attribute(node: NodeProto, input_index: int, att_name: str) Any[source]

Returns the value of an input when it is a constant, or the value of attribute att_name when, in an older opset, it was defined as an attribute.

get_debug_msg(limit: int = 1000) str[source]

Returns a string providing as much information as possible to help the developer understand why a conversion failed.

Parameters:

limit – limit the string if the model is big

Returns:

many pieces of information about the ongoing conversion

get_initializer_size(name: str) int[source]

Returns the size of an initializer.

Parameters:

name – name

Returns:

size

get_input_dynamic_shape(name: str, input_index: int, example_shape: Tuple[int, ...], dynamic_shapes: Any | None = None, example_value: Any | None = None) Tuple[int | torch.SymInt | torch.SymFloat | float | str, ...][source]

Updates the shape based on the available information.

Parameters:
  • name – input name

  • input_index – input index

  • example_shape – the shape of the given input

  • dynamic_shapes – used to handle nested dynamic shapes

  • example_value – one example of the value

Returns:

dynamic shape

get_is_dimension(name: str, elem_type: int | None = None, shape: Tuple[int, ...] | None = None, n_outputs: int | None = None, exc: bool = True) bool[source]

Tells if a result is a dynamic dimension or not.

get_local_function(name: str, domain: str = '') FunctionProto[source]

Returns a local function.

get_local_function_outputs(name: str, domain: str = '') List[str][source]

Returns the outputs of a local function.

get_opset(domain: str) int[source]

Returns the opset version for a specific domain.

Parameters:

domain – domain name

Returns:

version

get_rank(name: str) int[source]

Returns the rank of a result.

get_registered_constraints() Dict[str, Set[str | int]][source]

Returns the constraints registered so far.

get_sequence(name: str) Dict[str, Any][source]

Returns sequence information

get_shape(name: str) int[source]

Returns the shape of a result.

get_type(name: str) int[source]

Returns the type of a result.

get_type_known(name: str, exc: bool = False) int | None[source]

Returns the type known by torch to help solve mismatches.

has_dynamic_object(name: str) bool[source]

Tells if a result is a dynamic object (a torch.SymInt for torch).

has_local_function(name: str, domain: str = '') bool[source]

Checks if a local function exists.

has_name(name: str) bool[source]

Tells if a result exists.

has_rank(name: str) bool[source]

Tells if a result has a rank.

has_shape(name: str, full=False) bool[source]

Tells if a result has a shape.

has_type(name: str) bool[source]

Tells if a result has a type. This should always be true.

infer_shapes() Dict[str, Tuple[Tuple[int | torch.SymInt | torch.SymFloat | float | str, ...], Tuple[int | torch.SymInt | torch.SymFloat | float | str, ...]]][source]

Runs custom shape inference. Returns the updates.

inline_functions(verbose: int = 0) int[source]

Inlines local functions. Returns the number of inlined nodes.

insert_and_remove_nodes(insert_at: int | None, new_nodes: List[NodeProto], removed: List[int], opsets: Dict[str, int] | None = None, debug: Any | None = None) List[NodeProto][source]

Inserts new nodes and removes others.

Parameters:
  • insert_at – insert the new nodes at this position, if None, the function guesses where to add them

  • new_nodes – list of nodes to insert

  • removed – list of nodes to remove (based on their positions)

  • opsets – opsets used

  • debug – anything added to exception messages

Returns:

list of removed nodes

io_names()[source]

Returns the list of inputs and outputs for the nodes.

is_constant(name: str) bool[source]

Tells if a result is a constant.

is_constant_or_attribute(node: NodeProto, input_index: int, att_name: str) bool[source]

Tells if an input is a constant, or returns True if, in an older opset, it was defined as an attribute.

is_exact_same_constant(node: NodeProto) NodeProto | None[source]

Returns an existing constant node exactly equivalent to this one if there is one, None otherwise. self.optimization_options.constant_fusing must be True.

is_sequence(name: str) bool[source]

Tells if a result is a sequence.

property main_opset

Returns the opset for the main domain (assuming it is used).

make_dynamic_object(name: str, value: Any, shape_as_input: bool = False, input_name: str | None = None, axis: int | None = None) str[source]

Creates a dynamic shape.

Parameters:
  • name – name

  • value – value

  • shape_as_input – adds the name to the list of the inputs of the onnx model

  • input_name – the dimension comes from this input

  • axis – the dimension comes from this axis

Returns:

the name

make_initializer(name: str, value: Any, external: bool = False, msg: str = '', parameter_name: str | None = None, source: str = '') str[source]

Adds an initializer to the graph. The function detects duplicated small containers, but only if they hold integers. Other types might be used as weights; even if similar, they could change after training.

Parameters:
  • name – name; if empty (“”), a unique name is given; if not empty, it is used more like a prefix and the method might change it to make it unique

  • value – value (TensorProto)

  • external – external initializer or not (not stored in the graph model)

  • msg – added to the error message if something goes wrong

  • parameter_name – the parameter name differs from its name in the fx graph; the original names are restored when the model is finally exported into onnx, and until then the mapping is kept in attribute _parameter_renaming

  • source – any additional information, this field is usually used to let the user know where the initializer was created.

Returns:

name of the initializer
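
A hedged sketch (gb is assumed to be an existing GraphBuilder; the returned name may differ from the requested one if it must be made unique):

import numpy as np

# register a 4x4 float32 tensor as an initializer; the source string is illustrative
init_name = gb.make_initializer(
    "weight", np.zeros((4, 4), dtype=np.float32), source="illustrative source string"
)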

make_key(value: Any) Tuple[int | str, ...] | None[source]

Builds a key identifying a value. Returns None if it is not possible.

make_local_function(builder: GraphBuilder, function_options: FunctionOptions, optimize: bool = False) Tuple[List[str], Tuple[str, str]][source]

Adds a local function to the existing graph.

Parameters:
  • builder – builder

  • function_options – to define how to handle weights

  • optimize – optimize the function

Returns:

the list of added initializers if move_initializer_to_constant is True, and the function name (domain, name); it can be changed if one already exists

Methods GraphBuilder.inline_functions() and GraphBuilder.move_initializers_to_constant() are called on the builder if move_initializer_to_constant is True. This modifies the builder inplace.

make_new_dynamic_shape(rank: int, prefix: str = 'd') Tuple[torch.SymInt, ...][source]

Creates a dynamic shape of a known rank with new dynamic dimensions.

make_node(op_type: str, inputs: str | List[str], outputs: int | List[str] | str = 1, domain: str = '', attributes: List[AttributeProto] | None = None, check: bool | None = None, name: str | None = None, sts: Dict[str, Any] | None = None, do_not_remove: bool = False, insert_position: int | None = None, **kwargs) str | List[str][source]

Adds a node in the graph.

Parameters:
  • op_type – operator type

  • inputs – input names

  • outputs – output names, may be None, in that case, the builder chooses them for the user

  • domain – domain

  • attributes – list of attributes to add as AttributeProto

  • check – do some verification

  • name – node name

  • sts – if not specified, the builder tries to set the shape and the type of the new results after the node is added; it is not possible for every node, as there is no tool which determines the output shape of just one node

  • do_not_remove – prevent this node from being removed

  • insert_position – insert the node at the end (None) or at the top (HEAD).

  • kwargs – additional attributes to add to the node

Returns:

output names
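
A hedged sketch showing kwargs turned into node attributes (gb and the result "X" are assumed to exist; the builder chooses the output name since outputs is left to its default):

# perm is passed through kwargs and becomes an attribute of the Transpose node
out = gb.make_node("Transpose", ["X"], name="transpose_X", perm=[1, 0])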

make_nodes(builder: GraphBuilder, input_names: List[str], output_names: List[str], prefix: str = '', function_options: FunctionOptions | None = None, optimize: bool = False) str | List[str][source]

Appends all nodes and initializers from another builder. Handles the renaming of results. The content stored in ‘builder’ is modified inplace to avoid copying.

Parameters:
  • builder – other builder

  • input_names – input names

  • output_names – output names

  • prefix – prefix added to all names from this builder if function_options is None

  • function_options – defines how to create a local function if needed

  • optimize – optimize the function

Returns:

output names

make_shape_from_results(shape: Tuple[int | torch.SymInt | torch.SymFloat | float | str, ...], name='') str[source]

Creates a shape coming from intermediate results.

make_subset_builder(input_names: List[str], name: str, domain: str) GraphBuilder[source]

Creates a copy of the existing builder but with information reduced to the input_names considered as inputs.

Parameters:
  • input_names – new inputs

  • name – function name

  • domain – domain name for the function

Returns:

shortened builder

make_tensor_input(name: str | Tuple[str], elem_type: Any | None = None, shape: Tuple[int | torch.SymInt | torch.SymFloat | float | str, ...] | None = None, is_dimension: bool = False, marker: str = '', default_initializer: Any | None = None) str[source]

Adds a tensor input to the onnx graph.

Parameters:
  • name – name or tuple of names; in that case, all inputs are created with the same element type and shape

  • elem_type – element type

  • shape – shape

  • is_dimension – torch is using torch.SymInt to add a dynamic input to the graph

  • marker – to know where this input was created from

  • default_initializer – adds an initializer with the same name as the input

Returns:

input name
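
A hedged sketch of the tuple form of name (gb is an existing GraphBuilder; names and shapes are illustrative):

from onnx import TensorProto

# creates two inputs sharing the same element type and shape
gb.make_tensor_input(("X", "Y"), TensorProto.FLOAT, (4, 4))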

make_tensor_output(name: str | List[str], elem_type: int | None = None, shape: Tuple[int, ...] | None = None, indexed: bool = True, is_dimension: bool | None = None) str | List[str][source]

Adds a tensor output to the onnx graph.

Parameters:
  • name – name

  • elem_type – element type

  • shape – shape

  • indexed – whether the name must be indexed

  • is_dimension – torch is using torch.SymInt to add a dynamic input to the graph

Returns:

output name

make_tensor_sequence_input(name: str, elem_type: Any, shape: Tuple[int | torch.SymInt | torch.SymFloat | float | str, ...], marker: str = '') str[source]

Adds a tensor sequence input to the onnx graph.

Parameters:
  • name – name

  • elem_type – element type

  • shape – shape

  • marker – to know where this input was created from

Returns:

input name

move_initializers_to_constant(full_parameter_name, threshold: int | None = None, verbose: int = 0) int[source]

Moves initializers to constant nodes.

Parameters:
  • full_parameter_name – keeps the local name or the full name for the parameters

  • threshold – only moves initializers to constants if their size is below this limit

  • verbose – verbosity

Returns:

number of moved initializers

optimize() List[Dict[str, Any]][source]

Optimizes a graph. Returns the list of applied processes.

optimize_with_patterns() List[Dict[str, Any]][source]

Optimizes this graph with patterns.

parse_dimension_expression(expr: str, exc: bool = True) Expression[source]

Parses an expression involving dimensions.

Parameters:
  • expr – expression to parse

  • exc – raises an exception if it fails

Returns:

an expression or None if exc is False and the parsing failed

process(graph_module: torch.fx.GraphModule, interpreter: DynamoInterpreter)[source]

Environment variable ONNX_BUILDER_PROGRESS=1 can be used to show a progress bar on big models.

rank(name: str) int[source]

Shortcut to get_rank().

register_constraint_dimension(dim_name: str, value: Any)[source]

Registers a constraint on a dimension.

Parameters:
  • dim_name – dimension name

  • value – value to register

register_dynamic_objects_from_dim(dim: str)[source]

Registers all the dynamic objects required in a dimension.

register_dynamic_objects_from_shape(shape: Tuple[int | torch.SymInt | torch.SymFloat | float | str, ...])[source]

Registers all the dynamic objects required in this shape.

register_users(name: str, users: Iterable[str])[source]

Registers users. This is not used except to check the conversion is valid.

remove_identity_nodes() Tuple[int, int][source]

Removes identity nodes. Returns the number of removed nodes and the number of added nodes.

Note

onnxruntime does not handle well the case where a node from domain ‘org.pytorch.aten’ (ATen for example) outputs results on CPU while the expected output is on CUDA. An identity node must be kept or inserted in that case. In that particular case, a node can be marked so that it does not get deleted: its name must start with '_DONOTREMOVE_'.
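
A hedged sketch of marking such a node (gb is an existing GraphBuilder; input and output names are illustrative and must already exist in the graph):

# both the name prefix and do_not_remove protect the node from removal passes
gb.make_node(
    "Identity", ["aten_result"], ["kept_result"],
    name="_DONOTREMOVE_identity_0", do_not_remove=True,
)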

remove_unused() int[source]

Simple function to remove unused nodes. It does not look into subgraphs and assumes there is none. Everything is done in one pass. Returns the number of removed nodes.

rename_in_local_functions(replacements: Dict[Tuple[str, str], Tuple[str, str]], list_keys: List[Tuple[str, str]], proto: FunctionProto) FunctionProto[source]

Renames local functions in a given list of local functions.

Parameters:
  • replacements – replacements to make

  • list_keys – list of local function to modify

  • proto – one function to update as well

Returns:

the modified proto for proto

The function does not modify the functions inplace; it creates a copy, assuming it is not too big.

select_outputs(output_names: List[str])[source]

Selects new outputs. The type is assumed to be unknown. The method only wipes out the outputs to replace them by others. It assumes the unused nodes are removed afterwards.

Parameters:

output_names – new outputs

set_name(name: str, marker: str)[source]

Adds a name to the list of known names.

set_rank(name: str, value: int)[source]

Sets the rank for a result.

Parameters:
  • name – result name

  • value – rank

set_sequence(name: str, dtype: int | Tuple[int, ...], shapes: Tuple[Tuple[int | torch.SymInt | torch.SymFloat | float | str, ...], ...] | None = None, ranks: Tuple[int, ...] | None = None, unknown: bool = False)[source]

Defines a result as a sequence.

set_shape(name: str, shape: Tuple[int | torch.SymInt | torch.SymFloat | float | str, ...], set_rank: bool = True, set_if_more_precise: bool = False, exc: bool = False)[source]

Sets the shape for a result. If it already exists, it checks the new shape is equal to the existing one.

Parameters:
  • name – result name

  • shape – shape

  • set_rank – set the rank as well

  • set_if_more_precise – change the shape if it is more precise

  • exc – raise an exception if inconsistency

set_type(name: str, dtype: int, exc: bool = True)[source]

Sets the type for a result. If it already exists, it checks the new type is equal to the existing one.

Parameters:
  • name – name

  • dtype – element type (an integer, ONNX)

  • exc – raises an exception

set_type_shape_or_rank(name: str, like: str)[source]

Sets the type and the shape of result name to match those of result like.

set_value_shape(name: str, value: Any, equal_to: Tuple[str, str] | None = None)[source]

Sets the value for a shape result.

Parameters:
  • name – name

  • value – it cannot be empty

  • equal_to – if specified, the value is also equal to this value

simple_update_value_shape_with_node(node) bool[source]

Updates _known_value_shape for a particular node.

to_onnx(optimize: bool = True, large_model: bool = False, external_threshold: int = 1024, return_optimize_report: bool = False, inline: bool = False, function_options: FunctionOptions | None = None, mask_outputs: List[bool] | None = None) FunctionProto | ModelProto | TorchModelContainer | Dict[str, Any][source]

Conversion to onnx. Only then the initializers are converted into TensorProto.

Parameters:
  • optimize – disables or enables the optimizations; the optimizations are set when the class constructor is called

  • large_model – if True, returns an onnx.model_container.ModelContainer; it lets the user decide later whether the weights should be part of the model or saved as external weights

  • external_threshold – if large_model is True, every tensor above this limit is stored as external

  • return_optimize_report – return statistics about the optimization as well

  • inline – inline local functions, this is done before any optimization takes place

  • function_options – to be set to export as a function

  • mask_outputs – to filter out some outputs if not None

Returns:

the proto
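
A hedged sketch of the export step (gb is an existing GraphBuilder; the threshold value is illustrative):

# regular export, optimizations enabled
onx = gb.to_onnx()
# large model: tensors above the threshold are kept outside the ModelProto until saved
container = gb.to_onnx(large_model=True, external_threshold=2**20)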

update_node_constant(name: str, node: NodeProto) bool[source]

Updates a constant NodeProto.

value_as_shape(name: str) bool[source]

Returns the value of a result if it is a shape.

verify_dynamic_shape(shape: Any, name: str | None = None, add: bool = True) Tuple[int | torch.SymInt | torch.SymFloat | float | str, ...][source]

The implementation of this method should be revisited.

FunctionOptions

class experimental_experiment.xbuilder.FunctionOptions(export_as_function: bool = False, name: str = '', domain: str = '', external_threshold: int = 33554432, move_initializer_to_constant: bool = False, return_initializer: bool = False, inline: bool = False, merge_allowed: bool = False, rename_allowed: bool = False)[source]

Defines how local functions must behave.

Parameters:
  • name – function name

  • domain – function domain

  • export_as_function – export the onnx as functions or keep local function

  • external_threshold – size threshold deciding whether initializers are kept as inputs of the function or moved as constants of the function

  • move_initializer_to_constant – moves initializers to constants before creating the function proto; this depends on the size defined by external_threshold

  • return_initializer – returns the remaining initializers and adds them as inputs to the function

  • inline – inline functions

  • rename_allowed – allows renaming the function if a duplicate is detected

  • merge_allowed – allows merging a function when the same code is detected
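
A hedged sketch of building options for a local function (the function name is illustrative):

from experimental_experiment.xbuilder import FunctionOptions

fo = FunctionOptions(
    export_as_function=True,
    name="my_block",
    domain="local_function",
    move_initializer_to_constant=True,
)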

InferShapesOptions

class experimental_experiment.xbuilder.InferShapesOptions(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)[source]

Defines options when running shape inference on an existing model. Option NEW means the existing shape information is removed and shape inference is run again.

OptimizationOptions

class experimental_experiment.xbuilder.OptimizationOptions(remove_unused: bool = True, constant_folding: bool = False, constant_size: int = 1024, constant_fusing: bool = True, remove_identity: bool = True, patterns: str | List[PatternOptimization] = 'default', max_iter: int = -1, recursive: bool = False, stop_after: int = -1, verbose: int = 0, verifies: bool = False, dump_applied_patterns: str | None = None, processor: str = 'CPU', order: OrderAlgorithm | None = None)[source]

Defines all the optimizations to apply.

Parameters:
  • remove_unused – remove all unused nodes, this must be true if pattern optimization is enabled

  • constant_folding – folds constant as much as possible

  • constant_size – all Constant nodes above this threshold should be defined as initializers

  • remove_identity – remove identity nodes

  • patterns – list of pattern optimizations to apply to the graph; a pattern looks for a specific subsequence of nodes in a graph and does some replacements, ‘default’ means a default list of optimization patterns is applied, see below for the most common values

  • constant_fusing – similar Constant and ConstantOfShape nodes are reused; this option avoids creating new nodes when they are the same

  • max_iter – maximum number of iterations when doing pattern optimizations, -1 to leave it undefined

  • recursive – optimizes subgraphs and functions as well

  • stop_after – for investigation, stops after this number of applied patterns, -1 to never stop

  • verbose – verbosity level (for pattern optimization)

  • verifies – runs verifications to ensure the model is correct every time it is modified; it is mostly used to find bugs and it is very slow

  • dump_applied_patterns – dump applied patterns in a folder, the users can check every pattern dumped as a FunctionProto

  • processor – optimization should be made for this processor or this list of processors (comma-separated values)

  • order – order algorithm to apply

It is possible to define a precise list of patterns to apply to a model. The value is interpreted by experimental_experiment.xoptim.get_pattern_list().

  • patterns=None: no pattern optimization

  • patterns="TransposeTranspose,TransposeMatMul": applies two patterns

  • patterns=["FusedMatMul"]: applies one pattern

  • patterns=[RotaryEmbeddingPattern(verbose=10)]: applies one pattern with a specific verbosity value

  • patterns="default: applies all patterns modifying standard onnx operators into other standard onnx operators

  • patterns="default+onnxruntime: applies all patterns modifying standard onnx operators into other standard onnx operators as well as patterns fusing nodes into custom operators implemented by onnxruntime

  • patterns="default+onnxruntime+experimental: applies all patterns modifying standard onnx operators into other standard onnx operators, patterns fusing nodes into custom operators implemented by onnxruntime,

VirtualTensor

class experimental_experiment.xbuilder.VirtualTensor(name: str, dtype: Any, shape: Tuple[int | str, ...])[source]

Defines the type and shape of a tensor without its content.
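
A hedged sketch (passing an onnx element type as dtype is an assumption; a numpy dtype may also be accepted):

from onnx import TensorProto
from experimental_experiment.xbuilder import VirtualTensor

# describes a float tensor of shape ("batch", 4) without storing any data
vt = VirtualTensor("X", TensorProto.FLOAT, ("batch", 4))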

Other functions