yobx.helpers.onnx_helper#
- class yobx.helpers.onnx_helper.NodeCoordinates(node: TensorProto | NodeProto | SparseTensorProto | ValueInfoProto | str, path: Tuple[Tuple[int, str, str], ...])[source]#
A way to locate a node. path is a tuple of triples; each triple contains a node index, a node type, and a node name.
- class yobx.helpers.onnx_helper.ResultFound(name: str, producer: NodeCoordinates | None, consumer: NodeCoordinates | None)[source]#
Class returned by
enumerate_results().
- yobx.helpers.onnx_helper.attr_proto_to_python(attr: AttributeProto) Any[source]#
Converts an onnx.AttributeProto to a plain Python value.
- Parameters:
attr – attribute proto to convert
- Returns:
Python value
Supported attribute types: FLOAT, INT, STRING, TENSOR, FLOATS, INTS, STRINGS. Raises NotImplementedError for unsupported types.
- yobx.helpers.onnx_helper.check_for_non_recursivity(node_indices: List[int], node_list: List[NodeProto | None], inputs: Set[str] | Sequence[str], outputs: Set[str] | Sequence[str], exc: bool = True) List[int][source]#
Checks that none of the outputs is required by an input of the function itself. That would mean a node both needs an output of the function and is required by the function: such a node is probably missing from the initial set.
- Parameters:
node_indices – node_indices part of the subset
node_list – list of nodes
inputs – input names to consider
outputs – output names which cannot be involved in input names
exc – raise an exception as soon as it becomes impossible
- Returns:
list of nodes to add to make the list of nodes consistent with the lists of inputs and outputs (they should be recomputed)
- yobx.helpers.onnx_helper.choose_consistent_domain_opset(domain: str, opsets: Dict[str, int] | None = None) int[source]#
Chooses a compatible opset for a particular domain given the existing ones. Only works for ai.onnx.ml; otherwise returns 1.
- Parameters:
domain – new domain
opsets – existing opsets
- Returns:
version
- yobx.helpers.onnx_helper.clean_shapes(proto: GraphProto | ModelProto)[source]#
Cleans all shapes in place.
- yobx.helpers.onnx_helper.compatible_opsets(domain: str, op_type: str, current: int, new_version: int) bool[source]#
Tells whether two opset versions for a particular operator type refer to the same version of that operator.
- Parameters:
domain – domain, only ai.onnx and ai.onnx.ml are checked.
op_type – operator type
current – current domain version
new_version – new version
- Returns:
result
- yobx.helpers.onnx_helper.dtype_to_tensor_dtype(dt: dtype | torch.dtype) int[source]#
Converts a torch dtype or numpy dtype into an onnx element type.
- Parameters:
dt – dtype
- Returns:
onnx type
- yobx.helpers.onnx_helper.element_wise_binary_op_types() Set[str][source]#
Returns the set of element-wise binary operators.
<<<
import pprint

from yobx.helpers.onnx_helper import (
    element_wise_binary_op_types,
)

pprint.pprint(element_wise_binary_op_types())
>>>
{'Add', 'And', 'BitwiseAnd', 'BitwiseOr', 'BitwiseXor', 'Div', 'Max', 'Mean', 'Min', 'Mod', 'Mul', 'Or', 'Sub', 'Sum', 'Xor'}
- yobx.helpers.onnx_helper.element_wise_op_cmp_types() Set[str][source]#
Returns the set of element-wise comparison operators.
<<<
import pprint

from yobx.helpers.onnx_helper import element_wise_op_cmp_types

pprint.pprint(element_wise_op_cmp_types())
>>>
{'GreaterOrEqual', 'Less', 'LessOrEqual', 'Greater', 'Equal'}
- yobx.helpers.onnx_helper.enumerate_nodes(graph: GraphProto) Iterator[NodeProto][source]#
Enumerates all nodes in a graph, including nodes contained in subgraphs (e.g. bodies of Loop, Scan, If, SequenceMap operators).
- yobx.helpers.onnx_helper.enumerate_results(proto: FunctionProto | GraphProto | ModelProto | Sequence[NodeProto], name: Set[str] | str, verbose: int = 0, coordinates: List[Tuple[int, str, str]] | None = None) Iterator[ResultFound][source]#
Iterates over all nodes and attributes to find where a name is used.
- Parameters:
proto – a proto
name – name or names to find
verbose – verbosity
coordinates – coordinates of a node
- Returns:
iterator on
ResultFound
- yobx.helpers.onnx_helper.enumerate_subgraphs(graph: GraphProto) Iterator[GraphProto][source]#
Enumerates all subgraphs of a graph, including nested subgraphs.
- yobx.helpers.onnx_helper.enumerate_subgraphs_builder(node: NodeProto, recursive: bool = True) Iterator[Tuple[Tuple[NodeProto, str, GraphProto], ...]][source]#
Returns the subgraphs inside a node, recursively if recursive is True. It also accounts for the hidden inputs (inputs coming from an upper context) used by a subgraph; empty names are excluded.
- yobx.helpers.onnx_helper.get_onnx_signature(model: ModelProto) Tuple[Tuple[str, int, Tuple[int | str, ...] | List[Tuple[str, int, Tuple[int | str, ...]]]], ...][source]#
Produces a tuple of tuples corresponding to the signatures.
- Parameters:
model – model
- Returns:
signature
- yobx.helpers.onnx_helper.make_idg(g: GraphProto) int[source]#
Creates a unique id for a graph, hoping collisions cannot happen. onnx may sometimes reuse objects, and id(node) may not always be enough.
- yobx.helpers.onnx_helper.make_idn(node: NodeProto) int[source]#
Creates a unique id for a node, hoping collisions cannot happen. onnx may sometimes reuse nodes, and id(node) may not always be enough.
- yobx.helpers.onnx_helper.make_model_with_local_functions(model: ModelProto, regex: str = '.*[.]layers[.][0-9]+[.]forward$', domain: str = 'local_function', metadata_key_prefix: str | Tuple[str, ...] = ('namespace', 'source['), allow_extensions: bool = True, verbose: int = 0) ModelProto[source]#
Selects nodes based on a regular expression, using metadata 'namespace'. It looks into every value matching the regular expression and partitions the nodes based on the unique values the regular expression finds. Every set of nodes is replaced by a call to a local function.
- Parameters:
model – model proto
regex – regular expression
domain – function domain
metadata_key_prefix – list of metadata keys to consider, every value is split into multiple ones.
allow_extensions – allows the function to take nodes outside a partition if they are not already inside another partition
verbose – verbosity
- Returns:
model proto
Example:
<<<
import numpy as np
import onnx
import onnx.helper as oh
import onnx.numpy_helper as onh

from yobx.helpers.onnx_helper import (
    make_model_with_local_functions,
    pretty_onnx,
)

model = oh.make_model(
    oh.make_graph(
        [
            oh.make_node("Unsqueeze", ["X", "zero"], ["xu1"]),
            oh.make_node("Unsqueeze", ["xu1", "un"], ["xu2"]),
            oh.make_node("Reshape", ["xu2", "shape1"], ["xm1"]),
            oh.make_node("Reshape", ["Y", "shape2"], ["xm2c"]),
            oh.make_node("Cast", ["xm2c"], ["xm2"], to=1),
            oh.make_node("MatMul", ["xm1", "xm2"], ["xm"]),
            oh.make_node("Reshape", ["xm", "shape3"], ["Z"]),
        ],
        "dummy",
        [oh.make_tensor_value_info("X", onnx.TensorProto.FLOAT, [320, 1280])],
        [oh.make_tensor_value_info("Z", onnx.TensorProto.FLOAT, [3, 5, 320, 640])],
        [
            onh.from_array(
                np.random.rand(3, 5, 1280, 640).astype(np.float32), name="Y"
            ),
            onh.from_array(np.array([0], dtype=np.int64), name="zero"),
            onh.from_array(np.array([1], dtype=np.int64), name="un"),
            onh.from_array(np.array([1, 320, 1280], dtype=np.int64), name="shape1"),
            onh.from_array(np.array([15, 1280, 640], dtype=np.int64), name="shape2"),
            onh.from_array(np.array([3, 5, 320, 640], dtype=np.int64), name="shape3"),
        ],
    ),
    opset_imports=[oh.make_opsetid("", 18)],
    ir_version=9,
)

for i_node in [0, 1, 2, 3]:
    node = model.graph.node[i_node]
    meta = node.metadata_props.add()
    meta.key = f"source[{i_node}]"
    meta.value = f"LLL{i_node//3}"

print("-- model before --")
print(pretty_onnx(model))
print()
print("-- metadata --")
for node in model.graph.node:
    text = (
        f" -- [{node.metadata_props[0].key}: {node.metadata_props[0].value}]"
        if node.metadata_props
        else ""
    )
    print(
        f"-- {node.op_type}({', '.join(node.input)}) -> "
        f"{', '.join(node.output)}{text}"
    )
print()

new_model = make_model_with_local_functions(
    model, "^LLL[01]$", metadata_key_prefix="source[", verbose=1
)
print()
print("-- model after --")
print(pretty_onnx(new_model))
>>>
-- model before --
opset: domain='' version=18
input: name='X' type=dtype('float32') shape=[320, 1280]
init: name='Y' type=float32 shape=(3, 5, 1280, 640)
init: name='zero' type=int64 shape=(1,) -- array([0])
init: name='un' type=int64 shape=(1,) -- array([1])
init: name='shape1' type=int64 shape=(3,) -- array([ 1, 320, 1280])
init: name='shape2' type=int64 shape=(3,) -- array([ 15, 1280, 640])
init: name='shape3' type=int64 shape=(4,) -- array([ 3, 5, 320, 640])
Reshape(Y, shape2) -> xm2c
Cast(xm2c, to=1) -> xm2
Unsqueeze(X, zero) -> xu1
Unsqueeze(xu1, un) -> xu2
Reshape(xu2, shape1) -> xm1
MatMul(xm1, xm2) -> xm
Reshape(xm, shape3) -> Z
output: name='Z' type=dtype('float32') shape=[3, 5, 320, 640]

-- metadata --
-- Unsqueeze(X, zero) -> xu1 -- [source[0]: LLL0]
-- Unsqueeze(xu1, un) -> xu2 -- [source[1]: LLL0]
-- Reshape(xu2, shape1) -> xm1 -- [source[2]: LLL0]
-- Reshape(Y, shape2) -> xm2c -- [source[3]: LLL1]
-- Cast(xm2c) -> xm2
-- MatMul(xm1, xm2) -> xm
-- Reshape(xm, shape3) -> Z

[make_model_with_local_functions] matched 2 partitions
[make_model_with_local_functions] 'LLL0': 3 nodes
    Unsqueeze(X, zero) -> xu1
    Unsqueeze(xu1, un) -> xu2
    Reshape(xu2, shape1) -> xm1
[make_model_with_local_functions] 'LLL1': 1 nodes
    Reshape(Y, shape2) -> xm2c
[make_model_with_local_functions] move 3 nodes in partition 'LLL0' (function='LLL0')
[make_model_with_local_functions] add function LLL0(X, shape1, un, zero) -> xm1
[make_model_with_local_functions] move 1 nodes in partition 'LLL1' (function='LLL1')
[make_model_with_local_functions] add function LLL1(Y, shape2) -> xm2c

-- model after --
opset: domain='' version=18
opset: domain='local_function' version=1
input: name='X' type=dtype('float32') shape=[320, 1280]
init: name='Y' type=float32 shape=(3, 5, 1280, 640)
init: name='zero' type=int64 shape=(1,) -- array([0])
init: name='un' type=int64 shape=(1,) -- array([1])
init: name='shape1' type=int64 shape=(3,) -- array([ 1, 320, 1280])
init: name='shape2' type=int64 shape=(3,) -- array([ 15, 1280, 640])
init: name='shape3' type=int64 shape=(4,) -- array([ 3, 5, 320, 640])
LLL0[local_function](X, shape1, un, zero) -> xm1
LLL1[local_function](Y, shape2) -> xm2c
Cast(xm2c, to=1) -> xm2
MatMul(xm1, xm2) -> xm
Reshape(xm, shape3) -> Z
output: name='Z' type=dtype('float32') shape=[3, 5, 320, 640]
----- function name=LLL0 domain=local_function
opset: domain='' version=18
input: 'X'
input: 'shape1'
input: 'un'
input: 'zero'
Unsqueeze(X, zero) -> xu1
Unsqueeze(xu1, un) -> xu2
Reshape(xu2, shape1) -> xm1
output: name='xm1' type=? shape=?
----- function name=LLL1 domain=local_function
opset: domain='' version=18
input: 'Y'
input: 'shape2'
Reshape(Y, shape2) -> xm2c
output: name='xm2c' type=? shape=?
- yobx.helpers.onnx_helper.make_subfunction(name: str, nodes: List[NodeProto], opset_imports: Sequence[OperatorSetIdProto], output_names: List[str], domain: str = 'local_function') FunctionProto[source]#
Creates a function with the given list of nodes. It computes the minimum list of inputs needed by these nodes. The function assumes the nodes are sorted.
- Parameters:
name – function name
nodes – list of nodes
opset_imports – opset imports
output_names – desired outputs
domain – function domain
- Returns:
function proto
- yobx.helpers.onnx_helper.np_dtype_to_tensor_dtype(dtype: dtype) int[source]#
Converts a numpy dtype to an onnx element type.
- yobx.helpers.onnx_helper.onnx_dtype_name(itype: int, exc: bool = True) str[source]#
Returns the ONNX name for a specific element type.
<<<
import onnx

from yobx.helpers.onnx_helper import onnx_dtype_name

itype = onnx.TensorProto.BFLOAT16
print(onnx_dtype_name(itype))
print(onnx_dtype_name(7))
>>>
BFLOAT16
INT64
- yobx.helpers.onnx_helper.onnx_find(onx: str | ModelProto, verbose: int = 0, watch: Set[str] | None = None) List[NodeProto | TensorProto][source]#
Looks for nodes producing or consuming some results.
- Parameters:
onx – model
verbose – verbosity
watch – names to search for
- Returns:
list of nodes
- yobx.helpers.onnx_helper.overwrite_shape_in_model_proto(model: ModelProto, n_in: int | None = None) ModelProto[source]#
Removes inferred shapes and overwrites input shapes to make them all dynamic.
n_in indicates the number of inputs for which the shape must be rewritten.
- yobx.helpers.onnx_helper.pretty_onnx(onx: AttributeProto | FunctionProto | GraphProto | ModelProto | NodeProto | SparseTensorProto | TensorProto | ValueInfoProto | str, with_attributes: bool = False, highlight: Set[str] | None = None, shape_inference: bool = False) str[source]#
Displays an onnx proto in a more readable way.
- Parameters:
with_attributes – displays attributes as well, if only a node is printed
highlight – to highlight some names
shape_inference – run shape inference before printing the model
- Returns:
text
- yobx.helpers.onnx_helper.replace_static_dimensions_by_strings(model: ModelProto) Tuple[ModelProto, Dict[str, str | int]][source]#
Replaces static dimensions with dynamic (named) dimensions in a model.
- Parameters:
model – ModelProto
- Returns:
the modified model, a mapping
{new_name: value}
- yobx.helpers.onnx_helper.same_function_proto(f1: FunctionProto, f2: FunctionProto, verbose: int = 0) str | bool[source]#
Compares two functions and tells if they are equal.
- Parameters:
f1 – first function
f2 – second function
verbose – verbosity; when the comparison fails, the function returns a string explaining why
- Returns:
True if the functions are equal, otherwise False or a string explaining the difference
The two functions may have different names.
- yobx.helpers.onnx_helper.shadowing_names(proto: FunctionProto | GraphProto | ModelProto | Sequence[NodeProto | None], verbose: int = 0, existing: Set[str] | None = None, shadow_context: Set[str] | None = None, post_shadow_context: Set[str] | None = None) Tuple[Set[str], Set[str], Set[str]][source]#
Returns the shadowing names, the names created in the main graph after they were created in a subgraph, and the names created by the nodes.
- yobx.helpers.onnx_helper.str_tensor_proto_type() str[source]#
Returns the following string:
<<<
from yobx.helpers.onnx_helper import str_tensor_proto_type

print(str_tensor_proto_type())
>>>
0:UNDEFINED, 1:FLOAT, 2:UINT8, 3:INT8, 4:UINT16, 5:INT16, 6:INT32, 7:INT64, 8:STRING, 9:BOOL, 10:FLOAT16, 11:DOUBLE, 12:UINT32, 13:UINT64, 14:COMPLEX64, 15:COMPLEX128, 16:BFLOAT16, 17:FLOAT8E4M3FN, 18:FLOAT8E4M3FNUZ, 19:FLOAT8E5M2, 20:FLOAT8E5M2FNUZ, 21:UINT4, 22:INT4, 23:FLOAT4E2M1, 24:FLOAT8E8M0, 25:UINT2, 26:INT2
- yobx.helpers.onnx_helper.tensor_dtype_to_np_dtype(tensor_dtype: int) dtype[source]#
Converts an onnx.TensorProto data_type to the corresponding numpy dtype. It can be used when building tensors.
- Parameters:
tensor_dtype – onnx.TensorProto’s data_type
- Returns:
numpy’s data_type
- yobx.helpers.onnx_helper.type_info(itype: int, att: str)[source]#
Returns the minimum or maximum value for a type.
- Parameters:
itype – onnx type
att – ‘min’ or ‘max’
- Returns:
value
- yobx.helpers.onnx_helper.unary_like_op_types() Set[str][source]#
Returns the set of unary-like operators. They do not change the shape but may change the type.
<<<
import pprint

from yobx.helpers.onnx_helper import unary_like_op_types

pprint.pprint(unary_like_op_types())
>>>
{'Abs', 'Acos', 'Acosh', 'Asin', 'Asinh', 'Atan', 'Atanh', 'BitShift', 'BitwiseNot', 'Cast', 'CastLike', 'Ceil', 'Celu', 'Clip', 'Cos', 'Cosh', 'CumSum', 'DequantizeLinear', 'DynamicQuantizeLinear', 'Elu', 'Erf', 'Exp', 'Floor', 'HardSigmoid', 'HardSwish', 'IsInf', 'LRN', 'LeakyRelu', 'Log', 'LogSoftmax', 'LpNormalization', 'MeanVarianceNormalization', 'Mish', 'Neg', 'Not', 'PRelu', 'QuantizeLinear', 'Reciprocal', 'Relu', 'Round', 'Selu', 'Shrink', 'Sigmoid', 'Sign', 'Sin', 'Sinh', 'Softmax', 'SoftmaxCrossEntropyLoss', 'Softplus', 'Softsign', 'Sqrt', 'Tan', 'Tanh', 'ThresholdRelu', 'ThresholdedRelu', 'Trilu', 'Trunc'}