onnx_diagnostic.helpers.onnx_helper¶
- class onnx_diagnostic.helpers.onnx_helper.NodeCoordinates(node: TensorProto | NodeProto | str, path: Tuple[Tuple[int, str, str], ...])[source][source]¶
- A way to locate a node. path is a tuple of triples, each holding a node index, node type, and node name.
- class onnx_diagnostic.helpers.onnx_helper.ResultFound(name: str, producer: NodeCoordinates | None, consumer: NodeCoordinates | None)[source][source]¶
- Class returned by enumerate_results().
- onnx_diagnostic.helpers.onnx_helper.check_model_ort(onx: ModelProto, providers: str | List[Any] | None = None, dump_file: str | None = None) onnxruntime.InferenceSession[source][source]¶
- Loads a model with onnxruntime. - Parameters:
- onx – ModelProto 
- providers – list of providers, None for CPU, cpu for CPU, cuda for CUDA
- dump_file – if not empty, dumps the model into this file if an error happens
 
- Returns:
- InferenceSession 
 
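- A minimal usage sketch, assuming onnxruntime is installed; the toy Add model below is illustrative:
<<<
import onnx
import onnx.helper as oh
from onnx_diagnostic.helpers.onnx_helper import check_model_ort

# toy model computing Y = X + X
model = oh.make_model(
    oh.make_graph(
        [oh.make_node("Add", ["X", "X"], ["Y"])],
        "g",
        [oh.make_tensor_value_info("X", onnx.TensorProto.FLOAT, [2])],
        [oh.make_tensor_value_info("Y", onnx.TensorProto.FLOAT, [2])],
    ),
    opset_imports=[oh.make_opsetid("", 18)],
)
sess = check_model_ort(model)  # providers=None selects the CPU provider
>>>
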
- onnx_diagnostic.helpers.onnx_helper.convert_endian(tensor: TensorProto) None[source][source]¶
- Call to convert endianness of raw data in tensor. - Args:
- tensor: TensorProto to be converted. 
 
- onnx_diagnostic.helpers.onnx_helper.dtype_to_tensor_dtype(dt: dtype | torch.dtype) int[source][source]¶
- Converts a torch dtype or numpy dtype into an onnx element type. - Parameters:
- dt – dtype
- Returns:
- onnx type 
 
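- A quick illustration, assuming torch is installed; both dtype families map to the same ONNX element type:
<<<
import numpy as np
import torch
import onnx
from onnx_diagnostic.helpers.onnx_helper import dtype_to_tensor_dtype

# numpy and torch float32 both map to onnx.TensorProto.FLOAT
assert dtype_to_tensor_dtype(np.dtype("float32")) == onnx.TensorProto.FLOAT
assert dtype_to_tensor_dtype(torch.float32) == onnx.TensorProto.FLOAT
>>>
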
- onnx_diagnostic.helpers.onnx_helper.enumerate_results(proto: FunctionProto | GraphProto | ModelProto | Sequence[NodeProto], name: Set[str] | str, verbose: int = 0, coordinates: List[Tuple[int, str, str]] | None = None) Iterator[ResultFound][source][source]¶
- Iterates over all nodes and attributes to find where a name is used. - Parameters:
- proto – a proto 
- name – name or names to find 
- verbose – verbosity 
- coordinates – coordinates of a node 
 
- Returns:
- iterator on ResultFound
 
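- A minimal sketch on a toy two-node graph; the names X, T, Y are illustrative:
<<<
import onnx.helper as oh
from onnx import TensorProto
from onnx_diagnostic.helpers.onnx_helper import enumerate_results

# toy graph: T = Abs(X), Y = Neg(T)
model = oh.make_model(
    oh.make_graph(
        [
            oh.make_node("Abs", ["X"], ["T"]),
            oh.make_node("Neg", ["T"], ["Y"]),
        ],
        "g",
        [oh.make_tensor_value_info("X", TensorProto.FLOAT, [2])],
        [oh.make_tensor_value_info("Y", TensorProto.FLOAT, [2])],
    )
)
# every place where the result "T" is produced or consumed
found = list(enumerate_results(model, "T"))
>>>
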
- onnx_diagnostic.helpers.onnx_helper.from_array_extended(tensor: Buffer | _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | complex | bytes | str | _NestedSequence[complex | bytes | str], name: str | None = None) TensorProto[source][source]¶
- Converts an array into an onnx.TensorProto. - Parameters:
- tensor – numpy array or torch tensor 
- name – name 
 
- Returns:
- TensorProto 
 
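- A minimal sketch; the array and its name are illustrative:
<<<
import numpy as np
from onnx_diagnostic.helpers.onnx_helper import from_array_extended

# turn a numpy array into a named TensorProto
arr = np.arange(6, dtype=np.float32).reshape((2, 3))
proto = from_array_extended(arr, name="weights")
assert proto.name == "weights"
>>>
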
- onnx_diagnostic.helpers.onnx_helper.from_array_ml_dtypes(arr: Buffer | _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | complex | bytes | str | _NestedSequence[complex | bytes | str], name: str | None = None) TensorProto[source][source]¶
- Converts a numpy array to a tensor def assuming the dtype is defined in ml_dtypes. - Args:
- arr: a numpy array.
- name: (optional) the name of the tensor.
- Returns:
- TensorProto: the converted tensor def. 
 
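- A minimal sketch, assuming the ml_dtypes package is installed:
<<<
import numpy as np
import ml_dtypes
from onnx_diagnostic.helpers.onnx_helper import from_array_ml_dtypes

# a bfloat16 array backed by ml_dtypes
arr = np.array([0.5, -1.5, 2.0], dtype=ml_dtypes.bfloat16)
proto = from_array_ml_dtypes(arr, name="bf16_weights")
>>>
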
- onnx_diagnostic.helpers.onnx_helper.get_onnx_signature(model: ModelProto) Tuple[Tuple[str, Any], ...][source][source]¶
- Produces a tuple of tuples corresponding to the signatures. - Parameters:
- model – model 
- Returns:
- signature 
 
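- A minimal sketch on a toy Identity model; the dimension name batch is illustrative:
<<<
import onnx.helper as oh
from onnx import TensorProto
from onnx_diagnostic.helpers.onnx_helper import get_onnx_signature

model = oh.make_model(
    oh.make_graph(
        [oh.make_node("Identity", ["X"], ["Y"])],
        "g",
        [oh.make_tensor_value_info("X", TensorProto.FLOAT, ["batch", 4])],
        [oh.make_tensor_value_info("Y", TensorProto.FLOAT, ["batch", 4])],
    )
)
sig = get_onnx_signature(model)  # one tuple per input, starting with its name
>>>
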
- onnx_diagnostic.helpers.onnx_helper.iterator_initializer_constant(model: FunctionProto | GraphProto | ModelProto, use_numpy: bool = True, prefix: str = '') Iterator[Tuple[str, torch.Tensor | ndarray]][source][source]¶
- Iterates over initializers and constants in an onnx model. - Parameters:
- model – model 
- use_numpy – use numpy or pytorch 
- prefix – for subgraph 
 
- Returns:
- iterator 
 
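- A minimal sketch on a toy model with a single initializer:
<<<
import numpy as np
import onnx.helper as oh
from onnx import TensorProto
from onnx.numpy_helper import from_array
from onnx_diagnostic.helpers.onnx_helper import iterator_initializer_constant

model = oh.make_model(
    oh.make_graph(
        [oh.make_node("Add", ["X", "cst"], ["Y"])],
        "g",
        [oh.make_tensor_value_info("X", TensorProto.FLOAT, [3])],
        [oh.make_tensor_value_info("Y", TensorProto.FLOAT, [3])],
        [from_array(np.array([1.0, 2.0, 3.0], dtype=np.float32), name="cst")],
    )
)
# yields ("cst", <numpy array>) since use_numpy=True
pairs = list(iterator_initializer_constant(model, use_numpy=True))
>>>
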
- onnx_diagnostic.helpers.onnx_helper.np_dtype_to_tensor_dtype(dt: dtype) int[source][source]¶
- Converts a numpy dtype into an onnx element type. - Parameters:
- dt – dtype
- Returns:
- onnx type 
 
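- A quick illustration:
<<<
import numpy as np
import onnx
from onnx_diagnostic.helpers.onnx_helper import np_dtype_to_tensor_dtype

# numpy int64 maps to onnx.TensorProto.INT64
assert np_dtype_to_tensor_dtype(np.dtype("int64")) == onnx.TensorProto.INT64
>>>
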
- onnx_diagnostic.helpers.onnx_helper.onnx_dtype_name(itype: int, exc: bool = True) str[source][source]¶
- Returns the ONNX name for a specific element type.
<<<
import onnx
from onnx_diagnostic.helpers.onnx_helper import onnx_dtype_name

itype = onnx.TensorProto.BFLOAT16
print(onnx_dtype_name(itype))
print(onnx_dtype_name(7))
>>>
BFLOAT16
INT64
- onnx_diagnostic.helpers.onnx_helper.onnx_dtype_to_np_dtype(itype: int) Any[source][source]¶
- Converts an onnx element type into a numpy dtype. That includes ml_dtypes dtypes. - Parameters:
- itype – onnx dtype
- Returns:
- numpy dtype 
 
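- A quick illustration:
<<<
import numpy as np
import onnx
from onnx_diagnostic.helpers.onnx_helper import onnx_dtype_to_np_dtype

# onnx.TensorProto.FLOAT maps back to numpy float32
assert onnx_dtype_to_np_dtype(onnx.TensorProto.FLOAT) == np.dtype("float32")
>>>
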
- onnx_diagnostic.helpers.onnx_helper.onnx_find(onx: str | ModelProto, verbose: int = 0, watch: Set[str] | None = None) List[NodeProto | TensorProto][source][source]¶
- Looks for nodes producing or consuming some results. - Parameters:
- onx – model 
- verbose – verbosity 
- watch – names to search for 
 
- Returns:
- list of nodes 
 
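- A minimal sketch looking for the intermediate result T in a toy graph:
<<<
import onnx.helper as oh
from onnx import TensorProto
from onnx_diagnostic.helpers.onnx_helper import onnx_find

model = oh.make_model(
    oh.make_graph(
        [
            oh.make_node("Abs", ["X"], ["T"]),
            oh.make_node("Neg", ["T"], ["Y"]),
        ],
        "g",
        [oh.make_tensor_value_info("X", TensorProto.FLOAT, [2])],
        [oh.make_tensor_value_info("Y", TensorProto.FLOAT, [2])],
    )
)
# nodes producing or consuming "T"
nodes = onnx_find(model, watch={"T"})
>>>
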
- onnx_diagnostic.helpers.onnx_helper.onnx_lighten(onx: str | ModelProto, verbose: int = 0) Tuple[ModelProto, Dict[str, Dict[str, float]]][source][source]¶
- Creates a model without big initializers but stores statistics into dictionaries. The function can be reversed with onnx_diagnostic.helpers.onnx_helper.onnx_unlighten(). The model is modified in place. - Parameters:
- onx – model 
- verbose – verbosity 
 
- Returns:
- new model, statistics 
 
- onnx_diagnostic.helpers.onnx_helper.onnx_unlighten(onx: str | ModelProto, stats: Dict[str, Dict[str, float]] | None = None, verbose: int = 0) ModelProto[source][source]¶
- Restores the model produced by onnx_diagnostic.helpers.onnx_helper.onnx_lighten(). The model is modified in place. - Parameters:
- onx – model 
- stats – statistics; can be None if onx is a filename, in which case the function loads the file <filename>.stats, assumed to be in JSON format
- verbose – verbosity 
 
- Returns:
- new model
 
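- A minimal round-trip sketch; the toy MatMul model and its single large initializer are illustrative:
<<<
import numpy as np
import onnx.helper as oh
from onnx import TensorProto
from onnx.numpy_helper import from_array
from onnx_diagnostic.helpers.onnx_helper import onnx_lighten, onnx_unlighten

big = np.random.rand(1024, 1024).astype(np.float32)
model = oh.make_model(
    oh.make_graph(
        [oh.make_node("MatMul", ["X", "W"], ["Y"])],
        "g",
        [oh.make_tensor_value_info("X", TensorProto.FLOAT, [1, 1024])],
        [oh.make_tensor_value_info("Y", TensorProto.FLOAT, [1, 1024])],
        [from_array(big, name="W")],
    )
)
light, stats = onnx_lighten(model)       # removes the big initializer, keeps its statistics
restored = onnx_unlighten(light, stats)  # restores the model using the statistics
>>>
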
- onnx_diagnostic.helpers.onnx_helper.pretty_onnx(onx: FunctionProto | GraphProto | ModelProto | ValueInfoProto | str, with_attributes: bool = False, highlight: Set[str] | None = None, shape_inference: bool = False) str[source][source]¶
- Displays an onnx proto in a readable way. - Parameters:
- onx – proto to display
- with_attributes – displays attributes as well, if only a node is printed
- highlight – to highlight some names 
- shape_inference – run shape inference before printing the model 
 
- Returns:
- text 
 
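- A minimal sketch on a toy model:
<<<
import onnx.helper as oh
from onnx import TensorProto
from onnx_diagnostic.helpers.onnx_helper import pretty_onnx

model = oh.make_model(
    oh.make_graph(
        [oh.make_node("Identity", ["X"], ["Y"])],
        "g",
        [oh.make_tensor_value_info("X", TensorProto.FLOAT, [2])],
        [oh.make_tensor_value_info("Y", TensorProto.FLOAT, [2])],
    )
)
text = pretty_onnx(model, highlight={"X"})  # readable text rendering of the model
>>>
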
- onnx_diagnostic.helpers.onnx_helper.shadowing_names(proto: FunctionProto | GraphProto | ModelProto | Sequence[NodeProto], verbose: int = 0, existing: Set[str] | None = None, shadow_context: Set[str] | None = None, post_shadow_context: Set[str] | None = None) Tuple[Set[str], Set[str], Set[str]][source][source]¶
- Returns the shadowing names, the names created in the main graph after they were created in a subgraph, and the names created by the nodes.
- onnx_diagnostic.helpers.onnx_helper.tensor_dtype_to_np_dtype(tensor_dtype: int) dtype[source][source]¶
- Converts a TensorProto’s data_type to the corresponding numpy dtype. It can be used while making a tensor. - Parameters:
- tensor_dtype – TensorProto’s data_type 
- Returns:
- numpy’s data_type 
 
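- A quick illustration:
<<<
import numpy as np
import onnx
from onnx_diagnostic.helpers.onnx_helper import tensor_dtype_to_np_dtype

# onnx.TensorProto.FLOAT16 maps to numpy float16
assert tensor_dtype_to_np_dtype(onnx.TensorProto.FLOAT16) == np.dtype("float16")
>>>
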
- onnx_diagnostic.helpers.onnx_helper.tensor_statistics(tensor: ndarray | TensorProto) Dict[str, float | str][source][source]¶
- Produces statistics on a tensor. - Parameters:
- tensor – tensor 
- Returns:
- statistics 
<<<
import pprint
import numpy as np
from onnx_diagnostic.helpers.onnx_helper import tensor_statistics

t = np.random.rand(40, 50).astype(np.float16)
pprint.pprint(tensor_statistics(t))
>>>
{'>0.0': 2000,
 '>0.00010001659393310547': 2000,
 '>0.0010004043579101562': 1995,
 '>0.01000213623046875': 1979,
 '>0.0999755859375': 1807,
 '>0.5': 1017,
 '>1.0': 0,
 '>1.0013580322265625e-05': 2000,
 '>1.0132789611816406e-06': 2000,
 '>1.1920928955078125e-07': 2000,
 '>1.9599609375': 0,
 '>10.0': 0,
 '>100.0': 0,
 '>1000.0': 0,
 '>10000.0': 0,
 'itype': 10,
 'max': 1.0,
 'mean': 0.50439453125,
 'min': 0.00023746490478515625,
 'nnan': 0.0,
 'numel': 2000,
 'q0.1': 0.10316162109375,
 'q0.2': 0.20297851562500002,
 'q0.3': 0.30559082031249996,
 'q0.4': 0.409521484375,
 'q0.5': 0.511962890625,
 'q0.6': 0.6095703124999999,
 'q0.7': 0.70849609375,
 'q0.8': 0.79541015625,
 'q0.9': 0.8980957031250002,
 'shape': '40x50',
 'size': 4000,
 'std': 0.28759765625,
 'stype': 'FLOAT16'}