onnx_extended.tools.onnx_nodes¶
convert_onnx_model¶
- onnx_extended.tools.onnx_nodes.convert_onnx_model(onnx_model: ModelProto | GraphProto | NodeProto | FunctionProto, opsets: Dict[str, int], recursive: bool = True, use_as_tensor_attributes: bool = True, verbose: int = 0, _from_opset: Dict[str, int] | None = None, debug_info: List[str] | None = None) ModelProto | GraphProto | NodeProto | FunctionProto [source]¶
Upgrades a model to the latest opsets.
- Parameters:
onnx_model – proto
opsets – target opsets, a dictionary mapping a domain to its version
recursive – looks into subgraphs
use_as_tensor_attributes – use attributes suffixed with as_tensor for trees
verbose – verbosity
_from_opset – tells which opset a node belongs to, only used when onnx_model is a NodeProto
debug_info – unused
- Returns:
new proto
enumerate_onnx_node_types¶
- onnx_extended.tools.onnx_nodes.enumerate_onnx_node_types(model: str | ModelProto | GraphProto, level: int = 0, shapes: Dict[str, TypeProto] | None = None, external: bool = True) Iterable[Dict[str, str | float]] [source]¶
Looks into types for every node in a model.
- Parameters:
model – a string or a proto
level – level (recursivity level)
shapes – known shapes, returned by onnx.shape_inference.infer_shapes
external – loads the external data if the model is loaded
- Returns:
an iterable of dictionaries which can be turned into a dataframe.
multiply_tree¶
- onnx_extended.tools.onnx_nodes.multiply_tree(node: NodeProto, n: int, random: bool = True) NodeProto [source]¶
Multiplies the number of trees in a TreeEnsemble operator. It replicates the existing trees but permutes feature ids and node values if random is True.
- Parameters:
node – tree ensemble operator
n – number of times the existing trees must be multiplied
random – randomly permute feature ids and thresholds
- Returns:
the new tree ensemble node
onnx_merge_models¶
- onnx_extended.tools.onnx_nodes.onnx_merge_models(m1: ModelProto, m2: ModelProto, io_map: List[Tuple[str, str]], verbose: int = 0) ModelProto [source]¶
Merges two models. The function also checks that the models define the same opsets (except for functions). If not, the most recent opset is selected.
- Parameters:
m1 – first model
m2 – second model
io_map – mapping between the outputs of the first model and the inputs of the second one
verbose – display some information if one of the model was updated
- Returns:
new model
onnx_remove_node_unused¶
- onnx_extended.tools.onnx_nodes.onnx_remove_node_unused(onnx_model, recursive=True, debug_info=None, **options)[source]¶
Removes unused nodes of the graph. An unused node is not involved in the output computation.
- Parameters:
onnx_model – onnx model
recursive – looks into subgraphs
debug_info – debug information (private)
options – unused
- Returns:
new onnx model
select_model_inputs_outputs¶
- onnx_extended.tools.onnx_nodes.select_model_inputs_outputs(model: ModelProto, outputs: List[str] | None = None, inputs: List[str] | None = None, infer_shapes: bool = True, overwrite: Dict[str, Any] | None = None, remove_unused: bool = True, verbose: int = 0)[source]¶
Takes a model and changes its outputs.
- Parameters:
model – ONNX model
inputs – new inputs, same ones if None
outputs – new outputs, same ones if None
infer_shapes – infer input and output shapes
overwrite – overwrite type and shapes for inputs or outputs, overwrite is a dictionary {‘name’: (numpy dtype, shape)}
remove_unused – remove unused nodes from the graph
verbose – display information while converting
- Returns:
modified model
The function removes unneeded nodes.
The following example shows how to change the inputs of a model to bypass its first nodes. Shape inference fails to determine the new input types, so they must be overwritten. verbose=1 prints the number of deleted nodes.
```python
import numpy
import onnx
from onnx_extended.tools.onnx_nodes import select_model_inputs_outputs

onx = onnx.load(path)
onx2 = select_model_inputs_outputs(
    onx, inputs=["a", "b"], infer_shapes=True, verbose=1,
    overwrite={'a': (numpy.int32, None), 'b': (numpy.int64, None)})
onnx.save(onx2, path2)
```