onnx_extended.ortops.optim#

change_onnx_operator_domain#

onnx_extended.ortops.optim.optimize.change_onnx_operator_domain(onx: ModelProto | GraphProto | FunctionProto, op_type: str, op_domain: str = '', new_op_type: str | None = None, new_op_domain: str | None = None, new_opset: int | None = None, **kwargs: Dict[str, Any]) → ModelProto | GraphProto | FunctionProto[source]#

Replaces an operator with another one, either in the same domain or in a different one.

Parameters:
  • onx – proto to modify

  • op_type – operator to look for

  • op_domain – domain to look for

  • new_op_type – new operator name or None to keep the same name

  • new_op_domain – new domain name or None to keep the same domain

  • new_opset – new opset for the new domain; if not specified, it is 1 for any domain other than “”

  • kwargs – attributes to modify; set a value to None to remove the corresponding attribute

Returns:

same type as the input

The function is not recursive yet.
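For instance, the following sketch (assuming onx is a ModelProto whose graph contains a TreeEnsembleRegressor node from the domain "ai.onnx.ml") moves that node to the custom domain listed further below; the removed attribute is only illustrative.

<<<

from onnx_extended.ortops.optim.optimize import change_onnx_operator_domain

# `onx` is assumed to be a ModelProto containing a TreeEnsembleRegressor
# node from the standard domain "ai.onnx.ml".
new_onx = change_onnx_operator_domain(
    onx,
    op_type="TreeEnsembleRegressor",
    op_domain="ai.onnx.ml",
    new_op_domain="onnx_extented.ortops.optim.cpu",  # custom domain listed below
    nodes_hitrates=None,  # illustrative: removes this optional attribute
)
# Note: the custom TreeEnsemble kernels below also expect nodes_modes as a
# single comma-separated string, see the TreeEnsemble sections further down.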

get_ort_ext_libs#

onnx_extended.ortops.optim.cpu.get_ort_ext_libs() → List[str][source]#

Returns the list of libraries implementing new simple onnxruntime kernels for the CPUExecutionProvider.
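A hedged sketch of how these libraries are typically registered with onnxruntime so the kernels listed below can be resolved by an InferenceSession; model is assumed to be a ModelProto using one of them.

<<<

from onnxruntime import InferenceSession, SessionOptions
from onnx_extended.ortops.optim.cpu import get_ort_ext_libs

opts = SessionOptions()
for lib in get_ort_ext_libs():
    # each returned path is a shared library containing custom kernels
    opts.register_custom_ops_library(lib)

# `model` is assumed to be a ModelProto using one of the kernels below.
sess = InferenceSession(
    model.SerializeToString(), opts, providers=["CPUExecutionProvider"]
)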

List of implemented kernels

<<<

from onnx_extended.ortops.optim.cpu import documentation

print("\n".join(documentation()))

>>>

onnx_extented.ortops.optim.cpu.DenseToSparse#

Converts a dense tensor into a sparse one. All null values are skipped.

Provider

CPUExecutionProvider

Inputs

  • X (T): 2D tensor

Outputs

  • Y (T): 1D tensor

Constraints

  • T: float
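A minimal construction sketch for DenseToSparse: the node only needs to carry the custom domain so that onnxruntime can match the kernel once the libraries returned by get_ort_ext_libs are registered.

<<<

from onnx.helper import make_node

# one DenseToSparse node; the 2D float input "X" becomes the 1D sparse "Y"
node = make_node(
    "DenseToSparse", ["X"], ["Y"],
    domain="onnx_extented.ortops.optim.cpu",
)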

onnx_extented.ortops.optim.cpu.SparseToDense#

Converts a sparse tensor into a dense one. All missing values are replaced by 0.

Provider

CPUExecutionProvider

Inputs

  • X (T): 1D tensor

Outputs

  • Y (T): 2D tensor

Constraints

  • T: float
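The sketch below builds a small round-trip model (DenseToSparse followed by SparseToDense) and runs it on the CPUExecutionProvider; the opset value 1 for the custom domain is an assumption, and the output is expected to reproduce the dense input with the skipped zeros restored.

<<<

import numpy as np
from onnx import TensorProto
from onnx.helper import (
    make_graph, make_model, make_node, make_opsetid, make_tensor_value_info,
)
from onnxruntime import InferenceSession, SessionOptions
from onnx_extended.ortops.optim.cpu import get_ort_ext_libs

domain = "onnx_extented.ortops.optim.cpu"
nodes = [
    make_node("DenseToSparse", ["X"], ["S"], domain=domain),
    make_node("SparseToDense", ["S"], ["Y"], domain=domain),
]
graph = make_graph(
    nodes, "roundtrip",
    [make_tensor_value_info("X", TensorProto.FLOAT, [None, None])],
    [make_tensor_value_info("Y", TensorProto.FLOAT, [None, None])],
)
model = make_model(graph, opset_imports=[
    make_opsetid("", 18),
    make_opsetid(domain, 1),  # assumption: opset 1 for the custom domain
])

opts = SessionOptions()
for lib in get_ort_ext_libs():
    opts.register_custom_ops_library(lib)
sess = InferenceSession(model.SerializeToString(), opts,
                        providers=["CPUExecutionProvider"])
x = np.array([[0.0, 1.5], [0.0, 2.0]], dtype=np.float32)
got = sess.run(None, {"X": x})[0]  # expected to equal x, zeros restored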

onnx_extented.ortops.optim.cpu.TfIdfVectorizer#

Implements TfIdfVectorizer.

Provider

CPUExecutionProvider

Attributes

See onnx TfIdfVectorizer. The implementation does not support strings. It adds one attribute:

  • sparse: INT64, default is 0; if set to 1, the output and the computation are sparse.

Inputs

  • X (T1): tensor of type T1

Outputs

  • label (T3): labels of type T3

  • Y (T2): probabilities of type T2

Constraints

  • T1: float, double

  • T2: float, double

  • T3: int64
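Since the attributes follow the standard operator, an existing TfIdfVectorizer node can be moved to this kernel with change_onnx_operator_domain; the sketch below assumes model contains such a node in the default domain and simply adds the extra attribute.

<<<

from onnx_extended.ortops.optim.optimize import change_onnx_operator_domain

# `model` is assumed to contain a standard TfIdfVectorizer node.
new_model = change_onnx_operator_domain(
    model,
    op_type="TfIdfVectorizer",
    op_domain="",
    new_op_domain="onnx_extented.ortops.optim.cpu",
    sparse=1,  # kernel-specific attribute: sparse output and computation
)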

onnx_extented.ortops.optim.cpu.TreeEnsembleClassifier#

Implements TreeEnsembleClassifier.

Provider

CPUExecutionProvider

Attributes

See onnx TreeEnsembleClassifier. The implementation does not support string labels. The only change:

nodes_modes: the node modes are passed as a single string, concatenated with ','

Inputs

  • X (T1): tensor of type T1

Outputs

  • label (T3): labels of type T3

  • Y (T2): probabilities of type T2

Constraints

  • T1: float, double

  • T2: float, double

  • T3: int64
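A hedged conversion sketch: model is assumed to hold a single TreeEnsembleClassifier node from "ai.onnx.ml". Since this kernel stores nodes_modes as one comma-separated string, the attribute is rebuilt from the original strings before the domain is changed; removing the "BRANCH_" prefix is an assumption about the mode names this kernel expects.

<<<

from onnx_extended.ortops.optim.optimize import change_onnx_operator_domain

# `model` is assumed to contain one TreeEnsembleClassifier node ("ai.onnx.ml").
node = model.graph.node[0]
att = next(a for a in node.attribute if a.name == "nodes_modes")
modes = ",".join(s.decode("ascii") for s in att.strings).replace("BRANCH_", "")

new_model = change_onnx_operator_domain(
    model,
    op_type="TreeEnsembleClassifier",
    op_domain="ai.onnx.ml",
    new_op_domain="onnx_extented.ortops.optim.cpu",
    nodes_modes=modes,  # single comma-separated string expected by this kernel
)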

onnx_extented.ortops.optim.cpu.TreeEnsembleClassifierSparse#

Implements TreeEnsembleClassifier on a sparse input tensor.

Provider

CPUExecutionProvider

Attributes

See onnx TreeEnsembleClassifier. The implementation does not support string labels. The only change:

nodes_modes: the node modes are passed as a single string, concatenated with ','

Inputs

  • X (T1): tensor of type T1 (sparse)

Outputs

  • label (T3): labels of type T3

  • Y (T2): probabilities of type T2

Constraints

  • T1: float, double

  • T2: float, double

  • T3: int64
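A structural sketch only (the tree attributes are omitted, so these nodes alone do not form a complete model): the sparse classifier is assumed to consume the 1D sparse representation produced by DenseToSparse.

<<<

from onnx.helper import make_node

domain = "onnx_extented.ortops.optim.cpu"
to_sparse = make_node("DenseToSparse", ["X"], ["Xs"], domain=domain)
classifier = make_node(
    "TreeEnsembleClassifierSparse", ["Xs"], ["label", "Y"],
    domain=domain,
    # nodes_*, class_* and nodes_modes attributes omitted for brevity
)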

onnx_extented.ortops.optim.cpu.TreeEnsembleRegressor#

Implements TreeEnsembleRegressor.

Provider

CPUExecutionProvider

Attributes

See onnx TreeEnsembleRegressor. The only change:

nodes_modes: the node modes are passed as a single string, concatenated with ','

Inputs

  • X (T1): tensor of type T1

Outputs

  • Y (T2): prediction of type T2

Constraints

  • T1: float, double

  • T2: float, double
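A helper in the spirit of the TreeEnsemble optimization example referenced at the end of this page: it converts a standard regressor with the same attribute handling as the classifier sketch above and forwards extra keyword arguments as attributes, which makes it usable as the transform argument of optimize_model below.

<<<

from onnx import ModelProto
from onnx_extended.ortops.optim.optimize import change_onnx_operator_domain

def to_custom_regressor(model: ModelProto, **kwargs) -> ModelProto:
    # same nodes_modes handling as in the classifier sketch above
    node = next(n for n in model.graph.node
                if n.op_type == "TreeEnsembleRegressor")
    att = next(a for a in node.attribute if a.name == "nodes_modes")
    modes = ",".join(s.decode("ascii") for s in att.strings).replace("BRANCH_", "")
    return change_onnx_operator_domain(
        model,
        op_type="TreeEnsembleRegressor",
        op_domain="ai.onnx.ml",
        new_op_domain="onnx_extented.ortops.optim.cpu",
        nodes_modes=modes,
        **kwargs,  # extra keyword arguments become attributes on the new node
    )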

onnx_extented.ortops.optim.cpu.TreeEnsembleRegressorSparse#

Implements TreeEnsembleRegressor on a sparse input tensor.

Provider

CPUExecutionProvider

Attributes

See onnx TreeEnsembleRegressor. The only change:

nodes_modes: the node modes are passed as a single string, concatenated with ','

Inputs

  • X (T1): tensor of type T1 (sparse)

Outputs

  • Y (T2): prediction of type T2

Constraints

  • T1: float, double

  • T2: float, double
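A hedged variant of the regressor conversion: new_op_type renames the node while changing its domain, which targets the sparse kernel instead of the dense one. The graph must then also be fed a sparse input, for instance by inserting a DenseToSparse node in front (not shown). model and modes are assumed to be built as in the previous sketches.

<<<

from onnx_extended.ortops.optim.optimize import change_onnx_operator_domain

# `model` holds a standard TreeEnsembleRegressor; `modes` is the
# comma-separated nodes_modes string built as in the previous sketches.
new_model = change_onnx_operator_domain(
    model,
    op_type="TreeEnsembleRegressor",
    op_domain="ai.onnx.ml",
    new_op_type="TreeEnsembleRegressorSparse",
    new_op_domain="onnx_extented.ortops.optim.cpu",
    nodes_modes=modes,
)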

optimize_model#

onnx_extended.ortops.optim.optimize.optimize_model(onx: ModelProto, feeds: Dict[str, ndarray], transform: Callable[[ModelProto], ModelProto], session: Callable[[ModelProto], Any], params: Dict[str, List[Any]], baseline: Callable[[ModelProto], Any] | None = None, verbose: bool = False, number: int = 10, repeat: int = 10, warmup: int = 5, n_tries: int = 2, sleep: float = 0.1) → List[Dict[str, str | float]][source]#

Optimizes a model by trying out many possibilities.

Parameters:
  • onx – ModelProto

  • feeds – inputs as a dictionary of numpy arrays

  • transform – function taking a ModelProto and returning a ModelProto based on the values coming from params

  • session – function which takes a modified ModelProto and returns a session

  • params – dictionary of values to test { param_name: [ param_values ] }

  • baseline – function which takes a ModelProto and returns a session, used as the baseline

  • verbose – use tqdm to show the improvement

  • number – parameter to measure_time

  • repeat – parameter to measure_time

  • warmup – parameter to measure_time

  • n_tries – number of times to measure; if the measurements return very different results, the values of number or repeat should be increased

  • sleep – time to sleep between two measurements

Returns:

list of results returned by measure_time

See TreeEnsemble optimization for an example.
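A condensed sketch in the spirit of that example, assuming optimize_model forwards each parameter combination to transform as keyword arguments: it reuses the to_custom_regressor helper sketched in the TreeEnsembleRegressor section, compares against the unmodified model, and tries a few values of a kernel attribute whose name and values are illustrative only.

<<<

import numpy as np
from onnxruntime import InferenceSession, SessionOptions
from onnx_extended.ortops.optim.cpu import get_ort_ext_libs
from onnx_extended.ortops.optim.optimize import optimize_model

# `model` is assumed to be a ModelProto with a TreeEnsembleRegressor node and
# `to_custom_regressor` the helper sketched in the TreeEnsembleRegressor section.
x = np.random.rand(1000, 10).astype(np.float32)

def make_session(onx):
    # session for the transformed model: the custom kernels must be registered
    opts = SessionOptions()
    for lib in get_ort_ext_libs():
        opts.register_custom_ops_library(lib)
    return InferenceSession(onx.SerializeToString(), opts,
                            providers=["CPUExecutionProvider"])

results = optimize_model(
    model,
    feeds={"X": x},
    transform=to_custom_regressor,
    session=make_session,
    # baseline: plain onnxruntime session, assumed to receive the unmodified model
    baseline=lambda onx: InferenceSession(
        onx.SerializeToString(), providers=["CPUExecutionProvider"]),
    params={"parallel_tree": [40, 80, 160]},  # illustrative attribute values
    verbose=True,
)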