yobx.sklearn.ensemble.voting

yobx.sklearn.ensemble.voting.sklearn_voting_classifier(g: GraphBuilderExtendedProtocol, sts: Dict, outputs: List[str], estimator: VotingClassifier, X: str, name: str = 'voting_classifier') → str | Tuple[str, str]

Converts a sklearn.ensemble.VotingClassifier into ONNX.

Both voting='soft' and voting='hard' are supported.

Soft voting — average (weighted) class probabilities:

X ──[sub-est 0]──► (_, proba_0) (N, C)
X ──[sub-est 1]──► (_, proba_1) (N, C)
        Unsqueeze(axis=0) ──► proba_0 (1, N, C), proba_1 (1, N, C)
            Concat(axis=0) ──► stacked (E, N, C)
                ReduceMean(axis=0) ──► avg_proba (N, C)
                    ArgMax(axis=1) ──Cast──Gather(classes_) ──► label
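
The soft-voting pipeline above can be mirrored in NumPy (a sketch with hypothetical per-estimator probabilities; `classes_` stands in for the fitted classifier's class array):

```python
import numpy as np

# Hypothetical per-estimator probabilities for N=2 samples, C=3 classes.
proba_0 = np.array([[0.7, 0.2, 0.1],
                    [0.1, 0.3, 0.6]])
proba_1 = np.array([[0.5, 0.4, 0.1],
                    [0.2, 0.2, 0.6]])
classes_ = np.array([10, 20, 30])  # class labels as stored by scikit-learn

# Unsqueeze(axis=0) + Concat(axis=0): stack to (E, N, C).
stacked = np.stack([proba_0, proba_1], axis=0)
# ReduceMean(axis=0): average probabilities, shape (N, C).
avg_proba = stacked.mean(axis=0)
# ArgMax(axis=1) + Gather(classes_): winning column -> class label.
label = classes_[np.argmax(avg_proba, axis=1)]
```
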

Hard voting — majority vote via one-hot vote accumulation:

X ──[sub-est 0]──► label_0 (N,)
X ──[sub-est 1]──► label_1 (N,)
  label_to_index ──► idx_0 (N,), idx_1 (N,)
      OneHot ──► votes_0 (N, C), votes_1 (N, C)
          Add ──► total_votes (N, C)
              ArgMax(axis=1) ──Cast──Gather(classes_) ──► label
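
The hard-voting accumulation can likewise be sketched in NumPy (hypothetical sub-estimator labels; `np.searchsorted` plays the role of `label_to_index`, which works because `classes_` is sorted):

```python
import numpy as np

classes_ = np.array([10, 20, 30])
# Hypothetical label predictions from three sub-estimators, N=3 samples.
label_0 = np.array([10, 20, 30])
label_1 = np.array([10, 30, 30])
label_2 = np.array([20, 30, 30])

C = len(classes_)
votes = np.zeros((3, C))
for labels in (label_0, label_1, label_2):
    # label_to_index: map each predicted label to its column in classes_.
    idx = np.searchsorted(classes_, labels)
    # OneHot + Add: one vote per sub-estimator, accumulated per class.
    votes += np.eye(C)[idx]
# ArgMax(axis=1) + Gather(classes_): majority label per sample.
label = classes_[np.argmax(votes, axis=1)]
```
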

When weights is not None, the plain average (soft voting) or the unit one-hot vote accumulation (hard voting) is replaced by its weighted counterpart.
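
For soft voting, the weighted variant amounts to a weighted mean over the estimator axis (a sketch with made-up numbers):

```python
import numpy as np

# (E, N, C): two estimators, one sample, two classes.
proba = np.array([
    [[0.9, 0.1]],
    [[0.2, 0.8]],
])
weights = np.array([1.0, 3.0])

# Weighted mean over axis 0, i.e. sum_i w_i * proba_i / sum_i w_i.
avg_proba = np.average(proba, axis=0, weights=weights)
```
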

Parameters:
  • g – the graph builder to add nodes to

  • sts – shapes and types defined by scikit-learn

  • outputs – desired output tensor names; two entries for soft voting (label + probabilities), one entry for hard voting (label only)

  • estimator – a fitted VotingClassifier

  • X – name of the input tensor

  • name – prefix used for names of nodes added by this converter

Returns:

the label tensor name (hard voting), or a tuple of (label, probabilities) tensor names (soft voting)

yobx.sklearn.ensemble.voting.sklearn_voting_regressor(g: GraphBuilderExtendedProtocol, sts: Dict, outputs: List[str], estimator: VotingRegressor, X: str, name: str = 'voting_regressor') → str

Converts a sklearn.ensemble.VotingRegressor into ONNX.

Each sub-estimator’s predictions are averaged (optionally weighted).

Graph structure (equal weights, two sub-estimators as an example):

X ──[sub-est 0]──► pred_0 (N,)
X ──[sub-est 1]──► pred_1 (N,)
         Unsqueeze(axis=1) ──► pred_0 (N,1), pred_1 (N,1)
                Concat(axis=1) ──► stacked (N, E)
                    ReduceMean(axis=1) ──► predictions (N,)
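
The same stacking-and-averaging can be traced in NumPy (hypothetical per-estimator predictions):

```python
import numpy as np

# Hypothetical sub-estimator predictions for N=3 samples.
pred_0 = np.array([1.0, 2.0, 3.0])
pred_1 = np.array([3.0, 2.0, 1.0])

# Unsqueeze(axis=1) + Concat(axis=1): shape (N, E).
stacked = np.stack([pred_0, pred_1], axis=1)
# ReduceMean(axis=1): average over the estimator axis, back to (N,).
predictions = stacked.mean(axis=1)
```
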

With weights the mean is replaced by a weighted sum followed by division by the sum of weights.
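
The weighted case reduces to a weighted sum over the estimator axis divided by the total weight (illustrative numbers only):

```python
import numpy as np

preds = np.array([[1.0, 3.0],
                  [2.0, 6.0]])   # (N, E): 2 samples, 2 estimators
weights = np.array([1.0, 2.0])

# Weighted sum over estimators, then divide by the sum of weights.
predictions = preds @ weights / weights.sum()
```
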

Parameters:
  • g – the graph builder to add nodes to

  • sts – shapes and types defined by scikit-learn

  • outputs – desired output tensor names (one entry: predictions)

  • estimator – a fitted VotingRegressor

  • X – name of the input tensor

  • name – prefix used for names of nodes added by this converter

Returns:

name of the predictions output tensor