yobx.sklearn.decomposition.nmf#
- yobx.sklearn.decomposition.nmf.sklearn_nmf(g: GraphBuilderExtendedProtocol, sts: Dict, outputs: List[str], estimator: NMF | MiniBatchNMF, X: str, name: str = 'nmf') → str
Converts a `sklearn.decomposition.NMF` or `sklearn.decomposition.MiniBatchNMF` into ONNX.

The converter implements the multiplicative-update (MU) rule used by `transform()` for the Frobenius (β = 2) loss. All constant matrices are pre-computed at conversion time; only `XHt = X @ H.T` is computed at inference time.

Starting from a uniform initialization `W₀ = sqrt(mean(X) / n_components)` (matching sklearn's MU initialization), the update rule is implemented as an ONNX `Loop` running for `max_iter` steps:

    H   = components_                     (k, f)
    HHt = H @ H.T                         (k, k)  [constant]
    XHt = X @ H.T                         (N, k)  [runtime]
    W₀  = sqrt(mean(X) / k) · ones(N, k)  (N, k)  [runtime]

    for _ in range(max_iter):
        denom = max(W @ HHt [+ l1] [+ l2·W], ε)
        W = W · (XHt / denom)

Note
For `NMF` this converter only supports `solver='mu'` with `beta_loss` set to `'frobenius'` or `2`; other solver / loss combinations raise `NotImplementedError`.

For `MiniBatchNMF` the default Frobenius loss is always supported (`beta_loss='frobenius'`).

Note
Because the converter always runs exactly `max_iter` multiplicative steps (no early-stopping tolerance check), the output may differ slightly from sklearn when the model converged before `max_iter` with a non-zero `tol`. Set `tol=0` on the estimator before fitting to obtain bit-exact results.

- Parameters:
g – the graph builder to add nodes to
sts – shapes defined by scikit-learn
outputs – desired output names (latent representation W)
estimator – a fitted `NMF` or `MiniBatchNMF`
X – input tensor name, a non-negative matrix `(N, n_features)`
name – prefix name for the added nodes
- Returns:
output tensor name `(N, n_components)`
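The update rule above can be checked outside ONNX with a short NumPy sketch. It mirrors the documented loop against a fitted scikit-learn `NMF` (assuming the default unregularized estimator, so the `l1`/`l2` terms vanish, and using `np.finfo(np.float32).eps` for ε as a plausible guard value). With `tol=0` on the estimator, running exactly `max_iter` multiplicative steps should reproduce `transform()`:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
X = np.abs(rng.normal(size=(50, 8)))          # non-negative (N, n_features)

k = 3
model = NMF(n_components=k, solver="mu", beta_loss="frobenius",
            init="random", tol=0, max_iter=100, random_state=0).fit(X)

# Constants a converter could bake into the graph at conversion time.
H = model.components_                          # (k, f)
HHt = H @ H.T                                  # (k, k)  [constant]

# Values computed at inference time.
XHt = X @ H.T                                  # (N, k)  [runtime]
W = np.full((X.shape[0], k), np.sqrt(X.mean() / k))   # uniform W₀

eps = np.finfo(np.float32).eps                 # assumed ε guard for the denominator
for _ in range(model.max_iter):                # exactly max_iter MU steps, no tol check
    denom = np.maximum(W @ HHt, eps)           # l1/l2 terms omitted (no regularization)
    W = W * (XHt / denom)

# With tol=0 this should agree with sklearn's own transform().
print(np.allclose(W, model.transform(X)))
```

Raising `tol` above zero on the estimator lets sklearn stop early, which is exactly the divergence the second note warns about.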