yobx.sklearn.decomposition.nmf

yobx.sklearn.decomposition.nmf.sklearn_nmf(g: GraphBuilderExtendedProtocol, sts: Dict, outputs: List[str], estimator: NMF | MiniBatchNMF, X: str, name: str = 'nmf') → str

Converts a sklearn.decomposition.NMF or sklearn.decomposition.MiniBatchNMF into ONNX.

The converter implements the multiplicative update (MU) rule used by transform() for the Frobenius (β = 2) loss. All constant matrices are pre-computed at conversion time; only the input-dependent terms (XHt = X @ H.T and the initialization W₀) are computed at inference time.

Starting from a uniform initialization W₀ = sqrt(mean(X) / n_components) (matching sklearn’s MU initialization), the update rule is implemented as an ONNX Loop running for max_iter steps:

H    = components_                           (k, f)
HHt  = H @ H.T                              (k, k)  [constant]
XHt  = X @ H.T                              (N, k)  [runtime]
W₀   = sqrt(mean(X) / k) · ones(N, k)      (N, k)  [runtime]

for _ in range(max_iter):
    denom  = max(W @ HHt [+ l1] [+ l2·W],  ε)
    W      = W · (XHt / denom)
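The loop above can be sketched in plain NumPy. This is a standalone illustration, not the converter's actual ONNX graph; X, H, and the sizes are made-up inputs, and regularization is assumed off (l1 = l2 = 0):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.abs(rng.normal(size=(6, 4)))   # non-negative input (N, f)
H = np.abs(rng.normal(size=(2, 4)))   # stands in for a fitted components_ (k, f)
k, max_iter, eps = H.shape[0], 200, np.finfo(np.float64).eps

HHt = H @ H.T                         # (k, k) constant, baked in at conversion time
XHt = X @ H.T                         # (N, k) computed at inference time
W = np.full((X.shape[0], k), np.sqrt(X.mean() / k))  # uniform MU initialization

for _ in range(max_iter):             # fixed-length loop: no tolerance check
    denom = np.maximum(W @ HHt, eps)  # guard against division by zero
    W *= XHt / denom                  # elementwise multiplicative update
```

Because H is fixed, each step can only decrease the Frobenius error ‖X − W·H‖, so W converges toward the latent representation.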

Note

For NMF this converter only supports solver='mu' with beta_loss set to 'frobenius' or 2. Other solver / loss combinations raise NotImplementedError.

For MiniBatchNMF the default Frobenius loss is always supported (beta_loss='frobenius').

Note

Because the converter always runs exactly max_iter multiplicative steps (no early-stopping tolerance check), the output may differ slightly from sklearn when the model converged before max_iter with a non-zero tol. Set tol=0 on the estimator before fitting to obtain bit-exact results.
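To illustrate the note above (a sketch assuming scikit-learn is installed): with tol=0, transform() runs all max_iter MU steps, so a NumPy re-implementation of the same fixed-length loop reproduces its output:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
X = np.abs(rng.normal(size=(20, 8)))

# tol=0 disables the early-stopping check, so transform() runs all max_iter steps.
est = NMF(n_components=3, solver="mu", beta_loss="frobenius",
          tol=0, max_iter=100, random_state=0).fit(X)
W_sk = est.transform(X)

# Re-run the same fixed-length multiplicative update outside sklearn.
H = est.components_
HHt, XHt = H @ H.T, X @ H.T
W = np.full((X.shape[0], 3), np.sqrt(X.mean() / 3))  # uniform MU initialization
for _ in range(est.max_iter):
    W *= XHt / np.maximum(W @ HHt, np.finfo(np.float64).eps)
```

With a non-zero tol, sklearn may stop early while the converted graph keeps iterating, which is where the small numerical differences come from.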

Parameters:
  • g – the graph builder to add nodes to

  • sts – shapes defined by scikit-learn

  • outputs – desired output names (latent representation W)

  • estimator – a fitted NMF or MiniBatchNMF

  • X – input tensor name – non-negative matrix (N, n_features)

  • name – prefix name for the added nodes

Returns:

name of the output tensor holding the latent representation W, of shape (N, n_components)