yobx.sklearn.discriminant_analysis.qda
- yobx.sklearn.discriminant_analysis.qda.sklearn_quadratic_discriminant_analysis(g: GraphBuilderExtendedProtocol, sts: Dict, outputs: List[str], estimator: QuadraticDiscriminantAnalysis, X: str, name: str = 'qda') → Tuple[str, str]
Converts a `sklearn.discriminant_analysis.QuadraticDiscriminantAnalysis` into ONNX.

For each class k the log-likelihood (decision function) is computed from the per-class SVD stored in `rotations_` and `scalings_` (eigenvalues of the class covariance matrix):

```
W_k = R_k * S_k^(-0.5)                        # scaled rotation (F, r_k) – constant
offset_k = mean_k @ W_k                       # (r_k,) – constant
const_k = -0.5 * sum(log S_k) + log(prior_k)  # scalar – constant
z_k = X @ W_k - offset_k                      # (N, r_k)
norm2_k = ReduceSum(z_k * z_k, axis=1)        # (N,)
dec_k = -0.5 * norm2_k + const_k              # (N,)
```
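The per-class score above can be sketched in numpy. This is a minimal illustration, not the converter's ONNX graph: the arguments mirror the fitted estimator's `rotations_[k]`, `scalings_[k]`, `means_[k]` and `priors_[k]` attributes from scikit-learn's `QuadraticDiscriminantAnalysis`, and the toy inputs below are made up.

```python
import numpy as np

def qda_class_score(X, R_k, S_k, mean_k, prior_k):
    """Decision score of one class k, following the formulas above."""
    W_k = R_k * S_k ** -0.5              # scaled rotation, (F, r_k)
    offset_k = mean_k @ W_k              # (r_k,)
    const_k = -0.5 * np.sum(np.log(S_k)) + np.log(prior_k)
    z_k = X @ W_k - offset_k             # (N, r_k)
    norm2_k = np.sum(z_k * z_k, axis=1)  # (N,)
    return -0.5 * norm2_k + const_k      # (N,)

# With R_k = I and S_k = 1 the score reduces to
# -0.5 * ||x - mean_k||^2 + log(prior_k).
X = np.array([[0.0, 0.0], [1.0, 1.0]])
score = qda_class_score(X, np.eye(2), np.ones(2), np.zeros(2), 0.5)
# → [log(0.5), log(0.5) - 1.0]
```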
When all classes share the same SVD rank r, all classes are processed in a single batched `MatMul`, following the same pattern as the `sklearn_gaussian_mixture()` `'full'` covariance path:

```
W_2d = hstack(W_0, …, W_{C-1})              # (F, C*r) – constant
b_flat = hstack(offset_0, …, offset_{C-1})  # (1, C*r) – constant
consts = [-0.5*logdet_k + logprior_k …]     # (C,) – constant
XW = MatMul(X, W_2d)                        # (N, C*r)
diff = XW - b_flat                          # (N, C*r)
diff3d = Reshape(diff, [-1, C, r])          # (N, C, r)
norm2 = ReduceSum(diff3d * diff3d, axis=2)  # (N, C)
dec = -0.5 * norm2 + consts                 # (N, C)
```

When classes have different SVD ranks (degenerate covariances), the same computation is performed per class and the resulting `(N, 1)` columns are concatenated.

In both cases probabilities and labels are obtained as:
```
proba = Softmax(dec, axis=1)                # (N, C) – output
label = Gather(classes_, ArgMax(proba, 1))  # (N,) – output
```
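The batched path and the final Softmax/ArgMax step can be sketched in numpy. The per-class constants (`Ws`, `offsets`, `consts`) and `classes_` below are randomly generated stand-ins, not values from a real fitted model; the sketch checks that the single `MatMul`/`Reshape`/`ReduceSum` sequence matches the per-class computation.

```python
import numpy as np

rng = np.random.default_rng(0)
N, F, C, r = 5, 4, 3, 4                              # samples, features, classes, shared rank
X = rng.normal(size=(N, F))
Ws = [rng.normal(size=(F, r)) for _ in range(C)]     # stand-ins for W_k
offsets = [rng.normal(size=(r,)) for _ in range(C)]  # stand-ins for offset_k
consts = rng.normal(size=(C,))                       # stand-ins for const_k
classes_ = np.array([0, 1, 2])

# Batched path: one MatMul for all classes, then Reshape/ReduceSum.
W_2d = np.hstack(Ws)                  # (F, C*r), constant
b_flat = np.hstack(offsets)[None, :]  # (1, C*r), constant
diff = X @ W_2d - b_flat              # (N, C*r)
diff3d = diff.reshape(-1, C, r)       # (N, C, r)
dec = -0.5 * np.sum(diff3d * diff3d, axis=2) + consts  # (N, C)

# Per-class reference (the path used when ranks differ, before Concat).
ref = np.stack(
    [-0.5 * np.sum((X @ Ws[k] - offsets[k]) ** 2, axis=1) + consts[k]
     for k in range(C)], axis=1)
assert np.allclose(dec, ref)

# Final outputs: Softmax over classes, then Gather(classes_, ArgMax).
e = np.exp(dec - dec.max(axis=1, keepdims=True))  # numerically stable Softmax
proba = e / e.sum(axis=1, keepdims=True)          # (N, C)
label = classes_[np.argmax(proba, axis=1)]        # (N,)
```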
- Parameters:
  - g – the graph builder to add nodes to
  - sts – shapes defined by scikit-learn
  - estimator – a fitted `QuadraticDiscriminantAnalysis`
  - outputs – desired output names (label, probabilities)
  - X – input tensor name
  - name – prefix for the names of the added nodes
- Returns:
  a tuple `(label_result_name, proba_result_name)`