yobx.sklearn.kernel_approximation.additive_chi2_sampler
- yobx.sklearn.kernel_approximation.additive_chi2_sampler.sklearn_additive_chi2_sampler(g: GraphBuilderExtendedProtocol, sts: Dict, outputs: List[str], estimator: AdditiveChi2Sampler, X: str, name: str = 'additive_chi2_sampler') → str
Converts a sklearn.kernel_approximation.AdditiveChi2Sampler into ONNX.

The conversion replicates AdditiveChi2Sampler.transform(), which maps each input feature x into 2 * sample_steps - 1 output features:

step 0 (one feature per input feature):

    sqrt(x * sample_interval)

step j (j = 1 … sample_steps - 1, two features per input feature):

    factor_j = sqrt(2 * x * sample_interval / cosh(π * j * sample_interval))
    cos_j = factor_j * cos(j * sample_interval * log(x))
    sin_j = factor_j * sin(j * sample_interval * log(x))

The output columns are arranged as:

    [sqrt(all F features), cos_1(all F features), sin_1(all F features), cos_2(all F features), sin_2(all F features), ...]

giving a total of n_features * (2 * sample_steps - 1) output columns.

Zero-valued inputs produce zero outputs for every component. To avoid log(0) = -∞ causing NaN propagation, the logarithm is evaluated on max(x, tiny), where tiny is the smallest positive normal float for the working dtype. factor_j is computed from the original x and naturally evaluates to zero when x = 0, so the masked product factor_j * cos/sin(…) is exactly zero for zero inputs.

- Parameters:
g – the graph builder to add nodes to
sts – shape/type information already inferred by scikit-learn; when non-empty, the function skips manual set_type/set_shape calls because the caller will handle them
estimator – a fitted (or stateless) AdditiveChi2Sampler
outputs – desired output names
X – input tensor name (non-negative values required)
name – prefix name for the added nodes
- Returns:
output tensor name
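The feature map replicated by the converter can be sketched in pure Python. This is an illustrative reference for the formulas above, not the ONNX graph construction itself; sample_steps=2 and sample_interval=0.5 are chosen for the example, and the zero-masking follows the max(x, tiny) approach described earlier.

```python
import math
import sys

def additive_chi2_features(x_row, sample_steps=2, sample_interval=0.5):
    """Map one row of non-negative features to 2*sample_steps - 1
    output features per input feature, following the formulas above."""
    tiny = sys.float_info.min  # smallest positive normal float64
    out = []
    # step 0: one sqrt feature per input feature
    out.extend(math.sqrt(x * sample_interval) for x in x_row)
    # steps j = 1 .. sample_steps - 1: a cos block, then a sin block
    for j in range(1, sample_steps):
        # log is taken on max(x, tiny) so x = 0 does not produce -inf
        angle = [j * sample_interval * math.log(max(x, tiny)) for x in x_row]
        # factor_j uses the original x, so it is exactly 0 when x = 0
        factor = [math.sqrt(2.0 * x * sample_interval
                            / math.cosh(math.pi * j * sample_interval))
                  for x in x_row]
        out.extend(f * math.cos(a) for f, a in zip(factor, angle))
        out.extend(f * math.sin(a) for f, a in zip(factor, angle))
    return out

row = [1.0, 0.0, 4.0]
feats = additive_chi2_features(row)
print(len(feats))  # 3 features * (2*2 - 1) = 9
```

Note how the zero input in the second column yields exact zeros in every output block: factor_j is sqrt(0) = 0, so the finite cos/sin values it multiplies cannot propagate the large negative logarithm.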