yobx.sklearn.kernel_approximation.additive_chi2_sampler

yobx.sklearn.kernel_approximation.additive_chi2_sampler.sklearn_additive_chi2_sampler(g: GraphBuilderExtendedProtocol, sts: Dict, outputs: List[str], estimator: AdditiveChi2Sampler, X: str, name: str = 'additive_chi2_sampler') → str

Converts a sklearn.kernel_approximation.AdditiveChi2Sampler into ONNX.

The conversion replicates AdditiveChi2Sampler.transform(), which maps each input feature x into 2*sample_steps - 1 output features:

  • step 0 (one feature per input feature):

    sqrt(x * sample_interval)
    
  • step j (j = 1 … sample_steps-1, two features per input feature):

    factor_j = sqrt(2 * x * sample_interval / cosh(π * j * sample_interval))
    
    cos_j = factor_j * cos(j * sample_interval * log(x))
    sin_j = factor_j * sin(j * sample_interval * log(x))
    

The output columns are arranged as:

[sqrt(all F features),
 cos_1(all F features), sin_1(all F features),
 cos_2(all F features), sin_2(all F features), ...]

giving a total of n_features * (2 * sample_steps - 1) output columns.
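The mapping and column layout above can be sketched in NumPy (the function name and defaults here are illustrative, not part of the yobx API):

```python
import numpy as np


def additive_chi2_map(X, sample_steps=2, sample_interval=0.5):
    """Sketch of the feature map described above.

    Maps each of the F input features to 2*sample_steps - 1 output
    features, laid out as [sqrt block, cos_1, sin_1, cos_2, sin_2, ...].
    """
    X = np.asarray(X, dtype=np.float64)
    tiny = np.finfo(X.dtype).tiny          # smallest positive normal float
    log_x = np.log(np.maximum(X, tiny))    # guard against log(0)

    blocks = [np.sqrt(X * sample_interval)]  # step 0: one feature per input
    for j in range(1, sample_steps):
        # factor_j is 0 wherever x == 0, so masked products vanish exactly
        factor = np.sqrt(2.0 * X * sample_interval
                         / np.cosh(np.pi * j * sample_interval))
        blocks.append(factor * np.cos(j * sample_interval * log_x))
        blocks.append(factor * np.sin(j * sample_interval * log_x))
    return np.hstack(blocks)
```

With F = 2 input features and sample_steps = 2, the result has 2 * (2*2 - 1) = 6 columns, ordered [sqrt0, sqrt1, cos0, cos1, sin0, sin1].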

Zero-valued inputs produce zero outputs for every component. To avoid log(0) = -∞ causing NaN propagation, the logarithm is evaluated on max(x, tiny), where tiny is the smallest positive normal float for the working dtype. Because factor_j is computed from the original x, it evaluates to exactly zero when x = 0, so the masked product factor_j * cos/sin(…) is exactly zero for zero inputs.
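A minimal NumPy demonstration of why the guard is needed (variable names are illustrative):

```python
import numpy as np

x = np.array([0.0, 2.0])
si = 0.5  # sample_interval
factor = np.sqrt(2.0 * x * si / np.cosh(np.pi * si))  # 0 where x == 0

# Naive: log(0) = -inf, cos(-inf) = nan, and 0 * nan = nan.
with np.errstate(divide="ignore", invalid="ignore"):
    naive = factor * np.cos(si * np.log(x))

# Guarded: clamp to the smallest positive normal float before the log.
# factor is already exactly 0 at x == 0, so the product is exactly 0.
tiny = np.finfo(x.dtype).tiny
guarded = factor * np.cos(si * np.log(np.maximum(x, tiny)))
```

The naive version poisons the zero entry with NaN even though the factor is zero; the guarded version keeps it at exactly 0 while leaving nonzero entries unchanged.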

Parameters:
  • g – the graph builder to add nodes to

  • sts – shape/type information already inferred by scikit-learn; when non-empty the function skips manual set_type/set_shape calls because the caller will handle them

  • outputs – desired output names

  • estimator – a fitted (or stateless) AdditiveChi2Sampler

  • X – input tensor name (non-negative values required)

  • name – prefix name for the added nodes

Returns:

output tensor name
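As a reference for what the converted graph must replicate, the same layout can be observed directly from scikit-learn (this uses only the sklearn estimator, not the yobx converter):

```python
import numpy as np
from sklearn.kernel_approximation import AdditiveChi2Sampler

X = np.array([[0.0, 1.0, 3.0]])
# default sample_steps=2; fit sets sample_interval_ = 0.5
sampler = AdditiveChi2Sampler(sample_steps=2)
Y = sampler.fit_transform(X)

# 3 input features * (2*2 - 1) = 9 output columns,
# laid out as [sqrt block, cos_1 block, sin_1 block];
# the zero-valued feature contributes zeros in every block.
```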