.skl¶
to_onnx¶
- experimental_experiment.skl.to_onnx(model: BaseEstimator, args: Sequence[torch.Tensor] | None = None, target_opset: Dict[str, int] | int | None = None, as_function: bool = False, options: OptimizationOptions | None = None, optimize: bool = True, filename: str | None = None, inline: bool = False, input_names: Sequence[str] | None = None, output_names: List[str] | None = None, large_model: bool = False, verbose: int = 0, return_builder: bool = False, raise_list: Set[str] | None = None, external_threshold: int = 1024, return_optimize_report: bool = False, function_options: FunctionOptions | None = None) → ModelProto | ModelContainer | Tuple[ModelProto | ModelContainer, GraphBuilder]¶
Exports a scikit-learn model into ONNX.
- Parameters:
model – the scikit-learn estimator to convert
args – input arguments
kwargs – keyword attributes
input_names – input names
target_opset – targeted opset or targeted opsets as a dictionary
as_function – export as a ModelProto or a FunctionProto
options – optimization options
verbose – verbosity level
return_builder – returns the builder as well
raise_list – the builder stops whenever it produces a name from this list; this is a debugging tool
optimize – optimizes the model before exporting it to ONNX
large_model – if True, returns an onnx.model_container.ModelContainer, which lets the user decide later whether the weights should be part of the model or saved as external weights
external_threshold – if large_model is True, every tensor above this limit is stored externally
return_optimize_report – returns statistics on the optimization as well
filename – if specified, stores the model into that file
inline – inlines the model before converting it to ONNX; this is done before any optimization takes place
export_options – options applied before getting the exported program
function_options – specifies what to do with the initializers in local functions, whether to add them as constants or as inputs
output_names – to rename the output names
- Returns:
onnx model
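A minimal usage sketch is shown below. It assumes a standard scikit-learn estimator and follows the signature above, where args carries a sample input; passing a torch tensor for args is an assumption taken from the type annotation, and the sample estimator and file name are illustrative only.

```python
# Minimal sketch, not a verified example: it assumes `to_onnx` accepts a
# sample input batch through `args` as annotated above (Sequence[torch.Tensor]).
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression

from experimental_experiment.skl import to_onnx

# Train a small estimator on synthetic data.
np.random.seed(0)
X = np.random.rand(32, 3).astype(np.float32)
y = (X.sum(axis=1) > 1.5).astype(np.int64)
model = LogisticRegression().fit(X, y)

# Export to ONNX, passing one sample batch so the converter can infer
# input types and shapes (assumption based on the signature above).
onx = to_onnx(model, args=(torch.from_numpy(X[:1]),), verbose=0)

# By default `onx` is a ModelProto; it can be serialized like any ONNX model.
with open("logreg.onnx", "wb") as f:
    f.write(onx.SerializeToString())
```

Passing filename="logreg.onnx" would store the model directly, and large_model=True would instead return an onnx.model_container.ModelContainer, as described in the parameter list above.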