yobx.container#
ExportArtifact#
- class yobx.container.ExportArtifact(proto: ModelProto | GraphProto | FunctionProto | None = None, container: ExtendedModelContainer | None = None, report: ExportReport | None = None, filename: str | None = None, builder: GraphBuilderExtendedProtocol | None = None)[source]#
Standard output of every to_onnx() conversion function.
Every top-level to_onnx function (sklearn, tensorflow, litert, torch, sql, …) returns an ExportArtifact instead of a bare ModelProto or ExtendedModelContainer. The instance bundles the exported proto, the optional large-model container, an ExportReport describing the export process, and an optional filename.
- Parameters:
proto – ModelProto | FunctionProto | GraphProto | None. The ONNX proto produced by the export. When large_model was requested, the proto contains placeholders for external data; use get_proto() to obtain a fully self-contained proto.
container – ExtendedModelContainer | None. The ExtendedModelContainer produced when the conversion was called with large_model=True, None otherwise.
report – ExportReport | None. Statistics and metadata about the export.
filename – str | None. Path where the model was last saved, or None if never saved.
builder – GraphBuilderExtendedProtocol | None. Keeps a reference to the builder that built the ONNX model.
Example:
```python
import numpy as np
from sklearn.linear_model import LinearRegression

from yobx.sklearn import to_onnx
from yobx.container import ExportArtifact, ExportReport

X = np.random.randn(20, 4).astype(np.float32)
y = X @ np.array([1.0, 2.0, 3.0, 4.0], dtype=np.float32)
reg = LinearRegression().fit(X, y)

artifact = to_onnx(reg, (X,))
assert isinstance(artifact, ExportArtifact)
assert isinstance(artifact.report, ExportReport)
proto = artifact.get_proto()
artifact.save("model.onnx")
```
- SerializeToString() bytes[source]#
Serializes the model to bytes. It does not include the weights if the model is stored in a container.
- property functions: Sequence[FunctionProto]#
Returns the functions of the model.
- get_proto(include_weights: bool = True) Any[source]#
Return the ONNX proto, optionally with all weights inlined.
When the export was performed with large_model=True (i.e. container is set), the raw proto has external-data placeholders instead of embedded weight tensors. Passing include_weights=True (the default) uses to_ir() to build a fully self-contained ModelProto.
- Parameters:
include_weights – when True (default), embed the large initializers stored in container into the returned proto. When False, return the raw proto as-is.
- Returns:
the ONNX proto.
Example:
```python
artifact = to_onnx(estimator, (X,), large_model=True)

# Fully self-contained proto (weights embedded):
proto_with_weights = artifact.get_proto(include_weights=True)

# Proto with external-data placeholders:
proto_no_weights = artifact.get_proto(include_weights=False)
```
- property graph: GraphProto#
Returns the GraphProto if the model is available. Fails otherwise.
- classmethod load(file_path: str, load_large_initializers: bool = True) ExportArtifact[source]#
Load a saved model from file_path.
If the file references external data (i.e. the model was saved with large_model=True), an ExtendedModelContainer is created and returned in container. Otherwise the proto is loaded directly with onnx.load() and container is None.
- Parameters:
file_path – path to the .onnx file.
load_large_initializers – when True (default), also load the large initializers stored alongside the model file.
- Returns:
an ExportArtifact with filename set to file_path.
Example:
```python
artifact = ExportArtifact.load("model.onnx")
proto = artifact.get_proto()
```
- property metadata_props: Sequence[StringStringEntryProto]#
Returns the metadata properties of the model.
- property opset_import: Sequence[OperatorSetIdProto]#
Returns the opset import.
- save(file_path: str, all_tensors_to_one_file: bool = True) Any[source]#
Save the exported model to file_path.
When an ExtendedModelContainer is present (large_model=True was used during export), the model and its external weight files are saved via ExtendedModelContainer.save(). Otherwise the proto is saved with onnx.save_model().
- Parameters:
file_path – destination file path (including the .onnx extension).
all_tensors_to_one_file – when saving a large model, write all external tensors into a single companion data file.
- Returns:
the saved ModelProto.
Example:
```python
artifact = to_onnx(estimator, (X,))
artifact.save("model.onnx")
```
ExportReport#
- class yobx.container.ExportReport(stats: List[Dict[str, Any]] | None = None, extra: Dict[str, Any] | None = None, build_stats: BuildStats | None = None)[source]#
Holds statistics and metadata gathered during an ONNX export.
The _stats attribute stores the per-pattern optimization statistics returned by to_onnx() when called with return_optimize_report=True. Each element of the list is a dict with at least the keys "pattern", "added", "removed", and "time_in".
Additional arbitrary key-value pairs can be recorded via the update() method and are stored in extra.
Example:
```python
report = ExportReport()
report.update({"time_total": 0.42})
print(report)
```
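The per-pattern dicts described above can be aggregated with ordinary Python; a sketch over hypothetical statistics in the documented shape (the pattern names below are made up for illustration):

```python
# Hypothetical optimization statistics, in the shape documented above.
stats = [
    {"pattern": "FuseMatMulAdd", "added": 1, "removed": 2, "time_in": 0.003},
    {"pattern": "RemoveIdentity", "added": 0, "removed": 5, "time_in": 0.001},
]

# Net change in node count across all patterns (negative means the graph shrank).
net_nodes = sum(s["added"] - s["removed"] for s in stats)    # -6
# Total time spent in optimization and the most expensive pattern.
total_time = sum(s["time_in"] for s in stats)
slowest = max(stats, key=lambda s: s["time_in"])["pattern"]  # "FuseMatMulAdd"
```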
ExtendedModelContainer#
- class yobx.container.ExtendedModelContainer(*args, **kwargs)[source]#
Overrides onnx.model_container.ModelContainer to support torch tensors.
- load(file_path: str, load_large_initializers: bool = True) ExtendedModelContainer[source]#
Loads the large model.
- Parameters:
file_path – model file
load_large_initializers – when True, also loads the large initializers. If they are not loaded, the model is incomplete but can still be inspected without being executed; the method onnx.model_container.ModelContainer._load_large_initializers can be used to load them later.
- Returns:
self
- save(file_path: str, all_tensors_to_one_file: bool = True) ModelProto[source]#
Saves the large model. The function returns a ModelProto: the current one if the model did not need any modification, or a modified copy if changes were required, such as assigning file names to every external tensor.
- Parameters:
file_path – model file
all_tensors_to_one_file – if True, saves all large tensors in a single file; otherwise, one file per large tensor.
- Returns:
the saved ModelProto