yobx.container#

ExportArtifact#

class yobx.container.ExportArtifact(proto: ModelProto | GraphProto | FunctionProto | None = None, container: ExtendedModelContainer | None = None, report: ExportReport | None = None, filename: str | None = None, builder: GraphBuilderExtendedProtocol | None = None)[source]#

Standard output of every to_onnx() conversion function.

Every top-level to_onnx function (sklearn, tensorflow, litert, torch, sql …) returns an ExportArtifact instead of a bare ModelProto or ExtendedModelContainer. The instance bundles the exported proto, the optional large-model container, an ExportReport describing the export process, and an optional filename.

Parameters:
  • proto – ModelProto | FunctionProto | GraphProto | None The ONNX proto produced by the export. When large_model was requested the proto contains placeholders for external data; use get_proto() to obtain a fully self-contained proto.

  • container – ExtendedModelContainer | None The ExtendedModelContainer produced when the conversion was called with large_model=True. None otherwise.

  • report – ExportReport | None Statistics and metadata about the export.

  • filename – str | None Path where the model was last saved, or None if never saved.

  • builder – GraphBuilderExtendedProtocol | None Keeps a reference to the graph builder that built the ONNX model.

Example:

import numpy as np
from sklearn.linear_model import LinearRegression
from yobx.sklearn import to_onnx
from yobx.container import ExportArtifact, ExportReport

X = np.random.randn(20, 4).astype(np.float32)
y = X @ np.array([1.0, 2.0, 3.0, 4.0], dtype=np.float32)
reg = LinearRegression().fit(X, y)

artifact = to_onnx(reg, (X,))
assert isinstance(artifact, ExportArtifact)
assert isinstance(artifact.report, ExportReport)

proto = artifact.get_proto()
artifact.save("model.onnx")
SerializeToString() bytes[source]#

Serializes the model to bytes. The result does not include the weights if the model is stored in a container.

property functions: Sequence[FunctionProto]#

Returns the local functions of the model.

get_proto(include_weights: bool = True) Any[source]#

Return the ONNX proto, optionally with all weights inlined.

When the export was performed with large_model=True (i.e. container is set), the raw proto has external-data placeholders instead of embedded weight tensors. Passing include_weights=True (the default) uses to_ir() to build a fully self-contained ModelProto.

Parameters:

include_weights – when True (default) embed the large initializers stored in container into the returned proto. When False return the raw proto as-is.

Returns:

ModelProto, FunctionProto, or GraphProto.

Example:

artifact = to_onnx(estimator, (X,), large_model=True)
# Fully self-contained proto (weights embedded):
proto_with_weights = artifact.get_proto(include_weights=True)
# Proto with external-data placeholders:
proto_no_weights = artifact.get_proto(include_weights=False)
property graph: GraphProto#

Returns the GraphProto if the model is available. Fails otherwise.

property ir_version: int#

Returns the IR version of the model.

classmethod load(file_path: str, load_large_initializers: bool = True) ExportArtifact[source]#

Load a saved model from file_path.

If the file references external data (i.e. the model was saved with large_model=True) an ExtendedModelContainer is created and returned in container. Otherwise the proto is loaded directly with onnx.load() and container is None.

Parameters:
  • file_path – path to the .onnx file.

  • load_large_initializers – when True (default) also load the large initializers stored alongside the model file.

Returns:

ExportArtifact with filename set to file_path.

Example:

artifact = ExportArtifact.load("model.onnx")
proto = artifact.get_proto()
property metadata_props: Sequence[StringStringEntryProto]#

Returns the metadata properties of the model.

property opset_import: Sequence[OperatorSetIdProto]#

Returns the opset imports of the model.

save(file_path: str, all_tensors_to_one_file: bool = True) Any[source]#

Save the exported model to file_path.

When an ExtendedModelContainer is present (large_model=True was used during export), the model and its external weight files are saved via ExtendedModelContainer.save(). Otherwise the proto is saved with onnx.save_model().

Parameters:
  • file_path – destination file path (including .onnx extension).

  • all_tensors_to_one_file – when saving a large model, write all external tensors into a single companion data file.

Returns:

the saved ModelProto.

Example:

artifact = to_onnx(estimator, (X,))
artifact.save("model.onnx")
update(data: Any)[source]#

Updates the report with data.

ExportReport#

class yobx.container.ExportReport(stats: List[Dict[str, Any]] | None = None, extra: Dict[str, Any] | None = None, build_stats: BuildStats | None = None)[source]#

Holds statistics and metadata gathered during an ONNX export.

The _stats attribute stores the per-pattern optimization statistics returned by to_onnx() when called with return_optimize_report=True. Each element of the list is a dict with at least the keys "pattern", "added", "removed", and "time_in".

Additional arbitrary key-value pairs can be recorded via the update() method and are stored in extra.

Example:

report = ExportReport()
report.update({"time_total": 0.42})
print(report)
to_dict() Dict[str, Any][source]#

Return a plain dictionary representation of this report.

Returns:

dictionary with keys "stats" and "extra".

update(data: Any)[source]#

Appends data to the report.

Parameters:

data – arbitrary data to append to the report

Returns:

self

ExtendedModelContainer#

class yobx.container.ExtendedModelContainer(*args, **kwargs)[source]#

Extends onnx.model_container.ModelContainer to support torch tensors.

load(file_path: str, load_large_initializers: bool = True) ExtendedModelContainer[source]#

Loads the large model.

Parameters:
  • file_path – model file

  • load_large_initializers – when True, also loads the large initializers. When False, the model is incomplete and cannot be executed, but it can still be inspected; onnx.model_container.ModelContainer._load_large_initializers can be called later to load them.

Returns:

self

save(file_path: str, all_tensors_to_one_file: bool = True) ModelProto[source]#

Saves the large model. The function returns a ModelProto: the current one if the model did not need any modification, or a modified copy if changes were required, such as assigning a file name to every external tensor.

Parameters:
  • file_path – model file

  • all_tensors_to_one_file – saves all large tensors in one file or one file per large tensor

Returns:

the saved ModelProto

to_ir() onnx_ir.Model[source]#

Conversion to onnx_ir.Model.