Getting Started#

This page walks you through the first steps with yet-another-onnx-builder (yobx). See Installation for installation instructions.

The one-line API#

Every framework converter in yobx follows the same pattern:

expected = model(*args, **kwargs)          # run the model once
onnx_model = to_onnx(model, args, kwargs)  # export to ONNX

The call signatures are intentionally uniform so that you can swap frameworks without learning a new API each time.
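
Whatever the framework, the same verification idiom applies: run the original model, run the exported ONNX model, and compare the outputs numerically. A minimal numpy sketch of that comparison (the check_outputs helper is illustrative, not part of yobx):

```python
import numpy as np

def check_outputs(expected, got, atol=1e-5):
    """Assert that the exported model reproduces the original output."""
    np.testing.assert_allclose(np.asarray(expected), np.asarray(got), atol=atol)

# Identical arrays pass silently; a mismatch raises an AssertionError.
check_outputs([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
```

In practice `expected` is the framework output from the first line of the pattern above and `got` is the onnxruntime output of the exported model.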

scikit-learn#

Convert any fitted scikit-learn estimator or pipeline with yobx.sklearn.to_onnx():

import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from yobx.sklearn import to_onnx

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 4)).astype(np.float32)
y = (X[:, 0] > 0).astype(np.int64)

pipe = Pipeline([
    ("scaler", StandardScaler()),
    ("clf", LogisticRegression()),
]).fit(X, y)

onnx_model = to_onnx(pipe, (X,))
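
The graph produced here is just the pipeline's arithmetic in ONNX form: standardize, then an affine map followed by a sigmoid. A hedged numpy sketch of that computation, checked against predict_proba (every attribute below comes from scikit-learn, not yobx):

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 4)).astype(np.float32)
y = (X[:, 0] > 0).astype(np.int64)

pipe = Pipeline([
    ("scaler", StandardScaler()),
    ("clf", LogisticRegression()),
]).fit(X, y)

# Reproduce the pipeline by hand: standardize, affine map, sigmoid.
scaler, clf = pipe.named_steps["scaler"], pipe.named_steps["clf"]
Xs = (X - scaler.mean_) / scaler.scale_
logits = Xs @ clf.coef_.T + clf.intercept_
proba = 1.0 / (1.0 + np.exp(-logits))

expected = pipe.predict_proba(X)[:, 1]
np.testing.assert_allclose(proba.ravel(), expected, rtol=1e-4, atol=1e-6)
```

The exported ONNX graph encodes the same three steps as graph nodes, which is why comparing against the fitted estimator's own predictions is a sound check.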

By default, to_onnx marks axis 0 of every input as dynamic. To customize the dynamic axes, pass a dynamic_shapes tuple with one Dict[int, str] per input, mapping axis indices to symbolic dimension names:

onnx_model = to_onnx(pipe, (X,), dynamic_shapes=({0: "batch"},))

See scikit-learn Export to ONNX for the full scikit-learn conversion guide.

PyTorch#

Convert a torch.nn.Module with yobx.torch.to_onnx():

import torch
from yobx.torch import to_onnx

class MyModel(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x)

model = MyModel()
x = torch.randn(4, 8)
onnx_model = to_onnx(model, (x,))

To mark axis 0 as a dynamic batch dimension, pass a dynamic_shapes dict following the torch.export convention:

import torch
from torch.export import Dim
from yobx.torch import to_onnx

class MyModel(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x)

model = MyModel()
x = torch.randn(4, 8)

batch = Dim("batch")
onnx_model = to_onnx(model, (x,), dynamic_shapes={"x": {0: batch}})

See Torch Export to ONNX for the full PyTorch conversion guide, including dynamic shapes, model patching, and large-model support.

TensorFlow / Keras#

Convert a Keras model with yobx.tensorflow.to_onnx():

import numpy as np
import tensorflow as tf
from yobx.tensorflow import to_onnx

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(2),
])

X = np.random.default_rng(0).random((5, 4)).astype(np.float32)
onnx_model = to_onnx(model, (X,))
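
Under the hood each Dense layer is an affine map, x @ W + b, which exports to matching Gemm (and Relu) nodes. A numpy sketch of the two-layer forward pass, with stand-in weights (in a real check you would take them from model.get_weights()):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((5, 4)).astype(np.float32)

# Stand-in weights; shapes match the Dense(8) and Dense(2) layers above.
W1 = rng.standard_normal((4, 8)).astype(np.float32)
b1 = np.zeros(8, dtype=np.float32)
W2 = rng.standard_normal((8, 2)).astype(np.float32)
b2 = np.zeros(2, dtype=np.float32)

hidden = np.maximum(X @ W1 + b1, 0.0)  # Dense(8, activation="relu")
out = hidden @ W2 + b2                 # Dense(2)
print(out.shape)  # (5, 2)
```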

By default, axis 0 of every input is made dynamic. To name specific dynamic axes, pass dynamic_shapes:

onnx_model = to_onnx(model, (X,), dynamic_shapes=({0: "batch"},))

See TensorFlow / JAX Export to ONNX for the full TensorFlow/JAX conversion guide.

LiteRT / TFLite#

Convert a .tflite model file with yobx.litert.to_onnx():

import numpy as np
from yobx.litert import to_onnx

X = np.random.default_rng(0).random((1, 4)).astype(np.float32)
onnx_model = to_onnx("model.tflite", (X,))

Axis 0 is dynamic by default. Control which axes are dynamic with dynamic_shapes:

onnx_model = to_onnx("model.tflite", (X,), dynamic_shapes=({0: "batch"},))

See LiteRT / TFLite Export to ONNX for details on dynamic shapes and custom op converters.

Running the exported model#

Use onnxruntime to run the exported model and verify the output:

import numpy as np
import onnxruntime as rt

sess = rt.InferenceSession(onnx_model.SerializeToString())
input_name = sess.get_inputs()[0].name
result = sess.run(None, {input_name: X})
print(result)

Next steps#

  • GraphBuilder — build and optimize ONNX graphs programmatically.

  • Available Patterns — pattern-based graph rewriting.

  • ShapeBuilder — symbolic shape expressions for dynamic shapes.

  • Translate — translate ONNX graphs back to Python code.

  • API — full API reference.