First examples with onnx-array-api#
This demonstrates a simple case with onnx-array-api: how a numpy-style function can be converted into ONNX.
A loss function from numpy to ONNX#
The first example takes a loss function and converts it into ONNX.
import numpy as np
from onnx_array_api.npx import absolute, jit_onnx
from onnx_array_api.plotting.text_plot import onnx_simple_text_plot
def l1_loss(x, y):
    return absolute(x - y).sum()

The function looks like a numpy function.
The function needs to be converted into ONNX with the function jit_onnx. jitted_l1_loss is a wrapper: it intercepts all calls to l1_loss. When that happens, it checks the input types and creates the corresponding ONNX graph.
jitted_l1_loss = jit_onnx(l1_loss)
First execution and conversion to ONNX. The wrapper caches the created ONNX graph and reuses it as long as the input types and the number of dimensions are the same. Otherwise it creates a new one and keeps the old one.
x = np.array([[0.1, 0.2], [0.3, 0.4]], dtype=np.float32)
y = np.array([[0.11, 0.22], [0.33, 0.44]], dtype=np.float32)
res = jitted_l1_loss(x, y)
print(res)
0.09999999
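The caching described above can be pictured with a small sketch. This is not onnx-array-api's actual implementation, only an illustration of a cache keyed on the dtype and rank of the inputs:

# Illustration only: a cache keyed on the signature of the inputs,
# the way the wrapper reuses graphs for same dtype and same rank.
_cache = {}

def signature_cached(fct, *args):
    key = (fct.__name__,) + tuple((a.dtype, a.ndim) for a in args)
    if key not in _cache:
        # A real wrapper would build and store an ONNX graph here.
        _cache[key] = f"graph for {key}"
    return _cache[key]

print(signature_cached(l1_loss, x, y))      # builds a 'graph'
print(signature_cached(l1_loss, 2 * x, y))  # same dtype and rank: cache hit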
The ONNX graph can be accessed as follows.
print(onnx_simple_text_plot(jitted_l1_loss.get_onnx()))
opset: domain='' version=18
input: name='x0' type=dtype('float32') shape=['', '']
input: name='x1' type=dtype('float32') shape=['', '']
Sub(x0, x1) -> r__0
Abs(r__0) -> r__1
ReduceSum(r__1, keepdims=0) -> r__2
output: name='r__2' type=dtype('float32') shape=None
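Since get_onnx returns a regular ONNX model, any ONNX runtime can execute it. As a quick check, here is a sketch using the reference evaluator from the onnx package (assuming a recent onnx version providing onnx.reference):

from onnx.reference import ReferenceEvaluator

sess = ReferenceEvaluator(jitted_l1_loss.get_onnx())
# The input names x0 and x1 come from the graph printed above.
print(sess.run(None, {"x0": x, "x1": y}))  # expected close to 0.09999999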
We can also define a more complex loss by computing the L1 loss on the first column and the L2 loss on the second one.
def l1_loss(x, y):
return absolute(x - y).sum()
def l2_loss(x, y):
return ((x - y) ** 2).sum()
def myloss(x, y):
return l1_loss(x[:, 0], y[:, 0]) + l2_loss(x[:, 1], y[:, 1])
jitted_myloss = jit_onnx(myloss)
x = np.array([[0.1, 0.2], [0.3, 0.4]], dtype=np.float32)
y = np.array([[0.11, 0.22], [0.33, 0.44]], dtype=np.float32)
res = jitted_myloss(x, y)
print(res)
print(onnx_simple_text_plot(jitted_myloss.get_onnx()))
0.042
opset: domain='' version=18
input: name='x0' type=dtype('float32') shape=['', '']
input: name='x1' type=dtype('float32') shape=['', '']
Constant(value=[1]) -> cst__0
Constant(value=[2]) -> cst__1
Constant(value=[1]) -> cst__2
Slice(x0, cst__0, cst__1, cst__2) -> r__12
Constant(value=[1]) -> cst__3
Constant(value=[2]) -> cst__4
Constant(value=[1]) -> cst__5
Slice(x1, cst__3, cst__4, cst__5) -> r__14
Constant(value=[0]) -> cst__6
Constant(value=[1]) -> cst__7
Constant(value=[1]) -> cst__8
Slice(x0, cst__6, cst__7, cst__8) -> r__16
Constant(value=[0]) -> cst__9
Constant(value=[1]) -> cst__10
Constant(value=[1]) -> cst__11
Slice(x1, cst__9, cst__10, cst__11) -> r__18
Constant(value=[1]) -> cst__13
Squeeze(r__12, cst__13) -> r__20
Constant(value=[1]) -> cst__15
Squeeze(r__14, cst__15) -> r__21
Sub(r__20, r__21) -> r__24
Constant(value=[1]) -> cst__17
Squeeze(r__16, cst__17) -> r__22
Constant(value=[1]) -> cst__19
Squeeze(r__18, cst__19) -> r__23
Sub(r__22, r__23) -> r__25
Abs(r__25) -> r__28
ReduceSum(r__28, keepdims=0) -> r__30
Constant(value=2) -> r__26
CastLike(r__26, r__24) -> r__27
Pow(r__24, r__27) -> r__29
ReduceSum(r__29, keepdims=0) -> r__31
Add(r__30, r__31) -> r__32
output: name='r__32' type=dtype('float32') shape=None
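The value 0.042 can be verified with plain numpy: the L1 term on the first column gives |0.1 - 0.11| + |0.3 - 0.33| ≈ 0.04 and the L2 term on the second column gives (0.2 - 0.22)² + (0.4 - 0.44)² ≈ 0.002.

# Cross-check with plain numpy on the same inputs.
l1 = np.abs(x[:, 0] - y[:, 0]).sum()    # ~0.04
l2 = ((x[:, 1] - y[:, 1]) ** 2).sum()   # ~0.002
print(l1 + l2)                          # ~0.042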
Eager mode#
import numpy as np
from onnx_array_api.npx import absolute, eager_onnx
def l1_loss(x, y):
"""
err is a type inheriting from
:class:`EagerTensor <onnx_array_api.npx.npx_tensors.EagerTensor>`.
It needs to be converted to numpy first before any display.
"""
err = absolute(x - y).sum()
print(f"l1_loss={err.numpy()}")
return err
def l2_loss(x, y):
err = ((x - y) ** 2).sum()
print(f"l2_loss={err.numpy()}")
return err
def myloss(x, y):
return l1_loss(x[:, 0], y[:, 0]) + l2_loss(x[:, 1], y[:, 1])
Eager mode is enabled by the function eager_onnx. It intercepts all calls to myloss. On the first call, it replaces every numpy array by a tensor corresponding to the selected runtime, here numpy as well through EagerNumpyTensor.
eager_myloss = eager_onnx(myloss)
x = np.array([[0.1, 0.2], [0.3, 0.4]], dtype=np.float32)
y = np.array([[0.11, 0.22], [0.33, 0.44]], dtype=np.float32)
First execution and conversion to ONNX. The wrapper caches many ONNX graphs, one per elementary operator (+, -, /, *, …), reduce function, or any other function from the API. It reuses them as long as the input types and the number of dimensions are the same, and creates new ones otherwise while keeping the old ones.

res = eager_myloss(x, y)
print(res)
l1_loss=0.03999999910593033
l2_loss=0.001999999163672328
0.042
There is no single ONNX graph to show: every operation is converted into its own small ONNX graph.
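To make "small ONNX graphs" concrete, here is a sketch of the kind of single-operator model eager mode relies on, built with the standard onnx helpers and run with the reference evaluator (the graphs built internally by onnx-array-api may differ in detail):

from onnx import TensorProto
from onnx.helper import make_graph, make_model, make_node, make_tensor_value_info
from onnx.reference import ReferenceEvaluator

# A one-node graph computing Abs, the minimal unit eager mode
# executes for each intercepted operation.
X = make_tensor_value_info("X", TensorProto.FLOAT, [None, None])
Y = make_tensor_value_info("Y", TensorProto.FLOAT, [None, None])
model = make_model(make_graph([make_node("Abs", ["X"], ["Y"])], "abs_graph", [X], [Y]))
print(ReferenceEvaluator(model).run(None, {"X": x - y})[0])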
Total running time of the script: (0 minutes 0.899 seconds)