Example gallery
A couple of examples illustrating different implementations of a dot product (see also sphinx-gallery).
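To give a flavor of the kind of comparison these examples make, here is a minimal sketch contrasting a naive pure-Python dot product with `numpy.dot`. The function name `dot_loop` is illustrative and not taken from the gallery.

```python
import numpy as np

def dot_loop(a, b):
    # naive pure-Python dot product: sum of elementwise products
    s = 0.0
    for x, y in zip(a, b):
        s += x * y
    return s

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# both implementations agree; numpy is vectorized and much faster on large arrays
assert dot_loop(a, b) == np.dot(a, b)
```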
Getting started
A PyTorch nightly build should be installed; see Start Locally.
git clone https://github.com/xadupre/experimental-experiment.git
pip install onnxruntime-gpu pynvml
pip install -r requirements-dev.txt
export PYTHONPATH=$PYTHONPATH:<this folder>
Compare torch exporters
The script evaluates the peak memory and computation time of each exporter. It also compares the exported models when run through onnxruntime. The full script takes around 20 minutes to complete. It stores all the graphs, the data used to draw them, and the models on disk.
python _doc/examples/plot_torch_export.py -s large
101: Graph Optimization
101: Profile an existing model with onnxruntime
101: Linear Regression and export to ONNX
101: A custom backend for torch
102: Convolution and Matrix Multiplication
301: Compares LLAMA exporters
301: Compares LLAMA exporters for onnxrt backend
102: Measure LLAMA speed
201: Evaluate DORT Training
201: Evaluate DORT
201: Evaluate different ways to export a torch model to ONNX