Here you will find various samples, tutorials, and reference implementations for using ONNX Runtime. For a list of available dockerfiles and published images to help with getting started, see this page.
General
Integrations
- Azure Machine Learning
- Azure IoT Edge
- Azure Media Services
- Azure SQL Edge and Managed Instance
- Windows Machine Learning
- ML.NET
- Hugging Face
Inference only
- Basic
- ResNet50
- ONNX-Ecosystem Docker image samples
- ONNX Runtime Server: SSD Single Shot MultiBox Detector
- NUPHAR EP samples
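The inference-only samples above all follow the same basic pattern: load an ONNX model into an `InferenceSession` and call `run`. A minimal Python sketch of that pattern (the model path and input shape are placeholders, not taken from any specific sample):

```python
import numpy as np
import onnxruntime as ort

# Load the model; explicitly requesting the CPU execution provider here.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

# Inspect the model's expected input.
input_meta = session.get_inputs()[0]
print(input_meta.name, input_meta.shape, input_meta.type)

# Run inference on a dummy batch (shape assumed for illustration).
x = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {input_meta.name: x})
print(outputs[0].shape)
```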
Inference with model conversion
- scikit-learn (SKL) tutorials
- Keras - Basic
- SSD MobileNet (TensorFlow)
- BERT-SQuAD (PyTorch) on CPU
- BERT-SQuAD (PyTorch) on GPU
- BERT-SQuAD (Keras)
- BERT-SQuAD (TensorFlow)
- GPT2 (PyTorch)
- EfficientDet (TensorFlow)
- EfficientNet-Edge (TensorFlow)
- EfficientNet-Lite (TensorFlow)
- EfficientNet (Keras)
- MNIST (Keras)
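The conversion tutorials above share one idea: export the framework model to ONNX, then run it with ONNX Runtime. As one hedged illustration, here is a sketch of converting a scikit-learn classifier with skl2onnx; the model, input name, and feature count are made up for the example:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType
import onnxruntime as ort

# Train a small model to convert (a stand-in for the tutorial models).
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=500).fit(X, y)

# Convert to ONNX; the input name "input" and shape are our choice.
onnx_model = convert_sklearn(
    model, initial_types=[("input", FloatTensorType([None, 4]))]
)
with open("logreg_iris.onnx", "wb") as f:
    f.write(onnx_model.SerializeToString())

# Verify the converted model with ONNX Runtime.
session = ort.InferenceSession("logreg_iris.onnx", providers=["CPUExecutionProvider"])
preds = session.run(None, {"input": X[:5].astype(np.float32)})
print(preds[0])  # predicted labels for the first five rows
```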
Quantization
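For post-training quantization, ONNX Runtime provides a `quantize_dynamic` helper that rewrites model weights to int8. A minimal sketch, assuming you already have a float32 model at a placeholder path:

```python
from onnxruntime.quantization import quantize_dynamic, QuantType

# Dynamically quantize weights to int8; activations are quantized at runtime.
quantize_dynamic(
    model_input="model.onnx",         # placeholder: any float32 ONNX model
    model_output="model.quant.onnx",  # quantized model written here
    weight_type=QuantType.QInt8,
)
```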
Other
- C: SqueezeNet
- C++: model-explorer - single and batch processing
- C++: SqueezeNet
- C++: MNIST
Inference and Deploy through AzureML
For additional information on training in AzureML, please see the AzureML Training Notebooks.
- Inferencing on CPU using ONNX Model Zoo models
- Inferencing on CPU with PyTorch model training
- Inferencing on CPU with model conversion for existing (CoreML) model
- Inferencing on GPU with TensorRT Execution Provider (AKS)
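Whatever the source framework, the AzureML deployments above converge on the same scoring-script contract: an `init()` that loads the model and a `run()` that handles requests. A hedged sketch of such a script; the `AZUREML_MODEL_DIR` environment variable is AzureML's standard model mount point, while the model filename and JSON request schema are assumptions for illustration:

```python
import json
import os

import numpy as np
import onnxruntime as ort

session = None

def init():
    # AzureML mounts registered models under AZUREML_MODEL_DIR.
    global session
    model_path = os.path.join(os.environ["AZUREML_MODEL_DIR"], "model.onnx")
    session = ort.InferenceSession(model_path, providers=["CPUExecutionProvider"])

def run(raw_data):
    # Expect JSON like {"data": [[...], ...]}; this schema is our assumption.
    data = np.array(json.loads(raw_data)["data"], dtype=np.float32)
    input_name = session.get_inputs()[0].name
    result = session.run(None, {input_name: data})
    return result[0].tolist()
```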
Inference and Deploy with Azure IoT Edge
Deploy ONNX model in Azure SQL Edge
Examples of inferencing with ONNX Runtime through Windows Machine Learning