# DNNL Execution Provider

Intel® Deep Neural Network Library (Intel® DNNL) is an open-source performance library for deep-learning applications. The library accelerates deep-learning applications and frameworks on Intel® architecture and Intel® Processor Graphics Architecture. Intel DNNL contains vectorized and threaded building blocks that you can use to implement deep neural networks (DNN) with C and C++ interfaces. For more information, see the DNNL documentation at https://intel.github.io/mkl-dnn/.

Intel and Microsoft have developed the DNNL Execution Provider (EP) for ONNX Runtime, which accelerates ONNX Runtime performance by using DNNL-optimized primitives.

For information on how DNNL optimizes subgraphs, see Subgraph Optimization.

## Build

For build instructions, please see the BUILD page.

## Supported OS

- Ubuntu 16.04
- Windows 10
- Mac OS X

## Supported backend

- CPU

## Using the DNNL Execution Provider

### C/C++

The DNNL execution provider (DNNLExecutionProvider) needs to be registered with ONNX Runtime to enable it in the inference session.

```c++
// dnnl_provider_factory.h declares OrtSessionOptionsAppendExecutionProvider_Dnnl
// (include path may vary by package layout).
#include <onnxruntime_cxx_api.h>
#include "dnnl_provider_factory.h"

Ort::Env env = Ort::Env{ORT_LOGGING_LEVEL_ERROR, "Default"};
Ort::SessionOptions sf;
bool enable_cpu_mem_arena = true;
Ort::ThrowOnError(OrtSessionOptionsAppendExecutionProvider_Dnnl(sf, enable_cpu_mem_arena));
```

The C API details are here.

### Python

When using the Python wheel from an ONNX Runtime build that includes the DNNL execution provider, it is automatically prioritized over the CPU execution provider. Python API details are here.
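
The snippet below is a minimal sketch of running inference with such a wheel; the model path and input name are placeholders, and the provider name `DnnlExecutionProvider` is assumed to be how the DNNL EP is reported.

```python
import numpy as np
import onnxruntime as ort

# With a DNNL-enabled build, no explicit provider registration is needed;
# the DNNL EP takes priority over the default CPU EP.
session = ort.InferenceSession("model.onnx")   # placeholder model path
print(session.get_providers())                 # DNNL EP should be listed ahead of the CPU EP

# Inference uses a feed dict of input name -> numpy array (names/shapes depend on the model):
# outputs = session.run(None, {"input": np.zeros((1, 3, 224, 224), dtype=np.float32)})
```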

## Performance Tuning

For performance tuning, please see the guidance on the ONNX Runtime Perf Tuning page.