# Dockerfiles with [Intel® Distribution of OpenVINO™ toolkit](https://github.com/openvinotoolkit/openvino)

This repository folder contains Dockerfiles to build a Docker image with the Intel® Distribution of OpenVINO™ toolkit.
You can use the Docker CI framework to build an image; follow [Get Started with DockerHub CI for Intel® Distribution of OpenVINO™ toolkit](../get-started.md).

1. [Supported Operating Systems for Docker image](#supported-operating-systems-for-docker-image)
2. [Supported devices and distributions](#supported-devices-and-distributions)
3. [Where to get OpenVINO package](#where-to-get-openvino-package)
4. [How to build](#how-to-build)
5. [Prebuilt images](#prebuilt-images)
6. [How to run a container](#how-to-run-a-container)

## Supported Operating Systems for Docker image

- `ubuntu18` folder (Ubuntu* 18.04 LTS)
- `ubuntu20` folder (Ubuntu* 20.04 LTS)
- `rhel8` folder (RHEL* 8)
- `winserver2019` folder (Windows* Server Core base OS LTSC 2019)
- `windows20h2` folder (Windows* OS 20H2)

*Note*: The `dl-workbench` folder contains Dockerfiles for OpenVINO™ Deep Learning Workbench.

## Supported devices and distributions

![OpenVINO Dockerfile Name](../docs/img/dockerfile_name.png)

**Devices:**
- CPU
- GPU
- VPU (NCS2)
- HDDL (VPU HDDL) (_Prerequisite_: run the HDDL daemon on the host machine; follow the [configuration guide for HDDL device](../install_guide_vpu_hddl.md))

See the OpenVINO documentation for [supported devices](https://docs.openvino.ai/latest/openvino_docs_IE_DG_supported_plugins_Supported_Devices.html).

**Distributions:**

- **runtime**: IE core, nGraph, plugins
- **dev**: IE core, nGraph, plugins, samples, Python dev tools: Model Optimizer, Post-training Optimization Tool, Accuracy Checker, Open Model Zoo tools (downloader, converter), OpenCV
- **base** (only for CPU): IE core, nGraph

You can generate a Dockerfile with your own settings; follow the [DockerHub CI documentation](../get-started.md) (a brief sketch is shown after this list).
* _runtime_ and _dev_ distributions are based on the archive package of the OpenVINO product; you can simply remove unnecessary parts.
* The _base_ distribution is created by the [OpenVINO™ Deployment Manager](https://docs.openvino.ai/latest/openvino_docs_install_guides_deployment_manager_tool.html).
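
For illustration, a minimal sketch of generating (but not building) a Dockerfile with the framework. The `gen_dockerfile` mode name is an assumption here; check `python3 docker_openvino.py --help` in your framework version for the exact modes and options:

```bash
# Assumed invocation: generate a dev Dockerfile for Ubuntu 20.04 without building it.
# The "gen_dockerfile" mode and option names are assumptions; verify them with --help.
python3 docker_openvino.py gen_dockerfile -os ubuntu20 -dist dev -p 2022.2.0
```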

## Where to get OpenVINO package

You can get OpenVINO distribution packages (runtime, dev) directly from the [public storage](https://storage.openvinotoolkit.org/repositories/openvino/packages/).
For example:
* take the 2022.2 > linux > ubuntu20 package `l_openvino_toolkit_ubuntu20_2022.2.0.7713.af16ea1d79a_x86_64.tgz`.
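
For instance, a minimal sketch of fetching that example package with `curl`; the URL follows the storage layout above, so adjust the version and OS parts to the release you need:

```bash
# Download the example Ubuntu 20.04 package from the public storage.
# The URL mirrors the storage layout shown above; adjust it to your release.
curl -L -O https://storage.openvinotoolkit.org/repositories/openvino/packages/2022.2/linux/l_openvino_toolkit_ubuntu20_2022.2.0.7713.af16ea1d79a_x86_64.tgz
```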

## How to build

**Note:** Use the Docker CI framework release that corresponds to the version of the OpenVINO™ toolkit you need to build.
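
For example, a sketch of checking out a matching framework release; the exact tag name format is an assumption, so list the available tags first:

```bash
# Clone the Docker CI framework and switch to the release matching your OpenVINO version.
# The tag name is an assumption; list the tags and pick the one that matches your release.
git clone https://github.com/openvinotoolkit/docker_ci.git
cd docker_ci
git tag -l
git checkout <matching_release_tag>
```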

* Base image with CPU only:

You can use the Docker CI framework to build an image; follow [Get Started with DockerHub CI for Intel® Distribution of OpenVINO™ toolkit](../get-started.md).

```bash
python3 docker_openvino.py build --file "dockerfiles/ubuntu18/openvino_c_base_2022.2.0.dockerfile" -os ubuntu18 -dist base -p 2022.2.0
```

----------------

* Dev/runtime image:

You can use the Docker CI framework to build an image; follow [Get Started with DockerHub CI for Intel® Distribution of OpenVINO™ toolkit](../get-started.md).

```bash
python3 docker_openvino.py build --file "dockerfiles/ubuntu18/openvino_cgvh_dev_2022.2.0.dockerfile" -os ubuntu18 -dist dev -p 2022.2.0
```

For the runtime distribution, set the appropriate `-dist` and `--file` options.

Alternatively, you can build via Docker Engine directly, but then you need to specify the `package_url` argument (see the [Where to get OpenVINO package](#where-to-get-openvino-package) section):
```bash
docker build --build-arg package_url=https://storage.openvinotoolkit.org/repositories/openvino/packages/2022.2/linux/l_openvino_toolkit_ubuntu18_2022.2.0.7713.af16ea1d79a_x86_64.tgz \
-t ubuntu18_dev:2022.2.0 -f dockerfiles/ubuntu18/openvino_cgvh_dev_2022.2.0.dockerfile .
```
----------------

* Custom image with CPU, iGPU, and VPU support:

You can use Dockerfiles from the `build_custom` folders to build a custom version of OpenVINO™ from source code for development, as sketched after the links below. To learn more, follow:
* [Build custom Intel® Distribution of OpenVINO™ toolkit Docker image on Ubuntu 18](ubuntu18/build_custom/README.md)
* [Build custom Intel® Distribution of OpenVINO™ toolkit Docker image on Ubuntu 20](ubuntu20/build_custom/README.md)
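
As a rough sketch only (the Dockerfile name inside `build_custom` and any required build arguments are assumptions; consult the linked READMEs for the exact invocation):

```bash
# Build a custom image from the Ubuntu 20.04 build_custom folder.
# The default Dockerfile name and any --build-arg values are assumptions; see the README above.
docker build -t openvino_custom:latest ubuntu20/build_custom/
```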

## Prebuilt images

Prebuilt images are available on:
- [Docker Hub](https://hub.docker.com/u/openvino)
- [Red Hat* Quay.io](https://quay.io/organization/openvino)
- [Red Hat* Ecosystem Catalog (runtime image)](https://catalog.redhat.com/software/containers/intel/openvino-runtime/606ff4d7ecb5241699188fb3)
- [Red Hat* Ecosystem Catalog (development image)](https://catalog.redhat.com/software/containers/intel/openvino-dev/613a450dc9bc35f21dc4a1f7)
- [Azure* Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/intel_corporation.openvino)
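
For example, to pull a prebuilt development image from Docker Hub (the tag is an example; check the registry for the currently published tags):

```bash
# Pull the Ubuntu 20.04 development image from Docker Hub.
# Tag 2022.2.0 is an example; check Docker Hub for available tags.
docker pull openvino/ubuntu20_dev:2022.2.0
```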

## How to run a container

Follow the [Run a container](../get-started.md#run-a-container) section in the DockerHub CI getting started guide.
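
As a minimal sketch, assuming the `ubuntu20_dev` image pulled above, an interactive container can be started like this:

```bash
# Start an interactive shell in the dev image; --rm removes the container on exit.
docker run -it --rm openvino/ubuntu20_dev:2022.2.0
```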

## Documentation

* [Install Intel® Distribution of OpenVINO™ toolkit for Linux* from a Docker* Image](https://docs.openvino.ai/latest/openvino_docs_install_guides_installing_openvino_docker_linux.html)
* [Install Intel® Distribution of OpenVINO™ toolkit for Windows* from Docker* Image](https://docs.openvino.ai/latest/openvino_docs_install_guides_installing_openvino_docker_windows.html)
* [Official Dockerfile reference](https://docs.docker.com/engine/reference/builder/)

---
\* Other names and brands may be claimed as the property of others.
# Using OpenVINO™ Toolkit containers with GPU accelerators

Containers can be used to execute inference operations with GPU acceleration or with the [virtual devices](https://docs.openvino.ai/nightly/openvino_docs_Runtime_Inference_Modes_Overview.html).

The following prerequisites apply (a quick device check is shown after this list):

- Use a Linux kernel with support for your integrated or discrete GPU model. Check the documentation at https://dgpu-docs.intel.com/driver/kernel-driver-types.html.
  On a Linux host, confirm that the character device `/dev/dri` is available.

- On Windows Subsystem for Linux (WSL2), refer to the guidelines at https://docs.openvino.ai/nightly/openvino_docs_install_guides_configurations_for_intel_gpu.html#
  Note that on WSL2, the character device `/dev/dxg` must be present.

- The Docker image for the container must include GPU runtime drivers, as described at https://docs.openvino.ai/nightly/openvino_docs_install_guides_configurations_for_intel_gpu.html#
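
For example, a quick check that the expected character device is visible (using the device names listed above):

```
# On a native Linux host, the DRI render nodes should be listed:
ls -l /dev/dri
# On WSL2, the paravirtualized GPU device should be present:
ls -l /dev/dxg
```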

Once the host and a preconfigured Docker engine are up and running, use the `docker run` parameters described below.

## Linux

The command below should report both CPU and GPU devices available for inference execution:
```
export IMAGE=openvino/ubuntu20_dev:2023.0.0
docker run -it --device /dev/dri --group-add=$(stat -c "%g" /dev/dri/render*) $IMAGE ./samples/cpp/samples_bin/hello_query_device
```

- `--device /dev/dri` passes the GPU device to the container
- `--group-add` adds the host render group to the container user so it has permission to use the GPU device

## Windows Subsystem for Linux

On WSL2, use the following command to start the container:

```
export IMAGE=openvino/ubuntu20_dev:2023.0.0
docker run -it --device=/dev/dxg -v /usr/lib/wsl:/usr/lib/wsl $IMAGE ./samples/cpp/samples_bin/hello_query_device
```

- `--device=/dev/dxg` passes the virtual GPU device to the container
- `-v /usr/lib/wsl:/usr/lib/wsl` mounts the required WSL libraries into the container

## Usage examples

Run the benchmark app on the GPU accelerator with the `-use_device_mem` parameter, which showcases inference without copying data between CPU and GPU memory:
```
docker run --device /dev/dri --group-add=$(stat -c "%g" /dev/dri/render*) $IMAGE bash -c " \
curl -O https://storage.openvinotoolkit.org/repositories/open_model_zoo/2023.0/models_bin/1/resnet50-binary-0001/FP16-INT1/resnet50-binary-0001.xml && \
curl -O https://storage.openvinotoolkit.org/repositories/open_model_zoo/2023.0/models_bin/1/resnet50-binary-0001/FP16-INT1/resnet50-binary-0001.bin && \
./samples/cpp/samples_bin/benchmark_app -m resnet50-binary-0001.xml -d GPU -use_device_mem -inference_only=false"
```
In the benchmark app, the `-use_device_mem` parameter employs `ov::RemoteTensor` as the input buffer. It demonstrates the gain achieved without data copies between the host and the GPU device.

Run the benchmark app on both GPU and CPU; the load will be distributed across both device types:
```
docker run --device /dev/dri --group-add=$(stat -c "%g" /dev/dri/render*) $IMAGE bash -c " \
curl -O https://storage.openvinotoolkit.org/repositories/open_model_zoo/2023.0/models_bin/1/resnet50-binary-0001/FP16-INT1/resnet50-binary-0001.xml && \
curl -O https://storage.openvinotoolkit.org/repositories/open_model_zoo/2023.0/models_bin/1/resnet50-binary-0001/FP16-INT1/resnet50-binary-0001.bin && \
./samples/cpp/samples_bin/benchmark_app -m resnet50-binary-0001.xml -d MULTI:GPU,CPU"
```

**Check also:**

- [Prebuilt images](#prebuilt-images)
- [Working with OpenVINO Containers](docs/containers.md)
- [Generating dockerfiles and building the images in Docker_CI tools](docs/openvino_docker.md)
- [OpenVINO GPU Plugin](https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_supported_plugins_GPU.html)
# Configuration Guide for the Intel® Graphics Compute Runtime for OpenCL™ on Ubuntu* 20.04

Intel® Graphics Compute Runtime for OpenCL™ driver components are required to use the GPU plugin and to write custom layers for Intel® Integrated Graphics.
The driver is installed in the OpenVINO™ Docker image, but you need to activate it in the container for a non-root user if your host runs Ubuntu 20.04.
To access GPU capabilities, you need the correct permissions on both the host and the Docker container.
Run the following command to list the group that owns the render nodes on your host:

```bash
$ stat -c "group_name=%G group_id=%g" /dev/dri/render*
group_name=render group_id=134
```

OpenVINO™ Docker images do not contain a render group for the openvino non-root user, because the render group does not have a fixed group ID, unlike the video group.
Choose one of the options below to set up access to a GPU device from a container.

## 1. Configure a Host Non-Root User to Use a GPU Device from an OpenVINO Container on Ubuntu 20 Host [RECOMMENDED]

To run an OpenVINO container with the default non-root user (openvino) and access to a GPU device, you need a non-root user on the host with the same user ID as the `openvino` user inside the container.
By default, the `openvino` user has user ID 1000.
Create a non-root user, for example `host_openvino`, on the host with that user ID and membership in the video, render, and docker groups:

```bash
$ useradd -u 1000 -G users,video,render,docker host_openvino
```

Now you can use the OpenVINO container with GPU access under the non-root user:

```bash
$ docker run -it --rm --device /dev/dri <image_name>
```

## 2. Configure a Container to Use a GPU Device on Ubuntu 20 Host Under a Non-Root User

To run an OpenVINO container as a non-root user with access to a GPU device, specify the render group ID from your host:

```bash
$ docker run -it --rm --device /dev/dri --group-add=<render_group_id_on_host> <image_name>
```

For example, you can obtain the render group ID from your host inline:

```bash
$ docker run -it --rm --device /dev/dri --group-add=$(stat -c "%g" /dev/dri/render*) <image_name>
```

Now you can use the container with GPU access under the non-root user.

## 3. Configure an Image to Use a GPU Device on Ubuntu 20 Host and Save It

To run an OpenVINO container as root with access to a GPU device, use the command below:

```bash
$ docker run -it --rm --user root --device /dev/dri --name my_container <image_name>
```

Check the groups of the GPU device nodes in the container:

```bash
$ ls -l /dev/dri/
```

The output should look like the following:

```bash
crw-rw---- 1 root video 226, 0 Feb 20 14:28 card0
crw-rw---- 1 root 134 226, 128 Feb 20 14:28 renderD128
```

Create a render group in the container with the same group ID as on your host:

```bash
$ addgroup --gid 134 render
```

Check the groups of the GPU device nodes again:

```bash
$ ls -l /dev/dri/
```

The output should now show the group name instead of the bare group ID:

```bash
crw-rw---- 1 root video 226, 0 Feb 20 14:28 card0
crw-rw---- 1 root render 226, 128 Feb 20 14:28 renderD128
```

Add the non-root user to the render group:

```bash
$ usermod -a -G render openvino
$ id openvino
```

The `id` output should show that the user is now a member of the render group:

```bash
uid=1000(openvino) gid=1000(openvino) groups=1000(openvino),44(video),100(users),134(render)
```

Then switch back to the non-root user:

```bash
$ su openvino
```

Now you can use the container with GPU access under the non-root user, or you can save the container as an image and push it to your registry.
Open another terminal and run the commands below:

```bash
$ docker commit my_container my_image
$ docker run -it --rm --device /dev/dri --user openvino my_image
```
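
To push the saved image to a registry, a minimal sketch; the registry path below is a placeholder, so substitute your own:

```bash
# Tag the committed image for your registry and push it.
# "my.registry.example.com/my_image:gpu" is a hypothetical name; replace it with your registry path.
$ docker tag my_image my.registry.example.com/my_image:gpu
$ docker push my.registry.example.com/my_image:gpu
```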

---
\* Other names and brands may be claimed as the property of others.
# Working with OpenVINO™ Toolkit Images

## Runtime images

The runtime images include the OpenVINO toolkit with all dependencies required to run inference operations, and provide the OpenVINO API in both Python and C++.
No development tools are installed.
Here are examples of how the runtime image can be used:

```
export IMAGE=openvino/ubuntu20_runtime:2023.0.0
```

### Building and using the OpenVINO samples

```
docker run -it -u root $IMAGE bash -c "/opt/intel/openvino/install_dependencies/install_openvino_dependencies.sh -y -c dev && ./samples/cpp/build_samples.sh && \
/root/openvino_cpp_samples_build/intel64/Release/hello_query_device"
```

### Using Python samples
```
docker run -it $IMAGE python3 samples/python/hello_query_device/hello_query_device.py
```

## Development images

Development images include the OpenVINO runtime components as well as the development tools, providing a complete environment for experimenting with OpenVINO.
Below are examples of how the development container can be used:

```
export IMAGE=openvino/ubuntu20_dev:2023.0.0
```

### Listing OpenVINO Model Zoo models
```
docker run $IMAGE omz_downloader --print_all
```

### Download a model
```
mkdir model
docker run -u $(id -u) --rm -v $(pwd)/model:/tmp/model $IMAGE omz_downloader --name mozilla-deepspeech-0.6.1 -o /tmp/model
```

### Convert the model to IR format
```
docker run -u $(id -u) --rm -v $(pwd)/model:/tmp/model $IMAGE omz_converter --name mozilla-deepspeech-0.6.1 -d /tmp/model -o /tmp/model/converted/
```

### Run the benchmark app to test the model performance
```
docker run -u $(id -u) --rm -v $(pwd)/model:/tmp/model $IMAGE benchmark_app -m /tmp/model/converted/public/mozilla-deepspeech-0.6.1/FP32/mozilla-deepspeech-0.6.1.xml
```

### Run a demo from the OpenVINO Model Zoo
```
docker run $IMAGE bash -c "git clone --depth=1 --recurse-submodules --shallow-submodules https://github.com/openvinotoolkit/open_model_zoo.git && \
cd open_model_zoo/demos/classification_demo/python && \
curl -O https://storage.openvinotoolkit.org/repositories/open_model_zoo/2022.1/models_bin/3/resnet50-binary-0001/FP32-INT1/resnet50-binary-0001.xml && \
curl -O https://storage.openvinotoolkit.org/repositories/open_model_zoo/2022.1/models_bin/3/resnet50-binary-0001/FP32-INT1/resnet50-binary-0001.bin && \
curl -O https://raw.githubusercontent.com/openvinotoolkit/model_server/main/demos/common/static/images/zebra.jpeg && \
python3 classification_demo.py -m resnet50-binary-0001.xml -i zebra.jpeg --labels ../../../data/dataset_classes/imagenet_2012.txt --no_show -nstreams 1 -r"
```

**Check also:**

- [Prebuilt images](#prebuilt-images)
- [Deployment with GPU accelerator](docs/accelerators.md)
- [Generating dockerfiles and building the images in Docker_CI tools](docs/openvino_docker.md)
- [OpenVINO GPU Plugin](https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_supported_plugins_GPU.html)