documentation updates (#280)
* documentation updates
* version 2023.0.0 set as default

Co-authored-by: Damian Kalinowski <[email protected]>
dtrawins authored Jun 1, 2023
1 parent 1358b12 commit cc43594
Showing 7 changed files with 444 additions and 325 deletions.
18 changes: 11 additions & 7 deletions README.md
@@ -8,20 +8,21 @@ You can reuse available Dockerfiles, add your layer and customize the image of O

 ## Documentation
 
-* [Get Started with DockerHub CI for OpenVINO™ toolkit](get-started.md)
+* [Get Started with OpenVINO™ toolkit images](get-started.md)
 * [Available Dockerfiles for OpenVINO™ toolkit](dockerfiles)
 * [Available Dockerfiles for OpenVINO™ Deep Learning Workbench](dockerfiles/dl-workbench)
 * [Generating the dockerfiles and building the images](docs/openvino_docker.md)
+* [Working with OpenVINO containers](docs/containers.md)
+* [Deployment with GPU accelerators](docs/accelerators.md)
 * [Available Tutorials](docs/tutorials)
 
 As [Docker\*](https://docs.docker.com/) is (mostly) just an isolation tool, the OpenVINO toolkit inside the container is the same as the OpenVINO toolkit installed natively on the host machine,
-so the [OpenVINO documentation](https://docs.openvino.ai/) is fully applicable to containerized OpenVINO distribution:
-* [Install Intel® Distribution of OpenVINO™ toolkit for Linux* from a Docker* Image](https://docs.openvino.ai/latest/openvino_docs_install_guides_installing_openvino_docker_linux.html)
-* [Deploy with OpenVINO](https://docs.openvino.ai/latest/openvino_deployment_guide.html)
+so the [OpenVINO documentation](https://docs.openvino.ai/) is fully applicable to the containerized OpenVINO distribution.
 
-## Supported Operating Systems for Docker image:
+## Supported Operating Systems for Docker Base Image:
 
+- Ubuntu 22.04 LTS
 - Ubuntu 20.04 LTS
-- RHEL 8
+- RedHat UBI 8
 
 ## Prebuilt images

@@ -33,6 +34,9 @@ Prebuilt images are available on:
 - [Red Hat* Ecosystem Catalog (development image)](https://catalog.redhat.com/software/containers/intel/openvino-dev/613a450dc9bc35f21dc4a1f7)
 - [Azure* Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/intel_corporation.openvino)
 
+Note: The OpenVINO development environment in a Docker container is also available in the [notebook repository](https://github.com/openvinotoolkit/openvino_notebooks).
+It can be deployed in [Red Hat OpenShift Data Science (RHODS)](https://github.com/openvinotoolkit/operator/blob/main/docs/notebook_in_rhods.md).
+
 ## Licenses
 
 The DockerHub CI framework for Intel® Distribution of OpenVINO™ toolkit is licensed under [Apache License Version 2.0](./LICENSE).
81 changes: 81 additions & 0 deletions docs/accelerators.md
@@ -0,0 +1,81 @@
# Using OpenVINO™ Toolkit containers with GPU accelerators


Containers can be used to execute inference operations with GPU acceleration or with the [virtual devices](https://docs.openvino.ai/nightly/openvino_docs_Runtime_Inference_Modes_Overview.html).

The following prerequisites apply:

- Use a Linux kernel that supports your integrated or discrete GPU. Check the documentation at https://dgpu-docs.intel.com/driver/kernel-driver-types.html.
On a Linux host, confirm that the character device `/dev/dri` is available (see the sketch after this list).

- On Windows Subsystem for Linux (WSL2), refer to the guidelines at https://docs.openvino.ai/nightly/openvino_docs_install_guides_configurations_for_intel_gpu.html#
Note that on WSL2, the character device `/dev/dxg` must be present.

- The Docker image for the container must include the GPU runtime drivers, as described at https://docs.openvino.ai/nightly/openvino_docs_install_guides_configurations_for_intel_gpu.html#
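
A quick way to verify the Linux prerequisite, as a minimal sketch (assuming a standard DRM setup with at least one render node under `/dev/dri`):

```
# Confirm the DRI character devices exist on the host
ls -l /dev/dri
# Print the group id that owns the render node(s); this id is passed
# to the container via --group-add in the commands below
stat -c "%g" /dev/dri/render*
```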

Once the host and a preconfigured Docker engine are up and running, use the `docker run` parameters described below.

## Linux

The command below should report both CPU and GPU devices available for inference execution:
```
export IMAGE=openvino/ubuntu20_dev:2023.0.0
docker run -it --device /dev/dri --group-add=$(stat -c "%g" /dev/dri/render*) $IMAGE ./samples/cpp/samples_bin/hello_query_device
```

`--device /dev/dri` passes the GPU device to the container.
`--group-add` adds the group that owns the GPU device to the container user, granting it permission to use the device.
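
A more explicit variant of the same command, as a sketch (assuming a single render node, so `stat` returns one group id):

```
# Resolve the render group id first, then pass it to docker run;
# easier to debug than the inline $(...) substitution
RENDER_GROUP=$(stat -c "%g" /dev/dri/render*)
docker run -it --device /dev/dri --group-add=$RENDER_GROUP $IMAGE \
  ./samples/cpp/samples_bin/hello_query_device
```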

## Windows Subsystem for Linux

On WSL2, use the following command to start the container:

```
export IMAGE=openvino/ubuntu20_dev:2023.0.0
docker run -it --device=/dev/dxg -v /usr/lib/wsl:/usr/lib/wsl $IMAGE ./samples/cpp/samples_bin/hello_query_device
```
`--device /dev/dxg` passes the virtual GPU device to the container.
`-v /usr/lib/wsl:/usr/lib/wsl` mounts the required WSL libraries into the container.
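
To confirm from inside the container that the GPU is visible to OpenVINO, a hedged alternative check (assuming the image's Python environment ships the `openvino` package, as the dev images do):

```
# List the devices OpenVINO can enumerate; "GPU" should appear alongside "CPU"
docker run --device=/dev/dxg -v /usr/lib/wsl:/usr/lib/wsl $IMAGE \
  python3 -c "from openvino.runtime import Core; print(Core().available_devices)"
```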


## Usage examples

Run the benchmark app on the GPU accelerator with the `-use_device_mem` parameter, which showcases inference without copying data between CPU and GPU memory:
```
docker run --device /dev/dri --group-add=$(stat -c "%g" /dev/dri/render*) $IMAGE bash -c " \
curl -O https://storage.openvinotoolkit.org/repositories/open_model_zoo/2023.0/models_bin/1/resnet50-binary-0001/FP16-INT1/resnet50-binary-0001.xml && \
curl -O https://storage.openvinotoolkit.org/repositories/open_model_zoo/2023.0/models_bin/1/resnet50-binary-0001/FP16-INT1/resnet50-binary-0001.bin && \
./samples/cpp/samples_bin/benchmark_app -m resnet50-binary-0001.xml -d GPU -use_device_mem -inference_only=false"
```
In the benchmark app, the parameter `-use_device_mem` employs `ov::RemoteTensor` as the input buffer. It demonstrates the performance gain from avoiding data copies between the host and the GPU device.
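
For comparison, the same benchmark can be run without `-use_device_mem` to quantify the cost of those host-to-GPU copies (a sketch under the same assumptions as above):

```
docker run --device /dev/dri --group-add=$(stat -c "%g" /dev/dri/render*) $IMAGE bash -c " \
curl -O https://storage.openvinotoolkit.org/repositories/open_model_zoo/2023.0/models_bin/1/resnet50-binary-0001/FP16-INT1/resnet50-binary-0001.xml && \
curl -O https://storage.openvinotoolkit.org/repositories/open_model_zoo/2023.0/models_bin/1/resnet50-binary-0001/FP16-INT1/resnet50-binary-0001.bin && \
./samples/cpp/samples_bin/benchmark_app -m resnet50-binary-0001.xml -d GPU -inference_only=false"
```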

Run the benchmark app using both the GPU and the CPU. The load will be distributed across both device types:
```
docker run --device /dev/dri --group-add=$(stat -c "%g" /dev/dri/render*) $IMAGE bash -c " \
curl -O https://storage.openvinotoolkit.org/repositories/open_model_zoo/2023.0/models_bin/1/resnet50-binary-0001/FP16-INT1/resnet50-binary-0001.xml && \
curl -O https://storage.openvinotoolkit.org/repositories/open_model_zoo/2023.0/models_bin/1/resnet50-binary-0001/FP16-INT1/resnet50-binary-0001.bin && \
./samples/cpp/samples_bin/benchmark_app -m resnet50-binary-0001.xml -d MULTI:GPU,CPU"
```
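
Device selection can also be delegated to OpenVINO. A hedged variant (assuming the same `$IMAGE` and model as above) using the AUTO virtual device, which picks the best available device at load time:

```
docker run --device /dev/dri --group-add=$(stat -c "%g" /dev/dri/render*) $IMAGE bash -c " \
curl -O https://storage.openvinotoolkit.org/repositories/open_model_zoo/2023.0/models_bin/1/resnet50-binary-0001/FP16-INT1/resnet50-binary-0001.xml && \
curl -O https://storage.openvinotoolkit.org/repositories/open_model_zoo/2023.0/models_bin/1/resnet50-binary-0001/FP16-INT1/resnet50-binary-0001.bin && \
./samples/cpp/samples_bin/benchmark_app -m resnet50-binary-0001.xml -d AUTO"
```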


**Check also:**

[Prebuilt images](../README.md#prebuilt-images)

[Working with OpenVINO Containers](containers.md)

[Generating dockerfiles and building the images in Docker_CI tools](openvino_docker.md)

[OpenVINO GPU Plugin](https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_supported_plugins_GPU.html)
74 changes: 74 additions & 0 deletions docs/containers.md
@@ -0,0 +1,74 @@
# Working with OpenVINO™ Toolkit Images

## Runtime images

The runtime images include the OpenVINO toolkit with all dependencies required to run inference, and expose the OpenVINO API in both Python and C++.
No development tools are installed.
Here are examples of how the runtime image can be used:

```
export IMAGE=openvino/ubuntu20_runtime:2023.0.0
```

### Building and using the OpenVINO samples

```
docker run -it -u root $IMAGE bash -c "/opt/intel/openvino/install_dependencies/install_openvino_dependencies.sh -y -c dev && ./samples/cpp/build_samples.sh && \
/root/openvino_cpp_samples_build/intel64/Release/hello_query_device"
```
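
The container above is ephemeral, so the build products vanish when it exits. A sketch for keeping them (assuming a hypothetical host directory `./samples_build` mounted over the default build location used by `build_samples.sh`):

```
# Persist the built samples outside the container
mkdir -p samples_build
docker run -it -u root -v $(pwd)/samples_build:/root/openvino_cpp_samples_build $IMAGE \
  bash -c "/opt/intel/openvino/install_dependencies/install_openvino_dependencies.sh -y -c dev && \
  ./samples/cpp/build_samples.sh"
```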

### Using Python samples
```
docker run -it $IMAGE python3 samples/python/hello_query_device/hello_query_device.py
```
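
The same check can be done without the sample script, as a minimal sketch using the `openvino` Python package bundled in the image:

```
# Print the inference devices OpenVINO detects inside the container
docker run -it $IMAGE python3 -c "from openvino.runtime import Core; print(Core().available_devices)"
```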

## Development images

Development images include the OpenVINO runtime components as well as the development tools, providing a complete environment for experimenting with OpenVINO.
Examples of how the development container can be used are below:

```
export IMAGE=openvino/ubuntu20_dev:2023.0.0
```

### Listing OpenVINO Model Zoo Models
```
docker run $IMAGE omz_downloader --print_all
```
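
The full listing is long; it can be filtered with standard shell tools, as a sketch using a hypothetical "resnet" pattern:

```
# Keep only the model names matching the pattern of interest
docker run $IMAGE omz_downloader --print_all | grep resnet
```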

### Download a model
```
mkdir model
docker run -u $(id -u) --rm -v $(pwd)/model:/tmp/model $IMAGE omz_downloader --name mozilla-deepspeech-0.6.1 -o /tmp/model
```

### Convert the model to IR format
```
docker run -u $(id -u) --rm -v $(pwd)/model:/tmp/model $IMAGE omz_converter --name mozilla-deepspeech-0.6.1 -d /tmp/model -o /tmp/model/converted/
```
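
`omz_converter` wraps the Model Optimizer for Model Zoo models. For your own model, the `mo` tool in the dev image can be invoked directly; a hedged sketch (assuming a hypothetical ONNX file `model.onnx` already placed in `./model`):

```
# Convert a local ONNX model to OpenVINO IR inside the container
docker run -u $(id -u) --rm -v $(pwd)/model:/tmp/model $IMAGE \
  mo --input_model /tmp/model/model.onnx --output_dir /tmp/model/converted
```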

### Run the benchmark app to test the model performance
```
docker run -u $(id -u) --rm -v $(pwd)/model:/tmp/model $IMAGE benchmark_app -m /tmp/model/converted/public/mozilla-deepspeech-0.6.1/FP32/mozilla-deepspeech-0.6.1.xml
```
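
`benchmark_app` defaults to a throughput-oriented configuration. A sketch of the same run pinned to the CPU with a latency-oriented performance hint:

```
# Measure latency rather than throughput for the converted model
docker run -u $(id -u) --rm -v $(pwd)/model:/tmp/model $IMAGE benchmark_app \
  -m /tmp/model/converted/public/mozilla-deepspeech-0.6.1/FP32/mozilla-deepspeech-0.6.1.xml \
  -d CPU -hint latency
```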

### Run a demo from the OpenVINO Model Zoo
```
docker run $IMAGE bash -c "git clone --depth=1 --recurse-submodules --shallow-submodules https://github.com/openvinotoolkit/open_model_zoo.git && \
cd open_model_zoo/demos/classification_demo/python && \
curl -O https://storage.openvinotoolkit.org/repositories/open_model_zoo/2022.1/models_bin/3/resnet50-binary-0001/FP32-INT1/resnet50-binary-0001.xml && \
curl -O https://storage.openvinotoolkit.org/repositories/open_model_zoo/2022.1/models_bin/3/resnet50-binary-0001/FP32-INT1/resnet50-binary-0001.bin && \
curl -O https://raw.githubusercontent.com/openvinotoolkit/model_server/main/demos/common/static/images/zebra.jpeg && \
python3 classification_demo.py -m resnet50-binary-0001.xml -i zebra.jpeg --labels ../../../data/dataset_classes/imagenet_2012.txt --no_show -nstreams 1 -r"
```

**Check also:**

[Prebuilt images](../README.md#prebuilt-images)

[Deployment with GPU accelerators](accelerators.md)

[Generating dockerfiles and building the images in Docker_CI tools](openvino_docker.md)

[OpenVINO GPU Plugin](https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_supported_plugins_GPU.html)