AC: Added guide for usage AC configs (openvinotoolkit#1517)
* Initial version of how to use omz configs

* Fix misprint

* Fix comments

* Fix comments

* Update description for composite models

* Add target_framework option description
Anna Mironova authored Sep 9, 2020
1 parent ac36aea commit 245b26d
Showing 3 changed files with 103 additions and 18 deletions.
11 changes: 0 additions & 11 deletions CONTRIBUTING.md
@@ -218,17 +218,6 @@ models:
- name: densenet-121-tf
launchers:
- framework: dlsdk
tags:
- FP32
model: public/densenet-121-tf/FP32/densenet-121-tf.xml
weights: public/densenet-121-tf/FP32/densenet-121-tf.bin
adapter: classification
- framework: dlsdk
tags:
- FP16
model: public/densenet-121-tf/FP16/densenet-121-tf.xml
weights: public/densenet-121-tf/FP16/densenet-121-tf.bin
adapter: classification
datasets:
13 changes: 6 additions & 7 deletions tools/accuracy_checker/README.md
@@ -70,7 +70,7 @@ python setup.py install_core

#### Troubleshooting during installation

When a previous version of the tool is already installed in the environment, it can, in some cases, break the new installation.
If you see an error about a directory or file not being found, try to manually remove the old tool version from your environment, or install the tool with the following command (run in the accuracy checker directory, instead of `setup.py install`):
```bash
pip install --upgrade --force-reinstall .
@@ -80,21 +80,20 @@ pip install --upgrade --force-reinstall .

You may test your installation and get familiar with the accuracy checker by running the [sample](sample/README.md).

Each Open Model Zoo model can be evaluated using a configuration file. Please refer to [How to use predefined configuration files](configs/README.md) guide.

Once you have installed the accuracy checker, you can evaluate your configurations with:

```bash
accuracy_check -c path/to/configuration_file -m /path/to/models -s /path/to/source/data -a /path/to/annotation
```

All relative paths in config files will be prefixed with the values specified on the command line. You may refer to `-h, --help` for the full list of command line options. Some arguments are:

- `-c, --config` path to the configuration file.
- `-m, --models` specifies the directory in which the models and weights declared in the config file will be searched for. You can also specify a space-separated list of directories if you want to run the same configuration several times with models located in different directories, or if you have a pipeline with several models.
- `-s, --source` specifies the directory in which input images will be searched for.
- `-a, --annotations` specifies the directory in which annotation and meta files will be searched for.

Some optional arguments are:

- `-d, --definitions` path to the global configuration file.
- `-e, --extensions` directory with Inference Engine extensions.
- `-b, --bitstreams` directory with bitstreams (for Inference Engine with the FPGA plugin).
@@ -131,8 +130,8 @@ models:
- name: model_name
launchers:
- framework: caffe
model: bvlc_alexnet.prototxt
weights: bvlc_alexnet.caffemodel
adapter: classification
batch: 128
datasets:
97 changes: 97 additions & 0 deletions tools/accuracy_checker/configs/README.md
@@ -0,0 +1,97 @@
# How to use predefined configuration files

## Structure

The configuration file declares the validation process. Every model has to have an entry in the `models` list. Each entry has to contain distinct `name`, `launchers` and `datasets` sections.

Example:

```yaml
models:
  - name: model_name

    launchers:
      - framework: dlsdk
        adapter: adapter_name

    datasets:
      - name: dataset_name
```
There are also composite models, which consist of several parts (models); measuring their accuracy requires building a pipeline from these parts. The evaluation is performed by executing the set of models sequentially, so it is impossible to evaluate the parts independently. Each composite model has to have an entry in the `evaluations` list. Each entry should contain distinct `name`, `module` and `module_config`. `module_config` has to consist of `network_info`, `launchers` and `datasets` fields. Custom evaluators are used for such models. More information about defining and using your own evaluator or an existing one can be found in the [Custom Evaluators Guide](../accuracy_checker/evaluators/custom_evaluators/README.md).

Example:

```yaml
evaluations:
  - name: model_name
    module: name_of_class_with_custom_evaluators
    module_config:
      network_info:
        encoder: {}
        decoder:
          adapter: adapter_name
      launchers:
        - framework: dlsdk
      datasets:
        - name: dataset_name
```

## Location

A predefined configuration file, `accuracy-check.yml`, for each Open Model Zoo model can be found in the model directory.

The `<model_name>.yml` file, located in the current `configs` folder, is a link to the `accuracy-check.yml` file for the `<model_name>` model.

Example:

[alexnet.yml](alexnet.yml) is a link to the configuration file [accuracy-check.yml](../../../models/public/alexnet/accuracy-check.yml) for the [alexnet](../../../models/public/alexnet/alexnet.md) model.
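The link layout can be sketched as follows. This uses a simulated directory tree in a temporary location, since the real paths depend on where Open Model Zoo is checked out; `alexnet` serves as the example:

```shell
# Simulate the OMZ layout in a temporary directory (hypothetical paths).
ROOT=$(mktemp -d)
mkdir -p "$ROOT/models/public/alexnet" "$ROOT/tools/accuracy_checker/configs"
touch "$ROOT/models/public/alexnet/accuracy-check.yml"

# configs/alexnet.yml is a relative symlink to the per-model accuracy-check.yml.
ln -s ../../../models/public/alexnet/accuracy-check.yml \
  "$ROOT/tools/accuracy_checker/configs/alexnet.yml"

# The link resolves to the predefined configuration file in the model directory.
readlink "$ROOT/tools/accuracy_checker/configs/alexnet.yml"
```

Because the link is relative, it keeps working regardless of where the repository root itself lives.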

## Options

1. To run a configuration, pass the path to the required configuration file via the `-c, --config` command line option.
2. Configuration files do not contain paths to the models and weights they use. The model and weights are searched for automatically, by model name, in the path specified via the `-m, --models` command line option.
3. There is a global configuration file with dataset conversion parameters, which is used to avoid duplication. Global definitions will be merged with the evaluation config at runtime by dataset name. The path to this file (`global_definitions`) can be provided via the `-d, --definitions` command line option.
4. The path relative to which `data_source` is specified can be provided via the `-s, --source` command line option. If you want to evaluate models using well-known datasets, you need to organize the folders with validation datasets in a certain way. More detailed information about dataset preparation can be found in the [Dataset Preparation Guide](../../../datasets.md).
5. The path relative to which `annotation` and `dataset_meta` are specified can be provided via the `-a, --annotations` command line option. The annotation and `dataset_meta` files (if required) will be stored in this directory after the annotation conversion step, if they do not already exist, and can be reused on subsequent runs to skip annotation conversion. Detailed information about annotation conversion can be found in the [Annotation Conversion Guide](../accuracy_checker/annotation_converters/README.md).
6. Some models can have additional files for evaluation (for example, vocabulary files for NLP models), generally called model attributes. The relative paths to model-specific attributes (vocabulary files, merges files, etc.) can be provided in the configuration file if required. The path prefix for them should be passed through the `--model_attributes` command line option (usually, it is the model directory).
7. To specify devices for inference, use the `-td, --target_devices` command line option. Several devices should be separated by spaces (e.g. `-td CPU GPU`).
8. Optionally, if several frameworks are provided in the configuration file, you can specify the inference framework for evaluation using the `-tf, --target_framework` command line option. Otherwise, if this option is not provided, evaluation will be launched with all frameworks mentioned in the configuration file.
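Putting these options together, a full invocation might look like the sketch below. The paths are placeholders (assumptions, not real locations), and the command string is only assembled and printed rather than executed, since running it requires an installed accuracy checker and prepared data:

```shell
# Placeholder locations -- adjust to your environment (assumptions, not real paths).
CONFIG=tools/accuracy_checker/configs/alexnet.yml           # option 1: -c
MODELS_DIR=$HOME/models                                     # option 2: -m
DEFINITIONS=tools/accuracy_checker/dataset_definitions.yml  # option 3: -d
DATA_DIR=$HOME/datasets                                     # option 4: -s
ANNOTATIONS_DIR=$HOME/annotations                           # option 5: -a

# Assemble the command described by options 1-5 and 7 above.
CMD="accuracy_check -c $CONFIG -m $MODELS_DIR -d $DEFINITIONS \
-s $DATA_DIR -a $ANNOTATIONS_DIR -td CPU"
echo "$CMD"
```

Adding `-tf dlsdk` (option 8) would restrict the run to a single framework when the config declares several.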

## Example of usage

The following shows how to evaluate the [densenet-121-tf](../../../models/public/densenet-121-tf/densenet-121-tf.md) model using its predefined configuration file.

- `OMZ_ROOT` - root of Open Model Zoo project
- `DATASET_DIR` - root directory with dataset
- `MODEL_DIR` - root directory with model
- `OPENVINO_DIR` - root directory of the installed OpenVINO&trade; toolkit

1. First of all, you need to prepare the dataset according to the [Dataset Preparation Guide](../../../datasets.md)
2. Download the original model files from the online source using the [Model Downloader](../../../tools/downloader/README.md)
```sh
OMZ_ROOT/tools/downloader/downloader.py --name densenet-121-tf --output_dir MODEL_DIR
```
3. Convert the model into the Inference Engine IR format using Model Optimizer via the [Model Converter](../../../tools/downloader/README.md)
```sh
OMZ_ROOT/tools/downloader/converter.py --name densenet-121-tf --download_dir MODEL_DIR --mo OPENVINO_DIR/deployment_tools/model_optimizer/mo.py
```
4. Run the evaluation for the model in FP32 precision using the [Accuracy Checker](../README.md)
```sh
accuracy_check -c OMZ_ROOT/models/public/densenet-121-tf/accuracy-check.yml -s DATASET_DIR -m MODEL_DIR/public/densenet-121-tf/FP32 -d OMZ_ROOT/tools/accuracy_checker/dataset_definitions.yml -td CPU
```
Similarly, you can run the evaluation for the model in FP16 precision:
```sh
accuracy_check -c OMZ_ROOT/models/public/densenet-121-tf/accuracy-check.yml -s DATASET_DIR -m MODEL_DIR/public/densenet-121-tf/FP16 -d OMZ_ROOT/tools/accuracy_checker/dataset_definitions.yml -td GPU
```
5. You can also quantize full-precision models in the IR format into low-precision versions via the [Model Quantizer](../../../tools/downloader/README.md)
```sh
OMZ_ROOT/tools/downloader/quantizer.py --name densenet-121-tf --dataset_dir DATASET_DIR --model_dir MODEL_DIR
```
Run the evaluation for the quantized model:
```sh
accuracy_check -c OMZ_ROOT/models/public/densenet-121-tf/accuracy-check.yml -s DATASET_DIR -m MODEL_DIR/public/densenet-121-tf/FP16-INT8 -d OMZ_ROOT/tools/accuracy_checker/dataset_definitions.yml -td CPU GPU
```
