OpenVINO™ Model Server fork of MediaPipe.

This repository allows users to take advantage of OpenVINO™ in the MediaPipe framework. It includes inference calculators that can replace the TensorFlow backend with OpenVINO™ Runtime, enabling more efficient execution and lower latency on CPU.
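
As an illustration of that substitution, a graph node that runs inference through the TensorFlow Lite calculator can be replaced with an OpenVINO™-backed node. The sketch below is only indicative: the calculator name, options, and stream tags are assumptions and may differ from the calculators actually shipped in this fork; see the calculators description referenced in the list of changes below for the real definitions.

```
# Original MediaPipe node running TensorFlow Lite inference:
# node {
#   calculator: "TfLiteInferenceCalculator"
#   input_stream: "TENSORS:input_tensors"
#   output_stream: "TENSORS:output_tensors"
# }

# Hypothetical OpenVINO-backed replacement (calculator name and tags are illustrative):
node {
  calculator: "OpenVINOInferenceCalculator"
  input_stream: "TENSORS:input_tensors"
  output_stream: "TENSORS:output_tensors"
}
```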

Check the included demos with pipeline examples or create your own graphs and execution flows.

List of changes introduced in this repository fork

  • added Dockerfile.openvino, which creates the runtime and development environment. This is a build-environment image and is not suitable for production use.
  • added a Makefile with build, test, and demo targets for ease of use.
  • modified the build_desktop_examples.sh script to build the new demos.
  • added a detailed description of the calculators for OpenVINO™ inference in MediaPipe graphs.
  • added calculators for serialization and inference tasks.
  • modified bazel targets to use the OpenVINO™ inference calculators (the list of available demos is in the table below).
  • modified the WORKSPACE file to add OpenVINO™ Model Server dependencies, specifically the @ovms//src:ovms_lib target from OpenVINO™ Model Server.
  • modified graphs and bazel targets to use OpenVINO™ inference instead of TensorFlow inference.
  • added the setup_ovms.py script to create the models repository used for OpenVINO™ inference. The script must be executed to prepare the required directory structure, with TFLite models and config.json, in mediapipe/models/ovms (see the sketch after this list).
  • modified setup_opencv.sh to install OpenCV version 4.7.0 instead of the previous 3.4.
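
For orientation, a minimal sketch of the models repository that setup_ovms.py is meant to prepare is shown below. The model name and exact layout are hypothetical, not the actual output of the script; config.json follows the standard OpenVINO™ Model Server model configuration format.

```
mediapipe/models/ovms/                 # prepared by setup_ovms.py (layout is illustrative)
├── face_detection/                    # hypothetical model name
│   └── 1/                             # model version directory expected by the model server
│       └── face_detection.tflite
└── config.json

# config.json (standard OpenVINO Model Server model configuration; names and paths are illustrative):
{
  "model_config_list": [
    {
      "config": {
        "name": "face_detection",
        "base_path": "mediapipe/models/ovms/face_detection"
      }
    }
  ]
}
```
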
| OpenVINO™ demo | C++ | Python | Original Google demo |
| --- | --- | --- | --- |
| Face Detection |  |  | Face Detection |
| Iris |  |  | Iris |
| Pose |  |  | Pose |
| Holistic |  |  | Holistic |
| Object Detection |  |  | Object Detection |

Quick start guide

Check the quick start guide for easy-to-follow instructions for building and running the example applications and graphs.
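
As a rough outline of the workflow described there, the shell commands below show the general shape of building the development image from Dockerfile.openvino, preparing the models repository, and building the demos. Image tags, script arguments, and invocation details are assumptions; the quick start guide is the authoritative reference.

```bash
# Build the development/runtime image from the Dockerfile added in this fork
# (the image tag is illustrative)
docker build -f Dockerfile.openvino -t mediapipe_ovms:dev .

# Prepare mediapipe/models/ovms with the TFLite models and config.json
# (run inside the development environment; arguments, if any, per the guide)
python setup_ovms.py

# Build the OpenVINO-enabled desktop demos
# (see the quick start guide for the exact invocation and options)
bash build_desktop_examples.sh
```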

Development instructions

The developer guide includes instructions and best practices for developing your own applications and graphs.

MediaPipe


Live ML anywhere

MediaPipe offers cross-platform, customizable ML solutions for live and streaming media.

  • End-to-end acceleration: built-in fast ML inference and processing, accelerated even on common hardware.
  • Build once, deploy anywhere: a unified solution that works across Android, iOS, desktop/cloud, web, and IoT.
  • Ready-to-use solutions: cutting-edge ML solutions demonstrating the full power of the framework.
  • Free and open source: framework and solutions under Apache 2.0, fully extensible and customizable.

ML solutions in MediaPipe OpenVINO™ fork

  • Face Detection (face_detection)
  • Iris (iris)
  • Pose (pose)
  • Holistic (holistic_tracking)
  • Object Detection (object_detection)

Fork baseline

This fork is based on the original MediaPipe release origin/v0.10.3.

The original v0.10.3 Google ML solutions in MediaPipe can be found here.