Savant: High-Performance Computer Vision Framework For Data Center And Edge


⭐ Star us on GitHub: it motivates us a lot and helps the project become more visible to developers.


Savant is an open-source, high-level framework for building real-time, streaming, highly efficient multimedia AI applications on the Nvidia stack. It helps to develop dynamic, fault-tolerant inference pipelines that utilize the best Nvidia approaches for data center and edge accelerators.

Savant is built on DeepStream and provides a high-level abstraction layer for building inference pipelines. It is designed to be easy to use, flexible, and scalable, and it is a great choice for building both real-time and high-load computer vision and video analytics applications.

What Version To Use

Savant depends on the Nvidia DeepStream version and, on Jetson devices, on the JetPack version. The following tables show which DeepStream version each Savant release is compatible with.

0.2.11 LTS

This release is recommended for production use. It uses the time-proven DeepStream 6.3 and works on dGPU (Turing, Volta, Ampere, Ada) and Jetson (Xavier NX/AGX, Orin Nano/NX/AGX) hardware.

Known drawbacks:

  • NVJPEG is capped at 115 MHz on Jetson Orin Nano, which limits JPEG decoding throughput.

Requirements                                          | Status | DeepStream
X86: Driver 525 (data center), 530+ (Quadro/GeForce)  | Stable | 6.3
Jetson: Xavier, Orin with JetPack 5.1.2 GA            | Stable | 6.3

0.4.x Current Production Release

This release contains new features and is tested for production use. It is the choice for users who require functionality missing in 0.2.x and 0.3.x.

Requirements                                          | Status | DeepStream
X86: Driver 525 (data center), 530+ (Quadro/GeForce)  | Stable | 6.4
Jetson: Orin with JetPack 6.0                         | Stable | 6.4

0.5.x Current Development

This release is under active development and integrates DeepStream 7.0.

Requirements                                          | Status | DeepStream
X86: Driver 525 (data center), 530+ (Quadro/GeForce)  | Stable | 7.0
Jetson: Orin with JetPack 6.0                         | Stable | 7.0

Chat With Us

The best way to reach us is Discord. We are always happy to help you with any questions you may have.

Quick Links

Getting Started

First, take a look at the runtime configuration guide to configure the working environment.

The demo shows a pipeline featuring person detection, facial detection, tracking, facial blurring (OpenCV CUDA), and a real-time analytics dashboard:

git clone https://github.com/insight-platform/Savant.git
cd Savant/samples/peoplenet_detector
git lfs pull

# if x86
../../utils/check-environment-compatible && docker compose -f docker-compose.x86.yml up

# if Jetson
../../utils/check-environment-compatible && docker compose -f docker-compose.l4t.yml up

# open 'rtsp://127.0.0.1:554/stream/city-traffic' in your player
# or visit 'http://127.0.0.1:888/stream/city-traffic/' (LL-HLS)

# Ctrl+C to stop running the compose bundle

# to get back to project root
cd ../..

Who Would Be Interested in Savant

If your task is to implement high-performance production-ready computer vision and video analytics applications, Savant is for you.

With Savant, developers:

  • get the maximum performance on Nvidia equipment on edge and in the core;
  • decrease time to market when building dynamic pipelines with DeepStream technology but without low-level programming;
  • develop easily maintainable and testable applications with a well-established framework API;
  • build heterogeneous pipelines with different models and data sources;
  • build hybrid edge/datacenter applications with the same codebase;
  • monitor and trace the pipelines with OpenTelemetry and Prometheus;
  • implement on-demand and non-linear processing by utilizing Replay.

Runs On Nvidia Hardware

Savant components that process video and run computer vision models require Nvidia hardware. We support the following devices:

  • Jetson Xavier NX/AGX (0.2.x);
  • Jetson Orin Nano/NX/AGX (0.3.x and newer);
  • Nvidia Turing, Ampere, Ada, Hopper, Blackwell GPUs (0.2.x and newer).

Why We Developed Savant

We developed Savant to give computer vision and video analytics engineers a ready-to-use stack for building real-life computer vision applications that work at the edge and in the data center. Unlike other computer vision frameworks like PyTorch, TensorFlow, OpenVINO/DlStreamer, and DeepStream, Savant provides users not only with inference and image manipulation tools but also with an advanced architecture for building distributed edge/data center computer vision applications that communicate over the network. Thus, Savant users focus on computer vision and do not reinvent the wheel when developing their applications.

Savant is a very high-level framework that hides low-level internals from developers: computer vision pipelines are composed of declarative (YAML) blocks extended with Python functions.

Features

Savant is packed with features that accelerate the development of high-performance computer vision applications.

🔧 All You Need for Building Real-Life Applications

Savant supports everything you need for developing advanced pipelines: detection, classification, segmentation, tracking, and custom pre- and post-processing for meta and images.

We have implemented samples demonstrating pipelines you can build with Savant. Visit the samples folder to learn more.

🚀 High Performance

Savant is designed to be fast: it works on top of DeepStream, the fastest SDK for video analytics. Even heavyweight segmentation models can run in real time with Savant. See the Performance Regression Tracking Dashboard for the latest performance results.

🌐 Works On Edge and Data Center Equipment

The framework supports running the pipelines on both Nvidia's edge devices (Jetson family) and data center devices (Tesla, Quadro, etc.) with minor or zero changes.

❤️ Cloud-Ready

Savant pipelines run in Docker containers. We provide images for x86+dGPU and Jetson hardware. Integrated OpenTelemetry and Prometheus support enables monitoring and tracing of the pipelines.

⚡ Low Latency and High Capacity Processing

Savant can be configured to execute a pipeline either in real-time mode, skipping data when it runs out of capacity, or in high-capacity mode, which guarantees the processing of all the data while maximizing the utilization of the available resources.

🤝 Ready-To-Use API

A pipeline is a self-sufficient service that communicates with the world via a high-performance streaming API. Whether developers use the provided adapters or the Client SDK, both approaches communicate through this API.

πŸ“ Advanced Data Protocol

The framework universally uses a common protocol for both video and metadata delivery. The protocol is highly flexible, allowing video-related information to travel alongside arbitrary structures useful for IoT and 3rd-party integrations.

⏱ OpenTelemetry Support

In Savant, you can precisely instrument pipelines with OpenTelemetry, a unified observability solution. You can use sampled or complete traces to balance performance and precision. Traces can span from the edge to the core to business logic through network and storage because their propagation is supported by the Savant protocol.
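
For intuition on the sampling trade-off, here is a minimal sketch in plain OpenTelemetry Python terms; it is not Savant's own telemetry configuration (that is described in the documentation), only an illustration of a ratio-based sampler versus recording every trace:

# Generic OpenTelemetry SDK sketch (not Savant's configuration format):
# a ratio-based sampler keeps ~10% of traces; ALWAYS_ON records complete traces.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter
from opentelemetry.sdk.trace.sampling import ALWAYS_ON, TraceIdRatioBased

provider = TracerProvider(sampler=TraceIdRatioBased(0.1))  # or sampler=ALWAYS_ON
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("client")
with tracer.start_as_current_span("ingest-frame"):
    pass  # frame ingestion / processing would happen here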

📊 Prometheus Support

Savant pipelines can be instrumented with Prometheus: a popular monitoring solution. Prometheus is a great choice for monitoring the pipeline's performance and resource utilization.

🧰 Client SDK

We provide a Python-based SDK to interact with Savant pipelines (ingest and receive data). It enables simple integration with 3rd-party services. The Client SDK is integrated with OpenTelemetry, providing programmatic access to the pipeline traces and logs.
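
A minimal sketch of frame ingestion with the Client SDK may look like the following; the savant.client module path, the SourceBuilder/JpegSource names, and the socket address are assumptions here, so check the Client SDK documentation and the samples for the exact API:

# Hypothetical Client SDK sketch: push one JPEG frame into a running pipeline.
# The import path, class names, and socket address are assumptions; consult the docs.
from savant.client import JpegSource, SourceBuilder

source = (
    SourceBuilder()
    .with_socket('pub+connect:ipc:///tmp/zmq-sockets/input-video.ipc')  # illustrative address
    .build()
)

# Send a frame under the stream name 'cam-1' and print the delivery result.
result = source(JpegSource('cam-1', 'path/to/frame.jpeg'))
print(result)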

🧘 Development Server

Software development for vanilla DeepStream is a pain. Savant provides a Development Server tool that enables dynamic reloading of changed code without pipeline restarts, which makes developing and debugging pipelines much faster. Together with the Client SDK, it makes the development of DeepStream-enabled applications really simple. With the Development Server, you can develop remotely on a Jetson device or a server right from your IDE.

🔀 Dynamic Sources Management

In Savant, you can dynamically attach sources and sinks to a pipeline and detach them without reloading it. The framework resiliently handles source and sink outages.

🏹 Handy Source and Sink Adapters

The communication interface is not limited to the Client SDK: we provide several ready-to-use adapters, which you can use "as is" or modify for your needs.

Ready-to-use source and sink adapters are available for the most common ingestion and delivery scenarios; see the documentation for the complete list.

🎯 Dynamic Parameters Ingestion

Advanced ML pipelines may require information from the external environment for their work. The framework enables dynamic configuration of the pipeline with:

  • ingested frame attributes passed in per-frame metadata;
  • Etcd attributes that are watched and instantly applied (see the sketch after this list);
  • 3rd-party attributes, which are received through user-defined functions.
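
The Etcd-driven flow can be pictured with a generic python-etcd3 sketch; this is not Savant's built-in Etcd integration (the framework watches and applies the attributes for you), only an illustration of how a watched key delivers an updated parameter at runtime. The key name and parameter below are hypothetical:

# Generic python-etcd3 sketch: watch a key so a parameter change is applied instantly.
# This is NOT Savant's built-in Etcd integration, just an illustration of the idea.
import etcd3

client = etcd3.client(host='127.0.0.1', port=2379)
params = {'blur_enabled': b'true'}  # hypothetical pipeline parameter

def on_change(response):
    # Apply new values as soon as they appear in Etcd.
    for event in response.events:
        params['blur_enabled'] = event.value

watch_id = client.add_watch_callback('/pipeline/blur_enabled', on_change)
# ... per-frame code reads params['blur_enabled'] ...
client.cancel_watch(watch_id)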

🖼 OpenCV CUDA Support

Savant supports custom OpenCV CUDA bindings enabling operations on DeepStream's in-GPU frames with a broad range of OpenCV CUDA functions. This helps in implementing highly efficient video transformations, including but not limited to blurring, cropping, clipping, and applying banners and graphical elements over the frame. The feature is available from Python.
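
For a rough feel of the programming model, here is a sketch using the standard OpenCV CUDA Python bindings; inside a Savant pipeline the frame is already resident in GPU memory and is obtained through the framework's frame accessors, so the upload below is only for illustration:

# Standard OpenCV CUDA bindings: Gaussian blur executed entirely on the GPU.
# Requires an OpenCV build with CUDA support; Savant ships its own CUDA bindings.
import cv2
import numpy as np

frame = np.random.randint(0, 255, (720, 1280, 3), dtype=np.uint8)

gpu_frame = cv2.cuda_GpuMat()
gpu_frame.upload(frame)  # in Savant the frame is already in GPU memory

# Build a reusable CUDA Gaussian filter and apply it without leaving the GPU.
gaussian = cv2.cuda.createGaussianFilter(cv2.CV_8UC3, cv2.CV_8UC3, (31, 31), 0)
blurred_gpu = gaussian.apply(gpu_frame)

blurred = blurred_gpu.download()  # download only if the CPU actually needs the pixels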

🔦 PyTorch Support

Savant supports PyTorch, one of the most popular ML frameworks. It enables the developer to use ready-made models from PyTorch Hub, a large number of code samples, and reliable extensions. The integration is highly efficient: it allows running inference on GPU-allocated images and processing the results in GPU RAM, avoiding data transfers between CPU and GPU RAM.
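
A minimal sketch of the idea in plain PyTorch (how Savant exposes the GPU frame to PyTorch is covered in the documentation): inference runs on a CUDA tensor and the post-processing stays in GPU memory:

# Plain PyTorch sketch: run a Torch Hub model on a CUDA tensor and keep results on the GPU.
import torch

model = torch.hub.load('pytorch/vision', 'resnet18', weights=None).cuda().eval()

# In a Savant pipeline the image data is already in GPU memory;
# a random CUDA tensor stands in for a preprocessed frame batch here.
batch = torch.rand(1, 3, 224, 224, device='cuda')

with torch.no_grad():
    scores = model(batch)              # inference on the GPU
    top_class = scores.argmax(dim=1)   # post-processing on the GPU, no CPU round-trip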

🔒 CuPy Support For Post-Processing

Savant supports CuPy: a NumPy-like library for GPU-accelerated computing. It enables the developer to implement custom post-processing functions in Python, executed in GPU RAM, avoiding data transfers between CPU and GPU RAM. The feature allows for accessing model output tensors directly from GPU RAM, which helps implement heavy-weight custom post-processing functions.

The integration also provides conversion of in-GPU data between CuPy, OpenCV, and PyTorch formats.
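
As a generic illustration of the interop (standard CuPy and DLPack APIs, independent of how Savant hands over the tensors), a raw model output can be post-processed with CuPy and exchanged with PyTorch without copying to CPU memory:

# Generic CuPy/DLPack sketch: GPU-side post-processing and zero-copy exchange with PyTorch.
import cupy as cp
import torch
from torch.utils.dlpack import from_dlpack

logits = cp.random.randn(1, 80, 8400, dtype=cp.float32)  # stand-in for a raw model output

# Custom post-processing entirely in GPU RAM.
probs = cp.exp(logits) / cp.sum(cp.exp(logits), axis=1, keepdims=True)
classes = cp.argmax(probs, axis=1)

# Zero-copy handover between CuPy and PyTorch via DLPack.
classes_torch = from_dlpack(classes.toDlpack())  # CuPy -> PyTorch
classes_back = cp.from_dlpack(classes_torch)     # PyTorch -> CuPy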

↻ Rotated Detection Models Support

We frequently deal with models that produce bounding boxes rotated relative to the video frame (oriented bounding boxes). For example, this is often the case with bird's-eye cameras observing the area below from a high vantage point.

Such cases may require detecting the objects with minimal overlap. To achieve that, special models are used which generate bounding boxes that are not orthogonal to the frame axes. Take a look at RAPiD to learn more.


⇶ Parallelization

Savant supports processing parallelization; it helps to utilize the available resources to the maximum. The parallelization is achieved by running the pipeline stages in separate threads. Although the flow-control Python code is not parallel, the developer can utilize GIL-releasing mechanisms to achieve the desired parallelization with NumPy, Numba, or custom native code in C++ or Rust.
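
As a generic illustration of the GIL-releasing idea (plain Numba, not Savant-specific), a nogil-compiled kernel lets several Python threads process frames truly in parallel:

# Generic Numba sketch: nogil=True releases the GIL, so Python threads run the kernel in parallel.
from concurrent.futures import ThreadPoolExecutor

import numpy as np
from numba import njit

@njit(nogil=True, cache=True)
def brightness_sum(frame):
    total = 0.0
    for value in frame.ravel():
        total += value
    return total

frames = [np.random.randint(0, 255, (720, 1280), dtype=np.uint8) for _ in range(8)]

# Threads execute the compiled kernel concurrently because it does not hold the GIL.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(brightness_sum, frames))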

What's Next

Contribution

We welcome anyone who wishes to contribute, report issues, and learn.

About Us

The In-Sight team is the ML/AI department of Bitworks Software. We develop custom high-performance CV applications for various industries, providing a full-cycle process that includes, but is not limited to, data labeling, model evaluation, training, pruning, quantization, validation, verification, pipeline development, and CI/CD. We are mostly focused on Nvidia hardware (both data center and edge).

Contact us: [email protected]