Update CI test #38

Open · wants to merge 19 commits into base: master

24 changes: 23 additions & 1 deletion .iceci.yaml
@@ -41,4 +41,26 @@ steps:
        - "{{ ICECI_GIT_TAG_OR_BRANCH }}-jetson-nano"
      buildArgs:
        - name: frontend_uri
          value: "neuralet/smart-social-distancing:{{ ICECI_GIT_TAG_OR_BRANCH }}-frontend"

  - name: build-docker-image-pytest-x86
    runtimeProfile: amd64
    when:
      event: ["commit", "tag", "pr"]
    containerBuild:
      user: neuralet
      imageName: smart-social-distancing
      dockerfilePath: pytest-x86.Dockerfile
      tags:
        - "{{ ICECI_GIT_TAG_OR_BRANCH }}-pytest-x86"

  - name: run-tests
    runtimeProfile: amd64
    when:
      event: ["commit", "tag", "pr"]
    containerRun:
      image: "{{ ICECI_GIT_TAG_OR_BRANCH }}-pytest-x86"
      script: 'docker run -it -v "$PWD":/repo "{{ ICECI_GIT_TAG_OR_BRANCH }}-pytest-x86"'

103 changes: 74 additions & 29 deletions README.md
@@ -29,6 +29,7 @@ A host edge device. We currently support the following:
* NVIDIA Jetson TX2
* Coral Dev Board
* AMD64 node with attached Coral USB Accelerator
* x86 node (optionally accelerated with OpenVINO)

**Software**
* You should have [Docker](https://docs.docker.com/get-docker/) on your device.
@@ -37,7 +38,7 @@ A host edge device. We currently support the following:

Make sure you have the prerequisites and then clone this repository to your local system by running this command:

```
```bash
git clone https://github.com/neuralet/smart-social-distancing.git
cd smart-social-distancing
```
@@ -52,8 +53,8 @@ In the following sections we will cover how to build and run each of them depend


**Download Required Files**
```
# Download a sample video file from https://megapixels.cc/oxford_town_centre/
```bash
# Download a sample video file from the multiview object tracking dataset
./download_sample_video.sh
```

@@ -62,7 +63,7 @@ In the following sections we will cover how to build and run each of them depend

The frontend consists of 2 Dockerfiles:
* `frontend.Dockerfile`: Builds the React app.
* `run-frontend.Dockerfile`: Builds a FastAPI backend which serves the React app built in the previous Dockerfile.
* `web-gui.Dockerfile`: Builds a FastAPI backend which serves the React app built in the previous Dockerfile.

If the `frontend` directory on your branch is not identical to the upstream `master` branch, you MUST build the frontend image with
tag "`neuralet/smart-social-distancing:latest-frontend`" BEFORE BUILDING THE MAIN FRONTEND IMAGE.
@@ -77,44 +78,49 @@ docker build -f frontend.Dockerfile -t "neuralet/smart-social-distancing:latest-
* To run the frontend, run:

```bash
docker build -f run-frontend.Dockerfile -t "neuralet/smart-social-distancing:latest-web-gui" .
docker run -it -p <HOST_PORT>:8000 --rm neuralet/smart-social-distancing:latest-web-gui
docker build -f web-gui.Dockerfile -t "neuralet/smart-social-distancing:latest-web-gui" .
docker run -it -p HOST_PORT:8000 --rm neuralet/smart-social-distancing:latest-web-gui
```

> Important: There is a `config-frontend.ini` file which tells the frontend where to find the processor container.
> You must set the "Processor" section of the config file with the correct IP and port of the processor.
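
For example, you can quickly check which processor address the frontend will use (a sketch, assuming `config-frontend.ini` keeps the `[Processor]` section shown near the end of this PR):

```bash
# Print the [Processor] host/port the frontend is configured to talk to
grep -A 3 '^\[Processor\]' config-frontend.ini
```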

* Building the frontend is resource intensive. If you are planning to host everything on an edge device,
we suggest building the docker image on your PC/laptop first and then copy it to the edge device.
However, you can always start the frontend container on a PC/laptop and the processor container on the edge device.
---
***NOTE***

To run the frontend on an edge device:
Building the frontend is resource intensive. If you are planning to host everything on an edge device, we suggest building the docker image on your PC/laptop first and then copying it to the edge device. However, you can always start the frontend container on a PC/laptop and the processor container on the edge device.

```
# Run these commands on your PC/laptop:
---

* To run the frontend on an edge device (only possible on Jetson for now):

```bash
# Run these commands on your PC/laptop:
docker build -f frontend.Dockerfile -t "neuralet/smart-social-distancing:latest-frontend" .
docker build -f run-frontend.Dockerfile -t "neuralet/smart-social-distancing:latest-web-gui" .
docker save -o "frontend_image.tar" neuralet/smart-social-distancing:latest-web-gui
docker save -o "frontend_base_image.tar" neuralet/smart-social-distancing:latest-frontend
```

* Then, move the file `frontend_image.tar` that was just built on your PC/laptop to your edge device and load it:
```
* Then, move the file `frontend_base_image.tar` that was just built on your PC/laptop to your Jetson platform and load it:
```bash
# Copy "frontend_image.tar" to your edge device and run this command on your device:
docker load -i "frontend_image.tar"
rm frontend_image.tar
docker load -i "frontend_base_image.tar"
rm frontend_base_image.tar
```

* Then build the web-gui image for Jetson:
```bash
docker build -f jetson-web-gui.Dockerfile -t "neuralet/smart-social-distancing:latest-web-gui-jetson" .

# And run it:
docker run -it -p <HOST_PORT>:8000 --rm neuralet/smart-social-distancing:latest-web-gui
docker run -it -p HOST_PORT:8000 --rm neuralet/smart-social-distancing:latest-web-gui-jetson
```

* In our tests, building the frontend image on the Coral Dev Board may run into yarn timeout issues; we suggest building the docker image elsewhere and moving it to your board.

**The next sections explain how to run the processor on different devices.**

**Run on Jetson Nano**
* You need to have JetPack 4.3 installed on your Jetson Nano.

```
```bash
# 1) Download TensorRT engine file built with JetPack 4.3:
./download_jetson_nano_trt.sh

@@ -128,57 +134,96 @@ docker run -it --runtime nvidia --privileged -p HOST_PORT:8000 -v "$PWD/data":/r
**Run on Jetson TX2**
* You need to have JetPack 4.3 installed on your Jetson TX2.

```
```bash
# 1) Download TensorRT engine file built with JetPack 4.3:
./download_jetson_tx2_trt.sh

# 2) Build Docker image for Jetson TX2
# 2) Build Docker image for Jetson TX2 (This step is optional, you can skip it if you want to pull the container from neuralet dockerhub)
docker build -f jetson-tx2.Dockerfile -t "neuralet/smart-social-distancing:latest-jetson-tx2" .

# 3) Run Docker container:
docker run -it --runtime nvidia --privileged -p HOST_PORT:8000 -v "$PWD/data":/repo/data neuralet/smart-social-distancing:latest-jetson-tx2
```

**Run on Coral Dev Board**
```
```bash
# 1) Build Docker image (This step is optional, you can skip it if you want to pull the container from neuralet dockerhub)
docker build -f coral-dev-board.Dockerfile -t "neuralet/smart-social-distancing:latest-coral-dev-board" .

# 2) Run Docker container:
docker run -it --privileged -p HOST_PORT:8000 -v "$PWD/data":/repo/data neuralet/smart-social-distancing:latest-coral-dev-board
```

**Run on AMD64 node with a connected Coral USB Accelerator**
```
```bash
# 1) Build Docker image (This step is optional, you can skip it if you want to pull the container from neuralet dockerhub)
docker build -f amd64-usbtpu.Dockerfile -t "neuralet/smart-social-distancing:latest-amd64" .

# 2) Run Docker container:
docker run -it --privileged -p HOST_PORT:8000 -v "$PWD/data":/repo/data neuralet/smart-social-distancing:latest-amd64
```

**Run on x86**
```
```bash
# 1) Build Docker image (This step is optional, you can skip it if you want to pull the container from neuralet dockerhub)
docker build -f x86.Dockerfile -t "neuralet/smart-social-distancing:latest-x86_64" .

# 2) Run Docker container:
docker run -it -p HOST_PORT:8000 -v "$PWD/data":/repo/data neuralet/smart-social-distancing:latest-x86_64
```

**Run on x86 using OpenVino**
```
```bash
# download model first
./download_openvino_model.sh

# 1) Build Docker image (This step is optional, you can skip it if you want to pull the container from neuralet dockerhub)
docker build -f x86-openvino.Dockerfile -t "neuralet/smart-social-distancing:latest-x86_64_openvino" .

# 2) Run Docker container:
docker run -it -p HOST_PORT:8000 -v "$PWD/data":/repo/data neuralet/smart-social-distancing:latest-x86_64_openvino
```

### Configurations
You can read and modify the configurations in `config-jetson.ini` file for Jetson Nano / TX2 and `config-skeleton.ini` file for Coral.
You can read and modify the configurations in the `config-*.ini` files:

* `config-jetson.ini`: for Jetson Nano / TX2
* `config-coral.ini`: for Coral Dev Board / USB Accelerator
* `config-x86.ini`: for plain x86 (CPU) platforms without any acceleration
* `config-x86-openvino.ini`: for x86 systems accelerated with OpenVINO

Under the `[Detector]` section, you can modify the `MinScore` parameter to define the person detection threshold. You can also change the distance threshold by altering the value of `DistThreshold` under `[PostProcessor]`.
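
For instance, a minimal sketch of tweaking both values in `config-x86.ini` before building the processor image (assuming it follows the same key layout as the `config-coral.ini` added in this PR; the numbers are illustrative):

```bash
# Lower the person-detection score threshold
sed -i 's/^MinScore = .*/MinScore = 0.3/' config-x86.ini
# Tighten the distance threshold (in cm)
sed -i 's/^DistThreshold = .*/DistThreshold = 120/' config-x86.ini
```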

### API usage
After you run the processor's Docker container on your node, regardless of whether the frontend container is running, you can use the Processor's API to control the Processor's Core, where all the processing is done.

* The API currently supports the following paths (see the curl sketch after this list):

1- `PROCESSOR_IP:PROCESSOR_PORT/process-video-cfg`: Sends the `PROCESS_VIDEO_CFG` command to the core and returns the response. It starts processing the video addressed in the configuration file. A response of `true` means the core will try to process the video (with no guarantee of success); `false` means processing cannot be started now (e.g. another process has already been requested and is running).

2- `PROCESSOR_IP:PROCESSOR_PORT/stop-process-video`: Sends the `STOP_PROCESS_VIDEO` command to the core and returns the response. It stops processing the video at hand and returns `true` if it stopped, or `false` if it cannot (e.g. no video is currently being processed).

3- `PROCESSOR_IP:PROCESSOR_PORT/get-config`: Returns the config used by both the Processor's API and Core (they share the same config, so a single configuration set is returned) in JSON format. This is the file you specified in your Processor's Dockerfile.

4- `PROCESSOR_IP:PROCESSOR_PORT/set-config`: Since the Processor's API and Core share the same configuration file, this endpoint applies the given JSON configuration for both the API and Core and reloads the configuration. The Core's engine is also restarted so that all methods and members (especially those initialized with the old config) use the updated config (this stops any video processing in progress).

***NOTE*** that the config file given in the Dockerfile will be updated, but the change lives inside your container and will be lost after you stop the running container.
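
The first three endpoints take no request body, so they can be exercised directly with `curl` (a sketch; substitute your processor's address, and the GET method is assumed here, matching the browser-based example below):

```bash
# Inspect the active configuration (returned as JSON)
curl http://PROCESSOR_IP:PROCESSOR_PORT/get-config
# Ask the core to start processing the video from the config file
curl http://PROCESSOR_IP:PROCESSOR_PORT/process-video-cfg
# Stop the video currently being processed (returns false if nothing is running)
curl http://PROCESSOR_IP:PROCESSOR_PORT/stop-process-video
```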

* Usage example:

While the Processor's docker is up and running:

```bash
curl -d '{"App": { "VideoPath" : "/repo/data/YOUR_VIDEO.mp4"} }' -H "Content-Type: application/json" -X POST http://PROCESSOR_IP:PROCESSOR_PORT/set-config
```
(you must place your video under `data/` beforehand) and then open `http://PROCESSOR_IP:PROCESSOR_PORT/process-video-cfg` in your browser. In the terminal running the container you can see your video being loaded and processed. You can also refresh your dashboard to see the output.

***NOTE***: residual files under `data/web_gui/static/` may cause you to see previous streams and plots stored there. This needs to be addressed separately; for now you can manually clean that path.


## Issues and Contributing

The project is under substantial active development; you can find our roadmap at https://github.com/neuralet/neuralet/projects/1. Feel free to open an issue, send a Pull Request, or reach out if you have any feedback.
9 changes: 1 addition & 8 deletions amd64-usbtpu.Dockerfile
@@ -1,8 +1,3 @@
# docker can be installed on the dev board following these instructions:
# https://docs.docker.com/install/linux/docker-ce/debian/#install-using-the-repository , step 4: x86_64 / amd64
# 1) build: docker build -f amd64-usbtpu.Dockerfile -t "neuralet/smart-social-distancing:latest-amd64" .
# 2) run: docker run -it --privileged -p HOST_PORT:8000 -v "$PWD/data":/repo/data neuralet/smart-social-distancing:latest-amd64

FROM amd64/debian:buster

RUN apt-get update && apt-get install -y wget gnupg usbutils \
@@ -93,9 +88,7 @@ RUN apt-get update && apt-get install -y --no-install-recommends \

ENV DEV_ALLOW_ALL_ORIGINS=true

#COPY --from=neuralet/smart-social-distancing:latest-frontend /frontend/build /srv/frontend

COPY . /repo
WORKDIR /repo
ENTRYPOINT ["bash", "start_services.bash"]
CMD ["config-skeleton.ini"]
CMD ["config-coral.ini"]
5 changes: 5 additions & 0 deletions api/config-sample.ini
@@ -5,6 +5,11 @@ Port = 8000
Resolution = 640,480
Encoder = video_encoder_command_this_is_test_sample_config

[CORE]
Host: 0.0.0.0
QueuePort: 8010
QueueAuthKey: shibalba

[Detector]
Device = hosting_device_this_is_test_sample_config
Name = model_name_this_is_test_sample_config
10 changes: 4 additions & 6 deletions api/test_processor_api.py
@@ -1,23 +1,21 @@
from fastapi.testclient import TestClient
import os
import sys
sys.path.append("..")
from libs.config_engine import ConfigEngine
from libs.processor_core import ProcessorCore

from api.config_keys import Config
from api.processor_api import ProcessorAPI
import pytest

config_path='/repo/config-skeleton.ini'
config_path='/repo/config-x86.ini'
config = ConfigEngine(config_path)
core = ProcessorCore(config)
app_instance = ProcessorAPI(config)
api = app_instance.app
client = TestClient(api)


sample_config_path='/repo/api/config-sample.ini'
config_backup_path='/repo/config-skeleton-backup.ini'

# make a copy for config file

# read sample config file
config_sample = ConfigEngine(sample_config_path)
40 changes: 40 additions & 0 deletions config-coral.ini
@@ -0,0 +1,40 @@
[App]
VideoPath = /repo/data/gard1-4.mp4
Resolution = 640,480
Encoder = videoconvert ! video/x-raw,format=I420 ! x264enc speed-preset=ultrafast

[API]
Host = 0.0.0.0
Port = 8000

[CORE]
Host = 0.0.0.0
QueuePort = 8010
QueueAuthKey = shibalba

[Detector]
; Supported devices: Jetson , EdgeTPU, Dummy, x86
Device = EdgeTPU
; Detector's Name can be either "mobilenet_ssd_v2", "pedestrian_ssd_mobilenet_v2" or "pedestrian_ssdlite_mobilenet_v2"
; the first one is trained on the COCO dataset and the next two are trained on the Oxford Town Centre dataset to detect pedestrians
Name = pedestrian_ssdlite_mobilenet_v2
;ImageSize should be 3 numbers separated by commas, no spaces: 300,300,3
ImageSize = 300,300,3
ModelPath =
ClassID = 0
MinScore = 0.25

[PostProcessor]
MaxTrackFrame = 5
NMSThreshold = 0.98
; distance threshold for smart distancing in (cm)
DistThreshold = 150
; distance measurement method. CenterPointsDistance: compare the centers of pedestrian boxes; FourCornerPointsDistance: compare four corresponding points of pedestrian boxes and take the minimum of them.
DistMethod = CenterPointsDistance

[Logger]
Name = csv_logger
TimeInterval = 0.5
LogDirectory = /repo/data/web_gui/static/data


3 changes: 2 additions & 1 deletion config-frontend.ini
@@ -1,7 +1,8 @@
[App]
Host: 0.0.0.0
Port: 8002
Port: 8000

[Processor]
; The IP and Port on which your Processor node is running (matching the -p HOST_PORT:8000 mapping in the processor's docker run command)
Host: 0.0.0.0
Port: 8001