JMODT

This is the official code release of the IROS-2021 paper JMODT: Joint Multi-Object Detection and Tracking with Camera-LiDAR Fusion for Autonomous Driving.

Overview

The system architecture of JMODT:

[figure: JMODT system architecture diagram]

The region proposal feature processing modules:

[figure: region proposal feature processing modules]

Model Zoo

The results are evaluated on the validation set of the KITTI object tracking dataset, using Car objects only. The average precision (AP) scores are computed at 40 recall positions. The runtime is measured for the tracking part only (i.e., after the region proposal feature processing).

| Model | AP-Easy | AP-Moderate | AP-Hard | MOTA  | MOTP  | IDS | FRAG | Runtime |
| ----- | ------- | ----------- | ------- | ----- | ----- | --- | ---- | ------- |
| JMODT | 94.01   | 87.37       | 85.22   | 86.10 | 87.13 | 0   | 129  | 0.01s   |
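
For reference, AP at 40 recall positions follows the standard interpolated-AP recipe. Below is a minimal sketch of that computation, assuming precision/recall pairs are already available; it is an illustration, not the evaluation code used by this repository:

import numpy as np

def ap_r40(recalls, precisions):
    # 40-point interpolated AP: for each recall threshold r in
    # {1/40, 2/40, ..., 1}, take the best precision achieved at
    # recall >= r, then average over the 40 thresholds.
    recalls = np.asarray(recalls)
    precisions = np.asarray(precisions)
    thresholds = np.linspace(1 / 40, 1.0, 40)
    total = 0.0
    for t in thresholds:
        mask = recalls >= t
        total += precisions[mask].max() if mask.any() else 0.0
    return total / 40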

Requirements

The code has been tested in the following environment:

  • Ubuntu 20.04 & Windows 10
  • Python 3.8
  • PyTorch 1.9.0
  • CUDA Toolkit 11.1

Installation

  1. Install PyTorch and CUDA.

  2. Install other required Python packages:

pip install -r requirements.txt

  3. Build and install the required CUDA modules via PyTorch and the CUDA toolkit:

python setup.py develop
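
As a quick sanity check before building, you can confirm that the installed PyTorch sees a matching CUDA toolkit (a generic check, not a script from this repository):

python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"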

Getting Started

Dataset preparation

Please download the official KITTI object tracking dataset.

To generate the detection results, use the following command to reformat the ground truth into KITTI's object detection format. You can create your own data splits by modifying the jmodt/config.py file (a hypothetical sketch follows the command below).

python tools/kitti_converter.py --data_root ${DATA_ROOT}
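
The actual split format in jmodt/config.py is not reproduced here. As a purely hypothetical sketch, a sequence-based split over KITTI's 21 training sequences might look like the following; the variable names are illustrative, not the repository's real ones:

# Hypothetical illustration only -- see jmodt/config.py for the real format.
TRAIN_SEQ_ID = ['0000', '0002', '0003', '0004', '0005', '0007',
                '0009', '0011', '0017', '0020']
VALID_SEQ_ID = ['0001', '0006', '0008', '0010', '0012', '0013',
                '0014', '0015', '0016', '0018', '0019']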

The final dataset organization should look like this (you can use a custom data root):

JMODT
├── data
    ├── KITTI
        ├── tracking
        │   ├──training
        │   │  ├──calib & velodyne & label_02 & image_02
        │   ├──testing
        │      ├──calib & velodyne & image_02
        ├── tracking_object
            ├──ImageSets
            │  ├──small_val.txt & test.txt & train.txt & val.txt
            ├──training
            │  ├──calib & velodyne & label_2 & image_2 & sample2frame.txt & seq2sample.txt
            ├──testing
               ├──calib & velodyne & image_2 & sample2frame.txt & seq2sample.txt
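
If the raw KITTI data already lives elsewhere on disk, a symbolic link preserves this layout without copying (Linux example; the source path is a placeholder):

mkdir -p data/KITTI
ln -s /path/to/kitti/tracking data/KITTI/tracking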

Training & Testing

Training

Fine-tune the additional link/start-end branches based on a pretrained detection model:

python tools/train.py --data_root ${DATA_ROOT} --ckpt ${PRETRAINED_MODEL} --finetune --batch_size ${BATCH_SIZE} --output_dir ${OUTPUT}
  • If you want to train with multiple GPUs, add the --mgpus option.

  • If you want to jointly train the detection and correlation models, remove the --finetune option.
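
For example, a hypothetical fine-tuning run on a single GPU (the paths and batch size below are placeholders, not files shipped with the repository):

python tools/train.py --data_root data/KITTI --ckpt pretrained_detector.pth --finetune --batch_size 4 --output_dir output/jmodt_finetune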

Testing

Evaluate the tracking performance on the validation set:

python tools/eval.py --data_root ${DATA_ROOT} --det_output ${DETECTION_OUTPUT} --ckpt ${CKPT}
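
For example, continuing the hypothetical placeholder paths from the training step:

python tools/eval.py --data_root data/KITTI --det_output output/jmodt_finetune/det --ckpt output/jmodt_finetune/checkpoint.pth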

Visualization

Please try the code under the tools/visualization directory to visualize your 3D object tracking results and make an impressive video!
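
If the visualization scripts write per-frame images, ffmpeg is one way to assemble them into a video (the frame-name pattern is an assumption; adjust it to the actual output files):

ffmpeg -framerate 10 -i frames/%06d.png -c:v libx264 -pix_fmt yuv420p tracking.mp4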

License

JMODT is released under the MIT license.

Acknowledgement

The object detection module of JMODT is based on EPNet and OpenPCDet. The data association module is based on mmMOT. Many thanks to the authors for their official implementations.

Citation

TODO
