# Grounding DINO 1.5

IDEA Research's Most Capable Open-World Object Detection Model Series.

This project provides examples of using the models, which are hosted on DeepDataSpace.
*(Demo video: Grounding.DINO.1.5.Pro.mp4)*
## Contents
- Introduction
- Model Framework
- Performance
- API Usage
- Case Analysis and Qualitative Visualization
- Related Work
- LICENSE
- BibTeX
## Introduction

We introduce Grounding DINO 1.5, a suite of advanced open-set object detection models developed by IDEA Research, which aims to advance the "Edge" of open-set object detection. The suite comprises two models:

- **Grounding DINO 1.5 Pro**: our most capable model for open-set object detection, designed for stronger generalization across a wide range of scenarios.
- **Grounding DINO 1.5 Edge**: our most efficient model for edge computing, optimized for the faster inference speed demanded by applications that require edge deployment.
Note: we use "edge" for its dual meaning, both as in pushing the boundaries and as in running on edge devices.
## Model Framework

The overall framework of Grounding DINO 1.5 is shown in the image below.

Grounding DINO 1.5 Pro preserves the core architecture of Grounding DINO, which employs a deep early-fusion design: text and image features are fused in the early stages of the network rather than only at the prediction heads.
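To make the "deep early fusion" idea concrete, the sketch below shows one bi-directional cross-attention layer between image tokens and text (prompt) tokens, stacked so that the two modalities are mixed repeatedly inside the encoder rather than only at the prediction head. This is a minimal illustration under assumed names and dimensions, not the released implementation.

```python
# Minimal sketch of a bi-directional cross-attention "early fusion"
# layer in the spirit of Grounding DINO's feature enhancer.
# All module names and sizes here are hypothetical.
import torch
import torch.nn as nn

class EarlyFusionLayer(nn.Module):
    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        self.img2txt = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.txt2img = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_img = nn.LayerNorm(dim)
        self.norm_txt = nn.LayerNorm(dim)

    def forward(self, img_tokens, txt_tokens):
        # img_tokens: (B, N_img, dim) flattened multi-scale image features
        # txt_tokens: (B, N_txt, dim) encoded text-prompt tokens
        fused_img, _ = self.img2txt(img_tokens, txt_tokens, txt_tokens)  # image attends to text
        fused_txt, _ = self.txt2img(txt_tokens, img_tokens, img_tokens)  # text attends to image
        img_tokens = self.norm_img(img_tokens + fused_img)
        txt_tokens = self.norm_txt(txt_tokens + fused_txt)
        return img_tokens, txt_tokens

# "Deep" fusion: the layer is stacked, so the modalities mix many times.
img = torch.randn(1, 900, 256)  # dummy image tokens
txt = torch.randn(1, 16, 256)   # dummy text tokens
for layer in [EarlyFusionLayer() for _ in range(3)]:
    img, txt = layer(img, txt)
```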
## Performance

### Zero-Shot Transfer Results

| Model | COCO (AP box) | LVIS-minival (AP all) | LVIS-minival (AP rare) | LVIS-val (AP all) | LVIS-val (AP rare) | ODinW35 (AP avg) | ODinW13 (AP avg) |
|---|---|---|---|---|---|---|---|
| Other Best Open-Set Model | 53.4 (OmDet-Turbo) | 47.6 (T-Rex2 visual) | 45.4 (T-Rex2 visual) | 45.3 (T-Rex2 visual) | 43.8 (T-Rex2 visual) | 30.1 (OmDet-Turbo) | 59.8 (APE-B) |
| DetCLIPv3 | - | 48.8 | 49.9 | 41.4 | 41.4 | - | - |
| Grounding DINO | 52.5 | 27.4 | 18.1 | - | - | 26.1 | 56.9 |
| T-Rex2 (text) | 52.2 | 54.9 | 49.2 | 45.8 | 42.7 | 22.0 | - |
| Grounding DINO 1.5 Pro | 54.3 | 55.7 | 56.1 | 47.6 | 44.6 | 30.2 | 58.7 |
- Grounding DINO 1.5 Pro achieves SOTA performance on COCO, LVIS-minival, LVIS-val, and ODinW35 zero-shot transfer benchmarks.
### Fine-tuning Results

| Model | LVIS-minival (AP all) | LVIS-minival (AP rare) | LVIS-val (AP all) | LVIS-val (AP rare) | ODinW35 (AP avg) | ODinW13 (AP avg) |
|---|---|---|---|---|---|---|
| GLIP | - | - | - | - | - | 68.9 |
| GLEE-Pro | - | - | - | - | - | 69.0 |
| GLIPv2 | 59.8 | - | - | - | - | 70.4 |
| OWL-ST + FT † | 54.4 | 46.1 | 49.4 | 44.6 | - | - |
| DetCLIPv2 | 58.3 | 60.1 | 53.1 | 49.0 | - | 70.4 |
| DetCLIPv3 | 60.5 | 60.7 | - | - | - | 72.1 |
| DetCLIPv3 † | 60.8 | 56.7 | 54.1 | 45.8 | - | - |
| Grounding DINO 1.5 Pro (zero-shot) | 55.7 | 56.1 | 47.6 | 44.6 | 30.2 | 58.7 |
| Grounding DINO 1.5 Pro | 68.1 | 68.7 | 63.5 | 64.0 | 70.6 | 72.4 |
- † indicates results of fine-tuning with LVIS base categories only.
## API Usage

Install the package from source:

```bash
pip install -v -e .
```

Request an API token from DeepDataSpace: https://deepdataspace.com/request_api

Run the command-line demo:

```bash
python demo/demo.py --token <API_TOKEN>
```

Or launch the Gradio demo:

```bash
python gradio_app.py --token <API_TOKEN>
```
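For programmatic access beyond the bundled demos, a request to the hosted API might look like the sketch below. The endpoint URL, payload fields, and response schema here are assumptions for illustration only; demo/demo.py and the DeepDataSpace documentation define the actual client interface.

```python
# Hypothetical sketch of calling the hosted Grounding DINO 1.5 API.
# ENDPOINT, the payload fields, and the response schema are assumed,
# not taken from the real API; see demo/demo.py for supported usage.
import base64
import requests

API_TOKEN = "<API_TOKEN>"  # obtained from https://deepdataspace.com/request_api
ENDPOINT = "https://api.deepdataspace.com/v1/detect"  # placeholder path

with open("example.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

resp = requests.post(
    ENDPOINT,
    headers={"Token": API_TOKEN},
    json={
        "model": "GroundingDino-1.5-Pro",    # or an Edge variant
        "image": image_b64,                  # base64-encoded image
        "prompt": "person . dog . frisbee",  # open-set categories as text
    },
    timeout=60,
)
resp.raise_for_status()
for obj in resp.json().get("objects", []):   # assumed response field
    print(obj)  # e.g. {"category": ..., "score": ..., "bbox": [...]}
```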
## Related Work

- Grounding DINO: Strong open-set object detection model.
- Grounded-Segment-Anything: Open-set detection and segmentation model by combining Grounding DINO with SAM.
- T-Rex/T-Rex2: Generic open-set detection model supporting both text and visual prompts.
## LICENSE

### Grounding DINO 1.5 API License
Grounding DINO 1.5 is released under the Apache 2.0 license. Please see the LICENSE file for more information.
Copyright (c) IDEA. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use these files except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
## BibTeX

If you find our work helpful for your research, please consider citing the following BibTeX entries.
```bibtex
@misc{ren2024grounding,
      title={Grounding DINO 1.5: Advance the "Edge" of Open-Set Object Detection},
      author={Tianhe Ren and Qing Jiang and Shilong Liu and Zhaoyang Zeng and Wenlong Liu and Han Gao and Hongjie Huang and Zhengyu Ma and Xiaoke Jiang and Yihao Chen and Yuda Xiong and Hao Zhang and Feng Li and Peijun Tang and Kent Yu and Lei Zhang},
      year={2024},
      eprint={2405.10300},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

@misc{jiang2024trex2,
      title={T-Rex2: Towards Generic Object Detection via Text-Visual Prompt Synergy},
      author={Qing Jiang and Feng Li and Zhaoyang Zeng and Tianhe Ren and Shilong Liu and Lei Zhang},
      year={2024},
      eprint={2403.14610},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

@article{liu2023grounding,
      title={Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection},
      author={Liu, Shilong and Zeng, Zhaoyang and Ren, Tianhe and Li, Feng and Zhang, Hao and Yang, Jie and Li, Chunyuan and Yang, Jianwei and Su, Hang and Zhu, Jun and others},
      journal={arXiv preprint arXiv:2303.05499},
      year={2023}
}
```