From c7d625d9dce23fecf341d08badaf57deb491b41e Mon Sep 17 00:00:00 2001
From: "Teo (Timothy) Wu Haoning" <38696372+teowu@users.noreply.github.com>
Date: Tue, 13 Feb 2024 13:53:30 +0800
Subject: [PATCH] Update README.md

---
 README.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 420b3cf..61b156a 100644
--- a/README.md
+++ b/README.md
@@ -6,6 +6,7 @@
+
@@ -72,7 +73,7 @@ The proposed Q-Bench includes three realms for low-level vision: perception (A1)
 - For assessment (A3), as we use **public datasets**, we provide an abstract evaluation code for arbitrary MLLMs for anyone to test.
 ## Release
-- [2/10] 🔥 We are releasing the extended [Q-bench+](https://github.com/Q-Future/Q-Bench/blob/master/Q_Bench%2B.pdf), which challenges MLLMs with both single images and image pairs on low-level vision. The [LeaderBoard](https://huggingface.co/spaces/q-future/Q-Bench-Leaderboard) is onsite, check out the low-level vision ability for your favorite MLLMs!! More details coming soon.
+- [2/10] 🔥 We are releasing the extended [Q-Bench+](https://arxiv.org/abs/2402.07116), which challenges MLLMs with both single images and **image pairs** on low-level vision. The [LeaderBoard](https://huggingface.co/spaces/q-future/Q-Bench-Leaderboard) is online; check out the low-level vision abilities of your favorite MLLMs! More details coming soon.
 - [1/16] 🔥 Our work ["Q-Bench: A Benchmark for General-Purpose Foundation Models on Low-level Vision"](https://arxiv.org/abs/2309.14181) is accepted by **ICLR2024 as Spotlight Presentation**.
 ## Close-source MLLMs (GPT-4V-Turbo, Gemini, Qwen-VL-Plus, GPT-4V)
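
For context on the A3 bullet in the hunk above: "abstract evaluation code for arbitrary MLLMs" usually means a harness that talks to any model through one small adapter interface and scores its predictions against human labels. The sketch below is a hypothetical illustration of that pattern, not code from this repository; `MLLMInterface`, `score_image`, and `evaluate_a3` are invented names, and SRCC against MOS labels is assumed as the metric (the standard choice in image-quality assessment).

```python
# Minimal sketch (hypothetical, not the repository's actual code) of an
# abstract evaluation harness: any MLLM is hidden behind one interface,
# and the assessment loop only talks to that interface.
from abc import ABC, abstractmethod
from typing import List, Tuple

from scipy.stats import spearmanr  # rank correlation against human scores


class MLLMInterface(ABC):
    """Hypothetical adapter: implement this once per MLLM under test."""

    @abstractmethod
    def score_image(self, image_path: str, prompt: str) -> float:
        """Return a scalar quality prediction for the image."""


def evaluate_a3(model: MLLMInterface,
                dataset: List[Tuple[str, float]],
                prompt: str = "Rate the quality of this image.") -> float:
    """Compute SRCC between model predictions and human MOS labels."""
    preds = [model.score_image(path, prompt) for path, _ in dataset]
    labels = [mos for _, mos in dataset]
    srcc, _ = spearmanr(preds, labels)
    return srcc
```

A concrete wrapper would implement `score_image` with the model's own inference call; the harness itself never needs to know which MLLM sits behind the interface, which is what makes the evaluation applicable to arbitrary models.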