feat(model): Support Qwen2.5 models
fangyinc committed Sep 19, 2024
1 parent 6ce620f commit 896122b
Showing 5 changed files with 47 additions and 4 deletions.
9 changes: 9 additions & 0 deletions README.ja.md
@@ -154,6 +154,15 @@ The architecture of DB-GPT is shown in the figure below:
We support a wide range of models, including dozens of large language models (LLMs) from open-source and API agents such as LLaMA/LLaMA2, Baichuan, ChatGLM, Wenxin, Tongyi, and Zhipu.

- News
- 🔥🔥🔥 [Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct)
- 🔥🔥🔥 [Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct)
- 🔥🔥🔥 [Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct)
- 🔥🔥🔥 [Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct)
- 🔥🔥🔥 [Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct)
- 🔥🔥🔥 [Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct)
- 🔥🔥🔥 [Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct)
- 🔥🔥🔥 [Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct)
- 🔥🔥🔥 [Qwen2.5-Coder-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-1.5B-Instruct)
- 🔥🔥🔥 [Meta-Llama-3.1-405B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-405B-Instruct)
- 🔥🔥🔥 [Meta-Llama-3.1-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-70B-Instruct)
- 🔥🔥🔥 [Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct)
9 changes: 9 additions & 0 deletions README.md
@@ -168,6 +168,15 @@ At present, we have introduced several key features to showcase our current capabilities:
We offer extensive model support, including dozens of large language models (LLMs) from both open-source and API agents, such as LLaMA/LLaMA2, Baichuan, ChatGLM, Wenxin, Tongyi, Zhipu, and many more.

- News
- 🔥🔥🔥 [Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct)
- 🔥🔥🔥 [Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct)
- 🔥🔥🔥 [Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct)
- 🔥🔥🔥 [Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct)
- 🔥🔥🔥 [Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct)
- 🔥🔥🔥 [Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct)
- 🔥🔥🔥 [Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct)
- 🔥🔥🔥 [Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct)
- 🔥🔥🔥 [Qwen2.5-Coder-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-1.5B-Instruct)
- 🔥🔥🔥 [Meta-Llama-3.1-405B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-405B-Instruct)
- 🔥🔥🔥 [Meta-Llama-3.1-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-70B-Instruct)
- 🔥🔥🔥 [Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct)
9 changes: 9 additions & 0 deletions README.zh.md
@@ -163,6 +163,15 @@
Extensive model support, covering dozens of large language models from open-source and API proxies, such as LLaMA/LLaMA2, Baichuan, ChatGLM, Wenxin, Tongyi, Zhipu, and more. The following models are currently supported:

- Newly supported models
- 🔥🔥🔥 [Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct)
- 🔥🔥🔥 [Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct)
- 🔥🔥🔥 [Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct)
- 🔥🔥🔥 [Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct)
- 🔥🔥🔥 [Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct)
- 🔥🔥🔥 [Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct)
- 🔥🔥🔥 [Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct)
- 🔥🔥🔥 [Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct)
- 🔥🔥🔥 [Qwen2.5-Coder-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-1.5B-Instruct)
- 🔥🔥🔥 [Meta-Llama-3.1-405B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-405B-Instruct)
- 🔥🔥🔥 [Meta-Llama-3.1-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-70B-Instruct)
- 🔥🔥🔥 [Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct)
11 changes: 11 additions & 0 deletions dbgpt/configs/model_config.py
@@ -173,6 +173,17 @@ def get_device() -> str:
"qwen2-0.5b-instruct-gptq-int4": os.path.join(
MODEL_PATH, "Qwen2-0.5B-Instruct-GPTQ-Int4"
),
"qwen2.5-0.5b-instruct": os.path.join(MODEL_PATH, "Qwen2.5-0.5B-Instruct"),
"qwen2.5-1.5b-instruct": os.path.join(MODEL_PATH, "Qwen2.5-1.5B-Instruct"),
"qwen2.5-3b-instruct": os.path.join(MODEL_PATH, "Qwen2.5-3B-Instruct"),
"qwen2.5-7b-instruct": os.path.join(MODEL_PATH, "Qwen2.5-7B-Instruct"),
"qwen2.5-14b-instruct": os.path.join(MODEL_PATH, "Qwen2.5-14B-Instruct"),
"qwen2.5-32b-instruct": os.path.join(MODEL_PATH, "Qwen2.5-32B-Instruct"),
"qwen2.5-72b-instruct": os.path.join(MODEL_PATH, "Qwen2.5-72B-Instruct"),
"qwen2.5-coder-1.5b-instruct": os.path.join(
MODEL_PATH, "Qwen2.5-Coder-1.5B-Instruct"
),
"qwen2.5-coder-7b-instruct": os.path.join(MODEL_PATH, "Qwen2.5-Coder-7B-Instruct"),
# (Llama2 based) We only support WizardLM-13B-V1.2 for now, which is trained from Llama-2 13b, see https://huggingface.co/WizardLM/WizardLM-13B-V1.2
"wizardlm-13b": os.path.join(MODEL_PATH, "WizardLM-13B-V1.2"),
# wget https://huggingface.co/TheBloke/vicuna-13B-v1.5-GGUF/resolve/main/vicuna-13b-v1.5.Q4_K_M.gguf -O models/ggml-model-q4_0.gguf
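The hunk above registers each Qwen2.5 checkpoint under a lowercase key that maps to a local directory beneath MODEL_PATH. Below is a minimal sketch of how such a name-to-path registry is typically consumed; the LLM_MODEL_CONFIG dict shown here, the MODEL_PATH value, and the resolve_model_path helper are illustrative assumptions rather than code copied from the repository.

import os

# Assumption: MODEL_PATH is the local directory that holds downloaded checkpoints.
MODEL_PATH = os.path.expanduser("~/models")

# Two of the entries added by this commit, reproduced here for illustration.
LLM_MODEL_CONFIG = {
    "qwen2.5-7b-instruct": os.path.join(MODEL_PATH, "Qwen2.5-7B-Instruct"),
    "qwen2.5-coder-7b-instruct": os.path.join(
        MODEL_PATH, "Qwen2.5-Coder-7B-Instruct"
    ),
}


def resolve_model_path(model_name: str) -> str:
    """Look up a registered model name and return its local checkpoint path."""
    try:
        return LLM_MODEL_CONFIG[model_name.lower()]
    except KeyError as exc:
        raise ValueError(
            f"Unknown model {model_name!r}; add it to LLM_MODEL_CONFIG first."
        ) from exc


print(resolve_model_path("Qwen2.5-7B-Instruct"))  # -> ~/models/Qwen2.5-7B-Instruct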
13 changes: 9 additions & 4 deletions dbgpt/model/adapter/hf_adapter.py
@@ -370,10 +370,15 @@ class Qwen2Adapter(QwenAdapter):
     support_8bit: bool = True

     def do_match(self, lower_model_name_or_path: Optional[str] = None):
-        return (
-            lower_model_name_or_path
-            and "qwen2" in lower_model_name_or_path
-            and "instruct" in lower_model_name_or_path
+        return lower_model_name_or_path and (
+            (
+                "qwen2" in lower_model_name_or_path
+                and "instruct" in lower_model_name_or_path
+            )
+            or (
+                "qwen2.5" in lower_model_name_or_path
+                and "instruct" in lower_model_name_or_path
+            )
         )


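The adapter change above widens Qwen2Adapter.do_match so that it accepts both Qwen2 and Qwen2.5 instruct checkpoints. The standalone sketch below reproduces the same predicate and exercises it on a few sample names; the free function and the sample names are illustrative only, not code from the repository. Since any name containing "qwen2.5" also contains "qwen2" as a substring, both branches accept Qwen2.5 instruct names.

from typing import Optional


def do_match(lower_model_name_or_path: Optional[str] = None) -> bool:
    """Standalone copy of the matching predicate, for illustration only."""
    return bool(
        lower_model_name_or_path
        and (
            (
                "qwen2" in lower_model_name_or_path
                and "instruct" in lower_model_name_or_path
            )
            or (
                "qwen2.5" in lower_model_name_or_path
                and "instruct" in lower_model_name_or_path
            )
        )
    )


# Assumption: the adapter framework lower-cases names before calling do_match.
for name in [
    "qwen2-7b-instruct",          # Qwen2 instruct model      -> True
    "qwen2.5-7b-instruct",        # Qwen2.5 instruct model    -> True
    "qwen2.5-coder-7b-instruct",  # Qwen2.5 coder variant     -> True
    "qwen2.5-7b",                 # base model, no "instruct" -> False
]:
    print(f"{name}: {do_match(name)}")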