
[Bug] glm-4-flash is set as the default model, but requests still go to the OpenAI API endpoint #6053

Open
chunxingque opened this issue Jan 9, 2025 · 7 comments
Labels
bug Something isn't working

Comments

@chunxingque

📦 Deployment method

Official installation package

📌 Software version

v2.15.8

💻 System environment

Other Linux

📌 System version

CentOS 7.9

🌐 Browser

Chrome

📌 Browser version

131.0.6778.264

🐛 Problem description

1. I deployed with Docker and want to use the glm-4-flash model by default. The deployment command is as follows:

docker run --rm -p 3000:3000 --name chatgpt-next-web \
   -e OPENAI_API_KEY=************ \
   -e CHATGLM_API_KEY=*************** \
   -e CHATGLM_URL=https://open.bigmodel.cn \
   -e DEFAULT_MODEL=glm-4-flash \
   -e CUSTOM_MODELS=-all,glm-4-flash \
   -e DEFAULT_INPUT_TEMPLATE=0.95 \
   -e DISABLE_GPT4=1 \
   -v /etc/localtime:/etc/localtime:ro \
   yidadaa/chatgpt-next-web:v2.15.8
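For context, the `CUSTOM_MODELS` value above uses NextChat's list syntax: `-all` removes every built-in model, and each following name adds one back. A minimal sketch of that filtering logic (`apply_custom_models` is a hypothetical helper for illustration, not the project's actual code):

```python
def apply_custom_models(spec: str, builtin: list[str]) -> list[str]:
    """Apply a NextChat-style CUSTOM_MODELS spec to a list of built-in models."""
    models = list(builtin)
    for token in spec.split(","):
        token = token.strip()
        if token == "-all":
            models = []  # "-all" clears every built-in model
        elif token.startswith("-"):
            models = [m for m in models if m != token[1:]]  # "-name" removes one model
        elif token:
            name = token.lstrip("+").split("=")[0]  # tolerate "+name" and "name=alias" forms
            if name not in models:
                models.append(name)  # plain "name" adds a model to the allowed list
    return models

print(apply_custom_models("-all,glm-4-flash", ["gpt-4o", "gpt-3.5-turbo"]))
# -> ['glm-4-flash']
```

So with `-all,glm-4-flash`, only glm-4-flash should remain selectable, which matches what the UI shows in step 4 below, even though requests are routed elsewhere.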

2. After opening the page and entering text, the frontend shows a connection error.
(screenshot)

3. The running logs show that the OpenAI API endpoint is being called, not the glm-4-flash (ChatGLM) API endpoint.
(screenshot)

4. The current session shows glm-4-flash as the model, but it still cannot be reached.
(screenshot)

I suspected a browser cache problem, but even after reopening the page in an incognito window, requests still go to the OpenAI API endpoint instead of the glm-4-flash API endpoint.
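One way to confirm which upstream host the server side is actually calling, independent of any browser cache, is to grep the container logs. This assumes the container name from the run command above; the two hostnames in the pattern are the standard OpenAI and BigModel API endpoints:

```shell
# Show which upstream API host the container has been calling.
docker logs chatgpt-next-web 2>&1 | grep -iE 'api\.openai\.com|open\.bigmodel\.cn'
```

If only `api.openai.com` lines appear, the DEFAULT_MODEL setting is not being applied server-side.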

📷 Steps to reproduce

🚦 Expected result

With glm-4-flash set as the default model, a newly opened chat should use the glm-4-flash model (not the OpenAI endpoint) and work normally.

📝 Additional information

chunxingque added the bug label on Jan 9, 2025

@TianqiZhang

I'm seeing something similar here; the difference is that I did not set DEFAULT_MODEL but specified gpt-4o@Azure in the config. When starting a new conversation, it still defaults to the OpenAI endpoint and the model name is empty, so I have to select the model manually every time:
(screenshot)


@imoisture

Same problem here:

(screenshot)


@imoisture

@chunxingque After upgrading the Docker image, it seems to work again.

The environment variables are configured as follows:

(screenshot)

The running logs from docker logs are as follows:

(screenshot)

There is still one small issue: every time I access the app from a different browser, I first have to **manually select** glm-4-plus (even though it is the only model configured when starting the container) before it outputs normally.

(screenshot)

