I only modified the end of constant_map.py, changing it to `train_model_config = train_info_models['chatglm2-6b-32k-int4']`.
```python
'chatglm2-6b-32k-int4': {
    'model_type': 'chatglm2',
    'model_name_or_path': '/data/ChatGLM/pre_model/chatglm2-6b-32k-int4',
    'config_name': '/data/ChatGLM/pre_model/chatglm2-6b-32k-int4/config.json',
    'tokenizer_name': '/data/ChatGLM/pre_model/chatglm2-6b-32k-int4',
},
```
That is the relevant configuration. Training then fails with: `AssertionError: quantization ptv2 not support`
You appear to be running LoRA training on already-quantized weights. It's recommended to start from the fp16 weights and perform LoRA int4 training on those instead.
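Following that advice, the constant_map.py entry would point at the full-precision checkpoint rather than the pre-quantized int4 one. A minimal sketch, assuming an fp16 checkpoint directory named `chatglm2-6b-32k` exists next to the int4 one (that path is an assumption, not something stated in this issue):

```python
# Hypothetical constant_map.py entry: reference the fp16 checkpoint
# instead of the pre-quantized int4 weights, so that int4 quantization
# can be applied at LoRA-training time rather than loaded pre-baked.
# The '/data/ChatGLM/pre_model/chatglm2-6b-32k' path is an assumption.
train_info_models = {
    'chatglm2-6b-32k': {
        'model_type': 'chatglm2',
        'model_name_or_path': '/data/ChatGLM/pre_model/chatglm2-6b-32k',
        'config_name': '/data/ChatGLM/pre_model/chatglm2-6b-32k/config.json',
        'tokenizer_name': '/data/ChatGLM/pre_model/chatglm2-6b-32k',
    },
}

# Select the fp16 entry, mirroring the line changed in the issue.
train_model_config = train_info_models['chatglm2-6b-32k']
```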