ChatGLM #527

Merged
loxs123 merged 15 commits into main from chatglm on Jan 12, 2024

Conversation

loxs123 (Contributor) commented on Dec 29, 2023

No description provided.

loxs123 (Contributor, Author) commented on Jan 8, 2024

Eval results

{'results': {'hellaswag': {'acc': 0.4439354710217088, 'acc_stderr': 0.0049583141142664905, 'acc_norm': 0.5696076478789086, 'acc_norm_stderr': 0.0049411916073179105}}, 'versions': {'hellaswag': 0}, 'config': {'model': 'chatglm', 'batch_size': 1, 'device': 'cuda:0', 'num_fewshot': 0, 'limit': None, 'bootstrap_iters': 100000}}

Remaining issue

Training hangs at the very end (the process stalls after trainer.train() returns); training itself and saving all checkpoints work correctly.
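
For reference, a minimal sketch of how a result dict of this shape can be produced with lm-evaluation-harness 0.3.0 (the harness version in the environment below); the "chatglm" model name is assumed to be the adapter registered in the local harness checkout, so treat this as illustrative rather than the exact eval command used here.

# Sketch only: lm-eval 0.3.0 call that yields a results dict of the shape above.
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="chatglm",      # assumed: custom ChatGLM adapter registered in the local harness fork
    tasks=["hellaswag"],
    num_fewshot=0,
    batch_size=1,
    device="cuda:0",
)
print(results["results"]["hellaswag"]["acc"], results["results"]["hellaswag"]["acc_norm"])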

loxs123 (Contributor, Author) commented on Jan 9, 2024

This PR adds fine-tuning for ChatGLM-6B (LoRA and full-parameter) as well as inference; the LoRA part can also be ported to other models.
GPU memory usage for the different fine-tuning modes is shown below:

GPU memory usage

1. full finetune

1n4g[1-4] fp16 1dp 4tp 1pp batch_size=1

+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 545.23.06              Driver Version: 545.23.06    CUDA Version: 12.3     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA A100-PCIE-40GB          On  | 00000000:4F:00.0 Off |                    0 |
| N/A   71C    P0             254W / 250W |  39721MiB / 40960MiB |     97%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+
|   1  NVIDIA A100-PCIE-40GB          On  | 00000000:50:00.0 Off |                    0 |
| N/A   50C    P0             133W / 250W |  36073MiB / 40960MiB |    100%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+
|   2  NVIDIA A100-PCIE-40GB          On  | 00000000:53:00.0 Off |                    0 |
| N/A   46C    P0              75W / 250W |  36145MiB / 40960MiB |    100%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+
|   3  NVIDIA A100-PCIE-40GB          On  | 00000000:57:00.0 Off |                    0 |
| N/A   48C    P0              77W / 250W |  36073MiB / 40960MiB |    100%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+
|   4  NVIDIA A100-PCIE-40GB          On  | 00000000:9C:00.0 Off |                    0 |
| N/A   44C    P0             101W / 250W |  35953MiB / 40960MiB |    100%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+
|   5  NVIDIA A100-PCIE-40GB          On  | 00000000:9D:00.0 Off |                    0 |
| N/A   33C    P0              38W / 250W |      7MiB / 40960MiB |      0%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+
|   6  NVIDIA A100-PCIE-40GB          On  | 00000000:A0:00.0 Off |                    0 |
| N/A   33C    P0              49W / 250W |   7407MiB / 40960MiB |      0%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+
|   7  NVIDIA A100-PCIE-40GB          On  | 00000000:A4:00.0 Off |                    0 |
| N/A   32C    P0              52W / 250W |      7MiB / 40960MiB |      0%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+

[01/09 18:34:36 lb.utils.events]: eta: 17:02:20 iteration: 9/27736 consumed_samples: 80 total_loss: 5.674 time: 2.2448 s/iter data_time: 0.1167 s/iter total_throughput: 3.56 samples/s lr: 1.62e-08
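
A note on the shorthand above: "1n4g[1-4] fp16 1dp 4tp 1pp batch_size=1" means one node using GPUs 1-4, fp16, and data/tensor/pipeline parallel sizes of 1/4/1 with micro batch size 1. As a rough sketch (not the exact config in this PR), that layout maps onto LiBai's common train config keys as follows; the get_config import follows LiBai's project-config convention and is assumed here.

# Rough sketch of the "1dp 4tp 1pp" layout in LiBai config terms (assumed keys
# from LiBai's configs/common/train.py, loaded via get_config).
from libai.config import get_config

train = get_config("common/train.py").train
train.dist.data_parallel_size = 1      # 1dp
train.dist.tensor_parallel_size = 4    # 4tp
train.dist.pipeline_parallel_size = 1  # 1pp
train.train_micro_batch_size = 1       # batch_size=1
train.amp.enabled = True               # fp16 mixed precision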

1n4g[1-4] fp16 1dp 1tp 4pp batch_size=1

+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 545.23.06              Driver Version: 545.23.06    CUDA Version: 12.3     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA A100-PCIE-40GB          On  | 00000000:4F:00.0 Off |                    0 |
| N/A   43C    P0              67W / 250W |  37883MiB / 40960MiB |      0%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+
|   1  NVIDIA A100-PCIE-40GB          On  | 00000000:50:00.0 Off |                    0 |
| N/A   53C    P0             147W / 250W |  33099MiB / 40960MiB |     34%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+
|   2  NVIDIA A100-PCIE-40GB          On  | 00000000:53:00.0 Off |                    0 |
| N/A   51C    P0             106W / 250W |  31749MiB / 40960MiB |     49%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+
|   3  NVIDIA A100-PCIE-40GB          On  | 00000000:57:00.0 Off |                    0 |
| N/A   51C    P0             158W / 250W |  31749MiB / 40960MiB |     45%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+
|   4  NVIDIA A100-PCIE-40GB          On  | 00000000:9C:00.0 Off |                    0 |
| N/A   43C    P0              64W / 250W |  25299MiB / 40960MiB |     39%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+
|   5  NVIDIA A100-PCIE-40GB          On  | 00000000:9D:00.0 Off |                    0 |
| N/A   34C    P0              39W / 250W |      7MiB / 40960MiB |      0%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+
|   6  NVIDIA A100-PCIE-40GB          On  | 00000000:A0:00.0 Off |                    0 |
| N/A   35C    P0              47W / 250W |   8555MiB / 40960MiB |      0%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+
|   7  NVIDIA A100-PCIE-40GB          On  | 00000000:A4:00.0 Off |                    0 |
| N/A   33C    P0              43W / 250W |      7MiB / 40960MiB |      0%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+

[01/09 18:43:45 lb.utils.events]: eta: 10:05:12 iteration: 9/27736 consumed_samples: 80 total_loss: 5.674 time: 1.3446 s/iter data_time: 0.0538 s/iter total_throughput: 5.95 samples/s lr: 1.62e-08

2. lora finetune

1n4g[1-4] fp16 1dp 4tp 1pp batch_size=1

+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 545.23.06              Driver Version: 545.23.06    CUDA Version: 12.3     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA A100-PCIE-40GB          On  | 00000000:4F:00.0 Off |                    0 |
| N/A   46C    P0              72W / 250W |  37883MiB / 40960MiB |      0%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+
|   1  NVIDIA A100-PCIE-40GB          On  | 00000000:50:00.0 Off |                    0 |
| N/A   47C    P0             100W / 250W |  11727MiB / 40960MiB |     99%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+
|   2  NVIDIA A100-PCIE-40GB          On  | 00000000:53:00.0 Off |                    0 |
| N/A   44C    P0              93W / 250W |  11773MiB / 40960MiB |     99%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+
|   3  NVIDIA A100-PCIE-40GB          On  | 00000000:57:00.0 Off |                    0 |
| N/A   45C    P0              97W / 250W |  11701MiB / 40960MiB |     99%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+
|   4  NVIDIA A100-PCIE-40GB          On  | 00000000:9C:00.0 Off |                    0 |
| N/A   43C    P0              93W / 250W |  11581MiB / 40960MiB |    100%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+
|   5  NVIDIA A100-PCIE-40GB          On  | 00000000:9D:00.0 Off |                    0 |
| N/A   34C    P0              39W / 250W |      7MiB / 40960MiB |      0%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+
|   6  NVIDIA A100-PCIE-40GB          On  | 00000000:A0:00.0 Off |                    0 |
| N/A   34C    P0              51W / 250W |  16237MiB / 40960MiB |      0%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+
|   7  NVIDIA A100-PCIE-40GB          On  | 00000000:A4:00.0 Off |                    0 |
| N/A   33C    P0              51W / 250W |      7MiB / 40960MiB |      0%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+

[01/09 18:55:21 lb.utils.events]: eta: 12:51:07 iteration: 9/27736 consumed_samples: 80 total_loss: 5.674 time: 1.7432 s/iter data_time: 0.0278 s/iter total_throughput: 4.59 samples/s lr: 1.62e-08

1n4g[1-4] fp16 1dp 1tp 4pp batch_size=1


+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 545.23.06              Driver Version: 545.23.06    CUDA Version: 12.3     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA A100-PCIE-40GB          On  | 00000000:4F:00.0 Off |                    0 |
| N/A   71C    P0             243W / 250W |  37883MiB / 40960MiB |     98%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+
|   1  NVIDIA A100-PCIE-40GB          On  | 00000000:50:00.0 Off |                    0 |
| N/A   41C    P0              44W / 250W |  10889MiB / 40960MiB |     22%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+
|   2  NVIDIA A100-PCIE-40GB          On  | 00000000:53:00.0 Off |                    0 |
| N/A   37C    P0              66W / 250W |  10437MiB / 40960MiB |     23%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+
|   3  NVIDIA A100-PCIE-40GB          On  | 00000000:57:00.0 Off |                    0 |
| N/A   39C    P0             165W / 250W |  10437MiB / 40960MiB |     15%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+
|   4  NVIDIA A100-PCIE-40GB          On  | 00000000:9C:00.0 Off |                    0 |
| N/A   36C    P0              66W / 250W |   8505MiB / 40960MiB |     24%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+
|   5  NVIDIA A100-PCIE-40GB          On  | 00000000:9D:00.0 Off |                    0 |
| N/A   32C    P0              40W / 250W |      7MiB / 40960MiB |      0%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+
|   6  NVIDIA A100-PCIE-40GB          On  | 00000000:A0:00.0 Off |                    0 |
| N/A   32C    P0              55W / 250W |      7MiB / 40960MiB |      0%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+
|   7  NVIDIA A100-PCIE-40GB          On  | 00000000:A4:00.0 Off |                    0 |
| N/A   32C    P0              45W / 250W |      7MiB / 40960MiB |      0%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+

[01/09 19:01:28 lb.utils.events]: eta: 6:29:57 iteration: 9/27736 consumed_samples: 80 total_loss: 5.674 time: 0.8229 s/iter data_time: 0.0110 s/iter total_throughput: 9.72 samples/s lr: 1.62e-08
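
Most of the gap between the full and LoRA numbers above (roughly 36 GiB vs 12 GiB per tensor-parallel rank) comes from LoRA freezing the base weights, so gradients and optimizer states only exist for the small low-rank factors. A minimal numpy sketch of the idea, with illustrative sizes rather than this PR's oneflow implementation:

# LoRA in a nutshell: only A and B are trainable, so gradient/optimizer memory
# scales with r * (d_in + d_out) instead of d_in * d_out.
import numpy as np

d, r, alpha = 4096, 8, 16        # hidden size, LoRA rank, scaling (illustrative values)
W = np.random.randn(d, d)        # frozen pretrained weight: no grads, no Adam states
A = np.random.randn(r, d) * 0.01 # trainable down-projection
B = np.zeros((d, r))             # trainable up-projection, zero-initialized

def lora_linear(x):
    # Effective weight is W + (alpha / r) * B @ A; the update starts at zero since B = 0.
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

x = np.random.randn(2, d)
print(lora_linear(x).shape, f"trainable: {A.size + B.size:,} vs full: {W.size:,}")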

loxs123 (Contributor, Author) commented on Jan 11, 2024

import logging
import random

import numpy as np
import oneflow as flow

# Imports follow LiBai's tools/train_net.py layout (paths assumed here);
# ChatGLMTrainer is the trainer subclass added by this PR's ChatGLM project.
from libai.config import LazyConfig, default_argument_parser, try_get_key
from libai.engine import DefaultTrainer, default_setup
from libai.utils.checkpoint import Checkpointer


def main(args):
    cfg = LazyConfig.load(args.config_file)
    cfg = LazyConfig.apply_overrides(cfg, args.opts)
    default_setup(cfg, args)

    seed_for_rank = cfg.train.seed + flow.env.get_rank()
    flow.manual_seed(seed_for_rank)
    flow.cuda.manual_seed(seed_for_rank)
    np.random.seed(seed_for_rank)
    random.seed(seed_for_rank)

    if args.fast_dev_run:
        cfg.train.train_epoch = 0
        cfg.train.train_iter = 20
        cfg.train.evaluation.eval_period = 10
        cfg.train.log_period = 1

    if args.eval_only:
        tokenizer = None
        if try_get_key(cfg, "tokenization") is not None:
            tokenizer = DefaultTrainer.build_tokenizer(cfg)
        model = DefaultTrainer.build_model(cfg)
        Checkpointer(model, save_dir=cfg.train.output_dir).resume_or_load(
            cfg.train.load_weight, resume=args.resume
        )
        if try_get_key(cfg, "graph.enabled", default=False):
            model = DefaultTrainer.build_graph(cfg, model, is_train=False)
        test_loader = DefaultTrainer.build_test_loader(cfg, tokenizer)
        if len(test_loader) == 0:
            logger = logging.getLogger(__name__)
            logger.info("No dataset in dataloader.test, please set dataset for dataloader.test")
        _ = DefaultTrainer.test(cfg, test_loader, model)
        return

    # Training path: the hang reported below happens after train() has returned.
    trainer = ChatGLMTrainer(cfg)
    return trainer.train()


if __name__ == "__main__":
    args = default_argument_parser().parse_args()
    main(args)
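
Since the stall happens after trainer.train() and checkpoint saving have completed, a small diagnostic tweak to the entry point above can confirm that user code has actually finished and that the wait occurs in oneflow's atexit shutdown hook (which is where the traceback below points); this is a sketch, not part of the PR:

if __name__ == "__main__":
    args = default_argument_parser().parse_args()
    main(args)
    # If this prints on every rank, user code has finished and the stall below
    # happens later, inside oneflow's atexit shutdown hook.
    print(f"rank {flow.env.get_rank()} finished main(), entering interpreter shutdown")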

After training finishes (all checkpoints have been saved and trainer.train() has returned inside LiBai), the process hangs for a while and then the following error is raised:

[01/11 05:29:14 lb.utils.events]:  eta: 0:00:00  iteration: 3109/3110  consumed_samples: 49760  total_loss: 1.352  time: 7.4602 s/iter  data_time: 0.0080 s/iter total_throughput: 2.14 samples/s lr: 2.00e-05  
[01/11 05:29:14 lb.utils.events]:  eta: 0:00:00  iteration: 3109/3110  consumed_samples: 49760  total_loss: 1.352  time: 7.4602 s/iter  data_time: 0.0080 s/iter total_throughput: 2.14 samples/s lr: 2.00e-05  
[01/11 05:29:14 lb.engine.hooks]: Overall training speed: 3108 iterations in 6:26:26 (7.4603 s / it)
[01/11 05:29:14 lb.engine.hooks]: Overall training speed: 3108 iterations in 6:26:26 (7.4603 s / it)
[01/11 05:29:14 lb.engine.hooks]: Total training time: 6:26:59 (0:00:32 on hooks)
[01/11 05:29:14 lb.engine.hooks]: Total training time: 6:26:59 (0:00:32 on hooks)
Exception ignored in atexit callback: <function atexit_hook at 0x7fc686ff5480>
Traceback (most recent call last):
  File "/home/lixin/anaconda3/envs/oneflow/lib/python3.10/site-packages/oneflow/__init__.py", line 301, in atexit_hook
    __oneflow_global_unique_env.switch_to_shutting_down(hook.is_normal_exit())
  File "/home/lixin/anaconda3/envs/oneflow/lib/python3.10/site-packages/oneflow/framework/env_util.py", line 225, in switch_to_shutting_down
    self._env_cxt.SwitchToShuttingDownPhase(is_normal_exit)
oneflow._oneflow_internal.exception.Exception: blocking instructions
GlobalSync:s_barrier ptr: 0x562a8774f370 dispatched:0 launched:0 done:0
Barrier:s_barrier ptr: 0x562a87754610 dispatched:0 launched:0 done:0

  File "/home/ci-user/runners/release/_work/oneflow/oneflow/oneflow/api/python/env/env.cpp", line 63, in SwitchToShuttingDownPhase
    vm::ClusterSync()
  File "/home/ci-user/runners/release/_work/oneflow/oneflow/oneflow/core/vm/vm_util.cpp", line 44, in ClusterSync
    bc->WaitUntilCntEqualZero(VirtualMachine::GetPredicatorNoMoreInstructionsFinished())
  File "/home/ci-user/runners/release/_work/oneflow/oneflow/oneflow/core/common/blocking_counter.cpp", line 58, in WaitUntilCntEqualZero
    StopWaitingAfterTimeout()
  File "/home/ci-user/runners/release/_work/oneflow/oneflow/oneflow/core/vm/virtual_machine.cpp", line 211, in operator()
    
Error Type: oneflow.ErrorProto.check_failed_error
Exception ignored in atexit callbackException ignored in atexit callback: : <function atexit_hook at 0x7f43ac5f5480><function atexit_hook at 0x7f2d6cbf5480>

Traceback (most recent call last):
Traceback (most recent call last):
  File "/home/lixin/anaconda3/envs/oneflow/lib/python3.10/site-packages/oneflow/__init__.py", line 301, in atexit_hook
  File "/home/lixin/anaconda3/envs/oneflow/lib/python3.10/site-packages/oneflow/__init__.py", line 301, in atexit_hook
Exception ignored in atexit callback: <function atexit_hook at 0x7fd8e61f5480>
Traceback (most recent call last):
  File "/home/lixin/anaconda3/envs/oneflow/lib/python3.10/site-packages/oneflow/__init__.py", line 301, in atexit_hook
    __oneflow_global_unique_env.switch_to_shutting_down(hook.is_normal_exit())
  File "/home/lixin/anaconda3/envs/oneflow/lib/python3.10/site-packages/oneflow/framework/env_util.py", line 225, in switch_to_shutting_down
    self._env_cxt.SwitchToShuttingDownPhase(is_normal_exit)
oneflow._oneflow_internal.exception.RuntimeError: Error: blocking instructions
GlobalSync:s_barrier ptr: 0x55a21f2930b0 dispatched:0 launched:0 done:0
Barrier:s_barrier ptr: 0x55a20f419f90 dispatched:0 launched:0 done:0


    __oneflow_global_unique_env.switch_to_shutting_down(hook.is_normal_exit())
  File "/home/lixin/anaconda3/envs/oneflow/lib/python3.10/site-packages/oneflow/framework/env_util.py", line 225, in switch_to_shutting_down
    __oneflow_global_unique_env.switch_to_shutting_down(hook.is_normal_exit())
  File "/home/lixin/anaconda3/envs/oneflow/lib/python3.10/site-packages/oneflow/framework/env_util.py", line 225, in switch_to_shutting_down
    self._env_cxt.SwitchToShuttingDownPhase(is_normal_exit)
oneflow._oneflow_internal.exception.RuntimeError: Error: blocking instructions
GlobalSync:s_barrier ptr: 0x5635102aae90 dispatched:0 launched:0 done:0
Barrier:s_barrier ptr: 0x5634f63736f0 dispatched:0 launched:0 done:0


    self._env_cxt.SwitchToShuttingDownPhase(is_normal_exit)
oneflow._oneflow_internal.exception.RuntimeError: Error: blocking instructions
GlobalSync:s_barrier ptr: 0x56504e753580 dispatched:0 launched:0 done:0
Barrier:s_barrier ptr: 0x5650323d2f70 dispatched:0 launched:0 done:0

Version info (pip list)

------------------------- ------------ ---------------------------------------
absl-py                   2.0.0
accelerate                0.25.0
addict                    2.4.0
aiofiles                  23.2.1
aiohttp                   3.9.1
aiosignal                 1.3.1
aliyun-python-sdk-core    2.14.0
aliyun-python-sdk-kms     2.16.2
altair                    5.2.0
annotated-types           0.6.0
antlr4-python3-runtime    4.8
anyio                     4.2.0
appdirs                   1.4.4
async-timeout             4.0.3
attrs                     23.2.0
autoflake                 1.7.8
black                     21.4b2
boto3                     1.34.13
botocore                  1.34.13
certifi                   2023.11.17
cffi                      1.16.0
chardet                   5.2.0
charset-normalizer        3.3.2
click                     8.0.2
cloudpickle               3.0.0
colorama                  0.4.6
contourpy                 1.2.0
crcmod                    1.7
cryptography              41.0.7
cycler                    0.12.1
DataProperty              1.0.1
datasets                  2.16.1
dill                      0.3.7
distro                    1.9.0
docstring-parser          0.15
einops                    0.7.0
exceptiongroup            1.2.0
fastapi                   0.108.0
ffmpy                     0.3.1
filelock                  3.13.1
flake8                    3.8.1
flowvision                0.1.0
fonttools                 4.47.0
frozenlist                1.4.1
fsspec                    2023.10.0
gast                      0.5.4
gradio                    3.50.2
gradio_client             0.6.1
h11                       0.14.0
httpcore                  1.0.2
httpx                     0.26.0
huggingface-hub           0.20.1
hydra-core                1.1.2
idna                      3.6
importlib-metadata        7.0.1
importlib-resources       6.1.1
iniconfig                 2.0.0
isort                     5.10.1
jieba                     0.42.1
Jinja2                    3.1.2
jmespath                  0.10.0
joblib                    1.3.2
jsonlines                 4.0.0
jsonschema                4.20.0
jsonschema-specifications 2023.12.1
kiwisolver                1.4.5
LiBai                     0.2.0        /home/lixin/libai
lm-eval                   0.3.0        /home/lixin/lm-evaluation-harness-0.3.0
markdown-it-py            3.0.0
MarkupSafe                2.1.3
matplotlib                3.8.2
mbstrdecoder              1.1.3
mccabe                    0.6.1
mdurl                     0.1.2
modelscope                1.10.0
mpmath                    1.3.0
multidict                 6.0.4
multiprocess              0.70.15
mypy-extensions           1.0.0
networkx                  3.2.1
nltk                      3.8.1
numexpr                   2.8.8
numpy                     1.26.3
nvidia-cublas-cu12        12.1.3.1
nvidia-cuda-cupti-cu12    12.1.105
nvidia-cuda-nvrtc-cu12    12.1.105
nvidia-cuda-runtime-cu12  12.1.105
nvidia-cudnn-cu12         8.9.2.26
nvidia-cufft-cu12         11.0.2.54
nvidia-curand-cu12        10.3.2.106
nvidia-cusolver-cu12      11.4.5.107
nvidia-cusparse-cu12      12.1.0.106
nvidia-nccl-cu12          2.18.1
nvidia-nvjitlink-cu12     12.3.101
nvidia-nvtx-cu12          12.1.105
omegaconf                 2.1.0
oneflow                   0.9.0
openai                    1.6.1
orjson                    3.9.10
oss2                      2.18.4
packaging                 23.2
pandas                    2.1.4
pathspec                  0.12.1
pathvalidate              3.2.0
peft                      0.7.1
pillow                    10.2.0
pip                       23.3.2
platformdirs              4.1.0
pluggy                    1.3.0
portalocker               2.8.2
protobuf                  3.20.1
psutil                    5.9.7
pyarrow                   14.0.2
pyarrow-hotfix            0.6
pybind11                  2.11.1
pycodestyle               2.6.0
pycountry                 23.12.11
pycparser                 2.21
pycryptodome              3.19.1
pydantic                  2.5.3
pydantic_core             2.14.6
pydub                     0.25.1
pyflakes                  2.2.0
Pygments                  2.17.2
pyparsing                 3.1.1
pytablewriter             1.2.0
pytest                    7.4.4
python-dateutil           2.8.2
python-multipart          0.0.6
pytz                      2023.3.post1
PyYAML                    6.0.1
referencing               0.32.1
regex                     2023.12.25
requests                  2.31.0
rich                      13.7.0
rouge-chinese             1.0.3
rouge-score               0.1.2
rpds-py                   0.16.2
s3transfer                0.10.0
sacrebleu                 1.5.0
safetensors               0.4.1
scikit-learn              1.3.2
scipy                     1.11.4
semantic-version          2.10.0
sentencepiece             0.1.99
setuptools                68.2.2
shtab                     1.6.5
simplejson                3.19.2
six                       1.16.0
sniffio                   1.3.0
sortedcontainers          2.4.0
sqlitedict                2.1.0
sse-starlette             1.8.2
starlette                 0.32.0.post1
sympy                     1.12
tabledata                 1.3.3
tabulate                  0.9.0
tcolorpy                  0.1.4
tensorboardX              2.5.1
termcolor                 2.4.0
threadpoolctl             3.2.0
tiktoken                  0.5.2
tokenizers                0.15.0
toml                      0.10.2
tomli                     2.0.1
toolz                     0.12.0
torch                     2.1.2
tqdm                      4.66.1
tqdm-multiprocess         0.0.11
transformers              4.36.2
triton                    2.1.0
trl                       0.7.7
typepy                    1.3.2
typing_extensions         4.9.0
tyro                      0.6.3
tzdata                    2023.4
urllib3                   2.0.7
uvicorn                   0.25.0
websockets                11.0.3
wget                      3.2
wheel                     0.41.2
xxhash                    3.4.1
yapf                      0.40.2
yarl                      1.9.4
zipp                      3.17.0
zstandard                 0.22.0

@loxs123 loxs123 requested review from oneflow-ci-bot and removed request for oneflow-ci-bot January 11, 2024 06:44
@loxs123 loxs123 requested review from oneflow-ci-bot and removed request for oneflow-ci-bot January 11, 2024 08:33
@xiezipeng-ML xiezipeng-ML removed the request for review from oneflow-ci-bot January 11, 2024 10:14
@loxs123 loxs123 requested review from oneflow-ci-bot and removed request for oneflow-ci-bot January 11, 2024 10:26
@loxs123 loxs123 requested review from oneflow-ci-bot and removed request for oneflow-ci-bot January 11, 2024 10:33
@loxs123 loxs123 requested review from oneflow-ci-bot and removed request for oneflow-ci-bot January 11, 2024 11:07
@loxs123 loxs123 merged commit d875d23 into main Jan 12, 2024
2 checks passed
@loxs123 loxs123 deleted the chatglm branch January 12, 2024 03:14