SMP2019 ECISA
Here is an example of doing stacking on the SMP2019-ECISA dataset. More information about ensemble methods can be found here. One can download the SMP2019-ECISA dataset from the Downstream datasets section. The example of fine-tuning (K-fold cross validation) to obtain classifiers and features on the train set:
CUDA_VISIBLE_DEVICES=0,1 python3 finetune/run_classifier_cv.py --pretrained_model_path models/google_zh_model.bin \
--vocab_path models/google_zh_vocab.txt \
--config_path models/bert/base_config.json \
--output_model_path models/ecisa_classifier_model_0.bin \
--train_path datasets/smp2019-ecisa/train.tsv \
--train_features_path datasets/smp2019-ecisa/train_features_0.npy \
--epochs_num 3 --batch_size 32 --folds_num 5
CUDA_VISIBLE_DEVICES=0,1 python3 finetune/run_classifier_cv.py --pretrained_model_path models/review_roberta_large_model.bin \
--vocab_path models/google_zh_vocab.txt \
--config_path models/bert/large_config.json \
--output_model_path models/ecisa_classifier_model_1.bin \
--train_path datasets/smp2019-ecisa/train.tsv \
--train_features_path datasets/smp2019-ecisa/train_features_1.npy \
--epochs_num 3 --batch_size 32 --folds_num 5
CUDA_VISIBLE_DEVICES=0 python3 finetune/run_classifier_cv.py --pretrained_model_path models/cluecorpussmall_roberta_base_seq512_model.bin-250000 \
--vocab_path models/google_zh_vocab.txt \
--config_path models/bert/base_config.json \
--output_model_path models/ecisa_classifier_model_2.bin \
--train_path datasets/smp2019-ecisa/train.tsv \
--train_features_path datasets/smp2019-ecisa/train_features_2.npy \
--epochs_num 3 --batch_size 32 --seq_length 160 --folds_num 5
CUDA_VISIBLE_DEVICES=0 python3 finetune/run_classifier_cv.py --pretrained_model_path models/cluecorpussmall_gpt2_seq1024_model.bin-250000 \
--vocab_path models/google_zh_vocab.txt \
--config_path models/gpt2/config.json \
--output_model_path models/ecisa_classifier_model_3.bin \
--train_path datasets/smp2019-ecisa/train.tsv \
--train_features_path datasets/smp2019-ecisa/train_features_3.npy \
--epochs_num 3 --batch_size 32 --seq_length 100 --folds_num 5 \
--pooling mean
CUDA_VISIBLE_DEVICES=0,1 python3 finetune/run_classifier_cv.py --pretrained_model_path models/mixed_corpus_bert_large_model.bin \
--vocab_path models/google_zh_vocab.txt \
--config_path models/bert/large_config.json \
--output_model_path models/ecisa_classifier_model_4.bin \
--train_path datasets/smp2019-ecisa/train.tsv \
--train_features_path datasets/smp2019-ecisa/train_features_4.npy \
--epochs_num 3 --batch_size 32 --folds_num 5
CUDA_VISIBLE_DEVICES=0,1 python3 finetune/run_classifier_cv.py --pretrained_model_path models/chinese_roberta_wwm_large_ext_pytorch.bin \
--vocab_path models/google_zh_vocab.txt \
--config_path models/bert/large_config.json \
--output_model_path models/ecisa_classifier_model_5.bin \
--train_path datasets/smp2019-ecisa/train.tsv \
--train_features_path datasets/smp2019-ecisa/train_features_5.npy \
                                                  --epochs_num 3 --batch_size 32 --folds_num 5
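Each of the six runs above saves, besides the fold classifiers, an out-of-fold feature file train_features_i.npy holding the model's predicted class probabilities for every training example. Below is a minimal sketch of how such K-fold out-of-fold features are generally produced; fit_fn and predict_proba_fn are hypothetical placeholders, and the actual logic lives in finetune/run_classifier_cv.py:

import numpy as np
from sklearn.model_selection import KFold

def make_oof_features(train_x, train_y, folds_num, labels_num, fit_fn, predict_proba_fn):
    # For each fold, fine-tune on the remaining folds and predict on the held-out fold,
    # so every training example is scored by a model that never saw it during training.
    oof = np.zeros((len(train_x), labels_num))
    for train_idx, val_idx in KFold(n_splits=folds_num).split(train_x):
        model = fit_fn(train_x[train_idx], train_y[train_idx])    # hypothetical trainer
        oof[val_idx] = predict_proba_fn(model, train_x[val_idx])  # hypothetical scorer
    return oof  # shape [num_examples, labels_num], saved as train_features_i.npy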
The example of doing inference (K-fold cross validation) to obtain features on the development set:
CUDA_VISIBLE_DEVICES=0 python3 inference/run_classifier_cv_infer.py --load_model_path models/ecisa_classifier_model_0.bin \
--vocab_path models/google_zh_vocab.txt \
--config_path models/bert/base_config.json \
--test_path datasets/smp2019-ecisa/dev.tsv \
--test_features_path datasets/smp2019-ecisa/test_features_0.npy \
--folds_num 5 --labels_num 3
CUDA_VISIBLE_DEVICES=0 python3 inference/run_classifier_cv_infer.py --load_model_path models/ecisa_classifier_model_1.bin \
--vocab_path models/google_zh_vocab.txt \
--config_path models/bert/large_config.json \
--test_path datasets/smp2019-ecisa/dev.tsv \
--test_features_path datasets/smp2019-ecisa/test_features_1.npy \
--folds_num 5 --labels_num 3
CUDA_VISIBLE_DEVICES=0 python3 inference/run_classifier_cv_infer.py --load_model_path models/ecisa_classifier_model_2.bin \
--vocab_path models/google_zh_vocab.txt \
--config_path models/bert/base_config.json \
--test_path datasets/smp2019-ecisa/dev.tsv \
--test_features_path datasets/smp2019-ecisa/test_features_2.npy \
--seq_length 160 --folds_num 5 --labels_num 3
CUDA_VISIBLE_DEVICES=0 python3 inference/run_classifier_cv_infer.py --load_model_path models/ecisa_classifier_model_3.bin \
--vocab_path models/google_zh_vocab.txt \
--config_path models/gpt2/config.json \
--test_path datasets/smp2019-ecisa/dev.tsv \
--test_features_path datasets/smp2019-ecisa/test_features_3.npy \
--seq_length 100 --folds_num 5 --labels_num 3 \
--pooling mean
CUDA_VISIBLE_DEVICES=0 python3 inference/run_classifier_cv_infer.py --load_model_path models/ecisa_classifier_model_4.bin \
--vocab_path models/google_zh_vocab.txt \
--config_path models/bert/large_config.json \
--test_path datasets/smp2019-ecisa/dev.tsv \
--test_features_path datasets/smp2019-ecisa/test_features_4.npy \
--folds_num 5 --labels_num 3
CUDA_VISIBLE_DEVICES=0 python3 inference/run_classifier_cv_infer.py --load_model_path models/ecisa_classifier_model_5.bin \
--vocab_path models/google_zh_vocab.txt \
--config_path models/bert/large_config.json \
--test_path datasets/smp2019-ecisa/dev.tsv \
--test_features_path datasets/smp2019-ecisa/test_features_5.npy \
--folds_num 5 --labels_num 3
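With --folds_num 5, each command loads the five fold models saved during cross-validation and writes the development-set features to test_features_i.npy. A minimal sketch of the usual aggregation, assuming the per-fold class probabilities are simply averaged (see inference/run_classifier_cv_infer.py for the actual behavior):

import numpy as np

def average_fold_probs(fold_probs):
    # fold_probs: list of [num_examples, labels_num] arrays, one per fold model.
    return np.stack(fold_probs, axis=0).mean(axis=0)

# Quick sanity check of the saved feature files:
train_feats = np.load("datasets/smp2019-ecisa/train_features_0.npy")
dev_feats = np.load("datasets/smp2019-ecisa/test_features_0.npy")
print(train_feats.shape, dev_feats.shape)  # expected: [num_examples, labels_num] each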
The example of searching for hyper-parameters for LightGBM:
python3 scripts/run_lgb_cv_bayesopt.py --train_path datasets/smp2019-ecisa/train.tsv \
--train_features_path datasets/smp2019-ecisa/ \
--models_num 6 --folds_num 5 --labels_num 3 --epochs_num 100
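The script tunes LightGBM on the concatenated feature files with Bayesian optimization. Below is a minimal sketch of the idea using lightgbm.cv and the bayes_opt package, which may differ from what scripts/run_lgb_cv_bayesopt.py actually does; the label parsing assumes a tab-separated file with a header row and the label id in the first column:

import lightgbm as lgb
import numpy as np
from bayes_opt import BayesianOptimization

# Concatenate the six models' out-of-fold probabilities into one feature matrix.
features = np.hstack([np.load(f"datasets/smp2019-ecisa/train_features_{i}.npy")
                      for i in range(6)])
with open("datasets/smp2019-ecisa/train.tsv", encoding="utf-8") as f:
    labels = np.array([int(line.split("\t")[0]) for line in f.read().splitlines()[1:]])

def cv_score(num_leaves, learning_rate, feature_fraction):
    params = {"objective": "multiclass", "num_class": 3, "metric": "multi_logloss",
              "num_leaves": int(num_leaves), "learning_rate": learning_rate,
              "feature_fraction": feature_fraction, "verbosity": -1}
    result = lgb.cv(params, lgb.Dataset(features, label=labels),
                    num_boost_round=100, nfold=5)
    key = [k for k in result if k.endswith("-mean")][0]  # key name varies across versions
    return -min(result[key])  # maximize the negative of the best mean log-loss

optimizer = BayesianOptimization(cv_score, {"num_leaves": (16, 128),
                                            "learning_rate": (0.01, 0.3),
                                            "feature_fraction": (0.5, 1.0)})
optimizer.maximize(init_points=5, n_iter=25)
print(optimizer.max)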
The example of using LightGBM to train and validate:
python3 scripts/run_lgb.py --train_path datasets/smp2019-ecisa/train.tsv --test_path datasets/smp2019-ecisa/dev.tsv \
--train_features_path datasets/smp2019-ecisa/ --test_features_path datasets/smp2019-ecisa/ \
--models_num 6 --labels_num 3
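A minimal sketch of this final stacking step, assuming scripts/run_lgb.py concatenates the six feature files, fits one LightGBM classifier on the train-set features, and reports accuracy on the development set; label parsing makes the same header and first-column assumption as above:

import lightgbm as lgb
import numpy as np

def read_labels(path):
    # Assumption: tab-separated file, header row, label id in the first column.
    with open(path, encoding="utf-8") as f:
        return np.array([int(line.split("\t")[0]) for line in f.read().splitlines()[1:]])

train_x = np.hstack([np.load(f"datasets/smp2019-ecisa/train_features_{i}.npy")
                     for i in range(6)])
dev_x = np.hstack([np.load(f"datasets/smp2019-ecisa/test_features_{i}.npy")
                   for i in range(6)])
train_y = read_labels("datasets/smp2019-ecisa/train.tsv")
dev_y = read_labels("datasets/smp2019-ecisa/dev.tsv")

clf = lgb.LGBMClassifier()  # multiclass objective is inferred from the labels
clf.fit(train_x, train_y)
print("dev accuracy:", (clf.predict(dev_x) == dev_y).mean())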
One can find more details on the competition's homepage.