
Cruz Control

GPT2 Models

We currently provide training scripts for the following models (see the sketch after this list):

  • GPT2 Baseline Text + Fact
  • Knowledge Dependent Policy Driven Neural Response Generator using Mezza Tags
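
As an illustration of the text + fact setup, here is a minimal GPT-2 fine-tuning sketch built on the HuggingFace transformers library. The input layout (fact prepended to the dialogue context and response with separator tokens), the TextFactDataset class, and the toy example are assumptions made for this sketch and do not reflect the repository's actual training scripts.

```python
# Minimal GPT-2 "text + fact" fine-tuning sketch (illustrative; not the repo's exact script).
import torch
from torch.utils.data import DataLoader, Dataset
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = GPT2LMHeadModel.from_pretrained("gpt2")

class TextFactDataset(Dataset):
    """Each example concatenates a knowledge fact, the dialogue context, and the response."""
    def __init__(self, examples, max_len=256):
        self.examples = examples
        self.max_len = max_len

    def __len__(self):
        return len(self.examples)

    def __getitem__(self, idx):
        fact, context, response = self.examples[idx]
        text = f"{fact} {tokenizer.eos_token} {context} {tokenizer.eos_token} {response}{tokenizer.eos_token}"
        enc = tokenizer(text, truncation=True, max_length=self.max_len,
                        padding="max_length", return_tensors="pt")
        input_ids = enc["input_ids"].squeeze(0)
        attention_mask = enc["attention_mask"].squeeze(0)
        labels = input_ids.clone()
        labels[attention_mask == 0] = -100  # ignore padding in the LM loss
        return {"input_ids": input_ids, "attention_mask": attention_mask, "labels": labels}

# Toy example; the real scripts read the Topical-Chat corpus instead.
train_examples = [("Cats sleep around 16 hours a day.",
                   "Do you have any pets?",
                   "I have a cat, and she sleeps most of the day!")]
loader = DataLoader(TextFactDataset(train_examples), batch_size=1, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for batch in loader:
    loss = model(**batch).loss  # standard next-token LM loss over the concatenated sequence
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```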

Contact

For any clarification related to the above code, please reach out to Rishi Rajasekaran ([email protected])

DSTC9 Baseline Code (untested)

Response Generation

Scripts to train Seq2Seq and Transformer models on the Amazon Topical-Chat Corpus. This code serves as the baseline for DSTC9 Track 3.

To train: python3 train.py --use_knowledge --transformer --save_path transformer/

To test: python3 test.py --use_knowledge --transformer --save_path transformer/

To serve an interactive model with TF-IDF-based fact selection: python3 dynamic.py --use_knowledge --transformer --save_path transformer/
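
The idea behind TF-IDF-based fact selection can be illustrated with a small scikit-learn sketch: vectorize the candidate facts together with the latest user turn and pick the fact with the highest cosine similarity. The select_fact function and the sample facts below are hypothetical and only demonstrate the idea; they are not the implementation in dynamic.py.

```python
# Sketch of TF-IDF based fact selection (illustrative; dynamic.py may differ).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def select_fact(user_turn, facts):
    """Return the knowledge fact with the highest TF-IDF cosine similarity to the latest user turn."""
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform(facts + [user_turn])
    fact_vectors, turn_vector = matrix[:-1], matrix[-1]
    scores = cosine_similarity(turn_vector, fact_vectors)[0]
    return facts[scores.argmax()]

facts = [
    "The first computer mouse was made of wood.",
    "Honey never spoils if stored properly.",
]
print(select_fact("I was reading about old computer hardware.", facts))
```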

Data

The pre-processed data can be found in data.zip. If you would like to use a different pre-processing strategy, please download the original data from here.

The dataset preparation code is split between utils.py and tc_dataset.py: data loading and tokenization are handled in utils.py, while the examples fed to the model are assembled in tc_dataset.py.
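
As a rough illustration of that split, the sketch below assumes a Topical-Chat-style JSON layout (each conversation holding a "content" list of turns) and uses hypothetical names (load_and_tokenize, TopicalChatDataset); the actual functions in utils.py and tc_dataset.py differ.

```python
# Illustrative sketch of the loading/tokenization vs. dataset-preparation split
# (names and formats are assumptions, not the actual utils.py / tc_dataset.py API).
import json
from torch.utils.data import Dataset

def load_and_tokenize(path, tokenize):
    """utils.py-style step: read a Topical-Chat-style JSON file and tokenize each turn."""
    with open(path) as f:
        conversations = json.load(f)
    data = []
    for conv in conversations.values():
        turns = [tokenize(turn["message"]) for turn in conv["content"]]
        data.append(turns)
    return data

class TopicalChatDataset(Dataset):
    """tc_dataset.py-style step: turn tokenized conversations into (context, response) pairs."""
    def __init__(self, conversations, context_size=3):
        self.pairs = []
        for turns in conversations:
            for i in range(1, len(turns)):
                context = turns[max(0, i - context_size):i]
                self.pairs.append((context, turns[i]))

    def __len__(self):
        return len(self.pairs)

    def __getitem__(self, idx):
        return self.pairs[idx]
```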

Contact

If you experience any issues with this code, please contact me at [email protected]

Setup

Install the following dependencies and language resources before running the code (a small download script is sketched after this list):

  • spacy (pip install spacy)
  • python -m spacy download en_core_web_lg
  • In a Python shell: nltk.download('punkt')
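
The downloads above can also be run from a short Python script; this sketch assumes spacy and nltk are already installed via pip.

```python
# One-time setup sketch that automates the downloads listed above.
# Assumes spacy and nltk are already installed (e.g. via pip).
import nltk
from spacy.cli import download

download("en_core_web_lg")   # same as: python -m spacy download en_core_web_lg
nltk.download("punkt")       # Punkt sentence tokenizer used by nltk
```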

About

Project repo for the DSTC9 dialog evaluation challenge
