Deep Classification, Embedding & Text Generation - Challenge #54

First, write down two intuitions you have about broad content patterns you will discover about your data as encoded within a pre-trained or fine-tuned deep contextual (e.g., BERT) embedding. These can be the same as those from last week, or they can evolve based on last week's explorations and the novel possibilities that emerge from dynamic, contextual embeddings (e.g., they could be about text generation from a tuned model). As before, place an asterisk next to the one you expect most firmly, and a plus next to the one that, if true, would be the biggest or most important surprise to others (especially the research community to whom you might communicate it, if robustly supported). Second, describe the dataset(s) you would like to fine-tune or embed within a pre-trained contextual embedding model to explore these intuitions. Note that this need not be a large text; you could simply encode a few texts in a pretrained contextual embedding and explore their positions relative to one another and the semantics of the model. Then provide (a) a link to the data, (b) a script to download and clean it, (c) a reference to a class dataset, or (d) an invitation for a TA to contact you about it. Please do NOT spend time/space explaining the precise embedding or analysis strategy you will use to explore your intuitions. (Then upvote the 5 most interesting, relevant, and challenging challenge responses from others.)
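For the "encode a few texts" option, something like the following minimal sketch would do, assuming the Hugging Face transformers library and bert-base-uncased; the example texts and the mean-pooling choice are illustrative assumptions, not part of the assignment.

```python
# Minimal sketch: encode a few texts with a pre-trained BERT model and
# compare their positions in embedding space via cosine similarity.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

# Illustrative texts; substitute a few documents from your own data.
texts = [
    "Feminism has always been the central theme of Gilmore Girls.",
    "The winning team's arguments centered on the debate topic.",
    "The candidate campaigned across Latin America.",
]

with torch.no_grad():
    enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    out = model(**enc)
    # Mean-pool token embeddings, ignoring padding positions.
    mask = enc["attention_mask"].unsqueeze(-1).float()
    emb = (out.last_hidden_state * mask).sum(1) / mask.sum(1)

# Pairwise cosine similarities between the embedded texts.
sims = torch.nn.functional.cosine_similarity(
    emb.unsqueeze(1), emb.unsqueeze(0), dim=-1
)
print(sims)
```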
Comments
Intuition: … Dataset: COCA News (I have the data for 2002 and can perhaps start from this): https://drive.google.com/file/d/1rzcTmYxeT5zLRG1UJnyce2izIIM324zi/view?usp=sharing
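If helpful for prompt option (b), here is a minimal download sketch, assuming the gdown package and that the Drive link above is publicly readable; the output filename is a guess, since the file format isn't stated.

```python
import gdown

# File ID taken from the Google Drive share link above.
file_id = "1rzcTmYxeT5zLRG1UJnyce2izIIM324zi"
# Output name is a placeholder; adjust to the actual file format.
gdown.download(f"https://drive.google.com/uc?id={file_id}",
               "coca_news_2002.txt", quiet=False)
```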
*1. Feminism has always been the central theme of Gilmore Girls. Dataset: …
Intuition: … Dataset: https://data.world/romanticmonkey/syrianwarfakenews
Intuitions: … Data: scraped from Archive of Our Own (http://archiveofourown.org/) using the AO3Scraper script (https://github.com/radiolarian/AO3Scraper), along with the Davies TV Corpus.
Data: the Glassdoor company review database.
Intuitions: (+) In structured debates, the winning team's arguments will be centered on the debate topic. I didn't collect data on this because it is unrelated to my project, but it can be scraped from the Munk Debates and Intelligence Squared websites.
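As a purely illustrative sketch of how that scraping might start: the URL and the paragraph selector below are hypothetical placeholders, and the real Munk Debates / Intelligence Squared pages would need their own selectors plus a terms-of-service and robots.txt check.

```python
import requests
from bs4 import BeautifulSoup

# Hypothetical transcript URL; replace with a real debate-transcript page.
url = "https://example.org/debate-transcript"
resp = requests.get(url, timeout=30)
resp.raise_for_status()
soup = BeautifulSoup(resp.text, "html.parser")

# Collect paragraph text as a rough stand-in for speaker turns.
turns = [p.get_text(strip=True) for p in soup.find_all("p")]
print("\n".join(turns[:5]))
```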
Intuitions: (*) BERT provides unprecedented performance on this dataset compared to any other model used so far.
People speaking about Latin American politicians who ran for president (2005-2015). Dataset: Corpus del Español, which contains about two billion words of Spanish taken from about two million web pages from 21 different Spanish-speaking countries; it was web-scraped in 2015. Class dataset: Corpus del Español ("SPAN").