Tutorial for International Summer School on Deep Learning, 2019 in Gdansk, Poland
https://docs.google.com/presentation/d/1DJI1yX4U5IgApGwavt0AmOCLWwso7ou1Un93sMuAWmA/
There are currently 3 hands-on sections to this tutorial:

- The first section covers pre-trained word embeddings (colab)
- The second section covers pre-trained contextual embeddings (colab)
- The third section covers fine-tuning a pre-trained model (colab)
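The first section's topic, pre-trained word embeddings, boils down to mapping each word in a vocabulary to a fixed dense vector learned in advance. As a rough sketch of the idea (using a made-up, GloVe-style text format rather than any file from the tutorial), loading such embeddings into a lookup matrix might look like:

```python
import io
import numpy as np

# A tiny GloVe-style vectors file: one word per line followed by its vector.
# These three 4-dimensional vectors are fabricated for illustration only.
FAKE_VECTORS = """\
the 0.1 0.2 0.3 0.4
cat -0.5 0.1 0.0 0.2
sat 0.3 -0.2 0.1 0.0
"""

def load_embeddings(fileobj):
    """Read word-vector lines into a vocab dict and an embedding matrix."""
    vocab, rows = {}, []
    for line in fileobj:
        parts = line.rstrip().split()
        vocab[parts[0]] = len(rows)          # word -> row index
        rows.append([float(x) for x in parts[1:]])
    return vocab, np.array(rows, dtype=np.float32)

vocab, embeddings = load_embeddings(io.StringIO(FAKE_VECTORS))
print(embeddings.shape)             # (3, 4)
print(embeddings[vocab["cat"]])     # the vector for "cat"
```

In a real model this matrix typically initializes an embedding layer, whose rows are then either frozen or updated during training.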
April 2022: If you are interested in learning how to build different Transformer architectures from the ground up, I have a new set of tutorials with in-depth details and full implementations of several popular Transformer models. They show how to build models step by step, how to pretrain them, and how to use them for downstream tasks. An accompanying Python package contains all of the tutorial pieces put together.
July 2020: I have posted a set of Colab tutorials using MEAD, which is referenced in these tutorials. This new set of notebooks covers similar material, including transfer learning for classifiers and taggers, as well as training Transformer-based models from scratch using the MEAD API with TPUs. MEAD makes it easy to train many powerful NLP models from a simple YAML configuration, and makes it easy to extend the code with new models while comparing against strong baselines.
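To give a feel for what "a simple YAML configuration" means here, a MEAD experiment is declared in a single YAML file that names the task, dataset, model, and training settings. The fragment below is only a sketch of that general shape; the specific keys and values are assumptions for illustration, not verified MEAD syntax (see the MEAD documentation for real configs):

```yaml
# Illustrative sketch only -- key names are assumptions, consult the MEAD docs
task: classify        # what kind of model to train
backend: pytorch      # which deep learning framework to run on
dataset: SST2         # a named dataset the tool knows how to download
model:
  model_type: default
train:
  epochs: 2
  optim: adam
  eta: 0.001          # learning rate
```

The appeal of this style is that swapping datasets, models, or hyperparameters means editing a few YAML lines rather than writing new training code.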