Changes between January 9th and January 15th
What's Changed
Empirical Study ⚗️
- Run feature engineering at large scale (100 %) 💡 by @KarelZe in #109
- Run exploratory data analysis on the cluster (10 %) by @KarelZe in #108
Writing 📖
- Add chapter on input embedding (finished) and positional encoding (cont'd) 🛌 by @KarelZe in #107
- Finish chapter on positional encoding 🧵 (see the sketch after this list) by @KarelZe in #111
- Add chapter on TabTransformer 🔢 by @KarelZe in #112
- Add chapter on FTTransformer 🤖 by @KarelZe in #113
- Correct the column embedding in the TabTransformer chapter 🤖 by @KarelZe in #115
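
As a pointer to what the positional-encoding chapter covers, below is a minimal sketch of the classic sinusoidal positional encoding from Vaswani et al. (2017). The function name and this PyTorch formulation are illustrative assumptions, not the code or notation used in the thesis.

```python
import math

import torch


def sinusoidal_positional_encoding(seq_len: int, d_model: int) -> torch.Tensor:
    """Sinusoidal positional encoding (Vaswani et al., 2017):
    PE[pos, 2i]   = sin(pos / 10000**(2i / d_model))
    PE[pos, 2i+1] = cos(pos / 10000**(2i / d_model))
    The result is added to the token embeddings before the first encoder layer.
    """
    position = torch.arange(seq_len, dtype=torch.float32).unsqueeze(1)  # (seq_len, 1)
    div_term = torch.exp(
        torch.arange(0, d_model, 2, dtype=torch.float32) * (-math.log(10_000.0) / d_model)
    )  # (d_model / 2,)
    pe = torch.zeros(seq_len, d_model)
    pe[:, 0::2] = torch.sin(position * div_term)
    pe[:, 1::2] = torch.cos(position * div_term)
    return pe


# encodings for a sequence of 16 tokens with model dimension 32 (d_model assumed even)
print(sinusoidal_positional_encoding(16, 32).shape)  # torch.Size([16, 32])
```
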
Other Changes
- Bump google-auth from 2.15.0 to 2.16.0 by @dependabot in #110
- Bump requests from 2.28.1 to 2.28.2 by @dependabot in #114
Outlook 💡
- Perform a code review of all previously written code.
- Continue with transformer week 🤖: mainly write the remaining chapters on the classical transformer architecture, attention and multi-head self-attention (MHSA), as well as pre-training of transformers.
- Research additional tricks from the literature to optimize the training behaviour of transformers and structure them for the chapter on training and tuning our models.
- Increase performance of current transformer implementations by applying the tricks from above to match the performance of gradient-boosted trees.
- Add shared embeddings to the TabTransformer implementation (see the sketch below).
- Restructure notes and draft the chapter on model selection for supervised and semi-supervised models.
Full Changelog: 23-02...23-03