diff --git a/notebooks/index.yml b/notebooks/index.yml index 26cd98d..9f3b461 100644 --- a/notebooks/index.yml +++ b/notebooks/index.yml @@ -5,3 +5,4 @@ - notebook: notebooks/decision_trees - notebook: notebooks/robust_and_trustworthy_machine_learning - notebook: notebooks/deep_reinforcement_learning +- notebook: notebooks/transformer diff --git a/notebooks/transformer/images/masked_multi-head_attention.png b/notebooks/transformer/images/masked_multi-head_attention.png new file mode 100644 index 0000000..8488bb9 Binary files /dev/null and b/notebooks/transformer/images/masked_multi-head_attention.png differ diff --git a/notebooks/transformer/images/multi-head_attention.png b/notebooks/transformer/images/multi-head_attention.png new file mode 100644 index 0000000..be5cdb9 Binary files /dev/null and b/notebooks/transformer/images/multi-head_attention.png differ diff --git a/notebooks/transformer/images/positional_encoding.jpg b/notebooks/transformer/images/positional_encoding.jpg new file mode 100644 index 0000000..a8f4849 Binary files /dev/null and b/notebooks/transformer/images/positional_encoding.jpg differ diff --git a/notebooks/transformer/images/transformer_decoder.jpg b/notebooks/transformer/images/transformer_decoder.jpg new file mode 100644 index 0000000..71b7fdd Binary files /dev/null and b/notebooks/transformer/images/transformer_decoder.jpg differ diff --git a/notebooks/transformer/images/transformer_encoder.jpg b/notebooks/transformer/images/transformer_encoder.jpg new file mode 100644 index 0000000..1e8dd4d Binary files /dev/null and b/notebooks/transformer/images/transformer_encoder.jpg differ diff --git a/notebooks/transformer/images/transformer_model.jpg b/notebooks/transformer/images/transformer_model.jpg new file mode 100644 index 0000000..f6116ff Binary files /dev/null and b/notebooks/transformer/images/transformer_model.jpg differ diff --git a/notebooks/transformer/index.ipynb b/notebooks/transformer/index.ipynb new file mode 100644 index 0000000..e528f5f --- /dev/null +++ b/notebooks/transformer/index.ipynb @@ -0,0 +1,320 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": { + "id": "xPtZUk9Hii0M" + }, + "source": [ + "## Table of Contents\n", + "* [Introduction](#intro)\n", + "* [Transformer Architecture](#transformer_architecture)\n", + "    * [Transformer Encoder](#transformer_encoder)\n", + "    * [Transformer Decoder](#transformer_decoder)\n", + "    * [Positional Encoding](#positional_encoding)\n", + "* [Supplementary Material](#supplementary_material)\n", + "    * [Attention Mechanism](#attention_mechanism)\n", + "    * [Multi-Head Attention Mechanism](#multi-head_attention_mechanism)\n", + "    * [Masked Multi-Head Attention Mechanism](#masked_multi-head_attention_mechanism)\n", + "* [References](#references)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "VXdHmheRj1lL" + }, + "source": [ + "\n", + "\n", + "## Introduction\n", + "\n", + "Before transformers, the state-of-the-art approaches in sequence modeling were mostly based on recurrent neural networks. 
The problem with recurrent neural networks is that they process the sequence elements one at a time, which limits parallelization during the training phase; this becomes more problematic for long sequences, where memory limits restrict batching across examples.\n", + "The transformer, on the other hand, is a model architecture based entirely on the attention mechanism; it captures long-term dependencies within a sequence and between the input and output sequences, and it allows notably more parallelization. \n", + "In the following sections we first explain the transformer architecture and then take a closer look at its attention mechanisms.\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "EfjsbCSvkO_Q" + }, + "source": [ + "\n", + "## Transformer Architecture\n", + "\n", + "The main transformer model introduced in the Attention Is All You Need paper consists of two main parts: an encoder and a decoder.\n", + "The encoder and the decoder both consist of N = 6 similar layers." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "h3Tdo5D1sSeV" + }, + "source": [ + "![Transformer Model](images/transformer_model.jpg)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "### Transformer Encoder\n", + "\n", + "The architecture of each encoder layer is shown on the left side of the figure below. Each encoder layer is made of two sub-layers.\n", + "The first sub-layer is a multi-head self-attention block and the second sub-layer is a fully connected feed-forward network. As you can see in the image below, both sub-layers are wrapped by a residual connection followed by layer normalization." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "![Transformer Encoder](images/transformer_encoder.jpg)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "### Transformer Decoder\n", + "\n", + "The architecture of each decoder layer is shown on the right side of the figure below. Each decoder layer consists of three sub-layers: a masked multi-head self-attention block, a multi-head attention block over the encoder output, and a fully connected feed-forward network. As in the encoder, the three sub-layers are wrapped by a residual connection followed by layer normalization. As you can see in the image below, two of the inputs of the second multi-head attention sub-layer (the keys and values) come from the output of the encoder stack. The masked multi-head attention sub-layer is a modification of the self-attention module which ensures that the prediction for position i attends only to the known outputs, i.e. the outputs at positions less than i. For example, if you are using the transformer for a machine translation task, the decoder input is the translated sentence; the decoder should therefore only attend to the tokens that have been translated up to the current step, not to the whole translated sequence." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "![Transformer Decoder](images/transformer_decoder.jpg)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "### Positional Encoding\n", + "\n", + "Since the attention mechanism in the transformer model does not consider any order for the tokens in the input sequence, we need to inject some information about the position of the tokens in the sequence. 
To do so, the transformer makes use of an embedding called the positional embedding, which has the same dimension as the input embeddings so that the two can be added together. There are multiple options for such an embedding that encodes the position in a sequence. The transformer model uses sine and cosine functions for this purpose." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "$$PE_{(pos, 2i)} = \\sin(pos/10000^{2i/d_{model}}) $$\n", + "$$PE_{(pos, 2i+1)} = \\cos(pos/10000^{2i/d_{model}}) $$\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "In the above formulas, pos is the position of the token in the sequence and i indexes the embedding dimensions. For example, for the token at position pos = 2 with a positional embedding dimension of 100, the element at index 5 of that embedding (an odd index, so 2i+1 = 5 and i = 2) is computed as: \n", + "\n", + "$$PE_{(2, 5)} = \\cos(2/10000^{4/100})$$" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Here is a visualization of positional embeddings for better understanding: a 100-dimensional positional embedding for a sequence with a maximum length of 30. As you can see, tokens that are near each other in the sequence have similar positional embeddings, and as two tokens get farther apart, their positional embeddings become more different." + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "metadata": {}, + "outputs": [], + "source": [ + "import numpy as np\n", + "\n", + "d_model = 100\n", + "max_len = 30\n", + "positional_encodings = np.zeros((max_len, d_model))\n", + "for pos in range(max_len):\n", + "    for i in range(d_model):\n", + "        if i % 2 == 0:\n", + "            # even index 2i: PE(pos, 2i) = sin(pos / 10000^(2i / d_model))\n", + "            positional_encodings[pos, i] = np.sin(pos/10000**(i/d_model))\n", + "        else:\n", + "            # odd index 2i+1: PE(pos, 2i+1) = cos(pos / 10000^(2i / d_model))\n", + "            positional_encodings[pos, i] = np.cos(pos/10000**((i-1)/d_model))" + ] + }, + { + "cell_type": "code", + "execution_count": 12, + "metadata": {}, + "outputs": [ + { + "data": { + "image/png": 
"iVBORw0KGgoAAAANSUhEUgAAAlAAAADICAYAAAAnSN9CAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjUuMSwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/YYfK9AAAACXBIWXMAAAsTAAALEwEAmpwYAAAcSElEQVR4nO3df5DddX3v8df7nD27J5td8vsXCZJAMEDbAXRFqL8oYksRRfvDyqgTLR3svaLoaJXL9I69M20HOxZrW4eWChq8XpVqKsgw3stFvEqtSPglPwMkJGTzYxPyc/Nrf5zzvn+cwzQ6CXm/s/vdPWd9Pmac7J595bufPScH33v2/d63ubsAAAAQV5rsAwAAALQbCigAAIAkCigAAIAkCigAAIAkCigAAIAkCigAAICkMRVQZnapma01s+fN7LrxOhQAAEArsxP9PVBmVpb0rKS3SeqX9KCkK939qWP9nXLPdO+YNfuEPh8AAMBEGu7vf8nd5x3tYx1juO75kp539/WSZGbflHSFpGMWUB2zZuvkT358DJ8SAABgYmz4xKc2HutjY/kR3mJJm454v795GwAAwJRWeBO5mV1tZmvMbE3twIGiPx0AAEDhxlJAbZZ0yhHvL2ne9gvc/WZ373P3vvL06WP4dAAAAK1hLAXUg5LOMLNlZtYp6b2S7hyfYwEAALSuE24id/dRM7tG0v+WVJZ0q7s/+Up/57RZ23Xbu/8+dP11I0dtej+qF4bmh7ObDuemALcd7g1ndw91h7ODQ13h7KHhSjg7MlIOZ0dH41lJqo/G621PZFVPHKJm8axnsokzSLJ6QdcuKpuRud+SrKgzF6XdzgtgwoxlCk/ufreku8fpLAAAAG2B30QOAACQRAEFAACQRAEFAACQNKYeqKyK6lpQHg5lX9u1N3zdkem7wtmB2qFwVpI2jsYbw9cNx5vZM43v/YdnhbM7DveEs3uGpoWzkjQ41BnODo3EG9+Hh+PN7PVaIjsab4b2Wu57Cc80s2ea5DMN3InrWqqhPtE5nWyyPsHNUYELF9P4Xlw7fYFofAcmBK9AAQAAJFFAAQAAJFFAAQAAJFFAAQAAJFFAAQAAJE3oFN7awQV68w+uCWU/cN5Pw9d938yfhbOnd8Sn6iRpUWLbyYrKxnB2Y9eWcHbdNNbavCyz1mZ4OP7Pu5acwitsrU0tcYiCBusyU1yplTbJa+emyYoZPfMC5/AKW2tT1JGZ7gN+Aa9AAQAAJFFAAQAAJFFAAQAAJFFAAQAAJE1oE3l1W01n/s1gKPvtK94Svu59F706nL122b3hrCRdPG1bODu3PD2c7S2NhLOzS5vD2Tnl/eHsrI4D4awkndQxN5zdWpkRzu7oSKyfKcfXzxwsxxvODyea0yVp1OLTBbVSvPs203CeWieTaU5PNIZ74mtr/IV4NLd+ppgzpMLJdTIFbZ9pjeZ0Gs7xK4BXoAAAAJIooAAAAJIooAAAAJIooAAAAJLG1ERuZhskDarRojrq7n3jcSgAAIBWNh5TeL/l7i9Fgj40rPq62LqTpbfFp8m27jk1nP3s294RzkrSjjN/GM5e3rM2nF2SmDxb0hF/obBie8PZqsUnARv54XC2qzQaznZYvZDsnlJ8Ys+So0tDpcxKmfjEXmYzSt0S03JW0DqZ+MPx8kHi0Xr8MbECV67EFTh6lrnfEndFS0zsZTHhhxbBj/AAAACSxlpAuaT/Y2YPmdnV43EgAACAVjfWH+G90d03m9l8SfeY2TPu/qMjA83C6mpJqqp7jJ8OAABg8o3pFSh339z8c7ukf5N0/lEyN7t7n7v3Vaw6lk8HAADQEk64gDKz6WbW+/Lbkn5b0hPjdTAAAIBWNZYf4S2Q9G/WmATqkPS/3P37r/QXRud1a9t7Xxu6+MJ/WhM+yKI7w1FtVXxiT5K+ZPGdfFoRj16RmNhblJjYW5DYFScdTGQlaXsyj6zh+KCjpMR0X2JczlPfVyXHrRKTdSpN5Yk95abJMuNy7Taxl8VOPrSIEy6g3H29pHPG8SwAAABtgV9jAAAAkEQBBQAAkEQBBQAAkDQeq1zC5szbqw/+6d2h7O07Lg1ft/dbD4SzC+/NNFlLm3oXhLNfm/b6cHbmsngD9yXd/eHs3PL0cHZeuSuclaQRj595pLIrnD3snYkzxBunRz3+/UEmK0mj9XjeM029iabXkcx1y4mvL3GIzHkbfyHRAZxqss5cdoo3nBd1hHZsOAcKxCtQAAAASRRQAAAASRRQAAAASRRQAAAASRRQAAAASRM6hTe/PKSPzlwfyq7+0ED4urbu18PZ2oOPh7OStPAnveHshjnxib3v9p4Xzp68aHc422fxXSDdpfj0myTNK9fC2cO+P5w9kJjYG6pXwtlDtXh2qJZ7KmSm8GqJ8aVa4rr1jsR6lsQZ6uXMdZPfgxU1nVUvZlouNbGXGVPLSl26mLUvbYm1LygQr0ABAAAkUUABAAAkUUABAAAkUUABAAAkTWgT+XOHZ+rta98Ryq4++3+Gr/ubv/+pcHb5+tnhrCSVfx5repekua86O5z92aKl4exZPfHrLpz5YDh7uuUe/mkWbzqfUz4Uzg76YDzbEV/FM1iphrOH6rmG+uF6fKXMSCKbaiLPNIYnmqw9sS7HSrnO20wze26VS2F7Xwq5bFpRDc6Z+y2zOih5X7TE6hcazpHEK1AAAABJFFAAAABJFFAAAABJFFAAAABJxy2gzOxWM9tuZk8ccdtsM7vHzJ5r/jmr2GMCAAC0jsgY1lcl/aOk24647TpJ97r7DWZ2XfP9zxzvQj5Q0eHPnxw62OBN8TGHP/idfw9n7//pBeGsJE377s/C2VmPxVeu7Fs6J5z94aIzwtlzul8MZ+eVtoezkjSr3B3O9lh8jcqc0lA4O9ixL5zdV49P7O2vdYWzknSoEv/6DmdWyozGJyNHy4l1MomsZyb2Etdt/IX48zoRzY19FTawlxvNskLH9oJaZJos8/C1xMQeoMArUO7+I0m/vKzsCkmrmm+vkvSu8T0WAABA6zrRHqgF7r61+fY2SfEtugAAAG1uzE3k7u56hReCzexqM1tjZmtGhg+M9dMBAABMuhMtoAbMbJEkNf88ZjONu9/s7n3u3lfpnH6Cnw4AAKB1nGgBdaeklc23V0q6Y3yOAwAA0PqOO/JjZt+QdJGkuWbWL+mzkm6QdLuZXSVpo6T3RD6Z7T2orrtju9re+fDVoZwkrTl/1fFDTWddcmE4K0lnPRibGpSk+gubwtlZa2eGsxtOmxfO/sfc5eHsGZUd4awknVSqh7PdpfhuuRk+Es7OLMV37M0sx39kPKMj9+rovtH4hN+BjuFw9lBHfGJvuBbfWVcuxx+7zN68zD4+SfLM7rzEOZS5bmqKq8BJuXabJitob15bYm8eFCig3P3KY3zoreN8FgAAgLbAbyIHAABIooACAABIooACAABIiu+NGAc+o1tDb3pdKDv7lniX3sBr46tA/vAND4SzkvQffeeHs9Pu2BLO9q7dG7/umbPD2Z+demo4e2HP8+GsJC3piDedz7B4k3V3Yu3LzF
K8IXtOeX84u6vcE85K0t6ORBN5Ld5Qf7Ajni1s7Ustni0l92qkmsgTWU/tAolHi2wA9sR9x9qX/8TaF7QKXoECAABIooACAABIooACAABIooACAABIooACAABImtApvNKCEXV/anMoW3vrQPi6f7outElGknTb8n8NZyXpNy+Mr35Zfn98Wk4vxif2ZrwwM5zduGJOOPvY/FeFs5J0dmf8Memx+OqQLov/M+wtxde+9JYOh7MzywfDWSk3tTetHD9zNZGtlOMTex3lYta+ZCb2JCnxzyI1pSYraAovNcaVnJSbyhNi2fG3qbz6JfulTeV/F1MMr0ABAAAkUUABAAAkUUABAAAkUUABAAAkTWgT+fLqHn3v1XeFsu847ffD19383UXh7IxPV8NZSVr2uk3h7OiKU8JZ+8lj4WzvhkPh7M7N3eHs48tODmcl6S09veHsknK8gbu7FG+G7rZ4M3SvJRrOy/H7uJGPf33Ty/FVQ9Vy/N9nV3k0nB1ONJEPJZr6S5nVLJLqmfUliW/vvJ5pOI9HU83p2ebf1KUT91tRDdktsgIHaBW8AgUAAJBEAQUAAJBEAQUAAJBEAQUAAJB03ALKzG41s+1m9sQRt/2FmW02s0eb/7us2GMCAAC0jsi4zVcl/aOk237p9i+4++czn2x7raov7l4eyq774MLwdZd/uT+c/cZ/WRDOStInTr0nnL3+nD8OZ+c/mFjDsWlnODu9f3o4++zOeeGsJD07N/6YrKg8G85mpvC6rBLO9pbik3WZtS9SbmqvuzwczubWvsSn8A6X4vdbR7kWzo5mV7mUEqNc9fjkoCWmAb3eCmtfkhcvcKNMIWfIytx3iSnDzEBi+uErCtOObeO4//Vz9x9J2jUBZwEAAGgLY+mBusbMft78Ed+scTsRAABAizvRAuomSadLOlfSVkl/e6ygmV1tZmvMbM3+XfEfZQAAALSqEyqg3H3A3WvuXpf0L5LOf4Xsze7e5+59PbPjvS4AAACt6oQKKDM7cnfKuyU9cawsAADAVHPcKTwz+4akiyTNNbN+SZ+VdJGZnavGDMAGSR+OfLKdO07S1/7p0tDBPvZfvxfKSdKdfz4nnP3LR3O/ceHJN30lnN19bnwqatHC+eFsbfuOcLZ3c3y/3aaBnnBWktaeEp/Cu2Da+nB2rsenviqJXXjVxB6zkyy+r07KTe31JPbmTUtM7HUmpvAqicm6cmKirVyuh7OSVEtM7dXbbrKuqPG35KWZzAImxHELKHe/8ig331LAWQAAANoCv4kcAAAgiQIKAAAgiQIKAAAgKbLKZfw+2Y6DWvDPa0LZj1y/KXzdu855Yzh70v+NrzqRpKE3xldrvO7X4o3Tu5afGs6WN8VX1XRvjq8Y6dyeayJ/bn+88X3bjN5w9rSOwXA210Qez3aX4g3ZkjS9FG867y4Vs8qls5Rovs9kEw3nI6Px+1iSSonG8FSvd6bZO9OcntoFEo82Lp7MRy+buC8s8/VltMh9ARSJV6AAAACSKKAAAACSKKAAAACSKKAAAACSKKAAAACSJnQKz7o6VTp9aSj7D7sHwtd94fdmhbOnr9oSzkrSHX+2OJxdufDfw9n/vuLV4ez8H8cfpvK23eFs90B8Uk6SNu6O388b5s8LZ3+jM37mnkTJ32WVcLbbclN43YkpvMzEXrUUn8LLTOx1lOIrVyqJbCmRlXLTcpa5dj0xDVjUepbs5FlR52iF7TNFTtVl7rfElGFq4LJVpgZZ8TOpeAUKAAAgiQIKAAAgiQIKAAAgiQIKAAAgaUKbyA8vLOuZz8TWh6xbfVn4um+77KFw9rnPxht6JemLz18czv7gnK+Fs3vPjHf0LZwzO5yt70o0kW9fEs5K0pZd8TU4Lw7NCWf3TIvX8fM90eCc6LCsWq6bdrrF17NkGs4za1+6EutnOhPZcqJ5u5xYiyIlV7lkmnoz61nqBTaGtwIai4EJwStQAAAASRRQAAAASRRQAAAASRRQAAAAScctoMzsFDO7z8yeMrMnzeza5u2zzeweM3uu+Wf811QDAAC0scgU3qikT7r7w2bWK+khM7tH0gcl3evuN5jZdZKuk/SZV7rQit4B3fVbfx862Icven8oJ0k3fOj/hbPvWfDOcFaS9v80vpKk57xqODvv1S+Fs/Ul8TP4wzvC2e6B3ERieWf861t/YG44u+uk+HWHPD5N1l3qDGcrlnsxNrP6JbPKpSuxyiWT7SzVwtkOy0zh5Va5lBI7MDITe/X4l5daJ+OZnR3pVS4FrWdJyHx9ltl1ksXkINrQcf9fw923uvvDzbcHJT0tabGkKyStasZWSXpXQWcEAABoKalvu81sqaTzJD0gaYG7b21+aJukBeN7NAAAgNYULqDMrEfSdyR93N33Hfkxd3cd44VVM7vazNaY2Zpdu3Iv9wMAALSiUAFlZhU1iqevu/vq5s0DZrao+fFFkrYf7e+6+83u3ufufbNnM/QHAADaX2QKzyTdIulpd7/xiA/dKWll8+2Vku4Y/+MBAAC0nsgU3hskfUDS42b2aPO26yXdIOl2M7tK0kZJ7znehUZU0kAtNhk1umFTKCdJBz0+grP3zcvCWUla9JP4BNXTVx0MZy9f8kQ4+/2lbwlnpz8UH1Gp7DgQzkpS165p4Wz//pnh7LbReHaosi2c7VZ8Cq9qubWQVYv/u6hafFquqF14XeV4tlKOP5/KmSk15ab2MtNymWxm+i01KNeOe/Myivz62myyLvNYJ58iaCPH/X8Nd79fx37qvHV8jwMAAND6aEoCAABIooACAABIooACAABIynXOjtH63fP1R6s/Fsoufnu82fSajTPD2S2X5H4X1VnXvxDOfmffa8LZ3+39eTj7rWUXh7O91fhaFO3cE89KmvbSnHB2x76ecHbLSHyN4oHqlnA2s5yxlPxeoppoIs00kVetmCbyzkQ2s8qlI7nKpZxYz5JZ5ZLp1M01hhfTnN7IZ7KZTuTJXxHTMjL321SfAmBdzrjjFSgAAIAkCigAAIAkCigAAIAkCigAAIAkCigAAICkCZ3Cqw4Ma8WNL4aym740I3zdnd9bEc5e9b4fhLOS9OOd8am2bz7/2nD2z17/eDh7YFl8tUZpzuxwtr5zVzgrSdVd8YmrnXu7wtn+4fi83N56OZxd5PHzVix+3UY+PtJStfgEXLWUmNhLZCuJybrOxNqXzGoWqbj1LJmJvXq9wMm6jKIm61JnSGSZzAJ+Aa9AAQAAJFFAAQAAJFFAAQAAJFFAAQAAJE1oE7nqdfnBQ6HoHefdHr7sRz69Mpz96EcfCWcl6Sfz3hnO1h6LN753XVAJZ+cujTd71+bHG7J9y9ZwVpKqO+NNy+U9neHs5kMzw9k9M+LN6aOKN9+Xk99LVCyer1r8HJm1L12JJvJMNrXKJZGVck3npYLWs6Q2qCTOkFr7kj1JCzR7Z74+K3ItSgvcF4DEK1AAAABpFFAAAABJFFAAAABJFFAAAABJxy2gzOwUM7vPzJ4ysyfN7Nrm7X9hZpvN7NHm/y4r/rgAAACTLzKFNyrpk+7+sJn1SnrIz
O5pfuwL7v756Ccbml/VCx8+K5TtLd0Vvaxqz64LZ6uWGzw89JpTw9l5j8VXYPSP7g9n37Qw/vWtWdwXzlYfyY2oVHYdDGc798RX4Gw9eFI4u7PWE84e9p3hbJfFpyIlqaL46peqDSeyiVUuqYm9+L/NjlJiejG5yqWcWc9S0NqXTDYz3pfd+lLkoNqkK/Jra7PJuuzjnB7mxKQ5bjXh7lslbW2+PWhmT0taXPTBAAAAWlWqB8rMlko6T9IDzZuuMbOfm9mtZhb/BUQAAABtLFxAmVmPpO9I+ri775N0k6TTJZ2rxitUf3uMv3e1ma0xszW1gwfGfmIAAIBJFiqgzKyiRvH0dXdfLUnuPuDuNXevS/oXSecf7e+6+83u3ufufeXu6eN1bgAAgEkTmcIzSbdIetrdbzzi9kVHxN4t6YnxPx4AAEDriYykvUHSByQ9bmaPNm+7XtKVZnauGjMRGyR9+HgXWjJnpz73/q+GDvauJz8QyknSSWfFJ6K+vDc+mSVJ2y6M73Q77bYt4ez9h04JZy+e8VQ4+8PFrw9np3XkJhJLu+OTg117ZoezL+2PvzK5YzQ+sXewvj2cnZH8jWgVi/+b60pM4VQTO+sy2Upisq4rke1ITuFlJutKiWvnJusKGnNqlV14qXMkLjzVp8My99uUHqEU+waDIlN49+vod+fd438cAACA1sdvIgcAAEiigAIAAEiigAIAAEjKdRGPUW9pVBdP2xXK3vCV+eHrbnx3vA688aFLwllJWnjBtnC29pf94ezqHa8JZ7/4qjvC2f1LwlEt6Mn9WgkfHAxnO/fGOwt3HoivfRkYmRHOHkw0N9Y82Qyd6LLsTOz4KGyVi8VXuWQazjssnpVyq19KiUbWUimzyiV+3Uw23Us7xfuQgamOV6AAAACSKKAAAACSKKAAAACSKKAAAACSKKAAAACSJnQK75kDc3Xhg38cyi5e/XD4ust/FJ/iWnvTr4WzknTtn68OZ2+pnRbOrll3ajg7f1l3ODu8OD6ZZbNmhrOSVOvfGs5W98Sns2qDlXB223B8lctgPX7denKGKrPKpZKZ2FN8Sq2SmICrlOJTeJnJuuwql0w+M7GXWeVS2NqXzMhe+tqpC+fOUcRlf4XXe+BXB69AAQAAJFFAAQAAJFFAAQAAJFFAAQAAJE1oE3llm2nR38Qae8vz54ave+OSfw1nV94Xb0KWpMv/ekc4u2rhheHstGfije9KbJ9ZfHJsVY4k1ebm7gt/YWM427kn3szesS9+X7w01BPO7vOucHbEh8NZKdlEbvHvU7oSDdzVUvzM1cwql0wTueWayMuJfKoxPHGG1CqX1HVzndM+hZu9PXlfmE/d+wJTF69AAQAAJFFAAQAAJFFAAQAAJFFAAQAAJFFAAQAAJJn7xI0emNkOSS+Pcs2V9NKEfXKMJx679sbj1954/NoXj137OdXd5x3tAxNaQP3CJzZb4+59k/LJMSY8du2Nx6+98fi1Lx67qYUf4QEAACRRQAEAACRNZgF18yR+bowNj1174/Frbzx+7YvHbgqZtB4oAACAdsWP8AAAAJImvIAys0vNbK2ZPW9m103050eOmZ1iZveZ2VNm9qSZXdu8fbaZ3WNmzzX/nDXZZ8XRmVnZzB4xs7ua7y8zsweaz8FvmVnnZJ8RR2dmM83s22b2jJk9bWYX8txrH2b2ieZ/N58ws2+YWZXn39QxoQWUmZUlfUnS70o6W9KVZnb2RJ4BaaOSPunuZ0u6QNJHmo/ZdZLudfczJN3bfB+t6VpJTx/x/uckfcHdl0vaLemqSTkVIr4o6fvufqakc9R4HHnutQEzWyzpY5L63P3XJZUlvVc8/6aMiX4F6nxJz7v7encflvRNSVdM8BmQ4O5b3f3h5tuDavwHfLEaj9uqZmyVpHdNygHxisxsiaS3S/py832TdLGkbzcjPHYtysxmSHqzpFskyd2H3X2PeO61kw5J08ysQ1K3pK3i+TdlTHQBtVjSpiPe72/ehjZgZkslnSfpAUkL3H1r80PbJC2YrHPhFf2dpE9LqjffnyNpj7uPNt/nOdi6lknaIekrzR/BftnMpovnXltw982SPi/pRTUKp72SHhLPvymDJnKEmFmPpO9I+ri77zvyY94Y5WScs8WY2eWStrv7Q5N9FpyQDkmvkXSTu58n6YB+6cd1PPdaV7M37Qo1CuGTJU2XdOmkHgrjaqILqM2STjni/SXN29DCzKyiRvH0dXdf3bx5wMwWNT++SNL2yTofjukNkt5pZhvU+HH5xWr01Mxs/khB4jnYyvol9bv7A833v61GQcVzrz1cIukFd9/h7iOSVqvxnOT5N0VMdAH1oKQzmlMInWo01N05wWdAQrNn5hZJT7v7jUd86E5JK5tvr5R0x0SfDa/M3f+buy9x96VqPNd+4O7vk3SfpD9oxnjsWpS7b5O0ycxWNG96q6SnxHOvXbwo6QIz627+d/Tlx4/n3xQx4b9I08wuU6MvoyzpVnf/qwk9AFLM7I2Sfizpcf1nH831avRB3S7pVZI2SnqPu++alEPiuMzsIkmfcvfLzew0NV6Rmi3pEUnvd/ehSTwejsHMzlVjAKBT0npJH1LjG1+ee23AzP6HpD9SY5r5EUl/okbPE8+/KYDfRA4AAJBEEzkAAEASBRQAAEASBRQAAEASBRQAAEASBRQAAEASBRQAAEASBRQAAEASBRQAAEDS/wdpgVtUOMXeIwAAAABJRU5ErkJggg==\n", + "text/plain": [ + "
" + ] + }, + "metadata": { + "needs_background": "light" + }, + "output_type": "display_data" + } + ], + "source": [ + "import matplotlib.pyplot as plt\n", + "\n", + "plt.figure(figsize=(10, 5))\n", + "plt.imshow(positional_encodings, interpolation='nearest')\n", + "plt.show()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "## Supplementary Material" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "In the following sections we get into more details in the attention mechanisms of the transformer model." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "### Attention Mechanism\n", + "\n", + "In this section we explain what attention mechanism is and how it works.\n", + "Attention mechanism first was introduced by [Dzmitry Bahdanau](https://arxiv.org/pdf/1409.0473.pdf) to improve the performance of encoder-decoder architectures in neural machine translation. It mentions that using a fixed-length vector is a bottleneck in the performance of encoder-decoder architectures and suggests that we use an architecture that allows the model to predict the target word by automatically attending to the parts of the input sentence that are relevant to the target word, regardless of how far the relevant parts are in that sentence in contrast to RNNs which as we go further in the sentence, we start forgetting about the past information in the sentence which means that the ability of RNNs in encoding long-term dependencies is limited which is fixed in attention mechanism." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Here is an example of the Attention Mechanism. \n", + "Assume that we have a sequence of N tokens and each token in the sequence have a $d_k$-dimensional initial embedding, but these embeddings are computed independently, hence the context of the sentence is not encoded in those embeddings. Attention mechanism computes new embeddings for each token, so that, in addition to the information of that token, the information of other tokens in that sentence be considered in the embedding of that token. To do so, attention mechanism, computes the embedding of that token by using a weighted sum of the embeddings of tokens in that sentence. Actually not exactly the weighted sum of initial embeddings, but the weighted sum of a transformation of the initial embeddings." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "if $h_j$ is the $j^{th}$ token of the input sequence and $c_i$ is the $i^{th}$ new contextualized embedding, we will have:\n", + "$$\n", + "\\begin{aligned}\n", + "q_j = fc^q(h_j) \\\\\n", + "k_j = fc^k(h_j) \\\\\n", + "v_j = fc^v(h_j) \\\\\n", + "K = [k_1, ..., k_N] \\\\\n", + "\\alpha_{i} = softmax(\\frac{}{\\sqrt{d_k}}) \\\\\n", + "c_i = \\sum_{j = 1}^{N} \\alpha_{ij} \\times h_j \\\\\n", + "\\end{aligned}\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "What we explained above was the process of self attention because the initial embeddings of query, key and values were the same, if the query input embeddings be different than key and value input embeddings, we call it attention mechanism." 
+ ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "### Multi-Head Attention Mechanism" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "lHgHJSUmqvno" + }, + "source": [ + "The Multi-Head Attention Mechanism is the same as the attention mechanism, except that instead of having a single attention mechanism, we split the query, key and value embeddings into several parts and pass each part through a separate attention mechanism, called an attention head. At the end, the outputs of the attention heads are concatenated into one embedding, which is the output of the Multi-Head Attention Mechanism.\n", + "This allows the attention mechanism to encode different kinds of relations between the tokens of the input sequence." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "![Multi-Head Attention Mechanism](images/multi-head_attention.png)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "### Masked Multi-Head Attention Mechanism" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Masked Multi-Head Attention works almost the same as Multi-Head Attention, except that it masks out the padding and the future words in the target sequence." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "![Masked Multi-Head Attention Mechanism](images/masked_multi-head_attention.png)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "tHon2lfejN71" + }, + "source": [ + "\n", + "## References\n", + "* [Attention Is All You Need](https://arxiv.org/abs/1706.03762)\n", + "* [What is Transformer](https://medium.com/inside-machine-learning/what-is-a-transformer-d07dd1fbec04)\n", + "* [Neural Machine Translation by Jointly Learning to Align and Translate](https://arxiv.org/abs/1409.0473)\n", + "* [The Attention Mechanism From Scratch](https://machinelearningmastery.com/the-attention-mechanism-from-scratch/#:~:text=The%20idea%20behind%20the%20attention,being%20attributed%20the%20highest%20weights.)\n", + "* [Transformers Explained Visually (Part 3): Multi-head Attention, deep dive](https://towardsdatascience.com/transformers-explained-visually-part-3-multi-head-attention-deep-dive-1c1ff1024853)" + ] + } + ], + "metadata": { + "colab": { + "name": "transformer.ipynb", + "provenance": [] + }, + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.8.10" + } + }, + "nbformat": 4, + "nbformat_minor": 4 +} diff --git a/notebooks/transformer/metadata.yml b/notebooks/transformer/metadata.yml new file mode 100644 index 0000000..1a13c1b --- /dev/null +++ b/notebooks/transformer/metadata.yml @@ -0,0 +1,25 @@ +title: Transformer + +header: + title: Transformer + description: Transformer Tutorial + +authors: + label: + position: top + content: + - name: Zahra TehraniNasab + role: Author + contact: + - link: https://github.com/realmarv + icon: fab fa-github + - link: https://www.linkedin.com/in/zahra-tehraninasab + icon: fab fa-linkedin + - link: mailto://zahratehraninasab@gmail.com + icon: fas fa-envelope + + +comments: + # enable comments for your post + label: false + kind: comments \ No newline at end of file