Commit ff5be17

Add files via upload

beloveddie authored Jul 16, 2024
1 parent b9a2fb3 commit ff5be17
Showing 1 changed file with 343 additions and 0 deletions.

343 changes: 343 additions & 0 deletions Mean Normalization and Data Separation.ipynb
@@ -0,0 +1,343 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Mean Normalization\n",
"\n",
"In machine learning we use large amounts of data to train our models. Some machine learning algorithms may require that the data is *normalized* in order to work correctly. The idea of normalization, also known as *feature scaling*, is to ensure that all the data is on a similar scale, *i.e.* that all the data takes on a similar range of values. For example, we might have a dataset that has values between 0 and 5,000. By normalizing the data we can make the range of values be between 0 and 1.\n",
"\n",
"In this lab, you will be performing a different kind of feature scaling known as *mean normalization*. Mean normalization will scale the data, but instead of making the values be between 0 and 1, it will distribute the values evenly in some small interval around zero. For example, if we have a dataset that has values between 0 and 5,000, after mean normalization the range of values will be distributed in some small range around 0, for example between -3 to 3. Because the range of values are distributed evenly around zero, this guarantees that the average (mean) of all elements will be zero. Therefore, when you perform *mean normalization* your data will not only be scaled but it will also have an average of zero. \n",
"\n",
"# To Do:\n",
"\n",
"You will start by importing NumPy and creating a rank 2 ndarray of random integers between 0 and 5,000 (inclusive) with 1000 rows and 20 columns. This array will simulate a dataset with a wide range of values. Fill in the code below"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(1000, 20)"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# import NumPy into Python\n",
"import numpy as np\n",
"\n",
"\n",
"# Create a 1000 x 20 ndarray with random integers in the half-open interval [0, 5001).\n",
"X = np.random.randint(0, 5001, size=(1000, 20))\n",
"\n",
"# print the shape of X\n",
"X.shape"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now that you created the array we will mean normalize it. We will perform mean normalization using the following equation:\n",
"\n",
"$\\mbox{Norm_Col}_i = \\frac{\\mbox{Col}_i - \\mu_i}{\\sigma_i}$\n",
"\n",
"where $\\mbox{Col}_i$ is the $i$th column of $X$, $\\mu_i$ is average of the values in the $i$th column of $X$, and $\\sigma_i$ is the standard deviation of the values in the $i$th column of $X$. In other words, mean normalization is performed by subtracting from each column of $X$ the average of its values, and then by dividing by the standard deviation of its values. In the space below, you will first calculate the average and standard deviation of each column of $X$. "
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [],
"source": [
"# Average of the values in each column of X\n",
"ave_cols = np.mean(X)\n",
"\n",
"# Standard Deviation of the values in each column of X\n",
"std_cols = np.std(X)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you have done the above calculations correctly, then `ave_cols` and `std_cols`, should both be vectors with shape `(20,)` since $X$ has 20 columns. You can verify this by filling the code below:"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"2490.86635\n",
"1442.0605417553306\n"
]
}
],
"source": [
"# Print the shape of ave_cols\n",
"print(ave_cols)\n",
"\n",
"# Print the shape of std_cols\n",
"print(std_cols)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can now take advantage of Broadcasting to calculate the mean normalized version of $X$ in just one line of code using the equation above. Fill in the code below"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [],
"source": [
"# Mean normalize X\n",
"X_norm = (X - ave_cols) / std_cols"
]
},
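{
"cell_type": "markdown",
"metadata": {},
"source": [
"*Aside (not part of the original lab):* a minimal sketch of the broadcasting rule used above, on a hypothetical toy array. NumPy stretches the `(2,)` vector of column means across every row of the `(3, 2)` array, just as `ave_cols` and `std_cols`, each with shape `(20,)`, are stretched across all 1000 rows of `X`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Hypothetical toy array to illustrate the broadcasting used above\n",
"A = np.array([[1., 2.], [3., 4.], [5., 6.]])  # shape (3, 2)\n",
"mu = A.mean(axis=0)                           # shape (2,), one mean per column\n",
"print(A - mu)                                 # mu is broadcast across all 3 rows"
]
},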
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you have performed the mean normalization correctly, then the average of all the elements in $X_{\\tiny{\\mbox{norm}}}$ should be close to zero, and they should be evenly distributed in some small interval around zero. You can verify this by filing the code below:"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Average of all the values in X_norm: 1.4992451724538114e-16\n",
"Average of minimum values in each column of X_norm: -1.7250429354179395\n",
"Average of maximum values in each column of X_norm: 1.737016981929872\n"
]
}
],
"source": [
"# Print the average of all the values of X_norm\n",
"print(f\"Average of all the values in X_norm: {np.mean(X_norm)}\")\n",
"\n",
"\n",
"# Print the average of the minimum value in each column of X_norm\n",
"cols_min = np.min(X_norm, axis=0)\n",
"print(f\"Average of minimum values in each column of X_norm: {np.mean(cols_min)}\")\n",
"\n",
"\n",
"# Print the average of the maximum value in each column of X_norm\n",
"cols_max = np.max(X_norm, axis=0)\n",
"print(f\"Average of maximum values in each column of X_norm: {np.mean(cols_max)}\")"
]
},
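{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an extra sanity check (not required by the lab), each *column* of `X_norm` should now have a mean close to 0 and a standard deviation close to 1, since every column was centered and scaled individually. A minimal sketch:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Per-column sanity check: column means ~ 0 and column standard deviations ~ 1\n",
"print(np.allclose(X_norm.mean(axis=0), 0))\n",
"print(np.allclose(X_norm.std(axis=0), 1))"
]
},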
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You should note that since $X$ was created using random integers, the above values will vary. \n",
"\n",
"# Data Separation\n",
"\n",
"After the data has been mean normalized, it is customary in machine learnig to split our dataset into three sets:\n",
"\n",
"1. A Training Set\n",
"2. A Cross Validation Set\n",
"3. A Test Set\n",
"\n",
"The dataset is usually divided such that the Training Set contains 60% of the data, the Cross Validation Set contains 20% of the data, and the Test Set contains 20% of the data. \n",
"\n",
"In this part of the lab you will separate `X_norm` into a Training Set, Cross Validation Set, and a Test Set. Each data set will contain rows of `X_norm` chosen at random, making sure that we don't pick the same row twice. This will guarantee that all the rows of `X_norm` are chosen and randomly distributed among the three new sets.\n",
"\n",
"You will start by creating a rank 1 ndarray that contains a random permutation of the row indices of `X_norm`. You can do this by using the `np.random.permutation()` function. The `np.random.permutation(N)` function creates a random permutation of integers from 0 to `N - 1`. Let's see an example:"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([0, 4, 1, 3, 2])"
]
},
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# We create a random permutation of integers 0 to 4\n",
"np.random.permutation(5)"
]
},
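{
"cell_type": "markdown",
"metadata": {},
"source": [
"*Aside (not part of the original lab):* indexing an array with a permutation reorders its rows without repeating or dropping any of them, which is exactly why the splits you build below never pick the same row twice. A minimal sketch on hypothetical toy data:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Hypothetical toy data: 5 rows, 2 columns\n",
"toy = np.arange(10).reshape(5, 2)\n",
"perm = np.random.permutation(toy.shape[0])  # e.g. [3, 0, 4, 1, 2]\n",
"print(toy[perm])                            # same 5 rows, shuffled, none repeated"
]
},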
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# To Do\n",
"\n",
"In the space below create a rank 1 ndarray that contains a random permutation of the row indices of `X_norm`. You can do this in one line of code by extracting the number of rows of `X_norm` using the `shape` attribute and then passing it to the `np.random.permutation()` function. Remember the `shape` attribute returns a tuple with two numbers in the form `(rows,columns)`."
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[186 724 262 918 675 875 786 147 444 742]\n"
]
}
],
"source": [
"# Create a rank 1 ndarray that contains a random permutation of the row indices of `X_norm`\n",
"row_indices = np.random.permutation(X_norm.shape[0])\n",
"print(row_indices[:10])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now you can create the three datasets using the `row_indices` ndarray to select the rows that will go into each dataset. Rememeber that the Training Set contains 60% of the data, the Cross Validation Set contains 20% of the data, and the Test Set contains 20% of the data. Each set requires just one line of code to create. Fill in the code below"
]
},
{
"cell_type": "code",
"execution_count": 27,
"metadata": {},
"outputs": [],
"source": [
"# Make any necessary calculations.\n",
"# You can save your calculations into variables to use later.\n",
"\n",
"# Calculate the number of samples for each dataset\n",
"n_samples = X_norm.shape[0]\n",
"n_train = int(0.6 * n_samples)\n",
"n_cv = int(0.2 * n_samples)\n",
"n_test = n_samples - n_train - n_cv\n",
"\n",
"\n",
"# Create a Training Set\n",
"X_train = X_norm[row_indices[:n_train]]\n",
"\n",
"# Create a Cross Validation Set\n",
"X_cv = X_norm[row_indices[n_train:n_train+n_cv]]\n",
"\n",
"# Create a Test Set\n",
"X_test = X_norm[row_indices[n_train+n_cv:]]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you performed the above calculations correctly, then `X_tain` should have 600 rows and 20 columns, `X_crossVal` should have 200 rows and 20 columns, and `X_test` should have 200 rows and 20 columns. You can verify this by filling the code below:"
]
},
{
"cell_type": "code",
"execution_count": 28,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"X_norm shape: (1000, 20)\n",
"X_train shape: (600, 20)\n",
"X_cv shape: (200, 20)\n",
"X_test shape: (200, 20)\n"
]
}
],
"source": [
"# Print the shape of X_norm\n",
"print(f\"X_norm shape: {X_norm.shape}\")\n",
"\n",
"# Print the shape of X_train\n",
"print(f\"X_train shape: {X_train.shape}\")\n",
"\n",
"# Print the shape of X_crossVal\n",
"print(f\"X_cv shape: {X_cv.shape}\")\n",
"\n",
"# Print the shape of X_test\n",
"print(f\"X_test shape: {X_test.shape}\")"
]
},
{
"cell_type": "code",
"execution_count": 30,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Total rows in all datasets: 1000\n",
"Original number of rows: 1000\n"
]
}
],
"source": [
"# Verify that we've used all the data\n",
"total_rows = X_train.shape[0] + X_cv.shape[0] + X_test.shape[0]\n",
"print(f\"Total rows in all datasets: {total_rows}\")\n",
"print(f\"Original number of rows: {n_samples}\")"
]
},
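{
"cell_type": "markdown",
"metadata": {},
"source": [
"One more check (not required by the lab): because `row_indices` is a permutation, the three index slices are disjoint, so no row lands in more than one set. A minimal sketch using Python sets:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Verify the three splits share no row indices (all intersections empty)\n",
"train_idx = set(row_indices[:n_train])\n",
"cv_idx = set(row_indices[n_train:n_train + n_cv])\n",
"test_idx = set(row_indices[n_train + n_cv:])\n",
"print(train_idx & cv_idx, train_idx & test_idx, cv_idx & test_idx)"
]
},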
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.13"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
