From 25ff8ad01238ef30760f5c5322b554475f1e67e2 Mon Sep 17 00:00:00 2001 From: ArturoAmorQ Date: Tue, 7 Nov 2023 14:48:40 +0100 Subject: [PATCH 1/3] Generate notebooks --- .../linear_models/linear_models_quiz_m4_03.md | 12 +++++ notebooks/parameter_tuning_ex_02.ipynb | 8 +-- notebooks/parameter_tuning_ex_03.ipynb | 2 +- notebooks/parameter_tuning_grid_search.ipynb | 38 +++++++------- notebooks/parameter_tuning_nested.ipynb | 12 ++--- .../parameter_tuning_parallel_plot.ipynb | 4 +- notebooks/parameter_tuning_sol_02.ipynb | 8 +-- notebooks/parameter_tuning_sol_03.ipynb | 10 ++-- notebooks/trees_dataset.ipynb | 16 +++--- notebooks/trees_ex_02.ipynb | 10 ++-- notebooks/trees_hyperparameters.ipynb | 49 +++++++++---------- notebooks/trees_regression.ipynb | 21 ++++---- notebooks/trees_sol_02.ipynb | 16 +++--- 13 files changed, 108 insertions(+), 98 deletions(-) diff --git a/jupyter-book/linear_models/linear_models_quiz_m4_03.md b/jupyter-book/linear_models/linear_models_quiz_m4_03.md index 672f04e58..1a6cb1b1e 100644 --- a/jupyter-book/linear_models/linear_models_quiz_m4_03.md +++ b/jupyter-book/linear_models/linear_models_quiz_m4_03.md @@ -79,6 +79,18 @@ _Select all answers that apply_ +++ +```{admonition} Question +By default, a [`LogisticRegression`](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) in scikit-learn applies: + +- a) no penalty +- b) a penalty that shrinks the magnitude of the weights towards zero (also called "l2 penalty") +- c) a penalty that ensures all weights are equal + +_Select a single answer_ +``` + ++++ + ```{admonition} Question The parameter `C` in a logistic regression is: diff --git a/notebooks/parameter_tuning_ex_02.ipynb b/notebooks/parameter_tuning_ex_02.ipynb index 2aa096d5c..026e37fd8 100644 --- a/notebooks/parameter_tuning_ex_02.ipynb +++ b/notebooks/parameter_tuning_ex_02.ipynb @@ -76,10 +76,10 @@ "source": [ "Use the previously defined model (called `model`) and using two nested `for`\n", 
"loops, make a search of the best combinations of the `learning_rate` and\n", - "`max_leaf_nodes` parameters. In this regard, you will need to train and test\n", - "the model by setting the parameters. The evaluation of the model should be\n", - "performed using `cross_val_score` on the training set. We will use the\n", - "following parameters search:\n", + "`max_leaf_nodes` parameters. In this regard, you have to train and test the\n", + "model by setting the parameters. The evaluation of the model should be\n", + "performed using `cross_val_score` on the training set. Use the following\n", + "parameters search:\n", "- `learning_rate` for the values 0.01, 0.1, 1 and 10. This parameter controls\n", " the ability of a new tree to correct the error of the previous sequence of\n", " trees\n", diff --git a/notebooks/parameter_tuning_ex_03.ipynb b/notebooks/parameter_tuning_ex_03.ipynb index ee40ef916..e26aa4150 100644 --- a/notebooks/parameter_tuning_ex_03.ipynb +++ b/notebooks/parameter_tuning_ex_03.ipynb @@ -31,7 +31,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "In this exercise, we will progressively define the regression pipeline and\n", + "In this exercise, we progressively define the regression pipeline and\n", "later tune its hyperparameters.\n", "\n", "Start by defining a pipeline that:\n", diff --git a/notebooks/parameter_tuning_grid_search.ipynb b/notebooks/parameter_tuning_grid_search.ipynb index e0912cb54..cdf8117cc 100644 --- a/notebooks/parameter_tuning_grid_search.ipynb +++ b/notebooks/parameter_tuning_grid_search.ipynb @@ -7,7 +7,7 @@ "# Hyperparameter tuning by grid-search\n", "\n", "In the previous notebook, we saw that hyperparameters can affect the\n", - "generalization performance of a model. In this notebook, we will show how to\n", + "generalization performance of a model. In this notebook, we show how to\n", "optimize hyperparameters using a grid-search approach." 
] }, @@ -91,8 +91,8 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "We will define a pipeline as seen in the first module. It will handle both\n", - "numerical and categorical features.\n", + "We define a pipeline as seen in the first module to handle both numerical and\n", + "categorical features.\n", "\n", "The first step is to select all the categorical columns." ] @@ -113,7 +113,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Here we will use a tree-based model as a classifier (i.e.\n", + "Here we use a tree-based model as a classifier (i.e.\n", "`HistGradientBoostingClassifier`). That means:\n", "\n", "* Numerical variables don't need scaling;\n", @@ -201,8 +201,8 @@ "code.\n", "\n", "Let's see how to use the `GridSearchCV` estimator for doing such search. Since\n", - "the grid-search will be costly, we will only explore the combination\n", - "learning-rate and the maximum number of nodes." + "the grid-search is costly, we only explore the combination of the\n", + "learning-rate and the maximum number of nodes." ] }, { @@ -226,7 +226,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Finally, we will check the accuracy of our model using the test set." + "Finally, we check the accuracy of our model using the test set." ] }, { @@ -261,17 +261,17 @@ "metadata": {}, "source": [ "The `GridSearchCV` estimator takes a `param_grid` parameter which defines all\n", - "hyperparameters and their associated values. The grid-search will be in charge\n", + "hyperparameters and their associated values. The grid-search is in charge\n", "of creating all possible combinations and test them.\n", "\n", - "The number of combinations will be equal to the product of the number of\n", - "values to explore for each parameter (e.g. 
in our example 4 x 3 combinations).\n", - "Thus, adding new parameters with their associated values to be explored become\n", + "The number of combinations is equal to the product of the number of values to\n", + "explore for each parameter (e.g. in our example 4 x 3 combinations). Thus,\n", + "adding new parameters with their associated values to be explored becomes\n", "rapidly computationally expensive.\n", "\n", "Once the grid-search is fitted, it can be used as any other predictor by\n", - "calling `predict` and `predict_proba`. Internally, it will use the model with\n", - "the best parameters found during `fit`.\n", + "calling `predict` and `predict_proba`. Internally, it uses the model with the\n", + "best parameters found during `fit`.\n", "\n", "Get predictions for the 5 first samples using the estimator with the best\n", "parameters." @@ -312,8 +312,8 @@ "parameters \"by hand\" through a double for loop.\n", "\n", "In addition, we can inspect all results which are stored in the attribute\n", - "`cv_results_` of the grid-search. We will filter some specific columns from\n", - "these results." + "`cv_results_` of the grid-search. We filter some specific columns from these\n", + "results." ] }, { @@ -371,9 +371,9 @@ "With only 2 parameters, we might want to visualize the grid-search as a\n", "heatmap. We need to transform our `cv_results` into a dataframe where:\n", "\n", - "- the rows will correspond to the learning-rate values;\n", - "- the columns will correspond to the maximum number of leaf;\n", - "- the content of the dataframe will be the mean test scores." + "- the rows correspond to the learning-rate values;\n", + "- the columns correspond to the maximum number of leaf nodes;\n", + "- the content of the dataframe is the mean test scores." 
] }, { @@ -430,7 +430,7 @@ "\n", "The precise meaning of those two parameters will be explained later.\n", "\n", - "For now we will note that, in general, **there is no unique optimal parameter\n", + "For now we note that, in general, **there is no unique optimal parameter\n", "setting**: 4 models out of the 12 parameter configurations reach the maximal\n", "accuracy (up to small random fluctuations caused by the sampling of the\n", "training set)." diff --git a/notebooks/parameter_tuning_nested.ipynb b/notebooks/parameter_tuning_nested.ipynb index efc43173d..f632d16f4 100644 --- a/notebooks/parameter_tuning_nested.ipynb +++ b/notebooks/parameter_tuning_nested.ipynb @@ -10,12 +10,12 @@ "However, we did not present a proper framework to evaluate the tuned models.\n", "Instead, we focused on the mechanism used to find the best set of parameters.\n", "\n", - "In this notebook, we will reuse some knowledge presented in the module\n", - "\"Selecting the best model\" to show how to evaluate models where\n", - "hyperparameters need to be tuned.\n", + "In this notebook, we reuse some knowledge presented in the module \"Selecting\n", + "the best model\" to show how to evaluate models where hyperparameters need to\n", + "be tuned.\n", "\n", - "Thus, we will first load the dataset and create the predictive model that we\n", - "want to optimize and later on, evaluate.\n", + "Thus, we first load the dataset and create the predictive model that we want\n", + "to optimize and later on, evaluate.\n", "\n", "## Loading the dataset\n", "\n", @@ -155,7 +155,7 @@ "### With hyperparameter tuning\n", "\n", "As shown in the previous notebook, one can use a search strategy that uses\n", - "cross-validation to find the best set of parameters. Here, we will use a\n", + "cross-validation to find the best set of parameters. 
Here, we use a\n", "grid-search strategy and reproduce the steps done in the previous notebook.\n", "\n", "First, we have to embed our model into a grid-search and specify the\n", diff --git a/notebooks/parameter_tuning_parallel_plot.ipynb b/notebooks/parameter_tuning_parallel_plot.ipynb index 6b2cbe200..32f411b35 100644 --- a/notebooks/parameter_tuning_parallel_plot.ipynb +++ b/notebooks/parameter_tuning_parallel_plot.ipynb @@ -158,8 +158,8 @@ "spread the active ranges and improve the readability of the plot.

\n", "\n", "\n", - "The parallel coordinates plot will display the values of the hyperparameters\n", - "on different columns while the performance metric is color coded. Thus, we are\n", + "The parallel coordinates plot displays the values of the hyperparameters on\n", + "different columns while the performance metric is color coded. Thus, we are\n", "able to quickly inspect if there is a range of hyperparameters which is\n", "working or not.\n", "\n", diff --git a/notebooks/parameter_tuning_sol_02.ipynb b/notebooks/parameter_tuning_sol_02.ipynb index 58ef6a501..4035e5717 100644 --- a/notebooks/parameter_tuning_sol_02.ipynb +++ b/notebooks/parameter_tuning_sol_02.ipynb @@ -76,10 +76,10 @@ "source": [ "Use the previously defined model (called `model`) and using two nested `for`\n", "loops, make a search of the best combinations of the `learning_rate` and\n", - "`max_leaf_nodes` parameters. In this regard, you will need to train and test\n", - "the model by setting the parameters. The evaluation of the model should be\n", - "performed using `cross_val_score` on the training set. We will use the\n", - "following parameters search:\n", + "`max_leaf_nodes` parameters. In this regard, you need to train and test the\n", + "model by setting the parameters. The evaluation of the model should be\n", + "performed using `cross_val_score` on the training set. Use the following\n", + "parameters search:\n", "- `learning_rate` for the values 0.01, 0.1, 1 and 10. 
This parameter controls\n", " the ability of a new tree to correct the error of the previous sequence of\n", " trees\n", diff --git a/notebooks/parameter_tuning_sol_03.ipynb b/notebooks/parameter_tuning_sol_03.ipynb index c7e032fce..c7eb4a778 100644 --- a/notebooks/parameter_tuning_sol_03.ipynb +++ b/notebooks/parameter_tuning_sol_03.ipynb @@ -31,8 +31,8 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "In this exercise, we will progressively define the regression pipeline and\n", - "later tune its hyperparameters.\n", + "In this exercise, we progressively define the regression pipeline and later\n", + "tune its hyperparameters.\n", "\n", "Start by defining a pipeline that:\n", "* uses a `StandardScaler` to normalize the numerical data;\n", @@ -158,8 +158,8 @@ ] }, "source": [ - "To simplify the axis of the plot, we will rename the column of the dataframe\n", - "and only select the mean test score and the value of the hyperparameters." + "To simplify the axis of the plot, we rename the column of the dataframe and\n", + "only select the mean test score and the value of the hyperparameters." ] }, { @@ -266,7 +266,7 @@ "vary between 0 and 10,000 (e.g. the variable `\"Population\"`) and B is a\n", "feature that varies between 1 and 10 (e.g. the variable `\"AveRooms\"`), then\n", "distances between samples (rows of the dataframe) are mostly impacted by\n", - "differences in values of the column A, while values of the column B will be\n", + "differences in values of the column A, while values of the column B are\n", "comparatively ignored. 
If one applies StandardScaler to such a database, both\n", "the values of A and B will be approximately between -3 and 3 and the neighbor\n", "structure will be impacted more or less equivalently by both variables.\n", diff --git a/notebooks/trees_dataset.ipynb b/notebooks/trees_dataset.ipynb index 7202c1073..c2509a248 100644 --- a/notebooks/trees_dataset.ipynb +++ b/notebooks/trees_dataset.ipynb @@ -13,7 +13,7 @@ "\n", "## Classification dataset\n", "\n", - "We will use this dataset in classification setting to predict the penguins'\n", + "We use this dataset in a classification setting to predict the penguins'\n", "species from anatomical information.\n", "\n", "Each penguin is from one of the three following species: Adelie, Gentoo, and\n", @@ -24,15 +24,15 @@ "penguins](https://github.com/allisonhorst/palmerpenguins/raw/master/man/figures/lter_penguins.png)\n", "\n", "This problem is a classification problem since the target is categorical. We\n", - "will limit our input data to a subset of the original features to simplify our\n", - "explanations when presenting the decision tree algorithm. Indeed, we will use\n", + "limit our input data to a subset of the original features to simplify our\n", + "explanations when presenting the decision tree algorithm. Indeed, we use\n", "features based on penguins' culmen measurement. You can learn more about the\n", "penguins' culmen with the illustration below:\n", "\n", "![Image of\n", "culmen](https://github.com/allisonhorst/palmerpenguins/raw/master/man/figures/culmen_depth.png)\n", "\n", - "We will start by loading this subset of the dataset." + "We start by loading this subset of the dataset." ] }, { @@ -101,11 +101,11 @@ "\n", "In a regression setting, the target is a continuous variable instead of\n", "categories. 
Here, we use two features of the dataset to make such a problem:\n", - "the flipper length will be used as data and the body mass will be the target.\n", - "In short, we want to predict the body mass using the flipper length.\n", + "the flipper length is used as data and the body mass as the target. In short,\n", + "we want to predict the body mass using the flipper length.\n", "\n", - "We will load the dataset and visualize the relationship between the flipper\n", - "length and the body mass of penguins." + "We load the dataset and visualize the relationship between the flipper length\n", + "and the body mass of penguins." ] }, { diff --git a/notebooks/trees_ex_02.ipynb b/notebooks/trees_ex_02.ipynb index 3b1c0e141..0d35b25be 100644 --- a/notebooks/trees_ex_02.ipynb +++ b/notebooks/trees_ex_02.ipynb @@ -12,7 +12,7 @@ "By extrapolation, we refer to values predicted by a model outside of the range\n", "of feature values seen during the training.\n", "\n", - "We will first load the regression data." + "We first load the regression data." ] }, { @@ -98,10 +98,10 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Now, we will check the extrapolation capabilities of each model. Create a\n", - "dataset containing a broader range of values than your previous dataset, in\n", - "other words, add values below and above the minimum and the maximum of the\n", - "flipper length seen during training." + "Now, we check the extrapolation capabilities of each model. Create a dataset\n", + "containing a broader range of values than your previous dataset, in other\n", + "words, add values below and above the minimum and the maximum of the flipper\n", + "length seen during training." 
] }, { diff --git a/notebooks/trees_hyperparameters.ipynb b/notebooks/trees_hyperparameters.ipynb index b9de0ac27..1e271f491 100644 --- a/notebooks/trees_hyperparameters.ipynb +++ b/notebooks/trees_hyperparameters.ipynb @@ -6,11 +6,11 @@ "source": [ "# Importance of decision tree hyperparameters on generalization\n", "\n", - "In this notebook, we will illustrate the importance of some key\n", - "hyperparameters on the decision tree; we will demonstrate their effects on the\n", - "classification and regression problems we saw previously.\n", + "In this notebook, we illustrate the importance of some key hyperparameters on\n", + "the decision tree; we demonstrate their effects on the classification and\n", + "regression problems we saw previously.\n", "\n", - "First, we will load the classification and regression datasets." + "First, we load the classification and regression datasets." ] }, { @@ -54,7 +54,7 @@ "source": [ "## Create helper functions\n", "\n", - "We will create some helper functions to plot the data samples as well as the\n", + "We create some helper functions to plot the data samples as well as the\n", "decision boundary for classification and the regression line for regression." ] }, @@ -207,10 +207,10 @@ "metadata": {}, "source": [ "For both classification and regression setting, we observe that increasing the\n", - "depth will make the tree model more expressive. However, a tree that is too\n", - "deep will overfit the training data, creating partitions which are only\n", - "correct for \"outliers\" (noisy samples). The `max_depth` is one of the\n", - "hyperparameters that one should optimize via cross-validation and grid-search." + "depth makes the tree model more expressive. However, a tree that is too deep\n", + "may overfit the training data, creating partitions which are only correct for\n", + "\"outliers\" (noisy samples). The `max_depth` is one of the hyperparameters that\n", + "one should optimize via cross-validation and grid-search." 
] }, { @@ -266,15 +266,15 @@ "\n", "The `max_depth` hyperparameter controls the overall complexity of the tree.\n", "This parameter is adequate under the assumption that a tree is built\n", - "symmetrically. However, there is no guarantee that a tree will be symmetrical.\n", + "symmetrically. However, there is no reason why a tree should be symmetrical.\n", "Indeed, optimal generalization performance could be reached by growing some of\n", "the branches deeper than some others.\n", "\n", - "We will build a dataset where we will illustrate this asymmetry. We will\n", - "generate a dataset composed of 2 subsets: one subset where a clear separation\n", - "should be found by the tree and another subset where samples from both classes\n", - "will be mixed. It implies that a decision tree will need more splits to\n", - "classify properly samples from the second subset than from the first subset." + "We build a dataset where we illustrate this asymmetry. We generate a dataset\n", + "composed of 2 subsets: one subset where a clear separation should be found by\n", + "the tree and another subset where samples from both classes are mixed. It\n", + "implies that a decision tree needs more splits to classify properly samples\n", + "from the second subset than from the first subset." ] }, { @@ -288,11 +288,11 @@ "data_clf_columns = [\"Feature #0\", \"Feature #1\"]\n", "target_clf_column = \"Class\"\n", "\n", - "# Blobs that will be interlaced\n", + "# Blobs that are interlaced\n", "X_1, y_1 = make_blobs(\n", " n_samples=300, centers=[[0, 0], [-1, -1]], random_state=0\n", ")\n", - "# Blobs that will be easily separated\n", + "# Blobs that can be easily separated\n", "X_2, y_2 = make_blobs(n_samples=300, centers=[[3, 6], [7, 0]], random_state=0)\n", "\n", "X = np.concatenate([X_1, X_2], axis=0)\n", @@ -324,9 +324,8 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "We will first train a shallow decision tree with `max_depth=2`. 
We would\n", - "expect this depth to be enough to separate the blobs that are easy to\n", - "separate." + "We first train a shallow decision tree with `max_depth=2`. We would expect\n", + "this depth to be enough to separate the blobs that are easy to separate." ] }, { @@ -348,7 +347,7 @@ "metadata": {}, "source": [ "As expected, we see that the blue blob in the lower right and the red blob on\n", - "the top are easily separated. However, more splits will be required to better\n", + "the top are easily separated. However, more splits are required to better\n", "split the blob were both blue and red data points are mixed." ] }, @@ -369,7 +368,7 @@ "metadata": {}, "source": [ "We see that the right branch achieves perfect classification. Now, we increase\n", - "the depth to check how the tree will grow." + "the depth to check how the tree grows." ] }, { @@ -406,8 +405,8 @@ "beneficial that a branch continue growing.\n", "\n", "The hyperparameters `min_samples_leaf`, `min_samples_split`, `max_leaf_nodes`,\n", - "or `min_impurity_decrease` allows growing asymmetric trees and apply a\n", - "constraint at the leaves or nodes level. We will check the effect of\n", + "or `min_impurity_decrease` allow growing asymmetric trees and apply a\n", + "constraint at the leaves or nodes level. We check the effect of\n", "`min_samples_leaf`." ] }, @@ -442,7 +441,7 @@ "metadata": {}, "source": [ "This hyperparameter allows to have leaves with a minimum number of samples and\n", - "no further splits will be searched otherwise. Therefore, these hyperparameters\n", + "no further splits are searched otherwise. Therefore, these hyperparameters\n", "could be an alternative to fix the `max_depth` hyperparameter." 
] } diff --git a/notebooks/trees_regression.ipynb b/notebooks/trees_regression.ipynb index 5e137e01e..217d2e165 100644 --- a/notebooks/trees_regression.ipynb +++ b/notebooks/trees_regression.ipynb @@ -44,9 +44,9 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "To illustrate how decision trees are predicting in a regression setting, we\n", - "will create a synthetic dataset containing all possible flipper length from\n", - "the minimum to the maximum of the original data." + "To illustrate how decision trees predict in a regression setting, we create a\n", + "synthetic dataset containing some of the possible flipper length values\n", + "between the minimum and the maximum of the original data." ] }, { @@ -76,9 +76,9 @@ "some intuitive understanding on the shape of the decision function of the\n", "learned decision trees.\n", "\n", - "However computing an evaluation metric on such a synthetic test set would be\n", + "However, computing an evaluation metric on such a synthetic test set would be\n", "meaningless since the synthetic dataset does not follow the same distribution\n", - "as the real world data on which the model will be deployed." + "as the real world data on which the model would be deployed." ] }, { @@ -100,7 +100,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "We will first illustrate the difference between a linear model and a decision\n", + "We first illustrate the difference between a linear model and a decision\n", "tree." ] }, @@ -172,9 +172,8 @@ "metadata": {}, "source": [ "Contrary to linear models, decision trees are non-parametric models: they do\n", - "not make assumptions about the way data is distributed. This will affect the\n", - "prediction scheme. Repeating the above experiment will highlight the\n", - "differences." + "not make assumptions about the way data is distributed. This affects the\n", + "prediction scheme. Repeating the above experiment highlights the differences." 
] }, { @@ -272,8 +271,8 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Increasing the depth of the tree will increase the number of partition and\n", - "thus the number of constant values that the tree is capable of predicting.\n", + "Increasing the depth of the tree increases the number of partitions and thus\n", + "the number of constant values that the tree is capable of predicting.\n", "\n", "In this notebook, we highlighted the differences in behavior of a decision\n", "tree used in a classification problem in contrast to a regression problem." diff --git a/notebooks/trees_sol_02.ipynb b/notebooks/trees_sol_02.ipynb index cd7de2cff..64010ef3e 100644 --- a/notebooks/trees_sol_02.ipynb +++ b/notebooks/trees_sol_02.ipynb @@ -12,7 +12,7 @@ "By extrapolation, we refer to values predicted by a model outside of the range\n", "of feature values seen during the training.\n", "\n", - "We will first load the regression data." + "We first load the regression data." ] }, { @@ -153,10 +153,10 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Now, we will check the extrapolation capabilities of each model. Create a\n", - "dataset containing a broader range of values than your previous dataset, in\n", - "other words, add values below and above the minimum and the maximum of the\n", - "flipper length seen during training." + "Now, we check the extrapolation capabilities of each model. Create a dataset\n", + "containing a broader range of values than your previous dataset, in other\n", + "words, add values below and above the minimum and the maximum of the flipper\n", + "length seen during training." ] }, { @@ -226,9 +226,9 @@ ] }, "source": [ - "The linear model will extrapolate using the fitted model for flipper lengths <\n", - "175 mm and > 235 mm. In fact, we are using the model parametrization to make\n", - "this predictions.\n", + "The linear model extrapolates using the fitted model for flipper lengths < 175\n", + "mm and > 235 mm. 
In fact, we are using the model parametrization to make these\n", + "predictions.\n", "\n", "As mentioned, decision trees are non-parametric models and we observe that\n", "they cannot extrapolate. For flipper lengths below the minimum, the mass of\n", From 533fcb74fa962290a1aff82319d6228b2c05f789 Mon Sep 17 00:00:00 2001 From: ArturoAmorQ Date: Tue, 7 Nov 2023 15:34:44 +0100 Subject: [PATCH 2/3] Prefer verbs in present mode --- python_scripts/01_tabular_data_exploration.py | 34 ++++++------- .../01_tabular_data_exploration_ex_01.py | 4 +- .../01_tabular_data_exploration_sol_01.py | 2 +- .../02_numerical_pipeline_cross_validation.py | 24 +++++----- python_scripts/02_numerical_pipeline_ex_00.py | 9 ++-- python_scripts/02_numerical_pipeline_ex_01.py | 10 ++-- .../02_numerical_pipeline_hands_on.py | 23 +++++---- .../02_numerical_pipeline_scaling.py | 31 ++++++------ .../02_numerical_pipeline_sol_00.py | 7 +-- .../02_numerical_pipeline_sol_01.py | 14 +++--- python_scripts/03_categorical_pipeline.py | 48 +++++++++---------- ...categorical_pipeline_column_transformer.py | 40 ++++++++-------- .../03_categorical_pipeline_ex_01.py | 17 ++++--- .../03_categorical_pipeline_ex_02.py | 2 +- .../03_categorical_pipeline_sol_01.py | 17 ++++--- .../03_categorical_pipeline_sol_02.py | 2 +- .../03_categorical_pipeline_visualization.py | 4 +- 17 files changed, 142 insertions(+), 146 deletions(-) diff --git a/python_scripts/01_tabular_data_exploration.py b/python_scripts/01_tabular_data_exploration.py index 69427f8d0..4b07c4add 100644 --- a/python_scripts/01_tabular_data_exploration.py +++ b/python_scripts/01_tabular_data_exploration.py @@ -8,8 +8,8 @@ # %% [markdown] # # First look at our dataset # -# In this notebook, we will look at the necessary steps required before any -# machine learning takes place. It involves: +# In this notebook, we look at the necessary steps required before any machine +# learning takes place. 
It involves: # # * loading the data; # * looking at the variables in the dataset, in particular, differentiate @@ -21,14 +21,14 @@ # %% [markdown] # ## Loading the adult census dataset # -# We will use data from the 1994 US census that we downloaded from +# We use data from the 1994 US census that we downloaded from # [OpenML](http://openml.org/). # # You can look at the OpenML webpage to learn more about this dataset: # # -# The dataset is available as a CSV (Comma-Separated Values) file and we will -# use `pandas` to read it. +# The dataset is available as a CSV (Comma-Separated Values) file and we use +# `pandas` to read it. # # ```{note} # [Pandas](https://pandas.pydata.org/) is a Python library used for @@ -74,9 +74,9 @@ # The column named **class** is our target variable (i.e., the variable which we # want to predict). The two possible classes are `<=50K` (low-revenue) and # `>50K` (high-revenue). The resulting prediction problem is therefore a binary -# classification problem as `class` has only two possible values. We will use -# the left-over columns (any column other than `class`) as input variables for -# our model. +# classification problem as `class` has only two possible values. We use the +# left-over columns (any column other than `class`) as input variables for our +# model. # %% target_column = "class" @@ -90,7 +90,7 @@ # and may need special techniques when building a predictive model. # # For example in a medical setting, if we are trying to predict whether subjects -# will develop a rare disease, there will be a lot more healthy subjects than +# may develop a rare disease, there would be a lot more healthy subjects than # ill subjects in the dataset. 
# ``` @@ -247,8 +247,8 @@ # %% import seaborn as sns -# We will plot a subset of the data to keep the plot readable and make the -# plotting faster +# We plot a subset of the data to keep the plot readable and make the plotting +# faster n_samples_to_plot = 5000 columns = ["age", "education-num", "hours-per-week"] _ = sns.pairplot( @@ -320,12 +320,12 @@ # a mix of blue points and orange points. It seems complicated to choose which # class we should predict in this region. # -# It is interesting to note that some machine learning models will work -# similarly to what we did: they are known as decision tree models. The two -# thresholds that we chose (27 years and 40 hours) are somewhat arbitrary, i.e. -# we chose them by only looking at the pairplot. In contrast, a decision tree -# will choose the "best" splits based on data without human intervention or -# inspection. Decision trees will be covered more in detail in a future module. +# It is interesting to note that some machine learning models work similarly to +# what we did: they are known as decision tree models. The two thresholds that +# we chose (27 years and 40 hours) are somewhat arbitrary, i.e. we chose them by +# only looking at the pairplot. In contrast, a decision tree chooses the "best" +# splits based on data without human intervention or inspection. Decision trees +# will be covered more in detail in a future module. # # Note that machine learning is often used when creating rules by hand is not # straightforward. 
For example because we are in high dimension (many features diff --git a/python_scripts/01_tabular_data_exploration_ex_01.py b/python_scripts/01_tabular_data_exploration_ex_01.py index b09b00dc3..37548c006 100644 --- a/python_scripts/01_tabular_data_exploration_ex_01.py +++ b/python_scripts/01_tabular_data_exploration_ex_01.py @@ -5,7 +5,7 @@ # extension: .py # format_name: percent # format_version: '1.3' -# jupytext_version: 1.14.5 +# jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # name: python3 @@ -59,5 +59,5 @@ # Write your code here. # %% [markdown] -# Looking at these distributions, how hard do you think it will be to classify +# Looking at these distributions, how hard do you think it would be to classify # the penguins only using `"culmen depth"` and `"culmen length"`? diff --git a/python_scripts/01_tabular_data_exploration_sol_01.py b/python_scripts/01_tabular_data_exploration_sol_01.py index 95d89b203..b3ef6a88d 100644 --- a/python_scripts/01_tabular_data_exploration_sol_01.py +++ b/python_scripts/01_tabular_data_exploration_sol_01.py @@ -78,7 +78,7 @@ pairplot_figure = seaborn.pairplot(penguins, hue="Species", height=4) # %% [markdown] -# Looking at these distributions, how hard do you think it will be to classify +# Looking at these distributions, how hard do you think it would be to classify # the penguins only using `"culmen depth"` and `"culmen length"`? # %% [markdown] tags=["solution"] diff --git a/python_scripts/02_numerical_pipeline_cross_validation.py b/python_scripts/02_numerical_pipeline_cross_validation.py index 0edbd1cf8..e93868352 100644 --- a/python_scripts/02_numerical_pipeline_cross_validation.py +++ b/python_scripts/02_numerical_pipeline_cross_validation.py @@ -8,9 +8,9 @@ # %% [markdown] # # Model evaluation using cross-validation # -# In this notebook, we will still use only numerical features. +# In this notebook, we still use numerical features only. 
# -# We will discuss the practical aspects of assessing the generalization +# Here we discuss the practical aspects of assessing the generalization # performance of our model via **cross-validation** instead of a single # train-test split. # @@ -24,8 +24,8 @@ adult_census = pd.read_csv("../datasets/adult-census.csv") # %% [markdown] -# We will now drop the target from the data we will use to train our -# predictive model. +# We now drop the target from the data we will use to train our predictive +# model. # %% target_name = "class" @@ -56,11 +56,11 @@ # ## The need for cross-validation # # In the previous notebook, we split the original data into a training set and a -# testing set. The score of a model will in general depend on the way we make -# such a split. One downside of doing a single split is that it does not give -# any information about this variability. Another downside, in a setting where -# the amount of data is small, is that the data available for training and -# testing will be even smaller after splitting. +# testing set. The score of a model in general depends on the way we make such a +# split. One downside of doing a single split is that it does not give any +# information about this variability. Another downside, in a setting where the +# amount of data is small, is that the data available for training and testing +# would be even smaller after splitting. # # Instead, we can use cross-validation. Cross-validation consists of repeating # the procedure such that the training and testing sets are different each time. @@ -69,8 +69,8 @@ # model's generalization performance. # # Note that there exists several cross-validation strategies, each of them -# defines how to repeat the `fit`/`score` procedure. In this section, we will -# use the K-fold strategy: the entire dataset is split into `K` partitions. The +# defines how to repeat the `fit`/`score` procedure. 
In this section, we use the +# K-fold strategy: the entire dataset is split into `K` partitions. The # `fit`/`score` procedure is repeated `K` times where at each iteration `K - 1` # partitions are used to fit the model and `1` partition is used to score. The # figure below illustrates this K-fold strategy. @@ -129,7 +129,7 @@ # [`sklearn.model_selection.cross_validate`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_validate.html) # to collect additional information, such as the training scores of the models # obtained on each round or even return the models themselves instead of -# discarding them. These features will be covered in a future notebook. +# discarding them. These features will be covered in a future notebook. # # Let's extract the scores computed on the test fold of each cross-validation # round from the `cv_result` dictionary and compute the mean accuracy and the diff --git a/python_scripts/02_numerical_pipeline_ex_00.py b/python_scripts/02_numerical_pipeline_ex_00.py index 0436dfc50..f251ca7f9 100644 --- a/python_scripts/02_numerical_pipeline_ex_00.py +++ b/python_scripts/02_numerical_pipeline_ex_00.py @@ -5,7 +5,7 @@ # extension: .py # format_name: percent # format_version: '1.3' -# jupytext_version: 1.14.5 +# jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # name: python3 @@ -38,11 +38,12 @@ # number of neighbors we are going to use to make a prediction for a new data # point. # -# What is the default value of the `n_neighbors` parameter? Hint: Look at the -# documentation on the [scikit-learn +# What is the default value of the `n_neighbors` parameter? +# +# **Hint**: Look at the documentation on the [scikit-learn # website](https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html) # or directly access the description inside your notebook by running the -# following cell. This will open a pager pointing to the documentation. +# following cell. 
This opens a pager pointing to the documentation. # %% from sklearn.neighbors import KNeighborsClassifier diff --git a/python_scripts/02_numerical_pipeline_ex_01.py b/python_scripts/02_numerical_pipeline_ex_01.py index 7654753d4..2f9c5c240 100644 --- a/python_scripts/02_numerical_pipeline_ex_01.py +++ b/python_scripts/02_numerical_pipeline_ex_01.py @@ -5,7 +5,7 @@ # extension: .py # format_name: percent # format_version: '1.3' -# jupytext_version: 1.14.5 +# jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # name: python3 @@ -35,8 +35,8 @@ adult_census = pd.read_csv("../datasets/adult-census.csv") # %% [markdown] -# We will first split our dataset to have the target separated from the data -# used to train our predictive model. +# We first split our dataset to have the target separated from the data used to +# train our predictive model. # %% target_name = "class" @@ -61,8 +61,8 @@ # Write your code here. # %% [markdown] -# Use a `DummyClassifier` such that the resulting classifier will always predict -# the class `' >50K'`. What is the accuracy score on the test set? Repeat the +# Use a `DummyClassifier` such that the resulting classifier always predict the +# class `' >50K'`. What is the accuracy score on the test set? Repeat the # experiment by always predicting the class `' <=50K'`. # # Hint: you can set the `strategy` parameter of the `DummyClassifier` to achieve diff --git a/python_scripts/02_numerical_pipeline_hands_on.py b/python_scripts/02_numerical_pipeline_hands_on.py index 83f4346ed..913b78105 100644 --- a/python_scripts/02_numerical_pipeline_hands_on.py +++ b/python_scripts/02_numerical_pipeline_hands_on.py @@ -21,8 +21,7 @@ # * using a scikit-learn helper to separate data into train-test sets; # * training and evaluating a more complex scikit-learn model. # -# We will start by loading the adult census dataset used during the data -# exploration. +# We start by loading the adult census dataset used during the data exploration. 
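The loading-and-splitting pattern used throughout these notebooks can be made runnable in isolation; the fabricated three-row CSV below is only a stand-in for the real `../datasets/adult-census.csv` file:

```python
import io
import pandas as pd

# Fabricated stand-in for pd.read_csv("../datasets/adult-census.csv"),
# keeping the same kind of columns used in the notebooks.
csv_file = io.StringIO(
    "age,hours-per-week,class\n"
    "25,40, <=50K\n"
    "48,50, >50K\n"
    "37,45, <=50K\n"
)
adult_census = pd.read_csv(csv_file)

# separate the target from the data used to train the predictive model
target_name = "class"
target = adult_census[target_name]
data = adult_census.drop(columns=[target_name])

assert target_name not in data.columns
assert list(target) == [" <=50K", " >50K", " <=50K"]
```

Note that the class labels keep their leading space, a quirk of the real adult census CSV that `read_csv` preserves by default.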
# # ## Loading the entire dataset # @@ -70,12 +69,12 @@ # numerical data usually requires very little work before getting started with # training. # -# The first task here will be to identify numerical data in our dataset. +# The first task here is to identify numerical data in our dataset. # # ```{caution} -# Numerical data are represented with numbers, but numbers are not always -# representing numerical data. Categories could already be encoded with -# numbers and you will need to identify these features. +# Numerical data are represented with numbers, but numbers do not always +# represent numerical data. Categories could already be encoded with +# numbers and you may need to identify these features. # ``` # # Thus, we can check the data type for each of the column in the dataset. @@ -123,7 +122,7 @@ # %% [markdown] # We can see the age varies between 17 and 90 years. # -# We could extend our analysis and we will find that `"capital-gain"`, +# We could extend our analysis and we would find that `"capital-gain"`, # `"capital-loss"`, and `"hours-per-week"` are also representing quantitative # data. # @@ -162,7 +161,7 @@ # %% [markdown] # When calling the function `train_test_split`, we specified that we would like # to have 25% of samples in the testing set while the remaining samples (75%) -# will be available in the training set. We can check quickly if we got what we +# are assigned to the training set. We can check quickly if we got what we # expected. # %% @@ -182,8 +181,8 @@ # %% [markdown] # In the previous notebook, we used a k-nearest neighbors model. While this # model is intuitive to understand, it is not widely used in practice. Now, we -# will use a more useful model, called a logistic regression, which belongs to -# the linear models family. +# use a more useful model, called a logistic regression, which belongs to the +# linear models family. 
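To make the linear-model idea concrete, the decision rule can be sketched numerically. The weights below are made up for illustration, not coefficients fitted on the census data:

```python
# A linear decision rule: combine features linearly, add an intercept, and
# threshold the result (weights and intercept here are illustrative only).
def predict_income(age, hours_per_week):
    score = 0.1 * age + 3.3 * hours_per_week - 15.1
    return "high-income" if score > 0 else "low-income"

assert predict_income(age=40, hours_per_week=45) == "high-income"
assert predict_income(age=18, hours_per_week=2) == "low-income"
```

Fitting a logistic regression amounts to learning such weights from data instead of writing them by hand.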
# # ```{note} # In short, linear models find a set of weights to combine features linearly @@ -192,8 +191,8 @@ # * if `0.1 * age + 3.3 * hours-per-week - 15.1 > 0`, predict `high-income` # * otherwise predict `low-income` # -# Linear models, and in particular the logistic regression, will be covered in -# more details in the "Linear models" module later in this course. For now the +# Linear models, and in particular the logistic regression, will be covered +# more in detail in the "Linear models" module later in this course. For now the # focus is to use this logistic regression model in scikit-learn rather than # understand how it works in details. # ``` diff --git a/python_scripts/02_numerical_pipeline_scaling.py b/python_scripts/02_numerical_pipeline_scaling.py index 66370921d..4a0025f5d 100644 --- a/python_scripts/02_numerical_pipeline_scaling.py +++ b/python_scripts/02_numerical_pipeline_scaling.py @@ -8,9 +8,9 @@ # %% [markdown] # # Preprocessing for numerical features # -# In this notebook, we will still use only numerical features. +# In this notebook, we still use numerical features only. # -# We will introduce these new aspects: +# Here we introduce these new aspects: # # * an example of preprocessing, namely **scaling numerical variables**; # * using a scikit-learn **pipeline** to chain preprocessing and model training. @@ -25,8 +25,7 @@ adult_census = pd.read_csv("../datasets/adult-census.csv") # %% [markdown] -# We will now drop the target from the data we will use to train our predictive -# model. +# We now drop the target from the data we use to train our predictive model. # %% target_name = "class" @@ -67,7 +66,7 @@ # %% [markdown] # We see that the dataset's features span across different ranges. Some # algorithms make some assumptions regarding the feature distributions and -# usually normalizing features will be helpful to address these assumptions. +# normalizing features is usually helpful to address such assumptions. 
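The normalization discussed here can be sketched with plain NumPy: the same shift-and-scale that `StandardScaler` learns during `fit` and applies during `transform` (a simplified sketch, not scikit-learn's implementation):

```python
import numpy as np

# Standardization by hand: learn the per-feature mean and standard deviation,
# then shift and scale each feature individually.
X = np.array([[17.0, 20.0], [42.0, 40.0], [90.0, 60.0]])
mean_ = X.mean(axis=0)            # state learned during `fit`
scale_ = X.std(axis=0)            # state learned during `fit`
X_scaled = (X - mean_) / scale_   # what `transform` applies

# each feature now has zero mean and unit standard deviation
assert np.allclose(X_scaled.mean(axis=0), 0.0)
assert np.allclose(X_scaled.std(axis=0), 1.0)
```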
# # ```{tip} # Here are some reasons for scaling features: @@ -84,13 +83,13 @@ # Whether or not a machine learning model requires scaling the features depends # on the model family. Linear models such as logistic regression generally # benefit from scaling the features while other models such as decision trees do -# not need such preprocessing (but will not suffer from it). +# not need such preprocessing (but would not suffer from it). # # We show how to apply such normalization using a scikit-learn transformer # called `StandardScaler`. This transformer shifts and scales each feature # individually so that they all have a 0-mean and a unit standard deviation. # -# We will investigate different steps used in scikit-learn to achieve such a +# We now investigate different steps used in scikit-learn to achieve such a # transformation of the data. # # First, one needs to call the method `fit` in order to learn the scaling from @@ -115,10 +114,10 @@ # are the model states. # # ```{note} -# The fact that the model states of this scaler are arrays of means and -# standard deviations is specific to the `StandardScaler`. Other -# scikit-learn transformers will compute different statistics and store them -# as model states, in the same fashion. +# The fact that the model states of this scaler are arrays of means and standard +# deviations is specific to the `StandardScaler`. Other scikit-learn +# transformers may compute different statistics and store them as model states, +# in a similar fashion. # ``` # # We can inspect the computed means and standard deviations. @@ -225,7 +224,7 @@ # %% [markdown] # We can easily combine sequential operations with a scikit-learn `Pipeline`, # which chains together operations and is used as any other classifier or -# regressor. The helper function `make_pipeline` will create a `Pipeline`: it +# regressor. 
The helper function `make_pipeline` creates a `Pipeline`: it # takes as arguments the successive transformations to perform, followed by the # classifier or regressor model. @@ -240,8 +239,8 @@ # %% [markdown] # The `make_pipeline` function did not require us to give a name to each step. # Indeed, it was automatically assigned based on the name of the classes -# provided; a `StandardScaler` will be a step named `"standardscaler"` in the -# resulting pipeline. We can check the name of each steps of our model: +# provided; a `StandardScaler` step is named `"standardscaler"` in the resulting +# pipeline. We can check the name of each steps of our model: # %% model.named_steps @@ -263,7 +262,7 @@ # ![pipeline fit diagram](../figures/api_diagram-pipeline.fit.svg) # # When calling `model.fit`, the method `fit_transform` from each underlying -# transformer (here a single transformer) in the pipeline will be called to: +# transformer (here a single transformer) in the pipeline is called to: # # - learn their internal model states # - transform the training data. Finally, the preprocessed data are provided to @@ -284,7 +283,7 @@ # called to preprocess the data. Note that there is no need to call the `fit` # method for these transformers because we are using the internal model states # computed when calling `model.fit`. The preprocessed data is then provided to -# the predictor that will output the predicted target by calling its method +# the predictor that outputs the predicted target by calling its method # `predict`. # # As a shorthand, we can check the score of the full predictive pipeline calling diff --git a/python_scripts/02_numerical_pipeline_sol_00.py b/python_scripts/02_numerical_pipeline_sol_00.py index 7ac9a5496..a10f8555d 100644 --- a/python_scripts/02_numerical_pipeline_sol_00.py +++ b/python_scripts/02_numerical_pipeline_sol_00.py @@ -32,11 +32,12 @@ # number of neighbors we are going to use to make a prediction for a new data # point. 
#
-# What is the default value of the `n_neighbors` parameter? Hint: Look at the
-# documentation on the [scikit-learn
+# What is the default value of the `n_neighbors` parameter?
+
+# **Hint**: Look at the documentation on the [scikit-learn
# website](https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html)
# or directly access the description inside your notebook by running the
-# following cell. This will open a pager pointing to the documentation.
+# following cell. This opens a pager pointing to the documentation.

# %%
from sklearn.neighbors import KNeighborsClassifier
diff --git a/python_scripts/02_numerical_pipeline_sol_01.py b/python_scripts/02_numerical_pipeline_sol_01.py
index 70a21c31d..3e77f6372 100644
--- a/python_scripts/02_numerical_pipeline_sol_01.py
+++ b/python_scripts/02_numerical_pipeline_sol_01.py
@@ -29,8 +29,8 @@
adult_census = pd.read_csv("../datasets/adult-census.csv")

# %% [markdown]
-# We will first split our dataset to have the target separated from the data
-# used to train our predictive model.
+# We first split our dataset to have the target separated from the data used to
+# train our predictive model.

# %%
target_name = "class"
@@ -58,8 +58,8 @@
)

# %% [markdown]
-# Use a `DummyClassifier` such that the resulting classifier will always predict
-# the class `' >50K'`. What is the accuracy score on the test set? Repeat the
+# Use a `DummyClassifier` such that the resulting classifier always predicts the
+# class `' >50K'`. What is the accuracy score on the test set? Repeat the
# experiment by always predicting the class `' <=50K'`.
#
# Hint: you can set the `strategy` parameter of the `DummyClassifier` to achieve
@@ -79,8 +79,8 @@

# %% [markdown] tags=["solution"]
# We clearly see that the score is below 0.5 which might be surprising at first.
-# We will now check the generalization performance of a model which always
-# predict the low revenue class, i.e. `" <=50K"`.
+# We now check the generalization performance of a model which always predicts
+# the low revenue class, i.e. `" <=50K"`.

# %% tags=["solution"]
class_to_predict = " <=50K"
@@ -97,7 +97,7 @@

# %% [markdown] tags=["solution"]
# Therefore, any predictive model giving results below this dummy classifier
-# will not be helpful.
+# would not be helpful.

# %% tags=["solution"]
adult_census["class"].value_counts()
diff --git a/python_scripts/03_categorical_pipeline.py b/python_scripts/03_categorical_pipeline.py
index 5acdefc82..62cd9be98 100644
--- a/python_scripts/03_categorical_pipeline.py
+++ b/python_scripts/03_categorical_pipeline.py
@@ -8,9 +8,9 @@
# %% [markdown]
# # Encoding of categorical variables
#
-# In this notebook, we will present typical ways of dealing with
-# **categorical variables** by encoding them, namely **ordinal encoding** and
-# **one-hot encoding**.
+# In this notebook, we present some typical ways of dealing with **categorical
+# variables** by encoding them, namely **ordinal encoding** and **one-hot
+# encoding**.

# %% [markdown]
# Let's first load the entire adult dataset containing both numerical and
@@ -62,9 +62,9 @@
# ## Select features based on their data type
#
# In the previous notebook, we manually defined the numerical columns. We could
-# do a similar approach. Instead, we will use the scikit-learn helper function
-# `make_column_selector`, which allows us to select columns based on
-# their data type. We will illustrate how to use this helper.
+# do a similar approach. Instead, we can use the scikit-learn helper function
+# `make_column_selector`, which allows us to select columns based on their data
+# type. We now illustrate how to use this helper.

# %%
from sklearn.compose import make_column_selector as selector
@@ -97,9 +97,8 @@
# ### Encoding ordinal categories
#
# The most intuitive strategy is to encode each category with a different
-# number. The `OrdinalEncoder` will transform the data in such manner.
-# We will start by encoding a single column to understand how the encoding -# works. +# number. The `OrdinalEncoder` transforms the data in such manner. We start by +# encoding a single column to understand how the encoding works. # %% from sklearn.preprocessing import OrdinalEncoder @@ -160,13 +159,13 @@ # # `OneHotEncoder` is an alternative encoder that prevents the downstream # models to make a false assumption about the ordering of categories. For a -# given feature, it will create as many new columns as there are possible +# given feature, it creates as many new columns as there are possible # categories. For a given sample, the value of the column corresponding to the -# category will be set to `1` while all the columns of the other categories -# will be set to `0`. +# category is set to `1` while all the columns of the other categories +# are set to `0`. # -# We will start by encoding a single feature (e.g. `"education"`) to illustrate -# how the encoding works. +# We can encode a single feature (e.g. `"education"`) to illustrate how the +# encoding works. # %% from sklearn.preprocessing import OneHotEncoder @@ -187,7 +186,7 @@ # ``` # %% [markdown] -# We see that encoding a single feature will give a dataframe full of zeros +# We see that encoding a single feature gives a dataframe full of zeros # and ones. Each category (unique value) became a column; the encoding # returned, for each sample, a 1 to specify which category it belongs to. # @@ -215,8 +214,8 @@ # %% [markdown] # ### Choosing an encoding strategy # -# Choosing an encoding strategy will depend on the underlying models and the -# type of categories (i.e. ordinal vs. nominal). +# Choosing an encoding strategy depends on the underlying models and the type of +# categories (i.e. ordinal vs. nominal). # %% [markdown] # ```{note} @@ -226,12 +225,11 @@ # ``` # %% [markdown] -# -# Using an `OrdinalEncoder` will output ordinal categories. 
This means +# Using an `OrdinalEncoder` outputs ordinal categories. This means # that there is an order in the resulting categories (e.g. `0 < 1 < 2`). The # impact of violating this ordering assumption is really dependent on the -# downstream models. Linear models will be impacted by misordered categories -# while tree-based models will not. +# downstream models. Linear models would be impacted by misordered categories +# while tree-based models would not. # # You can still use an `OrdinalEncoder` with linear models but you need to be # sure that: @@ -265,7 +263,7 @@ # We see that the `"Holand-Netherlands"` category is occurring rarely. This will # be a problem during cross-validation: if the sample ends up in the test set # during splitting then the classifier would not have seen the category during -# training and will not be able to encode it. +# training and would not be able to encode it. # # In scikit-learn, there are some possible solutions to bypass this issue: # @@ -289,9 +287,9 @@ # ```{tip} # Be aware the `OrdinalEncoder` exposes a parameter also named `handle_unknown`. # It can be set to `use_encoded_value`. If that option is chosen, you can define -# a fixed value to which all unknowns will be set to during `transform`. For -# example, `OrdinalEncoder(handle_unknown='use_encoded_value', -# unknown_value=42)` will set all values encountered during `transform` to `42` +# a fixed value that is assigned to all unknown categories during `transform`. +# For example, `OrdinalEncoder(handle_unknown='use_encoded_value', +# unknown_value=-1)` would set all values encountered during `transform` to `-1` # which are not part of the data encountered during the `fit` call. You are # going to use these parameters in the next exercise. 
# ``` diff --git a/python_scripts/03_categorical_pipeline_column_transformer.py b/python_scripts/03_categorical_pipeline_column_transformer.py index 002889af3..fd429749e 100644 --- a/python_scripts/03_categorical_pipeline_column_transformer.py +++ b/python_scripts/03_categorical_pipeline_column_transformer.py @@ -8,12 +8,12 @@ # %% [markdown] # # Using numerical and categorical variables together # -# In the previous notebooks, we showed the required preprocessing to apply -# when dealing with numerical and categorical variables. However, we decoupled -# the process to treat each type individually. In this notebook, we will show -# how to combine these preprocessing steps. +# In the previous notebooks, we showed the required preprocessing to apply when +# dealing with numerical and categorical variables. However, we decoupled the +# process to treat each type individually. In this notebook, we show how to +# combine these preprocessing steps. # -# We will first load the entire adult census dataset. +# We first load the entire adult census dataset. # %% import pandas as pd @@ -30,10 +30,10 @@ # %% [markdown] # ## Selection based on data types # -# We will separate categorical and numerical variables using their data -# types to identify them, as we saw previously that `object` corresponds -# to categorical columns (strings). We make use of `make_column_selector` -# helper to select the corresponding columns. +# We separate categorical and numerical variables using their data types to +# identify them, as we saw previously that `object` corresponds to categorical +# columns (strings). We make use of `make_column_selector` helper to select the +# corresponding columns. # %% from sklearn.compose import make_column_selector as selector @@ -62,14 +62,14 @@ # In the previous sections, we saw that we need to treat data differently # depending on their nature (i.e. numerical or categorical). 
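The column-wise dispatching described here can be sketched by hand to show the data flow a `ColumnTransformer` automates. This uses toy data and simplified transforms; note in particular that `DataFrame.std` uses `ddof=1`, a detail that differs from scikit-learn's `StandardScaler`:

```python
import pandas as pd

# Hand-rolled data flow of a column transformer: split columns by type, apply
# a dedicated transformation to each subset, then concatenate the results.
df = pd.DataFrame(
    {
        "age": [25.0, 48.0, 37.0],
        "workclass": ["Private", "State-gov", "Private"],
    }
)

numerical = df[["age"]]
categorical = df[["workclass"]]

scaled = (numerical - numerical.mean()) / numerical.std()  # numerical step
one_hot = pd.get_dummies(categorical)                      # categorical step

combined = pd.concat([scaled, one_hot], axis=1)
assert list(combined.columns) == ["age", "workclass_Private", "workclass_State-gov"]
```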
# -# Scikit-learn provides a `ColumnTransformer` class which will send specific +# Scikit-learn provides a `ColumnTransformer` class which sends specific # columns to a specific transformer, making it easy to fit a single predictive # model on a dataset that combines both kinds of variables together # (heterogeneously typed tabular data). # # We first define the columns depending on their data type: # -# * **one-hot encoding** will be applied to categorical columns. Besides, we use +# * **one-hot encoding** is applied to categorical columns. Besides, we use # `handle_unknown="ignore"` to solve the potential issues due to rare # categories. # * **numerical scaling** numerical features which will be standardized. @@ -107,11 +107,11 @@ # A `ColumnTransformer` does the following: # # * It **splits the columns** of the original dataset based on the column names -# or indices provided. We will obtain as many subsets as the number of -# transformers passed into the `ColumnTransformer`. +# or indices provided. We obtain as many subsets as the number of transformers +# passed into the `ColumnTransformer`. # * It **transforms each subsets**. A specific transformer is applied to each -# subset: it will internally call `fit_transform` or `transform`. The output -# of this step is a set of transformed datasets. +# subset: it internally calls `fit_transform` or `transform`. The output of +# this step is a set of transformed datasets. # * It then **concatenates the transformed datasets** into a single dataset. # The important thing is that `ColumnTransformer` is like any other scikit-learn @@ -161,7 +161,7 @@ # %% [markdown] # Then, we can send the raw dataset straight to the pipeline. Indeed, we do not # need to make any manual preprocessing (calling the `transform` or -# `fit_transform` methods) as it will be handled when calling the `predict` +# `fit_transform` methods) as it is already handled when calling the `predict` # method. 
As an example, we predict on the five first samples from the test set. # %% @@ -212,10 +212,10 @@ # # However, it is often useful to check whether more complex models such as an # ensemble of decision trees can lead to higher predictive performance. In this -# section we will use such a model called **gradient-boosting trees** and -# evaluate its generalization performance. More precisely, the scikit-learn -# model we will use is called `HistGradientBoostingClassifier`. Note that -# boosting models will be covered in more detail in a future module. +# section we use such a model called **gradient-boosting trees** and evaluate +# its generalization performance. More precisely, the scikit-learn model we use +# is called `HistGradientBoostingClassifier`. Note that boosting models will be +# covered in more detail in a future module. # # For tree-based models, the handling of numerical and categorical variables is # simpler than for linear models: diff --git a/python_scripts/03_categorical_pipeline_ex_01.py b/python_scripts/03_categorical_pipeline_ex_01.py index ae19eab2f..4f2054867 100644 --- a/python_scripts/03_categorical_pipeline_ex_01.py +++ b/python_scripts/03_categorical_pipeline_ex_01.py @@ -5,7 +5,7 @@ # extension: .py # format_name: percent # format_version: '1.3' -# jupytext_version: 1.14.5 +# jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # name: python3 @@ -39,9 +39,8 @@ # %% [markdown] # In the previous notebook, we used `sklearn.compose.make_column_selector` to # automatically select columns with a specific data type (also called `dtype`). -# Here, we will use this selector to get only the columns containing strings -# (column with `object` dtype) that correspond to categorical features in our -# dataset. +# Here, we use this selector to get only the columns containing strings (column +# with `object` dtype) that correspond to categorical features in our dataset. 
# %% from sklearn.compose import make_column_selector as selector @@ -73,11 +72,11 @@ # # ```{note} # Be aware that if an error happened during the cross-validation, -# `cross_validate` will raise a warning and return NaN (Not a Number) as scores. +# `cross_validate` would raise a warning and return NaN (Not a Number) as scores. # To make it raise a standard Python exception with a traceback, you can pass # the `error_score="raise"` argument in the call to `cross_validate`. An -# exception will be raised instead of a warning at the first encountered problem -# and `cross_validate` will stop right away instead of returning NaN values. +# exception would be raised instead of a warning at the first encountered problem +# and `cross_validate` would stop right away instead of returning NaN values. # This is particularly handy when developing complex machine learning pipelines. # ``` @@ -88,8 +87,8 @@ # %% [markdown] # Now, we would like to compare the generalization performance of our previous -# model with a new model where instead of using an `OrdinalEncoder`, we will use -# a `OneHotEncoder`. Repeat the model evaluation using cross-validation. Compare +# model with a new model where instead of using an `OrdinalEncoder`, we use a +# `OneHotEncoder`. Repeat the model evaluation using cross-validation. Compare # the score of both models and conclude on the impact of choosing a specific # encoding strategy when using a linear model. 
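The mechanics of the two encodings being compared can be sketched with pandas alone (toy data; the course itself uses `OrdinalEncoder` and `OneHotEncoder`):

```python
import pandas as pd

# Ordinal encoding maps each category to an integer, implying an order, while
# one-hot encoding creates one 0/1 column per category, implying no order.
education = pd.Series(["HS-grad", "Bachelors", "Masters", "Bachelors"])

ordinal = education.astype("category").cat.codes  # Bachelors=0, HS-grad=1, Masters=2
one_hot = pd.get_dummies(education)               # one column per category

assert ordinal.tolist() == [1, 0, 2, 0]
assert one_hot.shape == (4, 3)
```

The integer codes here follow the lexicographic order of the category strings, which is exactly the kind of arbitrary ordering a linear model can be misled by.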
diff --git a/python_scripts/03_categorical_pipeline_ex_02.py b/python_scripts/03_categorical_pipeline_ex_02.py index 7daacfbd4..979b8b0b9 100644 --- a/python_scripts/03_categorical_pipeline_ex_02.py +++ b/python_scripts/03_categorical_pipeline_ex_02.py @@ -5,7 +5,7 @@ # extension: .py # format_name: percent # format_version: '1.3' -# jupytext_version: 1.14.5 +# jupytext_version: 1.15.2 # kernelspec: # display_name: Python 3 # name: python3 diff --git a/python_scripts/03_categorical_pipeline_sol_01.py b/python_scripts/03_categorical_pipeline_sol_01.py index 0847e7e30..e7b30598c 100644 --- a/python_scripts/03_categorical_pipeline_sol_01.py +++ b/python_scripts/03_categorical_pipeline_sol_01.py @@ -33,9 +33,8 @@ # %% [markdown] # In the previous notebook, we used `sklearn.compose.make_column_selector` to # automatically select columns with a specific data type (also called `dtype`). -# Here, we will use this selector to get only the columns containing strings -# (column with `object` dtype) that correspond to categorical features in our -# dataset. +# Here, we use this selector to get only the columns containing strings (column +# with `object` dtype) that correspond to categorical features in our dataset. # %% from sklearn.compose import make_column_selector as selector @@ -71,11 +70,11 @@ # # ```{note} # Be aware that if an error happened during the cross-validation, -# `cross_validate` will raise a warning and return NaN (Not a Number) as scores. +# `cross_validate` would raise a warning and return NaN (Not a Number) as scores. # To make it raise a standard Python exception with a traceback, you can pass # the `error_score="raise"` argument in the call to `cross_validate`. An -# exception will be raised instead of a warning at the first encountered problem -# and `cross_validate` will stop right away instead of returning NaN values. 
+# exception would be raised instead of a warning at the first encountered problem
+# and `cross_validate` would stop right away instead of returning NaN values.
# This is particularly handy when developing complex machine learning pipelines.
# ```

@@ -114,8 +113,8 @@

# %% [markdown]
# Now, we would like to compare the generalization performance of our previous
-# model with a new model where instead of using an `OrdinalEncoder`, we will use
-# a `OneHotEncoder`. Repeat the model evaluation using cross-validation. Compare
+# model with a new model where instead of using an `OrdinalEncoder`, we use a
+# `OneHotEncoder`. Repeat the model evaluation using cross-validation. Compare
# the score of both models and conclude on the impact of choosing a specific
# encoding strategy when using a linear model.

@@ -139,4 +138,4 @@
#
# The important message here is: linear model and `OrdinalEncoder` are used
# together only for ordinal categorical features, i.e. features that have a
-# specific ordering. Otherwise, your model will perform poorly.
+# specific ordering. Otherwise, your model would perform poorly.
diff --git a/python_scripts/03_categorical_pipeline_sol_02.py b/python_scripts/03_categorical_pipeline_sol_02.py
index f73671fe4..36e8ddcdc 100644
--- a/python_scripts/03_categorical_pipeline_sol_02.py
+++ b/python_scripts/03_categorical_pipeline_sol_02.py
@@ -199,7 +199,7 @@
# | Tree-based model | `OrdinalEncoder` | `OrdinalEncoder` |
# | Linear model | `OrdinalEncoder` with caution | `OneHotEncoder` |
#
-# - `OneHotEncoder`: will always do something meaningful, but can be unnecessary
+# - `OneHotEncoder`: always does something meaningful, but can be unnecessarily
# slow with trees.
# - `OrdinalEncoder`: can be detrimental for linear models unless your category # has a meaningful order and you make sure that `OrdinalEncoder` respects this diff --git a/python_scripts/03_categorical_pipeline_visualization.py b/python_scripts/03_categorical_pipeline_visualization.py index 0b10a6f42..ad22e5ee3 100644 --- a/python_scripts/03_categorical_pipeline_visualization.py +++ b/python_scripts/03_categorical_pipeline_visualization.py @@ -19,8 +19,8 @@ # ## First we load the dataset # %% [markdown] -# We need to define our data and target. In this case we will build a -# classification model +# We need to define our data and target. In this case we build a classification +# model # %% import pandas as pd From 2db3b7e5066f46f507cc174f1788fc8417def73d Mon Sep 17 00:00:00 2001 From: ArturoAmorQ Date: Tue, 7 Nov 2023 15:35:59 +0100 Subject: [PATCH 3/3] Synchronize notebooks --- notebooks/01_tabular_data_exploration.ipynb | 34 +++++++------- .../01_tabular_data_exploration_ex_01.ipynb | 2 +- .../01_tabular_data_exploration_sol_01.ipynb | 2 +- ..._numerical_pipeline_cross_validation.ipynb | 24 +++++----- notebooks/02_numerical_pipeline_ex_00.ipynb | 7 +-- notebooks/02_numerical_pipeline_ex_01.ipynb | 8 ++-- .../02_numerical_pipeline_hands_on.ipynb | 23 +++++----- notebooks/02_numerical_pipeline_scaling.ipynb | 31 ++++++------- notebooks/02_numerical_pipeline_sol_00.ipynb | 7 +-- notebooks/02_numerical_pipeline_sol_01.ipynb | 14 +++--- notebooks/03_categorical_pipeline.ipynb | 46 +++++++++---------- ...egorical_pipeline_column_transformer.ipynb | 40 ++++++++-------- notebooks/03_categorical_pipeline_ex_01.ipynb | 15 +++--- .../03_categorical_pipeline_sol_01.ipynb | 17 ++++--- .../03_categorical_pipeline_sol_02.ipynb | 2 +- ...3_categorical_pipeline_visualization.ipynb | 4 +- 16 files changed, 136 insertions(+), 140 deletions(-) diff --git a/notebooks/01_tabular_data_exploration.ipynb b/notebooks/01_tabular_data_exploration.ipynb index 2f47aadde..6e11251e6 100644 --- 
a/notebooks/01_tabular_data_exploration.ipynb +++ b/notebooks/01_tabular_data_exploration.ipynb @@ -6,8 +6,8 @@ "source": [ "# First look at our dataset\n", "\n", - "In this notebook, we will look at the necessary steps required before any\n", - " machine learning takes place. It involves:\n", + "In this notebook, we look at the necessary steps required before any machine\n", + " learning takes place. It involves:\n", "\n", "* loading the data;\n", "* looking at the variables in the dataset, in particular, differentiate\n", @@ -23,14 +23,14 @@ "source": [ "## Loading the adult census dataset\n", "\n", - "We will use data from the 1994 US census that we downloaded from\n", + "We use data from the 1994 US census that we downloaded from\n", "[OpenML](http://openml.org/).\n", "\n", "You can look at the OpenML webpage to learn more about this dataset:\n", "\n", "\n", - "The dataset is available as a CSV (Comma-Separated Values) file and we will\n", - "use `pandas` to read it.\n", + "The dataset is available as a CSV (Comma-Separated Values) file and we use\n", + "`pandas` to read it.\n", "\n", "
\n", "

Note

\n", @@ -105,9 +105,9 @@ "The column named **class** is our target variable (i.e., the variable which we\n", "want to predict). The two possible classes are `<=50K` (low-revenue) and\n", "`>50K` (high-revenue). The resulting prediction problem is therefore a binary\n", - "classification problem as `class` has only two possible values. We will use\n", - "the left-over columns (any column other than `class`) as input variables for\n", - "our model." + "classification problem as `class` has only two possible values. We use the\n", + "left-over columns (any column other than `class`) as input variables for our\n", + "model." ] }, { @@ -131,7 +131,7 @@ "with \" <=50K\" than with \" >50K\". Class imbalance happens often in practice\n", "and may need special techniques when building a predictive model.
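The class imbalance between `' <=50K'` and `' >50K'` mentioned in this hunk can be quantified in a few lines of plain Python. A minimal sketch — the toy `target` list below only mimics the proportions of the real census column, it is not the actual data:

```python
from collections import Counter

def class_proportions(labels):
    """Return the relative frequency of each class label."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {label: count / total for label, count in counts.items()}

# Toy target column mimicking the imbalance of the adult census dataset
target = [" <=50K"] * 76 + [" >50K"] * 24
proportions = class_proportions(target)
print(proportions)  # {' <=50K': 0.76, ' >50K': 0.24}
```

In the notebooks themselves the same check is usually done directly on the target column; the helper here only makes the computation explicit.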

\n", "

For example in a medical setting, if we are trying to predict whether subjects\n", - "will develop a rare disease, there will be a lot more healthy subjects than\n", + "may develop a rare disease, there would be a lot more healthy subjects than\n", "ill subjects in the dataset.

\n", "
" ] @@ -389,8 +389,8 @@ "source": [ "import seaborn as sns\n", "\n", - "# We will plot a subset of the data to keep the plot readable and make the\n", - "# plotting faster\n", + "# We plot a subset of the data to keep the plot readable and make the plotting\n", + "# faster\n", "n_samples_to_plot = 5000\n", "columns = [\"age\", \"education-num\", \"hours-per-week\"]\n", "_ = sns.pairplot(\n", @@ -486,12 +486,12 @@ " a mix of blue points and orange points. It seems complicated to choose which\n", " class we should predict in this region.\n", "\n", - "It is interesting to note that some machine learning models will work\n", - "similarly to what we did: they are known as decision tree models. The two\n", - "thresholds that we chose (27 years and 40 hours) are somewhat arbitrary, i.e.\n", - "we chose them by only looking at the pairplot. In contrast, a decision tree\n", - "will choose the \"best\" splits based on data without human intervention or\n", - "inspection. Decision trees will be covered more in detail in a future module.\n", + "It is interesting to note that some machine learning models work similarly to\n", + "what we did: they are known as decision tree models. The two thresholds that\n", + "we chose (27 years and 40 hours) are somewhat arbitrary, i.e. we chose them by\n", + "only looking at the pairplot. In contrast, a decision tree chooses the \"best\"\n", + "splits based on data without human intervention or inspection. Decision trees\n", + "will be covered more in detail in a future module.\n", "\n", "Note that machine learning is often used when creating rules by hand is not\n", "straightforward. 
For example because we are in high dimension (many features\n", diff --git a/notebooks/01_tabular_data_exploration_ex_01.ipynb b/notebooks/01_tabular_data_exploration_ex_01.ipynb index 040c50c82..373db2d55 100644 --- a/notebooks/01_tabular_data_exploration_ex_01.ipynb +++ b/notebooks/01_tabular_data_exploration_ex_01.ipynb @@ -109,7 +109,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Looking at these distributions, how hard do you think it will be to classify\n", + "Looking at these distributions, how hard do you think it would be to classify\n", "the penguins only using `\"culmen depth\"` and `\"culmen length\"`?" ] } diff --git a/notebooks/01_tabular_data_exploration_sol_01.ipynb b/notebooks/01_tabular_data_exploration_sol_01.ipynb index 3cd2ae2c0..d8bd25e63 100644 --- a/notebooks/01_tabular_data_exploration_sol_01.ipynb +++ b/notebooks/01_tabular_data_exploration_sol_01.ipynb @@ -168,7 +168,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Looking at these distributions, how hard do you think it will be to classify\n", + "Looking at these distributions, how hard do you think it would be to classify\n", "the penguins only using `\"culmen depth\"` and `\"culmen length\"`?" 
] }, diff --git a/notebooks/02_numerical_pipeline_cross_validation.ipynb b/notebooks/02_numerical_pipeline_cross_validation.ipynb index c7422f698..82b8ac2eb 100644 --- a/notebooks/02_numerical_pipeline_cross_validation.ipynb +++ b/notebooks/02_numerical_pipeline_cross_validation.ipynb @@ -6,9 +6,9 @@ "source": [ "# Model evaluation using cross-validation\n", "\n", - "In this notebook, we will still use only numerical features.\n", + "In this notebook, we still use numerical features only.\n", "\n", - "We will discuss the practical aspects of assessing the generalization\n", + "Here we discuss the practical aspects of assessing the generalization\n", "performance of our model via **cross-validation** instead of a single\n", "train-test split.\n", "\n", @@ -32,8 +32,8 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "We will now drop the target from the data we will use to train our\n", - "predictive model." + "We now drop the target from the data we will use to train our predictive\n", + "model." ] }, { @@ -94,11 +94,11 @@ "## The need for cross-validation\n", "\n", "In the previous notebook, we split the original data into a training set and a\n", - "testing set. The score of a model will in general depend on the way we make\n", - "such a split. One downside of doing a single split is that it does not give\n", - "any information about this variability. Another downside, in a setting where\n", - "the amount of data is small, is that the data available for training and\n", - "testing will be even smaller after splitting.\n", + "testing set. The score of a model in general depends on the way we make such a\n", + "split. One downside of doing a single split is that it does not give any\n", + "information about this variability. Another downside, in a setting where the\n", + "amount of data is small, is that the data available for training and testing\n", + "would be even smaller after splitting.\n", "\n", "Instead, we can use cross-validation. 
Cross-validation consists of repeating\n", "the procedure such that the training and testing sets are different each time.\n", @@ -107,8 +107,8 @@ "model's generalization performance.\n", "\n", "Note that there exists several cross-validation strategies, each of them\n", - "defines how to repeat the `fit`/`score` procedure. In this section, we will\n", - "use the K-fold strategy: the entire dataset is split into `K` partitions. The\n", + "defines how to repeat the `fit`/`score` procedure. In this section, we use the\n", + "K-fold strategy: the entire dataset is split into `K` partitions. The\n", "`fit`/`score` procedure is repeated `K` times where at each iteration `K - 1`\n", "partitions are used to fit the model and `1` partition is used to score. The\n", "figure below illustrates this K-fold strategy.\n", @@ -178,7 +178,7 @@ "[`sklearn.model_selection.cross_validate`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_validate.html)\n", "to collect additional information, such as the training scores of the models\n", "obtained on each round or even return the models themselves instead of\n", - "discarding them. These features will be covered in a future notebook.\n", + "discarding them. These features will be covered in a future notebook.\n", "\n", "Let's extract the scores computed on the test fold of each cross-validation\n", "round from the `cv_result` dictionary and compute the mean accuracy and the\n", diff --git a/notebooks/02_numerical_pipeline_ex_00.ipynb b/notebooks/02_numerical_pipeline_ex_00.ipynb index ef7d6b923..4c09e2233 100644 --- a/notebooks/02_numerical_pipeline_ex_00.ipynb +++ b/notebooks/02_numerical_pipeline_ex_00.ipynb @@ -44,11 +44,12 @@ "number of neighbors we are going to use to make a prediction for a new data\n", "point.\n", "\n", - "What is the default value of the `n_neighbors` parameter? 
Hint: Look at the\n", - "documentation on the [scikit-learn\n", + "What is the default value of the `n_neighbors` parameter?\n", + "\n", + "**Hint**: Look at the documentation on the [scikit-learn\n", "website](https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html)\n", "or directly access the description inside your notebook by running the\n", - "following cell. This will open a pager pointing to the documentation." + "following cell. This opens a pager pointing to the documentation." ] }, { diff --git a/notebooks/02_numerical_pipeline_ex_01.ipynb b/notebooks/02_numerical_pipeline_ex_01.ipynb index 688f435e6..08c008f6b 100644 --- a/notebooks/02_numerical_pipeline_ex_01.ipynb +++ b/notebooks/02_numerical_pipeline_ex_01.ipynb @@ -37,8 +37,8 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "We will first split our dataset to have the target separated from the data\n", - "used to train our predictive model." + "We first split our dataset to have the target separated from the data used to\n", + "train our predictive model." ] }, { @@ -93,8 +93,8 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Use a `DummyClassifier` such that the resulting classifier will always predict\n", - "the class `' >50K'`. What is the accuracy score on the test set? Repeat the\n", + "Use a `DummyClassifier` such that the resulting classifier always predicts the\n", + "class `' >50K'`. What is the accuracy score on the test set? 
Repeat the\n", "experiment by always predicting the class `' <=50K'`.\n", "\n", "Hint: you can set the `strategy` parameter of the `DummyClassifier` to achieve\n", diff --git a/notebooks/02_numerical_pipeline_hands_on.ipynb b/notebooks/02_numerical_pipeline_hands_on.ipynb index 326bc0aad..fff46e8cc 100644 --- a/notebooks/02_numerical_pipeline_hands_on.ipynb +++ b/notebooks/02_numerical_pipeline_hands_on.ipynb @@ -19,8 +19,7 @@ "* using a scikit-learn helper to separate data into train-test sets;\n", "* training and evaluating a more complex scikit-learn model.\n", "\n", - "We will start by loading the adult census dataset used during the data\n", - "exploration.\n", + "We start by loading the adult census dataset used during the data exploration.\n", "\n", "## Loading the entire dataset\n", "\n", @@ -105,13 +104,13 @@ "numerical data usually requires very little work before getting started with\n", "training.\n", "\n", - "The first task here will be to identify numerical data in our dataset.\n", + "The first task here is to identify numerical data in our dataset.\n", "\n", "
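Identifying numerical columns — while remembering that integer codes may hide categories — can be sketched without scikit-learn. The `looks_numerical` helper and the toy `columns` dictionary below are hypothetical illustrations, not part of the course material:

```python
def looks_numerical(values):
    """Heuristic: a column is a numerical *candidate* if all its values are
    ints or floats (booleans excluded). Candidates still need manual review,
    because integers may encode categories."""
    return all(
        isinstance(v, (int, float)) and not isinstance(v, bool) for v in values
    )

columns = {
    "age": [25, 38, 28],                               # truly numerical
    "workclass": ["Private", "State-gov", "Private"],  # categorical (strings)
    "education-num": [7, 9, 12],                       # numbers encoding a category rank!
}

numerical_candidates = [
    name for name, vals in columns.items() if looks_numerical(vals)
]
print(numerical_candidates)  # ['age', 'education-num'] -- needs manual review
```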
\n", "

Caution!

\n", - "

Numerical data are represented with numbers, but numbers are not always\n", - "representing numerical data. Categories could already be encoded with\n", - "numbers and you will need to identify these features.

\n", + "

Numerical data are represented with numbers, but numbers do not always\n", + "represent numerical data. Categories could already be encoded with\n", + "numbers and you may need to identify these features.

\n", "
\n", "\n", "Thus, we can check the data type for each of the column in the dataset." @@ -209,7 +208,7 @@ "source": [ "We can see the age varies between 17 and 90 years.\n", "\n", - "We could extend our analysis and we will find that `\"capital-gain\"`,\n", + "We could extend our analysis and we would find that `\"capital-gain\"`,\n", "`\"capital-loss\"`, and `\"hours-per-week\"` are also representing quantitative\n", "data.\n", "\n", @@ -273,7 +272,7 @@ "source": [ "When calling the function `train_test_split`, we specified that we would like\n", "to have 25% of samples in the testing set while the remaining samples (75%)\n", - "will be available in the training set. We can check quickly if we got what we\n", + "are assigned to the training set. We can check quickly if we got what we\n", "expected." ] }, @@ -309,8 +308,8 @@ "source": [ "In the previous notebook, we used a k-nearest neighbors model. While this\n", "model is intuitive to understand, it is not widely used in practice. Now, we\n", - "will use a more useful model, called a logistic regression, which belongs to\n", - "the linear models family.\n", + "use a more useful model, called a logistic regression, which belongs to the\n", + "linear models family.\n", "\n", "
\n", "

Note

\n", @@ -321,8 +320,8 @@ "
  • if 0.1 * age + 3.3 * hours-per-week - 15.1 > 0, predict high-income
  • \n", "
  • otherwise predict low-income
  • \n", "\n", - "

    Linear models, and in particular the logistic regression, will be covered in\n", - "more details in the \"Linear models\" module later in this course. For now the\n", + "

    Linear models, and in particular the logistic regression, will be covered\n", + "more in detail in the \"Linear models\" module later in this course. For now the\n", "focus is to use this logistic regression model in scikit-learn rather than\n", "understand how it works in details.

    \n", "
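The note above spells out a toy linear decision rule (`0.1 * age + 3.3 * hours-per-week - 15.1 > 0`). Written out as code — with the caveat that these coefficients are the note's illustrative values, not fitted parameters:

```python
def predict_income(age, hours_per_week):
    """Toy linear decision rule from the note: a weighted sum of features
    compared against a threshold. Coefficients are illustrative only."""
    score = 0.1 * age + 3.3 * hours_per_week - 15.1
    return "high-income" if score > 0 else "low-income"

print(predict_income(age=40, hours_per_week=45))  # high-income
print(predict_income(age=20, hours_per_week=2))   # low-income
```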
    \n", diff --git a/notebooks/02_numerical_pipeline_scaling.ipynb b/notebooks/02_numerical_pipeline_scaling.ipynb index c7bd8d751..4fe003f24 100644 --- a/notebooks/02_numerical_pipeline_scaling.ipynb +++ b/notebooks/02_numerical_pipeline_scaling.ipynb @@ -6,9 +6,9 @@ "source": [ "# Preprocessing for numerical features\n", "\n", - "In this notebook, we will still use only numerical features.\n", + "In this notebook, we still use numerical features only.\n", "\n", - "We will introduce these new aspects:\n", + "Here we introduce these new aspects:\n", "\n", "* an example of preprocessing, namely **scaling numerical variables**;\n", "* using a scikit-learn **pipeline** to chain preprocessing and model training.\n", @@ -33,8 +33,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "We will now drop the target from the data we will use to train our predictive\n", - "model." + "We now drop the target from the data we use to train our predictive model." ] }, { @@ -115,7 +114,7 @@ "source": [ "We see that the dataset's features span across different ranges. Some\n", "algorithms make some assumptions regarding the feature distributions and\n", - "usually normalizing features will be helpful to address these assumptions.\n", + "normalizing features is usually helpful to address such assumptions.\n", "\n", "
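The standardization this notebook introduces (shift to 0-mean, scale to unit standard deviation) can be sketched in plain Python to make the `fit`/`transform` split explicit. The `SimpleStandardScaler` below is a hypothetical one-feature stand-in, not scikit-learn's implementation:

```python
import math

class SimpleStandardScaler:
    """Minimal sketch of StandardScaler's mechanics for a single feature."""

    def fit(self, values):
        # Learn the model state: mean and standard deviation of the data
        self.mean_ = sum(values) / len(values)
        self.scale_ = math.sqrt(
            sum((v - self.mean_) ** 2 for v in values) / len(values)
        )
        return self

    def transform(self, values):
        # Shift by the learned mean, scale by the learned standard deviation
        return [(v - self.mean_) / self.scale_ for v in values]

scaler = SimpleStandardScaler().fit([10.0, 20.0, 30.0])
print(scaler.mean_)              # 20.0
print(scaler.transform([20.0]))  # [0.0]
```

Note that `transform` reuses the statistics learned during `fit`; this is exactly why fitting the scaler on the training set and reusing it on the test set avoids information leakage.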
    \n", "

    Tip

    \n", @@ -133,13 +132,13 @@ "Whether or not a machine learning model requires scaling the features depends\n", "on the model family. Linear models such as logistic regression generally\n", "benefit from scaling the features while other models such as decision trees do\n", - "not need such preprocessing (but will not suffer from it).\n", + "not need such preprocessing (but would not suffer from it).\n", "\n", "We show how to apply such normalization using a scikit-learn transformer\n", "called `StandardScaler`. This transformer shifts and scales each feature\n", "individually so that they all have a 0-mean and a unit standard deviation.\n", "\n", - "We will investigate different steps used in scikit-learn to achieve such a\n", + "We now investigate different steps used in scikit-learn to achieve such a\n", "transformation of the data.\n", "\n", "First, one needs to call the method `fit` in order to learn the scaling from\n", @@ -175,10 +174,10 @@ "\n", "
    \n", "

    Note

    \n", - "

    The fact that the model states of this scaler are arrays of means and\n", - "standard deviations is specific to the StandardScaler. Other\n", - "scikit-learn transformers will compute different statistics and store them\n", - "as model states, in the same fashion.

    \n", + "

    The fact that the model states of this scaler are arrays of means and standard\n", + "deviations is specific to the StandardScaler. Other scikit-learn\n", + "transformers may compute different statistics and store them as model states,\n", + "in a similar fashion.

    \n", "
    \n", "\n", "We can inspect the computed means and standard deviations." @@ -353,7 +352,7 @@ "source": [ "We can easily combine sequential operations with a scikit-learn `Pipeline`,\n", "which chains together operations and is used as any other classifier or\n", - "regressor. The helper function `make_pipeline` will create a `Pipeline`: it\n", + "regressor. The helper function `make_pipeline` creates a `Pipeline`: it\n", "takes as arguments the successive transformations to perform, followed by the\n", "classifier or regressor model." ] @@ -378,8 +377,8 @@ "source": [ "The `make_pipeline` function did not require us to give a name to each step.\n", "Indeed, it was automatically assigned based on the name of the classes\n", - "provided; a `StandardScaler` will be a step named `\"standardscaler\"` in the\n", - "resulting pipeline. We can check the name of each steps of our model:" + "provided; a `StandardScaler` step is named `\"standardscaler\"` in the resulting\n", + "pipeline. We can check the name of each steps of our model:" ] }, { @@ -421,7 +420,7 @@ "![pipeline fit diagram](../figures/api_diagram-pipeline.fit.svg)\n", "\n", "When calling `model.fit`, the method `fit_transform` from each underlying\n", - "transformer (here a single transformer) in the pipeline will be called to:\n", + "transformer (here a single transformer) in the pipeline is called to:\n", "\n", "- learn their internal model states\n", "- transform the training data. Finally, the preprocessed data are provided to\n", @@ -452,7 +451,7 @@ "called to preprocess the data. Note that there is no need to call the `fit`\n", "method for these transformers because we are using the internal model states\n", "computed when calling `model.fit`. 
The preprocessed data is then provided to\n", - "the predictor that will output the predicted target by calling its method\n", + "the predictor that outputs the predicted target by calling its method\n", "`predict`.\n", "\n", "As a shorthand, we can check the score of the full predictive pipeline calling\n", diff --git a/notebooks/02_numerical_pipeline_sol_00.ipynb b/notebooks/02_numerical_pipeline_sol_00.ipynb index ff144d5c0..e5be6f7e2 100644 --- a/notebooks/02_numerical_pipeline_sol_00.ipynb +++ b/notebooks/02_numerical_pipeline_sol_00.ipynb @@ -44,11 +44,12 @@ "number of neighbors we are going to use to make a prediction for a new data\n", "point.\n", "\n", - "What is the default value of the `n_neighbors` parameter? Hint: Look at the\n", - "documentation on the [scikit-learn\n", + "What is the default value of the `n_neighbors` parameter?\n", + "\n", + "**Hint**: Look at the documentation on the [scikit-learn\n", "website](https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html)\n", "or directly access the description inside your notebook by running the\n", - "following cell. This will open a pager pointing to the documentation." + "following cell. This opens a pager pointing to the documentation." ] }, { diff --git a/notebooks/02_numerical_pipeline_sol_01.ipynb b/notebooks/02_numerical_pipeline_sol_01.ipynb index 2198c76b8..352cf234f 100644 --- a/notebooks/02_numerical_pipeline_sol_01.ipynb +++ b/notebooks/02_numerical_pipeline_sol_01.ipynb @@ -37,8 +37,8 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "We will first split our dataset to have the target separated from the data\n", - "used to train our predictive model." + "We first split our dataset to have the target separated from the data used to\n", + "train our predictive model." 
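The pipeline call sequence described earlier — `fit_transform` on each transformer during `fit`, then `transform` followed by `predict` at prediction time — can be sketched in a few lines. All class names below (`MiniPipeline`, `MeanCenterer`, `SignPredictor`) are hypothetical stand-ins, not scikit-learn objects:

```python
class MiniPipeline:
    """Sketch of a two-stage pipeline: a transformer followed by a predictor."""

    def __init__(self, transformer, predictor):
        self.transformer = transformer
        self.predictor = predictor

    def fit(self, X, y):
        # fit_transform learns the transformer state AND preprocesses X
        X_t = self.transformer.fit_transform(X)
        self.predictor.fit(X_t, y)
        return self

    def predict(self, X):
        # transform reuses the state learned during fit -- no refitting
        return self.predictor.predict(self.transformer.transform(X))

class MeanCenterer:
    def fit_transform(self, X):
        self.mean_ = sum(X) / len(X)
        return self.transform(X)

    def transform(self, X):
        return [x - self.mean_ for x in X]

class SignPredictor:
    def fit(self, X, y):
        return self

    def predict(self, X):
        return ["high" if x > 0 else "low" for x in X]

model = MiniPipeline(MeanCenterer(), SignPredictor())
model.fit([1.0, 2.0, 3.0], ["low", "low", "high"])
print(model.predict([4.0]))  # ['high']
```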
] }, { @@ -96,8 +96,8 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Use a `DummyClassifier` such that the resulting classifier will always predict\n", - "the class `' >50K'`. What is the accuracy score on the test set? Repeat the\n", + "Use a `DummyClassifier` such that the resulting classifier always predicts the\n", + "class `' >50K'`. What is the accuracy score on the test set? Repeat the\n", "experiment by always predicting the class `' <=50K'`.\n", "\n", "Hint: you can set the `strategy` parameter of the `DummyClassifier` to achieve\n", @@ -131,8 +131,8 @@ }, "source": [ "We clearly see that the score is below 0.5 which might be surprising at first.\n", - "We will now check the generalization performance of a model which always\n", - "predict the low revenue class, i.e. `\" <=50K\"`." + "We now check the generalization performance of a model which always predicts\n", + "the low revenue class, i.e. `\" <=50K\"`." ] }, { @@ -175,7 +175,7 @@ }, "source": [ "Therefore, any predictive model giving results below this dummy classifier\n", - "will not be helpful." + "would not be helpful." ] }, { diff --git a/notebooks/03_categorical_pipeline.ipynb b/notebooks/03_categorical_pipeline.ipynb index 3972842a5..575268c9f 100644 --- a/notebooks/03_categorical_pipeline.ipynb +++ b/notebooks/03_categorical_pipeline.ipynb @@ -6,9 +6,9 @@ "source": [ "# Encoding of categorical variables\n", "\n", - "In this notebook, we will present typical ways of dealing with\n", - "**categorical variables** by encoding them, namely **ordinal encoding** and\n", - "**one-hot encoding**." + "In this notebook, we present some typical ways of dealing with **categorical\n", + "variables** by encoding them, namely **ordinal encoding** and **one-hot\n", + "encoding**." ] }, { @@ -94,9 +94,9 @@ "## Select features based on their data type\n", "\n", "In the previous notebook, we manually defined the numerical columns. We could\n", - "do a similar approach. 
Instead, we will use the scikit-learn helper function\n", - "`make_column_selector`, which allows us to select columns based on\n", - "their data type. We will illustrate how to use this helper." + "do a similar approach. Instead, we can use the scikit-learn helper function\n", + "`make_column_selector`, which allows us to select columns based on their data\n", + "type. We now illustrate how to use this helper." ] }, { @@ -159,9 +159,8 @@ "### Encoding ordinal categories\n", "\n", "The most intuitive strategy is to encode each category with a different\n", - "number. The `OrdinalEncoder` will transform the data in such manner.\n", - "We will start by encoding a single column to understand how the encoding\n", - "works." + "number. The `OrdinalEncoder` transforms the data in such manner. We start by\n", + "encoding a single column to understand how the encoding works." ] }, { @@ -258,13 +257,13 @@ "\n", "`OneHotEncoder` is an alternative encoder that prevents the downstream\n", "models to make a false assumption about the ordering of categories. For a\n", - "given feature, it will create as many new columns as there are possible\n", + "given feature, it creates as many new columns as there are possible\n", "categories. For a given sample, the value of the column corresponding to the\n", - "category will be set to `1` while all the columns of the other categories\n", - "will be set to `0`.\n", + "category is set to `1` while all the columns of the other categories\n", + "are set to `0`.\n", "\n", - "We will start by encoding a single feature (e.g. `\"education\"`) to illustrate\n", - "how the encoding works." + "We can encode a single feature (e.g. `\"education\"`) to illustrate how the\n", + "encoding works." ] }, { @@ -299,7 +298,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "We see that encoding a single feature will give a dataframe full of zeros\n", + "We see that encoding a single feature gives a dataframe full of zeros\n", "and ones. 
Each category (unique value) became a column; the encoding\n", "returned, for each sample, a 1 to specify which category it belongs to.\n", "\n", @@ -353,8 +352,8 @@ "source": [ "### Choosing an encoding strategy\n", "\n", - "Choosing an encoding strategy will depend on the underlying models and the\n", - "type of categories (i.e. ordinal vs. nominal)." + "Choosing an encoding strategy depends on the underlying models and the type of\n", + "categories (i.e. ordinal vs. nominal)." ] }, { @@ -373,12 +372,11 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "\n", - "Using an `OrdinalEncoder` will output ordinal categories. This means\n", + "Using an `OrdinalEncoder` outputs ordinal categories. This means\n", "that there is an order in the resulting categories (e.g. `0 < 1 < 2`). The\n", "impact of violating this ordering assumption is really dependent on the\n", - "downstream models. Linear models will be impacted by misordered categories\n", - "while tree-based models will not.\n", + "downstream models. Linear models would be impacted by misordered categories\n", + "while tree-based models would not.\n", "\n", "You can still use an `OrdinalEncoder` with linear models but you need to be\n", "sure that:\n", @@ -426,7 +424,7 @@ "We see that the `\"Holand-Netherlands\"` category is occurring rarely. This will\n", "be a problem during cross-validation: if the sample ends up in the test set\n", "during splitting then the classifier would not have seen the category during\n", - "training and will not be able to encode it.\n", + "training and would not be able to encode it.\n", "\n", "In scikit-learn, there are some possible solutions to bypass this issue:\n", "\n", @@ -455,8 +453,8 @@ "

    Tip

    \n", "

    Be aware the OrdinalEncoder exposes a parameter also named handle_unknown.\n", "It can be set to use_encoded_value. If that option is chosen, you can define\n", - "a fixed value to which all unknowns will be set to during transform. For\n", - "example, OrdinalEncoder(handle_unknown='use_encoded_value', unknown_value=42) will set all values encountered during transform to 42\n", + "a fixed value that is assigned to all unknown categories during transform.\n", + "For example, OrdinalEncoder(handle_unknown='use_encoded_value', unknown_value=-1) would set all values encountered during transform to -1\n", "which are not part of the data encountered during the fit call. You are\n", "going to use these parameters in the next exercise.

    \n", "
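The tip above about `handle_unknown='use_encoded_value'` can be illustrated with a minimal pure-Python stand-in for `OrdinalEncoder` — a sketch of the mechanics only, not scikit-learn's actual implementation:

```python
class MiniOrdinalEncoder:
    """Sketch: map each category seen during fit to an integer; categories
    unseen at transform time get a fixed unknown_value instead of an error."""

    def __init__(self, unknown_value=-1):
        self.unknown_value = unknown_value

    def fit(self, values):
        self.mapping_ = {cat: i for i, cat in enumerate(sorted(set(values)))}
        return self

    def transform(self, values):
        return [self.mapping_.get(v, self.unknown_value) for v in values]

encoder = MiniOrdinalEncoder(unknown_value=-1).fit(
    ["Bachelors", "HS-grad", "Masters"]
)
print(encoder.transform(["HS-grad", "Doctorate"]))  # [1, -1]
```

Here `"Doctorate"` was never seen during `fit`, so it is mapped to `-1` rather than raising an error — the same behaviour the tip describes for scikit-learn's encoder.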
    " diff --git a/notebooks/03_categorical_pipeline_column_transformer.ipynb b/notebooks/03_categorical_pipeline_column_transformer.ipynb index aca827f4c..f9f3d5293 100644 --- a/notebooks/03_categorical_pipeline_column_transformer.ipynb +++ b/notebooks/03_categorical_pipeline_column_transformer.ipynb @@ -6,12 +6,12 @@ "source": [ "# Using numerical and categorical variables together\n", "\n", - "In the previous notebooks, we showed the required preprocessing to apply\n", - "when dealing with numerical and categorical variables. However, we decoupled\n", - "the process to treat each type individually. In this notebook, we will show\n", - "how to combine these preprocessing steps.\n", + "In the previous notebooks, we showed the required preprocessing to apply when\n", + "dealing with numerical and categorical variables. However, we decoupled the\n", + "process to treat each type individually. In this notebook, we show how to\n", + "combine these preprocessing steps.\n", "\n", - "We will first load the entire adult census dataset." + "We first load the entire adult census dataset." ] }, { @@ -38,10 +38,10 @@ "source": [ "## Selection based on data types\n", "\n", - "We will separate categorical and numerical variables using their data\n", - "types to identify them, as we saw previously that `object` corresponds\n", - "to categorical columns (strings). We make use of `make_column_selector`\n", - "helper to select the corresponding columns." + "We separate categorical and numerical variables using their data types to\n", + "identify them, as we saw previously that `object` corresponds to categorical\n", + "columns (strings). We make use of `make_column_selector` helper to select the\n", + "corresponding columns." ] }, { @@ -84,14 +84,14 @@ "In the previous sections, we saw that we need to treat data differently\n", "depending on their nature (i.e. 
numerical or categorical).\n", "\n", - "Scikit-learn provides a `ColumnTransformer` class which will send specific\n", + "Scikit-learn provides a `ColumnTransformer` class which sends specific\n", "columns to a specific transformer, making it easy to fit a single predictive\n", "model on a dataset that combines both kinds of variables together\n", "(heterogeneously typed tabular data).\n", "\n", "We first define the columns depending on their data type:\n", "\n", - "* **one-hot encoding** will be applied to categorical columns. Besides, we use\n", + "* **one-hot encoding** is applied to categorical columns. Besides, we use\n", " `handle_unknown=\"ignore\"` to solve the potential issues due to rare\n", " categories.\n", "* **numerical scaling** numerical features which will be standardized.\n", @@ -149,11 +149,11 @@ "A `ColumnTransformer` does the following:\n", "\n", "* It **splits the columns** of the original dataset based on the column names\n", - " or indices provided. We will obtain as many subsets as the number of\n", - " transformers passed into the `ColumnTransformer`.\n", + " or indices provided. We obtain as many subsets as the number of transformers\n", + " passed into the `ColumnTransformer`.\n", "* It **transforms each subsets**. A specific transformer is applied to each\n", - " subset: it will internally call `fit_transform` or `transform`. The output\n", - " of this step is a set of transformed datasets.\n", + " subset: it internally calls `fit_transform` or `transform`. The output of\n", + " this step is a set of transformed datasets.\n", "* It then **concatenates the transformed datasets** into a single dataset.\n", "\n", "The important thing is that `ColumnTransformer` is like any other scikit-learn\n", @@ -234,7 +234,7 @@ "source": [ "Then, we can send the raw dataset straight to the pipeline. 
Indeed, we do not\n", "need to make any manual preprocessing (calling the `transform` or\n", - "`fit_transform` methods) as it will be handled when calling the `predict`\n", + "`fit_transform` methods) as it is already handled when calling the `predict`\n", "method. As an example, we predict on the five first samples from the test set." ] }, @@ -337,10 +337,10 @@ "\n", "However, it is often useful to check whether more complex models such as an\n", "ensemble of decision trees can lead to higher predictive performance. In this\n", - "section we will use such a model called **gradient-boosting trees** and\n", - "evaluate its generalization performance. More precisely, the scikit-learn\n", - "model we will use is called `HistGradientBoostingClassifier`. Note that\n", - "boosting models will be covered in more detail in a future module.\n", + "section we use such a model called **gradient-boosting trees** and evaluate\n", + "its generalization performance. More precisely, the scikit-learn model we use\n", + "is called `HistGradientBoostingClassifier`. Note that boosting models will be\n", + "covered in more detail in a future module.\n", "\n", "For tree-based models, the handling of numerical and categorical variables is\n", "simpler than for linear models:\n", diff --git a/notebooks/03_categorical_pipeline_ex_01.ipynb b/notebooks/03_categorical_pipeline_ex_01.ipynb index 1f7ab830e..d77bbef38 100644 --- a/notebooks/03_categorical_pipeline_ex_01.ipynb +++ b/notebooks/03_categorical_pipeline_ex_01.ipynb @@ -47,9 +47,8 @@ "source": [ "In the previous notebook, we used `sklearn.compose.make_column_selector` to\n", "automatically select columns with a specific data type (also called `dtype`).\n", - "Here, we will use this selector to get only the columns containing strings\n", - "(column with `object` dtype) that correspond to categorical features in our\n", - "dataset." 
+ "Here, we use this selector to get only the columns containing strings (column\n", + "with `object` dtype) that correspond to categorical features in our dataset." ] }, { @@ -102,11 +101,11 @@ "
    \n", "

    Note

    \n", "

    Be aware that if an error happened during the cross-validation,\n", - "cross_validate will raise a warning and return NaN (Not a Number) as scores.\n", + "cross_validate would raise a warning and return NaN (Not a Number) as scores.\n", "To make it raise a standard Python exception with a traceback, you can pass\n", "the error_score=\"raise\" argument in the call to cross_validate. An\n", - "exception will be raised instead of a warning at the first encountered problem\n", - "and cross_validate will stop right away instead of returning NaN values.\n", + "exception would be raised instead of a warning at the first encountered problem\n", + "and cross_validate would stop right away instead of returning NaN values.\n", "This is particularly handy when developing complex machine learning pipelines.

    \n", "
    " ] @@ -127,8 +126,8 @@ "metadata": {}, "source": [ "Now, we would like to compare the generalization performance of our previous\n", - "model with a new model where instead of using an `OrdinalEncoder`, we will use\n", - "a `OneHotEncoder`. Repeat the model evaluation using cross-validation. Compare\n", + "model with a new model where instead of using an `OrdinalEncoder`, we use a\n", + "`OneHotEncoder`. Repeat the model evaluation using cross-validation. Compare\n", "the score of both models and conclude on the impact of choosing a specific\n", "encoding strategy when using a linear model." ] diff --git a/notebooks/03_categorical_pipeline_sol_01.ipynb b/notebooks/03_categorical_pipeline_sol_01.ipynb index 206a36f4c..916e2be5f 100644 --- a/notebooks/03_categorical_pipeline_sol_01.ipynb +++ b/notebooks/03_categorical_pipeline_sol_01.ipynb @@ -47,9 +47,8 @@ "source": [ "In the previous notebook, we used `sklearn.compose.make_column_selector` to\n", "automatically select columns with a specific data type (also called `dtype`).\n", - "Here, we will use this selector to get only the columns containing strings\n", - "(column with `object` dtype) that correspond to categorical features in our\n", - "dataset." + "Here, we use this selector to get only the columns containing strings (column\n", + "with `object` dtype) that correspond to categorical features in our dataset." ] }, { @@ -106,11 +105,11 @@ "
    \n", "

    Note

    \n", "

    Be aware that if an error happened during the cross-validation,\n", - "cross_validate will raise a warning and return NaN (Not a Number) as scores.\n", + "cross_validate would raise a warning and return NaN (Not a Number) as scores.\n", "To make it raise a standard Python exception with a traceback, you can pass\n", "the error_score=\"raise\" argument in the call to cross_validate. An\n", - "exception will be raised instead of a warning at the first encountered problem\n", - "and cross_validate will stop right away instead of returning NaN values.\n", + "exception would be raised instead of a warning at the first encountered problem\n", + "and cross_validate would stop right away instead of returning NaN values.\n", "This is particularly handy when developing complex machine learning pipelines.

    \n", "
    " ] @@ -177,8 +176,8 @@ "metadata": {}, "source": [ "Now, we would like to compare the generalization performance of our previous\n", - "model with a new model where instead of using an `OrdinalEncoder`, we will use\n", - "a `OneHotEncoder`. Repeat the model evaluation using cross-validation. Compare\n", + "model with a new model where instead of using an `OrdinalEncoder`, we use a\n", + "`OneHotEncoder`. Repeat the model evaluation using cross-validation. Compare\n", "the score of both models and conclude on the impact of choosing a specific\n", "encoding strategy when using a linear model." ] @@ -216,7 +215,7 @@ "\n", "The important message here is: linear model and `OrdinalEncoder` are used\n", "together only for ordinal categorical features, i.e. features that have a\n", - "specific ordering. Otherwise, your model will perform poorly." + "specific ordering. Otherwise, your model would perform poorly." ] } ], diff --git a/notebooks/03_categorical_pipeline_sol_02.ipynb b/notebooks/03_categorical_pipeline_sol_02.ipynb index 725a86cdd..161d0cbdd 100644 --- a/notebooks/03_categorical_pipeline_sol_02.ipynb +++ b/notebooks/03_categorical_pipeline_sol_02.ipynb @@ -287,7 +287,7 @@ "\n", "\n", "
      \n", - "
    • OneHotEncoder: will always do something meaningful, but can be unnecessary\n", + "
• OneHotEncoder: always does something meaningful, but can be unnecessarily\n", "slow with trees.
    • \n", "
• OrdinalEncoder: can be detrimental for linear models unless your category\n", "has a meaningful order and you make sure that OrdinalEncoder respects this\n", diff --git a/notebooks/03_categorical_pipeline_visualization.ipynb index dd16ea0b3..48110a944 100644 --- a/notebooks/03_categorical_pipeline_visualization.ipynb +++ b/notebooks/03_categorical_pipeline_visualization.ipynb @@ -29,8 +29,8 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "We need to define our data and target. In this case we will build a\n", - "classification model" + "We need to define our data and target. In this case we build a classification\n", + "model." ] }, {
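Tying the notebooks touched by this patch together, an end-to-end sketch of the `ColumnTransformer` pattern they describe could look like this. The column names and data are illustrative, not the real adult census dataset; the structure (one-hot for `object` columns, scaling for numerical ones, raw dataframe fed straight to `fit`/`predict`) is what the notebooks document:

```python
import pandas as pd
from sklearn.compose import ColumnTransformer, make_column_selector as selector
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Illustrative heterogeneous data (not the real adult census columns).
data = pd.DataFrame({
    "workclass": ["Private", "State-gov", "Private", "Self-emp"] * 5,
    "age": [25, 38, 52, 41] * 5,
})
target = [0, 1, 0, 1] * 5

preprocessor = ColumnTransformer([
    # one-hot encode string columns; ignore categories unseen during fit
    ("categorical", OneHotEncoder(handle_unknown="ignore"),
     selector(dtype_include=object)),
    # standardize the numerical columns
    ("numerical", StandardScaler(), selector(dtype_exclude=object)),
])

model = make_pipeline(preprocessor, LogisticRegression())
model.fit(data, target)                   # no manual preprocessing needed
predictions = model.predict(data.head())  # predict on the first five rows
print(predictions)
```

The `ColumnTransformer` splits the columns between the two transformers, transforms each subset, and concatenates the results before the classifier sees them, which is why the raw dataframe can be sent straight to the pipeline.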