Generate notebooks #750

Merged
merged 3 commits
Nov 7, 2023
12 changes: 12 additions & 0 deletions jupyter-book/linear_models/linear_models_quiz_m4_03.md
@@ -79,6 +79,18 @@ _Select all answers that apply_

+++

```{admonition} Question
By default, a [`LogisticRegression`](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) in scikit-learn applies:

- a) no penalty
- b) a penalty that shrinks the magnitude of the weights towards zero (also called "l2 penalty")
- c) a penalty that ensures all weights are equal

_Select a single answer_
```
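The default can also be checked directly from the estimator's parameters (a quick sketch; the default shown here holds for recent scikit-learn versions, so verify against your installed release):

```python
from sklearn.linear_model import LogisticRegression

# Inspect the default regularization of LogisticRegression: recent
# scikit-learn versions apply an "l2" penalty by default, which shrinks
# the magnitude of the weights towards zero.
model = LogisticRegression()
print(model.get_params()["penalty"])
```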

+++

```{admonition} Question
The parameter `C` in a logistic regression is:

34 changes: 17 additions & 17 deletions notebooks/01_tabular_data_exploration.ipynb
@@ -6,8 +6,8 @@
"source": [
"# First look at our dataset\n",
"\n",
"In this notebook, we will look at the necessary steps required before any\n",
" machine learning takes place. It involves:\n",
"In this notebook, we look at the necessary steps required before any machine\n",
" learning takes place. It involves:\n",
"\n",
"* loading the data;\n",
"* looking at the variables in the dataset, in particular, differentiate\n",
@@ -23,14 +23,14 @@
"source": [
"## Loading the adult census dataset\n",
"\n",
"We will use data from the 1994 US census that we downloaded from\n",
"We use data from the 1994 US census that we downloaded from\n",
"[OpenML](http://openml.org/).\n",
"\n",
"You can look at the OpenML webpage to learn more about this dataset:\n",
"<http://www.openml.org/d/1590>\n",
"\n",
"The dataset is available as a CSV (Comma-Separated Values) file and we will\n",
"use `pandas` to read it.\n",
"The dataset is available as a CSV (Comma-Separated Values) file and we use\n",
"`pandas` to read it.\n",
"\n",
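The loading step described above can be sketched as follows (a minimal, self-contained example; the real notebook reads the adult census CSV from disk, so the inline data here is purely illustrative):

```python
import io

import pandas as pd

# Stand-in for the adult census CSV file; only the reading pattern matters.
csv_data = io.StringIO("age,hours-per-week,class\n25,40, <=50K\n52,50, >50K\n")
adult_census = pd.read_csv(csv_data)
print(adult_census.shape)  # 2 rows, 3 columns
```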
"<div class=\"admonition note alert alert-info\">\n",
"<p class=\"first admonition-title\" style=\"font-weight: bold;\">Note</p>\n",
@@ -105,9 +105,9 @@
"The column named **class** is our target variable (i.e., the variable which we\n",
"want to predict). The two possible classes are `<=50K` (low-revenue) and\n",
"`>50K` (high-revenue). The resulting prediction problem is therefore a binary\n",
"classification problem as `class` has only two possible values. We will use\n",
"the left-over columns (any column other than `class`) as input variables for\n",
"our model."
"classification problem as `class` has only two possible values. We use the\n",
"left-over columns (any column other than `class`) as input variables for our\n",
"model."
]
},
{
@@ -131,7 +131,7 @@
"with <tt class=\"docutils literal\">\" &lt;=50K\"</tt> than with <tt class=\"docutils literal\">\" &gt;50K\"</tt>. Class imbalance happens often in practice\n",
"and may need special techniques when building a predictive model.</p>\n",
"<p class=\"last\">For example in a medical setting, if we are trying to predict whether subjects\n",
"will develop a rare disease, there will be a lot more healthy subjects than\n",
"may develop a rare disease, there would be a lot more healthy subjects than\n",
"ill subjects in the dataset.</p>\n",
"</div>"
]
@@ -389,8 +389,8 @@
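The class imbalance mentioned in the note can be inspected with `value_counts` (a sketch on a synthetic target; in the notebook the series would be the **class** column):

```python
import pandas as pd

# Synthetic target mimicking the imbalance described above.
target = pd.Series([" <=50K"] * 7 + [" >50K"] * 3)
proportions = target.value_counts(normalize=True)
print(proportions)
```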
"source": [
"import seaborn as sns\n",
"\n",
"# We will plot a subset of the data to keep the plot readable and make the\n",
"# plotting faster\n",
"# We plot a subset of the data to keep the plot readable and make the plotting\n",
"# faster\n",
"n_samples_to_plot = 5000\n",
"columns = [\"age\", \"education-num\", \"hours-per-week\"]\n",
"_ = sns.pairplot(\n",
@@ -486,12 +486,12 @@
" a mix of blue points and orange points. It seems complicated to choose which\n",
" class we should predict in this region.\n",
"\n",
"It is interesting to note that some machine learning models will work\n",
"similarly to what we did: they are known as decision tree models. The two\n",
"thresholds that we chose (27 years and 40 hours) are somewhat arbitrary, i.e.\n",
"we chose them by only looking at the pairplot. In contrast, a decision tree\n",
"will choose the \"best\" splits based on data without human intervention or\n",
"inspection. Decision trees will be covered more in detail in a future module.\n",
"It is interesting to note that some machine learning models work similarly to\n",
"what we did: they are known as decision tree models. The two thresholds that\n",
"we chose (27 years and 40 hours) are somewhat arbitrary, i.e. we chose them by\n",
"only looking at the pairplot. In contrast, a decision tree chooses the \"best\"\n",
"splits based on data without human intervention or inspection. Decision trees\n",
"will be covered in more detail in a future module.\n",
"\n",
"Note that machine learning is often used when creating rules by hand is not\n",
"straightforward. For example because we are in high dimension (many features\n",
2 changes: 1 addition & 1 deletion notebooks/01_tabular_data_exploration_ex_01.ipynb
@@ -109,7 +109,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Looking at these distributions, how hard do you think it will be to classify\n",
"Looking at these distributions, how hard do you think it would be to classify\n",
"the penguins only using `\"culmen depth\"` and `\"culmen length\"`?"
]
}
2 changes: 1 addition & 1 deletion notebooks/01_tabular_data_exploration_sol_01.ipynb
@@ -168,7 +168,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Looking at these distributions, how hard do you think it will be to classify\n",
"Looking at these distributions, how hard do you think it would be to classify\n",
"the penguins only using `\"culmen depth\"` and `\"culmen length\"`?"
]
},
24 changes: 12 additions & 12 deletions notebooks/02_numerical_pipeline_cross_validation.ipynb
@@ -6,9 +6,9 @@
"source": [
"# Model evaluation using cross-validation\n",
"\n",
"In this notebook, we will still use only numerical features.\n",
"In this notebook, we still use numerical features only.\n",
"\n",
"We will discuss the practical aspects of assessing the generalization\n",
"Here we discuss the practical aspects of assessing the generalization\n",
"performance of our model via **cross-validation** instead of a single\n",
"train-test split.\n",
"\n",
@@ -32,8 +32,8 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"We will now drop the target from the data we will use to train our\n",
"predictive model."
"We now drop the target from the data we will use to train our predictive\n",
"model."
]
},
{
@@ -94,11 +94,11 @@
"## The need for cross-validation\n",
"\n",
"In the previous notebook, we split the original data into a training set and a\n",
"testing set. The score of a model will in general depend on the way we make\n",
"such a split. One downside of doing a single split is that it does not give\n",
"any information about this variability. Another downside, in a setting where\n",
"the amount of data is small, is that the data available for training and\n",
"testing will be even smaller after splitting.\n",
"testing set. The score of a model in general depends on the way we make such a\n",
"split. One downside of doing a single split is that it does not give any\n",
"information about this variability. Another downside, in a setting where the\n",
"amount of data is small, is that the data available for training and testing\n",
"would be even smaller after splitting.\n",
"\n",
"Instead, we can use cross-validation. Cross-validation consists of repeating\n",
"the procedure such that the training and testing sets are different each time.\n",
@@ -107,8 +107,8 @@
"model's generalization performance.\n",
"\n",
"Note that there exists several cross-validation strategies, each of them\n",
"defines how to repeat the `fit`/`score` procedure. In this section, we will\n",
"use the K-fold strategy: the entire dataset is split into `K` partitions. The\n",
"defines how to repeat the `fit`/`score` procedure. In this section, we use the\n",
"K-fold strategy: the entire dataset is split into `K` partitions. The\n",
"`fit`/`score` procedure is repeated `K` times where at each iteration `K - 1`\n",
"partitions are used to fit the model and `1` partition is used to score. The\n",
"figure below illustrates this K-fold strategy.\n",
@@ -178,7 +177,7 @@
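The K-fold procedure described above can be sketched with `cross_validate` (here on a synthetic dataset, so the scores themselves are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate

# Synthetic stand-in for the census data.
X, y = make_classification(n_samples=200, random_state=0)

# cv=5 -> K-fold with K=5: 5 fit/score rounds, each holding out 1/5 of the data.
cv_result = cross_validate(LogisticRegression(), X, y, cv=5)
scores = cv_result["test_score"]
print(f"{scores.mean():.3f} +/- {scores.std():.3f}")
```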
"[`sklearn.model_selection.cross_validate`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_validate.html)\n",
"to collect additional information, such as the training scores of the models\n",
"obtained on each round or even return the models themselves instead of\n",
"discarding them. These features will be covered in a future notebook.\n",
"discarding them. These features will be covered in a future notebook.\n",
"\n",
"Let's extract the scores computed on the test fold of each cross-validation\n",
"round from the `cv_result` dictionary and compute the mean accuracy and the\n",
7 changes: 4 additions & 3 deletions notebooks/02_numerical_pipeline_ex_00.ipynb
@@ -44,11 +44,12 @@
"number of neighbors we are going to use to make a prediction for a new data\n",
"point.\n",
"\n",
"What is the default value of the `n_neighbors` parameter? Hint: Look at the\n",
"documentation on the [scikit-learn\n",
"What is the default value of the `n_neighbors` parameter?\n",
"\n",
"**Hint**: Look at the documentation on the [scikit-learn\n",
"website](https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html)\n",
"or directly access the description inside your notebook by running the\n",
"following cell. This will open a pager pointing to the documentation."
"following cell. This opens a pager pointing to the documentation."
]
},
{
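The default value asked about in the exercise can also be read programmatically from the estimator's parameters (default shown for recent scikit-learn versions; check your installed release):

```python
from sklearn.neighbors import KNeighborsClassifier

# Recent scikit-learn releases default to 5 neighbors.
knn = KNeighborsClassifier()
print(knn.get_params()["n_neighbors"])
```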
8 changes: 4 additions & 4 deletions notebooks/02_numerical_pipeline_ex_01.ipynb
@@ -37,8 +37,8 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"We will first split our dataset to have the target separated from the data\n",
"used to train our predictive model."
"We first split our dataset to have the target separated from the data used to\n",
"train our predictive model."
]
},
{
@@ -93,8 +93,8 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Use a `DummyClassifier` such that the resulting classifier will always predict\n",
"the class `' >50K'`. What is the accuracy score on the test set? Repeat the\n",
"Use a `DummyClassifier` such that the resulting classifier always predicts the\n",
"class `' >50K'`. What is the accuracy score on the test set? Repeat the\n",
"experiment by always predicting the class `' <=50K'`.\n",
"\n",
"Hint: you can set the `strategy` parameter of the `DummyClassifier` to achieve\n",
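A minimal sketch of the constant strategy on toy data (the class strings mirror the census labels; accuracy equals the frequency of the constant class in the evaluation data):

```python
import numpy as np
from sklearn.dummy import DummyClassifier

X = np.zeros((6, 1))  # features are ignored by DummyClassifier
y = np.array([" >50K"] * 2 + [" <=50K"] * 4)

clf = DummyClassifier(strategy="constant", constant=" >50K")
clf.fit(X, y)
print(clf.score(X, y))  # 2 of 6 samples are " >50K"
```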
23 changes: 11 additions & 12 deletions notebooks/02_numerical_pipeline_hands_on.ipynb
@@ -19,8 +19,7 @@
"* using a scikit-learn helper to separate data into train-test sets;\n",
"* training and evaluating a more complex scikit-learn model.\n",
"\n",
"We will start by loading the adult census dataset used during the data\n",
"exploration.\n",
"We start by loading the adult census dataset used during the data exploration.\n",
"\n",
"## Loading the entire dataset\n",
"\n",
@@ -105,13 +104,13 @@
"numerical data usually requires very little work before getting started with\n",
"training.\n",
"\n",
"The first task here will be to identify numerical data in our dataset.\n",
"The first task here is to identify numerical data in our dataset.\n",
"\n",
"<div class=\"admonition caution alert alert-warning\">\n",
"<p class=\"first admonition-title\" style=\"font-weight: bold;\">Caution!</p>\n",
"<p class=\"last\">Numerical data are represented with numbers, but numbers are not always\n",
"representing numerical data. Categories could already be encoded with\n",
"numbers and you will need to identify these features.</p>\n",
"<p class=\"last\">Numerical data are represented with numbers, but numbers do not always\n",
"represent numerical data. Categories could already be encoded with\n",
"numbers and you may need to identify these features.</p>\n",
"</div>\n",
"\n",
"Thus, we can check the data type for each column in the dataset."
@@ -209,7 +208,7 @@
"source": [
"We can see the age varies between 17 and 90 years.\n",
"\n",
"We could extend our analysis and we will find that `\"capital-gain\"`,\n",
"We could extend our analysis and we would find that `\"capital-gain\"`,\n",
"`\"capital-loss\"`, and `\"hours-per-week\"` are also representing quantitative\n",
"data.\n",
"\n",
@@ -273,7 +272,7 @@
"source": [
"When calling the function `train_test_split`, we specified that we would like\n",
"to have 25% of samples in the testing set while the remaining samples (75%)\n",
"will be available in the training set. We can check quickly if we got what we\n",
"are assigned to the training set. We can check quickly if we got what we\n",
"expected."
]
},
@@ -309,8 +308,8 @@
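The 75%/25% split can be checked on a small synthetic array (a sketch; `test_size=0.25` mirrors the notebook's setting):

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(100).reshape(-1, 1)
y = np.arange(100)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)
print(len(X_train), len(X_test))  # 75 training samples, 25 testing samples
```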
"source": [
"In the previous notebook, we used a k-nearest neighbors model. While this\n",
"model is intuitive to understand, it is not widely used in practice. Now, we\n",
"will use a more useful model, called a logistic regression, which belongs to\n",
"the linear models family.\n",
"use a more useful model, called a logistic regression, which belongs to the\n",
"linear models family.\n",
"\n",
"<div class=\"admonition note alert alert-info\">\n",
"<p class=\"first admonition-title\" style=\"font-weight: bold;\">Note</p>\n",
@@ -321,8 +320,8 @@
"<li>if <tt class=\"docutils literal\">0.1 * age + 3.3 * <span class=\"pre\">hours-per-week</span> - 15.1 &gt; 0</tt>, predict <tt class=\"docutils literal\"><span class=\"pre\">high-income</span></tt></li>\n",
"<li>otherwise predict <tt class=\"docutils literal\"><span class=\"pre\">low-income</span></tt></li>\n",
"</ul>\n",
"<p class=\"last\">Linear models, and in particular the logistic regression, will be covered in\n",
"more details in the \"Linear models\" module later in this course. For now the\n",
"<p class=\"last\">Linear models, and in particular the logistic regression, will be covered\n",
"in more detail in the \"Linear models\" module later in this course. For now the\n",
"focus is to use this logistic regression model in scikit-learn rather than\n",
"understand how it works in detail.</p>\n",
"</div>\n",
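The decision rule quoted in the note can be written out directly (the coefficients 0.1, 3.3 and -15.1 are the illustrative values from the note, not fitted ones):

```python
def predict_income(age, hours_per_week):
    # Linear decision rule from the note: a weighted sum of the features
    # compared against zero.
    score = 0.1 * age + 3.3 * hours_per_week - 15.1
    return "high-income" if score > 0 else "low-income"

print(predict_income(age=40, hours_per_week=40))  # 120.9 > 0 -> high-income
print(predict_income(age=20, hours_per_week=1))   # -9.8 <= 0 -> low-income
```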
31 changes: 15 additions & 16 deletions notebooks/02_numerical_pipeline_scaling.ipynb
@@ -6,9 +6,9 @@
"source": [
"# Preprocessing for numerical features\n",
"\n",
"In this notebook, we will still use only numerical features.\n",
"In this notebook, we still use numerical features only.\n",
"\n",
"We will introduce these new aspects:\n",
"Here we introduce these new aspects:\n",
"\n",
"* an example of preprocessing, namely **scaling numerical variables**;\n",
"* using a scikit-learn **pipeline** to chain preprocessing and model training.\n",
@@ -33,8 +33,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"We will now drop the target from the data we will use to train our predictive\n",
"model."
"We now drop the target from the data we use to train our predictive model."
]
},
{
@@ -115,7 +114,7 @@
"source": [
"We see that the dataset's features span across different ranges. Some\n",
"algorithms make some assumptions regarding the feature distributions and\n",
"usually normalizing features will be helpful to address these assumptions.\n",
"normalizing features is usually helpful to address such assumptions.\n",
"\n",
"<div class=\"admonition tip alert alert-warning\">\n",
"<p class=\"first admonition-title\" style=\"font-weight: bold;\">Tip</p>\n",
@@ -133,13 +132,13 @@
"Whether or not a machine learning model requires scaling the features depends\n",
"on the model family. Linear models such as logistic regression generally\n",
"benefit from scaling the features while other models such as decision trees do\n",
"not need such preprocessing (but will not suffer from it).\n",
"not need such preprocessing (but would not suffer from it).\n",
"\n",
"We show how to apply such normalization using a scikit-learn transformer\n",
"called `StandardScaler`. This transformer shifts and scales each feature\n",
"individually so that they all have a 0-mean and a unit standard deviation.\n",
"\n",
"We will investigate different steps used in scikit-learn to achieve such a\n",
"We now investigate different steps used in scikit-learn to achieve such a\n",
"transformation of the data.\n",
"\n",
"First, one needs to call the method `fit` in order to learn the scaling from\n",
@@ -175,10 +174,10 @@
"\n",
"<div class=\"admonition note alert alert-info\">\n",
"<p class=\"first admonition-title\" style=\"font-weight: bold;\">Note</p>\n",
"<p class=\"last\">The fact that the model states of this scaler are arrays of means and\n",
"standard deviations is specific to the <tt class=\"docutils literal\">StandardScaler</tt>. Other\n",
"scikit-learn transformers will compute different statistics and store them\n",
"as model states, in the same fashion.</p>\n",
"<p class=\"last\">The fact that the model states of this scaler are arrays of means and standard\n",
"deviations is specific to the <tt class=\"docutils literal\">StandardScaler</tt>. Other scikit-learn\n",
"transformers may compute different statistics and store them as model states,\n",
"in a similar fashion.</p>\n",
"</div>\n",
"\n",
"We can inspect the computed means and standard deviations."
@@ -353,7 +352,7 @@
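The fit/transform steps can be sketched on a tiny array (illustrative data; `mean_` and `scale_` are the fitted model states mentioned above):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])

scaler = StandardScaler()
scaler.fit(X)                   # learn per-feature mean and standard deviation
print(scaler.mean_)             # [ 2. 20.]
X_scaled = scaler.transform(X)  # shift and scale each feature individually
print(X_scaled.mean(axis=0))    # ~0 for every feature after scaling
```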
"source": [
"We can easily combine sequential operations with a scikit-learn `Pipeline`,\n",
"which chains together operations and is used as any other classifier or\n",
"regressor. The helper function `make_pipeline` will create a `Pipeline`: it\n",
"regressor. The helper function `make_pipeline` creates a `Pipeline`: it\n",
"takes as arguments the successive transformations to perform, followed by the\n",
"classifier or regressor model."
]
@@ -378,8 +377,8 @@
"source": [
"The `make_pipeline` function did not require us to give a name to each step.\n",
"Indeed, it was automatically assigned based on the name of the classes\n",
"provided; a `StandardScaler` will be a step named `\"standardscaler\"` in the\n",
"resulting pipeline. We can check the name of each steps of our model:"
"provided; a `StandardScaler` step is named `\"standardscaler\"` in the resulting\n",
"pipeline. We can check the name of each step of our model:"
]
},
{
@@ -421,7 +420,7 @@
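The automatic naming can be verified directly (a sketch; lower-cased class names become the step names):

```python
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

model = make_pipeline(StandardScaler(), LogisticRegression())
print(list(model.named_steps))  # step names derived from the class names
```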
"![pipeline fit diagram](../figures/api_diagram-pipeline.fit.svg)\n",
"\n",
"When calling `model.fit`, the method `fit_transform` from each underlying\n",
"transformer (here a single transformer) in the pipeline will be called to:\n",
"transformer (here a single transformer) in the pipeline is called to:\n",
"\n",
"- learn their internal model states\n",
"- transform the training data. Finally, the preprocessed data are provided to\n",
@@ -452,7 +451,7 @@
"called to preprocess the data. Note that there is no need to call the `fit`\n",
"method for these transformers because we are using the internal model states\n",
"computed when calling `model.fit`. The preprocessed data is then provided to\n",
"the predictor that will output the predicted target by calling its method\n",
"the predictor that outputs the predicted target by calling its method\n",
"`predict`.\n",
"\n",
"As a shorthand, we can check the score of the full predictive pipeline calling\n",
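Putting the pieces together, the fit-then-score flow of the full pipeline can be sketched end to end (synthetic data, so the resulting accuracy is illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=200, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)             # scaler fit_transform, then classifier fit
accuracy = model.score(X_test, y_test)  # scaler transform, then predict + compare
print(accuracy)
```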