Correcting typos in tutorials
mirjagranfors committed Aug 29, 2024
1 parent 178b07b commit 8fde925
Showing 14 changed files with 49 additions and 39 deletions.
9 changes: 7 additions & 2 deletions tutorials/developers/DT101_files.ipynb
@@ -14,7 +14,7 @@
"## Root Level Files\n",
"\n",
"Deeplay contains the following files at the root level:\n",
- "- `.gitignore`: Contains the files to be ingnored by GIT.\n",
+ "- `.gitignore`: Contains the files to be ignored by GIT.\n",
"- `.pylintrc`: Configuration file for the pylint tool. It contains the rules for code formatting and style.\n",
"- `LICENSE.txt`: Deeplay's project license.\n",
"- `README.md`: Deeplay's project README file\n",
@@ -108,7 +108,7 @@
"\n",
"- `blocks`\n",
"\n",
- "  This directory contains the classes and functions related to blocks in the Deeplay library. Blocks are the building blocks of models in the Deeplay library. They are used to define the architecture of a model, and can be combined to create complex models. The most important block classes are in the subfolders `conv`, `linear`, `sequence` and in the files `base.py` anb `sequential.py`.\n",
+ "  This directory contains the classes and functions related to blocks in the Deeplay library. Blocks are the building blocks of models in the Deeplay library. They are used to define the architecture of a model, and can be combined to create complex models. The most important block classes are in the subfolders `conv`, `linear`, `sequence` and in the files `base.py` and `sequential.py`.\n",
"\n",
"- `components`\n",
"\n",
@@ -148,6 +148,11 @@
"\n",
" This directory contains the unit tests for the library. These are used to ensure that the library is working correctly and to catch any bugs that may arise."
]
},
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": []
+ }
],
"metadata": {
7 changes: 6 additions & 1 deletion tutorials/developers/DT121_overview.ipynb
@@ -42,8 +42,13 @@
"| Is the object a small structural object with a sequential forward pass, such as a layer activation? | Yes | `Block` |\n",
"| Is the object a unit of computation, such as a convolution or a pooling operation? | Yes | `Operation` |\n",
"\n",
- "As a general rule of thumb, for objects derived from `Component`, the number of features in each layer should be defineable by the input arguments. For objects derived from `Model`, only the input and output features must be defineable by the input arguments. In both cases, it is recommended to subclass an existing model or component if possible. This will make it easier to implement the required methods and attributes, and will ensure that the new model or component is compatible with the rest of the library."
+ "As a general rule of thumb, for objects derived from `Component`, the number of features in each layer should be definable by the input arguments. For objects derived from `Model`, only the input and output features must be definable by the input arguments. In both cases, it is recommended to subclass an existing model or component if possible. This will make it easier to implement the required methods and attributes, and will ensure that the new model or component is compatible with the rest of the library."
]
},
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": []
+ }
],
"metadata": {
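The rule of thumb above is easiest to see in a concrete call. A minimal sketch, using the `MultiLayerPerceptron` component that appears elsewhere in these tutorials (the per-layer reading of the arguments is the point; the hidden-width list is illustrative):

```python
import deeplay as dl

# Component-style: the width of every layer is set by the arguments:
# 784 inputs, hidden layers of 32 and 16 neurons, 10 outputs.
mlp = dl.MultiLayerPerceptron(784, [32, 16], 10)

# A Model-style object would instead expose only the input and output
# sizes and choose its internal widths itself.
```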
4 changes: 2 additions & 2 deletions tutorials/developers/DT131_applications.ipynb
@@ -22,7 +22,7 @@
"Therefore, applications should strive to be as model agnostic as possible, so that they can be used with any model that fits the input and output shapes.\n",
"\n",
"Applications define the training and inference loops, and the loss function.\n",
- "Applications may also define custom metrics, or specialmethods used for inference. Forexample, a classifier application may define a method to predict a hard label from the input, instead of the probabilities.\n",
+ "Applications may also define custom metrics, or special methods used for inference. For example, a classifier application may define a method to predict a hard label from the input, instead of the probabilities.\n",
"\n",
"Examples of applications are `Classifier`, `Regressor`, `Segmentor`, and `VanillaGAN`."
]
@@ -56,7 +56,7 @@
"source": [
"### 1. Create a New File\n",
"\n",
- "The first step is to create a new file in the `deeplay/applications` directory. It should generally be in a subdirectory, named after the type of application. For example, a binary classifier application should be in `deeplay/applications/classificaiton/binary.py`.\n",
+ "The first step is to create a new file in the `deeplay/applications` directory. It should generally be in a subdirectory, named after the type of application. For example, a binary classifier application should be in `deeplay/applications/classification/binary.py`.\n",
"\n",
"**The base class: `Application`.** \n",
"Applications should inherit from the `Application` class. This class is a subclass of both `DeeplayModule` and `lightning.LightningModule`. This is to ensure that the \n",
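For orientation, a rough sketch of what `deeplay/applications/classification/binary.py` might start from is shown below. Only the base class `Application` and its `DeeplayModule`/`lightning.LightningModule` parentage come from the tutorial text; the constructor signature, the `loss` handling, and the `predict_label` helper are assumptions for illustration, not the actual API:

```python
# Sketch only: assumes Application exposes a `loss` attribute and accepts
# the wrapped model via the constructor; check the real base class.
import torch
import torch.nn as nn

from deeplay.applications import Application


class BinaryClassifier(Application):
    def __init__(self, model, loss=None, **kwargs):
        super().__init__(**kwargs)
        self.model = model
        self.loss = loss if loss is not None else nn.BCEWithLogitsLoss()

    def forward(self, x):
        return self.model(x)

    def predict_label(self, x):
        # Example of a "special method used for inference": hard 0/1
        # labels instead of probabilities or logits.
        with torch.no_grad():
            return (self.forward(x) > 0).long()
```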
2 changes: 1 addition & 1 deletion tutorials/developers/DT141_models.ipynb
@@ -117,7 +117,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "You can now instatiate this block and verify its structure."
+ "You can now instantiate this block and verify its structure."
]
},
{
12 changes: 6 additions & 6 deletions tutorials/developers/DT151_components.ipynb
@@ -191,15 +191,15 @@
"        The input tensor of shape (N, C, H, W).\n",
"        Where N is the batch size, C is the number of channels, H is the \n",
"        height, and W is the width.\n",
- "        Additial dimensions before C are allowed.\n",
+ "        Additional dimensions before C are allowed.\n",
"        \n",
"    Output\n",
"    ------\n",
"    y : torch.Tensor\n",
"        The output tensor of shape (N, out_channels, H', W').\n",
"        Where N is the batch size, out_channels is the number of output \n",
"        channels, H is the height, and W is the width.\n",
- "        Additial dimensions before out_channels will be preserved.\n",
+ "        Additional dimensions before out_channels will be preserved.\n",
"\n",
" Evaluation\n",
" ----------\n",
@@ -338,15 +338,15 @@
"        The input tensor of shape (N, C, H, W).\n",
"        Where N is the batch size, C is the number of channels, H is the \n",
"        height, and W is the width.\n",
- "        Additial dimensions before C are allowed.\n",
+ "        Additional dimensions before C are allowed.\n",
"        \n",
"    Output\n",
"    ------\n",
"    y : torch.Tensor\n",
"        The output tensor of shape (N, out_channels, H', W').\n",
"        Where N is the batch size, out_channels is the number of output \n",
"        channels, H is the height, and W is the width.\n",
- "        Additial dimensions before out_channels will be preserved.\n",
+ "        Additional dimensions before out_channels will be preserved.\n",
"\n",
" Evaluation\n",
" ----------\n",
@@ -479,15 +479,15 @@
"        The input tensor of shape (N, C, H, W).\n",
"        Where N is the batch size, C is the number of channels, H is the \n",
"        height, and W is the width.\n",
- "        Additial dimensions before C are allowed.\n",
+ "        Additional dimensions before C are allowed.\n",
"        \n",
"    Output\n",
"    ------\n",
"    y : torch.Tensor\n",
"        The output tensor of shape (N, out_channels, H', W').\n",
"        Where N is the batch size, out_channels is the number of output \n",
"        channels, H is the height, and W is the width.\n",
- "        Additial dimensions before out_channels will be preserved.\n",
+ "        Additional dimensions before out_channels will be preserved.\n",
"\n",
" Evaluation\n",
" ----------\n",
2 changes: 1 addition & 1 deletion tutorials/developers/DT171_blocks.ipynb
@@ -21,7 +21,7 @@
"remove steps from the block. For example, adding an activation function, or\n",
"removing a dropout layer. \n",
"\n",
- "Also, remember that blocks shouls be small and modular. If you are implementing\n",
+ "Also, remember that blocks should be small and modular. If you are implementing\n",
"a block that is too big, you should consider breaking it down into smaller blocks.\n",
"\n",
"Finally, blocks should be pretty strict in terms of input and output. This is \n",
2 changes: 1 addition & 1 deletion tutorials/developers/DT181_internals.ipynb
@@ -388,7 +388,7 @@
"\n",
"Since Deeplay modules require an additional `build` step before the weights are created, the default checkpointing system of `lightning` does not work.\n",
"\n",
- "We have solved this by storing the state of the `Application` object immediately before building as a a hyperparameter in the checkpoint. This is then loaded when the model is loaded from the checkpoint, and the `build` method is called with the same arguments as before before the weights are loaded."
+ "We have solved this by storing the state of the `Application` object immediately before building as a hyperparameter in the checkpoint. This is then loaded when the model is loaded from the checkpoint, and the `build` method is called with the same arguments before the weights are loaded."
]
}
],
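For context, the extra step looks like this from the user's side. A sketch; `build()` is the method named in the text above, while the model and its arguments are illustrative:

```python
import deeplay as dl

mlp = dl.MultiLayerPerceptron(784, [32, 16], 10)  # configured, but no weights yet
mlp.build()  # the extra step: instantiate the underlying torch layers and weights
# Only from this point on does a lightning checkpoint have weights to save,
# which is why the pre-build state is stored as a hyperparameter.
```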
8 changes: 4 additions & 4 deletions tutorials/developers/dev_intro.ipynb
@@ -6,9 +6,9 @@
"source": [
"# Introduction for Developers\n",
"\n",
- "This notebook presents a minimal example of how to implement a neural network and and an application with deeplay.\n",
+ "This notebook presents a minimal example of how to implement a neural network and an application with deeplay.\n",
"Specifically, it implements the classes for a multilayer perceptron and a classifier.\n",
- "Then, it combines them to demonstrate how tehy can be used for a simple classification task.\n",
+ "Then, it combines them to demonstrate how they can be used for a simple classification task.\n",
"Finally, it upgrades these classes by adding functionalities that improve the user experience when using an IDE."
]
},
@@ -20,7 +20,7 @@
"\n",
"Here, we implement the minimal class `SimpleMLP`. It directly extends `dl.DeeplayModule`, which is the base class for all modules in `deeplay`.\n",
"\n",
- "It represents a multilayer perceptron with a certain umber of inputs (`ìn_features`, which is an integer), a series of hidden layers with a certain number of neurons (`hidden_features`, a vector with the number of neurons for each layer), and a certain numebr of outputs (`out_features`, which is an integer).\n",
+ "It represents a multilayer perceptron with a certain number of inputs (`in_features`, an integer), a series of hidden layers with a certain number of neurons (`hidden_features`, a vector with the number of neurons for each layer), and a certain number of outputs (`out_features`, an integer).\n",
"\n",
"The constructor initializes the MLP by creating a sequence of linear and ReLU activation layers.\n",
"\n",
@@ -229,7 +229,7 @@
"source": [
"## Example\n",
"\n",
- "We'll now use `classifier` for the simple task of determinig whether the sum of two numbers is larger or smaller than 0."
+ "We'll now use `classifier` for the simple task of determining whether the sum of two numbers is larger or smaller than 0."
]
},
{
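The toy dataset for this task can be generated in a few lines. A sketch in plain NumPy/PyTorch; the `classifier` it would feed is the one defined earlier in the notebook, and the sample count is arbitrary:

```python
import numpy as np
import torch

# Two random inputs per sample; the label is 1 when their sum is positive.
x = np.random.randn(1000, 2).astype(np.float32)
y = (x.sum(axis=1, keepdims=True) > 0).astype(np.float32)

x_train = torch.from_numpy(x)
y_train = torch.from_numpy(y)
# classifier.fit((x_train, y_train))  # assuming the training API used in the notebook
```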
4 changes: 2 additions & 2 deletions tutorials/getting-started/GS101_core_objects.ipynb
@@ -18,7 +18,7 @@
"\n",
"- **Layer:** A layer consists of a single torch layer, such as `torch.nn.Linear` and `torch.nn.Conv2d`. Layers are the most basic building blocks in Deeplay.\n",
"\n",
- "In the following sections, you'll create some examples of these obejcts."
+ "In the following sections, you'll create some examples of these objects."
]
},
{
@@ -27,7 +27,7 @@
"source": [
"## Importing Deeplay\n",
"\n",
- "Import `deeplay` (shortened to `dl`, as an abbreaviation of both *deeplay* and *deep learning*) ... "
+ "Import `deeplay` (shortened to `dl`, as an abbreviation of both *deeplay* and *deep learning*) ... "
]
},
{
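The import line itself, for reference (exactly as the tutorial describes it):

```python
import deeplay as dl
```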
24 changes: 12 additions & 12 deletions tutorials/getting-started/GS121_modules.ipynb
@@ -6,7 +6,7 @@
"source": [
"# Working with Deeplay Modules\n",
"\n",
- "In this section, you'll learn how to create and build Deeplay modules as well as how to configure their properties. You'll also understand the difference between Deeplay and PyTorch modules. You'll "
+ "In this section, you'll learn how to create and build Deeplay modules as well as how to configure their properties. You'll also understand the difference between Deeplay and PyTorch modules."
]
},
{
@@ -275,7 +275,7 @@
"MultiLayerPerceptron(\n",
"  (blocks): LayerList(\n",
"    (0): LinearBlock(\n",
- "      (layer): Layer[Linear](in_features=728, out_features=32, bias=True)\n",
+ "      (layer): Layer[Linear](in_features=784, out_features=32, bias=True)\n",
" (activation): Layer[Tanh]()\n",
" )\n",
" (1): LinearBlock(\n",
@@ -294,7 +294,7 @@
"source": [
"import torch.nn as nn\n",
"\n",
- "mlp = dl.MultiLayerPerceptron(728, [32, 16], 10)\n",
+ "mlp = dl.MultiLayerPerceptron(784, [32, 16], 10)\n",
"mlp[\"blocks\", :].all.configure(activation=dl.Layer(nn.Tanh))\n",
"\n",
"print(mlp)"
@@ -319,7 +319,7 @@
"MultiLayerPerceptron(\n",
"  (blocks): LayerList(\n",
"    (0): LinearBlock(\n",
- "      (layer): Layer[Linear](in_features=728, out_features=32, bias=True)\n",
+ "      (layer): Layer[Linear](in_features=784, out_features=32, bias=True)\n",
" (activation): Layer[Tanh]()\n",
" )\n",
" (1): LinearBlock(\n",
@@ -336,7 +336,7 @@
}
],
"source": [
- "mlp = dl.MultiLayerPerceptron(728, [32, 16], 10)\n",
+ "mlp = dl.MultiLayerPerceptron(784, [32, 16], 10)\n",
"mlp[\"blocks\", :].all.activated(nn.Tanh)\n",
"\n",
"print(mlp)"
@@ -354,7 +354,7 @@
"MultiLayerPerceptron(\n",
"  (blocks): LayerList(\n",
"    (0): LinearBlock(\n",
- "      (layer): Layer[Linear](in_features=728, out_features=32, bias=True)\n",
+ "      (layer): Layer[Linear](in_features=784, out_features=32, bias=True)\n",
" (activation): Layer[Tanh]()\n",
" )\n",
" (1): LinearBlock(\n",
@@ -371,7 +371,7 @@
}
],
"source": [
- "mlp = dl.MultiLayerPerceptron(728, [32, 16], 10)\n",
+ "mlp = dl.MultiLayerPerceptron(784, [32, 16], 10)\n",
"mlp[...].hasattr(\"activated\").all.activated(nn.Tanh)\n",
"\n",
"print(mlp)"
@@ -389,7 +389,7 @@
"MultiLayerPerceptron(\n",
"  (blocks): LayerList(\n",
"    (0): LinearBlock(\n",
- "      (layer): Layer[Linear](in_features=728, out_features=32, bias=True)\n",
+ "      (layer): Layer[Linear](in_features=784, out_features=32, bias=True)\n",
" (activation): Layer[Tanh]()\n",
" )\n",
" (1): LinearBlock(\n",
@@ -406,7 +406,7 @@
}
],
"source": [
- "mlp = dl.MultiLayerPerceptron(728, [32, 16], 10)\n",
+ "mlp = dl.MultiLayerPerceptron(784, [32, 16], 10)\n",
"mlp[...].isinstance(dl.LinearBlock).all.activated(nn.Tanh)\n",
"\n",
"print(mlp)"
@@ -424,7 +424,7 @@
"MultiLayerPerceptron(\n",
"  (blocks): LayerList(\n",
"    (0): LinearBlock(\n",
- "      (layer): Layer[Linear](in_features=728, out_features=32, bias=True)\n",
+ "      (layer): Layer[Linear](in_features=784, out_features=32, bias=True)\n",
" (activation): Layer[Tanh]()\n",
" )\n",
" (1): LinearBlock(\n",
@@ -441,7 +441,7 @@
}
],
"source": [
- "mlp = dl.MultiLayerPerceptron(728, [32, 16], 10)\n",
+ "mlp = dl.MultiLayerPerceptron(784, [32, 16], 10)\n",
"for block in mlp.blocks:\n",
" block.activated(nn.Tanh)\n",
" \n",
@@ -618,7 +618,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.12.2"
+ "version": "3.11.9"
}
},
"nbformat": 4,
Expand Down
2 changes: 1 addition & 1 deletion tutorials/getting-started/GS131_methods.ipynb
@@ -1137,7 +1137,7 @@
"source": [
"## `DeeplayModule.predict()`\n",
"\n",
- "The `DeeplayModule.predict()` method is a convienient way to get predictions from the model on a large dataset. It does the batching, moving the data to the correct device and returning the predictions in a single call to the model without gradients."
+ "The `DeeplayModule.predict()` method is a convenient way to get predictions from the model on a large dataset. It does the batching, moves the data to the correct device, and returns the predictions in a single call to the model without gradients."
]
},
{
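A usage sketch (assuming `model` is a built Deeplay module and `x` is a tensor or array of inputs; both names are placeholders):

```python
# predict() handles batching, moves each batch to the model's device,
# and runs the forward pass without gradients, as described above.
predictions = model.predict(x)
```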
8 changes: 4 additions & 4 deletions tutorials/getting-started/GS141_applications.ipynb
@@ -449,14 +449,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "**NOTE:** In this case, however, the `.loss` atttribute will still be set to the default loss, so you won't see the correct loss when you print it."
+ "**NOTE:** In this case, however, the `.loss` attribute will still be set to the default loss, so you won't see the correct loss when you print it."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "## Copntrolling the Optimizer\n",
+ "## Controlling the Optimizer\n",
"\n",
"Deeplay provides some wrappers around the PyTorch optimizers. Their main use is to be able to create the optimizer before the model is built and the parameters are known. Examples of optimizers available in Deeplay are:"
]
@@ -982,7 +982,7 @@
],
"source": [
"x_numpy_test = numpy.random.randn(100, 10) # Input test data.\n",
- "y_numpy_test = x_numpy_val.max(axis=1, keepdims=True) # Target test data.\n",
+ "y_numpy_test = x_numpy_test.max(axis=1, keepdims=True) # Target test data.\n",
"\n",
"test_results = regressor.test((x_numpy_test, y_numpy_test))\n",
"print(f\"Test results: {test_results}\\n\") # Benjamin: Should this also work? If not, why not?"
@@ -1084,7 +1084,7 @@
],
"source": [
"x_numpy_test = numpy.random.randn(100, 10) # Input test data.\n",
- "y_numpy_test = x_numpy_val.max(axis=1, keepdims=True) # Target test data.\n",
+ "y_numpy_test = x_numpy_test.max(axis=1, keepdims=True) # Target test data.\n",
"\n",
"x_torch_test = torch.from_numpy(x_numpy_test).float()\n",
"y_torch_test = torch.from_numpy(y_numpy_test).float()\n",
2 changes: 1 addition & 1 deletion tutorials/getting-started/GS151_models.ipynb
@@ -1003,7 +1003,7 @@
"source": [
"### Making a Model by Subclassing\n",
"\n",
- "A model is just a `DeeplayModule` sublass like any other. For some applications, it might be more convenient to subclass the model and implement the `.forward()` method."
+ "A model is just a `DeeplayModule` subclass like any other. For some applications, it might be more convenient to subclass the model and implement the `.forward()` method."
]
},
{
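A minimal sketch of that pattern. The class and the single-layer architecture are illustrative; wrapping a torch class and its constructor arguments in `dl.Layer` is assumed to work as in the `dl.Layer(nn.Tanh)` usage shown in the other tutorials:

```python
import torch.nn as nn
import deeplay as dl


class TinyModel(dl.DeeplayModule):
    def __init__(self, in_features=2, out_features=1):
        super().__init__()
        # dl.Layer wraps the torch class together with its constructor arguments.
        self.layer = dl.Layer(nn.Linear, in_features, out_features)

    def forward(self, x):
        return self.layer(x)
```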
2 changes: 1 addition & 1 deletion tutorials/getting-started/GS161_components.ipynb
@@ -353,7 +353,7 @@
"    out_activation=dl.Layer(torch.nn.Tanh),\n",
")\n",
"\n",
- "encdec"
+ "print(encdec)"
]
},
{
