diff --git a/docs/customize.md b/docs/customize.md
index f6b9df16e..9c33cd523 100644
--- a/docs/customize.md
+++ b/docs/customize.md
@@ -119,17 +119,17 @@ This file contains mid-level information regarding various parameters that can b
 
 - These are various parameters that control the overall training process.
 - `verbose`: generate verbose messages on console; generally used for debugging.
-- `batch_size`: defines the batch size to be used for training.
-- `in_memory`: this is to enable or disable lazy loading - setting to true reads all data once during data loading, resulting in improvements.
-- `num_epochs`: defines the number of epochs to train for.
-- `patience`: defines the number of epochs to wait for improvement before early stopping.
-- `learning_rate`: defines the learning rate to be used for training.
-- `scheduler`: defines the learning rate scheduler to be used for training, more details are [here](https://github.com/mlcommons/GaNDLF/blob/master/GANDLF/schedulers/__init__.py); can take the following sub-parameters:
+- `batch_size`: batch size to be used for training.
+- `in_memory`: this is to enable or disable lazy loading. If set to `True`, all data is loaded into RAM at once during the construction of the dataloader (training, validation, or testing), resulting in faster training. If set to `False`, data is read into RAM on demand (also called ["lazy loading"](https://en.wikipedia.org/wiki/Lazy_loading)), which slows down training but lessens the memory load. The latter is recommended if the user's RAM has limited capacity.
+- `num_epochs`: number of epochs to train for.
+- `patience`: number of epochs to wait for improvement in the validation loss before early stopping.
+- `learning_rate`: learning rate to be used for training.
+- `scheduler`: learning rate scheduler to be used for training, more details are [here](https://github.com/mlcommons/GaNDLF/blob/master/GANDLF/schedulers/__init__.py); can take the following sub-parameters:
   - `type`: `triangle`, `triangle_modified`, `exp`, `step`, `reduce-on-plateau`, `cosineannealing`, `triangular`, `triangular2`, `exp_range`
-  - `min_lr`: defines the minimum learning rate to be used for training.
-  - `max_lr`: defines the maximum learning rate to be used for training.
-- `optimizer`: defines the optimizer to be used for training, more details are [here](https://github.com/mlcommons/GaNDLF/blob/master/GANDLF/optimizers/__init__.py).
-- `nested_training`: defines the number of folds to use nested training, takes `testing` and `validation` as sub-parameters, with integer values defining the number of folds to use.
+  - `min_lr`: minimum learning rate to be used for training.
+  - `max_lr`: maximum learning rate to be used for training.
+- `optimizer`: optimizer to be used for training, more details are [here](https://github.com/mlcommons/GaNDLF/blob/master/GANDLF/optimizers/__init__.py).
+- `nested_training`: number of folds to use for nested training; takes `testing` and `validation` as sub-parameters, with integer values defining the number of folds to use.
 - `memory_save_mode`: if enabled, resize/resample operations in `data_preprocessing` will save files to disk instead of directly getting read into memory as tensors
 - **Queue configuration**: this defines how the queue for the input to the model is to be designed **after** the [patching strategy](#patching-strategy) has been applied, and more details are [here](https://torchio.readthedocs.io/data/patch_training.html?#queue). This takes the following sub-parameters:
 - `q_max_length`: this determines the maximum number of patches that can be stored in the queue. Using a large number means that the queue needs to be filled less often, but more CPU memory is needed to store the patches.
diff --git a/docs/setup.md b/docs/setup.md
index ff9ce222f..9f9cb5397 100644
--- a/docs/setup.md
+++ b/docs/setup.md
@@ -22,21 +22,21 @@ Alternatively, you can run GaNDLF via [Docker](https://www.docker.com/). This ne
 
 ### Install PyTorch
 
-GaNDLF's primary computational foundation is built on PyTorch, and as such it supports all hardware types that PyTorch supports. Please install PyTorch for your hardware type before installing GaNDLF. See the [PyTorch installation instructions](https://pytorch.org/get-started/previous-versions/#v1131) for more details. An example installation using CUDA, ROCm, and CPU-only is shown below:
+GaNDLF's primary computational foundation is built on PyTorch, and as such it supports all hardware types that PyTorch supports. Please install PyTorch for your hardware type before installing GaNDLF. See the [PyTorch installation instructions](https://pytorch.org/get-started/previous-versions/#v231) for more details.
+
+First, create and activate your environment:
 
 ```bash
 (base) $> conda create -n venv_gandlf python=3.9 -y
 (base) $> conda activate venv_gandlf
 (venv_gandlf) $> ### subsequent commands go here
-### PyTorch installation - https://pytorch.org/get-started/previous-versions/#v210
-## CUDA 12.1
-# (venv_gandlf) $> pip install torch==2.3.1 torchvision==0.18.1 torchaudio==2.3.1 --index-url https://download.pytorch.org/whl/cu121
-## CUDA 11.8
-# (venv_gandlf) $> pip install torch==2.3.1 torchvision==0.18.1 torchaudio==2.3.1 --index-url https://download.pytorch.org/whl/cu118
-## ROCm 6.0
-# (venv_gandlf) $> pip install torch==2.3.1 torchvision==0.18.1 torchaudio==2.3.1 --index-url https://download.pytorch.org/whl/rocm6.0
-## CPU-only
-# (venv_gandlf) $> pip install torch==2.3.1 torchvision==0.18.1 torchaudio==2.3.1 --index-url https://download.pytorch.org/whl/cpu
+```
+
+You may install PyTorch to be compatible with CUDA, ROCm, or CPU-only. An exhaustive list of PyTorch installations for the specific version compatible with GaNDLF can be found here: https://pytorch.org/get-started/previous-versions/#v231
+Use one of the following depending on your needs:
+- CUDA 12.1
+```bash
+(venv_gandlf) $> pip install torch==2.3.1 torchvision==0.18.1 torchaudio==2.3.1 --index-url https://download.pytorch.org/whl/cu121
 ```
 
 ### Optional Dependencies
 
@@ -53,33 +53,38 @@ The following dependencies are **optional**, and are only needed to access speci
 This option is recommended for most users, and allows for the quickest way to get started with GaNDLF.
 
 ```bash
-# continue from previous shell
 (venv_gandlf) $> pip install gandlf # this will give you the latest stable release
-## you can also use conda
-# (venv_gandlf) $> conda install -c conda-forge gandlf -y
+```
+You can also use conda:
+```bash
+(venv_gandlf) $> conda install -c conda-forge gandlf -y
 ```
 
 If you are interested in running the latest version of GaNDLF, you can install the nightly build by running the following command:
 
 ```bash
-# continue from previous shell
 (venv_gandlf) $> pip install --pre gandlf
-## you can also use conda
-# (venv_gandlf) $> conda install -c conda-forge/label/gandlf_dev -c conda-forge gandlf -y
 ```
+You can also use conda:
+```bash
+(venv_gandlf) $> conda install -c conda-forge/label/gandlf_dev -c conda-forge gandlf -y
+```
 
 ### Install from Sources
 
 Use this option if you want to [contribute to GaNDLF](https://github.com/mlcommons/GaNDLF/blob/master/CONTRIBUTING.md), or are interested to make other code-level changes for your own use.
 
 ```bash
-# continue from previous shell
 (venv_gandlf) $> git clone https://github.com/mlcommons/GaNDLF.git
 (venv_gandlf) $> cd GaNDLF
 (venv_gandlf) $> pip install -e .
 ```
+Test your installation:
+```bash
+(venv_gandlf) $> gandlf verify-install
+```
 
 ## Docker Installation
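
For orientation, the training parameters documented in the `docs/customize.md` hunk above all live in GaNDLF's YAML configuration file. The sketch below only illustrates how they fit together: the values are placeholders, the nesting of the `optimizer` and queue keys is an assumption, and the authoritative layout is the sample configuration files shipped with GaNDLF.

```yaml
# Illustrative sketch only: placeholder values; verify key names and nesting
# against the sample configuration files shipped with GaNDLF.
verbose: False
batch_size: 8
in_memory: False        # True: load everything into RAM up front; False: lazy loading
num_epochs: 100
patience: 20            # epochs without validation-loss improvement before early stopping
learning_rate: 0.001
scheduler:
  type: triangle
  min_lr: 0.00001
  max_lr: 0.001
optimizer:
  type: adam            # assumed nesting; available optimizers are listed in GANDLF/optimizers
nested_training:
  testing: 5            # number of testing folds
  validation: 5         # number of validation folds
memory_save_mode: False
q_max_length: 40        # queue configuration (TorchIO patch queue)
```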