PyTorch parallelism #5916

Open · wants to merge 2 commits into base: main
@@ -0,0 +1,104 @@
---
Title: 'Distributed Data Parallelism'
Description: 'An overview of distributed data parallelism in PyTorch.'
Collaborator:
The description is too generic. Explain what distributed data parallelism is within 1-2 lines.
https://github.com/Codecademy/docs/blob/main/documentation/style-guide.md

Subjects:
- 'Data Science'
- 'Machine Learning'
Tags:
- 'PyTorch'
- 'Data'
- 'Data Parallelism'
Comment on lines +8 to +10
Collaborator:
Suggested change
- 'PyTorch'
- 'Data'
- 'Data Parallelism'
- 'Data'
- 'Data Parallelism'
- 'PyTorch'

CatalogContent:
- 'intro-to-py-torch-and-neural-networks'
- 'paths/build-a-machine-learning-model'
---

## Introduction to Distributed Data Parallelism

Distributed Data Parallelism (DDP) in PyTorch is a module that enables users to efficiently train models across multiple GPUs and machines. By splitting the training process across multiple machines, DDP helps reduce training time and facilitates scaling to larger models and datasets. It achieves parallelism by splitting the input data into smaller chunks, processing them on different GPUs, and aggregating results for updates. Compared to `DataParallel`, DDP offers better performance and scalability by minimising device communication overhead.
Collaborator:
Suggested change
Distributed Data Parallelism (DDP) in PyTorch is a module that enables users to efficiently train models across multiple GPUs and machines. By splitting the training process across multiple machines, DDP helps reduce training time and facilitates scaling to larger models and datasets. It achieves parallelism by splitting the input data into smaller chunks, processing them on different GPUs, and aggregating results for updates. Compared to `DataParallel`, DDP offers better performance and scalability by minimising device communication overhead.
Distributed Data Parallelism (DDP) in PyTorch is a module that enables users to efficiently train models across multiple GPUs and machines. By splitting the training process across multiple machines, DDP helps reduce training time and facilitates scaling to larger models and datasets.
It achieves parallelism by splitting the input data into smaller chunks, processing them on different GPUs, and aggregating results for updates. Compared to `DataParallel`, DDP offers better performance and scalability by minimizing device communication overhead.
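
As a rough illustration of the data-splitting step described above, a `torch.utils.data.distributed.DistributedSampler` can hand each process its own shard of the dataset. The sketch below uses a dummy dataset and hard-codes `num_replicas=4` and `rank=0` purely for demonstration; in real training these values come from the process group:

```py
import torch
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

# Dummy dataset standing in for a real training set
dataset = TensorDataset(torch.randn(1000, 10), torch.randn(1000, 10))

# Each process only iterates over its own shard of the data
sampler = DistributedSampler(dataset, num_replicas=4, rank=0)
loader = DataLoader(dataset, batch_size=32, sampler=sampler)

for epoch in range(2):
    sampler.set_epoch(epoch)  # reshuffles the shards every epoch
    for inputs, targets in loader:
        pass  # forward/backward pass would go here
```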


### Key Features

- High performance: Efficient communication using NVIDIA Collective Communications Library (NCCL) or Gloo backend (for Windows platforms).
- Fault tolerance: Handles errors during distributed training.
- Scalability: Suitable for multi-node, multi-GPU setups.
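
For instance, the backend can be selected at runtime based on whether a GPU is available. The snippet below is a minimal sketch and assumes the environment variables listed later in this entry are already set:

```py
import torch
import torch.distributed as dist

# NCCL is the usual choice for GPU training; Gloo covers CPU-only and Windows setups
backend = "nccl" if torch.cuda.is_available() else "gloo"

# Assumes MASTER_ADDR, MASTER_PORT, RANK, and WORLD_SIZE are set in the environment
dist.init_process_group(backend=backend, init_method="env://")
```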

## Syntax

To use DDP, a distributed process group must first be initialized, and the model is then wrapped with `torch.nn.parallel.DistributedDataParallel`.

```py
Collaborator:
Suggested change
```py
```pseudo

import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Set up environment variables
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '8000'

# Initialise the process group
dist.init_process_group(backend="nccl", init_method="env://", world_size=4, rank=0)

# Wrap the model with DDP
model = torch.nn.Linear(10, 10).cuda()
model = DDP(model)
```

- `backend`: Optional; if it is omitted, both a `gloo` and an `nccl` backend are created by default.
- `init_method`: Also optional; the default is `env://`.
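
Because both arguments are optional, the call can rely entirely on these defaults. This is only a sketch; it assumes a recent PyTorch release and that the environment variables described in the next section are already set:

```py
import torch.distributed as dist

# With no arguments, PyTorch falls back to the env:// init method and its
# default backend selection; RANK and WORLD_SIZE must be present in the environment
dist.init_process_group()
```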

### Required Environment Variables

- `WORLD_SIZE`: The total number of processes.
- `RANK`: The rank of the current process; it must be between 0 and `WORLD_SIZE - 1`.
- `MASTER_ADDR`: The address of the master node.
- `MASTER_PORT`: The port of the master node.
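
Launchers such as `torchrun` set these variables for every process automatically; when starting processes by hand, they can also be set inside the script. The sketch below reads them, with placeholder fallbacks chosen only for a single-machine experiment:

```py
import os

# Placeholder defaults for a single-machine run; a launcher normally provides these
os.environ.setdefault('MASTER_ADDR', 'localhost')
os.environ.setdefault('MASTER_PORT', '8000')

world_size = int(os.environ.get('WORLD_SIZE', '1'))
rank = int(os.environ.get('RANK', '0'))

print(f"Process {rank} of {world_size} "
      f"(master at {os.environ['MASTER_ADDR']}:{os.environ['MASTER_PORT']})")
```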

## Example

The following example demonstrates the use of DDP for distributed training:

```py
import os
import torch
import torch.nn as nn
import torch.optim as optim
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Create a function to set up the initialisation process
def setup(rank, world_size):
    os.environ['MASTER_ADDR'] = 'localhost'
    os.environ['MASTER_PORT'] = '8000'
    dist.init_process("nccl", rank=rank, world_size=world_size)
Collaborator:
Suggested change
    dist.init_process("nccl", rank=rank, world_size=world_size)
    dist.init_process_group("nccl", rank=rank, world_size=world_size)


# All resources allocated to the process group are released once the process is done
def cleanup():
    dist.destroy_process_group()

def main():
    setup(0, 4)

    # Model and data
    model = nn.Linear(10, 10).cuda()
    ddp_model = DDP(model)
    optimiser = optim.SGD(ddp_model.parameters(), lr=0.01)

    # Dummy data and labels
    data = torch.randn(20, 10).cuda()
    labels = torch.randn(20, 10).cuda()

    # Training
    for _ in range(10):
        optimiser.zero_grad()
        outputs = ddp_model(data)
        loss = nn.MSELoss()(outputs, labels)
        loss.backward()
        optimiser.step()

    cleanup()

if __name__ == "__main__":
    main()
```
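
The example above hard-codes `setup(0, 4)`, so it only initializes rank 0; in a real run, each rank executes in its own process. One way to launch a per-rank version of `main` on a single machine is `torch.multiprocessing.spawn`, sketched below. The `main(rank, world_size)` signature is an assumption here, not part of the example above:

```py
import torch
import torch.multiprocessing as mp

def main(rank, world_size):
    # Same body as the example above, but setup(rank, world_size) uses the
    # values passed in by the launcher instead of hard-coded ones
    ...

if __name__ == "__main__":
    world_size = torch.cuda.device_count()  # one process per available GPU
    mp.spawn(main, args=(world_size,), nprocs=world_size, join=True)
```

Alternatively, `torchrun` can start the processes and populate the required environment variables automatically.
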
1 change: 1 addition & 0 deletions documentation/tags.md
@@ -86,6 +86,7 @@ Cybersecurity
D3
Dart
Data
Data Parallelism
Data Structures
Data Types
Database