Pytorch parallelism #5916

Merged · 17 commits · Jan 21, 2025
---
Title: 'Distributed Data Parallelism'
Description: 'Enables users to efficiently train models across multiple GPUs and machines.'
Subjects:
- 'Data Science'
- 'Machine Learning'
Tags:
- 'Data'
- 'Data Parallelism'
- 'PyTorch'
CatalogContent:
- 'intro-to-py-torch-and-neural-networks'
- 'paths/build-a-machine-learning-model'
---

## Introduction to Distributed Data Parallelism

Distributed Data Parallelism (DDP) in PyTorch is a module that enables users to efficiently train models across multiple GPUs and machines. By splitting the training workload across these devices, DDP reduces training time and makes it practical to scale to larger models and datasets.
It achieves parallelism by splitting the input data into smaller chunks, processing each chunk on a different GPU, and aggregating the resulting gradients to update the model. Compared to `DataParallel`, DDP offers better performance and scalability because it runs one process per GPU and overlaps gradient synchronization with the backward pass, minimizing communication overhead.
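
In practice, the per-process chunks of data are usually produced with `torch.utils.data.distributed.DistributedSampler`, which hands each rank a disjoint shard of the dataset. The snippet below is a minimal sketch that uses a dummy `TensorDataset` and hard-coded `num_replicas` and `rank` values; in a real job these would come from the initialised process group.

```py
import torch
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

# Dummy dataset standing in for real training data
dataset = TensorDataset(torch.randn(1000, 10), torch.randn(1000, 10))

# Each rank receives a different, non-overlapping shard of the dataset
sampler = DistributedSampler(dataset, num_replicas=4, rank=0)
loader = DataLoader(dataset, batch_size=32, sampler=sampler)

for inputs, labels in loader:
    pass  # forward/backward pass on this rank's shard
```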

### Key Features

- High performance: Efficient communication using the NVIDIA Collective Communications Library (NCCL) on GPUs, or the Gloo backend on Windows and CPU-only setups.
- Fault tolerance: Can be combined with PyTorch's elastic launcher (`torchrun`) to recover from process failures during distributed training.
- Scalability: Suitable for multi-node, multi-GPU setups.

## Syntax

To use DDP, a distributed process group must first be initialised, and the model is then wrapped with `torch.nn.parallel.DistributedDataParallel`.

```pseudo
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Set up environment variables
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '8000'

# Initialise the process group (each process supplies its own rank)
dist.init_process_group(backend="nccl", init_method="env://", world_size=4, rank=0)

# Wrap the model with DDP
model = torch.nn.Linear(10, 10).cuda()
model = DDP(model)
```

- `backend`: This is optional; if it is not specified, both a `gloo` and an `nccl` backend are created.
- `init_method`: This is also optional; the default is `env://`, which reads the connection details from environment variables (see the sketch below).
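
In recent PyTorch versions both arguments can therefore be omitted entirely. The following minimal sketch assumes the four environment variables listed in the next section have already been set, for example by a launcher such as `torchrun`:

```py
import torch.distributed as dist

# With no arguments, both the gloo and nccl backends are created and
# init_method falls back to "env://", so MASTER_ADDR, MASTER_PORT,
# RANK, and WORLD_SIZE must already be present in the environment.
dist.init_process_group()
```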

### Required Environment Variables

- `WORLD_SIZE`: The total number of processes.
- `RANK`: The rank of the current process, an integer between 0 and `WORLD_SIZE - 1`.
- `MASTER_ADDR`: The address of the master node.
- `MASTER_PORT`: The port of the master node.
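
When the training script is started with a launcher such as `torchrun`, these variables are set automatically for every process. The sketch below shows one way a process might read them; the single-machine assumption (global rank doubling as the GPU index) is only for illustration:

```py
import os
import torch
import torch.distributed as dist

# Provided by the launcher (e.g., torchrun) or exported manually
rank = int(os.environ["RANK"])
world_size = int(os.environ["WORLD_SIZE"])

# MASTER_ADDR and MASTER_PORT are picked up automatically via init_method="env://"
dist.init_process_group(backend="nccl", rank=rank, world_size=world_size)

# On a single machine, the global rank can double as the GPU index
torch.cuda.set_device(rank % torch.cuda.device_count())
```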

## Example

The following example demonstrates the use of DDP for distributed training:

```py
import os
import torch
import torch.nn as nn
import torch.optim as optim
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP

# Create a function to set up the initialisation process
def setup(rank, world_size):
    os.environ['MASTER_ADDR'] = 'localhost'
    os.environ['MASTER_PORT'] = '8000'
    dist.init_process_group("nccl", rank=rank, world_size=world_size)

# All resources allocated to the process group are released once training is complete
def cleanup():
    dist.destroy_process_group()

def train(rank, world_size):
    setup(rank, world_size)

    # Model and data live on the GPU assigned to this process
    model = nn.Linear(10, 10).cuda(rank)
    ddp_model = DDP(model, device_ids=[rank])
    optimiser = optim.SGD(ddp_model.parameters(), lr=0.01)

    # Dummy data and labels
    data = torch.randn(20, 10).cuda(rank)
    labels = torch.randn(20, 10).cuda(rank)

    # Training
    for _ in range(10):
        optimiser.zero_grad()
        outputs = ddp_model(data)
        loss = nn.MSELoss()(outputs, labels)
        loss.backward()  # DDP averages gradients across processes here
        optimiser.step()

    cleanup()

def main():
    # Launch one training process per available GPU
    world_size = torch.cuda.device_count()
    mp.spawn(train, args=(world_size,), nprocs=world_size, join=True)

if __name__ == "__main__":
    main()
```
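
Here, `mp.spawn` launches one training process per available GPU. Each process runs the same loop on its own data, and DDP averages the gradients across processes during `loss.backward()`, so every model replica ends each step with identical weights.
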
1 change: 1 addition & 0 deletions documentation/tags.md
D3
Dart
Data
Data Parallelism
Data Structures
Data Types
Database