Add Balance Loss to MoE Example for Enhanced Expert Load Distribution (Issue #1300) #1311
Add this suggestion to a batch that can be applied as a single commit.
This suggestion is invalid because no changes were made to the code.
Suggestions cannot be applied while the pull request is closed.
Suggestions cannot be applied while viewing a subset of changes.
Only one suggestion per line can be applied in a batch.
Add this suggestion to a batch that can be applied as a single commit.
Applying suggestions on deleted lines is not supported.
You must change the existing code in this line in order to create a valid suggestion.
Outdated suggestions cannot be applied.
This suggestion has been applied or marked resolved.
Suggestions cannot be applied from pending reviews.
Suggestions cannot be applied on multi-line comments.
Suggestions cannot be applied while the pull request is queued to merge.
Suggestion cannot be applied right now. Please check back later.
Add Balance Loss to MoE Example for Enhanced Expert Load Distribution (Issue #1300)
What changes were proposed in this pull request?
This pull request proposes the integration of a balance loss mechanism into the Mixture-of-Experts (MoE) example in the
atorch
codebase. Specifically, an auxiliary loss has been added to theTopNRouter
class to facilitate balanced load distribution across experts, improving model performance and efficiency. Key modifications include:Router Updates:
TopNRouter
class to compute and return an auxiliary loss based on router probabilities, helping distribute tokens more evenly._compute_auxiliary_loss()
method to calculate this auxiliary loss, which is currently set up to use the mean of router probabilities as a placeholder. This can be customized based on specific balancing requirements.MoE Layer Enhancements:
_SparseMLP
class to incorporate auxiliary loss from the router and propagate it back, enhancing the MoE layer’s load-balancing capabilities.Training Loop Modifications:
Auxiliary Loss Weight Configurability:
--aux_loss_weight
, to allow users to adjust the weight of the auxiliary loss as needed. This flexibility enables fine-tuning of the loss function based on model requirements.Why are the changes needed?
These changes address issue #1300 by introducing an auxiliary balance loss mechanism to the MoE example, which aims to improve the distribution of workload across experts. In multi-expert architectures, load imbalances can lead to inefficiencies and underutilized resources, ultimately impacting model performance. The proposed auxiliary loss provides a straightforward way to mitigate these imbalances, enhancing both training efficiency and overall model effectiveness.
Does this PR introduce any user-facing change?
Yes, this PR introduces a new command-line argument,
--aux_loss_weight
, which allows users to adjust the weight of the auxiliary loss as needed. By default, it is set to 0.01 but can be configured according to specific model or training needs.How was this patch tested?
The patch was tested through the following steps:
TopNRouter
class and is correctly incorporated into the MoE layer.--aux_loss_weight
argument correctly adjusts the auxiliary loss weight in different runs.These changes are expected to contribute to improved expert load balancing, benefiting users who require scalable and efficient MoE models.