Add Seq Packing in NeMo / Neva2 #11633
base: main
Conversation
__restore_key__: Tuple[Union[str, int, tuple], ...] = ()
position_ids: torch.Tensor = field(default_factory=lambda: torch.empty(0, dtype=torch.float))
packed_seq_params: PackedSeqParams = field(default_factory=lambda: PackedSeqParams())
Code scanning / CodeQL notice: Unnecessary lambda
can you resolve this?
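One way to resolve the CodeQL notice: a zero-argument constructor can be passed to `default_factory` directly, without a lambda wrapper. A minimal sketch, assuming `PackedSeqParams` comes from megatron-core (the class name `PackedSample` is hypothetical, for illustration only):

```python
from dataclasses import dataclass, field
from typing import Tuple, Union

import torch
from megatron.core.packed_seq_params import PackedSeqParams  # import path assumed


@dataclass
class PackedSample:  # hypothetical class name for illustration
    __restore_key__: Tuple[Union[str, int, tuple], ...] = ()
    # The lambda is still needed here, since torch.empty takes arguments:
    position_ids: torch.Tensor = field(default_factory=lambda: torch.empty(0, dtype=torch.float))
    # Passing the constructor itself resolves the "Unnecessary lambda" notice:
    packed_seq_params: PackedSeqParams = field(default_factory=PackedSeqParams)
```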
"""Sample type for image text raw batch""" | ||
|
||
position_ids: torch.Tensor = field(default_factory=lambda: torch.empty(0, dtype=torch.float)) | ||
packed_seq_params: PackedSeqParams = field(default_factory=lambda: PackedSeqParams()) |
Code scanning / CodeQL notice: Unnecessary lambda
packed_sequence=False,
packing_seq_length=4096,
can you use consistent naming as in fine_tuning.py? i.e. use one variable packed_sequence_size, and packed_sequence_size=-1 indicates not using packed sequence
i kept packed_sequence here for consistency with other vlm data modules, where we don't use packed_sequence_size
changed name from packing_seq_length to packed_sequence_size
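For clarity, a minimal sketch of how the renamed arguments could line up (illustrative only; the real data module takes many more arguments than shown here):

```python
class NevaDataModuleSketch:
    """Illustrative constructor showing the naming outcome of this thread."""

    def __init__(
        self,
        packed_sequence: bool = False,     # kept for consistency with other vlm data modules
        packed_sequence_size: int = 4096,  # renamed from packing_seq_length; fine_tuning.py uses -1 for "no packing"
    ) -> None:
        self.packed_sequence = packed_sequence
        self.packed_sequence_size = packed_sequence_size
```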
@@ -1711,7 +1711,10 @@ def masked_token_loss(tensor: Tensor, mask: Tensor):
    """
    losses = tensor.float()
    loss_mask = mask.view(-1).float()
    loss = torch.sum(losses.view(-1) * loss_mask) / loss_mask.sum()  # sequence level nll
    num_valid_tokens = loss_mask.sum()
    if num_valid_tokens < 0.5:  # no valid tokens
can you explain a bit more when this is the case? is this only valid for neva?
not only for neva, also for SFT. If the system and user prompts are very long and we predict the answer only, then after truncating from the right there might not be any answer/valid tokens left.
Usually we truncate the input/context and keep the answer intact, so that wouldn't happen
yep, the logic is a bit different for vlm. We don't want to truncate from the left.
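The hunk above is truncated, so the body of the `if` branch is not visible in this excerpt. One plausible shape of the guarded loss, sketched under the assumption that the guard only needs to avoid dividing by a zero token count:

```python
import torch
from torch import Tensor


def masked_token_loss(tensor: Tensor, mask: Tensor) -> Tensor:
    """Mean NLL over valid tokens, guarded against an empty loss mask
    (e.g. a sample whose answer tokens were all truncated from the right)."""
    losses = tensor.float()
    loss_mask = mask.view(-1).float()
    num_valid_tokens = loss_mask.sum()
    if num_valid_tokens < 0.5:  # no valid tokens
        # The masked sum is exactly zero here; returning it yields a zero
        # loss while keeping the tensor attached to the autograd graph.
        return torch.sum(losses.view(-1) * loss_mask)
    return torch.sum(losses.view(-1) * loss_mask) / num_valid_tokens  # sequence level nll
```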
beep boop 🤖: 🙏 The following files have warnings. In case you are familiar with these, please try helping us to improve the code base. Your code was analyzed with PyLint. The following annotations have been identified:
Mitigation guide:
By applying these rules, we reduce the occurrence of this message in the future. Thank you for improving NeMo's documentation!
What does this PR do ?
Add a one line overview of what this PR aims to accomplish.
Collection: [Note which collection this PR will affect]
Changelog
Usage
# Add a code snippet demonstrating how to use this
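The snippet above was left as the template placeholder; a hedged usage sketch for the packing flags discussed in the review thread might look like the following (the module path, class name, and surrounding arguments are assumptions based on the files this PR touches, not confirmed by the PR itself):

```python
# Hypothetical usage; tokenizer/image_processor arguments omitted for brevity.
from nemo.collections.multimodal.data.energon import EnergonMultiModalDataModule  # path assumed

data = EnergonMultiModalDataModule(
    path="/data/my_energon_dataset",  # placeholder dataset path
    seq_length=4096,
    micro_batch_size=1,
    global_batch_size=8,
    packed_sequence=True,             # enable sequence packing
    packed_sequence_size=4096,        # renamed from packing_seq_length in this PR
)
```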
GitHub Actions CI
The Jenkins CI system has been replaced by GitHub Actions self-hosted runners.
The GitHub Actions CI will run automatically when the "Run CICD" label is added to the PR.
To re-run CI remove and add the label again.
To run CI on an untrusted fork, a NeMo user with write access must first click "Approve and run".
Before your PR is "Ready for review"
Pre checks:
PR Type:
If you haven't finished some of the above items, you can still open a "Draft" PR.
Who can review?
Anyone in the community is free to review the PR once the checks have passed.
The contributor guidelines list specific people who can review PRs to various areas.
Additional Information