This repository has been archived by the owner on Dec 20, 2024. It is now read-only.

feat: make flash-attn configurable #73

Draft · wants to merge 5 commits into base `develop`
1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -152,6 +152,7 @@ Keep it human-readable, your future self will thank you!
- Remove credential prompt from mlflow login, replace with seed refresh token via web - [#78](https://github.com/ecmwf/anemoi-training/pull/78)
- Update CODEOWNERS
- Change how mlflow measures CPU Memory usage - [#94](https://github.com/ecmwf/anemoi-training/pull/94)
- Updated configuration examples to configure flash-attn - [#73](https://github.com/ecmwf/anemoi-training/pull/73)

## [0.1.0 - Anemoi training - First release](https://github.com/ecmwf/anemoi-training/releases/tag/0.1.0) - 2024-08-16

1 change: 1 addition & 0 deletions src/anemoi/training/config/model/transformer.yaml
@@ -14,6 +14,7 @@ processor:
num_heads: 16 # GraphTransformer or Transformer only
window_size: 512
dropout_p: 0.0 # GraphTransformer
attention_implementation: flash_attention # Transformer

encoder:
_target_: anemoi.models.layers.mapper.GraphTransformerForwardMapper
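The new `attention_implementation` key suggests a string-to-backend dispatch inside the model layer: the configured name selects an attention kernel, with an informative error for unknown values. A minimal sketch of that pattern, assuming nothing about the actual anemoi-models internals (function names, the backend registry, and the placeholder bodies are all illustrative, not the real API):

```python
# Illustrative dispatch for a configurable attention backend.
# The backend names mirror the config value in this PR ("flash_attention");
# everything else is a hypothetical sketch, not the anemoi-models code.
from typing import Callable


def select_attention(attention_implementation: str) -> Callable:
    """Return an attention callable for the configured backend name."""
    backends = {
        "flash_attention": _flash_attention,
        "scaled_dot_product_attention": _sdpa,
    }
    try:
        return backends[attention_implementation]
    except KeyError:
        raise ValueError(
            f"Unknown attention_implementation: {attention_implementation!r}; "
            f"expected one of {sorted(backends)}"
        )


def _flash_attention(q, k, v):
    # Placeholder: a real implementation would call the flash-attn kernels,
    # which require a compatible GPU and the flash-attn package.
    raise NotImplementedError("flash-attn not available in this sketch")


def _sdpa(q, k, v):
    # Placeholder for a fallback such as
    # torch.nn.functional.scaled_dot_product_attention.
    return q
```

Resolving the name once at model construction time (rather than per forward pass) keeps the per-step overhead at zero and surfaces a bad config value immediately.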