Commit

ShashankMosaicML committed Dec 4, 2024
1 parent fc8a120 commit 5f88093
Showing 1 changed file: llmfoundry/models/mpt/modeling_mpt.py (2 additions, 3 deletions)
@@ -237,7 +237,6 @@ def gen_sequence_id_info(
     ```.
     (The description above is taken verbatim from https://github.com/Dao-AILab/flash-attention/blob/9356a1c0389660d7e231ff3163c1ac17d9e3824a/flash_attn/bert_padding.py#L125 .)
     """
-    sequence_id_info = None
     if (sequence_id is not None) and attn_uses_sequence_id and (
         attn_impl == 'flash' or attn_impl == 'flex'
     ):
@@ -271,9 +270,9 @@ def gen_sequence_id_info(
             mode='constant',
             value=0,
         )
-        sequence_id_info = attention_mask_in_length
+        return attention_mask_in_length
 
-    return sequence_id_info
+    return None
 
 
 def gen_flash_attn_padding_info(
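
For context, here is a minimal runnable sketch of what `gen_sequence_id_info` looks like after this commit. Only the branch condition, the final `pad` call, and the two `return` statements appear in the hunks above; the function signature and the middle of the branch (counting tokens per packed sequence with a one-hot sum) are elided by the diff and reconstructed here as assumptions, following the `attention_mask_in_length` format documented in the flash-attention `bert_padding.py` docstring that the function's own docstring cites.

```python
import torch


def gen_sequence_id_info(sequence_id, S, attn_uses_sequence_id, attn_impl, attention_mask):
    # Sketch only: the signature and the body between the two hunks are
    # assumptions; the branch condition, pad call, and returns match the diff.
    if (sequence_id is not None) and attn_uses_sequence_id and (
        attn_impl == 'flash' or attn_impl == 'flex'
    ):
        # Count tokens per packed sequence: one-hot over sequence ids,
        # zero out padded positions, then sum over the time dimension.
        attention_mask_in_length = torch.nn.functional.one_hot(sequence_id)
        if attention_mask is not None:
            attention_mask_in_length = attention_mask_in_length.masked_fill(
                ~attention_mask.unsqueeze(-1),
                0,
            )
        attention_mask_in_length = attention_mask_in_length.sum(dim=1)
        # Pad the trailing dimension out to seqlen S, as in the second hunk.
        attention_mask_in_length = torch.nn.functional.pad(
            attention_mask_in_length,
            (0, S - attention_mask_in_length.shape[-1]),
            mode='constant',
            value=0,
        )
        # After this commit: return directly from inside the branch...
        return attention_mask_in_length

    # ...and return None explicitly on the fall-through path, instead of
    # threading a `sequence_id_info` variable through the function.
    return None


# Example with one batch of three packed sequences of lengths 2, 3, and 1:
sequence_id = torch.tensor([[0, 0, 1, 1, 1, 2]])
attention_mask = torch.ones(1, 6, dtype=torch.bool)
print(gen_sequence_id_info(sequence_id, 6, True, 'flash', attention_mask))
# tensor([[2, 3, 1, 0, 0, 0]])
```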
