Commit

ShashankMosaicML committed Jan 18, 2024
1 parent 11d3d70 commit 00bc72b
Showing 1 changed file with 1 addition and 2 deletions.
3 changes: 1 addition & 2 deletions llmfoundry/models/layers/attention.py
@@ -670,8 +670,7 @@ def forward(

         extra_attn_kwargs = {}
         if self.attn_impl == 'flash':
-            if flash_attn_padding_info is not None:
-                key_padding_mask = None
+            key_padding_mask = None
             extra_attn_kwargs = {
                 'should_repeat_kv_for_gqa': not is_flash_v2_installed(),
                 'sliding_window_size': self.sliding_window_size,
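For context, the sketch below (hypothetical, not the actual llmfoundry/models/layers/attention.py code) mirrors the post-commit behavior: whenever attn_impl == 'flash', the dense key_padding_mask is dropped unconditionally, rather than only when flash_attn_padding_info is provided, since padding reaches the flash kernels through flash_attn_padding_info instead. The helper name resolve_flash_attn_kwargs, its signature, and the inclusion of flash_attn_padding_info in the returned kwargs are assumptions made for illustration.

from typing import Any, Dict, Optional, Tuple

def resolve_flash_attn_kwargs(
    attn_impl: str,
    key_padding_mask: Optional[Any],
    flash_attn_padding_info: Optional[Dict[str, Any]],
    sliding_window_size: int = -1,
    flash_v2_installed: bool = True,
) -> Tuple[Optional[Any], Dict[str, Any]]:
    # Hypothetical helper mirroring the post-commit control flow.
    extra_attn_kwargs: Dict[str, Any] = {}
    if attn_impl == 'flash':
        # Before this commit, key_padding_mask was cleared only when
        # flash_attn_padding_info was not None; after it, the mask is
        # cleared on every flash-attention call, because padding is
        # conveyed via flash_attn_padding_info instead.
        key_padding_mask = None
        extra_attn_kwargs = {
            'should_repeat_kv_for_gqa': not flash_v2_installed,
            'sliding_window_size': sliding_window_size,
            # Assumed to be forwarded to the flash kernel wrapper.
            'flash_attn_padding_info': flash_attn_padding_info,
        }
    return key_padding_mask, extra_attn_kwargs

Whether flash_attn_padding_info is None or not, the returned key_padding_mask is None on the flash path, which is the behavioral change this commit makes.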
