Pull requests: ROCm/triton (forked from triton-lang/triton)
#672: Add a blocked version to address the performance issue when N is large (opened Dec 4, 2024 by xiaohuguo2023)
#670: Persistent-kernel version of the flash attention forward pass, plus a fix to the FLOP calculation when seqlen_q != seqlen_k (label: enhancement; 5 tasks done; opened Nov 29, 2024 by juuso-oskari)
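As background for the FLOP-calculation part of that title, here is a minimal sketch (not the PR's code; the function name and the causal adjustment are illustrative assumptions) of the conventional forward-pass FLOP count for scaled-dot-product attention, which must use seqlen_q * seqlen_k rather than seqlen**2 once the two lengths differ.

```python
# Illustrative sketch only, not code from PR #670.
# The two dominant matmuls are Q @ K^T (seqlen_q x head_dim x seqlen_k) and
# P @ V (seqlen_q x seqlen_k x head_dim); a matmul of shape M x K x N costs
# roughly 2*M*N*K FLOPs, so both sequence lengths enter the count.
def attention_fwd_flops(batch, heads, seqlen_q, seqlen_k, head_dim, causal=False):
    flops_per_matmul = 2.0 * batch * heads * seqlen_q * seqlen_k * head_dim
    total = 2 * flops_per_matmul      # QK^T plus PV
    if causal:
        total *= 0.5                  # roughly half the score matrix is masked out
    return total
```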
#642: Add a performance reference for important matmul kernels (opened Sep 16, 2024 by zhanglx13)
#610: [CODE SHARING] Insertion of custom LLVM IR and AMDGCN code into Triton (draft; opened Jul 10, 2024 by ravil-mobile)
#561: [release/internal/2.2.x] Only include HIP headers for triton (draft; opened Apr 17, 2024 by jithunnair-amd)
#556: Support bypassing data layout conversion for atomic operators (opened Apr 9, 2024 by xiaohuguo2023)
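For context on what "atomic operator" refers to here (a sketch, not the PR's change): Triton exposes atomics such as tl.atomic_add, and the kernel below shows a minimal use of one. The histogram kernel, tile size, and launch shapes are illustrative assumptions; on ROCm, PyTorch exposes HIP devices under the "cuda" device string.

```python
# Illustrative example of a Triton atomic operator, unrelated to PR #556's diff.
import torch
import triton
import triton.language as tl

@triton.jit
def histogram_kernel(x_ptr, hist_ptr, n_elements, BLOCK: tl.constexpr):
    pid = tl.program_id(axis=0)
    offs = pid * BLOCK + tl.arange(0, BLOCK)
    mask = offs < n_elements
    bins = tl.load(x_ptr + offs, mask=mask, other=0)
    # Many programs may update the same bin concurrently, hence the atomic add.
    tl.atomic_add(hist_ptr + bins, 1, mask=mask)

x = torch.randint(0, 128, (4096,), device="cuda", dtype=torch.int32)
hist = torch.zeros(128, device="cuda", dtype=torch.int32)
grid = (triton.cdiv(x.numel(), 1024),)
histogram_kernel[grid](x, hist, x.numel(), BLOCK=1024)
```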
#541: Fix the official flash-attention.py failure "TypeError: 'function' object is not subscriptable" (opened Mar 20, 2024 by zhangxiao-stack)
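As general context for the error class named in that title (an illustration, not the PR's actual diagnosis or fix; all names below are hypothetical): in Triton, only @triton.jit-decorated kernels support the kernel[grid](...) launch syntax, so subscripting a plain Python function with a grid raises exactly this TypeError.

```python
# Illustration of the error class only; kernel and function names are hypothetical.
import triton
import triton.language as tl

@triton.jit
def add_one_kernel(x_ptr, n, BLOCK: tl.constexpr):
    offs = tl.program_id(0) * BLOCK + tl.arange(0, BLOCK)
    mask = offs < n
    tl.store(x_ptr + offs, tl.load(x_ptr + offs, mask=mask) + 1, mask=mask)

def add_one_plain(x_ptr, n):
    """An ordinary Python function, not a JIT-compiled Triton kernel."""

grid = (1,)
# add_one_kernel[grid](x, x.numel(), BLOCK=1024)  # valid: a JITFunction is subscriptable
# add_one_plain[grid](x, x.numel())               # TypeError: 'function' object is not subscriptable
```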