Issues: ROCm/AMDMIGraphX
- Error in fuse_ops for hip_gemm_pointwise (bug) #3690, opened Dec 6, 2024 by ahsan-ca
- Add fusion for Contiguous Transpose Gemm for hipBLASLt (enhancement) #3664, opened Nov 28, 2024 by ahsan-ca
- Fail in find_inner_broadcast due to preserve_output_layout (bug) #3649, opened Nov 20, 2024 by shivadbhavsar
- Use select_module to compile single prefill/decode program for llama2 #3646, opened Nov 20, 2024 by turneram
- Investigate memory failures when workspace size is set to 0 in compile hipblaslt pass (bug) #3622, opened Nov 14, 2024 by ahsan-ca
- Quantized distillgpt2 accuracy issue when quant params are inputs (bug) #3612, opened Nov 11, 2024 by shivadbhavsar
- UAI llama2 script recompiles for each prompt even with prefill approach #3601, opened Nov 8, 2024 by turneram
- Dangling quantizelinear from horizontal fusion, BERT and DistilGPT2 (FP8, INT8, Perf Improve) #3598, opened Nov 7, 2024 by CharlieL7
- Missing constant propagation: Literal -> Multibroadcast -> Quantizelinear (FP8, INT8, Perf Improve) #3597, opened Nov 7, 2024 by CharlieL7
- GroupQueryAttention produces incorrect results when loaded from mxr #3596, opened Nov 6, 2024 by turneram
- Remove hipblaslt version check in hip_gemm_impl.cpp for OCP FP8 types #3592, opened Nov 5, 2024 by CharlieL7