
"torch.bfloat16 is not supported for quantization method awq. Supported dtypes: [torch.float16]" error even after trying dtypr=auto/half/float16/bfloat16 #9116

Unanswered
lavishasharma asked this question in Q&A
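The body of the question is not shown above, so the following is only a minimal, hypothetical sketch of how this error typically arises: loading an AWQ-quantized checkpoint through vLLM's `LLM` entry point while the dtype resolves to bfloat16. The model name is a placeholder, and pinning `dtype="float16"` is an assumption about what the awq path expects rather than a confirmed fix for this report.

```python
# Hypothetical reproduction sketch, not taken from the original question body.
# Assumes an AWQ checkpoint (placeholder name) loaded with vLLM; the awq
# quantization method only accepts torch.float16, so forcing dtype="float16"
# here avoids the "torch.bfloat16 is not supported" error described above.
from vllm import LLM

llm = LLM(
    model="TheBloke/Llama-2-7B-AWQ",  # placeholder AWQ model
    quantization="awq",
    dtype="float16",  # "auto"/"bfloat16" may resolve to bfloat16 and raise the error
)

outputs = llm.generate(["Hello, world"])
print(outputs[0].outputs[0].text)
```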


Replies: 0 comments
