This repository has been archived by the owner on May 1, 2023. It is now read-only.
Thanks for this great framework!
Is there an explicit 'no', or some limitation, on quantizing weights and/or activations to more than 8 bits with the asymmetric methods? When I tried 16 or 32 bits for the weights with asymetric_s (and similarly for the activations), accuracy dropped to 0.2%, whereas it should improve.
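For reference, below is a minimal, self-contained sketch of per-tensor asymmetric (affine) fake-quantization at an arbitrary bit width, written in plain PyTorch. It is not the framework's own implementation; the helper name and the signed-range handling are assumptions for illustration only. One possible pitfall at very wide bit widths (e.g., 32) is that the scaled intermediate values exceed what float32 can represent exactly, so accuracy does not necessarily keep improving as the bit width grows.

```python
# Standalone sketch of asymmetric (affine) fake-quantization at an arbitrary
# bit width. NOT the framework's code -- names and signed/unsigned handling
# are illustrative assumptions only.
import torch

def asymmetric_fake_quant(x: torch.Tensor, num_bits: int = 8, signed: bool = True) -> torch.Tensor:
    """Quantize-dequantize `x` with a per-tensor asymmetric scale and zero-point."""
    if signed:
        qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    else:
        qmin, qmax = 0, 2 ** num_bits - 1

    x_min, x_max = x.min(), x.max()
    # Guard against a degenerate (all-constant) tensor.
    scale = (x_max - x_min).clamp(min=1e-8) / (qmax - qmin)
    zero_point = torch.round(qmin - x_min / scale)

    # Note: for num_bits near 32, x / scale reaches ~2**31, beyond float32's
    # 24-bit mantissa, so rounding here no longer gains precision.
    q = torch.clamp(torch.round(x / scale + zero_point), qmin, qmax)
    return (q - zero_point) * scale

x = torch.randn(1000)
for bits in (8, 16, 32):
    err = (x - asymmetric_fake_quant(x, num_bits=bits)).abs().max().item()
    print(f"{bits}-bit max abs error: {err:.3e}")
```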
Amin-Azar changed the title from "Higher than 8-bit Quantization" to "Higher than 8-bit Quantization not working properly!?" on Mar 11, 2021.