Quantization Aware Fine Tuning? #1075
Answered by jbischof
ramkumarkoppu asked this question in General
-
Hi, I have used quantization aware training on CNN models with tensorflow_model_optimization to improve the accuracy of converted TFLite models for int8 inference. Will the same approach work for keras_nlp LLM fine-tuning targeting int8 TFLite inference?
Answered by jbischof, Jun 12, 2023
-
Yes @ramkumarkoppu, all our models are compatible with TFMOT. See https://www.tensorflow.org/model_optimization
Answer selected by ramkumarkoppu