Info about other tokenizer #26
Amazing work.

A question about the tokenizer: in the tokenizer and embedding code there are references to Qwen2-0.5B-Instruct and 1.5B. I wanted to know whether there have been any tests using them and, if so, what precision and fidelity they achieved, because using them could greatly reduce VRAM usage.

Another question: is there any chance that inference with offloading of the various models is planned, for low-end PCs?

Comments
Qwen is not tested in Sana. And what do you mean by offloading for low-end PCs?
Thanks for the info on Qwen. Regarding offloading, I mean loading and unloading the various parts or layers into RAM so as to fit Sana onto an 8 GB or 6 GB GPU (and maybe even 4 GB), besides running, for example, the VAE on CPU, or even part of Gemma. After all, Gemma also runs on a 6 GB desktop GPU and on Android phones with 8 GB RAM.
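Something along the lines of diffusers-style CPU offloading is what I have in mind. A rough sketch, assuming a hypothetical diffusers `SanaPipeline` integration and checkpoint ID (both are illustrative, not the repo's official API):

```python
# Illustrative sketch of model offloading with diffusers; the pipeline
# class and checkpoint name are assumptions, not Sana's official API.
import torch
from diffusers import SanaPipeline  # assumes a diffusers integration exists

pipe = SanaPipeline.from_pretrained(
    "Efficient-Large-Model/Sana_1600M_1024px_diffusers",  # hypothetical model ID
    torch_dtype=torch.bfloat16,
)

# Option 1: move whole sub-models (text encoder, transformer, VAE) to the
# GPU one at a time and back to CPU RAM when idle; modest speed cost.
pipe.enable_model_cpu_offload()

# Option 2: offload layer by layer; slowest, but the smallest VRAM
# footprint, closer to what 4-6 GB GPUs would need.
# pipe.enable_sequential_cpu_offload()

image = pipe(prompt="a cyberpunk cat", num_inference_steps=20).images[0]
image.save("sana_offload_test.png")
```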
@lawrence-cj However, the
We tried Qwen once as a prototype, but there is no release plan for it since we didn't see any advantage over Gemma or T5. So why is Qwen necessary here?
I thought that using Qwen (0.5B-Instruct) instead of Gemma (2B-it) as the text encoder would lead to lower resource consumption; that's why I was asking.
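As a rough sanity check of that memory claim, a sketch like the following compares the raw weight footprints of the two encoders (model IDs from the Hugging Face Hub; actual usage depends on dtype and activations):

```python
# Rough comparison of text-encoder weight footprints; a sketch, not a
# statement about Sana's actual pipeline.
import torch
from transformers import AutoModelForCausalLM

def footprint_gb(model_id: str, dtype=torch.bfloat16) -> float:
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=dtype)
    n_bytes = sum(p.numel() * p.element_size() for p in model.parameters())
    return n_bytes / 1024**3

# Note: gemma-2b-it is gated; requires accepting the license on the Hub.
for model_id in ["Qwen/Qwen2-0.5B-Instruct", "google/gemma-2b-it"]:
    print(f"{model_id}: ~{footprint_gb(model_id):.2f} GB in bf16")

# Roughly: ~0.5B params ≈ 1 GB in bf16 vs ~2.5B params ≈ 5 GB in bf16,
# which is why a smaller encoder looks attractive for low-VRAM GPUs.
```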
We tried T5-large once, which contains 400M parameters, but its ability is relatively low, so we haven't tried other LLMs or VLMs with fewer than 1B parameters. We are also working on a 4-bit Gemma quantization; once it's done, the whole pipeline will run on 8 GB GPUs, so the resource issue is not the highest priority. We may use Qwen if we plan a smaller model in the future.
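For a rough illustration of the idea, 4-bit loading of Gemma via transformers' bitsandbytes integration looks something like the sketch below; this is illustrative only, not the final quantization scheme:

```python
# Sketch of loading Gemma in 4-bit with bitsandbytes via transformers;
# the actual quantization approach in the release may differ.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",             # normal-float 4-bit
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16
)

text_encoder = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2b-it",
    quantization_config=bnb_config,
    device_map="auto",
)
# 4-bit weights cut the ~5 GB bf16 footprint to roughly 1.5-2 GB,
# leaving headroom for the diffusion transformer and VAE on an 8 GB GPU.
```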