
Info about other tokenizer #26

Open
chri002 opened this issue Nov 22, 2024 · 6 comments
Labels
Answered

Comments

@chri002

chri002 commented Nov 22, 2024

Amazing work.
A question about the tokenizer: the tokenizer and embedding code contain references to Qwen2-0.5B-Instruct and the 1.5B variant. I wanted to know whether any tests have been run with them and, if so, what precision and fidelity they achieved, since using them could greatly reduce VRAM usage.

Another question: is there any chance that inference with offloading of the various models is planned, for low-end PCs?

@lawrence-cj
Collaborator

Qwen has not been tested in Sana. And what do you mean by offloading for low-end PCs?

@chri002
Author

chri002 commented Nov 22, 2024

Thanks for the info on Qwen. Regarding offloading, I mean loading and unloading the various parts or layers between RAM and VRAM so as to fit Sana onto an 8 GB or 6 GB GPU (and maybe even 4 GB), besides running, for example, the VAE on CPU, or even part of Gemma. After all, Gemma already runs on 6 GB desktop GPUs and on Android devices with 8 GB of RAM.
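
For reference, this kind of offloading is available generically in diffusers. A minimal sketch, assuming Sana is loaded through diffusers' SanaPipeline (the checkpoint id below is illustrative, not confirmed in this thread):

```python
# Minimal CPU-offloading sketch, assuming diffusers' SanaPipeline.
# The checkpoint id is an assumption; substitute the one you actually use.
import torch
from diffusers import SanaPipeline

pipe = SanaPipeline.from_pretrained(
    "Efficient-Large-Model/Sana_1600M_1024px_diffusers",  # assumed id
    torch_dtype=torch.bfloat16,
)

# Option 1: keep only the sub-model currently in use (text encoder, DiT, VAE)
# on the GPU; the rest waits in CPU RAM. Small slowdown, large VRAM saving.
pipe.enable_model_cpu_offload()

# Option 2 (more aggressive): stream weights layer by layer from CPU RAM.
# Much slower, but minimizes peak VRAM. Use instead of option 1.
# pipe.enable_sequential_cpu_offload()

image = pipe(prompt="a lighthouse at dusk, watercolor").images[0]
image.save("sana_offloaded.png")
```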

@bil-ash

bil-ash commented Dec 15, 2024

> Qwen has not been tested in Sana. And what do you mean by offloading for low-end PCs?

@lawrence-cj However, the diffusion/model/builder.py file seems to contain code for Qwen2 (0.5B and 1.5B Instruct) as the text encoder. Also, the train_scripts/train.py file seems to suggest that it is possible to train Sana with Qwen or T5 as the text encoder as well. If some version of Sana has been trained with Qwen2-0.5B-Instruct as the text encoder, please release it; I would love to see the results from such a tiny text encoder.
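
For illustration only (this is not the code in diffusion/model/builder.py): a hypothetical sketch of how a small instruct LLM such as Qwen2-0.5B-Instruct could serve as a text encoder, taking its last hidden states as the conditioning sequence. The 300-token cap mirrors the Gemma setup and is an assumption here:

```python
# Hypothetical text-encoder sketch using Qwen2-0.5B-Instruct: tokenize the
# prompt and use the last hidden layer as conditioning embeddings.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "Qwen/Qwen2-0.5B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(name)
encoder = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.bfloat16)
encoder.eval()

inputs = tokenizer(
    "A watercolor painting of a lighthouse at dusk",
    return_tensors="pt",
    padding="max_length",
    max_length=300,   # assumed cap, mirroring the Gemma prompt length
    truncation=True,
)

with torch.no_grad():
    out = encoder(**inputs, output_hidden_states=True)

# Last-layer states (batch, seq_len, hidden) become the conditioning sequence;
# the attention mask travels with them so padded positions can be ignored.
text_embeddings = out.hidden_states[-1]
attention_mask = inputs["attention_mask"]
```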

@lawrence-cj
Collaborator

We tried Qwen once, just as a prototype, and there is no release plan for it since we didn't see any advantage over Gemma or T5. So why is Qwen necessary here?

lawrence-cj added the Answered label on Dec 18, 2024
@bil-ash

bil-ash commented Dec 18, 2024

> We tried Qwen once, just as a prototype, and there is no release plan for it since we didn't see any advantage over Gemma or T5. So why is Qwen necessary here?

I thought that using Qwen (0.5B Instruct) instead of Gemma (2B-it) as the text encoder would lead to lower resource consumption; that's why I was asking.

@lawrence-cj
Collaborator

lawrence-cj commented Dec 18, 2024

We tried T5-large once, which contains 400M parameters, but its ability is relatively low, so we haven't tried other LLMs or VLMs with fewer than 1B parameters. We are also working on a 4-bit Gemma quantization. Once it's done, the whole pipeline will run on 8GB GPUs, so the resource issue is not the highest priority. We may use Qwen if we plan a small model in the future.

Refer to the revised version of Sana on OpenReview:

[image: screenshot of the revised Sana paper on OpenReview]
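
As a rough sketch of what 4-bit loading of the text encoder could look like using the standard bitsandbytes path in transformers (the checkpoint id and the integration into Sana's pipeline are assumptions, not the team's actual implementation):

```python
# Sketch: load the Gemma text encoder in 4-bit NF4 via bitsandbytes.
# Checkpoint id is an assumption; the Sana-side integration is not shown.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",               # NormalFloat4 weight format
    bnb_4bit_compute_dtype=torch.bfloat16,   # matmuls still run in bf16
)

text_encoder = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-2b-it",   # assumed 2B instruction-tuned Gemma checkpoint
    quantization_config=bnb_config,
    device_map="auto",
)
# NF4 cuts the encoder's weight footprint to roughly a quarter of bf16,
# which is what would make an end-to-end 8GB-GPU pipeline plausible.
```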
