
Performance issues #263

Open
bryanhpchiang opened this issue Aug 29, 2023 · 3 comments

Comments

bryanhpchiang commented Aug 29, 2023

Have you tried this yet?

https://github.com/InternLM/lmdeploy

In my initial testing with 7B and 13B models, there's a noticeable per-token latency improvement (measured as the time to generate the first 5 tokens).
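
For reference, a minimal sketch of how a "time to first 5 tokens" measurement like this could be taken with Hugging Face transformers; the model ID, prompt, and token count here are illustrative placeholders, not details from this thread:

```python
# Rough timing sketch: measure wall-clock time to generate the first N tokens.
# Model ID and prompt are hypothetical examples.
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"  # placeholder 7B model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer("The quick brown fox", return_tensors="pt").to(model.device)

torch.cuda.synchronize()
start = time.perf_counter()
model.generate(**inputs, max_new_tokens=5, do_sample=False)
torch.cuda.synchronize()
elapsed = time.perf_counter() - start

print(f"time to first 5 tokens: {elapsed:.3f}s "
      f"({elapsed / 5 * 1000:.1f} ms/token)")
```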

Ph0rk0z (Contributor) commented Aug 29, 2023

It uses AWQ. I wonder about the perplexity and memory performance of that format versus GPTQ.
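
The usual way to settle the perplexity half of that question is a sliding-window evaluation on a held-out corpus. A rough sketch, assuming the quantized checkpoint loads through transformers (paths are placeholders, and this approximates perplexity as the mean of per-chunk mean NLLs over non-overlapping windows):

```python
# Approximate perplexity check over wikitext-2, non-overlapping windows.
# Checkpoint path is hypothetical; works for any model transformers can load.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/quantized-model"  # e.g. an AWQ or GPTQ checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

text = "\n\n".join(
    load_dataset("wikitext", "wikitext-2-raw-v1", split="test")["text"]
)
ids = tokenizer(text, return_tensors="pt").input_ids

max_len = 2048  # context window per chunk
nlls = []
for begin in range(0, ids.size(1), max_len):
    chunk = ids[:, begin : begin + max_len].to(model.device)
    if chunk.size(1) < 2:  # need at least one predicted token
        break
    with torch.no_grad():
        out = model(chunk, labels=chunk)  # loss = mean NLL over the chunk
    nlls.append(out.loss)

ppl = torch.exp(torch.stack(nlls).mean())
print(f"perplexity: {ppl.item():.3f}")
```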

bryanhpchiang (Author) commented Aug 29, 2023 via email

Ph0rk0z (Contributor) commented Aug 30, 2023

The paper probably doesn't compare against optimized exllama at 64G. Remember the SpQR paper doing something similar. I've noticed a lot of authors present very favorable results in their graphs and creatively omit the projects that compete with them. Then they're suddenly the best thing since sliced bread, but in reality it's not so.

The real test of this will come with multi-GPU 70B, not 7B. Ideally AWQ should be added into textgen, and then we can see how their default implementation does. When I first saw it, I think it was incomplete, and then I forgot about it.
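
A multi-GPU throughput comparison of the kind suggested here could look roughly like the sketch below: shard a large checkpoint across available GPUs with device_map="auto" and measure steady-state tokens/sec. The checkpoint path and generation length are placeholders, not from this thread:

```python
# Rough multi-GPU 70B throughput check. Checkpoint path is hypothetical;
# swap in the AWQ, GPTQ, or fp16 variant being compared.
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/70b-checkpoint"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"  # split across GPUs
)

inputs = tokenizer("Benchmark prompt", return_tensors="pt").to(model.device)
n_new = 128

model.generate(**inputs, max_new_tokens=8)  # warm-up pass

torch.cuda.synchronize()
start = time.perf_counter()
out = model.generate(**inputs, max_new_tokens=n_new, do_sample=False)
torch.cuda.synchronize()
elapsed = time.perf_counter() - start

generated = out.shape[1] - inputs.input_ids.shape[1]
print(f"{generated / elapsed:.2f} tokens/sec "
      f"across {torch.cuda.device_count()} GPUs")
```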
