
[LLM] Clarity on prefix caching #300

Closed
attafosu opened this issue Oct 22, 2024 · 1 comment

Comments

@attafosu

In the FAQs on LLMs there is no comment on prefix caching. However, the Llama3-405B benchmark intends to use vLLM for its reference implementation, and vLLM supports prefix caching. Is this optimization going to be allowed? If so, will it conflict with any of the existing rules on caching?
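
For reference, the optimization in question is vLLM's automatic prefix caching, which reuses KV-cache blocks for prompts that share a common prefix. A minimal sketch of how it is typically enabled is below; the `enable_prefix_caching` flag and the model identifier are assumptions and may differ across vLLM versions, so check the version pinned by the reference implementation.

```python
# Minimal sketch: enabling automatic prefix caching in vLLM's offline LLM entry point.
# The flag name and model id below are assumptions; verify against the vLLM
# version used by the reference implementation.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-3.1-405B-Instruct",  # hypothetical checkpoint id
    enable_prefix_caching=True,                  # reuse KV blocks for shared prompt prefixes
)

params = SamplingParams(temperature=0.0, max_tokens=64)

# Requests sharing a long common prefix (e.g. a system prompt) can hit the prefix cache.
shared_prefix = "You are a helpful assistant. " * 50
outputs = llm.generate(
    [shared_prefix + "Summarize prefix caching.",
     shared_prefix + "Explain KV cache reuse."],
    params,
)
for out in outputs:
    print(out.outputs[0].text)
```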

@attafosu (Author)

Deferring to the taskforce for resolution.
