[Core] Support offloading KV cache to CPU #10874
base: main
Conversation
👋 Hi! Thank you for contributing to the vLLM project. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.
```diff
@@ -362,7 +362,7 @@ def test_swap_blocks(
     block_mapping = list(zip(src_blocks, dst_blocks))
     block_mapping_tensor = torch.tensor(block_mapping,
                                         dtype=torch.int64,
-                                        device="cpu").view(-1, 2)
+                                        device=device).view(-1, 2)
```
Is this because this tensor needs to be accessed by the new CUDA memcpy kernel?
Yes. The new paged_copy kernel needs to access the block mapping from the GPU.
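For illustration, here is a minimal sketch of the device placement in question (variable names follow the test diff above; `paged_copy` itself is the PR's CUDA kernel and is not invoked here):

```python
import torch

# Example block IDs; in the real test these come from the test harness.
src_blocks, dst_blocks = [0, 2, 5], [1, 3, 4]
device = "cuda" if torch.cuda.is_available() else "cpu"

block_mapping = list(zip(src_blocks, dst_blocks))
# Building the tensor on `device` (rather than hard-coding "cpu") is what
# lets a CUDA kernel dereference the (src, dst) pairs directly from GPU
# memory, instead of requiring a host-side copy first.
block_mapping_tensor = torch.tensor(block_mapping,
                                    dtype=torch.int64,
                                    device=device).view(-1, 2)
```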
```diff
@@ -508,3 +523,19 @@ def get_num_cached_tokens(self, seq: Sequence) -> int:
         cached in the block manager for the sequence.
         """
         return self._computed_blocks_tracker.get_num_cached_tokens(seq)
+
+    def get_and_reset_swaps(self,
```
This function doesn't seem to get the real physical block ID from get_physical_block_id? Especially for the CPU PrefixCachingBlockAllocator, whose start ID is not zero.
If I understand correctly, this function should not return physical block IDs, because get_physical_block_id will be called later in block_manager.swap_in() / block_manager.swap_out().
The call chain is: scheduler._swap_in() --> block_manager.swap_in() --> block_allocator.get_physical_block_id() (and similarly for swapping out).
(Let me know if my understanding is incorrect and I will fix it ASAP, thanks!)
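To make the intended division of labor concrete, here is a toy sketch of that call chain (class and method names mirror the discussion; the bodies are illustrative, not vLLM's actual implementation):

```python
class ToyCpuAllocator:
    """Stand-in for a CPU allocator whose block IDs don't start at zero."""

    def __init__(self, start_id: int):
        self.start_id = start_id

    def get_physical_block_id(self, local_id: int) -> int:
        # Translate an allocator-local block ID into a device-wide one.
        return self.start_id + local_id


class ToyBlockManager:
    def __init__(self, cpu_allocator: ToyCpuAllocator):
        self.cpu_allocator = cpu_allocator

    def swap_in(self, local_ids: list[int]) -> list[int]:
        # The translation happens here, *after* get_and_reset_swaps has
        # returned allocator-local IDs.
        return [self.cpu_allocator.get_physical_block_id(i)
                for i in local_ids]


manager = ToyBlockManager(ToyCpuAllocator(start_id=1024))
print(manager.swap_in([0, 1, 2]))  # [1024, 1025, 1026]
```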
```diff
+        # NOTE(Kuntai): extend the swapping list for CPU offloading
+        new_swap_out, new_swap_in = \
+            self.block_manager.get_and_reset_swaps(time.time())
```
However, get_and_reset_swaps is called here directly, without get_physical_block_id. I think these block IDs are later sent to the cache engine as-is, so they are not the real physical block IDs.
Got it! I double-checked the logic and you are right. Just pushed another commit to fix the issue and update the docstring. Thanks for the catch!
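The fixing commit itself is not shown in this thread, but the shape of the fix would be to apply the translation before the IDs leave the block manager, roughly like this (hypothetical helper, illustrative only):

```python
def get_and_reset_swaps_fixed(pending_swaps: list[tuple[int, int]],
                              allocator) -> list[tuple[int, int]]:
    """Drain the pending swap list, returning *physical* (src, dst) pairs.

    `allocator` is assumed to expose get_physical_block_id(), as in the
    review discussion above.
    """
    physical = [(allocator.get_physical_block_id(src),
                 allocator.get_physical_block_id(dst))
                for src, dst in pending_swaps]
    pending_swaps.clear()  # the "reset" half of get_and_reset_swaps
    return physical
```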
This pull request has merge conflicts that must be resolved before it can be merged.
An implementation of CPU KV cache offloading (#7697)
TL;DR: CPU offloading beats prefix caching in our benchmark; we also found that the evictor can be optimized to save 10-30% of the runtime.
This PR fixes the DCO issue in Kuntai's original CPU offloading PR #9682. It also contains new CUDA kernels that improve KV cache offloading performance.
End-to-end benchmarking results:
A long-document QA workload (see benchmarks/benchmark_long_document_qa.py) running on an A100-40G-SXM GPU. The GPU can cache 8 documents and the CPU can cache 30 documents.
[Figure: end-to-end benchmark results; the original data for the figure are omitted.]
New kernel implementation microbenchmark:
The numbers were collected on A100-40GB-SXM GPUs. The new kernel achieves 4x better throughput than the old swap_blocks implementation. It also does not hurt performance when the number of pages is small.
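As a rough illustration of how such numbers can be measured (this is an assumed harness, not the PR's benchmark; the plain `Tensor.copy_` below stands in for the paged-copy kernel):

```python
import torch

def copy_throughput_gbps(num_pages: int, page_bytes: int = 256 * 1024,
                         iters: int = 100) -> float:
    """Time repeated GPU->CPU page copies and report GB/s."""
    src = torch.empty(num_pages, page_bytes, dtype=torch.uint8, device="cuda")
    dst = torch.empty(num_pages, page_bytes, dtype=torch.uint8,
                      device="cpu", pin_memory=True)
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(iters):
        dst.copy_(src, non_blocking=True)  # stand-in for the new kernel
    end.record()
    torch.cuda.synchronize()
    gigabytes = num_pages * page_bytes * iters / 1e9
    return gigabytes / (start.elapsed_time(end) / 1e3)  # elapsed_time is ms

if torch.cuda.is_available():
    for n in (1, 16, 256):
        print(f"{n:4d} pages: {copy_throughput_gbps(n):6.1f} GB/s")
```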
Potential improvement:
Currently, swap_blocks is invoked once per layer. If we can aggregate the copies for all layers into one kernel, the throughput of copying a single page should also exceed 10 GB/s.

Implementation
This PR has far fewer features than #8694, but it is truly minimal and makes very few core changes. So I suggest we use this PR to enable CPU KV cache offloading first, and then focus on disk.
The key idea of this implementation is to track the allocated blocks that did not hit the cache, and keep copying them to CPU after each scheduler step.
[Flow diagram not reproduced here.]
This idea is borrowed from ConServe (paper: https://arxiv.org/abs/2410.01228) and is based on the assumption that CPU-GPU bandwidth is much higher than the GPU's KV cache generation throughput. Thanks to Yifan for this idea.
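A conceptual sketch of that idea (hypothetical names; the real code threads this through the block manager and cache engine):

```python
class OffloadTracker:
    """Toy model: remember GPU blocks that missed the prefix cache and
    drain them for a GPU->CPU copy after every scheduler step."""

    def __init__(self):
        self.uncached_blocks: list[int] = []

    def on_allocate(self, block_id: int, cache_hit: bool) -> None:
        if not cache_hit:
            self.uncached_blocks.append(block_id)

    def after_scheduler_step(self) -> list[int]:
        # Under the ConServe assumption (CPU-GPU bandwidth >> KV-cache
        # generation rate), these copies can keep pace with generation.
        to_copy, self.uncached_blocks = self.uncached_blocks, []
        return to_copy  # handed to the cache engine's swap-out path


tracker = OffloadTracker()
tracker.on_allocate(block_id=7, cache_hit=False)
tracker.on_allocate(block_id=8, cache_hit=True)
print(tracker.after_scheduler_step())  # [7]
```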