[misc] use out argument for flash attention #10822
Conversation
Signed-off-by: youkaichao <[email protected]>
👋 Hi! Thank you for contributing to the vLLM project. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.
Can we keep the current implementation? Basically, I'd like to keep some ops (e.g., `query.view`) in the CUDA graph region, since running them outside causes high CPU overheads.
You can test the perf impact of this PR with `VLLM_USE_V1=1 python benchmarks/benchmark_latency.py`.
fixed in 0899730
Thanks for the fix. However, I think we need a more general solution than just moving the view ops out. Fundamentally, the boundary of the attention `forward` method does not necessarily align with the boundary of CUDA graphs. For example, we might want to additionally move the `reshape_and_cache_flash` op into the CUDA graph region for better performance. I believe we need to preserve the flexibility of the current main branch, where I can freely customize the boundary of CUDA graphs within the attention layer. WDYT?
I don't think you can move `reshape_and_cache_flash` into the CUDA graph: it needs the slot mapping as input, which is hidden from `torch.compile`.
The boundary of the piecewise CUDA graph needs to be aligned with a PyTorch custom op; it is not about the boundary of the attention layer's `forward` method.
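For readers unfamiliar with this, a minimal sketch of what that alignment looks like (the `mylib::attention_piece` registration below is made up for illustration, not vLLM's actual code): the custom op stays opaque to `torch.compile`, so a compilation backend such as vLLM's piecewise backend can split the traced graph at the op's call site and capture the pieces before and after it as CUDA graphs, while the op itself runs eagerly.

```python
import torch
from torch.library import custom_op

@custom_op("mylib::attention_piece", mutates_args=())
def attention_piece(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    # Eager implementation; opaque to torch.compile.
    return torch.nn.functional.scaled_dot_product_attention(q, k, v)

@attention_piece.register_fake
def _(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    # Shape/dtype propagation for tracing.
    return torch.empty_like(q)

@torch.compile
def layer(x: torch.Tensor) -> torch.Tensor:
    h = x * 2.0                                   # piece before the op
    h = torch.ops.mylib.attention_piece(h, h, h)  # boundary: the op call
    return h + 1.0                                # piece after the op
```

With a backend that treats the op as a splitting point (as vLLM's piecewise compilation does), the traced graph is partitioned around that call; the default inductor backend would simply keep it as an opaque extern kernel.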
> The boundary of the piecewise CUDA graph needs to be aligned with a PyTorch custom op; it is not about the boundary of the attention layer's `forward` method.

Yes, I wanted to say that the two boundaries should be decoupled. IIRC, the PR "essentially" enforces that the attention `forward` method be the PyTorch custom op.
The attention metadata object needs to be hidden from `torch.compile`; this is a basic requirement of the `torch.compile` integration.

The fewer arguments the attention op takes, the lower the overhead you get. That is the optimization, and in this PR I already use the minimum arguments for the attention op.

If you have any operation that does not depend on attention metadata but is specific to V1 attention, please let me know and I can make a separate custom op for it. If not, I'd like to unify them to reduce the maintenance cost.
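As a rough illustration of that requirement (the `set_forward_context` helper and the `mylib::unified_attention` registration below are assumptions for the sketch, not vLLM's actual API): the metadata is read from an ambient per-step context instead of being passed to the op, so the compiler never sees it and the op keeps a minimal signature, writing into the pre-allocated `output` tensor this PR introduces.

```python
import torch
from torch.library import custom_op

_FORWARD_CONTEXT: dict = {}

def set_forward_context(attn_metadata) -> None:
    # Called once per step by the model runner (hypothetical helper).
    _FORWARD_CONTEXT["attn_metadata"] = attn_metadata

@custom_op("mylib::unified_attention", mutates_args=("output",))
def unified_attention(query: torch.Tensor, key: torch.Tensor,
                      value: torch.Tensor, output: torch.Tensor) -> None:
    # The metadata never appears in the op signature, so torch.compile never sees it.
    attn_metadata = _FORWARD_CONTEXT.get("attn_metadata")
    # A real backend would launch the flash-attention kernel using attn_metadata;
    # this stand-in just fills the out argument in place.
    output.copy_(torch.nn.functional.scaled_dot_product_attention(query, key, value))
```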
Actually, the only requirement is that the caller of the custom op should not see anything from the attention metadata.

Even if you have some operations specific to V1 attention, we can add them to the attention layer, like `if self.use_v1: do_something()`.

I want to limit the custom op call to inside the attention layer, so that the code in the attention backend becomes more intuitive. I got several complaints previously that the attention implementation calls a `torch.ops.vllm` op which then redirects to another function below, which becomes very confusing for people who don't understand `torch.compile`.
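A rough sketch of that layering (the class structure is an illustrative assumption, reusing the hypothetical `mylib::unified_attention` op from the previous sketch rather than vLLM's real registration): the `torch.ops.*` call is confined to the attention layer's `forward`, so the backend code stays plain Python and readers who don't know `torch.compile` never have to chase the op indirection.

```python
import torch

class Attention(torch.nn.Module):
    def __init__(self, impl, use_v1: bool):
        super().__init__()
        self.impl = impl        # eager (non-V1) backend implementation
        self.use_v1 = use_v1

    def forward(self, query, key, value, output):
        if self.use_v1:
            # The only place the custom op is invoked; V1-specific work that does
            # not need the attention metadata can also live here, outside the op.
            torch.ops.mylib.unified_attention(query, key, value, output)
        else:
            output.copy_(self.impl(query, key, value))
        return output
```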
OK. Makes sense. Thanks for the explanation.
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: Andrew Feldman <[email protected]>
Replaces #9740, since that one is very old.

To make mypy happy, all attention `forward` signatures need to match, so I add `output: Optional[torch.Tensor] = None` to all attention backends.
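For reference, a sketch of the unified signature this describes (parameter list abbreviated and the class body illustrative, not the exact diff): every backend's `forward` accepts an optional pre-allocated output tensor so the signatures line up for mypy, even if only some backends actually write into it.

```python
from typing import Optional

import torch

class FlashAttentionImpl:
    def forward(
        self,
        query: torch.Tensor,
        key: torch.Tensor,
        value: torch.Tensor,
        kv_cache: torch.Tensor,
        attn_metadata: object,
        output: Optional[torch.Tensor] = None,
    ) -> torch.Tensor:
        if output is None:
            output = torch.empty_like(query)
        # ... run the attention kernel, writing the result into `output` ...
        return output
```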