[RFC]: Merge input processor and input mapper for multi-modal models #10114
Comments
- I would like to discuss an edge case where passing the input IDs and the multi-modal args is rather useful.
- Maybe a solution is explicitly including a superclass in the model definition which will allow such behavior, otherwise deprecating it?
- Maybe we can make a special case and allow token IDs if all other inputs aren't processed by HF.
Motivation
Background
To provide more control over the model inputs, we currently define two methods for multi-modal models in vLLM:

- The input processor is called inside `LLMEngine` to extend the prompt with placeholder tokens, which are reserved for vLLM features such as KV cache and chunked prefill.
- The input mapper is called inside `ModelRunner` to transform multi-modal inputs (e.g. `PIL` images) into tensor inputs, usually via the modality-specific processor (e.g. `AutoImageProcessor`) from HuggingFace. (A rough sketch of this step follows below.)
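For concreteness, here is a minimal sketch of what the input mapper conceptually does today. This is not vLLM's actual interface; the LLaVA model ID and image size are only illustrative assumptions.

```python
# Conceptual sketch only -- not vLLM's real input mapper. It shows the core job:
# turning a PIL image into tensor inputs via the HF modality-specific processor.
from PIL import Image
from transformers import AutoImageProcessor

# Example model; any HF model that ships an image processor config would do.
image_processor = AutoImageProcessor.from_pretrained("llava-hf/llava-1.5-7b-hf")
image = Image.new("RGB", (336, 336))  # stand-in for a user-supplied image

# Today this runs inside ModelRunner, i.e. on the critical path of model execution.
mm_kwargs = image_processor(images=image, return_tensors="pt")
print(mm_kwargs["pixel_values"].shape)  # e.g. torch.Size([1, 3, 336, 336])
```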
Issues with the current design

1. The input processor operates on the output of HF `AutoTokenizer`, a list of token IDs, instead of the text prompt. Since HF `AutoProcessor` doesn't accept token IDs, we have to write custom code to edit the list of token IDs based on the multi-modal inputs. For some models (such as Phi-3-vision), this means re-implementing code from their HF `AutoProcessor`, complicating the process of porting the model to vLLM. (A sketch of this pain point follows the list.)
2. The input mapper, living inside `ModelRunner`, lies on the critical path of vLLM's model execution. Even when the input mapper is fast, the tail TTFT and TPOT suffer because of this. As the input mapper takes up more time, our overall throughput decreases proportionally, which could be avoided if we moved it outside of the critical path. Nevertheless, we can do little if the `AutoProcessor` inside the input mapper is very slow, as in #9238. Hopefully huggingface/transformers#33810 can help with that!
3. The per-model feature size calculations in vLLM duplicate logic in HF `AutoProcessor`, which already performs most of the work for calculating the number of placeholder tokens.
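To make Issue 1 concrete, below is an illustrative sketch (not vLLM's real code). The model ID, prompt template, and the `expand_placeholders` helper are assumptions for illustration; the point is that HF exposes a text-plus-images interface but no token-ID interface.

```python
# Illustrative only -- not vLLM's actual input processor.
from PIL import Image
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("llava-hf/llava-1.5-7b-hf")  # example model
image = Image.new("RGB", (336, 336))

# Supported by HF: the text prompt and the image go through AutoProcessor together.
hf_inputs = processor(
    text="USER: <image>\nDescribe the image. ASSISTANT:",
    images=image,
    return_tensors="pt",
)

# Not supported by HF: there is no processor(input_ids=..., images=...). So when the
# prompt arrives as token IDs, each model needs hand-written code like this hypothetical
# helper to reserve one slot per image feature, re-implementing part of the HF processor.
def expand_placeholders(
    token_ids: list[int], image_token_id: int, feature_size: int
) -> list[int]:
    out: list[int] = []
    for tok in token_ids:
        out.extend([tok] * feature_size if tok == image_token_id else [tok])
    return out
```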
Proposed Change
Unified multi-modal processor
We plan to merge our input processor and input mapper into a unified multi-modal processor and call it inside the `LLMEngine` (and thus benefit from #8779), taking the role of the existing tokenizer. After this change, each input type will be processed as follows:

- Text prompt (processed by HF `AutoTokenizer`) [Unchanged]
- Text prompt with multi-modal input (processed by the multi-modal processor, which wraps HF `AutoProcessor`) [NEW]
- List of token IDs with multi-modal input: [DEPRECATED, see below]

This multi-modal processor will first call HF `AutoProcessor`, and then modify the processed token IDs by inserting placeholder tokens. (These processed token IDs are not to be confused with the deprecated "list of token IDs with multi-modal input", in which "list of token IDs" represents the tokenized text before processing with multi-modal input.) The number of placeholder tokens to assign can be determined by the existing feature size calculations for each model. A sketch of the intended flow follows below.
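As a rough illustration of the intended flow: the class, dataclass, and method names here are placeholders, not the final vLLM API.

```python
# Hedged sketch of the proposed unified multi-modal processor. MultiModalProcessor,
# ProcessedInputs, and process() are placeholder names, not the final vLLM interface.
from dataclasses import dataclass, field

from transformers import AutoProcessor


@dataclass
class ProcessedInputs:
    prompt_token_ids: list[int]  # token IDs with placeholder tokens inserted
    multi_modal_kwargs: dict = field(default_factory=dict)  # tensors for the model


class MultiModalProcessor:
    """Would run inside LLMEngine, replacing both the input processor and input mapper."""

    def __init__(self, model_id: str) -> None:
        self.hf_processor = AutoProcessor.from_pretrained(model_id)

    def process(self, text: str, mm_data: dict) -> ProcessedInputs:
        # Step 1: HF AutoProcessor handles tokenization and modality-specific processing.
        hf_out = self.hf_processor(text=text, return_tensors="pt", **mm_data)
        token_ids = hf_out["input_ids"][0].tolist()

        # Step 2 (omitted): insert/validate placeholder tokens using the existing
        # per-model feature size calculations, then hand everything to the engine.
        mm_kwargs = {k: v for k, v in hf_out.items()
                     if k not in ("input_ids", "attention_mask")}
        return ProcessedInputs(prompt_token_ids=token_ids, multi_modal_kwargs=mm_kwargs)
```

A call would then look roughly like `MultiModalProcessor("llava-hf/llava-1.5-7b-hf").process(prompt, {"images": image})`, with the engine consuming the returned token IDs and keyword tensors.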
Deprecate token IDs with multi-modal input

To be compatible with OpenAI's (legacy) Completions API, we currently support passing token IDs directly to both the `LLM` class and the OpenAI-compatible server. However, the Completions API doesn't support multi-modal inputs, so we will deprecate passing token IDs alongside multi-modal inputs to simplify model implementation (see Issue 1 above). Please tell us if you have a use case for this and don't want to see it removed!
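For reference, this is roughly the input form being deprecated versus the one that stays supported, using vLLM's `LLM` class. Treat it as a sketch: the exact prompt-dict fields may differ across versions, and the token IDs shown are placeholders.

```python
# Sketch of the affected usage; prompt-dict field names follow vLLM's documented
# multi-modal interface and may differ across versions.
from PIL import Image
from vllm import LLM

llm = LLM(model="llava-hf/llava-1.5-7b-hf")  # example multi-modal model
image = Image.new("RGB", (336, 336))

# Still supported: text prompt + multi-modal data (goes through the new processor).
llm.generate({
    "prompt": "USER: <image>\nDescribe the image. ASSISTANT:",
    "multi_modal_data": {"image": image},
})

# Deprecated by this RFC: pre-tokenized prompt + multi-modal data.
llm.generate({
    "prompt_token_ids": [1, 3148, 1001, 29901],  # placeholder IDs for illustration
    "multi_modal_data": {"image": image},
})
```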
Feedback Period
Feel free to comment as the effort progresses!
Timeline
- Rename `MultiModalInputs` to `MultiModalKwargs` (#10040)

The majority of our code will be called inside the existing `InputPreprocessor`, which is separated from the vLLM engine, making it easy to integrate with #8779.

CC List
@ywang96 @Isotr0py @WoosukKwon @robertgshaw2-neuralmagic
Any Other Things
Multi-modal plugins remain supported
You can define additional modalities in `MultiModalProcessingMetadata` to handle your custom multi-modal plugins. If the names of those modalities are not valid keyword arguments to HF `AutoProcessor`, you can override the default multi-modal processor (similar to how you currently need to define `_default_input_mapper` for multi-modal plugins).

Some users currently use multi-modal plugins to directly pass custom model inputs (#6260). We can implement an alternative `process_multimodal` to help them migrate to the new processing framework. A rough sketch of a plugin override is shown below.
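As a purely hypothetical illustration (none of these class or method names are real vLLM APIs, since the plugin-facing interface is still being designed in this RFC), an override for a custom modality whose name is not a valid `AutoProcessor` keyword might look like this:

```python
# Hypothetical sketch only: the class and method names are placeholders.
import torch


class PointCloudProcessor:
    """Override for a custom "point_cloud" modality that HF AutoProcessor cannot handle."""

    def __init__(self, hf_processor):
        self.hf_processor = hf_processor  # e.g. a transformers AutoProcessor instance

    def process(self, text: str, mm_data: dict) -> dict:
        # Pull out the custom modality before calling HF, since "point_cloud" is not a
        # valid keyword argument to AutoProcessor.
        points = mm_data.pop("point_cloud")
        outputs = dict(self.hf_processor(text=text, return_tensors="pt", **mm_data))
        # Map the plugin data to tensors ourselves (the old _default_input_mapper's job).
        outputs["point_cloud"] = torch.as_tensor(points, dtype=torch.float32)
        return outputs
```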
No batched preprocessing for now
Currently, preprocessing is performed per prompt in vLLM. While we can call the HF tokenizer and the modality-specific processor on batched inputs separately, calling the wrapping HF `AutoProcessor` with both a list of texts and a list of multi-modal data results in the processed multi-modal data (e.g. each image) being assigned to every text in the list, rather than the more intuitive `zip`-like behavior (e.g. the i-th image only assigned to the i-th text). To support batched preprocessing, we would have to write custom code for each model to combine the outputs of the HF tokenizer and the modality-specific processors. Given that this can significantly complicate model implementation (see Issue 1 above), we will not consider batched preprocessing at this stage, even with this change.
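For clarity, here is a minimal sketch of the per-prompt preprocessing described above. The model ID is only an example, and the batched-call caveat in the comments restates the behavior described in this section rather than something verified here.

```python
# Per-prompt preprocessing, as vLLM effectively does today. Illustrative only.
from PIL import Image
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("llava-hf/llava-1.5-7b-hf")

prompts = [
    "USER: <image>\nWhat is shown here? ASSISTANT:",
    "USER: <image>\nDescribe the colors. ASSISTANT:",
]
images = [Image.new("RGB", (336, 336)), Image.new("RGB", (336, 336))]

# One processor call per prompt keeps the i-th image paired with the i-th text.
per_prompt_inputs = [
    processor(text=text, images=image, return_tensors="pt")
    for text, image in zip(prompts, images)
]

# A single batched call, processor(text=prompts, images=images, ...), is avoided here:
# per this RFC, the wrapping AutoProcessor would associate the multi-modal data with
# every text rather than zipping them, and fixing that needs per-model custom code.
```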