Commit

fix: adjust batch_tokenized_inputs output in mllama
drbh committed Dec 13, 2024
1 parent 3299a26 commit 58e24a3
Showing 1 changed file with 1 addition and 1 deletion: server/text_generation_server/models/mllama_causal_lm.py
@@ -161,7 +161,7 @@ def from_pb_processor(
         dtype: torch.dtype,
         device: torch.device,
     ) -> "VlmCausalLMBatch":
-        batch_tokenized_inputs, image_inputs = cls.batch_tokenized_inputs(
+        batch_tokenized_inputs, image_inputs, _video_inputs = cls.batch_tokenized_inputs(
             pb.requests, tokenizer, processor, config
         )
         batch = cls.from_tokenized(pb, tokenizer, batch_tokenized_inputs, dtype, device)
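The diff above adjusts the caller because `cls.batch_tokenized_inputs` now returns a third element (video inputs) that the mllama batch does not consume. A minimal sketch (with hypothetical stand-in values, not the actual TGI implementation) of why the call site must change: Python raises `ValueError` when the number of assignment targets does not match the length of the returned tuple, so the fix binds the extra value to a throwaway `_`-prefixed name.

```python
def batch_tokenized_inputs():
    """Stand-in for cls.batch_tokenized_inputs: now returns three values."""
    tokens = [[101, 2023, 102]]       # token ids per request (dummy data)
    image_inputs = {"pixel_values": []}  # image features (dummy data)
    video_inputs = None               # new third return value, unused here
    return tokens, image_inputs, video_inputs

# Old call site: two targets for a three-tuple raises ValueError.
try:
    tokens, image_inputs = batch_tokenized_inputs()
except ValueError:
    print("old two-target unpacking fails")

# Fixed call site: the third value is bound to a throwaway name,
# mirroring the `_video_inputs` target in the commit.
tokens, image_inputs, _video_inputs = batch_tokenized_inputs()
print(tokens)
```

The leading underscore in `_video_inputs` is a Python convention signalling that the value is intentionally unused, which keeps the unpacking arity correct without pretending the batch handles video.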
