Commit

fix typo
Signed-off-by: Salar Hosseini <[email protected]>
skhorasganiTT committed Dec 17, 2024
1 parent 78e9f29 commit 91e468b
Showing 1 changed file with 1 addition and 1 deletion.
models/demos/llama3/tt/generator_vllm.py (1 addition, 1 deletion)
@@ -38,7 +38,7 @@ def input_processor_for_mllama(ctx: InputContext, inputs: Union[DecoderOnlyInput
         inputs["encoder_multi_modal_data"] = {}
         return inputs
 
-    # Set encoder prompt length based on the number of vision tokens so block manager allocates enable blocks (cross block tables).
+    # Set encoder prompt length based on the number of vision tokens so block manager allocates enough blocks (cross block tables).
     hf_config = ctx.model_config.hf_config
     assert hf_config.vision_config.image_size % 14 == 0, "chunk size should be multiple of 14"
    token_per_chunk = nearest_32(
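
For context on the comment being fixed: the encoder prompt length is derived from the number of vision tokens per image chunk, so the vLLM block manager can allocate enough cross-attention blocks. Below is a minimal sketch of how such a per-chunk token count is typically computed. The round-up semantics of nearest_32, the +1 class token, and the image_size of 560 are assumptions inferred from the visible assertion and Llama 3.2 Vision's published config, not the truncated code above.

# Hypothetical sketch, not the repository's actual implementation.

def nearest_32(x: int) -> int:
    # Assumed semantics: round x up to the nearest multiple of 32.
    return ((x + 31) // 32) * 32

def tokens_per_chunk(image_size: int, patch_size: int = 14) -> int:
    # Each image chunk is split into (image_size / patch_size)^2 patches;
    # the +1 accounts for a class token (assumption).
    assert image_size % patch_size == 0, "chunk size should be multiple of 14"
    return nearest_32((image_size // patch_size) ** 2 + 1)

# Example with Llama 3.2 Vision's image_size of 560:
# (560 // 14)^2 + 1 = 1601, rounded up to 1632.
print(tokens_per_chunk(560))  # 1632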
