Qwen2-VL-7B Inference Code #42
```python
import os
import sys

# Configuration
if len(sys.argv) == 2:
    ...

MODEL = "Qwen/Qwen2-VL-7B-Instruct"
MODEL_ID = "Qwen2-vl"

# Define file paths and other constants
PROMPTS_FILE = "Prompts/prompts_mmmu-pro.yaml"
...
print(f"Device selected: {DEVICE}")

# Model and Processor Loading
model = Qwen2VLForConditionalGeneration.from_pretrained(...)
processor = AutoProcessor.from_pretrained(MODEL)
min_pixels = 256 * 28 * 28

# Load prompt configuration
with open(PROMPTS_FILE, "r") as file:
    ...

# Helper functions (bodies truncated in the original post)
def replace_images_tokens(input_string): ...

def parse_options(options): ...

def construct_prompt(doc): ...

def mmmu_doc_to_text(doc): ...

def origin_mmmu_doc_to_visual(doc): ...

def vision_mmmu_doc_to_visual(doc): ...

def process_prompt(data): ...

def initialize_json(file_path): ...

def load_existing_data(file_path): ...

def update_json(file_path, new_entry): ...

def run_and_save(): ...

def main(): ...

if __name__ == '__main__':
    main()
```
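One knob worth checking in the setup above is the pixel budget: per the public Qwen2-VL model card, the processor maps roughly one visual token per 28×28 pixel patch (after merging), so `min_pixels = 256 * 28 * 28` asks for at least ~256 visual tokens per image. A minimal sketch of the arithmetic (the `max_pixels` value here is an assumed example, not taken from the code above):

```python
PATCH = 28  # effective pixels per visual token side, per the Qwen2-VL model card

min_pixels = 256 * PATCH * PATCH    # floor of ~256 visual tokens per image
max_pixels = 1280 * PATCH * PATCH   # assumed cap of ~1280 tokens (not in the original post)

def approx_visual_tokens(pixels: int) -> int:
    """Rough visual-token count implied by a given pixel budget."""
    return pixels // (PATCH * PATCH)

print(approx_visual_tokens(min_pixels))   # 256
print(approx_visual_tokens(max_pixels))   # 1280
```

Too low a budget can blur fine detail in chart/diagram questions, which directly hurts multiple-choice accuracy.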
Can you please provide your inference code for the Qwen2-VL-7B model? I am getting only 41.3% in the standard 4-choice setting.
My inference code is above.
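A low multiple-choice score often traces back to how the options are rendered into the prompt rather than to the model itself. The `parse_options` helper in the code above is not shown in full; a hypothetical minimal version (the name matches the code above, but the body and output format are my assumptions, not the author's actual code) might look like:

```python
import string

def parse_options(options):
    """Hypothetical sketch: format a list of answer choices as
    lettered lines, e.g. ['cat', 'dog'] -> 'A. cat\nB. dog'.
    (The original helper's body was not included in the issue.)"""
    letters = string.ascii_uppercase
    return "\n".join(f"{letters[i]}. {opt}" for i, opt in enumerate(options))

print(parse_options(["Paris", "London", "Rome", "Berlin"]))
# A. Paris
# B. London
# C. Rome
# D. Berlin
```

If the letters emitted here do not match the letters your answer extractor looks for, the score collapses even when the model reasons correctly, so this is worth double-checking first.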