[🆕 Blog] [🤔 FAQ] [🚀 InternVL2 Blog] [🗨️ Chat Demo] [🤗 HF Demo] [📖 Documents] [🌐 API] [🚀 Quick Start]
[📜 InternVL 1.0 Paper] [📜 InternVL 1.5 Report] [📖 1.0 Chinese Interpretation] [📖 1.5 Chinese Interpretation] [📖 2.0 Chinese Interpretation]
- 2024/08/01: The Chartmimic team evaluated the InternVL2 series on their benchmark. InternVL2-26B and 76B took the top two places among open-source models, with InternVL2-Llama3-76B surpassing GeminiProVision and performing on par with Claude-3-opus.
- 2024/08/01: InternVL2-Pro achieved SOTA performance among open-source models on the CharXiv dataset, also outperforming several well-known closed-source models such as GPT-4V, Gemini 1.5 Flash, and Claude 3 Sonnet.
- 2024/07/24: The MLVU team evaluated InternVL-1.5 on their benchmark. It averaged 50.4% on the multiple-choice task and 4.02 on the generation task, ranking first among all open-source multimodal LLMs on the multiple-choice task.
- 2024/07/18: 🔥🔥 InternVL2-40B achieved SOTA performance among open-source models on the Video-MME dataset, scoring 61.2 with 16 input frames and 64.4 with 32 frames. It leads other open-source models by a large margin and is the open-source model closest to GPT-4o mini.
- 2024/07/18: 🔥 InternVL2-Pro achieved SOTA performance on the DocVQA and InfoVQA benchmarks.
- 2024/07/04: 🚀 We released the InternVL2 series. InternVL2-Pro reached 62.0% accuracy on the MMMU benchmark, matching leading closed-source commercial models such as GPT-4o. A free API for this model can be requested by filling out the (English application form) / (Chinese application form). The other models can be downloaded from the HF link.
- 2024/06/19: We proposed Needle In A Multimodal Haystack (MM-NIAH), the first benchmark for evaluating models' comprehension of long multimodal documents.
- 2024/05/30: We released ShareGPT-4o, a large-scale, high-quality multimodal dataset. We plan to open-source a batch of data carefully annotated with GPT-4o, including 200K detailed image descriptions, 10K detailed video descriptions, and 10K detailed audio descriptions.
- 2024/05/29: We open-sourced the Mini-InternVL series, which includes two chat models: Mini-InternVL-Chat-2B-V1-5 and Mini-InternVL-Chat-4B-V1-5. These models deliver impressive performance at a very small size: the 2B model achieves 80% of the performance with 8% of the model size, and the 4B model achieves 90% of the performance with 16% of the model size. See our blog for more details.
- 2024/05/28: Thanks to the lmdeploy team for providing AWQ quantization support. The 4-bit model of InternVL 1.5 is released at OpenGVLab/InternVL-Chat-V1-5-AWQ.
- 2024/05/13: InternVL 1.0 can now be used as the text encoder for diffusion models, supporting multilingual generation in more than 110 languages worldwide. See MuLan for details.
- 2024/04/18: InternVL-Chat-V1-5 has been released on HuggingFace, approaching the performance of GPT-4V and Gemini Pro on various benchmarks such as MMMU, DocVQA, ChartQA, and MathVista.
- 2024/02/27: InternVL was accepted by CVPR 2024 (Oral)! 🎉
- 2024/02/24: The InternVL-Chat series has been integrated into the VLMEvalKit evaluation framework.
- 2024/02/21: InternVL-Chat-V1-2-Plus achieved SOTA performance on MathVista (59.9), MMBench (83.8), and MMVP (58.7). See our blog for details.
- 2024/02/12: InternVL-Chat-V1-2 has been released, reaching 51.6 on the MMMU validation set and 82.3 on the MMBench test set. See our blog and the SFT data for more information. The model has been released on HuggingFace, and the training / evaluation data and scripts are open-sourced.
- 2024/01/24: InternVL-Chat-V1-1 has been released; it supports Chinese dialogue and has strong OCR capability. See here for details.
- 2024/01/16: We released customized mmcv/mmsegmentation/mmdetection codebases integrated with DeepSpeed, which can be used to train large detection and segmentation models.
- Support vLLM and Ollama
- Rebuild the documentation with readthedocs
- Support fine-tuning different LLMs with LoRA (a rough sketch follows this list)
- Support video and PDF input in the online demo
- Release InternVL2 integrated with VisionLLMv2
- Release requirements.txt for InternVL2
- Release the training / evaluation code for the InternVL2 series
- Release the Streamlit web UI for InternVL1.5 and InternVL2
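The LoRA item above refers to adapter-based fine-tuning of the LLM backbone. As a rough illustration only (not this repository's official training code), a PEFT-based sketch might look like the following; the `language_model` attribute and the `target_modules` names are assumptions that must be checked against the actual backbone (InternLM2, Qwen2, Phi-3, ...):

```python
# Hypothetical sketch: attach LoRA adapters to the LLM part of an InternVL2 checkpoint with 🤗 PEFT.
# `language_model` and `target_modules` are assumptions -- verify them against the loaded model's module names.
import torch
from transformers import AutoModel
from peft import LoraConfig, get_peft_model

model = AutoModel.from_pretrained(
    'OpenGVLab/InternVL2-8B',
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True)

lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=['wqkv', 'wo'],  # attention projections; names differ per backbone
    task_type='CAUSAL_LM')
model.language_model = get_peft_model(model.language_model, lora_config)
model.language_model.print_trainable_parameters()  # only the LoRA adapters remain trainable
```

For the supported workflow, follow the LoRA fine-tuning tutorial linked in the documentation list below.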
- Get Started
  - Installation: [Environment] [requirements.txt]
  - Evaluation Data Preparation: [InternVL Evaluation]
  - Chat Data Format: [Meta File] [Pure Text] [Single Image] [Multiple Images] [Video]
  - InternVL-Chat API: [InternVL2-Pro]
  - Local Online Demo: [Streamlit Demo] [Gradio Demo] [LMDeploy Demo]
  - Tutorials: [Improving InternVL2 on COCO Caption with LoRA Fine-Tuning]
- InternVL Family
Model Name | Vision Part | Language Part | HF Link | MS Link | Document |
---|---|---|---|---|---|
InternVL2‑1B | InternViT‑300M‑448px | Qwen2‑0.5B‑Instruct | 🤗 link | 🤖 link | 📖 doc |
InternVL2‑2B | InternViT‑300M‑448px | internlm2‑chat‑1‑8b | 🤗 link | 🤖 link | 📖 doc |
InternVL2‑4B | InternViT‑300M‑448px | Phi‑3‑mini‑128k‑instruct | 🤗 link | 🤖 link | 📖 doc |
InternVL2‑8B | InternViT‑300M‑448px | internlm2_5‑7b‑chat | 🤗 link | 🤖 link | 📖 doc |
InternVL2‑26B | InternViT‑6B‑448px‑V1‑5 | internlm2‑chat‑20b | 🤗 link | 🤖 link | 📖 doc |
InternVL2‑40B | InternViT‑6B‑448px‑V1‑5 | Nous‑Hermes‑2‑Yi‑34B | 🤗 link | 🤖 link | 📖 doc |
InternVL2-Llama3-76B | InternViT‑6B‑448px‑V1‑5 | Hermes‑2‑Theta‑Llama‑3‑70B | 🤗 link | 🤖 link | 📖 doc |
We sincerely invite everyone to use the InternVL2-Pro API for academic research. For better management, please submit the (English application form) / (Chinese application form) to obtain free API access.
Model | Date | HF Link | MS Link | Note |
---|---|---|---|---|
Mini‑InternVL‑Chat‑4B‑V1‑5 | 2024.05.28 | 🤗 link | 🤖 link | 🚀🚀 16% of the model size, 90% of the performance |
Mini‑InternVL‑Chat‑2B‑V1‑5 | 2024.05.19 | 🤗 link | 🤖 link | 🚀 8% of the model size, 80% of the performance |
InternVL‑Chat‑V1‑5 | 2024.04.18 | 🤗 link | 🤖 link | Supports 4K images; super strong OCR; approaches the performance of GPT-4V and Gemini Pro on various benchmarks such as MMMU, DocVQA, ChartQA, and MathVista |
InternVL‑Chat‑V1‑2‑Plus | 2024.02.21 | 🤗 link | 🤖 link | More SFT data and stronger performance |
InternVL‑Chat‑V1‑2 | 2024.02.11 | 🤗 link | 🤖 link | Scaling up the LLM to 34B |
InternVL‑Chat‑V1‑1 | 2024.01.24 | 🤗 link | 🤖 link | Supports Chinese and stronger OCR |
InternVL‑Chat‑19B | 2023.12.25 | 🤗 link | 🤖 link | English multimodal dialogue |
InternVL‑Chat‑13B | 2023.12.25 | 🤗 link | 🤖 link | English multimodal dialogue |
Model | Date | HF Link | MS Link | Note |
---|---|---|---|---|
InternViT‑300M‑448px | 2024.05.25 | 🤗 link | 🤖 link | Distilled small vision foundation model with 300M parameters (🔥 new) |
InternViT‑6B‑448px‑V1‑5 | 2024.04.20 | 🤗 link | 🤖 link | Supports dynamic resolution and super strong OCR feature extraction via incremental pre-training (🔥 new) |
InternViT‑6B‑448px‑V1‑2 | 2024.02.11 | 🤗 link | 🤖 link | Supports 448 resolution via incremental pre-training |
InternViT‑6B‑448px‑V1‑0 | 2024.01.30 | 🤗 link | 🤖 link | Supports 448 resolution via incremental pre-training |
InternViT‑6B‑224px | 2023.12.22 | 🤗 link | 🤖 link | The first version of InternViT-6B, extracted from InternVL‑14B‑224px |
Model | Date | HF Link | MS Link | Note |
---|---|---|---|---|
InternVL‑14B‑224px | 2023.12.22 | 🤗 link | 🤖 link | Vision-language foundation model, InternViT-6B + QLLaMA, usable for CLIP-like image-text retrieval |
Visual Perception (click to expand)
- Linear-Probe Image Classification [see details]
ViT-22B uses the private JFT-3B dataset.
method | #param | IN-1K | IN-ReaL | IN-V2 | IN-A | IN-R | IN-Sketch |
---|---|---|---|---|---|---|---|
OpenCLIP-G | 1.8B | 86.2 | 89.4 | 77.2 | 63.8 | 87.8 | 66.4 |
DINOv2-g | 1.1B | 86.5 | 89.6 | 78.4 | 75.9 | 78.8 | 62.5 |
EVA-01-CLIP-g | 1.1B | 86.5 | 89.3 | 77.4 | 70.5 | 87.7 | 63.1 |
MAWS-ViT-6.5B | 6.5B | 87.8 | - | - | - | - | - |
ViT-22B* | 21.7B | 89.5 | 90.9 | 83.2 | 83.8 | 87.4 | - |
InternViT-6B (ours) | 5.9B | 88.2 | 90.4 | 79.9 | 77.5 | 89.8 | 69.1 |
- Semantic Segmentation [see details]
method | decoder | #param (train/total) | crop size | mIoU |
---|---|---|---|---|
OpenCLIP-G (frozen) | Linear | 0.3M / 1.8B | 512 | 39.3 |
ViT-22B (frozen) | Linear | 0.9M / 21.7B | 504 | 34.6 |
InternViT-6B (frozen) | Linear | 0.5M / 5.9B | 504 | 47.2 (+12.6) |
ViT-22B (frozen) | UperNet | 0.8B / 22.5B | 504 | 52.7 |
InternViT-6B (frozen) | UperNet | 0.4B / 6.3B | 504 | 54.9 (+2.2) |
ViT-22B | UperNet | 22.5B / 22.5B | 504 | 55.3 |
InternViT-6B | UperNet | 6.3B / 6.3B | 504 | 58.9 (+3.6) |
- Zero-Shot Image Classification [see details]
method | IN-1K | IN-A | IN-R | IN-V2 | IN-Sketch | ObjectNet |
---|---|---|---|---|---|---|
OpenCLIP-G | 80.1 | 69.3 | 92.1 | 73.6 | 68.9 | 73.0 |
EVA-02-CLIP-E+ | 82.0 | 82.1 | 94.5 | 75.7 | 71.6 | 79.6 |
ViT-22B* | 85.9 | 90.1 | 96.0 | 80.9 | - | 87.6 |
InternVL-C (ours) | 83.2 | 83.8 | 95.5 | 77.3 | 73.9 | 80.6 |
- Multilingual Zero-Shot Image Classification [see details]
EN: English, ZH: Chinese, JP: Japanese, Ar: Arabic, IT: Italian
method | IN-1K (EN) | IN-1K (ZH) | IN-1K (JP) | IN-1K (AR) | IN-1K (IT) |
---|---|---|---|---|---|
Taiyi-CLIP-ViT-H | - | 54.4 | - | - | - |
WuKong-ViT-L-G | - | 57.5 | - | - | - |
CN-CLIP-ViT-H | - | 59.6 | - | - | - |
AltCLIP-ViT-L | 74.5 | 59.6 | - | - | - |
EVA-02-CLIP-E+ | 82.0 | - | - | - | 41.2 |
OpenCLIP-XLM-R-H | 77.0 | 55.7 | 53.1 | 37.0 | 56.8 |
InternVL-C (ours) | 83.2 | 64.5 | 61.5 | 44.9 | 65.7 |
- Zero-Shot Video Classification
method | #frame | K400 | K600 | K700 |
---|---|---|---|---|
OpenCLIP-G | 1 | 65.9 | 66.1 | 59.2 |
EVA-02-CLIP-E+ | 1 | 69.8 | 69.3 | 63.4 |
InternVL-C (ours) | 1 | 71.0 | 71.3 | 65.7 |
ViCLIP | 8 | 75.7 | 73.5 | 66.4 |
InternVL-C (ours) | 8 | 79.4 | 78.8 | 71.5 |
Cross-Modal Retrieval (click to expand)
- English Zero-Shot Image-Text Retrieval [see details]
model | Flickr30K image-to-text (R@1 / R@5 / R@10) | Flickr30K text-to-image (R@1 / R@5 / R@10) | COCO image-to-text (R@1 / R@5 / R@10) | COCO text-to-image (R@1 / R@5 / R@10) | avg |
---|---|---|---|---|---|
OpenCLIP-G | 92.9 / 99.3 / 99.8 | 79.5 / 95.0 / 97.1 | 67.3 / 86.9 / 92.6 | 51.4 / 74.9 / 83.0 | 85.0 |
EVA-02-CLIP-E+ | 93.9 / 99.4 / 99.8 | 78.8 / 94.2 / 96.8 | 68.8 / 87.8 / 92.8 | 51.1 / 75.0 / 82.7 | 85.1 |
EVA-CLIP-8B | 95.6 / 99.6 / 99.9 | 80.8 / 95.5 / 97.6 | 70.3 / 89.3 / 93.9 | 53.0 / 76.0 / 83.4 | 86.2 |
InternVL-C (ours) | 94.7 / 99.6 / 99.9 | 81.7 / 96.0 / 98.2 | 70.6 / 89.0 / 93.5 | 54.1 / 77.3 / 84.6 | 86.6 |
InternVL-G (ours) | 95.7 / 99.7 / 99.9 | 85.0 / 97.0 / 98.6 | 74.9 / 91.3 / 95.2 | 58.6 / 81.3 / 88.0 | 88.8 |
- Chinese Zero-Shot Image-Text Retrieval [see details]
model | Flickr30K-CN image-to-text (R@1 / R@5 / R@10) | Flickr30K-CN text-to-image (R@1 / R@5 / R@10) | COCO-CN image-to-text (R@1 / R@5 / R@10) | COCO-CN text-to-image (R@1 / R@5 / R@10) | avg |
---|---|---|---|---|---|
CN-CLIP-ViT-H | 81.6 / 97.5 / 98.8 | 71.2 / 91.4 / 95.5 | 63.0 / 86.6 / 92.9 | 69.2 / 89.9 / 96.1 | 86.1 |
OpenCLIP-XLM-R-H | 86.1 / 97.5 / 99.2 | 71.0 / 90.5 / 94.9 | 70.0 / 91.5 / 97.0 | 66.1 / 90.8 / 96.0 | 87.6 |
InternVL-C (ours) | 90.3 / 98.8 / 99.7 | 75.1 / 92.9 / 96.4 | 68.8 / 92.0 / 96.7 | 68.9 / 91.9 / 96.5 | 89.0 |
InternVL-G (ours) | 92.9 / 99.4 / 99.8 | 77.7 / 94.8 / 97.3 | 71.4 / 93.9 / 97.7 | 73.8 / 94.4 / 98.1 | 90.9 |
- Multilingual Zero-Shot Image-Text Retrieval [see details]
method | EN | ES | FR | ZH | IT | KO | RU | JP | average |
---|---|---|---|---|---|---|---|---|---|
AltCLIP | 95.4 | 94.1 | 92.9 | 95.1 | 94.2 | 94.4 | 91.8 | 91.7 | 93.7 |
OpenCLIP-XLM-R-H | 97.3 | 96.1 | 94.5 | 94.7 | 96.0 | 90.2 | 93.9 | 94.0 | 94.6 |
InternVL-C (ours) | 97.3 | 95.7 | 95.1 | 95.6 | 96.0 | 92.2 | 93.3 | 95.5 | 95.1 |
InternVL-G (ours) | 98.6 | 97.7 | 96.5 | 96.7 | 96.9 | 95.1 | 94.8 | 96.1 | 96.6 |
Multimodal Dialogue
Using InternViT-6B to Extract Visual Features (click to expand)
import torch
from PIL import Image
from transformers import AutoModel, CLIPImageProcessor
model = AutoModel.from_pretrained(
'OpenGVLab/InternViT-6B-448px-V1-5',
torch_dtype=torch.bfloat16,
low_cpu_mem_usage=True,
trust_remote_code=True).cuda().eval()
image = Image.open('./examples/image1.jpg').convert('RGB')
image_processor = CLIPImageProcessor.from_pretrained('OpenGVLab/InternViT-6B-448px-V1-5')
pixel_values = image_processor(images=image, return_tensors='pt').pixel_values
pixel_values = pixel_values.to(torch.bfloat16).cuda()
outputs = model(pixel_values)
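# `outputs` should be a standard transformers `BaseModelOutputWithPooling`:
# `outputs.last_hidden_state` holds the per-patch token features and `outputs.pooler_output` a pooled
# image embedding (verify the exact fields against the checkpoint's modeling code).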
Using InternVL-C(ontrastive) and InternVL-G(enerative) for Cross-Modal Retrieval (click to expand)
import torch
from PIL import Image
from transformers import AutoModel, CLIPImageProcessor
from transformers import AutoTokenizer
model = AutoModel.from_pretrained(
'OpenGVLab/InternVL-14B-224px',
torch_dtype=torch.bfloat16,
low_cpu_mem_usage=True,
trust_remote_code=True).cuda().eval()
image_processor = CLIPImageProcessor.from_pretrained('OpenGVLab/InternVL-14B-224px')
tokenizer = AutoTokenizer.from_pretrained(
'OpenGVLab/InternVL-14B-224px', use_fast=False, add_eos_token=True)
tokenizer.pad_token_id = 0 # set pad_token_id to 0
images = [
Image.open('./examples/image1.jpg').convert('RGB'),
Image.open('./examples/image2.jpg').convert('RGB'),
Image.open('./examples/image3.jpg').convert('RGB')
]
prefix = 'summarize:'
texts = [
prefix + 'a photo of a red panda', # English
prefix + '一张熊猫的照片', # Chinese
prefix + '二匹の猫の写真' # Japanese
]
pixel_values = image_processor(images=images, return_tensors='pt').pixel_values
pixel_values = pixel_values.to(torch.bfloat16).cuda()
input_ids = tokenizer(texts, return_tensors='pt', max_length=80,
truncation=True, padding='max_length').input_ids.cuda()
# InternVL-C
logits_per_image, logits_per_text = model(
image=pixel_values, text=input_ids, mode='InternVL-C')
probs = logits_per_image.softmax(dim=-1)
# tensor([[9.9609e-01, 5.2185e-03, 6.0070e-08],
# [2.2949e-02, 9.7656e-01, 5.9903e-06],
# [3.2932e-06, 7.4863e-05, 1.0000e+00]], device='cuda:0',
# dtype=torch.bfloat16, grad_fn=<SoftmaxBackward0>)
# InternVL-G
logits_per_image, logits_per_text = model(
image=pixel_values, text=input_ids, mode='InternVL-G')
probs = logits_per_image.softmax(dim=-1)
# tensor([[9.9609e-01, 3.1738e-03, 3.6322e-08],
# [8.6060e-03, 9.9219e-01, 2.8759e-06],
# [1.7583e-06, 3.1233e-05, 1.0000e+00]], device='cuda:0',
# dtype=torch.bfloat16, grad_fn=<SoftmaxBackward0>)
# please set add_eos_token to False for generation
tokenizer.add_eos_token = False
image = Image.open('./examples/image1.jpg').convert('RGB')
pixel_values = image_processor(images=image, return_tensors='pt').pixel_values
pixel_values = pixel_values.to(torch.bfloat16).cuda()
tokenized = tokenizer("English caption:", return_tensors='pt')
pred = model.generate(
pixel_values=pixel_values,
input_ids=tokenized.input_ids.cuda(),
attention_mask=tokenized.attention_mask.cuda(),
num_beams=5,
min_new_tokens=8,
)
caption = tokenizer.decode(pred[0].cpu(), skip_special_tokens=True).strip()
# English caption: a red panda sitting on top of a wooden platform
Using InternVL-Chat for Multimodal Dialogue (click to expand)
Here we take the smaller OpenGVLab/InternVL2-8B as an example:
import numpy as np
import torch
import torchvision.transforms as T
from decord import VideoReader, cpu
from PIL import Image
from torchvision.transforms.functional import InterpolationMode
from transformers import AutoModel, AutoTokenizer
IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)
def build_transform(input_size):
MEAN, STD = IMAGENET_MEAN, IMAGENET_STD
transform = T.Compose([
T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
T.Resize((input_size, input_size), interpolation=InterpolationMode.BICUBIC),
T.ToTensor(),
T.Normalize(mean=MEAN, std=STD)
])
return transform
def find_closest_aspect_ratio(aspect_ratio, target_ratios, width, height, image_size):
best_ratio_diff = float('inf')
best_ratio = (1, 1)
area = width * height
for ratio in target_ratios:
target_aspect_ratio = ratio[0] / ratio[1]
ratio_diff = abs(aspect_ratio - target_aspect_ratio)
if ratio_diff < best_ratio_diff:
best_ratio_diff = ratio_diff
best_ratio = ratio
elif ratio_diff == best_ratio_diff:
if area > 0.5 * image_size * image_size * ratio[0] * ratio[1]:
best_ratio = ratio
return best_ratio
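# dynamic_preprocess tiles the image into an aspect-ratio-matched grid of image_size x image_size crops
# (between min_num and max_num tiles) and, if use_thumbnail is set, appends a resized thumbnail of the
# full image as one extra tile.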
def dynamic_preprocess(image, min_num=1, max_num=12, image_size=448, use_thumbnail=False):
orig_width, orig_height = image.size
aspect_ratio = orig_width / orig_height
# calculate the existing image aspect ratio
target_ratios = set(
(i, j) for n in range(min_num, max_num + 1) for i in range(1, n + 1) for j in range(1, n + 1) if
i * j <= max_num and i * j >= min_num)
target_ratios = sorted(target_ratios, key=lambda x: x[0] * x[1])
# find the closest aspect ratio to the target
target_aspect_ratio = find_closest_aspect_ratio(
aspect_ratio, target_ratios, orig_width, orig_height, image_size)
# calculate the target width and height
target_width = image_size * target_aspect_ratio[0]
target_height = image_size * target_aspect_ratio[1]
blocks = target_aspect_ratio[0] * target_aspect_ratio[1]
# resize the image
resized_img = image.resize((target_width, target_height))
processed_images = []
for i in range(blocks):
box = (
(i % (target_width // image_size)) * image_size,
(i // (target_width // image_size)) * image_size,
((i % (target_width // image_size)) + 1) * image_size,
((i // (target_width // image_size)) + 1) * image_size
)
# split the image
split_img = resized_img.crop(box)
processed_images.append(split_img)
assert len(processed_images) == blocks
if use_thumbnail and len(processed_images) != 1:
thumbnail_img = image.resize((image_size, image_size))
processed_images.append(thumbnail_img)
return processed_images
def load_image(image_file, input_size=448, max_num=12):
image = Image.open(image_file).convert('RGB')
transform = build_transform(input_size=input_size)
images = dynamic_preprocess(image, image_size=input_size, use_thumbnail=True, max_num=max_num)
pixel_values = [transform(image) for image in images]
pixel_values = torch.stack(pixel_values)
return pixel_values
# If you have an 80G A100 GPU, you can put the entire model on a single GPU.
# Otherwise, you need to load the model across multiple GPUs; please refer to the `Multiple GPUs` section.
path = 'OpenGVLab/InternVL2-8B'
model = AutoModel.from_pretrained(
path,
torch_dtype=torch.bfloat16,
low_cpu_mem_usage=True,
trust_remote_code=True).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False)
# set the max number of tiles in `max_num`
pixel_values = load_image('./examples/image1.jpg', max_num=12).to(torch.bfloat16).cuda()
generation_config = dict(max_new_tokens=1024, do_sample=False)
# pure-text conversation (纯文本对话)
question = 'Hello, who are you?'
response, history = model.chat(tokenizer, None, question, generation_config, history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
question = 'Can you tell me a story?'
response, history = model.chat(tokenizer, None, question, generation_config, history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')
# single-image single-round conversation (单图单轮对话)
question = '<image>\nPlease describe the image shortly.'
response = model.chat(tokenizer, pixel_values, question, generation_config)
print(f'User: {question}\nAssistant: {response}')
# single-image multi-round conversation (单图多轮对话)
question = '<image>\nPlease describe the image in detail.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
question = 'Please write a poem according to the image.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')
# multi-image multi-round conversation, combined images (多图多轮对话,拼接图像)
pixel_values1 = load_image('./examples/image1.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values2 = load_image('./examples/image2.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values = torch.cat((pixel_values1, pixel_values2), dim=0)
question = '<image>\nDescribe the two images in detail.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
question = 'What are the similarities and differences between these two images.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')
# multi-image multi-round conversation, separate images (多图多轮对话,独立图像)
pixel_values1 = load_image('./examples/image1.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values2 = load_image('./examples/image2.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values = torch.cat((pixel_values1, pixel_values2), dim=0)
num_patches_list = [pixel_values1.size(0), pixel_values2.size(0)]
question = 'Image-1: <image>\nImage-2: <image>\nDescribe the two images in detail.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
num_patches_list=num_patches_list,
history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
question = 'What are the similarities and differences between these two images.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
num_patches_list=num_patches_list,
history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')
# batch inference, single image per sample (单图批处理)
pixel_values1 = load_image('./examples/image1.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values2 = load_image('./examples/image2.jpg', max_num=12).to(torch.bfloat16).cuda()
num_patches_list = [pixel_values1.size(0), pixel_values2.size(0)]
pixel_values = torch.cat((pixel_values1, pixel_values2), dim=0)
questions = ['<image>\nDescribe the image in detail.'] * len(num_patches_list)
responses = model.batch_chat(tokenizer, pixel_values,
num_patches_list=num_patches_list,
questions=questions,
generation_config=generation_config)
for question, response in zip(questions, responses):
print(f'User: {question}\nAssistant: {response}')
# video multi-round conversation (视频多轮对话)
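# get_index evenly samples `num_segments` frame indices across the clip (or the given [start, end] bound
# in seconds); load_video then decodes those frames with decord and tiles each frame via dynamic_preprocess
# (max_num=1 keeps a single tile per frame).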
def get_index(bound, fps, max_frame, first_idx=0, num_segments=32):
if bound:
start, end = bound[0], bound[1]
else:
start, end = -100000, 100000
start_idx = max(first_idx, round(start * fps))
end_idx = min(round(end * fps), max_frame)
seg_size = float(end_idx - start_idx) / num_segments
frame_indices = np.array([
int(start_idx + (seg_size / 2) + np.round(seg_size * idx))
for idx in range(num_segments)
])
return frame_indices
def load_video(video_path, bound=None, input_size=448, max_num=1, num_segments=32):
vr = VideoReader(video_path, ctx=cpu(0), num_threads=1)
max_frame = len(vr) - 1
fps = float(vr.get_avg_fps())
pixel_values_list, num_patches_list = [], []
transform = build_transform(input_size=input_size)
frame_indices = get_index(bound, fps, max_frame, first_idx=0, num_segments=num_segments)
for frame_index in frame_indices:
img = Image.fromarray(vr[frame_index].asnumpy()).convert('RGB')
img = dynamic_preprocess(img, image_size=input_size, use_thumbnail=True, max_num=max_num)
pixel_values = [transform(tile) for tile in img]
pixel_values = torch.stack(pixel_values)
num_patches_list.append(pixel_values.shape[0])
pixel_values_list.append(pixel_values)
pixel_values = torch.cat(pixel_values_list)
return pixel_values, num_patches_list
video_path = './examples/red-panda.mp4'
pixel_values, num_patches_list = load_video(video_path, num_segments=8, max_num=1)
pixel_values = pixel_values.to(torch.bfloat16).cuda()
video_prefix = ''.join([f'Frame{i+1}: <image>\n' for i in range(len(num_patches_list))])
question = video_prefix + 'What is the red panda doing?'
# Frame1: <image>\nFrame2: <image>\n...\nFrame8: <image>\n{question}
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
num_patches_list=num_patches_list, history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
question = 'Describe this video in detail. Don\'t repeat.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
num_patches_list=num_patches_list, history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')
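Besides the `model.chat` interface above, the demos linked in the Get Started section also serve InternVL2 through LMDeploy. The snippet below is a minimal sketch of that route, assuming a recent lmdeploy release with vision-language support; the concrete arguments (e.g. `session_len=8192`) are illustrative rather than the project's recommended settings:

```python
# Minimal sketch: chat with InternVL2-8B through lmdeploy's VLM pipeline (assumes a recent `lmdeploy` install).
from lmdeploy import pipeline, TurbomindEngineConfig
from lmdeploy.vl import load_image

pipe = pipeline('OpenGVLab/InternVL2-8B',
                backend_config=TurbomindEngineConfig(session_len=8192))  # illustrative context length
image = load_image('./examples/image1.jpg')
response = pipe(('Describe this image in detail.', image))
print(response.text)
```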
This project is released under the MIT License. Parts of the code and models in this project come from other sources and are subject to their original licenses.
If you find this project useful in your research, please consider citing:
@article{chen2023internvl,
title={InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks},
author={Chen, Zhe and Wu, Jiannan and Wang, Wenhai and Su, Weijie and Chen, Guo and Xing, Sen and Zhong, Muyan and Zhang, Qinglong and Zhu, Xizhou and Lu, Lewei and Li, Bin and Luo, Ping and Lu, Tong and Qiao, Yu and Dai, Jifeng},
journal={arXiv preprint arXiv:2312.14238},
year={2023}
}
@article{chen2024far,
title={How Far Are We to GPT-4V? Closing the Gap to Commercial Multimodal Models with Open-Source Suites},
author={Chen, Zhe and Wang, Weiyun and Tian, Hao and Ye, Shenglong and Gao, Zhangwei and Cui, Erfei and Tong, Wenwen and Hu, Kongzhi and Luo, Jiapeng and Ma, Zheng and others},
journal={arXiv preprint arXiv:2404.16821},
year={2024}
}
InternVL is built with reference to the code of the following projects: OpenAI CLIP, Open CLIP, CLIP Benchmark, EVA, InternImage, ViT-Adapter, MMSegmentation, Transformers, DINOv2, BLIP-2, Qwen-VL, and LLaVA-1.5. Thanks for their outstanding work.
If you would like to join our project WeChat group, please scan the QR code below to add our assistant: