This is the repository for the paper:
Exploring the Spectrum of Visio-Linguistic Compositionality and Recognition
Youngtaek Oh, Pyunghwan Ahn, Jinhyung Kim, Gwangmo Song, Soonyoung Lee, In So Kweon, Junmo Kim
CVPR Workshop on ‘What is Next in Multimodal Foundation Models?’ (MMFM), 2024
- [June 14] The code is currently under review before being made public (expected release: early July).
TL;DR We comprehensively curate VLMs and benchmarks for compositionality and recognition evaluation!
Vision and language models (VLMs) such as CLIP have showcased remarkable zero-shot recognition abilities yet face challenges in visio-linguistic compositionality, particularly in linguistic comprehension and fine-grained image-text alignment.
This paper explores the intricate relationship between compositionality and recognition -- two pivotal aspects of VLM capability. We conduct a comprehensive evaluation of existing VLMs, covering both pre-training approaches aimed at recognition and fine-tuning methods designed to improve compositionality. Our evaluation employs 12 benchmarks for compositionality, along with 21 zero-shot classification and two retrieval benchmarks for recognition.
In our analysis of 274 CLIP model checkpoints, we reveal patterns and trade-offs that emerge between compositional understanding and recognition accuracy. These findings call for strategic efforts towards developing models that improve both capabilities, as well as the meticulous formulation of benchmarks for compositionality.
Overall trends of pre-trained and fine-tuned CLIP models across 12 compositionality and 21 zero-shot classification tasks.
[results.csv] | [individual_results.csv]
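As a quick start, the released CSV files can be inspected with pandas. The snippet below is a minimal sketch; the exact column names depend on the actual file layout and are not assumed here.

```python
import pandas as pd

# Load the aggregated per-model results released with this repository.
results = pd.read_csv("results.csv")

# Per-benchmark scores for each checkpoint.
individual = pd.read_csv("individual_results.csv")

# Print the available columns and a preview before doing any analysis,
# since the schema is defined by the released files themselves.
print(results.columns.tolist())
print(results.head())
print(individual.columns.tolist())
print(individual.head())
```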
If you find this repository useful, please consider citing our paper with the following BibTeX:
@article{oh2024exploring,
  title   = {Exploring the Spectrum of Visio-Linguistic Compositionality and Recognition},
  author  = {Oh, Youngtaek and Ahn, Pyunghwan and Kim, Jinhyung and Song, Gwangmo and Lee, Soonyoung and Kweon, In So and Kim, Junmo},
  journal = {arXiv preprint},
  year    = {2024},
  url     = {https://arxiv.org/abs/2406.09388}
}