Modern VQA models are easily affected by language priors: they ignore image information and learn superficial correlations between questions and answers, even when built on strong pre-trained models. A main reason is that visual information is not fully extracted and utilized. We propose to extract dense captions from images to enrich the visual information available for reasoning, and to use them to bridge the gap between vision and language.
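As a rough illustration of the idea (not the paper's actual pipeline), the sketch below shows how region-level dense captions could be fused with the question before answering. All function names (`generate_dense_captions`, `build_vqa_input`) and the stubbed captions are hypothetical placeholders.

```python
# Hypothetical sketch: augmenting VQA input with dense captions.
# Every name and output here is a placeholder, not the released code's API.

from typing import List


def generate_dense_captions(image_path: str, top_k: int = 3) -> List[str]:
    """Placeholder for a dense captioning model (e.g., a region-level
    detector + captioner). A real implementation would return the top_k
    most salient region descriptions for the image."""
    # Stubbed output purely for illustration.
    return [
        "a brown dog lying on a couch",
        "a red ball next to the dog",
        "sunlight coming through a window",
    ][:top_k]


def build_vqa_input(question: str, captions: List[str]) -> str:
    """Fuse the question with dense captions so the answering model can
    attend to explicit textual descriptions of visual content, narrowing
    the gap between vision and language."""
    context = " ".join(captions)
    return f"context: {context} question: {question}"


if __name__ == "__main__":
    question = "What color is the ball?"
    captions = generate_dense_captions("example.jpg")
    fused_input = build_vqa_input(question, captions)
    print(fused_input)
    # A downstream VQA model would consume `fused_input` together with
    # visual features to predict the answer.
```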
Our code will be available soon.