DenseCapBert

Modern VQA models are easily affected by language priors: they ignore image information and learn superficial correlations between questions and answers, even with strong pre-trained models. The main reason is that visual information is not fully extracted and utilized. We propose to extract dense captions from images to enrich the visual information available for reasoning, and to use these captions to close the gap between vision and language.

Our code will be available soon.
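As a rough illustration of the idea, the sketch below shows dense captions being appended to the question as a sentence pair for a BERT-style encoder followed by an answer classifier. This is an assumption-laden sketch, not the released DenseCapBert implementation: the caption source, fusion strategy, and names such as `CaptionAugmentedVQA` and `dense_captions` are hypothetical.

```python
# Hypothetical sketch: fuse dense captions with the question in a BERT-style
# encoder for VQA answer classification. The text-concatenation fusion and all
# component names are assumptions, not the authors' released code.
import torch
import torch.nn as nn
from transformers import BertTokenizer, BertModel

class CaptionAugmentedVQA(nn.Module):
    def __init__(self, num_answers: int, encoder_name: str = "bert-base-uncased"):
        super().__init__()
        self.encoder = BertModel.from_pretrained(encoder_name)
        self.classifier = nn.Linear(self.encoder.config.hidden_size, num_answers)

    def forward(self, input_ids, attention_mask, token_type_ids):
        out = self.encoder(input_ids=input_ids,
                           attention_mask=attention_mask,
                           token_type_ids=token_type_ids)
        # Pool the [CLS] representation and predict an answer class.
        return self.classifier(out.pooler_output)

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
question = "What is the man holding?"
# Dense captions would come from an off-the-shelf dense captioning model;
# they are hard-coded here for illustration only.
dense_captions = ["a man holding a red umbrella", "a wet city street at night"]

# Encode the question and the concatenated captions as a sentence pair so the
# encoder can attend across question tokens and caption tokens.
inputs = tokenizer(question, " . ".join(dense_captions),
                   return_tensors="pt", truncation=True, max_length=128)

model = CaptionAugmentedVQA(num_answers=3129)  # 3129: common VQA v2 answer vocabulary size
logits = model(**inputs)
print(logits.shape)  # torch.Size([1, 3129])
```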
