Computer Science > Computer Vision and Pattern Recognition
Title: Probing Conceptual Understanding of Large Visual-Language Models
(Submitted on 7 Apr 2023 (this version), latest version 26 Apr 2024 (v3))
Abstract: We present a novel framework for probing and improving the relational, compositional, and contextual understanding of large visual-language (V+L) models. While large V+L models have achieved success in various downstream tasks, it is not clear whether they have a conceptual grasp of the content. We propose a novel benchmarking dataset for probing three aspects of content understanding. Our probes are grounded in cognitive science and help determine whether a V+L model can, for example, determine that snow garnished with a man is implausible, or identify beach furniture by knowing it is located on a beach. We experimented with five well-known models, such as CLIP and ViLT, and found that they mostly fail to demonstrate a conceptual understanding. That said, we find interesting insights, such as that cross-attention helps in learning conceptual understanding. We use these insights to propose a new finetuning technique that rewards the three conceptual understanding measures we proposed. We hope that the presented benchmarks will help the community assess and improve the conceptual understanding capabilities of large V+L models.
Submission history
From: Shehreen Azad
[v1] Fri, 7 Apr 2023 14:26:11 GMT
[v2] Wed, 25 Oct 2023 21:36:46 GMT
[v3] Fri, 26 Apr 2024 16:23:31 GMT