Computer Science > Computer Vision and Pattern Recognition

Title: Contrasting Intra-Modal and Ranking Cross-Modal Hard Negatives to Enhance Visio-Linguistic Fine-grained Understanding

Abstract: Current Vision and Language Models (VLMs) demonstrate strong performance across various vision-language tasks, yet they struggle with fine-grained understanding. This issue stems from weak image-caption alignment in pretraining datasets and a simplified contrastive objective that fails to distinguish nuanced grounding elements such as relations, actions, and attributes. As a result, the models tend to learn bag-of-words representations. To mitigate these challenges, we introduce an intra-modal contrastive loss and a unique cross-modal rank loss with an adaptive threshold that serves as a curriculum, utilizing our automatically generated hard negatives to augment the model's capacity. Our strategy, which requires no additional annotations or parameters, can be incorporated into any VLM trained with an image-text contrastive loss. Applied to CLIP, our method yields significant improvements on three fine-grained benchmarks, and it also enhances the performance of X-VLM, the state-of-the-art model on fine-grained reasoning.
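
The two losses named in the abstract can be illustrated with a short sketch. The code below is a minimal, hypothetical PyTorch rendition of an intra-modal contrastive loss over automatically generated hard-negative captions and a cross-modal rank loss whose adaptive threshold acts as a curriculum; the tensor shapes, function names, and margin value are assumptions made for illustration, not the authors' released implementation.

    # Illustrative sketch only; shapes, names, and the margin are assumptions.
    import torch
    import torch.nn.functional as F

    def intra_modal_contrastive_loss(text_emb, hard_neg_emb, temperature=0.07):
        """Contrast each caption against its hard-negative caption, using the
        other captions in the batch as additional in-batch negatives.

        text_emb:     (B, D) L2-normalized embeddings of the original captions
        hard_neg_emb: (B, D) L2-normalized embeddings of the hard-negative captions
        """
        logits_pos = text_emb @ text_emb.t() / temperature      # (B, B)
        logits_neg = text_emb @ hard_neg_emb.t() / temperature  # (B, B)
        logits = torch.cat([logits_pos, logits_neg], dim=1)     # (B, 2B)
        # The positive for caption i is itself (diagonal of logits_pos).
        targets = torch.arange(text_emb.size(0), device=text_emb.device)
        return F.cross_entropy(logits, targets)

    def cross_modal_rank_loss(image_emb, text_emb, hard_neg_emb, threshold):
        """Require the matched caption to outscore its hard negative by a
        margin (the adaptive threshold); gradually raising this threshold
        over training gives the curriculum effect described in the abstract.
        """
        pos_sim = (image_emb * text_emb).sum(dim=-1)      # (B,)
        neg_sim = (image_emb * hard_neg_emb).sum(dim=-1)  # (B,)
        return F.relu(threshold - (pos_sim - neg_sim)).mean()

    # Example usage with random features standing in for CLIP outputs.
    B, D = 8, 512
    img = F.normalize(torch.randn(B, D), dim=-1)
    txt = F.normalize(torch.randn(B, D), dim=-1)
    neg = F.normalize(torch.randn(B, D), dim=-1)
    loss = intra_modal_contrastive_loss(txt, neg) + cross_modal_rank_loss(img, txt, neg, threshold=0.2)
    print(float(loss))

Because both terms operate only on embeddings already produced by the image and text encoders, a loss of this form adds no parameters and can be summed with the standard image-text contrastive objective of any CLIP-style model, consistent with the abstract's claim.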
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Cite as: arXiv:2306.08832 [cs.CV]
  (or arXiv:2306.08832v1 [cs.CV] for this version)

Submission history

From: Le Zhang [view email]
[v1] Thu, 15 Jun 2023 03:26:28 GMT (10447kb,D)
[v2] Sun, 2 Jul 2023 00:31:36 GMT (10449kb,D)
[v3] Thu, 28 Dec 2023 15:44:04 GMT (16623kb,D)
[v4] Thu, 25 Apr 2024 15:24:11 GMT (14531kb,D)
