
Title: Augmenting Vision Language Pretraining by Learning Codebook with Visual Semantics

Abstract: The language modality within the vision-language pretraining framework is innately discretized, endowing each word in the language vocabulary with a semantic meaning. In contrast, the visual modality is inherently continuous and high-dimensional, which potentially hinders both alignment and fusion between the vision and language modalities. We therefore propose to "discretize" the visual representation by jointly learning a codebook that imbues each visual token with a semantic meaning. We then use these discretized visual semantics as self-supervised ground truth for a Masked Image Modeling objective, a counterpart of Masked Language Modeling, which has proven successful for language models. To optimize the codebook, we extend the formulation of VQ-VAE, which provides a theoretical guarantee. Experiments validate the effectiveness of our approach across common vision-language benchmarks.
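To make the core mechanism concrete, below is a minimal sketch (not the authors' implementation) of VQ-VAE-style codebook quantization that maps continuous visual features to discrete token indices, which can then serve as classification targets for a Masked Image Modeling objective. All names, sizes, and the loss weighting are illustrative assumptions.

# Minimal VQ-VAE-style codebook sketch in PyTorch; all names and
# hyperparameters are illustrative, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VisualCodebook(nn.Module):
    def __init__(self, num_codes: int = 8192, dim: int = 768, beta: float = 0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)  # learnable code vectors
        self.beta = beta  # commitment-loss weight from the VQ-VAE formulation

    def forward(self, features: torch.Tensor):
        # features: (batch, num_patches, dim) continuous visual tokens
        flat = features.reshape(-1, features.size(-1))
        # Nearest-neighbor lookup: squared L2 distance to every code vector.
        dists = (
            flat.pow(2).sum(1, keepdim=True)
            - 2 * flat @ self.codebook.weight.t()
            + self.codebook.weight.pow(2).sum(1)
        )
        indices = dists.argmin(dim=1)  # discrete "visual words"
        quantized = self.codebook(indices).view_as(features)
        # Standard VQ-VAE losses: pull codes toward encoder features
        # (codebook loss) and features toward codes (commitment loss).
        codebook_loss = F.mse_loss(quantized, features.detach())
        commit_loss = F.mse_loss(features, quantized.detach())
        loss = codebook_loss + self.beta * commit_loss
        # Straight-through estimator so gradients flow back to the encoder.
        quantized = features + (quantized - features).detach()
        return quantized, indices.view(features.shape[:-1]), loss

In this sketch, the returned indices play the role that vocabulary ids play in Masked Language Modeling: for masked image patches, the model would be trained to predict the code index of each masked patch, mirroring the paper's described setup.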
Comments: 7 pages, 4 figures, ICPR2022. arXiv admin note: text overlap with arXiv:2203.00048
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Cite as: arXiv:2208.00475 [cs.CV]
  (or arXiv:2208.00475v1 [cs.CV] for this version)

Submission history

From: Xiaoyuan Guo [view email]
[v1] Sun, 31 Jul 2022 17:36:09 GMT (3082kb,D)
