Computer Science > Computer Vision and Pattern Recognition
Title: Optimization of Prompt Learning via Multi-Knowledge Representation for Vision-Language Models
(Submitted on 16 Apr 2024 (v1), last revised 17 Apr 2024 (this version, v2))
Abstract: Vision-Language Models (VLMs), such as CLIP, play a foundational role in various cross-modal applications. To fully leverage VLMs' potential in adapting to downstream tasks, context optimization methods such as Prompt Tuning are essential. However, one key limitation is the lack of diversity in prompt templates, whether they are hand-crafted or learned through additional modules. This limitation restricts the capabilities of pretrained VLMs and can result in incorrect predictions on downstream tasks. To address this challenge, we propose Context Optimization with Multi-Knowledge Representation (CoKnow), a framework that enhances Prompt Learning for VLMs with rich contextual knowledge. To facilitate CoKnow during inference, we trained lightweight semantic knowledge mappers, which are capable of generating a Multi-Knowledge Representation for an input image without requiring additional priors. We conducted extensive experiments on 11 publicly available datasets, demonstrating that CoKnow outperforms a series of previous methods. We will make all resources open-source: this https URL
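The context optimization setting the abstract builds on can be illustrated with a minimal sketch. This is not the paper's CoKnow method; it is an assumed CoOp-style setup in which learnable context vectors are prepended to fixed class-token embeddings and a frozen encoder is replaced by a simple mean-pool stand-in. All names (`n_ctx`, `encode_text`, `logits`) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ctx, dim, n_classes = 4, 8, 3

# Learnable context vectors shared across all classes (the quantity
# optimized in prompt tuning; the VLM itself stays frozen).
ctx = rng.normal(size=(n_ctx, dim))

# Fixed class-name token embeddings (one token per class for brevity).
class_tokens = rng.normal(size=(n_classes, 1, dim))

def encode_text(ctx, class_tokens):
    # Prepend the shared context to each class's tokens, then pool.
    # A real CLIP text encoder would replace this mean-pool.
    prompts = np.concatenate(
        [np.broadcast_to(ctx, (class_tokens.shape[0],) + ctx.shape),
         class_tokens],
        axis=1,
    )
    feats = prompts.mean(axis=1)
    return feats / np.linalg.norm(feats, axis=-1, keepdims=True)

def logits(image_feat, text_feats, scale=100.0):
    # Cosine-similarity classification, as in CLIP zero-shot inference.
    image_feat = image_feat / np.linalg.norm(image_feat)
    return scale * text_feats @ image_feat

text_feats = encode_text(ctx, class_tokens)
img = rng.normal(size=dim)
print(logits(img, text_feats).shape)  # one logit per class: (3,)
```

Training would backpropagate a cross-entropy loss through `logits` into `ctx` only; CoKnow's contribution, per the abstract, is to enrich this single learned context with multiple knowledge representations produced by lightweight mappers.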
Submission history
From: Enming Zhang
[v1] Tue, 16 Apr 2024 07:44:52 GMT (4507kb,D)
[v2] Wed, 17 Apr 2024 02:48:49 GMT (4507kb,D)