Computer Science > Computer Vision and Pattern Recognition
Title: MCAD: Multi-teacher Cross-modal Alignment Distillation for efficient image-text retrieval
(Submitted on 30 Oct 2023 (this version), latest version 2 Apr 2024 (v3))
Abstract: With the success of large-scale visual-language pretraining models and the wide industrial application of image-text retrieval, reducing model size and streamlining deployment on terminal devices have become urgently necessary. The mainstream model structures for image-text retrieval are single-stream and dual-stream, both aiming to close the semantic gap between the visual and textual modalities. Dual-stream models excel at offline indexing and fast inference, while single-stream models achieve more accurate cross-modal alignment by employing adequate feature fusion. We propose a Multi-teacher Cross-modal Alignment Distillation (MCAD) technique to integrate the advantages of single-stream and dual-stream models. By incorporating the fused single-stream features into the image and text features of the dual-stream model, we formulate new modified teacher features and logits. We then conduct both logit and feature distillation to boost the capability of the student dual-stream model, achieving high retrieval performance without increasing inference complexity. Extensive experiments demonstrate the remarkable performance and high efficiency of MCAD on image-text retrieval tasks. Furthermore, we implement a mobile CLIP model on Snapdragon chips with only 93M running memory and 30ms search latency, without apparent performance degradation relative to the original large CLIP model.
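The abstract outlines MCAD's core recipe: mix the single-stream teacher's fused features into the dual-stream teacher's image and text features to form "modified teacher" targets, then distill both logits and features into the student. The sketch below illustrates that idea with numpy; the fusion rule (a simple convex combination with weight `alpha`), the temperature `tau`, and all variable names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def _normalize(x):
    # L2-normalize embeddings along the feature dimension.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def _softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def mcad_loss(img_s, txt_s, img_t, txt_t, fused_t, alpha=0.5, tau=0.05):
    """Combined logit + feature distillation, MCAD-style (illustrative).

    img_s/txt_s: student dual-stream image/text embeddings, shape [B, D]
    img_t/txt_t: teacher dual-stream embeddings, shape [B, D]
    fused_t:     single-stream teacher's fused features, shape [B, D]
    """
    # Modified teacher features: blend dual-stream teacher embeddings
    # with the single-stream teacher's fused features (assumed fusion:
    # convex combination; the paper's rule may differ).
    img_mt = _normalize(alpha * img_t + (1 - alpha) * fused_t)
    txt_mt = _normalize(alpha * txt_t + (1 - alpha) * fused_t)
    img_s, txt_s = _normalize(img_s), _normalize(txt_s)

    # Logit distillation: KL divergence between the teacher's and the
    # student's image-to-text similarity distributions over the batch.
    p_t = _softmax(img_mt @ txt_mt.T / tau)
    p_s = _softmax(img_s @ txt_s.T / tau)
    loss_logit = np.mean(np.sum(p_t * (np.log(p_t) - np.log(p_s)), axis=-1))

    # Feature distillation: pull student embeddings toward the
    # modified teacher embeddings.
    loss_feat = np.mean((img_s - img_mt) ** 2) + np.mean((txt_s - txt_mt) ** 2)
    return loss_logit + loss_feat
```

At inference time only the student's two encoders run, so the retrieval pipeline keeps the dual-stream property of precomputable indexes; the single-stream teacher influences training only through the modified targets above.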
Submission history
From: Chen Chen
[v1] Mon, 30 Oct 2023 15:38:43 GMT (8247kb,D)
[v2] Thu, 28 Mar 2024 08:47:14 GMT (9499kb,D)
[v3] Tue, 2 Apr 2024 00:12:21 GMT (9499kb,D)