MultiModal Fine-tuning with Synthetic Captions

Notice: This research summary and analysis were automatically generated using AI. For authoritative details, please refer to the original arXiv source.

In this paper, we address a fundamental gap between pre-training and fine-tuning of deep neural networks: while pre-training has shifted from unimodal to multimodal learning with enhanced visual understanding, fine-tuning predominantly remains unimodal, limiting the benefits of rich pre-trained representations. To bridge this gap, we propose a novel approach that transforms unimodal datasets into multimodal ones using Multimodal Large Language Models (MLLMs) to generate synthetic image captions for fine-tuning models with a multimodal objective. Our method employs carefully designed prompts incorporating class labels and domain context to produce high-quality captions tailored for classification tasks. Furthermore, we introduce a supervised contrastive loss function that explicitly encourages clustering of same-class representations during fine-tuning, along with a new inference technique that leverages class-averaged text embeddings from multiple synthetic captions per image. Extensive experiments across 13 image classification benchmarks demonstrate that our approach outperforms baseline methods, with particularly significant improvements in few-shot learning scenarios. Our work establishes a new paradigm for dataset enhancement that effectively bridges the gap between multimodal pre-training and fine-tuning. Our code is available at https://github.com/s-enmt/MMFT.
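The supervised contrastive objective mentioned above can be illustrated with a small sketch. This is not the paper's implementation; it is a minimal NumPy version of the standard supervised contrastive loss (Khosla et al. style), which, as the abstract states, encourages same-class representations to cluster. The function name and temperature value are illustrative.

```python
import numpy as np

def supcon_loss(embeddings, labels, temperature=0.1):
    """Supervised contrastive loss: for each anchor, pull embeddings of the
    same class together and push all other samples apart."""
    # L2-normalize so dot products are cosine similarities
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature                       # pairwise scaled similarities
    n = len(labels)
    logits_mask = ~np.eye(n, dtype=bool)              # exclude self-similarity
    # log-softmax over all other samples (numerically stabilized)
    sim_max = sim.max(axis=1, keepdims=True)
    exp_sim = np.exp(sim - sim_max) * logits_mask
    log_prob = (sim - sim_max) - np.log(exp_sim.sum(axis=1, keepdims=True))
    # positives: other samples that share the anchor's label
    pos_mask = (labels[:, None] == labels[None, :]) & logits_mask
    mean_log_prob_pos = (pos_mask * log_prob).sum(axis=1) / pos_mask.sum(axis=1)
    return -mean_log_prob_pos.mean()
```

With this loss, a batch whose same-class embeddings already point in similar directions scores lower than one where they are spread apart, which is exactly the clustering behavior the fine-tuning objective rewards.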


💡 Research Summary

The paper addresses a critical mismatch between multimodal pre‑training (e.g., CLIP) and the predominantly unimodal fine‑tuning pipelines that still rely on image‑label datasets. To bridge this gap, the authors propose a three‑stage framework that converts any unimodal image‑label dataset into a multimodal one by automatically generating high‑quality synthetic captions with a Multimodal Large Language Model (MLLM).

Synthetic caption generation – Using Gemini 2.5 Flash‑Lite, the authors design a prompt template that explicitly incorporates (i) the class name, (ii) the dataset's domain, and (iii) a visual characteristic to emphasize (e.g., shape or texture). The prompt asks the model to "differentiate this
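A prompt built from those three ingredients might be assembled as follows. The wording below is hypothetical (the paper's exact template is not reproduced in this summary); the sketch only shows how class name, domain, and visual characteristic could be slotted into a single captioning instruction for the MLLM.

```python
def build_caption_prompt(class_name: str, domain: str, characteristic: str) -> str:
    """Assemble a captioning prompt for an MLLM from the three template
    ingredients described above (illustrative wording, not the paper's)."""
    return (
        f"This image comes from a {domain} dataset and shows a {class_name}. "
        f"Write one concise caption describing the {characteristic} features "
        f"that differentiate this {class_name} from other classes."
    )

prompt = build_caption_prompt("sparrow", "bird species", "shape")
```

Running the same image through the template with several characteristics (e.g., shape, then texture) is one way to obtain the multiple synthetic captions per image that the inference technique averages over.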

