On the Cone Effect and Modality Gap in Medical Vision-Language Embeddings
Vision-Language Models (VLMs) exhibit a characteristic “cone effect” in which nonlinear encoders map embeddings into highly concentrated regions of the representation space, contributing to the cross-modal separation known as the modality gap. While this phenomenon has been widely observed, its practical impact on supervised multimodal learning, particularly in medical domains, remains unclear. In this work, we introduce a lightweight post-hoc mechanism that keeps pretrained VLM encoders frozen while continuously controlling cross-modal separation through a single hyperparameter λ. This enables systematic analysis of how the modality gap affects downstream multimodal performance without expensive retraining. We evaluate generalist (CLIP, SigLIP) and medically specialized (BioMedCLIP, MedSigLIP) models across diverse medical and natural datasets in supervised multimodal settings. Results consistently show that reducing an excessive modality gap improves downstream performance, with medical datasets exhibiting stronger sensitivity to gap modulation; however, fully collapsing the gap is not always optimal, and an intermediate, task-dependent separation yields the best results. These findings position the modality gap as a tunable property of multimodal representations rather than a quantity that should be universally minimized.
💡 Research Summary
This paper investigates the “cone effect” and the associated “modality gap” in vision‑language models (VLMs), focusing on their impact on supervised multimodal learning for medical applications. The cone effect refers to the phenomenon whereby nonlinear activations and contrastive training cause image and text embeddings to collapse into narrow angular regions on the unit hypersphere. As a consequence, embeddings from the two modalities occupy distinct, distant regions, a situation termed the modality gap. While prior work has characterized this gap mainly in zero‑shot settings, the authors ask whether and how it influences downstream performance when image and text signals are jointly leveraged, especially in relatively homogeneous medical domains.
To answer this, they propose a lightweight, post‑hoc alignment mechanism that leaves the pretrained image and text encoders frozen. After extracting ℓ2‑normalized embeddings v (image) and t (text), they compute the centroid difference Δ = μ_v − μ_t, where μ_v and μ_t are the mean embeddings over the training set. A single hyper‑parameter λ then scales a translation along Δ, so that the cross-modal separation can be varied continuously, from the original gap down to full collapse, without any retraining.
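The mechanism above can be sketched in a few lines of NumPy. This is a minimal illustration of the idea, not the authors' exact implementation: we assume the shift λΔ is split symmetrically between the two modalities and that embeddings are re-projected onto the unit hypersphere afterwards; the function name and update rule are our own.

```python
import numpy as np

def modulate_gap(v, t, lam):
    """Post-hoc modality-gap modulation (illustrative sketch).

    v, t : (N, d) ℓ2-normalized image and text embeddings.
    lam  : λ, gap-control hyperparameter; 0 keeps the original
           embeddings, 1 collapses the centroid difference
           (before re-normalization).
    """
    delta = v.mean(axis=0) - t.mean(axis=0)   # Δ = μ_v − μ_t
    v_shift = v - 0.5 * lam * delta           # move images toward text
    t_shift = t + 0.5 * lam * delta           # move text toward images
    # Re-project onto the unit hypersphere, as the encoders
    # produce ℓ2-normalized embeddings.
    v_shift /= np.linalg.norm(v_shift, axis=1, keepdims=True)
    t_shift /= np.linalg.norm(t_shift, axis=1, keepdims=True)
    return v_shift, t_shift
```

Because the encoders stay frozen and only this cheap translation changes, sweeping λ over a grid and measuring downstream supervised performance at each value is enough to probe how sensitive a given task is to the gap.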