Towards Uniformity and Alignment for Multimodal Representation Learning
Multimodal representation learning aims to construct a shared embedding space in which heterogeneous modalities are semantically aligned. Despite strong empirical results, InfoNCE-based objectives introduce inherent conflicts that yield distribution gaps across modalities. In this work, we identify two conflicts in the multimodal regime, both exacerbated as the number of modalities increases: (i) an alignment-uniformity conflict, whereby the repulsion of uniformity undermines pairwise alignment, and (ii) an intra-alignment conflict, where aligning multiple modalities induces competing alignment directions. To address these issues, we propose a principled decoupling of alignment and uniformity for multimodal representations, providing a conflict-free recipe for multimodal learning that simultaneously supports discriminative and generative use cases without task-specific modules. We then provide a theoretical guarantee that our method acts as an efficient proxy for a global Hölder divergence over multiple modality distributions, and thus reduces the distribution gap among modalities. Extensive experiments on retrieval and UnCLIP-style generation demonstrate consistent gains.
💡 Research Summary
This paper investigates fundamental conflicts inherent in multimodal contrastive learning methods that rely on the InfoNCE objective. While InfoNCE has driven impressive results in two‑modality settings (e.g., CLIP), the authors identify two distinct sources of tension that become increasingly severe as the number of modalities M grows: (i) an alignment‑uniformity conflict (ζₐ), where the uniformity term that spreads embeddings over the unit hypersphere opposes the alignment term that pulls positive cross‑modal pairs together; and (ii) an intra‑alignment conflict (χₐ), where the multiple positive vectors for the same sample are not collinear, so their alignment forces partially cancel one another.
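To make the two terms in tension concrete, the following minimal NumPy sketch implements the standard hypersphere decomposition of contrastive learning into an alignment loss (pull positive pairs together) and a uniformity loss (spread all embeddings apart), in the style of Wang & Isola. This is illustrative background, not the paper's proposed objective; the function names and the Gaussian-potential temperature `t` are our own choices.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    # Project embeddings onto the unit hypersphere, as assumed throughout.
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def alignment_loss(z_a, z_b):
    # Mean squared distance between matched cross-modal pairs:
    # small when positives from two modalities coincide.
    return np.mean(np.sum((z_a - z_b) ** 2, axis=1))

def uniformity_loss(z, t=2.0):
    # Log of the mean pairwise Gaussian potential over distinct points:
    # more negative when embeddings are spread uniformly on the sphere.
    sq_dists = np.sum((z[:, None, :] - z[None, :, :]) ** 2, axis=-1)
    n = z.shape[0]
    off_diag = sq_dists[~np.eye(n, dtype=bool)]
    return np.log(np.mean(np.exp(-t * off_diag)))
```

The tension the summary describes is visible here: driving `alignment_loss` to zero collapses positives onto one point, while `uniformity_loss` rewards pushing every embedding, positives included, apart.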
Through a careful gradient analysis, the paper formalizes these conflicts. ζₐ is defined as the cosine similarity between the aggregated alignment vector Vₐ and the uniformity‑induced repulsion vector Φₐ. Under a mild assumption that each modality’s uniformity component can be decomposed into a systematic part (cₙ) aligned with Vₐ plus zero‑mean noise, Proposition 2.2 proves that E
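The conflict measure ζₐ described above can be sketched numerically. The snippet below is a hypothetical illustration, not the paper's derivation: we take Vₐ to be the aggregated pull toward all cross-modal positives and Φₐ to be a softmax-weighted repulsion away from negatives (mimicking the uniformity part of an InfoNCE gradient), then report their cosine similarity. The exact gradient forms, the helper name `conflict_score`, and the temperature `tau` are our assumptions.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two force vectors.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def conflict_score(anchor, positives, negatives, tau=0.07):
    # V_a: aggregated alignment pull from the anchor toward its positives.
    V = np.sum([p - anchor for p in positives], axis=0)
    # Phi_a: repulsion away from negatives, weighted by a softmax over
    # similarities, loosely mirroring the InfoNCE uniformity gradient.
    logits = np.array([anchor @ n for n in negatives]) / tau
    w = np.exp(logits - logits.max())
    w /= w.sum()
    Phi = np.sum([wj * (anchor - n) for wj, n in zip(w, negatives)], axis=0)
    # zeta_a: cosine between the two forces; values near -1 mean the
    # uniformity repulsion directly opposes the alignment pull.
    return cosine(V, Phi)
```

For instance, when a negative happens to lie in the same direction as a positive, the repulsion points exactly against the alignment pull and the score approaches -1, the worst-case conflict.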