Learning Modal-Mixed Chain-of-Thought Reasoning with Latent Embeddings
We study how to extend chain-of-thought (CoT) beyond language to better handle multimodal reasoning. While CoT helps LLMs and VLMs articulate intermediate steps, its text-only form often fails on vision-intensive problems where key intermediate states are inherently visual. We introduce modal-mixed CoT, which interleaves textual tokens with compact visual sketches represented as latent embeddings. To bridge the modality gap without eroding the original knowledge and capabilities of the VLM, we use the VLM itself as an encoder and train the language backbone to reconstruct its own intermediate vision embeddings, guaranteeing semantic alignment of the visual latent space. We further attach a diffusion-based latent decoder, invoked by a special control token and conditioned on hidden states from the VLM. In this way, the diffusion head carries fine-grained perceptual details while the VLM specifies high-level intent, which cleanly disentangles roles and reduces the optimization pressure on the VLM. Training proceeds in two stages: supervised fine-tuning on traces that interleave text and latents with a joint next-token and latent-reconstruction objective, followed by reinforcement learning that teaches when to switch modalities and how to compose long reasoning chains. Extensive experiments across 11 diverse multimodal reasoning tasks demonstrate that our method outperforms language-only and other CoT methods. Our code will be publicly released.
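The two-part SFT objective described above can be sketched numerically. This is a minimal numpy stand-in: the MSE form of the reconstruction term and the weighting `lam` are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def joint_sft_loss(text_logits, text_targets, pred_latents, target_latents, lam=1.0):
    """Joint next-token + latent-reconstruction objective (illustrative sketch).

    - cross-entropy over the textual positions of the interleaved trace
    - a reconstruction term pulling predicted latents toward the VLM
      encoder's own intermediate vision embeddings
    """
    # Next-token cross-entropy: logits (T, V), integer targets (T,)
    shifted = text_logits - text_logits.max(axis=-1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    ce = -log_probs[np.arange(len(text_targets)), text_targets].mean()
    # Latent reconstruction: predicted vs. encoder latents, both (M, d)
    recon = ((pred_latents - target_latents) ** 2).mean()
    return ce + lam * recon
```

In a real training loop both terms would be computed by the VLM's forward pass and backpropagated jointly; here the point is only that text and latent positions contribute separate loss terms summed into one objective.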
💡 Research Summary
The paper introduces a novel multimodal reasoning framework called Modal‑Mixed Chain‑of‑Thought (CoT), which interleaves textual tokens with compact visual “sketches” represented as latent embeddings. Traditional CoT methods rely solely on language to articulate intermediate reasoning steps, which limits their effectiveness on vision‑intensive tasks where crucial intermediate states are inherently visual (e.g., 3D spatial reasoning, multi‑image logical queries). Inspired by human cognition—specifically the sketchpad that alternates between verbal and visual representations—the authors propose to endow Vision‑Language Models (VLMs) with the ability to generate and consume latent visual embeddings during reasoning.
Core Architecture
- VLM as Visual Encoder: The model reuses the VLM’s own vision encoder (e.g., a ViT) and connector to map any intermediate image into a sequence of visual tokens. These tokens are then compressed via average pooling into a fixed‑size latent sketch \( \mathbf{z} \in \mathbb{R}^{M \times d} \). This ensures that the latent space aligns with the VLM’s internal representation, avoiding any domain shift.
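The pooling step above can be sketched as follows. The segment-wise average-pooling scheme is an assumption for illustration; `visual_tokens` stands in for the output of the VLM's ViT and connector.

```python
import numpy as np

def compress_to_latent_sketch(visual_tokens: np.ndarray, M: int) -> np.ndarray:
    """Average-pool a variable-length sequence of visual tokens (N, d)
    into a fixed-size latent sketch z of shape (M, d)."""
    N, d = visual_tokens.shape
    # Split the N tokens into M contiguous segments and average each one.
    boundaries = np.linspace(0, N, M + 1).astype(int)
    z = np.stack([visual_tokens[boundaries[i]:boundaries[i + 1]].mean(axis=0)
                  for i in range(M)])
    return z  # shape (M, d)
```

Because pooling happens in the encoder's own embedding space, the resulting sketch lives in the same representation the VLM already consumes, which is what avoids any domain shift.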
- Diffusion‑Based Latent Decoder: A lightweight stacked‑MLP diffusion decoder is attached to the VLM. When a special control token `<START>` is generated, the decoder is activated. It receives the LLM’s last hidden state as a conditioning vector \( \mathbf{c}_k = W \mathbf{h}_k \) and iteratively denoises a Gaussian latent \( \mathbf{z}^{(T)}_k \) over \( T \) steps, producing the final latent sketch \( \mathbf{e}_k = \mathbf{z}^{(0)}_k \). The decoder thus handles fine‑grained visual details while the VLM focuses on high‑level semantic intent.
- Modal‑Mixed CoT Generation: The model generates a sequence that alternates between text segments and latent sketches.
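The decoder's control flow can be sketched as below. The denoiser here is a runnable placeholder for the learned stacked-MLP network, and the update rule is a simplified denoising step, not the paper's exact sampler; `W`, `M`, `d`, and `T` follow the notation above.

```python
import numpy as np

rng = np.random.default_rng(0)

def denoise_step(z_t, c, t, T):
    """Placeholder for the learned denoiser eps_theta(z_t, c, t).
    It shrinks the latent toward the conditioning vector so the loop
    is runnable; the real network is a stacked MLP."""
    return (z_t - c) * 0.9  # hypothetical noise estimate

def decode_latent_sketch(h_k, W, M, d, T=10):
    """Diffusion head sketch: condition on the LLM hidden state via
    c_k = W h_k, start from Gaussian z^(T), iterate to e_k = z^(0)."""
    c = (W @ h_k).reshape(1, d)        # conditioning vector, broadcast over M rows
    z = rng.standard_normal((M, d))    # z^(T) ~ N(0, I)
    for t in range(T, 0, -1):
        eps = denoise_step(z, c, t, T)
        z = z - eps / T                # take one denoising step toward the data
    return z                           # e_k = z^(0), the latent sketch
```

The key design point survives the simplification: the VLM only emits a single conditioning vector per `<START>` token, while the diffusion head is responsible for filling in the fine-grained structure of the latent.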