GEA: Generation-Enhanced Alignment for Text-to-Image Person Retrieval
Reading time: 2 minutes
...
📝 Original Info
- Title: GEA: Generation-Enhanced Alignment for Text-to-Image Person Retrieval
- ArXiv ID: 2511.10154
- Date: 2025-11-13
- Authors: Not provided (the paper does not list its authors).
📝 Abstract
Text-to-Image Person Retrieval (TIPR) aims to retrieve person images based on natural language descriptions. Although many TIPR methods have achieved promising results, textual queries sometimes cannot accurately and comprehensively reflect the content of the image, leading to poor cross-modal alignment and overfitting to limited datasets. Moreover, the inherent modality gap between text and image further amplifies these issues, making accurate cross-modal retrieval even more challenging. To address these limitations, we propose Generation-Enhanced Alignment (GEA) from a generative perspective. GEA contains two parallel modules: (1) Text-Guided Token Enhancement (TGTE), which introduces diffusion-generated images as intermediate semantic representations to bridge the gap between text and visual patterns; these generated images enrich the semantic representation of the text and facilitate cross-modal alignment. (2) Generative Intermediate Fusion (GIF), which applies cross-attention among generated-image, original-image, and text features to produce a unified representation optimized by a triplet alignment loss. We conduct extensive experiments on three public TIPR datasets, CUHK-PEDES, RSTPReid, and ICFG-PEDES, to evaluate the performance of GEA. The results demonstrate the effectiveness of our method. More implementation details and extended results are available at https://github.com/sugelamyd123/Sup-for-GEA.

💡 Deep Analysis
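The abstract names two concrete components, so a small sketch may help make them tangible. Below is a minimal, hypothetical PyTorch rendering of the pipeline: `generate_intermediate_image` stands in for TGTE (using an off-the-shelf Stable Diffusion pipeline from the `diffusers` library), while `GenerativeIntermediateFusion` and `triplet_alignment_loss` stand in for GIF. All names, dimensions, the diffusion checkpoint, and the loss formulation are assumptions for illustration; the paper's actual architecture may differ.

```python
# Hypothetical sketch of GEA's two modules, assuming encoders that emit
# token features of a shared width `dim`. Not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


def generate_intermediate_image(prompt: str):
    """TGTE step (sketch): synthesize an intermediate image from the text
    query with an off-the-shelf diffusion model (requires `diffusers`)."""
    from diffusers import StableDiffusionPipeline
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5"  # assumed checkpoint, not from the paper
    )
    return pipe(prompt).images[0]


class GenerativeIntermediateFusion(nn.Module):
    """GIF step (sketch): cross-attend text tokens to original-image and
    generated-image tokens, then pool into one unified embedding."""

    def __init__(self, dim: int = 512, num_heads: int = 8):
        super().__init__()
        self.txt2img = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.txt2gen = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, txt, img, gen):
        # txt, img, gen: (B, L, dim) token features from the respective encoders.
        a, _ = self.txt2img(txt, img, img)  # text queries over original image
        b, _ = self.txt2gen(txt, gen, gen)  # text queries over generated image
        fused = self.proj(torch.cat([a, b], dim=-1)).mean(dim=1)
        return F.normalize(fused, dim=-1)   # (B, dim) unified representation


def triplet_alignment_loss(anchor, positive, negative, margin: float = 0.2):
    """A standard triplet margin loss on cosine distance, standing in for
    the paper's triplet alignment loss (exact form not given here)."""
    d_pos = 1.0 - (anchor * positive).sum(-1)
    d_neg = 1.0 - (anchor * negative).sum(-1)
    return F.relu(d_pos - d_neg + margin).mean()
```

As a quick check with dummy features, `GenerativeIntermediateFusion()(torch.randn(4, 16, 512), torch.randn(4, 16, 512), torch.randn(4, 16, 512))` returns four L2-normalized 512-d embeddings, which can then be paired as anchors, positives, and negatives for the triplet loss.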
📄 Full Content
Reference
This content is AI-processed from open-access arXiv data.