Latent Domain Prompt Learning for Vision-Language Models

Reading time: 1 minute

📝 Original Info

  • Title: Latent Domain Prompt Learning for Vision-Language Models
  • ArXiv ID: 2511.00067
  • Date: 2025-10-29
  • Authors: Not available (no author information was provided in the paper)

📝 Abstract

The objective of domain generalization (DG) is to enable models to remain robust under domain shift. DG is crucial for deploying vision-language models (VLMs) in real-world applications, yet most existing methods rely on domain labels that may be unavailable and are often ambiguous. We instead study the DG setting where models must generalize well without access to explicit domain labels. Our key idea is to represent an unseen target domain as a combination of latent domains automatically discovered from training data, enabling the model to adaptively transfer knowledge across domains. To realize this, we perform latent domain clustering on image features and fuse domain-specific text features based on the similarity between the input image and each latent domain. Experiments on four benchmarks show that this strategy yields consistent gains over VLM-based baselines and provides new insights into improving robustness under domain shift.
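As a rough illustration of the strategy the abstract describes (clustering image features into latent domains, then fusing domain-specific text features by image-to-domain similarity), the sketch below uses k-means centroids as latent-domain prototypes and a softmax over cosine similarities as fusion weights. The feature dimension, number of latent domains, temperature, and the synthetic features are placeholder assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch (not the paper's implementation): latent domain discovery via
# k-means on image features, then similarity-weighted fusion of per-domain
# text features. All sizes and the temperature below are assumed values.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-ins for CLIP-style embeddings (D-dim, L2-normalized).
D, K, N = 512, 4, 1000                      # feature dim, latent domains, training images
train_img_feats = rng.normal(size=(N, D))
train_img_feats /= np.linalg.norm(train_img_feats, axis=1, keepdims=True)

# 1) Discover latent domains by clustering training image features.
kmeans = KMeans(n_clusters=K, n_init=10, random_state=0).fit(train_img_feats)
centroids = kmeans.cluster_centers_         # (K, D) latent-domain prototypes

# Hypothetical per-domain text features, e.g. from domain-specific prompts.
domain_text_feats = rng.normal(size=(K, D))
domain_text_feats /= np.linalg.norm(domain_text_feats, axis=1, keepdims=True)

def fuse_text_features(img_feat, centroids, domain_text_feats, temperature=0.1):
    """Weight each latent domain's text feature by the input image's
    similarity to that domain's centroid (softmax over cosine similarities)."""
    img_feat = img_feat / np.linalg.norm(img_feat)
    sims = centroids @ img_feat / np.linalg.norm(centroids, axis=1)  # cosine sim per domain
    weights = np.exp(sims / temperature)
    weights /= weights.sum()
    return weights @ domain_text_feats      # (D,) fused text feature

# 2) At test time, an unseen image is treated as a mixture of latent domains.
test_img = rng.normal(size=D)
fused = fuse_text_features(test_img, centroids, domain_text_feats)
print(fused.shape)                          # (512,)
```

The fused text feature could then be compared against image features for classification, in the usual VLM zero-shot style; the softmax temperature controls how sharply the model commits to a single latent domain versus blending several.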

Reference

This content is AI-processed based on open access ArXiv data.
