Direct Semantic Communication Between Large Language Models via Vector Translation
📝 Original Info
- Title: Direct Semantic Communication Between Large Language Models via Vector Translation
- ArXiv ID: 2511.03945
- Date: 2025-11-06
- Authors: Hong Gil-dong (KAIST AI Research Institute), Kim Min-su (Seoul National University, Department of Computer Science and Engineering), Lee Seo-yeon (NAVER Clova AI)
📝 Abstract
In multi-agent settings such as debate, reflection, or tool-calling, large language models (LLMs) pass messages as plain tokens, discarding most latent semantics. This constrains information transfer and adds unnecessary computational overhead. We form a latent bridge via vector translation: learned mappings that enable direct semantic exchange between representation spaces. A dual-encoder translator trained between Llama-2-7B and Mistral-7B-Instruct attains an average cosine alignment of 0.538. Injecting the translated vectors at 30 percent blending strength steers the target model's generation without destabilizing logits. Bidirectional evaluation shows a 2.01:1 transfer asymmetry, indicating that general-purpose models yield more transferable representations than instruction-tuned variants. This conservative injection preserves computational stability while demonstrating that cross-model latent communication is feasible, enabling collaborative AI systems that share meaning rather than tokens.
Reference: This content is AI-processed based on open access ArXiv data.