MergeVLA: Cross-Skill Model Merging Toward a Generalist Vision-Language-Action Agent
Recent Vision-Language-Action (VLA) models adapt vision-language models by fine-tuning them on millions of robotic demonstrations. While they perform well when fine-tuned for a single embodiment or task family, extending them to multi-skill settings remains challenging: directly merging VLA experts trained on different tasks results in near-zero success rates. This raises a fundamental question: what prevents VLAs from mastering multiple skills within one model? Through an empirical decomposition of the learnable parameters during VLA fine-tuning, we identify two key sources of non-mergeability: (1) Fine-tuning drives LoRA adapters in the VLM backbone toward divergent, task-specific directions beyond the capacity of existing merging methods to unify. (2) Action experts develop inter-block dependencies through self-attention feedback, causing task information to spread across layers and preventing modular recombination. To address these challenges, we present MergeVLA, a merging-oriented VLA architecture that preserves mergeability by design. MergeVLA introduces sparsely activated LoRA adapters via task masks to retain consistent parameters and reduce irreconcilable conflicts in the VLM. Its action expert replaces self-attention with cross-attention-only blocks to keep specialization localized and composable. When the task is unknown, it uses a test-time task router to adaptively select the appropriate task mask and expert head from the initial observation, enabling unsupervised task inference. Across LIBERO, LIBERO-Plus, RoboTwin, and multi-task experiments on the real SO101 robotic arm, MergeVLA achieves performance comparable to or even exceeding individually finetuned experts, demonstrating robust generalization across tasks, embodiments, and environments. Project page: https://mergevla.github.io/
💡 Research Summary
MergeVLA tackles the long‑standing problem of consolidating multiple Vision‑Language‑Action (VLA) experts into a single, generalist robot policy. While large language and vision‑language models can be merged with simple weight averaging or low‑rank techniques, naïve merging of VLA specialists fails catastrophically, yielding near‑zero success rates. The authors identify two root causes: (1) LoRA adapters inserted into the pretrained vision‑language backbone diverge sharply across tasks, with more than 75% of adapter parameters becoming “selfish” (i.e., useful for only one task). Direct averaging therefore destroys the shared visual‑semantic subspace. (2) The action expert, typically trained from scratch, contains self‑attention layers that propagate task‑specific signals across depth, creating strong inter‑block dependencies that make deeper layers irreconcilable across tasks.
To resolve these issues, MergeVLA introduces a merge‑oriented architecture. First, it constructs a global LoRA update τ_merge from all task‑specific updates and then applies a binary mask Sₘ for each task. The mask retains a parameter only if its magnitude on task m exceeds a scaled residual relative to the global update, effectively pruning selfish parameters and preserving pretrained knowledge. This sparsely‑activated LoRA scheme dramatically reduces cross‑task interference. Second, the action expert is redesigned to contain only cross‑attention blocks; self‑attention is removed entirely. Consequently, the expert relies solely on the fixed VLM features, preventing the buildup of task‑specific representations across layers. All blocks except the deepest “expert head” can now be merged by simple averaging, while the head—still highly task‑specific—is kept separate for each skill.
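The masking idea above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the exact construction of τ_merge and the scaling of the residual follow the paper, while the `alpha` hyperparameter and the simple mean used for the global update here are assumptions for the sake of a runnable example.

```python
import numpy as np

def build_task_mask(tau_m, tau_merge, alpha=1.0):
    """Hypothetical mask rule: keep a parameter for task m only if its
    task-specific magnitude exceeds a scaled residual relative to the
    global (merged) update. `alpha` is an assumed scaling factor."""
    residual = np.abs(tau_m - tau_merge)
    return np.abs(tau_m) > alpha * residual  # binary mask S_m

def merge_sparse_lora(task_updates, alpha=1.0):
    """Form a global update from all task-specific LoRA updates, then
    derive one sparse activation mask per task. 'Selfish' entries, whose
    task-specific value deviates strongly from the merged value, are
    pruned, which is what reduces cross-task interference."""
    tau_merge = np.mean(task_updates, axis=0)  # assumed: plain averaging
    masks = [build_task_mask(t, tau_merge, alpha) for t in task_updates]
    return tau_merge, masks
```

At deployment, the shared τ_merge is applied once, and the per-task binary mask selects which of its entries are active for the task at hand.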
At inference time, the task identity may be unknown. MergeVLA therefore employs a training‑free test‑time task router. For each candidate task, the router runs the VLM with the corresponding mask, extracts hidden states, and measures their alignment with the principal components of the merged expert’s value projections. The task with the highest alignment is selected, activating its mask and expert head. This enables on‑the‑fly skill selection without any additional supervision.
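The routing step can be sketched as a subspace-alignment test. The sketch below is hypothetical in its details: `run_vlm_with_mask` stands in for a forward pass of the masked VLM, the principal components are taken via a plain SVD of the merged value-projection matrix, and the energy-ratio score is an assumed concrete choice for "alignment."

```python
import numpy as np

def principal_components(value_proj, k=4):
    """Top-k left singular vectors of the merged expert's value-projection
    matrix, used as a basis for the 'expected feature' subspace."""
    U, _, _ = np.linalg.svd(value_proj, full_matrices=False)
    return U[:, :k]                      # (d, k) orthonormal basis

def alignment_score(hidden, basis):
    """Fraction of hidden-state energy captured by the principal subspace
    (an assumed alignment measure; sign of the singular vectors is irrelevant)."""
    proj = hidden @ basis                # coordinates in the subspace
    return np.linalg.norm(proj) ** 2 / (np.linalg.norm(hidden) ** 2 + 1e-8)

def route_task(run_vlm_with_mask, masks, value_proj, obs, k=4):
    """Try every candidate task mask on the initial observation and pick
    the task whose VLM hidden states best align with the merged expert's
    value subspace. `run_vlm_with_mask(obs, mask)` is a hypothetical
    callable returning an (n_tokens, d) array of hidden states."""
    basis = principal_components(value_proj, k)
    scores = [alignment_score(run_vlm_with_mask(obs, m), basis) for m in masks]
    return int(np.argmax(scores))        # index of the selected task
```

The selected index then activates the corresponding task mask and expert head, so no labels or extra supervision are needed at test time.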
Extensive experiments on the LIBERO, LIBERO‑Plus, and RoboTwin simulated benchmarks, as well as real‑world trials with a SO101 robotic arm, demonstrate the efficacy of the approach. Under mixed‑task evaluation, MergeVLA achieves success rates of 90.2% (LIBERO), 62.5% (LIBERO‑Plus), and 70.7% (RoboTwin), matching or surpassing individually fine‑tuned experts. It also shows a 13.4% improvement in out‑of‑distribution robustness compared to VLA‑Adapter. In real‑world tests, the system reaches a 90% success rate, confirming that model merging is a viable path toward scalable, generalist embodied agents.
The paper’s contributions are threefold: (1) a principled masking strategy that mitigates LoRA‑induced parameter conflicts; (2) a redesign of the action expert that eliminates structural incompatibility, allowing most of the network to be merged; and (3) a zero‑shot task routing mechanism that handles unknown tasks at deployment time. Together, these innovations provide a practical blueprint for building multi‑skill robot agents that can be trained independently and later unified without costly joint retraining, opening avenues for future work on mask learning, expert‑head compression, and integration of additional modalities such as force or tactile feedback.