📝 Original Info
- Title: ARC: Leveraging Compositional Representations for Cross-Problem Learning on VRPs
- ArXiv ID: 2512.18633
- Date: 2025-12-21
- Authors: **Han-Seul Jeong, Youngjoon Park, Hyungseok Song, Woohyung Lim (LG AI Research, Republic of Korea)**
📝 Abstract
Vehicle Routing Problems (VRPs) with diverse real-world attributes have driven recent interest in cross-problem learning approaches that efficiently generalize across problem variants. We propose ARC (Attribute Representation via Compositional Learning), a cross-problem learning framework that learns disentangled attribute representations by decomposing them into two complementary components: an Intrinsic Attribute Embedding (IAE) for invariant attribute semantics and a Contextual Interaction Embedding (CIE) for attribute-combination effects. This disentanglement is achieved by enforcing analogical consistency in the embedding space to ensure the semantic transformation of adding an attribute (e.g., a length constraint) remains invariant across different problem contexts. This enables our model to reuse invariant semantics across trained variants and construct representations for unseen combinations. ARC achieves state-of-the-art performance across in-distribution, zero-shot generalization, few-shot adaptation, and real-world benchmarks.
💡 Deep Analysis
📄 Full Content
ARC: Leveraging Compositional Representations for
Cross-Problem Learning on VRPs
Han-Seul Jeong, Youngjoon Park, Hyungseok Song, Woohyung Lim
LG AI Research
Republic of Korea
{hanseul.jeong, yj.park, hyungseok.song, w.lim}@lgresearch.ai
Abstract
Vehicle Routing Problems (VRPs) with diverse real-world attributes have driven recent interest in cross-problem learning approaches that efficiently generalize across problem variants. We propose ARC (Attribute Representation via Compositional Learning), a cross-problem learning framework that learns disentangled attribute representations by decomposing them into two complementary components: an Intrinsic Attribute Embedding (IAE) for invariant attribute semantics and a Contextual Interaction Embedding (CIE) for attribute-combination effects. This disentanglement is achieved by enforcing analogical consistency in the embedding space to ensure the semantic transformation of adding an attribute (e.g., a length constraint) remains invariant across different problem contexts. This enables our model to reuse invariant semantics across trained variants and construct representations for unseen combinations. ARC achieves state-of-the-art performance across in-distribution, zero-shot generalization, few-shot adaptation, and real-world benchmarks.
1 Introduction
The Capacitated Vehicle Routing Problem (CVRP) is a fundamental NP-hard combinatorial optimization challenge [26, 12, 7]. While deep learning-based approximation algorithms within the Neural Combinatorial Optimization (NCO) paradigm have demonstrated near-optimal performance [1, 24, 6, 17, 13, 14, 22], real-world routing applications must address diverse attributes such as time windows [23] or open routing [25]. To efficiently leverage information about attributes shared across multiple VRP variants, recent research has focused on cross-problem learning, where a single unified model is trained to solve multiple VRP variants defined by different attribute combinations [30, 2, 15], improving efficiency and generalization compared to variant-specific models [16].
However, prior works [16, 30, 2, 15] often conflate invariant attribute semantics with contextual effects among attributes, leading to entangled representations that hinder efficient knowledge sharing across different VRP variants. To address this limitation, we propose ARC, which disentangles individual attribute embeddings by decomposing each representation into intrinsic components that remain consistent across combinations and contextual components that capture combination-specific interactions. ARC learns distinct attribute representations through analogical compositional learning: by enforcing analogous transformations across different problem contexts, identical attributes maintain their intrinsic semantics regardless of the combinations in which they appear. Contextual components then model attribute interactions by leveraging the learned intrinsic representations within specific problem contexts, enabling efficient cross-problem learning and zero-shot generalization to unseen combinations.
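The analogical consistency idea — adding the same attribute should shift the representation the same way regardless of the base variant — can be illustrated with a toy sketch. All names, dimensions, and the bilinear interaction form below are our illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8
ATTRS = ["C", "O", "TW", "L"]  # capacity, open route, time window, length limit

# Intrinsic Attribute Embedding (IAE): one vector per attribute, shared
# across every variant it appears in (hypothetical placeholder values).
iae = {a: rng.normal(size=DIM) for a in ATTRS}

# Contextual Interaction Embedding (CIE): a combination-dependent
# correction; here modeled as a toy pairwise bilinear interaction.
W = rng.normal(size=(DIM, DIM)) * 0.1

def variant_embedding(attrs):
    """Compose a variant representation from intrinsic + contextual parts."""
    intrinsic = sum(iae[a] for a in attrs)
    contextual = np.zeros(DIM)
    for i, a in enumerate(attrs):
        for b in attrs[i + 1:]:
            contextual += W @ (iae[a] * iae[b])  # pairwise interaction term
    return intrinsic + contextual

def analogical_consistency_loss(base_a, base_b, attr):
    """Penalize violations of E(A + attr) - E(A) ≈ E(B + attr) - E(B)."""
    delta_a = variant_embedding(base_a + [attr]) - variant_embedding(base_a)
    delta_b = variant_embedding(base_b + [attr]) - variant_embedding(base_b)
    return float(np.sum((delta_a - delta_b) ** 2))

# e.g. adding a time window to CVRP (C) vs. to OVRP (C + O)
loss = analogical_consistency_loss(["C"], ["C", "O"], "TW")
```

In a trained model this quantity would be driven toward zero as an auxiliary objective, forcing the attribute-addition "direction" to be context-invariant while the contextual module absorbs combination-specific effects.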
39th Conference on Neural Information Processing Systems (NeurIPS 2025) Workshop: Differentiable Learning of Combinatorial Algorithms.
Extensive experiments demonstrate that ARC outperforms existing baselines on trained configurations while achieving robust zero-shot generalization to unseen attribute combinations and efficient few-shot adaptation to new attributes, with validation on real-world benchmarks. Our main contributions are as follows:
• We propose ARC, a novel cross-problem learning framework that disentangles attribute representations by decomposing them into intrinsic and contextual components, facilitating effective knowledge sharing across different VRP variants.
• We introduce a compositional learning mechanism that enforces analogical embedding relationships, establishing, to our knowledge, the first analogical embedding framework for NCO.
• We demonstrate superior performance across four scenarios: (1) in-distribution, (2) zero-shot generalization to unseen attribute combinations, (3) few-shot adaptation to new attributes, and (4) the real-world benchmark CVRPLib.
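Because intrinsic semantics are shared across variants, a representation for an attribute combination never seen in training can in principle be assembled from already-trained parts. A minimal sketch of this zero-shot composition, with hypothetical names and placeholder values standing in for learned parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
DIM = 8

# Intrinsic embeddings assumed learned from trained variants such as
# CVRP (C), VRPTW (C+TW), and OVRP (C+O); values are placeholders.
iae = {a: rng.normal(size=DIM) for a in ["C", "TW", "O"]}

def compose_unseen(attrs):
    """Zero-shot composition: reuse the shared intrinsic embeddings.
    A contextual interaction module (omitted here) would add
    combination-specific corrections on top of this sum."""
    return sum(iae[a] for a in attrs)

# OVRPTW (C + O + TW) never appeared jointly in training, yet its
# representation is constructed from the trained pieces.
z_unseen = compose_unseen(["C", "O", "TW"])
```

The point of the sketch is only the reuse pattern: no new parameters are needed for the unseen combination, which is what makes zero-shot generalization across attribute combinations plausible.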
2 Related Works
Cross-Problem CO Solvers. Recent work has shifted toward cross-problem learning, developing universal architectures capable of solving diverse problems. This research spans two branches: heterogeneous CO tasks [5, 21] and VRP variants with different attribute combinations, to which our work belongs. Existing VRP approaches include joint training and Mixture-of-Experts [16, 30], foundation models [2], and attribute-aware attention mechanisms [15]. However, these methods learn mixed representations in which shared attribute semantics are entangled with combination-specific interactions, hindering efficient knowledge sharing across attribute combinations. Our approach explicitly decomposes attribute representations into intrinsic characteristics and combination-specific interaction effects.
This content is AI-processed based on open access ArXiv data.