ID-PaS : Identity-Aware Predict-and-Search for General Mixed-Integer Linear Programs
Mixed-Integer Linear Programs (MIPs) are powerful and flexible tools for modeling a wide range of real-world combinatorial optimization problems. Predict-and-Search methods operate by using a predictive model to estimate promising variable assignments and then guiding a search procedure toward high-quality solutions. Recent research has demonstrated that incorporating machine learning (ML) into the Predict-and-Search framework significantly enhances its performance. However, existing work is restricted to binary problems and overlooks the fixed variables that commonly arise in practical settings. This work extends the Predict-and-Search (PaS) framework to parametric MIPs and introduces ID-PaS, an identity-aware learning framework that enables the ML model to handle heterogeneous variables more effectively. Experiments on several real-world large-scale problems demonstrate that ID-PaS consistently achieves superior performance compared to both the state-of-the-art solver Gurobi and PaS.
💡 Research Summary
The paper introduces ID‑PaS (Identity‑aware Predict‑and‑Search), a novel learning‑augmented heuristic that extends the Predict‑and‑Search (PaS) framework from binary mixed‑integer programs to general mixed‑integer linear programs (MIPs) with integer variables and parametric data. The authors observe that many real‑world MIPs contain a large number of integer variables, most of which take the value zero in optimal or near‑optimal solutions. Exploiting this sparsity, ID‑PaS does not try to predict the exact integer value of each variable; instead it learns a binary indicator of whether a variable will be zero. This simplification reduces the learning difficulty and aligns the training objective with the structure of practical problems.
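The zero-indicator idea above can be sketched as a simple label-construction step: rather than regressing each integer variable's exact value, training targets mark whether the variable is zero in a collected (near-)optimal solution. This is a minimal illustration under assumed conventions; the function name and tolerance are hypothetical, not taken from the paper.

```python
# Hypothetical sketch: turning a (near-)optimal solution into binary
# zero-indicator training labels, as the summary describes.
def zero_indicator_labels(solution_values, tol=1e-6):
    """Return 1 for each variable whose value is (numerically) zero, else 0."""
    return [1 if abs(v) < tol else 0 for v in solution_values]

# Example: a sparse solution where most integer variables are zero.
labels = zero_indicator_labels([0, 3, 0, 0, 1, 0])
# labels == [1, 0, 1, 1, 0, 1]
```

Because most integers are zero in practice, these labels are heavily skewed toward 1, which is exactly the sparsity the method exploits: a binary classification target is far easier to learn than a multi-class or regression target over integer values.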
A key contribution is the introduction of identity‑aware features. Standard bipartite graph representations of MIPs are permutation‑invariant, meaning that the model cannot distinguish “variable 1” from “variable 2” across different instances. In parametric settings, however, the same variable index often corresponds to the same physical entity (e.g., a specific truck route or rail arc). To capture this, the authors augment each variable node with a binary positional vector encoding its index. This allows the graph neural network to retain variable identity while still benefiting from the relational structure of the bipartite graph.
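A binary positional vector of the kind described can be built by writing the variable's index in base two, giving each variable node a compact, fixed-length identity feature. This is an illustrative sketch of the general idea; the exact encoding and bit width used by the authors may differ.

```python
# Hypothetical sketch: encode a variable's index as a fixed-length binary
# vector so the GNN can tell apart variables that are structurally identical
# but correspond to different physical entities across parametric instances.
def binary_index_encoding(index, num_bits):
    """Most-significant bit first; num_bits must cover the largest index."""
    return [(index >> b) & 1 for b in reversed(range(num_bits))]

# Example: with 4 bits, variable 5 is encoded as [0, 1, 0, 1].
# In practice this vector is concatenated onto the variable node's
# other features (bounds, objective coefficient, variable type, ...).
```

The encoding breaks permutation invariance only where it helps: constraint nodes and edge features stay index-free, so the model still generalizes over the relational structure while remembering which variable is which.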
The learning model is a Graph Attention Network (GAT). Variable and constraint nodes, together with edge coefficients, are embedded into a 64‑dimensional space. Two rounds of attention‑based message passing are performed: first constraints attend to neighboring variables, then variables attend back to constraints, using eight attention heads. A multilayer perceptron with a sigmoid output produces, for each variable, a probability in [0, 1] that it takes the value zero.
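One attention round of the kind described (constraints attending to their neighboring variables) can be sketched in miniature. This is a single-head, toy-dimensional illustration with learned weight matrices omitted for readability; the actual model uses 64-dimensional embeddings, eight heads, and trained parameters. All names here are illustrative.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def softmax(scores):
    # Numerically stable softmax over a list of attention scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def attend(query, neighbor_embeddings):
    """One attention step: weight each neighbor by its (softmaxed)
    similarity to the query node, then aggregate."""
    weights = softmax([dot(query, h) for h in neighbor_embeddings])
    dim = len(query)
    return [sum(w * h[i] for w, h in zip(weights, neighbor_embeddings))
            for i in range(dim)]

# A constraint node aggregating messages from its two incident variables:
constraint = [1.0, 0.0]
variables = [[0.5, 0.5], [0.0, 1.0]]
updated = attend(constraint, variables)
```

In the full model this step runs in parallel over all constraint nodes (and then, in the second round, over all variable nodes), with each of the eight heads using its own learned projections before the results are combined.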