Global Optimization for Combinatorial Geometry Problems Revisited in the Era of LLMs


Recent progress in LLM-driven algorithm discovery, exemplified by DeepMind’s AlphaEvolve, has produced new best-known solutions for a range of hard geometric and combinatorial problems. This raises a natural question: to what extent can modern off-the-shelf global optimization solvers match such results when the problems are formulated directly as nonlinear optimization problems (NLPs)? We revisit a subset of problems from the AlphaEvolve benchmark suite and evaluate straightforward NLP formulations with two state-of-the-art solvers, the commercial FICO Xpress and the open-source SCIP. Without any solver modifications, both solvers reproduce, and in several cases improve upon, the best solutions previously reported in the literature, including the recent LLM-driven discoveries. Our results not only highlight the maturity of generic NLP technology and its ability to tackle nonlinear mathematical problems that were out of reach for general-purpose solvers only a decade ago, but also position global NLP solvers as powerful tools that may be exploited within LLM-driven algorithm discovery.


💡 Research Summary

The paper revisits three challenging combinatorial‑geometric benchmark problems that were recently tackled by DeepMind’s AlphaEvolve framework, which uses large language models (LLMs) to generate and evolve algorithmic code. The authors ask whether modern, off‑the‑shelf global nonlinear programming (NLP) solvers can match or even surpass the best‑known solutions obtained by LLM‑driven search when the problems are expressed directly as continuous nonlinear models. To answer this, they formulate the problems for two state‑of‑the‑art solvers—FICO Xpress (commercial) and SCIP (open‑source)—without any custom tuning, preprocessing, or problem‑specific heuristics.

The three problems are:

  1. Min‑max distance ratio – given n points in d‑dimensional Euclidean space, minimize the ratio of the largest to the smallest pairwise distance. The authors present two equivalent NLP models: (i) fix the minimum distance to 1 and minimize the maximal squared distance t_max, and (ii) fix the maximal distance to 1 and maximize the minimal squared distance t_min. Both lead to quadratically constrained programs (QCPs) with O(n·d) variables and O(n²) quadratic constraints. Experiments on 2‑D and 3‑D instances up to n = 30 reproduce all previously reported best values and improve several of them by up to five decimal places.

  2. Variable‑radius circle packing – pack n circles of freely chosen radii inside a rectangle of fixed perimeter 4 (or a unit square) so that the sum of radii is maximized. Decision variables include circle centers (x_i, y_i), radii r_i, and the rectangle’s short side α (the long side is 2 − α). Constraints enforce containment, non‑overlap (quadratic), radius bounds, and α ∈ (0, 1]. The objective is linear in the radii, which makes the problem especially amenable to global solvers. The authors obtain improved solutions for 32 circles in a unit square (sum of radii 2.93794 vs. 2.937) and for several rectangle variants, again matching or beating the AlphaEvolve results.

  3. Unit‑hexagon packing – place n regular hexagons of unit side length inside a larger regular hexagon of minimal side length R, allowing free translation and rotation of each inner hexagon. This introduces rotation angles θ_i and trigonometric expressions, but the authors rewrite the model in a factorable form that SCIP and Xpress can handle. Preliminary results already yield upper bounds competitive with the best LLM‑generated configurations.
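To make formulation (i) of the first problem concrete, here is a minimal sketch in Python. It fixes every squared pairwise distance to at least 1 and minimizes a bound t on the largest squared distance, exactly as described above, but uses scipy's local SLSQP method rather than the global solvers (Xpress, SCIP) evaluated in the paper, so it only finds a locally optimal configuration from a random start. The function name and structure are illustrative, not taken from the paper.

```python
# Sketch of NLP formulation (i) for the min-max distance ratio problem:
# fix the minimum squared pairwise distance to >= 1, minimize the bound t
# on the maximum squared distance. Squared distances keep the constraints
# quadratic and avoid square-root nonlinearity.
import itertools
import numpy as np
from scipy.optimize import minimize

def min_max_ratio(n, d, seed=0):
    rng = np.random.default_rng(seed)
    pairs = list(itertools.combinations(range(n), 2))

    def sq_dists(z):
        pts = z[:-1].reshape(n, d)
        return np.array([np.sum((pts[i] - pts[j]) ** 2) for i, j in pairs])

    # Variables: n*d coordinates plus the bound t on the max squared distance.
    z0 = np.concatenate([rng.uniform(0.0, 3.0, n * d), [10.0]])
    cons = [
        # every squared distance at least 1 (fixes the scale)
        {"type": "ineq", "fun": lambda z: sq_dists(z) - 1.0},
        # every squared distance at most t
        {"type": "ineq", "fun": lambda z: z[-1] - sq_dists(z)},
    ]
    res = minimize(lambda z: z[-1], z0, constraints=cons,
                   method="SLSQP", options={"maxiter": 500})
    pts = res.x[:-1].reshape(n, d)
    dist2 = sq_dists(res.x)
    # max/min distance ratio of the (locally) optimal point set
    return float(np.sqrt(dist2.max() / dist2.min())), pts
```

For n = 4 points in the plane, the globally optimal ratio is √2 (a square); a local solver may or may not reach it depending on the starting point.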
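The variable-radius packing model can be sketched in the same way. For simplicity this version fixes the container to a unit square (the paper also treats the rectangle with free short side α); the linear objective and quadratic non-overlap constraints match the description above, while the local SLSQP solver again stands in for the global solvers and all names are illustrative.

```python
# Sketch of the variable-radius circle packing model in a unit square:
# maximize sum(r_i) subject to containment (linear) and pairwise
# non-overlap (quadratic), solved locally for illustration.
import itertools
import numpy as np
from scipy.optimize import minimize

def pack_circles(n, seed=0):
    rng = np.random.default_rng(seed)
    pairs = list(itertools.combinations(range(n), 2))

    def unpack(z):
        return z[:n], z[n:2 * n], z[2 * n:]  # x, y, r

    def containment(z):
        x, y, r = unpack(z)
        # each circle fully inside [0,1] x [0,1]
        return np.concatenate([x - r, 1 - x - r, y - r, 1 - y - r])

    def non_overlap(z):
        x, y, r = unpack(z)
        # squared center distance at least squared sum of radii
        return np.array([(x[i] - x[j]) ** 2 + (y[i] - y[j]) ** 2
                         - (r[i] + r[j]) ** 2 for i, j in pairs])

    z0 = np.concatenate([rng.uniform(0.2, 0.8, 2 * n), np.full(n, 0.1)])
    bounds = [(0, 1)] * (2 * n) + [(0, 0.5)] * n
    cons = [{"type": "ineq", "fun": containment},
            {"type": "ineq", "fun": non_overlap}]
    res = minimize(lambda z: -z[2 * n:].sum(), z0, bounds=bounds,
                   constraints=cons, method="SLSQP")
    return float(-res.fun), res.x
```

For n = 2 the best possible sum of radii in a unit square is 2 − √2 ≈ 0.586; a local run gives at most that value, up to solver tolerance.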
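For the hexagon problem, one standard factorable reformulation (assumed here for illustration; the paper's exact rewriting may differ) replaces each rotation angle θ_i with two variables c_i, s_i tied by c_i² + s_i² = 1, so every constraint becomes polynomial and trigonometric functions disappear from the model. The helper below builds a unit hexagon's vertices from a center and such a (c, s) pair without calling any trigonometric function at solve time.

```python
# Factorable rotation: represent the angle theta only through c = cos(theta)
# and s = sin(theta), constrained by c**2 + s**2 == 1 in the NLP model.
import numpy as np

# Vertices of an axis-aligned regular hexagon with unit side length
# (circumradius 1 equals side length for a regular hexagon).
BASE = np.array([(np.cos(k * np.pi / 3), np.sin(k * np.pi / 3))
                 for k in range(6)])

def hexagon_vertices(cx, cy, c, s):
    """Rotate BASE by the angle with cosine c and sine s, then translate.

    (c, s) must satisfy c**2 + s**2 == 1 for a valid rotation."""
    rot = np.array([[c, -s], [s, c]])
    return BASE @ rot.T + np.array([cx, cy])
```

Containment and non-overlap constraints for the hexagons can then be written over these vertices as polynomials in (cx, cy, c, s), which is the factorable form SCIP and Xpress can process.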

Across all experiments, both solvers succeed with their default settings. Xpress typically solves the larger instances a few seconds faster, while SCIP, despite being open‑source, reaches the same objective values within comparable runtimes. The authors emphasize that the key to this performance is the simplicity of the NLP formulations: using squared distances eliminates square‑root non‑linearity, fixing one scale (either min or max distance) reduces the number of nonlinear terms, and the problems are essentially unconstrained apart from the geometric relationships, allowing the solvers’ automatic linearization, convexification, and sophisticated primal heuristics to operate effectively.

The paper’s broader contribution is a methodological message: LLM‑based algorithm discovery excels at generating novel heuristic structures, but the final verification, refinement, and certification of high‑quality solutions can be delegated to mature global NLP technology. The authors suggest a hybrid workflow where LLMs propose candidate models or heuristic moves, which are then fed into a solver for rigorous optimization; feedback from the solver could in turn guide further LLM exploration. This synergy could accelerate progress on longstanding geometric and combinatorial optimization challenges that have resisted exact methods for decades.

In conclusion, the study demonstrates that modern global NLP solvers are not only capable of reproducing the state‑of‑the‑art LLM results but can also improve upon them, highlighting the maturity of generic optimization software and its potential role as a powerful component within LLM‑driven algorithm discovery pipelines.

