A partitioned optimization framework for structure-aware optimization
This work tackles a class of optimization problems in which fixing some well-chosen combinations of the variables makes the problem substantially easier to solve. We assume that the variable space can be partitioned into subsets that fix these combinations to given values, so that the restriction of the problem to any partition set admits a tractable solution. We then exhibit a reformulation of the problem that searches for the partition set index minimizing the objective value of the solution to the restricted problem. We call the formalization of this class of problems and of this reformulation process the partitioned optimization framework (POf). As we prove in this work, the POf allows the original problem to be solved through the reformulated problem: every solution to the reformulated problem is a partition index for which the solution to the associated restricted problem is also a solution to the original problem. Second, we introduce a derivative-free partitioned optimization method (DFPOm) to efficiently solve problems that fit the POf. We prove that the reformulated problem is well handled by derivative-free optimization (DFO) algorithms equipped with a covering step. The DFPOm then solves the reformulated problem with such a DFO algorithm to obtain an optimal partition index, and returns the solution to the associated restricted problem as a solution to the initial problem. Finally, we illustrate how the DFPOm solves several classes of problems. We first focus on an infinite-dimensional case by analytically solving an optimal control problem that challenges standard methods from the literature. We then apply the DFPOm to a class of finite-dimensional problems called composite greybox problems, and we highlight its gain in numerical performance by comparing it to two DFO solvers.
💡 Research Summary
This paper introduces a novel optimization paradigm called the Partitioned Optimization Framework (POf) together with a derivative‑free algorithm, the Derivative‑Free Partitioned Optimization method (DFPOm), designed to exploit the framework. The central observation is that many difficult optimization problems become tractable once certain combinations of variables are fixed. By partitioning the decision space Y into a continuum of disjoint subsets Y(x) indexed by a finite‑dimensional space X, each subproblem (P_sub(x)) — the original problem restricted to Y(x) — can often be solved analytically or with a cheap inner routine. An oracle function γ:X→Y returns a global minimizer of each subproblem; the index function χ:Y→X maps any point back to its partition. The original problem (P_ini) is then exactly reformulated as a low‑dimensional problem (P_ref): minimize Φ(x)=φ(γ(x)) over X, with Φ(x)=+∞ outside the effective index set. Theorem 1 proves that any optimal index x* for Φ produces a global solution γ(x*) to the original problem, establishing the theoretical soundness of the partitioning approach.
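The reformulation described above can be made concrete with a small sketch. Everything below is invented for illustration and is not from the paper: a toy objective on Y = R², a partition by the first coordinate, and a grid search standing in for the index search, so that the roles of χ, γ, and Φ are visible in code.

```python
# Toy instance of the POf (invented for illustration; not from the paper).
# Original problem (P_ini): minimize phi over Y = R^2.
def phi(y1, y2):
    # Nonsmooth in y1, but a simple quadratic in y2 once y1 is fixed.
    return abs(y1) + (y2 - y1) ** 2

# Partition: Y(x) = {(x, t) : t in R}, indexed by x in X = R.
def chi(y):
    """Index function chi: maps a point back to its partition index."""
    return y[0]

def gamma(x):
    """Oracle gamma: global minimizer of the subproblem P_sub(x),
    i.e. of t -> phi(x, t); here the minimizer is t = x."""
    return (x, x)

def Phi(x):
    """Reduced objective of (P_ref): Phi(x) = phi(gamma(x)) = |x|."""
    return phi(*gamma(x))

# Minimizing Phi over a 1-D grid recovers an optimal index x*, and
# gamma(x*) solves the original 2-D problem (the content of Theorem 1).
x_star = min((i / 100 - 1 for i in range(201)), key=Phi)
y_star = gamma(x_star)
```

Here Φ(x) = |x| is minimized at x* = 0, and γ(x*) = (0, 0) is indeed a global minimizer of φ, illustrating why optimal indices of the reformulated problem yield global solutions of the original one.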
To solve the reformulated problem, the authors employ a class of derivative‑free optimization (DFO) algorithms augmented with a "covering step." This step guarantees that the trial points in X eventually cover the whole effective index set, ensuring global convergence even when Φ is discontinuous or non‑convex. Theorem 2 provides convergence guarantees for the DFPOm under mild assumptions: for any tolerance, the method returns, after finitely many function evaluations, a point whose objective value is within that tolerance of the global minimum.
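The role of the covering step can be pictured with a toy direct search. The sketch below is a hypothetical 1-D implementation, not the paper's algorithm: it alternates a local poll around the incumbent with a uniformly sampled "covering" point, so that trial points become dense in the index interval in the limit, which is the property that lets such methods cope with a discontinuous Φ.

```python
import random

def dfo_with_covering_step(Phi, lo, hi, n_iters=200, seed=0):
    """Toy 1-D direct search with a covering step (hypothetical sketch,
    not the paper's algorithm)."""
    rng = random.Random(seed)
    x_best = (lo + hi) / 2
    f_best = Phi(x_best)
    step = (hi - lo) / 4
    for _ in range(n_iters):
        # Covering step: a point sampled over the whole index set, so the
        # trial points are dense in [lo, hi] in the limit.
        x_cov = rng.uniform(lo, hi)
        # Poll step: two neighbours of the incumbent on the current mesh.
        candidates = [x_cov, x_best - step, x_best + step]
        improved = False
        for x in candidates:
            if lo <= x <= hi:
                f = Phi(x)
                if f < f_best:
                    x_best, f_best, improved = x, f, True
        if not improved:
            step /= 2  # mesh refinement on an unsuccessful iteration
    return x_best, f_best

# Demo on a smooth Phi; the covering step is what would also let the
# search locate the right basin of a discontinuous Phi.
x_best, f_best = dfo_with_covering_step(lambda x: (x - 0.7) ** 2, 0.0, 1.0)
```

Without the covering line, this is an ordinary compass search, which can stall in a local basin when Φ is discontinuous; the covering samples are what restore a global guarantee.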
The paper validates the framework on three fronts. First, an infinite‑dimensional optimal control problem with a discontinuous Mayer cost is solved analytically by fixing the landing point (the partition index). Once the landing point is fixed, the cost becomes constant and the dynamics reduce to a smooth optimal‑control subproblem, which can be solved analytically; DFPOm then efficiently searches over landing points to recover the global optimum—something standard numerical methods cannot achieve due to the discontinuity.
Second, a finite‑dimensional “composite grey‑box” example is presented: φ(y₁,y₂) = (y₂−σ(y₁))² + ε(y₁) with black‑box functions σ and ε. By partitioning on y₁, each subproblem reduces to a simple quadratic in y₂, whose minimizer is γ(x) = (x, σ(x)). The reduced objective becomes Φ(x)=ε(x), a low‑dimensional black‑box function that DFPOm can optimize efficiently. Numerical experiments compare DFPOm against two state‑of‑the‑art DFO solvers applied directly to the original problem; DFPOm consistently requires fewer function evaluations and less CPU time (often 2–5× faster) while achieving the same or better solution quality.
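This example is small enough to sketch end to end. The stand-ins for the black boxes below are invented for the demo (σ(x) = sin 3x and ε(x) = (x − 0.5)² are hypothetical; the paper's σ and ε are unknown), and a coarse grid search plays the role of the DFO solver on the 1-D index set.

```python
import math

# Hypothetical stand-ins for the blackbox components sigma and eps
# (invented for this demo; the paper's actual functions are unknown).
def sigma(x):
    return math.sin(3 * x)

def eps(x):
    return (x - 0.5) ** 2   # minimized at x = 0.5

def phi(y1, y2):
    """Original composite greybox objective phi(y1, y2)."""
    return (y2 - sigma(y1)) ** 2 + eps(y1)

def gamma(x):
    """Oracle: with y1 fixed to x, phi is a quadratic in y2 whose
    global minimizer is y2 = sigma(x)."""
    return (x, sigma(x))

def Phi(x):
    """Reduced objective: Phi(x) = phi(gamma(x)) = eps(x)."""
    return phi(*gamma(x))

# Toy stand-in for a DFO solver with a covering step: a coarse grid
# search over the 1-D index set.
xs = [i / 1000 for i in range(1001)]
x_star = min(xs, key=Phi)
y_star = gamma(x_star)   # solution to the original 2-D problem
```

The point of the construction is that the solver only ever searches over the single index variable x, with the quadratic in y₂ resolved in closed form inside γ, rather than searching over (y₁, y₂) jointly.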
Third, higher‑dimensional composite grey‑box problems (hundreds of variables) are tackled. By constructing partitions that isolate the few variables that drive the objective’s complexity, the authors demonstrate that DFPOm avoids the curse of dimensionality that plagues conventional DFO methods. In these tests, standard DFO solvers either stagnate or require orders of magnitude more evaluations, whereas DFPOm reliably converges to high‑quality solutions.
The discussion acknowledges several limitations and future directions. The current theory assumes that γ returns exact global minimizers; the authors show empirically that approximate γ still yields good performance and propose to extend the theory to accommodate inexact or locally optimal γ. Automating the construction of a useful partition for arbitrary problems remains an open challenge. Extensions to problems with complex constraints, manifold‑structured partitions, and applications such as counterfactual generation in machine learning are outlined.
In summary, the paper provides a rigorous formulation (POf) that transforms a potentially high‑dimensional, non‑smooth optimization problem into a low‑dimensional surrogate by exploiting problem structure. The accompanying DFPOm algorithm leverages derivative‑free optimization with a covering strategy to solve the surrogate efficiently. Empirical results on infinite‑dimensional control, composite grey‑box, and high‑dimensional test cases demonstrate substantial gains over existing DFO methods, establishing POf and DFPOm as powerful tools for a broad class of structured optimization problems.