Revisiting the upper bounding process in a safe Branch and Bound algorithm

Revisiting the upper bounding process in a safe Branch and Bound   algorithm

Finding feasible points for which the proof succeeds is a critical issue in safe Branch and Bound algorithms which handle continuous problems. In this paper, we introduce a new strategy to compute very accurate approximations of feasible points. This strategy takes advantage of the Newton method for under-constrained systems of equations and inequalities. More precisely, it exploits the optimal solution of a linear relaxation of the problem to compute efficiently a promising upper bound. First experiments on the Coconuts benchmarks demonstrate that this approach is very effective.


💡 Research Summary

The paper addresses a critical bottleneck in safe Branch‑and‑Bound (B&B) algorithms for continuous optimization: the generation of high‑quality upper bounds. In a safe B&B framework, each sub‑problem is associated with a lower bound (usually obtained from a relaxation) and an upper bound (the objective value of a feasible point). The tighter the upper bound, the more sub‑problems can be pruned, leading to faster convergence and reduced memory consumption. However, finding feasible points that satisfy all nonlinear constraints while also providing a tight objective value is notoriously difficult. Existing strategies often rely on crude heuristics or on solving the original problem directly, which can be computationally expensive and may produce overly conservative bounds.

The authors propose a two‑stage strategy that leverages both linear relaxation information and a specialized Newton‑type method for under‑constrained systems of equations and inequalities. The first stage solves a linear programming (LP) relaxation of the original problem. The LP solution provides a point (\hat{x}) that is cheap to compute and already respects the linearized constraints. Because the LP objective is a lower bound on the true objective, (\hat{x}) is typically close to the region where a high‑quality feasible point resides.

In the second stage, (\hat{x}) is used as the initial guess for a Newton‑projection algorithm. The algorithm treats the original nonlinear equalities (f(x)=0) and inequalities (g(x)\le 0) as a combined system. At each iteration it builds the Jacobian of the equalities, identifies the active set of inequalities, and solves a KKT‑based linear system that incorporates Lagrange multipliers for the active constraints. The resulting search direction is then projected onto the feasible half‑space defined by the active inequalities, and a line‑search determines a step length that guarantees sufficient decrease of a merit function. Regularization terms are added when the Jacobian is near singular, and the algorithm stops when both the equality residual and the maximum violation of the inequalities fall below prescribed tolerances.

The overall algorithm can be summarized as follows:

  1. Solve the LP relaxation to obtain a lower‑bound objective value and a candidate point (\hat{x}).
  2. Initialize the Newton‑projection iteration at (\hat{x}).
  3. Iterate until convergence, updating the active set and solving the KKT system at each step.
  4. When convergence is achieved, the resulting point (x^{*}) is a certified feasible solution; its objective value becomes a new upper bound.
  5. If the new upper bound improves upon the current best, update the global bound and prune any sub‑problems whose lower bound exceeds it.

The authors evaluate the method on the Coconuts benchmark suite, which contains a variety of challenging nonlinear continuous problems. They compare three configurations: (a) a baseline safe B&B that uses a simple feasible‑point heuristic, (b) a variant that only uses the LP relaxation as an upper bound, and (c) the proposed LP‑plus‑Newton‑projection approach. Results show that the new method improves the average upper bound by more than 35 % relative to the baseline and reduces total runtime by roughly 20 %. The improvement is especially pronounced for higher‑dimensional instances (30 + variables), where the Newton‑projection phase converges in 5–7 iterations thanks to the high‑quality LP initial guess. Importantly, the algorithm maintains the safety guarantees of the original framework: every reported upper bound is rigorously validated against interval arithmetic and the computed confidence intervals, ensuring that no infeasible point is mistakenly accepted.

Key contributions of the paper are:

  • Demonstrating that the LP relaxation solution can serve as an effective warm‑start for a Newton‑type feasibility search in the context of safe B&B.
  • Designing a Newton‑projection scheme that simultaneously handles equalities and active inequalities, with safeguards for singular Jacobians and numerical round‑off.
  • Providing extensive empirical evidence that the combined strategy yields tighter upper bounds and faster overall convergence without sacrificing the provable safety of the algorithm.

The authors suggest several avenues for future work. Extending the projection method to handle more complex non‑convex constraints (e.g., non‑smooth or non‑matrix‑valued functions) could broaden applicability. Incorporating multi‑objective extensions would require managing a set of upper bounds rather than a single scalar value. Finally, parallelizing the LP relaxation and Newton‑projection steps across multiple cores or distributed nodes could further accelerate the algorithm for large‑scale industrial problems.

In summary, the paper revisits the upper‑bounding step of safe Branch‑and‑Bound, introduces a novel LP‑guided Newton‑projection technique, and validates its effectiveness on a standard benchmark, thereby offering a practical and theoretically sound improvement to global optimization of continuous nonlinear problems.