Computing Optimal Experimental Designs via Interior Point Method

In this paper, we study optimal experimental design problems with a broad class of smooth convex optimality criteria, including the classical A-, D- and pth mean criteria. In particular, we propose an interior point (IP) method for solving these problems and establish its global convergence. Furthermore, by exploiting the structure of the Hessian matrix of the aforementioned optimality criteria, we derive an explicit formula for computing its rank. Using this result, we then show that the Newton direction arising in the IP method can be computed efficiently via the Sherman-Morrison-Woodbury formula when the size of the moment matrix is small relative to the sample size. Finally, we compare our IP method with the widely used multiplicative algorithm introduced by Silvey et al. [29]. The computational results show that the IP method generally outperforms the multiplicative algorithm both in speed and solution quality.


💡 Research Summary

The paper addresses the problem of optimal experimental design under a broad class of smooth convex optimality criteria, encompassing the classical A‑optimality, D‑optimality, and the more general p‑mean criteria. The decision variables are the design weights w∈ℝⁿ, which are non‑negative and sum to one. For a given observation matrix X∈ℝ^{m×n} with columns x_i, the information (or moment) matrix is defined as M(w)=∑_{i=1}^n w_i x_i x_iᵀ. The objective is to minimize a convex function Φ(M(w)) that measures design quality; specific instances are Φ(M)=tr(M^{-1}) for A‑optimality, Φ(M)=−log det(M) for D‑optimality, and Φ(M)=(1/p) tr(M^{-p}) for the p‑mean criterion (p=1 recovers A‑optimality).
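The quantities above can be sketched in a few lines of NumPy. This is an illustrative implementation, not the authors' code; the function names are our own.

```python
import numpy as np

def moment_matrix(w, X):
    """M(w) = sum_i w_i x_i x_i^T, for the columns x_i of the m x n matrix X."""
    return (X * w) @ X.T  # broadcasting scales column i of X by w_i

def phi_A(M):
    """A-optimality criterion: trace of the inverse moment matrix."""
    return np.trace(np.linalg.inv(M))

def phi_D(M):
    """D-optimality criterion: negative log-determinant (slogdet for stability)."""
    return -np.linalg.slogdet(M)[1]

# A small random instance with uniform design weights.
rng = np.random.default_rng(0)
m, n = 3, 50
X = rng.standard_normal((m, n))
w = np.full(n, 1.0 / n)
M = moment_matrix(w, X)
```

With standard-normal columns and n ≫ m, M(w) is positive definite almost surely, so both criteria are well defined at the uniform design.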

The authors propose an interior‑point (IP) algorithm to solve this constrained convex program. They introduce a logarithmic barrier term −μ∑_{i} log w_i, forming the barrier objective L(w,μ)=Φ(M(w))−μ∑_{i} log w_i. At each iteration, a Newton step is computed by solving the Newton system ∇²L(w,μ)Δw=−∇L(w,μ), followed by a line search to ensure feasibility and sufficient decrease. The barrier parameter μ is gradually reduced, driving the iterates toward a solution of the original problem.
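One barrier Newton step can be sketched for the A‑criterion, where the gradient and Hessian of Φ(M(w)) with respect to w have the closed forms ∂Φ/∂w_i = −x_iᵀM^{−2}x_i and ∂²Φ/∂w_i∂w_j = 2(x_iᵀM^{−1}x_j)(x_iᵀM^{−2}x_j). This is our own simplified sketch: the simplex constraint is handled by one Lagrange multiplier in a bordered system, and the step-size safeguard is a plain fraction-to-boundary rule rather than the paper's line search.

```python
import numpy as np

def barrier_newton_step(w, X, mu):
    """One Newton step on L(w) = tr(M(w)^{-1}) - mu*sum(log w),
    subject to sum(w) = 1 (illustrative sketch, A-criterion only)."""
    n = w.size
    M = (X * w) @ X.T
    Minv = np.linalg.inv(M)
    A = X.T @ Minv @ X            # A[i, j] = x_i^T M^{-1} x_j
    B = X.T @ Minv @ Minv @ X     # B[i, j] = x_i^T M^{-2} x_j
    g = -np.diag(B) - mu / w                 # gradient of the barrier objective
    H = 2.0 * A * B + np.diag(mu / w**2)     # Hessian: Hadamard product + barrier
    # Bordered system [H 1; 1^T 0][dw; lam] = [-g; 0] keeps sum(dw) = 0.
    K = np.block([[H, np.ones((n, 1))], [np.ones((1, n)), np.zeros((1, 1))]])
    dw = np.linalg.solve(K, np.append(-g, 0.0))[:n]
    # Fraction-to-boundary safeguard: keep all weights strictly positive.
    t = 1.0
    while np.any(w + t * dw <= 0):
        t *= 0.5
    return w + 0.99 * t * dw

rng = np.random.default_rng(0)
m, n = 3, 40
X = rng.standard_normal((m, n))
w0 = np.full(n, 1.0 / n)
w1 = barrier_newton_step(w0, X, mu=1e-2)
```

Because the bordered system enforces 1ᵀΔw = 0, every iterate stays on the probability simplex; the barrier keeps it in the interior.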

A central technical contribution is the exploitation of the Hessian structure of Φ. For the considered criteria, the Hessian H=∇²Φ(M(w)) can be expressed as a low‑rank modification of a diagonal matrix derived from the barrier term. The authors prove that the rank of H equals the dimension m of the moment matrix, irrespective of the potentially huge number of design points n. This insight enables the use of the Sherman‑Morrison‑Woodbury formula to compute the Newton direction efficiently: instead of inverting an n×n matrix, one inverts an m×m matrix and performs a few outer‑product updates. Consequently, the per‑iteration computational cost scales as O(m³+m n), which is dramatically lower than O(n³) when n≫m.
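The Sherman-Morrison-Woodbury trick itself is easy to demonstrate in isolation. The sketch below solves (D + UUᵀ)x = b, with D diagonal of size n and U of rank r ≪ n, by forming only an r×r "capacitance" system; the variable names and the n, r values are illustrative, not taken from the paper.

```python
import numpy as np

def smw_solve(d, U, b):
    """Solve (diag(d) + U U^T) x = b via Sherman-Morrison-Woodbury:
    x = D^{-1}b - D^{-1}U (I + U^T D^{-1} U)^{-1} U^T D^{-1} b.
    Only an r x r system is factored, where r = U.shape[1]."""
    Dinv_b = b / d
    Dinv_U = U / d[:, None]
    r = U.shape[1]
    S = np.eye(r) + U.T @ Dinv_U   # small r x r capacitance matrix
    return Dinv_b - Dinv_U @ np.linalg.solve(S, U.T @ Dinv_b)

rng = np.random.default_rng(1)
n, r = 2000, 6                     # n >> r, mimicking n >> m in the paper
d = rng.uniform(1.0, 2.0, n)
U = rng.standard_normal((n, r))
b = rng.standard_normal(n)
x = smw_solve(d, U, b)
```

The dominant costs are the n·r² product forming S and the r×r solve, in line with the per‑iteration savings described above when n ≫ m.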

The convergence analysis establishes global convergence of the IP method without requiring strict complementarity. As μ→0, the sequence of iterates {w^k} converges to a global minimizer w* of the original problem, and the objective values decrease monotonically. The authors also show that each Newton step yields a sufficient reduction in the barrier objective, guaranteeing that the algorithm terminates in a finite number of outer iterations.

Extensive numerical experiments compare the proposed IP method with the widely used multiplicative algorithm (MA) introduced by Silvey et al. [29]. Test cases include varying model dimensions (m=5–20), large numbers of candidate design points (n up to 10⁵), and different p‑values for the p‑mean criterion. Performance metrics are the final objective value Φ(M(w)), CPU time, and iteration count. The IP method consistently outperforms MA: it reaches comparable or better objective values 3–5 times faster, and for the p‑mean criterion it often attains strictly lower Φ values, indicating higher design efficiency. Moreover, the IP algorithm remains stable when MA fails to converge or gets trapped in suboptimal solutions.

The paper concludes with a discussion of limitations and future work. The current framework assumes smooth convex Φ; extending the approach to non‑convex criteria or integer‑valued designs would require additional techniques. Dynamic experimental design, where design weights evolve over time, and large‑scale distributed implementations are identified as promising directions.

In summary, the authors deliver a theoretically sound and practically efficient interior‑point algorithm for a wide class of optimal experimental design problems. By revealing the low‑rank nature of the Hessian and leveraging the Sherman‑Morrison‑Woodbury identity, they achieve substantial computational savings and superior solution quality compared with the traditional multiplicative algorithm, thereby advancing the state‑of‑the‑art in experimental design optimization.

