On strong homogeneity of two global optimization algorithms based on statistical models of multimodal objective functions
The implementation of global optimization algorithms, using the arithmetic of infinity, is considered. A relatively simple version of implementation is proposed for the algorithms that possess the introduced property of strong homogeneity. It is shown that the P-algorithm and the one-step Bayesian algorithm are strongly homogeneous.
💡 Research Summary
The paper addresses a practical difficulty in global optimization: the evaluation of objective‑function values can suffer from underflow, overflow, or extreme magnitude differences when performed with standard floating‑point arithmetic. To overcome this, the authors employ the “arithmetic of infinity” – a numerical framework that can represent finite, infinite, and infinitesimal numbers simultaneously – and investigate how global optimization algorithms behave under affine transformations of the objective function, i.e., when the original function f(x) is replaced by h(x)=a·f(x)+b with a and b possibly infinite or infinitesimal.
A new concept, strong homogeneity, is introduced. An algorithm is strongly homogeneous if, after evaluating the same initial set of points on both f and h, it generates exactly the same subsequent search points. This is a stricter requirement than the previously studied homogeneity (which only allowed additive shifts).
The authors focus on two algorithms that rely on statistical models of the objective: the P‑algorithm and the one‑step Bayesian algorithm. Both use a Gaussian stochastic process ξ(x) as a surrogate model. For the P‑algorithm, the next point is chosen where the probability of improving over the current aspiration level y_on is maximal. Because ξ(x) is Gaussian, this criterion reduces to maximizing the ratio (y_on − m_n(x)) / s_n(x), where m_n and s_n are the conditional mean and standard deviation given the already observed data.
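The selection step can be sketched as follows. This is an illustrative implementation, not the paper's: it uses a squared‑exponential kernel with assumed hyperparameters (`length`, `noise`, `eps`) for the Gaussian surrogate, whereas the paper's statistical model may differ.

```python
import numpy as np

def gp_posterior(x_obs, y_obs, x_grid, length=0.3, noise=1e-10):
    """Conditional mean m_n and std s_n of a Gaussian-process surrogate.

    Squared-exponential kernel with unit prior variance -- an illustrative
    choice, not necessarily the model used in the paper.
    """
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)

    K = k(x_obs, x_obs) + noise * np.eye(len(x_obs))
    Ks = k(x_obs, x_grid)
    sol = np.linalg.solve(K, Ks)          # K^{-1} Ks
    m = sol.T @ y_obs                     # conditional mean m_n(x)
    var = 1.0 - np.sum(Ks * sol, axis=0)  # conditional variance s_n(x)^2
    return m, np.sqrt(np.maximum(var, 1e-12))

def p_algorithm_next(x_obs, y_obs, x_grid, eps=0.1):
    """P-algorithm step: maximize (y_on - m_n(x)) / s_n(x), with the
    aspiration level y_on set slightly below the current best value."""
    y_on = y_obs.min() - eps
    m, s = gp_posterior(x_obs, y_obs, x_grid)
    return x_grid[np.argmax((y_on - m) / s)]
```

At already observed points s_n is nearly zero and the ratio is strongly negative, so the maximizer is driven toward unexplored regions with either low predicted mean or high uncertainty.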
When the data are transformed affinely, the authors show that the estimators of the Gaussian process parameters (mean μ and variance σ²) satisfy μ̂′ = a·μ̂ + b and σ̂′² = a²·σ̂² for the most common estimators (sample mean/variance and maximum likelihood). Consequently, the conditional mean and standard deviation transform in the same way (m′_n = a·m_n + b and s′_n = a·s_n for a > 0), and, since the aspiration level is transformed alongside the data, the ratio (y_on − m_n)/s_n remains invariant. Hence the P‑algorithm selects the same next point for f and h, proving strong homogeneity.
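The transformation rules and the resulting invariance can be checked numerically. In this sketch the sample mean and standard deviation stand in for the conditional moments m_n, s_n (an assumption made for brevity; the full argument applies to the conditional moments of the Gaussian process), and a and b are arbitrary finite values with a > 0:

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(size=20)        # observed values of f
a, b = 3.5, -2.0               # affine transformation h = a*f + b, a > 0
y2 = a * y + b                 # observed values of h

# The common estimators transform exactly as stated in the paper:
mu1, mu2 = y.mean(), y2.mean()
s1, s2 = y.std(ddof=1), y2.std(ddof=1)
assert np.isclose(mu2, a * mu1 + b)        # mu' = a*mu + b
assert np.isclose(s2**2, a**2 * s1**2)     # sigma'^2 = a^2 * sigma^2

# With the aspiration level transformed alongside the data, the
# P-criterion (y_on - m_n)/s_n is unchanged:
y_on = y.min() - 0.1
y_on2 = a * y_on + b
crit1 = (y_on - mu1) / s1
crit2 = (y_on2 - mu2) / s2
assert np.isclose(crit1, crit2)
```

The cancellation is immediate algebraically: (a·y_on + b − a·m_n − b) / (a·s_n) = (y_on − m_n)/s_n, which is why the same next point is selected for f and h.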
The one‑step Bayesian algorithm selects the point that maximizes the expected improvement, E{max(y_on − ξ(x), 0) | observed data}. An analogous argument shows that under the affine transformation this criterion is simply scaled by the positive factor a, so its maximizer is unchanged and the one‑step Bayesian algorithm is strongly homogeneous as well.
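The scaling argument can be illustrated with the standard closed form of expected improvement for a Gaussian posterior, EI = s·(u·Φ(u) + φ(u)) with u = (y_on − m)/s. The specific values of m, s, a, b below are arbitrary illustrative choices:

```python
import math

def expected_improvement(m, s, y_best):
    """E[max(y_best - xi, 0)] for xi ~ N(m, s^2), via the usual
    closed form s * (u * Phi(u) + phi(u)) with u = (y_best - m) / s."""
    u = (y_best - m) / s
    phi = math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)  # normal pdf
    Phi = 0.5 * (1.0 + math.erf(u / math.sqrt(2.0)))         # normal cdf
    return s * (u * Phi + phi)

# Under h = a*f + b (a > 0), the posterior moments and the reference
# level all transform affinely, and EI is scaled by a -- so the
# maximizing point is the same for f and h.
a, b = 3.0, 1.0
ei_f = expected_improvement(0.5, 1.0, 0.0)
ei_h = expected_improvement(a * 0.5 + b, a * 1.0, a * 0.0 + b)
assert math.isclose(ei_h, a * ei_f)
```

Because u is invariant under the transformation (both numerator and denominator pick up the factor a, and b cancels), EI′(x) = a·EI(x) pointwise, which leaves the argmax untouched.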