Relative Expected Improvement in Kriging Based Optimization
📝 Abstract
We propose an extension of the Expected Improvement criterion commonly used in Kriging-based optimization. We extend it to more complex Kriging models, e.g. models using derivatives. The target field of application is CFD problems, where objective functions are extremely expensive to evaluate, but the theory can also be used in other fields.
📄 Content
arXiv:0908.3321v1 [stat.ML] 23 Aug 2009

Relative Expected Improvement in Kriging Based Optimization

Lukasz Laniewski-Wolk
Institute of Aeronautics and Applied Mechanics
Warsaw University of Technology
Nowowiejska 24, 00-665 Warsaw, Poland
e-mail: llaniewski@meil.pw.edu.pl
Web page: http://c-cfd.meil.pw.edu.pl/

February 4, 2020

1 INTRODUCTION

Global optimization is a common task in advanced engineering. The objective function can be very expensive to calculate or measure. In particular, this is the case in Computational Fluid Dynamics (CFD), where simulations are extremely expensive and time-consuming. At present, CFD codes can also generate the exact derivatives of the objective function, so we can use them in our models. The long computation needed to evaluate the objective function and the (as a rule) high dimension of the design space make the optimization process very time-consuming. A widely adopted strategy for such objective functions is response function methodology. It is based on constructing an approximation of the objective function from some measurements, and subsequently finding points for new measurements that enhance our knowledge about the location of the optimum.

One of the commonly used response function models is the Kriging model [2, 4, 5, 3]. This statistical estimation model considers the objective function to be a realization of a random field, from which we can construct a least-squares estimator. If we assume the field to be Gaussian, the least-squares estimator is the Bayesian estimator.
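The conditioning described above can be sketched numerically. Below is a minimal illustration of a Kriging (Gaussian-field) estimator in one dimension; the squared-exponential covariance, the function name `kriging_predict`, and all parameter names are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def kriging_predict(X, f, Xs, mu=0.0, ell=1.0, sigma2=1.0):
    """Posterior mean and variance of a Gaussian field at points Xs,
    conditioned on measurements f at points X.
    Assumes a squared-exponential covariance K(x, y) = sigma2 * exp(-(x-y)^2 / (2 ell^2))."""
    def K(A, B):
        d = A[:, None] - B[None, :]
        return sigma2 * np.exp(-0.5 * (d / ell) ** 2)

    # Covariance of the measurements, with a tiny jitter for numerical stability
    Kxx = K(X, X) + 1e-10 * np.eye(len(X))
    Ksx = K(Xs, X)

    # Posterior mean: mu + K(Xs, X) K(X, X)^{-1} (f - mu)
    alpha = np.linalg.solve(Kxx, f - mu)
    mean = mu + Ksx @ alpha

    # Posterior variance: K(Xs, Xs) - diag(K(Xs, X) K(X, X)^{-1} K(X, Xs))
    var = sigma2 - np.einsum("ij,ij->i", Ksx, np.linalg.solve(Kxx, Ksx.T).T)
    return mean, var

# The estimator interpolates: at the measurement points the posterior
# mean reproduces the data and the posterior variance collapses to ~0.
X = np.array([0.0, 1.0, 2.0])
f = np.array([1.0, 0.0, 1.5])
m, v = kriging_predict(X, f, X)
```

Between measurement points the variance grows, which is exactly the uncertainty information the EI criterion exploits.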
The conditional distribution of the field with respect to the measurements (a posteriori) is also Gaussian, with both mean and covariance known.

One of the methods to find a point for a new measurement is the Expected Improvement criterion [3]. It uses the Expected Improvement function:

EI(x) = E(min(F̂_min, F(x)))

where F is the a posteriori field and F̂_min is the minimum of the estimator. The new measurement point is chosen at the minimum of the EI function.

Many modifications and enhancements have been considered for the Kriging model. Applications of linear operators, e.g. derivatives, integrals and convolutions, are easy to incorporate in the model [4, 5]. Each of these extensions of the classic Kriging model is based on measuring something other than what is returned as the response. For example, we measure the gradient and the value of the function, but the response is only the function. Expected Improvement states that we should measure the function at the place where the minimum of the response can be most improved. But in the classic model, the measured function and the response function are the same. The purpose of this paper is to investigate whether the concept of EI can be extended to enhanced Kriging models.

2 RELATIVE EXPECTED IMPROVEMENT

2.1 Efficient Global Optimization

Jones et al. [3] propose the Efficient Global Optimization (EGO) algorithm, based on the Kriging model and Expected Improvement. It consists of the following steps:
- Select a learning group x1, . . . , xn. Measure the objective function f at these points: fi = f(xi).
- Construct a Kriging approximation F̂ based on the measurements f1, . . . , fn.
- Find the minimum of the EI(x) function for the approximation.
- Increase n and set xn to the minimum of EI.
- Measure fn = f(xn) and go back to step 2.

The EI function can have many local minima (it is highly multi-modal) and is potentially hard to minimize. The original paper proposed a Branch and Bound Algorithm (BBA) to efficiently optimize the EI function. To use BBA, the authors had to establish upper and lower bounds on the minimum of the EI function over a region. This was fairly easy, and it was the main source of the effectiveness of EGO. While proposing an extension of the EI concept, we also have to propose suitable methods for its optimization.

2.2 Gaussian Kriging

Kriging is a statistical method of approximating a multi-dimensional function based on its values at a set of points. The Kriging estimator (approximation) can be interpreted as a least-squares estimator, but also as a Bayes estimator. We will use the latter interpretation, as in the original EI definition. Let us take an objective function f : Ω → R. For some probability space (Γ, F, P), we consider a Gaussian random field F on Ω with known mean µ and covariance K(x, y). Now we take measurements of the objective at points x1, . . . , xn as fi = f(xi). The Bayes estimator of f is:

F̂(x) = E(F(x) | ∀i F(xi) = fi)

where E(A | B) is the conditional expected value of A with respect to B. This estimator at y will be called the response at y, and the (xi, fi) pairs will be called measurements at xi. Let us take an event M = {∀i F(xi) = fi} ⊂ Γ and a a po
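Because the a posteriori field is Gaussian, the EI function EI(x) = E(min(F̂_min, F(x))) has a closed form: for F(x) ~ N(m, s²) and constant c = F̂_min, E[min(c, F)] = c(1 − Φ(z)) + mΦ(z) − sφ(z) with z = (c − m)/s, where Φ and φ are the standard normal CDF and density. A minimal sketch of this evaluation (the name `expected_min` is illustrative, not from the paper):

```python
import math

def expected_min(c, m, s):
    """E[min(c, F)] for F ~ N(m, s^2): closed-form evaluation of the
    paper's EI criterion EI(x) = E(min(Fhat_min, F(x)))."""
    if s <= 0.0:
        # Degenerate case: the field is deterministic at this point.
        return min(c, m)
    z = (c - m) / s
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))        # standard normal CDF
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # standard normal pdf
    # c * P(F >= c) + E[F ; F < c]
    return c * (1.0 - Phi) + m * Phi - s * phi
```

Fed with the posterior mean and standard deviation of the Kriging model at a candidate point x, this gives EI(x); the next measurement point is then taken at its minimizer over the design space.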