mfEGRA: Multifidelity Efficient Global Reliability Analysis through Active Learning for Failure Boundary Location


Authors: Anirban Chaudhuri, Alexandre N. Marques, Karen E. Willcox

Anirban Chaudhuri∗, Alexandre N. Marques†
Massachusetts Institute of Technology, Cambridge, MA, 02139, USA

Karen E. Willcox‡
University of Texas at Austin, Austin, TX, 78712, USA

Abstract

This paper develops mfEGRA, a multifidelity active learning method using data-driven adaptively refined surrogates for failure boundary location in reliability analysis. This work addresses the prohibitive cost of reliability analysis using Monte Carlo sampling for expensive-to-evaluate high-fidelity models by using cheaper-to-evaluate approximations of the high-fidelity model. The method builds on the Efficient Global Reliability Analysis (EGRA) method, a surrogate-based method that uses adaptive sampling to refine a Gaussian process surrogate for failure boundary location using a single-fidelity model. Our method introduces a two-stage adaptive sampling criterion that uses a multifidelity Gaussian process surrogate to leverage multiple information sources with different fidelities. The method combines the expected feasibility criterion from EGRA with a one-step lookahead information gain to refine the surrogate around the failure boundary. The computational savings from mfEGRA depend on the discrepancy between the different models and on the relative cost of evaluating the different models as compared to the high-fidelity model. We show that accurate estimation of reliability using mfEGRA leads to computational savings of ∼46% for an analytic multimodal test problem and 24% for a three-dimensional acoustic horn problem, when compared to single-fidelity EGRA.
We also show the effect of using a priori drawn Monte Carlo samples in the implementation for the acoustic horn problem, where mfEGRA leads to computational savings of 45% for the three-dimensional case and 48% for a rarer-event four-dimensional case as compared to single-fidelity EGRA.

Keywords: multifidelity, adaptive sampling, probability of failure, contour location, classification, Gaussian process, kriging, multiple information sources, EGRA, surrogate

1 Introduction

The presence of uncertainties in the manufacturing and operation of systems makes reliability analysis critical for system safety. The reliability analysis of a system requires estimating the probability of failure, which can be computationally prohibitive when the high-fidelity model is expensive to evaluate. In this work, we develop a method for efficient reliability estimation by leveraging multiple sources of information with different fidelities to build a multifidelity approximation for the limit state function. Reliability analysis for strongly nonlinear systems typically requires Monte Carlo sampling, which can incur substantial cost because of the numerous evaluations of expensive-to-evaluate high-fidelity models, as seen in Figure 1(a). Several methods improve the convergence rate of Monte Carlo methods to decrease computational cost through Monte Carlo variance reduction, such as importance sampling [1, 2], the cross-entropy method [3], and subset simulation [4, 5]. However, such methods are outside the scope of this paper and will not be discussed further. Another class of methods reduces the computational cost by using approximations for the failure boundary or for the entire limit state function. The popular methods that fall in the first category are the first- and second-order reliability methods (FORM and SORM), which approximate the failure boundary with linear and quadratic approximations around the most probable failure point [6, 7]. FORM and SORM can be efficient for mildly nonlinear problems but cannot handle systems with multiple failure regions. The methods that fall in the second category reduce the computational cost by replacing the high-fidelity model evaluations in the Monte Carlo simulation with cheaper evaluations from adaptive surrogates for the limit state function, as seen in Figure 1(b).

∗ Postdoctoral Associate, Department of Aeronautics and Astronautics, anirbanc@mit.edu.
† Postdoctoral Associate, Department of Aeronautics and Astronautics, noll@mit.edu.
‡ Director, Oden Institute for Computational Engineering and Sciences, kwillcox@oden.utexas.edu

Figure 1: Reliability analysis with (a) high-fidelity model, (b) single-fidelity adaptive surrogate, and (c) multifidelity adaptive surrogate.

Estimating reliability requires accurately classifying samples as failed or not, which needs surrogates that accurately predict the limit state function around the failure boundary. Thus, the surrogates need to be refined only in the region of interest (in this case, around the failure boundary) and do not require global accuracy in the prediction of the limit state function. The development of sequential active learning methods for refining the surrogate around the failure boundary has been addressed in the literature using only a single high-fidelity information source.
Such methods fall in the same category as adaptively refining surrogates for identifying stability boundaries, contour location, classification, sequential design of experiments (DOE) for a target region, etc. Typically, these methods are divided into those using Gaussian process (GP) surrogates and those using support vector machines (SVMs). Adaptive SVM methods have been implemented for reliability analysis and contour location [8, 9, 10]. In this work, we focus on GP-based methods (sometimes referred to as kriging-based) that use the GP prediction mean and prediction variance to develop greedy and lookahead adaptive sampling methods. Efficient Global Reliability Analysis (EGRA) adaptively refines the GP surrogate around the failure boundary by sequentially adding points that have maximum expected feasibility [11]. A weighted integrated mean square criterion for refining the kriging surrogate was developed by Picheny et al. [12]. Echard et al. [13] proposed an adaptive kriging method that refines the surrogate in the restricted set of samples defined by a Monte Carlo simulation. Dubourg et al. [14] proposed a population-based adaptive sampling technique for refining the kriging surrogate around the failure boundary. One-step lookahead strategies for GP surrogate refinement for estimating the probability of failure were proposed by Bect et al. [15] and Chevalier et al. [16]. A review of some surrogate-based methods for reliability analysis can be found in Ref. [17]. However, all the methods mentioned above use a single source of information, namely the high-fidelity model, as illustrated in Figure 1(b). This work presents a novel multifidelity active learning method that adaptively refines the surrogate around the limit state function failure boundary using multiple sources of information, thus further reducing the active learning computational effort, as seen in Figure 1(c).
For several applications, in addition to an expensive high-fidelity model, there are potentially cheaper lower-fidelity models, such as simplified-physics models, coarse-grid models, data-fit models, and reduced-order models, that are readily available or can be built. This necessitates the development of multifidelity methods that can take advantage of these multiple information sources [18]. Various multifidelity methods have been developed in the context of GP-based Bayesian optimization [19, 20, 21]. While Bayesian optimization also uses GP models and adaptive sampling [22, 23], we note that Bayesian optimization targets a different problem to GP-based reliability analysis. In particular, the reliability analysis problem targets the entire limit state function failure contour in the random variable space, whereas Bayesian optimization targets finding the optimal design. Thus, the sampling criteria used for failure boundary location as compared to optimization are different, and the corresponding needs and opportunities for multifidelity methods are different. In the context of reliability analysis using active learning surrogates, there are few multifidelity methods available. Dribusch et al. [24] proposed a hierarchical bi-fidelity adaptive SVM method for locating the failure boundary. The recently developed CLoVER [25] method is a multifidelity active learning algorithm that uses a one-step lookahead entropy-reduction-based adaptive sampling strategy for refining GP surrogates around the failure boundary. In this work, we develop a multifidelity extension of the EGRA method [11], as EGRA has been rigorously tested on a wide range of reliability analysis problems. We propose mfEGRA (multifidelity EGRA), which leverages multiple sources of information with different fidelities and costs to accelerate active learning of surrogates for failure boundary identification.
For single-fidelity methods, the adaptive sampling criterion chooses where to sample next to refine the surrogate around the failure boundary. The challenge in developing a multifidelity adaptive sampling criterion is that we now have to answer two questions: (i) where to sample next, and (ii) which information source to use for evaluating the next sample. This work proposes a new adaptive sampling criterion that allows the use of multiple fidelity models. In our mfEGRA method, we combine the expected feasibility function used in EGRA with a proposed weighted lookahead information gain to define the adaptive sampling criterion for the multifidelity case. We use the Kullback-Leibler (KL) divergence to quantify the information gain and derive a closed-form expression for the multifidelity GP case. The key advantage of the mfEGRA method is the reduction in computational cost compared to single-fidelity active learning methods, because it can utilize additional information from multiple cheaper low-fidelity models along with the high-fidelity model information. We demonstrate the computational efficiency of the proposed mfEGRA method using a multimodal analytic test problem and an acoustic horn problem with disjoint failure regions.

The rest of the paper is structured as follows. Section 2 provides the problem setup for reliability analysis using multiple information sources. Section 3 describes the details of the proposed mfEGRA method along with the complete algorithm. The effectiveness of mfEGRA is shown using an analytic multimodal test problem and an acoustic horn problem in Section 4. The conclusions are presented in Section 5.

2 Problem Setup

The inputs to the system are the N_z random variables Z ∈ Ω ⊆ R^{N_z} with probability density function π, where Ω denotes the random sample space. A realization of the random variables Z is denoted by z.
The probability of failure of the system is p_F = P(g(Z) > 0), where g : Ω → R is the limit state function. In this work, without loss of generality, failure of the system is defined as g(z) > 0. The failure boundary is defined as the zero contour of the limit state function, g(z) = 0; any other failure boundary, g(z) = c, can be reformulated as a zero contour (i.e., g(z) − c = 0).

One way to estimate the probability of failure for nonlinear systems is Monte Carlo simulation. The Monte Carlo estimate of the probability of failure is

\hat{p}_F = \frac{1}{m} \sum_{i=1}^{m} \mathbb{I}_G(z_i), \qquad (1)

where z_i, i = 1, ..., m are m samples from the probability density π, G = {z | z ∈ Ω, g(z) > 0} is the failure set, and \mathbb{I}_G : Ω → {0, 1} is the indicator function defined as

\mathbb{I}_G(z) = \begin{cases} 1, & z \in G \\ 0, & \text{else.} \end{cases} \qquad (2)

The probability of failure estimation requires many evaluations of the expensive-to-evaluate high-fidelity model for the limit state function g, which can make reliability analysis computationally prohibitive. The computational cost can be substantially reduced by replacing the high-fidelity model evaluations with cheap-to-evaluate surrogate model evaluations. However, to make accurate estimates of \hat{p}_F using a surrogate model, the zero contour of the surrogate model needs to approximate the failure boundary well. Adaptively refining the surrogate around the failure boundary, while trading off global accuracy, is an efficient way of addressing this.

The goal of this work is to make the adaptive refinement of surrogate models around the failure boundary more efficient by using multiple models with different fidelities and costs instead of only the high-fidelity model. We develop a multifidelity active learning method that utilizes multiple information sources to efficiently refine the surrogate to accurately locate the failure boundary. Let g_l : Ω → R, l ∈ {0, ..., k} be a collection of k + 1 models for g with associated cost c_l(z) at location z, where the subscript l denotes the information source. We define the model g_0 to be the high-fidelity model for the limit state function. The k low-fidelity models of g are denoted by l = 1, ..., k. We use a multifidelity surrogate to simultaneously approximate all information sources while encoding the correlations between them. The adaptively refined multifidelity surrogate model predictions are used for the probability of failure estimation. The Monte Carlo estimate of the probability of failure obtained with the refined multifidelity surrogate is denoted here by \hat{p}^{MF}_F. Next we describe the multifidelity surrogate model used in this work and the multifidelity active learning method used to sequentially refine the surrogate around the failure boundary.

3 mfEGRA: Multifidelity EGRA with Information Gain

In this section, we introduce multifidelity EGRA (mfEGRA), which leverages the k + 1 information sources to efficiently build an adaptively refined multifidelity surrogate to locate the failure boundary.

3.1 mfEGRA method overview

The proposed mfEGRA method is a multifidelity extension of the EGRA method [11]. Section 3.2 briefly describes the multifidelity GP surrogate used in this work to combine the different information sources. The multifidelity GP surrogate is built using an initial DOE, and then the mfEGRA method refines the surrogate using a two-stage adaptive sampling criterion that:

1. selects the next location to be sampled using an expected feasibility function, as described in Section 3.3;
2. selects the information source to be used to evaluate the next sample using a weighted lookahead information gain criterion, as described in Section 3.4.
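Once refined, the surrogate stands in for g in the Monte Carlo estimator of Equations (1)-(2). A minimal standard-library sketch of that estimator follows; the limit_state and sample_z arguments are hypothetical placeholders for a surrogate predictor and a sampler from π, not the authors' code:

```python
import random

def mc_failure_probability(limit_state, sample_z, m=100_000, seed=0):
    """Eq. (1)-(2): p_hat_F = (1/m) * sum_i I[g(z_i) > 0]."""
    rng = random.Random(seed)
    failures = sum(1 for _ in range(m) if limit_state(sample_z(rng)) > 0.0)
    return failures / m

# Toy check: g(z) = z - 1 with z ~ N(0, 1), so p_F = P(Z > 1) ≈ 0.159.
p_hat = mc_failure_probability(lambda z: z - 1.0,
                               lambda rng: rng.gauss(0.0, 1.0),
                               m=200_000)
```

The same routine applies unchanged whether limit_state wraps the high-fidelity model g_0 or the refined multifidelity surrogate prediction.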
The adaptive sampling criterion developed in this work enables us to use the surrogate prediction mean and the surrogate prediction variance to decide where and which information source to sample next. Note that both of these quantities are available from the multifidelity GP surrogate used in this work. Section 3.5 provides the implementation details and the algorithm for the proposed mfEGRA method. Figure 2 shows a flow chart outlining the mfEGRA method.

3.2 Multifidelity Gaussian process

We use the multifidelity GP surrogate introduced by Poloczek et al. [19], which built on earlier work by Lam et al. [20], to combine information from the k + 1 information sources into a single GP surrogate, \hat{g}(l, z), that can simultaneously approximate all the information sources. The multifidelity GP surrogate can provide predictions for any information source l and random variable realization z.

The multifidelity GP is built by making two modeling choices: (1) a GP approximation for the high-fidelity model g_0, given by \hat{g}(0, z) ∼ GP(µ_0, Σ_0), and (2) independent GP approximations for the model discrepancy between the high-fidelity and the lower-fidelity models, given by δ_l ∼ GP(µ_l, Σ_l) for l = 1, ..., k. Here µ_l denotes the mean function and Σ_l the covariance kernel for l = 0, ..., k. The surrogate for model l is then constructed using the definition \hat{g}(l, z) = \hat{g}(0, z) + δ_l(z). These modeling choices lead to the surrogate model \hat{g} ∼ GP(µ_pr, Σ_pr) with prior mean function µ_pr and prior covariance kernel Σ_pr. The priors for l = 0 are

\mu_{\text{pr}}(0, z) = \mathbb{E}[\hat{g}(0, z)] = \mu_0(z),
\Sigma_{\text{pr}}((0, z), (l', z')) = \mathrm{Cov}(\hat{g}(0, z), \hat{g}(0, z')) = \Sigma_0(z, z'), \qquad (3)
Figure 2: Flow chart showing the mfEGRA method.

and the priors for l = 1, ..., k are

\mu_{\text{pr}}(l, z) = \mathbb{E}[\hat{g}(l, z)] = \mathbb{E}[\hat{g}(0, z)] + \mathbb{E}[\delta_l(z)] = \mu_0(z) + \mu_l(z),
\Sigma_{\text{pr}}((l, z), (l', z')) = \mathrm{Cov}(\hat{g}(0, z) + \delta_l(z), \hat{g}(0, z') + \delta_{l'}(z')) = \Sigma_0(z, z') + \mathbf{1}_{l,l'} \Sigma_l(z, z'), \qquad (4)

where l' ∈ {0, ..., k} and \mathbf{1}_{l,l'} denotes the Kronecker delta. Once the prior mean function and the prior covariance kernels are defined using Equations (3) and (4), we can compute the posterior using standard rules of GP regression [26]. A more detailed description of the assumptions and the implementation of the multifidelity GP surrogate can be found in Ref. [19].

At any given z, the surrogate model posterior distribution of \hat{g}(l, z) is defined by the normal distribution with posterior mean µ(l, z) and posterior variance σ²(l, z) = Σ((l, z), (l, z)). Consider that n samples {[l_i, z_i]}_{i=1}^{n} have been evaluated and used to fit the present multifidelity GP surrogate. Note that [l, z] is the augmented vector of inputs to the multifidelity GP. The surrogate is then refined around the failure boundary by sequentially adding samples. The next sample z_{n+1} and the next information source l_{n+1} used to refine the surrogate are found using the two-stage adaptive sampling method of mfEGRA, as described below.

3.3 Location selection: Maximize expected feasibility function

The first stage of mfEGRA involves selecting the next location z_{n+1} to be sampled.
The expected feasibility function (EFF), which was used as the adaptive sampling criterion in EGRA [11], is used in this work to select the location of the next sample z_{n+1}. The EFF is the expectation of the sample lying within a band around the failure boundary (here, ±ε(z) around the zero contour of the limit state function). The prediction mean µ(0, z) and the prediction standard deviation σ(0, z) at any z are provided by the multifidelity GP for the high-fidelity surrogate model. The multifidelity GP surrogate prediction at z is the normal distribution Y_z ∼ N(µ(0, z), σ²(0, z)). The feasibility function at any z is defined to be positive within the ε-band around the failure boundary and zero otherwise, as given by

F(z) = \epsilon(z) - \min(|y|, \epsilon(z)), \qquad (5)

where y is a realization of Y_z. The EFF is defined as the expectation of being within the ε-band around the failure boundary:

\mathbb{E}_{Y_z}[F(z)] = \int_{-\epsilon(z)}^{\epsilon(z)} (\epsilon(z) - |y|) f_{Y_z}(y) \, \mathrm{d}y, \qquad (6)

where f_{Y_z} is the density of Y_z. We will use \mathbb{E}[F(z)] to denote \mathbb{E}_{Y_z}[F(z)] in the rest of the paper. The integral in Equation (6) can be evaluated analytically to obtain [11]

\mathbb{E}[F(z)] = \mu(0,z) \left[ 2\Phi\left(\frac{-\mu(0,z)}{\sigma(0,z)}\right) - \Phi\left(\frac{-\epsilon(z)-\mu(0,z)}{\sigma(0,z)}\right) - \Phi\left(\frac{\epsilon(z)-\mu(0,z)}{\sigma(0,z)}\right) \right]
- \sigma(0,z) \left[ 2\phi\left(\frac{-\mu(0,z)}{\sigma(0,z)}\right) - \phi\left(\frac{-\epsilon(z)-\mu(0,z)}{\sigma(0,z)}\right) - \phi\left(\frac{\epsilon(z)-\mu(0,z)}{\sigma(0,z)}\right) \right]
+ \epsilon(z) \left[ \Phi\left(\frac{\epsilon(z)-\mu(0,z)}{\sigma(0,z)}\right) - \Phi\left(\frac{-\epsilon(z)-\mu(0,z)}{\sigma(0,z)}\right) \right], \qquad (7)

where Φ is the cumulative distribution function and φ the probability density function of the standard normal distribution. Similar to EGRA [11], we define ε(z) = 2σ(0, z) to balance exploration and exploitation. As noted before, we describe the method considering the zero contour as the failure boundary for convenience, but the proposed method can be used for locating the failure boundary at any contour level.
The location of the next sample is selected by maximizing the EFF:

z_{n+1} = \arg\max_{z \in \Omega} \mathbb{E}[F(z)]. \qquad (8)

3.4 Information source selection: Maximize weighted lookahead information gain

Given the location of the next sample z_{n+1} obtained using Equation (8), the second stage of mfEGRA selects the information source l_{n+1} to be used for simulating the next sample by maximizing the information gain. Information-gain-based approaches have been used previously for global optimization [27, 28, 21, 29], optimal experimental design [30, 31], and uncertainty propagation in coupled systems [32]. Ref. [21] used an information-gain-based approach for selecting the location and the information source for improving the global accuracy of the multifidelity GP approximation of the constraints in global optimization, using a double-loop Monte Carlo estimate of the information gain. Our work differs from previous efforts in that we develop a weighted information-gain-based sampling strategy for failure boundary identification utilizing multiple fidelity models. In the context of an information gain criterion for the multifidelity GP, the two specific contributions of our work are: (1) deriving a closed-form expression for the KL divergence for the multifidelity GP, which does not require double-loop Monte Carlo sampling, thus improving the robustness and decreasing the cost of estimating the acquisition function, and (2) using weighting strategies to address the goal of failure boundary location for reliability analysis using multiple fidelity models.

The next information source is selected using a weighted one-step lookahead information gain criterion. This adaptive sampling strategy selects the information source that maximizes the information gain in the GP surrogate prediction, defined by the Gaussian distribution at any z.
We quantify the information gain by measuring the KL divergence between the present surrogate-predicted GP and a hypothetical future surrogate-predicted GP when a particular information source is used to simulate the sample at z_{n+1}. For convenience, we represent the present GP surrogate built using the n available training samples by the subscript P, as given by \hat{g}_P(l, z) = \hat{g}(l, z | {l_i, z_i}_{i=1}^{n}). The present surrogate-predicted Gaussian distribution at any z is

G_P(z) \sim \mathcal{N}(\mu_P(0, z), \sigma^2_P(0, z)),

where µ_P(0, z) is the posterior mean and σ²_P(0, z) is the posterior prediction variance of the present GP surrogate for the high-fidelity model, built using the available training data up to iteration n.

A hypothetical future GP surrogate can be understood as a surrogate built using the current GP as a generative model to create hypothetical future simulated data. The hypothetical future simulated data y_F ∼ N(µ_P(l_F, z_{n+1}), σ²_P(l_F, z_{n+1})) is obtained from the present GP surrogate prediction at the location z_{n+1} using a possible future information source l_F ∈ {0, ..., k}. We represent a hypothetical future GP surrogate by the subscript F. The hypothetical future surrogate-predicted Gaussian distribution at any z is then

G_F(z \mid z_{n+1}, l_F, y_F) \sim \mathcal{N}(\mu_F(0, z \mid z_{n+1}, l_F, y_F), \sigma^2_F(0, z \mid z_{n+1}, l_F, y_F)).

The posterior mean of the hypothetical future GP is affine with respect to y_F and is thus normally distributed:

\mu_F(0, z \mid z_{n+1}, l_F, y_F) \sim \mathcal{N}(\mu_P(0, z), \bar{\sigma}^2(z \mid z_{n+1}, l_F)),

where \bar{\sigma}^2(z \mid z_{n+1}, l_F) = (\Sigma_P((0, z), (l_F, z_{n+1})))^2 / \Sigma_P((l_F, z_{n+1}), (l_F, z_{n+1})) [19]. The posterior variance of the hypothetical future GP surrogate, σ²_F(0, z | z_{n+1}, l_F, y_F), depends only on the location z_{n+1} and the source l_F, and can be written as σ²_F(0, z | z_{n+1}, l_F).
Note that we do not need any new evaluations of the information sources to construct the future GP. The total lookahead information gain is obtained by integrating over all possible values of y_F, as described below. Since both G_P and G_F are Gaussian distributions, we can write the KL divergence between them explicitly. The KL divergence between G_P and G_F for any z is

D_{\mathrm{KL}}(G_P(z) \,\|\, G_F(z \mid z_{n+1}, l_F, y_F)) = \log\left(\frac{\sigma_F(0, z \mid z_{n+1}, l_F)}{\sigma_P(0, z)}\right) + \frac{\sigma^2_P(0, z) + (\mu_P(0, z) - \mu_F(0, z \mid z_{n+1}, l_F, y_F))^2}{2\sigma^2_F(0, z \mid z_{n+1}, l_F)} - \frac{1}{2}. \qquad (9)

The total KL divergence is then calculated by integrating D_{\mathrm{KL}}(G_P(z) \| G_F(z \mid z_{n+1}, l_F, y_F)) over the entire random variable space Ω:

\int_\Omega D_{\mathrm{KL}}(G_P(z) \,\|\, G_F(z \mid z_{n+1}, l_F, y_F)) \, \mathrm{d}z = \int_\Omega \left[ \log\left(\frac{\sigma_F(0, z \mid z_{n+1}, l_F)}{\sigma_P(0, z)}\right) + \frac{\sigma^2_P(0, z) + (\mu_P(0, z) - \mu_F(0, z \mid z_{n+1}, l_F, y_F))^2}{2\sigma^2_F(0, z \mid z_{n+1}, l_F)} - \frac{1}{2} \right] \mathrm{d}z. \qquad (10)

The total lookahead information gain is then obtained by taking the expectation of Equation (10) over all possible values of y_F:

D_{\mathrm{IG}}(z_{n+1}, l_F) = \mathbb{E}_{y_F}\left[ \int_\Omega D_{\mathrm{KL}}(G_P(z) \,\|\, G_F(z \mid z_{n+1}, l_F, y_F)) \, \mathrm{d}z \right]
= \int_\Omega \left[ \log\left(\frac{\sigma_F(0, z \mid z_{n+1}, l_F)}{\sigma_P(0, z)}\right) + \frac{\sigma^2_P(0, z) + \mathbb{E}_{y_F}\!\left[ (\mu_P(0, z) - \mu_F(0, z \mid z_{n+1}, l_F, y_F))^2 \right]}{2\sigma^2_F(0, z \mid z_{n+1}, l_F)} - \frac{1}{2} \right] \mathrm{d}z
= \int_\Omega \left[ \log\left(\frac{\sigma_F(0, z \mid z_{n+1}, l_F)}{\sigma_P(0, z)}\right) + \frac{\sigma^2_P(0, z) + \bar{\sigma}^2(z \mid z_{n+1}, l_F)}{2\sigma^2_F(0, z \mid z_{n+1}, l_F)} - \frac{1}{2} \right] \mathrm{d}z
= \int_\Omega D(z \mid z_{n+1}, l_F) \, \mathrm{d}z, \qquad (11)

where

D(z \mid z_{n+1}, l_F) = \log\left(\frac{\sigma_F(0, z \mid z_{n+1}, l_F)}{\sigma_P(0, z)}\right) + \frac{\sigma^2_P(0, z) + \bar{\sigma}^2(z \mid z_{n+1}, l_F)}{2\sigma^2_F(0, z \mid z_{n+1}, l_F)} - \frac{1}{2}.
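Since Equation (9) is the KL divergence between two univariate Gaussians, it reduces to a few arithmetic operations. A stdlib-only sketch (an illustration of the formula, not the authors' code):

```python
from math import log

def kl_gaussians(mu_p, sigma_p, mu_f, sigma_f):
    """Eq. (9): KL( N(mu_p, sigma_p^2) || N(mu_f, sigma_f^2) )."""
    return (log(sigma_f / sigma_p)
            + (sigma_p**2 + (mu_p - mu_f)**2) / (2.0 * sigma_f**2)
            - 0.5)
```

It vanishes for identical distributions and is otherwise nonnegative; this closed form is what lets Equation (11) collapse the inner expectation over y_F without double-loop Monte Carlo sampling.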
In practice, we choose a discrete set \mathcal{Z} ⊂ Ω via Monte Carlo sampling to numerically integrate Equation (11):

D_{\mathrm{IG}}(z_{n+1}, l_F) = \int_\Omega D(z \mid z_{n+1}, l_F) \, \mathrm{d}z \propto \sum_{z \in \mathcal{Z}} D(z \mid z_{n+1}, l_F). \qquad (12)

The total information gain for the multifidelity GP can be estimated using single-loop Monte Carlo sampling instead of double-loop Monte Carlo sampling because of the closed-form expression derived in Equation (11). This improves the robustness and decreases the cost of estimating the acquisition function.

The total lookahead information gain evaluated using Equation (12) gives a metric of global information gain over the entire random variable space. However, we are interested in gaining more information around the failure boundary. To give more importance to gaining information around the failure boundary, we use a weighted version of the lookahead information gain normalized by the cost of the information source. In this work, we explore three different weighting strategies: (i) no weights, w(z) = 1; (ii) weights defined by the EFF, w(z) = \mathbb{E}[F(z)]; and (iii) weights defined by the probability of feasibility (PF), w(z) = P[F(z)]. The PF of the sample lying within the ±ε(z) bounds around the zero contour is

P[F(z)] = \Phi\left(\frac{\epsilon(z) - \mu(0,z)}{\sigma(0,z)}\right) - \Phi\left(\frac{-\epsilon(z) - \mu(0,z)}{\sigma(0,z)}\right). \qquad (13)

Weighting the information gain by either the expected feasibility or the probability of feasibility gives more importance to gaining information around the target region, in this case the failure boundary. The next information source l_{n+1} is selected by maximizing the weighted lookahead information gain normalized by the cost of the information source:

l_{n+1} = \arg\max_{l \in \{0, \ldots, k\}} \sum_{z \in \mathcal{Z}} \frac{1}{c_l(z)} w(z) D(z \mid z_{n+1}, l_F = l). \qquad (14)

Note that the optimization problem in Equation (14) is a one-dimensional discrete-variable problem.
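The discrete selection in Equation (14) amounts to one weighted, cost-normalized sum per source over the fixed sample set \mathcal{Z}. The sketch below assumes a constant cost c_l per source and uses hypothetical toy numbers (not values from the paper):

```python
from math import erf, sqrt

def prob_feasibility(mu, sigma, eps):
    """Eq. (13): P(|Y| <= eps) for Y ~ N(mu, sigma^2), used as weight w(z)."""
    Phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
    return Phi((eps - mu) / sigma) - Phi((-eps - mu) / sigma)

def select_source(D, w, cost):
    """Eq. (14): argmax_l (1/c_l) * sum_{z in Z} w(z) * D(z | z_{n+1}, l).

    D[l][j] is the lookahead information gain of source l at the j-th
    point of Z; w[j] is the weight at that point; cost[l] is c_l.
    """
    scores = [sum(wj * dj for wj, dj in zip(w, D[l])) / cost[l]
              for l in range(len(cost))]
    return max(range(len(cost)), key=scores.__getitem__)

# Toy example: the cheap source (l = 1) wins despite a lower raw gain.
w = [prob_feasibility(0.0, 1.0, 2.0), prob_feasibility(1.5, 1.0, 2.0)]
best = select_source([[1.0, 1.0], [0.6, 0.6]], w, cost=[1.0, 0.1])
```

The cost normalization is what steers the criterion toward cheap sources unless the high-fidelity model offers disproportionately more information.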
In this case, we only need k + 1 (the number of available models) evaluations of the objective function to solve the optimization problem exactly, and typically k is a small number.

3.5 Algorithm and implementation details

An algorithm describing the mfEGRA method is given in Algorithm 1. In this work, we evaluate all the models at the initial DOE. We generate the initial samples z using Latin hypercube sampling and run all the models at each of those samples to get the initial training set {z_i, l_i}_{i=1}^{n}. The initial number of samples, n, can be decided based on the user's preference (in this work, we use cross-validation error). The EFF maximization problem given by Equation (8) is solved using the patternsearch function followed by multiple starts of a local optimizer through the GlobalSearch function in MATLAB. In practice, we choose a fixed set of realizations \mathcal{Z} ⊂ Ω at which the information gain is evaluated, as shown in Equation (12), for all iterations of mfEGRA. Due to the typically high cost associated with the high-fidelity model, in our implementation we chose to evaluate all the k + 1 models when the high-fidelity model is selected as the information source, and to update the GP hyperparameters at those iterations. All the k + 1 model evaluations can be done in parallel. The algorithm is stopped when the maximum value of the EFF goes below 10^{-10}; however, other stopping criteria can also be explored.

Although in this work we did not encounter any case of failed model evaluations, numerical solvers can sometimes fail to provide a converged result. In the context of reliability analysis, a failed model evaluation can be treated as failure of the system (defined here as g_l(z) > 0) at the particular random variable realization z. One possibility for handling failed model evaluations would be to set the value of the limit state function g_l(z) to an upper limit in order to indicate failure of the system.
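The paper solves Equation (8) with MATLAB's patternsearch plus multistart via GlobalSearch. A rough stdlib-only analogue of that strategy (a compass search with random restarts; a sketch of the idea, not the authors' implementation) looks like:

```python
import random
from math import exp

def compass_search(f, x0, lo, hi, step=1.0, tol=1e-6):
    """Maximize f on a box by polling +/- step along each coordinate,
    halving the step when no poll improves (a simple pattern search)."""
    x, fx = list(x0), f(x0)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                y = list(x)
                y[i] = min(max(y[i] + d, lo[i]), hi[i])  # clamp to the box
                fy = f(y)
                if fy > fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5
    return x, fx

def multistart_maximize(f, lo, hi, n_starts=20, seed=0):
    """Restart the pattern search from random points and keep the best."""
    rng = random.Random(seed)
    best_x, best_f = None, float("-inf")
    for _ in range(n_starts):
        x0 = [rng.uniform(a, b) for a, b in zip(lo, hi)]
        x, fx = compass_search(f, x0, lo, hi)
        if fx > best_f:
            best_x, best_f = x, fx
    return best_x, best_f

# Toy EFF stand-in with two local maxima; the global maximum is at (2, 0).
toy = lambda x: exp(-((x[0] - 2.0)**2 + x[1]**2)) \
    + 0.5 * exp(-((x[0] + 2.0)**2 + x[1]**2))
xb, fb = multistart_maximize(toy, lo=[-4.0, -4.0], hi=[4.0, 4.0])
```

The restarts matter because the EFF is multimodal whenever the surrogate is uncertain in several disjoint regions near the failure boundary.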
A potential limitation of any GP-based method is the curse of dimensionality for high-dimensional problems, where the number of samples needed to cover the space grows exponentially and the cost of training GPs scales as the cube of the number of samples. The multifidelity method presented here alleviates the cost of exploring the space by using cheaper low-fidelity model evaluations and restricts its queries of the high-fidelity model to lie mostly around the failure boundary. The issue of the cost of training GPs with an increasing number of samples is not addressed here but can potentially be tackled through GP sparsification techniques [33, 34]. Another strategy for reducing the cost of training is through adaptive sampling strategies that exploit parallel computing. Advancements in parallel computing have led to several parallel adaptive sampling strategies for global optimization [35] and some parallel adaptive sampling methods for contour location [36, 16]. Parallel methods for multifidelity adaptive sampling, however, pose additional difficulty and need to be explored in both global optimization and contour location.
Algorithm 1 Multifidelity EGRA
Input: Initial DOE X_0 = {z_i, l_i}_{i=1}^n, cost of each information source c_l
Output: Refined multifidelity GP ĝ
 1: procedure mfEGRA(X_0)
 2:   X = X_0                          ▷ set of training samples
 3:   Build initial multifidelity GP ĝ using the initial set of training samples X_0
 4:   while stopping criterion is not met do
 5:     Select next sampling location z_{n+1} using Equation (8)
 6:     Select next information source l_{n+1} using Equation (14)
 7:     Evaluate at sample z_{n+1} using information source l_{n+1}
 8:     X = X ∪ {z_{n+1}, l_{n+1}}
 9:     Build updated multifidelity GP ĝ using X
10:     n ← n + 1
11:   end while
12:   return ĝ
13: end procedure

4 Results

In this section, we demonstrate the effectiveness of the proposed mfEGRA method on an analytic multimodal test problem and two different cases of an acoustic horn application. The probability of failure is estimated through Monte Carlo simulation using the adaptively refined multifidelity GP surrogate.

4.1 Analytic multimodal test problem

The analytic test problem used in this work has two inputs and three models with different fidelities and costs. This test problem has been used before in the context of reliability analysis in Ref. [11]. The high-fidelity model of the limit state function is

    g_0(z) = (z_1^2 + 4)(z_2 - 1)/20 - sin(5 z_1 / 2) - 2,    (15)

where z_1 ~ U(-4, 7) and z_2 ~ U(-3, 8) are uniformly distributed random variables. The domain of the function is Ω = [-4, 7] × [-3, 8]. The two low-fidelity models are

    g_1(z) = g_0(z) + sin(5 z_1 / 22 + 5 z_2 / 44 + 5/4),    (16)

    g_2(z) = g_0(z) + 3 sin(5 z_1 / 11 + 5 z_2 / 11 + 35/11).    (17)

The cost of each fidelity model is taken to be constant over the entire domain and is given by c_0 = 1, c_1 = 0.01, and c_2 = 0.001. In this case, there is no noise in the observations from the different fidelity models.
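The three fidelity models of Equations (15)-(17) and their costs can be written out directly; a sketch in Python (illustrative, not the paper's implementation):

```python
import math

# High-fidelity limit state, Eq. (15); failure is g0(z) > 0.
def g0(z1, z2):
    return (z1**2 + 4.0) * (z2 - 1.0) / 20.0 - math.sin(5.0 * z1 / 2.0) - 2.0

# Low-fidelity models, Eqs. (16) and (17): g0 plus a smooth discrepancy term.
def g1(z1, z2):
    return g0(z1, z2) + math.sin(5.0 * z1 / 22.0 + 5.0 * z2 / 44.0 + 5.0 / 4.0)

def g2(z1, z2):
    return g0(z1, z2) + 3.0 * math.sin(5.0 * z1 / 11.0 + 5.0 * z2 / 11.0 + 35.0 / 11.0)

# Constant per-evaluation costs, in units of one high-fidelity solve.
costs = {0: 1.0, 1: 0.01, 2: 0.001}

# At the origin: (4)(-1)/20 - sin(0) - 2 = -2.2, a safe point.
assert abs(g0(0.0, 0.0) + 2.2) < 1e-12
# The sinusoidal discrepancies are bounded by 1 and 3 respectively.
assert abs(g1(3.0, 3.0) - g0(3.0, 3.0)) <= 1.0
assert abs(g2(3.0, 3.0) - g0(3.0, 3.0)) <= 3.0
```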
The failure boundary is defined by the zero contour of the limit state function (g_0(z) = 0) and failure of the system is defined by g_0(z) > 0. Figure 3 shows the contour plot of g_l(z) for the three models used for the analytic test problem along with the failure boundary for each of them.

Figure 3: Contours of g_l(z) using the three fidelity models for the analytic test problem. The solid red line represents the zero contour that denotes the failure boundary.

We use an initial DOE of size 10 generated using Latin hypercube sampling. All the models are evaluated at these 10 samples to build the initial multifidelity surrogate. The reference probability of failure is estimated to be p̂_F = 0.3021 using 10^6 Monte Carlo samples of the g_0 model. The relative error in the probability of failure estimate obtained with the adaptively refined multifidelity GP surrogate, defined by |p̂_F − p̂_F^MF| / p̂_F, is used to assess the accuracy and computational efficiency of the proposed method. We repeat the calculations for 100 different initial DOEs to obtain confidence bands on the results. We first compare the accuracy of the method when different weights are used for the information gain criterion in mfEGRA, as seen in Figure 4. Comparing the error confidence bands, using weighted information gain (both EFF- and PF-weighted) performs better than using no weights. EFF-weighted information gain leads to only marginally lower errors in this case than PF-weighted information gain. Since we do not see any significant advantage of using PF as weights, and we already use the EFF-based criterion to select the sample location, we propose using EFF-weighted information gain to make the implementation more convenient. Note that for other problems it is possible that PF-weighted information gain may be better. From here on, mfEGRA is used with the EFF-weighted information gain.
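The reference value p̂_F ≈ 0.3021 can be reproduced by plain Monte Carlo over the uniform inputs. A sketch (applied here directly to g_0 rather than to the refined surrogate, which is what mfEGRA would substitute):

```python
import math
import random

def g0(z1, z2):
    """High-fidelity limit state of the analytic test problem, Eq. (15)."""
    return (z1**2 + 4.0) * (z2 - 1.0) / 20.0 - math.sin(5.0 * z1 / 2.0) - 2.0

def estimate_pf(n, seed=0):
    """Monte Carlo failure probability: fraction of samples with g0(z) > 0,
    with z1 ~ U(-4, 7) and z2 ~ U(-3, 8)."""
    rng = random.Random(seed)
    failures = sum(
        1 for _ in range(n)
        if g0(rng.uniform(-4.0, 7.0), rng.uniform(-3.0, 8.0)) > 0.0
    )
    return failures / n

pf = estimate_pf(100_000)
# The paper's reference value (10^6 samples) is 0.3021; a 10^5-sample
# estimate should land within a few standard errors of it.
assert 0.28 < pf < 0.33
```

In the method itself, each g_0 call above is replaced by the adaptively refined multifidelity GP surrogate, which is what makes the 10^5-10^6 evaluations affordable.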
Figure 4: Effect of different weights for the information gain criterion in mfEGRA for the analytic test problem, in terms of convergence of the relative error in p_F prediction (shown in log-scale) for 100 different initial DOEs. Solid lines represent the median and dashed lines represent the 25 and 75 percentiles.

The comparison of mfEGRA with single-fidelity EGRA shows considerable improvement in accuracy at substantially lower computational cost, as seen in Figure 5. In this case, to reach a median relative error below 10^-3 in the p_F prediction, mfEGRA requires a computational cost of 26 compared to 48 for EGRA (a ~46% reduction). Note that we start both cases with the same 100 sets of initial samples. We also note that the original paper for the EGRA method [11] reports a computational cost of 35.1 for the mean relative error from 20 different initial DOEs to reach below 5 × 10^-3. We report the computational cost for the EGRA algorithm to reach a median relative error from 100 different initial DOEs below 10^-3 to be 48 (in our case, the computational cost for the median relative error for EGRA to reach below 5 × 10^-3 is 40). The difference in results can be attributed to the different sets of initial DOEs, the GP implementations, the different statistics of the reported results, and the different probability distributions used for the random variables.

Figure 5: Comparison of mfEGRA vs single-fidelity EGRA for the analytic test problem, in terms of convergence of the relative error in p_F prediction (shown in log-scale) for 100 different initial DOEs.

Figure 6 shows the evolution of the expected feasibility function and the weighted lookahead information gain, which are the two stages of the adaptive sampling criterion used in mfEGRA.
These metrics, along with the relative error in the probability of failure estimate, can be used to define an efficient stopping criterion, specifically when the adaptive sampling needs to be repeated for different sets of parameters (e.g., in reliability-based design optimization). Figure 7 shows the progress of mfEGRA at several iterations for a particular initial DOE. mfEGRA explores most of the domain using the cheaper g_1 and g_2 models in this case. The algorithm is stopped after 69 iterations, when the expected feasibility function reaches below 10^-10; we can see that the surrogate contour accurately traces the true failure boundary defined by the high-fidelity model. As noted before, we evaluate all three models when the high-fidelity model is selected as the information source. In this case, mfEGRA makes a total of 21 evaluations of g_0, 77 evaluations of g_1, and 23 evaluations of g_2, including the initial DOE, to reach a value of EFF below 10^-10.

4.2 Acoustic horn

We demonstrate the effectiveness of mfEGRA for the reliability analysis of an acoustic horn for a three-dimensional case and a four-dimensional case with a rarer-event probability of failure. The acoustic horn model used in this work has been used in the context of robust optimization by Ng et al. [37]. An illustration of the acoustic horn is shown in Figure 8.

Figure 6: Evolution of the adaptive sampling criteria, (a) expected feasibility function and (b) weighted information gain, used in mfEGRA for 100 different initial DOEs.

Figure 7: Progress of mfEGRA at several iterations, showing the surrogate prediction and the samples from different models for a particular initial DOE.
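The cost figures quoted above are in equivalent high-fidelity solves, i.e., each model's evaluation count weighted by its cost relative to the high-fidelity model. The bookkeeping for the counts just reported is simple arithmetic:

```python
# Equivalent high-fidelity cost = sum over models of (evaluations x cost),
# using the per-model costs c0 = 1, c1 = 0.01, c2 = 0.001 and the
# evaluation counts reported for the particular DOE above.
costs = {"g0": 1.0, "g1": 0.01, "g2": 0.001}
evals = {"g0": 21, "g1": 77, "g2": 23}

equivalent_cost = sum(evals[m] * costs[m] for m in costs)
assert abs(equivalent_cost - 21.793) < 1e-9  # 21 + 0.77 + 0.023

# Median-cost comparison quoted for the analytic problem:
# 26 (mfEGRA) vs 48 (EGRA) equivalent high-fidelity solves.
savings = 1.0 - 26.0 / 48.0
assert abs(savings - 0.458) < 5e-3  # the paper's ~46% reduction
```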
HF refers to the high-fidelity model g_0, LF1 refers to the low-fidelity model g_1, and LF2 refers to the low-fidelity model g_2.

4.2.1 Three-dimensional case

The inputs to the system are the three random variables listed in Table 1.

Table 1: Random variables used in the three-dimensional acoustic horn problem.

Random variable | Description               | Distribution | Lower bound | Upper bound | Mean | Standard deviation
k               | wave number               | Uniform      | 1.3         | 1.5         | –    | –
Z_u             | upper horn wall impedance | Normal       | –           | –           | 50   | 3
Z_l             | lower horn wall impedance | Normal       | –           | –           | 50   | 3

Figure 8: Two-dimensional acoustic horn geometry with a = 0.5, b = 3, L = 5, and the shape of the horn flare described by six equally-spaced half-widths b_1 = 0.8, b_2 = 1.2, b_3 = 1.6, b_4 = 2, b_5 = 2.3, b_6 = 2.65 [37].

The output of the model is the reflection coefficient s, which is a measure of the horn's efficiency. We define failure of the system to be s(z) > 0.1. The limit state function is defined as g(z) = s(z) − 0.1, which defines the failure boundary as g(z) = 0. We use a two-dimensional acoustic horn model governed by the non-dimensional Helmholtz equation. In this case, the high-fidelity model g_0 is a finite element model of the Helmholtz equation with 35895 nodal grid points. The low-fidelity model g_1 is a reduced basis model with N = 100 basis vectors [37, 38]. In this case, evaluating the low-fidelity model is 40 times faster than evaluating the high-fidelity model. The cost of evaluating the different models is taken to be constant over the entire random variable space. A more detailed description of the acoustic horn models used in this work can be found in Ref. [37]. The reference probability of failure is estimated to be p_F = 0.3812 using 10^5 Monte Carlo samples of the high-fidelity model.
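Drawing the Monte Carlo input realizations for Table 1 is straightforward; a sketch (the function name and seeding convention are illustrative, not from the paper):

```python
import random

def sample_horn_inputs(n, seed=0):
    """Draw n Monte Carlo realizations of the three random inputs in
    Table 1: k ~ U(1.3, 1.5), Z_u ~ N(50, 3), Z_l ~ N(50, 3)."""
    rng = random.Random(seed)
    return [
        (rng.uniform(1.3, 1.5), rng.gauss(50.0, 3.0), rng.gauss(50.0, 3.0))
        for _ in range(n)
    ]

samples = sample_horn_inputs(10_000)
# The wave number is bounded; the impedance sample means sit near 50.
assert all(1.3 <= k <= 1.5 for k, _, _ in samples)
mean_zu = sum(zu for _, zu, _ in samples) / len(samples)
assert abs(mean_zu - 50.0) < 0.5
```

Each realization z = (k, Z_u, Z_l) would then be fed to the limit state g(z) = s(z) − 0.1, evaluated via the refined surrogate rather than the finite element model.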
We repeat the mfEGRA and single-fidelity EGRA experiments using 10 different initial DOEs with 10 samples each (generated using Latin hypercube sampling) to obtain confidence bands on the results. The convergence of the relative error in the probability of failure is compared in Figure 9 for mfEGRA and single-fidelity EGRA. In this case, mfEGRA needs 19 equivalent high-fidelity solves to reach a median relative error below 10^-3, compared to 25 required by single-fidelity EGRA, leading to a 24% reduction in computational cost. The reduction in computational cost using mfEGRA is driven by the discrepancy between the models and the relative cost of evaluating the models. In the acoustic horn case, we see computational savings of 24%, compared to around 46% for the analytic test problem in Section 4.1. This can be explained by the substantial difference in relative costs: a 40-times-cheaper low-fidelity model for the acoustic horn problem, versus two low-fidelity models that are 100-1000 times cheaper than the high-fidelity model for the analytic test problem. The evolution of the mfEGRA adaptive sampling criteria can be seen in Figure 10.

Figure 9: Comparison of the relative error in the estimate of the probability of failure (shown in log-scale) using mfEGRA and single-fidelity EGRA for the three-dimensional acoustic horn application with 10 different initial DOEs.

Figure 10: Evolution of the adaptive sampling criteria, (a) expected feasibility function and (b) weighted information gain, for the three-dimensional acoustic horn application with 10 different initial DOEs.

Figure 11 shows that classification of the Monte Carlo samples using the high-fidelity model and the adaptively refined surrogate model for a particular initial DOE leads to very similar results.
It also shows that the acoustic horn application has two disjoint failure regions, and that the method is able to accurately capture both. The locations of the samples from the different models when mfEGRA is used to refine the multifidelity GP surrogate for a particular initial DOE can be seen in Figure 12. The figure shows that most of the high-fidelity samples are selected around the failure boundary. For this DOE, mfEGRA requires 31 evaluations of the high-fidelity model and 76 evaluations of the low-fidelity model to reach an EFF value below 10^-10.

Figure 11: Classification of Monte Carlo samples using (a) the high-fidelity model, and (b) the final refined multifidelity GP surrogate, for a particular initial DOE using mfEGRA for the three-dimensional acoustic horn problem.

Figure 12: Locations of samples from the different fidelity models using mfEGRA for the three-dimensional acoustic horn problem for a particular initial DOE. The cloud of points is the high-fidelity Monte Carlo samples near the failure boundary.

Similar to the work in Refs. [13, 15], EGRA and mfEGRA can also be implemented by limiting the search space for the adaptive sampling location in Equation (8) to the set of Monte Carlo samples (here, 10^5) drawn from the given random variable distribution. The convergence of the relative error in the probability of failure estimate improves for both mfEGRA and single-fidelity EGRA with this approach, as can be seen in Figure 13. In this case, mfEGRA requires 12 equivalent high-fidelity solves compared to 22 high-fidelity solves required by single-fidelity EGRA to reach a median relative error below 10^-3, leading to computational savings of around 45%.
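Restricting the search to an a priori candidate pool replaces the continuous EFF maximization with a discrete argmax over the Monte Carlo samples. A minimal sketch, with a placeholder criterion standing in for the EFF (the criterion and contour level here are hypothetical, for illustration only):

```python
import random

def select_next(candidates, criterion):
    """Pick the pool member that maximizes the acquisition criterion.
    This replaces the continuous patternsearch/GlobalSearch optimization
    with a discrete search over a fixed Monte Carlo candidate pool."""
    return max(candidates, key=criterion)

# Hypothetical 1-D pool and criterion: prefer points near a contour at z = 2.
rng = random.Random(1)
pool = [rng.uniform(-4.0, 7.0) for _ in range(1000)]
best = select_next(pool, lambda z: -abs(z - 2.0))
assert abs(best - 2.0) < 0.1  # with 1000 uniform draws, some point lies this close
```

A side benefit of this formulation, consistent with the improved convergence reported above, is that every selected sample is itself one of the Monte Carlo points used in the failure probability estimate.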
Figure 13: Comparison of the relative error in the estimate of the probability of failure (shown in log-scale) using mfEGRA and single-fidelity EGRA when limiting the search space for the adaptive sampling location to a set of Monte Carlo samples drawn from the given random variable distribution, for the three-dimensional acoustic horn application with 10 different initial DOEs.

4.2.2 Four-dimensional case

For the four-dimensional acoustic horn problem, the inputs to the system are the three random variables used before, along with a random variable ξ defined by a truncated normal distribution representing manufacturing uncertainty, as listed in Table 2. The parameters defining the geometry of the acoustic horn (see Figure 8) are now given by b_i + ξ, i = 1, ..., 6, to account for manufacturing uncertainty. In this case, we define failure of the system to be s(z) > 0.16 to make failure a rarer event. The limit state function is defined as g(z) = s(z) − 0.16, which defines the failure boundary as g(z) = 0. The reference probability of failure is estimated to be p_F = 7.2 × 10^-3 using 10^5 Monte Carlo samples of the high-fidelity model. Note that p_F in the four-dimensional case is two orders of magnitude lower than in the three-dimensional case. The complexity of the problem increases because of the higher dimensionality as well as the rarer-event probability of failure to be estimated.

Table 2: Random variables used in the four-dimensional acoustic horn problem.
Random variable | Description               | Distribution     | Lower bound | Upper bound | Mean | Standard deviation
k               | wave number               | Uniform          | 1.3         | 1.5         | –    | –
Z_u             | upper horn wall impedance | Normal           | –           | –           | 50   | 3
Z_l             | lower horn wall impedance | Normal           | –           | –           | 50   | 3
ξ               | manufacturing uncertainty | Truncated Normal | -0.1        | 0.1         | 0    | 0.05

In this case, we present the results for EGRA and mfEGRA implemented by limiting the search space to a priori Monte Carlo samples (here, 10^5) drawn from the given random variable distribution. Note that the lower probability of failure to be estimated here necessitates the use of a priori drawn Monte Carlo samples to efficiently achieve the required accuracy. The computational efficiency could be further improved by combining EGRA and mfEGRA with Monte Carlo variance reduction techniques, especially for problems with even lower probabilities of failure. We repeat the mfEGRA and single-fidelity EGRA experiments using 10 different initial DOEs with 15 samples each (generated using Latin hypercube sampling) to obtain confidence bands on the results. The convergence of the relative error in the probability of failure is compared in Figure 14 for mfEGRA and single-fidelity EGRA. In this case, mfEGRA requires 25 equivalent high-fidelity solves compared to 48 high-fidelity solves required by single-fidelity EGRA to reach a median relative error below 10^-3, leading to computational savings of around 48%.

Figure 14: Comparison of the relative error in the estimate of the probability of failure (shown in log-scale) using mfEGRA and single-fidelity EGRA when limiting the search space for the adaptive sampling location to a set of Monte Carlo samples drawn from the given random variable distribution, for the four-dimensional acoustic horn application with 10 different initial DOEs.
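The truncated normal variable ξ in Table 2 can be sampled by simple rejection, since the bounds [-0.1, 0.1] are only 2σ from the mean (acceptance rate ≈ 95%). A sketch; the function name is illustrative, and the paper does not specify its sampling mechanism:

```python
import random

def sample_truncated_normal(n, mean=0.0, std=0.05, low=-0.1, high=0.1, seed=0):
    """Rejection sampling for the manufacturing uncertainty xi of Table 2:
    draw from N(mean, std) and keep only draws inside [low, high]."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        x = rng.gauss(mean, std)
        if low <= x <= high:
            out.append(x)
    return out

xi = sample_truncated_normal(10_000)
assert all(-0.1 <= x <= 0.1 for x in xi)
# Symmetric truncation about the mean keeps the sample mean near 0.
assert abs(sum(xi) / len(xi)) < 0.005
```

Each draw of ξ then perturbs all six flare half-widths, b_i + ξ, before the horn model is evaluated.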
5 Concluding remarks

This paper introduces the mfEGRA (multifidelity EGRA) method, which refines a surrogate to accurately locate the limit state function failure boundary (or any contour) while leveraging multiple information sources with different fidelities and costs. The method selects the next sampling location based on the expected feasibility function and the next information source based on a weighted one-step lookahead information gain criterion, in order to refine the multifidelity GP surrogate of the limit state function around the failure boundary. We show through three numerical examples that mfEGRA efficiently combines information from different models to reduce computational cost. When used for estimating the probability of failure, the mfEGRA method leads to computational savings of ~46% for a multimodal test problem and 24% for a three-dimensional acoustic horn problem over the single-fidelity EGRA method. When implemented by restricting the search space to a priori drawn Monte Carlo samples, mfEGRA showed even greater computational efficiency, with a 45% reduction in computational cost compared to the single-fidelity method for the three-dimensional acoustic horn problem. We see that using a priori drawn Monte Carlo samples improves the efficiency of both EGRA and mfEGRA, and its importance is further highlighted through the four-dimensional implementation of the acoustic horn problem, which requires estimating a rarer-event probability of failure. For the four-dimensional acoustic horn problem, mfEGRA leads to computational savings of 48% compared to the single-fidelity method. The driving factors for the reduction in computational cost are the discrepancy between the high- and low-fidelity models and the relative cost of the low-fidelity models compared to the high-fidelity model.
This information is directly encoded in the mfEGRA adaptive sampling criterion, helping it make the most efficient decisions.

Acknowledgements

This work has been supported in part by the Air Force Office of Scientific Research (AFOSR) MURI on managing multiple information sources of multi-physics systems, award numbers FA9550-15-1-0038 and FA9550-18-1-0023, the Air Force Center of Excellence on multi-fidelity modeling of rocket combustor dynamics, award FA9550-17-1-0195, and the Department of Energy Office of Science AEOLUS MMICC, award DE-SC0019303.

References

[1] Melchers, R., "Importance sampling in structural systems," Structural Safety, Vol. 6, No. 1, 1989, pp. 3-10.
[2] Liu, J. S., Monte Carlo strategies in scientific computing, Springer Science & Business Media, 2008.
[3] Kroese, D. P., Rubinstein, R. Y., and Glynn, P. W., "The cross-entropy method for estimation," Handbook of Statistics, Vol. 31, Elsevier, 2013, pp. 19-34.
[4] Au, S.-K. and Beck, J. L., "Estimation of small failure probabilities in high dimensions by subset simulation," Probabilistic Engineering Mechanics, Vol. 16, No. 4, 2001, pp. 263-277.
[5] Papaioannou, I., Betz, W., Zwirglmaier, K., and Straub, D., "MCMC algorithms for subset simulation," Probabilistic Engineering Mechanics, Vol. 41, 2015, pp. 89-103.
[6] Hohenbichler, M., Gollwitzer, S., Kruse, W., and Rackwitz, R., "New light on first- and second-order reliability methods," Structural Safety, Vol. 4, No. 4, 1987, pp. 267-284.
[7] Rackwitz, R., "Reliability analysis - a review and some perspectives," Structural Safety, Vol. 23, No. 4, 2001, pp. 365-395.
[8] Basudhar, A., Missoum, S., and Sanchez, A. H., "Limit state function identification using support vector machines for discontinuous responses and disjoint failure domains," Probabilistic Engineering Mechanics, Vol. 23, No. 1, 2008, pp. 1-11.
[9] Basudhar, A.
and Missoum, S., "Reliability assessment using probabilistic support vector machines," International Journal of Reliability and Safety, Vol. 7, No. 2, 2013, pp. 156-173.
[10] Lecerf, M., Allaire, D., and Willcox, K., "Methodology for dynamic data-driven online flight capability estimation," AIAA Journal, Vol. 53, No. 10, 2015, pp. 3073-3087.
[11] Bichon, B. J., Eldred, M. S., Swiler, L. P., Mahadevan, S., and McFarland, J. M., "Efficient global reliability analysis for nonlinear implicit performance functions," AIAA Journal, Vol. 46, No. 10, 2008, pp. 2459-2468.
[12] Picheny, V., Ginsbourger, D., Roustant, O., Haftka, R. T., and Kim, N.-H., "Adaptive designs of experiments for accurate approximation of a target region," Journal of Mechanical Design, Vol. 132, No. 7, 2010, pp. 071008.
[13] Echard, B., Gayton, N., and Lemaire, M., "AK-MCS: an active learning reliability method combining Kriging and Monte Carlo simulation," Structural Safety, Vol. 33, No. 2, 2011, pp. 145-154.
[14] Dubourg, V., Sudret, B., and Bourinet, J.-M., "Reliability-based design optimization using kriging surrogates and subset simulation," Structural and Multidisciplinary Optimization, Vol. 44, No. 5, 2011, pp. 673-690.
[15] Bect, J., Ginsbourger, D., Li, L., Picheny, V., and Vazquez, E., "Sequential design of computer experiments for the estimation of a probability of failure," Statistics and Computing, Vol. 22, No. 3, 2012, pp. 773-793.
[16] Chevalier, C., Bect, J., Ginsbourger, D., Vazquez, E., Picheny, V., and Richet, Y., "Fast parallel kriging-based stepwise uncertainty reduction with application to the identification of an excursion set," Technometrics, Vol. 56, No. 4, 2014, pp. 455-465.
[17] Moustapha, M. and Sudret, B., "Surrogate-assisted reliability-based design optimization: a survey and a unified modular framework," Structural and Multidisciplinary Optimization, 2019, pp. 1-20.
[18] Peherstorfer, B., Willcox, K., and Gunzburger, M., "Survey of multifidelity methods in uncertainty propagation, inference, and optimization," SIAM Review, Vol. 60, No. 3, 2018, pp. 550-591.
[19] Poloczek, M., Wang, J., and Frazier, P., "Multi-information source optimization," Advances in Neural Information Processing Systems, 2017, pp. 4291-4301.
[20] Lam, R., Allaire, D., and Willcox, K., "Multifidelity optimization using statistical surrogate modeling for non-hierarchical information sources," 56th AIAA/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, 2015.
[21] Ghoreishi, S. F. and Allaire, D., "Multi-information source constrained Bayesian optimization," Structural and Multidisciplinary Optimization, Vol. 59, No. 3, 2019, pp. 977-991.
[22] Frazier, P. I., "A tutorial on Bayesian optimization," arXiv preprint arXiv:1807.02811, 2018.
[23] Jones, D. R., "A taxonomy of global optimization methods based on response surfaces," Journal of Global Optimization, Vol. 21, No. 4, 2001, pp. 345-383.
[24] Dribusch, C., Missoum, S., and Beran, P., "A multifidelity approach for the construction of explicit decision boundaries: application to aeroelasticity," Structural and Multidisciplinary Optimization, Vol. 42, No. 5, 2010, pp. 693-705.
[25] Marques, A., Lam, R., and Willcox, K., "Contour location via entropy reduction leveraging multiple information sources," Advances in Neural Information Processing Systems, 2018, pp. 5217-5227.
[26] Rasmussen, C. E. and Nickisch, H., "Gaussian processes for machine learning (GPML) toolbox," Journal of Machine Learning Research, Vol. 11, No. Nov, 2010, pp. 3011-3015.
[27] Villemonteix, J., Vazquez, E., and Walter, E., "An informational approach to the global optimization of expensive-to-evaluate functions," Journal of Global Optimization, Vol. 44, No. 4, 2009, pp. 509-534.
[28] Hennig, P. and Schuler, C.
J., "Entropy search for information-efficient global optimization," The Journal of Machine Learning Research, Vol. 13, No. 1, 2012, pp. 1809-1837.
[29] Hernández-Lobato, J. M., Hoffman, M. W., and Ghahramani, Z., "Predictive entropy search for efficient global optimization of black-box functions," Advances in Neural Information Processing Systems, 2014, pp. 918-926.
[30] Huan, X. and Marzouk, Y. M., "Simulation-based optimal Bayesian experimental design for nonlinear systems," Journal of Computational Physics, Vol. 232, No. 1, 2013, pp. 288-317.
[31] Villanueva, D. and Smarslok, B. P., "Using expected information gain to design aerothermal model calibration experiments," 17th AIAA Non-Deterministic Approaches Conference, Kissimmee, FL, USA, 2015.
[32] Chaudhuri, A., Lam, R., and Willcox, K., "Multifidelity uncertainty propagation via adaptive surrogates in coupled multidisciplinary systems," AIAA Journal, 2018, pp. 235-249.
[33] Williams, C. K. and Rasmussen, C. E., Gaussian Processes for Machine Learning, Vol. 2, MIT Press, Cambridge, MA, 2006.
[34] Burt, D., Rasmussen, C. E., and Van Der Wilk, M., "Rates of convergence for sparse variational Gaussian process regression," International Conference on Machine Learning, 2019, pp. 862-871.
[35] Haftka, R. T., Villanueva, D., and Chaudhuri, A., "Parallel surrogate-assisted global optimization with expensive functions - a survey," Structural and Multidisciplinary Optimization, Vol. 54, No. 1, 2016, pp. 3-13.
[36] Viana, F. A., Haftka, R. T., and Watson, L. T., "Sequential sampling for contour estimation with concurrent function evaluations," Structural and Multidisciplinary Optimization, Vol. 45, No. 4, 2012, pp. 615-618.
[37] Ng, L. W. and Willcox, K. E., "Multifidelity approaches for optimization under uncertainty," International Journal for Numerical Methods in Engineering, Vol. 100, No. 10, 2014, pp. 746-772.
[38] Eftang, J.
L., Huynh, D., Knezevic, D. J., and Patera, A. T., "A two-step certified reduced basis method," Journal of Scientific Computing, Vol. 51, No. 1, 2012, pp. 28-58.
