Using Supervised Learning to Improve Monte Carlo Integral Estimation

Brendan Tracey (Stanford University, Stanford, CA 94305), David Wolpert (NASA Ames Research Center, Moffett Field, CA 94035), Juan J. Alonso (Stanford University, Stanford, CA 94305)

October 1, 2018

Abstract

Monte Carlo (MC) techniques are often used to estimate integrals of a multivariate function using randomly generated samples of the function. In light of the increasing interest in uncertainty quantification and robust design applications in aerospace engineering, the calculation of expected values of such functions (e.g. performance measures) becomes important. However, MC techniques often suffer from high variance and slow convergence as the number of samples increases. In this paper we present Stacked Monte Carlo (StackMC), a new method for post-processing an existing set of MC samples to improve the associated integral estimate. StackMC is based on the supervised learning techniques of fitting functions and cross validation. It should reduce the variance of any type of Monte Carlo integral estimate (simple sampling, importance sampling, quasi-Monte Carlo, MCMC, etc.) without adding bias. We report on an extensive set of experiments confirming that the StackMC estimate of an integral is more accurate than both the associated unprocessed Monte Carlo estimate and an estimate based on a functional fit to the MC samples. These experiments run over a wide variety of integration spaces, numbers of sample points, dimensions, and fitting functions. In particular, we apply StackMC in estimating the expected value of the fuel burn metric of future commercial aircraft and in estimating sonic boom loudness measures. We compare the efficiency of StackMC with that of more standard methods and show that for negligible additional computational cost significant increases in accuracy are gained.
Nomenclature

b              Bias of M
D_x            Set of samples of f(x)
D_x(i)         ith sample of f(x)
D_{x_i,test}   Subset of samples in the ith testing set
D_{x_i,train}  Subset of samples in the ith training set
E[·]           Expected value
f(x)           Objective function
f̂              True expected value of f(x)
f̂_M            Estimate of f̂ from M
f(D_x(i))      Function value at the ith sample
g(x)           Fitting algorithm
g_i(x)         ith fit to f(x)
g_i(D_x(i))    Prediction of fit g_i at D_x(i)
ĝ              Expected value of g(x)
k              Number of folds
L_i            Likelihood of the expected value of the ith fold
M              An estimator of f̂
m_i            Number of samples in the ith testing set
N              Number of samples
N_i            Number of samples in the ith training set
p(x)           Probability distribution of x
q(x)           Alternate sample distribution
r(x)           Fitting algorithm for p(x)
v              Variance of M
x              Input parameters
α              Free parameter of StackMC
β              Free parameter in the fitting algorithm
ρ              Correlation
σ              Standard deviation

1 Introduction

A system optimized only for best performance at nominal operating conditions can see severe performance degradation at even slightly off-design conditions. Aerospace systems rarely operate at precisely the design condition, and a robust design approach dictates trading some performance at the nominal condition for improved performance over a wide range of input conditions. However, certain input conditions will occur rarely or never, and adding robustness for these conditions will degrade average system performance. Consequently, we are often interested in optimizing the expected performance of a system rather than the performance at any specific operating point.
Formally, we are interested in an integral of the form

    E[f] = f̂ = ∫ f(x) p(x) dx,    (1)

where f(x) is the (potentially multivariate) objective function to be optimized, and p(x) is the probability density function (or probability distribution, in which case (1) is actually a summation) from which x is generated.

Evaluating (1) is non-trivial, especially when the objective function is high dimensional and/or expensive to evaluate. When standard quadrature techniques (such as Simpson's rule) become intractable, Monte Carlo (MC) methods are powerful techniques which have become the standard for integral estimation. However, MC techniques also have their drawbacks; the required computational expense may be prohibitive in many situations. The question is whether a technique exists to significantly lower the cost of estimating (1). This paper describes such a technique, which combines MC with machine learning and statistical techniques and can lead to significant computational savings over traditional methods.

We begin this paper with a brief review of integral estimation techniques including Monte Carlo, fitting algorithms, and stacking. We then introduce a new technique we call Stacked Monte Carlo (StackMC), which uses stacking to reduce the estimation error of any Monte Carlo method. Finally, we apply StackMC to a number of analytic example problems and to two problems from the aerospace literature. We show that StackMC can significantly reduce estimation error and never has higher estimation error than the best of Monte Carlo and the chosen fitting algorithm.

2 Integral Estimation Techniques

2.1 Estimation Error

When using any method M returning f̂_M as an estimate of (1), there are two sources of estimation error: bias (b) and variance (v). Bias is the expected difference between the method and the truth: b = E[f̂_M − f̂], where the expectation is over all data sets.
A method whose average over many runs gives the correct answer has zero bias, whereas a method that estimates high or low on average has non-zero bias. As an example, the Euler equations have significant bias in their estimate of viscous drag. Variance is a measure of how much f̂_M varies between different runs of M: v = E[(f̂_M − E[f̂_M])²]. If M is deterministic, it has zero variance because every run returns the same answer, whereas if M is stochastic, multiple runs have different outputs, leading to a non-zero variance. The total expected squared error is the combination of these two factors: E[|f̂_M − f̂|²] = b² + v. Any method seeking to reduce estimation error must keep both sources of error low.

2.2 Monte Carlo Techniques

In "Simple Sampling" Monte Carlo (MC), a set of N samples, D_x, is generated randomly according to p(x). The estimate of the expected value of the function based on D_x is the average of the function values of the samples:

    f̂ ≈ f̂_mc = (1/N) Σ_{i=1}^{N} f(D_x(i)),

where D_x(i) refers to the ith generated sample. Simple Monte Carlo has two extremely useful properties. First, MC is guaranteed to be unbiased and thus, on average, will return the correct answer. Second, MC is guaranteed to converge to the correct answer at a rate of O(N^{−1/2}), independent of the number of dimensions. That is, for any problem, increasing the number of samples by a factor of one hundred decreases the expected error by a factor of ten.

As Simple Monte Carlo has zero bias, all of the error comes from the variance of Monte Carlo runs. This variance is due to a different kind of variance concerning f(x) itself; fluctuations in the Monte Carlo estimate are directly related to fluctuations in f(x). As a result, the smaller the variance of f(x), the smaller the expected error of Monte Carlo. Many different sample generation techniques exist to help reduce the variance of Monte Carlo.
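The simple-sampling estimator described above is a short computation in practice. The sketch below is illustrative only (the function and distribution are ours, not from the paper): it estimates E[x²] for x ~ Uniform(−3, 3), whose true value is 3.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_estimate(f, sampler, n):
    """Simple-sampling Monte Carlo: average f over n draws from p(x)."""
    xs = sampler(n)
    return float(np.mean(f(xs)))

# Illustrative problem (not from the paper): E[x^2] with x ~ Uniform(-3, 3),
# whose true value is 3.
est = mc_estimate(lambda x: x**2,
                  lambda n: rng.uniform(-3.0, 3.0, size=n),
                  100_000)
```

Consistent with the O(N^{−1/2}) rate, running this with 100 times as many samples should shrink the expected error by roughly a factor of ten.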
Importance sampling [1] generates samples from an alternate distribution q(x) (instead of the true distribution p(x)) and estimates integral (1) as a weighted average of sample values:

    f̂ = ∫ f(x) p(x) dx = ∫ f(x) (p(x)/q(x)) q(x) dx,
    f̂_is = (1/N) Σ_{i=1}^{N} f(D_x(i)) p(D_x(i))/q(D_x(i)).    (2)

Importance sampling is often used when the tails of a distribution have a measurable effect on f̂ but occur very infrequently. Using Simple MC, several million samples are needed to accurately capture a once-in-a-million event, but by using importance sampling unlikely outcomes can be made to occur more frequently, reducing the total number of samples needed.

Quasi-Monte Carlo (QMC) techniques reduce variance by choosing sample locations more carefully. Sample points generated from simple Monte Carlo will inevitably cluster in some locations and leave other locations void of samples. QMC methods are usually deterministic and reduce variance by spreading points evenly throughout the sample space. Two such methods are the scrambled Halton sequence [2] and the Sobol sequence [3]. While often effective, due to deterministic sample generation they are not guaranteed to be unbiased. It is also difficult to generate points from an arbitrary p(x); most QMC algorithms generate sample points from a uniform distribution over the unit hypercube.

2.3 Supervised Learning

A second class of techniques for estimating integrals from data seeks to use the data samples more efficiently through the use of a fitting algorithm and supervised learning techniques. The fitting algorithm uses data samples to make a "fit" to the data, and the integral estimate is the integral of the fit. Examples of fitting algorithms include splines, high-order polynomials, and Fourier expansions; even Simpson's rule computes the quadrature by fitting a piecewise quadratic polynomial to the data samples.
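The fit-and-integrate idea can be sketched in a few lines. The setup below is illustrative, not from the paper: a cubic polynomial is fit to samples of f(x) = exp(x) with x ~ Uniform(−1, 1), and the integral of the fit against the uniform density has a closed form because odd moments vanish and E[x²] = 1/3.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fit-based integral estimate: fit a cubic to the samples and integrate the
# fit against p(x). Illustrative setup (not from the paper): f(x) = exp(x)
# with x ~ Uniform(-1, 1), so the true value is sinh(1) ≈ 1.1752.
xs = rng.uniform(-1.0, 1.0, size=50)
beta = np.polyfit(xs, np.exp(xs), 3)        # coefficients, highest power first

# ∫ g(x) p(x) dx with p = 1/2 on [-1,1]: odd moments vanish, E[x^2] = 1/3,
# so the integral of the fit is β0 + β2/3.
fit_estimate = float(beta[3] + beta[1] / 3.0)
```

Because the fit exploits the smoothness of the integrand, the estimate here is far more accurate than a 50-sample Monte Carlo average would typically be; the sections below discuss when this advantage fails to hold.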
Fits incorporate the spatial distribution of sample points and thus often give more accurate estimates of f̂ than MC. However, using a fitting algorithm can induce bias and may or may not exhibit convergence to the correct answer as the number of points increases. Additionally, when the number of data points is "too small," many fitting algorithms exhibit higher variance than MC, and, as a result, using a fitting algorithm can be worse than not using one, especially with low numbers of sample points and in high-dimensional spaces. It is usually impossible to know a priori how many points is "too small," making it difficult to know when to use a fitting algorithm.

Additionally, it is difficult to know if a fit to the data samples is an accurate representation of f(x) at values of x not in the data set. Many fitting algorithms exhibit a property known as "overfitting," where a fit to the data is very accurate at x locations that are among the data samples but is very inaccurate at x locations not among the data samples. As a result, one cannot use the data samples themselves both to make a fit and to evaluate its accuracy. The standard technique for addressing this issue is known as cross-validation, where a certain percentage of the data is used to make the fit and the rest of the data is used to evaluate its performance. Usually this process is repeated several times, and the best performing fit is used as the output of the fitting algorithm (a.k.a. winner-takes-all).

Stacking is a technique first introduced by Wolpert [4] which improves upon cross-validation by combining the results of different fits. Wolpert describes stacking in his paper as "a strategy ... which is more sophisticated than winner-takes-all. Loosely speaking, this strategy is to combine the [fits] rather than choose one amongst them. ... [Stacking] can be viewed as a more flexible version of nonparametric statistics techniques like cross-validation, ...
therefore it can be argued that for almost any generalization or classification problem ... one should use [stacking] rather than any single generalizer by itself." Interested readers can see stacking applied to improving regression [5], probability density estimation [6], confidence interval estimation [7], and optimization [8]. Stacking recently gained fame as a major part of the first and second place algorithms in the Netflix competition [9].

In addition to combining the predictions of multiple fitting algorithms, stacking can also be used to improve the prediction of a single fitting algorithm. In this variant of stacking, one again partitions the full set of data D_x into a subset S_1 used to make fits and a subset S_2 = D_x \ S_1. However, now one does not use S_2 to determine how to combine the predictions of multiple fitting algorithms that were all run on S_1. Instead one uses S_2 to correct the prediction of a single algorithm that was run on S_1. This variant of stacking is closest to the algorithm we use below. For a comparison of stacking to other generalizers, see Clarke [10].

3 Stacked Monte Carlo

The main idea of Stacked Monte Carlo (StackMC) is to incorporate the variance reduction potential of a fitting algorithm while avoiding introducing bias and additional variance from poor fits. It makes no assumptions about the function f(x) or the sample generation method, and therefore StackMC can be used to augment nearly any Monte Carlo application.

Let us assume that we have a function g(x) that is a reasonable (though not necessarily perfect) fit to f(x). Equation (1) can be re-written as

    f̂ = ∫ α g(x) p(x) dx + ∫ [f(x) − α g(x)] p(x) dx
       = α ĝ + ∫ [f(x) − α g(x)] p(x) dx,    (3)

where α is a constant and ĝ = ∫ g(x) p(x) dx. Instead of using Monte Carlo samples to estimate (1) directly, we can use the Monte Carlo samples to estimate the integral term in (3), i.e.
    f̂ ≈ α ĝ + (1/N) Σ_{i=1}^{N} [f(D_x(i)) − α g(D_x(i))].    (4)

Since g(x) is assumed to be a reasonable fit, for a properly chosen α, f(x) − α g(x) has lower variance than f(x) alone (see Fig. 1), and so a MC estimate of (3) has lower expected error than an estimate of (1).

[Figure 1: Comparison of f(x) and f(x) − α g(x) for a value of α = 0.85. (a) Example f(x) and g(x). (b) f(x) and f(x) − α g(x). f − α g has lower variance than f alone, and so a Monte Carlo estimate of f − α g will have less error than an estimate of f.]

In fact, it can be shown [11] that the optimal value of α to minimize expected error is

    α = ρ σ_f / σ_g,    (5)

where σ_f and σ_g are the standard deviations of f(x) and g(x) respectively, and ρ is the correlation between f and g. Intuitively, if g is a good fit to f, ρ (and correspondingly α) will be high and ĝ will be trusted as a good estimate for f̂. If g is a poor fit to f, ρ and α will both be low and ĝ will be downplayed. Looking at it from a different perspective, (3) takes the expected value estimated from g(x) and corrects it with a term estimating the bias of g(x). Either way, by using (3) the error should be lower than using either Monte Carlo or the fitting function alone. Since the expected value of the fit is both added and subtracted, (4) remains an unbiased estimate of (1). Thus, (4) incorporates a fitter while remaining unbiased, and α allows us to emphasize good fits while de-emphasizing poor ones.

The obvious question is how to obtain g(x) and find α from data samples. The first step is to pick a functional form for a fitting algorithm g(x), such as a polynomial with unknown coefficients. g(x) can be nearly anything; the only restriction is that it can make predictions at new x values. For now we also require that g can be analytically integrated over the volume, i.e.
we can calculate

    ĝ = ∫ g(x) p(x) dx    (6)

analytically (see further discussion later in the paper). By comparing the output of a fit g(x) to the true f(x) values, an estimate for the "goodness" of the fit (and by extension α) could be obtained, and (4) could be applied. However, doing this directly would cause over-fitting and would lead to an inaccurate estimate of α.

Over-fitting can be mitigated by using a technique known as k-fold cross-validation [12]. The N data samples are randomly partitioned into k testing sets, D_{x_i,test}, i = 1, ..., k, which are mutually exclusive and contain the same number of samples (differing slightly if N/k is not an integer). Training sets, D_{x_i,train}, i = 1, ..., k, contain all of the data samples not in D_{x_i,test}. We call N_i the number of samples in D_{x_i,train} and m_i the number of samples in D_{x_i,test} (such that N_i + m_i = N). One fit per fold, g_i(x), is created using only the N_i samples in D_{x_i,train} (for a total of k fitters). The samples in a training set are called "held in" points because they are used to generate the fit, whereas points in the corresponding testing set are "held out" points because the fit is generated without using these samples. Each data sample is in a held out data set once and in the held in data sets k − 1 times. Using g_i(x), a prediction of the function values for the points in D_{x_i,test} is made (g_i(D_{x_i,test})).

Standard cross-validation evaluates the accuracy of each of the fits and chooses the best fit to use as a single g(x). Instead, we adopt the approach of stacking: we use (3) to get an estimate of f̂ from each of the fitters, using the held out data points to estimate the integral term,

    f̂_smc(i) = α ĝ_i + ∫ [f(x) − α g_i(x)] p(x) dx    (7)
    f̂_smc(i) = α ĝ_i + (1/m_i) Σ_{j=1}^{m_i} [f(D_{x_i,test}(j)) − α g_i(D_{x_i,test}(j))].
(8)

The mean of these estimates is taken as the final prediction:

    f̂ ≈ f̂_smc = (1/k) Σ_{i=1}^{k} f̂_smc(i).    (9)

We can also use the predictions at the held out points to estimate α: σ_f is estimated from the variance of the sample function values themselves, σ_g from the variance of the predictions, and ρ from the correlation between the predictions and the truth.

    µ_f = (1/N) Σ_{j=1}^{N} f(D_x(j))
    µ_g = (1/N) Σ_{i=1}^{k} Σ_{j=1}^{m_i} g_i(D_{x_i,test}(j))
    σ_f = sqrt[ (1/(N−1)) Σ_{j=1}^{N} (f(D_x(j)) − µ_f)² ]
    σ_g = sqrt[ (1/(N−1)) Σ_{i=1}^{k} Σ_{j=1}^{m_i} (g_i(D_{x_i,test}(j)) − µ_g)² ]
    cov(f, g) = (1/(N−1)) Σ_{i=1}^{k} Σ_{j=1}^{m_i} (f(D_{x_i,test}(j)) − µ_f)(g_i(D_{x_i,test}(j)) − µ_g)
    ρ = cov(f, g) / (σ_f σ_g).

Finally, we plug into (9):

    f̂_smc = (1/k) Σ_{i=1}^{k} f̂_smc(i) = (1/k) Σ_{i=1}^{k} [ α ĝ_i + (1/m_i) Σ_{j=1}^{m_i} (f(D_{x_i,test}(j)) − α g_i(D_{x_i,test}(j))) ].    (10)

One final correction is needed to complete StackMC. It is possible that some or all of the ĝ_i differ greatly from f̂ but α is calculated high because of good predictions at held out data points. If left alone, this would cause StackMC to return a low-quality prediction on some data sets. However, the error in the mean (EIM) of the Monte Carlo samples can be used as a second metric to evaluate the goodness of the fit. EIM is defined as σ̄ = σ/√N, and represents the uncertainty in the mean of a set of Monte Carlo samples. Specifically, it gives us a likelihood bound on f̂ based on f̂_mc and σ_f. We can find a "likelihood" for each fold by calculating

    L_i = |ĝ_i − f̂_mc| / σ̄.    (11)

The higher L_i, the less likely it is that ĝ_i = f̂, and the more likely it is that the fitter is bad and should be ignored. A heuristic is set that if max(L_i) > C, then f̂_smc = f̂_mc, i.e. all of the fits are ignored entirely and the Monte Carlo average is used.
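The procedure described so far — per-fold fits, α from the held-out predictions, the per-fold corrected estimates, and the max(L_i) > C fallback — can be sketched end-to-end. The sketch below is our own illustrative reimplementation, not the authors' code; it assumes a 1-D x ~ Uniform(−1, 1) and a cubic polynomial fitter, so each ĝ_i has the closed form β0 + β2/3, and the test function x⁴ (true expected value 1/5) is ours.

```python
import numpy as np

def stackmc_uniform(f_vals, xs, k=10, C=5.0, rng=None):
    """Sketch of StackMC for x ~ Uniform(-1, 1) with a cubic polynomial fitter."""
    rng = rng if rng is not None else np.random.default_rng()
    n = len(xs)
    perm = rng.permutation(n)
    folds = [perm[i::k] for i in range(k)]           # k disjoint held-out sets

    g_hats, preds = [], np.empty(n)
    for test in folds:
        train = np.setdiff1d(perm, test)             # held-in points for this fold
        beta = np.polyfit(xs[train], f_vals[train], 3)   # [b3, b2, b1, b0]
        # ĝ_i = ∫ g_i(x) (1/2) dx over [-1,1]: odd moments vanish, E[x^2] = 1/3.
        g_hats.append(beta[3] + beta[1] / 3.0)
        preds[test] = np.polyval(beta, xs[test])     # held-out predictions

    # alpha = rho * sigma_f / sigma_g = cov(f, g) / var(g), from held-out points (eq. 5).
    alpha = np.cov(f_vals, preds)[0, 1] / np.var(preds, ddof=1)

    f_mc = float(np.mean(f_vals))
    sigma_bar = np.std(f_vals, ddof=1) / np.sqrt(n)  # error in the mean
    # Heuristic (eq. 11): fall back to plain MC if any ĝ_i is implausibly far off.
    if max(abs(g - f_mc) for g in g_hats) > C * sigma_bar:
        return f_mc

    # Eqs. (8)-(9): per-fold corrected estimates, then their mean.
    ests = [alpha * g_hat + np.mean(f_vals[test] - alpha * preds[test])
            for g_hat, test in zip(g_hats, folds)]
    return float(np.mean(ests))

rng = np.random.default_rng(4)
xs = rng.uniform(-1.0, 1.0, size=200)
est = stackmc_uniform(xs**4, xs, k=10, rng=rng)      # true value is 1/5
```

For importance sampling the same scheme applies with f p/q in place of f, as discussed in Sec. 3.1.1 below.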
If C is set too low, then too many reasonable fits are ignored, and if C is set too high, then too many unreasonable fits are kept. A value of C = 5 was chosen based on experimentation; it was clear that setting C as low as 3 or as high as 7 was inferior.

Finally, a note about the bias of StackMC. Because the expected value of the fit is being both added and subtracted, (3) (and by extension, (7) and (9)) are unbiased for a fixed α. However, using the held-out data both to set α and to estimate the integral can introduce bias. In practice, we have found that this is only a problem for very small numbers of sample points, where the output fit of the fitting algorithm changes significantly depending on the held-in samples.

3.1 Generalized Stacked Monte Carlo

The discussion above was for the case where the samples are generated according to a known p(x); this is only a special case of a class of estimation scenarios. While the overall methodology remains approximately the same for the other scenarios, some specifics (such as the calculation of α) change with the choice for g(x) and the sample generation method.

3.1.1 Importance sampling

If importance sampling methods are used, samples are generated from q(x) instead of p(x). As a result, (1) is expanded as

    f̂ = ∫ α g(x) q(x) dx + ∫ [f(x) p(x)/q(x) − α g(x)] q(x) dx,

and so g(x) should be a fitting algorithm for f(x) p(x)/q(x) instead of just f(x). As a result, a few modifications are needed, including the fact that

    ĝ_i = ∫ g(x) q(x) dx,

and that in the calculation of α and (10), f p/q is used instead of just f.

3.1.2 Unable to integrate g(x) over p(x)

If the samples are generated from p(x), but the choice for g(x) cannot be integrated over p(x), there are two options. The first option is to use another function r(x) as a fitting algorithm for p(x) over which g(x) is integrable.
We expand (1) as

    f̂ = ∫ α g(x) r(x) dx + ∫ [f(x) p(x) − α g(x) r(x)] dx
       = ∫ α g(x) r(x) dx + ∫ [f(x) − α g(x) r(x)/p(x)] p(x) dx.

Similarly to the above case, when calculating α and (10), g is replaced with g r/p. Alternately, when g(x) is significantly less computationally expensive than f(x), ĝ itself can be estimated by Monte Carlo techniques. When estimating ĝ from N_g additional samples, it should be the case that N_g ≫ N so that errors in the estimate of ĝ do not cause significant errors in the estimate of f̂.

3.2 A Simple Example

We attempt to find the expected value of 3x⁶ + 3.6x⁵ − 91.29x⁴ − 19.41x³ + 57.15x² − 14.43x + 0.9, where x is generated according to a uniform distribution between −3 and 3. Thus, the integral of concern is

    f̂ = (1/2) ∫_{−3}^{3} (x⁶ + 1.2x⁵ − 30.43x⁴ − 6.47x³ + 19.05x² − 4.81x + 0.3) dx.

The actual value of f̂ can be calculated to be 0.7069. Twenty samples are generated and divided into five folds, with each fold's testing set containing four points and each corresponding training set containing the other sixteen points.

Table 1: Details of simple example calculations. For each left-out fold: the fitted coefficients β0–β3, the fold's ĝ_i, and the held-out points' x, f(x), and prediction g_i(x).

    Fold 1: β = (2.7385, -4.3737, -3.9500, -9.4712), ĝ_1 = 1.4281
        (x, f(x), g_1(x)): (0.4087, 0.2438, -0.3549); (-0.6950, 7.8350, 7.0498); (-0.0943, 0.9259, 3.1237); (0.1152, -0.0166, 2.1675)
    Fold 2: β = (2.4683, -3.3829, -3.0183, -11.3595), ĝ_2 = 1.4622
        (x, f(x), g_2(x)): (0.4117, 0.2420, -0.2284); (0.2745, 0.1108, 1.0774); (0.1823, -0.0163, 1.6825); (0.2882, 0.1342, 0.9708)
    Fold 3: β = (0.7900, 0.3755, 3.3397, -22.0257), ĝ_3 = 1.9032
        (x, f(x), g_3(x)): (-0.6318, 7.6689, 7.4398); (-0.3923, 4.7811, 2.4864); (-0.8345, 6.4358, 15.6002); (0.7716, -5.2874, -7.0489)
    Fold 4: β = (1.8054, -6.7386, -0.2738, -1.3468), ĝ_4 = 1.7141
        (x, f(x), g_4(x)): (0.5711, -0.5683, -2.3831); (-0.5988, 7.4412, 6.0312); (0.9607, -16.6302, -6.1154); (-0.6411, 7.7172, 6.3677)
    Fold 5: β = (2.3965, -3.8653, -3.4306, -10.9995), ĝ_5 = 1.2529
        (x, f(x), g_5(x)): (0.7124, -3.2834, -6.0740); (0.1206, -0.0208, 1.8609); (-0.3960, 4.8377, 4.0722); (0.2816, 0.1230, 0.7901)
While the function of interest is a sixth-order polynomial, g(x) was chosen to be a third-order polynomial: β0 + β1 x + β2 x² + β3 x³. g_1(x) is found by choosing the β's which minimize the squared error of the data in D_{x_1,train} (using the standard pseudo-inverse). g_2(x)–g_5(x) are set similarly. Next, g_i(D_{x_i,test}(j)) is evaluated for each test point in D_{x_i,test}. The following parameters are calculated to find α:

    µ_f = 1.1337
    µ_g = 1.9258
    σ_f = 5.6835
    σ_g = 5.2678
    cov(f, g) = 24.0999
    ρ = cov(f, g)/(σ_f σ_g) = 0.8049
    α = ρ σ_f/σ_g = 0.8685.

Finally, equation (10) is applied to calculate f̂_smc. Using the Monte Carlo estimate alone gives f̂_mc = 1.1337, and using a fit to all of the data points alone gives f̂_fit = 1.3412. By using StackMC, we find f̂_smc = 0.8081, much closer to the true value than either of the individual estimates. Details of the calculations can be seen in Tables 1 and 2, and the fit from the first fold can be seen in Fig. 2.

Table 2: Details of fold combination calculations.

    Fold     ĝ_i      (1/m_i) Σ_j [f(D_{x_i,test}(j)) − α g_i(D_{x_i,test}(j))]   Corrected ĝ_i
    1        1.4281   -0.3553                                                     0.8795
    2        1.4622   -0.6428                                                     0.6271
    3        1.9032   -0.6122                                                     1.0407
    4        1.7141   -1.3569                                                     0.1318
    5        1.2529   -0.2732                                                     1.3613
    f̂_smc                                                                         0.8081

[Figure 2: Example of the first fold.]

4 Results

We test the performance of StackMC on a number of different problems. In the first set of example cases the problems have known analytic results, and an exact analysis of the performance of StackMC is examined. In the second set, StackMC is applied to two aerospace problems from the literature. While an exact solution to the aerospace problems is not available, an approximate answer is obtained from a Monte Carlo estimate using 100,000 samples. For each example problem, a range of numbers of samples was tested, using two thousand simulations at each value of N. In each case, 10-fold cross-validation was used.
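The α computation in the worked example above follows directly from the definitions preceding equation (10). A minimal sketch with illustrative stand-in data (not the paper's samples):

```python
import numpy as np

rng = np.random.default_rng(6)

# Computing alpha (eq. 5) from function values and held-out predictions,
# mirroring the parameter list of the worked example (illustrative data only).
f_vals = rng.normal(size=20)
preds = 0.8 * f_vals + 0.3 * rng.normal(size=20)   # correlated stand-in predictions

mu_f, mu_g = f_vals.mean(), preds.mean()
sigma_f = f_vals.std(ddof=1)
sigma_g = preds.std(ddof=1)
cov_fg = np.sum((f_vals - mu_f) * (preds - mu_g)) / (len(f_vals) - 1)
rho = cov_fg / (sigma_f * sigma_g)
alpha = rho * sigma_f / sigma_g                    # equivalently cov(f,g)/var(g)
```

Note that ρ σ_f/σ_g reduces to cov(f, g)/σ_g², which is the standard optimal control-variate coefficient.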
Unless otherwise noted, the plots in this section have three lines which show the mean squared error versus the number of sample points. The lines represent the average squared error from using just the prediction of Monte Carlo (green), the fit to all the samples (red), and StackMC (blue).

4.1 Analytic Test Cases

4.1.1 Ten-dimensional Rosenbrock function under a uniform p(x), polynomial fitter

The D-dimensional Rosenbrock function is given by

    f(x) = Σ_{i=1}^{D−1} [(1 − x_i)² + 100(x_{i+1} − x_i²)²],

and x is drawn from a uniform distribution over the [−3,3] hypercube. The fitting algorithm is chosen as a third-order polynomial in each dimension, whose form is

    g(x) = β0 + Σ_{i=1}^{D} (β_{1,i} x_i + β_{2,i} x_i² + β_{3,i} x_i³),

where the β's are free parameters that are set from the data samples. A comparison of the error of StackMC can be seen in Fig. 3 for the ten-dimensional version of the Rosenbrock function. At low numbers of sample points, Monte Carlo is more accurate, on average, than the polynomial fit to all the data points, but at higher numbers of points the polynomial outperforms Monte Carlo. Throughout the entire range of numbers of samples, StackMC performs at least as well as the best of the two. Additionally, as can be seen in Fig. 4, the polynomial fitter is actually a biased fitter for this example problem. StackMC is able to use cross-validation to remove the bias of the fitter while keeping the accuracy of its estimate.

4.1.2 Ten-dimensional Rosenbrock function under a Gaussian p(x)

This is the same set-up as above, except that x is generated according to a multivariate Gaussian distribution, each dimension independent with µ = 0 and σ = 2. Much like the case above, in Fig. 5 it can be seen that at low numbers of sample points Monte Carlo outperforms the polynomial fit, whereas at high numbers of sample points the polynomial does better than Monte Carlo alone.
However, for all numbers of sample points, StackMC outperforms the other two algorithms.

4.1.3 20-dimensional BTButterfly, uniform p(x), Fourier fitter

In the previous examples, the Rosenbrock function is a 4th-order polynomial and the fitting algorithm is a 3rd-order polynomial, so StackMC had reasonable fits to use to improve upon Monte Carlo. In order to challenge StackMC, a function we call the BTButterfly was created so that it would be very difficult to fit accurately. Like the Rosenbrock function, its general form is given by

    f(x) = Σ_{i=1}^{D−1} h(x_i, x_{i+1}),

with the contour plot of h shown in Fig. 6. x is generated uniformly over the [−3,3] hypercube.

[Figure 3: Comparison of MC, fitter, and StackMC for the Rosenbrock function with uniform uncertainty.]

A Fourier expansion for g(x) was chosen whose form is given by

    g(x) = β0 + Σ_{i=1}^{D} [β_{1,i} cos(x_i) + β_{2,i} cos(2x_i) + β_{3,i} cos(3x_i) + β_{4,i} sin(x_i) + β_{5,i} sin(2x_i) + β_{6,i} sin(3x_i)].

Results for this function are shown in Fig. 7, and exhibit the same trends seen previously; StackMC has as low an error as the lower of the two methods individually.

4.2 Aerospace Applications

4.2.1 Future Aircraft Uncertainty Quantification (UQ)

In Ref. [13], the authors use the Program for Aircraft Synthesis Studies (PASS) [14], a conceptual aircraft design tool, to predict the fuel burn of future aircraft given certain assumptions about technology advancement in the 2020 and 2030 time frames. In their predictions for single-aisle aircraft in 2020, the authors model eight probabilistic variables representing different effects of improvements in aircraft technology (propulsion, structures, aerodynamics).
Each of the variables is represented by a unique beta distribution. The authors generated 15,000 samples (each representing one optimized aircraft), from which they measured the expected fuel-burn metric and the standard deviation of the expected fuel-burn metric. To apply StackMC, we choose a third-order polynomial fitter. A polynomial fitter is convenient for this case because E[x^n] over a beta distribution is easily found analytically as follows:

    E[x^n] = [Π_{i=1}^{n} (α + i − 1)] / [Π_{i=1}^{n} (α + β + i − 1)],    (12)

where α and β are the two parameters of the beta distribution, B(α, β) (not to be confused with the StackMC parameter α).

[Figure 4: Expected output of MC, fitter, and StackMC with error in the mean for the 10-D Rosenbrock function. Note that the fitting function is biased, especially at lower numbers of sample points, while StackMC and MC are not.]

[Figure 5: Comparison of MC, fitter, and StackMC for the 10-D Rosenbrock function with Gaussian uncertainty.]

[Figure 6: Contour plot of successive dimensions of the BTButterfly function.]

[Figure 7: Comparison of MC, fitter, and StackMC for the BTButterfly function with uniform uncertainty.]
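Equation (12) is a ratio of rising factorials of α and α + β. A small sketch (the shape parameters are illustrative, not the paper's), cross-checked against a Monte Carlo average:

```python
import numpy as np
from math import prod

def beta_moment(n, a, b):
    """E[x^n] for x ~ Beta(a, b) via eq. (12): rising factorials of a and a + b."""
    return prod(a + i for i in range(n)) / prod(a + b + i for i in range(n))

# Illustrative check for Beta(2, 5): E[x] = 2/7 and E[x^2] = (2*3)/(7*8) = 6/56;
# the mean is also compared against a sampled average below.
rng = np.random.default_rng(7)
samples = rng.beta(2.0, 5.0, size=500_000)
```

Having these moments in closed form is what makes the polynomial fitter convenient here: the integral of the fit against the beta density is just a weighted sum of the β coefficients and these moments.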
A set of 100,000 samples was generated, and the mean and standard deviation of their function values were taken as the "true" expected value and standard deviation. The standard deviation is defined as sqrt(E[x²] − E[x]²), and therefore, to find the standard deviation we can run StackMC twice: once to find E[x] and a second time to find E[x²]. The results of applying StackMC to find the expected value and standard deviation are shown in Fig. 8 and Fig. 9, respectively. The exact same trend is seen here as in all of the analytic problems. At low numbers of samples, where the fit to all the data points performs badly, StackMC does as well as Monte Carlo. At high numbers of samples, where the fit greatly outperforms Monte Carlo, StackMC does as well as the fitter. In between these two extremes, StackMC outperforms both algorithms. At 1500 samples, StackMC reduces the error by roughly an order of magnitude compared with the Monte Carlo result used by the authors. Each sample takes about 6 seconds to generate, whereas StackMC takes less than half a second to form the 10 fits, calculate α, and calculate (10). For a computational cost of less than one additional sample, StackMC has reduced the number of samples needed by 90% for the same expected error.

4.2.2 Sonic Boom Uncertainty Quantification

The uncertainty quantification of sonic boom noise is used as the second application case. Unlike the aircraft design test case, the response surface for the sonic boom noise signature is not smooth; the output can vary significantly with slight adjustments to the input parameters. Colonno and Alonso [15] recently created a new sonic boom propagation tool, SUBoom, and used it to analyze the robustness of several aircraft pressure signatures optimized for minimal boom noise.
Specifically, in their high-fidelity near-field case, they have 62 uncertain variables: 4 representing aircraft parameters such as cruise Mach number and roll angle, and 58 representing uncertainties in the near-field pressure signal. As in the aircraft UQ case, 100,000 samples were generated and used as the true mean. A third-order polynomial was used as $g(x)$. The results can be seen in Fig. 10.

Figure 8: Comparison of MC, fitter, and StackMC for the Aircraft UQ test case finding $E[x]$.

Figure 9: Comparison of MC, fitter, and StackMC for the Aircraft UQ test case finding $E[x^2]$.

Figure 10: Comparison of MC, fitter, and StackMC for the SUBoom UQ test case.

Even with the discontinuities in the function space, the same performance for StackMC is seen. StackMC does as well as the best of Monte Carlo and the fitting algorithm. The reduction in error is not as great here as in the Aircraft UQ case because a polynomial is not a good fit to $f(x)$, though using domain knowledge to improve the accuracy of the fitting algorithm would further increase the performance of StackMC. Despite the relatively poor performance of the fitting algorithm, it is still better to use StackMC than not to, regardless of the number of sample points used.
In some senses, this test case demonstrates one of the strengths of StackMC: at virtually no additional computational cost, the user is guaranteed the highest accuracy without having to worry about the appropriateness of the methodology for a given number of samples.

5 Conclusion

In this paper, we have introduced a new technique, Stacked Monte Carlo, which reduces the error of Monte Carlo sampling by blending several different fitters of $f(x)$. StackMC is unbiased (thus retaining a major advantage of Monte Carlo) and is able to use cross validation to effectively incorporate a fitting function. StackMC is an extremely generic post-processing technique. It is applied after the samples are generated and makes no assumptions about the generation method; therefore StackMC can be used not only with simple MC, but also with other sample generation techniques such as importance sampling, quasi-Monte Carlo, and Markov chain Monte Carlo. Furthermore, StackMC makes no assumptions about $f(x)$; it applies not only to smooth functions, but also to discontinuous functions and even functions with discrete variables. In a future paper [16] we will present results of the application of StackMC to different input regimes and different sample generation methods. We will examine higher-dimensional spaces, explore the application of StackMC to multi-fidelity methods, and extend StackMC to incorporate multiple fitting functions. The computation time of StackMC is dominated by forming the fits $g_i(x)$. As a result, StackMC is only affected by the curse of dimensionality insofar as the fitting algorithm is. In both of the aerospace applications, the additional cost of the entire StackMC algorithm was virtually nonexistent; there are significant improvements in accuracy for less computation time than the cost of one additional function evaluation.
The only assumptions made about $g(x)$ are that it be able to predict the value at new sample locations and that we be able to evaluate $\hat{g}$ analytically (and even this second assumption can be relaxed). As shown in the BTButterfly and sonic boom example cases, the fit does not need to be particularly good to see improvement, and so the choice of $g(x)$ can be modified as computational effort and domain knowledge allow. StackMC does not eliminate the need for finding better fitters; a better fitter will always lead to an improved result. With the exception of the error in the mean test (whose value we did not change for any of the example cases), StackMC requires no user-set parameters that need to be heuristically tuned to see good results. The version of StackMC implemented in this paper is relatively naive. Instead of taking the mean of the results from the different fits, taking a weighted combination of the fits could improve the estimate. StackMC currently partitions the data into folds only once, but we could imagine re-partitioning the same samples several times to have more fitters. We use Monte Carlo to estimate the integral term in (7), but using a fitter, or even using StackMC recursively, could improve the estimate of $\hat{f}_{smc}$. Furthermore, $\alpha$ was set as a constant, but in general $\alpha$ could vary between the folds or could even be a function of $x$ so that the confidence in the fit varies spatially. Despite this simplistic implementation, StackMC performs at least as well as the better of MC and the fitting function, and for a range of sample points outperforms both. Normally, there is a transition number of samples at which using a fit to all the data samples has a smaller average error than MC, and it is hard to know a priori whether it is better to use the fit. StackMC eliminates the need to decide whether or not to use a fit; it will never be harmful to do so.
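The fold structure described above can be sketched in a simplified one-dimensional form. This is an illustrative reconstruction, not the authors' code: it assumes $x \sim \mathrm{Uniform}[0, 1]$, a least-squares line as $g(x)$ (so $E_p[g]$ is analytic), and $\alpha$ fixed at 1, whereas the paper estimates $\alpha$ from the held-out data:

```python
import random

def stack_mc(xs, fs, k_folds=10, alpha=1.0):
    """Simplified StackMC sketch: K-fold held-out control variate with a
    least-squares line as the fitter g(x). Assumes x ~ Uniform[0, 1], so
    E_p[g] = c0 + c1/2 is analytic; alpha is fixed rather than estimated."""
    n = len(xs)
    idx = list(range(n))
    random.shuffle(idx)
    folds = [idx[k::k_folds] for k in range(k_folds)]
    total = 0.0
    for fold in folds:
        held_out = set(fold)
        train = [i for i in idx if i not in held_out]
        # Fit g(x) = c0 + c1*x on the K-1 training folds.
        mx = sum(xs[i] for i in train) / len(train)
        mf = sum(fs[i] for i in train) / len(train)
        c1 = (sum((xs[i] - mx) * (fs[i] - mf) for i in train)
              / sum((xs[i] - mx) ** 2 for i in train))
        c0 = mf - c1 * mx
        g_mean = c0 + c1 / 2.0                    # analytic E_p[g]
        for i in fold:                            # held-out correction term
            total += alpha * g_mean + (fs[i] - alpha * (c0 + c1 * xs[i]))
    return total / n

random.seed(1)
xs = [random.random() for _ in range(400)]
fs = [3.0 * x + random.gauss(0.0, 0.1) for x in xs]  # near-linear test f
print(stack_mc(xs, fs))  # close to the exact E[3x] = 1.5
```

Here $E_p[g]$ is analytic because $g$ is a polynomial and $p(x)$ is uniform; with a beta or Gaussian $p(x)$ the same term would follow from the corresponding analytic moments.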
While it is not true that for every set of data samples $\hat{f}_{smc}$ is closer to $\hat{f}$ than $\hat{f}_{mc}$, on average StackMC reduces the expected error. StackMC is a very generic method for the post-processing of data samples; it can be used by anyone trying to estimate an integral or expected value of a function where $p(x)$ is known.

References

[1] Haugh, M., "Variance Reduction Methods II," Monte Carlo Simulation: IEOR E4703, 2004.

[2] Kocis, L. and Whiten, W. J., "Computational Investigations of Low-Discrepancy Sequences," ACM Transactions on Mathematical Software, Vol. 23, No. 2, 1997, pp. 266–294.

[3] Sobol, I., "Uniformly distributed sequences with additional uniformity properties," USSR Comput. Math. Math. Phys., Vol. 16, No. 5, 1976, pp. 236–242.

[4] Wolpert, D., "Stacked Generalization," Neural Networks, Vol. 5, No. 2, 1992, pp. 241–260.

[5] Breiman, L., "Stacked Regressions," Machine Learning, Vol. 24, 1996, pp. 49–64.

[6] Smyth, P. and Wolpert, D., "Stacked density estimation," Proceedings of the 1997 Conference on Advances in Neural Information Processing Systems 10, NIPS '97, MIT Press, Cambridge, MA, USA, 1998, pp. 668–674.

[7] Kim, K. and Bartlett, E. B., "Error Estimation by Series Association for Neural Network Systems," Neural Computation, Vol. 7, No. 4, 1995, pp. 799–808.

[8] Rajnarayan, D. and Wolpert, D., "Exploiting Parametric Learning to Improve Black-Box Optimization."

[9] Sill, J., Takács, G., Mackey, L., and Lin, D., "Feature-Weighted Linear Stacking," CoRR, Vol. abs/0911.0460, 2009.

[10] Clarke, B., "Comparing Bayes Model Averaging and Stacking When Model Approximation Error Cannot Be Ignored," Journal of Machine Learning Research, Vol. 4, 2003, pp. 683–712.

[11] Haugh, M., "Variance Reduction Methods I," Monte Carlo Simulation: IEOR E4703, 2004.

[12] Duda, R. O., Hart, P. E., and Stork, D. G., Pattern Classification, Wiley, New York, 2nd ed., 2001.
[13] Economon, T., Copeland, S., Alonso, J., Zeinali, M., and Rutherford, D., "Design and Optimization of Future Aircraft for Assessing the Fuel Burn Trends of Commercial Aviation," 49th AIAA Aerospace Sciences Meeting, Orlando, FL, January 2011, AIAA Paper 2011-267.

[14] Kroo, I., "An Interactive System for Aircraft Design and Optimization," Aerospace Design Conference, Irvine, CA, 1992, AIAA 1992-1190.

[15] Colonno, M. and Alonso, J., "Sonic Boom Minimization Revisited: The Robustness of Optimal Low-Boom Designs," 13th AIAA/ISSMO Multidisciplinary Analysis Optimization Conference, Fort Worth, TX, September 2010, AIAA 2010-9364.

[16] Tracey, B., Wolpert, D., and Alonso, J., "Variance Reduction of Monte Carlo Methods Using Stacking," in submission.