Biology-Derived Algorithms in Engineering Optimization
Authors: Xin-She Yang
CHAPMAN: "C4754_C032" — 2005/5/6 — 23:44 — page 585 — #1

32
Biology-Derived Algorithms in Engineering Optimization

Xin-She Yang

32.1 Introduction .......................................................... 32-585
32.2 Biology-Derived Algorithms ................................. 32-586
     Genetic Algorithms • Photosynthetic Algorithms • Neural Networks • Cellular Automata • Optimization
32.3 Engineering Optimization and Applications ....... 32-590
     Function and Multilevel Optimizations • Shape Design and Optimization • Finite Element Inverse Analysis • Inverse IVBV Optimization
References ................................................................. 32-595

32.1 Introduction

Biology-derived algorithms are an important part of computational sciences, which are essential to many scientific disciplines and engineering applications. Many computational methods are derived from, or based on, analogies to natural evolution and biological activities; these biologically inspired computations include genetic algorithms, neural networks, cellular automata, and other algorithms. However, a substantial amount of computation today still uses conventional methods such as finite difference, finite element, and finite volume methods. New algorithms are often developed as hybrid combinations of biology-derived algorithms and conventional methods, and this is especially true in the field of engineering optimization. Engineering problems with optimization objectives are often difficult and time consuming, and the application of nature- or biology-inspired algorithms in combination with conventional optimization methods has been very successful over the last several decades.
There are five paradigms of nature-inspired evolutionary computation: genetic algorithms, evolutionary programming, evolutionary strategies, genetic programming, and classifier systems (Holland, 1975; Goldberg, 1989; Mitchell, 1996; Flake, 1998). The genetic algorithm (GA), developed by John Holland and his collaborators in the 1960s and 1970s, is a model or abstraction of biological evolution that includes the following operators: crossover, mutation, inversion, and selection. A population of individuals, corresponding to chromosomes, is represented in a computer as a set of character strings; the individuals then evolve through crossover and mutation of the strings inherited from the parents, and through selection or survival according to their fitness. Evolutionary programming (EP), first developed by Lawrence J. Fogel in 1960, is a stochastic optimization strategy similar to GAs. It differs from GAs in that there is no constraint on the representation of solutions in EP, and the representation often follows the problem. In addition, EP does not attempt to model genetic operations closely, in the sense that the crossover operation is not used. The mutation operation simply changes aspects of the solution according to a statistical distribution, such as multivariate Gaussian perturbations, instead of the bit-flipping often done in GAs. As the global optimum is approached, the rate of mutation is often reduced. Evolutionary strategies (ESs) were conceived by Ingo Rechenberg and Hans-Paul Schwefel in 1963, later joined by Peter Bienert, to solve technical optimization problems. Although they were developed independently of one another, ESs and EP have many similarities in implementation.
Typically, both operate on real values to solve real-valued function optimization, in contrast with the encoding in GAs. Multivariate Gaussian mutations with zero mean are used for each parent population, and appropriate selection criteria are used to determine which solutions to keep or remove. However, EP often uses stochastic selection via a tournament, where selection eliminates those solutions with the fewest wins, while ESs use a deterministic selection criterion that removes the worst solutions directly, based on the evaluations of certain functions (Heitkötter and Beasley, 2000). In addition, recombination is possible in an ES, as it is an abstraction of evolution at the level of individual behavior, in contrast to EP's abstraction of evolution at the level of reproductive populations with no recombination mechanisms. The aforementioned three areas have had the most impact on the development of evolutionary computation, and, in fact, evolutionary computation has been chosen as the general term that encompasses all these areas and some new ones. In recent years, two more paradigms in evolutionary computation have attracted substantial attention: genetic programming and classifier systems. Genetic programming (GP) was introduced in the early 1990s by John Koza (1992), and it extends GAs by using parse trees to represent functions and programs. The programs in the population consist of elements from the function sets, rather than fixed-length character strings, selected appropriately to be the solutions to the problems. The crossover operation is done through randomly selected subtrees in the individuals according to their fitness; the mutation operator is not used in GP. On the other hand, a classifier system (CFS), another invention by John Holland, is an adaptive system that combines many methods of adaptation with learning and evolution.
Such hybrid systems can adapt behaviors toward a changing environment by using GAs with added capacities such as memory, recursion, or iteration. In fact, we can essentially consider CFSs as general-purpose computing machines that are modified by both environmental feedback and the underlying GAs (Holland, 1975, 1995; Flake, 1998).

Biology-derived algorithms are applicable to a wide variety of optimization problems. For example, optimization functions can have discrete, continuous, or even mixed parameters, without any a priori assumptions about their continuity and differentiability. Thus, evolutionary algorithms are particularly suitable for parameter search and optimization problems. In addition, they are easy to implement in parallel. However, evolutionary algorithms are usually computationally intensive, and there is no absolute guarantee of the quality of the global optima found. Besides, the tuning of parameters can be very difficult for any given algorithm. Furthermore, there are many evolutionary algorithms with different suitabilities, and the best choice of a particular algorithm depends on the type and characteristics of the problem concerned. Nevertheless, great progress has been made over the last several decades in the application of evolutionary algorithms to engineering optimization. In this chapter, we will focus on some of the important areas of application of GAs in engineering optimization.

32.2 Biology-Derived Algorithms

There are many biology-derived algorithms that are popular in evolutionary computation. For engineering applications in particular, four types of algorithms are very useful and hence relevant. They are GAs, photosynthetic algorithms (PAs), neural networks, and cellular automata.
We will briefly discuss these algorithms in this section, and we will focus on the application of GAs and PAs in engineering optimization in Section 32.3.

32.2.1 Genetic Algorithms

The essence of GAs involves the encoding of an optimization function as arrays of bits or character strings to represent the chromosomes, the manipulation of these strings by genetic operators, and selection according to fitness, so as to find a solution to the problem concerned. This is often done by the following procedure: (1) encoding of the objectives or optimization functions; (2) defining a fitness function or selection criterion; (3) creating a population of individuals; (4) evolution cycles or iterations: evaluating the fitness of all individuals in the population; creating a new population by performing crossover, mutation, inversion, fitness-proportionate reproduction, etc.; and replacing the old population and iterating with the new population; (5) decoding the results to obtain the solution to the problem. One iteration of creating a new population is called a generation. Fixed-length character strings are used in most GAs during each generation, although there is substantial research on variable-length strings and coding structures. The coding of objective functions is usually in the form of binary arrays, or real-valued arrays in adaptive GAs. For simplicity, we use binary strings for coding when describing the genetic operators. Genetic operators include crossover, mutation, inversion, and selection. The crossover of two parent strings is the main operator, applied with the highest probability p_c (usually 0.6 to 1.0), and is carried out by switching one segment of one string with the corresponding segment of another string at a random position (see Figure 32.1).
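The single-point crossover and bit-flip mutation just described can be sketched in a few lines. This is a minimal illustration on binary strings; the probabilities p_c and p_m are the typical ranges quoted above, and the demonstration strings are arbitrary.

```python
import random

def crossover(parent1, parent2, pc=0.9):
    """Single-point crossover: with probability pc, swap the tails of the
    two strings at a random position; otherwise return the parents unchanged."""
    if random.random() < pc:
        point = random.randint(1, len(parent1) - 1)
        return (parent1[:point] + parent2[point:],
                parent2[:point] + parent1[point:])
    return parent1, parent2

def mutate(individual, pm=0.02):
    """Bit-flip mutation: each bit is flipped independently with probability pm."""
    return ''.join(('1' if b == '0' else '0') if random.random() < pm else b
                   for b in individual)

random.seed(1)
child1, child2 = crossover('110110101101', '110111101101', pc=1.0)
print(child1, child2)
print(mutate(child1, pm=0.05))
```

Selection would then keep or reproduce the fitter of the resulting strings, closing one generation of the cycle.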
Crossover carried out in this way is single-point crossover. Crossover at multiple points is also used in many GAs to increase the efficiency of the algorithm. The mutation operation is achieved by flipping randomly selected bits, and the mutation probability p_m is usually small (say, 0.001 to 0.05), while the inversion of some part of a string is done by interchanging 0 and 1. The selection of an individual in a population is carried out by the evaluation of its fitness; an individual can remain in the new generation if a certain threshold of fitness is reached, or reproduction within the population is fitness-proportionate. One of the key parts is the formulation or choice of the fitness function that determines the selection criterion in a particular problem. Further, just as crossover can be carried out at multiple points, mutations can also occur at multiple sites. More complex and adaptive GAs are being actively researched, and there is a vast literature on this topic. In Section 32.3, we will give examples of GAs and their applications in engineering optimization.

FIGURE 32.1 Diagram of crossover and mutation in a GA.

32.2.2 Photosynthetic Algorithms

The PA was first introduced by Murase (2000) to optimize parameter estimation in finite element inverse analysis. The PA is a good example of a biology-derived algorithm in the sense that its computational procedure corresponds well to the real photosynthesis process in green plants. Photosynthesis uses water and CO2 to produce glucose and oxygen when there is light and in the presence of chloroplasts.
The overall reaction,

6CO2 + 12H2O → C6H12O6 + 6O2 + 6H2O (in the presence of light and green plants),

is just a simple version of a complicated process. Other factors, such as temperature, concentration of CO2, and water content, being equal, the reaction efficiency depends largely on light intensity. The important part of photosynthetic reactions is the dark reactions, which consist of a biological process including two cycles: the Benson–Calvin cycle and the photorespiration cycle. The balance between these two cycles can be considered as a natural optimization procedure that maximizes the efficiency of sugar production under continuous variations of the light energy input (Murase, 2000). Murase's PA uses the rules governing the conversion of carbon molecules in the Benson–Calvin cycle (with a product or feedback of DHAP, dihydroxyacetone phosphate) and in the photorespiration reactions. The product DHAP serves as the knowledge string of the algorithm, and optimization is reached when the quality or fitness of the products no longer improves. An interesting feature of such algorithms is that the stimulation is a function of light intensity, which changes randomly and affects the rate of photorespiration. The ratio of O2 to CO2 concentration determines the ratio of the Benson–Calvin and photorespiration cycles.
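This light-driven balance can be sketched numerically. The fixation-rate formula r = Vmax/(1 + A/L) is Murase's (given in the steps below); the cycle-selection threshold used here is a hypothetical illustration, not prescribed by the algorithm.

```python
import random

def fixation_rate(L, Vmax=30.0, A=10000.0):
    """CO2 fixation rate r = Vmax / (1 + A/L) for light intensity L."""
    return Vmax / (1.0 + A / L)

def choose_cycle(L, Vmax=30.0, A=10000.0, threshold=0.5):
    """Pick the Benson-Calvin cycle when fixation is high relative to Vmax,
    otherwise the photorespiration cycle (threshold is illustrative only)."""
    r = fixation_rate(L, Vmax, A)
    return 'benson-calvin' if r / Vmax > threshold else 'photorespiration'

random.seed(0)
for _ in range(3):
    L = random.uniform(1e4, 5e4)   # randomly varying light intensity (lx)
    print(round(L), round(fixation_rate(L), 2), choose_cycle(L))
```

At low light the fixation rate drops and the photorespiration branch is taken more often, mimicking the balance described above.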
A PA consists of the following steps: (1) coding of the optimization function in terms of fixed-length DHAP strings (16-bit in Murase's PA) and random generation of a light intensity L; (2) the CO2 fixation rate r is then evaluated by the equation r = Vmax/(1 + A/L), where Vmax is the maximum fixation rate of CO2 and A is its affinity constant; (3) either the Benson–Calvin cycle or the photorespiration cycle is chosen for the next step, depending on the CO2 fixation rate, and the 16-bit strings are shuffled in both cycles according to the rule of carbon-molecule combination in photosynthetic pathways; (4) after some iterations, the fitness of the intermediate strings is evaluated, the best fit remains as a DHAP, and the results are decoded into the solution of the optimization problem (see Figure 32.2). In the next section, we will present an example of parametric inversion and optimization using a PA in finite element inverse analysis.

FIGURE 32.2 Scheme of Murase's PA.

FIGURE 32.3 Diagram of a McCulloch–Pitts neuron (left) and neural networks (right).

32.2.3 Neural Networks

Neural networks, and the associated machine-learning algorithms, are one more type of biology-inspired algorithm, using a network of interconnected neurons with the intention of imitating the neural activities in human brains. These neurons have high interconnectivity and feedback, and their connectivity can be weakened or enhanced during the learning process. The simplest model for a neuron is the McCulloch–Pitts model (see Figure 32.3) for multiple inputs u_1(t), . . .
, u_n(t), and the output u_i(t). The activation of a neuron, or a neuron's state, is determined by

u_i(t + 1) = H( Σ_{j=1}^{n} w_ij u_j(t) − Θ_i ),

where H(x) is the Heaviside unit step function, with H(x) = 1 if x ≥ 0 and H(x) = 0 otherwise. The weight coefficient w_ij is considered the synaptic strength of the connection from neuron j to neuron i. Each neuron can be activated only if its threshold Θ_i is reached. One can consider a single neuron as a simple computer that gives the output 1, or yes, if the weighted sum of incoming signals is greater than the threshold, and otherwise outputs 0, or no. Real power comes from the combination of nonlinear activation functions with multiple neurons (McCulloch and Pitts, 1943; Flake, 1998). Figure 32.3 also shows an example of a feed-forward neural network. The key element of an artificial neural network (ANN) is the novel structure of such an information processing system, which consists of a large number of interconnected processing neurons. These neurons work together to solve specific problems by adaptive learning through examples and self-organization. A trained neural network in a given category can solve and answer what-if type questions for a particular problem when new situations of interest are given. Due to their real-time capability, parallel architecture, and adaptive learning, neural networks have been applied to solve many real-world problems, such as industrial process control, data validation, pattern recognition, and other artificial-intelligence systems such as drug design and diagnosis of cardiovascular conditions. Optimization is just one such application, and often the optimization functions change with time, as is the case in industrial process control, target marketing, and business forecasting (Haykin, 1994).
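The McCulloch–Pitts update rule above can be written directly. This is a minimal sketch; the AND-gate weights and threshold are illustrative choices, not taken from the text.

```python
def mcculloch_pitts(inputs, weights, theta):
    """One McCulloch-Pitts neuron: output H(sum_j w_j*u_j - theta),
    where H is the Heaviside step function (H(x) = 1 for x >= 0)."""
    s = sum(w * u for w, u in zip(weights, inputs)) - theta
    return 1 if s >= 0 else 0

# With weights (1, 1) and threshold 2, the neuron acts as a logical AND gate.
for u1 in (0, 1):
    for u2 in (0, 1):
        print(u1, u2, '->', mcculloch_pitts((u1, u2), (1, 1), theta=2))
```

Layering many such units, and replacing H by a smooth nonlinear activation, gives the feed-forward networks of Figure 32.3.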
On the other hand, the training of a network may take considerable time, and a good training database, or examples specific to the particular problem, is required. Nevertheless, neural networks will gradually come to play an important role in engineering applications because of their flexibility and adaptability in learning.

32.2.4 Cellular Automata

Cellular automata (CA) were also inspired by biological evolution. On a regular grid of cells, each cell has a finite number of states. The states are updated according to certain local rules that are functions of the states of the neighboring cells and the current state of the cell concerned. The states of the cells evolve with time in a discrete manner, and complex characteristics can be observed and studied. For more details on this topic, readers can refer to Chapters 1 and 22 in this handbook. There is some similarity between finite state CA and conventional numerical methods such as finite difference methods. If one considers the finite difference method as a real-valued CA, and the real values are always converted to finite discrete values due to round-off in the implementation on a computer, then there is no substantial difference between a finite difference method and a finite state CA. However, CA are easier to parallelize and more numerically stable. In addition, finite difference schemes are based on differential equations, and it is sometimes straightforward to formulate a CA from the corresponding partial differential equations via an appropriate finite differencing procedure; however, it is usually very difficult to obtain, conversely, a differential equation for a given CA (see Chapter 22 in this handbook).
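A one-dimensional example makes the local-rule updating concrete. Elementary rule 90 is a standard illustrative choice, not one specific to this chapter; each cell's next state depends only on its own state and those of its two neighbors.

```python
def step(cells, rule=90):
    """One synchronous update of a 1D binary CA with periodic boundaries.
    The 3-cell neighborhood (left, self, right) indexes into the 8-bit
    rule table encoded in the integer `rule`."""
    n = len(cells)
    nxt = []
    for i in range(n):
        neighborhood = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        nxt.append((rule >> neighborhood) & 1)
    return nxt

cells = [0] * 11
cells[5] = 1                      # single seed cell
for _ in range(4):
    print(''.join('#' if c else '.' for c in cells))
    cells = step(cells)
```

Even this tiny rule produces the familiar self-similar (Sierpinski-like) growth from a single seed, illustrating how rich behavior emerges from purely local updates.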
An optimization problem can be solved using CA if the objective functions can be coded to be associated with the states of the CA and the parameters are properly associated with the automaton rules. This is an area under active research. One of the advantages of CA is that they can simulate many processes, such as reaction–diffusion, fluid flow, phase transitions, percolation, waves, and biological evolution. Artificial intelligence also uses CA intensively.

32.2.5 Optimization

Many problems in engineering and other disciplines involve optimizations that depend on a number of parameters, and the choice of these parameters affects the performance or objectives of the system concerned. The optimization target is often measured in terms of objective or fitness functions in qualitative models. Engineering design and testing often require an iterative process with parameter adjustment. Optimization problems are generally formulated as:

Optimize: f(x),
Subject to: g_i(x) ≥ 0, i = 1, 2, . . . , N; h_j(x) = 0, j = 1, 2, . . . , M,

where x = (x_1, x_2, . . . , x_n) and x ∈ Ω (the parameter space). Optimization can be expressed either as maximization or, more often, as minimization (Deb, 2000). As parameter variations are usually very large, systematic adaptive searching or optimization procedures are required. Over the past several decades, researchers have developed many optimization algorithms. Examples of conventional methods are hill climbing, gradient methods, random walk, simulated annealing, and heuristic methods. Examples of evolutionary or biology-inspired algorithms are GAs, photosynthetic methods, neural networks, and many others. The methods used to solve a particular problem depend largely on the type and characteristics of the optimization problem itself. There is no universal method that works for all problems, and there is generally no guarantee of finding the optimal solution in global optimization.
In general, we can aim for the best estimate or a suboptimal solution under the given conditions. Knowledge about the particular problem concerned is always helpful in making the appropriate choice of the best or most efficient methods for the optimization procedure. In this chapter, however, we focus mainly on biology-inspired algorithms and their applications in engineering optimization.

32.3 Engineering Optimization and Applications

Biology-derived algorithms such as GAs and PAs have many applications in engineering optimization. However, as mentioned earlier, the choice of methods for optimization in a particular problem depends on the nature of the problem and the quality of solutions concerned. We will now discuss optimization problems and related issues in various engineering applications.

32.3.1 Function and Multilevel Optimizations

For the optimization of a function using GAs, one way is to use the simplest GA with a fitness function F = A − y, with A a large constant and y = f(x); the objective is then to maximize the fitness function and thereby minimize the objective function f(x). However, there are many different ways of defining a fitness function. For example, we can use an individual fitness assignment relative to the whole population,

F(x_i) = f(x_i) / Σ_{i=1}^{N} f(x_i),

where x_i is the phenotypic value of individual i, and N is the population size. Consider the generalized De Jong (1975) test function

f(x) = Σ_{i=1}^{n} x_i^{2α}, |x_i| ≤ r, α = 1, 2, . . . , m,

where α is a positive integer and r is the half-length of the domain. This function has a minimum of f(x) = 0 at x = 0. For the values α = 3, r = 256, and n = 40, the results of optimizing this test function using GAs are shown in Figure 32.4.
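A toy GA run on this test function can be sketched as follows. This is an illustrative real-coded variant (n = 3 rather than 40, truncation selection, and Gaussian mutation instead of the binary coding used for Figure 32.4); all such choices are assumptions for brevity, not the chapter's exact setup.

```python
import random

def f(x, alpha=3):
    """Generalized De Jong test function: sum_i x_i^(2*alpha), minimum 0 at x = 0."""
    return sum(xi ** (2 * alpha) for xi in x)

def ga_minimize(n=3, r=256.0, pop_size=40, generations=300, seed=2):
    rng = random.Random(seed)
    pop = [[rng.uniform(-r, r) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=f)                      # rank by objective (lower is fitter)
        parents = pop[:pop_size // 2]        # truncation selection keeps the best half
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = rng.sample(parents, 2)
            cut = rng.randint(1, n - 1)      # single-point crossover
            child = p1[:cut] + p2[cut:]
            # Gaussian mutation, clipped to the domain |x_i| <= r
            child = [min(r, max(-r, xi + rng.gauss(0, 1.0))) for xi in child]
            children.append(child)
        pop = parents + children
    return min(pop, key=f)

best = ga_minimize()
print(best, f(best))
```

As in Figure 32.4, repeated runs with different seeds give slightly different best estimates, but f(x) decreases toward 0 as the generations proceed.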
The function just discussed is relatively simple in the sense that it is single-peaked. In reality, many functions are multi-peaked, and the optimization is thus multileveled. Keane (1995) studied the following bumpy function in a multi-peaked, multileveled optimization problem:

f(x, y) = sin²(x − y) sin²(x + y) / (x² + y²), 0 < x, y < 10.

The optimization problem is to find (x, y), starting at (5, 5), to maximize the function f(x, y) subject to x + y ≤ 15 and xy ≥ 3/4. In this problem, optimization is difficult because the function is nearly symmetrical about x = y, and, while the peaks occur in pairs, one is bigger than the other. In addition, the true maximum, f(1.593, 0.471) = 0.365, is defined by a constraint boundary. Figure 32.5 shows the surface variation of the multi-peaked bumpy function. Although the properties of this bumpy function make it difficult for most optimizers and algorithms, GAs and other evolutionary algorithms perform well for this function, and it has been widely used as a test function in GAs for comparative studies of various evolutionary algorithms and in multilevel optimization environments (El-Beltagy and Keane, 1999).

FIGURE 32.4 Function optimization using GAs. Two runs give slightly different results due to the stochastic nature of GAs (best estimates 0.046346 and 0.23486), but both produce better estimates, f(x) → 0, as the generation increases.

FIGURE 32.5 Surface of the multi-peaked bumpy function.

FIGURE 32.6 Diagram of the pressure vessel.
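The bumpy function of Section 32.3.1 and its constraints can be coded directly for use with any of the optimizers above. This is a sketch; the death-penalty treatment of infeasible points is one common choice, not prescribed by the text.

```python
import math

def bumpy(x, y):
    """Keane's multi-peaked test function on 0 < x, y < 10, as given in the text."""
    return (math.sin(x - y) ** 2) * (math.sin(x + y) ** 2) / (x ** 2 + y ** 2)

def feasible(x, y):
    """Constraints: x + y <= 15 and x*y >= 3/4."""
    return x + y <= 15 and x * y >= 0.75

def penalized(x, y):
    """Objective for feasible points, 0 otherwise (simple death penalty)."""
    return bumpy(x, y) if feasible(x, y) else 0.0

print(penalized(5.0, 5.0))       # the starting point: f = 0 since sin(x - y) = 0
print(penalized(1.593, 0.471))   # a point on the active constraint boundary xy = 3/4
```

Note the symmetry bumpy(x, y) = bumpy(y, x), which is what makes the paired peaks so hard to distinguish for a local search.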
32.3.2 Shape Design and Optimization

Most engineering design problems, especially in shape design, aim to reduce the cost, weight, and volume and to increase the performance and quality of the products. The optimization process starts with the transformation of the design specification and descriptions into optimization functions and constraints. The structure and parameters of a product depend on the functionality and manufacturability, and thus considerable effort has been put into modeling the design process and into search techniques for finding the optimal solution in the search space, which comprises the set of all designs with all allowable values of the design parameters (Renner and Ekart, 2003). Genetic algorithms have been applied in many areas of engineering design, such as conceptual design, shape optimization, data fitting, and robot path design. A well-studied example is the design of a pressure vessel (Kannan and Kramer, 1994; Coello, 2000) using different algorithms such as the augmented Lagrangian multiplier method and GAs. Figure 32.6 shows the diagram and parameter notation of the pressure vessel. The vessel is cylindrical and capped at both ends by hemispherical heads, with four design variables: thickness of the shell T_s, thickness of the head T_h, inner radius R, and length of the cylindrical part L. The objective of the design is to minimize the total cost, including the cost of the material, forming, and welding. Using the notation given by Kannan and Kramer, the optimization problem can be expressed as:

Minimize: f(x) = 0.6224 x_1 x_3 x_4 + 1.7781 x_2 x_3² + 3.1611 x_1² x_4 + 19.84 x_1² x_3,
x = (x_1, x_2, x_3, x_4)^T = (T_s, T_h, R, L)^T,

Subject to:
g_1(x) = −x_1 + 0.0193 x_3 ≤ 0,
g_2(x) = −x_2 + 0.00954 x_3 ≤ 0,
g_3(x) = −π x_3² x_4 − (4/3)π x_3³ + 1296000 ≤ 0,
g_4(x) = x_4 − 240 ≤ 0.

The values of x_1, x_2 should be integer multiples of 0.0625. Using the same constraints as given in Coello (2000), the variables lie in the ranges 1 ≤ x_1, x_2 ≤ 99 and 10.0000 ≤ x_3, x_4 ≤ 100.0000 (with four-decimal precision). By coding the GA with a population of 44-bit strings for each individual (4 bits each for x_1, x_2; 18 bits each for x_3, x_4), similar to that by Wu and Chow (1994), we can solve the optimization problem for the pressure vessel. After several runs, the best solution obtained is x* = (1.125, 0.625, 58.2906, 43.6926) with f(x) = $7197.9912, which compares well with the result x* = (1.125, 0.625, 58.291, 43.690) and f(x) = $7198.0428 obtained by Kannan and Kramer (1994).

32.3.3 Finite Element Inverse Analysis

The usage and efficiency of Murase's PA, described in Section 32.2.2, can be demonstrated in an application of finite element inverse analysis. Finite element analysis (FEA) in structural engineering is forward modeling, as the aim is to calculate the displacements at various positions for given loading conditions and material properties such as Young's modulus (E) and Poisson's ratio (ν). This forward FEA is widely used in engineering design and applications. Sometimes, inverse problems need to be solved: for a given structure with known loading conditions and measured displacements, the objective is to find, or invert for, the material properties E and ν, which may be required for testing new materials and for design optimization. It is well known that inverse problems are usually very difficult, and this is especially true for finite element inverse analysis. To show how this works, we use a test example similar to that proposed by Murase (2000).
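The inversion-as-optimization idea can be illustrated on a deliberately tiny stand-in for such a problem: a single axial bar with displacement u = FL/(EA), where we recover E from a "measured" u by minimizing the misfit. All numbers here are hypothetical, and a real finite element inverse analysis replaces the one-line forward model with a full FEA solve.

```python
import random

def forward(E, F=1000.0, L=2.0, A=0.01):
    """Forward model: axial displacement of a bar, u = F*L/(E*A)."""
    return F * L / (E * A)

def invert_E(u_measured, trials=20000, seed=3):
    """Recover Young's modulus by random search over a plausible range,
    minimizing the squared misfit between predicted and measured displacement."""
    rng = random.Random(seed)
    best_E, best_err = None, float('inf')
    for _ in range(trials):
        E = rng.uniform(1e4, 1e6)            # hypothetical search range for E
        err = (forward(E) - u_measured) ** 2
        if err < best_err:
            best_E, best_err = E, err
    return best_E

true_E = 2.1e5
u = forward(true_E)                          # synthetic "measurement"
print(invert_E(u))                           # should land close to 2.1e5
```

The PA or a GA plays exactly the role of the random search here, but explores the parameter space far more efficiently; the beam example below uses eight parameters instead of one.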
A simple beam system of 5 unit length × 10 unit length (see Figure 32.7) consists of five nodes and four elements, whose Young's moduli and Poisson's ratios may differ. Nodes 1 and 2 are fixed, and a unit vertical load is applied at node 4, while the other nodes deform freely. Let us denote the displacement vector U = (u_1, v_1, u_2, v_2, u_3, v_3, u_4, v_4, u_5, v_5)^T, where u_1 = v_1 = u_2 = v_2 = 0 (fixed). Measurements are made of the other displacements. Using the PA with the values of the CO2 affinity A = 10000, light intensity L = 10^4 to 5 × 10^4 lx, and maximum CO2 fixation speed Vmax = 30, each of the eight elastic parameters (E_i, ν_i) (i = 1, 2, 3, 4) is coded as a 16-bit DHAP molecule string. For a target vector Y = (E_1, ν_1, E_2, ν_2, E_3, ν_3, E_4, ν_4) = (600, 0.25, 400, 0.35, 450, 0.30, 350, 0.32) and measured displacements U = (0, 0, 0, 0, −0.0066, −0.0246, 0.0828, −0.2606, 0.0002, −0.0110), the best estimate after 500 iterations of optimization by the PA is Y = (580, 0.24, 400, 0.31, 460, 0.29, 346, 0.26).

FIGURE 32.7 Beam system for finite element inverse analysis.

32.3.4 Inverse Initial-Value, Boundary-Value Problem Optimization

The inverse initial-value, boundary-value problem (IVBV) is an optimization paradigm in which GAs have been used successfully (Karr et al., 2000).
Some conventional algorithms for solving such search optimizations are trial-and-error iteration methods, which usually start with a guessed solution, substitute it into the partial differential equations and associated boundary conditions, calculate the errors between predicted values and measured or known values at various locations, and then obtain new guessed or improved solutions by corrections according to the errors. The aim is to minimize the differences or errors, and the procedure stops once the given precision or tolerance criterion is satisfied. In this way, the inverse problem is actually transformed into an optimization problem. We now use the heat equation and the inverse procedure discussed by Karr et al. (2000) as an example to illustrate IVBV optimization. On a square plate of unit dimensions, the diffusivity κ(x, y) varies with location (x, y). The heat equation and its initial and boundary conditions can be written as:

∂u/∂t = ∇ · [κ(x, y)∇u], 0 < x, y < 1, t > 0,
u(x, y, 0) = 1,
u(x, 0, t) = u(x, 1, t) = u(0, y, t) = u(1, y, t) = 0.

The domain is discretized as an N × N grid, with measurements of the values at (x_i, y_j, t_n), (i, j = 1, 2, . . . , N; n = 1, 2, 3). The data set consists of the measured values at N² points at three different times t_1, t_2, t_3. The objective is to invert, or estimate, the N² diffusivity values at the N² distinct locations. Karr's error metrics are defined as

E_u = A [ Σ_{i=1}^{N} Σ_{j=1}^{N} |u_{i,j}^{measured} − u_{i,j}^{computed}| ] / [ Σ_{i=1}^{N} Σ_{j=1}^{N} u_{i,j}^{measured} ],

E_κ = A [ Σ_{i=1}^{N} Σ_{j=1}^{N} |κ_{i,j}^{known} − κ_{i,j}^{predicted}| ] / [ Σ_{i=1}^{N} Σ_{j=1}^{N} κ_{i,j}^{known} ],

where A = 100 is a constant. The floating-point GA proposed by Karr et al.
for the inverse IVBV optimization can be summarized as the following procedure: (1) randomly generate a population containing $N$ potential solutions to the IVBV problem, each represented as a vector because the diffusivity $\kappa$ varies with location; (2) compute the error metric for each potential solution vector; (3) generate $N$ new potential solution vectors by genetic operators such as crossover and mutation, where selection favors high-quality solutions so as to minimize the error metric and thereby remove solutions with large errors; (4) iterate until the best acceptable solution is found.

On a grid of $16 \times 16$ points with a target diffusivity matrix, after 40,000 random $\kappa(i, j)$ matrices were generated, the best values of the error metrics were $E_u = 4.6050$ and $E_\kappa = 1.50 \times 10^{-2}$. Figure 32.8 shows the error metric $E_\kappa$ associated with the best solution determined by the GA. The small values of the error metric imply that the inverted diffusivity matrix is very close to the true diffusivity matrix. With some modifications, this type of GA can also be applied to the inverse analysis of other problems, such as the inverse parametric estimation for the Poisson and wave equations. In addition, nonlinear problems can be studied using GAs.

FIGURE 32.8 Error metric $E_\kappa$ associated with the best solution obtained by the GA.

The optimization methods using biology-derived algorithms and their engineering applications have been summarized.
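The four-step GA procedure above can be sketched in a few lines of Python. As a minimal illustration, the PDE forward solve is replaced by directly comparing candidate diffusivity vectors against a known target through a Karr-style error metric; the population size, selection scheme, and mutation parameters below are assumptions for illustration, not the settings used by Karr et al. (2000).

```python
import random

A = 100.0  # scaling constant in the Karr-style error metric

def error_metric(known, predicted):
    """Karr-style relative error: A * sum|known - pred| / sum|known|."""
    num = sum(abs(k - p) for k, p in zip(known, predicted))
    return A * num / sum(abs(k) for k in known)

def evolve(target, pop_size=40, generations=300, seed=1):
    """Minimal floating-point GA following steps (1)-(4): random initial
    population, error-metric scoring, crossover plus mutation, iterate
    while discarding large-error solutions."""
    rng = random.Random(seed)
    n = len(target)
    # (1) random initial population of candidate diffusivity vectors
    pop = [[rng.uniform(0.0, 1.0) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        # (2) score every candidate with the error metric
        pop.sort(key=lambda v: error_metric(target, v))
        elite = pop[: pop_size // 2]  # selection drops large-error vectors
        # (3) refill the population by uniform crossover and Gaussian mutation
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            child = [max(0.0, g + rng.gauss(0.0, 0.02))
                     if rng.random() < 0.2 else g for g in child]
            children.append(child)
        pop = elite + children  # (4) iterate until acceptable
    return min(pop, key=lambda v: error_metric(target, v))

# Toy "diffusivity" recovery standing in for the full 16 x 16 grid inversion.
kappa_true = [0.2, 0.8, 0.5, 0.3, 0.6]
best = evolve(kappa_true)
```

Because the elite half is carried over unchanged each generation, the best error metric is non-increasing, mirroring how the GA steadily removes solutions with large errors.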
We used four examples to show how GAs and PAs can be applied to solve optimization problems in multilevel function optimization, shape design of pressure vessels, finite element inverse analysis of material properties, and the inversion of a diffusivity matrix as an IVBV problem. Biology-inspired algorithms have many advantages over traditional optimization methods, such as hill-climbing and calculus-based techniques, owing to their parallelism and their ability to locate good approximate solutions in very large search spaces. Furthermore, more powerful and flexible new-generation algorithms can be formulated by combining existing and new evolutionary algorithms with classical optimization methods.

References

Chipperfield, A.J., Fleming, P.J., and Fonseca, C.M. Genetic algorithm tools for control systems engineering. In Proceedings of Adaptive Computing in Engineering Design and Control, Plymouth, pp. 128–133 (1994).

Coello, C.A. Use of a self-adaptive penalty approach for engineering optimization problems. Computers in Industry, 41 (2000) 113–127.

De Jong, K. Analysis of the Behaviour of a Class of Genetic Adaptive Systems, Ph.D. thesis, University of Michigan, Ann Arbor, MI (1975).

Deb, K. Optimization for Engineering Design: Algorithms and Examples, Prentice-Hall, New Delhi (1995).

Deb, K. An efficient constraint handling method for genetic algorithms. Computer Methods in Applied Mechanics and Engineering, 186 (2000) 311–338.

El-Beltagy, M.A. and Keane, A.J. A comparison of various optimization algorithms on a multilevel problem. Engineering Applications of Artificial Intelligence, 12 (1999) 639–654.

Flake, G.W. The Computational Beauty of Nature: Computer Explorations of Fractals, Chaos, Complex Systems, and Adaptation, MIT Press, Cambridge, MA (1998).

Goldberg, D.E.
Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley, Reading, MA (1989).

Haykin, S. Neural Networks: A Comprehensive Foundation, Macmillan, New York (1994).

Heitkotter, J. and Beasley, D. Hitch Hiker's Guide to Evolutionary Computation (2000).

Holland, J. Adaptation in Natural and Artificial Systems, University of Michigan Press, Ann Arbor, MI (1975).

Holland, J.H. Hidden Order: How Adaptation Builds Complexity, Addison-Wesley, Reading, MA (1995).

Jenkins, W.M. On the applications of natural algorithms to structural design optimization. Engineering Structures, 19 (1997) 302–308.

Kannan, B.K. and Kramer, S.N. An augmented Lagrange multiplier based method for mixed integer discrete continuous optimization and its application to mechanical design. Journal of Mechanical Design, Transactions of the ASME, 116 (1994) 318–320.

Karr, C.L., Yakushin, I., and Nicolosi, K. Solving inverse initial-value, boundary-valued problems via genetic algorithms. Engineering Applications of Artificial Intelligence, 13 (2000) 625–633.

Keane, A.J. Genetic algorithm optimization of multi-peak problems: Studies in convergence and robustness. Artificial Intelligence in Engineering, 9 (1995) 75–83.

Koza, J.R. Genetic Programming: On the Programming of Computers by Means of Natural Selection, MIT Press, Cambridge, MA (1992).

McCulloch, W.S. and Pitts, W. A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, 5 (1943) 115–133.

Michalewicz, Z. Genetic Algorithms + Data Structures = Evolution Programs, Springer-Verlag, New York (1996).

Mitchell, M. An Introduction to Genetic Algorithms, MIT Press, Cambridge, MA (1996).

Murase, H.
Finite element inverse analysis using a photosynthetic algorithm. Computers and Electronics in Agriculture, 29 (2000) 115–123.

Pohlheim, H. Genetic and Evolutionary Algorithm Toolbox for Matlab (geatbx.com) (1999).

Rechenberg, I. Evolutionsstrategie: Optimierung technischer Systeme nach Prinzipien der biologischen Evolution (1973).

Renner, G. and Ekart, A. Genetic algorithms in computer aided design. Computer-Aided Design, 35 (2003) 709–726.

Wu, S.Y. and Chow, P.T. Genetic algorithms for solving mixed-discrete optimization problems. Journal of the Franklin Institute, 331 (1994) 381–401.