A Memetic Algorithm for the Generalized Traveling Salesman Problem

The generalized traveling salesman problem (GTSP) is an extension of the well-known traveling salesman problem. In GTSP, we are given a partition of cities into groups and we are required to find a minimum length tour that includes exactly one city from each group. The recent studies on this subject consider different variations of a memetic algorithm approach to the GTSP. The aim of this paper is to present a new memetic algorithm for GTSP with a powerful local search procedure. The experiments show that the proposed algorithm clearly outperforms all of the known heuristics with respect to both solution quality and running time. While the other memetic algorithms were designed only for the symmetric GTSP, our algorithm can solve both symmetric and asymmetric instances.

Authors: Gregory Gutin, Daniel Karapetyan

1 Introduction

The generalized traveling salesman problem (GTSP) is defined as follows. We are given a weighted complete directed or undirected graph G and a partition V = V_1 ∪ V_2 ∪ … ∪ V_M of its vertices; the subsets V_i are called clusters. The objective is to find a minimum weight cycle containing exactly one vertex from each cluster. There are many publications on GTSP (see, e.g., the surveys [4, 6] and the references there) and the problem has many applications, see, e.g., [2, 13]. The problem is NP-hard, since the traveling salesman problem (TSP) is a special case of GTSP when |V_i| = 1 for each i. GTSP is trickier than TSP in the following sense: it is an NP-hard problem to find a minimum weight collection of vertex-disjoint cycles such that each cluster has only one vertex in the collection (and the claim holds even when each cluster has just two vertices) [7].
Compare it with the well-known fact that a minimum weight collection of vertex-disjoint cycles in a weighted complete digraph can be found in polynomial time [8]. We call GTSP and TSP symmetric if the complete graph G is undirected and asymmetric if G is directed. Often, instead of the term weight, we use the term length.

∗ This is a modified version of the paper "A Memetic Algorithm for the Generalized Traveling Salesman Problem" by G. Gutin, D. Karapetyan and N. Krasnogor published in the proceedings of NICSO 2007.
† Department of Computer Science, Royal Holloway University of London, Egham, Surrey TW20 0EX, UK, gutin@cs.rhul.ac.uk
‡ Department of Computer Science, Royal Holloway University of London, Egham, Surrey TW20 0EX, UK, daniel.karapetyan@gmail.com

Various approaches to GTSP have been studied. There are exact algorithms such as branch-and-bound and branch-and-cut algorithms in [5]. While exact algorithms are very important, they are unreliable with respect to their running time, which can easily reach many hours or even days. For example, the well-known TSP solver Concorde can easily solve some TSP instances with several thousand cities, but it could not solve several asymmetric instances with 316 cities within the time limit of 10^4 sec. (in fact, it appears it would fail even if significantly more time was allowed) [5]. Several researchers use transformations from GTSP to TSP [2], as there exists a large variety of exact and heuristic algorithms for the TSP, see, e.g., [8, 14]. However, while the known transformations normally allow one to produce GTSP optimal solutions from the obtained optimal TSP tours, none of the known transformations preserves suboptimal solutions. Moreover, conversions of near-optimal TSP tours may well result in infeasible GTSP solutions.
Thus, the transformations do not allow us to obtain quickly approximate GTSP solutions, and there is a necessity for specific GTSP heuristics. Not every TSP heuristic can be extended to GTSP; for example, the so-called subtour patching heuristics often used for the Asymmetric TSP, see, e.g., [10], cannot be extended to GTSP due to the above-mentioned NP-hardness result from [7]. It appears that the only metaheuristic algorithms that can compete with Lin–Kernighan-based local search for TSP are memetic algorithms [9, 15], which combine the powers of genetic and local search algorithms [11, 20]. Thus, it is no coincidence that the latest studies in the area of GTSP explore the memetic algorithm approach [17, 18, 19]. The aim of this paper is to present a new memetic algorithm for GTSP with a powerful local search part. Unlike the previous heuristics, which can be used for the symmetric GTSP only, our algorithm can be used for both symmetric and asymmetric GTSPs. The computational experiments show that our algorithm clearly outperforms all published memetic heuristics [17, 18, 19] with respect to both solution quality and running time.

2 The Genetic Algorithm

Our heuristic is a memetic algorithm, which combines the power of a genetic algorithm with that of local search [9, 12]. We start with a general scheme of our heuristic, which is similar to the general schemes of many memetic algorithms.

Step 1 Initialize. Construct the first generation of solutions. To produce a solution we use a semirandom construction heuristic (see Subsection 2.2).

Step 2 Improve. Use a local search procedure to replace each of the first generation solutions by the local optimum. Eliminate duplicate solutions.

Step 3 Produce next generation. Use reproduction, crossover, and mutation genetic operators to produce the non-optimized next generation.
Each of the genetic operators selects parent solutions from the previous generation. The length of a solution is used as the evaluation function.

Step 4 Improve next generation. Use a local search procedure to replace each of the current generation solutions, except the reproduced ones, by the local optimum. Eliminate duplicate solutions.

Step 5 Evolute. Repeat Steps 3–4 until a termination condition is reached.

2.1 Coding

The Genetic Algorithm (GA) requires each solution to be coded in a chromosome, i.e., to be represented by a sequence of genes. Unlike [18, 19], we use a natural coding of the solutions as in [17]. The coded solution is a sequence of numbers (s_1 s_2 … s_M) such that s_i is the vertex at position i of the solution. For example, (2 5 9 4) represents the cycle visiting vertex 2, then vertex 5, then vertex 9, then vertex 4, and then returning to vertex 2. Note that not every sequence corresponds to a feasible solution, as a feasible solution should contain exactly one vertex from each cluster, i.e., C(s_i) ≠ C(s_j) for any i ≠ j, where C(v) is the cluster containing vertex v.

Note that, using the natural coding, each solution can be represented by M different chromosomes: the sequence can be 'rotated', i.e., the first gene can be moved to the end of the chromosome or the last gene can be inserted before the first one, and these operations will preserve the cycle. For example, chromosomes (2 5 9 4) and (5 9 4 2) represent the same solution. We need to take this into account when considering several solutions together, i.e., in precisely two cases: when we compare two solutions, and when we apply the crossover operator. In these cases we 'normalise' the chromosomes by rotating each of them such that the vertex v ∈ V_1 (the vertex that represents cluster 1) takes the first place in the chromosome.
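This normalisation can be sketched as follows (a minimal Python sketch, not the authors' implementation; the chromosome is taken to be a list of vertices, and cluster_of is an assumed map from a vertex to its cluster number):

```python
def normalize(chromosome, cluster_of):
    # Rotate the chromosome so that the vertex belonging to cluster 1
    # occupies the first gene; the represented cycle is unchanged.
    i = next(k for k, v in enumerate(chromosome) if cluster_of[v] == 1)
    return chromosome[i:] + chromosome[:i]
```

For instance, if vertex 5 is the one belonging to cluster 1, normalize([2, 5, 9, 4], cluster_of) returns [5, 9, 4, 2].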
For example, if we had a chromosome (2 5 9 4) and vertex 5 belongs to cluster 1, we rotate the chromosome in the following way: (5 9 4 2). In the case of the symmetric problem the chromosome can also be 'reflected' while preserving the solution. But our heuristic is designed for both symmetric and asymmetric instances and, thus, the chromosomes (1 5 9 4) and (4 9 5 1) are considered as chromosomes corresponding to distinct solutions.

The main advantage of the natural coding is its efficiency in the local search. As the local search is the most time-consuming part of our heuristic, the coding should be optimized for it.

2.2 First Generation

We produce 2M solutions for the first generation, where M is the number of clusters. The solutions are generated by a semirandom construction heuristic. The semirandom construction heuristic generates a random cluster permutation and then finds the best vertex in each cluster when the order of clusters is given by the permutation. It chooses the best vertex selection within the given cluster sequence using the Cluster Optimization heuristic (see Section 3). The advantages of the semirandom construction heuristic are that it is fast and its cycles have no regularity. The latter is important, as each completely deterministic heuristic can cause solution uniformity and, as a result, some solution branches can be lost.

2.3 Next Generations

Each generation except the first one is based on the previous generation. To produce the next generation one uses genetic operators, which are algorithms that construct a solution or two from one or two so-called parent solutions. Parent solutions are chosen from the previous generation using some selection strategy. We perform r runs of reproduction, 8r runs of crossover, and 2r runs of the mutation operator. The value r is calculated as r = 0.2G + 0.05M + 10, where G is the number of generations produced before the current one. (Recall that M is the number of clusters.) As a result, we obtain at most 11r solutions in each generation but the first one (since we remove duplicate solutions from the population, the number of solutions in each generation can be smaller than 11r). From generation to generation, one can expect the number of local minima found by the algorithm to increase. Also, this number can be expected to grow when the number of clusters M grows. Thus, in the formula above r depends on both G and M. All the coefficients in the formulas of this section were obtained in computational experiments, where several other values of the coefficients were also tried. Note that slight variations in the selection of the coefficients do not influence significantly the results of the algorithm.

2.4 Reproduction

Reproduction is a process of simply copying solutions from the previous generation. The reproduction operator requires a selection strategy to select the solutions from the previous generation to be copied. In our algorithm we select the r (see Subsection 2.3) shortest solutions from the previous generation to copy them to the current generation.

2.5 Crossover

A crossover operator is a genetic operator that combines two different solutions from the previous generation. We use a modification of the two-point crossover introduced by Silberholz and Golden [17] as an extension of the Ordered Crossover [3]. Our crossover operator produces just one child solution (r_1 r_2 … r_M) from the parent solutions (p_1 p_2 … p_M) and (q_1 q_2 … q_M). At first it selects a random position a and a random fragment length 1 ≤ l < M and copies the fragment [a, a + l) of the first parent to the beginning of the child solution: r_i = p_{i+a} for each i = 0, 1, …, l − 1.
(We assume that s_{i+M} = s_i for the solution (s_1 s_2 … s_M) and for any 1 ≤ i ≤ M.) To produce the rest of the child solution, we introduce a sequence q′ as follows: q′_i = q_{i+a+l−1}, where i = 1, 2, …, M. Then, for each i such that the cluster C(q′_i) is already visited by the child solution r, the vertex q′_i is removed from the sequence: q′ = (q′_1 q′_2 … q′_{i−1} q′_{i+1} …). As a result, l vertices will be removed: |q′| = M − l. Now the child solution r should be extended by the sequence q′: r = (r_1 r_2 … r_l q′_1 q′_2 … q′_{M−l}). A feature of this crossover is that it preserves the vertex order of both parents.

Crossover example. Let the first parent be (1 2 3 4 5 6 7) and the second parent (3 2 5 7 6 1 4) (here we assume, for explanation clarity, that every cluster contains exactly one vertex: V_i = {i}). First of all we rotate the parent solutions such that C(p_1) = C(q_1) = 1: p = (1 2 3 4 5 6 7) (remains the same) and q = (1 4 3 2 5 7 6). Now we choose a random fragment in the parent solutions: p = (1 2 | 3 4 | 5 6 7), q = (1 4 | 3 2 | 5 7 6), and copy this fragment from the first parent p to the child solution: r = (3 4). Next we produce the sequence q′ = (5 7 6 1 4 3 2) and remove vertices 3 and 4 from it, as the corresponding clusters are already visited by r: q′ = (5 7 6 1 2). Finally, we extend the child solution r by q′: r = (3 4 5 7 6 1 2).

The crossover operator requires some strategy to select two parent solutions from the previous generation. In our algorithm an elitist strategy is used; the parents are chosen randomly among the best 33% of all the solutions in the previous generation.

2.6 Mutation

A mutation operator partially modifies some solution from the previous generation. The modification should be stochastic and usually worsens the solution.
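The crossover of Subsection 2.5, including the normalisation of the parents, can be sketched as follows (a minimal Python sketch, not the authors' implementation; cluster_of is an assumed vertex-to-cluster map, and a and l are the fragment position and length, here passed in rather than drawn at random):

```python
def rotate_to_cluster_one(chrom, cluster_of):
    # Normalise: rotate so the cluster-1 vertex comes first.
    i = next(k for k, v in enumerate(chrom) if cluster_of[v] == 1)
    return chrom[i:] + chrom[:i]

def crossover(p, q, cluster_of, a, l):
    # a: 0-based fragment start, l: fragment length, 1 <= l < M.
    p = rotate_to_cluster_one(p, cluster_of)
    q = rotate_to_cluster_one(q, cluster_of)
    M = len(p)
    child = [p[(a + i) % M] for i in range(l)]       # fragment [a, a+l) of p
    used = {cluster_of[v] for v in child}
    # q rotated to start right after the fragment; vertices whose clusters
    # are already visited by the child are skipped.
    tail = [q[(a + l + i) % M] for i in range(M)]
    child += [v for v in tail if cluster_of[v] not in used]
    return child
```

With singleton clusters, parents (1 2 3 4 5 6 7) and (3 2 5 7 6 1 4), and the fragment (3 4) (a = 2, l = 2), the sketch reproduces the worked example: the child is (3 4 5 7 6 1 2).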
The goal of the mutation is to increase the solution diversity in the generation. Our mutation operator removes a random fragment of the solution and inserts it in some random position. The size of the fragment is selected between 0.05M and 0.3M. An elitist strategy is used in our algorithm; the parent is selected randomly among the best 75% of all the solutions in the previous generation.

Mutation example. Let the parent solution be (1 2 3 4 5 6 7). Let the random fragment start at 2 and be of length 3. The new fragment position is 3, for example. After removing the fragment we have (1 5 6 7). Now insert the fragment (2 3 4) at position 3: (1 5 2 3 4 6 7).

2.7 Termination condition

For the termination condition we use the concept of idle generations. We call a generation idle if the best solution in this generation has the same length as the length of the best solution in the previous generation. In other words, if the produced generation has not improved the solution, it is idle. The heuristic stops after some number of idle generations are produced sequentially.

In particular, we implemented the following new condition. Let I(l) be the number of sequential idle generations with the best solution of length l. Let I_cur = I(l_cur), where l_cur is the current best solution length. Let I_max = max_{l > l_cur} I(l). Then our heuristic stops if I_cur ≥ max(1.5 I_max, 0.05M + 5). This formula means that we are ready to wait for the next improvement 1.5 times more generations than we have ever waited previously. The constant 0.05M + 5 is the minimum boundary for the number of generations we are ready to wait for an improvement. All the coefficients used in the formula were found empirically.
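The termination rule above can be sketched as follows (a hypothetical helper, not the authors' code; it assumes the idle streaks I(l) observed at earlier, worse, best lengths are kept in a list):

```python
def should_stop(current_idle, past_idle_streaks, M):
    # current_idle: I_cur, idle generations at the current best length.
    # past_idle_streaks: the values I(l) for every earlier best length l > l_cur.
    longest_past = max(past_idle_streaks, default=0)  # I_max
    return current_idle >= max(1.5 * longest_past, 0.05 * M + 5)
```

For example, with M = 100 and previous idle streaks of 4 and 10 generations, the heuristic stops once the current streak reaches 15 (= 1.5 · 10), but not at 14.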
2.8 Asymmetric instances

Our algorithm is designed to process both symmetric and asymmetric instances equally; however, some parameters should take different values for these types of instances for the purpose of high efficiency. In particular, we double the size of the first generation (4M instead of 2M, see Subsection 2.2) and increase the minimum number of idle generations by 5 (i.e., I_cur ≥ max(1.5 I_max, 0.05M + 10)). The local improvement procedure (see below) also has some differences for symmetric and asymmetric instances.

3 Local Improvement Part

We use a local improvement procedure for each solution added to the current generation. The local improvement procedure runs several local search heuristics sequentially. The following local search heuristics are used in our algorithm:

• Swaps tries to swap every non-neighboring pair of vertices. The heuristic applies all the improvements found during one cycle of swaps.

• k-Neighbor Swap tries different permutations of every solution subsequence (s_1 s_2 … s_k). In particular, it tries all the non-trivial permutations which are not covered by any i-Neighbor Swap, i = 2, 3, …, k − 1. For each permutation the best selection of the vertices within the considered cluster subsequence is calculated. The best permutation is accepted if it improves the solution. The heuristic applies all the improvements found during one cycle.

• 2-opt tries to replace every non-adjacent pair of edges s_i s_{i+1} and s_j s_{j+1} in the solution by the edges s_i s_j and s_{i+1} s_{j+1} if the new edges are lighter, i.e., the sum of their weights is smaller than the sum of the weights of the old edges. The heuristic applies all the improvements found.

• Direct 2-opt is a modification of the 2-opt heuristic.
Direct 2-opt selects a number of the longest edges contained in the solution and then tries all the non-adjacent pairs of the selected edges. It replaces edges s_i s_{i+1} and s_j s_{j+1} with the edges s_i s_j and s_{i+1} s_{j+1} if the new edges are shorter, i.e., the sum of their weights is smaller than the sum of the weights of the old edges. The heuristic applies all the improvements found.

• Inserts tries to remove a vertex from the solution and to insert it in a different position. The best vertex in the inserted cluster is selected after the insertion. The insertion is accepted if it improves the solution. The heuristic tries every combination of the old and the new positions except the neighboring positions and applies all the improvements found.

• Cluster Optimization (CO) uses the shortest (s, t)-path algorithm for acyclic digraphs (see, e.g., [1]) to find the best vertex for each cluster when the order of clusters is fixed. This heuristic was introduced by Fischetti, Salazar-González and Toth [5] (see its detailed description also in [4]). The CO heuristic uses the fact that the shortest (s, t)-path in an acyclic digraph can be found in polynomial time. Let the given solution be represented by the chromosome (s_1 s_2 … s_M). The algorithm builds an acyclic digraph G_CO = (V_CO, E_CO), where V_CO = V ∪ C′(s_1) is the set of the GTSP instance vertices extended by a copy of the cluster C(s_1), and E_CO is the set of edges in the digraph G_CO. (Recall that C(x) is the cluster containing the vertex x.) An edge xy ∈ E_CO if and only if C(x) = C(s_i) and C(y) = C(s_{i+1}) for some i < M, or if C(x) = C(s_M) and C(y) = C′(s_1). For each vertex s ∈ C(s_1) and its copy s′ ∈ C′(s_1), the algorithm finds the shortest (s, s′)-path in G_CO. The algorithm selects the shortest path (s p_2 p_3 … p_M s′) and returns the chromosome (s p_2 p_3 … p_M), which is the best vertex selection within the given cluster sequence.

Note that the algorithm's time complexity grows linearly with the size of the cluster C(s_1). Thus, before applying the CO algorithm, we rotate the initial chromosome in such a way that |C(s_1)| = min_{i ≤ M} |C_i|.

For each local search algorithm with some cluster optimization embedded, i.e., for k-Neighbour Swap and Inserts, we use a speed-up heuristic. We calculate a lower bound l_new of the new solution length and compare it with the previous length l_prev before the optimization of the vertices within the clusters. If l_new ≥ l_prev, the solution modification is declined immediately. For the purpose of the new length lower bound calculation, we assume that the unknown edges, i.e., the edges adjacent to the vertices that should be optimized, have the length of the shortest edges between the corresponding clusters.

Some of these heuristics form a heuristic-vector H as follows:

  Symmetric instances                    Asymmetric instances
  Inserts                                Swaps
  Direct 2-opt for M/4 longest edges     Inserts
  2-opt                                  Direct 2-opt for M/4 longest edges
  2-Neighbour Swap                       2-opt
  3-Neighbour Swap                       2-Neighbour Swap
  4-Neighbour Swap                       3-Neighbour Swap

The improvement procedure applies all the local search heuristics from H cyclically. Once some heuristic fails to improve the tour, it is excluded from H. If the 2-opt heuristic fails, we also exclude Direct 2-opt from H. Once H is empty, the CO heuristic is applied to the solution and the improvement procedure stops.

4 Results of Computational Experiments

We tested our heuristic using GTSP instances which were generated from some TSPLIB [16] instances by applying the standard clustering procedure of Fischetti, Salazar, and Toth [5].
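Returning briefly to the Cluster Optimization heuristic of Section 3: for a fixed cluster order, it amounts to a layered shortest-path computation. A minimal dynamic-programming sketch (not the authors' code; it assumes a weight function w(u, v) and returns the cycle length only, whereas the real heuristic also recovers the vertex sequence):

```python
def cluster_optimize_length(clusters, w):
    # Best vertex selection for a fixed cluster order (Cluster Optimization).
    # clusters: list of vertex lists in tour order; w(u, v): edge weight.
    # Each start vertex in the first cluster is tried, then one layered
    # shortest-path pass runs over the remaining clusters.
    best = float("inf")
    for s in clusters[0]:
        dist = {s: 0}
        for layer in clusters[1:]:
            dist = {v: min(dist[u] + w(u, v) for u in dist) for v in layer}
        best = min(best, min(dist[u] + w(u, s) for u in dist))  # close the cycle
    return best
```

As the paper notes, the work grows linearly with the size of the first cluster, which is why the chromosome is rotated so that the smallest cluster comes first before CO is applied.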
Note that our heuristic is designed for medium and large instances and, thus, we selected all the instances with 40 to 217 clusters. Unlike [17, 18, 19], smaller instances are not considered. All the information necessary for reproducing our experiments is available online at www.cs.rhul.ac.uk/Research/ToC/

• All the instances considered in our experiments. For the purpose of simplicity and efficiency we use a uniform binary format for instances of all types.
• The binary format definition.
• Source codes of binary format reading and writing procedures.
• Source codes of the clustering procedure [5] to convert TSP instances into GTSP instances.
• Source codes of the TSPLIB files reading procedure.
• Source codes of our memetic algorithm.
• Source codes of our experimentation engine.

The tables below show the experiment results. We compare the following heuristics:

GK is the heuristic presented in this paper.
SG is the heuristic by Silberholz and Golden [17].
SD is the heuristic by Snyder and Daskin [18].
TSP is the heuristic by Tasgetiren, Suganthan, and Pan [19].

The results for GK and SD were obtained in our own experiments. The other results are taken from the corresponding papers. Each test of GK and SD includes ten algorithm runs. The results for SG and TSP were produced after five runs.

To compare the running times of all the considered heuristics, we need to convert the running times of SG and TSP obtained from the corresponding papers to the running times on our evaluation platform. Let us assume that the running time of some Java-implemented algorithm on the SG evaluation platform is t_SG = k_SG · t_GK, where k_SG is some constant and t_GK is the running time of the same but C++-implemented algorithm on our evaluation platform.
Let us assume that the running time of some algorithm on the TSP evaluation platform is t_TSP = k_TSP · t_GK, where k_TSP is some constant and t_GK is the running time of the same algorithm on our evaluation platform.

The computer used for the GK and SD evaluation has an AMD Athlon 64 X2 3.0 GHz processor. The computer used for SG has an Intel Pentium 4 3.0 GHz processor. The computer used for TSP has an Intel Centrino Duo 1.83 GHz processor. Heuristics GK, SD, and TSP are implemented in C++ (GK is implemented in C#, but the most time-critical fragments are implemented in C++). Heuristic SG is implemented in Java. Some rough estimation of Java performance in combinatorial optimisation applications shows that a C++ implementation could be approximately two times faster than the Java implementation. As a result, the adjusting coefficient k_SG ≈ 3 and the adjusting coefficient k_TSP ≈ 2.

We are able to compare the results of SD heuristic tests gathered from different papers to check the k_SG and k_TSP values, because SD has been evaluated on each of the platforms of our interest (the heuristic was implemented in Java in [17] for the exact comparison to SG). The time ratio between the SD running times from [17] and our own results varies significantly for different problems, but for some middle-size problems the ratio is about 2.5 to 3. These results correlate well with the previous estimation. The suggested value k_TSP ≈ 2 is also confirmed by this method.

The headers of the tables in this section are as follows:

Name is the instance name. The prefix number is the number of clusters in the instance; the suffix number is the number of vertices.

Error, % is the error, in percent, of the average solution above the optimal value. The error is calculated as (value − opt) / opt × 100%, where value is the obtained solution length and opt is the optimal solution length.
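For concreteness, this error figure can be computed as follows (a trivial sketch of the formula above):

```python
def percent_error(value, opt):
    # Error of the (average) heuristic tour length above the optimum, in percent.
    return (value - opt) / opt * 100
```

For example, a tour of length 300 against an optimum of 200 gives an error of 50.0%.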
The exact optimal solutions are known from [2] and [5] for 17 of the considered instances only. For the rest of the problems we use the best solutions ever obtained in our experiments instead.

Time, sec is the average running time for the considered heuristic in seconds. The running times for SG and for TSP are obtained from the corresponding papers; thus these values should be adjusted using the k_SG and k_TSP coefficients, respectively, before the comparison.

Quality impr., % is the improvement of the average solution quality of GK with respect to some other heuristic. The improvement is calculated as E_H − E_GK, where E_H is the average error of the considered heuristic H and E_GK is the average error of our heuristic.

Time impr. is the improvement of the GK average running time with respect to some other heuristic's running time. The improvement is calculated as T_H / T_GK, where T_H is the average running time of the considered heuristic H and T_GK is the average running time of our heuristic.

Opt., % is the number of tests, in percent, in which the optimal solution was reached. The value is displayed for three heuristics only, as we do not have it for SG.

Opt. is the best known solution length. The exact optimal solutions are known from [5] and [2] for 17 of the considered instances only. For the rest of the problems we use the best solutions ever obtained in our experiments.

Value is the average solution length.

# gen. is the average number of generations produced by the heuristic.

The results of the experiments presented in Table 1 show that our heuristic (GK) has clearly outperformed all other heuristics with respect to solution quality.
For each of the considered instances, the average solution reached by our heuristic is always not worse than the average solution reached by any other heuristic, and the percent of the runs in which the optimal solution was reached is not less than for any other considered heuristic (note that we are not able to compare our heuristic with SG with respect to this value). The average values are calculated for four instance sets (IS). The Full IS includes all the instances considered in this paper, both symmetric and asymmetric. The Sym. IS includes all the symmetric instances considered in this paper. The SG IS includes all the instances considered in both this paper and [17]. The TSP IS includes all the instances considered in both this paper and [19].

One can see that the average quality of our GK heuristic is approximately 10 times better than that of the SG heuristic and approximately 30 times better than that of SD, and for the TSP IS our heuristic reaches the optimal solution in each run and for each instance, in contrast to TSP, which has a 0.44% average error. The maximum error of GK is 0.27%, while the maximum error of SG is 2.25% and the maximum error of SD is 3.84%.

The running times of the considered heuristics are presented in Table 2. The running time of GK is not worse than the running time of any other heuristic for every instance: the minimum time improvement with respect to SG is 6.6, which is greater than 3 (recall that 3 is the adjusting coefficient for the SG evaluation platform, see above), the time improvement with respect to SD is never less than 1.0 (recall that both heuristics were tested on the same platform), and the minimum time improvement with respect to TSP is 4.6, which is greater than 2 (recall that 2 is the adjusting coefficient for the TSP evaluation platform, see above).
The average time improvement is roughly 12 times over SG (about 4 times once the difference in platforms is taken into account), roughly 3 times over SD, and roughly 11 times over TSP (about 5 times once the platform difference is taken into account).

4 Results of Computational Experiments

Tab. 1: Solver quality comparison ("—" marks solvers that were not run on an instance).

                    Error, %                      Quality impr., %           Opt., %
Name                GK     SG     SD     TSP      SG     SD     TSP          GK     SD     TSP
40d198              0.00   0.00   0.00   0.00     0.00   0.00   0.00         100    100    100
40kroa200           0.00   0.00   0.00   0.00     0.00   0.00   0.00         100    100    100
40krob200           0.00   0.05   0.01   0.00     0.05   0.01   0.00         100    70     100
41gr202             0.00   —      0.00   —        —      0.00   —            100    100    —
45ts225             0.00   0.14   0.09   0.04     0.14   0.09   0.04         100    0      60
45tsp225            0.00   —      0.01   —        —      0.01   —            100    90     —
46pr226             0.00   0.00   0.00   0.00     0.00   0.00   0.00         100    100    100
46gr229             0.00   —      0.03   —        —      0.03   —            100    60     —
53gil262            0.00   0.45   0.31   0.32     0.45   0.31   0.32         100    30     60
53pr264             0.00   0.00   0.00   0.00     0.00   0.00   0.00         100    100    100
56a280              0.00   0.17   0.08   —        0.17   0.08   —            100    70     —
60pr299             0.00   0.05   0.05   0.03     0.05   0.05   0.03         100    20     60
64lin318            0.00   0.00   0.38   0.46     0.00   0.38   0.46         100    50     60
65rbg323 (asym.)    0.00   —      —      —        —      —      —            100    —      —
72rbg358 (asym.)    0.00   —      —      —        —      —      —            100    —      —
80rd400             0.00   0.58   0.60   0.91     0.58   0.60   0.91         100    0      20
81rbg403 (asym.)    0.00   —      —      —        —      —      —            100    —      —
84fl417             0.00   0.04   0.02   0.00     0.04   0.02   0.00         100    40     100
87gr431             0.00   —      0.30   —        —      0.30   —            100    40     —
88pr439             0.00   0.00   0.28   0.00     0.00   0.28   0.00         100    20     80
89pcb442            0.00   0.01   1.30   0.86     0.01   1.30   0.86         100    0      0
89rbg443 (asym.)    0.13   —      —      —        —      —      —            50     —      —
99d493              0.11   0.47   1.28   —        0.36   1.17   —            10     0      —
107ali535           0.00   —      1.36   —        —      1.36   —            100    0      —
107att532           0.01   0.35   0.72   —        0.34   0.72   —            80     0      —
107si535            0.00   0.08   0.32   —        0.08   0.32   —            100    0      —
113pa561            0.00   1.50   3.57   —        1.50   3.57   —            100    0      —
115u574             0.02   —      1.54   —        —      1.52   —            80     0      —
115rat575           0.20   1.12   3.22   —        0.93   3.03   —            90     0      —
131p654             0.00   0.29   0.08   —        0.29   0.08   —            100    0      —
132d657             0.15   0.45   2.32   —        0.29   2.16   —            30     0      —
134gr666            0.11   —      3.74   —        —      3.62   —            70     0      —
145u724             0.14   0.57   3.49   —        0.43   3.35   —            50     0      —
157rat783           0.11   1.17   3.84   —        1.06   3.72   —            20     0      —
200dsj1000          0.12   —      2.45   —        —      2.33   —            30     0      —
201pr1002           0.14   0.24   3.43   —        0.10   3.29   —            30     0      —
207si1032           0.03   0.37   0.93   —        0.34   0.91   —            20     0      —
212u1060            0.27   2.25   3.60   —        1.98   3.33   —            30     0      —
217vm1084           0.19   0.90   3.68   —        0.71   3.49   —            60     0      —
Full IS average     0.04   —      —      —        —      —      —            81     —      —
Sym. IS average     0.05   —      1.43   —        —      1.38   —            77     16     —
SG IS average       0.06   0.54   1.57   —        0.47   1.50   —            72     11     —
TSP IS average      0.00   0.21   0.45   0.44     0.21   0.45   0.44         100    17     43

Tab. 2: Solver running time comparison. The time improvement is the ratio of the competitor's running time to that of GK.

                    Time, sec                              Time impr., times
Name                GK       SG        SD       TSP        SG      SD      TSP
40d198              0.14     1.63      1.18     1.22       11.6    8.4     8.7
40kroa200           0.14     1.66      0.26     0.79       12.1    1.9     5.8
40krob200           0.16     1.63      0.80     2.70       10.2    5.0     16.8
41gr202             0.21     —         0.65     —          —       3.2     —
45ts225             0.24     1.71      0.46     1.42       7.0     1.9     5.8
45tsp225            0.19     —         0.55     —          —       2.9     —
46pr226             0.10     1.54      0.63     0.46       15.5    6.4     4.6
46gr229             0.25     —         1.14     —          —       4.6     —
53gil262            0.31     3.64      0.85     4.51       11.7    2.7     14.5
53pr264             0.24     2.36      0.82     1.10       10.0    3.5     4.7
56a280              0.38     2.92      1.14     —          7.7     3.0     —
60pr299             0.42     4.59      1.74     3.08       10.9    4.1     7.3
64lin318            0.45     8.08      1.42     8.49       18.1    3.2     19.0
65rbg323 (asym.)    1.14     —         —        —          —       —       —
72rbg358 (asym.)    1.26     —         —        —          —       —       —
80rd400             1.07     14.58     3.53     13.55      13.7    3.3     12.7
81rbg403 (asym.)    0.98     —         —        —          —       —       —
84fl417             0.73     8.15      3.17     6.74       11.1    4.3     9.2
87gr431             2.01     —         4.01     —          —       2.0     —
88pr439             1.48     19.06     4.68     20.87      12.9    3.2     14.1
89pcb442            1.72     23.43     4.26     23.14      13.6    2.5     13.4
89rbg443 (asym.)    3.69     —         —        —          —       —       —
99d493              4.17     35.72     6.34     —          8.6     1.5     —
107ali535           5.82     —         7.75     —          —       1.3     —
107att532           3.45     31.70     8.04     —          9.2     2.3     —
107si535            1.88     26.35     6.06     —          14.1    3.2     —
113pa561            3.22     21.08     6.37     —          6.5     2.0     —
115u574             3.76     —         11.48    —          —       3.1     —
115rat575           4.12     48.48     9.19     —          11.8    2.2     —
131p654             2.82     32.67     13.23    —          11.6    4.7     —
132d657             6.82     132.24    15.40    —          19.4    2.3     —
134gr666            14.46    —         21.06    —          —       1.5     —
145u724             11.61    161.82    22.00    —          13.9    1.9     —
157rat783           15.30    152.15    22.70    —          9.9     1.5     —
200dsj1000          50.14    —         84.30    —          —       1.7     —
201pr1002           34.83    464.36    63.04    —          13.3    1.8     —
207si1032           36.76    242.37    34.99    —          6.6     1.0     —
212u1060            44.76    594.64    65.81    —          13.3    1.5     —
217vm1084           59.82    562.04    87.38    —          9.4     1.5     —
Full IS total       321.0    —         —        —          —       —       —
Sym. IS total/avg.  314.0    —         516.4    —          —       2.9     —
SG IS total/avg.    237.1    2600.6    385.5    —          11.6    3.0     —
TSP IS total/avg.   7.2      92.1      23.8     88.1       12.2    3.9     10.5

The stability of GK is high. For example, for the 89pcb442 instance it produces only exact solutions, and over 100 runs the standard deviation of the running time is 0.27 sec; the minimum running time is 1.29 sec, the maximum is 2.45 sec, and the average is 1.88 sec. For 100 runs of 217vm1084, the average running time is 65.32 sec, the minimum is 44.30 sec, the maximum is 99.54 sec, and the standard deviation is 13.57 sec. The average solution value is 130994 (0.22% above the best known), the minimum is 130704 (exactly the best known), the maximum is 131845 (0.87% above the best known), and the standard deviation is 331.

Some details on the GK experiments are presented in Table 3. The table includes the average number of generations produced by the heuristic. One can see that this number is relatively small: SD and TSP limit the number of generations to 100 while considering only instances with M < 90, and SG terminates the algorithm after 150 idle generations. Our heuristic does not require many generations because of its powerful local search procedure and large population sizes.
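The idle-generation termination just discussed can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: the toy instance, the function names, the simple swap mutation, and the two simplified local-search passes (a one-pass cluster re-selection and a plain symmetric 2-opt, standing in for the paper's longer, carefully ordered chain of heuristics) are all our assumptions.

```python
import math
import random

# Illustrative sketch only (NOT the paper's code): a memetic-style loop that
# stops after `max_idle` consecutive generations without an improvement of
# the best tour, mirroring the idle-generation termination described above.

def tour_length(tour, dist):
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def two_opt(tour, dist):
    """Symmetric-TSP-style 2-opt: reverse segments while that shortens the tour."""
    best = tour[:]
    improved = True
    while improved:
        improved = False
        for i in range(1, len(best) - 1):
            for j in range(i + 1, len(best)):
                cand = best[:i] + best[i:j + 1][::-1] + best[j + 1:]
                if tour_length(cand, dist) < tour_length(best, dist) - 1e-12:
                    best, improved = cand, True
    return best

def cluster_opt(tour, clusters, vertex_cluster, dist):
    """One pass of cluster re-selection: at each position, keep the cheapest
    vertex of that cluster given its (currently fixed) neighbours."""
    best = tour[:]
    n = len(best)
    for i in range(n):
        prev, nxt = best[i - 1], best[(i + 1) % n]
        members = clusters[vertex_cluster[best[i]]]
        best[i] = min(members, key=lambda v: dist[prev][v] + dist[v][nxt])
    return best

def local_search(tour, clusters, vertex_cluster, dist):
    # The order of the passes matters; here cluster_opt runs before 2-opt.
    return two_opt(cluster_opt(tour, clusters, vertex_cluster, dist), dist)

def memetic_sketch(clusters, dist, max_idle=150, seed=0):
    rng = random.Random(seed)
    vertex_cluster = {v: c for c, vs in enumerate(clusters) for v in vs}
    # Start from one random vertex per cluster, improved by local search.
    best = local_search([rng.choice(vs) for vs in clusters],
                        clusters, vertex_cluster, dist)
    idle = 0
    while idle < max_idle:
        cand = best[:]
        i, j = rng.sample(range(len(cand)), 2)   # mutation: swap two positions
        cand[i], cand[j] = cand[j], cand[i]
        cand = local_search(cand, clusters, vertex_cluster, dist)
        if tour_length(cand, dist) < tour_length(best, dist) - 1e-12:
            best, idle = cand, 0                 # improvement: reset idle count
        else:
            idle += 1                            # idle generation
    return best

# Toy instance: three clusters of two vertices each, Euclidean distances.
points = [(0, 0), (0, 1), (5, 0), (5, 1), (10, 0), (10, 1)]
dist = [[math.dist(p, q) for q in points] for p in points]
clusters = [[0, 1], [2, 3], [4, 5]]
best = memetic_sketch(clusters, dist)
```

On this toy instance the loop settles on a tour that picks the three vertices sharing one y-coordinate, the shortest feasible choice; the cluster-optimization pass does most of the work, and the idle counter merely decides when to give up.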
5 Conclusion

We have developed a new memetic algorithm for GTSP that dominates all known GTSP heuristics with respect to both solution quality and running time. Unlike the other memetic algorithms introduced in the literature, our heuristic is able to solve both symmetric and asymmetric instances of GTSP. The improvement is achieved due to the powerful local search, well-fitted genetic operators, and a new, efficient termination condition.

Our local search (LS) procedure consists of several LS heuristics of different power and type. Due to their diversity, our algorithm is capable of successfully solving a wide range of instances. Our LS heuristics are either known variations of GTSP heuristics from the literature (2-opt, Inserts, Cluster Optimization) or new ones inspired by the appropriate TSP heuristics (Swaps, k-Neighbor Swap, Direct 2-opt). Note that our computational experiments demonstrated that the order in which the LS heuristics are applied matters. Further research may yield better LS algorithms, including more sophisticated ones based on, e.g., Tabu Search or Simulated Annealing.

While the crossover operator used in our algorithm is the same as in [17], the mutation operator is new. The termination condition is also new. The choices of the operators and of the termination condition significantly influence the performance of the algorithm.

Acknowledgement. We would like to thank Natalio Krasnogor for numerous useful discussions of earlier versions of the paper and Michael Basarab for helpful advice on memetic algorithms.

Tab. 3: GK experiment details.

Name                Opt.       Value        Error, %   Opt., %   Time, sec   # gen.
40d198              10557      10557.0      0.00       100       0.14        9.1
40kroa200           13406      13406.0      0.00       100       0.14        9.0
40krob200           13111      13111.0      0.00       100       0.16        10.3
41gr202             23301      23301.0      0.00       100       0.21        9.8
45ts225             68340      68340.0      0.00       100       0.24        12.7
45tsp225            1612       1612.0       0.00       100       0.19        10.4
46pr226             64007      64007.0      0.00       100       0.10        9.0
46gr229             71972      71972.0      0.00       100       0.25        9.6
53gil262            1013       1013.0       0.00       100       0.31        12.2
53pr264             29549      29549.0      0.00       100       0.24        9.1
56a280              1079       1079.0       0.00       100       0.38        13.1
60pr299             22615      22615.0      0.00       100       0.42        11.9
64lin318            20765      20765.0      0.00       100       0.45        12.8
65rbg323 (asym.)    471        471.0        0.00       100       1.14        27.8
72rbg358 (asym.)    693        693.0        0.00       100       1.26        24.4
80rd400             6361       6361.0       0.00       100       1.07        15.0
81rbg403 (asym.)    1170       1170.0       0.00       100       0.98        16.1
84fl417             9651       9651.0       0.00       100       0.73        11.5
87gr431             101946     101946.0     0.00       100       2.01        17.7
88pr439             60099      60099.0      0.00       100       1.48        16.3
89pcb442            21657      21657.0      0.00       100       1.72        21.2
89rbg443 (asym.)    632        632.8        0.13       50        3.69        38.8
99d493              20023      20044.8      0.11       10        4.17        27.3
107ali535           128639     128639.0     0.00       100       5.82        25.1
107att532           13464      13464.8      0.01       80        3.45        22.2
107si535            13502      13502.0      0.00       100       1.88        19.5
113pa561            1038       1038.0       0.00       100       3.22        22.2
115u574             16689      16691.8      0.02       80        3.76        25.3
115rat575           2388       2392.7       0.20       90        4.12        25.7
131p654             27428      27428.0      0.00       100       2.82        15.3
132d657             22498      22532.8      0.15       30        6.82        30.3
134gr666            163028     163210.7     0.11       70        14.46       41.0
145u724             17272      17296.8      0.14       50        11.61       38.9
157rat783           3262       3265.7       0.11       20        15.30       40.1
200dsj1000          9187884    9198846.6    0.12       30        50.14       49.1
201pr1002           114311     114466.2     0.14       30        34.83       46.8
207si1032           22306      22312.0      0.03       20        38.40       45.0
212u1060            106007     106290.1     0.27       30        44.76       50.4
217vm1084           130704     130954.2     0.19       60        59.82       50.5
Average                                     0.04       81                    23.1

References

[1] J. Bang-Jensen and G. Gutin. Digraphs: Theory, Algorithms and Applications. Springer-Verlag, London, 2000, 754 pp.

[2] D. Ben-Arieh, G. Gutin, M. Penn, A. Yeo, and A. Zverovitch. Transformations of generalized ATSP into ATSP. Operations Research Letters 31 (2003), 357–365.

[3] L. Davis. Applying Adaptive Algorithms to Epistatic Domains. Proceedings of the International Joint Conference on Artificial Intelligence, 1985, 162–164.

[4] M. Fischetti, J.J.
Salazar-González, and P. Toth. The generalized traveling salesman and orienteering problems. In The Traveling Salesman Problem and its Variations (G. Gutin and A. Punnen, editors), Kluwer, Dordrecht, 2002.

[5] M. Fischetti, J.J. Salazar-González, and P. Toth. A Branch-and-Cut algorithm for the symmetric generalized traveling salesman problem. Operations Research 45 (3) (1997), 378–394.

[6] G. Gutin. Traveling Salesman Problems. In Handbook of Graph Theory (J. Gross and J. Yellen, editors), CRC Press, 2003.

[7] G. Gutin and A. Yeo. Assignment problem based algorithms are impractical for the generalized TSP. Australasian J. Combinatorics 27 (2003), 149–154.

[8] G. Gutin and A.P. Punnen, editors. Traveling Salesman Problem and its Variations. Kluwer Academic Publishers, Dordrecht, 2002.

[9] W.E. Hart, N. Krasnogor, and J.E. Smith, editors. Recent Advances in Memetic Algorithms, volume 166 of Studies in Fuzziness and Soft Computing. Springer, Berlin Heidelberg New York, 2004.

[10] D.S. Johnson, G. Gutin, L. McGeoch, A. Yeo, X. Zhang, and A. Zverovitch. Experimental Analysis of Heuristics for ATSP. In The Traveling Salesman Problem and its Variations (G. Gutin and A. Punnen, editors), Kluwer, Dordrecht, 2002.

[11] D.S. Johnson and L. McGeoch. Experimental Analysis of Heuristics for STSP. In The Traveling Salesman Problem and its Variations (G. Gutin and A. Punnen, editors), Kluwer, Dordrecht, 2002.

[12] N. Krasnogor and J. Smith. A Tutorial for Competent Memetic Algorithms: Model, Taxonomy and Design Issues. IEEE Transactions on Evolutionary Computation 9 (2005), 474–488.

[13] G. Laporte, A. Asef-Vaziri, and C. Sriskandarajah. Some applications of the generalized travelling salesman problem. Journal of the Operational Research Society 47 (12) (1996), 1461–1467.

[14] E.L. Lawler, J.K. Lenstra, A.H.G.
Rinnooy Kan, and D.B. Shmoys, editors. The Travelling Salesman Problem: A Guided Tour of Combinatorial Optimization. Wiley, Chichester, 1985.

[15] P. Moscato. Memetic algorithms: A short introduction. In New Ideas in Optimization (D. Corne, F. Glover, and M. Dorigo, editors), McGraw-Hill, 1999.

[16] G. Reinelt. TSPLIB—A traveling salesman problem library. ORSA J. Comput. 3 (1991), 376–384, http://www.crpc.rice.edu/softlib/tsplib/.

[17] J. Silberholz and B. Golden. The Generalized Traveling Salesman Problem: a new Genetic Algorithm approach. In Extending the Horizons: Advances in Computing, Optimization, and Decision Technologies, 2007, 165–181.

[18] L.V. Snyder and M.S. Daskin. A random-key genetic algorithm for the generalized traveling salesman problem. European Journal of Operational Research 174 (2006), 38–53.

[19] M.F. Tasgetiren, P.N. Suganthan, and Q.-K. Pan. A discrete particle swarm optimization algorithm for the generalized traveling salesman problem. In GECCO '07: Proceedings of the 9th Annual Conference on Genetic and Evolutionary Computation, 2007, 158–167.

[20] H.-K. Tsai, J.-M. Yang, Y.-F. Tsai, and C.-Y. Kao. An Evolutionary Algorithm for Large Traveling Salesman Problems. IEEE Transactions on SMC, Part B 34 (2004), 1718–1729.
