Local Search Heuristics for the Multidimensional Assignment Problem*

Gregory Gutin†    Daniel Karapetyan‡

Abstract. The Multidimensional Assignment Problem (MAP) (abbreviated s-AP in the case of s dimensions) is an extension of the well-known assignment problem. The most studied case of MAP is 3-AP, though the problems with larger values of s also have a large number of applications. We consider several known neighborhoods, generalize them and propose some new ones. The heuristics are evaluated both theoretically and experimentally and dominating algorithms are selected. We also demonstrate that a combination of two neighborhoods may yield a heuristic which is superior to both of its components.

Keywords: Multidimensional Assignment Problem; Local Search; Neighborhood; Metaheuristics.

1 Introduction

The Multidimensional Assignment Problem (MAP) (abbreviated s-AP in the case of s dimensions, also called the (axial) Multi Index Assignment Problem, MIAP, [5, 29]) is a well-known optimization problem. It is an extension of the Assignment Problem (AP), which is exactly the two-dimensional case of MAP. While AP can be solved in polynomial time [25], s-AP for every s ≥ 3 is NP-hard [13] and inapproximable [9].¹ The most studied case of MAP is the case of three dimensions [1, 3, 4, 11, 20, 37], though the problem has a host of applications for higher numbers of dimensions, e.g., in matching information from several sensors (the data association problem), which arises in plane tracking [27, 30], computer vision [39] and some others [3, 5, 7], as well as in routing in meshes [5], tracking elementary particles [33], solving systems of polynomial equations [6], image recognition [14], resource allocation [14], etc.

For a fixed s ≥ 2, the problem s-AP is stated as follows. Let X_1 = X_2 = ... = X_s = {1, 2, ..., n}. We will consider only vectors that belong to the Cartesian product X = X_1 × X_2 × ... × X_s.
Each vector e ∈ X is assigned a non-negative weight w(e). For a vector e ∈ X, the component e_j denotes its jth coordinate, i.e., e_j ∈ X_j. A collection A of t ≤ n vectors A^1, A^2, ..., A^t is a (feasible) partial assignment if A^i_j ≠ A^k_j holds for each i ≠ k and j ∈ {1, 2, ..., s}. The weight of a partial assignment A is w(A) = Σ_{i=1}^t w(A^i). An assignment (or full assignment) is a partial assignment with n vectors. The objective of s-AP is to find an assignment of minimal weight.

* A preliminary version of this paper was published in Golumbic Festschrift, volume 5420 of Lect. Notes Comput. Sci., pages 100–115, Springer, 2009.
† Royal Holloway, University of London, gutin@cs.rhul.ac.uk
‡ Royal Holloway, University of London, daniel.karapetyan@gmail.com
¹ Burkard et al. show it for a special case of 3-AP, and since 3-AP is a special case of s-AP, the result can be extended to the general MAP.

We also provide a permutation form of the assignment which is sometimes more convenient. Let π_1, π_2, ..., π_s be permutations of X_1, X_2, ..., X_s, respectively. Then π_1π_2...π_s is an assignment with the weight Σ_{i=1}^n w(π_1(i)π_2(i)...π_s(i)). It is obvious that one permutation, say the first one, may be fixed without any loss of generality: π_1 = 1_n, where 1_n is the identity permutation of size n. Then the objective of the problem is as follows:

\[
\min_{\pi_2,\ldots,\pi_s} \sum_{i=1}^{n} w\bigl(i\,\pi_2(i)\ldots\pi_s(i)\bigr).
\]

A graph formulation of the problem is as follows. Given an s-partite graph G with parts X_1, X_2, ..., X_s, where |X_i| = n, and a weight w(e) assigned to every clique e in G, find a set of n disjoint cliques in G of minimal total weight. Finally, an integer programming formulation of the problem is as follows.
\[
\min \sum_{i_1 \in X_1, \ldots, i_s \in X_s} w(i_1 \ldots i_s) \cdot x_{i_1 \ldots i_s}
\]
subject to
\[
\sum_{i_2 \in X_2, \ldots, i_s \in X_s} x_{i_1 \ldots i_s} = 1 \quad \forall i_1 \in X_1,
\qquad \ldots, \qquad
\sum_{i_1 \in X_1, \ldots, i_{s-1} \in X_{s-1}} x_{i_1 \ldots i_s} = 1 \quad \forall i_s \in X_s,
\]
where x_{i_1...i_s} ∈ {0, 1} for all i_1, ..., i_s and |X_1| = ... = |X_s| = n.

Sometimes the problem is formulated in a more general way with |X_1| = n_1, |X_2| = n_2, ..., |X_s| = n_s, where the requirement n_1 = n_2 = ... = n_s is omitted. However, this case can be easily transformed into the problem described above by padding the weight matrix with zeros to an n × n × ... × n matrix, where n = max_i n_i.

The problem has been studied by many researchers. Several special cases were intensively studied in the literature (see [26] and references therein), and for a few classes of instances polynomial-time exact algorithms were found, see, e.g., [8, 9, 21]. In many cases MAP remains hard to solve [26, 36]. For example, if there are three sets of points of size n on a Euclidean plane and the objective is to find n triples of points, one from each set, such that the total circumference or area of the corresponding triangles is minimal, the corresponding 3-AP is still NP-hard [36]. The asymptotic properties of some special instance families are studied in [14].

As regards solution methods, apart from exact and approximation algorithms [4, 11, 26, 31, 32], several heuristics have been presented in the literature, including construction heuristics [4, 16, 23, 28], greedy randomized adaptive search procedures [1, 27, 28, 35], metaheuristics [10, 20] and parallel heuristics [28]. Several local search procedures are proposed and discussed in [1, 4, 5, 9, 10, 20, 28, 35]. The difference between construction heuristics and local search is sometimes crucial.
While a construction heuristic generates a solution from scratch and, thus, has some solution quality limitation, local search is intended to improve an existing solution and, thus, can be used after a construction heuristic or as a part of a more sophisticated heuristic, a so-called metaheuristic.

The contribution of our paper is in collecting and generalizing all local search heuristics known from the literature, proposing some new ones and evaluating them both theoretically and experimentally. For the purpose of experimental evaluation we also thoroughly discuss and classify the existing instance families and propose some new ones.

In this paper we consider only the general case of MAP and, thus, all the heuristics which rely on special structures of the weight matrix are not included in the comparison. We also assume that the number of dimensions s is a small fixed constant while the size n can be arbitrarily large.

2 Heuristics

In this section we discuss some well-known and some new MAP local search heuristics as well as their combinations.

2.1 Dimensionwise Variations Heuristics

The heuristics of this group were first introduced by Bandelt et al. [5] for MAP with decomposable costs. However, having a very large neighborhood (see below), they are very efficient even in the general case. The fact that this approach was also used by Huang and Lim as a local search procedure for their memetic algorithm [20] confirms its efficiency.

The idea of the dimensionwise variation heuristics is as follows. Consider the initial assignment A in the permutation form A = π_1π_2...π_s (see Section 1). Let p(A, ρ_1, ρ_2, ..., ρ_s) be an assignment obtained from A by applying the permutations ρ_1, ρ_2, ..., ρ_s to π_1, π_2, ..., π_s, respectively:

\[
p(A, \rho_1, \rho_2, \ldots, \rho_s) = \rho_1(\pi_1)\,\rho_2(\pi_2)\ldots\rho_s(\pi_s).  \tag{1}
\]

Let p_D(A, ρ) be the assignment p(A, ρ_1, ρ_2, ..., ρ_s) with ρ_j = ρ if j ∈ D and ρ_j = 1_n otherwise (1_n is the identity permutation of size n):

\[
p_D(A, \rho) = p(A, \rho_1, \rho_2, \ldots, \rho_s), \quad \text{where } \rho_j = \begin{cases} \rho & \text{if } j \in D, \\ 1_n & \text{otherwise.} \end{cases}  \tag{2}
\]

On every iteration, the heuristic selects some nonempty set D ⊊ {1, 2, ..., s} of dimensions and searches for a permutation ρ such that w(p_D(A, ρ)) is minimized. For every subset of dimensions D there are n! different permutations ρ, but the optimal one can be found in polynomial time. Let swap(u, v, D) be the vector which is equal to the vector u in all dimensions j ∈ {1, 2, ..., s} \ D and equal to the vector v in all dimensions j ∈ D:

\[
swap(u, v, D)_j = \begin{cases} u_j & \text{if } j \notin D, \\ v_j & \text{if } j \in D, \end{cases} \quad \text{for } j = 1, 2, \ldots, s.  \tag{3}
\]

Let the matrix [M_{i,j}]_{n×n} be constructed as

\[
M_{i,j} = w(swap(A^i, A^j, D)).  \tag{4}
\]

It is clear that the solution of the corresponding 2-AP is exactly the required permutation ρ. Indeed, assume there exists some permutation ρ′ such that w(p_D(A, ρ′)) < w(p_D(A, ρ)). Observe that p_D(A, ρ) = {swap(A^i, A^{ρ(i)}, D) : i ∈ {1, 2, ..., n}}. Then we would have

\[
\sum_{i=1}^{n} w(swap(A^i, A^{\rho'(i)}, D)) < \sum_{i=1}^{n} w(swap(A^i, A^{\rho(i)}, D)).
\]

Since w(swap(A^i, A^{ρ(i)}, D)) = M_{i,ρ(i)}, the sum Σ_{i=1}^n w(swap(A^i, A^{ρ(i)}, D)) is already minimized to the optimum and no such ρ′ can exist.

The neighborhood of a dimensionwise heuristic is as follows:

\[
N_{DV}(A) = \{\, p_D(A, \rho) : D \in \mathcal{D} \text{ and } \rho \text{ is a permutation} \,\},  \tag{5}
\]

where the collection 𝒟 includes all dimension subsets acceptable by a certain heuristic. Observe that

\[
p_D(A, \rho) = p_{\overline{D}}(A, \rho^{-1}),  \tag{6}
\]

where ρ^{-1} ∘ ρ = ρ ∘ ρ^{-1} = 1_n and \overline{D} = {1, 2, ..., s} \ D, and, hence,

\[
\{\, p_D(A, \rho) : \rho \text{ is a permutation} \,\} = \{\, p_{\overline{D}}(A, \rho) : \rho \text{ is a permutation} \,\}  \tag{7}
\]

for any D. From (7) and the obvious fact that p_∅(A, ρ) = p_{\{1,2,\ldots,s\}}(A, ρ) = A for any ρ, we introduce the following restrictions on 𝒟:

\[
D \in \mathcal{D} \Rightarrow \overline{D} \notin \mathcal{D} \quad \text{and} \quad \emptyset, \{1, 2, \ldots, s\} \notin \mathcal{D}.  \tag{8}
\]

With these restrictions, one can see that for any pair of distinct sets D_1, D_2 ∈ 𝒟 the equation p_{D_1}(A, ρ_1) = p_{D_2}(A, ρ_2) holds if and only if ρ_1 = ρ_2 = 1_n. Hence, the size of the neighborhood N_DV(A) is

\[
|N_{DV}(A)| = |\mathcal{D}| \cdot (n! - 1) + 1.  \tag{9}
\]

In [5] it is decided that the number of iterations should be exponential in neither n nor s, while the size of the maximum collection 𝒟 is |𝒟| = 2^{s-1} − 1. Therefore two heuristics, LS1 and LS2, are evaluated in [5]. LS1 includes only singleton values of D, i.e., 𝒟 = {D : |D| = 1}; LS2 includes only doubleton values of D, i.e., 𝒟 = {D : |D| = 2}. It is surprising, but according to both [5] and our computational experience, the heuristic LS2 produces worse solutions than LS1, though it obviously has a larger neighborhood and larger running times. We improve the heuristic by allowing |D| ≤ 2, i.e., 𝒟 = {D : |D| ≤ 2}. This does not change the theoretical time complexity of the algorithm but improves its performance. The heuristic LS1 is called 1DV in our paper; LS2 with |D| ≤ 2 is called 2DV. We also assume (see Section 1) that the value of s is a small fixed constant and, thus, introduce a heuristic sDV which enumerates all feasible (recall (8)) subsets D ⊂ {1, 2, ..., s}.

The order in which the heuristics take the values D ∈ 𝒟 in our implementations is as follows. For 1DV it is {1}, {2}, ..., {s}. 2DV begins as 1DV and then takes all pairs of dimensions: {1, 2}, {1, 3}, ..., {1, s}, {2, 3}, ..., {s − 1, s}.
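As a concrete illustration, one dimensionwise move can be sketched in Python. This is our own minimal sketch, not the authors' implementation: `dv_move` and the brute-force 2-AP solver are hypothetical names, dimensions are 0-indexed, and a real implementation would solve the 2-AP with an O(n^3) assignment solver (e.g., the Hungarian method) rather than by enumeration.

```python
from itertools import permutations

def swap(u, v, D):
    # Recombine u and v as in (3): take v's coordinates in the dimensions
    # of D and u's coordinates elsewhere.
    return tuple(v[j] if j in D else u[j] for j in range(len(u)))

def dv_move(A, w, D):
    # One dimensionwise-variation step for a fixed dimension set D:
    # build M[i][j] = w(swap(A[i], A[j], D)) as in (4) and pick the
    # permutation rho minimizing sum_i M[i][rho(i)].  The 2-AP is solved
    # by brute force here, which is viable only for tiny n.
    n = len(A)
    M = [[w(swap(A[i], A[j], D)) for j in range(n)] for i in range(n)]
    rho = min(permutations(range(n)),
              key=lambda r: sum(M[i][r[i]] for i in range(n)))
    return [swap(A[i], A[rho[i]], D) for i in range(n)]
```

For example, with A = [(0, 0, 0), (1, 1, 1)] and D = {2}, the move permutes the third coordinates only, so feasibility is preserved by construction.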
Note that because of (8) 2DV enumerates no pairs of dimensions for s = 3, and for s = 4 it only takes the following pairs: {2, 3}, {2, 4} and {3, 4}. sDV takes first all sets D of size 1, then all sets D of size 2 and so on, up to |D| = ⌊s/2⌋. If s is even, then we should take only half of the sets D of size s/2 (recall (7)); for this purpose we take all the subsets D ⊂ {2, 3, ..., s}, |D| = s/2, in the same order as before.

It is obvious that N_{1DV}(A) ⊆ N_{2DV}(A) ⊆ N_{sDV}(A) for any s; however, for s = 3 all the neighborhoods are equal, and for s = 4, 2DV and sDV also coincide. According to (8) and (9), the neighborhood size of 1DV is

\[
|N_{1DV}(A)| = s \cdot (n! - 1) + 1,
\]

of 2DV is

\[
|N_{2DV}(A)| = \begin{cases} (2^{s-1} - 1) \cdot (n! - 1) + 1 & \text{if } s \in \{3, 4\}, \\[2pt] \left(\binom{s}{2} + s\right) \cdot (n! - 1) + 1 & \text{if } s \ge 5, \end{cases}
\]

and of sDV is

\[
|N_{sDV}(A)| = (2^{s-1} - 1) \cdot (n! - 1) + 1.
\]

The time complexity of every run of a DV heuristic is O(|𝒟| · n^3), as every 2-AP takes O(n^3); hence, the time complexity of 1DV is O(s · n^3), of 2DV is O(s^2 · n^3) and of sDV is O(2^{s-1} · n^3).

2.2 k-opt

The k-opt heuristic for 3-AP for k = 2 and k = 3 was first introduced by Balas and Saltzman [4] as the pairwise and triple interchange heuristic. 2-opt as well as its variations was also discussed in [1, 10, 27, 28, 31, 35] and some other papers. We generalize the heuristic for arbitrary values of k and s.

The heuristic proceeds as follows. For every subset of k vectors taken in the assignment A, it removes all these vectors from A and inserts some new k vectors such that the assignment feasibility is preserved and its weight is minimized. Another definition is as follows: for every set of distinct vectors e^1, e^2, ..., e^k ∈ A, let X′_j = {e^1_j, e^2_j, ..., e^k_j} for j = 1, 2, ..., s. Let A′ = {e′^1, e′^2, ..., e′^k} be the solution of this s-AP of size k. Replace the vectors e^1, e^2, ..., e^k in the initial assignment A with e′^1, e′^2, ..., e′^k.

The time complexity of k-opt is obviously O(\binom{n}{k} · k!^{s-1}); for k ≪ n it can be replaced with O(n^k · k!^{s-1}). It is a natural question whether one can use some faster solver on every iteration. Indeed, according to Section 1, it is possible to solve an s-AP of size k in O(k!^{s-2} · k^3). However, it is easy to see that k!^{s-1} < k!^{s-2} · k^3 for every k up to 5, i.e., it is better to use the exhaustive search for any reasonable k. One can doubt that the exact algorithm actually takes k!^{s-2} · k^3 operations, but even for the lower bound Ω(k!^{s-2} · k^2) the inequality k!^{s-1} < k!^{s-2} · k^2 holds for any k ≤ 3, i.e., for all the values of k we actually consider.

Now let us find the neighborhood of the heuristic. For some set I and a subset I′ ⊂ I, let a permutation ρ of the elements of I be an I′-permutation if ρ(i) = i for every i ∈ I \ I′, i.e., if ρ does not move any elements except elements of I′. Let E = {e^1, e^2, ..., e^k} ⊂ A be a set of k distinct vectors in A. For j = 2, 3, ..., s, let ρ_j be an E_j-permutation, where E_j = {e^1_j, e^2_j, ..., e^k_j}. Then the set W(A, E) of all assignments which can be obtained from A by swapping coordinates of the vectors in E can be described as follows:

\[
W(A, E) = \{\, p(A, 1_n, \rho_2, \rho_3, \ldots, \rho_s) : \rho_j \text{ is an } E_j\text{-permutation for } j = 2, 3, \ldots, s \,\}.
\]

Recall that 1_n is the identity permutation of size n and p(A, ρ_1, ρ_2, ..., ρ_s) is defined by (1). The neighborhood N_{k-opt}(A) is defined as follows:

\[
N_{k\text{-opt}}(A) = \bigcup_{E \subset A,\ |E| = k} W(A, E).  \tag{10}
\]

Let Y, Z ⊂ A be such that |Y| = |Z| = k.
Observe that W(A, Y) ∩ W(A, Z) is nonempty and, apart from the initial assignment A, this intersection may contain assignments which are modified only in the common vectors Y ∩ Z. To calculate the size of the neighborhood of k-opt, let us introduce W′(A, E) as the set of all assignments in W(A, E) such that every vector in E is modified in at least one dimension, where E ⊂ A is the set of selected vectors in the assignment A:

\[
W'(A, E) = \{\, A' \in W(A, E) : |A \cap A'| = n - |E| \,\}.
\]

Then the neighborhood N_{k-opt}(A) of k-opt is

\[
N_{k\text{-opt}}(A) = \bigcup_{E \subset A,\ |E| \le k} W'(A, E)  \tag{11}
\]

and, since W′(A, Y) ∩ W′(A, Z) = ∅ if Y ≠ Z, we have

\[
|N_{k\text{-opt}}(A)| = \sum_{E \subset A,\ |E| \le k} |W'(A, E)| = \sum_{i=0}^{k} \binom{n}{i} N_i,  \tag{12}
\]

where N_i = |W′(A, E)| for any E with |E| = i. Observe that

\[
W'(A, E) = W(A, E) \setminus \bigcup_{E' \subsetneq E} W'(A, E')
\]

and |W(A, E)| = k!^{s-1} for |E| = k and, hence,

\[
N_k = k!^{s-1} - \sum_{i=0}^{k-1} \binom{k}{i} N_i.  \tag{13}
\]

It is obvious that N_0 = 1, since one can obtain exactly one assignment (the given one) by changing no vectors. From this and (13) we have N_1 = 0, N_2 = 2^{s-1} − 1 and N_3 = 6^{s-1} − 3 · 2^{s-1} + 2. From this and (12) it follows that

\[
|N_{2\text{-opt}}(A)| = 1 + \binom{n}{2}(2^{s-1} - 1),  \tag{14}
\]

\[
|N_{3\text{-opt}}(A)| = 1 + \binom{n}{2}(2^{s-1} - 1) + \binom{n}{3}(6^{s-1} - 3 \cdot 2^{s-1} + 2).  \tag{15}
\]

In our implementation, we skip an iteration if the corresponding set of vectors E either consists only of vectors of the minimal weight (w(e) = min_{f ∈ X} w(f) for every e ∈ E) or all these vectors have remained unchanged during the previous run of k-opt.

It is assumed in the literature [4, 31, 35] that k-opt for k > 2 is too slow to be applied in practice. However, the neighborhood N_{k-opt} not only includes the neighborhood N_{(k−1)-opt} but also grows exponentially with the growth of k and, thus, becomes very powerful.
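The recurrence (13) and the closed forms for N_2 and N_3 are easy to check mechanically; the following short sketch (our own illustration, with a hypothetical function name) computes N_i directly from the recurrence:

```python
from math import comb, factorial

def N(i, s):
    # N_i from (13): the number of assignments in W(A, E) with |E| = i
    # in which every vector of E changes; N_0 = 1 is the base case.
    if i == 0:
        return 1
    return factorial(i) ** (s - 1) - sum(comb(i, j) * N(j, s) for j in range(i))
```

For any s this gives N(1, s) == 0, N(2, s) == 2^{s-1} − 1 and N(3, s) == 6^{s-1} − 3·2^{s-1} + 2, matching the values used in (14) and (15).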
We decided to include 2-opt and 3-opt in our research. Greater values of k are not considered in this paper because of their impractical time complexity (observe that the time complexity of 4-opt is O(n^4 · 24^{s-1})), and even 3-opt with all the improvements described above still takes a lot of time to proceed. However, 3-opt is more robust when used in combination with some other heuristic (see Section 2.4).

It is worth noting that our extension of the pairwise (triple) interchange heuristic [4] is not typical. Many papers [1, 10, 27, 31, 35] consider another neighborhood:

\[
N_{k\text{-opt*}}(A) = \{\, p_D(A, \rho) : D \subset \{1, 2, \ldots, s\},\ |D| = 1 \text{ and } \rho \text{ moves at most } k \text{ elements} \,\},
\]

where p_D is defined in (2). The size of such a neighborhood is

\[
|N_{k\text{-opt*}}(A)| = s \cdot \binom{n}{k} \cdot (k! - 1) + 1,
\]

and the time complexity of one run of k-opt* under the assumption k ≪ n is O(s · n^k · k!), i.e., unlike k-opt, it is not exponential with respect to the number of dimensions s, which is considered to be important by many researchers. However, as stated in Section 1, we assume that s is a small fixed constant and, thus, the time complexity of k-opt is still reasonable. At the same time, observe that N_{k-opt*}(A) ⊂ N_{1DV}(A) for any k ≤ n, i.e., 1DV performs as well as n-opt* while having the time complexity of 3-opt*. Only in the case k = 2 is the heuristic 2-opt* faster in theory; however, it is known [7] that the expected time complexity of AP is significantly less than O(n^3) and, thus, the running times of 2-opt* and 1DV are similar while 1DV is definitely more powerful. Because of this we do not consider 2-opt* in our comparison.
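One pass of the 2-opt heuristic described above can be sketched as follows. This is our own brute-force sketch, not the authors' implementation: for every pair of assignment vectors it tries all 2^{s-1} recombinations (the first dimension kept fixed, dimensions 0-indexed) and keeps the cheapest.

```python
from itertools import chain, combinations

def two_opt_pass(A, w):
    # One pass of 2-opt: for every pair of assignment vectors, try all
    # 2^(s-1) recombinations of their coordinates (dimension 0 fixed)
    # and keep the cheapest.  The caller repeats passes until no
    # improvement is found.  A is modified in place.
    s = len(A[0])
    dim_subsets = list(chain.from_iterable(
        combinations(range(1, s), r) for r in range(s)))
    improved = False
    for i in range(len(A)):
        for j in range(i + 1, len(A)):
            u, v = A[i], A[j]
            best = (w(u) + w(v), u, v)
            for D in dim_subsets:
                nu = tuple(v[d] if d in D else u[d] for d in range(s))
                nv = tuple(u[d] if d in D else v[d] for d in range(s))
                cost = w(nu) + w(nv)
                if cost < best[0]:
                    best, improved = (cost, nu, nv), True
            A[i], A[j] = best[1], best[2]
    return improved
```

The weaker 2-opt* variations discussed above would restrict `dim_subsets` to the singletons only, i.e., s recombinations per pair instead of 2^{s-1}.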
2.3 Variable Depth Interchange (v-opt)

The Variable Depth Interchange (VDI) was first introduced by Balas and Saltzman for 3-AP as a heuristic based on the well-known Lin–Kernighan heuristic for the traveling salesman problem [4]. We provide here a natural extension, v-opt, of the VDI heuristic for the s-dimensional case, s ≥ 3, and then improve this extension. Our computational experiments show that the improved version of v-opt is superior to the natural extension of VDI with respect to solution quality at the cost of a reasonable increase in running time. In what follows, v-opt refers to the improved version of the heuristic unless otherwise specified.

In [4], the heuristic is described quite briefly. Our contribution is not only in extending, improving and analyzing it but also in a more detailed and, we believe, clearer explanation of it. We describe the heuristic in a different way to the description provided in [4]; however, both versions of our algorithm are equal to VDI in the case s = 3. This fact was also checked by reproducing the computational evaluation results reported in [4].

In what follows we use a function U(u, v) which returns a set of swaps between the vectors u and v. The difference between the two versions of v-opt is only in the definition of U(u, v). For the natural extension of VDI, let U(u, v) be the set of all possible swaps (see (3)) between the vectors u and v in which the coordinates in at most one dimension are swapped:

\[
U(u, v) = \{\, swap(u, v, D) : D \subset \{1, 2, \ldots, s\} \text{ and } |D| \le 1 \,\}.
\]

For the improved version of v-opt, let U(u, v) be the set of all possible swaps in at most ⌊s/2⌋ dimensions between the vectors u and v:

\[
U(u, v) = \{\, swap(u, v, D) : D \subset \{1, 2, \ldots, s\} \text{ and } |D| \le s/2 \,\}.
\]
The constraint |D| ≤ s/2 guarantees that at least half of the coordinates of every swap are equal to the first vector's coordinates. The computational experiments show that removing this constraint increases the running time and decreases the average solution quality.

Let the vector µ(u, v) be the minimum-weight swap between the vectors u and v:

\[
\mu(u, v) = \operatorname{argmin}_{e \in U(u, v)} w(e).
\]

Let A be an initial assignment.

1. For every vector c ∈ A, do the rest of the algorithm.
2. Initialize the total gain G = 0, the best assignment A_best = A, and the set of available vectors L = A \ {c}.
3. Find a vector m ∈ L such that w(µ(c, m)) is minimized. Set v = µ(c, m) and v̄_j = {c_j, m_j} \ {v_j} for every 1 ≤ j ≤ s. Now v ∈ U(c, m) is the minimum-weight swap of c with some other vector m in the assignment, and v̄ is the complementary vector.
4. Set G = G + w(c) − w(v). If now G ≤ 0, set A = A_best and go to the next iteration (Step 1).
5. Mark m as unavailable for further swaps: L = L \ {m}. Note that c is already marked unavailable: c ∉ L.
6. Replace m and c with v and v̄. Set c = v̄.
7. If w(A) < w(A_best), save the new assignment as the best one: A_best = A.
8. Repeat from Step 3 while the total gain is positive (see Step 4) and L ≠ ∅.

The heuristic repeats until no improvement is found during a run. The time complexity of one run of v-opt is O(n^3 · 2^{s-1}). The time complexity of the natural extension of VDI is O(n^3 · s), and the computational experiments also show a significant difference between the running times of the improved and the natural extensions.
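The swap set U(u, v) and the minimum-weight swap µ(u, v) used in Step 3 can be sketched as follows (our own illustration with hypothetical function names; dimensions are 0-indexed):

```python
from itertools import chain, combinations

def U(u, v):
    # All swaps of u with v in at most floor(s/2) dimensions
    # (the improved version of v-opt).
    s = len(u)
    for D in chain.from_iterable(combinations(range(s), r)
                                 for r in range(s // 2 + 1)):
        yield tuple(v[j] if j in D else u[j] for j in range(s))

def mu(u, v, w):
    # The minimum-weight swap between u and v.
    return min(U(u, v), key=w)
```

For s = 4, U(u, v) contains 1 + 4 + 6 = 11 swaps (|D| ≤ 2); replacing `s // 2` with `1` in the range gives the natural extension of VDI.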
However, the solution quality of the natural extension for s ≥ 7 is quite poor, while for the smaller values of s it produces solutions similar to or even worse than the sDV solutions at the cost of much larger running times.

The neighborhood N_{v-opt}(A) is not fixed and depends on the MAP instance and the initial assignment A. The number of iterations (runs of Step 3) of the algorithm can vary from n to n^2. Moreover, there is no guarantee that the algorithm selects a better assignment even if the corresponding swap is in U(c, m). Thus, we do not provide any results for the neighborhood of v-opt.

2.4 Combined Neighborhood

We have already presented two types of neighborhoods in this paper, let us say dimensionwise (Section 2.1) and vectorwise (Sections 2.2 and 2.3). The idea of the combined heuristic is to use the dimensionwise and the vectorwise neighborhoods together, combining them into a so-called Variable Neighborhood Search [38]. The combined heuristic improves the assignment by moving it into a local optimum with respect to the dimensionwise neighborhood, then it improves it by moving it to a local minimum with respect to the vectorwise neighborhood. The procedure is repeated until the assignment is a local minimum with respect to both the dimensionwise and the vectorwise neighborhoods. More formally, the combined heuristic DVopt consists of a dimensionwise heuristic DV (either 1DV, 2DV or sDV) and a vectorwise heuristic opt (either 2-opt, 3-opt or v-opt). DVopt proceeds as follows.

1. Apply the dimensionwise heuristic A = DV(A).
2. Repeat:
   (a) Save the assignment weight x = w(A) and apply the vectorwise heuristic A = opt(A).
   (b) If w(A) = x, stop the algorithm.
   (c) Save the assignment weight x = w(A) and apply the dimensionwise heuristic A = DV(A).
   (d) If w(A) = x, stop the algorithm.
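The alternation above can be sketched generically (our own illustration; `dv` and `opt` stand for any dimensionwise and vectorwise heuristic, each assumed to map an assignment to one of no larger weight):

```python
def dv_opt(A, w, dv, opt):
    # Combined heuristic DVopt: apply the dimensionwise heuristic first,
    # then alternate with the vectorwise heuristic until neither of the
    # two improves the assignment weight.
    weight = lambda A: sum(w(e) for e in A)
    A = dv(A)
    while True:
        x = weight(A)
        A = opt(A)
        if weight(A) == x:
            return A
        x = weight(A)
        A = dv(A)
        if weight(A) == x:
            return A
```

Termination follows from the weight being a non-negative integer that strictly decreases on every full loop iteration that does not return.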
Step 1 of the combined heuristic is the hardest one. Indeed, it is typical that it takes a lot of iterations to move a bad solution to a local minimum, while for a good solution it takes just a few iterations. Hence, the first of the two heuristics should be the most efficient one, i.e., it should perform quickly and produce a good solution. In this case the dimensionwise heuristics are more efficient because, having approximately the same time complexity as the vectorwise heuristics, they search much larger neighborhoods. The fact that the dimensionwise heuristics are more efficient than the vectorwise ones is also confirmed by the experimental evaluation (see Section 4).

It is clear that the neighborhood of a combined heuristic is defined as follows:

\[
N_{DVopt}(A) = N_{DV}(A) \cup N_{opt}(A),  \tag{16}
\]

where N_DV(A) and N_opt(A) are the neighborhoods of the corresponding dimensionwise and vectorwise heuristics, respectively. To calculate the size of the neighborhood N_{DVopt}(A), we need to find the size of the intersection of these neighborhoods. Observe that

\[
N_{DV}(A) \cap N_{k\text{-opt}}(A) = \{\, p_D(A, \rho) : D \in \mathcal{D} \text{ and } \rho \text{ moves at most } k \text{ elements} \,\},  \tag{17}
\]

where p_D(A, ρ) is defined by (2). This means that, if r_k is the number of permutations of n elements which move at most k elements, the intersection (17) has size

\[
|N_{DV}(A) \cap N_{k\text{-opt}}(A)| = |\mathcal{D}| \cdot (r_k - 1) + 1.  \tag{18}
\]

The number r_k can be calculated as

\[
r_k = \sum_{i=0}^{k} \binom{n}{i} \cdot d_i,  \tag{19}
\]

where d_i is the number of derangements of i elements, i.e., permutations of i elements such that none of the elements appears in its place; d_i = i! · Σ_{m=0}^{i} (−1)^m / m! [19]. For k = 2, r_2 = 1 + \binom{n}{2}; for k = 3, r_3 = 1 + \binom{n}{2} + 2\binom{n}{3}. From (9), (12), (16) and (18) we immediately have

\[
|N_{DV\,k\text{-opt}}(A)| = 1 + |\mathcal{D}| \cdot (n! - 1) + \left[ \sum_{i=2}^{k} \binom{n}{i} N_i \right] - |\mathcal{D}| \cdot (r_k - 1),  \tag{20}
\]

where N_i and r_k are calculated according to (13) and (19), respectively. Substituting the value of k, we have:

\[
|N_{DV\,2\text{-opt}}(A)| = 1 + |\mathcal{D}| \cdot (n! - 1) + \binom{n}{2}(2^{s-1} - 1) - |\mathcal{D}| \cdot \binom{n}{2}  \tag{21}
\]

and

\[
|N_{DV\,3\text{-opt}}(A)| = 1 + |\mathcal{D}| \cdot (n! - 1) + \binom{n}{2}(2^{s-1} - 1) + \binom{n}{3}(6^{s-1} - 3 \cdot 2^{s-1} + 2) - |\mathcal{D}| \cdot \left( \binom{n}{2} + 2\binom{n}{3} \right).  \tag{22}
\]

One can easily substitute |𝒟| = s, |𝒟| = \binom{s}{2} + s or |𝒟| = 2^{s-1} − 1 into (21) or (22) to get the neighborhood sizes of 1DV2, 2DV2, sDV2, 1DV3, 2DV3 and sDV3. We will only show the result for sDV2:

\[
|N_{sDV2}(A)| = 1 + (2^{s-1} - 1) \cdot (n! - 1) + \binom{n}{2}(2^{s-1} - 1) - (2^{s-1} - 1) \cdot \binom{n}{2} = 1 + (2^{s-1} - 1) \cdot (n! - 1),  \tag{23}
\]

i.e., |N_{sDV2}(A)| = |N_{sDV}(A)|. Since N_{sDV}(A) ⊆ N_{sDV2}(A) (see (16)), we can conclude that N_{sDV2}(A) = N_{sDV}(A). Indeed, the neighborhood of 2-opt can be defined as follows:

\[
N_{2\text{-opt}}(A) = \{\, p_D(A, \rho) : D \subset \{2, 3, \ldots, s\} \text{ and } \rho \text{ swaps at most two elements} \,\},
\]

which is obviously a subset of N_{sDV}(A) (see (5)). Hence, the combined heuristic sDV2 is of no interest.

For the other combinations the intersection (17) is significantly smaller than both neighborhoods N_DV(A) and N_{k-opt}(A) (recall that the neighborhood N_{v-opt} has a variable structure). Indeed, |N_DV(A)| ≫ |N_DV(A) ∩ N_{k-opt}(A)| because |𝒟| · (n! − 1) ≫ |𝒟| · (r_k − 1) for k ≪ n. Similarly, |N_{2-opt}(A)| ≫ |N_DV(A) ∩ N_{k-opt}(A)| because \binom{n}{2}(2^{s-1} − 1) ≫ |𝒟| · \binom{n}{2} if |𝒟| ≪ 2^{s-1}, which is the case for 1DV and 2DV if s is large enough. Finally, |N_{3-opt}(A)| ≫ |N_DV(A) ∩ N_{k-opt}(A)| because \binom{n}{2}(2^{s-1} − 1) + \binom{n}{3}(6^{s-1} − 3 · 2^{s-1} + 2) ≫ |𝒟| · (\binom{n}{2} + 2\binom{n}{3}), which is true even for |𝒟| = 2^{s-1} − 1, i.e., for sDV.

The time complexity of the combined heuristic is O(n^k · k!^{s-1} + |𝒟| · n^3) in the case opt = k-opt and O(n^3 · (2^{s-1} + |𝒟|)) if opt = v-opt. The particular formulas are provided in the following table:

            2-opt                         3-opt              v-opt
  1DV       O(2^{s-1} · n^2 + s · n^3)    O(6^{s-1} · n^3)   O(2^s · n^3)
  2DV       O(2^{s-1} · n^2 + s^2 · n^3)  O(6^{s-1} · n^3)   O(2^s · n^3)
  sDV       (of no interest)              O(6^{s-1} · n^3)   O(2^s · n^3)

Note that all the combinations with 3-opt and with v-opt have equal time complexities; this is because the time complexities of 3-opt and v-opt are dominant. Our experiments show that the actual running times of 3-opt and v-opt are indeed much higher than even the sDV running time. This means that the combinations of these heuristics with sDV are approximately as fast as the combinations of these heuristics with the light dimensionwise heuristics 1DV and 2DV. Moreover, as noted above in this section, the dimensionwise heuristic, being executed first, simplifies the job for the vectorwise heuristic and, hence, an increase in the dimensionwise heuristic's power may decrease the running time of the whole combined heuristic. At the same time, the neighborhoods of the combinations with sDV are significantly larger than the neighborhoods of the combinations with 1DV and 2DV. We can conclude that the 'light' heuristics 1DV3, 2DV3, 1DVv and 2DVv are of no interest, because the 'heavy' heuristics sDV3 and sDVv, having the same theoretical time complexity, are more powerful and, moreover, outperformed the 'light' heuristics in our experiments with respect to both solution quality and running time, on average and in most single experiments.

2.5 Other algorithms

Here we provide a list of some other MAP algorithms presented in the literature.
• A host of local search procedures and construction heuristics, which often have some approximation guarantee ([5, 9, 11, 21, 26, 27] and some others), are proposed for special cases of MAP (usually with decomposable weights, see Section 3.2) and exploit the specifics of these instances. However, as stated in Section 1, we consider only the general case of MAP, i.e., none of the algorithms included in this paper relies on any special structure of the weight matrix.

• A number of construction heuristics are intended to generate solutions for the general case of MAP [4, 16, 23, 28]. While some of them are fast but of low quality, like Greedy, some, like Max-Regret, are significantly slower but produce much better solutions. A special class of construction heuristics, the Greedy Randomized Adaptive Search Procedure (GRASP), was also investigated by many researchers [1, 27, 28, 35].

• Several metaheuristics, including a simulated annealing procedure [10] and a memetic algorithm [20], were proposed in the literature. Metaheuristics are sophisticated algorithms intended to search for near-optimal solutions in a reasonably large time. Proceeding for much longer than local search and being hard for theoretical analysis of the running time or the neighborhood, metaheuristics cannot be compared straightforwardly to local search procedures.

• Some weak variations of 2-opt are considered in [1, 27, 31, 35]. While our heuristic 2-opt tries all possible recombinations of a pair of assignment vectors, i.e., 2^{s-1} combinations, these variations only try the swaps in one dimension at a time, i.e., s combinations for every pair of vectors. We have already shown that these variations have no practical interest; for details see Section 2.2.

3 Test Bed

While theoretical analysis can help in heuristic design, selection of the best approaches requires empirical evaluation [18, 34].
In this section we discuss the test bed; the experimental results are reported and discussed in Section 4.

The question of selecting a proper test bed is one of the most important questions in the experimental evaluation of heuristics [34]. While many researchers have focused on instances with random independent weights ([3, 4, 24, 31] and some others) and random instances with predefined solutions [10, 15, 23], several more sophisticated models are of greater practical interest [5, 9, 11, 12, 26]. There is also a number of papers which consider real-world and pseudo real-world instances [6, 27, 30], but we believe that these instances do not represent all the instance classes well, and that building a proper benchmark of real-world instances is a subject for separate research.

In this paper we group all the instance families into two classes: instances with independent weights (Section 3.1) and instances with decomposable weights (Section 3.2). Later we show that the heuristics perform differently on the instances of these two classes; thus, this division helps in a correct experimental analysis of the local search algorithms.

3.1 Instances With Independent Weights

One of the most studied classes of instances for MAP is the Random Instance Family. In Random, the weight assigned to a vector is a random integer uniformly distributed in the interval [a, b − 1]. Random instances were used in [1, 3, 4, 32] and some others.

Since the instances are random and quite large, it is possible to estimate the average optimal solution value for the Random Instance Family. Previous research in this area [24] shows that as n tends to infinity, the optimal solution value approaches the bound an, i.e., the minimal possible assignment weight (observe that the minimal assignment consists of n vectors of weight a).
Moreover, an estimation of the mean optimal solution is provided in [14], but this estimation is not accurate enough for our experiments.

In [18] we prove that it is very likely that every large enough Random instance has at least one an-assignment, where an x-assignment is an assignment of weight x. Let α be the number of assignments of weight an and let c = b − a. We would like to have an upper bound on the probability Pr(α = 0). Such an upper bound is given in the following theorem, whose proof (see [18]) is based on the Extended Jansen Inequality (see Theorem 8.1.2 of [2]).

Theorem 1. For any n such that n ≥ 3 and

((n − 1)/e)^(s−1) ≥ c · 2^(1/(n−1)),    (24)

we have Pr(α = 0) ≤ e^(−1/(2σ)), where

σ = Σ_{k=1}^{n−2} C(n, k) · c^k / [n · (n − 1) · · · (n − k + 1)]^(s−1).

The lower bounds of Pr(α > 0) for different values of s and n and for b − a = 100 are reported below.

         s = 4                s = 5              s = 6             s = 7
   n    Pr(α > 0)        n    Pr(α > 0)     n    Pr(α > 0)    n    Pr(α > 0)
  15      0.575         10      0.991       8      1.000      7      1.000
  20      0.823         11      0.998
  25      0.943         12      1.000
  30      0.986
  35      0.997
  40      1.000

One can see that a 4-AP Random instance has an (an)-assignment with probability very close to 1 if n ≥ 40; a 5-AP instance has an (an)-assignment with probability very close to 1 for n ≥ 12, etc. Hence, the optimal solutions of all the Random instances used in our experiments (see Section 4) are very likely to be of weight an. For s = 3, Theorem 1 does not provide a good upper bound, but we are able to use the results from Table II in [4] instead. Balas and Saltzman report that in their experiments the average optimal solution value of 3-AP Random instances decreases very quickly with n and is already small for n = 26.
Since the smallest Random instance we use in our experiments has size n = 150, we assume that all 3-AP Random instances considered in our experiments are very likely to have an an-assignment.

Useful results can also be obtained from (11) in [14], which is an upper bound for the average optimal solution. Grundel, Oliveira and Pardalos [14] consider the same instance family except that the weights of the vectors are real numbers uniformly distributed in the interval [a, b]. However, the results from [14] can be extended to our discrete case. Let w′(e) be the real weight of the vector e in a continuous instance. Consider a discrete instance with w(e) = ⌊w′(e)⌋ (if w′(e) = b, set w(e) = b − 1). Note that the weight w(e) is a uniformly distributed integer in the interval [a, b − 1]. The optimal assignment weight of this instance is not larger than the optimal assignment weight of the continuous instance and, thus, the upper bound for the average optimal solution carries over to the discrete case.

In fact, the upper bound z̄*_u (see [14]) for the average optimal solution is not accurate enough. For example, z̄*_u ≈ an + 6.9 for s = 3, n = 100 and b − a = 100, and z̄*_u ≈ an + 3.6 for s = 3, n = 200 and b − a = 100, so it cannot be used for s = 3 in our experiments. The upper bound z̄*_u gives a better approximation for larger values of s, e.g., z̄*_u ≈ an + 1.0 for s = 4, n = 40 and b − a = 100; however, Theorem 1 provides stronger results (Pr(α > 0) ≈ 1.000 for this case).

Another class of instances with almost independent weights is the GP Instance Family, which contains pseudo-random instances with predefined optimal solutions. GP instances are generated by an algorithm given by Grundel and Pardalos [15]. The generator is naturally designed for s-AP for arbitrarily large values of s and n.
However, it is relatively slow and, thus, it was impossible to generate large GP instances. Nevertheless, this is what we need, since we thereby have both small (GP) and large (Random) instances with independent weights and known optimal solutions.

3.2 Instances With Decomposable Weights

In many cases it is not easy to define a weight for an s-tuple of objects, but it is possible to define a relation between every pair of objects from different sets. In this case one should use decomposable weights [37] (or decomposable costs), i.e., the weight of a vector e should be defined as follows:

w(e) = f(d^(1,2)_(e_1,e_2), d^(1,3)_(e_1,e_3), ..., d^(s−1,s)_(e_(s−1),e_s)),    (25)

where d^(i,j) is a distance matrix between the sets X_i and X_j and f is some function.

The most natural instance family with decomposable weights is Clique, which defines the function f as the sum of all its arguments:

w_c(e) = Σ_{i=1}^{s−1} Σ_{j=i+1}^{s} d^(i,j)_(e_i,e_j).    (26)

The Clique instance family was investigated in [5, 11, 12] and some others. It was proven [11] that MAP restricted to Clique instances remains NP-hard.

A special case of Clique is the Geometric Instance Family. In Geometric, the sets X_1, X_2, ..., X_s correspond to sets of points in Euclidean space, and the distance between two points u ∈ X_i and v ∈ X_j is defined as the Euclidean distance; we consider the two-dimensional Euclidean space:

d_g(u, v) = √((u_x − v_x)^2 + (u_y − v_y)^2).

It is proven [36] that the Geometric instances are NP-hard to solve for s = 3 and, thus, Geometric is NP-hard for every s ≥ 3.

In this paper, we propose a new special case of decomposable weights, SquareRoot. It is a modification of the Clique instance family. Assume we have s radars and n planes and each radar observes all the planes.
The problem is to assign the signals that come from different radars to each other. It is quite natural to define a distance function between each pair of signals from different radars, and for a set of signals which correspond to one plane the sum of the distances should be small, so (26) is a good choice. However, it is not enough to simply minimize the total distance between the assigned signals; one should also ensure that none of these distances is too large. Similar requirements appear in a number of other applications. We propose a weight function which leads to both a small total distance between the assigned signals and a small dispersion of the distances:

w_sq(e) = √( Σ_{i=1}^{s−1} Σ_{j=i+1}^{s} (d^(i,j)_(e_i,e_j))^2 ).    (27)

A similar approach is used in [26], though there the square root is not taken, i.e., a vector weight is just the sum of the squares of the edge weights in a clique. In addition, the edge weights in [26] are calculated as distances between some nodes in a Euclidean space.

Another special case of decomposable weights, Product, is studied in [9]. Burkard et al. consider 3-AP and define the weight w(e) as w(e) = a^1_(e_1) · a^2_(e_2) · a^3_(e_3), where a^1, a^2 and a^3 are random vectors of positive numbers. It is easy to show that the Product weight function can be represented in the form (25). It is proven that the minimization problem for the Product instances is NP-hard in the case s = 3 and, thus, it is NP-hard for every s ≥ 3.

4 Computational Experimentation

In this section, the results of the empirical evaluation are reported and discussed. The experiments were conducted for the following instances (for the instance family definitions see Section 3):

• Random instances where each weight was randomly chosen from {1, 2, ..., 100}, i.e., a = 1 and b = 101. According to Section 3.1, the optimal solutions of all the considered Random instances are very likely to be of weight an = n.
• GP instances with predefined optimal solutions (see Section 3.1).

• Clique and SquareRoot instances, where the weight of each edge in the graph was randomly selected from {1, 2, ..., 100}. Instead of the optimal solution value we use the best known solution value.

• Geometric instances, where both coordinates of every point were randomly selected from {1, 2, ..., 100}. The distances between the points are calculated precisely, while the weight of a vector is rounded to the nearest integer. Instead of the optimal solution value we use the best known solution value.

• Product instances, where every value a^j_i was randomly selected from {1, 2, ..., 10}. Instead of the optimal solution value we use the best known solution value.

An instance name consists of three parts: the number s of dimensions, the type of the instance ('gp' for GP, 'r' for Random, 'c' for Clique, 'g' for Geometric, 'p' for Product and 'sr' for SquareRoot), and the size n of the instance. For example, 5r40 means a five-dimensional Random instance of size 40. For every combination of instance size and type we generated 10 instances, using the number seed = s + n + i as the seed of the random number sequences, where i is the index of the instance of the given type and size, i ∈ {1, 2, ..., 10}. Thereby, every experiment is conducted for 10 different instances of some fixed type and size, i.e., every number reported in the tables below is an average over 10 runs on 10 different instances.

The sizes of all but the GP instances are selected such that every algorithm could process them all in approximately the same time. The GP instances are included in order to examine the behavior of the heuristics on smaller instances (recall that GP is the only instance set for which we know the exact solutions for small instances).
All the heuristics are implemented in Visual C++. The evaluation platform is based on an AMD Athlon 64 X2 3.0 GHz processor.

In what follows, the results of experiments of three different types are provided and discussed:

• In Subsection 4.1, the local search heuristics are applied to assignments generated by some construction heuristic. These experiments allow us to exclude several local searches from the rest of the experiments; however, the comparison of the results is complicated by the significant differences in both solution quality and running time.

• In Subsection 4.2, two simple metaheuristics are used to equate the running times of different heuristics. This is done by varying the number of iterations of the metaheuristics.

• In Subsection 4.3, the results of all the discussed approaches are gathered in two tables to find the most successful solvers for the instances with independent and with decomposable weights for every particular running time.

4.1 Pure Local Search Experiments

First, we run every local search heuristic for every instance exactly once. The local search is applied to solutions generated with one of the following construction heuristics:

1. Trivial, which was first mentioned in [4] as Diagonal. The Trivial construction heuristic simply assigns A_i^j = i for every i = 1, 2, ..., n and j = 1, 2, ..., s.

2. Greedy, which was discussed in many papers, see, e.g., [4, 9, 16, 17, 18, 23]. It was proven [16] that in the worst case Greedy produces the unique worst solution; however, it was shown [17] that in some cases Greedy may be a good choice as a fast and simple heuristic.

3. Max-Regret, which was discussed in a number of papers, see, e.g., [4, 9, 16, 23, 35]. As for Greedy, it is proven [16] that in the worst case Max-Regret produces the unique worst solution; however, many researchers [4, 23] have noted that Max-Regret is quite powerful in practice.
4. ROM, which was first introduced in [16] as a heuristic with a large domination number. On every iteration, the heuristic calculates the total weight for every set of vectors with the first two coordinates fixed: M_(i,j) = Σ_{e ∈ X, e_1 = i, e_2 = j} w(e). Then it solves a 2-AP for the weight matrix M and reorders the second dimension of the assignment according to this solution and the first dimension of the assignment. The procedure is repeated recursively for the subproblem in which the first dimension is excluded. For details see [16, 23].

We begin our discussion with the experiments started from trivial assignments. The results reported in Tables 2 and 3 are averages over 10 experiments, since every row of these tables corresponds to 10 instances of some fixed type and size but with different seed values (see above). The tables are split into two parts: the first part contains only the instances with independent weights (GP and Random), while the second part contains only the instances with decomposable weights (Clique, Geometric, Product and SquareRoot). The average values for the different instance families and numbers of dimensions are provided at the bottom of each part of each table. The tables are also split vertically according to the classes of heuristics. The winner in every row and every class of heuristics is underlined. The solution error is calculated as (w(A)/w(A_best) − 1) · 100%, where A is the obtained assignment and A_best is the optimal assignment (or the best known one, see above).

In the group of the vectorwise heuristics the most powerful one is definitely 3-opt. v-opt outperforms it only in a few experiments, mostly three-dimensional ones (recall that the neighborhood of k-opt grows exponentially with the number of dimensions s).
As expected, 2-opt never outperforms 3-opt, since N_2-opt ⊂ N_3-opt (see Section 2.2). The tendencies for the independent weight instances and for the decomposable weight instances are similar; the only difference worth noting is that all but the v-opt heuristics of this group solve the Product instances very well. Note that the dispersion of the weights in Product instances is very high and, thus, v-opt, which minimizes the weight of only one vector in every pair of vectors while the weight of the complementary vector may increase arbitrarily, cannot be efficient for them.

As one would expect, sDV is more successful than 2DV, and 2DV is more successful than 1DV, with respect to the solution quality (obviously, all the heuristics of this group perform equally for 3-AP, and 2DV and sDV are also equal for 4-AP, see Section 2.1). However, for the instances with decomposable weights all the dimensionwise heuristics perform very similarly, and even for large s, sDV is not significantly more powerful than 1DV or 2DV, which means that for decomposable instances the most efficient iterations are those with |D| = 1. We can assume that if c is the number of edges connecting the fixed and unfixed parts of the clique, then an iteration of a dimensionwise heuristic is rather efficient when c is small. Observe that, e.g., for Clique the diversity of values in the weight matrix [M_(i,j)]_(n×n) (see (4)) decreases as the number c increases and, hence, the room for optimization on every iteration shrinks. Observe also that in the case c = 1 the iteration leads to the optimal match between the fixed and unfixed parts of the assignment vectors.

All the combined heuristics improve in solution quality over each of their components, i.e., over both the corresponding vectorwise and dimensionwise local searches.
In p articular, 1D V 2 outper- forms b oth 2-op t and 1 D V , 2D V 2 outperf orms b oth 2-op t and 2 D V , s D V 3 outperf orms b oth 3-op t and s D V and s D V v outperf orms b oth v-op t and s D V . Moreover, s D V 3 is significantly faster than 3-o pt and s D V v is significantly faster than v-opt . Hen ce, we will not discuss the sing le heuristics 3-opt and v-opt in the rest o f the paper . The heuristics 1D V 2 and 2D V 2 , obviously , perform equally for 3-AP instances. While for the instances with independen t weig hts the combinatio n of the dimension wise heuristics with the vectorwise on es sign ificantly imp roves the solution quality , it is not th e case fo r the instances with decomp osable weights (ob serve that 1D V performs almost as well as the most powerful heuristic s D V 3 ) which shows the impo rtance of the instance s division. W e con clude that the vector wise neighborh oods are not efficient for the instances with de composab le weights. Next we cond ucted the experiments starting fro m the other constru ction heuristics. But first we com - pared the constru ction heuristics themselves, see T able 1. It is n ot surprising that T rivia l produces the worst solutions. Howe ver, one can see that T rivia l outperfor ms Greed y and Max-Regret for e very Product instance . The reason is in the extremely high dispersion of the weights in Product . Both Greedy and M ax-Regret con- struct the assignments by ad ding new vecto rs to it. The decision which vector should be added does not Local Search Heur istics f or the Multidimensio nal Assignment Problem 16 depend (or does not depe nd enou gh in case of Max-Regret ) on the rest of the vectors and, thus, at the end of the proce dure only the vector s with hu ge weigh ts ar e av ailab le. For o ther instance families, Greed y , Max- Regret and ROM perfor m similarly thoug h the r unning time of th e heur istics is very different. 
Max-Regret is definitely the slowest construction heuristic; Greedy is very fast for the Random instances (this is because of the large number of vectors of weight a and certain implementation features, see [23] for details) and relatively slow for the rest of the instances; ROM's running time hardly depends on the instance and is constantly moderate.

Starting from Greedy (Table 4) significantly improves the solution quality. This mostly benefits the weakest heuristics; e.g., the average error of 2-opt decreased in our experiments from 59% and 20% to 15% and 6% for independent and decomposable weights, respectively, though the error of the most powerful heuristic, sDV3, also noticeably decreased (from 2.8% and 5.8% to 2.0% and 2.5%). As regards the running time, Greedy is slower than most of the local search heuristics and, thus, the running times of all but the sDV3 and sDVv heuristics are very similar. The best of the rest of the heuristics in this experiment is sDV, though 1DV2 and 2DV2 perform similarly.

Starting from Max-Regret improves the solution quality even more, but at the cost of very large running times. In this case the difference in the running times of the local search heuristics almost disappears, and sDV3, the best one, reaches average error values of 1.3% and 2.2% for independent and decomposable weights, respectively. Starting from ROM improves the quality only for the worst heuristics. This is probably because all the best heuristics contain sDV, which already does a good dimensionwise optimization (recall that ROM exploits an idea similar to the dimensionwise neighborhood). At the same time, starting from ROM increases the running time of the heuristics significantly; the results for both Max-Regret and ROM are excluded from the paper; one can find them on the web [22].
It is clear that the construction heuristics are quite slow compared to the local search, and we should answer the following question: is it worth spending so much time on the initial solution construction, or is there some way to apply local search several times in order to improve the assignments iteratively? Algorithms which apply local search several times are known as metaheuristics. There is a number of different metaheuristic approaches, such as tabu search or memetic algorithms, but these are not the subject of this paper. In what follows, we use two simple metaheuristics, Chain and Multichain.

4.2 Experiments With Metaheuristics

It is obvious that there is no sense in applying a local search procedure to one solution several times, because the local search moves the solution to a local minimum with respect to its neighborhood, i.e., a second exploration of this neighborhood is useless. In order to apply the local search several times, one should perturb the solution obtained on the previous iteration. This idea immediately brings us to the first metaheuristic, Chain:

1. Initialize an assignment A;
2. Set A_best = A;
3. Repeat:
   (a) Apply local search A = LS(A);
   (b) If w(A) < w(A_best), set A_best = A;
   (c) Perturb the assignment A = Perturb(A).

In this algorithm we use two subroutines, LS(A) and Perturb(A). The first one is some local search procedure and the second one is an algorithm which moves the given assignment away from the local minimum by a random perturbation. The perturbation should be strong enough that the assignment does not simply return to the previous position on the next iteration every time, yet not so strong that the results of the previous search are totally destroyed.
Our perturbation procedure selects p = ⌈n/25⌉ + 1 vectors in the assignment and perturbs them randomly. In other words, Perturb(A) is just a random move of the p-opt heuristic. The parameters of the procedure were obtained empirically.

One may doubt whether Chain is good enough for large running times and, thus, we introduce a slightly more sophisticated metaheuristic, Multichain. Unlike Chain, Multichain maintains several assignments on every iteration:

1. Initialize an assignment A_best;
2. Set P = ∅ and repeat the following c(c + 1)/2 times: P = P ∪ {LS(Perturb(A_best))} (recall that Perturb(A) produces a different assignment every time);
3. Repeat:
   (a) Save the best c assignments from P into C_1, C_2, ..., C_c such that w(C_i) ≤ w(C_(i+1));
   (b) If w(C_1) < w(A_best), set A_best = C_1;
   (c) Set P = ∅ and, for every i = 1, 2, ..., c, repeat the following c − i + 1 times: P = P ∪ {LS(Perturb(C_i))}.

The parameter c controls the power of Multichain; we use c = 5 and, thus, the algorithm performs c(c + 1)/2 = 15 local searches on every iteration.

The results of the experiments with Chain running for 5 and 10 seconds are provided in Tables 5 and 6, respectively. The experiments are repeated for three construction heuristics, Trivial, Greedy and ROM. It was not possible to include Max-Regret in the comparison because it takes much more than 10 seconds for some of the instances.

The diversity in solution quality of the heuristics decreased with the use of a metaheuristic. This is because the fast heuristics are able to repeat more times than the slow ones. Note that sDV3, which is the most powerful single heuristic, is now outperformed by other heuristics.
The most successful heuristics for the instances with independent and decomposable weights are sDVv and 1DV, respectively, though 1DV2 and 2DV2 are slightly more successful than sDVv for the GP instances. This result also holds for Multichain, see Tables 7 and 8. The success of 1DV confirms again that a dimensionwise heuristic is most successful when |D| = 1 if the weights are decomposable, and that it is more efficient to repeat these iterations many times than to try |D| > 1. For an explanation of this phenomenon see Section 4.1. The success of 1DV2 and 2DV2 for GP indicates the existence of a certain structure in the weight matrices of these instances.

One can see that the initialization of the assignment is not crucial for the final solution quality. However, using Greedy instead of Trivial clearly improves the solutions for almost every instance and local search heuristic. In contrast to Greedy, using ROM usually does not improve the solution quality. It only influences 2-opt, which is the only pure vectorwise local search in the comparison (recall that ROM has a dimensionwise structure and, thus, it combines well with vectorwise heuristics).

The Multichain metaheuristic, given the same time, obtains better results than Chain. However, Multichain fails for some combinations of a slow local search and a hard instance, because it is not able to complete even the first iteration in the given time. Chain, having much lighter iterations, does not have this disadvantage. Giving more time to a metaheuristic also improves the solution quality. Therefore, one is able to obtain high quality solutions by running metaheuristics for large times.

4.3 Solvers Comparison

To compare all the heuristics and metaheuristics discussed in this paper we produced Tables 9 and 10.
These tables indicate which heuristics should be chosen to solve particular instances within given time limits. Several best heuristics are selected for every combination of instance and allowed time. A heuristic is included in the table if it was able to solve the problem in the given time, its solution quality is not worse than 1.1 · w(A_best) and its running time is not larger than 1.1 · t_best, where A_best is the best assignment produced by the considered heuristics and t_best is the time spent to produce A_best. The following information is provided for every solver in Tables 9 and 10:

• The metaheuristic type (C for Chain, MC for Multichain, or empty if the experiment is single).

• The local search procedure (2-opt, 1DV, 2DV, sDV, 1DV2, 2DV2, sDV3, sDVv, or empty if no local search was applied to the initial solution).

• The construction heuristic the experiment was started with (Gr, M-R, or empty if the assignment was initialized by Trivial).

• The solution error in percent.

The following solvers were included in this experiment:

• The construction heuristics Greedy, Max-Regret and ROM.

• The single heuristics 2-opt, 1DV, 2DV, sDV, 1DV2, 2DV2, sDV3 and sDVv, started from either Trivial, Greedy, Max-Regret or ROM.

• The Chain and Multichain metaheuristics for either 2-opt, 1DV, 2DV, sDV, 1DV2, 2DV2, sDV3 or sDVv, started from either Trivial, Greedy, Max-Regret or ROM. The metaheuristics proceeded until the given time limit.

Note that for certain instances we exclude duplicating solvers (recall that all the dimensionwise heuristics perform equally for 3-AP, and 2DV and sDV perform equally for 4-AP, see Section 2.1). The common rule is that we keep sDV rather than 2DV, and 2DV rather than 1DV.
For example, if the list of successful solvers for some 3-AP instance contains C 1DV Gr, C 2DV Gr and C sDV Gr, then only C sDV Gr is included in the table. This also applies to the combined heuristics; e.g., having 1DV2 R and 2DV2 R for a 3-AP instance, we include only 2DV2 R in the final results. The last row in every table indicates the heuristics which are the most successful on average, i.e., the heuristics which can solve all the instances with the best average results.

Single construction heuristics are not presented in the tables; single local search procedures appear only for the smallest allowed times, when all the other heuristics take more time to run; most of the best solvers are metaheuristics. Multichain seems to be more suitable than Chain for large running times; however, Multichain does not appear for the instances with small n. This is probably because the relative power of the perturbation increases as the instance size decreases (note that Perturb(A) perturbs at least two vectors irrespective of n). The most successful heuristics for the assignment initialization are Trivial and Greedy; Trivial is useful mainly for small running times. Max-Regret and ROM appear only a few times in the tables.

The success of a local search depends on the instance type. The most successful local search heuristic for the instances with independent weights is definitely sDVv. The sDV heuristic also appears several times in Table 9, especially for the small running times. For the instances with decomposable weights, the most successful are the dimensionwise heuristics and, in particular, 1DV.

5 Conclusions

Several neighborhoods are generalized and discussed in this paper.
An efficient approach to joining different neighborhoods is successfully applied; the resulting heuristics combine the strengths of their components. The experimental evaluation on a set of instances of different types shows that there are several superior heuristic approaches suited to different kinds of instances and running times. Two kinds of instances are distinguished: instances with independent weights and instances with decomposable weights. The former are better solved by the combined heuristic sDVv; the latter are better solved by 1DV. In both cases, it is good to initialize the assignment with the Greedy construction heuristic if there is enough time; otherwise one should use the trivial assignment as the initial one. The results can also be significantly improved by applying metaheuristic approaches for as long as possible.

Thereby, it is shown in the paper that metaheuristics applied to the fast heuristics dominate the slow heuristics and, thus, further research on more sophisticated metaheuristics, such as memetic algorithms, is of interest.

References

[1] R. M. Aiex, M. G. C. Resende, P. M. Pardalos, and G. Toraldo. GRASP with path relinking for three-index assignment. INFORMS J. on Computing, 17(2):224–247, 2005.

[2] N. Alon and J. Spencer. The Probabilistic Method. John Wiley, second edition, 2000.

[3] S. M. Andrijich and L. Caccetta. Solving the multisensor data association problem. Nonlinear Analysis, 47(8):5525–5536, 2001.

[4] E. Balas and M. J. Saltzman. An algorithm for the three-index assignment problem. Oper. Res., 39(1):150–161, 1991.

[5] H. J. Bandelt, A. Maas, and F. C. R. Spieksma. Local search heuristics for multi-index assignment problems with decomposable costs. Journal of the Operational Research Society, 55(7):694–704, 2004.

[6] H. Bekker, E. P. Braad, and B. Goldengorin.
Using bipartite and multidimensional matching to select the roots of a system of polynomial equations. In Computational Science and Its Applications — ICCSA 2005, volume 3483 of Lecture Notes Comp. Sci., pages 397–406. Springer, 2005.
[7] R. E. Burkard and E. Çela. Linear assignment problems and extensions. In Z. Du and P. Pardalos, editors, Handbook of Combinatorial Optimization, pages 75–149. Dordrecht, 1999.
[8] R. E. Burkard, B. Klinz, and R. Rudolf. Perspectives of Monge properties in optimization. Discrete Applied Mathematics, 70(2):95–161, 1996.
[9] R. E. Burkard, R. Rudolf, and G. J. Woeginger. Three-dimensional axial assignment problems with decomposable cost coefficients. Technical Report 238, Graz, 1996.
[10] W. K. Clemons, D. A. Grundel, and D. E. Jeffcoat. Theory and Algorithms for Cooperative Systems, chapter Applying simulated annealing to the multidimensional assignment problem, pages 45–61. World Scientific, 2004.
[11] Y. Crama and F. C. R. Spieksma. Approximation algorithms for three-dimensional assignment problems with triangle inequalities. European Journal of Operational Research, 60(3):273–279, 1992.
[12] A. M. Frieze and J. Yadegar. An algorithm for solving 3-dimensional assignment problems with application to scheduling a teaching practice. Journal of the Operational Research Society, 32:989–995, 1981.
[13] M. R. Garey and D. S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness (Series of Books in the Mathematical Sciences). W. H. Freeman, January 1979.
[14] D. Grundel, C. Oliveira, and P. Pardalos. Asymptotic properties of random multidimensional assignment problems. Journal of Optimization Theory and Applications, 122(3):33–46, 2004.
[15] D. A. Grundel and P. M. Pardalos. Test problem generator for the multidimensional assignment problem. Comput. Optim. Appl.
, 30(2):133–146, 2005.
[16] G. Gutin, B. Goldengorin, and J. Huang. Worst case analysis of max-regret, greedy and other heuristics for multidimensional assignment and traveling salesman problems. Journal of Heuristics, 14(2):169–181, 2008.
[17] G. Gutin and D. Karapetyan. Greedy like algorithms for the traveling salesman problem and multidimensional assignment problem. In Advances in Greedy Algorithms. I-Tech, 2008.
[18] G. Gutin and D. Karapetyan. A selection of useful theoretical tools for the design and analysis of optimization heuristics. Memetic Computing, 1(1):25–34, 2009.
[19] J. M. Harris, J. L. Hirst, and M. J. Mossinghoff. Combinatorics and Graph Theory. Springer Undergraduate Texts in Mathematics, 2008.
[20] G. Huang and A. Lim. A hybrid genetic algorithm for the three-index assignment problem. European Journal of Operational Research, 172(1):249–257, July 2006.
[21] V. Isler, S. Khanna, J. Spletzer, and C. J. Taylor. Target tracking with distributed sensors: The focus of attention problem. Computer Vision and Image Understanding Journal, (1-2):225–247, 2005. Special Issue on Attention and Performance in Computer Vision.
[22] D. Karapetyan. http://www.cs.rhul.ac.uk/Research/ToC/publications/Karapetyan/.
[23] D. Karapetyan, G. Gutin, and B. Goldengorin. Empirical evaluation of construction heuristics for the multidimensional assignment problem. In J. Chan, J. W. Daykin, and M. S. Rahman, editors, London Algorithmics 2008: Theory and Practice, Texts in Algorithmics, pages 107–122. College Publications, 2009.
[24] P. Krokhmal, D. Grundel, and P. Pardalos. Asymptotic behavior of the expected optimal value of the multidimensional assignment problem. Mathematical Programming, 109(2-3):525–551, 2007.
[25] H. W. Kuhn. The Hungarian method for the assignment problem. Naval Research Logistic Quarterly, 2:83–97, 1955.
[26] Y. Kuroki and T. Matsui.
An approximation algorithm for multidimensional assignment problems minimizing the sum of squared errors. Discrete Applied Mathematics, 157(9):2124–2135, 2007.
[27] R. Murphey, P. Pardalos, and L. Pitsoulis. A GRASP for the multitarget multisensor tracking problem. Networks, Discrete Mathematics and Theoretical Computer Science Series, 40:277–302, 1998.
[28] C. A. S. Oliveira and P. M. Pardalos. Randomized parallel algorithms for the multidimensional assignment problem. Appl. Numer. Math., 49:117–133, 2004.
[29] P. M. Pardalos and L. S. Pitsoulis. Nonlinear Assignment Problems. Springer, 2000.
[30] P. M. Pardalos and L. S. Pitsoulis. Nonlinear Optimization and Applications 2, chapter Quadratic and Multidimensional Assignment Problems, pages 235–276. Kluwer Academic Publishers, 2000.
[31] E. L. Pasiliao, P. M. Pardalos, and L. S. Pitsoulis. Branch and bound algorithms for the multidimensional assignment problem. Optimization Methods and Software, 20(1):127–143, 2005.
[32] W. P. Pierskalla. The multidimensional assignment problem. Operations Research, 16:422–431, 1968.
[33] J. Pusztaszeri, P. Rensing, and Th. M. Liebling. Tracking elementary particles near their primary vertex: a combinatorial approach. Journal of Global Optimization, 9:41–64, 1996.
[34] R. L. Rardin and R. Uzsoy. Experimental evaluation of heuristic optimization algorithms: A tutorial. Journal of Heuristics, 7(3):261–304, 2001.
[35] A. J. Robertson. A set of greedy randomized adaptive local search procedure (GRASP) implementations for the multidimensional assignment problem. Comput. Optim. Appl., 19(2):145–164, 2001.
[36] F. Spieksma and G. Woeginger. Geometric three-dimensional assignment problems. European Journal of Operational Research, 91:611–618, 1996.
[37] F. C. R. Spieksma.
Nonlinear Assignment Problems, Algorithms and Application, chapter Multi Index Assignment Problems: Complexity, Approximation, Applications, pages 1–12. Kluwer, 2000.
[38] E.-G. Talbi. Metaheuristics: From Design to Implementation. John Wiley & Sons, 2009.
[39] C. J. Veenman, M. J. T. Reinders, and E. Backer. Establishing motion correspondence using extended temporal scope. Artificial Intelligence, 145(1-2):227–243, 2003.

Tab. 1: Construction heuristics comparison. Left block: solution error, %; right block: running times, ms.

Inst.        Best      | Trivial Greedy Max-Regret ROM | Trivial Greedy Max-Regret ROM
3gp100       504.4     |  157      6      6    10 | 0   40    799   9
3r150        150.0     | 4997     54     29    34 | 0   14   4253  26
4gp30        145.2     |  158      9      9     2 | 0   35    206   7
4r80          80.0     | 4985     74     49    76 | 0   12  27285 278
5gp12         66.2     |  147     13      9     9 | 0    6     36   2
5r40          40.0     | 4911    159    116   169 | 0    6  37214 686
6gp8          41.8     |  143     25      1    14 | 0    5     33   2
6r22          22.0     | 5180    295    218   310 | 0    6  24750 861
7gp5          25.6     |  157     27      6    20 | 0    1      8   1
7r14          14.0     | 5116    377    454   396 | 0    2  17032 805
8gp4          19.2     |  113     21      7    28 | 0    1      8   1
8r9            9.0     | 5262    579    514   543 | 0    2   5604 342
All avg.               | 2610    137    118   134 | 0   11   9769 252
GP avg.                |  146     17      6    14 | 0   15    182   4
Rand. avg.             | 5075    256    230   255 | 0    7  19356 500
3-AP avg.              | 2577     30     17    22 | 0   27   2526  17
4-AP avg.              | 2571     41     29    39 | 0   23  13745 142
5-AP avg.              | 2529     86     62    89 | 0    6  18625 344
6-AP avg.              | 2662    160    110   162 | 0    5  12391 432
7-AP avg.              | 2637    202    230   208 | 0    2   8520 403
8-AP avg.              | 2687    300    261   286 | 0    1   2806 171

3cq150      1738.5     | 1219     41     20    37 | 0   56   4388  27
3g150       1552.0     |  865     19     27     3 | 0   53   4226  28
3p150      14437.2     |   76    215    122     7 | 0  580   4318  37
3sr150      1077.8     | 1250     42     21    43 | 0   60   4363  29
4cq50       3034.8     |  400     27     22    32 | 0  156   3713 161
4g50        1705.2     |  492     21     29     2 | 0  217   3828 148
4p50       20096.8     |  103    484    278     8 | 0 1030   3725 151
4sr50       1496.6     |  367     25     20    32 | 0  193   3847 150
5cq30       4727.1     |  218     20     17    24 | 0  640   9636 583
5g30        2321.8     |  340     26     33     3 | 0  936   9650 604
5p30       55628.5     |  137   1017    646     8 | 0 2711   9536 619
5sr30       1842.0     |  196     16     13    28 | 0  666   9627 615
6cq18       5765.5     |  142     15     15    18 | 0  426   6758 267
6g18        2536.0     |  260     26     27     3 | 0  563   6802 262
6p18      135515.3     |  163   2118   1263     8 | 0 1098   6758 323
6sr18       1856.3     |  121     13     13    19 | 0  420   6775 261
7cq12       6663.7     |   91     14     11    15 | 0 1037   6653 924
7g12        3267.2     |  156     19     23     2 | 0 1217   6614 944
7p12      558611.7     |  346   3162   1994     9 | 0 1872   6463 335
7sr12       1795.7     |   78      9      9    15 | 0  980   6510 268
8cq8        7004.9     |   62     10     10    10 | 0  465   2416 130
8g8         3679.5     |  105     15     21     1 | 0  569   2446 120
8p8      2233760.0     |  177   3605   2309     9 | 0  710   2413 140
8sr8        1622.1     |   52      7      7    10 | 0  474   2448 132
All avg.               |  309    457    290    14 | 0  714   5580 302
Clique avg.            |  355     21     16    23 | 0  463   5594 349
Geom. avg.             |  370     21     27     2 | 0  593   5594 351
Product avg.           |  167   1767   1102     8 | 0 1334   5536 268
SR avg.                |  344     19     14    24 | 0  465   5595 242
3-AP avg.              |  853     79     47    22 | 0  187   4324  30
4-AP avg.              |  340    139     87    19 | 0  399   3778 152
5-AP avg.              |  223    270    177    15 | 0 1238   9612 605
6-AP avg.              |  171    543    329    12 | 0  627   6773 278
7-AP avg.              |  168    801    509    10 | 0 1276   6560 618
8-AP avg.              |   99    909    587     8 | 0  555   2431 131

Tab. 2: Local search heuristics started from Trivial. Solution error, %.

Inst.    Best      | 2-opt 3-opt v-opt 1DV 2DV sDV 1DV2 2DV2 sDV3 sDVv
3gp100   504.4     | 19.6 10.0 19.8 4.9 4.9 4.9 4.9 4.9 4.6 4.9
3r150    150.0     | 134.5 16.0 1.5 2.4 2.4 2.4 2.4 2.4 2.1 0.7
4gp30    145.2     | 17.4 4.2 13.4 11.1 7.9 7.9 10.7 7.9 4.2 7.5
4r80      80.0     | 115.0 7.3 2.0 20.5 11.5 11.5 18.9 11.5 4.1 1.6
5gp12     66.2     | 10.6 2.1 8.5 12.5 6.9 6.9 10.1 6.9 1.8 6.9
5r40      40.0     | 104.5 4.3 3.8 63.0 34.3 34.3 47.3 34.3 3.5 5.3
6gp8      41.8     | 6.7 2.4 5.3 12.4 5.7 5.0 6.5 5.5 2.4 4.8
6r22      22.0     | 105.5 0.9 8.6 125.0 62.3 54.5 80.9 55.5 1.8 9.1
7gp5      25.6     | 6.3 3.9 10.2 21.5 9.0 5.9 5.9 5.1 3.9 5.5
7r14      14.0     | 95.7 0.0 36.4 244.3 111.4 72.1 92.1 70.0 0.7 16.4
8gp4      19.2     | 6.8 5.2 10.9 17.2 9.4 6.2 7.8 6.8 5.2 6.2
8r9        9.0     | 81.1 0.0 67.8 323.3 173.3 60.0 73.3 77.8 0.0 40.0
All avg.           | 58.6 4.7 15.7 71.5 36.6 22.6 30.1 24.0 2.9 9.1
GP avg.            | 11.2 4.6 11.3 13.3 7.3 6.1 7.6 6.2 3.7 6.0
Rand. avg.         | 106.1 4.7 20.0 129.8 65.9 39.1 52.5 41.9 2.0 12.2
3-AP avg.          | 77.1 13.0 10.6 3.6 3.6 3.6 3.6 3.6 3.3 2.8
4-AP avg.          | 66.2 5.7 7.7 15.8 9.7 9.7 14.8 9.7 4.2 4.6
5-AP avg.          | 57.5 3.2 6.1 37.8 20.6 20.6 28.7 20.6 2.7 6.1
6-AP avg.          | 56.1 1.7 6.9 68.7 34.0 29.8 43.7 30.5 2.1 6.9
7-AP avg.          | 51.0 2.0 23.3 132.9 60.2 39.0 49.0 37.5 2.3 10.9
8-AP avg.          | 43.9 2.6 39.4 170.3 91.4 33.1 40.6 42.3 2.6 23.1

3cq150   1738.5    | 125.1 49.9 22.8 20.1 20.1 20.1 20.1 20.1 19.9 18.9
3g150    1552.0    | 0.0 0.0 5.9 0.0 0.0 0.0 0.0 0.0 0.0 0.0
3p150    14437.2   | 0.1 0.0 15.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
3sr150   1077.8    | 144.2 64.0 28.0 22.0 22.0 22.0 22.0 22.0 21.8 21.3
4cq50    3034.8    | 52.5 31.3 30.3 23.3 23.1 23.1 23.2 23.1 21.4 20.1
4g50     1705.2    | 0.0 0.0 11.1 0.2 0.0 0.0 0.0 0.0 0.0 0.0
4p50     20096.8   | 0.0 0.0 49.6 0.1 0.0 0.0 0.1 0.0 0.0 0.0
4sr50    1496.6    | 56.8 30.6 31.9 27.2 24.8 24.8 27.2 24.8 23.4 23.9
5cq30    4727.1    | 30.9 18.7 21.4 16.9 16.6 16.6 16.8 16.6 15.5 16.1
5g30     2321.8    | 0.0 0.0 9.2 0.2 0.0 0.0 0.0 0.0 0.0 0.0
5p30     55628.5   | 0.0 0.0 53.2 0.1 0.0 0.0 0.0 0.0 0.0 0.0
5sr30    1842.0    | 38.3 19.0 23.9 21.7 20.4 20.4 21.1 20.4 17.6 18.3
6cq18    5765.5    | 17.6 12.2 16.1 11.5 10.3 11.6 11.3 10.3 10.1 11.1
6g18     2536.0    | 0.0 0.0 15.4 0.5 0.0 0.0 0.0 0.0 0.0 0.0
6p18     135515.3  | 0.0 0.0 98.3 0.2 0.0 0.0 0.0 0.0 0.0 0.0
6sr18    1856.3    | 20.9 11.9 17.4 12.7 13.9 13.6 12.7 13.9 11.5 12.6
7cq12    6663.7    | 11.9 5.3 10.4 8.0 7.0 5.9 7.1 6.9 5.7 5.8
7g12     3267.2    | 0.0 0.0 9.9 0.1 0.0 0.0 0.0 0.0 0.0 0.0
7p12     558611.7  | 0.0 0.0 123.6 0.2 0.0 0.0 0.0 0.0 0.0 0.0
7sr12    1795.7    | 12.1 7.6 11.0 8.5 10.1 7.1 8.3 10.1 5.9 7.0
8cq8     7004.9    | 6.4 3.0 8.5 6.4 4.4 4.8 5.3 4.1 2.2 4.7
8g8      3679.5    | 0.0 0.0 9.1 0.2 0.0 0.0 0.0 0.0 0.0 0.0
8p8      2233760.0 | 0.0 0.0 143.8 0.1 0.0 0.0 0.0 0.0 0.0 0.0
8sr8     1622.1    | 6.6 2.6 7.4 5.7 5.0 4.7 4.9 4.4 3.5 4.7
All avg.           | 21.8 10.7 32.2 7.8 7.4 7.3 7.5 7.4 6.6 6.9
Clique avg.        | 40.7 20.0 18.2 14.4 13.6 13.7 14.0 13.5 12.5 12.8
Geom. avg.         | 0.0 0.0 10.1 0.2 0.0 0.0 0.0 0.0 0.0 0.0
Product avg.       | 0.0 0.0 80.6 0.1 0.0 0.0 0.0 0.0 0.0 0.0
SR avg.            | 46.5 22.6 19.9 16.3 16.1 15.4 16.0 16.0 13.9 14.7
3-AP avg.          | 67.3 28.5 17.9 10.5 10.5 10.5 10.5 10.5 10.4 10.1
4-AP avg.          | 27.3 15.5 30.7 12.7 12.0 12.0 12.6 12.0 11.2 11.0
5-AP avg.          | 17.3 9.4 26.9 9.7 9.3 9.3 9.5 9.3 8.3 8.6
6-AP avg.          | 9.6 6.0 36.8 6.2 6.1 6.3 6.0 6.1 5.4 6.0
7-AP avg.          | 6.0 3.2 38.7 4.2 4.3 3.2 3.8 4.3 2.9 3.2
8-AP avg.          | 3.2 1.4 42.2 3.1 2.4 2.4 2.6 2.1 1.4 2.4

Tab. 3: Local search heuristics started from Trivial. Running time, ms.

Inst.        | 2-opt 3-opt v-opt 1DV 2DV sDV 1DV2 2DV2 sDV3 sDVv
3gp100       | 6.2 820.6 181.8 14.3 14.0 16.5 18.4 16.7 430.6 79.0
3r150        | 19.8 1737.9 65.7 17.6 18.7 17.1 22.7 18.9 147.8 45.4
4gp30        | 1.5 150.3 45.0 0.7 1.2 1.1 1.4 1.4 116.9 17.5
4r80         | 10.5 987.5 64.5 7.9 18.0 15.3 11.2 18.4 344.8 98.2
5gp12        | 0.3 38.5 3.6 0.2 0.4 0.5 0.5 0.5 30.6 1.6
5r40         | 16.9 425.9 34.3 2.3 7.2 6.3 4.6 8.6 386.9 35.3
6gp8         | 0.2 57.2 2.5 0.2 0.3 0.4 0.4 0.5 42.0 1.3
6r22         | 2.2 218.9 16.7 0.9 2.6 3.9 1.9 4.3 259.0 22.7
7gp5         | 0.1 48.9 0.9 0.1 0.2 0.3 0.1 0.3 40.0 0.9
7r14         | 1.4 237.1 12.0 0.4 1.6 2.9 1.8 3.0 210.9 15.5
8gp4         | 0.1 117.5 0.8 0.2 0.3 0.6 0.2 0.3 72.3 0.9
8r9          | 0.9 191.9 6.7 0.3 1.1 2.3 1.0 3.1 177.7 7.1
All avg.     | 5.0 419.4 36.2 3.8 5.5 5.6 5.3 6.3 188.3 27.1
GP avg.      | 1.4 205.5 39.1 2.6 2.7 3.2 3.5 3.3 122.1 16.9
Rand. avg.   | 8.6 633.2 33.3 4.9 8.2 7.9 7.2 9.4 254.5 37.4
3-AP avg.    | 13.0 1279.2 123.8 16.0 16.4 16.8 20.5 17.8 289.2 62.2
4-AP avg.    | 6.0 568.9 54.7 4.3 9.6 8.2 6.3 9.9 230.8 57.8
5-AP avg.    | 8.6 232.2 19.0 1.3 3.8 3.4 2.5 4.5 208.7 18.5
6-AP avg.    | 1.2 138.1 9.6 0.5 1.5 2.1 1.1 2.4 150.5 12.0
7-AP avg.    | 0.7 143.0 6.5 0.3 0.9 1.6 1.0 1.6 125.5 8.2
8-AP avg.    | 0.5 154.7 3.8 0.3 0.7 1.4 0.6 1.7 125.0 4.0

3cq150       | 22.1 4366.5 1388.4 42.1 39.3 34.9 41.0 46.0 1503.9 497.6
3g150        | 19.0 2229.3 780.0 26.2 28.1 25.5 37.2 33.0 1299.5 201.2
3p150        | 15.4 2149.7 847.1 82.0 89.8 89.7 96.0 101.9 1730.1 458.6
3sr150       | 21.7 3949.9 1157.5 36.0 37.5 37.9 41.2 47.1 1400.9 469.6
4cq50        | 6.1 872.0 308.9 3.8 8.5 7.3 6.1 10.8 468.0 167.2
4g50         | 5.3 542.9 251.2 3.7 5.9 5.9 6.7 6.6 273.0 87.3
4p50         | 5.7 586.6 251.2 7.3 14.2 13.6 13.4 15.7 441.5 95.5
4sr50        | 5.6 1009.3 296.4 3.3 7.4 6.2 6.0 7.9 424.3 111.6
5cq30        | 4.6 1087.3 177.7 2.0 5.2 5.5 3.3 6.0 560.0 63.5
5g30         | 3.7 673.9 182.5 1.8 4.1 4.0 3.6 5.7 319.8 41.8
5p30         | 4.5 762.8 103.6 2.7 10.1 9.5 6.1 12.2 580.3 44.1
5sr30        | 4.8 1115.4 163.5 1.9 4.7 4.5 3.6 6.3 667.7 63.2
6cq18        | 3.5 1205.9 63.4 1.0 2.7 3.7 1.5 3.1 630.2 26.6
6g18         | 2.0 731.6 55.2 0.9 1.8 2.7 1.9 2.4 346.3 18.1
6p18         | 3.1 929.8 31.1 1.3 3.8 5.4 2.5 5.2 658.3 19.9
6sr18        | 2.3 1369.7 59.9 0.9 2.9 3.0 1.5 3.4 778.4 34.4
7cq12        | 1.7 1658.3 31.7 0.6 2.0 3.4 1.2 2.9 728.5 12.6
7g12         | 1.4 1048.3 28.2 0.6 1.3 2.4 1.1 2.0 555.4 11.1
7p12         | 2.1 1324.4 17.5 0.8 2.4 6.4 1.8 3.9 1088.9 14.6
7sr12        | 1.9 1622.4 40.9 0.7 2.0 3.5 1.1 2.5 965.6 11.0
8cq8         | 1.1 2112.3 13.3 0.5 1.5 2.8 1.0 2.0 1909.5 8.5
8g8          | 1.0 1675.5 15.6 0.4 0.8 2.1 0.8 1.2 728.5 7.2
8p8          | 1.7 2051.4 7.6 0.4 1.2 3.1 0.9 1.8 1492.9 7.9
8sr8         | 1.3 2439.9 16.4 0.3 1.3 2.9 1.0 1.8 1252.7 8.1
All avg.     | 5.9 1563.1 262.0 9.2 11.6 11.9 11.7 13.8 866.8 103.4
Clique avg.  | 6.5 1883.7 330.6 8.3 9.9 9.6 9.0 11.8 966.7 129.4
Geom. avg.   | 5.4 1150.2 218.8 5.6 7.0 7.1 8.5 8.5 587.1 61.1
Product avg. | 5.4 1300.8 209.7 15.8 20.2 21.3 20.1 23.4 998.7 106.8
SR avg.      | 6.3 1917.8 289.1 7.2 9.3 9.7 9.1 11.5 914.9 116.3
3-AP avg.    | 19.5 3173.8 1043.3 46.6 48.7 47.0 53.8 57.0 1483.6 406.8
4-AP avg.    | 5.7 752.7 276.9 4.5 9.0 8.2 8.0 10.2 401.7 115.4
5-AP avg.    | 4.4 909.9 156.8 2.1 6.0 5.9 4.2 7.5 532.0 53.2
6-AP avg.    | 2.7 1059.2 52.4 1.0 2.8 3.7 1.9 3.5 603.3 24.8
7-AP avg.    | 1.7 1413.4 29.6 0.7 1.9 3.9 1.3 2.8 834.6 12.3
8-AP avg.    | 1.2 2069.7 13.2 0.4 1.2 2.7 0.9 1.7 1345.9 7.9

Tab. 4: Local search heuristics started from Greedy. Left block: solution error, %; right block: running times, ms. Columns in each block: 2-opt, 1DV, 2DV, sDV, 1DV2, 2DV2, sDV3, sDVv.

3gp100       | 4.3 3.4 3.4 3.4 3.4 3.4 3.3 3.4 | 0.04 0.04 0.04 0.04 0.05 0.05 0.36 0.09
3r150        | 16.7 1.2 1.2 1.2 1.2 1.2 0.8 0.7 | 0.02 0.02 0.02 0.03 0.02 0.03 0.11 0.05
4gp30        | 4.5 3.7 3.6 3.6 3.6 3.6 2.6 3.6 | 0.04 0.03 0.04 0.04 0.04 0.04 0.11 0.05
4r80         | 15.8 7.9 6.1 6.1 7.9 6.1 2.6 1.5 | 0.01 0.02 0.02 0.02 0.02 0.02 0.21 0.08
5gp12        | 5.4 6.3 4.5 4.5 5.3 4.5 1.8 4.5 | 0.01 0.01 0.01 0.01 0.01 0.01 0.03 0.01
5r40         | 18.5 19.8 13.5 13.5 15.0 13.5 2.3 3.5 | 0.01 0.01 0.01 0.01 0.01 0.01 0.18 0.04
6gp8         | 4.1 8.9 5.5 4.3 6.0 4.5 2.4 3.8 | 0.01 0.01 0.01 0.01 0.01 0.01 0.04 0.01
6r22         | 25.9 44.1 28.6 26.4 26.8 27.3 2.7 8.6 | 0.01 0.01 0.01 0.01 0.01 0.01 0.21 0.02
7gp5         | 5.5 11.3 7.0 5.9 6.6 5.9 3.5 5.1 | 0.00 0.00 0.00 0.00 0.00 0.00 0.04 0.00
7r14         | 37.9 88.6 55.7 33.6 51.4 44.3 0.0 15.0 | 0.00 0.00 0.00 0.00 0.00 0.00 0.14 0.01
8gp4         | 4.2 11.5 5.2 3.6 4.2 3.6 3.1 3.6 | 0.00 0.00 0.00 0.00 0.00 0.00 0.07 0.00
8r9          | 40.0 158.9 107.8 54.4 65.6 65.6 0.0 30.0 | 0.00 0.00 0.00 0.00 0.00 0.00 0.13 0.01
All avg.     | 15.2 30.5 20.2 13.4 16.4 15.3 2.1 6.9 | 0.01 0.01 0.01 0.01 0.01 0.01 0.14 0.03
GP avg.      | 4.7 7.5 4.9 4.2 4.8 4.3 2.8 4.0 | 0.02 0.02 0.02 0.02 0.02 0.02 0.11 0.03
Rand. avg.   | 25.8 53.4 35.5 22.5 28.0 26.3 1.4 9.9 | 0.01 0.01 0.01 0.01 0.01 0.01 0.16 0.03
3-AP avg.    | 10.5 2.3 2.3 2.3 2.3 2.3 2.1 2.0 | 0.03 0.03 0.03 0.04 0.04 0.04 0.24 0.07
4-AP avg.    | 10.1 5.8 4.9 4.9 5.7 4.9 2.6 2.5 | 0.02 0.03 0.03 0.03 0.03 0.03 0.16 0.06
5-AP avg.    | 12.0 13.0 9.0 9.0 10.1 9.0 2.0 4.0 | 0.01 0.01 0.01 0.01 0.01 0.01 0.11 0.02
6-AP avg.    | 15.0 26.5 17.1 15.3 16.4 15.9 2.6 6.2 | 0.01 0.01 0.01 0.01 0.01 0.01 0.13 0.01
7-AP avg.    | 21.7 49.9 31.4 19.7 29.0 25.1 1.8 10.0 | 0.00 0.00 0.00 0.00 0.00 0.00 0.09 0.01
8-AP avg.    | 22.1 85.2 56.5 29.0 34.9 34.6 1.6 16.8 | 0.00 0.00 0.00 0.00 0.00 0.00 0.10 0.01

3cq150       | 26.8 8.1 8.1 8.1 8.1 8.1 8.0 8.0 | 0.07 0.07 0.07 0.07 0.08 0.08 1.18 0.26
3g150        | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.07 0.07 0.07 0.07 0.08 0.08 1.09 0.22
3p150        | 0.2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.61 0.65 0.66 0.66 0.66 0.66 1.92 0.96
3sr150       | 29.9 9.8 9.8 9.8 9.8 9.8 9.4 9.1 | 0.07 0.07 0.07 0.07 0.08 0.09 1.49 0.26
4cq50        | 19.0 11.6 11.6 11.6 11.6 11.6 11.3 11.6 | 0.16 0.16 0.16 0.16 0.16 0.16 0.44 0.21
4g50         | 0.0 0.3 0.0 0.0 0.0 0.0 0.0 0.0 | 0.22 0.22 0.22 0.22 0.22 0.22 0.43 0.29
4p50         | 0.1 0.2 0.1 0.1 0.1 0.1 0.0 0.1 | 1.04 1.04 1.04 1.04 1.04 1.05 1.39 1.12
4sr50        | 20.0 10.9 11.3 11.3 10.9 11.3 10.3 11.0 | 0.19 0.19 0.20 0.20 0.20 0.20 0.47 0.25
5cq30        | 14.2 9.6 9.5 9.5 9.6 9.5 9.3 9.4 | 0.64 0.64 0.64 0.64 0.64 0.64 1.03 0.68
5g30         | 0.0 0.4 0.0 0.0 0.0 0.0 0.0 0.0 | 0.94 0.94 0.94 0.94 0.94 0.94 1.26 0.97
5p30         | 0.0 0.2 0.0 0.0 0.0 0.0 0.0 0.0 | 2.72 2.71 2.72 2.72 2.72 2.72 3.23 2.76
5sr30        | 11.7 8.9 8.5 8.5 8.3 8.5 7.1 8.5 | 0.67 0.67 0.67 0.67 0.67 0.67 1.23 0.69
6cq18        | 9.8 8.2 7.8 7.5 7.9 7.8 6.3 7.3 | 0.43 0.43 0.43 0.43 0.43 0.43 1.08 0.44
6g18         | 0.0 0.5 0.0 0.0 0.0 0.0 0.0 0.0 | 0.56 0.56 0.56 0.57 0.56 0.57 0.90 0.58
6p18         | 0.0 0.2 0.0 0.0 0.0 0.0 0.0 0.0 | 1.10 1.10 1.10 1.10 1.10 1.10 1.69 1.12
6sr18        | 9.7 8.6 8.2 8.2 8.5 8.2 6.5 7.8 | 0.42 0.42 0.42 0.42 0.42 0.42 1.15 0.44
7cq12        | 7.1 5.7 5.0 5.1 5.1 5.0 4.0 4.9 | 1.04 1.04 1.04 1.04 1.04 1.04 2.20 1.05
7g12         | 0.0 0.5 0.1 0.0 0.0 0.0 0.0 0.0 | 1.22 1.22 1.22 1.22 1.22 1.22 1.77 1.23
7p12         | 0.0 0.4 0.0 0.0 0.0 0.0 0.0 0.0 | 1.88 1.87 1.87 1.88 1.87 1.88 2.90 1.89
7sr12        | 6.5 5.7 5.1 5.2 5.6 5.1 4.0 5.0 | 0.98 0.98 0.98 0.98 0.98 0.98 2.15 0.99
8cq8         | 4.7 4.1 3.1 2.8 3.7 2.7 2.2 2.6 | 0.47 0.47 0.47 0.47 0.47 0.47 1.97 0.47
8g8          | 0.0 0.7 0.0 0.0 0.0 0.0 0.0 0.0 | 0.57 0.57 0.57 0.57 0.57 0.57 1.38 0.58
8p8          | 0.0 0.2 0.0 0.0 0.0 0.0 0.0 0.0 | 0.71 0.71 0.71 0.71 0.71 0.71 2.11 0.72
8sr8         | 3.2 3.7 2.8 2.6 2.6 2.5 2.1 2.4 | 0.47 0.47 0.47 0.48 0.47 0.48 1.72 0.48
All avg.     | 6.8 4.1 3.8 3.8 3.8 3.8 3.4 3.7 | 0.72 0.72 0.72 0.72 0.72 0.72 1.51 0.78
Clique avg.  | 13.6 7.9 7.5 7.4 7.6 7.5 6.8 7.3 | 0.47 0.47 0.47 0.47 0.47 0.47 1.32 0.52
Geom. avg.   | 0.0 0.4 0.0 0.0 0.0 0.0 0.0 0.0 | 0.60 0.60 0.60 0.60 0.60 0.60 1.14 0.65
Product avg. | 0.1 0.2 0.0 0.0 0.0 0.0 0.0 0.0 | 1.34 1.35 1.35 1.35 1.35 1.35 2.21 1.43
SR avg.      | 13.5 7.9 7.6 7.6 7.6 7.6 6.6 7.3 | 0.47 0.47 0.47 0.47 0.47 0.47 1.37 0.52
3-AP avg.    | 14.2 4.5 4.5 4.5 4.5 4.5 4.4 4.3 | 0.20 0.22 0.22 0.22 0.22 0.23 1.42 0.43
4-AP avg.    | 9.8 5.7 5.7 5.7 5.6 5.7 5.4 5.7 | 0.40 0.40 0.41 0.41 0.40 0.41 0.68 0.47
5-AP avg.    | 6.5 4.8 4.5 4.5 4.5 4.5 4.1 4.5 | 1.24 1.24 1.24 1.24 1.24 1.24 1.69 1.27
6-AP avg.    | 4.9 4.4 4.0 3.9 4.1 4.0 3.2 3.8 | 0.63 0.63 0.63 0.63 0.63 0.63 1.21 0.64
7-AP avg.    | 3.4 3.1 2.6 2.6 2.7 2.5 2.0 2.5 | 1.28 1.28 1.28 1.28 1.28 1.28 2.26 1.29
8-AP avg.    | 2.0 2.2 1.5 1.4 1.6 1.3 1.1 1.3 | 0.56 0.55 0.56 0.56 0.56 0.56 1.80 0.56

Tab. 5: Chain metaheuristic started from Trivial, Greedy and ROM. 5 seconds given. Solution error, %. Columns 1–8 in each block: 1 — 2-opt, 2 — 1DV, 3 — 2DV, 4 — sDV, 5 — 1DV2, 6 — 2DV2, 7 — sDV3, 8 — sDVv; the three blocks correspond to the Trivial, Greedy and ROM starts.

3gp100       | 15.3 1.8 1.8 1.8 1.8 1.8 2.8 2.5 | 5.3 1.7 1.7 1.7 1.8 1.8 2.9 2.3 | 9.8 1.9 1.9 1.9 1.9 1.9 2.6 2.3
3r150        | 77.7 0.0 0.0 0.0 0.0 0.0 0.1 0.0 | 41.4 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 33.6 0.0 0.0 0.0 0.0 0.0 0.0 0.0
4gp30        | 7.0 1.6 1.1 1.1 1.8 1.1 0.8 1.4 | 6.3 1.9 0.8 0.8 1.8 0.9 0.8 1.4 | 2.2 1.7 0.9 0.9 1.7 1.0 0.8 1.4
4r80         | 55.0 4.4 1.9 1.9 4.1 2.3 0.4 0.0 | 41.6 4.6 1.6 1.6 4.5 1.8 0.8 0.0 | 57.0 4.3 2.0 2.0 4.3 2.0 0.9 0.0
5gp12        | 1.5 1.5 1.5 1.5 1.5 1.5 1.5 1.5 | 1.5 1.5 1.5 1.5 1.5 1.5 1.5 1.5 | 1.5 1.5 1.5 1.5 1.5 1.5 1.5 1.5
5r40         | 40.8 18.5 8.0 8.0 16.3 8.0 0.0 0.0 | 34.0 19.3 8.0 8.0 13.5 8.5 0.0 0.0 | 40.3 19.3 8.0 8.0 15.8 8.8 0.5 0.0
6gp8         | 2.4 2.4 2.4 2.4 2.4 2.4 2.4 2.4 | 2.4 2.4 2.4 2.4 2.4 2.4 2.4 2.4 | 2.4 2.4 2.4 2.4 2.4 2.4 2.4 2.4
6r22         | 20.5 30.0 10.9 6.4 15.5 8.2 0.0 0.0 | 19.1 27.7 11.8 5.5 15.5 9.1 0.0 0.0 | 15.5 32.7 13.6 8.6 15.0 9.5 0.0 0.0
7gp5         | 3.9 3.9 3.9 3.9 3.9 3.9 3.9 3.9 | 3.1 3.5 3.5 3.5 3.5 3.1 3.5 3.5 | 3.5 3.9 3.9 3.9 3.9 3.9 3.9 3.9
7r14         | 2.9 33.6 11.4 2.1 6.4 3.6 0.0 0.0 | 3.6 33.6 10.7 2.1 5.7 2.1 0.0 0.0 | 4.3 35.7 7.9 0.7 2.1 3.6 0.0 0.0
8gp4         | 2.1 5.2 4.7 4.2 2.1 3.6 5.2 5.2 | 0.5 3.1 3.1 3.1 2.6 1.6 3.1 2.6 | 1.0 4.7 4.7 3.6 2.6 4.2 4.7 4.7
8r9          | 0.0 25.6 4.4 0.0 0.0 0.0 0.0 0.0 | 0.0 22.2 2.2 0.0 0.0 0.0 0.0 0.0 | 0.0 26.7 4.4 0.0 0.0 0.0 0.0 0.0
All avg.     | 19.1 10.7 4.3 2.8 4.6 3.0 1.4 1.4 | 13.2 10.1 4.0 2.5 4.4 2.7 1.2 1.1 | 14.3 11.2 4.3 2.8 4.3 3.2 1.4 1.3
GP avg.      | 5.4 2.7 2.6 2.5 2.2 2.4 2.8 2.8 | 3.2 2.4 2.2 2.2 2.3 1.9 2.4 2.3 | 3.4 2.7 2.5 2.4 2.3 2.5 2.6 2.7
Rand. avg.   | 32.8 18.7 6.1 3.1 7.0 3.7 0.1 0.0 | 23.3 17.9 5.7 2.9 6.5 3.6 0.1 0.0 | 25.1 19.8 6.0 3.2 6.2 4.0 0.2 0.0
3-AP avg.    | 46.5 0.9 0.9 0.9 0.9 0.9 1.4 1.2 | 23.3 0.9 0.9 0.9 0.9 0.9 1.4 1.1 | 21.7 0.9 0.9 0.9 1.0 1.0 1.3 1.1
4-AP avg.    | 31.0 3.0 1.5 1.5 3.0 1.7 0.6 0.7 | 23.9 3.3 1.2 1.2 3.1 1.3 0.8 0.7 | 29.6 3.0 1.4 1.4 3.0 1.5 0.8 0.7
5-AP avg.    | 21.1 10.0 4.8 4.8 8.9 4.8 0.8 0.8 | 17.8 10.4 4.8 4.8 7.5 5.0 0.8 0.8 | 20.9 10.4 4.8 4.8 8.6 5.1 1.0 0.8
6-AP avg.    | 11.4 16.2 6.7 4.4 8.9 5.3 1.2 1.2 | 10.7 15.1 7.1 3.9 8.9 5.7 1.2 1.2 | 8.9 17.6 8.0 5.5 8.7 6.0 1.2 1.2
7-AP avg.    | 3.4 18.7 7.7 3.0 5.2 3.7 2.0 2.0 | 3.3 18.5 7.1 2.8 4.6 2.6 1.8 1.8 | 3.9 19.8 5.9 2.3 3.0 3.7 2.0 2.0
8-AP avg.    | 1.0 15.4 4.6 2.1 1.0 1.8 2.6 2.6 | 0.3 12.7 2.7 1.6 1.3 0.8 1.6 1.3 | 0.5 15.7 4.6 1.8 1.3 2.1 2.3 2.3

3cq150       | 80.7 6.2 6.2 6.2 6.7 6.7 17.0 9.8 | 38.2 6.0 6.0 6.0 6.1 6.0 8.4 6.3 | 36.8 6.4 6.4 6.4 6.5 6.5 15.8 11.3
3g150        | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
3p150        | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
3sr150       | 96.0 7.0 7.0 7.1 7.6 7.6 18.3 11.8 | 41.0 7.4 7.4 7.4 7.9 7.9 9.1 7.4 | 42.8 6.7 6.7 6.8 7.0 7.2 17.8 11.4
4cq50        | 27.7 5.4 5.8 5.8 5.6 6.1 12.7 9.5 | 22.5 5.4 5.7 5.7 6.1 5.8 9.8 7.4 | 26.4 5.1 5.2 5.0 5.4 5.6 13.0 8.0
4g50         | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
4p50         | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
4sr50        | 31.9 6.4 7.1 7.1 7.3 7.4 14.4 8.8 | 23.3 6.6 7.2 7.2 7.6 7.4 9.2 7.6 | 30.0 6.5 7.1 7.1 7.3 7.3 13.5 10.4
5cq30        | 11.6 2.7 2.5 2.4 2.7 2.5 8.3 4.4 | 11.8 2.3 2.7 2.6 2.9 2.8 5.6 3.9 | 11.9 2.6 2.8 2.6 2.9 3.1 9.0 4.8
5g30         | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
5p30         | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
5sr30        | 15.3 4.1 4.1 3.8 5.1 4.2 10.5 6.6 | 13.5 4.2 4.0 4.0 4.9 4.2 7.4 6.0 | 14.9 4.3 4.7 4.5 4.6 4.7 9.8 5.9
6cq18        | 3.2 0.3 0.2 0.4 0.5 0.3 5.9 1.4 | 3.3 0.3 0.3 0.4 0.4 0.5 4.4 1.5 | 2.7 0.4 0.3 0.4 0.6 0.2 6.5 1.3
6g18         | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
6p18         | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
6sr18        | 4.1 0.7 0.5 0.8 1.2 1.1 7.6 1.9 | 4.0 1.0 0.9 0.7 0.9 0.9 5.7 2.5 | 4.2 1.1 0.7 1.0 1.2 0.7 7.1 2.4
7cq12        | 0.5 0.0 0.0 0.0 0.0 0.0 3.8 0.3 | 0.4 0.0 0.0 0.0 0.0 0.0 2.7 0.4 | 0.4 0.0 0.0 0.1 0.0 0.0 4.3 0.2
7g12         | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
7p12         | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
7sr12        | 0.6 0.0 0.0 0.0 0.1 0.0 4.6 0.4 | 0.7 0.0 0.0 0.1 0.0 0.1 3.4 0.6 | 0.4 0.0 0.0 0.1 0.0 0.1 5.1 0.3
8cq8         | 0.0 0.0 0.0 0.0 0.0 0.0 2.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 2.5 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 1.9 0.0
8g8          | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
8p8          | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
8sr8         | 0.0 0.0 0.0 0.0 0.0 0.0 2.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 1.7 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 2.4 0.0
All avg.     | 11.3 1.4 1.4 1.4 1.5 1.5 4.5 2.3 | 6.6 1.4 1.4 1.4 1.5 1.5 2.9 1.8 | 7.1 1.4 1.4 1.4 1.5 1.5 4.4 2.3
Clique avg.  | 20.6 2.4 2.4 2.5 2.6 2.6 8.3 4.2 | 12.7 2.3 2.4 2.5 2.6 2.5 5.6 3.2 | 13.0 2.4 2.4 2.4 2.6 2.6 8.4 4.3
Geom. avg.   | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
Product avg. | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
SR avg.      | 24.6 3.0 3.1 3.1 3.5 3.4 9.6 4.9 | 13.7 3.2 3.2 3.2 3.5 3.4 6.1 4.0 | 15.4 3.1 3.2 3.2 3.4 3.3 9.3 5.0
3-AP avg.    | 44.2 3.3 3.3 3.3 3.6 3.6 8.8 5.4 | 19.8 3.4 3.4 3.4 3.5 3.5 4.4 3.4 | 19.9 3.3 3.3 3.3 3.4 3.4 8.4 5.7
4-AP avg.    | 14.9 3.0 3.2 3.2 3.2 3.4 6.8 4.6 | 11.4 3.0 3.2 3.2 3.4 3.3 4.8 3.8 | 14.1 2.9 3.1 3.0 3.2 3.2 6.6 4.6
5-AP avg.    | 6.7 1.7 1.7 1.6 1.9 1.7 4.7 2.7 | 6.3 1.6 1.7 1.6 2.0 1.8 3.2 2.5 | 6.7 1.8 1.9 1.8 1.9 1.9 4.7 2.7
6-AP avg.    | 1.8 0.3 0.2 0.3 0.4 0.3 3.4 0.8 | 1.8 0.3 0.3 0.3 0.3 0.3 2.5 1.0 | 1.7 0.4 0.3 0.4 0.5 0.2 3.4 0.9
7-AP avg.    | 0.3 0.0 0.0 0.0 0.0 0.0 2.1 0.2 | 0.3 0.0 0.0 0.0 0.0 0.0 1.5 0.2 | 0.2 0.0 0.0 0.0 0.0 0.0 2.4 0.1
8-AP avg.    | 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 1.1 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 1.1 0.0

Tab. 6: Chain metaheuristic started from Trivial, Greedy and ROM. 10 seconds given. Solution error, %. Columns 1–8 in each block: 1 — 2-opt, 2 — 1DV, 3 — 2DV, 4 — sDV, 5 — 1DV2, 6 — 2DV2, 7 — sDV3, 8 — sDVv; the three blocks correspond to the Trivial, Greedy and ROM starts.

3gp100       | 15.1 1.6 1.6 1.6 1.7 1.7 2.3 2.2 | 5.3 1.6 1.6 1.6 1.6 1.6 2.5 2.1 | 9.8 1.6 1.6 1.6 1.8 1.7 2.2 2.1
3r150        | 75.3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 41.4 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 33.6 0.0 0.0 0.0 0.0 0.0 0.0 0.0
4gp30        | 6.5 1.4 0.8 0.8 1.3 1.0 0.7 1.3 | 6.2 1.7 0.8 0.8 1.5 0.8 0.7 1.1 | 2.2 1.4 0.8 0.8 1.4 0.8 0.7 1.2
4r80         | 52.1 3.9 1.1 1.0 3.6 1.1 0.1 0.0 | 41.4 3.9 1.0 1.0 4.3 1.1 0.4 0.0 | 55.0 4.0 1.1 1.1 3.4 1.3 0.4 0.0
5gp12        | 1.5 1.5 1.5 1.5 1.5 1.5 1.5 1.5 | 1.5 1.5 1.5 1.5 1.5 1.5 1.5 1.5 | 1.5 1.5 1.5 1.5 1.5 1.5 1.5 1.5
5r40         | 36.5 16.3 6.5 5.8 13.0 6.8 0.0 0.0 | 32.3 18.8 7.0 7.0 13.0 7.0 0.0 0.0 | 36.8 16.5 6.8 6.8 13.8 7.3 0.0 0.0
6gp8         | 2.4 2.4 2.4 2.4 2.4 2.4 2.4 2.4 | 2.4 2.4 2.4 2.4 2.4 2.4 2.4 2.4 | 2.4 2.4 2.4 2.4 2.4 2.4 2.4 2.4
6r22         | 16.8 27.7 9.1 5.0 12.3 7.7 0.0 0.0 | 15.5 26.8 11.4 4.5 13.2 8.2 0.0 0.0 | 14.1 30.0 10.5 5.9 12.3 8.6 0.0 0.0
7gp5         | 3.5 3.9 3.9 3.9 3.9 3.9 3.9 3.9 | 3.1 3.5 3.5 3.5 3.5 3.1 3.5 3.5 | 2.7 3.9 3.9 3.9 3.9 3.5 3.9 3.9
7r14         | 1.4 29.3 7.1 1.4 2.9 2.9 0.0 0.0 | 0.7 31.4 6.4 0.7 4.3 0.0 0.0 0.0 | 2.9 29.3 5.7 0.7 0.0 2.1 0.0 0.0
8gp4         | 1.6 5.2 4.7 3.6 1.0 2.1 5.2 3.6 | 0.5 3.1 2.1 2.6 1.6 1.6 2.6 2.6 | 1.0 4.7 4.7 3.6 1.0 2.1 4.2 4.2
8r9          | 0.0 23.3 1.1 0.0 0.0 0.0 0.0 0.0 | 0.0 15.6 1.1 0.0 0.0 0.0 0.0 0.0 | 0.0 22.2 4.4 0.0 0.0 0.0 0.0 0.0
All avg.     | 17.7 9.7 3.3 2.3 3.6 2.6 1.3 1.2 | 12.5 9.2 3.2 2.1 3.9 2.3 1.1 1.1 | 13.5 9.8 3.6 2.4 3.5 2.6 1.3 1.3
GP avg.      | 5.1 2.7 2.5 2.3 2.0 2.1 2.7 2.5 | 3.2 2.3 2.0 2.1 2.0 1.8 2.2 2.2 | 3.3 2.6 2.5 2.3 2.0 2.0 2.5 2.5
Rand. avg.   | 30.4 16.7 4.2 2.2 5.3 3.1 0.0 0.0 | 21.9 16.1 4.5 2.2 5.8 2.7 0.1 0.0 | 23.7 17.0 4.7 2.4 4.9 3.2 0.1 0.0
3-AP avg.    | 45.2 0.8 0.8 0.8 0.9 0.9 1.1 1.1 | 23.3 0.8 0.8 0.8 0.8 0.8 1.3 1.0 | 21.7 0.8 0.8 0.8 0.9 0.9 1.1 1.1
4-AP avg.    | 29.3 2.7 1.0 0.9 2.5 1.0 0.4 0.7 | 23.8 2.8 0.9 0.9 2.9 0.9 0.5 0.6 | 28.6 2.7 1.0 1.0 2.4 1.0 0.5 0.6
5-AP avg.    | 19.0 8.9 4.0 3.6 7.3 4.1 0.8 0.8 | 16.9 10.1 4.3 4.3 7.3 4.3 0.8 0.8 | 19.1 9.0 4.1 4.1 7.6 4.4 0.8 0.8
6-AP avg.    | 9.6 15.1 5.7 3.7 7.3 5.1 1.2 1.2 | 8.9 14.6 6.9 3.5 7.8 5.3 1.2 1.2 | 8.2 16.2 6.4 4.2 7.3 5.5 1.2 1.2
7-AP avg.    | 2.5 16.6 5.5 2.7 3.4 3.4 2.0 2.0 | 1.9 17.5 5.0 2.1 3.9 1.6 1.8 1.8 | 2.8 16.6 4.8 2.3 2.0 2.8 2.0 2.0
8-AP avg.    | 0.8 14.3 2.9 1.8 0.5 1.0 2.6 1.8 | 0.3 9.3 1.6 1.3 0.8 0.8 1.3 1.3 | 0.5 13.5 4.6 1.8 0.5 1.0 2.1 2.1

3cq150       | 79.8 5.4 5.4 5.4 5.9 5.9 13.3 8.3 | 38.2 5.6 5.6 5.6 5.7 5.8 7.9 6.1 | 36.8 6.1 5.9 5.9 6.3 6.2 12.7 8.2
3g150        | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
3p150        | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
3sr150       | 93.9 6.3 6.3 6.3 6.7 6.7 15.8 10.2 | 41.0 6.2 6.2 6.2 6.6 6.6 8.3 7.2 | 42.8 6.6 6.5 6.5 6.7 6.7 14.5 9.1
4cq50        | 26.2 5.0 5.0 4.9 5.2 5.3 9.9 6.5 | 22.4 4.9 5.3 5.2 5.4 5.5 8.9 7.0 | 25.5 4.6 4.8 4.8 5.2 4.9 11.3 7.1
4g50         | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
4p50         | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
4sr50        | 30.8 5.8 6.6 6.6 6.5 6.9 11.2 8.2 | 23.3 6.4 6.5 6.5 6.7 6.6 8.9 6.7 | 29.3 6.2 6.1 6.1 6.9 6.7 11.3 9.2
5cq30        | 10.9 2.2 1.9 2.0 2.0 2.1 6.9 4.2 | 11.0 1.9 2.2 2.2 2.3 2.5 5.1 3.4 | 11.4 2.4 2.3 2.3 2.4 2.4 7.3 3.7
5g30         | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
5p30         | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
5sr30        | 13.8 3.7 3.5 3.2 3.9 3.5 8.9 5.0 | 12.2 3.9 3.5 3.5 4.0 3.7 6.3 4.9 | 14.0 4.0 4.0 4.0 3.8 4.2 8.6 4.8
6cq18        | 2.5 0.2 0.1 0.3 0.4 0.2 4.1 0.8 | 2.7 0.3 0.2 0.0 0.2 0.4 3.5 0.8 | 2.3 0.2 0.1 0.2 0.2 0.2 4.8 1.0
6g18         | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
6p18         | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
6sr18        | 3.4 0.4 0.4 0.4 0.7 0.8 5.1 1.0 | 3.4 0.7 0.7 0.3 0.6 0.6 4.8 2.0 | 3.8 0.5 0.6 0.5 0.8 0.6 5.5 1.8
7cq12        | 0.2 0.0 0.0 0.0 0.0 0.0 2.7 0.1 | 0.2 0.0 0.0 0.0 0.0 0.0 2.1 0.1 | 0.2 0.0 0.0 0.0 0.0 0.0 3.4 0.1
7g12         | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
7p12         | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
7sr12        | 0.2 0.0 0.0 0.0 0.0 0.0 3.5 0.1 | 0.5 0.0 0.0 0.1 0.0 0.0 2.4 0.2 | 0.3 0.0 0.0 0.1 0.0 0.0 4.1 0.2
8cq8         | 0.0 0.0 0.0 0.0 0.0 0.0 1.4 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 1.4 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 1.4 0.0
8g8          | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
8p8          | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
8sr8         | 0.0 0.0 0.0 0.0 0.0 0.0 1.4 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 1.4 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 2.1 0.0
All avg.     | 10.9 1.2 1.2 1.2 1.3 1.3 3.5 1.9 | 6.5 1.3 1.3 1.2 1.3 1.3 2.5 1.6 | 6.9 1.3 1.3 1.3 1.4 1.3 3.6 1.9
Clique avg.  | 19.9 2.1 2.1 2.1 2.2 2.3 6.4 3.3 | 12.4 2.1 2.2 2.2 2.3 2.4 4.8 2.9 | 12.7 2.2 2.2 2.2 2.4 2.3 6.8 3.3
Geom. avg.   | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
Product avg. | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
SR avg.      | 23.7 2.7 2.8 2.7 3.0 3.0 7.7 4.1 | 13.4 2.9 2.8 2.8 3.0 2.9 5.3 3.5 | 15.0 2.9 2.9 2.9 3.0 3.0 7.7 4.2
3-AP avg.    | 43.4 2.9 2.9 2.9 3.1 3.1 7.3 4.6 | 19.8 3.0 3.0 3.0 3.1 3.1 4.0 3.3 | 19.9 3.2 3.1 3.1 3.2 3.2 6.8 4.3
4-AP avg.    | 14.2 2.7 2.9 2.9 2.9 3.1 5.3 3.7 | 11.4 2.8 2.9 2.9 3.0 3.0 4.4 3.4 | 13.7 2.7 2.7 2.7 3.0 2.9 5.7 4.1
5-AP avg.    | 6.2 1.5 1.3 1.3 1.5 1.4 3.9 2.3 | 5.8 1.5 1.4 1.4 1.6 1.6 2.8 2.1 | 6.4 1.6 1.6 1.6 1.6 1.7 4.0 2.1
6-AP avg.    | 1.5 0.1 0.1 0.2 0.3 0.3 2.3 0.5 | 1.5 0.2 0.2 0.1 0.2 0.3 2.1 0.7 | 1.5 0.2 0.2 0.2 0.3 0.2 2.6 0.7
7-AP avg.    | 0.1 0.0 0.0 0.0 0.0 0.0 1.5 0.1 | 0.2 0.0 0.0 0.0 0.0 0.0 1.1 0.1 | 0.1 0.0 0.0 0.0 0.0 0.0 1.9 0.1
8-AP avg.    | 0.0 0.0 0.0 0.0 0.0 0.0 0.7 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.7 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.9 0.0

Tab. 7: Multichain metaheuristic started from Trivial, Greedy and ROM. 5 seconds given. Solution error, %. Columns 1–8 in each block: 1 — 2-opt, 2 — 1DV, 3 — 2DV, 4 — sDV, 5 — 1DV2, 6 — 2DV2, 7 — sDV3, 8 — sDVv; the three blocks correspond to the Trivial, Greedy and ROM starts.

3gp100       | 11.8 1.1 1.2 1.1 1.3 1.3 156.9 2.0 | 5.3 1.2 1.2 1.1 1.4 1.3 5.6 2.2 | 9.7 1.2 1.2 1.2 1.3 1.3 9.8 2.0
3r150        | 68.1 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 41.4 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 33.6 0.0 0.0 0.0 0.0 0.0 0.0 0.0
4gp30        | 3.0 0.9 0.7 0.7 0.8 0.7 0.7 1.1 | 3.1 0.8 0.7 0.7 1.0 0.7 0.7 1.0 | 2.1 0.7 0.7 0.7 0.7 0.7 0.8 1.0
4r80         | 45.3 3.4 1.5 1.4 2.3 1.5 0.4 0.0 | 38.9 3.3 0.9 0.9 3.1 0.9 0.8 0.0 | 44.6 2.4 1.0 1.0 2.6 1.1 45.3 0.0
5gp12        | 1.5 1.5 1.5 1.5 1.5 1.5 1.5 1.5 | 1.5 1.5 1.5 1.5 1.5 1.5 1.5 1.5 | 1.5 1.5 1.5 1.5 1.5 1.5 1.5 1.5
5r40         | 26.0 15.3 5.3 5.5 10.8 6.3 516.8 0.0 | 26.8 14.5 5.3 5.8 10.0 5.0 0.0 0.0 | 28.3 15.3 6.8 6.8 11.0 7.0 152.5 0.0
6gp8         | 2.4 2.4 2.4 2.4 2.4 2.4 2.4 2.4 | 2.4 2.4 2.4 2.4 2.4 2.4 2.4 2.4 | 2.4 2.4 2.4 2.4 2.4 2.4 2.4 2.4
6r22         | 8.6 20.0 6.8 4.5 7.7 5.9 0.0 0.0 | 9.1 20.9 7.7 2.7 7.3 4.5 0.0 0.0 | 6.4 20.5 8.6 5.5 9.5 5.0 30.0 0.0
7gp5         | 3.9 3.9 3.5 3.9 3.9 3.9 3.9 3.9 | 3.5 3.5 3.5 3.5 3.5 3.5 3.5 3.5 | 3.9 3.9 3.9 3.9 3.9 3.5 3.9 3.9
7r14         | 0.0 18.6 7.1 0.7 0.0 2.1 0.0 0.0 | 0.0 23.6 8.6 1.4 0.7 2.9 0.0 0.0 | 0.0 21.4 7.1 2.1 0.0 2.1 0.0 0.0
8gp4         | 2.1 5.2 4.7 4.2 2.1 4.2 3.6 4.2 | 1.6 3.6 2.6 2.6 2.1 1.6 3.6 3.1 | 0.5 4.7 5.2 4.2 2.6 4.7 4.7 3.6
8r9          | 0.0 14.4 2.2 0.0 0.0 0.0 0.0 0.0 | 0.0 17.8 2.2 0.0 0.0 0.0 0.0 0.0 | 0.0 14.4 2.2 0.0 0.0 0.0 0.0 0.0
All avg.     | 14.4 7.2 3.1 2.2 2.7 2.5 57.2 1.3 | 11.1 7.8 3.0 1.9 2.7 2.0 1.5 1.1 | 11.1 7.4 3.4 2.4 3.0 2.5 20.9 1.2
GP avg.      | 4.1 2.5 2.3 2.3 2.0 2.3 28.2 2.5 | 2.9 2.2 2.0 2.0 2.0 1.8 2.9 2.3 | 3.4 2.4 2.5 2.3 2.1 2.4 3.8 2.4
Rand. avg.   | 24.7 11.9 3.8 2.0 3.5 2.6 86.2 0.0 | 19.4 13.3 4.1 1.8 3.5 2.2 0.1 0.0 | 18.8 12.3 4.3 2.6 3.9 2.5 38.0 0.0
3-AP avg.    | 40.0 0.6 0.6 0.6 0.6 0.6 78.5 1.0 | 23.3 0.6 0.6 0.6 0.7 0.7 2.8 1.1 | 21.7 0.6 0.6 0.6 0.7 0.7 4.9 1.0
4-AP avg.    | 24.1 2.1 1.1 1.0 1.5 1.1 0.5 0.6 | 21.0 2.0 0.8 0.8 2.0 0.8 0.7 0.5 | 23.3 1.5 0.8 0.8 1.7 0.9 23.0 0.5
5-AP avg.    | 13.8 8.4 3.4 3.5 6.1 3.9 259.1 0.8 | 14.1 8.0 3.4 3.6 5.8 3.3 0.8 0.8 | 14.9 8.4 4.1 4.1 6.3 4.3 77.0 0.8
6-AP avg.    | 5.5 11.2 4.6 3.5 5.1 4.2 1.2 1.2 | 5.7 11.7 5.1 2.6 4.8 3.5 1.2 1.2 | 4.4 11.4 5.5 3.9 6.0 3.7 16.2 1.2
7-AP avg.    | 2.0 11.2 5.3 2.3 2.0 3.0 2.0 2.0 | 1.8 13.5 6.0 2.5 2.1 3.2 1.8 1.8 | 2.0 12.7 5.5 3.0 2.0 2.8 2.0 2.0
8-AP avg.    | 1.0 9.8 3.5 2.1 1.0 2.1 1.8 2.1 | 0.8 10.7 2.4 1.3 1.0 0.8 1.8 1.6 | 0.3 9.6 3.7 2.1 1.3 2.3 2.3 1.8

3cq150       | 75.2 3.9 3.8 3.7 4.4 4.3 1219.1 491.9 | 38.2 2.5 2.5 2.5 3.1 3.0 41.1 20.9 | 36.8 3.9 3.8 3.7 4.8 4.8 36.8 24.2
3g150        | 0.0 0.0 0.0 0.0 0.0 0.0 865.3 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 19.5 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 2.9 0.0
3p150        | 0.0 0.0 0.0 0.0 0.0 0.0 76.3 76.3 | 0.0 0.0 0.0 0.0 0.0 0.0 215.3 215.3 | 0.0 0.0 0.0 0.0 0.0 0.0 7.2 7.2
3sr150       | 85.8 4.5 4.3 4.5 5.6 5.5 1249.7 630.5 | 41.0 3.2 3.2 3.2 3.4 3.3 41.9 7.4 | 42.8 4.0 3.9 4.0 4.9 4.9 42.8 32.7
4cq50        | 12.7 2.8 4.4 4.2 3.6 4.8 283.6 9.7 | 10.6 1.9 2.9 2.9 2.5 3.5 13.9 6.6 | 13.5 3.8 3.2 3.2 3.8 3.6 28.4 9.6
4g50         | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
4p50         | 0.0 0.0 0.0 0.0 0.0 0.0 102.7 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 484.2 0.0 | 0.0 0.0 0.0 0.0 0.0 0.0 8.3 0.0
4sr50        | 16.4 3.0 3.4 3.5 3.1 3.9 155.4 10.7 | 13.7 2.1 3.0 3.1 2.2 3.4 19.5 7.1 | 15.9 3.
8 4.2 4. 0 3.9 4.8 29.2 10. 5 5cq30 3.4 2.1 1.4 1.4 3.0 1.6 154.5 4.4 3.6 2. 0 2.2 2.2 2.3 2.2 20.2 3.2 4.1 2. 2 2.2 2.2 2.7 2.3 21.2 4.6 5g30 0.0 0.0 0.0 0.0 0.0 0. 0 0.0 0. 0 0. 0 0.0 0.0 0.0 0.0 0. 0 2. 3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0. 0 5p30 0.0 0.0 0.0 0.0 0.0 0. 0 137.2 0.0 0.0 0.0 0. 0 0.0 0.0 0.0 1016.7 0. 0 0.0 0.0 0.0 0.0 0.0 0.0 7.6 0.0 5sr30 6.0 2.4 3.4 3.4 2. 7 3.6 195.6 5.3 4. 6 2.3 2.3 2.2 2.2 2.4 15.8 4. 2 4.7 3.4 3.1 3.2 3.9 3.4 27.6 6.4 6cq18 2.8 2.1 2.0 1.4 1.8 1.8 141.9 3.0 1.9 1. 6 1.5 1.5 1.7 1.2 15.4 2.3 2.7 2. 2 1.9 1.8 1.4 2.1 18.1 2.3 6g18 0.0 0.0 0.0 0.0 0.0 0. 0 260.1 0.0 0.0 0.0 0. 0 0.0 0.0 0.0 26.3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 2.6 0.0 6p18 0.0 0.0 0.0 0.0 0.0 0. 0 162.9 0.0 0.0 0.0 0. 0 0.0 0.0 0.0 2117.7 0. 0 0.0 0.0 0.0 0.0 0.0 0.0 7.8 0.0 6sr18 3.8 1.9 2.4 2.2 2. 3 2.3 120.7 3.5 3. 0 2.0 2.1 2.0 2.1 2.1 13.2 2. 6 3.9 2.3 1.8 2.1 2.7 1.7 19.1 3.0 7cq12 0.9 1.0 1.0 1.0 0.5 0.9 91.5 1. 2 0.7 0.6 0.7 0. 6 0.8 0.4 13.8 0.6 1.1 0.8 0.2 0.9 0.8 0.3 14.8 1.1 7g12 0.0 0.0 0.0 0.0 0.0 0. 0 156.4 0.0 0.0 0.0 0. 0 0.0 0.0 0.0 18.9 0.0 0.0 0.0 0.0 0.0 0.0 0.0 2.4 0.0 7p12 0.0 0.0 0.0 0.0 0.0 0. 0 346.1 0.0 0.0 0.0 0. 0 0.0 0.0 0.0 3161.5 0. 0 0.0 0.0 0.0 0.0 0.0 0.0 9.2 0.0 7sr12 1.1 1.4 0.9 1.2 1.0 1. 1 77.7 0.9 1.4 1. 0 1.1 1.1 0.6 0.7 9.4 1.2 1.8 1.1 0.7 0.8 1.0 1.0 14.9 1.2 8cq8 0.1 0.3 0.2 0.2 0.1 0.4 62.3 0. 2 0.2 0.2 0.1 0. 2 0.1 0.1 10.4 0.3 0.2 0.3 0.2 0.3 0.2 0.2 9.9 0.3 8g8 0.0 0.0 0.0 0.0 0.0 0. 0 104.5 0.0 0.0 0.0 0. 0 0.0 0.0 0.0 14.9 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.4 0.0 8p8 0.0 0.0 0.0 0.0 0.0 0. 0 176.7 0.0 0.0 0.0 0. 0 0.0 0.0 0.0 3604.6 0.0 0.0 0. 0 0.0 0. 0 0.0 0.0 9.0 0. 0 8sr8 0.5 0. 2 0.3 0.4 0.1 0.2 51.9 0. 3 0.2 0.5 0.4 0. 4 0.2 0.3 6.6 0.6 0.3 0.5 0.3 0.6 0.2 0.3 9.8 0. 4 All avg. 8.7 1.1 1.1 1.1 1.2 1.3 258.0 51.6 5.0 0.8 0. 9 0.9 0.9 0.9 454.3 11.3 5.3 1.2 1. 1 1.1 1.3 1.2 13.8 4.3 Clique avg. 15. 9 2.0 2.1 2.0 2.2 2.3 325.5 85.1 9.2 1.5 1.7 1.7 1.8 1.7 19.1 5. 6 9.7 2.2 1.9 2.0 2.3 2.2 21.5 7.0 Geom. avg. 
0.0 0.0 0.0 0.0 0.0 0. 0 231.1 0.0 0.0 0.0 0. 0 0.0 0.0 0.0 13.6 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.5 0.0 Product avg. 0.0 0.0 0.0 0.0 0.0 0.0 167.0 12.7 0.0 0.0 0. 0 0.0 0.0 0.0 1766.7 35. 9 0. 0 0.0 0.0 0.0 0.0 0.0 8.2 1.2 SR avg. 18.9 2.3 2.5 2.5 2. 5 2.8 308.5 108.5 10.7 1.8 2.0 2. 0 1.8 2.0 17.7 3. 9 11.6 2.5 2.3 2.4 2.8 2.7 23.9 9.0 3-AP avg. 40.2 2.1 2.0 2.1 2. 5 2.4 852.6 299.7 19.8 1.4 1.4 1.4 1.6 1.6 79.4 60.9 19. 9 2.0 1.9 1.9 2.4 2.4 22.4 16. 0 4-AP avg. 7.3 1.5 2.0 1.9 1. 7 2.2 135.4 5.1 6. 1 1.0 1.5 1.5 1.2 1.7 129.4 3. 4 7. 4 1.9 1.8 1.8 1.9 2.1 16.5 5.0 5-AP avg. 2.3 1.2 1.2 1.2 1. 4 1.3 121.8 2.4 2. 1 1.1 1.1 1.1 1.1 1.2 263.8 1. 8 2. 2 1.4 1.3 1.4 1.6 1.4 14.1 2.8 6-AP avg. 1.6 1.0 1.1 0.9 1.0 1.0 171.4 1.6 1.2 0. 9 0.9 0.9 1.0 0.8 543.1 1.2 1.7 1.1 0.9 1.0 1.0 1.0 11.9 1. 3 7-AP avg. 0.5 0.6 0.5 0.6 0.4 0.5 167.9 0. 5 0. 5 0.4 0.4 0.4 0.4 0. 3 800.9 0.5 0. 7 0.5 0.2 0.4 0.5 0.3 10.3 0.6 8-AP avg. 0.2 0.1 0.1 0.1 0.0 0.2 98.9 0. 1 0.1 0.2 0.1 0. 2 0.1 0.1 909.1 0.2 0. 1 0.2 0.1 0.2 0.1 0.1 7.5 0.2 Local Search Heur istics f or the Multidimensio nal Assignment Problem 28 T ab. 8: Multichain metah euristic starte d from Tri vial, Greedy and R OM. 10 secon ds given. 1 — 2- opt, 2 — 1D V , 3 — 2D V , 4 — s D V , 5 — 1DV 2 , 6 — 2DV 2 , 7 — s D V 3 , 8 — s D V v . Solution error , % Tri vial Greedy R OM Inst. 1 2 3 4 5 6 7 8 1 2 3 4 5 6 7 8 1 2 3 4 5 6 7 8 3gp100 11.2 1.0 1. 0 1.0 1.1 1. 1 2.5 1.7 5.3 1.0 1.0 1.0 1.1 1.1 2.8 1.8 9.7 1.0 0.9 1.0 1.2 1.1 2. 3 1.7 3r150 65. 3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 41.4 0. 0 0.0 0.0 0. 0 0.0 0. 0 0.0 33.6 0. 0 0.0 0.0 0. 0 0.0 0.0 0.0 4gp30 2.9 0.7 0.7 0.7 0.7 0.7 0.7 1.0 2.6 0.8 0.7 0.7 0.7 0.7 0.7 0.9 2.1 0.7 0.7 0. 7 0.7 0. 7 0.7 0. 8 4r80 43.8 2.5 0.9 0.9 2.1 1.0 0.2 0. 0 38.1 2.3 0. 6 0.6 2.3 0. 6 0.5 0.0 42.4 2.0 0.8 0.8 2.0 0.8 0. 5 0.0 5gp12 1.5 1.5 1.5 1.5 1.5 1.5 1.5 1.5 1.5 1.5 1.5 1.5 1.5 1.5 1.5 1.5 1.5 1.5 1.5 1. 5 1.5 1. 5 1.5 1. 5 5r40 25.8 12.5 4.0 4.0 9. 0 4.0 0.0 0.0 23.5 13. 3 4.8 4.5 9. 
0 4.8 0.0 0.0 25.8 14.3 5.5 5.5 9. 8 5.5 0.0 0.0 6gp8 2.4 2.4 2.4 2.4 2.4 2.4 2.4 2.4 2.4 2.4 2.4 2.4 2.4 2.4 2.4 2.4 2.4 2.4 2.4 2. 4 2.4 2. 4 2.4 2. 4 6r22 6.4 18.2 5.9 2.7 6.4 5.0 0.0 0. 0 6.4 18.2 6.4 2.3 5. 0 2.7 0.0 0.0 5.9 17.3 6.8 3.6 5. 5 4.5 0.0 0.0 7gp5 3.9 3.9 3.5 3.9 3.9 3.9 3.1 3.9 3.5 3.5 3.5 3.5 3.5 3.5 3.5 3.5 3.9 3.9 3.5 3. 9 3.9 3. 5 3.9 3. 9 7r14 0.0 16.4 4.3 0.0 0.0 0.0 0.0 0.0 0.0 19.3 5.0 0.0 0.0 1.4 0.0 0.0 0.0 17.9 5. 0 0.0 0. 0 0.0 0.0 0.0 8gp4 0.0 5.2 4.7 3.6 1.6 2.6 3.6 4.2 1.0 3.1 2.6 2.1 1.0 1.6 3.6 2.1 0.0 4.7 5.2 3. 6 1.6 3. 6 4.2 2. 6 8r9 0.0 13.3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 13.3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 10.0 1. 1 0.0 0. 0 0.0 0.0 0.0 All avg. 13.6 6.5 2.4 1.7 2.4 1.9 1.2 1. 2 10. 5 6.6 2.4 1.6 2.2 1.7 1.3 1.0 10.6 6.3 2.8 1.9 2.4 2.0 1.3 1.1 GP avg. 3.7 2.4 2.3 2.2 1.9 2.0 2.3 2. 4 2. 7 2.1 2.0 1.9 1.7 1.8 2.4 2.0 3.3 2.4 2.4 2. 2 1.9 2.1 2.5 2. 2 Rand. avg. 23.5 10.5 2.5 1.3 2.9 1.7 0.0 0.0 18.2 11.1 2.8 1.2 2.7 1.6 0.1 0.0 17.9 10.2 3. 2 1.6 2.9 1.8 0.1 0. 0 3-AP avg. 38.2 0.5 0.5 0.5 0.6 0.6 1.3 0.9 23.3 0.5 0.5 0.5 0.6 0.5 1.4 0.9 21.6 0.5 0.5 0.5 0.6 0.5 1. 2 0.9 4-AP avg. 23.3 1.6 0.8 0.8 1.4 0.8 0.5 0.5 20. 4 1.5 0.7 0.7 1.5 0.7 0.6 0.4 22.2 1.3 0.7 0. 7 1.3 0.7 0.6 0. 4 5-AP avg. 13.6 7.0 2.8 2.8 5.3 2.8 0.8 0.8 12.5 7.4 3.1 3.0 5.3 3.1 0.8 0.8 13.6 7.9 3.5 3.5 5.6 3.5 0. 8 0.8 6-AP avg. 4.4 10.3 4.2 2.6 4. 4 3.7 1.2 1.2 4.4 10. 3 4.4 2.3 3. 7 2.6 1. 2 1.2 4.2 9. 8 4.6 3.0 3. 9 3.5 1.2 1.2 7-AP avg. 2.0 10.2 3.9 2.0 2. 0 2.0 1.6 2.0 1.8 11. 4 4.3 1.8 1. 8 2.5 1. 8 1.8 2.0 10.9 4.3 2.0 2.0 1.8 2.0 2. 0 8-AP avg. 0.0 9.3 2.3 1.8 0.8 1.3 1.8 2.1 0.5 8.2 1.3 1.0 0.5 0.8 1.8 1.0 0.0 7.3 3.2 1. 8 0.8 1. 8 2.1 1. 3 3cq150 71.4 1.9 1. 7 1.8 3.0 2. 9 1219.1 8.8 38. 2 1.3 1.3 1.3 2.0 2.0 41. 1 6.0 36.8 2. 4 2.2 2.3 3.0 2.9 36.8 10.1 3g150 0.0 0.0 0.0 0.0 0.0 0.0 865.3 0.0 0.0 0. 0 0.0 0.0 0. 0 0.0 19.5 0. 0 0. 0 0.0 0.0 0.0 0.0 0.0 2.9 0. 0 3p150 0.0 0.0 0.0 0.0 0.0 0.0 76.3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 215.3 0.0 0.0 0.0 0.0 0. 
0 0.0 0.0 7.2 0. 0 3sr150 82.1 3.0 3.1 2.9 4.0 3.9 1249.7 10.5 41.0 1.9 1.9 1.9 2.8 2.8 41.9 6.2 42.8 2.9 2.8 2.8 3.7 3.7 42.8 10. 3 4cq50 11.1 2.6 3.4 3.3 3.2 3.8 11.3 7.8 10. 3 1.5 2.7 2.7 2.4 2.8 8.6 4.6 11.1 2.8 2.8 2. 8 3.5 3. 1 12.3 7.3 4g50 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0. 0 0.0 0. 0 0.0 0. 0 4p50 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0. 0 0.0 0. 0 0.0 0. 0 4sr50 14.6 2.0 2.7 2.6 2.6 3.2 12.9 8.1 12. 5 2.1 2.3 2.3 2.0 2.8 9.1 5.0 14.7 3.6 3.4 3. 4 3.3 3. 5 12.6 7.8 5cq30 2.2 2.1 1.3 1.3 2.4 1.3 8. 0 3.5 2.8 1.5 2.2 2. 2 1.8 2.2 5.0 2.6 3.0 2.2 2.0 2. 2 2.5 2. 2 8.0 4.2 5g30 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0. 0 0.0 0. 0 0.0 0. 0 5p30 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 809.7 0. 0 0. 0 0.0 0.0 0.0 0.0 0.0 0. 0 0.0 5sr30 4.6 2.2 3.3 3.3 2.6 3.4 9.6 4.3 3.6 1. 9 1.9 2.0 2.0 2. 0 6. 6 3.2 3.5 3. 1 2.9 2.9 3. 6 2.9 13.7 4.6 6cq18 2.8 2.1 2.0 1.4 1.8 1.7 5. 6 2.4 1.9 1.6 1.5 1. 5 1.5 1.2 4.2 1.9 2.0 2.2 1.9 1. 8 1.4 1. 9 5.8 2.3 6g18 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0. 0 0.0 0. 0 0.0 0. 0 6p18 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1038.3 0.0 0.0 0.0 0.0 0. 0 0.0 0. 0 0.0 0. 0 6sr18 3.5 1.8 2.4 2.0 2.3 2.3 6.5 3.2 3.0 2. 0 2.1 1.9 2.1 2. 1 5. 3 2.3 3.9 2. 3 1.8 2.1 2. 6 1.7 6.7 2.6 7cq12 0.9 1.0 1.0 1.0 0.5 0.9 38.6 1.2 0.7 0.6 0.7 0.6 0.8 0.4 7.7 0.4 1.0 0.8 0.2 0.8 0.8 0.3 9. 2 0.9 7g12 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0. 0 0.0 0. 0 0.0 0. 0 7p12 0.0 0.0 0.0 0.0 0.0 0.0 346.1 0.0 0.0 0. 0 0.0 0.0 0. 0 0.0 3161.5 0.0 0.0 0.0 0.0 0. 0 0.0 0. 0 9.2 0.0 7sr12 1.0 1.4 0.9 1.1 1.0 1.1 62. 4 0.9 1.0 1. 0 1.1 1.1 0. 6 0.7 9.4 1.2 1.6 1.1 0.7 0.8 1.0 0.9 13.0 1.1 8cq8 0.1 0.2 0.2 0.2 0.0 0.4 62.3 0.2 0.2 0.2 0.1 0.2 0.1 0.1 10.4 0. 3 0.2 0.3 0.2 0.3 0.2 0.2 9. 9 0.3 8g8 0.0 0.0 0.0 0.0 0.0 0.0 104.5 0.0 0.0 0. 0 0.0 0.0 0. 
0 0.0 14.9 0. 0 0. 0 0.0 0.0 0.0 0.0 0.0 1.4 0. 0 8p8 0.0 0.0 0.0 0.0 0.0 0. 0 176.7 0.0 0.0 0.0 0.0 0.0 0.0 0.0 3604.6 0.0 0.0 0.0 0.0 0. 0 0.0 0. 0 9.0 0. 0 8sr8 0.5 0. 2 0.3 0.4 0. 1 0.2 51.9 0.2 0.2 0.5 0.4 0.4 0.2 0.3 6.6 0.6 0.3 0.5 0.3 0.6 0.1 0.3 9. 8 0.4 All avg. 8.1 0.9 0.9 0.9 1.0 1.0 179.4 2. 1 4.8 0.7 0.8 0.8 0.8 0.8 375.8 1.4 5.0 1.0 0.9 0.9 1.1 1.0 8.8 2.2 Clique avg. 14. 8 1.7 1.6 1.5 1.8 1.8 224.1 4.0 9.0 1.1 1.4 1.4 1.4 1.5 12.8 2.6 9.0 1.8 1.6 1. 7 1.9 1. 8 13.7 4.2 Geom. avg. 0.0 0.0 0.0 0.0 0.0 0.0 161.6 0.0 0.0 0. 0 0.0 0.0 0. 0 0.0 5.7 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.7 0. 0 Product avg. 0.0 0.0 0. 0 0.0 0. 0 0.0 99.8 0.0 0.0 0. 0 0.0 0.0 0. 0 0.0 1471.6 0. 0 0.0 0.0 0.0 0.0 0.0 0.0 4. 2 0.0 SR avg. 17.7 1.8 2.1 2.0 2.1 2.4 232.2 4.5 10.2 1.6 1.6 1.6 1.6 1.8 13.2 3.1 11.1 2. 2 2.0 2.1 2. 4 2.2 16.4 4.5 3-AP avg. 38.4 1.2 1.2 1.2 1.7 1.7 852.6 4.8 19.8 0.8 0.8 0.8 1.2 1.2 79.4 3. 1 19. 9 1.3 1.3 1.3 1.7 1.6 22.4 5.1 4-AP avg. 6.4 1.2 1.5 1.5 1.4 1.7 6.1 4.0 5.7 0. 9 1.2 1.2 1.1 1. 4 4. 4 2.4 6.4 1. 6 1.6 1.6 1. 7 1.7 6.2 3.8 5-AP avg. 1.7 1.1 1.1 1.1 1.3 1.2 4.4 1.9 1.6 0. 8 1.0 1.1 1.0 1. 1 205.3 1.5 1.6 1.3 1.2 1. 3 1.5 1. 3 5.4 2.2 6-AP avg. 1.6 1.0 1.1 0.9 1.0 1.0 3. 0 1.4 1.2 0.9 0.9 0. 9 0.9 0.8 262.0 1.0 1.5 1.1 0.9 1. 0 1.0 0. 9 3.1 1. 2 7-AP avg. 0.5 0.6 0.5 0.5 0.4 0.5 111.8 0.5 0.4 0.4 0.4 0.4 0.4 0.3 794.7 0.4 0.6 0.5 0.2 0. 4 0.5 0. 3 7.8 0. 5 8-AP avg. 0.2 0.1 0.1 0.1 0.0 0.2 98.9 0.1 0.1 0.2 0.1 0.2 0.1 0.1 909.1 0. 2 0. 1 0.2 0.1 0.2 0.1 0.1 7.5 0. 2 Local Search Heur istics f or the Multidimensio nal Assignment Problem 29 T ab. 9: Heuristics compar ison for the i nstances with indep endent weights. Inst. 
< 10 ms < 30 ms < 100 ms < 300 ms < 1000 ms 3r150 — C C C s D V s D V 2DV 2 Gr 1.4 1.5 1.5 C s D V v 0.3 C C C C C C C s D V 2DV 2 s D V v s D V v s D V 2DV 2 s D V v Gr R R R 0.0 0.0 0.0 0.0 0.0 0.0 0.0 (no better solutions) 4r80 C 1DV 25.8 s D V 2DV 2 Gr Gr 6.1 6.1 s D V v Gr 1.5 C s DV v Gr 0.3 C C C s D V v s D V v s D V v Gr R 0.0 0.0 0.0 5r40 1DV 2 Gr 15.0 2DV s D V 2DV 2 Gr Gr Gr 13.5 13.5 13.5 C s D V v 1.2 C s D V v 0.0 (no better solutions) 6r22 C C 2DV s D V 46.4 47.3 2-opt Gr 25. 9 C s DV v Gr 1.4 C s DV v Gr 0.0 (no better solutions) 7r14 C 2-opt Gr 28.6 C s DV v Gr 13.6 C s D V v 1.4 C MC C s D V v s D V v s D V v Gr 0.0 0.0 0.0 (no better solutions) 8r9 C C 2-opt 2-opt Gr 22.2 24.4 C s D V v 12.2 C s DV v 0.0 (no better solutions) (no better solutions) T otal — C C C s D V 2DV 2 s D V Gr Gr 18.6 19.3 20.2 C s D V v Gr 4.8 C s DV v Gr 0.1 C C s D V v s D V v Gr 0.0 0.0 Local Search Heur istics f or the Multidimensio nal Assignment Problem 30 T ab. 10: Heur istics comparison for the instances with decomposable weights. Inst. < 100 ms < 300 ms < 1000 ms < 3000 ms < 10000 ms 3cq150 s D V Gr 8. 
1 C C s D V 2DV 2 Gr Gr 7.8 7.8 MC MC s D V 2DV 2 Gr Gr 6.6 7.1 MC MC s D V 2DV 2 Gr Gr 3.1 3.4 MC s D V Gr 1.3 3sr150 C C s D V s D V 1DV 2 2DV 2 Gr Gr Gr Gr 9.6 9.8 9.8 10.2 C C s D V 2DV 2 Gr Gr 8.4 8.4 MC s DV Gr 6.6 MC s D V Gr 3.5 MC s DV Gr 2.0 4cq50 C MC C 1DV 1DV 1DV 2 9.7 10.0 10.3 MC MC 1DV 1DV Gr 6.4 6.9 MC MC MC MC MC 1DV 1DV 2 1DV s D V 1DV 2 Gr Gr R R 4.7 4.9 5.0 5.1 5.1 MC 1DV Gr 2.7 MC 1DV Gr 1.5 4sr50 C MC 1DV 1DV 11.7 12.2 MC MC 1DV 1DV Gr 7.0 7.7 MC MC 1DV 1DV 2 Gr Gr 4.7 5.0 MC MC 1DV 1DV 2 Gr Gr 2.6 2.7 MC MC MC MC 1DV 2 1DV 1DV 1DV Gr Gr M-R 2.0 2.0 2.1 2.1 5cq30 C MC 1DV 1DV 6.3 6.4 MC 1DV 3.2 MC MC MC 2DV 1DV s D V 2.6 2.6 2.7 MC MC 2DV s D V 1.7 1.7 MC MC MC s D V 2DV 2DV 2 1.3 1.3 1.3 5sr30 MC C 1DV 1DV 7.9 8.3 MC 1DV 3.9 MC 1DV 3.2 MC MC MC MC MC 1DV 2 2DV s D V 1DV 2DV 2 Gr Gr Gr Gr 2.4 2.5 2.5 2.5 2.6 MC MC MC MC MC 2DV 1DV s D V 2DV 2 1DV 2 Gr Gr Gr Gr Gr 1.9 1.9 2.0 2.0 2.0 6cq18 C 1DV 2.1 C 1D V 1.0 C 1D V 0.7 C 2D V Gr 0.3 C s DV Gr 0.0 6sr18 MC C 1DV 1DV 3.8 3.8 MC C 1DV 1DV 2.1 2.1 C C 2DV 2DV 2 R 1.4 1.5 C 1DV 0.8 C s DV Gr 0.3 7cq12 C 1DV 0.7 C 1D V 0.2 C 1DV 2 0.1 C 1D V 0.0 C C C C C C C C 1DV 2DV 2 1DV 1DV 2 1DV 2DV s D V 2DV 2 Gr Gr R R R R 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 7sr12 C 1DV 1.2 C C 1DV 1DV 2 0.5 0.5 C 1DV R 0. 1 C 2DV 0.0 (no better solutions) 8cq8 C 1DV 0.0 C 1D V 0.0 (no better solutions) (no better solutions) (no better s olutions) 8sr8 C 1DV 0.3 C C 1DV 2DV 0.0 0.0 C C C C C C C C C C C C 1DV 2DV 1DV 2 2DV 2 1DV 1DV 2 2DV 2 2-opt 1DV 2DV 1DV 2 2DV 2 Gr Gr Gr R R R R R 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 (no better solutions) (no better s olutions) T otal C 1DV 6.4 C C 1DV 2DV 4.5 5.0 MC MC C MC 1DV 2DV 2DV 1DV R 3.5 3.7 3.7 3.8 MC MC MC MC 1DV 2DV s D V 1DV 2 Gr Gr Gr Gr 1.9 2.1 2.1 2.1 MC 1DV Gr 1.3
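To make the legend names concrete: the dimensionwise (DV) neighborhoods compared in the tables above fix every coordinate of the current assignment except one dimension and reassign that dimension optimally by solving an ordinary two-dimensional AP. The following Python sketch is purely illustrative and is not the authors' implementation: the brute-force AP solver (exponential, for tiny n only) and the callable weight function are assumptions made for the example; a real implementation would use a polynomial-time AP algorithm.

```python
from itertools import permutations

def solve_ap(cost):
    """Brute-force 2-dimensional assignment problem solver.
    cost[i][j] is the cost of matching row i with column j.
    Exponential in n; for small illustrative instances only."""
    n = len(cost)
    best_perm, best_val = None, float("inf")
    for perm in permutations(range(n)):
        val = sum(cost[i][perm[i]] for i in range(n))
        if val < best_val:
            best_val, best_perm = val, perm
    return list(best_perm), best_val

def one_dv_step(vectors, weight, d):
    """One 1DV-style improvement step for s-AP: keep every coordinate
    fixed except dimension d, and redistribute the current dth
    coordinates among the n vectors optimally via a 2-dimensional AP.
    `vectors` is the current assignment (a list of n s-tuples);
    `weight` maps an s-tuple to its cost."""
    n = len(vectors)
    values = [v[d] for v in vectors]  # dth coordinates currently in use
    cost = [[weight(v[:d] + (x,) + v[d + 1:]) for x in values]
            for v in vectors]
    perm, _ = solve_ap(cost)
    return [vectors[i][:d] + (values[perm[i]],) + vectors[i][d + 1:]
            for i in range(n)]
```

A full 1DV local search would repeat this step over all dimensions until no dimension yields an improvement; sDV-style variants reassign subsets of dimensions at once, trading a larger neighborhood for more AP solves per iteration.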