Exponential-Time Approximation of Hard Problems

Marek Cygan, Łukasz Kowalik, Marcin Pilipczuk and Mateusz Wykurz*

Abstract

We study optimization problems that are neither approximable in polynomial time (at least with a constant factor) nor fixed parameter tractable, under widely believed complexity assumptions. Specifically, we focus on Maximum Independent Set, Vertex Coloring, Set Cover, and Bandwidth. In recent years, many researchers have designed exact exponential-time algorithms for these and other hard problems. The goal is to keep the time complexity of order O(c^n), but with the constant c as small as possible. In this work we extend this line of research and investigate whether the constant c can be made even smaller when one allows constant factor approximation. In fact, we describe a kind of approximation scheme: a trade-off between the approximation factor and the time complexity. We study two natural approaches. The first approach consists of designing a backtracking algorithm with a small search tree. We present one result of that kind: a (4r − 1)-approximation of Bandwidth in time O*(2^{n/r}), for any positive integer r. The second approach uses general transformations from exponential-time exact algorithms to approximations that are faster but still exponential-time. For example, we show that for any reduction rate r, one can transform any O*(c^n)-time¹ algorithm for Set Cover into a (1 + ln r)-approximation algorithm running in time O*(c^{n/r}). We believe that results of that kind extend the applicability of exact algorithms for NP-hard problems.

Classification: Algorithms and data structures; "fast" exponential-time algorithms

1 Introduction

Motivation. One way of coping with NP-hardness is polynomial-time approximation, i.e.
looking for solutions that are relatively close to optimal. Unfortunately, it turns out that there are still many problems which do not allow for good approximation. Let us recall some examples. Håstad [16] showed that Independent Set cannot be approximated in polynomial time with factor n^{1−ε} for any ε > 0 unless NP = ZPP. The same holds for Vertex Coloring due to Feige and Kilian [11]. By another result of Feige [8], Set Cover cannot be approximated in polynomial time with factor (1 − ε) ln n, where n is the size of the set to cover, for any ε > 0 unless NP ⊆ DTIME(n^{log log n}).

Another approach is the area of parametrized complexity (see e.g. [7]). There the goal is to find an algorithm with time exponential only in a parameter unrelated to the instance size (then we say the problem is fixed parameter tractable, FPT in short). This parameter may reflect the complexity of the instance, like treewidth, but then we get an efficient algorithm only for some subclass of possible instances. Another choice of the parameter is the measure of the solution quality. For example, one can verify whether an n-vertex graph has a vertex cover of size k in O(1.2738^k + kn) time [4]. Again, the parametrized approach does not succeed in some cases. Verifying whether a graph is k-colorable is NP-complete for any k ≥ 3, while Independent Set and Set Cover are W[1]- and W[2]-complete,

* Institute of Informatics, University of Warsaw, Poland. This research is partially supported by a grant from the Polish Ministry of Science and Higher Education, project N206 005 32/0807. E-mail addresses: cygan@mimuw.edu.pl, kowalik@mimuw.edu.pl, malcin@mimuw.edu.pl, mateusz.wykurz@students.mimuw.edu.pl.
¹ The O*(f(n)) notation suppresses factors polynomial in the input size.

respectively, meaning roughly that an FPT algorithm for Independent Set or Set Cover would imply algorithms of that kind for a host of other hard problems.

The aforementioned hardness results motivate the study of "moderately exponential time" algorithms. The goal here is to devise algorithms with exponential running time O(2^{n/r}) with r big enough. Indeed, an O(2^{n/50})-time algorithm may appear practical for some range of n, say n ≤ 1000. Despite some progress in this area, we are still far from exact algorithms with time complexity of that order. One of the most researched problems in this field is Independent Set. Exhaustive search for that problem gives an O(2^n) time bound, while the currently best published result [13] is O(2^{n/3.47}). For Vertex Coloring, the first O*(2^{n/0.77})-time algorithm by Lawler was then improved in a series of papers culminating in a breakthrough O*(2^n) bound of Björklund, Husfeldt and Koivisto [2].

Now consider the Unweighted Set Cover problem. The instance consists of a family of sets S = {S_1, ..., S_m}. The set U = ∪S is called the universe and we denote n = |U|. The goal is to find the smallest possible subfamily C ⊆ S such that ∪C = U. Assume that m is relatively small but big enough that finding an optimal solution using an exact algorithm is out of the question, say m = 150. If the universe is small, we can get a good approximation by the greedy algorithm (see e.g. [23]) with approximation ratio H_n < ln n + 1. However, this approximation guarantee becomes bad when n is big. A natural thing to consider is an approximation algorithm with a better (e.g. constant) guarantee and with running time exponential but substantially lower than the best known exact algorithm. In this paper we explore such an approach.
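The greedy algorithm with the H_n guarantee mentioned above fits in a few lines. The following is a minimal Python illustration for the unweighted case; the function name and the set representation are ours, not the paper's:

```python
def greedy_set_cover(universe, sets):
    """Classical greedy for Unweighted Set Cover: repeatedly pick the set
    covering the most still-uncovered elements; ratio H_n < ln n + 1."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        # index of the set covering the most uncovered elements (first on ties)
        best = max(range(len(sets)), key=lambda i: len(sets[i] & uncovered))
        if not sets[best] & uncovered:
            raise ValueError("universe is not coverable by the given sets")
        cover.append(best)
        uncovered -= sets[best]
    return cover
```

For example, on the universe {1, ..., 6} with sets {1,2,3}, {4,5}, {5,6}, {1,4}, the greedy picks the three-element set first and then covers the rest.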
Ideally, one would like to have a kind of trade-off between the running time and the approximation ratio: then one gets as much accuracy as can be afforded. We study two approaches yielding results of that kind.

Search Tree Techniques. Many exponential-time algorithms (e.g. backtracking algorithms) can be viewed as visiting the nodes of an exponential-sized search tree. The nodes of the tree correspond to instances of the problem, and typically the instances in the leaves are either trivial or at least solvable in polynomial time. A natural idea is to use a search tree which has fewer nodes than the search tree of the exact algorithm, and with leaves corresponding to instances that can be approximated in polynomial time. This natural approach was used in a work on Max SAT by Dantsin, Gavrilovich, Hirsch and Konev [6]. In this paper we describe one result of that kind: a (4r − 1)-approximation of Bandwidth in time O*(2^{n/r}), for any positive integer r (see Section 2 for the definition of the problem and a brief discussion of known results).

This approach can be used also for Independent Set and Set Cover. For example, Independent Set has a constant ratio approximation for graphs of bounded degree. A standard approach to an exact algorithm for this problem (used e.g. in [13]) is a recursive algorithm which picks a vertex v and checks two possibilities: (1) v is in the independent set; then it removes v and its neighbors and makes a recursive call, and (2) v is not in the independent set; then it removes v and makes a recursive call. When we can assume that the picked vertex is of large degree (if there is no such vertex, polynomial-time approximation is used), we always get a big reduction of the instance size in one of the recursive calls, which results in a better time bound than the exact algorithm.
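The recursive scheme just described can be sketched as follows. This is our own illustrative Python rendering, with an arbitrary degree threshold d and a simple minimum-degree greedy as the polynomial-time fallback; the actual algorithm of [13] uses more refined branching rules:

```python
def _delete(graph, vs):
    """Copy of graph (dict: vertex -> set of neighbours) with vs removed."""
    vs = set(vs)
    return {u: nbrs - vs for u, nbrs in graph.items() if u not in vs}

def approx_mis(graph, d=4):
    """Sketch: branch exactly on a vertex of degree >= d; once the maximum
    degree drops below d, fall back to a greedy (minimum-degree-first)
    independent set, a constant-factor approximation on such graphs."""
    if not graph:
        return []
    v = next((u for u in graph if len(graph[u]) >= d), None)
    if v is None:
        g, taken = graph, []
        while g:
            u = min(g, key=lambda x: len(g[x]))   # lowest-degree vertex
            taken.append(u)
            g = _delete(g, g[u] | {u})            # drop u and its neighbours
        return taken
    # branch (1): v in the independent set, remove v and its neighbours
    with_v = [v] + approx_mis(_delete(graph, graph[v] | {v}), d)
    # branch (2): v not in the independent set, remove v only
    without_v = approx_mis(_delete(graph, {v}), d)
    return max(with_v, without_v, key=len)
```

Note how branch (1) removes the whole closed neighbourhood of v, which is large by assumption: this is the source of the improved running time.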
However, we will not elaborate on this, because we obtained much better results (also for Set Cover) using the approach of reduction (see the next paragraph).

Reductions. Consider an instance S = {S_1, ..., S_m} of the Unweighted Set Cover problem described above. Assume m is even. Then create a new instance Z = {Z_1, ..., Z_{m/2}} where Z_i = S_{2i−1} ∪ S_{2i}. Next find an optimal solution OPT_Z for Z using an exact algorithm. Let C = {S_{2i−1} | Z_i ∈ OPT_Z} ∪ {S_{2i} | Z_i ∈ OPT_Z}. Clearly, |OPT_Z| ≤ |OPT_S| and hence |C| ≤ 2|OPT_S|. Thus we get a 2-approximation in T(m/2) time, where T(m) is the best known bound for an exact algorithm, so currently just T(m/2) = O*(2^{m/2}) after applying exhaustive search. Of course this method is scalable: similarly we get a 5-approximation in time O*(2^{m/5}). With the present computing speed it should allow us to process instances with roughly 150 sets, even when the universe U is large.

The above phenomenon is the second of the two approaches studied in this paper. We will call it a reduction. Let us state a more precise definition now (however, we will make it a bit more general further in the paper). Consider a minimization problem P (reduction of a maximization problem is defined analogously) with a measure of instance size s(I) and a measure of the solution quality m(S). Let OPT_I be an optimal solution for instance I. Let r, a > 1 be constants. An (r, a)-reduction (or simply a reduction) of problem P is a pair of algorithms called reducer and merger satisfying the following properties:

• Reducer transforms I, an instance of P, into a set {I_1, ..., I_k} of instances of P so that for every i = 1, ..., k, s(I_i) ≤ s(I)/r + O(1).

• Let S_1, ..., S_k be optimal solutions of instances I_1, ..., I_k.
Then merger transforms the solutions S_i into S, a solution for instance I, so that m(S) ≤ a · m(OPT_I). (Merger may also use I and any information computed by reducer.)

The constants r and a in the above definition will be called the rate and the approximation of the reduction, respectively (we assume a ≥ 1, even for a maximization problem; in that case a solution with quality ≥ m(OPT_I)/a is returned). Observe that above we described a (2, 2)-reduction of Unweighted Set Cover. Actually, we noted that it generalizes to an (r, r)-reduction for any r ∈ N. If there is an (r, a(r))-reduction for arbitrarily big r, we talk about a reduction scheme, and the function a(r) is the approximation of this scheme. (This definition is very flexible; however, most of our schemes imply a reduction for any r ∈ N, and sometimes even for any r ∈ Q.)

Problem                 Approximation         Range of r           Current time bound
Unweighted Set Cover    1 + ln r              r ∈ Q, r ≥ 1         O*(2^{n/r}), [2]
Set Cover               1 + ln r              r ∈ Q, r ≥ 1         O*(2^{n/(0.5r)} m log n), O*(2^{n/(0.31r)}), App. E
Set Cover               r                     r ∈ N, r ≥ 1         O*(2^{m/r}), [folklore]
Min Dominating Set      r                     r ∈ N, r ≥ 1         O*(2^{0.598n/r}), [21]
Max Independent Set     r                     r ∈ Q, r ≥ 1         O*(2^{n/(3.47r)}), [13]
Coloring (a)            1 + ln r              r ∈ Q ∩ [1, 4.05]    O*(2^{n/(0.85r)}), [2]
Coloring (b)            1 + 0.247 r ln r      r ∈ Q, r > 4.05      O*(2^{n/(0.85r)}), [2]
Coloring                r                     r ∈ N, r > 1         O*(2^{n/(0.85r)}), [2]
Bandwidth               r^{log_2 9} = 9^k     r = 2^k, k ∈ N       O*(10^{n/r}), [10]
Semi-Metric TSP         1 + log_2 r = 1 + k   r = 2^k, k ∈ N       O*(2^{n/(0.5r)}), [1]

(a) This is an O*(2^{n/3.47})-time reduction by Björklund and Husfeldt, see [2].
(b) This is an O*(2^{n/(0.85r)})-time reduction.

Table 1: Our reductions.
The last column shows time bounds of approximation algorithms obtained using the best known (polynomial-space) exact algorithms.

We present reduction schemes for several of the most natural optimization problems that are both hard to approximate and resist FPT algorithms. Table 1 shows our results. In the last column we put time bounds of approximation algorithms obtained by using our reductions with the best known exact algorithms. As our motivations are partially practical, all these bounds refer to polynomial-space exact algorithms (putting r = 1 gives the time complexity of the relevant exact algorithm). In most cases (Bandwidth, Coloring, Independent Set, Semi-Metric TSP) there are faster exact exponential-space algorithms (hence we would also get faster approximations).

Note that by putting r = n/log n we get polynomial-time approximations, and for Set Cover we get the approximation ratio 1 + ln n, which roughly matches the ratio of the (essentially optimal) greedy algorithm. Thus, our reduction can be viewed as a continuous scaling between the best possible polynomial-time approximation and the best known exponential-time algorithm. In other words, one can get as good a solution as one can afford, by using as much time as one can. A similar phenomenon appears in the case of Semi-Metric TSP. (This is not very surprising, since these two reductions are based on the relevant polynomial-time approximations.)

The notion of reduction introduced in our paper is so natural that some reductions must have been considered before, especially for big reduction rates, like r = n/log n, when the reductions essentially imply polynomial-time approximation algorithms.
We are aware of one reduction described in the context of exponential-time approximation: Björklund and Husfeldt [2] described a reduction scheme with approximation a(r) = 1 + ln r (worth using only for bounded values of r; see Appendix C).

Related Work. We have already mentioned the results of Björklund and Husfeldt [2] and Dantsin et al. [6] on exponential-time approximation. The idea of joining the worlds of approximation algorithms and "moderately" exponential algorithms appeared also in a recent work of Vassilevska, Williams and Woo [22]; however, their direction of research is completely different from ours, i.e. they consider so-called hybrid algorithms. For example, they report to have an algorithm for Bandwidth which, for a given input, either returns an O(log n)-approximate solution in polynomial time or returns a (1 + ε)-approximate solution in O(2^{n/log log n}) time. We see that the hybrid algorithm does not guarantee a constant approximation ratio and hence cannot be directly compared with our work. Another promising area is joining the worlds of parametrized complexity and polynomial-time approximation algorithms; see the survey paper [19].

Organization of the Paper. We start from the approximation scheme for the Bandwidth problem in Section 2. Then in Section 3 we introduce a slightly more general definition of reduction, and in Section 4 we describe two reductions for Set Cover. (The reductions for Maximum Independent Set, Vertex Coloring, Bandwidth and Semi-Metric TSP are put in the Appendix due to space limitations.) We conclude in Section 5 with some complexity remarks that show relations between the notion of reduction and polynomial-time approximation.

2 Bandwidth

Let G = (V, E) be an undirected graph. For a given ordering of vertices, i.e. a one-to-one function f : V → {1, . . .
, n}, its bandwidth is the maximum difference between the numbers assigned to the endpoints of an edge, i.e. max_{uv∈E} |f(u) − f(v)|. The bandwidth of a graph G, denoted by bw(G), is the minimum possible bandwidth of an ordering. The Bandwidth problem asks to find, for a given graph, its bandwidth together with a corresponding ordering.

Bandwidth is a notorious NP-hard problem. It was shown by Unger [20] that Bandwidth does not belong to APX even in the very restricted case when G is a caterpillar, i.e. a very simple tree. It is also hard for any fixed level of the W hierarchy [3]. The best known polynomial-time approximation, due to Feige [9], has an O(log^3 n √(log n log log n)) approximation guarantee. The fastest known exact algorithm works in time O*(5^n) and space O*(2^n) and is due to Cygan and Pilipczuk [5], while the best polynomial-space exact algorithm, due to Feige and Kilian [10], has time complexity O*(10^n).

We were able to find a reduction for the Bandwidth problem. Although it is probably the most nontrivial of our reductions, it gives, for any k ∈ N, approximation ratio 9^k with reduction rate 2^k, which is far from being practical (see Appendix D for the details). As a corollary it gives a 9^k-approximation in time O*(10^{n/2^k}) and polynomial space, or in time O*(5^{n/2^k}) and O*(2^{n/2^k}) space. It is an interesting open problem whether there is a better reduction for this problem. Now we will describe a better approximation scheme using the approach of a small search tree.

2.1 2-approximation in O*(3^n) time (warm-up)

We begin with an algorithm which is very close to a fragment of the O*(10^n)-time exact algorithm of Feige and Kilian [10]. Assume w.l.o.g. that the input graph is connected (we keep this assumption also in the following sections).
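As a concrete reference point for the definitions above, here is a small Python sketch (names are ours) that evaluates the bandwidth of a given ordering and, purely for illustration, computes bw(G) by brute force over all n! orderings, the trivial baseline that the algorithms below improve upon:

```python
from itertools import permutations

def ordering_bandwidth(edges, order):
    """Bandwidth of a given ordering: max |f(u) - f(v)| over all edges uv,
    where f(v) is the (1-based) position of v in `order`."""
    pos = {v: i for i, v in enumerate(order, start=1)}
    return max(abs(pos[u] - pos[v]) for u, v in edges)

def bandwidth_brute_force(vertices, edges):
    """Exact bw(G) by trying every ordering; usable only for tiny graphs."""
    return min(ordering_bandwidth(edges, p) for p in permutations(vertices))
```

For instance, a path on four vertices has bandwidth 1 (the natural ordering), while a star with three leaves has bandwidth 2: the center has three neighbours but only two positions at distance 1.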
Let b be the bandwidth of the input graph; we may assume it is given, since otherwise, with just an O(log n) overhead, one can binary search for the smallest b for which the algorithm returns a solution. Let us partition the set of positions {1, ..., n} into ⌈n/b⌉ intervals of size b (except, possibly, for the last interval), so that for j = 0, ..., ⌈n/b⌉ − 1, the j-th interval consists of positions I_j = {jb + 1, jb + 2, ..., (j + 1)b} ∩ {1, ..., n}.

Pseudocode 2.1 Generating at most n·3^n assignments in the 2-approximation algorithm.
1: procedure GenerateAssignments(A)
2:   if all vertices are assigned then
3:     if each interval I_j is assigned |I_j| vertices, order the vertices inside the intervals arbitrarily and return the ordering
4:   else
5:     v ← an unassigned vertex with a neighbor w already assigned
6:     if A(w) > 0 then GenerateAssignments(A ∪ {(v, A(w) − 1)})
7:     GenerateAssignments(A ∪ {(v, A(w))})
8:     if A(w) < ⌈n/b⌉ − 1 then GenerateAssignments(A ∪ {(v, A(w) + 1)})
9: procedure Main
10:   for j ← 0 to ⌈n/b⌉ − 1 do
11:     GenerateAssignments({(r, j)})  ▷ generate all assignments with r in interval I_j

The algorithm finds a set of assignments of vertices into the intervals I_j such that if there is an ordering π of bandwidth at most b, then for at least one of these assignments every vertex v has π(v) in its assigned interval. Clearly, if there is an ordering of bandwidth b, at least one such assignment exists. The following method (introduced by Feige and Kilian, originally for intervals of length b/2) finds the required set using only at most n·3^n assignments. Choose an interval for the first vertex r (chosen arbitrarily) in all ⌈n/b⌉ ways.
Then pick vertices one by one, each time taking a vertex adjacent to an already assigned one. Then there are at most 3 intervals where the new vertex can be put, and so on. (See Pseudocode 2.1. A (partial) assignment is represented by a set of pairs; a pair (v, j) means that A(v) = I_j.) Obviously, when more vertices are assigned to an interval than its length, the assignment is skipped. Now it is clear that for any remaining assignment, any ordering of the vertices inside the intervals gives an ordering of bandwidth at most 2b. Hence we have a 2-approximation in O*(3^n) time and polynomial space. Note also that if we use intervals of length b/2, as in Feige and Kilian's algorithm, a similar method gives a 3/2-approximation in O*(5^n) time.

2.2 Introducing the Framework

We are going to extend the idea from the preceding section further. To this end we need generalized versions of the simple tools used in the 2-approximation above. Let a (partial) interval assignment be any (partial) function A : V → 2^{{1,...,n}} that assigns intervals of positions to vertices of the input graph. The interval {i, i + 1, ..., j} will be denoted [i, j]. The size of an interval is simply the number of its elements. Let π : V → {1, ..., n} be an ordering. When for every vertex v we have π(v) ∈ A(v), we say that the interval assignment A is consistent with π and that π is consistent with A.

In Section 2.1 all intervals had the same size b; moreover, two intervals were always either equal or disjoint. When this latter condition holds, it is trivial to verify whether there is an ordering consistent with a given interval assignment: it suffices to check that the number of vertices assigned to any interval does not exceed its size.
Luckily, in the general case this is still possible in polynomial time: just note that this is a special case of scheduling jobs on a single machine with release and deadline times specified for each job (see e.g. [18], Sec. 4.2); hence we can use the simple greedy algorithm which processes vertices in the order of max A(v) and assigns each vertex to the smallest available position.

Proposition 2.1. For any interval assignment A one can verify in O(n log n) time whether there is an ordering consistent with A.

To get a nice approximation ratio, however, we need a bound on the bandwidth of the resulting ordering. The obvious bound is max_{uv∈E} max_{i∈A(u), j∈A(v)} |i − j|. This bound was sufficient in Section 2.1, but we will need a better one.

Lemma 2.2. Let A be an interval assignment for an input graph G = (V, E). Let s be the size of the largest interval in A, i.e. s = max_{v∈V} |A(v)|. If there is an ordering π* of bandwidth b consistent with A, then one can find in polynomial time an ordering π that is consistent with A and has bandwidth at most s + b.

Proof. Consider any edge uv. Clearly, π*(u) ∈ [min A(v) − b, max A(v) + b]. Hence we can replace A(u) by A(u) ∩ [min A(v) − b, max A(v) + b], maintaining the invariant that π* is consistent with A. Similarly, we can replace A(v) by A(v) ∩ [min A(u) − b, max A(u) + b]. Our algorithm performs such replacements for every edge uv ∈ E. As a result we get an assignment A′ such that for every edge uv, max_{i∈A′(u), j∈A′(v)} |i − j| ≤ s + b. It is clear that any ordering consistent with A′ has bandwidth at most s + b. Such an ordering can be found in polynomial time by Proposition 2.1.

In the following sections it will be convenient to formalize the order in which intervals are assigned to the vertices of the input graph.
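One possible implementation of the greedy test behind Proposition 2.1, assuming each A(v) is a contiguous interval given as a (lo, hi) pair of 1-based positions; the earliest-deadline-first formulation and all names are ours:

```python
import heapq

def consistent_ordering(assignment):
    """Greedy feasibility test in the spirit of Proposition 2.1.
    assignment: dict mapping each vertex to (lo, hi), i.e. A(v) = [lo, hi].
    Returns a consistent ordering as a dict vertex -> position, or None."""
    n = len(assignment)
    by_release = sorted(assignment.items(), key=lambda kv: kv[1][0])
    heap, result, i = [], {}, 0
    for pos in range(1, n + 1):
        # make available every vertex whose interval has started
        while i < n and by_release[i][1][0] <= pos:
            v, (lo, hi) = by_release[i]
            heapq.heappush(heap, (hi, v))
            i += 1
        if not heap:
            return None        # no vertex may occupy position pos
        hi, v = heapq.heappop(heap)
        if hi < pos:
            return None        # v's interval ended before position pos
        result[v] = pos
    return result
```

Among the currently available vertices, the one with the earliest interval end is placed first; sorting plus the heap operations give the O(n log n) bound of the proposition.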
Recall that each time an interval is assigned to a new vertex v, v has an already assigned neighbor w, except for the initial vertex r. In other words, the algorithm builds a rooted spanning tree for each assignment (here, r is the root, and w is the parent of v). In what follows we will fix a rooted spanning tree T, and our algorithm will generate interval assignments in such a way that the first vertex assigned is the root r of T, and whenever a vertex v ≠ r is assigned, its parent in T has already been assigned.

2.3 3-approximation in O*(2^n) time

This time the algorithm uses ⌈n/b⌉ intervals, each of size 2b (except for one or two last intervals), so that for j = 0, ..., ⌈n/b⌉ − 1, the j-th interval consists of positions I_j = {jb + 1, jb + 2, ..., (j + 2)b} ∩ {1, ..., n}. Note that the intervals overlap.

Pseudocode 2.2 Generating at most n·2^n assignments in the 3-approximation algorithm.
1: procedure GenerateAssignments(A)
2:   if all nodes in T are assigned then
3:     Using Lemma 2.2, find an ordering consistent with the interval assignment corresponding to A
4:   else
5:     v ← a node in T such that v's parent w is assigned
6:     if A(w) > 0 then GenerateAssignments(A ∪ {(v, A(w) − 1)})
7:     if A(w) < ⌈n/b⌉ − 1 then GenerateAssignments(A ∪ {(v, A(w) + 1)})
8: procedure Main
9:   for j ← 0 to ⌈n/b⌉ − 1 do
10:     GenerateAssignments({(r, j)})  ▷ generate all assignments with root in I_j

The algorithm generates all possible assignments of vertices to intervals in such a way that if a node in T is assigned to interval I_j, then each of its children is assigned to interval I_{j−1} or I_{j+1}. Clearly, there are at most n·2^n such assignments.
Moreover, if there is an ordering π of bandwidth b, then the algorithm generates an interval assignment A_π consistent with π. To find A_π, visit the nodes of T in preorder, assign the root r to the interval I_{⌊(π(r)−1)/b⌋}, and for each node v with parent w already assigned to an interval I_j, put A_π(v) = I_{j+1} if π(v) > (j + 1)b and A_π(v) = I_{j−1} otherwise. For each generated assignment the algorithm tries to find an ordering of bandwidth at most 3b using Lemma 2.2. Clearly, it succeeds for at least one assignment, namely A_π. The algorithm is sketched in Pseudocode 2.2.

2.4 (4r − 1)-approximation in O*(2^{n/r}) time

In this section we generalize the algorithm from the previous section to an approximation scheme. Let r be a positive integer. We will describe a (4r − 1)-approximation algorithm. Our algorithm uses intervals of sizes 2ib, for r ≤ i ≤ 2r − 1. Note that, unlike in the previous algorithms, intervals of many different sizes are used. As before, intervals begin at positions jb + 1, for j = 0, ..., ⌈n/b⌉ − 1. The interval beginning at jb + 1 and of length 2ib will be denoted by I_{j,2i}. For convenience, we allow intervals not completely contained in {1, ..., n}, but each assigned interval contains at least one position from {1, ..., n}.

Pseudocode 2.3 Generating assignments in the (4r − 1)-approximation algorithm.
1: procedure GenerateAssignments(A)
2:   if all nodes in T are assigned then
3:     Cut all intervals in A to make them contained in {1, ..., n}
4:     Using Lemma 2.2, find an ordering consistent with the interval assignment corresponding to A
5:   else
6:     v ← a node in T such that v's parent w is assigned; let I_{j,2i} = A(w)
7:   if i + 1 ≤ 2r − 1 then
8:     GenerateAssignments(A ∪ {(v, I_{j−1,2(i+1)})})
9:   else
10:     if j − 1 + 2r ≥ 1 then GenerateAssignments(A ∪ {(v, I_{j−1,2r})})
11:     if j − 1 + 2r ≤ ⌈n/b⌉ − 1 then GenerateAssignments(A ∪ {(v, I_{j−1+2r,2r})})
12: procedure Main(i_0)
13:   for j ← 0 to ⌈n/b⌉ − 1 do
14:     GenerateAssignments({(r, I_{j,2i_0})})  ▷ generate all assignments with root in I_{j,2i_0}

The algorithm is sketched in Pseudocode 2.3. Let i_0 ∈ {r, ..., 2r − 1} be a parameter that we will determine later. The algorithm assigns the root of T to all possible intervals of size 2i_0·b that overlap with {1, ..., n}, and extends each of these partial assignments recursively. To extend a given assignment, the algorithm chooses a node v of T such that the parent w of v has already been assigned an interval, say I_{j,2i}. Consider the interval I_{j−1,2(i+1)}, which is obtained from I_{j,2i} by extending it by b positions on both the left and the right side. Note that in any ordering consistent with the current assignment, v is put in a position from I_{j−1,2(i+1)}. Hence, if I_{j−1,2(i+1)} is not too big, i.e. i + 1 ≤ 2r − 1, the algorithm simply assigns I_{j−1,2(i+1)} to v and proceeds with no branching (just one recursive call). Otherwise, if i + 1 = 2r, the interval I_{j−1,2(i+1)} is split into two intervals of length 2rb, namely I_{j−1,2r} and I_{j−1+2r,2r}, and two recursive calls follow: with v assigned to I_{j−1,2r} and to I_{j−1+2r,2r}, respectively. As before, for every generated assignment (after cutting the intervals to make them contained in {1, ..., n}) the algorithm applies Lemma 2.2 to verify whether it is consistent with an ordering of bandwidth [2(2r − 1) + 1]b = (4r − 1)b.
Similarly as in the case r = 1 described before, for at least one assignment such an ordering is found.

We conclude with the time complexity analysis. Observe that the nodes at tree distance d from the root are assigned intervals of size 2[((i_0 + d) mod r) + r]·b. It follows that branching appears only when i_0 + d ≡ 0 (mod r). Let n̂(i_0) denote the number of nodes whose tree distance d from the root satisfies this condition. It is clear that the above algorithm works in time O*(2^{n̂(i_0)}). Since Σ_{i∈{r,...,2r−1}} n̂(i) = n, for some i ∈ {r, ..., 2r − 1} we have n̂(i) ≤ n/r. By choosing this value as i_0, we get the O*(2^{n/r}) time bound.

Theorem 2.3. For any positive integer r, there is a (4r − 1)-approximation algorithm for Bandwidth running in O*(2^{n/r}) time and polynomial space.

3 Reducibility (slightly more general)

In this section we introduce a slightly more general version of reduction and discuss some of its basic properties. Essentially, the difference from the version in the Introduction is that sometimes between reducer and merger we want to use an approximation algorithm instead of an exact one. As before, let P be a minimization problem, and let s(I) and m(S) denote the measures of the instance size and the solution quality, respectively. Let r > 1 be a constant and f : R → R a function. An (r, f)-reduction (or simply a reduction) of P is a pair of algorithms called reducer and merger satisfying the following properties:

• Reducer transforms I, an instance of P, into a set {I_1, ..., I_k} of instances of P so that for every j = 1, ..., k, s(I_j) ≤ s(I)/r + O(1).

• Let S_1, ..., S_k be solutions of instances I_1, ..., I_k. Let α > 1 be an approximation guarantee of these solutions, i.e. for j = 1, ..., k, m(S_j) ≤ α·m(OPT_{I_j}).
Then merger transforms the solutions S_i into S, a solution for instance I, so that m(S) ≤ f(α)·m(OPT_I). (Merger may also use I and any information computed by reducer.)

As before, r is called the rate and f is called the approximation (since we do not expect f to be a constant function, this should not lead to ambiguity). Again, if there is an (r, f_r)-reduction for arbitrarily big r, we deal with a reduction scheme with approximation a(r, α) = f_r(α). Note that the already described reduction for Unweighted Set Cover has approximation a(r, α) = rα. The time complexity of the reduction is the sum of the (worst-case) time complexities of reducer and merger. In most cases our reductions will be polynomial-time. However, under some restrictions exponential-time reductions may be interesting as well. The following lemma will be useful (an easy proof is in Appendix A):

Lemma 3.1 (Reduction Composition). If there is an (r, f)-reduction R, then for any positive k ∈ N there is an (r^k, f^k)-reduction² R′ for the same problem. Moreover, if the merger of R generates a polynomial number of instances, then R′ has the same time complexity as R, up to a polynomial factor.

Note that the Reduction Composition Lemma implies that once we find a single (r, f)-reduction, it extends to a reduction scheme, though for a quite limited choice of r. We will see more consequences of this lemma in Section 5.

4 Set Cover

We will use the notation for Set Cover from the Introduction. Since here we consider the general, weighted version of the problem, each set S ∈ S now comes with its weight w(S). We will also write w(C) for the total weight of a family of sets C. In the case of Set Cover there are two natural measures for the size of the instance: the size of the universe U and the number of sets in the family S.
We will present reductions for both measures. Both reductions work for the weighted version of the problem.

4.1 Reducing the size of the universe

An r-approximate solution of SET COVER can be found by dividing U into r parts, covering each of them separately, and returning the union of these covers. This corresponds to a reduction scheme with approximation a(r, α) = rα for r ∈ N. However, we will describe a much better reduction scheme.

Let us recall the greedy algorithm (see e.g. [23]), called Greedy from now on. It selects sets into the cover one by one. Let C be the family of sets chosen so far. Greedy takes a set that covers new elements as cheaply as possible, i.e. it chooses S so as to minimize w(S)/|S \ ⋃C|. For each element e ∈ S \ ⋃C the amount w(S)/|S \ ⋃C| is called the price of e and denoted price(e). This procedure continues until C covers the whole of U. Let e_1, ..., e_n be the sequence of all elements of U in the order of covering by Greedy (ties broken arbitrarily). The standard analysis of Greedy uses the following lemma (see [23] for the proof).

Lemma 4.1. For each k ∈ {1, ..., n}, price(e_k) ≤ w(OPT)/(n − k + 1).

The idea of our reduction is very simple. For example, assume we aim at reduction rate 2 and n is even. Lemma 4.1 tells us that Greedy starts by covering elements very cheaply and then pays more and more. So we just stop it before it pays much, but after it has covered sufficiently many elements. Note that if we manage to stop it just after e_{n/2} is covered, the total price of the covered elements (and hence the weight of the sets chosen) is at most (H_n − H_{n/2})·w(OPT) = (ln 2 + O(1/n))·w(OPT). If we cover the remaining elements, say, by an exact algorithm, we get roughly a (1 + ln 2)-approximation.
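As an illustration, Greedy and the prices it assigns can be rendered in Python; this is our own sketch (frozensets for the sets, a dict for the weights), not code from the paper:

```python
from math import log

def greedy(universe, sets, w):
    """Greedy for weighted SET COVER: repeatedly take the set that covers
    new elements as cheaply as possible; each element e newly covered by S
    gets price(e) = w(S) / |S \\ UC| (cf. Lemma 4.1)."""
    universe, covered = set(universe), set()
    cover, price = [], {}
    while covered != universe:
        S = min((T for T in sets if T - covered),
                key=lambda T: w[T] / len(T - covered))
        new = S - covered
        for e in new:
            price[e] = w[S] / len(new)
        covered |= new
        cover.append(S)
    return cover, price

# Tiny example of our own, unit weights:
A, B, C = frozenset({1, 2, 3}), frozenset({3, 4}), frozenset({4, 5})
cover, price = greedy({1, 2, 3, 4, 5}, [A, B, C], {A: 1.0, B: 1.0, C: 1.0})
assert set().union(*cover) == {1, 2, 3, 4, 5}

# Numerical check of the (H_n - H_{n/2}) = ln 2 + O(1/n) claim:
H = lambda n: sum(1.0 / k for k in range(1, n + 1))
assert abs(H(10**4) - H(10**4 // 2) - log(2)) < 1e-3
```

The last two lines verify numerically the harmonic-sum estimate used above.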
However, the set that covers e_{n/2} may cover many elements e_i with i > n/2. By Lemma 4.1 the price of each of them is at most w(OPT)/(n/2) = 2w(OPT)/n. Hence this last set costs us at most w(OPT), and altogether we get roughly a (2 + ln 2)-approximation. Luckily, it turns out that paying w(OPT) for the last set chosen by Greedy is not necessary: below we show a refined algorithm which yields a (1 + ln 2)-approximation in this particular case.

Theorem 4.2. There is a polynomial-time |U|-scaling reduction scheme for SET COVER with approximation a(r, α) = α + ln r + O(1/n), for any r ∈ Q, r > 1.

Pseudocode 4.1 Universe-scaling reducer for SET COVER
1: C ← ∅
2: while ⋃S ∪ ⋃C = U do
3:   Find T ∈ S so as to minimize w(T)/|T \ ⋃C|
4:   if n − |⋃C ∪ T| > n/r then
5:     C ← C ∪ {T}
6:   else
7:     C_T ← C
8:     [Create an instance I_T = (S_T, w):]
9:     for each P ∈ S, S_T contains the set P \ (⋃C_T ∪ T), of weight w(P)
10:    S ← S \ {T}

Proof. Let I = (S, w) be an instance of the SET COVER problem. Reducer works similarly to Greedy. However, before adding a set T to the partial cover C, it checks whether adding T to C would make the number of non-covered elements at most n/r. If so, T is called a crossing set. Instead of adding T to C, reducer creates an instance I_T = (S_T, w) of SET COVER that will be used to cover the elements covered neither by C nor by T. Namely, for each P ∈ S, the family S_T contains the set P \ (⋃C ∪ T), of weight w(P). Apart from I_T, reducer stores C_T, a copy of C, which will be used by merger. After creating the instance, the set T is removed from the family of available sets S. If it turns out that the universe cannot be covered after removing T, i.e. ⋃S ∪ ⋃C ≠ U, the reducer stops. See Pseudocode 4.1 for details. Note that reducer creates at least 1 and at most |S| instances.
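Pseudocode 4.1 can also be rendered as runnable Python; the following is our own sketch (the data representation and names are ours):

```python
def universe_scaling_reducer(U, S, w, r):
    """Sketch of Pseudocode 4.1: run Greedy, but when the chosen set T would
    leave at most n/r elements uncovered (a 'crossing set'), emit the
    instance I_T and the snapshot C_T instead of adding T, drop T, and
    continue while the universe is still coverable."""
    U, S = set(U), set(S)
    n = len(U)
    C, covered = [], set()
    out = []                                     # triples (T, C_T, I_T)
    while S and U <= covered | frozenset().union(*S):
        T = min((P for P in S if P - covered),
                key=lambda P: w[P] / len(P - covered))
        if n - len(covered | T) > n / r:
            C.append(T)                          # ordinary greedy step
            covered |= T
        else:                                    # T is a crossing set
            rest = covered | T
            I_T = [(P - rest, w[P]) for P in S]  # instance I_T = (S_T, w)
            out.append((T, list(C), I_T))
            S.discard(T)
    return out

# Our own toy instance with r = 2: the very first greedy pick is crossing.
A, B, D = frozenset({1, 2, 3}), frozenset({3, 4}), frozenset({4})
inst = universe_scaling_reducer({1, 2, 3, 4}, [A, B, D], {A: 1, B: 1, D: 1}, 2)
assert len(inst) == 1 and inst[0][0] == A
```

After removing A, the remaining sets no longer cover U, so the reducer stops, as in the pseudocode.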
Let I_T be any instance created for some crossing set T, and let SOL_T ⊆ S be a solution of it such that w(SOL_T) ≤ α·w(OPT_{I_T}), α ≥ 1. Let S′_T = C_T ∪ {T} ∪ SOL_T. Clearly, S′_T is a cover of U for every crossing set T. The merger simply selects the lightest of these covers.

Let T* be the first crossing set found by reducer such that T* belongs to OPT_I, some optimal solution for instance I (note that at least one crossing set is in OPT_I). Clearly OPT_I \ {T*} covers ⋃S_{T*}. Hence w(OPT_{I_{T*}}) ≤ w(OPT_I \ {T*}), so w(T*) + w(OPT_{I_{T*}}) ≤ w(OPT_I). It follows that w(T*) + w(SOL_{T*}) ≤ α·w(OPT_I). Since C_{T*} covers fewer than n − n/r elements, by Lemma 4.1

w(C_{T*}) ≤ Σ_{k=1}^{⌊n − n/r⌋} w(OPT_I)/(n − k + 1) = Σ_{k=1}^{n − ⌈n/r⌉} w(OPT_I)/(n − k + 1) = (H_n − H_{⌈n/r⌉})·w(OPT_I) = (ln n − ln⌈n/r⌉ + O(1/n))·w(OPT_I) ≤ (ln r + O(1/n))·w(OPT_I).

We conclude that merger returns a cover of weight at most (α + ln r + O(1/n))·w(OPT_I).

Clearly, to make use of the universe-scaling reduction we need an O*(c^n)-time exact algorithm, where c is a constant. As far as we know, no such result has been published. However, we can follow the divide-and-conquer approach of Gurevich and Shelah [15], rediscovered recently by Björklund and Husfeldt [1], and we get an O*(4^n·m·log n)-time algorithm. If m is big we can use another, O*(9^n)-time version of it. See Appendix E for details. We also note that for the unweighted case there is an O(2^n·mn)-time, polynomial-space algorithm by Björklund et al. [2] using the inclusion-exclusion principle.

4.2 Reducing the number of sets

Recall the reduction described in the Introduction. In the weighted version it fails, basically because the sets from the optimal solution may be joined with some heavy sets. The natural thing to do is to sort the sets according to their weight and join only neighboring sets.
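The sort-and-join idea can be sketched as follows (our own helper; the reducer of Theorem 4.3 refines it with the additional families X_k, V_k):

```python
def merge_blocks(sets, w, r):
    """Sort the sets by weight and join only neighbouring ones, r per block.
    Each merged set is the union of its block and inherits the block's
    total weight."""
    order = sorted(sets, key=lambda S: w[S])
    merged = []
    for i in range(0, len(order), r):
        block = order[i:i + r]
        merged.append((frozenset().union(*block),
                       sum(w[S] for S in block)))
    return merged

# Our own toy family, r = 2: the two lightest sets get merged.
A, B, C = frozenset({1}), frozenset({1, 2}), frozenset({2, 3})
m = merge_blocks([B, A, C], {A: 1, B: 2, C: 5}, 2)
assert m == [(frozenset({1, 2}), 3), (frozenset({2, 3}), 5)]
```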
This simple modification does not fully succeed, but with some more effort we can make it work.

Theorem 4.3. There is a polynomial-time |S|-scaling reduction scheme for SET COVER with approximation a(r, α) = αr, for any r ∈ N, r > 1.

Proof. Reducer starts by sorting the sets in S in order of non-decreasing weight, so let S = {S_1, ..., S_m} with w(S_1) ≤ w(S_2) ≤ ... ≤ w(S_m). Next it partitions this sequence into blocks B_i, i = 1, ..., ⌈m/r⌉, each of size at most r, namely B_i = {S_j ∈ S | (i−1)r < j ≤ ir}. Let U_i = ⋃B_i be the union of all sets in B_i, and define its weight as the total weight of B_i, i.e. w(U_i) = w(B_i). For any k = 1, ..., m we also define X_k = {S_j ∈ B_{⌈k/r⌉} | j < k} and V_k = ⋃X_k with w(V_k) = w(X_k).

Reducer creates m instances, namely S_i = {U_j | S_i ∉ B_j} ∪ {V_i, S_i} for i = 1, ..., m. Of course any subfamily (or a cover) C ⊆ S_i corresponds to Ĉ, a subfamily of S with the same weight, obtained from C by splitting the previously joined sets (we will use this notation further on). Clearly ⋃C = ⋃Ĉ; in particular, if C is a cover, then so is Ĉ. Let C_1, ..., C_m be solutions (covers) for the instances created by reducer, such that w(C_i) ≤ α·w(OPT_{S_i}). Merger simply chooses the lightest of them, say C_q, and returns Ĉ_q, a cover of U.

Now it suffices to show that one of the instances has a cover that is light enough. Let i* = max{i | S_i ∈ OPT}. We focus on instance S_{i*}. If X_{i*} ∩ OPT = ∅ we choose its cover R = {U_j ∈ S_{i*} | B_j ∩ OPT ≠ ∅} ∪ {S_{i*}}; otherwise R = {U_j ∈ S_{i*} | B_j ∩ OPT ≠ ∅} ∪ {V_{i*}, S_{i*}}. Clearly it suffices to show that w(R̂ \ OPT) ≤ (r − 1)·w(OPT). Consider any S_i ∈ R̂ \ OPT. If S_i ∉ X_{i*} we put f(i) = min{j | S_j ∈ OPT and ⌈j/r⌉ > ⌈i/r⌉}; otherwise f(i) = i*. Then w(S_i) ≤ w(S_{f(i)}).
We see that f maps at most r − 1 elements to a single index of a set from OPT, so indeed w(R̂ \ OPT) ≤ (r − 1)·w(OPT), and hence w(R̂) ≤ r·w(OPT). It follows that w(OPT_{S_{i*}}) ≤ r·w(OPT), so w(C_{i*}) ≤ αr·w(OPT) and finally w(Ĉ_q) ≤ αr·w(OPT).

4.3 Special Case: (Weighted) Minimum Dominating Set

Of course, MINIMUM DOMINATING SET is a special case of SET COVER: a graph G = (V, E) corresponds to the set system S = {N[v] | v ∈ V}, where N[v] consists of v and its neighbors. Note that the set-merging algorithm described in Section 4.2 can be adapted here: merging sets corresponds simply to identifying vertices. Hence we get a reduction scheme with approximation a(r, α) = αr. Combined with the recent O(2^{0.598n})-time exact algorithm by van Rooij and Bodlaender [21], we get an r-approximation in time O(2^{0.598n/r}), for any natural r.

5 Reductions and polynomial-time approximation

Is it possible to improve any of the reductions presented before? Are some of them in some sense optimal? To address these questions, at least partially, we explore some connections between reductions and polynomial-time approximation. For example, note that the (r, αr)-reduction for MAX INDEPENDENT SET implies an (n/log n)-approximation in polynomial time, by putting r = n/log n and using an exact algorithm for the instances of size O(log n). Since we know that MAX INDEPENDENT SET cannot be approximated much better in polynomial time, this suggests that the reduction may be close to optimal in some sense. The following lemma is an immediate consequence of the Reduction Composition Lemma. Let us call a reduction bounded when its reducer creates O(1) instances.

Lemma 5.1.
If for some r > 1 there is a polynomial-time bounded (r, f)-reduction for problem P, then P is f^{log_r n − log_r log_2 n}(1)-approximable in polynomial time.

Corollary 5.2. If for some constants c, r, with r > 1 and c > 0, there is a polynomial-time bounded (r, f)-reduction with f(α) = cα + o(α) for problem P, then P is O((n/log n)^{log_r c})-approximable in polynomial time.

Note that Corollary 5.2 implies that neither MAX INDEPENDENT SET nor VERTEX COLORING has a polynomial-time bounded (r, qrα + o(α))-reduction for any q < 1, unless NP = ZPP. If we drop the assumption that the reduction is bounded, the existence of such a reduction implies approximation in n^{O(log n)} time, which is also widely believed to be unlikely. Hence improved reductions would need to use either exponential time or some strange dependence on α, say a(α) = 0.1α².

References

[1] A. Björklund and T. Husfeldt. Exact algorithms for exact satisfiability and number of perfect matchings. In Proc. ICALP'06, pages 548–559, 2006.
[2] A. Björklund, T. Husfeldt, and M. Koivisto. Set partitioning via inclusion-exclusion. SIAM J. Comput., Special Issue for FOCS 2006. To appear.
[3] H. L. Bodlaender, M. R. Fellows, and M. T. Hallett. Beyond NP-completeness for problems of bounded width: Hardness for the W hierarchy (extended abstract). In ACM Symposium on Theory of Computing, pages 449–458, 1994.
[4] J. Chen, I. A. Kanj, and G. Xia. Improved parameterized upper bounds for vertex cover. In Proc. MFCS'06, pages 238–249, 2006.
[5] M. Cygan and M. Pilipczuk. Faster exact bandwidth. In Proc. WG'08. To appear.
[6] E. Dantsin, M. Gavrilovich, E. A. Hirsch, and B. Konev. MAX SAT approximation beyond the limits of polynomial-time approximation. Ann. Pure Appl. Logic, 113(1-3):81–94, 2001.
[7] R. G. Downey and M. R. Fellows. Parameterized Complexity.
Springer, 1999.
[8] U. Feige. A threshold of ln n for approximating set cover. J. ACM, 45(4):634–652, 1998.
[9] U. Feige. Approximating the bandwidth via volume respecting embeddings. J. Comput. Syst. Sci., 60(3):510–539, 2000.
[10] U. Feige. Coping with the NP-hardness of the graph bandwidth problem. In Proc. SWAT'00, pages 10–19, 2000.
[11] U. Feige and J. Kilian. Zero knowledge and the chromatic number. J. Comput. Syst. Sci., 57(2):187–199, 1998.
[12] U. Feige and M. Singh. Improved approximation ratios for traveling salesperson tours and paths in directed graphs. In Proc. APPROX-RANDOM'07, pages 104–118, 2007.
[13] F. V. Fomin, F. Grandoni, and D. Kratsch. Measure and conquer: a simple O(2^{0.288n}) independent set algorithm. In Proc. SODA'06, pages 18–25, 2006.
[14] A. Frieze, G. Galbiati, and F. Maffioli. On the worst-case performance of some algorithms for the asymmetric traveling salesman problem. Networks, 12:23–39, 1982.
[15] Y. Gurevich and S. Shelah. Expected computation time for Hamiltonian path problem. SIAM J. Comput., 16(3):486–502, 1987.
[16] J. Håstad. Clique is hard to approximate within n^{1−ε}. Acta Mathematica, 182(1):105–142, 1999.
[17] M. Held and R. Karp. A dynamic programming approach to sequencing problems. Journal of SIAM, 10:196–210, 1962.
[18] J. Kleinberg and E. Tardos. Algorithm Design. Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA, 2005.
[19] D. Marx. Parameterized complexity and approximation algorithms. The Computer Journal, 51(1):60–78, 2008.
[20] W. Unger. The complexity of the approximation of the bandwidth problem. In Proc. FOCS'98, pages 82–91, 1998.
[21] J. M. M. van Rooij and H. L. Bodlaender. Design by measure and conquer, a faster exact algorithm for dominating set. In Proc. STACS'08, pages 657–668, 2008.
[22] V. Vassilevska, R. Williams, and S. L. M. Woo.
Confronting hardness using a hybrid approach. In Proc. SODA'06, pages 1–10, 2006.
[23] V. V. Vazirani. Approximation Algorithms. Springer, 2001.

A Proof of the Reduction Composition Lemma

Lemma A.1 (Reduction Composition). If there is an (r, f)-reduction R, then for any positive k ∈ N there is an (r^k, f^k)-reduction R′ for the same problem, where f^k denotes the k-fold composition f^k = f ∘ f^{k−1}, f^0 = id. Moreover, if the reducer of R generates a polynomial number of instances, then R′ has the same time complexity as R, up to a polynomial factor.

Proof. We use induction on k. Let R and M be the reducer and the merger of R, respectively. We will describe R′, which consists of the reducer R_k and the merger M_k. For k = 1 the claim is trivial. Now assume there is an (r^{k−1}, f^{k−1})-reduction Q with reducer R_{k−1} and merger M_{k−1}. Let I be the input instance for reducer R_k. First, R_k executes R_{k−1}, which generates instances I_1, ..., I_q. By the induction hypothesis, s(I_i) ≤ s(I)/r^{k−1} for i = 1, ..., q. Then, for each i = 1, ..., q, R_k applies R to I_i, which generates instances I_{i,1}, ..., I_{i,q_i}. Note that for each i, j we have s(I_{i,j}) ≤ s(I)/r^k. Now assume S_{i,j} is a solution for I_{i,j} such that m(S_{i,j}) ≤ α·OPT_{I_{i,j}} (for a minimization problem; the proof for a maximization problem is analogous). Merger M_k applies M to every sequence of solutions S_{i,1}, ..., S_{i,q_i} and obtains a resulting solution S_i for every i = 1, ..., q. By the definition of reduction, m(S_i) ≤ f(α)·OPT_{I_i}. Then M_k applies M_{k−1} to S_1, ..., S_q, obtaining a solution S. By the induction hypothesis, m(S) ≤ f^{k−1}(f(α))·OPT_I, and hence m(S) ≤ f^k(α)·OPT_I, as required. The second claim follows easily, since R_k generates q^k instances.

B Maximum Independent Set

Theorem B.1. There is a polynomial-time reduction scheme for MAXIMUM INDEPENDENT SET with approximation a(r, α) = αr, for any r ∈ Q, r > 1.
Proof. Let r = k/l with k, l ∈ N, k ≥ l > 0. Let G = (V, E) be the input graph. Reducer partitions V into k parts V_0, ..., V_{k−1}, of size at most ⌈|V|/k⌉ each. Then it creates k instances: for each i = 0, ..., k−1 it creates G_i = G[⋃_{j=0}^{l−1} V_{(i+j) mod k}]. Let SOL_0, ..., SOL_{k−1} be solutions (independent sets) for G_0, ..., G_{k−1} such that |SOL_i| ≥ |OPT_{G_i}|/α. Merger simply picks the biggest solution.

We claim that |V(G_i) ∩ OPT| ≥ (l/k)·|OPT| for some i = 0, ..., k−1. Indeed, each element of OPT appears in exactly l of the graphs G_i, so Σ_i |V(G_i) ∩ OPT| = l·|OPT|; if every term of this sum were smaller than (l/k)·|OPT|, the sum would be smaller than l·|OPT|, a contradiction. Clearly, if |V(G_i) ∩ OPT| ≥ (l/k)·|OPT| then |OPT_{G_i}| ≥ (l/k)·|OPT| = |OPT|/r. Hence |SOL_i| ≥ |OPT|/(αr), and the solution returned by merger is at least that good.

C Coloring

There is the following simple reduction for VERTEX COLORING.

Theorem C.1. There is a polynomial-time reduction scheme for VERTEX COLORING with approximation a(r, α) = αr, for any r ∈ N, r > 1.

Proof. Let G = (V, E) be the input graph. Reducer partitions V into r sets V_1, ..., V_r, with at most ⌈|V|/r⌉ vertices each, and returns the r instances G_1, ..., G_r with G_i = G[V_i]. The input for merger is a coloring c_i : V_i → {1, ..., q_i} of each graph G_i such that q_i ≤ α·χ(G_i). For any i, G_i ⊆ G, so χ(G_i) ≤ χ(G) and further q_i ≤ α·χ(G). Merger simply colors each v ∈ V_i with color Σ_{j=1}^{i−1} q_j + c_i(v). Clearly, it uses at most αr·χ(G) colors, as required.

A more sophisticated reduction was found by Björklund and Husfeldt [2].
Basically, it removes maximum independent sets (found by an exact algorithm) until the graph is small enough, and the merger just colors the previously removed vertices with new colors (one per independent set). However, taking into account the current best time bounds of exact polynomial-space algorithms for MAX INDEPENDENT SET and VERTEX COLORING, this reduction makes sense only when the reduction rate is r < 4.05 (roughly). This is because for larger rates the total time of the resulting approximation is dominated by finding maximum independent sets, so we would get a worse approximation guarantee in the same total time. A natural idea here is to plug our MAX INDEPENDENT SET approximation into the algorithm of Björklund and Husfeldt. By modifying their analysis we get the following theorem.

Theorem C.2. Assume that there is a β-approximation algorithm for MAXIMUM INDEPENDENT SET with time complexity T(n). Then there is an O*(T(n))-time reduction scheme for VERTEX COLORING with approximation a(r, α) = α + β·ln r plus an additive error of 1, for any r ∈ Q, r > 1.

Proof. Let n = |V(G)|. As long as the number of vertices of G exceeds n/r, reducer finds an independent set using the β-approximation algorithm and removes it from G. Let I_1, ..., I_t be the independent sets found. Reducer returns the resulting graph G′. Now let us upper-bound t. Let G_0 = G and let G_j be the graph obtained from G_{j−1} by removing I_j. Since any subgraph of G is χ(G)-colorable, each G_j contains an independent set of size at least |V(G_j)|/χ(G), so the β-approximation algorithm finds one of size at least |V(G_j)|/(β·χ(G)). It follows that

|V(G_j)| ≤ (1 − 1/(β·χ(G)))^j · n ≤ e^{−j/(β·χ(G))} · n.

Hence for j ≥ β·χ(G)·ln r we have |V(G_j)| ≤ n/r, so t ≤ ⌈β·χ(G)·ln r⌉. Let c : V(G′) → {1, ..., q} be a coloring of G′ with q ≤ α·χ(G′).
Since G′ is a subgraph of G, χ(G′) ≤ χ(G), and hence q ≤ α·χ(G). Merger returns the following coloring of V(G): if v ∈ V(G′) it gets color c(v), and if v ∈ I_j it gets color q + j. Clearly, it uses at most

α·χ(G) + t ≤ α·χ(G) + ⌈β·χ(G)·ln r⌉ ≤ (α + β·ln r)·χ(G) + 1

colors. The time complexity of the reduction is O(n·T(n)).

Theorem C.3. Given any O*(2^{cn})-time exact algorithm for MAXIMUM INDEPENDENT SET, for any β ≥ 1 one can construct an O*(2^{cn/β})-time reduction scheme for VERTEX COLORING with approximation a(r, α) = α + β·ln r plus an additive error of 1, for any r ∈ Q, r > 1.

Proof. The (β, αβ)-reduction from Theorem B.1, together with the O*(2^{cn})-time exact algorithm, gives a β-approximation for MAXIMUM INDEPENDENT SET in time O*(2^{cn/β}). By Theorem C.2 we get the claim.

If we have an O*(2^{dn})-time exact algorithm for VERTEX COLORING, it makes sense to have a reduction of time O*(2^{dn/r}). Putting β = cr/d in Theorem C.3 (and keeping β ≥ 1), we get an O*(2^{dn/r})-time reduction scheme for VERTEX COLORING with approximation a(r, α) = α + (cr/d)·ln r plus an additive error of 1, for any r ∈ Q, r ≥ d/c. With the currently best known values c = 0.288 and d = 1.167 (we consider polynomial space here), this gives a(r, α) = α + 0.247·r·ln r for any r ∈ Q, r ≥ d/c > 4.05. Note also that Theorem C.3 implies that, beginning from some value of r, the simple (r, r)-reduction from Theorem C.1 outperforms the more sophisticated one above. Specifically, for the current values of c and d the threshold is r ≥ 58.

D Reduction for Bandwidth

In this section we describe a (2, 9α)-reduction for BANDWIDTH. The following observation will be convenient in our proof. Observe that any ordering f : V → {1, ..., |V|} corresponds to a sequence f^{−1}(1), ..., f^{−1}(|V|), which will be denoted s(f). Clearly, the bandwidth of f corresponds to the maximum distance in s(f) between the endpoints of an edge.

Theorem D.1. There is a polynomial-time (2, 9α)-reduction for BANDWIDTH.

Proof. Let G = (V, E) be the input graph. The reducer we are going to describe creates just one instance. W.l.o.g. V does not contain isolated vertices, for otherwise reducer just removes them and merger adds them at the end of the ordering. We will also assume that bw(G) ≥ 2; otherwise reducer replaces G by the empty graph and merger finds the optimal solution in polynomial (linear) time.

Reducer begins by finding a maximum-cardinality matching M in G. Next it defines a function ρ : V → V. Note that an unmatched vertex has all its neighbors matched. For each unmatched vertex v and each of its (matched) neighbors w we put ρ(w) = w. Now consider any uw ∈ M with both ρ(u) = u and ρ(w) = w set this way. Then u has an unmatched neighbor x and w has an unmatched neighbor y. Observe that x = y, for otherwise (M \ {uw}) ∪ {xu, wy} is a matching larger than M. In this special case we redefine ρ(u) = w. If for some uw ∈ M both ρ(u) and ρ(w) are still unspecified, we put ρ(u) = u and ρ(w) = u (the choice of which endpoint is u and which is w is arbitrary). Finally, if for some uw ∈ M exactly one value of ρ is specified, say ρ(u) = u, we put ρ(w) = u. Now ρ is fully defined on the subdomain V(M). Note the following claims.

Claim 1. For any edge uw ∈ M, either ρ(u) = u and ρ(w) = u, or ρ(u) = w and ρ(w) = w.

Claim 2. For any x ∉ V(M) and any neighbor w of x, ρ(w) = w unless there is a triangle uwx with uw ∈ M.

Now reducer specifies the value of ρ on the unmatched vertices, one by one. Let x be an unmatched vertex with ρ(x) unspecified and let w be any of its neighbors.
If w has another unmatched neighbor y with ρ(y) unspecified, we put ρ(x) = x and ρ(y) = x. Otherwise we put ρ(x) = w. The way we defined ρ implies the following two claims.

Claim 3. For at least half of the vertices v, ρ(v) ≠ v.

Claim 4. For any v ∈ V, |ρ^{−1}(v)| ≤ 3.

Finally, reducer simply identifies each pair of vertices u, w such that ρ(u) = w (i.e. it adds the edges wx for x ∈ N(u) \ N[w] and removes u). Denote the resulting graph by G′ = (V′, E′); here V′ ⊆ V in the sense that V′ = {v ∈ V | ρ(v) = v}. By Claim 3 the rate of the reduction is at least 2. Before we describe the merger, let us bound bw(G′). Let f : V → {1, ..., |V|} be an ordering of V(G) with bandwidth bw(G).

Claim 5. bw(G′) ≤ 3·bw(G) − 1.

Proof of Claim 5. Let g : V′ → {1, ..., |V′|} be the ordering of V′ which arranges the vertices of V′ in the same order as f does, i.e. s(g) is obtained from s(f) by removing the vertices outside V′. We will show that g has bandwidth at most 3·bw(G) − 1. Consider any u′v′ ∈ E′. Then for some u, v ∈ V we have ρ(u) = u′, ρ(v) = v′ and uv ∈ E. If both f(u) and f(v) are outside the interval (min{f(ρ(u)), f(ρ(v))}, max{f(ρ(u)), f(ρ(v))}) but at opposite sides of it, then |f(ρ(u)) − f(ρ(v))| ≤ bw(G). If both f(u) and f(v) are outside this interval but at the same side of it, say at u's side, then since |f(v) − f(ρ(v))| ≤ 2·bw(G), we have |f(ρ(u)) − f(ρ(v))| ≤ 2·bw(G). In either of these two cases |f(ρ(u)) − f(ρ(v))| ≤ 2·bw(G) ≤ 3·bw(G) − 2, so ρ(u) and ρ(v) are at distance at most 3·bw(G) − 2 in s(g), as required. Now assume that one of f(u) and f(v), say f(u), lies in the interval (min{f(ρ(u)), f(ρ(v))}, max{f(ρ(u)), f(ρ(v))}), so in particular u ∉ V′.
Then ρ(u) and ρ(v) are at a distance in s(g) smaller by at least 1 than their distance in s(f). Hence it suffices to show that |f(ρ(u)) − f(ρ(v))| ≤ 3·bw(G). First note that if ρ(u), ρ(v) ∉ V(M), then by Claim 1 we have u, v ∉ V(M), and then M ∪ {uv} is a matching larger than M, a contradiction. Hence w.l.o.g. we can assume that ρ(u) ∈ V(M); then ρ(u) = u or ρ(u) is a neighbor of u. Now assume ρ(v) ∈ V(M). Then also ρ(v) = v or ρ(v) is a neighbor of v. It follows that |f(ρ(u)) − f(ρ(v))| ≤ |f(ρ(u)) − f(u)| + |f(u) − f(v)| + |f(v) − f(ρ(v))| ≤ 3·bw(G). Finally, let ρ(v) ∉ V(M). Then v is at distance at most 2 from ρ(v), and hence |f(ρ(v)) − f(v)| ≤ 2·bw(G). By Claim 2 either ρ(u) = u or vρ(u) ∈ E; since uv ∈ E, in either case vρ(u) ∈ E, which implies |f(v) − f(ρ(u))| ≤ bw(G). Together we get |f(ρ(u)) − f(ρ(v))| ≤ |f(ρ(v)) − f(v)| + |f(v) − f(ρ(u))| ≤ 3·bw(G). This finishes the proof of Claim 5.

Now we describe the merger. Let f′ : V′ → {1, ..., |V′|} be an ordering of the vertices of V′ with bandwidth at most α·bw(G′) for some α ≥ 1. By Claim 5, the bandwidth of f′ is at most α·(3·bw(G) − 1) ≤ 3α·bw(G) − 1. Merger returns the ordering f such that s(f) is obtained from s(f′) by inserting the vertices of ρ^{−1}(v) \ {v} (there are at most 2 of them, by Claim 4) right after v. Clearly s(f) is a permutation of V. Now consider any edge uv ∈ E. There are at most 3α·bw(G) − 2 vertices between ρ(u) and ρ(v) in s(f′). It follows that there are at most 3·(3α·bw(G) − 2) = 9α·bw(G) − 6 vertices between ρ(u) and ρ(v) in s(f). In other words, the distance between ρ(u) and ρ(v) in s(f) is at most 9α·bw(G) − 5.
As u is at distance at most 2 from ρ(u) in s(f), and the same holds for v, it follows that the distance between u and v in s(f) is at most 9α·bw(G) − 1. Hence f has bandwidth at most 9α·bw(G) − 1.

The above theorem together with the Reduction Composition Lemma implies:

Corollary D.2. There is a polynomial-time reduction scheme for BANDWIDTH with approximation a(r, α) = α·9^k, for any r = 2^k, k ∈ N.

For any k, the above reduction gives a 9^k-approximation in time O*(10^{n/2^k}) and polynomial space (using the exact algorithm of Feige and Kilian), or in time O*(5^{n/2^k}) and O*(2^{n/2^k}) space (using the exact algorithm of Cygan and Pilipczuk [5]).

E O*(c^n)-time polynomial-space exact algorithms for SET COVER

Clearly, to make use of the universe-scaling reduction we need an O*(c^n)-time exact algorithm, where c is a constant. As far as we know, no such result has been published. However, we can follow the divide-and-conquer approach of Gurevich and Shelah [15], rediscovered recently by Björklund and Husfeldt [1], and we get an O*(4^n·m·log n)-time algorithm. If m is big we can use another, O*(9^n)-time version of it.

Theorem E.1. There is an O*(min{4^n·m·log n, 9^n})-time algorithm that finds a minimum-weight cover of a universe of size n by a family of m sets.

Proof. The algorithm is as follows. For an instance with universe U of size n we recurse on an exponential number of instances, each with universe of size at most n/2. Namely, we choose one of the m sets S and divide the remaining elements, i.e. U \ S, into two parts, each of size at most n/2. We consider all choices of sets and all such partitions; there are O(m·2^n) of them. For each such set S and partition U_1, U_2 we recursively find C_1, an optimal cover of U_1, and C_2, an optimal cover of U_2. Clearly C_1 ∪ C_2 ∪ {S} forms a cover of U.
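The recursion just described can be rendered in Python; this is our own illustrative implementation (brute force over the balanced bipartitions, so only for tiny instances):

```python
from itertools import combinations

def min_cover_weight(U, sets, w):
    """Divide-and-conquer exact weighted SET COVER sketch: pick a set S,
    split U \\ S into two parts of size at most ceil(|U|/2) each, and
    recurse on both parts. Exponential time; for illustration only."""
    sets = tuple(sets)

    def solve(U):
        if not U:
            return 0.0
        # a single set may already cover U
        best = min((w[S] for S in sets if U <= S), default=float("inf"))
        half = (len(U) + 1) // 2
        for S in sets:
            rest = U - S
            if len(rest) == len(U):              # S covers nothing of U
                continue
            elems = sorted(rest)
            for a in range(max(0, len(elems) - half), half + 1):
                for A in combinations(elems, a):
                    A = frozenset(A)
                    best = min(best, w[S] + solve(A) + solve(rest - A))
        return best

    return solve(frozenset(U))

# Toy check of our own: two cheap sets beat one heavy set.
S1, S2, S3 = frozenset({0, 1}), frozenset({2, 3}), frozenset({0, 1, 2, 3})
assert min_cover_weight({0, 1, 2, 3}, [S1, S2, S3], {S1: 1, S2: 1, S3: 3}) == 2
```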
We choose the best cover out of the O*(m·2^n) covers obtained in this way. Consider an optimal cover OPT. To each element e of U assign a unique set S_e from OPT such that e ∈ S_e. For each S ∈ OPT let S* = {e ∈ U : S = S_e}, and let P = {S* : S ∈ OPT}; P is a partition of U. Clearly, after removing the biggest set Ŝ from OPT we can divide all the sets in P into two groups P_1 and P_2, each covering less than n/2 elements of U \ Ŝ. It follows that one of the O*(m·2^n) covers found by the algorithm has weight w(OPT), namely the cover obtained for the set Ŝ and the partition (⋃P_1 \ Ŝ, ⋃P_2 \ Ŝ). It is clear that the above algorithm works in time O*(4^n·m·log n).

Similarly we can also get an O*(9^n) bound: instead of at most m·2^n instances we recurse on at most 2·3^n instances. We consider all partitions of U into three sets A, B, C with |A| ≥ |B| ≥ |C|. If |A| ≤ n/2 we recurse on A, B and C; otherwise we check whether A ∈ S and, if so, we recurse on B and C.

F Semi-Metric TSP

SEMI-METRIC TSP is a variant of the classical TRAVELING SALESMAN PROBLEM. Here we are also given n vertices and an edge weight function w : V² → R; however, now the function w does not need to be symmetric, i.e. for some x, y we may have w(x, y) ≠ w(y, x). Thus the instance can be viewed as a directed graph, with two oppositely oriented edges joining every pair of vertices. In this variant we assume that w satisfies the triangle inequality. The goal is to find the lightest (directed) Hamiltonian cycle. In contrast to the other problems considered in this paper, it is not known whether SEMI-METRIC TSP is in APX. The first approximation algorithm for this problem appeared in the work of Frieze, Galbiati and Maffioli [14] and has approximation ratio log₂ n.
Currently the best result is a (2/3)·log n-approximation due to Feige and Singh [12]. The best known exact algorithms are the O*(2^n)-time, exponential-space classical algorithm of Held and Karp [17] and an O*(4^n·n^{log n})-time, polynomial-space algorithm of Björklund and Husfeldt [1].

The idea of our reduction is very simple: similarly as in Section 4.1, we run a polynomial-time approximation algorithm, namely the algorithm of Frieze et al., and stop it in the middle. Let us recall the algorithm of Frieze et al. It begins by finding a lightest cycle cover C_0 in G (this can be done in polynomial time by finding a minimum-weight matching in a corresponding bipartite graph). Note that w(C_0) ≤ w(OPT_G). If the cycle cover consists of just one cycle, we are done. Otherwise the algorithm selects one vertex from each cycle. Let G_1 be the subgraph of G induced by these vertices. Then we find a lightest cycle cover C_1 in G_1. Note that w(OPT_{G_1}) ≤ w(OPT_G), which follows from the triangle inequality, so again w(C_1) ≤ w(OPT_G). If C_1 has just one cycle we finish; otherwise we choose a vertex from each cycle, build G_2, and so on. As the cycles in the cycle covers have length at least 2, we finish after finding at most log₂ n cycle covers. Finally we consider the union U = ⋃ C_i of all the cycle covers. Clearly U is Eulerian, and we can find an Eulerian cycle E in U. We can then transform it into a Hamiltonian cycle H by following E but replacing paths through already visited vertices by single edges. We see that w(H) ≤ w(E) by the triangle inequality, and hence w(H) ≤ log₂ n · w(OPT).

Now assume r = 2^k. If we stop after creating just k cycle covers, we are left with a graph G_k with |V(G_k)| ≤ |V(G)|/r, while the cycle covers have total weight at most k·w(OPT).
If we have an α-approximate TSP tour T in G_k, we can add it to the cycle covers and proceed as in the original algorithm. Clearly we get a Hamiltonian cycle of weight at most (α + k)·w(OPT).

Corollary F.1. There is a polynomial-time reduction scheme for SEMI-METRIC TSP with approximation a(r, α) = α + log₂ r, for any r = 2^k, k ∈ N.
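The truncated Frieze et al. loop can be sketched as follows; this is our own dependency-free rendering, with the cycle cover found by brute force instead of bipartite matching (so it only runs on tiny examples):

```python
from itertools import permutations

def lightest_cycle_cover(V, w):
    """Lightest cycle cover = minimum-weight fixed-point-free permutation.
    (The paper computes this in polynomial time via bipartite matching;
    brute force keeps the sketch self-contained.)"""
    best, best_sigma = float("inf"), None
    for perm in permutations(V):
        sigma = dict(zip(V, perm))
        if any(sigma[v] == v for v in V):
            continue
        cost = sum(w[v, sigma[v]] for v in V)
        if cost < best:
            best, best_sigma = cost, sigma
    return best_sigma

def cycles(sigma):
    """Decompose a permutation into its cycles."""
    seen, out = set(), []
    for v in sigma:
        if v not in seen:
            cyc, u = [], v
            while u not in seen:
                seen.add(u)
                cyc.append(u)
                u = sigma[u]
            out.append(cyc)
    return out

def frieze_rounds(V, w, k):
    """Run k rounds: find a cycle cover, keep one representative per cycle,
    continue on the representatives. Returns the covers found and the
    remaining vertex set G_k."""
    covers = []
    for _ in range(k):
        sigma = lightest_cycle_cover(V, w)
        covers.append(sigma)
        reps = [cyc[0] for cyc in cycles(sigma)]
        if len(reps) == 1:               # a single Hamiltonian cycle: done
            return covers, reps
        V = reps
    return covers, V

# Toy check of our own: 4 vertices, all arcs of weight 1; one round at least
# halves the vertex count, since each cycle has length >= 2.
V = [0, 1, 2, 3]
w = {(i, j): 1 for i in V for j in V if i != j}
covers, Vk = frieze_rounds(V, w, 1)
assert len(covers) == 1 and len(Vk) <= 2
```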