A learning graph based quantum query algorithm for finding constant-size subgraphs

Let $H$ be a fixed $k$-vertex graph with $m$ edges and minimum degree $d > 0$. We use the learning graph framework of Belovs to show that the bounded-error quantum query complexity of determining if an $n$-vertex graph contains $H$ as a subgraph is $O(n^{2-2/k-t})$, where
$$t = \max\left\{ \frac{k^2 - 2(m+1)}{k(k+1)(m+1)},\; \frac{2k - d - 3}{k(d+1)(m-d+2)} \right\} > 0.$$
The previous best algorithm of Magniez et al. had complexity $\widetilde{O}(n^{2-2/k})$.

Authors: Troy Lee, Frederic Magniez, Miklos Santha

Learning graph based quantum query algorithms for finding constant-size subgraphs*

Troy Lee†1, Frédéric Magniez‡2, and Miklos Santha§1,2

1 Centre for Quantum Technologies, National University of Singapore, Singapore 117543
2 CNRS, LIAFA, Univ Paris Diderot, Sorbonne Paris Cité, F-75205 Paris, France

* Partially supported by the French ANR Defis project ANR-08-EMER-012 (QRAC) and the European Commission IST STREP project 25596 (QCS). Research at the Centre for Quantum Technologies is funded by the Singapore Ministry of Education and the National Research Foundation.
† troyjlee@gmail.com
‡ frederic.magniez@univ-paris-diderot.fr
§ miklos.santha@liafa.univ-paris-diderot.fr

Abstract

Let $H$ be a fixed $k$-vertex graph with $m$ edges and minimum degree $d > 0$. We use the learning graph framework of Belovs to show that the bounded-error quantum query complexity of determining if an $n$-vertex graph contains $H$ as a subgraph is $O(n^{2-2/k-t})$, where
$$t = \max\left\{ \frac{k^2 - 2(m+1)}{k(k+1)(m+1)},\; \frac{2k - d - 3}{k(d+1)(m-d+2)} \right\} > 0.$$
The previous best algorithm of Magniez et al. had complexity $\widetilde{O}(n^{2-2/k})$.

1 Introduction

Quantum query complexity. Quantum query complexity has been a very successful model for studying the power of quantum computation. Important quantum algorithms, in particular the search algorithm of Grover [Gro96] and the period finding subroutine of Shor's factoring algorithm [Sho97], can be formulated in this model, yet it is still simple enough that one can often prove tight lower bounds. This model is the quantum analog of deterministic and randomized decision tree complexities; the resource measured is the number of queries to the input, and all other operations are free.

For promise problems the quantum query complexity can be exponentially smaller than the classical complexity, the Hidden Subgroup Problem [Sim97, EHK99] being the most striking example. The situation is dramatically different for total functions, as Beals et al. [BBC+01] showed that in this case the deterministic and quantum query complexities are polynomially related.

One rich source of concrete problems is functions related to properties of graphs. Graph problems were first studied in the quantum query model by Buhrman et al. [BCWZ99], and later by Buhrman et al. [BDH+05], who looked at Triangle Finding together with Element Distinctness. This was followed by the exhaustive work of Dürr et al. [DHHM06], who investigated many standard graph problems including Connectivity, Strong Connectivity, Minimum Spanning Tree, and Single Source Shortest Paths. All these approaches were based on clever uses of Grover's search algorithm. The groundbreaking work of Ambainis [Amb07] using quantum walks for Element Distinctness initiated the study of quantum walk based search algorithms. Magniez et al. [MSS07] used this technique to design quantum query algorithms for finding constant-size subgraphs, and recently Childs and Kothari found a novel application of this framework to decide minor-closed graph properties [CK11]. The results of [MSS07] imply that a $k$-vertex subgraph can be found with $\widetilde{O}(n^{2-2/k})$ queries, and moreover that Triangle Finding is solvable with $\widetilde{O}(n^{1.3})$ queries. Later, quantum phase estimation techniques [MNRS11] were also applied to these problems; in particular, the quantum query complexity of Triangle Finding was improved to $O(n^{1.3})$. The best lower bound known for finding any constant-size subgraph is the trivial $\Omega(n)$.

The general adversary bound and learning graphs. Recently, there have been exciting developments leading to a characterization of quantum query complexity in terms of a (relatively) simple semidefinite program, the general adversary bound [Rei11, LMR+11].
Now to design quantum algorithms it suffices to exhibit a solution to this semidefinite program. This plan turns out to be quite difficult, as the minimization form of the general adversary bound (the easiest form to upper bound) has exponentially many constraints. Even for simple functions it is difficult to directly come up with a feasible solution, much less worry about finding a solution with good objective value.

Belovs [Bel12b] recently introduced the model of learning graphs, which can be viewed as the minimization form of the general adversary bound with additional structure imposed on the form of the solution. This additional structure makes learning graphs much easier to reason about. In particular, it ensures that the feasibility constraints are automatically satisfied, allowing one to focus on coming up with a solution having a good objective value. Learning graphs are a very promising model and have already been used to improve the complexity of Triangle Finding to $O(n^{35/27})$ [Bel12b] and to give an $o(n^{3/4})$ algorithm for $k$-Element Distinctness [Bel12a], improving the previous bound of $O(n^{k/(k+1)})$ [Amb07].

Our contribution. We give two learning graph based algorithms for the problem of determining if a graph $G$ contains a fixed $k$-vertex subgraph $H$. Throughout the paper we assume that $k > 2$, as the problem of determining if $G$ contains an edge is equivalent to search. We denote by $m$ the number of edges in $H$. The first algorithm we give has complexity $O(n^{2-2/k-t})$ where $t = (k^2 - 2(m+1))/(k(k+1)(m+1)) > 0$. The second algorithm depends on the minimum degree of a vertex in $H$. Say that the smallest degree of a vertex in $H$ is $d > 0$. This is without loss of generality, as isolated vertices of $H$ can be removed and the theorem applied to the resulting graph $H'$.
The second algorithm has complexity $O(n^{2-2/k-t})$ where $t = (2k - d - 3)/(k(d+1)(m-d+2)) > 0$. Both algorithms thus improve on the previous best general subgraph finding algorithm of [MSS07], which has complexity $\widetilde{O}(n^{2-2/k})$. The first algorithm performs better, for example, on dense regular graphs $H$, while the second performs better in the important case where $H$ is a triangle, having complexity $O(n^{35/27})$, equal to that of the algorithm of Belovs [Bel12b].

To explain these algorithms, we first give a high-level description of the learning graph algorithm in [Bel12b] for Triangle Finding, and its relation to the quantum walk algorithm given in [MSS07]. The learning graph algorithm in [Bel12b] for Triangle Finding is roughly a translation of the quantum walk algorithm on the Johnson graph of [MSS07] into the learning graph framework, with one additional twist: it maintains a database not of all edges present in $G$ amongst a subset of $r$ vertices, but rather a random sample of these edges. We will refer to this as sparsifying the database. While in the quantum walk world this idea does not help, in the context of learning graphs it leads to a better algorithm.

The quantum walk of [MSS07] works by looking for a subgraph $H' = H \setminus \{v\}$, where $v$ is a vertex of minimal degree in $H$, and then (using the algorithm for element distinctness) finding the vertex $v$ and the edges linking it to $H'$ to form $H$. Our second learning graph algorithm translates this procedure into the learning graph framework, and again applies the trick of sparsifying the database. Our first algorithm is simpler and translates the quantum walk searching for $H$ directly into the learning graph framework, again maintaining a sparsified database. The way we apply sparsification differs from how it is used in [Bel12b].
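As a quick numeric sanity check of the two exponents stated above (this snippet is illustrative and not part of the paper), both values of $t$ can be evaluated with exact rational arithmetic. For the triangle ($k = 3$, $m = 3$, $d = 2$) the second algorithm recovers the $O(n^{35/27})$ bound mentioned above, while the first gives a slightly larger exponent:

```python
from fractions import Fraction

def exponent_first(k, m):
    """Exponent 2 - 2/k - t for the first algorithm,
    with t = (k^2 - 2(m+1)) / (k(k+1)(m+1))."""
    t = Fraction(k * k - 2 * (m + 1), k * (k + 1) * (m + 1))
    return 2 - Fraction(2, k) - t

def exponent_second(k, m, d):
    """Exponent 2 - 2/k - t for the second algorithm,
    with t = (2k - d - 3) / (k(d+1)(m-d+2))."""
    t = Fraction(2 * k - d - 3, k * (d + 1) * (m - d + 2))
    return 2 - Fraction(2, k) - t

# Triangle: k = 3 vertices, m = 3 edges, minimum degree d = 2.
print(exponent_first(3, 3))      # 21/16
print(exponent_second(3, 3, 2))  # 35/27
```

Both exponents are strictly below $2 - 2/k = 4/3$, and $35/27 < 21/16$, consistent with the second algorithm being the better one for the triangle.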
In [Bel12b], every edge slot is taken independently with some fixed probability, while in our case the sparse random graphs are chosen uniformly from a set of structured multipartite graphs whose edge pattern reflects that of the given subgraph. The probability space evolves during the algorithm, but at every stage the multipartite graphs have a very regular degree structure. This uniformity of the probability space renders the structure of the learning graph very transparent.

Related contribution. Independently of our work, Zhu [Zhu11] also obtained Theorem 10. His algorithm is also based on learning graphs, but differs from ours in working with randomly sparsified cliques, as in the algorithm of Belovs [Bel12b] for Triangle Finding, rather than graphs with specified degrees as we do.

2 Preliminaries

We denote by $[N]$ the set $\{1, 2, \ldots, N\}$. The quantum query complexity of a function $f$, denoted $Q(f)$, is the number of input queries needed to evaluate $f$ with error at most $1/3$. We refer the reader to the survey [HŠ05] for precise definitions and background. For a boolean function $f : D \to \{0,1\}$ with $D \subseteq \{0,1\}^N$, the general adversary bound [HLŠ07], denoted $\mathrm{ADV}^{\pm}(f)$, can be defined as follows (this formulation was first given in [Rei09]):
$$\mathrm{ADV}^{\pm}(f) = \min_{u_{x,i}} \max_{x \in D} \sum_{i \in [N]} \|u_{x,i}\|^2 \quad \text{subject to} \quad \sum_{\substack{i \in [N] \\ x_i \neq y_i}} \langle u_{x,i} | u_{y,i} \rangle = 1 \ \text{ for all } f(x) \neq f(y). \qquad (1)$$

As the general adversary bound characterizes quantum query complexity [Rei11], quantum algorithms can be developed (simply!) by devising solutions to this semidefinite program. This turns out not to be so simple, however, as even coming up with feasible solutions to Equation (1) is not easy because of the large number of strict constraints. Learning graphs are a model of computation introduced by Belovs [Bel12b] that gives rise to solutions of Equation (1) and therefore to quantum query algorithms.
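To make the shape of Equation (1) concrete, here is a tiny feasibility check (an illustration of ours, not from the paper) for the one-bit identity function $f(x) = x$: the only pair with $f(x) \neq f(y)$ is $(0,1)$, which differs at index 1, and taking each $u_{x,1}$ to be a unit vector satisfies the constraint with objective value 1, matching $Q(f) = 1$:

```python
# A feasible solution to the adversary program, Equation (1), for the
# one-bit identity f(x) = x.  The only pair with f(x) != f(y) is
# (x, y) = (0, 1), and the two inputs differ exactly at index i = 1.
u = {0: [1.0], 1: [1.0]}  # one vector u_{x,1} per input x

def inner(a, b):
    return sum(s * t for s, t in zip(a, b))

# Constraint: the sum over differing indices of <u_{x,i}|u_{y,i}> equals 1.
assert inner(u[0], u[1]) == 1.0

# Objective: max over inputs x of sum_i ||u_{x,i}||^2.
objective = max(inner(v, v) for v in u.values())
print(objective)  # 1.0
```

Even this one-bit case shows why upper bounding the program directly is awkward: every pair of inputs with different function values contributes a constraint.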
The model of learning graphs is very useful, as it ensures that the constraints are satisfied automatically, allowing one to focus on coming up with a solution having a good objective value.

Definition 1 (Learning graph). A learning graph $\mathcal{G}$ is a 5-tuple $(\mathcal{V}, \mathcal{E}, w, \ell, \{p_y : y \in Y\})$, where $(\mathcal{V}, \mathcal{E})$ is a rooted, weighted and directed acyclic graph, the weight function $w : \mathcal{E} \to \mathbb{R}$ maps learning graph edges to positive real numbers, the length function $\ell : \mathcal{E} \to \mathbb{N}$ assigns each edge a natural number, and $p_y : \mathcal{E} \to \mathbb{R}$ is a unit flow whose source is the root, for every $y \in Y$.

Definition 2 (Learning graph for a function). Let $f : \{0,1\}^N \to \{0,1\}$ be a function. A learning graph $\mathcal{G}$ for $f$ is a 5-tuple $(\mathcal{V}, \mathcal{E}, S, w, \{p_y : y \in f^{-1}(1)\})$, where $S : \mathcal{V} \to 2^{[N]}$ maps $v \in \mathcal{V}$ to a set $S(v) \subseteq [N]$ of variable indices, and $(\mathcal{V}, \mathcal{E}, w, \ell, \{p_y : y \in f^{-1}(1)\})$ is a learning graph for the length function $\ell$ defined as $\ell((u,v)) = |S(v) \setminus S(u)|$ for each edge $(u,v)$. For the root $r \in \mathcal{V}$ we have $S(r) = \emptyset$, and every learning graph edge $e = (u,v)$ satisfies $S(u) \subseteq S(v)$. For each input $y \in f^{-1}(1)$, the set $S(v)$ contains a 1-certificate for $y$ on $f$, for every sink $v \in \mathcal{V}$ of $p_y$.

Note that it can be the case for an edge $(u,v)$ that $S(u) = S(v)$, so that the length of the edge is zero. In Belovs [Bel12b], what we define here is called a reduced learning graph, and a learning graph is restricted to have all edges of length one.

Definition 3 (Flow preserving edge sets). A set of edges $E \subseteq \mathcal{E}$ is flow preserving if, in the subgraph $G = (\mathcal{V}, E)$ induced by $E$, for every vertex $v \in \mathcal{V}$ which is not a source or a sink in $G$, $\sum_{u \in \mathcal{V}} p_y((u,v)) = \sum_{w \in \mathcal{V}} p_y((v,w))$, for every $y$. For a flow preserving set of edges $E$ we let $p_y(E)$ denote the value of the flow $p_y$ over $E$, that is, $p_y(E) = \sum_{s : \text{source in } G} \sum_{v \in \mathcal{V}} p_y((s,v))$.
Observe that $p_y / p_y(E)$ is a unit flow over $E$ whenever $p_y(E) \neq 0$, and that $p_y(\mathcal{E}) = 1$ for every $y$. The complexity of a learning graph is defined as follows.

Definition 4 (Learning graph complexity). Let $\mathcal{G}$ be a learning graph, and let $E \subseteq \mathcal{E}$ be a set of flow preserving learning graph edges. The negative complexity of $E$ is $C_0(E) = \sum_{e \in E} \ell(e)\, w(e)$. The positive complexity of $E$ under the flow $p_y$ is
$$C_{1,y}(E) = \sum_{e \in E} \frac{\ell(e)}{w(e)} \left( \frac{p_y(e)}{p_y(E)} \right)^2$$
if $p_y(E) > 0$, and $0$ otherwise. The positive complexity of $E$ is $C_1(E) = \max_{y \in Y} C_{1,y}(E)$. The complexity of $E$ is $C(E) = \sqrt{C_0(E)\, C_1(E)}$, and the learning graph complexity of $\mathcal{G}$ is $C(\mathcal{G}) = C(\mathcal{E})$. The learning graph complexity of a function $f$, denoted $LG(f)$, is the minimum learning graph complexity of a learning graph for $f$.

The usefulness of learning graphs for quantum query complexity is given by the following theorem.

Theorem 1 (Belovs). $Q(f) = O(LG(f))$.

We study functions $f : \{0,1\}^{\binom{n}{2}} \to \{0,1\}$ whose input is an undirected $n$-vertex graph. We will refer to the vertices and edges of the learning graph as $\mathcal{L}$-vertices and $\mathcal{L}$-edges, so as not to cause confusion with the vertices and edges of the input graph. Furthermore, we only consider learning graphs where every $\mathcal{L}$-vertex is labeled by a $k$-partite undirected graph on $[n]$, where $k$ is some fixed positive integer. Different $\mathcal{L}$-vertices will have different labels, and we will identify an $\mathcal{L}$-vertex with its label.

3 Analysis of learning graphs

We first review some tools developed by Belovs to analyze the complexity of learning graphs, and then develop some new ones useful for the learning graphs we construct. We fix for this section a learning graph $\mathcal{G} = (\mathcal{V}, \mathcal{E}, w, \ell, \{p_y\})$. By level $d$ of $\mathcal{G}$ we refer to the set of vertices at distance $d$ from the root. A stage is the set of edges of $\mathcal{G}$ between level $i$ and level $j$, for some $i < j$. For a subset $V \subseteq \mathcal{V}$ of the $\mathcal{L}$-vertices, let $V^+ = \{(v,w) \in \mathcal{E} : v \in V\}$, and similarly let $V^- = \{(u,v) \in \mathcal{E} : v \in V\}$. For a vertex $v$ we will write $v^+$ instead of $\{v\}^+$, and similarly $v^-$ instead of $\{v\}^-$. Let $E$ be a stage of $\mathcal{G}$ and let $V$ be some subset of the $\mathcal{L}$-vertices at the beginning of the stage. We set $E_{\to V} = \{(v,w) \in E : v \text{ is } u \text{ or a descendant of } u \text{ for some } u \in V\}$. For a vertex $v$ we will write $E_{\to v}$ instead of $E_{\to \{v\}}$.

Given a learning graph $\mathcal{G}$, the easiest way to obtain another learning graph is to modify the weight function of $\mathcal{G}$. We will often use this reweighting scheme to obtain learning graphs with better complexity, or with complexity that is more convenient to analyze. When $\mathcal{G}$ is understood from the context, and when $w'$ is the new weight function, for any subset $E \subseteq \mathcal{E}$ of the $\mathcal{L}$-edges we denote the complexity of $E$ with respect to $w'$ by $C^{w'}(E)$. An illustration of the reweighting method is the following lemma of Belovs, which states that we can upper bound the complexity of a learning graph by partitioning it into a constant number of stages and summing the complexities of the stages.

Lemma 2 (Belovs). If $\mathcal{E}$ can be partitioned into a constant number $k$ of stages $E_1, \ldots, E_k$, then there exists a weight function $w'$ such that $C^{w'}(\mathcal{G}) = O(C(E_1) + \cdots + C(E_k))$.

Now we focus on evaluating the complexity of a stage. Belovs has given a general theorem to simplify the calculation of the complexity of a stage for flows with a high degree of symmetry (Theorem 6 in [Bel12b]). Our flows will possess this symmetry, but rather than apply Belovs' theorem, we develop one from scratch that takes further advantage of the regular structure of our learning graphs.

Definition 5 (Consistent flows). Let $E$ be a stage of $\mathcal{G}$ and let $V_1, \ldots, V_s$ be a partition of the $\mathcal{L}$-vertices at the beginning of the stage. We say that $\{p_y\}$ is consistent with $E_{\to V_1}, \ldots, E_{\to V_s}$ if $p_y(E_{\to V_i})$ is independent of $y$ for each $i$.

Lemma 3. Let $E$ be a stage of $\mathcal{G}$ and let $V_1, \ldots, V_s$ be a partition of the $\mathcal{L}$-vertices at the beginning of the stage. Set $E_i = E_{\to V_i}$, and suppose that $\{p_y\}$ is consistent with $E_1, \ldots, E_s$. Then there is a new weight function $w'$ for $\mathcal{G}$ such that $C^{w'}(E) \leq \max_i C(E_i)$.

Proof. Since by hypothesis $p_y(E_i)$ is independent of $y$, denote it by $\alpha_i$. We assume that $\alpha_i > 0$ for each $i$; if $\alpha_i = 0$ then $p_y((u,v)) = 0$ for every $y$ and $(u,v) \in E_i$, and these edges can be deleted from the graph without affecting anything. For $e \in E_i$, we define the new weight $w'(e) = \alpha_i\, C_1(E_i)\, w(e)$. Let us analyze the complexity of $E$ under this weighting. To evaluate the positive complexity, observe that $p_y(E) = 1$ for every $y$, since $E$ is a stage, and thus $\sum_i \alpha_i = 1$. Therefore
$$C_1^{w'}(E) = \max_y \sum_i \sum_{e \in E_i} \frac{\ell(e)\, p_y(e)^2}{w'(e)} \leq \sum_i \frac{\alpha_i}{C_1(E_i)} \max_y \sum_{e \in E_i} \frac{\ell(e)}{w(e)} \frac{p_y(e)^2}{\alpha_i^2} = \sum_i \alpha_i = 1.$$
The negative complexity can be bounded by
$$C_0^{w'}(E) = \sum_i \sum_{e \in E_i} \ell(e)\, w'(e) = \sum_i \alpha_i\, C_1(E_i) \sum_{e \in E_i} \ell(e)\, w(e) = \sum_i \alpha_i\, C_1(E_i)\, C_0(E_i) \leq \max_i C(E_i)^2.$$
Combining the two bounds, $C^{w'}(E) = \sqrt{C_0^{w'}(E)\, C_1^{w'}(E)} \leq \max_i C(E_i)$.

At a high level, we will analyze the complexity of a stage $E$ as follows. First, we partition the set of vertices $\mathcal{V}$ into equivalence classes $[u] = \{\sigma(u) : \sigma \in S_n\}$ for some appropriate action of $S_n$ that we will define later, and use symmetry to argue that the flow is consistent with $\{E_{\to [u]}\}$. Thus, by Lemma 3, it is enough to focus on the maximum complexity of $E_{\to [u]}$. Within $E_{\to [u]}$, our flows will be of a particularly simple form. In particular, incoming flow will be uniformly distributed over a subset of $[u]$ of fixed size independent of $y$.
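As a concrete warm-up for Definition 4 (our own example, not from the paper), consider the simplest learning graph for $\mathrm{OR}_n$: the root has one outgoing edge of length 1 and weight $w$ to each singleton $\{i\}$, and on a positive input $y$ the unit flow follows the single edge to an index with $y_i = 1$. Then $C_0 = nw$, $C_1 = 1/w$, and $C = \sqrt{n}$ regardless of $w$, recovering Grover's bound via Theorem 1:

```python
import math

def or_learning_graph_complexity(n, w):
    # Negative complexity: n edges, each of length 1 and weight w.
    c0 = n * w
    # Positive complexity: the flow uses a single edge e with p_y(e) = 1,
    # contributing l(e)/w(e) * (p_y(e)/p_y(E))^2 = 1/w.
    c1 = 1.0 / w
    # Definition 4: C(E) = sqrt(C0 * C1).
    return math.sqrt(c0 * c1)

print(or_learning_graph_complexity(100, 1.0))  # 10.0
print(or_learning_graph_complexity(100, 4.0))  # 10.0, independent of w
```

The weight cancels between the negative and positive terms, a pattern that the reweighting arguments of this section exploit systematically.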
The next two lemmas evaluate the complexity of $E_{\to [u]}$ in this situation.

Lemma 4. Let $E$ be a stage of $\mathcal{G}$ and let $V$ be some subset of the $\mathcal{L}$-vertices at the beginning of the stage. For each $y$, let $W_y \subseteq V$ be the set of vertices in $V$ which receive positive flow under $p_y$. Suppose that for every $y$ the following is true:

1. $E_{\to u} \cap E_{\to v} = \emptyset$ for $u \neq v \in V$,
2. $|W_y|$ is independent of $y$,
3. for all $v \in W_y$ we have $p_y(E_{\to v}) = p_y(E_{\to V}) / |W_y|$.

Then
$$C(E_{\to V}) \leq \sqrt{\max_{v \in V} C_0(E_{\to v}) \cdot \max_{v \in V} C_1(E_{\to v}) \cdot \frac{|V|}{|W_y|}}.$$

Proof. The negative complexity can easily be upper bounded by $C_0(E_{\to V}) = \sum_{v \in V} C_0(E_{\to v}) \leq |V| \max_{v \in V} C_0(E_{\to v})$. For the positive complexity we have
$$C_1(E_{\to V}) = \max_y \sum_{v \in W_y} \sum_{e \in E_{\to v}} \frac{\ell(e)\, p_y(e)^2}{w(e)\, p_y(E_{\to V})^2} \leq \frac{1}{|W_y|} \max_{v \in V} \max_y \sum_{e \in E_{\to v}} \frac{\ell(e)\, p_y(e)^2}{w(e)\, p_y(E_{\to v})^2} \leq \frac{\max_{v \in V} C_1(E_{\to v})}{|W_y|},$$
where the first inequality uses $p_y(E_{\to V}) = |W_y|\, p_y(E_{\to v})$ for $v \in W_y$, by hypothesis (3). The claimed bound follows from $C(E_{\to V}) = \sqrt{C_0(E_{\to V})\, C_1(E_{\to V})}$.

Observe that when $E$ is a stage between two consecutive levels, that is, between level $i$ and $i+1$ for some $i$, and $V$ is a subset of the vertices at the beginning of the stage, then $E_{\to V} = V^+$. We will use Lemma 3 in conjunction with Lemma 4 first in this context.

Lemma 5. Let $E$ be a stage of $\mathcal{G}$ between two consecutive levels. Let $V$ be the set of $\mathcal{L}$-vertices at the beginning of the stage, and suppose that each $v \in V$ has outdegree $d$ and that all $\mathcal{L}$-edges $e$ of the stage satisfy $w(e) = 1$ and $\ell(e) \leq \ell$. Let $V_1, \ldots, V_s$ be a partition of $V$, and for all $y$ and $i$, let $W_{y,i} \subseteq V_i$ be the set of vertices in $V_i$ which receive positive flow under $p_y$. Suppose that

1. the flows $\{p_y\}$ are consistent with $\{V_i^+\}$,
2. $|W_{y,i}|$ is independent of $y$ for every $i$, and for all $v \in W_{y,i}$ we have $p_y(v^+) = p_y(V_i^+) / |W_{y,i}|$,
3. there is a $g$ such that for each vertex $v \in W_{y,i}$ the flow is directed uniformly to $g$ of the $d$ many neighbors.
Then there is a new weight function $w'$ such that
$$C^{w'}(E) \leq \max_i \ell \sqrt{\frac{d}{g} \cdot \frac{|V_i|}{|W_{y,i}|}}. \qquad (2)$$

Proof. By hypothesis (1) we are in the realm of Lemma 3, and therefore $C^{w'}(E) \leq \max_i C(V_i^+)$. To evaluate $C(V_i^+)$, we can apply Lemma 4 according to hypothesis (2). The statement of the lemma then follows, since for every $v \in V$ we have $C_0(v^+) = \ell d$, and $C_1(v^+) = \ell / g$ by hypothesis (3).

This lemma will be the main tool we use to analyze the complexity of stages. Note that the complexity in Equation (2) can be decomposed into three parts: the length $\ell$, the degree ratio $d/g$, and the maximum vertex ratio $\max_i |V_i| / |W_{y,i}|$. This terminology will be very helpful for evaluating the complexity of stages.

We will use symmetry to decompose our flows as convex combinations of uniform flows over disjoint sets of edges. Recall that each $\mathcal{L}$-vertex $u$ is labeled by a $k$-partite graph on $[n]$, say with color classes $A_1, \ldots, A_k$, and that we identify an $\mathcal{L}$-vertex with its label. For $\sigma \in S_n$ we define the action of $\sigma$ on $u$ as $\sigma(u) = v$, where $v$ is the $k$-partite graph with color classes $\sigma(A_1), \ldots, \sigma(A_k)$ and edges $\{\sigma(i), \sigma(j)\}$ for every edge $\{i,j\}$ in $u$. Define the equivalence class $[u]$ of $\mathcal{L}$-vertices by $[u] = \{\sigma(u) : \sigma \in S_n\}$. We say that $S_n$ acts transitively on flows $\{p_y\}$ if for every $y, y'$ there is a $\tau \in S_n$ such that $p_y((u,v)) = p_{y'}((\tau(u), \tau(v)))$ for all $\mathcal{L}$-edges $(u,v)$. As shown in the next lemma, if $S_n$ acts transitively on a set of flows $\{p_y\}$, then they are consistent with $\{[v]^+\}$, where $v$ is a vertex at the beginning of a stage between consecutive levels. This will set us up to satisfy hypothesis (1) of Lemma 5.

Lemma 6. Consider a learning graph $\mathcal{G}$ and a set of flows $\{p_y\}$ such that $S_n$ acts transitively on $\{p_y\}$. Let $V$ be the set of $\mathcal{L}$-vertices of $\mathcal{G}$ at some given level.
Then $\{p_y\}$ is consistent with $\{[u]^+ : u \in V\}$ and, similarly, $\{p_y\}$ is consistent with $\{[u]^- : u \in V\}$.

Proof. Let $p_y, p_{y'}$ be two flows and $\tau \in S_n$ such that $p_y((u,v)) = p_{y'}((\tau(u), \tau(v)))$ for all $\mathcal{L}$-edges $(u,v)$. Then
$$p_y([u]^+) = \sum_{v \in [u]} \sum_{w : (v,w) \in \mathcal{E}} p_y((v,w)) = \sum_{v \in [u]} \sum_{w : (v,w) \in \mathcal{E}} p_{y'}((\tau(v), \tau(w))) = \sum_{\tau^{-1}(v) \in [u]} \; \sum_{\tau^{-1}(w) : (\tau^{-1}(v), \tau^{-1}(w)) \in \mathcal{E}} p_{y'}((v,w)) = \sum_{v \in [u]} \sum_{w : (v,w) \in \mathcal{E}} p_{y'}((v,w)) = p_{y'}([u]^+).$$
The statement $p_y([u]^-) = p_{y'}([u]^-)$ follows in exactly the same way.

The next lemma gives a sufficient condition for hypothesis (2) of Lemma 5 to be satisfied. The partition of vertices in Lemma 5 will be taken according to the equivalence classes $[u]$. Note that, unlike the previous lemmas in this section, which only consider a stage of a learning graph, this lemma speaks about the learning graph in its entirety.

Lemma 7. Consider a learning graph and a set of flows $\{p_y\}$ such that $S_n$ acts transitively on $\{p_y\}$. Suppose that for every $\mathcal{L}$-vertex $u$ and flow $p_y$ such that $p_y(u^-) > 0$,

1. the flow from $u$ is uniformly directed to $g^+([u])$ many neighbors,
2. for every $\mathcal{L}$-vertex $w$, the number of incoming edges from $[w]$ to $u$ is $g^-([w], [u])$.

Then for every $\mathcal{L}$-vertex $u$ the flow entering $[u]$ is uniformly distributed over $W_{y,[u]} \subseteq [u]$, where $|W_{y,[u]}|$ is independent of $y$.

Proof. We first use hypotheses (1) and (2) of Lemma 7 to show that for every flow $p_y$ and for every $\mathcal{L}$-vertex $u$, the incoming flow $p_y(u^-)$ to $u$ is either $0$ or $\alpha_y([u]) > 0$; that is, it depends only on the equivalence class of $u$. We then use transitivity and hypothesis (2) of Lemma 7 to reach the conclusion of the lemma. Let $V_t$ be the set of vertices at level $t$ and fix a flow $p_y$.
The proof is then by induction on the level $t$ of the following stronger statement: for every $\sigma, \sigma' \in S_n$ and $\mathcal{L}$-vertices $u \in V_t$ and $v, v' \in V_{t+1}$,
$$[\, p_y((u,v)) > 0 \text{ and } p_y((\sigma(u), v')) > 0 \,] \implies p_y((u,v)) = p_y((\sigma(u), v')), \qquad (3)$$
$$[\, p_y(\sigma(u)^-) > 0 \text{ and } p_y(\sigma'(u)^-) > 0 \,] \implies p_y(\sigma(u)^-) = p_y(\sigma'(u)^-). \qquad (4)$$

At level $t = 0$ the statement is correct, since the root is unique, has incoming flow $1$, and has outgoing edges with flow $0$ or $1/g^+(\mathrm{root})$. Assume the statements hold up to and including level $t$. Hypothesis (1) implies that when $p_y((u,v)) > 0$ for $u \in V_t$, it satisfies $p_y((u,v)) = p_y(u^-)/g^+([u])$, and similarly $p_y((\tau(u), v')) = p_y(\tau(u)^-)/g^+([u])$. Therefore Equation (4) at level $t$ implies Equation (3) at level $t+1$.

We now turn to Equation (4) at level $t+1$. Fix $v \in V_{t+1}$ and $\sigma, \sigma' \in S_n$ such that $\sigma(v)$ and $\sigma'(v)$ have positive incoming flows. Then
$$p_y(\sigma(v)^-) = \sum_{u \in V_t} p_y((u, \sigma(v))) \quad \text{and} \quad p_y(\sigma'(v)^-) = \sum_{u \in V_t} p_y((u, \sigma'(v))).$$
We will show that $p_y(\sigma(v)^-) = p_y(\sigma'(v)^-)$ by proving the following equality for every $u$:
$$\sum_{\tau \in S_n} p_y((\tau(u), \sigma(v))) = \sum_{\tau \in S_n} p_y((\tau(u), \sigma'(v))).$$
By Equation (3) at level $t$, all nonzero terms in the respective sums are identical. By hypothesis (2), the number of nonzero terms is $g^-([u],[v])$ in both sums. Therefore the two sums are identical.

We have now concluded that the incoming flow to an $\mathcal{L}$-vertex $u$ is either $0$ or $\alpha_y([u]) > 0$. This implies that the flow entering $[u]$ is uniformly distributed over some set $W_{y,[u]} \subseteq [u]$. We now show that the size of this set is independent of $y$. If the flow is transitive, then $\alpha_y([u])$ is independent of $y$, and furthermore, by the second statement of Lemma 6 applied to the level of $u$, $\sum_{v \in [u]} p_y(v^-) = \sum_{v \in [u]} p_{y'}(v^-)$.
Thus the number of terms in each sum must be the same, and $|W_{y,[u]}|$ is independent of $y$.

4 Algorithms

We first discuss some basic assumptions about the subgraph $H$. Say that $H$ has $k$ vertices and minimum degree $d$. First, we assume that $d \geq 1$, that is, $H$ has no isolated vertices. Recall that we are testing if $G$ contains $H$ as a subgraph, not as an induced subgraph. Thus if $H'$ is $H$ with isolated vertices removed, then $G$ contains $H$ if and only if $G$ contains $H'$ and $n \geq k$. Furthermore, the algorithms we give in this section behave monotonically with $k$, and so will have smaller complexity on the graph $H'$. Additionally, we assume that $k \geq 3$: if $k = 2$ and $d = 1$, then $H$ is simply an edge, and in that case the complexity is known to be $\Theta(n)$, as the problem is equivalent to search on $\Theta(n^2)$ items.

Thus let $H$ be a graph on vertex set $\{1, 2, \ldots, k\}$, with $k \geq 3$ vertices. We present two algorithms in this section for determining if a graph $G$ contains $H$. Following Belovs, we say that a stage loads an edge $\{a,b\}$ if for all $\mathcal{L}$-edges $(u,v)$ with flow in the stage, we have $\{a,b\} \in S(v) \setminus S(u)$. Both algorithms will use a subroutine, given in Section 4.1, to load an induced subgraph of $H$. For an integer $1 \leq u \leq k$, let $H_{[1,u]}$ be the subgraph of $H$ induced by vertices $1, 2, \ldots, u$. The first algorithm, given in Section 4.2, will take $u = k$ and load $H$ directly; the second algorithm, given in Section 4.3, will first load $H_{[1,k-1]}$, and then search for the missing vertex that completes $H$.

4.1 Loading a subgraph of H

Fix $1 \leq u \leq k$ and let $e_1, \ldots, e_m$ be the edges of $H_{[1,u]}$, enumerated in some fixed order. We assume that $m \geq 1$. For any positive input graph $G$, that is, a graph $G$ which contains a copy of $H$, we fix $k$ vertices $a_1, a_2, \ldots, a_k$ such that $\{a_i, a_j\}$ is an edge of $G$ whenever $\{i,j\}$ is an edge of $H$.
We define a bit of terminology that will be useful. For two sets $Y_1, Y_2 \subseteq [n]$, we say that a bipartite graph between $Y_1$ and $Y_2$ is of type $(\{(n_1, d_1), \ldots, (n_j, d_j)\}, \{(m_1, g_1), \ldots, (m_\ell, g_\ell)\})$ if $Y_1$ has $n_i$ vertices of degree $d_i$ for $i = 1, \ldots, j$, if $Y_2$ has $m_i$ vertices of degree $g_i$ for $i = 1, \ldots, \ell$, and if this is a complete listing of the vertices in the graph, i.e., $|Y_1| = \sum_{i=1}^{j} n_i$ and $|Y_2| = \sum_{i=1}^{\ell} m_i$.

Vertices of our learning graph will be labeled by a $u$-partite graph $Q$ on disjoint sets $X_1, \ldots, X_u \subseteq [n]$. The global structure of $Q$ will mimic the edge pattern of $H_{[1,u]}$. Namely, for each edge $e_t = \{i,j\}$ of $H_{[1,u]}$, there will be a bipartite graph $Q_t$ between $X_i$ and $X_j$ with a specified degree sequence. There are no edges between $X_i$ and $X_j$ if $\{i,j\}$ is not an edge of $H_{[1,u]}$. The mapping $S : \mathcal{V} \to 2^{\binom{n}{2}}$ from learning graph vertices to query indices returns the union of the edges of $Q_t$ for $t = 1, \ldots, m$.

We now describe the stages of our first learning graph. Let $V_t$ denote the $\mathcal{L}$-vertices at the beginning of stage $t$ (and so at the end of stage $t-1$, for $t > 0$). The $\mathcal{L}$-edges between $V_t$ and $V_{t+1}$ are defined in the obvious way: there is an $\mathcal{L}$-edge between $v_t \in V_t$ and $v_{t+1} \in V_{t+1}$ if the graph labeling $v_t$ is a subgraph of the graph labeling $v_{t+1}$. We initially set the weight of all $\mathcal{L}$-edges to one, though some edges will be reweighted in the complexity analysis using Lemma 5. The root of the learning graph is labeled by the empty graph. The algorithm depends on two parameters $r, s$ which will be optimized later. The parameter $r \in [n]$ will control the number of vertices, and $s \in [0,1]$ the edge density, of the graphs labeling the $\mathcal{L}$-vertices.

Learning graph G1:

Stage 0: Setup (Figure 1). $V_1$ consists of all $\mathcal{L}$-vertices labeled by a $u$-partite graph $Q$ with color classes $A_1, \ldots, A_u \subseteq [n]$, each of size $r-1$. The edges are the union of the edges of bipartite graphs $Q_1, \ldots, Q_m$, where if $e_\ell = \{i,j\}$ is an edge of $H_{[1,u]}$, then $Q_\ell$ is a bipartite graph of type $(\{(r-1-rs, rs), (rs, rs-1)\}, \{(r-1-rs, rs), (rs, rs-1)\})$ between $A_i$ and $A_j$. The number of edges added in this stage is $O(sr^2)$. Flow is uniform from the root of the learning graph, whose label is the empty graph, to all $\mathcal{L}$-vertices such that $a_1, \ldots, a_k \notin A_i$ for $i = 1, \ldots, u$.

Stage $t$ for $t = 1, \ldots, u$: Load $a_t$ (Figures 2 and 3). $V_{t+1}$ consists of all $\mathcal{L}$-vertices labeled by a $u$-partite graph $Q$ with color classes $B_1, \ldots, B_t, A_{t+1}, \ldots, A_u$, where $|B_i| = r$ and $|A_i| = r-1$. The edges of $Q$ are the union of the edges of bipartite graphs $Q_1, \ldots, Q_m$, where if $e_\ell = \{i,j\}$, then the type of $Q_\ell$ is given by the following cases:

• If $t < i < j$, then $Q_\ell$ is of type $(\{(r-1-rs, rs), (rs, rs-1)\}, \{(r-1-rs, rs), (rs, rs-1)\})$ between $A_i$ and $A_j$.
• If $i \leq t < j$, then $Q_\ell$ is of type $(\{(r-rs, rs), (rs, rs-1)\}, \{(r-1, rs)\})$ between $B_i$ and $A_j$.
• If $i < j \leq t$, then $Q_\ell$ is of type $(\{(r, rs)\}, \{(r, rs)\})$ between $B_i$ and $B_j$.

The number of edges added at stage $t$ is $O(rs)$. The flow is directed uniformly on those $\mathcal{L}$-edges where the element added to $A_t$ is $a_t$ and none of the edges $\{a_i, a_j\}$ are present.

Stage $u+1$: Hiding (Figure 4). Now we are ready to start loading the edges $\{a_i, a_j\}$. If we simply loaded the edge $\{a_i, a_j\}$ now, however, it would be uniquely identified by the degrees of $a_i, a_j$, since only these vertices would have degree $rs+1$. This means that, for example, at the last stage of the learning graph the vertex ratio would be $\Omega(n^{k-1})$, no matter what $r$ is.
Thus in this stage we first do a "hiding" step, adding edges so that half of the vertices in every set have degree $rs+1$. Formally, $V_{u+2}$ consists of all $\mathcal{L}$-vertices labeled by a $u$-partite graph $Q$ with color classes $B_1, \ldots, B_u$, where $|B_i| = r$. The edges of $Q$ are the union of the edges of bipartite graphs $Q_1, \ldots, Q_m$, where if $e_\ell = \{i,j\}$, then $Q_\ell$ is of type $(\{(r/2, rs), (r/2, rs+1)\}, \{(r/2, rs), (r/2, rs+1)\})$ between $B_i$ and $B_j$. The number of edges added in this stage is $O(r)$. The flow is directed uniformly to those $\mathcal{L}$-vertices where, for every $e_\ell = \{i,j\}$, both $a_i$ and $a_j$ have degree $rs$ in $Q_\ell$.

Stage $u+t+1$ for $t = 1, \ldots, m$: Load $\{a_i, a_j\}$ if $e_t = \{i,j\}$ (Figure 5). Take an $\mathcal{L}$-vertex at the beginning of stage $u+t+1$ whose edges are the union of bipartite graphs $Q_1, \ldots, Q_m$. In stage $u+t+1$ only $Q_t$ is modified, by adding a single edge $\{b_i, b_j\}$, where $b_i \in B_i$ and $b_j \in B_j$ have degree $rs$ in $Q_t$. The flow is directed uniformly along those $\mathcal{L}$-edges where $b_i = a_i$ and $b_j = a_j$. Thus at the end of stage $u+m+1$, the $\mathcal{L}$-vertices are labeled by the edges in the union of bipartite graphs $Q_1, \ldots, Q_m$, each of type $(\{(r/2-1, rs), (r/2+1, rs+1)\}, \{(r/2-1, rs), (r/2+1, rs+1)\})$. The incoming flow is uniform over those $\mathcal{L}$-vertices where $a_i \in B_i$ for $i = 1, \ldots, u$, and if $e_\ell = \{i,j\}$, then the edge $\{a_i, a_j\}$ is present in $Q_\ell$ for $\ell = 1, \ldots, m$, and both $a_i, a_j$ have degree $rs+1$ in $Q_\ell$.

Figure 1: Stage 0: edges added to $Q_\ell$ when $e_\ell = \{i,j\}$ is an edge of $H_{[1,u]}$: a bipartite graph between $A_i$ and $A_j$ ($r-1$ vertices each), with $r-1-rs$ vertices of degree $rs$ and $rs$ vertices of degree $rs-1$ on each side. The flow is uniform to instances with $a_1, \ldots, a_k \notin A_i$ for $i = 1, \ldots, k-1$.
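As an aside, the degree bookkeeping in the stage descriptions above can be checked mechanically: a bipartite type is realizable only if both sides account for the same number of edges. A small sketch (the helper `is_consistent_type` is ours, not the paper's):

```python
def is_consistent_type(side1, side2):
    """Handshake check for a bipartite type: the total degree of Y1
    must equal the total degree of Y2, since both count the edges."""
    return sum(n * d for n, d in side1) == sum(m * g for m, g in side2)

r, rs = 10, 4  # e.g. r = 10 vertices per class with rs = 4 (s = 0.4)

# Stage 0 type between A_i and A_j (each of size r - 1):
stage0 = [(r - 1 - rs, rs), (rs, rs - 1)]
print(is_consistent_type(stage0, stage0))  # True

# Stage t type between B_i (size r) and A_j (size r - 1):
print(is_consistent_type([(r - rs, rs), (rs, rs - 1)], [(r - 1, rs)]))  # True

# Hiding stage type between B_i and B_j (each of size r):
hiding = [(r // 2, rs), (r // 2, rs + 1)]
print(is_consistent_type(hiding, hiding))  # True
```

All the types used by the learning graph pass this check, which is one way to see that the prescribed degree sequences are mutually consistent.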
A j : r  1 ver ti ce s r  1  rs ver ti ce s rs ver ti ce s r  1  rs ver ti ce s rs ver ti ce s with degree rs with degree rs  1 with degree rs with degree rs  1 B t : A t with 1 new vertex ! r ve rt ice s degree rs 1 new vertex Figure 2: Stage t for t = 1 , . . . , u : r s added edges in some G ` at stage t , when e ` = { t, j } with t < j . See Figure 3 for the case e ` = { i, t } with i < t . (No edge is added to G ` at stage t when e ` = { i, j } with t 6 = i and t 6 = j .) The added edges are b et ween the new v ertex of A t and the r s v ertices in A j , resp ectiv ely B i , of degree ( r s − 1). The flo w is directed to instances where the new v ertex of A t is a t . 11 rs ve rti ce s with degree rs with degree rs  1 with degree rs r  rs ve rt ic es B i : r ver ti ce s B t : A t with 1 new vertex ! r ve rt ic es 1 new vertex degree rs r  1 ve rt ic es Figure 3: Stage t for t = 1 , . . . , u : rs added edges in some G ` at stage t , when e ` = { i, t } with i < t . See Figure 2 for the case e ` = { t, j } with t < j . (No edge is added to G ` at stage t when e ` = { i, j } with t 6 = i and t 6 = j .) The added edges are b et ween the new v ertex of A t and the r s v ertices in A j , resp ectiv ely B i , of degree ( r s − 1). The flo w is directed to instances where the new v ertex of A t is a t . r/ 2 v ertex-disjoint edges r/ 2 ver ti ce s r/ 2 ver ti ce s B i : r ve rt ic es B j : r ve rt ic es all of degree rs all of degree rs Figure 4: Stage u + 1 : W e add r / 2 vertex-disjoin t edges to G ` when e ` = { i, j } is an edge of K . The flow is directed to instances where the degrees of a i and a j remain r s in G ` . 12 r/ 2 ve rti ce s r/ 2 ve rti ce s B i : r ve rt ice s B j : r ve rt ice s with degree rs +1 with degree rs +1 with degree rs r/ 2 ve rti ce s with degree rs r/ 2 ve rti ce s 1 edge b et ween 2 rs -degree vertices Figure 5: Stage u + 1 + t for t = 1 , . . . , m : Let e t = { i, j } . 
Then a single edge is added to $Q_t$ between two vertices $b_i \in B_i$ and $b_j \in B_j$ of degree $rs$ in $Q_t$. The flow is directed to instances where $b_i = a_i$ and $b_j = a_j$.

Complexity analysis of the stages. Note that for an input graph $y$ containing a copy of $H_{[1,u]}$ the definition of flow depends only on the vertices $a_1, \ldots, a_u$ that span $H$. As for any two graphs $y, y'$ containing $H$ there is a permutation $\tau$ mapping a copy of $H$ in $y$ to a copy of $H$ in $y'$, we see that $S_n$ acts transitively on flows. Furthermore, by construction of our learning graph, from a vertex $v$ with $p_y(v^-) > 0$, flow is directed uniformly to $g$ out of $d$ many neighbors, where $g, d$ depend only on the stage, not on $y$ or $v$. Additionally, by symmetry of the flow, hypothesis (2) of Lemma 7 is also satisfied. We will invoke Lemma 5 to evaluate the cost of each stage. Hypothesis (1) is satisfied by Lemma 6, hypothesis (2) by Lemma 7, and hypothesis (3) by construction of the learning graph.

• Stage 0: The set of L-vertices at the beginning of this stage is simply the root, thus the vertex ratio (and maximum vertex ratio) is one. The degree ratio can be upper bounded by $((n-k)/(n-kr-k))^k = O(1)$, as we will choose $r = o(n)$ and $k$ is constant. The length of this stage is $O(sr^2)$ and so its complexity is $O(sr^2)$.

• Stage $t$ for $t = 1, \ldots, u$: An L-vertex in $V_t$ will be used by the flow if and only if $a_i \in B_i$ for $i = 1, \ldots, t-1$ and $a_i \notin B_1, \ldots, B_t, A_{t+1}, \ldots, A_u$ for $i = t, \ldots, k$. For any vertex $v \in V_t$, the probability over $\sigma \in S_n$ that $\sigma(v)$ satisfies the second event is constant, thus the vertex ratio is dominated by the first event, which has probability $O((r/n)^{t-1})$. Thus the maximum vertex ratio is $O((n/r)^{t-1})$. The degree ratio is $n$. Since $O(sr)$ edges are added, the complexity is $O(sr\sqrt{n}\,(n/r)^{(t-1)/2})$.
• Stage $u+1$: As above, an L-vertex in $V_{u+1}$ will be used by the flow if and only if $a_i \in B_i$ for $i = 1, \ldots, u$. For any vertex $v \in V_{u+1}$, the probability over $\sigma$ that this is satisfied by $\sigma(v)$ is $O((r/n)^u)$, therefore the maximum vertex ratio is $O((n/r)^u)$. For each $e_\ell = \{i, j\}$, half of the vertices in $B_i$ and half of the vertices in $B_j$ will have degree $rs$ in $Q_\ell$. Therefore, the degree ratio is $4^m = O(1)$. Since $O(r)$ edges are added, the complexity of this stage is therefore $O(r(n/r)^{u/2})$.

• Stage $u+t+1$ for $t = 1, \ldots, m$: In every stage, the degree ratio is $O(r^2)$. An L-vertex is in the flow at the beginning of stage $u+t+1$ if the following two conditions are satisfied:

$a_i \in B_i$ for $i = 1, \ldots, u$,  (5)

if $e_\ell = \{i, j\}$ then $\{a_i, a_j\} \in Q_\ell$ with $a_i, a_j$ of degree $rs+1$ in $Q_\ell$, for $\ell = 1, \ldots, t-1$.  (6)

The probability over $\sigma$ that $\sigma(v)$ satisfies Equation (5) is $\Omega((r/n)^u)$. Among vertices in $[v]$ satisfying this condition, a further $\Omega(s^{t-1})$ fraction will satisfy Equation (6). This follows from Lemma 8 below, together with the independence of the bipartite graphs $Q_1, \ldots, Q_m$. Thus the maximum vertex ratio is $O((n/r)^u s^{-(t-1)})$. As only one edge is added at this stage, we obtain a cost of $O(r(n/r)^{u/2} s^{-(t-1)/2})$.

Lemma 8. Let $Y_1, Y_2$ be disjoint $r$-element subsets of $[n]$, and let $(y_1, y_2) \in Y_1 \times Y_2$. Let $K$ be a bipartite graph between $Y_1$ and $Y_2$ of type $(\{(r/2-1, rs), (r/2+1, rs+1)\}, \{(r/2-1, rs), (r/2+1, rs+1)\})$. The probability over $\sigma \in S_n$ that the edge $\{y_1, y_2\}$ is in $\sigma(K)$ and both $y_1$ and $y_2$ are of degree $rs+1$ is at least $s/4$.

Proof. The degree condition is satisfied with probability at least $1/4$.
Given that the degree condition is satisfied, it is enough to show that for a bipartite graph $K'$ of type $(\{(r, rs)\}, \{(r, rs)\})$ the probability over $\sigma \in S_n$ that $\sigma(K')$ contains the fixed edge $(y_1, y_2)$ is at least $s$, since $K$ is such a graph plus some additional edges. By symmetry, this probability does not depend on the choice of the edge; denote it by $p$. Let $K_1, \ldots, K_c$ be an enumeration of all bipartite graphs isomorphic to $K'$. We will count in two different ways the cardinality $\chi$ of the set $\{(e, h) : e \in K_h\}$. Every $K_h$ contains $sr^2$ edges, therefore $\chi = csr^2$. On the other hand, every edge appears in $pc$ graphs, therefore $\chi = r^2 pc$, and thus $p = s$.

4.2 Loading $H$

When $u = k$, the constructed learning graph determines if $H$ is a subgraph of the input graph, since a copy of $H$ is loaded on positive instances. Choosing the parameters $s, r$ to optimize the total cost gives the following theorem.

Theorem 9. Let $H$ be a graph on $k \ge 3$ vertices and $m \ge 1$ edges. Then there is a quantum query algorithm for determining if $H$ is a subgraph of an $n$-vertex graph making $O(n^{2-2/(k+1)-k/((k+1)(m+1))})$ many queries.

Proof. By Theorem 1, it suffices to show that the learning graph $G_1$ has the claimed complexity. We will use Lemma 2 and upper bound the learning graph complexity by the sum of the costs of the stages. As usual, we will ignore factors of $k$. The complexity of stage 0 is: $S_0 = O(sr^2)$. The complexity of each stage $1, \ldots, k$, and also their sum, is dominated by the complexity of stage $k$: $U' = O(sr\sqrt{n}\,(n/r)^{(k-1)/2})$. The complexity of stage $k+1$ is: $U'' = O(r(n/r)^{k/2})$. Again, the complexity of each stage $k+2, \ldots, k+m+1$, and also their sum, is dominated by the complexity of stage $k+m+1$: $U''' = O(r(n/r)^{k/2} s^{-(m-1)/2})$. Observe that $U'' = O(U''')$.
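The balancing of these costs in the rest of the proof can be double-checked with exact rational arithmetic. The sketch below is a verification aid, not part of the algorithm; the parameter choices $r = n^{1-1/(k+1)}$, $s = n^{-t}$ with $t = k/((k+1)(m+1))$ are taken as assumptions from the theorem statement. It confirms that the exponents of $S_0$, $U'$ and $U'''$ all equal $2 - 2/(k+1) - t$:

```python
from fractions import Fraction as F

def stage_exponents(k, m):
    """Exponents (in n) of S0 = s*r^2, U' = s*r*sqrt(n)*(n/r)^((k-1)/2)
    and U''' = r*(n/r)^(k/2)*s^(-(m-1)/2), with r = n^rho and s = n^(-t)."""
    rho = 1 - F(1, k + 1)               # r = n^(1 - 1/(k+1))
    t = F(k, (k + 1) * (m + 1))         # t = k / ((k+1)(m+1))
    s0 = -t + 2 * rho                                     # S0
    u1 = -t + rho + F(1, 2) + F(k - 1, 2) * (1 - rho)     # U'
    u3 = rho + F(k, 2) * (1 - rho) + t * F(m - 1, 2)      # U'''
    return s0, u1, u3, 2 - F(2, k + 1) - t

for k in range(3, 8):
    for m in range(1, k * (k - 1) // 2 + 1):
        s0, u1, u3, claimed = stage_exponents(k, m)
        # all three dominant stage costs balance at the claimed exponent
        assert s0 == u1 == u3 == claimed, (k, m)
print("Theorem 9 exponents balance")
```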
Therefore the overall cost can be bounded by $S_0 + U' + U'''$. Choosing $r = n^{1-1/(k+1)}$ makes $S_0 = U'$ for any value of $s$, as their dependence on $s$ is the same. When $s = 1$ we have $U''' < S_0 = U'$, thus we can choose $s < 1$ to balance all three terms. Letting $s = n^{-t}$ we have $S_0 = U' = O(n^{2-2/(k+1)-t})$ and $U''' = O(n^{1+(k-2)/(2(k+1))+t(m-1)/2})$. Making these equal gives $t = k/((k+1)(m+1))$, and gives overall cost $O(n^{2-2/(k+1)-t})$.

4.3 Loading the full graph but one vertex

Recall that $H$ is a graph on vertex set $\{1, 2, \ldots, k\}$, with $k \ge 3$ vertices, $m \ge 1$ edges and minimum degree $d \ge 1$. By renaming the vertices, if necessary, we assume that vertex $k$ has degree $d$. Our second algorithm employs the learning graph $G_1$ of Section 4.1 with $u = k-1$ to first load $H_{[1,k-1]}$. This is then combined with search to find the missing vertex, and a collision subroutine to verify that it links with $H_{[1,k-1]}$ to form $H$. Again, let $H_{[1,k-1]}$ be the subgraph of $H$ induced by vertices $1, 2, \ldots, k-1$, and let $e_1, \ldots, e_{m'}$ be the edges of $H_{[1,k-1]}$, enumerated in some fixed order. Thus note that $m = m' + d$. For any positive input graph $y$, we fix $k$ vertices $a_1, a_2, \ldots, a_k$ such that $\{a_i, a_j\}$ is an edge of $y$ whenever $\{i, j\}$ is an edge of $H$. For notational convenience we assume that $a_k$ is of degree $d$ and connected to $a_1, \ldots, a_d$.

Learning graph $G_2$:

Stages $0, 1, \ldots, k+m'$: Learning graph $G_1$ of Section 4.1.

Stage $k+m'+1$: We use search plus a $d$-wise collision subroutine to find a vertex $v$ and $d$ edges which link $v$ to $H_{[1,k-1]}$ to form $H$. The learning graph for this subroutine is given in Section 4.4.

Complexity analysis of the stages. All stages but the last one have been analyzed in Section 4.1, therefore only the last stage remains to study.
• Stage $k+m'+1$: Let $V_{k+m'+1}$ be the set of L-vertices at the beginning of stage $k+m'+1$. We will evaluate the complexity of this stage in a similar fashion as we have done previously. As $S_n$ acts transitively on the flows, by Lemma 6 we can invoke Lemma 3, and it suffices to consider the maximum of $C(E^{\to}_{[u]})$ over equivalence classes $[u]$. Furthermore, as we have argued in Section 4.1, the learning graph also satisfies the conditions of Lemma 7, thus we can apply Lemma 4 to evaluate $C(E^{\to}_{[u]})$. The maximum vertex ratio over $[u]$ is $O(s^{-m'}(n/r)^{k-1})$. As shown in Section 4.4, the complexity of the subroutine learning graph attached to each $v \in V_{k+m'+1}$ is at most $O(\sqrt{n}\, r^{d/(d+1)})$. Thus by Lemma 4, the complexity of this stage is
$$O\left(s^{-m'/2}\left(\frac{n}{r}\right)^{(k-1)/2}\sqrt{n}\, r^{d/(d+1)}\right).$$

Choosing the parameters $s, r$ to optimize the total cost gives the following theorem.

Theorem 10. Let $H$ be a graph on $k \ge 3$ vertices with minimal degree $d \ge 1$ and $m$ edges. Then there is a quantum query algorithm for determining if $H$ is a subgraph of an $n$-vertex graph making $O(n^{2-2/k-(2k-d-3)/(k(d+1)(m-d+2))})$ many queries.

Proof. By Theorem 1, it suffices to show that the learning graph $G_2$ has the claimed complexity. We will use Lemma 2 and upper bound the learning graph complexity by the sum of the costs of the stages. As usual, we will ignore factors of $k$. The complexity of stage 0 is: $S_0 = O(sr^2)$. The complexity of each stage $1, \ldots, k-1$, and also their sum, is dominated by the complexity of stage $k-1$: $U' = O(sr\sqrt{n}\,(n/r)^{(k-2)/2})$. The complexity of stage $k$ is: $U'' = O(r(n/r)^{(k-1)/2})$. Again, the complexity of each stage $k+1, \ldots, k+m'$, and also their sum, is dominated by the complexity of stage $k+m'$: $U''' = O(r(n/r)^{(k-1)/2} s^{-(m'-1)/2})$. Observe that $U'' = O(U''')$.
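As for Theorem 9, the balancing of the dominant costs can be double-checked numerically. The sketch below is a verification aid; the choices $r = n^{1-1/k}$, $s = n^{-t}$ and the value of $t$ are taken as assumptions from the statement of Theorem 10, with $m' = m - d$. It confirms that the exponents of $S_0$, $U'$ and of the cost of the last stage all equal $2 - 2/k - t$:

```python
from fractions import Fraction as F

def g2_exponents(k, d, m1):
    """Exponents (in n) of S0 = s*r^2, U' = s*r*sqrt(n)*(n/r)^((k-2)/2) and
    C' = s^(-m'/2)*(n/r)^((k-1)/2)*sqrt(n)*r^(d/(d+1)), with r = n^(1-1/k),
    s = n^(-t), t = (2k-d-3)/(k(d+1)(m'+2)); m1 stands for m' = m - d."""
    rho = 1 - F(1, k)
    t = F(2 * k - d - 3, k * (d + 1) * (m1 + 2))
    s0 = -t + 2 * rho                                                    # S0
    u1 = -t + rho + F(1, 2) + F(k - 2, 2) * (1 - rho)                    # U'
    c0 = t * F(m1, 2) + F(k - 1, 2) * (1 - rho) + F(1, 2) \
         + rho * F(d, d + 1)                                             # C'
    return s0, u1, c0, 2 - F(2, k) - t

for k in range(3, 8):
    for d in range(1, k):            # minimum degree d <= k - 1
        for m1 in range(1, k * (k - 1) // 2):
            s0, u1, c0, claimed = g2_exponents(k, d, m1)
            assert s0 == u1 == c0 == claimed, (k, d, m1)
print("Theorem 10 exponents balance")
```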
Finally, denote the cost of stage $k+m'+1$ by $C' = O(s^{-m'/2}(n/r)^{(k-1)/2}\sqrt{n}\, r^{d/(d+1)})$. Observe that $U''' = O(C')$, provided that $r^{1/(d+1)} s^{1/2} = O(n^{1/2})$. The latter is always satisfied since $s \le 1$, $r \le n$ and $d \ge 1$. Therefore the overall cost can then be bounded by $S_0 + U' + C'$. Choosing $r = n^{1-1/k}$ makes $S_0 = U'$ for any value of $s$, as their $s$-dependence is the same. When $s = 1$ we have $C' < S_0 = U'$, thus we can choose $s < 1$ to balance all three terms. Letting $s = n^{-t}$ we have $S_0 = U' = O(n^{2-2/k-t})$ and $C' = O(n^{2-2/k+1/(2k)-(k-1)/(k(d+1))+tm'/2})$. Making these equal gives $t = (2k-d-3)/(k(d+1)(m'+2))$. Since $k \ge 3$ we have $t > 0$ and thus $s < 1$. The overall cost of the algorithm is $O(n^{2-2/k-t})$. Noting that $m = m' + d$ gives the statement of the theorem.

Our main result is an immediate consequence of Theorem 9 and Theorem 10.

Theorem 11. Let $H$ be a graph on $k \ge 3$ vertices with minimal degree $d \ge 1$ and $m$ edges. Then there is a quantum query algorithm for determining if $H$ is a subgraph of an $n$-vertex graph making $O(n^{2-2/k-t})$ many queries, where
$$t = \max\left(\frac{k^2 - 2(m+1)}{k(k+1)(m+1)}, \frac{2k-d-3}{k(d+1)(m-d+2)}\right).$$

4.4 Graph collision subroutine

In this section we describe a learning graph for the graph collision subroutine that is used in the learning graph given in Section 4.3. For each vertex $v$ at the end of stage $k+m'$ we will attach a learning graph $G_v$. The root of $G_v$ will be the label of $v$, and we will show that it has complexity $\sqrt{n}\, r^{d/(d+1)}$. Furthermore, for every flow $p_y$ on $G_v$, the sinks of flow will be L-vertices that have loaded a copy of $H$. We now describe $G_v$ in further detail. A vertex $v$ at the end of stage $k+m'$ is labeled by a $(k-1)$-partite graph $Q$ on color classes $B_1, \ldots, B_{k-1}$ of size $r$.
The edges of $Q$ are the union of the edges in bipartite graphs $Q_1, \ldots, Q_{m'}$, each of type $(\{(r/2-1, rs), (r/2+1, rs+1)\}, \{(r/2-1, rs), (r/2+1, rs+1)\})$. This will be the label of the root of $G_v$. On $G_v$ we define a flow $p'_y$ for every input $y$ such that $p_y(v^-) > 0$ in the learning graph loading $H_{[1,k-1]}$. Say that $y$ contains a copy of $H$ and that vertices $a_1, \ldots, a_k$ span $H$ in $y$. For ease of notation, assume that vertex $a_k$ (the degree-$d$ vertex removed from $H$) is connected to $a_1, \ldots, a_d$. Recall that the L-vertex $v$ will have flow if and only if $a_i \in B_i$, $a_k \notin B_i$ for $i = 1, \ldots, k-1$, and if $e_\ell = \{i, j\}$ then the edge $\{a_i, a_j\}$ is present in $Q_\ell$ for $\ell = 1, \ldots, m'$, and both $a_i, a_j$ have degree $rs+1$ in $Q_\ell$. Thus for each such $y$ we will define a flow on $G_v$. The flow will only depend on $a_1, \ldots, a_k$. The complexity of $G_v$ will depend on a parameter $1 \le \lambda \le r$, that we will optimize later.

Stage 0: Choose a vertex $u \notin B_i$ for $i = 1, \ldots, k-1$ and load $\lambda$ edges between $u$ and vertices of degree $rs+1$ in $B_i$, for each $i = 1, \ldots, d$. Flow is directed uniformly along those L-edges where $u = a_k$ and none of the edges loaded touch any of the $a_1, \ldots, a_d$.

Stage $t$ for $t = 1, \ldots, d$: Load an additional edge between $u$ and $B_t$. The flow is directed uniformly along those L-edges where the edge loaded is $\{a_k, a_t\}$.

Complexity analysis of the stages

• Stage 0: We use Lemma 4. As the vertices at the beginning of this stage consist only of the root, conditions (1) and (2) are trivially satisfied; condition (3) is satisfied by construction. Flow is present in all L-edges of this stage where $u = a_k$, which is an $\Omega(1/n)$ fraction of the total number of L-edges. Thus the degree ratio $d/g = O(n)$. The length of the stage is $\lambda$, giving a total cost of $\lambda\sqrt{n}$.

• Stage $t$ for $t = 1, \ldots, d$: Let $V_t$ be the set of vertices at the beginning of stage $t$. The definition of flow depends only on $a_1, \ldots, a_k$, thus $S_n$ acts transitively on the flows. Applying Lemma 6 gives that $\{p'(y)\}$ is consistent with $[u]^+$ for $u \in V_t$. Also by construction the hypothesis of Lemma 7 is satisfied, thus we are in position to use Lemma 5. The length of each stage is 1. The out-degree of an L-vertex in stage $t$ is $O(r)$ while the flow uses just one outgoing edge, thus the degree ratio $d/g = O(r)$. Finally, we must estimate the fraction of vertices in $[u]$ with flow for $u \in V_t$. A vertex $u$ in $V_t$ has flow if and only if $a_k$ was loaded in stage 0 and the edges $\{a_k, a_i\}$ are loaded for $i = 1, \ldots, t-1$. The probability over $\sigma \in S_n$ that the first event holds in $\sigma(u)$ is $\Omega(1/n)$. Given that $a_k$ has been loaded at vertex $u \in V_t$, the probability over $\sigma$ that $\{a_k, a_i\} \in \sigma(u)$ is $\Omega(\lambda/r)$. Thus we obtain that the maximum vertex ratio at stage $t$ is $n(r/\lambda)^{t-1}$. The complexity of stage $t$ is maximized when $t = d$, giving an overall complexity $\sqrt{nr}\,(r/\lambda)^{(d-1)/2}$.

The sum of the costs $\lambda\sqrt{n}$ and $\sqrt{nr}\,(r/\lambda)^{(d-1)/2}$ is minimized for $\lambda = r^{d/(d+1)}$, giving a cost of $O(\sqrt{n}\, r^{d/(d+1)})$.

4.5 Comparison with the quantum walk approach

It is insightful to compare the cost of the learning graph algorithm for finding a subgraph with the algorithm of [MSS07] using a quantum walk on the Johnson graph. We saw in the analysis of the learning graph that there were three important terms in the cost, denoted $S_0, U', C'$. In the quantum walk formalism there are also three types of costs: setup, aggregated update, and aggregated checking, which we will denote by $S, U, C$. When the walk is done on the Johnson graph with vertices labeled by $r$-element subsets these costs are
$$S = r^2, \qquad U = \left(\frac{n}{r}\right)^{(k-1)/2} r^{3/2}, \qquad C = \left(\frac{n}{r}\right)^{(k-1)/2} \sqrt{n}\, r^{d/(d+1)}.$$
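These three cost exponents can be checked with exact rational arithmetic. The sketch below is a verification aid, not part of either algorithm; it evaluates the exponents of $S$, $U$ and $C$ for $r = n^{\rho}$, using the triangle-finding parameters $k = 3$, $d = 2$, $\rho = 3/5$ of [MSS07] and the general choice $\rho = 1 - 1/k$:

```python
from fractions import Fraction as F

def walk_exponents(k, d, rho):
    """Exponents (in n) of the Johnson-walk costs S = r^2,
    U = (n/r)^((k-1)/2) * r^(3/2) and C = (n/r)^((k-1)/2) * sqrt(n) * r^(d/(d+1)),
    when r = n^rho."""
    S = 2 * rho
    U = F(k - 1, 2) * (1 - rho) + F(3, 2) * rho
    C = F(k - 1, 2) * (1 - rho) + F(1, 2) + rho * F(d, d + 1)
    return S, U, C

# Triangle finding: k = 3, d = 2, r = n^(3/5) gives S = n^1.2, U = C = n^1.3.
S, U, C = walk_exponents(3, 2, F(3, 5))
assert (S, U, C) == (F(6, 5), F(13, 10), F(13, 10))

# General case: r = n^(1-1/k) makes S = U = n^(2-2/k), while C stays
# strictly smaller even for the largest possible degree d = k - 1.
for k in range(3, 10):
    S, U, C = walk_exponents(k, k - 1, 1 - F(1, k))
    assert S == U == 2 - F(2, k) and C < S
print("quantum walk cost exponents check out")
```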
Here $d$ is the minimal degree of a vertex in $H$. There is only one parameter, and in general $r$ cannot be chosen to make all three terms equal. In the case of triangle finding ($k = 3$, $d = 2$), the choice $r = n^{3/5}$ is made. This makes $S = n^{1.2}$ and $U = C = n^{1.3}$. In the general case of finding $H$, the choice $r = n^{1-1/k}$ is made, giving the first and second terms equal to $n^{2-2/k}$ and the third term $C = n^{2-\frac{1}{k}\left(1+\frac{k}{d+1}+\frac{d-1}{2(d+1)}\right)}$. Thus $C < S = U$ even for the largest possible value $d = k-1$. Because of this, the analysis gives $n^{2-2/k}$ queries for any graph on $k$ vertices, independent of $d$.

Acknowledgments

We thank Aleksandrs Belovs for pointing out an error in the original version of the algorithm. We are very grateful to László Babai, whose insightful remarks made us realize the importance of distinguishing the two algorithms presented here. We also thank the anonymous referees for their helpful comments for improving the presentation of the paper.

References

[Amb07] A. Ambainis. Quantum walk algorithm for element distinctness. SIAM Journal on Computing, 37(1):210–239, 2007.

[BBC+01] R. Beals, H. Buhrman, R. Cleve, M. Mosca, and R. de Wolf. Quantum lower bounds by polynomials. Journal of the ACM, 48(4):778–797, 2001.

[BCWZ99] H. Buhrman, R. Cleve, R. de Wolf, and C. Zalka. Bounds for small-error and zero-error quantum algorithms. In Proceedings of IEEE Symposium on Foundations of Computer Science, pages 358–368, 1999.

[BDH+05] H. Buhrman, C. Dürr, M. Heiligman, P. Høyer, F. Magniez, M. Santha, and R. de Wolf. Quantum algorithms for element distinctness. SIAM Journal on Computing, 34(6):1324–1330, 2005.

[Bel12a] A. Belovs. Learning-graph-based quantum algorithm for k-distinctness. In Proceedings of IEEE Symposium on Foundations of Computer Science, 2012. To appear.

[Bel12b] A. Belovs. Span programs for functions with constant-sized 1-certificates.
In Proceedings of the ACM Symposium on the Theory of Computing, pages 77–84, 2012.

[CK11] A. Childs and R. Kothari. Quantum query complexity of minor-closed graph properties. In Proceedings of Symposium on Theoretical Aspects of Computer Science, volume 9 of Leibniz International Proceedings in Informatics, pages 661–672, 2011.

[DHHM06] C. Dürr, M. Heiligman, P. Høyer, and M. Mhalla. Quantum query complexity of some graph problems. SIAM Journal on Computing, 35(6):1310–1328, 2006.

[EHK99] M. Ettinger, P. Høyer, and E. Knill. Hidden subgroup states are almost orthogonal. Technical Report quant-ph/9901034, arXiv, 1999.

[Gro96] L. K. Grover. A fast quantum mechanical algorithm for database search. In Proceedings of the ACM Symposium on the Theory of Computing, pages 212–219, 1996.

[HLŠ07] P. Høyer, T. Lee, and R. Špalek. Negative weights make adversaries stronger. In Proceedings of the ACM Symposium on the Theory of Computing, pages 526–535, 2007.

[HŠ05] P. Høyer and R. Špalek. Lower bounds on quantum query complexity. Bulletin of the European Association for Theoretical Computer Science, 87, 2005. Also arXiv report quant-ph/0509153v1.

[LMR+11] T. Lee, R. Mittal, B. Reichardt, R. Špalek, and M. Szegedy. Quantum query complexity of state conversion. In Proceedings of IEEE Symposium on Foundations of Computer Science, pages 344–353, 2011.

[MNRS11] F. Magniez, A. Nayak, J. Roland, and M. Santha. Search via quantum walk. SIAM Journal on Computing, 40(1):142–164, 2011.

[MSS07] F. Magniez, M. Santha, and M. Szegedy. Quantum algorithms for the triangle problem. SIAM Journal on Computing, 37(2):413–424, 2007.

[Rei09] B. W. Reichardt. Span programs and quantum query complexity: The general adversary bound is nearly tight for every boolean function. In Proceedings of IEEE Symposium on Foundations of Computer Science, pages 544–551, 2009.
[Rei11] B. W. Reichardt. Reflections for quantum query algorithms. In Proceedings of the ACM-SIAM Symposium on Discrete Algorithms, pages 560–569, 2011.

[Sho97] P. Shor. Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer. SIAM Journal on Computing, 26(5):1484–1509, 1997.

[Sim97] D. Simon. On the power of quantum computation. SIAM Journal on Computing, 26(5):1474–1483, 1997.

[Zhu11] Y. Zhu. Quantum query complexity of subgraph containment with constant-sized certificates. Technical Report arXiv:1109.4165v1, arXiv, 2011.
