Strong direct product conjecture holds for all relations in public coin randomized one-way communication complexity

Rahul Jain*

November 20, 2018

*Centre for Quantum Technologies and Department of Computer Science, National University of Singapore. rahul@comp.nus.edu.sg

Abstract

Let f ⊆ X × Y × Z be a relation. Let the public coin one-way communication complexity of f, with worst case error 1/3, be denoted $R^{1,\mathrm{pub}}_{1/3}(f)$. We show that if only $o(k \cdot R^{1,\mathrm{pub}}_{1/3}(f))$ communication is provided for computing $f^k$ (k independent copies of f), then the success is exponentially small in k. This settles the strong direct product conjecture for all relations in public coin one-way communication complexity.

We show a new tight characterization of public coin one-way communication complexity which strengthens the tight characterization shown in Jain, Klauck, Nayak [JKN08]. We use the new characterization to show our direct product result; it may also be of independent interest.

1 Introduction

Let f ⊆ X × Y × Z be a relation and ε > 0. Let Alice, with input x ∈ X, and Bob, with input y ∈ Y, wish to compute a z ∈ Z such that (x, y, z) ∈ f. We consider the model of public coin one-way communication complexity, in which Alice sends a single message to Bob, and Alice and Bob may use public coins. Let $R^{1,\mathrm{pub}}_{\varepsilon}(f)$ denote the communication of the best protocol P which achieves this with error at most ε (over the public coins) for every input (x, y).

Now suppose that Alice and Bob wish to compute f simultaneously on k inputs $(x_1, y_1), \ldots, (x_k, y_k)$ for some k ≥ 1. They can achieve this by running k independent copies of P in parallel. However, in this case the overall success could be as low as $(1-\varepsilon)^k$. The strong direct product conjecture for f states that this is roughly the best that Alice and Bob can do. We show that this is indeed true for all relations.

Theorem 1.1 Let f ⊆ X × Y × Z be a relation and let k ≥ 1 be a natural number. Then,
$$R^{1,\mathrm{pub}}_{1-2^{-\Omega(k)}}(f^k) \;\geq\; \Omega\big(k \cdot R^{1,\mathrm{pub}}_{1/3}(f)\big).$$

We obtain this result via a new tight characterization of public coin one-way communication complexity for all relations. We introduce a new measure of complexity which we call the robust conditional relative min-entropy bound. We show that this bound is equivalent, up to constants, to $R^{1,\mathrm{pub}}_{1/3}(f)$ and use this to show the direct product result. This bound is a lower bound on the one-way subdistribution bound of Jain, Klauck, Nayak [JKN08], who showed that their bound is equivalent, up to constants, to $R^{1,\mathrm{pub}}_{1/3}(f)$. They also showed that the one-way subdistribution bound satisfies the direct product property under product distributions.

There has been substantial prior work on the strong direct product question, and on the weaker direct sum and weak direct product questions, in various models of communication complexity, e.g. [IRW94, PRW97, CSWY01, Sha03, JRS03, KŠdW04, Kla04, JRS05, BPSW07, Gav08, JKN08, JK09, HJMR09, BBR10, BR10, Kla10].

In the next section we provide the information theory and communication complexity preliminaries that we need; we refer the reader to the texts [CT91, KN97] for good introductions to these topics respectively. In Section 3 we introduce our new bound.
In Section 4 we show that it tightly characterizes public coin one-way communication complexity. Finally, in Section 5 we show our direct product result.

2 Preliminaries

Information theory

Let X, Y be sets and k be a natural number. Let $\mathcal{X}^k$ represent $\mathcal{X} \times \cdots \times \mathcal{X}$, k times. Let µ be a distribution over X, which we denote by µ ∈ X. We use µ(x) to represent the probability of x under µ. The entropy of µ is defined as $S(\mu) = -\sum_{x \in \mathcal{X}} \mu(x) \log \mu(x)$. Let X be a random variable distributed according to µ, which we denote by X ∼ µ. We use the same symbol to represent a random variable and its distribution whenever it is clear from the context. For distributions $\mu, \mu_1$, $\mu \otimes \mu_1$ represents the product distribution given by $(\mu \otimes \mu_1)(x, x_1) = \mu(x) \cdot \mu_1(x_1)$, and $\mu^k$ represents $\mu \otimes \cdots \otimes \mu$, k times. The $\ell_1$ distance between distributions µ, µ₁ is defined as $\|\mu - \mu_1\|_1 = \frac{1}{2}\sum_{x \in \mathcal{X}} |\mu(x) - \mu_1(x)|$.

Let λ, µ ∈ X × Y. We use µ(x|y) to represent µ(x, y)/µ(y). When we say XY ∼ µ we assume that X ∈ X and Y ∈ Y. We use $\mu_x$ and $Y_x$ to represent the conditional distribution and random variable Y | X = x. The conditional entropy of Y given X is defined as $S(Y|X) = \mathbb{E}_{x \leftarrow X}\, S(Y_x)$. The relative entropy between λ and µ is defined as $S(\lambda \,\|\, \mu) = \sum_{x \in \mathcal{X}} \lambda(x) \log \frac{\lambda(x)}{\mu(x)}$. We use the following properties of relative entropy in many places without explicitly mentioning them.

Fact 2.1

1. Relative entropy is jointly convex in its arguments; that is, for distributions $\lambda_1, \lambda_2, \mu_1, \mu_2$ and $p \in [0,1]$,
$$S\big(p\lambda_1 + (1-p)\lambda_2 \,\|\, p\mu_1 + (1-p)\mu_2\big) \;\leq\; p \cdot S(\lambda_1\|\mu_1) + (1-p) \cdot S(\lambda_2\|\mu_2).$$

2. Let $XY, X^1Y^1 \in \mathcal{X}\times\mathcal{Y}$. Relative entropy satisfies the following chain rule:
$$S(XY \,\|\, X^1Y^1) = S(X\|X^1) + \mathbb{E}_{x \leftarrow X}\, S(Y_x \,\|\, Y^1_x).$$
This in particular implies, using joint convexity of relative entropy,
$$S(XY \,\|\, X^1 \otimes Y^1) = S(X\|X^1) + \mathbb{E}_{x\leftarrow X}\, S(Y_x\|Y^1) \;\geq\; S(X\|X^1) + S(Y\|Y^1).$$

3. For distributions λ, µ: $\|\lambda - \mu\|_1 \leq \sqrt{S(\lambda\|\mu)}$ and $S(\lambda\|\mu) \geq 0$.

The relative min-entropy between λ and µ is defined as $S_\infty(\lambda\|\mu) = \max_{x\in\mathcal{X}} \log\frac{\lambda(x)}{\mu(x)}$. It is easily seen that $S(\lambda\|\mu) \leq S_\infty(\lambda\|\mu)$.

Let X, Y, Z be random variables. The mutual information between X and Y is defined as
$$I(X:Y) = S(X) + S(Y) - S(XY) = \mathbb{E}_{x\leftarrow X}\, S(Y_x\|Y) = \mathbb{E}_{y\leftarrow Y}\, S(X_y\|X).$$
The conditional mutual information is defined as $I(X:Y|Z) = \mathbb{E}_{z\leftarrow Z}\, I(X:Y|Z=z)$. Random variables X, Y, Z form a Markov chain $Z \leftrightarrow X \leftrightarrow Y$ iff $I(Y:Z|X=x) = 0$ for each x in the support of X.

One-way communication complexity

Let f ⊆ X × Y × Z be a relation. We only consider complete relations, that is, for each (x, y) ∈ X × Y there exists at least one z ∈ Z such that (x, y, z) ∈ f. In the one-way model of communication there is a single message, from Alice with input x ∈ X to Bob with input y ∈ Y, at the end of which Bob is supposed to determine an answer z such that (x, y, z) ∈ f. Let ε > 0 and let µ ∈ X × Y be a distribution. We let $D^{1,\mu}_{\varepsilon}(f)$ represent the distributional one-way communication complexity of f under µ with expected error ε, i.e., the communication of the best deterministic one-way protocol for f with distributional error (average error over the inputs) at most ε under µ.
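As a quick numerical illustration of these definitions (ours, not part of the paper), the following Python sketch samples random pairs of distributions and checks the inequality of Fact 2.1(3), with relative entropy measured in bits; all function names here are our own.

```python
import math
import random

def rel_entropy(lam, mu):
    """S(lam || mu) in bits; assumes supp(lam) is contained in supp(mu)."""
    return sum(p * math.log2(p / mu[x]) for x, p in lam.items() if p > 0)

def l1_dist(lam, mu):
    """The l1 distance defined above: (1/2) * sum_x |lam(x) - mu(x)|."""
    xs = set(lam) | set(mu)
    return 0.5 * sum(abs(lam.get(x, 0.0) - mu.get(x, 0.0)) for x in xs)

def random_dist(xs, rng):
    """A random distribution on the support xs."""
    w = [rng.random() for _ in xs]
    s = sum(w)
    return {x: wi / s for x, wi in zip(xs, w)}

rng = random.Random(0)
support = list(range(5))
for _ in range(10_000):
    lam, mu = random_dist(support, rng), random_dist(support, rng)
    kl = rel_entropy(lam, mu)
    # Fact 2.1(3): S(lam||mu) >= 0 and ||lam - mu||_1 <= sqrt(S(lam||mu)).
    assert kl >= -1e-12
    assert l1_dist(lam, mu) <= math.sqrt(max(kl, 0.0)) + 1e-9
print("Fact 2.1(3) held on all sampled pairs")
```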
Let $R^{1,\mathrm{pub}}_{\varepsilon}(f)$ represent the public coin one-way communication complexity of f with worst case error ε, i.e., the communication of the best public coin one-way protocol for f with error at most ε for each input (x, y). The following is a consequence of the min-max theorem in game theory [KN97, Theorem 3.20, page 36].

Lemma 2.2 (Yao principle) $R^{1,\mathrm{pub}}_{\varepsilon}(f) = \max_\mu D^{1,\mu}_{\varepsilon}(f)$.

The following result follows from the arguments in Braverman and Rao [BR10]. We skip its proof.

Lemma 2.3 (Braverman and Rao [BR10]) Let f ⊆ X × Y × Z be a relation and ε > 0. Let XY ∼ µ be inputs to a private coin one-way communication protocol P with distributional error at most ε. Let M represent the message of P, let θ be the distribution of XYM, and let
$$\Pr_{(x,y,i)\leftarrow\theta}\left[\log\frac{\theta(i|x)}{\theta(i|y)} > c\right] \;\leq\; \delta.$$
Then there exists a deterministic one-way protocol P₁ for f, with inputs distributed according to µ, such that the communication of P₁ is $c + O(\log(1/\delta))$ and the distributional error of P₁ is at most ε + 2δ.

3 New bound

Let f ⊆ X × Y × Z be a relation, let µ, λ ∈ X × Y be distributions, and let ε, δ > 0.

Definition 3.1 (One-way distributions) Distribution λ is called one-way for distribution µ if for all (x, y) in the support of λ we have µ(y|x) = λ(y|x).

Definition 3.2 (Error of a distribution) The error of distribution µ with respect to f, denoted $\mathrm{err}_f(\mu)$, is defined as
$$\mathrm{err}_f(\mu) \;\stackrel{\mathrm{def}}{=}\; \min\big\{\Pr_{(x,y)\leftarrow\mu}[(x,y,g(y)) \notin f] \;\big|\; g : \mathcal{Y} \to \mathcal{Z}\big\}.$$

Definition 3.3 (Robust conditional relative min-entropy) The δ-robust conditional relative min-entropy of λ with respect to µ, denoted $\mathrm{rcment}^{\mu}_{\delta}(\lambda)$, is defined to be the minimum number c such that
$$\Pr_{(x,y)\leftarrow\lambda}\left[\log\frac{\lambda(x|y)}{\mu(x|y)} > c\right] \;\leq\; \delta.$$

Definition 3.4 (Robust conditional relative min-entropy bound) The ε-error δ-robust conditional relative min-entropy bound of f with respect to distribution µ, denoted $\mathrm{rcment}^{\mu}_{\varepsilon,\delta}(f)$, is defined as
$$\mathrm{rcment}^{\mu}_{\varepsilon,\delta}(f) \;\stackrel{\mathrm{def}}{=}\; \min\big\{\mathrm{rcment}^{\mu}_{\delta}(\lambda) \;\big|\; \lambda \text{ is one-way for } \mu \text{ and } \mathrm{err}_f(\lambda) \leq \varepsilon\big\}.$$
The ε-error δ-robust conditional relative min-entropy bound of f, denoted $\mathrm{rcment}_{\varepsilon,\delta}(f)$, is defined as
$$\mathrm{rcment}_{\varepsilon,\delta}(f) \;\stackrel{\mathrm{def}}{=}\; \max\big\{\mathrm{rcment}^{\mu}_{\varepsilon,\delta}(f) \;\big|\; \mu \text{ is a distribution over } \mathcal{X}\times\mathcal{Y}\big\}.$$

The following bound was defined in [JKN08], where it was referred to as the one-way subdistribution bound. We name it differently here for consistency of nomenclature with the other bound.

Definition 3.5 (Relative min-entropy bound) The ε-error relative min-entropy bound of f with respect to distribution µ, denoted $\mathrm{ment}^{\mu}_{\varepsilon}(f)$, is defined as
$$\mathrm{ment}^{\mu}_{\varepsilon}(f) \;\stackrel{\mathrm{def}}{=}\; \min\big\{S_\infty(\lambda\|\mu) \;\big|\; \lambda \text{ is one-way for } \mu \text{ and } \mathrm{err}_f(\lambda) \leq \varepsilon\big\}.$$
The ε-error relative min-entropy bound of f, denoted $\mathrm{ment}_{\varepsilon}(f)$, is defined as
$$\mathrm{ment}_{\varepsilon}(f) \;\stackrel{\mathrm{def}}{=}\; \max\big\{\mathrm{ment}^{\mu}_{\varepsilon}(f) \;\big|\; \mu \text{ is a distribution over } \mathcal{X}\times\mathcal{Y}\big\}.$$

The following is easily seen from the definitions.

Lemma 3.1 $\mathrm{rcment}^{\mu}_{\delta}(\lambda) \leq S_\infty(\lambda\|\mu)$; hence $\mathrm{rcment}^{\mu}_{\varepsilon,\delta}(f) \leq \mathrm{ment}^{\mu}_{\varepsilon}(f)$ and $\mathrm{rcment}_{\varepsilon,\delta}(f) \leq \mathrm{ment}_{\varepsilon}(f)$.
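To make Definitions 3.2 and 3.3 concrete, here is a brute-force transcription for finite supports (our own illustrative code, not from the paper). It represents a distribution as a dict mapping pairs (x, y) to probabilities and f as a set of triples (x, y, z); it assumes the support of λ is contained in that of µ, so that the log-ratios are finite.

```python
import math

def err_f(f, mu):
    """Definition 3.2: err_f(mu) = min_g Pr_{(x,y)<-mu}[(x,y,g(y)) not in f].
    The best g may be chosen independently for each y, so we maximize
    the success mass y by y."""
    ys = {y for (_, y) in mu}
    zs = {z for (_, _, z) in f}
    success = sum(
        max(sum(p for (x, yy), p in mu.items() if yy == y and (x, y, z) in f)
            for z in zs)
        for y in ys)
    return 1.0 - success

def rcment(lam, mu, delta):
    """Definition 3.3: the least c such that
    Pr_{(x,y)<-lam}[ log(lam(x|y)/mu(x|y)) > c ] <= delta."""
    def cond(d, x, y):  # d(x | y)
        return d[(x, y)] / sum(p for (_, yy), p in d.items() if yy == y)
    ratios = sorted(
        ((math.log2(cond(lam, x, y) / cond(mu, x, y)), p)
         for (x, y), p in lam.items() if p > 0),
        reverse=True)
    tail = 0.0
    for c, p in ratios:      # scan candidate thresholds from the largest down
        tail += p            # lam-mass sitting at ratios >= c
        if tail > delta:     # mass strictly above c is still <= delta here
            return c
    return float("-inf")     # all of lam's mass already fits under delta
```

Minimizing rcment(lam, mu, delta) over distributions λ that are one-way for µ and satisfy err_f(λ) ≤ ε then recovers $\mathrm{rcment}^{\mu}_{\varepsilon,\delta}(f)$ of Definition 3.4.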
4 New characterization of public coin one-way communication complexity

The following lemma appears in [JKN08].

Lemma 4.1 Let f ⊆ X × Y × Z be a relation, let µ ∈ X × Y be a distribution, and let ε, k > 0. Then,
$$D^{1,\mu}_{\varepsilon(1-2^{-k})}(f) \;\geq\; \mathrm{ment}^{\mu}_{\varepsilon}(f) - k.$$

We show the following lemma, which we prove later.

Lemma 4.2 Let f ⊆ X × Y × Z be a relation, let µ ∈ X × Y be a distribution, and let ε, δ > 0. Then,
$$D^{1,\mu}_{\varepsilon+4\delta}(f) \;\leq\; \mathrm{rcment}_{\varepsilon,\delta}(f) + O\big(\log\tfrac{1}{\delta}\big).$$

Theorem 4.3 Let f ⊆ X × Y × Z be a relation and ε > 0. Then,
$$\mathrm{ment}_{2\varepsilon}(f) - 1 \;\leq\; R^{1,\mathrm{pub}}_{\varepsilon}(f) \;\leq\; \mathrm{rcment}_{\varepsilon/5,\varepsilon/5}(f) + O\big(\log\tfrac{1}{\varepsilon}\big).$$
Hence $R^{1,\mathrm{pub}}_{\varepsilon}(f) = \Theta(\mathrm{ment}_{\varepsilon}(f)) = \Theta(\mathrm{rcment}_{\varepsilon,\varepsilon}(f))$.

Proof: The first inequality follows from Lemma 4.1 (setting k = 1), maximizing both sides over all distributions µ and using Lemma 2.2. The second inequality follows from Lemma 4.2 (instantiated with error ε/5 and δ = ε/5, so that ε/5 + 4·(ε/5) = ε), maximizing both sides over all distributions µ and using Lemma 2.2. The other relations now follow from Lemma 3.1 and from the fact that the error of a public coin randomized one-way protocol can be brought down by a constant factor at the expense of increasing the communication by a constant factor.

Proof of Lemma 4.2: We make the following key claim, which we prove later.

Claim 4.4 There exist a natural number k and a Markov chain M ↔ X ↔ Y, where M ∈ [k] and XY ∼ µ, such that

1. for each i ∈ [k]: $\mathrm{err}_f(P_i) \leq \varepsilon$, where $P_i = (XY \mid M = i)$;

2. $\Pr_{(x,y,i)\leftarrow\theta}\Big[\log\frac{\theta(i|x)}{\theta(i|y)} > \mathrm{rcment}_{\varepsilon,\delta}(f) + \log\frac{1}{\delta}\Big] \leq 2\delta$, where θ is the distribution of XYM.

The claim immediately gives a private coin one-way protocol P₁ for f, in which Alice on input x generates i from the distribution $M_x$ and sends i to Bob. It is easily seen that the distributional error of P₁ is at most ε. Now using Lemma 2.3 we get a deterministic protocol P₂ for f with distributional error at most ε + 4δ and communication at most $d = \mathrm{rcment}_{\varepsilon,\delta}(f) + O(\log\frac{1}{\delta})$. It remains to prove Claim 4.4.

Proof of Claim 4.4: Let $c = \mathrm{rcment}_{\varepsilon,\delta}(f)$. Perform the following procedure, starting with i = 1.

1. Say we have collected distributions $P_1, \ldots, P_{i-1}$, each one-way for µ, and positive numbers $p_1, \ldots, p_{i-1}$ such that $\mu \geq \sum_{j=1}^{i-1} p_j P_j$ pointwise. If $\mu = \sum_{j=1}^{i-1} p_j P_j$, set k = i − 1 and stop.

2. Otherwise express $\mu = \sum_{j=1}^{i-1} p_j P_j + q_i Q_i$, where $Q_i$ is a distribution, one-way for µ. Since $\mathrm{rcment}^{Q_i}_{\varepsilon,\delta}(f) \leq c$, there is a distribution R, one-way for $Q_i$ (hence also one-way for µ), such that $\mathrm{rcment}^{Q_i}_{\delta}(R) \leq c$ and $\mathrm{err}_f(R) \leq \varepsilon$. Let $r = \max\{q \mid Q_i \geq qR\}$. Set $P_i = R$ and $p_i = q_i \cdot r$, increment i, and go back to step 1.

Observe that for each new i there is a new x ∈ X such that the residual satisfies $Q_{i+1}(x) = 0$: since $Q_i$ and R are both one-way for µ, the ratio $Q_i(x,y)/R(x,y)$ depends only on x, so the maximal choice of r zeroes out an entire row. Hence the above process converges after at most |X| iterations. At the end we have $\mu = \sum_{i=1}^{k} p_i P_i$.

Define M ∈ [k] such that $\Pr[M = i] = p_i$, and define XY ∈ X × Y correlated with M such that $(XY \mid M = i) \sim P_i$. It is easily checked that XY ∼ µ. Also, since each $P_i$ is one-way for µ, XYM forms a Markov chain M ↔ X ↔ Y. Let θ be the distribution of XYM. Define

1. $B = \big\{(x,y,i) \;\big|\; \log\frac{P_i(x|y)}{\mu(x|y)} > c + \log\frac{1}{\delta}\big\}$,

2. $B_1 = \big\{(x,y,i) \;\big|\; \log\frac{P_i(x|y)}{Q_i(x|y)} > c\big\}$,

3. $B_2 = \big\{(x,y,i) \;\big|\; \frac{\mu(y)}{q_i Q_i(y)} > \frac{1}{\delta}\big\}$.

Since $q_i Q_i(x,y) \leq \mu(x,y)$,
$$\frac{P_i(x|y)}{\mu(x|y)} = \frac{P_i(x|y)}{Q_i(x|y)} \cdot \frac{Q_i(x|y)}{\mu(x|y)} = \frac{P_i(x|y)}{Q_i(x|y)} \cdot \frac{Q_i(x,y)\,\mu(y)}{Q_i(y)\,\mu(x,y)} \;\leq\; \frac{P_i(x|y)}{Q_i(x|y)} \cdot \frac{\mu(y)}{q_i Q_i(y)}.$$
Therefore $B \subseteq B_1 \cup B_2$.

Since $\mathrm{rcment}^{Q_i}_{\delta}(P_i) \leq c$ for each i, we have $\Pr_{(x,y,i)\leftarrow\theta}[(x,y,i) \in B_1] \leq \delta$. For a given y, let $i_y$ be the smallest i such that $\frac{\mu(y)}{q_{i_y} Q_{i_y}(y)} > \frac{1}{\delta}$; since $q_i Q_i(y)$ is non-increasing in i, membership in $B_2$ is monotone in i, and $\sum_{i \geq i_y} p_i P_i(y) = q_{i_y} Q_{i_y}(y)$. Then,
$$\Pr_{(x,y,i)\leftarrow\theta}[(x,y,i) \in B_2] = \sum_y q_{i_y} Q_{i_y}(y) < \sum_y \delta\,\mu(y) = \delta.$$
Hence $\Pr_{(x,y,i)\leftarrow\theta}[(x,y,i) \in B] < 2\delta$. Finally note that, using the Markov property $\theta(i|x,y) = \theta(i|x)$,
$$\frac{P_i(x|y)}{\mu(x|y)} = \frac{\theta(x|y,i)}{\theta(x|y)} = \frac{\theta(x|y)\,\theta(i|x,y)}{\theta(i|y)\,\theta(x|y)} = \frac{\theta(i|x)}{\theta(i|y)},$$
which together with the bound on B establishes part 2 of the claim.
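The procedure in the proof of Claim 4.4 is constructive once the existential step is supplied. The Python sketch below (our own illustration) shows the bookkeeping for finite distributions; the oracle `find_R` is a hypothetical ingredient standing in for the guarantee that, for every residual $Q_i$, a distribution R one-way for $Q_i$ with $\mathrm{rcment}^{Q_i}_{\delta}(R) \leq c$ and $\mathrm{err}_f(R) \leq \varepsilon$ exists.

```python
def decompose(mu, find_R, tol=1e-12):
    """Greedy decomposition from the proof of Claim 4.4 (illustrative sketch).

    mu:     dict mapping (x, y) -> probability.
    find_R: oracle returning, for a normalized residual Q (one-way for mu),
            a qualifying distribution R with supp(R) inside supp(Q).
    Returns (ps, Ps) with mu = sum_i ps[i] * Ps[i] pointwise.
    """
    ps, Ps = [], []
    residual = dict(mu)                     # holds q_i * Q_i, unnormalized
    while True:
        q = sum(residual.values())
        if q <= tol:                        # mu exhausted; k = len(ps)
            return ps, Ps
        Q = {xy: w / q for xy, w in residual.items()}
        R = find_R(Q)
        # r = max{ q' : Q >= q' * R }, attained on the support of R.
        r = min(Q[xy] / R[xy] for xy in R if R[xy] > 0)
        for xy, w in R.items():             # peel off q * r * R from residual
            if w > 0:
                residual[xy] -= q * r * w
        ps.append(q * r)
        Ps.append(R)
```

As argued above, each round zeroes out at least one new row of the residual, so the loop terminates after at most |X| iterations.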
5 Strong direct product for one-way communication complexity

We start with the following theorem, which we prove later.

Theorem 5.1 (Direct product in terms of ment and rcment) Let f ⊆ X × Y × Z be a relation and µ ∈ X × Y be a distribution. Let $0 < 200\sqrt{\delta} < \varepsilon < 0.5$ and let k be a natural number. Then,
$$\mathrm{ment}^{\mu^k}_{1-(1-\varepsilon/2)^{\lfloor\delta k\rfloor}}(f^k) \;\geq\; \delta \cdot k \cdot \mathrm{rcment}^{\mu}_{\varepsilon,\varepsilon}(f).$$

We now state and prove our main result.

Theorem 5.2 (Direct product for one-way communication complexity) Let f ⊆ X × Y × Z be a relation. Let $0 < 200\sqrt{\delta} < \varepsilon < 0.5$ and let k be a natural number. Let $\delta' = (1-\varepsilon/10)^{\lfloor\delta k\rfloor} + 2^{-k}$. There exists a constant κ such that
$$R^{1,\mathrm{pub}}_{1-\delta'}(f^k) \;\geq\; \frac{\delta \cdot k}{\kappa} \cdot R^{1,\mathrm{pub}}_{\varepsilon}(f) - k.$$
In other words,
$$R^{1,\mathrm{pub}}_{1-2^{-\Omega(k)}}(f^k) \;\geq\; \Omega\big(k \cdot R^{1,\mathrm{pub}}_{1/3}(f)\big).$$

Proof: Let µ₁ be a distribution such that $D^{1,\mu_1}_{\varepsilon}(f) = R^{1,\mathrm{pub}}_{\varepsilon}(f)$, and let µ be a distribution such that $\mathrm{rcment}^{\mu}_{\varepsilon/5,\varepsilon/5}(f) = \mathrm{rcment}_{\varepsilon/5,\varepsilon/5}(f)$. Let κ be a constant (guaranteed by Lemma 4.2) such that $D^{1,\mu_1}_{\varepsilon}(f) \leq \kappa \cdot \mathrm{rcment}_{\varepsilon/5,\varepsilon/5}(f)$. Using Lemma 4.1, Lemma 4.2 and Theorem 5.1,
$$\frac{\delta k}{\kappa} \cdot R^{1,\mathrm{pub}}_{\varepsilon}(f) = \frac{\delta k}{\kappa} \cdot D^{1,\mu_1}_{\varepsilon}(f) \;\leq\; \delta k \cdot \mathrm{rcment}_{\varepsilon/5,\varepsilon/5}(f) = \delta k \cdot \mathrm{rcment}^{\mu}_{\varepsilon/5,\varepsilon/5}(f) \;\leq\; \mathrm{ment}^{\mu^k}_{1-(1-\varepsilon/10)^{\lfloor\delta k\rfloor}}(f^k) \;\leq\; D^{1,\mu^k}_{1-(1-\varepsilon/10)^{\lfloor\delta k\rfloor}-2^{-k}}(f^k) + k \;\leq\; R^{1,\mathrm{pub}}_{1-\delta'}(f^k) + k.$$
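To see how the informal restatement follows from the displayed bound (our own worked instantiation, not spelled out in the paper), fix ε = 1/3 and any constant δ with $200\sqrt{\delta} < 1/3$, say $\delta = 10^{-6}$:

```latex
% Our instantiation of Theorem 5.2 at eps = 1/3, delta = 10^{-6} (choices ours).
\[
  \delta' = \Bigl(1-\tfrac{1}{30}\Bigr)^{\lfloor \delta k \rfloor} + 2^{-k}
          = 2^{-\Omega(k)},
  \qquad \text{since } \Bigl(\tfrac{29}{30}\Bigr)^{\lfloor \delta k \rfloor}
          = 2^{-\lfloor \delta k \rfloor \log_2 (30/29)}.
\]
% The bound of the theorem then reads
\[
  R^{1,\mathrm{pub}}_{1-2^{-\Omega(k)}}(f^k)
  \ge \frac{\delta k}{\kappa}\, R^{1,\mathrm{pub}}_{1/3}(f) - k
  \ge \frac{\delta k}{2\kappa}\, R^{1,\mathrm{pub}}_{1/3}(f),
\]
% where the last inequality assumes R^{1,pub}_{1/3}(f) >= 2 kappa / delta;
% relations of smaller (constant) complexity are covered by adjusting the
% constant hidden in the Omega(.) notation.
```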
Proof of Theorem 5.1: Let $c = \mathrm{rcment}^{\mu}_{\varepsilon,\varepsilon}(f)$. Let $\lambda \in \mathcal{X}^k \times \mathcal{Y}^k$ be a distribution which is one-way for $\mu^k$ and with $S_\infty(\lambda\|\mu^k) < \delta c k$. We show that $\mathrm{err}_{f^k}(\lambda) \geq 1-(1-\varepsilon/2)^{\lfloor\delta k\rfloor}$, which shows the desired.

Let B be a set. For a random variable distributed in $B^k$, or a string in $B^k$, the portion corresponding to the i-th coordinate is represented with subscript i, the portion except the i-th coordinate with subscript −i, and the portion corresponding to a subset C ⊆ [k] with subscript C. For joint random variables MN, we let $M_n$ represent $M \mid (N = n)$, and also $MN \mid (N = n)$ when it is clear from the context.

Let XY ∼ λ and fix $g : \mathcal{Y}^k \to \mathcal{Z}^k$. For a coordinate i, let the binary random variable $T_i \in \{0,1\}$, correlated with XY, denote success in the i-th coordinate; that is, $T_i = 1$ iff XY = (x, y) is such that $(x_i, y_i, g(y)_i) \in f$. Let $k' = \lfloor\delta k\rfloor$. We make the following claim, which we prove later.

Claim 5.3 There exist k′ distinct coordinates $i_1, \ldots, i_{k'}$ such that $\Pr[T_{i_1} = 1] \leq 1 - \varepsilon/2$ and for each r < k′,

1. either $\Pr[T_{i_1} T_{i_2} \cdots T_{i_r} = 1] \leq (1-\varepsilon/2)^{k'}$,

2. or $\Pr[T_{i_{r+1}} = 1 \mid T_{i_1} T_{i_2} \cdots T_{i_r} = 1] \leq 1 - \varepsilon/2$.

This shows that the overall success is
$$\Pr[T_1 T_2 \cdots T_k = 1] \;\leq\; \Pr[T_{i_1} T_{i_2} \cdots T_{i_{k'}} = 1] \;\leq\; (1-\varepsilon/2)^{k'}.$$

Proof of Claim 5.3: Say we have identified r < k′ coordinates $i_1, \ldots, i_r$. Let $C = \{i_1, i_2, \ldots, i_r\}$ and $T = T_{i_1} T_{i_2} \cdots T_{i_r}$. If $\Pr[T = 1] \leq (1-\varepsilon/2)^{k'}$ then we are done, so assume that $\Pr[T = 1] > (1-\varepsilon/2)^{k'} \geq 2^{-\delta k}$.

Let X′Y′ ∼ µ. Let $X^1Y^1 = (XY \mid T = 1)$. Let D be uniformly distributed in $\{0,1\}^k$ and independent of $X^1Y^1$. Let $U_i = X^1_i$ if $D_i = 0$ and $U_i = Y^1_i$ if $D_i = 1$, and let $U = U_1 \ldots U_k$. Below, for any random variable $\tilde{X}\tilde{Y}$, we let $\tilde{X}\tilde{Y}_{d,u}$ represent the random variable obtained by the appropriate conditioning on $\tilde{X}\tilde{Y}$: for all i, on $\tilde{X}_i = u_i$ if $d_i = 0$, and on $\tilde{Y}_i = u_i$ if $d_i = 1$. Consider,
$$\delta k + \delta c k \;>\; S_\infty(X^1Y^1\|XY) + S_\infty(XY\|(X'Y')^{\otimes k}) \;\geq\; S_\infty(X^1Y^1\|(X'Y')^{\otimes k}) \;\geq\; S(X^1Y^1\|(X'Y')^{\otimes k})$$
$$= \mathbb{E}_{d\leftarrow D}\, S(X^1Y^1\|(X'Y')^{\otimes k}) \;\geq\; \mathbb{E}_{(d,u,x_C,y_C)\leftarrow(DUX^1_CY^1_C)}\, S\big((X^1Y^1)_{d,u,x_C,y_C}\,\big\|\,((X'Y')^{\otimes k})_{d,u,x_C,y_C}\big)$$
$$\geq\; \mathbb{E}_{(d,u,x_C,y_C)\leftarrow(DUX^1_CY^1_C)}\, S\big(X^1_{d,u,x_C,y_C}\,\big\|\,X'_{d_1,u_1} \otimes \cdots \otimes X'_{d_k,u_k}\big) \;\geq\; \sum_{i\notin C} \mathbb{E}_{(d,u,x_C,y_C)\leftarrow(DUX^1_CY^1_C)}\, S\big((X^1_{d,u,x_C,y_C})_i\,\big\|\,X'_{d_i,u_i}\big). \quad (5.1)$$

Also,
$$\delta k \;>\; S_\infty(X^1Y^1\|XY) \;\geq\; S(X^1Y^1\|XY) = \mathbb{E}_{d\leftarrow D}\, S(X^1Y^1\|XY)$$
$$\geq\; \mathbb{E}_{(d,u,x_C,y_C)\leftarrow(DUX^1_CY^1_C)}\, S\big(Y^1_{d,u,x_C,y_C}\,\big\|\,Y_{d_1,u_1} \otimes \cdots \otimes Y_{d_k,u_k}\big) \;\geq\; \sum_{i\notin C} \mathbb{E}_{(d,u,x_C,y_C)\leftarrow(DUX^1_CY^1_C)}\, S\big((Y^1_{d,u,x_C,y_C})_i\,\big\|\,Y'_{d_i,u_i}\big), \quad (5.2)$$
where the last step uses that λ is one-way for $\mu^k$, so that $Y_{d_i,u_i} = Y'_{d_i,u_i}$.

From Eq. 5.1 and Eq. 5.2, using Markov's inequality, we get a coordinate j outside of C such that

1. $\mathbb{E}_{(d,u,x_C,y_C)\leftarrow(DUX^1_CY^1_C)}\, S\big((X^1_{d,u,x_C,y_C})_j\,\big\|\,X'_{d_j,u_j}\big) \leq \frac{2\delta(c+1)}{1-\delta} \leq 4\delta c$, and

2. $\mathbb{E}_{(d,u,x_C,y_C)\leftarrow(DUX^1_CY^1_C)}\, S\big((Y^1_{d,u,x_C,y_C})_j\,\big\|\,Y'_{d_j,u_j}\big) \leq \frac{2\delta}{1-\delta} \leq 4\delta$.
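The extraction of the coordinate j compresses two applications of Markov's inequality; spelled out (our reconstruction of the implicit step), with $a_i$ and $b_i$ denoting the two expectations attached to coordinate $i \notin C$:

```latex
% Averaging step behind the choice of j (our reconstruction; constants as above).
% Eq. 5.1 and Eq. 5.2 give
\[
  \sum_{i \notin C} a_i \le \delta k (c+1), \qquad
  \sum_{i \notin C} b_i \le \delta k, \qquad
  |[k] \setminus C| \ge k - \lfloor \delta k \rfloor \ge (1-\delta)k .
\]
% For a uniformly random i outside C, Markov's inequality says that
% a_i > 2*delta*(c+1)/(1-delta) with probability < 1/2, and likewise
% b_i > 2*delta/(1-delta) with probability < 1/2; by a union bound some
% j outside C violates neither, i.e.
\[
  a_j \le \frac{2\delta(c+1)}{1-\delta} \le 4\delta c, \qquad
  b_j \le \frac{2\delta}{1-\delta} \le 4\delta ,
\]
% the final inequalities holding when c >= 1/(1 - 2*delta) and delta <= 1/2.
```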
Therefore,
$$4\delta c \;\geq\; \mathbb{E}_{(d,u,x_C,y_C)\leftarrow(DUX^1_CY^1_C)}\, S\big((X^1_{d,u,x_C,y_C})_j\,\big\|\,X'_{d_j,u_j}\big) = \mathbb{E}_{(d_{-j},u_{-j},x_C,y_C)\leftarrow(D_{-j}U_{-j}X^1_CY^1_C)}\; \mathbb{E}_{(d_j,u_j)\leftarrow(D_jU_j)\mid(D_{-j}U_{-j}X^1_CY^1_C)=(d_{-j},u_{-j},x_C,y_C)}\, S\big((X^1_{d,u,x_C,y_C})_j\,\big\|\,X'_{d_j,u_j}\big).$$
And,
$$4\delta \;\geq\; \mathbb{E}_{(d,u,x_C,y_C)\leftarrow(DUX^1_CY^1_C)}\, S\big((Y^1_{d,u,x_C,y_C})_j\,\big\|\,Y'_{d_j,u_j}\big) = \mathbb{E}_{(d_{-j},u_{-j},x_C,y_C)\leftarrow(D_{-j}U_{-j}X^1_CY^1_C)}\; \mathbb{E}_{(d_j,u_j)\leftarrow(D_jU_j)\mid(D_{-j}U_{-j}X^1_CY^1_C)=(d_{-j},u_{-j},x_C,y_C)}\, S\big((Y^1_{d,u,x_C,y_C})_j\,\big\|\,Y'_{d_j,u_j}\big).$$

Now using Markov's inequality, there exists a set $G_1$ with $\Pr[(D_{-j}U_{-j}X^1_CY^1_C) \in G_1] \geq 1 - 0.2$ such that for all $(d_{-j}, u_{-j}, x_C, y_C) \in G_1$,

1. $\mathbb{E}_{(d_j,u_j)\leftarrow(D_jU_j)\mid(D_{-j}U_{-j}X^1_CY^1_C)=(d_{-j},u_{-j},x_C,y_C)}\, S\big((X^1_{d,u,x_C,y_C})_j\,\big\|\,X'_{d_j,u_j}\big) \leq 40\delta c$, and

2. $\mathbb{E}_{(d_j,u_j)\leftarrow(D_jU_j)\mid(D_{-j}U_{-j}X^1_CY^1_C)=(d_{-j},u_{-j},x_C,y_C)}\, S\big((Y^1_{d,u,x_C,y_C})_j\,\big\|\,Y'_{d_j,u_j}\big) \leq 40\delta$.

Fix $(d_{-j}, u_{-j}, x_C, y_C) \in G_1$. Conditioning on $D_j = 1$ (which happens with probability 1/2) in inequality 1 above we get
$$\mathbb{E}_{y_j\leftarrow Y^1_j\mid(D_{-j}U_{-j}X^1_CY^1_C)=(d_{-j},u_{-j},x_C,y_C)}\, S\big((X^1_{d_{-j},u_{-j},y_j,x_C,y_C})_j\,\big\|\,X'_{y_j}\big) \;\leq\; 80\delta c. \quad (5.3)$$
Conditioning on $D_j = 0$ (which happens with probability 1/2) in inequality 2 above we get
$$\mathbb{E}_{x_j\leftarrow X^1_j\mid(D_{-j}U_{-j}X^1_CY^1_C)=(d_{-j},u_{-j},x_C,y_C)}\, S\big((Y^1_{d_{-j},u_{-j},x_j,x_C,y_C})_j\,\big\|\,Y'_{x_j}\big) \;\leq\; 80\delta.$$
Using concavity of the square root (and Fact 2.1(3)) we get
$$\mathbb{E}_{x_j\leftarrow X^1_j\mid(D_{-j}U_{-j}X^1_CY^1_C)=(d_{-j},u_{-j},x_C,y_C)}\, \big\|(Y^1_{d_{-j},u_{-j},x_j,x_C,y_C})_j - Y'_{x_j}\big\|_1 \;\leq\; \sqrt{80\delta}. \quad (5.4)$$

Let $X^2Y^2$ be such that $X^2 \sim (X^1_{d_{-j},u_{-j},x_C,y_C})_j$ and $(Y^2 \mid X^2 = x_j) \sim Y'_{x_j}$. From Eq. 5.4 we get
$$\big\|X^2Y^2 - ((X^1Y^1)_{d_{-j},u_{-j},x_C,y_C})_j\big\|_1 \;\leq\; \sqrt{80\delta}. \quad (5.5)$$
By construction $X^2Y^2$ is one-way for µ. Using Eq. 5.3 and Eq. 5.5 we conclude that
$$\Pr_{(x,y)\leftarrow X^2Y^2}\left[\log\frac{X^2Y^2(x|y)}{\mu(x|y)} > c\right] \;\leq\; 100\delta + \sqrt{80\delta} \;\leq\; \varepsilon.$$
Hence $\mathrm{rcment}^{\mu}_{\varepsilon}(X^2Y^2) \leq c$, and hence, by the choice of c, $\mathrm{err}_f(X^2Y^2) \geq \varepsilon$; therefore
$$\mathrm{err}_f\big(((X^1Y^1)_{d_{-j},u_{-j},x_C,y_C})_j\big) \;\geq\; \varepsilon - \sqrt{80\delta} \;\geq\; \frac{3\varepsilon}{4}.$$
Since, conditioned on $(Y^1_{d_{-j},u_{-j},x_C,y_C})_j$, the distribution $(X^1Y^1)_{d_{-j},u_{-j},x_C,y_C}$ is product across the $\mathcal{X}^k$ and $\mathcal{Y}^k$ parts, we have
$$\Pr\big[T_j = 1 \mid (T, D_{-j}, U_{-j}, X_C, Y_C) = (1, d_{-j}, u_{-j}, x_C, y_C)\big] \;\leq\; 1 - \mathrm{err}_f\big(((X^1Y^1)_{d_{-j},u_{-j},x_C,y_C})_j\big).$$
Therefore, overall,
$$\Pr[T_j = 1 \mid T = 1] \;\leq\; 0.8\Big(1 - \frac{3\varepsilon}{4}\Big) + 0.2 \;=\; 1 - 0.6\varepsilon \;\leq\; 1 - \varepsilon/2.$$
Setting $i_{r+1} = j$ completes the step; the case r = 0, with C = ∅ and T ≡ 1, yields the first coordinate $i_1$.
References

[BBR10] Boaz Barak, Mark Braverman, Xi Chen, and Anup Rao. How to compress interactive communication. In Proceedings of the 42nd Annual ACM Symposium on Theory of Computing, 2010.

[BPSW07] Paul Beame, Toniann Pitassi, Nathan Segerlind, and Avi Wigderson. A direct sum theorem for corruption and a lower bound for the multiparty communication complexity of Set Disjointness. Computational Complexity, 2007.

[BR10] Mark Braverman and Anup Rao. Efficient communication using partial information. Technical report, Electronic Colloquium on Computational Complexity, http://www.eccc.uni-trier.de/report/2010/083/, 2010.

[CSWY01] Amit Chakrabarti, Yaoyun Shi, Anthony Wirth, and Andrew C.-C. Yao. Informational complexity and the direct sum problem for simultaneous message complexity. In Proceedings of the 42nd Annual IEEE Symposium on Foundations of Computer Science, pages 270–278, 2001.

[CT91] Thomas M. Cover and Joy A. Thomas. Elements of Information Theory. Wiley Series in Telecommunications. John Wiley & Sons, New York, NY, USA, 1991.

[Gav08] Dmitry Gavinsky. On the role of shared entanglement. Quantum Information and Computation, 8, 2008.

[HJMR09] Prahladh Harsha, Rahul Jain, David McAllester, and Jaikumar Radhakrishnan. The communication complexity of correlation. IEEE Transactions on Information Theory, 56(1):438–449, 2009.

[IRW94] Russell Impagliazzo, Ran Raz, and Avi Wigderson. A direct product theorem. In Proceedings of the Ninth Annual IEEE Structure in Complexity Theory Conference, pages 88–96, 1994.

[JK09] Rahul Jain and Hartmut Klauck. New results in the simultaneous message passing model via information theoretic techniques. In Proceedings of the 24th IEEE Conference on Computational Complexity, pages 369–378, 2009.

[JKN08] Rahul Jain, Hartmut Klauck, and Ashwin Nayak. Direct product theorems for classical communication complexity via subdistribution bounds. In Proceedings of the 40th ACM Symposium on Theory of Computing, pages 599–608, 2008.

[JRS03] Rahul Jain, Jaikumar Radhakrishnan, and Pranab Sen. A direct sum theorem in communication complexity via message compression. In Proceedings of the Thirtieth International Colloquium on Automata, Languages and Programming, volume 2719 of Lecture Notes in Computer Science, pages 300–315. Springer, Berlin/Heidelberg, 2003.

[JRS05] Rahul Jain, Jaikumar Radhakrishnan, and Pranab Sen. Prior entanglement, message compression and privacy in quantum communication. In Proceedings of the 20th Annual IEEE Conference on Computational Complexity, pages 285–296, 2005.

[Kla04] Hartmut Klauck. Quantum and classical communication-space tradeoffs from rectangle bounds. In Proceedings of the 24th Annual IARCS International Conference on Foundations of Software Technology and Theoretical Computer Science, volume 3328 of Lecture Notes in Computer Science, pages 384–395. Springer, Berlin/Heidelberg, 2004.

[Kla10] Hartmut Klauck. A strong direct product theorem for disjointness. In Proceedings of the 42nd Annual ACM Symposium on Theory of Computing, pages 77–86, 2010.

[KN97] Eyal Kushilevitz and Noam Nisan. Communication Complexity. Cambridge University Press, Cambridge, UK, 1997.

[KŠdW04] Hartmut Klauck, Robert Špalek, and Ronald de Wolf. Quantum and classical strong direct product theorems and optimal time-space tradeoffs. In Proceedings of the 45th Annual IEEE Symposium on Foundations of Computer Science, pages 12–21, 2004.

[PRW97] Itzhak Parnafes, Ran Raz, and Avi Wigderson. Direct product results and the GCD problem, in old and new communication models. In Proceedings of the Twenty-Ninth Annual ACM Symposium on Theory of Computing, pages 363–372, 1997.

[Sha03] Ronen Shaltiel. Towards proving strong direct product theorems. Computational Complexity, 12(1–2):1–22, 2003.