Anchored parallel repetition for nonlocal games
Mohammad Bavarian∗ (MIT)    Thomas Vidick† (Caltech)    Henry Yuen‡ (MIT)
Abstract
Two major open problems regarding the parallel repetition of games are whether an analogue of Raz's parallel-repetition theorem holds for (a) games with more than two players, and (b) games with quantum players using entanglement. We make progress on these problems: we introduce a class of games we call anchored, and then prove exponential-decay parallel repetition theorems for anchored games in the multiplayer and entangled-player settings. We then introduce a simple transformation on games called anchoring, inspired in part by the Feige-Kilian transformation [SICOMP '00], and show that this transformation turns any (multiplayer) game into an anchored game. Together, our parallel repetition theorems and our anchoring transformation provide a simple and efficient hardness-amplification technique for general games in the multiplayer and quantum settings.
Multiplayer games are central objects of study in complexity theory and quantum information. For simplicity we focus our exposition on the two-player case. A two-player one-round game is specified by finite question sets X, Y, finite answer sets A, B, a probability distribution µ over X × Y, and a verification predicate V : X × Y × A × B → {0, 1} that determines the acceptable question and answer combinations. The game is played as follows: a referee samples questions (x, y) ∈ X × Y according to µ and sends x to the first player and y to the second. Each player replies with an answer, a ∈ A and b ∈ B respectively. The referee accepts if and only if V(x, y, a, b) = 1, in which case we say that the players win the game. Multiplayer games arise naturally in settings ranging from hardness of approximation [Hås01, Vaz13] and interactive proof systems [BOGKW88, FRS88] to the study of Bell inequalities and non-locality in quantum physics [Bel64, CHSH69].

The main quantity associated with a multiplayer game G is its value: the maximum acceptance probability achievable by the players, where the probability is taken over the questions, as chosen by the referee, and the players' answers. Different notions of value arise from different restrictions on the strategies allowed to the players. The most important for us are the classical value (denoted

∗[email protected]  †[email protected][email protected]
val(G)) and the entangled value (denoted val∗(G)). The former is obtained by restricting the players to classical strategies, where each player's answer is a function of its question only (both private and shared randomness are in principle allowed, but are easily seen not to help). The latter allows for quantum strategies, in which each player's answer is obtained as the outcome of a local measurement performed on a quantum state shared by the players. The use of a shared quantum state does not allow communication between the players, but it does allow for correlations between their questions and answers that cannot be reproduced by any classical strategy.

We study the behavior of val(G) and val∗(G) under parallel repetition. In the n-fold parallel repetition G^n of a game G the referee samples (x_1, y_1), ..., (x_n, y_n) independently from µ, and sends (x_1, ..., x_n) to the first player and (y_1, ..., y_n) to the second. The players respond with answer tuples (a_1, ..., a_n) and (b_1, ..., b_n) respectively, and they win if and only if their answers satisfy V(x_i, y_i, a_i, b_i) = 1 for every i ∈ [n].

Clearly, if the players play each instance of G in G^n independently of each other (i.e. according to a product strategy), their success probability is the n-th power of their success probability in G. The main obstacle to proving a parallel repetition theorem is that players need not employ product strategies – their answers for the i-th instance of G may depend on their questions in the j-th instance for j ≠ i. Indeed, it is known that there are games G for which non-product strategies enable the players to win G^n with probability significantly greater than val(G)^n [FV02, Raz11]. Nevertheless, the parallel repetition theorem of Raz [Raz98] establishes that if G is a two-player game such that val(G) < 1, then val(G^n) decays exponentially with n.
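To make the definitions concrete, here is a minimal sketch (our own illustration, not from the paper) that computes the classical value of a small two-player game by enumerating all deterministic strategies; the helper name `classical_value` is ours.

```python
from itertools import product

def classical_value(X, Y, A, B, mu, V):
    """Brute-force val(G): maximize over deterministic strategies f: X->A, g: Y->B."""
    best = 0.0
    for f in product(A, repeat=len(X)):
        for g in product(B, repeat=len(Y)):
            # success probability of the deterministic strategy pair (f, g)
            win = sum(p * V(x, y, f[X.index(x)], g[Y.index(y)])
                      for (x, y), p in mu.items())
            best = max(best, win)
    return best

# Example: the CHSH game (uniform questions, win iff a XOR b = x AND y).
X = Y = A = B = [0, 1]
mu = {(x, y): 0.25 for x in X for y in Y}
V = lambda x, y, a, b: int(a ^ b == x & y)
print(classical_value(X, Y, A, B, mu, V))  # -> 0.75
```

Enumeration is exponential in the question sets, so this is only feasible for toy games, but it makes the notion of a deterministic strategy tangible.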
The two decades following [Raz98] have seen a substantial amount of research on this question, connecting the problem of parallel repetition to topics such as the Unique Games conjecture, hardness of approximation, communication complexity, and more [BHH+08, Rao11, BRWY13]. The most important application of parallel repetition is its use as a generic and efficient method for performing gap amplification, or hardness amplification. Suppose a certain problem – deciding membership in a language L, or breaking a given cryptosystem – has been reduced to the task of distinguishing between val(G) = 1 and val(G) < 1 − ε for a certain game G. Parallel repetition can be employed to claim that the presumably easier task of distinguishing between val(H) = 1 and val(H) < δ for all games H is at least as hard as the original problem, by letting H = G^n for some n = poly(log δ^{−1}, ε^{−1}).

In spite of much research (and partial results, as surveyed below), two major questions about parallel repetition remain open: what effect does parallel repetition have on (a) games with an arbitrary number of players, and (b) games with quantum players using entanglement? In this paper, we make progress on both questions. Our main results can be summarized as follows; see Theorems 11 and 17 for precise statements.

Theorem 1 (Main theorem, informal). There exists a polynomial-time transformation (called anchoring) that takes the description of an arbitrary k-player game G and returns a game G⊥ with the following properties:
1. val(G⊥) = (3 val(G) + 1)/4.
2. val∗(G⊥) = (3 val∗(G) + 1)/4.
3. If val(G) = 1 − ε, then val(G⊥^n) ≤ exp(−Ω(ε · n)).
4. If the number of players is k = 2 and val∗(G) = 1 − δ, then val∗(G⊥^n) ≤ exp(−Ω(δ · n)),
where the implied constants in the Ω(·) depend only on the number of players k and the cardinality of the answer sets.

Remark 2.
We expect item 4 of the theorem to extend to the case of the entangled value of k-player anchored games for k > 2.

Our approach is inspired by that of Feige and Kilian [FK00], whose transformation maps a game G to a so-called miss-match game G_FK. The transformation is value-preserving in the sense that there is a precise affine relationship val(G_FK) = (2 + val(G))/3. Furthermore, Feige and Kilian are able to show that the value of the n-fold repetition of G_FK decays polynomially in n whenever val(G) < 1. This enables them to establish a general gap amplification result without having to prove a parallel repetition theorem for arbitrary games. This is sufficient for many applications, including to hardness of approximation, for which it is enough that the gap amplification procedure be efficient and value-preserving.

Theorem 1 adopts a similar approach to that of Feige and Kilian by providing an arguably even simpler transformation, anchoring, which preserves both the classical and entangled value of a game and for which we are able to prove an exponential decay under parallel repetition. In contrast, the transformation considered by Feige and Kilian does not in general preserve the entangled value. We proceed to describe our transformation and then discuss the role it plays in facilitating the proof of our parallel repetition theorem.
The anchoring transformation.
Our parallel repetition results apply to a class of games we call anchored. The anchoring transformation of Theorem 1 produces games of this type; however, anchored games can be more general. We give a full definition of anchored games in Section 2. First we describe the anchoring transformation.
Definition 3 (Basic anchoring). Let G be a two-player game with question distribution µ on X × Y and verification predicate V. In the α-anchored game G⊥ the referee chooses a question pair (x, y) ∈ X × Y according to µ, and independently and with probability α replaces each of x and y with an auxiliary "anchor" symbol ⊥ to obtain the pair (x′, y′) ∈ (X ∪ {⊥}) × (Y ∪ {⊥}), which is sent to the players as their respective questions. If either of x′, y′ is ⊥ the referee accepts regardless of the players' answers; otherwise, the referee checks the players' answers according to the predicate V.

For the choice α = 1 − √3/2 it holds that both val(G⊥) = (3 val(G) + 1)/4 and val∗(G⊥) = (3 val∗(G) + 1)/4. One can think of G⊥ as playing the original game G with probability 3/4, and a trivial game with probability 1/4. The term "anchored" refers to the fact that question pairs chosen according to µ are all "anchored" by the common question pair (⊥, ⊥). Though the existence of this anchor question makes the game G⊥ easier to play than the game G, it facilitates showing that the repeated game G⊥^n is hard. At a high level, the anchor questions provide a convenient way to handle the complicated correlations that may arise when the players use non-product strategies in the repeated game, as we explain next.
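The basic anchoring transformation is easy to experiment with. The following sketch (our own encoding; `anchor_game`, `value`, and the string `"bot"` standing in for ⊥ are our choices) builds G⊥ and confirms the affine relation val(G⊥) = (3·val(G) + 1)/4 by brute force for G = CHSH and α = 1 − √3/2.

```python
from itertools import product
from math import sqrt

ANCHOR = "bot"                     # stand-in for the anchor symbol
alpha = 1 - sqrt(3) / 2            # each question independently replaced w.p. alpha

def anchor_game(X, Y, mu, V):
    """Return question sets, distribution, and predicate of the anchored game."""
    mua = {}
    for (x, y), p in mu.items():
        for xp, px in [(x, 1 - alpha), (ANCHOR, alpha)]:
            for yp, py in [(y, 1 - alpha), (ANCHOR, alpha)]:
                mua[(xp, yp)] = mua.get((xp, yp), 0.0) + p * px * py
    # the referee accepts outright if either player received the anchor
    Va = lambda x, y, a, b: 1 if ANCHOR in (x, y) else V(x, y, a, b)
    return X + [ANCHOR], Y + [ANCHOR], mua, Va

def value(X, Y, A, B, mu, V):
    """Brute-force classical value over deterministic strategies."""
    return max(
        sum(p * V(x, y, f[X.index(x)], g[Y.index(y)]) for (x, y), p in mu.items())
        for f in product(A, repeat=len(X))
        for g in product(B, repeat=len(Y)))

X = Y = A = B = [0, 1]
mu = {(x, y): 0.25 for x in X for y in Y}
V = lambda x, y, a, b: int(a ^ b == x & y)        # CHSH, val(G) = 3/4
Xa, Ya, mua, Va = anchor_game(X, Y, mu, V)
print(value(Xa, Ya, A, B, mua, Va))               # ~ (3*(3/4) + 1)/4 = 0.8125
```

Since both questions survive with probability (1 − α)² = 3/4 and the referee otherwise accepts for free, the optimal strategy simply plays an optimal strategy for G on the real questions.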
Proving parallel repetition by breaking correlations.

In virtually all known (information-theoretic) proofs of parallel repetition theorems, the key step consists in arguing that the players' success probability in most instances of G individually cannot be substantially larger than the value of G itself, even when conditioned on the players winning a significant fraction of the instances. Coupled with the possibility of using non-product strategies, this conditioning introduces correlations between the players' questions which make the task of bounding their success probability in the remaining instances of G non-trivial.

In the proof of his parallel repetition theorem, Raz [Raz98] introduced a technique, further refined in subsequent work of Holenstein [Hol09], to break such correlations. The idea consists of introducing a dependency-breaking random variable Ω satisfying two properties: (1) Ω can be sampled jointly, using shared randomness, by all players, and (2) conditioned on Ω, the players are able to locally generate questions and answers from the same distribution as they would in the repeated game, conditioned on winning a (not too large) subset of instances. These two requirements are at odds with each other, and the main difficulty is to design an Ω that satisfies both simultaneously. In Raz's proof this relies heavily both on the fact that the game involves only two players and on the fact that the players' strategies can be assumed to be deterministic.

Extending this approach to more players, or to quantum strategies, remains a challenge. Rather than solving the general problem directly, we sidestep it and instead analyze the parallel repetition of anchored games, for which designing an appropriate dependency-breaking variable (or, in the case of entangled players, a dependency-breaking quantum state) is easier, though by no means trivial. Combined with the anchoring operation this yields a simple and efficient method to achieve hardness amplification for arbitrary games in the multiplayer and entangled-player settings.
We give a more detailed explanation of how this is achieved in Section 2 below.

Related work. We refer to the surveys by Feige and Raz [Fei95, Raz10] for an extensive historical account of the classical parallel repetition theorem and its connections to the hardness of approximation and multiprover interactive proof systems, and instead focus on more recent results, specifically those pertaining to quantum or multiplayer parallel repetition.

The first result on the parallel repetition of entangled-player games was obtained by Cleve et al. [CSUU08] for XOR games. This was extended to the case of unique games by Kempe, Regev and Toner [KRT08]. Kempe and Vidick [KV11] studied a Feige-Kilian-type repetition for the entangled value of two-player games, and obtained a polynomial rate of decay. The Feige-Kilian transformation does not in general preserve the entangled value, however, so their result does not provide a hardness amplification technique for arbitrary entangled games.

Dinur et al. [DSV14] extend the analytical framework of Dinur and Steurer [DS14] to obtain an exponential-decay parallel repetition theorem for the entangled value of two-player projection games. Their analysis is notable because it provides a quantum extension of the correlated sampling lemma, a key component of Holenstein's solution to the correlation-breaking problem. However, their use of the quantum correlated sampling lemma appears to rely heavily on symmetries of projection games, and it is unclear how to extend the argument to general games. Chailloux and Scarpa [CS14a] and Jain et al. [JPY14] prove exponential-decay parallel repetition for free two-player games, i.e. games with a product question distribution. Their analysis, as well as the follow-up work of Chung et al. [CWY15], is based on extending the information-theoretic approach of Raz and Holenstein.

Turning to the multiplayer setting, very little is known.
It is folklore that free games with any number of players satisfy a parallel repetition theorem, and this was explicitly proved for both the classical and the quantum case in [CWY15]. Multiplayer parallel repetition has been studied in the setting of non-signaling strategies, a superset of entangled strategies which allows the players to generate any correlations that do not imply communication. Buhrman et al. [BFS13] show that the non-signaling value of a game G with any number of players decays exponentially under parallel repetition, with a rate of decay that depends on the entire description of the game G. Arnon-Friedman et al. [AFRV14] and Lancien and Winter [LW15] achieve similar results using a different technique based on "de Finetti reductions".

The transformation from general games into anchored games that we introduce is inspired by the work of Feige and Kilian [FK00]. This alternative approach to achieving gap amplification is also used by Moshkovitz [Mos14], who shows how projection games can be "fortified", and gives a simple and elegant proof that the classical value of fortified games decays exponentially under parallel repetition (see also the follow-up work by Bhangale et al. [BSVV15]). In an upcoming paper we prove a parallel repetition theorem for the entangled value of fortified games.

In Section 2 we give an overview of the techniques underlying our main results, mainly focusing on the general ideas and leaving the specifics to each subsequent section. Section 3 introduces some preliminaries, including the definition of anchored games. In Section 4 we present the result on the parallel repetition of multiplayer classical anchored games; for the benefit of the readers more interested in the classical aspects of the paper, this section is mostly self-contained. Section 5 contains the proof of the result on the parallel repetition of two-player entangled games.
The issues arising in the quantum setting are significantly more subtle than in the classical case, and hence the argument in this section is more involved, requiring a more careful analysis as well as several new technical ideas. We conclude in Section 6 by recounting a few open problems related to parallel repetition.
Technical overview
We give a technical overview of anchored games and their parallel repetition. For concreteness we focus on the case of two-player games. For the full definition of k-player anchored games, see Section 3.3.

Definition 4 (Two-player anchored games). Let G be a two-player game with question alphabet X × Y and distribution µ. For any 0 < α ≤ 1 we say that G is α-anchored if there exist subsets X⊥ ⊆ X and Y⊥ ⊆ Y such that, denoting by µ also the respective marginals of µ on both coordinates,
1. both µ(X⊥) ≥ α and µ(Y⊥) ≥ α,
2. whenever x ∈ X⊥ or y ∈ Y⊥ it holds that µ(x, y) = µ(x) · µ(y).

Informally, a game is anchored if each player independently has a significant probability of receiving a question from its set of "anchor questions" X⊥ or Y⊥. An alternative way of thinking about the class of anchored games is to consider the case where µ is uniform over the edges of a bipartite graph with parts X and Y; then the condition is that the induced subgraph on X⊥ × Y⊥ is a complete bipartite graph that is connected to the rest of the graph and has weight at least α. In other words, a game G is anchored if it contains a free game that is connected to the entire game. It is easy to see that the games G⊥ output by the anchoring transformation given in Definition 3 are α-anchored. Free games are automatically 1-anchored (set X⊥ = X and Y⊥ = Y), but the class of anchored games is much broader; indeed, assuming the Exponential Time Hypothesis it is unlikely that there exists a similar (efficient) reduction from general games to free games [AIM14]. Additionally, since free games are anchored games, our parallel repetition theorems automatically reproduce the quantum and multiplayer parallel repetition of free games results of [JPY14, CS14a, CWY15], albeit with worse parameters.
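The two conditions of Definition 4 are mechanical to verify for a given distribution. Here is a hypothetical helper (our own, not from the paper) that checks them, illustrated on a correlated distribution and on its basic anchoring from Definition 3.

```python
def is_anchored(mu, X_bot, Y_bot, alpha, tol=1e-9):
    """Check conditions 1 and 2 of Definition 4 for mu: dict (x, y) -> prob."""
    mx, my = {}, {}
    for (x, y), p in mu.items():
        mx[x] = mx.get(x, 0.0) + p          # marginal mu(x)
        my[y] = my.get(y, 0.0) + p          # marginal mu(y)
    # Condition 1: each anchor set has probability at least alpha.
    cond1 = (sum(mx.get(x, 0.0) for x in X_bot) >= alpha - tol
             and sum(my.get(y, 0.0) for y in Y_bot) >= alpha - tol)
    # Condition 2: mu factorizes whenever either question is an anchor.
    cond2 = all(abs(mu.get((x, y), 0.0) - mx[x] * my[y]) <= tol
                for x in mx for y in my if x in X_bot or y in Y_bot)
    return cond1 and cond2

# A perfectly correlated distribution is not 1-anchored on the full sets...
mu_corr = {(0, 0): 0.5, (1, 1): 0.5}
print(is_anchored(mu_corr, {0, 1}, {0, 1}, alpha=1.0))   # -> False

# ...but its basic anchoring (Definition 3) is alpha-anchored on {"bot"}.
a = 0.25
mu_anch = {}
for (x, y), p in mu_corr.items():
    for xp, px in [(x, 1 - a), ("bot", a)]:
        for yp, py in [(y, 1 - a), ("bot", a)]:
            mu_anch[(xp, yp)] = mu_anch.get((xp, yp), 0.0) + p * px * py
print(is_anchored(mu_anch, {"bot"}, {"bot"}, alpha=a))   # -> True
```

The second example mirrors the claim in the text that the output of the anchoring transformation is always α-anchored, regardless of how correlated µ is.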
Dependency-breaking variables and states.

Essentially all known proofs of parallel repetition proceed via reduction, showing how a "too good" strategy for the repeated game G^n can be "rounded" into a strategy for G with success probability strictly greater than val(G), yielding a contradiction.

Let S^n be a strategy for G^n that has a high success probability. By an inductive argument one can identify a set of coordinates C and an index i such that Pr(Players win round i | W) > val(G) + δ, where W is the event that the players' answers satisfy the predicate V in all instances of G indexed by C. Given a pair of questions (x, y) in G, the strategy S embeds them in the i-th coordinate of an n-tuple of questions

x_[n] = (x_1, ..., x_{i−1}, x, x_{i+1}, ..., x_n),    y_[n] = (y_1, ..., y_{i−1}, y, y_{i+1}, ..., y_n)

that is distributed according to P_{X_[n] Y_[n] | X_i = x, Y_i = y, W}. The players then simulate S^n on x_[n] and y_[n] respectively to obtain answers (a_1, ..., a_n) and (b_1, ..., b_n), and return (a_i, b_i) as their answers in G. The strategy S succeeds with probability precisely Pr(Win_i | W) in G, yielding the desired contradiction.

As S^n need not be a product strategy, conditioning on W may introduce correlations that make P_{X_[n] Y_[n] | X_i = x, Y_i = y, W} impossible to sample exactly. A key insight in Raz's proof of parallel repetition is that it is still possible for the players to approximately sample from this distribution. Drawing on the work of Razborov [Raz92], Raz introduced a dependency-breaking variable Ω with the following properties:

(a) Given ω ∼ P_Ω the players can locally sample x_[n] and y_[n] according to P_{X_[n] Y_[n] | X_i = x, Y_i = y, W},
(b) The players can jointly sample from P_Ω using shared randomness.

In [Hol09] Ω is defined so that a sample ω fixes at least one of {x_{i′}, y_{i′}} for each i′ ≠ i.
It can then be shown that, conditioned on x, Ω is nearly (though not exactly) independent of y, and vice-versa. In other words,

P_{Ω | X_i = x, W} ≈ P_{Ω | X_i = x, Y_i = y, W} ≈ P_{Ω | Y_i = y, W}   (1)

where "≈" denotes closeness in statistical distance. Eq. (1) suffices to guarantee that the players can approximately sample the same ω from P_{Ω | X_i = x, Y_i = y, W} with high probability, achieving point (b) above. This sampling is accomplished through a technique called correlated sampling.

This argument relies heavily on the assumption that there are only two players, who employ a deterministic strategy. With more than two players, it is not known how to design an appropriate dependency-breaking variable Ω that satisfies requirements (a) and (b) above: in order to be jointly sampleable, Ω needs to fix as few inputs as possible; in order to allow players to locally sample their inputs conditioned on Ω, the variable needs to fix as many inputs as possible. These two requirements are in direct conflict as soon as there are more than two players.

In the quantum case the rounding argument seems to require that Alice and Bob jointly sample a dependency-breaking state |Ω_{x,y}⟩, which again depends on both their inputs. Although it is technically more complicated, as a first approximation |Ω_{x,y}⟩ can be thought of as the players' post-measurement state, conditioned on W. Designing a state that simultaneously allows Alice and Bob to (a) simulate the execution of the i-th game in G^n conditioned on W, and (b) locally generate |Ω_{x,y}⟩ without communication is the main obstacle to proving a fully general parallel repetition theorem for entangled games.

It has long been known that in the free games case (i.e.
games with product question distributions) these troubles with the dependency-breaking variable disappear, and consequently parallel repetition theorems are known for free games in both the multiplayer and quantum settings [CWY15]. For free games involving more than two players, it can be shown that

P_{Ω | X_i = x, Y_i = y, Z_i = z, ..., W} ≈ P_{Ω | W},   (2)

on average over question tuples (x, y, z, ...). In the quantum case, [JPY14, CS14a, CWY15] showed how to construct dependency-breaking states |Ω_{X_i = x, Y_i = y, W}⟩ and local unitaries U_x and V_y such that

(U_x ⊗ V_y)|Ω⟩ ≈ |Ω_{X_i = x, Y_i = y, W}⟩   (3)

for some fixed quantum state |Ω⟩. This eliminates the need for the players to use correlated sampling, as they can simply share a sample from P_{Ω | W}, or the quantum state |Ω⟩, from the outset.
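Since correlated sampling plays a central role in the discussion above, here is an illustrative sketch of the classical version used by Holenstein: two parties holding close distributions P and Q consume a common stream of (candidate, threshold) pairs and, without communicating, each output their first accepted candidate. The distributions and constants below are ours, chosen only to illustrate the idea.

```python
import random

def correlated_sample(dist, shared):
    """Output the first shared proposal (omega, u) accepted by rejection sampling."""
    for omega, u in shared:
        if u < dist.get(omega, 0.0):
            return omega
    return None                      # vanishingly unlikely for long streams

P = {"a": 0.5, "b": 0.5}
Q = {"a": 0.6, "b": 0.4}             # statistical distance ||P - Q|| = 0.1
rng = random.Random(0)
support = ["a", "b"]
trials, agree = 2000, 0
for _ in range(trials):
    # shared randomness: a common stream of (candidate, threshold) pairs
    shared = [(rng.choice(support), rng.random()) for _ in range(100)]
    agree += correlated_sample(P, shared) == correlated_sample(Q, shared)
print(agree / trials)                # close to 1; disagreement is O(||P - Q||)
```

Each party's output is distributed according to its own distribution (rejection sampling), yet because both scan the same stream they disagree only when one accepts a proposal the other rejects, an event whose probability is controlled by the statistical distance.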
Breaking correlations in repeated anchored games.

Rather than providing a complete extension of the framework of Raz and Holenstein to the multiplayer and quantum settings, we interpolate between the case of free games and the general setting by showing how the same framework of dependency-breaking variables and states can be extended to anchored games – without using correlated sampling. We introduce dependency-breaking variables Ω and states |Φ_{x,y}⟩ so that we can prove statements analogous to (2) and (3) in the anchored games setting.

The analysis for anchored games is more intricate than for free games. Proofs of the analogous statements for free games in [JPY13, CS14b, CWY15] make crucial use of the fact that all question tuples are possible. An anchored game can be far from having this property. Instead, we use the anchors as a "home base" that is connected to all questions. Intuitively, no matter what question tuple (x, y, z, ...) we are considering, it is only a few replacements away from the set of anchor questions. Thus the dependency of the variable Ω or state |Φ_{x,y}⟩ on the questions can be iteratively removed by "switching" each player's question to an anchor:

P_{Ω | X_i = x, Y_i = y, Z_i = z, W} ≈ P_{Ω | X_i = x, Y_i = y, Z_i ∈ ⊥, W} ≈ P_{Ω | X_i = x, Y_i ∈ ⊥, Z_i ∈ ⊥, W} ≈ P_{Ω | X_i ∈ ⊥, Y_i ∈ ⊥, Z_i ∈ ⊥, W},

where "X_i ∈ ⊥" is shorthand for the event that X_i ∈ X⊥.

Dealing with quantum strategies adds another layer of complexity to the argument. The local unitaries U_x and V_y involved in (3) play an important role in the arguments of [JPY14, CS14a, CWY15]. The difficulty in extending the argument for free games to the case of general games is to show that these local unitaries each depend only on the input of a single player.
In fact, with the definition of |Ω_{x,y}⟩ used in these works it appears likely that this statement does not hold, so a different approach must be found.

When the game is anchored, however, we are able to use the anchor question to show the existence of unitaries U_x and V_y that achieve (3) and each depend only on a single player's question. Achieving this requires us to introduce dependency-breaking states |Ω_{x,y}⟩ that are more complicated than those used in the free games case; in particular they include information about the classical dependency-breaking variables of Raz and Holenstein.

We prove (3) for anchored games by proving a sequence of approximate equalities: first we show that for most x there exists U_x such that (U_x ⊗ I)|Ω_{⊥,⊥}⟩ ≈ |Ω_{x,⊥}⟩, where |Ω_{⊥,⊥}⟩ denotes the dependency-breaking state in the case that both Alice and Bob receive the anchor question "⊥", and |Ω_{x,⊥}⟩ denotes the state when Alice receives x and Bob receives "⊥". Then we show that for all y such that µ(y | x) > 0 there exists V_y such that (I ⊗ V_y)|Ω_{x,⊥}⟩ ≈ |Ω_{x,y}⟩. Accomplishing this step requires ideas and techniques going beyond those used in the free games case. Interestingly, a crucial component of our proof is to argue the existence of a local unitary R_{x,y} that depends on both inputs x and y. The unitary R_{x,y} is not implemented by Alice or Bob in the simulation, but it is needed to show that V_y maps |Ω_{x,⊥}⟩ onto |Ω_{x,y}⟩.

One can view our work as pushing the limits of arguments for parallel repetition that do not require some form of correlated sampling, a procedure that seems inherently necessary to analyze the general case. Our results demonstrate that such a procedure is not needed for the purpose of achieving strong gap amplification theorems for multiplayer and quantum games.

Notation. We largely adopt the notational conventions from [Hol09] for probability distributions.
We let capital letters denote random variables and lower-case letters denote specific samples. We use subscripted sets to denote tuples, e.g., X_[n] := (X_1, ..., X_n) and x_[n] = (x_1, ..., x_n), and if C ⊂ [n] is some subset then X_C denotes the sub-tuple of X_[n] indexed by C. We use P_X to denote the probability distribution of random variable X, and P_X(x) to denote the probability that X = x for some value x. For multiple random variables, e.g., X, Y, Z, P_{XYZ}(x, y, z) denotes their joint distribution with respect to some probability space understood from context.

We use P_{Y | X = x}(y) to denote the conditional distribution P_{YX}(y, x)/P_X(x), which is defined when P_X(x) > 0. When conditioning on many variables, we usually use the shorthand P_{X | y, z} to denote the distribution P_{X | Y = y, Z = z}. For example, we write P_{V | ω_{−i}, x_i, y_i} to denote P_{V | Ω_{−i} = ω_{−i}, X_i = x_i, Y_i = y_i}. For an event W we let P_{XY | W} denote the distribution conditioned on W. We use the notations E_X f(x) and E_{P_X} f(x) to denote the expectation ∑_x P_X(x) f(x).

Let P_X be a distribution of X, and for every x in the support of P_X, let P_{Y | X = x} be a conditional distribution defined over Y. We define the distribution P_X P_{Y|X} over X × Y as (P_X P_{Y|X})(x, y) := P_X(x) · P_{Y | X = x}(y). Additionally, we write P_{XZ} P_{Y|X} to denote the distribution (P_{XZ} P_{Y|X})(x, z, y) := P_{XZ}(x, z) · P_{Y | X = x}(y).

For two random variables X and X′ over the same set X, P_X ≈_ε P_{X′} indicates that the total variation distance between P_X and P_{X′},

‖P_X − P_{X′}‖ := (1/2) ∑_{x ∈ X} |P_X(x) − P_{X′}(x)|,

is at most ε.

The following simple lemma will be used repeatedly.

Lemma 5.
Let Q_F and S_F be two probability distributions of some random variable F, and let R_{G|F} be a conditional probability distribution for some random variable G, conditioned on F. Then

‖Q_F R_{G|F} − S_F R_{G|F}‖ = ‖Q_F − S_F‖.

Proof. Note that ‖Q_F R_{G|F} − S_F R_{G|F}‖ is equal to

(1/2) ∑_{f,g} |Q(f)R(g|f) − S(f)R(g|f)| = (1/2) ∑_f |Q(f) − S(f)| · (∑_g R(g|f)) = (1/2) ∑_f |Q(f) − S(f)| = ‖Q_F − S_F‖.

Quantum information. For comprehensive references on quantum information we refer the reader to [NC10, Wil13]. For a vector |ψ⟩, we use ‖|ψ⟩‖ to denote its Euclidean length. For a matrix A, we use ‖A‖ to denote its trace norm Tr(√(AA†)). A density matrix is a positive semidefinite matrix with trace 1. The fidelity between two density matrices ρ and σ is defined as F(ρ, σ) = ‖√ρ √σ‖. The Fuchs-van de Graaf inequalities relate fidelity and trace norm as

1 − F(ρ, σ) ≤ (1/2)‖ρ − σ‖ ≤ √(1 − F(ρ, σ)²).   (4)

For Hermitian matrices A, B we write A ⪯ B to indicate that B − A is positive semidefinite. We use I to denote the identity matrix. For an operator X and a density matrix ρ, we write X[ρ] for XρX†. A positive operator-valued measure (POVM) with outcome set A is a set of positive semidefinite matrices {E_a} labeled by a ∈ A that sum to the identity.

We use the convention that, when |ψ⟩ is a pure state, ψ refers to the rank-1 density matrix |ψ⟩⟨ψ|. We use subscripts to denote system labels; so ρ_{AB} denotes the density matrix on the systems A and B. A classical-quantum state ρ_{XE} is classical on X and quantum on E if it can be written as ρ_{XE} = ∑_x p(x)|x⟩⟨x|_X ⊗ ρ_{E|X=x} for some probability measure p(·). The state ρ_{E|X=x} is by definition the E part of the state ρ_{XE}, conditioned on the classical register X = x. We write ρ_{XE|X=x} to denote the state |x⟩⟨x|_X ⊗ ρ_{E|X=x}.
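Returning to Lemma 5: the identity is easy to confirm numerically. The following sketch (our own toy distributions) composes two distributions with a common channel R(g | f) and checks that the total variation distance is unchanged.

```python
Q = {"f1": 0.7, "f2": 0.3}
S = {"f1": 0.4, "f2": 0.6}
R = {"f1": {"g1": 0.2, "g2": 0.8},   # a conditional channel R(g | f)
     "f2": {"g1": 0.9, "g2": 0.1}}

def tv(p, q):
    """Total variation distance (1/2) * sum |p - q|."""
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in set(p) | set(q))

# Joint distributions Q_F R_{G|F} and S_F R_{G|F}
QR = {(f, g): Q[f] * R[f][g] for f in Q for g in R[f]}
SR = {(f, g): S[f] * R[f][g] for f in S for g in R[f]}
print(tv(Q, S), tv(QR, SR))          # the two distances coincide (here, 0.3)
```

As in the proof, the factor R(g|f) is common to both terms, so it factors out of each inner sum and contributes total weight 1.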
We often write expressions such as ρ_{E|x} as shorthand for ρ_{E|X=x} when it is clear from context which registers are being conditioned on. This will be useful when there are many classical variables to be conditioned on.

Entropic quantities.
For two positive semidefinite operators ρ, σ, the relative entropy S(ρ‖σ) is defined to be Tr(ρ(log ρ − log σ)). The relative min-entropy S_∞(ρ‖σ) is defined as min{λ : ρ ⪯ 2^λ σ}. Let ρ_{AB} be a bipartite state. The mutual information I(A : B)_ρ is defined as S(ρ_{AB}‖ρ_A ⊗ ρ_B). For a classical-quantum state ρ_{XAB} that is classical on X and quantum on AB, we write I(A : B | x)_ρ to indicate I(A : B)_{ρ_x}.

The following technical lemmas will be used in Section 5.

Proposition 6 (Pinsker's inequality). For all density matrices ρ, σ, ‖ρ − σ‖² ≤ 2 S(ρ‖σ).

Lemma 7 ([JPY14], Fact II.8). Let ρ = ∑_z P_Z(z)|z⟩⟨z| ⊗ ρ_z and ρ′ = ∑_z P_{Z′}(z)|z⟩⟨z| ⊗ ρ′_z. Then

S(ρ′‖ρ) = S(P_{Z′}‖P_Z) + E_{z∼P_{Z′}}[S(ρ′_z‖ρ_z)].

In particular, S(ρ′‖ρ) ≥ E_{z∼P_{Z′}}[S(ρ′_z‖ρ_z)].

We will also use the following lemma from [CWY15]. Here we present an argument that obtains better parameters ([CWY15] proved the weaker bound ∑_{i=1}^n I(X_i : A)_ρ ≤ 2 S(ρ_{XA}‖σ_{XA})).

Lemma 8 ([CWY15], Quantum Raz's Lemma). Let ρ and σ be two CQ states with ρ_{XA} = ρ_{X_1 X_2 ... X_n A} and σ = σ_{XA} = σ_{X_1} ⊗ σ_{X_2} ⊗ ··· ⊗ σ_{X_n} ⊗ σ_A, with X = X_1 X_2 ... X_n classical in both states. Then

∑_{i=1}^n I(X_i : A)_ρ ≤ S(ρ_{XA}‖σ_{XA}).   (5)

The conditions on ρ and σ stated in the lemma are equivalent to them having the form

ρ_{XA} = ∑_x P_X(x)|x⟩⟨x| ⊗ ρ_{A|X=x},   σ_{XA} = ∑_x P′_X(x)|x⟩⟨x| ⊗ σ_A,

where x = (x_1, x_2, ..., x_n) is an n-tuple, P_X is an arbitrary distribution, and P′_X(x) = ∏_{i=1}^n P′_{X_i}(x_i) is a product distribution.

Proof of Lemma 8.
By the chain rule (Lemma 7) we have

S(ρ_{XA}‖σ_{XA}) = S(ρ_{X_1}‖σ_{X_1}) + E_{x_1←ρ_{X_1}} S(ρ_{X_2|X_1=x_1}‖σ_{X_2}) + ··· + E_{x←ρ_{X_1···X_n}} S(ρ_{A|X=x}‖σ_A),   (6)

where x_1 ← ρ_{X_1} means sampling x_1 according to the classical distribution ρ_{X_1}, and similarly for x ← ρ_{X_1···X_n}. Consider any of the first n terms in (6). We have

E_{x_{<i}←ρ_{X_1···X_{i−1}}} S(ρ_{X_i|x_{<i}}‖σ_{X_i}) ≥ E_{x_{<i}←ρ_{X_1···X_{i−1}}} S(ρ_{X_i|x_{<i}}‖ρ_{X_i}) = I(X_1 ... X_{i−1} : X_i)_ρ,

where ρ_{X_i|x_{<i}} stands for ρ_{X_i|X_{<i}=x_{<i}}. Now consider the last term in (6):

E_{x←ρ_X} S(ρ_{A|X=x}‖σ_A) ≥ E_{x←ρ_X} S(ρ_{A|X=x}‖ρ_A) = S(ρ_{XA}‖ρ_X ⊗ ρ_A) = I(X : A)_ρ = ∑_{i=1}^n I(X_i : A | X_1 X_2 ... X_{i−1})_ρ.

Summing the last two displays and using I(X_i : A X_1 ... X_{i−1}) = I(X_i : X_1 ... X_{i−1}) + I(X_i : A | X_1 ... X_{i−1}) implies

S(ρ_{XA}‖σ_{XA}) ≥ ∑_{i=1}^n I(X_i : A X_1 ... X_{i−1})_ρ ≥ ∑_{i=1}^n I(X_i : A)_ρ,

where the last inequality follows from strong subadditivity, i.e., I(X_i : X_1 ... X_{i−1} | A)_ρ ≥ 0. (Some versions of this lemma, though in a less compact form, also appear in [JPY14, CS14a].)

Preliminaries

We formally define k-player one-round games, their parallel repetition, and anchored games.

Multiplayer games. A k-player game G = (X, A, µ, V) is specified by a question set X = X¹ × X² × ··· × X^k, an answer set A = A¹ × A² × ··· × A^k, a probability measure µ on X, and a verification predicate V : X × A → {
0, 1}. Throughout this paper, we use superscripts to denote which player an input/output symbol is associated with. For example, we write x^1 to denote the input to the first player, and a^t to denote the output of the t-th player. Finally, to denote the tuples of questions/answers to all k players we write x = (x^1, ..., x^k) and a = (a^1, ..., a^k), respectively.

The classical value of a game G is denoted by val(G) and defined as

val(G) := sup_{f^1,...,f^k} E_{(x^1,...,x^k)∼µ} [ V( (x^1, ..., x^k), (f^1(x^1), ..., f^k(x^k)) ) ],

where the supremum is over all functions f^i : X^i → A^i; these correspond to deterministic strategies used by the players. It is easy to see that the classical value of a game is unchanged if we allow the strategies to take advantage of public or private randomness.

The entangled value of G is denoted by val*(G) and defined as

val*(G) := sup_{|ψ⟩, M^1,...,M^k} E_{(x^1,...,x^k)∼µ} Σ_{(a^1,...,a^k) : V((x^1,...,x^k),(a^1,...,a^k))=1} ⟨ψ| M^1(x^1, a^1) ⊗ ··· ⊗ M^k(x^k, a^k) |ψ⟩,

where the supremum is over all integers d ≥ 1, all k-partite pure states |ψ⟩ in (C^d)^{⊗k}, and all measurements M^1, ..., M^k, one for each player. Each M^t is a collection of POVMs {M^t(x^t, a^t)}_{a^t∈A^t} acting on C^d, one for each question x^t ∈ X^t.

Repeated games.
Let G = (X, A, µ, V) be a k-player game, with X = X^1 × ··· × X^k and A = A^1 × ··· × A^k. Let µ^{⊗n} denote the product probability distribution over X^{⊗n} = ⨂_{i=1}^n X_i, where each X_i is a copy of X. Similarly, let A^{⊗n} = ⨂_{i=1}^n A_i, where each A_i is a copy of A. Let V^{⊗n} : X^{⊗n} × A^{⊗n} → {0, 1} denote the verification predicate that is 1 on a question tuple (x_1, ..., x_n) ∈ X^{⊗n} and answer tuple (a_1, ..., a_n) ∈ A^{⊗n} iff V(x_i, a_i) = 1 for all i. We define the n-fold parallel repetition of G to be the k-player game G^n = (X^{⊗n}, A^{⊗n}, µ^{⊗n}, V^{⊗n}).

When working with games with more than 2 players, we use subscripts to denote which game round/coordinate a question/answer symbol is associated with. For example, by x^t_i we mean the question to the t-th player in the i-th round. While this is overloading notation slightly (because subscripts are also used to index the entries of tuples), we use this convention for the sake of readability. Whether x_i refers to the i-th entry of a tuple (x_1, ..., x_n) or x^t_i refers to the t-th player's question in the i-th coordinate should be clear from context.

Anchored games.
We give the general definition of an anchored game.
Definition 9 (Multiplayer Anchored Games). A game G = (X, A, µ, V) is called α-anchored if there exist sets X^t_⊥ ⊆ X^t for all t ∈ [k] such that:

1. µ(X^t_⊥) ≥ α for all t ∈ [k], and

2. for all x ∈ X,

   µ(x) = µ(x|_{F̄_x}) · Π_{t∈F_x} µ(x^t),   (7)

where, for a question tuple x = (x^1, x^2, ..., x^k) ∈ X, F_x ⊆ [k] denotes the set of coordinates of x that lie in the anchor, i.e. F_x = {t ∈ [k] : x^t ∈ X^t_⊥}, and F̄_x denotes its complement [k] ∖ F_x.

Here, for a set S ⊆ [k], µ(x|_S) denotes the marginal probability of the question tuple x restricted to the coordinates in S, i.e. µ(x|_S) = Σ_{x′ : x′|_S = x|_S} µ(x′). (We use the tensor product notation "⊗" to denote products across coordinates in a repeated game, and the traditional product notation "×" to denote products across players.)

The following proposition shows that a simple transformation, anchoring, turns any k-player game into an anchored one:

Proposition 10.
Let G = (X, A, µ, V) be a k-player game. Let G_⊥ be the k-player game where the referee samples (x^1, x^2, ..., x^k) according to µ, replaces each x^t with an auxiliary symbol ⊥ independently with probability α, and checks the players' answers according to V if all x^t ≠ ⊥; otherwise the referee accepts. Then G_⊥ is an α-anchored game satisfying

val(G_⊥) = 1 − (1 − α)^k · (1 − val(G)),   val*(G_⊥) = 1 − (1 − α)^k · (1 − val*(G)).   (8)

Proof.
We give the proof for the classical value; the same argument carries over to the entangled value. First, it is clear that val(G_⊥) ≥ (1 − (1 − α)^k) + (1 − α)^k · val(G): the players answer arbitrarily when they receive ⊥ and otherwise play an optimal strategy for G. For the other direction, consider an optimal strategy for G_⊥. Under this strategy, we can express the value as

val(G_⊥) = (1 − α)^k · Pr(W | ∀t, x^t ≠ ⊥) + (1 − (1 − α)^k) · Pr(W | ∃t s.t. x^t = ⊥),

where W is the event that the players win. The optimal strategy for G_⊥ yields a strategy for G that wins with probability Pr(W | ∀t, x^t ≠ ⊥), which can be at most val(G). Since Pr(W | ∃t s.t. x^t = ⊥) = 1, we obtain the desired equality.
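The identity (8) is easy to check numerically on a small example. The sketch below is a hypothetical illustration (the CHSH XOR game as base game and α = 1/4 are our choices, not taken from the text): it brute-forces the classical value of a two-player game G and of its anchored version G_⊥, and compares the latter against the closed form 1 − (1 − α)² · (1 − val(G)).

```python
from itertools import product

# Base game (our illustrative choice): the CHSH XOR game.
# Questions x, y uniform in {0,1}; players win iff a XOR b == x AND y.
def V(x, y, a, b):
    return (a ^ b) == (x & y)

def base_value():
    # Brute force over deterministic strategies f, g : {0,1} -> {0,1}.
    best = 0.0
    for f in product([0, 1], repeat=2):
        for g in product([0, 1], repeat=2):
            wins = sum(V(x, y, f[x], g[y]) for x in (0, 1) for y in (0, 1))
            best = max(best, wins / 4)
    return best

def anchored_value(alpha):
    # G_perp: each player's question is independently replaced by an anchor
    # symbol (encoded here as question 2) with probability alpha; any round
    # containing an anchor symbol is accepted automatically.
    def weight(q):
        return alpha if q == 2 else (1 - alpha) / 2
    best = 0.0
    for f in product([0, 1], repeat=3):       # strategies on {0, 1, anchor}
        for g in product([0, 1], repeat=3):
            val = sum(weight(x) * weight(y) *
                      (1 if (x == 2 or y == 2) else V(x, y, f[x], g[y]))
                      for x in (0, 1, 2) for y in (0, 1, 2))
            best = max(best, val)
    return best

alpha = 0.25
v = base_value()                               # 3/4 for CHSH
v_perp = anchored_value(alpha)
formula = 1 - (1 - alpha) ** 2 * (1 - v)       # Proposition 10 with k = 2
```

The anchored value matches the closed-form expression exactly here, since rounds containing an anchor symbol contribute a strategy-independent win, just as in the proof above.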
Perhaps the most well-known open problem about the classical parallel repetition of games iswhether an analogue of Raz’s theorem holds for games with more than two players. While thetwo-player case already presented a number of non-trivial difficulties, proving a parallel repetitiontheorem for three or more players is believed to require substantially new ideas. This is mainly because the Raz/Holenstein framework, if extended to a multiplayer parallel repetition theorem infull generality, would likely also yield new lower bound techniques for multiparty communication complexity, an areathat has long resisted progress (especially for the important multiparty direct sum/product problems).
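One reason even the two-player theorem is nontrivial is that the value of a repeated game need not be the product of the single-round values. A brute-force search makes this concrete for the CHSH XOR game (our illustrative choice; exhaustive search over deterministic strategies suffices because shared randomness cannot increase the classical value):

```python
from itertools import product

def V(x, y, a, b):
    # CHSH XOR game: questions uniform in {0,1}; win iff a XOR b == x AND y.
    return (a ^ b) == (x & y)

def single_value():
    # For a fixed Alice strategy f, Bob's best answer can be chosen
    # independently for each of his questions.
    best = 0.0
    for f in product([0, 1], repeat=2):
        total = sum(max(sum(V(x, y, f[x], b) for x in (0, 1)) for b in (0, 1))
                    for y in (0, 1))
        best = max(best, total / 4)
    return best

def two_fold_value():
    # In the two-fold repetition, strategies map question pairs to answer pairs.
    Q = list(product([0, 1], repeat=2))        # 4 question pairs per player
    A2 = list(product([0, 1], repeat=2))       # 4 answer pairs
    best = 0.0
    for f in product(A2, repeat=4):            # Alice's 256 strategies
        total = 0
        for yq in Q:
            # Bob's best response is independent across his question pairs.
            total += max(sum(V(xq[0], yq[0], f[xi][0], b[0]) and
                             V(xq[1], yq[1], f[xi][1], b[1])
                             for xi, xq in enumerate(Q))
                         for b in A2)
        best = max(best, total / 16)
    return best

v1 = single_value()        # 3/4
v2 = two_fold_value()      # strictly exceeds v1**2 = 9/16
```

Playing the two rounds independently always guarantees v1², so v2 ≥ v1² trivially; the strict gap found by the search is exactly the phenomenon a parallel repetition theorem must overcome.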
In this section we make some progress on the multiplayer parallel repetition question: we prove a parallel repetition theorem for anchored games involving any number of players.
Theorem 11.
Let G = (X, A, µ, V) be a k-player α-anchored game such that val(G) ≤ 1 − ε. Then

val(G^n) ≤ exp( − (α^{2k} · ε^3 · n) / (c · s · k^2) ),   (9)

where s = log|A| and c > 0 is a universal constant.

Combined with the anchoring operation described in Proposition 10, we obtain a gap amplification transformation that can be applied to any k-player game, yielding a decay of the value that matches, at least qualitatively, what one would expect from a general parallel repetition theorem. From a more quantitative point of view, even in the two-player setting the optimal exponent of ε in (9) remains unknown. Perhaps more importantly, it is unclear whether the exponential dependence on k in the bound, due to the term α^{2k}, is necessary; known lower bounds [Fei95, CWY15] only show the need for a polynomial dependence on k in the exponent.

For the remainder of this section we fix a k-player α-anchored game G = (X, A, µ, V), an integer n, and a deterministic strategy for the k players in the repeated game G^n that achieves success probability val(G^n). In Section 4.1 we introduce the notation, random variables and basic lemmas for the proof. The proof of Theorem 11 itself is given in Section 4.2. We refer to Section 3.3 for basic notation related to multiplayer games.

Let C ⊆ [n] be a fixed set of coordinates for the repeated game G^n, of size |C| = n − m. It will be convenient to fix C = { m + 1, m +
2, ..., n }; the symmetry of the problem will make it clear that this is without loss of generality. Let Z = A_C = (A^1_C, A^2_C, ..., A^k_C) denote the players' answers associated with the coordinates indexed by C.

For t ∈ [k] let Y^t = (X^t ∖ X^t_⊥) ∪ {⊥}, and define a random variable

Y^t = { X^t if X^t ∈ X^t ∖ X^t_⊥;  ⊥ if X^t ∈ X^t_⊥ }.   (10)

Let Y = Y^1 × Y^2 × ··· × Y^k and Y = (Y^1, Y^2, ..., Y^k). For G^n we write

Y^{⊗n} = (Y_1, Y_2, ..., Y_n) = ( (Y^1_1, ..., Y^k_1), (Y^1_2, ..., Y^k_2), ..., (Y^1_n, ..., Y^k_n) ).

Note that each k-tuple Y_i is a deterministic function of X_i. Furthermore, we will write Y^{−t}_i to denote Y_i with the t-th coordinate Y^t_i omitted.

For i ∈ [n] let D_i be a subset of [k] of size k − 1, and D̄_i its complement in [k]. Let M_i = Y^{D_i}_i denote the coordinates of Y_i associated to indices in D_i. Define the dependency-breaking random variable Ω_i as

Ω_i = { (D_i, M_i) if i ∉ C;  X_i if i ∈ C }.   (11)

The importance of Ω is captured in the following lemma.

Lemma 12 (Local Sampling). Let X, Z, Ω be as above. Then P_{X_{−i}|X_iΩ_{−i}Z} is a product distribution across the players:

P_{X_{−i}|X_iΩ_{−i}Z} = Π_{t=1}^k P_{X^t_{−i}|Ω_{−i}Z^tX^t_i}.

Proof.
Conditioned on M_j = Y^{D_j}_j, each X_j = (X^1_j, X^2_j, ..., X^k_j), j ≠ i, has a distribution that is product across the players; hence P_{X_{−i}|Ω_{−i}X_i} is product. Since for t ∈ [k], Z^t is a deterministic function of X^t, the same holds of P_{X_{−i}|Ω_{−i}ZX_i}.

Lemma 12 crucially relies on the sets D_j being of size k − 1: if two or more of the players' questions are unconstrained in a coordinate, it is no longer necessarily true that P_{X_{−i}|Ω_{−i}ZX_i} is product across all players.

Let W = W_C = ∧_{i∈C} W_i denote the event that the players' answers Z to questions in the coordinates indexed by C satisfy the predicate V. Let

δ = ( |C| log|A| + log(1/Pr(W_C)) ) / m.   (12)

The following lemma and its corollary are direct consequences of analogous lemmas used in the analysis of repeated two-player games, as stated in e.g. [Hol09, Lem. 5] and [Hol09, Cor. 6]. They do not depend on the structure of the game, and only rely on W being an event defined only on (X_C, Z).

Lemma 13.
We have:

(i) E_{i∈[m]} ‖ P_{X_iY_iΩ_i|W} − P_{X_iY_iΩ_i} ‖ ≤ √δ;

(ii) E_{i∈[m]} ‖ P_{X_iY_iZΩ_{−i}|W} − P_{X_i|Y_i} P_{Y_iZΩ_{−i}|W} ‖ ≤ √δ;

(iii) E_{i∈[m]} ‖ P_{Y_iZΩ|W} − P_{Y_i|Ω_i} P_{ZΩ|W} ‖ ≤ √δ.

Proof.
Item (i) follows directly from [Hol09, Lem. 5] by taking U_i = X_iY_iΩ_i. For (ii), apply [Hol09, Cor. 6] with U_i = X_i and T = (Y_1, Y_2, ..., Y_m, X_C) to get

E_{i∈[m]} ‖ P_{X_iZY_{[m]}X_C|W} − P_{X_i|Y_i} P_{Y_iZY_{[m]∖{i}}X_C|W} ‖ ≤ √δ,   (13)

which is stronger than (ii); (ii) follows by marginalizing Y^{D_i}_i in each term. Finally, the same corollary applied with U_i = Y_i and T = Ω shows (iii).

Corollary 14.

E_{i∈[m]} Σ_{t=1}^k ‖ P_{Y_i} P_{ZΩ_{−i}|WY_i} − P_{Y_i} P_{ZΩ_{−i}|WY^{−t}_i} ‖ ≤ 3k · √δ.

Proof. We have P_{Y_i|Ω_i} P_{ZΩ|W} = P_{Y_i|Ω_i} P_{Ω_i|W} P_{ZΩ_{−i}|WΩ_i}. Applying Lemma 5 with Q_F = P_{Ω_i|W}, S_F = P_{Ω_i}, and R_{G|F} = P_{Y_i|Ω_i} P_{ZΩ_{−i}|WΩ_i}, we see that

E_{i∈[m]} ‖ P_{Y_i|Ω_i} P_{ZΩ|W} − P_{Y_iΩ_i} P_{ZΩ_{−i}|WΩ_i} ‖ = E_{i∈[m]} ‖ P_{Ω_i|W} − P_{Ω_i} ‖ ≤ √δ,

where the last inequality follows from Lemma 13, item (i). Combining the above with item (iii) of the same lemma, we have

E_{i∈[m]} ‖ P_{Y_iZΩ|W} − P_{Y_iΩ_i} P_{ZΩ_{−i}|WΩ_i} ‖ ≤ 2√δ.   (14)

Noting that Ω_i is determined by Y_i (the D_i are completely independent of everything else), (14) implies

E_{i∈[m]} E_{t∈[k]} ‖ P_{Y_iZΩ_{−i}|W} − P_{Y_i} P_{ZΩ_{−i}|WY^{−t}_i} ‖ = E_{i∈[m]} ‖ P_{Y_iZΩ_{−i}|W} − P_{Y_i} P_{ZΩ_{−i}|WΩ_i} ‖ ≤ 2√δ.

Finally, notice that Lemmas 5 and 13 imply

E_{i∈[m]} ‖ P_{Y_iZΩ_{−i}|W} − P_{Y_i} P_{ZΩ_{−i}|WY_i} ‖ = E_{i∈[m]} ‖ P_{Y_i} − P_{Y_i|W} ‖ ≤ √δ;

the desired result follows by the triangle inequality.

This section is devoted to the proof of Theorem 11. The main ingredient of the proof is given in the next proposition.
Proposition 15.
Let C ⊆ [ n ] and X , Z, Ω − i be defined as in Section 4.1. Then E i ∈ [ m ] (cid:13)(cid:13) P X i Ω − i Z | W − P X i P Ω − i Z | W , Y i = ⊥ k (cid:13)(cid:13) ≤ ( k α − k + ) √ δ , (15) where δ is defined in (12) . Theorem 11 follows from this proposition in a relatively standard fashion; this is done at theend of this section. Let us now prove Proposition 15 assuming a certain technical statement,Lemma 16. This lemma is proved immediately after.
Proof of Proposition 15.
First observe that (cid:13)(cid:13) P X i Ω − i Z | W − P X i P Ω − i Z | W , Y i = ⊥ k (cid:13)(cid:13) = (cid:13)(cid:13) P X i Y i Ω − i Z | W − P X i Y i P Ω − i Z | W , Y i = ⊥ k (cid:13)(cid:13) as Y i is a deterministic function of X i . Applying Lemma 13, item (ii) we get E i ∈ [ m ] (cid:13)(cid:13) P X i Y i Ω − i Z | W − P X i | Y i P Y i Ω − i Z | W (cid:13)(cid:13) ≤ √ δ .The latter distribution can be written as P Y i | W P X i | Y i P Ω − i Z | WY i . Applying Lemma 5 with Q F = P Y i | W and S F = P Y i we see that (cid:13)(cid:13) P X i | Y i P Y i Ω − i Z | W − P X i Y i P Ω − i Z | WY i (cid:13)(cid:13) = (cid:13)(cid:13) P Y i | W − P Y i (cid:13)(cid:13) ,16hich is bounded by √ δ on average over i by Lemma 13, item (i). Hence E i ∈ [ m ] (cid:13)(cid:13) P X i Ω − i Z | W − P X i P Ω − i Z | W , Y i = ⊥ k (cid:13)(cid:13) ≤ √ δ + E i ∈ [ m ] (cid:13)(cid:13) P X i Y i P Ω − i Z | WY i − P X i Y i P Ω − i Z | W , Y i = ⊥ k (cid:13)(cid:13) = √ δ + E i ∈ [ m ] (cid:13)(cid:13) P Y i P Ω − i Z | WY i − P Y i P Ω − i Z | W , Y i = ⊥ k (cid:13)(cid:13) ,where the equality follows from Lemma 5 applied with R G | F = P X i | Y i . Applying the triangleinequality, E i ∈ [ m ] (cid:13)(cid:13) P X i Y i P Ω − i Z | WY i − P X i Y i P Ω − i Z | W , Y i = ⊥ k (cid:13)(cid:13) = E i ∈ [ m ] (cid:13)(cid:13) P Y i P Ω − i Z | WY i − P Y i P Ω − i Z | W , Y i = ⊥ k (cid:13)(cid:13) ≤ E i ∈ [ m ] k ∑ t = (cid:13)(cid:13) P Y i P Ω − i Z | WY < ti = ⊥ t − , Y ≥ ti − P Y i P Ω − i Z | WY ≤ ti = ⊥ t , Y > ti (cid:13)(cid:13) (16) ≤ k α − k · √ δ , (17)where (16) is proved by Lemma 16 below and (17) follows from Corollary 14. Lemma 16.
Let S ⊊ [k] and t ∉ S. Then

‖ P_{Y_i} P_{Ω_{−i}Z|W, Y^S_i=⊥^S, Y^{S̄}_i} − P_{Y_i} P_{Ω_{−i}Z|W, Y^{S∪{t}}_i=⊥^{S∪{t}}, Y^{S̄∖{t}}_i} ‖ ≤ 2α^{−(|S|+1)} · ‖ P_{Y_i} P_{ZΩ_{−i}|WY_i} − P_{Y_i} P_{ZΩ_{−i}|WY^{−t}_i} ‖.   (18)

Proof.
In the proof for ease of notation we omit the subscript i and write Y instead of Y i . Afterrelabeling we may assume S = {
1, 2, . . . , r − } and t = r where 1 ≤ r < k . Expanding theexpectation over Y explicitly we can rewrite the left-hand side of (18) as (cid:13)(cid:13)(cid:13) P Y · (cid:0) P Ω − i Z | W , y ≥ r , y < r = ⊥ r − − P Ω − i Z | W , y > r , y ≤ r = ⊥ r (cid:1)(cid:13)(cid:13)(cid:13) . (19)Next we use a symmetrization argument to bound the above expression. Consider a random vari-able ˆ Y that is a copy of Y , and is coupled to Y in the following way: ˆ Y − r = Y − r , and conditioned onany setting of Y r = y r , ˆ Y r and Y r are independent. Using the fact that Pr [ ˆ Y r = ⊥ ] ≥ α conditionedon any value of Y − r = U − r = y − r , we get that the expression in (19) is at most α − (cid:13)(cid:13)(cid:13) P Y − r P Y r | Y − r P ˆ Y r | Y − r · (cid:0) P Ω − i Z | W , y > r , y r , y < r = ⊥ r − − P Ω − i Z | W , y > r , ˆ y r , y < r = ⊥ r − (cid:1)(cid:13)(cid:13)(cid:13) .Using the triangle inequality and symmetry of Y and ˆ Y , this expression can be bounded by2 α − · (cid:13)(cid:13)(cid:13) P Y · (cid:16) P Ω − i Z | W , y > r , y r , y < r = ⊥ r − − P Ω − i Z | W , y > r , y < r = ⊥ r − (cid:17)(cid:13)(cid:13)(cid:13) ,which after noting that the quantity k P Ω − i Z | W , y > r , y r , y < r = ⊥ r − − P Ω − i Z | W , y > r , y ≤ r = ⊥ r k is independent ofthe variable Y < r , can be rewritten as2 α − · (cid:13)(cid:13)(cid:13) P Y ≥ r · (cid:0) P Ω − i Z | W , y > r , y r , y < r = ⊥ r − − P Ω − i Z | W , y > r , y < r = ⊥ r − (cid:1)(cid:13)(cid:13)(cid:13) .17sing that the event that Y < r = ⊥ r − occurs with probability at least α r − and P Y ≥ r | Y < r = ⊥ r − = P Y ≥ r by the anchor property, we can finally bound (19) by2 α − r · k P Y P Z Ω − i | WY − P Y P Z Ω − i | WY − r k ,which is the desired result.We prove Theorem 11 by iteratively applying Proposition 15 as follows. Proof of Theorem 11.
Let C = ∅ and δ =
0. While ( k α − k + ) √ δ s ≤ ε /2, by Proposition 15, wecan choose i ∈ C s with (cid:13)(cid:13)(cid:13) P X i Ω − i Z | W − P X i P Ω − i Z | W , Y i = ⊥ k (cid:13)(cid:13)(cid:13) ≤ ε /2. Set C s + = C s ∪ { i } and δ s + =( | C s + | log |A| + log 1/ Pr ( W C s + )) / m . First we show that throughout this process the boundPr [ W C s ] ≤ ( − ε /2 ) | C s | (20)holds. Since by the choice of i one has (cid:13)(cid:13) P X i Ω − i Z | W C − P X i P Ω − i Z | W C , Y i = ⊥ k (cid:13)(cid:13) ≤ ε /2, to establish (20) itwill suffice to show thatPr ( W i | W C ) ≤ val ( G ) + (cid:13)(cid:13) P X i Ω − i Z | W C − P X i P Ω − i Z | W C , Y i = ⊥ k (cid:13)(cid:13) . (21)The proof of (21) is based on a rounding argument. Consider the following strategy for G : First,the players use shared randomness to obtain a common sample from P Ω − i Z | W C , Y i = ⊥ k . After receiv-ing her question x ∗ t , player t ∈ [ k ] samples questions for the remaining coordinates according to P X t − i | Ω t − i Z t X ti , forming the tuple X t = ( X t − i , x ∗ t ) . She determines her answer a ti ∈ A ti according to thestrategy for G n . 
The distribution over questions X implemented by players following this strategyis P X i P Ω − i Z | W C Y i = ⊥ k k ∏ t = P X t − i | Ω t − i Z t X ti ,which by Lemma 12 is equal to P X i P Ω − i Z | W C Y i = ⊥ k P X − i | Ω − i Z .On the other hand from the definition of Ω − i we have P X Ω − i Z | W C = P X i Ω − i Z | W C P X − i | Ω − i Z W C = P X i Ω − i Z | W C P X − i | Ω − i Z .Applying Lemma 5 with R = P X − i | Ω − i Z it follows that (cid:13)(cid:13)(cid:13) P X Z Ω − i | W C − P X i P Ω − i Z | W C Y i = ⊥ k P X − i | Ω − i Z (cid:13)(cid:13)(cid:13) = (cid:13)(cid:13)(cid:13) P X i Ω − i Z | W C − P X i P Ω − i Z | W C , Y i = ⊥ k (cid:13)(cid:13)(cid:13) .Now by definition the winning probability of the extracted strategy for G is at most val ( G ) ,and (21) follows.Let now C be the final set of coordinates when the above-described process stops; at this pointwe must have δ = | C | log |A| + log ( W C ) n − | C | > α k ε · k .18f | C | ≥ n /2 we are already done by (20). Suppose | C | log |A| + log ( [ WC ] ) n > α k ǫ · k . If log ( ( W C ) ) ≥ n · α k ǫ · k we are again done; hence, we can assume | C | log |A| n > α k ε · k .Now plugging the lower bound on the size of C in (20) we getval ( G n ) ≤ Pr ( W C ) ≤ exp (cid:18) − α k · ε · n · k · s (cid:19) where s = log |A| , which completes the proof. Some remarks on multiplayer parallel repetition for general games.
We conclude this section with some remarks about Theorem 11 and the more general problem of multiplayer parallel repetition. Our analysis of repeated anchored games follows the information-theoretic approach of Raz and Holenstein. It is a natural question, predating this work by many years, whether one can extend this framework to prove parallel repetition for general multiplayer games.

At first sight the Raz/Holenstein framework may seem quite suitable for multiplayer parallel repetition. For instance, it is folklore that classically the approach extends to the case of free games with any number of players, and furthermore, many of the other technical components of the proof readily carry over in much generality. Despite these positive signs, attempts to extend Raz's original argument to the general multiplayer setting have so far failed for different and rather interesting technical reasons. Embarrassingly, to our knowledge, it is not even known how to extend the information-theoretic approach to prove that the value of a repeated k-player game decays at all!

We give an example of one of the difficulties in proving a multiplayer parallel repetition theorem for general games. Consider the problem of defining an appropriate dependency-breaking variable Ω in the multiplayer setting. There are two competing demands on Ω: on one hand, the breaking of dependencies between the players' respective questions seems to require it to contain as many of the players' questions as possible for each coordinate i ∉ C. In fact, if the correlations between the players' inputs are generic, it seems hard to avoid the need to keep at least k − 1 of the questions in Ω_i, as we do in Lemma 12. On the other hand, for correlated sampling to be possible, it seems necessary for Ω to specify very few of the questions per coordinate — in the generic case, at most one. As soon as k ≥ 3 these two demands (i.e. the dependency-breaking and the correlated sampling components) are in direct conflict.
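For concreteness, here is a minimal sketch of the classical correlated-sampling primitive referred to above, in the style of Holenstein's protocol (the distributions P, Q and the three-element universe are invented for illustration): both parties read the same shared stream of (element, height) proposals, and each accepts the first proposal lying under its own distribution. Each party's output is then distributed according to its own distribution, and the two outputs disagree with probability at most 2δ/(1 + δ), where δ is the total-variation distance between the distributions.

```python
import random

def correlated_sample(dist, shared):
    # Accept the first shared proposal (x, h) whose height h lies under
    # this party's probability for x; no communication is needed.
    for x, h in shared:
        if h < dist.get(x, 0.0):
            return x
    return None  # vanishingly unlikely for streams of this length

universe = ["a", "b", "c"]                           # invented example
P = {"a": 0.5, "b": 0.3, "c": 0.2}
Q = {"a": 0.4, "b": 0.4, "c": 0.2}
tv = 0.5 * sum(abs(P[x] - Q[x]) for x in universe)   # total variation = 0.1

rng = random.Random(0)
trials, disagree = 2000, 0
for _ in range(trials):
    # Shared randomness: a common stream of (uniform element, uniform height).
    shared = [(rng.choice(universe), rng.random()) for _ in range(200)]
    if correlated_sample(P, shared) != correlated_sample(Q, shared):
        disagree += 1
rate = disagree / trials   # concentrates near 2*tv/(1+tv), about 0.18 here
```

In Raz/Holenstein-style proofs, each player must sample the dependency-breaking variable from a slightly different conditional distribution, which is exactly the situation this primitive handles — provided those conditional distributions remain close, which is what conditioning on many of the questions can destroy.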
More precisely, when the base game is anchored, we show how to define a useful dependency-breaking variable (or quantum state, in the entangled-players setting) that can be sampled without correlated sampling. With correlated sampling out of the way, the aforementioned conflict between the two requirements disappears. (One can modify a Ramsey-theoretic argument of Verbitsky to show that if val(G) < 1, then val(G^n) must go to 0 eventually as n grows [Ver96], but the bound on the rate of decay is extremely poor.)

This section is devoted to the analysis of the entangled value of repeated anchored games. While we expect the arguments to carry over to the multiplayer case without any additional difficulty, we leave this for future work and focus on two-player entangled games. We use X and Y (resp. A and B) to denote the two players' respective question (resp. answer) sets in G, and, following a well-established convention, we name the first player Alice and the second Bob. Our main result is the following.

Theorem 17.
Let G = (X × Y, A × B, µ, V) be a two-player α-anchored game satisfying val*(G) = 1 − ε. Then

val*(G^n) ≤ exp( −Ω( α · ε · n / s ) ),

where s = log|A||B|.

Thus, as in the classical multiplayer case, the anchoring operation described in Proposition 10 provides a general gap amplification transformation for the entangled value of any two-player game.

For the remainder of the section we fix an α-anchored two-player game G = (X × Y, A × B, µ, V) with entangled value val*(G) = 1 − ε and anchor sets X_⊥ ⊆ X, Y_⊥ ⊆ Y for Alice and Bob, respectively. Without loss of generality we will assume that α ≤ 1/2 (a game that is α-anchored for α > 1/2 is in particular 1/2-anchored). We also fix an optimal strategy for G^n, consisting of a shared entangled state |ψ⟩_{E_AE_B} and POVMs {A(x_{[n]}, a_{[n]})} and {B(y_{[n]}, b_{[n]})} for Alice and Bob, respectively. Without loss of generality we assume that |ψ⟩ is invariant under permutation of the two registers, i.e. there exist basis vectors {|v_j⟩}_j such that |ψ⟩ = Σ_j √λ_j |v_j⟩|v_j⟩.

We introduce the random variables, entangled states and operators that play an important role in the proof of Theorem 17. The section is divided into three parts: first we define the dependency-breaking variable Ω, with a slightly modified definition from the one introduced for the classical multiplayer setting in Section 4. Then we state useful lemmas about conditioned distributions. Finally we describe the states and operators used in the proof.

Dependency-breaking variables.
Let C ⊆ [n] be a fixed set of coordinates for the repeated game G^n. We fix C = { m + 1, m +
2, . . . , n } , where m = n − | C | , as this will easily be seen to hold withoutloss of generality. Let ( X [ n ] , Y [ n ] ) be distributed according to µ [ n ] and ( A [ n ] , B [ n ] ) be defined from X [ n ] and Y [ n ] as follows: P A [ n ] B [ n ] | X [ n ] = x [ n ] , Y [ n ] = y [ n ] ( a [ n ] , b [ n ] ) = h ψ | A ( x [ n ] , a [ n ] ) ⊗ B ( y [ n ] , b [ n ] ) | ψ i .20et ( X C , Y C ) and Z = ( A C , B C ) be random variables that denote the players’ questions and an-swers respectively associated with the coordinates indexed by C . For i ∈ [ n ] let W i denote theevent that the players win round i while playing G n . Let W C = V i ∈ C W i . We will often write W inplace of W C when C is clear from context.We introduce a random variable Ω that is closely related to dependency-breaking variablesused in classical proofs of the parallel repetition theorem (such as in Section 4), as well as directsum theorems in communication complexity. In those works, for all i ∈ [ n ] , Ω i fixes at least oneof X i or Y i (and sometimes both). Thus, conditioned on Ω , X [ n ] and Y [ n ] are independent of eachother. Our Ω variable is slightly more complicated. It will be important in our analysis to allow,e.g., X i to be free to take on a value from Alice’s anchor X ⊥ , regardless of the value of Ω i . Inother words, from Bob’s point of view, even when conditioned on Ω i , Alice’s input may randomlychoose to take on some anchor value.Let D , . . . , D m be independent and uniformly distributed over { A , B } . Let M , . . . , M m beindependent random variables defined in the following way. For each i ∈ [ m ] and ( x , y ) ∈ X × Y , M i = ⊥ with prob. 1 − p A if D i = A ⊥ / x with prob. p A · µ ( x | x / ∈ X ⊥ ) if D i = A ⊥ / x with prob. p A · ( − p A ) · µ ( x | x ∈ X ⊥ ) if D i = A ⊥ with prob. 1 − p B if D i = B ⊥ / y with prob. p B · µ ( y | y / ∈ Y ⊥ ) if D i = B ⊥ / y with prob. 
p_B · (1 − p_B) · µ(y | y ∈ Y_⊥) if D_i = B,

where p_A := (1 − µ(X_⊥))^{1/2} and p_B := (1 − µ(Y_⊥))^{1/2}. For i ∈ [m] let Ω_i := (D_i, M_i), and Ω := (Ω_1, ..., Ω_m, X_C, Y_C). We write Ω_{−i} to denote the random variable (Ω_1, ..., Ω_{i−1}, Ω_{i+1}, ..., Ω_m, X_C, Y_C).

We re-introduce random variables (X_{[n]}, Y_{[n]}) that depend on Ω. We will show that the distribution of (X_{[n]}, Y_{[n]}) is µ^{[n]}, as required. For i ∈ C, X_i and Y_i are fixed by Ω. If i ∉ C and D_i = A, then

X_i = { x with prob. µ(x | x ∈ X_⊥), if M_i = ⊥;  x′ with prob. (1 − p_A) · µ(x′ | x′ ∈ X_⊥) and x with prob. p_A, if M_i = ⊥/x },

Y_i = { y with prob. µ(y), if M_i = ⊥;  y with prob. µ(y | x), if M_i = ⊥/x }.

The distribution of M_iX_iY_i conditioned on D_i = B is defined similarly, with the roles of X_i and Y_i interchanged. Clearly, P_{X_{[n]}Y_{[n]}} = P_{X_1Y_1} ··· P_{X_nY_n}; thus the following claim shows that the marginal distribution P_{X_{[n]}Y_{[n]}} is equal to µ^{[n]}.

Claim 18.
For all i ∈ [ n ] , for all ( x , y ) ∈ X × Y , P X i Y i ( x , y ) = µ ( x , y ) .Proof. If i ∈ C then by definition X i Y i are distributed according to µ . Suppose that i / ∈ C . We showthat P X i Y i | D i = A is identical to µ . An analogous calculation shows that P X i Y i | D i = B = µ , proving theclaim. 21et ( x , y ) ∈ X × Y , and consider two cases. In the first case, x ∈ X − X ⊥ . This implies that M i = ⊥ / x ; otherwise, x would have been drawn from X ⊥ : P X i Y i | D i = A ( x , y ) = P M i | D i = A ( ⊥ / x ) · P X i | M i = ⊥ / x , D i = A ( x ) · P Y i | X i = x , M i = ⊥ / x , D i = A ( y )= p A · µ ( x | x / ∈ X ⊥ ) · p A · µ ( y | x )= p A · µ ( x , y ) / µ ( x / ∈ X ⊥ ) .Using p A = − µ ( X ⊥ ) , P X i Y i | D i = A ( x , y ) = µ ( x , y ) . In the second case, x ∈ X ⊥ . Then, P X i Y i | D i = A ( x , y )= P M i | D i = A ( ⊥ ) · P X i | M i = ⊥ , D i = A ( x ) · P Y i | X i = x , M i = ⊥ , D i = A ( y )+ ∑ x ′ / ∈X ⊥ P M i | D i = A ( ⊥ / x ′ ) · P X i | M i = ⊥ / x ′ , D i = A ( x ) · P Y i | X i = x , M i = ⊥ / x ′ , D i = A ( y )+ ∑ x ′ ∈X ⊥ P M i | D i = A ( ⊥ / x ′ ) · P X i | M i = ⊥ / x ′ , D i = A ( x ) · P Y i | X i = x , M i = ⊥ / x ′ , D i = A ( y )= ( − p A ) · µ ( x | x ∈ X ⊥ ) · µ ( y ) (22) + p A · ∑ x ′ / ∈X ⊥ µ ( x ′ | x ′ / ∈ X ⊥ ) | {z } = µ ( x ′ ) / µ ( X \X ⊥ ) · ( − p A ) · µ ( x | x ∈ X ⊥ ) | {z } = µ ( x ) / µ ( X ⊥ ) · µ ( y | x ′ ) (23) + p A · ( − p A ) · ∑ x ′ ∈X ⊥ µ ( x ′ | x ′ ∈ X ⊥ ) · ( − p A ) · µ ( x | x ∈ X ⊥ ) · µ ( y | x ′ ) (24) + p A · ( − p A ) · µ ( x | x ∈ X ⊥ ) · p A · µ ( y | x ) (25) = µ ( x | x ∈ X ⊥ ) · µ ( y ) h − p A | {z } (22) + p A · ( − p A ) | {z } (23) + p A · ( − p A ) · ( − p A ) | {z } (24) + p A · ( − p A ) · p A | {z } (25) i = µ ( x ) µ ( y ) − p A µ ( X ⊥ ) .But 1 − p A = µ ( X ⊥ ) , and µ ( x ) µ ( y ) = µ ( x , y ) when x ∈ X ⊥ . So we can conclude in this case toothat P X i Y i | D i = A ( x , y ) = µ ( x , y ) . Conditioned distributions.
Define δ := ( log(1/Pr(W)) + |C| log|A||B| ) / m. For notational convenience we often use the shorthand X_i ∈ ⊥ and Y_i ∈ ⊥ to stand for X_i ∈ X_⊥ and Y_i ∈ Y_⊥, respectively. The following lemma essentially follows from lemmas in [Hol09] and the arguments used in the proof of Lemma 16 in Section 4.

Lemma 19.
The following statements hold, on average over i chosen uniformly in [m]:

1. E_i ‖ P_{D_iM_iX_iY_i|W} − P_{D_iM_iX_iY_i} ‖ ≤ O(√δ);

2. E_i ‖ P_{ΩZX_iY_i|W} − P_{ΩZ|W} P_{X_iY_i|Ω} ‖ ≤ O(√δ);

3. E_i ‖ P_{X_iY_i} P_{Ω_{−i}Z|X_i∈⊥, Y_i∈⊥, W} − P_{X_iY_i} P_{Ω_{−i}Z|X_iY_iW} ‖ ≤ O(√δ/α);

4. E_i ‖ P_{X_iY_i} P_{Ω_{−i}Z|X_iY_iW} − P_{X_iY_iΩ_{−i}Z|W} ‖ ≤ O(√δ/α).

Quantum states and operators.
Recall that we have fixed an optimal strategy for Alice andBob in the game G n . This specifies a shared entangled state | ψ i , and measurement operators { A ( x [ n ] , a [ n ] ) } for Alice and { B ( y [ n ] , b [ n ] ) } for Bob. Operators.
Define, for all a C , b C , x [ n ] , y [ n ] : A ( x [ n ] , a C ) : = ∑ a [ n ] | a C A ( x [ n ] , a [ n ] ) B ( y [ n ] , b C ) : = ∑ b [ n ] | b C B ( y [ n ] , b [ n ] ) where a [ n ] | a C (resp. b [ n ] | b C ) indicates summing over all tuples a [ n ] consistent with the suffix a C (resp. b [ n ] consistent with suffix b C ). For all i , ω − i , x i , and y i define: A ω − i ( x i , a C ) : = E X [ n ] | ω − i , x i A ( x [ n ] , a C ) B ω − i ( y i , b C ) : = E Y [ n ] | ω − i , y i B ( y [ n ] , b C ) where recall that E X [ n ] | ω − i , x i is shorthand for E X [ n ] | Ω − i = ω − i , X i = x i . Intuitively, these operators repre-sent the “average” measurement that Alice and Bob apply, conditioned on Ω − i = ω − i , and X i = x i and Y i = y i . Next, define A ω − i ( ⊥ , a C ) : = E X [ n ] | Ω − i = ω − i , X i ∈⊥ A ( x [ n ] , a C ) B ω − i ( ⊥ , b C ) : = E Y [ n ] | Ω − i = ω − i , Y i ∈⊥ B ( y [ n ] , b C ) .These operators represent the “average” measurement performed by Alice and Bob, conditionedon Ω − i = ω − i and M i = ⊥ . Finally, for all x i ∈ X and y i ∈ Y , define A ω − i ( ⊥ / x i , a C ) : = ( − p A ) A ω − i ( ⊥ , a C ) + p A A ω − i ( x i , a C ) B ω − i ( ⊥ / y i , b C ) : = ( − p B ) B ω − i ( ⊥ , b C ) + p B B ω − i ( y i , b C ) where the weights p A and p B were defined in Section 5.1. Intuitively, these operators representthe “average” measurements conditioned on Ω − i = ω − i and M i = ⊥ / x i for some x i (or M i = ⊥ / y i for some y i ).For notational convenience we often suppress the dependence on ( i , ω − i , z = ( a C , b C )) whenit is clear from context. Thus, when we refer to an operator such as A ⊥ / x , we really mean theoperator A ω − i ( ⊥ / x i , a C ) . States.
For all x ∈ X and y ∈ Y , define the following (unnormalized) states: | Φ x , y i : = p A x ⊗ q B y | ψ i | Φ x , ⊥ i : = p A x ⊗ √ B ⊥ | ψ i| Φ ⊥ / x , ⊥ i : = p A ⊥ / x ⊗ √ B ⊥ | ψ i | Φ ⊥ / x , y i : = p A ⊥ / x ⊗ q B y | ψ i (26) | Φ ⊥ , ⊥ i : = p A ⊥ ⊗ √ B ⊥ | ψ i together with the normalization factors γ x , y : = (cid:13)(cid:13) | Φ x , y i (cid:13)(cid:13) γ x , ⊥ : = k| Φ x , ⊥ ik ⊥ / x , ⊥ : = k| Φ ⊥ / x , ⊥ ik γ ⊥ / x , y : = (cid:13)(cid:13) | Φ ⊥ / x , y i (cid:13)(cid:13) γ ⊥ , ⊥ : = k| Φ ⊥ , ⊥ ik Note that these normalization factors are the square-roots of the probabilities that a certain pair ofanswers z = ( a C , b C ) occurred, given the specified inputs and the dependency-breaking variables.For example, revealing the depencies on ω − i and z , we have γ ω − i , zx i , y i = q P Z | ω − i , x i , y i ( z ) .We denote the normalized states by | e Φ x , y i = | Φ x , y i / γ x , y , | e Φ x , ⊥ i = | Φ x , ⊥ i / γ x , ⊥ , | e Φ ⊥ / x , ⊥ i = | Φ ⊥ , ⊥ i / γ ⊥ / x , ⊥ , | e Φ ⊥ / x , ⊥ / y i = | Φ ⊥ / x , y i / γ ⊥ / x , y , and | e Φ ⊥ , ⊥ i = | Φ ⊥ , ⊥ i / γ ⊥ , ⊥ . The proof of Theorem 17 follows the same outline as that of Theorem 11 given in Section 4, but thedetails differ significantly. The proof is based on the following main lemma.
Lemma 20.
For every i, ω − i , z = ( a C , b C ) , x i and y i there exists unitaries U ω − i , z , x i acting on E A andV ω − i , z , y i acting on E B such that E i E X i Y i E Ω − i Z | W (cid:13)(cid:13)(cid:13) ( U ω − i , z , x i ⊗ V ω − i , z , y i ) (cid:12)(cid:12)(cid:12)(cid:12) e Φ ω − i , z ⊥ , ⊥ (cid:29) − (cid:12)(cid:12)(cid:12)(cid:12) e Φ ω − i , zx i , y i (cid:29) (cid:13)(cid:13)(cid:13) = O ( δ / α ) .The proof of Lemma 20 is given in Section 5.2.1. Assuming the lemma, we prove Theorem 17. Proof of Theorem 17.
Using the same reasoning that allowed us to derive Theorem 11 from (21), the proof of Theorem 17 will follow once it is established that for any $C \subseteq [n]$,
$$\mathop{\mathbb{E}}_{i \in [n] \setminus C} \Pr(W_i \,|\, W_C) \le \mathrm{val}^*(G) + O(\delta/\alpha). \qquad (27)$$
The proof of (27) is based on the rounding argument, but it now involves entangled strategies. Fix $C \subseteq [n]$ and $i \in [n] \setminus C$, and let $W = W_C$. For every $\omega_{-i}$, $z = (a_C, b_C)$, $x_i \in \mathcal{X}$, $y_i \in \mathcal{Y}$, $a_i \in \mathcal{A}$ and $b_i \in \mathcal{B}$, define
$$\hat{A}^{\omega_{-i}}(x_i, a_i) := \big(A^{\omega_{-i}}(x_i, a_C)\big)^{-1/2} \Big(\sum_{a_{[n]} | a_i, a_C} A^{\omega_{-i}}(x_i, a_{[n]})\Big) \big(A^{\omega_{-i}}(x_i, a_C)\big)^{-1/2},$$
$$\hat{B}^{\omega_{-i}}(y_i, b_i) := \big(B^{\omega_{-i}}(y_i, b_C)\big)^{-1/2} \Big(\sum_{b_{[n]} | b_i, b_C} B^{\omega_{-i}}(y_i, b_{[n]})\Big) \big(B^{\omega_{-i}}(y_i, b_C)\big)^{-1/2},$$
where $a_{[n]} | a_i, a_C$ (resp. $b_{[n]} | b_i, b_C$) denotes summing over tuples $a_{[n]}$ that are consistent with $a_C$ and $a_i$ (resp. $b_{[n]}$ that are consistent with $b_C$ and $b_i$), and $(A^{\omega_{-i}}(x_i, a_C))^{-1/2}$ and $(B^{\omega_{-i}}(y_i, b_C))^{-1/2}$ denote pseudoinverses. Note that the $\{\hat{A}^{\omega_{-i}}(x_i, a_i)\}_{a_i}$ and $\{\hat{B}^{\omega_{-i}}(y_i, b_i)\}_{b_i}$ are positive semidefinite operators that sum to identity, so they form valid POVMs.

Consider the following strategy to play game $G$. Alice and Bob share classical public randomness and, for every setting of $i$, $\omega_{-i}$, $z$, the bipartite state $|\widetilde{\Phi}^{\omega_{-i},z}_{\perp,\perp}\rangle$. Upon receiving questions $x \in \mathcal{X}$ and $y \in \mathcal{Y}$ respectively they perform the following:

1. Alice and Bob use public randomness to sample $(i, \omega_{-i}, z)$ conditioned on $W$.
2. Alice applies $U_{\omega_{-i},z,x}$ to her register of $|\widetilde{\Phi}^{\omega_{-i},z}_{\perp,\perp}\rangle$.
3. Bob applies $V_{\omega_{-i},z,y}$ to his register of $|\widetilde{\Phi}^{\omega_{-i},z}_{\perp,\perp}\rangle$.
4. Alice measures with POVM operators $\{\hat{A}^{\omega_{-i}}(x, a_i)\}$ and returns the outcome as her answer.
5. Bob measures with POVM operators $\{\hat{B}^{\omega_{-i}}(y, b_i)\}$ and returns the outcome as his answer.

Suppose that, upon receiving questions $(x, y)$ and after jointly picking a uniformly random $i \in [m]$, Alice and Bob could jointly sample $\omega_{-i}, z$ from $P_{\Omega_{-i} Z | W}$ and locally prepare the state $|\widetilde{\Phi}^{\omega_{-i},z}_{x,y}\rangle$. For a fixed $(x, y)$, $\omega_{-i}$ and $z$, the distribution of outcomes $(a_i, b_i)$ after measuring $\{\hat{A}^{\omega_{-i}}(x, a_i) \otimes \hat{B}^{\omega_{-i}}(y, b_i)\}_{a_i, b_i}$ would be identical to $P_{A_i B_i | \omega_{-i}, z, x, y}$ (where we mean conditioning on $X_i = x$ and $Y_i = y$). Averaging over $(x, y) \sim \mu$, $i$, $\omega_{-i}$, and $z$, the above-defined strategy would win game $G$ with probability at least $\mathbb{E}_i \Pr(W_i | W)$.

Next we show that Alice and Bob are able to approximately prepare $|\widetilde{\Phi}^{\omega_{-i},z}_{x,y}\rangle$ with high probability, and thus produce answers that are approximately distributed according to $P_{A_i B_i | \omega_{-i}, z, x, y}$, allowing them to win game $G$ with probability greater than $1 - \varepsilon$, a contradiction.

From Lemma 20, using the fact that for two pure states $|\psi\rangle$ and $|\varphi\rangle$ one has $\|\psi - \varphi\|_1 \le 2\, \| |\psi\rangle - |\varphi\rangle \|$, as well as Jensen's inequality,
$$\mathop{\mathbb{E}}_i \mathop{\mathbb{E}}_{XY} \mathop{\mathbb{E}}_{\Omega_{-i} Z | W} \Big\| (U_{\omega_{-i},z,x} \otimes V_{\omega_{-i},z,y})\big[\widetilde{\Phi}^{\omega_{-i},z}_{\perp,\perp}\big] - \widetilde{\Phi}^{\omega_{-i},z}_{x,y} \Big\|_1 = O\big(\delta/\alpha\big), \qquad (28)$$
where the second expectation is over $(x, y)$ drawn from $\mu$, and $(U \otimes V)[\widetilde{\Phi}]$ denotes $(U \otimes V)\, \widetilde{\Phi}\, (U \otimes V)^\dagger$. Conditioned on a given pair of questions $(x, y)$ and the players sampling $(i, \omega_{-i}, z)$ in Step 1, the state that the players have prepared after Step 3 of the protocol is precisely $(U_{\omega_{-i},z,x} \otimes V_{\omega_{-i},z,y})[\widetilde{\Phi}^{\omega_{-i},z}_{\perp,\perp}]$. Let $\mathcal{E}^{\omega_{-i},z}_{x,y}$ denote the quantum-classical channel on density matrices that performs the measurement $\{\hat{A}^{\omega_{-i}}(x, a_i) \otimes \hat{B}^{\omega_{-i}}(y, b_i)\}_{a_i, b_i}$ and outputs a classical register with the measurement outcome $(a_i, b_i)$.
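The conditioned measurements $\hat{A}^{\omega_{-i}}(x, a_i)$ applied by this channel can be illustrated concretely. The following Python sketch (a toy example with hypothetical dimensions and outcome sets, not taken from the paper) builds a POVM indexed by pairs $(a_C, a_i)$, conditions on a fixed partial outcome $a_C$ via the pseudoinverse square root, and checks that the conditioned operators again form a POVM on the support of the marginal.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4               # local dimension (hypothetical)
n_aC, n_ai = 2, 3   # number of partial outcomes a_C and residual outcomes a_i

def rand_psd(d):
    """Random positive semidefinite matrix."""
    X = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return X @ X.conj().T

def psd_pinv_sqrt(M, tol=1e-10):
    """Pseudoinverse square root M^{-1/2}, acting as zero on the kernel of M."""
    w, V = np.linalg.eigh(M)
    inv = np.where(w > tol, 1.0 / np.sqrt(np.clip(w, tol, None)), 0.0)
    return (V * inv) @ V.conj().T

# A POVM {E(a_C, a_i)}: normalize random PSD operators so they sum to I.
raw = {(aC, ai): rand_psd(d) for aC in range(n_aC) for ai in range(n_ai)}
N = psd_pinv_sqrt(sum(raw.values()))
povm = {k: N @ v @ N for k, v in raw.items()}
assert np.allclose(sum(povm.values()), np.eye(d))

# Condition on the partial outcome a_C = 0, mirroring the definition of \hat{A}:
# hat_E(a_i) = M^{-1/2} E(0, a_i) M^{-1/2}, with M the marginal operator for a_C = 0.
M = sum(povm[(0, ai)] for ai in range(n_ai))
Minv = psd_pinv_sqrt(M)
hat = [Minv @ povm[(0, ai)] @ Minv for ai in range(n_ai)]

# The conditioned operators are PSD and sum to the projector onto supp(M)
# (the full identity here, since M is generically full rank).
assert all(np.linalg.eigvalsh(h).min() > -1e-9 for h in hat)
assert np.allclose(sum(hat), np.eye(d))
```

The same algebra shows in general that the conditioned operators sum to the projector onto the support of the marginal, which is why they define a valid measurement for the players' strategy.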
Applying $\mathcal{E}^{\omega_{-i},z}_{x,y}$ to the expression inside the trace norm in (28), and using that the trace norm is non-increasing under quantum operations,
$$\mathop{\mathbb{E}}_i \mathop{\mathbb{E}}_{XY} \mathop{\mathbb{E}}_{\Omega_{-i} Z | W} \Big\| \widetilde{P}_{A_i B_i | \omega_{-i}, z, x, y} - P_{A_i B_i | \omega_{-i}, z, x, y} \Big\|_1 \le O(\delta/\alpha),$$
where $\widetilde{P}_{A_i B_i | \omega_{-i}, z, x, y}(a_i, b_i)$ denotes the probability of outcome $(a_i, b_i)$ in the above strategy, conditioned on questions $(x, y)$ and the players sampling $(i, \omega_{-i}, z)$ in Step 1. Thus
$$P_I \cdot P_{\Omega_{-i} Z | W} \cdot P_{XY} \cdot \widetilde{P}_{A_i B_i | \Omega_{-i} Z X_i Y_i} \;\approx_{O(\delta/\alpha)}\; P_I \cdot P_{\Omega_{-i} Z | W} \cdot P_{XY} \cdot P_{A_i B_i | \Omega_{-i} Z X_i Y_i} \;\approx_{O(\delta/\alpha)}\; P_I \cdot P_{\Omega_{-i} Z X_i Y_i | W} \cdot P_{A_i B_i | \Omega_{-i} Z X_i Y_i},$$
where the $X_i Y_i$ in the conditionals is shorthand for $X_i = x$, $Y_i = y$. The last approximate equality follows from Lemma 19. Marginalizing over $\Omega_{-i} Z$, we get
$$P_I \cdot P_{XY} \cdot \widetilde{P}_{A_i B_i | X_i Y_i} \;\approx_{O(\delta/\alpha)}\; P_I \cdot P_{X_i Y_i A_i B_i | W}. \qquad (29)$$
Under the distribution $P_{X_i Y_i A_i B_i | W}$, the probability that $V(x_i, y_i, a_i, b_i) = 1$ is exactly $\Pr(W_i | W)$. On the other hand, (29) implies that using the protocol described above the players win $G$ with probability at least $\mathbb{E}_i \Pr(W_i | W) - O(\delta/\alpha)$. This establishes (27) and concludes the proof of the theorem.

This section is devoted to the proof of Lemma 20. The proof is based on two lemmas. The first defines the required unitaries.
Lemma 21.
For all $i$, $\omega_{-i}$, $z$, $x \in \mathcal{X}$ and $y \in \mathcal{Y}$ there exist unitaries $U^{\omega_{-i} z}_x$ acting on $E_A$ and $V^{\omega_{-i} z}_y$, $V^{\omega_{-i}, z}_{x,y}$ acting on $E_B$ such that
$$\mathop{\mathbb{E}}_i \mathop{\mathbb{E}}_{\Omega_{-i} Z | W} \mathop{\mathbb{E}}_{X} \Big\| \big|\widetilde{\Phi}_{x,\perp}\big\rangle - U^{\omega_{-i} z}_x \big|\widetilde{\Phi}_{\perp,\perp}\big\rangle \Big\| = O(\delta/\alpha), \qquad (30)$$
$$\mathop{\mathbb{E}}_i \mathop{\mathbb{E}}_{\Omega_{-i} Z | W} \mathop{\mathbb{E}}_{Y} \Big\| V^{\omega_{-i} z}_y \big|\widetilde{\Phi}_{\perp,\perp}\big\rangle - \big|\widetilde{\Phi}_{\perp,y}\big\rangle \Big\| = O(\delta/\alpha), \qquad (31)$$
$$\mathop{\mathbb{E}}_i \mathop{\mathbb{E}}_{\Omega_{-i} Z | W} \mathop{\mathbb{E}}_{XY} \Big\| V^{\omega_{-i}, z}_{x,y} \big|\widetilde{\Phi}_{\perp/x,\,y}\big\rangle - \big|\widetilde{\Phi}_{\perp/x,\,\perp}\big\rangle \Big\| = O(\delta/\alpha), \qquad (32)$$
where $\mathbb{E}_X$, $\mathbb{E}_Y$, and $\mathbb{E}_{XY}$ denote averaging over $\mu(x)$, $\mu(y)$, and $\mu(x, y)$ respectively.

The proof of Lemma 21 is given in Section 5.2.2. The second lemma relates the normalization factors $\gamma_{x,y}$, $\gamma_{x,\perp}$, $\gamma_{\perp,y}$, $\gamma_{\perp/x,y}$, $\gamma_{\perp/x,\perp}$, $\gamma_{\perp,\perp}$ that appear in the definition of the corresponding normalized states $|\widetilde{\Phi}\rangle$.

Lemma 22.
There exists a set $S$ of triples $(i, \omega_{-i}, z)$ that has probability $1 - \delta$ under $P_I \cdot P_{\Omega_{-i} Z | W}$ such that
$$\frac{1}{m} \sum_{\substack{x, y \\ (i, \omega_{-i}, z) \in S}} P_{XY}(x, y) \cdot P_{\Omega_{-i} Z | W}(\omega_{-i}, z) \, \Big| \gamma^{\omega_{-i}, z}_{x,y} - \gamma^{\omega_{-i}, z}_{\perp,\perp} \Big| = O\big(\delta/\alpha\big)\, \gamma, \qquad (33)$$
where
$$\gamma = \frac{1}{m} \sum_i \sum_{x, y, \omega_{-i}, z} P_{XY}(x, y) \cdot P_{\Omega_{-i} Z | W}(\omega_{-i}, z) \cdot \gamma^{\omega_{-i}, z}_{x,y}.$$
Furthermore, similar bounds as (33) hold where $\gamma^{\omega_{-i},z}_{x,y}$ is replaced by any of $\gamma^{\omega_{-i},z}_{x,\perp}$, $\gamma^{\omega_{-i},z}_{\perp,y}$, $\gamma^{\omega_{-i},z}_{\perp/x,y}$, $\gamma^{\omega_{-i},z}_{\perp/x,\perp}$.

The proof of Lemma 22 uses the following claim.
Claim 23.
$$\frac{1}{m} \sum_i \sum_{x, y} \sum_{(\omega_{-i}, z) \in W} P_{XY}(x, y) \, \Big| P_{\Omega_{-i} Z | X_i = x, Y_i = y}(\omega_{-i}, z) - P_{\Omega_{-i} Z | X_i \in \perp, Y_i \in \perp}(\omega_{-i}, z) \Big| = O\Big(\frac{\sqrt{\delta}}{\alpha}\Big) \Pr(W).$$

Proof. First note that
$$\frac{1}{m} \sum_i \sum_{x, y} P_{XY}(x, y) \, \big| \Pr(W | X_i = x, Y_i = y) - \Pr(W) \big| = \frac{\Pr(W)}{m} \sum_i \big\| P_{X_i Y_i | W} - P_{X_i Y_i} \big\|_1 = O(\sqrt{\delta}) \Pr(W), \qquad (34)$$
where the second equality follows from Lemma 19. Using the triangle inequality and $\Pr(X_i \in \perp, Y_i \in \perp) \ge \alpha$ we also get
$$\frac{1}{m} \sum_i \sum_{x, y} P_{XY}(x, y) \, \big| \Pr(W | X_i = x, Y_i = y) - \Pr(W | X_i \in \perp, Y_i \in \perp) \big| = O(\sqrt{\delta}/\alpha) \Pr(W). \qquad (35)$$
Using (34) and letting $P_{\Omega_{-i} Z | x, y, W}$ denote $P_{\Omega_{-i} Z | X_i = x, Y_i = y, W}$,
$$\frac{1}{m} \sum_i \sum_{x, y} P_{XY}(x, y) \sum_{(\omega_{-i}, z) \in W} \Big| \Pr(W) \cdot P_{\Omega_{-i} Z | x, y, W}(\omega_{-i}, z) - P_{\Omega_{-i} Z | x, y}(\omega_{-i}, z) \Big|$$
$$\approx_{O(\sqrt{\delta}) \Pr(W)} \; \frac{1}{m} \sum_i \sum_{x, y} P_{XY}(x, y) \sum_{(\omega_{-i}, z) \in W} \Big| P_{\Omega_{-i} Z, W | x, y}(\omega_{-i}, z) - P_{\Omega_{-i} Z | x, y}(\omega_{-i}, z) \Big|,$$
and the analogous quantity with the conditioning on $X_i = x, Y_i = y$ replaced by $X_i \in \perp, Y_i \in \perp$ satisfies
$$\frac{1}{m} \sum_i \sum_{(\omega_{-i}, z) \in W} \Big| \Pr(W) \cdot P_{\Omega_{-i} Z | X_i \in \perp, Y_i \in \perp, W}(\omega_{-i}, z) - P_{\Omega_{-i} Z | X_i \in \perp, Y_i \in \perp}(\omega_{-i}, z) \Big| = O(\sqrt{\delta}/\alpha) \Pr(W).$$
Combining the previous two bounds with the bound
$$\frac{1}{m} \sum_i \Pr(W) \, \big\| P_{X_i Y_i}\, P_{\Omega_{-i} Z | X_i \in \perp, Y_i \in \perp, W} - P_{X_i Y_i}\, P_{\Omega_{-i} Z | X_i Y_i, W} \big\|_1 \le O(\sqrt{\delta}/\alpha) \Pr(W)$$
from Lemma 19 and the triangle inequality proves the claim.

Proof of Lemma 22.
For any $i$, $x$, $y$ and $(\omega_{-i}, z) \in W$ write
$$P_{XY}(x, y) \cdot P_{\Omega_{-i} Z | W}(\omega_{-i}, z) \cdot \gamma^{\omega_{-i},z}_{x,y} = \frac{1}{\Pr(W)}\, P_{XY}(x, y) \cdot P_{\Omega_{-i} Z}(\omega_{-i}, z) \cdot \gamma^{\omega_{-i},z}_{x,y} = \frac{1}{\Pr(W)}\, P_{XY}(x, y) \cdot P_{\Omega_{-i} | x, y}(\omega_{-i}) \cdot P_{Z | \omega_{-i}}(z) \cdot \gamma^{\omega_{-i},z}_{x,y},$$
where for the last equality we used $P_{\Omega_{-i} | X_i Y_i} = P_{\Omega_{-i}}$. From the definition, $\gamma^{\omega_{-i},z}_{x,y} = P_{Z | \omega_{-i}, x, y}(z)$, so the above equals
$$\frac{1}{\Pr(W)}\, P_{XY}(x, y) \cdot P_{Z | \omega_{-i}}(z) \cdot P_{\Omega_{-i} Z | x, y}(\omega_{-i}, z), \qquad (36)$$
where $P_{\Omega_{-i} Z | x, y}(\omega_{-i}, z)$ denotes $P_{\Omega_{-i} Z | X_i = x, Y_i = y}(\omega_{-i}, z)$. Similarly, we have
$$P_{XY}(x, y) \cdot P_{\Omega_{-i} Z | W}(\omega_{-i}, z) \cdot \gamma^{\omega_{-i},z}_{\perp,\perp} = \frac{1}{\Pr(W)}\, P_{XY}(x, y) \cdot P_{Z | \omega_{-i}}(z) \cdot P_{\Omega_{-i} Z | \perp, \perp}(\omega_{-i}, z). \qquad (37)$$
By definition
$$\gamma = \frac{1}{m} \sum_{i, \omega_{-i}, z} P_{\Omega_{-i} Z | W}(\omega_{-i}, z) \cdot P_{Z | \omega_{-i}}(z),$$
thus for any $\eta > 0$, by Markov's inequality at least a fraction $1 - \eta$ of the triples $(i, \omega_{-i}, z)$ distributed according to $P_I \cdot P_{\Omega_{-i} Z | W}$ are such that $P_{Z | \omega_{-i}}(z) \le \gamma / \eta$. Let $S$ be the set of such triples, and consider summing the difference
$$P_{XY}(x, y) \cdot P_{Z | \omega_{-i}}(z) \cdot \Big| P_{\Omega_{-i} Z | x, y}(\omega_{-i}, z) - P_{\Omega_{-i} Z | \perp, \perp}(\omega_{-i}, z) \Big|$$
over all $(x, y)$ and $(i, \omega_{-i}, z) \in S$. By (36) and (37), and applying Claim 23, we obtain
$$\frac{1}{m} \sum_{\substack{x, y \\ (i, \omega_{-i}, z) \in S}} P_{XY}(x, y) \cdot P_{\Omega_{-i} Z | W}(\omega_{-i}, z) \cdot \Big| \gamma^{\omega_{-i},z}_{x,y} - \gamma^{\omega_{-i},z}_{\perp,\perp} \Big| \le \frac{\gamma}{\eta}\, O\Big(\frac{\sqrt{\delta}}{\alpha}\Big).$$
Choosing $\eta = \delta$ proves the lemma.

Proof of Lemma 20.
For every $(i, \omega_{-i}, z)$, $x \in \mathcal{X}$ and $y \in \mathcal{Y}$ let unitaries $U^{\omega_{-i} z}_x$, $V^{\omega_{-i} z}_y$ and $V^{\omega_{-i}, z}_{x,y}$ be as in Lemma 21. For notational convenience we suppress the dependence on $(i, \omega_{-i}, z)$ when it is clear from context. Call triples $(i, \omega_{-i}, z)$ that satisfy the conclusion of Lemma 22 for $\gamma^{\omega_{-i},z}_{x,y}$, $\gamma^{\omega_{-i},z}_{x,\perp}$, $\gamma^{\omega_{-i},z}_{\perp,y}$, $\gamma^{\omega_{-i},z}_{\perp/x,y}$, and $\gamma^{\omega_{-i},z}_{\perp/x,\perp}$ simultaneously good triples, and let $S$ denote the set of good triples. Using $|\sqrt{a} - \sqrt{b}| \le \sqrt{|a - b|}$ for $a, b \ge 0$,
$$\frac{1}{m} \sum_{\substack{x, y \\ (i, \omega_{-i}, z) \in S}} P_{XY}(x, y) \cdot P_{\Omega_{-i} Z | W}(\omega_{-i}, z) \cdot \Big\| |\widetilde{\Phi}_{x,y}\rangle - \gamma^{-1/2} |\Phi_{x,y}\rangle \Big\| \qquad (38)$$
$$= \frac{1}{m} \sum_{\substack{x, y \\ (i, \omega_{-i}, z) \in S}} P_{XY}(x, y) \cdot P_{\Omega_{-i} Z | W}(\omega_{-i}, z) \cdot \Bigg| \frac{\sqrt{\gamma} - \sqrt{\gamma^{\omega_{-i},z}_{x,y}}}{\sqrt{\gamma}} \Bigg| \le \frac{1}{m} \sum_{\substack{x, y \\ (i, \omega_{-i}, z) \in S}} P_{XY}(x, y) \cdot P_{\Omega_{-i} Z | W}(\omega_{-i}, z) \cdot \Bigg| \frac{\gamma - \gamma^{\omega_{-i},z}_{x,y}}{\gamma} \Bigg|^{1/2} = O(\delta/\alpha), \qquad (39)$$
and similar bounds hold for $|\widetilde{\Phi}_{x,\perp}\rangle$, $|\widetilde{\Phi}_{\perp,y}\rangle$ and $|\widetilde{\Phi}_{\perp,\perp}\rangle$. Thus to prove the lemma it will be sufficient to establish the following bound on the distance between unnormalized states:
$$\frac{1}{m} \sum_{\substack{x, y \\ (i, \omega_{-i}, z) \in S}} P_{XY}(x, y) \cdot P_{\Omega_{-i} Z | W}(\omega_{-i}, z) \cdot \big\| (U_x \otimes V_y) |\Phi_{\perp,\perp}\rangle - |\Phi_{x,y}\rangle \big\| = O\Big(\frac{\delta}{\alpha}\Big) \sqrt{\gamma}. \qquad (40)$$
To see that (40) is sufficient, observe that by using the lower bound on the probability of $S$ we can bound the desired expression in the statement of Lemma 20 by
$$\frac{1}{m} \sum_{x, y, i, \omega_{-i}, z} P_{XY}(x, y) \cdot P_{\Omega_{-i} Z | W}(\omega_{-i}, z) \cdot \big\| (U_x \otimes V_y) |\widetilde{\Phi}_{\perp,\perp}\rangle - |\widetilde{\Phi}_{x,y}\rangle \big\| \le \frac{1}{m} \sum_{\substack{x, y \\ (i, \omega_{-i}, z) \in S}} P_{XY}(x, y) \cdot P_{\Omega_{-i} Z | W}(\omega_{-i}, z) \cdot \big\| (U_x \otimes V_y) |\widetilde{\Phi}_{\perp,\perp}\rangle - |\widetilde{\Phi}_{x,y}\rangle \big\| + O(\delta).$$
Then, by the triangle inequality and the bound of (39),
$$\frac{1}{m} \sum_{\substack{x, y \\ (i, \omega_{-i}, z) \in S}} P_{XY}(x, y) \cdot P_{\Omega_{-i} Z | W}(\omega_{-i}, z) \cdot \big\| (U_x \otimes V_y) |\widetilde{\Phi}_{\perp,\perp}\rangle - |\widetilde{\Phi}_{x,y}\rangle \big\|$$
$$\le \frac{1}{m} \sum_{\substack{x, y \\ (i, \omega_{-i}, z) \in S}} P_{XY}(x, y) \cdot P_{\Omega_{-i} Z | W}(\omega_{-i}, z) \Big( \big\| |\widetilde{\Phi}_{\perp,\perp}\rangle - \gamma^{-1/2} |\Phi_{\perp,\perp}\rangle \big\| + \big\| |\widetilde{\Phi}_{x,y}\rangle - \gamma^{-1/2} |\Phi_{x,y}\rangle \big\| + \gamma^{-1/2} \big\| (U_x \otimes V_y) |\Phi_{\perp,\perp}\rangle - |\Phi_{x,y}\rangle \big\| \Big)$$
$$\le \gamma^{-1/2}\, \frac{1}{m} \sum_{\substack{x, y \\ (i, \omega_{-i}, z) \in S}} P_{XY}(x, y) \cdot P_{\Omega_{-i} Z | W}(\omega_{-i}, z) \cdot \big\| (U_x \otimes V_y) |\Phi_{\perp,\perp}\rangle - |\Phi_{x,y}\rangle \big\| + O(\delta/\alpha).$$
Thus we have established the sufficiency of proving (40).
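The elementary inequality $|\sqrt{a} - \sqrt{b}| \le \sqrt{|a - b|}$ for $a, b \ge 0$, used to pass from (38) to (39), is easy to confirm numerically. A quick toy check (hypothetical data, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
a = rng.random(10_000) * 10
b = rng.random(10_000) * 10

# |sqrt(a) - sqrt(b)| <= sqrt(|a - b|) holds for all a, b >= 0:
# squaring, (sqrt(a) - sqrt(b))^2 = a + b - 2*sqrt(a*b) <= |a - b|.
assert np.all(np.abs(np.sqrt(a) - np.sqrt(b)) <= np.sqrt(np.abs(a - b)) + 1e-12)
```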
Using (39), the bounds stated in Lemma 21 also imply the following bounds on the unnormalized vectors:
$$\frac{1}{m} \sum_{\substack{x \\ (i, \omega_{-i}, z) \in S}} P_{X}(x) \cdot P_{\Omega_{-i} Z | W}(\omega_{-i}, z) \cdot \big\| |\Phi_{x,\perp}\rangle - U^{\omega_{-i} z}_x |\Phi_{\perp,\perp}\rangle \big\| = O\Big(\frac{\delta}{\alpha}\Big) \sqrt{\gamma}, \qquad (41)$$
$$\frac{1}{m} \sum_{\substack{y \\ (i, \omega_{-i}, z) \in S}} P_{Y}(y) \cdot P_{\Omega_{-i} Z | W}(\omega_{-i}, z) \cdot \big\| V^{\omega_{-i} z}_y |\Phi_{\perp,\perp}\rangle - |\Phi_{\perp,y}\rangle \big\| = O\Big(\frac{\delta}{\alpha}\Big) \sqrt{\gamma}, \qquad (42)$$
$$\frac{1}{m} \sum_{\substack{x, y \\ (i, \omega_{-i}, z) \in S}} P_{XY}(x, y) \cdot P_{\Omega_{-i} Z | W}(\omega_{-i}, z) \cdot \big\| V^{\omega_{-i}, z}_{x,y} |\Phi_{\perp/x,y}\rangle - |\Phi_{\perp/x,\perp}\rangle \big\| = O\Big(\frac{\delta}{\alpha}\Big) \sqrt{\gamma}. \qquad (43)$$
We show how to combine these bounds to establish (40). We have
$$\big\| U_x |\Phi_{\perp,y}\rangle - |\Phi_{x,y}\rangle \big\| = \Big\| U_x A_{\perp}^{1/2} A_{\perp/x}^{-1/2} |\Phi_{\perp/x,y}\rangle - A_x^{1/2} A_{\perp/x}^{-1/2} |\Phi_{\perp/x,y}\rangle \Big\| = \Big\| U_x A_{\perp}^{1/2} A_{\perp/x}^{-1/2} \otimes V_{xy} |\Phi_{\perp/x,y}\rangle - A_x^{1/2} A_{\perp/x}^{-1/2} \otimes V_{xy} |\Phi_{\perp/x,y}\rangle \Big\|.$$
Using the triangle inequality again, this is at most
$$\Big\| \big(U_x A_{\perp}^{1/2} A_{\perp/x}^{-1/2}\big) \otimes V_{xy} |\Phi_{\perp/x,y}\rangle - \big(U_x A_{\perp}^{1/2} A_{\perp/x}^{-1/2}\big) |\Phi_{\perp/x,\perp}\rangle \Big\| \qquad (44)$$
$$+ \Big\| \big(U_x A_{\perp}^{1/2} A_{\perp/x}^{-1/2}\big) |\Phi_{\perp/x,\perp}\rangle - A_x^{1/2} A_{\perp/x}^{-1/2} |\Phi_{\perp/x,\perp}\rangle \Big\| \qquad (45)$$
$$+ \Big\| A_x^{1/2} A_{\perp/x}^{-1/2} |\Phi_{\perp/x,\perp}\rangle - A_x^{1/2} A_{\perp/x}^{-1/2} \otimes V_{xy} |\Phi_{\perp/x,y}\rangle \Big\|. \qquad (46)$$
Using $\| U_x A_{\perp}^{1/2} A_{\perp/x}^{-1/2} \| \le (1 - p_A)^{-1/2}$, the term (44) can be bounded as
$$\Big\| \big(U_x A_{\perp}^{1/2} A_{\perp/x}^{-1/2}\big) \otimes V_{xy} |\Phi_{\perp/x,y}\rangle - \big(U_x A_{\perp}^{1/2} A_{\perp/x}^{-1/2}\big) |\Phi_{\perp/x,\perp}\rangle \Big\| \le (1 - p_A)^{-1/2}\, \big\| V_{xy} |\Phi_{\perp/x,y}\rangle - |\Phi_{\perp/x,\perp}\rangle \big\|.$$
The term (45) can be re-written as
$$\Big\| \big(U_x A_{\perp}^{1/2} A_{\perp/x}^{-1/2}\big) |\Phi_{\perp/x,\perp}\rangle - A_x^{1/2} A_{\perp/x}^{-1/2} |\Phi_{\perp/x,\perp}\rangle \Big\| = \big\| U_x |\Phi_{\perp,\perp}\rangle - |\Phi_{x,\perp}\rangle \big\|.$$
Finally, using $\| A_x^{1/2} A_{\perp/x}^{-1/2} \| \le p_A^{-1/2}$, the term (46) can be bounded as
$$\Big\| A_x^{1/2} A_{\perp/x}^{-1/2} |\Phi_{\perp/x,\perp}\rangle - A_x^{1/2} A_{\perp/x}^{-1/2} \otimes V_{xy} |\Phi_{\perp/x,y}\rangle \Big\| \le p_A^{-1/2}\, \big\| |\Phi_{\perp/x,\perp}\rangle - V_{xy} |\Phi_{\perp/x,y}\rangle \big\|.$$
Putting the three bounds together, from (44), (45) and (46) we get
$$\big\| U_x |\Phi_{\perp,y}\rangle - |\Phi_{x,y}\rangle \big\| \le 2 \max\big\{ p_A^{-1/2}, (1 - p_A)^{-1/2} \big\}\, \big\| V_{xy} |\Phi_{\perp/x,y}\rangle - |\Phi_{\perp/x,\perp}\rangle \big\| + \big\| U_x |\Phi_{\perp,\perp}\rangle - |\Phi_{x,\perp}\rangle \big\|. \qquad (47)$$
Recall that we defined $p_A = 1 - \mu(\mathcal{X}_\perp)$, and we are assuming that $\mu(\mathcal{X}_\perp)$ is such that $\min\{p_A, 1 - p_A\} \ge \alpha$. Using that $U_x$ is unitary,
$$\big\| (U_x \otimes V_y) |\Phi_{\perp,\perp}\rangle - |\Phi_{x,y}\rangle \big\| \le \big\| V_y |\Phi_{\perp,\perp}\rangle - |\Phi_{\perp,y}\rangle \big\| + \big\| U_x |\Phi_{\perp,y}\rangle - |\Phi_{x,y}\rangle \big\|$$
$$\le 2 \alpha^{-1/2}\, \big\| V_{xy} |\Phi_{\perp/x,y}\rangle - |\Phi_{\perp/x,\perp}\rangle \big\| + \big\| U_x |\Phi_{\perp,\perp}\rangle - |\Phi_{x,\perp}\rangle \big\| + \big\| V_y |\Phi_{\perp,\perp}\rangle - |\Phi_{\perp,y}\rangle \big\|,$$
where the last inequality is (47). Eqs. (41), (42) and (43) bound the three terms above by $O(\delta/\alpha)\sqrt{\gamma}$ on average over $(x, y)$, weighted by $P_{XY}$, and $(i, \omega_{-i}, z) \in S$, weighted by $P_I \cdot P_{\Omega_{-i} Z | W}$. This proves (40), and the lemma follows.

In this section we give the proof of Lemma 21, which states the existence of the local unitary transformations needed for the proof of Theorem 17.
Proof of Lemma 21.
Recall that we let the entangled state $|\psi\rangle$ and POVMs $\{A(x_{[n]}, a_{[n]})\}$ and $\{B(y_{[n]}, b_{[n]})\}$ constitute an optimal strategy for $G^n$. Define the operators
$$A^{\omega}(a_C) := \mathop{\mathbb{E}}_{X_{[n]} | \Omega = \omega} A(x_{[n]}, a_C), \qquad B^{\omega}(b_C) := \mathop{\mathbb{E}}_{Y_{[n]} | \Omega = \omega} B(y_{[n]}, b_C).$$
Let $\rho$ denote the reduced density matrix of $|\psi\rangle$ on either system (this is well-defined because we've assumed $|\psi\rangle$ is symmetric).

We first prove (30). Recall the notation $\psi = |\psi\rangle\langle\psi|$ and $X[\rho] = X \rho X^\dagger$. We introduce the following state:
$$\Xi_{\Omega X_{[n]} E_A E_B Z} = \sum_{\omega, x_{[n]}, a_C, b_C} P_{\Omega X_{[n]}}(\omega, x_{[n]}) \, |\omega x_{[n]}\rangle\langle\omega x_{[n]}| \otimes \Big( \sqrt{A(x_{[n]}, a_C)} \otimes \sqrt{B^{\omega}(b_C)} \Big)[\psi] \otimes |a_C b_C\rangle\langle a_C b_C|.$$
The state $\Xi$ is defined so that after tracing out the entanglement registers $E_A$ and $E_B$ the resulting state $\Xi_{\Omega X_{[n]} A_C B_C}$ is a classical state that can be identified with $P_{\Omega X_{[n]} A_C B_C}$. We can condition $\Xi$ on the event $W$ (which is well-defined since the event is determined by the classical random variables $\Omega$ and $Z$) by defining the state
$$\xi_{\Omega X_{[n]} E_A E_B Z} = \Xi_{\Omega X_{[n]} E_A E_B Z | W}. \qquad (48)$$
In what follows, we will frequently refer to states such as $\xi_{E_B | \omega_{-i}, z, x_i, \perp}$, which is shorthand for the state $\xi_{E_B | \Omega_{-i} = \omega_{-i}, Z = z, X_i = x_i, \Omega_i = (B, \perp)}$, the reduced density matrix of $\xi$ on the system $E_B$, conditioned on the indicated settings of the classical random variables. The following claim provides the main step of the proof by relating the reduced densities on Bob's register of the states (48) associated with different choices of $x_i$.

Claim 24.
$$\mathop{\mathbb{E}}_i \mathop{\mathbb{E}}_{\Omega_{-i} Z | W} \mathop{\mathbb{E}}_{X_i} \Big\| \xi_{E_B | \omega_{-i}, z, x_i, \perp} - \xi_{E_B | \omega_{-i}, z, \perp, \perp} \Big\|_1 = O\big(\sqrt{\delta}/\alpha\big). \qquad (49)$$

Proof.
First we observe that $\Pr(W)\, \xi \preceq \Xi$, thus by definition $S(\xi \| \Xi) \le S_\infty(\xi \| \Xi) \le \log 1/\Pr(W)$. Using the chain rule for the relative entropy (Lemma 7),
$$\mathop{\mathbb{E}}_{\Omega Z | W} S\big( \xi_{X_{[n]} E_B | \omega, z} \,\big\|\, \Xi_{X_{[n]} E_B | \omega, z} \big) \le \log \frac{1}{\Pr(W)}. \qquad (50)$$
Next we note that for any $\omega$, using Ando's identity $\langle \psi | X \otimes Y | \psi \rangle = \mathrm{Tr}\big( X \sqrt{\rho}\, Y^{\top} \sqrt{\rho} \big)$, where $|\psi\rangle = \sum_j \sqrt{\lambda_j} |v_j\rangle |v_j\rangle$, $\rho = \sum_j \lambda_j |v_j\rangle\langle v_j|$, $X, Y$ are any linear operators and the transpose is taken with respect to the orthonormal basis $\{|v_j\rangle\}$,
$$\Xi_{X_{[n]} E_B Z | \omega} = \sum_{x_{[n]}, a_C, b_C} P_{X_{[n]} | \omega}(x_{[n]}) \, |x_{[n]}\rangle\langle x_{[n]}| \otimes \sqrt{B^{\omega}(b_C)} \sqrt{\rho}\, A(x_{[n]}, a_C) \sqrt{\rho} \sqrt{B^{\omega}(b_C)} \otimes |a_C b_C\rangle\langle a_C b_C|$$
$$\preceq \sum_{x_{[n]}, a_C, b_C} P_{X_{[n]} | \omega}(x_{[n]}) \, |x_{[n]}\rangle\langle x_{[n]}| \otimes \sqrt{B^{\omega}(b_C)} \sqrt{\rho}\, A(x_{[n]}, a_C) \sqrt{\rho} \sqrt{B^{\omega}(b_C)} \otimes I = \sum_{x_{[n]}, b_C} P_{X_{[n]} | \omega}(x_{[n]}) \, |x_{[n]}\rangle\langle x_{[n]}| \otimes \sqrt{B^{\omega}(b_C)}\, \rho\, \sqrt{B^{\omega}(b_C)} \otimes I, \qquad (51)$$
where the last equality uses $\sum_{a_C} A(x_{[n]}, a_C) = I$. From (51) and the definition of $S_\infty$ it follows that for all $z$,
$$S_\infty\big( \Xi_{X_{[n]} E_B | \omega, z} \,\big\|\, \Xi_{X_{[n]} | \omega} \otimes \Xi_{E_B | \omega} \big) \le |C| \cdot \log |\mathcal{A}||\mathcal{B}|.$$
Applying Lemma 8,
$$\mathop{\mathbb{E}}_i \mathop{\mathbb{E}}_{\Omega Z | W} I(X_i ; E_B | \omega, z)_\xi \le \frac{2}{m} \mathop{\mathbb{E}}_{\Omega Z | W} S\big( \xi_{X_{[n]} E_B | \omega, z} \,\big\|\, \Xi_{X_{[n]} | \omega, z} \otimes \Xi_{E_B | \omega, z} \big)$$
$$\le \frac{2}{m} \Big( \mathop{\mathbb{E}}_{\Omega Z | W} S\big( \xi_{X_{[n]} E_B | \omega, z} \,\big\|\, \Xi_{X_{[n]} E_B | \omega, z} \big) + \mathop{\mathbb{E}}_{\Omega Z | W} S_\infty\big( \Xi_{X_{[n]} E_B | \omega, z} \,\big\|\, \Xi_{X_{[n]} | \omega} \otimes \Xi_{E_B | \omega} \big) \Big)$$
$$\le \frac{2}{m} \Big( \log \frac{1}{\Pr(W)} + |C| \cdot \log |\mathcal{A}||\mathcal{B}| \Big) \qquad (52)$$
$$\le \delta, \qquad (53)$$
where in the third inequality the first term is bounded using (50) and the second using (51). Applying Lemma 19,
$$\mathop{\mathbb{E}}_i P_{D_i M_i | W}(B, \perp) \approx_{O(\sqrt{\delta})} \mathop{\mathbb{E}}_i P_{D_i M_i}(B, \perp) = 1 - p_B \ge \alpha,$$
so conditioning on $\Omega_i = (B, \perp)$ we deduce
$$\mathop{\mathbb{E}}_i \mathop{\mathbb{E}}_{\Omega Z | \Omega_i = (B, \perp), W} I\big( X_i ; E_B | \omega, z \big)_\xi = O\big(\delta/\alpha\big), \qquad (54)$$
as long as $\alpha = \Omega(\sqrt{\delta})$.
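Ando's identity invoked above to derive (51) is easy to check numerically. A minimal sketch in the computational basis, so that the transpose is the ordinary matrix transpose (toy dimensions and hypothetical names, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(4)
d = 3

# Schmidt coefficients lambda_j and the state |psi> = sum_j sqrt(lambda_j) |j>|j>.
lam = rng.random(d); lam /= lam.sum()
psi = np.zeros(d * d, dtype=complex)
for j in range(d):
    e = np.zeros(d); e[j] = 1.0
    psi += np.sqrt(lam[j]) * np.kron(e, e)

sr = np.diag(np.sqrt(lam))   # sqrt(rho), with rho = diag(lam) the reduced state

# Arbitrary linear operators X and Y.
X = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
Y = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))

# <psi| X (x) Y |psi> = Tr(X sqrt(rho) Y^T sqrt(rho)).
lhs = psi.conj() @ np.kron(X, Y) @ psi
rhs = np.trace(X @ sr @ Y.T @ sr)
assert np.isclose(lhs, rhs)
```

Both sides equal $\sum_{j,k} \sqrt{\lambda_j \lambda_k}\, X_{jk} Y_{jk}$, which is exactly the computation underlying the identity.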
Next we apply Pinsker's inequality (Lemma 6) and use that $X_i$ is classical in $\xi$ to write
$$\mathop{\mathbb{E}}_i \mathop{\mathbb{E}}_{\Omega Z | \Omega_i = (B, \perp), W} \mathop{\mathbb{E}}_{X_i | \omega, z} \Big\| \xi_{E_B | \omega_{-i}, z, x_i, \perp} - \xi_{E_B | \omega, z} \Big\|_1^2 \le 2 \mathop{\mathbb{E}}_i \mathop{\mathbb{E}}_{\Omega Z | \Omega_i = (B, \perp), W} \mathop{\mathbb{E}}_{X_i | \omega, z} S\big( \xi_{E_B | \omega_{-i}, z, x_i, \perp} \,\big\|\, \xi_{E_B | \omega, z} \big) = 2 \mathop{\mathbb{E}}_i \mathop{\mathbb{E}}_{\Omega Z | \Omega_i = (B, \perp), W} I( X_i ; E_B | \omega, z )_\xi = O\big(\delta/\alpha\big)$$
by (54). To conclude Claim 24, we use Lemma 19 to obtain
$$P_I \cdot P_{\Omega_{-i} Z X_i | \Omega_i = (B, \perp), W} \approx_{O(\sqrt{\delta}/\alpha)} P_I \cdot P_{\Omega_{-i} Z | W} \cdot P_{X_i}.$$
The proof of (30) essentially follows from Claim 24 and Uhlmann's theorem. We give the details. First write $\xi_{E_B | \omega_{-i}, z, x_i, \perp}$ and $\xi_{E_B | \omega_{-i}, z, \perp, \perp}$ explicitly as
$$\xi_{E_B | \omega_{-i}, z, x_i, \perp} \propto \sqrt{B^{\omega_{-i}}(\perp, b_C)} \sqrt{\rho}\, A^{\omega_{-i}}(x_i, a_C) \sqrt{\rho} \sqrt{B^{\omega_{-i}}(\perp, b_C)},$$
$$\xi_{E_B | \omega_{-i}, z, \perp, \perp} \propto \sqrt{B^{\omega_{-i}}(\perp, b_C)} \sqrt{\rho}\, A^{\omega_{-i}}(\perp, a_C) \sqrt{\rho} \sqrt{B^{\omega_{-i}}(\perp, b_C)},$$
which makes it apparent that the states $|\widetilde{\Phi}_{x_i, \perp}\rangle$ and $|\widetilde{\Phi}_{\perp, \perp}\rangle$ introduced in (26) purify $\xi_{E_B | \omega_{-i}, z, x_i, \perp}$ and $\xi_{E_B | \omega_{-i}, z, \perp, \perp}$ respectively. Applying Uhlmann's theorem, there exists a unitary $U_{\omega_{-i}, z, x_i}$ acting on $E_A$ such that
$$\mathop{\mathbb{E}}_i \mathop{\mathbb{E}}_{\Omega_{-i} Z | W} \mathop{\mathbb{E}}_{X_i} \Big| \big\langle \widetilde{\Phi}_{x_i, \perp} \big| U_{\omega_{-i}, z, x_i} \big| \widetilde{\Phi}_{\perp, \perp} \big\rangle \Big| \ge 1 - \mathop{\mathbb{E}}_i \mathop{\mathbb{E}}_{\Omega_{-i} Z | W} \mathop{\mathbb{E}}_{X_i} \big\| \xi_{E_B | \omega_{-i}, z, x_i, \perp} - \xi_{E_B | \omega_{-i}, z, \perp, \perp} \big\|_1 \ge 1 - O\big(\sqrt{\delta}/\alpha\big), \qquad (55)$$
where the first inequality follows from the Fuchs-van de Graaf inequality (4) and the second uses Jensen's inequality and (49) from Claim 24. Expanding out the squared Euclidean norm, and making sure that $U_{\omega_{-i}, z, x_i}$ is chosen so that the inner product $\langle \widetilde{\Phi}_{x_i, \perp} | U_{x_i} | \widetilde{\Phi}_{\perp, \perp} \rangle$ is a positive real, (55) proves (30).

A nearly identical argument yields (31). It remains to show (32).
To do so, consider the state
$$\Pi_{\Omega Y_{[n]} Z} := \sum_{\omega, y_{[n]}, a_C, b_C} P_{\Omega Y_{[n]}}(\omega, y_{[n]}) \, |\omega y_{[n]}\rangle\langle\omega y_{[n]}| \otimes \Big( \sqrt{A^{\omega}(a_C)} \otimes \sqrt{B(y_{[n]}, b_C)} \Big)[\psi] \otimes |a_C b_C\rangle\langle a_C b_C|,$$
and let $\pi$ denote $\Pi$ conditioned on the event $W$. Note that $\Pi$ is such that the classical state $\Pi_{\Omega Y_{[n]} A_C B_C}$ matches the distribution $P_{\Omega Y_{[n]} A_C B_C}$. The required analogue of Claim 24 can be stated as follows.

Claim 25.
$$\mathop{\mathbb{E}}_i \mathop{\mathbb{E}}_{\Omega_{-i} Z | W} \mathop{\mathbb{E}}_{X_i Y_i} \Big\| \pi_{E_A | \omega_{-i}, z, y_i, \omega_i = (A, \perp/x_i)} - \pi_{E_A | \omega_{-i}, z, y_i \in \perp, \omega_i = (A, \perp/x_i)} \Big\|_1 \le O\big(\sqrt{\delta}/\alpha\big). \qquad (56)$$

Proof.
The proof follows very closely that of Claim 24. The conditioning on $\Omega_i = (B, \perp)$ that led to (54) is replaced here by conditioning on $D_i = A$ and $M_i = \perp$. The main observation needed is that the distribution $P_{M_i Y_i | D_i = A, M_i = \perp}$ is a "reshaped" version of $P_{X_i Y_i}$, in the sense that
$$P_{M_i Y_i | D_i = A, M_i = \perp} \cong p_A \cdot P_{X_i Y_i | X_i \in \mathcal{X}_\perp} + (1 - p_A) \cdot P_{X_i Y_i | X_i \notin \mathcal{X}_\perp},$$
where "$\cong$" denotes a relabeling of $M_i$ on the left-hand side by $X_i$ on the right-hand side. Using the assumption that $p_A$ is bounded away from zero and one by $\Theta(\alpha)$, the proof concludes as for Claim 24 (with the extra dependence on $\alpha^{-1}$ coming from the possible imbalance in $p_A$).

The densities $\pi_{E_A | \omega_{-i}, z, y_i, \omega_i = (A, \perp/x_i)}$ and $\pi_{E_A | \omega_{-i}, z, y_i \in \perp, \omega_i = (A, \perp/x_i)}$ are purified by $|\widetilde{\Phi}_{\perp/x_i, y_i}\rangle$ and $|\widetilde{\Phi}_{\perp/x_i, \perp}\rangle$ respectively. Applying Uhlmann's theorem, there exist unitaries $V_{\omega_{-i}, z, x_i, y_i}$ acting on $E_B$ such that
$$\mathop{\mathbb{E}}_i \mathop{\mathbb{E}}_{\Omega_{-i} Z | W} \mathop{\mathbb{E}}_{X_i Y_i} \Big\| \big|\widetilde{\Phi}_{\perp/x_i, \perp}\big\rangle - V_{\omega_{-i}, z, x_i, y_i} \big|\widetilde{\Phi}_{\perp/x_i, y_i}\big\rangle \Big\| \le O\big(\delta/\alpha\big) \qquad (57)$$
by Claim 25, proving (32).

Many interesting problems about the parallel repetition of multiplayer and entangled games remain open. Perhaps the most obvious and pressing is the problem of obtaining a complete extension of Raz's theorem for general entangled two-player games. For example, obtaining a fully quantum analogue of Raz's theorem, as was the case for Raz's theorem itself, is likely to have important implications in the setting of communication complexity. One promising candidate approach could be to leverage the recent ideas related to quantum information complexity [Tou14, BGK+15]. Similarly, parallel repetition theorems for general multiplayer games remain a fascinating challenge.
In our view, however, this problem (even classically) seems more challenging than the two-player entangled case, as its difficulties are related to communication complexity and circuit complexity lower bounds.

One limitation of our result is that it is most suitable for games with value close to 1, in the sense that if $\mathrm{val}(G)$ and $\mathrm{val}^*(G)$ are already subconstant, our bounds do not take advantage of this fact. Indeed, even if $G$ originally has a value close to 0, the anchoring operation itself pushes the value up to $\Omega(1)$. It thus remains open to find a hardness amplification result that replicates the strength of similar theorems obtained recently in the classical setting [DS14, BG15].

Acknowledgments.
We thank Mark Braverman and Ankit Garg for useful discussions. MB was supported by NSF grants CCF-0939370 and CCF-1420956. HY was supported by Simons Foundation grant
References

[AFRV14] Rotem Arnon-Friedman, Renato Renner, and Thomas Vidick. Non-signalling parallel repetition using de Finetti reductions. arXiv preprint arXiv:1411.1582, 2014.

[AIM14] Scott Aaronson, Russell Impagliazzo, and Dana Moshkovitz. AM with multiple Merlins. In Computational Complexity (CCC), 2014 IEEE 29th Conference on, pages 44–55. IEEE, 2014.

[Bel64] John S. Bell. On the Einstein-Podolsky-Rosen paradox. Physics, 1(3), 1964.

[BFS13] Harry Buhrman, Serge Fehr, and Christian Schaffner. On the parallel repetition of multi-player games: The no-signaling case. arXiv preprint arXiv:1312.7455, 2013.

[BG15] Mark Braverman and Ankit Garg. Small value parallel repetition for general games. In Proceedings of the Forty-Seventh Annual ACM Symposium on Theory of Computing, STOC, 2015.

[BGK+15] Mark Braverman, Ankit Garg, Young Kun Ko, Jieming Mao, and Dave Touchette. Near-optimal bounds on bounded-round quantum communication complexity of disjointness. In Proceedings of Foundations of Computer Science (FOCS), to appear, 2015.

[BHH+08] Boaz Barak, Moritz Hardt, Ishay Haviv, Anup Rao, Oded Regev, and David Steurer. Rounding parallel repetitions of unique games. In Foundations of Computer Science, 2008. FOCS'08. IEEE 49th Annual IEEE Symposium on, pages 374–383. IEEE, 2008.

[BOGKW88] Michael Ben-Or, Shafi Goldwasser, Joe Kilian, and Avi Wigderson. Multi-prover interactive proofs: How to remove intractability assumptions. In Proceedings of Symposium on Theory of Computing (STOC), 1988.

[BRWY13] Mark Braverman, Anup Rao, Omri Weinstein, and Amir Yehudayoff. Direct products in communication complexity. In Foundations of Computer Science (FOCS), 2013 IEEE 54th Annual Symposium on, pages 746–755. IEEE, 2013.

[BSVV15] Amey Bhangale, Ramprasad Saptharishi, Girish Varma, and Rakesh Venkat. On fortification of projection games. In RANDOM (arXiv:1504.05556), 2015.

[CHSH69] John F. Clauser, Michael A. Horne, Abner Shimony, and Richard A. Holt. Proposed experiment to test local hidden-variable theories. Physical Review Letters, 23(15):880–884, 1969.

[CS14a] André Chailloux and Giannicola Scarpa. Parallel repetition of entangled games with exponential decay via the superposed information cost. In , pages 296–307, 2014.

[CS14b] André Chailloux and Giannicola Scarpa. Parallel repetition of free entangled games: Simplification and improvements. arXiv preprint arXiv:1410.4397, 2014.

[CSUU08] Richard Cleve, William Slofstra, Falk Unger, and Sarvagya Upadhyay. Perfect parallel repetition theorem for quantum XOR proof systems. Computational Complexity, 17(2):282–299, 2008.

[CWY15] Kai-Min Chung, Xiaodi Wu, and Henry Yuen. Parallel repetition for entangled k-player games via fast quantum search. In , page 512, 2015.

[DS14] Irit Dinur and David Steurer. Analytical approach to parallel repetition. In Proceedings of the 46th Annual ACM Symposium on Theory of Computing, pages 624–633. ACM, 2014.

[DSV14] Irit Dinur, David Steurer, and Thomas Vidick. A parallel repetition theorem for entangled projection games. In the 29th Conference on Computational Complexity, CCC, pages 197–208, 2014.

[Fei95] Uriel Feige. Error reduction by parallel repetition: The state of the art. Technical Report CS95-32 of the Weizmann Institute, 1995.

[FK00] Uriel Feige and Joe Kilian. Two-prover protocols—low error at affordable rates. SIAM Journal on Computing, 30(1):324–346, 2000.

[FRS88] Lance Fortnow, John Rompel, and Michael Sipser. On the power of multi-prover interactive protocols. In Proceedings of Structure in Complexity Theory Conference (CCC), pages 156–161, 1988.

[FV02] Uriel Feige and Oleg Verbitsky. Error reduction by parallel repetition: a negative result. Combinatorica, 22(4):461–478, 2002.

[Hås01] Johan Håstad. Some optimal inapproximability results. Journal of the ACM (JACM), 48(4), 2001.

[Hol09] Thomas Holenstein. Parallel repetition: Simplification and the no-signaling case. Theory of Computing, 5(1):141–172, 2009.

[JPY13] Rahul Jain, Attila Pereszlényi, and Penghui Yao. A parallel repetition theorem for entangled two-player one-round games under product distributions. arXiv preprint arXiv:1311.6309, 2013.

[JPY14] Rahul Jain, Attila Pereszlényi, and Penghui Yao. A parallel repetition theorem for entangled two-player one-round games under product distributions. In the 29th Conference on Computational Complexity, CCC, 2014.

[KRT08] Julia Kempe, Oded Regev, and Ben Toner. Unique games with entangled provers are easy. In Proceedings of Foundations of Computer Science (FOCS), 2008.

[KV11] Julia Kempe and Thomas Vidick. Parallel repetition of entangled games. In Proceedings of the Forty-Third Annual ACM Symposium on Theory of Computing (STOC), pages 353–362, 2011.

[LW15] Cécilia Lancien and Andreas Winter. Parallel repetition and concentration for (sub-)no-signalling games via a flexible constrained de Finetti reduction. arXiv preprint arXiv:1506.07002, 2015.

[Mos14] Dana Moshkovitz. Parallel repetition from fortification. In Foundations of Computer Science (FOCS), 2014 IEEE 55th Annual Symposium on, pages 414–423. IEEE, 2014.

[NC10] Michael A. Nielsen and Isaac L. Chuang. Quantum Computation and Quantum Information. Cambridge University Press, 2010.

[Rao11] Anup Rao. Parallel repetition in projection games and a concentration bound. SIAM Journal on Computing, 40(6):1871–1891, 2011.

[Raz92] Alexander A. Razborov. On the distributional complexity of disjointness. Theoretical Computer Science, 106(2):385–390, 1992.

[Raz98] Ran Raz. A parallel repetition theorem. SIAM Journal on Computing, 27(3):763–803, 1998.

[Raz10] Ran Raz. Parallel repetition of two prover games (invited survey). In , pages 3–6. IEEE, 2010.

[Raz11] Ran Raz. A counterexample to strong parallel repetition. SIAM Journal on Computing, 40(3):771–777, 2011.

[Tou14] Dave Touchette. Quantum information complexity and amortized communication. arXiv preprint arXiv:1404.3733, 2014.

[Vaz13] Vijay V. Vazirani. Approximation Algorithms. Springer Science & Business Media, 2013.

[Ver96] Oleg Verbitsky. Towards the parallel repetition conjecture. Theoretical Computer Science, 157(2):277–282, 1996.

[Wil13] Mark M. Wilde.