Classical simulations of communication channels
PÉTER E. FRENKEL
In memoriam Katalin Marton
Abstract.
We investigate whether certain non-classical communication channels can be simulated by a classical channel with a given number of states and a given amount of noise. It is proved that any noisy quantum channel can be simulated by the corresponding noisy classical channel. General probabilistic channels are also studied.
Introduction
A communication protocol with $l$ possible inputs and $k$ possible outputs can be described by a transition matrix $A = (a_{ij}) \in [0,1]^{k\times l}$, where $a_{ij}$ is the conditional probability of output $i$ if the input is $j$. This is a stochastic matrix: for all $j$, we have $\sum_{i=1}^k a_{ij} = 1$. A communication channel can be described by the set of transition matrices that it affords. Channel $Q$ can be simulated by channel $C$ if all transition matrices afforded by $Q$ are convex combinations of transition matrices afforded by $C$.

The classical channel with $n$ states affords stochastic 0--1 matrices with at most $n$ nonzero rows. The quantum channel of level $n$ affords channel matrices of the form $(\operatorname{tr} E_i\rho_j)$, where $\rho_1, \dots, \rho_l \in M_n(\mathbb{C})$ are density matrices, and $E_1, \dots, E_k \in M_n(\mathbb{C})$ is a positive operator valued measure (POVM). It is easy to see that the classical channel with $n$ states can be simulated by the quantum channel of level $n$. By [3, Theorem 3] of Weiner and the present author, the converse also holds. The present paper is about variants of this theorem for general probabilistic channels (Section 1) and for noisy quantum channels (Section 2). The two sections are logically independent and can be read in arbitrary order. Section 2 is mathematically deeper, and more relevant to the real world.

Key words and phrases. Quantum channel, noise, classical simulation, signalling dimension.

Research partially supported by MTA Rényi “Lendület” Groups and Graphs Research Group, by ERC Consolidator Grant 1040085, and by NKFIH (OTKA) grant K 124152. This paper was presented in part at the School on Advanced Topics in Quantum Information and Foundations, February 2021. This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
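As a quick illustration of these definitions, the following fragment (our own sketch, not part of the paper) builds the channel matrix $(\operatorname{tr} E_i\rho_j)$ for a two-outcome qubit POVM and two qubit states, and checks that it is stochastic; the particular POVM elements and states are arbitrary sample choices.

```python
# Illustration (not from the paper): the quantum channel matrix
# (tr E_i rho_j) is stochastic whenever E_1 + E_2 = 1 and tr rho_j = 1.
# The POVM elements and states below are arbitrary sample choices.

def mat_mul(A, B):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(A[i][t] * B[t][j] for t in range(2)) for j in range(2)]
            for i in range(2)]

def trace(A):
    return A[0][0] + A[1][1]

# Two-outcome qubit POVM: E1 = diag(0.8, 0.3), E2 = 1 - E1.
E1 = [[0.8, 0.0], [0.0, 0.3]]
E2 = [[0.2, 0.0], [0.0, 0.7]]

# Density matrices: the pure state |+><+| and the maximally mixed state.
rho1 = [[0.5, 0.5], [0.5, 0.5]]
rho2 = [[0.5, 0.0], [0.0, 0.5]]

# Transition matrix A with a_ij = tr(E_i rho_j).
A = [[trace(mat_mul(E, rho)) for rho in (rho1, rho2)] for E in (E1, E2)]

# Each column sums to 1: the matrix is stochastic.
for j in range(2):
    assert abs(A[0][j] + A[1][j] - 1.0) < 1e-12
```

Replacing the POVM by any other psdh pair summing to the identity leaves the column sums equal to 1, which is the point of the check.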
Notations and terminology.
The set $\{1, \dots, k\}$ is denoted by $[k]$. For a real number $a$, we write $a_+ = \max(a, 0)$. The indicator of an event $A$ is written $1(A)$.

A convex body is a convex compact set with nonempty interior.

A matrix is stochastic if all entries are nonnegative reals and each column sums to 1. The set of $n$-square matrices with complex entries is written $M_n(\mathbb{C})$. The identity matrix is $1$. A complex matrix $A$ is psdh if it is positive semi-definite Hermitian, written $A \geq 0$. A positive operator valued measure (POVM) is a sequence $E_1, \dots, E_k$ of psdh matrices summing to $1$. A density matrix is a psdh matrix with trace 1.

1. General probabilistic theory
Let $S$ be a convex body in a finite dimensional real affine space. Let $E$ be the cone of effects, i.e., affine linear functions $e : S \to [0, \infty)$. A partition of unity is a sequence $e_1, \dots, e_k \in E$ of effects such that $e_1 + \dots + e_k = 1$ (the constant 1 function). The channel with state space $S$ affords transition matrices of the form $(e_i(x_j)) \in [0,1]^{k\times l}$, where $x_1, \dots, x_l \in S$, and $e_1, \dots, e_k$ is a partition of unity.

1.1. Signalling dimension.
Following the terminology introduced in [2], the signalling dimension $\operatorname{sign.dim} S$ of $S$ is the smallest positive integer $n$ such that the channel with state space $S$ can be simulated by the classical channel with $n$ states. By [3, Theorem 3] mentioned in the Introduction, the signalling dimension of the set of $n$-square density matrices is $n$.

Calculating, or even efficiently estimating, the signalling dimension of a given convex body seems to be a difficult problem, and strong general theorems are yet to be found. In this section, we start with weak general results and work our way towards deeper results for special cases.

The affine dimension $\operatorname{aff.dim} S$ of $S$ is the dimension of $S$ as a convex body. Adding 1, we get the linear dimension $\operatorname{lin.dim} S$ of $S$, i.e., the dimension of the space of affine linear functions on $S$.

A partition of unity is extremal if it cannot be written as a convex combination of two partitions of unity in a nontrivial way.

Proposition 1.1.
The nonzero effects in an extremal partition of unity are linearly independent. Thus, their number is at most the linear dimension of $S$.

Proof. Let $e_1, \dots, e_k$ be an extremal partition of unity. If $\lambda_1 e_1 + \dots + \lambda_k e_k = 0$ and $|\epsilon| \leq 1/\max\{|\lambda_i| : \lambda_i \neq 0\}$, then $(1 \pm \epsilon\lambda_1)e_1, \dots, (1 \pm \epsilon\lambda_k)e_k$ is also a partition of unity, which must coincide with $e_1, \dots, e_k$ because of extremality. Thus $\lambda_i e_i = 0$ for all $i$. □

Following [5] by Matsumoto and Kimura, the information storability $\operatorname{inf.stor} S$ of $S$ is the maximum of $\sum_{i=1}^k \max_j a_{ij}$ over all transition matrices $(a_{ij})$ afforded by $S$, or, equivalently, the maximum of $\sum_{i=1}^k \max_S e_i$ over all partitions of unity $e_1, \dots, e_k$. When taking these maxima, it suffices to consider extremal partitions of unity. Then Proposition 1.1 and a simple compactness argument show that these maxima are attained.

By [5, Corollary 2], $\operatorname{inf.stor} S \leq \operatorname{lin.dim} S$. We refine this inequality as follows.

Theorem 1.2. (1) $\operatorname{inf.stor} S \leq \operatorname{sign.dim} S \leq \operatorname{lin.dim} S$.
(2) If $\operatorname{inf.stor} S \leq \operatorname{aff.dim} S$, then $\operatorname{sign.dim} S \leq \operatorname{aff.dim} S$.

Proof. (1) Let $n = \operatorname{sign.dim} S$. Any transition matrix afforded by $S$ is a convex combination of transition matrices afforded by the classical channel with $n$ states. Such a matrix has at most $n$ nonzero rows and therefore sum of row-maxima at most $n$. This property is preserved when taking convex combinations. This proves the first inequality.

Any transition matrix afforded by $S$ is a convex combination of transition matrices of the form $(e_i(x_j))$, where $e_1, \dots, e_k$ is an extremal partition of unity, and $x_j \in S$. By Proposition 1.1, such a matrix has at most $\operatorname{lin.dim} S$ nonzero rows, and therefore is a convex combination of matrices afforded by the classical channel with $\operatorname{lin.dim} S$ states. This proves the second inequality.

(2) Let $\operatorname{inf.stor} S \leq \operatorname{aff.dim} S = n$. Any transition matrix afforded by $S$ is a convex combination of matrices of the form $A = (a_{ij}) \in [0,1]^{k\times l}$, where $a_{ij} = e_i(x_j)$, $e_1, \dots, e_k$ is an extremal partition of unity, and $x_j \in S$. We shall show that such an $A$ is always a convex combination of transition matrices afforded by the classical channel with $n$ states. Using Proposition 1.1, we may assume that $k = n + 1$.

Set $m_i = \max_S e_i \in [0,1]$ for each $i \in [k]$. Note that $\sum_{i=1}^k (1 - m_i) \geq n + 1 - \operatorname{inf.stor} S \geq 1$. Choose a probability distribution $p_1, \dots, p_k$ such that $p_i \leq 1 - m_i$ for all $i$. Then
$$p_i \leq 1 - a_{ij} = \sum_{i' \neq i} a_{i'j} \quad \text{for all } i \text{ and } j, \qquad \text{and} \qquad \sum_{i \in T} p_i \leq \sum_{i=1}^k a_{ij} \quad \text{for all } T \subseteq [k].$$
For any fixed $j$, put supply $a_{ij}$ and demand $p_i$ at each node $i$ of the complete (but loopless) graph on $k$ nodes. Then, for the total supply at the neighbors of any subset $T \subseteq [k]$, we have
$$\sum_{i \in N(T)} a_{ij} \geq \sum_{i \in T} p_i.$$
By the Supply–Demand Theorem [4, 2.1.5. Corollary], the demands can be met: there exist stochastic column vectors $b_j(1), \dots, b_j(k)$ such that the $i$-th entry of $b_j(i)$ is zero for all $i$, and $\sum_{i=1}^k p_i b_j(i)$ is the $j$-th column of $A$. Now let $B(i)$ be the matrix with columns $b_1(i), \dots, b_l(i)$. Then the $i$-th row of $B(i)$ is zero, so $B(i)$ has at most $k - 1 = n$ nonzero rows, so $B(i)$ is a convex combination of transition matrices afforded by the classical channel with $n$ states. Then so is $A$, since $A = \sum_{i=1}^k p_i B(i)$. □

For the remainder of this section, assume that $S$ is not just a point. The Minkowski measure of asymmetry $\operatorname{asymm} S$ of $S$ is the smallest real number $m \geq 1$ such that there exists a point $O \in S$ such that for any chord $AOB$ of $S$, we have $|OB| \leq m|OA|$.

By [5, Theorem 1] of Matsumoto and Kimura, the information storability is related to the Minkowski measure of asymmetry as follows.

Proposition 1.3.
$\operatorname{inf.stor} S = \operatorname{asymm} S + 1$.

Although this is a known statement, we include a sketch of a geometric proof for the convenience of the reader.
Proof. $\leq$: There exists a point $O \in S$ such that for any chord $AOB$ of $S$, we have $|OB| \leq (\operatorname{asymm} S)|OA|$. Let $n = \operatorname{asymm} S + 1$. Then $e(x) \leq n e(O)$ for all $e \in E$ and $x \in S$, whence
$$\sum_{i=1}^k \max_S e_i \leq n \sum_{i=1}^k e_i(O) = n$$
for all partitions of unity $e_1, \dots, e_k$.

$\geq$: Let $n = \operatorname{inf.stor} S$. Then $\sum_{i=1}^k \max_S e_i \leq n$ for all partitions of unity $e_1, \dots, e_k$. When $k$ is the linear dimension of $S$, this tells us that for any simplex $\Delta$ containing $S$, there exists a point each of whose barycentric coordinates with respect to $\Delta$ is at least $1/n$ times the maximum value of that barycentric coordinate on $S$. Using Helly's theorem, we see that there exists a point $O$ that divides the distance between any two parallel supporting hyperplanes of $S$ in a ratio at least as equitable as $(n-1) : 1$. Then, for any chord $AOB$ of $S$ with $|AO| \leq |OB|$, considering the supporting hyperplane of $S$ at $A$ and the parallel supporting hyperplane, we get that $|OB| \leq (n-1)|OA|$. □
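Proposition 1.3 is easy to check numerically in the simplest case. The sketch below (ours, not from the paper) takes the segment $S = [0,1]$ as state space: it brute-forces the Minkowski asymmetry over candidate centers $O$ and the information storability over two-effect partitions of unity $(e, 1-e)$ with $e(x) = c + vx$, which suffice since $\operatorname{lin.dim} S = 2$, and recovers $\operatorname{inf.stor} S = \operatorname{asymm} S + 1 = 2$. The grid resolutions are arbitrary choices.

```python
# Numerical illustration (our own sketch) of Proposition 1.3 for the
# segment S = [0, 1]: asymm S = 1 (take O at the midpoint), so the
# information storability should come out as asymm + 1 = 2.

# Minkowski asymmetry: a chord AOB through O in [0, 1] has worst ratio
# |OB|/|OA| = max(O, 1-O)/min(O, 1-O); minimize over candidate centers O.
candidates = [i / 1000 for i in range(1, 1000)]
asymm = min(max(o, 1 - o) / min(o, 1 - o) for o in candidates)

# Information storability: brute force over partitions of unity (e, 1-e)
# with e(x) = c + v*x affine and 0 <= e <= 1 on S, i.e. both endpoint
# values e(0) = c and e(1) = c + v lie in [0, 1].
best = 0.0
steps = [i / 100 for i in range(101)]
for c in steps:          # value of e at x = 0
    for e1 in steps:     # value of e at x = 1
        val = max(c, e1) + max(1 - c, 1 - e1)  # max_S e + max_S (1 - e)
        best = max(best, val)

assert abs(asymm - 1.0) < 1e-9
assert abs(best - (asymm + 1)) < 1e-9
```

The maximum is attained by the extremal partition $e(x) = x$, $1 - e(x) = 1 - x$, matching the geometric picture in the proof.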
For the regular octahedron, we have $\operatorname{asymm} = 1$, $\operatorname{inf.stor} = 2$, $\operatorname{sign.dim} = \operatorname{aff.dim} = 3$, and $\operatorname{lin.dim} = 4$.
Proof.
The regular octahedron is centrally symmetric, which means that $\operatorname{asymm} = 1$. By Proposition 1.3, we have $\operatorname{inf.stor} = \operatorname{asymm} + 1 = 2$. Obviously, $\operatorname{aff.dim} = 3$ and $\operatorname{lin.dim} = \operatorname{aff.dim} + 1 = 4$.

By Theorem 1.2(2), we have $\operatorname{sign.dim} \leq 3$. To prove the converse inequality, let
$$X = \begin{pmatrix} 1 & -1 & & & & \\ & & 1 & -1 & & \\ & & & & 1 & -1 \end{pmatrix}$$
be the matrix whose columns are the vertices of the octahedron (the entries not shown are zero). Let
$$V = \begin{pmatrix} 1 & 1 & 1 \\ 1 & -1 & -1 \\ -1 & 1 & -1 \\ -1 & -1 & 1 \end{pmatrix}, \qquad \text{then} \qquad VX = \begin{pmatrix} 1 & -1 & 1 & -1 & 1 & -1 \\ 1 & -1 & -1 & 1 & -1 & 1 \\ -1 & 1 & 1 & -1 & -1 & 1 \\ -1 & 1 & -1 & 1 & 1 & -1 \end{pmatrix}.$$
Adding 1 to each entry and dividing by 4, we get the stochastic matrix
$$A = \frac12\begin{pmatrix} 1 & 0 & 1 & 0 & 1 & 0 \\ 1 & 0 & 0 & 1 & 0 & 1 \\ 0 & 1 & 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 1 & 1 & 0 \end{pmatrix},$$
which is therefore a transition matrix afforded by the octahedron. Since any two rows of $A$ have a 1/2 at the same position, $A$ cannot be a convex combination of 0--1 stochastic matrices with at most 2 nonzero rows: in a column where two given rows both carry a 1/2, each 0--1 stochastic matrix in the combination must have its single 1 in one of those two rows, so its set of nonzero rows meets every pair of rows and hence has at least 3 elements. Thus $\operatorname{sign.dim} \geq 3$. □
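The matrix computations in this proof are easy to verify mechanically. The following sketch (ours; the sign pattern of $V$ is one valid choice consistent with the proof) checks that the resulting $A$ is stochastic and that any two of its rows carry a 1/2 in a common column.

```python
# Verification sketch (our own) for the transition matrix in the proof of
# Corollary 1.4: A = (VX + J)/4 is stochastic, and any two rows of A share
# a 1/2 in a common column, which rules out simulation by a classical bit.

# Columns of X: the six vertices +-e_m of the regular octahedron.
X_cols = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

# Rows of V: four sign vectors; any two rows agree in exactly one coordinate.
V = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]

# A = (VX + J)/4, where J is the all-ones matrix.
A = [[(sum(v[m] * x[m] for m in range(3)) + 1) / 4 for x in X_cols] for v in V]

# A is stochastic: every column sums to 1.
for j in range(6):
    assert abs(sum(A[i][j] for i in range(4)) - 1.0) < 1e-12

# Any two rows share a 1/2 in the same column.
for i in range(4):
    for ip in range(i + 1, 4):
        assert any(A[i][j] == 0.5 and A[ip][j] == 0.5 for j in range(6))
```

The shared 1/2 for rows $i$ and $i'$ sits in the column of the vertex $\pm e_m$ for the unique coordinate $m$ where the two sign vectors agree.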
1.2. Noise.

If an origin is chosen in $S$, and $0 \leq \delta \leq 1$, then the $\delta$-noisy channel with state space $S$ affords the transition matrices $(e_i(x_j))$, where $e_1, \dots, e_k$ is a partition of unity and $x_j \in (1-\delta)S$ for all $j$. Note that $e_i \geq 0$ is required on all of $S$.

For the classical channel with $n$ states, we consider the state space
$$(1.1) \qquad \Delta_n = \{(\xi_1, \dots, \xi_n) : \xi_i \geq 0 \text{ for all } i, \ \xi_1 + \dots + \xi_n = 1\},$$
a simplex with $n$ vertices, with the origin chosen at $(1/n, \dots, 1/n)$, the center of the simplex. For $0 \leq \delta \leq 1$, consider the smaller simplex
$$\Delta_n(\delta) = \{(\xi_1, \dots, \xi_n) : \xi_i \geq \delta/n \text{ for all } i, \ \xi_1 + \dots + \xi_n = 1\}.$$
For the $\delta$-noisy classical system with $n$ states, the requirement on the states used is that $x_j \in \Delta_n(\delta)$ for all $j$.

It is easy to see that if $S'$ is an affine image of $S$, then $S'$ can be simulated by $S$. If, in addition, the origin $O$ is mapped to $O'$, then $\delta$-noisy $S'$ can be simulated by $\delta$-noisy $S$. In particular, a classical bit can be simulated by $S$ unless $S$ is just a point, and a $\delta$-noisy classical bit can be simulated by any $\delta$-noisy $S \neq \{O\}$ that is symmetric with respect to $O$.

Theorem 1.5.
Let $n$ be an even positive integer. Put $S = \{x \in \mathbb{R}^d : \|x\|_{n/(n-1)} \leq 1\}$, the unit ball of the $n/(n-1)$-norm. Let $0 \leq \delta \leq 1$.

(1) The $\delta$-noisy channel with state space $S$ can be simulated by the $\delta$-noisy classical channel with $n$ states.
(2) The signalling dimension of $S$ is $\leq n$.
(3) For an ellipsoid of arbitrary affine dimension $\geq 1$, the signalling dimension is 2. A $\delta$-noisy ellipsoid can be simulated by a $\delta$-noisy classical bit.

The proof below is similar to that of [3, Theorem 3]. However, the mixed discriminant used there (and used in Section 2 of the present paper) must be replaced by a different $n$-linear symmetric function $\{\cdot, \dots, \cdot\}$.

To introduce $\{\cdot, \dots, \cdot\}$, we can think of an affine linear function $e : S \to \mathbb{R}$ as a formal sum of a number and a vector: $e = c + v \in \mathbb{R}^{d+1}$, meaning that $e(x) = c + vx$ for $x \in S$, where $vx$ is the usual inner product. For an effect $e \in E$, the condition $e \geq 0$ translates to $\|v\|_n \leq c$ because $(n/(n-1))^{-1} + n^{-1} = 1$. Given $e_1, \dots, e_n \in \mathbb{R}^{d+1}$, where $e_i = c_i + v_i$, we define
$$\{e_1, \dots, e_n\} = c_1 \cdots c_n - v_1 \cdots v_n,$$
where $v_1 \cdots v_n$ means that we take the coordinatewise product and then add up the coordinates (which is an $n$-linear generalization of the usual inner product). For $n = 2$, $\{\cdot, \cdot\}$ is the Lorentzian indefinite symmetric bilinear product well known from the special theory of relativity. For general $n$, $\{\cdot, \dots, \cdot\}$ is symmetric, multilinear, and $\{1, \dots, 1\} = 1$. When $e_1, \dots, e_n \in E$, we have $\{e_1, \dots, e_n\} \geq 0$ by repeated application of Hölder's inequality. Further, if $0 \leq e \leq 1$ holds pointwise on $S$, then writing $e = c + v$ and $a = \|v\|_n$, we have $0 \leq a \leq \min(c, 1-c)$ and therefore
$$\{e, \dots, e\} = c^n - v \cdots v \overset{*}{=} c^n - a^n = (c-a)(c^{n-1} + c^{n-2}a + \dots + ca^{n-2} + a^{n-1}) \leq (c-a)(c + (1-c))^{n-1} = c - a = \min_{x \in S} e(x).$$
Note that the equality marked by $*$ holds because $n$ is even.

We are now ready to start the proof of Theorem 1.5.

Proof. (1) Let $A \in [0,1]^{k\times l}$ be a $\delta$-noisy transition matrix afforded by $S$, i.e., $a_{ij} = e_i((1-\delta)x_j)$, where $x_1, \dots, x_l \in S$, $e_i \in E$, and $e_1 + \dots + e_k = 1$. We shall prove that $A$ is a convex combination of $\delta$-noisy $n$-state classical transition matrices.

If $e_i = c_i + v_i$ as before, then $c_1 + \dots + c_k = 1$, $v_1 + \dots + v_k = 0$, and $a_{ij} = c_i + (1-\delta)v_i x_j = \delta c_i + (1-\delta)e_i(x_j)$, so $A = \delta C + (1-\delta)A'$, where $C$ is the matrix with entries $c_{ij} = c_i$ not depending on $j$, and $A'$ is the matrix with entries $a'_{ij} = e_i(x_j)$.

For $I = (i_1, \dots, i_n) \in [k]^n$, put $p_I = \{e_{i_1}, \dots, e_{i_n}\}$. We have $p_I \geq 0$ for all $I$. Thus, we get a measure $P$ on $[k]^n$ defined by $P(T) = \sum_{I \in T} p_I$. Using the multilinearity of the bracket and the assumption that $e_1, \dots, e_k$ is a partition of unity, we see that $P([k]^n) = \{1, \dots, 1\} = 1$, so $P$ is a probability measure.

Let $D(I)$ be the matrix with entries $d(I)_{ij} = m(i, I)/n$ not depending on $j$, where $m(i, I)$ is the number of occurrences of $i$ in the sequence $I$. Then $\int D \, dP = C$ because
$$\int d_{ij} \, dP = \sum_{I \in [k]^n} p_I \, m(i, I)/n = \{e_i, 1, \dots, 1\} = c_i = c_{ij}.$$

For any $R \subseteq [k]$, we may put $e_R = \sum_{i \in R} e_i$, and then we have
$$P(R^n) = \{e_R, \dots, e_R\} \leq \min_{x \in S} e_R(x) \leq e_R(x_j)$$
for all $j$ since $0 \leq e_R \leq 1$. The right hand side here is $A'_j(R)$, where $A'_j$ is the probability measure on $[k]$ given by the numbers $e_i(x_j)$. So we have $A'_j(R) \geq P(R^n)$ for all $R \subseteq [k]$.

Let us connect $I \in [k]^n$ to $i \in [k]$ by an edge if $i$ occurs in $I$. This gives us a bipartite graph. The neighborhood of any set $T \subseteq [k]^n$ is the set $R \subseteq [k]$ of indices occurring in some element of $T$. We always have $T \subseteq R^n$, whence $A'_j(R) \geq P(R^n) \geq P(T)$. Thus, by the Supply–Demand Theorem [4, 2.1.5. Corollary], and using the fact that both $A'_j$ and $P$ are probability measures, there exists a probability measure $P_j$ on $[k]^n \times [k]$ which is supported on the edges of the graph and has marginals $P$ and $A'_j$. Whenever $p_I \neq 0$, let $B'(I)$ be the $k \times l$ stochastic matrix whose $j$-th column is given by the conditional distribution $P_j \,|\, I$ on $[k]$. We have $A' = \int B' \, dP$.

Now $B(I) = \delta D(I) + (1-\delta)B'(I)$ is a $\delta$-noisy classical transition matrix with $n$ states, and $A = \int B \, dP$, as desired.

(2) Set $\delta = 0$ in (1).

(3) The signalling dimension of an ellipsoid is the same as that of the Euclidean unit ball. This is $\leq 2$ by (2), and is $\geq 2$ because the unit ball is not a point. The noisy claim follows from (1). □

2. Noisy quantum channels
Let $K \subseteq \Delta_n$ (cf. (1.1)) be a convex set of probability distributions that is invariant under all permutations of the $n$ coordinates. The $K$-noisy classical channel affords transition matrices of the form $EX \in [0,1]^{k\times l}$, where $X \in K^l$ is an $n \times l$ matrix with all columns in $K$, and $E$ is a $k \times n$ stochastic 0--1 matrix. A density matrix is $K$-noisy if the sequence of its eigenvalues is in $K$. The $K$-noisy quantum channel affords transition matrices of the form $(\operatorname{tr} E_i \rho_j)$, where $E_1, \dots, E_k$ is a POVM and $\rho_j$ is a $K$-noisy density matrix for $j = 1, \dots, l$.

It is easy to see that the $K$-noisy classical channel can be simulated by the $K$-noisy quantum channel. Our goal is to prove the converse, which is a far-reaching generalization of [3, Theorem 3] mentioned in the Introduction. As in [3], our main tool is the mixed discriminant, the unique symmetric $n$-linear function $D$ of $n$ matrices in $M_n(\mathbb{C})$ such that $D(E, \dots, E) = \det E$ for all $E \in M_n(\mathbb{C})$. Explicitly, if $E_i = [e_{1i}, \dots, e_{ni}]$ are the columns, then
$$(2.1) \qquad D(E_1, \dots, E_n) = \frac{1}{n!}\sum_{\pi \in S_n} \det\left[e_{1\pi(1)}, \dots, e_{n\pi(n)}\right].$$
We shall need the following inequalities.
Lemma 2.1.
For $\lambda_1, \dots, \lambda_n \in [0,1]$ and $r = 1, 2, \dots, n$, we have
$$(2.2) \qquad \sum_{Q \subseteq [n]} (r - |Q|)_+ \prod_{i \notin Q} \lambda_i \prod_{i \in Q} (1 - \lambda_i) \leq \lambda_1 + \dots + \lambda_r,$$
where $a_+ = \max(a, 0)$.

Proof. We have
$$(r - |Q|)_+ \leq |[r] \setminus Q| = \sum_{j=1}^r 1(j \notin Q)$$
for all $Q$. Thus, the left hand side of (2.2) is
$$\leq \sum_{j=1}^r \sum_{Q \subseteq [n] \setminus \{j\}} \prod_{i \notin Q} \lambda_i \prod_{i \in Q} (1 - \lambda_i) = \sum_{j=1}^r \lambda_j \prod_{i \neq j} (\lambda_i + (1 - \lambda_i)) = \lambda_1 + \dots + \lambda_r. \qquad \square$$

Lemma 2.2.
For an $n$-square Hermitian matrix $0 \leq E \leq 1$ with eigenvalues $\lambda_1, \dots, \lambda_n$, and $r = 1, 2, \dots, n$, we have
$$\sum_{t=0}^{r-1} (r - t)\binom{n}{t}\, D(\underbrace{E, \dots, E}_{n-t}, \underbrace{1 - E, \dots, 1 - E}_{t}) \leq \lambda_1 + \dots + \lambda_r.$$

Proof.
Since the spectrum and the mixed discriminant are both invariant under unitary conjugation, we may assume that $E$ is a diagonal matrix. Then (2.1) reduces Lemma 2.2 to Lemma 2.1. □

By Bapat's [1, Lemma 2(vi)], if $E_1, \dots, E_n$ are all positive semidefinite Hermitian matrices, then
$$(2.3) \qquad D(E_1, \dots, E_n) \geq 0.$$
Given a POVM $E_1, \dots, E_k \in M_n(\mathbb{C})$, we define
$$(2.4) \qquad p_I = D(E_{i_1}, \dots, E_{i_n})$$
for all $I = (i_1, \dots, i_n) \in [k]^n$. By multilinearity and (2.3), this defines a probability distribution on $[k]^n$.

Lemma 2.3. If $E_1, \dots, E_k \in M_n(\mathbb{C})$ is a POVM, $u_1, \dots, u_k$ are real numbers, and $\lambda_1 \leq \dots \leq \lambda_n$ are the eigenvalues of $E = \sum_{i=1}^k u_i E_i$, then
$$(2.5) \qquad \sum_{I \in [k]^n} p_I \min\left\{\sum_{j \in J} u_{i_j} : J \subseteq [n], \ |J| = r\right\} \leq \lambda_1 + \dots + \lambda_r$$
for all $r = 1, 2, \dots, n$.

Proof. We may assume that all $u_i \geq 0$ because adding $u$ to all $u_i$ adds $ru$ to both sides of (2.5). We may assume $u_1 \geq \dots \geq u_k$. Put $u_{k+1} = 0$. Write $E = \sum_{i=1}^k v_i F_i$, where $v_i = u_i - u_{i+1}$ and $F_i = E_1 + \dots + E_i$. Let $\sigma_i$ be the sum of the $r$ smallest eigenvalues of $F_i$. Then
$$(2.6) \qquad \sum_{i=1}^k v_i \sigma_i \leq \lambda_1 + \dots + \lambda_r.$$
As $0 \leq F_i \leq 1$, we have
$$(2.7) \qquad \sum_{t=0}^{r-1} (r-t)\binom{n}{t}\, D(\underbrace{F_i, \dots, F_i}_{n-t}, \underbrace{1 - F_i, \dots, 1 - F_i}_{t}) \leq \sigma_i$$
for all $i$, by Lemma 2.2. On the other hand, since $u_i = v_i + \dots + v_k$, we have
$$\min\left\{\sum_{j \in J} u_{i_j} : J \subseteq [n], \ |J| = r\right\} = \sum_{i=1}^k v_i \left(r - |\{j \in [n] : i_j > i\}|\right)_+.$$
It remains to check that
$$\sum_{I \in [k]^n} p_I \left(r - |\{j \in [n] : i_j > i\}|\right)_+ = \sum_{t=0}^{r-1} (r-t)\binom{n}{t}\, D(\underbrace{F_i, \dots, F_i}_{n-t}, \underbrace{1 - F_i, \dots, 1 - F_i}_{t})$$
for all $i \in [k]$. This follows from
$$\sum\left(p_I : I \in [k]^n, \ |\{j \in [n] : i_j > i\}| = t\right) = \binom{n}{t}\, D(\underbrace{F_i, \dots, F_i}_{n-t}, \underbrace{1 - F_i, \dots, 1 - F_i}_{t}),$$
which is clear from the definitions of $p_I$ and $F_i$, and from the symmetry and multilinearity of $D$.
□
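As a numerical cross-check (our own sketch, not part of the paper), the permutation formula (2.1) is easy to implement for $n = 2$. The fragment below verifies $D(E, E) = \det E$ and checks that the weights $p_I$ of (2.4) are nonnegative and sum to $D(1, 1) = 1$ for a sample qubit POVM $\{E, 1 - E\}$; the matrix $E$ is an arbitrary choice with $0 \leq E \leq 1$.

```python
# Sketch (our own, for illustration): the mixed discriminant via the
# permutation formula (2.1), specialized to 2x2 real symmetric matrices,
# with two checks: D(E, E) = det E, and the weights p_I of (2.4) form a
# probability distribution for the qubit POVM {E, 1 - E}.
from itertools import permutations, product

def det2(c0, c1):
    """Determinant of the 2x2 matrix with columns c0, c1."""
    return c0[0] * c1[1] - c0[1] * c1[0]

def mixed_discriminant(E_list):
    """D(E_1, E_2) per (2.1): column j is taken from E_{pi(j)}, averaged."""
    cols = [[(E[0][j], E[1][j]) for j in range(2)] for E in E_list]
    total = 0.0
    for pi in permutations(range(2)):
        total += det2(cols[pi[0]][0], cols[pi[1]][1])
    return total / 2  # 1/n! with n = 2

E = [[0.7, 0.1], [0.1, 0.4]]             # 0 <= E <= 1 (eigenvalues in [0,1])
one_minus_E = [[0.3, -0.1], [-0.1, 0.6]]

# D(E, E) = det E.
assert abs(mixed_discriminant([E, E]) - (0.7 * 0.4 - 0.1 * 0.1)) < 1e-12

# p_I = D(E_{i_1}, E_{i_2}) over I in [2]^2: nonnegative, sums to det 1 = 1.
povm = [E, one_minus_E]
p = {I: mixed_discriminant([povm[I[0]], povm[I[1]]])
     for I in product(range(2), repeat=2)}
assert all(v >= -1e-12 for v in p.values())
assert abs(sum(p.values()) - 1.0) < 1e-12
```

The sum equals 1 by exactly the multilinearity argument in the text: expanding $D(E_1 + E_2, E_1 + E_2) = D(1, 1) = \det 1$.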
Theorem 2.4.
The $K$-noisy quantum channel can be simulated by the $K$-noisy classical channel.

Proof. It suffices to prove that for any POVM $E_1, \dots, E_k$, and any $K$-noisy density matrix $\rho$, there exist points $x_I = (x_{I,1}, \dots, x_{I,n}) \in K$ for each $I = (i_1, \dots, i_n) \in [k]^n$ such that
$$(2.8) \qquad \operatorname{tr} E_i \rho = \sum_{I \in [k]^n} p_I \sum\left(x_{I,j} : j \in [n], \ i_j = i\right)$$
for each $i \in [k]$. Here the $p_I$ are defined as in (2.4).

Let the eigenvalues of $\rho$ be $0 \leq \mu_1 \leq \dots \leq \mu_n$; we have $\mu_1 + \dots + \mu_n = 1$. Since $\rho$ is $K$-noisy, we have $\mu = (\mu_1, \dots, \mu_n) \in K$. Since $K$ is convex and invariant with respect to permutations, any convex combination of permutations of $\mu$ is in $K$. Thus, if $x \in [0,1]^n$ is a stochastic vector, and any $r$ distinct coordinates of $x$ sum to $\geq \mu_1 + \dots + \mu_r$ for each $r = 1, 2, \dots, n$, then $x \in K$. If we require

• these $2^n$ inequalities (one for each subset of the coordinates) for each $x_I$, together with
• $x_{I,j} \geq 0$ for all $I$ and $j$, and
• (2.8) for all $i$,

then each $x_I$ will be a stochastic vector, since setting $r = n$ yields
$$x_{I,1} + \dots + x_{I,n} \geq \mu_1 + \dots + \mu_n = 1,$$
while summing (2.8) for $i = 1, 2, \dots, k$ yields
$$\sum_{I \in [k]^n} p_I (x_{I,1} + \dots + x_{I,n}) = 1.$$
Therefore, it suffices to prove that the system of $(2^n + n)k^n$ inequalities and $k$ equations above has a solution. By the well-known Farkas Lemma, this is equivalent to saying that a linear combination of the inequalities and equations in the system cannot lead to the contradictory inequality $0 \geq 1$. That is, it suffices to prove that if nonnegative numbers $w_{I,H}$ ($I \in [k]^n$, $H \subseteq [n]$) and real numbers $u_1, \dots, u_k$ satisfy
$$(2.9) \qquad \sum\left(w_{I,H} : H \subseteq [n], \ H \ni j\right) \leq p_I u_{i_j}$$
for all $I \in [k]^n$ and all $j \in [n]$, then
$$(2.10) \qquad \sum_{I \in [k]^n} \sum_{H \subseteq [n]} w_{I,H}\left(\mu_1 + \dots + \mu_{|H|}\right) \leq \sum_{i=1}^k u_i \operatorname{tr} E_i \rho.$$
Let $\lambda_1 \leq \dots \leq \lambda_n$ be the eigenvalues of $u_1 E_1 + \dots + u_k E_k$. By von Neumann's inequality, the right hand side of (2.10) is $\geq \lambda_1 \mu_n + \dots + \lambda_n \mu_1$.
The coefficient of $\mu_s$ on the left hand side of (2.10) is
$$\sum_{I \in [k]^n} \sum_{|H| \geq s} w_{I,H},$$
so it suffices to prove that
$$\sum_{s=n-r+1}^{n} \sum_{I \in [k]^n} \sum_{|H| \geq s} w_{I,H} \leq \lambda_1 + \dots + \lambda_r$$
for $r = 1, \dots, n$. In view of Lemma 2.3, this follows if
$$\sum_{s=n-r+1}^{n} \sum_{|H| \geq s} w_{I,H} \leq p_I \sum_{j \in J} u_{i_j}$$
for all $I \in [k]^n$ and all $J \subseteq [n]$ with $|J| = r$. This follows from (2.9) and the fact that
$$\sum_{s=n-r+1}^{n} 1(|H| \geq s) = \left(|H| - (n-r)\right)_+ \leq |H \cap J|$$
for all $H \subseteq [n]$, whence
$$\sum_{s=n-r+1}^{n} \sum_{|H| \geq s} w_{I,H} \leq \sum_{H \subseteq [n]} w_{I,H}\,|H \cap J| = \sum_{j \in J} \sum\left(w_{I,H} : H \ni j\right) \leq p_I \sum_{j \in J} u_{i_j}. \qquad \square$$

Acknowledgments.
I am grateful to Mihály Weiner for useful conversations.
References

[1] R. B. Bapat: Mixed discriminants of positive semidefinite matrices, Linear Algebra Appl. (1989), 107–124.
[2] Michele Dall'Arno, Sarah Brandsen, Alessandro Tosini, Francesco Buscemi, and Vlatko Vedral: No-Hypersignaling Principle, Phys. Rev. Lett. 119 (2017), 020401.
[3] P. E. Frenkel and M. Weiner: Classical information storage in an $n$-level quantum system, Communications in Mathematical Physics 340 (2015), 563–574.
[4] L. Lovász and M. D. Plummer: Matching Theory. North-Holland, 1986.
[5] Keiji Matsumoto, Gen Kimura: Information storing yields a point-asymmetry of state space in general probabilistic theories, arXiv:1802.01162.

Eötvös Loránd University, Pázmány Péter sétány 1/C, Budapest, 1117 Hungary, and Rényi Institute, Budapest, Reáltanoda u. 13–15, 1053 Hungary