Entropic uncertainty relations for multiple measurements
Shang Liu, Liang-Zhu Mu, and Heng Fan
School of Physics, Peking University, Beijing 100871, China
Institute of Physics, Chinese Academy of Sciences, Beijing 100190, China
Collaborative Innovation Center of Quantum Matter, Beijing 100190, China
We present entropic uncertainty relations for multiple measurement settings in quantum mechanics. These uncertainty relations are obtained both with and without the presence of quantum memory. They take concise forms, can be proven by a unified method, and are easy to calculate. Our results recover the well-known entropic uncertainty relations for two observables, which quantify the uncertainties about the outcomes of two incompatible measurements. These uncertainty relations are applicable both to the foundations of quantum theory and to the security of many quantum cryptographic protocols.
PACS numbers: 03.65.Ta, 03.65.Ud, 03.67.Dd, 03.65.Aa
Introduction.—The uncertainty principle is a unique feature of quantum mechanics that distinguishes it from the classical case. Heisenberg [1] formulated the first uncertainty relation, which shows that one cannot simultaneously predict with arbitrary precision the outcomes of two incompatible measurements, such as position and momentum, on a particle. As a fundamental property, the uncertainty principle continues to attract much attention and research interest, and variants of uncertainty relations have been presented over the years. One of the best-known forms today is that proposed by Robertson [2]: for two arbitrary observables $U$ and $V$, the uncertainty relation takes the form
$$ \sigma_U\,\sigma_V \geq \tfrac{1}{2}\big|\langle\psi|[U,V]|\psi\rangle\big|, $$
where $\sigma$ denotes the standard deviation of an observable. This bound, however, has the drawback of being state-dependent, so for some states it is trivial. Deutsch [3] proposed to use the Shannon entropy ($H(\{p_i\}) := -\sum_i p_i \log p_i$; the base of the logarithm is assumed to be 2 hereafter) as a proper measure of uncertainty and presented the entropic uncertainty relation
$$ H(U) + H(V) \geq -2\log\left(\frac{1+\sqrt{c(U,V)}}{2}\right). \quad (1) $$
Here, $U$ and $V$ are two projective measurements with bases $\{|u_i\rangle\}$ and $\{|v_j\rangle\}$, respectively. In this form, the uncertainty is naturally quantified by entropy in the information-theoretic context instead of by the standard deviation. We use the notations $c(u_i, v_j) := |\langle u_i|v_j\rangle|^2$ and $c(U,V) := \max_{i,j} c(u_i, v_j) = \max_{i,j} |\langle u_i|v_j\rangle|^2$, which are consistent with past works. Note that the bound of uncertainty depends only on the complementarity of the observables, avoiding the shortcoming of state-dependence. Maassen and Uffink [4] (MU) further strengthened Deutsch's inequality to the tighter and more succinct form
$$ H(U) + H(V) \geq -\log c(U,V). \quad (2) $$
In this inequality, the largest uncertainty is obtained for observables that are mutually unbiased, i.e., when the overlaps $|\langle u_i|v_j\rangle|$ all take the same value $1/\sqrt{d}$, which depends on the dimension $d$. It is known that mutually unbiased bases (MUBs) are useful in quantum information processing, in particular for quantum key distribution; see, for example, Refs. [5-9]. Considering that the number of MUBs can be as large as $d+1$ [9], rather than restricting to only 2, it is natural to investigate uncertainty relations with more than two measurements, even in the simplest two-dimensional case; see FIG. 1.

Many efforts have been made to generalize uncertainty relations to more than two observables; see [10] for a review. Significant progress has been made in this direction for the case of MUBs [11-13], with a few recent results in other cases [14-17]. We will present entropic uncertainty relations for multiple measurements under general conditions.

On the other hand, a remarkable recent development of the uncertainty principle is the investigation of the effect of a quantum memory, which is available with current technologies. It has been shown that the extent of uncertainty can be reduced with the help of a memory that may be entangled with the measured system [18]. This uncertainty relation has been confirmed experimentally [19, 20] and can be applied in studying the security of quantum cryptography. Again, the inequality holds only for two measurements, while the case of multiple observables is of fundamental interest and of practical relevance for quantum key distribution with more than two measurement settings [6, 8]. We remark that the uncertainty inequality has been extended to multipartite systems [21] and can be related to many concepts in quantum information processing, such as teleportation and entanglement witnesses [20, 22].

∗ [email protected]
Still, general uncertainty inequalities for multiple measurements in the presence of quantum memory are absent. In this Letter, we present uncertainty inequalities for multiple measurements in a unified framework for both cases, with and without quantum memory.

FIG. 1. (color online) A typical set of three MUBs in two-dimensional Hilbert space, visualized on the Bloch sphere for a general state $|\psi\rangle = \cos(\theta/2)|0\rangle + e^{i\varphi}\sin(\theta/2)|1\rangle$. These measurements can provide a complete description of any quantum state in this space.
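The mutual-unbiasedness condition sketched in FIG. 1 can be checked numerically. The following minimal sketch (an illustrative example assuming NumPy; the paper does not specify these bases, but the standard choice for a qubit is the eigenbases of the Pauli operators $\sigma_z$, $\sigma_x$, $\sigma_y$) verifies that every cross-basis squared overlap equals $1/d = 1/2$:

```python
import numpy as np

# The standard d+1 = 3 mutually unbiased bases for a qubit (d = 2):
# eigenbases of the Pauli Z, X and Y operators (an assumed, standard choice).
Z = [np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)]
X = [np.array([1, 1], dtype=complex) / np.sqrt(2),
     np.array([1, -1], dtype=complex) / np.sqrt(2)]
Y = [np.array([1, 1j], dtype=complex) / np.sqrt(2),
     np.array([1, -1j], dtype=complex) / np.sqrt(2)]

# Mutual unbiasedness: |<u_i|v_j>|^2 = 1/d for vectors from different bases.
for B1, B2 in [(Z, X), (Z, Y), (X, Y)]:
    for u in B1:
        for v in B2:
            overlap = abs(np.vdot(u, v)) ** 2
            assert np.isclose(overlap, 0.5), overlap
print("all pairwise squared overlaps equal 1/d = 1/2")
```

Extending this check to $d > 2$ requires explicit MUB constructions (e.g. for prime dimensions), which is precisely where more than three measurement settings become available.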
Rényi entropy and the generalization of Deutsch's inequality.—The generalization of Deutsch's inequality is relatively simple, but we need the concept of the Rényi entropy [23] to present our result. For a set of probabilities $\{p_i\}$ and any real number $\alpha > 0$, the classical Rényi entropy is defined as
$$ H_\alpha(\{p_i\}) := \frac{1}{1-\alpha}\log\Big(\sum_i p_i^\alpha\Big). \quad (3) $$
The Rényi entropy is a monotonically decreasing function of $\alpha$ when the probability distribution is fixed. Taking the limit $\alpha \to 1$, one recovers the definition of the Shannon entropy: $\lim_{\alpha\to 1} H_\alpha(p) \equiv H(p) = -\sum_i p_i \log p_i$, where we use $p$ as an abbreviation for $\{p_i\}$. On the other hand, we also have $\lim_{\alpha\to\infty} H_\alpha(p) = -\log(\max_i p_i)$. Having properties similar to those of the Shannon entropy, Rényi entropies are also appropriate tools for the description of uncertainty.

Now suppose that we have $N$ projective measurements $M_1, M_2, \ldots, M_N$ whose bases are $\{|u^1_{i_1}\rangle\}, \{|u^2_{i_2}\rangle\}, \ldots, \{|u^N_{i_N}\rangle\}$, respectively. We have the following theorem.

Theorem 1. The following entropic uncertainty relation holds:
$$ \sum_{m=1}^N H_\infty(M_m) \geq -\log(h), \quad (4) $$
where
$$ h = \max_{i_1, i_2, \ldots, i_N} \prod_{m=1}^N \left(\frac{1 + \sqrt{c(u^m_{i_m}, u^{m+1}_{i_{m+1}})}}{2}\right). \quad (5) $$
Here, the superscripts are taken modulo $N$. We note that, because of the monotonicity of the Rényi entropy, we can actually replace the l.h.s. by $H_{\alpha_1}(M_1) + H_{\alpha_2}(M_2) + \ldots + H_{\alpha_N}(M_N)$ for an arbitrary set of $\{\alpha_i\}$; the statement in Theorem 1 is the tightest version. In particular, if we choose all $\alpha_i$'s to be 1, the natural generalization of Deutsch's work is obtained. This is not a simple summation of two-observable inequalities, since the maximum is taken outside the product.

The proof of this result is not difficult, but still requires many lines of argument. Roughly speaking, we aim to give an upper bound on the quantity $p^1_{i_1} p^2_{i_2} \cdots p^N_{i_N}$, where $p^m_{i_m}$ is the probability of obtaining the $i_m$-th result of the $m$-th measurement. This quantity can be factorized into terms of the form $\sqrt{p^m_i p^n_j} = |\langle\psi|u^m_i\rangle\langle\psi|u^n_j\rangle|$. If we imagine that all the vectors live in a real Euclidean space, we simply have
$$ |\langle\psi|u^m_i\rangle\langle\psi|u^n_j\rangle| \leq \frac{1}{2}\big(1 + |\langle u^m_i|u^n_j\rangle|\big), \quad (6) $$
which implies the inequality in this theorem. The vectors actually live in a complex Hilbert space, but a similar procedure can still be applied. This concludes our proof; more details are given in the supplementary information [24].

Generalization of the MU inequality.—We now consider the MU bound for multi-observable uncertainty. The state of the measured system is denoted by $\rho$, which is in general a mixed state, with its von Neumann entropy defined as $S(\rho) := -\mathrm{Tr}(\rho\log\rho)$.

Theorem 2.
The following entropic uncertainty relation holds:
$$ \sum_{m=1}^N H(M_m) \geq -\log(b) + (N-1)\,S(\rho), \quad (7) $$
where
$$ b = \max_{i_N}\Big\{ \sum_{i_2,\ldots,i_{N-1}} \max_{i_1}\big[c(u^1_{i_1}, u^2_{i_2})\big] \prod_{m=2}^{N-1} c(u^m_{i_m}, u^{m+1}_{i_{m+1}}) \Big\}. \quad (8) $$
For example, if $N = 3$, we have
$$ b = \max_{k}\Big\{ \sum_j \max_i\big[c(u^1_i, u^2_j)\big]\, c(u^2_j, u^3_k) \Big\}. \quad (9) $$
The outline of the proof is sketched below; the method used is inspired by the excellent works of Coles et al. [21].

First, one can easily verify that for a projective measurement, say $U$, the following relation holds:
$$ H(U) - S(\rho) = S\Big(\rho\,\Big\|\,\sum_i |u_i\rangle\langle u_i|\rho|u_i\rangle\langle u_i|\Big), \quad (10) $$
where $S(\rho\|\sigma) := \mathrm{Tr}(\rho\log\rho) - \mathrm{Tr}(\rho\log\sigma)$ is the quantum relative entropy. Then, by using the well-known theorem that quantum channels never increase the relative entropy (see page 208 of Ref. [25] and Ref. [26]), i.e., $S(\rho\|\sigma) \geq S(\mathcal{E}(\rho)\|\mathcal{E}(\sigma))$ for any trace-preserving operation $\mathcal{E}$, we obtain from the equation above the inequality
$$ H(U) + H(V) \geq S\Big(\rho\,\Big\|\,\sum_{i,j} p_i\, c(u_i, v_j)\, |v_j\rangle\langle v_j|\Big) + 2S(\rho), \quad (11) $$
where the operation utilized is $\mathcal{E}(\rho) := \sum_j |v_j\rangle\langle v_j|\rho|v_j\rangle\langle v_j|$, and $p_i = \langle u_i|\rho|u_i\rangle$ is the probability of obtaining the $i$-th outcome of $U$. Notice that the r.h.s. of this inequality again contains a relative-entropy term, so we can apply the same method repeatedly and obtain inequalities with arbitrarily many entropic terms on the l.h.s. More precisely, we find
$$ -N S(\rho) + \sum_{m=1}^N H(M_m) \geq S\Big(\rho\,\Big\|\,\sum_j \beta^N_j\, |u^N_j\rangle\langle u^N_j|\Big), \quad (12) $$
where $\beta^N_j := \sum_{i_1, i_2, \ldots, i_{N-1}} p_{i_1}\, c(u^1_{i_1}, u^2_{i_2}) \cdots c(u^{N-1}_{i_{N-1}}, u^N_j)$ with $p_{i_1} = \langle u^1_{i_1}|\rho|u^1_{i_1}\rangle$. Finally, by slightly weakening this inequality, we obtain exactly the result shown in Theorem 2.

Let us take a further look at this result. Notice that, since the maximum is taken outside the summation, the quantity $b$ in this inequality is always less than or equal to 1, resulting in a non-negative $-\log(b)$; the bound is therefore non-trivial.
The additional von Neumann entropy term is physically meaningful (though it makes the bound state-dependent), since a mixed state is expected to increase the uncertainty. By taking $N = 2$, one simply recovers (2) with a tighter bound containing an additional term $S(\rho)$. We therefore regard inequality (7) as a generalization of the MU inequality. We remark that the MU inequality with the term $S(\rho)$ can be obtained from the memory-assisted entropic uncertainty inequality [18]; we still call it the MU inequality in this Letter.

In addition, from the proof of Theorem 2 we can obtain a corollary in the form of a weighted uncertainty relation.

Corollary 1. Suppose that we have three projective measurements $U$, $V$, and $W$ with bases $\{|u_i\rangle\}$, $\{|v_j\rangle\}$, and $\{|w_k\rangle\}$. Then
$$ H(U) + H(V) + 2H(W) \geq 2S(\rho) - \log\big\{\max_{i,j,k}\big[c(u_i, w_k)\, c(w_k, v_j)\big]\big\}. \quad (13) $$
This is also a generalization of the MU inequality, but this inequality seems difficult to extend to more observables.

Performance of the inequalities.—Next, we show that our result provides a non-trivial new bound for the uncertainty. Explicitly, we compare our new bound with known ones and show that ours is not overwhelmed. To start, let us specify which other bounds will be considered. Note that one can always construct a multi-measurement inequality from two-measurement ones by summation. For instance, by simply combining three MU inequalities for a mixed state,
$$ H(U) + H(V) \geq -\log c(U,V) + S(\rho), \quad (14) $$
we obtain the inequality
$$ H(M_1) + H(M_2) + H(M_3) \geq -\frac{1}{2}\log\big[c(M_1,M_2)\, c(M_2,M_3)\, c(M_1,M_3)\big] + \frac{3}{2}S(\rho). \quad (15) $$
We will call the bounds constructed in this manner summation bounds hereafter.

Also, note that any two-measurement bound is itself a valid bound for multi-measurement cases. More precisely, if we have a bound $b(i,j)$ such that $H(M_i) + H(M_j) \geq b(i,j)$, then we also have $\sum_{m=1}^N H(M_m) \geq b(i,j)$. This is straightforward, but we should note that two-measurement bounds are not necessarily lower than the summation bound mentioned above, so they also need to be taken into consideration. For convenience, we call all summation bounds and two-measurement bounds the simply constructed bounds (SCB), where from now on we only consider the contribution of MU bounds. We will later compare our result with the maximum among all SCBs.

Before turning to numerical computation, we can first prove analytically that our bound is never less than the two-measurement MU bound. To see this, assume that we are to compare our result with the two-measurement bound for $M_1$ and $M_2$, which can always be arranged by a relabeling of the measurements. Then we have
$$\begin{aligned} b &= \max_{i_N}\Big\{\sum_{i_2,\ldots,i_{N-1}} \max_{i_1}\big[c(u^1_{i_1}, u^2_{i_2})\big] \prod_{m=2}^{N-1} c(u^m_{i_m}, u^{m+1}_{i_{m+1}})\Big\} \\ &\leq \max_{i_1,i_2}\big[c(u^1_{i_1}, u^2_{i_2})\big]\; \max_{i_N}\Big\{\sum_{i_2,\ldots,i_{N-1}} \prod_{m=2}^{N-1} c(u^m_{i_m}, u^{m+1}_{i_{m+1}})\Big\} \\ &= \max_{i_1,i_2} c(u^1_{i_1}, u^2_{i_2}) = c(M_1, M_2), \quad (16) \end{aligned}$$
where the remaining sum equals 1, since $\sum_{i_m} c(u^m_{i_m}, u^{m+1}_{i_{m+1}}) = 1$ for each $m$. Consequently, our bound is never less than the two-measurement bound: $-\log(b) \geq -\log c(M_1, M_2)$.

We remark that in the two-dimensional case, since the quantity $\max_i c(u^1_i, u^2_j)$ becomes exactly the same as $\max_{i,j} c(u^1_i, u^2_j)$, our bound actually reduces to the two-measurement bound, which is not very interesting. However, this does not hold in higher dimensions.

Let us now consider an example of three measurements in three-dimensional space. The measurements are chosen explicitly as
$$ \{(1,0,0),\, (0,1,0),\, (0,0,1)\}, $$
$$ \{(1/\sqrt{2},\, 0,\, -1/\sqrt{2}),\, (0,1,0),\, (1/\sqrt{2},\, 0,\, 1/\sqrt{2})\}, $$
$$ \{(\sqrt{a},\, e^{i\phi}\sqrt{1-a},\, 0),\, (\sqrt{1-a},\, -e^{i\phi}\sqrt{a},\, 0),\, (0,0,1)\}. \quad (17) $$
The values of several bounds for fixed $\phi$, as functions of $a$, are shown in FIG. 2. These bounds include the maximal SCB, our bound of Theorem 2, and the direct-sum majorization (RPZ) bound due to Rudnicki et al. [16]. In this case, our bound is always better than the SCB and is complementary to the RPZ bound. We thus confirm that our result is non-trivial.

FIG. 2. (color online) Comparison of several bounds for fixed $\phi$: the maximal SCB (dashed green line), our bound of Theorem 2 (solid black line), and the RPZ bound (dotted blue line). The $x$-axis indicates the value of $a$ and the $y$-axis the values of the bounds.

Entropic uncertainty relations in the presence of quantum memory.—Recently, Berta et al. [18] introduced an entropic uncertainty relation in the presence of quantum memory:
$$ H(U|B) + H(V|B) \geq -\log c(U,V) + S(A|B). \quad (18) $$
Here, $A$ and $B$ represent the two particles of a bipartite system $\rho_{AB}$, which is in general mixed and may be entangled; $U$ and $V$ are two projective measurements applied to $A$, and $B$ is regarded as a quantum memory for system $A$. By definition, $H(U|B)$ is the conditional von Neumann entropy of the post-measurement state $\sum_i (|u_i\rangle\langle u_i| \otimes I)\rho_{AB}(|u_i\rangle\langle u_i| \otimes I)$, and $H(V|B)$ is defined similarly. The conditional entropy takes the form $S(A|B) = S(\rho_{AB}) - S(\rho_B)$. When $\rho_{AB}$ is pure, we find $H(U|B) = H(U) - S(\rho_B)$ and $H(V|B) = H(V) - S(\rho_B)$.

In quantum theory, the conditional entropy $S(A|B)$ can become negative, implying that $A$ and $B$ are entangled [27]. This inequality thus shows that the existence of the memory $B$ can help reduce the uncertainty. The conditional entropy also represents the partial quantum information associated with quantum state merging [28].
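The statement that a negative $S(A|B)$ signals entanglement can be illustrated with a minimal numerical sketch (assuming NumPy; the Bell state below is an illustrative choice, not an example taken from the paper). For a maximally entangled two-qubit state, $S(\rho_{AB}) = 0$ while $S(\rho_B) = 1$ bit, so $S(A|B) = -1$:

```python
import numpy as np

def vn_entropy(rho):
    """Von Neumann entropy S(rho) = -Tr(rho log2 rho) via eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

# Bell state |phi+> = (|00> + |11>)/sqrt(2); an assumed example state.
phi = np.zeros(4, dtype=complex)
phi[0] = phi[3] = 1 / np.sqrt(2)
rho_AB = np.outer(phi, phi.conj())

# Reduced state of B: partial trace over A (indices ordered (a, b, a', b')).
rho_B = rho_AB.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)

S_AB = vn_entropy(rho_AB)      # 0 for a pure state
S_B = vn_entropy(rho_B)        # 1 bit for a maximally mixed qubit
print("S(A|B) =", S_AB - S_B)  # negative conditional entropy
```

For a product state, by contrast, $S(A|B) = S(\rho_A) \geq 0$, and the memory $B$ gives no reduction of the bound in (18).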
An easier and more heuristic proof of this memory-assisted uncertainty inequality, and its generalization to tripartite systems, with one particle being measured and two particles used as memories, have been given by Coles et al. [21]. Using our method, we can now provide a generalization of the memory-assisted uncertainty inequality to multi-measurement cases from a different viewpoint. Our result is presented as follows.

Theorem 3. For a bipartite state $\rho_{AB}$ and $N$ projective measurements $\{M_i\}$ applied to $A$, we have
$$ \sum_{m=1}^N H(M_m|B) \geq -\log(b) + (N-1)\,S(A|B), \quad (19) $$
where $b$ is the same as in Theorem 2.

The proof is similar to that of Theorem 2: all we need to do is replace $\rho$ by $\rho_{AB}$ and follow the same procedure. On the other hand, as shown in [18], we remark that if the dimension of $B$ is taken to be zero, the uncertainty relation in the presence of quantum memory reduces to the memoryless case shown in (7).

Actually, this is not the only possible form of a multi-measurement memory-assisted inequality. For example, consider (7): if we interpret $\rho$ there as the subsystem $\rho_A$ of a pure bipartite state $\rho_{AB}$, then, using $S(\rho_A) = S(\rho_B)$, we straightforwardly obtain
$$ \sum_{m=1}^N H(M_m) \geq -\log(b) + (N-1)\,S(\rho_A) = -\log(b) + N S(\rho_B) + S(A|B). \quad (20) $$
One easily arrives at the following corollary after subtracting $N S(\rho_B)$ from both sides.

Corollary 2. For a bipartite pure state $\rho_{AB}$ and $N$ projective measurements $\{M_i\}$ applied to $A$, we have
$$ \sum_{m=1}^N H(M_m|B) \geq -\log(b) + S(A|B). \quad (21) $$

We may also construct an SCB for multiple measurements by repeatedly using the two-measurement inequality (18). Again, all these bounds are complementary to each other: when $S(A|B)$ is negative, the SCB from (18), or the special case (21) for pure states, can be tighter; when $S(A|B)$ is positive, the bound in (19) is tighter.

Discussions.—Entropic uncertainty relations for multiple measurements are fundamental in quantum physics and can be applied to general quantum key distribution protocols. We have presented general uncertainty relations for three different but related cases: Deutsch-type inequalities, MU-type inequalities, and the case in the presence of quantum memory. Non-trivial and easy-to-compute bounds are presented, which provide a more precise description of the uncertainty principle in quantum mechanics. The experimental realizations [19, 20] can also be implemented with more than two measurement settings.
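The bound of Theorem 2 is indeed easy to compute. As a hedged numerical sketch (assuming NumPy; the parameter values $a = 0.3$ and $\phi = \pi/4$ are illustrative choices, since the paper scans over $a$ in FIG. 2), the following code evaluates $b$ of Eq. (9) for the three bases of Eq. (17) and checks the inequality $\sum_m H(M_m) \geq -\log_2 b + 2S(\rho)$ on a random mixed state:

```python
import numpy as np

rng = np.random.default_rng(0)

def shannon(p):
    """Shannon entropy (base 2) of a probability vector."""
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

def vn_entropy(rho):
    """Von Neumann entropy S(rho), base 2, via eigenvalues."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

# The three bases of Eq. (17); rows are basis vectors. The values
# a = 0.3 and phi = pi/4 are illustrative assumptions, not from the paper.
a, phi = 0.3, np.pi / 4
M1 = np.eye(3, dtype=complex)
M2 = np.array([[1, 0, -1], [0, np.sqrt(2), 0], [1, 0, 1]],
              dtype=complex) / np.sqrt(2)
M3 = np.array([[np.sqrt(a), np.exp(1j * phi) * np.sqrt(1 - a), 0],
               [np.sqrt(1 - a), -np.exp(1j * phi) * np.sqrt(a), 0],
               [0, 0, 1]], dtype=complex)

# Overlap matrices c(u_i, v_j) = |<u_i|v_j>|^2.
c12 = np.abs(M1.conj() @ M2.T) ** 2
c23 = np.abs(M2.conj() @ M3.T) ** 2

# Bound of Eq. (9): b = max_k sum_j max_i[c(u1_i, u2_j)] c(u2_j, u3_k).
b = float((c12.max(axis=0) @ c23).max())

# Verify Theorem 2 on a random mixed state rho = G G^dag / Tr(G G^dag).
G = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
rho = G @ G.conj().T
rho /= np.trace(rho).real

lhs = sum(shannon(np.real(np.einsum('ik,kl,il->i', M.conj(), rho, M)))
          for M in (M1, M2, M3))
rhs = -np.log2(b) + 2 * vn_entropy(rho)
print(f"b = {b:.3f},  lhs = {lhs:.3f} >= rhs = {rhs:.3f}")
assert lhs >= rhs - 1e-9
```

Since $b \leq 1$ by construction, $-\log_2 b \geq 0$, and the check passes for any density matrix; scanning `a` over $(0,1)$ reproduces the kind of comparison shown in FIG. 2.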
Acknowledgement.—We thank A. Winter, X. J. Ren, and Y. C. Chang for useful discussions, and K. Życzkowski for correspondence. This work was supported by NSFC (11175248), NFFTBS (J1030310, J1103205), and grants from the Chinese Academy of Sciences.
Appendix A: Generalization of Deutsch’s inequality
We here prove Theorem 1 of the main article, which is not difficult. By the definition of the Rényi entropy, $H_\infty(\{p_i\}) = -\log(\max_i p_i)$. The l.h.s. of the inequality is then
$$ \sum_m H_\infty(M_m) = -\log\big(\max_{i_1} p^1_{i_1}\, \max_{i_2} p^2_{i_2} \cdots \max_{i_N} p^N_{i_N}\big) \quad (A1) $$
$$ = -\log\big[\max_{i_1,\ldots,i_N}\big(p^1_{i_1}\, p^2_{i_2} \cdots p^N_{i_N}\big)\big]. \quad (A2) $$
Here we use the same notation convention as in the main article: superscripts label the measurements and subscripts label the corresponding outcomes. To prove Theorem 1, it suffices to provide an upper bound on $\max_{i_1,\ldots,i_N}(p^1_{i_1} \cdots p^N_{i_N})$. We first do this for two measurements and then show that it is easily generalized.

Recall the notation for measurements and bases in the main article. We want to bound the term $\max_{i,j} |\langle u_i|\Phi\rangle\langle\Phi|u_j\rangle|$ by a larger, state-independent value, where $|\Phi\rangle$ denotes the state of the measured system. Note that we are ultimately interested in absolute values, so we can convert the discussion from a complex vector space to a real one. More precisely, for a given orthonormal basis $\{|k\rangle\}$ and any vector $|\eta\rangle = \sum_{k=1}^d \alpha_k |k\rangle$, define $|\tilde\eta\rangle := \sum_k |\alpha_k|\,|k\rangle$. The map $|\eta\rangle \mapsto |\tilde\eta\rangle$ sends this space to a subset of a real Euclidean space. Replacing every vector by its real image, all the upper bounds we obtain will also hold for the original vectors, because obviously
$$ |\langle a|b\rangle| = |a_1^* b_1 + a_2^* b_2 + \ldots + a_n^* b_n| \leq |a_1||b_1| + |a_2||b_2| + \ldots + |a_n||b_n| = \langle\tilde a|\tilde b\rangle. $$
The angle between two reduced vectors, which is always well defined in a Euclidean space, ranges from 0 to $\pi/2$. We have
$$ |\langle u_i|\Phi\rangle\langle\Phi|u_j\rangle| = |\langle u_i|\Phi_k\rangle\langle\Phi_k|u_j\rangle| \quad (A3) $$
$$ \leq \langle\tilde u_i|\tilde\Phi_k\rangle\langle\tilde\Phi_k|\tilde u_j\rangle \quad (A4) $$
$$ \leq |\cos(\theta_1)\cos(\theta_2)| = \frac{1}{2}\big|\cos(\theta_1+\theta_2) + \cos(\theta_1-\theta_2)\big| \quad (A5) $$
$$ \leq \frac{1}{2}\big|\cos(\theta_1+\theta_2) + 1\big| \quad (A6) $$
$$ = \frac{1}{2}\big(1 + |\langle\tilde u_i|\tilde u_j\rangle|\big), \quad (A7) $$
where $|\Phi_k\rangle$ is the projection of $|\Phi\rangle$ onto the plane spanned by $|u_i\rangle$ and $|u_j\rangle$, $\theta_1$ is the angle between $|\tilde u_i\rangle$ and $|\tilde\Phi_k\rangle$, and $\theta_2$ is that between $|\tilde u_j\rangle$ and $|\tilde\Phi_k\rangle$. Notice then that we can always choose a basis $\{|k\rangle\}$ defining the reduction such that $|\tilde u_i\rangle = |u_i\rangle$ and $|\tilde u_j\rangle = e^{i\alpha}|u_j\rangle$. Therefore,
$$ |\langle u_i|\Phi\rangle\langle\Phi|u_j\rangle| \leq \frac{1}{2}\big(1 + |\langle\tilde u_i|\tilde u_j\rangle|\big) = \frac{1}{2}\big(1 + |\langle u_i|u_j\rangle|\big). \quad (A8) $$
Taking the maximum here over $i$ and $j$ recovers the result of Deutsch [3], but we can go further. We have straightforwardly
$$ \sqrt{p^1_{i_1}\, p^2_{i_2} \cdots p^N_{i_N}} = \big|\langle u^1_{i_1}|\Phi\rangle \langle u^2_{i_2}|\Phi\rangle \cdots \langle u^N_{i_N}|\Phi\rangle\big| \quad (A9) $$
$$ = \sqrt{|\langle u^1_{i_1}|\Phi\rangle\langle\Phi|u^2_{i_2}\rangle|}\; \sqrt{|\langle u^2_{i_2}|\Phi\rangle\langle\Phi|u^3_{i_3}\rangle|} \cdots \sqrt{|\langle u^N_{i_N}|\Phi\rangle\langle\Phi|u^1_{i_1}\rangle|} \quad (A10) $$
$$ \leq \sqrt{\frac{1+\cos\theta_{1,2}}{2}}\; \sqrt{\frac{1+\cos\theta_{2,3}}{2}} \cdots \sqrt{\frac{1+\cos\theta_{N,1}}{2}} = \sqrt{\prod_{m=1}^N \frac{1+\cos\theta_{m,m+1}}{2}} \quad (A11) $$
$$ = \sqrt{\prod_{m=1}^N \frac{1+|\langle u^m_{i_m}|u^{m+1}_{i_{m+1}}\rangle|}{2}}. \quad (A12) $$
Again, $N+1$ in the superscripts or subscripts is identified with 1. Squaring the inequality above, then taking the maximum and the logarithm, yields exactly the desired result.

Appendix B: Generalization of the Inequality of Maassen and Uffink
We will here prove Theorem 2 of the main article, as well as its corollaries. Considering the complexity of the notation, we first provide the proof for the simplified case of only three measurements, and then generalize it by induction.
1. Entropic uncertainty relation for three measurements
For simplicity, we use the abbreviation $[\psi]$ to denote the projector $|\psi\rangle\langle\psi|$. In the proof of our result, we utilize the concept of quantum relative entropy: by definition, $S(\rho\|\sigma) := \mathrm{Tr}(\rho\log\rho) - \mathrm{Tr}(\rho\log\sigma)$. The method of our proof is inspired by Ref. [21]. Consider three projective measurements, namely $U = \{|u_i\rangle\}$, $V = \{|v_j\rangle\}$, and $W = \{|w_k\rangle\}$. We have the following inequality.

Theorem 4.
Denote by $H(\cdot)$ the Shannon entropy of the probability distribution of a measurement. We have the entropic uncertainty relation
$$ H(U) + H(V) + H(W) \geq -\log\Big(\max_i \sum_k \big(\max_j c(v_j, w_k)\big)\, c(w_k, u_i)\Big) + 2S(\rho), \quad (B1) $$
where $c(a_i, b_j) := |\langle a_i|b_j\rangle|^2$, and $S(\rho)$ is the von Neumann entropy of the state being measured.

Proof. First, we notice the following relation:
$$\begin{aligned} S\Big(\rho\,\Big\|\,\sum_j [v_j]\rho[v_j]\Big) &= \mathrm{Tr}(\rho\log\rho) - \mathrm{Tr}\Big(\rho\log\big(\textstyle\sum_j [v_j]\rho[v_j]\big)\Big) & (B3) \\ &= -S(\rho) - \mathrm{Tr}\Big(\rho\log\big(\textstyle\sum_j |v_j\rangle\, p_j\, \langle v_j|\big)\Big) \quad (p_j := \langle v_j|\rho|v_j\rangle) & (B4) \\ &= -S(\rho) - \mathrm{Tr}\Big(\rho\, \textstyle\sum_j |v_j\rangle \log p_j\, \langle v_j|\Big) & (B5) \\ &= -S(\rho) - \textstyle\sum_j \mathrm{Tr}\big(\rho\, |v_j\rangle \log p_j\, \langle v_j|\big) & (B6) \\ &= -S(\rho) - \textstyle\sum_j p_j \log p_j & (B7) \\ &= -S(\rho) + H(V). & (B8) \end{aligned}$$
We then have
$$\begin{aligned} -S(\rho) + H(V) &= S\Big(\rho\,\Big\|\,\textstyle\sum_j [v_j]\rho[v_j]\Big) & (B9) \\ &\geq S\Big(\textstyle\sum_k [w_k]\rho[w_k]\,\Big\|\,\textstyle\sum_{j,k} |w_k\rangle\, p_j\, c(w_k,v_j)\, \langle w_k|\Big) \quad \text{(explained below)} & (B10) \\ &= \mathrm{Tr}\Big(\textstyle\sum_k [w_k]\rho[w_k]\,\log\big(\textstyle\sum_k [w_k]\rho[w_k]\big)\Big) - \mathrm{Tr}\Big(\textstyle\sum_k [w_k]\rho[w_k]\,\log\big(\textstyle\sum_{j,k} |w_k\rangle\, p_j\, c(w_k,v_j)\langle w_k|\big)\Big) & (B11, B12) \\ &= -H(W) - \mathrm{Tr}\Big(\textstyle\sum_k [w_k]\rho[w_k]\,\log\big(\textstyle\sum_k |w_k\rangle\, \alpha_k\, \langle w_k|\big)\Big) \quad \Big(\alpha_k := \textstyle\sum_j p_j\, c(w_k,v_j)\Big) & (B13) \\ &= -H(W) - \mathrm{Tr}\Big(\rho\log\big(\textstyle\sum_k |w_k\rangle\, \alpha_k\, \langle w_k|\big)\Big) & (B14) \\ &= -H(W) + S\Big(\rho\,\Big\|\,\textstyle\sum_{j,k} |w_k\rangle\, p_j\, c(w_k,v_j)\langle w_k|\Big) + S(\rho). & (B15) \end{aligned}$$
The inequality (B10) invokes the monotonicity $S(\rho\|\sigma) \geq S(\mathcal{E}(\rho)\|\mathcal{E}(\sigma))$ (see page 208 of Ref. [25]) with $\mathcal{E}(\rho) = \sum_k [w_k]\rho[w_k]$. Thus we first get
$$ H(V) + H(W) \geq S\Big(\rho\,\Big\|\,\sum_{j,k} |w_k\rangle\, p_j\, c(w_k,v_j)\langle w_k|\Big) + 2S(\rho). \quad (B16) $$
Then we have
$$\begin{aligned} -2S(\rho) + H(V) + H(W) &\geq S\Big(\rho\,\Big\|\,\textstyle\sum_{j,k} |w_k\rangle\, p_j\, c(w_k,v_j)\langle w_k|\Big) & (B17) \\ &\geq S\Big(\textstyle\sum_i [u_i]\rho[u_i]\,\Big\|\,\textstyle\sum_{i,j,k} |u_i\rangle\, p_j\, c(w_k,v_j)\, c(w_k,u_i)\, \langle u_i|\Big) \quad \text{(monotonicity again)} & (B18) \\ &= S\Big(\textstyle\sum_i [u_i]\rho[u_i]\,\Big\|\,\textstyle\sum_i |u_i\rangle\, \beta_i\, \langle u_i|\Big) \quad \Big(\beta_i := \textstyle\sum_{j,k} p_j\, c(w_k,v_j)\, c(w_k,u_i)\Big) & (B19) \\ &\geq S\Big(\textstyle\sum_i [u_i]\rho[u_i]\,\Big\|\,\max_i(\beta_i)\, I\Big) & (B20) \\ &= S\Big(\textstyle\sum_i [u_i]\rho[u_i]\,\Big\|\,hI\Big) \quad (h := \max_i \beta_i) & (B21) \\ &= -H(U) - \mathrm{Tr}\Big(\textstyle\sum_i [u_i]\rho[u_i]\,\log(hI)\Big) & (B22) \\ &= -H(U) - \log(h)\cdot \mathrm{Tr}\Big(\textstyle\sum_i [u_i]\rho[u_i]\Big) & (B23) \\ &= -H(U) - \log(h). & (B24) \end{aligned}$$
We get
$$ H(U) + H(V) + H(W) \geq -\log(h) + 2S(\rho). \quad (B25) $$
However, the term $h$ ($h = \max_i \sum_{j,k} p_j\, c(w_k,v_j)\, c(w_k,u_i)$) is still state-dependent. To obtain a state-independent bound, we take the maximum over $j$ inside the summation. More precisely,
$$\begin{aligned} h &= \max_i \sum_{j,k} p_j\, c(w_k,v_j)\, c(w_k,u_i) & (B26) \\ &\leq \max_i \sum_{j,k} p_j\, \big(\max_j c(v_j,w_k)\big)\, c(w_k,u_i) & (B27) \\ &= \max_i \sum_k \big(\max_j c(v_j,w_k)\big)\, c(w_k,u_i), & (B28) \end{aligned}$$
using $\sum_j p_j = 1$. Finally,
$$ H(U) + H(V) + H(W) \geq -\log(b) + 2S(\rho), \quad (B29) $$
where $b = \max_i \sum_k (\max_j c(v_j,w_k))\, c(w_k,u_i)$. We note that, since the maximum over $i$ in the expression for $b$ is taken outside the summation over $k$, $b$ is always less than or equal to 1; thus our bound is non-trivial.

Actually, from the proof of this theorem we can further derive a corollary in the form of a weighted uncertainty relation. Recall (B16):
$$ H(V) + H(W) \geq S\Big(\rho\,\Big\|\,\sum_{j,k} |w_k\rangle\, p_j\, c(w_k,v_j)\langle w_k|\Big) + 2S(\rho) \quad (B30) $$
$$ = S(\rho) - \mathrm{Tr}\Big(\rho\log\Big(\sum_k |w_k\rangle \Big[\sum_j p_j\, c(w_k,v_j)\Big]\langle w_k|\Big)\Big). \quad (B31) $$
Obviously, then,
$$\begin{aligned} H(V) + H(W) + H(U) + H(W) &\geq S(\rho) - \mathrm{Tr}\Big(\rho\log\Big(\textstyle\sum_k |w_k\rangle\Big[\textstyle\sum_j p_j\, c(w_k,v_j)\Big]\langle w_k|\Big)\Big) \\ &\quad + S(\rho) - \mathrm{Tr}\Big(\rho\log\Big(\textstyle\sum_k |w_k\rangle\Big[\textstyle\sum_i q_i\, c(w_k,u_i)\Big]\langle w_k|\Big)\Big) \quad (q_i := \langle u_i|\rho|u_i\rangle) & (B32\text{-}B34) \\ &= 2S(\rho) - \mathrm{Tr}\Big(\rho\log\Big(\textstyle\sum_k |w_k\rangle\Big[\textstyle\sum_{i,j} p_j\, q_i\, c(w_k,u_i)\, c(w_k,v_j)\Big]\langle w_k|\Big)\Big) & (B35) \\ &\geq 2S(\rho) - \mathrm{Tr}\Big(\rho\log\big(\max_{i,j,k}\big[c(u_i,w_k)\, c(w_k,v_j)\big]\, I\big)\Big) & (B36) \\ &= 2S(\rho) - \log\Big(\max_{i,j,k}\big(c(u_i,w_k)\, c(w_k,v_j)\big)\Big). & (B37) \end{aligned}$$
This is actually Corollary 1 in the main article.
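Both the identity (B9) and the monotonicity step (B10) can be verified numerically. The sketch below (assuming NumPy; the random basis and random states are illustrative choices) applies the dephasing channel $\mathcal{E}(\rho) = \sum_k [w_k]\rho[w_k]$ in a random orthonormal basis and checks the two relations:

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_rho(d):
    """Random full-rank density matrix rho = G G^dag / Tr(G G^dag)."""
    G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = G @ G.conj().T
    return rho / np.trace(rho).real

def logm2(A):
    """Matrix logarithm (base 2) of a positive-definite Hermitian matrix."""
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.log2(w)) @ V.conj().T

def rel_ent(rho, sigma):
    """Quantum relative entropy S(rho||sigma), base 2."""
    return float(np.real(np.trace(rho @ (logm2(rho) - logm2(sigma)))))

d = 3
rho = rand_rho(d)

# Random orthonormal basis {|w_k>} (columns of W) from a QR decomposition.
W, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))

def dephase(rho):
    p = np.real(np.diag(W.conj().T @ rho @ W))  # p_k = <w_k|rho|w_k>
    return (W * p) @ W.conj().T                 # sum_k p_k |w_k><w_k|

# Identity (B9): S(rho || E(rho)) = H(W) - S(rho).
p = np.real(np.diag(W.conj().T @ rho @ W))
H_W = float(-np.sum(p * np.log2(p)))
w_ev = np.linalg.eigvalsh(rho)
S_rho = float(-np.sum(w_ev * np.log2(w_ev)))
assert np.isclose(rel_ent(rho, dephase(rho)), H_W - S_rho)

# Monotonicity (B10): S(E(rho)||E(sigma)) <= S(rho||sigma).
sigma = rand_rho(d)
assert rel_ent(dephase(rho), dephase(sigma)) <= rel_ent(rho, sigma) + 1e-9
print("identity (B9) and monotonicity under dephasing both verified")
```

The same two facts, applied once per additional measurement, are the only ingredients of the induction in the next subsection.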
2. Arbitrary number of measurements
We boil down the proof into two steps.
Lemma 1.
Given $N$ measurements $M_1, M_2, \ldots, M_N$, we have
$$ -N S(\rho) + \sum_{m=1}^N H(M_m) \geq S\Big(\rho\,\Big\|\,\sum_j |u^N_j\rangle\, \beta^N_j\, \langle u^N_j|\Big), \quad (B38) $$
where $\beta^N_j := \sum_{i_1,i_2,\ldots,i_{N-1}} c(\rho, u^1_{i_1})\, c(u^1_{i_1}, u^2_{i_2}) \cdots c(u^{N-1}_{i_{N-1}}, u^N_j)$. We have used the consistent notation $c(\rho, u^m_i) := \langle u^m_i|\rho|u^m_i\rangle = p^m_i$.

Proof. We have already obtained this relation for $N = 2$ (see equation (B16)), so we prove it by induction. Suppose that the relation is satisfied for $N$ measurements; we proceed to prove it for $N+1$. We have
$$\begin{aligned} -N S(\rho) + \sum_{m=1}^N H(M_m) &\geq S\Big(\textstyle\sum_k [u^{N+1}_k]\rho[u^{N+1}_k]\,\Big\|\,\textstyle\sum_{j,k} |u^{N+1}_k\rangle\, \beta^N_j\, c(u^N_j, u^{N+1}_k)\, \langle u^{N+1}_k|\Big) & (B40) \\ &= -H(M_{N+1}) - \mathrm{Tr}\Big[\textstyle\sum_k [u^{N+1}_k]\rho[u^{N+1}_k]\,\log\big(\textstyle\sum_k |u^{N+1}_k\rangle\, \eta_k\, \langle u^{N+1}_k|\big)\Big] & (B41) \\ &= -H(M_{N+1}) - \mathrm{Tr}\Big[\textstyle\sum_k [u^{N+1}_k]\rho[u^{N+1}_k] \textstyle\sum_l |u^{N+1}_l\rangle \log\eta_l\, \langle u^{N+1}_l|\Big] & (B42) \\ &= -H(M_{N+1}) - \textstyle\sum_k \mathrm{Tr}\Big(\rho\, |u^{N+1}_k\rangle \log\eta_k\, \langle u^{N+1}_k|\Big) & (B43\text{-}B45) \\ &= -H(M_{N+1}) - \mathrm{Tr}\Big(\rho\log\big(\textstyle\sum_k |u^{N+1}_k\rangle\, \eta_k\, \langle u^{N+1}_k|\big)\Big) + \mathrm{Tr}(\rho\log\rho) - \mathrm{Tr}(\rho\log\rho) & (B46) \\ &= -H(M_{N+1}) + S\Big(\rho\,\Big\|\,\textstyle\sum_k |u^{N+1}_k\rangle\, \eta_k\, \langle u^{N+1}_k|\Big) + S(\rho), & (B47) \end{aligned}$$
where (B40) again uses the monotonicity of the relative entropy under the channel $\mathcal{E}(\rho) = \sum_k [u^{N+1}_k]\rho[u^{N+1}_k]$. Notice that $\eta_k = \sum_j \beta^N_j\, c(u^N_j, u^{N+1}_k) = \beta^{N+1}_k$; rearranging therefore finishes the proof.

Just as in the previous proof, if we impose a state-independent and $j$-independent upper bound on $\beta^N_j$, we obtain a state-independent uncertainty relation. We simply have
$$\begin{aligned} -N S(\rho) + \sum_{m=1}^N H(M_m) &\geq S\Big(\rho\,\Big\|\,\textstyle\sum_j |u^N_j\rangle\, \beta^N_j\, \langle u^N_j|\Big) & (B48) \\ &= -S(\rho) - \mathrm{Tr}\Big(\rho\log\big(\textstyle\sum_j |u^N_j\rangle\, \beta^N_j\, \langle u^N_j|\big)\Big) & (B49) \\ &\geq -S(\rho) - \mathrm{Tr}(\rho\log(bI)) \quad (\text{since } b \geq \beta^N_j \text{ for all } j) & (B50) \\ &= -S(\rho) - \log(b). & (B51) \end{aligned}$$
This is nothing but Theorem 2 in the main article.

Appendix C: Uncertainty relation with quantum memory
To start, we justify some assertions about the quantum conditional entropy mentioned in the main article. By the definition of Berta et al. [18], $H(U|B)$ is the conditional von Neumann entropy of the post-measurement state $\sum_i (|u_i\rangle\langle u_i| \otimes I)\rho_{AB}(|u_i\rangle\langle u_i| \otimes I)$, where $U$ is a projective measurement performed on system $A$. An alternative definition is used in the work of Coles et al. [21]:
$$ H(U|B) := H(U) - \chi(U, B), \quad (C1) $$
where $\chi(U,B)$ is the Holevo quantity $S(\rho_B) - \sum_j p_j S(\rho_{B,j})$; here $\rho_{B,j}$ is the state of $B$ when measurement $U$ yields the $j$-th outcome (with probability $p_j$), i.e., $\rho_{B,j} = \mathrm{Tr}_A(|u_j\rangle\langle u_j|\rho_{AB})/p_j$.

To prove the equivalence of the two definitions, we use Lemma 1 of Ref. [29] (see also page 513 of Ref. [26]), which states that
$$ S\Big(\sum_j p_j \rho_j\Big) - \sum_j p_j S(\rho_j) \leq H(\{p_j\}), \quad (C2) $$
with equality if and only if the $\rho_j$ are mutually orthogonal. We then have
$$\begin{aligned} S\Big(\textstyle\sum_i (|u_i\rangle\langle u_i| \otimes I)\rho_{AB}(|u_i\rangle\langle u_i| \otimes I)\Big) - S(\rho_B) &= S\Big(\textstyle\sum_j p_j\, |u_j\rangle\langle u_j| \otimes \rho_{B,j}\Big) - S(\rho_B) & (C3, C4) \\ &= H(U) + \textstyle\sum_j p_j S(\rho_{B,j}) - S(\rho_B) & (C5) \\ &= H(U) - \chi(U, B). & (C6) \end{aligned}$$
When $\rho_{AB}$ is pure, we can always expand the state as $|\psi_{AB}\rangle = \sum_i \alpha_i |u_i\rangle|\psi_{B,i}\rangle$, so each $\rho_{B,i}$ is also pure. Therefore, in this special case, $H(U|B) = H(U) - S(\rho_B)$, which is used in the main article.

Now let us generalize the result of Berta et al. The idea is straightforward: we simply replace $\rho$ by $\rho_{AB}$ in the proof of Appendix B and see what happens. Most of the calculation is similar, so we only note the key points here. First, recall that we connected the information entropy of a measurement with the quantum relative entropy, i.e., $S(\rho\|\sum_j [v_j]\rho[v_j]) = -S(\rho) + H(V)$.
As for bipartite systems $\rho_{AB}$, we have
$$\begin{aligned} S\Big(\rho_{AB}\,\Big\|\,\textstyle\sum_j [v_j]\rho_{AB}[v_j]\Big) &= -S(\rho_{AB}) - \mathrm{Tr}\Big(\rho_{AB}\log\big(\textstyle\sum_j [v_j]\rho_{AB}[v_j]\big)\Big) & (C8) \\ &= -S(\rho_{AB}) + S\Big(\textstyle\sum_j [v_j]\rho_{AB}[v_j]\Big) & (C9) \\ &= -S(\rho_{AB}) + S(\rho_B) + S\Big(\textstyle\sum_j [v_j]\rho_{AB}[v_j]\Big) - S(\rho_B) & (C10) \\ &= H(V|B) - S(A|B). & (C11) \end{aligned}$$
Recall that $[v_j]$ is the measurement projector acting only on system $A$ (i.e., $[v_j] \otimes I$). This result is obviously analogous to the memoryless one. Then our standard method, with which the three-measurement inequality was proven, can be carried out in the same manner:
$$\begin{aligned} S\Big(\rho_{AB}\,\Big\|\,\textstyle\sum_j [v_j]\rho_{AB}[v_j]\Big) &\geq S\Big(\textstyle\sum_k [w_k]\rho_{AB}[w_k]\,\Big\|\,\textstyle\sum_{j,k} |w_k\rangle\, c(w_k,v_j)\, \langle v_j|\rho_{AB}|v_j\rangle\, \langle w_k|\Big) & (C13) \\ &= -S\Big(\textstyle\sum_k [w_k]\rho_{AB}[w_k]\Big) - \mathrm{Tr}\Big(\textstyle\sum_k [w_k]\rho_{AB}[w_k]\,\log\big(\textstyle\sum_{j,k} |w_k\rangle\, c(w_k,v_j)\langle v_j|\rho_{AB}|v_j\rangle\langle w_k|\big)\Big) & (C14) \\ &= -S\Big(\textstyle\sum_k [w_k]\rho_{AB}[w_k]\Big) - \mathrm{Tr}\Big(\rho_{AB}\log\big(\textstyle\sum_{j,k} |w_k\rangle\, c(w_k,v_j)\langle v_j|\rho_{AB}|v_j\rangle\langle w_k|\big)\Big) & (C15) \\ &= -S\Big(\textstyle\sum_k [w_k]\rho_{AB}[w_k]\Big) + S\Big(\rho_{AB}\,\Big\|\,\textstyle\sum_{j,k} |w_k\rangle\, c(w_k,v_j)\langle v_j|\rho_{AB}|v_j\rangle\langle w_k|\Big) + S(\rho_{AB}) & (C16) \\ &= -H(W|B) + S\Big(\rho_{AB}\,\Big\|\,\textstyle\sum_{j,k} |w_k\rangle\, c(w_k,v_j)\langle v_j|\rho_{AB}|v_j\rangle\langle w_k|\Big) + S(A|B). & (C17) \end{aligned}$$
Here the term $\langle v_j|\rho_{AB}|v_j\rangle$ is simply $p_j\rho_{B,j}$. Hence the result above closely parallels our previous one: one need only replace $H(\cdot)$ by $H(\cdot|B)$, $S(\rho)$ by $S(A|B)$, and $p_j$ by $p_j\rho_{B,j}$. Consequently, a generalized lemma corresponding to Lemma 1 is obtained easily.
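The equivalence of the two definitions of $H(U|B)$ proved above can also be checked numerically. The sketch below (assuming NumPy; the random two-qubit state and the computational-basis measurement on $A$ are illustrative choices) computes both definitions and confirms they agree:

```python
import numpy as np

rng = np.random.default_rng(2)

def vn(rho):
    """Von Neumann entropy (base 2) via eigenvalues."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

# Random full-rank two-qubit state rho_AB (an assumed example state).
G = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho_AB = G @ G.conj().T
rho_AB /= np.trace(rho_AB).real

T = rho_AB.reshape(2, 2, 2, 2)          # indices (a, b, a', b')
rho_B = T.trace(axis1=0, axis2=2)       # partial trace over A

# Measure A in the computational basis {|0>, |1>}: unnormalized
# conditional states p_j * rho_{B,j} and outcome probabilities.
blocks = [T[i, :, i, :] for i in range(2)]
p = np.array([np.trace(Bk).real for Bk in blocks])

# Definition 1 (Berta et al.): conditional entropy of the post-measurement
# state, which is block-diagonal in the measured A basis.
rho_post = np.zeros((4, 4), dtype=complex)
rho_post[0:2, 0:2] = blocks[0]
rho_post[2:4, 2:4] = blocks[1]
H_U_B_1 = vn(rho_post) - vn(rho_B)

# Definition 2 (Coles et al.): H(U|B) = H(U) - chi(U, B).
H_U = float(-np.sum(p * np.log2(p)))
chi = vn(rho_B) - sum(p[i] * vn(blocks[i] / p[i]) for i in range(2))
H_U_B_2 = H_U - chi

assert np.isclose(H_U_B_1, H_U_B_2)
print("both definitions of H(U|B) agree:", H_U_B_1)
```

The agreement is exactly the equality case of (C2), since the post-measurement blocks are mutually orthogonal.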
Lemma 2.
Given a bipartite state $\rho_{AB}$ and $N$ projective measurements $M_1, M_2, \ldots, M_N$ acting on system $A$, we have
$$ -N S(A|B) + \sum_{m=1}^N H(M_m|B) \geq S\Big(\rho_{AB}\,\Big\|\,\sum_j |u^N_j\rangle\, \beta^N_j\, \langle u^N_j|\Big), \quad (C18) $$
where $\beta^N_j := \sum_{i_1,i_2,\ldots,i_{N-1}} c(\rho_{AB}, u^1_{i_1})\, c(u^1_{i_1}, u^2_{i_2}) \cdots c(u^{N-1}_{i_{N-1}}, u^N_j)$. We have used the consistent notation $c(\rho_{AB}, u^m_i) := \langle u^m_i|\rho_{AB}|u^m_i\rangle = p^m_i\, \rho_{B,i}$, which is now an operator on $B$.

We further reduce the r.h.s. to a state-independent bound:
$$\begin{aligned} S\Big(\rho_{AB}\,\Big\|\,\textstyle\sum_j |u^N_j\rangle\, \beta^N_j\, \langle u^N_j|\Big) &= S\Big(\rho_{AB}\,\Big\|\,\textstyle\sum_j |u^N_j\rangle \Big[\textstyle\sum_{i_1,\ldots,i_{N-1}} c(\rho_{AB}, u^1_{i_1})\, c(u^1_{i_1}, u^2_{i_2}) \cdots c(u^{N-1}_{i_{N-1}}, u^N_j)\Big] \langle u^N_j|\Big) & (C20) \\ &\geq S\Big(\rho_{AB}\,\Big\|\,\textstyle\sum_j |u^N_j\rangle \Big[\textstyle\sum_{i_1,\ldots,i_{N-1}} \langle u^1_{i_1}|\rho_{AB}|u^1_{i_1}\rangle\, \max_{i_1}\big(c(u^1_{i_1}, u^2_{i_2})\big) \cdots c(u^{N-1}_{i_{N-1}}, u^N_j)\Big] \langle u^N_j|\Big) & (C21) \\ &= S\Big(\rho_{AB}\,\Big\|\,\textstyle\sum_j |u^N_j\rangle \Big[\textstyle\sum_{i_2,\ldots,i_{N-1}} \rho_B\, \max_{i_1}\big(c(u^1_{i_1}, u^2_{i_2})\big) \cdots c(u^{N-1}_{i_{N-1}}, u^N_j)\Big] \langle u^N_j|\Big) & (C22) \\ &\geq S\Big(\rho_{AB}\,\Big\|\,\textstyle\sum_j |u^N_j\rangle\, \big[b\,\rho_B\big]\, \langle u^N_j|\Big) & (C23) \\ &= S(\rho_{AB}\,\|\,b\, I \otimes \rho_B) & (C24) \\ &= -S(A|B) - \log(b). & (C25) \end{aligned}$$
A justification of the last step can be found in Eqs. (11.104)-(11.106) on page 521 of Ref. [26]. Therefore we have proven that
$$ \sum_{m=1}^N H(M_m|B) \geq -\log(b) + (N-1)\,S(A|B). \quad (C26) $$
This is exactly Theorem 3 in the main article.

[1] W. Heisenberg, Z. Phys. 43, 172 (1927).
[2] H. P. Robertson, Phys. Rev. 34, 163 (1929).
[3] D. Deutsch, Phys. Rev. Lett. 50, 631 (1983).
[4] H. Maassen and J. B. M. Uffink, Phys. Rev. Lett. 60, 1103 (1988).
[5] N. J. Cerf, M. Bourennane, A. Karlsson, and N. Gisin, Phys. Rev. Lett. 88, 127902 (2002).
[6] Z. X. Xiong, H. D. Shi, Y. N. Wang, L. Jing, J. Lei, L. Z. Mu, and H. Fan, Phys. Rev. A, 012334 (2012).
[7] H. Fan, Phys. Rev. Lett., 177905 (2004).
[8] H. Fan, Y. N. Wang, L. Jing, J. D. Yue, H. D. Shi, Y. L. Zhang, and L. Z. Mu, Phys. Rep. 544, 241 (2014).
[9] S. Bandyopadhyay, P. O. Boykin, V. Roychowdhury, and F. Vatan, Algorithmica 34, 512 (2002).
[10] S. Wehner and A. Winter, New J. Phys. 12, 025009 (2010).
[11] I. D. Ivanovic, J. Phys. A: Math. Gen., 363 (1992).
[12] J. Sanchez, Phys. Lett. A, 233 (1993).
[13] J. Sanchez-Ruiz, Phys. Lett. A, 125 (1995).
[14] S. Friedland, V. Gheorghiu, and G. Gour, Phys. Rev. Lett. 111, 230401 (2013).
[15] Z. Puchala, L. Rudnicki, and K. Zyczkowski, J. Phys. A: Math. Theor. 46, 272002 (2013).
[16] L. Rudnicki, Z. Puchala, and K. Zyczkowski, Phys. Rev. A 89, 052115 (2014).
[17] J. Kaniewski, M. Tomamichel, and S. Wehner, Phys. Rev. A 90, 012332 (2014).
[18] M. Berta, M. Christandl, R. Colbeck, J. M. Renes, and R. Renner, Nature Phys. 6, 659 (2010).
[19] C. F. Li, J. S. Xu, X. Y. Xu, K. Li, and G. C. Guo, Nature Phys. 7, 752 (2011).
[20] R. Prevedel, D. R. Hamel, R. Colbeck, K. Fisher, and K. J. Resch, Nature Phys. 7, 757 (2011).
[21] P. J. Coles, R. Colbeck, L. Yu, and M. Zwolak, Phys. Rev. Lett. 108, 210405 (2012).
[22] M. L. Hu and H. Fan, Phys. Rev. A, 022314 (2013); ibid., 014105 (2013).
[23] A. Rényi, Proceedings of the Fourth Berkeley Symposium on Mathematics, Statistics and Probability (University of California Press, Berkeley, CA, 1961), Vol. 1, p. 547.
[24] The details are presented in the supplementary material.
[25] V. Vedral, Rev. Mod. Phys. 74, 197 (2002).
[26] M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information (Cambridge University Press, Cambridge, England, 2000).
[27] N. J. Cerf and C. Adami, Phys. Rev. Lett. 79, 5194 (1997).
[28] M. Horodecki, J. Oppenheim, and A. Winter, Nature 436, 673 (2005).
[29] P. J. Coles, L. Yu, V. Gheorghiu, and R. B. Griffiths, Phys. Rev. A 83.