Entropy-power uncertainty relations : towards a tight inequality for all Gaussian pure states
Anaelle Hertz,* Michael G. Jabbour, and Nicolas J. Cerf
Centre for Quantum Information and Communication, École polytechnique de Bruxelles, CP 165, Université libre de Bruxelles, 1050 Brussels, Belgium
We show that a proper expression of the uncertainty relation for a pair of canonically-conjugate continuous variables relies on entropy power, a standard notion in Shannon information theory for real-valued signals. The resulting entropy-power uncertainty relation is equivalent to the entropic formulation of the uncertainty relation due to Bialynicki-Birula and Mycielski, but can be further extended to rotated variables. Hence, based on a reasonable assumption, we give a partial proof of a tighter form of the entropy-power uncertainty relation taking correlations into account and provide extensive numerical evidence of its validity. Interestingly, it implies the generalized (rotation-invariant) Schrödinger-Robertson uncertainty relation exactly as the original entropy-power uncertainty relation implies the Heisenberg relation. It is saturated for all Gaussian pure states, in contrast with hitherto known entropic formulations of the uncertainty principle.
I. INTRODUCTION
The uncertainty principle lies at the heart of quantum physics. It exhibits one of the key divergences between a classical and a quantum system. Classically, it is in principle possible to specify the precise value of all measurable quantities simultaneously in a given state of a system. In contrast, whenever two quantum observables do not commute, it is impossible to define a quantum state for which their values are simultaneously specified with infinite precision. A paradigmatic example is given by Heisenberg's original formulation of the uncertainty principle expressed in terms of variances of two canonically-conjugate variables [1, 2], such as position $\hat{x}$ and momentum $\hat{p}$, which was later generalized to a rotation-invariant form by Schrödinger [3] and Robertson [4]. A different kind of uncertainty relation, originated by Bialynicki-Birula and Mycielski [5] again for canonically-conjugate variables, relies on Shannon entropy instead of variances as a measure of uncertainty (it was later on developed for discrete observables of finite-dimensional systems [6–8], but we restrict to continuous-variable observables here).

This entropic formulation of the uncertainty principle has recently attracted much attention in quantum information sciences because entropies are the natural quantities of interest in this area (see [9, 10] for a survey). In particular, an extended version of the entropic uncertainty relation was derived, where some available quantum side information (e.g., a quantum memory) is taken into account [11, 12]. It expresses the tradeoff between the information that two parties may have on non-commuting observables, which is of particular relevance to quantum key distribution. A variant of this uncertainty relation formulated in terms of smooth entropies [13] indeed provides a very useful tool for finite-key security analysis [14], going beyond asymptotic proofs.
In the special case of continuous-variable quantum key distribution, the original entropic uncertainty relation [5] was first applied to proving the optimality of Gaussian individual attacks in the asymptotic key limit [15]. More recently, a finite-key analysis for certain continuous-variable protocols was performed based on the smooth-entropy formalism extended to infinite dimensions [16]. Entropic uncertainty relations find other applications, for example, in the context of separability criteria. The Duan-Simon separability criteria [17, 18] based on variances for continuous-variable systems can be reformulated with entropies [19], yielding a more sensitive detection of entanglement in some cases. Even more generally, a deep conceptual link between the entropic uncertainty relation and the wave-particle duality has been pointed out [20], which emphasizes the pivotal role of entropies in the uncertainty principle.

In this article, we investigate whether tighter entropic uncertainty relations can be derived, which, by taking correlations into account, are saturated for all Gaussian pure states (in analogy with the Schrödinger-Robertson uncertainty relation). To reach this goal, we make use of the entropy power, which is a standard notion in Shannon information theory for real-valued signals. In Section II, we first review variance- and entropy-based uncertainty relations, and then define what we coin the entropy-power uncertainty relation for a pair of canonically-conjugate variables, namely $N_x N_p \ge (\hbar/2)^2$, where $N_x$ and $N_p$ are entropy powers. It trivially implies the Heisenberg relation as a simple consequence of the definition of entropy power (actually, they coincide for Gaussian states). Then, in Section III, we find an extended form of the entropy-power uncertainty relation, which is stronger than the regular form for rotated variables as it builds on the covariance matrix $\gamma$.

* Electronic address: [email protected]
It reads as
$$N_x N_p \ge \frac{\sigma_x^2\sigma_p^2}{|\gamma|}\left(\frac{\hbar}{2}\right)^2 \qquad (1)$$
where $\sigma_x^2$ and $\sigma_p^2$ are variances. It is partially proven by making use of variational calculus, supplemented with some natural assumption on the concavity of the uncertainty functional. We also find an extended version of the above entropy-power (or entropic) uncertainty relation that is valid for $n$ modes and is saturated for all $n$-mode Gaussian pure states (the proof is given in Appendix B). In Appendix A, we conduct extensive numerical tests in order to illustrate the validity of our extended uncertainty relations and conjectured concavity.

II. FROM VARIANCE-BASED TO ENTROPY-POWER UNCERTAINTY RELATIONS

A. Variance-based uncertainty relations
The original uncertainty relation, due to Heisenberg [1] and Kennard [2], relies on the variances of $\hat{x}$ and $\hat{p}$. In the rest of this paper, we use quantum optics notations, so variables $\hat{x}$ and $\hat{p}$ stand for the quadrature components of a bosonic field (but they can, of course, also be viewed as the position and momentum variables of a mechanical degree of freedom). Using $[\hat{x},\hat{p}] = i\hbar$, the Heisenberg uncertainty relation is written as
$$\sigma_x^2\sigma_p^2 \ge \left(\frac{\hbar}{2}\right)^2 \qquad (2)$$
with variances $\sigma_x^2 = \langle(\hat{x}-\bar{x})^2\rangle$ and $\sigma_p^2 = \langle(\hat{p}-\bar{p})^2\rangle$, and mean values $\bar{x} = \langle\hat{x}\rangle$ and $\bar{p} = \langle\hat{p}\rangle$. Here, $\langle\cdot\rangle \equiv \mathrm{Tr}(\hat{\rho}\,\cdot)$ denotes the expectation value of "$\cdot$" in quantum state $\hat{\rho}$. With this convention, the vacuum noise variances are $\sigma^2_{x,\mathrm{vac}} = \sigma^2_{p,\mathrm{vac}} = \hbar/2$. Relation (2) is invariant under $(x,p)$-displacements in phase space, since it only depends on central moments (i.e., second-order moments of the deviations from the means). Furthermore, it is saturated by all pure Gaussian states provided that they are squeezed in the $x$ or $p$ direction only. More precisely, if we define the covariance matrix
$$\gamma = \begin{pmatrix} \sigma_x^2 & \sigma_{xp} \\ \sigma_{xp} & \sigma_p^2 \end{pmatrix} \qquad (3)$$
where $\sigma_{xp} = \langle\{\hat{x},\hat{p}\}\rangle/2 - \bar{x}\bar{p}$ is the symmetrized form of the central second-order cross moment, we note that the Heisenberg relation is saturated for pure Gaussian states provided the principal axes of $\gamma$ are aligned with the $x$- and $p$-axes, namely $\sigma_{xp} = 0$. The principal axes are the $x_\theta$- and $p_\theta$-axes for which $\sigma_{x_\theta p_\theta} = 0$, where
$$\hat{x}_\theta = \cos\theta\,\hat{x} + \sin\theta\,\hat{p}, \qquad \hat{p}_\theta = -\sin\theta\,\hat{x} + \cos\theta\,\hat{p} \qquad (4)$$
are obtained by rotating $x$ and $p$ by an angle $\theta$ as shown in Figure 1.

Figure 1: Principal axes $(x_\theta, p_\theta)$ of the covariance matrix $\gamma$, defined in such a way that $\sigma_{x_\theta p_\theta} = 0$.

The Heisenberg relation was improved by Schrödinger and Robertson [3, 4] by taking into account the anticommutator between the observables.¹ For two canonically-conjugate variables $\hat{x}$ and $\hat{p}$, it is written as
$$|\gamma| \ge \left(\frac{\hbar}{2}\right)^2 \qquad (5)$$
where $|\gamma| = \sigma_x^2\sigma_p^2 - \sigma_{xp}^2$ is the determinant of the covariance matrix. Importantly, relation (5) is saturated by all pure Gaussian states, regardless of the orientation of the principal axes of the covariance matrix. Thus, this uncertainty relation has the nice property of being invariant under all Gaussian unitary transformations (displacements and symplectic transformations).

¹ For any pair of observables $\hat{A}$ and $\hat{B}$, the generalized form of the uncertainty relation is $\sigma_A^2\sigma_B^2 \ge \frac{1}{4}|\langle[\hat{A},\hat{B}]\rangle|^2 + \frac{1}{4}|\langle\{\hat{A}',\hat{B}'\}\rangle|^2$, where $\hat{A}' = \hat{A} - \langle\hat{A}\rangle$ and $\hat{B}' = \hat{B} - \langle\hat{B}\rangle$.

B. Entropy-based uncertainty relations
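Before turning to entropies, the variance-based relations (2) and (5) above can be checked numerically for a rotated squeezed vacuum state, whose covariance matrix is $\gamma = M\gamma_{\mathrm{vac}}M^T$ with a symplectic $M$. A minimal sketch (with $\hbar = 1$; the values of the squeezing $r$ and rotation angle $\theta$ are arbitrary illustrative choices, not parameters from the text):

```python
import numpy as np

hbar = 1.0
r, theta = 0.8, 0.3  # arbitrary squeezing and rotation parameters

# Rotated squeezed vacuum: gamma = M gamma_vac M^T, M = R(theta) diag(e^-r, e^r) R(-theta)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
M = R @ np.diag([np.exp(-r), np.exp(r)]) @ R.T
gamma_vac = (hbar / 2) * np.eye(2)
gamma = M @ gamma_vac @ M.T

var_x, var_p, sigma_xp = gamma[0, 0], gamma[1, 1], gamma[0, 1]
det_gamma = np.linalg.det(gamma)

# Schrodinger-Robertson bound |gamma| >= (hbar/2)^2 is saturated for this pure Gaussian
# state for any theta, while Heisenberg holds strictly whenever sigma_xp != 0.
print(det_gamma, (hbar / 2) ** 2)
print(var_x * var_p, var_x * var_p >= (hbar / 2) ** 2)
```

Since $\det M = 1$, the determinant $|\gamma| = (\hbar/2)^2$ is rotation-independent, whereas $\sigma_x^2\sigma_p^2 = |\gamma| + \sigma_{xp}^2$ exceeds the bound as soon as the principal axes are tilted.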
The uncertainty principle may also be expressed using the entropy as a measure of uncertainty. In particular, Bialynicki-Birula and Mycielski [5] proved the following entropic uncertainty relation
$$h(x) + h(p) \ge \ln(\pi e\hbar) \qquad (6)$$
where $h(x)$ and $h(p)$ are the Shannon differential entropies of the $x$- and $p$-quadratures, namely
$$h(x) = -\int W_x(x)\ln W_x(x)\,\mathrm{d}x, \qquad h(p) = -\int W_p(p)\ln W_p(p)\,\mathrm{d}p. \qquad (7)$$
Here, $W_x(x) = \int W(x,p)\,\mathrm{d}p$ and $W_p(p) = \int W(x,p)\,\mathrm{d}x$ denote the marginals of the Wigner function of state $\hat{\rho}$,
$$W(x,p) = \frac{1}{2\pi\hbar}\int_{-\infty}^{\infty} e^{-ipy/\hbar}\,\langle x + y/2|\hat{\rho}|x - y/2\rangle\,\mathrm{d}y, \qquad (8)$$
so they are classical probability densities.

Note that Eq. (6) may look wrong at first sight as we take the logarithm of a quantity with dimension $\hbar$. This may be viewed as a feature of the differential entropy itself, since we have a similar issue in Eq. (7), but the problem actually cancels out in Eq. (6) since we have dimension $\hbar$ on both sides of the equality. More rigorously, Eq. (6) may be understood as the limit of a discretized version of the entropic uncertainty relation, with a discretization step tending to zero [9]. This problem was absent in the original expression of this uncertainty relation [5] because the variable $k = p/\hbar$ was considered instead of $p$, giving $h(x) + h(k) \ge \ln(\pi e)$. Being aware of this slight abuse of notation, we prefer to keep $\hbar$ in the rest of this paper.

Just as the Heisenberg uncertainty relation, Eq. (6) is saturated by pure Gaussian states whose principal axes are aligned with the $x$- and $p$-axes (i.e., $\sigma_{xp} = 0$). Indeed, for a Gaussian-distributed variable $x_G$ of variance $\sigma_x^2$ and a Gaussian-distributed variable $p_G$ of variance $\sigma_p^2$, we have
$$h(x_G) = \tfrac{1}{2}\ln(2\pi e\sigma_x^2), \qquad h(p_G) = \tfrac{1}{2}\ln(2\pi e\sigma_p^2). \qquad (9)$$
Hence, summing up these two entropies and using the fact that $\sigma_x^2\sigma_p^2 = (\hbar/2)^2$ for any pure Gaussian state whose principal axes are aligned with the $x$- and $p$-axes, we get
$$h(x_G) + h(p_G) = \ln(\pi e\hbar). \qquad (10)$$
Remark that we may also re-express the entropic uncertainty relation in terms of relative entropies.² More precisely, using a measure of non-Gaussianity that relies on the relative entropy [21], we have
$$D(x\|x_G) = h(x_G) - h(x) \ge 0, \qquad D(p\|p_G) = h(p_G) - h(p) \ge 0, \qquad (11)$$
so that Eq. (6) can be rewritten as
$$D(x\|x_G) + D(p\|p_G) \le \tfrac{1}{2}\ln\!\left(\frac{\sigma_x^2\sigma_p^2}{(\hbar/2)^2}\right). \qquad (12)$$
We see immediately that if the Heisenberg relation is saturated, $\sigma_x^2\sigma_p^2 = (\hbar/2)^2$, then $D(x\|x_G) = D(p\|p_G) = 0$, which means that the $x$- and $p$-quadratures must both be Gaussian distributed. Thus, as emphasized in ref. [22], the entropic uncertainty relation may also be viewed as an improved version of the Heisenberg relation where the lower bound is lifted up by exploiting an entropic measure of the non-Gaussianity of the state, namely
$$\sigma_x^2\sigma_p^2 \ge \left(\frac{\hbar}{2}\right)^2 e^{2D(x\|x_G)+2D(p\|p_G)}. \qquad (13)$$

² The relative entropy between two probability densities $f(x)$ and $g(x)$ is defined as $D(f\|g) = \int f(x)\ln(f(x)/g(x))\,\mathrm{d}x$. It exhibits the property that $D(f\|g) \ge 0$, and $D(f\|g) = 0$ if and only if $f(x) = g(x)$, $\forall x$ (almost everywhere).

C. Entropy-power uncertainty relations
We will now show that it is possible to rewrite the entropic uncertainty relation in a form similar to the one expressed in terms of variances, provided we make use of the notion of entropy power.³ The entropy powers of the $x$- and $p$-quadratures are defined as
$$N_x = \frac{1}{2\pi e}\,e^{2h(x)}, \qquad N_p = \frac{1}{2\pi e}\,e^{2h(p)}, \qquad (14)$$
and we have $N_x = \sigma_x^2$ and $N_p = \sigma_p^2$ if and only if the $x$- and $p$-quadratures are Gaussian distributed. Thus, Eq. (6) can be simply reexpressed as
$$N_x N_p \ge \left(\frac{\hbar}{2}\right)^2, \qquad (15)$$
which is what we call an entropy-power uncertainty relation for a pair of canonically-conjugate variables, as presented in the introduction. It closely resembles the Heisenberg relation (2), but with entropy powers instead of variances. Since $N_x \le \sigma_x^2$ and $N_p \le \sigma_p^2$, which reflects the fact that the Gaussian distribution maximizes the entropy for a fixed variance, we have the chain of inequalities
$$\sigma_x^2\sigma_p^2 \ge N_x N_p \ge \left(\frac{\hbar}{2}\right)^2. \qquad (16)$$
Hence, the entropy-power uncertainty relation implies the Heisenberg uncertainty relation, and they coincide for Gaussian $x$- and $p$-distributions (this was already mentioned in [5]). This can also be connected to relative entropies as a measure of non-Gaussianity. From the definition of $N_x$ and $N_p$, we get
$$h(x) = \tfrac{1}{2}\ln(2\pi e N_x), \qquad h(p) = \tfrac{1}{2}\ln(2\pi e N_p), \qquad (17)$$
which implies that
$$D(x\|x_G) = \tfrac{1}{2}\ln\!\left(\frac{\sigma_x^2}{N_x}\right), \qquad D(p\|p_G) = \tfrac{1}{2}\ln\!\left(\frac{\sigma_p^2}{N_p}\right), \qquad (18)$$
or equivalently
$$\sigma_x^2 = N_x\,e^{2D(x\|x_G)}, \qquad \sigma_p^2 = N_p\,e^{2D(p\|p_G)}. \qquad (19)$$
It is clear that Eq. (15) becomes more stringent than Eq. (2) as soon as we deviate from a Gaussian state.

III. EXTENDED FORMS OF ENTROPIC UNCERTAINTY RELATIONS

A. Motivation
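To make the entropy-power bound $N_X \le \sigma_X^2$ of the previous section concrete before extending it, here is a minimal numerical sketch using a uniform density (an illustrative classical example of our own choosing, not drawn from the text; $\hbar$ plays no role here):

```python
import numpy as np

# Uniform density on [0, a]: differential entropy h = ln(a), variance sigma^2 = a^2/12.
a = 2.0
h = np.log(a)
N = np.exp(2 * h) / (2 * np.pi * np.e)  # entropy power, as in Eq. (14)
var = a ** 2 / 12

# The Gaussian maximizes entropy at fixed variance, so N <= var;
# for a uniform density the ratio N/var equals 12/(2*pi*e), about 0.70.
print(N, var, N / var)
```

The gap between $N$ and $\sigma^2$ quantifies the non-Gaussianity of the density, in line with Eq. (18).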
Our goal is to address the problem that, unlike the Schrödinger-Robertson uncertainty relation, the entropic uncertainty relation (6) – or equivalently the entropy-power uncertainty relation (15) – is not saturated by all pure Gaussian states but only by those whose principal axes are aligned with the $x$- and $p$-axes. In other words, we would like to make Eq. (6) or (15) depend on the possible correlations between $x$ and $p$ (as witnessed, for instance, by $\sigma_{xp} \ne 0$). Ideally, the new inequality should have the property of being invariant under all Gaussian unitary transformations (displacements and symplectic transformations) and being saturated by all pure Gaussian states, regardless of the orientation of the principal axes.

A first natural idea is to make use of the joint differential entropy, which is defined as
$$h(x,p) = -\int f(x,p)\ln f(x,p)\,\mathrm{d}x\,\mathrm{d}p \qquad (20)$$
where $f(x,p)$ is the joint probability density of the random variables $x$ and $p$. The joint entropy can also be expressed as $h(x,p) = h(x) + h(p) - I(x:p)$, where $I(x:p) \ge 0$ is the mutual information, so that replacing $h(x) + h(p)$ with $h(x,p)$ and moving the mutual information $I(x:p)$ to the right-hand side of the inequality corresponds to an improvement of the lower bound. Moreover, $h(x,p)$ has the invariance property that we seek. Indeed, if we transform the coordinates according to $(x'\ p')^T = S\cdot(x\ p)^T$, where $S$ is the transformation matrix, the joint differential entropy transforms as [23]
$$h(x',p') = h(x,p) + \ln|S|. \qquad (21)$$
Thus, if $S$ corresponds to a symplectic transformation, $|S| = 1$, then the joint differential entropy remains invariant. Of course, $h(x,p)$ is also invariant under $(x,p)$-displacements, so it looks like a good uncertainty functional.

However, we deal with quantum states, so the Wigner function $W(x,p)$ is not a genuine probability density and may admit negative values. Hence, the joint differential entropy of $W(x,p)$ is not always defined (one would need to compute the logarithm of negative values), and neither is the mutual information $I(x:p)$. Nevertheless, we conjecture that the joint differential entropy obeys a valid uncertainty relation if we restrict to states admitting a Wigner function that is non-negative everywhere, namely
$$h(x,p) \ge \ln(\pi e\hbar) \qquad \forall\ \text{states s.t.}\ W(x,p) \ge 0. \qquad (22)$$
This conjecture can equivalently be written as
$$h(x) + h(p) \ge \ln(\pi e\hbar) + I(x:p) \qquad \forall\ \text{states s.t.}\ W(x,p) \ge 0, \qquad (23)$$
which is an improvement over Eq. (6) since $I(x:p) \ge 0$. Note that this improvement may hold even when the correlation between $x$ and $p$ is not accessible via the second-order moments ($\sigma_{xp} = 0$) but via the mutual information $I(x:p)$ only.

³ The entropy power $N(X)$ of a real-valued random variable $X$ is defined as the variance of a Gaussian-distributed random variable having the same entropy as $X$ (the mean of $X$ plays no role since the entropy is translation-invariant). Since the distribution with highest entropy for a given variance is the Gaussian distribution, $N(X) \le \sigma_X^2$, the equality being reached if and only if $X$ is Gaussian distributed.

B. Tight entropy-power uncertainty relation involving the covariance matrix
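The transformation law (21) underlying this construction is easy to verify numerically for Gaussian densities, where $h(x,p) = \tfrac{1}{2}\ln((2\pi e)^2|\gamma|)$. A minimal sketch (the covariance matrix and the two transformations below are arbitrary illustrative choices of our own):

```python
import numpy as np

def joint_entropy_gaussian(gamma):
    # Joint differential entropy of a bivariate Gaussian with covariance matrix gamma
    return 0.5 * np.log((2 * np.pi * np.e) ** 2 * np.linalg.det(gamma))

gamma = np.array([[1.3, 0.4], [0.4, 0.9]])   # arbitrary valid covariance matrix
S_symp = np.array([[1.0, 0.7], [0.0, 1.0]])  # shear with |S| = 1 (symplectic in 2D)
S_gen = np.diag([2.0, 1.0])                  # |S| = 2, not symplectic

h0 = joint_entropy_gaussian(gamma)
h1 = joint_entropy_gaussian(S_symp @ gamma @ S_symp.T)  # unchanged: ln|S| = 0
h2 = joint_entropy_gaussian(S_gen @ gamma @ S_gen.T)    # shifted by ln|S| = ln 2
print(h1 - h0, h2 - h0)
```

Since the covariance matrix transforms as $\gamma' = S\gamma S^T$, the determinant picks up a factor $|S|^2$ and the entropy shifts by exactly $\ln|S|$, as Eq. (21) states.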
Equations (22) and (23) are not valid for states with negative Wigner functions, but they give us a hint on how to proceed in order to derive an entropic uncertainty relation that is valid for all states and takes correlations into account. While the joint entropy and mutual information are not defined for all states, they are well defined for Gaussian states (since their Wigner function is always positive). In particular, the Gaussian mutual information is expressed as a function of the covariance matrix,
$$I_G(x:p) = \tfrac{1}{2}\ln\!\left(\sigma_x^2\sigma_p^2/|\gamma|\right) \ge 0. \qquad (24)$$
We obtain our tight entropic uncertainty relation simply by substituting $I(x:p)$ with $I_G(x:p)$ in Eq. (23), namely
$$h(x) + h(p) - \tfrac{1}{2}\ln\!\left(\sigma_x^2\sigma_p^2/|\gamma|\right) \ge \ln(\pi e\hbar). \qquad (25)$$
We will show below (under some assumptions) that this inequality holds for all states, regardless of whether the Wigner function is positive everywhere or not. Unlike Eq. (22), however, it is not invariant under rotations. Note that $I_G(x:p)$ vanishes if the principal axes of the covariance matrix are the $x$- and $p$-axes, i.e. $\sigma_{xp} = 0$, so that Eq. (25) reduces to the regular entropic uncertainty relation (6) in this case.

As before, it is useful to rewrite our new relation in terms of entropy powers (as we presented it in the introduction), resulting in
$$N_x N_p \ge \frac{\sigma_x^2\sigma_p^2}{|\gamma|}\left(\frac{\hbar}{2}\right)^2 \qquad (26)$$
which can be viewed as an improved version of the entropy-power uncertainty relation (15), where the lower bound $(\hbar/2)^2$ is lifted up when the principal axes differ from the $x$- and $p$-axes ($\sigma_{xp} \ne 0$). If the principal axes correspond to the $x$- and $p$-axes, we recover Eq. (15). Alternatively, we may also reexpress our new relation as
$$\frac{N_x N_p}{\sigma_x^2\sigma_p^2}\,|\gamma| \ge \left(\frac{\hbar}{2}\right)^2. \qquad (27)$$
Then, using $N_x \le \sigma_x^2$ and $N_p \le \sigma_p^2$, we see that our tight entropy-power inequality (26) implies the Schrödinger-Robertson uncertainty relation, namely
$$|\gamma| \ge \frac{N_x N_p}{\sigma_x^2\sigma_p^2}\,|\gamma| \ge \left(\frac{\hbar}{2}\right)^2. \qquad (28)$$
These two inequalities coincide for Gaussian $x$- and $p$-distributions. Furthermore, they are both saturated for pure Gaussian states regardless of the orientation of the principal axes (since $|\gamma| = (\hbar/2)^2$ and $N_x = \sigma_x^2$, $N_p = \sigma_p^2$). In addition, we may reexpress Eq. (26) as
$$|\gamma| \ge \frac{\sigma_x^2\sigma_p^2}{N_x N_p}\left(\frac{\hbar}{2}\right)^2 \qquad (29)$$
which can be viewed as an improved version of the Schrödinger-Robertson uncertainty relation where the lower bound $(\hbar/2)^2$ is lifted up when the $x$- and $p$-distributions deviate from Gaussian distributions. In terms of non-Gaussianity measures based on relative entropies, it transforms into
$$D(x\|x_G) + D(p\|p_G) \le \ln\!\left(\frac{\sqrt{|\gamma|}}{\hbar/2}\right), \qquad (30)$$
which is the counterpart of Eq. (12) but having replaced $\sigma_x^2\sigma_p^2$ with $|\gamma|$, just as we do when going from the Heisenberg to the Schrödinger-Robertson relation. It also corresponds to a stronger version of Eq. (13), which reads
$$|\gamma|^{1/2} \ge \frac{\hbar}{2}\,e^{D(x\|x_G)+D(p\|p_G)}. \qquad (31)$$
To be complete, let us mention that we can express our tight entropic uncertainty relation (25) as
$$h(x) + h(p) \ge h(x_G) + h(p_G) + \ln(\mu_G) \qquad (32)$$
where $x_G$ ($p_G$) is Gaussian distributed with variance $\sigma_x^2$ ($\sigma_p^2$) and $\mu_G = \mathrm{tr}\,\rho_G^2$ is the purity of the Gaussian state $\rho_G$ associated to the covariance matrix $\gamma$.

Finally, note that our conjectured rotation-invariant uncertainty relation (22) based on the joint entropy is obviously equivalent to Eq. (25) for Gaussian states, so in both cases the bound is reached by any pure Gaussian state, regardless of the orientation of the principal axes. Thus, by taking the exponential of the joint entropy $h(x,p)$ and using the fact that the maximum entropy is reached for a Gaussian distribution, a similar derivation shows that relation (22) also implies the Schrödinger-Robertson uncertainty relation.

C. Partial proof of relation (25)
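Before turning to the variational argument, the claimed saturation of relation (25) by rotated squeezed vacuum states can be checked directly, since a Gaussian state has Gaussian marginals with $h = \tfrac{1}{2}\ln(2\pi e\sigma^2)$. A minimal numerical sketch ($\hbar = 1$; the values of $r$ and $\theta$ are arbitrary illustrative choices):

```python
import numpy as np

hbar = 1.0
r, theta = 1.2, 0.5  # arbitrary squeezing and rotation

# Covariance matrix of a rotated squeezed vacuum state
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
M = R @ np.diag([np.exp(-r), np.exp(r)]) @ R.T
gamma = M @ ((hbar / 2) * np.eye(2)) @ M.T
vx, vp, det = gamma[0, 0], gamma[1, 1], np.linalg.det(gamma)

# Gaussian marginals: h(x) = (1/2) ln(2*pi*e*sigma_x^2), likewise for p.
# The correlation term of Eq. (25) exactly compensates the entropy increase
# caused by the rotation, so the bound ln(pi*e*hbar) is met with equality.
lhs = (0.5 * np.log(2 * np.pi * np.e * vx)
       + 0.5 * np.log(2 * np.pi * np.e * vp)
       - 0.5 * np.log(vx * vp / det))
print(lhs, np.log(np.pi * np.e * hbar))
```

Algebraically, the variance factors cancel and the left-hand side reduces to $\ln(2\pi e) + \tfrac{1}{2}\ln|\gamma| = \ln(\pi e\hbar)$ for any pure Gaussian state.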
We now give a partial proof of our tight entropic uncertainty relation (25) by use of a variational method, in analogy to the procedure used in ref. [25] to prove a noise-dependent entropic uncertainty relation. More precisely, we will prove that any squeezed vacuum state rotated by an arbitrary angle is a local minimum of the uncertainty functional
$$F(\hat{\rho}) = h(x) + h(p) - \tfrac{1}{2}\ln\!\left(\sigma_x^2\sigma_p^2/|\gamma|\right). \qquad (33)$$
Since $F(\hat{\rho})$ is invariant under $(x,p)$-displacements, this will imply that all Gaussian pure states are similarly local minima. We assume that these are the unique solutions of our minimization problem. By assuming that the uncertainty functional $F(\hat{\rho})$ is concave in $\hat{\rho}$, which we have verified numerically in Appendix A, we also conclude that relation (25) is valid for mixed states as well. We know that this situation prevails for the regular entropic uncertainty relation (6) as well as for our conjectured relation (22), so the above assumptions (unicity and concavity) are very natural.

Let us seek a pure state $|\psi\rangle$ that minimizes the functional $F(|\psi\rangle\langle\psi|)$. For this, we use the Lagrange multiplier method and insert the normalization of $|\psi\rangle$ as a constraint. Since $F(|\psi\rangle\langle\psi|)$ is invariant under displacements, we may also impose with no loss of generality the constraint that mean values vanish, $\langle\hat{x}\rangle = \langle\hat{p}\rangle = 0$. We define
$$J = F(|\psi\rangle\langle\psi|) + \lambda(\langle\psi|\psi\rangle - 1) + \mu\langle\psi|\hat{x}|\psi\rangle + \nu\langle\psi|\hat{p}|\psi\rangle \qquad (34)$$
where $\lambda$, $\mu$ and $\nu$ are Lagrange multipliers. Since we impose the state to be normalized and centered on zero, we can express the second-order moments as $\sigma_x^2 = \langle\psi|\hat{x}^2|\psi\rangle$, $\sigma_p^2 = \langle\psi|\hat{p}^2|\psi\rangle$, and $\sigma_{xp} = \langle\psi|\{\hat{x},\hat{p}\}|\psi\rangle/2$, so that we may replace the functional $F(|\psi\rangle\langle\psi|)$ in $J$ by
$$\tilde{F}(|\psi\rangle\langle\psi|) = h(x) + h(p) - \tfrac{1}{2}\ln\!\left(\frac{\langle\psi|\hat{x}^2|\psi\rangle\langle\psi|\hat{p}^2|\psi\rangle}{\langle\psi|\hat{x}^2|\psi\rangle\langle\psi|\hat{p}^2|\psi\rangle - \tfrac{1}{4}\langle\psi|\{\hat{x},\hat{p}\}|\psi\rangle^2}\right). \qquad (35)$$
Now, in order to solve the variational equation
$$\frac{\partial J}{\partial\langle\psi|} = 0 \qquad (36)$$
we start by expressing the variational derivative of each term of $J$ separately. The first term gives
$$\frac{\partial h(x)}{\partial\langle\psi|} = \frac{\partial}{\partial\langle\psi|}\left(-\int W_x(x)\ln W_x(x)\,\mathrm{d}x\right) = \frac{\partial}{\partial\langle\psi|}\left(-\int \langle\psi|x\rangle\langle x|\psi\rangle\ln(\langle\psi|x\rangle\langle x|\psi\rangle)\,\mathrm{d}x\right) = -(\ln W_x(\hat{x}) + 1)\,|\psi\rangle \qquad (37)$$
and, similarly, the second term gives
$$\frac{\partial h(p)}{\partial\langle\psi|} = -(\ln W_p(\hat{p}) + 1)\,|\psi\rangle. \qquad (38)$$
For the third term, we use
$$\frac{\partial}{\partial\langle\psi|}\ln\!\left(\frac{\langle\psi|\hat{x}^2|\psi\rangle\langle\psi|\hat{p}^2|\psi\rangle}{\langle\psi|\hat{x}^2|\psi\rangle\langle\psi|\hat{p}^2|\psi\rangle - \tfrac{1}{4}\langle\psi|\{\hat{x},\hat{p}\}|\psi\rangle^2}\right) = \left[\frac{\hat{x}^2}{\sigma_x^2} + \frac{\hat{p}^2}{\sigma_p^2} - \frac{\hat{x}^2\sigma_p^2 + \hat{p}^2\sigma_x^2 - \{\hat{x},\hat{p}\}\sigma_{xp}}{|\gamma|}\right]|\psi\rangle \qquad (39)$$
while the last terms give
$$\frac{\partial}{\partial\langle\psi|}\Big(\lambda(\langle\psi|\psi\rangle - 1) + \mu\langle\psi|\hat{x}|\psi\rangle + \nu\langle\psi|\hat{p}|\psi\rangle\Big) = (\lambda + \mu\hat{x} + \nu\hat{p})\,|\psi\rangle. \qquad (40)$$
Putting all this together, the variational equation (36) can be rewritten as an eigenvalue equation for $|\psi\rangle$,
$$\left[-\ln W_x(\hat{x}) - \ln W_p(\hat{p}) - 2 + \lambda + \mu\hat{x} + \nu\hat{p} - \frac{\hat{x}^2}{2\sigma_x^2} - \frac{\hat{p}^2}{2\sigma_p^2} + \frac{\hat{x}^2\sigma_p^2 + \hat{p}^2\sigma_x^2 - \{\hat{x},\hat{p}\}\sigma_{xp}}{2|\gamma|}\right]|\psi\rangle = 0. \qquad (41)$$
Let us check that Eq. (41) is verified by $|\psi\rangle = \hat{S}|0\rangle$, that is, by a squeezed vacuum state with $\hat{S} = \exp\{\tfrac{1}{2}(z^*\hat{a}^2 - z\hat{a}^{\dagger 2})\}$, where $z = re^{i\phi}$ is a complex number. For such a state, the marginals of the Wigner function are given by
$$W_x(x) = (2\pi\sigma_x^2)^{-1/2}\,e^{-x^2/2\sigma_x^2}, \qquad W_p(p) = (2\pi\sigma_p^2)^{-1/2}\,e^{-p^2/2\sigma_p^2}, \qquad (42)$$
so that
$$\ln W_x(\hat{x}) + \ln W_p(\hat{p}) = -\ln(2\pi\sigma_x\sigma_p) - \frac{\hat{x}^2}{2\sigma_x^2} - \frac{\hat{p}^2}{2\sigma_p^2}. \qquad (43)$$
Hence, we can simplify the eigenvalue equation as
$$\Big[\ln(2\pi\sigma_x\sigma_p) - 2 + \lambda + \mu\hat{x} + \nu\hat{p} + \hat{A}\Big]|\psi\rangle = 0 \qquad (44)$$
where we have defined the operator
$$\hat{A} = \frac{\hat{x}^2\sigma_p^2 + \hat{p}^2\sigma_x^2 - \{\hat{x},\hat{p}\}\sigma_{xp}}{2|\gamma|} = \frac{1}{2}\begin{pmatrix}\hat{x} & \hat{p}\end{pmatrix}\gamma^{-1}\begin{pmatrix}\hat{x} \\ \hat{p}\end{pmatrix}. \qquad (45)$$
Let us now compute the action of $\hat{A}$ on the squeezed vacuum state, that is, $\hat{A}|\psi\rangle = \hat{A}\hat{S}|0\rangle = \hat{S}(\hat{S}^\dagger\hat{A}\hat{S})|0\rangle$. For this, we use the canonical transformation of $\hat{x}$ and $\hat{p}$ in the Heisenberg picture, namely
$$\begin{pmatrix}\hat{S}^\dagger\hat{x}\hat{S} \\ \hat{S}^\dagger\hat{p}\hat{S}\end{pmatrix} = M\begin{pmatrix}\hat{x} \\ \hat{p}\end{pmatrix} \qquad (46)$$
with
$$M = \begin{pmatrix}\cos\theta & -\sin\theta \\ \sin\theta & \cos\theta\end{pmatrix}\begin{pmatrix}e^{-r} & 0 \\ 0 & e^{r}\end{pmatrix}\begin{pmatrix}\cos\theta & \sin\theta \\ -\sin\theta & \cos\theta\end{pmatrix} = \begin{pmatrix}\cosh r - \cos\phi\sinh r & -\sin\phi\sinh r \\ -\sin\phi\sinh r & \cosh r + \cos\phi\sinh r\end{pmatrix} \qquad (47)$$
with $\phi = 2\theta$. The covariance matrix $\gamma$ of state $|\psi\rangle$ can be expressed by applying the transformation $M$ onto the covariance matrix of the vacuum state $\gamma_{\mathrm{vac}}$, namely
$$\gamma = M\gamma_{\mathrm{vac}}M^T. \qquad (48)$$
Using Eqs. (46) and (48), we get
$$\hat{A}|\psi\rangle = \frac{1}{2}\,\hat{S}\begin{pmatrix}\hat{x} & \hat{p}\end{pmatrix}M^T\gamma^{-1}M\begin{pmatrix}\hat{x} \\ \hat{p}\end{pmatrix}|0\rangle = \frac{1}{2}\,\hat{S}\begin{pmatrix}\hat{x} & \hat{p}\end{pmatrix}\gamma_{\mathrm{vac}}^{-1}\begin{pmatrix}\hat{x} \\ \hat{p}\end{pmatrix}|0\rangle = \hat{S}|0\rangle = |\psi\rangle \qquad (49)$$
implying that the squeezed vacuum state $|\psi\rangle$ is an eigenvector of $\hat{A}$ with eigenvalue 1. Therefore, the eigenvalue equation can be written as
$$\big[\ln(2\pi\sigma_x\sigma_p) - 1 + \lambda + \mu\hat{x} + \nu\hat{p}\big]\,|\psi\rangle = 0. \qquad (50)$$
We can determine the value of $\lambda$ by multiplying this equation on the left by $\langle\psi|$ and using the constraints $\langle\psi|\psi\rangle = 1$ and $\langle\psi|\hat{x}|\psi\rangle = \langle\psi|\hat{p}|\psi\rangle = 0$, namely
$$\langle\psi|\big[\ln(2\pi\sigma_x\sigma_p) - 1 + \lambda + \mu\hat{x} + \nu\hat{p}\big]|\psi\rangle = \ln(2\pi\sigma_x\sigma_p) - 1 + \lambda = 0. \qquad (51)$$
Therefore, state $|\psi\rangle$ is indeed a solution of our extremization problem if we set $\lambda = 1 - \ln(2\pi\sigma_x\sigma_p)$. We are left with the equation
$$[\mu\hat{x} + \nu\hat{p}]\,|\psi\rangle = 0 \qquad (52)$$
which is satisfied if we set $\mu = \nu = 0$. Summing up, we have proven that, with the appropriate choice of $\lambda$, $\mu$ and $\nu$, the squeezed vacuum states (with arbitrary squeezing and rotation) are solutions of Eq. (41), so they minimize our uncertainty functional $F(|\psi\rangle\langle\psi|)$. Since $F(|\psi\rangle\langle\psi|)$ is invariant under displacements, the displaced squeezed states are also solutions, so this result includes all pure Gaussian states. We find the minimum value $\ln(\pi e\hbar)$ simply by evaluating $F$ for any of these states.

As mentioned above, this proof does not imply that the pure Gaussian states are the only minimum-uncertainty states, and we also need to assume the concavity of our uncertainty functional in order to extend the proof to mixed states. However, these are very natural assumptions, which are verified in the special case of states with $\sigma_{xp} = 0$ since then we are back to the regular entropic uncertainty relation.
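The central matrix identity behind Eq. (49), namely $M^T\gamma^{-1}M = \gamma_{\mathrm{vac}}^{-1}$, can also be checked numerically using the explicit $M$ of Eq. (47). A minimal sketch (with $\hbar = 1$ and arbitrary illustrative values of $r$ and $\phi$):

```python
import numpy as np

hbar = 1.0
r, phi = 0.9, 0.8  # arbitrary squeezing parameter and phase phi = 2*theta

# Symplectic matrix of Eq. (47); note det M = cosh^2(r) - sinh^2(r) = 1
M = np.array([[np.cosh(r) - np.cos(phi) * np.sinh(r), -np.sin(phi) * np.sinh(r)],
              [-np.sin(phi) * np.sinh(r), np.cosh(r) + np.cos(phi) * np.sinh(r)]])
gamma_vac = (hbar / 2) * np.eye(2)
gamma = M @ gamma_vac @ M.T  # covariance matrix of the rotated squeezed vacuum, Eq. (48)

# Key step of Eq. (49): M^T gamma^{-1} M = gamma_vac^{-1}, so the quadratic form
# defining A reduces to (x^2 + p^2)/hbar when acting on the vacuum.
lhs = M.T @ np.linalg.inv(gamma) @ M
print(np.allclose(lhs, np.linalg.inv(gamma_vac)))
```

This is just the statement that $\gamma = M\gamma_{\mathrm{vac}}M^T$ implies $M^T(M\gamma_{\mathrm{vac}}M^T)^{-1}M = \gamma_{\mathrm{vac}}^{-1}$, valid for any invertible $M$.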
Moreover, in Appendix A, we give strong numerical evidence that our tight entropic uncertainty relation is valid. Other numerical tests also corroborate the concavity property of the uncertainty functional, while this property is proven in the special case when two states with the same covariance matrix are mixed.

D. Generalization to n modes

In ref. [5], Bialynicki-Birula and Mycielski also extended the entropic uncertainty relation to $n$ modes, namely
$$h(\vec{x}) + h(\vec{p}) \ge n\ln(\pi e\hbar) \qquad (53)$$
where the joint differential entropies $h(\vec{x})$ and $h(\vec{p})$ are computed from the marginals of the Wigner function $W_x(\vec{x})$ and $W_p(\vec{p})$, with $\vec{x} = (x_1, x_2, \cdots, x_n)$ and $\vec{p} = (p_1, p_2, \cdots, p_n)$. In addition, another $n$-mode uncertainty relation was expressed in ref. [26] for two observables $\hat{A}$ and $\hat{B}$ defined as linear combinations of the $\hat{x}_i$ and $\hat{p}_i$ variables.

Naturally, both our entropic uncertainty relations can also be extended to $n$ modes. First, our conjectured rotation-invariant uncertainty relation based on the joint entropy (22) becomes
$$h(\vec{r}) \ge n\ln(\pi e\hbar) \qquad \forall\ \text{states s.t.}\ W(\vec{r}) \ge 0 \qquad (54)$$
with $\vec{r} = (x_1, p_1, x_2, p_2, ..., x_n, p_n)$. Here, the joint differential entropy $h(\vec{r})$ is invariant under Gaussian $n$-mode unitaries (all symplectic transformations and displacements) and our conjectured uncertainty relation (54) is saturated for all $n$-mode Gaussian pure states.

Second, our tight entropic uncertainty relation (25) can also be extended to
$$h(\vec{x}) + h(\vec{p}) - \tfrac{1}{2}\ln\!\left(\frac{|\gamma_x||\gamma_p|}{|\gamma|}\right) \ge n\ln(\pi e\hbar) \qquad (55)$$
where the covariance matrix $\gamma$ is defined as $\gamma_{ij} = \mathrm{Tr}[\hat{\rho}\{r_i, r_j\}]/2 - \mathrm{Tr}[\hat{\rho}\,r_i]\,\mathrm{Tr}[\hat{\rho}\,r_j]$ and $\gamma_x$ ($\gamma_p$) is the reduced covariance matrix of the $x$ ($p$) quadratures. The proof of this relation can be found in Appendix B (it is obtained following the same variational method as in the one-mode case). Equation (55) is again saturated by all $n$-mode Gaussian pure states, as we can easily check by using the fact that
$$h(\vec{x}) = \tfrac{1}{2}\ln((2\pi e)^n|\gamma_x|), \qquad h(\vec{p}) = \tfrac{1}{2}\ln((2\pi e)^n|\gamma_p|) \qquad (56)$$
for Gaussian distributions, while $|\gamma| = (\hbar/2)^{2n}$ for Gaussian pure states.

In particular, relation (55) is thus saturated by the two-mode squeezed vacuum state with covariance matrix
$$\gamma = \frac{\hbar}{2}\begin{pmatrix}\cosh 2r & 0 & \sinh 2r & 0 \\ 0 & \cosh 2r & 0 & -\sinh 2r \\ \sinh 2r & 0 & \cosh 2r & 0 \\ 0 & -\sinh 2r & 0 & \cosh 2r\end{pmatrix}, \qquad (57)$$
obtained by injecting an $x$-squeezed state and a $p$-squeezed state (both with squeezing parameter $r$) on a balanced beam splitter. This is easy to check by computing the entropies with Eq. (56) and using $|\gamma_x| = |\gamma_p| = (\hbar/2)^2$ and $|\gamma| = (\hbar/2)^4$. However, the regular entropic uncertainty relation (53) is already saturated for this state, which is expected since the state exhibits no $x$-$p$ correlations. More interestingly, the state resulting from two rotated squeezed states (one being rotated by $\pi/4$, the other by $-\pi/4$) injected on a balanced beam splitter still saturates relation (55), while it does not any more saturate relation (53). Indeed, the covariance matrix of this state reads
$$\gamma = \frac{\hbar}{2}\begin{pmatrix}\cosh 2r & 0 & 0 & -\sinh 2r \\ 0 & \cosh 2r & -\sinh 2r & 0 \\ 0 & -\sinh 2r & \cosh 2r & 0 \\ -\sinh 2r & 0 & 0 & \cosh 2r\end{pmatrix}, \qquad (58)$$
so that we get $h(\vec{x}) + h(\vec{p}) = 2\ln(\pi e\hbar\cosh 2r) > 2\ln(\pi e\hbar)$. But since $|\gamma_x| = |\gamma_p| = (\hbar/2)^2\cosh^2 2r$ and $|\gamma| = (\hbar/2)^4$, we get $-\tfrac{1}{2}\ln\big(|\gamma_x||\gamma_p|/|\gamma|\big) = -2\ln(\cosh 2r)$, implying that relation (55) is saturated by this state.

In this context, it is also interesting to rewrite the tight entropic uncertainty relation (55) in terms of entropy powers, defined this time for the joint entropy in $n$ dimensions, namely
$$N_x^{(n)} = \frac{1}{2\pi e}\,e^{\frac{2}{n}h(\vec{x})}, \qquad N_p^{(n)} = \frac{1}{2\pi e}\,e^{\frac{2}{n}h(\vec{p})}. \qquad (59)$$
Equation (53) then transforms into an $n$-mode entropy-power uncertainty relation
$$N_x^{(n)}N_p^{(n)} \ge \left(\frac{\hbar}{2}\right)^2, \qquad (60)$$
which has the same form as relation (15) but for $n$ modes, while equation (55) transforms into a tight version of the $n$-mode entropy-power uncertainty relation
$$N_x^{(n)}N_p^{(n)} \ge \left(\frac{|\gamma_x||\gamma_p|}{|\gamma|}\right)^{1/n}\left(\frac{\hbar}{2}\right)^2, \qquad (61)$$
which is the $n$-mode counterpart of Eq. (26). Here too, we can use the fact that the maximum entropy for a fixed covariance matrix is given by the Gaussian distribution, which implies that $N_x^{(n)} \le |\gamma_x|^{1/n}$ and $N_p^{(n)} \le |\gamma_p|^{1/n}$. Rewriting Eq. (61) as
$$\frac{\big(N_x^{(n)}N_p^{(n)}\big)^n}{|\gamma_x||\gamma_p|}\,|\gamma| \ge \left(\frac{\hbar}{2}\right)^{2n}, \qquad (62)$$
we then see that the $n$-mode entropy-power uncertainty relation implies the standard (variance-based) $n$-mode uncertainty relation, namely
$$|\gamma| \ge \frac{\big(N_x^{(n)}N_p^{(n)}\big)^n}{|\gamma_x||\gamma_p|}\,|\gamma| \ge \left(\frac{\hbar}{2}\right)^{2n}. \qquad (63)$$

IV. CONCLUSION
We have shown that the entropic uncertainty relation derived by Bialynicki-Birula and Mycielski can be expressed as an entropy-power uncertainty relation, which makes a straightforward connection with the Heisenberg uncertainty relation: the variances in the latter are simply replaced with entropy powers in the former. Moreover, the entropic version of the uncertainty relation implies the variance-based one as a consequence of the fact that the entropy power of a variable cannot exceed its variance. Then, we have found a tighter form of the entropic uncertainty relation, which takes the correlation between the x- and p-variables into account. It can also be expressed as a tighter entropy-power uncertainty relation, Eq. (1), and is saturated for all pure Gaussian states. It is the entropic counterpart of the Schrödinger-Robertson uncertainty relation, which it implies. We have provided a partial proof of Eq. (1) based on variational calculus together with some reasonable assumptions, and have provided, in Appendix A, strong numerical evidence that it is correct. Interestingly, these tighter entropic and entropy-power uncertainty relations can be extended to n modes, and all the above-mentioned properties remain true. Our main result was inspired by another conjectured uncertainty relation involving the joint entropy, Eq. (22), which is more elegant (it is explicitly invariant under all Gaussian unitaries, namely displacements, squeezing, and rotations) but is only defined for states with a non-negative Wigner function. We have numerically verified its validity, but leave its proof for future work. Its n-mode extension is also straightforward.

Possible applications of these new entropic uncertainty relations include the elaboration of stronger separability criteria for continuous-variable systems.
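The claim that the entropy power of a variable cannot exceed its variance can be made concrete with a one-line example (our own illustration, using a uniform distribution of hypothetical width w):

```python
import numpy as np

# Entropy power of a real variable: N = e^{2h} / (2*pi*e).
# For a uniform distribution of width w: h = ln(w) and variance
# sigma^2 = w^2/12, so N = w^2/(2*pi*e) < w^2/12 = sigma^2.
w = 2.0  # arbitrary width
h = np.log(w)
N = np.exp(2 * h) / (2 * np.pi * np.e)
var = w ** 2 / 12
print(N, var)  # entropy power lies strictly below the variance here
```

Equality N = σ² holds only for a Gaussian distribution, which is precisely why the entropy-power uncertainty relation is stronger than, and implies, the variance-based one.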
Both variance- and entropy-based uncertainty relations can be translated into a sufficient entanglement condition (a necessary and sufficient condition for Gaussian states), as they can be used to express a condition on the physicality of the partially-transposed state [17-19]. For example, in Ref. [28] it was shown that an uncertainty relation that is tight for all Fock states [29] yields an entanglement criterion that enables the detection of certain non-Gaussian entangled states whose entanglement remains undetected by the Duan-Simon criterion. Thus, a natural direction for further work would be to exploit our tighter entropic uncertainty relations in order to improve our tools for discriminating entangled from separable states in continuous-variable quantum systems.

Note: The current paper was presented at the 23rd Central European Workshop on Quantum Optics (CEWQO 2016), Kolymbari, Greece, June 2016. After completion of this work, we learned about an independent work where the entropy power is mentioned in the context of uncertainty relations [30].
Acknowledgments: We thank Emmanouïl Grigoriou for performing numerical simulations during a research internship at QuIC, ULB, in Summer 2016. This work was supported by the F.R.S.-FNRS Foundation under Project No. T.0199.13 and by the Belgian Federal IAP program under Project No. P7/35 Photonics@be. A.H. acknowledges financial support from the F.R.S.-FNRS Foundation and M.G.J. acknowledges financial support from the FRIA foundation.
APPENDIX A: Numerical tests

1. Numerical tests of the uncertainty relation (22)
We have not been able to find an analytical proof of our conjectured rotation-invariant uncertainty relation (22) based on the joint entropy, so we have turned to numerical tests. Since relation (22) is restricted to states with positive Wigner functions, we have tested, in particular, passive states of the harmonic oscillator, i.e., mixtures of Fock states with decreasing weights for increasing photon number [27].

In Figure 2, we consider extremal passive states (i.e., passive states with equal weights up to a certain photon number N and vanishing weights for larger photon numbers) and have plotted the joint entropy h(x, p) as a function of N (red dots). The dashed line is the lower bound ln(πeℏ), so we clearly see that the uncertainty relation (22) is obeyed. Since h(x, p) is concave in the state, proving (22) for extremal passive states would actually suffice to prove it for all passive states. For comparison with the regular entropic uncertainty relation (6), we have also plotted h(x) + h(p) (blue dots), which illustrates that our rotation-invariant uncertainty relation provides an improvement. Although the improvement is minor in this example, it is worth noting that Eq. (22) takes into account some x-p correlations that are not visible in the second-order moments (all passive states have σ_xp = 0), so no improvement at all would be obtained with our entropic uncertainty relation (25) relying on the covariance matrix. We have also numerically tested other states with positive Wigner functions which are closer to the bound, such as mixtures of two squeezed states, and relation (22) was verified in every tested case.

Figure 2: Test of the uncertainty relation (22) based on the joint entropy for extremal passive states, with N being the highest photon number of the state. The blue dots correspond to h(x) + h(p), the red dots correspond to h(x, p), while the dashed line is the lower bound ln(πe) [we take ℏ = 1].
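The joint-entropy computation behind a test of this kind can be sketched as follows. This is our own minimal reimplementation (ℏ = 1, a finite grid, and the standard Laguerre-polynomial expression for Fock-state Wigner functions are the assumptions), checking relation (22) for the first few extremal passive states:

```python
import numpy as np

hbar = 1.0
grid = np.linspace(-6.0, 6.0, 481)
X, P = np.meshgrid(grid, grid)
U = 2 * (X ** 2 + P ** 2)
dA = (grid[1] - grid[0]) ** 2

def laguerre(n, u):
    # Laguerre polynomial L_n(u) via the standard three-term recurrence
    Lm, L = np.ones_like(u), 1.0 - u
    if n == 0:
        return Lm
    for k in range(1, n):
        Lm, L = L, ((2 * k + 1 - u) * L - k * Lm) / (k + 1)
    return L

def wigner_fock(m):
    # Wigner function of the Fock state |m>, hbar = 1:
    # W_m = ((-1)^m / pi) exp(-(x^2+p^2)) L_m(2(x^2+p^2))
    return (-1) ** m / np.pi * np.exp(-U / 2) * laguerre(m, U)

def h_joint_extremal_passive(N):
    # equal-weight mixture of |0>,...,|N>: its Wigner function is
    # non-negative, so h(x,p) = -\iint W ln W is well defined
    W = sum(wigner_fock(m) for m in range(N + 1)) / (N + 1)
    W = np.clip(W, 1e-300, None)
    return -np.sum(W * np.log(W)) * dA

bound = np.log(np.pi * np.e * hbar)  # lower bound in relation (22)
gaps22 = [h_joint_extremal_passive(N) - bound for N in range(4)]
print(gaps22)  # first entry ~0 (the vacuum saturates), the others positive
```

For N = 0 the state is the vacuum, a Gaussian pure state, so the bound is saturated up to grid discretization; for N ≥ 1 the gap is strictly positive, as in Figure 2.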
2. Numerical tests of the uncertainty relations (25) and (26)
We have also conducted many numerical tests in order to verify the accuracy of the tight entropic uncertainty relation. For numerical purposes, it was simpler to consider the uncertainty relation in its form with differential entropies, Eq. (25). First, we have considered random pure states, which we generated by applying a random unitary transformation to the vacuum state. In Figure 3, each blue dot corresponds to h(x) + h(p) as computed for a random state generated with a 4 × 4 random unitary acting in the subspace spanned by the Fock states |n⟩ with n = 0, 1, 2, 3, while the red curve represents the lower bound on h(x) + h(p) that results from Eq. (25), namely ln(πeℏ) + I_G(x : p). Here, the Gaussian mutual information is expressed as

\[
I_G(x : p) = -\tfrac{1}{2} \ln\!\left( 1 - \rho^2 \right), \tag{64}
\]

where ρ = σ_xp/(σ_x σ_p) stands for the correlation parameter. We clearly see that all points lie above the improved lower bound, corroborating the new entropic uncertainty relation (25). Note that other tests have been carried out with unitary transformations of greater dimensions, but this generally yields states with greater values of h(x) + h(p), which are less interesting for verification purposes.

As a more stringent test, we have computed h(x) + h(p) for some slightly non-Gaussian pure states lying in the neighborhood of the Gaussian pure states that saturate the uncertainty relation. To do so, we generated states of the form |ψ⟩ ∝ (|s⟩ + ε|φ⟩), where |s⟩ is a squeezed state, |φ⟩ is any other pure state, and ε ≪ 1. In Figure 4, we have chosen |φ⟩ as some random pure state generated by the above method, ε = 0.01, and a squeezed state |s⟩ along an axis rotated by an angle of θ = π/4 with respect to the x-axis (with a squeezing parameter s ≡ eʳ = 1.5), namely

\[
\langle x | s \rangle = \left( \frac{2 s^2}{\pi \left( s^4 + 1 \right)} \right)^{1/4} \exp\!\left( \frac{i \left( s^2 + i \right) x^2}{2 \left( s^2 - i \right)} \right). \tag{65}
\]

The resulting state |ψ⟩ is non-Gaussian, implying that it cannot saturate the ordinary entropic uncertainty relation (6). We have verified that, even if they lie very close to the boundary, all states |ψ⟩ verify the tight entropic uncertainty relation. Similar simulations have also been performed with squeezed states of different parameters and with different values of ε, yet no counterexample was found.
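The fact that Gaussian pure states saturate relation (25), which makes them the natural boundary for these tests, can be verified directly. A minimal sketch (our own choices of squeezing and rotation angle, with ℏ = 1):

```python
import numpy as np

hbar = 1.0

def rotated_squeezed_cov(r, theta):
    """Covariance matrix of a squeezed vacuum state (squeezing r) rotated
    by theta, with hbar = 1 so that the vacuum variances are 1/2."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return R @ np.diag([0.5 * np.exp(-2 * r), 0.5 * np.exp(2 * r)]) @ R.T

gamma = rotated_squeezed_cov(r=np.log(1.5), theta=np.pi / 5)  # arbitrary values
sx2, sp2, sxp = gamma[0, 0], gamma[1, 1], gamma[0, 1]
rho = sxp / np.sqrt(sx2 * sp2)            # correlation parameter
IG = -0.5 * np.log(1 - rho ** 2)          # Gaussian mutual information, Eq. (64)
h_sum = 0.5 * np.log(2 * np.pi * np.e * sx2) \
      + 0.5 * np.log(2 * np.pi * np.e * sp2)
lower = np.log(np.pi * np.e * hbar) + IG  # improved lower bound of Eq. (25)
print(h_sum - lower)  # ~0: Gaussian pure states saturate the tight relation
```

The gap vanishes identically because for a pure Gaussian state σ_x² σ_p² (1 − ρ²) = |γ| = (ℏ/2)², whatever the rotation angle; the ordinary bound ln(πeℏ), by contrast, is missed by exactly I_G(x : p) whenever ρ ≠ 0.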
3. Concavity of the uncertainty functional
The regular entropic uncertainty relation (6) was proven for pure states in [5]. However, since the differential entropy is a concave function of the probability distribution, it is valid for mixed states as well (as mentioned in [5]).

Figure 3: Test of the tight entropic uncertainty relation (25) for random pure states generated by applying a 4 × 4 random unitary to the vacuum state. The blue dots correspond to h(x) + h(p), while the red curve represents the improved lower bound ln(πe) + I_G(x : p) [we take ℏ = 1]. All quantities are plotted as a function of the correlation coefficient ρ.

Figure 4: Test of the tight entropic uncertainty relation (25) for slightly non-Gaussian states of the form |ψ⟩ ∝ (|s⟩ + ε|φ⟩), where |s⟩ is a squeezed state (with s = 1.5) along an axis rotated by an angle of θ = π/4, |φ⟩ is a random pure state as in Fig. 3, and ε = 0.01. The blue dots correspond to h(x) + h(p), while the red curve represents the improved lower bound ln(πe) + I_G(x : p) [we take ℏ = 1]. All quantities are plotted as a function of the correlation coefficient ρ. A zoom onto the interesting region is shown in this figure.

Decomposing a mixed state into pure states, the concavity implies that pure states are the "worst cases", i.e., they yield the lowest value of the functional h(x) + h(p). Naturally, we also need to investigate the concavity of our new uncertainty functionals. For our conjectured rotation-invariant uncertainty relation based on the joint entropy, we know that the joint differential entropy is concave since we limit ourselves to positive Wigner functions, which can be viewed as classical joint probability distributions. Hence, the left-hand side of Eq. (22) is a concave function of the state. In contrast, it seems hard to prove the concavity of the uncertainty functional F(ρ̂) of Eq. (33), which appears in the left-hand side of the tight entropic uncertainty relation (25). This is because, while h(x) and h(p) are concave, I_G(x : p) is not convex. And even if it is known that ln |γ| is concave [23], nothing can be said about ln(σ_x σ_p). Nevertheless, numerical tests corroborate the fact that F(ρ̂) is a concave function of the state. As an example, we have analyzed mixtures of two pure states of the form λ|ψ₁⟩⟨ψ₁| + (1 − λ)|ψ₂⟩⟨ψ₂|, with 0 ≤ λ ≤ 1. In Figure 5, we have numerically verified that F(λ|ψ₁⟩⟨ψ₁| + (1 − λ)|ψ₂⟩⟨ψ₂|) ≥ λ F(|ψ₁⟩⟨ψ₁|) + (1 − λ) F(|ψ₂⟩⟨ψ₂|).

Figure 5: Test of the concavity of the uncertainty functional F(ρ̂) used in relation (25). We consider three different binary mixtures λρ₁ + (1 − λ)ρ₂ tuned by the parameter λ: two mixtures of Fock states, and a mixture of two fixed superpositions of Fock states, among which |φ⟩ ∝ (3i + 1)|0⟩ + (2 + 5i)|1⟩ + (1 + 3i)|2⟩ + (6 + 8i)|3⟩.

Interestingly, we can prove the concavity of F(ρ̂) in a special case by using the expression of the entropic uncertainty relation in terms of non-Gaussianity measures based on relative entropies, Eq. (30). We consider the mixture of two states that have the same first- and second-order moments. Hence, the right-hand side of Eq. (30) is constant, and we need to prove that

\[
D\big( \lambda x_1 + (1-\lambda) x_2 \,\big\|\, [\lambda x_1 + (1-\lambda) x_2]_G \big)
\le \lambda D\big( x_1 \,\big\|\, [x_1]_G \big) + (1-\lambda) D\big( x_2 \,\big\|\, [x_2]_G \big), \tag{66}
\]

where [x]_G means that we take the Gaussian distribution with the same variance as the probability distribution of x. (Of course, we have an identical inequality for the p quadrature.) By comparison, the convexity of the relative entropy implies that

\[
D\big( \lambda x_1 + (1-\lambda) x_2 \,\big\|\, \lambda [x_1]_G + (1-\lambda) [x_2]_G \big)
\le \lambda D\big( x_1 \,\big\|\, [x_1]_G \big) + (1-\lambda) D\big( x_2 \,\big\|\, [x_2]_G \big), \tag{67}
\]

which is equivalent to the previous inequality since we mix distributions with the same first- and second-order moments, so that [x₁]_G = [x₂]_G = [λx₁ + (1 − λ)x₂]_G. Remark that the uncertainty relation (25) is invariant under displacements, so that, with no loss of generality, we only need to consider states with zero mean values.
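The convexity argument of Eqs. (66)-(67) is easy to illustrate numerically. In the sketch below, the two zero-mean, unit-variance test densities (a Laplace and a uniform distribution) are our own arbitrary choices; since they share the same first- and second-order moments, their Gaussian reference [·]_G is the common standard normal:

```python
import numpy as np

x = np.linspace(-12, 12, 120001)
dx = x[1] - x[0]

# Two zero-mean, unit-variance test densities and their common Gaussian
# reference [.]_G = N(0,1) (same first- and second-order moments)
g = np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)
p1 = np.exp(-np.sqrt(2) * np.abs(x)) / np.sqrt(2)                  # Laplace
p2 = np.where(np.abs(x) <= np.sqrt(3), 1 / (2 * np.sqrt(3)), 0.0)  # uniform

def D(p, q):
    # relative entropy D(p || q), skipping points where p = 0
    m = p > 0
    return np.sum(p[m] * np.log(p[m] / q[m])) * dx

D1, D2 = D(p1, g), D(p2, g)
diffs66 = []
for lam in (0.25, 0.5, 0.75):
    mix = lam * p1 + (1 - lam) * p2
    diffs66.append(lam * D1 + (1 - lam) * D2 - D(mix, g))  # >= 0 by Eq. (67)
print(diffs66)
```

The differences are strictly positive, as guaranteed by the joint convexity of the relative entropy; this is exactly the mechanism exploited in the equal-moments concavity proof above.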
Thus, we have proven the concavity of F(ρ̂) when two states with the same covariance matrix are mixed. In the general case, however, we have not been able to prove the concavity.

APPENDIX B: Partial proof of equation (55)
The proof follows the same variational method used in the one-mode case; that is, we prove that any n-mode squeezed vacuum state is a local minimum of the uncertainty functional

\[
F(\hat\rho) = h(\vec x) + h(\vec p) - \frac{1}{2} \ln\!\left( \frac{|\gamma_x|\,|\gamma_p|}{|\gamma|} \right). \tag{68}
\]

Since F(ρ̂) is invariant under (x⃗, p⃗)-displacements, this implies that all Gaussian pure states are similarly local minima. Note that we assume, as in the one-mode case, that these are the unique solutions of our minimization problem and that the uncertainty functional F(ρ̂) is concave in ρ̂, so that (55) is valid for mixed states as well.

We seek an n-mode pure state |ψ⟩ that minimizes the functional F(|ψ⟩⟨ψ|) with constraints on the normalization of |ψ⟩ and the mean values of the x⃗ and p⃗ quadratures. We use the Lagrange multiplier method with

\[
J = h(\vec x) + h(\vec p) - \frac{1}{2} \ln |\gamma_x| - \frac{1}{2} \ln |\gamma_p| + \frac{1}{2} \ln |\gamma|
+ \lambda \left( \langle \psi | \psi \rangle - 1 \right) + \sum_{i=1}^{2n} \mu_i \, \langle \psi | \hat r_i | \psi \rangle. \tag{69}
\]

Here, λ and the μ_i are Lagrange multipliers, while the elements of the covariance matrix γ can be expressed as γ_ij = ⟨ψ|(r̂_i r̂_j + r̂_j r̂_i)|ψ⟩/2 for zero-mean states, with r̂ = (x̂₁, …, x̂_n, p̂₁, …, p̂_n)ᵀ. The variational equation is ∂J/∂⟨ψ| = 0, so we write the derivative of each term,

\[
\frac{\partial h(\vec x)}{\partial \langle \psi |} = -\left( \ln W_x(\hat{\vec x}) + 1 \right) |\psi\rangle, \qquad
\frac{\partial h(\vec p)}{\partial \langle \psi |} = -\left( \ln W_p(\hat{\vec p}) + 1 \right) |\psi\rangle, \tag{70}
\]

where W_x and W_p denote the marginal distributions of the Wigner function. For the three terms involving the derivative of the determinant of a matrix, we use Jacobi's formula, so that

\[
\frac{\partial}{\partial \langle \psi |} \ln |\gamma_x|
= \frac{1}{|\gamma_x|} \frac{\partial |\gamma_x|}{\partial \langle \psi |}
= \frac{1}{|\gamma_x|} \operatorname{Tr}\!\left[ |\gamma_x| \, \gamma_x^{-1} \frac{\partial \gamma_x}{\partial \langle \psi |} \right]
= \sum_{i,k=1}^{n} (\gamma_x^{-1})_{ik} \frac{\partial (\gamma_x)_{ki}}{\partial \langle \psi |}
= \sum_{i,k=1}^{n} (\gamma_x^{-1})_{ik} \, \frac{\hat x_k \hat x_i + \hat x_i \hat x_k}{2} \, |\psi\rangle
= \hat{\vec x}^{\,T} \gamma_x^{-1} \hat{\vec x} \, |\psi\rangle, \tag{71}
\]

where we used the fact that (γ_x⁻¹)_ik = (γ_x⁻¹)_ki since the matrix is symmetric. Similarly, we find

\[
\frac{\partial}{\partial \langle \psi |} \ln |\gamma_p| = \hat{\vec p}^{\,T} \gamma_p^{-1} \hat{\vec p} \, |\psi\rangle, \qquad
\frac{\partial}{\partial \langle \psi |} \ln |\gamma| = \hat{\vec r}^{\,T} \gamma^{-1} \hat{\vec r} \, |\psi\rangle. \tag{72}
\]

Finally, the last terms give

\[
\frac{\partial}{\partial \langle \psi |} \left( \lambda \left( \langle \psi | \psi \rangle - 1 \right)
+ \sum_{i=1}^{2n} \mu_i \langle \psi | \hat r_i | \psi \rangle \right)
= \left( \lambda + \sum_{i=1}^{2n} \mu_i \hat r_i \right) |\psi\rangle, \tag{73}
\]

so that the variational equation can be rewritten as an eigenvalue equation for |ψ⟩,

\[
\left[ -\ln W_x(\hat{\vec x}) - \ln W_p(\hat{\vec p}) - 2 + \lambda + \sum_{i=1}^{2n} \mu_i \hat r_i
- \frac{1}{2} \hat{\vec x}^{\,T} \gamma_x^{-1} \hat{\vec x}
- \frac{1}{2} \hat{\vec p}^{\,T} \gamma_p^{-1} \hat{\vec p}
+ \frac{1}{2} \hat{\vec r}^{\,T} \gamma^{-1} \hat{\vec r} \right] |\psi\rangle = 0. \tag{74}
\]

We now check that Eq. (74) is verified by |ψ⟩ = Ŝ|0⟩, that is, by any n-mode squeezed vacuum state. For such a state, the marginals of the Wigner function are given by

\[
W_x(\vec x) = \left( (2\pi)^n |\gamma_x| \right)^{-1/2} e^{-\frac{1}{2} \vec x^{\,T} \gamma_x^{-1} \vec x}, \qquad
W_p(\vec p) = \left( (2\pi)^n |\gamma_p| \right)^{-1/2} e^{-\frac{1}{2} \vec p^{\,T} \gamma_p^{-1} \vec p}, \tag{75}
\]

so that

\[
\ln W_x(\vec x) + \ln W_p(\vec p)
= -\ln\!\left( (2\pi)^n \sqrt{|\gamma_x|\,|\gamma_p|} \right)
- \frac{1}{2} \vec x^{\,T} \gamma_x^{-1} \vec x
- \frac{1}{2} \vec p^{\,T} \gamma_p^{-1} \vec p. \tag{76}
\]

We apply (1/2) r̂⃗ᵀ γ⁻¹ r̂⃗ on the squeezed vacuum state |ψ⟩ by using the canonical transformation of r̂⃗ in the Heisenberg picture, namely Ŝ† r̂⃗ Ŝ = M r̂⃗, and we find

\[
\frac{1}{2} \hat{\vec r}^{\,T} \gamma^{-1} \hat{\vec r} \, |\psi\rangle
= \frac{1}{2} \hat{\vec r}^{\,T} \gamma^{-1} \hat{\vec r} \, \hat S |0\rangle
= \frac{1}{2} \hat S \, \hat{\vec r}^{\,T} M^T \gamma^{-1} M \, \hat{\vec r} \, |0\rangle
= \frac{1}{2} \hat S \, \hat{\vec r}^{\,T} \gamma_{\mathrm{vac}}^{-1} \hat{\vec r} \, |0\rangle
= n \, \hat S |0\rangle = n \, |\psi\rangle, \tag{77}
\]

since the covariance matrix γ of the state |ψ⟩ can be expressed as γ = M γ_vac Mᵀ, with γ_vac = (ℏ/2)𝟙 the covariance matrix of the vacuum. This implies that the state |ψ⟩ is an eigenvector of (1/2) r̂⃗ᵀ γ⁻¹ r̂⃗ with eigenvalue n. Therefore, using this result together with Eq. (76), the eigenvalue equation for |ψ⟩ can be written as

\[
\left[ \ln\!\left( (2\pi)^n \sqrt{|\gamma_x|\,|\gamma_p|} \right) + n - 2 + \lambda + \sum_{i=1}^{2n} \mu_i \hat r_i \right] |\psi\rangle = 0. \tag{78}
\]

The value of λ is found by multiplying this equation on the left by ⟨ψ| and by using the constraints ⟨ψ|ψ⟩ = 1 and ⟨ψ|r̂_i|ψ⟩ = 0 for all i, namely

\[
\lambda = 2 - n - \ln\!\left( (2\pi)^n \sqrt{|\gamma_x|\,|\gamma_p|} \right). \tag{79}
\]

We are then left with the equation

\[
\left[ \sum_{i=1}^{2n} \mu_i \hat r_i \right] |\psi\rangle = 0, \tag{80}
\]

which is satisfied if we set all the μ_i = 0.

In conclusion, we have proven that, with the appropriate choice of λ and μ_i, the n-mode squeezed vacuum states are solutions of Eq. (74), so they minimize our uncertainty functional F(|ψ⟩⟨ψ|). Since F(|ψ⟩⟨ψ|) is invariant under displacements, the displaced squeezed vacuum states are also solutions, so this minimization result encompasses all pure Gaussian states. We find the minimum value n ln(πeℏ) by evaluating F for any of these states.

[1] Heisenberg W 1927 Z. Phys. 43 172
[9] Bialynicki-Birula I and Rudnicki Ł 2011 Statistical Complexity ed K D Sen (Berlin: Springer) pp 1-34
[10] Coles P J, Berta M, Tomamichel M and Wehner S 2017 Rev. Mod. Phys. 89 015002
[11] Renes J M and Boileau J-C 2009 Phys. Rev. Lett. 103 020402
[23] Cover T M and Thomas J A Elements of Information Theory (New York: Wiley)
[24] Bröcker T and Werner R F 1995 J. Math. Phys. 36 62
[28] Hertz A, Karpov E, Mandilara A and Cerf N J 2016 Phys. Rev. A 93