Full counting statistics of information content
arXiv: [cond-mat.mes-hall]

EPJ manuscript No. (will be inserted by the editor)
Yasuhiro Utsumi$^a$

Department of Physics Engineering, Faculty of Engineering, Mie University, Tsu, Mie 514-8507, Japan
Abstract.
We review connections between the cumulant generating function of the full counting statistics of particle number and the Rényi entanglement entropy. We calculate these quantities based on the fermionic and bosonic path integral defined on multiple Keldysh contours. We relate the Rényi entropy with the information generating function, from which the probability distribution function of self-information is obtained in the nonequilibrium steady state. By exploiting the distribution, we analyze the information content carried by a single bosonic particle through a narrow-band quantum communication channel. The ratio of the self-information content to the number of bosons fluctuates. For a small boson occupation number, the average and the fluctuation of the ratio are enhanced.
Measurements of the average current and its fluctuation (noise) have been powerful tools to study quantum transport in mesoscopic systems [1]. The probability distribution of the current can be treated by the theory of full counting statistics [2,3,4]. Suppose we partition a mesoscopic conductor, e.g., a tunnel junction, into a subsystem A and a subsystem B [Fig. 1(a)]. By applying a bias voltage, electrons flow from subsystem B to subsystem A. The theory of full counting statistics offers a method of calculating the probability distribution function of the number of electrons in subsystem A, $P_\tau(N_A)$, at a given measurement time $\tau$. It is often convenient to introduce the Fourier transform of the probability distribution function [5], the characteristic function, $Z_\tau(e^{i\chi}) = \sum_{N_A} P_\tau(N_A)\, e^{iN_A\chi}$, or its logarithm, the cumulant generating function, which yields quantities characterizing the profile of the probability distribution function: the $k$-th moments $\langle N_A^k\rangle = \partial_{i\chi}^k Z_\tau(e^{i\chi})\big|_{i\chi=0}$ or cumulants $\langle\langle N_A^k\rangle\rangle = \partial_{i\chi}^k \ln Z_\tau(e^{i\chi})\big|_{i\chi=0}$.

The two subsystems can become entangled after exchanging electrons [6]. The amount of entanglement between subsystems A and B can be quantified by an entropy [7,8,9]: the von Neumann entropy [9], $S(\hat\rho_A) = -\mathrm{Tr}_A[\hat\rho_A \ln\hat\rho_A]$, associated with the reduced density matrix obtained after tracing out the degrees of freedom of subsystem B, $\hat\rho_A = \mathrm{Tr}_B\,\hat\rho$. In other words, it is the average of the operator of self-information, or the entanglement Hamiltonian, $\hat I_A = -\ln\hat\rho_A$. The full counting statistics and the entanglement entropy are related to each other [10,11,12,13].

$^a$ e-mail: [email protected]
In Ref. [10], a nontrivial relation between the current cumulants and the dynamical entanglement entropy [14] was demonstrated. The entropy quantifying the entanglement can be expressed by using the current cumulants of even order as [10]

$$ \langle\langle I_A \rangle\rangle = \sum_{k=1}^{\infty} \frac{(-1)^{k+1} (2\pi)^{2k} B_{2k}}{(2k)!}\, \langle\langle N_A^{2k}\rangle\rangle, \quad (1) $$

where $B_{2k}$ are Bernoulli numbers ($B_2 = 1/6$, $B_4 = -1/30$, $B_6 = 1/42, \cdots$). The Rényi entropy [8,15] of order $M$, $\ln S_M/(1-M)$, where $S_M = \mathrm{Tr}_A\,\hat\rho_A^M$ for quantum cases (hereafter, we call $S_M$ the Rényi entropy [16]), is another tractable measure of entanglement. In Ref. [11] the following relation was presented:

$$ \ln S_M = -e^{i\phi} \int_{-\infty}^{\infty} dz\, \ln\!\big[(1+e^{i\phi}z)^M - e^{i\phi} z^M\big]\, \mu(z), \quad (2) $$

where the phase is $\phi = 0$ for bosons and $\phi = \pi$ for fermions (precisely, the equality for fermions was presented in Ref. [11]). The spectral density is related to the current cumulant generating function, $\mu(z) = e^{-i\phi}\,\partial_z\, \mathrm{Im}\,\ln Z_\tau\!\big(u = 1 + e^{-i\phi}/(z+i0^+)\big)/\pi$.

In this article, we review these two universal relations based on the multicontour Keldysh technique introduced in Ref. [16] and developed in Refs. [17,18,19,20]. In those previous works, the operator representation was adopted. Here we introduce a path-integral representation. We also present another universal relation between the Rényi entropy of order $M$, where $M$ is a positive integer, and the current cumulant generating function [19,20]:

$$ \ln S_M = \sum_{\ell=0}^{M-1} \ln Z_\tau(e^{i\chi_\ell}), \qquad \chi_\ell = \frac{2\pi\ell + \phi}{M} - \phi. \quad (3) $$

A similar relation connecting the Rényi entanglement entropy and the partition function was derived before [21]. In the present article, we focus on the dynamical Rényi entanglement entropy in the nonequilibrium steady state realized in the limit of $\tau \to \infty$. We also calculate the probability distribution of the self-information and, to illustrate its usage, analyze the information transmission through a narrow-band bosonic quantum communication channel [7,8,22].

The structure of the paper is as follows. In Sec.
2, we review the full counting statistics for fermions and bosons. In Sec. 3, we introduce the self-information of information theory and explain that it is a random variable. Then in Sec. 4, we explain the path-integral approach to the full counting statistics and the Rényi entanglement entropy. In Sec. 5, we apply our technique to an information transmission problem carried by a single bosonic particle. In Sec. 6, we summarize this article. Step-by-step derivations are given in the Appendices.

Fermions –
Let us consider electron transmission through the tunnel junction [Fig. 1(a)]. An electron is injected from subsystem B to subsystem A and is transmitted (reflected) with the probability $q(1) = \mathcal T$ ($q(0) = \mathcal R = 1 - \mathcal T$). Suppose during a given measurement time $\tau$, $N$ electrons are injected regularly from subsystem B [Fig. 1(b-1)]. If we detect the first electron in subsystem A, we set $x_1 = 1$. If not, we set
Fig. 1. (a) Bipartition of a tunnel junction into subsystems A and B. (b) Electron scattering processes at the interface between subsystems A and B. (c) Probability distribution function of self-information.

$x_1 = 0$. Suppose we obtain $x_n$ for the $n$-th electron ($n = 1,\cdots,N$). Then we write such a sequence of events $x_1 x_2 \cdots x_N$, where $x_n \in \mathcal X = \{a_1, a_2\} = \{0, 1\}$, as $\boldsymbol x$. The probability to find this sequence is $q_N(\boldsymbol x) = q(x_1)q(x_2)\cdots q(x_N)$. For example, when all electrons are transmitted [Fig. 1(b-2)], the probability is $q_N(11\cdots 1) = q(1)q(1)\cdots q(1) = \mathcal T^N$. On the other hand, when all electrons are reflected [Fig. 1(b-3)], $q_N(00\cdots 0) = q(0)q(0)\cdots q(0) = \mathcal R^N$. The above two events are rare. In a given sequence $\boldsymbol x$, $N p_{\boldsymbol x}(a) = \sum_{n=1}^N \delta_{x_n,a}$ ($a\in\mathcal X$) electrons are transmitted (reflected) for $a = 1$ ($a = 0$) [Fig. 1(b-4)]. The probability to find such a sequence is $q_N(\boldsymbol x) = \prod_{a\in\mathcal X} q(a)^{N p_{\boldsymbol x}(a)} = \mathcal T^{N p_{\boldsymbol x}(1)}\, \mathcal R^{N p_{\boldsymbol x}(0)}$. Therefore, the probability that $N_A$ electrons are transmitted is

$$ P_\tau(N_A) = \sum_{\boldsymbol x\in\mathcal X^N} q_N(\boldsymbol x)\, \delta_{N_A, N p_{\boldsymbol x}(1)} = \binom{N}{N_A}\, \mathcal T^{N_A}\, \mathcal R^{N-N_A}, \qquad \binom{N}{n} = \frac{N!}{n!(N-n)!}. \quad (4) $$

This is the binomial distribution. The characteristic function is [2]

$$ Z_\tau(e^{i\chi}) = \sum_{N_A} P_\tau(N_A)\, e^{iN_A\chi} = \big(\mathcal R + \mathcal T e^{i\chi}\big)^N. \quad (5) $$

By taking the derivatives of its logarithm, we obtain the first cumulant or moment, the average, $\langle\langle N_A\rangle\rangle = \langle N_A\rangle = \sum_{N_A} N_A P_\tau(N_A) = N\mathcal T$, corresponding to the peak position of the binomial distribution. The second cumulant, the noise, is $\langle\langle N_A^2\rangle\rangle = \langle N_A^2\rangle - \langle N_A\rangle^2 = \sum_{N_A} N_A^2 P_\tau(N_A) - \big(\sum_{N_A} N_A P_\tau(N_A)\big)^2 = N\mathcal T\mathcal R$, which corresponds to the width of the distribution.
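The cumulants above can be checked numerically. The following sketch (with arbitrary illustrative values of $N$ and $\mathcal T$, not taken from the text) extracts the first two cumulants of the binomial characteristic function by finite differences:

```python
import numpy as np

# Binomial full counting statistics, Eq. (5): Z(e^{i chi}) = (R + T e^{i chi})^N.
# Cumulants are derivatives of ln Z with respect to (i chi); evaluating along
# the real direction s = i*chi allows simple finite differences.
N, T = 50, 0.3
R = 1.0 - T

def ln_z(s):
    return N * np.log(R + T * np.exp(s))

h = 1e-3
c1 = (ln_z(h) - ln_z(-h)) / (2 * h)               # first cumulant <<N_A>>
c2 = (ln_z(h) - 2 * ln_z(0.0) + ln_z(-h)) / h**2  # second cumulant <<N_A^2>>

print(c1, N * T)      # average, N*T = 15
print(c2, N * T * R)  # noise,  N*T*R = 10.5
```

The same finite-difference scheme applies verbatim to the bosonic characteristic function of the next subsection.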
Bosons –
Let us consider a linear bosonic channel operating at frequency $f$ within a narrow bandwidth $B \ll f$. In the duration of the measurement time $\tau$, $N = \tau B$ modes are allowed. Suppose the $n$-th mode contains $x_n \in \mathcal X$ bosons ($\mathcal X = \{a_1, a_2, a_3, \cdots\} = \{0, 1, 2, \cdots\}$). Such a probability is given by the geometric distribution $q_r(x_n) = (1-r)\, r^{x_n}$, where $r = e^{-\beta_B hf}$ and $\beta_B$ is the inverse temperature of subsystem B. Therefore the probability to find a configuration $x_1 x_2 \cdots x_N$ is $q_N(\boldsymbol x) = q_r(x_1)\cdots q_r(x_N)$. Then the probability to find $N_A$ bosons among the $N$ modes is given by the negative binomial distribution,

$$ P_\tau(N_A) = \sum_{\boldsymbol x\in\mathcal X^N} q_N(\boldsymbol x)\, \delta_{N_A,\, \sum_{j=1}^{\infty}(j-1)\, N p_{\boldsymbol x}(a_j)} = (1-r)^N\, r^{N_A} \binom{N-1+N_A}{N-1}. \quad (6) $$
The characteristic function is

$$ Z_\tau(e^{i\chi}) = \big[n_B^-(hf) - n_B^+(hf)\, e^{i\chi}\big]^{-N}, \qquad n_B^-(hf) = 1 + n_B^+(hf), \quad (7) $$

where $n_B^+(\omega) = 1/(e^{\beta_B\omega} - 1)$ is the Bose distribution function. The average number and the noise are $\langle\langle N_A\rangle\rangle = N n_B^+$ and $\langle\langle N_A^2\rangle\rangle = N n_B^+(1 + n_B^+)$, respectively.

Once the cumulant generating function is obtained, the probability distribution function can be calculated by performing the inverse Fourier transform. In the limit of long measurement time $\tau\to\infty$, the number of electrons grows as $N_A \propto \tau$. Then, within the saddle-point approximation,

$$ P_\tau(N_A) = \int_{-\pi}^{\pi}\frac{d\chi}{2\pi}\, e^{-i\chi N_A}\, Z_\tau(e^{i\chi}) \approx \exp\Big[\min_{i\chi\in\mathbb R}\big(\ln Z_\tau(e^{i\chi}) - i\chi N_A\big)\Big], \quad (8) $$

which is the Legendre-Fenchel transform [23]. The probability distribution function is

$$ \ln P_\tau(N_A) \approx -N\times\begin{cases} \big(1-\frac{N_A}{N}\big)\ln\frac{1-N_A/N}{\mathcal R} + \frac{N_A}{N}\ln\frac{N_A/N}{\mathcal T} & \text{(fermions)} \\[1mm] -\big(1+\frac{N_A}{N}\big)\ln\frac{1+N_A/N}{n_B^-} + \frac{N_A}{N}\ln\frac{N_A/N}{n_B^+} & \text{(bosons)} \end{cases} = -N D(p^*\|q). \quad (9) $$

The result is expressed by the relative entropy $D(p\|q) = \sum_{j=1}^{|\mathcal X|} p(a_j)\ln[p(a_j)/q(a_j)]$ ($|\mathcal X|$ is the number of elements in the set $\mathcal X$), which measures the difference between the distributions $p$ and $q$. The classical picture of the full counting statistics in Sec. 2.1 is an application of the method of types (Chapter 11 of Ref. [8]). $p_{\boldsymbol x}(a)$ is called a type, and Eq. (9) is a consequence of Sanov's theorem (Appendix A). The probability distribution $p^*$ is closest to $q$ in relative entropy, i.e., it minimizes $D(p\|q)$ subject to the constraint $N_A = \sum_{j=1}^{|\mathcal X|}(j-1)\, p(a_j)$. For fermions, $p^*$ and $q$ are Bernoulli distributions, $(p^*(0), p^*(1)) = (1 - N_A/N,\, N_A/N)$ and $(q(0), q(1)) = (\mathcal R, \mathcal T)$. For bosons, they are geometric distributions, $(p^*(0), p^*(1), \cdots) = (q_{1/(1+N/N_A)}(0),\, q_{1/(1+N/N_A)}(1), \cdots)$ and $(q(0), q(1), \cdots) = (q_r(0), q_r(1), \cdots)$. When $p^* = q$, equivalently $N_A = \langle\langle N_A\rangle\rangle$, at the peak of the distribution $\ln P_\tau(N_A) \approx 0$.
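The saddle-point formula (8) and the rate function (9) can be illustrated for the fermionic (binomial) case; in this sketch the values of $\mathcal T$ and $N_A/N$ are arbitrary choices:

```python
import numpy as np

# Legendre-Fenchel form of Eq. (8) for the binomial case:
# ln P(N_A) ~ min_s [ ln Z(e^s) - s*N_A ], with s = i*chi taken real,
# compared against -N*D(p*||q) of Eq. (9).
N, T = 40, 0.3
R = 1.0 - T
x = 0.6                       # N_A / N, an arbitrary test value
N_A = x * N

s = np.linspace(-10.0, 10.0, 200001)
ln_z = N * np.log(R + T * np.exp(s))
lf = np.min(ln_z - s * N_A)   # Legendre-Fenchel transform on a dense grid

# relative entropy between p* = (1-x, x) and q = (R, T)
D = (1 - x) * np.log((1 - x) / R) + x * np.log(x / T)

print(lf, -N * D)             # the two agree
```

The minimizer sits at $s^* = \ln[x\mathcal R/((1-x)\mathcal T)]$, so the grid above comfortably brackets it.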
We recall the basics of information theory [8,24]. Suppose $\mathcal X = \{a_1, a_2, a_3, a_4\}$ is a set of four symbols and the probability of occurrence of each symbol is $q(a_1) = 1/2$, $q(a_2) = 1/4$, and $q(a_3) = q(a_4) = 1/8$. If the outcome is $a_j$, the self-information associated with this outcome is

$$ I_A(a_j) = -\log_2 q(a_j)\ \text{(bits)}, \qquad I_A(a_j) = -\ln q(a_j)\ \text{(nats)}. \quad (10) $$

The self-information of each of the four symbols is $I_A(a_1) = 1$, $I_A(a_2) = 2$, and $I_A(a_3) = I_A(a_4) = 3$ bits. The Shannon entropy is the average of the self-information [8], $H(q) = \sum_{j=1}^{|\mathcal X|} q(a_j)\, I_A(a_j) = -\sum_{j=1}^{|\mathcal X|} q(a_j)\log_2 q(a_j) = (1/2)\times 1 + (1/4)\times 2 + (1/8)\times 3 + (1/8)\times 3 = 7/4$ bits. Since the self-information is a random variable, we can consider the probability distribution of the self-information; see Chapter 2.7 of Ref. [24]. The probability distribution of self-information (in bits) is $P(I_A) = \sum_{j=1}^{|\mathcal X|} q(a_j)\, \delta(I_A - I_A(a_j)) = (1/2)\,\delta(I_A-1) + (1/4)\,\delta(I_A-2) + 2\times(1/8)\,\delta(I_A-3)$ [Fig. 1(c)], and $H = \langle I_A\rangle = \int dI_A\, P(I_A)\, I_A$.

Let us consider the transmission of particles in Sec. 2.1. In the following, we measure the information content in nats. A sequence $\boldsymbol x$ carries the self-information amount $I_A(\boldsymbol x) = -\ln q_N(\boldsymbol x)$. The probability distribution function of self-information and its Fourier transform, the information generating function [25,26], are

$$ P_\tau(I_A) = \sum_{\boldsymbol x} q_N(\boldsymbol x)\, \delta(I_A - I_A(\boldsymbol x)), \qquad S_{1-i\xi} = \int dI_A\, e^{i\xi I_A}\, P_\tau(I_A) = \Big(\sum_{j=1}^{|\mathcal X|} q(a_j)^{1-i\xi}\Big)^N. \quad (11) $$

Since there is an apparent formal similarity, in the following we use the terms "information generating function" and "Rényi entropy" interchangeably. The information generating functions for fermions and bosons are

$$ S_M = \big(\mathcal T^M + \mathcal R^M\big)^N, \qquad S_M = \big[n_B^-(hf)^M - n_B^+(hf)^M\big]^{-N}. \quad (12) $$

The average is the Shannon entropy, $\langle\langle I_A\rangle\rangle = \partial_{i\xi}\ln S_{1-i\xi}\big|_{i\xi=0} = N H(q)$, where $H(q) = H(\mathcal T) = -\mathcal T\ln\mathcal T - (1-\mathcal T)\ln(1-\mathcal T)$ is the binary entropy for fermions and $H(q) = H_g(n_B^+) = (1+n_B^+)\ln(1+n_B^+) - n_B^+\ln n_B^+$ is the entropy for bosons. Similarly to Eq. (9), the probability distribution is calculated as

$$ \ln P_\tau(I_A) + I_A \approx \begin{cases} N H\!\left(\dfrac{I_A/N + \ln\mathcal T}{\ln(\mathcal T/\mathcal R)}\right) & \text{(fermions)} \\[3mm] N H_g\!\left(\dfrac{I_A/N + \ln(1-r)}{-\ln r}\right) & \text{(bosons)}. \end{cases} \quad (13) $$

Let us rewrite Eq. (11) and check its meaning. For $N \gg 1$, typical sequences are around the peak position $\langle\langle I_A\rangle\rangle = N H(q)$ of the distribution function $P_\tau(I_A)$. For $\epsilon > 0$, the intensity around the peak, $|I_A/N - H(q)| \le \epsilon$, is the probability to find typical sequences: $\int_{N(H(q)-\epsilon)}^{N(H(q)+\epsilon)} dI_A\, P_\tau(I_A) = \sum_{\boldsymbol x\in T(N,\epsilon)} q_N(\boldsymbol x)$, where $T(N,\epsilon)$ is the set of all $\epsilon$-typical sequences of length $N$ [9], satisfying $e^{-N(H(q)+\epsilon)} \le q_N(\boldsymbol x) \le e^{-N(H(q)-\epsilon)}$.
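The four-symbol example and the information generating function of Eq. (11) can be reproduced in a few lines (a sketch; the finite-difference step is an arbitrary choice):

```python
import numpy as np

# Four-symbol source of Sec. 3: q = (1/2, 1/4, 1/8, 1/8).
q = np.array([0.5, 0.25, 0.125, 0.125])
info_bits = -np.log2(q)        # self-information per symbol: 1, 2, 3, 3 bits

# Shannon entropy = mean of the self-information
H = np.sum(q * info_bits)      # 7/4 bits

# Information generating function sum_j q_j^M; minus the derivative of its
# logarithm at M = 1 gives H in nats (the continuation M -> 1 - i*xi above).
h = 1e-5
S = lambda M: np.sum(q ** M)
H_nats = -(np.log(S(1 + h)) - np.log(S(1 - h))) / (2 * h)

print(H)                    # 1.75
print(H_nats / np.log(2))   # also 1.75, from the generating function
```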
Full counting statistics –

The Hamiltonian of the tunnel junction is given by $\hat H = \hat H_A + \hat H_B + \hat V$. The Hamiltonians of the leads $r = A, B$ are $\hat H_r = \sum_k \epsilon_{rk}\, \hat c^\dagger_{rk}\hat c_{rk}$, where $\hat c_{rk}$ annihilates a particle in the quantum state $k$ in the lead $r$. The tunnel Hamiltonian is $\hat V = J\sum_{k,k'}(\hat c^\dagger_{Ak}\hat c_{Bk'} + \hat c^\dagger_{Bk'}\hat c_{Ak})$. Then the probability distribution function of the particle number in subsystem A and its characteristic function are

$$ P_\tau(N_A) = \mathrm{Tr}_A\big[\hat\Pi_{N_A}\,\hat\rho_A(\tau)\big], \qquad Z_\tau(e^{i\chi}) = \mathrm{Tr}_A\big[e^{i\chi\hat N_A}\,\hat\rho_A(\tau)\big], \quad (14) $$

where $\hat N_A = \sum_k \hat c^\dagger_{Ak}\hat c_{Ak}$ is the operator of the number of particles in subsystem A and $\hat\Pi_{N_A} = \int_{-\pi}^{\pi} d\chi\, e^{i(\hat N_A - N_A)\chi}/(2\pi)$ is the projection operator. The phase $\chi$ is called the counting field. The reduced density matrix of subsystem A at time $\tau$ is prepared by the following protocol. Initially, at time $t = 0$, the subsystems are decoupled and

Fig. 2. (a) Keldysh contour $K$ consisting of the forward (upper) branch $K_+$ and the backward (lower) branch $K_-$. The shaded box is the initial density matrix. The solid dot at $t = \tau$ indicates the operator $e^{i\hat N_A\chi}$. (b) The sequence of $M$ Keldysh contours. Time $\tau$ on the upper branch of the $m$-th Keldysh contour, $K_{m,+}$, is connected to time $\tau$ on the lower branch of the $(m+1)$-th Keldysh contour, $K_{m+1,-}$ ($m = 1,\cdots,M$ and $K_{M+1,-} = K_{1,-}$). (c) $M$ disconnected Keldysh contours obtained after the discrete Fourier transform. Solid dots at $t = \tau$ indicate the operators $e^{i\hat N_A\chi_\ell}$ ($\ell = 0,\cdots,M-1$).

each subsystem is equilibrated with the inverse temperature $\beta_{A(B)}$ and the chemical potential $\mu_{A(B)}$.
The equilibrium density matrix of the subsystem $r$ is $\hat\rho^{\rm eq}_r = e^{-\beta_r(\hat H_r - \mu_r\hat N_r)}/Z_r$, where the equilibrium partition function is $Z_r = Z(\beta_r, \mu_r) = \mathrm{Tr}_r\big[e^{-\beta_r(\hat H_r - \mu_r\hat N_r)}\big]$. The explicit form is $\ln Z_r = -e^{i\phi}\int d\omega\,\rho_r(\omega)\ln\big(1 - e^{i\phi}\, e^{-\beta_r(\omega-\mu_r)}\big)$, where $\rho_r(\omega) = \sum_k\delta(\omega - \epsilon_{rk})$ is the density of states of particles in the lead $r$. At $t = 0$, we switch on the coupling $\hat V$ and let the total system evolve until $t = \tau$. Then the reduced density matrix of subsystem A at $t = \tau$ is

$$ \hat\rho_A(\tau) = \mathrm{Tr}_B\big[e^{-i\hat H\tau}\,\hat\rho^{\rm eq}_A\hat\rho^{\rm eq}_B\, e^{i\hat H\tau}\big]. \quad (15) $$

We introduce the Keldysh path-integral [27,28,29] representation of the characteristic function, $Z_\tau(e^{i\chi}) = \int\mathcal D[c_{rk}(t)^* c_{rk}(t)]\, e^{iS(\chi)}/(Z_A Z_B)$ (see Appendix B for detailed derivations), where the action is

$$ S(\chi) = S_K(\{c_{rk}(t)^*, c_{rk}(t)\}) + \sum_{r=A,B} S_{\rm b.c.}(c_{rk}(\tau_\pm)^*, c_{rk}(\tau_\pm); \lambda_r). \quad (16) $$

Here $c_{rk}$ is a complex number (Grassmann number) and $c^*_{rk}$ its complex conjugate (conjugate) for a bosonic (fermionic) field [30]. The action is defined on the Keldysh contour $K$ [Fig. 2(a)]:

$$ iS_K = \sum_{\sigma=\pm} i\sigma\int_0^\tau dt_\sigma\Big[\sum_{rk} c_{rk}(t_\sigma)^*\big(i\partial_{t_\sigma} - \epsilon_{rk}\big)c_{rk}(t_\sigma) - J\sum_{k,k'}\big(c_{Ak}(t_\sigma)^* c_{Bk'}(t_\sigma) + c_{Bk'}(t_\sigma)^* c_{Ak}(t_\sigma)\big)\Big] + \sum_{rk} c_{rk}(0_+)^*\big(c_{rk}(0_-)\, e^{-\beta_r(\epsilon_{rk}-\mu_r)} - c_{rk}(0_+)\big), \quad (17) $$

where the time $t_\pm$ is defined on $K_\pm$. The first term is defined on the forward (upper) and backward (lower) branches $K_+$ and $K_-$. The second term imposes the boundary condition at $t = 0_+$ and $t = 0_-$. The action determining the boundary condition at $t = \tau_+$ and $t = \tau_-$ is

$$ iS_{\rm b.c.}(c_{rk}(\tau_\pm)^*, c_{rk}(\tau_\pm); \lambda_r) = \sum_k c_{rk}(\tau_-)^*\big(c_{rk}(\tau_+)\, e^{i\lambda_r} - c_{rk}(\tau_-)\big), \quad (18) $$

where $\lambda_A = \chi + \phi$ and $\lambda_B = \phi$.
Then, after the functional integral, and in the limit of long measurement time, we obtain

$$ \ln\frac{Z_\tau(e^{i\chi})}{Z_0(e^{i\chi})} \approx -e^{i\phi}\,\frac{\tau}{2\pi}\int d\omega\, \ln\det\big[\tau_0 - J^2\,\tau_3\, g_{A,\chi+\phi}(\omega)\,\tau_3\, g_{B,\phi}(\omega)\big], \quad (19) $$

where $\tau_0 = \mathrm{diag}(1,1)$ is the identity matrix and $\tau_3 = \mathrm{diag}(1,-1)$ is the Pauli matrix in Keldysh space. The $2\times 2$ Green function is

$$ g_{r,\lambda}(\omega) = -2\pi i\,\rho_r(\omega)\begin{bmatrix} 1/2 + n^+_{r,\lambda}(\omega) & n^+_{r,\lambda}(\omega)\, e^{-i\lambda} \\ n^-_{r,\lambda}(\omega)\, e^{i\lambda} & 1/2 + n^+_{r,\lambda}(\omega) \end{bmatrix}, \quad (20) $$

where we neglected the real part. We introduced

$$ n^-_{r,\lambda}(\omega) = \frac{1}{1 - e^{-\beta_r(\omega-\mu_r)+i\lambda}}, \qquad n^+_{r,\lambda}(\omega) = e^{-\beta_r(\omega-\mu_r)+i\lambda}\, n^-_{r,\lambda}(\omega), \quad (21) $$

which is the particle distribution function when $\lambda = \phi$. For $\phi = 0$, the Bose distribution function $n^+_{r,0} = n^+_r = 1/(e^{\beta_r(\omega-\mu_r)} - 1)$ and $n^-_{r,0} = n^-_r = 1 + n^+_r$ are obtained. For $\phi = \pi$, the Fermi distribution function $n^+_{r,\pi} = -f^+_r = -1/(e^{\beta_r(\omega-\mu_r)} + 1)$ and $n^-_{r,\pi} = f^-_r = 1 - f^+_r$ are obtained. At $\tau = 0$, the two subsystems are decoupled and the characteristic function is $\ln Z_0(e^{i\chi}) = \ln\big[Z(\beta_A, \mu_A + i\chi/\beta_A)/Z(\beta_A, \mu_A)\big] = -e^{i\phi}\int d\omega\,\rho_A(\omega)\ln\big(n^-_{A,\phi}(\omega) - n^+_{A,\phi}(\omega)\, e^{i\chi}\big)$. Further calculations lead to

$$ \ln\frac{Z_\tau(e^{i\chi})}{Z_0(e^{i\chi})} = -e^{i\phi}\,\frac{\tau}{2\pi}\int d\omega\, \ln\frac{\tilde n^+_{A,\phi}(\omega)\, e^{i\chi} - \tilde n^-_{A,\phi}(\omega)}{n^+_{A,\phi}(\omega)\, e^{i\chi} - n^-_{A,\phi}(\omega)}, \quad (22) $$

where we omitted a constant to satisfy the normalization condition $Z_\tau(1) = 1$. We introduced the effective particle distribution function $\tilde n^\pm_{A,\phi}(\omega) = \mathcal R(\omega)\, n^\pm_{A,\phi}(\omega) + \mathcal T(\omega)\, n^\pm_{B,\phi}(\omega)$, where the transmission and reflection probabilities are $\mathcal T(\omega) = 1 - \mathcal R(\omega) = 4\pi^2 J^2\rho_A(\omega)\rho_B(\omega)/[1 + \pi^2 J^2\rho_A(\omega)\rho_B(\omega)]^2$. For fermions, $\phi = \pi$, Eq. (5) is obtained from Eq. (22) by taking the zero-temperature limit $\beta_A, \beta_B \to \infty$ for an energy-independent transmission probability $\mathcal T(\omega) = \mathcal T$. The number of injected fermions is $N = \tau(\mu_B - \mu_A)/(2\pi)$ for $\mu_B > \mu_A$. For bosons, $\phi = 0$, Eq. (7) is derived for the narrow-band channel, i.e., the energy filter $\mathcal T(\omega) = hB\,\delta(\omega - hf)$, where the bandwidth $B$ is much smaller than the signal frequency $f$, when subsystem A is empty, $n^+_{A,0}(hf) = 0$.
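The reduction of the long-time formula (22) to the binomial result (5) can be checked numerically at low temperature. In this sketch, the values of $\beta$, $\mu_{A,B}$, $\mathcal T$, and $\chi$ are arbitrary illustrative choices, with $\hbar = 1$ and $\tau = 2\pi$ so that $N = \mu_B - \mu_A = 1$:

```python
import numpy as np

# Fermionic (phi = pi) check of Eq. (22) against N*ln(R + T e^{i chi}), Eq. (5).
beta, mu_A, mu_B = 500.0, 0.0, 1.0
T, chi = 0.4, 0.3
R = 1.0 - T

def f(omega, mu):
    # Fermi function, written with tanh for numerical stability
    return 0.5 * (1.0 - np.tanh(0.5 * beta * (omega - mu)))

omega = np.linspace(-0.5, 1.5, 200001)
nA_p, nB_p = -f(omega, mu_A), -f(omega, mu_B)   # n^+_{r,pi} = -f^+_r
nA_m, nB_m = 1.0 + nA_p, 1.0 + nB_p             # n^-_{r,pi} = 1 - f^+_r
nt_p = R * nA_p + T * nB_p                      # effective distributions
nt_m = R * nA_m + T * nB_m

u = np.exp(1j * chi)
integrand = np.log((nt_p * u - nt_m) / (nA_p * u - nA_m))
ln_z = integrand.sum() * (omega[1] - omega[0])  # -e^{i phi} (tau/2 pi) integral

print(ln_z, np.log(R + T * u))                  # both close to -0.011 + 0.120i
```

The integrand vanishes outside the bias window $(\mu_A, \mu_B)$, so the finite integration range suffices; the residual mismatch is a finite-temperature correction of order $1/\beta$.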
Rényi entanglement entropy –

The probability distribution function of self-information and the information generating function are

$$ P_\tau(I_A) = \mathrm{Tr}_A\big[\hat\rho_A(\tau)\, \delta(I_A + \ln\hat\rho_A(\tau))\big], \qquad S_{1-i\xi} = \mathrm{Tr}_A\big[\hat\rho_A(\tau)^{1-i\xi}\big]. \quad (23) $$

The spectrum of the entanglement Hamiltonian, the entanglement spectrum [38], is closely related to this distribution: $\mathrm{Tr}_A\big[\delta(I_A - \hat I_A)\big] = e^{I_A}\, P_\tau(I_A)$. This relation implies $\langle e^{I_A}\rangle = \mathrm{rank}\,\hat\rho_A$, which is reminiscent of the Jarzynski equality [39,40]. As an example, let us consider the density matrix $\hat\rho_A = \big(\sum_{j=1}^{|\mathcal X|} q(a_j)\, |j\rangle\langle j|\big)^{\otimes N}$, where $\{|j\rangle\}$ is an orthonormal set. Then Eq. (23) reduces to the classical case, Eq. (11). By applying Jensen's inequality to the Jarzynski equality $\langle e^{I_A}\rangle = |\mathcal X|^N$, we obtain $\langle I_A\rangle \le N\ln|\mathcal X|$, i.e., $N\ln|\mathcal X|$ is the maximum entropy.

At $\tau = 0$, when the subsystems are decoupled, the information generating function is $s_M = \mathrm{Tr}_A\big[\hat\rho_A^{{\rm eq}\,M}\big] = Z(M\beta_A, \mu_A)/Z_A^M$. At finite $\tau$, when the coupling induces correlations between the two subsystems, the information generating function is calculated by the replica trick: we first calculate it for a positive integer $M$ and then perform the analytic continuation $M \to 1 - i\xi$. The path-integral representation of the Rényi entropy of positive integer order $M$, $S_M = \mathrm{Tr}_A[\hat\rho_A(\tau)\cdots\hat\rho_A(\tau)]$, is evaluated by extending the contour $K$ to a sequence of $M$ Keldysh contours [16] [Fig. 2(b)]: $S_M = (Z_A Z_B)^{-M}\int\mathcal D[c^m_{rk}(t_m)^* c^m_{rk}(t_m)]\, e^{i\tilde S}$, where the action is

$$ \tilde S = \sum_{m=1}^{M}\Big[S_{K_m}(\{c^m_{rk}(t_m)^*, c^m_{rk}(t_m)\}) + S_{\rm b.c.}(c^m_{Bk}(\tau_{m,\pm})^*, c^m_{Bk}(\tau_{m,\pm}); \phi) + S^m_{\rm b.c.\,A}\Big]. \quad (24) $$

Here the time $t_{m,\pm}$ is defined on the upper (lower) branch of the $m$-th Keldysh contour, $K_{m,\pm}$. The fields $c^m_{rk}$, and thus the action $S_{K_m}$, are defined on the $m$-th Keldysh contour $K_m$. The definition of the action $S_{K_m}$ is the same as Eq. (17). The action imposing the boundary condition at $t = \tau_{m,+}$ and $t = \tau_{m,-}$ for the fields of subsystem B, $S_{\rm b.c.}$, is the same as Eq. (18). For the fields of subsystem A, the action $S^m_{\rm b.c.\,A}$ imposes the boundary condition connecting $t = \tau_{m,+}$ on the upper branch of the $m$-th Keldysh contour and $t = \tau_{m+1,-}$ on the lower branch of the $(m+1)$-th Keldysh contour:

$$ iS^m_{\rm b.c.\,A} = \sum_k c^{m+1}_{Ak}(\tau_-)^*\big(c^m_{Ak}(\tau_+) - c^{m+1}_{Ak}(\tau_-)\big), \quad (25) $$

where $c^{M+1}_{Ak}(\tau_-) = c^1_{Ak}(\tau_-)\, e^{-i\phi}$. This action can be diagonalized by the discrete Fourier transform

$$ c^m_{rk}(t) = \frac{1}{\sqrt M}\sum_{\ell=0}^{M-1} c^\ell_{rk}(t)\, e^{-i(2\pi\ell+\phi)m/M}, \quad (26) $$

which fulfills the periodic or anti-periodic boundary condition $c^{m+M}_{rk}(t) = c^m_{rk}(t)\, e^{-i\phi}$. The resulting action is

$$ \sum_{m=1}^{M} S^m_{\rm b.c.\,A} = \sum_{\ell=0}^{M-1} S_{\rm b.c.}(c^\ell_{Ak}(\tau_\pm)^*, c^\ell_{Ak}(\tau_\pm); \chi_\ell + \phi), \qquad \chi_\ell = \frac{2\pi\ell+\phi}{M} - \phi. \quad (27) $$

The discrete Fourier transform introduces a discrete counting field $\chi_\ell$ at $t = \tau$. Since the action defined on the Keldysh contour, Eq. (17), is quadratic, it is diagonal after the Fourier transform: $\sum_{m=1}^M S_{K_m}(\{c^{m*}_{rk}, c^m_{rk}\}) = \sum_{\ell=0}^{M-1} S_K(\{c^{\ell*}_{rk}, c^\ell_{rk}\})$. The action imposing the boundary condition for subsystem B is also diagonal in $\ell$: $\sum_{m=1}^M S_{\rm b.c.}(c^m_{Bk}(\tau_{m,\pm})^*, c^m_{Bk}(\tau_{m,\pm}); \phi) = \sum_{\ell=0}^{M-1} S_{\rm b.c.}(c^\ell_{Bk}(\tau_\pm)^*, c^\ell_{Bk}(\tau_\pm); \phi)$. In this way, we separate the $M$ connected Keldysh contours [Fig. 2(b)] into $M$ disconnected Keldysh contours [Fig. 2(c)]. The action is expressed by using the action of the cumulant generating function, Eq. (16),

$$ \tilde S = \sum_{\ell=0}^{M-1} S(\chi_\ell). \quad (28) $$

By noticing that the Jacobian of the Fourier transform is 1, we obtain the relation (3).

For further calculations, we define $u = e^{i\chi}$ and rewrite the summation over $\ell$ as a contour integral [21]:

$$ \ln S_M = \oint_C \frac{du}{2\pi i}\sum_{\ell=0}^{M-1}\frac{\ln Z_\tau(u)}{u - e^{i\chi_\ell}} = \oint_C \frac{du}{2\pi i}\,\partial_u\ln\big(u^M - e^{i\phi(1-M)}\big)\,\ln Z_\tau(u), \quad (29) $$

where the contour $C$ is taken so that it encircles the poles at $u = e^{i\chi_\ell}$ ($\ell = 0,\cdots,M-1$) [Fig. 3(a)]. In the limit $\tau \to \infty$, we use Eq. (22). The integrand has a branch cut on the real axis connecting the two branch points $u_+ = \tilde n^-_{A,\phi}(\omega)/\tilde n^+_{A,\phi}(\omega)$ and $u_- = n^-_{A,\phi}(\omega)/n^+_{A,\phi}(\omega)$. By the variable transform $z = -e^{-i\phi}/(1-u)$, which transforms the unit circle to the line $\mathrm{Re}\,z = -e^{-i\phi}/2$, and by noticing that the branch cut stays on the real axis after this transform [Fig. 3(b)], Eq. (29) can be transformed into Eq. (2). The explicit form of the spectral density associated with Eq. (22) is

$$ \mu(z) = \int d\omega\Big[\frac{\tau}{2\pi}\,\delta\big(z - e^{-i\phi}\,\tilde n^+_{A,\phi}(\omega)\big) + \Big(\rho_A(\omega) - \frac{\tau}{2\pi}\Big)\,\delta\big(z - e^{-i\phi}\, n^+_{A,\phi}(\omega)\big)\Big]. \quad (30) $$

The first term in the square brackets contains the effective distribution function and thus is related to particle transmission and reflection. The second term is the bulk thermodynamic entropy of subsystem A and the overcounting term. In the limit of zero temperature, Eq. (30) is the effective-transparency density [41], and the above discussions are applicable to a finite-$\tau$ case since, for noninteracting fermions, the singularities are always on the negative real axis of the complex $u$-plane [42]. The information generating functions in Eq. (12) are obtained by substituting Eq. (30) into Eq. (2). For fermions, $\phi = \pi$, again we take the zero-temperature limit $\beta_A, \beta_B \to \infty$ and consider an energy-independent transmission probability. For bosons, $\phi = 0$, we set $\mathcal T(\omega) = hB\,\delta(\omega - hf)$ and $n^+_{A,0}(hf) = 0$.

The relation (1) is obtained by expanding the RHS of Eq. (3) in powers of $\chi_\ell$ and then performing the summation over $\ell$:

$$ \ln S_M = \sum_{k=1}^{\infty}\frac{\langle\langle N_A^k\rangle\rangle}{k!}\left(\frac{2\pi i}{M}\right)^k \times \begin{cases} \zeta(-k, 0) - \zeta(-k, M) & (\phi = 0) \\ \zeta\big(-k, \frac{1-M}{2}\big) - \zeta\big(-k, \frac{1+M}{2}\big) & (\phi = \pi). \end{cases} \quad (31) $$

Here $\zeta(s, a) = \sum_{\ell=0}^{\infty}(a+\ell)^{-s}$ is the Hurwitz zeta function. By the analytic continuation $M \to 1 - i\xi$ and by differentiating with respect to $\xi$, we obtain Eq. (1), except for the imaginary number $-i\pi\langle\langle N_A\rangle\rangle$ for bosons, which may be an artifact and is neglected.

It is known that noninteracting bosons cannot be entangled if the initially decoupled systems are in equilibrium [6,43]. Therefore, our average self-information does not measure the amount of entanglement for bosons.
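Relation (3) can also be verified directly for the two explicit characteristic functions of Sec. 2, since for them the product over the discrete counting fields $\chi_\ell$ can be evaluated in closed form (the parameter values below are arbitrary):

```python
import numpy as np

# Check of relation (3), ln S_M = sum_l ln Z(e^{i chi_l}), for the binomial
# (fermion, phi = pi) and negative-binomial (boson, phi = 0) channels.
N, M = 7, 5

def chi_l(phi):
    l = np.arange(M)
    return (2 * np.pi * l + phi) / M - phi

# Fermions: Z = (R + T e^{i chi})^N and S_M = (T^M + R^M)^N, Eq. (12)
T = 0.3
R = 1.0 - T
lhs = np.sum(N * np.log(R + T * np.exp(1j * chi_l(np.pi))))
rhs = N * np.log(T**M + R**M)
print(lhs.real, rhs)           # equal; the imaginary part vanishes

# Bosons: Z = (n_m - n_p e^{i chi})^{-N} and S_M = (n_m^M - n_p^M)^{-N}
n_p = 0.8
n_m = 1.0 + n_p
lhs_b = np.sum(-N * np.log(n_m - n_p * np.exp(1j * chi_l(0.0))))
rhs_b = -N * np.log(n_m**M - n_p**M)
print(lhs_b.real, rhs_b)
```

The underlying identity is elementary: the $e^{i\chi_\ell}$ are the $M$-th roots of $-1$ (fermions) or of $1$ (bosons), so the product over $\ell$ telescopes into $\mathcal T^M + \mathcal R^M$ or $n_B^{-M} - n_B^{+M}$.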
Here we present how the probability distribution function of self-information can be used to analyze the performance of a quantum communication channel [8], by regarding subsystem A as the receiver side and subsystem B as the transmitter side. Let us consider a single narrow-band bosonic channel. The transmitter side generates signals, the thermal noise of temperature $\beta_B^{-1}$, with the average power $P_A = hf B n_B^+$. We set $\beta_A \to \infty$ to suppress the detector noise. Then we ask a question addressed in Ref. [22]: How much information can be transmitted by a single boson?
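Before turning to the full distribution, the long-time average can be sketched from the first cumulants found above, $\langle I_A\rangle = N H_g(n_B^+)$ and $\langle N_A\rangle = N n_B^+$ (the occupation values below are arbitrary illustrative choices):

```python
import numpy as np

# Average self-information per boson, <I_A>/<N_A> = H_g(n)/n in nats,
# using <I_A> = N*H_g(n) and <N_A> = N*n from Secs. 2-3.
def H_g(n):
    # entropy of the geometric (thermal) distribution with mean occupation n
    return (1.0 + n) * np.log(1.0 + n) - n * np.log(n)

for n in [2.0, 1.0, 0.1, 0.01]:
    print(n, H_g(n) / n)   # nats per boson; grows like 1 - ln(n) for n << 1
```

The divergence of this ratio at a small occupation number is the enhancement discussed below.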
The quantity we consider is the ratio of the self-information content to the number of bosonic particles, $\eta = I_A/N_A$. It is a random variable, since both the numerator and the denominator are random variables. It is also an analog of the efficiency, the ratio of output to input, whose fluctuations have been addressed recently [44,45,46,47]. In the limit of long measurement time $\tau \to \infty$, the average approaches $\langle\eta\rangle \approx \langle I_A\rangle/\langle N_A\rangle = H_g(n_B^+)/n_B^+$, as predicted in Ref. [22].

Fig. 3. (a) Contour of integration $C$ encircling the singularities at $u = e^{i((2\pi\ell+\phi)/M - \phi)}$ (in this panel, $\phi = \pi$ and $M = 8$), denoted by crosses. The dashed line indicates the unit circle. There is a branch cut connecting the two branch points $u_+ = \tilde n^-_{A,\phi}(\omega)/\tilde n^+_{A,\phi}(\omega)$ and $u_- = n^-_{A,\phi}(\omega)/n^+_{A,\phi}(\omega)$ on the real axis (in this panel, we assumed $u_+ > u_-$). (b) The contour $C$ and the singularities after the variable transform $z = -e^{-i\phi}/(1-u)$. The dashed line indicates $\mathrm{Re}\,z = -e^{-i\phi}/2$.

In order to analyze the probability distribution of $\eta$, we introduce the self-information associated with a state after the projective measurement of the particle number $N_A$ in subsystem A: $\hat I'_A = -\ln\hat\rho'_A$, where $\hat\rho'_A = \sum_{N_A}\hat\Pi_{N_A}\hat\rho_A\hat\Pi_{N_A}$. The joint probability distribution of $I'_A$ and $N_A$ is $P_\tau(I'_A, N_A) = \mathrm{Tr}_A\big[\hat\Pi_{N_A}\hat\rho_A\hat\Pi_{N_A}\,\delta(I'_A + \ln\hat\rho'_A)\big]$. The information generating function is

$$ S_{1-i\xi}(\chi) = \int dI'_A\sum_{N_A} e^{iN_A\chi + iI'_A\xi}\, P_\tau(I'_A, N_A) = \mathrm{Tr}_A\Big[\big(e^{i\hat N_A\chi/(1-i\xi)}\,\hat\rho_A\big)^{1-i\xi}\Big], \quad (32) $$

where we used the local particle-number super-selection rule [6,20,48,49], which ensures $[\hat\rho_A, \hat N_A] = 0$ [20]. Then, for the narrow-band channel when the detector noise is absent, $\beta_A \to \infty$, the characteristic function is $\ln S_M(\chi) = -N\ln\big[n_B^-(hf)^M - e^{i\chi}\, n_B^+(hf)^M\big]$. The joint probability distribution is then $P(I'_A, N_A) = P_\tau(N_A)\,\delta\big(I'_A + \ln[(1-r)^N r^{N_A}]\big)$, where $P_\tau(N_A)$ is given by Eq. (6). The probability distribution of the ratio is

$$ P_\tau(\eta) = \sum_{N_A=1}^{\infty}\int dI'_A\, P_\tau(I'_A, N_A)\,\delta(\eta - I'_A/N_A) \approx \frac{|N^*_A|\, P_\tau(N^*_A)}{\eta + \ln r}, \quad (33) $$

where $N^*_A/N = -\ln(1-r)/(\eta + \ln r)$. From the condition $N^*_A \ge 0$, we observe that the fluctuation of $\eta$ is bounded from below, $\eta > -\ln r = \beta_B hf$. In the limit of $\tau \to \infty$, we can adopt Eq. (9) and see that the peak position is at $1/(1 + N/N^*_A) = r$, equivalently $\eta = \langle\eta\rangle$.

Figure 4(a) shows the boson occupation number $n_B^+$ dependence of the average value. In the limit of a small boson occupation number, $n_B^+ \ll 1$, it diverges as $\langle\eta\rangle \approx 1 - \ln n_B^+$, as predicted in Ref. [22]. Figure 4(b) shows the probability distributions for various boson occupation numbers. For large $\eta$, the probability decays in a power-law fashion, $P_\tau(\eta) \approx (1+n_B^+)^{-N}/\eta^2$. This implies that, for a small boson occupation number, the information carried by a single particle fluctuates strongly. It can be understood as follows: a sequence $\boldsymbol x$ satisfying $N_A = \sum_{j=1}^{\infty}(j-1)\, N p_{\boldsymbol x}(a_j)$ carries the self-information $I_A = -\ln q_N(\boldsymbol x) = -N_A\ln r - N\ln(1-r)$. For small $n_B^+ = r/(1-r) \ll 1$, the ratio for this particular sequence is $\eta \approx -\ln n_B^+ + N n_B^+/N_A$. The first term is the average value and the second term is the fluctuation. The fluctuation is strongly enhanced for a sequence in which the number of bosons is much smaller than the average number of signal quanta, $N_A \ll N n_B^+ = \tau P_A/(hf)$. For fermions, the probability distribution of the ratio was analyzed in Ref. [20].

Fig. 4. (a) The information content carried by a single boson particle. (b) The probability distribution of the ratio $\eta = I_A/N_A$. Dashed lines indicate $-N\ln n_B^-$.

In summary, we presented the fermionic and bosonic path integral for the full counting statistics and the Rényi entanglement entropy. The key relation (3) holds for a quadratic action, i.e., noninteracting particles. We analyzed the ratio of the self-information to the number of bosons transmitted through a narrow-band quantum channel. For a small occupation number of bosons, the average of the ratio diverges. At the same time, for an event in which the number of bosons is much smaller than the average number of signal quanta for a given signal power, the fluctuation of the ratio is enhanced.

We thank Hiroaki Okada for valuable discussions. This work was supported by JSPS KAKENHI Grants No. 17K05575 and No. JP26220711.
A Short derivation of Sanov’s theorem
Let $\mathcal X = \{a_1, \cdots, a_{|\mathcal X|}\}$ and consider the distribution $q$ on $\mathcal X$. Let $X_1, \cdots, X_N$ be a sequence of random variables drawn i.i.d. according to $q(x)$. The probability that the sample average of $g(X)$ is equal to $\alpha$ is

$$ P\Big(\sum_{n=1}^{N} g(x_n) = N\alpha\Big) = \sum_{\boldsymbol x\in\mathcal X^N} q_N(\boldsymbol x)\,\delta\Big(\sum_{n=1}^{N} g(x_n) - N\alpha\Big). \quad (34) $$

By using the multinomial coefficient, it is written as

$$ P \approx N^{|\mathcal X|}\int dp_1\cdots dp_{|\mathcal X|}\binom{N}{Np_1\cdots Np_{|\mathcal X|}}\, q(a_1)^{Np_1}\cdots q(a_{|\mathcal X|})^{Np_{|\mathcal X|}}\,\delta\Big(N\sum_{j=1}^{|\mathcal X|} p_j\, g(a_j) - N\alpha\Big)\,\delta\Big(N\sum_{j=1}^{|\mathcal X|} p_j - N\Big). \quad (35) $$

By rewriting the delta functions as, e.g., $\delta(x) = \int d\xi\, e^{-i\xi x}/(2\pi)$, and by using Stirling's approximation, we obtain $P \approx \exp\big(-N\min_{\{p_j\}, i\xi, i\lambda\in\mathbb R} J(p, \xi, \lambda)\big)$ within the saddle-point approximation, where $J = D(p\|q) + i\xi\big(\sum_{j=1}^{|\mathcal X|} p_j\, g(a_j) - \alpha\big) + i\lambda\big(\sum_{j=1}^{|\mathcal X|} p_j - 1\big)$. For the precise derivation, see Ref. [8]. The result is rephrased as $P \approx e^{-N D(p^*\|q)}$, where $p^*$ is the closest distribution to $q$ in relative entropy under the constraints $\sum_{j=1}^{|\mathcal X|} p_j\, g(a_j) = \alpha$ and $\sum_{j=1}^{|\mathcal X|} p_j = 1$. For our problems, we set $a_j = j - 1$ and $|\mathcal X| = 2$ ($\infty$) for fermions (bosons). Equation (9) is obtained by setting $\alpha = N_A/N$ and $g(a_j) = j - 1$. Equation (13) is obtained by setting $\alpha = I_A/N$ and $g(a_j) = -\ln q(a_j)$.
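As a numerical illustration of the theorem (the values of $\mathcal T$ and the empirical fraction $x$ are arbitrary), the exact binomial probability of Eq. (4) approaches $e^{-N D(p^*\|q)}$ on the scale of $\ln P/N$:

```python
from math import comb, log

# Sanov's theorem for the binomial case: (1/N) ln P(N_A = x*N) -> -D(p*||q).
T, x = 0.3, 0.6          # transmission probability and empirical fraction
R = 1.0 - T
D = (1 - x) * log((1 - x) / R) + x * log(x / T)

for N in [10, 100, 1000]:
    N_A = int(x * N)
    ln_p = log(comb(N, N_A)) + N_A * log(T) + (N - N_A) * log(R)
    print(N, ln_p / N, -D)   # ln P / N approaches -D as N grows
```

The residual gap decays like $\ln N / N$, the usual subexponential prefactor of the method of types.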
We adopt the time-slicing technique [29,50,51,52,53] to introduce the path-integralrepresentation of Eq. (32); S M ( χ ) = Tr A (cid:20)(cid:16) e i ˆ N A χ/M ˆ ρ A ( τ ) (cid:17) M (cid:21) . This is the char-acteristic function for M = 1, S ( χ ) = Z τ ( e iχ ) Eq. (14), and the R´enyi entan-glement entropy for χ = 0, S M = S M (0) Eq. (23). Initially, when the two sub-systems are decoupled, it is s M ( χ ) = Tr A (cid:20)(cid:16) e i ˆ N A χ/M ˆ ρ eq A (cid:17) M (cid:21) = Z ( M β A , µ A + iχ/ ( M β A )) /Z ( β A , µ A ) M . For simplicity, we consider one quantum state in each sub-system, and thus the Hamiltonians are, ˆ H A = ǫ A ˆ a † ˆ a , ˆ H B = ǫ B ˆ b † ˆ b , and ˆ V = J (ˆ a † ˆ b +ˆ b † ˆ a ). To obtain the path-integral representation, we discretize each branch of M Keldysh contours into N − dτ = τ / ( N − m th Keldysh contour is t m,j = ( N − j ) dτ for j = 1 , · · · , N and that on the upper branch is t m,j = ( j − N − dτ for j = N + 1 , · · · , N . Then, at time t m,j , we insert the closure relation, e.g.,ˆ1 = R da ∗ m,j da m,j e − a ∗ m,j a m,j | a m,j ih a m,j | / N , where N = 1(2 πi ) for fermions (bosons)(we follow the convention of the textbook [30]) and ˆ a † | a m,j i = a m,j | a m,j i is the co-herent state.By inserting closure relations for subsystem A at τ m, + = t m, N and τ m, − = t m, and using the trace expression, Tr A ˆ O = R da ∗ , da , e − a ∗ , a , h e − iφ a , | ˆ O| a , i / N ,the R´enyi entropy becomes, S M ( χ ) = Tr A h e i ˆ N A χ/M ˆ ρ A ( τ ) · · · e i ˆ N A χ/M ˆ ρ A ( τ ) i = Z da ∗ M, N da M, N N Z da ∗ M, da M, N· · · Z da ∗ , N da , N N Z da ∗ , da , N e − P Mm =1 ( a ∗ m, N a m, N + a ∗ m, a m, ) ×h e − iφ a , | e i ˆ N A χ/M | a M, N ih a M, N | ˆ ρ A ( τ ) | a M, ih a M, | e i ˆ N A χ/M | a M − , N i · · ·×h a , N | ˆ ρ A ( τ ) | a , ih a , | e i ˆ N A χ/M | a , N ih a , N | ˆ ρ A ( τ ) | a , i . 
Fig. 5. Discrete time points on the $m$th Keldysh contour.

Then by using the relation, $\langle a | e^{i \hat{N}_A \chi/M} | a' \rangle = e^{a^* a' e^{i\chi/M}}$, we obtain,
\[
S_M(\chi) = \prod_{m=1}^M \int \frac{da^*_{m,2N} da_{m,2N} \, da^*_{m,1} da_{m,1}}{\mathcal{N}^2} \, \rho_A(a_{m,2N}, a_{m,1}; \tau) \, e^{\sum_{m=1}^M i S^m_{\rm b.c.A}} . \quad (37)
\]
The action, $i S^m_{\rm b.c.A} = a^*_{m+1,1} ( a_{m,2N} e^{i\chi/M} - a_{m+1,1} )$, where $a_{M+1,1} = e^{-i\phi} a_{1,1}$, corresponds to Eq. (25) and imposes the boundary condition of the field $a$ at $\tau_{m,+}$ and $\tau_{m+1,-}$. Here $\rho_A(a_{m,2N}, a_{m,1}; \tau) = \langle a_{m,2N} | \hat{\rho}_A(\tau) | a_{m,1} \rangle \, e^{-a^*_{m,2N} a_{m,2N}}$ is the reduced density matrix expressed by the double path-integral [27,28,54],
\[
\rho_A(a_{2N}, a_1; \tau) = \prod_{j=2}^{2N-1} \int \frac{da^*_j da_j}{\mathcal{N}} \prod_{j=1}^{2N} \int \frac{db^*_j db_j}{\mathcal{N}} \, \frac{e^{i S_K + i S_{\rm b.c.}(b_{2N}, b^*_{2N}; \phi)}}{Z_A Z_B} , \quad (38)
\]
where we omitted the replica index $m$. The action corresponding to Eq. (17) is,
\[
i S_K \approx \left( \sum_{j=N+1}^{2N-1} + \sum_{j=1}^{N-1} \right) \left[ - a^*_{j+1} (a_{j+1} - a_j) - b^*_{j+1} (b_{j+1} - b_j) - i \, {\rm sgn}(j-N) \, H(a^*_{j+1}, b^*_{j+1}, a_j, b_j) \, d\tau \right]
+ a^*_{N+1} \left( a_N e^{-\beta_A (\epsilon_A - \mu_A)} - a_{N+1} \right) + b^*_{N+1} \left( b_N e^{-\beta_B (\epsilon_B - \mu_B)} - b_{N+1} \right) . \quad (39)
\]
The action $i S_{\rm b.c.}(b_{m,2N}, b^*_{m,2N}; \phi) = b^*_{m,1} ( b_{m,2N} e^{i\phi} - b_{m,1} )$ corresponds to Eq. (18) and imposes the boundary condition of the field $b$ at $\tau_{m,\pm}$. Summarizing the above,
\[
S_M(\chi) = \lim_{N \to \infty} \prod_{m=1}^M \prod_{j=1}^{2N} \int \frac{da^*_{m,j} da_{m,j}}{\mathcal{N}} \int \frac{db^*_{m,j} db_{m,j}}{\mathcal{N}} \, e^{i \tilde{S}} , \quad (40)
\]
where the action is,
\[
\tilde{S} = \sum_{m=1}^M \left[ S^m_K ( \{ a_{m,j} (b_{m,j}), a^*_{m,j} (b^*_{m,j}) \} ) + S_{\rm b.c.}(b_{m,2N}, b^*_{m,2N}; \phi) + S^m_{\rm b.c.A} \right] . \quad (41)
\]
In the following, we perform the functional integral. The matrix form of the action is,
\[
i \tilde{S} = \boldsymbol{a}^\dagger \, i g^{-1}_{A,M} \, \boldsymbol{a} + \boldsymbol{b}^\dagger \left( 1_M \otimes i g^{-1}_{B,\phi} \right) \boldsymbol{b} - i J d\tau \left[ \boldsymbol{a}^\dagger (1_M \otimes P) \boldsymbol{b} + \boldsymbol{b}^\dagger (1_M \otimes P) \boldsymbol{a} \right] , \quad (42)
\]
where $1_M$ is the $M \times M$ identity matrix and $\otimes$ stands for the Kronecker product. The vectors $\boldsymbol{a}$ and $\boldsymbol{b}$ consist of $2NM$ components of fields, e.g., $\boldsymbol{a}^T = (a_1^T, \cdots, a_M^T)$ where $a_m = (a_{m,1}, \cdots, a_{m,2N})^T$.
The inverse Green function of subsystem $A$ is block (skew) circulant for $\phi = 0$ ($\phi = \pi$) and that of subsystem $B$ is block diagonal. They are, e.g., for $M = 3$,
\[
i g^{-1}_{A,M} =
\begin{pmatrix}
X_A & 0 & e^{i\chi/M + i\phi} Y \\
e^{i\chi/M} Y & X_A & 0 \\
0 & e^{i\chi/M} Y & X_A
\end{pmatrix} , \quad (43)
\]
\[
1_M \otimes i g^{-1}_{B,\phi} =
\begin{pmatrix}
i g^{-1}_{B,\phi} & 0 & 0 \\
0 & i g^{-1}_{B,\phi} & 0 \\
0 & 0 & i g^{-1}_{B,\phi}
\end{pmatrix} , \quad
i g^{-1}_{B,\phi} = X_B + e^{i\phi} Y . \quad (44)
\]
The $2N \times 2N$ submatrices are, $P = \tau_3 \otimes p_{N,-}$, $Y = \tau_+ \otimes p_{N,+}$ and
\[
X_r = \tau_0 \otimes ( - 1_N + p_{N,-} ) + \tau_+^\dagger \otimes p_{N,+} \, e^{-\beta_r (\epsilon_r - \mu_r)} + i d\tau \, \epsilon_r \, \tau_3 \otimes p_{N,-} , \quad (45)
\]
where $\tau_+ = p_{2,+}$ is a $2 \times 2$ matrix, $p_{N,\pm}$ are $N \times N$ matrices, and their $(i,j)$-components are $[p_{N,+}]_{ij} = \delta_{i,1} \delta_{j,N}$ and $[p_{N,-}]_{ij} = \delta_{i,j+1}$. Explicit forms are, e.g., for $N = 3$,
\[
p_{3,+} =
\begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} , \quad
p_{3,-} =
\begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix} . \quad (46)
\]
The block (skew) circulant matrix $g^{-1}_{A,M}$ can be block-diagonalized by using the discrete Fourier transform corresponding to Eq. (26), $a_m = \sum_{\ell=0}^{M-1} \tilde{a}_\ell \, e^{-i (2\pi\ell + \phi) m/M} / \sqrt{M}$. Then, by introducing the $2N \times 2N$ sub-matrix $i g^{-1}_{A,\chi_\ell + \chi/M + \phi} = X_A + e^{i (2\pi\ell + \phi + \chi)/M} Y$, the action becomes,
\[
i \tilde{S} = \sum_{\ell=0}^{M-1} \left[ \tilde{a}^\dagger_\ell \, i g^{-1}_{A,\chi_\ell + \chi/M + \phi} \, \tilde{a}_\ell + \tilde{b}^\dagger_\ell \, i g^{-1}_{B,\phi} \, \tilde{b}_\ell - i J d\tau \left( \tilde{a}^\dagger_\ell P \tilde{b}_\ell + \tilde{b}^\dagger_\ell P \tilde{a}_\ell \right) \right] . \quad (47)
\]
This action corresponds to Eq. (16) for $M = 1$ and to Eq. (28) for $\chi = 0$. Since the Jacobian of the discrete Fourier transform is 1, by exploiting the Gauss integral, $\ln \int \prod_{j=1}^{2N} (da^*_j da_j / \mathcal{N}) \, e^{-\boldsymbol{a}^\dagger \mathcal{M} \boldsymbol{a}} = - e^{i\phi} \ln \det \mathcal{M}$, where $\mathcal{M}$ is a $2N \times 2N$ matrix, the Rényi entropy is calculated as,
\[
\ln \frac{S_M(\chi)}{s_M(\chi)} = - e^{i\phi} \sum_{\ell=0}^{M-1} \ln \det \left[ 1_{2N} - (J d\tau)^2 \, g_{B,\phi} \, P \, g_{A,\chi_\ell + \chi/M + \phi} \, P \right] . \quad (48)
\]
Here the Green function is,
\[
[g_{r,\lambda}]_{ij} = -i \times
\begin{cases}
n^-_{r,\lambda} \, e^{-i \epsilon_r (j-i) d\tau} & (1 \le j < i \le N) \\
n^+_{r,\lambda} \, e^{-i\lambda + i \epsilon_r (2N-i-j+1) d\tau} & (1 \le j \le N, \; N+1 \le i \le 2N) \\
n^-_{r,\lambda} \, e^{-i \epsilon_r (i-j) d\tau} & (N+1 \le j < i \le 2N) \\
n^+_{r,\lambda} \, e^{-i \epsilon_r (j-i) d\tau} & (1 \le i < j \le N) \\
n^-_{r,\lambda} \, e^{i\lambda - i \epsilon_r (2N-i-j+1) d\tau} & (1 \le i \le N, \; N+1 \le j \le 2N) \\
n^+_{r,\lambda} \, e^{-i \epsilon_r (i-j) d\tau} & (N+1 \le i < j \le 2N)
\end{cases} . \quad (49)
\]
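The block-diagonalization of the twisted circulant structure by the discrete Fourier transform can be verified directly with small random blocks. The sketch below (added for illustration, with our own variable names) builds the $M = 3$ matrix of Eq. (43) and checks that conjugation with the twisted Fourier matrix yields the diagonal blocks $X_A + e^{i(2\pi\ell + \phi + \chi)/M} Y$ and nothing off-diagonal.

```python
import numpy as np

rng = np.random.default_rng(0)
M, n, phi, chi = 3, 2, np.pi, 0.4  # M replicas, n x n blocks, fermionic twist phi = pi
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Y = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# Block (skew) circulant matrix as in Eq. (43): X on the diagonal, e^{i chi/M} Y on the
# lower block subdiagonal, and e^{i chi/M + i phi} Y in the upper-right corner.
C = np.kron(np.eye(M), X).astype(complex)
for m in range(1, M):
    C[m*n:(m+1)*n, (m-1)*n:m*n] = np.exp(1j * chi / M) * Y
C[0:n, (M-1)*n:M*n] = np.exp(1j * (chi / M + phi)) * Y

# Twisted discrete Fourier transform a_m = sum_l a_l e^{-i(2 pi l + phi) m/M} / sqrt(M)
m_idx, l_idx = np.arange(1, M + 1)[:, None], np.arange(M)[None, :]
U = np.exp(-1j * (2 * np.pi * l_idx + phi) * m_idx / M) / np.sqrt(M)
D = np.kron(U, np.eye(n)).conj().T @ C @ np.kron(U, np.eye(n))

# Off-diagonal blocks vanish; the l-th diagonal block is X + e^{i(2 pi l + phi + chi)/M} Y
for l in range(M):
    assert np.allclose(D[l*n:(l+1)*n, l*n:(l+1)*n],
                       X + np.exp(1j * (2 * np.pi * l + phi + chi) / M) * Y)
    for lp in range(M):
        if lp != l:
            assert np.allclose(D[l*n:(l+1)*n, lp*n:(lp+1)*n], 0)
```

The twist $\phi$ in the Fourier phases absorbs the corner factor $e^{i\phi}$, which is why the same transform handles both the periodic (bosonic) and antiperiodic (fermionic) boundary conditions.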
In the time-continuous limit $d\tau \to 0$, we set $t = t_i$ and $t' = t_j$. It is $t_i = (N-i) d\tau \in K_-$ for $1 \le i \le N$ and $t_i = (i-N-1) d\tau \in K_+$ for $N+1 \le i \le 2N$. Then, in the $2 \times 2$ matrix form in the branch space,
\[
g_{r,\lambda}(t,t') =
\begin{pmatrix}
g_{r,\lambda}(t \in K_+, t' \in K_+) & g_{r,\lambda}(t \in K_+, t' \in K_-) \\
g_{r,\lambda}(t \in K_-, t' \in K_+) & g_{r,\lambda}(t \in K_-, t' \in K_-)
\end{pmatrix}
= -i e^{-i \epsilon_r (t-t')}
\begin{pmatrix}
n^-_{r,\lambda} \theta(t-t') + n^+_{r,\lambda} \theta(t'-t) & n^+_{r,\lambda} e^{-i\lambda} \\
n^-_{r,\lambda} e^{i\lambda} & n^-_{r,\lambda} \theta(t'-t) + n^+_{r,\lambda} \theta(t-t')
\end{pmatrix} . \quad (50)
\]
In the limit of $\tau \to \infty$, the logarithm in Eq. (48) is expanded as,
\[
\ln \det [\cdots] = - J^2 \int_0^\tau dt_1 dt_2 \, {\rm Tr} \left[ \tau_3 g_{B,\phi}(t_1,t_2) \, \tau_3 g_{A,\chi_\ell + \chi/M + \phi}(t_2,t_1) \right] + \cdots
\approx - \frac{\tau J^2}{2\pi} \int d\omega \, {\rm Tr} \left[ \tau_3 g_{B,\phi}(\omega) \, \tau_3 g_{A,\chi_\ell + \chi/M + \phi}(\omega) \right] + \cdots
= \frac{\tau}{2\pi} \int d\omega \, {\rm Tr} \ln \left[ \tau_0 - J^2 \tau_3 g_{A,\chi_\ell + \chi/M + \phi}(\omega) \, \tau_3 g_{B,\phi}(\omega) \right] , \quad (51)
\]
where the Fourier transform of Eq. (50) is,
\[
g_{r,\lambda}(\omega) = {\rm P} \frac{1}{\omega - \epsilon_r} \tau_3 - \pi i \, \delta(\omega - \epsilon_r)
\begin{pmatrix}
1 + 2 n^+_{r,\lambda}(\omega) & 2 n^+_{r,\lambda}(\omega) e^{-i\lambda} \\
2 n^-_{r,\lambda}(\omega) e^{i\lambda} & 1 + 2 n^+_{r,\lambda}(\omega)
\end{pmatrix} . \quad (52)
\]
The above calculations can be readily extended to many states, $\epsilon_r \to \epsilon_{rk}$, and then Eq. (20) is $g_{r,\lambda}(\omega) = \sum_k g_{rk,\lambda}(\omega)$. By combining Eqs. (48) and (51), we obtain,
\[
\ln \frac{S_M(\chi)}{s_M(\chi)} \approx - e^{i\phi} \sum_{\ell=0}^{M-1} \frac{\tau}{2\pi} \int d\omega \, {\rm Tr} \ln \left[ \tau_0 - J^2 \tau_3 g_{A,\chi_\ell + \chi/M + \phi}(\omega) \, \tau_3 g_{B,\phi}(\omega) \right] , \quad (53)
\]
which corresponds to Eq. (19) for $M = 1$ and to Eq. (3) for $\chi = 0$.

References
1. Ya. M. Blanter and M. Büttiker, Phys. Rep. (2000) 1.
2. L. S. Levitov, H. W. Lee, and G. B. Lesovik, J. Math. Phys. (1996) 4845.
3. Quantum Noise in Mesoscopic Physics, Vol. 97 of NATO Science Series II: Mathematics, Physics and Chemistry, edited by Yu. V. Nazarov (Kluwer Academic Publishers, Dordrecht/Boston/London, 2003).
4. D. A. Bagrets, Y. Utsumi, D. S. Golubev, and Gerd Schön, Fortschritte der Physik (2006) 917-938.
5. See, e.g., R. V. Hogg, J. W. McKean, and A. T. Craig, Introduction to Mathematical Statistics, 6th ed. (Pearson Education, Upper Saddle River, New Jersey, 2005).
6. C. W. J. Beenakker, Proceedings of the International School of Physics Enrico Fermi, Vol. 162 (IOS Press, Amsterdam, 2006).
7. C. E. Shannon, Bell System Techn. J. (1948) 379-423, 623-656.
8. T. M. Cover and J. A. Thomas, Elements of Information Theory, 2nd ed. (Wiley-Interscience, New York, 2006).
9. M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information (Cambridge University Press, New York, 2000).
10. I. Klich and L. S. Levitov, Phys. Rev. Lett. (2009) 100502.
11. H. Francis Song, S. Rachel, C. Flindt, I. Klich, N. Laflorencie, and K. Le Hur, Phys. Rev. B (2012) 035409.
12. R. Süsstrunk and D. A. Ivanov, EPL (2009) 235412.
15. A. Rényi, in Proceedings of the Fourth Berkeley Symposium on Mathematics, Statistics, and Probability (University of California Press, Berkeley, CA, 1960), p. 547.
16. Yu. V. Nazarov, Phys. Rev. B (2011) 205437.
17. M. H. Ansari and Yu. V. Nazarov, Phys. Rev. B (2015) 174307; ZhETF (2016) 453.
18. M. H. Ansari, arXiv:1605.04620; Phys. Rev. B (2017) 174302.
19. Y. Utsumi, Phys. Rev. B (2015) 165312.
20. Y. Utsumi, Phys. Rev. B (2017) 085304.
21. H. Casini and M. Huerta, J. Phys. A (2009) 504007; arXiv:0905.2562v3.
22. Y. Yamamoto and H. A. Haus, Rev. Mod. Phys. (1986) 1001.
23. H. Touchette, Phys. Rep. (2009) 1.
24. R. M. Fano, Transmission of Information: A Statistical Theory of Communications (The M.I.T. Press, Cambridge, 1961).
25. S. W. Golomb, IEEE Trans. Inform. Theory IT-12 (1966) 75.
26. S. Guiasu and C. Reischer, Information Sciences (1985) 235.
27. Gerd Schön and A. D. Zaikin, Phys. Rep. (1990) 237-412.
28. U. Weiss, Quantum Dissipative Systems, 4th ed. (World Scientific, Singapore, 2012).
29. A. Kamenev, Field Theory of Nonequilibrium Systems (Cambridge University Press, Cambridge, 2011).
30. J. W. Negele and H. Orland,