Theory of Entropy Production in Quantum Many-Body Systems
E. Solano-Carrillo and A. J. Millis
Department of Physics, Columbia University, New York, NY 10027, USA
We define the entropy operator as the negative of the logarithm of the density matrix, give a prescription for extracting its thermodynamically measurable part, and discuss its dynamics. For an isolated system we derive the first, second and third laws of thermodynamics. For weakly coupled subsystems of an isolated system, an expression for the long-time limit of the expectation value of the rate of change of the thermodynamically measurable part of the entropy operator is derived and interpreted in terms of entropy production and entropy transport terms. The interpretation is justified by comparison to the known expression for the entropy production in an aged classical Markovian system with Gaussian fluctuations and by a calculation of the current-induced entropy production in a conductor with electron-phonon scattering.
I. INTRODUCTION
Attempts to show how nonequilibrium thermodynamic behavior emerges from the underlying quantum mechanics of individual particles are now being dubbed quantum thermodynamics.
Several approaches have arisen, revealing important aspects of this endeavor, such as how thermal fluctuations and external driving mechanisms affect the stochastic course of nonequilibrium processes in small systems, which has led to fluctuation theorems going beyond the results of Kubo linear response theory, as well as to generalized fluctuation-dissipation relations as studied in isolated quantum systems after a quench. Other aspects, more in the spirit of traditional nonequilibrium statistical mechanics, include thermalization in isolated quantum systems and the establishment of steady states in open quantum systems. A unified treatment along the lines of the classical theory of nonequilibrium thermodynamics is of crucial importance for a clear identification of the quantum-to-classical correspondence and of the new features brought about by fully quantum-mechanical nonequilibrium behavior.

The remarkable success of the classical theory in the description of macroscopic phenomena in fluids motivates us to ask which basic ingredients of that formalism such a unified treatment of quantum thermodynamics must also contain. We recall that the building blocks of the classical theory are: (i) macroscopic observables, explicitly defined as a set of thermodynamically measurable or slowly varying quantities; (ii) conservation laws for these variables; and, as a foundational pillar, (iii) an entropy balance equation, splitting the rate of change of entropy into a part which is irreversibly produced, in accordance with the second law of thermodynamics, and a part which is transported.
The validity of this theory relies on the local equilibrium assumption, whereby the nonequilibrium thermodynamic entropy is considered locally as a function of the same extensive variables as in equilibrium.

Although significant attempts to give meaning to entropy out of equilibrium have long been known in quantum statistics, a complete theory of quantum entropy production has not yet been provided. The main problem is how to conceive an adequate quantum entropy balance equation without assuming local equilibrium. For an isolated system there is no entropy to be transported outside the system, and hence the entropy balance equation reduces to finding the right quantum expression for the entropy whose rate of change is non-negative, according to the second law of thermodynamics, this rate then being the entropy production. Important efforts have been devoted to obtaining such an expression from the density matrix, but the third law of thermodynamics, involving the vanishing entropy of pure states, has not been satisfactorily established.

On the other hand, for a subsystem of an isolated system the establishment of a quantum entropy balance equation has been partially addressed by assuming that the rate of change of an adopted expression for the nonequilibrium entropy of the subsystem, obtained from the reduced density matrix, is directly connected, as in the classical theory, with the rate of change of its energy. This involves the identification of a microscopic expression for heat, which is not unique and therefore quite problematic, but, most importantly, does not constitute a full departure from the local equilibrium assumption, as we show later.

The purpose of this paper is to provide a more general treatment of quantum entropy production and thereby lay the foundation of a unified theory of quantum thermodynamics in close correspondence with the classical theory.
We introduce a new thermodynamic entropy operator, $\hat{\bar S}_t$, for isolated quantum many-body systems and show that the rate of change of its expectation value is non-negative, in accordance with the second law of thermodynamics. Unlike previous approaches, we establish the third law of thermodynamics as a well-defined vanishing of the thermodynamic entropy for pure states.

The quantum entropy balance equation for a given subsystem of an isolated system is obtained by first studying the time evolution of $\langle\partial_t\hat{\bar S}_t\rangle$ for the isolated system from first principles, i.e. from the Liouville-von Neumann equation for the density matrix, using the standard generalized-master-equation approach of nonequilibrium statistical mechanics, and by subsequently making reasonable assumptions regarding the factorization properties of the nonequilibrium probability distribution of microscopic states over the degrees of freedom of the different subsystems.

We restrict ourselves here to weakly coupled subsystems, to show how our theory is consistent with the classical theory, to elucidate the manner in which the local equilibrium approximation can be fully abandoned, and to pave the way to the study of cases of strong coupling between subsystems, for which the aforementioned factorization properties of the probability distribution of microscopic states become the main subject of study, marking a deep connection with quantum information theory. A detailed investigation of a new methodology to approach these cases will be considered elsewhere.

The pursuit of the research program so outlined is essential both for a more fundamental understanding of nonequilibrium behavior and because entropy production is inherent to dissipation, so that a good atomic-scale description may have technological impact, e.g.
by enabling better control of waste heat and thermoelectric effects in single-molecule electronics, guiding the efficient design of quantum refrigerators, quantum heat machines, nanosized photoelectric devices, nanothermoelectric engines based on quantum dots, etc., which are envisioned as practical applications of quantum thermodynamics.

It turns out, as we show here with a particular example of electronic conduction in the presence of phonon modes playing the role of a reservoir, that our theory gives an explicit expression for the Joule heating from a calculation of the steady-state electronic entropy production alone. This represents important progress, since it is done without calculating the rate of change of the energy of the electron subsystem.

This paper is organized as follows. In section II we give a brief review of entropy production and the second law of thermodynamics as they manifest in phenomenological thermodynamics. In section III we discuss the local equilibrium assumption from a quantum perspective, with a derivation of the first law of thermodynamics from the expression for $\langle\partial_t\hat{\bar S}_t\rangle$ in this case, which is shown to hold for quasistatic transformations or slow processes. This section, which mainly discusses how the foundations of the classical theory are to be understood quantum-mechanically, serves as a motivation to introduce the operator $\partial_t\hat{\bar S}_t$ for general isolated quantum systems of which possible reservoirs are part.

A transition is made in section IV to the generalized thermodynamic description of quantum systems. The second and third laws of thermodynamics are established here for any isolated system, and an entropy balance equation is derived, splitting $\langle\partial_t\hat{\bar S}_t\rangle$ into entropy production and entropy transport terms.
In section V we show how the theory is consistent with Onsager's classical stochastic entropy production in an aged system. Finally, in section VI we calculate the electronic entropy production in a simple metal consisting of independent electrons weakly coupled to phonons in the presence of an external electric field, deriving the Joule heating, and we conclude with section VII.

II. ENTROPY PRODUCTION IN PHENOMENOLOGICAL THERMODYNAMICS
The thermodynamic definition of entropy changes for any kind of process in a closed system (not interchanging particles with the reservoirs) was given by Clausius at the very end of his monumental 1865 paper.
If the system, which is considered to be in contact with a set of heat sources at different temperatures $T$, follows a path $\gamma$ in the space of thermodynamic states, joining the arbitrary initial and final states $A$ and $B$, respectively, then the thermodynamic entropy change in the process is
$$S_B - S_A = N_C[\gamma] + \int_A^B (\bar dQ/T)_\gamma, \qquad (1)$$
where $\bar dQ$ is an infinitesimal amount of heat absorbed from (or surrendered to) the heat source at temperature $T$, and the quantity $N_C[\gamma]$, representing what came to be known as the "uncompensated heat of Clausius", is a functional of the process. Clausius defined it in such a way that
$$N_C[\gamma] \equiv -\oint_\gamma \bar dQ/T = -\int_A^B (\bar dQ/T)_\gamma - \int_B^A (\bar dQ/T)_{\gamma_R}, \qquad (2)$$
where $\gamma_R$ is an arbitrary reversible path which is "imagined" to bring the system back to its initial state $A$. He proved that
$$N_C[\gamma] \ge 0 \quad \text{(Clausius' inequality)}, \qquad (3)$$
for any $\gamma$, which was a generalization of Carnot's results for cyclic processes, the equality holding if and only if $\gamma$ is a reversible path. This is the starting point of all textbook discussions of the second law of thermodynamics, and it is therefore regarded here as the fundamental expression of this law.

A classical formulation of nonequilibrium thermodynamics has been founded by taking as its starting point (1), written in differential form and generalized to apply locally in small volume elements, $\delta v$, of a system:
$$dS = d_iS + d_eS, \qquad (4)$$
where $d_iS \equiv dN_C$ is the entropy produced, during an infinitesimal time interval, due to irreversible processes taking place inside the volume element, and $d_eS$ is the entropy supplied by its surroundings ($\equiv \bar dQ/T$ for a closed element). The second law of thermodynamics requires only that the entropy produced satisfies
$$d_iS \ge 0 \quad \text{(Clausius' inequality)}. \qquad (5)$$
The theory so obtained for the phenomenological entropy production, $\Pi_{\delta v} = d_iS/dt$, successfully describes slow processes, i.e. phenomena where the decay time of local perturbations is very short compared to the global relaxation time, as in chemical reactions, diffusion processes, heat conduction, and their cross effects in gases and liquids. However, it requires fundamental modifications for fast processes and, in the following, we argue from a quantum-mechanical perspective why this happens to be the case, setting the stage and motivating the method for the subsequent development of our theory.

III. LOCAL EQUILIBRIUM AND QUASISTATIC QUANTUM TRANSFORMATIONS
Consider an isolated macroscopic system, possibly containing a set of particle and heat reservoirs, which is divided into macroscopic subsystems. Microscopically, the total system is defined by the Hamiltonian
$$\hat H = \sum_l \hat H_l + \sum_{l<m} \hat H_{lm},$$
where $\hat H_l$ is the Hamiltonian of subsystem $l$ and $\hat H_{lm}$ couples subsystems $l$ and $m$. In the local equilibrium description, the external parameters $x_{l\lambda}$ should vary with time so slowly that the state $\hat\varrho^{\,r}$ can be interpreted as "moving" through a locus of equilibrium states, so that $\|\partial_t\hat\varrho^{\,r}\| \to 0$. These are quasistatic (or reversible) transformations, for which the first law involving thermodynamic entropy changes applies; hence the superindex "r", standing for reversible, and the systematic omission of the time subindex in the variables. Note that in this case the quantity $N_C[\gamma]$ in (1) vanishes for any $\gamma = \{x_{l\lambda}(t),\ \forall\,\lambda, l \text{ and } t\in[t_A,t_B]\}$.

A nonzero entropy production appears instead when the subsystems are macroscopic at the atomic scale but, compared to the size of the total system, are small volume elements, $\delta v_l$. In this case, an entropy balance equation may be obtained from (20) by using the relations
$$\partial_t\langle\hat H_l\rangle = -\sum_m J^{lm}_H + \sum_\lambda \frac{\partial\langle\hat H_l\rangle}{\partial x_{l\lambda}}\,\partial_t x_{l\lambda}, \qquad (22)$$
$$\partial_t\langle\hat N_l\rangle = -\sum_m J^{lm}_N + \sum_\lambda \frac{\partial\langle\hat N_l\rangle}{\partial x_{l\lambda}}\,\partial_t x_{l\lambda}, \qquad (23)$$
which state that the average macroscopic energy and number of particles of a given subsystem can only change by transport to other subsystems, defining the corresponding currents $J^{lm}_H$ and $J^{lm}_N$ in terms of quantities proportional to the particle velocities, with an appropriate microscopic account of the heat currents, plus terms allowing the technical possibility of a creation or destruction of particles induced by the variation of the external constraints. Substituting these in (20) we get
$$\partial_t\langle\hat S^r_l\rangle = \frac{1}{T_l}\sum_\lambda \left[\frac{\partial}{\partial x_{l\lambda}}\big(\langle\hat H_l\rangle - \mu_l\langle\hat N_l\rangle\big) + F_{l\lambda}\right]\partial_t x_{l\lambda} - \frac{1}{T_l}\sum_m \big(J^{lm}_H - \mu_l J^{lm}_N\big) = \Pi_{\delta v_l} - \Phi_{\delta v_l}, \qquad (24)$$
the first term in the first equality being the entropy production term, $\Pi_{\delta v_l}$, and the second the entropy transport term, $\Phi_{\delta v_l}$.
Results consistent with the classical theory are obtained when particle creation or destruction is not observed macroscopically, in which case (22) and (23) are just the usual conservation laws (continuity equations) and the entropy production in the subsystem reduces to the well-known sum of products of thermodynamic forces times the rates of change of their conjugate external parameters,
$$\Pi_{\delta v_l} = \frac{1}{T_l}\sum_\lambda F_{l\lambda}\,\partial_t x_{l\lambda} \quad \text{(classical)}. \qquad (25)$$
The presentation given here can be straightforwardly generalized by considering local equilibrium Gibbs ensembles more general than (8), that is, by augmenting the thermodynamic entropy operator (16) with terms proportional to the components of the macroscopic linear and angular momentum operators of each subsystem, with (22) and (23) expanded to include the conservation laws of their respective expectation values.

Note that we have kept the superindex "r" (although not strictly with its original connotation) in (24) because, even though the thermodynamic limit is not taken for each subsystem, which would make $\Delta \to 0$, the fact that the volume elements $\delta v_l$ are macroscopic at the atomic scale still implies that $\Delta$ is very small and hence, from (21), that the variations $\partial_t\hat\rho^{\,r}$ should correspondingly be very small in norm. As mentioned in section II, we then see why the classical theory works well for slow processes, i.e. those for which the time to reach equilibrium within each volume element is much shorter than the time to reach equilibrium among them.

The discussion in this section elucidates the problems with the local equilibrium assumption and previous theories of entropy production, which rely on expressions of the type (20) together with conservation laws, like (22) and (23), as in the classical theory.
As we have made explicit, developing a theory of entropy production from (20) inherently assumes that the correlations among the subsystems of a large isolated system are negligible for all times, and using (22) in this theory takes for granted that an appropriate mechanical description of the microscopics of heat currents has been univocally achieved. We now propose a way to derive an entropy balance equation for the subsystems of a general isolated system from first principles, starting from the Liouville-von Neumann equation for the density matrix of the isolated system, which does not rely on the above assumptions.

IV. MASTER EQUATION FOR THE THERMODYNAMIC ENTROPY OPERATOR

We generalize the thermodynamic description to include subsystems which are not distinguished by spatial boundaries and which are not necessarily macroscopic at the atomic scale. The key point to borrow from thermodynamics is the existence of the thermodynamic basis $\{|\alpha\rangle\}$ and the interpretation of thermodynamic observables as those which are diagonal in this basis. That is, we consider an isolated quantum system (containing possible reservoirs) which has a Hamiltonian $\hat H_0$ representing the energy of the uncoupled subsystems, as before, and study the dynamics when the perturbation, $\hat V$, mixing the degrees of freedom of the different subsystems, or a set of them, is turned on.

The Hamiltonian of the total system is then given by $\hat H = \hat H_0 + \hat V$, and the situations of interest include phenomena such as quantum quenches, or the response to applied fields. After preparation of the system in an initial statistical state of the form
$$\hat\rho_0 = \exp(-\hat S_0), \qquad (26)$$
with $\hat S_0$ an arbitrary (in general unbounded) hermitian operator with $[\hat S_0, \hat V] \neq 0$, the nonequilibrium state is described by the evolved density matrix $\hat\rho_t$, and we define the entropy operator $\hat S_t$ by
$$\hat\rho_t = \exp(-\hat S_t), \quad \text{or} \quad \hat S_t = -\ln\hat\rho_t, \qquad (27)$$
which can always be written since the density matrix is positive-definite.
This exponential representation of the density matrix is not new; it is a generalized form of the nonequilibrium statistical operator introduced by Zubarev, and obtained for the case of steady states by Hershfield.

As discussed in the previous section, our new thermodynamic entropy operator, $\hat{\bar S}_t = -\mathcal D\ln\hat\rho_t$, is obtained from $\hat S_t$ by projecting onto the space of operators diagonal in the basis $\{|\alpha\rangle\}$ of eigenstates of $\hat H_0$. We now establish the second law of thermodynamics for nonequilibrium transformations of the total system. For this, we consider for simplicity the specific situation of initial states diagonal in the thermodynamic basis, e.g. those of local equilibrium form as in (8), for which $\hat S^\sim_0 = 0$ or $\hat S_0 = \hat{\bar S}_0$. These initial states are usually assumed in practice, e.g. in transport problems.

Let us denote the diagonal (or thermodynamic) part of the density matrix of the system as
$$\hat\varrho_t = \mathcal D\hat\rho_t. \qquad (28)$$
The occupation probability of the state $|\alpha\rangle$ is obtained by taking matrix elements, $P_{\alpha;t} = \langle\alpha|\hat\varrho_t|\alpha\rangle$. The proof now follows in steps, first using a corollary of Klein's inequality, which states that for any concave function $f(x)$ we have
$$\mathrm{Tr}\, f(\hat\varrho_t) \ge \mathrm{Tr}\, f(\hat\rho_t). \qquad (29)$$
Choosing the concave function $f(x) = -x\ln x$, we easily get
$$-\mathrm{Tr}\,\hat\varrho_t\ln\hat\varrho_t \ge -\mathrm{Tr}\,\hat\rho_t\ln\hat\rho_t, \quad \text{or} \quad S_{d;t} \ge S_{vN;t}, \qquad (30)$$
where we have denoted $S_{d;t} = -\sum_\alpha P_{\alpha;t}\ln P_{\alpha;t}$, the diagonal entropy, and $S_{vN;t}$ is the well-known von Neumann entropy. Using the invariance of $S_{vN;t}$ under the unitary evolution of the isolated system, together with the fact that the initial state is diagonal, so that $S_{d;0} = S_{vN;0}$, (30) implies
$$S_{d;t} \ge S_{d;0}. \qquad (31)$$
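The inequality chain (29)-(30) can be checked numerically. The following is a minimal sketch (not part of the paper's derivation): it draws a random full-rank density matrix, takes the computational basis as a stand-in for the thermodynamic basis $\{|\alpha\rangle\}$, and verifies that the diagonal entropy dominates the von Neumann entropy.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

# Random full-rank density matrix: rho = A A† / Tr(A A†), positive-definite.
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
rho = A @ A.conj().T
rho /= np.trace(rho).real

# Von Neumann entropy S_vN = -Tr(rho ln rho), from the eigenvalues of rho.
evals = np.linalg.eigvalsh(rho)
S_vN = float(-np.sum(evals * np.log(evals)))

# Diagonal entropy S_d = -sum_a P_a ln P_a, with P_a = <a|rho|a> in a fixed
# "thermodynamic" basis -- here simply the computational basis.
P = np.diag(rho).real
S_d = float(-np.sum(P * np.log(P)))

print(S_d >= S_vN)  # Klein's inequality, Eqs. (29)-(30)
```

The same check passes for any choice of fixed basis, since projecting onto a diagonal ensemble can only discard information.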
We use this result and the Husimi-Mori lemma, which states that for any convex function $g(x)$ and state $|\psi\rangle$ we have
$$\langle\psi|g(\hat\rho_t)|\psi\rangle \ge g(\langle\psi|\hat\rho_t|\psi\rangle), \qquad (32)$$
to show that, choosing the convex function $g(x) = -\ln x$ so that $-\langle\alpha|\ln\hat\rho_t|\alpha\rangle \ge -\ln P_{\alpha;t}$, the thermodynamic entropy satisfies
$$\bar S_t = \langle\hat{\bar S}_t\rangle = -\sum_\alpha P_{\alpha;t}\langle\alpha|\ln\hat\rho_t|\alpha\rangle \ge S_{d;t} \ge S_0, \qquad (33)$$
where $S_0 = S_{d;0} = S_{vN;0}$ by the assumption of an initially diagonal state. For our isolated system, for which there is no entropy to be transported outside its boundaries, this proves that $\bar S_t$ satisfies the second law of thermodynamics.

Note that, by splitting $\hat\rho_t = \hat\varrho_t + \hat\rho^\sim_t$ and using the convenient resolvent representation of the logarithm of an operator sum,
$$\ln(\hat A + \hat B) = \int_0^\infty dx\left(\frac{1}{x+1} - \frac{1}{x+\hat A+\hat B}\right), \qquad (34)$$
we can expand the thermodynamic entropy as
$$\bar S_t = S_{d;t} + \sum_{\alpha,\beta(\neq\alpha)}\left[\frac{1}{P_{\beta;t}-P_{\alpha;t}} - \frac{P_{\alpha;t}}{(P_{\beta;t}-P_{\alpha;t})^2}\ln\frac{P_{\beta;t}}{P_{\alpha;t}}\right]|\langle\alpha|\hat\rho^\sim_t|\beta\rangle|^2 + \mathcal O\big((\hat\rho^\sim_t)^3\big), \qquad (35)$$
with $\bar S_t - S_{d;t} \ge 0$. When the nondiagonal part $\hat\rho^\sim_t$ is negligible, the diagonal entropy becomes the thermodynamic entropy according to (35) and, due to the quasistatic (or slow) nature of the global transformations involved in this case, the thermodynamic basis may be referred to as the adiabatic basis.

The thermodynamic entropy, unlike the diagonal and von Neumann entropies, satisfies the third law of thermodynamics in a transparent way. The third law states that the thermodynamic entropy at zero temperature must be zero. The standard argument is that at zero temperature any physical state is pure. For an arbitrary pure state $|\psi_0\rangle$, there is always an orthonormal basis of Hilbert space which has this state as one of its elements (construct it via the Gram-Schmidt procedure starting from $|\psi_0\rangle$). Denote this basis $\{|\psi_r\rangle\}$ and order its elements such that $|\psi_0\rangle$ is its first element. We take this basis as the reference for "diagonal".
With this we then have, for the diagonal and von Neumann entropies,
$$S_d(\psi_0) = S_{vN}(\psi_0) = -\sum_r P_r\ln P_r = -1\cdot\ln(1) - \sum_{r\neq 0} 0\cdot\ln(0), \qquad (36)$$
where $P_r$ is the probability that the system be found in state $|\psi_r\rangle$. Eq. (36) is usually understood to be zero, although it is clearly an undetermined quantity since, taken at face value, $-0\cdot\ln(0) = 0\cdot\infty$.

The thermodynamic entropy of pure states is well defined and readily vanishes. In order to show this, we denote the density matrices (projectors) $\hat\rho_r = |\psi_r\rangle\langle\psi_r|$, with $\sum_r\hat\rho_r = \hat 1$. We can then write
$$\ln\hat\rho_0 = \ln\Big(\hat 1 - \sum_{r\neq 0}\hat\rho_r\Big) = -\sum_{u=1}^\infty \Big(\sum_{r\neq 0}\hat\rho_r\Big)^u / u. \qquad (37)$$
Using this, we can compute the thermodynamic entropy of the state $|\psi_0\rangle$ as
$$\bar S(\psi_0) = -\sum_r \langle\psi_r|\hat\rho_0\,\mathcal D\ln(\hat\rho_0)|\psi_r\rangle = -\langle\psi_0|\ln(\hat\rho_0)|\psi_0\rangle. \qquad (38)$$
This vanishes exactly, since $|\psi_0\rangle$ is orthogonal to all the $|\psi_{r\neq 0}\rangle$ involved in the last equality of (37). This establishes the third law of thermodynamics.

We are after an entropy balance equation for the subsystems, so we need an equation of motion for $\hat{\bar S}_t$ and a procedure to obtain from it one for each subsystem, as in the previous section. This can be obtained by first noting that the usual unitary evolution of the density matrix implies that $\hat S_t$ also satisfies the Liouville-von Neumann equation satisfied by $\hat\rho_t$. We have
$$i\partial_t\hat S_t = [\hat H, \hat S_t] \equiv L\hat S_t. \qquad (39)$$
This allows us to follow exactly the same procedure originally used with the density matrix to derive an equation of motion for its diagonal part, $\hat\varrho_t$, the so-called Nakajima-Zwanzig generalized master equation. That is, we split the entropy operator into a diagonal and a nondiagonal part with respect to the eigenbasis of $\hat H_0$, as $\hat S_t = \hat{\bar S}_t + \hat S^\sim_t$, and obtain an equation of motion for the diagonal part using Zwanzig's integral,
$$i\partial_t\hat{\bar S}_t = \mathcal D L\hat{\bar S}_t + \mathcal D L\, e^{-it\mathcal N L}\hat S^\sim_0 - i\int_0^t d\tau\, K_\tau\hat{\bar S}_{t-\tau}, \qquad (40)$$
where the memory kernel is defined as
$$K_\tau = \mathcal D L\, e^{-i\tau\mathcal N L}\,\mathcal N L. \qquad (41)$$
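The orthogonality argument behind (37)-(38) admits a simple numerical illustration. The sketch below (an illustration, not the paper's calculation) builds a random pure state $|\psi_0\rangle$, forms $Q = \sum_{r\neq 0}\hat\rho_r = \hat 1 - |\psi_0\rangle\langle\psi_0|$, and checks that every term $\langle\psi_0|Q^u|\psi_0\rangle$ of the logarithm series vanishes, so the thermodynamic entropy of the pure state is exactly zero rather than an indeterminate $0\cdot\infty$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5

# Random pure state |psi_0> and Q = sum_{r != 0} rho_r = 1 - |psi_0><psi_0|.
psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)
Q = np.eye(n) - np.outer(psi, psi.conj())

# Each term of the series ln(rho_0) = -sum_{u>=1} Q^u / u, Eq. (37), has zero
# expectation value in |psi_0>, hence S(psi_0) = -<psi_0| ln(rho_0) |psi_0> = 0.
Qu = np.eye(n)
terms = []
for u in range(1, 8):
    Qu = Qu @ Q
    terms.append(abs(psi.conj() @ Qu @ psi))
print(max(terms) < 1e-12)
```

Since $Q$ is a projector orthogonal to $|\psi_0\rangle$, every power $Q^u$ annihilates the state, term by term.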
Now, it is easy to verify that $\mathcal D L\mathcal D = 0$ for any Hamiltonian; therefore the first term in (40) vanishes and, with our initially diagonal states implying $\hat S^\sim_0 = 0$, we are left with the integro-differential equation
$$\partial_t\hat{\bar S}_t = -\int_0^t d\tau\, K_\tau\hat{\bar S}_{t-\tau}. \qquad (42)$$
Although an exact solution of (42), as well as of the similar equation satisfied by $\hat\varrho_t$, can easily be found by a Laplace transformation followed by an inversion,
$$\hat{\bar S}_t = \frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty} ds\, \frac{e^{st}}{s+K_s}\,\hat{\bar S}_0, \quad \text{with } c > 0, \qquad (43)$$
where $K_s$ is the Laplace transform of the memory kernel, obtained from (41) as
$$K_s = \mathcal D L\,\frac{1}{s+i\mathcal N L}\,\mathcal N L, \qquad (44)$$
we restrict here, for the sake of a clear presentation and for comparison with the classical results, to the Born-Markov approximation for weakly coupled subsystems, leaving a more general discussion for another publication. This approximation, which is justified in the limit of very weak coupling potentials, $\hat V$, and very long times (Van Hove limit), amounts to neglecting memory effects in (42). In practice, this works for times after any transient effect or prethermalization plateau of the isolated system has passed. We then have in this limit
$$\partial_t\hat{\bar S}_t = -\lim_{s\to 0^+} K_s\hat{\bar S}_t, \qquad (45)$$
where $K_s$ and $\hat{\bar S}_t$, after being expanded in powers of $\hat V$, are truncated at the lowest orders, for which the well-known identity for the resolvent operator expansion,
$$(A+B)^{-1} = A^{-1} - A^{-1}B(A+B)^{-1}, \qquad (46)$$
is very useful. Taking the expectation value of (45), and noting that for a diagonal operator $\hat G$ we have $\langle\hat G\rangle_t = \mathrm{Tr}\,\hat\rho_t\hat G = \mathrm{Tr}\,\hat\varrho_t\hat G$, the average rate of change of the thermodynamic entropy in the Born-Markov limit is then
$$\langle\partial_t\hat{\bar S}_t\rangle = \sum_{\alpha\alpha'} P_\alpha W_{\alpha\alpha'}\ln\frac{P_\alpha}{P_{\alpha'}}, \qquad (47)$$
with the transition rates $W_{\alpha\alpha'} = 2\pi\delta(\varepsilon_\alpha - \varepsilon_{\alpha'})|V_{\alpha\alpha'}|^2$, calculated at lowest order in the coupling potential using Fermi's golden rule. Here we have derived the transition rates from (44) and (45) by using the representation of the delta function
$$\lim_{s\to 0^+}\mathrm{Re}\,\frac{1}{s+i\omega} = \pi\delta(\omega). \qquad (48)$$
Moreover, $P_\alpha = \langle\alpha|\hat\varrho^{(0)}_t|\alpha\rangle$ is the occupation probability of the state $|\alpha\rangle$ in its lowest-order approximation, which also satisfies the Born-Markov limit of the generalized master equation, that is, the transport (or Pauli) equation
$$\partial_t P_\alpha = \sum_{\alpha'}(P_{\alpha'}W_{\alpha'\alpha} - P_\alpha W_{\alpha\alpha'}). \qquad (49)$$
The right-hand side of (47) can be rearranged to yield the quantum version, in the Born-Markov limit, of the entropy balance equation. We find
$$\langle\partial_t\hat{\bar S}_t\rangle = \Pi - \Phi, \qquad (50)$$
where the average rate of entropy produced in the system is interpreted as
$$\Pi = \frac{1}{2}\sum_{\alpha,\alpha'}(P_\alpha W_{\alpha\alpha'} - P_{\alpha'}W_{\alpha'\alpha})\ln\frac{P_\alpha W_{\alpha\alpha'}}{P_{\alpha'}W_{\alpha'\alpha}}, \qquad (51)$$
and the average entropy flux to the surroundings as
$$\Phi = \frac{1}{2}\sum_{\alpha,\alpha'}(P_\alpha W_{\alpha\alpha'} - P_{\alpha'}W_{\alpha'\alpha})\ln\frac{W_{\alpha\alpha'}}{W_{\alpha'\alpha}}. \qquad (52)$$
Of course, the latter must be zero for an isolated system, since a global entropy current finds nowhere to go in this case. The vanishing of this quantity is clearly seen from the symmetry of the transition rates $W_{\alpha\alpha'}$ under the interchange of indices, which results from the hermiticity of the perturbation $\hat V$. A nonvanishing entropy current is obtained, however, when we consider the local entropy production in a subsystem of a larger system, as in the electrical conduction problem of section VI.

Note that $\Pi$ is a sum of terms of the form $(x-y)\ln(x/y)$ and so is always non-negative. It vanishes for reversible transformations (local equilibrium) or in equilibrium due to detailed balance, $P^r_\alpha W_{\alpha\alpha'} = P^r_{\alpha'}W_{\alpha'\alpha}$, this being a statistical statement of the second law of thermodynamics in the Clausius form.
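The balance equation (50) and the non-negativity of (51) can be verified directly with toy numbers. The sketch below is illustrative only: the rates $W$ and occupations $P$ are random stand-ins, not derived from any Hamiltonian, and it checks that $\Pi - \Phi$ reproduces (47), that $\Pi \ge 0$, and that symmetric rates (the isolated-system case) give $\Phi = 0$ identically.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5

# Toy stand-ins: generic positive rates W (asymmetric, as for a subsystem
# exchanging entropy with its surroundings) and occupations P.
W = rng.random((n, n)); np.fill_diagonal(W, 0.0)
P = rng.random(n); P /= P.sum()

i, j = np.nonzero(~np.eye(n, dtype=bool))       # all pairs alpha != alpha'
flux = P[i] * W[i, j] - P[j] * W[j, i]

dS  = np.sum(P[i] * W[i, j] * np.log(P[i] / P[j]))                      # Eq. (47)
Pi  = 0.5 * np.sum(flux * np.log((P[i] * W[i, j]) / (P[j] * W[j, i])))  # Eq. (51)
Phi = 0.5 * np.sum(flux * np.log(W[i, j] / W[j, i]))                    # Eq. (52)

print(np.isclose(Pi - Phi, dS))  # entropy balance, Eq. (50)
print(Pi >= 0.0)                 # termwise (x - y) ln(x/y) >= 0

# Isolated system: hermitian V gives symmetric rates, and Phi vanishes.
Ws = 0.5 * (W + W.T)
flux_s = P[i] * Ws[i, j] - P[j] * Ws[j, i]
Phi_s = 0.5 * np.sum(flux_s * np.log(Ws[i, j] / Ws[j, i]))
print(np.isclose(Phi_s, 0.0))
```

The identity $\Pi - \Phi = \langle\partial_t\hat{\bar S}_t\rangle$ follows by symmetrizing the double sum in (47) over the index exchange $\alpha\leftrightarrow\alpha'$, which the check reproduces to floating-point accuracy.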
The outlined method is the one we shall follow in section VI for the electrical conduction problem, to derive an entropy balance equation for the electronic subsystem in the Born-Markov limit, based on the transport equation for the total electrons + phonons + field system, without any need to invoke expressions like (20) together with extra conservation laws.

One of the advantages of our approach, besides being grounded on fundamental facts regarding the nature of thermodynamic observables, is that, as opposed to actively studied relative-entropy formulations of quantum entropy production, it can be generalized to initial states with correlations among the subsystems, i.e. not of the local equilibrium form. This is very important, since the neglect of correlations in the state of an isolated system is inconsistent with the specification of its energy. We have safely ignored this fact in our present discussion because the consideration of a nonvanishing second term in (40), due to $\hat S^\sim_0 \neq 0$, only adds the term
$$\frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty} ds\, \frac{e^{st}}{s+K_s}\,\mathcal D L\,\frac{1}{s+i\mathcal N L}\,\hat S^\sim_0 \qquad (53)$$
to the solution (43). However, it is easily seen that expressions containing $\hat S^\sim_0$ contribute higher-order terms in the weak-coupling expansion embodied in the Born-Markov limit and are therefore negligible; the same happens for the contributions coming from $\hat\rho^\sim_0$ in the Born-Markov limit of the generalized master equation for $\hat\varrho_t$. Therefore, our formalism has room to study memory effects and strong correlations in the initial state with only straightforward modifications. These memory effects are the ones responsible for heat transport depending on the path of thermodynamic states in phenomenological thermodynamics.

V. RELATION WITH CLASSICAL STOCHASTIC THERMODYNAMICS

We now show that our result, (47), is consistent with the result for the average rate of change of the thermodynamic entropy obtained in Onsager's classical theory.
We consider an isolated macroscopic system which has been left alone for a very long time (an aged system). The classical thermodynamic state is described by a set of extensive variables, such as energy, mass, electric charge, etc., which randomly fluctuate about their equilibrium values and whose values define the classical state of the system. This state is represented by the symbol $a_t$ (shifted to vanish in equilibrium), whose successive values in time describe a stationary stochastic process.

It can be shown that, if the fluctuations follow a Gaussian process, which can be argued to be the case if the extensive variables are algebraic sums of very many independent (weakly coupled) "microscopic" quantities, so that the central limit theorem can be invoked, and if in addition the process is Markovian, then the joint probability distribution,
$$\Omega(a', \Delta t, a'') = P_{a'}P_{a'a''}(\Delta t), \qquad (54)$$
for observing the values $a_{t'} = a'$ and $a_{t''} = a''$ at respective times separated by an interval $\Delta t = t'' - t'$, with $P_{a'a''}(\Delta t)$ the corresponding conditional probability to make a transition between these states, is given by Onsager's principle, which we write as
$$\ln\Omega(a', \Delta t, a'') = \frac{1}{2}\left(S_{a'} + S_{a''} + \int_{t'}^{t''} d\tau\,\dot S\right)_{\!\min}, \qquad (55)$$
where the path of integration is the trajectory, $a_\tau$, which makes the integral a minimum, subject to the conditions $a_{t'} = a'$ and $a_{t''} = a''$. Clearly, if we take the limit $\Delta t \to 0$, the integral tends to $\dot S_{a'}\Delta t$, where $\dot S_{a'}$ is the entropy production rate in the state $a'$, whose entropy is related to the probability distribution, $P_{a'}$, by Boltzmann's principle.
Subtracting the time-reversed expression of Onsager's principle from (55) we get, in the limit $\Delta t \to 0$, the alternative form
$$\ln\frac{\Omega(a', \Delta t, a'')}{\Omega(a'', -\Delta t, a')} = \frac{1}{2}(\dot S_{a'} + \dot S_{a''})\Delta t. \qquad (56)$$
We now average (56) over the joint distribution (54), which is expanded up to linear order in $\Delta t$ by writing the transition probabilities to go from $a'$ to $a''$ after a time $\Delta t$ as
$$P_{a'a''}(\Delta t) = \delta_{a'a''} + W_{a'a''}\Delta t = P_{a''a'}(-\Delta t), \qquad (57)$$
the last equality being the statement of Onsager's microscopic reversibility, leading to the symmetry of the transition rates, $W_{a'a''}$, under the interchange of indices. This symmetry allows us to write the averaged left-hand side of (56) as $\Delta t\sum_{a'a''} P_{a'}W_{a'a''}\ln(P_{a'}/P_{a''})$ and the right-hand side as $\Delta t\sum_{a'} P_{a'}\dot S_{a'}$. Therefore, by recognizing the latter sum as $\langle\dot S\rangle\Delta t$, we get the expression
$$\langle\dot S\rangle = \sum_{a'a''} P_{a'}W_{a'a''}\ln\frac{P_{a'}}{P_{a''}}, \qquad (58)$$
which gives the desired link with our theory, by comparison with (47). We remark that (56) is of the same form,
$$\ln\frac{P_{\Delta t}(\sigma)}{P_{\Delta t}(-\sigma)} = \sigma\Delta t, \qquad (59)$$
as the Gallavotti-Cohen fluctuation theorem, if we read $\frac{1}{2}(\dot S_{a'} + \dot S_{a''})$ as a realization of the random number $\sigma = \frac{1}{2}(\dot S_{a_{t'}} + \dot S_{a_{t''}})$, representing the average entropy production in going from $a_{t'}$ to $a_{t''}$ during a time interval $\Delta t$ along the stochastic trajectory of states; and translate the joint probability, $\Omega(a', \pm\Delta t, a'')$, to have the state realizations $a_{t'} = a'$ and $a_{t''} = a''$ in a forward ($+\Delta t$) or backward ($-\Delta t$) evolution, to the corresponding probabilities $P_{\Delta t}(\pm\sigma)$ to have the realization, $\frac{1}{2}(\dot S_{a'} + \dot S_{a''})$, of $\sigma$ or its time-reversed value.

VI. ENTROPY PRODUCTION IN ELECTRICAL CONDUCTION

We next apply the formalism to a model of independent electrons coupled to phonons in the presence of an electric field.
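The averaging step leading from (56) to (58) can be illustrated numerically. In the sketch below (toy numbers, not the paper's calculation), the joint distribution is built from (54) and (57) with symmetric rates and an arbitrary occupation $P_a$, and the averaged log-ratio of forward and backward joint distributions is compared with the right-hand side of (58) times $\Delta t$.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
dt = 1e-4

# Symmetric rates (microscopic reversibility) and arbitrary occupations P_a.
W = rng.random((n, n)); W = 0.5 * (W + W.T); np.fill_diagonal(W, 0.0)
P = rng.random(n); P /= P.sum()

# Joint distributions from Eqs. (54) and (57):
#   Omega(a', +dt, a'') = P_{a'} [delta_{a'a''} + W_{a'a''} dt]
#   Omega(a'', -dt, a') = P_{a''}[delta_{a'a''} + W_{a'a''} dt]
Omega_fwd = P[:, None] * (np.eye(n) + W * dt)
Omega_bwd = (np.eye(n) + W * dt) * P[None, :]

mask = ~np.eye(n, dtype=bool)          # diagonal terms have zero log-ratio
lhs = np.sum(Omega_fwd[mask] * np.log(Omega_fwd[mask] / Omega_bwd[mask]))

# Eq. (58) times dt: <dS/dt> dt = dt * sum P_{a'} W_{a'a''} ln(P_{a'}/P_{a''}).
i, j = np.nonzero(mask)
rhs = dt * np.sum(P[i] * W[i, j] * np.log(P[i] / P[j]))
print(np.isclose(lhs, rhs))
```

For off-diagonal pairs the ratio of joint distributions reduces exactly to $P_{a'}/P_{a''}$, which is why the match is exact rather than merely first order in $\Delta t$.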
We are interested in the average rate of entropy produced in the electronic system and transported to the phonons in the steady state. The picture is then that of a large system divided into three subsystems: the electrons, the phonons, and the sources of the field. In the thermodynamic description we parametrize, as usual, the coupling to the latter by introducing $\mathbf E_t$ and forgetting about the structure of this subsystem.

The Hamiltonian of the total system is then
$$\hat H = \hat H_{\text{el}} + \hat H_{\text{ph}} + \hat H_{\text{el-ph}} + \hat H_{F;t}, \qquad (60)$$
where $\hat H_{\text{el}} = \sum_k \epsilon_k\,\hat c^\dagger_k\hat c_k$ is the kinetic energy operator for the electrons, which are assumed to be free except for their interaction with the field and the phonons; the energy operator of the phonon subsystem is $\hat H_{\text{ph}} = \sum_q \omega_q\,\hat a^\dagger_q\hat a_q$; and the electron-phonon interaction is bilinear in electron operators and linear in phonon operators,
$$\hat H_{\text{el-ph}} = \sum_{qkk'} M_{qk'k}\,\hat c^\dagger_{k'}\hat c_k\,(\hat a_q + \hat a^\dagger_{-q}), \qquad (61)$$
with $M_{qk'k}$ representing the strength of the coupling. The generalization to multiple electronic bands and multiple phonon branches is straightforward and does not change the results. Finally, $\hat H_{F;t}$ represents the effects of the applied electric field, $\mathbf E_t$, and can be written in first-quantized notation as $\hat H_{F;t} = -e\,\mathbf E_t\cdot\sum_e \hat{\mathbf x}_e$, where $\hat{\mathbf x}_e$ is the displacement of electron $e$ from some arbitrarily chosen reference position.

Up to time $t = 0$ we have a collection of electrons in local equilibrium with the lattice vibrations of a metal at a temperature $T$, and no applied electric field, i.e. $\mathbf E_0 = 0$.
The initial state is then of the form

$$\hat\rho_0 = Z^{-1} \exp[-(\hat H_0 - \mu \hat N_{\rm el})/T], \qquad (62)$$

where $Z = Z_{\rm el} Z_{\rm ph}$ is the grand partition function, $\hat N_{\rm el}$ is the operator for the total number of electrons, and $\hat H_0$ is the Hamiltonian of the uncoupled subsystems,

$$\hat H_0 = \hat H_{\rm el} + \hat H_{\rm ph}, \qquad (63)$$

whose eigenstates, constituting the thermodynamic basis, are

$$|\alpha\rangle = |n_1 n_2 \cdots n_k \cdots\rangle\, |N_1 N_2 \cdots N_q \cdots\rangle = |n; N\rangle, \qquad (64)$$

which represent the numbers of electrons, $\{n_k\}$, and phonons, $\{N_q\}$, in each single-particle state.

The electric field is turned on at time $t = 0^+$ to a constant value, i.e. $\mathbf{E}_t = \mathbf{E}$ for $t > 0$, and the subsystems are subsequently coupled. In the notation of section IV we then have, in the generalized thermodynamic description,

$$\hat H = \hat H_0 + \hat H_F, \qquad \hat V = \hat H_{\text{el-ph}}. \qquad (65)$$

Note that $\hat V$ is the coupling which fully mixes the degrees of freedom of the different subsystems (like the $\hat H_{lm}$ in section III), which need not be separated by spatial boundaries. We now explain in some detail how the perturbation scheme developed in section IV applies to the present case. However, we only need to concentrate on how the transport equation is obtained in the Born-Markov limit, since this suffices to get the average rate of entropy production.

The idea is then to first derive the transport equation for the total system from the Liouville-von Neumann equation; we do it much in the same spirit as Kohn and Luttinger did for elastic electronic scattering, as generalized by Argyres to inelastic scattering. Having this transport equation, the average rate of change of the total thermodynamic entropy in the Born-Markov limit is

$$\langle \partial_t \hat S_t \rangle = -\sum_\alpha (\partial_t P_\alpha) \ln P_\alpha, \qquad (66)$$

as can easily be verified by using (49) in (47).
By proceeding with the transport or quantum Boltzmann equation for the electronic subsystem, we obtain a simple expression for the electronic entropy production.

For the purpose of the present discussion, it suffices to work with the Liouville-von Neumann equation to first order in the electric field. That is, with $\hat\rho_t = \hat\rho_0 + \hat\rho_t^{(1)}$ and $\hat\rho_t^{(1)}$ linear in the electric field, we write

$$i\partial_t \hat\rho_t^{(1)} = [\hat H_0 + \hat V, \hat\rho_t^{(1)}] + [\hat H_F, \hat\rho_0], \qquad (67)$$

where $\hat\rho_0^{(1)} = 0$. The Laplace transform of this equation, with $\hat\rho_s = \int_0^\infty e^{-st}\, \hat\rho_t^{(1)}\, dt$, reads

$$is\,\hat\rho_s = (L_0 + L_V)\,\hat\rho_s + s^{-1} L_F \hat\rho_0. \qquad (68)$$

With $\hat\varrho_s = D\hat\rho_s$ and $\hat\rho_s^{\sim} = N\hat\rho_s$, we separate this equation into a diagonal and a non-diagonal part, obtaining, respectively, the coupled algebraic equations

$$is\,\hat\varrho_s = D L_V \hat\rho_s^{\sim} + s^{-1} D L_F \hat\rho_0, \qquad (69)$$

$$[is + N(L_0 + L_V)]\, \hat\rho_s^{\sim} = N L_V \hat\varrho_s + s^{-1} N L_F \hat\rho_0. \qquad (70)$$

Solving for $\hat\rho_s^{\sim}$ in (70) and substituting the result in (69), we get a decoupled equation for $\hat\varrho_s$, which in the lowest Born approximation for the electron-phonon scattering reads

$$is\,\hat\varrho_s = D L_V \frac{1}{is + N L_0}\, N L_V \hat\varrho_s + s^{-1} D L_F \hat\rho_0. \qquad (71)$$

From this, the transport equation for the total system easily arises in the Born-Markov limit by taking the inverse Laplace transform and neglecting memory terms. In terms of the occupation probabilities $P_\alpha = \langle\alpha|\hat\varrho_t|\alpha\rangle$ we get

$$\partial_t P_\alpha = \frac{1}{i}(L_F \hat\rho_0)_{\alpha\alpha} + \sum_{\alpha'} (P_{\alpha'} W_{\alpha'\alpha} - P_\alpha W_{\alpha\alpha'}), \qquad (72)$$

with the transition rates induced by the electron-phonon coupling given by $W_{\alpha\alpha'} = 2\pi\, \delta(\varepsilon_\alpha - \varepsilon_{\alpha'})\, |\langle\alpha|\hat H_{\text{el-ph}}|\alpha'\rangle|^2$.
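The structure of the Pauli master equation (72), together with the entropy-rate formula (66), can be illustrated numerically. The sketch below (not from the paper) drops the field term, uses hypothetical symmetric rates $W_{\alpha\alpha'}$, and integrates the occupation probabilities by forward Euler; with symmetric rates the stationary state is uniform and the entropy rate stays non-negative throughout the relaxation.

```python
import numpy as np

# Numerical sketch of the Pauli master equation (72) without the field
# (drift) term, with hypothetical symmetric rates W. The average rate of
# change of the thermodynamic entropy, Eq. (66), is evaluated along the
# trajectory from the instantaneous dP_alpha/dt.
rng = np.random.default_rng(1)
n, dt, steps = 5, 1e-3, 10000

W = rng.random((n, n))
W = 0.5 * (W + W.T)              # microscopic reversibility: symmetric rates
np.fill_diagonal(W, 0.0)
P = rng.random(n)
P /= P.sum()

def rhs(P):
    # sum_{a'} (P_{a'} W_{a'a} - P_a W_{aa'})
    return W.T @ P - W.sum(axis=1) * P

entropy_rates = []
for _ in range(steps):
    dPdt = rhs(P)
    # Eq. (66): <dS/dt> = - sum_alpha (dP_alpha/dt) ln P_alpha
    entropy_rates.append(-np.sum(dPdt * np.log(P)))
    P = P + dPdt * dt

print(entropy_rates[0], entropy_rates[-1], P)
```

At long times $P_\alpha$ approaches the uniform distribution and the entropy rate decays to zero from above, as the second law requires for an isolated system.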
We have then derived the transport equation for the total system, in terms of which the average rate of change of the total thermodynamic entropy can be calculated, in the Born-Markov limit, using (66).

To proceed with the calculation of the entropy production of the electronic subsystem, we note that

$$P_\alpha = P_n^{\rm el}\, P_N^{\rm ph}\, \chi_{nN}^{\text{el-ph}}, \qquad (73)$$

where $P_n^{\rm el}$ is the probability that the electrons are in the Fock state $|n\rangle$ regardless of the state of the phonons, $P_N^{\rm ph}$ is the probability that the phonons are in the Fock state $|N\rangle$ regardless of the state of the electrons, and $\chi_{nN}^{\text{el-ph}}$ is the conditional probability that the total system is in the state $|\alpha\rangle$ in (64), given that the electron and phonon subsystems are in states $|n\rangle$ and $|N\rangle$, respectively, without "knowing" about each other. Clearly, $\chi_{nN}^{\text{el-ph}}$ is a function of the electron-phonon coupling strength, and can then be expanded in a power series in it:

$$\chi_{nN}^{\text{el-ph}} = 1 + \chi_{nN}^{\text{el-ph}(1)} + \chi_{nN}^{\text{el-ph}(2)} + \cdots. \qquad (74)$$

In the lowest Born approximation for the electron-phonon scattering, the electron and phonon subsystems are uncorrelated, i.e. $\chi_{nN}^{\text{el-ph}} = 1$, which is the usual Born-Oppenheimer approximation; then, by substituting (72) and (73) into (66), the average rate of change of the thermodynamic entropy of the total system turns out to be additive. For the electronic subsystem we have

$$\langle \partial_t \hat S_t \rangle_{\rm el} = -\sum_n (\partial_t P_n^{\rm el}) \ln P_n^{\rm el}, \qquad (75)$$

where the normalization condition $\sum_N P_N^{\rm ph} = 1$ has been used. Here, the transport equation for the electronic subsystem is obtained from (72) by summing over $N$:

$$\partial_t P_n^{\rm el} = \frac{1}{i}\sum_N (L_F \hat\rho_0)_{nN,nN} + \sum_{n'} (P_{n'}^{\rm el} \Gamma_{n'n} - P_n^{\rm el} \Gamma_{nn'}), \qquad (76)$$

where we have defined the phonon-averaged reduced transition rates $\Gamma_{nn'}$ as

$$\Gamma_{nn'} = \sum_N P_N^{\rm ph} \sum_{N'} W_{nN,n'N'}. \qquad (77)$$
We can still go further and use the assumed statistical independence of the electrons to factorize their probability distribution into the probabilities of the one-electron states,

$$P_n^{\rm el} = p_{n_1} p_{n_2} \cdots p_{n_k} \cdots, \qquad (78)$$

where $p_{n_k}$ is the probability that the one-electron state with quantum number $k$ has occupation $n_k = 0, 1$. Substituting this in (75) we obtain an additive contribution to the average rate of change of the thermodynamic entropy of the electronic subsystem,

$$\langle \partial_t \hat S_t \rangle_{\rm el} = -\sum_{k,n_k} (\partial_t p_{n_k}) \ln p_{n_k} = -\sum_k (\partial_t f_k) \ln\frac{f_k}{1 - f_k}, \qquad (79)$$

where in the last equality we identify the nonequilibrium one-electron distribution as $f_k = \sum_{n_k} n_k\, p_{n_k} = p_{n_k=1}$ and use $\sum_{n_k} p_{n_k} = 1$ to express $p_{n_k=0} = 1 - f_k$. The transport equation for $f_k$ is obtained by multiplying (76) by $n_k$ and summing over all $n$. To this end, note that

$$\Gamma_{nn'} = \sum_{k,k'\,(k \neq k')} w_{kk'}\, n_k (1 - n_{k'})\, |\langle \cdots n_k - 1 \cdots n_{k'} + 1 \cdots |n'\rangle|^2, \qquad (80)$$

which is obtained by using (61) explicitly, where the one-electron transition rate from state $k$ to state $k'$ is

$$w_{kk'} = 2\pi \sum_q |M_{qk'k}|^2 \Big[ \bar N(\omega_q)\, \delta(\epsilon_{k'} - \epsilon_k - \omega_q) + \big(\bar N(\omega_q) + 1\big)\, \delta(\epsilon_{k'} - \epsilon_k + \omega_q) \Big], \qquad (81)$$

with $\bar N(\omega_q) = \sum_N P_N^{\rm ph} \langle N|\hat a_q^\dagger \hat a_q|N\rangle$ the average number of phonons in the single-particle state with quantum number $q$. We assume that the phonon subsystem can be kept in equilibrium at temperature $T$ (hence the dependence of $\bar N$ on $\omega_q$ only), no matter the nonequilibrium state of the electrons, as is the case for a good enough heat reservoir.

That the phonons can be considered as a heat reservoir in the Born-Oppenheimer approximation can be seen by looking at the transport equation for the phonon subsystem, obtained from (72) by summing over $n$:

$$\partial_t P_N^{\rm ph} = \sum_{N'} (P_{N'}^{\rm ph}\, \Theta_{N'N} - P_N^{\rm ph}\, \Theta_{NN'}), \qquad (82)$$

where we have defined the electron-averaged reduced transition rates as

$$\Theta_{NN'} = \sum_n P_n^{\rm el} \sum_{n'} W_{nN,n'N'}. \qquad (83)$$
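The Bose factors in (81), $\bar N(\omega_q)$ for phonon absorption and $\bar N(\omega_q)+1$ for emission, enforce detailed balance between the forward and reverse rates whenever the phonons are in equilibrium at temperature $T$. A minimal numerical check (all numbers illustrative; units with $\hbar = k_B = 1$):

```python
import math

# The Bose factors in Eq. (81) -- Nbar(w) for absorption, Nbar(w)+1 for
# emission -- imply detailed balance between w_{kk'} and w_{k'k} when the
# phonons are thermal at temperature T.
T, w = 0.3, 1.0                       # illustrative temperature and phonon frequency
Nbar = 1.0 / (math.exp(w / T) - 1.0)  # equilibrium phonon occupation

rate_up = Nbar          # k -> k' with eps_{k'} = eps_k + w (phonon absorbed)
rate_down = Nbar + 1.0  # k' -> k, emitting the phonon back

# Detailed balance: w_{kk'} / w_{k'k} = exp((eps_k - eps_{k'}) / T)
ratio = rate_up / rate_down
print(ratio, math.exp(-w / T))
```

The ratio of up and down rates equals the Boltzmann factor $e^{(\epsilon_k - \epsilon_{k'})/T}$, which is what makes the Fermi-Dirac distribution stationary under the collision term derived below.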
Here we observe the important fact that the contribution from the first term of (72) vanishes due to the null value of the trace of the commutator $[\hat H_F, \hat\rho_0]$ in the subspace of electrons. This allows the existence of a steady-state solution of (82) for which detailed balance holds, which is then an equilibrium solution. In any case, the assumption that the phonons are in equilibrium is not necessary for the following derivation of the electronic entropy production, as $\bar N(\omega_q)$ in (81) can be replaced by the more complicated average obtained by using the nonequilibrium solution of (82), not investigated here.

The transport equation for the one-electron distribution is then found, from (76), to be

$$\partial_t f_k = \frac{1}{i}\sum_{nN} n_k (L_F \hat\rho_0)_{nN,nN} + \sum_{nn'} n_k (P_{n'}^{\rm el}\, \Gamma_{n'n} - P_n^{\rm el}\, \Gamma_{nn'}). \qquad (84)$$

This is just the quantum Boltzmann equation. To write it in the familiar form we first note that, by writing $\hat H_{\rm el}$ in first-quantized form and using the well-known formula $[\hat{\mathbf{x}}_e, f(\hat{\mathbf{p}}_e)] = i\nabla_{\hat{\mathbf{p}}_e} f(\hat{\mathbf{p}}_e)$, we have $L_F \hat\rho_0 = i(e/T)\, \mathbf{E} \cdot \hat{\mathbf{v}}\, \hat\rho_0$, where $\hat{\mathbf{v}} = \sum_e (\hat{\mathbf{p}}_e/m) = \sum_k \mathbf{v}_k \hat c_k^\dagger \hat c_k$ is the velocity operator of all electrons, with $\mathbf{v}_k = \nabla_k \epsilon_k$ the band velocity. Therefore, the first term in (84) is

$$(\partial_t f_k)_{\rm drift} = \frac{1}{i}\sum_{nN} n_k (L_F \hat\rho_0)_{nN,nN} = \frac{e\mathbf{E}}{T} \cdot {\rm Tr}(\hat n_k \hat{\mathbf{v}}\, \hat\rho_0) = e\mathbf{E} \cdot \mathbf{v}_k\, f_k^0 (1 - f_k^0)/T = -e\mathbf{E} \cdot \nabla_k f_k^0, \qquad (85)$$

where $f_k^0 = {\rm Tr}(\hat n_k \hat\rho_0)$ is the equilibrium Fermi-Dirac one-electron distribution. The second term in (84) can be written, using (80), as

$$(\partial_t f_k)_{\rm coll} = \sum_{nn'} n_k (P_{n'}^{\rm el}\, \Gamma_{n'n} - P_n^{\rm el}\, \Gamma_{nn'}) = \sum_n P_n^{\rm el}\, n_k \sum_{k',k''} w_{k'k''} (1 - n_{k'})\, n_{k''} - \sum_n P_n^{\rm el}\, n_k \sum_{k',k''} w_{k'k''}\, n_{k'} (1 - n_{k''}). \qquad (86)$$

Therefore, by noting that $n_k(1 - n_k) = 0$ and using (78), we see that the terms which do not cancel in the above sums give

$$(\partial_t f_k)_{\rm coll} = \sum_{k'} \big[ f_{k'} w_{k'k} (1 - f_k) - f_k w_{kk'} (1 - f_{k'}) \big]. \qquad (87)$$
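As a consistency check on the collision term (87): it must vanish when $f_k$ is the equilibrium Fermi-Dirac distribution and the rates obey the detailed-balance condition $w_{kk'}/w_{k'k} = e^{(\epsilon_k - \epsilon_{k'})/T}$ implied by (81) with thermal phonons. A short sketch (energies and base rates are illustrative, not from the paper):

```python
import numpy as np

# Verify that the collision term (87) vanishes for a Fermi-Dirac f_k when
# the rates satisfy detailed balance, w_{kk'}/w_{k'k} = exp((e_k - e_k')/T).
rng = np.random.default_rng(2)
T = 0.5
eps = rng.uniform(-1.0, 1.0, size=6)       # illustrative level energies
f = 1.0 / (np.exp(eps / T) + 1.0)          # Fermi-Dirac occupations (mu = 0)

sym = rng.random((6, 6))
sym = 0.5 * (sym + sym.T)                  # symmetric "bare" rates
w = sym * np.exp((eps[:, None] - eps[None, :]) / (2 * T))  # w[k, k'] = w_{kk'}

# (d f_k/dt)_coll = sum_{k'} [ f_{k'} w_{k'k} (1-f_k) - f_k w_{kk'} (1-f_{k'}) ]
coll = (f[None, :] * w.T * (1 - f[:, None])).sum(axis=1) \
     - (f[:, None] * w * (1 - f[None, :])).sum(axis=1)
print(np.max(np.abs(coll)))
```

Every gain term is exactly balanced by the corresponding loss term, so the equilibrium distribution is stationary, as the derivation requires.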
We have thus arrived at the familiar form of the quantum Boltzmann equation by substituting (85) and (87) into (84). With this, we can rewrite the average rate of change of the thermodynamic entropy of the electronic subsystem, from (79), as

$$\langle \partial_t \hat S_t \rangle_{\rm el} = \Pi_{\rm el} - \Phi_{\rm el}, \qquad (88)$$

which is the entropy balance equation for the electronic subsystem, with the average electronic entropy production rate

$$\Pi_{\rm el} = \frac{1}{2}\sum_{kk'} \big[ f_{k'} w_{k'k} (1 - f_k) - f_k w_{kk'} (1 - f_{k'}) \big] \ln\frac{f_{k'} w_{k'k} (1 - f_k)}{f_k w_{kk'} (1 - f_{k'})}, \qquad (89)$$

which, similarly to (51), is a sum of terms of the form $(x - y)\ln(x/y)$ and hence satisfies the second law of thermodynamics; the entropy flux from the electrons to the phonons is

$$\Phi_{\rm el} = \frac{1}{2}\sum_{kk'} \big[ f_{k'} w_{k'k} (1 - f_k) - f_k w_{kk'} (1 - f_{k'}) \big] \ln\frac{w_{k'k}}{w_{kk'}} + \sum_k (\partial_t f_k)_{\rm drift} \ln\frac{f_k}{1 - f_k}. \qquad (90)$$

In the steady state the left-hand side of (88) is exactly zero, and then all the entropy produced in the electronic system is transported to the phonons. We now want to show that this steady-state entropy flux toward the lattice vibrations reproduces the well-known expression for Joule heating.

We need a solution, $f_k = f_k^0 + \delta f_k$, of the quantum Boltzmann equation which, to linear order in the electric field strength, we write formally as

$$\delta f_k = \sum_{k'} W_{kk'}^{-1} \left[ \frac{e\mathbf{E} \cdot \mathbf{v}_{k'}}{T}\, f_{k'}^0 \big(1 - f_{k'}^0\big) \right], \qquad (91)$$

where the linearized collision operator $W$ has matrix elements

$$W_{kk'} = f_k^0 w_{kk'} + w_{k'k} (1 - f_k^0) - \delta_{kk'}/\tau_k. \qquad (92)$$

The quasiparticle relaxation time $\tau_k$ is given by

$$\frac{1}{\tau_k} = \sum_{k'} \big[ f_{k'}^0 w_{k'k} + w_{kk'} (1 - f_{k'}^0) \big], \qquad (93)$$

and becomes equal to the momentum relaxation time if the transition rates $w_{kk'}$ are independent of the angle between $k$ and $k'$.

We now expand (89). Because both the logarithm and the prefactor vanish in equilibrium, the leading contribution is $O(\delta f^2)$.
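The claim that (89) satisfies the second law for any occupations follows from the elementary inequality $(x - y)\ln(x/y) \ge 0$ for $x, y > 0$. The sketch below (with arbitrary, illustrative occupations $f_k$ and positive rates, not tied to any particular band structure) evaluates (89) directly:

```python
import numpy as np

# Eq. (89) writes the electronic entropy production as a sum of terms of
# the form (x - y) ln(x/y), each non-negative. Numerical check with an
# arbitrary nonequilibrium occupation f_k and arbitrary positive rates.
rng = np.random.default_rng(3)
n = 8
f = rng.uniform(0.05, 0.95, size=n)       # hypothetical nonequilibrium f_k
w = rng.uniform(0.1, 1.0, size=(n, n))    # w[k, k'] = w_{kk'}, positive

Pi_el = 0.0
for k in range(n):
    for kp in range(n):
        if k == kp:
            continue
        x = f[kp] * w[kp, k] * (1 - f[k])   # gain channel  k' -> k
        y = f[k] * w[k, kp] * (1 - f[kp])   # loss channel  k  -> k'
        Pi_el += 0.5 * (x - y) * np.log(x / y)

print(Pi_el)
```

For generic (non-equilibrium) $f_k$ the result is strictly positive; it vanishes only under the detailed-balance condition checked above.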
The term from expanding the logarithm is easily seen to be

$$\frac{\delta f_{k'}}{f_{k'}^0 (1 - f_{k'}^0)} - \frac{\delta f_k}{f_k^0 (1 - f_k^0)},$$

while the term coming from the prefactor is

$$W_{kk'}\, \delta f_{k'} - W_{k'k}\, \delta f_k. \qquad (94)$$

Combining these equations with (91) yields

$$\Pi_{\rm el} = \sum_k \frac{e\mathbf{E} \cdot \mathbf{v}_k}{T}\, \delta f_k = \frac{\sigma E^2}{T}, \qquad (95)$$

where, in the last equality, we recognize the electric current as $e\langle \hat{\mathbf{v}} \rangle = \sum_k e\mathbf{v}_k\, \delta f_k = \sigma \mathbf{E}$, with $\sigma$ the electric conductivity. Thus we see that, to leading order in the electron-phonon coupling and the electric field, and on the assumption that the phonons act as a reservoir, the electronic entropy production predicted by our formula is exactly the result expected from the Joule heating, $T\Pi_{\rm el} = \sigma E^2$, implied by the electric field. Therefore, as desired, we have arrived at an expression for energy dissipation from a first-principles calculation of entropy production, not the other way around, as in previous approaches.

We remark that the results for the entropy production presented here go beyond linear response theory. This is because, even when starting from the correction to the density matrix that is linear in the electric field, $\hat\rho_t^{(1)}$ (see Eq. (67)), we derived the leading contribution to the electronic entropy production, which is quadratic in the electric field. This is in contrast to past approaches to the calculation of the Joule heating, which require going to the second-order (in the electric field) contribution to the density matrix $\hat\rho_t$ for the calculation of the rate of change of the energy of the electrons. A field-theoretic approach to the calculation of higher-order terms in the entropy production, beyond the Born-Markov approximation, will be treated elsewhere.

It is illustrative to evaluate the result explicitly, assuming e.g. dispersionless optical phonons, $\omega_q = \omega_0$. With $|M_{qk'k}|^2 = M^2 \delta_{q,k'-k}$, and assuming a degenerate electron system (i.e. $T \ll \epsilon_F$), we obtain

$$\Pi_{\rm el} = \frac{e^2 E^2}{\pi m M^2}\, \frac{D_{\epsilon_F}}{D_{\epsilon_F - \omega_0} + D_{\epsilon_F + \omega_0}}\, \frac{\epsilon_F}{T}\, \sinh(\omega_0/T), \qquad (96)$$

with $D_\epsilon$ the electron density of states.
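The identity $T\Pi_{\rm el} = \sigma E^2$ in (95) can be made concrete in the simplest relaxation-time approximation, where the linearized solution is $\delta f_k = \tau\,(e E v_k/T)\, f_k^0(1 - f_k^0)$ and $\sigma = e^2 \tau \sum_k v_k^2 f_k^0 (1 - f_k^0)/T$. The one-dimensional tight-binding band, $\tau$, $E$, and $T$ below are all illustrative choices, not parameters from the paper:

```python
import numpy as np

# Sketch of Eq. (95): with a relaxation-time solution
#   delta f_k = tau (e E v_k / T) f0_k (1 - f0_k),
# the entropy production Pi_el = sum_k (e E v_k / T) delta f_k equals
# sigma E^2 / T with sigma = e^2 tau sum_k v_k^2 f0_k (1 - f0_k) / T.
# Units with hbar = k_B = 1; all parameters illustrative.
e, tau, E, T = 1.0, 0.8, 0.2, 0.5
k = np.linspace(-np.pi, np.pi, 400, endpoint=False)
eps = -2.0 * np.cos(k)                 # tight-binding band (mu = 0)
v = 2.0 * np.sin(k)                    # band velocity v_k = d eps / d k
f0 = 1.0 / (np.exp(eps / T) + 1.0)     # equilibrium Fermi-Dirac

df = tau * (e * E * v / T) * f0 * (1 - f0)   # linearized delta f_k
Pi_el = np.sum((e * E * v / T) * df)         # left form of Eq. (95)
sigma = e**2 * tau * np.sum(v**2 * f0 * (1 - f0)) / T
print(Pi_el, sigma * E**2 / T)
```

The two evaluations agree, and the entropy production is manifestly quadratic in the field, which is the sense in which the result lies beyond linear response.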
In this case, the entropy production becomes large at low temperatures due to an increase in the conductivity (the phonons are not thermally activated, so scattering centers become scarce), and hence in the Joule heating; this is expected when the only scattering mechanism is from optical phonons.

Finally, we would like to point out the connection of the result (95) with the discussion in section III concerning the foundations of the classical theory. With the action of one of the subsystems (the sources of the $\mathbf{E}$-field) treated parametrically, the three spatial components of the field, $E_\lambda$, playing the role of the external parameters of the electronic subsystem, we can define an operator $\hat F_\lambda = \partial \hat H_F/\partial E_\lambda$ for the force exerted on the electrons upon variation of the field, and write

$$T\hat\Pi_{\rm el} = \sum_\lambda \hat F_\lambda\, \partial_t E_\lambda = \partial_t \sum_\lambda \hat F_\lambda E_\lambda - \sum_\lambda (\partial_t \hat F_\lambda) E_\lambda = \partial_t \hat H_F + e\hat{\mathbf{v}} \cdot \mathbf{E} = e\hat{\mathbf{v}} \cdot \mathbf{E}, \qquad (97)$$

where $\hat H_F = -e\sum_{\lambda,e} E_\lambda \hat x_{\lambda e}$, and to get the last equality we use $\partial_t \hat H_F = \partial_t \hat H = 0$, since the total system is isolated. Taking the expectation value of (97), the last equality is just (95), and the form of the first equality is reminiscent of the classical expression (25).

We then see that, although in the present discussion the subsystems are not separated by spatial boundaries (the essence of the generalized thermodynamic description) and there is no local equilibrium at all times (the phonons remain in equilibrium, as implied by the assumption that they constitute a good heat reservoir, but the electrons attain a nonequilibrium steady state), the common feature with the discussion in section III is the complete factorization of the probability distribution of the system over the degrees of freedom of the different subsystems (uncorrelated subsystems), here manifested as the Born-Oppenheimer approximation. An appropriate account of the quantum correlations between subsystems is therefore the key to purely quantum thermodynamic behavior.

VII.
CONCLUSION

We have developed a theory of entropy production in quantum many-body systems by introducing an entropy operator and calculating the average rate of change of its thermodynamically measurable part. We show that the laws of thermodynamics are satisfied exactly within our formalism. In the Born-Markov approximation, which describes the physics of weakly-coupled subsystems of an isolated system in the long-time limit, the theory reproduces the entropy balance equation which is fundamental in classical nonequilibrium thermodynamics, as well as the Joule-heating contribution to the entropy production expected in a standard conductor. Applications to other systems, as well as generalizations beyond the weak-coupling limit, will be presented elsewhere.

ACKNOWLEDGMENTS

ESC would like to acknowledge the support of the Fulbright-Colciencias fellowship. The work of AJM was supported by the National Science Foundation under grant DMR-1308236.

- J. Gemmer, M. Michel, and G. Mahler, Quantum Thermodynamics (Springer, Berlin, 2004).
- J. Millen and A. Xuereb, New J. Phys., 011002 (2016).
- J. Goold, M. Huber, A. Riera, L. del Rio, and P. Skrzypczyk, arXiv:1505.07835.
- S. Vinjanampathy and J. Anders, arXiv:1508.06099.
- C. Bustamante, J. Liphardt, and F. Ritort, Phys. Today, 43 (2005).
- M. Esposito, U. Harbola, and S. Mukamel, Rev. Mod. Phys., 1665 (2009).
- M. Campisi, P. Hänggi, and P. Talkner, Rev. Mod. Phys., 771 (2011).
- R. Dorner, J. Goold, C. Cormick, M. Paternostro, and V. Vedral, Phys. Rev. Lett., 160601 (2012).
- E. Khatami, G. Pupillo, M. Srednicki, and M. Rigol, Phys. Rev. Lett., 050403 (2013).
- M. Marcuzzi and A. Gambassi, Phys. Rev. B, 134307 (2014).
- F. H. L. Essler, S. Evangelisti, and M. Fagotti, Phys. Rev. Lett., 247206 (2012).
- O. Penrose, Rep. Prog. Phys., 1937 (1979).
- A. Polkovnikov, K. Sengupta, A. Silva, and M. Vengalattore, Rev. Mod. Phys., 863 (2011).
- V. I. Yukalov, Laser Phys. Lett., 485 (2011).
- L. D'Alessio, Y. Kafri, A. Polkovnikov, and M. Rigol, arXiv:1509.06411.
- C. Gogolin and J. Eisert, arXiv:1503.07538.
- J. Eisert, M. Friesdorf, and C. Gogolin, Nature Phys., 124 (2015).
- P. Dutt, J. Koch, J. Han, and K. Le Hur, Ann. Phys., 2963 (2011).
- M. Esposito, M. A. Ochoa, and M. Galperin, Phys. Rev. Lett., 080602 (2015).
- J.-T. Hsiang and B. L. Hu, Ann. Phys., 139 (2015).
- T. Yuge and A. Sugita, J. Phys. Soc. Jpn., 014001 (2015).
- H. Ness, Phys. Rev. E, 062119 (2014).
- D. Manzano and P. I. Hurtado, Phys. Rev. B, 125138 (2014).
- D. Xu and J. Cao, Front. Phys., 110308 (2016).
- I. Prigogine, Introduction to Thermodynamics of Irreversible Processes (Wiley & Sons, New York, 1967).
- S. R. de Groot and P. Mazur, Non-Equilibrium Thermodynamics (Dover, New York, 1984).
- D. Jou, J. Casas-Vázquez, and G. Lebon, Rep. Prog. Phys., 1105 (1988).
- D. Jou, J. Casas-Vázquez, and G. Lebon, Rep. Prog. Phys., 1035 (1999).
- E. H. Lieb and J. Yngvason, Proc. R. Soc. A, 20130408 (2013).
- H. Mori, J. Phys. Soc. Jpn., 1029 (1956).
- H. Mori, Phys. Rev., 1829 (1958).
- D. N. Zubarev, Fortschr. Phys., 125 (1970).
- D. N. Zubarev, Cond. Matt. Phys. (1994).
- B. Robertson, Phys. Rev., 151 (1966).
- A. Polkovnikov, Ann. Phys., 486 (2011).
- T. N. Ikeda, N. Sakumichi, A. Polkovnikov, and M. Ueda, Ann. Phys., 338 (2015).
- H. Spohn, J. Math. Phys., 1227 (1978).
- H. Spohn and J. L. Lebowitz, Adv. Chem. Phys., 109 (1978).
- M. Esposito, K. Lindenberg, and C. Van den Broeck, New J. Phys., 013013 (2010).
- H. Hossein-Nejad, E. O'Reilly, and A. Olaya-Castro, New J. Phys., 075014 (2015).
- P. Mehta and N. Andrei, Phys. Rev. Lett., 086804 (2008).
- M. Suzuki, Physica A, 1904 (2011).
- M. Suzuki, Physica A, 1074 (2012).
- L. Pucci, M. Esposito, and L. Peliti, J. Stat. Mech. P04005 (2013).
- L. Van Hove, Physica, 441 (1957).
- I. Prigogine and P. Résibois, Physica, 629 (1961).
- S. Fujita, Physica, 281 (1962).
- S. Nakajima, Prog. Theor. Phys., 948 (1958).
- R. Zwanzig, J. Chem. Phys., 1338 (1960).
- C. Jarzynski, Nature Phys., 105 (2015).
- S. V. Aradhya and L. Venkataraman, Nature Nanotech., 399 (2013).
- W. Lee et al., Nature, 209 (2013).
- J. P. Pekola, Nature Phys., 118 (2015).
- T. Feldmann and R. Kosloff, Phys. Rev. E, 051114 (2012).
- R. Uzdin, A. Levy, and R. Kosloff, Phys. Rev. X, 031044 (2015).
- B. Rutten, M. Esposito, and B. Cleuren, Phys. Rev. B, 235122 (2009).
- M. Esposito, N. Kumar, K. Lindenberg, and C. Van den Broeck, Phys. Rev. E, 031117 (2012).
- M. Esposito, K. Lindenberg, and C. Van den Broeck, EPL, 60010 (2009).
- R. Clausius, Ann. Phys. (Leipzig), 353 (1865).
- W. H. Cropper, Am. J. Phys., 1068 (1986).
- R. M. Velasco, L. G.-C. Scherer, and F. J. Uribe, Entropy, 82 (2011).
- E. Fermi, Thermodynamics (Dover, New York, 1956).
- L. D. Landau and E. M. Lifshitz, Statistical Physics, vol. 5 (Butterworth-Heinemann, Oxford, 1980), 3rd ed.
- R. Kubo, J. Phys. Soc. Jpn., 570 (1957).
- J. G. Kirkwood, J. Chem. Phys., 180 (1946).
- M. S. Green, J. Chem. Phys., 1281 (1952).
- T. Yamamoto, Prog. Theor. Phys., 11 (1953).
- R. Zwanzig, Phys. Rev., 983 (1961).
- J. W. Gibbs, Elementary Principles in Statistical Mechanics (C. Scribner's Sons, New York, 1902).
- M. Suzuki, Prog. Theor. Phys., 475 (1998).
- S. Hershfield, Phys. Rev. Lett., 2134 (1993).
- A. Wehrl, Rev. Mod. Phys., 221 (1978).
- L. F. Santos, A. Polkovnikov, and M. Rigol, Phys. Rev. Lett., 040601 (2011).
- E. Levi, M. Heyl, I. Lesanovsky, and J. P. Garrahan, arXiv:1510.04634.
- K. Husimi, Proc. Phys. Math. Soc. Japan, 264 (1940).
- A. Polkovnikov, Phys. Rev. Lett., 220402 (2008).
- R. Zwanzig, Physica, 1109 (1964).
- L. Van Hove, Physica, 517 (1955).
- E. B. Davies, Commun. Math. Phys., 91 (1974).
- M. Stark and M. Kollar, arXiv:1308.1610.
- B. Bertini, F. H. L. Essler, S. Groha, and N. Robinson, arXiv:1506.02994.
- N. Nessi and A. Iucci, arXiv:1503.02507.
- B. A. Lippmann and J. Schwinger, Phys. Rev., 469 (1950).
- J. Schnakenberg, Rev. Mod. Phys., 571 (1976).
- J.-L. Luo, C. Van den Broeck, and G. Nicolis, Z. Phys. B, 165 (1984).
- U. Seifert, Phys. Rev. Lett., 040602 (2005).
- M. Esposito and C. Van den Broeck, Phys. Rev. E, 011143 (2010).
- T. Tomé and M. J. de Oliveira, Phys. Rev. Lett., 020601 (2012).
- J. Philippot, Physica, 490 (1961).
- L. Onsager, Phys. Rev., 2265 (1931).
- N. Hashitsume, Prog. Theor. Phys., 461 (1952).
- L. Onsager and S. Machlup, Phys. Rev., 1505 (1953).
- Note: we are interpreting the normal forward evolution as the mirror image in time of that originally considered by Onsager and Machlup since, as pointed out by them, the mirror image is the only one that satisfies the initial conditions nontrivially.
- H. B. G. Casimir, Rev. Mod. Phys., 343 (1945).
- G. Gallavotti and E. G. D. Cohen, Phys. Rev. Lett., 2694 (1995).
- W. Kohn and J. M. Luttinger, Phys. Rev., 590 (1957).
- P. N. Argyres, J. Phys. Chem. Solids, 66 (1961).
- J. Rammer and H. Smith, Rev. Mod. Phys., 323 (1986).
- J. Rammer,