Stochastic approach to entropy production in chemical chaos
Pierre Gaspard, Center for Nonlinear Phenomena and Complex Systems, Université Libre de Bruxelles (U.L.B.), Code Postal 231, Campus Plaine, B-1050 Brussels, Belgium
Methods are presented to evaluate the entropy production rate in stochastic reactive systems. These methods are shown to be consistent with known results from nonequilibrium chemical thermodynamics. Moreover, it is proved that the time average of the entropy production rate can be decomposed into the contributions of the cycles obtained from the stoichiometric matrix in both stochastic processes and deterministic systems. These methods are applied to a complex reaction network constructed on the basis of Rössler's reinjection principle and featuring chemical chaos.
Far from equilibrium, autocatalysis may generate self-sustained oscillations in chemical reaction networks such as the Belousov-Zhabotinsky system. The oscillations may be periodic, quasiperiodic, or chaotic. In the latter case, they are referred to as chemical chaos or chemical turbulence, following the pioneering work by Otto Rössler that led to the discovery of this phenomenon. Such reactive systems are driven away from equilibrium by the free energy of reactants supplied from outside, so that energy is dissipated and entropy produced inside. In chemical reaction networks, some reactions may be driven by the other ones in the direction opposite to spontaneous dissipation, leading to chemical energy conversion as in engines. Moreover, molecular fluctuations are known to manifest themselves in small systems, where the time evolution is ruled by stochastic processes. For such fluctuating chemical reaction networks, the evaluation of thermodynamic properties remains a challenging problem today.
I. INTRODUCTION
The theoretical prediction of chemical chaos, also called chemical turbulence, and its experimental discovery owe much to the pioneering contributions of Otto Rössler. In 1976, he found chaotic behavior by numerical integration in a four-dimensional dynamical system modeling reaction and transport by diffusion between two compartments. He also invented a series of prototype chaotic systems in dimension three, which is the minimal dimension where chaotic behavior may exist for coupled first-order ordinary differential equations, as well as hyperchaos in dimension four.
Inspired by the geometry of flows in three-dimensional phase spaces, Rössler introduced the reinjection principle. According to this principle, chaotic behavior may be induced if the dynamics evolves on a sigmoidal slow manifold, with motion spiraling out of its lower branch, jumping to its upper branch, and being reinjected back onto the lower branch. Moreover, in 1978, Rössler cosigned with Wegmann a paper reporting experimental evidence for chaos in the Belousov-Zhabotinsky reaction. This paper confirmed the prediction of chaos in far-from-equilibrium chemical reactions and triggered a surge in the experimental study of chemical chaos.
On the side of theory, Rössler proposed with Willamowski in 1980 a chemical reaction network obeying the mass action law and generating chaotic behavior in a three-dimensional phase space. Later on, other chemical reaction networks with chemical chaos were proposed.
These models are important because they provide a mechanistic understanding of chemical chaos in terms of colliding and reacting particles. In this regard, such reaction networks play a pivotal role because they can be used to define Markovian stochastic processes for modeling chemical reactions at the mesoscopic level of description, where molecular fluctuations manifest themselves. In particular, stochastic processes based on the Willamowski-Rössler chemical reaction network have been analyzed to investigate the effects of molecular fluctuations on chemical chaos. These investigations have shown that, in the weak-noise limit, the stationary probability distribution of the stochastic process in phase space typically has a coarse structure resembling the invariant probability density of the corresponding chaotic attractor. Furthermore, the entropy production rate has been studied for the deterministic Willamowski-Rössler chemical reaction network.
Entropy production is the keystone of nonequilibriumthermodynamics.
In reaction networks driven far from equilibrium by the supply of free energy from reactant species, entropy is produced due to energy dissipation by the chemical reactions. Since several reactions are often coupled together in the network, such systems can perform energy conversion from one form to another, as in engines. Here, the form of energy is the free energy stored in one or another of the reactant or product species. Such chemical energy conversions can be characterized by their rate and their contribution to entropy production in the framework of nonequilibrium thermodynamics. In small reactive systems where molecular fluctuations manifest themselves, similar issues can be addressed within stochastic thermodynamics, a research field pioneered in the mid-eighties by Nicolis, Van den Broeck, and coworkers.
Recent advances in stochastic thermodynamics have shown how to systematically establish the balances of energy and entropy in many kinds of stochastic physicochemical systems.
The purpose of this paper is to show that the methods of stochastic thermodynamics can be applied to complex chemical reaction networks featuring chemical chaos. The vehicle of the present study is the reversible version of the chemical reaction network proposed in Ref. 24. The inclusion of the reversed reactions is essential for investigating thermodynamic properties, because the entropy production rate would be infinite if the reversed reactions had vanishing rates. The chemical reaction network of Ref. 24 is constructed using Rössler's reinjection principle, and its deterministic dynamics has chaotic attractors in a three-dimensional phase space. Shil'nikov's conditions for homoclinic chaos are satisfied on a codimension-one subset of parameter space. The reaction network is composed of ten forward reactions and as many reversed ones, several of them being trimolecular. The entropy production rate evaluated by the methods of stochastic thermodynamics is shown to agree with the values given by standard thermodynamics. Furthermore, it is proved that, in reactive systems obeying the mass action law, the entropy production rate can be decomposed into different contributions corresponding to the cycles of the stoichiometric matrix. In the studied reaction network, the chemical conversion of one species into another is shown to change its direction, which is characterized in terms of the thermodynamic efficiency of the conversion. These thermodynamic properties are obtained in the deterministic and stochastic approaches, with consistent values between the two.

The paper is organized as follows. Section II summarizes the stochastic approach to reactive systems. The method to obtain the entropy production rate in this approach is presented in Sec. III. The results obtained in the deterministic limit are given in Sec. IV.
In this section, it is shown that the time average of the entropy production rate can be decomposed into the affinities and the rates associated with the cycles of the stoichiometric matrix. Furthermore, another method is inferred to compute the entropy production rate in the stochastic approach, which is equivalent to the one presented in Sec. III. In Sec. V, the results are applied to the chemical reaction network proposed in Ref. 24. Section VI draws the conclusions.
II. STOCHASTIC APPROACH TO REACTIVE SYSTEMS

A. Network of elementary reactions
The following chemical reaction network is considered:

\sum_{i=1}^{e} \tilde{\nu}^{(+)}_{i\rho} A_i + \sum_{j=1}^{s} \nu^{(+)}_{j\rho} X_j  \underset{W_{-\rho}}{\overset{W_{+\rho}}{\rightleftharpoons}}  \sum_{i=1}^{e} \tilde{\nu}^{(-)}_{i\rho} A_i + \sum_{j=1}^{s} \nu^{(-)}_{j\rho} X_j ,   (1)

with \rho = 1, 2, ..., r, where {A_i}_{i=1}^{e} are reactant or product molecular species, {X_j}_{j=1}^{s} are intermediate molecular species, and \tilde{\nu}^{(\pm)}_{i\rho} and \nu^{(\pm)}_{j\rho} are non-negative integers giving the respective numbers of molecules incoming to or outgoing from the reaction \rho. The concentrations of these species inside the volume V of the system are respectively denoted {a_i}_{i=1}^{e} and {x_j}_{j=1}^{s}. The system is supposed to be well mixed, so that these concentrations are uniform on average across the volume V.

The reason for distinguishing between the intermediate and the other species is that it is often assumed in the literature that the species {A_i}_{i=1}^{e} have their concentrations held constant during the reactive process. This is achieved by maintaining these species in excess with respect to the intermediate species: a_i >> x_j. The species in excess are said to be chemostatted, since their chemical potentials remain constant in time. The reaction network is driven out of equilibrium if the concentrations of the chemostatted species are not in their equilibrium Guldberg-Waage ratios. In this regard, these species play the roles of reactant and product for the process. The reactant species should be supplied from outside and the product species evacuated from inside in order to maintain their concentrations constant. In particular, this assumption is made for enzymatic reactions in biochemical kinetics, where the substrate and product species are often supposed to have fixed concentrations.
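As a concrete illustration of the bookkeeping behind the scheme (1), the network data can be encoded by integer coefficient arrays. The following sketch uses a hypothetical two-reaction network (A_1 + X ⇌ 2X and X ⇌ A_2), not the model studied later in this paper; all array names are choices made for this example only.

```python
import numpy as np

# Hypothetical network: reaction 1: A1 + X <=> 2X ; reaction 2: X <=> A2.
# Here s = 1 intermediate species (X), e = 2 chemostatted species (A1, A2), r = 2.
# nu_plus[j, rho] / nu_minus[j, rho]: incoming/outgoing coefficients of X_j in reaction rho.
nu_plus  = np.array([[1, 1]])   # X enters once in each forward reaction
nu_minus = np.array([[2, 0]])   # X leaves as 2X in reaction 1, not at all in reaction 2
# tnu_plus[i, rho] / tnu_minus[i, rho]: the same for the chemostatted species A_i.
tnu_plus  = np.array([[1, 0],
                      [0, 0]])
tnu_minus = np.array([[0, 0],
                      [0, 1]])

# Net stoichiometric coefficients, as defined below in Eq. (2):
nu  = nu_minus - nu_plus      # net change of X per forward reactive event
tnu = tnu_minus - tnu_plus    # net change of A1, A2 per forward reactive event

print(nu)
print(tnu)
```

Running the sketch shows that one forward event of reaction 1 consumes one A_1 and creates one net X, while one forward event of reaction 2 destroys one X and creates one A_2.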
In contrast, the molecular numbers X(t) ≡ {X_j(t)}_{j=1}^{s} ∈ N^s of the intermediate species inside the system evolve in time, because reactive events randomly occur at the transition rates W_{±\rho} of the forward and backward reactions (1), respectively. Hence, the concentrations x_j(t) ≡ X_j(t)/V of the intermediate species also vary in time.

Even though the reactant and product species have their concentrations {a_i}_{i=1}^{e} held fixed inside the system, these species should be supplied from or evacuated to the exterior of the system, so that the numbers A(t) = {A_i(t)}_{i=1}^{e} of the reactant and product species that are consumed or produced change in time. Actually, these numbers drift in time as long as the reaction network is maintained in a nonequilibrium regime.

B. Stochastic reactive process
The time evolution of the molecular numbers A(t) and X(t) can be expressed in terms of the numbers n(t) ≡ {n_\rho(t)}_{\rho=\pm 1}^{\pm r} of forward and backward reactive events that have occurred since the beginning of the process at time t = 0. These counters allow us to determine the fluxes of matter in the reaction network (1), as well as the composition of the system at every instant of time. Introducing the stoichiometric coefficients of the different species according to

\tilde{\nu}_{i\rho} ≡ \tilde{\nu}^{(-)}_{i\rho} - \tilde{\nu}^{(+)}_{i\rho}   and   \nu_{j\rho} ≡ \nu^{(-)}_{j\rho} - \nu^{(+)}_{j\rho} ,   (2)

the molecular numbers can be written at time t as

A_i(t) = A_i(0) + \sum_{\rho=1}^{r} \tilde{\nu}_{i\rho} [n_{+\rho}(t) - n_{-\rho}(t)]   (i = 1, 2, ..., e),   (3)

X_j(t) = X_j(0) + \sum_{\rho=1}^{r} \nu_{j\rho} [n_{+\rho}(t) - n_{-\rho}(t)]   (j = 1, 2, ..., s),   (4)

in terms of the random variables n_{±\rho}(t) counting the reactive events of the forward and backward elementary reactions \rho = 1, 2, ..., r.

Equation (4) shows that we need at least as many observed intermediate species {X_j} as there are reactions in order to determine the signed cumulated numbers N_\rho ≡ n_{+\rho} - n_{-\rho} of elementary reactions \rho = 1, 2, ..., r from the observation of the intermediate species. Actually, this inference is possible if s ≥ r and if the matrix (\nu_{j\rho}) of stoichiometric coefficients for j = 1, 2, ..., s and \rho = 1, 2, ..., r has full rank (i.e., its rank is equal to the number r of elementary reactions), so that the signed cumulated numbers {N_\rho(t)}_{\rho=1}^{r} of reactive events can be determined from the molecular numbers {X_j(t) - X_j(0)}_{j=1}^{s}. However, if s < r, there are too few intermediate species to determine the signed cumulated numbers {N_\rho(t)}_{\rho=1}^{r} of elementary reactions. In such cases, some of the species {A_i} should also be observed to perform this determination.

The transition rates are denoted W_\rho(X'|X) for the transition

X  --\rho-->  X' = X + \nu_\rho ,   (5)

where \rho = ±1, ±2, ..., ±r, X = {X_j}_{j=1}^{s}, and \nu_\rho = {\nu_{j\rho}}_{j=1}^{s} are the stoichiometric coefficients (2). According to the mass action law of chemical kinetics, the transition rates of the forward reactions are given by

W_\rho(X + \nu_\rho | X) = V k_\rho \prod_{i=1}^{e} a_i^{\tilde{\nu}^{(+)}_{i\rho}} \prod_{j=1}^{s} \prod_{n=1}^{\nu^{(+)}_{j\rho}} \frac{X_j - n + 1}{V}   (6)

in terms of the concentrations {a_i}_{i=1}^{e} of the reactant or product species, the integers \tilde{\nu}^{(\pm)}_{i\rho} and \nu^{(\pm)}_{j\rho} defined in Eq. (1), and the molecular numbers {X_j}_{j=1}^{s} of the intermediate species involved in the reaction. Similar expressions hold for the transition rates of the backward reactions.

With these transition rates, we can define a Markovian stochastic process for the time evolution of the joint random variables X(t) and n(t). Gillespie's algorithm provides an exact Monte Carlo method for the numerical simulation of such Markovian stochastic processes.

C. Chemical master equations
If Δn_\rho = -Δn_{-\rho} denotes the jump of the counters n(t) upon the reactive event \rho, the master equation ruling the time evolution of the joint probability distribution P(X, n, t) of the reactive process is given by

\frac{d}{dt} P(X, n, t) = \sum_{\rho=\pm 1}^{\pm r} [ W_\rho(X | X - \nu_\rho) P(X - \nu_\rho, n - Δn_\rho, t) - W_{-\rho}(X - \nu_\rho | X) P(X, n, t) ]   (7)

with the stoichiometric coefficients \nu_\rho = -\nu_{-\rho}.

Summing over the counters n, we obtain the standard chemical master equation

\frac{d}{dt} P(X, t) = \sum_{\rho=\pm 1}^{\pm r} [ W_\rho(X | X - \nu_\rho) P(X - \nu_\rho, t) - W_{-\rho}(X - \nu_\rho | X) P(X, t) ]   (8)

for the marginal probability distribution of the molecular numbers of the intermediate species,

P(X, t) = \sum_{n} P(X, n, t).   (9)

We note that the probability distribution (9) of the intermediate species may reach a stationary distribution P_st(X), which justifies the terminology of nonequilibrium steady state.

In general, the counters n(t) undergo random walks with positive diffusivities, according to the central limit theorem. Moreover, if the process is driven out of equilibrium, these variables randomly drift with nonvanishing rates, even if the molecular numbers X(t) of the intermediate species follow a stationary process. Their random drifts are known to obey a multivariate fluctuation theorem. If the reservoirs of reactant and product species are arbitrarily large, the counters n_{±\rho} can be defined modulo N with N >> 1. In this case, the stationary probability distribution of the stochastic process is given by P_st(X, n) = P_st(X)/N^r.

D. Random paths and their probability distribution
The random paths of the process ruled by the master equation (7) can be denoted

X(t) = (X_0, n_0) → (X_1, n_1) → ··· → (X_m, n_m),   (10)

or, more compactly,

X(t) = X_0 --\rho_1--> X_1 --\rho_2--> ··· --\rho_m--> X_m,   (11)

where {\rho_1, \rho_2, ..., \rho_m} is the sequence of reactive events occurring along the path. If the reactive event \rho occurs, the corresponding counter is incremented by unity according to n_\rho → n_\rho + 1 (for \rho = ±1, ±2, ..., ±r), so that Eqs. (10) and (11) provide equivalent descriptions of the path.

The probability of the random paths (10) or (11) can be obtained by considering their stroboscopic observations at every time step τ. Since the process is Markovian, the probability that the random variables take the values (X_l, n_l) at the times t_l = t_0 + lτ, while the initial values are sampled according to their stationary distribution, is given by

P_st[X(t)] = \prod_{l=1}^{n} P_τ(X_l, n_l | X_{l-1}, n_{l-1}) P_st(X_0, n_0)   (12)

in terms of the conditional probability P_τ to find the values (X_l, n_l) at time t_l = t_{l-1} + τ, given that the values were equal to (X_{l-1}, n_{l-1}) at time t_{l-1}. For small enough sampling time τ, these conditional probabilities can be expressed as

P_τ(X', n' | X, n) = W_\rho(X'|X) τ + O(τ^2),   if n'_\rho = n_\rho + 1 and n'_{\tilde{\rho}} = n_{\tilde{\rho}} for all \tilde{\rho} ≠ \rho ;
P_τ(X', n' | X, n) = 1 - γ(X) τ + O(τ^2),   if n'_\rho = n_\rho for all \rho ;   (13)

with the following rate of escape from the state X:

γ(X) ≡ \sum_{\rho=\pm 1}^{\pm r} W_\rho(X + \nu_\rho | X).   (14)

If the transitions X_k = X_{k-1} + \nu_{\rho_k} occur at the successive times t_0 < t_1 < ··· < t_k < ··· < t_m, the path probability (12) is thus obtained in the limit τ → 0 as

P_st[X(t)] = e^{-γ(X_m)(t - t_m)} \prod_{k=1}^{m} [ W_{\rho_k}(X_k | X_{k-1}) τ e^{-γ(X_{k-1})(t_k - t_{k-1})} ] P_st(X_0, n_0).   (15)

We note that the time reversal of the path (11) can be written as

X^R(t) = X_m --(-\rho_m)--> ··· --(-\rho_2)--> X_1 --(-\rho_1)--> X_0   (16)

in terms of the reversed sequence, defined by also reversing the elementary reactions. The probability of this reversed path is given by an expression similar to Eq. (15).

III. ENTROPY PRODUCTION

A. Time evolution of entropy
The expression of the entropy production rate can be obtained by considering the time evolution of the entropy according to the stochastic process ruled by the chemical master equation (8). In this framework, the entropy can be defined as

S(t) = \sum_{X} [ S(X) - k_B ln P(X, t) ] P(X, t),   (17)

where S(X) is the entropy of the system containing the molecular numbers X of the intermediate species, and -k_B ln P(X, t) is the contribution to the entropy due to the probability distribution over the different possibilities for the molecular numbers X. The time derivative of the total entropy (17) according to the reduced master equation (8) can be decomposed as

\frac{dS}{dt} = \frac{d_e S}{dt} + \frac{d_i S}{dt}   (18)

into the flow of entropy exchange d_e S/dt between the system and its environment and the entropy production rate given by

\frac{d_i S}{dt} = k_B \sum_{X} \sum_{\rho=1}^{r} [ W_\rho(X | X - \nu_\rho) P(X - \nu_\rho, t) - W_{-\rho}(X - \nu_\rho | X) P(X, t) ] ln \frac{W_\rho(X | X - \nu_\rho) P(X - \nu_\rho, t)}{W_{-\rho}(X - \nu_\rho | X) P(X, t)} ≥ 0.   (19)

The latter is always non-negative, in agreement with the second law of thermodynamics. This entropy production rate evolves in time and is expected to reach its asymptotic value when the probability distribution converges towards the stationary solution P_st(X) of the reduced master equation (8). The entropy production rate vanishes if the conditions of detailed balance are satisfied,

W_\rho(X | X - \nu_\rho) P_eq(X - \nu_\rho) = W_{-\rho}(X - \nu_\rho | X) P_eq(X),   (20)

in which case the system is at thermodynamic equilibrium, P_st = P_eq. Otherwise, the system is running in a nonequilibrium steady state.

B. Random paths and entropy production
Recently, a method using the probabilities (15) of the random paths (10) or (11) has been proposed to evaluate the entropy production rate in stochastic reactive systems. This method is in direct relation with Gillespie's algorithm simulating the coupled reactions of the network (1). The method consists in comparing the probability of the random path (11) with the probability of its time reversal (16). Because of Eq. (15), the logarithm of the ratio of these probabilities is given by

Σ(t) ≡ ln \frac{P_st[X(t)]}{P_st[X^R(t)]} = Z(t) + ln \frac{P_st(X_0)}{P_st(X_m)}   (21)

in terms of the quantity

Z(t) ≡ ln \prod_{k=1}^{m} \frac{W_{\rho_k}(X_k | X_{k-1})}{W_{-\rho_k}(X_{k-1} | X_k)},   (22)

involving the transition rates of the forward and backward elementary reactions that have occurred during the path (11). Indeed, the probabilities (15) for the path and its time reversal contain the same exponential factors, which thus cancel each other, so that there remain the ratios of the rates of opposite transitions in Eq. (22) and the ratio of the stationary probability distributions at both ends of the random path, giving the last term in Eq. (21). Under nonequilibrium conditions, the quantity (22) is expected to grow linearly in time, while the last term of Eq. (21) becomes negligible for long enough times. Therefore, the mean growth rate of the quantity (21) is determined by the quantity (22):

R ≡ lim_{t→∞} \frac{1}{t} ⟨Σ(t)⟩ = lim_{t→∞} \frac{1}{t} ⟨Z(t)⟩,   (23)

where ⟨·⟩ denotes the statistical average over the path probability distribution (15).

Remarkably, the rate (23) gives the entropy production rate (19) with the stationary distribution P_st(X):

R = \frac{1}{k_B} \frac{d_i S}{dt} \Big|_{st} ≥ 0,   (24)

as proved in Ref. 43. Accordingly, the entropy production can be evaluated using the quantity (22) along the random paths (11) giving the sequence of elementary reactive events, which are simulated with Gillespie's algorithm. This powerful method has been used in stochastic linear reaction networks, Schlögl's model for bistability, the Brusselator model of chemical clocks, as well as other reaction networks.
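To make the procedure concrete, here is a minimal sketch (not code from the paper) that simulates Schlögl's model with Gillespie's algorithm and estimates the entropy production rate from the growth rate of the quantity (22). The rate constants and chemostatted concentrations below are illustrative choices, selected only so that the system is driven out of equilibrium.

```python
import math, random

random.seed(1)

# Schlögl's model: A + 2X <=> 3X (rates kp1, km1) and B <=> X (rates kp2, km2),
# with chemostatted concentrations a, b and extensivity parameter V.
V, a, b = 100.0, 1.0, 2.0
kp1, km1, kp2, km2 = 3.0, 0.6, 0.25, 2.95

def rates(X):
    # Transition rates (6) under the mass action law; indices 0, 1 are the
    # forward and backward events of reaction 1, indices 2, 3 those of reaction 2.
    return [kp1 * a * X * (X - 1) / V,           # A + 2X -> 3X
            km1 * X * (X - 1) * (X - 2) / V**2,  # 3X -> A + 2X
            kp2 * b * V,                         # B -> X
            km2 * X]                             # X -> B

jump = [+1, -1, +1, -1]   # net change of X for each event
reverse = [1, 0, 3, 2]    # index of the reversed event

X, t, Z = int(V), 0.0, 0.0
for _ in range(200000):
    w = rates(X)
    wtot = sum(w)
    t += random.expovariate(wtot)        # exponentially distributed waiting time
    u, rho = random.random() * wtot, 0   # choose the event proportionally to its rate
    while rho < 3 and u >= w[rho]:
        u -= w[rho]
        rho += 1
    Xnew = X + jump[rho]
    # increment of Eq. (22): ln of the forward over the reversed transition rate
    Z += math.log(w[rho] / rates(Xnew)[reverse[rho]])
    X = Xnew

print("entropy production rate estimate:", Z / t)
```

Since the chosen chemostat concentrations violate the equilibrium condition k_{+1} a / k_{-1} = k_{+2} b / k_{-2}, the estimate Z(t)/t converges to a strictly positive value, as expected from Eqs. (23) and (24).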
The method has also been tested for a stochastic linear reaction network, using diffusion approximations to the chemical master equation. Below, this method will be applied to evaluate the entropy production in the challenging case of chemical chaos. Moreover, the quantity (22) has been shown to obey a fluctuation theorem, from which Eq. (24) can be deduced. These results have been extended, in particular, to the quantity (21), leading to the recent development of stochastic thermodynamics.
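As a simple numerical check of the relation between Eqs. (23) and (24), one can take a three-state unicyclic Markov jump process, for which the stationary entropy production rate is known in closed form. This toy process and its rates are illustrative and are not part of the chemical network studied in this paper.

```python
import math, random

random.seed(7)

# Unicyclic process 0 -> 1 -> 2 -> 0 with uniform forward rate kf and backward
# rate kb. By symmetry the stationary distribution is uniform, and the exact
# stationary entropy production rate is (kf - kb) * ln(kf / kb) per unit time.
kf, kb = 2.0, 1.0
exact = (kf - kb) * math.log(kf / kb)

state, t, Z = 0, 0.0, 0.0
for _ in range(200000):
    t += random.expovariate(kf + kb)     # waiting time before the next jump
    if random.random() < kf / (kf + kb):
        state = (state + 1) % 3
        Z += math.log(kf / kb)           # increment of the quantity (22)
    else:
        state = (state - 1) % 3
        Z -= math.log(kf / kb)

print(Z / t, exact)   # the path estimate approaches the exact rate
```

With this long trajectory, the estimate Z(t)/t agrees with the exact value ln 2 to within a few percent, illustrating Eq. (24) for a case where no boundary term contributes (the stationary distribution is uniform).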
IV. DETERMINISTIC LIMIT
At macroscales, the molecular fluctuations become negligible compared with the macroscopic mean numbers of molecules of the different species. In this limit, the macroscopic kinetic equations are recovered, as well as the expression of the entropy production rate known in nonequilibrium chemical thermodynamics. Moreover, the Hill-Schnakenberg method of cycle decomposition can be used for the time average of the entropy production rate.
A. Kinetic equations
The macroscopic concentrations of the intermediate species can be defined as

x_j ≡ \frac{1}{V} ⟨X_j⟩   (25)

with the mean numbers

⟨X_j⟩ ≡ \sum_{X} X_j P(X, t).   (26)

The reaction rates can be similarly defined as

w_{±\rho} ≡ \frac{1}{V} \sum_{X} W_{±\rho}(X ± \nu_\rho | X) P(X, t)   (27)

for \rho = 1, 2, ..., r. Equivalently, the reaction rates can be expressed as

w_{±\rho} = \frac{1}{V} \frac{d}{dt} ⟨n_{±\rho}(t)⟩   (28)

in terms of the counters of the elementary chemical reactions, using the statistical average with respect to the joint probability distribution P(X, n, t).

If the volume V is large enough, the marginal distribution P(X, t) may be supposed to be peaked around the most probable values of the molecular numbers. In this limit, the reaction rates can be expressed as follows in terms of the most probable values of the concentrations x = {x_j}_{j=1}^{s},

w_{±\rho}(x) = k_{±\rho} \prod_{i=1}^{e} a_i^{\tilde{\nu}^{(±)}_{i\rho}} \prod_{j=1}^{s} x_j^{\nu^{(±)}_{j\rho}}   (29)

for \rho = 1, 2, ..., r, which are the known expressions for the macroscopic reaction rates according to the mass action law.

As a consequence of Eq. (4), the macroscopic kinetic equation for the concentration x_j is given by

\frac{dx_j}{dt} = \sum_{\rho=1}^{r} \nu_{j\rho} (w_{+\rho} - w_{-\rho})   (30)

in terms of the stoichiometric coefficients (2). Similarly, the mean rate of exchange with the exterior of the system needed to maintain fixed the concentration a_i of species A_i inside the volume V can be deduced from Eq. (3), giving

\frac{d}{dt} ⟨A_i⟩ = V \sum_{\rho=1}^{r} \tilde{\nu}_{i\rho} (w_{+\rho} - w_{-\rho}).   (31)

B. Macroscopic entropy production
The probability distribution P(X, t) being peaked around the most probable values X = V x ruled by the kinetic equations (30), the entropy production rate (19) becomes

\frac{d_i S}{dt} = k_B V \sum_{\rho=1}^{r} (w_{+\rho} - w_{-\rho}) ln \frac{w_{+\rho}}{w_{-\rho}} ≥ 0   for V → ∞.   (32)

This is the expression expected from nonequilibrium chemical thermodynamics. The sum is taken over all the elementary chemical reactions in the network. The entropy production rate (32) is in general non-negative, vanishing at the equilibrium concentrations x_eq such that the macroscopic conditions of detailed balance are satisfied: w_{+\rho}(x_eq) = w_{-\rho}(x_eq).

C. Time average and stoichiometric matrix
Defining the net reaction rates as

w_\rho ≡ w_{+\rho} - w_{-\rho}   (33)

for \rho = 1, 2, ..., r, the coupled kinetic equations (30) can be written in the form

\frac{dx}{dt} = \nu · w   (34)

using the vector w = {w_\rho}_{\rho=1}^{r} of net reaction rates and the matrix of stoichiometric coefficients \nu ≡ {\nu_\rho}_{\rho=1}^{r} = (\nu_{j\rho})_{j=1,...,s; \rho=1,...,r}.

This set of first-order differential equations defines the dynamical system ruling the time evolution of the molecular concentrations in the phase space x ∈ R^s. The solutions of these differential equations from given initial conditions define the flow of the dynamics in phase space: x_t = Φ^t x_0. In chemical kinetics, this dynamics is typically dissipative in the sense that the phase-space volumes are contracted on average, so that the trajectories converge in time towards an attractor of zero Lebesgue measure in phase space. This attractor may be a fixed point corresponding to a macroscopic stationary state, a limit cycle describing periodic oscillations, a torus if the oscillations are quasiperiodic, or a fractal attractor if the dynamics is chaotic, manifesting sensitivity to initial conditions characterized by positive Lyapunov exponents.

Even if the dynamics is time dependent, we can always consider the stationary probability density p_st(x) associated with the attractor, using the time average

\overline{(·)} ≡ lim_{T→∞} \frac{1}{T} \int_0^T (·) dt.   (35)

Indeed, according to the Birkhoff-Khinchin ergodic theorem, this time average defines a stationary probability distribution, which almost always exists and has the trajectory issued from its initial condition as support. Moreover, if the dynamics is unstable enough on the attractor, the property of ergodicity may hold and the stationary probability density can be constructed as a so-called SRB measure. However, in the following, we shall only use the time average (35).

A basic property of the time average (35) is that

\overline{df/dt} = 0   (36)

for the time derivative of any function f = f(x).
Therefore, we have that

\overline{dx/dt} = \nu · \overline{w} = 0.   (37)

Consequently, the time average of the reaction rates belongs to the right null space of the stoichiometric matrix. This null space can be decomposed into the right null eigenvectors e_c = {e_{\rho c}}_{\rho=1}^{r} such that

\nu · e_c = 0   for c = 1, 2, ..., o.   (38)

Each right null eigenvector defines a cycle c, which is a sequence of elementary reactions (weighted by the components e_{\rho c} of the eigenvector) such that the molecular numbers X of the intermediate species come back to their initial values once the cycle is closed. We note that different sets of right null eigenvectors can be obtained using linear combinations.

Besides, the left null space of the stoichiometric matrix contains all the constants of motion L = ℓ^T · x such that dL/dt = 0. Accordingly, this space is spanned by the left null eigenvectors, ℓ^T · \nu = 0. If l is the number of constants of motion, the rank of the stoichiometric matrix is given by

rank(\nu) = s - l = r - o   (39)

with the number s of intermediate species, the number r of reactions, and the number o of cycles.

D. Cycle decomposition
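The null-space structure behind Eqs. (38) and (39) is easy to check numerically. The following sketch is an illustration, not the network studied in this paper: it uses the triangular network X_1 ⇌ X_2, X_2 ⇌ X_3, X_3 ⇌ X_1, which has s = 3 species, r = 3 reactions, l = 1 constant of motion (the total concentration), and o = 1 cycle.

```python
import numpy as np

# Stoichiometric matrix nu[j, rho] of the triangular network:
# reaction 1: X1 -> X2, reaction 2: X2 -> X3, reaction 3: X3 -> X1.
nu = np.array([[-1,  0,  1],
               [ 1, -1,  0],
               [ 0,  1, -1]])
s, r = nu.shape

rank = np.linalg.matrix_rank(nu)
l = s - rank   # number of constants of motion (left null vectors)
o = r - rank   # number of cycles (right null vectors), consistent with Eq. (39)
print(rank, l, o)   # 2 1 1

# A right null vector from the SVD spans the single cycle: it is proportional
# to (1, 1, 1), i.e., the three reactions traversed once each around the ring.
e = np.linalg.svd(nu)[2][-1]
print(np.allclose(nu @ e, 0))   # True
```

For this network, the left null vector is proportional to (1, 1, 1) as well, expressing the conservation of x_1 + x_2 + x_3, while the right null vector defines the cycle along which a net current can flow in a nonequilibrium steady state.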
Because of Eq. (37), the time average of the reaction rates can be decomposed as

\overline{w} = \sum_{c=1}^{o} e_c J_c   (40)

onto the basis of the o right null eigenvectors (38). The coefficients J_c of this decomposition define the rates associated with the o cycles of the reaction network. The components of Eq. (40) are given by

\overline{w_\rho} = \sum_{c=1}^{o} e_{\rho c} J_c.   (41)

The affinity associated with the c-th cycle is defined as

A_c ≡ ln \prod_{\rho=1}^{r} ( \frac{w_{+\rho}}{w_{-\rho}} )^{e_{\rho c}}.   (42)

An important property is that these affinities do not depend on the concentrations x of the intermediate species if the reaction rates obey the mass action law (29). Indeed, the ratios of the opposite reaction rates can be written as follows in terms of the stoichiometric coefficients (2):

\frac{w_{+\rho}}{w_{-\rho}} = \frac{k_{+\rho}}{k_{-\rho}} \prod_{i=1}^{e} a_i^{-\tilde{\nu}_{i\rho}} \prod_{j=1}^{s} x_j^{-\nu_{j\rho}}.   (43)

Accordingly, the affinity (42) becomes

A_c = \sum_{\rho=1}^{r} e_{\rho c} ln \frac{w_{+\rho}}{w_{-\rho}} = \sum_{\rho=1}^{r} e_{\rho c} ln ( \frac{k_{+\rho}}{k_{-\rho}} \prod_{i=1}^{e} a_i^{-\tilde{\nu}_{i\rho}} ) - \sum_{j=1}^{s} \sum_{\rho=1}^{r} \nu_{j\rho} e_{\rho c} ln x_j.   (44)

Since \sum_{\rho=1}^{r} \nu_{j\rho} e_{\rho c} = 0 because of Eq. (38), we find

A_c = \sum_{\rho=1}^{r} e_{\rho c} ln ( \frac{k_{+\rho}}{k_{-\rho}} \prod_{i=1}^{e} a_i^{-\tilde{\nu}_{i\rho}} ),   (45)

thus establishing the property.

The remarkable result is that the time average of the macroscopic entropy production rate (32) can also be decomposed into the o cycles if the reaction rates obey the mass action law (29). For notational simplicity, we consider the dimensionless entropy production rate per unit volume given by

σ ≡ \sum_{\rho=1}^{r} w_\rho ln \frac{w_{+\rho}}{w_{-\rho}} ≥ 0.   (46)

According to the mass action law, we have that

σ = \sum_{\rho=1}^{r} w_\rho ln ( \frac{k_{+\rho}}{k_{-\rho}} \prod_{i=1}^{e} a_i^{-\tilde{\nu}_{i\rho}} ) - \sum_{j=1}^{s} \sum_{\rho=1}^{r} \nu_{j\rho} w_\rho ln x_j.   (47)

Here, we note that the time derivative of the function

f ≡ \sum_{j=1}^{s} x_j ln x_j   (48)

is given by

\frac{df}{dt} = \sum_{j=1}^{s} \frac{dx_j}{dt} (ln x_j + 1) = \sum_{j=1}^{s} \sum_{\rho=1}^{r} \nu_{j\rho} w_\rho (ln x_j + 1),   (49)

where we recognize the last term of Eq. (47). According to Eqs. (36) and (37), we conclude that the time average of this last term vanishes, so that the time average of Eq. (47) is given by

\overline{σ} = \sum_{\rho=1}^{r} \overline{w_\rho} ln ( \frac{k_{+\rho}}{k_{-\rho}} \prod_{i=1}^{e} a_i^{-\tilde{\nu}_{i\rho}} ).   (50)

Now, using the decomposition (41), we obtain

\overline{σ} = \sum_{c=1}^{o} J_c \sum_{\rho=1}^{r} e_{\rho c} ln ( \frac{k_{+\rho}}{k_{-\rho}} \prod_{i=1}^{e} a_i^{-\tilde{\nu}_{i\rho}} ).   (51)

Finally, using the expression (45) for the affinities, we find

\overline{σ} = \sum_{c=1}^{o} A_c J_c.   (52)

Hence, for reaction kinetics obeying the mass action law, the time average of the entropy production rate can always be decomposed as the sum of the products of the affinities and rates associated with the o cycles defined in terms of the stoichiometric matrix. It should be pointed out that linear combinations may transform the sets of right null eigenvectors and, thus, the corresponding affinities and rates, but the entropy production rate (52) remains invariant under such transformations. We note that all the affinities (42) vanish if the reaction network is at thermodynamic equilibrium.

E. Entropy production by counting reactive events
The result (52) is remarkable beyond the framework of the deterministic dynamics. Indeed, in the stochastic approach, we may define the random variable

\tilde{Σ}(t) ≡ \sum_{c=1}^{o} A_c Υ_c(t)   (53)

in terms of the affinities (45) associated with the cycles and the corresponding signed cumulated numbers Υ_c(t). The latter are related to the signed cumulated numbers N_\rho ≡ n_{+\rho} - n_{-\rho} of elementary reactions [which have been introduced in Eq. (4)] according to

N_\rho(t) = \sum_{c=1}^{o} e_{\rho c} Υ_c(t)   (54)

in analogy with the decomposition (41). Since the statistical averages of the signed cumulated numbers Υ_c(t) give the corresponding rates by

\tilde{J}_c = \frac{1}{V} lim_{t→∞} \frac{1}{t} ⟨Υ_c(t)⟩,   (55)

the entropy production rate can be evaluated as

\tilde{σ} = \frac{1}{V} lim_{t→∞} \frac{1}{t} ⟨\tilde{Σ}(t)⟩ = \sum_{c=1}^{o} A_c \tilde{J}_c,   (56)

using the statistical average of the random variable (53) defined for the stochastic process of Subsec. II D. This further result provides a complementary method to evaluate the entropy production rate directly by counting the numbers of reactive events occurring along the paths (10) simulated by Gillespie's algorithm, besides the method using the mean growth rate of the quantity (22). We note that the random variable (53) obeys a fluctuation theorem, as a consequence of the multivariate fluctuation theorem for currents proved in Ref. 45.

V. MODEL OF CHEMICAL CHAOS

A. The reaction network
We consider the reversible version of the model of chemical chaos proposed in Ref. 24. This model has s = 3 intermediate species, X = X_1, Y = X_2, and Z = X_3, which is the minimum number required to have chaotic behavior in the deterministic dynamics. The reaction network of the model is defined according to

A_1 + X ⇌ 2 X   (k_{+1}, k_{-1}),
A_2 + 2 X ⇌ 3 X   (k_{+2}, k_{-2}),
X + Y ⇌ A_3 + Y   (k_{+3}, k_{-3}),
X + Z ⇌ A_4 + Z   (k_{+4}, k_{-4}),
A_5 + X + Y ⇌ X + 2 Y   (k_{+5}, k_{-5}),
A_6 + Y + Z ⇌ 2 Y + Z   (k_{+6}, k_{-6}),
Y ⇌ A_7   (k_{+7}, k_{-7}),
A_8 + X ⇌ X + Z   (k_{+8}, k_{-8}),
A_9 + 2 Z ⇌ 3 Z   (k_{+9}, k_{-9}),
Z ⇌ A_10   (k_{+10}, k_{-10}).   (57)

This model forms a network of r = 10 elementary forward reactions and as many backward reactions. Several elementary reactions are trimolecular. The kinetics is supposed to obey the mass action law with the reaction constants k_{±\rho} for \rho = 1, 2, ..., r. Every one of the ten reactions involves a reactant or product species A_\rho (\rho = 1, 2, ..., r), which is chemostatted.

B. Macroscopic kinetics and chaos
Their macroscopic reaction rates are given by
$$\begin{aligned}
&w_{+1} = k_{+1} a_1 x , && w_{-1} = k_{-1} x^2 , \qquad (58)\\
&w_{+2} = k_{+2} a_2 x^2 , && w_{-2} = k_{-2} x^3 , \qquad (59)\\
&w_{+3} = k_{+3} x y , && w_{-3} = k_{-3} a_3 y , \qquad (60)\\
&w_{+4} = k_{+4} x z , && w_{-4} = k_{-4} a_4 z , \qquad (61)\\
&w_{+5} = k_{+5} a_5 x y , && w_{-5} = k_{-5} x y^2 , \qquad (62)\\
&w_{+6} = k_{+6} a_6 y z , && w_{-6} = k_{-6} y^2 z , \qquad (63)\\
&w_{+7} = k_{+7} y , && w_{-7} = k_{-7} a_7 , \qquad (64)\\
&w_{+8} = k_{+8} a_8 x , && w_{-8} = k_{-8} x z , \qquad (65)\\
&w_{+9} = k_{+9} a_9 z^2 , && w_{-9} = k_{-9} z^3 , \qquad (66)\\
&w_{+10} = k_{+10} z , && w_{-10} = k_{-10} a_{10} , \qquad (67)
\end{aligned}$$
in terms of the concentrations $(x, y, z)$ of the intermediate species (X, Y, Z). According to Eq. (30) with the notation (33), the kinetic equations for the concentrations of the intermediate species take the following form,
$$\dot x = w_1 + w_2 - w_3 - w_4 , \qquad (68)$$
$$\dot y = w_5 + w_6 - w_7 , \qquad (69)$$
$$\dot z = w_8 + w_9 - w_{10} . \qquad (70)$$
These differential equations can be numerically integrated using a Runge-Kutta algorithm of orders 4 and 5 with a variable time step. The following parameter values are chosen: $k_{+1} = 0.6$, $0.\ldots < k_{+2} < 0.\ldots$, $k_{+3} = 0.\ldots$, $k_{+4} = k_{+5} = 1$, $k_{+6} = 0.\ldots$, $k_{+7} = 1.\ldots$, $k_{+8} = 100$, $k_{+9} = 300$, $k_{+10} = 500$, $k_{-} = 50$, $k_{-\rho} = 0.$
01 for $\rho = 1\text{--}8, \ldots$, and $a_i = 1$ for $i = 1\text{--}10$. (71)

FIG. 1. Bifurcation diagram of the dynamical system of Eqs. (68)-(70) with the reaction rates (58)-(67) and the parameter values (71). The axis $k_{+2}$ is scanned with the step $\Delta k_{+2} = 10^{-\ldots}$. The dots depict every extremum where $\dot z(t) = 0$ in the time interval $500 < t < \ldots$.
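The deterministic kinetics can be explored numerically along these lines. The sketch below integrates Eqs. (68)-(70) with the mass-action rates (58)-(67) using a fixed-step fourth-order Runge-Kutta scheme (the paper uses an adaptive RK4(5) method); several decimal values in (71) are not fully legible here, so the constants marked as "assumed" are illustrative placeholders rather than the paper's values.

```python
# Sketch: fixed-step RK4 integration of the kinetic equations (68)-(70)
# with the mass-action rates (58)-(67) of the reaction network (57).
# Constants marked "assumed" are illustrative placeholders.

a = [1.0] * 10                      # chemostatted concentrations a_1..a_10 = 1

kp = [0.6,                          # k+1 (inferred from A_1 = ln(0.6/k+2) with equal k-)
      0.55,                         # k+2 (value used in the chaotic regime of Figs. 2-3)
      0.5,                          # k+3 (assumed)
      1.0, 1.0,                     # k+4 = k+5 = 1
      0.5,                          # k+6 (assumed)
      1.0,                          # k+7 (assumed)
      100.0, 300.0, 500.0]          # k+8, k+9, k+10
km = [0.01] * 10                    # backward constants; one of them equals 50 in
                                    # Eq. (71), but its index is assumed away here

def net_rates(x, y, z):
    """Net mass-action rates w_rho = w+rho - w-rho, Eqs. (58)-(67)."""
    return [kp[0]*a[0]*x    - km[0]*x**2,
            kp[1]*a[1]*x**2 - km[1]*x**3,
            kp[2]*x*y       - km[2]*a[2]*y,
            kp[3]*x*z       - km[3]*a[3]*z,
            kp[4]*a[4]*x*y  - km[4]*x*y**2,
            kp[5]*a[5]*y*z  - km[5]*y**2*z,
            kp[6]*y         - km[6]*a[6],
            kp[7]*a[7]*x    - km[7]*x*z,
            kp[8]*a[8]*z**2 - km[8]*z**3,
            kp[9]*z         - km[9]*a[9]]

def flow(c):
    """Right-hand side of the kinetic equations (68)-(70)."""
    x, y, z = c
    w = net_rates(x, y, z)
    return [w[0] + w[1] - w[2] - w[3],   # dx/dt, Eq. (68)
            w[4] + w[5] - w[6],          # dy/dt, Eq. (69)
            w[7] + w[8] - w[9]]          # dz/dt, Eq. (70)

def rk4_step(c, dt):
    k1 = flow(c)
    k2 = flow([c[i] + 0.5*dt*k1[i] for i in range(3)])
    k3 = flow([c[i] + 0.5*dt*k2[i] for i in range(3)])
    k4 = flow([c[i] + dt*k3[i] for i in range(3)])
    return [c[i] + dt*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i])/6 for i in range(3)]

c = [1.0, 1.0, 1.0]                 # initial concentrations (x, y, z)
dt, nsteps = 1e-5, 5000             # small step: the z dynamics is fast (k+10 = 500)
for _ in range(nsteps):
    c = rk4_step(c, dt)
```

A bifurcation diagram like Fig. 1 would then be obtained by repeating such an integration while scanning `kp[1]` and recording the extrema of $z(t)$ after transients.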
FIG. 2. (a) The concentrations $(x, y, z)$ versus time $t$ for the dynamical system of Eqs. (68)-(70) with the reaction rates (58)-(67) and the parameter values (71) with $k_{+2} = 0.55$. (b) The concentrations $(x, y, z)$ versus time $t$ for the corresponding stochastic process with the extensivity parameter $V = 250$.

FIG. 3. (a) The chaotic attractor in the plane $(y, z)$ formed by the trajectory of Fig. 2(a) for $100 < t < \ldots$. (b) The noisy attractor formed by the corresponding stochastic trajectory of Fig. 2(b) with $V = 250$.

The corresponding bifurcation diagram is shown in Fig. 1. We observe the following regimes: stationary at the smallest values of $k_{+2}$, periodic at intermediate values, and chaotic at the largest values, with windows of mixed-mode oscillations in the chaotic regime.

For the value $k_{+2} = 0.55$, a trajectory and the attractor are plotted in Fig. 2(a) and Fig. 3(a), respectively. We see that the dynamics is chaotic for this parameter value. The attractor in Fig. 3(a) shows that chaos arises due to Rössler's mechanism of reinjection in phase space. For the parameter values (71), the equation $\dot z = 0$ defines a sigmoidal slow manifold in phase space, allowing the reinjection in the vicinity of the saddle focus on the lower branch of the slow manifold. The trajectories spiral out of this saddle focus. Once the amplitude of the oscillations is large enough, the trajectories jump to the upper branch of the slow manifold, where they move towards its edge, before being reinjected back onto the lower branch. This reinjection allows the existence of a homoclinic orbit of Shil'nikov type, which is known to generate unstable periodic orbits of arbitrarily long period, as well as Smale's horseshoes of chaotic subsets.

C. Cycle decomposition
As explained in Subsecs. IV C and IV D, the time averages of the reaction rates and of the entropy production rate can be decomposed into the cycles given by the right null eigenvectors of the stoichiometric matrix. For the reaction network (57), this latter is given by
$$\boldsymbol{\nu} = \left(\begin{array}{rrrrrrrrrr} 1 & 1 & -1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 1 & -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & -1 \end{array}\right) \qquad (72)$$
where the rows correspond to the intermediate species $(x, y, z)$ and the columns to the reactions $\rho = 1, 2, \ldots, 10$. Looking for its left and right null eigenvectors, we find that the reaction network has $l = 0$ constants of motion and $o = 7$ cycles, so that the rank of the stoichiometric matrix is equal to
$$\mathrm{rank}(\boldsymbol{\nu}) = s - l = r - o = 3 . \qquad (73)$$
The right null eigenvectors (38) associated with the $o = 7$ cycles are listed here below together with their affinities (42), their rates given by Eq. (40), and their corresponding overall reactions:
$$\text{cycle 1:} \quad \boldsymbol{e}_1^{\rm T} = (1, -1, 0, 0, 0, 0, 0, 0, 0, 0) , \quad A_1 = \ln\frac{w_{+1} w_{-2}}{w_{-1} w_{+2}} = \ln\frac{k_{+1} a_1 k_{-2}}{k_{-1} k_{+2} a_2} , \quad J_1 = -\overline{w}_2 , \quad \text{A}_1 \to \text{A}_2 ; \qquad (74)$$
$$\text{cycle 2:} \quad \boldsymbol{e}_2^{\rm T} = (1, 0, 1, 0, 0, 0, 0, 0, 0, 0) , \quad A_2 = \ln\frac{w_{+1} w_{+3}}{w_{-1} w_{-3}} = \ln\frac{k_{+1} a_1 k_{+3}}{k_{-1} k_{-3} a_3} , \quad J_2 = \overline{w}_3 , \quad \text{A}_1 \to \text{A}_3 ; \qquad (75)$$
$$\text{cycle 3:} \quad \boldsymbol{e}_3^{\rm T} = (1, 0, 0, 1, 0, 0, 0, 0, 0, 0) , \quad A_3 = \ln\frac{w_{+1} w_{+4}}{w_{-1} w_{-4}} = \ln\frac{k_{+1} a_1 k_{+4}}{k_{-1} k_{-4} a_4} , \quad J_3 = \overline{w}_4 , \quad \text{A}_1 \to \text{A}_4 ; \qquad (76)$$
$$\text{cycle 4:} \quad \boldsymbol{e}_4^{\rm T} = (0, 0, 0, 0, 1, -1, 0, 0, 0, 0) , \quad A_4 = \ln\frac{w_{+5} w_{-6}}{w_{-5} w_{+6}} = \ln\frac{k_{+5} a_5 k_{-6}}{k_{-5} k_{+6} a_6} , \quad J_4 = -\overline{w}_6 , \quad \text{A}_5 \to \text{A}_6 ; \qquad (77)$$
$$\text{cycle 5:} \quad \boldsymbol{e}_5^{\rm T} = (0, 0, 0, 0, 1, 0, 1, 0, 0, 0) , \quad A_5 = \ln\frac{w_{+5} w_{+7}}{w_{-5} w_{-7}} = \ln\frac{k_{+5} a_5 k_{+7}}{k_{-5} k_{-7} a_7} , \quad J_5 = \overline{w}_7 , \quad \text{A}_5 \to \text{A}_7 ; \qquad (78)$$
$$\text{cycle 6:} \quad \boldsymbol{e}_6^{\rm T} = (0, 0, 0, 0, 0, 0, 0, 1, -1, 0) , \quad A_6 = \ln\frac{w_{+8} w_{-9}}{w_{-8} w_{+9}} = \ln\frac{k_{+8} a_8 k_{-9}}{k_{-8} k_{+9} a_9} , \quad J_6 = -\overline{w}_9 , \quad \text{A}_8 \to \text{A}_9 ; \qquad (79)$$
$$\text{cycle 7:} \quad \boldsymbol{e}_7^{\rm T} = (0, 0, 0, 0, 0, 0, 0, 1, 0, 1) , \quad A_7 = \ln\frac{w_{+8} w_{+10}}{w_{-8} w_{-10}} = \ln\frac{k_{+8} a_8 k_{+10}}{k_{-8} k_{-10} a_{10}} , \quad J_7 = \overline{w}_{10} , \quad \text{A}_8 \to \text{A}_{10} . \qquad (80)$$
The explicit calculation of the affinities (42) confirms that they do not depend on the concentrations $(x, y, z)$ of the intermediate species, as expected according to Eq. (45). Moreover, Eq. (40) allows us to relate the time averages of the reaction rates to the rates $J_c$ associated with the cycles. In addition to the reaction rates appearing in Eqs.
(74)-(80), we also have the following relations for the other rates: $\overline{w}_1 = J_1 + J_2 + J_3$, $\overline{w}_5 = J_4 + J_5$, and $\overline{w}_8 = J_6 + J_7$, whereupon the time averages of the kinetic equations (68)-(70) are equal to zero, as required.

As a consequence of Eq. (52), the time average of the entropy production rate is obtained as follows,
$$\overline{\sigma} = -A_1 \overline{w}_2 + A_2 \overline{w}_3 + A_3 \overline{w}_4 - A_4 \overline{w}_6 + A_5 \overline{w}_7 - A_6 \overline{w}_9 + A_7 \overline{w}_{10} . \qquad (81)$$

D. Simulating the stochastic process
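Before turning to the simulation, the cycle structure of the preceding subsection can be checked numerically. A minimal pure-Python sketch verifies that each vector $\boldsymbol{e}_c$ of Eqs. (74)-(80) lies in the right null space of the stoichiometric matrix (72) and that the counting $l = 0$, $o = 7$, $\mathrm{rank}(\boldsymbol{\nu}) = 3$ is consistent:

```python
# Sketch: check of the cycle decomposition of Subsec. V C with exact integers.
# Rows of nu: intermediate species x, y, z; columns: reactions rho = 1..10.

nu = [[1, 1, -1, -1, 0, 0, 0, 0, 0, 0],   # x
      [0, 0, 0, 0, 1, 1, -1, 0, 0, 0],    # y
      [0, 0, 0, 0, 0, 0, 0, 1, 1, -1]]    # z

cycles = [(1, -1, 0, 0, 0, 0, 0, 0, 0, 0),   # cycle 1: A1 -> A2
          (1, 0, 1, 0, 0, 0, 0, 0, 0, 0),    # cycle 2: A1 -> A3
          (1, 0, 0, 1, 0, 0, 0, 0, 0, 0),    # cycle 3: A1 -> A4
          (0, 0, 0, 0, 1, -1, 0, 0, 0, 0),   # cycle 4: A5 -> A6
          (0, 0, 0, 0, 1, 0, 1, 0, 0, 0),    # cycle 5: A5 -> A7
          (0, 0, 0, 0, 0, 0, 0, 1, -1, 0),   # cycle 6: A8 -> A9
          (0, 0, 0, 0, 0, 0, 0, 1, 0, 1)]    # cycle 7: A8 -> A10

# every e_c must satisfy nu . e_c = 0, i.e. the intermediate species
# return to their initial numbers after one turn of the cycle
for e in cycles:
    assert all(sum(nu[j][r] * e[r] for r in range(10)) == 0 for j in range(3))

# the three rows of nu occupy disjoint blocks of columns, so they are
# linearly independent: rank(nu) = s - l = 3, and the null space has
# dimension o = r - rank(nu) = 10 - 3 = 7, matching the seven cycles
print(len(cycles), 10 - len(nu))  # -> 7 7
```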
For the reaction network (57), the stochastic process can be defined in terms of the transition rates deduced with Eq. (6) and corresponding to the macroscopic rates (58)-(67). This Markovian process can be simulated using Gillespie's algorithm.
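As a minimal illustration of such a simulation, the following sketch applies Gillespie's algorithm to the first reaction pair $\text{A}_1 + \text{X} \rightleftharpoons 2\text{X}$ of network (57) alone, together with a running entropy counter $Z$ for the quantity (22). The extensive mass-action scaling of the transition rates ($W_{+1} = k_{+1} a_1 X$, $W_{-1} = k_{-1} X(X-1)/V$) and the rate-constant values are assumptions made here for the sketch, not the paper's full specification.

```python
# Sketch: Gillespie simulation of A1 + X <-> 2X with an entropy counter.
# Assumed transition rates: W+1 = k+1 a1 X, W-1 = k-1 X(X-1)/V.
import math
import random

random.seed(1)
V = 100.0                      # extensivity parameter
kp1, km1, a1 = 0.6, 0.01, 1.0  # illustrative rate constants

def W(X):
    """Forward and backward transition rates at molecular number X."""
    return kp1 * a1 * X, km1 * X * (X - 1) / V

t, X, Z = 0.0, 3000, 0.0       # start below the fixed point X* = V k+1 a1/k-1
for _ in range(2000):
    wp, wm = W(X)
    gamma = wp + wm                       # escape rate
    t += random.expovariate(gamma)        # exponential waiting time
    if random.random() < wp / gamma:      # forward event: X -> X + 1
        Xnew = X + 1
        # entropy increment ln[W_rho(X'|X) / W_-rho(X|X')]
        Z += math.log(wp / W(Xnew)[1])
    else:                                 # backward event: X -> X - 1
        Xnew = X - 1
        Z += math.log(wm / W(Xnew)[0])
    X = Xnew
```

In the full simulation, the same loop would run over all $2r = 20$ transitions of network (57), with the mean growth rate of $Z$ giving the entropy production rate.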
At every transition, the random variables are updated according to
$$t' = t + \Delta t , \qquad \boldsymbol{X}' = \boldsymbol{X} + \boldsymbol{\nu}_\rho , \qquad \boldsymbol{n}' = \boldsymbol{n} + \Delta\boldsymbol{n}_\rho , \qquad (82)$$
where the waiting time $\Delta t$ before the transition has the following exponential probability distribution of parameter given by the escape rate (14),
$$p(\Delta t) = \gamma(\boldsymbol{X}) \, e^{-\gamma(\boldsymbol{X}) \, \Delta t} , \qquad (83)$$
$\boldsymbol{\nu}_\rho = \{\nu_{j\rho}\}_{j=1}^{s}$ are the stoichiometric coefficients of the intermediate species, and $\Delta\boldsymbol{n}_\rho = \{\delta_{\rho\rho'}\}_{\rho'=\pm 1}^{\pm r}$ are the jumps of the reaction counters upon the reactive event $\rho$. This stochastic process is ruled by the master equation (7). Moreover, the time evolution of the quantity (22) is simulated during the process with the following addition at every transition (82):
$$Z' = Z + \ln\frac{W_\rho(\boldsymbol{X}' | \boldsymbol{X})}{W_{-\rho}(\boldsymbol{X} | \boldsymbol{X}')} . \qquad (84)$$
A stochastic trajectory for the concentrations $\boldsymbol{x} = \boldsymbol{X}/V = (x, y, z)$ of the intermediate species simulated with the Gillespie algorithm is shown in Fig. 2(b) for the value $V = 250$ of the extensivity parameter entering the expressions (6) of the transition rates. We observe that the trajectory is noisy, but keeps the main features of the deterministic trajectory depicted in Fig. 2(a). In particular, the noisy oscillations have amplitudes ranging between similar values, and the episodic oscillations of small amplitude also manifest themselves. The stochastic trajectory is plotted in Fig. 3(b) in the same plane $(y, z)$ as in Fig. 3(a), depicting a noisy attractor. Here also, the key features of the deterministic attractor are observed, namely noisy oscillations close to the lower branch of the slow manifold with random jumps towards the upper branch, and random reinjections back onto the lower one. Noise tends to disperse the excursions of the stochastic trajectory beyond the region of the attractor, but this dispersion is mild, as seen in Fig. 3(b). Accordingly, the features of chemical chaos are robust under the effect of stochasticity in this model.

E. Entropy production rate
Here, our aim is to compare the different methods to evaluate the entropy production rate in the deterministic and stochastic approaches. The evaluation is performed in units where Boltzmann's constant is equal to unity: $k_{\rm B} = 1$.

For the deterministic system, the time average of the entropy production rate (46) is obtained by integrating the ordinary differential equations (68)-(70) using a Runge-Kutta algorithm of orders 4 and 5 with a variable time step. Furthermore, the time averages of the rates $J_c$ associated with the seven cycles (74)-(80) can also be calculated to get the value (52). The comparison between these values shows agreement within great numerical accuracy. In particular, for $k_{+2} = 0.55$ in the chaotic regime, we find $\overline{\sigma} = 3116.06$, and the cycle sum $\sum_{c=1}^{7} A_c J_c$ takes the same value within the numerical accuracy over the time interval $t = 1000$. Therefore, these numerical calculations confirm the validity of the result (52), according to which the time average of the entropy production rate can be decomposed into the contributions of the cycles for the deterministic chaotic dynamics.

In the stochastic approach, the entropy production rate can be evaluated as the mean growth rate of the quantity (22), which is given in Gillespie's algorithm with Eq. (84), or as the mean growth rate of the sum (53) of the affinities $\{A_c\}_{c=1}^{7}$ and the signed cumulated numbers $\{\Upsilon_c(t)\}_{c=1}^{7}$ associated with the cycles. According to Eqs. (74)-(80), these numbers are given in terms of the signed cumulated numbers of elementary reactions $\{N_\rho = n_{+\rho} - n_{-\rho}\}_{\rho=1}^{10}$ respectively by $\Upsilon_1 = -N_2$, $\Upsilon_2 = N_3$, $\Upsilon_3 = N_4$, $\Upsilon_4 = -N_6$, $\Upsilon_5 = N_7$, $\Upsilon_6 = -N_9$, and $\Upsilon_7 = N_{10}$.

These mean growth rates are computed using Gillespie's algorithm. First, Eqs. (23) and (56) give equal values $R = \tilde\sigma V$ for the entropy production rate up to great numerical accuracy, which confirms the validity of the decomposition into cycles for the entropy production rate in the stochastic approach. Next, in order to compare with the deterministic value of the entropy production rate, the mean growth rates are calculated for increasing values of the extensivity parameter: $V = 100$, 150, 200, and 250. The so-computed rates are used for extrapolating the value in the limit $V \to \infty$ by supposing a convergence as $V^{-1}$. For $k_{+2} = 0.55$ in the chaotic regime shown in Fig. 2(b) and Fig. 3(b), the different rates and
TABLE I. Contributions to the entropy production of the reaction network (57) for the parameter values (71) with $k_{+2} = 0.55$. The deterministic averages are carried out over the time interval $t = 1000$ with a Runge-Kutta algorithm of orders 4 and 5 with a variable time step. The time average of the entropy production rate (46) for the deterministic system takes the rounded value $\overline{\sigma} = 3116$, and the same rounded value is given by Eq. (52). For the stochastic process, Gillespie's algorithm is used over a time interval $t = 1000$ for $V = 100$, 150, 200, and 250, and the given values are obtained by extrapolation. The mean growth rate (23) and Eq. (56) give the value $R/V = \tilde\sigma = 3177$, up to the statistical error. $A_c$ denotes the affinity (45) associated with the cycle $c$, $J_c$ the time average of the corresponding deterministic rate given in Eqs. (74)-(80), and $\tilde J_c$ the mean value of the stochastic rate obtained with Eq. (55). The confidence interval is estimated from the values obtained for each quantity in the simulations with $V = 100$, 150, 200, and 250.

[Table I: columns $c$, $A_c$, $J_c$, $A_c J_c$, and $A_c \tilde J_c$ for the seven cycles, together with the sums $\sum_{c=1}^{7} A_c J_c$ and $\sum_{c=1}^{7} A_c \tilde J_c$.]

corresponding contributions to the entropy production rate are given in Table I.

Furthermore, the deterministic and stochastic entropy production rates (EPR) are computed as a function of the parameter $k_{+2}$, as shown in Fig. 4. For the deterministic dynamics, we see that the time averages of the entropy production rate obtained with Eq. (46) and Eq. (52) coincide within great numerical accuracy. The dependence of the entropy production rate on $k_{+2}$ is smooth in the stationary and periodic regimes. However, we observe in Fig. 4 an irregular dependence in the chaotic regime. The reason is that the attractor is not structurally stable in this regime, but undergoes complex bifurcations between chaotic and periodic attractors, as seen in Fig. 1. The orbits include oscillations of small and large amplitudes, which is a characteristic feature of mixed-mode oscillations. Accordingly, the time average of the entropy production rate undergoes variations due to the bifurcations between these chaotic and periodic attractors. The general trend is the increase of the entropy production rate with the parameter $k_{+2}$. In the stochastic approach, the mean entropy production rate is evaluated from the mean growth rate (23) of the quantity (22) (divided by the extensivity parameter $V$), as well as from Eq. (56) based on the cycle decomposition. Again, both methods agree with each other up to great numerical accuracy.

FIG. 4. Entropy production rate (EPR) of the reaction network (57) for the parameter values (71) versus $k_{+2}$, as computed with different methods in the deterministic and stochastic approaches. The deterministic EPR (open squares) is obtained with the time average of the entropy production rate (46). The deterministic EPR$_c$ (filled circles) is calculated by the sum (52) over the $o = 7$ cycles of the reaction network. The stochastic EPR (filled squares) is computed by extrapolation of the values $R/V$ given by Eq. (23) using Gillespie's algorithm with $V = 100$, 150, 200, and 250. The stochastic EPR$_c$ (open circles) is similarly obtained from Eq. (56) in the same simulations using Gillespie's algorithm. The averages are carried out over the time interval $t = 1000$.

In order to compare with the deterministic results, the mean values of the entropy production rate are calculated by extrapolations from the values obtained in simulations with Gillespie's algorithm for $V = 100$, 150, 200, and 250. The values obtained with the stochastic approach are plotted in Fig. 4, showing agreement with the deterministic values up to the statistical errors of the stochastic simulations.

Therefore, the rate of entropy production can be calculated consistently in both the deterministic and stochastic approaches. The essential aspect is the role of elementary chemical reactions, as emphasized with the method based on the cycle decomposition, where the entropy production rate is directly obtained by counting the reactive events due to elementary chemical reactions.

F. Changing the direction of chemical conversion
In the chaotic regime for $k_{+2} = 0.55$, Table I shows that the affinity and the mean rate are opposite in sign to each other for the cycles $c = 1$, $c = 4$, and $c = 6$, so that these cycles have negative contributions to the entropy production rate. This is consistent with the second law of thermodynamics because the other cycles have large enough positive contributions, so that the sum of all the contributions gives a positive entropy production, as required. Thus, the cycles $c = 1$, $c = 4$, and $c = 6$ are running in the direction that is opposite to the one of their spontaneous relaxation towards equilibrium. These particular cycles of reactions are driven in the opposite direction because of their coupling to the other cycles in the network.

Now, we note that the parameter $k_{+2}$ only appears in the affinity $A_1$ of the first cycle (74). Therefore, if $k_{+2}$ is tuned as a control parameter, the affinity $A_1$ is changing, although the values of all the other affinities remain constant. For the set (71) of parameter values, this affinity is given by $A_1 = \ln(0.6/k_{+2})$, so it is positive if $k_{+2} < 0.6$ and negative if $k_{+2} > 0.6$. Since the mean rate $J_1$ remains negative on both sides of $k_{+2} = 0.6$, the contribution $A_1 J_1$ to the entropy production is changing sign if the control parameter crosses the value $k_{+2} = 0.6$. The first cycle thus performs the conversion $\text{A}_2 \to \text{A}_1$ in its spontaneous direction for $k_{+2} > 0.6$, but in the direction opposite to the spontaneous one for $k_{+2} < 0.6$, while remaining in the chaotic regime. Remarkably, whether this chemical conversion runs with or against its spontaneous direction can be controlled by tuning the parameter $k_{+2}$.

Since the reactive system is out of equilibrium, the chemical conversion $\text{A}_2 \to \text{A}_1$, which has a direction opposite to the spontaneous one for $k_{+2} < 0.6$, is driven by the other reactions in the system. Like in engines, the efficiency of this opposite conversion can be introduced as
$$\eta \equiv \frac{-A_1 J_1}{\sum_{c=2}^{7} A_c J_c} . \qquad (85)$$
This quantity is positive in the regime where the conversion $\text{A}_2 \to \text{A}_1$ runs against its spontaneous direction, where it characterizes its efficiency. According to the second law of thermodynamics requiring that the entropy production rate (52) or (56) is always nonnegative, the efficiency obeys $\eta \leq 1$. In the stochastic approach, the efficiency is given by Eq. (85) with the rates $\tilde J_c$ defined by Eq. (55).

FIG. 5. Efficiency (85) of the opposite chemical conversion $\text{A}_2 \to \text{A}_1$ in the reaction network (57) for the parameter values (71) versus $k_{+2}$, as computed in the deterministic and stochastic approaches. The deterministic values of the efficiency $\eta$ (open squares) are obtained using the time averages of the cycles (74)-(80). The stochastic values of $\eta$ (filled squares) are computed by extrapolating the values obtained using Gillespie's algorithm with $V = 100$, 150, 200, and 250. The averages are carried out over the time interval $t = 1000$. The statistical errors on the stochastic values are here smaller than or of the same size as the filled squares.

The efficiency (85) can be computed in the deterministic and stochastic approaches, as shown in Fig. 5. We see that the values obtained in both approaches are in agreement. The efficiency is indeed observed to change sign as expected at the parameter value $k_{+2} = 0.6$, confirming that the chemical conversion $\text{A}_2 \to \text{A}_1$ proceeds against its spontaneous direction for $k_{+2} < 0.6$, but spontaneously for $k_{+2} > 0.6$. The efficiency of these conversions is very small, because the contribution of the cycle $c = 1$ to the entropy production rate is the smallest among the seven cycles, as seen in Table I. Nevertheless, the study shows the possibility of changing the direction of chemical conversion even if the reaction network is running in chaotic regimes.

VI. CONCLUSION
In this paper, methods to evaluate the entropy production rate in chemical chaos are developed in both the deterministic and stochastic approaches. Since trajectories are time dependent, time averages are considered in both approaches. In general, the time average defines some stationary probability distribution. In deterministic systems, this stationary distribution is a solution of the generalized Liouvillian equation and it has the chaotic attractor for support. In stochastic processes, the stationary distribution is a solution of the chemical master equation. The latter distribution is expected to reduce to the former in the weak noise limit.

The model of chemical chaos that is here considered is a reversible chemical reaction network composed of ten forward and as many backward reactions, ruling the time evolution of three intermediate species. The bifurcation diagram of the deterministic dynamical system shows that the parametric domains with chaotic attractors alternate with small windows of mixed-mode periodic oscillations. In the corresponding stochastic process, the structures in phase space and parameter space that are finer than the noise amplitude are thus blurred by the random fluctuations. However, the larger structures are observed to be robust.

In deterministic systems, it is shown for chemical kinetics obeying the mass action law that the time average of the entropy production rate can be decomposed in terms of the affinities and mean rates associated with the right null eigenvectors of the stoichiometric matrix. These null eigenvectors define cycles composed of consecutive reactions such that the molecular numbers of intermediate species come back to their initial values. The associated affinities only depend on the concentrations of the chemostatted species, and the mean rates give the amounts of chemostatted species that are consumed or synthesized in the cycle.
The results of this paper show that this decomposition, which is well known in stationary states of deterministic chemical reaction networks, is also valid in time-dependent periodic, quasiperiodic, or chaotic regimes for the time average of the entropy production rate.

In this way, the comparison can be made with the entropy production rate evaluated for the corresponding stochastic process by averaging either in time or with respect to the stationary probability distribution of the process under the assumption of ergodicity. In the stochastic approach, as well as for deterministic systems, the entropy production rate is evaluated using the rates of the forward and backward elementary chemical reactions. For stochastic processes, the rates can be obtained by counting the random numbers of reactive events. Every reactive event contributes to entropy production by the logarithm of the ratio of the opposite transition rates of the elementary reaction occurring at the transition. Consequently, the known expression for the entropy production rate is recovered in the deterministic limit.

Equivalently, the entropy production rate can be evaluated in the stochastic approach by counting the random numbers of cycles that are wound during the process, every cycle adding a contribution equal to the associated affinity. The advantage of this method is that the entropy production rate can be evaluated as some linear combination of reaction rates.

The application of these methods to the chemical reaction network of Ref. 24 in the stationary, periodic, and chaotic regimes shows that they are consistent with each other. On the one hand, the values of the entropy production rate given by the cycle decomposition are in excellent agreement with the values of the standard method up to great numerical accuracy.
On the other hand, the values obtained for the stochastic process are observed to agree with the values of the deterministic system in the weak noise limit.

Furthermore, the cycle decomposition reveals that, in the chaotic regime, some cycles contribute negatively to the positive entropy production rate, meaning that these cycles are driven by the other ones in the direction that is opposite to spontaneous dissipation. Accordingly, the chemical reaction network performs energy conversion even in the chaotic regime. Moreover, the direction of conversion in one of the cycles can be reversed by tuning the associated control parameter, although remaining in the chaotic regime. This cycle is driven opposite to its spontaneous direction below a critical value of this control parameter, where the thermodynamic efficiency of energy conversion is evaluated.

To conclude, the methods of stochastic thermodynamics can be applied to complex chemical reaction networks running from stationary to chaotic regimes, and they are consistent with the standard methods of chemical nonequilibrium thermodynamics.

ACKNOWLEDGMENTS
This research is financially supported by the Université libre de Bruxelles (ULB) and the Fonds de la Recherche Scientifique - FNRS under the Grant PDR T.0094.16 for the project "SYMSTATPHYS". O. E. Rössler, Z. Naturforsch. A, 1168 (1976). O. E. Rössler, Phys. Lett. A, 397 (1976). O. E. Rössler, Z. Naturforsch. A, 1664 (1976). O. E. Rössler,
Continuous Chaos , in: H. Haken, Editor,
Syner-getics: A Workshop (Springer, New York, 1977) pp. 184-199. O. E. R¨ossler, Ann. N.Y. Acad. Sci. , 376 (1979). O. E. R¨ossler, Phys. Lett. A , 155 (1979). O. E. R¨ossler, Z. Naturforsch. A , 259 (1976). O. E. R¨ossler and K. Wegmann, Nature , 89 (1978). D. Ruelle, Trans. New York Acad. Sci. , 66 (1973). J. L. Hudson, M. Hart, and D. Marinko, J. Chem. Phys. , 1601(1979). J. L. Hudson and J. C. Mankin, J. Chem. Phys. , 6171 (1981). J. S. Turner, J.-C. Roux, W. D. McCormick, and H. L. Swinney,Phys. Lett. A , 9 (1981). R. H. Simoyi, and A. Wolf, and H. L. Swinney, Phys. Rev. Lett. , 245 (1982). H. L. Swinney, Physica D , 3 (1983). I. R. Epstein, Physica D , 47 (1983). J.-C. Roux, Physica D , 57 (1983). J.-C. Roux, R. H. Simoyi, and H. L. Swinney, Physica D , 257(1983). P. Berg´e, Y. Pomeau, and C. Vidal,
Order within Chaos , (Wiley,New York, 1984). F. Argoul, A. Arn´eodo, and P. Richetti, Phys. Lett. A , 269(1987). S. K. Scott,
Chemical chaos (Clarendon Press, Oxford, 1991) I. R. Epstein and J. A. Pojman,
An Introduction to NonlinearChemical Dynamics (Oxford University Press, New York, 1998). K.-D. Willamowski and O. E. R¨ossler, Z. Naturforsch. A , 317(1980). B. D. Aguda and B. L. Clarke, J. Chem. Phys. , 7428 (1988). P. Gaspard and G. Nicolis, J. Stat. Phys. , 499 (1983). N. Samardzija, L. D. Greller, and E. Wasserman, J. Chem. Phys. , 2296 (1989). X.-G. Wu and R. Kapral, Phys. Rev. Lett. , 1940 (1993). J. G¨u´emez and M. A. Mat´ıas, Phys. Rev. E , R 2351 (1993). P. Geysermans and G. Nicolis, J. Chem. Phys. , 8964 (1993). P. Geysermans and F. Baras, J. Chem. Phys. , 1402 (1996). V. Voorsluijs and Y. De Decker, Physica D , 1 (2016). J. M. Garcia-Fern´andez, J. M. Nieto-Villar, and J. Rieumont-Briones, Physica Scripta , 643 (1996). J. W. Stucki and R. Urbanczik, Z. Naturforsch. A , 599 (2005). T. De Donder and P. Van Rysselberghe,
Affinity (Stanford Uni-versity Press, Menlo Park CA, 1936). I. Prigogine,
Introduction to Thermodynamics of IrreversibleProcesses (Wiley, New York, 1967). G. Nicolis, Rep. Prog. Phys. , 225 (1979). S. R. de Groot and P. Mazur,
Non-Equilibrium Thermodynamics (Dover, New York, 1984). D. Kondepudi and I. Prigogine,
Modern Thermodynamics: FromHeat Engines to Dissipative Structures (Wiley, Chichester, 1998). P. H. Richter, P. Rehmus, and J. Ross, Prog. Theor. Phys. ,385 (1981). J.-L. Luo, C. Van den Broeck, and G. Nicolis, Z. Phys. B - Con-densed Matter , 165 (1984). C. Y. Mou, J.-L. Luo, and G. Nicolis, J. Chem. Phys. , 7011(1986). C. Van den Broeck,
Stochastic Thermodynamics , in: W. Ebel-ing and H. Ulbricht, Editors,
Selforganization by Nonlinear Ir-reversible Processes (Springer, Berlin, 1986) pp. 57-61. J. L. Lebowitz and H. Spohn, J. Stat. Phys. , 333 (1999). P. Gaspard, J. Chem. Phys. , 8898 (2004). D. Andrieux and P. Gaspard, J. Chem. Phys. , 6167 (2004). D. Andrieux and P. Gaspard, J. Stat. Phys. , 107 (2007). D. Andrieux and P. Gaspard, J. Chem. Phys. , 154506 (2008). D. Andrieux and P. Gaspard, Phys. Rev. E , 031137 (2008). H. Tomita and M. M. Sano, Prog. Theor. Phys. , 515 (2008). U. Seifert, Phys. Rev. Lett. , 040602 (2005). U. Seifert, Eur. J. Phys. E , 26 (2011). U. Seifert, Rep. Prog. Phys. , 126001 (2012). M. Esposito and C. Van den Broeck, Phys. Rev. Lett. ,090601 (2010). M. Esposito and C. Van den Broeck, Phys. Rev. E , 011143(2010). M. Esposito and C. Van den Broeck, Phys. Rev. E , 011144(2010). M. Esposito, Phys. Rev. E , 041125 (2012). C. Van den Broeck,
Stochastic Thermodynamics: A Brief Intro-duction , in: C. Bechinger, F. Sciortino, and P. Ziherl, Editors,
Physics of Complex Colloids , Proceedings of the InternationalSchool of Physics “Enrico Fermi”, vol. 184 (IOS Press, Amster-dam; SIF, Bologna, 2013). C. Van den Broeck and M. Esposito, Physica A , 6 (2015). J. M. Horowitz, J. Chem. Phys. , 044111 (2015). Y. De Decker, Physica A , 178 (2015). Y. De Decker, J.-F. Derivaux, and G. Nicolis, Phys. Rev. E ,042127 (2016). T. J. Xiao, Z. Hou, and H. Xin, J. Chem. Phys. , 114506(2008). T. J. Xiao, Z. Hou, and H. Xin, J. Phys. Chem. B , 9316(2009). T. J. Xiao and Z. Hou, Sci. Chin. Chem. , 396 (2010). T. Rao, T. J. Xiao, and Z. Hou, J. Chem. Phys. , 214112(2011). T. J. Xiao and Y. Zhou, Chin. J. Chem. Phys. , 61 (2018). L. P. Shil’nikov, Sov. Math. Dokl. , 163 (1965). F. Schl¨ogl, Z. Phys. , 446 (1971). F. Schl¨ogl, Z. Phys. , 147 (1972). G. Nicolis and I. Prigogine, Proc. Natl. Acad. Sci. (USA) ,2102 (1971). G. Nicolis, J. Stat. Phys. , 195 (1972). G. Nicolis and I. Prigogine,
Self-Organization in NonequilibriumSystems (Wiley, New York, 1977). M. Polettini and M. Esposito, J. Chem. Phys. , 024117(2014). R. Rao and M. Esposito, Phys. Rev. X , 041064 (2016). R. Rao and M. Esposito, New J. Phys. , 023007 (2018). R. Rao and M. Esposito, J. Chem. Phys. , 245101 (2018). I. H. Segel,
Enzyme Kinetics (Wiley, New York, 1975). A. Wachtel, R. Rao, and M. Esposito, New J. Phys. , 042002(2018). C. W. Gardiner,
Handbook of Stochastic Methods for Physics,Chemistry and the Natural Sciences , 3rd edition (Springer,Berlin, 2004). D. T. Gillespie, J. Comput. Phys. , 403 (1976). D. T. Gillespie, J. Phys. Chem. , 2340 (1977). J. Schnakenberg, Rev. Mod. Phys. , 571 (1976). N. G. van Kampen,
Stochastic Processes in Physics and Chem-istry (North-Holland, Amsterdam, 1981). P. Gaspard, New J. Phys. , 115014 (2013). T. L. Hill,
Free Energy Transduction and Biochemical Cycle Ki-netics (Dover, New York, 2005). A. Blokhuis, D. Lacoste, and P. Gaspard, J. Chem. Phys. ,144902 (2018). G. Nicolis,
Introduction to nonlinear science (Cambridge Uni-versity Press, Cambridge UK, 1995). I. P. Cornfeld, S. V. Fomin, and Ya. G. Sinai,
Ergodic Theory (Springer, New York, 1982). J.-P. Eckmann and D. Ruelle, Rev. Mod. Phys.57