A proof of unlimited multistability for phosphorylation cycles

Elisenda Feliu, Alan D. Rendall and Carsten Wiuf

April 8, 2019

Abstract
The multiple futile cycle is a phosphorylation system in which a molecular substrate may be phosphorylated sequentially $n$ times by means of an enzymatic mechanism. The system has been studied mathematically using reaction network theory and ordinary differential equations. It is known that the system may have at least as many as $2\lfloor n/2\rfloor+1$ steady states (where $\lfloor x\rfloor$ is the integer part of $x$) for particular choices of parameters. Furthermore, for the simple and dual futile cycles ($n=1,2$) the stability of the steady states has been determined in the sense that the only steady state of the simple futile cycle is globally stable, while there exist parameter values for which the dual futile cycle admits two asymptotically stable and one unstable steady state. For general $n$, evidence that the possible number of asymptotically stable steady states increases with $n$ has been given, which has led to the conjecture that parameter values can be chosen such that $\lfloor n/2\rfloor+1$ out of $2\lfloor n/2\rfloor+1$ steady states are asymptotically stable and the remaining steady states are unstable.

We prove this conjecture here by first reducing the system to a smaller one, for which we find a choice of parameter values that give rise to a unique steady state with multiplicity $2\lfloor n/2\rfloor+1$. Using arguments from geometric singular perturbation theory, and a detailed analysis of the centre manifold of this steady state, we achieve the desired result.

Post-translational modifications of proteins are ubiquitous in almost all molecular processes at the cellular level. Perhaps the most important post-translational modification process is that of phosphorylation, where no, one or several phosphate groups are attached to specific sites of a protein, thereby creating different protein phosphoforms. It is estimated that more than one third of all proteins in eukaryotes are temporarily phosphorylated [5]. Moreover, the number of phosphorylation sites varies greatly from protein to protein and, for example, exceeds $n=20$ in the case of the tumour suppressor protein p53 [25] and $n=80$ in the case of the tau protein, a microtubule stabiliser [17]. However, the phosphorylation process appears in many different guises. One particularly widespread variant is that of distributive sequential phosphorylation, where a protein is phosphorylated one site at a time and in sequential order.
Dephosphorylation is the reverse process and removes phosphate groups in the reverse order.

The standard model of distributive sequential phosphorylation consists of the $6n$ molecular reactions
$$\mathrm{X}_{i-1} + \mathrm{E} \rightleftharpoons \mathrm{Y}_{1,i} \longrightarrow \mathrm{X}_i + \mathrm{E}, \qquad \mathrm{X}_i + \mathrm{F} \rightleftharpoons \mathrm{Y}_{2,i} \longrightarrow \mathrm{X}_{i-1} + \mathrm{F}, \qquad i = 1,\dots,n,$$
where $n$ denotes the number of phosphorylation sites, $\mathrm{X}_i$, $i=0,\dots,n$, denotes the phosphoform with $i$ phosphate groups attached, E is an enzyme (a kinase) that catalyses the phosphorylation process, F is another enzyme (a phosphatase) that catalyses the dephosphorylation process, and $\mathrm{Y}_{1,i}$ and $\mathrm{Y}_{2,i}$, $i=1,\dots,n$, are enzyme-substrate complexes. The model is also known as the multiple futile cycle [27].

(Author affiliations: Elisenda Feliu, Department of Mathematical Sciences, University of Copenhagen, Universitetsparken 5, 2100 Copenhagen, Denmark, [email protected]. Alan D. Rendall, Institut für Mathematik, Johannes Gutenberg-Universität Mainz, Staudingerweg 9, D-55099 Mainz, Germany, [email protected]. Carsten Wiuf, Department of Mathematical Sciences, University of Copenhagen, Universitetsparken 5, 2100 Copenhagen, Denmark, [email protected].)

Assuming mass-action kinetics [7], the reaction network may be modelled by a system of ordinary differential equations (ODEs) in the concentrations of the substances (the phosphoforms, enzyme-substrate complexes and free enzymes). It leads to a high-dimensional ODE system in $3n+3$ variables (concentrations) and $6n$ constants (one for each reaction). It has been argued, but not proved, that the number of possible stable steady states of the system increases linearly with $n$, thereby allowing the cell a large amount of functional flexibility. The phenomenon has been named unlimited multistability [26, 27].

In the case $n=1$, the simple futile cycle, it is known that there is a unique positive steady state for fixed total amounts of substrate (protein) and enzymes, and that this solution is globally asymptotically stable [1].
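The global stability statement for $n=1$ can be illustrated numerically. The following sketch is not part of the paper's argument: all rate constants are arbitrarily set to 1, the totals $E_{tot}=F_{tot}=0.5$, $X_{tot}=2$ are an arbitrary choice, and the function name `rhs` is ours. Two trajectories with the same total amounts are integrated for a long time and should end at the same steady state.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, s):
    # state: (x0, x1, y1, y2, xE, xF); all rate constants set to 1
    x0, x1, y1, y2, xE, xF = s
    vb1, vd1, vc1 = x0 * xE, y1, y1   # X0 + E <-> Y1 -> X1 + E
    vb2, vd2, vc2 = x1 * xF, y2, y2   # X1 + F <-> Y2 -> X0 + F
    return [-vb1 + vd1 + vc2,         # dx0/dt
            vc1 - vb2 + vd2,          # dx1/dt
            vb1 - vd1 - vc1,          # dy1/dt
            vb2 - vd2 - vc2,          # dy2/dt
            -vb1 + vd1 + vc1,         # dxE/dt
            -vb2 + vd2 + vc2]         # dxF/dt

# two initial states with identical totals E_tot = F_tot = 0.5, X_tot = 2
ic1 = [2.0, 0.0, 0.0, 0.0, 0.5, 0.5]
ic2 = [0.0, 2.0, 0.0, 0.0, 0.5, 0.5]
end1 = solve_ivp(rhs, (0.0, 2000.0), ic1, rtol=1e-9, atol=1e-12).y[:, -1]
end2 = solve_ivp(rhs, (0.0, 2000.0), ic2, rtol=1e-9, atol=1e-12).y[:, -1]
```

For this parameter choice both endpoints agree to high accuracy, consistent with a unique, globally attracting steady state in the given stoichiometric class.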
In the case $n=2$, the dual futile cycle, the conclusion was first reached with the help of simulations that for certain choices of parameters there are three steady states, two of which are stable [21]. It was later proved rigorously that at most three steady states exist and that for certain parameter values there are indeed three [27]. In that paper nothing was proved about the stability of these solutions. Later it was formally proven that for suitable choices of parameters and total amounts of substrate and enzymes these three solutions are hyperbolic, two are asymptotically stable and the third is a saddle [15]. The aim of this paper is to extend these results on stability to general values of $n$.

For general $n$, there can be more than two stable steady states [14]. Evidence that there can be as many as $2\lfloor n/2\rfloor+1$ steady states with $\lfloor n/2\rfloor+1$ of them being stable was presented in [26]. Here $\lfloor x\rfloor$ denotes the floor function, the largest integer smaller than or equal to a real number $x$. In more detail, it was shown that steady states are in one-to-one correspondence with the intersections of two curves in the plane, and these curves were plotted numerically. It was also indicated how in a certain formal limit these intersections correspond to the roots of a polynomial in one variable. Starting from these considerations it has been proved analytically that there are parameters for which the number of steady states indicated in [26] exist, but their stability was not treated [27].

The number of steady states might be larger than $2\lfloor n/2\rfloor+1$ for certain choices of parameter values. For $n=2$, the bound $2\lfloor n/2\rfloor+1$ is an upper bound, but for $n=3$, there can be as many as five steady states [13].

We prove that there exist parameter values for which the ODE system has $\lfloor n/2\rfloor+1$ stable and $\lfloor n/2\rfloor$ unstable steady states (for fixed total amounts of substrate and enzymes). The line of argument is the following.
First we will apply geometric singular perturbation theory (GSPT) to reduce the ODE system in $3n+3$ variables to an ODE system in $n+1$ variables, the phosphoform concentrations. This system is known as the Michaelis-Menten limit (or system) of the original system [15]. For the Michaelis-Menten system we will show that there exists a choice of parameter values for which there is one asymptotically stable steady state with multiplicity $2\lfloor n/2\rfloor+1$. To achieve this, we apply a combination of linear algebra and dynamical systems theory. The main hurdle is to establish asymptotic stability of the steady state, as it is not hyperbolic but has a centre manifold of dimension one (for fixed total amount of substrate). The leading non-zero coefficient of the system in the direction of the centre manifold is negative but depends on $n$. Having established this, we again apply perturbation theory to conclude the existence of parameter values for which there are $2\lfloor n/2\rfloor+1$ steady states of which $\lfloor n/2\rfloor+1$ are asymptotically stable and $\lfloor n/2\rfloor$ are unstable. With this in place, we finally lift the steady states to the original system while preserving their stability properties.

We consider a protein X and let $\mathrm{X}_i$ be the protein with $i=0,\dots,n$ phosphate groups attached to it. The kinase which catalyses the phosphorylation of X is denoted by E and the phosphatase which catalyses the dephosphorylation of X by F. The phosphorylation and dephosphorylation of the protein proceed through the formation of enzyme-substrate complexes. For $i=1,\dots,n$, we let $\mathrm{Y}_{1,i}$ denote the enzyme-substrate complex formed by $\mathrm{X}_{i-1}$ and E, and similarly $\mathrm{Y}_{2,i}$ the enzyme-substrate complex formed by $\mathrm{X}_i$ and F. The reaction mechanism under consideration consists of the reactions
$$\mathrm{X}_{i-1} + \mathrm{E} \overset{a_{1,i}}{\underset{d_{1,i}}{\rightleftharpoons}} \mathrm{Y}_{1,i} \overset{k_{1,i}}{\longrightarrow} \mathrm{X}_i + \mathrm{E}, \qquad \mathrm{X}_i + \mathrm{F} \overset{a_{2,i}}{\underset{d_{2,i}}{\rightleftharpoons}} \mathrm{Y}_{2,i} \overset{k_{2,i}}{\longrightarrow} \mathrm{X}_{i-1} + \mathrm{F}, \qquad i=1,\dots,n,$$
where the labels of the reactions indicate the reaction rate constants.
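The mass-action dynamics of this scheme (introduced below as equations (1)-(5)) can be assembled programmatically for any $n$; the following is a minimal sketch in which the function name `futile_cycle_rhs` and the state layout are our own conventions.

```python
import numpy as np

def futile_cycle_rhs(n, a1, d1, k1, a2, d2, k2):
    """Mass-action right-hand side for the n-site scheme above.

    State layout: x[0..n] phosphoforms, then y1[1..n] and y2[1..n]
    enzyme-substrate complexes, then xE, xF.  Rate-constant arrays are
    1-indexed via arrays of length n+1 (index 0 unused).
    """
    def rhs(state):
        x = state[:n + 1]
        y1 = state[n + 1:2 * n + 1]      # y1[i-1] ~ Y_{1,i}
        y2 = state[2 * n + 1:3 * n + 1]  # y2[i-1] ~ Y_{2,i}
        xE, xF = state[3 * n + 1], state[3 * n + 2]
        dx = np.zeros(n + 1); dy1 = np.zeros(n); dy2 = np.zeros(n)
        dxE = 0.0; dxF = 0.0
        for i in range(1, n + 1):
            vb1 = a1[i] * x[i - 1] * xE   # binding X_{i-1} + E
            vd1 = d1[i] * y1[i - 1]       # dissociation of Y_{1,i}
            vc1 = k1[i] * y1[i - 1]       # catalysis Y_{1,i} -> X_i + E
            vb2 = a2[i] * x[i] * xF       # binding X_i + F
            vd2 = d2[i] * y2[i - 1]       # dissociation of Y_{2,i}
            vc2 = k2[i] * y2[i - 1]       # catalysis Y_{2,i} -> X_{i-1} + F
            dy1[i - 1] += vb1 - vd1 - vc1
            dy2[i - 1] += vb2 - vd2 - vc2
            dx[i - 1] += -vb1 + vd1 + vc2
            dx[i]     += vc1 - vb2 + vd2
            dxE += -vb1 + vd1 + vc1
            dxF += -vb2 + vd2 + vc2
        return np.concatenate([dx, dy1, dy2, [dxE, dxF]])
    return rhs
```

A quick check that the three conserved quantities discussed below (total kinase, phosphatase and substrate) are indeed constant along the flow: the corresponding combinations of derivatives vanish for any state and any positive rate constants.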
These are positive real numbers. Let $x_i$ be the concentration of $\mathrm{X}_i$ for $i=0,\dots,n$, $y_{1,i}$ and $y_{2,i}$ the concentrations of $\mathrm{Y}_{1,i}$ and $\mathrm{Y}_{2,i}$, respectively, for $i=1,\dots,n$, and let $x_E$ and $x_F$ be the concentrations of E and F, respectively. Under the assumption of mass-action kinetics, the evolution equations become
$$\frac{dx_i}{dt} = -a_{1,i+1}x_ix_E - a_{2,i}x_ix_F + d_{1,i+1}y_{1,i+1} + d_{2,i}y_{2,i} + k_{1,i}y_{1,i} + k_{2,i+1}y_{2,i+1}, \quad i=0,\dots,n, \quad (1)$$
$$\frac{dy_{1,i}}{dt} = a_{1,i}x_{i-1}x_E - (d_{1,i}+k_{1,i})y_{1,i}, \quad i=1,\dots,n, \quad (2)$$
$$\frac{dy_{2,i}}{dt} = a_{2,i}x_ix_F - (d_{2,i}+k_{2,i})y_{2,i}, \quad i=1,\dots,n, \quad (3)$$
$$\frac{dx_E}{dt} = \sum_{i=1}^n \big({-a_{1,i}}x_{i-1}x_E + (d_{1,i}+k_{1,i})y_{1,i}\big), \quad (4)$$
$$\frac{dx_F}{dt} = \sum_{i=1}^n \big({-a_{2,i}}x_ix_F + (d_{2,i}+k_{2,i})y_{2,i}\big), \quad (5)$$
where we set $a_{2,0}=d_{2,0}=k_{1,0}=0$ and $a_{1,n+1}=d_{1,n+1}=k_{2,n+1}=0$, so that the boundary terms in (1) vanish. The quantities
$$E_{tot} = x_E + \sum_{i=1}^n y_{1,i}, \quad (6) \qquad F_{tot} = x_F + \sum_{i=1}^n y_{2,i}, \quad (7) \qquad X_{tot} = \sum_{i=0}^n x_i + \sum_{i=1}^n (y_{1,i}+y_{2,i}) \quad (8)$$
are conserved. Here $E_{tot}$, $F_{tot}$ and $X_{tot}$ are known as the total amounts of kinase, phosphatase and substrate, respectively. Note that using the equations for the conserved quantities $E_{tot}$ and $F_{tot}$, we may eliminate the variables $x_E$ and $x_F$ from the right-hand side of equations (1)-(5), and thus the evolution equations for these two variables can be discarded. The relation (8) might be used to eliminate a further variable, but this will not be done here since it would destroy the symmetry of the system.

We next introduce rescaled variables that generalise the rescaling used in the case of the dual futile cycle [15]. For a parameter $\epsilon>0$, let $x_E=\epsilon\tilde{x}_E$, $x_F=\epsilon\tilde{x}_F$, $y_{j,i}=\epsilon\tilde{y}_{j,i}$ and $\tau=\epsilon t$. Denote the derivative with respect to $\tau$ by a prime and drop the tildes for convenience. This leads to the equations
$$\frac{dx_i}{d\tau} = -a_{1,i+1}x_ix_E - a_{2,i}x_ix_F + d_{1,i+1}y_{1,i+1} + d_{2,i}y_{2,i} + k_{1,i}y_{1,i} + k_{2,i+1}y_{2,i+1}, \quad i=0,\dots,n, \quad (9)$$
$$\epsilon\frac{dy_{1,i}}{d\tau} = a_{1,i}x_{i-1}x_E - (d_{1,i}+k_{1,i})y_{1,i}, \quad i=1,\dots,n, \quad (10)$$
$$\epsilon\frac{dy_{2,i}}{d\tau} = a_{2,i}x_ix_F - (d_{2,i}+k_{2,i})y_{2,i}, \quad i=1,\dots,n, \quad (11)$$
$$\epsilon\frac{dx_E}{d\tau} = \sum_{i=1}^n \big({-a_{1,i}}x_{i-1}x_E + (d_{1,i}+k_{1,i})y_{1,i}\big), \quad (12)$$
$$\epsilon\frac{dx_F}{d\tau} = \sum_{i=1}^n \big({-a_{2,i}}x_ix_F + (d_{2,i}+k_{2,i})y_{2,i}\big). \quad (13)$$
Correspondingly, we get rescaled conserved quantities $E_{tot}$ and $F_{tot}$ identical to (6) and (7) (after cancelling $\epsilon$), and the conserved quantity
$$X_{tot} = \sum_{i=0}^n x_i + \epsilon\sum_{i=1}^n (y_{1,i}+y_{2,i}).$$
The expressions in the rescaled variables remain regular as $\epsilon$ tends to zero, in the sense that the functions on the right-hand sides of the equations extend in a $C^\infty$-manner to $\epsilon=0$, and in the limit some of the evolution equations reduce to algebraic equations. This is an example of the standard situation considered in GSPT. (For an introduction to GSPT see [19, Chapter 1].) In general we have a system of equations of the form
$$x' = f(x,y,\epsilon), \qquad \epsilon y' = g(x,y,\epsilon).$$
Here the prime denotes the derivative with respect to $\tau$ and $\epsilon$ is a parameter. Introducing a new time variable $t=\tau/\epsilon$ and denoting the derivative with respect to $t$ by a dot leads to the extended system
$$\dot{x} = \epsilon f(x,y,\epsilon), \qquad \dot{y} = g(x,y,\epsilon), \qquad \dot{\epsilon} = 0. \quad (14)$$
The variable $x$ is generally referred to as the slow variable and $y$ as the fast variable. Assume that the equation $g(x,y,0)=0$ is equivalent to $y=h(x)$ for a continuously differentiable function $h$, so that the zero set of $g$ is a manifold. Then for $\epsilon=0$ the two equations for $x$ and $y$ in (14) are equivalent to the system $x'=f(x,h(x),0)$, called the limiting system. Of particular importance are the eigenvalues of the derivative of $g$ with respect to $y$. In this context these are known as transverse eigenvalues.

The central conclusion is that when none of the transverse eigenvalues has zero real part, there exists an invariant manifold for the extended system, called the slow manifold, with the following properties:
• Its restriction to $\epsilon=0$ is the zero set of $g$.
• The restriction of the extended system to the invariant manifold is a system which, when written in terms of the time variable $\tau$, depends in a regular manner on $\epsilon$ and agrees with the limiting system for $\epsilon=0$.
The meaning of the word 'regular' here is not only that the right-hand sides of the equations are functions of the parameter which for $\epsilon=0$ are as differentiable as the original system, but also that the equations can be solved for the time derivatives.

Consider now the evolution equations (9)-(13) for the multiple futile cycle with the variables $x_E$ and $x_F$ eliminated using the rescaled conservation equations, the analogues of (6) and (7). Then the variables $x_i$ might be taken as the slow variables $x$ in the general set-up and the variables $y_{j,i}$ as the fast variables $y$. Here, the equation $g(x,y,0)=0$ is equivalent to setting the right-hand sides of (2) and (3) to zero and eliminating $x_E$ and $x_F$ using the conservation equations. By setting (2) and (3) to zero we obtain
$$y_{1,i} = K_{1,i}^{-1}x_{i-1}x_E, \qquad y_{2,i} = K_{2,i}^{-1}x_ix_F, \qquad \text{where } K_{j,i} = \frac{d_{j,i}+k_{j,i}}{a_{j,i}}, \text{ for } j=1,2,\ i=1,\dots,n. \quad (15)$$
Using the conserved quantities $E_{tot}$ and $F_{tot}$, we further have by insertion
$$E_{tot} = x_E\Big(1+\sum_{\ell=1}^n K_{1,\ell}^{-1}x_{\ell-1}\Big), \qquad F_{tot} = x_F\Big(1+\sum_{\ell=1}^n K_{2,\ell}^{-1}x_\ell\Big),$$
$$y_{1,i} = \frac{K_{1,i}^{-1}E_{tot}\,x_{i-1}}{1+\sum_{\ell=1}^n K_{1,\ell}^{-1}x_{\ell-1}}, \qquad y_{2,i} = \frac{K_{2,i}^{-1}F_{tot}\,x_i}{1+\sum_{\ell=1}^n K_{2,\ell}^{-1}x_\ell} \quad (16)$$
for $i=1,\dots,n$. These expressions define the function $h$ above.

The transverse eigenvalues are by definition the eigenvalues of the linearisation $L$ of the right-hand side of the evolution equations for the concentrations of the substrate-enzyme complexes with respect to those complexes. That is, the linearisation of the equations, for $i=1,\dots,n$,
$$a_{1,i}x_{i-1}\Big(E_{tot}-\sum_{\ell=1}^n y_{1,\ell}\Big) - (d_{1,i}+k_{1,i})y_{1,i}, \qquad a_{2,i}x_i\Big(F_{tot}-\sum_{\ell=1}^n y_{2,\ell}\Big) - (d_{2,i}+k_{2,i})y_{2,i}.$$
The matrix $L$ can be seen as the sum of a diagonal matrix $L_1$ and another matrix $L_2$. The diagonal elements of $L_1$ are $-(d_{1,i}+k_{1,i})$ for the rows corresponding to $y_{1,i}$, $i=1,\dots,n$, and $-(d_{2,i}+k_{2,i})$ for the rows corresponding to $y_{2,i}$, $i=1,\dots,n$. The matrix $L_2$ is block diagonal, with one block corresponding to each of the two enzymes. We only need to consider one of these blocks, as the other one can be treated in a strictly analogous manner. Consider the block corresponding to the enzyme E. All entries of the $i$-th row of the block are equal to $-a_{1,i}x_{i-1}$. The transverse eigenvalues have negative real part because all eigenvalues of $-L$ have positive real part, as shown in the lemma below.

Lemma 2.1. If $M=(m_{ij})$ is a matrix with elements of the form $m_{ij}=a_{ij}+b_i$, where $a_{ij}=0$ for all $i\neq j$ and all $a_{ii}$ and $b_i$ are positive, then all eigenvalues of $M$ have positive real parts.
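Lemma 2.1 is easy to probe numerically before reading the proof. The sketch below (illustrative only; variable names are ours) samples random matrices of the stated form, $M=\mathrm{diag}(a)+B$ with every entry of row $i$ of $B$ equal to $b_i>0$, and checks that all eigenvalues have positive real part.

```python
import numpy as np

# Numerical sanity check of Lemma 2.1 (illustration, not a proof).
rng = np.random.default_rng(1)
for _ in range(200):
    m = rng.integers(2, 8)              # matrix size
    a = rng.uniform(0.1, 5.0, m)        # positive diagonal part a_ii
    b = rng.uniform(0.1, 5.0, m)        # positive row constants b_i
    M = np.diag(a) + np.outer(b, np.ones(m))
    assert np.linalg.eigvals(M).real.min() > 0
```

In all sampled cases the eigenvalues are in fact real and positive, consistent with the lemma.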
The proof is given in Subsection 5.1.

We conclude that GSPT applies to this situation and that there exists a slow manifold. The restriction of the extended system to the slow manifold gives a family of dynamical systems depending regularly on the parameter $\epsilon$. The level sets of the conserved quantity defined by $X_{tot}$ are transverse to the slow manifold, and so restricting to a constant value of this conserved quantity also results in a regular parameter-dependent dynamical system, which we call the completely reduced system. Note that the dimension of the restriction of the extended system to the slow manifold is greater by one than that of the completely reduced system.

At this point we need some material from the theory of dynamical systems. Background on this can be found in [22]. Let $\dot{x}=f(x)$ be a system of ODEs on $\mathbb{R}^n$, where $f$ is $C^1$. Let $x_0$ be a steady state of this system, that is, a point where $f(x_0)=0$. Let $A=Df(x_0)$ denote the derivative (Jacobian) of the right-hand side of the equations at $x_0$. Then $\mathbb{R}^n$ can be written as the direct sum of three linear subspaces $E^-$, $E^c$ and $E^+$, which are spanned by the real and imaginary parts of the generalised eigenvectors of $A$ corresponding to the eigenvalues whose real parts are negative, zero and positive, respectively. These subspaces are called the stable, centre and unstable subspaces at $x_0$. If there are no eigenvalues with zero real part, so that $E^c$ is trivial, then the point $x_0$ is said to be hyperbolic. Suppose now that we have a parameter-dependent system $\dot{x}=f(x,\alpha)$ where $f(x_0,0)=0$ and $f$ is $C^1$ in its dependence on both $x$ and $\alpha$. The point $x_0$ is a steady state of the system for $\alpha=0$. If this steady state is hyperbolic, then it follows that for $\alpha$ close to zero there exists a unique steady state close to $x_0$ and that it is hyperbolic. The dimensions of the stable and unstable subspaces of this steady state are independent of $\alpha$ for $\alpha$ small.

Returning to our concrete example, if the limiting system has $k$ hyperbolic steady states, then so does the system defined by applying all three conservation laws to the original system for $\epsilon$ small (playing the role of $\alpha$ here). The stability type of these steady states is preserved, in the sense that the dimensions of the stable and unstable manifolds are independent of $\epsilon$. In particular, if for one of the steady states of the limiting system the stable subspace is the whole of $\mathbb{R}^n$, then this is also true for the corresponding steady state with $\epsilon>0$. Moreover, this is known to imply that the steady state is asymptotically stable. In general a steady state of the restriction of the system to the slow manifold is a steady state of the extended system, and the dimension of its stable manifold in the extended system is the sum of the dimension of its stable manifold in the extended system restricted to the slow manifold and the number of transverse eigenvalues with negative real part.

Using the fact that the right-hand sides of (10) and (11) are zero for $\epsilon=0$ and the expressions in (16), it follows that the limiting system, which in this case will also be referred to as the Michaelis-Menten (or MM) system, consists of the equations
$$\frac{dx_i}{d\tau} = -\frac{k_{1,i+1}K_{1,i+1}^{-1}E_{tot}\,x_i}{1+\sum_{\ell=1}^n K_{1,\ell}^{-1}x_{\ell-1}} + \frac{k_{1,i}K_{1,i}^{-1}E_{tot}\,x_{i-1}}{1+\sum_{\ell=1}^n K_{1,\ell}^{-1}x_{\ell-1}} - \frac{k_{2,i}K_{2,i}^{-1}F_{tot}\,x_i}{1+\sum_{\ell=1}^n K_{2,\ell}^{-1}x_\ell} + \frac{k_{2,i+1}K_{2,i+1}^{-1}F_{tot}\,x_{i+1}}{1+\sum_{\ell=1}^n K_{2,\ell}^{-1}x_\ell}$$
for $i=0,\dots,n$, where we adopt the convention that the symbols $K_{1,0}^{-1}$, $K_{2,0}^{-1}$, $K_{1,n+1}^{-1}$ and $K_{2,n+1}^{-1}$ are defined to be zero.

The rest of the paper is devoted to proving the following theorems.

Theorem 2.2.
There exists a choice of positive parameters (reaction rate constants and $X_{tot}$) such that the Michaelis-Menten system admits $2\lfloor n/2\rfloor+1$ steady states in the linear invariant subspace defined by the total amount $X_{tot}$. Furthermore, relative to this invariant subspace, $\lfloor n/2\rfloor+1$ of these steady states are asymptotically stable and hyperbolic, and $\lfloor n/2\rfloor$ are unstable and hyperbolic with a one-dimensional unstable manifold.

As a consequence, GSPT allows us to conclude that the original system with evolution equations (1)-(5) also admits a choice of positive parameters such that there are $\lfloor n/2\rfloor+1$ asymptotically stable steady states and $\lfloor n/2\rfloor$ unstable steady states. Indeed, given that the transverse eigenvalues are all negative, this means that for $\epsilon$ small there exist steady states of the original system corresponding to the steady states of the MM system. They are hyperbolic and have corresponding stability properties. The sinks remain sinks and the dimension of the unstable manifolds of the other steady states remains one. This completes the proof of the theorem below.

Theorem 2.3. There exists a choice of positive parameters (reaction rate constants and total amounts) such that the $n$-site phosphorylation system with evolution equations (1)-(5) admits $2\lfloor n/2\rfloor+1$ steady states in the linear invariant subspace defined by the total amounts $E_{tot}$, $F_{tot}$ and $X_{tot}$. Furthermore, relative to this invariant subspace, $\lfloor n/2\rfloor+1$ of these steady states are asymptotically stable and hyperbolic, and $\lfloor n/2\rfloor$ are unstable and hyperbolic with a one-dimensional unstable manifold.

This section is devoted to selecting 'nice' parameters such that the MM system has $2\lfloor n/2\rfloor+1$ positive steady states with the same $X_{tot}$, of which $\lfloor n/2\rfloor+1$ are asymptotically stable. To achieve this, we first select parameter values such that $(1,\dots,1)\in\mathbb{R}^{n+1}_{>0}$ is the only positive steady state of the MM system and has multiplicity $2\lfloor n/2\rfloor+1$. We then study the stability of this steady state. To this end, we need some centre manifold theory, and this will now be reviewed. Background on this can be found in [4] and [20]. Consider once again the system $\dot{x}=f(x)$ and a steady state $x_0$. Suppose that $f$ is of class $C^k$ for some finite $k\geq 1$. We restrict consideration to a small neighbourhood of $x_0$. There exist manifolds $V^-$, $V^c$ and $V^+$ of class $C^k$ passing through $x_0$ that are invariant under the flow generated by the differential equations and are tangent to $E^-$, $E^c$ and $E^+$, respectively, at $x_0$. They are referred to as stable, centre and unstable manifolds of the system at $x_0$. $V^-$ and $V^+$ are unique, but $V^c$ is in general not unique. Fortunately, this lack of uniqueness is usually not a problem in applications, as will also be illustrated in our case (to be argued later). It may be noted in passing that the analogues of the statements just made are in general not true if $C^k$ with $k$ finite is replaced everywhere by $C^\infty$, in the sense that there might not exist a $C^\infty$ centre manifold [20, Section 5.1].

We show that the Jacobian of the MM system evaluated at the steady state has rank $n-1$. Since one of the two zero eigenvalues corresponds to the conserved quantity $X_{tot}$, this implies that in the system defined by restriction to a fixed value of $X_{tot}$, this point has a one-dimensional centre manifold. We proceed to study this centre manifold and conclude that the steady state is asymptotically stable. Finally, we use a perturbation argument to ensure that modified parameters can be found for which there are $2\lfloor n/2\rfloor+1$ positive steady states, of which $\lfloor n/2\rfloor+1$ are asymptotically stable and the remaining ones unstable. In the following we assume $n\geq 2$, and note that the case $n=1$ has already been solved [1]. The outline of this section is as follows: first suitable parameter values are chosen, then the centre manifold at the resulting steady state is analysed, and finally a perturbation argument yields $\lfloor n/2\rfloor+1$ asymptotically stable steady states of the MM system; these are then lifted to the original system.

We let
$$X_{tot}=n+1, \qquad F_{tot}=E_{tot}=1, \qquad k_{1,i}=K_{1,i}, \quad k_{2,i}=K_{2,i}, \quad i=1,\dots,n.$$
To simplify the notation, we define
$$\alpha_i = K_{1,i}^{-1}, \qquad \beta_i = K_{2,i}^{-1}, \qquad i=1,\dots,n.$$
Given arbitrary positive values of $K_{j,i}$, one can always find positive values of $d_{j,i}$, $a_{j,i}$, $k_{j,i}$ corresponding to $K_{j,i}$, see (15). We will therefore be concerned with choosing the $K_{j,i}$. With these definitions the MM system becomes:
$$\frac{dx_0}{d\tau} = \frac{-x_0}{1+\sum_{\ell=1}^n\alpha_\ell x_{\ell-1}} + \frac{x_1}{1+\sum_{\ell=1}^n\beta_\ell x_\ell},$$
$$\frac{dx_i}{d\tau} = \frac{x_{i-1}-x_i}{1+\sum_{\ell=1}^n\alpha_\ell x_{\ell-1}} + \frac{x_{i+1}-x_i}{1+\sum_{\ell=1}^n\beta_\ell x_\ell}, \quad i=1,\dots,n-1, \quad (17)$$
$$\frac{dx_n}{d\tau} = \frac{x_{n-1}}{1+\sum_{\ell=1}^n\alpha_\ell x_{\ell-1}} - \frac{x_n}{1+\sum_{\ell=1}^n\beta_\ell x_\ell}.$$
Recall that the sum of these equations is zero.

We start by reducing the steady state system to a polynomial equation in one variable. We equate the right-hand side of (17) to zero. Let
$$u = \frac{1+\sum_{\ell=1}^n\beta_\ell x_\ell}{1+\sum_{\ell=1}^n\alpha_\ell x_{\ell-1}}. \quad (18)$$
By setting the first equation in (17) to zero, we obtain $x_1=ux_0$. Using this relation and $\frac{dx_1}{d\tau}=0$ we obtain $x_2=ux_1=u^2x_0$. By iteration we obtain
$$x_i = u^ix_0, \qquad i=0,\dots,n. \quad (19)$$
The assumption on the conserved quantity, $X_{tot}=n+1$, implies $n+1=x_0(1+u+\dots+u^n)$, hence
$$x_0 = \frac{n+1}{1+u+\dots+u^n}. \quad (20)$$
Using (18) and (19) we find
$$1+\sum_{\ell=1}^n\beta_\ell u^\ell x_0 = u+\sum_{\ell=1}^n\alpha_\ell u^\ell x_0.$$
Inserting $x_0$ from (20) and multiplying by $1+u+\dots+u^n$ gives
$$1+u+\dots+u^n + \sum_{\ell=1}^n\beta_\ell u^\ell(n+1) = u(1+u+\dots+u^n) + \sum_{\ell=1}^n\alpha_\ell u^\ell(n+1).$$
Finally, rearranging the terms results in the following polynomial equation in one variable:
$$u^{n+1} + (n+1)(\alpha_n-\beta_n)u^n + \dots + (n+1)(\alpha_i-\beta_i)u^i + \dots + (n+1)(\alpha_1-\beta_1)u - 1 = 0. \quad (21)$$
We conclude that the positive steady states of the MM system are in one-to-one correspondence with the positive solutions of the univariate polynomial equation in (21). Given a solution $u$ of this equation, $x_0,\dots,x_n$ are found from (20) and (19).

For any choice of values of $\alpha_i,\beta_i$ such that
$$(n+1)(\alpha_i-\beta_i) = \begin{cases} (-1)^{i+1}\binom{n+1}{i} & \text{for } n \text{ even,} \\ (-1)^{i+1}\Big(\binom{n}{i}-\binom{n}{i-1}\Big) & \text{for } n \text{ odd,} \end{cases} \qquad i=1,\dots,n, \quad (22)$$
the polynomial on the left-hand side of (21) is $(u-1)^{n+1}$ for $n$ even and $(u-1)^n(u+1)$ for $n$ odd. So the only positive root is $u=1$, which in turn implies that $x_0=1$, and the steady state is $p=(1,\dots,1)$, with multiplicity $2\lfloor n/2\rfloor+1$. Note that positive $\alpha_i,\beta_i$ can always be found such that (22) is satisfied.

We move on to show that $\alpha_i,\beta_i$ additionally can be chosen such that the Jacobian of the MM system evaluated at $p$ has rank $n-1$, with one of the two zero eigenvalues corresponding to the conserved quantity $X_{tot}$. If the non-zero eigenvalues have negative real part, then the stability of the steady state might be understood from studying the behaviour of the system along the direction corresponding to the second zero eigenvalue. This is done in Section 3.2.

For this purpose we introduce a combinatorial quantity. Define
$$\gamma_n(i) := \begin{cases} \dfrac{(-1)^{i+1}}{n+1}\dbinom{n}{i} & \text{for } n \text{ even,} \\[2ex] \dfrac{(-1)^{i+1}}{n+1}\Big(\dbinom{n-1}{i}-\dbinom{n-1}{i-1}\Big) = \dfrac{(-1)^{i+1}(n-2i)}{n(n+1)}\dbinom{n}{i} & \text{for } n \text{ odd,} \end{cases} \quad (23)$$
for $i=0,\dots,n$. Note that $\gamma_n(0)=\gamma_n(n)=-\frac{1}{n+1}$ for all $n$. Let $\gamma_n=(\gamma_n(0),\dots,\gamma_n(n))$. Furthermore, we recall a standard binomial identity [23]:
$$0 = \sum_{j=0}^n (-1)^jP(j)\binom{n}{j} \quad \text{for any polynomial } P \text{ of degree smaller than } n. \quad (24)$$
In particular, we have
$$0 = \sum_{j=0}^n (-1)^j\binom{n}{j} \ \text{ for } n>0, \qquad \sum_{j=0}^n (-1)^j(j+i)\binom{n}{j} = \sum_{j=1}^n (-1)^j j\binom{n}{j} = 0 \ \text{ for any } i\in\mathbb{Z} \text{ and } n>1.$$
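The collapse of the polynomial (21) under the choice (22) can be checked numerically; the following sketch (illustrative only; function name ours) builds the coefficient vector prescribed by (22) and compares it with the expansion of $(u-1)^{n+1}$, respectively $(u-1)^n(u+1)$.

```python
import numpy as np
from math import comb

def poly_coeffs(n):
    """Coefficients (low degree first) of the left-hand side of (21)
    under the choice (22): u^{n+1} + sum_i (n+1)(alpha_i - beta_i) u^i - 1."""
    c = np.zeros(n + 2)
    c[n + 1] = 1.0
    c[0] = -1.0
    for i in range(1, n + 1):
        if n % 2 == 0:
            c[i] = (-1) ** (i + 1) * comb(n + 1, i)
        else:
            c[i] = (-1) ** (i + 1) * (comb(n, i) - comb(n, i - 1))
    return c

P = np.polynomial.polynomial
for n in range(2, 9):
    if n % 2 == 0:
        target = P.polypow([-1.0, 1.0], n + 1)            # (u-1)^{n+1}
    else:
        target = P.polymul(P.polypow([-1.0, 1.0], n),      # (u-1)^n (u+1)
                           [1.0, 1.0])
    assert np.allclose(poly_coeffs(n), target)
```

The agreement for a range of $n$ illustrates that $u=1$ is then the only positive root, with multiplicity $n+1$ for $n$ even and $n$ for $n$ odd, that is, $2\lfloor n/2\rfloor+1$ in both cases.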
Additionally, we will apply Pascal's recurrence:
$$\binom{n}{i-1}+\binom{n}{i} = \binom{n+1}{i}, \quad (25)$$
which is valid for non-negative integers $i$ and $n$.

Proposition 3.1. For any real $\alpha$, define
$$\beta_i := \alpha-\gamma_n(i)-\frac{1}{n+1}, \quad i=1,\dots,n, \qquad \alpha_1 := \alpha, \qquad \alpha_i := \beta_{i-1}, \quad i=2,\dots,n.$$
Then (22) is satisfied and further $\beta_n=\alpha$, and
$$1+\sum_{i=1}^n\alpha_i = 1+\sum_{i=1}^n\beta_i = \alpha n.$$

Proof. By definition, $\beta_n = \alpha-\big({-\frac{1}{n+1}}\big)-\frac{1}{n+1} = \alpha$. For $i=1,\dots,n$ the relation $\alpha_i = \alpha-\gamma_n(i-1)-\frac{1}{n+1}$ holds, since $\alpha_i=\beta_{i-1}$ for $i\geq 2$ and $\gamma_n(0)=-\frac{1}{n+1}$. This gives
$$\alpha_i-\beta_i = -\gamma_n(i-1)+\gamma_n(i).$$
For $n$ even we have
$$\alpha_i-\beta_i = \frac{(-1)^{i+1}}{n+1}\binom{n}{i-1} + \frac{(-1)^{i+1}}{n+1}\binom{n}{i} = \frac{(-1)^{i+1}}{n+1}\binom{n+1}{i},$$
where (25) is applied. If $n$ is odd, then using the expression for $\gamma_n(i)$ in (23), we obtain
$$\alpha_i-\beta_i = \frac{(-1)^{i+1}}{n+1}\left(\binom{n-1}{i-1}-\binom{n-1}{i-2}+\binom{n-1}{i}-\binom{n-1}{i-1}\right) = \frac{(-1)^{i+1}}{n+1}\left(\binom{n}{i}-\binom{n}{i-1}\right),$$
where (25) is used twice. This concludes the proof of (22).

It remains to show that $1+\sum_{i=1}^n\alpha_i = 1+\sum_{i=1}^n\beta_i = \alpha n$. The first equality is a consequence of the definition of $\alpha_i$ and $\alpha=\beta_n$. For the second equality, we have
$$1+\sum_{i=1}^n\beta_i = 1+\alpha+\sum_{i=1}^{n-1}\Big(\alpha-\gamma_n(i)-\frac{1}{n+1}\Big) = 1+n\alpha-\frac{n-1}{n+1}-\sum_{i=1}^{n-1}\gamma_n(i).$$
If $(n+1)\sum_{i=1}^{n-1}\gamma_n(i)=2$ then we are done. To show this, apply (24) with $P(j)=n-2j$ for $n$ odd and $P(j)=n$ for $n$ even (recalling that $n>1$):
$$\sum_{i=1}^{n-1}\gamma_n(i) = -\frac{1}{n(n+1)}\sum_{i=1}^{n-1}(-1)^iP(i)\binom{n}{i} = \frac{1}{n(n+1)}\big(P(0)+(-1)^nP(n)\big) = \begin{cases} \frac{2n}{n(n+1)} = \frac{2}{n+1} & \text{for } n \text{ even,} \\ \frac{n-(-n)}{n(n+1)} = \frac{2}{n+1} & \text{for } n \text{ odd.} \end{cases}$$
This concludes the proof.

According to Proposition 3.1 and the discussion after (22), by choosing $\alpha>0$ such that $\beta_i>0$ for $i=1,\dots,n$, the point $p=(1,\dots,1)$ is a steady state of the MM system. In the remaining part of the text, we consider $\alpha_i,\beta_i$ chosen such that this is the case. We proceed to study the Jacobian matrix $J\in\mathbb{R}^{(n+1)\times(n+1)}$ of the MM system (17) evaluated at $p$, with this choice of parameters. We obtain the following partial derivatives of the MM system (17), evaluated at $p$:
$$\frac{\partial}{\partial x_j}\left(\frac{x_i}{1+\sum_{\ell=1}^n\alpha_\ell x_{\ell-1}}\right)\bigg|_{x=p} = \begin{cases} \frac{\alpha n-\alpha_{j+1}}{(\alpha n)^2} & \text{for } i=j, \\ \frac{-\alpha_{j+1}}{(\alpha n)^2} & \text{for } i\neq j, \end{cases} \qquad \frac{\partial}{\partial x_j}\left(\frac{x_i}{1+\sum_{\ell=1}^n\beta_\ell x_\ell}\right)\bigg|_{x=p} = \begin{cases} \frac{\alpha n-\beta_j}{(\alpha n)^2} & \text{for } i=j, \\ \frac{-\beta_j}{(\alpha n)^2} & \text{for } i\neq j, \end{cases}$$
with the conventions $\alpha_{n+1}=\beta_0=0$. The matrix $J$ might be reformulated in terms of two other matrices:
$$(\alpha n)J = J_1 + \frac{1}{\alpha n}J_2 = \begin{pmatrix} -1 & 1 & & & \\ 1 & -2 & 1 & & \\ & \ddots & \ddots & \ddots & \\ & & 1 & -2 & 1 \\ & & & 1 & -1 \end{pmatrix} + \frac{1}{\alpha n}\begin{pmatrix} \alpha_1 & \alpha_2-\beta_1 & \cdots & \alpha_n-\beta_{n-1} & -\beta_n \\ 0 & 0 & \cdots & 0 & 0 \\ \vdots & \vdots & & \vdots & \vdots \\ 0 & 0 & \cdots & 0 & 0 \\ -\alpha_1 & -(\alpha_2-\beta_1) & \cdots & -(\alpha_n-\beta_{n-1}) & \beta_n \end{pmatrix}.$$
The rows $2,\dots,n$ of $J_2$ are zero because in the $j$-th entry the contributions $-\alpha_{j+1}+\alpha_{j+1}+\beta_j-\beta_j$ cancel. Using Proposition 3.1 (so that $\alpha_1=\beta_n=\alpha$ and $\alpha_{j+1}=\beta_j$ for $j=1,\dots,n-1$), the matrix $(\alpha n)J$ becomes the following symmetric matrix:
$$\tilde{J} := (\alpha n)J = \begin{pmatrix} -1+\frac{1}{n} & 1 & 0 & \cdots & 0 & -\frac{1}{n} \\ 1 & -2 & 1 & & & 0 \\ 0 & 1 & -2 & \ddots & & \vdots \\ \vdots & & \ddots & \ddots & 1 & 0 \\ 0 & & & 1 & -2 & 1 \\ -\frac{1}{n} & 0 & \cdots & 0 & 1 & -1+\frac{1}{n} \end{pmatrix} \in \mathbb{R}^{(n+1)\times(n+1)}. \quad (26)$$
The matrix $\tilde{J}$ is independent of the choice of $\alpha$. The rows $2,\dots,n$ of this matrix are clearly linearly independent.

Proposition 3.2.
Consider the matrix $\tilde{J}$ in (26).
• $\tilde{J}$ has rank $n-1$. An orthogonal basis of the kernel of $\tilde{J}$ is formed by the vector $(1,\dots,1)$ and the vector $v=(v_0,\dots,v_n)\in\mathbb{R}^{n+1}$ with $v_i=n-2i$, $i=0,\dots,n$.
• $\tilde{J}$ has $n-1$ real negative eigenvalues, counted with multiplicity, and two zero eigenvalues.

Proof. It is straightforward to see that $(1,\dots,1)$ belongs to the kernel of $\tilde{J}$, because the column sums are zero. This vector is linearly independent of $v$. To show that $v$ also is in the kernel of $\tilde{J}$, we note that the scalar product of $v$ with the rows $i=2,\dots,n$ is
$$v_{i-2}-2v_{i-1}+v_i = (n-2i+4)-2(n-2i+2)+(n-2i) = 0.$$
For the rows with index $1$ and $n+1$, we have
$$\Big({-1}+\frac{1}{n}\Big)v_0 + v_1 - \frac{1}{n}v_n = (-n+1)+(n-2)+1 = 0,$$
$$-\frac{1}{n}v_0 + v_{n-1} + \Big({-1}+\frac{1}{n}\Big)v_n = -1+(-n+2)+(n-1) = 0.$$
Hence $v$ is in the kernel of $\tilde{J}$. Since the rank of $\tilde{J}$ is at least $n-1$, $(1,\dots,1)$ and $v$ form a basis of the kernel. A simple computation shows that the sum of the entries of $v$ is zero, that is, $(1,\dots,1)\cdot v=0$:
$$\sum_{i=0}^n (n-2i) = n(n+1) - 2\sum_{i=0}^n i = n(n+1) - 2\,\frac{n(n+1)}{2} = 0.$$
Hence the basis is orthogonal.

To study the eigenvalues of $\tilde{J}$ we proceed as follows. Since $\tilde{J}$ is real and symmetric, all eigenvalues are real. We already know that zero is an eigenvalue with multiplicity two. By Descartes' rule of signs, if all coefficients of the characteristic polynomial of $\tilde{J}$ are non-negative, then the number of positive roots is zero, and hence the remaining $n-1$ eigenvalues are negative. To show that the coefficients of the characteristic polynomial of $\tilde{J}$ are non-negative, we show that for $i=1,\dots,n-1$, all non-zero principal minors of size $i$ of $\tilde{J}$ have sign $(-1)^i$. Clearly, this holds for the principal minors of size 1, since the diagonal entries of $\tilde{J}$ are all negative (recall $n\geq 2$). We first consider matrices of size $i$ of the form
$$A_i := \begin{pmatrix} -a_1 & 1 & & \\ 1 & -a_2 & \ddots & \\ & \ddots & \ddots & 1 \\ & & 1 & -a_i \end{pmatrix} \quad (27)$$
for the following cases:
(1) $a_j=2$ for all $j=1,\dots,i$.
(2) $a_1=1-\frac{1}{n}$ and $a_j=2$ for $j>1$.
(3) $a_i=1-\frac{1}{n}$ and $a_j=2$ for $j<i$.
For the three cases, define respective vectors: (1) $x=(1,\dots,1)$, (2) $x=(n,n-1,\dots,n-i+1)$, and (3) $x=(n-i+1,\dots,n-1,n)$. Then the components of $-A_ix$ are non-negative. Using [12, Theorem 5.4] on the matrix $-A_i$ and subsequently using [12, Theorem 5.1 2°], we conclude that the principal minors of $-A_i$ are non-negative; hence in particular $\det(A_i)$ is either zero or has sign $(-1)^i$.

For a subset $H$ of $\{1,\dots,n+1\}$ of size $n+1-j$, $j=2,\dots,n-1$, we consider the submatrix $\tilde{J}_H$ of size $j$ of $\tilde{J}$ obtained by removing the columns and rows with indices in $H$. If $H$ contains $1$ or $n+1$, then the submatrix $\tilde{J}_H$ is a block matrix with blocks of the form (27). Hence the determinant of $\tilde{J}_H$ is the product of the determinants of the blocks, and it is either zero or has sign $(-1)^j$. If $H$ contains neither $1$ nor $n+1$, then $\tilde{J}_H$ has the form
$$\tilde{J}_H = \begin{pmatrix} -1+\frac{1}{n} & z_1 & 0 & \cdots & 0 & -\frac{1}{n} \\ z_1 & & & & & 0 \\ 0 & & & & & \vdots \\ \vdots & & B & & & 0 \\ 0 & & & & & z_2 \\ -\frac{1}{n} & 0 & \cdots & 0 & z_2 & -1+\frac{1}{n} \end{pmatrix},$$
where $z_1=1$ if $2\notin H$ and $z_1=0$ otherwise, $z_2=1$ if $n\notin H$ and $z_2=0$ otherwise, and $B$ is a block matrix of size $j-2$. If $z_1=z_2=0$, then $\det(\tilde{J}_H)$ equals $\det(B)$ times the determinant of
$$\begin{pmatrix} -1+\frac{1}{n} & -\frac{1}{n} \\ -\frac{1}{n} & -1+\frac{1}{n} \end{pmatrix}.$$
Since the latter is positive, the determinant has the desired sign.

If $z_1=1$ and $z_2=0$ (and analogously for $z_1=0$, $z_2=1$), then we expand the determinant along the last column, whose only non-zero entries are $-\frac{1}{n}$ in the first row and $-1+\frac{1}{n}$ in the last row. Note that in this case we necessarily have $j\geq 3$. Let $B'$ denote the matrix obtained by removing the first column and the first row of $B$; then $B'$ is a block matrix of size $j-3$. We then have
$$\det(\tilde{J}_H) = \Big(1-\frac{2}{n}\Big)\det(B) + \Big(1-\frac{1}{n}\Big)\det(B').$$
11y assumption n ≥
2. Further, det( B ) has sign ( − j − if non-zero, and det( B ) has sign ( − j − ifnon-zero. Hence, the determinant has sign ( − j if non-zero.Finally, assume z = 1 and z = 1. Then j ≥
4. The matrix $B$ has at least two blocks. If $B$ has more than two blocks, say $\ell$ blocks $B_1, \dots, B_\ell$, the determinant of $\widetilde J_H$ agrees with the product of the determinants of $B_i$, $i = 2, \dots, \ell-1$, times the determinant of a matrix of the same form as $\widetilde J_H$ but with only two blocks $B_1$ and $B_\ell$. Since the determinant of $B_i$, $i = 2, \dots, \ell-1$, has the desired sign, it is sufficient to show that the determinant of $\widetilde J_H$ has sign $(-1)^j$ whenever $B$ has exactly two blocks:
$$\widetilde J_H = \begin{pmatrix} \frac1n - 1 & 1 & & & & & -\frac1n \\ 1 & -2 & \ddots & & & & \\ & \ddots & \ddots & 1 & & & \\ & & 1 & -2 & 0 & & \\ & & & 0 & -2 & 1 & \\ & & & & \ddots & \ddots & 1 \\ -\frac1n & & & & & 1 & \frac1n - 1 \end{pmatrix},$$
where the zero entries in the middle mark the boundary between the two blocks and all omitted entries are zero. We will show by induction in $j$ that the determinant is $(-1)^j\,\frac{n-j}{n}$, which has the desired sign (recall $j \leq n-1$). For the induction basis we need to consider the two smallest cases $j = 4, 5$, and it is also convenient to check separately the case $j = 6$. For $j = 4$, the matrix $\widetilde J_H$ is
$$\begin{pmatrix} \frac1n - 1 & 1 & 0 & -\frac1n \\ 1 & -2 & 0 & 0 \\ 0 & 0 & -2 & 1 \\ -\frac1n & 0 & 1 & \frac1n - 1 \end{pmatrix},$$
which has determinant $\frac{n-4}{n}$. Similarly, for $j = 5, 6$, we check that the determinant is $-\frac{n-5}{n}$ and $\frac{n-6}{n}$, respectively.

Assume now that the statement holds for all $4 \leq j' \leq j-1$. We will prove the statement for $j' = j$, with $j \geq 7$. Let $i, i+1$ be the indices of the last column of the first block and the first column of the second block of $B$, respectively. Since $j \geq 7$, at least one of the two inequalities holds: $i \geq 4$ or $j - i \geq 4$. We assume that $i \geq 4$, meaning the first block of $B$ has at least size 3, and the other case follows symmetrically. The statement will follow by applying the Laplace expansion of the determinant of $\widetilde J_H$ along the rows $i-1, i$. For this, we let $\widetilde J_{H,\{i-1,i\},\{j_1,j_2\}}$ be the matrix obtained by removing the $(i-1)$-th and $i$-th rows and the $j_1$-th and $j_2$-th columns of $\widetilde J_H$. Then
$$\det(\widetilde J_H) = \det\begin{pmatrix} 1 & -2 \\ 0 & 1 \end{pmatrix}\det\big(\widetilde J_{H,\{i-1,i\},\{i-2,i-1\}}\big) + \det\begin{pmatrix} -2 & 1 \\ 1 & -2 \end{pmatrix}\det\big(\widetilde J_{H,\{i-1,i\},\{i-1,i\}}\big) - \det\begin{pmatrix} 1 & 1 \\ 0 & -2 \end{pmatrix}\det\big(\widetilde J_{H,\{i-1,i\},\{i-2,i\}}\big).$$
The matrix $\widetilde J_{H,\{i-1,i\},\{i-2,i-1\}}$ has one zero column, hence the first term is zero. By the induction hypothesis,
$$\det\big(\widetilde J_{H,\{i-1,i\},\{i-1,i\}}\big) = (-1)^{j-2}\,\frac{n-j+2}{n}.$$
The matrix $\widetilde J_{H,\{i-1,i\},\{i-2,i\}}$ has a column with only one nonzero entry, equal to one; expanding along this column reduces the computation to a matrix of the same form as $\widetilde J_H$ of size $j-3$. Here we use that $i \geq$
4. These considerations give:
$$\det(\widetilde J_H) = 3(-1)^{j-2}\,\frac{n-j+2}{n} + 2(-1)^{j-3}\,\frac{n-j+3}{n} = (-1)^{j-2}\,\frac{3(n-j+2) - 2(n-j+3)}{n} = (-1)^j\,\frac{n-j}{n},$$
as claimed. This concludes the proof.

3.2 Study of the centre manifold

The next step towards the proof of Theorem 2.2 is to complete the study of the stability properties of the steady state $p$ by inspecting its centre manifold. Recall the MM system with our choice of parameters given in (17), and the two orthogonal vectors spanning the kernel of the Jacobian of this system evaluated at the steady state $p$ (see Proposition 3.2):
$$v = (v_0, \dots, v_n), \quad v_i = n - 2i, \ i = 0, \dots, n, \qquad e = (1, \dots, 1).$$
Recall also that the dynamics of (17) around $p$ is confined to the linear subspace $n+1 = x_0 + \cdots + x_n$. Let us refer to the system within this subspace as the restricted system. Since $v$ satisfies $e \cdot v = 0$, the centre subspace of the steady state with respect to the restricted system is spanned by $v$. This implies that the centre manifold of $p$ is one dimensional and admits a parametrisation (at least locally around $p$) of the form
$$x_i(s) = 1 + v_i s + h_i(s), \quad i = 0, \dots, n, \qquad (28)$$
where
$$h_0(s) = 0, \qquad h_n(s) = -\sum_{\ell=0}^{n-1} h_\ell(s), \qquad (29)$$
and all $h_i$ vanish at least as fast as $s^2$ near 0. Let $h(s) = (h_0(s), \dots, h_n(s))$. Since $x_0(s) = 1 + ns$, we have $s(x_0) = \frac{x_0 - 1}{n}$ for $x$ on the centre manifold (at least locally around $p$). Although the centre manifold may not be uniquely determined, this will not cause a problem. The general theory says that for two different choices of the centre manifold, if the system, and hence the functions $h_i$, are $C^k$, then the functions $h_i$ for the two choices of the centre manifold agree up to order $k$. Thus we choose a fixed centre manifold and the arguments which follow are independent of that choice. Note that since the system itself is $C^\infty$ there exists a centre manifold of class $C^k$ for any finite $k$.
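As a quick consistency check on the two kernel vectors used in this parametrisation, the following sketch (illustrative only, not part of the proof; it assumes a Python environment with SymPy) verifies symbolically that $v = (n, n-2, \dots, -n)$ sums to zero, i.e. is orthogonal to $e = (1, \dots, 1)$:

```python
import sympy as sp

i, n = sp.symbols('i n', integer=True, nonnegative=True)

# v_i = n - 2i for i = 0..n; e = (1,...,1), so e.v = sum_{i=0}^n (n - 2i)
dot_ev = sp.simplify(sp.summation(n - 2*i, (i, 0, n)))
print(dot_ev)  # 0: v is orthogonal to e

# numeric spot check for a concrete n
N = 6
v = [N - 2*k for k in range(N + 1)]
assert sum(v) == 0
```

The same computation is the one displayed in the proof of orthogonality above.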
To justify the calculations in the following we just need to choose $k$ sufficiently large.

We investigate the equations of the centre manifold by selecting $n-1$ linearly independent vectors $\nu_1, \dots, \nu_{n-1}$ in $\langle v, e \rangle^\perp \subseteq \mathbb{R}^{n+1}$. Using (28) we obtain the equations $\nu_i \cdot x(s) = \nu_i \cdot h(s)$, $i = 1, \dots, n-1$, or equally, in terms of the variable $x_0$ (on the centre manifold),
$$\nu_i \cdot x = \nu_i \cdot h(s(x_0)), \quad i = 1, \dots, n-1.$$
If $f$ denotes the right-hand side of the MM system (17), then evaluation at $x = x(\tau)$ and differentiation with respect to time $\tau$ gives
$$\nu_i \cdot f(x(\tau)) = \nu_i \cdot h'(s(x_0(\tau)))\,\frac{d}{d\tau}s(x_0(\tau)) = \nu_i \cdot h'(s(x_0(\tau)))\,\frac1n\,\frac{d}{d\tau}x_0(\tau) = \frac{f_0(x(\tau))}{n}\,\nu_i \cdot h'(s(x_0(\tau))),$$
or simply,
$$\nu_i \cdot f(x) = \frac{f_0(x)}{n}\,\nu_i \cdot h'(s(x_0)), \quad i = 1, \dots, n-1. \qquad (30)$$
(Note that $x(\tau)$ and $x(s)$ mean two different things.)

Consider the vector field $f$ multiplied by the denominators of $f$ and expressed in terms of the parameterisation of the centre manifold. It takes the form
$$w_0(s) = -x_0(s)\Big(1 + \sum_{\ell=1}^n \beta_\ell x_\ell(s)\Big) + x_1(s)\Big(1 + \sum_{\ell=1}^n \alpha_\ell x_{\ell-1}(s)\Big),$$
$$w_i(s) = (x_{i-1}(s) - x_i(s))\Big(1 + \sum_{\ell=1}^n \beta_\ell x_\ell(s)\Big) + (x_{i+1}(s) - x_i(s))\Big(1 + \sum_{\ell=1}^n \alpha_\ell x_{\ell-1}(s)\Big), \quad i = 1, \dots, n-1,$$
$$w_n(s) = x_{n-1}(s)\Big(1 + \sum_{\ell=1}^n \beta_\ell x_\ell(s)\Big) - x_n(s)\Big(1 + \sum_{\ell=1}^n \alpha_\ell x_{\ell-1}(s)\Big).$$
Let $w(s) = (w_0(s), \dots, w_n(s))$. Then, using (30) and the definition of $w$, we find
$$n\,(\nu_j \cdot w) = (\nu_j \cdot h')\,w_0, \quad j = 1, \dots, n-1, \qquad (31)$$
where the dependence on $s$ is suppressed. This expression implies that the linear combination on the left side vanishes at least at one order higher than $w_0$, since $h'$ vanishes at least at order one.

Our goal is to show that the first non-zero coefficient in the Taylor expansion of $w_0$ is negative and of order $n+1$ if $n$ is even and of order $n$ if $n$ is odd.
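Why a negative leading coefficient of odd order entails attraction can be seen in a toy model: for $s' = -a s^M$ with $a > 0$ and $M$ odd, the origin attracts from both sides. A minimal numeric sketch (illustrative constants, Euler integration; not the actual reduced vector field):

```python
# s' = -a*s**M with a > 0 and M odd: |s| decreases monotonically to 0.
a, M = 2.0, 5

def step(s, dt=1e-3):
    # one explicit Euler step of s' = -a*s**M
    return s + dt * (-a * s**M)

for s0 in (0.5, -0.5):
    s = s0
    for _ in range(200000):
        s = step(s)
    # the trajectory has moved towards the origin from either side
    assert abs(s) < abs(s0)
```

For an even order $M$ the origin would instead be attracting on one side and repelling on the other, which is why the parity of $M$ matters in the theorem below.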
If this is so, then it must be that locally around $p$ the flow of the vector field $w_0(s)$ is attracted towards $p$, as a negative leading coefficient of odd order implies that the absolute value $|s|$ is decreasing towards zero. Specifically, we prove the following theorem, where $w^{(m)}(s)$ denotes the $m$-th order term in the Taylor expansion of $w$.

Theorem 3.3. The first non-vanishing term in the Taylor expansion of $w_0$ is negative. Specifically, the first non-vanishing term is
$$w_0^{(M)}(s) = -\frac{3 \cdot 2^{n+2}\,n}{(n+1)^2(n+2)}\,s^M,$$
with $M = n+1$ if $n$ is even and $M = n$ if $n$ is odd.

In order to prove Theorem 3.3, we investigate the order terms $w_i^{(m)}(s)$ of $w_i(s)$ through a series of technical lemmas. We start by introducing a new quantity. Let
$$\Gamma_n(s) := \sum_{\ell=0}^n \gamma_n(\ell)\,h_\ell(s) = \gamma_n \cdot h(s), \qquad (32)$$
and note that the zeroth and first order terms of $\Gamma_n$ vanish. The proof of the next lemma is in Subsection 5.2.

Lemma 3.4.
Based on (28), the following two relations hold:
$$1 + \sum_{\ell=1}^n \beta_\ell x_\ell(s) = \alpha_1 n - \alpha_1 n s - \Gamma_n(s),$$
$$1 + \sum_{\ell=1}^n \alpha_\ell x_{\ell-1}(s) = \alpha_1 n + \alpha_1 n s - \alpha_1 h_n(s) - \Gamma_n(s).$$

Using Lemma 3.4 and (28), we have the following expressions:
$$w_0(s) = 2\alpha_1 n(n-1)s^2 + \alpha_1(n h_1 - h_n) + \alpha_1\big((2-n)h_n + n h_1\big)s - \alpha_1 h_n h_1 + (2s - h_1)\Gamma_n,$$
$$w_i(s) = -4\alpha_1 n s^2 + \alpha_1 n (h_{i-1} - 2h_i + h_{i+1}) + \alpha_1(-n h_{i-1} + n h_{i+1} + 2h_n)s \qquad (33)$$
$$\qquad\qquad + \alpha_1 h_n (h_i - h_{i+1}) + (-h_{i-1} + 2h_i - h_{i+1})\Gamma_n, \quad \text{for } i = 1, \dots, n-1,$$
$$w_n(s) = 2\alpha_1 n(n-1)s^2 + \alpha_1\big(n h_{n-1} + (1-n)h_n\big) - \alpha_1 n (2h_n + h_{n-1})s + \alpha_1 h_n^2 + (h_n - h_{n-1} - 2s)\Gamma_n,$$
where the dependence on $s$ of the $h_i$ and $\Gamma_n$ is omitted to ease notation. The details of the proof of (33) are not given.

A key ingredient in the proof of Theorem 3.3 is the following function of $s$:
$$\chi_n(s) = \sum_{j=1}^n j\,w_j(s) = -\tfrac12\,v \cdot w(s), \qquad (34)$$
where the second equality follows from the trivial fact that $e \cdot w = 0$ and the definition of $v$. We further obtain (see Subsection 5.3 for a proof):
$$\chi_n(s) = (h_n(s) - 2ns)\,\Gamma_n(s), \qquad (35)$$
which implies that the terms of $\chi_n$ of order zero, one and two vanish. We also note that $w^{(0)}(s) = w^{(1)}(s) = 0$, which trivially follows from (33).

Lemma 3.5.
Consider the following statements, where $m \geq 2$ and $\nu_1, \dots, \nu_{n-1}$ is a basis of $\langle v, e \rangle^\perp \subseteq \mathbb{R}^{n+1}$:
(i) $w^{(m-1)}(s) = 0$.
(ii) $\nu_j \cdot w^{(m)}(s) = 0$ for all $j = 1, \dots, n-1$.
(iii) $w^{(m)}(s) = K_m v s^m$, that is, $w_i^{(m)}(s) = K_m v_i s^m$ for all $i = 0, \dots, n$ and some $K_m \in \mathbb{R}$.
(iv) $\chi_n^{(m)}(s) = -K_m\,\frac{n(n+1)(n+2)}{6}\,s^m$ for some $K_m \in \mathbb{R}$.
Then, for $m \geq 2$, the following implications hold: (i) $\Rightarrow$ (ii) $\Rightarrow$ (iii) $\Rightarrow$ (iv).

Proof. (i) $\Rightarrow$ (ii) follows from (31). If (ii) holds, since $e \cdot w = 0$, we have $w^{(m)} \in \langle \nu_1, \dots, \nu_{n-1}, e \rangle^\perp = \langle v \rangle$, which implies (iii). Finally, assume (iii); then, using (34), we have
$$\chi_n^{(m)} = \sum_{j=1}^n j\,w_j^{(m)} = \sum_{j=1}^n j\,K_m(n-2j)s^m = K_m s^m\Big(n\sum_{j=1}^n j - 2\sum_{j=1}^n j^2\Big) = K_m s^m\Big(\tfrac{n^2(n+1)}{2} - \tfrac{n(n+1)(2n+1)}{3}\Big) = -K_m\,\tfrac{n(n+1)(n+2)}{6}\,s^m.$$
This completes the proof.

Let
$$c_0^{(m)} = \Big(-2n(n-1)s^2 - \big((2-n)h_n + n h_1\big)s + h_n h_1 - (2s - h_1)\tfrac{\Gamma_n(s)}{\alpha_1}\Big)^{(m)},$$
$$c_i^{(m)} = \Big(4ns^2 - (-n h_{i-1} + n h_{i+1} + 2h_n)s - h_n(h_i - h_{i+1}) - (-h_{i-1} + 2h_i - h_{i+1})\tfrac{\Gamma_n(s)}{\alpha_1}\Big)^{(m)}, \qquad (36)$$
$$c_n^{(m)} = \Big(-2n(n-1)s^2 + n(2h_n + h_{n-1})s - h_n^2 - (h_n - h_{n-1} - 2s)\tfrac{\Gamma_n(s)}{\alpha_1}\Big)^{(m)},$$
where $i = 1, \dots, n-1$, and consider the matrix
$$B = \begin{pmatrix} 1 & 2 & \cdots & n-2 & -\frac2n \\ 0 & 1 & \cdots & n-3 & -\frac3n \\ \vdots & & \ddots & & \vdots \\ 0 & \cdots & 0 & 1 & -\frac{n-1}{n} \\ 0 & \cdots & 0 & 0 & -1 \end{pmatrix} \in \mathbb{R}^{(n-1)\times(n-1)}.$$

Lemma 3.6. If $w^{(m)} = 0$ and $\Gamma_n^{(m')} = 0$ for all $m' \leq m-1$, then
$$h_i^{(m)} = i\,h_1^{(m)} + \Big(B\big(\tfrac{c_3^{(m)}}{n}, \dots, \tfrac{c_n^{(m)}}{n}, c_0^{(m)}\big)^t\Big)_{i-1}, \quad i = 2, \dots, n.$$

Proof.
First of all note that $e \cdot w^{(m)} = 0$, and also $(n+1, n, \dots, 2, 1) \cdot w^{(m)} = 0$. Indeed, $(n+1, n, \dots, 2, 1) = -(0, 1, \dots, n) + (n+1)e$ and hence, by (35),
$$(n+1, n, \dots, 2, 1) \cdot w^{(m)} = -\chi_n^{(m)} = -\big((h_n(s) - 2ns)\Gamma_n(s)\big)^{(m)} = 0,$$
by hypothesis. The system $w^{(m)} = 0$ can be written in matrix form as
$$\widehat A\,\big(h_1^{(m)}, \dots, h_n^{(m)}\big)^t = c^{(m)} \qquad (37)$$
with $c^{(m)} = \big(c_0^{(m)}, \dots, c_n^{(m)}\big)^t$ and
$$\widehat A = \begin{pmatrix} n & 0 & \cdots & 0 & -1 \\ -2n & n & & & 0 \\ n & -2n & n & & \vdots \\ & \ddots & \ddots & \ddots & \\ & & n & -2n & n \\ 0 & \cdots & 0 & n & 1-n \end{pmatrix} \in \mathbb{R}^{(n+1)\times n}.$$
This matrix has rank $n-1$, since the rows $2, \dots, n$ are linearly independent. Since $e \cdot \widehat A = (n+1, n, \dots, 1) \cdot \widehat A = 0$, also $e \cdot c^{(m)} = (n+1, n, \dots, 1) \cdot c^{(m)} = 0$. Hence, by deleting the second and the third row of $\widehat A$, moving the first row to be the last, the first column to be the last, dividing the first $n-2$ rows by $n$, and reorganising the vector $\big(h_1^{(m)}, \dots, h_n^{(m)}\big)$, system (37) can be rewritten as
$$A\begin{pmatrix} h_2^{(m)} \\ \vdots \\ h_n^{(m)} \\ h_1^{(m)} \end{pmatrix} = \begin{pmatrix} c_3^{(m)}/n \\ \vdots \\ c_n^{(m)}/n \\ c_0^{(m)} \end{pmatrix},$$
with
$$A = \begin{pmatrix} 1 & -2 & 1 & & & 0 \\ & 1 & -2 & 1 & & \vdots \\ & & \ddots & \ddots & \ddots & \\ & & & 1 & \frac{1-n}{n} & 0 \\ 0 & \cdots & & 0 & -1 & n \end{pmatrix} \in \mathbb{R}^{(n-1)\times n}. \qquad (38)$$
A straightforward computation shows that
$$BA = \begin{pmatrix} 1 & & & & -2 \\ & 1 & & & -3 \\ & & \ddots & & \vdots \\ & & & 1 & -n \end{pmatrix}.$$
Multiplying both sides of equality (38) by $B$ gives the expression in the statement.

Let
$$\ell^{[k]} = \ell(\ell-1)\cdots(\ell-k+1) = \frac{\ell!}{(\ell-k)!} = \binom{\ell}{k}\,k!$$
be the $k$-th descending factorial for $k = 0, \dots, n-$
1. By definition, $\ell^{[k]} = 0$ if $k > \ell$. If $k = 0$, then $\ell^{[0]} = 1$ for all $\ell \geq 0$. Furthermore, $0^{[0]} = 1$ and $0^{[k]} = 0$ for $k > 0$. We introduce the following vectors in $\mathbb{R}^{n+1}$:
$$e^{[k]} = \big(0^{[k]}, 1^{[k]}, \dots, n^{[k]}\big), \quad k = 0, \dots, n.$$
In particular, $e^{[1]} = (0, 1, 2, \dots, n)$ and $e^{[2]} = (0, 0, 2, 6, \dots, i(i-1), \dots, n(n-1))$.

Lemma 3.7. For $k = 0, \dots, n-1$, if $n$ is even, and for $k = 0, \dots, n-2$, if $n$ is odd, we have $e^{[k]} \cdot \gamma_n = 0$. Furthermore,
$$e^{[n]} \cdot \gamma_n = -\frac{n!}{n+1} \ \text{ for } n \text{ even}, \qquad e^{[n-1]} \cdot \gamma_n = -\frac{2(n-1)!}{n+1} \ \text{ for } n \text{ odd}.$$

The above lemma shows that $\gamma_n$ is orthogonal to $e^{[k]}$ for all $k = 0, \dots, n-1$ if $n$ is even, and for all $k = 0, \dots, n-2$ if $n$ is odd. We note also the following identity for any non-negative integers $i \geq k \geq 0$:
$$\sum_{\ell=0}^i \ell^{[k]} = \sum_{\ell=k}^i \binom{\ell}{k}k! = \binom{i+1}{k+1}k! = \frac{1}{k+1}\,i^{[k+1]} + i^{[k]}. \qquad (39)$$

Lemma 3.8.
Let $M > 1$ and assume that $\Gamma_n^{(m)} = 0$ for all $0 \leq m \leq M$. Then we have
(i) $w^{(m)} = 0$ for $0 \leq m \leq M+1$,
(ii) for $2 \leq m \leq M+1$,
$$h^{(m)}(s) = \sum_{j=1}^m c_{mj}\,e^{[j]}\,s^m, \quad\text{or equivalently}\quad h_\ell^{(m)}(s) = \sum_{j=1}^m c_{mj}\,\ell^{[j]}\,s^m, \quad \ell = 0, \dots, n,$$
where $c_{mj} \in \mathbb{R}$, $j = 1, \dots, m$, are constants such that $c_{mm} = \frac{(-2)^m}{m!}$.

Proof. (i) The proof is by induction on $m$. By construction $w^{(0)} = w^{(1)} = 0$. Now assume that $w^{(m)} = 0$ for an index satisfying $0 \leq m \leq M$. We show that the statement holds for $m+1$. Since $w^{(m)} = 0$, then from Lemma 3.5, $w^{(m+1)} = K_{m+1} v s^{m+1}$ and $\chi_n^{(m+1)} = -K_{m+1}\frac{n(n+1)(n+2)}{6}s^{m+1}$. On the other hand, by assumption, $\Gamma_n^{(j)} = 0$ for $0 \leq j \leq M$, in particular for $0 \leq j \leq m$, and combining this with equation (35) yields
$$\chi_n^{(m+1)}(s) = \sum_{j=2}^{m-1} h_n^{(j)}(s)\,\Gamma_n^{(m+1-j)}(s) - 2ns\,\Gamma_n^{(m)}(s) = 0.$$
Hence $K_{m+1} = 0$, and thus $w^{(m+1)} = 0$. This shows (i).

(ii) is proven by induction in $m$. Consider $m = 2$. The form of $h_i^{(2)}$ follows from Lemma 3.6, using that $c_0^{(2)} = -2n(n-1)s^2$, $c_i^{(2)} = 4ns^2$ for $i \neq 0, n$, and $c_n^{(2)} = -2n(n-1)s^2$, see (36). Indeed, $h_i^{(2)}$ equals $i h_1^{(2)}$ plus the $(i-1)$-th entry of $B\big(c_3^{(2)}/n, \dots, c_n^{(2)}/n, c_0^{(2)}\big)^t$, which is
$$\tfrac1n c_{i+1}^{(2)} + \tfrac2n c_{i+2}^{(2)} + \cdots + \tfrac{n-i}{n}c_n^{(2)} - \tfrac{i}{n}c_0^{(2)} = s^2\big(2(n-i)(n-i-1) - 2(n-1)(n-2i)\big) = 2i(i-1)s^2.$$
In vector notation, this is $h^{(2)} = h_1^{(2)}e^{[1]} + 2s^2 e^{[2]}$. Since $h_1^{(2)}$ is of the form $c s^2$ for some $c \in \mathbb{R}$, we have $h^{(2)} = \big(c_{21}e^{[1]} + c_{22}e^{[2]}\big)s^2$, $c_{22} = 2$, and the claim is true for $m = 2$.

Now assume the claim is true for all $m'$ such that $2 \leq m' \leq m \leq M$ and consider $m' = m+1$. We start by describing $h^{(m+1)}$. Since $w^{(m+1)} = 0$ by (i) and $\Gamma_n^{(m)} = 0$, we might apply Lemma 3.6. Since $\Gamma_n^{(m')} = 0$ for $0 \leq m' \leq m$ by assumption, and $m \geq 2$, it holds that
$$c_0^{(m+1)} = -\big((2-n)h_n^{(m)} + n h_1^{(m)}\big)s + \sum_{j=2}^{m-1} h_1^{(j)}h_n^{(m+1-j)},$$
$$c_i^{(m+1)} = -\big(-n h_{i-1}^{(m)} + n h_{i+1}^{(m)} + 2h_n^{(m)}\big)s - \sum_{j=2}^{m-1}\big(h_i^{(j)} - h_{i+1}^{(j)}\big)h_n^{(m+1-j)}, \quad i = 1, \dots, n-1,$$
$$c_n^{(m+1)} = n\big(2h_n^{(m)} + h_{n-1}^{(m)}\big)s - \sum_{j=2}^{m-1} h_n^{(j)}h_n^{(m+1-j)}.$$
Then, by Lemma 3.6, $h_i^{(m+1)}$ equals $i h_1^{(m+1)}$ plus the $(i-1)$-th entry of $B\big(c_3^{(m+1)}/n, \dots, c_n^{(m+1)}/n, c_0^{(m+1)}\big)^t$. That is, for $i = 2, \dots, n$,
$$h_i^{(m+1)} = i h_1^{(m+1)} + \tfrac1n c_{i+1}^{(m+1)} + \tfrac2n c_{i+2}^{(m+1)} + \cdots + \tfrac{n-i}{n}c_n^{(m+1)} - \tfrac{i}{n}c_0^{(m+1)} = i h_1^{(m+1)} + D_1 s + D_2,$$
where a direct computation gives
$$D_1 = i h_1^{(m)} + \frac{i(1-i)}{n}h_n^{(m)} + h_i^{(m)} + 2\sum_{\ell=i+1}^{n} h_\ell^{(m)}, \qquad D_2 = -\frac2n\sum_{j=2}^{m-1}\Big(\sum_{\ell=i+1}^n h_\ell^{(j)} + i h_1^{(j)}\Big)h_n^{(m+1-j)}.$$
Let $g^{(q)}$ be defined component-wise as $g_i^{(q)} = \sum_{\ell=i+1}^n h_\ell^{(q)}$.
Then, the above result might be given in vector notation as:
$$h^{(m+1)} = h_1^{(m+1)}e^{[1]} + \Big(h_1^{(m)}e^{[1]} - \tfrac1n h_n^{(m)}e^{[2]} + h^{(m)} + 2g^{(m)}\Big)s - \tfrac2n\sum_{j=2}^{m-1}\big(g^{(j)} + h_1^{(j)}e^{[1]}\big)h_n^{(m+1-j)}. \qquad (40)$$
Using $\sum_{\ell=0}^n h_\ell = 0$ (29), equation (39), and the induction hypothesis, we note that
$$g_i^{(q)} = -\sum_{\ell=0}^i h_\ell^{(q)} = -\sum_{j=1}^q c_{qj}\sum_{\ell=0}^i \ell^{[j]}\,s^q = -\sum_{j=1}^q c_{qj}\Big(\tfrac{i^{[j+1]}}{j+1} + i^{[j]}\Big)s^q = -\sum_{j=1}^{q+1}\Big(\tfrac{c_{q(j-1)}}{j} + c_{qj}\Big)i^{[j]}s^q, \quad 2 \leq q \leq m,$$
where $c_{q0} = c_{q(q+1)} = 0$. Consequently,
$$g^{(q)} = -\sum_{j=1}^{q+1}\Big(\tfrac{c_{q(j-1)}}{j} + c_{qj}\Big)e^{[j]}s^q, \quad 2 \leq q \leq m.$$
Note that $h_j^{(q)}(s) = z_{qj}s^q$ for some $z_{qj} \in \mathbb{R}$. Substituting these expressions and the induction hypothesis into (40) and collecting powers of $s$, every term is a multiple of some $e^{[k]}s^{m+1}$ with $k \leq m+1$. This shows that $h^{(m+1)}$ takes the form stated in the lemma. The coefficient of $e^{[m+1]}s^{m+1}$ is, by the induction hypothesis,
$$c_{(m+1)(m+1)} = -2\Big(\frac{c_{mm}}{m+1} + c_{m(m+1)}\Big) = -\frac{2c_{mm}}{m+1} = \frac{(-2)^{m+1}}{(m+1)!},$$
as required.

Lemma 3.9. $\Gamma_n^{(m)} = 0$ for all $m = 0, \dots, n-1$ if $n$ is even and for all $m = 0, \dots, n-2$ if $n$ is odd.

Proof. Let $M = n-1$ if $n$ is even and $M = n-2$ if $n$ is odd. The proof is by induction in $m$. By construction $\Gamma_n^{(0)} = 0$. Assume $\Gamma_n^{(m')} = 0$ for all $0 \leq m' \leq m < M$, and consider $m+1$. By Lemma 3.8(ii) and the induction hypothesis, the vector of coefficients of $s^{m+1}$ in $h^{(m+1)}$ lies in the vector space spanned by the vectors $e^{[1]}, \dots, e^{[m+1]}$. Now, using (32) and Lemma 3.7, we have $\Gamma_n^{(m+1)} = \gamma_n \cdot h^{(m+1)} = 0$, since $m+1 \leq M$.

We are now in a situation where we can prove Theorem 3.3.
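Before turning to the proof, the two small combinatorial facts used repeatedly above — identity (39) and the weighted sum behind Lemma 3.5(iv) — can be spot-checked with the following illustrative sketch (not part of the proof; it assumes a Python environment with SymPy):

```python
import sympy as sp
from math import comb, factorial

def ff(l, k):
    # descending factorial l^[k] = l(l-1)...(l-k+1), zero when k > l
    return comb(l, k) * factorial(k) if l >= k else 0

# identity (39): sum_{l=0}^i l^[k] = i^[k+1]/(k+1) + i^[k]
for i in range(1, 12):
    for k in range(0, i + 1):
        lhs = sum(ff(l, k) for l in range(i + 1))
        assert (k + 1) * lhs == ff(i, k + 1) + (k + 1) * ff(i, k)

# Lemma 3.5: sum_{j=1}^n j*(n - 2j) = -n(n+1)(n+2)/6
j, n = sp.symbols('j n', positive=True, integer=True)
expr = sp.summation(j * (n - 2 * j), (j, 1, n))
assert sp.simplify(expr + n * (n + 1) * (n + 2) / 6) == 0
```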
Proof of Theorem 3.3.
Let $M = n+1$ if $n$ is even and $M = n$ if $n$ is odd. By Lemma 3.9, $\Gamma_n^{(m)} = 0$ for all $0 \leq m \leq M-2$. Then, by Lemma 3.8(i), $w^{(m)} = 0$ (and in particular $w_0^{(m)} = 0$), for all $1 \leq m \leq M-1$. By Lemma 3.5,
$$w_0^{(M)} = K_M\,n\,s^M \quad\text{and}\quad \chi_n^{(M)}(s) = -K_M\,\frac{n(n+1)(n+2)}{6}\,s^M.$$
Furthermore, by Lemma 3.8(ii), the vector of coefficients of $s^{M-1}$ in $h^{(M-1)}$ lies in the vector space spanned by the vectors $e^{[1]}, \dots, e^{[M-1]}$, and the coefficient of $e^{[M-1]}$ is $c_{(M-1)(M-1)}$. Using (32) and Lemma 3.7, we have
$$\Gamma_n^{(M-1)}(s) = \gamma_n \cdot h^{(M-1)}(s) = c_{(M-1)(M-1)}\,\big(\gamma_n \cdot e^{[M-1]}\big)\,s^{M-1}.$$
By (35), we have
$$\chi_n^{(M)}(s) = \sum_{j=2}^{M-2} h_n^{(j)}(s)\,\Gamma_n^{(M-j)}(s) - 2ns\,\Gamma_n^{(M-1)}(s) = -2n\,c_{(M-1)(M-1)}\,\big(\gamma_n \cdot e^{[M-1]}\big)\,s^M,$$
where we use that the first summand vanishes. This implies that
$$-K_M\,\frac{n(n+1)(n+2)}{6} = -2n\,c_{(M-1)(M-1)}\,\big(\gamma_n \cdot e^{[M-1]}\big),$$
hence
$$K_M = \frac{12\,c_{(M-1)(M-1)}\,\big(\gamma_n \cdot e^{[M-1]}\big)}{(n+1)(n+2)}.$$
If $n$ is even, then $M-1 = n$, $(-2)^n = 2^n$, and we have by Lemma 3.7 and Lemma 3.8(ii) that
$$K_{n+1} = -\frac{12 \cdot 2^n\,n!}{n!\,(n+1)^2(n+2)} = -\frac{3 \cdot 2^{n+2}}{(n+1)^2(n+2)}.$$
Similarly, if $n$ is odd, then $M-1 = n-1$ and
$$K_n = -\frac{12 \cdot 2^{n-1} \cdot 2\,(n-1)!}{(n-1)!\,(n+1)^2(n+2)} = -\frac{3 \cdot 2^{n+2}}{(n+1)^2(n+2)}.$$
The statement now follows from $w_0^{(M)} = K_M\,n\,s^M$. This concludes the proof of Theorem 3.3. These computations are supported by computations in Maple for up to $n = 15$.

We have shown that the MM system admits a steady state of multiplicity $2\lfloor n/2 \rfloor + 1$ with $n-1$ negative transverse eigenvalues, and such that the first non-vanishing coefficient of the Taylor expansion of $w_0(s)$ is negative. For brevity, as in the previous section, we denote this multiplicity by $M$. Next we will perturb the system so that the multiple steady state splits into $M$ steady states of multiplicity one.

To do this note first that given any polynomial of degree $n+1$ whose leading coefficient is 1 and whose constant term is $-1$, we can choose the coefficients $\alpha_i$ and $\beta_i$ in (21) so as to reproduce the given polynomial. Moreover this can be done in such a way that if the coefficients in the polynomial are varied, the coefficients $\alpha_i$ and $\beta_i$ depend smoothly on the coefficients of the polynomial. (In what follows the term 'smooth' is used to mean $C^\infty$.) Next we introduce a specific family of polynomials depending on a parameter $\mu$ by replacing the function $(u-1)^{2k+1}$ used previously by
$$(u-1)\prod_{\ell=1}^{k}\big(u - (1+\ell\mu)\big)\big(u - (1+\ell\mu)^{-1}\big).$$
This works for $n$ even. We construct a similar family of polynomials if $n$ is odd. We can then choose coefficients $\alpha_i(\mu)$ and $\beta_i(\mu)$ depending smoothly on $\mu$ to reproduce this family. After that we can choose parameters of the MM system depending smoothly on $\mu$ so as to give rise to these coefficients $\alpha_i$ and $\beta_i$. In the end we have a family of MM systems depending smoothly on the parameter $\mu$.

For $\mu = 0$, we have the multiple steady state which has been studied before. For convenience it will be referred to as the bifurcation point. As $\mu$ is varied the root of the polynomial at $u = 1$, which corresponds to the bifurcation point, splits into simple roots. As is common in bifurcation theory we introduce a suspended system by adjoining the equation $\mu' = 0$ to the given evolution equations. This increases the dimension of the system by one. The bifurcation point is also a steady state of the suspended system. Its centre manifold as a solution of that system is of dimension two and is foliated by invariant curves of constant $\mu$. The invariant curve with $\mu = 0$ is the centre manifold of the bifurcation point with respect to the MM system studied previously. The invariant curve for a non-zero value of $\mu$ will be referred to as the 'perturbed centre manifold', although it should be emphasised that it itself is not the centre manifold of anything. It is a general property of a centre manifold of any steady state that all other steady states sufficiently close to the original one lie on that centre manifold. Thus for $\mu$ small all steady states of the parameter-dependent MM system for a fixed value of $\mu$ close to the bifurcation point lie on the perturbed centre manifold.

The non-zero roots $u$ of the polynomial are simple.
This suggests that the corresponding steady states of the MM system might be hyperbolic, but this is not obvious. It will now be proved that it is in fact true. The restriction of the system to the two-dimensional centre manifold of the bifurcation point with respect to the suspended system can be thought of as a one-dimensional dynamical system depending on the parameter $\mu$. Let us write it in the form $s'(\tau, \mu) = q(s, \mu)$, where $q(s, 0)$ is the restriction of the vector field $f$ to the centre manifold parametrised with $s$, and $'$ denotes the derivative with respect to $\tau$. The function $q$ has one zero for $\mu = 0$ and $M$ zeroes for $\mu \neq 0$. The aim is to show that $q'$ is non-vanishing at each zero of $q$ for $\mu \neq 0$ and $\mu$ sufficiently small. This will be proved by contradiction.

If the statement is false, then there is a sequence $\mu_j$ such that $\mu_j \to 0$ as $j \to \infty$, and such that the function $q(s, \mu_j)$ has at least one degenerate root. In particular $q(s, \mu_j)$ has at least $M+1$ roots counted with multiplicity. Then $q'$ has at least $M$ roots, all of which must lie between the smallest and the largest roots of $q$. Continuing in this fashion, it can be concluded that $q^{(M)}(s, \mu_j)$ has at least one root, which is between the smallest and the largest root of $q(s, \mu_j)$. Hence by continuity it follows that $q^{(M)}(0, 0) = 0$. This contradicts Theorem 3.3, which says that $q^{(M)}(0, 0) \neq 0$, and so in reality $q'$ does not vanish at any zero of $q$ for $\mu \neq 0$. Note that it does not matter whether we argue about the vector field $w$ or $f$ in Theorem 3.3. Since $w = rf$ with $r(s)$ an arbitrarily often differentiable positive function with $r(0) = 1$, we have $f(s) \approx -a s^M$ for some $a > 0$ near $s = 0$.

It follows from this argument that for any $\mu \neq 0$ the steady states of the restriction of the dynamical system to the perturbed centre manifold are hyperbolic. Since the other eigenvalues of the linearisation at the bifurcation point are negative, it follows by continuity that at nearby points with $\mu \neq 0$, all eigenvalues have non-zero real parts. More information can be obtained by considering the sign of the function $q'$ at those points where it is not zero. At each steady state the sign changes, as no zeros are degenerate. The sign can be determined by continuity, since in Theorem 3.3 we have established the sign in the case $\mu = 0$. Namely, since $M$ is odd and $q^{(M)}(0, 0) < 0$, locally around zero, $q(s, 0)$ is positive for $s < 0$ and negative for $s > 0$. It follows that within the perturbed centre manifold sinks and sources alternate and the outermost steady states are sinks. Hence if these points, considered as steady states of the full MM system, are ordered in a suitable way, sinks alternate with saddle points whose unstable manifolds are one-dimensional, and the outermost ones are sinks. This completes the proof of Theorem 2.2.
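The alternation of sinks and sources just described is a general feature of one-dimensional vector fields with simple zeros and negative leading behaviour. An illustrative toy check (assuming NumPy; $q$ here is a stand-in polynomial with $M = 5$ simple zeros, not the actual reduced vector field):

```python
import numpy as np

mu = 0.1
zeros = np.array([-2 * mu, -mu, 0.0, mu, 2 * mu])  # M = 5 simple equilibria

def q(s):
    # leading term -s**5, mimicking q(s, 0) ~ -a s**M with a > 0, M odd
    return -np.prod(s - zeros)

def dq(s, h=1e-6):
    # central difference approximation of q'
    return (q(s + h) - q(s - h)) / (2 * h)

signs = [np.sign(dq(z)) for z in zeros]
# q' alternates in sign along the line of equilibria ...
assert all(signs[i] != signs[i + 1] for i in range(len(signs) - 1))
# ... and the outermost equilibria are sinks (q' < 0 there)
assert signs[0] == -1 and signs[-1] == -1
```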
Perhaps it is of relevance to point out differences and similarities to previous work. The proof of Theorem 2.3 is substantially different from the proof given in [15] for the case $n = 2$. Generalising the proof of [15] seems non-trivial and impracticable already for $n = 4$, the first case that would have given results establishing the existence of more than the two stable steady states known for $n = 2$. The present proof revolves around several key points. First of all, a reduction of the system using time scale separation that produces a one-dimensional centre manifold is performed. A similar procedure has been applied to investigate the stability of steady states in a dynamical system modelling a different biological situation, the Calvin cycle [6]. Secondly, we obtain the desired number of steady states from a single bifurcation point. Since the centre manifold is one-dimensional, this allows us to conclude the stability of the steady states as we unfold the bifurcation point by parameter perturbation.

It is natural to wonder to what extent these techniques could be developed into a general procedure or statement, as unlimited multistationarity has been established for other families of systems [11, 18]. For the specific reduction of the system to be applicable, we need the transverse eigenvalues to have negative real parts. This is true not only for this case but in general for reduction of systems by (so-called) removal of non-interacting species [8–10, 24]. So this part of the proof could potentially be generalised. However, it seems non-trivial to devise techniques to guarantee and analyse a one-dimensional centre manifold for which a bifurcation point exists.

As mentioned in the introduction, the steady states whose stability has been investigated here are not the most general steady states of the multiple futile cycle, in the sense that there might exist more than $2\lfloor n/2 \rfloor + 1$ steady states for some parameter choices. It was proven in [27] that $2n-1$ steady states exist for suitable parameter values when $n = 3$; for $n \geq 4$, it is not known whether $2n-1$ steady states can be realised.

Recall the statement of the lemma: If $M = (m_{ij})$ is a square matrix such that $m_{ij} = a_{ij} + b_i$, where $a_{ij} = 0$ for all $i \neq j$ and all $a_{ii}$ and $b_i$ are positive, then all eigenvalues of $M$ have positive real part. To prove this result, consider, for $n \geq$
1, the following $n \times n$ matrix:
$$A_n = \begin{pmatrix} a_1 + b_1 & b_1 & \cdots & b_1 \\ b_2 & a_2 + b_2 & \cdots & b_2 \\ \vdots & & \ddots & \vdots \\ b_n & b_n & \cdots & a_n + b_n \end{pmatrix},$$
where $a_i, b_i > 0$. We want to show that all eigenvalues of $A_n$ have positive real part. Given a matrix $B \in \mathbb{R}^{n\times n}$ and two sets $I, J \subseteq \{1, \dots, n\}$ of cardinality $r \leq n$, we denote by $B_{I,J} \in \mathbb{R}^{r\times r}$ the submatrix of $B$ consisting of the rows indexed by $I$ and the columns indexed by $J$. Then:
• A matrix $B \in \mathbb{R}^{n\times n}$ is a P-matrix if all principal minors $\det(B_{I,I})$, for $I \subseteq \{1, \dots, n\}$, are positive.
• A matrix $B \in \mathbb{R}^{n\times n}$ is sign-symmetric if for every pair of sets $I, J \subseteq \{1, \dots, n\}$ of cardinality $r \leq n$ it holds that $\operatorname{sign}(\det(B_{I,J}))\operatorname{sign}(\det(B_{J,I})) \geq 0$.
It follows from [3] that if $B$ is a P-matrix and sign-symmetric, then all eigenvalues of $B$ have positive real part. Therefore, it is enough to show that $A_n$ is a P-matrix and sign-symmetric.

We first show that $A_n$ is a P-matrix for all $n \geq 1$. Because non-maximal principal minors of $A_n$ are of the same form as $A_n$ but of smaller size, it is sufficient to prove that $\det(A_n) > 0$ for all $n \geq 1$. We subtract the last column of $A_n$ from all other columns and obtain a new matrix $A'_n$:
$$A'_n = \begin{pmatrix} a_1 & 0 & \cdots & 0 & b_1 \\ 0 & a_2 & \cdots & 0 & b_2 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & a_{n-1} & b_{n-1} \\ -a_n & -a_n & \cdots & -a_n & a_n + b_n \end{pmatrix}.$$
Let $r_i$ denote the $i$-th row of $A'_n$. We replace the last row of $A'_n$, $r_n$, by the linear combination $(a_n/a_1)r_1 + (a_n/a_2)r_2 + \cdots + (a_n/a_{n-1})r_{n-1} + r_n$ and obtain the matrix:
$$A''_n = \begin{pmatrix} a_1 & 0 & \cdots & 0 & b_1 \\ 0 & a_2 & \cdots & 0 & b_2 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & a_{n-1} & b_{n-1} \\ 0 & 0 & \cdots & 0 & a_n + b_n + \sum_{i=1}^{n-1} a_n b_i / a_i \end{pmatrix}.$$
The matrix $A''_n$ is an upper-triangular matrix and the diagonal entries are positive. Therefore, the determinant of $A''_n$ is positive, and as a consequence $\det(A_n)$ is also positive. This shows that $A_n$ is a P-matrix.

To see that $A_n$ is sign-symmetric for all $n \geq 1$, we will show that
$$\operatorname{sign}(\det((A_n)_{I,J})) = \operatorname{sign}(\det((A_n)_{J,I})), \qquad (41)$$
for all $I, J \subseteq \{1, \dots, n\}$ of cardinality $r$. Clearly, we only need to check the case $I \neq J$. If $I \cap J$ contains strictly fewer than $r-1$ elements, then $\det((A_n)_{I,J}) = \det((A_n)_{J,I}) = 0$. Indeed, assume there are two indices $i, j \in I$ that are not in $J$. Then $(A_n)_{I,J}$ has rows $(b_i\ b_i \cdots b_i)$ and $(b_j\ b_j \cdots b_j)$, and hence the determinant is zero. By choosing two indices $i, j \in J$ that are not in $I$, we argue that $\det((A_n)_{J,I}) = 0$ as well.

Therefore, we only need to check (41) when $I, J$ differ in only one index. Let $I \cap J = \{\ell_1, \dots, \ell_{r-1}\}$ with $\ell_1 < \dots < \ell_{r-1}$, and let $i, j, s, k$ be such that
$$I = \{\ell_1, \dots, \ell_k, i, \ell_{k+1}, \dots, \ell_{r-1}\}, \qquad J = \{\ell_1, \dots, \ell_s, j, \ell_{s+1}, \dots, \ell_{r-1}\},$$
where $\ell_k < i < \ell_{k+1}$ and $\ell_s < j < \ell_{s+1}$. The matrix $(A_n)_{I,J}$ might be constructed by taking a matrix of type $A_{r-1}$, built from the data $a_{\ell_1}, \dots, a_{\ell_{r-1}}$ and $b_{\ell_1}, \dots, b_{\ell_{r-1}}$, and adding a row $(b_i \cdots b_i)$ after the $k$-th row and a column with entries $b_{\ell_1}, \dots, b_{\ell_k}, b_i, b_{\ell_{k+1}}, \dots, b_{\ell_{r-1}}$ after the $s$-th column. By applying the permutation that sends this added row to the first row and the added column to the first column, we conclude that $\det((A_n)_{I,J})$ agrees with $(-1)^{k+s}$ times the determinant of a matrix of the form
$$B_r = \begin{pmatrix} \beta_1 & \beta_1 & \cdots & \beta_1 \\ \beta_2 & \alpha_1 + \beta_2 & \cdots & \beta_2 \\ \vdots & & \ddots & \vdots \\ \beta_r & \beta_r & \cdots & \alpha_{r-1} + \beta_r \end{pmatrix}$$
with $(\beta_1, \dots, \beta_r) = (b_i, b_{\ell_1}, \dots, b_{\ell_{r-1}})$ and $(\alpha_1, \dots, \alpha_{r-1}) = (a_{\ell_1}, \dots, a_{\ell_{r-1}})$. Similarly, the matrix $(A_n)_{J,I}$ might be constructed by taking the matrix $A_{r-1}$ as above, and adding a row $(b_j \cdots b_j)$ after the $s$-th row and a column with entries $b_{\ell_1}, \dots, b_{\ell_s}, b_j, b_{\ell_{s+1}}, \dots, b_{\ell_{r-1}}$ after the $k$-th column. Therefore, $\det((A_n)_{J,I})$ agrees with $(-1)^{k+s}$ times the determinant of a matrix of the form $B_r$ above, now with $\beta_1 = b_j$.

Therefore, it is enough to show that the sign of $\det(B_r)$ does not depend on the values of $\alpha_i > 0$, $\beta_i > 0$. We proceed similarly to the argument given for the first statement. We subtract the first column of $B_r$ from all other columns and obtain a new matrix $B'_r$ equal to:
$$B'_r = \begin{pmatrix} \beta_1 & 0 & \cdots & 0 \\ \beta_2 & \alpha_1 & & \\ \vdots & & \ddots & \\ \beta_r & 0 & \cdots & \alpha_{r-1} \end{pmatrix}.$$
The determinant of $B'_r$ is $\beta_1\alpha_1 \cdots \alpha_{r-1}$. Therefore the sign of $\det(B'_r)$, and hence of $\det(B_r)$, is $+1$ and is independent of $\alpha_i, \beta_i$, as desired. This concludes the proof.

5.2 Proof of Lemma 3.4

Proof.
Recall that $\beta_i = \alpha_1 - \gamma_n(i) - \frac{1}{n+1}$ (Proposition 3.1), that $\gamma_n(n) = -\frac{1}{n+1}$, and that $\sum_{\ell=1}^n \ell\,\gamma_n(\ell) = 0$ (Lemma 3.7). By definition of $v_\ell$ and Proposition 3.1, we have
$$\sum_{\ell=1}^n \beta_\ell v_\ell = \sum_{\ell=1}^n \beta_\ell(n - 2\ell) = n\sum_{\ell=1}^n \beta_\ell - 2\sum_{\ell=1}^n \ell\Big(\alpha_1 - \gamma_n(\ell) - \frac{1}{n+1}\Big) = n(\alpha_1 n - 1) - \alpha_1 n(n+1) + n = -\alpha_1 n.$$
Then by definition of $\Gamma_n(s)$ and using $\sum_{\ell=0}^n h_\ell = 0$ (29), we have
$$\sum_{\ell=1}^n \beta_\ell h_\ell(s) = \Big(\alpha_1 - \frac{1}{n+1}\Big)\sum_{\ell=1}^n h_\ell(s) - \sum_{\ell=1}^n \gamma_n(\ell)h_\ell(s) = -\Gamma_n(s).$$
Using this information, Proposition 3.1 and that $h_0 = 0$, we compute the two factors:
$$1 + \sum_{\ell=1}^n \beta_\ell x_\ell(s) = 1 + \sum_{\ell=1}^n \beta_\ell + \sum_{\ell=1}^n \beta_\ell v_\ell\,s + \sum_{\ell=1}^n \beta_\ell h_\ell(s) = \alpha_1 n - \alpha_1 n s - \Gamma_n(s).$$
$$1 + \sum_{\ell=1}^n \alpha_\ell x_{\ell-1}(s) = 1 + \sum_{\ell=1}^n \alpha_\ell + \sum_{\ell=1}^n \alpha_\ell v_{\ell-1}\,s + \sum_{\ell=1}^n \alpha_\ell h_{\ell-1}(s)$$
$$= \alpha_1 n + \sum_{\ell=1}^{n-1}\beta_\ell v_\ell\,s + \alpha_1 v_0\,s + \sum_{\ell=1}^{n-1}\beta_\ell h_\ell(s) + \alpha_1 h_0(s)$$
$$= \alpha_1 n + \big(-\alpha_1 n - \alpha_1(-n)\big)s + \alpha_1 n s - \alpha_1 h_n(s) - \Gamma_n(s) = \alpha_1 n + \alpha_1 n s - \alpha_1 h_n(s) - \Gamma_n(s).$$

5.3 Proof of (35)

We will show that
$$\chi_n(s) = (h_n(s) - 2ns)\,\Gamma_n(s).$$
Consider the expression of the $w_i$. We find
$$\chi_n(s) = \sum_{j=1}^n j\,w_j(s) = \Big(\sum_{j=0}^{n-1} x_j(s)\Big)\Big(1 + \sum_{\ell=1}^n \beta_\ell x_\ell(s)\Big) - \Big(\sum_{j=1}^n x_j(s)\Big)\Big(1 + \sum_{\ell=1}^n \alpha_\ell x_{\ell-1}(s)\Big).$$
Using $e \cdot h = e \cdot v = 0$ and $h_0 = 0$ yields
$$\sum_{j=0}^{n-1} x_j(s) = \sum_{j=0}^{n-1}\big(1 + v_j s + h_j(s)\big) = n + ns - h_n(s), \qquad \sum_{j=1}^n x_j(s) = \sum_{j=1}^n\big(1 + v_j s + h_j(s)\big) = n - ns.$$
Combined with Lemma 3.4, this gives
$$\chi_n(s) = \big(n + ns - h_n(s)\big)\big(\alpha_1 n - \alpha_1 n s - \Gamma_n(s)\big) - \big(n - ns\big)\big(\alpha_1 n + \alpha_1 n s - \alpha_1 h_n(s) - \Gamma_n(s)\big) = (h_n(s) - 2ns)\,\Gamma_n(s).$$

5.4 Proof of Lemma 3.7

Proof.
Let $P_n(\ell) = n$ for $n$ even, and $P_n(\ell) = n - 2\ell$ for $n$ odd. Then
\[
\gamma_n(\ell) = (-1)^{\ell+1}\,\frac{P_n(\ell)}{2n(n+1)}\binom{n}{\ell}.
\]
This implies, for $k \geq 0$,
\begin{align*}
\sum_{\ell=0}^{n} \ell^{[k]}\gamma_n(\ell) &= \sum_{\ell=k}^{n} \ell^{[k]}\,(-1)^{\ell+1}\,\frac{P_n(\ell)}{2n(n+1)}\binom{n}{\ell}
= \frac{1}{2n(n+1)}\sum_{\ell=k}^{n} (-1)^{\ell+1} P_n(\ell)\,\frac{\ell!}{(\ell-k)!}\cdot\frac{n!}{\ell!\,(n-\ell)!}\\
&= \frac{(-1)^{k+1}\, n!}{2n(n+1)\,(n-k)!}\sum_{j=0}^{n-k} (-1)^{j} P_n(j+k)\binom{n-k}{j}.
\end{align*}
Since $P_n(j+k)$ has degree $0$ in $j$ for $n$ even and degree $1$ for $n$ odd, the identity (24) implies $\sum_{\ell=0}^{n} \ell^{[k]}\gamma_n(\ell) = 0$ for $n$ even and $n-k > 0$, that is, $k < n$, and for $n$ odd and $n-k > 1$, that is, $k < n-1$. This shows the first part of the statement.

For $n$ even and $k = n$, the above reduces to
\[
\sum_{\ell=0}^{n} \ell^{[n]}\gamma_n(\ell) = \frac{(-1)^{n+1}\, n!}{2n(n+1)\cdot 0!}\,(-1)^{0} P_n(n)\binom{0}{0} = -\frac{n!}{2(n+1)},
\]
as in the statement. For $n$ odd and $k = n-1$, it reduces to
\[
\sum_{\ell=0}^{n} \ell^{[n-1]}\gamma_n(\ell) = \frac{(-1)^{n}\, n!}{2n(n+1)\cdot 1!}\Big(P_n(n-1)\binom{1}{0} - P_n(n)\binom{1}{1}\Big) = -\frac{(n-1)!}{2(n+1)}\big((2-n) - (-n)\big) = -\frac{(n-1)!}{n+1}.
\]
This concludes the proof of the lemma.
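The identities above can be sanity-checked in exact rational arithmetic. The sketch below (standard-library Python; the helper names `gamma` and `falling` are ours, not the paper's) verifies the vanishing sums and boundary values of Lemma 3.7 for small $n$, and checks the factorisation $\chi_n(s) = (h_n(s) - ns)\Gamma_n(s)$ at sample rational points, treating $h_n(s)$ and $\Gamma_n(s)$ as free quantities.

```python
# Exact-arithmetic sanity check of the displayed identities.
# gamma_n(l) and the factors of chi_n(s) are taken exactly as displayed above.
from fractions import Fraction
from math import comb, factorial

def gamma(n, l):
    # gamma_n(l) = (-1)^(l+1) * P_n(l) / (2n(n+1)) * binom(n, l),
    # with P_n(l) = n for n even and n - 2l for n odd
    P = n if n % 2 == 0 else n - 2 * l
    return Fraction((-1) ** (l + 1) * P * comb(n, l), 2 * n * (n + 1))

def falling(l, k):
    # falling factorial l^[k] = l * (l-1) * ... * (l-k+1)
    out = 1
    for i in range(k):
        out *= l - i
    return out

# Lemma 3.7: the sums vanish for k < n (n even) resp. k < n-1 (n odd),
# with the stated boundary values at k = n resp. k = n-1.
for n in range(2, 9):
    kmax = n if n % 2 == 0 else n - 1
    for k in range(kmax):
        assert sum(falling(l, k) * gamma(n, l) for l in range(n + 1)) == 0
    boundary = sum(falling(l, kmax) * gamma(n, l) for l in range(n + 1))
    if n % 2 == 0:
        assert boundary == Fraction(-factorial(n), 2 * (n + 1))
    else:
        assert boundary == Fraction(-factorial(n - 1), n + 1)

# chi_n(s) = (h_n(s) - n*s) * Gamma_n(s): both sides are polynomial in
# (alpha, n, s, h_n, Gamma_n), so we check the algebra at rational points.
samples = [
    (Fraction(2), Fraction(3), Fraction(1, 7), Fraction(4, 3), Fraction(2, 5)),
    (Fraction(5), Fraction(4), Fraction(-3, 7), Fraction(1, 3), Fraction(7, 5)),
    (Fraction(1), Fraction(6), Fraction(2, 7), Fraction(-2, 3), Fraction(-3, 5)),
]
for alpha, n, s, hn, Gn in samples:
    S1 = n + n * s / 2 - hn                               # sum_{j=0}^{n-1} x_j(s)
    S2 = n - n * s / 2                                    # sum_{j=1}^{n} x_j(s)
    F1 = alpha * n - alpha * n * s / 2 - Gn               # 1 + sum beta_l x_l(s)
    F2 = alpha * n + alpha * n * s / 2 - alpha * hn - Gn  # sum alpha_l x_{l-1}(s)
    assert S1 * F1 - S2 * F2 == (hn - n * s) * Gn
```

Because all arithmetic uses `Fraction`, the checks confirm the exact constants (e.g. $-n!/(2(n+1))$) rather than floating-point approximations.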
Acknowledgements.
EF and CW have been supported by the Independent Research Fund of Denmark. ADR is grateful for the hospitality of the Department of Mathematical Sciences, University of Copenhagen, while an important part of this work was being done.
References

[1] D. Angeli and E. D. Sontag. Translation-invariant monotone systems, and a global convergence result for enzymatic futile cycles. Nonlinear Anal. Real World Appl., 9(1):128–140, 2008.
[2] M. Banaji. Inheritance of oscillation in chemical reaction networks. Appl. Math. Comput., 325:191–209, 2018.
[3] D. Carlson. A class of positive stable matrices. J. Res. Natl. Bureau Stand., 78B:1–2, 1974.
[4] J. Carr. Applications of Centre Manifold Theory. Springer, Berlin, 1981.
[5] P. Cohen. The origins of protein phosphorylation. Nat. Cell Biol., 4(5):E127–E130, 2002.
[6] S. Disselnkötter and A. D. Rendall. Stability of stationary solutions in models of the Calvin cycle. Nonlinear Anal. Real World Appl., 34:481–494, 2017.
[7] P. Érdi and J. Tóth. Mathematical Models of Chemical Reactions. Theory and Applications of Deterministic and Stochastic Models. Princeton University Press, Princeton, 1989.
[8] E. Feliu, S. Walcher, and C. Wiuf. Noninteracting species and reduction of reaction networks. In preparation.
[9] E. Feliu and C. Wiuf. Variable elimination in chemical reaction networks with mass-action kinetics. SIAM J. Appl. Math., 72:959–981, 2012.
[10] E. Feliu and C. Wiuf. Simplifying biochemical models with intermediate species. J. R. Soc. Interface, 10:20130484, 2013.
[11] S. Feng, M. Sáez, C. Wiuf, E. Feliu, and O. S. Soyer. Core signalling motif displaying multistability through multi-state enzymes. J. R. Soc. Interface, 13(123), 2016.
[12] M. Fiedler and V. Pták. On matrices with non-positive off-diagonal elements and positive principal minors. Czechoslovak Math. J., 12:382–400, 1962.
[13] D. Flockerzi, K. Holstein, and C. Conradi. n-site phosphorylation systems with 2n−1 steady states. Bull. Math. Biol., 76(8):1892–1916, 2014.
[14] J. Gunawardena. Multisite protein phosphorylation makes a good threshold but can be a poor switch. Proc. Natl. Acad. Sci. U.S.A., 102:14617–14622, 2005.
[15] J. Hell and A. D. Rendall. A proof of bistability for the dual futile cycle. Nonlinear Anal. Real World Appl., 24:175–189, 2015.
[16] J. Hell and A. D. Rendall. Sustained oscillations in the MAP kinase cascade. Math. Biosci., 282:162–173, 2016.
[17] G. V. W. Johnson and W. H. Stoothoff. Tau phosphorylation in neuronal cell function and dysfunction. J. Cell Sci., 117(24):5721–5729, 2004.
[18] V. B. Kothamachu, E. Feliu, L. Cardelli, and O. S. Soyer. Unlimited multistability and Boolean logic in microbial signalling. J. R. Soc. Interface, DOI: 10.1098/rsif.2015.0234, 2015.
[19] C. Kuehn. Multiple Time Scale Dynamics, volume 191 of Applied Mathematical Sciences. Springer, Cham, 2015.
[20] Y. A. Kuznetsov. Elements of Applied Bifurcation Theory. Springer, Berlin, 1995.
[21] N. I. Markevich, J. B. Hoek, and B. N. Kholodenko. Signaling switches and bistability arising from multisite phosphorylation in protein kinase cascades. J. Cell Biol., 164:353–359, 2004.
[22] L. Perko. Differential Equations and Dynamical Systems, volume 7 of Texts in Applied Mathematics. Springer-Verlag, New York, third edition, 2001.
[23] S. Ruiz. An algebraic identity leading to Wilson's theorem. The Mathematical Gazette, 80(489):579–582, 1996.
[24] M. Sáez, C. Wiuf, and E. Feliu. Graphical reduction of reaction networks by linear elimination of species. J. Math. Biol., 74:195–237, 2017.
[25] E. A. Slee, B. Benassi, R. Goldin, S. Zhong, I. Ratnayaka, G. Blandino, and X. Lu. Phosphorylation of Ser312 contributes to tumor suppression by p53 in vivo. Proc. Natl. Acad. Sci. U.S.A., 107(45):19479–19484, 2010.
[26] M. Thomson and J. Gunawardena. Unlimited multistability in multisite phosphorylation systems. Nature, 460:274–277, 2009.
[27] L. Wang and E. D. Sontag. On the number of steady states in a multiple futile cycle.