CLUE: Exact maximal reduction of kinetic models by constrained lumping of differential equations
Alexey Ovchinnikov, Isabel Cristina Pérez Verona, Gleb Pogudin, Mirco Tribastone
Abstract
Detailed mechanistic models of biological processes can pose significant challenges for analysis and parameter estimation due to the large number of equations used to track the dynamics of all distinct configurations in which each involved biochemical species can be found. Model reduction can help tame such complexity by providing a lower-dimensional model in which each macro-variable can be directly related to the original variables.

We present CLUE, an algorithm for exact model reduction of systems of polynomial differential equations by constrained linear lumping. It computes the smallest dimensional reduction as a linear mapping of the state space such that the reduced model preserves the dynamics of user-specified linear combinations of the original variables. Even though CLUE works with nonlinear differential equations, it is based on linear algebra tools, which makes it applicable to high-dimensional models. Using case studies from the literature, we show how CLUE can substantially lower model dimensionality and help extract biologically intelligible insights from the reduction.

An implementation of the algorithm and relevant resources to replicate the experiments herein reported are freely available for download at https://github.com/pogudingleb/CLUE.
Introduction

Kinetic models of biochemical systems hold the promise of being able to unravel mechanistic insights in living cells as well as predict the behavior of a biological process under unseen circumstances, which is a fundamental premise for many applications including control and synthesis.

In order to obtain an accurate model, however, it is often necessary to incorporate a substantial amount of detail about the specific mechanisms of interaction between the different components of a biological system. In many cases, this may lead to an overall representation which hinders physical intelligibility. For example, a mechanistic description of protein phosphorylation, a basic, ubiquitous process in signaling pathways [Pawson and Scott, 2005], may yield models with a combinatorially large number of variables, particularly in the case of multisite phosphorylation [Salazar and Höfer, 2009].

Model reduction represents a promising class of methods designed for obtaining a lower-dimensional representation that retains some dynamical features of interest to the modeler. The substantial body of research available is motivated by the fact that it is a cross-cutting concern throughout many scientific and engineering disciplines to be able to effectively work with simple but accurate models of complex systems. Specifically, for applications to systems biology the availability of a smaller model can be particularly beneficial in order to reduce the number of kinetic parameters [Danø et al., 2006], whose measurement and calibration are a well-known hindrance, see, e.g., [Babtie and Stumpf, 2017].

Techniques based on balanced truncation and singular value decomposition can dramatically lower the dimensionality of a model with small approximation errors [Antoulas, 2005]. Since the reduced model preserves the input/output behavior, it can be conveniently used in place of the original model to speed up the computation time of a numerical simulation.
∗ [email protected], Department of Mathematics, CUNY Queens College, Queens, NY 11367, and CUNY Graduate Center, New York, NY 10016, USA
† [email protected], IMT School for Advanced Studies, Lucca, 55100, Italy
‡ [email protected], LIX, CNRS, École Polytechnique, Institut Polytechnique de Paris, Palaiseau, 91120, France
§ Corresponding author. The authors are ordered alphabetically.
¶ [email protected], IMT School for Advanced Studies, Lucca, 55100, Italy

However, the coordinate transformation typically destroys the structure, leading to a loss of physical interpretability of the model. This is recognized as an important property to be maintained in applications to systems biology, especially if the model is used to validate mechanistic hypotheses [Schmidt et al., 2008, Sunnaker et al., 2011, Apri et al., 2012].

For models of biochemical systems, many reduction methods are based on exploiting time-scale separation [Okino and Mavrovouniotis, 1998]. One of the most well-known approaches is quasi-steady-state approximation [Segel and Slemrod, 1989], in which, roughly speaking, "fast" variables can be approximated as reaching their stationary values such that they can be replaced by constants (solutions of the associated system of equations) in the dynamical model for the "slow" variables. Another class of reduction techniques based on sensitivity analysis studies how model parameters and variables affect the desired output, suggesting the elimination of the least influential ones [Snowden et al., 2017].

Exact model reduction aims at lowering dimensionality without introducing approximation errors in the reduced model. Conservation analysis detects linear combinations of variables that remain constant at all times [Vallabhajosyula et al., 2005].
Exact lumping is a more general approach whereby it is possible to write a self-consistent system of dynamical equations for a set of macro-variables, where each macro-variable represents a combination of the original ones [Okino and Mavrovouniotis, 1998]. Linear lumping, known as early as in the work by Wei and Kuo [1969], expresses such combinations as a linear mapping on the original state variables. To maintain some degree of physical interpretability, the lumping may be restricted only to a part of the state space. Li and Rabitz [1991] allow the specification of linear combinations of variables that ought to be preserved. More recently, Cardelli et al. [2017a] presented a lumping algorithm that identifies a partition of the state variables such that in the lumped system each macro-variable represents the sum of the original variables of a block. Specialized criteria for exact linear lumping have also been studied for classes of biochemical models for signaling pathways, e.g., [Borisov et al., 2005, Conzelmann et al., 2006, Feret et al., 2009], for example by analyzing higher-level descriptions such as rule-based systems from which ordinary differential equation (ODE) models can be generated [Danos and Laneve, 2004, Blinov et al., 2004].

Here we present CLUE, an algorithm for constrained linear lumping, applicable to models given as ODEs with polynomial derivatives. The constraints represent the linear combinations of state variables that ought to be maintained in the reduced model, similarly to Li and Rabitz [1991]. The algorithm hinges on the fundamental observation by the same authors [Li and Rabitz, 1989, 1991] that exact lumpings correspond to the subspaces that are invariant under the Jacobian of the ODE system.
For finding these subspaces, Li and Rabitz [1989, 1991] suggest two ways: (i) produce a finite set of constant matrices such that every common invariant subspace of these matrices would be invariant for the Jacobian (but not necessarily vice versa); and (ii) find eigenvectors and eigenvalues of the Jacobian symbolically, and explore their combinations. In the former approach, the obtained set of matrices might be too restrictive, so a lumping might not be found even if there is one. The latter approach is limited to small systems because it involves finding symbolic expressions for the eigenvalues of a nonconstant matrix (the Jacobian of the system) and requires human intervention for exploring various combinations of these eigenvectors (for example, [Li and Rabitz, 1989, §4, Example 2]).

Our main contribution is twofold. First, we provide a set of constant matrices whose common invariant subspaces are exactly the invariant subspaces of the Jacobian. This allows us to obtain a fully algorithmic method for finding a constrained lumping based purely on linear algebra. Second, we improve the algorithm by eliminating redundant computation from the invariant subspace generation and by using modular computation to avoid intermediate expression swell. This enables the analysis of models with several thousands of equations on commodity hardware. Together, our results allow us to study large-scale biochemical models, of which we present a number of case studies, showing the degree of lumpability achieved as well as the physical interpretation of the reduced system.

Constrained lumping

Definition 1 (Lumping). Consider a system of ODEs with polynomial right-hand side of the form

$$\dot{x} = f(x), \qquad (1)$$

where $x = (x_1, \ldots, x_n)^T$, $f = (f_1, \ldots, f_n)^T$, and $f_1, \ldots, f_n \in \mathbb{R}[x]$. We say that a linear transformation $y = Lx$ with $y = (y_1, \ldots, y_m)^T$, $L \in \mathbb{R}^{m \times n}$, and $\operatorname{rank} L = m$ is a lumping of (1) if there exists $g = (g_1, \ldots, g_m)^T$ with $g_1, \ldots, g_m \in \mathbb{R}[y]$ such that $\dot{y} = g(y)$ for every solution $x$ of (1). We say that $m$ is the dimension of the lumping. The variables $y$ in the reduced system are called macro-variables.

Example 1.
Consider the system

$$\dot{x}_1 = x_2^2 + 4x_2x_3 + 4x_3^2, \quad \dot{x}_2 = 4x_3 - 2x_1, \quad \dot{x}_3 = x_1 + x_2. \qquad (2)$$

We claim that the matrix

$$L = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 2 \end{pmatrix} \qquad (3)$$

gives a lumping of (2) of dimension two. Indeed,

$$\begin{pmatrix} \dot{y}_1 \\ \dot{y}_2 \end{pmatrix} = \begin{pmatrix} \dot{x}_1 \\ \dot{x}_2 + 2\dot{x}_3 \end{pmatrix} = \begin{pmatrix} (x_2 + 2x_3)^2 \\ 2x_2 + 4x_3 \end{pmatrix} = \begin{pmatrix} y_2^2 \\ 2y_2 \end{pmatrix},$$

so we can take $g_1(y_1, y_2) = y_2^2$ and $g_2(y_1, y_2) = 2y_2$.

The lumping matrix of (3) turns out to exactly preserve the solution of variable $x_1$. In general, one considers a vector $x_{\mathrm{obs}}$ of combinations of the original variables that are to be recovered in the reduced system; that is, $x_{\mathrm{obs}}$ is a vector of linearly independent forms in $x$ such that $x_{\mathrm{obs}} = Ax$. Then we say that a lumping $y = Lx$ is a constrained linear lumping if each entry of $x_{\mathrm{obs}}$ is a linear combination of the entries of $y$.

Example 2.
Using the system (2), setting $x_{\mathrm{obs}} = Ax$ with

$$A = \begin{pmatrix} 1 & 0 & 0 \\ 1 & 1 & 2 \end{pmatrix},$$

yields that the $L$ from Eq. (3) is a constrained linear lumping because

$$x_{\mathrm{obs}} = \begin{pmatrix} x_1 \\ x_1 + x_2 + 2x_3 \end{pmatrix} = \begin{pmatrix} y_1 \\ y_1 + y_2 \end{pmatrix}.$$

Instead, setting $x_{\mathrm{obs}} = (x_2)$ does not give a constrained lumping for $L$ because $(0, 1, 0)$ does not belong to the row space of $L$.

For a given vector $x_{\mathrm{obs}}$, there may be more than one constrained linear lumping. We define two lumpings $y = L_1 x$ and $y = L_2 x$ to be equivalent if there exists an invertible matrix $T$ such that $L_2 = T L_1$. It is possible to prove that, for every nonzero vector $x_{\mathrm{obs}}$, there exists a unique (up to equivalence) lumping of the smallest possible dimension.

Let $J(x)$ be the Jacobian matrix of $f$. From [Li and Rabitz, 1989], $L$ is a lumping of (1) if and only if the row space of $LJ(x)$ is contained in the row space of $L$ for all $x$. The universal quantifier in this characterization can be handled in different ways (e.g., see [Li and Rabitz, 1989, §3] and [Brochot et al., 2005, pages 722-723]). We eliminate it as follows. Since the entries of $J(x)$ are polynomials in $x$, we can write $J(x)$ as

$$J(x) = \sum_{i=1}^{N} J_i m_i, \qquad (4)$$

where $m_1, \ldots, m_N$ are distinct monomials in $x$ and $J_1, \ldots, J_N$ are matrices over $\mathbb{R}$. Then, the fact that the row space of $LJ(x)$ is contained in the row space of $L$ for every $x$ is equivalent to the containment of the row space of $LJ_i$ in the row space of $L$ for every $i = 1, \ldots, N$ (proved in the supplementary material, see Lemma A.1; the equivalence does not hold for the method from [Li and Rabitz, 1989, §3], see Remark A.1). This leads to the following algorithm: we start with matrix $A$ and add products of its rows with the matrices $J_1, \ldots, J_N$ as long as the dimension of the row space grows. This is detailed in Algorithm 1.

Algorithm 1: Simplified algorithm for finding a constrained lumping of the smallest possible dimension
Input: a system $\dot{x} = f(x)$ of $n$ ODEs with a polynomial right-hand side and an $s \times n$ matrix $A$ over $\mathbb{R}$ of rank $s$.
Output: a matrix $L$ such that $y := Lx$ is a constrained lumping with observables $Ax$ of the smallest possible dimension.

(Step 1) Compute $J(x)$, the Jacobian matrix of $f(x)$.
(Step 2) Represent $J(x)$ as $J_1 m_1 + \ldots + J_N m_N$, where $m_1, \ldots, m_N$ are distinct monomials in $x$, and $J_1, \ldots, J_N$ are nonzero matrices over $\mathbb{R}$.
(Step 3) Set $L := A$.
(Step 4) Repeat:
  (a) for every $M$ in $J_1, \ldots, J_N$ and row $r$ of $L$, if $rM$ does not belong to the row space of $L$, append $rM$ to $L$;
  (b) if nothing has been appended on the previous step, exit the repeat loop and go to (Step 5).
(Step 5) Return $L$.

Example 3.
We illustrate Algorithm 1 by applying it to the system in Eq. (2), choosing $A = (1, 0, 0)$ (thus corresponding to recovering $x_1$ in the reduced system). The Jacobian matrix of $f(x)$ in Eq. (2) is

$$J(x) = \begin{pmatrix} 0 & 2x_2 + 4x_3 & 4x_2 + 8x_3 \\ -2 & 0 & 4 \\ 1 & 1 & 0 \end{pmatrix},$$

which can be decomposed as $J_1 m_1 + J_2 m_2 + J_3 m_3$, where $m_1 = 1$, $m_2 = x_2$, $m_3 = x_3$, and

$$J_1 = \begin{pmatrix} 0 & 0 & 0 \\ -2 & 0 & 4 \\ 1 & 1 & 0 \end{pmatrix}, \quad J_2 = \begin{pmatrix} 0 & 2 & 4 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \quad J_3 = \begin{pmatrix} 0 & 4 & 8 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}.$$

Starting with $L = (1, 0, 0)$, for $r = (1, 0, 0)$ we compute the products

$$rJ_1 = (0, 0, 0), \quad rJ_2 = (0, 2, 4), \quad rJ_3 = (0, 4, 8).$$

Since $rJ_1$ belongs to the row space of $L$ while $rJ_2$ does not, we set

$$L = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 4 \end{pmatrix}.$$

The third vector, $rJ_3$, is proportional to the second row of the new $L$, so we skip it. Since a new row, $(0, 2, 4)$, has been added in part (a) of (Step 4), we do not exit the loop. Setting now $r = (0, 2, 4)$, we get

$$rJ_1 = (0, 4, 8), \quad rJ_2 = (0, 0, 0), \quad rJ_3 = (0, 0, 0).$$

Since all these vectors belong to the row space of $L$, the iteration terminates and the as-computed $L$ gives the lumping of the smallest dimension from which we can recover the original quantities specified through $A$. This $L$ is not equal to the one in (3) but is equivalent to it in the above sense.

Implementation

The CLUE algorithm was implemented in Python using the SymPy library [Meurer et al., 2017]. The source code and all examples from Section 4 are available at https://github.com/pogudingleb/CLUE.

Algorithm 2: Finding the smallest invariant subspace
Input: an $s \times n$ matrix $A$ over a field $K$ and a list $M_1, \ldots, M_\ell$ of $n \times n$ matrices over $K$.
Output: an $r \times n$ matrix $L$ over $K$ such that
• the row span of $A$ is contained in the row span of $L$;
• for every $1 \leqslant i \leqslant \ell$, the row span of $LM_i$ is contained in the row span of $L$;
• $r$ is the smallest possible.

(Step 1) Let $L$ be the reduced row echelon form of $A$.
(Step 2) Set $P$ to be the set of indices of the pivot columns of $L$.
(Step 3) While $P \neq \emptyset$ do
  (a) For every $j \in P$ and every $1 \leqslant i \leqslant \ell$:
    i. Let $r$ be the row in $L$ with the index of the pivot being $j$.
    ii. Reduce $rM_i$ with respect to $L$. If the result is not zero, append it as a new row to $L$.
    iii. Reduce the other rows with respect to the new one in order to bring $L$ into the reduced row echelon form.
  (b) Let $\widetilde{P}$ be the set of indices of the pivot columns of $L$.
  (c) Set $P := \widetilde{P} \setminus P$.
(Step 4) Return $L$.

For our implementation, we keep the general framework of Algorithm 1 but replace (Step 3) and (Step 4), the most time-consuming parts, with a more efficient algorithm. (Step 3) and (Step 4) of Algorithm 1 solve the following problem: given a set of $n$-dimensional vectors and a set of $n \times n$ matrices, find a basis of the smallest vector space that is invariant under the matrices and that contains the vectors. We present and implement two algorithms, Algorithms 2 and 3. The latter is faster but requires that all input matrices have rational entries; this turned out to be the case for the majority of the models we considered. Therefore, our implementation uses Algorithm 3 for systems with rational coefficients and Algorithm 2 for the other cases (e.g., if the coefficients involve $\sqrt{2}$, like in [Li and Rabitz, 1989, §4]). Algorithm 2 is the result of applying the following observations to (Step 3) and (Step 4) of Algorithm 1:

• If we maintain $L$ in the reduced row echelon form, we can test whether a vector $rM$ belongs to the row space of $L$ in $O(n^2)$ operations instead of computing a rank, e.g., in $O(n^3)$ using Gaussian elimination or in $O(n^{2.373})$ using more advanced algorithms [Bürgisser et al., 1997, Section 16.5].

• We do not need to consider all the products of the form $rM$ but only the ones corresponding to the pivots of $L$ added at the previous iteration of the loop. The largest number of products considered is reduced from about $n^2 N$ to $nN$; for justification and details, see the proof of Proposition A.1.

In CLUE, we also take advantage of the fact that the input matrices are almost always very sparse. Since the algorithm is to find an exact reduction, all computations in the case of rational coefficients in Algorithm 2 are performed over rational numbers with long arithmetic. Although the rationals in the output are typically simple (at most two digits in the numerator and denominator), the intermediate results might contain huge ones. For example, in the analysis of the model from [Barua et al., 2009] (available in the online repository), we have encountered rationals with more than 10000 digits.

We overcome this difficulty by running Algorithm 2 several times modulo different primes such that the size of the matrix entries is bounded by the prime. For each run, we reconstruct a possible output over the rational numbers using rational reconstruction (see [von zur Gathen and Gerhard, 2013, §5.10]) and verify its correctness. We proceed with the next prime until the output is correct. This is detailed in Algorithm 3. In the Supplementary Materials (Proposition I.1), we show that the algorithm returns a correct result after considering finitely many primes.
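The rational-reconstruction primitive can be sketched in a few lines of Python. This is a minimal version of the classical half-extended Euclidean method; the function name and the choice of bound are illustrative, not CLUE's actual code:

```python
from math import gcd, isqrt

def rational_reconstruction(c, p):
    """Given a residue c modulo a prime p, find a fraction a/b with
    a/b = c (mod p), |a| <= N and 0 < b <= N for N = isqrt(p // 2),
    if such a fraction exists; return None otherwise."""
    N = isqrt(p // 2)
    r0, s0 = p, 0            # remainders and cofactors of the
    r1, s1 = c % p, 1        # extended Euclidean algorithm
    while r1 > N:            # stop once the remainder drops
        q = r0 // r1         # below the bound N
        r0, r1 = r1, r0 - q * r1
        s0, s1 = s1, s0 - q * s1
    a, b = r1, s1
    if b < 0:
        a, b = -a, -b        # normalize the denominator to be positive
    if b == 0 or b > N or gcd(a, b) != 1:
        return None          # no admissible fraction for this residue
    return a, b

# Example: the fraction 1/3, seen modulo p = 101, is the residue 34
# (since 3 * 34 = 102 = 1 mod 101):
print(rational_reconstruction(34, 101))  # -> (1, 3)
```

A run of Algorithm 3 would apply such a step to every entry of the modular result and then check the reconstructed candidate against the original rational input.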
In our implementation, we go through the primes starting with $2^{31} - 1$.

Algorithm 3: Finding the smallest invariant subspace (modular)
Input: an $s \times n$ matrix $A$ and a list $M_1, \ldots, M_\ell$ of $n \times n$ matrices over $\mathbb{Q}$.
Output: an $r \times n$ matrix $L$ over $\mathbb{Q}$ such that
• the row span of $A$ is contained in the row span of $L$;
• for every $1 \leqslant i \leqslant \ell$, the row span of $LM_i$ is contained in the row span of $L$;
• $r$ is the smallest possible.

(Step 1) Repeat the following:
  (a) Pick a prime number $p$ that does not divide any of the denominators in $A, M_1, \ldots, M_\ell$ and has not been chosen before.
  (b) Compute the reductions $\widetilde{A}, \widetilde{M}_1, \ldots, \widetilde{M}_\ell$ modulo $p$.
  (c) Run Algorithm 2 on $\widetilde{A}, \widetilde{M}_1, \ldots, \widetilde{M}_\ell$ as matrices over $\mathbb{F}_p$ and denote the result by $\widetilde{L}$.
  (d) Apply the rational reconstruction algorithm ([von zur Gathen and Gerhard, 2013, §5.10], [Wang et al., 1982]) to construct a matrix $L$ over $\mathbb{Q}$ such that the reduction of $L$ modulo $p$ equals $\widetilde{L}$.
  (e) Check whether the row span of $L$ contains the row span of $A$ and is invariant under $M_1, \ldots, M_\ell$. If yes, exit the loop.
(Step 2) Return the matrix $L$ from step (d) of the last iteration of the loop.

In all practical examples we considered, one prime was enough. One could accumulate the results for different primes and use Chinese remaindering [von zur Gathen and Gerhard, 2013, §5.4]. We do not do this because it would be harder to filter out "bad primes". The correctness of Algorithms 2 and 3 is proved in Proposition A.1 and in the Supplementary Materials (Proposition I.1), respectively. We report the performance of CLUE on a set of benchmarks in the Supplementary Materials (Section III).

Case studies

In this section, we show the applicability of CLUE to the reduction of biological models through a number of case studies published in the literature, including some taken from the BioModels repository [Li et al., 2010]. We additionally compare CLUE against the forward equivalence from [Cardelli et al., 2017a]. Forward equivalence identifies a partition of an ODE system with polynomial derivatives which induces a lumping where each macro-variable is equal to the sum of the variables in each partition block.
Using established terminology for lumping methods [Wei and Kuo, 1969, Okino and Mavrovouniotis, 1998, Snowden et al., 2017], forward equivalence can be understood as a form of proper lumping because each original variable contributes to exactly one macro-variable of the reduced system. By contrast, in general, constrained linear lumping yields an improper lumping matrix because the linear combinations can be arbitrary. Similarly to constrained linear lumping, for forward equivalence there exists the notion of coarsest partition. This is the partition with the smallest number of blocks, thus leading to a reduced ODE system of the smallest dimension. In addition, forward equivalence can be computed with respect to constraints, which are encoded as an initial partition of variables. The algorithm for computing the coarsest partition iteratively splits each block of the initial partition until the criteria for forward equivalence are satisfied. For the comparison, we used ERODE [Cardelli et al., 2017b], which implements the reduction algorithm for forward equivalence. To the best of our knowledge, ERODE is the only publicly available software tool that supports exact lumping for polynomial differential equations. For each case study, both CLUE and forward equivalence were initialized so as to preserve the same observables in the reduced models.

For this study, we computed reductions that were independent from the specific choice of the kinetic parameters used in the model. This was done as follows. Given the original model in the form $\dot{x} = f(x, k)$, where $k$ is the vector of mass-action kinetic parameters, we considered an extended ODE system with the additional set of equations $\dot{k} = 0$. This ensures that the reduction is independent from the initial conditions of the extended variables, hence of the choice of the original parameters.

                              Vars                   Params
Model                         Orig.  FE    CLUE     Orig.  FE    CLUE
Li et al. [2006]              21     19    15       25     22    22
Proctor et al. [2014]         74     71    43       132    130   73
Sneddon et al. [2011] (m = …) …      …     …        …      …     …

Using two different examples, we illustrate how CLUE can decompose models of signaling pathways if only certain observables of interest are chosen. Figure 1(A) depicts a quorum sensing network for AI-2 biosynthesis and uptake pathways in
E. coli [Li et al., 2006]; the biochemical model is available in the BioModels repository as
MODEL8262229752. The substrate Methionine (Met) transforms into S-adenosylmethionine (SAM). The blue branch of the pathway is involved in the production of AI-2. The green branch depicts decarboxylation of SAM, which ultimately produces MTR and Adenine. The dynamics of both branches are mediated by Pfs.

The original model has 21 variables, one for each biochemical species depicted in the pathway. By fixing the output signal AI-2 as the variable to be preserved, CLUE reduces the system to 15 variables. An inspection of the reduced model reveals that CLUE removes the biochemical species depicted as green boxes in Fig. 1(A), while the remaining variables are not aggregated further. Overall, this leads to a reduced model that can be interpreted as the network in Fig. 1(B). The reduction can be explained by the fact that none of the eliminated variables contribute to the dynamics of the chosen observables. This is because the interactions between the green pathway and the blue one occur only through Pfs, which acts as a catalyst in all the reactions in which it is involved.

The largest reduction by forward equivalence that preserves AI-2 has 19 variables. It only aggregates Adenine, MTR, and Spermidine in the same block, while keeping all the other variables separated. This reduction is, however, a trivial one because these are end products of the pathways that do not interact with any other species. Mathematically, this results in the differential equation of an end-product variable not featuring the variable itself in the right-hand side. As a consequence, end-product variables can always be rewritten in terms of a lumped variable that represents their sum.
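As a minimal illustration of this point (a toy system, not part of the AI-2 model), consider

$$\dot{x}_1 = -x_1, \qquad \dot{x}_2 = x_1, \qquad \dot{x}_3 = 2x_1,$$

where $x_2$ and $x_3$ are end products: neither appears in any right-hand side. The partition $\{\{x_1\}, \{x_2, x_3\}\}$ then induces a lumping with macro-variables $y_1 = x_1$ and $y_2 = x_2 + x_3$, since $\dot{y}_2 = 3x_1 = 3y_1$, so the two macro-variables form a self-consistent reduced system regardless of the kinetic constants.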
Figure 1: (A) Model for AI-2 synthesis adapted from [Li et al., 2006]. (B) Reduced network obtained with CLUE.
Figure 2: (A) Adaptation of the three molecular pathways from [Proctor et al., 2014]. (B) Reduced model obtained while preserving the phosphorylated forms of cJun and cFos. Dotted boxes represent abstractions of groups of biochemical species which are not fully shown here to reduce clutter.

A similar pattern of modular decomposition arises in a pathway of cartilage breakdown from [Proctor et al., 2014], illustrated in Fig. 2(A). The model is available in the BioModels repository as
BIOMD0000000504. The system comprises three modules: an Interleukin-1 (IL-1) signaling pathway, an OSM signaling pathway, and a circuit of activation of proMMPs that concludes with the degradation of Aggrecan and Collagen.

In the first module, IL-1 binds its receptor (IL1R) to start a cascade of phosphorylation events (not shown) that activates cJun. After dimerization, cJun upregulates the collagenases MMP{…} and the phosphatases MKP1, PP44, and DUSP16. In the second module, OSM binds to the receptor OSMR; the pathway concludes with the phosphorylation of cFos. The active cFos can reversibly bind to phosphorylated cJun in a complex cJun-cFos which acts as a transcription factor and upregulates the transcription factor SP1, TIMP{…}, cFos, cJun, a generic MMP Activator, and all the upregulated components from the IL-1 module. In the third module, the Aggrecan-Collagen complex separates due to the interaction with ADAMTS4, and the units of Aggrecan in the complex transform into fragments (AggFrag). The units of Collagen interact with several Activators (collagenases such as MMP{…} or MMP Act) that destroy the protein structure, producing collagen fragments (CollFrag).

The original model consists of 74 variables. By preserving the phosphorylated molecules of cFos and cJun, which are some of the species of interest in the study by Proctor et al. [2014], CLUE removes the pathway for the decomposition of the Aggrecan-Collagen complex, together with the mRNA variants of MMP{…}, TIMP{…}, and SP1. The reduced model with 43 variables can be interpreted as the network in Fig. 2(B). Again, CLUE simplifies branches of the pathway that do not affect the dynamics of the observables. The reduction by forward equivalence, instead, collapses only the variables corresponding to the species Aggrecan, AggFrag, Collagen, and CollFrag, providing a model with 71 variables.
Differently from the previous example, this block collapses end species (AggFrag and CollFrag) together with an input species (Aggrecan) which is assumed to have no dynamics (i.e., zero derivative), as well as a species (Collagen) that undergoes degradation.

Here we study a basic mechanism of protein phosphorylation, a fundamental process in eukaryotic cells [Gunawardena, 2005], to show how CLUE can help cope with the combinatorial growth of mechanistic models for proteins with multiple sites [Salazar and Höfer, 2009]. We consider a model of phosphorylation/dephosphorylation of a substrate with m independent and identical binding sites, taken from [Sneddon et al., 2011]. Each site can be in four different states: phosphorylated and unbound, unphosphorylated and unbound, phosphorylated and bound to a phosphatase, unphosphorylated and bound to a kinase. Thus, the number of substrate configurations, and hence of equations in the mechanistic model, grows as $4^m$.
Figure 3: Application of CLUE to a protein phosphorylation model for increasing numbers of binding sites m (the largest instance could not be analyzed with our prototype implementation due to memory issues).

Instead, forward equivalence detects the assumption of the binding sites being identical [Cardelli et al., 2017a, Table S1]. Each macro-variable represents complexes equal up to permutation of the identities of the binding sites, leading to a polynomial growth in m of the number of variables in the reduced model.
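The two growth rates can be checked with a short calculation. The sketch below assumes that the full model has one variable per substrate configuration (four states per site) and that each macro-variable of the reduced model corresponds to a multiset of site states, i.e., a configuration up to permutation of the site identities:

```python
from math import comb

def full_model_size(m):
    # one variable per substrate configuration: 4 possible states
    # for each of the m identical and independent sites
    return 4 ** m

def lumped_size(m):
    # configurations up to permutation of the sites: multisets of
    # size m over the 4 site states, i.e. C(m + 3, 3), which is a
    # cubic polynomial in m
    return comb(m + 3, 3)

for m in (2, 3, 4):
    print(m, full_model_size(m), lumped_size(m))
```

For m = 2, 3, 4 this gives 16, 64, 256 configurations against 10, 20, 35 permutation classes, illustrating the exponential-versus-polynomial gap (the free kinase and phosphatase variables, common to both models, are ignored here).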
Figure 4: (A) Components in a model for RTK signaling adapted from [Borisov et al., 2008]: a bivalent ligand (L), two adapter proteins A and B, and a receptor with three binding sites: a ligand binding site in the extracellular region (brown) and protein binding sites in the intracellular region (red/blue circles). (B) Macro-variables obtained in the reduced model.
We now consider an example of ordered phosphorylation, taken from [Borisov et al., 2008], in a receptor tyrosine kinase (RTK) signaling pathway where receptor autophosphorylation via dimerization is preceded by ligand binding. Figure 4(A) shows the molecular complexes involved in the pathway. The receptor interacts with a bivalent ligand and two adapter proteins, A and B. Protein A has a single site that binds to the receptor. Protein B is a scaffold protein with three binding sites: one extracellular site dedicated to receptor binding and two intracellular tyrosine residues. The phosphorylation state of the tyrosine residues in B is independent of the state of the receptor-binding site. Upon phosphorylation of the intracellular sites, the receptor can bind the adapter proteins A and B.

The model, originally expressed in the rule-based language BioNetGen [Blinov et al., 2004], has 213 variables. Applying CLUE to preserve the concentration of free ligand yields a reduced model where 150 variables are removed and the remaining 63 are lumped into four macro-variables, depicted in Fig. 4(B). These represent: the free ligand (Fig. 4-B1); all configurations of the free receptor regardless of the phosphorylation state of the intracellular terminals or of protein binding (Fig. 4-B2); all variables that represent the bound ligand (Fig. 4-B3); and the dimerized form (Fig. 4-B4), regardless of the intracellular states. Instead, forward equivalence gives 66 variables, aggregating B-bound receptor units regardless of the phosphorylation state of B.
Conclusion

We presented CLUE, an algorithm for the reduction of polynomial ODEs by exact lumping, with the possibility to fix original variables (or their linear combinations) to be recovered in the reduced system. From a practical viewpoint, the specification of such constraints allows the preservation of the dynamics of key biochemical species of interest to the modeler. Importantly, although it is acknowledged that linear lumping may lead to loss of structure in the reduced model, e.g., [Snowden et al., 2017], the reductions presented here admitted a biochemical interpretation in most cases. From a computational viewpoint, CLUE casts the analysis of polynomial equations into a linear-algebra framework, allowing reductions for models of dimension over 15,000 variables using a prototype implementation. This makes CLUE a general-purpose tool that adds to the wide range of existing methods. In particular, since it reduces exactly, it can be used as a pre-processing step for techniques that seek more aggressive reductions using approximate methods, or as a complementary method to those that use orthogonal model properties, e.g., time-scale separation.
Funding
This work has been supported by the National Science Foundation [CCF-1563942 and 1564132, DMS-1760448, 1853650, and 1853482] and by the MIUR PRIN project “SEDUCE”, no. 2017TWRCNB.
References
A. Antoulas. Approximation of Large-Scale Dynamical Systems. Advances in Design and Control. SIAM, 2005.
M. Apri, M. de Gee, and J. Molenaar. Complexity reduction preserving dynamical behavior of biochemical networks. Journal of Theoretical Biology, 304(0):16–26, 2012.
A. Babtie and M. Stumpf. How to deal with parameters for whole-cell modelling. Journal of The Royal Society Interface, 14(133):20170237, 2017.
D. Barua, J. R. Faeder, and J. M. Haugh. A bipolar clamp mechanism for activation of Jak-family protein tyrosine kinases. PLoS Computational Biology, 5(4):e1000364, 2009.
M. Blinov, J. Faeder, B. Goldstein, and W. Hlavacek. BioNetGen: software for rule-based modeling of signal transduction based on the interactions of molecular domains. Bioinformatics, 20(17):3289–3291, 2004.
N. Borisov, N. Markevich, J. Hoek, and B. Kholodenko. Signaling through receptors and scaffolds: Independent interactions reduce combinatorial complexity. Biophysical Journal, 89(2):951–966, 2005.
N. Borisov, A. Chistopolsky, J. Faeder, and B. Kholodenko. Domain-oriented reduction of rule-based network models. IET Systems Biology, 2(5):342–351, 2008.
C. Brochot, J. Tóth, and F. Bois. Lumping in pharmacokinetics. Journal of Pharmacokinetics and Pharmacodynamics, 32(5):719–736, 2005.
P. Bürgisser, M. Clausen, and A. Shokrollahi. Algebraic Complexity Theory. Springer-Verlag Berlin Heidelberg, 1997.
L. Cardelli, M. Tribastone, M. Tschaikowski, and A. Vandin. Maximal aggregation of polynomial dynamical systems. PNAS, 114(38):10029–10034, 2017a.
L. Cardelli, M. Tribastone, A. Vandin, and M. Tschaikowski. ERODE: A tool for the evaluation and reduction of ordinary differential equations. In TACAS 2017, volume 10206 of LNCS, 2017b.
H. Conzelmann, J. Saez-Rodriguez, T. Sauter, B. Kholodenko, and E. Gilles. A domain-oriented approach to the reduction of combinatorial complexity in signal transduction networks. BMC Bioinformatics, 7(1):34, 2006.
S. Danø, M. F. Madsen, H. Schmidt, and G. Cedersund. Reduction of a biochemical model with preservation of its basic dynamic properties. The FEBS Journal, 273(21):4862–4877, 2006.
V. Danos and C. Laneve. Formal molecular biology. Theoretical Computer Science, 325(1):69–110, 2004.
J. Feret, V. Danos, J. Krivine, R. Harmer, and W. Fontana. Internal coarse-graining of molecular systems. PNAS, 106(16):6453–6458, 2009.
J. Gunawardena. Multisite protein phosphorylation makes a good threshold but can be a poor switch. PNAS, 102(41):14617–14622, 2005.
C. Li, M. Donizelli, N. Rodriguez, H. Dharuri, L. Endler, V. Chelliah, L. Li, E. He, A. Henry, M. Stefan, J. Snoep, M. Hucka, N. Le Novère, and C. Laibe. BioModels Database: An enhanced, curated and annotated resource for published quantitative kinetic models. BMC Systems Biology, 4:92, 2010.
G. Li and H. Rabitz. A general analysis of exact lumping in chemical kinetics. Chemical Engineering Science, 44(6):1413–1430, 1989.
G. Li and H. Rabitz. New approaches to determination of constrained lumping schemes for a reaction system in the whole composition space. Chemical Engineering Science, 46(1):95–111, 1991.
J. Li, L. Wang, Y. Hashimoto, C. Tsao, T. Wood, J. Valdes, E. Zafiriou, and W. Bentley. A stochastic model of Escherichia coli AI-2 quorum signal circuit reveals alternative synthesis pathways. Molecular Systems Biology, 2(1), 2006.
A. Meurer, C. P. Smith, M. Paprocki, O. Čertík, S. B. Kirpichev, M. Rocklin, A. Kumar, S. Ivanov, J. K. Moore, S. Singh, T. Rathnayake, S. Vig, B. E. Granger, R. P. Muller, F. Bonazzi, H. Gupta, S. Vats, F. Johansson, F. Pedregosa, M. J. Curry, A. R. Terrel, Š. Roučka, A. Saboo, I. Fernando, S. Kulal, R. Cimrman, and A. Scopatz. SymPy: symbolic computing in Python. PeerJ Computer Science, 3:e103, 2017.
M. Okino and M. Mavrovouniotis. Simplification of mathematical models of chemical reaction systems. Chemical Reviews, 98(2):391–408, 1998.
T. Pawson and J. D. Scott. Protein phosphorylation in signaling – 50 years and counting. Trends in Biochemical Sciences, 30(6):286–290, 2005.
C. Proctor, C. Macdonald, J. Milner, A. Rowan, and T. Cawston. A computer simulation approach to assessing therapeutic intervention points for the prevention of cytokine-induced cartilage breakdown. Arthritis & Rheumatology, 66(4):979–989, 2014.
C. Salazar and T. Höfer. Multisite protein phosphorylation – from molecular mechanisms to kinetic models. FEBS Journal, 276(12):3177–3198, 2009.
H. Schmidt, M. Madsen, S. Danø, and G. Cedersund. Complexity reduction of biochemical rate expressions. Bioinformatics, 24(6):848–854, 2008.
L. Segel and M. Slemrod. The quasi-steady-state assumption: A case study in perturbation. SIAM Review, 31(3):446–477, 1989.
M. Sneddon, J. Faeder, and T. Emonet. Efficient modeling, simulation and coarse-graining of biological complexity with NFsim. Nature Methods, 8(2):177, 2011.
T. Snowden, P. van der Graaf, and M. Tindall. Methods of model reduction for large-scale biological systems: A survey of current methods and trends. Bulletin of Mathematical Biology, 79(7):1449–1486, 2017.
M. Sunnåker, G. Cedersund, and M. Jirstrand. A method for zooming of nonlinear models of biochemical systems. BMC Systems Biology, 5(1):140, 2011.
R. Vallabhajosyula, V. Chickarmane, and H. Sauro. Conservation analysis of large biochemical networks. Bioinformatics, 22(3):346–353, 2005.
J. von zur Gathen and J. Gerhard. Modern Computer Algebra. Cambridge University Press, 2013.
P. Wang, M. Guy, and J. Davenport. P-adic reconstruction of rational numbers. SIGSAM Bulletin, 16(2):2–3, 1982.
J. Wei and J. Kuo. Lumping analysis in monomolecular reaction systems. Analysis of the exactly lumpable system. Industrial & Engineering Chemistry Fundamentals, 8(1):114–123, 1969.

Appendix A: Proofs for Algorithms 1 and 2
Let Mat_{m,n}(K) denote the space of m × n matrices over a field K. For M ∈ Mat_{m,n}(K), rspan_K(M) denotes the row span of M over K.

Lemma A.1 is used by Algorithm 1 to pass from the invariance under the Jacobian to the invariance under a finite set of constant matrices.

Lemma A.1.
Let M(x) ∈ Mat_{n,n}(K[x]), where x = (x_1, ..., x_r) and char K = 0. We write M(x) = M_1 m_1 + ... + M_N m_N, so that M_1, ..., M_N ∈ Mat_{n,n}(K) and m_1, ..., m_N are distinct monomials in x. Then, for a vector subspace V ⊂ K^n, the following are equivalent:

(1) V is invariant under M(x*) for every x* ∈ K^r;

(2) V is invariant under M_i for every 1 ≤ i ≤ N.

Proof.
Assume that V is invariant under M_1, ..., M_N. Since, for every x* ∈ K^r, the matrix M(x*) is a K-linear combination of M_1, ..., M_N, V is invariant under M(x*) as well.

Conversely, assume that V is invariant under M(x*) for every x* ∈ K^r. Consider v ∈ V. Since M(x*) v ∈ V for all x* ∈ K^r, we also have (∂M/∂x_i)(x*) v ∈ V for every 1 ≤ i ≤ r. Consider one of M_1, ..., M_N, say M_1, and let m_1 = x_1^{d_1} ⋯ x_r^{d_r}. Iterating the argument with derivatives, we obtain, for all x* ∈ K^r,

(∂^{d_1 + ... + d_r} M / ∂x_1^{d_1} ⋯ ∂x_r^{d_r})(x*) v ∈ V.

Taking x* = 0, we deduce that d_1! ⋯ d_r! M_1 v ∈ V and hence, since char K = 0, that M_1 v ∈ V.

Remark A.1.
A different approach to replacing the Jacobian with a finite set of constant matrices was suggested in [Li and Rabitz, 1989, Sect. 3(A)]:

1. Write the Jacobian as J(x) = Σ a_{ij}(x) E_{ij}, where E_{ij} is the matrix with one in the (i, j)-th cell and zeroes everywhere else;
2. Combine summands with proportional a_{ij}(x), obtaining a representation J(x) = Σ b_j(x) B_j with constant B_j;
3. Return the B_j's.

Consider the system

ẋ_1 = (x_2 + x_3)^2 + (x_3 + x_4)^2,  ẋ_2 = ẋ_3 = ẋ_4 = x_1.

Then the procedure from [Li and Rabitz, 1989, Section 3(A)] will lead to the following decomposition:

J(x) = 2(x_2 + x_3) E_{12} + 2(x_2 + 2x_3 + x_4) E_{13} + 2(x_3 + x_4) E_{14} + (E_{21} + E_{31} + E_{41}).

The smallest subspace containing (1, 0, 0, 0) and right-invariant under E_{12}, E_{13}, E_{14} is the whole space, so this approach will not produce a nontrivial lumping. On the other hand, using Lemma A.1, we arrive at

J(x) = 2 x_2 (E_{12} + E_{13}) + 2 x_3 (E_{12} + 2 E_{13} + E_{14}) + 2 x_4 (E_{13} + E_{14}) + (E_{21} + E_{31} + E_{41}).

The matrices E_{12} + E_{13}, E_{12} + 2 E_{13} + E_{14}, E_{13} + E_{14}, and E_{21} + E_{31} + E_{41} have a common proper invariant subspace containing (1, 0, 0, 0), and this yields a nontrivial lumping: y_1 = x_1, y_2 = x_2 + x_3, y_3 = x_3 + x_4.

Proposition A.1.
Algorithm 2 is correct.

Proof. Bringing a matrix to the reduced row echelon form does not change the row span, and adding extra rows might only enlarge it, so the row span of the output of Algorithm 2 contains the row span of A.

Now we will show that the row span of the output of the algorithm is invariant under M_1, ..., M_ℓ. We denote the values of L and P before the k-th iteration of the while loop (Step 3) by L_k and P_k, respectively. We set L_0 and P_0 to be the 0 × n matrix and ∅, respectively. We will show by induction on k that, for every k ≥ 0 and every 1 ≤ i ≤ ℓ, we have

rspan_K(L_k M_i) ⊂ rspan_K(L_{k+1}).   (5)

The case k = 0 is trivial, so let k > 0. Let L_k^+ be the matrix consisting of the rows of L_k with the pivot columns in P_k, and let L_k^- be the matrix consisting of the remaining rows. Fix 1 ≤ i ≤ ℓ. Then rspan_K(L_k^+ M_i) ⊂ rspan_K(L_{k+1}) because the rows of L_k^+ will be processed in the next iteration of the while loop. By the construction, rspan_K(L_k^-) ⊂ rspan_K(L_{k-1}). The rows of L_k^- and L_k^+ are linearly independent because they form a (nonreduced) row echelon form after permuting rows and columns. Therefore, rspan_K(L_k) = rspan_K(L_k^+) + rspan_K(L_k^-). This implies

rspan_K(L_k M_i) ⊂ rspan_K(L_k^+ M_i) + rspan_K(L_k^- M_i).

The inductive hypothesis implies that rspan_K(L_k^- M_i) ⊂ rspan_K(L_{k-1} M_i) ⊂ rspan_K(L_k), so

rspan_K(L_k M_i) ⊂ rspan_K(L_k^+ M_i) + rspan_K(L_k) ⊂ rspan_K(L_{k+1}).

Assume that there were N iterations of the while loop. Then we consider one extra iteration. Since P_{N+1} = ∅, this iteration will not do anything, so L_{N+2} = L_{N+1}. Therefore, rspan_K(L_{N+1} M_i) ⊂ rspan_K(L_{N+1}) for every 1 ≤ i ≤ ℓ due to (5). This implies that the row span of the output of the algorithm is invariant under M_1, ..., M_ℓ.

To prove the minimality of r, consider V, the smallest subspace of K^n invariant under M_1, ..., M_ℓ and containing the rows of the input matrix A. We will show by induction on k that rspan_K(L_k) ⊂ V. Since rspan_K(L_1) = rspan_K(A), we have rspan_K(L_1) ⊂ V. Assume that the statement is true for some k ≥ 1. At the k-th iteration of the while loop, we consider vectors of the form r M_i, where r ∈ rspan_K(L_k). Since r ∈ V and V is M_i-invariant, these vectors also belong to V. The subsequent computation of the row echelon form does not change the row span. Hence, the row span of the output is invariant under M_1, ..., M_ℓ and contained in V, so it coincides with V. This proves the minimality of r.

Supplementary materials

CLUE: Exact maximal reduction of kinetic models by constrained lumping of differential equations
Alexey Ovchinnikov, Isabel Pérez Verona, Gleb Pogudin, Mirco Tribastone
This document is structured as follows:

• In Section I, we prove the correctness and termination of Algorithm 3, using the correctness of Algorithm 2 established in the main paper (see Proposition A.1).
• In Section II, we reprove the criterion for lumping in terms of the Jacobian of the system [2, Section 2] for the sake of completeness.
• In Section III, we report the runtimes of our implementation (https://github.com/pogudingleb/CLUE) on a set of benchmarks.
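Algorithm 3, recalled and analyzed in Section I below, trades exact rational arithmetic for computations modulo a prime followed by rational reconstruction ([4, §5.10], [6]). The following toy sketch illustrates that primitive; the helper `rational_reconstruction` is our own illustrative implementation of the half-extended Euclidean scheme and is not CLUE's actual code:

```python
import math
from fractions import Fraction

def rational_reconstruction(a, p):
    """Recover n/d with |n|, d <= sqrt(p/2) from the residue a ≡ n/d (mod p),
    via the half-extended Euclidean algorithm (cf. [4, §5.10], [6]).
    Illustrative helper only."""
    bound = math.isqrt(p // 2)
    r0, t0, r1, t1 = p, 0, a % p, 1
    while r1 > bound:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        t0, t1 = t1, t0 - q * t1
    if t1 == 0 or abs(t1) > bound or math.gcd(r1, abs(t1)) != 1:
        raise ValueError("no reconstruction exists for this residue")
    if t1 < 0:
        r1, t1 = -r1, -t1  # normalize so the denominator is positive
    return Fraction(r1, t1)

# Round trip: map -5/12 into F_p for a large prime p, then recover it,
# exactly as step (d) of Algorithm 3 recovers the entries of L from L~.
p = 2**31 - 1
x = Fraction(-5, 12)
residue = (x.numerator * pow(x.denominator, -1, p)) % p
assert rational_reconstruction(residue, p) == x
```

Running Algorithm 2 over F_p and then reconstructing each entry this way is what allows the modular Algorithm 3 to avoid the coefficient blow-up of exact rational row reduction.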
I Proof of correctness and termination of Algorithm 3
For the convenience of the reader while navigating between the main paper and the Supplementary materials,we recall:
Algorithm 2
Finding the smallest invariant subspace
Input: an s × n matrix A over a field K and a list M_1, ..., M_ℓ of n × n matrices over K;
Output: an r × n matrix L over K such that
• the row span of A is contained in the row span of L;
• for every 1 ≤ i ≤ ℓ, the row span of L M_i is contained in the row span of L;
• r is the smallest possible.

(Step 1) Let L be the reduced row echelon form of A.
(Step 2) Set P to be the set of indices of the pivot columns of L.
(Step 3) While P ≠ ∅ do
  (a) For every j ∈ P and every 1 ≤ i ≤ ℓ:
    i. Let r be the row in L with the index of the pivot being j.
    ii. Reduce r M_i with respect to L. If the result is not zero, append it as a new row to L.
    iii. Reduce the other rows with respect to the new one in order to bring L into the reduced row echelon form.
  (b) Let P̃ be the set of indices of the pivot columns of L.
  (c) Set P := P̃ \ P.
(Step 4) Return L.

Algorithm 3 Finding the smallest invariant subspace (modular)
Input: an s × n matrix A and a list M_1, ..., M_ℓ of n × n matrices over Q;
Output: an r × n matrix L over Q such that
• the row span of A is contained in the row span of L;
• for every 1 ≤ i ≤ ℓ, the row span of L M_i is contained in the row span of L;
• r is the smallest possible.

(Step 1) Repeat the following:
  (a) Pick a prime number p that does not divide any of the denominators in A, M_1, ..., M_ℓ and has not been chosen before.
  (b) Compute the reductions Ã, M̃_1, ..., M̃_ℓ modulo p.
  (c) Run Algorithm 2 on Ã, M̃_1, ..., M̃_ℓ as matrices over F_p and denote the result by L̃.
  (d) Apply the rational reconstruction algorithm ([4, §5.10], [6]) to construct a matrix L over Q such that the reduction of L mod p equals L̃.
  (e) Check whether the row span of L contains the row span of A and is invariant under M_1, ..., M_ℓ. If yes, exit the loop.
(Step 2) Return the matrix L from step (d) of the last iteration of the loop.

We also recall:
• Mat_{m,n}(K) denotes the space of m × n matrices over a field K.
• For M ∈ Mat_{m,n}(K), rspan_K(M) is the row span of M over K.

The following lemma is used in Proposition I.1 for showing the correctness and termination of Algorithm 3.

Lemma I.1.
Let A ∈ Mat_{s,n}(Q), M_1, ..., M_ℓ ∈ Mat_{n,n}(Q), and let L be the result of applying Algorithm 2 to these matrices. For every prime number p that does not divide the denominators of the entries of A, M_1, ..., M_ℓ, we denote the result of applying Algorithm 2 to the reductions of these matrices modulo p by L*_p. Then

(1) for all but finitely many primes, L*_p is equal to L modulo p;
(2) the number of rows in L*_p does not exceed the number of rows in L.

Proof. To show (1), consider the run of Algorithm 2 on A, M_1, ..., M_ℓ. The operations performed with the matrix entries in the algorithm are arithmetic operations and checks for nullity. There is a finite list of nonzero rational numbers q_1, ..., q_N checked for nullity in the algorithm. Consider a prime number p such that the reductions of q_1, ..., q_N modulo p are defined and nonzero. Since the arithmetic operations commute with reduction modulo p, and we have chosen p so that all nullity checks also commute with reduction modulo p, the result of the algorithm modulo p, that is L*_p, will be equal to the reduction of L modulo p.

We now show (2). The number of rows in L is the dimension of the space generated by the rows of A and their images under all possible products of M_1, ..., M_ℓ. Consider the ∞ × n matrix R formed from the matrices of the form A X, where X ranges over all possible products of M_1, ..., M_ℓ, stacked on top of each other. Let R_p be the reduction of R modulo p. For every integer r, having rank at most r can be expressed as a system of polynomial conditions on the matrix entries (namely, that all (r + 1) × (r + 1) minors are zero). Therefore, rank R_p ≤ rank R. Since the numbers of rows in L and L*_p are equal to rank R and rank R_p, respectively, the second part of the lemma is proved.

Proposition I.1.
Algorithm 3 is correct and terminates in finite time.

Proof.
First we show the correctness. Consider the output of Algorithm 3, and call it L. Since the stopping criterion for the loop in (Step 1) is that rspan_Q(A) ⊂ rspan_Q(L) and that rspan_Q(L) is invariant under M_1, ..., M_ℓ, it remains to prove the minimality of the number of rows in L. Due to Proposition A.1 from the main paper (correctness of Algorithm 2), it is equivalent to show that the number of rows in L equals the number of rows in the output of Algorithm 2 on A, M_1, ..., M_ℓ, which we call L_0. The second part of Lemma I.1 implies that the number of rows of every matrix L̃ computed in (Step 1) does not exceed the number of rows in L_0. Then the same is true for L. Since the number of rows in L_0 is the smallest possible, it is the same as the number of rows in L, so the output of the algorithm is correct.

Now we prove the termination. Let N be the maximum of the absolute values of the numerators and denominators of the entries of L_0. Consider a prime number p such that L*_p (see Lemma I.1) is equal to the reduction of L_0 modulo p and p > N. Then [6] and [5, Lemma 2] imply that the result of the rational reconstruction in (d) for L̃ = L*_p will be equal to L_0, so the algorithm will terminate. Lemma I.1(1) implies that all but finitely many primes satisfy the above properties, so the algorithm will eventually reach one of these primes and terminate.

II Proof for the lumping criterion from [2]
In Lemma II.1 and Proposition II.1, we reprove the criterion for lumping in terms of the Jacobian of the system [2, Section 2] for the sake of completeness.
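Combined with Lemma A.1 from the main paper, this criterion reduces the search for lumpings to finitely many constant-matrix invariance checks, which is the linear-algebra core of CLUE. The following SymPy sketch carries out both steps on a hypothetical four-dimensional system in the spirit of Remark A.1 of the main paper (the system and the matrix L are assumptions made for this illustration, not CLUE's code):

```python
import sympy as sp
from collections import defaultdict

x1, x2, x3, x4 = xs = sp.symbols('x1:5')

# Hypothetical system (for illustration):
#   x1' = (x2 + x3)**2 + (x3 + x4)**2,   x2' = x3' = x4' = x1
f = sp.Matrix([(x2 + x3)**2 + (x3 + x4)**2, x1, x1, x1])
J = sp.expand(f.jacobian(sp.Matrix(xs)))

# Lemma A.1: write J(x) = sum_i M_i * m_i over the distinct monomials m_i,
# turning invariance under J(x) for all x into invariance under the
# finitely many constant matrices M_i.
coeff = defaultdict(lambda: sp.zeros(4, 4))
for i in range(4):
    for j in range(4):
        if J[i, j] != 0:
            for mono, c in sp.Poly(J[i, j], *xs).terms():
                coeff[mono][i, j] = c

# Candidate lumping y1 = x1, y2 = x2 + x3, y3 = x3 + x4.
L = sp.Matrix([[1, 0, 0, 0], [0, 1, 1, 0], [0, 0, 1, 1]])
for M in coeff.values():
    # rspan(L) is M-invariant iff stacking L*M onto L does not raise the rank
    assert L.col_join(L * M).rank() == L.rank()

print(f"{len(coeff)} constant matrices; rspan(L) is invariant under each")
```

Since the rank check succeeds for every coefficient matrix, Proposition II.1 certifies L as an exact lumping of this system; CLUE performs the same kind of check, but over a candidate row space that it grows to the smallest invariant one.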
Lemma II.1.
Let p(x) ∈ R[x], where x = (x_1, ..., x_n), and let L ∈ Mat_{s,n}(R). Let V ⊂ R^n be the orthogonal complement of rspan_R(L). Then p(x) can be written as a polynomial in Lx if and only if, for every v ∈ V, the operator D_v := v_1 ∂/∂x_1 + ... + v_n ∂/∂x_n annihilates p(x).

Proof. Denote the rows of L by r_1, ..., r_s. Assume that there exists a polynomial q in y_1, ..., y_s such that p(x) = q(Lx). Then, for every v ∈ V,

D_v p(x) = D_v q(Lx) = (v, r_1) ∂q/∂y_1(Lx) + ... + (v, r_s) ∂q/∂y_s(Lx) = 0.

To prove the lemma in the other direction, choose an orthonormal basis u_1, ..., u_ℓ of V. Since the rows of L together with u_1, ..., u_ℓ span the whole space, there exists a polynomial q in y_1, ..., y_{s+ℓ} such that p(x) = q(Lx, (u_1, x), ..., (u_ℓ, x)). Then, for every 1 ≤ i ≤ ℓ, using D_{u_i}(u_j, x) = (u_i, u_j) and (u_i, r_j) = 0, we have

0 = D_{u_i} p(x) = D_{u_i} q(Lx, (u_1, x), ..., (u_ℓ, x)) = (u_i, u_i) ∂q/∂y_{s+i}(Lx, (u_1, x), ..., (u_ℓ, x)) = ∂q/∂y_{s+i}(Lx, (u_1, x), ..., (u_ℓ, x)).

Therefore, q does not involve y_{s+i}, and so we get a representation of p as a polynomial in Lx.

Proposition II.1.
A matrix L ∈ Mat_{s,n}(R) is a lumping for an n-dimensional system ẋ = f(x) if and only if, for every x ∈ R^n, rspan_R(L) is invariant under J(x), the Jacobian matrix of f.

Proof. We use the notation from Lemma II.1. For v ∈ V,

D_v (Lf(x)) = (v, (∂/∂x_1, ..., ∂/∂x_n)) Lf(x) = (L J(x)) v.

Therefore, Lemma II.1 implies that L is a lumping of ẋ = f(x) if and only if rspan_R(L J(x)) is orthogonal to V for every x. The latter is equivalent to the invariance of rspan_R(L) under J(x) for every x ∈ R^n.

III Performance
In Table 1, we report the runtimes of CLUE on the models collected in https://github.com/pogudingleb/CLUE/tree/master/examples. The runtimes were measured on a laptop with a 1.60GHz CPU and 8GB RAM. The names of the models in the table coincide with the names of the corresponding folders in the repository. In the table, "<1" means that the runtime was below one second.

Model                            Original dimension   Reduced dimension   Runtime (s)
ProteinPhosphorylation (m = 2)         24                   12                <1
ProteinPhosphorylation (m = 3)         72                   12                <1
ProteinPhosphorylation (m = 4)        264                   12                 4
ProteinPhosphorylation (m = 5)       1032                   12                16
ProteinPhosphorylation (m = 6)       4104                   12               253
ProteinPhosphorylation (m = 7)      16392                   12              3992
Barua                                 505                  402                96
MODEL1001150000                       203                  120                 3
fceri_ji - 1                          380                    8                 3
fceri_ji - 2                          380                  338                63
fceri_ji - 3                          380                   84                12
fceri_ji - 4                          380                  120                16
fceri_ji - 5                          380                  120                16

Table 1: Performance of CLUE on a set of benchmarks

References

[1] L. Cardelli, M. Tribastone, A. Vandin, and M. Tschaikowski. ERODE: A tool for the evaluation and reduction of ordinary differential equations. In TACAS 2017, volume 10206 of LNCS, 2017. URL https://doi.org/10.1007/978-3-662-54580-5_19.
[2] G. Li and H. Rabitz. A general analysis of exact lumping in chemical kinetics. Chemical Engineering Science, 44(6):1413–1430, 1989. URL https://doi.org/10.1016/0009-2509(89)85014-6.
[3] G. Li and H. Rabitz. New approaches to determination of constrained lumping schemes for a reaction system in the whole composition space. Chemical Engineering Science, 46(1):95–111, 1991. URL https://doi.org/10.1016/0009-2509(91)80120-N.
[4] J. von zur Gathen and J. Gerhard. Modern Computer Algebra. Cambridge University Press, 2013.
[5] P. Wang. A p-adic algorithm for univariate partial fractions. In Proceedings of SYMSAC'81, pages 212–217, 1981. URL https://doi.org/10.1145/800206.806398.
[6] P. Wang, M. Guy, and J. Davenport. P-adic reconstruction of rational numbers. SIGSAM Bulletin, 16(2):2–3, 1982. URL https://doi.org/10.1145/1089292.1089293.