Dimensionality reduction via path integration for computing mRNA distributions
Jaroslav Albert
[email protected]
Abstract
Inherent stochasticity in gene expression leads to distributions of mRNA copy numbers in a population of identical cells. These distributions are determined primarily by the multitude of states of a gene promoter, each driving transcription at a different rate. In an era where single-cell mRNA copy number data are more and more available, there is an increasing need for fast computations of mRNA distributions. In this paper, we present a method for computing separate distributions for each species of mRNA molecules, i.e. mRNAs that have been either partially or fully processed post-transcription. The method involves integration over all possible realizations of promoter states, which we cast into a set of linear ordinary differential equations of dimension $M \times n_j$, where $M$ is the number of available promoter states and $n_j$ is the mRNA copy number of species $j$ up to which one wishes to compute the probability distribution. This approach is superior to solving the Master equation (ME) directly in two ways: a) the number of coupled differential equations in the ME approach is $M \times \Lambda_1 \times \Lambda_2 \times \dots \times \Lambda_L$, where $\Lambda_j$ is the cutoff for the probability of the $j$th species of mRNA; and b) the ME must be solved up to the cutoffs $\Lambda_j$, which are ad hoc and must be selected a priori. In our approach, the equation for the probability to observe $n$ mRNAs of any species depends only on the probability of observing $n - 1$. To demonstrate the validity of our derivations, we compare our results with Gillespie simulations for ten randomly selected system parameters.

arXiv [q-bio.MN], Jun

INTRODUCTION
In the last decade, single-cell RNA sequencing techniques have advanced to a point where mRNA distributions can be obtained for thousands of genes with high accuracy [1]. These types of data offer insights into the stochastic processes that govern gene regulatory networks. For this reason, computational techniques that can interpret these data are in high demand. One of the aspects of gene regulation that single-cell RNA data can shed light on is the promoter architecture of individual genes. Knowing the mRNA distribution associated with a gene, it is in principle possible to reverse-engineer the promoter architecture that gives rise to said distribution. One approach to achieving this goal is to compute the mRNA probability distributions (PD) for a large number of promoter architectures and select the one(s) that best fit the data. However, this requires fast methods of computing mRNA PDs.

The two most conventional methods of computing PDs for gene products (predominantly RNA and protein) are solving the Master equation (ME) [2] and the Gillespie algorithm (GA) [3]. What makes these two methods attractive is that they are derived from first principles; in fact, the GA is derived from the ME, which makes them different sides of the same coin. In practice, the ME is useful only when solvable analytically or when it can be numerically integrated. New analytic and numerical techniques for solving the ME are constantly being developed, either by means of improving stochastic simulation algorithms [4-8], or by solving the ME exactly/approximately [9-17], or by a mix of the former two [18-24]. In this paper, we enlarge this list by one.

Our approach is to reduce the ME for the mRNA and the promoter to a separate ME for each mRNA species. This is accomplished thanks to a theorem we have proven in an earlier paper [11], which allows one to write the generating function (an alternative representation of the ME) for the mRNA as a modified ME for the promoter.
In this fashion, the individual probability distributions for any one of the species of mRNA can be computed separately.

The paper is structured as follows: in section 1 we introduce the physical system under consideration and write down the ME for it. In section 2, we state the aforementioned theorem without proof and proceed to apply it to the system introduced in section 1. We derive the ME for the individual species of mRNA for arbitrary initial conditions. In section 3, we test our method against Gillespie simulations for different promoter architectures and discuss the results, advantages and drawbacks of our method. In the concluding section we summarize our work.

FIG. 1. A system of a promoter and two species of mRNA. a) A transcription factor binds to and dissociates from promoter site 1 at the rates $\alpha$ and $\beta$, respectively. b) A transcription factor binds to and dissociates from promoter site 2 at the rates $\alpha$ and $\tilde{\beta}$, respectively. c) A transcription factor binds to and dissociates from promoter site 2 at the rates $\alpha$ and $\gamma$, respectively. d) Transcription, forward and backward post-transcription processes, and mRNA degradation, occurring at the rates $R$, $f$, $g$ and $d$, respectively.

MASTER EQUATION: DIRECT APPROACH
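The promoter model developed in this section tracks, for each binding site, whether it is empty or which transcription factor (TF) occupies it. As a concrete preview, the following sketch (illustrative only, not the paper's code; it assumes $M = 2$ sites and $N = 2$ TFs) enumerates all such configurations and assigns each one a label $s$, the state index used below:

```python
# Sketch: enumerating promoter states for M = 2 binding sites and N = 2 TFs.
# Each site is either empty (x_i = 1) or occupied by exactly one TF k (y_ki = 1).
# The state ordering and variable names are illustrative, not from the paper.
from itertools import product

M_SITES, N_TFS = 2, 2

# Per-site occupancy: 0 means empty, k in 1..N means occupied by TF k.
states = list(product(range(N_TFS + 1), repeat=M_SITES))  # (N+1)^M states

def to_xy(state):
    """Convert a per-site occupancy tuple into the (x_i, y_ki) variables."""
    x = [1 if occ == 0 else 0 for occ in state]                       # x_1..x_M
    y = [[1 if occ == k else 0 for occ in state]                      # y_k1..y_kM
         for k in range(1, N_TFS + 1)]
    return x, y

for s, state in enumerate(states, start=1):
    x, y = to_xy(state)
    print(s, state, x, y)
```

For instance, the configuration "TF 1 bound to site 2" flattens to $(x_1, x_2, y_{11}, y_{12}, y_{21}, y_{22}) = (1, 0, 0, 1, 0, 0)$, matching the example in the text; there are $(N+1)^M = 9$ states here.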
The system we wish to describe consists of a gene promoter and mRNA molecules that can be in different post-transcription states. Figure 1 (a-c) shows three possible promoter states and the processes that cause one state to transform into another. Figure 1 (d) shows the transcription process, the post-transcription processes acting on a newly transcribed mRNA, and the degradation of a fully processed mRNA. If we let $x_i$ be the state of the empty promoter site $i$, and $y_{ki}$ be the state of the promoter site $i$ occupied by transcription factor (TF) $k$, then the reactions that change the state of the promoter can be written as

$$x_i \xrightarrow{\alpha_{ki}} y_{ki}, \qquad y_{ki} \xrightarrow{b_{ki}} x_i, \qquad i = 1, \dots, M; \; k = 1, \dots, N, \tag{1}$$

where $\alpha_{ki}$ is the TF association rate and

$$b_{ki} = \gamma_{ki}\, x_{i-1} x_{i+1} + \sum_l \left( \beta^{kl}_{i,i-1}\, y_{l,i-1}\, x_{i+1} + \beta^{kl}_{i,i+1}\, y_{l,i+1}\, x_{i-1} \right) + \sum_{lp} \tilde{\beta}^{klp}_i\, y_{l,i-1}\, y_{p,i+1}, \tag{2}$$

where $\gamma_{ki}$ is the dissociation rate of the $k$th TF from the promoter site $i$ when both neighboring sites are empty; $\beta^{kl}_{i,i-1}$ is the dissociation rate of the $k$th TF from the promoter site $i$ when site $i-1$ is occupied by the $l$th TF; $\beta^{kl}_{i,i+1}$ is the dissociation rate of the $k$th TF from the promoter site $i$ when site $i+1$ is occupied by the $l$th TF; and $\tilde{\beta}^{klp}_i$ is the dissociation rate of the $k$th TF from the promoter site $i$ when site $i-1$ is occupied by the $l$th TF and site $i+1$ is occupied by the $p$th TF. The variables $x_i$ and $y_{ki}$ can only take the values 0 and 1. When $x_i = 1$, the $i$th promoter site is empty; when $x_i = 0$, it is occupied by a TF (any TF). When $y_{ki} = 1$, the $i$th promoter site is occupied by the $k$th TF; when $y_{ki} = 0$, it is empty. A promoter state is determined by a unique combination of ones and zeros taken by the variables $x_i$ and $y_{ki}$, according to the available promoter sites and the number of TFs trying to bind them. For example, for $M = 2$ and $N = 2$, in a promoter state where TF 1 is bound to promoter site 2, the set of variables $(x_1, x_2, y_{11}, y_{12}, y_{21}, y_{22})$ would have the values $(1, 0, 0, 1, 0, 0)$. It is convenient to introduce a single variable $s$ that labels different promoter states. For example, we could label the state specified by $(1, 0, 0, 1, 0, 0)$ as $s = 1$ and the state specified by $(1, 1, 0, 0, 0, 0)$ as $s = 2$. Then, the transition from $s = 1$ to $s = 2$ would correspond to a process in which the first TF dissociates from the second promoter site.

The reactions that change the copy numbers of the mRNA species are these:

$$\emptyset \xrightarrow{R(s)} m_1, \qquad m_j \xrightarrow{f_j} m_{j+1} \;\; (j = 1, \dots, L-1), \qquad m_j \xrightarrow{g_j} m_{j-1} \;\; (j = 2, \dots, L), \qquad m_L \xrightarrow{d} \emptyset, \tag{3}$$

where $m_1$ is the copy number of the newly transcribed mRNA molecules, $m_j$ ($j > 1$) is the copy number of those mRNA molecules that have undergone $j-1$ post-transcription steps, with $m_L$ being the copy number of the fully processed mRNAs; $R(s)$ is the promoter state-dependent transcription rate, $f_j$ is the rate of conversion from mRNA species $j$ to mRNA species $j+1$, $g_j$ is the rate of conversion from mRNA species $j$ to mRNA species $j-1$, and $d$ is the degradation rate of mRNA species $L$. The master equation for the entire system reads

$$\frac{d}{dt}\mathbf{P} = \mathbf{M}\mathbf{P} + \mathbf{R}\left[\mathbf{P}(m_1-1) - \mathbf{P}\right] + \sum_{j=1}^{L-1} f_j \left[(m_j+1)\,\mathbf{P}(m_j+1, m_{j+1}-1) - m_j \mathbf{P}\right] + \sum_{j=2}^{L} g_j \left[(m_j+1)\,\mathbf{P}(m_j+1, m_{j-1}-1) - m_j \mathbf{P}\right] + d\left[(m_L+1)\,\mathbf{P}(m_L+1) - m_L \mathbf{P}\right]. \tag{4}$$

We have employed a shorthand notation in which $\mathbf{P}$ is short for $\mathbf{P}(m_1, m_2, \dots, m_L, t)$, $\mathbf{P}(m_j+1, m_{j+1}-1)$ is short for $\mathbf{P}(m_1, \dots, m_j+1, m_{j+1}-1, \dots, m_L, t)$, etc. The elements of the vector $\mathbf{P}$, $P_s$, are the probabilities to observe a specific set of copy numbers $(m_1, m_2, \dots, m_L)$ and the promoter state $s$. The matrix $\mathbf{M}$ gives the propensities for transitions between promoter states. Each element of the diagonal matrix $\mathbf{R}$ gives the transcription rate for a unique promoter state. Since the evolution of the probability of the promoter state does not depend on $m_j$, we can sum both sides of Eq. (4) over all $m_j$ to obtain a ME for the promoter:

$$\frac{d}{dt}\tilde{\mathbf{P}} = \mathbf{M}\tilde{\mathbf{P}}, \tag{5}$$

where each element of the vector $\tilde{\mathbf{P}}$, $\tilde{P}_s$, is the probability to observe the promoter in a state $s$.

Before we continue, we must establish a connection between the variables $x_i$ and $y_{ki}$ and the variable $s$. To do so, we begin with the ME for the promoter,

$$\dot{P} = \sum_{ik} \alpha_{ki} \left[(x_i+1)\, P(x_i+1, y_{ki}-1) - x_i P\right] + \sum_{ik} b_{ki} \left[(y_{ki}+1)\, P(x_i-1, y_{ki}+1) - y_{ki} P\right], \tag{6}$$

and define a set

$$S = \left\{ [x_1, \dots, x_M], [y_{11}, \dots, y_{1M}], \dots, [y_{N1}, \dots, y_{NM}] \right\}, \tag{7}$$

such that $S^s$ represents $S$ for particular values of the variables $x_i$ and $y_{kj}$. For example, if $s = 1$, we might have

$$S^1 = \left\{ [1, \dots, 0, \dots, 1], [0, \dots, 0], \dots, [0, \dots, 1, \dots, 0], \dots, [0, \dots, 0] \right\}, \tag{8}$$

which represents a state with the $k$th TF bound to the $i$th site: the 0 in the first bracket sits at position $i$, and the 1 sits at position $i$ of the $k$th of the $y$-brackets. The square brackets inside $S$ are imaginary, serving only as a visual aid; hence, $S$ can be thought of as a vector. How we index the promoter states is of no consequence, only that every state has a unique index. In terms of $S$, we can write $x_i = S^s_i$ and $y_{ki} = S^s_{kM+i}$, where the subscript labels the element of $S^s$. Defining the probability vector as

$$\mathbf{P} = \begin{pmatrix} P(S^1) \\ P(S^2) \\ \vdots \end{pmatrix}, \tag{9}$$

the ME (6) can be written in the desired form:

$$\frac{dP(S^s)}{dt} = \sum_{s'} \left\{ \sum_{ik} \alpha_{ki}\, C^{ik}_{ss'}(1,-1) + \sum_{ik} b_{ki}\, C^{ik}_{ss'}(-1,1) - \delta_{ss'} \sum_{ik} \left( \alpha_{ki} S^s_i + b_{ki} S^s_{kM+i} \right) \right\} P(S^{s'}), \tag{10}$$

where the matrices $C^{ik}_{ss'}(1,-1)$ and $C^{ik}_{ss'}(-1,1)$ are defined as

$$C^{ik}_{ss'}(1,-1) = \begin{cases} 1, & \text{if } S^{s'}_i = S^s_i + 1 \text{ and } S^{s'}_{kM+i} = S^s_{kM+i} - 1 \\ 0, & \text{otherwise} \end{cases}$$

$$C^{ik}_{ss'}(-1,1) = \begin{cases} 1, & \text{if } S^{s'}_i = S^s_i - 1 \text{ and } S^{s'}_{kM+i} = S^s_{kM+i} + 1 \\ 0, & \text{otherwise.} \end{cases}$$

The expression in the curly brackets in Eq. (10) is the sought-after matrix $\mathbf{M}$. Converting $x_i$ and $y_{ki}$ into the new variables $S^s$ in the dissociation rate, Eq. (2),

$$b_{ki} = \gamma_{ki}\, S^s_{i-1} S^s_{i+1} + \sum_l \left( \beta^{kl}_{i,i-1}\, S^s_{lM+i-1} S^s_{i+1} + \beta^{kl}_{i,i+1}\, S^s_{lM+i+1} S^s_{i-1} \right) + \sum_{lp} \tilde{\beta}^{klp}_i\, S^s_{lM+i-1} S^s_{pM+i+1}, \tag{11}$$

completes the switch between the two types of variable.

In principle, Eq. (4) can be solved numerically by imposing an upper bound on each of the variables $m_j$; these bounds are not known a priori but must be guessed, e.g. by first computing the average, $\bar{m}_j$, and the standard deviation, $\sigma_j$, for every $m_j$ (which can be done analytically) and then setting the cutoff to $\bar{m}_j$ plus some multiple of $\sigma_j$. This ad hoc way of truncating, however, poses the problem that if the cutoff is too small, the computed probability distribution will be incorrect. Furthermore, the dimension of the problem, i.e. the number of equations that must be solved, scales as $\Lambda_1 \times \Lambda_2 \times \dots \times \Lambda_L \times M$, where $\Lambda_j$ is the cutoff for the $j$th species of mRNA, and $M$ is the dimension of $\mathbf{M}$, which equals the number of possible promoter states. Given a large enough $L$, and large enough $\Lambda_j$s, the task of solving Eq. (4) directly may become computationally unfeasible. In the next section, we present a different way of solving Eq. (4), one that reduces the dimension of the problem to $n_j \times M$, where $n_j$ is the copy number for the $j$th species of mRNA up to which we wish to know the probability distribution of $m_j$.

MASTER EQUATION: PATH INTEGRAL APPROACH
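Before developing the path-integral alternative, it is useful to have the direct approach of the previous section in executable form for the simplest nontrivial case: a two-state (telegraph) promoter and a single mRNA species ($L = 1$), where the truncated ME is a linear ODE system of dimension $M \times \Lambda$. All rates and the cutoff below are hypothetical, chosen only to illustrate the scaling and the ad hoc nature of the cutoff:

```python
# Sketch of the direct approach: truncated ME for a two-state promoter and one
# mRNA species (L = 1). The system has M x Lambda coupled linear ODEs.
# All rates and the cutoff Lambda are illustrative, not from the paper.
import numpy as np
from scipy.integrate import solve_ivp

k_on, k_off = 0.5, 1.0            # promoter switching rates
R_rates = np.array([0.0, 20.0])   # transcription rate per promoter state
d = 1.0                           # mRNA degradation rate
Lam = 80                          # ad hoc copy-number cutoff (must be guessed)

Mmat = np.array([[-k_on,  k_off],
                 [ k_on, -k_off]])  # promoter transition matrix M

def rhs(t, p):
    P = p.reshape(2, Lam)          # P[s, m]: promoter state s, copy number m
    dP = Mmat @ P                  # promoter switching
    m = np.arange(Lam)
    for s in range(2):
        birth = np.zeros(Lam)
        birth[1:] = R_rates[s] * P[s, :-1]       # gain from m-1 (transcription)
        decay = np.zeros(Lam)
        decay[:-1] = d * m[1:] * P[s, 1:]        # gain from m+1 (degradation)
        dP[s] += birth - R_rates[s] * P[s] + decay - d * m * P[s]
    return dP.ravel()

p0 = np.zeros((2, Lam))
p0[0, 0] = 1.0                     # start in the "off" state with m = 0
sol = solve_ivp(rhs, (0.0, 10.0), p0.ravel(), rtol=1e-8, atol=1e-10)
pm = sol.y[:, -1].reshape(2, Lam).sum(axis=0)   # marginal P(m) at t = 10
```

Even in this minimal case the cutoff must be validated after the fact (e.g. by checking that the probability mass near $\Lambda$ is negligible); for $L > 1$ species the state space multiplies by one factor of $\Lambda_j$ per species.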
Suppose that we are able to observe the state of the promoter in real time but not the stochastic evolution of the mRNA molecules. We could then write down a master equation for the variables $m_j$ in which the transcription rate would be a known function of time:

$$\frac{d}{dt}P = R(t)\left[P(m_1-1) - P\right] + \sum_{j=1}^{L-1} f_j \left[(m_j+1)\,P(m_j+1, m_{j+1}-1) - m_j P\right] + \sum_{j=2}^{L} g_j \left[(m_j+1)\,P(m_j+1, m_{j-1}-1) - m_j P\right] + d\left[(m_L+1)\,P(m_L+1) - m_L P\right], \tag{12}$$

where $R(t)$ depends on time through the variable $s$: $R(t) = R(s(t))$. Figure 2 shows an example of what $s(t)$ might look like for a simple promoter with two binding sites and one TF. The evolution of $s$ is sometimes referred to as a "path". In what follows, it will be more convenient to work with a generating function (GF), defined as

$$F(\xi_1, \dots, \xi_L, t) = \sum_{m_1} \dots \sum_{m_L} \left( \xi_1^{m_1} \dots \xi_L^{m_L} \right) P(\mathbf{m}, t). \tag{13}$$

FIG. 2. An example of a path $s(t)$ for a promoter with two promoter sites acted upon by one TF. The integer values of $s$ correspond to the following promoter states: $s = 1$ - both promoter sites are unoccupied; $s = 2$ - only promoter site A is occupied; $s = 3$ - only promoter site B is occupied; $s = 4$ - both A and B are occupied.

Knowing the GF, one can recover the joint PD from this relation:

$$P(m_1, \dots, m_L, t) = \left[ \frac{1}{m_1! \dots m_L!} \frac{\partial^{m_1}}{\partial \xi_1^{m_1}} \dots \frac{\partial^{m_L}}{\partial \xi_L^{m_L}} F(\xi_1, \dots, \xi_L, t) \right]_{\xi=0}. \tag{14}$$

Here, we are interested in computing PDs for each variable separately; hence, we will work with a single-variable GF, defined as

$$F_j(\xi, t) = \sum_{m_1} \dots \sum_{m_L} \xi^{m_j} P(\mathbf{m}, t), \tag{15}$$

from which the single-variable PD can be recovered:

$$P(m_j, t) = \left[ \frac{1}{m_j!} \frac{\partial^{m_j}}{\partial \xi^{m_j}} F_j(\xi, t) \right]_{\xi=0}. \tag{16}$$

In reference [11], we have shown that for a system governed by Eq. (12)

$$F_j(\xi, t) = G_j(\xi, t) \exp\left[ \int_0^t (\xi-1)\, K_j(t, t')\, R(t')\, dt' \right], \tag{17}$$

where

$$G_j(\xi, t) = \sum_{n_1} \dots \sum_{n_L} P(\mathbf{n}, 0) \prod_{l=1}^{L} \left[ (\xi-1) \sum_i U_{ji} U^{-1}_{il} e^{S_i t} + 1 \right]^{n_l}, \tag{18}$$

where $P(\mathbf{n}, 0)$ is the initial joint PD for all variables $\mathbf{m}$, $S_i$ are the eigenvalues of the matrix

$$\mathbf{S} = \begin{pmatrix} -f_1 & g_2 & 0 & \cdots & 0 \\ f_1 & -(g_2+f_2) & g_3 & \cdots & 0 \\ 0 & f_2 & -(g_3+f_3) & \ddots & \vdots \\ \vdots & & \ddots & \ddots & g_L \\ 0 & \cdots & 0 & f_{L-1} & -(g_L+d) \end{pmatrix}, \tag{19}$$

and $\mathbf{U}$ is the matrix that satisfies $[\mathbf{U}^{-1}\mathbf{S}\mathbf{U}]_{il} = S_i \delta_{il}$. In the exponent of Eq. (17), the integration kernel $K_j(t, t')$ is given by

$$K_j(t, t') = \sum_{i=1}^{L} e^{S_i(t-t')} \left( \mathbf{u}_j^T \mathbf{U} \mathbf{B}_i \mathbf{U}^{-1} \mathbf{u}_1 \right), \tag{20}$$

where the elements of the diagonal matrices $\mathbf{B}_i$ are $[\mathbf{B}_i]_{lp} = \delta_{li}\delta_{pi}$, and $\mathbf{u}_j$ is the $j$th unit vector of the basis, $[\mathbf{u}_i]_j = \delta_{ij}$.

Eq. (17) is valid only for a specific path taken by $s$. In order to obtain the PD for the variable $m_j$, regardless of the promoter states, we must multiply Eq. (17) by the probability of observing a specific path, and then integrate over all possible paths - a procedure we will refer to as "integrating (something) over all paths." This can be accomplished with the help of the following theorem:

Theorem 1: Let $X = (X_1, X_2, \dots, X_V)$ be a set of variables of an arbitrary system, $X^i$ be one possible set of values $X$ could take, and

$$\frac{d}{dt}P(X, t) - H(A_P, X, t) = 0, \tag{21}$$

be the system's ME, where $H$ is some function of $X$, $t$ and $A_P = (P(X^1, t), P(X^2, t), \dots)$. If $P(X, 0)$ is the probability to observe $X$ at $t = 0$, then, for an arbitrary function $W(X(t'), t, t')$, integrating

$$\exp\left[ \int_0^t W(X(t'), t, t')\, dt' \right] \tag{22}$$

over all paths is given by

$$Q(t) = \sum_{X_1} \dots \sum_{X_V} Q(X, t') \Big|_{t'=t}, \tag{23}$$

where $Q(X, t')$ is the solution of

$$\frac{dQ(X, t')}{dt'} - H(A_Q, X, t') = W(X, t, t')\, Q(X, t'), \qquad 0 \le t' \le t, \tag{24}$$

such that $Q(X, 0) = P(X, 0)$. Here, $t$ should be considered as a parameter. We will refer to $t'$ as a "dummy time", since it is an artefact of the integral in Eq. (22).

In the present case, $X = s$, $H(A_P, X, t) = \mathbf{M}\tilde{\mathbf{P}}$ and $W(X(t'), t, t') = (\xi-1)\, K_j(t, t')\, R(t')$. Hence, we obtain

$$\frac{d\mathbf{Q}(t')}{dt'} = \left[ \mathbf{M} + (\xi-1)\, \mathbf{R}\, K_j(t, t') \right] \mathbf{Q}(t'), \tag{25}$$

with the initial conditions $\mathbf{Q}(0) = G_j(\xi, t)\, \tilde{\mathbf{P}}(0)$. Following the instructions of Eq. (23), we obtain the GF for the variable $m_j$:

$$F_j(\xi, t) = Q(t), \tag{26}$$

where

$$Q(t) = \left[ \sum_{i=1}^{M} \mathbf{u}_i \cdot \mathbf{Q}(t') \right]_{t'=t}. \tag{27}$$

Solving Eq. (25) analytically is generally not possible; however, we can convert it into an equation for the PD for $m_j$ by applying the operator $(1/m!)\, \partial^m/\partial\xi^m$ and then setting $\xi = 0$. The result is this:

$$\frac{d\mathbf{P}_m}{dt'} = \left[ \mathbf{M} - \mathbf{R}\, K_j(t, t') \right] \mathbf{P}_m + \mathbf{R}\, K_j(t, t')\, \mathbf{P}_{m-1}, \tag{28}$$

where

$$\mathbf{P}_m = \frac{1}{m!} \frac{\partial^m}{\partial \xi^m} \mathbf{Q}(t') \Big|_{\xi=0}. \tag{29}$$

Eq. (28) must be solved for the initial conditions

$$\mathbf{P}_m(0) = \left[ \frac{1}{m!} \frac{\partial^m}{\partial \xi^m} G_j(\xi, t) \right]_{\xi=0} \tilde{\mathbf{P}}(0). \tag{30}$$

To work out Eq. (30), we can invoke Cauchy's integral formula, which states that

$$\frac{1}{m!} \frac{d^m f(\xi)}{d\xi^m} = \frac{1}{2\pi i} \oint dz\, \frac{f(z)}{(z-\xi)^{m+1}}, \tag{31}$$

where $f(z)$ is analytic at the point $\xi$. The integral over the complex variable $z$ must enclose $\xi$ but is otherwise arbitrary. Replacing $f(z)$ in Eq. (31) with $G_j(z, t)$, setting $\xi = 0$ and performing the integration over a unit circle centered at $z = 0$, we obtain

$$\int_0^{2\pi} \frac{d\theta}{2\pi}\, e^{-mi\theta}\, G_j(e^{i\theta}, t) = \sum_{\mathbf{n}} P(\mathbf{n}, 0) \sum_{q_1=0}^{n_1} \dots \sum_{q_L=0}^{n_L} \prod_{l=1}^{L} \binom{n_l}{q_l} h_{jl}^{q_l} (1-h_{jl})^{n_l-q_l} \int_0^{2\pi} \frac{d\theta}{2\pi} \exp\left[ i \left( \sum_{\mu=1}^{L} q_\mu - m \right) \theta \right] = \sum_{\mathbf{n}} P(\mathbf{n}, 0) \left[ \sum_{q_1=0}^{n_1} \dots \sum_{q_L=0}^{n_L} \prod_{l=1}^{L} \binom{n_l}{q_l} h_{jl}^{q_l} (1-h_{jl})^{n_l-q_l}\, \delta_{m,\bar{q}} \right], \tag{32}$$

where

$$h_{jl} = \sum_i U_{ji} U^{-1}_{il} e^{S_i t} \tag{33}$$

and $\bar{q} = \sum_\mu q_\mu$. Hence, the initial conditions for $\mathbf{P}_m(t')$ are

$$\mathbf{P}_m(0) = \sum_{\mathbf{n}} P(\mathbf{n}, 0) \left[ \sum_{q_1=0}^{n_1} \dots \sum_{q_L=0}^{n_L} \prod_{l=1}^{L} \binom{n_l}{q_l} h_{jl}^{q_l} (1-h_{jl})^{n_l-q_l}\, \delta_{m,\bar{q}} \right] \tilde{\mathbf{P}}(0). \tag{34}$$

RESULTS AND DISCUSSION
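Numerically, the ingredients entering Eq. (28) - the eigenvalues $S_i$ of the matrix $\mathbf{S}$ in Eq. (19), the kernel $K_j(t, t')$ of Eq. (20), and $h_{jl}$ of Eq. (33) - are straightforward to assemble. The sketch below (illustrative rates, $L = 3$; not the paper's code) does so and cross-checks both quantities against the matrix exponential $e^{\mathbf{S}t}$, of which they are entries:

```python
# Sketch: assembling the ingredients of Eq. (28) for L = 3 species.
# S is the conversion/degradation matrix of Eq. (19); its eigendecomposition
# S = U diag(S_i) U^{-1} gives h_jl (Eq. (33)) and the kernel K_j (Eq. (20)).
# Rates are illustrative; indices j, l below are zero-based.
import numpy as np
from scipy.linalg import expm

f1, f2 = 0.4, 0.3      # forward processing rates
g2, g3 = 0.1, 0.05     # backward processing rates
d = 1.0                # degradation of the fully processed species

S = np.array([[-f1,         g2,        0.0],
              [ f1, -(g2 + f2),        g3 ],
              [0.0,         f2, -(g3 + d)]])

eig, U = np.linalg.eig(S)     # eigenvalues S_i and diagonalizing matrix U
Uinv = np.linalg.inv(U)

def h(j, l, t):
    """h_jl(t) = sum_i U_ji U^{-1}_il exp(S_i t), Eq. (33)."""
    return float(np.real(np.sum(U[j, :] * Uinv[:, l] * np.exp(eig * t))))

def K(j, t, tp):
    """K_j(t, t') = sum_i exp(S_i (t - t')) (u_j^T U B_i U^{-1} u_1), Eq. (20)."""
    return float(np.real(np.sum(np.exp(eig * (t - tp)) * U[j, :] * Uinv[:, 0])))

# Cross-check: h(j, l, t) = [e^{S t}]_jl and K(j, t, t') = [e^{S (t-t')}]_j1.
t = 2.0
H = np.array([[h(j, l, t) for l in range(3)] for j in range(3)])
assert np.allclose(H, expm(S * t))
```

With $K_j$ in hand, Eq. (28) is a linear ODE system of dimension $M \times n_j$ in the dummy time $t'$, which any standard stiff or non-stiff integrator can handle.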
In order to test the validity of Eq. (28), we generated ten random samples for each of the parameter sets, $\alpha_{ki}$, $b_{ki}$, $f_i$, $g_i$ and $R_{ij}$, for $M = 2$, $N = 3$ and $L = 3$. For each of the ten cases, we chose the initial condition $P(\mathbf{m}, 0) = \delta_{m_1,\tilde{m}_1} \delta_{m_2,\tilde{m}_2} \delta_{m_3,\tilde{m}_3}$, where $\tilde{m}_j$ was randomly selected from square distributions of integers ranging from 0 to 50. Eq. (28) was solved numerically in Mathematica using NDSolve for $t = 0.5$, $t = 5$ and $t = 10$. For each parameter set, initial condition and $t = 0.5$, 5, 10, we generated an ensemble of 100,000 realizations using the GA, from which we constructed the PD for each variable. The results are presented in Figure 3; the parameter ranges are given in the figure caption.

The advantage of the method presented herein is that it allows one to decouple the PDs for the mRNA species. As a result, our method takes us from computationally expensive or infeasible to highly efficient. One drawback of this method is that Eq. (28) must be integrated over what we termed "dummy time" from zero to the real time, which must be set beforehand. This means that unlike the solution to the ME, which, if numerically solvable, gives us a pseudo-continuous solution in time, our method does not. To obtain a pseudo-continuous solution in time with our method, one must solve Eq. (28) for a suitable number of time points and then interpolate the solutions.
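For reference, the validation ensemble can be generated with a standard Gillespie loop over the reactions (3). The sketch below is illustrative only: it uses a simple two-state promoter as a stand-in for the randomly drawn architectures, $L = 3$ species, hypothetical rates, and a small ensemble; it is not the paper's code.

```python
# Sketch of the Gillespie validation step: simulate reactions (3) plus a
# two-state promoter, then histogram copy numbers at a fixed time.
# All rates and the ensemble size are illustrative.
import numpy as np

rng = np.random.default_rng(0)
k_on, k_off = 0.5, 1.0
R_rates = (0.0, 20.0)     # R(s): transcription rate per promoter state
f = (0.4, 0.3)            # f_1, f_2 : species j -> j+1
g = (0.1, 0.05)           # g_2, g_3 : species j -> j-1
d = 1.0                   # degradation of species L = 3

def ssa(t_end):
    """One Gillespie trajectory; returns (m1, m2, m3) at t_end."""
    s, m, t = 0, [0, 0, 0], 0.0
    while True:
        rates = [k_off if s == 1 else k_on, R_rates[s],
                 f[0]*m[0], f[1]*m[1], g[0]*m[1], g[1]*m[2], d*m[2]]
        total = sum(rates)
        t += rng.exponential(1.0 / total)
        if t > t_end:
            return m
        r = rng.choice(7, p=np.array(rates) / total)
        if   r == 0: s = 1 - s                     # promoter switch
        elif r == 1: m[0] += 1                     # transcription
        elif r == 2: m[0] -= 1; m[1] += 1          # 1 -> 2
        elif r == 3: m[1] -= 1; m[2] += 1          # 2 -> 3
        elif r == 4: m[1] -= 1; m[0] += 1          # 2 -> 1
        elif r == 5: m[2] -= 1; m[1] += 1          # 3 -> 2
        elif r == 6: m[2] -= 1                     # degradation

samples = np.array([ssa(5.0) for _ in range(400)])
pd_m1 = np.bincount(samples[:, 0]) / len(samples)  # empirical P(m_1) at t = 5
```

The empirical marginals built this way are what the solutions of Eq. (28) are compared against; in the paper the ensemble is much larger (100,000 realizations per case).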
FIG. 3. Probability distributions for ten randomly selected parameter sets for $M = 2$, $N = 3$ and $L = 3$ at $t = 0.5$ min, $t = 5$ min (circles) and $t = 10$ min (triangles) for a) $m_1$, b) $m_2$, and c) $m_3$. The parameters were sampled from square distributions with the ranges: $R_{ij} = [1, 50]$ min$^{-1}$, $f_i = [0.{\dots}, 0.5]$ min$^{-1}$, $g_i = [0.{\dots}, 0.5]$ min$^{-1}$, $\gamma_{ki} = [0.{\dots}, 0.01]$ min$^{-1}$, $\beta^{kl}_{i,i-1} = \beta^{kl}_{i,i+1} = \gamma_{ki}/\kappa_1$ min$^{-1}$, $\tilde{\beta}^{klp}_i = \gamma_{ki}/\kappa_2$ min$^{-1}$, with $\kappa_1 = (1, \dots, 4)$ and $\kappa_2 = (1, \dots)$, and $\tilde{m}_1, \tilde{m}_2, \tilde{m}_3 = [0, 50]$.

CONCLUSION
We have presented an alternative approach to the ME for a system of an arbitrarily complex promoter and a set of mRNA species that have either partially or fully undergone post-transcription processing. The approach consists of obtaining the generating function (GF) for the mRNAs only, as a functional of a particular realization of the promoter state, and then integrating over all possible promoter states. As a result, we derived an alternative equation for the GF, which we then converted into separate equations for the probability distribution of each species of mRNA for arbitrary initial conditions. We have demonstrated the validity of our derivations by comparing the results obtained via our method to those of Gillespie simulations. This method is highly efficient compared to other methods when the number of mRNA species is greater than one. In practice, this method lends itself to the reverse-engineering of promoter architectures based on single-cell RNA data.

[1] Blake MWEJ, Minnoye L, Aibar S, Gonzalez-Blas CB, Atak ZK, Aerts S (2018) Mapping gene regulatory networks from single-cell omics data.
[2] Van Kampen NG (2007) Stochastic Processes in Physics and Chemistry, 3rd ed., North Holland, Amsterdam
[3] Gillespie DT (1977) Exact stochastic simulation of coupled chemical reactions. J. Phys. Chem. 81(25), 2340-2361
[4] Gibson MA, Bruck J (2000) Efficient exact stochastic simulation of chemical systems with many species and many channels. J. Phys. Chem. A 104(9), 1876-1889
[5] Gillespie DT (2001) Approximate accelerated stochastic simulation of chemically reacting systems. J. Chem. Phys. 115(4), 1716
[6] Cao Y, Li H, Petzold L (2004) Efficient formulation of the stochastic simulation algorithm for chemically reacting systems. J. Chem. Phys. 121, 4059
[7] Cao Y, Gillespie DT, Petzold LR (2005) Avoiding negative populations in explicit Poisson tau-leaping. J. Chem. Phys. 123(5), 054104
[8] Cao Y, Gillespie DT, Petzold LR (2005) Efficient step size selection for the tau-leaping simulation method. J. Chem. Phys. 124(4), 044109
[9] Jahnke T, Huisinga W (2007) Solving the chemical master equation for monomolecular reaction systems analytically. J. Math. Biol. 54(1), 1-26
[10] Albert J, Rooman M (2016) Probability distributions for multimeric systems. J. Math. Biol. 72(1-2), 157-169
[11] Albert J (2019) Path integral approach to generating functions for multistep post-transcription and post-translation processes and arbitrary initial conditions. J. Math. Biol. 79(6-7), 2211-2236
[12] Shahrezaei V, Swain PS (2008) Analytical distributions for stochastic gene expression. PNAS 105(45), 17256-17261
[13] Pendar H, Platini T, Kulkarni RV (2013) Exact protein distributions for stochastic models of gene expression using partitioning of Poisson processes. Phys. Rev. E 87, 042720
[14] Bokes P, King JR, Wood ATA, Loose M (2012) Exact and approximate distributions of protein and mRNA levels in the low-copy regime of gene expression. J. Math. Biol. 64(5), 829-854
[15] Bokes P, King JR, Wood ATA, Loose M (2012) Multiscale stochastic modelling of gene expression. J. Math. Biol. 65(3), 493-520
[16] Popović N, Marr C, Swain PS (2016) A geometric analysis of fast-slow models for stochastic gene expression. J. Math. Biol. 72(1-2), 87-122
[17] Veerman F, Marr C, Popović N (2018) Time-dependent propagators for stochastic models of gene expression: an analytical method. J. Math. Biol. 77(2), 261-312
[18] Burrage K, Tian T, Burrage P (2004) A multi-scaled approach for simulating chemical reaction systems. Progress in Biophysics & Molecular Biology 85, 217-234
[19] Jahnke T, Altintan D (2010) Efficient simulation of discrete stochastic reaction systems with a splitting method. BIT Num. Math. 50(4), 797-822
[20] Albert J (2016) A hybrid of the chemical master equation and the Gillespie algorithm for efficient stochastic simulations of sub-networks. PLoS ONE 11(3), e0149909
[21] Albert J (2016) Stochastic simulation of reaction subnetworks: Exploiting synergy between the chemical master equation and the Gillespie algorithm. AIP Conference Proceedings 1790(1), 150026
[22] Duso L, Zechner C (2018) Selected-node stochastic simulation algorithm. J. Chem. Phys. 148, 164108
[23] Alfonsi A, Cances E, Turinici G, Ventura BD, Huisinga W (2005) Adaptive simulation of hybrid stochastic and deterministic models for biochemical systems. ESAIM: Proc. 14, 1-13
[24] Kurasov P, Lück A, Mugnolo D, Wolf V (2018) Stochastic hybrid models of gene regulatory networks. Mathematical Biosciences 305, 170-177