A Molecular Implementation of the Least Mean Squares Estimator
Christoph Zechner∗ and Mustafa Khammash†
Department of Biosystems Science and Engineering, ETH Zurich, Basel, Switzerland
Abstract – In order to function reliably, synthetic molecular circuits require mechanisms that allow them to adapt to environmental disturbances. Least mean squares (LMS) schemes, such as those commonly encountered in signal processing and control, provide a powerful means to accomplish that goal. In this paper we show how the traditional LMS algorithm can be implemented at the molecular level using only a few elementary biomolecular reactions. We demonstrate our approach using several simulation studies and discuss its relevance to synthetic biology.
Engineered circuits in living cells often exhibit poor robustness and substantial variations from one cell to the next [1, 2]. In extreme cases, they are found functional in only a small fraction of cells in an isogenic population, while others act unpredictably. A major cause of such behavior is that the biochemical components that constitute a circuit depend on factors in the molecular environment (or context) of the cell [3]. For instance, the rate at which a protein is expressed depends on the gene dosage, the number of available ribosomes, and so on.

Recently, progress has been made in taking such environmental factors into account in the modeling and design of molecular circuits [4, 5, 6, 7]. This can tremendously improve the faithfulness of computational models and in turn the predictability of rationally designed circuits. In practical scenarios, however, the origins and properties of potential disturbances are barely known and hard to anticipate at design time. From this point of view, it seems hardly realistic to tune a circuit in silico such that it acts robustly under all possible perturbations that it may encounter in the real environment of a cell. A viable alternative is to employ adaptive design principles, in which a circuit continuously senses and adjusts itself to changing environmental conditions. This requires molecular circuits that learn and make inference about their surroundings. A few attempts have been made recently to devise such circuits in the form of chemical reaction networks, for example to perform neural network computations [8], to realize message passing inference [9] or supervised learning [10]. Along these lines, we have recently proposed a molecular implementation of an optimal filter that allows one to estimate dynamically changing noise signals [11]. This estimator was derived under a Bayesian optimality criterion by employing a Kushner-Stratonovich differential equation [12].
∗ [email protected]  † [email protected]

In the present work, we consider another powerful class of estimation schemes that are frequently used in adaptive signal processing and control theory. These schemes, termed least mean squares (LMS) estimators [13, 14], iteratively compute the solution of a general least squares problem through a gradient-based parameter search. This iterative structure allows a circuit to estimate unknown quantities in an adaptive fashion by processing measurements in real time. In this paper we demonstrate how LMS-type estimators can be realized using elementary biomolecular reactions. Our work is related to [10], where the authors proposed a DNA-based gradient-descent scheme that is able to learn static linear functions. In this work, however, we focus on dynamical and possibly stochastic biochemical systems that are to be identified by a molecular LMS estimator.

The remainder of the paper is structured as follows. In Section 2 we introduce the mathematical notation and models required to describe molecular circuits. In Section 3 we introduce the concept of LMS estimation and present possible molecular realizations. In Section 4 we test the performance of the proposed circuits using several simulation studies and discuss how they may be used in practical applications.
We consider well-mixed molecular reaction networks comprising $K$ molecular species $\mathbf{Z} = (\mathbf{Z}_1, \ldots, \mathbf{Z}_K)^T$ that interact with each other through $L$ reaction channels of the form

$$\mathbf{Z} \xrightarrow{h_i(Z(t))} \mathbf{Z} + \nu_i, \qquad (1)$$

with $i$ as the reaction index, $Z(t)$ as the abundance of $\mathbf{Z}$ at time $t$, $h_i$ as a rate function determined by the law of mass action and $\nu_i$ as the stoichiometric change associated with reaction $i$. Throughout this paper, we follow the convention of denoting molecular species by boldface symbols. Note that we use the same symbol also to refer to the circuit that those species constitute.

We describe the time evolution of $\mathbf{Z}$ as a continuous-time Markov chain (CTMC), which can take into account the inherent randomness of biochemical reactions [15]. It can be shown [16] that the molecular abundance $Z(t)$ satisfies a stochastic integral equation of the form

$$Z(t) = Z(0) + \sum_{i=1}^{L} P_i\!\left(\int_0^t h_i(Z(s))\,\mathrm{d}s\right)\nu_i, \qquad (2)$$

where the $P_i$ are independent unit Poisson processes describing the firings of reaction $i$. Eq. (2) is commonly known as the random time change model. Assuming the chemical species to be highly abundant, molecular fluctuations become negligible and eq. (2) can be approximated by a deterministic rate equation of the form

$$\frac{\mathrm{d}}{\mathrm{d}t}\tilde{Z}(t) = \sum_{i=1}^{L} h_i(\tilde{Z}(t))\,\nu_i \qquad (3)$$

with $\tilde{Z}(t) \approx Z(t)$. We will make use of equations (2) and (3) at a later point in this manuscript to model stochastic and deterministic reaction networks, respectively.

Suppose a circuit requires knowledge about certain environmental factors $\theta$. For example, $\theta$ could be the number of phosphatases available to the circuit. However, these factors are typically not accessible directly by the circuit but only indirectly through available intermediates $\mathbf{Y}$. In the example above, $\mathbf{Y}$ could be a protein that is targeted by that phosphatase, for instance. The idea is now to use a second molecular circuit $\mathbf{X}$ that is able to identify the dynamics of $\mathbf{Y}$ through a suitable adaptation scheme.
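As a concrete illustration of the two descriptions, the following sketch simulates a simple birth-death process both stochastically (a Gillespie-type realization, which is equivalent to the random time change model (2)) and deterministically via the rate equation (3). The process, the parameter values and the forward-Euler integrator are illustrative assumptions, not taken from the paper.

```python
import random

# Birth-death process: 0 --rho--> Z, Z --phi--> 0 (illustrative example).
# Gillespie's direct method draws exponential waiting times from the total
# propensity and picks the firing reaction proportionally to its propensity.
def ssa_birth_death(rho, phi, z0, t_end, seed=1):
    rng = random.Random(seed)
    t, z = 0.0, z0
    while True:
        a_birth, a_death = rho, phi * z       # mass-action propensities
        a_total = a_birth + a_death
        t += rng.expovariate(a_total)         # waiting time to next event
        if t > t_end:
            return z
        if rng.random() * a_total < a_birth:  # choose which reaction fires
            z += 1
        else:
            z -= 1

# Deterministic rate equation (3) for the same network, dZ/dt = rho - phi*Z,
# integrated with forward Euler (an assumption; any ODE solver would do).
def ode_birth_death(rho, phi, z0, t_end, dt=1e-3):
    z = z0
    for _ in range(int(t_end / dt)):
        z += dt * (rho - phi * z)
    return z

# Both descriptions approach the steady-state abundance rho/phi.
print(ode_birth_death(rho=10.0, phi=0.5, z0=0.0, t_end=50.0))  # ~20
```

Averaged over many stochastic realizations, the SSA abundance agrees with the deterministic steady state, while single trajectories fluctuate around it, which is the regime the molecular LMS scheme has to cope with.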
We assume here that $\mathbf{X}$ and $\mathbf{Y}$ are equivalent in their structure but have distinct parameters $\hat{\theta}$ and $\theta$, respectively. The goal of the adaptation scheme is to adjust the parameters $\hat{\theta}$ so as to minimize the discrepancy between the measured output $Y(t)$ and the output of $\mathbf{X}$ (termed $X(t)$). The resulting optimal parameters $\hat{\theta}^*$ then represent an estimate of $\theta$.

A suitable and analytically convenient metric to assess the discrepancy between $Y(t)$ and $X(t)$ is the mean squared error

$$J(\hat{\theta}) = \mathbb{E}\left[(X(t) - Y(t))^2\right]. \qquad (4)$$

According to this measure, we seek the set of parameters that minimizes $J(\hat{\theta})$, i.e.,

$$\hat{\theta}^* = \underset{\hat{\theta}}{\operatorname{argmin}}\, J(\hat{\theta}), \qquad (5)$$

in which case $\hat{\theta}^*$ is referred to as the least mean squares (LMS) estimator. The closed-form solution of this optimization problem can be found in certain specific scenarios, for instance in the case of linear system dynamics [14]. In most scenarios, however, (5) is analytically intractable and one has to minimize $J(\hat{\theta})$ numerically. Note that iterative schemes may be beneficial even when (5) is analytically tractable, because they give $\mathbf{X}$ the flexibility to readapt to changes in $\theta$, as will be shown later in this manuscript.

A common strategy to minimize $J(\hat{\theta})$ is to employ a gradient-based method that, at each iteration, moves the parameters $\hat{\theta}$ along the direction of steepest descent, giving rise to the well-known LMS algorithm. This algorithm is usually used in a discrete-time setting, for instance when operated on a digital signal processing unit. In that case, at each time iteration $n$, the algorithm would update the parameters using the relation

$$\hat{\theta}_{n+1} = \hat{\theta}_n - \alpha(n)\,\frac{\partial}{\partial\hat{\theta}} J(\hat{\theta})\bigg|_{\hat{\theta}_n} \qquad (6)$$

with $\alpha(n)$ as a tunable step size.
The choice of the latter usually involves a tradeoff between the rate of convergence and the steady-state error of the scheme (i.e., the excess error). Since we consider continuous-time dynamical systems, we seek an infinitesimal variant of the LMS algorithm [17], i.e.,

$$\frac{\mathrm{d}}{\mathrm{d}t}\hat{\theta}(t) = -\alpha(t)\,\frac{\partial}{\partial\hat{\theta}} J(\hat{\theta})\bigg|_{\hat{\theta}(t)}, \qquad (7)$$

in which case $\alpha(t)$ can be understood as the rate at which the scheme adapts.
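To make the step-size tradeoff concrete, here is a toy sketch (an illustration, not from the paper): the discrete-time update (6) applied to tracking a constant scalar from noisy samples. Replacing the expectation in $J(\hat{\theta}) = \mathbb{E}[(\hat{\theta} - y)^2]$ by the current sample gives the stochastic gradient $2(\hat{\theta} - y_n)$; the Gaussian noise model and all parameter values are assumptions.

```python
import random

# Discrete-time LMS, eq. (6), on a toy problem: estimate a constant theta
# from samples y_n = theta + noise. The stochastic gradient of
# J = E[(theta_hat - y)^2] is approximated by 2*(theta_hat - y_n).
def lms_estimate(theta, alpha, n_steps, noise_sd=0.5, seed=2):
    rng = random.Random(seed)
    theta_hat = 0.0
    for _ in range(n_steps):
        y = theta + rng.gauss(0.0, noise_sd)          # noisy measurement
        theta_hat -= alpha * 2.0 * (theta_hat - y)    # gradient step (6)
    return theta_hat

# Small alpha: slow convergence but small excess (steady-state) error;
# large alpha: fast convergence but a noisier estimate.
print(lms_estimate(theta=3.0, alpha=0.01, n_steps=5000))  # close to 3
```

Sweeping `alpha` over a few decades reproduces the convergence-versus-excess-error tradeoff described above.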
Figure 1:
Schematic illustration of an adaptive molecular circuit. The goal is to construct a biomolecular circuit X that adjusts its parameters to mimic those of a target system Y. While those parameters are inaccessible to the circuit, it can measure the output Y(t) of the target system. This output is compared to the output of X to construct an error signal e(t), which is in turn used to find the optimal parameters of X.

For the sake of simplicity, we make a few simplifications related to $\mathbf{X}$ and $\mathbf{Y}$ before deriving the scheme. A more general scenario, however, will be the subject of a future manuscript. First, we assume that $\theta$ (and correspondingly $\hat{\theta}$) contains only a single parameter that needs to be identified from $Y(t)$. Furthermore, we allow only $Y(t)$ to be corrupted by molecular noise; reactions associated with $X(t)$ are assumed to evolve deterministically (e.g., through appropriate rescaling of the associated reaction rates). Under these assumptions, the gradient of $J(\hat{\theta})$ is given by

$$\frac{\partial}{\partial\hat{\theta}} J(\hat{\theta}) = \frac{\partial}{\partial\hat{\theta}}\,\mathbb{E}\left[(X(t)-Y(t))^2\right] = \mathbb{E}\left[\frac{\partial}{\partial\hat{\theta}}(X(t)-Y(t))^2\right] = 2\,\mathbb{E}\left[(X(t)-Y(t))\,\frac{\partial}{\partial\hat{\theta}}X(t)\right]. \qquad (8)$$

Since we assume that $X(t)$ is deterministic, (8) further simplifies to

$$\frac{\partial}{\partial\hat{\theta}} J(\hat{\theta}) = 2\left(X(t) - \mathbb{E}[Y(t)]\right)\frac{\partial}{\partial\hat{\theta}}X(t) = 2X(t)S(t) - 2\,\mathbb{E}[Y(t)]\,S(t), \qquad (9)$$

with $S(t) := \frac{\partial}{\partial\hat{\theta}}X(t)$ as the sensitivity of $X(t)$ with respect to $\hat{\theta}$. Note that the particular form of this sensitivity depends on $\mathbf{X}$ and how $\hat{\theta}$ enters its dynamics. Specific examples will be given later in Section 4. Now, plugging eq. (9) into (7) yields a dynamic equation for the LMS estimator $\hat{\theta}(t)$, i.e.,

$$\frac{\mathrm{d}}{\mathrm{d}t}\hat{\theta}(t) = -2\alpha(t)X(t)S(t) + 2\alpha(t)\,\mathbb{E}[Y(t)]\,S(t) = -\tilde{\alpha}(t)X(t)S(t) + \tilde{\alpha}(t)\,\mathbb{E}[Y(t)]\,S(t), \qquad (10)$$

with $\tilde{\alpha}(t) = 2\alpha(t)$. Eq.
(10) provides the desired continuous-time solution of the LMS problem. However, in its current form it is not adaptive, meaning that it assumes known (and fixed) statistics of $\mathbf{Y}$ (i.e., the output mean $\mathbb{E}[Y(t)]$). In practice, however, such statistics are often unknown and they might also vary over time. Using LMS estimation, this problem can be bypassed by estimating $\mathbb{E}[Y(t)]$ online from available measurements $Y(t)$. This way, the required statistics are extracted directly from the data, which in turn allows the scheme to readapt when $\theta$ changes. A common and simple approach is to approximate $\mathbb{E}[Y(t)]$ by the current value of $Y(t)$, such that

$$\frac{\mathrm{d}}{\mathrm{d}t}\hat{\theta}(t) = -\tilde{\alpha}(t)X(t)S(t) + \tilde{\alpha}(t)Y(t)S(t), \qquad (11)$$

and we adopt this strategy also in the present work. A graphical depiction of the online LMS scheme is given in Fig. 1.

The goal is now to synthesize eq. (11) using biochemical reactions. However, in its present form eq. (11) is incompatible with mass-action rate laws because it contains a negative (i.e., degradation) flux that does not depend on the current value of $\hat{\theta}(t)$. To account for this, we choose the adaptation rate to be proportional to $\hat{\theta}(t)$, i.e., $\tilde{\alpha}(t) := \lambda\hat{\theta}(t)$, and thus

$$\frac{\mathrm{d}}{\mathrm{d}t}\hat{\theta}(t) = -\lambda\hat{\theta}(t)X(t)S(t) + \lambda\hat{\theta}(t)Y(t)S(t). \qquad (12)$$

While (12) is now in principle compatible with mass-action kinetics, it involves trimolecular reactions, which are hard or perhaps impossible to realize in practice. However, a trimolecular reaction can be composed from two bimolecular reactions with appropriately chosen rate constants. For example, the reaction

$$\mathbf{A} + \mathbf{B} + \mathbf{C} \xrightarrow{\lambda} \mathbf{D}$$

can be represented by

$$\mathbf{A} + \mathbf{B} \underset{b}{\overset{f}{\rightleftharpoons}} \mathbf{AB}, \qquad \mathbf{AB} + \mathbf{C} \xrightarrow{\lambda b / f} \mathbf{D},$$

with $\mathbf{AB}$ as an intermediate complex and assuming $b, f \gg \lambda$. Specific implementations of the derived LMS adaptation scheme will be given in the subsequent section.

In this section we provide several numerical and analytical examples to demonstrate our molecular LMS estimation framework.
In all of the examples, we consider a target process

$$\emptyset \xrightarrow{\rho} \mathbf{Y} \xrightarrow{\phi} \emptyset, \qquad (13)$$

with $\rho$ and $\phi$ as the process parameters that we aim to identify using an LMS scheme. As indicated earlier, we restrict ourselves to the case of a single unknown parameter, meaning that we either have $\theta = \{\rho\}$ or $\theta = \{\phi\}$.

We first consider the case where the birth rate $\rho$ is unknown to the circuit. The goal is to construct a corresponding adaptive circuit $\mathbf{X}$,

$$\emptyset \xrightarrow{\hat{\theta}} \mathbf{X} \xrightarrow{\phi} \emptyset, \qquad (14)$$

whose birth rate $\hat{\theta}$ adapts to that of $\mathbf{Y}$. To accomplish this, we require a molecular implementation of relation (11). The first step is to derive the particular form of the sensitivity function $S(t)$ from the dynamics of $\mathbf{X}$. The latter is given by the rate equation (3), which in this case reads

$$\frac{\mathrm{d}}{\mathrm{d}t}X(t) = \hat{\theta} - \phi X(t). \qquad (15)$$

Differentiating both sides of eq. (15) with respect to $\hat{\theta}$ yields

$$\frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial}{\partial\hat{\theta}}X(t) = \frac{\mathrm{d}}{\mathrm{d}t}S(t) = 1 - \phi S(t). \qquad (16)$$

Fortunately, this equation is already in the form of a valid rate equation. In particular, it describes the time evolution of a birth-death process

$$\emptyset \longrightarrow \mathbf{S} \xrightarrow{\phi} \emptyset. \qquad (17)$$

In conjunction with (11), the overall adaptive system can be implemented through the reactions

$$\emptyset \xrightarrow{\theta} \mathbf{Y} \qquad\qquad \mathbf{Y} \xrightarrow{\phi} \emptyset$$
$$\hat{\boldsymbol{\theta}} \longrightarrow \hat{\boldsymbol{\theta}} + \mathbf{X} \qquad\qquad \mathbf{X} \xrightarrow{\phi} \emptyset$$
$$\emptyset \longrightarrow \mathbf{S} \qquad\qquad \mathbf{S} \xrightarrow{\phi} \emptyset$$
$$\mathbf{S} + \hat{\boldsymbol{\theta}} + \mathbf{X} \xrightarrow{\lambda} \mathbf{S} + \mathbf{X} \qquad\qquad \mathbf{S} + \hat{\boldsymbol{\theta}} + \mathbf{Y} \xrightarrow{\lambda} \mathbf{S} + 2\hat{\boldsymbol{\theta}} + \mathbf{Y}. \qquad (18)$$

We first studied the adaptation performance of (18) as a function of the tuning parameter $\lambda$ in an idealized noise-free scenario. In this case, the network from (18) is described by the rate equations

$$\frac{\mathrm{d}}{\mathrm{d}t}Y(t) = \theta - \phi Y(t) \qquad (19)$$
$$\frac{\mathrm{d}}{\mathrm{d}t}X(t) = \hat{\theta}(t) - \phi X(t) \qquad (20)$$
$$\frac{\mathrm{d}}{\mathrm{d}t}\hat{\theta}(t) = -\lambda\hat{\theta}(t)X(t)S(t) + \lambda\hat{\theta}(t)Y(t)S(t) \qquad (21)$$
$$\frac{\mathrm{d}}{\mathrm{d}t}S(t) = 1 - \phi S(t). \qquad (22)$$

This system was simulated for different values of $\lambda$ as depicted in Fig. 2.
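The noise-free adaptation dynamics (19)-(22) can be integrated directly; the sketch below uses forward Euler with illustrative parameter values (theta = 2, phi = 1, lambda = 0.1 and the initial conditions are assumptions, not the values used for the paper's figures).

```python
# Forward-Euler integration of the noise-free adaptive circuit (19)-(22).
# theta is the unknown birth rate the circuit should recover; all numbers
# here are illustrative assumptions.
def simulate_adaptive(theta, phi, lam, t_end, dt=1e-3):
    Y, X, theta_hat, S = 0.0, 0.0, 0.5, 0.0   # theta_hat(0) > 0 is required
    for _ in range(int(t_end / dt)):
        dY = theta - phi * Y                   # target dynamics (19)
        dX = theta_hat - phi * X               # adaptive circuit (20)
        dth = lam * theta_hat * S * (Y - X)    # LMS update (21), factored
        dS = 1.0 - phi * S                     # sensitivity dynamics (22)
        Y += dt * dY
        X += dt * dX
        theta_hat += dt * dth
        S += dt * dS
    return theta_hat

# The estimate converges to the true birth rate theta.
print(simulate_adaptive(theta=2.0, phi=1.0, lam=0.1, t_end=200.0))  # ~2
```

Rerunning with larger `lam` reproduces the overshoot and oscillations discussed below, while very small `lam` slows convergence.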
The results indicate a tradeoff associated with the choice of $\lambda$: too small a $\lambda$ leads to slow convergence of the scheme, while too large a $\lambda$ causes the adaptation scheme to "overshoot" the target value and exhibit oscillations.

In order to analyze the convergence properties of (18), we performed a local stability analysis of (19)-(22) based on linearization. Noting that $Y(t)$ and $S(t)$ evolve autonomously, we can replace those variables by their steady-state values $Y_\infty = \theta/\phi$ and $S_\infty = 1/\phi$, respectively. This yields the reduced system

$$\frac{\mathrm{d}}{\mathrm{d}t}X(t) = \hat{\theta}(t) - \phi X(t) \qquad (23)$$
$$\frac{\mathrm{d}}{\mathrm{d}t}\hat{\theta}(t) = -\frac{\lambda}{\phi}\hat{\theta}(t)X(t) + \frac{\lambda\theta}{\phi^2}\hat{\theta}(t), \qquad (24)$$

which has the equilibrium points $X_\infty = \hat{\theta}_\infty = 0$ and $X_\infty = \theta/\phi$, $\hat{\theta}_\infty = \theta$. The respective Jacobians are given by

$$A_0 = \begin{pmatrix} -\phi & 1 \\ 0 & \lambda\theta/\phi^2 \end{pmatrix}, \qquad A_1 = \begin{pmatrix} -\phi & 1 \\ -\lambda\theta/\phi & 0 \end{pmatrix}. \qquad (25)$$

The eigenvalue $\lambda\theta/\phi^2$ of $A_0$ is always positive, meaning that for initial conditions $\hat{\theta}(0) > 0$, the system will not converge to that equilibrium point. From $A_1$ we find the eigenvalues

$$\mu_{1,2} = \frac{-\phi \pm \sqrt{\phi^2 - 4\lambda\theta/\phi}}{2}. \qquad (26)$$

For any $\lambda > 0$, the real part of both eigenvalues is negative, meaning that $\theta$ is the only stable equilibrium of the adaptation scheme. However, for $\lambda > \phi^3/(4\theta)$ the system has complex eigenvalues, indicating oscillatory behavior. This can also be seen in Fig. 2, especially for the largest value of $\lambda$ considered.

Figure 2: Convergence of the estimated birth rate as a function of $\lambda$. The adaptive circuit from (18) was simulated using parameters $\theta = 0.$ and $\phi = 0.$ Small $\lambda$ lead to slow convergence, while $\lambda > \phi^3/(4\theta)$ causes overshooting and oscillatory behavior.

We used the model from eq. (2) and stochastic simulations [18] to simulate the reaction network from (18). The results in Fig. 3 show that the birth rate $\theta$, and in turn the system output $Y(t)$, is accurately tracked by the molecular LMS scheme. A detailed and quantitative error analysis in the presence of molecular fluctuations will be the subject of future work.

Figure 3: Algorithm performance in the presence of molecular noise. The adaptive circuit from (18) was simulated using the stochastic simulation algorithm to account for molecular noise. We assume that the target value $\theta$ changes spontaneously at certain time points to check the circuit's ability to readapt. The parameters used for the simulations were $\theta = 0.$, $\phi = 0.$ and $\lambda = 1e-$.

Remark: We want to point out another interesting property of this particular LMS estimator. Replacing the sensitivity $S(t)$ by a positive constant (e.g., its stationary value $S_\infty = 1/\phi$) does not change the asymptotic behavior of $\hat{\theta}(t)$, meaning that $\theta$ will remain the only stable equilibrium point. However, the adaptation law simplifies to

$$\frac{\mathrm{d}}{\mathrm{d}t}\hat{\theta}(t) = -\frac{\lambda}{\phi}\hat{\theta}(t)X(t) + \frac{\lambda}{\phi}\hat{\theta}(t)Y(t) = \frac{\lambda}{\phi}\hat{\theta}(t)\left(Y(t) - X(t)\right), \qquad (27)$$

which is structurally equivalent to a specific control motif that has been studied previously [19, 20]. In particular, it was shown to act as an integral control circuit exhibiting robust perfect adaptation [21]. This points to the potential use of our adaptive estimation framework for studying robustness in biological networks.

We further show how the LMS adaptation can be used to identify the target circuit's death rate, $\theta = \{\phi\}$. The corresponding sensitivity $S(t)$ can be shown to satisfy

$$\frac{\mathrm{d}}{\mathrm{d}t}S(t) = -X(t) - \hat{\theta}S(t). \qquad (28)$$

There are two issues associated with the above equation. First, it depends on the tuning parameter $\hat{\theta}$, which will change over time due to the adaptation scheme. Correspondingly, the value of $S(t)$ will differ from the actual sensitivity $\frac{\partial}{\partial\hat{\theta}}X(t)$. While this has an impact on the convergence rate of the LMS scheme, it does not affect its steady-state behavior (see the analytical results below). The second problem is that $S(t)$ is incompatible with mass-action rate laws due to the negative dependency on $X(t)$.
In order to address this problem, we consider the equation for $S^-(t) = -S(t)$, which is given by

$$\frac{\mathrm{d}}{\mathrm{d}t}S^-(t) = X(t) - \hat{\theta}S^-(t), \qquad (29)$$

and correspondingly use the LMS update rule

$$\frac{\mathrm{d}}{\mathrm{d}t}\hat{\theta}(t) = -\lambda\hat{\theta}(t)X(t)S(t) + \lambda\hat{\theta}(t)Y(t)S(t) = \lambda\hat{\theta}(t)X(t)S^-(t) - \lambda\hat{\theta}(t)Y(t)S^-(t). \qquad (30)$$

Overall, the adaptive circuit is given by the reactions

$$\emptyset \xrightarrow{\rho} \mathbf{Y} \qquad\qquad \mathbf{Y} \xrightarrow{\theta} \emptyset$$
$$\emptyset \xrightarrow{\rho} \mathbf{X} \qquad\qquad \hat{\boldsymbol{\theta}} + \mathbf{X} \longrightarrow \hat{\boldsymbol{\theta}}$$
$$\mathbf{X} \longrightarrow \mathbf{X} + \mathbf{S}^- \qquad\qquad \hat{\boldsymbol{\theta}} + \mathbf{S}^- \longrightarrow \hat{\boldsymbol{\theta}}$$
$$\mathbf{S}^- + \hat{\boldsymbol{\theta}} + \mathbf{Y} \xrightarrow{\lambda} \mathbf{S}^- + \mathbf{Y} \qquad\qquad \mathbf{S}^- + \hat{\boldsymbol{\theta}} + \mathbf{X} \xrightarrow{\lambda} \mathbf{S}^- + 2\hat{\boldsymbol{\theta}} + \mathbf{X}. \qquad (31)$$

Similar to the previous section, we performed simulations to check the adaptation performance of the circuit as a function of $\lambda$ under idealized noise-free conditions. This allows us to describe the adaptive circuit by the differential equations

$$\frac{\mathrm{d}}{\mathrm{d}t}Y(t) = \rho - \theta Y(t) \qquad (32)$$
$$\frac{\mathrm{d}}{\mathrm{d}t}X(t) = \rho - \hat{\theta}(t)X(t) \qquad (33)$$
$$\frac{\mathrm{d}}{\mathrm{d}t}\hat{\theta}(t) = -\lambda\hat{\theta}(t)X(t)S(t) + \lambda\hat{\theta}(t)Y(t)S(t) \qquad (34)$$
$$\frac{\mathrm{d}}{\mathrm{d}t}S(t) = -X(t) - \hat{\theta}(t)S(t). \qquad (35)$$

We again performed a local stability analysis of the differential equations to investigate the convergence of the circuit. In this case, the sensitivity $S(t)$ is coupled to $X(t)$ and $\hat{\theta}(t)$ and thus has to be included in the dynamic analysis. After eliminating the equation corresponding to $Y(t)$, we obtain the three-dimensional system

$$\frac{\mathrm{d}}{\mathrm{d}t}X(t) = \rho - \hat{\theta}(t)X(t) \qquad (36)$$
$$\frac{\mathrm{d}}{\mathrm{d}t}\hat{\theta}(t) = -\lambda\hat{\theta}(t)X(t)S(t) + \frac{\lambda\rho}{\theta}\hat{\theta}(t)S(t) \qquad (37)$$
$$\frac{\mathrm{d}}{\mathrm{d}t}S(t) = -X(t) - \hat{\theta}(t)S(t), \qquad (38)$$

which has equilibria at the origin and at the point $X_\infty = \rho/\theta$, $\hat{\theta}_\infty = \theta$ and $S_\infty = -\rho/\theta^2$. For compactness, we skip explicit expressions of the respective Jacobian matrices and eigenvalues. However, as in the previous example, we found that for any $\lambda > 0$, only the non-zero equilibrium point is stable. For $\lambda > \theta^4/(4\rho^2)$, the adaptation exhibits oscillatory behavior, which should be taken into consideration when designing this circuit. These results are confirmed in Fig. 4, for which we simulated (32)-(35) for three different values of $\lambda$.
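A minimal sketch of the noise-free death-rate adaptation (32)-(35), again integrated with forward Euler; the parameter values and initial conditions are illustrative assumptions.

```python
# Forward-Euler integration of the death-rate adaptation, eqs. (32)-(35).
# theta is the unknown death rate to be recovered; rho is known to the
# circuit. All numerical values are illustrative assumptions.
def simulate_death_rate(rho, theta, lam, t_end, dt=1e-3):
    Y, X, theta_hat, S = 0.0, 0.0, 0.5, 0.0
    for _ in range(int(t_end / dt)):
        dY = rho - theta * Y                   # target dynamics (32)
        dX = rho - theta_hat * X               # adaptive circuit (33)
        dth = lam * theta_hat * S * (Y - X)    # LMS update (34), factored
        dS = -X - theta_hat * S                # sensitivity dynamics (35)
        Y += dt * dY
        X += dt * dX
        theta_hat += dt * dth
        S += dt * dS
    return theta_hat

# The estimate converges to the true death rate theta; note that S(t)
# becomes negative, which is why the circuit (31) tracks S^- = -S instead.
print(simulate_death_rate(rho=2.0, theta=1.0, lam=0.05, t_end=300.0))
```

Here `lam = 0.05` is chosen below the oscillation threshold discussed above; increasing it produces damped oscillations in the estimate.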
Figure 4:
Convergence of the estimated death rate as a function of $\lambda$. The adaptive circuit from (31) was simulated for $\theta = 0.$ and $\phi = 0.$ and different values of $\lambda$. For $\lambda > \theta^4/(4\rho^2)$, the adaptation scheme exhibits oscillatory behavior.

LMS schemes provide a powerful and versatile framework for adaptive estimation. In this work we have shown how simple LMS-type estimators that usually run on a computer can be implemented biochemically for the purposes of synthetic biology. Such algorithms would allow a circuit to make inference about its environment and facilitate adaptive behavior. We have shown by simulation that the LMS circuit is able to accurately estimate and track unknown parameters of a birth-death process. We are currently extending the proposed scheme to more general scenarios where multiple parameters are to be estimated simultaneously and where both $\mathbf{X}$ and $\mathbf{Y}$ are arbitrary, possibly multivariate stochastic circuits.

We anticipate several important potential applications of the presented framework. In [11], optimal filters (such as the Kalman filter [22]) were employed to design noise-cancelling synthetic circuits. To this end, the LMS approach provides an attractive alternative to optimal filtering due to its generic and simple structure. In the future, we will extend the approach to nonlinear and multivariate models and provide an in-depth analysis of its properties.

ACKNOWLEDGMENTS.
This project was financed with a grant from the Swiss SystemsX.ch initiative, evaluated by the Swiss National Science Foundation.
COPYRIGHT INFORMATION.
This article was presented and published at the 2016 IEEE 55th Conference on Decision and Control (CDC) in Las Vegas. In the original version, Figure 1 was unreferenced and $\theta$ in eq. (28) should have been $\hat{\theta}$. In the present version, both issues have been corrected.

References

[1] Timothy S. Gardner, Charles R. Cantor, and James J. Collins. Construction of a genetic toggle switch in Escherichia coli. Nature, 403(6767):339–342, 2000.
[2] Michael B. Elowitz and Stanislas Leibler. A synthetic oscillatory network of transcriptional regulators. Nature, 403(6767):335–338, 2000.
[3] Stefano Cardinale and Adam Paul Arkin. Contextualizing context for synthetic biology – identifying causes of failure of synthetic biological systems. Biotechnology Journal, 7(7):856–866, 2012.
[4] Artemis Llamosi, Andres M. Gonzalez-Vargas, Cristian Versari, Eugenio Cinquemani, Giancarlo Ferrari-Trecate, Pascal Hersen, and Gregory Batt. What population reveals about individual cell identity: Single-cell parameter estimation of models of gene expression in yeast. PLoS Computational Biology, 12(2):1–18, 2016.
[5] Christoph Zechner and Heinz Koeppl. Uncoupled analysis of stochastic reaction networks in fluctuating environments. PLoS Computational Biology, 10(12):e1003942, 2014.
[6] Tina Toni and Bruce Tidor. Combined model of intrinsic and extrinsic variability for computational network design with application to synthetic biology. PLoS Computational Biology, 9(3):e1002960, 2013.
[7] J. Hasenauer, S. Waldherr, M. Doszczak, N. Radde, P. Scheurich, and F. Allgower. Identification of models of heterogeneous cell populations from population snapshot data. BMC Bioinformatics, 12(1):125, 2011.
[8] Lulu Qian, Erik Winfree, and Jehoshua Bruck. Neural network computation with DNA strand displacement cascades. Nature, 475(7356):368–372, 2011.
[9] Nils E. Napp and Ryan P. Adams. Message passing inference with chemical reaction networks. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 2247–2255. Curran Associates, Inc., 2013.
[10] Matthew R. Lakin and Darko Stefanovic. Supervised learning in adaptive DNA strand displacement networks. ACS Synthetic Biology, 5(8):885–897, 2016.
[11] Christoph Zechner, Georg Seelig, Marc Rullan, and Mustafa Khammash. Molecular circuits for dynamic noise filtering. Proceedings of the National Academy of Sciences, 113(17):4729–4734, 2016.
[12] Harold J. Kushner. On the differential equations satisfied by conditional probability densities of Markov processes, with applications. SIAM Journal on Control and Optimization, 2(1):106, 1964.
[13] Simon Haykin. Adaptive Filter Theory (3rd ed.). Prentice-Hall, Upper Saddle River, NJ, 1996.
[14] S. M. Kay. Fundamentals of Statistical Signal Processing: Estimation Theory. Prentice Hall, Englewood Cliffs, NJ, 1993.
[15] Nicolaas Godfried Van Kampen. Stochastic Processes in Physics and Chemistry, volume 1. Elsevier, 1992.
[16] D. F. Anderson and T. G. Kurtz. Continuous time Markov chain models for chemical reaction networks. In Design and Analysis of Biomolecular Circuits, pages 3–42. Springer, 2011.
[17] S. Karni and G. Zeng. The analysis of the continuous-time LMS algorithm. IEEE Transactions on Acoustics, Speech, and Signal Processing, 37(4):595–597, 1989.
[18] Daniel T. Gillespie. Stochastic simulation of chemical kinetics. Annual Review of Physical Chemistry, 58(1):35–55, 2007.
[19] Oren Shoval, Lea Goentoro, Yuval Hart, Avi Mayo, Eduardo Sontag, and Uri Alon. Fold-change detection and scalar symmetry of sensory input fields. Proceedings of the National Academy of Sciences, 107(36):15995–16000, 2010.
[20] C. Briat, C. Zechner, and M. Khammash. Design of a synthetic integral feedback circuit: dynamic analysis and DNA implementation. ACS Synthetic Biology (in press), 2016.
[21] Tau-Mu Yi, Yun Huang, Melvin I. Simon, and John Doyle. Robust perfect adaptation in bacterial chemotaxis through integral feedback control. Proceedings of the National Academy of Sciences, 97(9):4649–4653, 2000.
[22] Rudolph E. Kalman and Richard S. Bucy. New results in linear filtering and prediction theory. Journal of Fluids Engineering, 83(1):95–108, 1961.