A New Identification Framework For Off-Line Computation of Moving-Horizon Observers
Mazen Alamir
Abstract
In this paper, a new nonlinear identification framework is proposed to address the issue of off-line computation of moving-horizon observer estimates. The proposed structure merges the advantages of nonlinear approximators with the efficient computation of constrained quadratic programming problems. A bound on the estimation error is derived and the efficiency of the resulting scheme is illustrated using two state estimation examples.
I. INTRODUCTION

State estimation is a key issue in nonlinear systems control and diagnosis. Algorithms that achieve this task are called observers. These algorithms attempt to reconstruct the evolution of the state vector using only the measured (and generally noisy) quantities. As far as nonlinear systems are concerned, many observation techniques have been developed during the last four decades. These include high-gain observers [7], sliding-mode observers [15], Moving-Horizon Observers (MHOs) [11] and, naturally, the widely used Extended Kalman Filter (EKF). Excellent reviews of nonlinear observer design techniques can be found in [14] and [6].

Amongst all possible observer design alternatives, the MHO technique has witnessed increasing interest in recent years because of its ability to handle constraints and to fully exploit precise and generally nonlinear models of the dynamic processes under study. This observer requires the on-line solution of non-convex optimization problems in which the cost function is the integral output prediction error while the decision variable is the set of unknown quantities to be recovered
Mazen Alamir is with CNRS, Gipsa-lab, University of Grenoble. 11, Rue des Mathématiques, 38402, Saint-Martin d'Hères, France. [email protected]

(typically the initial state and unknown parameter vectors).

Despite encouraging recent advances on the issue of real-time computation of MHOs (see [3], [5] and the references therein for a recent survey on the topic), the highly demanding on-line computation involved may question the feasibility of the algorithm when systems needing high sampling rates are involved or when the use of highly involved optimization software is prohibited by the real-life context. When such obstacles combine with the absence of mathematical structure that renders the use of analytical observers impossible, something has to be done in order to achieve the estimation task.

The ideas proposed in this paper follow a suggestion made by [1] aiming at identifying off-line the relationship between the sequence of (input/output) measurements and the corresponding initial state at the beginning of the moving observation window. By doing so, the cumbersome on-line optimization step involved in Moving-Horizon Observers [11] can be avoided. A concrete implementation of such an idea using a Neural Network (NN) identification structure has been proposed in [2]. As pointed out in [1], NN structures, like all standard nonlinear approximators, offer high approximation capabilities at the price of non-convex optimization schemes which generally suffer from convergence and computation time issues. In the present paper, a novel nonlinear identification scheme is proposed which, after a suitable change in the decision variables, can be solved using constrained quadratic programming. Such a feature is crucial if this optimization problem lies in the inner loop of a whole static game optimization formulation (such as the one proposed in [1]) that may be needed to assess the convergence of the resulting MHO.
More precisely, the solution of the static game is needed to compute the set of approximator parameters that achieves a sufficiently small maximum error over all possible initial states (and initial estimation errors). The price to pay in order to gain computational efficiency is a theoretical restriction of the class of situations that can be addressed, as the proposed identification structure possesses no universal approximation property. For this reason, the comparison between the two identification schemes is problem-dependent.

It is important to underline that in the scheme proposed in [1], [2], a set of non-convex optimization problems are solved off-line (using different initial conditions and initial state estimation errors) in order to build the learning data for the identification step. Instead, the scheme of the present paper avoids the use of such non-convex optimization by focusing on the model-based state/output relationship that can be discovered using system simulation providing the learning data, followed by an identification step using a specific class of nonlinear structures. The so-identified function gives the output-related guess of the state, which can then be amended by a model-related term in a rather trivial way in order to recover a kind of classical measurement/model confidence trade-off. By doing so, one can avoid the risk of local minima that may corrupt the quality of the learning data.

The paper is organized as follows. First, Section II defines the state estimation-related identification problem. Section III describes the proposed nonlinear identification setting and gives a bound on the state estimation error. The way the learning data used in the identification scheme is built is shown in Section IV. Illustrative examples are given in Section V while Section VI summarizes the contribution of the paper and suggests hints for further investigations.

II. STATE ESTIMATION AS AN IDENTIFICATION PROBLEM
Let us consider a nonlinear system given by:

$$x(k+1) = f(x(k), u(k)) \ ; \quad y(k) = h(x(k), u(k)) \qquad (1)$$

where $x \in \mathbb{R}^n$, $u \in \mathbb{R}^{n_u}$ and $y \in \mathbb{R}^{n_y}$ represent the state, the measured input and the measured output vectors respectively. The integer $k$ refers to the sampling instant $k\tau$ for some sampling period $\tau > 0$. Regardless of the algorithm that may be used to reconstruct the state vector from the measured quantities, the implicit assumption that underlies the possibility of state reconstruction is that there is an integer $N \in \mathbb{N}$ and a map $F$ such that the following approximation holds:

$$x(k) \approx F(Z(k)) \qquad (2)$$

where $Z(k)$ is the regressor built up from the past measured quantities according to:

$$Z(k) := \begin{pmatrix} \tilde{y}(k) \\ \tilde{u}(k) \end{pmatrix} \in \mathbb{R}^{N(n_u+n_y) =: n_z} \qquad (3)$$

where $\tilde{y}(k) := (y^T(k), \cdots, y^T(k-N+1))^T$ and $\tilde{u}(k) := (u^T(k), \cdots, u^T(k-N+1))^T$. Note that in the absence of measurement noise, the implicit definition of the map $F$ involved in (2) is given by the solution of the following optimization problem:

$$F(Z(k)) := X(k, k-N+1, x^*, \tilde{u}(k)) \qquad (4)$$

$$x^* := \arg\min_x J(x, Z(k)) := \sum_{i=0}^{N-1} \| y(k-i) - Y(k-i, k-N+1, x, \tilde{u}(k)) \|^2 \qquad (5)$$

where $X(j, k-N+1, x, \tilde{u}(k))$ [resp. $Y(j, k-N+1, x, \tilde{u}(k))$] refers to the model-based predicted value at instant $j$ of the state [resp. output] when the state at instant $k-N+1$ is equal to $x$ and when the sequence of controls defined by $\tilde{u}(k)$ is applied on the time interval $[k-N+1, k]$. It results that if one obtains through off-line computations a good approximation of the map $F$ involved in (2), then the measurement-related part of the MHO would be obtained without on-line optimization. This is obviously a static identification problem.
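To fix ideas, the regressor construction (3) can be sketched as follows. The particular linear model and all function names below are illustrative placeholders, not taken from the paper:

```python
import numpy as np

# Illustrative discrete-time model (1): x(k+1) = f(x,u), y(k) = h(x,u).
# The linear f, h below are placeholders used only to exercise the
# regressor construction of eq. (3).
def f(x, u):
    A = np.array([[0.9, 0.1], [0.0, 0.8]])
    B = np.array([[0.0], [1.0]])
    return A @ x + B @ u

def h(x, u):
    return np.array([x[0] + x[1]])  # n_y = 1

def regressor(y_hist, u_hist, k, N):
    """Z(k): stack y(k),...,y(k-N+1) then u(k),...,u(k-N+1), as in (3)."""
    y_tilde = np.concatenate([y_hist[k - i] for i in range(N)])
    u_tilde = np.concatenate([u_hist[k - i] for i in range(N)])
    return np.concatenate([y_tilde, u_tilde])

# Simulate K steps and build Z(K-1); dim Z = N * (n_y + n_u) = 4 * 2 = 8.
rng = np.random.default_rng(0)
N, K = 4, 12
x = np.array([1.0, -0.5])
y_hist, u_hist = [], []
for k in range(K):
    u = rng.normal(size=1)
    y_hist.append(h(x, u))
    u_hist.append(u)
    x = f(x, u)
Z = regressor(y_hist, u_hist, K - 1, N)
```

The off-line identification of $F$ then amounts to learning the map from such vectors $Z(k)$ to the corresponding state, instead of solving (4)-(5) on-line.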
More precisely, since this identification is to be obtained for each component of the state vector one needs to reconstruct, the basic generic problem one has to solve consists of finding a nonlinear map $F$ that links a scalar quantity $r$ (a component of the state vector) to a regression vector $Z$, namely:

$$r \approx F(Z) \ ; \quad r \in \mathbb{R} \ ; \quad Z \in \mathbb{R}^{n_z} \qquad (6)$$

Remark 1:
Note that, as suggested in [1], the cost function involved in (5) may contain a regularization term $\|x - \bar{x}\|^2$ where $\bar{x}$ stands for the predicted value of the decision variable based on the state space model and the last estimate. This regularizes the estimation process and enhances the stability of the iterates. Such regularization can be decoupled from the identification process by introducing afterwards the following final estimate, which represents a trade-off between the measurement-related estimation and the model-related estimation based on the previous solution:

$$\hat{x} = \lambda F(Z) + (1 - \lambda)\bar{x} \qquad (7)$$

where $F(Z)$ is the measurement-based identified part while $\bar{x}$ is the model-predicted part based on the past value of the estimate. In the sequel, we focus on the $F(Z)$ term, which corresponds to the choice $\lambda = 1$ in (7).

III. A NONLINEAR IDENTIFICATION FRAMEWORK
Identification of nonlinear relationships is an open issue. Many frameworks have been proposed, including neural networks, Wiener-Hammerstein and Volterra-series-based formulations [9], to cite but a few possibilities. While offering high approximation capabilities, nonlinear approximators need non-convex optimization in order to compute the approximator parameters. Such optimization problems suffer from computation time issues and the presence of local minima that may prevent the solver from reaching the global minimizers. In this paper, a nonlinear approximator is proposed that can be computed through a constrained QP formulation at the price of lesser universality.

More precisely, in this paper, attention is focused on nonlinear maps $F$ that take the following form:

$$F(Z) := \Gamma^{-1}(L^T Z) \ ; \quad \Gamma(\cdot) \ \text{strictly increasing} \qquad (8)$$

The existence of $\Gamma^{-1}$ is guaranteed by the strict monotonicity of $\Gamma$. Note that putting together (6) and (8) enables the identification problem to be re-formulated as the one of finding:
• the vector $L \in \mathbb{R}^{n_z}$ and
• the monotonically increasing function $\Gamma$
such that the following approximation holds:

$$\Gamma(r(k)) \approx L^T Z(k) \qquad (9)$$

Remark 2:
Note that (8) is a nonlinear parameterization as one has to find both $\Gamma^{-1}$ and $L$, which operate nonlinearly. Moreover, while this structure is adapted to the derivation of efficiently solvable constrained QP problems, it is not universal in the sense that not every function $F(Z)$ can be represented using the structure (8). Only the class of functions $F$ for which there exists a linear combination of the components of $Z$ that maps to $r$ through a monotonic function is eligible. This property is impossible to check a priori, but the efficiency of QP solvers makes it easy to check even for very rich parameterizations. If the residual is still too high, then the system is surely out of this class, since failure cannot result from local minima as is the case in standard nonlinear approximator computation.

The general form of the l.h.s. of (9) that is used hereafter is given by:

$$\Gamma(r) := \Big[ B\Big( \frac{r - r_{min}}{r_{max} - r_{min}} \Big) \Big] \cdot \mu =: \big[ B(\eta(r)) \big] \cdot \mu = \sum_{j=1}^{n_b} \mu_j B^{(j)}(\eta(r)) \qquad (10)$$

where $r_{min}$ and $r_{max}$ are the minimum and maximum values of $r$ over the learning data (see Section IV) while $B(\eta)$ is a function basis that is hereafter defined according to:

$$\big\{ B^{(j)} \big\}_{j=1}^{n_b} := \{\eta\} \cup \big\{ B_1^{(i)} \big\}_{i=1}^{n_m-1} \cup \big\{ B_2^{(i)} \big\}_{i=1}^{n_m} \qquad (11)$$

where the number of functions in the set is $n_b = 2 n_m$ while the functions $B_1^{(i)}$ and $B_2^{(i)}$ are defined by:

$$B_1^{(i)} := \frac{(1+\alpha_i)\,\eta}{\alpha_i + \eta} \ ; \quad B_2^{(i)} := \frac{\eta}{1 + \alpha_i - \alpha_i \eta}$$

The coefficients $\alpha_i$ are given by $\alpha_i := \exp(\beta(1-i))$.
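A minimal numerical sketch of the basis (10)-(11) is given below. It assumes the forms $B_1^{(i)}(\eta) = (1+\alpha_i)\eta/(\alpha_i+\eta)$, $B_2^{(i)}(\eta) = \eta/(1+\alpha_i-\alpha_i\eta)$ and $\alpha_i = e^{\beta(1-i)}$; both families are then strictly increasing on $[0,1]$ with $B(0)=0$ and $B(1)=1$, so any $\mu$ with positive entries yields a strictly increasing $\Gamma$:

```python
import numpy as np

def alphas(n_m, beta):
    # alpha_i = exp(beta*(1-i)) for i = 1..n_m (assumed form)
    return np.exp(beta * (1.0 - np.arange(1, n_m + 1)))

def basis(eta, n_m, beta):
    """Evaluate the n_b = 2*n_m basis functions of (11) at eta in [0,1]."""
    eta = np.asarray(eta, dtype=float)
    a = alphas(n_m, beta)
    cols = [eta]                                               # identity element
    cols += [(1 + ai) * eta / (ai + eta) for ai in a[:-1]]     # B_1, i=1..n_m-1
    cols += [eta / (1 + ai - ai * eta) for ai in a]            # B_2, i=1..n_m
    return np.stack(cols, axis=-1)                             # (..., 2*n_m)

def gamma(r, mu, r_min, r_max, n_m, beta):
    """Gamma(r) of eq. (10); monotone whenever constraint (13) holds."""
    eta = (r - r_min) / (r_max - r_min)
    return basis(eta, n_m, beta) @ mu

# With all mu_j > 0, every basis function is increasing, hence so is Gamma.
r = np.linspace(0.0, 1.0, 50)
g = gamma(r, mu=np.ones(2 * 5), r_min=0.0, r_max=1.0, n_m=5, beta=0.5)
```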
Note that many other function bases can be used, although the author's experience suggests that the function basis proposed above is rather appropriate given the monotonic character of the targeted nonlinear map.

Now, by combining (9)-(10), it follows that the unknowns $L$ and $\mu$ may be obtained by solving the following least-squares problem:

$$\min_{L, \mu} \sum_{(r, Z) \in E} \| [B(\eta(r))] \mu - Z^T L \|^2 \quad \text{under the constraint (15)} \qquad (12)$$

where $E$ is the learning data including a large number of instantiations of the pairs $(r, Z)$. Note however that the least-squares minimization invoked in (12) has to be done taking into account the following constraints:

1) $\Gamma$ must be strictly increasing. This leads to the following inequality constraint on the parameter vector $\mu$:

$$\forall \eta \in [0, 1] \quad \Big[ \frac{dB}{d\eta}(\eta) \Big] \mu \ge \varepsilon \qquad (13)$$

for some a priori chosen lower bound $\varepsilon > 0$ on the derivative.

2) The following normalization integral constraint has to be satisfied in order to avoid the trivial zero solution ($L = 0$, $\mu = 0$):

$$\Big[ \int_0^1 B(\eta)\, d\eta \Big] \cdot \mu = \frac{1}{2} \big[ r_{min} + r_{max} \big] \qquad (14)$$

Remark 3:
Note that the use of $(r_{min} + r_{max})/2$ in the r.h.s. of (14) is arbitrary since the solution of (12) is defined up to a multiplicative gain. Note however that the r.h.s. of (14) is inspired by the particular case where a linear function fits the learning data. In this case, $\Gamma$ can be taken to be the identity map and the integral has to equal the mean value of $r$.

The two sets of constraints (13) and (14) are affine in the decision variable $\mu$. Therefore, by considering a sequence of values $0 = \eta_1 < \eta_2 < \cdots < \eta_q = 1$, these constraints can be approximated by the following matrix relations:

$$[A_{ineq}] \mu \le B_{ineq} \in \mathbb{R}^q \ ; \quad [A_{eq}] \mu = B_{eq} \in \mathbb{R} \qquad (15)$$

To summarize, the quadratic function (12) in the decision variables $L$ and $\mu$ is minimized under the linear constraints (15) to obtain the optimal parameters $L^{opt}$ and $\mu^{opt}$. Note that there is no loss of generality in taking $\Gamma$ strictly increasing since it suffices to take $L$ of opposite sign to make (9) valid for a strictly decreasing map.

The following result on the estimation error is straightforward:

Proposition 1: Let $X \subset \mathbb{R}^n$ be a subset of the state space to which all states of interest belong, and assume that the following conditions are satisfied:

1) The admissible control sequences are bounded (i.e. $\tilde{u} \in \tilde{U}$) and lead for any $x(0) \in X$ to a bounded sequence of outputs ($\tilde{y} \in \tilde{Y}$);

2) there is an upper bound $\varepsilon_x > 0$ on the identification residual:

$$\sup_{(x(0), \tilde{u}) \in X \times \tilde{U}} \big\| x(0) - F\big( Z(x(0), \tilde{u}) \big) \big\| \le \varepsilon_x \qquad (16)$$

3) For any initial state $x(0)$, the combined effect of noise and model mismatches on the regressor $Z$ is bounded according to:

$$\| Z_{real}(x(0), \tilde{u}) - Z(x(0), \tilde{u}) \|_\infty \le \gamma \qquad (17)$$

where $Z_{real}$ denotes the real measurement regressor (which is not obtained by simulation) that may be obtained on the real system starting from $x(0)$ and under the control sequence $\tilde{u}$.

Then the estimation error is of the form:

$$\| \hat{x} - x \| \le \varepsilon_x + O(\gamma) \qquad (18)$$

PROOF.
First of all, note that assumption 1) implies that all nominal measurement regressors $Z$ of interest belong to some compact set $\mathcal{Z}$. Using assumptions (16)-(17), one clearly has:

$$\big\| x(0) - \hat{x}(0) \big\| := \big\| x(0) - F\big( Z_{real}(x(0), \tilde{u}) \big) \big\| \le \| x(0) - F(Z) \| + \| F(Z) - F(Z_{real}) \| \le \varepsilon_x + \max_{Z \in \bar{\mathcal{Z}}} \Big\| \frac{\partial F}{\partial Z}(Z) \Big\| \cdot \| Z - Z_{real} \| \le \varepsilon_x + \sqrt{n_z} \cdot M(\gamma) \cdot \gamma$$

where $\bar{\mathcal{Z}} := \mathcal{Z} + B(0, \gamma)$ and where $M(\gamma)$ is the maximum value of the (by construction) continuous map $\partial F / \partial Z$ over the compact set $\bar{\mathcal{Z}}$. $\Box$

Remark 4:
Note that the upper bound $\varepsilon_x$ involved in (16) can be used as a cost function $\varepsilon_x(N, n_m, \beta)$ to be minimized in the decision variables $(N, n_m, \beta)$, which are the parameters of the approximator. This obviously results in a static game in which the identification step appears in the inner loop. This strengthens the relevance of having an easy-to-solve identification problem through the constrained QP formulation. Note also that the value of $\varepsilon_x$ reflects to what extent the map linking the regressor $Z$ to the initial state is far from the set of maps described by the structure (8). This is because no identification error can be attributed to the optimization process, as the underlying problem is a quadratic programming one.

IV. BUILDING THE LEARNING DATA

Assume without loss of generality that one is focused on the identification problem associated to the estimation of $r = x_i$ for some $i \in \{1, \ldots, n\}$. Let us also assume that the set of relevant values of the state vector $x$ is contained in some hypercube, namely:

$$x \in X := \prod_{i=1}^n [x_i^{min}, x_i^{max}] \qquad (19)$$

The learning data set $E$ involved in the definition of the least-squares problem (12) is obtained through the following steps:

1) First, a set of initial states $\{x_0^{(j)}\}_{j=1}^{n_g}$ is chosen. This can be obtained using a uniform grid on each interval $[x_i^{min}, x_i^{max}]$ of possible values of the $i$-th component of the state.

2) A set of $n_g$ control profiles $\{\tilde{u}^{(j)}\}_{j=1}^{n_g}$, $\tilde{u}^{(j)} \in \mathbb{R}^{N_s \cdot n_u}$, is also generated. Each profile defines a control sequence over $N_s > N$ sampling periods.

3) For each pair $(x_0^{(j)}, \tilde{u}^{(j)})$, the system is simulated over $N_s$ sampling periods in order to generate the corresponding state profiles:

$$\Big\{ X(k, 0, x_0^{(j)}, \tilde{u}^{(j)}) \ , \ k \in \{0, \ldots, N_s\} \Big\}_{j=1}^{n_g} \qquad (20)$$

4) The data described above enables, for each pair $(j, k) \in \{1, \ldots, n_g\} \times \{N, \ldots
, N_s\}$, to obtain:
• two sequences $\tilde{y}^{(j)}(k)$, $\tilde{u}^{(j)}(k)$ that define $Z^{(j)}(k)$ according to (3);
• the corresponding value of $r^{(j)}(k)$ according to $r^{(j)}(k) := X\big(N, 0, X(k-N, 0, x_0^{(j)}, \tilde{u}^{(j)}), \tilde{u}^{(j)}\big)$.

Finally, the learning data set $E$ involved in the definition of the least-squares problem (12) is given by:

$$E := \Big\{ \big( r^{(j)}(k), Z^{(j)}(k) \big) \Big\}_{(j,k) \in \{1, \ldots, n_g\} \times \{N, \ldots, N_s\}} \qquad (21)$$

which is obviously a discrete set of cardinality $n_E$ given by:

$$n_E := n_g \cdot (N_s - N + 1) \qquad (22)$$

V. ILLUSTRATIVE EXAMPLES

In this section, illustrative examples are proposed in order to give concrete instantiations of the different steps of the proposed framework.

A. Example 1
Let us consider the famous Van der Pol oscillator, which is governed by:

$$\dot{x}_1 = -x_2 \ ; \quad \dot{x}_2 = 4 x_1 + (1 - x_1^2) x_2 \ ; \quad y := x_1 + x_2 + w \qquad (23)$$

where $w$ is a measurement noise assumed here to be white and Gaussian with variance $\sigma^2$. Typical behavior of the resulting noisy measurement is shown in Figure 1 (left subplot) together with the typical corresponding level of the noise (right subplot). The basic sampling period is taken equal to $\tau = 0.1$ sec. The bounds on the state components leading to the definition of the subset $X$ are given by $X := [-2, +2] \times [-5, +5]$, which obviously enhances the nonlinear character of the resulting identification problem (as $x_1^2$ is not negligible when compared to 1 in the expression of $\dot{x}_2$). The learning data has been defined using a uniform grid containing $n_g = 4^2 = 16$ initial states. For each state, the system is simulated during $N_s = 100$ sampling periods (10 sec) which, according to (22), leads to a learning data set $E$ of cardinality $n_E = 16 \times (100 -
10 + 1) = 1456$. Note that since there are two states $x_1$ and $x_2$ to be estimated, two identification problems are solved and two nonlinear maps $F^{(1)}$ and $F^{(2)}$ are obtained for the reconstruction of these two states according to $\hat{x}_i(k) = F^{(i)}(Z(k))$, $i \in \{1, 2\}$. The identification parameters $N = 10$ and $n_m = 5$ are used, which leads to an observation horizon of $N \times \tau = 1$ sec and a function basis containing $n_b = 10$ elements.

Fig. 1. Example 1. Typical behavior of the measured output and the corresponding noise. The Gaussian noise used in (23) is generated with variance $\sigma^2$.

The quality of the resulting match between the identified and the true values is shown in the upper subplots of Figure 2 (left plots). Note that the identified values are obtained using intentionally noised simulation data, which enables one to inject the noise already in the identification process, resolving the trade-off at this early stage. The lower subplots of Figure 2 (left) show the gradients of the resulting nonlinear functions $\Gamma^{(1)}$ and $\Gamma^{(2)}$ that are involved in (8) for the two components of the state respectively. One may note the highly nonlinear character of the map $\Gamma^{(1)}$ in particular. An estimation scenario is shown in Figure 2 (right plots) where the initial states of the system and of the observer are respectively given by $x(0) = (2, \cdot)^T$ and $\hat{x}(0) = (1, \cdot)^T$, and where a new realization of the measurement noise is used (different from the one used to construct the learning data set). Note that the correction of the observer begins only once the acquired data covers the observation horizon length. This means that the first correction occurs at $t = (N-1)\tau = 0.9$ sec. It is worth emphasizing here that the observer design is done using only the output-measurement-related correction in order to concentrate on the contribution of the present paper.
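The constrained least-squares step (12)-(15) used in the identification above can be sketched with a generic NLP solver standing in for a dedicated QP solver (SciPy's SLSQP here); the two-function basis and all data below are synthetic and purely illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def fit_gamma(B_data, Z_data, dB_grid, int_B, rhs_eq, eps=1e-3):
    """Minimize (12) in theta = (mu, L) under the discretized monotonicity
    constraint (13) and the normalization constraint (14). The problem is a
    convex QP; SLSQP is used here only for brevity of the sketch."""
    n_b, n_z = B_data.shape[1], Z_data.shape[1]

    def cost(theta):
        res = B_data @ theta[:n_b] - Z_data @ theta[n_b:]
        return res @ res

    cons = [
        {"type": "ineq", "fun": lambda th: dB_grid @ th[:n_b] - eps},  # (13)
        {"type": "eq", "fun": lambda th: int_B @ th[:n_b] - rhs_eq},   # (14)
    ]
    sol = minimize(cost, np.ones(n_b + n_z), constraints=cons, method="SLSQP")
    return sol.x[:n_b], sol.x[n_b:]

# Synthetic learning data: r depends monotonically on a linear form of Z.
rng = np.random.default_rng(0)
Z_data = rng.normal(size=(200, 3))
r = Z_data @ np.array([1.0, 2.0, -1.0]) + 10.0
eta = (r - r.min()) / (r.max() - r.min())

# Two-function basis [eta, 2*eta/(1+eta)] with its derivative on a grid of
# eta values (for (13)) and its exact integral over [0,1] (for (14)).
B_data = np.column_stack([eta, 2 * eta / (1 + eta)])
eta_g = np.linspace(0.0, 1.0, 21)
dB_grid = np.column_stack([np.ones_like(eta_g), 2 / (1 + eta_g) ** 2])
int_B = np.array([0.5, 2 * (1 - np.log(2.0))])
mu, L = fit_gamma(B_data, Z_data, dB_grid, int_B, (r.min() + r.max()) / 2)
```

In the paper's setting the same structure is simply solved with the full basis (10)-(11) and a dedicated QP solver, once per state component.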
It goes without saying that a balanced estimation, in which the output-related estimation and the state-equation-related estimation are combined, can be implemented following (7) or, more generally, the standard ideas of [8], [13] and the references therein.

B. Example 2
Let us consider the problem of estimating the state of a dynamic model describing an Escherichia coli strain. Many knowledge-based model derivation attempts have been investigated in order to better understand the mechanisms that underlie the evolution of the population [4], [10] or to develop model-based state observers [12].

Fig. 2. Example 1. (Left): Results of the nonlinear identification procedure using a learning set of cardinality $n_E = 1456$. The upper plots compare the estimated states based on the identified maps $F^{(1)}$ and $F^{(2)}$ while the lower plots show the gradients of the corresponding monotonic functions $\Gamma^{(1)}$ and $\Gamma^{(2)}$. (Right): Typical results of the state estimation using a purely output-related estimation. The observer begins once the first $N = 10$ measurements are acquired (first correction at $t = 0.9$ sec). This estimation is obtained while a Gaussian measurement noise $w$ is injected (see Figure 1 for a typical behavior of the noise).

The dynamic model that is commonly used in deriving dynamic state estimators involves the E. coli strain $X$ that grows on the limiting substrate $S$ while yielding a final intracellular product: the $\beta$-galactosidase $P$. The model is given by:

$$\dot{X} = \mu(S) X - k_d \exp(-k_p P) X \qquad (24)$$
$$\dot{S} = -y_s \mu(S) X - k_m X \qquad (25)$$
$$\dot{P} = y_p \mu(S) \frac{I}{I + k_I} X - k_d \exp(-k_p P) P \qquad (26)$$

where $\mu$ is the growth rate, modelled using a classical Monod-type relation $\mu(S) := \frac{\mu_m S}{k_s + S}$ in which $\mu_m$ is the maximum specific growth rate for the cell growth (in h$^{-1}$). $k_s$ is the half-saturation constant; $k_p$ and $k_d$ are constants involved in the Arrhenius-type death kinetics that depends on $P$. $k_m$ is a maintenance rate that describes the energy required for normal upkeep and repair. $y_s$, $y_l$ [used in the measurement equation (27) below] and $y_p$ are identified coefficients. $I$ stands for the arabinose inducer, which is assumed to be constant (no degradation).
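For reference, a simulation sketch of model (24)-(26) is given below. The parameter values are placeholders chosen only to produce a qualitatively sensible batch (growth, substrate depletion, product accumulation); they are NOT the identified values of [12]:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder parameter values -- illustrative only, not those of [12].
mu_m, k_s = 0.5, 0.1                 # Monod growth: mu(S) = mu_m*S/(k_s+S)
k_d, k_p, k_m = 0.05, 1.0, 0.01      # death kinetics and maintenance
y_s, y_p, k_I, I = 2.0, 1.0, 0.2, 1.0

def mu(S):
    return mu_m * S / (k_s + S)

def rhs(t, z):
    X, S, P = z
    S = max(S, 0.0)                  # evaluate rates with S clamped at 0
    death = k_d * np.exp(-k_p * P)   # Arrhenius-type death term
    dX = mu(S) * X - death * X                          # (24)
    dS = -y_s * mu(S) * X - k_m * X                     # (25)
    dP = y_p * mu(S) * I / (I + k_I) * X - death * P    # (26)
    return [dX, dS, dP]

# Batch simulation from (X, S, P) = (0.5, 5.0, 0.0).
sol = solve_ivp(rhs, (0.0, 6.0), [0.5, 5.0, 0.0], max_step=0.05)
X_t, S_t, P_t = sol.y
```

Such simulations, repeated over a grid of initial $(S, P)$ values, are exactly what provides the learning data for the reduced observer discussed next.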
The output measurement vector is given by $y := (X + w_1, L + w_2)^T$ where $L$ is the light produced by bioluminescence, which is linked to the state variables by the following expression:

$$L = y_l \cdot \mu(S) \frac{I}{I + k_l} X P \qquad (27)$$

while $w_1$ and $w_2$ are measurement noises that are taken here to be white and Gaussian with variances $\sigma_1^2$ and $\sigma_2^2$ respectively. The values of the model parameters used in the sequel can be found in [12].

For this example, the framework described above can be used to construct a reduced observer. More precisely, it is shown hereafter that there is a satisfactory solution to the underlying identification problem for any learning set that is constructed using an admissible set of initial conditions of the form $X(X_0) := \{X_0\} \times [0, S^{max}] \times [0, P^{max}]$, for some upper bounds $S^{max}$ and $P^{max}$ on the substrate and the product, with the following parameters: $N_s = 40$; $N = 3$; $n_m = 5$; $n_g = 25$. This means that for each initial value $X_0$ of the E. coli strain, $n_g = 25$ simulations of the system with different initial states (all sharing the same value $X_0$ but with different values of $P$ and $S$) are run during $N_s \tau$ time units, hence generating a learning set of cardinality $n_E$ given by $n_E = 25 \times (40 -
3) = 925$. Note that two identification problems are to be defined and solved using the following definition of the quantities $r_1$ and $r_2$ to be identified:

$$r_1 := P \ ; \quad r_2 := \mu(S) \cdot X \qquad (28)$$

Note that if $r_1$ and $r_2$ are well estimated then $\mu(S)$ can also be well estimated since $X$ is assumed to be measured (reduced observer). Note also that since only $\mu(S)$ is involved in the system equations, $S$ is necessarily estimated through $\mu(S)$.

The identification results are shown in Figure 3 for two different values of $X_0$. One can appreciate that a good match between the estimated $\hat{r}_i$ and the simulated $r_i$ for $i = 1, 2$ is obtained over the learning set while using a rather economical parametrization ($N = 3$ and $n_m = 5$). This clearly shows that the proposed methodology enables us to perform the estimation scheme provided that the initial value of the state component $X$ is measured at the beginning of the batch.

Figure 4 shows the typical behavior of the output-based state estimation that uses the two nonlinear maps identified above. Note that the noises $w_1$ and $w_2$ that affect the measured signals used in the construction of the regressor are given by $\sigma_1$ and $\sigma_2 = 20$. This leads to the noise level that can be observed in Figure 5.

Fig. 3. Example 2. Examples of identification results for $r_1 = P$ and $r_2 = \mu(S) \cdot X$ for two different initial values of the E. coli strain $X(0)$: panels (a) and (b), the latter corresponding to $X(0) = 2$.

Fig. 4. Example 2. Behavior of the output-based estimation of $r_1 = P$ and $r_2 = \mu(S) \cdot X$ under the measurement noises $w_1$ and $w_2$ ($\sigma_2 = 20$). Typical behavior of the measurement noise can be observed in Figure 5.

Fig. 5. Example 2. Typical behavior of the measurement noise used in the simulation of the state estimation depicted in Figure 4.

VI. CONCLUSION AND FUTURE WORK
In this paper, a nonlinear approximator has been proposed for a class of nonlinear relationships and has been applied in the context of moving-horizon observer design. The proposed scheme offers the advantage of requiring only the solution of a constrained QP problem and can therefore be efficiently integrated in the inner loop of a global scheme aiming at optimizing the approximator parameters.

A potential research line is to investigate a systematic computation of the optimal triplet $(N, \tau, \beta)$ defining the nonlinear approximator. Indeed, a convenient (optimal) choice of these parameters is intimately linked to the noise level as well as the uncertainty structure. A good knowledge of the latter is crucial to obtain a pertinent choice of these parameters, which has been found in this paper by a trial-and-error approach.

Another research track concerns the use of sparse identification techniques in order to derive low-dimensional parameter vectors. This can be greatly facilitated by the availability of the Lagrange multipliers of the QP problem that underlies the identification step.

REFERENCES

[1] A. Alessandri, M. Baglietto, and G. Battistelli. Moving-horizon state estimation for nonlinear discrete-time systems: New stability results and approximation schemes.
Automatica, 44:1753–1765, 2008.
[2] A. Alessandri, M. Baglietto, G. Battistelli, and M. Gaggero. Moving-horizon state estimation for nonlinear systems using neural networks. IEEE Transactions on Neural Networks, 22(5):768–780, 2011.
[3] A. Alessandri, M. Baglietto, G. Battistelli, and V. Zavala. Advances in moving horizon estimation for nonlinear systems. In Proceedings of the 49th IEEE Conference on Decision and Control, Atlanta, GA, USA, 2010.
[4] H. J. Cha, C. F. Wu, J. Valdes, G. Rao, and W. Bentley. Observation of green fluorescent protein as a fusion partner in genetically engineered Escherichia coli: monitoring protein expression and solubility. Biotech. Bioeng., 76:565–574, 2000.
[5] M. Diehl, H. J. Ferreau, and N. Haverbeke. Efficient numerical methods for nonlinear MPC and moving horizon estimation. In Assessment and Future Directions of Nonlinear Model Predictive Control, pages 317–417. Springer-Verlag, 2009.
[6] G. Besançon (Ed.). Nonlinear Observers and Applications. Lecture Notes in Control and Information Sciences. Springer-Verlag, 2007.
[7] J. P. Gauthier, H. Hammouri, and S. Othman. A simple observer for nonlinear systems: application to bioreactors. IEEE Transactions on Automatic Control, 37, 1992.
[8] E. L. Haseltine and J. B. Rawlings. Critical evaluation of extended Kalman filtering and moving horizon estimation. Ind. Eng. Chem. Res., 44(8), 2005.
[9] F. J. Doyle III, R. K. Pearson, and B. A. Ogunnaike. Identification and Control Using Volterra Models. Springer-Verlag, London, 2001.
[10] J. Lee and F. Ramirez. Mathematical modeling of induced protein production by recombinant bacteria. Biotech. Bioeng., 29:635–646, 1992.
[11] H. Michalska and D. Q. Mayne. Moving-horizon observers and observer-based control. IEEE Transactions on Automatic Control, 40:995–1006, 1995.
[12] M. Nardi, I. Trezzani, H. Hammouri, and P. Dhurjati. Mathematical model and observer for a recombinant Escherichia coli strain with bioluminescence indicator genes. In Proceedings of the Second International Symposium on Communications, Control and Signal Processing (ISCCSP 2006), 2006.
[13] C. V. Rao, J. B. Rawlings, and D. Q. Mayne. State constrained estimation for nonlinear discrete-time systems: Stability and moving horizon approximations. IEEE Transactions on Automatic Control, 48(2), 2003.
[14] D. Simon. Optimal State Estimation: Kalman, H-infinity, and Nonlinear Approaches. Wiley-Interscience, 2006.