Computing Flowpipe of Nonlinear Hybrid Systems with Numerical Methods
Olivier Bouissou and Samuel Mimram
CEA Saclay Nano-INNOV Institut CARNOT, Gif-sur-Yvette France
Alexandre Chapoutot
U2IS – ENSTA ParisTech, Palaiseau France
January 2013
Abstract
Modern control-command systems often include controllers that perform nonlinear computations to control a physical system, which can typically be described by a hybrid automaton containing high-dimensional systems of nonlinear differential equations. To prove safety of such systems, one must compute all the reachable sets from a given initial position, which might be uncertain (its value is not precisely known). On linear hybrid systems, efficient and precise techniques exist, but they fail to handle nonlinear flows or jump conditions. In this article, we present a new tool named HySon which computes the flowpipes of both linear and nonlinear hybrid systems using guaranteed generalizations of classical efficient numerical simulation methods, including with variable integration step-size. In particular, we present an algorithm for detecting discrete events based on guaranteed interpolation polynomials that turns out to be both precise and efficient. Illustrations of the techniques developed in this article are given on representative examples.
Introduction

Modern control-command software for industrial systems is becoming more and more complex to design. On the one side, the description of the physical system that must be controlled, a power plant for instance, is frequently done using partial differential equations or nonlinear ordinary differential equations, whose number can grow very fast when one tries to have a precise model. On the other side, the complexity of the controller also increases when one wants it to be precise and efficient. In particular, adaptive controllers (which embed information on the plant dynamics) are more and more used: for such systems, the controller may need to compute approximations of the plant evolution using a look-up table or a simple approximation scheme as in [5]. As an extreme example, consider a controller for an air conditioning device in a car. In order to correctly and pleasantly regulate the temperature in the car, the controller takes information from the temperature of the engine but also from the outside temperature and the sunshine on the car. Based on these data, it acts on a cooling device, which is usually made of a hot and a cold fluid circuit, and is thus described using usual equations in fluid dynamics, which are given by high-dimensional nonlinear differential equations.

In an industrial context, the design of such control-command systems is generally validated by performing numerical simulations of a high-level description of the system using a Simulink-like formalism. Usually, some input scenarios are defined and numerical simulation tools are used to observe the reaction of the system to these inputs and check that they are in accordance with the specifications. This methodology is widespread, because the methods for numerical simulation used nowadays are very powerful and efficient to approximate the behavior of complex dynamical systems, and scale very well w.r.t. both complexity and dimension.
Simulation algorithms are mainly based on two parts: algorithms to compute approximations of the continuous evolutions of the system [23], and algorithms to compute switching times [25]. Matlab/Simulink is the de facto standard for the modeling and simulation of hybrid systems; we recall its basic principles in Section 2.2, and refer the reader to [2] for a complete formalization of its numerical engine.

The main drawback of simulation is that it cannot give strong guarantees on the behavior of a system, since it merely produces approximations of it for a finite subset of the possible inputs. To overcome this problem, verification techniques have been proposed on slightly different models of hybrid systems. The most popular and used technique is bounded model checking of hybrid automata [16, 18, 12, 11], which computes over-approximations of the set of reachable states of a hybrid system over a finite horizon. To apply such techniques on Simulink industrial systems, one must first translate them into the hybrid automata formalism (for example using techniques from [1]), and then apply some simplifications and linearizations to the model in order to obtain a linear hybrid automaton for which good techniques exist [12]. This process of linearization can be performed automatically [7], but largely increases the number of discrete states (exponentially w.r.t. dimension), so that we believe that it is not applicable for large and highly nonlinear systems with stiff dynamics.

Contribution.
In this article, we propose a new method to compute bounded-horizon over-approximations of the trajectories of hybrid systems. This method improves our previous work [4] as it modifies numerical simulation algorithms to make them compute guaranteed bounds on the trajectories. Our algorithm is general enough to handle both nonlinear continuous dynamics and nonlinear jump conditions (also named zero-crossing events in Simulink). In short, our algorithm relies on two guaranteed methods: the continuous evolution is over-approximated using guaranteed integration of differential equations, using a generalization of [6], and the discrete jumps are solved using a new method (presented in Section 3.3) that can be seen as a guaranteed version of the zero-crossing algorithm of Simulink.
Related work.
We already mentioned the work on reachability analysis of hybrid automata, either in the linear case [20, 12], or in the nonlinear case where a hybridization is used to construct an over-approximating linear automaton [7]. Our approach is quite different, as the algorithms we propose do not suppose anything about the differential equations and the jump conditions except their continuity w.r.t. state-space variables. Previous works also used guaranteed numerical methods for reachability analysis of hybrid systems [17, 9]. These methods mainly use intervals as representation of sets, such as in the library vnode [22], to compute guaranteed bounds on the continuous trajectories, and interval methods or a SAT solver to safely over-approximate the discrete jumps. Our method uses a more expressive domain for representing sets (affine forms [14, 4]) and polynomial interpolation for discrete jumps, which offers an efficient bisection method. Finally, the work closest to ours is [24], in which a flowpipe for nonlinear hybrid systems is computed using a Taylor model to enclose the continuous behavior, and the discrete jumps are handled by intersecting elements of the Taylor model with polyhedral guards. Compared to our approach, this work only allows for polynomial dynamics and polyhedral guards, while we have no such restrictions (as exemplified in Section 5). Besides, as will be clear from our benchmarks, the use of affine forms and numerical methods is generally more efficient than Taylor models.
Outline of the paper.
The rest of this article is organized as follows. In Section 2, we present our formalism for hybrid systems and recall traditional methods for their numerical simulation. Then, in Section 3, we explain how these methods can be turned into guaranteed methods that compute enclosures rather than approximations. In Section 4, we present our main algorithm for computing safe bounds on the trajectories of hybrid systems, and Section 5 presents some benchmarks that include both nonlinear dynamics and nonlinear jump conditions.
Preliminaries
In this article, in order to facilitate the understanding of our method, we consider hybrid systems described as hybrid automata (HA). However, our tool HySon uses a slightly different representation, as in our previous work [2, 4]. This state-space representation, comparable to the one used in [13], can encode both HA and Simulink models, as shown in [2]. We denote by R the set of real numbers, and by B the set of booleans (containing two elements, ⊤ meaning true and ⊥ meaning false). Given a function x : R → R^n, we denote by x⁻(t) its left-limit at t.

Definition 1 (Hybrid automaton, [16]). An n-dimensional hybrid automaton H = (L, F, E, G, R) is a tuple such that L is a finite set of locations, the function F : L → (R × R^n → R^n) associates a flow equation to each location, E ⊆ L × L is a finite set of edges, G : E → (R^n → B) maps edges to guards, and R : E → (R × R^n → R^n) maps edges to reset maps.

Notice that, to simplify the presentation of our approach, we consider HA without invariants in each location; we will discuss this point in the conclusion. Besides, we assume that a transition e = (l, l′) is taken as soon as G(e) is true.

Example 1.
We consider a modification of the classical bouncing-ball system that we call the windy ball: the ball is falling but there is in addition a horizontal wind which varies with time. So, the dynamics of the horizontal position x and height y of the ball are given by

    ẋ(t) = 10(1 + 1.t)    ẏ(t) = v_y(t)    v̇_y(t) = −g

The HA thus has only one location l such that F(l) is the above flow. There is also one edge e = (l, l) for when the ball bounces on the floor, with a guard G(e) = (y ≤ 0) and a reset R(e) = (x, y, v_y) ↦ (x, y, −0.8 v_y).

The operational semantics [16] of a HA is a transition system with two kinds of transitions, for the time elapse and for the discrete jumps. From this operational semantics we can define the trajectories of the HA, as in [13].

Definition 2 (Trajectory of a hybrid automaton). Suppose fixed a HA H = (L, F, E, G, R). A state of H is a couple (x, l) with x ∈ R^n and l ∈ L. A trajectory of H, on the time interval [t_0, t_f], starting from an initial state (x_0, l_0), is a pair of functions (x, l) with x : [t_0, t_f] → R^n and l : [t_0, t_f] → L, such that there exist time instants t_0 ≤ t_1 ≤ … ≤ t_n = t_f satisfying, for every index i:
1. x is continuous and l is constant on [t_i, t_{i+1});
2. x(t_0) = x_0 and l(t_0) = l_0;
3. ∀t ∈ [t_i, t_{i+1}), ẋ(t) = F(l(t))(t, x(t));
4. ∀t ∈ (t_i, t_{i+1}), ∀e = (l(t), l′) ∈ E, G(e)(x(t)) = ⊥;
5. G(e)(x⁻(t_i)) = ⊤ with e = (l⁻(t_i), l(t_i)) and x(t_i) = R(e)(t_i, x⁻(t_i)).

In the above definition, the equations constrain the function x so that it conforms to the flow and jump conditions of H. Equation 2 ensures that x satisfies the initial conditions, Eq. 3 specifies that the dynamics of x(t) is the flow at location l(t), and Eqs. 4 and 5 ensure that the t_i are the instants where jumping conditions occur and that x evolves as described by the reset maps when the corresponding guard is satisfied.
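To make Definition 1 concrete, the windy ball can be encoded directly as such a tuple in a few lines of Python. This is only an illustrative sketch: the wind profile `wind` and the restitution coefficient 0.8 below are stand-ins chosen for the example, not values taken from the text.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple
import math

State = Tuple[float, float, float]  # (x, y, vy)

@dataclass
class HybridAutomaton:
    # H = (L, F, E, G, R) as in Definition 1
    locations: List[str]
    flow: Dict[str, Callable[[float, State], State]]               # F: l -> (t, x) -> dx/dt
    edges: List[Tuple[str, str]]                                   # E, a subset of L x L
    guard: Dict[Tuple[str, str], Callable[[State], bool]]          # G: edge -> predicate
    reset: Dict[Tuple[str, str], Callable[[float, State], State]]  # R: edge -> reset map

GRAVITY = 9.81

def wind(t: float) -> float:
    # Hypothetical time-varying wind; the paper's exact expression is not reproduced here.
    return 10.0 * (1.0 + math.sin(t))

windy_ball = HybridAutomaton(
    locations=["fly"],
    flow={"fly": lambda t, s: (wind(t), s[2], -GRAVITY)},
    edges=[("fly", "fly")],
    guard={("fly", "fly"): lambda s: s[1] <= 0.0},                      # ball reaches the floor
    reset={("fly", "fly"): lambda t, s: (s[0], s[1], -0.8 * s[2])},     # damped bounce (illustrative)
)
```

The single self-loop edge captures the bounce: the guard fires when the height component becomes non-positive, and the reset flips and damps the vertical speed.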
Notice that we do not consider Zeno phenomena here, as we assume that there are finitely many jumps between t_0 and t_f. Also, we do not discuss conditions ensuring existence and unicity of trajectories, as this is beyond the scope of this paper [15], but implicitly suppose that these are granted. We suppose fixed initial and terminal simulation times t_0 and t_f. Given a HA H and an initial state (x_0, l_0), we denote by Reach_H(x_0, l_0) the continuous trajectory on [t_0, t_f] as defined above, and given X_0 ⊆ R^n and L_0 ⊆ L, we define Reach_H(X_0, L_0) = ⋃_{x_0 ∈ X_0, l_0 ∈ L_0} Reach_H(x_0, l_0).

Computing the set Reach_H(X_0, L_0) for a HA H is sufficient in order to decide the reachability of some region in the state space, and thus often to prove its safety (for bounded time). As trajectories are in general not computable, over-approximations must be performed: this is the goal of our algorithm presented in Sections 3 and 4. In Section 2.2, we present numerical algorithms, used for example by Simulink, that allow one to compute approximations of the set Reach_H(x_0, l_0) for some initial state (x_0, l_0) ∈ R^n × L. In Section 3, we present how we can adapt these methods in order to be safe w.r.t. the exact trajectories of H.

Numerical simulation aims at producing discrete approximations of the trajectories of a hybrid system H on the time interval [t_0, t_f]. We described in detail in [2] how the simulation engine of Simulink operates, and briefly adapt this simulation engine here to HA. Suppose that H is a HA, (x_0, l_0) an initial state, and (x(t), l(t)) a trajectory of H starting from (x_0, l_0). A numerical simulation algorithm computes a sequence (t_k, x_k, l_k)_{k ∈ [0,N]} of time instants, variable values and locations such that ∀k ∈ [0, N], x_k ≈ x(t_k). Most of the difficulty lies in approximating the discrete jumps (instants where a guard becomes true), which are called zero-crossings in the numerical simulation community.
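As a toy illustration of such a simulation, the following deliberately naive driver (fixed-step Euler integration, guard checked only at step boundaries, no localization of the jump time) produces a sequence of approximations; it is a sketch for a single-location, single-edge system, not the Simulink engine.

```python
def simulate(flow, guard, reset, x0, t0, tf, h):
    """Fixed-step simulation sketch for one location and one edge.

    flow(t, x)  -- vector field, returns dx/dt
    guard(x)    -- jump condition, returns a bool
    reset(t, x) -- reset map applied when the guard becomes true
    """
    t, x = t0, list(x0)
    traj = [(t, tuple(x))]
    while t < tf:
        # Step 1: continuous transition, assuming no jump occurs on [t, t+h]
        dx = flow(t, x)
        x_next = [xi + h * di for xi, di in zip(x, dx)]
        # Step 2: did the guard switch from false to true across the step?
        if (not guard(x)) and guard(x_next):
            x_next = list(reset(t + h, x_next))  # jump (time not localized here)
        t += h
        x = x_next
        traj.append((t, tuple(x)))
    return traj
```

Running it on a 1-D bouncing ball (state (y, v_y), flow (v_y, −g), guard y ≤ 0, reset damping the speed) already exhibits the weakness discussed above: events inside a step are only detected at its endpoints.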
In order to compute (t_k, x_k, l_k), the following simulation loop is used, where h_k is the simulation step-size (which can be reduced in order to maintain a good precision):

1: repeat
2:   x_{k+1} ← SolveODE(F(l(t_k)), x_k, h_k)    ▷ Solver step 1
3:   (x_{k+1}, l_{k+1}) ← SolveZC(x_k, x_{k+1})    ▷ Solver step 2
4:   compute h_{k+1}
5:   k ← k + 1
6: until t_k ≥ t_f

In this simulation loop, the solver first makes a continuous transition between instants t_k and t_k + h_k under the assumption that no jump occurs (solver step 1), and then it verifies this assumption (solver step 2). If it turns out that there was a jump between t_k and t_k + h_k, the solver approximates as precisely as possible the time t ∈ [t_k, t_k + h_k] at which this jump occurred. We briefly detail both steps in the rest of this section.

Solver step 1.
The continuous evolution of x between t_k and t_k + h_k is described by ẋ(t) = F(l(t_k))(t, x(t)) and x(t_k) = x_k. So, we want to compute an approximation of the solution at t_k + h_k of the initial value problem (IVP), with f = F(l(t_k)):

    ẋ(t) = f(t, x(t)),    x(t_k) = x_k    (1)

(we assume classical hypotheses on f ensuring existence and uniqueness of a solution of the IVP). Usually, precise simulation algorithms rely on a variable-step solver, for which (h_k) is not constant. The simplest is probably the Bogacki-Shampine method [23], also named ode23. It computes x_{k+1} by

    k_1 = f(t_k, x_k)
    k_2 = f(t_k + h_k/2, x_k + (h_k/2) k_1)
    k_3 = f(t_k + 3h_k/4, x_k + (3h_k/4) k_2)    (2a)
    x_{k+1} = x_k + (h_k/9)(2 k_1 + 3 k_2 + 4 k_3)    (2b)
    k_4 = f(t_k + h_k, x_{k+1})
    z_{k+1} = x_k + (h_k/24)(7 k_1 + 6 k_2 + 8 k_3 + 3 k_4)    (2c)

The value x_{k+1} defined in (2b) is a third-order approximation of x(t_k + h_k), whereas z_{k+1} defined in (2c) is a second-order approximation of this value, used to estimate the error err = |x_{k+1} − z_{k+1}|. This error is compared to a given tolerance tol and the step-size is changed accordingly: if the error is smaller, then the step is validated and the step-size is increased in order to speed up computations (in ode23, the next step-size is computed as h_{k+1} = h_k √(tol/err)); if the error is greater, then the step is rejected and the computation is tried again with the smaller step-size h_k/2. We refer to [15, p. 167] for a complete description of such numerical methods.

Solver step 2.

Once x_k and x_{k+1} are computed, the solver checks whether there was a jump in the time interval [t_k, t_{k+1}]. In order to do so, it tests for each edge e starting from l_k whether G(e)(x_k) is false and G(e)(x_{k+1}) is true. If there is no such edge, then it is considered that no jump occurred; we set l_{k+1} = l_k and continue the simulation. Notice that this technique does not guarantee the detection of all events occurring in [t_k, t_{k+1}], as explained in [25] or [10].

If the solver finds such an edge, this means that there was a jump on [t_k, t_{k+1}] and we must approximate the first time instant ξ ∈ [t_k, t_k + h_k] such that G(e)(x(ξ)) is true. To do so, the solver encloses ξ in an interval [t_l, t_r], starting with t_l = t_k and t_r = t_k + h_k, and reduces this interval until the time precision |t_l − t_r| is smaller than some parameter. To reduce the width of the interval, the solver first makes a guess for ξ using a linear extrapolation and then computes an approximation of x(ξ) using a polynomial interpolation of x on [t_k, t_k + h_k]. Depending on G(e)(x(ξ)), it then sets t_l = ξ or t_r = ξ and starts over. In the case of Hermite interpolation (which is the method used together with the ode23 solver), the polynomial interpolation is given, for t ∈ [t_k, t_k + h_k], by

    x(t) ≈ (2τ³ − 3τ² + 1) x_k + (τ³ − 2τ² + τ) h_k ẋ_k + (−2τ³ + 3τ²) x_{k+1} + (τ³ − τ²) h_k ẋ_{k+1}    (3)

where τ = (t − t_k)/h_k, and ẋ_k, ẋ_{k+1} are approximations of the derivative of x at t_k, t_{k+1}. For more details on zero-crossing algorithms, we refer to [2, 25].

Example 2.
Consider the windy ball again (Example 1). The red curve below is the result of the simulation for t ∈ [0, t_f] using Simulink. In blue is the flowpipe computed by HySon, whose computation is going to be described in the next sections.

The elaboration of our algorithm consisted essentially in adapting simulation algorithms such as the one described in Section 2 in order to (i) compute with sets of values instead of values, and (ii) ensure that the resulting algorithm is guaranteed, in the sense that the set x̂_k of values computed for x at instant t_k always contains the value of the mathematical solution at instant t_k. This means that we have to take into account numerical errors due to the integration method and to the use of floats (see Section 3), and design an algorithm computing an over-approximation of jump times (Section 4). In this section, we first briefly present our encoding of sets using affine arithmetic (Section 3.1) and show how explicit Runge-Kutta-like numerical integration methods (Section 3.2) and the polynomial interpolation (Section 3.3) can be turned into guaranteed algorithms.

The simplest and most common way to represent and manipulate sets of values is interval arithmetic [21]. Nevertheless, this representation usually produces overly over-approximated results, because it cannot take dependencies between variables into account: for instance, if x = [0, 1], then x − x = [−1, 1] ≠ 0. More generally, it can be shown for most integration schemes that the width of the result can only grow if we interpret sets of values as intervals. To avoid this problem we use an improvement over interval arithmetic named affine arithmetic [8], which can track linear correlations between program variables. A set of values in this domain is represented by an affine form x̂ (also called a zonotope), which is a formal expression of the form x̂ = α_0 + Σ_{i=1}^n α_i ε_i where the coefficients α_i are real numbers, α_0 being called the center of the affine form, and the ε_i are formal variables ranging over the interval [−1, 1]. Obviously, an interval a = [a_1, a_2] can be seen as the affine form x̂ = α_0 + α_1 ε_1 with α_0 = (a_1 + a_2)/2 and α_1 = (a_2 − a_1)/2. Moreover, affine forms encode linear dependencies between variables: if x ∈ [a_1, a_2] and y is such that y = 2x, then x will be represented by the affine form x̂ above and y will be represented as ŷ = 2α_0 + 2α_1 ε_1.

Usual operations on real numbers extend to affine arithmetic in the expected way. For instance, if x̂ = α_0 + Σ_{i=1}^n α_i ε_i and ŷ = β_0 + Σ_{i=1}^n β_i ε_i, then with a, b, c ∈ R we have a x̂ + b ŷ + c = (aα_0 + bβ_0 + c) + Σ_{i=1}^n (aα_i + bβ_i) ε_i. However, unlike addition, most operations create new noise symbols. Multiplication for example is defined by x̂ × ŷ = α_0 β_0 + Σ_{i=1}^n (α_i β_0 + α_0 β_i) ε_i + ν ε_{n+1}, where ν = (Σ_{i=1}^n |α_i|) × (Σ_{i=1}^n |β_i|) over-approximates the error between the linear approximation of multiplication and multiplication itself. Other operations, like sin or exp, are evaluated using their Taylor expansions. The set-based evaluation of an expression then simply consists in interpreting all the mathematical operators (such as + or sin) by their counterparts in affine arithmetic. We will denote by Aff(e) the evaluation of the expression e using affine arithmetic; see [4] for practical implementation details.

Recall from Section 2 that a numerical integration method computes a sequence of approximations (t_n, x_n) of the solution x(t; x_0) of the IVP defined in (1) such that x_n ≈ x(t_n; x_0). Every numerical method member of the Runge-Kutta family satisfies the order condition [15, Chap. II.2, Thm. 2.13].
This condition statesthat a method is of order p if and only if the p + 1 first coefficients of the Taylor expansion of the truesolution and the Taylor expansion of the numerical method are equal. The truncation error measures thedistance between the true solution and the numerical solution and it is defined by x ( t n ; x ) − x n . Using thecondition order, it can be shown that this truncation error is proportional to the Lagrange remainders. Wenow briefly recall our approach to make any explicit Runge-Kutta method guaranteed, which is based onthis observation, see [3] for a detailed presentation.The general form of an explicit s -stage Runge-Kutta formula (using s evaluations of f ) is x n +1 = x n + h s X i =1 b i k i with k i = f (cid:16) t n + c i h, x n + h i − X j =1 a ij k j (cid:17) for ≤ i ≤ s . The coefficients c i , a ij and b i are usually summarized in a Butcher table (see [15]) whichfully characterizes a Runge-Kutta method. We denote by φ ( t ) = x n + h t P si =1 b i k i ( t ) , where k i ( t ) is defined aspreviously with h replaced by h t = t − t n . Hence the truncation error is defined by x ( t n ; x ) − x n = h p +1 n ( p + 1)! (cid:18) f ( p ) ( ξ, x ( ξ )) − d p +1 φ d t p +1 ( η ) (cid:19) (4)for some ξ ∈ ] t k , t k +1 [ and η ∈ ] t n , t n +1 [ . In (4), f ( p ) stands for the p -th derivative of function f w.r.t. time t ,and h n = t n +1 − t n is the step size. In (4), the Lagrange remainder of the exact solution is f ( p ) ( ξ, x ( ξ ; x )) andthe Lagrange remainder of the numerical solution is dp +1 φdtp +1 ( η ) .The challenge to make Runge-Kutta integration schemes safe w.r.t. the exact solution of ivp amountsto bounding the result of (4). 
The remainder d^{p+1}φ/dt^{p+1}(η) is straightforward to bound, because the function φ only depends on the value of the step size h, and so does its (p+1)-th derivative:

    d^{p+1}φ/dt^{p+1}(η) ∈ Aff( d^{p+1}φ/dt^{p+1}([t_n, t_{n+1}]) )    (5)

However, the expression f^{(p)}(ξ, x(ξ; x_0)) is not so easy to bound, as it requires evaluating f at a particular value of the IVP solution x(ξ; x_0) at an unknown time ξ ∈ ]t_n, t_{n+1}[. The solution we use is similar to the one found in [22, 6]: we first compute an a priori enclosure of the IVP solution on the interval [t_n, t_{n+1}]. To do so, we use the Banach fixed-point theorem on the Picard-Lindelöf operator P, defined by P(x, t_n, x_n) = t ↦ x_n + ∫_{t_n}^t f(s, x(s)) ds. Notice that this operator is the integral form of (1), so a fixpoint of this operator is also a solution of (1). Now, to get an a priori enclosure of the solution over [t_n, t_{n+1}], we prove that the operator P (which is an operator on functions) is contracting and use the Banach theorem to deduce that it has a fixpoint. To find the enclosure ẑ of the solution, we thus iteratively solve, using affine arithmetic, the equation P(ẑ, t_n, x_n)([t_n, t_{n+1}]) ⊆ ẑ. Then, we know that the set of functions [t_n, t_{n+1}] → ẑ contains the solution of the IVP, so ẑ can be used as an enclosure of the solution of the IVP over the time interval [t_n, t_{n+1}]. We can hence bound the Lagrange remainder of the true solution with ẑ such that

    f^{(p)}(ξ, x(ξ; x_0)) ∈ Aff( f^{(p)}([t_n, t_{n+1}], ẑ) )    (6)

Finally, using (5) and (6) we can prove Theorem 1 and thus bound the distance between the approximation points of any explicit Runge-Kutta method and any solution of the IVP.

Theorem 1.
Suppose that Φ is a numerical integration scheme and Φ_Aff is the evaluation of Φ using affine arithmetic. Given a set S ⊆ R^n of initial states, and an affine form x̂_0 such that S ⊆ x̂_0, let (t_n, x̂_n) be a sequence of time instants and affine forms defined by x̂_{n+1} = x̃_{n+1} + ê_{n+1}, where (t_{n+1}, x̃_{n+1}) = Φ_Aff(t_n, x̂_n) and ê_{n+1} is the truncation error as defined by (4) and evaluated using (5) and (6). Then, for any x_0 ∈ S and n ∈ N, we have x(t_n; x_0) ∈ x̂_n.

From two (guaranteed) solutions x_n, x_{n+1} at times t_n, t_{n+1} of an IVP, one would like to deduce by interpolation all the solutions x(t) with t ∈ [t_n, t_{n+1}]. This question has motivated a series of works on polynomial approximations of solutions, a.k.a. continuous extensions, see [15, Chap. 6]. We briefly recall the polynomial interpolation method based on Hermite-Birkhoff interpolation, which is the main method used for continuous extension. Furthermore, we present a new extension of this method allowing us to compute a guaranteed polynomial interpolation using the result of the Picard-Lindelöf operator.

Suppose given a sequence (t_i, x_i^{(k)}) of n + 1 computed values of the solution of an IVP and of its derivative at instants t_i, with 0 ≤ i ≤ n and k = 0, 1. Remark that these values are those produced by numerical integration methods. The goal of Hermite-Birkhoff polynomial interpolation is to build from these values a polynomial function p(t) = Σ_{i=0}^n ( x_i A_i(t) + x_i^{(1)} B_i(t) ) of degree N = 2n + 1, with A_i(t) = (1 − 2(t − t_i) ℓ′_i(t_i)) ℓ_i²(t), B_i(t) = (t − t_i) ℓ_i²(t), ℓ_i(t) = Π_{j=0, j≠i}^n (t − t_j)/(t_i − t_j), and ℓ′_i(t_i) = Σ_{k=0, k≠i}^n 1/(t_i − t_k): the functions ℓ_i(t) are the Lagrange polynomials and this interpolation generalizes the Lagrange interpolation. Under the assumption that all the t_i are distinct, we know that the polynomial interpolation is unique. For instance, Eq. (3) is associated with the Hermite-Birkhoff polynomial with n = 1. We know that the interpolation error x(t; x_0) − p(t) is given by (x^{(N+1)}(ξ)/(N+1)!) Π_{i=0}^n (t − t_i) with ξ ∈ [t_0, t_n], which can be reformulated as

    x(t; x_0) − p(t) = (f^{(N)}(ξ, x(ξ)) / (N+1)!) Π_{i=0}^n (t − t_i)    with ξ ∈ [t_k, t_{k+1}]

In consequence, to guarantee the polynomial interpolation, it is enough to know an enclosure of the solution x(t) of the IVP on the interval [t_k, t_{k+1}]. And fortunately, we can reuse the result of the Picard-Lindelöf operator in that context. In the next section, this guaranteed polynomial interpolation will be used to approximate the solution of an IVP in order to compute jump times.

Theorem 2.
Let p_Aff(t) be the interpolation polynomial based on n + 1 guaranteed solutions x̂_i of an IVP (1) and n + 1 evaluations x̂_i^{(1)} of f with affine arithmetic, and let ẑ be the result of the Picard-Lindelöf operator. We have, ∀t ∈ [t_k, t_{k+1}],

    x(t) ∈ Aff( p_Aff(t) + (f^{(N)}([t_k, t_{k+1}], ẑ) / (N+1)!) Π_{i=0}^n (t − t_i) )

Algorithm 1 Guaranteed simulation algorithm
Require: H = (L, F, E, G, R), a hybrid automaton
Require: x̂_0, l_0, h_0, t_f    ▷ Initial state, step-size and final time
1: n ← 0
2: t̂_n ← 0
3: while inf(t̂_n) ≤ t_f do
4:   (x̂_{n+1}, x̂_n^h) ← GSolveODE(F(l_n), x̂_n, h_n)
5:   (x̂_{n+1}, t̂_{n+1}, l_{n+1}) ← GSolveZC(l_n, x̂_n, x̂_{n+1}, x̂_n^h, t̂_n, h_n)
6:   n ← n + 1
7: end while

We present in this section our main algorithm to compute an over-approximation of the set of reachable states of linear or nonlinear hybrid systems (Algorithm 1), which is based on the guaranteed numerical methods presented in Section 3. In a nutshell, it works as follows. It produces a sequence of values (t̂_n, x̂_n^h, x̂_n, l_n) such that l_n is the current location, t̂_n is a time interval, x̂_n is an over-approximation of x(t) for every t ∈ t̂_n, and x̂_n^h is an over-approximation of x(t) for every t ∈ [t̂_n, t̂_{n+1}], i.e. an over-approximation of the trajectory between two discrete instants (here [t̂_n, t̂_{n+1}] designates the convex hull of the union of the two affine forms t̂_n and t̂_{n+1}). Our method uses the guaranteed ODE solver described in Section 3.2 to compute x̂_{n+1} and x̂_n^h, and the guaranteed polynomial interpolation of Section 3.3 to precisely and safely enclose the potential jumping times between t̂_n and t̂_{n+1}, and thus refine t̂_{n+1} and x̂_{n+1}.

Trivalent Logic.
First, notice that since we are working with sets of values, the evaluation of a boolean condition, such as x ≥ 0, is not necessarily false or true, but can also be false for some elements and true for some other elements of the set x̂ (for instance when x̂ = [−1, 1] in the preceding example). In order to take this into account, boolean conditions are evaluated in the domain of trivalent logic instead of the usual booleans B. This logic is the natural extension of boolean algebra to the three following values: ⊥ (false), ⊤ (true) and ⊥⊤ (unknown). We denote this set by B*. Notice that a function g : R^n → B naturally extends to a function Aff(g) : P(R^n) → B* using affine arithmetic and trivalent logic. In particular, the guards of the discrete jumps will be evaluated in B*, which brings subtleties into the zero-crossing detection algorithm (when such a guard evaluates to ⊥⊤), as we will see in the next section. In the following, we shall write g for Aff(g) when it is clear from the context.

Main Algorithm.
Let H be a HA as defined in Definition 1. Our method computes a sequence of values (t̂_n, x̂_n, x̂_n^h, l_n) such that l_n is the current mode of the HA, t̂_n is a time interval, and x̂_n and x̂_n^h are affine forms such that, for all trajectories of H, we have

    ∀t ∈ t̂_n, x(t) ∈ x̂_n    and    ∀t ∈ [t̂_n, t̂_{n+1}], x(t) ∈ x̂_n^h

To compute this sequence, we start from t̂_0 = 0 and iterate as long as the lower bound of t̂_n (denoted inf(t̂_n)) is lower than t_f. The guaranteed simulation loop is given in Algorithm 1, where GSolveODE is the guaranteed ODE solver presented in Section 3.2 and GSolveZC is the procedure described below. Notice that the function GSolveODE outputs both x̂_{n+1}, the tight over-approximation of x at t̂_n + h_n, and x̂_n^h, the result of the Picard iteration (see Section 3.2), since we reuse the latter in GSolveZC.

Detecting Jumps.
We now present our algorithm (GSolveZC) for detecting and handling discrete jumps. Let H = (L, F, E, G, R) be a HA, and let l_n, x̂_n, x̂_{n+1} and x̂_n^h be the states computed with GSolveODE. Let us denote by l_n• the set of all transitions originating from l_n, i.e. l_n• = { e ∈ E | ∃l ∈ L, e = (l_n, l) }. A transition e ∈ l_n• was surely activated between t_n and t_n + h_n if G(e)(x̂_n) = ⊥ and G(e)(x̂_{n+1}) = ⊤. The transition e was maybe activated if G(e)(x̂_n) = ⊥ and G(e)(x̂_{n+1}) = ⊥⊤. Note that in both cases we have G(e)(x̂_n^h) = ⊥⊤. In this section, we present our algorithm in the simple (but most common) case where only one transition is activated at a given time, and where we are not in the situation of G(e)(x̂_n) = G(e)(x̂_{n+1}) = ⊥ with G(e)(x̂_n^h) = ⊥⊤; we discuss these two cases later.

Figure 1: Three cases for discrete transitions. Exact trajectories are depicted in dark gray, the over-approximated flowpipes in light gray.

The function GSolveZC is described in Algorithm 2 and runs as follows. First, if for all edges e ∈ l_n•, G(e)(x̂_n^h) = ⊥, then no transition was activated between t_n and t_{n+1}, and we do nothing (lines 2–4). Otherwise, if there is e ∈ l_n• that may have been activated, then we make sure that we have G(e)(x̂_{n+1}) = ⊤, i.e. that the event really occurred between t̂_n and t̂_{n+1} (this is case (c) in Figure 1; the other cases are handled as "special cases" below), which is achieved by continuing the guaranteed integration of F(l_n) until we have G(e)(x̂_{n+1}) = ⊤. This is the role of the while loop (lines 6–10), in which we also compute the hull of all Picard over-approximations computed during this process. Then, we are sure that e occurred between x̂_n and x̂_{n+1}. We then reduce the time interval [t̂_n, t̂_{n+1}] in order to precisely enclose the time t̂_zc at which the condition G(e) became true (line 11).
To do so, we use the guaranteed polynomial extrapolation p of Section 3.3 to approximate the value of x between t̂_n and t̂_{n+1} without having to call GSolveODE, and use a bisection algorithm to find the lower and upper limits of t̂_zc. To get the lower limit (the upper limit is obtained similarly), the bisection algorithm proceeds as follows. We start with a working list containing [t̂_n, t̂_{n+1}], the convex hull of both time stamps. Then, we pick the first element t̂ of the working list and evaluate the guard on p(t̂). If G(e)(p(t̂)) = [⊥, ⊤] and the width of t̂ is larger than the desired precision, we split t̂ into its two halves and add them to the working list. If the width of t̂ is smaller than the precision, we return t̂. If G(e)(p(t̂)) = ⊥, we discard t̂ and continue with the rest of the working list. Note that we cannot have G(e)(p(t̂)) = ⊤. The method to find the upper limit is the same, except that we discard t̂ if G(e)(p(t̂)) = ⊤. Finally, once we have t̂_zc, we use the guaranteed polynomial again to compute the zero-crossing state x̂_zc = p(t̂_zc) and set x̂_{n+1} = R(e)(x̂_zc), i.e. we apply the reset map.

Notice that our algorithm needs to maintain the invariant G(e)(x̂_n) = ⊥ for all e ∈ l•_n. This imposes that we sometimes need a particular formulation of zero-crossing conditions. For instance, the guard of the windy ball of Example 1 should be reformulated as G(e) = (x < 0), with a reset map R(e) that sets x to 0 and reverses (and damps) the velocity v. Under this new formulation, just after the zero-crossing action has been performed, we have x = 0 and therefore the zero-crossing condition x < 0 does not hold. With the first formulation, the simulation would fail at the first zero-crossing. This transformation is performed automatically for usual conditions in HySon. Note also that it may be the case that there exists e ∈ l•_{n+1} such that G(e)(x̂_{n+1}) ≠ ⊥, i.e. a transition starting from l_{n+1} may be activated by x̂_{n+1}.
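The interpolation-plus-bisection scheme above can be sketched in a few lines. We stand in for the guaranteed polynomial p with a plain cubic Hermite interpolant (the non-guaranteed core of the Hermite-Birkhoff construction used by the paper), built from the values and derivatives at both ends of the step, and bracket the crossing time of a guard `x < 0` by floating-point bisection. The real algorithm instead evaluates p with interval arithmetic over a worklist, which makes the bracket sound; all names here are ours.

```python
def hermite(t0, x0, d0, t1, x1, d1, t):
    """Cubic Hermite interpolant matching value and derivative at t0 and t1,
    a (non-guaranteed) stand-in for the interpolation polynomial p."""
    h, s = t1 - t0, (t - t0) / (t1 - t0)
    h00 = (1 + 2 * s) * (1 - s) ** 2
    h10 = s * (1 - s) ** 2
    h01 = s * s * (3 - 2 * s)
    h11 = s * s * (s - 1)
    return h00 * x0 + h10 * h * d0 + h01 * x1 + h11 * h * d1

def bracket_zero_crossing(p, t0, t1, eps=1e-9):
    """Bisection bracket [lo, hi] of the time where p crosses 0 upward.
    Assumes p(t0) < 0 <= p(t1) and a single crossing in [t0, t1]."""
    lo, hi = t0, t1
    while hi - lo > eps:
        mid = 0.5 * (lo + hi)
        if p(mid) < 0:
            lo = mid      # the crossing is to the right of mid
        else:
            hi = mid      # the crossing is at or to the left of mid
    return lo, hi
```

For instance, interpolating x(t) = t³ - 1/8 between t = 0 and t = 1 from its endpoint values and derivatives reproduces the cubic exactly, so the bracket converges to the true crossing time t = 1/2.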
In this case, we execute this transition immediately after e, and continue until we arrive in a location l such that no transition starting from l is activated. We assume that such an l exists, which is the case if the HA H does not have Zeno behavior.

Special Cases.
If more than one transition is activated during the step from t_n to t_{n+1}, we first reject the step and continue with a reduced step-size. This way, we shall eventually reach a step-size at which only one of the conditions is activated. If we cannot separate the two transitions before reaching a minimal step-size, we apply our previous algorithm to each transition separately, apply both reset maps, and then follow both possible trajectories, i.e. we perform a disjunctive analysis when we are not sure of the location. Finally, we must discuss the case where the states at times t_n and t_{n+1} do not satisfy the guard of a transition e but the hull computed by Picard iteration does (see Figure 1, cases (a) and (b)). Then, either the trajectories between t_n and t_{n+1} cross the guard boundary twice and we missed a zero-crossing (case (a)), or it is the over-approximation due to Picard iteration which makes the guard satisfied (case (b)). We use our bisection algorithm again to distinguish between these two cases, and perform a disjunctive analysis if we cannot differentiate between them.

Algorithm 2: Guaranteed zero-crossing algorithm
Require: H = (L, F, E, G, R), a hybrid automaton
 1: function GSolveZC(x̂_n, x̂_{n+1}, x̂ʰ_n, t_n, h_n, l_n)
 2:   if ∀e ∈ l•_n, G(e)(x̂ʰ_n) = ⊥ then
 3:     return (x̂_{n+1}, l_n, t_n + h_n)                  ▷ No jumps
 4:   end if
 5:   let e = (l_n, l_{n+1}) ∈ l•_n be such that G(e)(x̂ʰ_n) = [⊥, ⊤]
 6:   while G(e)(x̂_{n+1}) ≠ ⊤ do
 7:     (x̂_{n+1}, x̂ʰ) ← GSolveODE(F(l_n), x̂_{n+1}, h_n)
 8:     x̂ʰ_n ← x̂ʰ_n ∪ x̂ʰ
 9:     h_n ← h_n + h_n
10:   end while                    ▷ Now G(e)(x̂_n) = ⊥ and G(e)(x̂_{n+1}) = ⊤
11:   t̂_zc ← tightInterval(x̂_n, x̂_{n+1}, t_n, t_{n+1})
12:   x̂_zc ← GPolyODE(x̂_n, x̂_{n+1}, x̂ʰ_n, t̂_zc)
13:   return (R(e)(x̂_zc), l_{n+1}, t̂_zc)
14: end function

Figure 2: Over-approximation of the trajectories of the Brusselator (left) and Car (right) systems. The blue sets are the over-approximations for all t (given by Picard iteration) and the red sets are the tight enclosures at the discretization time stamps.

We implemented our method in a tool named HySon. It is written in OCaml and takes as input a representation of a hybrid system, either as a set of equations similar to the ones defined in [2] or as a Simulink model (for now without Stateflow support). We first present the output of HySon on some continuous and hybrid systems, and then compare the performance of HySon with that of other tools.
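Putting the pieces together, the top-level simulation loop alternates an integration step with jump handling in the spirit of GSolveZC. The sketch below is ours, not HySon's: it uses a plain Euler step and a pointwise guard test, whereas the actual tool uses guaranteed Runge-Kutta steps, interval enclosures, and the enclosure of t_zc computed by Algorithm 2.

```python
def simulate(flow, guard, reset, x0, t_end, h=1e-3):
    """Point simulation of single-location hybrid dynamics with Euler steps.
    flow: x -> dx/dt, guard: x -> bool, reset: x -> x (applied on the guard).
    A hedged sketch of the loop structure only, not a guaranteed method."""
    t, x = 0.0, list(x0)
    jumps = 0
    while t < t_end:
        # one (non-guaranteed) integration step
        x_next = [xi + h * di for xi, di in zip(x, flow(x))]
        if guard(x_next):
            # an event occurred during this step; Algorithm 2 would instead
            # compute an interval enclosure of the event time t_zc here
            x_next = reset(x_next)
            jumps += 1
        x, t = x_next, t + h
    return x, jumps
```

For a bouncing ball with state (y, v), the reset puts y back on the guard boundary y = 0, which mirrors the guard reformulation discussed earlier (the invariant that the guard is false just after a jump).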
Brusselator.
We consider the following system, also used in [24]:

    ẋ = 1 + x²y − 2.5x
    ẏ = 1.5x − x²y

with interval initial conditions for x(0) and y(0). HySon computes the flowpipe up to t = 15; see Figure 2 (left).

Car. We consider the initial value problem given by:

    ẋ = v cos(ωt) cos(θ)
    ẏ = v cos(ωt) sin(θ)
    θ̇ = v sin(ωt)/L

with x(0) = 0, y(0) = 0 and an interval initial condition for θ(0), where the speed v and the constants ω and L are fixed model parameters. HySon computes the flowpipe up to t = 30; see Figure 2 (right).

We now present two hybrid systems: a ball bouncing on a sinusoidal floor and a nonlinear system with a polynomial jump condition.
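As a point of comparison with the guaranteed flowpipes, the Brusselator can be simulated pointwise with a classical fixed-step Runge-Kutta 4 scheme. The nominal initial point (1, 0) below is our illustrative choice, and this is of course a plain numerical simulation, not a guaranteed enclosure.

```python
def brusselator(t, s):
    """Brusselator vector field with the parameters used above."""
    x, y = s
    return (1 + x * x * y - 2.5 * x, 1.5 * x - x * x * y)

def rk4_step(f, t, s, h):
    """One classical Runge-Kutta 4 step for s' = f(t, s)."""
    k1 = f(t, s)
    k2 = f(t + h / 2, tuple(si + h / 2 * ki for si, ki in zip(s, k1)))
    k3 = f(t + h / 2, tuple(si + h / 2 * ki for si, ki in zip(s, k2)))
    k4 = f(t + h, tuple(si + h * ki for si, ki in zip(s, k3)))
    return tuple(si + h / 6 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

def integrate(f, s0, t_end, h=0.01):
    """Fixed-step integration from t = 0 to t = t_end."""
    n = int(round(t_end / h))
    t, s = 0.0, s0
    for _ in range(n):
        s = rk4_step(f, t, s, h)
        t += h
    return s
```

For these parameter values the fixed point (1, 1.5) is stable, so a point trajectory started nearby spirals towards it; the guaranteed integration additionally encloses all trajectories from the whole initial box.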
Ball bouncing on a sinusoidal floor.
A ball is falling on a sinusoidal floor, and we consider a dynamics with nonlinear wind friction for the ball. The dynamics of the system is given by

    v̇_x = 0        ẋ = v_x
    v̇_y = −g + k·v_y²    ẏ = v_y

starting from initial conditions x(0) ≈ 1, v_x(0) = 0, y(0) = 5 and a negative initial vertical velocity v_y(0). The bouncing of the ball is given by the transition

    v_x := e(2v_d − v_x)    v_y := e(2v_d cos(x) − v_y)    y := sin(x)    when y < sin(x)

with v_d = (v_x + v_y cos(x)) / (1 + cos²(x)), where g = 9.81 and k and e < 1 are fixed friction and restitution coefficients. Note that the exact dynamics of this system is almost chaotic. HySon is able to compute a flowpipe for this system, as shown in the figure below.

[Figure: flowpipe of the ball bouncing on the sinusoidal floor, in the (x, y) plane.]

Wolfgram.
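A point simulation of this system makes the reflection reset concrete. The constants k and e below are our illustrative choices (the paper's exact values are not reproduced here), and we use a sign-safe quadratic drag -k·v_y·|v_y| so that friction always opposes the vertical motion; the reflection about the floor tangent (1, cos x) follows the v_d formula given above.

```python
import math

def step_ball(state, h, g=9.81, k=0.1, e=0.8):
    """One Euler step of the ball over the sinusoidal floor y = sin(x).
    k (drag) and e (restitution) are illustrative constants, not the paper's."""
    x, y, vx, vy = state
    x, y = x + h * vx, y + h * vy
    vy = vy + h * (-g - k * vy * abs(vy))  # drag always opposes the motion
    if y < math.sin(x):                    # guard: ball below the floor
        # reflect (vx, vy) about the floor tangent (1, cos x), with restitution
        c = math.cos(x)
        vd = (vx + vy * c) / (1 + c * c)   # tangential velocity component
        vx, vy = e * (2 * vd - vx), e * (2 * vd * c - vy)
        y = math.sin(x)                    # project back onto the floor
    return (x, y, vx, vy)
```

Since the reflection preserves the speed up to the factor e < 1 and the drag dissipates energy, the simulated ball stays bounded and settles towards the floor, in line with the almost-chaotic but bounded behavior noted above.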
We study the following system, with a = 2:

    ẋ(t) = t + 2x        if (x + c₁)² + (t + c₂)² < r²
    ẋ(t) = t + 3x − a    otherwise

where c₁, c₂ and r are fixed rational constants, and x(0) is given by a small interval. The dynamics of the system is relatively simple; however, the jump condition is a polynomial and is thus not well suited for the classical intersection techniques of [24, 12]. Our bisection algorithm for computing the zero-crossing time encloses the jumping time precisely. To precisely enclose the value of x as well, we insert a reset in the discrete transition and set x = √(r² − (t + c₂)²) − c₁, i.e. we put x back on the guard boundary. Note however that for now this transformation is performed manually, except for polynomial guards; our future work includes automating it for more general expressions.

We now compare the performance of HySon with other tools for reachability analysis of nonlinear hybrid systems: Flow∗ [24] and Hydlogic [19]. We downloaded both tools from the web and ran them on various examples included in the Flow∗ distribution (we could not compile Hydlogic). We ran HySon on the same examples and present the execution times of both tools in Table 1.

Table 1: Execution times of HySon and Flow∗ (Loc. = number of locations, Dim. = dimension, T = time horizon).

    System                  Loc.  Dim.  T     HySon (s)  Flow∗ (s)
    Brusselator              1     2    15    14.3        49.97
    Van der Pol              1     2     6    16.2        49.17
    Lorenz                   1     3     1    13.32      119.94
    WaterTank                2     5    30     4.35      316.72
    Hybrid3D                 2     3     2.0  26.65      237.4
    Pendulum                 1     2     3.8  26.75       N/A
    Diode oscillator [11]    3     2    20    29.56       42.65

We see that HySon outperforms Flow∗ on all these examples, whether they are purely continuous systems (Van der Pol, Brusselator, Lorenz) or hybrid systems (WaterTank). Note that for the Lorenz system, we set a small fixed step-size to achieve a good precision, which explains the large computation time. For all other examples, we used a variable step-size, and a fixed order for the Taylor models used in Flow∗. Let us remark however that some examples work well with Flow∗ but not with HySon, especially examples with many transitions that may happen simultaneously. We also want to point out that our tool performs well on linear examples.
We compared it with SpaceEx [12] on simple examples, where HySon and SpaceEx produced very similar results in terms of precision and computation time (Appendix B).

We presented a new approach to compute the flowpipes of nonlinear hybrid systems using guaranteed versions of numerical methods. Our method is based on guaranteed explicit Runge-Kutta integration methods and on a new guaranteed polynomial interpolation based on the well-known Hermite-Birkhoff method. This interpolation is a cheap and precise way to over-approximate continuous state values. Using both methods, we can precisely compute flowpipes of nonlinear hybrid systems, with few restrictions on the nature of flows and jumps. Remark that with guaranteed polynomial interpolation, we can accurately and soundly handle nonlinear jumps in hybrid systems without using an intersection operator, which is usually costly to define. Note also that we can handle invariants in hybrid automata in the same manner, using our algorithm for zero-crossing events: more precisely, we would add a new step in the simulation loop to check that the invariant is fulfilled at each integration step. Finally, the experiments showed that our approach is efficient and precise on a set of representative case studies: we showed that our approach outperforms existing techniques on the flowpipe computation of nonlinear systems.

As future work, we plan to handle multiple zero-crossing events involving trajectories associated to different system behaviors. As a result, to keep the flowpipe computation sharp, we must handle disjunctive futures efficiently. We also want to extend our parser of Simulink models, presented in [4], to handle Stateflow and thus apply our tool to more realistic examples.
References

[1] A. Agrawal, G. Simon, and G. Karsai. Semantic translation of Simulink/Stateflow models to hybrid automata using GReAT. In GT-VMT, ENTCS, 2004.
[2] O. Bouissou and A. Chapoutot. An operational semantics for Simulink's simulation engine. In LCTES. ACM, 2012.
[3] O. Bouissou, A. Chapoutot, and A. Djoudi. Enclosing temporal evolution of dynamical systems using numerical methods. Under submission, 2013.
[4] O. Bouissou, A. Chapoutot, and S. Mimram. HySon: Precise simulation of hybrid systems with imprecise inputs. In RSP. IEEE, 2012.
[5] O. Bouissou, E. Goubault, S. Putot, K. Tekkal, and F. Vedrine. HybridFluctuat: A static analyzer of numerical programs within a continuous environment. In CAV, volume 5643 of LNCS, pages 620–626. Springer, 2009.
[6] O. Bouissou and M. Martel. GRKLib: a guaranteed Runge-Kutta library. In Scientific Computing, Computer Arithmetic and Validated Numerics, 2006.
[7] T. Dang and R. Testylier. Hybridization domain construction using curvature estimation. In HSCC, pages 123–132. ACM, 2011.
[8] L. H. de Figueiredo and J. Stolfi. Self-Validated Numerical Methods and Applications. Brazilian Mathematics Colloquium monographs. IMPA/CNPq, 1997.
[9] A. Eggers, N. Ramdani, N. Nedialkov, and M. Fränzle. Improving SAT modulo ODE for hybrid systems analysis by combining different enclosure methods. In SEFM, volume 7041 of LNCS, pages 172–187. Springer, 2011.
[10] J. M. Esposito, V. Kumar, and G. J. Pappas. Accurate event detection for simulating hybrid systems. In HSCC, volume 2034 of LNCS, pages 204–217. Springer, 2001.
[11] G. Frehse. PHAVer: Algorithmic verification of hybrid systems past HyTech. In HSCC'05, volume 3414 of LNCS, pages 258–273. Springer, 2005.
[12] G. Frehse, C. Le Guernic, A. Donzé, S. Cotton, R. Ray, O. Lebeltel, R. Ripado, A. Girard, T. Dang, and O. Maler. SpaceEx: Scalable verification of hybrid systems. In CAV, volume 6806 of LNCS, pages 379–395. Springer, 2011.
[13] R. Goebel, J. Hespanha, A. R. Teel, C. Cai, and R. Sanfelice. Hybrid systems: Generalized solutions and robust stability. In IFAC NOLCOS, pages 1–12, 2004.
[14] E. Goubault and S. Putot. Static analysis of finite precision computations. In VMCAI, volume 6538 of LNCS, pages 232–247. Springer, 2011.
[15] E. Hairer, S. P. Nørsett, and G. Wanner. Solving Ordinary Differential Equations I: Nonstiff Problems. Springer, 1993.
[16] T. A. Henzinger. The theory of hybrid automata. In Symposium on Logic in Computer Science, pages 278–292. IEEE Computer Society Press, 1996.
[17] T. A. Henzinger, B. Horowitz, R. Majumdar, and H. Wong-Toi. Beyond HYTECH: Hybrid systems analysis using interval numerical methods. In HSCC, volume 1790 of LNCS, pages 130–144. Springer, 2000.
[18] T. A. Henzinger and V. Rusu. Reachability verification for hybrid automata. In HSCC'98, volume 1386 of LNCS, pages 190–204. Springer-Verlag, 1998.
[19] D. Ishii, K. Ueda, H. Hosobe, and A. Goldsztejn. Interval-based solving of hybrid constraint systems. In IFAC ADHS, pages 144–149, 2009.
[20] C. Le Guernic and A. Girard. Reachability analysis of hybrid systems using support functions. In CAV, volume 5643 of LNCS, pages 540–554. Springer, 2009.
[21] R. Moore. Interval Analysis. Prentice Hall, 1966.
[22] N. S. Nedialkov, K. R. Jackson, and G. F. Corliss. Validated solutions of IVPs for ordinary differential equations. App. Math. and Comp., 105(1):21–68, 1999.
[23] L. Shampine, I. Gladwell, and S. Thompson. Solving ODEs with MATLAB. Cambridge Univ. Press, 2003.
[24] X. Chen, E. Ábrahám, and S. Sankaranarayanan. Taylor model flowpipe construction for non-linear hybrid systems. In IEEE Real-Time Systems Symposium, 2012.
[25] F. Zhang, M. Yeddanapudi, and P. Mosterman. Zero-crossing location and detection algorithms for hybrid system simulation. In IFAC World Congress, pages 7967–7972, 2008.
A Other examples
Because of space constraints, the descriptions of some examples were not included in the article; they can be found below.
A.1 The Bouncing Pendulum
This hybrid system describes a pendulum attached to a rope of length l = 1.2, falling under a gravity of g = 9.81. The angle θ of the pendulum (w.r.t. the vertical) is described by the flow equation

    θ̈ = −(g/l) sin(θ),    θ(0) ∈ [1, 1.05].

The pendulum bounces on a wall when sin(θ) = −0.5, in which case the reset condition is θ̇ := −θ̇.

[Figure: guaranteed simulation of the pendulum, plotting θ against t.]

As an illustration, we give here the description of the system given as input to HySon:

    set duration = 3.8;
    set dt = 0.05;
    set max_dt = 0.1;
    set scope_xy = true;
    init theta = [1.,1.05];
    init dtheta = 0.;
    init t = 0;
    l = 1.2;
    g = 9.81;
    theta' = dtheta;
    dtheta' = -g/l*sin(theta);
    t' = 1;
    on sin(theta) <= -0.5 do { print("Bouncing!\n"); dtheta = -dtheta };
    output(t,theta);
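The HySon model above can be mirrored by a plain point simulation: the constants l and g, the guard sin(θ) ≤ −0.5 and the reset θ̇ := −θ̇ are taken directly from the input file, while the semi-implicit Euler scheme and the extra direction test on θ̇ (to avoid re-triggering the guard just after a bounce) are our choices.

```python
import math

def simulate_pendulum(theta0, duration=3.8, h=5e-4, l=1.2, g=9.81):
    """Point simulation of the bouncing pendulum from the HySon model above.
    Semi-implicit Euler steps; the guard mirrors `on sin(theta) <= -0.5`."""
    theta, dtheta, t = theta0, 0.0, 0.0
    bounces = 0
    while t < duration:
        theta += h * dtheta
        dtheta += h * (-g / l * math.sin(theta))
        if math.sin(theta) <= -0.5 and dtheta < 0:
            dtheta = -dtheta          # reset: bounce off the wall
            bounces += 1
        t += h
    return theta, bounces
```

Starting from θ(0) = 1, the pendulum swings down, hits the wall, and bounces back with unchanged speed, so the amplitude stays close to 1 rad between bounces.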
Notice that the dynamics of the system is nonlinear (because of the presence of sin(θ) in the flow equation) and the guard is also nonlinear, so that it cannot be simulated with Flow∗.

A.2 Wolfgram

[Figure: simulation produced on the Wolfgram example, plotting x against t.]

B Comparison with SpaceEx
Since the main novelty of HySon is to efficiently handle nonlinear systems, we did not detail experiments on linear ones. However, its performance is comparable with that of state-of-the-art guaranteed simulators dedicated to linear systems. As an illustration, we compare here HySon with SpaceEx [12] on two examples.
B.1 Bouncing Ball
The above figure shows the flowpipe computed by SpaceEx (in gray) and by HySon (blue polygons) for the classical bouncing-ball example, up to t_f = 20. The computation times were 1.031s for HySon and 1.15s for SpaceEx (we used the support-function representation of sets with 50 directions). Notice that the flowpipe computed by HySon is within the flowpipe of SpaceEx; we could get a more precise result with SpaceEx by increasing the number of directions, but at the cost of higher computation times (8.65s for 200 directions, for example).

B.2 Thermostat
The above figure shows the flowpipe computed by SpaceEx (in gray) and by HySon (blue sets) for the classical thermostat example, up to t_f = 15.
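The thermostat in this comparison is the classical two-location hybrid automaton, heating towards a high temperature in one location and cooling in the other, with jumps when the temperature crosses the thresholds. The sketch below is a plain point simulation with constants of our choosing (the benchmark's exact parameters are not reproduced here).

```python
def thermostat(t_end, x0=20.0, h=1e-3, low=18.0, high=22.0):
    """Two-location thermostat: 'on' heats (dx/dt = 30 - x), 'off' cools
    (dx/dt = -x); a jump toggles the location when x crosses low or high.
    Illustrative constants, not the benchmark's exact parameters."""
    x, on = x0, False
    switches, t = 0, 0.0
    while t < t_end:
        dx = (30.0 - x) if on else -x
        x += h * dx                      # Euler step of the current flow
        if on and x >= high:
            on, switches = False, switches + 1   # guard x >= high: switch off
        elif not on and x <= low:
            on, switches = True, switches + 1    # guard x <= low: switch on
        t += h
    return x, switches
```

With these constants the temperature oscillates between roughly 18 and 22 with a period well under a second, so a 15-time-unit simulation exhibits many discrete jumps; each jump is exactly the kind of zero-crossing that the guaranteed algorithm of Section 4 localizes with an interval enclosure.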