Modeling Time in Computing: A Taxonomy and a Comparative Survey
Carlo A. Furia, Dino Mandrioli, Angelo Morzenti, Matteo Rossi
October 9, 2018
Abstract
The increasing relevance of areas such as real-time and embedded systems, pervasive computing, hybrid systems control, and biological and social systems modeling is bringing a growing attention to the temporal aspects of computing, not only in the computer science domain, but also in more traditional fields of engineering.

This article surveys various approaches to the formal modeling and analysis of the temporal features of computer-based systems, with a level of detail that is also suitable for nonspecialists. In doing so, it provides a unifying framework, rather than just a comprehensive list of formalisms. The article first lays out some key dimensions along which the various formalisms can be evaluated and compared. Then, a significant sample of formalisms for time modeling in computing are presented and discussed according to these dimensions. The adopted perspective is, to some extent, historical, going from “traditional” models and formalisms to more modern ones.

1 Introduction
In many fields of science and engineering, the term dynamics is intrinsically bound to a notion of time. In fact, in classical physics a mathematical model of a dynamical system most often consists of a set of equations that state a relation between a time variable and other quantities characterizing the system, often referred to as system state. In the theory of computation, conversely, the notion of time does not always play a major role. At the root of the theory, a problem is formalized as a function from some input domain to an output range. An algorithm is a process aimed at computing the value of the function; in this process, dynamic aspects are usually abstracted away, since the only concern is the result produced.

Timing aspects, however, are quite relevant in computing too, for many reasons; let us recall some of them by adopting a somewhat historical perspective.

• First, hardware design leads down to electronic devices where the physical world of circuits comes back into play, for instance when the designer must verify that the sequence of logical gate switches that is necessary to execute an instruction can be completed within a clock’s tick. The time models adopted here are borrowed from physics and electronics, and range from differential equations on continuous time for modeling devices and circuits, to discrete time (coupled with discrete mathematics) for describing logical gates and digital circuits.

• When the level of description changes from hardware to software, physical time is progressively disregarded in favor of more “coarse-grained” views of time, where a time unit represents a computational step, possibly in a high-level programming language; or it is even completely abstracted away when adopting a purely functional view of software, as a mapping from some input to the computed output.
In this framework, computational complexity theory was developed as a natural complement of computability theory: it was soon apparent that knowing an algorithm to solve a problem is not enough if the execution of such an algorithm takes an unaffordable amount of time. As a consequence, models of abstract machines have been developed or refined so as to measure the time needed for their operations. Then, such an abstract notion of time measure (typically the number of elementary computation steps) could be mapped easily to physical time.

• The advent of parallel processing mandated a further investigation of timing issues in the theory of computing. To coordinate appropriately the various concurrent activities, in fact, it is necessary to take into account their temporal evolution. Not by chance, the term synchronization derives from the two Greek words συν (meaning “together”) and χρόνος (meaning “time”).

• In relatively recent times, the advent of novel methods for the design and verification of real-time systems also requires the inclusion of the environment with which the computer interacts in the models under analysis. Therefore the various activities are, in general, not fully synchronized; that is, it is impossible to delay indefinitely one activity while waiting for another one to come alive. Significant classes of systems that possess real-time features are, among others, social organizations (in a broad sense), and distributed and embedded systems. For instance, in a plant control system, the control apparatus must react to the stimuli coming from the plant at a pace that is mandated by the dynamics of the plant. Hence physical time, which was progressively abstracted away, once again plays a prominent role.

As a consequence, some type of time modeling is necessary in the theory of computing as well as in any discipline that involves dynamics.
Unlike other fields of science and engineering, however, time modeling in computing is far from exhibiting a unitary and comprehensive framework that would be suitable in a general way for most needs of system analysis: this is probably due to the fact that the issue of time modeling arose in different fields, in different circumstances, and was often attacked in a fairly ad hoc manner.

In this article we survey various approaches that have been proposed to tackle the issue of time modeling in computing. Rather than pursuing an exhaustive list of formalisms, our main goal is to provide a unifying framework so that the various models can be put in perspective, compared, evaluated, and possibly adapted to the peculiar needs of specific application fields. In this respect, we selected the notations among those that are most prominent in the scientific literature, both as basic research targets and as useful modeling tools in applications. We also aimed at providing suitable “coverage” of the most important features that arise in time modeling. We tried to keep our exposition at a level palatable for the nonspecialist who wishes to gain an overall but not superficial understanding of the issue. Also, although the main goal of time modeling is certainly to use it in the practice of system design, we focus on the conceptual aspects of the problem (what can and cannot be done with a given model; how easy it is to derive properties; etc.) rather than on practical “recipes” for applying a formal language in specific projects. The presentation is accompanied by many examples from different domains; most of them are inspired by embedded systems concepts; others, however, show that the same concepts apply as well to a wider class of systems, such as biological and social ones.

We deliberately excluded from our survey time modeling approaches based on stochastic formalisms.
This sector is certainly important and very relevant for several applications, and it has recently received increasing attention from the research community (e.g., [RKNP04, DK05]). In fact, most of the formal notations presented in this survey have some variants that include stochastic or probabilistic features. However, including such variants in our presentation would have also required us to present the additional mathematical notions and tools needed to tackle stochastic processes. These are largely different from the notions discussed in the article, which aim at gaining “certainty” (e.g., “the system will not crash under any circumstances”) rather than a “measure of uncertainty” (e.g., “the system will crash only with a given, small probability”), as happens with probabilistic approaches. Thus, including stochastic formalisms would have weakened the focus of the article and made it excessively long.

The first part of this article introduces an informal reference framework within which the various formalisms can be explained and evaluated. First, Section 2 presents the notion of language, and gives a coarse categorization of formalisms; then, Section 3 proposes a collection of “dimensions” along which the various modeling approaches can be classified.

The second part of the article is the actual survey of time modeling formalisms. We do not aim at exhaustiveness; rather, we focus on several relevant formalisms, those that better exemplify the various approaches found in the literature, and analyze them through the dimensions introduced in Section 3. We complement the exposition, however, with an extensive set of bibliographic references. In the survey, we follow a rather historical ordering: Section 4 summarizes the most traditional ways of taking care of timing aspects in computing, whereas Section 5 is devoted to the more recent proposals, often motivated by the needs of new, critical, real-time applications. Finally, Section 6 contains some concluding remarks.
2 Languages and Interpretations

When studying the different ways in which time has been represented in the literature, and the associated properties, two aspects must be considered: the language used to describe time and the way in which the language is interpreted. Let us illustrate this point in some detail.

A language (in the broad sense of the term) is the device that we employ to describe anything of interest (an object, a function, a system, a property, a feature, etc.). Whenever we write a “sentence” in a language (any language), a meaning is also attached to that sentence. Depending on the extent to which mathematical concepts are used to associate a sentence with its meaning, a language can be informal (no elements of the language are associated with mathematical concepts), semiformal (some are, but not all), or formal (everything is).

More precisely, given a sentence φ written in some language L, we can assign it a variety of interpretations; we then define the meaning of φ by establishing which ones, among all possible interpretations, are those that are actually associated with it (in other words, by deciding which interpretations are “meaningful”, and which ones are not). We say that an interpretation satisfies a sentence with which it is associated, or, dually, that the sentence expresses its associated interpretations. In the rest of this article, we will sometimes refer to the language as the syntax used to describe a sentence, as opposed to the interpretations that the latter expresses, which constitute its semantics.

(Such interpretations are referred to in mathematical logic as the models of φ; in this article we will in general avoid this terminology, as it might generate some confusion with the different notion of a model as “description of a system”.)
In this survey we mainly deal with languages that have the distinguishing feature of including a notion of time. Then, the interpretations associated with sentences in these languages include a notion of temporal evolution of elements; that is, they define what value is associated with an element at a certain time instant. As a consequence, we refer to the possible interpretations of sentences in timed languages as behaviors. In fact, the semantics of every formal language that has a notion of time is defined through some idea of “behavior” (or trace): infinite words for linear temporal logic [Eme90], timed words for timed automata [AD94], sequences of markings for Petri nets [Pet81], and so on.

For example, a behavior of a system is a mapping b : T → S, where T is a temporal domain and S is a state space; the behavior represents the system’s state (i.e., the value of its elements) in the various time instants of T.

Let us consider a language L and a sentence φ written in L. The nature of φ depends on L; for example, it could be a particular kind of graph if L is some type of automaton (a Statechart, a Petri net, etc.), a logic formula if L is some sort of logic, and so on. Given a behavior b in the system model, we write b |= φ to indicate that b satisfies φ, that is, it is one of the behaviors expressed by the sentence. The satisfaction relation |= is not general, that is, it is language-dependent (it is, in fact, |=_L, but we omit the subscript for conciseness), and is part of the definition of the language.

Figure 1 depicts informally the relations among behaviors, language, system descriptions, real world, and semantics. Solid arrows denote that the entities they point to are obtained by combining elements of the entities they originate from; for instance, a system description consists of formalizations of (parts of) the real world through sentences in some language.
Dashed arrows, on the other hand, denote indirect influences; for example, the features of a language can suggest the adoption of certain behavioral structures. Finally, the semantics of a system is given by the set of all behaviors b satisfying the system description Φ. These relations will become clearer in the following examples.

Figure 1: Behaviors, language, system descriptions, and the real world.
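The scheme of Figure 1 can be sketched concretely in code. The following fragment is our own illustration (not taken from any particular formalism): a behavior is encoded as a mapping from time points to states, and a sentence φ is identified with the set of behaviors that satisfy it, here represented directly as a predicate.

```python
# A behavior maps time points to states; a sentence phi denotes the set of
# behaviors that satisfy it. Here phi is encoded directly as a predicate.

def satisfies(behavior, phi, domain):
    """b |= phi, with phi a predicate over (behavior, time domain)."""
    return phi(behavior, domain)

# Behavior over discrete time 0..9: a lamp that switches on at instant 3.
lamp = lambda t: "on" if t >= 3 else "off"

# phi: "eventually the lamp is on and stays on forever after"
eventually_on = lambda b, T: any(
    all(b(u) == "on" for u in T if u >= t) for t in T
)

print(satisfies(lamp, eventually_on, range(10)))  # True
```

Here the system semantics of Figure 1, {b | b |= φ}, is simply the set of behaviors for which the predicate returns True.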
Example 1 (Continuous, Scalar Linear Dynamic System). Suppose L is the language of differential equations used to describe traditional linear dynamic systems. With such a language we might model, for example, the simple RC circuit of Figure 2. In this case, the sentence φ that describes the system could be q̇ = −(1/RC)·q (where q is the charge of the capacitor); then, a behavior b that satisfies φ (i.e., such that b |= φ) is b(t) = q0·e^(−t/RC), where q0 is the initial charge of the capacitor, at the time when the circuit is closed (conventionally assumed to be 0).
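As a quick numerical illustration (ours, not part of the original example), one can check that this behavior indeed satisfies the sentence, by comparing a finite-difference derivative of b against −b/(RC) at a few sampled instants; the parameter values below are arbitrary assumptions.

```python
import math

# Illustrative values (assumptions): R = 1 ohm, C = 0.5 F, q0 = 2 coulombs.
R, C, q0 = 1.0, 0.5, 2.0
b = lambda t: q0 * math.exp(-t / (R * C))  # candidate behavior b(t)

h = 1e-6  # step for the central-difference approximation of the derivative
for t in (0.0, 0.5, 1.0, 2.0):
    derivative = (b(t + h) - b(t - h)) / (2 * h)
    # The sentence phi: q' = -(1/RC) * q, checked up to numerical error.
    assert abs(derivative - (-b(t) / (R * C))) < 1e-4
```

Of course this only samples finitely many instants; it is a sanity check of b |= φ, not a proof.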
Figure 2: An example of a sentence in a graphical language describing electric circuits.

To conclude this section, let us present a widely used categorization of languages that, while neither sharp nor precise, nevertheless quickly conveys some important features of a language. Languages are often separated into two broad classes: operational languages and descriptive languages [GJM02].

Operational languages are well suited to describe the evolution of a system starting from some initial state. Common examples of operational languages are the differential equations used to describe dynamic systems in control theory (see Section 4.1), automata-based formalisms (finite-state automata, Turing machines, timed automata, which are described in Sections 4.3 and 5.1.1), and Petri nets (which are presented in Section 5.1.2). Operational languages are usually based on the key concepts of state and transition (or event), so that a system is modeled as evolving from a state to the next one when a certain transition/event occurs. For example, an operational description of the dynamics of a digital safe could be the following:
Example 2 (Safe, operational formulation). “When the last digit of the correct security code is entered, the safe opens; if the safe remains open for three minutes, it automatically closes.”

Descriptive languages, instead, are better suited to describing the properties (static or dynamic) that the system must satisfy. Classic examples of descriptive languages are logic-based and algebra-based formalisms, such as those presented in Section 5.2. An example of a descriptive formulation of the properties of a safe is the following:

Example 3 (Safe, descriptive formulation). “The safe is open if and only if the correct security code has been entered no more than three minutes ago.”

As mentioned above, the distinction between operational and descriptive languages is not as sharp as it sounds, for the following reasons. First, it is possible to use languages that are operational to describe system properties (e.g., [AD94] used timed automata to represent both the system and its properties to be verified through model checking), and languages that are descriptive to represent the system evolution with state/event concepts [GM01] (in fact, the dynamics of Example 2 can be represented using a logic language, while the property of Example 3 can be formalized through an automata-based language). In addition, it is common to use a combination of operational and descriptive formalisms to model and analyze systems in a so-called dual-language approach. In this dual approach, an operational language is used to represent the dynamics of the system (i.e., its evolution), while its requirements (i.e., the properties that it must satisfy, and which one would like to verify in a formal manner) are expressed in a descriptive language. Model checking techniques [CGP00, HNSY94] and the combination of Petri nets with the TRIO temporal logic [FMM94] are examples of the dual-language approach.

3 Dimensions of the Problem

When describing the modeling of time, several distinctive issues need to be considered.
These constitute the “dimensions” of the problem from the perspective of this article. They will help the analysis of how time is modeled in the literature, which is carried out in Sections 4 and 5.

Some of the dimensions proposed here are indicative of issues that are pervasive in the modeling of time in the literature (e.g., using discrete vs. continuous time domains); others shed more light on subtle aspects of some formalisms. We believe that the systematic, though not exhaustive, analysis of the formalisms surveyed in Sections 4 and 5 against the dimensions proposed below should not only provide the reader with an overall comparative assessment of the formalisms described in this article, but also help her build her own evaluation of other present and future formalisms in the literature.
Discrete vs. Dense Time Domains

A first natural categorization of the formalisms dealing with time-dependent systems and the adopted time model is whether such a model is a discrete or a dense set. A discrete set consists of isolated points, whereas a dense set (ordered by “<”) is such that for every two points t1, t2, with t1 < t2, there is always another point t3 in between, that is, t1 < t3 < t2. In the scientific literature and applications, the most widely adopted discrete time models are natural and integer numbers — herewith denoted as N and Z, respectively — whereas the typical dense models are rational and real numbers — herewith denoted as Q and R, respectively. For instance, differential equations are normally stated with respect to real variable domains, whereas difference equations are defined on integers. Computing devices are formalized through discrete models when their behavior is paced by a clock, so that it is natural to measure time by counting clock ticks, or when they deal with (i.e., measure, compute, or display) values in discrete domains.

Besides the above well-known premises, however, a few more accurate distinctions are useful to better evaluate and compare the many formalisms available in the literature and those that will be proposed in the future.

Continuous vs. Noncontinuous Time Models
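Before contrasting the two notions in the text, both can be demonstrated with exact rational arithmetic; the following sketch is ours. Between any two rational time points there is always a third (density), yet a bounded set of rationals need not have a rational least upper bound (failure of continuity).

```python
from fractions import Fraction

# Density of Q: between any two rational points there is always a third.
t1, t2 = Fraction(1, 3), Fraction(1, 2)
t3 = (t1 + t2) / 2
assert t1 < t3 < t2

# Non-continuity of Q: the set {x in Q : x*x < 2} is bounded above, but
# its least upper bound, sqrt(2), is not rational; rational approximations
# only approach it from below, one decimal digit at a time.
x = Fraction(0)
for k in range(20):
    step = Fraction(1, 10) ** k  # refine by one more decimal digit
    while (x + step) ** 2 < 2:
        x += step
print(float(x))  # ~1.41421356..., never reaching sqrt(2)
```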
Normally in mathematics, continuous time models (i.e., those in which the temporal domain is a dense set such that every nonempty subset with an upper bound has a least upper bound), such as the real numbers, are preferred to other dense domains such as the rationals, thanks to their completeness/closure with respect to all common operations (otherwise, even referring to a quantity such as √π would be cumbersome). Instead, normal numerical algorithms deal with rational numbers, since they can approximate real numbers — which cannot be represented by a finite sequence of digits — up to any predefined error. There are cases, however, where the two sets exhibit a substantial difference. For instance, assume that a system is composed of two devices whose clocks c1 and c2 are incommensurable (i.e., such that there are no integer numbers n, m such that n·c1 = m·c2). In such a case, if one wants to “unify” the system model, Q is not a suitable temporal domain. Also, there are some sophisticated time analysis algorithms that impose the restriction that the time domain is Q but not R. We refer to one such algorithm when discussing Petri nets in Section 5.1.2.

Finite or Bounded Time Models
Normal system modeling assumes behaviors that may proceed indefinitely in the future (and maybe in the past), so that it is natural to model time as an unbounded set. There are significant cases, however, where all relevant system behaviors can be a priori enclosed within a bounded “time window”. For instance, braking a car to a full stop requires at most a few seconds; thus, if we want to model and analyze the behavior of an anti-lock braking system, there is no loss of generality if we assume as a temporal domain a bounded real range, say [0, b] for a suitably large constant b.
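The practical payoff of such a bound is that analysis can be restricted to the window. A minimal sketch, in which all numeric values (maximum speed, deceleration, window length) are invented for illustration:

```python
# Bounded-window sketch: a car braking from at most 50 m/s with constant
# deceleration 8 m/s^2 always stops within [0, 10] seconds, so behaviors
# outside that window are irrelevant to the analysis.
DECEL = 8.0    # m/s^2, assumed full-braking deceleration
WINDOW = 10.0  # s, a priori bound on the behaviors of interest

def stopping_time(v0):
    """Time for initial speed v0 (m/s) to reach zero under DECEL."""
    return v0 / DECEL

# Every initial speed up to 50 m/s stops inside the window, so a
# verification tool may soundly restrict the temporal domain to [0, WINDOW].
assert all(stopping_time(v0) <= WINDOW for v0 in range(0, 51))
```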
Hybrid Systems
In this article, by hybrid system model we mean a model that uses both discrete and dense domains. There are several circumstances when this may occur, mainly but not exclusively related to the problem of integrating heterogeneous components: for instance, monitoring and controlling a continuous process by means of a digital device.
• A system (component) with a discrete — possibly finite — set of states is modeled as evolving in a dense time domain. In such a case its behavior is graphically described as a square wave form, and its state can be formalized as a piecewise constant function of time, as shown in Figure 3.

Figure 3: A square-wave form over dense time.

• In a fairly symmetric way, a continuous behavior can be sampled at regular intervals, as exemplified in Figure 4.

Figure 4: A sampled continuous behavior.

• A more sophisticated, but fairly common, case of hybridness may arise when a model evolves through a discrete sequence of “steps” while other, independent, variables evolve taking their values in nondiscrete domains; for instance, finite state automata augmented with dense-timed clock variables. We see examples of such models in Section 5.1, where timed and hybrid automata are discussed.
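The sampler of Figure 4 can be sketched in a few lines; this is our own illustration, and the sampling period and sampled quantity are arbitrary assumptions.

```python
import math

# A continuous behavior over t in R is observed only at integer multiples
# of a fixed sampling period, yielding a discrete-time behavior over Z.
PERIOD = 0.25  # seconds between samples (arbitrary)

def continuous_behavior(t):
    return math.sin(t)  # a stand-in for some continuously varying quantity

def sample(b, n):
    """Discrete-time view: the n-th sample of continuous behavior b."""
    return b(n * PERIOD)

samples = [sample(continuous_behavior, n) for n in range(8)]
print([round(s, 2) for s in samples])
```

The map from `continuous_behavior` to `sample` is exactly the move from t ∈ R to t ∈ Z depicted in the figure.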
Time Granularity
In some sense, time granularity can be seen as a special case of hybridness. We say that two system components have different time granularities when their “natural time scales” differ, possibly by orders of magnitude. This, again, is quite frequent when we pair a process that evolves in the order of seconds or minutes, or even days or months (such as a chemical plant, or a hydroelectric power plant) with a controller based on digital electronic devices. In principle, if we assume a unique continuous time model, say the reals, the problem is reduced to a, possibly cumbersome, change of time unit. (Notice, however, that in very special cases the different time units could be incommensurable. Even if in practice this circumstance may seldom arise, after all the two main units offered by nature, the day and the year, are incommensurable.) But should a requirement such as “complete this job within three days” be interpreted as “within 3 × 24 × 60 × 60 seconds from now”, or as “this job has to be finished before midnight of the third day after today”? Both interpretations may be adopted depending on the context of the claim. An approach that deals rigorously with different time granularities is presented in Section 5.2.1 when discussing temporal logics.

Ordering vs. Metric

Another central issue is whether a formalism permits the expression of metric constraints on time, or, equivalently, of constraints that exploit the metric structure of the underlying time model (if it has any). A domain (a time domain, in our case) has a metric when a notion of distance is defined on it, that is, when a nonnegative measure d(t1, t2) ≥ 0 is defined for every pair of points t1, t2 of the domain.

As mentioned above, typical choices for the time domain are the usual discrete and dense numeric sets, that is, N, Z, Q, R. All these domains have a “natural” metric defined on them, which corresponds to simply taking the distance between two points: d(t1, t2) = |t1 − t2|. (Technically, this is called the Euclidean distance. We focus our attention here on temporal domains T that are totally ordered; although partially ordered sets may be considered as time domains (see Section 3.6), they have not attracted as much research activity as totally ordered domains.)

Notice, however, that although all common choices for the time domain possess a metric, we focus on whether the language in which the system is described permits descriptions using the same form of metric information as that embedded in the underlying time domain. For instance, some languages allow the user to state that an event p (e.g., “push button”) must temporally precede another event q (e.g., “take picture”), but do not include constructs to express the time distance between the occurrence of p and that of q; hence, they cannot distinguish the case in which the delay between p and q is 1 time unit from the case in which the delay is 100 time units. Thus, whenever the language does not allow users to state metric constraints, it is possible to express only information about the relative ordering of phenomena (“q occurs after p”), but not about their distance (“q occurs 100 time units after p”).
In this case, we say that the language has a purely qualitative notion of time, as opposed to allowing quantitative constraints, which are expressible with metric languages.

Parallel systems have been defined [Wir77] as those where the correctness of the computation depends only on the relative ordering of computational steps, irrespective of the absolute distance between them. Reactive systems can often be modeled as parallel systems, where the system evolves concurrently with the environment. Therefore, for the formal description of such systems a purely qualitative language is sufficient. On the contrary, real-time systems are those whose correctness depends on the time distance among events as well. Hence, a complete description of such systems requires a language in which metric constraints can be expressed. In this vein, research in the field of formal languages for system description has evolved from dealing with purely qualitative models to the more difficult task of providing the user with the possibility of expressing and reasoning about metric constraints.

For instance, consider the two sequences b1, b2 of events p, q, where exactly one event per time step occurs:

• b1 = pqpqpq···

• b2 = ppqqppqq···

b1 and b2 share all the qualitative properties expressible without using any metric operator. For instance, “every occurrence of p is eventually followed by an occurrence of q” is a qualitative property that holds for both behaviors, whereas “p occurs in every instant” is another qualitative property, which instead is false for both behaviors. If referring to metric properties is allowed, one can instead discriminate between b1 and b2, for example through the property “every occurrence of q is followed by another occurrence of q after two time steps”, which holds for b1 but not for b2.

Some authors have introduced a notion of equivalence between behaviors that captures the properties expressed by qualitative formulas.
In particular, Lamport [Lam83] first proposed the notion of invariance under stuttering. Whenever a (discrete time) behavior b1 can be obtained from another behavior b2 by adding and removing “stuttering steps” (i.e., pairs of identical states on adjacent time steps), we say that b1 and b2 are stutter-equivalent. For instance, behaviors b1 and b2 outlined above are stutter-equivalent. Then, the equivalence classes induced by this equivalence relation precisely identify classes of behaviors that share identical qualitative properties. Note that stutter invariance is defined for discrete time models only.

Linear vs. Branching Time Models

The terms linear and branching refer to the structures on which a formal language is interpreted: linear-time formalisms are interpreted over linear sequences of states, whereas branching-time formalisms are interpreted over trees of states. In other words, each description of a system adopting a linear notion of time refers to (a set of) linear behaviors, where the future evolution from a given state at a given time is always unique. Conversely, a branching-time interpretation refers to behaviors that are structured in trees, where each “present state” may evolve into different “possible futures”. For instance, assuming discrete time, Figure 5 pictures a linear sequence of states and a tree of states over six time instants.

Figure 5: A linear and a branching time model.

A linear behavior is a special case of a tree. Conversely, a tree might be thought of as a set of linear behaviors that share common prefixes (i.e., that are prefix-closed); this notion is captured formally by the notion of fusion closure [AH92b]. Thus, linear and branching models can be put on a common ground and compared.
This has been done extensively in the literature. Linear or branching semantic structures are then matched, in the formal languages, by corresponding syntactic elements that allow us to express properties about the specific features of the interpretation. This applies in principle to all formal languages, but it has been the focus of logic languages especially, and temporal logics in particular. Thus, a linear-time temporal logic is interpreted over linear structures, and is capable of expressing properties of behaviors with unique futures, such as “if event p happens, then event q will happen eventually in the future”. On the other hand, branching-time temporal logics are interpreted over tree structures and allow users to state properties of branching futures, such as “if event p happens at some time t, then there is some possible future where event q holds”. We discuss this in greater depth in our consideration of temporal logics (Section 5.2.1); for general reference we cite the classic works by Lamport [Lam80], Emerson and Halpern [EH86], Emerson [Eme90], and Alur and Henzinger [AH92b] — the last focusing on real-time models.

Finally, we mention that it is possible to have semantic structures that are also branching in the past [Koy92], which is where different pasts merge into a single present. However, in practice, branching-in-the-past models are relatively rare, so we will not deal with them in the remainder of this article.

Determinism vs. Nondeterminism
Linear and branching time are features of the languages and of the structures on which they are interpreted, whereas the notions of determinism and nondeterminism are attributes of the systems being modeled or analyzed. More precisely, let us consider systems where a notion of input is defined: one such system evolves over time by reading its input and changing the current state accordingly. Whenever the future state of the system is uniquely determined by its current state and input values, we call the system deterministic. For instance, a light button is a deterministic system where pressing the button (input) when the light is in state off yields the unique possible future state of light on. Notice that, for a given input sequence, the behavior of a deterministic system is uniquely determined by its initial state. Conversely, systems that can evolve to different future states from the same present state and the same input by making arbitrary “choices” are called nondeterministic. For example, a resource arbiter might be a nondeterministic system that responds to two requests happening at the same time by “choosing” arbitrarily to whom to grant the resource first. The Ada programming language [BB94] embeds such nondeterminism in its syntax and semantics.

In linear-time models the future of any instant is unique, whereas in branching-time models each instant branches into different possible futures; then, there is a natural coupling between deterministic systems and linear models on one side, and nondeterministic systems and branching models on the other side, where all possible “choices” are mapped at some time to branches in the tree. Often, however, linear-time models are still preferred even for nondeterministic systems, for their intuitiveness and simplicity. In the discussion of Petri nets (Section 5.1.2) we see an example of linear time domains expressing the semantics of nondeterministic formalisms.
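The light button and the arbiter can be sketched side by side; the encoding below is our own, with invented state and input names. A deterministic system has a transition function (one successor per state/input pair), while a nondeterministic one has a set-valued successor map.

```python
# Deterministic light button: a transition *function*.
det_light = {("off", "press"): "on", ("on", "press"): "off"}

# Nondeterministic arbiter: on simultaneous requests it may grant
# either client first, so a state/input pair maps to a *set* of states.
nondet_arbiter = {("idle", "req_a_and_b"): {"serving_a", "serving_b"}}

def det_step(state, inp):
    return det_light[(state, inp)]       # exactly one successor

def nondet_successors(state, inp):
    return nondet_arbiter[(state, inp)]  # all possible successors

assert det_step("off", "press") == "on"
assert nondet_successors("idle", "req_a_and_b") == {"serving_a", "serving_b"}
```

Unrolling `det_step` from an initial state yields a single linear behavior; unrolling `nondet_successors` naturally yields a tree, matching the coupling described above.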
Implicit vs. Explicit Time Reference

Some languages allow, or impose on, the user to make explicit reference to temporal items (attributes or entities of “type time”), whereas other formalisms, though enabling reasoning about temporal system properties, leave all or some references to time-related properties (occurrences of events, durations of states or actions) implicit in the adopted notation.

To illustrate, at one extreme consider pure first-order predicate calculus used to specify system behavior and its properties. In such a case we could use explicit terms ranging over the time domain and build any formula, possibly with suitable quantifiers, involving such terms. We could then express properties such as “if event p occurs at instant t, then q occurs at some instant t′ no later than k time units after t”. At the other extreme, classic temporal logic [Kam68], despite its name, does not offer users the possibility of explicitly mentioning any temporal quantity in its formulas, but aims at expressing temporal properties by referring to an implicit “current time” and to the ordering of events with respect to it; for example, it has operators through which it is possible to represent properties such as “if event p occurs now, then sometime in the future event q will follow”.

Several formalisms adopt a kind of intermediate approach. For instance, many types of abstract machines allow the user to specify explicitly, say, the duration of an activity with implicit reference to its starting time (e.g., Statecharts, discussed in Section 5.1.1, and Petri nets, discussed in Section 5.1.2). Similarly, some languages inspired by temporal logic (e.g., MTL, presented in Section 5.2.1) keep its basic approach of referring any formula to an implicit current instant (the now time), but allow the user to explicitly express a time distance with respect to it.
Typical examples of such formulas express properties such as "if event p occurs now, then event q will follow in the future within t time units".

In general, using implicit references to time instants — in particular, the use of an implicit now — is quite natural and allows for compact formalizations when modeling and expressing properties of so-called "time-invariant systems", which are the majority of real-life systems. In most cases, in fact, the system behavior is the same if the initial state of, and the input supplied to, the system are the same, even if the same computation occurs at different times. Hence the resulting behaviors are simply temporal translations of one another, and in such cases explicitly expressing where the now is located on the time axis is superfluous.

The problem of time advancement arises whenever the model of a timed system exhibits behaviors that do not progress past some instant. Such behaviors do not correspond to any physical "real" phenomenon; they may be the consequence of some incompleteness or inconsistency in the formalization of the system, and thus must be ruled out.

The simplest manifestation of the time advancement problem arises with models that allow transitions to occur in null time. For instance, several automata-based formalisms, such as Statecharts and timed versions of Petri nets, adopt this abstract notion (see Section 5.1.1). Although a truly instantaneous action is physically unfeasible, it is nonetheless a very useful abstraction for events that take an amount of time that is negligible with respect to the overall dynamics of the system [BB06]. For example, pushing a button is an action whose actual duration can usually be ignored, and which can thus be represented abstractly as a zero-time event.
When zero-time transitions are allowed, an infinite number of such transitions may accumulate in an arbitrarily small interval to the left of a given time instant, thus modeling a fictitious infinite computation where time does not advance at all. Behaviors where time does not advance are usually called Zeno behaviors, from the ancient philosopher Zeno of Elea and his paradoxes on time advancement (the term was coined by Abadi and Lamport [AL94]). Notice that, from a rigorous point of view, even the notion of behavior as a function — whose domain is time and whose range is system state — is ill-defined if zero-time transitions are allowed, since the consequence of a transition that takes zero time to occur is that the system is both at the source state and at the target state of the transition in the same instant.

Even if actions (i.e., state transformations) are noninstantaneous, it is still possible for Zeno behaviors to occur if time advances only by arbitrarily small amounts. Consider, for instance, a system that delivers an unbounded sequence of events p_k, for k ∈ N; each event p_k happens exactly t_k time units after the previous one (i.e., p_{k−1}). If the sum of the relative times (that is, the sum Σ_k t_k of the time distances between consecutive events) converges to a finite limit t, then the absolute time never surpasses t; in other words, time stops at t, while an infinite number of events occur before it. Such behaviors allow an infinite number of events to occur within a finite time.

Even when we consider continuous-valued functions of time that vary smoothly, we may encounter Zeno behaviors. Take, for instance, the real-valued function of time b(t) = e^{−1/t²} sin(1/t); b(t) is very smooth, as it possesses continuous derivatives of all orders. Nonetheless, its sign changes an infinite number of times in any interval containing the origin; therefore a natural notion such as "the next instant at which the sign of b changes" is not defined at time 0, and, consequently, we cannot describe the system by relating its behavior to such — otherwise well-defined — notions.
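The convergent-sum scenario above can be made concrete with a small sketch (a hypothetical illustration, with the relative delays chosen here as t_k = 2^(−k)): infinitely many events occur, yet absolute time never reaches the finite limit 1.

```python
# Zeno behavior: event k occurs t_k = 2**(-k) time units after event k-1.
# The partial sums of the t_k converge to 1, so no event ever happens at
# or after absolute time 1, although the number of events is unbounded.

def event_times(n):
    """Absolute occurrence times of the first n events."""
    t = 0.0
    times = []
    for k in range(1, n + 1):
        t += 2.0 ** (-k)
        times.append(t)
    return times

times = event_times(50)
# Even after 50 events, time has not advanced past 1.
assert all(t < 1.0 for t in times)
```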
Indeed, as explained precisely in Section 5.2 when discussing temporal logics, non-Zenoness can be mathematically characterized by the notion of analyticity, which is even stronger than infinite differentiability.

The following remarks are to some extent related to the problem of time advancement, and might help a deeper understanding thereof.

• Some formal systems possess "Zeno-like" behaviors, where the distance between consecutive events gets indefinitely smaller, even if time progresses (these behaviors have been called "Berkeley" in [FPR08a], from the philosopher George Berkeley and his investigations arguing against the notion of infinitesimal). These systems cannot be controlled by digital controllers operating with a fixed sampling rate such as in [CHR02], since in this case their behaviors cannot be suitably discretized [FR06, FPR08a].

• Some well-known problems of (possibly concurrent) computation such as termination, deadlocks, and fairness [Fra86] can be considered as dual problems to time advancement. In fact, they concern situations where some processes fail to advance their states, while time keeps on flowing. Examples of these problems and their solutions are discussed with reference to a variety of formalisms introduced in Section 5.

(Zeno of Elea lived circa 490–425 BC; George Berkeley was born in Kilkenny in 1685 and died in Oxford in 1753.)
We can classify solutions to the time advancement problem into two categories: a priori and a posteriori methods. In a priori methods, the syntax or the semantics of the formal notation is restricted beforehand, in order to guarantee that the model of any system described with it is exempt from time advancement problems. For instance, in some notations zero-time events are simply forbidden, or only finite sequences of them are allowed.

On the contrary, a posteriori methods do not deal with time advancement issues until after the system specification has been built; the specification is then analyzed against a formal definition of time advancement, in order to check that none of its actual behaviors incurs the time advancement problem. An a posteriori method may be particularly useful to spot possible risks in the behavior of the real system. For instance, in some cases oscillations exhibited by the mathematical model with a frequency that goes to infinity within a finite time interval, such as in the example above, may be the symptom of some instability in the modeled physical system, in the same way as a physical quantity — say, a temperature or a pressure — that tends to infinity within a finite time in the mathematical model is the symptom of a risk of serious failure in the real system.
Most real systems — as the term itself suggests — are complex enough that it is useful, if not outright unavoidable, to model, analyze, and synthesize them as the composition of several subsystems. Such a composition/decomposition process may be iterated until each component is simple enough to be analyzed in isolation.

Composition/decomposition, also referred to as modularization, is one of the most general and powerful design principles in any field of engineering. In particular, in the case of (sequential) software design it has produced a rich collection of techniques and language constructs, from subprograms to abstract data types.

The state of the art is definitely less mature when we come to the composition of concurrent activities. In fact, it is not surprising that very few programming languages deal explicitly with concurrency. It is well known that the main issue with the modeling of concurrency is the synchronization of activities (for which a plethora of more or less equivalent constructs are used in the literature: processes, tasks, threads, etc.) when they have to access shared resources or exchange messages.

The problem becomes even more intricate when the various activities are heterogeneous in nature. For instance, they may involve "environment activities", such as a plant or a vehicle to be controlled, together with monitoring and control activities implemented through hardware and software components. In such cases the time reference can be implicit for some activities and explicit for others; also, the system model might include parts in which time is represented simply as an ordering of events and parts that are described through a metric notion of time; finally, it might even be the case that different components are modeled by means of different time domains (discrete or continuous), thus producing hybrid systems.

Next, a basic classification of the approaches dealing with the composition of concurrent units is provided.
Synchronous vs. Asynchronous Composition
When composing concurrent modules there are two foremost ways of relating their temporal evolution, called synchronous and asynchronous composition.

Synchronous composition constrains state changes of the various units to occur at the very same time, or at time instants that are strictly and rigidly related. Notice that synchronous composition is naturally paired with a discrete time domain, although meaningful exceptions exist where the global system is synchronized over a continuous time domain.

Conversely, in an asynchronous composition of parallel units, each activity can progress at a speed largely unrelated to the others'; in principle there is no need to know which state each unit is in at every instant. In some cases this is even impossible: for instance, if we are dealing with a system that is geographically distributed over a wide area and the dynamics of some component evolves at a speed that is of the same order of magnitude as the speed of light (more precisely, the state of a given component changes in a time that is shorter than the time needed to send information about the component's state to other components).

A similar situation occurs in totally different realms, such as the world-wide stock market. There, the differences in local times between locations all over the world make it impossible to define certain states of the global market, such as when it is "closed".

For asynchronous systems, interaction between different components occurs only at a few "meeting points", according to suitably specified rules. For instance, the Ada programming language [BB94] introduces the notion of rendezvous between asynchronous tasks: a task owning a resource waits to grant it until it receives a request for it; symmetrically, a task that needs to access the resource raises a request (an entry call) and waits until the owner is ready to accept it.
When both conditions are verified (an entry call is issued and the owner is ready to accept it), the rendezvous occurs, that is, the two tasks are synchronized. At the end of the entry execution by the owner, the tasks split again and continue their asynchronous execution.

Many formalisms exist in the literature that aim at modeling some kind of asynchronous composition. Among these, Petri nets exhibit similarities with the above informal description of Ada's task system.

Not surprisingly, however, precisely formalizing the semantics of asynchronous composition is somewhat more complex than that of synchronous composition, and several approaches have been proposed in the literature. We examine some of them in Section 5.

Analysis and Verification Issues
A fundamental feature of a formal model is its amenability to analysis; namely, we can probe the model of a system to be sure that it ensures certain desired features. In a widespread paradigm [GJM02, Som04], we call specification the model under analysis, and requirements the properties that the specification model must exhibit. The task of ensuring that a given specification satisfies a set of requirements is called verification. Although this survey does not focus on verification aspects, we will occasionally deal with some related notions.
Expressiveness
A fundamental criterion according to which formal languages can be classified is their expressiveness, that is, the possibility of characterizing extensive classes of properties. Informally, a language is more expressive than another if it allows the designer to write sentences that can more finely and accurately partition the set of behaviors into those that satisfy, or fail to satisfy, the property expressed by the sentence itself. Note that the expressiveness relation between languages is a partial order, as there are pairs of formal languages whose expressive powers are incomparable: each of the two languages can express properties that cannot be expressed in the other. Conversely, there exist formalisms whose expressive powers coincide; in such cases they are equivalent, in that they can express the very same properties. Expressiveness deals only with the logical possibility of expressing properties; this feature is quite different from other — somewhat subjective, but nonetheless very relevant — characteristics such as conciseness, readability, naturalness, and ease of use.
Decidability and Complexity
Although in principle we might prefer the "most expressive" formalism, in order not to be restrained in what can be expressed, there is a fundamental trade-off between expressiveness and another important characteristic of a formal notation, namely its decidability. A certain property is decidable for a formal language if there exists an algorithmic procedure that is capable of determining, for any specification written in that language, whether the property holds or not in the model. Therefore, the verification of decidable properties can be — at least in principle — a totally automated process. The trade-off between expressiveness and decidability arises because, when we increase the expressiveness of the language, we may lose decidability, and thus have to resort to semi-automated or manual methods for verification, or adopt partial verification techniques such as testing and simulation. Here the term partial refers to the fact that the analysis conducted with these techniques provides results that concern only a subset of all possible behaviors of the model under analysis.

While decidability is just a yes/no property, complexity analysis provides, when a given property is decidable, a measure of the computational effort required by an algorithm that decides whether the property holds or not for a model. The computational effort is typically measured in terms of the amount of memory or time required to perform the computation, as a function of the length of the input (that is, the size of the sentence that states it in the chosen formal language; see also Section 4.3).
Analysis and Verification Techniques
There exist two large families of verification techniques: those based on exhaustive enumeration procedures and those based on syntactic transformations such as deduction or rewriting, typically in the context of some axiomatic description. Although broad, these two classes do not cover, by any means, the whole spectrum of verification algorithms, which comprises very different techniques and methods; here, however, we limit ourselves to sketching a minimal definition of these two basic techniques.
Exhaustive enumeration techniques are mostly automated, and are based on the exploration of graphs or other structures representing an operational model of the system, or the space of all possible interpretations of the sentence expressing the required property.

Techniques based on syntactic transformations typically address the verification problem by means of logic deduction [Men97]. Therefore, usually both the specification and the requirements are in descriptive form, and the verification consists of successive applications of some deduction schemes until the requirements are shown to be a logical consequence of the system specification.
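A minimal sketch of the exhaustive-enumeration idea, under the assumption that the system is given as a finite transition graph (both the toy state space and the property checked below are hypothetical): all reachable states are enumerated, and one then verifies that no "bad" state is among them.

```python
def reachable_states(initial, successors):
    """Exhaustively enumerate all states reachable from `initial`;
    `successors` maps a state to the set of its one-step successors."""
    seen = {initial}
    frontier = [initial]
    while frontier:
        state = frontier.pop()
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

# Toy operational model: a counter modulo 4, incremented by 2 at each step.
succ = lambda s: {(s + 2) % 4}
states = reachable_states(0, succ)
# Verification of a safety property: the "bad" state 1 is never reached.
assert states == {0, 2} and 1 not in states
```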
In the rest of this article, in the light of the categories outlined in Section 3, we survey and compare a wide range of time models that have been used to describe computational aspects of systems.

This section presents an overview of the "traditional" models that first tackled the problem of time modeling, whereas Section 5 discusses some more "modern" formalisms. As stated in Section 2, we start from the description of formalisms, but we will ultimately focus on their semantics and, therefore, on what kind of temporal modeling they allow.

Any model that aims at describing the "dynamics" of phenomena, or a "computation", will, in most cases, have some notion of time. The modeling languages that have been used from the outset to describe "systems", be they physical (e.g., moving objects, fluids, electric circuits), logical (e.g., algorithms), or even social or economic ones (e.g., administrations), are no exception and incorporate a more or less abstract idea of time.

This section presents the relevant features of the notion of time as traditionally used in three major areas of science and engineering: control theory (Section 4.1), electronics (Section 4.2), and computer science (Section 4.3). As the traditional modeling languages used in these disciplines have been widely studied and are well understood, we will only sketch their (well-known) main features; we will nonetheless pay particular attention to the defining characteristics of the notion of time employed in these languages, and highlight its salient points.
A common way to describe systems for control purposes in various engineering disciplines (mechanical, aerospace, chemical, electrical, etc.) is through the so-called state-space representation [Kha95, SP05].

The state-space representation is based on three key elements: a vector x of state variables, a vector u of input variables, and a vector y of output variables. x, u, and y are all explicit functions of time, hence their values depend on the time at which they are evaluated, and they are usually written x(t), u(t), and y(t).

In the state-space representation the temporal domain is usually either continuous (e.g., R) or discrete (e.g., Z). Depending on whether the temporal domain is R or Z, the relationship between x and u is often expressed through differential or difference equations, respectively, e.g., in the following form:

    ẋ(t) = f(x(t), u(t), t)        x(k + 1) = f(x(k), u(k), k)        (1)

where t ∈ R and k ∈ Z (the relationship between y and the state and input variables is instead purely algebraic, in the form y(t) = g(x(t), u(t), t)).

Given an initial condition x(0), and having fixed an input function u(t), all functions x(t) (or x(k), if time is discrete) that are solutions of equations (1) represent the possible system behaviors. Notice that suitable constraints on the formalization of the system's dynamics are defined so that the derived behaviors satisfy some natural causality principles. For instance, the form of equations (1) must ensure that the state at time t depends only on the initial state and on the value of the input in the interval [0, t] (the future cannot modify the past).

Also, systems described through state-space equations are usually deterministic (see Section 3.3), since the evolution of the state x(t) is unique for a fixed input signal u(t) (and initial condition x(0)).
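The discrete-time case of equations (1) is straightforward to simulate; the following sketch (using a hypothetical scalar system x(k+1) = x(k)/2 + u(k), chosen here only for illustration) also exhibits determinism: the same initial state and input always yield the same trajectory.

```python
def simulate(f, x0, u, steps):
    """Iterate the difference equation x(k+1) = f(x(k), u(k), k)."""
    xs = [x0]
    for k in range(steps):
        xs.append(f(xs[-1], u(k), k))
    return xs

# Hypothetical system: x(k+1) = x(k)/2 + u(k), with constant input u = 1.
f = lambda x, u, k: x / 2 + u
u = lambda k: 1.0
traj = simulate(f, 0.0, u, 4)
assert traj == [0.0, 1.0, 1.5, 1.75, 1.875]
# Determinism: rerunning with the same x(0) and u gives the same behavior.
assert simulate(f, 0.0, u, 4) == traj
```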
Therefore, dynamical system models typically assume a linear time model (see also the discussion in Section 3.3). Moreover, time is typically treated quantitatively in these models, as the metric structure of the time domains R or Z is exploited.

Notice also that often the first equation of (1) takes a simplified form:

    ẋ = f(x, u)

where the time variable does not occur explicitly, but is implicit in the fact that x and u are functions thereof. The time variable, of course, occurs explicitly in the solution of the equation. This is typical of time-invariant systems, i.e., those systems that behave identically if the same "experiment" is translated along the time axis by a finite constant.

(Another classic way of representing a dynamical system is through its transfer function, which describes the input/output relationship of the system; unlike the state-space representation, the transfer function uses an implicit, rather than explicit, notion of time. Despite its popularity and extensive use in the field of control theory, the transfer function is of little interest for the modeling of computation, so we do not delve any further into its analysis. Notice also that, for a dynamical system described by equations such as (1) to be nondeterministic, the solution of the equations would have to be non-unique; this is usually ruled out by suitable hypotheses on the function f [Kha95].)

A typical example of a continuous-time system is the electric circuit of Figure 2. A less common instance of a discrete-time system is provided in the next example.

Example 4 (Monodimensional Cellular Automata). Let us consider a discrete-time family of dynamical systems called cellular automata, where T = N. More precisely, we consider the following instance, named rule 110 by Wolfram [Wol94]. The state domain is a bi-infinite string of binary values s(k) = … s_{i−2}(k) s_{i−1}(k) s_i(k) s_{i+1}(k) s_{i+2}(k) … ∈ {0, 1}^ω, and the output coincides with the whole state.
The system is closed, since it has no input, and its evolution is entirely determined by its initial state s_i(0) (i ∈ Z). The dynamics is defined by the following equation, which determines the update of the state according to its value at the previous instant (starting from instant 1):

    s_i(k + 1) = 1 if s_{i−1}(k) s_i(k) s_{i+1}(k) ∈ {001, 010, 011, 101, 110}, and 0 otherwise.

In a sense, many other formalisms for time modeling can be seen as specializations of dynamical systems and can be reformulated in terms of state-space equations, including more computationally-oriented formalisms such as finite state automata and Turing machines.

The main limitation of dynamical system models in describing timed systems lies in their being "too detailed" for some purposes. Being intrinsically operational and deterministic in most cases, such models provide complete descriptions of a system's behavior, but are unsuitable for partial specifications.

(The recent literature of control theory also deals with hybrid systems, where discrete and continuous time domains are integrated in the same system formalization [vS00, Ant00, BBM98].)
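The rule 110 dynamics can be simulated directly. The sketch below represents the bi-infinite state by the set of positions currently holding 1 (an encoding chosen here for convenience, under the assumption that only finitely many cells are 1 initially).

```python
# Rule 110: a cell becomes 1 iff its neighborhood s_{i-1} s_i s_{i+1}
# matches one of the patterns 001, 010, 011, 101, 110.
ALIVE = {(0, 0, 1), (0, 1, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)}

def rule110_step(live):
    """One synchronous update; `live` is the set of positions holding 1."""
    candidates = set()
    for i in live:
        candidates.update({i - 1, i, i + 1})
    return {i for i in candidates
            if (int(i - 1 in live), int(i in live), int(i + 1 in live)) in ALIVE}

# Starting from a single 1 at position 0, the live region grows leftward.
state = {0}
state = rule110_step(state)
assert state == {-1, 0}
state = rule110_step(state)
assert state == {-2, -1, 0}
```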
One field in which the modeling of time has always been a crucial issue is (digital) electronic circuit design.

The key modeling issue that must be addressed in describing digital devices is the need for different abstraction levels in the description of the same system. More exactly, we typically have two "views" of a digital component. One is the micro view, which is nearest to a physical description of the component. The other is the macro view, where most lower-level details are abstracted away.

The micro view is a low-level description of a digital component, where the basic physical quantities are modeled explicitly. The system description usually partitions the relevant items into input, output, and state values. All of them represent physical quantities that vary continuously over time. Thus, the time domain is continuous, and so is the state domain. More precisely, since we usually define an initialization condition, the temporal domain is usually bounded on the left (i.e., R≥0). Conversely, the state domain is often, but not always, restricted to a bounded subset [L, U] of the whole domain R (in many electronic circuits, for example, voltages vary from a lower bound of approximately 0 V to an upper bound of approximately 5 V).

Similarly to the case of time-invariant dynamical systems, time is generally implicit in formalisms adopting the micro view.
It is also metric — as is always the case when describing physical quantities directly — and fully asynchronous, in that inputs may change at any instant of time, and outputs and states react to the changes in the inputs at any instant of time.

A simple operational formalism used to describe systems at the micro view is that of logic gates [KB04], which can then be used to represent more complex digital components with memory capabilities, such as flip-flops and sequential machines.

Figure 6 shows an example of the behavior of a sequential machine with two inputs i_1 and i_2, one output o, and two state values m_1 and m_2. The figure highlights the salient features of the modeling of time at the micro (physical) level: continuity of both time and state, and asynchrony (for example, memory signals m_1 and m_2 can change their values at different time instants).

Figure 6: A behavior of a sequential machine.

More precisely, Figure 6 pictures a possible evolution of the state (i.e., the pair ⟨m_1, m_2⟩) and of the output (i.e., signal o) of the sequential machine with respect to its input (i.e., the pair ⟨i_1, i_2⟩). For example, it shows that if all four signals i_1, i_2, m_1, m_2 are "low" (e.g., at time t_0), then the pair ⟨m_1, m_2⟩ remains "low"; however, if both input signals are "high" and the state is "low" (e.g., at time t_1), the memory becomes "high" after a certain delay. The output is also related to the state, in that o is "high" when both m_1 and m_2 are "high" (in fact, o becomes "high" a little after m_1 and m_2 both become "high", as shown at time t_2 in the figure).
Notice how the reaction to a change in the values of the input signals is not instantaneous, but takes a non-null amount of time (a propagation delay), which depends on the propagation delays of the logic gates composing the sequential machine.

As the description above suggests, the micro view of digital circuits, being close to the "physical" representation, is very detailed (e.g., it takes into account the transient state that occurs after a variation in the inputs). However, if one is able to guarantee that the circuit will eventually reach a stable state after a variation of the inputs, and that the duration of the transient state is short with respect to the rate at which input variations occur, it is possible to abstract away the inner workings of the digital circuit and focus instead on the effects of a change in the inputs on the machine state. In addition, it is common practice to represent the "high" and "low" values of signals in an abstract way, usually as the binary values 1 (for "high") and 0 (for "low"). Then, we can associate a sequential machine with a logic function that describes the evolution of only the stable states. Table 1 represents such a logic function, where we associate a letter with every possible stable configuration of the inputs (column header) and of the memory (row header), while the output is simply defined to be 1 if and only if the memory has the stable value 11. A blank cell in the table denotes an undefined ("don't care") behavior for the corresponding pair of current state and current input. Then, the evolution in Figure 6 is compatible with the system specification introduced by Table 1.

         a (00)   b (01)   c (11)   d (10)
A (00):    00       00       01       10
B (01):    10       11       01
C (11):    00       10       11
D (10):    00       10       10       11

Table 1: A tabular description of the behavior of a sequential machine.

Notice that by applying the above abstraction we discretized the state domain and assumed zero-time transitions.
However, in the behavior of Figure 6 the inputs i_1 and i_2 vary too quickly to guarantee that the component realizes the input/output function described by the table above. For example, when both memory signals become 1, the memory does not have time to return to 0 (as stated in Table 1) before the inputs change anew. In addition, the output does not reach a stable state (and become 1) before the state switches to 0. Thus, the abstraction of zero-time transitions is not totally correct here.

As the example suggests, full asynchrony in sequential machines poses several problems, both at the modeling and at the implementation level. A very common way to avoid these problems, thus simplifying the design and implementation of digital components, is to synchronize the evolution of the components through a clock, i.e., a physical signal that forces variations of other signals to occur only at its edges.

The benefits of the introduction of a clock are twofold: on the one hand, a clock rules out "degenerate behaviors", in which signal stability is never reached [KB04]; on the other hand, it permits a higher-level view of the digital circuit, which we call the macro view.

In the macro view, not only are physical quantities represented symbolically, as combinations of binary values; such values, in turn, are observed only once they have reached stability. The time domain becomes discrete, too: inputs are read only at periodic instants of time, while the state and outputs are simultaneously (and instantaneously) updated. Thus, their observation is synchronized with a clock that beats time periodically. Since we disregard any transient state, time is now a discrete domain; in practice, we adopt the naturals N as time domain, whose origin matches the initialization instant of the system.

Typical languages that adopt "macro" time models are those belonging to the large family of abstract state machines [Sip05, HMU00, MG87].
More precisely, the well-known Moore machines [Moo56] and Mealy machines [Mea55] have been used for decades to model the dynamics of digital components. For example, the Moore machine of Figure 7 represents the dynamics of the sequential machine implementing the logic function defined by Table 1. Every transition in the Moore machine corresponds to the elapsing of a clock interval; thus, the model abstracts away all physical details, and focuses on the evolution of the system at clock ticks.

Figure 7: A Moore machine.

We will discuss abstract state machines and their notion of time in more detail in Section 5.1.1.
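In general terms, a Moore machine reads one input symbol per clock tick and emits an output that depends on the current state alone. The sketch below is generic; the tiny two-state machine used to exercise it is hypothetical, and is not the machine of Figure 7 (whose full transition relation is not reproduced here).

```python
class MooreMachine:
    """A Moore machine: the output depends only on the current state,
    and each transition corresponds to one clock interval."""

    def __init__(self, transitions, outputs, initial):
        self.transitions = transitions  # (state, input) -> next state
        self.outputs = outputs          # state -> output
        self.state = initial

    def step(self, symbol):
        """Advance one clock tick; return the output of the new state."""
        self.state = self.transitions[(self.state, symbol)]
        return self.outputs[self.state]

# Hypothetical two-state machine: output 1 after reading c, 0 after a.
m = MooreMachine(
    transitions={('A', 'a'): 'A', ('A', 'c'): 'C',
                 ('C', 'a'): 'A', ('C', 'c'): 'C'},
    outputs={'A': 0, 'C': 1},
    initial='A')
assert [m.step(s) for s in 'acca'] == [0, 1, 1, 0]
```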
As mentioned above, abstract state machines such as the Moore machine of Figure 7 give a representation of digital components that is more "computation-oriented" and abstract than the "physics-oriented" one of logic gates. Traditionally, the software community has adopted a view of the evolution of programs over time that is yet more abstract.

In the most basic view of computation, time is not modeled at all. In fact, a software application implements a function from the inputs to the outputs; therefore, the whole computational process is considered atomic, and time is absent from the functional formalization. In other words, behaviors have no temporal characteristics in this basic view, but simply represent input/output pairs of some computable function. An example of a formal language adopting such a black-box view of computation is that of recursive functions, at the roots of the theory of computation [Odi99, Rog87, BL74].

A refinement of this very abstract way of modeling software keeps track not only of the inputs and outputs of a computation, but also of the whole sequence of discrete steps the computing device undergoes during the computation (i.e., of the algorithm realized by the computation). More precisely, the actual time length of each step is disregarded, and a unit length is uniformly assigned to each of them; this corresponds to choosing the naturals N as time domain. Therefore, time is discrete and bounded on the left: the initial time 0 represents the time at which the computation starts. The time measure represents the number of elementary computational steps performed during the computation. Notice that no form of concurrency is allowed in these computational models, which are strictly sequential, that is, each step is followed uniquely by its successor (if any).

Turing machines [Pap94, Sip05, HMU00, MG87] are a classic formalism to describe computations (i.e., algorithms).
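As an illustration of step counting on such a machine, the following sketch simulates a single-tape machine computing the successor of a little-endian binary numeral (a plain re-implementation for illustration, not a literal transcription of any particular transition function): it flips 1s to 0s while the carry propagates, writes a final 1, and halts, so the number of steps is at most proportional to the input length.

```python
def tm_successor(bits):
    """Successor of a little-endian binary string, computed one head
    movement at a time; returns the result and the number of steps."""
    tape = list(bits) + ['#']  # '#' plays the role of the blank symbol
    head, steps = 0, 0
    while True:
        steps += 1
        if tape[head] == '1':
            tape[head] = '0'   # absorb this digit into the carry
            head += 1          # carry propagates to the right
        else:                  # '0' or blank: deposit the carry and halt
            tape[head] = '1'
            break
    return ''.join(tape).rstrip('#'), steps

# "110" (little-endian) encodes 3; its successor 4 is "001".
assert tm_successor('110') == ('001', 3)
assert tm_successor('0') == ('1', 1)
```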
For example, the Turing machine of26igure 8 describes an algorithm to compute the successor function for a binaryinput (stored with the least significant bit on the left, that is in a “little-endian”fashion). ./., r / , r / , r / , l./., s (cid:3) / , r / , l Figure 8: A Turing machine computing the successor function. (cid:46) denotes theorigin of the Turing machine tape, (cid:3) denotes the blank symbol; a double circlemarks a halting state; in every transition,
I/O,M denotes the symbol read on the tape upon taking the transition (I), the symbol (O) written on the tape in place of I, and the way (M) in which the tape head is moved (l for "left", s for "stay", and r for "right").

For a given Turing machine M computing a function f — or any other abstract machine for which it is assumed that an elementary transition takes a time unit to execute — by counting the number of steps from time 0 until a halting state is reached (if ever) we may define a time complexity function T_M(n), whose argument n represents the size of the data input to f, and whose value is the maximum number of steps required to complete the computation of f when the input data has size n. (As a particular case, if M's computation never reaches a halting state, we conventionally define T_M(n) = ∞.) For example, the computational complexity T_succ(n) of the Turing machine of Figure 8 is proportional to the length n of the input string.

In the software view, the functional behavior of a computation is normally separated from its timed behavior. Indeed, while the functional behavior is studied without taking time into account, the modeling of the timed behavior focuses only on the number of steps required to complete the computation. In other words, functional correctness and time complexity analysis are usually kept separate and adopt different techniques.

In some sense the software view of time models constitutes a further abstraction of the macro hardware view. In particular, the adoption of a discrete time domain reflects the fact that the hardware is what actually performs the computations formalized by means of a software model. Therefore, all the abstract automata that are used for the macro modeling of hardware devices are also commonly considered models of computation.

Temporal Models in Modern Theory and Practice
The growing importance and pervasiveness of computer systems has required the introduction of new, richer, and more expressive temporal models, fostering their evolution from the basic "historical" models of the previous section. This evolution has inevitably modified the boundaries between the traditional ways of modeling time, often making them fuzzy. In particular, this has happened with heterogeneous systems, which require the combination of different abstractions within the same model.

This section shows how the aforementioned models have been refined and adapted in order to meet more specific and advanced specification needs. These needs are particularly prominent in some classes of systems, such as hybrid, critical, and real-time systems [HM96]. As we discussed in Section 1, these categories are independent but with large areas of overlap.
Keywords                                   Dimension
discrete, dense, continuous, granularity   Discrete vs. Dense
metric                                     Ordering vs. Metric
linear, branching                          Linear vs. Branching
implicit, explicit                         Implicit vs. Explicit
Zeno                                       Time Advancement
concurrency, composition, synchronous      Concurrency and Composition
analysis, verification                     Analysis and Verification
Table 2: Keyword references to the "dimensions" of Section 3.

As in the historical overview of Section 4, the main features of the models presented in this section are discussed along the dimensions introduced in Section 3. Such dimensions, however, have different relevance for different formalisms; in some cases a dimension may even be unrelated to a formalism. For this reason we avoid a presentation in the style of a systematic "tabular" cross-reference <Formalism/Dimension>; rather, to help the reader match the features of a formalism with the coordinates of Section 3, we highlight the portions of the text where a certain dimension is specifically discussed by graphically emphasizing (in small caps) some related keywords. The correspondence between keywords and dimensions is shown in Table 2. Also, for the sake of conciseness, we do not repeat features of a derived formalism that are inherited unaffected from the "parent" notation.

The Computer- and System-Centric Views
As a preliminary remark, we further motivate the need to adopt and combine different views of the same system and of its heterogeneous components. Going further — and, in some sense, back — along the path described in Section 4, which moved from the micro to the macro view of hardware, and then to the software view, we now distinguish between a computer-centric and a system-centric view.

In the computer-centric view we consider systems where time is inherently discrete, and which can be described with a (finite-)state model. Moreover, we usually adopt a strictly synchronous model of concurrency, where the global synchrony of the system is given by the ticking of a global clock. Nondeterminism is also often adopted to model concurrent computations at an abstract level. Another typical feature of this view is the focus on the ease of — possibly automated — analysis to validate some properties; in general, it is possible and preferred to restrict and abstract away from many details of the timed behavior in favor of a decidable formal description, amenable to automated verification. An example of the computer-centric view is the design and analysis of a field bus for process control: the attention is focused on discrete signals coming from several sensors and on their proper synchronization; the environment that generates the signals is "hidden" behind the interface provided by the sensors.

Conversely, in the system-centric view, the aim is to model, design, and analyze the whole system; this includes the process to be controlled, the sensors and actuators, the network connecting the various elements, the computing devices, etc. In the system-centric view, depending on the application domain considered, time is sometimes continuous and sometimes discrete. The concurrency model is often asynchronous, and the evolution of components is usually deterministic.
For instance, a controlled chemical process would be described in terms of continuous time and asynchronous deterministic processes; on the other hand, a logistic process — such as the description of a complex storage system — would probably be better described in terms of discrete time. Finally, the system-centric view puts particular emphasis on input/output variables, modular divisions among components, and the resulting "information flow", similarly to some aspects of dynamical systems. Thus, the traditional division between hardware and software is blurred, in favor of the more systemic aspects.

In practice, no model is usually taken to be totally computer-centric or system-centric; more often, some aspects of both views are united within the same model, tailored for some specific needs.

The remainder of this section presents some broad classes of formal languages, in order to discuss what kind of temporal models they introduce, and what kind of systems they are suitable to describe. We first analyze a selected sample of operational formalisms. Then, we discuss descriptive formalisms based on logic, and devote particular attention to some important ones. Finally, we present another kind of descriptive notations, the algebraic formalisms, which are mostly timed versions of successful untimed formal languages and methods.

To discuss some features of the formalisms surveyed we will adopt a simple running example based on a resource allocator. Let us warn the reader, however, that the various formalizations proposed for the running example do not aim at being different specifications of the same system; on the contrary, the semantics may change from case to case, according to which features of the formalism we aim to show in that particular instance.
We consider three broad classes of operational formalisms: synchronous state machines, Petri nets as the most significant exponent of asynchronous machines, and heterogeneous models.
In Section 4 we presented some classes of (finite-)state machines that have a synchronous behavior. As we noticed there, those models are mainly derived from the synchronous "macro" view of hardware digital components, and they are suitable to describe "traditional" sequential computations. The natural evolution of those models, in the direction of increasing complexity and sophistication, considers concurrent and reactive systems. These are, respectively, systems where different components operate in parallel, and open systems whose ongoing interaction with the environment is the main focus, rather than a static input/output relation. The models presented in this section especially tackle these new modeling needs.
Infinite-Word Finite-State Automata.
Perhaps the simplest extension of automata-based formalisms to deal with reactive computations consists in describing a semantics of these machines over infinite (in particular, denumerable) sequences of input/output symbols. This gives rise to finite-state models that are usually called "automata on infinite words" (or ω-words). The various flavors of these automata differ in how they define acceptance conditions (that is, how they distinguish between the "good" and "bad" interactions with the environment) and what kind of semantic models they adopt.

Normally these models are defined in a nondeterministic version, whose transition relation δ ⊆ Σ × S × S (where Σ is the input alphabet, and S is the state space) associates input symbol, current state, and next state. Thus, for the same pair ⟨σ, s⟩ of input symbol and current state, more than one next state n may be in relation with it; that is, the automaton can "choose" any of the next states in the set {n | ⟨σ, s, n⟩ ∈ δ}. Nondeterminism and infinite words require the definition of different, more complex acceptance conditions than in the deterministic, finite-word case. For instance, the Büchi acceptance condition is defined through a set of final states, some of which must be visited infinitely often in at least one of the nondeterministically-chosen runs [Var96]. Other acceptance conditions are defined, for instance, in Rabin automata, Streett automata, parity automata, Muller automata, tree automata, etc. [Tho90].

As an example of the use of infinite-word automata, let us model a simple resource manager.
Before presenting the example, however, we warn the reader that we are not interested in making the resource manager as realistic as possible; rather, as our aim is to show through small-sized models the most relevant features of the formalisms presented, for the sake of brevity we introduce simplifications that a real-world manager would most probably avoid.

The behavior of the resource manager is the following. Users can issue a request for a resource either with high priority (hpr) or with low priority (lpr). Whenever the resource is free and a high-priority request is raised, the resource is immediately granted and it becomes occupied. If it is free and a low-priority request is received, the resource is granted after two time units. Finally, if a high-priority request is received while the resource is granted, it will be served as soon as the resource is released, while a low-priority request will be served two instants after the resource is released. Further requests received while the resource is occupied are ignored.

The above behavior can be modeled by the automaton of Figure 9, where the various request and grant actions define the input alphabet (and noop defines a "waiting" transition); note that the automaton is actually deterministic. We assume that all states are accepting states.

Figure 9: A resource manager modeled by an infinite-word finite-state automaton (states free, occ, pendh, pendl, wg2, wg1).

Let us analyze the infinite-word finite-state automaton models with respect to our coordinates. First of all, these models can be considered as mainly "computer-centric", focusing on simplicity and abstractness. In particular, from the point of view of the computer scientist, they are particularly appealing, as they allow one to reason about time in a highly simplified way. There is no explicit notion of quantitative time.
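The manager's behavior can be sketched as a transition table; the encoding below is our own reconstruction from the description of Figure 9 (the original diagram may admit further inputs in the waiting states), and, since every state is accepting, we can only ever check finite prefixes of the infinite input words.

```python
# Sketch (our own encoding, reconstructed from the description of
# Figure 9): the resource manager as an automaton on infinite words.
# All states are accepting, so a run is "good" as long as every
# transition it needs is defined.

delta = {
    ('free', 'noop'): 'free', ('free', 'hpr'): 'occ',  ('free', 'lpr'): 'wg2',
    ('wg2', 'noop'): 'wg1',   ('wg1', 'noop'): 'occ',  # two-unit delay
    ('occ', 'noop'): 'occ',   ('occ', 'rel'): 'free',
    ('occ', 'hpr'): 'pendh',  ('occ', 'lpr'): 'pendl',
    ('pendh', 'rel'): 'occ',  ('pendh', 'noop'): 'pendh',
    ('pendl', 'rel'): 'wg2',  ('pendl', 'noop'): 'pendl',
}

def run_prefix(word, state='free'):
    """Follow a finite prefix of an input word; return the visited states."""
    states = [state]
    for symbol in word:
        state = delta[(state, symbol)]   # KeyError = prefix not accepted
        states.append(state)
    return states

# A low-priority request is granted two time units (two noop steps) later:
print(run_prefix(['lpr', 'noop', 'noop', 'rel']))
# ['free', 'wg2', 'wg1', 'occ', 'free']
```

Note how the two-unit delay before granting a low-priority request is encoded purely as two extra transitions, anticipating the implicit time metric discussed next.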
As usual, however, a simple metric is implicitly defined by associating a time unit with the execution of a single transition; thus time is inherently discrete. For example, in Figure 9, we implicitly measure the two time units after which a low-priority request is granted by forcing the path from the request lpr to occ to pass through two intermediate states via two "wait" transitions noop.

The simplicity of the time model makes it amenable to automated verification. Various techniques have been developed to analyze and verify automata, the most successful of which is probably model checking [CGP00] (see also Section 5.3).

The nondeterministic versions of these automata are particularly effective for characterizing multiple computation paths. In defining their formal semantics one may exploit a branching time model. There are, however, relevant examples of nondeterministic automata that adopt a linear time model, Büchi automata being the most noticeable instance thereof. In fact, modeling using linear time is usually considered more intuitive for the user; for instance, considering the resource manager described above, the linear runs of the automaton naturally represent the possible sequences of events that take place in the manager. This intuitiveness was often considered to be traded off against amenability to automatic verification, since the first model checking procedures were more efficient with branching logic [CGP00]. Later progress has shown, however, that this trade-off is often fictitious, and linear time models may be endowed with efficient verification procedures [Var01].

When composing multiple automata into a global system we must face the problem of concurrency. The two most common concurrency models used with finite automata are synchronous concurrency and interleaving concurrency.

• In synchronous concurrency, concurrent transitions of different composed automata occur simultaneously; that is, the automata evolve with the same "global" time.
This approach is probably the simpler one, since it presents a global, unique vision of time, and is more akin to the "synchronous nature" of finite-state automata. Synchronous concurrency is pursued in several languages that constitute extensions and specializations of the basic infinite-word finite-state automaton, such as Esterel [BG92] and Statecharts (see below).

• In interleaving concurrency, concurrent transitions are ordered arbitrarily. Any two global orderings of the transitions that differ only in the ordering of concurrent transitions are considered equivalent. Interleaving semantics may be regarded as a way to introduce a weak notion of concurrency in a strictly synchronous system. The fact that interleaving introduces partially ordered transitions, however, weakens the intuitive notion of time as a total order. Also, the natural correspondence between the execution of a single transition and the elapsing of a time unit is lost, and ad hoc rules are required to restate a time metric based on the transition execution sequence. Another problem introduced by adopting an interleaving semantic model lies in the fairness requirement, which prescribes that every concurrent request eventually gets satisfied. Usually, fairness is enforced explicitly a priori in the composition semantics.

The main strength of the infinite-word finite-state automata models, i.e., their simplicity, constitutes also their main limitation. When describing physical systems, adopting a strictly synchronous and discrete view of time might be an obstacle to a "natural" modeling of continuous processes, since discretization may be too strong an abstraction.
In particular, some properties may not hold after discretization, such as periodicity if the duration of the period is some irrational constant, incommensurable with the duration of the step assumed in the discretization. Moreover, it is very inconvenient to represent with this formalism heterogeneous systems whose components run at highly different speeds, so that the time granularity problem arises. In more technical terms, for this type of model it is rather difficult to achieve compositionality [AFH96, AH92b].

Statecharts.
Statecharts are an automata-based formalism invented by David Harel [Har87]. They are a quite popular tool in the software engineering community, and a version thereof is part of the UML standard [UML05, UML04]. In a nutshell, Statecharts are an enrichment of classical finite-state automata that introduces mechanisms for hierarchical abstraction and parallel composition (including synchronization and communication mechanisms). They may be regarded as an attempt to overcome some of the limitations of the bare finite-state automaton model, while retaining its advantages in terms of simplicity and ease of graphical representation. They assume a synchronous view of communication between parallel processes.

Let us use the resource manager running example to illustrate some of Statecharts' features; to this purpose we introduce some modifications to the initial definition. First, after any request has been granted, the resource must be released within 100 time units. To model such metric temporal constraints we associate a timeout with some states, namely those represented with a short squiggle on the boundary (such as hhr or wg in Figure 10). Thus, for instance, the transition that exits state hhr must be taken within 100 time units after hhr has been entered: if no rel event has been generated within 100 time units, the timeout event to is "spontaneously" generated exactly after 100 time units. Conversely, the lower bound of 0 in the same state indicates that the same transition cannot be taken immediately. We use the same mechanism to model the maximum amount of time a low-priority request may have to wait for the resource to become available; in this case, with respect to the previous example, we allow the low-priority request to be granted immediately, nondeterministically. Notice that modeling time constraints using timeouts (and exit events) implies an implicit modeling of a global system time, with respect to which timeouts are computed, just like in finite-state automata.
(Note that there are in fact two transitions from state hhr to state no-hr, one labeled to/rel and one labeled rel; they are represented in Figure 10 with a single arc instead of two separate ones for the sake of readability. The transition labeled to/rel indicates that when the timeout expires (the to event), a rel event is triggered, which is then sensed by the other parts of the Statechart, hence producing other state changes, for example from glr to free.)

Figure 10: A resource manager modeled through a Statechart (three AND-composed components, with states such as free, hr, w1r, glr, no-hr, hhr, no-lr, wlr, wg).

In fact, timeouts can be regarded as an enrichment of the discrete finite-state automaton model with a continuous feature.

The example of Figure 10 exploits Statecharts' so-called "AND (parallel) composition" to represent three logically separable components of the system, divided by dashed lines. The semantics of AND composition is obtained as the Cartesian product construction, and it is usually called synchronous composition; however, Statecharts' graphical representation avoids the need to display all the states of the product construction, improving the readability of a complex specification. In particular, in our example we choose to allow one pending high-priority request to be "enqueued" while the resource is occupied; thus the leftmost component is a finite-state automaton modeling whether the resource is free, serving a high-priority request with no other pending requests (state hr), with one pending request (state w1r), or serving a low-priority request (state glr).

Since in Statecharts all transition events — both input and output — are "broadcast" over the whole system, labeling different transitions with the same name enforces synchronization between them.
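This broadcast-and-react behavior can be sketched as a micro-step fixpoint loop; the following is a deliberately simplified semantics of our own (not Harel's formal definition), with toy component encodings echoing a fragment of Figure 10.

```python
# Simplified sketch (our own, not Harel's formal semantics) of
# Statechart-like broadcast synchronization: each component reacts to
# the current set of events, possibly emitting new ones, and the
# micro-step loop iterates until no component can move (a fixpoint).

def micro_steps(components, states, events, max_rounds=10):
    """components: list of dicts mapping (state, trigger) -> (state', emitted)."""
    events = set(events)
    for _ in range(max_rounds):          # bound guards against zero-time loops
        moved = False
        for i, delta in enumerate(components):
            for trigger in sorted(events):
                if (states[i], trigger) in delta:
                    states[i], emitted = delta[(states[i], trigger)]
                    events |= set(emitted)   # broadcast emitted events
                    moved = True
                    break
        if not moved:
            return states
    raise RuntimeError('possible zero-time (Zeno) cycle')

# Two toy components: the second reacts to an event emitted by the first.
c1 = {('w1r', 'rel'): ('hr', ['ghr'])}   # release re-grants to the waiter
c2 = {('no-hr', 'ghr'): ('hhr', [])}     # pending request gets served
print(micro_steps([c1, c2], ['w1r', 'no-hr'], ['rel']))   # ['hr', 'hhr']
```

The `max_rounds` bound is our own addition: it turns the non-terminating zero-time cascades discussed later in this section into a detectable error rather than an infinite loop.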
For instance, whenever the automaton is in the global state ⟨w1r, hhr, no-lr⟩, a release event rel causes the global state to become ⟨hr, no-hr, no-lr⟩, and then, cascading immediately, ⟨hr, hhr, no-lr⟩, because of the output event ghr triggered by the transition from w1r to hr. Note that we are implicitly assuming, in the example above, that hr and wlr are "internal events", i.e., they do not occur spontaneously in the environment but can only be generated internally for synchronization. (As an aside, the semantics of the AND composition of submachines in Statecharts differs slightly from the classic notion of Cartesian product of finite-state machines; we will not delve any further into such details here, and instead refer the interested reader to [Har87] for a deeper discussion. We also warn the reader that the terminology often varies greatly among different areas; for instance, [CL99] names the Cartesian product composition "completely asynchronous".)

Nondeterminism can arise from three basic features of Statecharts models. First, we have the "usual" nondeterminism of two mutually exclusive transitions with the same input label (such as in Figure 11(a)). Second, states with timeouts are exited nondeterministically within the prescribed bounds (Figure 11(b)). Third, Statechart modules may be composed with "XOR composition", which represents a nondeterministic choice between different modules (Figure 11(c)).

Figure 11: Nondeterminism in Statecharts.

The popularity of Statecharts has produced an array of different analysis tools, mostly automated; see, for instance, [HLN+
90, BDW00, GTBF03].

While overcoming some of the limitations of the basic finite-state automata models, Statecharts' rich syntax often hides subtle semantic problems that should instead be exposed, to avoid inconsistencies and faults in specifications. In fact, over the years several researchers have tried to formally define the most crucial aspects of the temporal semantics of Statecharts. The very fact that different problems were unveiled only incrementally by different contributors is an indication of the difficulty of finding a comprehensive, intuitive, unambiguous semantics for an apparently simple and plain language. We discuss here just a few examples, referring the interested reader to [HPSS87, PS91, von94, HN96] for more details.

The apparently safe "perfect synchrony" assumption — the assumption that all transition events occur simultaneously — and the global "broadcast" availability of all events — which are therefore non-local — generate some subtle difficulties in obtaining a consistent semantics. Consider for instance the example of Figure 10, and assume the system is in the global state ⟨glr, no-hr, lr⟩. If a high-priority request takes place, and thus an hpr event is generated, the system shifts to the state ⟨hr, no-hr, lr⟩ in zero time. Simultaneously, the taken transition triggers the events rel and ghr. If we allow a zero-time residence in states, the former event moves the system to ⟨hr, no-hr, no-lr⟩, representing the low-priority request being forced to release the resource. Still simultaneously, the latter event ghr triggers the transition from no-hr to hhr in the middle sub-automaton.
This is in conformity with our intuitive requirements; however, the same generated rel event also triggers the first sub-automaton to the state free, which is instead against the intuition suggesting that the event is only a message sent to the other parts of the automaton.

If we refine the analysis, we discover that the picture is even more complicated. The middle automaton is in fact in the state hhr, while time has not advanced; thus we still have the rel event available, which should immediately switch the middle automaton back to the state no-hr. Besides being intuitively unacceptable, this is also in conflict with the lower bound on the residence time in hhr. Moreover, in general we may end up having multiple XOR states occupied at the same time. Finally, it is not difficult to conceive scenarios in which the simultaneous occurrence of some transitions causes an infinite sequence of states to be traversed, thus causing a Zeno behavior.

How to properly disentangle such scenarios is not obvious. A partial solution would be, for instance, to avoid instantaneous transitions altogether, attaching a non-zero time to transitions and forcing an ordering between them or, symmetrically, to disallow a zero-time residence in states. This (partially) asynchronous approach is pursued, for instance, in Timed Statecharts [KP92] and in other works [Per93]. Alternatively, other solutions disallow loops of zero-time transitions but accept a finite number of them (for instance, by "consuming" each event spent by a transition [HN96]); the Esterel language, which is a "relative" of Statecharts, follows this approach.
Timed and Hybrid Automata.
As we discussed above, the strictly discrete and synchronous view of finite-state automata may be unsuitable to model adequately and compositionally processes that evolve over a dense domain. Statecharts try to overcome these problems by adding some continuous features, namely timeout states. Timed and hybrid automata push this idea further, constituting models, still based on finite-state automata, that can manage continuous variables. Let us first discuss timed automata.
Timed automata enrich the basic finite-state automata with real-valued clock variables. Although the name "timed automata" could be used generically to denote automata formalisms where a description of time has been added (e.g., [LV96, AH96, Arc00]), here we specifically refer to the model first proposed by Alur and Dill [AD94], and to its subsequent enrichments and variations. We refer the reader to Alur and Dill's original paper [AD94] and to [BY04] for a formal, detailed presentation.

In timed automata, the total state is composed of two parts: a finite component (corresponding to the state of a finite automaton, often called location), and a continuous one represented by a finite number of nonnegative real values assigned to variables called clocks. The resulting system therefore has an infinite state space, since the clock components take values in the infinite set R≥0. The evolution of the system is made of alternating phases of instantaneous synchronous discrete "jumps" and continuous clock increases. More precisely, whenever a timed automaton sits in some discrete state, each clock variable x increases as time elapses, that is, it evolves according to the dynamic equation ẋ = 1, thus effectively measuring time. External input events cause the discrete state to switch; during the transition some clock variables may be reset to zero instantaneously. Moreover, both discrete states and transitions may have attached constraints on clocks; each constraint must be satisfied while sitting in the discrete state, and when taking the transition, respectively.

To illustrate this notation, let us model the resource manager example through a timed automaton. We modify the system behavior of the Statechart example by disallowing high-priority requests to preempt low-priority ones; moreover, let us assume that one low-priority request can be "enqueued" waiting for the resource to become free.
The resulting timed automaton — using a single clock w — is pictured in Figure 12.

Figure 12: A resource manager modeled through a timed automaton (locations free, occ, pendh, pendl, wg; transitions labeled with input events, clock resets w := 0, and clock constraints).

The semantics of a timed automaton is usually formally defined by means of a timed transition system. The "natural" semantics is the timed semantics, which exactly defines the possible runs of an automaton over sequences of input symbols. More precisely, each symbol in the input sequence is paired with a timestamp that indicates the absolute time at which the symbol is received. Then, a run is defined by a sequence of total states (each one a pair ⟨location, clock value⟩) of the automaton, which evolve according to the timestamped input symbols, in such a way that, for every pair of consecutive states ⟨l_i, c_i⟩ --(in, ts)--> ⟨l_{i+1}, c_{i+1}⟩ in the run, the constraints on the locations and the transition are met. For instance, the automaton of Figure 12 may go through the following run:

⟨free, 0⟩ --(hpr, 4.7)--> ⟨occ, 0⟩ --(lpr, 53.9)--> ⟨pendl, 49.2⟩ --(rel, ...)--> ⟨wg, 0⟩ --(ε, ...)--> ⟨occ, ...⟩ ⋯

In the run above, location occ is entered at time 4.7 and, since the corresponding transition resets clock w, the new state becomes ⟨occ, 0⟩; then, at time 53.9 (when clock w has reached value 49.2), location occ is exited and pendl is entered (this time, clock w is not reset), which satisfies the constraint attached to occ, and so on.

Timed semantics introduces a metric treatment of time through timestamps.
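A checker for this style of run can be sketched as follows; the encoding is our own simplification, and the guard constant 100 is illustrative only (it echoes the 100-time-unit release bound of the Statechart example, and is not read off Figure 12).

```python
# Sketch of a timed-word checker (our own simplified encoding; the
# guard constant 100 is an assumption for illustration, echoing the
# 100-time-unit release bound of the Statechart example).

AUTOMATON = {
    # (location, event): (target location, reset clock?, guard on clock w)
    ('free', 'hpr'):  ('occ',   True,  lambda w: True),
    ('occ',  'lpr'):  ('pendl', False, lambda w: w < 100),
    ('pendl', 'rel'): ('wg',    True,  lambda w: w < 100),
    ('wg',   'eps'):  ('occ',   False, lambda w: True),
}

def accepts_prefix(timed_word, loc='free'):
    """Check a finite prefix of a timed word: a list of (event, timestamp)."""
    now = reset_time = 0.0
    for event, ts in timed_word:
        if ts < now:                       # timestamps must be monotonic
            return False
        now = ts
        w = now - reset_time               # current value of clock w
        step = AUTOMATON.get((loc, event))
        if step is None or not step[2](w):
            return False                   # no transition, or guard violated
        loc = step[0]
        if step[1]:
            reset_time = now               # reset w := 0
    return True

word = [('hpr', 4.7), ('lpr', 53.9), ('rel', 60.0), ('eps', 61.5)]
print(accepts_prefix(word))   # True: matches the run sketched in the text
```

Note how a single clock value is recovered from the absolute timestamps as `now - reset_time`, mirroring the two-fold (discrete positions plus metric timestamps) view discussed next.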
Notice that, in some sense, the use of timestamps introduces "two different times": a discrete one, given by the position i in the run/input sequence, which defines a total ordering on events, and a continuous and metric one, recorded by the timestamps and controlled through the clocks. (The original Alur and Dill formalization [AD94] permitted constraints only on transitions; however, adding constraints to locations as well is a standard extension that does not impact the salient features of the model — expressiveness, in particular [BY04].) This approach, though simple in principle, somewhat sacrifices naturalness, since the complete modeling of time is no longer represented as a unique flow but is twofold.

Other, different semantics of timed automata have been introduced and analyzed in the literature. Subtle differences often arise depending on which semantics is adopted; for instance, the interval-based semantics interprets timed automata over piecewise-constant functions of time, and the change of location is triggered by discontinuities in the input [AFH96, ACM02, Asa04].

Let us consider a few more features of time modeling for timed automata.

• While timed automata are in general nondeterministic, their semantics is usually defined through linear time models, such as the one outlined above based on run sequences. Moreover, deterministic timed automata are strictly less expressive than nondeterministic ones, but also more amenable to automated verification, so they may be preferred in some practical cases.

• Absolute time is implicitly assumed in the model and becomes apparent in the timestamps associated with the input symbols. The relative time measured by clocks, however, is explicitly measured and set.

• Timed automata may exhibit
Zeno behaviors, when the distances between the times at which transitions in a sequence are taken become increasingly smaller, accumulating to zero. For instance, in the example of Figure 12, the two transitions hpr and rel may be taken at times 1, 1 + 2^{-1}, 1 + 2^{-1} + 2^{-2}, ..., Σ_{k=0..n} 2^{-k}, ..., so that the absolute time would accumulate at Σ_{k=0..∞} 2^{-k} = 2. Usually, these Zeno behaviors are ruled out a priori in defining the semantics of timed automata, by requiring that timestamped sequences are acceptable only when the timestamp values are unbounded. Moreover, in Alur and Dill's formulation [AD94] timed words have strictly monotonic timestamps, which implies that some time (however small) must elapse between two consecutive transitions; other semantics have relaxed this requirement by allowing weakly monotonic timestamps [BY04], thus permitting sequences of zero-time transitions.

Hybrid automata [ACHH93, NOSY93, Hen96] are a generalization of timed automata where the dense-valued variables — called "clocks" in timed automata — are permitted to evolve according to more complicated timed behaviors. Namely, in hybrid automata one associates with each discrete state a set of possible activities, which are smooth functions (i.e., functions that are continuous together with all of their derivatives) from time to the dense domain of the variables, and a set of invariants, which are sets of allowed values for the variables. Activities specify possible variable behaviors, thus generalizing the simple dynamics of clock variables in timed automata. More explicitly, whenever a hybrid automaton sits in some discrete location, its variables evolve over time according to one activity, nondeterministically chosen among those associated with that state. However, the evolution can continue only as long as the variables keep their values within the invariant set of the state.
Then, upon reading input symbols, the automaton instantaneously switches its discrete state, possibly resetting some variables according to the additional constraints attached to the taken transitions, similarly to timed automata.

Although in this general definition the evolution of the dense-valued variables can be represented by any function such that all its derivatives are continuous, in practice more constrained (and simply definable) subsets are usually considered. A common choice is to define the activities by giving a set of bounds on the first-order derivative, with respect to time, of the variables. For a variable y, the constraint 0.5 < ẏ < π is an example of a class of such activities (see Figure 13 for a visual representation).

Figure 13: Some behaviors compatible with the constraint 0.5 < ẏ < π.

In both timed and hybrid automata, one typically defines a composition semantics where concurrent automata evolve in parallel, but synchronize on transitions in response to input symbols, similarly to traditional automata and Statecharts.

The development of timed and hybrid automata was also motivated by the desire to extend and generalize the powerful and successful techniques of automatic verification (and model checking in particular), based on the combination of infinite-word finite-state automata and temporal logic (see Section 5.3), to the metric treatment of time.
However, the presence of real-valued variables renders the verification problem much more difficult and, often, undecidable. Thus, with respect to the general model, restrictions are introduced that make the models more tractable and amenable to verification — usually at the price of sacrificing some expressiveness.

In a nutshell, the verification problem is generally tackled by producing a finite abstraction of a timed/hybrid automaton, where all the relevant behaviors of the modeled system are captured by an equivalent, but finite, model, which is therefore exhaustively analyzable by model-checking techniques. Such procedures usually assume that all the numeric constraints on clocks and variables are expressed by rational numbers; this permits the partitioning of the space of all possible behaviors of the variables into a finite set of regions that describe equivalent behaviors, preserving verification properties such as reachability and emptiness. For a precise description of these techniques see, e.g., [AM04, ACH+95, HNSY94, HKPV98].

These analysis techniques have been implemented in some interesting tools, such as UPPAAL [LPY97], Kronos [Yov97], Cospan [AK95], and IF [BGO+04].

Timed Transition Models.
Ostroff’s
Timed Transition Models (TTM) [Ost90] are another formalism based on enriching automata with time variables; they are a real-time metric extension of Manna and Pnueli’s fair transition systems [MP92].

In TTMs, time is modeled explicitly by means of a clock variable t. t takes values in a discrete time domain, and is updated explicitly and synchronously by the occurrence of a special tick transition. The clock variable, like any variable in TTMs, is global and thus shared by all transitions. All transitions other than tick do not change time but only update the other components of the state; therefore it is possible to have several different states associated with the same time instant. Transitions are usually annotated with lower and upper bounds l, u; this prescribes that the transition is taken at least l, and no more than u, clock ticks (i.e., time units) after the transition has become enabled.

In practice, it is assumed that every TTM system includes a global clock subsystem, such as the one pictured in Figure 14. Notice that this subsystem allows the special tick transition to occur at any time, making time advance one step. The tick transition is a priori assumed to be fairly scheduled, that is, it must occur infinitely often, to prevent Zeno behaviors where time stops.

Figure 14: A Timed Transition Model for the clock, with the single transition tick: t := t + 1.

We give a few more details of TTMs in Section 5.3 (where a TTM resource manager specification is also given) when discussing dual language approaches.
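A minimal executable rendering of these rules (our own sketch, with illustrative names) can clarify how the shared clock t, the tick transition, and the [l, u] bounds interact:

```python
def run_ttm(schedule, lower, upper):
    """Run a tiny TTM with one non-tick transition 'act', annotated with
    bounds [lower, upper]; 'schedule' is the chosen interleaving of
    transitions. Only 'tick' advances the global clock t."""
    t = 0            # global clock variable, shared by all transitions
    enabled_at = 0   # clock value at which 'act' last became enabled
    fired = []
    for tr in schedule:
        if tr == 'tick':
            # Tick cannot occur if it would push 'act' past its upper bound.
            assert t - enabled_at < upper, "'act' is overdue"
            t += 1
        else:  # 'act' updates the state but takes no time
            assert lower <= t - enabled_at <= upper, "bounds violated"
            fired.append(t)      # several events may share one time instant
            enabled_at = t       # 'act' is immediately re-enabled here
    return t, fired

# 'act' must fire at least 2 and at most 4 ticks after becoming enabled.
t, fired = run_ttm(['tick', 'tick', 'act', 'tick'], 2, 4)
assert (t, fired) == (3, [2])
```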
5.1.2 Petri Nets

This section introduces Petri nets as one of the most popular examples of asynchronous abstract machines. Petri nets owe their name to their inventor, Carl Adam Petri [Pet63]. Since their introduction they have become rather popular both in the academic and, to some extent, in the industrial world, as a fairly intuitive graphical tool to model concurrent systems. For instance, they inspired the transition diagrams adopted in the UML standard [UML05, UML04, EPLF03]. There are a few slightly different definitions of such nets and of their semantics. Among them, one of the most widely adopted is the following, which we present informally; the reader is referred to the literature [Pet81, Rei85] for a comprehensive treatment. A
Petri net consists of a set of places and a set of transitions. Places store tokens and pass them to transitions. A transition is enabled whenever all of its incoming places hold at least one token. Whenever a transition is enabled, a firing can occur; this happens nondeterministically. As a consequence of a firing, the enabling tokens are removed from the incoming places, and tokens are added to the outgoing places the transition is connected to. Thus, for any possible combination of nondeterministic choices, we have a firing sequence.

Let us consider again the example of the resource manager, using a Petri net model. We introduce the following modifications with respect to the previous examples. First, since we are now considering untimed Petri nets, we do not introduce any metric time constraint. Second, we disallow low-priority requests while the resource is occupied, or high-priority requests while there is a pending low-priority request. Conversely, we introduce a mechanism to “count” the number of consecutive high-priority requests that occur while the resource is occupied. Then, we make sure that all of them are served (consecutively) before the resource becomes free again. This behavior is modeled by the Petri net in Figure 15, where the places are denoted by the circles free, occ, pendh, wr, and wg, and the thick lines denote transitions. Notice that we allow an unbounded number of tokens in each place (actually, the only place where tokens can accumulate is pendh, where each token represents a pending high-priority request). Finally, we have also chosen to introduce an inhibiting arc, from place pendh to transition rel, denoted by a small circle in place of an arrowhead: this means that the corresponding transition is enabled if and only if place pendh stores no tokens.
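The firing rule just described — including the inhibiting arc — can be sketched in a few lines of Python (our own minimal encoding; the arc structure below is a simplified, hypothetical variant of the resource manager net, not a faithful transcription of Figure 15):

```python
# transition name -> (input places, output places, inhibitor places)
NET = {
    'lpr': (['free'], ['wg'],           []),
    'slr': (['wg'],   ['occ'],          []),
    'hpr': (['occ'],  ['occ', 'pendh'], []),
    'rel': (['occ'],  ['free'],         ['pendh']),
}

def enabled(marking, name):
    ins, _, inhib = NET[name]
    return (all(marking.get(p, 0) >= 1 for p in ins)         # tokens available
            and all(marking.get(p, 0) == 0 for p in inhib))  # inhibitors empty

def fire(marking, name):
    assert enabled(marking, name)
    ins, outs, _ = NET[name]
    m = dict(marking)
    for p in ins:
        m[p] -= 1                  # consume one enabling token per input arc
    for p in outs:
        m[p] = m.get(p, 0) + 1     # produce one token per output arc
    return m

m = {'free': 1}
for name in ['lpr', 'slr', 'hpr']:     # one possible firing sequence
    m = fire(m, name)
assert m['pendh'] == 1                 # one high-priority request pending
assert not enabled(m, 'rel')           # rel inhibited until pendh is empty
```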
Inhibiting arcs are a non-standard feature of Petri nets, often added in the literature to increase the model’s expressive power.

Figure 15: A resource manager modeled through a Petri net.

According to our taxonomy, Petri nets, as defined above, can be classified as follows:

• There is no explicit notion of time. However, a time model can be implicitly associated with the semantics of the net.

• There are at least two major approaches to formalizing the semantics of Petri nets.

– The simpler one is based on interleaving semantics. According to this semantics, the behaviors of a net are just its firing sequences. Interleaving semantics, however, introduces a total ordering on the events modeled by the firing of net transitions, which fails to capture the asynchronous nature of the model. For instance, in the net of Figure 15 the two sequences ⟨hpr, hpr, hpr, rel, hpr, rel, rel, rel⟩ and ⟨hpr, hpr, hpr, hpr, rel, rel, rel, rel⟩ both belong to the set of the net’s possible behaviors; however, they both imply an order between the firings of transitions hpr and rel, whereas the graphical structure of the net emphasizes that the two events can occur asynchronously (or simultaneously).

– For this reason, a true concurrency (i.e., fully asynchronous) approach is often preferred to describe the semantics of Petri nets. In a true concurrency approach it is natural to see the time model as a partial order, instead of a total order, of the events modeled by transition firings. Intuitively, in a true concurrency modeling the two sequences above can be “collapsed” into ⟨hpr, hpr, hpr, {hpr, rel}, rel, rel, rel⟩, where the braces { } denote the fact that the included items can be “shuffled” in any order.

• Petri nets are a nondeterministic operational model.
For instance, still in the net of Figure 15, whenever place occ holds some tokens, both transitions hpr and rel are enabled, but they are in conflict, so that only one of them can actually fire. Such nondeterminism could be formalized by exploiting a branching-time model.

• In traditional Petri nets the time model has no metrics, so that it should be seen only as a (possibly partial) order.

• We also remark that Petri nets are usually “less compositional” than other operational formalisms, and synchronous automata in particular. While notions of composition of Petri nets have been introduced in the literature, they are often less natural and more complicated than, for instance, the composition of synchronous automata.

(Saying that the time model has no metrics holds unless one adopts the convention of associating one time unit with the firing of a single transition, as is often assumed in other — synchronous — operational models such as finite state automata. Such an assumption, however, would contrast sharply with the asynchronous original nature of the model.)

A classical way of adding metric time to Petri nets associates with each transition a pair of time bounds, a minimum and a maximum time [l, u] (where u may be ∞). Figure 16 shows how the net of Figure 15 can be augmented in such a way. The time bounds that have been introduced refine the specification of the resource manager by prescribing that each use of the resource must take no longer than 100 contiguous (i.e., since the last request occurred) time units, and that a low priority request is served within 2 time units.

Figure 16: A resource manager modeled through a timed Petri net.

The fairly natural intuition behind this notation is that, from the time when a transition becomes enabled (i.e., all its input places have been filled with at least one token), the transition can fire — nondeterministically — at any time included in the specified interval, unless it is disabled by the firing of a conflicting transition. For instance, place wg becomes occupied after a low priority request is issued, thus enabling transition slr.
The latter can fire at any time between 0 and 2 time units after it has become enabled, thus expressing the fact that the request is served within 2 time units. This intuitive notation, however, hides a few semantic subtleties, such as the following.

• Suppose that the whole time interval has elapsed since the time when a transition became enabled: is the transition forced to fire at this point, or not? In the negative case, it will never fire in the future and the tokens in its input places will be wasted (at least for that firing). There are arguments in favor of both choices. Normally — including in the example of Figure 16 — the former choice is assumed; it is often called strong time semantics (STS). There are, however, also cases where the latter choice is preferred; it is called weak time semantics (WTS), and is considered more consistent with traditional Petri net semantics, where a transition is never forced to fire.

• If the minimum time associated with a transition is 0, then the transition can fire immediately once enabled, and we have a case of zero-time transition (more precisely, we call this circumstance a zero-time firing of the transition). As we pointed out in other cases, zero-time firing can be a useful abstraction whenever the duration of the event modeled by the firing can be neglected with respect to the other activities of the whole process. On the other hand, zero-time firing can produce some intricate situations, since two subsequent transitions (e.g., hpr and rel in Figure 16) could fire simultaneously. This can produce Zeno behaviors if the net contains loops of transitions with minimum time 0. For this reason, “zero-time loops” are often forbidden in the construction of timed Petri nets.

Once the above semantic ambiguities have been clarified, the behavior of timed Petri nets can be formalized through two main approaches.

• A time stamp can be attached to each token when it is produced by the firing of some transition in an output place.
For instance, with reference to Figure 16, we might have the sequence of transitions ⟨hpr(2), hpr(3), hpr(4), rel(5), hpr(6), ...⟩ (that is, hpr fires at time 2 producing a token with time stamp 2 in occ; this is consumed at time 3 by the firing of hpr, which also produces one token in pendh and one in wr, both time-stamped 3; etc.). In this way time is explicitly modeled in a metric way — whether discrete or continuous — as a further variable describing the system state (actually, as many further variables, one for each produced token).

(In this regard, notice that the timed automata of Section 5.1.1 could be considered to have a weak time semantics. In fact, transitions in timed automata are not forced to be taken when the upper limit of some constraint is met; rather, all that their semantics prescribes is that when (if) a transition is taken by a timed automaton, its corresponding constraint — and those of the source and target locations — must be met. Also notice that, normally, the firing of a transition is considered instantaneous. This assumption does not affect generality, since an activity with a non-null duration can easily be modeled as a pair of transitions with a place between them: the first transition models the beginning of the activity and the second one models its end.)

As remarked in Section 5.1.1, this approach actually introduces two different time models in the formalism: the time implicitly subsumed by the firing sequence and the time modeled by the time stamps attached to tokens. Of course, some restrictions should be applied to guarantee consistency between the two orderings: for instance, the same succession of firings described above could induce the timed sequence ⟨hpr(2), hpr(3), hpr(4), hpr(6), rel(5), ...⟩, which should however be excluded from the possible behaviors.

• The net could be described as a dynamical system, as in the traditional approach described in Section 4.1.
The system’s state would be the net marking, whose evolution should be formalized as a function of time. To pursue this approach, however, a few technical difficulties must be overcome:

– First, tokens cannot be formalized as entities with no identity, as happens with traditional untimed Petri nets. Here too, some kind of time stamp may be necessary. Consider, for instance, the net fragment of Figure 17, and suppose that one token is produced into place P at time 3 by transition t_i1 and another token is produced by t_i2 at time 4; then, according to the normal interpretation of such Petri nets (but different semantic formalizations could also be given, depending on the phenomenon that one wants to model), the output transition t_o should fire once at time 6 = 3 + 3 and a second time at time 7 = 4 + 3. Thus, a state description that simply asserts that at time 4 there are two tokens in P would not be sufficient to fully describe the future evolution of the net.

Figure 17: A net fragment where transitions t_i1 and t_i2 feed place P, whose output transition t_o has time bounds [3, 3].

– If zero-time firings are admitted, strictly speaking, the system’s state cannot be formalized as a function of the independent variable “time”. Consider, for example, the case in which, in the net of Figure 16, at time t both transitions lpr and slr fire (which can happen, since slr admits zero-time firing); in this case, at time t both a state in which wg is marked and a state in which occ is marked — and wg is not marked anymore — would hold. In [FMM94] this problem has been solved by forbidding “zero-time loops” and by adopting the convention that, in case of a “race” of zero-time firings (which is always finite), only the places at the “end of the race” are considered as marked, whereas tokens flow instantaneously through the other places without marking them. In [GMM99] a more general approach is proposed, where zero-time firings are considered as an abstraction of a non-null but infinitesimal firing time.
In this way, it has been shown that the mathematical formalization and analysis of the net behavior become simpler and — perhaps — more elegant.

Timed Petri nets have also been the object of a formalization through the dual language approach (see Section 5.3).

As for other formalisms of comparable expressive power, Petri nets suffer intrinsic limitations in the techniques for (semi-)automatic analysis and verification. In fact, let us consider the reachability problem, i.e., the problem of deciding whether a given marking can be reached from another given marking. This is the main analysis problem for Petri nets, since most other properties can be reduced to some formulation of this basic problem [Pet81]. For normal, untimed Petri nets with no inhibitor arcs, the reachability problem has been shown to be decidable, though intractable; if Petri nets are augmented with some metric time model and/or inhibitor arcs, then they reach the expressive power of Turing machines and all problems of practical interest become undecidable. Even building interpreters for Petri nets to analyze their properties through simulation faces problems of combinatorial explosion, due to the intrinsic nondeterminism of the model.

Nevertheless, interesting tools for the analysis of both untimed and timed Petri nets are available. Among them we mention [BD91], which provides an algorithm for the reachability problem of timed Petri nets assuming the set of rational numbers as the time domain. This work has pioneered further developments. For a comprehensive survey of tools based on Petri nets see [TGI].

Before closing this section, let us also mention the Abstract State Machines (ASM) formalism [BS03], whose generality subsumes most types of operational formalisms, whether synchronous or asynchronous. However, ASMs had not received, to the best of our knowledge, much attention in the realm of real-time computing until recently, when the Timed Abstract State Machine notation [OL07b] and its tools [OL07a] were developed.
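Returning to the time-stamped-token formalization discussed above, the need for token identities can be sketched as follows (our own toy code): each token carries its production time, and a transition with bounds [l, u] that must fire under strong time semantics does so, at the latest, u time units after each enabling token arrives.

```python
def latest_firings(token_stamps, upper):
    """Each token enables the output transition at its own timestamp;
    under strong time semantics the transition must fire no later than
    'upper' time units after each enabling timestamp."""
    return [ts + upper for ts in sorted(token_stamps)]

# Tokens are produced into place P at times 3 and 4; with bounds [3, 3]
# the output transition fires once at 6 = 3 + 3 and again at 7 = 4 + 3.
assert latest_firings([4, 3], 3) == [6, 7]
# A marking that only counts tokens ("two tokens in P at time 4") cannot
# distinguish this from two tokens both produced at time 4:
assert latest_firings([4, 4], 3) == [7, 7]
```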
(Of course, interesting particular cases are always possible, e.g., that of bounded nets, where the net is such that, during its behavior, the number of tokens in every place never exceeds a given bound.)

5.2 Descriptive Formalisms

Let us now consider descriptive (or declarative) formalisms. In descriptive formalisms a system is formalized by declaring the fundamental properties of its behavior. Most often, this is done by means of a language based on mathematical logic; less often, algebraic formalisms (e.g., process algebras) are exploited. As we saw in Section 2, descriptive notations can be used alone or in combination with operational ones, in a dual language approach. In the former case, both the requirements and the system specification are expressed within the same formalism; therefore, verification consists of proving that the axioms (often expressed in some logic language) that constitute the system specification imply the formulas that describe the requirements. In the latter case, verification is usually based on some ad hoc techniques, whose features may vary significantly depending on the adopted combination of descriptive and operational notations. We treat dual language approaches in Section 5.3.

When considering the description of the timed behavior of a system through a logic formalism, it is natural to refer to temporal logics. A distinction should be made here. Strictly speaking, temporal logics are a particular family of modal logics [Kri63, RU71] possessing specific operators — called modalities — apt to express temporal relationships about time-dependent propositions. The modalities usually make the treatment of time-related information quite intuitive, as they avoid explicit references to absolute time values and mirror the way the human mind intuitively reasons about time; indeed, temporal logics were initially introduced by philosophers [Kam68].
It was Pnueli who first observed [Pnu77] that they could be effectively used to reason about temporal properties of programs as well. Some temporal logics are discussed in Section 5.2.1 below.

In the computer science community, however, the term “temporal logic” has been used in a broader sense, encompassing all logic-based formalisms that possess some mechanism to express temporal properties and to reason about time, even when they introduce an explicit reference to a dedicated variable representing the current value of time, or to some sort of clock, and hence adopt a style of description that is different in nature from the original temporal logic derived from modal logic. Many of these languages have been used quite successfully for modeling time-related features: some of them are described in Section 5.2.2 below.

We emphasize that there is a wide variety of different styles and flavors when it comes to temporal logics. As usual, we do not aim to be exhaustive in the presentation of temporal logics (we refer the reader to other papers specifically on temporal logics, e.g., [Eme90, AH93, AH92b, Ost92, Hen98, BMN00, FPR08b]), but to highlight some significant approaches to the problem of modeling time in logic.

Finally, a different approach to descriptive modeling of systems, based on the calculational aspects of specifications, is the algebraic one. We discuss algebraic formalisms in Section 5.2.3.

5.2.1 Temporal Logics
In this section we deal with temporal logics with essentially implicit time. We focus our discussion on a few key issues, namely: the distinction between linear-time and branching-time logics; the adoption of a discrete or non-discrete time model; the use of a metric on time as a means to express temporal properties in a quantitatively precise way; the choice of using solely temporal operators that refer to the future versus introducing also past-tense operators; and the assumption of time points or time intervals as the fundamental time entities. In our discussion we will go from simple to richer notations, occasionally combining the treatment of some of the above-mentioned issues. Finally, some verification issues concerning temporal logics will be discussed when presenting dual language approaches in Section 5.3.
Linear-Time Temporal Logic.
As a first, simplest example of temporal logic, let us consider propositional Linear-Time Temporal Logic (LTL) with discrete time. In LTL, formulas are composed from the atomic propositions with the usual Boolean connectives and the temporal connectives X (next, also denoted by the symbol ◯), F (eventually in the future, also ♦), G (globally — i.e., always — in the future, also □), and U (until). These have a rather natural and intuitive interpretation, as the formulas of LTL are interpreted over linear sequences of states: the formula X p means that proposition p holds at the state that immediately follows the one where the formula is interpreted; F p means that p will hold at some state following the current one; G p that p will hold at all future states; p U q means that there is some successive state at which proposition q will hold, and that p holds in all the states between the current one and that one.

Notice that the presence of the “next” operator X implies that the logic refers to a discrete temporal domain: by definition, there would be no “next state” if the interpretation structure domain were not discrete.
On the other hand, depriving LTL of the next operator would “weaken” the logic to a pure ordering without any metrics (see below).

To illustrate LTL’s main features, let us consider again the resource manager introduced in the previous sections: the following formula specifies that, if a low priority request is issued at a time when the resource is free, then it will be granted at the second successive state in the sequence.

G(free ∧ lpr ⇒ XX occ)

LTL is well suited to specifying qualitative time relations, for instance ordering among events: the following formula describes a possible assumption about incoming resource requests, i.e., that no two consecutive high priority requests may occur without a release of the resource between them (literally, the formula reads as: if a high priority request is issued, then the resource must eventually be released, and no other similar request can take place until the release occurs).

G(hpr ⇒ X(¬hpr U rel))

Though LTL is not expressly equipped with a metric on time, one might use the next operator X for this purpose: for instance, X³p (i.e., XXX p) would mean that proposition p holds 3 time units in the future. The use of X^k to denote the time instant k time units in the future is only possible, however, under the condition that there is a one-to-one correspondence between the states of the sequence over which the formulas are interpreted and the time points of the temporal domain. Designers of time-critical systems should be aware that this is not necessarily the case: there are linear discrete-time temporal logics where two consecutive states may well refer to the same time instant, whereas the first following state associated with the successive time instant is far away in the state sequence [Lam94, MP92, Ost89].
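These operators are easy to interpret mechanically. The following Python sketch (our own; note that LTL is standardly interpreted over infinite sequences, so evaluation over a finite trace is only an approximation) evaluates formulas over finite state sequences and checks the first example above:

```python
def holds(formula, trace, i=0):
    """Evaluate an LTL formula over a finite state sequence 'trace'
    (each state is a set of atomic propositions) at position i."""
    op = formula[0]
    if op == 'atom':
        return formula[1] in trace[i]
    if op == 'not':
        return not holds(formula[1], trace, i)
    if op == 'and':
        return holds(formula[1], trace, i) and holds(formula[2], trace, i)
    if op == 'implies':
        return (not holds(formula[1], trace, i)) or holds(formula[2], trace, i)
    if op == 'X':   # next
        return i + 1 < len(trace) and holds(formula[1], trace, i + 1)
    if op == 'F':   # eventually in the future
        return any(holds(formula[1], trace, j) for j in range(i, len(trace)))
    if op == 'G':   # globally in the future
        return all(holds(formula[1], trace, j) for j in range(i, len(trace)))
    if op == 'U':   # until
        return any(holds(formula[2], trace, j)
                   and all(holds(formula[1], trace, k) for k in range(i, j))
                   for j in range(i, len(trace)))
    raise ValueError(op)

# G(free & lpr -> XX occ): a low-priority request on a free resource
# is granted at the second successive state.
f = ('G', ('implies', ('and', ('atom', 'free'), ('atom', 'lpr')),
           ('X', ('X', ('atom', 'occ')))))
trace = [{'free', 'lpr'}, {'wg'}, {'occ'}, {'occ'}]
assert holds(f, trace)
```

Each position of the trace is a state, which need not correspond one-to-one to time instants.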
We already encountered this criticalissue in the context of finite state automata and the fairness problem (seeSection 5.1.1) and timed Petri nets when zero-time transitions are allowed (seeSection 5.1.2) and will encounter it again in the dual language approach (Section5.3). Metric Temporal Logics.
Several variations and extensions of linear-time temporal logic have been defined to endow it with a metric on time, and hence make it suitable to describe strict real-time systems. Among them, we mention Metric Temporal Logic (MTL) [Koy90] and TRIO [GMM90, MMG92]. They are commonly interpreted both over discrete and over dense (and continuous) time domains.

MTL extends LTL by adding to its operators a quantitative time parameter, possibly qualified with a relational symbol to imply an upper bound for a value that typically represents a distance between time instants or the length of some time interval. For instance, the following simple MTL formula specifies bounded response time: there is a time distance d such that an event p is always followed by an event q with a delay of at most d time units (notice that MTL is a first-order logic):

∃d : G(p ⇒ F≤d q)    (2)

TRIO, instead, is based on a single basic modal operator, called Dist: the formula Dist(F, d) specifies that formula F holds at a time instant whose distance from the current one is exactly d time units — in the future, if d > 0, in the past, if d < 0, or even at the present time if d = 0. All the operators of LTL, their quantitative-time counterparts, and also other operators not found in traditional temporal logic are defined in TRIO by means of first-order quantification over the time parameter of the basic operator Dist. We include in Table 3 a list of some of the most significant ones (especially those used in the following).

Operator       | Definition                         | Description
Futr(F, t)     | t ≥ 0 ∧ Dist(F, t)                 | F holds t time units in the future
Past(F, t)     | t ≥ 0 ∧ Dist(F, −t)                | F held t time units in the past
Alw(F)         | ∀d : Dist(F, d)                    | F holds always
Lasts(F, t)    | ∀d ∈ (0, t) : Futr(F, d)           | F holds for t time units in the future
Lasted(F, t)   | ∀d ∈ (0, t) : Past(F, d)           | F held for t time units in the past
WithinF(F, t)  | ∃d ∈ (0, t) : Futr(F, d)           | F holds within t time units in the future
Until(F, G)    | ∃d > 0 : Lasts(F, d) ∧ Futr(G, d)  | F holds until G holds
NowOn(F)       | ∃d > 0 : Lasts(F, d)               | F holds for some non-empty interval in the future
UpToNow(F)     | ∃d > 0 : Lasted(F, d)              | F held for some non-empty interval in the past

Table 3: TRIO derived temporal operators.

Referring again to the example of the resource manager, the following TRIO formula asserts that any low priority resource request is satisfied within 100 time units:

Alw(lpr ⇒ WithinF(occ, 100))

Similarly, a formula of the form Alw(hpr ⇒ Lasts(¬hpr, t)) would state that no further high priority request occurs for t time units after one is issued.

Notice that, in metric temporal logics such as MTL and TRIO, no explicit state component needs to be devoted to the representation of the current value of “time”: quantitative timing properties can be specified using the modal operators embedded in the language. Other approaches to the quantitative specification of timing properties in real-time systems are based on the use of the operators of (plain) LTL in combination with assertions that refer to the value of some ad hoc clock predicates or of an explicit time variable [Ost89]. For instance, the following formula of Real Time Temporal Logic (RTTL, a logic that will be discussed in Section 5.3) states the same bounded-response property expressed by MTL Formula (2) above (in the formula, the variable t represents the current value of the time state component):

∀T ((p ∧ t = T) ⇒ F(q ∧ t ≤ T + d))

Dealing with different time granularities.
Once suitable constructs are available to denote in a quantitatively precise way the time distance among events and the length of time intervals, the problem may arise of describing systems that include several components evolving, possibly in a partially independent fashion, on different time scales. This is dealt with in the temporal logic TRIO described above by adopting syntactic and semantic mechanisms that enable dealing with different levels of time granularity [CCM+91]: for instance, 30 D denotes 30 days, whereas 3 H denotes 3 hours. The key issue is the possibility given to the user to specify a semantic mapping between time domains of different granularity; hence, the truth of a predicate at a given time value at the higher (coarser) level of granularity is defined in terms of its interpretation in the interval at the lower (finer) level associated with the value at the higher level. For instance, Figure 18 specifies that, say, working during the month of November means working from the 2nd through the 6th, from the 9th through the 13th, etc.
Figure 18: Interpretation of an upper-level predicate in the lower-level domain. Solid lines denote the intervals in the lower domain where the predicate holds.

As with derived TRIO temporal operators, suitable predefined mappings help the user specify a few standard situations. For instance, given two temporal domains T1 and T2, such that T1 is coarser than T2, p event in T1 → T2 means that predicate p is true in any t ∈ T1 if and only if it is true in just one instant of the interval of T2 corresponding to t. Similarly, p complete in T1 → T2 means that p is true in any t ∈ T1 if and only if it is true in the whole corresponding interval of T2. In this way, the following TRIO formula

Alw_M(∀emp (work(emp) ⇒ get_salary(emp)))

which formalizes the sentence “every month, if an employee works, then she gets her salary” introduced in Section 3.1, is given a precise semantics by introducing the mapping of Figure 18 for predicate work, and by stating that get_salary event in M → D.
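The event and complete mappings can be rendered concretely as follows (our own sketch; the domains, dates, and predicate names are illustrative):

```python
# Coarser domain: months (here just 'Nov'); finer domain: days of the month.
# The refinement mapping sends each coarse instant to its finer interval.
REFINE = {'Nov': set(range(1, 31))}
WORKDAYS = {2, 3, 4, 5, 6, 9, 10, 11, 12, 13, 16, 17, 18, 19, 20,
            23, 24, 25, 26, 27}

def holds_complete(fine_truth, coarse_t):
    """p complete in T1 -> T2: p holds at coarse_t iff it holds on the
    WHOLE corresponding finer interval."""
    return REFINE[coarse_t] <= fine_truth      # subset test

def holds_event(fine_truth, coarse_t):
    """p event in T1 -> T2: p holds at coarse_t iff it holds at exactly
    ONE instant of the corresponding finer interval."""
    return len(REFINE[coarse_t] & fine_truth) == 1

payday = {27}
assert not holds_complete(WORKDAYS, 'Nov')   # weekends are not worked
assert holds_event(payday, 'Nov')            # the salary is paid once
```

The user-specified mapping of Figure 18 (working on the 2nd–6th, 9th–13th, and so on) fits neither standard case, which is precisely why TRIO lets the user define the mapping.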
Quite often the distinction between the two intended meanings is implicit in natural language sentences and depends on some conventional knowledge shared among the parties involved in the described process; thus, in the formalization stage, it needs to be made explicit. Consider for instance the following description of a procedure for carrying out written exams: “Once the teacher has completed the explanation of the exercise, the students must solve it within exactly three hours. Then, the teacher will collect their solutions and will publish and register the grades after three days”. Clearly, the former part of the sentence must be interpreted in the asynchronous way (students have to complete their job within 180 minutes starting from the minute when the explanation ended). The latter part, however, is normally intended according to the synchronous interpretation: results will be published before midnight of the third “calendar day” following the one when the exam was held.

This notion of synchronous vs. asynchronous refinement of predicates can be made explicit by adding an indication (S for synchronous, A for asynchronous) denoting the intended mode of granularity refinement for the predicates included in the subformula. Hence, the above description of the written examination procedure could be formalized by the following formula, where H stands for “hours” and D for “days”:

Alw_{H,A}(exerciseDelivery ⇒ Futr(solutionCollect, 3)) ∧ Alw_{D,S}(exerciseDelivery ⇒ Futr(gradesPublication, 3))

To the best of our knowledge, only a few other languages in the literature approach the granularity problem in a formal way [BB06, Rom90]. Among these, [Rom90] addresses the problem both for space and time in formal models of geographic data processing requirements.

Dense Time Domains and the Non-Zenoness Property.
The adoption of a dense, possibly continuous time domain allows one to model asynchronous systems, where distinct, independent events may occur at time instants that are arbitrarily close. As a consequence,
Zeno behaviors, where for instance an unbounded number of events takes place in a bounded time interval, become possible and must be ruled out by means of suitable axioms or through the adoption of ad hoc underlying semantic assumptions. The axiomatic description of non-Zenoness is immediate for a first-order, metric temporal logic like MTL or TRIO, when it is applied to simple entities like predicates or variables ranging over finite domains. It can be more complicated when non-Zenoness must be specified in the most general case of variables that are real-valued functions of time [GM01].

Informally, a predicate is non-Zeno if it has finite variability, i.e., its truth value changes a finite number of times over any finite interval. Correspondingly, a general predicate P can be constrained to be non-Zeno by requiring that there always exists a time interval before or after every time instant where P is constantly true or constantly false. This constraint can be expressed by the following TRIO formula (see [HR04, LWW07] for formulations in other similar logics):

Alw((UpToNow(P) ∨ UpToNow(¬P)) ∧ (NowOn(P) ∨ NowOn(¬P)))   (3)

The additional notion of non-Zeno interval-based predicate is introduced to model a property or state that holds continuously over time intervals of length strictly greater than zero. Suppose, for instance, that the "occupied state" for the resource in the resource manager example is modeled in the specification through a predicate occ; to impose that occ be an interval-based (non-Zeno) predicate, one can introduce, in addition to Formula (3), the following TRIO axiom (which eliminates the possibility of occ being true in isolated time instants):

Alw((occ ⇒ UpToNow(occ) ∨ NowOn(occ)) ∧ (¬occ ⇒ UpToNow(¬occ) ∨ NowOn(¬occ)))

A complementary category of non-Zeno predicates corresponds to properties that hold at isolated time points, and therefore can naturally model instantaneous events.
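The distinction between interval-based and point-based predicates can be illustrated with a small sketch. The encoding is ours, not part of TRIO: a finitely variable predicate over a finite horizon is represented by the finite list of maximal intervals where it holds, with isolated instants represented as degenerate intervals.

```python
# Sketch (our own encoding): a non-Zeno predicate is a finite list of
# maximal closed intervals (a, b), a <= b, where it holds; an isolated
# instant is a degenerate interval with a == b.

def interval_based(intervals):
    """True if the predicate never holds only at isolated instants."""
    return all(a < b for a, b in intervals)

def point_based(intervals):
    """True if the predicate holds only at isolated instants."""
    return all(a == b for a, b in intervals)

occ = [(0.0, 3.5), (4.0, 7.2)]   # "occupied": intervals of positive length
hpr = [(1.0, 1.0), (5.5, 5.5)]   # high-priority requests: instantaneous

assert interval_based(occ) and not point_based(occ)
assert point_based(hpr) and not interval_based(hpr)
```

In this finite representation every predicate is trivially non-Zeno; what the two checks capture is the additional shape constraint that the corresponding TRIO axioms impose.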
If, in the resource manager specification, predicate hpr represents the issue of a high-priority request, it can be constrained to be a point-based predicate by introducing the following formula in addition to Axiom (3):

Alw(UpToNow(¬hpr) ∧ NowOn(¬hpr))

Finally, non-Zenoness for a time-dependent variable T (representing for instance the current temperature in a thermostat application) ranging over an uncountable domain D essentially coincides with T being piecewise analytic as a function of time. Analyticity is a quite strong "smoothness" requirement on functions which guarantees that the function intersects any constant line only finitely many times over any finite interval. (A function is analytic at a given point if it possesses derivatives of all orders and agrees with its Taylor series about that point [Wei, Kno96]; it is piecewise analytic if it is analytic over finitely many contiguous open intervals.) Hence, any formula of the kind T = v, where v is a constant value in D, is guaranteed to be non-Zeno according to the previous definitions for predicates. Formally, non-Zenoness for T can be constrained by the following TRIO formula (where r, l : R → D are functions that are analytic at 0):

Alw(∃d > 0 ∀t : 0 < t < d ⇒ (Dist(T = r(t), t) ∧ Dist(T = l(t), −t)))

In [GM06a] it is shown that the adoption of a small set of predefined categories of specification items, like the point- and interval-based predicates outlined above, can make the modeling of real-time hybrid systems quite systematic and amenable to automated verification.

Future and Past Operators.
While the Linear Temporal Logic LTL, as originally proposed by Pnueli [Pnu77] to study the correctness of programs, has only future operators, one may consider additional modalities for the past tense, e.g., P (for previous) as the past counterpart of the next operator X, O (for once) as opposed to F, S (for since) as the past counterpart of the until operator U, etc. The question then arises whether the past operators are at all necessary (i.e., if they actually increase the expressiveness of the logic) or useful in practice (i.e., if there are significant classes of properties that can be described in a more concise and transparent way by using also past operators than by using future operators only).

Concerning the question of expressiveness, it is well known from [GPSS80] that LTL with past operators does not add expressive power to future-only LTL. Moreover, the separation theorem by Gabbay [Gab87] allows for the elimination of past operators, producing an LTL formula to be evaluated in the initial instant only: therefore, LTL with past operators is said to be initially equivalent to future-only LTL [Eme90]. On the other hand, it is widely recognized that the extension of LTL with past operators [Kam68] allows one to write specifications that are easier, shorter, and more intuitive [LPZ85]. A customary example, taken from [Sch02], is the specification:
Every alarm is due to a fault, which, using the globally operator G and the previously operator O (once), may be very simply written as:

G(alarm ⇒ O fault)

whereas the following is one of the simplest LTL versions of the same specification, using the until operator:

¬(¬fault U (alarm ∧ ¬fault))

In [LMS02], it has been shown that the elimination of past operators may yield an exponential growth of the length of the derived formula.

These expressiveness results change significantly when we consider logics interpreted over dense time domains. In general, past operators add expressive power when the time domain is dense, even if we consider mono-infinite time lines such as R_{≥0}. For instance, [BCM05] shows that, over the reals, propositional MTL with past operators is strictly more expressive than its future-only version. The question of the expressiveness of past operators over dense time domains was first addressed, and shown to differ from the discrete case, in [AH92a, AH93].

Branching-Time Temporal Logic.
As discussed in Section 3.3, in branching-time temporal logic every time instant may split into several future ones, and therefore formulas are interpreted over trees of states; such trees represent all possible computations of the modeled system. The branching in the interpretation structure naturally represents the nondeterministic nature of the model, which may derive from some intrinsic feature of the device under construction or from some feature of the stimuli coming from the environment with which the device interacts. When interpreting a branching temporal logic formula at some current time, the properties asserted for the future may be evaluated with reference to all future computations (i.e., branches of the state tree) starting from the current time, or only to some of them. (As is customary in the literature, we consider one-sided infinite discrete time domains, i.e., N; the bi-infinite case, i.e., Z, is much less studied [PP04].) Therefore, branching-time temporal logic possesses modal operators that allow one to quantify universally or existentially over computations starting from the current time.

The Computation Tree Logic (CTL) [EH86] has operators that are similar to LTL, except that every temporal connective must be preceded by a path quantifier: either E (which stands for there exists a computation, sometimes also denoted with the quantification symbol ∃) or A (for all computations, also ∀).
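The path quantifiers can be given a concrete, if naive, reading on a finite transition system. The sketch below is our own toy example, with states and labels chosen for illustration only: EF is computed by backward reachability, and AG as a check over all states reachable from the initial one.

```python
# Toy CTL sketch (our own example): EF via backward reachability,
# AG as "holds in every state reachable from the initial state".

def ef(states, succ, goal):
    """States from which some path eventually reaches a goal state."""
    found = {s for s in states if goal(s)}
    while True:
        new = {s for s in states if any(t in found for t in succ[s])} - found
        if not new:
            return found
        found |= new

def ag(states, succ, init, prop):
    """True iff prop holds in every state reachable from init."""
    seen, todo = set(), [init]
    while todo:
        s = todo.pop()
        if s not in seen:
            if not prop(s):
                return False
            seen.add(s)
            todo.extend(succ[s])
    return True

# free -> waiting (request pending) -> occupied -> free
states = {'free', 'waiting', 'occupied'}
succ = {'free': {'waiting'}, 'waiting': {'occupied'}, 'occupied': {'free'}}
ef_occ = ef(states, succ, lambda s: s == 'occupied')

# AG(lpr => EF occ): wherever a low-priority request is pending
# ('waiting'), some continuation reaches 'occupied'.
assert ag(states, succ, 'free', lambda s: s != 'waiting' or s in ef_occ)
```

Real CTL model checkers evaluate arbitrary nestings of path and temporal operators by such fixpoint computations over the full state space.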
With reference to the usual resource manager example, the formula below asserts that in every execution a low-priority request (predicate lpr) will eventually be followed by the resource being occupied (predicate occ) in some of the evolutions following the request:

AG(lpr ⇒ EF occ)

while the following formula asserts that there exists a computation of the resource manager where all low-priority requests are certainly (i.e., in every possible successive evolution) eventually followed by the resource being occupied:

EG(lpr ⇒ AF occ)

These examples, though very simple, show that in branching-time temporal logics temporal and path quantifiers may interact in quite a subtle way.

Not surprisingly, branching temporal logic has been extended in a metric version (TCTL, timed CTL) by adding quantitative time parameters to its operators, much in the same way MTL extends Linear Temporal Logic [ACD93, HNSY94]. We refer the reader to [Var01] for a thorough analysis of the mutual pros and cons of linear-time versus branching-time logics.

Interval-Based Temporal Logics.
All temporal logics we have considered so far adopt time points as the fundamental entities: every state is associated with a time instant and formulas are interpreted with reference to some time instant. By contrast, the so-called interval temporal logics assume time intervals, rather than time instants, as the original temporal entity, while time points, if not completely ignored, are considered as derived entities.

In principle, from a purely conceptual viewpoint, choosing intervals rather than points as the elementary time notion may be considered a matter of subjective preference, once it is acknowledged that an interval may be considered as a set of points, while, on the other hand, a point could be viewed as a special case of interval having null length [Koy92]. In formal logic, however, apparently limited variations in the set of operators may make a surprisingly significant difference in terms of expressiveness and complexity or decidability of the problems related with analysis and verification. Over the years, interval temporal logics have been a quite rich research field, producing a mass of formal notations with related analysis and verification procedures and tools. A few relevant ones are: the Interval-based Temporal Logic of Schwartz et al. [SMV83], the Interval Temporal Logic of Moszkowski [Mos83, Mos86], the Duration Calculus of Chaochen et al. [CHR91], and the Metric Interval Temporal Logic (MITL) of Alur et al. [AFH96]. Among them, Duration Calculus (DC) refers to a continuous linear sequence of time instants as the basic interpretation structure.
The significant portions of the system state are modeled by means of suitable functions from time (i.e., from the nonnegative reals) to Boolean values, and operators measuring accumulated durations of states are used to provide a metric over time. For instance, in our resource manager example, the property that the resource is never occupied for more than 100 time units without interruption (except possibly for isolated instants) would be expressed with the DC formula:

□(⌈occ⌉ ⇒ ℓ ≤ 100)

where ⌈occ⌉ is a shorthand for ∫occ = ℓ ∧ ℓ > 0, which formalizes the fact that the predicate occ stays true continually (except for isolated points) over an interval of length ℓ.

Another basic operator of Duration Calculus (and of several other interval logics as well) is the chop operator ; (sometimes denoted as ∩). Its purpose is to join two formulas predicating about two different intervals into one predicating about two adjacent intervals. For example, if we wanted to formalize the property that any client occupies the resource for at least 5 time units, we could use the chop operator as follows:

□(⌈¬occ⌉ ; ⌈occ⌉ ; ⌈¬occ⌉ ⇒ ℓ > 5)

Note that ℓ in the right-hand side of the implication now refers to the length of the overall interval, obtained by composition through the chop operator.

Duration Calculus also embeds an underlying semantic assumption of finite variability for state functions that essentially corresponds to the previously discussed non-Zeno requirement: each (Boolean-valued) interpretation must have only finitely many discontinuity points in any finite interval.
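The duration operator has a direct computational reading on finitely variable states. In the sketch below (our own encoding, not DC syntax), a Boolean state is the list of maximal intervals where it holds; ∫occ over an observation interval is the accumulated overlap, and the requirement □(⌈occ⌉ ⇒ ℓ ≤ 100) amounts to bounding the length of every maximal occ-interval.

```python
# Sketch (our encoding) of DC-style duration and the "never occupied
# longer than 100 time units" requirement on finitely variable states.

def duration(intervals, a, b):
    """Accumulated time within [a, b] during which the state holds."""
    return sum(max(0.0, min(b, hi) - max(a, lo)) for lo, hi in intervals)

def never_longer_than(intervals, bound):
    """No maximal interval where the state holds exceeds `bound`."""
    return all(hi - lo <= bound for lo, hi in intervals)

occ = [(0.0, 60.0), (80.0, 150.0)]
assert duration(occ, 0.0, 100.0) == 80.0   # 60 + 20 time units occupied
assert never_longer_than(occ, 100.0)
assert not never_longer_than([(0.0, 120.0)], 100.0)
```

The finite-variability assumption of DC is what makes this interval-list representation adequate: without it, no finite list of maximal intervals would exist.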
Another category of descriptive formalisms adopts an explicit "timestamp" view of time. This is typically done by introducing an ad hoc feature (e.g., a variable that represents the current time, or a time-valued function providing a timestamp associated with every event occurrence). In this section we focus on the distinguishing features of Lamport's Temporal Logic of Actions (TLA) [Lam94], and Alur and Henzinger's Timed Propositional Temporal Logic (TPTL) [AH94]. Other relevant examples of explicit-time logics are the Real Time Logic (RTL) of Mok et al. [JM86] and Ostroff's Real-Time Temporal Logic (ESM/RTTL) [Ost89] (which will be presented in the context of the dual language approach in Section 5.3).

Temporal Logic of Actions.
TLA formulas are interpreted over linear, discrete state sequences, and include variables, first-order quantification, predicates, and the usual modal operators F and G to refer to some or all future states. While basic TLA does not have a quantitative treatment of time, in [AL94] Abadi and Lamport show how to introduce a distinguished state variable now with a continuous domain, representing the current time, so that the specification of temporal properties consists of formulas predicating explicitly on the values of now in different states, thus describing its expected behavior with respect to the events taking place.

With reference to the resource manager example, to formally describe the behavior in case of a low-priority request an action lpr would be introduced, describing the untimed behavior of this request. An action is a predicate about two states, whose values are denoted by unprimed and primed variables, for the current and next state, respectively. Therefore, the untimed behavior of an accepted low-priority request would simply be to change the value of the state of the resource (indicated by a variable res) from free to occupied, as in the following definition:

lpr ≜ res = free ∧ res′ = occ

Then, the timed behavior associated with this action would be specified by setting an upper bound on the time taken by the action, specifying that the action must happen within 2 time units whenever it is continuously enabled. Following the scheme in [AL94], a timer would be defined by means of two formulas (which we do not report here for the sake of brevity: the interested reader can find them in [AL94]). The first one defines predicate MaxTime(t), which holds in all states whose timestamp (represented by the state variable now) is less than or equal to the absolute time t. The second formula defines predicate VTimer(t, A, δ, v), where A is an action, δ is a delay, v is the set of all variables, and t is a state variable representing a timer.
Then, VTimer(t, A, δ, v) holds if and only if either action A is not currently enabled and t is ∞, or A is enabled and t is now + δ (and it will stay so until either A occurs, or A is disabled; see [AL94, Sec. 3] for further details). Finally, the timed behavior of low-priority requests would be defined by the following action lpr_t, where T_gr is a state variable representing the maximum time within which action lpr must occur:

lpr_t ≜ lpr ∧ VTimer(T_gr, lpr, 2, v) ∧ MaxTime(T_gr)

More precisely, the formula above states that after action lpr is enabled, it must occur before time surpasses value now + 2.

It is interesting to discuss how TLA solves the problem of Zeno behaviors. Zeno behaviors are possible because TLA formulas involving time are simply satisfied by behaviors where the variable now, being a regular state variable, does not change value. There are at least two mechanisms to ensure non-Zenoness. The first, simpler one introduces explicitly in the specification the requirement that time always advances, by the following formula NZ:

NZ ≜ ∀t ∈ R : F(now > t)

An alternative a posteriori approach, which we do not discuss in detail, is based on a set of theorems provided in [AL94] to infer the non-Zenoness of specifications written in a certain canonical form, after verifying some semantic constraints regarding the actions included in the specification.

It is worth noticing that also in TLA, like in other temporal logics discussed above, two consecutive states may refer to the same time instant, so that the logic departs from the notion of time inherited from classical physics and from traditional dynamical system theory. In every timed TLA specification, it is thus customary to explicitly introduce a formula that states the separation of time-advancing steps from ordinary program steps (see [AL94] for further details). This approach is somewhat similar in spirit to that adopted in TTM/RTTL, which is presented in Section 5.3.
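The effect of the MaxTime/VTimer scheme on finite behaviors can be approximated by a simple check. The sketch below uses our own much-simplified encoding (a behavior as a sequence of (now, res) pairs), not Abadi and Lamport's actual definitions: it verifies that once lpr is enabled (res = 'free'), it occurs before now surpasses the enabling time plus the delay of 2.

```python
# Simplified sketch, in the spirit of MaxTime/VTimer (our encoding):
# a behavior is a sequence of states (now, res); lpr (res goes
# 'free' -> 'occ') must occur within `delay` of becoming enabled.

def respects_upper_bound(behavior, delay=2.0):
    enabled_at = None
    for (now, res), (now2, res2) in zip(behavior, behavior[1:]):
        if res == 'free' and enabled_at is None:
            enabled_at = now                     # action becomes enabled
        if res == 'free' and res2 == 'occ':      # action lpr occurs
            if now2 > enabled_at + delay:
                return False
            enabled_at = None
        elif enabled_at is not None and now2 > enabled_at + delay:
            return False                         # deadline missed
    return True

ok  = [(0.0, 'free'), (1.0, 'free'), (1.5, 'occ'), (3.0, 'occ')]
bad = [(0.0, 'free'), (1.0, 'free'), (2.5, 'free'), (2.5, 'occ')]
assert respects_upper_bound(ok)
assert not respects_upper_bound(bad)
```

Note that the check accepts behaviors where now stalls between consecutive states, mirroring the observation above that a separate formula (like NZ) is needed to force time to advance.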
Timed Propositional Temporal Logic.
The TPTL logic by Alur and Henzinger represents a quite interesting example of how a careful choice of the operators provided by a temporal logic can make a great difference in terms of expressiveness, decidability, and complexity of the verification procedures. TPTL may be roughly described as a "half-order" logic, in that it is obtained from propositional linear-time logic by adding variables that refer to time, and allowing for a freeze quantification operator: for a variable x, the freeze quantifier (denoted as x.) binds the variable x to the time when the subformula in the scope of the quantification is evaluated. One can think of it as the analogue, for logic languages, of clock resets in timed automata (see Section 5.1.1). The freeze quantifier is combined with the usual modal operators F and G: if φ(x) is a formula in which variable x occurs free, then formula F x.φ(x) asserts that there is some future instant, with some absolute time k, such that φ(k) will hold in that instant; similarly, G x.φ(x) asserts that φ(h) will hold in any future instant, h being the absolute time of that instant.

The familiar property of the resource manager, that any low-priority resource request is satisfied within 100 time units, would be expressed in TPTL as follows:

G x.(lpr ⇒ F y.(occ ∧ y < x + 100))

In [AH94] the authors show that the logic is decidable over discrete time, and define a doubly exponential decision procedure for it; in [AH92b] they prove that adding ordinary first-order quantification on variables representing the current time, or adding past operators to TPTL, would make the decision procedure of the resulting logic non-elementary. Therefore they argue that TPTL constitutes the "best" combination of expressiveness and complexity for a temporal logic with metric on time.

5.2.3 Algebraic Formalisms

Algebraic formalisms are descriptive formal languages that focus on the axiomatic and calculational aspects of a specification.
In other words, they are based on axioms that define how one can symbolically derive consequences of basic definitions [Bae04, Bae03]. From a software engineering viewpoint, this means that the emphasis is on refinement of specifications (which is formalized through some kind of algebraic morphism).

In algebraic formalisms devoted to the description of concurrent activities, the basic behavior of a system is usually called a process. Hence, algebraic formalisms are often named with the term process algebras. A process is completely described by a set of (abstract) events occurring in a certain order. Therefore, a process is also called a discrete event system.

In order to describe concurrent and reactive systems, algebraic formalisms usually provide a notion of parallel composition among different, concurrently executing processes. Then, the semantics of the global system is fully defined by applications of the transformation axioms of the algebra on the various processes. Such a semantics, given axiomatically as a set of transformations, is usually called operational semantics, not to be confused with operational formalisms (see Section 5.1).
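The idea of a semantics given as transformation rules on process terms can be sketched in a few lines. The encoding below is our own toy syntax, not any specific process algebra: terms are built from prefixing and choice, and the rules enumerate the finite traces a term can perform.

```python
# Toy sketch (our own syntax, not a specific process algebra): process
# terms as nested tuples, with rules that enumerate their finite traces.

STOP = ('stop',)

def prefix(event, proc):          # event -> proc
    return ('prefix', event, proc)

def choice(p, q):                 # behave as either branch
    return ('choice', p, q)

def traces(proc):
    """All finite traces derivable from the term by the rules."""
    if proc[0] == 'stop':
        return {()}
    if proc[0] == 'prefix':
        _, e, p = proc
        return {()} | {(e,) + t for t in traces(p)}
    if proc[0] == 'choice':
        _, p, q = proc
        return traces(p) | traces(q)

# lpr -> occ -> STOP  choice  hpr -> occ -> STOP
system = choice(prefix('lpr', prefix('occ', STOP)),
                prefix('hpr', prefix('occ', STOP)))
assert ('lpr', 'occ') in traces(system)
assert ('hpr', 'occ') in traces(system)
```

Real process algebras add parallel composition, hiding, and recursion, and distinguish finer semantics than trace sets (e.g., failures), but the rule-driven style is the same.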
Untimed Process Algebras.
Historically, the first process algebraic approaches date back to the early work by Bekič [Bek71] and to Milner's comprehensive work on the Calculus of Communicating Systems (CCS) formalism [Mil80, Mil89]. Basically, they aimed at extending the axiomatic semantics for sequential programs to concurrent processes. In this section, we focus on Communicating Sequential Processes (CSP), another popular process algebra, introduced by Hoare [Hoa78, Hoa85] and subsequently developed into several formalisms. As usual, we refer the reader to [BPS01] for a more detailed and comprehensive presentation of process algebras, and to the historical surveys [Bae04, Bae03].
Communicating Sequential Processes (CSP) is a process algebra based on the notion of communication between processes. The basic process is defined by the sequences of events it can generate or accept; to this end the → operator is used, which denotes a sequence of two events that occur in order. Definitions are typically recursive, and infinite behaviors can consequently arise. However, a predefined process SKIP always terminates as soon as it is executed. In the following examples we denote primitive events by lowercase letters, and processes by uppercase letters.

Processes can be unbounded in number, and parametric with respect to numeric parameters, which renders the formalism very expressive. We exploit this fact in formalizing the usual resource manager example (whose complete CSP specification is shown in Table 4) by allowing an unbounded number of pending high-priority requests, similarly to what we did with Petri nets in Section 5.1.2.

In CSP two choice operators are available. One is external choice, denoted by the box operator □; this is basically a choice where the process that is actually executed is determined by the first (prefix) event that is available in the environment. In the resource manager example, external choice is used to model the fact that a
FREE process can stay idle for one transition (behaving as process P_N), or accept a high-priority request or a low-priority one (behaving as processes P_H and P_L, respectively). On the other hand, internal choice, denoted by the ⊓ operator, models a nondeterministic choice where the process chooses between one of two or more possible behaviors, independently of externally generated events. In the resource manager example, the system's process WG internally chooses whether to skip once or twice before granting the resource to a low-priority request. A special event, denoted by τ, is used to give a semantics to internal choices: the τ event is considered invisible outside the process in which it occurs, and it leads to one of the possible internal choices.

Concurrently executing processes are modeled through the parallel composition operator ‖. In our example, we represent the occupied resource by a parallel composition of an
OCC process and a counter
CNT(k) counting the number of pending high-priority requests. The former process turns back to behaving as a FREE process as soon as there are no more pending requests. The latter, instead, reacts to release and high-priority request events. In particular, it signals the number of remaining enqueued processes by issuing the parametric event enqueued!k (which is received by an OCC process, as defined by the incoming event enqueued?k of OCC).

FREE = □_{k ∈ {H,L,N}} P_k
P_N = SKIP → FREE
P_H = hpr → P_O
P_O = OCC {enqueued}‖{enqueued, rel, hpr} CNT(0)
OCC = enqueued?0 → FREE □ enqueued?k : N_{>0} → OCC
CNT(−1) = SKIP
CNT(k) = rel → DEQ(k) □ hpr → CNT(k + 1)
DEQ(k) = enqueued!k → CNT(k − 1)
P_L = lpr → WG
WG = WG_1 ⊓ WG_2
WG_1 = SKIP ; P_O
WG_2 = SKIP ; SKIP ; P_O

Table 4: The resource manager modeled through CSP. (P_1 A‖B P_2 denotes the parallel composition of processes P_1 and P_2 such that P_1 only engages in events in A, P_2 only engages in events in B, and they both synchronize on events in A ∩ B.)

Let us now discuss the characteristics of the process algebraic models.

• Basic process algebras usually have no quantitative notion of time, defining simply an ordering among different events. In particular, time is typically discrete [Bae04]. Variants of this basic model have been proposed to introduce metric and/or dense time; we discuss them in the remainder of this section.
• The presence of the silent transition τ is a way of modeling nondeterministic behaviors; in particular, the nondeterministic internal choice operator ⊓ is based on the τ event.
• Even if process algebras include nondeterministic behaviors, their semantics is usually defined on linear time models. There are two basic approaches to formalize the semantics of a process algebra: the operational one has been briefly discussed above; for the denotational one we refer the interested reader to [Sch00].
• The parallel composition operation is a fundamental primitive of process algebras. The semantics which is consequently adopted for concurrency is either based on interleaving or it is truly asynchronous. Whenever interleaving concurrency is chosen, it is possible to represent a process by a set of classes of equivalent linear traces (see the timed automata subsection of Section 5.1.1). Therefore, the semantics of the parallel composition operator can be expressed solely in terms of the other operators of the algebra; the rule that details how to do this is called the expansion theorem [Bae04].
On the contrary, whenever a truly asynchronous concurrency model is chosen, no expansion theorem holds, and the semantics of the parallel composition operator is not reducible to that of the other operators.
• Processes described by algebraic formalisms may include deadlocked behaviors, where the state does not advance as some process is blocked. Let us consider, for instance, the following process P_i, which internally chooses whether to execute hpr → P_i or lpr → P_i:

P_i = hpr → P_i ⊓ lpr → P_i

Process P_i may refuse an lpr event offered by the environment, if it internally (i.e., independently of the environment) chooses to execute hpr → P_i. In such a case, P_i would deadlock. It is therefore the designer's task to prove a posteriori that a given CSP specification is deadlock-free.

Among other popular process algebras, let us just mention the Algebra of Communicating Processes (ACP) [BW90] and other approaches based on the integration of data description into process formalization, the most widespread approach being probably that of LOTOS [vEVD89, Bri89].

Timed Process Algebras. Quantitative time modeling is typically introduced in process algebras according to the following general schema, presented and discussed by Nicollin and Sifakis in [NS91]. First of all, each process is augmented with an ad hoc variable that explicitly represents time and can be continuous. Time is global and all cooperating processes are synchronized on it. Then, each process's evolution consists of a sequence of two-phase steps. During the first phase, an arbitrarily long (but finite) sequence of events occurs, while time does not change; basically, this evolution phase can be fully described by ordinary process algebraic means.
During the second phase, instead, the time variable is incremented while all the other state variables stay unchanged, thus representing time progressing; all processes also synchronously update their time variables by the same amount, which can possibly be infinite (divergent behavior).

Time in such a model is usually called abstract to denote the fact that it does not correspond to concrete or physical time. Notice that several of the synchronous operational formalisms, e.g., those presented in Section 5.1.1, can also be described on the basis of such a time model. For instance, in synchronous abstract machines à la
Esterel [BG92], the time-elapsing phase corresponds implicitly to one (discrete) time unit. Assuming the general time model above, the syntax of process algebras is augmented with constructs allowing one to explicitly refer to quantitative time in the description of a system. This was first pursued for CSP in [RR88], and has been subsequently extended to most other process algebras. We refer the reader to [BM02, NS91, Bae03], among others, for more references, while briefly focusing on Timed CSP (TCSP) in the following example.
Example 5 (Timed CSP). The CSP language has been modified [DS95, Sch00] by extending a minimal set of operators to allow the user to refer to metric time. In our resource manager example (whose complete Timed CSP specification is shown in Table 5), we only consider two metric constructs: the special process
WAIT and the so-called timed timeout ▷_t. The former is a quantitative version of the untimed SKIP: WAIT t is a process which just delays for t time units. We use this to model explicitly the acceptance of a low-priority request, which waits for two time units before occupying the resource (note that we modified the behavior with respect to the untimed case, by removing the nondeterminism in the waiting time).

The timed timeout ▷_t is a modification of the untimed timeout ▷ (not presented in the previous CSP example). The semantics of a formula P ▷_t Q is that of a process that behaves as P if any of P's initial events occurs within t time units; otherwise, it behaves as Q after t time units. In the resource manager example, we exploit this semantics to prescribe that the resource cannot be occupied continuously for longer than 100 time units: if no release (rel) or high-priority request (hpr) events occur within 100 time units, the process CNT(k) is timed out and the process DEQ is forcefully executed.

Finally, it is worth discussing how TCSP deals with the problem of
Zeno behaviors. The original solution of TCSP (see [DS95]) was to rule out Zeno processes a priori by requiring that any two consecutive actions be separated by a fixed delay of δ time units, thus prohibiting simultaneity altogether. This solution has the advantage of being simple and of totally ruling out problems of Zenoness; on the other hand, it forcefully introduces a discretization in behavior description, and it yields complications and lack of uniformity in the algebra axioms. Therefore, subsequent TCSP models have abandoned this strong assumption by allowing for simultaneous events and arbitrarily short delays. Consequently, the non-Zenoness of any given TCSP specification must be checked explicitly a posteriori.

FREE = □_{k ∈ {H,L,N}} P_k
P_N = SKIP → FREE
P_H = hpr → P_O
P_O = OCC {enqueued}‖{enqueued, rel, hpr} CNT(0)
OCC = enqueued?0 → FREE □ enqueued?k : N_{>0} → OCC
CNT(−1) = SKIP
CNT(k) = (rel → DEQ(k) □ hpr → CNT(k + 1)) ▷_{100} DEQ(k)
DEQ(k) = enqueued!k → CNT(k − 1)
P_L = lpr → WAIT 2 ; P_O

Table 5: The resource manager modeled through Timed CSP.

Several analysis and verification techniques have been developed for, and adapted to, process algebraic formalisms. For instance, let us just mention the FDR2 refinement checker [Ros97], designed for CSP, and the LTSA toolset [MK99] for the analysis of dual-language models combining process-algebraic descriptions with labeled transition systems.

The dual language approach, as stated in the introduction of Section 5.2, combines an operational formalism, useful for describing the system behavior in terms of states and transitions, with a descriptive notation suitable for specifying its properties. It provides a methodological support to the designer, in that it constitutes a unified framework for requirement specification, design, and verification.
Although a dual language approach often provides methods and tools for verification (e.g., for model checking), we point out that the effectiveness or efficiency of verification procedures is not necessarily a direct consequence of the presence of two heterogeneous notations (an operational and a descriptive one), but can derive from other factors, as the case of SPIN, discussed below, shows. In recent years a great number of frameworks to specify, design, and verify critical, embedded, real-time systems have been proposed, which may be considered as applications of the dual language approach. As usual we limit ourselves to mention the most significant features of a few representative cases.
The TTM/RTTL Framework
The work of Ostroff [Ost89] is among the first addressing the problem of formal specification, design, and verification of real-time systems by pursuing a dual language approach. It proposes a framework based on Extended State Machines and Real-Time Temporal Logic (ESM/RTTL). In later works, ESM have been extended to Timed Transition Models (TTM) [Ost90, Ost99]. The operational part of the framework (TTM) associates transitions with lower and upper bounds, referred to the value of a global, discrete time clock variable. We briefly discussed the time model introduced by this formalism in Section 5.1.1.

Here, let us illustrate TTM through the usual resource manager example. Figure 19 represents a system similar to the Timed Petri net example of Section 5.1.2: the number of low-priority requests is not counted, while that of high-priority ones is. Each transition is annotated with lower and upper bounds, a guard, and a variable update rule. For instance, the transition rel can be taken whenever the guard occ > 0 holds; its effect is to update the occ variable by decrementing it (whereas hpr increments it). Finally, when rel becomes enabled, it must be taken within a maximum of 100 clock ticks, unless the state is left (and possibly re-entered) by taking another (non-tick) enabled transition (such as hpr, which is always enabled, since it has no guard).

Figure 19: A resource manager modeled through a Timed Transition Model.

The descriptive part of the TTM/RTTL framework (RTTL) is based on Manna and Pnueli's temporal logic: it assumes linear time and it adopts the usual operators of future-only propositional LTL.
Real-time (i.e., quantitative) temporal properties are expressed by means of (in)equalities on simple arithmetic expressions involving the clock variable, as discussed in Section 5.2.1. For instance, the familiar requirement that a low-priority request is followed, within 100 time units, by the resource being occupied would be expressed as follows.

∀T ((lpr ∧ t = T) ⇒ F(occ ∧ t ≤ T + 100))

RTTL formulas are interpreted over TTM trajectories, i.e., sequences of states corresponding to TTM computations: [Ost89] provides both a proof system and verification procedures based on reachability analysis techniques. The TTM/RTTL framework is also supported by the StateTime toolset [Ost97], which in turn relies on the STeP tool [BBC+00].

Model Checking Environments
The SPIN model checking environment [Hol03] is based, for the operational part, on Büchi automata, which are edited by the designer using a high-level notation called ProMeLa. The syntax of ProMeLa closely resembles that of the C programming language (and is therefore — perhaps deceptively — familiar to C programmers) and, in addition to the traditional constructs for sequential programming, provides features like parallel processes, communication channels, and nondeterministic conditional instructions. The descriptive notation is plain future-only LTL, with the known limitations concerning the expression of complex properties and quantitative time constraints already pointed out in Section 5.2.1. Model checking in SPIN is performed by translating the negation of the LTL formula expressing the required property into a Büchi automaton and then checking that the intersection of the languages of the two automata (the one obtained from the ProMeLa program and the one coming from the negated LTL formula) is empty. It is therefore apparent that the distinction between the operational and the descriptive parts is maintained only in the user interface, for methodological purposes, and it blurs during verification.

UPPAAL [LPY97] is another prominent framework supporting model checking in a dual language approach. The operational part consists of a network of timed automata combined by the CCS parallel composition operator, and it provides both synchronous and asynchronous communication. The descriptive notation uses CTL in a restricted form, allowing only formulas of the kind AGφ, AFφ, EGφ, EFφ, and AG(φ ⇒ AFψ), where φ and ψ are "local" formulas, i.e., Boolean expressions over state predicates, integer variables, and clock constraints.

Other Dual Language Approaches
Among the numerous other dual language frameworks [JM94] we mention [FMM94], which combines timed Petri nets and the TRIO temporal logic: it provides a systematic procedure for translating any timed Petri net into a set of TRIO axioms that characterize its behavior, thus making it possible to derive required properties of the Petri net within the TRIO proof system. [FM02] introduces a real-time extension of the Object Constraint Language (OCL, [WK99]), which is a logic language that allows users to state (and verify through model checking) properties of transitions of UML state diagrams (which, as mentioned in Section 5.1.1, are a variation of Harel's Statecharts), especially temporal ones.
Conclusions

In computer science, unlike other fields of science and engineering, the modeling of time is often restricted to the formalization and analysis of specific problems within particular application fields, if not entirely abstracted away. In this paper we have analyzed the historical and practical reasons for this fact; we have examined various categories under which formalisms to analyze timing aspects in computing can be classified; we have then surveyed — with no attempt at exhaustiveness, but with the goal of conceptual completeness — many such formalisms, analyzing and comparing them with respect to the above categories.

The result is a quite rich and somewhat intricate picture of different but often tightly connected models, certainly much more variegated than the way time modeling is usually approached in other fields of science and engineering. As in other cases, in this respect too, computing science has much to learn from other, more established fields of engineering, but the converse is also true [GM06b].

Perhaps the main lesson we can extract from our study is that, despite the common understanding that time is a basic, unique conceptual entity, there are "many notions of time" in our reasoning; this is reflected in the adoption of different formal models when specifying and analyzing any type of system where timing behavior is of any concern. In some sense this claim could be seen as an application of a principle of relativity to the abstractions required by modern — heterogeneous — system design.
Whereas traditional engineering could comfortably deal with a unique abstract model of time as an independent "variable", flowing in an autonomous and immutable way, to which all other system variables had to be related, the advent of computing and communication technologies, with processing speeds comparable to the speed of light, produced, and perhaps imposed, a fairly sharp departure from such a view:

• Often a different notion of time must be associated with different system components. This may happen not only because the various components (possibly social organizations) are located in different places and their evolution may take place at a speed such that it is impossible to talk about "system state at time t", but also because the various components may have quite different natures — typically, a controlled environment and a controller subsystem based on some computing device — with quite different dynamics.

In particular, even inside the same computing device, it may be necessary to distinguish between an "internal time", defined and measured by the device's clock, and an "external time", which is the time of the environment with which the computing apparatus must interact and synchronize. The consequence of this fact is that often, perhaps in a hidden way, two different notions of time coexist in the same model (for instance, the time defined by the sequence of events and the time defined by a more or less explicit variable — a clock — whose value may be recorded and assigned just like other program variables).

• A different abstraction of time modeling may be useful depending on the type of properties one wishes to analyze: for instance, in some cases just the ordering of events matters, whereas in other cases a precise quantitative measure of the distance between them is needed.
As a consequence, many different mathematical approaches have been pursued to comply with the various modeling needs, the distinction between discrete and continuous time domains being only "the tip of the iceberg" of this issue.

Whether future evolutions will produce a better unification of the present state of the art, or even more diversification and specialization in time modeling, is an open and challenging question.
References

[ACD93] Rajeev Alur, Costas Courcoubetis, and David L. Dill. Model-checking in dense real-time. Information and Computation, 104(1):2–34, 1993. [ACH+
95] Rajeev Alur, Costas Courcoubetis, Nicolas Halbwachs, Thomas A. Henzinger, Pei-Hsin Ho, Xavier Nicollin, Alfredo Olivero, Joseph Sifakis, and Sergio Yovine. The algorithmic analysis of hybrid systems. Theoretical Computer Science, 138:3–34, 1995. [ACHH93] Rajeev Alur, Costas Courcoubetis, Thomas A. Henzinger, and Pei-Hsin Ho. Hybrid automata: An algorithmic approach to the specification and verification of hybrid systems. In Robert L. Grossman, Anil Nerode, Anders P. Ravn, and Hans Rischel, editors, Hybrid Systems, volume 736 of
Lecture Notes in Computer Science, pages 209–229. Springer-Verlag, 1993. [ACM02] Eugene Asarin, Paul Caspi, and Oded Maler. Timed regular expressions.
Journal of the ACM , 49(2):172–206, 2002.[AD94] Rajeev Alur and David L. Dill. A theory of timed automata.
Theoretical Computer Science, 126(2):183–235, 1994. [AFH96] Rajeev Alur, Tomás Feder, and Thomas A. Henzinger. The benefits of relaxing punctuality. Journal of the ACM, 43(1):116–146, 1996. [AH92a] Rajeev Alur and Thomas A. Henzinger. Back to the future: Towards a theory of timed regular languages. In Proceedings of the 33rd Annual Symposium on Foundations of Computer Science (FOCS’92), pages 177–186. IEEE Computer Society Press, 1992. [AH92b] Rajeev Alur and Thomas A. Henzinger. Logics and models of real time: A survey. In
Real Time: Theory in Practice , volume 600 of
Lecture Notes in Computer Science , pages 74–106. Springer-Verlag,1992.[AH93] Rajeev Alur and Thomas A. Henzinger. Real-time logics: Complex-ity and expressiveness.
Information and Computation , 104(1):35–77, 1993.[AH94] Rajeev Alur and Thomas A. Henzinger. A really temporal logic.
Journal of the ACM , 41(1):181–204, 1994.[AH96] Myla Archer and Constance L. Heitmeyer. Mechanical verifica-tion of timed automata: a case study. In
Proceedings of the IEEEReal Time Technology and Applications Symposium , pages 192–203,1996.[AK95] Rajeev Alur and Robert P. Kurshan. Timing analysis in COSPAN.In Rajeev Alur, Thomas A. Henzinger, and Eduardo D. Sontag, ed-itors,
Proceedings of the 3rd DIMACS/SYCON Workshop on Ver-ification and Control of Hybrid Systems , volume 1066 of
LectureNotes in Computer Science , pages 220–231. Springer-Verlag, 1995.[AL94] Mart´ın Abadi and Leslie Lamport. An old-fashioned recipe for realtime.
ACM Transactions on Programming Languages and Systems ,16(5):1543–1571, 1994.[AM04] Rajeev Alur and P. Madhusudan. Decision problems for timed au-tomata: A survey. In Marco Bernardo and Flavio Corradini, editors,
Revised Lectures from the International School on Formal Meth-ods for the Design of Computer, Communication and Software Sys-tems: Formal Methods for the Design of Real-Time Systems (SFM-RT’04) , volume 3185 of
Lecture Notes in Computer Science , pages1–24. Springer-Verlag, 2004.[Ant00] Panos J. Antsaklis, editor.
Special Issue on Hybrid Systems: Theoryand Applications , volume 88 of
Proceedings of the IEEE. IEEE Press, 2000. [Arc00] Myla Archer. TAME: Using PVS strategies for special-purpose theorem proving. Annals of Mathematics and Artificial Intelligence, 29(1–4):139–181, 2000. [Asa04] Eugene Asarin. Challenges in timed languages: from applied theory to basic theory.
Bulletin of the EATCS , 83:106–120, 2004. (Column:concurrency).[Bae03] J. C. M. Baeten. Over thirty years of process algebra: Past,present and future. In L. Aceto, Z. ´Esik, W. J. Fokkink, andA. Ing´olfsd´ottir, editors,
Process Algebra: Open Problems and Fu-ture Directions , volume NS–03–3 of
BRICS Notes Series , pages 7–12. 2003.[Bae04] J. C. M. Baeten. A brief history of process algebra. Technical Re-port CSR 04–02, Department of Mathematics and Computer Sci-ence, Technische Universiteit Eindhoven, 2004.[BB94] Grady Booch and Doug Bryan.
Software Engineering with ADA .Addison-Wesley, 1994.[BB06] Alan Burns and Gordon Baxter. Time bands in systems structure.In D. Besnard, C. Gacek, and C. B. Jones, editors,
Structure fordependability . Springer, 2006.[BBC +
00] Nikolaj S. Bjørner, Anca Browne, Michael Col´on, Bernd Finkbeiner,Zohar Manna, Henny B. Sipma, and Tom´as E. Uribe. Verifyingtemporal properties of reactive systems: A STeP tutorial.
FormalMethods in System Design , 16(3):227–270, 2000.[BBM98] Michael S. Branicky, Vivek S. Borkar, and Sanjoy K. Mitter. A uni-fied framework for hybrid control: Model and optimal control the-ory.
IEEE Transactions on Automatic Control , 43(1):31–45, 1998.[BCM05] Patricia Bouyer, Fabrice Chevalier, and Nicolas Markey. On theexpressiveness of TPTL and MTL. In R. Ramanujam and SandeepSen, editors,
Proceedings of the 25th International Conference onFoundations of Software Technology and Theoretical Computer Sci-ence (FSTTCS’05) , volume 3821 of
Lecture Notes in Computer Sci-ence , pages 432–443. Springer-Verlag, 2005.[BD91] Bernard Berthomieu and Michel Diaz. Modeling and verification oftime dependent systems using time Petri nets.
IEEE Transactionson Software Engineering , 17(3):259–273, 1991.[BDW00] Tom Bienm¨uller, Werner Damm, and Hartmut Wittke. TheSTATEMATE verification environment – making it real. In
Pro-ceedings of the 12th International Conference on Computer AidedVerification (CAV’00) , volume 1855 of
Lecture Notes in Computer Science, pages 561–567. Springer-Verlag, 2000. [Bek71] Hans Bekič. Towards a mathematical theory of processes. Technical Report TR 25.125, IBM Laboratory, Wien, 1971. Published in [Bek84]. [Bek84] Hans Bekič. Programming languages and their definition. In C. B. Jones, editor,
Selected Papers by Hans Bekiˇc , volume 177 of
LectureNotes in Computer Science . Springer-Verlag, 1984.[BG92] G´erard Berry and Georges Gonthier. The Esterel synchronous pro-gramming language: Design, semantics, implementation.
Scienceof Computer Programming , 19(2):87–152, 1992.[BGO +
04] Marius Bozga, Susanne Graf, Ileana Ober, Iulian Ober, and JosephSifakis. The IF toolset. In Marco Bernardo and Flavio Corradini,editors,
Revised Lectures from the International School on FormalMethods for the Design of Computer, Communication and SoftwareSystems: Formal Methods for the Design of Real-Time Systems(SFM-RT’04) , volume 3185 of
Lecture Notes in Computer Science ,pages 237–267. Springer-Verlag, 2004.[BL74] Walter S. Brainerd and Lawrence H. Landweber.
Theory of Com-putation . John Wiley & Sons, 1974.[BM02] J. C. M. Baeten and C. A. Middelburg.
Process Algebra with Tim-ing . Monographs in Theoretical Computer Science. Springer-Verlag,2002.[BMN00] Pierfrancesco Bellini, Riccardo Mattolini, and Paolo. Nesi. Tem-poral logics for real-time system specification.
ACM ComputingSurveys , 32(1):12–42, March 2000.[BPS01] Jan A. Bergstra, Alban Ponse, and Scott A. Smolka, editors.
Hand-book of Process Algebra . Elsevier, 2001.[Bri89] Ed Brinksma, editor.
Information Processing Systems — Open Sys-tems Interconnection — LOTOS — A Formal Description Tech-nique Based on the Temporal Ordering of Observational Behaviour .ISO, 1989. ISO 8807:1989.[BS03] Egon B¨orger and Robert F. St¨ark.
Abstract State Machines. AMethod for High-Level System Design and Analysis . Springer-Verlag, 2003.[BW90] Jos C. M. Baeten and W. P. Weijland.
Process Algebra , volume 18of
Cambridge Tracts in Theoretical Computer Science . CambridgeUniversity Press, 1990.[BY04] Johan Bengtsson and Wang Yi. Timed automata: Semantics, al-gorithms and tools. In J¨org Desel, Wolfgang Reisig, and GrzegorzRozenberg, editors,
Lectures on Concurrency and Petri Nets, Ad-vances in Petri Nets (from ACPN’03) , volume 3098 of
Lecture Notes in Computer Science, pages 87–124. Springer-Verlag, 2004. [CCM+91] Edoardo Corsetti, Ernani Crivelli, Dino Mandrioli, Angelo Morzenti, Angelo Montanari, Pierluigi San Pietro, and Elena Ratto. Dealing with different time scales in formal specifications. In Proceedings of the 6th International Workshop on Software Specification and Design, pages 92–101, 1991. [Cer93] Antonio Cerone.
A Net-Based Approach for Specifying Real-TimeSystems . PhD thesis, Universit`a degli Studi di Pisa, Dipartimentodi Informatica, 1993. TD-16/93.[CGP00] Edmund M. Clarke, Orna Grumberg, and Doron A. Peled.
ModelChecking . MIT Press, 2000.[CHR91] Zhou Chaochen, C. A. R. Hoare, and Anders P. Ravn. A calculusof duration.
Information Processing Letters , 40(5):269–276, 1991.[CHR02] Franck Cassez, Thomas A. Henzinger, and Jean-Fran¸cois Raskin.A comparison of control problems for timed and hybrid systems.In Claire Tomlin and Mark R. Greenstreet, editors,
Proceedings ofthe 5th International Workshop on Hybrid Systems: Computationand Control (HSCC’02) , volume 2289 of
Lecture Notes in ComputerScience , pages 134–148. Springer-Verlag, 2002.[CL99] Christos G. Cassandras and Stephane Lafortune.
Introduction toDiscrete Event Systems . Kluwer Academic Publishers, 1999.[CMS99] Antonio Cerone and Andrea Maggiolo-Schettini. Time-based ex-pressivity of time Petri nets for system specification.
TheoreticalComputer Science , 216(1–2):1–53, 1999.[Coo04] Matthew Cook. Universality in elementary cellular automata.
Com-plex Systems , 15:1–40, 2004.[DK05] Pedro R. D’Argenio and Joost-Pieter Katoen. A theory of stochasticsystems (parts i and ii).
Information and Computation , 203(1):1–74, 2005.[DS95] Jim Davies and Steve Schneider. A brief history of timed CSP.
Theoretical Computer Science , 138:243–271, 1995.[EH86] E. Allen Emerson and Joseph Y. Halpern. “Sometimes” and “notnever” revisited: On branching versus linear time temporal logic.
Journal of the ACM , 33(1):151–178, 1986.[Eme90] E. Allen Emerson. Temporal and modal logic. In Jan van Leeuwen,editor,
Handbook of Theoretical Computer Science , volume B, pages996–1072. Elsevier Science, 1990.[EPLF03] Hans-Erik Eriksson, Magnus Penker, Brian Lyons, and David Fado.
UML 2 Toolkit. John Wiley & Sons, 2003. [FM02] Stephan Flake and Wolfgang Mueller. An OCL extension for real-time constraints. In T. Clark and J. Warmer, editors,
Object Mod-eling with the OCL , volume 2263 of
Lecture Notes in ComputerScience , pages 150–171. Springer-Verlag, 2002.[FMM94] Miguel Felder, Dino Mandrioli, and Angelo Morzenti. Provingproperties of real-time systems through logical specifications andPetri net models.
IEEE Transactions on Software Engineering ,20(2):127–141, 1994.[FPR08a] Carlo A. Furia, Matteo Pradella, and Matteo Rossi. Automatedverification of dense-time MTL specifications via discrete-time ap-proximation. In Jorge Cu´ellar, Tom Maibaum, and Kaisa Sere,editors,
Proceedings of the 15th International Symposium on For-mal Methods (FM’08) , volume 5014 of
Lecture Notes in ComputerScience , pages 132–147. Springer-Verlag, May 2008.[FPR08b] Carlo A. Furia, Matteo Pradella, and Matteo Rossi. Comments on“temporal logics for real-time system specification”.
ACM Comput-ing Surveys , 2008. Accepted for publication (April 2008).[FR06] Carlo A. Furia and Matteo Rossi. Integrating discrete- andcontinuous-time metric temporal logics through sampling. In Eu-gene Asarin and Patricia Bouyer, editors,
Proceedings of the 4thInternational Conference on the Formal Modeling and Analysis ofTimed Systems (FORMATS’06) , Lecture Notes in Computer Sci-ence. Springer-Verlag, 2006.[Fra86] Nissim Francez.
Fairness . Monographs in Computer Science.Springer-Verlag, 1986.[Gab87] Dov M. Gabbay. The declarative past and imperative future. InBehnam Banieqbal, Howard Barringer, and Amir Pnueli, editors,
Proceeding of Temporal Logic in Specification (TLS’87) , volume398 of
Lecture Notes in Computer Science , pages 409–448, Altrin-chamm, UK, April 1987. Springer-Verlag.[GJM02] Carlo Ghezzi, Mehdi Jazayeri, and Dino Mandrioli.
Fundamentalsof Software Engineering . Prentice Hall, 2nd edition, 2002.[GM01] Angelo Gargantini and Angelo Morzenti. Automated deductive re-quirements analysis of critical systems.
ACM Transactions on Soft-ware Engineering and Methodology , 10(3):255–307, 2001.[GM06a] Angelo Gargantini and Angelo Morzenti. Automated verification ofcontinuous time systems by discrete temporal induction. In
Proceedings of the 13th International Symposium on Temporal Representation and Reasoning (TIME’06). IEEE Computer Society Press, 2006. [GM06b] Carlo Ghezzi and Dino Mandrioli. The challenges of software engineering education. In P. Inverardi and M. Jazayeri, editors, Proceedings of the ICSE 2005 Education Track, volume 4309 of
LectureNotes in Computer Science , pages 115–127. Springer-Verlag, 2006.[GMM90] Carlo Ghezzi, Dino Mandrioli, and Angelo Morzenti. TRIO: A logiclanguage for executable specifications of real-time systems.
TheJournal of Systems and Software , 12(2):107–123, 1990.[GMM99] Angelo Gargantini, Dino Mandrioli, and Angelo Morzenti. Deal-ing with zero-time transitions in axiom systems.
Information andComputation , 150(2):119–131, 1999.[GMMP91] Carlo Ghezzi, Dino Mandrioli, Sandro Morasca, and Mauro Pezz`e.A unified high-level Petri net formalism for time-critical systems.
IEEE Transactions on Software Engineering , 17(2):160–172, 1991.[GPSS80] Dov M. Gabbay, Amir Pnueli, Saharon Shelah, and Jonathan Stavi.On the temporal basis of fairness. In
Proceedings of the 7th An-nual ACM Symposium on Principles of Programming Languages(POPL’80) , pages 163–173. ACM Press, 1980.[GTBF03] Holger Giese, Matthias Tichy, Sven Burmester, and Stephan Flake.Towards the compositional verification of real-time UML designs.In
Proceedings of ESEC/SIGSOFT FSE 2003 , pages 38–47, 2003.[Har87] David Harel. Statecharts: A visual formalism for complex systems.
Science of Computer Programming , 8(3):231–274, 1987.[Hen96] Thomas A. Henzinger. The theory of hybrid automata. In
Proceed-ings of the 11th Annual Symposium on Logic in Computer Science(LICS) , pages 278–292. IEEE Computer Society Press, 1996.[Hen98] Thomas A. Henzinger. It’s about time: Real-time logics reviewed.In Davide Sangiorgi and Robert de Simone, editors,
Proceedingsof the 9th International Conference on Concurrency Theory (CON-CUR’98) , volume 1466 of
Lecture Notes in Computer Science , pages439–454. Springer-Verlag, 1998.[HHWT97] Thomas A. Henzinger, Pei-Hsin Ho, and Howard Wong-Toi.HYTECH: A model checker for hybrid systems.
International Jour-nal on Software Tools for Technology Transfer , 1(1–2), 1997.[HKPV98] Thomas A. Henzinger, Peter W. Kopke, Anuj Puri, and PravinVaraiya. What’s decidable about hybrid automata?
Journal ofComputer and System Sciences , 57(1):94–124, 1998.[HLN +
90] David Harel, Hagi Lachover, Amnon Naamad, Amir Pnueli, Michal Politi, Rivi Sherman, Aharon Shtull-Trauring, and Mark B. Trakhtenbrot. STATEMATE: A working environment for the development of complex reactive systems.
IEEE Transactions on Software En-gineering , 16(4):403–414, 1990.[HM96] Constance L. Heitmeyer and Dino Mandrioli, editors.
Formal Meth-ods for Real-Time Computing . John Wiley & Sons, 1996.[HMU00] John E. Hopcroft, Rajeev Motwani, and Jeffrey D. Ullman.
Intro-duction to Automata Theory, Languages, and Computation . Addi-son Wesley, 2nd edition, 2000.[HN96] David Harel and Amnon Naamad. The STATEMATE semanticsof statecharts.
ACM Transactions on Software Engineering andMethodology , 5(4):293–333, 1996.[HNSY94] Thomas A. Henzinger, Xavier Nicollin, Joseph Sifakis, and SergioYovine. Symbolic model checking for real-time systems.
Informationand Computation , 111(2):193–244, 1994.[Hoa78] C. A. R. Hoare. Communicating sequential processes.
Communications of the ACM, 21(8):666–677, 1978. [Hoa85] C. A. R. Hoare.
Communicating Sequential Processes . PrenticeHall, 1985.[Hol03] Gerard J. Holzmann.
The SPIN Model Checker: Primer and Ref-erence Manual . Addison-Wesley, 2003.[HPSS87] David Harel, Amir Pnueli, Jeanette P. Schmidt, and Rivi Sherman.On the formal semantics of statecharts. In
Proceedings of the 2ndIEEE Symposium on Logic in Computer Science (LICS’87) , pages54–64, 1987.[HR04] Yoram Hirshfeld and Alexander Rabinovich. Logics for realtime: Decidability and complexity.
Fundamenta Informaticae, 62(1):1–28, 2004. [JM86] Farnam Jahanian and Aloysius K. Mok. Safety analysis of timing properties in real-time systems.
IEEE Transactions on SoftwareEngineering , 12(9):890–904, 1986.[JM94] Farnam Jahanian and Aloysius K. Mok. Modechart: A specificationlanguage for real-time systems.
IEEE Transactions on SoftwareEngineering , 20(12):933–947, 1994.[Kam68] Johan Anthony Willem Kamp.
Tense Logic and the Theory of Lin-ear Order . PhD thesis, University of California at Los Angeles,1968.[KB04] Randy H. Katz and Gaetano Borriello.
Contemporary Logic Design. Prentice Hall, 2nd edition, 2004. [Kha95] Hassan Khalil.
Nonlinear Systems . Prentice-Hall, 2nd edition, 1995.[Kno96] Konrad Knopp.
Theory of Functions, Parts 1 and 2, Two VolumesBound as One , chapter 8, pages 83–111. Dover, 1996.[Koy90] Ron Koymans. Specifying real-time properties with metric temporallogic.
Real-Time Systems , 2(4):255–299, 1990.[Koy92] Ron Koymans. (real) time: A philosophical perspective. In J. W.de Bakker, Cornelis Huizing, Willem P. de Roever, and GrzegorzRozenberg, editors,
Proceedings of the REX Workshop: “Real-Time: Theory in Practice” , volume 600 of
Lecture Notes in Com-puter Science , pages 353–370. Springer-Verlag, 1992.[KP92] Yonit Kesten and Amir Pnueli. Timed and hybrid statechartsand their textual representation. In Jan Vytopil, editor,
Proceed-ings of the 2nd International Symposium on Formal Techniques inReal-Time and Fault-Tolerant Systems (FTRTFT’92) , volume 571of
Lecture Notes in Computer Science , pages 591–620. Springer-Verlag, 1992.[Kri63] Saul Aaron Kripke. Semantical analysis of modal logic I.
Zeitschriftfur Mathematische Logik und Grundlagen der Mathematik , 9:67–96,1963.[Lam80] Leslie Lamport. “Sometime” is sometimes “not never”: On the tem-poral logic of programs. In
Proceedings of the 7th ACM Symposiumon Principles of Programming Languages (SIGPLAN-SIGACT) ,pages 174–185. ACM Press, 1980.[Lam83] Leslie Lamport. What good is temporal logic? In R. E. A. Mason,editor,
Proceedings of the 9th IFIP World Congress , volume 83 of
Information Processing , pages 657–668. North-Holland, 1983.[Lam94] Leslie Lamport. The temporal logic of actions.
ACM Transationson Programming Languages and Systems , 16(3):872–923, 1994.[LMS02] Fran¸cois Laroussinie, Nicolas Markey, and Philippe Schnoebelen.Temporal logic with forgettable past. In
Proceedings of the 17thAnnual IEEE Symposium on Logic in Computer Science (LICS’02) ,pages 383–392. IEEE Computer Society Press, 2002.[LPY97] Kim G. Larsen, Paul Pettersson, and Wang Yi. UPPAAL in anutshell.
International Journal on Software Tools for TechnologyTransfer , 1(1–2), 1997.[LPZ85] Orna Lichtenstein, Amir Pnueli, and Lenore D. Zuck. The gloryof the past. In
Proceedings of 3rd Workshop on Logic of Programs ,volume 193 of
Lecture Notes in Computer Science, pages 196–218. Springer-Verlag, 1985. [LV96] Nancy Lynch and Frits W. Vaandrager. Forward and backward simulations – part II: Timing-based systems.
Information and Compu-tation , 128(1):1–25, 1996.[LWW07] Carsten Lutz, Dirk Walther, and Frank Wolter. Quantitative tem-poral logics over the reals: PSPACE and below.
Information and Computation, 205(1):99–123, 2007. [Mea55] George H. Mealy. A method for synthesizing sequential circuits.
Bell System Technical Journal , 34:1045–1079, 1955.[Men97] Elliott Mendelson.
Introduction to Mathematical Logic . Chapmanand Hall, fourth edition, 1997.[MF76] P. M. Merlin and D. J. Farber. Recoverability and communicationprotocols: Implications of a theoretical study.
IEEE Transactionson Communications , 24(9):1036–1043, 1976.[MG87] Dino Mandrioli and Carlo Ghezzi.
Theoretical Foundations of Com-puter Sciences . John Wiley & Sons, 1987.[Mil80] Robin Milner.
A Calculus of Communicating Systems , volume 92of
Lecture Notes in Computer Science . Springer-Verlag, 1980.[Mil89] Robin Milner.
Communication and Concurrency . Prentice Hall,1989.[MK99] Jeff Magee and Jeff Kramer.
Concurrency: State Models & JavaPrograms . John Wiley & Sons, 1999.[MMG92] Angelo Morzenti, Dino Mandrioli, and Carlo Ghezzi. A model para-metric real-time logic.
ACM Transactions on Programming Lan-guages and Systems , 14(4):521–573, 1992.[Moo56] Edward F. Moore. Gedanken-experiments on sequential machines.In
Automata Studies , volume 34 of
Annals of Mathematical Studies ,pages 129–153. Princeton University Press, 1956.[Mos83] Ben Moszkowski.
Reasoning about Digital Circuits. PhD thesis, Department of Computer Science, Stanford University, 1983. Technical Report STAN–CS–83–970. [Mos86] Ben Moszkowski.
Executing temporal logic programs . CambridgeUniversity Press, 1986.[MP92] Zohar Manna and Amir Pnueli.
The Temporal Logic of Reactive and Concurrent Systems: Specification. Springer-Verlag, 1992. [NOSY93] Xavier Nicollin, Alfredo Olivero, Joseph Sifakis, and Sergio Yovine. Hybrid automata: An algorithmic approach to the specification and verification of hybrid systems. In Robert L. Grossman, Anil Nerode, Anders P. Ravn, and Hans Rischel, editors,
Hybrid Systems , volume736 of
Lecture Notes in Computer Science , pages 149–178. Springer-Verlag, 1993.[NS91] Xavier Nicollin and Joseph Sifakis. An overview and synthesis oftimed process algebras. In Kim G. Larsen and Arne Skou, editors,
Proceedings of the 3rd International Workshop on Computer AidedVerification (CAV’91) , volume 575 of
Lecture Notes in ComputerScience , pages 376–398. Springer-Verlag, 1991.[Odi99] Piergiorgio Odifreddi.
Classical Recursion Theory . North Holland,1999.[OL07a] Martin Ouimet and Kristina Lundqvist. The TASM toolset: Speci-fication, simulation, and formal verification of real-time systems. InW. Damm and H. Hermanns, editors,
Proceedings of the 19th In-ternational Conference on Computer-Aided Verification (CAV’07) ,volume 4590 of
Lecture Notes in Computer Science , pages 126–130.Springer-Verlag, 2007.[OL07b] Martin Ouimet and Kristina Lundqvist. The timed abstract statemachine language: Abstract state machines for real-time systemengineering. In
Proceedings of the 14th International Workshop onAbstract State Machines (ASM’07) , 2007.[Ost89] Jonathan S. Ostroff.
Temporal Logic for Real Time Sytems . Ad-vanced Software Development Series. John Wiley & Sons, 1989.[Ost90] Jonathan S. Ostroff. Deciding properties of timed transition models.
IEEE Transactions on Parallel and Distributed Systems , 1(2):170–183, 1990.[Ost92] Jonathan S. Ostroff. Formal methods for the specification and de-sign of real-time safety critical systems.
Journal of Systems andSoftware , 18(1):33–60, 1992.[Ost97] Jonathan S. Ostroff. A visual toolset for the design of real-timediscrete-event systems.
IEEE Transactions on Control SystemsTechnology , 5(3):320–337, 1997.[Ost99] Jonathan S. Ostroff. Composition and refinement of discrete real-time systems.
ACM Transactions on Software Engineering andMethodology , 8(1):1–48, 1999.[Pap94] Christos H. Papadimitriou.
Computational Complexity. Addison-Wesley, 1994. [Per93] Adriano Peron. Synchronous and asynchronous models for statecharts. Technical Report TD-21/93, Dipartimento di Informatica, Università di Pisa, 1993. [Pet63] Carl A. Petri. Fundamentals of a theory of asynchronous information flow. In
Proceedings of IFIP Congress , pages 386–390. NorthHolland Publishing Company, 1963.[Pet81] James L. Peterson.
Petri Net theory and the Modelling of Systems .Prentice-Hall, 1981.[Pnu77] Amir Pnueli. The temporal logic of programs. In
Proceedingsof the 18th IEEE Symposium on Fundations of Computer Science(FOCS’77) , pages 46–67, 1977.[PP04] Dominique Perrin and Jean-´Eric Pin.
Infinite Words , volume 141of
Pure and Applied Mathematics . Elsevier, 2004.[PS91] Amir Pnueli and Michal Shalev. What is in a step: On the seman-tics of statecharts. In Takayasu Ito and Albert R. Meyer, editors,
Proceedings of the International Conference on Theoretical Aspectsof Computer Software (TACS’91) , volume 526 of
Lecture Notes inComputer Science , pages 244–264. Springer-Verlag, 1991.[Rei85] Wolfgang Reisig.
Petri Nets: An Introduction . EATCS Monographson Theoretical Computer Science. Springer-Verlag, 1985.[RKNP04] J. Rutten, M. Kwiatkowska, G. Norman, and D. Parker.
Mathemat-ical Techniques for Analyzing Concurrent and Probabilistic Systems ,volume 23 of
CRM Monograph Series . American Mathematical So-ciety, 2004.[Rog87] Hartley Rogers, Jr.
Theory of Recursive Functions and EffectiveComputability . MIT Press, 1987.[Rom90] Gruia-Catalin Roman. Formal specification of geographic data pro-cessing requirements.
IEEE Transaction on Knowledge and DataEngineering , 2(4):370–380, 1990.[Ros97] A. William Roscoe.
The Theory and Practice of Concurrency .Prentice-Hall International, 1997.[RR88] George M. Reed and A. William Roscoe. A timed model for Com-municating Sequential Processes.
Theoretical Computer Science ,58(1–3):249–261, 1988.[RU71] Nicholas Rescher and Alasdair Urquhart.
Temporal Logic . Springer-Verlag, 1971.[Sch00] Steven Schneider.
Concurrent and Real-Time Systems: The CSPApproach . John Wiley & Sons, 2000.78Sch02] Philippe Schnoebelen. The complexity of temporal logic modelchecking. In Philippe Balbiani, Nobu-Yuki Suzuki, Frank Wolter,and Michael Zakharyaschev, editors,
Proceedings of the 4th Con-ference on Advances in Modal Logic , pages 393–436. King’s CollegePublications, 2002.[Sip05] Michael Sipser.
Introduction to the Theory of Computation . CourseTechnology, 2nd edition, 2005.[SMV83] Richard L. Schwartz, P. M. Melliar-Smith, and Friedrich H. Vogt.An interval logic for higher-level temporal reasoning. In
Proceedingsof the 2nd Annual ACM Symposium on Principles of DistributedComputing (PODC’83) , pages 198–212. ACM Press, 1983.[Som04] Ian Sommerville.
Software Engineering . Addison Wesley, 7th edi-tion, 2004.[SP05] Sigurd Skogestad and Ian Postlethwaite.
Multivariable FeedbackControl: Analysis and Design . Wiley, 2nd edition, 2005.[TGI] Petri nets tools database. .[Tho90] Wolfgang Thomas. Automata on infinite objects. In Jan vanLeeuwen, editor,
Handbook of Theoretical Computer Science , vol-ume B, pages 133–164. Elsevier Science, 1990.[UML04] Unified modeling language specification. Technical Reportformal/04-07-02, Object Management Group, 2004.[UML05] UML 2.0 superstructure specification. Technical Report formal/05-07-04, Object Management Group, 2005.[Var96] Moshe Y. Vardi. An automata-theoretic approach to linear tempo-ral logic. In
Proceedings of the 8th BANFF Higher Order WorkshopConference on Logics for Concurrency: Structure versus Automata ,volume 1043 of
Lecture Notes in Computer Science , pages 238–266.Springer-Verlag, 1996.[Var01] Moshe Y. Vardi. Branching vs. linear time: Final showdown. InTiziana Margaria and Wang Yi, editors,
Proceedings of the 7th In-ternational Conference on Tools and Algorithms for the Construc-tion and Analysis of Systems (TACAS’01) , volume 2031 of
LectureNotes in Computer Science , pages 1–22. Springer-Verlag, 2001.[vEVD89] Peter H. J. van Eijk, C. A. Vissers, and Michel Diaz, editors.
Theformal description technique LOTOS . Elsevier Science, 1989.79von94] Michael von der Beeck. A comparison of statecharts variants. In
Proceedings of the 3rd International Symposium on Formal Tech-niques in Real-Time and Fault-Tolerant Systems , volume 863 of
Lec-ture Notes in Computer Science , pages 128–148. Springer-Verlag,1994.[vS00] Arjan van der Schaft and Hans Schumacher.
An Introduction toHybrid Dynamical Systems , volume 251 of
Lecture Notes in Controland Information Sciences . Springer-Verlag, 2000.[Wei] Eric W. Weisstein. Real analytic function. From MathWorld– A Wolfram Web Resource. http://mathworld.wolfram.com/RealAnalyticFunction.html .[Wir77] Niklaus Wirth. Toward a discipline of real-time programming.
Com-munications of the ACM , 20(8):577–583, 1977.[WK99] Jos B. Warmer and Anneke G. Kleppe, editors.
The Object Con-straint Language . Addison-Wesley, 1999.[Wol94] Stephen Wolfram.
Cellular automata and complexity . Perseus BooksGroup, 1994.[Yov97] Sergio Yovine. Kronos: A verification tool for real-time systems.