A faster exact multiprocessor schedulability test for sporadic tasks
Markus Lindström, Gilles Geeraerts, Joël Goossens
Université libre de Bruxelles
Département d'Informatique, Faculté des Sciences
Avenue Franklin D. Roosevelt 50, CP 212
1050 Bruxelles, Belgium
{mlindstr, gilles.geeraerts, joel.goossens}@ulb.ac.be
Abstract
Baker and Cirinei introduced an exact but naive algorithm [3], based on solving a state reachability problem in a finite automaton, to check whether sets of sporadic hard real-time tasks are schedulable on identical multiprocessor platforms. However, the algorithm suffered from poor performance due to the exponential size of the automaton relative to the size of the task set. In this paper, we successfully apply techniques developed by the formal verification community, specifically antichain algorithms [11], by defining and proving the correctness of a simulation relation on Baker and Cirinei's automaton. We show that our improved algorithm yields dramatically better performance for the schedulability test and opens the door to many further improvements.
1. Introduction
In this research we consider the schedulability problem of hard real-time sporadic constrained-deadline task systems upon identical multiprocessor platforms. Hard real-time systems are systems where tasks are not only required to provide correct computations but are also required to adhere to strict deadlines [16].

Devising an exact schedulability criterion for sporadic task sets on multiprocessor platforms has so far proven difficult, as no worst-case scenario (nor critical instant) is known. It was notably shown in [14] that the periodic case is not necessarily the worst on multiprocessor systems. In this context, the real-time community has mainly focused on the development of sufficient schedulability tests, which correctly identify all unschedulable task sets but may misidentify some schedulable systems as being unschedulable [2] for a given platform and scheduling policy (see e.g. [5, 4]).

Baker and Cirinei introduced the first correct algorithm [3] that verifies exactly whether a sporadic task system is schedulable on an identical multiprocessor platform, by solving a reachability problem on a finite state automaton using a naive brute-force algorithm. It suffers, however, from the fact that the number of states is exponential in the size of the task set and its periods, which makes the algorithm intractable even for small task sets with large enough periods.

In this paper, we apply techniques developed by the formal verification community, specifically by Doyen, Raskin et al. [11, 9], who developed faster algorithms to solve the reachability problem based on data structures known as antichains.
Their method has been shown to be provably better [11] than naive state traversal algorithms such as those used in [3] for deciding reachability from a set of initial states to a given set of final states.

An objective of this work is to be as self-contained as possible, to allow readers from the real-time community to fully understand the concepts borrowed from the formal verification community. We also hope our work will kickstart a "specialisation" of the methods presented herein within the realm of real-time scheduling, thus bridging the two communities.
Related work.
This work is not the first contribution to apply techniques and models first proposed in the setting of formal verification to real-time scheduling. In the field of operational research, Abdeddaïm and Maler have studied the use of stopwatch automata to solve job-shop scheduling problems [1]. Cassez has recently exploited game theory, specifically timed games, to bound worst-case execution times on modern computer architectures, taking into account caching and pipelining [8]. Fersman et al. have studied a similar problem and introduced task automata, which assume continuous time [12], whereas we consider discrete time in our work. They showed that, given selected constraints, schedulability could be undecidable in their model. Bonifaci and Marchetti-Spaccamela have studied the related problem of feasibility of multiprocessor sporadic systems in [6] and have established an upper bound on its complexity.

This research.

We define a restriction to constrained deadlines (systems where the relative deadline of tasks is no longer than their minimal interarrival time) of Baker and Cirinei's automaton in a more formal way than in [3]. We also formulate various scheduling policy properties, such as memorylessness, in the framework of this automaton. Our main contribution is the design and proof of correctness of a non-trivial simulation relation on the automaton, required to successfully apply a generic algorithm developed in the formal verification community, known as an antichain algorithm, to Baker and Cirinei's automaton to prove or disprove the schedulability of a given sporadic task system. Finally, we will show through implementation and experimental analysis that our proposed algorithm outperforms Baker and Cirinei's original brute-force algorithm.
Paper organization.
Section 2 defines the real-time scheduling problem we focus on, i.e. devising an exact schedulability test for sporadic task sets on identical multiprocessor platforms. Section 3 formalizes the model (a non-deterministic automaton) we use to describe the problem, and formulates how the schedulability test can be mapped to a reachability problem in this model. We also formalize various real-time scheduling concepts in the framework of our formal model. Section 4 then discusses how the reachability problem can be solved. We present the classical breadth-first algorithm used in [3] and introduce an improved algorithm that makes use of techniques borrowed from the formal verification community [11]. The algorithm requires coarse simulation relations to work faster than the standard breadth-first algorithm. Section 5 introduces the idle tasks simulation relation, which can be exploited by the aforementioned algorithm. Section 6 then showcases experimental results comparing the breadth-first and our improved algorithm using the aforementioned simulation relation, showing that our algorithm outperforms the naive one. Section 7 concludes our work. Appendix A gives a detailed proof of a lemma used in Section 4.
2. Problem definition
We consider an identical multiprocessor platform with m processors and a sporadic task set τ = {τ_1, τ_2, . . . , τ_n}. Time is assumed to be discrete. A sporadic task τ_i is characterized by a minimum interarrival time T_i > 0, a relative deadline D_i > 0 and a worst-case execution time (also written WCET) C_i > 0. A sporadic task τ_i submits a potentially infinite number of jobs to the system, with each request being separated by at least T_i units of time. We will assume jobs are not parallel, i.e. a job only executes on one single processor at a time (though it may migrate from one processor to another during execution). We also assume jobs are independent. We wish to establish an exact schedulability test for any sporadic task set τ that tells us whether the set is schedulable on the platform with a given deterministic, predictable and preemptive scheduling policy. In the remainder of this paper, we will assume we only work with constrained deadline systems (i.e. where ∀τ_i : D_i ≤ T_i), which embody many real-time systems in practice.
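The task model above can be sketched as a small Python record. This is an illustration only: the class and field names, and the example parameter values, are our own choices, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SporadicTask:
    """A constrained-deadline sporadic task."""
    T: int  # minimum interarrival time, T > 0
    D: int  # relative deadline, 0 < D <= T (constrained deadline)
    C: int  # worst-case execution time (WCET), 0 < C <= D

    def __post_init__(self):
        # enforce the constrained-deadline assumption of the paper
        assert 0 < self.C <= self.D <= self.T

# hypothetical task set and platform (values are ours, not the paper's)
tau = (SporadicTask(T=2, D=2, C=1), SporadicTask(T=3, D=3, C=2))
m = 2  # number of identical processors
```

The assertion makes the standing hypothesis C_i ≤ D_i ≤ T_i explicit, so malformed task sets are rejected at construction time.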
3. Formal definition of the Baker-Cirinei au-tomaton
Baker and Cirinei's automaton as presented in [3] models the evolution of an arbitrary deadline sporadic task set (with a FIFO policy for jobs of a given task) scheduled on an identical multiprocessor platform with m processors. In this paper, we focus on constrained deadline systems, as this hypothesis simplifies the definition of the automaton. We expect to analyze Baker and Cirinei's more complete construct in future works.

The model presented herein allows use of preemptive, deterministic and predictable scheduling policies. It can, however, be generalized to model broader classes of schedulers. We will discuss this aspect briefly in Section 7.

Definition 1. An automaton is a tuple A = ⟨V, E, S_0, F⟩, where V is a finite set of states, E ⊆ V × V is the set of transitions, S_0 ∈ V is the initial state and F ⊆ V is a set of target states.

The problem on automata we are concerned with is that of reachability (of target states). A path in an automaton A = ⟨V, E, S_0, F⟩ is a finite sequence v_1, . . . , v_ℓ of states s.t. for all 1 ≤ i ≤ ℓ − 1: (v_i, v_{i+1}) ∈ E. Let V′ ⊆ V be a set of states of A and let v ∈ V. If there exists a path v_1, . . . , v_ℓ in A with v_1 = v and v_ℓ ∈ V′, we say that v can reach V′. Then, the reachability problem asks, given an automaton A, whether the initial state S_0 can reach the set of target states F.

Let τ = {τ_1, τ_2, . . . , τ_n} be a set of sporadic tasks and m be a number of processors. This section is devoted to explaining how to model the behaviour of such a system by means of an automaton A, and how to reduce the schedulability problem of τ on m processors to an instance of the reachability problem in A.
At any moment during the execution of such a system, the information we need to retain about each task τ_i is: (i) the earliest next arrival time nat(τ_i) relative to the current instant and (ii) the remaining processing time rct(τ_i) of the currently ready job of τ_i. Hence the definition of system state:

Definition 2 (System states). Let τ = {τ_1, τ_2, . . . , τ_n} be a set of sporadic tasks. A system state of τ is a tuple S = ⟨nat_S, rct_S⟩ where nat_S is a function from τ to {0, 1, . . . , T_max}, where T_max ≝ max_i T_i, and rct_S is a function from τ to {0, 1, . . . , C_max}, where C_max ≝ max_i C_i. We denote by States(τ) the set of all system states of τ.

In order to define the set of transitions of the automaton, we need to rely on ancillary notions:

Definition 3 (Eligible task). A task τ_i is eligible in the state S if it can submit a job from this configuration (i.e. if and only if the task does not currently have an active job and the last job was submitted at least T_i time units ago). Formally, the set of eligible tasks in state S is:

Eligible(S) ≝ {τ_i | nat_S(τ_i) = rct_S(τ_i) = 0}

Definition 4 (Active task). A task is active in state S if it currently has a job that has not finished in S. Formally, the set of active tasks in S is:

Active(S) ≝ {τ_i | rct_S(τ_i) > 0}

A task that is not active in S is said to be idle in S.

Definition 5 (Laxity [3]). The laxity of a task τ_i in a system state S is:

laxity_S(τ_i) ≝ nat_S(τ_i) − (T_i − D_i) − rct_S(τ_i)

Definition 6 (Failure state). A state S is a failure state iff the laxity of at least one task is negative in S. Formally, the set of failure states on τ is:

Fail_τ ≝ {S | ∃τ_i ∈ τ : laxity_S(τ_i) < 0}

Thanks to these notions we are now ready to explain how to build the transition relation of the automaton that models the behaviour of τ. For that purpose, we first choose a scheduler.
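Before moving on, the predicates of Definitions 3–6 admit a direct encoding. Below is a minimal Python sketch, assuming a state is represented as a tuple of (nat, rct) pairs indexed by task; this representation and all names are our own illustrative choices.

```python
from collections import namedtuple

Task = namedtuple("Task", ["T", "D", "C"])  # interarrival, deadline, WCET

# A system state maps each task (by index) to a pair (nat, rct); using a
# tuple of pairs keeps states hashable, which the traversal of Section 4
# will need.

def eligible(state):
    """Definition 3: tasks that may submit a job, i.e. nat = rct = 0."""
    return {i for i, (nat, rct) in enumerate(state) if nat == 0 and rct == 0}

def active(state):
    """Definition 4: tasks with an unfinished job, i.e. rct > 0."""
    return {i for i, (_, rct) in enumerate(state) if rct > 0}

def laxity(state, tasks, i):
    """Definition 5: laxity_S(tau_i) = nat_S(tau_i) - (T_i - D_i) - rct_S(tau_i)."""
    nat, rct = state[i]
    return nat - (tasks[i].T - tasks[i].D) - rct

def is_failure(state, tasks):
    """Definition 6: a state is a failure state iff some laxity is negative."""
    return any(laxity(state, tasks, i) < 0 for i in range(len(tasks)))
```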
Intuitively, a scheduler is a function Run that maps each state S to a set of at most m active tasks Run(S) to be run:

Definition 7 (Scheduler). A (deterministic) scheduler for τ on m processors is a function Run : States(τ) → 2^τ s.t. for all S: Run(S) ⊆ Active(S) and 0 ≤ |Run(S)| ≤ m. Moreover:

1. Run is work-conserving iff for all S: |Run(S)| = min{m, |Active(S)|}.

2. Run is memoryless iff for all S_1, S_2 ∈ States(τ) with Active(S_1) = Active(S_2): (∀τ_i ∈ Active(S_1): nat_{S_1}(τ_i) = nat_{S_2}(τ_i) ∧ rct_{S_1}(τ_i) = rct_{S_2}(τ_i)) implies Run(S_1) = Run(S_2).

Intuitively, the work-conserving property implies that the scheduler always exploits as many processors as available. The memoryless property implies that the decisions of the scheduler are not affected by tasks that are idle and that the scheduler does not consider the past to make its decisions. As examples, we can formally define the preemptive global DM and EDF schedulers. Remark that by modeling the scheduler as a function, we restrict ourselves to deterministic schedulers.

Definition 8 (Preemptive global DM scheduler). Let ℓ ≝ min{m, |Active(S)|}. Then, Run_DM is the function that computes Run_DM(S) ≝ {τ_{i_1}, τ_{i_2}, . . . , τ_{i_ℓ}} s.t. for all 1 ≤ j ≤ ℓ and for all τ_k in Active(S) \ Run_DM(S), we have D_k > D_{i_j} or (D_k = D_{i_j} ∧ k > i_j).

Definition 9 (Preemptive global EDF scheduler). Let ttd_S(τ_i) ≝ nat_S(τ_i) − (T_i − D_i) be the time remaining before the absolute deadline of the last submitted job [3] of τ_i ∈ Active(S) in state S. Let ℓ ≝ min{m, |Active(S)|}. Then, Run_EDF is the function that computes Run_EDF(S) ≝ {τ_{i_1}, τ_{i_2}, . . . , τ_{i_ℓ}} s.t. for all 1 ≤ j ≤ ℓ and for all τ_k in Active(S) \ Run_EDF(S), we have ttd_S(τ_k) > ttd_S(τ_{i_j}) or (ttd_S(τ_k) = ttd_S(τ_{i_j}) ∧ k > i_j).

By Definition 7, global DM and EDF are thus work-conserving, and it can also be verified that they are memoryless. In [3], suggestions to model several other schedulers were presented. It was particularly shown that adding supplementary information to system states could allow broader classes of schedulers to be used. Intuitively, states could e.g. keep track of what tasks were executed in their predecessor to implement non-preemptive schedulers.

Clearly, in the case of the scheduling of sporadic tasks, two types of events can modify the current state of the system:

1. Clock-tick transitions model the elapsing of time for one time unit, i.e. the execution of the scheduler and the running of jobs.

2. Request transitions (called ready transitions in [3]) model requests from sporadic tasks at a given instant in time.

Let S be a state in States(τ), and let Run be a scheduler. Then, letting one time unit elapse from S under the scheduling policy imposed by Run amounts to decrementing the rct of the tasks in Run(S) (and only those tasks), and to decrementing the nat of all tasks. Formally:

Definition 10. Let S = ⟨nat_S, rct_S⟩ ∈ States(τ) be a system state and Run be a scheduler for τ on m processors. Then, we say that S⁺ = ⟨nat⁺_S, rct⁺_S⟩ is a clock-tick successor of S under Run, denoted S −Run→ S⁺, iff:

1. for all τ_i ∈ Run(S): rct⁺_S(τ_i) = rct_S(τ_i) − 1;
2. for all τ_i ∉ Run(S): rct⁺_S(τ_i) = rct_S(τ_i);
3. for all τ_i ∈ τ: nat⁺_S(τ_i) = max{nat_S(τ_i) − 1, 0}.

Let S be a state in States(τ). Intuitively, when the system is in state S, a request by some task τ_i for submitting a new job has the effect of updating S by setting nat(τ_i) to T_i and rct(τ_i) to C_i. This can be generalised to sets of tasks. Formally:

Definition 11. Let S ∈ States(τ) be a system state and let τ′ ⊆ Eligible(S) be a set of tasks that are eligible to submit a new job in the system. Then, we say that S′ is a τ′-request successor of S, denoted S −τ′→ S′, iff:

1. for all τ_i ∈ τ′: nat_{S′}(τ_i) = T_i and rct_{S′}(τ_i) = C_i;
2. for all τ_i ∈ τ \ τ′: nat_{S′}(τ_i) = nat_S(τ_i) and rct_{S′}(τ_i) = rct_S(τ_i).

Remark that we allow τ′ = ∅ (that is, no task asks to submit a new job in the system).

We are now ready to define the automaton A(τ, Run) that formalises the behavior of the system of sporadic tasks τ, when executed upon m processors under a scheduling policy Run:

Definition 12.
Given a set of sporadic tasks τ and a scheduler Run for τ on m processors, the automaton A(τ, Run) is the tuple ⟨V, E, S_0, F⟩ where:

1. V = States(τ);
2. (S_1, S_2) ∈ E iff there exists S′ ∈ States(τ) and τ′ ⊆ τ s.t. S_1 −τ′→ S′ −Run→ S_2;
3. S_0 = ⟨nat_0, rct_0⟩ where for all τ_i ∈ τ: nat_0(τ_i) = rct_0(τ_i) = 0;
4. F = Fail_τ.

Figure 1 illustrates a possible graphical representation of one such automaton, which will be analyzed further in Section 5. On this example, the automaton depicts the following EDF-schedulable sporadic task set, using an EDF scheduler and assuming m = 2:

      T_i   D_i   C_i
τ_1
τ_2

System states are represented by nodes. For the purpose of saving space, we represent a state S in the [αβ, γδ] format, meaning nat_S(τ_1) = α, rct_S(τ_1) = β, nat_S(τ_2) = γ and rct_S(τ_2) = δ. We explicitly represent clock-tick transitions by edges labelled with Run, and τ′-request transitions by edges labelled with τ′. τ′ = ∅ loops are implicit on each state. Note that, in accordance with Definition 12, there are no successive τ′-request transitions, and there are thus no such transitions from states such as [21, …] and [00, …]. Also note that the automaton indeed models the evolution of a sporadic system, of which the periodic case is one possible path (the particular case of a synchronous system is found by taking the maximal τ′-request transition whenever possible, starting from [00, 00]).

We remark that our definition deviates slightly from that of Baker and Cirinei. In our definition, a path in the automaton corresponds to an execution of the system that alternates between request transitions (possibly with an empty set of requests) and clock-tick transitions. In their work [3], Baker and Cirinei allow any sequence of clock ticks and requests, but restrict each request to a single task at a time. It is easy to see that these two definitions are equivalent.
A sequence of k clock ticks in Baker's automaton corresponds in our case to a path S_1, S_2, . . . , S_{k+1} s.t. for all 1 ≤ i ≤ k: S_i −∅→ S_i −Run→ S_{i+1}. A maximal sequence of successive requests by τ_{i_1}, τ_{i_2}, . . . , τ_{i_k}, followed by a clock tick, corresponds in our case to a single edge (S_1, S_2) s.t. S_1 −{τ_{i_1}, . . . , τ_{i_k}}→ S′ −Run→ S_2 for some S′. Conversely, each edge (S_1, S_2) in A(τ, Run) s.t. S_1 −τ′→ S′ −Run→ S_2, for some state S′ and set of tasks τ′ = {τ_{i_1}, . . . , τ_{i_k}}, corresponds to a sequence of successive requests by τ_{i_1}, . . . , τ_{i_k} followed by a clock tick in Baker's setting.

The purpose of the definition of A(τ, Run) should now be clear to the reader. Each possible execution of the system corresponds to a path in A(τ, Run) and vice-versa. States in Fail_τ correspond to states of the system where a deadline will unavoidably be missed. Hence, the set of sporadic tasks τ is feasible under scheduler Run on m processors iff Fail_τ is not reachable in A(τ, Run) [3]. Unfortunately, the number of states of A(τ, Run) can be intractable even for very small sets of tasks τ. In the next section we present generic techniques to solve the reachability problem in an efficient fashion, and apply them to our case. Experimental results given in Section 6 demonstrate the practical interest of these methods.
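Definitions 10–12 suggest a direct way to compute the successors of a state on the fly. The following Python sketch is our own illustration (the state encoding as a tuple of (nat, rct) pairs, the helper names, and the use of EDF as the example scheduler are assumptions, not the paper's code): it enumerates every τ′-request successor over subsets of eligible tasks, then applies one clock tick.

```python
from collections import namedtuple
from itertools import combinations

Task = namedtuple("Task", ["T", "D", "C"])

def request(state, tasks, subset):
    """tau'-request successor (Definition 11): each released task gets
    nat = T_i and rct = C_i; every other task is left unchanged."""
    return tuple((tasks[i].T, tasks[i].C) if i in subset else pair
                 for i, pair in enumerate(state))

def clock_tick(state, run_set):
    """Clock-tick successor (Definition 10): decrement rct of the
    scheduled tasks only, and decrement every nat, saturating at 0."""
    return tuple((max(nat - 1, 0), rct - 1 if i in run_set else rct)
                 for i, (nat, rct) in enumerate(state))

def run_edf(state, tasks, m):
    """Preemptive global EDF (Definition 9): run the at most m active
    tasks with the smallest time-to-deadline, ties broken by index."""
    act = [i for i, (_, rct) in enumerate(state) if rct > 0]
    ttd = lambda i: state[i][0] - (tasks[i].T - tasks[i].D)
    return frozenset(sorted(act, key=lambda i: (ttd(i), i))[:m])

def successors(state, tasks, m, run=run_edf):
    """One-step successors in A(tau, Run) (Definition 12): any subset of
    the eligible tasks may release a job, then one clock tick elapses."""
    elig = [i for i, (nat, rct) in enumerate(state) if nat == 0 and rct == 0]
    succ = set()
    for k in range(len(elig) + 1):
        for subset in combinations(elig, k):
            s = request(state, tasks, set(subset))
            succ.add(clock_tick(s, run(s, tasks, m)))
    return succ
```

Enumerating subsets of eligible tasks is exponential in the number of simultaneously eligible tasks, which mirrors the non-determinism of sporadic releases in the automaton itself.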
4. Solving the reachability problem
Let us now discuss techniques to solve the reachability problem. Let A = ⟨V, E, S_0, F⟩ be an automaton. For any S ∈ V, let Succ(S) = {S′ | (S, S′) ∈ E} be the set of one-step successors of S. For a set of states R, we let Succ(R) = ∪_{S∈R} Succ(S). Then, solving the reachability problem on A can be done by a breadth-first traversal of the automaton, as shown in Algorithm 1.

Algorithm 1:
Breadth-first traversal.

begin
    i ← 0; R_0 ← {S_0};
    repeat
        i ← i + 1;
        R_i ← R_{i−1} ∪ Succ(R_{i−1});
        if R_i ∩ F ≠ ∅ then return Reachable;
    until R_i = R_{i−1};
    return Not reachable;
end

Intuitively, for all i ≥ 0, R_i is the set of states that are reachable from S_0 in at most i steps. The algorithm computes the sets R_i up to the point where (i) either a state from F is met or (ii) the sequence of R_i stabilises because no new states have been discovered, and we declare F to be unreachable (remark that the exploration order does not matter). This algorithm always terminates and returns the correct answer. Indeed, either F is reachable in, say, k steps, and then R_k ∩ F ≠ ∅, and we return 'Reachable'. Or F is not reachable, and the sequence eventually stabilises because R_0 ⊆ R_1 ⊆ R_2 ⊆ · · · ⊆ V, and V is a finite set. Then, we exit the loop and return 'Not reachable'. Remark that this algorithm has the advantage that the whole automaton does not need to be stored in memory before starting the computation, as Definition 10 and Definition 11 allow us to compute Succ(S) on the fly for any state S. Nevertheless, in the worst case, this procedure needs to explore the whole automaton and is thus in O(|V|), which can be too large to handle in practice [3].

Equipped with such a simple definition of automaton, this is the best algorithm we can hope for. However, in many practical cases, the set of states of the automaton is endowed with a strong semantic that can be exploited to speed up Algorithm 1. In our case, states are tuples of integers that characterise sporadic tasks running in a system. To harness this information, we rely on the formal notion of simulation:

Definition 13.
Let A = ⟨V, E, S_0, F⟩ be an automaton. A simulation relation for A is a preorder ⪰ ⊆ V × V s.t.:

1. For all S_1, S_2, S_3 s.t. (S_1, S_2) ∈ E and S_3 ⪰ S_1, there exists S_4 s.t. (S_3, S_4) ∈ E and S_4 ⪰ S_2.
2. For all S_1, S_2 s.t. S_2 ⪰ S_1: S_1 ∈ F implies S_2 ∈ F.

Whenever S_2 ⪰ S_1, we say that S_2 simulates S_1. Whenever S_2 ⪰ S_1 but S_1 ⋡ S_2, we write S_2 ≻ S_1.

Intuitively, this definition says that whenever a state S_3 simulates a state S_1, then S_3 can mimick every possible move of S_1 by moving to a similar state: for every edge (S_1, S_2), there is a corresponding edge (S_3, S_4), where S_4 simulates S_2. Moreover, we request that a target state can only be simulated by a target state. Remark that for a given automaton there can be several simulation relations (for instance, equality is always a simulation relation).

The key consequence of this definition is that if S_1 is a state that can reach F, and if S_2 ⪰ S_1, then S_2 can reach F too. Indeed, if S_1 can reach F, there is a path v_0, v_1, . . . , v_n with v_0 = S_1 and v_n ∈ F. Using Definition 13 we can inductively build a path v′_0, v′_1, . . . , v′_n s.t. v′_0 = S_2 and v′_i ⪰ v_i for all i ≥ 0. Thus, in particular, v′_n ⪰ v_n ∈ F, hence v′_n ∈ F by Definition 13. This means that S_2 can reach F too. Thus, when we compute two states S_1 and S_2 with S_2 ⪰ S_1 at some step of Algorithm 1, we do not need to further explore the successors of S_1. Indeed, Algorithm 1 tries to detect reachable target states. So, if S_1 cannot reach a target state, it is safe not to explore its successors. Otherwise, if S_1 can reach a target state, then S_2 can reach a target state too, so it is safe to explore the successors of S_2 only. By exploiting this heuristic, Algorithm 1 could explore only a (small) subset of the states of A, which has the potential for a dramatic improvement in computation time.
Remark that such techniques have already been exploited in the setting of formal verification, where several so-called antichain algorithms have been studied [9, 11, 13] and have proved to be several orders of magnitude more efficient than the classical techniques of the literature.

Formally, for a set of states V′ ⊆ V, we let Max_⪰(V′) = {S ∈ V′ | ∄S′ ∈ V′ with S′ ≻ S}. Intuitively, Max_⪰(V′) is obtained from V′ by removing all the states that are strictly simulated by some other state in V′. So the states we keep in Max_⪰(V′) are irredundant wrt ⪰ (they form an antichain of states wrt ⪰). Then, we consider Algorithm 2, which is an improved version of Algorithm 1.

Algorithm 2:
Improved breadth-first traversal.

begin
    i ← 0; R̃_0 ← {S_0};
    repeat
        i ← i + 1;
        R̃_i ← R̃_{i−1} ∪ Succ(R̃_{i−1});
        R̃_i ← Max_⪰(R̃_i);
        if R̃_i ∩ F ≠ ∅ then return Reachable;
    until R̃_i = R̃_{i−1};
    return Not reachable;
end

Proving the correctness and termination of Algorithm 2 is a little bit more involved than for Algorithm 1 and relies on the following lemma (proof in appendix):
Lemma 14.
Let A be an automaton and let ⪰ be a simulation relation for A. Let R_0, R_1, . . . and R̃_0, R̃_1, . . . denote respectively the sequences of sets computed by Algorithm 1 and Algorithm 2 on A. Then, for all i ≥ 0: R̃_i = Max_⪰(R_i).

Intuitively, this means that some state S that is in R_i might not be present in R̃_i, but that we always keep in R̃_i a state S′ that simulates S. Then, we can prove that:
For all automata A = ⟨V, E, S_0, F⟩, Algorithm 2 terminates and returns "Reachable" iff F is reachable in A.

Proof. The proof relies on the comparison between the sequence of sets R_0, R_1, . . . computed by Algorithm 1 (which is correct and terminates) and the sequence R̃_0, R̃_1, . . . computed by Algorithm 2.

Assume F is reachable in A in k steps and not reachable in fewer than k steps. Then, there exists a path v_0, v_1, . . . , v_k with v_0 = S_0, v_k ∈ F, and, for all 0 ≤ i ≤ k: v_i ∈ R_i. Let us first show per absurdum that the loop in Algorithm 2 does not finish before the kth step. Assume it is not the case, i.e. there exists 0 < ℓ < k s.t. R̃_ℓ = R̃_{ℓ−1}. This implies that Max_⪰(R_ℓ) = Max_⪰(R_{ℓ−1}) through Lemma 14. Since R_ℓ ≠ R_{ℓ−1}, we deduce that all the states that have been added to R_ℓ are simulated by some state already present in R_{ℓ−1}: for all S ∈ R_ℓ, there is S′ ∈ R_{ℓ−1} s.t. S′ ⪰ S. Thus, in particular, there is S′ ∈ R_{ℓ−1} s.t. S′ ⪰ v_ℓ. We consider two cases. Either there is S′ ∈ R_{ℓ−1} s.t. S′ ⪰ v_k. Since v_k ∈ F, F ∩ R_{ℓ−1} ≠ ∅, which contradicts our hypothesis that F is not reachable in fewer than k steps. Otherwise, let 0 ≤ m < k be the least position in the path s.t. there is S′ ∈ R_{ℓ−1} with S′ ⪰ v_m, but there is no S′′ ∈ R_{ℓ−1} with S′′ ⪰ v_{m+1}. In this case, since S′ ⪰ v_m and (v_m, v_{m+1}) ∈ E, there is S ∈ Succ(S′) ⊆ R_ℓ s.t. S ⪰ v_{m+1}. However, we have made the hypothesis that every element in R_ℓ is simulated by some element in R_{ℓ−1}. Thus, there is S′′ ∈ R_{ℓ−1} s.t.
S′′ ⪰ S. Since S ⪰ v_{m+1}, we deduce that S′′ ⪰ v_{m+1}, with S′′ ∈ R_{ℓ−1}, which contradicts our assumption that no state of R_{ℓ−1} simulates v_{m+1}. Thus, Algorithm 2 will not stop before the kth iteration, and we know that there is S_F ∈ R_k s.t. S_F ∈ F. By Lemma 14, R̃_k = Max_⪰(R_k), hence there is S′ ∈ R̃_k s.t. S′ ⪰ S_F. By Definition 13, S′ ∈ F since S_F ∈ F. Hence, R̃_k ∩ F ≠ ∅ and Algorithm 2 terminates after k steps with the correct answer.

Otherwise, assume F is not reachable in A. Hence, for every i ≥ 0: R_i ∩ F = ∅. Since R̃_i ⊆ R_i for all i ≥ 0, we conclude that R̃_i ∩ F = ∅ for all i ≥ 0. Hence, Algorithm 2 never returns "Reachable" in this case. It remains to show that the repeat loop eventually terminates. Since F is not reachable in A, there is k s.t. R_k = R_{k−1}. Hence, Max_⪰(R_k) = Max_⪰(R_{k−1}). By Lemma 14 this implies that R̃_k = R̃_{k−1}. Thus, Algorithm 2 finishes after k steps and returns "Not reachable".

In order to apply Algorithm 2, it remains to show how to compute a simulation relation, which should contain as many pairs of states as possible, since this raises the chances of avoiding the exploration of some states during the breadth-first search. It is well-known that the largest simulation relation of an automaton can be computed in polynomial time wrt the size of the automaton [15]. However, this requires first computing the whole automaton, which is exactly what we want to avoid in our case. So we need to define simulation relations that can be computed a priori, only by considering the structure of the states (in our case, the functions nat and rct). This is the purpose of the next section.
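Both traversals can be sketched generically in Python, parametrised by a successor function, a target predicate and, for Algorithm 2, a simulation test. This is a sketch under our own naming (the paper's actual implementation is in C++ and not shown).

```python
def bfs_reachable(s0, succ, is_target):
    """Algorithm 1: breadth-first traversal; returns True iff a target
    state is reachable from s0."""
    R_prev, R = None, {s0}
    while R != R_prev:
        R_prev = R
        R = R | {t for s in R for t in succ(s)}
        if any(is_target(s) for s in R):
            return True
    return False

def antichain_reachable(s0, succ, is_target, simulates):
    """Algorithm 2: identical traversal, but after each expansion only
    the maximal states wrt the simulation preorder are kept."""
    def maximal(states):
        # drop S whenever some S' strictly simulates it (S' >= S, S !>= S')
        return {s for s in states
                if not any(simulates(o, s) and not simulates(s, o)
                           for o in states)}
    R_prev, R = None, {s0}
    while R != R_prev:
        R_prev = R
        R = maximal(R | {t for s in R for t in succ(s)})
        if any(is_target(s) for s in R):
            return True
    return False
```

For illustration, on a toy automaton over integers with succ(s) = {min(s + 1, 5)} and the total order ≥ as simulation relation, the antichain variant keeps a single maximal state per iteration while returning the same answers as the plain traversal.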
5. Idle tasks simulation relation
In this section we define a simulation relation ⪰_idle, called the idle tasks simulation relation, that can be computed by inspecting the values nat and rct stored in the states.
Definition 16.
Let τ be a set of sporadic tasks. Then, the idle tasks preorder ⪰_idle ⊆ States(τ) × States(τ) is s.t. for all S_1, S_2: S_2 ⪰_idle S_1 iff:

1. rct_{S_2} = rct_{S_1};
2. for all τ_i s.t. rct_{S_1}(τ_i) = 0: nat_{S_2}(τ_i) ≤ nat_{S_1}(τ_i);
3. for all τ_i s.t. rct_{S_1}(τ_i) > 0: nat_{S_2}(τ_i) = nat_{S_1}(τ_i).

Notice the relation is reflexive as well as transitive, and thus indeed a preorder. It even defines a partial order on States(τ), as it is antisymmetric. Moreover, since S_2 ⪰_idle S_1 implies that rct_{S_2} = rct_{S_1}, we also have Active(S_2) = Active(S_1). Intuitively, a state S_2 simulates a state S_1 iff (i) S_1 and S_2 coincide on all the active tasks (i.e., the tasks τ_i s.t. rct_{S_1}(τ_i) > 0), and (ii) the nat of each idle task is not larger in S_2 than in S_1. Let us show that this preorder is indeed a simulation relation when we consider a memoryless scheduler (which is often the case in practice):

Theorem 17.
Let τ be a set of sporadic tasks and let Run be a memoryless (deterministic) scheduler for τ on m processors. Then, ⪰_idle is a simulation relation for A(τ, Run).

Proof. Let S_1, S′_1 and S_2 be three states in States(τ) s.t. (S_1, S′_1) ∈ E and S_2 ⪰_idle S_1, and let us show that there exists S′_2 ∈ States(τ) with (S_2, S′_2) ∈ E and S′_2 ⪰_idle S′_1. Since (S_1, S′_1) ∈ E, there exists S_3 and τ′ ⊆ τ s.t. S_1 −τ′→ S_3 −Run→ S′_1, by Definition 12. Let S_4 be the (unique) state s.t. S_2 −τ′→ S_4, and let us show that S_4 ⪰_idle S_3:

1. For all τ_i ∈ τ′: rct_{S_4}(τ_i) = C_i = rct_{S_3}(τ_i). For all τ_i ∉ τ′: rct_{S_4}(τ_i) = rct_{S_2}(τ_i), rct_{S_3}(τ_i) = rct_{S_1}(τ_i), and, since S_2 ⪰_idle S_1: rct_{S_2}(τ_i) = rct_{S_1}(τ_i). Thus we conclude that rct_{S_4} = rct_{S_3}.

2. Let τ_i be s.t. rct_{S_3}(τ_i) = 0. Then, we must have τ_i ∉ τ′. In this case, nat_{S_3}(τ_i) = nat_{S_1}(τ_i), nat_{S_4}(τ_i) = nat_{S_2}(τ_i), and, since S_2 ⪰_idle S_1: nat_{S_2}(τ_i) ≤ nat_{S_1}(τ_i). Hence, nat_{S_4}(τ_i) ≤ nat_{S_3}(τ_i). We conclude that for every τ_i s.t. rct_{S_3}(τ_i) = 0: nat_{S_4}(τ_i) ≤ nat_{S_3}(τ_i).
3. By similar reasoning, we conclude that, for all τ_i s.t. rct_{S_3}(τ_i) > 0: nat_{S_4}(τ_i) = nat_{S_3}(τ_i).

Then observe that, by Definition 16, S_4 ⪰_idle S_3 implies that Active(S_4) = Active(S_3). Let τ_i be a task in Active(S_3), hence rct_{S_3}(τ_i) > 0. In this case, and since S_4 ⪰_idle S_3, we conclude that rct_{S_4}(τ_i) = rct_{S_3}(τ_i) and nat_{S_4}(τ_i) = nat_{S_3}(τ_i). Thus, since Run is memoryless by hypothesis,
Run(S_4) = Run(S_3), by Definition 7. Let S′_2 be the unique state s.t. S_4 −Run→ S′_2, and let us show that S′_2 ⪰_idle S′_1:

1. Since S_4 ⪰_idle S_3, we know that rct_{S_4} = rct_{S_3}. Let τ_i be a task in Run(S_4) = Run(S_3). By Definition 10: rct_{S′_2}(τ_i) = rct_{S_4}(τ_i) − 1 and rct_{S′_1}(τ_i) = rct_{S_3}(τ_i) − 1. Hence, rct_{S′_2}(τ_i) = rct_{S′_1}(τ_i). For a task τ_i ∉ Run(S_4) = Run(S_3), we have rct_{S′_2}(τ_i) = rct_{S_4}(τ_i) and rct_{S′_1}(τ_i) = rct_{S_3}(τ_i), again by Definition 10. Hence, rct_{S′_2}(τ_i) = rct_{S′_1}(τ_i). We conclude that rct_{S′_2} = rct_{S′_1}.

2. Let τ_i be a task s.t. rct_{S′_1}(τ_i) = 0. By Definition 10: nat_{S′_2}(τ_i) = max{0, nat_{S_4}(τ_i) − 1} and nat_{S′_1}(τ_i) = max{0, nat_{S_3}(τ_i) − 1}. However, since S_4 ⪰_idle S_3, we know that nat_{S_4}(τ_i) ≤ nat_{S_3}(τ_i). We conclude that nat_{S′_2}(τ_i) ≤ nat_{S′_1}(τ_i).

3. Let τ_i be a task s.t. rct_{S′_1}(τ_i) > 0. By Definition 10: nat_{S′_2}(τ_i) = max{0, nat_{S_4}(τ_i) − 1} and nat_{S′_1}(τ_i) = max{0, nat_{S_3}(τ_i) − 1}. Since rct_{S′_1}(τ_i) > 0, we have rct_{S_3}(τ_i) > 0 too, since rct can only decrease as time elapses. Since S_4 ⪰_idle S_3, we also have nat_{S_4}(τ_i) = nat_{S_3}(τ_i). We conclude that nat_{S′_2}(τ_i) = nat_{S′_1}(τ_i).

To conclude the proof, it remains to show that, if S_2 ⪰_idle S_1 and S_1 ∈ Fail_τ, then S_2 ∈ Fail_τ too. Let τ_i be a task s.t. laxity_{S_1}(τ_i) = nat_{S_1}(τ_i) − (T_i − D_i) − rct_{S_1}(τ_i) < 0. Since S_2 ⪰_idle S_1: rct_{S_2}(τ_i) = rct_{S_1}(τ_i), and nat_{S_2}(τ_i) ≤ nat_{S_1}(τ_i). Hence, laxity_{S_2}(τ_i) = nat_{S_2}(τ_i) − (T_i − D_i) − rct_{S_2}(τ_i) ≤ laxity_{S_1}(τ_i) < 0, and thus S_2 ∈ Fail_τ.

Note that Theorem 17 does not require the scheduler to be work-conserving.
Theorem 17 tells us that any state where tasks have to wait until their next job release can be simulated by a corresponding state where they can release their jobs earlier, regardless of the specifics of the scheduling policy, as long as it is deterministic, predictable and memoryless. Many popular schedulers used in practice, such as preemptive DM or EDF, satisfy these properties.

Figure 1, previously presented in Section 2, illustrates the effect of using $\succcurlyeq_{\mathrm{idle}}$ with Algorithm 2. If a state $S_1$ has been encountered previously and we find another state $S_2$ such that $S_1 \succcurlyeq_{\mathrm{idle}} S_2$, then we can avoid exploring $S_2$ and its successors altogether. However, note that this does not mean we will never encounter a successor of $S_2$, as such successors may be encountered through other paths (or indeed, may have been encountered already).
6. Experimental results
We implemented both Algorithm 1 (denoted BF) and Algorithm 2 (denoted ACBF, for "antichain breadth-first") in C++ using the STL and Boost 1.40.0 libraries. We ran head-to-head tests on a system equipped with a quad-core 3.2 GHz Intel Core i7 processor and 12 GB of RAM running Ubuntu Linux 8.10 for AMD64. Our programs were compiled with Ubuntu's distribution of GNU g++ 4.4.5 with flags for maximal optimization.

We based our experimental protocol on that used in [3]. We generated random task sets where task minimum interarrival times $T_i$ were uniformly distributed in $\{1, 2, \ldots, T_{\max}\}$, task WCETs $C_i$ followed an exponential distribution with mean proportional to $T_i$, and relative deadlines were uniformly distributed in $\{C_i, \ldots, T_i\}$. Task sets where $n \leqslant m$ were dropped, as well as sets where $\sum_i C_i / T_i > m$. Duplicate task sets were discarded, as were sets which could be scaled down by an integer factor. We used EDF as scheduler and fixed $m = 2$ for all experiments. Execution times (specifically, used CPU time) were measured using the C clock() primitive.

Our first experiment used $T_{\max} = 6$ and we generated 5,000 task sets following the previous rules (of which 3,240 were EDF-schedulable). Figure 2 showcases the performance of both algorithms on these sets.

Figure 1. Algorithm 2 exploits simulation relations to avoid exploring states needlessly. With $\succcurlyeq_{\mathrm{idle}}$ on this small example, all grey states can be avoided as they are simulated by another state.
The number of states explored by BF before halting gives a notion of how big the automaton was (if no failure state is reachable, the number is exactly the number of states in the automaton that are reachable from the initial state; if a failure state is reachable, BF halts before exploring the whole system). It can be seen that while ACBF and BF show similar performance for fairly small systems (roughly up to 25,000 states), ACBF outperforms BF for larger systems, and we can thus conclude that the antichain technique scales better. The largest system analyzed in this experiment was schedulable (and BF thus had to explore it completely), contained 277,811 states and was handled by BF in slightly less than 2 hours, whereas ACBF clocked in at 4 minutes.

Figure 2. States explored by BF before halt vs. execution time of BF and ACBF (5,000 task sets with $T_{\max} = 6$).

Figure 3 shows, for the same experiment, a comparison between the states explored by BF and by ACBF. This comparison is more objective than the previous one, as it does not account for the actual efficiency of our crude implementations. As can be seen, the simulation relation allows ACBF to drop a considerable number of states from its exploration as compared with BF: on average, 70.8% were avoided (64.0% in the case of unschedulable systems, which cause an early halt, and 74.5% in the case of schedulable systems). This largely explains the better performance of ACBF, but we must also take into account the overhead due to the more complex algorithm. In fact, we found that in some cases ACBF would yield worse performance than BF. However, to the best of our knowledge, this only seems to occur in cases where BF took relatively little time to execute (less than five seconds), and it is thus of no concern in practice.

Figure 3. States explored by BF before halt vs. states explored by ACBF before halt (5,000 task sets with $T_{\max} = 6$).

Our second experiment used 5,000 randomly generated task sets with $T_{\max} = 8$ (of which 3,175 were schedulable) and was intended to give a rough idea of the limits of our current ACBF implementation. Figure 4 plots the number of states explored by ACBF before halting versus its execution time. We can first notice that the plot looks remarkably similar to that of BF in Figure 2, which seems to confirm the exponential complexity of ACBF which we predicted. The largest schedulable system considered necessitated exploring 198,072 states and required roughly 5.5 hours. As a spot-check, we ran BF on a schedulable system where ACBF halted after exploring 14,754 states in 78 seconds; BF converged after just over 6 hours, exploring 434,086 states.

Figure 4. States explored by ACBF before halt vs. ACBF execution time (5,000 task sets with $T_{\max} = 8$).

Our experimental results thus yield several interesting observations. The number of states explored by ACBF using the idle-tasks simulation relation is, on average, significantly smaller than the number explored by BF. This gives an objective metric to quantify the computational performance gains made by ACBF with respect to BF. In practice, using our implementation, ACBF outperforms BF for any reasonably-sized automaton; but we have also seen that, while our current implementation of ACBF defeats BF, it becomes slow itself for slightly more complicated task sets. However, we expect smarter implementations and more powerful simulation relations to push ACBF much further.
7. Conclusions and future work
We have successfully adapted a novel algorithmic technique developed by the formal verification community, known as antichain algorithms [9, 11], to greatly improve the performance of an existing exact schedulability test for sporadic hard real-time tasks on identical multiprocessor platforms [3]. To achieve this, we developed and proved the correctness of a simulation relation on a formal model of the scheduling problem. While our algorithm has the same worst-case performance as the naive approach, we have shown experimentally that our preliminary implementation can still outperform the latter in practice.

The model introduced in Section 3 yields the added contribution of bringing a fully formalized description of the scheduling problem we considered. This allowed us to formally define various scheduling concepts, such as memorylessness, work-conserving scheduling and various scheduling policies. These definitions are univocal and not open to interpretation, which we believe is an important consequence. We also clearly define what an execution of the system is: any execution is a (possibly infinite) path in the automaton, and all possible executions are accounted for.

We expect to extend these results in due course to the general Baker–Cirinei automaton, which allows for arbitrary deadlines. We chose to focus on constrained deadlines in this paper mainly because it simplified the automaton and made our proofs simpler, but we expect the extension to arbitrary deadlines to be fairly straightforward. We also only focused on developing forward simulations, but there also exist antichain algorithms that use backward simulations [11].
It would be interesting to research such relations and compare the efficiency of those algorithms with that of the algorithm presented in this paper.

The task model introduced in Section 2 can be further extended to enable the study of more complex problems, such as job-level parallelism and semi-partitioned scheduling. The model introduced in Section 3 can also be extended to support broader classes of schedulers, as briefly touched on in [3]. For example, storing the previous scheduling choice in each state would allow modelling of non-preemptive schedulers.

We have not yet attempted to properly optimize our antichain algorithm by harnessing adequate data structures; our objective in this work was primarily to obtain a preliminary "proof-of-concept" comparison of the performance of the naive and antichain algorithms. Adequate implementation of structures such as binary decision diagrams [7] and covering sharing trees [10] should allow pushing the limits of the antichain algorithm's performance.

Antichain algorithms should also terminate more quickly when coarser simulation preorders are used. Researching other simulation preorders on our model, particularly preorders that are a function of the chosen scheduling policy, is thus key to improving performance. Determining the complexity class of sporadic task set feasibility on identical multiprocessor platforms is also of interest, as it may tell us whether other approaches could be used to solve the problem.
A. Proof of Lemma 14
In order to establish the lemma, we first show that, for any set $B$ of states, the following holds:

Lemma 18. $\mathsf{Max}_{\succcurlyeq}\big(\mathsf{Succ}\big(\mathsf{Max}_{\succcurlyeq}(B)\big)\big) = \mathsf{Max}_{\succcurlyeq}\big(\mathsf{Succ}(B)\big)$.

Proof. We first show that $\mathsf{Max}_{\succcurlyeq}\big(\mathsf{Succ}\big(\mathsf{Max}_{\succcurlyeq}(B)\big)\big) \subseteq \mathsf{Max}_{\succcurlyeq}\big(\mathsf{Succ}(B)\big)$. By definition of $\mathsf{Max}_{\succcurlyeq}(B)$, we know that $\mathsf{Max}_{\succcurlyeq}(B) \subseteq B$. Moreover, $\mathsf{Succ}$ and $\mathsf{Max}_{\succcurlyeq}$ are monotonic with respect to set inclusion. Hence:

$\mathsf{Max}_{\succcurlyeq}(B) \subseteq B$
$\Rightarrow \mathsf{Succ}\big(\mathsf{Max}_{\succcurlyeq}(B)\big) \subseteq \mathsf{Succ}(B)$
$\Rightarrow \mathsf{Max}_{\succcurlyeq}\big(\mathsf{Succ}\big(\mathsf{Max}_{\succcurlyeq}(B)\big)\big) \subseteq \mathsf{Max}_{\succcurlyeq}\big(\mathsf{Succ}(B)\big)$

Then, we show that $\mathsf{Max}_{\succcurlyeq}\big(\mathsf{Succ}\big(\mathsf{Max}_{\succcurlyeq}(B)\big)\big) \supseteq \mathsf{Max}_{\succcurlyeq}\big(\mathsf{Succ}(B)\big)$. Let $S_1$ be a state in $\mathsf{Max}_{\succcurlyeq}(\mathsf{Succ}(B))$. Let $S_2 \in B$ be a state s.t. $(S_2, S_1) \in E$. Since $S_1 \in \mathsf{Succ}(B)$, $S_2$ always exists. Since $S_2 \in B$, there exists $S_3 \in \mathsf{Max}_{\succcurlyeq}(B)$ s.t. $S_3 \succcurlyeq S_2$. By Definition 13, there is $S_4 \in \mathsf{Succ}\big(\mathsf{Max}_{\succcurlyeq}(B)\big)$ s.t. $S_4 \succcurlyeq S_1$. To conclude, let us show per absurdum that $S_4$ is maximal in $\mathsf{Succ}\big(\mathsf{Max}_{\succcurlyeq}(B)\big)$. Assume there exists $S_5 \in \mathsf{Succ}\big(\mathsf{Max}_{\succcurlyeq}(B)\big)$ s.t. $S_5 \succ S_4$. Since $\mathsf{Max}_{\succcurlyeq}(B) \subseteq B$, $S_5$ is in $\mathsf{Succ}(B)$ too. Moreover, since $S_4 \succcurlyeq S_1$ and $S_5 \succ S_4$, we conclude that $S_5 \succ S_1$. Thus, there is, in $\mathsf{Succ}(B)$, an element $S_5 \succ S_1$. This contradicts our hypothesis that $S_1 \in \mathsf{Max}_{\succcurlyeq}(\mathsf{Succ}(B))$.

Then, we are ready to show that:

Lemma 19. Let $A$ be an automaton and let $\succcurlyeq$ be a simulation relation for $A$. Let $R_0, R_1, \ldots$ and $\widetilde{R}_0, \widetilde{R}_1, \ldots$ denote respectively the sequences of sets computed by Algorithm 1 and Algorithm 2 on $A$. Then, for all $i \geqslant 0$: $\widetilde{R}_i = \mathsf{Max}_{\succcurlyeq}(R_i)$.

Proof. The proof is by induction on $i$. We first observe that, for any pair of sets $B$ and $C$, the following holds:

$\mathsf{Max}_{\succcurlyeq}(B \cup C) = \mathsf{Max}_{\succcurlyeq}\big(\mathsf{Max}_{\succcurlyeq}(B) \cup \mathsf{Max}_{\succcurlyeq}(C)\big)$ (1)

Base case $i = 0$. Clearly, $\mathsf{Max}_{\succcurlyeq}(R_0) = R_0$ since $R_0$ is a singleton. By definition, $\widetilde{R}_0 = R_0$.

Inductive case $i = k$. See Figure 5.

As induction hypothesis, we assume that $\widetilde{R}_{k-1} = \mathsf{Max}_{\succcurlyeq}(R_{k-1})$. Then:

$\widetilde{R}_k = \mathsf{Max}_{\succcurlyeq}\big(\widetilde{R}_{k-1} \cup \mathsf{Succ}(\widetilde{R}_{k-1})\big)$ [By def.]
$= \mathsf{Max}_{\succcurlyeq}\big(\mathsf{Max}_{\succcurlyeq}(\widetilde{R}_{k-1}) \cup \mathsf{Max}_{\succcurlyeq}\big(\mathsf{Succ}(\widetilde{R}_{k-1})\big)\big)$ [By (1)]
$= \mathsf{Max}_{\succcurlyeq}\big(\mathsf{Max}_{\succcurlyeq}\big(\mathsf{Max}_{\succcurlyeq}(R_{k-1})\big) \cup \mathsf{Max}_{\succcurlyeq}\big(\mathsf{Succ}\big(\mathsf{Max}_{\succcurlyeq}(R_{k-1})\big)\big)\big)$ [By I.H.]
$= \mathsf{Max}_{\succcurlyeq}\big(\mathsf{Max}_{\succcurlyeq}(R_{k-1}) \cup \mathsf{Max}_{\succcurlyeq}\big(\mathsf{Succ}(R_{k-1})\big)\big)$ [By Lemma 18]
$= \mathsf{Max}_{\succcurlyeq}\big(R_{k-1} \cup \mathsf{Succ}(R_{k-1})\big)$ [By (1)]
$= \mathsf{Max}_{\succcurlyeq}(R_k)$ [By def.]

Figure 5. Inductive case for Lemma 19.
Acknowledgment.
We thank Phan Hiep Tuan for identifying mistakes in Definition 13 and Theorem 17.
References

[1] Y. Abdeddaïm and O. Maler. Preemptive job-shop scheduling using stopwatch automata. In AIPS-02 Workshop on Planning via Model-Checking, Toulouse, France, pages 7–13, 2002.
[2] T. P. Baker and S. K. Baruah. Schedulability analysis of multiprocessor sporadic task systems. In I. Lee, J. Y.-T. Leung, and S. Son, editors, Handbook of Real-Time and Embedded Systems. Chapman & Hall/CRC Press, 2007.
[3] T. P. Baker and M. Cirinei. Brute-force determination of multiprocessor schedulability for sets of sporadic hard-deadline tasks. In Tovar et al. [17], pages 62–75.
[4] S. K. Baruah and N. Fisher. Global deadline-monotonic scheduling of arbitrary-deadline sporadic task systems. In Tovar et al. [17], pages 204–216.
[5] M. Bertogna and S. K. Baruah. Tests for global EDF schedulability analysis. Journal of Systems Architecture - Embedded Systems Design, 57(5):487–497, 2011.
[6] V. Bonifaci and A. Marchetti-Spaccamela. Feasibility analysis of sporadic real-time multiprocessor task systems. In M. de Berg and U. Meyer, editors, ESA (2), volume 6347 of Lecture Notes in Computer Science, pages 230–241. Springer, 2010.
[7] R. E. Bryant. Symbolic boolean manipulation with ordered binary-decision diagrams. ACM Comput. Surv., 24(3):293–318, 1992.
[8] F. Cassez. Timed games for computing WCET for pipelined processors with caches. IEEE Computer Society, June 2011. Forthcoming.
[9] M. De Wulf, L. Doyen, T. A. Henzinger, and J.-F. Raskin. Antichains: A new algorithm for checking universality of finite automata. In T. Ball and R. B. Jones, editors, CAV, volume 4144 of Lecture Notes in Computer Science, pages 17–30. Springer, 2006.
[10] G. Delzanno, J.-F. Raskin, and L. Van Begin. Covering sharing trees: a compact data structure for parameterized verification. International Journal on Software Tools for Technology Transfer (STTT), 5(2–3):268–297, 2004.
[11] L. Doyen and J.-F. Raskin. Antichain algorithms for finite automata. In J. Esparza and R. Majumdar, editors, TACAS, volume 6015 of Lecture Notes in Computer Science, pages 2–22. Springer, 2010.
[12] E. Fersman, P. Krcal, P. Pettersson, and W. Yi. Task automata: Schedulability, decidability and undecidability. Inf. Comput., 205(8):1149–1172, 2007.
[13] E. Filiot, N. Jin, and J.-F. Raskin. An antichain algorithm for LTL realizability. In CAV, volume 5643 of Lecture Notes in Computer Science, pages 263–277. Springer, 2009.
[14] J. Goossens, S. Funk, and S. Baruah. EDF scheduling on multiprocessor platforms: some (perhaps) counterintuitive observations. In Proceedings of the Eighth International Conference on Real-Time Computing Systems and Applications (RTCSA), pages 321–330, Tokyo, Japan, March 2002.
[15] M. R. Henzinger, T. A. Henzinger, and P. W. Kopke. Computing simulations on finite and infinite graphs. In FOCS, pages 453–462, 1995.
[16] C. L. Liu and J. W. Layland. Scheduling algorithms for multiprogramming in a hard-real-time environment. Journal of the ACM, 20(1):46–61, 1973.
[17] E. Tovar, P. Tsigas, and H. Fouchal, editors. Principles of Distributed Systems, 11th International Conference, OPODIS 2007, Guadeloupe, French West Indies, December 17–20, 2007. Proceedings, volume 4878 of Lecture Notes in Computer Science. Springer, 2007.