Exact Feasibility Tests for Real-Time Scheduling of Periodic Tasks upon Multiprocessor Platforms
Liliana Cucu†
LORIA-INPL
615 rue du Jardin Botanique
54600 Villers-les-Nancy, France
[email protected]

Joël Goossens
Université Libre de Bruxelles (U.L.B.)
50 Avenue Franklin D. Roosevelt
1050 Brussels, Belgium
[email protected]

Abstract
In this paper we study the global scheduling of periodic task systems upon multiprocessor platforms. We first show two very general properties which are well known for uniprocessor platforms and which remain true for multiprocessor platforms: (i) under few and not so restrictive assumptions, we show that feasible schedules of periodic task systems are periodic from some point, with a period equal to the least common multiple of the task periods, and (ii) for the specific case of synchronous periodic task systems, we show that feasible schedules repeat from the origin. We then present our main result: we characterize, for task-level fixed-priority schedulers and for asynchronous constrained or arbitrary deadline periodic task models, upper bounds on the first time instant where the schedule repeats. We show that job-level fixed-priority schedulers are predictable upon unrelated multiprocessor platforms. For task-level fixed-priority schedulers, based on the upper bounds and the predictability property, we provide exact feasibility tests for asynchronous constrained or arbitrary deadline periodic task sets. Finally, for the job-level fixed-priority EDF scheduler, for which such an upper bound remains unknown, we provide an exact feasibility test as well.
1 Introduction

The use of computers to control safety-critical real-time functions has increased rapidly over the past few years. As a consequence, real-time systems — computer systems where the correctness of each computation depends on both the logical results of the computation and the time at which these results are produced — have become the focus of much study. Since the concept of "time" is of such importance in real-time application systems, and since these systems typically involve the sharing of one or more resources among various contending processes, the concept of scheduling is integral to real-time system design and analysis.

∗ This paper is an extended version of "Feasibility Intervals for Fixed-Priority Real-Time Scheduling on Uniform Multiprocessors", Proceedings of the 11th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA06), and of "Feasibility Intervals for Multiprocessor Fixed-Priority Scheduling of Arbitrary Deadline Periodic Systems", Proceedings of the 10th Design, Automation and Test in Europe (DATE07).
† Supported in part by an FNRS grant.

A periodic task τ_i generates jobs at each integer multiple of its period T_i, with the restriction that the first job is released at time O_i (the task offset). The scheduling algorithm determines which job(s) should be executed at each time instant. When there is at least one schedule satisfying all constraints of the system, the system is said to be feasible.

Uniprocessor real-time systems have been well studied since the seminal paper of Liu and Layland [11], which introduces a model of periodic systems. The literature considering scheduling algorithms and feasibility tests for uniprocessor scheduling is tremendous. In contrast, for multiprocessor parallel machines the problem of meeting timing constraints is a relatively new research area.

In the design of scheduling algorithms for multiprocessor environments, one can distinguish between at least two distinct approaches.
In partitioned scheduling, all jobs generated by a task are required to execute on the same processor. Global scheduling, by contrast, permits task migration (i.e., different jobs of an individual task may execute upon different processors) as well as job migration (an individual job that is preempted may resume execution upon a processor different from the one upon which it had been executing prior to preemption).

From a theoretical and practical point of view we can distinguish between at least three kinds of multiprocessor machines (from less general to more general):

Identical parallel machines. Platforms upon which all the processors are identical, in the sense that they have the same computing power.

Uniform parallel machines. By contrast, each processor in a uniform parallel machine is characterized by its own computing capacity: a job that executes on processor π_i of computing capacity s_i for t time units completes s_i × t units of execution.

Unrelated parallel machines. In unrelated parallel machines, there is an execution rate s_{i,j} associated with each job-processor pair: a job J_i that executes on processor π_j for t time units completes s_{i,j} × t units of execution. This kind of heterogeneous architecture models dedicated processors (e.g., s_{i,j} = 0 means that π_j cannot serve job J_i).

Related research.
The problem of scheduling periodic task systems on multiprocessors was originally studied in [10]. Recent studies provide a better understanding of that scheduling problem and provide first solutions; e.g., [2] presents a categorization of real-time multiprocessor scheduling problems. It is important to notice that, to the best of our knowledge, the literature does not provide exact feasibility tests for the global scheduling of periodic systems upon multiprocessors. Moreover, we know that uniprocessor feasibility results do not carry over to multiprocessor scheduling. For instance, the synchronous case (i.e., considering that all tasks start their execution synchronously) is no longer the worst case upon multiprocessors. Another example is the fact that the first busy period (see [9] for details) does not provide a feasibility interval upon multiprocessors (see [7] for such counter-examples). Initial results indicate that real-time multiprocessor scheduling problems are typically not solved by applying straightforward extensions of techniques used for solving similar uniprocessor problems. Unfortunately, too often, researchers use uniprocessor arguments to study multiprocessor scheduling problems, which leads to incorrect properties. This fact motivated our rigorous and formal approach; we will present, and rigorously prove correct, our exact feasibility tests (and related properties) in this paper.
This research.
In this paper we consider preemptive global scheduling, and we present exact feasibility tests upon multiprocessors for various scheduling policies and various periodic task models. Our feasibility tests are based on periodicity properties of the schedules and on predictability properties of the considered schedulers. The latter properties are not obvious, because of multiprocessor scheduling anomalies (see [8] for details).

More precisely, in the first part of this paper we prove that, under few and not so restrictive assumptions, any feasible schedule of periodic tasks repeats from some point in time. Then we prove that job-level fixed-priority schedulers (e.g., EDF and RM) are predictable upon unrelated multiprocessor platforms. We also characterize, for task-level fixed-priority schedulers and for the various periodic task models, an upper bound on the first time instant where the schedule repeats (and its period). Lastly, we combine the periodicity and predictability properties to provide exact feasibility tests for these various kinds of periodic task sets and various schedulers.

Organization.
This paper is organized as follows. Section 2 introduces the definitions, the model of computation and our assumptions. We prove the periodicity of feasible schedules of periodic systems in Section 3. In Section 4 we prove that job-level fixed-priority schedulers (e.g., EDF and RM) are predictable upon unrelated multiprocessor platforms, and we combine the periodicity and predictability properties to provide exact feasibility tests for these various kinds of periodic task sets and various schedulers. Lastly, we conclude in Section 5.

2 Model of computation and definitions

We consider the scheduling of periodic task systems. A system τ is composed of n periodic tasks τ_1, τ_2, ..., τ_n; each task is characterized by a period T_i, a relative deadline D_i, an execution requirement C_i and an offset O_i. Such a periodic task generates an infinite sequence of jobs, with the k-th job arriving at time-instant O_i + (k − 1)T_i (k = 1, 2, ...), having an execution requirement of C_i units, and a deadline at time-instant O_i + (k − 1)T_i + D_i. It is important to notice that we assume in the first part of this manuscript that each instance of the same task (say τ_i) has the very same execution requirement (C_i); we will relax this assumption in the second part of this manuscript by showing that our analysis is predictable.

We will distinguish between implicit deadline systems, where D_i = T_i, ∀i; constrained deadline systems, where D_i ≤ T_i, ∀i; and arbitrary deadline systems, where there is no relation between the deadlines and the periods. Notice that arbitrary deadline systems include constrained deadline ones, which include the implicit deadline ones.

In some cases, we will consider the more general problem of scheduling a set of jobs, where each job J_j = (r_j, e_j, d_j) is characterized by a release time r_j, an execution requirement e_j and an absolute deadline d_j. The job J_j must execute for e_j time units over the interval [r_j, d_j). A job is active from its release time to its completion.

A periodic system is said to be synchronous if there is an instant where all tasks make a new request simultaneously, i.e., ∃t, k_1, k_2, ..., k_n such that ∀i: t = O_i + k_i T_i (see [4] for details). Without loss of generality, we consider O_i = 0, ∀i for synchronous systems.
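The job arrival pattern just defined is easy to make concrete. The following sketch (hypothetical Python; the function names `job` and `hyperperiod` and the example values are ours, not the paper's) computes the release time and absolute deadline of the k-th job of a periodic task, and the hyperperiod P = lcm{T_1, ..., T_n}:

```python
from math import gcd
from functools import reduce

def job(O_i, T_i, D_i, k):
    """Release time and absolute deadline of the k-th job (k = 1, 2, ...)
    of a periodic task with offset O_i, period T_i and relative deadline D_i."""
    release = O_i + (k - 1) * T_i
    return release, release + D_i

def hyperperiod(periods):
    """P = lcm of the task periods."""
    return reduce(lambda a, b: a * b // gcd(a, b), periods)

# Example: a synchronous implicit-deadline system (O_i = 0, D_i = T_i).
tasks = [(0, 3, 3), (0, 4, 4)]                  # (O_i, T_i, D_i)
print(job(*tasks[0], k=2))                      # (3, 6): second job of the first task
print(hyperperiod([T for (_, T, _) in tasks]))  # 12
```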
Otherwise the system is said to be asynchronous.

We denote by τ^(i) := {τ_1, ..., τ_i}, by O_max := max{O_1, O_2, ..., O_n}, by P_i := lcm{T_1, ..., T_i} and by P := P_n.

We consider in this paper multiprocessor platforms π composed of m unrelated processors (or one of its particular cases: uniform and identical platforms): {π_1, π_2, ..., π_m}. An execution rate s_{i,j} is associated with each task-processor pair: a task τ_i that executes on processor π_j for t time units completes s_{i,j} × t units of execution. For each task τ_i we assume the associated set of processors π_{n_{i,1}} > π_{n_{i,2}} > ··· > π_{n_{i,m}} ordered in decreasing order of the execution rates relative to the task: s_{i,n_{i,1}} ≥ s_{i,n_{i,2}} ≥ ··· ≥ s_{i,n_{i,m}}. For identical execution rates, the ties are broken arbitrarily, but consistently, such that the set of processors associated with each task is totally ordered. Consequently, the fastest processor relative to task τ_i is π_{n_{i,1}}, i.e., the first processor of the ordered set associated with the task. Moreover, for a task τ_i, in the following we consider that a processor π_a is faster than π_b (relative to its associated set of processors) if π_a > π_b, even if s_{i,a} = s_{i,b}. For the processor-task pair (π_j, τ_i), if s_{i,j} > 0, π_j is said to be an eligible processor for τ_i. Notice that these concepts and definitions can be trivially adapted to the scheduling of jobs upon unrelated platforms.

We consider in this paper a discrete model, i.e., the characteristics of the tasks and the time are integers.

We now define the notions of the state of the system and of the schedule.

Definition 1 (State of the system θ(t)). For any arbitrary deadline system τ = {τ_1, ..., τ_n} we define the state θ(t) of the system τ at instant t as θ : N → (Z × N × N)^n with θ(t) := (θ_1(t), θ_2(t), ..., θ_n(t)), where
  θ_i(t) := (−1, t_1, 0), if no job of task τ_i was activated before or at t; in that case t_1 time units remain until the first activation of τ_i (we have 0 < t_1 ≤ O_i);
  θ_i(t) := (n_1, t_1, t_2), otherwise; in that case n_1 is the number of active jobs of τ_i at t, t_1 is the time elapsed at instant t since the activation of the oldest active job of τ_i, and t_2 is the number of time units already executed for that oldest active job. If n_1 = 0, there is no active job of τ_i at t and t_2 is undefined in that case. (We have 0 ≤ n_1 ≤ ⌈D_i/T_i⌉, 0 ≤ t_1 < T_i · ⌈D_i/T_i⌉ and 0 ≤ t_2 < C_i.)

Notice that at any instant t several jobs of the same task might be active; we consider that the oldest job is scheduled first, i.e., the FIFO rule is used to serve the various jobs of a given task.

Definition 2 (Schedule σ(t)). For any task system τ = {τ_1, ..., τ_n} and any set of m processors {π_1, ..., π_m} we define the schedule σ(t) of system τ at instant t as σ : N → {0, 1, ..., n}^m, where σ(t) := (σ_1(t), σ_2(t), ..., σ_m(t)) with
  σ_j(t) := 0, if there is no task scheduled on π_j at instant t;
  σ_j(t) := i, if task τ_i is scheduled on π_j at instant t;
for all 1 ≤ j ≤ m.

Notice that Definition 2 can be extended trivially to the scheduling of jobs.

A system τ is said to be feasible upon a multiprocessor platform if there exists at least one schedule in which all tasks meet their deadlines. If A is an algorithm which schedules τ upon a multiprocessor platform such that all deadlines are met, then the system τ is said to be A-feasible. In this work, we consider that task parallelism is forbidden: a task cannot be scheduled at the same instant on different processors, i.e., ∄ j_1 ≠ j_2 ∈ {1, 2, ..., m} and t ∈ N such that σ_{j_1}(t) = σ_{j_2}(t) ≠ 0.

Definition 3 (Deterministic algorithm). A scheduling algorithm is said to be deterministic if it generates a unique schedule for any given set of jobs.

In uniprocessor (or identical multiprocessor) scheduling, a work-conserving algorithm is defined to be one that never idles a processor while there is at least one active task. For unrelated multiprocessors we adopt the following definition:
Definition 4 (Work-conserving algorithm). An unrelated multiprocessor scheduling algorithm is said to be work-conserving if at each instant the algorithm schedules jobs to processors as follows: the highest priority (active) job J_i is scheduled on its fastest (and eligible) processor π_j; the very same rule is then applied to the remaining active jobs on the remaining available processors.

Moreover, we will assume that the decision of the scheduling algorithm at time t is based neither on the past nor on the actual time t, but only on the characteristics of the active tasks and on the state of the system at time t. More formally, we consider memoryless schedulers.

Definition 5 (Memoryless algorithm). A scheduling algorithm is said to be memoryless if the scheduling decision made by it at time t depends only on the characteristics of the active tasks and on the current state of the system, i.e., on θ(t).

Consequently, for memoryless and deterministic schedulers we have the following property: ∀ t_1, t_2 such that θ(t_1) = θ(t_2), we have σ(t_1) = σ(t_2).

It follows from Definition 4 that a processor π_j can be idle while a job J_i is active at the same time if and only if s_{i,j} = 0.

In the following, we will distinguish between two kinds of schedulers:

Definition 6 (Task-level fixed-priority). The priorities are assigned to the tasks beforehand; at run-time each job inherits its task's priority, which remains constant.

Definition 7 (Job-level fixed-priority).
A scheduling algorithm is a job-level fixed-priority algorithm if and only if it satisfies the condition that for every pair of jobs J_i and J_j, if J_i has higher priority than J_j at some time instant, then J_i always has higher priority than J_j.

Popular task-level fixed-priority schedulers include Rate Monotonic (RM) and Deadline Monotonic (DM); a popular job-level fixed-priority scheduler is Earliest Deadline First (EDF); see [11] for details.

We denote by δ_i^k the k-th job of task τ_i, which becomes active at time instant R_i^k := O_i + (k − 1)T_i.

Definition 8 (ǫ_i^k(t)). For any task τ_i, we define ǫ_i^k(t) to be the amount of time already executed for δ_i^k in the interval [R_i^k, t).

We now introduce the availability of the processors for any schedule σ(t).

Definition 9 (Availability of the processors a(t)). For any task system τ = {τ_1, ..., τ_n} and any set of m processors {π_1, ..., π_m} we define the availability of the processors a(t) of system τ at instant t as the set of available processors a(t) := {j | σ_j(t) = 0} ⊆ {1, ..., m}.

3 Periodicity of feasible schedules

It is important to recall that we assume in this section that all task execution requirements are constant; we will relax this assumption in Section 4.

This section contains four parts; in each part we give results concerning the periodicity of feasible schedules. By periodicity of a schedule σ (assuming that the period is γ), we mean that there is a time instant t_0 such that σ(t) = σ(t + γ), ∀t ≥ t_0. The first part of this section provides periodicity results for a (very) general class of scheduling algorithms: deterministic, memoryless and work-conserving schedulers. The second part provides periodicity results for synchronous periodic task systems. The third and fourth parts present periodicity results for task-level fixed-priority scheduling algorithms for constrained and arbitrary deadline systems, respectively.

3.1 Deterministic, memoryless and work-conserving schedulers

We show that feasible schedules of periodic task systems obtained using deterministic, memoryless and work-conserving algorithms are periodic from some point. Moreover, we prove that the schedule repeats with a period equal to P for a sub-class of such schedulers. Based on that property, we provide two interesting corollaries: for preemptive task-level fixed-priority algorithms (Corollary 4) and for preemptive deterministic
EDF (Corollary 5). We first present two preliminary results in order to prove Theorem 3. (By deterministic EDF we mean that ambiguous situations are solved deterministically.)

Lemma 1. For any deterministic and memoryless algorithm A, if an asynchronous arbitrary deadline system τ is A-feasible, then the A-feasible schedule of τ on m unrelated processors is periodic with a period divisible by P.

Proof. First notice that from t ≥ O_max all tasks are released, and the configuration θ_i(t) of each task is a triple of finite integers (α, β, γ) with α ∈ {0, 1, ..., ⌈D_i/T_i⌉}, 0 ≤ β < max_{1≤i≤n} T_i · ⌈D_i/T_i⌉ and 0 ≤ γ < max_{1≤i≤n} C_i. Therefore there is a finite number of different system states; hence we can find two distinct instants t_1 and t_2 (t_2 > t_1 ≥ O_max) with the same state of the system (θ(t_1) = θ(t_2)). The schedule repeats from that instant with a period dividing t_2 − t_1, since the scheduler is deterministic and memoryless. Notice that, since the tasks are periodic, the arrival pattern of jobs repeats with a period equal to P from O_max.

We now prove by contradiction that t_2 − t_1 is necessarily a multiple of P. Suppose that ∃ k_1 < k_2 ∈ N such that t_i = O_max + k_i P + ∆_i, ∀i ∈ {1, 2}, with ∆_1, ∆_2 ∈ [0, P), ∆_1 ≠ ∆_2, and θ(t_1) = θ(t_2). This implies that there are tasks for which the time elapsed since the last activation at t_1 and the time elapsed since the last activation at t_2 are not equal. But this contradicts the fact that θ(t_1) = θ(t_2). Consequently ∆_1 must be equal to ∆_2 and, thus, we have t_2 − t_1 = (k_2 − k_1)P. □

For a sub-class of schedulers we will show that the period of the schedule is P, but first a definition (inspired from [6]):
Definition 10 (Request-dependent scheduler). A scheduler is said to be request-dependent if ∀ i, j, k, ℓ, t: δ_i^{k+h_i}(t + P) > δ_j^{ℓ+h_j}(t + P) if and only if δ_i^k(t) > δ_j^ℓ(t), where δ_i^k(t) > δ_j^ℓ(t) means that the request δ_i^k has a higher priority at t than the request δ_j^ℓ (and h_i := P/T_i).

The next lemma extends results given for arbitrary deadline task systems in the uniprocessor case (see [3], p. 55 for details).

Lemma 2.
For any preemptive, job-level fixed-priority and request-dependent algorithm A and any asynchronous arbitrary deadline system τ on m unrelated processors, we have that: for each task τ_i and for any time instant t ≥ O_i and k such that R_i^k ≤ t ≤ R_i^k + D_i, if there is no deadline missed up to time t + P, then ǫ_i^k(t) ≥ ǫ_i^{k+h_i}(t + P), with h_i := P/T_i.

Proof. The proof is made by contradiction. Notice first that the function ǫ_i^k(·) is a non-decreasing discrete step function with 0 ≤ ǫ_i^k(t) ≤ C_i, ∀t, and ǫ_i^k(R_i^k) = 0 = ǫ_i^{k+h_i}(R_i^{k+h_i}), ∀k. We assume that a first time instant t exists such that there are j and k with R_j^k ≤ t ≤ R_j^k + D_j and ǫ_j^k(t) < ǫ_j^{k+h_j}(t + P). This assumption implies that there is a time instant t′ with R_j^k ≤ t′ < t such that δ_j^{k+h_j} is scheduled at t′ + P while δ_j^k is not scheduled at t′. We obtain that m higher priority jobs are scheduled at t′, and among these jobs there is at least one job δ_ℓ^{k_ℓ+h_ℓ} of a task τ_ℓ, with ℓ ∈ {1, 2, ..., n}, that is not scheduled at t′ + P while δ_ℓ^{k_ℓ} is scheduled at t′ (h_ℓ := P/T_ℓ). This implies that ǫ_ℓ^{k_ℓ}(t′) < ǫ_ℓ^{k_ℓ+h_ℓ}(t′ + P) = C_ℓ, but this contradicts the fact that t is the first such time instant. □

Theorem 3. For any preemptive, job-level fixed-priority and request-dependent algorithm A and any A-feasible asynchronous arbitrary deadline system τ upon m unrelated processors, the schedule is periodic with a period equal to P.

Proof. By Lemma 1 we have that ∃ t_i = O_max + k_i P + d, ∀i ∈ {1, 2}, with 0 ≤ d < P and k_1 < k_2, such that θ(t_1) = θ(t_2). We also know that the arrivals of jobs repeat with a period equal to P from O_max. Therefore for all time instants t_1 + kP, ∀ 0 ≤ k ≤ k_2 − k_1 (i.e., t_1 + kP ≤ t_2), the time elapsed since the last activation at t_1 + kP is the same for all tasks. Moreover, since θ(t_1) = θ(t_2), we have ǫ_i^{ℓ_i}(t_1) = ǫ_i^{ℓ_i + (k_2−k_1)P/T_i}(t_2), with ℓ_i = ⌈(O_max + d)/T_i⌉ + k_1 P/T_i, ∀i. But by Lemma 2 we also have ǫ_i^{ℓ_i}(t_1) ≥ ǫ_i^{ℓ_i + P/T_i}(t_1 + P) ≥ ··· ≥ ǫ_i^{ℓ_i + (k_2−k_1)P/T_i}(t_2), ∀i. Since the two endpoints of this chain are equal, all its terms are equal; together with the periodic arrival pattern this implies that θ(t_1) = θ(t_1 + P) = ··· = θ(t_2). □

Corollary 4.
For any preemptive task-level fixed-priority algorithm A, if an asynchronous arbitrary deadline system τ is A-feasible upon m unrelated processors, then the schedule is periodic with a period equal to P.

Proof. The result is a direct consequence of Theorem 3, since task-level fixed-priority algorithms are job-level fixed-priority and request-dependent schedulers. □

Corollary 5.
A feasible schedule obtained using deterministic request-dependent global EDF on m unrelated processors of an asynchronous arbitrary deadline system τ is periodic with a period equal to P.

Proof. The result is a direct consequence of Theorem 3, since EDF is a job-level fixed-priority scheduler. □
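The periodicity results above can be observed experimentally on the simplest special case of our model. The following sketch (hypothetical Python, not from the paper; identical processors, synchronous implicit-deadline tasks, global preemptive RM, discrete time) simulates a schedule over two hyperperiods and checks that it repeats with period P; for this synchronous constrained-deadline example it even repeats from the origin:

```python
from math import gcd
from functools import reduce

def simulate_global_rm(tasks, m, horizon):
    """Global preemptive Rate Monotonic on m identical processors,
    synchronous implicit-deadline tasks given as (C_i, T_i), discrete time.
    Returns the schedule: schedule[t] = sorted tuple of task indices
    executing in slot [t, t+1)."""
    n = len(tasks)
    rem = [0] * n                      # remaining work of the current job
    schedule = []
    for t in range(horizon):
        for i, (C, T) in enumerate(tasks):
            if t % T == 0:             # new job released
                assert rem[i] == 0, f"deadline miss for task {i} at {t}"
                rem[i] = C
        # highest priority = smallest period (ties by index); at most m tasks run
        active = sorted((i for i in range(n) if rem[i] > 0),
                        key=lambda i: (tasks[i][1], i))[:m]
        for i in active:
            rem[i] -= 1
        schedule.append(tuple(sorted(active)))
    return schedule

tasks = [(2, 4), (3, 6), (4, 12)]      # (C_i, T_i)
P = reduce(lambda a, b: a * b // gcd(a, b), (T for _, T in tasks))  # P = 12
sched = simulate_global_rm(tasks, m=2, horizon=2 * P)
print(sched[:P] == sched[P:2 * P])     # True: the schedule repeats with period P
```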
3.2 Synchronous periodic systems

In this section we deal with the periodicity of feasible schedules of synchronous periodic systems. Using the results obtained for deterministic, memoryless and work-conserving algorithms, we prove in Section 3.2.1 that the feasible schedules of synchronous constrained deadline periodic systems are periodic from time instant 0. In Section 3.2.2 we study arbitrary deadline periodic systems and the periodicity of feasible schedules of these systems under preemptive task-level fixed-priority scheduling algorithms.
3.2.1 Constrained deadline systems

In this section we deal with the particular case of synchronous constrained deadline periodic task systems and we show the periodicity of feasible schedules.
Theorem 6.
For any deterministic, memoryless and work-conserving algorithm A, if a synchronous constrained deadline system τ is A-feasible, then the A-feasible schedule of τ on m unrelated processors is periodic with a period P that begins at instant 0.

Proof. Since τ is a synchronous periodic system, all tasks become active at instants 0 and P. Moreover, since τ is an A-feasible constrained deadline system, all jobs released strictly before instant P have finished their execution before or at instant P. Consequently, at instants 0 and P the system is in the same state, i.e., θ(0) = θ(P), and a deterministic and memoryless scheduling algorithm will make the same scheduling decision. The schedule thus repeats with a period equal to P. □

An interesting particular case of Theorem 6 is the following:
Corollary 7.
A feasible schedule obtained using deterministic global EDF of a synchronous constrained deadline system τ on m identical or unrelated processors is periodic with a period P that begins at instant 0.

3.2.2 Arbitrary deadline systems

In this section we deal with the particular case of synchronous arbitrary deadline task systems and we show the periodicity of feasible schedules obtained using preemptive task-level fixed-priority scheduling algorithms. In the following, and without loss of generality, we consider the tasks ordered in decreasing order of their priorities: τ_1 > τ_2 > ··· > τ_n.

Lemma 8.
For any preemptive task-level fixed-priority algorithm A and any synchronous arbitrary deadline system τ on m unrelated processors, if no deadline is missed in the time interval [0, P) and if θ(0) = θ(P), then the schedule of τ is periodic with a period P that begins at instant 0.

Proof. Since at time instants 0 and P the system is in the same state, i.e., θ(0) = θ(P), at time instants 0 and P a preemptive task-level fixed-priority algorithm will make the same scheduling decision, and the schedule repeats from 0 with a period equal to P. □

Theorem 9.
For any preemptive task-level fixed-priority algorithm A and any synchronous arbitrary deadline system τ on m unrelated processors, if all deadlines are met in [0, P) and θ(0) ≠ θ(P), then τ is not A-feasible.

Proof. In the following, we denote by σ^(i) the schedule of the task subset τ^(i). Since θ(0) ≠ θ(P), there is more than one active job of the same task at P. We define ℓ ∈ {1, 2, ..., n} to be the smallest task index such that τ_ℓ has at least two active jobs at P_ℓ. In order to prove the property, we will prove that τ_ℓ will miss a deadline.

By definition of ℓ we have that θ(0) = θ(P_{ℓ−1}) (at least for the schedule σ^(ℓ−1)), and by Lemma 8 the schedule σ^(ℓ−1), obtained by considering only the task subset τ^(ℓ−1), is periodic with a period P_{ℓ−1}; in particular, the time instants at which at least one processor is available are periodic with period P_{ℓ−1}. Moreover, since P_ℓ is a multiple of P_{ℓ−1}, the schedule σ^(ℓ−1) is also periodic with a period P_ℓ. Therefore in each time interval [k · P_ℓ, (k + 1)P_ℓ), with k ≥ 0, once τ_1, τ_2, ..., τ_{ℓ−1} are scheduled, there is the same number t_ℓ of time instants at which at least one processor is available and where τ_ℓ may be scheduled. At time instant P_ℓ, since task parallelism is forbidden, there are (P_ℓ/T_ℓ) · C_ℓ − t_ℓ remaining units of execution of τ_ℓ and, consequently, at each time instant (k + 1) · P_ℓ there will be (k + 1) · ((P_ℓ/T_ℓ) · C_ℓ − t_ℓ) remaining units of execution of τ_ℓ. Consequently we can find k_ℓ = ⌈D_ℓ / ((P_ℓ/T_ℓ) · C_ℓ − t_ℓ)⌉ such that the job activated at (k_ℓ + 1)P_ℓ will miss its deadline, since it cannot be scheduled before older jobs of τ_ℓ and there are k_ℓ · ((P_ℓ/T_ℓ) · C_ℓ − t_ℓ) ≥ D_ℓ remaining units of execution of τ_ℓ at (k_ℓ + 1)P_ℓ.

Since we consider task-level fixed-priority scheduling, the tasks τ_i with i > ℓ do not interfere with the higher priority tasks already scheduled, in particular with τ_ℓ, which misses its deadline; consequently the system is not A-feasible. □

Corollary 10.
For any preemptive task-level fixed-priority algorithm A and any synchronous arbitrary deadline system τ on m unrelated processors, if τ is A-feasible, then the schedule of A is periodic with a period P that begins at instant 0.

Proof. Since τ is A-feasible, we know by Theorem 9 that θ(0) = θ(P). Moreover, a deterministic and memoryless scheduling algorithm will make the same scheduling decision at those instants. Consequently, the schedule repeats from the origin with a period of P. □

3.3 Asynchronous constrained deadline systems

In this section we give another important result: any feasible schedule on m unrelated processors of an asynchronous constrained deadline system, obtained using a preemptive task-level fixed-priority algorithm, is periodic from some point (Theorem 11), and we characterize that point. Without loss of generality, we consider the tasks ordered in decreasing order of their priorities: τ_1 > τ_2 > ··· > τ_n.

Theorem 11.
For any preemptive task-level fixed-priority algorithm A, the schedule of any A-feasible asynchronous constrained deadline system τ upon m unrelated processors is periodic with a period P from instant S_n, where S_i is defined inductively as follows:
• S_1 := O_1;
• S_i := max{O_i, O_i + ⌈(S_{i−1} − O_i)/T_i⌉ · T_i}, ∀i ∈ {2, 3, ..., n}.

Proof. The proof is made by induction on n (the number of tasks). We denote by σ^(i) the schedule obtained by considering only the task subset τ^(i), the i highest priority tasks {τ_1, ..., τ_i}, and by a^(i) the corresponding availability of the processors. Our inductive hypothesis is the following: the schedule σ^(k) is periodic from S_k with a period P_k for all 1 ≤ k ≤ i.

The property is true in the base case: σ^(1) is periodic from S_1 = O_1 with period P_1 = T_1 for τ^(1) = {τ_1}: since we consider constrained deadline systems, each request of τ_1 has finished its execution before the next one is released, and the schedule repeats.

We will now show that any A-feasible schedule of τ^(i+1) is periodic with period P_{i+1} from S_{i+1}. Since σ^(i) is periodic with a period P_i from S_i, the following equation is verified:

σ^(i)(t) = σ^(i)(t + P_i), ∀t ≥ S_i.   (1)

We denote by S_{i+1} := max{O_{i+1}, O_{i+1} + ⌈(S_i − O_{i+1})/T_{i+1}⌉ · T_{i+1}} the first request of τ_{i+1} not before S_i. Since the tasks in τ^(i) have higher priority than τ_{i+1}, the scheduling of τ_{i+1} does not interfere with the higher priority tasks, which are already scheduled. Therefore, we may build σ^(i+1) from σ^(i) such that the tasks τ_1, τ_2, ..., τ_i are scheduled at the very same instants and on the very same processors as they were in σ^(i). We now apply the induction step: for all t ≥ S_i we have a^(i)(t) = a^(i)(t + P_i), i.e., the availability of the processors repeats. Notice that at those instants t and t + P_i the available processors (if any) are the same. Consequently, only at these instants may task τ_{i+1} be executed.

The instants t with S_{i+1} ≤ t < S_{i+1} + P_{i+1}, where τ_{i+1} may be executed in σ^(i+1), are periodic with period P_{i+1} = lcm{P_i, T_{i+1}}. Moreover, since the system is feasible and we consider constrained deadlines, the only active request of τ_{i+1} at S_{i+1} (respectively at S_{i+1} + P_{i+1}) is the one activated at S_{i+1} (respectively at S_{i+1} + P_{i+1}). Consequently, the instants at which the task-level fixed-priority algorithm A schedules τ_{i+1} are periodic with period P_{i+1}. Therefore the schedule σ^(i+1) repeats from S_{i+1} with period equal to P_{i+1}, and the property is true for all 1 ≤ k ≤ n, in particular for k = n: σ^(n) is periodic with period equal to P from S_n, and the property follows. □

3.4 Asynchronous arbitrary deadline systems

In this section we present another important result: any feasible schedule on m unrelated processors of an asynchronous arbitrary deadline system, obtained using a preemptive task-level fixed-priority algorithm, is periodic from some point (Theorem 14).
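The inductively defined bounds are straightforward to evaluate. The sketch below (hypothetical Python; the function name and the two-task example are illustrative) computes S_n of Theorem 11, i.e., S_1 = O_1 and S_i = max{O_i, O_i + ⌈(S_{i−1} − O_i)/T_i⌉ T_i}, and, when `arbitrary` is set, the later point Ŝ_n of Theorem 14, which additionally adds P_i = lcm{T_1, ..., T_i} at each step i > 1:

```python
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

def feasibility_point(offsets, periods, arbitrary=False):
    """S_n (Theorem 11); with arbitrary=True, S_n hat (Theorem 14).
    Tasks are assumed indexed by decreasing priority."""
    S = offsets[0]                     # S_1 = O_1
    P = periods[0]                     # P_1 = T_1
    for O, T in zip(offsets[1:], periods[1:]):
        P = lcm(P, T)                  # P_i = lcm{T_1, ..., T_i}
        # first release of tau_i at or after S_{i-1}: ceil with integer math
        S = max(O, O + -((O - S) // T) * T)
        if arbitrary:
            S += P                     # S_i hat adds one more hyperperiod P_i
    return S

# Hypothetical two-task example (priority order tau_1 > tau_2):
print(feasibility_point([5, 0], [4, 6]))        # 6: first release of tau_2 after S_1 = 5
print(feasibility_point([5, 0], [4, 6], True))  # 18 = 6 + lcm(4, 6) * ... = 6 + 12
```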
Corollary 12.
For any preemptive task-level fixed-priority algorithm A and any asynchronous arbitrary deadline system τ on m unrelated processors, we have that: for each task τ_i and for any time instant t ≥ O_i and k such that R_i^k ≤ t ≤ R_i^k + D_i, if there is no deadline missed up to time t + P, then ǫ_i^k(t) ≥ ǫ_i^{k+h_i}(t + P), with h_i := P/T_i.

Proof. This result is a direct consequence of Lemma 2, since preemptive task-level fixed-priority algorithms are job-level fixed-priority and request-dependent schedulers. □

Corollary 13.
For any preemptive task-level fixed-priority algorithm A and any asynchronous arbitrary deadline system τ on m unrelated processors, we have that: for each task τ_i and for any time instant t ≥ O_i, if there is no deadline missed up to time t + P, then either α_i(t) < α_i(t + P), or [α_i(t) = α_i(t + P) and γ_i(t) ≥ γ_i(t + P)], where the triple (α_i(t), β_i(t), γ_i(t)) denotes θ_i(t).

Proof. If α_i(t) = 0, then either α_i(t + P) > 0 = α_i(t), or α_i(t + P) = 0 = α_i(t) and there is nothing to prove. Otherwise, α_i(t) = n_i(t) − m_i(t), where n_i(t) is the number of jobs of τ_i activated before or at t and m_i(t) is the number of jobs of τ_i that have completed their execution before or at t. We have n_i(t + P) = n_i(t) + P/T_i and, by Corollary 12, m_i(t + P) ≤ m_i(t) + P/T_i. Consequently α_i(t + P) ≥ α_i(t), and if α_i(t) = α_i(t + P), then m_i(t + P) = m_i(t) + P/T_i and γ_i(t) = ǫ_i^{m_i(t)+1}(t) ≥ ǫ_i^{m_i(t)+1+P/T_i}(t + P) = γ_i(t + P). □

Theorem 14.
For any preemptive task-level fixed-priority algorithm A, any A-feasible asynchronous arbitrary deadline system τ upon m unrelated processors is periodic with period P from instant Ŝ_n, where the Ŝ_i are defined inductively as follows:

• Ŝ_1 = O_1;
• Ŝ_i def= max{O_i, O_i + ⌈(Ŝ_{i−1} − O_i)/T_i⌉ T_i} + P_i, for i > 1.

Proof. We denote by σ^(i) the schedule obtained by considering only the task subset τ^(i), the i highest-priority tasks {τ_1, ..., τ_i}, and by a^(i) the corresponding availability of the processors. Our inductive hypothesis is the following: the schedule σ^(k) is periodic from Ŝ_k with period P_k, for all 1 ≤ k ≤ i. The property is true in the base case: σ^(1) is periodic from Ŝ_1 = O_1 with period P_1 = T_1 for τ^(1) = {τ_1}, since we consider feasible systems: at instant P_1 + O_1 = T_1 + O_1 the previous job of τ_1 has finished its execution (C_1 ≤ T_1) and the schedule repeats.

We will now show that any A-feasible schedule of τ^(i+1) is periodic with period P_{i+1} from Ŝ_{i+1}. Since σ^(i) is periodic with period P_i from Ŝ_i, the following equation is verified:

    σ^(i)(t) = σ^(i)(t + P_i), ∀t ≥ Ŝ_i.    (2)

We denote by Ŝ_{i+1} def= max{O_{i+1}, O_{i+1} + ⌈(Ŝ_i − O_{i+1})/T_{i+1}⌉ T_{i+1}} + P_{i+1} the time instant obtained by adding P_{i+1} to the time instant which corresponds to the first activation of τ_{i+1} after Ŝ_i. Since the tasks in τ^(i) have higher priority than τ_{i+1}, the scheduling of τ_{i+1} does not interfere with the higher-priority tasks, which are already scheduled. Therefore, we may build σ^(i+1) from σ^(i) such that the tasks τ_1, τ_2, ..., τ_i are scheduled at the very same instants and on the very same processors as they were in σ^(i). We now apply the induction step: for all t ≥ Ŝ_i in σ^(i) we have a^(i)(t) = a^(i)(t + P_i), i.e., the availability of the processors repeats. Notice that at the instants t and t + P_i the available processors (if any) are the same.
Hence τ_{i+1} may be executed at only these instants in the time interval [Ŝ_{i+1}, Ŝ_{i+1} + P_{i+1}). The instants t with Ŝ_{i+1} ≤ t < Ŝ_{i+1} + P_{i+1} at which τ_{i+1} may be executed in σ^(i+1) are periodic with period P_{i+1}, since P_{i+1} is a multiple of P_i and Ŝ_{i+1} ≥ Ŝ_i. We now prove by contradiction that the system is in the same state at the instants Ŝ_{i+1} and Ŝ_{i+1} + P_{i+1}. Suppose that θ(Ŝ_{i+1}) ≠ θ(Ŝ_{i+1} + P_{i+1}).

We first prove that there is no t ∈ [Ŝ_{i+1}, Ŝ_{i+1} + P_{i+1}) such that at t there is at least one available processor in σ^(i) and no job of τ_{i+1} is scheduled at t in σ^(i+1). If there were such an instant t′, then by Corollary 13 we would have θ(t′ − P_{i+1}) = θ(t′), since from the inductive hypothesis (notice that P_{i+1} is a multiple of P_i) and since t′ − P_{i+1} ≥ Ŝ_{i+1} − P_{i+1} ≥ Ŝ_i ≥ ··· ≥ Ŝ_1 we obtain that θ_k(t′ − P_{i+1}) = θ_k(t′) for 1 ≤ k ≤ i. Consequently θ(Ŝ_{i+1}) = θ(Ŝ_{i+1} + P_{i+1}), which contradicts our assumption.

Secondly, since θ_{i+1}(Ŝ_{i+1}) ≠ θ_{i+1}(Ŝ_{i+1} + P_{i+1}), by Corollary 13 either there are fewer active jobs at Ŝ_{i+1} than at Ŝ_{i+1} + P_{i+1}, or there is the same number of active jobs at both instants and the oldest active job at Ŝ_{i+1} was executed for more time units than the oldest active job at Ŝ_{i+1} + P_{i+1}. Therefore, since there is no t ∈ [Ŝ_{i+1}, Ŝ_{i+1} + P_{i+1}) such that at t at least one processor is available in σ^(i) and no job of τ_{i+1} is scheduled at t in σ^(i+1), there are not sufficiently many time instants with at least one available processor to schedule all the jobs of τ_{i+1} activated in the time interval [Ŝ_{i+1}, Ŝ_{i+1} + P_{i+1}).
We obtain that the system is not feasible, which contradicts our assumption of τ being feasible. Consequently θ(Ŝ_{i+1}) = θ(Ŝ_{i+1} + P_{i+1}); moreover, by definition of Ŝ_{i+1} (which corresponds to an activation of τ_{i+1}) the task activations repeat from Ŝ_{i+1}, which proves the property. □

In the previous sections, we assumed that the execution requirement of each task is constant, while the designer actually knows only an upper bound on the actual execution requirement, i.e., the worst-case execution time (WCET). Consequently, we have to show that our tests are robust, i.e., that considering the scenario where all task requirements are maximal is indeed the worst-case scenario, which is not obvious upon multiprocessors because of scheduling anomalies. More precisely, we have to show that the considered schedulers upon the considered platforms are predictable. Based on this predictability property and the periodicity results of Section 3, we provide exact feasibility tests for the various kinds of schedulers and platforms considered in this work. First of all, we introduce and formalize the notion of feasibility interval necessary to provide the exact feasibility tests:
Definition 11 (Feasibility interval). For any task system τ = {τ_1, ..., τ_n} and any set of m processors {π_1, ..., π_m}, the feasibility interval is a finite interval such that, if no deadline is missed while considering only requests within this interval, then no deadline will ever be missed.

In this section, we consider the scheduling of sets of jobs J def= J_1, J_2, J_3, ... (finite or infinite sets of jobs) and, without loss of generality, we consider jobs in decreasing order of priorities (J_1 > J_2 > J_3 > ···). We suppose that the execution time of each job J_i can be any value in the interval [e_i^−, e_i^+], and we denote by J_i^+ the job defined from job J_i as follows: J_i^+ def= (r_i, e_i^+, d_i). The associated execution rates of J_i^+ are s_{i,j}^+ def= s_{i,j}, ∀j. Similarly, J_i^− is the job defined from J_i as follows: J_i^− def= (r_i, e_i^−, d_i), with associated execution rates s_{i,j}^− def= s_{i,j}, ∀j. We denote by J^(i) the set of the i highest-priority jobs, by J^(i)− the set {J_1^−, ..., J_i^−}, and by J^(i)+ the set {J_1^+, ..., J_i^+}. Notice that the schedule of an ordered set of jobs using a work-conserving and job-level fixed-priority algorithm is unique. Let S(J) be the time instant at which the lowest-priority job of J begins its execution in the schedule. Similarly, let F(J) be the time instant at which the lowest-priority job of J completes its execution in the schedule.

Definition 12 (Predictable algorithms). A scheduling algorithm is said to be predictable if S(J^(i)−) ≤ S(J^(i)) ≤ S(J^(i)+) and F(J^(i)−) ≤ F(J^(i)) ≤ F(J^(i)+), for all 1 ≤ i ≤ ℓ and for all feasible sets of jobs J^(i)+.

In [8] the authors showed that work-conserving job-level fixed-priority algorithms are predictable on identical processors. We will now extend that result to unrelated platforms. But first, we adapt the definition of the availability of the processors (Definition 9) to deal with the scheduling of jobs.
Definition 13 (Availability of the processors A(J, t), job scheduling). For any ordered set of jobs J and any set of m unrelated processors {π_1, ..., π_m}, we define the availability of the processors A(J, t) of the set of jobs J at instant t as the set of available processors: A(J, t) def= {j | σ_j(t) = 0} ⊆ {1, ..., m}, where σ is the schedule of J.

Lemma 15.
For any feasible ordered set of jobs J (using a job-level fixed-priority and work-conserving schedule) upon an arbitrary set of unrelated processors {π_1, ..., π_m}, we have that A(J^(i)+, t) ⊆ A(J^(i), t), for all t and all i. That is, at any time instant the processors available in σ^(i)+ are also available in σ^(i). (We consider that the sets of jobs are ordered in the same decreasing order of priorities, i.e., J_1 > J_2 > ··· > J_ℓ and J_1^+ > J_2^+ > ··· > J_ℓ^+.)

Proof. The proof is by induction on ℓ (the number of jobs). Our inductive hypothesis is the following: A(J^(k)+, t) ⊆ A(J^(k), t), for all t and 1 ≤ k ≤ i. The property is true in the base case, A(J^(1)+, t) ⊆ A(J^(1), t) for all t: indeed, S(J^(1)) = S(J^(1)+), and J_1 and J_1^+ are both scheduled on the same (their fastest) processor, but J_1^+ will be executed for the same or a larger amount of time than J_1.

We will now show that A(J^(i+1)+, t) ⊆ A(J^(i+1), t), for all t. Since the jobs in J^(i) have higher priority than J_{i+1}, the scheduling of J_{i+1} does not interfere with the higher-priority jobs, which are already scheduled. Similarly, J_{i+1}^+ does not interfere with the higher-priority jobs of J^(i)+, which are already scheduled. Therefore, we may build the schedule σ^(i+1) from σ^(i) such that the jobs J_1, J_2, ..., J_i are scheduled at the very same instants and on the very same processors as they were in σ^(i).
Similarly, we may build σ^(i+1)+ from σ^(i)+. Notice that A(J^(i+1), t) will contain the same available processors as A(J^(i), t) for all t, except at the time instants at which J_{i+1} is scheduled; similarly, A(J^(i+1)+, t) will contain the same available processors as A(J^(i)+, t) for all t, except at the time instants at which J_{i+1}^+ is scheduled. From the inductive hypothesis we have that A(J^(i)+, t) ⊆ A(J^(i), t) for all t, and consequently at any time instant t we have one of the following situations:

• there is at least one eligible processor in A(J^(i), t) \ A(J^(i)+, t), and among them the fastest processor is faster than those belonging to A(J^(i)+, t); consequently, J_{i+1} can be scheduled at time instant t on a faster processor than J_{i+1}^+;

• there is no eligible processor in A(J^(i), t) \ A(J^(i)+, t); consequently, J_{i+1} can be scheduled at time instant t on the very same processor as J_{i+1}^+.

Therefore, J_{i+1} can be scheduled either at the very same instants as J_{i+1}^+ on the very same or faster processors, or it may progress during additional time instants. Combined with the fact that e_{i+1} ≤ e_{i+1}^+, the property follows in both situations. □

Theorem 16.
Job-level fixed-priority algorithms are predictable on unrelated platforms.

Proof. For a feasible ordered set J of ℓ jobs and a set of unrelated processors {π_1, ..., π_m}, we have to show that S(J^(i)−) ≤ S(J^(i)) ≤ S(J^(i)+) and F(J^(i)−) ≤ F(J^(i)) ≤ F(J^(i)+), for all 1 ≤ i ≤ ℓ. (The sets of jobs are ordered in the same decreasing order of priorities, i.e., J_1^− > J_2^− > ··· > J_ℓ^−, J_1 > J_2 > ··· > J_ℓ and J_1^+ > J_2^+ > ··· > J_ℓ^+.) The proof is by induction on ℓ (the number of jobs). We show the second part of each inequality, i.e., S(J^(i)) ≤ S(J^(i)+) and F(J^(i)) ≤ F(J^(i)+) for all 1 ≤ i ≤ ℓ; the proof of the first part is similar.

Our inductive hypothesis is the following: S(J^(k)) ≤ S(J^(k)+) and F(J^(k)) ≤ F(J^(k)+), for all 1 ≤ k ≤ i. The property is true in the base case since S(J^(1)) = S(J^(1)+) and F(J^(1)) ≤ F(J^(1)+). We will now show that S(J^(i+1)) ≤ S(J^(i+1)+) and F(J^(i+1)) ≤ F(J^(i+1)+).

Since the jobs in J^(i) have higher priority than J_{i+1}, the scheduling of J_{i+1} does not interfere with the higher-priority jobs, which are already scheduled. Similarly, J_{i+1}^+ does not interfere with the higher-priority jobs of J^(i)+, which are already scheduled. Therefore, we may build the schedule σ^(i+1) from σ^(i) such that the jobs J_1, J_2, ..., J_i are scheduled at the very same instants and on the very same processors as they were in σ^(i). Similarly, we may build σ^(i+1)+ from σ^(i)+. The job J_{i+1} can be scheduled only on processors for which its associated execution rates are not zero, when such processors are available in σ^(i), i.e., at those time instants t_0 ≥ r_{i+1} for which A(J^(i), t_0) contains at least one eligible processor. Similarly, J_{i+1}^+ may be scheduled at those time instants t_0^+ ≥ r_{i+1} for which A(J^(i)+, t_0^+) contains at least one eligible processor.
By the inductive hypothesis we know that higher-priority jobs complete sooner (or at the same time); consequently t_0 ≤ t_0^+ and J_{i+1} begins its execution in σ^(i+1) sooner than, or at the same instant as, J_{i+1}^+ in σ^(i+1)+, i.e., S(J^(i+1)) ≤ S(J^(i+1)+). It follows by Lemma 15 that from time t_0 the job J_{i+1} can be scheduled at least at the very same instants and on the very same processors as J_{i+1}^+, but the job J_{i+1} may also progress at the very same instants on faster processors (relative to its associated set of processors) or during additional time instants (since we consider work-conserving scheduling). Consequently, F(J^(i+1)) ≤ F(J^(i+1)+). □

Now we have the material to define an exact feasibility test for asynchronous constrained deadline periodic systems.

Corollary 17.
For any preemptive task-level fixed-priority algorithm A and any asynchronous constrained deadline system τ on m unrelated processors, we have that τ is A-feasible if and only if all deadlines are met in [0, S_n + P) and θ(S_n) = θ(S_n + P), where the S_i are defined inductively in Theorem 11. Moreover, for every task τ_i one only has to check the deadlines in the interval [S_i, S_i + lcm{T_j | j ≤ i}).

Proof. Corollary 17 is a direct consequence of Theorem 11 and Theorem 16, since task-level fixed-priority algorithms are job-level fixed-priority schedulers. □

The feasibility test given by Corollary 17 may be improved as was done in the uniprocessor case [5]; the proof carries over to multiprocessor platforms, since it depends neither on the number of processors nor on the kind of platform, but only on the availability of the processors.
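For concreteness, the per-task check windows of Corollary 17 can be computed mechanically once the S_i values are known. The following Python sketch assumes the S_i have already been obtained (via the recursion of Theorem 11, not reproduced here); the task parameters in the example are hypothetical.

```python
from math import lcm

def check_windows(S, T):
    """Per-task deadline-check windows from Corollary 17: for task i
    (0-indexed here), deadlines need only be checked in
    [S_i, S_i + lcm{T_j | j <= i}).
    S: assumed S_i values (from Theorem 11); T: task periods."""
    windows = []
    for i in range(len(T)):
        p_i = lcm(*T[: i + 1])  # lcm of the periods of the first i+1 tasks
        windows.append((S[i], S[i] + p_i))
    return windows

# Hypothetical three-task example: S = (0, 2, 5), T = (4, 6, 10)
print(check_windows([0, 2, 5], [4, 6, 10]))  # [(0, 4), (2, 14), (5, 65)]
```

Note how the window length grows with the prefix hyper-period, matching the remark below that the interval length is proportional to P.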
Theorem 18 ([5]). Let X_i be inductively defined by X_n = S_n and X_i = O_i + ⌊(X_{i+1} − O_i)/T_i⌋ T_i for i ∈ {n−1, n−2, ..., 1}; we have that τ is A-feasible if and only if all deadlines are met in [X_1, S_n + P) and θ(S_n) = θ(S_n + P).

Now we have the material to define an exact feasibility test for asynchronous arbitrary deadline periodic systems.
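The backward recursion of Theorem 18 is straightforward to implement. A minimal Python sketch, with hypothetical offsets, periods and S_n:

```python
def backward_X(offsets, periods, S_n):
    """Theorem 18: X_n = S_n and, going backwards,
    X_i = O_i + floor((X_{i+1} - O_i) / T_i) * T_i,
    i.e. the last activation of task i at or before X_{i+1}."""
    n = len(offsets)
    X = [0] * n
    X[-1] = S_n
    for i in range(n - 2, -1, -1):
        # Python's floor division matches the mathematical floor here.
        X[i] = offsets[i] + ((X[i + 1] - offsets[i]) // periods[i]) * periods[i]
    return X

# Hypothetical example: O = (0, 1, 3), T = (4, 6, 10), S_n = 50
print(backward_X([0, 1, 3], [4, 6, 10], 50))  # [48, 49, 50]
```

The resulting X_1 shortens the simulation interval from [0, S_n + P) to [X_1, S_n + P).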
Corollary 19.
For any preemptive task-level fixed-priority algorithm A and any asynchronous arbitrary deadline system τ on m unrelated processors, we have that τ is A-feasible if and only if all deadlines are met in [0, Ŝ_n + P) and θ(Ŝ_n) = θ(Ŝ_n + P), where the Ŝ_i are defined inductively in Theorem 14.

Proof. Corollary 19 is a direct consequence of Theorem 14 and Theorem 16, since task-level fixed-priority algorithms are job-level fixed-priority schedulers. □

Notice that the length of our feasibility interval is proportional to P (the least common multiple of the periods), which is unfortunately also the case for most feasibility intervals for the simpler uniprocessor scheduling problem (and for identical platforms or simpler task models). In practice, the periods are usually harmonic, which keeps the term P fairly small.
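The inductive definition of the Ŝ_i in Theorem 14 can be sketched as follows. This is an illustrative Python sketch, taking P_i = lcm{T_1, ..., T_i} (consistent with P_1 = T_1 and P_n = P in the proof of Theorem 14); the task parameters in the example are hypothetical.

```python
from math import ceil, lcm

def s_hat(offsets, periods):
    """Theorem 14: S_hat_1 = O_1 and, for i > 1,
    S_hat_i = max(O_i, O_i + ceil((S_hat_{i-1} - O_i) / T_i) * T_i) + P_i,
    where P_i = lcm(T_1, ..., T_i) (hyper-period of the first i tasks)."""
    bounds = [offsets[0]]
    for i in range(1, len(periods)):
        P_i = lcm(*periods[: i + 1])
        # First activation of task i at or after S_hat_{i-1}:
        first = max(offsets[i],
                    offsets[i]
                    + ceil((bounds[-1] - offsets[i]) / periods[i]) * periods[i])
        bounds.append(first + P_i)
    return bounds

# Hypothetical two-task example: O = (2, 0), T = (5, 3)
print(s_hat([2, 0], [5, 3]))  # [2, 18]
```

The last value Ŝ_n then yields the feasibility interval [0, Ŝ_n + P) of Corollary 19.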
EDF scheduling of asynchronous arbitrary deadline systems
We know by Corollary 5 that any deterministic, request-dependent and feasible EDF schedule is periodic with a period equal to P. Unfortunately, to the best of our knowledge there is no known upper bound on the time instant at which the periodic part of the schedule begins. Examples show that O_max + P is not such a time instant for EDF upon multiprocessors (see [1] for instance). Other examples show that in some cases the periodic part of the schedule begins only after a very large time interval (i.e., many hyper-periods).

Based on Corollary 5 we can nevertheless define an exact feasibility test under EDF upon multiprocessors. The idea, illustrated by Algorithm 1, is to build the schedule (by means of simulation) and regularly check whether the periodic part of the schedule has been reached.
Algorithm 1: Exact EDF-feasibility test upon multiprocessors

Input: task set τ
Output: feasible

begin
    Schedule (from 0) to O_max;
        {The function Schedule stops the program and returns false once a deadline is missed}
    s1 := θ(O_max);
    Schedule (from O_max) to O_max + P;
    s2 := θ(O_max + P);
    current-time := O_max + P;
    while s1 ≠ s2 do
        s1 := s2;
        Schedule (from current-time) to current-time + P;
        current-time := current-time + P;
        s2 := θ(current-time);
    return true;
end

In this section we present exact feasibility tests in the particular case of synchronous periodic task systems. In Section 4.5.1 we study synchronous constrained deadline task systems, and in Section 4.5.2 synchronous arbitrary deadline task systems.
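The state-comparison loop of Algorithm 1 above can be sketched in Python as follows. The callables schedule_upto (simulate the EDF schedule up to a given instant, returning False as soon as a deadline is missed) and state_of (return the system state θ(t)) are assumed stand-ins for the paper's Schedule and θ; the max_rounds cut-off is an added safety guard, not part of the algorithm.

```python
def exact_edf_test(schedule_upto, state_of, o_max, P, max_rounds=10**6):
    """Sketch of Algorithm 1: simulate the schedule hyper-period by
    hyper-period until the system state theta repeats with period P.

    schedule_upto(t): simulate up to time t; False on a deadline miss.
    state_of(t):      return the (comparable) system state theta(t).
    """
    if not schedule_upto(o_max):          # deadline missed in [0, O_max]
        return False
    s1 = state_of(o_max)
    if not schedule_upto(o_max + P):
        return False
    s2 = state_of(o_max + P)
    t = o_max + P
    for _ in range(max_rounds):           # safety cut-off (not in the paper)
        if s1 == s2:                      # periodic part reached: feasible
            return True
        s1 = s2
        if not schedule_upto(t + P):      # deadline missed while simulating
            return False
        t += P
        s2 = state_of(t)
    raise RuntimeError("no periodicity detected within max_rounds")
```

With toy stand-ins that never miss a deadline and whose state is t mod P, the test succeeds on the first comparison; a stand-in that reports a miss makes it return false, mirroring the behaviour of Schedule.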
An exact feasibility test for synchronous constrained deadline systems can be obtained directly from Theorem 16.
Corollary 20.
For any deterministic, memoryless, job-level fixed-priority algorithm A and any synchronous constrained deadline system τ on m unrelated processors, we have that τ is A-feasible if and only if all deadlines are met in the interval [0, P).

Proof. The result is a direct consequence of Theorem 6 and Theorem 16. □

For any preemptive task-level fixed-priority algorithm A and any synchronous arbitrary deadline system τ, τ is A-feasible on m unrelated processors if and only if all deadlines are met in the interval [0, P) and θ(0) = θ(P).

Proof. The result is a direct consequence of Corollary 10 and Theorem 16, since task-level fixed-priority schedulers are priority-driven. □

In this paper we studied the global scheduling of periodic task systems upon heterogeneous multiprocessor platforms. We provided exact feasibility tests based on periodicity properties. For any asynchronous arbitrary deadline periodic task system and any task-level fixed-priority scheduler (e.g., RM), we characterized an upper bound on the instant in the schedule where the periodic part begins. Based on that property we provided feasibility intervals (and consequently exact feasibility tests) for those schedulers. To the best of our knowledge such an interval is unknown for EDF, a job-level fixed-priority scheduler. Fortunately, based on a periodicity property, we provided an algorithm which determines (by means of simulation) where the periodicity has started (if the system is feasible); this algorithm provides an exact feasibility test for
EDF upon heterogeneous multiprocessors.
References

[1] Braun, C., and Cucu, L. Negative results on idle intervals and periodicity for multiprocessor scheduling under EDF. In Junior Researcher Workshop on Real-Time Computing (2007), Institut National Polytechnique de Lorraine, France.
[2] Carpenter, J., Funk, S., Holman, P., Srinivasan, A., Anderson, J., and Baruah, S. A categorization of real-time multiprocessor scheduling problems and algorithms. Handbook of Scheduling (2005).
[3] Goossens, J. Scheduling of Hard Real-Time Periodic Systems with Various Kinds of Deadline and Offset Constraints. PhD thesis, Université Libre de Bruxelles, Brussels, Belgium, 1999.
[4] Goossens, J. Scheduling of offset free systems. Real-Time Systems: The International Journal of Time-Critical Computing 24, 2 (2003), 239–258.
[5] Goossens, J., and Devillers, R. The non-optimality of the monotonic assignments for hard real-time offset free systems. Real-Time Systems: The International Journal of Time-Critical Computing 13, 2 (1997), 107–126.
[6] Goossens, J., and Devillers, R. Feasibility intervals for the deadline driven scheduler with arbitrary deadlines. In Proceedings of the 6th International Conference on Real-Time Computing Systems and Applications (1999), IEEE Computer Society Press, pp. 54–61.
[7] Goossens, J., Funk, S., and Baruah, S. EDF scheduling on multiprocessors: some (perhaps) counterintuitive observations. Proceedings of the 8th International Conference on Real-Time Computing Systems and Applications (2002), 321–330.
[8] Ha, R., and Liu, J. Validating timing constraints in multiprocessor and distributed real-time systems. Proceedings of the 14th IEEE International Conference on Distributed Computing Systems (1994).
[9] Lehoczky, J. Fixed priority scheduling of periodic task sets with arbitrary deadlines. In IEEE Real-Time Systems Symposium (1990), pp. 201–213.
[10] Liu, C. Scheduling algorithms for multiprocessors in a hard real-time environment. JPL Space Programs Summary 37-60(II) (1969), 28–31.
[11] Liu, C., and Layland, J. Scheduling algorithms for multiprogramming in a hard-real-time environment. Journal of the ACM 20, 1 (1973), 46–61.