Scheduling periodic messages on a shared link
Maël Guiraud and Yann Strozecki
David Laboratory, UVSQ, France, {mael.guiraud,yann.strozecki}@uvsq.fr
Nokia Bell Labs, France
July 2, 2020
Abstract
Cloud-RAN is a recent architecture for mobile networks where the processing units are located in distant data-centers while, until now, they were attached to antennas. The main challenge, to fulfill protocol constraints, is to guarantee a low latency for the periodic messages sent from each antenna to its processing unit and back. The problem we address is to find a sending scheme of these periodic messages without contention nor buffering. We study the problem pma modeling this situation on a simple but common topology, where all contentions are on a single link shared by all antennas. The problem is reminiscent of classical scheduling and packing problems, but the periodicity introduces a new twist. We study how the problem behaves with regard to the load of the shared link. The two main contributions are polynomial time algorithms which always find a solution for arbitrary size of messages and load less than 0.4, or for messages of size one and load less than φ − 1, φ being the golden ratio ((1 + √5)/2).

Centralized radio network architecture, called C-RAN for Cloud Radio Access Network, has been proposed as a next generation architecture to reduce energy consumption costs [16] and more generally the total cost of ownership. The main challenge for C-RAN is to reach a latency compatible with transport protocols [18]. The latency is measured between the sending of a message by a Remote Radio Head (RRH) and the reception of the answer, computed by a BaseBand Unit (BBU) in the cloud. For example, LTE standards require processing functions like HARQ (Hybrid Automatic Repeat reQuest) in 3ms [7]. In 5G, some services need end-to-end latency as low as 1ms [2, 6]. The specificity of the C-RAN context is not only the latency constraint, but also the periodicity of the data transfer in the fronthaul network between RRHs and BBUs: messages need to be emitted and received each millisecond [7]. Statistical multiplexing, even with a large bandwidth, does not satisfy the latency requirements of C-RAN [4, 3].
The current solution [20, 23] is to use dedicated circuits for the fronthaul. Each end-point, an RRH on one side and a BBU on the other side, is connected through direct fiber or full optical switches. This eliminates all contentions since each message flow has its own link, but it is extremely expensive and does not scale in the case of a mobile network composed of a large number of RRHs.

Our aim is to operate a C-RAN on a low-cost shared switched network. The question we address is thus the following: is it possible to schedule periodic messages on a shared link without using buffers? Eliminating this source of latency leaves us with more time budget for latency due to the physical length of the routes in the network, and thus allows for wider deployment areas. Our proposed solution is to compute beforehand a periodic and deterministic sending scheme, which completely avoids contention. This kind of deterministic approach has gained some traction recently: Deterministic Networking is under standardization in the IEEE 802.1 TSN group [9], as well as in the IETF DetNet working group [1]. Several patents on concepts and mechanisms for DetNet have already been published, see for example [11, 13].

The algorithmic problem studied, called Periodic Message Assignment or pma, is as follows: given a period, a message size and a delay between the two contention points for each message, set a departure time in the period for each message, so that they go through both contention points without collision. It is similar to the two flow shop scheduling problem [24] with periodicity. The periodicity adds more constraints, since messages from consecutive periods can interact. In flow shop problems, the aim is usually to minimize the makespan, or schedule length, but in our periodic variant it is infinite.
Hence, we choose to look for any periodic schedule without buffering, to minimize the trip time of each message. To our knowledge, all studied periodic scheduling problems are quite different from the one we present. In some works [12, 10], the aim is to minimize the number of processors on which the periodic tasks are scheduled, while our problem corresponds to a single processor and a constraint similar to makespan minimization. In cyclic scheduling [14], the aim is to minimize the period of a scheduling to maximize the throughput, while our period is fixed. The train timetabling problem [15] and in particular the periodic event scheduling problem [21] are generalizations of our problem, since they take the period as input and can express the fact that two trains (like two messages) should not cross. However, they are much more general: the trains can vary in size and speed, the network can be more complex than a single track and there are precedence constraints. Hence, the numerous variants of train scheduling problems are very hard to solve (and always NP-hard). Thus, some delay is allowed to make the problems solvable, and most of the research done [15] is devising practical algorithms using branch and bound, mixed integer programming, genetic algorithms, etc.

In previous articles of the authors, generalizations of pma allowing buffering are studied on a single link [4] or on a cycle [5]. Heuristics (using classical scheduling algorithms as subroutines) and FPT algorithms are used to find a sending scheme with minimal latency, while here we consider only sending schemes with no additional latency. More complex scheduling problems for time sensitive networks have been practically solved, using mixed integer programming [17, 22] or an SMT solver [8], but without theoretical guarantees on the quality of the produced solutions.
Typical applications cited in these works (besides C-RAN) are sensor networks communicating periodically inside cars or planes, or logistic problems in production lines. We think our approach can be brought to these settings.

Organization of the Paper.
In Sec. 2, we present the model and the problem pma. In Sec. 3, we present several greedy algorithms and prove they always find a solution to pma for moderate loads. These algorithms rely on schemes to build compact enough solutions, to bound measures of the size wasted when scheduling messages. Then, we illustrate their surprisingly good performance on random instances in Sec. 3.4. It turns out that pma can be restricted to messages of size one for the price of doubling the load or adding some latency, as explained in Sec. 4. Hence, in Sec. 5, we present a deterministic and a probabilistic algorithm for this special case. The deterministic algorithm is not greedy, contrarily to the algorithms of Sec. 3, since it uses a swap mechanism which can move already scheduled messages. Experiments in Sec. 5.3 again show that our algorithms work even better on random instances than what we have established for their worst case.

Figure 1: C-RAN network with a single shared link modeled by two contention points and delays
In this article, we model a simple network, as represented in Fig. 1, in which periodic messages flow through a single bidirectional link. The answer to each message is then sent back through the same link and it does not interact with the messages sent in the other direction, since the link we model is full-duplex. In the C-RAN context we model, all messages are of the same nature, hence they are all of the same size, denoted by τ. This size represents the time needed to send a message through some contention point of the network, here a link shared by all antennas. We denote by n the number of messages, which are numbered from 0 to n − 1. A message i is characterized by its delay d_i: when the message number i arrives at the link at time t, then it returns to the other end of the link on its way back at time t + d_i. The model and problem can easily be generalized to any topology, that is any directed acyclic multigraph with any number of contention points, see [4]. We choose here to focus on the simplest, but realistic, non-trivial network, for which we can still obtain some theoretical results.

The time is discretized and the process we consider is periodic of fixed integer period P. We use the notation [P] for the set {0, . . . , P − 1}. Since the process is periodic, we may consider any interval of P units of time, and the times at which messages go through the two contention points during this interval completely represent the state of our system. We call the representation of the interval of time in the first contention point the first period, and the second period for the second contention point.

An offset of a message is a choice of time at which it arrives at the first contention point (i.e. in the first period). Let us consider a message i of offset o_i: it uses the interval of time [i]_1 = {(o_i + t) mod P | 0 ≤ t < τ} in the first period and [i]_2 = {(d_i + o_i + t) mod P | 0 ≤ t < τ} in the second period. Two messages i and j collide if either [i]_1 ∩ [j]_1 ≠ ∅ or [i]_2 ∩ [j]_2 ≠ ∅. If t ∈ [i]_1 (resp. t ∈ [i]_2), we say that message i uses time t in the first period (resp. in the second period). We want to send all messages so that there is no collision in the shared link. In other words, we look for a way to send the messages without using buffering, hence limiting the latency of messages to the physical length of the links. An assignment is a choice of an offset for each message such that no pair of messages collide, as shown in Fig. 2.
Formally, an assignment is a function from the messages in [n] to their offsets in [P].

Figure 2: An instance of pma with 3 messages, P = 10, τ = 2, and one assignment

Let Periodic Message Assignment or pma be the following problem: given an instance of n messages, a period P and a size τ, find an assignment or decide there is none. When an assignment is found, we say the problem is solved positively.

The complexity of pma is not yet known. However, we have proven that, when parameterized by n, the number of messages, the problem is FPT [3]. A slight generalization of pma, with more contention points but each message only going through two of them, as in pma, is NP-hard [3]. If the shared link is not full-duplex, that is, there is a single contention point and each message goes through it twice, it is also NP-hard [19]. Hence, we conjecture that pma is NP-hard. To overcome the supposed hardness of pma, we study it when the load of the system is small enough. The load is defined as the number of units of time used in a period by all messages divided by the period, that is nτ/P. There cannot be an assignment when the load is larger than one; we prove in this article that, for moderate loads, there is always an assignment and that it can be found by a polynomial time algorithm.

In this section, we study the case of arbitrary values for τ. When modeling real problems, it is relevant to have τ > 1. A partial assignment A is a function defined from a subset S of [n] to [P]. The cardinality of S is the size of the partial assignment A. A message in S is scheduled (by A), and a message not in S is unscheduled. We only consider partial assignments such that no pair of messages of S collide.
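The model above can be made concrete with a short sketch (this is illustrative Python, not the authors' C implementation; the function names are ours). It builds the sets [i]_1 and [i]_2 and checks that an assignment is collision-free:

```python
def used_times(offset, delay, tau, P):
    """Times used by one message: [i]_1 in the first period, [i]_2 in the second."""
    first = {(offset + t) % P for t in range(tau)}
    second = {(offset + delay + t) % P for t in range(tau)}
    return first, second

def is_assignment(offsets, delays, tau, P):
    """Check that no pair of messages collides in either period."""
    first_used, second_used = set(), set()
    for o, d in zip(offsets, delays):
        f, s = used_times(o, d, tau, P)
        if f & first_used or s & second_used:
            return False  # collision with an already placed message
        first_used |= f
        second_used |= s
    return True
```

For instance, with τ = 2, P = 8 and delays (0, 3), the offsets (0, 2) form an assignment, while (0, 1) collide in the first period.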
If A has domain S and i ∉ S, we define the extension of A to the message i by the offset o, denoted by A[i → o], as equal to A on S and A[i → o](i) = o.

All presented algorithms build an assignment incrementally, by growing the size of a partial assignment. Moreover, the algorithms of this section are greedy: once an offset is chosen for a message, it is never changed. In the rest of the paper, we sometimes compare the relative positions of messages, but one should remember that the time is periodic and these are relative positions on a circle. Moreover, when it is unimportant and can hinder comprehension, we may omit to write mod P in some definitions and computations.

Consider some partial assignment A; the message i uses all times from A(i) to A(i) + τ − 1 in the first period. If a message j is scheduled by A, with A(j) < A(i), then the last time it uses in the first period is A(j) + τ − 1, which must be less than A(i), which implies that A(j) ≤ A(i) − τ. Symmetrically, if A(j) > A(i), to avoid a collision between messages j and i, we must have A(j) ≥ A(i) + τ. Hence, message i forbids the interval ]A(i) − τ, A(i) + τ[ as offsets for messages still not scheduled, because of its use of time in the first period. The same reasoning shows that 2τ − 1 offsets are forbidden because of the use of time in the second period. Hence, if |S| messages are already scheduled, then at most |S|(4τ − 2) offsets are forbidden for any unscheduled message. It is an upper bound on the number of forbidden offsets, since the same offset can be forbidden twice, because of a message on the first and on the second period.

Let F_o(A) be the maximum number of forbidden offsets when extending A. Formally, assume A is defined over S and i ∉ S; F_o(A) is the maximum over all possible values of d_i of |{o ∈ [P] | A[i → o] has a collision}|. The previous paragraph shows that F_o(A) is always bounded by (4τ − 2)|S|.

Let First Fit be the following algorithm: for each unscheduled message (in the order they are given), it tests all offsets from 0 to P − 1 and chooses the first one which creates no collision. If F_o(A) < P, then whatever the delay of the route we want to extend A with, there is an offset to do so. Since F_o(A) ≤ (4τ − 2)|S| and |S| < n, First Fit (or any greedy algorithm) will always succeed when (4τ − 2)n ≤ P, that is, when the load nτ/P is less than 1/4. It turns out that First Fit always creates compact assignments (as defined in [4]), that is, a message is always next to another one in one of the two periods. Hence, we can prove a better bound on F_o(A), when A is built by First Fit, as stated in the following theorem.
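The First Fit procedure just described can be sketched in a few lines (an illustrative Python version, not the authors' C implementation; the brute-force offset scan makes it O(nP), as discussed later in the text):

```python
def first_fit(delays, tau, P):
    """Greedily assign to each message the first collision-free offset.

    Returns the list of offsets, or None if some message cannot be placed.
    """
    first_used, second_used = set(), set()
    offsets = []
    for d in delays:
        for o in range(P):
            f = {(o + t) % P for t in range(tau)}          # times in first period
            s = {(o + d + t) % P for t in range(tau)}      # times in second period
            if not (f & first_used) and not (s & second_used):
                offsets.append(o)
                first_used |= f
                second_used |= s
                break
        else:  # no offset worked for this message
            return None
    return offsets
```

For example, with τ = 2, P = 20 and delays (0, 5, 11), the load is 0.3 and First Fit returns the compact offsets [0, 2, 4].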
Theorem 1. First Fit solves pma positively on instances of load less than 1/3.

Proof. We show by induction on the size of S that F_o(A) ≤ |S|(3τ − 1) + τ − 1. For |S| = 1, it is clear since a single message forbids at most (3τ − 1) + τ − 1 = 4τ − 2 offsets. Assume now that F_o(A) ≤ |S|(3τ − 1) + τ − 1 and let i ∉ S be such that First Fit builds A[i → o] from A. By definition of First Fit, every offset between o − τ and o − 1 creates a collision, hence all these offsets are forbidden by A. The same offsets are also forbidden by the choice of o as offset for i, so among the at most 4τ − 2 offsets that i forbids, at least τ − 1 were already counted, and only 3τ − 1 new offsets are forbidden. Hence F_o(A[i → o]) ≤ F_o(A) + (3τ − 1), which proves the induction step. Since (n − 1)(3τ − 1) + τ − 1 < 3nτ < P when the load is less than 1/3, there is always a free offset for the next message, which proves the theorem.

The method of this section is described in [4] and it achieves the same bound on the load using a different method. It is recalled here to help understand several algorithms of the article. The idea is to restrict the possible offsets at which messages can be scheduled. It seems counter-intuitive, since it artificially decreases the number of available offsets to schedule new messages. However, it allows reducing the number of forbidden offsets for unscheduled messages. A meta-offset is an offset of value iτ, with i an integer from 0 to P/τ. We call
Meta Offset the greedy algorithm which works as First Fit, but considers only meta-offsets when scheduling messages. Let F_mo(A) be the maximal number of meta-offsets forbidden by A. By definition, two messages with different meta-offsets cannot collide in the first period. Hence, F_mo(A) can be bounded by 3|S| and we obtain the following theorem.

Theorem 2 (Proposition 3 of [4]). Meta Offset solves pma positively on instances of load less than 1/3.

Meta Offset is in O(nP/τ), while First Fit is in O(nP). However, it is not useful to consider every possible (meta-)offset at each step. By maintaining a list of the positions of scheduled messages in the first and second period, both algorithms can be implemented in O(n).

We present in this section a family of greedy algorithms which solve pma positively for larger loads. We try to combine the good properties of the two previous algorithms: the compactness of the assignments produced by
First Fit and the absence of collisions in the first period of Meta Offset. The idea is to schedule several messages at once, using meta-offsets, to maximize the compactness of the obtained solution. We first describe the algorithm which schedules pairs of messages and then explain quickly how to extend it to tuples of messages.

From now on, we use Lemma 3 to assume P = mτ. This hypothesis makes the analysis of algorithms based on meta-offsets simpler and tighter. The load increases from λ = nτ/P to at most λ(1 + 1/m): the difference is at most 1/m ≤ 1/n, thus very small for most instances. The transformation of Lemma 3 does not give a bijection between assignments of both instances but only an injection, which is enough for our purpose.

Lemma 3. Let I be an instance of pma with n messages of size τ, period P and m = ⌊P/τ⌋. There is an instance J with n messages of size τ′ and period P′ = mτ′ such that any assignment of J can be transformed into an assignment of I in polynomial time.

Proof. Fig. 3 illustrates the reductions we define in this proof on a small instance. Let P = mτ + r with 0 ≤ r < τ. We define the instance J as follows: P′ = mP, d′_i = m·d_i and τ′ = mτ + r. With this choice, we have P′ = m(mτ + r) = mτ′. Consider an assignment A′ of the instance J. If we let τ′′ = mτ, then A′ is also an assignment for I′′ = (P′, τ′′, (d′_0, . . . , d′_{n−1})). Indeed, the size of each message is smaller, thus the intervals of time used in the first and second period begin at the same positions but are shorter, which cannot create collisions. We then use a compactification procedure on A′ seen as an assignment of I′′, with sizes of messages multiples of m (see Th. 4 of [4] for a similar compactification). W.l.o.g., the first message is positioned at offset zero. The first time it uses in the second period is a multiple of m, since its delay is by construction a multiple of m. Then, all other messages are translated to the left, by subtracting increasing values from their offsets, until there is a collision. It guarantees that some message j is in contact with the first one in the first or second period. It implies that either A′(j) or A′(j) + d′_j mod P′ is a multiple of m, and since d′_j is a multiple of m, then both A′(j) and A′(j) + d′_j mod P′ are multiples of m. This procedure can be repeated until we get an assignment A′′ of I′′ such that all positions of messages in the first and second period are multiples of m. Finally, we define A as A(i) = A′′(i)/m and we obtain an assignment of I.

We are interested in the remainder modulo τ of the delays; let d_i = d′_i τ + r_i be the Euclidean division of d_i by τ. We assume, from now on, that messages are sorted by increasing r_i. A compact pair, as shown in Fig. 4, is a pair of messages (i, j) with i < j that can be scheduled using meta-offsets such that A(i) + (d′_i + 1)τ = A(j) + d′_j τ, i.e. j is positioned less than τ units of time after i in the second period. The gap between i and j is defined as g = d′_i + 1 − d′_j mod m; it is the distance in meta-offsets between i and j in the first period. By definition, we can make a compact pair out of i and j if and only if their gap is not zero.

Lemma 4.
Given three messages, two of them form a compact pair.

Figure 3: Transformation from A′ to A′′ to A (message size reduction, compactification, then division of all parameters by m), illustrated on I = (P, τ, (d_1, d_2)) with P = 5, m = 2, r = 1, τ = 2, d_1 = 1, d_2 = 6, giving I′ = (P′, τ′, (d′_1, d′_2)) with P′ = 10, τ′ = 5, d′_1 = 2, d′_2 = 12, and I′′ with τ′′ = 4

Figure 4: A compact pair scheduled using meta-offsets, with d′_1 = 2, d′_2 = 0 and gap 3

Proof. If the first two messages, or the first and the third message, form a compact pair, then we are done. If not, then by definition d′_2 = 1 + d′_1 and d′_3 = 1 + d′_1. Hence, messages 2 and 3 have the same quotient d′ and form a compact pair of gap 1.

Let Compact Pairs be the following greedy algorithm: from the messages in order of increasing r_i, a sequence of compact pairs is formed using Lemma 4. In a first phase, the compact pairs are scheduled using meta-offsets while possible; in a second phase, the remaining messages are scheduled as in Meta Offset. The analysis of
Compact Pairs relies on the evaluation of the number of forbidden meta-offsets. In the first phase of Compact Pairs, one should evaluate the number of forbidden meta-offsets when scheduling a compact pair, which we denote by F_mo^2(A). In the second phase, we need to evaluate F_mo^1(A), the number of meta-offsets forbidden when scheduling a single message. When scheduling a message in the second phase, a scheduled compact pair only forbids three meta-offsets in the second period. If the messages of a pair were scheduled independently, they would forbid four meta-offsets, which explains the improvement brought by Compact Pairs. We first state a simple lemma, whose proof can be read from Fig. 5, which allows bounding F_mo^2(A).

Lemma 5. A compact pair already scheduled by Compact Pairs forbids at most four meta-offsets in the second period to another compact pair when scheduled by Compact Pairs.

Figure 5: Meta-offsets forbidden by a scheduled compact pair (in blue) when scheduling another compact pair (in red)
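The gap computation and the constructive argument of Lemma 4 are simple enough to spell out (an illustrative Python sketch under our reading of the definitions; `dq` stands for the quotient d′ of the Euclidean division of the delay by τ, and P = mτ):

```python
def gap(dq_i, dq_j, m):
    """Gap of the pair (i, j): distance in meta-offsets in the first period."""
    return (dq_i + 1 - dq_j) % m

def find_compact_pair(dqs, m):
    """Among any three messages (given by their quotients d'), two form a
    compact pair (Lemma 4). Returns the indices of such a pair."""
    if gap(dqs[0], dqs[1], m) != 0:
        return (0, 1)
    if gap(dqs[0], dqs[2], m) != 0:
        return (0, 2)
    # Otherwise d'_2 = d'_3 = d'_1 + 1 (mod m): messages 2 and 3 have gap 1.
    return (1, 2)
```

For instance, with m = 5 and quotients (3, 4, 4), both pairs involving the first message have gap zero, so the last two messages form a compact pair of gap 1.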
Theorem 6. Compact Pairs solves pma positively on instances of load less than 3/8.

Proof. Let n_2 be the number of compact pairs scheduled in the first phase. When scheduling a new pair, the positions of the 2n_2 messages in the first period forbid 4n_2 meta-offsets for a compact pair. Indeed, each scheduled message can collide with each of the two messages which form the compact pair. In the second period, we can use Lemma 5 to bound the number of forbidden meta-offsets by 4n_2. Hence, we have established that during the first phase, the partial solution A satisfies F_mo^2(A) ≤ 8n_2. The first phase continues while there are available meta-offsets for compact pairs, which is guaranteed while F_mo^2(A) < m, that is while n_2 ≤ m/8. Hence, we assume that n_2 = m/8. Let n_1 be the number of messages scheduled in the second phase to build the partial assignment A; we have F_mo^1(A) ≤ 3n_1 + 5n_2. Compact Pairs can always schedule messages when F_mo^1(A) is less than m, which is implied by 3n_1 + 5n_2 ≤ m. Solving this equation, we obtain that n_1 ≥ m/8, thus the number of messages scheduled is at least 2n_2 + n_1 ≥ 3m/8. Assuming there are exactly 3m/8 messages to schedule, then m/4 messages are scheduled as compact pairs. It is two thirds of the 3m/8 messages, hence Lemma 4 guarantees the existence of enough compact pairs. Therefore, an assignment is always produced when the load is less than or equal to 3/8.

Compact Pairs can be improved by forming compact tuples instead of compact pairs. A compact k-tuple is a sequence of messages i_1 < · · · < i_k (with r_{i_1}, . . . , r_{i_k} increasing), for which meta-offsets can be chosen so that there is no collision, the messages in the second period are in order i_1, . . . , i_k, and for all l, A(i_l) + (d′_{i_l} + 1)τ = A(i_{l+1}) + d′_{i_{l+1}} τ. The algorithm Compact k-tuples works by scheduling compact k-tuples using meta-offsets while possible, then scheduling compact (k − 1)-tuples, and so on down to k = 1.

Lemma 7.
Given k + k(k − 1)(2k − 1)/6 messages, k of them always form a compact k-tuple and we can find them in polynomial time.

Proof. We prove the property by induction on k. We have already proved it for k = 2 in Lemma 4. Now assume that we have found C, a compact (k − 1)-tuple, among the first (k − 1) + (k − 1)(k − 2)(2k − 3)/6 messages, and consider the next (k − 1)² + 1 messages. If k of them have the same delay modulo τ, then they form a compact k-tuple and we are done. Otherwise, there are at least k different values modulo τ in those (k − 1)² + 1 messages. Each element of the compact (k − 1)-tuple forbids at most one value modulo τ for a new k-th element of the tuple. By the pigeonhole principle, one of the k messages with distinct delays modulo τ can be used to extend C. We have built a compact k-tuple from at most (k − 1) + (k − 1)(k − 2)(2k − 3)/6 + (k − 1)² + 1 messages, which is equal to k + k(k − 1)(2k − 1)/6.

Theorem 8. Compact 8-tuples always solves pma positively on instances of load less than 2/5, for instances with n ≥ 220.

Proof. We need the following fact, which generalizes Lemma 5: a k-tuple forbids k + j + 1 meta-offsets in the second period when scheduling a j-tuple. It enables us to compute a lower bound on the number of scheduled i-tuples, for i from k down to 1, by bounding F_mo^i(A), the number of forbidden meta-offsets when placing an i-tuple in the algorithm. If we denote by n_i the number of compact i-tuples scheduled by the algorithm, we have the following equation:

F_mo^i(A) ≤ Σ_{j=i}^{k} n_j (j·i + j + i + 1).

The equation for n_1 is slightly better:

F_mo^1(A) ≤ Σ_{j=1}^{k} n_j (2j + 1).

A bound on n_i can be computed, using the fact that A can be extended while F_mo^i(A) < m. Lemma 7 ensures there are enough compact i-tuples when the number of yet unscheduled messages is larger than i + i(i − 1)(2i − 1)/6. A numerical computation of the n_i's shows that Compact 8-tuples always finds an assignment when the load is less than 4/10 and n ≥ 220, for k = 8. Taking arbitrarily large k and using refined bounds on F_mo^i(A) is not enough to get an algorithm working for a load of 41/100 (and it only works from larger n). The code computing the n_i can be found on one author's website (https://yann-strozecki.github.io/).

To make Compact 8-tuples work, there must be at least 220 messages, to produce enough compact 8-tuples in the first phase. It is not a strong restriction for two reasons. First, the bound of Lemma 7 can be improved, using a smarter polynomial time algorithm to find compact tuples, which better takes into account repetitions of values and computes the compact tuples in both directions. Second, on random instances, the probability that k messages do not form a compact k-tuple is low, and we can just build the tuples greedily. Therefore, for most instances, forming compact k-tuples is not a problem and in practice Compact 8-tuples works even for small n.

In this section, the performance on random instances of the algorithms presented in Sec. 3 is experimentally characterized. The implementation in C of these algorithms can be found on one author's website (https://yann-strozecki.github.io/). We experiment with several periods and message sizes. For each set of parameters, we try every possible load by changing the number of messages, and give the success rate of each algorithm. The success rate is measured on 10000 instances of pma, generated by drawing uniformly and independently the delays of each message in [P].

We consider the following algorithms:
• First Fit
• Meta Offset
• Compact Pairs
• Compact Fit
• Greedy Uniform, the algorithm introduced and analyzed in Sec. 5, used for arbitrary τ
• Exact Resolution, using an algorithm from [4]
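The experimental protocol can be mimicked in a few lines (an illustrative Python sketch, not the authors' C benchmark; the trial count is kept small here, and First Fit stands in for any of the tested algorithms):

```python
import random

def first_fit_succeeds(delays, tau, P):
    """Greedy First Fit; returns True iff all messages get an offset."""
    first_used, second_used = set(), set()
    for d in delays:
        for o in range(P):
            f = {(o + t) % P for t in range(tau)}
            s = {(o + d + t) % P for t in range(tau)}
            if not (f & first_used or s & second_used):
                first_used |= f
                second_used |= s
                break
        else:
            return False
    return True

def success_rate(n, tau, P, trials=200, seed=0):
    """Fraction of random instances (delays uniform in [P]) solved positively."""
    rng = random.Random(seed)
    wins = sum(first_fit_succeeds([rng.randrange(P) for _ in range(n)], tau, P)
               for _ in range(trials))
    return wins / trials
```

For a load safely below the worst-case bound, e.g. n = 5, τ = 2, P = 40 (load 0.25), the success rate is guaranteed to be 100% whatever the drawn delays.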
Figure 6: Execution of Compact Fit, creating two compact pairs, with P = 12 and τ = 2

The only algorithm we have yet to describe is Compact Fit. The idea is, as for Compact Pairs, to combine the absence of collisions in the first period of Meta Offset and the compactness of assignments given by First Fit. The messages are ordered by increasing remainder of delay modulo τ, and each message is scheduled so that it extends an already scheduled compact tuple. In other words, it is scheduled using meta-offsets, so that using the meta-offset one less would create a collision in the second period. If it is not possible to schedule the message in that way, the first possible meta-offset is chosen. This algorithm is designed to work well on random instances. Indeed, it is easy to evaluate the average size of the created compact tuples and, from that, to prove that Compact Fit works with high probability when the load is strictly less than 1/2. Fig. 6 shows how Compact Fit builds an assignment from a given instance. The messages are ordered by increasing remainder of delay modulo τ. A compact pair is built with messages 0 and 1. Message 2 cannot increase the size of this compact pair, so it creates a new tuple, completed by message 3.

On a regular laptop, all algorithms terminate in less than a second when solving 10000 instances with 100 messages, except the exact resolution, whose complexity is exponential in the number of messages (but polynomial in the rest of the parameters). Hence, the exact value of the success rate given by the exact resolution is only available in the experiment with at most 10 messages (the algorithm cannot compute a solution in less than an hour for twenty messages and high load). Note that while First Fit, Compact Pairs, Meta Offset and Compact Fit all run in almost the same time, Greedy Uniform takes about three times longer than the other algorithms on instances with 100 messages. It is expected, since it must find all available offsets at each step instead of one.
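From its description (full analysis in Sec. 5), Greedy Uniform at each step computes all collision-free offsets for the next message and picks one uniformly at random, which explains the roughly threefold running time. A hypothetical Python sketch of that behavior (our reading, not the authors' code):

```python
import random

def greedy_uniform(delays, tau, P, rng=None):
    """Schedule each message at an offset drawn uniformly among all
    collision-free offsets; return None if some message has none."""
    rng = rng or random.Random(0)
    first_used, second_used = set(), set()
    offsets = []
    for d in delays:
        available = []
        for o in range(P):
            f = {(o + t) % P for t in range(tau)}
            s = {(o + d + t) % P for t in range(tau)}
            if not (f & first_used or s & second_used):
                available.append(o)
        if not available:
            return None
        o = rng.choice(available)
        offsets.append(o)
        first_used |= {(o + t) % P for t in range(tau)}
        second_used |= {(o + d + t) % P for t in range(tau)}
    return offsets
```

Like any greedy algorithm, it cannot fail while (4τ − 2)|S| < P, whatever random choices it makes.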
Figure 7: Success rates of all algorithms for increasing loads, τ = 1000, P = 100,000

Figure 8: Success rates of all algorithms for increasing loads, τ = 10, P = 1,000

Figure 9: Success rates of all algorithms for increasing loads, τ = 1000, P = 10,000

Figure 10: Same parameters as in Fig. 7, delays uniformly drawn in [τ]

For all sets of parameters, the algorithms have the same relative performances. Meta Offset and
Greedy Uniform perform the worst and have almost equal success rates. Remark that they have a 100% success rate for loads less than 1/2, while it is easy to build an instance of pma of load 1/3 + ε which makes them fail. The difference between the worst case analysis and the average case analysis is explained for Greedy Uniform, when τ = 1, in Sec. 5. First Fit performs better than Meta Offset, while they have the same worst case. Compact Pairs, which is the best theoretically, also performs well in the experiments, always finding assignments for loads well above its worst-case bound. Compact Fit, which is similar in spirit to Compact Pairs but is designed to have a good success rate on random instances, is indeed better than Compact Pairs when there are enough messages.

As demonstrated by Fig. 7 and Fig. 8, the size of the messages has little impact on the success rate of the algorithms, when the number of messages is constant. Comparing Fig. 9 and Fig. 7 shows that for more messages, the transition from a success rate of 100% to a success rate of 0% is faster. Finally, the results of Exact Resolution in Fig. 9 show that the greedy algorithms are far from always finding a solution when it exists. Moreover, we have found an instance of load less than one without any assignment, hence it is not true that pma can always be solved positively.

We also investigate the behavior of the algorithms when the delays of the messages are drawn in [τ], in Fig. 10. The difference from the case of large delays is that Compact Pairs and Compact Fit are extremely efficient: they always find a solution for 99 messages. It is expected, since all d′_i are equal in these settings and both algorithms will build a 99-compact tuple, and thus can only fail for load 1.

In this section, we explain how we can trade load or buffering in the network to reduce the size of messages down to τ = 1. This further justifies the interest of Sec. 5, where specific algorithms for τ = 1 are given. We describe here a reduction from an instance of pma to another one with the same period and number of messages, but where the size of the messages is doubled. This instance is equivalent to an instance with τ = 1, by dividing everything by the message size. Thus we can always assume that τ = 1, if we are willing to double the load.

Theorem 9.
Let I be an instance of pma with n messages and load λ . There is an instance J with n messages of size and load λ such that an assignment of J can be transformed into anassignment of I in polynomial time.Proof. From I = ( P, τ, ( d , . . . , d n − )), we build I (cid:48) = ( P, τ, ( d (cid:48) , . . . , d (cid:48) n − )), where d (cid:48) i = d i − ( d i mod 2 τ ). The instance I (cid:48) has a load twice as large as I . On the other hand, all its delays are multi-ples of 2 τ hence solving pma on I (cid:48) is equivalent to solving it on I (cid:48)(cid:48) = ( P/ τ, , ( d / τ, . . . , d n − / τ )),as already explained in the proof of Lemma 3.Let us prove that an assignment A (cid:48) of I (cid:48) can be transformed into an assignment A of I . Considerthe message i with offset A (cid:48) ( i ), it uses all times between A (cid:48) ( i ) and A (cid:48) ( i ) + 2 τ − A (cid:48) ( i ) + d i − ( d i mod 2 τ ) to A (cid:48) ( i ) + 2 τ − d i − ( d i mod 2 τ ) in the secondperiod. If d i mod 2 τ < τ , we set A ( i ) = A (cid:48) ( i ), and the message i of I is scheduled “inside” themessage i of I (cid:48) , see Fig. 11. If τ ≤ d i mod 2 τ < τ , then we set A ( i ) = A (cid:48) ( i ) − τ . There is nocollision in the assignment A , since all messages in the second period use times which are used bythe same message in A (cid:48) . In the first period, the messages scheduled by A use either the first halfof the same message in A (cid:48) or the position τ before, which is either free in A (cid:48) or the second half ofthe times used by another message in A (cid:48) and thus not used in A .12 irst periodSecond period A (0) = A ′ (0) A (1) = A ′ (1) − τd d − τd ′ d ′ Message in I ′ Message in I Figure 11: Building I from I (cid:48) as explained explained in Th. 9Remark that combining Greedy Random and Th. 9 allows to solve pma on random instances,with probability one when the number of routes goes to infinity and the load is strictly less than1 /
2. This explains why we have not presented nor analyzed in details an algorithm designed forarbitrary τ on random instances, since any greedy algorithm, relying on optimizing F o ( A ), cannotguarantee anything for load larger than 1 /
2. However, in Sec. 3.4, we present Compact Fit, asimple greedy algorithm which exhibits good performance on random instances.
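The reduction of Th. 9, and the way an assignment of the doubled instance I′ is lifted back to the original instance I, can be sketched as follows (a toy sketch with our own helper names; the brute-force search stands in for any algorithm solving the doubled instance):

```python
# Sketch of the reduction of Th. 9 (names are ours): round the delays
# down to multiples of 2*tau, solve the doubled instance I', then lift
# the assignment back to the original instance I.
from itertools import product

def times_used(offset, delay, size, P):
    """Times used by one message: [o, o+size) in the first period and
    [o+delay, o+delay+size) in the second period, both modulo P."""
    first = {(offset + t) % P for t in range(size)}
    second = {(offset + delay + t) % P for t in range(size)}
    return first, second

def is_valid(offsets, delays, size, P):
    """True if no two messages collide in either period."""
    first, second = set(), set()
    for o, d in zip(offsets, delays):
        f, s = times_used(o, d, size, P)
        if first & f or second & s:
            return False
        first |= f
        second |= s
    return True

def lift(P, tau, delays, offsets_doubled):
    """Turn an assignment of I' (size 2*tau, rounded delays) into an
    assignment of I (size tau): keep the offset when d mod 2*tau < tau,
    shift it by -tau otherwise (Th. 9, Fig. 11)."""
    return [o if d % (2 * tau) < tau else (o - tau) % P
            for o, d in zip(offsets_doubled, delays)]

# Toy instance (numbers are ours). Any solver of I' would do; we use
# brute force since the instance is tiny.
P, tau = 12, 2
delays = [1, 5, 10]
rounded = [d - d % (2 * tau) for d in delays]
for cand in product(range(P), repeat=len(delays)):
    if is_valid(cand, rounded, 2 * tau, P):
        assert is_valid(lift(P, tau, delays, cand), delays, tau, P)
        break
```

As in the proof, the lifted messages stay inside (or half a slot before) the slots of I′, so validity of the doubled assignment carries over.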
The problem pma is a simplified version of the practical problem we address, allowing a single degree of freedom for each message: its offset. If we relax it slightly, to be more similar to what is studied in [3], we allow buffering a message i during a time b between the two contention points, which translates here into changing d_i into d_i + b. The solutions obtained for such a modified instance of pma are worse, since the buffering adds latency to the messages. We now describe how we can trade added latency for the size of the messages, knowing that having smaller messages helps to schedule instances with higher load.

The idea is to buffer all messages so that their d_i have the same remainder modulo τ. It costs at most τ − 1 of added latency per message, to be compared with P, the only value for which we are guaranteed to find an assignment whatever the instance. When all delays are changed so that d_i is a multiple of τ, we have an easy reduction to the case of τ = 1, by dividing all values by τ, as explained in the proof of Lemma 3. We can do the same kind of transformation by buffering all messages so that d_i is a multiple of τ/k. The cost in terms of latency is then at most τ/k − 1, and the instance obtained is equivalent to an instance with messages of size k.

For small sizes of messages, it is easy to get better algorithms for pma, in particular for τ = 1 as we have shown in Sec. 5. Here we show how to adapt Compact Pairs to the case of τ = 2, to get an algorithm working with higher load.

Theorem 10.
Compact Pairs on instances with τ = 2 always solves pma positively on instances of load less than … .

Proof. We assume w.l.o.g. that there are fewer messages with even d_i than with odd d_i. We schedule compact pairs of messages with even d_i, then we schedule single messages with odd d_i. The worst case is when there is the same number of messages of the two types. In the first phase, we schedule n/2 messages in compact pairs; in the second phase, we schedule the n/2 remaining messages. Hence, both conditions are satisfied and we can always schedule the messages when n ≤ (4/…)m.

We may want to add less latency to the message using the longest route. A natural idea is to choose the message with the longest route as the reference remainder, by subtracting its remainder from every delay. As a consequence, this message needs zero buffering. However, the message with the second longest route may have a remainder of τ − 1, thus the worst case increase of latency is still τ − 1. We now explain how to choose the reference remainder while optimizing the average latency. The only degree of freedom in the presented reduction is the choice of the reference remainder, since all other delays are then modified to have the same remainder. Let us define the total latency for a choice t of reference time, denoted by L(t), as the sum of the buffering times used for the messages when t has been removed from their delay. If we sum L(t) from t = 0 to τ − 1, the contribution of each message is Σ_{i=0}^{τ−1} i = τ(τ−1)/2. Since there are n messages, the sum of L(t) over all t is nτ(τ−1)/2. There is at least one term of the sum less than its average, hence there is a t such that L(t) ≤ n(τ−1)/2, and the average added latency using this t as reference is less than (τ−1)/2.

When τ = 1 and the load is less than 1/2, any greedy algorithm solves pma positively, since F_o(A) ≤ (4τ − 2)|S| = 2|S|, where |S| is the number of scheduled messages. We give, in this section, a method which always finds an assignment for a load larger than 1/2. To go above 1/2, we rely on a notion of potential, defined below.

Definition 11.
The potential of a message of delay d, for a partial assignment A, is the number of integers i ∈ [P] such that i is used in the first period and i + d mod P is used in the second period.

The computation of the potential of a message of delay 3 is illustrated in Fig. 12. The potential of a message counts the configurations which reduce the number of forbidden offsets. Indeed, when i is used in the first period and i + d mod P is used in the second period, the same offset is forbidden twice for a message of delay d. Hence, the potential of a message is related to the number of possible offsets, as stated in the following lemma.

Lemma 12.
Given a partial assignment A of size s, and an unscheduled message i of potential v, the set {o | A(i → o) has no collision} is of size P − 2s + v.

Figure 12: A message of delay 3 has potential 2 in the represented assignment
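For τ = 1, the potential of Def. 11 and the offset count of Lemma 12 can be checked directly (a small sketch; function names are ours):

```python
# Potential of a message (Def. 11) for tau = 1, and the offset count of
# Lemma 12. Function names are ours.

def potential(d, first_used, second_used, P):
    """Number of times i used in the first period such that i + d
    (mod P) is used in the second period."""
    return sum(1 for i in first_used if (i + d) % P in second_used)

def free_offsets(d, first_used, second_used, P):
    """Offsets at which a message of delay d causes no collision."""
    return [o for o in range(P)
            if o not in first_used and (o + d) % P not in second_used]

# Lemma 12: with s scheduled messages and a message of potential v,
# exactly P - 2*s + v offsets remain collision-free.
P = 7
first_used, second_used = {0, 2, 5}, {3, 5, 6}  # trace of s = 3 messages
for d in range(P):
    v = potential(d, first_used, second_used, P)
    assert len(free_offsets(d, first_used, second_used, P)) == P - 2 * 3 + v
```

The identity holds because a message of delay d is blocked by the s first-period times and by the s offsets mapping into used second-period times, and the potential counts exactly the offsets blocked twice.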
Figure 13: The shaded position has potential 2 in this assignment

For our algorithm, we need a global measure of the quality of a partial assignment, which we try to increase when the algorithm fails to schedule new messages. We call this measure the potential of the assignment and denote it by Pot(A): it is the sum of the potentials of all messages in the instance.

Definition 13.
The potential of a position i, for a partial assignment A, is the number of messages of delay d such that i + d mod P is used by a route scheduled by A.

The potential of a position is illustrated in Fig. 13. Instead of decomposing the global potential as a sum over messages, it can be understood as a sum over positions, as stated in the next lemma.

Lemma 14. The sum of the potentials of all positions used in the first period by messages scheduled by A is equal to Pot(A).

By definition of the potential of a position, we obtain the following simple invariant.
Lemma 15.
The sum of the potentials of all positions, for a partial assignment with k scheduled messages, is nk.

As a consequence of this lemma, Pot(A) ≤ nk. Let us define a Swap operation, which guarantees to obtain at least half the maximal value of the potential. Let A be some partial assignment of size s and let i be an unscheduled message of delay d. Assume that i cannot be used to extend A. The Swap operation is the following: select a free position o in the first period, remove from A the message which uses the position o + d in the second period, and extend A by i with offset o. We denote this operation by Swap(i, o, A).

Lemma 16.
Let A be some partial assignment of size k and let i be an unscheduled message. If i cannot be used to extend A, then either Pot(A) ≥ kn/2 or there is an o such that Pot(Swap(i, o, A)) > Pot(A).

Proof. The positions in the first period can be partitioned into P_u, the positions used by some scheduled message, and P_f, the positions unused. Let V_f be the sum of the potentials of the positions in P_f and let V_u be the sum of the potentials of the positions in P_u. By Lemma 15, since P_f and P_u partition the positions, we have V_f + V_u = kn. Moreover, by Lemma 14, Pot(A) = V_u, hence V_f + Pot(A) = kn.

By hypothesis, i cannot be scheduled, hence, for all p ∈ P_f, p + d_i is used in the second period. Let us define the function F which associates to p ∈ P_f the position A(j) such that there is a scheduled route j which uses p + d in the second period, that is, A(j) + d_j = p + d mod P. The function F is an injection from P_f to P_u. Remark now that, if we compare Swap(i, p, A) to A, the same positions are used in the second period. Hence, the potential of each position stays the same after the swap. As a consequence, the operation Swap(i, p, A) adds to Pot(A) the potential of the position p and removes the potential of the position F(p).

Assume now, to prove our lemma, that for all p, Pot(Swap(i, p, A)) ≤ Pot(A). It implies that for all p, the potential of p is at most the potential of F(p). Since F is an injection from P_f to P_u, we have V_f ≤ V_u = Pot(A). Since V_f + Pot(A) = kn, we have Pot(A) ≥ kn/2.

We now describe the algorithm Swap and Move: messages are scheduled while possible by First Fit, and then the Swap operation is applied while it increases the potential. When the potential is maximal, Swap and Move schedules a new message by moving at most two scheduled messages to other available offsets. If it fails to do so, Swap and Move stops; otherwise the whole procedure is repeated. We analyze Swap and Move in the following theorem.
Theorem 17.
Swap and Move solves pma positively, in polynomial time, on instances with τ = 1 and load less than 1/2 + (√5/2 − 1) = (√5 − 1)/2 ≈ 0.618.

Proof. We determine for which value of the load Swap and Move always works. We let n = (1/2 + ε)P be the number of messages; the load is then 1/2 + ε. We need only study the case in which n − 1 messages are scheduled by A and Swap and Move tries to schedule the last one, since the previous steps are similar but easier.

Let d be the delay of the unscheduled message. We consider the pairs of times (o, o + d) for o ∈ [P]. Since the message cannot be scheduled, there are three cases. First, o is unused in the first period but o + d is used in the second period. Since there are n − 1 scheduled messages, there are P − n + 1 such values of o. If a message using the time o + d in the second period can be scheduled elsewhere, so that the unscheduled message can use offset o, then Swap and Move succeeds. Otherwise, this message has no possible offset, which means that its potential is equal to 2(εP − 1). The second case, o used in the first period but o + d unused in the second period, is symmetric, and there are again P − n + 1 such values of o. Finally, we have the case where o is used in the first period and o + d is used in the second period. There are 2(εP − 1) such values of o. If the two messages using times o and o + d can be rescheduled so that offset o can be used for the unscheduled message, then Swap and Move succeeds. This is always possible when one of the two messages has a large enough potential: since the messages must be of potential more than 2(εP − 1) and at most n − 1, it is satisfied when the sum of the two potentials is at least 2(εP − 1) + n.

If we assume that Swap and Move was unable to schedule the last message by moving two scheduled messages, the previous analysis gives us a bound on twice Pot(A), since each scheduled message is counted for exactly two values of o:

2 Pot(A) ≤ 2(P − n + 1) · 2(εP − 1) + 2(εP − 1)(2(εP − 1) + n)
Pot(A) ≤ (εP − 1)(2P − n + 2εP).

By Lemma 16, we know that Pot(A) ≥ n(n − 1)/2, hence Swap and Move must succeed when

n(n − 1)/2 ≥ (εP − 1)(2P − n + 2εP).

By expanding and simplifying, we obtain a second degree inequation in ε, 1/4 − 2ε − ε² ≥ 0. Solving this inequation yields ε ≤ √5/2 − 1, that is, a load of at most (√5 − 1)/2.

Let us now prove that Swap and Move runs in polynomial time. All Swap operations strictly increase the potential. Moreover, when one or two messages are moved, the potential may decrease, but a message is added to the partial assignment. The potential is bounded by O(n²) and the move operations all together can only remove O(n²) from the potential, hence there are at most O(n²) Swap operations during Swap and Move. A Swap operation can be performed in time O(n), since for a given message all free offsets must be tested, and the potential is evaluated in time O(1) (by maintaining the potential of each position). This proves that Swap and Move is in O(n³).

Consider a partial assignment of size n′ = (1/2 + ε)P, and a message of delay d. If we consider the n′ used offsets o and the corresponding times o + d in the second period, then o and o + d are both used for at least n′ − (P − n′) = 2εP values of o. The potential of any message is thus larger than or equal to 2εP. When a message cannot be scheduled, its potential is less than or equal to 2εP, hence it is equal to 2εP.

Hence, the potential of any assignment of size n′ is at least 2εPn. As a consequence, the method of Lemma 16 guarantees a non-trivial potential only for 2εPn < nn′/2, that is, ε < 1/6. Any algorithm relying on the potential and the Swap operation cannot be guaranteed to work for load larger than 2/3 = 1/2 + 1/6. However, we may hope to improve on the analysis of Lemma 16, since it is not optimal: 2εP positions in P_u are not taken into account in the proof.

We conjecture that Swap and Move works for load up to 2/3. On random instances, we expect the potential to be higher than the stated bound and to be better spread over the messages, which would make Swap and Move work for larger loads, as is indeed observed in experiments (see Appendix 5.3).
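The structure of the algorithm can be sketched as follows (a simplified sketch with our own data structures; unlike the full algorithm, it does not chain Swaps guided by the potential and moves only one blocking message per Swap, so it is weaker than what Th. 17 guarantees):

```python
# Simplified sketch of Swap and Move for tau = 1 (structure is ours).
# First Fit schedules messages while possible; when a message is stuck,
# we try one Swap: free an offset o by evicting the message whose
# second-period time blocks o, and Move the evicted message elsewhere.

def swap_and_move_sketch(delays, P):
    A = {}                       # message index -> offset
    first, second = {}, {}       # used time -> message index

    def fits(i, o):
        return o not in first and (o + delays[i]) % P not in second

    def place(i, o):
        A[i] = o
        first[o] = i
        second[(o + delays[i]) % P] = i

    def unplace(i):
        o = A.pop(i)
        del first[o]
        del second[(o + delays[i]) % P]

    for i in range(len(delays)):
        o = next((o for o in range(P) if fits(i, o)), None)
        if o is not None:
            place(i, o)          # First Fit step
            continue
        for o in range(P):       # Swap step on each free offset o
            if o in first:
                continue
            j = second[(o + delays[i]) % P]   # message blocking offset o
            old = A[j]
            unplace(j)
            place(i, o)
            s = next((s for s in range(P) if fits(j, s)), None)
            if s is not None:
                place(j, s)      # Move: the evicted message fits elsewhere
                break
            unplace(i)           # undo and try the next free offset
            place(j, old)
        else:
            return None          # this simplified sketch gives up here
    return A
```

On the instance (P = 4, delays 0, 0, 2, 2), First Fit alone gets stuck on the third message, and a single Swap followed by a Move completes the assignment.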
We would like to better understand the behavior of our algorithms on instances drawn uniformly at random. To this aim, we analyze the algorithm Greedy Uniform, defined as follows: for each message, in the order of the input, choose uniformly at random one of the offsets which do not create a collision with the current partial assignment.

We analyze Greedy Uniform over random instances: all messages have their delays drawn independently and uniformly in [m]. We compute the probability of success of Greedy Uniform over all random choices by the algorithm and all possible instances. It turns out that this probability, for a fixed load strictly less than one, goes to one when m grows. For a given partial assignment, we are only interested in its trace: the sets of times which are used in the first and second period. Hence, if n messages are scheduled in a period of size m, the trace of an assignment is a pair of subsets of [m] of size n. We now show that these traces are produced uniformly by Greedy Uniform.

Theorem 18.
The distribution of traces of assignments produced by Greedy Uniform when it succeeds, from instances drawn uniformly at random, is also uniform.

Proof. The proof is by induction on n, the number of messages. It is clear for n = 1, since the delay of the first message is uniformly drawn and all offsets can be used. Assume now the theorem true for some n > 1: Greedy Uniform, by induction hypothesis, has produced uniform traces from the first n messages. Hence, we should prove that, if we draw the delay of the (n + 1)th message randomly, extending the trace by a random possible offset produces a uniform distribution on the traces of size n + 1.

If we draw an offset uniformly at random (among all m offsets) and then either extend the trace by scheduling the last message at this offset or fail, the distribution over the traces of size n + 1 is the same as the one produced by Greedy Uniform. Indeed, all offsets which can be used to extend the trace have the same probability to be drawn. Since all delays are drawn independently, we can assume that, given a trace, we first draw an offset uniformly, then draw uniformly the delay of the added message, and add it to the trace if it is possible. This proves that all extensions of a given trace are equiprobable. Thus, all traces of size n + 1 are equiprobable, since each of them can be formed from (n + 1)² traces of size n by removing one used time from the first period and one from the second period. This proves the induction and the theorem.

Since Greedy Uniform can be seen as a simple random process on traces by Th. 18, it is easy to analyze its probability of success.
Theorem 19.
The probability, over all instances with n messages and period m, that Greedy Uniform solves pma positively is

∏_{i=m/2}^{n−1} (1 − C(i, 2i − m) / C(m, i)),

where C(a, b) denotes the binomial coefficient.

Proof.
We evaluate Pr(m, n), the probability that Greedy Uniform fails at the nth step, assuming it has not failed before. It is independent of the delay of the nth message. Indeed, the operation which adds one to all times used in the second period is a bijection on the set of traces of size n − 1, and it amounts to decreasing the delay of the nth message by one. We can thus assume that the delay is zero.

Let S₁ be the set of times used in the first period by the n − 1 first messages and S₂ the set of times used in the second period. We can assume that S₁ is fixed, since all subsets of the first period are equiprobable and because S₂ is independent of S₁ (Th. 18). There is no possible offset for the nth message if and only if S₁ ∪ S₂ = [m]. It means that S₂ has been drawn such that it contains [m] \ S₁. By Th. 18, S₂ is uniformly distributed over all sets of size n − 1. Hence, the probability that [m] \ S₁ ⊆ S₂ is the probability to draw a set of size n − 1 containing m − n + 1 fixed elements. This proves that Pr(m, n) = C(n − 1, 2n − 2 − m) / C(m, n − 1).

From the previous expression, we can derive the probability of success of Greedy Uniform as a simple product of the probabilities of success 1 − Pr(m, i) at step i, for all i ≤ n, which proves the theorem.

If we fix the load λ = n/m, we can bound Pr(m, n) using Stirling's formula. We obtain, for some constant C, that Pr(m, n) ≤ C · f(λ)^m, where f(λ) = λ^(2λ) (2λ − 1)^(1−2λ). The derivative of f is strictly positive for 1/2 < λ < 1 and f(1) = 1, hence f(λ) < 1 for λ < 1. By a simple union bound, the probability that Greedy Uniform fails is bounded by Cλm f(λ)^m, whose limit is zero when m goes to infinity. This explains why Greedy Uniform is good in practice for large m.

In this section, the performance on random instances of the algorithms presented in Sec. 5 is experimentally characterized. The settings are as in Sec. 3.4, with τ = 1. The evaluated algorithms are:

• First Fit
• Greedy Uniform
• Greedy Potential, a greedy algorithm which leverages the notion of potential introduced for Swap: it schedules the messages in an arbitrary order, choosing the possible offset which maximizes the potential of the unscheduled messages
• Swap and Move
• Exact Resolution
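Greedy Uniform and the closed-form success probability of Th. 19 can be sketched as follows (function names are ours):

```python
# Greedy Uniform and the success probability of Th. 19 (names are ours).
import random
from math import comb, prod

def greedy_uniform(delays, m, rng=random):
    """Schedule each message at an offset chosen uniformly among the
    collision-free ones; return True on success, None on failure."""
    first, second = set(), set()
    for d in delays:
        options = [o for o in range(m)
                   if o not in first and (o + d) % m not in second]
        if not options:
            return None
        o = rng.choice(options)
        first.add(o)
        second.add((o + d) % m)
    return True

def success_probability(m, n):
    """Th. 19: with i messages already scheduled, the next one fails
    with probability C(i, 2i - m) / C(m, i) (zero when 2i < m)."""
    return prod(1 - comb(i, 2 * i - m) / comb(m, i)
                for i in range(n) if 2 * i >= m)
```

For instance, success_probability(4, 3) = 5/6, since only the third message can fail, and a load of at most one half (n ≤ m/2) always gives probability 1, in line with the greedy worst case bound.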
As in Sec. 3.4, the success rate on random instances is much better than the bound given by the worst case analysis. In the experiment presented in Fig. 14, all algorithms succeed on all instances when the load is less than 0.… . Greedy Uniform behaves exactly as proved in Th. 19, with a very small variance. The performances of Swap and Move and of its simpler variant Greedy Potential, which optimizes the potential in a greedy way, are much better than those of First Fit or Greedy Uniform. Amazingly, Swap and Move always finds an assignment when the load is less than 0.… . Swap and Move is extremely close to Exact Resolution, except for P = 10 and load 0.… .

Figure 14: Success rates of all algorithms for increasing loads, τ = 1 and P = 100

Figure 15: Success rates of all algorithms for increasing loads, τ = 1 and P = 10

Finally, we evaluate the computation times of the algorithms to understand whether they scale to large instances. We present the computation times in Fig. 16; we choose to consider instances of load 1, since they require the most computation time for a given size. The empirical complexity of an algorithm is evaluated by a linear regression on the function which associates, to log(n), the log of the computation time of the algorithm on n messages. First Fit, Greedy Uniform and
Swap and Move scale almost in the same way, with an empirical complexity slightly below O(n²), while Greedy Potential has an empirical complexity of O(n³).

Figure 16: Computation time (logarithmic scale) as a function of the number of messages, for all algorithms, on 10000 instances of load 1

The empirical complexity corresponds to the worst case complexity we have proved, except for Swap and Move, which is in O(n³) in the worst case. There are two explanations: most of the messages are scheduled by the fast First Fit subroutine, and most Swap operations improve the potential by more than 1, whereas the worst case analysis assumes an improvement of only 1.
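The empirical exponent can be estimated as described, by a least-squares fit of log(time) against log(n); in this sketch we time a toy quadratic loop of our own rather than the paper's algorithms:

```python
# Estimating an empirical complexity as in Fig. 16: time an algorithm
# for several sizes and fit a least-squares line to log(time) against
# log(n); the slope is the empirical exponent. We time a toy quadratic
# loop of our own instead of the paper's algorithms.
import time
from math import log

def measure(f, n):
    start = time.perf_counter()
    f(n)
    return time.perf_counter() - start

def empirical_exponent(f, sizes, repeats=3):
    xs = [log(n) for n in sizes]
    # best of a few runs per size, to reduce timing noise
    ys = [log(min(measure(f, n) for _ in range(repeats))) for n in sizes]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

def quadratic_work(n):
    s = 0
    for i in range(n):
        for j in range(n):
            s += i ^ j
    return s

# The fitted slope is typically close to 2 for this quadratic loop.
slope = empirical_exponent(quadratic_work, [250, 500, 1000, 2000])
```

Taking the minimum over a few repetitions per size makes the fit robust against interpreter and scheduling noise, which matters most for the smallest sizes.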
In this article, we have proved that there is always a solution to pma, and that it can be found in polynomial time, for arbitrary τ and load less than 0.4, and for τ = 1 and load less than 0.61. Moreover, the performance of the presented algorithms on average instances is shown to be excellent, empirically but also theoretically for Greedy Uniform. Hence, we can use the simple algorithms presented here to schedule C-RAN messages without buffering nor additional latency, if we are willing to use only half the bandwidth of the shared link.

Several questions on pma are still unresolved, in particular its NP-hardness and the problem of guaranteeing higher loads, both for arbitrary τ and on random instances. We could also consider more complex network topologies with several shared links. First Fit or Meta Offset can easily be transferred to this context, and we could also try to adapt Compact Pairs or Swap and Move. Finally, to model networks carrying several types of messages, different message sizes must be allowed, which would require designing an algorithm which does not use meta-offsets.
References

[1] Time-sensitive networking task group. Accessed: 2016-09-22.
[2] 3GPP. Stage 1 (Release 16).
[3] Dominique Barth, Maël Guiraud, Brice Leclerc, Olivier Marcé, and Yann Strozecki. Deterministic scheduling of periodic messages for cloud RAN, pages 405–410. IEEE, 2018.
[4] Dominique Barth, Maël Guiraud, and Yann Strozecki. Deterministic scheduling of periodic messages for cloud RAN. arXiv preprint arXiv:1801.07029, 2018.
[5] Dominique Barth, Maël Guiraud, and Yann Strozecki. Deterministic contention management for low latency cloud RAN over an optical ring. In ONDM 2019 - 23rd International Conference on Optical Network Design and Modeling (ONDM 2019), 2019.
[6] Federico Boccardi, Robert W. Heath, Angel Lozano, Thomas L. Marzetta, and Petar Popovski. Five disruptive technology directions for 5G. IEEE Communications Magazine, 52(2):74–80, 2014.
[7] Yannick Bouguen, Eric Hardouin, Alain Maloberti, and François-Xavier Wolff. LTE et les réseaux 4G. Editions Eyrolles, 2012.
[8] Aellison Cassimiro T. dos Santos, Ben Schneider, and Vivek Nigam. TSNsched: Automated schedule generation for time sensitive networking, pages 69–77. IEEE, 2019.
[9] Norman Finn and Pascal Thubert. Deterministic Networking Architecture. Internet-Draft draft-finn-detnet-architecture-08, Internet Engineering Task Force, 2016. Work in Progress. URL: https://tools.ietf.org/html/draft-finn-detnet-architecture-08.
[10] Claire Hanen and Alix Munier. Cyclic scheduling on parallel processors: an overview. Université de Paris-Sud, Centre d'Orsay, Laboratoire de Recherche en Informatique, 1993.
[11] W. Howe. Time-scheduled and time-reservation packet switching, March 17, 2005. US Patent App. 10/947,487.
[12] Jan Korst, Emile Aarts, Jan Karel Lenstra, and Jaap Wessels. Periodic multiprocessor scheduling. In PARLE'91 Parallel Architectures and Languages Europe, pages 166–178. Springer, 1991.
[13] B. Leclerc and O. Marcé. Transmission of coherent data flow within packet-switched network, June 15, 2016. EP Patent App. EP20,140,307,006.
[14] Eugene Levner, Vladimir Kats, David Alcaide López de Pablo, and T. C. Edwin Cheng. Complexity of cyclic scheduling problems: A state-of-the-art survey. Computers & Industrial Engineering, 59(2):352–361, 2010.
[15] Richard M. Lusby, Jesper Larsen, Matthias Ehrgott, and David Ryan. Railway track allocation: models and methods. OR Spectrum, 33(4):843–883, 2011.
[16] China Mobile. C-RAN: the road towards green RAN. White Paper, ver. 2, 2011.
[17] Naresh Ganesh Nayak, Frank Dürr, and Kurt Rothermel. Incremental flow scheduling and routing in time-sensitive software-defined networks. IEEE Transactions on Industrial Informatics, 14(5):2066–2075, 2017.
[18] Time-Sensitive Networking Task Group of IEEE 802.1. Time-sensitive networks for fronthaul. July 2016. IEEE P802.1/D0.4.
[19] Alex J. Orman and Chris N. Potts. On the complexity of coupled-task scheduling. Discrete Applied Mathematics, 72(1-2):141–154, 1997.
[20] Anna Pizzinat, Philippe Chanclou, Fabienne Saliou, and Thierno Diallo. Things you should know about fronthaul. Journal of Lightwave Technology, 33(5):1077–1083, 2015.
[21] Paolo Serafini and Walter Ukovich. A mathematical model for periodic scheduling problems. SIAM Journal on Discrete Mathematics, 2(4):550–581, 1989.
[22] Wilfried Steiner, Silviu S. Craciunas, and Ramon Serna Oliver. Traffic planning for time-sensitive communication. IEEE Communications Standards Magazine, 2(2):42–47, 2018.
[23] Z. Tayq, L. Anet Neto, B. Le Guyader, A. De Lannoy, M. Chouaref, C. Aupetit-Berthelemot, M. Nelamangala Anjanappa, S. Nguyen, K. Chowdhury, and P. Chanclou. Real time demonstration of the transport of ethernet fronthaul based on vRAN in optical access networks. In Optical Fiber Communications Conference and Exhibition (OFC), 2017, pages 1–3, 2017.
[24] Wenci Yu, Han Hoogeveen, and Jan Karel Lenstra. Minimizing makespan in a two-machine flow shop with delays and unit-time operations is NP-hard.