Fuzzy Simultaneous Congruences
Max A. Deppert∗ — Kiel University, Kiel, Germany — [email protected]
Klaus Jansen∗ — Kiel University, Kiel, Germany — [email protected]
Kim-Manuel Klein — Kiel University, Kiel, Germany — [email protected]
Abstract
We introduce a very natural generalization of the well-known problem of simultaneous congruences. Instead of searching for a positive integer s that is specified by n fixed remainders modulo integer divisors a_1, ..., a_n we consider remainder intervals R_1, ..., R_n such that s is feasible if and only if s is congruent to r_i modulo a_i for some remainder r_i in interval R_i for all i.

This problem is a special case of a 2-stage integer program with only two variables per constraint and is closely related to directed Diophantine approximation as well as the mixing set problem. We give a hardness result showing that the problem is NP-hard in general.

By investigating the case of harmonic divisors, i.e. a_{i+1}/a_i is an integer for all i < n, which was heavily studied for the mixing set problem as well, we also answer a recent algorithmic question from the field of real-time systems. We present an algorithm to decide the feasibility of an instance in time O(n^2) and we show that if it exists even the smallest feasible solution can be computed in strongly polynomial time O(n^3).

Keywords— Simultaneous congruences, Integer programming, Mixing Set, Real-time scheduling, Diophantine approximation
In the recent past there was a great interest in the so-called n-fold IPs [10, 17, 19] and 2-stage IPs [18, 20]. The matrix A of a 2-stage IP is constructed from blocks A^{(1)}, ..., A^{(n)} ∈ Z^{r×k} and B^{(1)}, ..., B^{(n)} ∈ Z^{r×t} as follows:

\[ A = \begin{pmatrix} A^{(1)} & B^{(1)} & & & \\ A^{(2)} & & B^{(2)} & & \\ \vdots & & & \ddots & \\ A^{(n)} & & & & B^{(n)} \end{pmatrix} \]

For an objective vector c ∈ Z^{k+nt}, a right-hand side b ∈ Z^{nr}, and bounds ℓ, u ∈ Z^{k+nt}_{≥0} the 2-stage IP is formulated as

\[ \max \{\, c^T x \mid Ax = b,\ \ell \le x \le u,\ x \in \mathbb{Z}^{k+nt} \,\}. \]

∗ Research supported by German Research Foundation (DFG) project JA 612/20-1.

A special case of a 2-stage IP is given by the problem Mixing Set [6, 7, 15] (with only two variables in each constraint) where especially r = k = t = 1 and A^{(1)} = · · · = A^{(n)}. Remark that 2-variable integer programming problems were extensively studied by various authors, e.g. [3, 12, 22]. The mixing set problem plays an important role for example in integer programming approaches for production planning [26]. Given vectors a, b ∈ Q^n one aims to compute

\[ \min \{\, f(s, x) \mid s + a_i x_i \ge b_i\ \forall i = 1, \dots, n,\ (s, x) \in \mathbb{Z}_{\ge 0} \times \mathbb{Z}^n \,\} \tag{1} \]

for some objective function f. Conforti et al. [8] pose the question whether the problem can be solved in polynomial time for linear functions f. Unless P = NP this was ruled out by Eisenbrand and Rothvoß [13] who proved that optimizing any linear function over a mixing set is NP-hard. However, the problem can be solved in polynomial time if a_i = 1 for all i [15, 24] or if the capacities a_i fulfil a harmonic property [29], i.e. a_{i+1}/a_i is an integer for all i < n. The case of harmonic capacities was intensively studied; see [8, 9] for simpler approaches.

More recently, real-time systems with harmonic tasks (the periods are pairwise integer multiples of each other) have received increased attention [5] and harmonic periods have also been considered before [2, 11, 27, 28]. Now a recent manuscript in the field of real-time systems by Nguyen et al. [25] gives rise to the study of a new problem.
They present an algorithm for the worst-case response time analysis of harmonic tasks with constrained release jitter running in polynomial time. The release jitter of a task is the maximum difference between the arrival times and the release times over all jobs of the task. Their algorithm uses heuristic components to solve an integer program that can be stated as a bounded version of the mixing set problem with additional upper bounds B_i as follows.

Bounded Mixing Set (BMS)
Given capacities a_1, ..., a_n ∈ Z and bounds b, B ∈ Z^n find (s, x) ∈ Z_{≥0} × Z^n such that

\[ b_i \le s + a_i x_i \le B_i \quad \forall i = 1, \dots, n. \]

In particular they depend on minimizing the value of s, which can be achieved in linear time in the case of Mixing Set; see Lemma 18 in the appendix for the short proof. While BMS may look artificial at first sight it is not; in fact, leading to a very natural generalization, it can be restated in the well-known form of simultaneous congruences.

Fuzzy Simultaneous Congruences (FSC)
Given divisors a_1, ..., a_n ∈ Z \ {0}, remainder intervals R_1, ..., R_n ⊆ Z, and an interval S ⊆ Z_{≥0} find a number s ∈ S such that

\[ \exists\, r_i \in R_i : s \equiv r_i \pmod{a_i} \quad \forall i = 1, \dots, n. \]

Obviously, this also generalizes the well-known problem of the Chinese Remainder Theorem. Here we give its generalized form (cf. [21]).
Theorem 1 (Generalized Chinese Remainder Theorem). Given divisors a_1, ..., a_n ∈ Z_{≥1} and remainders r_1, ..., r_n ∈ Z_{≥0} the system of n simultaneous congruences s ≡ r_i (mod a_i) admits a solution s ∈ Z if and only if r_i ≡ r_j (mod gcd(a_i, a_j)) for all i ≠ j.

Furthermore, Leung and Whitehead [23] showed that k-Simultaneous Congruences (k-SC) is NP-complete in the weak sense. Given divisors a_1, ..., a_n ∈ Z_{≥1} and remainders r_1, ..., r_n ∈ Z_{≥0} the task is to find a number s ∈ Z_{≥0} and a subset I ⊆ {1, ..., n} with |I| = k s.t. s ≡ r_i (mod a_i) for all i ∈ I. Later it was shown by Baruah et al. [4] that k-SC also is NP-complete in the strong sense.

Both problems BMS and FSC are interchangeable formulations of the same problem (see Section 2). Therefore, we will use them as synonyms and we especially assume formally that R_i = [b_i, B_i]. Interestingly and to the best of our knowledge, FSC/BMS was not considered before. However, the investigation of simultaneous congruences has always been of transdisciplinary interest connecting a variety of fields and applications, e.g. [1, 14, 16].

Our Contribution

(a) We show that BMS is NP-hard for general capacities a_i. For the proof we refer to the appendix. Compared to the mixing set problem this is a stronger hardness result as BMS by itself only asks for an arbitrary feasible solution. However, every mixing set problem may be solved by s = ‖b‖_∞, x = 0.

(b) In the case of harmonic capacities (i.e. a_{i+1}/a_i is an integer for all i < n), which was heavily studied for the mixing set problem as mentioned before, we give an algorithm exploiting a merge idea based on modular arithmetics on intervals to decide the feasibility problem of FSC in time O(n^2).
See Section 3.1 for the details.

(c) Furthermore, for a feasible instance of FSC with harmonic capacities we present a polynomial algorithm as well as a strongly polynomial algorithm to compute the smallest feasible solution to FSC in time O(min{n^2 log(a_n), n^3}) ≤ O(n^3). See Section 3.2 for the details.

(d) Our algorithm gives a strongly polynomial replacement for the heuristic component (which may fail to compute a solution) in the algorithm of Nguyen et al. [25]. However, we present an algorithm to solve the problem in linear time. See Section 4 for the details.

For the sake of readability we write X^{[α]} = (X mod α) for numbers X as well as X^{[α]} = {z mod α | z ∈ X} for sets X (of numbers) to denote the modular projection of some number or interval, respectively. Extending the usual notation we also write X ≡ Y (mod α) if X^{[α]} = Y^{[α]} for sets X, Y. Notice that on the one hand (X ∪ Y)^{[α]} = X^{[α]} ∪ Y^{[α]} but on the other hand be aware that (X ∩ Y)^{[α]} ≠ X^{[α]} ∩ Y^{[α]} in general (cf. Lemma 9). Figure 1 depicts the structure of v^{[α]} if v = [ℓ_v, u_v] is an interval in Z. Also we use the well-tried notation t + X = {t + z | z ∈ X} to express the translation of a set of numbers X by some number t. For a set of sets S we write ∪S to denote the union ∪_{S ∈ S} S. Furthermore, we identify constraints by their indices. So for i ≤ n we say that "b_i ≤ s + a_i x_i ≤ B_i" is constraint i.

Identity of BMS and FSC
In fact, BMS allows zero capacities while FSC cannot allow zero divisors since (mod 0) is undefined. However, consider a constraint i of BMS with a_i ≠ 0. Let b_i ≤ s + a_i x_i ≤ B_i be satisfied and set r_i = s + a_i x_i. Then r_i^{[a_i]} = s^{[a_i]} and r_i ∈ [b_i, B_i] = R_i. Vice-versa let r_i ∈ R_i s.t. r_i ≡ s (mod a_i). Then there is an x_i ∈ Z s.t. s + a_i x_i = r_i ∈ R_i = [b_i, B_i]. A constraint i with a_i = 0 simply requires that s ∈ R_i. Hence, if a_i = a_j = 0 for two constraints i ≠ j they can be replaced by one new constraint k defined by R_k = R_i ∩ R_j. Therefore, one may assume that there is at most one constraint i with a zero capacity a_i. As all our results can be lifted to the general case with low effort we will assume in terms of BMS that all capacities are non-zero and for FSC we take the equivalent assumption that S = Z_{≥0}.

Figure 1: The two possibilities for the modular projection of an interval
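To make the two cases of Figure 1 concrete, the following small Python sketch (our own illustrative code, not part of the paper; the function name is an assumption) computes the representation of v^{[α]} by at most two intervals:

```python
def interval_mod(lo, hi, alpha):
    """Return v^[alpha] for the interval v = [lo, hi] as a list of
    disjoint intervals inside [0, alpha), cf. Figure 1."""
    if hi - lo + 1 >= alpha:          # v covers every residue class
        return [(0, alpha - 1)]
    l, u = lo % alpha, hi % alpha
    if l <= u:                        # projection stays one interval
        return [(l, u)]
    return [(0, u), (l, alpha - 1)]   # projection wraps around alpha
```

For example, `interval_mod(3, 5, 4)` yields `[(0, 1), (3, 3)]`, matching {3, 4, 5} mod 4 = {0, 1, 3}, the wrap-around case of Figure 1.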
With our notation we may easily express the feasibility of a value s for a single constraint i as follows.

Observation 2.
A value s satisfies constraint i if and only if s^{[a_i]} ∈ R_i^{[a_i]}.

Proof. ∃ r_i ∈ R_i : r_i ≡ s (mod a_i) iff ∃ r_i ∈ R_i : r_i^{[a_i]} = s^{[a_i]} iff s^{[a_i]} ∈ R_i^{[a_i]}.

By simply swapping the signs of the x_i we may assume that a_i ≥ 1. We may also assume that the intervals are small in the sense that B_i − b_i + 1 < a_i holds for all i. Assume that B_i − b_i + 1 ≥ a_i for some i. Then b_i ≤ B_i − a_i + 1 and constraint i may always be solved by setting x_i = ⌈(b_i − s)/a_i⌉ which satisfies

\[ b_i \le s + a_i \underbrace{\Big\lceil \frac{b_i - s}{a_i} \Big\rceil}_{x_i} \le s + a_i \Big\lceil \frac{B_i - a_i + 1 - s}{a_i} \Big\rceil = s + a_i \Big\lfloor \frac{B_i - s}{a_i} \Big\rfloor \le B_i. \]

Hence, constraint i is redundant and may be omitted. As a direct consequence there can be at most one feasible value for each x_i for a given guess s. In fact, we can decide the feasibility of a guess s in time O(n) as for all constraints i and values x_i it holds b_i ≤ s + a_i x_i ≤ B_i if and only if ⌈(b_i − s)/a_i⌉ = x_i = ⌊(B_i − s)/a_i⌋. So a guess s is feasible if and only if ⌈(b_i − s)/a_i⌉ = ⌊(B_i − s)/a_i⌋ holds for all constraints i. By s_min we denote the smallest feasible solution s that satisfies all constraints.

Observation 3.
For feasible instances it holds that s_min < lcm(a_1, ..., a_n).

Proof. Let ϕ = lcm(a_1, ..., a_n). Remark that ϕ/a_i is integral for all i. Assume that (s, x) is a solution with s = s_min ≥ ϕ. Let t = s − ϕ and y_i = x_i + ϕ/a_i for all i. Then 0 ≤ t < s_min and t + a_i y_i = s + a_i x_i for all i. So (t, y) is a solution that contradicts the optimality.

Here we consider harmonic divisors in the sense that a_{i+1}/a_i is an integer for all i < n. We present an algorithm to decide the feasibility of an instance of FSC. Also we show how optimal solutions can be computed in (strongly) polynomial time. Both of these results are based on the fine-grained interconnection between modular arithmetics on sets and the harmonic property. For some intuition Figure 2 gives a perspective on s as an anchor for 1-dimensional lattices with basis a_i which have to "hit" the intervals R_i. For example, in the figure it holds that s + a_2 · (−1) = s − a_2 ∈ R_2, so the 1-dimensional lattice (s + a_2 z)_{z ∈ Z} hits interval R_2. Therefore, the choice of s satisfies constraint 2.

Figure 2: An instance with harmonic divisors 36, 18, 6, 3. The guess s is not feasible for constraints 3 and 5.

Figure 3: The intersection v^{[α]} ∩ w^{[α]} of the projections v^{[α]} and w^{[α]} within [0, α)

The idea for our first algorithm will be to decide the feasibility problem by iteratively computing modular projections from constraint i = n down to i = 1. In the following we will say that an interval w ⊆ Z represents a set M ⊆ Z (modulo α) if w^{[α]} = M^{[α]}. Also a set of intervals R represents a set M ⊆ Z (modulo α) if M^{[α]} = ∪_{w ∈ R} w^{[α]}. Given an integer α ≥ 1 and intervals v, w we depend on the structure of the intersection v^{[α]} ∩ w^{[α]} ⊆ [0, α). To express it let v = [ℓ_v, u_v], w = [ℓ_w, u_w] and we define the basic intervals

\[ \varphi_\alpha(v, w) = [\ell_v^{[\alpha]}, u_w^{[\alpha]}] \quad\text{and}\quad \psi_\alpha(v, w) = [\max\{\ell_v^{[\alpha]}, \ell_w^{[\alpha]}\},\ \alpha + \min\{u_v^{[\alpha]}, u_w^{[\alpha]}\}] \]

for all intervals v, w. Remark that ψ_α(w, v) = ψ_α(v, w) is always true.

Lemma 4.
Given an integer α ≥ 1 and two intervals v, w ⊆ Z it holds that

\[ v^{[\alpha]} \cap w^{[\alpha]} \in \big\{\, \emptyset,\ v^{[\alpha]},\ w^{[\alpha]},\ \psi_\alpha(v,w)^{[\alpha]},\ \varphi_\alpha(v,w),\ \varphi_\alpha(w,v), \]
\[ \varphi_\alpha(v,w) \mathbin{\dot\cup} \varphi_\alpha(w,v),\ \varphi_\alpha(v,w) \mathbin{\dot\cup} \psi_\alpha(v,w)^{[\alpha]},\ \varphi_\alpha(w,v) \mathbin{\dot\cup} \psi_\alpha(v,w)^{[\alpha]} \,\big\}. \]

The important intuition is that such a "modulo α intersection" can always be represented by at most two intervals. Remark that the sets in the second row are the only ones which are represented by 2 > 1 intervals.

Proof.
We do a case distinction (see Figure 3) as follows. We only look at the non-trivial case, i.e. v^{[α]} ∩ w^{[α]} ∉ {Ø, v^{[α]}, w^{[α]}}, which especially implies |v| < α and |w| < α.

We start with the case that neither v^{[α]} nor w^{[α]} is an interval, i.e. u_v^{[α]} < ℓ_v^{[α]} and u_w^{[α]} < ℓ_w^{[α]}. Then it cannot be that u_w^{[α]} ≥ ℓ_v^{[α]} and u_v^{[α]} ≥ ℓ_w^{[α]} since that implies ℓ_v^{[α]} ≤ u_w^{[α]} < ℓ_w^{[α]} ≤ u_v^{[α]}. Hence, there are three cases as follows.

Case 1.1. u_w^{[α]} < ℓ_v^{[α]} and u_v^{[α]} < ℓ_w^{[α]}. Then the intersection equals

\[ [0, \min\{u_v^{[\alpha]}, u_w^{[\alpha]}\}] \mathbin{\dot\cup} [\max\{\ell_v^{[\alpha]}, \ell_w^{[\alpha]}\}, \alpha) = [\max\{\ell_v^{[\alpha]}, \ell_w^{[\alpha]}\},\ \alpha + \min\{u_v^{[\alpha]}, u_w^{[\alpha]}\}]^{[\alpha]} = \psi_\alpha(v, w)^{[\alpha]}. \]

Case 1.2. u_w^{[α]} ≥ ℓ_v^{[α]} and u_v^{[α]} < ℓ_w^{[α]}. Then the intersection equals

\[ [0, u_v^{[\alpha]}] \mathbin{\dot\cup} [\ell_v^{[\alpha]}, u_w^{[\alpha]}] \mathbin{\dot\cup} [\ell_w^{[\alpha]}, \alpha) = [\ell_v^{[\alpha]}, u_w^{[\alpha]}] \mathbin{\dot\cup} [\ell_w^{[\alpha]},\ \alpha + u_v^{[\alpha]}]^{[\alpha]} = \varphi_\alpha(v, w) \mathbin{\dot\cup} \psi_\alpha(v, w)^{[\alpha]}. \]

Case 1.3. u_w^{[α]} < ℓ_v^{[α]} and u_v^{[α]} ≥ ℓ_w^{[α]}. By symmetry we get v^{[α]} ∩ w^{[α]} = ϕ_α(w, v) ∪̇ ψ_α(v, w)^{[α]}.

Now, w.l.o.g. assume that v^{[α]} is an interval, i.e. ℓ_v^{[α]} ≤ u_v^{[α]}, while w^{[α]} consists of two intervals, i.e. u_w^{[α]} < ℓ_w^{[α]}. Then there are three cases as follows.

Case 2.1. ℓ_v^{[α]} ≤ u_w^{[α]} < u_v^{[α]} < ℓ_w^{[α]}. Then the intersection equals [ℓ_v^{[α]}, u_w^{[α]}] = ϕ_α(v, w).

Case 2.2. u_w^{[α]} < ℓ_v^{[α]} < ℓ_w^{[α]} ≤ u_v^{[α]}. Then the intersection equals [ℓ_w^{[α]}, u_v^{[α]}] = ϕ_α(w, v).

Case 2.3. ℓ_v^{[α]} ≤ u_w^{[α]} < ℓ_w^{[α]} ≤ u_v^{[α]}. Then the intersection is

\[ [\ell_v^{[\alpha]}, u_w^{[\alpha]}] \mathbin{\dot\cup} [\ell_w^{[\alpha]}, u_v^{[\alpha]}] = \varphi_\alpha(v, w) \mathbin{\dot\cup} \varphi_\alpha(w, v). \]
Clearly, if both v^{[α]} and w^{[α]} are intervals (Case 3) which are not disjoint, then their intersection is either ϕ_α(v, w) or ϕ_α(w, v).

While the previous lemma characterized the form of intersections of two modular projections of intervals, the next lemma reveals how many intervals will be required to represent a one-to-many intersection. We will use this bound in every step of our algorithm. We want to add that both of these lemmata and even Lemma 6 do not depend on the harmonic property by themselves. However, they turn out to be especially useful in this setting.

Lemma 5.
Let α ≥ 1, let v be an interval and let Q be a set of k ≥ 1 intervals. Then there is a set R of at most k + 1 intervals s.t. v^{[α]} ∩ (∪Q)^{[α]} = (∪R)^{[α]}.

Proof of Lemma 5. We simply obtain that

\[ v^{[\alpha]} \cap \Big(\bigcup Q\Big)^{[\alpha]} = \bigcup_{w \in Q} \big(v^{[\alpha]} \cap w^{[\alpha]}\big) = \bigcup_{\substack{w \in Q \\ w^{[\alpha]} \subseteq v^{[\alpha]}}} w^{[\alpha]} \ \cup \ \bigcup_{w \in D} \big(v^{[\alpha]} \cap w^{[\alpha]}\big) \]

where D = {w ∈ Q | w^{[α]} ⊄ v^{[α]}, w^{[α]} ∩ v^{[α]} ≠ Ø} denotes the subset of intervals that cause the interesting intersections with v^{[α]} (cf. Lemma 4). Obviously, all other intersections can be represented by at most one interval each. So we study the intersections with D. In fact, everything gets simple if there are w_1, w_2 ∈ D such that v^{[α]} ∩ w_1^{[α]} = ϕ_α(v, w_1) ∪̇ ψ_α(v, w_1)^{[α]} and v^{[α]} ∩ w_2^{[α]} = ϕ_α(w_2, v) ∪̇ ψ_α(v, w_2)^{[α]}. By simply adapting the inequalities of the first case distinction in the proof of Lemma 4 we find

\[ (v^{[\alpha]} \cap w_1^{[\alpha]}) \cup (v^{[\alpha]} \cap w_2^{[\alpha]}) = \big([0, u_v^{[\alpha]}] \mathbin{\dot\cup} [\ell_v^{[\alpha]}, u_{w_1}^{[\alpha]}] \mathbin{\dot\cup} [\ell_{w_1}^{[\alpha]}, \alpha)\big) \cup \big([0, u_{w_2}^{[\alpha]}] \mathbin{\dot\cup} [\ell_{w_2}^{[\alpha]}, u_v^{[\alpha]}] \mathbin{\dot\cup} [\ell_v^{[\alpha]}, \alpha)\big) = [0, u_v^{[\alpha]}] \mathbin{\dot\cup} [\ell_v^{[\alpha]}, \alpha) = v^{[\alpha]} \]

Figure 4: A step from i + 1 to i; modular projection to [0, a_i) and intersection with R_i^{[a_i]}

which implies that v^{[α]} ∩ (∪Q)^{[α]} = v^{[α]} can be represented by only one interval, namely v. Therefore, in order to get an upper bound we assume that these two types of intersections do not come together. In more detail, we may assume by symmetry that D = D_1 ∪̇ D_2 where D_1 = {w ∈ D | v^{[α]} ∩ w^{[α]} = ϕ_α(v, w) ∪̇ ϕ_α(w, v)} and D_2 = {w ∈ D | v^{[α]} ∩ w^{[α]} = ϕ_α(v, w) ∪̇ ψ_α(v, w)^{[α]}}.
It turns out that

\[ \bigcup_{w \in D_1} (v^{[\alpha]} \cap w^{[\alpha]}) = \bigcup_{w \in D_1} \big([\ell_v^{[\alpha]}, u_w^{[\alpha]}] \mathbin{\dot\cup} [\ell_w^{[\alpha]}, u_v^{[\alpha]}]\big) = [\ell_v^{[\alpha]}, \max_{w \in D_1} u_w^{[\alpha]}] \cup [\min_{w \in D_1} \ell_w^{[\alpha]}, u_v^{[\alpha]}] \]

and

\[ \bigcup_{w \in D_2} (v^{[\alpha]} \cap w^{[\alpha]}) = \bigcup_{w \in D_2} \big([\ell_v^{[\alpha]}, u_w^{[\alpha]}] \mathbin{\dot\cup} [\ell_w^{[\alpha]}, \alpha + u_v^{[\alpha]}]^{[\alpha]}\big) = [\ell_v^{[\alpha]}, \max_{w \in D_2} u_w^{[\alpha]}] \cup [\min_{w \in D_2} \ell_w^{[\alpha]}, \alpha + u_v^{[\alpha]}]^{[\alpha]} \]

which finally joins up to

\[ \bigcup_{w \in D} (v^{[\alpha]} \cap w^{[\alpha]}) = [\ell_v^{[\alpha]}, \max_{w \in D} u_w^{[\alpha]}] \cup [\min_{w \in D} \ell_w^{[\alpha]}, \alpha + u_v^{[\alpha]}]^{[\alpha]}. \]

Hence, all intersections with intervals in D may be represented by at most two intervals in total while each other intersection can be represented by at most one interval. Thus, if |D| = 0 then the whole intersection can be represented by at most k intervals. If |D| ≥ 1 then there are at most |Q| − |D| ≤ k − 1 remaining intersections, so there are at most k + 1 intervals required.

Let S_i denote the set of all solutions s ∈ Z_{≥0} that are feasible for each of the constraints i, i + 1, ..., n. We set S_{n+1} = Z_{≥0} to denote the feasible solutions to an empty set of constraints. The correctness of Algorithm 1 is implied by the following fundamental lemma. See Figure 4 for an example of a step inside the algorithm.

Lemma 6.
It holds true that S_i^{[a_i]} = R_i^{[a_i]} ∩ S_{i+1}^{[a_i]} for all i = 1, ..., n.

Proof. Let r ∈ S_i^{[a_i]}. So there is a solution s ∈ S_i such that r = s^{[a_i]} ∈ R_i^{[a_i]}. It holds that S_i ⊆ S_{i+1} which implies s ∈ S_{i+1} and thus r = s^{[a_i]} ∈ S_{i+1}^{[a_i]}.

Vice-versa let r ∈ R_i^{[a_i]} ∩ S_{i+1}^{[a_i]}. So there is a solution s ∈ S_{i+1} with s^{[a_i]} = r. From r ∈ R_i^{[a_i]} we get s^{[a_i]} ∈ R_i^{[a_i]}. Hence, s ∈ S_i and r = s^{[a_i]} ∈ S_i^{[a_i]}.

Algorithm 1 Feasibility test for FSC

procedure
Feasible(I = (a_1, ..., a_n, R_1, ..., R_n))
    Q_n ← {R_n}
    for i = n − 1, ..., 1 do
        compute a set Q_i s.t. (∪Q_i)^{[a_i]} = R_i^{[a_i]} ∩ (∪Q_{i+1})^{[a_i]} and |Q_i| ≤ O(n − i)
    if ∪Q_1 = Ø then
        return "infeasible"
    else
        return "feasible"

Theorem 7.
Algorithm 1 decides the feasibility of an instance in time O(n^2).

Proof. We show that ∪Q_i ≡ S_i (mod a_i) for all i = n, ..., 1. This will prove the algorithm correct since then ∪Q_1 ≡ S_1 (mod a_1) and that means ∪Q_1 is empty if and only if S_1 is empty. Obviously it holds that ∪Q_n ≡ S_n (mod a_n) since ∪Q_n = R_n. Now suppose that ∪Q_{i+1} ≡ S_{i+1} (mod a_{i+1}) for some i ≥ 1. We have that

\[ \Big(\bigcup Q_i\Big)^{[a_i]} = R_i^{[a_i]} \cap \Big(\bigcup Q_{i+1}\Big)^{[a_i]} \]

where the harmonic property implies

\[ \Big(\bigcup Q_{i+1}\Big)^{[a_i]} = \bigg(\Big(\bigcup Q_{i+1}\Big)^{[a_{i+1}]}\bigg)^{[a_i]} = \Big(S_{i+1}^{[a_{i+1}]}\Big)^{[a_i]} = S_{i+1}^{[a_i]}. \]

Together with Lemma 6 this yields

\[ \Big(\bigcup Q_i\Big)^{[a_i]} = R_i^{[a_i]} \cap S_{i+1}^{[a_i]} = S_i^{[a_i]} \]

and that proves the algorithm correct. Using Lemmas 4 to 6 each set Q_i can be computed in time O(n) and this yields a total running time of O(n^2).

Unfortunately, Algorithm 1 neither calculates a solution nor directly implies one. Here we show how to compute the smallest feasible solution s_min to FSC. However, by searching in the opposite direction the same technique also applies to the computation of the largest feasible solution s_max < a_n. We start with a simple binary search approach.

Corollary 8.
For feasible instances s_min can be computed in time O(n^2 log(a_n)).

This can be achieved by introducing an additional constraint measuring the value of s as follows. Let β be a positive integer. We extend the problem instance by a new constraint with number n + 1 defined by a_{n+1} = 2·a_n, b_{n+1} = 0, and B_{n+1} = β. Remark that this β-instance admits the same set of solutions as the original instance as long as β is large enough, e.g. β = a_n (cf. Observation 3). Consider a feasible solution to the β-instance where β ≤ a_n. It holds that

\[ 2 a_n x_{n+1} = a_{n+1} x_{n+1} \le s + a_{n+1} x_{n+1} \le B_{n+1} = \beta \le a_n \]

which implies x_{n+1} ≤ ⌊1/2⌋ = 0. However, if x_{n+1} < 0 then s ≥ a_{n+1} · |x_{n+1}| and therefore the solution s′ = s + a_{n+1} x_{n+1} with x′_{n+1} = 0 and x′_i = x_i − (a_{n+1}/a_i) x_{n+1} for all i = 1, ..., n is better than (s, x).

Thus we may assume generally that x_{n+1} = 0 which allows us to measure the value of s using the upper bound β. We use β to do a binary search in the interval [0, a_n] using Algorithm 1 to check the β-instance for feasibility. The smallest possible value for β then states the optimum value and that proves Corollary 8. However, with additional ideas we are able to achieve strongly polynomial time. The next lemma seems to be a characteristic property of modular arithmetics on sets.

Lemma 9.
For all numbers a, b ∈ Z_{≥1} and sets A, B ⊆ Z it holds

\[ A^{[a]} \cap B^{[a]} = \Bigg( A^{[ab]} \cap \bigcup_{i=0}^{b-1} \big( ia + B^{[a]} \big) \Bigg)^{[a]}. \]

Proof.
Let x be a number. Then it holds

\[ x \in \Bigg( A^{[ab]} \cap \bigcup_{i=0}^{b-1} \big( ia + B^{[a]} \big) \Bigg)^{[a]} \iff \exists\, y \in A^{[ab]} : y \in \bigcup_{i=0}^{b-1} \big( ia + B^{[a]} \big) \wedge x = y^{[a]} \]
\[ \iff \exists\, y \in A^{[ab]} : y^{[a]} \in B^{[a]} \wedge x = y^{[a]} \iff x \in A^{[a]} \cap B^{[a]} \]

where the last equivalence follows from (A^{[ab]})^{[a]} = A^{[a]}.

Since the right side can be written as the modular projection of a union of intersections modulo a we can find a sensible strengthening; in fact, for arbitrary sets X, M_0, ..., M_{m−1} it holds that

\[ \bigcup_{i=0}^{m-1} (X \cap M_i) = \bigcup_{i=0}^{m-1} \Bigg( X \cap \Big( M_i \setminus \bigcup_{j=0}^{i-1} (X \cap M_j) \Big) \Bigg). \]

While the left-hand side may not be, the right-hand side is always a disjoint union. Taking into account the modular projections this leads to the following corollary.
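Lemma 9 is easy to sanity-check by brute force on small explicit sets. The following Python sketch is our own illustration (the helper names `mod_set` and `lemma9_rhs` are assumptions, not notation from the paper):

```python
def mod_set(xs, m):
    """The modular projection X^[m] = {x mod m | x in X} of a finite set."""
    return {x % m for x in xs}

def lemma9_rhs(A, B, a, b):
    """Right-hand side of Lemma 9: (A^[ab] ∩ ⋃_{i<b} (i·a + B^[a]))^[a]."""
    union = {i * a + r for i in range(b) for r in mod_set(B, a)}
    return mod_set(mod_set(A, a * b) & union, a)

# Both sides agree on a small example with a = 3, b = 4.
A, B, a, b = {1, 5, 8, 14, 23}, {2, 5}, 3, 4
assert mod_set(A, a) & mod_set(B, a) == lemma9_rhs(A, B, a, b)
```

Note that computing `mod_set(A, a) & mod_set(B, a)` directly is exactly the intersection of projections that, by the remark before Lemma 9, cannot in general be obtained as (A ∩ B)^{[a]}.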
Corollary 10.
For all numbers a, b ∈ Z_{≥1} and sets A, B ⊆ Z it holds

\[ A^{[a]} \cap B^{[a]} = \Bigg( \bigcup_{i=0}^{b-1} D_i \Bigg)^{[a]} \]

where D_i = A^{[ab]} ∩ Y_i and Y_i = ia + (B^{[a]} \ ∪_{j=0}^{i−1} D_j^{[a]}) for all i = 0, ..., b − 1.

We will use Corollary 10 to aggregate constraints in order to reduce the problem size. The following observation gives a first bound for the smallest feasible solution.
Observation 11.
For feasible instances it holds that s_min ∈ R_n^{[a_n]}.

Figure 5: An example of four required intervals to represent R_{n−1}^{[a_{n−1}]} ∩ R_n^{[a_{n−1}]} in Lemma 13

This is true since in the harmonic case s_min < lcm(a_1, ..., a_n) = a_n due to Observation 3 which then implies that s_min = s_min^{[a_n]} ∈ R_n^{[a_n]} using Observation 2.

The idea is to search for s_min in the modular projection R_n^{[a_n]} by aggregating the penultimate constraint n − 1 into constraint n. Fortunately, the number of intervals needed to represent both constraints can be bounded by a constant. A fine-grained construction then enforces the algorithm to efficiently iterate the feasibility test on aggregated instances to find the optimum value.

Theorem 12.
For feasible instances s_min can be computed in time O(n^3).

Remark that the set of feasible solutions for the last two constraints is S_{n−1} = R_{n−1}^{[a_{n−1}]} ∩ (R_n^{[a_n]})^{[a_{n−1}]} = R_{n−1}^{[a_{n−1}]} ∩ R_n^{[a_{n−1}]}. Therefore, the next lemma states the crucial argument of the algorithm.

Lemma 13.
The intersection R_{n−1}^{[a_{n−1}]} ∩ R_n^{[a_{n−1}]} can always be represented by a disjoint union U ⊆ R_n^{[a_n]} of only constantly many intervals in R_n^{[a_n]} such that

(a) U^{[a_{n−1}]} = R_{n−1}^{[a_{n−1}]} ∩ R_n^{[a_{n−1}]} and

(b) u ≡ r (mod a_{n−1}) implies u ≤ r for all u ∈ U, r ∈ R_n^{[a_n]}.

Here the former property states that indeed the intervals in U are a proper representation for the last two constraints. The important property is the latter; in fact, it ensures that U is the best possible representation in the sense that U consists of the smallest intervals possible (see Figure 5).

Proof of Lemma 13. (a). By defining

\[ D_i = Y_i \cap R_n^{[a_n]} \quad\text{and}\quad Y_i = i a_{n-1} + \Big( R_{n-1}^{[a_{n-1}]} \setminus \bigcup_{j=0}^{i-1} D_j^{[a_{n-1}]} \Big) \]

for all i ∈ {0, ..., a_n/a_{n−1} − 1} Corollary 10 proves the claim (cf. Figure 5). (b) follows by construction.

It remains to show that ∪_i D_i is the union of only constantly many disjoint intervals. Apparently, the intervals are disjoint by construction. We claim that there are at most three non-empty sets D_i. Assume there are at least four non-empty translates, namely D_i, D_j, D_k, D_ℓ. Then, since R_n is an interval it holds for at least two p, q ∈ {i, j, k, ℓ} that the full interval translates F_p = [p a_{n−1}, (p + 1) a_{n−1}) and F_q = [q a_{n−1}, (q + 1) a_{n−1}) are subsets of R_n^{[a_n]}. For p (and also for q) we get

\[ D_p^{[a_{n-1}]} = (\underbrace{Y_p}_{\subseteq F_p} \cap R_n^{[a_n]})^{[a_{n-1}]} = Y_p^{[a_{n-1}]} = R_{n-1}^{[a_{n-1}]} \setminus \bigcup_{j=0}^{p-1} D_j^{[a_{n-1}]} \]

which implies with ∪_{j=0}^{p−1} D_j^{[a_{n−1}]} ⊆ R_{n−1}^{[a_{n−1}]} that

\[ \bigcup_{j=0}^{p} D_j^{[a_{n-1}]} = D_p^{[a_{n-1}]} \cup \bigcup_{j=0}^{p-1} D_j^{[a_{n-1}]} = R_{n-1}^{[a_{n-1}]}. \]

Then it follows ∪_{j=0}^{p} D_j^{[a_{n−1}]} = R_{n−1}^{[a_{n−1}]} = ∪_{j=0}^{q} D_j^{[a_{n−1}]}. W.l.o.g. let p < q.
Then D_q = Y_q ∩ R_n^{[a_n]} is empty since

\[ Y_q = q a_{n-1} + \Big( R_{n-1}^{[a_{n-1}]} \setminus \bigcup_{j=0}^{q-1} D_j^{[a_{n-1}]} \Big) \subseteq q a_{n-1} + \Big( R_{n-1}^{[a_{n-1}]} \setminus R_{n-1}^{[a_{n-1}]} \Big) \]

is empty and we have a contradiction. Using the same case distinctions as in the proof of Lemma 4 one can show that each set D_i consists of at most two intervals. Therefore, all the non-empty sets D_i consist of at most 3 · 2 = 6 intervals in total.

By Lemma 13 we may replace the constraints n and n − 1 by disjoint intervals E_1, ..., E_k ⊆ R_n^{[a_n]} (representing the constraints n and n −
1) where k ≤ C for a small constant C. If k ≥ 1 we consider the instances I_1, ..., I_k defined by

\[ (I_j)\qquad \min \{\, s \mid s^{[a_i]} \in R_i^{[a_i]}\ \forall i = 1, \dots, n-2,\ s^{[a_n]} \in E_j^{[a_n]},\ s \in \mathbb{Z}_{\ge 0} \,\}. \]

If none of the instances I_1, ..., I_k admits a solution then the original instance cannot be feasible. Assume that there is at least one feasible instance. Now, since E_1, ..., E_k are disjoint exactly one of them contains the optimum value for s. W.l.o.g. assume that E_1 < · · · < E_k. Then there is a smallest index j such that I_j is feasible and we solve I_j recursively to find the optimum value. Together this yields an algorithm running in time n · C · O(n^2) = O(n^3).

In real-time systems an important question is to ask for the worst-case response time of a system. Nguyen et al. proposed a new algorithm [25] to compute it in polynomial time for preemptive sporadic tasks τ_1, ..., τ_n with harmonic periods T_i, i.e. T_i/T_{i+1} ∈ Z. Their algorithm even allows the task execution to be delayed by some release jitter J_i. However, their algorithm depends on a heuristic component which may fail to find a solution. In fact, the fundamental computation problem can be expressed as a BMS instance which immediately implies a robust solution in time O(n^2) with our algorithm. Nevertheless, it can be solved even more efficiently in time O(n) which we describe here.

We adapt the notation of Nguyen et al. and extend it to our needs. The jobs of task τ_i have the processing time C_i and we define c_i = Σ_{t=i+1}^{n−1} C_t to accumulate the last of them. The utilization of task τ_i is denoted by U_i = C_i/T_i and it holds that Σ_{t=1}^{n−1} U_t <
1. In Section 5.4.1 of [25] Nguyen et al. describe that also x_1 = 1 may be assumed. The system to solve is

\[ \min \{\, x_n \mid J_i + T_i x_i \le J_n + T_n x_n,\ \ J_n + T_n x_n - c_i \le J_i + T_i x_i\ \ \forall i \le n-1 \,\} \tag{2} \]

which can be formulated as the following BMS instance:

\[ \min \Big\{\, x_n \ \Big|\ \Big\lceil \frac{J_i - J_n}{T_n} \Big\rceil \le x_n - \frac{T_i}{T_n} x_i \le \Big\lfloor \frac{J_i - J_n + c_i}{T_n} \Big\rfloor\ \ \forall i \le n-1 \,\Big\} \tag{3} \]

Lemma 14. If i < j ≤ n and (c_i + c_j)/T_j < 1 then in terms of variable x_i there is at most one feasible value for variable x_j.

Proof. If j < n then by combining the constraints for i and j in (2) we find T_i x_i + J_i − J_n ≤ T_j x_j + J_j − J_n + c_j and T_j x_j + J_j − J_n ≤ T_i x_i + J_i − J_n + c_i which with the harmonic property and the integrality of x_j yields

\[ \frac{T_i}{T_j} x_i + \Big\lceil \frac{J_i - J_j - c_j}{T_j} \Big\rceil \le x_j \le \frac{T_i}{T_j} x_i + \Big\lfloor \frac{J_i - J_j + c_i}{T_j} \Big\rfloor. \tag{4} \]

However, if j = n then c_j = Σ_{t=n+1}^{n−1} C_t = 0 and thus (4) follows from (2) too (cf. (3)). Now by simply dropping the roundings we obtain in both cases that

\[ \frac{T_i}{T_j} x_i + \Big\lfloor \frac{J_i - J_j + c_i}{T_j} \Big\rfloor - \bigg( \frac{T_i}{T_j} x_i + \Big\lceil \frac{J_i - J_j - c_j}{T_j} \Big\rceil \bigg) \le \frac{c_i + c_j}{T_j} < 1. \]

We use ℓ_j^{(i)}(z) and u_j^{(i)}(z) to denote the feasible values for variable x_j in terms of variable x_i where z states a value for variable x_i, i.e.

\[ \ell_j^{(i)}(z) = \frac{T_i}{T_j} z + \Big\lceil \frac{J_i - J_j - c_j}{T_j} \Big\rceil \quad\text{and}\quad u_j^{(i)}(z) = \frac{T_i}{T_j} z + \Big\lfloor \frac{J_i - J_j + c_i}{T_j} \Big\rfloor. \]

Thus, (4) is equivalent to x_j ∈ [ℓ_j^{(i)}(x_i), u_j^{(i)}(x_i)] and if (c_i + c_j)/T_j < 1 then either ℓ_j^{(i)}(x_i) = x_j = u_j^{(i)}(x_i) or there is no solution at all. Fortunately, there is always a chain of variables such that the value of every next variable can be determined by knowing the value of the previous. The following lemma is crucial.

Lemma 15. If i < n and k = max{t ≤ n | T_{i+1} = T_t} then there is at most one feasible value for variable x_k.

Proof.
If k < n − 1 then the choice of k implies T_k ≥ 2 T_{k+1} ≥ 2 T_{k+2} ≥ · · · ≥ 2 T_{n−1} and thus T_t/T_k ≤ 1/2 for all t = k + 1, ..., n −
1. Hence,

\[ \frac{c_i + c_k}{T_k} = \sum_{t=i+1}^{n-1} U_t \frac{T_t}{T_k} + \sum_{t=k+1}^{n-1} U_t \frac{T_t}{T_k} = \sum_{t=i+1}^{k} U_t \underbrace{\frac{T_t}{T_k}}_{=1} + 2 \sum_{t=k+1}^{n-1} U_t \underbrace{\frac{T_t}{T_k}}_{\le 1/2} \le \sum_{t=i+1}^{n-1} U_t < 1. \]

Figure 6: The variable revealing flow with vertical lines between blocks of equal periods
If otherwise k ≥ n − 1 then c_k = 0 and hence

\[ \frac{c_i + c_k}{T_k} = \frac{c_i}{T_k} = \sum_{t=i+1}^{n-1} U_t \underbrace{\frac{T_t}{T_k}}_{=1} = \sum_{t=i+1}^{n-1} U_t < 1. \]

By Lemma 14 this proves the claim.

This gives rise to the following algorithm. By iterating Lemma 15 and starting with x_1 = 1 we can reveal the last variable of each block of indices of equal periods (cf. Figure 6). Finally, this reveals the variable x_n and we only need to assure that the value of x_n admits feasible values for the variables which are not revealed so far. Apparently we may restate the constraints of (2) as

\[ \Big\lceil \frac{J_n - J_j - c_j + T_n x_n}{T_j} \Big\rceil \le x_j \le \Big\lfloor \frac{J_n - J_j + T_n x_n}{T_j} \Big\rfloor \quad \forall j = 1, \dots, n-1. \]

Therefore, we can simply compare these bounds to assure the existence of a feasible value for each variable x_j. See Algorithm 2 for a formal description.

Algorithm 2
Variable revealing flow procedure
Reveal()
    x_1 ← 1; k ← 1
    while k < n do
        i ← k
        k ← max{t ≤ n | T_{i+1} = T_t}
        if ℓ_k^{(i)}(x_i) ≠ u_k^{(i)}(x_i) then
            return −1
        else
            x_k ← ℓ_k^{(i)}(x_i)    ⊲ Lemma 15
    for j = 1, ..., n − 1 do
        if ⌈(J_n − J_j − c_j + T_n x_n)/T_j⌉ > ⌊(J_n − J_j + T_n x_n)/T_j⌋ then    ⊲ no feasible solution for x_j
            return −1
    return x_n

Observation 16.
In fact, by a more sophisticated investigation the number of index blocks of equal periods can be bounded by a constant and thus, the while loop reveals $x_n$ in constant time. Therefore, the final feasibility test appears to be the only computational bottleneck.

References

[1] Manindra Agrawal and Somenath Biswas. Primality and identity testing via Chinese remaindering. In Proc. FOCS 1999, pages 202–209, 1999.
[2] Saoussen Anssi, Stefan Kuntz, Sébastien Gérard, and François Terrier. On the gap between schedulability tests and an automotive task model. J. Syst. Archit., 59(6):341–350, 2013.
[3] Reuven Bar-Yehuda and Dror Rawitz. Efficient algorithms for integer programs with two variables per constraint. Algorithmica, 29(4):595–609, 2001.
[4] Sanjoy K. Baruah, Louis E. Rosier, and Rodney R. Howell. Algorithms and complexity concerning the preemptive scheduling of periodic, real-time tasks on one processor. Real-Time Systems, 2(4):301–324, 1990.
[5] Vincenzo Bonifaci, Alberto Marchetti-Spaccamela, Nicole Megow, and Andreas Wiese. Polynomial-time exact schedulability tests for harmonic real-time tasks. In Proc. RTSS 2013, pages 236–245. IEEE Computer Society, 2013.
[6] Michele Conforti, Gérard Cornuéjols, and Giacomo Zambelli. Integer Programming. Springer Publishing Company, Incorporated, 2014.
[7] Michele Conforti, Marco Di Summa, and Laurence A. Wolsey. The mixing set with flows. SIAM J. Discrete Math., 21(2):396–407, 2007.
[8] Michele Conforti, Marco Di Summa, and Laurence A. Wolsey. The mixing set with divisible capacities. In Proc. IPCO 2008, pages 435–449, 2008.
[9] Michele Conforti and Giacomo Zambelli. The mixing set with divisible capacities: A simple approach. Oper. Res. Lett., 37(6):379–383, 2009.
[10] Jana Cslovjecsek, Friedrich Eisenbrand, Christoph Hunkenschröder, Lars Rohwedder, and Robert Weismantel. Block-structured integer and linear programming in strongly polynomial and near linear time, 2020 (manuscript).
[11] Friedrich Eisenbrand, Karthikeyan Kesavan, Raju S. Mattikalli, Martin Niemeier, Arnold W. Nordsieck, Martin Skutella, José Verschae, and Andreas Wiese. Solving an avionics real-time scheduling problem by advanced IP-methods. In Mark de Berg and Ulrich Meyer, editors, Proc. ESA 2010, volume 6346 of Lecture Notes in Computer Science, pages 11–22. Springer, 2010.
[12] Friedrich Eisenbrand and Günter Rote. Fast 2-variable integer programming. In Proc. IPCO 2001, pages 78–89, 2001.
[13] Friedrich Eisenbrand and Thomas Rothvoß. New hardness results for Diophantine approximation. In Proc. APPROX 2009, pages 98–110, 2009.
[14] Oded Goldreich, Dana Ron, and Madhu Sudan. Chinese remaindering with errors. In Proc. STOC 1999, pages 225–234, 1999.
[15] Oktay Günlük and Yves Pochet. Mixing mixed-integer inequalities. Math. Program., 90(3):429–457, 2001.
[16] Venkatesan Guruswami, Amit Sahai, and Madhu Sudan. "Soft-decision" decoding of Chinese remainder codes. In Proc. FOCS 2000, pages 159–168, 2000.
[17] Raymond Hemmecke, Shmuel Onn, and Lyubov Romanchuk. n-fold integer programming in cubic time. Math. Program., 137(1-2):325–341, 2013.
[18] Raymond Hemmecke and Rüdiger Schultz. Decomposition of test sets in stochastic integer programming. Math. Program., 94(2-3):323–341, 2003.
[19] Klaus Jansen, Alexandra Lassota, and Lars Rohwedder. Near-linear time algorithm for n-fold ILPs via color coding. In Proc. ICALP 2019, pages 75:1–75:13, 2019.
[20] Kim-Manuel Klein. About the complexity of two-stage stochastic IPs. In Daniel Bienstock and Giacomo Zambelli, editors, Proc. IPCO 2020, volume 12125 of Lecture Notes in Computer Science, pages 252–265. Springer, 2020.
[21] Donald E. Knuth. The Art of Computer Programming, Volume II: Seminumerical Algorithms, 2nd Edition. Addison-Wesley, 1981.
[22] J. C. Lagarias. The computational complexity of simultaneous Diophantine approximation problems. SIAM J. Comput., 14(1):196–209, 1985.
[23] Joseph Y.-T. Leung and Jennifer Whitehead. On the complexity of fixed-priority scheduling of periodic, real-time tasks. Perform. Evaluation, 2(4):237–250, 1982.
[24] Andrew J. Miller and Laurence A. Wolsey. Tight formulations for some simple mixed integer programs and convex objective integer programs. Math. Program., 98(1-3):73–88, 2003.
[25] Thi Huyen Chau Nguyen, Werner Grass, and Klaus Jansen. Exact polynomial time algorithm for the response time analysis of harmonic tasks with constrained release jitter, 2019 (manuscript).
[26] Yves Pochet and Laurence A. Wolsey. Production Planning by Mixed Integer Programming (Springer Series in Operations Research and Financial Engineering). Springer-Verlag, Berlin, Heidelberg, 2006.
[27] Chi-Sheng Shih, Sathish Gopalakrishnan, Phanindra Ganti, Marco Caccamo, and Lui Sha. Template-based real-time dwell scheduling with energy constraint. In Proc. RTAS 2003, page 19. IEEE Computer Society, 2003.
[28] Yang Xu, Anton Cervin, and Karl-Erik Årzén. LQG-based scheduling and control co-design using harmonic task periods. Technical report, Department of Automatic Control, Lund Institute of Technology, Lund University, 2016.
[29] Ming Zhao and Ismael R. de Farias Jr. The mixing-MIR set with divisible capacities. Math. Program., 115(1):73–103, 2008.
A Hardness of BMS.
We reduce from the problem of Directed Diophantine Approximation with rounding down. For any vector $v \in \mathbb{R}^n$ let $\lfloor v \rfloor$ denote the vector where each component is rounded down, i.e., $(\lfloor v \rfloor)_i = \lfloor v_i \rfloor$ for all $i \le n$.

Directed Diophantine Approximation with rounding down (DDA↓)
Given: $\alpha_1, \dots, \alpha_n \in \mathbb{Q}_+$, $N \in \mathbb{Z}_{\ge 1}$, $\varepsilon \in \mathbb{Q}$ with $0 < \varepsilon < 1$.
Question: Is there a $Q \in \{1, \dots, N\}$ such that $\| Q\alpha - \lfloor Q\alpha \rfloor \|_\infty \le \varepsilon$?

Eisenbrand and Rothvoß proved that DDA↓ is NP-hard [13]. In fact, every instance of DDA↓ can be expressed as a BMS instance, which yields the following theorem.

Theorem 17.
BMS is NP-hard (even if $b_i = 0$ for all $i$ with $a_i = 0$).

Proof. Write $\alpha_i = \beta_i / \gamma_i$ for integers $\beta_i \ge 1$, $\gamma_i \ge 1$ and set $\lambda = \prod_j \beta_j$. Then $\lambda / \alpha_i = (\lambda / \beta_i)\,\gamma_i$ is a positive integer. Let $M$ denote the following instance of BMS:
$$0 \le Q' - (\lambda/\alpha_i) \cdot y_i \le \lfloor (\lambda/\alpha_i) \cdot \varepsilon \rfloor \quad \forall i = 1, \dots, n \tag{5}$$
$$\lambda \le Q' - 0 \cdot y_{n+1} \le \lambda \cdot N \tag{6}$$
$$0 \le Q' - \lambda \cdot y_{n+2} \le 0 \tag{7}$$
$$Q',\, y_i \in \mathbb{Z} \quad \forall i = 1, \dots, n+2$$
So let $Q \in \{1, \dots, N\}$ with $\|Q\alpha - \lfloor Q\alpha \rfloor\|_\infty \le \varepsilon$ be given. We obtain readily that $Q' = \lambda Q$ and $y = (\lfloor Q\alpha_1 \rfloor, \dots, \lfloor Q\alpha_n \rfloor, 0, Q)$ defines a solution of $M$.

Vice versa, let $(Q', y)$ be a solution to $M$. We see that (5) implies
$$0 \le Q' - (\lambda/\alpha_i) \cdot y_i \le \lfloor (\lambda/\alpha_i) \cdot \varepsilon \rfloor \le (\lambda/\alpha_i) \cdot \varepsilon$$
and by (7) we get $Q' = \lambda \cdot y_{n+2}$, which then implies $0 \le y_{n+2}\,\alpha_i - y_i \le \varepsilon < 1$ for all $i \le n$. Now, since $y_i$ is integer, there can be only one value for $y_i$, i.e., $y_i = \lfloor y_{n+2}\,\alpha_i \rfloor$. By $Q' = \lambda \cdot y_{n+2}$ and (6) we get $y_{n+2} \in \{1, \dots, N\}$, and by setting $Q = y_{n+2}$ this yields $\|Q\alpha - \lfloor Q\alpha \rfloor\|_\infty \le \varepsilon$, which proves the claim.

B Smallest Feasible s for Mixing Set

Lemma 18. For $f(s,x) = s$ the mixing set (1) can be solved in linear time.

Proof. We show that $s_{\min} = s^* := \max(\{0\} \cup \{ b_i \mid a_i = 0 \})$, where $s_{\min}$ denotes the optimal solution to (1) for $f(s,x) = s$. Let $i \le n$.

Case $s^* \ge b_i$: Set $x_i^* = 0$. Then we have $s^* + a_i x_i^* = s^* \ge b_i$.

Case $s^* < b_i$: Then $a_i \ne 0$ and $b_i - s^* > 0$. We set $x_i^* = \lceil (b_i - s^*)/a_i \rceil$ if $a_i > 0$ and $x_i^* = \lfloor (b_i - s^*)/a_i \rfloor$ if $a_i < 0$. Again we get that $s^* + a_i x_i^* \ge b_i$.

Hence, $s^*$ is a solution. Apparently $s^*$ is optimal if $s^* = 0$. If $s^* > 0$, there is some $j$ with $a_j = 0$ such that $s^* = b_j \le s_{\min} + a_j x_j = s_{\min}$ for any $x_j$.
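The proof of Lemma 18 is directly algorithmic: $s^*$ is read off the constraints with $a_i = 0$, and a witness $x^*$ follows from the two cases. A minimal sketch of this one-pass procedure (the function name and list-based interface are our own, and integer inputs are assumed for simplicity so that exact integer division can be used):

```python
def smallest_feasible_s(a, b):
    """Smallest feasible s for the mixing set
        s + a[i] * x[i] >= b[i]  for all i,  s in Z>=0, x[i] in Z,
    following the proof of Lemma 18 (integer a, b assumed).
    Returns (s_star, x_star) with s_star = max({0} union {b[i] : a[i] == 0}).
    """
    # s is forced only by the constraints with a[i] == 0.
    s_star = max([0] + [bi for ai, bi in zip(a, b) if ai == 0])

    x_star = []
    for ai, bi in zip(a, b):
        if s_star >= bi:
            x_star.append(0)                       # constraint already satisfied
        elif ai > 0:
            x_star.append(-((s_star - bi) // ai))  # ceil((b_i - s*) / a_i)
        else:                                      # ai < 0 here, by choice of s*
            x_star.append((bi - s_star) // ai)     # floor((b_i - s*) / a_i)

    # sanity check: every constraint holds
    assert all(s_star + ai * xi >= bi for ai, bi, xi in zip(a, b, x_star))
    return s_star, x_star
```

Both loops take one pass over the input, matching the claimed linear running time; Python's `//` is floor division for either sign of the dividend, which makes the $a_i < 0$ case a single expression.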