An Algebraic Approach to Project Schedule Development Under Precedence Constraints∗

Nikolai Krivulin†

Abstract
An approach to schedule development in project management is developed within the framework of idempotent algebra. The approach offers a way to represent precedence relationships among activities in projects as linear vector equations in terms of an idempotent semiring. As a result, many issues in project scheduling reduce to solving computational problems in the idempotent algebra setting, including linear equations and eigenvalue-eigenvector problems. The solutions to the problems are given in a compact vector form that provides the basis for the development of efficient computation procedures and related software applications.
Key-Words: idempotent semiring, linear equation, eigenvalue, eigen-vector, project scheduling, precedence relations, flow time
1 Introduction

The problem of scheduling a large-scale set of activities is a key issue in project management [1, 2]. There is a variety of project scheduling techniques developed to handle different aspects of the problem. The techniques range from the classical Critical Path Method and the Program Evaluation and Review Technique, which marked the beginning of active research in the area in the 1950s, to more recent approaches, including methods and techniques of idempotent algebra (see, e.g., [3, 4, 5, 6, 7, 8, 9] and references therein).

We describe a new computational approach to project scheduling problems, which is based on the implementation and further development of models and methods of idempotent algebra in [10, 11, 8, 9]. The approach offers a useful way to represent different types of precedence relationships among activities in a project as linear vector equations written in terms of an idempotent semiring. As a result, many issues in project scheduling reduce to solving computational problems in the idempotent algebra setting.

∗ International Journal of Applied Mathematics and Informatics, 2012. Vol. 6, no. 2, pp. 92-100.
† Faculty of Mathematics and Mechanics, St. Petersburg State University, 28 Universitetsky Ave., St. Petersburg, 198504, Russia, [email protected]
2 Preliminary Definitions and Results

We start with a brief introduction to idempotent algebra based on [10, 11, 8, 9]. Further details on the topic can be found in [3, 4, 5, 13, 6, 14, 7, 15, 16].
Consider a set X that is equipped with two operations ⊕ and ⊗, called addition and multiplication, and that has neutral elements 𝟘 and 𝟙, called zero and identity. We suppose that ⟨X, 𝟘, 𝟙, ⊕, ⊗⟩ is a commutative semiring in which addition is idempotent and multiplication is invertible. Since the nonzero elements of X form a group under multiplication, this semiring is often referred to as an idempotent semifield.

The idempotent property is given by the equality x ⊕ x = x, which holds for all x ∈ X. Let X_+ = X \ {𝟘}. For each x ∈ X_+, there exists an inverse x^{-1} such that x ⊗ x^{-1} = 𝟙.

The power notation is defined as usual. For any x ∈ X_+ and integer p > 0, we have x^0 = 𝟙, 𝟘^p = 𝟘, and

x^p = x^{p-1} ⊗ x = x ⊗ x^{p-1},   x^{-p} = (x^{-1})^p.

It is assumed that in the semiring the integer power can naturally be extended to the case of rational exponents. In what follows, the multiplication sign ⊗ is omitted, as is usual in conventional algebra. The power notation is thought of as defined in terms of idempotent algebra; however, when writing exponents, we routinely use ordinary arithmetic operations.

Since addition is idempotent, it induces a partial order ≤ on X according to the rule: x ≤ y if and only if x ⊕ y = y. With this definition, it is easy to verify that both addition and multiplication are isotonic, and that

x ≤ x ⊕ y,   y ≤ x ⊕ y.

The relation symbols are understood below in the sense of this partial order. Note that, according to the order, we have x ≥ 𝟘 for any x ∈ X.

As an example of the semirings under study, one can consider the idempotent semifield of real numbers

R_max,+ = ⟨R ∪ {−∞}, −∞, 0, max, +⟩.

The semiring has neutral elements 𝟘 = −∞ and 𝟙 = 0. For each x ∈ R, there exists an inverse x^{-1}, which is equal to −x in ordinary arithmetic. For any x, y ∈ R, the power x^y is equivalent to the arithmetic product xy. The partial order coincides with the natural linear order on R.

We use this semiring as the basis for the development of algebraic solutions to project scheduling problems in the subsequent sections.

Vector and matrix operations are routinely introduced on the basis of the scalar operations. Consider the Cartesian product X^n with its elements represented as column vectors. For any two vectors a = (a_i) and b = (b_i) from X^n, and a scalar x ∈ X, vector addition and scalar multiplication follow the rules

{a ⊕ b}_i = a_i ⊕ b_i,   {xa}_i = x a_i.
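The semifield R_max,+ described above is easy to experiment with directly. The following is a minimal sketch in Python; the names oplus, otimes, ZERO, ONE, and inv are ours, not the paper's notation:

```python
# Minimal sketch of the idempotent semifield R_max,+ = <R ∪ {-inf}, -inf, 0, max, +>.
# The names oplus, otimes, ZERO, ONE, inv are illustrative, not from the paper.
ZERO = float("-inf")  # neutral element of addition (𝟘 = -inf)
ONE = 0.0             # neutral element of multiplication (𝟙 = 0)

def oplus(x, y):
    """Idempotent addition: x ⊕ y = max(x, y)."""
    return max(x, y)

def otimes(x, y):
    """Multiplication: x ⊗ y = x + y (with -inf absorbing)."""
    return x + y

def inv(x):
    """Multiplicative inverse: x^{-1} = -x, defined for x != 𝟘."""
    assert x != ZERO
    return -x

# Idempotency and the neutral elements can be checked directly:
assert oplus(3.0, 3.0) == 3.0        # x ⊕ x = x
assert oplus(ZERO, 5.0) == 5.0       # 𝟘 is neutral for ⊕
assert otimes(ONE, 5.0) == 5.0       # 𝟙 is neutral for ⊗
assert otimes(5.0, inv(5.0)) == ONE  # x ⊗ x^{-1} = 𝟙
```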
A vector with all entries equal to zero is called the zero vector and is denoted by 𝟘. A vector is regular if it has no zero elements. With the above operations, the set of vectors X^n forms a semimodule over an idempotent semifield.

A geometric illustration of the operations in R^2_max,+ is given in Fig. 1. Idempotent addition of two vectors in R^2_max,+ follows the "rectangle rule," which defines the sum as a diagonal of the rectangle formed by the coordinate axes together with the lines drawn through the end points of the vectors. Scalar multiplication of a vector is equivalent to shifting the end point of the vector in the direction at 45° to the axes.

[Figure 1: Vector addition (left) and scalar multiplication (right) in R^2_max,+.]

As usual, a vector b ∈ X^n is said to be linearly dependent on vectors a_1, ..., a_m ∈ X^n if there are scalars x_1, ..., x_m ∈ X such that b = x_1 a_1 ⊕ ··· ⊕ x_m a_m. In particular, b is collinear with a when b = xa.

Consider a set of vectors a_1, ..., a_m ∈ X^n. The set of all linear combinations

𝒜 = {x_1 a_1 ⊕ ··· ⊕ x_m a_m | x_1, ..., x_m ∈ X}

is referred to as the linear span of the vectors. Specifically, the linear span of vectors a_1 and a_2 in R^2_max,+ has the form of a strip bounded by the lines drawn through the end points of the vectors (see Fig. 2).

[Figure 2: A linear span of two vectors in R^2_max,+.]

For any column vector x = (x_i) ∈ X^n_+, we introduce a row vector x^- = (x_i^-) with elements x_i^- = x_i^{-1} when x_i ≠ 𝟘, and x_i^- = 𝟘 otherwise. We define the distance between any two regular vectors a and b with the metric

ρ(a, b) = b^- a ⊕ a^- b.
When b = a, we have ρ(a, b) = 𝟙, where 𝟙 is the minimum value the metric ρ can take. Specifically, in R^n_max,+, we have 𝟙 = 0, whereas the metric takes the form

ρ(x, y) = max_{1≤i≤n} |x_i − y_i|,

and thus coincides with the classical Chebyshev metric.

For any conforming matrices A = (a_ij), B = (b_ij), and C = (c_ij) with entries in X, matrix addition and multiplication together with multiplication by a scalar x ∈ X are performed in accordance with the formulas

{A ⊕ B}_ij = a_ij ⊕ b_ij,   {BC}_ij = ⊕_k b_ik c_kj,   {xA}_ij = x a_ij.

A matrix with all entries equal to zero is called the zero matrix and is denoted by 𝟘. A matrix is regular if it has no zero rows.

Consider the set of square matrices X^{n×n}. A matrix is diagonal if its off-diagonal entries are 𝟘. The diagonal matrix I = diag(𝟙, ..., 𝟙) is the identity matrix. A matrix is reducible if it can be put in a block triangular form by simultaneous permutations of rows and columns; otherwise, the matrix is irreducible.

For any matrix A ≠ 𝟘 and integer p >
0, we have A^0 = I and A^p = A^{p-1} A = A A^{p-1}. The trace of a matrix A = (a_ij) is defined as

tr A = ⊕_{i=1}^{n} a_ii.

Any matrix A ∈ X^{m×n} defines a mapping from the semimodule X^n to the semimodule X^m. Since for any vectors x, y ∈ X^n and scalar α ∈ X it holds that

A(x ⊕ y) = Ax ⊕ Ay,   A(αx) = αAx,

the mapping possesses the properties of a linear operator.

Suppose A, C ∈ X^{m×n} are given matrices, and b, d ∈ X^m are given vectors. A general linear equation in the unknown vector x ∈ X^n is written in the form

Ax ⊕ b = Cx ⊕ d.

Note that, due to the lack of additive inverses, one cannot put the equation in a form where all terms involving the unknown x are brought to one side of the equation while those without x go to the other side. Many practical problems reduce to the solution of the following particular cases of the general equation:

Ax = d,   Ax ⊕ b = x.
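In R_max,+, the matrix formulas above turn into entrywise max for ⊕ and a "tropical" matrix product with max in place of the sum and + in place of the product. A minimal sketch (function names are ours):

```python
NEG_INF = float("-inf")  # the zero element 𝟘 of R_max,+

def mat_add(A, B):
    """Matrix addition {A ⊕ B}_ij = max(a_ij, b_ij)."""
    return [[max(a, b) for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_mul(A, B):
    """Matrix product {AB}_ij = max_k (a_ik + b_kj)."""
    return [[max(a + b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def identity(n):
    """Identity I = diag(0, ..., 0) with off-diagonal entries -inf."""
    return [[0.0 if i == j else NEG_INF for j in range(n)] for i in range(n)]

A = [[1.0, NEG_INF], [2.0, 0.0]]
assert mat_mul(identity(2), A) == A  # I acts as the identity
assert mat_add(A, A) == A            # matrix addition is idempotent
# {A^2}_11 = max(1 + 1, -inf + 2) = 2
assert mat_mul(A, A)[0][0] == 2.0
```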
By analogy with linear integral equations, the above two equations are respectively referred to as equations of the first kind and of the second kind. The second-kind equations

Ax = x,   Ax ⊕ b = x

are also known in the literature as the homogeneous and nonhomogeneous Bellman equations. Some actual problems involve the solution of inequalities of the first and second kinds in the form

Ax ≤ d,   Ax ⊕ b ≤ x.

3 Solutions to Linear Problems

Now we outline some recent results from [10, 11, 8, 9] that underlie subsequent applications of idempotent algebra to project scheduling problems.
3.1 First-Kind Equations and Inequalities

Given a matrix A ∈ X^{m×n} and a vector d ∈ X^m, the problem is to find all solutions x ∈ X^n of the equation and inequality given by

Ax = d,   (1)
Ax ≤ d.   (2)

A solution x_0 of equation (1) is maximal if x_0 ≥ x for all solutions x of (1). We present a solution of equation (1) based on the analysis of the distance between vectors in X^m. The solution involves the introduction of a new symbol

∆ = (A(d^- A)^-)^- d

to represent a residual quantity associated with (1). We start with a result that gives the distance from the vector d to the linear span of the columns of the matrix A,

𝒜 = {Ax | x ∈ X^n}.

Lemma 1.
Suppose A ∈ X^{m×n} and d ∈ X^m are a regular matrix and a regular vector. Then it holds that

min_{x ∈ X^n} ρ(Ax, d) = ∆^{1/2},

with the minimum attained at x = ∆^{1/2}(d^- A)^-.

[Figure 3: A linear span 𝒜 and a vector d in R^2_max,+ when ∆ = 𝟙 (left) and ∆ > 𝟙 (right).]

Fig. 3 presents examples of the mutual arrangement of a vector d and the linear span 𝒜 of the columns a_1 and a_2 of a matrix A. In the case when ∆ > 𝟙, the minimum distance to 𝒜 is attained at the vector y = ∆^{1/2} A(d^- A)^-.

Furthermore, we consider the sets

𝒜_1 = {Ax | Ax ≤ d, x ∈ X^n},   𝒜_2 = {Ax | Ax ≥ d, x ∈ X^n}.

Lemma 2.
Suppose A ∈ X^{m×n} and d ∈ X^m are a regular matrix and a regular vector. Then it holds that

min_{Ax ≤ d} ρ(Ax, d) = min_{Ax ≥ d} ρ(Ax, d) = ∆,

where the minima are respectively attained at

x_1 = (d^- A)^-,   x_2 = ∆(d^- A)^-.

A geometric interpretation in R^2_max,+ is given in Fig. 4. Note that if ∆ > 𝟙, then the minimum distance from d to 𝒜_1 and 𝒜_2 is attained at the respective vectors

y_1 = A(d^- A)^-,   y_2 = ∆A(d^- A)^-.

The next statement is a consequence of the above results.
Theorem 1.
Suppose A ∈ X^{m×n} and d ∈ X^m are a regular matrix and a regular vector. Then the following statements hold.

(a) A solution of equation (1) exists if and only if ∆ = 𝟙.

(b) If solvable, the equation has the maximum solution x = (d^- A)^-.

[Figure 4: The sets 𝒜_1 and 𝒜_2, and the vector d in R^2_max,+ when ∆ = 𝟙 (left) and ∆ > 𝟙 (right).]

Suppose that ∆ > 𝟙. In this case, equation (1) has no solution. However, we can define a quasi-solution to (1) as a solution of the equation

Ax = ∆^{1/2} A(d^- A)^-,

which always exists and takes the form x = ∆^{1/2}(d^- A)^-. The quasi-solution yields the minimum deviation between the vector y = Ax and the vector d in the sense of the metric ρ. When ∆ = 𝟙, the quasi-solution obviously coincides with the maximum solution.

Consider the problem of finding two vectors x_1 and x_2 that provide the minimum deviation between both sides of (1), while satisfying the respective inequalities Ax_1 ≤ d and Ax_2 ≥ d. A solution to the problem is readily given by Lemma 2. Finally, we present the following statement.
Lemma 3.
For any matrix A ∈ X^{m×n} and vector d ∈ X^m, the solution to inequality (2) is given by

x ≤ (d^- A)^-.

The general solution to equation (1) with an arbitrary matrix A and vector d is considered in [8, 9].

3.2 Second-Kind Equations and Inequalities

Suppose a matrix A ∈ X^{n×n} and a vector b ∈ X^n are given, whereas x ∈ X^n is an unknown vector. We examine the equation and inequality that have the form

Ax ⊕ b = x,   (3)
Ax ⊕ b ≤ x.   (4)

To solve equation (3), we propose an approach based on the use of a function Tr(A) that takes each square matrix A to a scalar according to the definition

Tr(A) = ⊕_{m=1}^{n} tr A^m.

The function is exploited to examine whether the equation has a unique solution, many solutions, or no solution, and so may play the role of the determinant. The solution involves the evaluation of the matrices A^*, A^×, and A^+. The matrices A^* and A^× are given by

A^* = I ⊕ A ⊕ ··· ⊕ A^{n-1},   A^× = A ⊕ ··· ⊕ A^n.

Let a_i^× be column i of A^×, and a_ii^× its diagonal element, i = 1, ..., n. To construct the matrix A^+, we take the set of columns a_i^× such that a_ii^× = 𝟙, and then reduce it by removing those columns that are linearly dependent on the others. Finally, the columns in the reduced set are put together to form the matrix A^+.

We start with the solution of the homogeneous equation and inequality in the form

Ax = x,   (5)
Ax ≤ x.   (6)

The general solutions to the problems in the case of irreducible matrices are given by the following results.

Lemma 4.
Let x be the solution of equation (5) with an irreducible matrix A. Then the following statements hold.

(a) If Tr(A) = 𝟙, then x = A^+ v for any vector v.

(b) If Tr(A) ≠ 𝟙, then there is only the trivial solution x = 𝟘.

Fig. 5 gives examples of solutions to homogeneous equations in R^2_max,+ for some particular matrices A = (a_1, a_2) under the condition Tr(A) = 𝟙. In the left picture, the solution is shown with a thick line drawn through the end point of the vector a_1. The right picture demonstrates the case when the solution coincides with the linear span of both columns of the matrix A.

[Figure 5: Solutions to homogeneous equations in R^2_max,+.]

Lemma 5.
Let x be the solution of inequality (6) with an irreducible matrix A. Then the following statements hold.

(a) If Tr(A) ≤ 𝟙, then x = A^* v for any vector v.

(b) If Tr(A) > 𝟙, then there is only the trivial solution x = 𝟘.

Fig. 6 demonstrates solutions of the homogeneous equation (5) and inequality (6) with a common matrix A.

[Figure 6: Solutions of equation (5) and inequality (6) with a common matrix A in R^2_max,+.]

In the general case of the nonhomogeneous equation and inequality, we have the following results.

Theorem 2.
Let x be the solution of equation (3) with an irreducible matrix A. Then the following statements hold.

(a) If Tr(A) < 𝟙, then x = A^* b.

(b) If Tr(A) = 𝟙, then x = A^* b ⊕ A^+ v for any vector v.

(c) If Tr(A) > 𝟙, then x = 𝟘 provided that b = 𝟘, and there is no solution otherwise.

Lemma 6.
Let x be the solution of inequality (4) with an irreducible matrix A. Then the following statements hold.

(a) If Tr(A) ≤ 𝟙, then x = A^*(b ⊕ v) for any vector v.

(b) If Tr(A) > 𝟙, then x = 𝟘 provided that b = 𝟘, and there is no solution otherwise.

A graphical illustration of the solution to the nonhomogeneous equations is given in Fig. 7.

[Figure 7: Solutions to nonhomogeneous equations in R^2_max,+.]

Related results for the case of arbitrary matrices can be found in [10, 8, 9].

3.3 Eigenvalues and Eigenvectors

A scalar λ is an eigenvalue of a matrix A ∈ X^{n×n} if there is a nonzero vector x ∈ X^n such that Ax = λx. Any vector x ≠ 𝟘 that satisfies the above equality is an eigenvector of A corresponding to λ.

If the matrix A ∈ X^{n×n} is irreducible, then it has only one eigenvalue, given by

λ = ⊕_{m=1}^{n} tr^{1/m}(A^m).   (7)

The corresponding eigenvectors of A have no zero entries and take the form x = A_λ^+ v, where A_λ = λ^{-1} A and v is any nonzero vector.

An example of an eigenvalue λ and an eigenvector x of a matrix A = (a_1, a_2) in R^2_max,+ is given in Fig. 8.

[Figure 8: An eigenvector x and eigenvalue λ of a matrix A in R^2_max,+.]

We conclude with an extremal property of the eigenvalue and eigenvectors of irreducible matrices.

Lemma 7.
Suppose A is an irreducible matrix with eigenvalue λ. Then it holds that

min_{x ∈ X^n_+} ρ(Ax, x) = λ ⊕ λ^{-1},

with the minimum attained at any eigenvector of A.

The eigenvalue-eigenvector problem and the above extremal property in the case of arbitrary matrices are examined in [11, 8, 9].
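In R_max,+, formula (7) amounts to the maximum cycle mean of the matrix: tr^{1/m}(A^m) becomes (tr A^m)/m, since raising to the rational power 1/m corresponds to division by m. A minimal sketch (function names are ours):

```python
NEG_INF = float("-inf")

def mp_mul(A, B):
    """Max-plus matrix product {AB}_ij = max_k (a_ik + b_kj)."""
    return [[max(a + b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def eigenvalue(A):
    """λ = ⊕_{m=1..n} tr^{1/m}(A^m): in R_max,+ this is max_m (tr A^m)/m,
    the maximum cycle mean of an irreducible matrix A."""
    n = len(A)
    lam, P = NEG_INF, A
    for m in range(1, n + 1):
        trace = max(P[i][i] for i in range(n))  # tr A^m = max_i {A^m}_ii
        lam = max(lam, trace / m)               # power 1/m is division by m
        P = mp_mul(P, A)
    return lam

A = [[1.0, 3.0], [0.0, 2.0]]
# Cycle means: self-loops give 1 and 2; the 2-cycle gives (3 + 0)/2 = 1.5.
assert eigenvalue(A) == 2.0
```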
4 Applications to Scheduling Problems

In this section, we show how to apply the results presented above to solve scheduling problems under various constraints (for further details on schedule development in project management see, e.g., [1, 2]). As the underlying idempotent semiring, we use R_max,+ in all examples under discussion.

4.1 Start-to-Finish Precedence Constraints

Consider a project that involves n activities. Activity dependencies are assumed to take the form of Start-to-Finish relations, which do not allow an activity to complete until some predefined time after the initiation of other activities. The scheduling problem of interest consists in finding the latest initiation time for all activities subject to given constraints on their completion time.

For each activity i = 1, ..., n, denote by x_i its initiation time, and by y_i its completion time. Let d_i be a due date, and a_ij a minimum possible time lag between the initiation of activity j = 1, ..., n and the completion of activity i. Given a_ij and d_i, the completion time of activity i must satisfy the relations

y_i = d_i,   x_j + a_ij ≤ y_i,   j = 1, ..., n.

When a_ij is not actually given for some j, it is assumed to be 𝟘 = −∞. The relations can be combined into one equation in the unknown variables x_1, ..., x_n,

max(x_1 + a_i1, ..., x_n + a_in) = d_i.

By replacing the ordinary operations with those of R_max,+ in all equations, we get

a_i1 x_1 ⊕ ··· ⊕ a_in x_n = d_i,   i = 1, ..., n.

Furthermore, we introduce the matrix A = (a_ij), i, j = 1, ..., n, and the vectors d = (d_1, ..., d_n)^T, x = (x_1, ..., x_n)^T. The scheduling problem under the Start-to-Finish constraints then leads us to the solution of the equation

Ax = d.

Consider the residual ∆ = (A(d^- A)^-)^- d and suppose that ∆ = 𝟙 = 0. According to Theorem 1, the equation has the maximum solution

x = (d^- A)^-.
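In R_max,+, the maximum solution x = (d^- A)^- unfolds into ordinary arithmetic as x_j = min_i (d_i − a_ij). The following sketch applies it to a small Start-to-Finish instance with hypothetical lags and due dates (the data is ours, for illustration only):

```python
NEG_INF = float("-inf")  # 𝟘: lag not given

# Hypothetical Start-to-Finish lags a_ij and due dates d_i for 3 activities.
A = [[3.0, 2.0, NEG_INF],
     [NEG_INF, 4.0, 1.0],
     [2.0, NEG_INF, 5.0]]
d = [10.0, 12.0, 14.0]

# Latest initiation times x = (d^- A)^-: x_j = min_i (d_i - a_ij),
# skipping the entries a_ij = -inf.
x = [min(d[i] - A[i][j] for i in range(3) if A[i][j] != NEG_INF)
     for j in range(3)]

# Completion times y = Ax: y_i = max_j (a_ij + x_j).
y = [max(A[i][j] + x[j] for j in range(3)) for i in range(3)]

assert x == [7.0, 8.0, 9.0]
assert y == d  # here ∆ = 𝟙, so the due dates are met exactly
```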
If it appears that ∆ > 0, then one can compute approximate solutions, together with the corresponding completion times, as follows:

x_0 = ∆^{1/2}(d^- A)^-,   y_0 = Ax_0;
x_1 = (d^- A)^-,   y_1 = Ax_1 ≤ d;
x_2 = ∆(d^- A)^-,   y_2 = Ax_2 ≥ d.

Note that the completion times have their deviation from the due dates bounded with

ρ(y_0, d) = ∆^{1/2},   ρ(y_1, d) = ρ(y_2, d) = ∆.

Suppose that the due date constraints are not mandatory and may be adjusted to some extent. As a vector of new due dates, it is natural to take d′ such that y_1 ≤ d′ ≤ y_2. In this case, the deviation of the new due dates from the original ones does not exceed ∆. The minimum deviation ∆^{1/2} is achieved when d′ = y_0.

As an example, consider a project of four activities with a given constraint matrix A and two due date vectors d_1 and d_2. Fig. 9 demonstrates a network representation of the precedence relations for the activities in the project.

[Figure 9: A network representation of the Start-to-Finish precedence relations.]

First consider the equation Ax = d_1. Evaluating (d_1^- A)^- and A(d_1^- A)^-, we find that ∆ = (A(d_1^- A)^-)^- d_1 = 0, so the equation has solutions, including the maximum solution x = (d_1^- A)^- = (6, ·, ·, ·)^T.

Now consider the equation Ax = d_2. We get ∆ = (A(d_2^- A)^-)^- d_2 = 4 > 0, so the equation has no solutions. The approximate solutions and the corresponding completion times are

x_0 = (9, ·, ·, ·)^T,   y_0 = (17, ·, ·, ·)^T;
x_1 = (7, ·, ·, ·)^T,   y_1 = (15, ·, ·, ·)^T;
x_2 = (11, ·, ·, ·)^T,   y_2 = (19, ·, ·, ·)^T.

4.2 Start-to-Start Precedence Constraints

Suppose there is a project consisting of n activities and operating under Start-to-Start precedence constraints that determine the minimum allowed time intervals between the initiation of activities. The problem is to find the earliest initiation time for each activity that does not violate these constraints. For each activity i = 1, ..., n, let b_i be an earliest possible initiation time, and let a_ij be a minimum possible time lag between the initiation of activity j = 1, ..., n and the initiation of activity i.
The initiation time x_i of activity i is subject to the relations

b_i ≤ x_i,   a_ij + x_j ≤ x_i,   j = 1, ..., n,

where at least one must hold as an equality. We can replace the relations with the one equation

max(x_1 + a_i1, ..., x_n + a_in, b_i) = x_i.

Representation in terms of R_max,+ gives the scalar equations

a_i1 x_1 ⊕ ··· ⊕ a_in x_n ⊕ b_i = x_i,   i = 1, ..., n.

With the matrix-vector notation A = (a_ij), b = (b_1, ..., b_n)^T, x = (x_1, ..., x_n)^T, we arrive at the problem of solving the equation

Ax ⊕ b = x.

Assume the matrix A to be irreducible. It follows from Theorem 2 that if Tr(A) ≤ 𝟙 = 0, then the equation has a solution given by

x = A^* b ⊕ A^+ v,

where v is any vector of appropriate size.

Consider a project of four activities with Start-to-Start relations, and examine two cases, with and without early initiation time constraints. Let a constraint matrix A be given, together with the two vectors b_1 = 𝟘 and b_2 = (1, ·, ·, ·)^T. A graph representation of the precedence relations involved in the project is depicted in Fig. 10.

[Figure 10: A graph representation of the Start-to-Start precedence relations.]

First suppose that b = b_1 = 𝟘, that is, no early initiation time constraints are imposed. Under this assumption, the equation becomes homogeneous and takes the form Ax = x. The matrix A is irreducible and Tr(A) = 0; therefore, the equation has a solution. Simple algebra gives the matrices A^* and A^×. Note that all diagonal entries of A^× are equal to 𝟙 = 0. However, considering that the first three columns are proportional, we take only one of them to form the matrix A^+. The solution then takes the form

x = A^+ v,   v ∈ R^2_max,+.

Consider the case of the nonhomogeneous equation Ax ⊕ b_2 = x. We calculate the vector A^* b_2 = (3, ·, ·, ·)^T, and then get

x = A^* b_2 ⊕ A^+ v,   v ∈ R^2_max,+.

4.3 Mixed Precedence Constraints

Consider a project that has both Start-to-Finish and Start-to-Start constraints. Let A_1 be a given Start-to-Finish constraint matrix, d a vector of due dates, and x an unknown vector of activity latest initiation times. To meet the constraints, the vector x must satisfy the inequality

A_1 x ≤ d.

Furthermore, there are also Start-to-Start constraints defined by a constraint matrix A_2. This leads to the equation

A_2 x = x.

Suppose the equation has a solution x = A_2^+ v. Substitution into the inequality yields

A_1 A_2^+ v ≤ d.

Application of Theorem 1 gives the maximum solution to the last inequality in the form v = (d^- A_1 A_2^+)^-. The solution to the whole problem is then written as

x = A_2^+(d^- A_1 A_2^+)^-.

As an illustration, we evaluate the solution to the problem with the Start-to-Finish matrix A_1 and the Start-to-Start matrix A_2 taken from the previous examples, and the due date vector d = (13, ·, ·, ·)^T. By using the results of the previous examples, we successively get
A_1 A_2^+ =
( 10  9 )
(  8  8 )
( 12 11 )
( 12 12 ),

and then (d^- A_1 A_2^+)^-. Finally, we have

x = A_2^+(d^- A_1 A_2^+)^- = (1, ·, ·, ·)^T.

4.4 Minimization of the Maximum Flow Time

Assume that a project has n activities and operates under Start-to-Finish constraints. For each activity, consider the time interval between its initiation and completion, which is usually referred to as the flow time, the turnaround time, or the processing time. The problem is to construct a schedule that minimizes the maximum flow time over all activities.

Let A be an irreducible constraint matrix, x a vector of initiation times, and y = Ax a vector of completion times for the project. The problem can be formulated as that of finding a vector x that minimizes

max(|y_1 − x_1|, ..., |y_n − x_n|) = ρ(y, x).

In terms of R_max,+, we have ρ(y, x) = ρ(Ax, x). The problem of interest thus takes the form

min_{x ∈ R^n} ρ(Ax, x)

and can be solved by the application of Lemma 7.

Let d be a given vector of activity due dates. Consider the problem of finding the latest initiation times for all activities so as to provide both the condition of minimum for the maximum flow time and the due date constraints in the form Ax ≤ d. By Lemma 7, the first condition is satisfied when x is an eigenvector corresponding to the eigenvalue λ of the matrix A. The eigenvectors take the form x = A_λ^+ v, where A_λ = λ^{-1} A and v is any vector of appropriate size.

By combining this result with the due date constraints, we get the inequality

A A_λ^+ v ≤ d.

With the maximum solution v = (d^- A A_λ^+)^- of the inequality, we arrive at the solution to the whole problem:

x = A_λ^+(d^- A A_λ^+)^-.

Let us evaluate the solution for a project of three activities with a given constraint matrix A and due date vector d. First we get λ = 4 with (7), and define the matrix A_λ = λ^{-1} A. Furthermore, we compute the matrices A_λ^* = A_λ^× and A_λ^+, and then calculate A A_λ^+ and (d^- A A_λ^+)^- = 3. Finally, we arrive at the solution

x = A_λ^+(d^- A A_λ^+)^- = (4, ·, ·)^T.
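For a Start-to-Start instance with Tr(A) < 𝟙 (all cycle means negative), the earliest initiation times reduce to x = A^* b. The following sketch uses hypothetical data of ours, not the examples above:

```python
NEG_INF = float("-inf")

def mp_mul(A, B):
    """Max-plus matrix product {AB}_ij = max_k (a_ik + b_kj)."""
    return [[max(a + b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def kleene_star(A):
    """A^* = I ⊕ A ⊕ ... ⊕ A^{n-1} in R_max,+."""
    n = len(A)
    star = [[0.0 if i == j else NEG_INF for j in range(n)] for i in range(n)]
    P = A
    for _ in range(n - 1):
        star = [[max(s, p) for s, p in zip(rs, rp)]
                for rs, rp in zip(star, P)]
        P = mp_mul(P, A)
    return star

# Hypothetical Start-to-Start lags a_ij and earliest initiation bounds b.
A = [[NEG_INF, -2.0], [1.0, NEG_INF]]  # the only cycle has mean (1 - 2)/2 < 0
b = [0.0, 3.0]
S = kleene_star(A)
x = [max(s + bj for s, bj in zip(row, b)) for row in S]  # x = A^* b
assert x == [1.0, 3.0]

# x is a fixed point of the Bellman equation: Ax ⊕ b = x.
Ax = [max(a + xj for a, xj in zip(row, x)) for row in A]
assert [max(axi, bi) for axi, bi in zip(Ax, b)] == x
```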
5 Conclusion

We have presented an approach that exploits idempotent algebra to solve computational problems in project scheduling. The approach allows one to handle and combine different constraints and objectives that appear in actual problems in an easy and unified way. It is shown how to reformulate the problems in the algebraic setting, and then find related solutions based on recent results in idempotent algebra theory. The solutions are given in a compact vector form that provides a basis for the development of efficient computation algorithms and software applications, including those intended for implementation on parallel and vector computers.

References
[1] A Guide to the Project Management Body of Knowledge: PMBOK Guide. Project Management Institute, Newtown Square, PA, 2008.

[2] K. Neumann, C. Schwindt, and J. Zimmermann, Project Scheduling with Time Windows and Scarce Resources, vol. 508 of Lecture Notes in Economics and Mathematical Systems. Springer, Berlin, 2003.

[3] R. Cuninghame-Green, Minimax Algebra, vol. 166 of Lecture Notes in Economics and Mathematical Systems. Springer, Berlin, 1979.

[4] U. Zimmermann, Linear and Combinatorial Optimization in Ordered Algebraic Structures, vol. 10 of Annals of Discrete Mathematics. Elsevier, 1981.

[5] F. L. Baccelli, G. Cohen, G. J. Olsder, and J.-P. Quadrat, Synchronization and Linearity: An Algebra for Discrete Event Systems. Wiley, Chichester, 1993.

[6] R. A. Cuninghame-Green, "Minimax algebra and applications," in Advances in Imaging and Electron Physics, Vol. 90, P. W. Hawkes, ed., pp. 1–121. Academic Press, San Diego, 1994.

[7] B. Heidergott, G. J. Olsder, and J. van der Woude, Max-plus at Work: Modeling and Analysis of Synchronized Systems. Princeton University Press, Princeton, 2005.

[8] N. K. Krivulin, Idempotent Algebra Methods for Problems in Modeling and Analysis of Complex Systems. St. Petersburg University Press, St. Petersburg, 2009. (in Russian)

[9] N. K. Krivulin, Methods of Idempotent Algebra. LAP Lambert Academic Publishing, Saarbrücken, 2011. (in Russian)

[10] N. K. Krivulin, "Solution of generalized linear vector equations in idempotent algebra," Vestnik St. Petersburg Univ. Math., no. 1 (2006) 16–26.

[11] N. K. Krivulin, "Eigenvalues and eigenvectors of matrices in idempotent algebra," Vestnik St. Petersburg Univ. Math., no. 2 (2006) 72–83.

[12] N. Krivulin, "Algebraic solutions to scheduling problems in project management," in Recent Researches in Communications, Electronics, Signal Processing and Automatic Control, pp. 161–166. WSEAS Press, 2011.

[13] V. N. Kolokoltsov and V. P. Maslov, Idempotent Analysis and Its Applications. Kluwer Academic Publishers, Dordrecht, 1997.

[14] J. S. Golan, Semirings and Affine Equations Over Them: Theory and Applications. Springer, New York, 2003.

[15] P. Butkovič, Max-linear Systems: Theory and Algorithms. Springer, London, 2010.

[16] G. L. Litvinov, "Idempotent/tropical analysis, the Hamilton-Jacobi and Bellman equations," arXiv:1203.0522 [math.RA].