A polynomial-time algorithm for the routing flow shop problem with two machines: an asymmetric network with a fixed number of nodes
Ilya Chernykh, Alexander Kononov, and Sergey Sevastyanov

Sobolev Institute of Mathematics, Koptyug ave. 4, Novosibirsk, 630090, Russia
{idchern,alvenko,seva}@math.nsc.ru
Novosibirsk State University, Pirogova str. 2, Novosibirsk, 630090, Russia
Novosibirsk State Technical University, Marksa ave. 20, Novosibirsk, 630073, Russia
Abstract.
We consider the routing flow shop problem with two machines on an asymmetric network. For this problem we discuss properties of an optimal schedule and present a polynomial-time algorithm assuming the number of nodes of the network to be bounded by a constant. To the best of our knowledge, this is the first positive result on the complexity of the routing flow shop problem with an arbitrary structure of the transportation network, even in the case of a symmetric network. This result stands in contrast with the complexity of the two-machine routing open shop problem, which was shown to be NP-hard even on a two-node network.
Keywords:
Scheduling · flow shop · routing flow shop · polynomially solvable case · dynamic programming

1 Introduction

The flow shop problem to minimize the makespan (also known as Johnson's problem) is probably the first machine scheduling problem described in the literature [6]. It can be set as follows.
Flow shop problem.
Sets M of machines and J of jobs are given; each machine M_i ∈ M has to process each job J_j ∈ J, and such an operation takes p_ji time units. Each job has to be processed by the machines in the same order: first by machine M_1, then by M_2, and so on. No machine can process two jobs simultaneously. The goal is to construct a feasible schedule of processing all the jobs with the minimum makespan (that is, with the minimum completion time of the last operation). According to the traditional three-field notation of scheduling problems (see [8]), Johnson's problem with a fixed number m of machines is denoted by Fm||C_max.

⋆ This research was supported by the program of fundamental scientific researches of the SB RAS No. I.5.1., project No. 0314-2019-0014, and by the Russian Foundation for Basic Research, projects 20-07-00458 and 18-01-00747.
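For the two-machine case, which plays a key role below, Johnson's rule solves the problem exactly. A minimal sketch (function names are ours, not the paper's):

```python
# Johnson's rule for the two-machine flow shop F2||C_max (a sketch):
# jobs with a_j <= b_j go first, in nondecreasing a_j; the remaining jobs
# follow in nonincreasing b_j.

def johnson_order(jobs):
    """jobs: list of pairs (a_j, b_j). Return job indices in Johnson's order."""
    first = sorted((j for j, (a, b) in enumerate(jobs) if a <= b),
                   key=lambda j: jobs[j][0])
    last = sorted((j for j, (a, b) in enumerate(jobs) if a > b),
                  key=lambda j: -jobs[j][1])
    return first + last

def makespan(jobs, order):
    """Completion time of machine B when both machines follow `order`."""
    t_a = t_b = 0
    for j in order:
        a, b = jobs[j]
        t_a += a                     # A processes job j without idling
        t_b = max(t_b, t_a) + b      # B waits until A finishes j and B is free
    return t_b
```

For instance, for the three jobs (3, 2), (1, 4), (5, 1), the rule schedules job 1 first, then jobs 0 and 2.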
Problem F2||C_max can be solved to the optimum by the well-known Johnson's algorithm, which is basically a sorting of the set of jobs according to Johnson's rule [6]. On the other hand, problem F3||C_max is NP-hard in the strong sense [5].

In classical scheduling problems (including the flow shop), it is assumed that the location of each machine is fixed, and either there is no pre-specified delay between the processing of two consecutive operations of a job, or such a delay depends on the distance between the corresponding machines. However, this assumption often diverges from real-life situations. Imagine that a company is engaged in the construction or maintenance of country houses, cottages, or chalets. The company has several crews which, for example, specialize either in preparing the site for construction, or pouring the foundation, or building a house, or landscaping the site. The facilities are located in a suburban area, and each crew must move from place to place to carry out its work. The sequence of jobs performed by the various crews is fixed: e.g., you cannot start to build a house before pouring the foundation.

To take into account the situation described above, we consider a natural combination of Fm||C_max with the well-known traveling salesman problem, the so-called routing flow shop problem introduced in [1]. In this model, jobs are located at nodes of a transportation network G, while machines have to travel over the edges of the network to visit each job and perform their operations in the flow shop environment. All machines start from the same location (the depot) and have to return to the depot after performing all the operations. The completion time of the last machine action (either traveling or processing an operation of some job in the depot) is considered to be the makespan of the schedule (C_max) and has to be minimized. (See Sect.
2 for the detailed formulation of the problem.)

We denote the m-machine routing flow shop problem by RFm||C_max, or by RFm|G = W|C_max when we want to specify a certain structure W of the transportation network.

Routing-scheduling problems can model many real-world applications. Examples of applications where machines have to travel between jobs include situations where parts are too big or heavy to be moved between machines (e.g., engine casings of ships), or the scheduling of robots that perform daily maintenance operations on immovable machines located in different places of a workshop [2]. Another interesting application is related to the routing and scheduling of museum visitors traveling as homogeneous groups [9]. That model is embedded in a prototype wireless context-aware museum tour guide system developed for the National Palace Museum of Taiwan, one of the top five museums in the world.

The routing flow shop problem is still understudied. Averbakh and Berman [1] considered RF2||C_max with exactly one job at each node, under the following restriction: each machine has to follow some shortest route through the set of nodes of the network (not necessarily the same for both machines). This will be referred to as the AB-restriction. They proved that for the two-machine problem the AB-restriction increases the optimal makespan by at most a constant factor, and that this bound is tight. They also showed that, under this restriction, there always exists a permutation optimal schedule, in which the machines process the jobs in the same order (a permutation property). Using this property, they presented O(n log n) algorithms for solving RF2|AB-restriction, G = W|C_max to the optimum, where W is a tree or a cactus and n is the number of jobs. These algorithms, therefore, provide a constant-factor approximation for the problem without the AB-restriction on a tree or on a cactus with a single job at each node.
Later on ([2]), they extended these results to the case of an arbitrary graph G and an arbitrary number m of machines by presenting an (m+1)/2-approximation algorithm for the RFm||C_max problem. Yu and Zhang [11] improved on the latter result, presenting an approximation algorithm with a better guarantee based on a reduction of the original problem to the permutation flow shop problem.

A generalized routing flow shop problem with buffers and release dates of jobs was also considered in [7]. The authors present a heuristic based on solving the corresponding multiple TSP.

Yu et al. [10] investigated the RF2||C_max problem with a single job at each node further. They obtained the following results:
1. The permutation property also holds for the problem without the AB-restriction.
2. The problem is NP-hard in the ordinary sense, even if G is a tree (moreover, even if G is a spider of diameter 4 with the depot in the center).
3. There is a constant-factor approximation algorithm that solves the RF2|G = tree|C_max problem in O(n) time.

Finally, the possibility of designing a polynomial-time algorithm for the special case of our problem, when the transportation network is symmetric, was claimed in [4] (although without any proof).

In the present paper, we investigate the generalization of the RF2||C_max problem to the case of asymmetric travel times and of an arbitrary number of jobs at any node. Thus, we have to consider a directed network G in which the travel times through an edge may be different in the opposite directions. (We will denote such a problem by →RF2||C_max.) We prove that the permutation property holds for this version of the problem as well. We also establish another important property: there exists an optimal permutation schedule (with the same job processing order π on both machines) such that for each node v, the subsequence π_v of π consisting of all jobs from node v obeys Johnson's rule.
These two properties allow us to design a dynamic programming algorithm which solves this problem in O(n^{g+1}) time, where g is the number of nodes in G. Thereby, we have established the polynomial-time solvability of the asymmetric two-machine routing flow shop problem with a constant number of network nodes. This result stands in contrast with the complexity result for the two-machine routing open shop problem, which is known to be NP-hard in the ordinary sense even if G consists of only two nodes (including the depot) [3].

The structure of the paper is as follows. Section 2 contains a formal description of the problem under investigation, as well as some notation and definitions. Properties of an optimal schedule are established at the beginning of Section 3, which also contains a description of the exact algorithm for solving the problem.
The analysis of the algorithm's properties follows in Section 4. Section 5 concludes the paper with some open questions for further investigation.
2 Formal description of the problem

Further, throughout the paper, an expression of the form x ∈ [α, β] (where α and β are integers, and x is an integer variable by definition) means that x takes any integral value from this interval; [β] := {1, 2, ..., β}. In this paper we consider the following problem.

Problem →RF2||C_max. We are given n jobs {J_1, ..., J_n} that are to be processed by two dedicated machines denoted by A and B. For each j ∈ [n], job J_j consists of two operations that should be performed in the given order: first the operation on machine A, and then the one on machine B. The processing times of the operations are equal to a_j and b_j, respectively. All jobs are located at nodes of a transportation network; the machines move between those nodes along the arcs of that network. At the beginning of the process, both machines are located at a node called the depot, and they must return to that very node after completing all the jobs.

Without loss of generality (and for the sake of convenience of the further description and analysis of the algorithm presented in Section 3), we will assume that a reduced network G = (V, E) (|V| = g + 2) is given, in which:
(1) only active nodes are retained, i.e., the nodes containing jobs (they will be referred to as job nodes) and two node-depots: the start-depot and the finish-depot;
(2) there are no jobs in either depot (otherwise, we split the original depot into three copies, the distances between which are equal to zero; one of those copies is treated as a job node, while the other two are job-free); the start-depot and the finish-depot get indices 0 and g + 1, respectively, while the job nodes get indices i ∈ [g] (g is the number of job nodes); thus, starting from the start-depot, each machine travels among the job nodes, and only after completing all the jobs may it arrive at the finish-depot;
(3) G is a complete directed graph in which each arc e = (v_i, v_j) ∈ E is assigned a non-negative weight ρ(e) = ρ_{i,j} representing the shortest distance from the node corresponding to i to the node corresponding to j in the source network in the given direction; therefore, the weights of the arcs satisfy the triangle inequalities; at that, the symmetry of the weights is not assumed, i.e., the weights of the forward and the backward arcs may not coincide.

The objective function C(S) is the time when machine B arrives at the finish-depot in schedule S, and this time should be minimized.

Other designations: N := (n_1, ..., n_g), where n_i denotes the number of jobs located at job node i ∈ [g]; ‖K‖ := Σ_{i∈[g]} |k_i| denotes the norm of a vector K = (k_1, ..., k_g).

Given an integer d > 0, we define a partial order ⋖ on the set R^d of d-dimensional real-valued vectors, such that for any two vectors x′ = (x′_1, ..., x′_d), x″ = (x″_1, ..., x″_d) ∈ R^d the relation x′ ⋖ x″ holds if and only if x′_i ≤ x″_i for all i ∈ [d]. By J(v) we will denote the set of indices of the jobs located at node v ∈ V.

By a schedule, we will mean, as usual, the set of starting and completion times of all operations. Since, however, such a schedule model admits a continuum of admissible values of its parameters, it will be more convenient for us to switch to a discrete model in which any schedule is determined by a pair of permutations {π′, π″} specifying the orders of processing the jobs by machines A and B, respectively. Each pair (π′, π″) uniquely defines both the routes of the machines through the nodes of network G and an active schedule S(π′, π″) of job processing, which is defined as follows.

A schedule S(π′, π″) is called active iff: (1) it is feasible for the given instance of problem →RF2||C_max; (2) it meets the precedence constraints imposed by the permutations {π′, π″}; (3) the starting time of no operation in this schedule can be decreased without violating the above-mentioned requirements.

An active schedule S(π′, π″) is called a permutation one if π′ = π″.

Definition 1.
For each j ∈ [n], we define a priority vector χ_j = (χ′_j, χ″_j, j) of job J_j, where (χ′_j = 1, χ″_j = a_j) if a_j ≤ b_j, and (χ′_j = 2, χ″_j = −b_j) otherwise. We next define a strict linear order ≺ on the set of jobs: for two jobs J_j, J_k (j, k ∈ [n]), the relation J_j ≺ J_k holds iff χ_j <_lex χ_k (i.e., vector χ_j is lexicographically less than χ_k). Clearly, for any two jobs J_j, J_k (j ≠ k), one and only one of the two relations holds: either J_j ≺ J_k or J_k ≺ J_j.

We will say that a permutation of jobs π and the corresponding permutation schedule meet the Johnson local property if for each node v ∈ V the jobs from J(v) are sequenced in permutation π properly, which means: in the lexicographically increasing order of their priority vectors. (Johnson [6] showed that in the case of the networkless two-machine flow shop problem, such a job order π provides the optimality of the corresponding permutation schedule.)

3 An exact algorithm for problem →RF2||C_max

The algorithm described in this section is based on two important properties of an optimal schedule established in the following theorems.
Theorem 1.
For any instance I of problem →RF2||C_max there exists an optimal schedule which is a permutation one.

Theorem 2.

For any instance I of problem →RF2||C_max there exists a permutation schedule which meets the Johnson local property and provides the minimum makespan on the set of all permutation schedules.

The proofs of these theorems are omitted due to the volume limitations. They can be found in the Appendix. The two theorems above imply the following
Corollary 1.
For any instance I of problem →RF2||C_max there exists an optimal schedule which is a permutation one and meets the Johnson local property.
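The corollary restricts the search to permutation schedules whose per-node order follows Definition 1. The two ingredients can be sketched as follows (a sketch; all names are ours; machine B repeating machine A's route, as Theorem 1 allows, is built into the evaluator):

```python
# Definition 1's priority order, and the makespan of a permutation schedule
# S(pi, pi) on a (possibly asymmetric) network.

def priority(j, a, b):
    """Priority vector chi_j of job J_j (Definition 1)."""
    return (1, a[j], j) if a[j] <= b[j] else (2, -b[j], j)

def proper_order(job_indices, a, b):
    """Jobs of one node, sorted in ascending order of the relation <."""
    return sorted(job_indices, key=lambda j: priority(j, a, b))

def evaluate(pi, node, a, b, rho, start=0, finish=None):
    """Makespan of S(pi, pi): both machines visit the jobs in order pi.
    node[j] is the node of job J_j; rho[u][v] is the directed distance."""
    t_a = t_b = 0
    u_a = u_b = start
    for j in pi:
        v = node[j]
        t_a += rho[u_a][v] + a[j]                 # A never idles
        t_b = max(t_b + rho[u_b][v], t_a) + b[j]  # B waits for A's completion
        u_a = u_b = v
    if finish is not None:
        t_b += rho[v][finish]                     # B returns to the finish-depot
    return t_b
```

Sorting lexicographically by χ_j reproduces Johnson's rule inside every node, which is exactly the "proper" numbering the algorithm below relies on.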
The algorithm for computing an exact solution of problem →RF2||C_max is based on the idea of dynamic programming and on the two properties of optimal solutions mentioned in Corollary 1 (thus enabling us to restrict the set of schedules under consideration to job sequences which meet these properties). So, from now on, we will consider only permutation schedules which meet the Johnson local property.

Let us number the jobs at each node v_i properly, i.e., in the ascending order of the relation ≺ (see Definition 1). Then, due to Theorem 2, the jobs at each node v_i (i ∈ [g]) should be processed in the order π_i = (1, 2, ..., n_i). According to this order, the jobs at node v_i will be numbered by two indices: J_ij (j ∈ [n_i]).

In the schedule under construction, we will highlight the time moments when a machine M ∈ {A, B} completes a portion of the jobs at node v_i and is preparing to move to another node. Each such moment will be called an intermediate finish point of machine M or, in short, an if-point of machine M. It follows from Theorem 2 that at each if-point t′ of machine A the set of jobs already completed by the machine is a collection of some initial segments [1, ..., k_i] of the sequences {π_i | i ∈ [g]}. This collection can be specified by a g-dimensional integral vector K = (k_1, ..., k_g) (and will be denoted by J(K)), where k_i denotes the number of jobs performed by machine A at node i by time t′.

By Theorem 1, machine B completely reproduces the route of machine A through the network nodes (as well as the order of processing the jobs by that machine) and, at some (later) point in time t″ ≥ t′, it also finds itself at its if-point with the same set J(K) of completed jobs, defined by vector K.
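As an aside, the state space just introduced (vectors K with a distinguished last node) can be enumerated in nondecreasing norm ‖K‖, as the algorithm below requires. A sketch (names are ours; i_star is 0-based here, whereas the text numbers the job nodes 1..g):

```python
# Enumerating the basic configurations (K, i_star): 0 <= k_i <= n_i and
# k_{i_star} > 0, ordered by nondecreasing ||K|| = sum of the components.
from itertools import product

def configurations(n_per_node):
    """n_per_node[i] is n_i. Yield pairs (K, i_star) in nondecreasing ||K||."""
    g = len(n_per_node)
    for K in sorted(product(*(range(n + 1) for n in n_per_node)), key=sum):
        for i_star in range(g):
            if K[i_star] > 0:           # the final job must lie at node i_star
                yield K, i_star
```

At most g·∏(n_i + 1) pairs are generated, which matches the bound N_C = O(gΠ) used in the running-time analysis of Section 4.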
Thus, a natural correspondence is established between the if-points of machines A and B: they are combined into pairs (t′_s, t″_s) of if-points at which the sets of jobs completed by machines A and B coincide and are defined by the same vector K_s = (k_{s1}, ..., k_{sg}). The pairs of if-points divide the whole process of performing the jobs by machines A and B into steps (s = 1, 2, ..., s̄), each step s being defined by two parameters: the node index (i_s) and the number of jobs (d_s) performed in this step at node i_s.

The tuple K̂ := (K, i*) consisting of a value of the vector K = (k_1, ..., k_g) and a value of a node index i* determines a configuration of a partial schedule of processing the subset of jobs J(K), with the final job at node i*. The set of admissible configurations is defined as the set including all basic configurations (with values K ∈ [0, n_1] × ··· × [0, n_g], i* ∈ [g], such that k_{i*} > 0) and two special configurations: the initial one K̂_S = (0, 0) and the final one K̂_F = (N, g + 1).

Algorithm A_DP for constructing the optimal schedule does two things: 1) it enumerates all possible configurations of partial schedules, and 2) for each of them, it accumulates the maximum possible set of pairwise incomparable solutions (characterized by pairwise incomparable pairs (t′, t″) of if-points with respect to the relation ⋖). In other words, given a configuration K̂ := (K, i*), we consider a "partial" bi-criteria problem P(K̂) of processing the jobs from J(K), with the final job at node i*. The objective is to minimize the two-dimensional vector-function F̄ := (F_1, F_2), where F_1, F_2 are the completion times of the jobs from J(K) by machines A and B, respectively. We compute the complete set F(K̂) of representatives of Pareto-optimal solutions of this problem.

For each solution F̄ = (F_1, F_2) ∈ F(K̂), let us define the parameter Δ(F̄) = F_2 − F_1. The set F(K̂) for each configuration K̂ will be stored as a list sorted in the ascending order of the component F_1. (At that, the values of F_2 and Δ(F̄) strictly decrease.) The first element of each list F(K̂) will be a solution with the value F_1 = 0. This is either a dummy solution F̃ = (0, ∞) (added to each list F(K̂) at the beginning of its formation), or a real solution with the value F_1 = 0 (if it is found).

In the course of the algorithm, the configurations {K̂ = (K, i*)} are enumerated (in order to create the lists F(K̂)) in non-decreasing order of the norm ‖K‖ of the vectors K. At that, the whole algorithm is divided into three stages: the initial, the main, and the final one. Configurations with i* = 0 are considered in the initial and the final stages only.

In the initial stage, the list F(K̂_S) for the initial configuration K̂_S = (0, 0) is created. It consists of the single solution (0, 0). In the main stage, for each k := ‖K‖ = 1, ..., n, the vectors K are enumerated in lexicographical ascending order; for each given vector K = (k_1, ..., k_g), only those values of i* ∈ [g] are enumerated for which k_{i*} > 0. In the final stage, for the final configuration K̂_F = (N, g + 1), we consider g variants of solutions obtained from the optimal solutions of the configurations {(N, i) | i ∈ [g]}. For each configuration K̂_i = (N, i), its optimal solution F̄*_i = (F*_1, F*_2) (with the minimum value of the component F*_2) is located at the very end of the list F(K̂_i). Having added to F*_2 the distance ρ_{i,g+1} from node v_i to the depot, we obtain the value of the objective function C(S) of our problem for the given variant of schedule S. Having chosen (from the g variants) the variant with the minimum value of the objective function, we find the optimum.

To create the list F(K̂) for a given configuration K̂ = (K, i*) of the main stage, we enumerate such values of the configuration K̂′ = (K′, i′) obtained at the completion of the previous step of the algorithm (we will call that configuration a pre-configuration, or "p-c", for short) that i′ ≠ i*, and that the vectors K and K′ differ in exactly one (the i*th) component, so that k′_{i*} < k_{i*}. At that, if K′ = 0, then i′ = 0, which means that K̂′ is the initial configuration. If, alternatively, K′ ≠ 0, then k′_{i′} > 0. (Clearly, there is no need for a machine to come to node v_{i′} without doing any job at it.)

We note that for each configuration K̂ = (K, i*) of the main stage, each variant of its p-c K̂′ = (K′, i′) can be uniquely defined by the pair D = (d, i′), where i′ ∈ [0, g] \ {i*}, and d ∈ [k_{i*}] is the number of jobs being processed in this step at node v_{i*}. The pairs (d, i′) are enumerated so that the loop on d is exterior with respect to the loop on i′.

For each given value of d, we construct an optimal schedule S_d = S(K̂, d) in problem F2||C_max for the jobs from J(K̂, d) = {J_{i*,j} | j ∈ [k_{i*} − d + 1, k_{i*}]}, and then compute three characteristics of that schedule: L_1(K̂, d) and L_2(K̂, d), which are the total workloads of machines A and B on the set of jobs J(K̂, d), and also δ(K̂, d) = C*_max(K̂, d) − L_2(K̂, d), where C*_max(K̂, d) is the length of schedule S_d.

After that, we start the loop on i′ in which we adjust the current list F(K̂) of solutions for configuration K̂. (Before starting the loop on d, the list consists of the single dummy solution F̃ = (0, ∞).) At each i′, for the p-c K̂′ = (K′, i′), we enumerate its Pareto-optimal solutions F̄′ = (F′_1, F′_2) ∈ F(K̂′) in the ascending order of F′_1 (and the descending order of Δ(F̄′) = F′_2 − F′_1). Given a solution F̄′ and the schedule S_d, we form a solution F̄″ = (F″_1, F″_2) for configuration K̂ as follows:

F″_1 := F′_1 + ρ_{i′,i*} + L_1(K̂, d);

F″_2 := F′_2 + ρ_{i′,i*} + L_2(K̂, d), if Δ(F̄′) ≥ δ(K̂, d) (a solution of type (a));
F″_2 := F′_1 + ρ_{i′,i*} + C*_max(K̂, d), if Δ(F̄′) < δ(K̂, d) (a solution of type (b)).

Case (b) means that the component F′_2 does not affect the parameters of the resulting solution F̄″ any more, and so considering further solutions F̄′ ∈ F(K̂′) (with greater values of F′_1 and smaller values of Δ(F̄′)) makes no sense, since it is accompanied by a monotonous increase of both F″_1 and F″_2 (between which a constant difference is established, equal to C*_max(K̂, d) − L_1(K̂, d)). Thus, for any given p-c K̂′, a solution of "type (b)" can be obtained at most once.

For each solution F̄″ obtained, we immediately try to understand whether it should be added to the current list F(K̂), and if so, whether we should remove some solutions from the list F(K̂) (majorized by the new solution F̄″).

To get answers to these questions, we find a solution F̄_ℓ = (F_{ℓ1}, F_{ℓ2}) in the list F(K̂) with the maximum value of the component F_{ℓ1} such that F_{ℓ1} ≤ F″_1. Such a solution always exists (we call it a control element of the list F(K̂)). Since in the loop on F̄′ ∈ F(K̂′) the component F″_1 monotonously increases, the search for the control element matching F̄″ can be performed not from the beginning of the list F(K̂), but from the current control element. Before starting the loop on F̄′, we assign the first item of the list F(K̂) to be the current control element.

If the inequality F_{ℓ2} ≤ F″_2 holds, the current step of the loop on F̄′ ∈ F(K̂′) ends without including the solution F̄″ in the list F(K̂) (we pass on to the next solution F̄′ ∈ F(K̂′)). Otherwise, if F″_2 < F_{ℓ2}, we look through the list F(K̂) (starting from the control element F̄_ℓ) and remove from the list all solutions F̄ = (F_1, F_2) majorized by the new solution F̄″ (which is expressed by the relations F″_1 ≤ F_1, F″_2 ≤ F_2).
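These formation and update rules can be sketched as follows (a sketch; names are ours; the list scan here is a plain linear one, whereas the algorithm amortizes it by scanning from the current control element; the form δ(K̂, d) = C*_max − L_2 is the value making the two cases agree on their common boundary):

```python
# One c-loop ingredient: forming F'' from a Pareto solution F' of the
# pre-configuration, and updating the Pareto list F(K_hat).
import bisect

def extend(sol, rho, L1, L2, Cmax):
    """Form (F1'', F2'') from sol = (F1', F2'); rho is the travel time
    rho_{i',i*}; L1, L2, Cmax describe the Johnson schedule S_d."""
    F1p, F2p = sol
    F1 = F1p + rho + L1               # machine A travels, then works L1
    if F2p - F1p >= Cmax - L2:        # Delta(F') >= delta: type (a)
        F2 = F2p + rho + L2           # B's late arrival is the bottleneck
    else:                             # type (b)
        F2 = F1p + rho + Cmax         # the Johnson schedule of S_d dominates
    return F1, F2

def insert_solution(front, f1, f2):
    """Insert (f1, f2) into the Pareto list `front` (sorted by increasing F1,
    hence decreasing F2) unless it is majorized; drop the points it majorizes."""
    if any(F1 <= f1 and F2 <= f2 for F1, F2 in front):
        return False                  # majorized by a stored solution
    front[:] = [(F1, F2) for F1, F2 in front if not (f1 <= F1 and f2 <= F2)]
    bisect.insort(front, (f1, f2))    # keep the list sorted by F1
    return True
```

A solution survives in the list exactly when it is incomparable (with respect to ⋖) with every other stored pair, which is the invariant used in the analysis of Section 4.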
At that, the condition F″_1 = F_{ℓ1} is sufficient for removing the current control element, while the inequality F″_2 ≤ F_2 is sufficient for removing subsequent elements. The scanning of the list F(K̂) stops as soon as either the first non-majorized list item distinct from the control element is found (for this item and for all subsequent items, the relations F_2 < F″_2 hold), or the list has been scanned till the end. We include the solution F̄″ in the list F(K̂) and assign it to be the new control element, which completes the current step of the c-loop.

4 Analysis of Algorithm A_DP

Theorem 3.
Algorithm A_DP finds an optimal solution of problem →RF2||C_max in time O(n^{g+1}).

Proof. Since the optimality of the solution found by Algorithm A_DP follows explicitly from the properties of the optimal solution proved in Theorems 1 and 2, to complete the proof of Theorem 3 it remains to show the validity of the bounds on the running time of the algorithm; to that end, it is sufficient to estimate the running time (T_BS) of the main stage of the algorithm.

In the main stage, for each basic configuration K̂, the set F(K̂) of all its Pareto-optimal solutions is found. Since this set is formed from the solutions obtained in the previous steps of the algorithm for the various pre-configurations of configuration K̂, the obvious upper bound on the value of T_BS is the product of the number of configurations (N_C), of the number of pre-configurations (N_PC) for a given configuration, and of the bound (T_step) on the running time of any step of the loop on configurations and pre-configurations (called a c-loop).

In each step of the c-loop, the list of solutions F(K̂′) of a given p-c K̂′ is scanned. From each such solution, a solution for configuration K̂ is generated, which is then either included or not included in the list F(K̂). The solutions included in the list in this step of the c-loop will be called "new" ones; the other solutions, included in F(K̂) before starting this step, will be called "old".

While estimating a new solution claiming to be included in F(K̂), we scan some "old" solutions of the list F(K̂), which is performed in two stages. In the first stage, we look through the elements of F(K̂), starting from the current control element, in order to find a new control element immediately preceding the applicant. In the second stage (in the case of the positive decision on including the applicant in the list), we check the (new) control element and the subsequent elements of F(K̂) subject to their removal from the list (if they are majorized by the applicant). We continue this process until we find either the first undeletable element or the end of the list. We would like to know: how many views of the items of the list F(K̂) will be required in total in one step of the c-loop? It is stated that no more than O(Z), where Z is the maximum possible size of the list F(K̂) in any step of the algorithm over all possible configurations K̂.

To prove this statement, we first note that none of the "new" elements included in the list F(K̂) in this step of the c-loop will be deleted in this step, since all "new" solutions included in the list are incomparable by the relation ⋖. This follows from the facts that: 1) all applicants formed by type (a) are incomparable; 2) if the last solution is formed by type (b), then it is either incomparable with the previous applicant, or majorized by it (and therefore is not included in the list). Thus, only "old" elements will be deleted from the list, and the total (in the c-loop step) number of such deletions does not exceed Z.

In addition, the viewing of an element of F(K̂), when it receives the status of a "control element", occurs at most once during each c-loop step, and so the total number of such views in one step does not exceed Z. There may also be "idle views" of elements subject to assigning them the status of a "control element". Such an idle view may happen only once for each applicant, and so the number of such idle views during one step of the c-loop does not exceed |F(K̂′)| ≤ Z.

Next, the total (over a step of the c-loop) number of views of elements of F(K̂) subject to their removal from the list does not exceed O(Z), as well. Indeed, viewing an element of F(K̂) with its removal occurs, obviously, for each element at most once (or, in total over the whole step, at most Z times). A possible "idle view" of an element of F(K̂) (without its deletion) happens at most once for each applicant, which totally amounts (over the current step of the c-loop) to at most |F(K̂′)| ≤ Z. Thus, the total number of views of the items of F(K̂), as well as the total running time of the c-loop step (T_step), does not exceed O(Z). Let us now estimate the number Z itself.

We know that for any given configuration K̂ the solutions F̄ = (F_1, F_2) from the list F(K̂) are incomparable with respect to the relation ⋖. Thus, the number of elements in the list F(K̂) does not exceed the number of different values of the component F_1. The value of the component F_1 is the sum of the workload of machine A and the total duration of its movement. (There are no idle times of machine A in the optimal schedule.) Since the workload of machine A (for a fixed configuration K̂) is fixed, the number of different values of the component F_1 can be bounded above by the number of different values that the length of a machine route along the nodes of network G can take. As we know, each passage of the machine along an arc (v_i, v_j) is associated with the performance of at least one job located at node v_j. Thus, any machine route contains x ≤ k_j ≤ n_j arcs entering node v_j, and the same number of arcs (x) leaving the node.

Let us define a configuration of a machine route as a matrix H = (h_ij) of size g × g, where h_ij (i ≠ j) specifies the multiplicity of passage of the arc (v_i, v_j) ∈ G in the route, and h_jj = n_j − Σ_{i≠j} h_ij. Thus, for any i ∈ [g], the following equality holds:

Σ_{j=1}^{g} h_ij = n_i.   (1)

Clearly, for any closed route the following equalities are also valid:

Σ_{i=1}^{g} h_ij = n_j,  j ∈ [g].   (2)

Hence, it follows that the number of different values of the route length of a machine does not exceed the number of configurations of a closed route. The latter does not exceed the number of different matrices H with properties (1) and (2). Let us (roughly) estimate from above the number (Z′) of such matrices without taking into account property (2).

The number of variants of the ith row of matrix H does not exceed the number of partitions of the number n_i into g parts, i.e., it is not greater than

C(n_i + g − 1, g − 1) = (n_i + 1)(n_i + 2) ··· (n_i + g − 1) / (g − 1)!
  = (n_i^{g−1} / (g − 1)!) · (1 + 1/n_i)(1 + 2/n_i) ··· (1 + (g − 1)/n_i)
  ≤ (n_i^{g−1} / (g − 1)!) · exp(g(g − 1) / (2n_i)).

Since the value of sup_{n_i ∈ [1,∞)} exp(g(g − 1)/(2n_i)) / (g − 1)! depends only on g, we obtain an upper bound of the form C(n_i + g − 1, g − 1) ≤ f(g) · n_i^{g−1}.

Denote Π = n_1 n_2 ··· n_g. Then Z ≤ Z′ ≤ (f(g))^g · Π^{g−1}, and the number of configurations (N_C) can be bounded above by O(gΠ). Finally, the number of pre-configurations is bounded by N_PC ≤ O(gn). Taking into account the above bounds, the bound Π ≤ (n/g)^g, and the boundedness of the parameter g by a constant, we obtain the final bound on the running time of the algorithm:

T_A ≈ T_BS ≈ N_C · N_PC · T_step ≤ φ(g) · O(Π g n) ≤ O(n^{g+1}).

Theorem 3 is proved. ⊓⊔

5 Conclusion

We have considered the two-machine routing flow shop problem on an asymmetric network (→RF2||C_max).
We have improved the result by Yu et al. [10] by showing that for a more general problem (the problem with an arbitrary asymmetric network) the existence of an optimal permutation schedule is still guaranteed. Next, we have presented a polynomial-time algorithm for the problem with a fixed number of nodes, which is the first positive result on the computational complexity of the general →RF2||C_max problem.

We now propose a few open questions for future investigation.

Question 1.
What is the parameterized complexity of problem →RF2||C_max with respect to the parameter g?

Question 2.
Are there any subcases of problem →RF2||C_max with unbounded g (e.g., G is a chain, or a cycle, or a tree of diameter 3, or a tree with a constant maximum degree, etc.) solvable in polynomial time?

Question 3.
Are there any strongly NP-hard subcases of problem →RF2||C_max for which NP-hardness is not based on the underlying TSP? In other words, is it possible that for some graph structure G = W the TSP on W is easy, but problem →RF2|G = W|C_max is strongly NP-hard?

References
1. Averbakh, I., Berman, O.: Routing two-machine flowshop problems on networks with special structure. Transportation Science 30(4), 303–314 (1996). https://doi.org/10.1287/trsc.30.4.303
2. Averbakh, I., Berman, O.: A simple heuristic for m-machine flow-shop and its applications in routing-scheduling problems. Operations Research 47(1), 165–170 (1999). https://doi.org/10.1287/opre.47.1.165
3. Averbakh, I., Berman, O., Chernykh, I.: The routing open-shop problem on a network: complexity and approximation. European Journal of Operational Research 173(2), 531–539 (2006). https://doi.org/10.1016/j.ejor.2005.01.034
4. Chernykh, I., Kononov, A., Sevastyanov, S.: Exact polynomial-time algorithm for the two-machine routing flow shop problem with a restricted transportation network. In: Optimization Problems and Their Applications (OPTA-2018), Abstracts of the VII International Conference, Omsk, Russia, July 8–14, 2018, pp. 37–37. Omsk State University (2018)
5. Garey, M.R., Johnson, D.S., Sethi, R.: The complexity of flowshop and jobshop scheduling. Mathematics of Operations Research 1(2), 117–129 (1976)
6. Johnson, S.M.: Optimal two- and three-stage production schedules with setup times included. Rand Corporation (1953)
7. Józefczyk, J., Markowski, M.: Heuristic solution algorithm for routing flow shop with buffers and ready times. In: Swiatek, J., Grzech, A., Swiatek, P., Tomczak, J. (eds.) Advances in Intelligent Systems and Computing, vol. 240, pp. 531–541. Springer International Publishing (2014). https://doi.org/10.1007/978-3-319-01857-7_52
8. Lawler, E.L., Lenstra, J.K., Rinnooy Kan, A.H.G., Shmoys, D.B.: Chapter 9. Sequencing and scheduling: Algorithms and complexity. In: Logistics of Production and Inventory, Handbooks in Operations Research and Management Science, vol. 4, pp. 445–522. Elsevier (1993). https://doi.org/10.1016/S0927-0507(05)80189-6
9. Yu, V.F., Lin, S., Chou, S.: The museum visitor routing problem. Applied Mathematics and Computation 216(3), 719–729 (2010). https://doi.org/10.1016/j.amc.2010.01.066
10. Yu, W., Liu, Z., Wang, L., Fan, T.: Routing open shop and flow shop scheduling problems. European Journal of Operational Research 213(1), 24–36 (2011). https://doi.org/10.1016/j.ejor.2011.02.028
11. Yu, W., Zhang, G.: Improved approximation algorithms for routing shop scheduling. In: Lecture Notes in Computer Science, vol. 7074, pp. 30–39. Springer (2011). https://doi.org/10.1007/978-3-642-25591-5_5
A The proof of Theorems 1 and 2
Proof of Theorem 1.
For reasons of convenience, in the proof of this theorem (and only here) we will assume that each job is located at a separate job node. This enables us to assume that each node is visited by each machine only once, and thus the route of each machine in this model is a
Hamiltonian path in a directed network G* = (V*, E*) from node 0 to node (n + 1), or a permutation π = (π_0, π_1, ..., π_{n+1}) of indices from 0 to n + 1 starting with π_0 = 0 and ending with index π_{n+1} = n + 1. (The set of such permutations will be denoted as P.) Note that network G* admits arcs of zero length.

Let i ≺_π j and i ⪯_π j denote strict and non-strict precedence of node i to node j in a route π (in particular, i ⪯_π j admits i = j). We will assume that job j ∈ [n] is located at the node with the same index j. Given a permutation π ∈ P, V(π, j) := {i > 0 | i ⪯_π j} and V′(π, j) := {i > 0 | i ≺_π j} will denote the sets of jobs located in the initial segment of sequence π including or excluding job j, respectively.

In the course of the proof, we will construct dense schedules S_D⟨r; π′, π″⟩ determined by three parameters: a time moment r ≥ 0 and two permutations π′, π″ ∈ P specifying the routes of machines A and B, respectively. The set of such schedules will be denoted as S_D. Each schedule S_D⟨r; π′, π″⟩ ∈ S_D is constructed according to the following rules: machines A and B, starting from node 0 at moments 0 and r, respectively, follow (without any idle times) the routes specified by the permutations π′ and π″, spending all the time just on processing the jobs and moving between nodes.

It is clear that schedule S_D⟨r; π′, π″⟩ ∈ S_D can be infeasible for some values of the parameters r, π′, π″. Still, for any pair of permutations π′, π″ ∈ P there exists a value r = r̂ for which the corresponding schedule S_D⟨r̂; π′, π″⟩ is feasible, and its length coincides with the length of the active schedule S(π′, π″). It is also clear that r̂ is the minimum possible value of r for which schedule S_D⟨r; π′, π″⟩ is feasible.
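This minimal shift r̂ can be computed directly from the per-job constraints formalized in (3) below: for each job, machine B must not reach its node before machine A completes it. The following sketch (hypothetical data and helper names; not from the paper) computes r̂ for a small instance:

```python
def path_length(rho, pi, j):
    """R(pi, j): length of the initial segment of route pi up to node j,
    where rho[u][v] is the length of arc (u, v)."""
    total = 0
    for prev, cur in zip(pi, pi[1:]):
        total += rho[prev][cur]
        if cur == j:
            return total
    raise ValueError("node not on route")

def min_start(rho, a, b, pi1, pi2, n):
    """Minimum r = r_hat for which the dense schedule S_D<r; pi1, pi2>
    is feasible: for every job j, machine B (travel along pi2 plus the
    B-operations of the jobs strictly preceding j) must reach node j
    no earlier than machine A (travel along pi1 plus the A-operations
    up to and including job j) completes job j."""
    r = 0
    for j in range(1, n + 1):
        done_a = path_length(rho, pi1, j) + sum(
            a[i] for i in pi1[1:pi1.index(j) + 1])
        arrive_b = path_length(rho, pi2, j) + sum(
            b[i] for i in pi2[1:pi2.index(j)])
        r = max(r, done_a - arrive_b)
    return r

# tiny hypothetical instance: n = 2 jobs, unit travel times, depot 0,
# terminal node 3; both machines follow the route (0, 1, 2, 3)
rho = {u: {v: 1 for v in range(4)} for u in range(4)}
r_hat = min_start(rho, {1: 3, 2: 2}, {1: 1, 2: 4},
                  [0, 1, 2, 3], [0, 1, 2, 3], 2)
print(r_hat)  # 4
```

Since both machines move and process without idle times in a dense schedule, taking the maximum deficit over all jobs yields exactly the minimal feasible start r̂.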
Such a value r̂ is uniquely defined for any given pair (π′, π″); this function will be denoted as r̂(π′, π″).

In fact, schedule S_D⟨r; π′, π″⟩ is feasible iff machine B arrives at each node j ∈ [n] not earlier than machine A completes its operation of job j:

  r + R(π″, j) + B(V′(π″, j)) ≥ R(π′, j) + A(V(π′, j)),  j ∈ [n],    (3)

where A(Y), B(Y) denote the total length of the operations of machines A and B over the jobs from set Y, and R(π, j) = Σ_{i∈[k]} ρ(π_{i−1}, π_i) is the length of the path (π_0, π_1, ..., π_k) from the depot to node π_k = j.

Given an instance I of problem →RF2||C_max, let S be an optimal schedule in which machine B follows the shortest route around the network nodes (among all routes of machine B in optimal schedules). Let π¹, π² ∈ P be the routes of machines A and B in that schedule S; r̂ := r̂(π¹, π²) and Ŝ := S_D⟨r̂; π¹, π²⟩. Then schedule Ŝ ∈ S_D is feasible and optimal.

Let us number the nodes of network G* (as well as the jobs) according to the order of their passing by machine A: the nodes are numbered by indices from 0 to n + 1, and the jobs by indices from 1 to n. Thus, π¹ = (π¹_0, π¹_1, ..., π¹_{n+1}) = (0, 1, ..., n + 1). Let us define in sequence π² a sub-sequence of marked nodes π* = (π²_{ν_0}, π²_{ν_1}, ..., π²_{ν_T}) by the recursion: ν_0 = 0, ν_t = min{j | π²_j > π²_{ν_{t−1}}}, t ∈ [T]; ν_T = n + 1. In other words, we go along the route of machine B and “mark” the nodes according to a simple rule: first, we mark node 0; next, we mark the first met node with a larger index, and so on, until we arrive at node (n + 1) (which we also mark). Then we have: 0 = ν_0 < ν_1 (= 1) < ··· < ν_T = n + 1 and 0 = π²_{ν_0} < π²_{ν_1} < ··· < π²_{ν_{T−1}} (= n) < π²_{ν_T} = n + 1. It can also be easily seen that π* is a sub-sequence of both sequences π¹ and π².

Let L denote the set of marked nodes.
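In other words, the marked nodes are exactly the left-to-right records of the sequence π². A minimal sketch (our helper names, hypothetical data) illustrating the marking rule and the claim that π* is a sub-sequence of both routes:

```python
def marked_nodes(pi2):
    """The sub-sequence pi* of marked nodes of machine B's route
    pi2 = (0, ..., n+1): node 0 and, repeatedly, the first node met
    whose index exceeds the last marked one (left-to-right records)."""
    marked, best = [], -1
    for node in pi2:
        if node > best:
            marked.append(node)
            best = node
    return marked

def is_subsequence(sub, seq):
    """Check that `sub` occurs in `seq` in the same relative order."""
    it = iter(seq)
    return all(x in it for x in sub)

# example with n = 5: machine B visits the jobs in the order 2, 1, 4, 3, 5
pi2 = [0, 2, 1, 4, 3, 5, 6]
pi1 = list(range(7))          # machine A's route (0, 1, ..., n + 1)
pi_star = marked_nodes(pi2)
print(pi_star)                # [0, 2, 4, 5, 6]
# pi* is a sub-sequence of both routes, as claimed
assert is_subsequence(pi_star, pi1) and is_subsequence(pi_star, pi2)
```

Note that the last job record is always n and the terminal record is n + 1, in agreement with the chain of inequalities above.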
Other nodes will be called mobile ones. We denote by W¹_t, W²_t (t ∈ [T]) the sets of mobile nodes passed by machines A and B, respectively, between two consecutive marked nodes π²_{ν_{t−1}} and π²_{ν_t}. (These sets will be referred to as segments of the permutations π¹ and π².) Clearly, W²_1 = W¹_T = ∅.

Comparing permutations π¹ and π², one can observe that each mobile node is located in π¹ in a segment with a lesser index than in π². (For example, all the elements of W¹_1 come there from the segments W²_2, ..., W²_T.) Based on this property of the permutations π¹, π², procedure Trans described below transforms the route of machine B step by step, transferring exactly one mobile node to a new position in each step. In the course of this transformation, the current (variable) permutation specifying the route of machine B will be denoted by π̃. Since this transformation of the route of machine B leaves the mutual order of the marked nodes stable, we can transfer the above definition of segments (of permutations π¹ and π²) onto permutation π̃.

At the end of procedure Trans, the route of machine B will coincide with the route of machine A (i.e., we will have π̃ = π¹), and the corresponding dense schedule S(π̃) := S_D⟨r̂; π¹, π̃⟩ will become a permutation one. After completing the description of procedure Trans, we prove Lemma 1, which provides some important properties of the schedules S(π̃) obtained in the steps of the procedure.

Procedure
Trans
The procedure is divided into T stages, where at the t-th stage (t ∈ [T]) we consider the transfer of all mobile nodes from the t-th segment of permutation π² to their “proper places”, i.e., to those segments where they stand in permutation π¹, and in the ascending order of their numbers. (The first stage is, therefore, empty, since W²_1 = ∅.) The t-th stage is divided into steps, where each step performs the transfer of the current mobile node standing in the current permutation π̃ at position ν_t − 1. Each step decreases by one the number of mobile nodes in the current (t-th) segment of permutation π̃, and so, after a finite number of steps, we will see in this position the marked node π²_{ν_{t−1}}, which means the completion of the stage.

We notice that in any step of stage t, all nodes preceding in π̃ the marked node π²_{ν_{t−1}} (inclusively) are sequenced in π̃ in the ascending order of their numbers.

Lemma 1.
A dense schedule S(π̃) obtained after each step of procedure Trans is feasible. Moreover, machine B arrives at each marked node in schedule S(π̃) not earlier (and by a not shorter way) than in schedule Ŝ.

Proof. Since we start in
Trans from permutation π̃ := π², the corresponding schedule S(π̃) = Ŝ is feasible. Suppose that in some step of stage t, schedule S(π̃) is still feasible, and let an item π̃_j := y of π̃ be transferred from position j = ν_t − 1 to a position i < j. (At the same time, all the items π̃_i, ..., π̃_{j−1} increase their positions in π̃ by 1.) For convenience, we keep the notation π̃ for the route of machine B before this transposition, while the new permutation of nodes (after the transposition of node y) will be denoted by π̃′.

Note that π̃′_{i−1} (= π̃_{i−1}) < y < π̃′_{i+1} (= π̃_i). Suppose that in schedule S(π̃) machine B leaves node π̃_{i−1} at time τ. Then it appears at node π̃_i at time

  τ + ρ(π̃_{i−1}, π̃_i) ≥ τ′,    (4)

where τ′ is the time when machine A leaves node π̃_i, since schedule S(π̃) is feasible. What will change in the schedule after the transposition of node y to the position between nodes π̃_{i−1} and π̃_i? In the new permutation π̃′, machine B goes from node π̃_{i−1} (at time τ) to node y, and only after that to node π̃_i. Suppose that schedule S(π̃′) is infeasible with respect to job y. This means that in this schedule machine B arrives at node y too early — when machine A has not yet completed this job. Suppose that it arrives ε > 0 time units before the completion of job y by machine A. Then machine A will exit from node y at time τ + ρ(π̃_{i−1}, y) + ε and will arrive at node π̃_i not earlier than at time τ + ε + ρ(π̃_{i−1}, y) + ρ(y, π̃_i). Hence, τ′ = τ + ε + ρ(π̃_{i−1}, y) + ρ(y, π̃_i) + ∆, where ∆ is a non-negative additive term (including the length of the operation of job π̃_i on machine A). In view of the triangle inequality, we obtain the inequality τ′ > τ + ρ(π̃_{i−1}, π̃_i), contradicting (4). This implies that schedule S(π̃′) is feasible for job y = π̃′_i. It remains feasible for jobs π̃′_{i+1}, . . .
, π̃′_j as well, since the insertion of job y prior to these jobs in the route of machine B just postpones their execution by machine B to a later time (compared to that in schedule S(π̃)).

In the subsequent positions (j + 1 = ν_t and onwards), sequence π̃ has not changed (and coincides with π²). Let us show that for the jobs in these positions schedule S(π̃′) is also feasible. To that end, it is sufficient to show that in schedule S(π̃′) machine B arrives at node z := π̃′_{ν_t} = π²_{ν_t} (and therefore, at the subsequent nodes as well) not earlier than in Ŝ. Indeed, if we assumed that it arrives at node z in schedule S(π̃′) earlier than in Ŝ (while having processed the same set of jobs V′(π̃′, z) = V′(π², z)), this would mean that its path from the depot to node z is shorter in the route π̃′ than in π². In this case, we could define a new schedule Ŝ′ as a combination of schedules S(π̃′) and Ŝ. For machine A, it remains the same as in both schedules; for machine B, we take schedule S(π̃′) for the jobs from V′(π̃′, z) = V′(π², z) and schedule Ŝ for the remaining jobs. Clearly, schedule Ŝ′ is not dense. However, it would be feasible and optimal, since its length coincides with that of schedule Ŝ. In addition, as we established, the route of machine B in Ŝ′ is shorter than in schedule Ŝ, which contradicts the choice of schedule Ŝ. Thus, the feasibility of schedule S(π̃′) obtained after performing the current step of procedure Trans is confirmed. At the same time, we have proved that machine B arrives in schedule S(π̃′) at the marked node z (and hence, at the subsequent marked nodes) not earlier than in schedule Ŝ. It is clear that during the subsequent steps of procedure Trans this property will not be violated, since in the subsequent transpositions only additional nodes will be inserted prior to node z. Lemma 1 is proved. ⊓⊔

Let us proceed with the proof of Theorem 1.
Applying Lemma 1 to the permutation π̃ = π¹ obtained at the completion of procedure Trans, we obtain:

  R(π², j) ≤ R(π¹, j),  ∀ j ∈ L.

Applying these inequalities to relations (3) for j ∈ L, we derive the inequalities:

  r̂ + B(V′(π², j)) ≥ A(V(π¹, j)),  j ∈ L.    (5)

Now, let us consider an arbitrary j ∈ [n], and let z := π²_{ν_t} ⪯_{π²} j ≺_{π²} π²_{ν_{t+1}}. Then

  r̂ + B(V′(π², j)) ≥ r̂ + B(V′(π², z)) ≥ A(V(π¹, z)) ≥ A(V(π², j)),    (6)

where the second inequality follows from (5). The last inequality in this chain follows from the fact that all nodes standing in permutation π² prior to j (inclusively) have indices not greater than z. (It is evident for the marked nodes; the mobile nodes appearing in permutation π² in the (t + 1)-th or preceding segments precede the marked node z in permutation π¹, whence V(π², j) ⊆ V(π¹, z).)

And now, the final step. Consider the permutation schedule S* := S_D⟨r̂; π², π²⟩, in which machine A follows the route π². Then both machines pass the same way to any node j. Adding the length R(π², j) of that path to both sides of the resulting inequality (6), we obtain the relations

  r̂ + R(π², j) + B(V′(π², j)) ≥ R(π², j) + A(V(π², j)),  j ∈ [n],

sufficient for the feasibility of schedule S* (see (3)). Besides, since the schedule of machine B in S* coincides with that in schedule Ŝ, both schedules have the same length, which means that schedule S* is also optimal. Theorem 1 is proved. ⊓⊔

Now we return to the original model on network G, where we will consider permutation schedules only. A permutation π = (π_1, ..., π_n) of job indices of a given instance I will specify the order of job processing by both machines, which also uniquely defines the routes of the machines. The function S̃(I, π) will define the active schedule specified for a given instance I by permutation π.

Definition 2.
Further, for the convenience of reasoning, we renumber the jobs J_j (j ∈ [n]) in the descending order of their Johnson's priorities (see Definition 1). Given a permutation π = (π_1, ..., π_n) of job indices and a node v ∈ V, we will say that a pair of jobs π_i, π_j ∈ J(v), i < j, stands improperly (and forms an inversion) in π if π_i > π_j; Inv(π) will denote the total (over all v ∈ V) number of such pairs in π. Jobs π_i, π_j ∈ J(v), i < j, will be called v-neighbors if there are no other jobs from J(v) between them in π.

Proof of Theorem 2.
Among the set of permutations defining optimal permutation schedules (this set is nonempty, by Theorem 1), we choose a permutation π* on which the minimum of the function Inv(π) is attained, and show that Inv(π*) = 0.

Suppose that Inv(π*) > 0. Then in network G there is a node v* ∈ V and a couple of jobs at this node standing in permutation π* improperly, and so, there are also v*-neighbors in π* with this property: π*_i > π*_j, i < j. To show that this is impossible, we transform the original instance I of problem →RF2||C_max into a special instance Î of the flow shop problem F2||C_max. Consider the active schedule S* = S̃(I, π*). In this active schedule, we will distinguish the time intervals of processing the jobs from J(v*). All other time intervals of “not working at node v*” (where a machine either performs jobs not from J(v*), or moves between network nodes, or simply stands idle) will be called “inserts”. (In fact, “inserts” will stand for such maximal by inclusion time intervals for each machine.) Since S* is a permutation schedule, there is a one-to-one correspondence between the inserts of machines A and B. They are divided into pairs related to processing the same subsets of jobs not located at v*. We represent each such pair of inserts in the form of a schedule of some pseudo-job. (These pseudo-jobs will be assigned additional indices k ∈ K.) For such a schedule to be feasible, it is necessary for each pseudo-job to get rid of the overlapping of its “operations”. To that end, we perform the following transformation of schedule S*: if for some pseudo-job k′ ∈ K the intervals [s¹_{k′}, s¹_{k′} + a_{k′}] and [s²_{k′}, s²_{k′} + b_{k′}] of processing its “operations” on machines A and B have an intersection of length λ_{k′} > 0, then we shorten both operations by λ_{k′}; the new operation durations are a′_{k′} := a_{k′} − λ_{k′} and b′_{k′} := b_{k′} − λ_{k′}. Further, instead of the pseudo-job k′, we will consider a “new job” with the same index k′ and the “shortened” lengths of the operations (a′_{k′}, b′_{k′}). The intervals of processing these operations do not overlap in the modified schedule, and so, the schedule becomes feasible with respect to job k′.
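The overlap λ_{k′} and the shortened durations are elementary to compute; a minimal sketch (our function name, hypothetical numbers):

```python
def reduce_pseudo_job(s1, a, s2, b):
    """Overlap lambda of the two 'operation' intervals
    [s1, s1 + a] and [s2, s2 + b] of a pseudo-job, together with the
    shortened durations (a', b') = (a - lambda, b - lambda)."""
    lam = max(0, min(s1 + a, s2 + b) - max(s1, s2))
    return lam, a - lam, b - lam

# A-operation occupies [3, 8], B-operation [6, 10]: overlap lambda = 2
print(reduce_pseudo_job(3, 5, 6, 4))  # (2, 3, 2)
```

When the intervals do not intersect, λ_{k′} = 0 and the pseudo-job is left untouched, matching the case distinction made later in the proof.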
It is also clear that such a transformation does not violate the feasibility of the schedule for the other jobs (since the schedule for all jobs following k′ is shifted synchronously on machines A and B by the same amount λ_{k′}) and does not change the values λ_k already set for other pseudo-jobs. In addition, the schedule length is reduced by λ_{k′}.

We will denote by Î the instance of problem F2||C_max with the job set K ∪ J(v*); Ŝ will stand for the schedule for this set of jobs obtained from S* after performing the reduction of the pseudo-jobs. Clearly, it is a permutation schedule. Let π̂ denote the permutation of the jobs of instance Î in schedule Ŝ.

What else can be said about schedule Ŝ? First, that

  C_max(Ŝ) = C_max(S*) − Λ,    (7)

where Λ := Σ_{k∈K} λ_k. Second, the schedule is “absolutely dense”, i.e., it contains nothing but the intervals of processing the operations in the whole interval from 0 to C_max(Ŝ). (All machine movements and idle times were “packed” into the operations of the pseudo-jobs.) Third, it is feasible for the jobs of instance Î and is optimal (its length coincides with the workload of each machine). Evidently, Ŝ is the active schedule defined by permutation π̂, i.e., Ŝ = S̃(Î, π̂).

Let us transform permutation π̂ into a permutation π̂′ as follows. It can be seen that the v*-neighbors in permutation π*, i.e., the jobs i* = π*_i and j* = π*_j standing there improperly, are either adjacent in permutation π̂ or separated from each other by a new job k ∈ K. In the first case, we just swap them. In the second case, it is clear that job k stands improperly with at least one of its neighbors: either k ≺ i* or j* ≺ k (since otherwise, the relation i* ≺ j* would follow from the transitivity of the relation ≺, which would contradict the choice of these jobs). Suppose, for example, that k ≺ i*.
Then we swap i* first with k and then with j*. As was shown by Johnson [6], rearranging neighboring jobs standing improperly does not increase the length of the schedule. Thus, in both cases, the length of the active schedule Ŝ′ = S̃(Î, π̂′) will not exceed the length of schedule Ŝ:

  C_max(Ŝ′) ≤ C_max(Ŝ).    (8)

Finally, we transform schedule Ŝ′ into a schedule S′ of the original instance I by increasing the durations of the operations of each job k ∈ K by λ_k (an “anti-reduction” procedure) and by restoring (in the time intervals obtained) the schedule for those jobs of the original instance that were performed in schedule S* within these pseudo-jobs. What can we say about schedule S′?

Firstly, it is feasible. For the jobs from J(v*), this follows from the feasibility of schedule Ŝ′ and from the obvious fact that the “anti-reduction” procedure for each “new job” k ∈ K synchronously shifts the intervals of processing the operations of any subsequent job j ∈ J(v*) on machines A, B by the same amount, which preserves the feasibility of the schedule for job j. Let us show the feasibility for the remaining jobs of the original instance I (not belonging to J(v*) and thus included in some pseudo-jobs).

We will show this for the jobs included in a pseudo-job k′ ∈ K. If the “operations” of this pseudo-job do not overlap in schedule S* (and thus they have not undergone the reduction procedure during the transformation S* → Ŝ of the original schedule), then they do not overlap in schedule Ŝ′ either, since the latter is feasible. Clearly, the anti-reduction procedure (applied to other pseudo-jobs) has no impact on the mutual position of the two operations of job k′ (they do not overlap in schedule S′, as before), which implies the feasibility of S′ with respect to pseudo-job k′, as well as for all original jobs included in k′. Let us next assume that the “operations” of pseudo-job k′ have undergone the reduction procedure.
After the reduction of these “operations”, a minimum possible gap is established between the starting times of the reduced operations of the new job k′ (it is minimum possible, since it coincides with the length of the A-operation of this job). In the course of the further transformations of the schedule, consisting in a series of transpositions of neighboring jobs and resulting in schedule Ŝ′, this gap could not decrease, since schedule Ŝ′ is feasible. Finally, the last transformation of the schedule (from Ŝ′ to S′) preserves all the gaps achieved in the previous stage, because the “anti-reduction” procedure can only shift the starting times of both operations of each job by the same amount. Thus, we may conclude that the gap between the starting times of the two operations of pseudo-job k′ in schedule S′ cannot be less than in schedule S*. Now, for proving the feasibility of schedule S′ with respect to all jobs included in pseudo-job k′, it is sufficient to take into account the following simple facts:

1) The original schedule S* was feasible with respect to these jobs.
2) The “operations” of pseudo-job k′ have the same durations in schedules S* and S′.
3) The relative positions of the operations of the real jobs within the corresponding “inserts” remain the same in S′ as in S*.

So, we have proved that schedule S′ is feasible.

Next, we have C_max(S′) = C_max(Ŝ′) + Λ, which, together with (7) and (8), implies the inequality C_max(S′) ≤ C_max(S*) (and therefore, schedule S′ is optimal).

And finally, it is clear that the transpositions of neighboring jobs implemented in the course of the transformation of schedule Ŝ into schedule Ŝ′ reduce by 1 the number of inversions in the permutation of the original jobs (i.e., for the job permutation π′ corresponding to schedule S′, the relation Inv(π′) = Inv(π*) − 1 holds), which contradicts the choice of permutation π*. The contradiction completes the proof of Theorem 2. ⊓⊔
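The key fact from Johnson [6] used above — swapping an adjacent pair of jobs standing improperly never increases the two-machine flow shop makespan — can be checked exhaustively on small data. In the sketch below (our names; the comparator min(a_i, b_j) ≤ min(a_j, b_i) is the standard form of Johnson's priority, assumed equivalent to Definition 1):

```python
from itertools import product

def makespan(seq, a, b):
    """C_max of the two-machine flow shop F2 under the job order seq:
    machine A processes the jobs back to back, machine B waits both
    for A's completion and for its own previous operation."""
    t_a = t_b = 0
    for j in seq:
        t_a += a[j]
        t_b = max(t_b, t_a) + b[j]
    return t_b

# exhaustive check: whenever adjacent jobs (1, 2) violate Johnson's
# condition min(a_1, b_2) <= min(a_2, b_1), swapping them never
# increases the makespan
for a1, b1, a2, b2 in product(range(1, 5), repeat=4):
    a = {1: a1, 2: a2}
    b = {1: b1, 2: b2}
    if min(a[1], b[2]) > min(a[2], b[1]):   # (1, 2) stands improperly
        assert makespan([2, 1], a, b) <= makespan([1, 2], a, b)
```

The same interchange argument applies verbatim to a pair of adjacent jobs inside a longer sequence, since the jobs before the pair fix a common starting state and the jobs after it can only profit from earlier completion times.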