Two-Machine Open Shop Scheduling with Secondary Criteria
Jatinder N. D. Gupta∗
Department of Management, Ball State University, Muncie, IN 47306, USA, email: [email protected]
Frank Werner and Gunnar Wulkenhaar
Fakultät für Mathematik, Otto-von-Guericke-Universität, PSF 4120, 39106 Magdeburg, Germany
email: [email protected]
June 21, 2005
Abstract
This paper considers two-machine open shop problems with secondary criteria where the primary criterion is the minimization of makespan and the secondary criterion is the minimization of the total flow time, total weighted flow time, or total weighted tardiness. In view of the strongly NP-hard nature of these problems, two polynomially solvable special cases are given and constructive heuristic algorithms based on insertion techniques are developed. A strongly connected neighborhood structure is derived and used to develop effective iterative heuristic algorithms by incorporating iterative improvement, simulated annealing and multi-start procedures. The proposed insertion and iterative heuristic algorithms are empirically evaluated by solving problem instances with up to 80 jobs.
Keywords: Open Shop Scheduling, Secondary Criteria, Constructive and Iterative Heuristics, Threshold Accepting, Simulated Annealing, Empirical Evaluation.
∗Please direct all correspondence to: Dr. Jatinder N. D. Gupta, Department of Management, Ball State University, Muncie, IN 47306, USA, e-mail: [email protected], Telephone: 765-285-5301, FAX: 765-285-8024.
1 Introduction
Traditional research on solving multi-stage scheduling problems has focused on a single criterion. However, in industrial scheduling practice, managers develop schedules based on multiple criteria. Although multiple objective optimization problems are quite common in practice, optimization algorithms for scheduling problems often consider only a single objective function. Consideration of multiple objectives makes even the simplest multi-machine scheduling problems NP-hard. Reviews of algorithms and complexity results for multiple criteria scheduling problems by Lee and Vairaktarakis [11], Nagar et al. [14], and T'Kindt and Billaut [17] indicate that only a few papers have considered multiple machine settings, although a shop configuration with multiple machines is a more appropriate depiction of most shop floors. Nagar et al. [14] suggest that the reason for the lack of research in this area may be the complex nature of these problems. Scheduling problems involving multiple criteria require significantly more effort in finding acceptable solutions and hence have not received much attention in the literature.
The need to consider multiple criteria in scheduling is widely recognized. Either a simultaneous or a lexicographic (also called hierarchical) approach can be adopted. For simultaneous optimization, there are two approaches. First, all efficient schedules can be generated, where an efficient schedule is one in which any improvement to the performance with respect to one of the criteria causes a deterioration with respect to one of the other criteria. Second, a single objective function can be constructed, for example by forming a linear combination of the various criteria, which is then optimized. Under the lexicographic approach, the criteria are ranked in order of importance; the first criterion is optimized first, the second criterion is then optimized subject to achieving the optimum with respect to the first criterion, and so on.
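As a small illustration of the lexicographic approach, ranking candidate schedules by the pair (C1, C2) reproduces the hierarchy directly. The schedule names and values below are made up for illustration:

```python
# Hypothetical candidate schedules as (name, makespan C1, secondary value C2).
candidates = [("S1", 20, 95), ("S2", 19, 110), ("S3", 19, 102)]

# Python's tuple comparison is itself lexicographic: C1 decides first,
# and C2 breaks ties among schedules with the same C1 value.
best = min(candidates, key=lambda s: (s[1], s[2]))
```

Here S3 is preferred over S1 despite its larger C2 value, because the primary criterion C1 dominates the comparison.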
In practice, minimization of makespan is important as it tends to maximize the throughput rate and the machine utilization, representing the amount of resources tied to a set of jobs. Minimizing total weighted flow time is equivalent to minimizing the work-in-process cost [15]. Minimization of the average throughput time and the average weighted tardiness of a job in the shop are important in the current manufacturing environment of time-based management and just-in-time manufacturing practices [15]. With the current desire to optimize production rate, work-in-process inventory, or the penalty cost of tardy jobs, minimizing makespan as the primary objective and minimizing the total flow time, total weighted flow time, or total weighted tardiness as a secondary objective is a useful criterion in practice. In addition to being important in practice, these secondary criteria also reflect the level of difficulty in designing solution algorithms.
This paper considers the two-machine open shop scheduling problem, where the order of processing the tasks of a job on the various machines is immaterial as there is no precedence relation among the tasks comprising a job. The primary objective is the minimization of the makespan (henceforth called criterion C1) and the secondary objective is one of minimizing the total flow time, total weighted flow time, or total weighted tardiness (henceforth called criterion C2). Thus, it is required to find a schedule for which criterion C2 is minimized, subject to the constraint that no reduction in makespan (criterion C1) is possible. Clearly, our problem is one of lexicographic (hierarchical) multi-criteria scheduling.
Open shop scheduling problems arise in several industrial situations. For example, consider a large aircraft garage with specialized work-centers. An airplane may require repairs on its engine and electrical circuit system. These two tasks may be carried out in any order, although it is not possible to do both tasks on the same plane simultaneously. Other applications of open shop scheduling models in automobile repair, quality control centers, semiconductor manufacturing, teacher-class assignments, examination scheduling, and satellite communications are described by Kubiak et al. [9], Liu and Bulfin [12], and Prins [16].
The literature on solving bicriteria open shop problems is limited to only two papers. The first paper, by Masuda and Ishii [13], considers the two-machine open shop bicriteria scheduling problem of finding all efficient schedules with respect to makespan and maximum lateness, and shows that the problem either has a unique optimal solution or has a line segment of nondominated solutions. The second paper, by Kyparisis and Koulamas [10], describes polynomially solvable special cases of the two- and three-machine open shop hierarchical criteria scheduling problems of finding a schedule with minimum total flow time subject to minimum makespan, and proposes a heuristic algorithm to solve the problems with a dominant machine. In view of such scarce literature and the NP-hard nature of the problem, this paper develops and evaluates heuristic algorithms for the two-machine open shop problem where the primary objective is the minimization of the makespan and the secondary objective is one of minimizing the total flow time, total weighted flow time, or total weighted tardiness. We focus on the properties of various neighborhood structures and how they influence the performance of iterative improvement and simulated annealing algorithms for solving multicriteria scheduling problems.
The remainder of the paper is organized as follows. Section 2 describes the problem definition, its complexity, and two polynomially solvable special cases. The constructive heuristic algorithms are developed in Section 3, where different insertion rules are also described. Section 4 discusses various neighborhoods and the application of iterative algorithms to solve the open shop problem with lexicographic criteria. The computational results of our experiments are given in detail in Section 5. Finally, Section 6 provides conclusions of this research and suggests some directions for future research.
2 Problem Complexity and Special Cases
Extending the standard notation of scheduling problems [2] to multi-criteria scheduling problems [17], the two-machine open shop problem considered here is represented as an O2||Lex(C1, C2) problem, where the notation Lex(C1, C2) indicates that criterion C1 is minimized first and, among all schedules that minimize criterion C1, a schedule is found that minimizes criterion C2.
2.1 Problem Definition
To succinctly define the problems under consideration, assume that a set N = {1, 2, . . . , n} of n simultaneously available jobs is to be processed on two machines M1 and M2. For each job i ∈ N, a processing time ai on M1 and a processing time bi on M2 are given. Preemptions of operations are not allowed. The order in which each job is processed on the machines is immaterial. Then, the O2||Lex(C1, C2) problem is one of finding a schedule that first minimizes criterion C1 and then, among all possible schedules with optimal C1 value, minimizes criterion C2.
Several optimality criteria are based on the minimization of a function of the completion times of the jobs, f(C1, . . . , Cn), where for each job i ∈ N, Ci is defined as the time when the second operation of job i is finished. Some regular optimality criteria are the minimization of the makespan Cmax, the minimization of the total unweighted or weighted flow time (∑Ci and ∑wiCi, respectively), or the total weighted tardiness (∑wiTi), where for each job i ∈ N, Ti = max{0, Ci − di}. In this paper, our primary criterion is makespan, i.e. C1 = Cmax, while the secondary criterion is the minimization of the total flow time, total weighted flow time, or the total weighted tardiness, i.e. C2 ∈ {∑Ci, ∑wiCi, ∑wiTi}.
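All three secondary criteria are simple functions of the job completion times. A minimal sketch of their evaluation (the function name and its string arguments are ours, not from the paper):

```python
def secondary_value(C, criterion, w=None, d=None):
    """Evaluate C2 from completion times C (w: weights, d: due dates)."""
    n = len(C)
    w = w or [1] * n
    if criterion == "flow":    # sum C_i
        return sum(C)
    if criterion == "wflow":   # sum w_i * C_i
        return sum(w[i] * C[i] for i in range(n))
    if criterion == "wtard":   # sum w_i * T_i with T_i = max(0, C_i - d_i)
        return sum(w[i] * max(0, C[i] - d[i]) for i in range(n))
    raise ValueError(criterion)
```

For example, two jobs completing at times 5 and 9 with weights (2, 1) and due dates (6, 4) incur a total weighted tardiness of 1 · (9 − 4) = 5.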
2.2 Problem Complexity
While the O2||Cmax problem is well known and polynomially solvable using the simple algorithm given by Gonzalez and Sahni [6] (henceforth called algorithm GS), the O2||∑Ci problem is strongly NP-hard [1]. Using these results, we can prove that the O2||Lex(Cmax, C2) problem is strongly NP-hard for C2 ∈ {∑Ci, ∑wiCi, ∑wiTi}.
Theorem 1 The O2||Lex(Cmax, C2) problem with C2 ∈ {∑Ci, ∑wiCi, ∑wiTi} is strongly NP-hard.
The proof of Theorem 1 is given in the appendix.
2.3 Polynomially Solvable Special Cases
While the problems considered in this paper are strongly NP-hard, some special cases of the O2||Lex(Cmax, ∑Ci) and O2||Lex(Cmax, ∑wiCi) problems are polynomially solvable.
It is well known that an optimal schedule for problem O2||Cmax has the following makespan value:

Cmax^opt = max{X1, X2, ar + br},

where X1 = ∑_j aj, X2 = ∑_j bj, and r is the job with the largest sum of both processing times.
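This formula translates directly into code; a small sketch:

```python
def optimal_makespan(a, b):
    """Cmax^opt = max{X1, X2, max_i (a_i + b_i)} for the two-machine open shop."""
    return max(sum(a), sum(b), max(x + y for x, y in zip(a, b)))
```

For a = (3, 2, 4), b = (2, 5, 3) this gives max{9, 10, 7} = 10: the load X2 of machine M2 dominates.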
First, if Cmax^opt = ar + br, then the following O(n log n) algorithm generates an optimal schedule for the O2||Lex(Cmax, ∑wiCi) problem.
Algorithm W: Weighted Shortest Processing Time Procedure

Input: ai, bi for i = 1, . . . , n and job r such that Cmax^opt = ar + br = max{max_{i∈N}(ai + bi), X1, X2}.

Step 1: Find schedule S1 by scheduling job r first on M1 and then processing the remaining jobs on this machine ordered according to the WSPT1 rule (i.e. sequence the job with the smallest ratio ai/wi on this machine first); on M2, job r is scheduled last and the remaining jobs before it in arbitrary order.

Step 2: Find schedule S2 by scheduling job r first on M2 and then processing the remaining jobs on this machine ordered according to the WSPT2 rule (i.e. sequence the job with the smallest ratio bi/wi on this machine first); on M1, job r is scheduled last and the remaining jobs before it in arbitrary order.

Step 3: Among the two schedules S1 and S2, select the one with the minimum ∑wiCi value as an optimal schedule.
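A sketch of algorithm W under the precondition of Theorem 2 (all identifiers are ours). The closed-form completion times used below follow from ar + br ≥ max{X1, X2}: in S1, the second operations of all jobs other than r finish on M2 no later than X2 − br ≤ ar, so each such job completes with its operation on M1:

```python
def algorithm_w(a, b, w):
    """Sketch of algorithm W; assumes Cmax^opt = a_r + b_r >= max(X1, X2)."""
    n = len(a)
    r = max(range(n), key=lambda i: a[i] + b[i])
    rest = [i for i in range(n) if i != r]

    def weighted_flow(x, y):
        # Schedule with r first on the "x" machine, remaining jobs after it in
        # WSPT order of x_i / w_i; r is last on the other machine.  Because
        # x[r] + y[r] >= sum(y), all operations of the other jobs on the "y"
        # machine finish before their "x" operations start, so each C_i is the
        # cumulative completion time on the "x" machine, and C_r = x[r] + y[r].
        order = sorted(rest, key=lambda i: x[i] / w[i])
        total, t = w[r] * (x[r] + y[r]), x[r]
        for i in order:
            t += x[i]
            total += w[i] * t
        return total

    return min(weighted_flow(a, b), weighted_flow(b, a))   # best of S1, S2
```

The two symmetric steps of the algorithm become a single helper evaluated with the machine roles exchanged.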
Theorem 2 If Cmax^opt = ar + br, where r is the job with the largest sum of both processing times, then algorithm W optimally solves the O2||Lex(Cmax, ∑wiCi) problem.
Proof. Notice that the first operations of all jobs except r can be sequenced in arbitrary order while job r is processed on the other machine. For the second operations of the jobs other than job r, it is optimal to sequence them in WSPT order of the processing times on the corresponding machine. Therefore, only the two schedules S1 and S2 generated by algorithm W have to be considered, and the schedule with the better C2 value is obviously an optimal schedule.
In view of Theorem 2, in the remainder of the paper we consider only problems with Cmax^opt = max{X1, X2}. Further, without loss of generality, we assume that Cmax^opt = X1, i.e. X1 ≥ X2. Moreover, let pi be the job sequence on machine Mi (i = 1, 2).
We now give a polynomially solvable special case of the O2||Lex(Cmax, ∑wiCi) problem which is based on the WSPT rule for solving the 1||∑wiCi problem. The processing times of this special case satisfy the following condition:

min{ai | i = 1, . . . , n} ≥ 2 max{bi | i = 1, . . . , n}. (1)
In this case the following algorithm provides an optimal schedule.
Algorithm EW: Extended Weighted Shortest Processing Time Procedure

Input: ai, bi for i = 1, . . . , n.

Step 1: Generate n schedules as follows: choose job k, 1 ≤ k ≤ n, as the first job on M1 and let p^{WSPTk} = (p^{WSPTk}_1, . . . , p^{WSPTk}_{n−1}) be the sequence of the jobs of N \ {k} according to nonincreasing ratios wi/ai.

Step 2: Determine the objective function value of schedule P^k = (p1, p2) with p1 = (k, p^{WSPTk}) and p2 = (p^{WSPTk}_1, p^{WSPTk}_2, k, p^{WSPTk}_3, . . . , p^{WSPTk}_{n−1}), where only job k is processed first on M1.

Step 3: Select the best schedule P∗ with respect to criterion C2 among the n generated schedules.
Theorem 3 For the O2||Lex(Cmax, ∑wiCi) problem satisfying condition (1), algorithm EW generates an optimal schedule P∗.

Proof. Each of the n schedules generated has the optimal makespan value Cmax^opt = X1 (since, due to condition (1), there is no idle time on M1 in any of them). Moreover, due to condition (1), for any job k sequenced first on machine M1, we have Ck = ak + bk. On the other hand, the objective function contributions of the remaining jobs cannot be reduced since p^{WSPTk} generates an optimal solution for problem 1||∑wiCi (in particular, notice that no improvement can be obtained by changing the machine route of any job contained in p^{WSPTk}). Therefore, the best of the n schedules generated meets a lower bound on the optimal objective function value ∑wiCi.
Note that (1) is a sufficient condition to guarantee the smallest possible completion time Ck = ak + bk of job k by sequencing k at position 3 on M2 (in the case of arbitrary processing times, the completion time of job k is usually greater than ak + bk and finding the best position of job k is difficult). Algorithm EW can be run in O(n^2) time.
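Under condition (1), the completion times needed in step 2 simplify: job k finishes at ak + bk, and every other job finishes with its second operation on M1, so each candidate schedule can be evaluated from cumulative sums on M1. A sketch under that assumption (identifier names are ours):

```python
def algorithm_ew(a, b, w):
    """Sketch of algorithm EW; assumes condition (1): min(a) >= 2 * max(b)."""
    assert min(a) >= 2 * max(b)            # condition (1)
    n = len(a)
    best = None
    for k in range(n):
        # WSPT sequence of the remaining jobs: nonincreasing ratios w_i / a_i.
        rest = sorted((i for i in range(n) if i != k),
                      key=lambda i: w[i] / a[i], reverse=True)
        total = w[k] * (a[k] + b[k])       # C_k = a_k + b_k under condition (1)
        t = a[k]                           # M1 runs without idle time
        for i in rest:
            t += a[i]
            total += w[i] * t              # C_i equals the M1 completion time
        best = total if best is None else min(best, total)
    return best
```

Evaluating all n choices of k this way matches the O(n^2) running time stated above.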
Algorithms W and EW extend and improve the results of Kyparisis and Koulamas [10], since the O2||Lex(Cmax, ∑Ci) problem is the special case of the O2||Lex(Cmax, ∑wiCi) problem with wi = 1 for all i ∈ N.
3 Constructive Algorithms
Since the general O2||Lex(Cmax, C2) problem is strongly NP-hard, this section develops
constructive heuristic algorithms to find a starting solution for the iterative algorithms. The
constructive algorithms are based on a modification of Gonzalez and Sahni’s algorithm [6]
for problem O2||Cmax and insertion techniques.
3.1 Modifications of Gonzalez and Sahni’s Algorithm
Since the primary optimality criterion is makespan, an algorithm that optimally solves the O2||Cmax problem can be used to obtain an initial solution. Therefore, we first describe the O(n) algorithm of Gonzalez and Sahni for finding an optimal solution of the O2||Cmax problem.
Algorithm GS: Gonzalez and Sahni's Algorithm

Input: ai, bi for i ∈ N = {1, . . . , n}, X1 = ∑_{j∈N} aj, and X2 = ∑_{j∈N} bj.

Step 1: Let K = {i ∈ N | ai ≥ bi} and L = {i ∈ N | ai < bi}. Find two different jobs r and s such that ar ≥ max_{i∈K}{bi} and bs ≥ max_{i∈L}{ai}. Let K′ = K \ {s, r} and L′ = L \ {r, s}. In addition, assume that X1 − as ≥ X2 − br (the other case is symmetric). Enter step 2.

Step 2: Let the sequences of jobs on machines M1 and M2 be p1∗ = (r, p(K′), p(L′), s) and p2∗ = (p(K′), p(L′), s, r), respectively, where p(K′) and p(L′) are arbitrary sequences of the sets K′ and L′. P∗ = (p1∗, p2∗) is an optimal schedule with only job r processed first on M1 and all remaining jobs processed first on M2.
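The construction in steps 1 and 2 can be sketched as follows. All names are ours; the sketch covers the non-mirrored case X1 − as ≥ X2 − br with K and L both nonempty, and the helper evaluates the semi-active schedule implied by the two machine sequences and the machine routes:

```python
def simulate(a, b, p1, p2, first_on_m1):
    """Completion times of the semi-active schedule defined by the machine
    sequences p1, p2 and the routes (jobs in first_on_m1 visit M1 first).
    Returns None if the sequences and routes are in circular conflict."""
    n, f1, f2, i1, i2, t1, t2 = len(a), {}, {}, 0, 0, 0, 0
    while i1 < n or i2 < n:
        moved = False
        if i1 < n and (p1[i1] in first_on_m1 or p1[i1] in f2):
            j = p1[i1]
            t1 = max(t1, f2.get(j, 0)) + a[j]
            f1[j], i1, moved = t1, i1 + 1, True
        if i2 < n and (p2[i2] not in first_on_m1 or p2[i2] in f1):
            j = p2[i2]
            t2 = max(t2, f1.get(j, 0)) + b[j]
            f2[j], i2, moved = t2, i2 + 1, True
        if not moved:
            return None
    return [max(f1[j], f2[j]) for j in range(n)]

def gs_sequences(a, b):
    """Steps 1-2 of algorithm GS (non-mirrored case; K and L nonempty)."""
    n = len(a)
    K = [i for i in range(n) if a[i] >= b[i]]
    L = [i for i in range(n) if a[i] < b[i]]
    r = max(K, key=lambda i: a[i])    # a_r >= a_i >= b_i for every i in K
    s = max(L, key=lambda i: b[i])    # b_s >= b_i > a_i for every i in L
    Kp = [i for i in K if i != r]
    Lp = [i for i in L if i != s]
    p1 = [r] + Kp + Lp + [s]
    p2 = Kp + Lp + [s, r]
    return p1, p2, {r}                # only job r is routed M1 first
```

Simulating the returned sequences on a small instance reproduces the optimal makespan max{X1, X2, max_i(ai + bi)}.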
While the O2||Cmax problem can be polynomially solved using the above algorithm GS, its worst-case performance ratio for the O2||Lex(Cmax, ∑Ci) problem is n + 1 [10]. For the problem with a given secondary criterion, one can improve algorithm GS by choosing particular jobs r and s and particular sequences of the jobs in K′ and L′. Based on initial tests, we have chosen jobs r and s as the shortest jobs satisfying the given inequalities in step 1 of algorithm GS, and the jobs of the sets K′ and L′ have been sequenced as follows:
• for C2 = ∑Ci, according to the SSPT rule (job with the smallest sum of both processing times first);

• for C2 = ∑wiCi, according to the WSSPT rule (job with the smallest ratio (ai + bi)/wi first); and

• for C2 = ∑wiTi, according to the WSSPTD rule (job with the smallest ratio (ai + bi) · di/wi first).
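The three priority rules are plain sorts by different keys; a sketch (function names are ours):

```python
def sspt_order(jobs, a, b):
    # SSPT: smallest sum of both processing times first
    return sorted(jobs, key=lambda i: a[i] + b[i])

def wsspt_order(jobs, a, b, w):
    # WSSPT: smallest ratio (a_i + b_i) / w_i first
    return sorted(jobs, key=lambda i: (a[i] + b[i]) / w[i])

def wssptd_order(jobs, a, b, w, d):
    # WSSPTD: smallest ratio (a_i + b_i) * d_i / w_i first
    return sorted(jobs, key=lambda i: (a[i] + b[i]) * d[i] / w[i])
```

Each rule refines the previous one with additional job data (weights, then due dates), mirroring the three secondary criteria.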
3.2 Insertion Algorithms
In the rest of the paper, let Ji be the set of jobs which are selected to be processed first on Mi (i = 1, 2), and for a set Z of jobs with the same chosen machine route, let q(Z) denote the Johnson sequence for the corresponding flow shop problem with fixed machine route [7].
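Johnson's rule for q(Z) can be sketched as follows, with `first` and `second` holding the processing times on the first and second machine of the route; for J2, whose jobs visit M2 before M1, one would call `johnson(J2, b, a)` (the function name is ours):

```python
def johnson(jobs, first, second):
    # Jobs with first[i] <= second[i] come first, in nondecreasing order of
    # first[i]; the remaining jobs follow in nonincreasing order of second[i].
    head = sorted((i for i in jobs if first[i] <= second[i]),
                  key=lambda i: first[i])
    tail = sorted((i for i in jobs if first[i] > second[i]),
                  key=lambda i: second[i], reverse=True)
    return head + tail
```

The resulting sequence minimizes the makespan of the two-machine flow shop restricted to the given job set.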
Based on a chosen job list L = (j1, j2, . . . , jn) (where for the different C2 criteria the list according to the above rules has been taken), we generate a makespan optimal schedule with the following structure: p1 = (k, p∗) and p2 = (p′∗, k, p′′∗), where p∗ = (p′∗, p′′∗). Only job k is processed first on M1; the other jobs are processed first on M2, i.e. J1 = {k} and J2 = N \ {k}. Using these developments, the insertion procedure chooses the first possible job in the job list which can be taken as the first job in p1. For a candidate job k, this can be checked by means of the Johnson sequence q(J2) for the jobs in J2 = N \ {k}. If for the schedule p1 = (k, q(J2)) and p2 = (q(J2), k) the makespan value does not exceed Cmax = X1, the first job in p1 has been determined. While such a structure of the generated schedule (i.e. exactly one job is processed first on M1, and all remaining jobs are processed in the same sequence on both machines) does not necessarily contain an optimal schedule, it can lead to a significant improvement in the value of the secondary criterion C2 over an arbitrary GS schedule.
We now take one job at a time and perform an insertion procedure within the partial sequence p∗ (note that all jobs in the partial sequence p∗ belong to J2). Assume that a partial schedule P = (p1, p2) with p1 = (k, p∗1, . . . , p∗t) and p2 = (p∗1, . . . , p∗t), which can be completed to a makespan optimal schedule, has already been obtained. Then, we again choose the first unscheduled job u in the job list and insert it at each of the last h positions within the current partial sequence p∗. Among all partial schedules which can be extended to a makespan optimal sequence, we choose the partial schedule with the best objective function value with respect to criterion C2. In the following algorithm, t denotes the number of jobs of J2 inserted into the partial schedule, S is the set of partial schedules constructed, pi is a partial job sequence of the jobs in J2, and Pi denotes a complete schedule of all jobs. The steps of our first insertion algorithm, therefore, are as follows:
Algorithm IS1: First Insertion Algorithm

Input: ai, bi for i = 1, . . . , n, Cmax^opt = X1 = ∑ai, ordered insertion list L, and the insertion depth h.

Step 1: Find the first job k in L such that for J1 = {k}, J2 = N \ {k} and P = (p1, p2) with p1 = (k, q(J2)), p2 = (q(J2), k), equality Cmax(P) = Cmax^opt holds. Remove job k from L; set p∗ = ∅ and t := 1; choose the first job u in list L and enter step 2.

Step 2: Set h∗ := min{h, t}; S := ∅ and i = t + 1 − h∗. Enter step 3.

Step 3: Insert job u at position i in p∗, i.e. consider pi = (p∗1, . . . , p∗_{i−1}, u, p∗_i, . . . , p∗_{t−1}). Complete pi by appending the Johnson sequence q(Z) of the unscheduled jobs Z to pi, and consider Pi = (p1i, p2i) with p1i = (k, pi, q(Z)) and p2i = (pi, q(Z), k). Enter step 4.

Step 4: If Cmax(Pi) = Cmax^opt, set S := S ∪ {Pi}. Enter step 5.

Step 5: If i = t, enter step 6; otherwise set i = i + 1 and return to step 3.

Step 6: If S = ∅, then choose the next job u from L and return to step 2; otherwise enter step 7.

Step 7: Choose the schedule Pv = (p1v, p2v) ∈ S with the smallest C2 criterion value summed over the t jobs contained in the corresponding partial sequence pv of a subset of jobs of J2. Set p∗ = pv; t := t + 1; remove job u from L. If t ≤ n, return to step 2; otherwise shift job k as much as possible to the left in p2v such that for the resulting schedule P = (p1, p2) equality Cmax(P) = Cmax^opt is satisfied, and STOP (the schedule finally obtained is the heuristic solution).

The complexity of algorithm IS1 is O(hn^3). So, if all possible positions are considered (we denote this variant as h = all), an O(n^4) algorithm results, and for constant h, the complexity is O(n^3).
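A compact sketch of IS1 for h = all and C2 = ∑Ci. All identifiers are ours; the schedule evaluator computes a semi-active schedule from the machine sequences and routes, and the deferral logic of step 6 is simplified to rescanning the remaining list jobs in order:

```python
def johnson(jobs, first, second):
    # Johnson sequence for a two-machine flow shop (route: first, then second).
    head = sorted((i for i in jobs if first[i] <= second[i]), key=lambda i: first[i])
    tail = sorted((i for i in jobs if first[i] > second[i]),
                  key=lambda i: second[i], reverse=True)
    return head + tail

def simulate(a, b, p1, p2, first_on_m1):
    # Semi-active schedule for sequences p1, p2 and routes; None if infeasible.
    n, f1, f2, i1, i2, t1, t2 = len(a), {}, {}, 0, 0, 0, 0
    while i1 < n or i2 < n:
        moved = False
        if i1 < n and (p1[i1] in first_on_m1 or p1[i1] in f2):
            j = p1[i1]; t1 = max(t1, f2.get(j, 0)) + a[j]
            f1[j], i1, moved = t1, i1 + 1, True
        if i2 < n and (p2[i2] not in first_on_m1 or p2[i2] in f1):
            j = p2[i2]; t2 = max(t2, f1.get(j, 0)) + b[j]
            f2[j], i2, moved = t2, i2 + 1, True
        if not moved:
            return None
    return [max(f1[j], f2[j]) for j in range(n)]

def is1(a, b, order):
    """Sketch of IS1 with h = all and C2 = sum of C_i; assumes Cmax^opt = X1."""
    cmax_opt = sum(a)
    # Step 1: first job k whose schedule (k, q(J2)), (q(J2), k) stays optimal.
    for k in order:
        rest = [j for j in order if j != k]
        q = johnson(rest, b, a)              # J2 jobs are routed M2 -> M1
        C = simulate(a, b, [k] + q, q + [k], {k})
        if C and max(C) == cmax_opt:
            break
    todo, p_star = [j for j in order if j != k], []
    # Steps 2-7: insert one job at a time, keeping makespan optimality.
    while todo:
        for u in todo:
            rest = [j for j in todo if j != u]
            best = None
            for pos in range(len(p_star) + 1):
                pi = p_star[:pos] + [u] + p_star[pos:]
                tail = johnson(rest, b, a)   # Johnson completion of unscheduled jobs
                C = simulate(a, b, [k] + pi + tail, pi + tail + [k], {k})
                if C and max(C) == cmax_opt:
                    partial = sum(C[j] for j in pi)   # C2 over inserted jobs only
                    if best is None or partial < best[0]:
                        best = (partial, pi)
            if best:
                p_star = best[1]
                todo.remove(u)
                break
        else:
            break                            # no insertable job (should not occur)
    # Final step: shift k to the left in p2 while the makespan stays optimal.
    p1, p2 = [k] + p_star, p_star + [k]
    for pos in range(len(p_star)):
        cand = p_star[:pos] + [k] + p_star[pos:]
        C = simulate(a, b, p1, cand, {k})
        if C and max(C) == cmax_opt:
            p2 = cand
            break
    return p1, p2, simulate(a, b, p1, p2, {k})
```

The sketch keeps the key invariant of the algorithm: every accepted partial sequence can still be completed, via the Johnson sequence of the unscheduled jobs, to a schedule with optimal makespan.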
—> Insert Table 1 about here <—
Example 1: To illustrate algorithm IS1, we consider the following example with the data given in Table 1. We apply the secondary criterion C2 = ∑Ci, and according to the SSPT rule, list L is given by L = (1, 2, 3, 4). We use h = all, and the first job which can be taken as k is job 3 (for k ∈ {1, 2}, the Cmax value of the completed schedules would exceed X1 = 29, since in each case an idle time on M1 occurs after processing the first job on M1).

Now t = 1, u = 1 and p1 = (1). We get the Johnson sequence q(Z) = (4, 2) and schedule P1 = (p11, p21) with p11 = (3, 2, 1, 4), p21 = (2, 1, 4, 3) and Cmax(P1) = 29, i.e. S = {P1}. Therefore, p∗ = (1) is taken.

Now t = 2 and we use the first unscheduled job u = 2 from list L. We have to consider the partial sequences p1 = (2, 1) and p2 = (1, 2). Completing these sequences by appending job 4 and scheduling job 3 at the last position on M2, we get the schedules P1 and P2 with Cmax(P1) = 35 and Cmax(P2) = 30, i.e. S = ∅ and job u = 2 cannot be taken for insertion now. Therefore, we choose the next job from the list, i.e. u = 4. We have to consider the partial sequences p1 = (4, 1) and p2 = (1, 4). Completing these partial sequences by appending job 2 and scheduling job 3 at the last position on M2, we get the schedules P1 and P2 with Cmax(P1) = Cmax(P2) = X1 = 29, i.e. S = {P1, P2}. We consider the contributions of jobs 1 and 4 to criterion C2 and obtain for p1 the value 22 + 26 = 48 and for p2 the value 12 + 26 = 38. Therefore, we select p∗ = p2 = (1, 4).

Now t = 3 and u = 2. We have to consider the partial sequences p1 = (2, 1, 4), p2 = (1, 2, 4) and p3 = (1, 4, 2). Completing these sequences by scheduling job 3 at the last position on M2, we find that only p3 can be completed to a makespan optimal schedule, i.e. S = {P3}. Finally, we check whether job k can be shifted to the left on M2 in order to reduce the C2 value. This is not possible here, since processing job 3 as the second-to-last job on M2 would increase the Cmax value to 30. Therefore, the schedule P3 = (p13, p23) with p13 = (3, 1, 4, 2), p23 = (1, 4, 2, 3) and the C2 value C1 + C2 + C3 + C4 = 12 + 29 + 27 + 26 = 94 is the heuristic solution.
The insertion algorithm IS1 evaluates a partial sequence by considering the objective function value contributions of job u and the jobs of sequence p∗, but not of job k (for which only the first operation has been scheduled so far). However, it is possible to improve the effectiveness of this algorithm by considering the objective function value contributions of the jobs k, u and of those contained in p∗, where job k is first put at the last position in p2 and then shifted to the left as much as possible without violating the minimal makespan constraint, in order to reduce its contribution to the objective function value. Based on this observation, our second insertion heuristic, called algorithm IS2, can be described as follows:
Algorithm IS2: Second Insertion Algorithm

Input: Same as algorithm IS1.

Steps 1 through 3: Same as algorithm IS1.

Step 4: If Cmax(Pi) = Cmax^opt, shift job k in p2i as much as possible to the left so that a schedule P with Cmax(P) = Cmax^opt results, and set S := S ∪ {P}. Enter step 5.

Steps 5 and 6: Same as algorithm IS1.

Step 7: Choose the schedule Pv ∈ S with the smallest C2 criterion value summed over all jobs contained in pv and job k. Set p∗ = pv; t := t + 1; remove job u from L. If t ≤ n, return to step 2; otherwise STOP (the schedule finally obtained is the heuristic solution).
In algorithms IS1 and IS2, the objective function contribution (C2) of the currently unscheduled jobs is not explicitly considered. However, this consideration may improve the effectiveness of an insertion heuristic. Therefore, the following algorithm IS3 is a slight modification of algorithm IS2 which also considers the objective function contribution of the jobs added according to Johnson's rule to complete the current partial schedule.
Algorithm IS3: Third Insertion Algorithm

Input: Same as algorithm IS1.

Step 1: Find the first job k in L such that for J1 = {k}, J2 = N \ {k} and P = (p1, p2) with p1 = (k, q(J2)), p2 = (q(J2), k), equality Cmax(P) = Cmax^opt holds. Shift job k in p2 as much as possible to the left such that a schedule P with Cmax(P) = Cmax^opt results, and let F_best be the C2 value of this schedule; remove job k from L; set p∗ = ∅ and t := 1; choose the first job u from L and enter step 2.

Steps 2 through 6: Same as algorithm IS2.

Step 7: Find the schedule Pv with the smallest C2 criterion value summed over all jobs, and let F∗ be the corresponding C2 value. If F∗ ≥ F_best, the insertion of job u is delayed: choose the next job u from list L, provided that L contains further jobs, and return to step 2; if L contains no further job, STOP (the schedule finally obtained is the heuristic solution). Otherwise, set F_best := F∗ and enter step 8.

Step 8: Set p∗ = pv; t := t + 1; remove job u from L. If t ≤ n, return to step 2; otherwise STOP (the schedule finally obtained is the heuristic solution).
We note that algorithms IS2 and IS3 generate the same solution if we always have F∗ < F_best in step 7 of algorithm IS3 (in the other case, the insertion of job u is delayed since it currently does not lead to an improvement of the C2 value of the whole solution). The complexity of both algorithms IS2 and IS3 increases by a factor of n in comparison with algorithm IS1, due to shifting job k in p2 as much as possible to the left.
The above insertion algorithms generate schedules with only one job processed first on machine M1. These insertion procedures can also be used such that more than one job is processed first on M1. In this case, we initially partition the job set into two sets J1 and J2 such that for the resulting flow shop problems the optimal makespan value does not exceed the value Cmax^opt. Based on initial tests, we consider the first z possible jobs from the insertion list as the jobs in J1 (provided that both resulting flow shop problems have an optimal makespan value not greater than Cmax^opt and that z such jobs exist), and we insert the jobs according to their chosen machine route as described above. We denote the described variants of the insertion algorithms as IS1(h,z), IS2(h,z) and IS3(h,z), where h denotes the insertion depth introduced above and z denotes the cardinality of set J1.
Theorem 4 The algorithms IS1(h,z), IS2(h,z) and IS3(h,z) generate a makespan optimal schedule, provided that both optimal objective function values of the flow shop problems with the job sets J1 and J2 do not exceed the optimal makespan value Cmax^opt of the open shop problem.
Proof. In any schedule for the open shop problem, the job set N is partitioned into two job sets J1 and J2. Moreover, assume that the resulting flow shop problems with the job sets J1 and J2, respectively, have optimal makespan values C1max and C2max with max{C1max, C2max} ≤ Cmax^opt. Consider the case z = |J1| = 1. The existence of such a makespan optimal schedule follows from the algorithm by Gonzalez and Sahni [6]. The starting solution in the insertion algorithm, P = (p1, p2) with p1 = (k, q(J2)), p2 = (q(J2), k), is obviously makespan optimal (note that the existence of such a job k follows from the algorithm by Gonzalez and Sahni). In further insertion steps, Johnson's sequence may be violated, but only if the completion to a makespan optimal schedule is still possible in every step. The above arguments can be extended in a straightforward way to the case z > 1.
4 Iterative Algorithms
In this section, we discuss the development of appropriate iterative heuristics for solving the
O2||Lex(Cmax, C2) problem heuristically. First we derive a neighborhood that is strongly
connected. Then we suggest two types of neighborhoods, where in both cases the set of
neighbors of each feasible schedule contains the set of neighbors in the strongly connected
neighborhood. Then, we briefly explain the metaheuristics applied in the local search with
the goal to evaluate the influence of various neighborhood structures on the performance of
iterative improvement and simulated annealing algorithms.
4.1 Neighborhoods
Assume now that a feasible makespan optimal schedule P = (p1, p2) for the open shop
problem is given by the job sequences p1 and p2 on the machines M1 and M2 and a partition
of the set of jobs into the sets J1 and J2 describing the machine routes of the jobs. We
first give a neighborhood η among the set of feasible makespan optimal schedules. It can be
described by 5 different types of neighbor generation of schedule P as follows:
1) Select two adjacent jobs in one of the sequences pj(j ∈ {1, 2}) and interchange them
without changing sets J1 and J2, if feasibility and makespan optimality are maintained.
2) Select two jobs that are adjacent in both sequences p1 and p2 (if any), and interchange
them without changing sets J1 and J2, if makespan optimality is maintained.
The following neighbor generations are only considered when the current starting schedule
P = (p1, p2) has the structure p1 = (q(J1), q(J2)), p2 = (q(J2), q(J1)).
3) Select a job in one of the sets Ji with |Ji| > 1 and insert it into the other set Jj, i.e. remove it from the corresponding sequence q(J1) or q(J2) and insert it at an arbitrary position in the other sequence, if makespan optimality is maintained.

4) Select two jobs in different sets J1 and J2 and interchange them between the sets, but not necessarily at the position of the other job, i.e. remove the chosen jobs from the sequences q(J1) and q(J2) and insert them at arbitrary positions in the other sequence, if makespan optimality is maintained.

5) Interchange the sets J1 and J2, and consider the reverse job sequences on both machines, where the last job becomes the first one, the second-to-last job becomes the second job, and so on.
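Moves of types 1) and 2) can be sketched as follows. All identifiers are ours, and the evaluator computes the semi-active schedule implied by the two machine sequences and the given routes:

```python
def simulate(a, b, p1, p2, first_on_m1):
    # Semi-active schedule for sequences p1, p2 and routes; None on conflict.
    n, f1, f2, i1, i2, t1, t2 = len(a), {}, {}, 0, 0, 0, 0
    while i1 < n or i2 < n:
        moved = False
        if i1 < n and (p1[i1] in first_on_m1 or p1[i1] in f2):
            j = p1[i1]; t1 = max(t1, f2.get(j, 0)) + a[j]
            f1[j], i1, moved = t1, i1 + 1, True
        if i2 < n and (p2[i2] not in first_on_m1 or p2[i2] in f1):
            j = p2[i2]; t2 = max(t2, f1.get(j, 0)) + b[j]
            f2[j], i2, moved = t2, i2 + 1, True
        if not moved:
            return None                  # circular conflict: infeasible move
    return [max(f1[j], f2[j]) for j in range(n)]

def type1_neighbors(a, b, p1, p2, routes, cmax_opt):
    """Type 1: interchange two adjacent jobs on one machine, keeping
    feasibility and makespan optimality."""
    out = []
    for on_m1, seq in ((True, p1), (False, p2)):
        for i in range(len(seq) - 1):
            s = seq[:i] + [seq[i + 1], seq[i]] + seq[i + 2:]
            q1, q2 = (s, p2) if on_m1 else (p1, s)
            C = simulate(a, b, q1, q2, routes)
            if C is not None and max(C) == cmax_opt:
                out.append((q1, q2))
    return out

def type2_neighbors(a, b, p1, p2, routes, cmax_opt):
    """Type 2: interchange a pair of jobs adjacent in both sequences."""
    adj1 = {frozenset(p1[i:i + 2]) for i in range(len(p1) - 1)}
    adj2 = {frozenset(p2[i:i + 2]) for i in range(len(p2) - 1)}
    out = []
    for pair in adj1 & adj2:
        x, y = tuple(pair)
        swap = {x: y, y: x}
        q1 = [swap.get(j, j) for j in p1]
        q2 = [swap.get(j, j) for j in p2]
        C = simulate(a, b, q1, q2, routes)
        if C is not None and max(C) == cmax_opt:
            out.append((q1, q2))
    return out
```

The minimal-makespan check discards many candidate moves: on small instances it is common for a schedule to have no type 1 neighbor at all while still admitting a type 2 interchange.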
After making one of the moves described above, one has to calculate the completion times of all operations in the resulting schedule. For neighbor generations of types 1) through 4), one has to check whether the minimal makespan constraint is satisfied. Only in this case has a neighbor in neighborhood η been obtained. Whereas a neighbor generation according to 1) or 2) performs one or two adjacent pairwise interchanges but does not change the machine route of any job, a neighbor generation according to 3) changes the machine route of one job, and a generation according to 4) changes the machine routes of exactly two jobs. A neighbor generation according to 5) changes the machine routes of all jobs.
The neighborhood η can be described by an undirected graph G(η), where the vertex set
is given by the set of schedules with minimum makespan and two vertices (schedules) are
connected by an undirected edge if each of them can be generated by one of the above types
of neighbor generations from the other one. Then we obtain the following result:
Theorem 5 Let M be the set of feasible schedules for an O2||Lex(Cmax, C2) problem. Then
the neighborhood graph G(η) defined on M is strongly connected.
The proof of Theorem 5 is given in the appendix. The diameter of the neighborhood graph
is the maximum, taken over all pairs of vertices, of the smallest number of moves needed
to transform one schedule into the other. Obviously, for our problem the number of makespan
minimal schedules depends on the problem data, and therefore so does the diameter of the
resulting neighborhood graph. Based on the proof of Theorem 5, we can estimate the number
of required moves to transform an arbitrary makespan optimal schedule P into schedule PGS.
Theorem 6 An arbitrary makespan optimal schedule P with given sets J1 and J2 can be
transformed by no more than ⌊n(n − 1)/2⌋ moves in G(η) into schedule PGS.
The proof of Theorem 6 is given in the appendix.
Corollary 1 The diameter of graph G(η) is not greater than 2⌊n(n − 1)/2⌋.
Proof. Using the proof of Theorem 5 and the estimation of Theorem 6, each of two arbitrary
makespan optimal schedules P1 and P2 can be transformed by no more than ⌊n(n − 1)/2⌋
neighbor generations in η into schedule PGS. Combining both paths in the graph G(η) proves
the corollary.
We now suggest two neighborhoods that both contain neighborhood η given above. Thus,
both neighborhoods are represented by a strongly connected graph. We again describe the
neighborhoods by specifying several types of neighbor generations of a makespan optimal
schedule P = (p1, p2) with given sets J1 and J2.
4.1.1 Neighborhood ηA
The different types of neighbor generations of a schedule P = (p1, p2) with corresponding
sets J1 and J2 in neighborhood ηA are as follows:
1) Select an arbitrary pair of adjacent jobs i and k in one of the sequences p1 and p2. Then
a neighbor is obtained by interchanging both adjacent jobs on the chosen machine, if
feasibility and a minimal makespan of the schedule are maintained.
2) Select a pair of jobs that are adjacent in both sequences p1 and p2 and interchange
them, if makespan optimality is maintained.
3) Select an arbitrary job (say y) in a set Ji with |Ji| > 1 and remove it from both
sequences. Within the sequence of the jobs of the other set Jj (j ≠ i), find a position
u with 1 ≤ u ≤ |Jj| + 1 at which to reinsert job y. Now, in both sequences p1 and p2,
find a position satisfying the above constraint (i.e. job y is now the u-th job from Jj),
if makespan optimality is maintained.
4) Select arbitrary jobs i ∈ J1 and j ∈ J2 and remove these jobs from both sequences p1
and p2. Determine the positions u and v with 1 ≤ u ≤ |J2| and 1 ≤ v ≤ |J1| in the
partial sequences of the jobs of the new sets J′1 = J1 \ {i} ∪ {j} and J′2 = J2 \ {j} ∪ {i}.
In both sequences p1 and p2, find a position satisfying the above constraint (i.e. that
job i is now the u-th job from J′2 and job j is now the v-th job from J′1), if makespan
optimality is maintained.
5) Interchange sets J1 and J2 and consider the reverse permutations (i.e. the first job
becomes the last one, the second job becomes the second-to-last one, and so on).
Neighborhood ηA is an adaptation of neighborhood η to the case of arbitrary starting solu-
tions (notice that in contrast to neighborhood η, neighborhood ηA does not require a special
structure of the starting schedule to generate neighbors according to types 3) through 5)).
Neighbor generation according to type 4) is illustrated using the following example. Con-
sider a schedule P = (p1, p2) with p1 = (1, 3, 4, 7, 2, 6, 5, 9, 8, 10), p2 = (6, 7, 9, 1, 8, 3, 10, 2, 4, 5),
J1 = {1, 2, 3, 4, 5} and J2 = {6, 7, 8, 9, 10} (and assume that this schedule is makespan op-
timal for certain given processing times of the operations). Assume that jobs i = 3 and
j = 7 have been selected to be interchanged. Thus, we obtain J ′1 = {1, 2, 4, 5, 7} and
J ′2 = {3, 6, 8, 9, 10}. Assume further that we have randomly determined position v = 4 and
u = 3 for reinserting jobs 7 and 3, respectively, into the partial sequences of the jobs of sets
J1 \ {3} and J2 \ {7}. Thus, job 7 can be inserted between the third and fourth jobs of
J1 \ {3} on M1 and M2, i.e. job 7 can be inserted between jobs 2 and 5 in p1 (in this case
we have two possible insertion positions from which we choose one randomly) and between
jobs 4 and 5 in p2 (in this case the insertion position is uniquely determined). Then job 3
can be inserted between the second and the third jobs of J2 \ {7} on M1 and M2, i.e. job 3
can be inserted between jobs 9 and 8 in p1 (the insertion position is uniquely determined)
and between jobs 9 and 8 in p2 too (we have two possible insertion positions). If a generated
neighbor does not fulfill the minimal makespan constraint, we reject the generated schedule
and continue with another neighbor generation.
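The position constraint used in this example can be checked mechanically. The following sketch (the helper name is ours) returns, for a partial sequence from which the two selected jobs have been removed, all insertion indices at which the reinserted job becomes the u-th job of its new set; applied to the sequences above, it reproduces the insertion positions stated in the text:

```python
def candidate_slots(seq, set_jobs, u):
    """Return all insertion indices i (the job is inserted before seq[i])
    such that exactly u-1 members of set_jobs precede the insertion point,
    i.e. the inserted job becomes the u-th job of its set in seq."""
    slots, count = [], 0
    for i in range(len(seq) + 1):
        if count == u - 1:
            slots.append(i)
        if i < len(seq) and seq[i] in set_jobs:
            count += 1
    return slots
```

For instance, reinserting job 7 (u = 4) into p1 with jobs 3 and 7 removed admits two slots, while reinserting job 3 (u = 3) admits exactly one, as described above.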
In our implementation, the generation of neighbors is controlled by probabilities. One
reason for this is that a neighbor generation according to type 5) should not be applied
often, since the function value of the secondary criterion can change greatly.
Moreover, the number of schedules that can be obtained by the particular types of neighbor
generations can differ considerably.
4.1.2 Neighborhood ηB
We now introduce another neighborhood which allows more general neighbor generations
than those of types 1) and 2). The extension is based on the observation that pairwise
interchange neighborhoods usually produce better results than adjacent pairwise interchange
neighborhoods. We consider a neighborhood ηB, where types 1) and 2) of neighbor generations
are replaced by types 1’) and 2’) as follows (types 3) through 5) of neighbor generations
are as in neighborhood ηA):
1’) Select an arbitrary job i and interchange it on one machine with a job j such that
the difference of their positions in the job sequence on this machine is not bigger than
a given constant τ, if feasibility and makespan optimality are maintained (notice that
for τ = 1, type 1’) corresponds to type 1) of neighborhood ηA). Based on our initial
tests, in our implementation of neighborhood ηB, we set τ = n/3.
2’) Select an arbitrary pair (i, j) of jobs from the same set Jk with k ∈ {1, 2} and in-
terchange the jobs on both machines, if makespan optimality is maintained (for the
special case of adjacent pairwise interchanges, this move corresponds to move 2) for
neighborhood ηA).
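A type 1’) move can be sketched as follows (a hypothetical helper, not the authors' code; the feasibility and makespan checks mentioned above are left to the caller, and the sequence is assumed to contain at least two jobs):

```python
import random

def limited_swap(seq, tau):
    """Type 1') move: pick a job and swap it on one machine with a job
    whose position differs by at most tau (tau = 1 gives adjacent swaps).
    Feasibility and makespan optimality are checked by the caller."""
    i = random.randrange(len(seq))
    lo, hi = max(0, i - tau), min(len(seq) - 1, i + tau)
    j = random.choice([k for k in range(lo, hi + 1) if k != i])
    neighbor = seq[:]
    neighbor[i], neighbor[j] = neighbor[j], neighbor[i]
    return neighbor
```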
The neighborhoods ηA and ηB may reduce the number of moves required to transform
an arbitrary makespan optimal schedule P1 into schedule PGS given in Theorem 5, since
replacing moves of types 1) and 2) by types 1’) and 2’) may reduce the number of moves
required to transform schedule P1 into schedule P′ introduced in the proof of Theorem 5.
However, it is difficult to use this fact for a strengthened estimation of the diameter of the
resulting graphs, since one must guarantee that all intermediate schedules obtained also
have a minimum makespan.
4.2 Local Search Algorithms
The different neighborhoods have been incorporated into local search heuristics combined with
simple metaheuristics. Our goal is to study the influence of various neighborhood structures
on the effectiveness of local search algorithms to solve multicriteria scheduling problems.
Since our intention is not to directly compare the performance of local search algorithms, tabu
search and genetic algorithms were not included in our study. Both tabu search and genetic
algorithms require much longer runs (due to the complete or almost complete investigation
of the neighborhoods in tabu search and the use of a population in the genetic algorithm).
Moreover, simulated annealing (SA), one of the iterative algorithms included in this paper,
is quite competitive in solving several scheduling problems.
The simplest procedure is iterative improvement (descent algorithm), where only neigh-
bors of the current starting solution with a better objective function value with respect to
criterion C2 are accepted. We include in our tests a variant that generates a
neighbor randomly. If a schedule that violates the minimal makespan constraint has been generated,
this schedule is rejected and the search is continued from the current (makespan optimal)
starting solution. In this way, the search is always performed among makespan optimal
schedules. The procedure stops after a given number of iterations has been performed (i.e.
a given number of schedules has been generated, independent of whether they are makespan
optimal or not). This algorithm is denoted as algorithm II.
A slight modification is obtained when neighbors with an equal objective function value
are also accepted as the new starting solution for the next iteration. Such an approach can be
interpreted as a threshold accepting procedure [5] with a constant threshold value of zero,
denoted as TA(0). This modification has particular importance when neighbors with equal
objective function values can be frequently expected (as in the case of tardiness based criteria,
e.g. in [3] such a TA(0) variant has obtained better results than iterative improvement for
the single machine weighted tardiness problem). In addition to the above procedures, we
also use multi-start procedures and simulated annealing variants. In a multi-start procedure
we repeatedly apply algorithm TA(0) with b different starting solutions (algorithm MS(b)).
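The core acceptance loop of algorithm TA(0) can be sketched as follows (the callback interface is our assumption; in particular, `random_neighbor` is assumed to return `None` when the generated schedule violates the minimal makespan constraint, and replacing `<=` by `<` yields algorithm II):

```python
def ta0(start, objective, random_neighbor, iterations):
    """Threshold accepting with threshold zero: accept a random neighbor
    if its C2 value is not worse. Neighbors violating the minimal-makespan
    constraint (returned as None) are rejected, but every generated
    schedule counts toward the iteration budget."""
    current = best = start
    for _ in range(iterations):
        cand = random_neighbor(current)
        if cand is None:                     # makespan constraint violated
            continue
        if objective(cand) <= objective(current):
            current = cand
            if objective(current) < objective(best):
                best = current
    return best
```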
Simulated annealing [8] has its origin in statistical mechanics. In our implementation we
start with an initial schedule P = (p1, p2) (with given sets J1 and J2) and randomly generate
a neighbor P′ = (p′1, p′2) (with sets J′1 and J′2). Then the objective function value difference
∆ = F(P′) − F(P)
is calculated. If ∆ ≤ 0, schedule P′ is accepted as the new starting solution for the next
iteration. If ∆ > 0, schedule P′ is accepted as the new starting solution with probability
exp(−∆/Temp), where Temp is a parameter known as the temperature. In the initial phase
of the algorithm, the temperature is often rather high so that escaping from a local optimum
is rather easy. In the course of the computations, the temperature is decreased step by step
so that finally a very low temperature is used (and thus the algorithm basically corresponds
to an iterative improvement procedure). Often a geometric cooling scheme is applied,
which we also use. After a fixed number of generated solutions, the temperature is reduced
according to Tempnew = λ · Tempold, where 0 < λ < 1 and Tempnew and Tempold denote the
new and old temperatures, respectively. A stopping criterion is a final cycle with a low
temperature. In the literature, sometimes the final temperature Tempend = 0.01 is chosen (see
e.g. [4]). In our tests we used an initial temperature Temp0 ∈ {F(P)/5, F(P)/15, F(P)/30}
and a final temperature Tempend ∈ {1, 0.1, 0.01, 0.001}, where F(P) denotes the objective
function value of the initial schedule P. We have found that neither parameter setting
significantly influences the quality of the results. There is a tendency to favor smaller
values of both parameters (an explanation for this behavior will be given in the next
section when presenting the results of the comparative study). However, to prevent simulated
annealing from being too close to iterative improvement, we used
Temp0 = F(P)/15 and Tempend = 0.01.
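A sketch of this SA variant with geometric cooling follows (the iteration budget per temperature level, the default parameter values, and the callback interface are our assumptions):

```python
import math
import random

def sa_dec(start, objective, random_neighbor, temp0, temp_end,
           lam=0.9, per_temp=50):
    """Simulated annealing with a geometric cooling scheme: after
    per_temp generated solutions the temperature is multiplied by lam;
    the run stops once the temperature falls below temp_end."""
    current = best = start
    temp = temp0
    while temp >= temp_end:
        for _ in range(per_temp):
            cand = random_neighbor(current)
            if cand is None:        # minimal-makespan constraint violated
                continue
            delta = objective(cand) - objective(current)
            if delta <= 0 or random.random() < math.exp(-delta / temp):
                current = cand
                if objective(current) < objective(best):
                    best = current
        temp *= lam
    return best
```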
Since flexible cooling schemes in connection with simulated annealing may be superior
to monotonically decreasing cooling schemes, we also test such flexible schemes in the
following way. We start with the final temperature Temp = 0.01 of the geometric cooling
scheme, and if for a number TCON of iterations no neighbor has been accepted, we increase
the temperature by setting Temp = Temp + ∆Temp. If during the next TCON iterations again no
neighbor has been accepted, we increase Temp further by ∆Temp. However, as soon as
a neighbor has been accepted, the temperature is reset to the initial value Temp = 0.01. In
initial tests, we experimented with both parameters TCON and ∆Temp and, for simplicity,
applied constant values with 10 ≤ TCON ≤ 60 and 0.01 ≤ ∆Temp ≤ 20. Notice that big
values of both parameters move this procedure closer to iterative improvement, since worse
neighbors are seldom accepted; but if they are considered, they are accepted with a bigger
probability. Based on our initial tests, we recommend TCON = 15 and ∆Temp = 1.5,
which performed particularly well for ∑ wiTi (for this criterion, the differences in the
results between the individual parameter settings were bigger). We denote the variant with
a decreasing cooling scheme as SA-dec and the variant with a variable cooling scheme as
SA-var.
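Our reading of the SA-var reheating scheme can be sketched as follows (parameter and function names are ours; the default values TCON = 15 and ∆Temp = 1.5 follow the recommendation in the text):

```python
import math
import random

def sa_var(start, objective, random_neighbor, iterations,
           base_temp=0.01, tcon=15, dtemp=1.5):
    """Variable cooling scheme: run at base_temp; after every tcon
    consecutive rejections, raise the temperature by dtemp; reset to
    base_temp as soon as a neighbor is accepted."""
    current = best = start
    temp, rejected = base_temp, 0
    for _ in range(iterations):
        cand = random_neighbor(current)
        if cand is None:            # minimal-makespan constraint violated
            continue
        delta = objective(cand) - objective(current)
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            current, temp, rejected = cand, base_temp, 0
            if objective(current) < objective(best):
                best = current
        else:
            rejected += 1
            if rejected % tcon == 0:
                temp += dtemp
    return best
```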
5 Computational Results
In this section we report on computational results with the suggested constructive algorithms,
various neighborhood structures, and iterative algorithms. First we describe the
generation of the test problems. For the problems considered in this paper, no exact
optimization algorithms exist, and our attempts to develop effective lower bounds failed. Even if
some lower bounds could be developed, they would be so weak that evaluating the performance
of the heuristics against them would unnecessarily suggest poor results. For these
reasons, we compare the performance of the proposed heuristic algorithms relative to one
of them (for the constructive algorithms) and to the best one (for the iterative algorithms).
5.1 Test Problems and Initial Solutions
The number of jobs in the test problems varies from 20 through 80. In each problem,
processing times are random integers from the uniform distribution over [1, 100]. The job
weights wi are randomly chosen integers from the interval [1, 10]. For problems with
C2 = ∑ wiTi, we determine di as a uniformly distributed integer from the interval
[0.25X, 0.75X], where X is the bigger of the two machine loads, i.e. X = max{∑ ai, ∑ bi}.
For each of the four problem sizes and each C2 criterion, 20 problems were generated,
thus giving a total of 240 test problems.
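The instance generation described above can be sketched as follows (the function name and the rounding of the interval bounds 0.25X and 0.75X are our assumptions, since the text does not specify rounding):

```python
import random

def generate_instance(n, seed=None):
    """Generate a test instance as described in the text: processing
    times a_i, b_i uniform in [1, 100], weights w_i in [1, 10], due
    dates d_i uniform in [0.25X, 0.75X] with X the bigger machine load."""
    rng = random.Random(seed)
    a = [rng.randint(1, 100) for _ in range(n)]
    b = [rng.randint(1, 100) for _ in range(n)]
    w = [rng.randint(1, 10) for _ in range(n)]
    X = max(sum(a), sum(b))
    d = [rng.randint(round(0.25 * X), round(0.75 * X)) for _ in range(n)]
    return a, b, w, d
```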
For the constructive algorithms, we first determined a suitable job list, which obviously
depends on the specific criterion C2. We have found that for C2 = ∑ Ci the SSPT rule
(shortest sum of processing times first) works best; for C2 = ∑ wiCi, the WSSPT rule
(shortest weighted sum of processing times first, i.e. the job with the smallest ratio
(ai + bi)/wi is considered first); and for C2 = ∑ wiTi, the rule that first considers the job
with the smallest ratio (ai + bi) · di/wi. In particular, it has been observed that these
rules work significantly better than those considering only the processing times of one
of the machines. This also confirms the list used in algorithm GS. We observed that for a
small insertion depth h, the influence of the insertion
list applied is rather significant, and that this influence is reduced for big values of h.
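The three priority rules can be sketched as follows (a hypothetical helper, not the authors' code; jobs are given as (ai, bi, wi, di) tuples and the returned list contains job indices in priority order):

```python
def job_list(jobs, criterion):
    """Order jobs for the insertion algorithms.
    'sum_C'  -> SSPT : smallest a_i + b_i first
    'sum_wC' -> WSSPT: smallest (a_i + b_i) / w_i first
    'sum_wT' ->        smallest (a_i + b_i) * d_i / w_i first"""
    keys = {
        'sum_C':  lambda j: j[0] + j[1],
        'sum_wC': lambda j: (j[0] + j[1]) / j[2],
        'sum_wT': lambda j: (j[0] + j[1]) * j[3] / j[2],
    }
    return sorted(range(len(jobs)), key=lambda i: keys[criterion](jobs[i]))
```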
5.2 Comparative Evaluation of Constructive Algorithms
We compared various constructive algorithms with z = 1 (i.e. |J1| = 1) for their improve-
ments over the schedule generated by the algorithm of Gonzalez and Sahni (GS). We used
the values h ∈ {1, 3, 6, all} for the insertion depth. Notice that for h = 1, this is simply an
appending procedure without insertion. Table 2 presents the average percentage improve-
ments of the objective function value for each class of 20 instances specified by the number
n of jobs. Since for h = 1, variants IS1 and IS2 generate the same solutions, we dropped
IS2(1,1) in Table 2. Row ‘ave’ gives the average values over all 4 problem classes.
—> Insert Table 2 about here <—
Concerning parameter h, we have found that a suitable value depends on the criterion
C2. In the case of C2 ∈ {∑ Ci, ∑ wiCi} (flow time based secondary criteria), for most
instances the consideration of h = 6 is sufficient. Here we often observe no further
improvement for h = all in comparison with h = 6, and the average percentage improvements
with h = all are usually not larger than 0.25 % in comparison with h = 6 (independent of
the variant of the insertion algorithm). The differences are a bit larger for C2 = ∑ wiTi;
however, h = 6 already produces good results in comparison with h = 1 or h = 3.
Comparing the insertion algorithms IS1 – IS3, we observed that algorithm IS3 is preferable
for all criteria C2 considered. The differences are again small for flow time based
secondary criteria, but they are bigger for C2 = ∑ wiTi. This shows that problems with
C2 = ∑ wiTi are the most difficult problems among those considered. From these tests,
algorithm IS3(6,1) can be recommended; for C2 = ∑ wiTi, h = all can also be used,
which, however, increases the complexity of the algorithm by the factor n.
Finally, we performed experiments with algorithm IS3(h,z) and z > 1. Table 3 shows the
results for insertion algorithm IS3 with different values of z, where columns 2 – 4 give the
average percentage improvements over variant IS3(6,1) and columns 5 – 7 give the average
percentage improvements over variant IS3(all,1). Except for the problem instances with
n = 20 and the application of variant IS3(h,z) for C2 = ∑ wiTi with h ∈ {6, all} and
z ∈ {2, 3}, consideration of z > 1 does not improve the results.
—> Insert Table 3 about here <—
5.3 Comparative Evaluation of Iterative Algorithms
In this section, we discuss the results of the iterative algorithms. For algorithm TA(0),
we first determined neighbors in neighborhoods ηA and ηB in order to find appropriate
probabilities to control the generation of the different types of neighbors within the chosen
neighborhood. Of course, we can expect that a neighbor generation of type 5 should seldom
be applied (since reversing the sequences on the machines does not change the Cmax value, but
has a big influence on the value of the C2 criterion). Although it is difficult or impossible
to find an ‘optimal’ probability vector for applying the individual types of neighbor
generation, we have made the following observations. Types 1 and 2 of the above neighbor
generation should be applied with approximately the same probability. Type 3 should be
applied more often than a neighbor generation according to type 4 (the ratio between both
types of neighbor generation should be approximately 3 : 1). Although there are
slightly different observations for neighborhoods ηA and ηB, we decided to apply a common
variant for both neighborhoods.
Based on our initial tests, we recommend the variant pr = (pr1, . . . , pr5) = (0.24, 0.24, 0.36,
0.12, 0.04), where pri gives the probability that a neighbor of type i is generated as described
above. We selected this probability vector due to the good results, particularly for the
‘difficult’ criterion C2 = ∑ wiTi.
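Sampling the neighbor type from this probability vector can be sketched as follows (the function is our illustration, not the authors' code):

```python
import random

def pick_move_type(rng=random, pr=(0.24, 0.24, 0.36, 0.12, 0.04)):
    """Sample the neighbor-generation type (1..5) with the recommended
    probabilities: types 1 and 2 equally likely, type 3 three times as
    likely as type 4, and the sequence reversal (type 5) rare."""
    r, acc = rng.random(), 0.0
    for move_type, p in enumerate(pr, start=1):
        acc += p
        if r < acc:
            return move_type
    return len(pr)      # guard against floating-point rounding
```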
We tested different iterative algorithms with neighborhoods ηA and ηB and the above
neighbor generation probabilities, using a good (IS3(6,1)) and a bad (GS) initial solution.
We performed short runs (100n generated solutions), medium-size runs (300n
generated solutions), and long runs (500n generated solutions). Among the iterative
algorithms, we included threshold accepting with a threshold value of zero (algorithm TA(0)),
iterative improvement (algorithm II), simulated annealing with a decreasing scheme (SA-dec),
simulated annealing with a variable scheme (SA-var), and a multi-start algorithm with four
restarts (MS(4)). In the latter case, the number of generated solutions per run is obtained
by dividing the total number by 4, i.e. for short runs we have 25n generated solutions per
run and for long runs 125n generated solutions per run. As starting solutions we
applied algorithms GS, IS1(6,1), IS3(6,1), and IS3(all,1).
Tables 4 – 6 present the average percentage deviations for the 20 instances of each problem
class from the best objective function value obtained for each instance by the iterative
algorithms. Moreover, we state in parentheses how often the corresponding algorithm has
obtained this best value for the instances of each class. In the second column, the type of
neighborhood is given, and in the third column, parameter It indicates that It · n solutions
have been generated. Since, as expected, algorithm TA(0) is usually superior to algorithm
II, we dropped the results produced by the latter algorithm from our tables.
—> Insert Table 4 about here <—
From these tables, we can observe the differences between the flow time based and tardiness
based secondary criteria. For flow time based criteria, the results with shorter runs are rather
good in comparison with those with the longer runs even when starting with a bad solution,
whereas for tardiness based secondary criteria short runs in general produce solutions of in-
ferior quality. In the latter case, longer runs where 500n solutions are generated considerably
improve the objective function value obtained by generating only 100n solutions.
—> Insert Table 5 about here <—
One reason for the bigger differences in the average percentage deviations from the best
value obtained may be the smaller objective function values for the tardiness based
criteria in comparison with the flow time based secondary criteria. Thus, the same
absolute difference of the objective function value from the best solution obtained usually
results in a much bigger relative or percentage deviation for C2 = ∑ wiTi than for the
other C2 criteria. Moreover, the average percentage deviations of the runs for C2 = ∑ wiTi
are strongly influenced by a small number of very big deviations.
—> Insert Table 6 about here <—
To illustrate, Table 7 gives the maximum percentage deviation for C2 = ∑ wiTi
within each class of 20 instances. From Table 7, it can be seen that for long runs, variant
SA-dec often produced a rather small maximum error, and from this point of view variant
SA-dec is superior to the other procedures. We mention that for flow time based secondary
criteria, these maximum percentage deviations are much smaller: for ηB and 500n generated
solutions, the maximum percentage deviation of all three algorithms TA(0), SA-dec and
SA-var with a bad starting solution is 2.71 % for C2 = ∑ Ci and 4.17 % for C2 = ∑ wiCi.
—> Insert Table 7 about here <—
For all secondary criteria, neighborhood ηB yields significantly better results than
neighborhood ηA. This can be seen particularly in the results for C2 = ∑ wiTi in Tables 6 and 7.
The influence of the chosen neighborhood is much stronger than the influence of the chosen
metaheuristic. The good performance of algorithm TA(0) is quite surprising. For the flow
time based criteria with a good starting solution, this is due to the good initial objective
function value of the starting solution (see also the discussion of Table 8 below). In general,
no algorithm is superior to the remaining algorithms from the point of view of the average
percentage deviation. In addition to the results presented in the tables, we mention that
for tardiness based secondary criteria, the iterative improvement algorithm is much worse
than algorithm TA(0) (a similar observation has been made in [3] for the single machine
weighted tardiness problem).
Concerning the importance of the quality of the starting solution, we observed that, as
expected, for shorter runs a good starting solution is much preferable whereas for the long
runs and neighborhood ηB, the differences in the results with a good and a bad starting
solution become smaller. Nevertheless, in most cases the best objective function value has
been obtained with a good starting solution.
5.4 Constructive vs. Iterative Algorithms
We now compare the objective function values obtained from the constructive algorithms
with the best value obtained by some variant of the iterative algorithms. The average
percentage deviations of the constructive algorithms from the best value obtained are given
in Table 8. It can be seen that for most of the problems with a flow time based secondary
criterion, the quality of the insertion algorithm is rather good, and, as observed earlier, the
differences between h = 6 and h = all are rather small. Both results are in contrast to those
obtained for the tardiness based secondary criterion; in this case, the differences between
h = 6 and h = all are larger (except for n = 20). We also note that for short runs (100n
generated solutions), the iteratively determined solution is sometimes still worse than the
solution obtained by algorithm IS3(6,1), whereas for C2 = ∑ wiTi, after 100n generated
solutions the schedule obtained is usually better than that obtained with IS3(6,1).
—> Insert Table 8 about here <—
6 Conclusions
This paper developed and tested various constructive and iterative heuristic algorithms for
solving the two-machine open shop problem with a secondary criterion. We designed ex-
periments and empirically evaluated the influence of various neighborhood structures on the
effectiveness of the proposed iterative improvement and simulated annealing algorithms in
finding optimal or near optimal solutions for the problem. From our tests, the following
conclusions can be drawn:
• Constructive insertion algorithms considerably improve the objective function value in
comparison with a Gonzalez-Sahni type schedule. This is due to the ‘increasing degree
of freedom’ when sequencing the last jobs, which allows a better compensation of ‘bad
initial decisions’. In the case of flow time based criteria, in many cases the consideration
of a restricted insertion algorithm with an insertion depth of approximately 6 is
sufficient. In comparison with the iterative algorithms, the insertion algorithms produce
solutions of excellent quality. However, for the most difficult criterion C2 = ∑ wiTi,
the quality of the solution obtained by the insertion algorithm can be far from
that finally obtained by local search.
• Both neighborhoods applied in the iterative algorithms are strongly connected (so that
a global optimum can theoretically be reached by our search algorithms). They also
operate well from a practical point of view, where (as expected) neighborhood ηB is
superior. This indicates that nonadjacent interchanges of jobs should be considered in
order to limit the diameter of the neighborhood graph (and the number of iterations
required for obtaining a good heuristic solution). For C2 = ∑ wiTi, the differences in
the solution quality of neighborhoods ηA and ηB are rather large, whereas the
differences between both neighborhoods for the flow time based criteria are much smaller
(nevertheless, they are still significant).
• The quality of the starting solution plays an important role for the shorter runs. As
expected, for the runs with 100n generated solutions the results with the starting
solution obtained by the insertion algorithm are clearly better, whereas for the runs
with 500n generated solutions the results with a good starting solution are slightly
better (particularly with respect to how often the best value has been obtained).
• We observed differences in the effectiveness of the proposed heuristics in solving the
flow time and tardiness based secondary criteria problem instances. In the case of
C2 = ∑ wiTi, we found considerably larger percentage improvements of the objective
function value than in the case of flow time based secondary criteria. As a consequence,
for flow time based secondary criteria, short runs of the iterative algorithms or
even the constructive insertion algorithm often produce an acceptable solution quality,
whereas for tardiness based secondary criteria longer runs are necessary (to obtain a
low percentage deviation from the best value obtained).
• The neighborhood considerations and the iterative algorithms proposed in this paper
are also useful for solving the open shop problem where the performance measure
is a linear combination of the makespan and C2 ∈ {∑ Ci, ∑ wiCi, ∑ wiTi}. In these
cases, strong connectivity is automatically guaranteed, since all feasible schedules for
the single criterion problem are also feasible for the bicriteria problem, even when the
number of machines m > 2.
We conclude the paper with some research issues that are worthy of future investigation.
Firstly, developing better constructive algorithms for the two-machine open shop problem
to minimize total weighted tardiness would be useful to improve the performance of the
proposed algorithms. Secondly, in view of the surprising trend of rather good performance
of iterative algorithms that do not accept worse solutions (which is in contrast to most
other scheduling or combinatorial optimization problems), further theoretical investigation
of neighborhoods in the case of tardiness based objective functions would provide better
insight into this phenomenon. Thirdly, extending the proposed heuristic algorithms to
solve m-stage open shop problems with a secondary criterion is both interesting and useful.
Since in this case the problem of minimizing the makespan is NP-hard, one can consider a
slightly different problem, where we want to minimize the secondary criterion subject to the
constraint that the primary criterion deviates from the optimal value by no more than some
constant (provided there exists an approximation algorithm with this performance guarantee).
However, in this case it is not clear whether the neighborhoods are strongly connected, which
makes the design of an appropriate iterative heuristic algorithm difficult. Finally, further
study of constructive and iterative techniques to solve other types of multicriteria scheduling
problems will enhance the applicability of scheduling theory to industrial practices.
References
[1] Achugbue, J.O. and Chin, F.Y., Scheduling the open shop to minimize mean flow time, SIAM Journal on Computing, 11, 1982, 709 – 720.
[2] Chen, B., Potts, C.N., and Woeginger, G.J., A review of machine scheduling: complexity, algorithms and applications, in: Handbook of Combinatorial Optimization (D.-Z. Du and P.M. Pardalos, eds.), Kluwer, Dordrecht, Netherlands, 1998, 21 – 169.
[3] Crauwels, H.A.J., Potts, C.N., and van Wassenhove, L.N., Local search heuristics for the single machine total weighted tardiness scheduling problem, INFORMS Journal on Computing, 10, 1998, 341 – 350.
[4] Danneberg, D., Tautenhahn, T., and Werner, F., A comparison of heuristic algorithms for flow shop scheduling problems with setup times and limited batch size, Mathematical and Computer Modelling, 29, 1999, 101 – 126.
[5] Dueck, G. and Scheuer, T., Threshold accepting: a general purpose optimization algorithm appearing superior to simulated annealing, Journal of Computational Physics, 90, 1990, 161 – 175.
[6] Gonzalez, T. and Sahni, S., Open shop scheduling to minimize finish time, Journal of the ACM, 23, 1976, 665 – 679.
[7] Johnson, S.M., Optimal two- and three-stage production schedules with set-up times included, Naval Research Logistics Quarterly, 1, 1954, 61 – 68.
[8] Kirkpatrick, S., Gelatt, C.D., Jr., and Vecchi, M.P., Optimization by simulated annealing, Science, 220, 1983, 671 – 680.
[9] Kubiak, W., Sriskandarajah, C., and Zara, K., A note on the complexity of open shop scheduling problems, INFOR, 29, 1991, 284 – 294.
[10] Kyparisis, G.J. and Koulamas, C., Open shop scheduling with makespan and total completion time criteria, Computers and Operations Research, 27, 2000, 15 – 27.
[11] Lee, C.-Y. and Vairaktarakis, G.L., Complexity of single machine hierarchical scheduling: a survey, Research Report No. 93-10, Department of Industrial and Systems Engineering, University of Florida, Gainesville, FL, USA, 1993.
[12] Liu, C.Y. and Bulfin, R.L., Scheduling ordered open shops, Computers and Operations Research, 14, 1987, 257 – 264.
[13] Masuda, T. and Ishii, H., Two machine open shop problem with bi-criteria, Discrete Applied Mathematics, 52, 1994, 253 – 259.
[14] Nagar, A., Haddock, J., and Heragu, S., Multiple and bicriteria scheduling: a literature survey, European Journal of Operational Research, 81, 1995, 88 – 104.
[15] Pinedo, M. and Chao, X., Operations Scheduling with Applications in Manufacturing and Services, Irwin/McGraw-Hill, 1999.
[16] Prins, C., An overview of scheduling problems arising in satellite communications, Journal of the Operational Research Society, 40, 1994, 611 – 623.
[17] T’Kindt, V. and Billaut, J.-C., Multicriteria scheduling problems: a survey, RAIRO Operations Research, 35, 2001, 143 – 163.
Appendix
Proof of Theorem 1: Achugbue and Chin [1] prove that the O2||∑Ci problem is strongly NP-hard by reduction from 3-partition, defined as follows: given positive integers n and B and a set A = {a1, a2, . . . , a3n} of integers with

a1 + a2 + . . . + a3n = nB and B/4 < ai < B/2, 1 ≤ i ≤ 3n,

does there exist a partition of A into n 3-element sets such that the sum of the three elements in each set equals B? By means of this instance, Achugbue and Chin [1] define an instance of the O2||∑Ci problem and a bound D such that a 3-partition exists if and only if there exists a schedule for the O2||∑Ci problem with an objective function value not greater than D.
From the schedule construction given in [1], it follows that a schedule with minimal total flow time must also have a minimum makespan. Consequently, a 3-partition exists if and only if there exists a feasible (i.e., Cmax optimal) schedule for the O2||Lex(Cmax, ∑Ci) problem with a ∑Ci value not greater than D.

Since the O2||Lex(Cmax, ∑Ci) problem is a special case of both the O2||Lex(Cmax, ∑wiCi) and the O2||Lex(Cmax, ∑wiTi) problems, it follows that the problems with the C2 criteria considered in this paper are strongly NP-hard.
Proof of Theorem 5: We prove that any makespan optimal schedule P can be transformed into a makespan optimal schedule having the structure of a schedule generated by the algorithm of Gonzalez & Sahni, i.e., a schedule of the form PGS = (pGS1, pGS2) with pGS1 = (j, q(J2^GS)) and pGS2 = (q(J2^GS), j), where J1^GS = {j} and J2^GS = N \ {j} (without loss of generality we can assume that the Johnson sequence q(J2^GS) of the jobs of set J2^GS is determined for the machine route M2 → M1).
Let P = (p1, p2) be an arbitrary makespan optimal schedule with the sets J1 and J2. First we transform schedule P into a schedule of the form P′ = (p′1, p′2) with J′1 = J1, J′2 = J2, p′1 = (q(J1), q(J2)) and p′2 = (q(J2), q(J1)). Let i be the last job of J1 in sequence p2. If job i is not at the last position of p2, then by adjacent pairwise interchanges with jobs from J2 in p2 we move this job to the last position in p2. In each step, the completion time of the final job on M2 does not increase. Next, we consider the second-to-last job k of J1 in p2 and, by consecutive adjacent pairwise interchanges with jobs from J2, move it to the second-to-last position on M2, where again in each step the completion time of the final job on M2 does not increase. Continuing in this way, we obtain a makespan optimal schedule in which all jobs of set J1 are sequenced at the end on M2 and in the same relative order as before.
Next, we consider the jobs in J2 and reschedule them on M1 in the same manner as described above for the jobs in J1, i.e., we first move the last job of J2 on M1 to the last position on M1 by consecutive adjacent pairwise interchanges with jobs of J1 on M1, and so on. Again, in each step the completion time of the final job on M1 does not increase. Thus, we have obtained a schedule in which on M1 first all jobs of J1 and then all jobs of J2 are processed, on M2 first all jobs of J2 and then all jobs of J1 are processed, and on both machines the relative order of the jobs of J1 and J2, respectively, is the same as in the starting schedule P. This schedule has been obtained by generating neighbors only of type 1) given above.
Next, we transform this schedule such that the jobs of both sets J1 and J2 are processed according to the corresponding Johnson sequence. This is done by neighbor generations first according to type 1) and then according to type 2) given above. To this end, we choose two adjacent jobs i and k of J2 which are not in Johnson's order on one machine only (if any). If such a job pair exists, we interchange both jobs on this machine; by this interchange, the completion time of the final job on this machine does not increase. Then we continue with the next pair of adjacent jobs of J2 which are not in Johnson's order on one machine only. If such a pair does not exist, we possibly still have some pairs of adjacent jobs that are not in Johnson's order on both machines. Consider such a pair i, k of currently adjacent jobs. By interchanging both jobs on both machines, the completion time of the final job on M1 again does not increase. The latter result holds since, if job k is sequenced before job i in Johnson's sequence, i.e., we have
min{bk, ai} ≤ min{ak, bi},
or equivalently

max{−bi, −ak} ≤ max{−bk, −ai},

then the inequality
max{bk + ak + ai, bk + bi + ai} ≤ max{bi + ai + ak, bi + bk + ak}
is valid, and thus the Cmax value after interchanging jobs i and k cannot increase (since the longest path in the network, with operations as vertices and arcs describing the precedence relations between operations, cannot increase if the latter inequality holds).
Then we continue interchanging further pairs of jobs of J2 that are currently adjacent on both machines and not in Johnson's order, until we have obtained the Johnson sequence q(J2) for the jobs of J2 on both machines. In the same manner we transform the current sequence of the jobs in J1 into Johnson's sequence q(J1) on both machines. Thus, we have obtained schedule P′.
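The implication used in the interchange argument above can also be checked numerically. A minimal sketch (hypothetical helper name) samples random processing times and verifies that whenever the Johnson condition min{bk, ai} ≤ min{ak, bi} holds, the stated inequality on the two maxima holds as well:

```python
import random

def interchange_safe(ai, bi, ak, bk):
    """If job k precedes job i in Johnson's order, i.e.
    min(bk, ai) <= min(ak, bi), then swapping i and k on both machines
    must satisfy max(bk+ak+ai, bk+bi+ai) <= max(bi+ai+ak, bi+bk+ak)."""
    if min(bk, ai) <= min(ak, bi):
        return max(bk + ak + ai, bk + bi + ai) <= max(bi + ai + ak, bi + bk + ak)
    return True  # premise does not hold, nothing to check

random.seed(0)
for _ in range(100000):
    ai, bi, ak, bk = (random.randint(0, 50) for _ in range(4))
    assert interchange_safe(ai, bi, ak, bk)
print("implication holds on all sampled instances")
```

This is only a sanity check on random instances, of course; the proof above establishes the implication for all processing times.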
Finally, we transform the schedule P′ with J′1 = J1 and J′2 = J2 obtained above into schedule PGS with J1^GS = {j} and J2^GS = N \ {j}. This is done by generating neighbors according to types 3), 4), or 5). First we note that it can easily be determined whether a schedule of the form P′ does not exceed the optimal makespan C^opt_max: schedule P′ is makespan optimal if and only if both resulting flow shop subproblems, with the job sets J′1 and J′2, respectively, have a makespan value not greater than C^opt_max.

Assume that for job j in schedule PGS we have j ∈ J′1. In this case we can consecutively remove the remaining jobs from J′1 and insert them step by step into set J′2. This corresponds to a neighbor generation according to type 3) above, and we insert a removed job z into the current permutation q(J′′2) at its position according to Johnson's sequence for set J′′2 ∪ {z}. Each intermediate schedule has an optimal makespan value: since in each step we remove a job from J′1, the makespan does not increase there, and although a job is added to J′2, the current set J′2 is always a subset of the job set J2^GS = N \ {j} of schedule PGS, so the makespan value of the schedule obtained for each intermediate set J′′2 also does not exceed the optimal makespan value C^opt_max. Thus, after |J′1| − 1 steps we obtain schedule PGS, and each intermediate schedule has the optimal makespan value C^opt_max.
max.Next, we consider the case that we have J ′1 = {h} with h 6= j in schedule P ′. Then
we generate a neighbor according to type 4) by interchanging jobs h ∈ J ′1 and j ∈ J ′2.
31
Reinserting both jobs in the new sequence at its position according to Johnson’s sequenceleads to schedule PGS.
Finally, we consider the case that |J′1| > 1 and j ∈ J′2 in schedule P′. We note that, if a schedule with the structure of P′ is makespan optimal with the sets J′1 and J′2, then there is also a makespan optimal schedule with the two sets J′1 and J′2 interchanged. This can be seen by generating a neighbor according to type 5), which obviously has the same makespan value. Notice that by such a neighbor generation the Johnson sequences may change, since the machine routes of all jobs change; however, transforming the resulting reverse permutations into the corresponding Johnson sequences does not increase the makespan value. So, in this case we transform schedule P′ into schedule PGS first by neighbor generations of type 3) and then by a neighbor generation of type 5). When putting a job h ∈ J′2 with h ≠ j into the current set J′1 and inserting this job into the corresponding sequence at the position according to Johnson's sequence, the makespan value of the resulting sequence does not change, since a job is removed from J′2, and in J′1, where a job is added, we obtain a job set which is a subset of J2^GS in schedule PGS. By taking the above comment on the interchange of the sets J′1 and J′2 into account, we get the same makespan value for the new schedule. After |J′2| − 1 steps we have obtained a schedule which differs from schedule PGS only by interchanged sets J1^GS and J2^GS. Now we generate a neighbor according to type 5), which does not change the makespan value. However, the two sequences of the resulting sets J1^GS and J2^GS are now not necessarily in the same order as in PGS (since usually several Johnson orders of the jobs of J1 and J2 exist). Repeated application of a neighbor generation according to type 2) changes both partial sequences into Johnson's sequences, where in each step the makespan value does not increase. Thus, we have obtained schedule PGS.
Now consider any two makespan optimal schedules P1 and P2. According to the transformations above, both schedules can be transformed by repeated application of neighbor generations of types 1) - 5) into schedule PGS. Since the neighborhood η is symmetric, we can also transform schedule PGS into P2. Thus, there is a path from an arbitrary makespan optimal schedule P1 to an arbitrary other makespan optimal schedule P2, which completes the proof.
Proof of Theorem 6: Let the schedule P with given sets J1 and J2 be makespan optimal. Then, by no more than 2|J1||J2| moves of type 1), it is possible to transform P into a schedule where all jobs of Ji are processed first on Mi and the remaining jobs follow, without changing the relative order of the jobs within the sets Ji (i ∈ {1, 2}). Then we can transform this schedule into a schedule P′ with the same sets J1 and J2, where the jobs of both sets are in Johnson's sequence on both machines. This can be done by no more than |J1|(|J1| − 1)/2 and |J2|(|J2| − 1)/2 adjacent pairwise interchanges of jobs on one machine (i.e., by moves of type 1)) or on both machines (i.e., by moves of type 2)).

Next, we show that no more than n − 1 neighbor generations of types 3), 4), or 5) are necessary to transform schedule P′ into schedule PGS. Consider the individual cases in the proof of Theorem 5. If j ∈ J′1, then no more than |J′1| − 1 moves of type 3) are required; if J′1 = {h} with h ≠ j, one move of type 4) is required; and if |J′1| > 1 and j ∈ J′2, then no more than |J′2| − 1 moves of type 3) and one move of type 5) are required to transform P′ into PGS. The estimation in the latter case uses the fact that we can break ties within the set of Johnson sequences by a lexicographical ordering of the jobs when they are processed first on machine M1, and by the reverse permutation when they are processed first on M2. This tie breaking procedure guarantees that, after applying a move of type 5), the jobs are automatically again in the (uniquely defined) Johnson sequence without any additional moves of type 2).
Consequently, the smallest number of required moves l(P, PGS) to transform P into PGS can be estimated as

l(P, PGS) ≤ 2|J1||J2| + |J1|(|J1| − 1)/2 + |J2|(|J2| − 1)/2 + max{|J1| − 1, 1, (|J2| − 1) + 1}.

Taking into account that

|J1||J2| ≤ ⌊n²/4⌋,  |J1|(|J1| − 1)/2 + |J2|(|J2| − 1)/2 ≤ (n − 1)(n − 2)/2  and  max{|J1|, |J2|} ≤ n − 1,

we obtain the estimation

l(P, PGS) ≤ ⌊n²/2⌋ + (n − 1)(n − 2)/2 + (n − 1),

which can be written as

l(P, PGS) ≤ ⌊n(n − 1/2)⌋ = ⌊n² − n/2⌋.
This completes the proof.
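The chain of estimates above can be double-checked numerically: for every split |J1| + |J2| = n, the per-split move bound is at most the closed-form bound ⌊n² − n/2⌋. A small sanity-check sketch (hypothetical helper names):

```python
from math import comb

def move_bound(n1, n2):
    """Move count bound from the proof for |J1| = n1, |J2| = n2:
    2|J1||J2| + C(|J1|,2) + C(|J2|,2) + max{|J1|-1, 1, |J2|}."""
    return 2 * n1 * n2 + comb(n1, 2) + comb(n2, 2) + max(n1 - 1, 1, n2)

def closed_form(n):
    """Closed-form bound floor(n(n - 1/2)) = floor((2n^2 - n)/2)."""
    return (2 * n * n - n) // 2

# Every split of n jobs respects the closed-form bound.
for n in range(2, 200):
    assert all(move_bound(n1, n - n1) <= closed_form(n) for n1 in range(1, n))
```

Note that for n = 2 and n1 = n2 = 1 the two bounds coincide (both equal 3), so the estimate is tight in that corner case.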
i         1    2    3    4
ai        4    3    8   14
bi        7    9    6    5
ai + bi  11   12   14   19
Table 1: Data of Example 1
             IS1(h,1)                   IS2(h,1)                IS3(h,1)
n     h=1    h=3    h=6    all      h=3    h=6    all     h=1    h=3    h=6    all

C2 = ∑Ci
20  16.81  16.70  16.56  16.57    19.46  19.58  19.58   18.14  20.18  20.35  20.36
40  19.65  20.59  20.51  20.44    21.29  22.09  22.13   20.40  22.16  22.81  22.84
60  21.23  21.57  21.83  21.84    22.81  23.28  23.31   21.87  23.40  23.85  23.93
80  20.57  21.19  21.41  21.49    21.89  22.61  22.80   20.93  22.19  22.82  23.07
ave 19.57  20.01  20.08  20.08    21.36  21.89  21.95   20.34  21.98  22.46  22.55

C2 = ∑wiCi
20  24.26  24.93  24.56  24.59    25.99  27.87  28.10   28.04  29.74  29.90  29.90
40  29.89  30.69  30.33  30.11    32.41  32.98  32.95   31.45  33.38  33.79  33.82
60  31.11  30.71  30.85  30.93    33.27  33.73  33.97   31.72  33.65  34.11  34.16
80  30.76  31.77  31.82  31.67    32.79  33.47  33.96   31.87  33.24  33.82  34.16
ave 29.01  29.52  29.39  29.33    31.12  32.01  32.25   30.77  32.50  32.91  33.01

C2 = ∑wiTi
20  51.96  56.49  57.27  57.56    60.17  63.53  63.95   57.07  63.79  66.09  66.23
40  59.37  63.01  65.22  65.77    64.75  68.48  69.73   64.11  68.53  71.18  72.38
60  62.62  61.85  63.40  64.91    67.12  69.48  70.90   65.34  69.22  71.14  72.75
80  59.05  61.29  63.39  66.12    62.89  65.98  69.62   63.33  66.63  69.20  71.93
ave 58.25  60.66  62.32  63.59    63.74  66.87  68.55   62.46  67.04  69.40  70.82
Table 2: Average percentage improvements of the insertion algorithms over algorithm GS
n    IS3(6,2)  IS3(6,3)  IS3(6,5)  IS3(all,2)  IS3(all,3)  IS3(all,5)

C2 = ∑Ci
20      0.19      0.11     -1.02       0.19        0.11       -1.02
40     -0.17     -0.14     -0.17      -0.04       -0.07        0.11
60     -0.23     -0.34     -1.22      -0.14       -0.34       -0.60
80     -0.27     -0.48     -1.17      -0.12       -0.27       -0.43
ave    -0.12     -0.21     -0.90      -0.03       -0.14       -0.49

C2 = ∑wiCi
20      0.12      0.16     -0.61       0.12        0.16       -0.61
40      0.24      0.10     -0.49       0.27        0.14        0.07
60     -0.18     -0.75     -1.37       0.17       -0.13       -0.26
80     -0.20     -0.54     -1.28      -0.19       -0.34       -0.75
ave    -0.01     -0.26     -0.94       0.09       -0.04       -0.39

C2 = ∑wiTi
20      2.10      1.39     -1.29       2.47        1.53       -1.04
40     -2.66      0.52     -2.61      -0.40        0.29        0.28
60     -0.44     -1.08     -1.96       0.38        0.09        0.12
80      0.08     -0.71     -1.58       0.46        0.16        0.49
ave    -0.23      0.03     -1.86       0.73        0.51       -0.04
Table 3: Percentage improvements over algorithms IS3(6,1) and IS3(all,1) with z > 1
C2 = ∑Ci

                  starting solution GS              starting solution IS3(6,1)
n   N   It     TA(0)     SA-dec    SA-var      TA(0)     SA-dec    SA-var     MS(4)

20  ηA  100  3.62 (0)  4.24 (0)  3.88 (0)   1.07 (1)   1.44 (0)  1.11 (1)  1.11 (1)
    ηB  100  2.05 (0)  1.96 (0)  2.04 (0)   0.79 (0)   1.35 (0)  0.87 (0)  1.04 (0)
    ηA  300  1.94 (0)  2.94 (0)  2.40 (0)   0.82 (1)   1.28 (0)  0.68 (1)  0.99 (1)
    ηB  300  0.95 (0)  0.92 (0)  0.90 (0)   0.59 (2)   0.70 (0)  0.46 (0)  0.67 (2)
    ηA  500  1.68 (0)  1.80 (1)  2.26 (0)   0.60 (2)   1.04 (0)  0.61 (0)  0.68 (1)
    ηB  500  0.61 (0)  0.37 (0)  0.43 (0)   0.52 (3)   0.46 (4)  0.38 (8)  0.47 (4)

40  ηA  100  3.75 (0)  4.85 (0)  3.93 (0)   0.62 (1)   0.97 (0)  0.64 (1)  0.73 (1)
    ηB  100  2.68 (0)  2.62 (0)  2.48 (0)   0.63 (0)   0.97 (0)  0.61 (0)  0.64 (0)
    ηA  300  2.89 (0)  2.66 (0)  3.08 (0)   0.47 (1)   0.88 (0)  0.49 (1)  0.61 (1)
    ηB  300  1.00 (0)  1.00 (0)  1.14 (0)   0.40 (0)   0.64 (0)  0.38 (0)  0.55 (0)
    ηA  500  2.60 (0)  1.92 (0)  1.99 (0)   0.44 (1)   0.80 (0)  0.45 (1)  0.52 (1)
    ηB  500  0.78 (0)  0.61 (0)  0.72 (0)   0.30 (10)  0.40 (4)  0.20 (7)  0.38 (0)

60  ηA  100  4.37 (0)  5.51 (0)  4.14 (0)   0.20 (0)   0.38 (0)  0.17 (0)  0.21 (0)
    ηB  100  2.74 (0)  3.14 (0)  2.43 (0)   0.20 (0)   0.38 (0)  0.23 (0)  0.22 (0)
    ηA  300  3.05 (0)  3.11 (0)  2.88 (0)   0.15 (1)   0.38 (0)  0.15 (0)  0.16 (0)
    ηB  300  1.14 (0)  1.11 (0)  1.07 (0)   0.08 (0)   0.34 (0)  0.16 (0)  0.15 (0)
    ηA  500  2.83 (0)  2.28 (0)  2.82 (0)   0.10 (0)   0.38 (0)  0.11 (0)  0.15 (0)
    ηB  500  0.81 (0)  0.70 (0)  0.77 (0)   0.04 (12)  0.29 (2)  0.12 (4)  0.12 (2)

80  ηA  100  4.54 (0)  6.21 (0)  4.21 (0)   0.14 (0)   0.40 (0)  0.12 (0)  0.08 (0)
    ηB  100  2.30 (0)  2.77 (0)  2.26 (0)   0.13 (0)   0.40 (0)  0.19 (0)  0.09 (0)
    ηA  300  3.36 (0)  3.21 (0)  2.95 (0)   0.07 (0)   0.40 (0)  0.06 (0)  0.06 (0)
    ηB  300  1.23 (0)  1.18 (0)  1.20 (0)   0.06 (0)   0.37 (0)  0.13 (0)  0.09 (0)
    ηA  500  3.07 (0)  2.43 (0)  2.71 (0)   0.04 (0)   0.37 (0)  0.05 (0)  0.05 (0)
    ηB  500  0.89 (0)  0.79 (0)  0.78 (0)   0.05 (12)  0.34 (0)  0.10 (2)  0.05 (6)
Table 4: Average percentage deviations from the best value obtained for C2 = ∑Ci
C2 = ∑wiCi

                  starting solution GS              starting solution IS3(6,1)
n   N   It     TA(0)     SA-dec    SA-var      TA(0)     SA-dec    SA-var     MS(4)

20  ηA  100  4.97 (0)  5.36 (0)  3.57 (0)   1.93 (0)   2.68 (0)  2.25 (0)  2.31 (0)
    ηB  100  3.23 (0)  2.65 (0)  3.78 (0)   2.10 (0)   1.74 (0)  1.90 (0)  2.18 (1)
    ηA  300  2.62 (0)  1.66 (0)  2.95 (0)   1.95 (0)   1.62 (0)  2.19 (0)  1.55 (1)
    ηB  300  1.17 (0)  1.23 (0)  1.43 (0)   1.51 (0)   0.75 (1)  1.67 (1)  1.28 (0)
    ηA  500  2.41 (0)  1.52 (0)  2.65 (0)   1.96 (0)   1.60 (0)  1.78 (0)  1.30 (0)
    ηB  500  1.59 (0)  0.34 (0)  0.80 (0)   1.02 (5)   0.50 (6)  0.86 (4)  0.77 (3)

40  ηA  100  5.37 (0)  7.29 (0)  4.90 (0)   1.18 (0)   1.69 (0)  1.36 (0)  1.33 (0)
    ηB  100  2.80 (0)  3.11 (0)  3.08 (0)   1.17 (0)   1.58 (0)  1.39 (0)  1.49 (0)
    ηA  300  3.54 (0)  2.81 (0)  3.60 (0)   1.06 (0)   1.27 (0)  1.07 (0)  1.05 (0)
    ηB  300  1.28 (0)  1.03 (0)  1.15 (0)   0.72 (0)   0.63 (0)  0.63 (0)  1.03 (0)
    ηA  500  2.86 (0)  2.10 (0)  3.33 (0)   0.83 (0)   1.02 (0)  0.88 (1)  0.90 (0)
    ηB  500  0.76 (0)  0.56 (0)  0.57 (0)   0.69 (5)   0.22 (5)  0.45 (5)  0.56 (2)

60  ηA  100  5.88 (0)  6.61 (0)  6.12 (0)   0.53 (0)   1.00 (0)  0.61 (0)  0.64 (0)
    ηB  100  3.34 (0)  4.24 (0)  3.05 (0)   0.64 (0)   0.89 (0)  0.50 (0)  0.63 (0)
    ηA  300  4.12 (0)  3.62 (0)  4.84 (0)   0.41 (0)   0.89 (0)  0.51 (0)  0.56 (0)
    ηB  300  1.15 (0)  1.21 (0)  1.13 (0)   0.29 (0)   0.62 (0)  0.25 (0)  0.50 (0)
    ηA  500  3.58 (0)  2.55 (0)  3.91 (0)   0.45 (0)   0.71 (0)  0.39 (0)  0.49 (0)
    ηB  500  0.87 (0)  0.73 (0)  0.68 (0)   0.15 (10)  0.43 (1)  0.15 (7)  0.41 (1)

80  ηA  100  5.04 (0)  7.73 (0)  5.79 (0)   0.27 (0)   0.47 (0)  0.28 (0)  0.26 (0)
    ηB  100  3.04 (0)  4.06 (0)  3.50 (0)   0.26 (0)   0.48 (0)  0.25 (0)  0.25 (0)
    ηA  300  3.54 (0)  4.04 (0)  3.95 (0)   0.22 (0)   0.47 (0)  0.25 (0)  0.24 (0)
    ηB  300  1.46 (0)  1.43 (0)  1.33 (0)   0.15 (0)   0.43 (0)  0.11 (0)  0.20 (0)
    ηA  500  3.13 (0)  3.31 (0)  2.98 (0)   0.18 (0)   0.47 (0)  0.17 (0)  0.22 (0)
    ηB  500  1.01 (0)  0.80 (0)  0.74 (0)   0.06 (8)   0.35 (2)  0.11 (2)  0.16 (5)
Table 5: Average percentage deviations from the best value obtained for C2 = ∑wiCi
C2 = ∑wiTi

                   starting solution GS                 starting solution IS3(6,1)
n   N   It      TA(0)      SA-dec     SA-var      TA(0)      SA-dec     SA-var      MS(4)

20  ηA  100  29.25 (1)  18.34 (2)  21.61 (1)  14.79 (4)  13.11 (3)  12.92 (3)  14.15 (4)
    ηB  100  11.87 (4)   7.66 (4)  11.52 (1)  10.45 (6)   7.57 (4)   9.89 (4)  10.25 (2)
    ηA  300  15.21 (5)   8.63 (4)  10.02 (6)   9.58 (5)   7.49 (5)   8.53 (5)   9.63 (5)
    ηB  300   4.93 (6)   4.41 (6)   7.60 (3)   8.03 (7)   3.84 (4)   3.56 (6)   6.34 (5)
    ηA  500  10.76 (5)   8.32 (5)  10.96 (7)   7.62 (6)   4.08 (7)   8.04 (6)   6.19 (6)
    ηB  500   2.99 (8)   3.51 (6)   4.27 (6)   5.44 (9)   4.06 (8)   4.28 (9)   5.00 (5)

40  ηA  100  28.26 (0)  24.17 (0)  30.83 (0)  15.65 (0)  18.21 (0)  13.44 (0)  26.99 (0)
    ηB  100  11.43 (0)  14.28 (0)  12.05 (0)   9.86 (0)  12.84 (0)   9.89 (0)  18.83 (0)
    ηA  300  11.85 (1)  14.35 (0)  15.40 (0)   6.94 (1)   7.88 (2)   9.43 (0)  11.48 (0)
    ηB  300   6.04 (0)   5.71 (0)   7.78 (0)   5.81 (2)   4.91 (1)   5.47 (2)   5.78 (1)
    ηA  500   7.74 (2)   8.48 (1)  11.42 (1)   6.27 (2)   3.94 (3)   7.51 (1)   9.28 (0)
    ηB  500   5.79 (1)   3.14 (0)   3.27 (1)   4.95 (7)   2.17 (7)   4.41 (7)   4.49 (2)

60  ηA  100  32.28 (0)  36.80 (0)  26.20 (0)  16.17 (0)  22.56 (0)  17.68 (0)  23.96 (0)
    ηB  100  16.02 (0)  14.23 (0)  13.70 (0)   9.24 (0)  13.36 (0)   9.99 (0)  18.26 (0)
    ηA  300  19.35 (1)  13.01 (0)  20.90 (0)   8.65 (1)   9.42 (0)   9.10 (1)  13.51 (0)
    ηB  300   5.95 (1)   7.51 (0)   3.91 (0)   4.68 (0)   4.75 (0)   4.24 (1)   9.10 (0)
    ηA  500  15.24 (1)   5.13 (0)  14.15 (0)   7.23 (1)   6.00 (0)   6.91 (1)   9.51 (0)
    ηB  500   2.86 (0)   5.40 (0)   2.80 (0)   6.19 (3)   3.59 (8)   4.88 (5)   4.10 (3)

80  ηA  100  27.67 (0)  29.39 (0)  24.54 (0)  17.93 (0)  26.52 (0)  19.20 (0)  26.19 (0)
    ηB  100  13.97 (0)  17.51 (0)  13.13 (0)   9.61 (0)  14.08 (0)   9.90 (0)  20.03 (0)
    ηA  300  13.27 (0)  13.24 (0)  17.70 (0)   8.45 (0)  12.69 (0)  12.86 (0)  16.73 (0)
    ηB  300   5.42 (0)   7.27 (0)   5.86 (0)   4.62 (0)   6.89 (0)   4.87 (0)   9.32 (0)
    ηA  500   9.24 (0)   8.14 (0)   9.46 (0)   7.80 (0)   6.55 (0)   8.50 (1)  11.46 (0)
    ηB  500   2.60 (0)   4.02 (0)   3.56 (0)   3.78 (3)   4.20 (3)   3.78 (5)   5.54 (1)
Table 6: Average percentage deviations from the best value obtained for C2 = ∑wiTi
C2 = ∑wiTi

                starting solution GS           starting solution IS3(6,1)
n   N   It    TA(0)  SA-dec  SA-var     TA(0)  SA-dec  SA-var   MS(4)

20  ηA  100  135.12   71.16   87.87    46.70   39.60   39.60   42.51
    ηB  100   36.12   19.58   82.00    38.21   19.58   25.53   26.41
    ηA  300   87.22   48.86   36.60    37.46   27.26   31.45   25.86
    ηB  300   25.79   22.89   30.71    37.84   16.90   16.84   26.41
    ηA  500   40.04   32.37   54.22    21.22   14.40   29.40   19.10
    ηB  500   17.35   23.12   23.62    25.21   18.77   27.26   16.23

40  ηA  100   83.13   85.29   80.83    55.59   42.32   57.30   85.41
    ηB  100   28.12   36.11   31.05    48.59   36.72   50.02   60.61
    ηA  300   30.47   40.30   60.61    41.88   19.24   48.59   51.46
    ηB  300   29.06   20.17   33.05    35.95   17.15   42.19   17.70
    ηA  500   20.17   28.87   79.66    46.25   12.24   48.59   43.26
    ηB  500   72.79   15.93   15.27    47.21    8.09   34.15   23.79

60  ηA  100   74.73   82.89   63.14    61.02   41.66   48.66   40.23
    ηB  100   32.75   24.73   26.71    37.99   21.87   32.79   36.83
    ηA  300   57.32   25.48   73.16    38.07   25.25   39.37   50.22
    ηB  300   19.92   19.15   20.64    16.73   10.06   22.38   31.77
    ηA  500   40.22   16.18   78.74    41.40   17.66   37.61   26.24
    ηB  500   17.97   14.13    9.34    39.02   17.05   37.41   10.14

80  ηA  100   80.68   52.27   70.63    42.10   48.41   45.77   53.93
    ηB  100   45.33   37.98   40.50    30.15   29.31   25.35   50.66
    ηA  300   31.95   41.00   59.84    26.71   34.13   35.48   38.26
    ηB  300   11.82   15.77   16.56    12.16   15.91   20.83   25.03
    ηA  500   28.39   24.07   47.96    26.65   22.74   21.44   27.26
    ηB  500   11.03   13.58   14.20    10.58    9.79   15.53   12.43
Table 7: Maximum percentage deviations from the best value obtained for C2 = ∑wiTi
            C2 = ∑Ci                    C2 = ∑wiCi                   C2 = ∑wiTi
n      GS  IS3(6,1)  IS3(all,1)    GS  IS3(6,1)  IS3(all,1)      GS  IS3(6,1)  IS3(all,1)

20  28.26      1.56        1.56  48.59      3.79        3.31  332.93     32.22       31.60
40  31.11      0.97        0.93  54.70      1.83        1.78  475.51     56.19       48.92
60  30.59      0.38        0.29  54.07      1.00        0.93  478.02     60.16       51.09
80  31.23      0.40        0.12  53.43      0.48        0.33  461.23     64.31       50.05
Table 8: Average percentage deviations of algorithms GS, IS3(6,1) and IS3(all,1) from the best value obtained