An ant colony system for permutation flow-shop sequencing


Available online at www.sciencedirect.com

Computers & Operations Research 31 (2004) 791–801
www.elsevier.com/locate/dsw


Kuo-Ching Ying a,b, Ching-Jong Liao a,∗

a Department of Industrial Management, National Taiwan University of Science and Technology, Taipei, Taiwan, ROC

b Department of Industrial Management, Huafan University, Taipei, Taiwan, ROC

Abstract

Ant colony system (ACS) is a novel meta-heuristic inspired by the foraging behavior of real ants. This paper is the first to apply ACS to the n/m/P/Cmax problem, an NP-hard sequencing problem that seeks a processing order of n different jobs to be processed on m machines, in the same sequence on every machine, so as to minimize the makespan. To verify the developed ACS algorithm, computational experiments are conducted on the well-known benchmark problem set of Taillard. The ACS algorithm is compared with other meta-heuristics from the literature, such as genetic algorithm, simulated annealing, and neighborhood search. Computational results demonstrate that ACS is a more effective meta-heuristic for the n/m/P/Cmax problem.
© 2003 Elsevier Ltd. All rights reserved.

Keywords: Scheduling; Permutation flow-shop; Ant colony system; Meta-heuristic

1. Introduction

Consider a sequencing problem that has n different jobs to be processed on m machines. Each job has one operation on each machine, and all jobs follow the same ordering sequence on each machine. At any time, each machine can process at most one job and each job can be processed on at most one machine. Preemption is not allowed. The objective is to find a permutation of jobs that minimizes the maximum completion time, or makespan. Conventionally, this problem is denoted by n/m/P/Cmax and is a well-known NP-hard problem [1]. During the last 40 years, the n/m/P/Cmax problem has held the attention of many researchers [2].

Although optimal solutions of the n/m/P/Cmax problem can be obtained via enumeration techniques such as exhaustive enumeration and the branch-and-bound method [3], these methods may take a prohibitive amount of computation even for moderate-size problems. For practical purposes, it is often more

∗ Corresponding author. Tel.: +886-2-2737-6337; fax: +886-2-2737-6344.
E-mail address: [email protected] (C.-J. Liao).

0305-0548/$ - see front matter © 2003 Elsevier Ltd. All rights reserved.
doi:10.1016/S0305-0548(03)00038-8


appropriate to look for a heuristic method that generates a near-optimal solution at relatively minor computational expense. This has led to the development of many heuristic procedures.

Currently available heuristics for solving this problem in the literature can be classified into two categories: constructive heuristics and improvement heuristics [4]. In a constructive heuristic, once a job sequence is determined, it is fixed and cannot be reversed. Several constructive heuristics for n/m/P/Cmax have been proposed in the literature [5–8]. Computational results [2,6,7,9] show that the methods of NEH [7] and SPIRIT [2] are currently the best available ones as far as we know. On the other hand, improvement heuristics start with an initial solution and then provide a scheme for iteratively obtaining an improved solution. In recent years, studies with meta-heuristics have been extensively carried out on this problem. A meta-heuristic is a rather general algorithmic framework that can be applied to different optimization problems with minor modifications. Essentially, it is a type of randomized improvement heuristic [4]. Methods of this type include genetic algorithm (GA) [10,11], simulated annealing (SA) [12,13], and tabu search [14,15]. The literature shows that these methods can obtain very good results for NP-hard combinatorial optimization problems [16,17].

Ant colony system (ACS), first proposed by Dorigo and Gambardella in 1997 [18], is one of the most recent and promising meta-heuristics for combinatorial optimization problems. In this paper, we develop an ACS algorithm for the n/m/P/Cmax problem. The proposed algorithm will be compared with previous implementations of GA, SA, and neighborhood search (NS). A set of benchmark problems proposed by Taillard [19] will be used for this purpose. The remainder of this paper is organized as follows. In the next section, we introduce the background of ACS. Then we introduce the ACS version that we apply in Section 3. Our implementation of ACS for the n/m/P/Cmax problem is the subject of Section 4. Computational results on Taillard's benchmark problems and performance comparisons with other heuristics are provided in Section 5. Finally, we conclude the paper with a summary in Section 6.

2. The background of ACS

ACS is a particular heuristic of ant colony optimization (ACO) [20], one of the nature-inspired meta-heuristics for the solution of discrete optimization problems. The first ACO system, introduced by Dorigo [21], is called ant system (AS). It is the result of research on computational intelligence approaches to combinatorial optimization problems. The inspiring natural process of AS is the foraging behavior of ants. Real ants are capable of finding the shortest path from a food source to their nest without using visual cues [22]. They communicate information concerning food sources via an aromatic essence, called pheromone. While walking, ants secrete pheromone on the ground and follow, in probability, pheromone previously laid by other ants. A greater amount of pheromone on a path gives an ant a stronger stimulation and thus a higher probability of following it. Since ants reaching the food source by a shorter path will come back to the nest sooner than ants taking longer paths, the shorter path will have a higher traffic density than the longer one. As a consequence, the quantity of pheromone laid down on the shorter path grows faster than on the longer one, and the choice of any single ant quickly becomes biased towards the shorter path [18]. The described foraging behavior of real ant colonies can be used to solve combinatorial optimization problems by simulation [21]: the objective value corresponds to the quality of the food sources, artificial ants searching the solution space simulate real ants searching their environment,

K.-C. Ying, C.-J. Liao / Computers & Operations Research 31 (2004) 791–801 793

and an adaptive memory corresponds to the pheromone trail. In addition, the artificial ants are equipped with a local heuristic function to guide their search through the set of feasible solutions.

The ACO method has been successfully applied to solve different types of combinatorial optimization problems. Examples include the traveling salesman problem [18,23], sequential ordering problem [24], quadratic assignment problem [25,26], vehicle routing problem [27,28], scheduling problem [29,30], graph coloring problem [31], partitioning problem [32,33], power system optimization problem [34], and telecommunications network problem [35,36]. One of the most efficient ACO-based implementations has been ACS, the heuristic that will be discussed in the next section.

3. Ant colony system

ACS has recently been shown to be competitive with other meta-heuristics on the symmetric and asymmetric traveling salesman problems. It takes the form shown in Fig. 1 [18]. Three procedures are iterated until some end conditions (e.g., a number of iterations) are verified. First, a set of artificial ants (ants, for short) is initially positioned on starting nodes according to some initialization rule (e.g., randomly). Each ant builds a tour (i.e., a feasible solution to n/m/P/Cmax) by repeatedly applying a stochastic greedy rule (i.e., the state transition rule). Then, while constructing its tour, an ant also updates the amount of pheromone on the visited edges by applying the local updating rule. Finally, once all ants have terminated their tours, pheromone trails on the edges are modified again by applying the global updating rule. In the following we discuss the details of the related rules.

3.1. ACS state transition rule

When building a tour in ACS, an ant k at the current position, node i, chooses the next node j to move to by applying the state transition rule given by the following equation [18]:

    j = argmax_{u ∈ S_k(i)} { [τ(i,u)] [η(i,u)]^β }   if q ≤ q0,
        J                                             otherwise,        (1)

where τ(i,u) is the pheromone trail of edge (i,u), the heuristic desirability η(i,u) = 1/δ(i,u) is the inverse of the length δ(i,u) from node i to node u, and S_k(i) is the set of nodes that remain to be

Initialize the pheromone trail; set parameters
Loop /* at this level each loop is called an iteration */
    A colony of ants is initially positioned on starting nodes
    Loop /* at this level each loop is called a step */
        Each ant repeatedly applies the state transition rule to select the
        next node until a tour is constructed
        Apply the local updating rule to decrease pheromone on visited
        edges of its tour
    Until all ants have built a complete tour
    Apply the global updating rule to increase pheromone on edges of the
    current best tour and decrease pheromone on other edges
Until stopping criteria are verified

Fig. 1. The ACS algorithm.


visited by ant k positioned on node i (to make the solution feasible). Also, β is a parameter which determines the relative importance of pheromone versus distance (β > 0), q is a random number uniformly distributed in [0, 1], and q0 is a parameter (0 ≤ q0 ≤ 1) which determines the relative importance of exploitation versus exploration. In addition, J is a random variable denoting the node to which ant k in node i chooses to move, selected according to the probability distribution, called the random-proportional rule, given in the following equation:

    p_k(i,j) = [τ(i,j)][η(i,j)]^β / Σ_{u ∈ S_k(i)} [τ(i,u)][η(i,u)]^β   if j ∈ S_k(i),
    p_k(i,j) = 0                                                        otherwise.        (2)

When ants build their tours, the chosen edges are guided by both heuristic information and pheromone information. The state transition rule resulting from Eqs. (1) and (2) favors the choice of nodes connected by shorter edges with a greater amount of pheromone. Every time an ant in node i has to choose a node j to move to, it samples a random number q. If q ≤ q0, then the best edge (according to Eq. (1)) is chosen (exploitation); otherwise an edge is chosen according to Eq. (2) (biased exploration).
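The two-branch rule above can be sketched in Python. This is an illustrative sketch, not the authors' code; the names `choose_next_node`, `tau`, and `eta` are our own, standing for the next-node selector, the pheromone table τ, and the desirability table η.

```python
import random

def choose_next_node(i, remaining, tau, eta, beta=2.0, q0=0.9, rng=random):
    """ACS state transition rule, Eqs. (1) and (2).

    tau[i][u] is the pheromone on edge (i, u); eta[i][u] is the heuristic
    desirability. With probability q0 the ant exploits the best edge
    (argmax of tau * eta**beta); otherwise it samples a node from the
    random-proportional distribution (biased exploration).
    """
    weights = {u: tau[i][u] * eta[i][u] ** beta for u in remaining}
    if rng.random() <= q0:                       # exploitation, Eq. (1)
        return max(remaining, key=weights.get)
    r = rng.random() * sum(weights.values())     # biased exploration, Eq. (2)
    acc = 0.0
    for u in remaining:
        acc += weights[u]
        if acc >= r:
            return u
    return remaining[-1]                         # guard against rounding error
```

With `q0 = 1` the rule becomes purely greedy; lowering `q0` shifts probability mass to the exploration branch.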

3.2. ACS local updating rule

While constructing a tour, an ant changes the pheromone level of its visited edges by applying the local updating rule as follows [18]:

    τ(i,j) := (1 − ρ) τ(i,j) + ρ τ0,

where τ0 is the initial pheromone level and ρ (0 < ρ < 1) is the pheromone evaporation parameter. The effect of the local updating rule is to make the desirability of edges change dynamically in order to shuffle the tour. If ants explore different paths, then there is a higher probability that one of them will find an improving solution than if they all search in a narrow neighborhood of the previous best tour. Every time an ant traverses an edge, the local updating rule makes the pheromone on that edge diminish and the edge become less attractive. Hence, the nodes in one ant's tour will be chosen with a lower probability in building other ants' tours. As a consequence, ants will favor the exploration of edges not yet visited, which prevents convergence to a common path.
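As a sketch (function name ours; `rho` and `tau0` follow the symbols ρ and τ0 above), the local update is a one-line interpolation toward the initial pheromone level:

```python
def local_update(tau, i, j, rho=0.1, tau0=0.01):
    """ACS local updating rule: tau(i,j) := (1 - rho)*tau(i,j) + rho*tau0.

    Applied to an edge as soon as an ant traverses it; repeated application
    pulls the edge's pheromone back toward tau0, making just-used edges
    less attractive to the ants that follow.
    """
    tau[i][j] = (1.0 - rho) * tau[i][j] + rho * tau0
    return tau[i][j]
```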

3.3. ACS global updating rule

The global updating rule is performed after all ants have completed their tours. In order to make the search more directed, global updating is intended to provide a greater amount of pheromone to shorter tours and reinforce them. Therefore, only the globally best ant, which found the best solution (i.e., the shortest tour) up to the current iteration of the algorithm, is permitted to deposit pheromone. The pheromone level is modified according to [18]

    τ(i,j) := (1 − α) τ(i,j) + α Δτ(i,j),

where

    Δτ(i,j) = (L_gb)^{−1}   if (i,j) ∈ global-best tour,
    Δτ(i,j) = 0             otherwise.


In the above equation, α (0 < α < 1) is the pheromone evaporation parameter and L_gb is the length of the globally best tour found up to the current iteration.
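Putting the three rules together, the loop of Fig. 1 can be sketched end to end. The following minimal Python implementation runs ACS on a small symmetric traveling salesman instance, the setting in which ACS was introduced [18]; it is a sketch with our own names (`acs_tsp`, `tour_len`, etc.), not the authors' flow-shop code.

```python
import random

def acs_tsp(dist, n_ants=5, n_iters=50, beta=2.0, q0=0.9, rho=0.1, alpha=0.1, seed=0):
    """Minimal ACS for a symmetric TSP, following the structure of Fig. 1:
    tour construction with the state transition rule, local updating while
    building, and global updating on the best-so-far tour."""
    rng = random.Random(seed)
    n = len(dist)

    def tour_len(t):
        return sum(dist[t[k]][t[(k + 1) % n]] for k in range(n))

    tau0 = 1.0 / (n * tour_len(list(range(n))))   # a simple initial tour sets tau0
    tau = [[tau0] * n for _ in range(n)]
    eta = [[0.0 if i == j else 1.0 / dist[i][j] for j in range(n)] for i in range(n)]
    best_tour, best_len = None, float("inf")

    for _ in range(n_iters):                      # "iteration" loop of Fig. 1
        for _ in range(n_ants):
            start = rng.randrange(n)
            tour, remaining = [start], set(range(n)) - {start}
            while remaining:                      # "step" loop: build one tour
                i = tour[-1]
                if rng.random() <= q0:            # exploitation, Eq. (1)
                    j = max(remaining, key=lambda u: tau[i][u] * eta[i][u] ** beta)
                else:                             # biased exploration, Eq. (2)
                    w = [(u, tau[i][u] * eta[i][u] ** beta) for u in remaining]
                    r = rng.random() * sum(x for _, x in w)
                    acc = 0.0
                    for u, x in w:
                        acc += x
                        j = u
                        if acc >= r:
                            break
                tour.append(j)
                remaining.discard(j)
                tau[i][j] = (1 - rho) * tau[i][j] + rho * tau0   # local updating rule
            length = tour_len(tour)
            if length < best_len:
                best_tour, best_len = list(tour), length
        edges = {(best_tour[k], best_tour[(k + 1) % n]) for k in range(n)}
        for i in range(n):                        # global updating rule on all edges
            for j in range(n):
                delta = 1.0 / best_len if (i, j) in edges else 0.0
                tau[i][j] = (1 - alpha) * tau[i][j] + alpha * delta
    return best_tour, best_len
```

On a unit square (four cities), the sketch reliably recovers the perimeter tour of length 4.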

4. Implementation of ACS for n/m/P/Cmax

Having introduced the ACS heuristic, we now describe in detail our implementation of ACS for the n/m/P/Cmax problem. The major issue is to transform the problem into a form solvable by ACS.

4.1. Representation of n/m/P/Cmax in ACS

The n/m/P/Cmax problem can be represented in ACS by a disjunctive graph. Given an instance of n/m/P/Cmax, we can associate it with a disjunctive graph G = (O, C, D), where O is a set of nodes, C is a set of conjunctive directed (solid) arcs, and D is a set of disjunctive undirected (broken) arcs. The set O stands for all of the processing operations o_ij performed on the n jobs, C corresponds to the precedence relationships between the processing operations of a single job, and D represents the machine constraints on operations belonging to different jobs. In addition, there are a nest N and a food source F, which are dummy nodes. Node N has conjunctive directed arcs emanating to the first operations of the n jobs, and F has conjunctive directed arcs coming from all the final operations. We therefore have (nm + 2) nodes, where all the nodes of the same machine are pairwise connected in both directions.

Fig. 2 gives an instance of 3/4/P/Cmax. For simplicity, all the arrows of nodes in this figure that are pairwise connected in both directions are omitted. The disjunctive undirected arcs D form m cliques, one for each machine, which determine the processing order of operations on that machine. Since all jobs have the same ordering sequence on each machine for n/m/P/Cmax, it is enough to find the first clique's sequence. When building a feasible solution, the next node an ant chooses to move to is calculated by applying the state transition rule given by Eq. (1). The chosen node is then added to the tabu list and the process is iterated until the last job of the first clique is chosen. It is noted that the tabu list we use here is different from that in tabu search. In

[Figure: disjunctive graph with operation nodes o11–o14, o21–o24, and o31–o34 between the dummy nest node N and food source node F.]

Fig. 2. An instance of 3/4/P/Cmax. Legend: (· · ·) arcs for machine 1, (- - -) arcs for machine 2, (- · -) arcs for machine 3, and (- · · -) arcs for machine 4.


tabu search we usually record permutations in the tabu list, but here in ACS we record nodes. In the end, the node permutation given by the tabu list determines the job sequence. Now it is possible to compute the length of the longest path of the obtained oriented graph (i.e., to calculate the makespan of the feasible solution from the job processing times). The pheromone trails can thus be computed and laid down as specified by the ACS algorithm. Nevertheless, to get a more satisfactory solution, an adjacent pairwise interchange method is applied after each iteration, when all ants have completed their solutions.
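The makespan evaluation and the adjacent pairwise interchange step can be sketched as follows. The function names are ours; the recurrence is the standard longest-path evaluation for a permutation flow shop, and the interchange pass is a plain reading of the improvement step described above.

```python
def makespan(seq, p):
    """Makespan of the job permutation `seq`, where p[job][k] is the
    processing time of `job` on machine k. The recurrence
    C(job, k) = max(C(job, k-1), C(prev_job, k)) + p[job][k]
    is the longest path in the oriented disjunctive graph of Section 4.1."""
    m = len(p[seq[0]])
    done = [0] * m           # completion time of the previous job on each machine
    for job in seq:
        t = 0                # completion of the current job on the previous machine
        for k in range(m):
            t = max(t, done[k]) + p[job][k]
            done[k] = t
    return done[-1]

def adjacent_pairwise_interchange(seq, p):
    """One improvement pass: swap each adjacent pair of jobs and keep the
    swap whenever it reduces the makespan."""
    best = list(seq)
    for i in range(len(best) - 1):
        cand = best[:i] + [best[i + 1], best[i]] + best[i + 2:]
        if makespan(cand, p) < makespan(best, p):
            best = cand
    return best
```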

4.2. The length between two jobs

We now define the length between two nodes in the oriented graph. The idea of the length we use comes from Palmer's method [8], a quick method for obtaining a near-optimum solution of n/m/P/Cmax. The method measures the slope index for each job i (i = 1, 2, …, n) by

    S_i = [(m − 1)/2] p_{i,m} + [(m − 3)/2] p_{i,m−1} + · · · − [(m − 3)/2] p_{i,2} − [(m − 1)/2] p_{i,1}.

The sequence is then constructed in descending order of S_i. The idea of Palmer's method is to give priority to jobs having the strongest tendency to progress from short processing times to long processing times through the sequence of machines.
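As a sketch (function name ours), Palmer's rule can be written directly from the slope index: with machines indexed j = 1, …, m, the coefficient of p_{i,j} is (2j − m − 1)/2, which reproduces the (m − 1)/2, (m − 3)/2, … pattern above.

```python
def palmer_sequence(p):
    """Palmer's heuristic: compute the slope index
    S_i = sum_j ((2j - m - 1)/2) * p[i][j-1]  for j = 1..m
    and sequence the jobs in descending order of S_i."""
    n, m = len(p), len(p[0])

    def slope(i):
        return sum((2 * j - m - 1) / 2.0 * p[i][j - 1] for j in range(1, m + 1))

    return sorted(range(n), key=slope, reverse=True)
```

A job whose times grow along the machine sequence gets a large positive S_i and is scheduled early.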

Following Palmer's idea, we revise the slope index and define the length between node i (i = 1, 2, …, n, or N) and node u (u = 1, 2, …, n) of the first clique by

    δ(i,u) = 1/η(i,u),

where

    η(i,u) = (m − 1)p_{u,m} + (m − 3)p_{u,m−1} + · · · − (m − 3)p_{u,2} − (m − 1)p_{u,1} − min_u{η(i,u)} + 1   (i ≠ u).

Using this function as the length between two nodes forms the basis for the state transition rule of ACS. In fact, we can directly calculate and use the heuristic desirability η(i,u) and let ACS search only the neighborhood around the best solution. ACS is thus likely to improve the solution of Palmer's method by a constraint-directed random search.
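A sketch of the desirability computation (names ours). We read the min term above as follows: the raw revised slope value is computed for each unvisited job u, then the whole set is shifted by (1 − min) so the smallest value becomes 1, keeping every η(i,u) positive before inverting to obtain the length δ(i,u); this reading is our assumption about the intended evaluation order.

```python
def desirability(candidates, p):
    """Revised slope index of Section 4.2: for each candidate job u the raw
    value (m-1)*p[u][m-1] + (m-3)*p[u][m-2] + ... - (m-1)*p[u][0] is
    computed, then shifted by (1 - min over candidates) so that min eta = 1
    and delta(i, u) = 1 / eta(i, u) is a well-defined positive length."""
    m = len(p[candidates[0]])
    raw = {u: sum((2 * j - m - 1) * p[u][j - 1] for j in range(1, m + 1))
           for u in candidates}
    shift = 1 - min(raw.values())
    return {u: raw[u] + shift for u in candidates}
```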

5. Computational results

One difficulty faced by researchers in scheduling is comparing their developed heuristics with those of other researchers. If a standard set of test problems is accessible, different algorithms' performance can be compared on exactly the same set of test problems. For this reason we chose the 120 benchmark problems of Taillard [19] as the test problems of this study.

Taillard has produced a set of unsolved n/m/P/Cmax problems with 5, 10, and 20 machines and from 20 to 500 jobs. The instances were randomly generated as follows [19]: for each job i (i = 1, …, n) on each machine j (j = 1, …, m), an integer processing time p_{i,j} was generated from the uniform distribution [1, 99]. In order to propose problems that are as difficult as possible, Taillard


Table 1
Comparison of different algorithms

Problem size   ACS               Palmer    GA        SA        NS
               Quality  Time^a   Quality   Quality   Quality   Quality

20/5           1.19     11       10.81     1.61      1.27      1.46
20/10          1.70     12       15.27     2.29      1.71      2.02
20/20          1.60     16       16.34     1.95      0.86      1.10
50/5           0.43     44       5.23      0.45      0.78      0.79
50/10          1.89     54       13.48     2.28      1.98      3.21
50/20          2.71     73       15.46     3.44      2.86      3.90
100/5          0.22     163      2.38      0.23      0.56      0.76
100/10         1.22     197      9.08      1.25      1.33      2.69
100/20         2.22     264      13.24     2.91      2.32      3.98
200/10         0.64     826      4.75      0.50      0.83      3.81
200/20         1.30     1895     12.15     1.35      1.74      6.07
500/20         1.68     15399    6.72      −0.22     0.85      9.07

Average        1.40              10.41     1.50      1.42      3.24

^a CPU time in seconds.

generated many instances of each problem, and then for each problem size chose the 10 instances that seemed to be the hardest to form a basic problem set. Thus, there were 10 instances for each problem size and 120 problem instances in all. Subsequently, he supplied each of these instances with the following information: the initial value of the random generator's seed, a lower bound, and an upper bound of the optimal makespan. The test problem files are available via Taillard's web site (URL: http://www.idsia.ch/∼eric/) or can be downloaded from the OR-Library web site (URL: http://mscmga.ms.ic.ac.uk/jeb/orlib/flowshopinfo.html).

In a preliminary experiment, five different numbers of ants (5, 10, 20, 50, 100) were tested; 20 was superior and was used for all further studies. In all experiments of this paper, the numeric parameters are set according to the previous studies by Dorigo [18]: α = 0.1, β = 2, ρ = 0.1, q0 = 0.9, τ0 = (n · UB)^{−1}, where UB is the upper bound of the optimal makespan. The algorithm terminates when either the total number of iterations reaches 5000 or the lower bound is obtained.

The algorithm was coded in Visual C++ and run on an AMD 700 MHz PC. A pseudo-code description of the implemented program is given in the appendix. To evaluate the algorithm, each of the problem instances was tested over five trials. The best of the five trials was chosen, and the results of the 10 instances for the same problem size were averaged. The final results are shown in Table 1, which gives a comparison with other meta-heuristics such as GA, SA, and NS from the literature [37]. The solution quality is measured by the mean percentage difference from Taillard's upper bound.

On the whole, ACS outperformed GA, SA, and NS. For the mean percentage deviation, an average of 1.40 with a maximum of 2.71 was achieved. The deviation from the optimal solutions is expected to be even lower. It can be seen that ACS can effectively improve the solutions of Palmer's method. Furthermore, the ACS algorithm was superior in 10 out of 12 problem sizes compared with GA and SA, and in 11 out of 12 problem sizes compared with NS. It is also worth pointing out that the


CPU times of ACS for problems of the same size are so close that their differences can be ignored. Since the CPU times of GA, SA, and NS are not provided in the literature and vary according to hardware, software, and coding, computational efficiency cannot be compared directly in this paper. Nonetheless, it can be seen that the ACS algorithm obtains very good solutions in reasonable CPU time.

When applying heuristics, the properties of reliability and consistency are both important. As demonstrated, ACS is very consistent in solution quality, with respect to the chance of obtaining a bad solution, and in computational expense. It is clear that ACS is promising for the n/m/P/Cmax problem.

6. Conclusions

The main aim of this research is to explore the potential of the ACS heuristic for scheduling problems. Since ACS is a versatile and robust heuristic, the proposed algorithm can be applied to different scheduling problems and other combinatorial optimization problems with minor modifications. We have demonstrated the effectiveness of ACS for solving the n/m/P/Cmax problem. This clearly suggests that such a meta-heuristic is well worth exploring in the context of solving different scheduling problems.

There are several possible extensions to this study in future research. First, dispatching rules other than Palmer's method may be tested. Second, ACS provides a variety of options and parameter settings that are worth examining fully. Third, the developed ACS algorithm may be extended to other manufacturing environments such as the job shop and open shop, which the authors are currently investigating. Finally, one may continue the research with other ACO methods for this sequencing problem.

Appendix.

A pseudo-code of the developed ACS algorithm is given below:

Initialize the pheromone trail; set parameters
for (iteration_cnt = 0; iteration_cnt < iteration; iteration_cnt++)
{
    for (ant_cnt = 0; ant_cnt < ant; ant_cnt++)
    {
        // path selection
        // select first node
        if (ranq <= q0)
            get max -> (stnode.pheromone[k]) * pow((double)path[k], (double)beta);
        else
            randomly choose by random-proportional rule
        // select first node end
        // decrease pheromone of arc from nest to the selected node
        stnode.pheromone[i] = ((1 - rho) * (stnode.pheromone[i])) + rho * tau0;
        // decrease pheromone of arc from nest to the selected node end
        // select the Nth node
        for I = 1 to N
            if (ranq <= q0)
            {
                max -> testmaxj = (double)(nodeaddr[i]->pheromone[j]) *
                                  (double)pow((double)path[j], (double)beta);
                // pheromone decrease
                nodeaddr[i]->pheromone[nextjob] =
                    ((1 - rho) * (nodeaddr[i]->pheromone[nextjob])) + rho * tau0;
                // pheromone decrease end
            }
            else
            {
                randomly choose by random-proportional rule
                // decrease pheromone of arc from nest to the selected node
                nodeaddr[i]->pheromone[nextjob] =
                    ((1 - rho) * (nodeaddr[i]->pheromone[nextjob])) + rho * tau0;
                // decrease pheromone of arc from nest to the selected node end
            }
            end-if
        end-for
        // select the Nth node end
        // path selection end
        // calculate Cmax
        // calculate Cmax end
        // keep min Cmax and the corresponding path
        get min Cmax
        // keep min Cmax and the corresponding path end
    }
    // ant_cnt for loop end
    API switch    // adjacent pairwise interchange
    // keep min Cmax and the corresponding path
    get min Cmax
    // keep min Cmax and the corresponding path end
    // pheromone increase
    apply global updating rule
    // pheromone increase end
}
// iteration_cnt for loop end

References

[1] Rinnooy Kan AHG. Machine scheduling problems. The Hague: Martinus Nijhoff, 1976.
[2] Widmer M, Hertz A. A new heuristic method for the flow shop sequencing problem. European Journal of Operational Research 1989;41:186–93.
[3] Pinedo M. Scheduling: theory, algorithms, and systems. Englewood Cliffs, NJ: Prentice-Hall, 1995.
[4] Osman IH, Potts CN. Simulated annealing for permutation flow-shop scheduling. OMEGA, International Journal of Management Science 1989;17:551–7.
[5] Campbell HG, Dudek RA, Smith ML. A heuristic algorithm for the n-job, m-machine sequencing problem. Management Science 1970;16:B630–7.
[6] Dannenbring DG. An evaluation of flowshop sequencing heuristics. Management Science 1977;23:1174–82.
[7] Nawaz M, Enscore EE Jr, Ham I. A heuristic algorithm for the m-machine, n-job flow-shop sequencing problem. OMEGA, International Journal of Management Science 1983;11:91–5.
[8] Palmer DS. Sequencing jobs through a multi-stage process in the minimum total time—a quick method of obtaining a near optimum. Operational Research Quarterly 1965;16:101–7.
[9] Turner S, Booth D. Comparison of heuristics for flow shop sequencing. OMEGA, International Journal of Management Science 1987;15:75–8.
[10] Goldberg DE. Genetic algorithms in search, optimization and machine learning. Reading, MA: Addison-Wesley, 1989.
[11] Holland JH. Adaptation in natural and artificial systems. Ann Arbor: The University of Michigan Press, 1975.
[12] Van Laarhoven PJM, Aarts EHL. Simulated annealing: theory and applications. Dordrecht: D. Reidel Publishing Co., 1987.
[13] Metropolis N, Rosenbluth A, Rosenbluth M, Teller A, Teller E. Equation of state calculations by fast computing machines. Journal of Chemical Physics 1953;21:1087–92.
[14] Glover F. Tabu search, part I. ORSA Journal on Computing 1989;1:190–206.
[15] Glover F. Tabu search, part II. ORSA Journal on Computing 1990;2:4–32.
[16] Glover F, Greenberg HJ. New approaches for heuristic search: a bilateral linkage with artificial intelligence. European Journal of Operational Research 1989;39:119–30.
[17] Reeves CR. Modern heuristic techniques for combinatorial problems. Oxford: Blackwell, 1993.
[18] Dorigo M, Gambardella LM. Ant colony system: a cooperative learning approach to the traveling salesman problem. IEEE Transactions on Evolutionary Computation 1997;1:53–66.
[19] Taillard E. Benchmarks for basic scheduling problems. European Journal of Operational Research 1993;64:278–85.
[20] Dorigo M, Di Caro G, Gambardella LM. Ant algorithms for discrete optimization. Artificial Life 1999;5:137–72.
[21] Dorigo M. Optimization, learning and natural algorithms. Ph.D. thesis, DEI, Politecnico di Milano, Italy, 1992.
[22] Beckers R, Deneubourg JL, Goss S. Trails and U-turns in the selection of the shortest path by the ant Lasius niger. Journal of Theoretical Biology 1992;159:397–415.
[23] Dorigo M, Gambardella LM. Ant colonies for the traveling salesman problem. BioSystems 1997;43:73–81.
[24] Gambardella LM, Dorigo M. HAS-SOP: hybrid ant system for the sequential ordering problem. Technical Report IDSIA-11-97, IDSIA, Lugano, Switzerland, 1997.
[25] Gambardella LM, Taillard E, Dorigo M. Ant colonies for the quadratic assignment problem. Journal of the Operational Research Society 1999;50:167–76.
[26] Maniezzo V, Colorni A. The ant system applied to the quadratic assignment problem. IEEE Transactions on Knowledge and Data Engineering 1999;11:769–84.
[27] Bullnheimer B, Hartl RF, Strauss C. An improved ant system algorithm for the vehicle routing problem. Annals of Operations Research 1999;89:319–34.
[28] Bullnheimer B, Hartl RF, Strauss C. Applying the ant system to the vehicle routing problem. In: Voss S, Martello S, Osman IH, Roucairol C, editors. Meta-heuristics: advances and trends in local search paradigms for optimization. Boston: Kluwer, 1999.
[29] Colorni A, Dorigo M, Maniezzo V, Trubian M. Ant system for job-shop scheduling. Belgian Journal of Operations Research 1994;34:39–53.
[30] Forsyth P, Wren A. An ant system for bus driver scheduling. Proceedings of the Seventh International Workshop on Computer-Aided Scheduling of Public Transport, Boston, August 1997.
[31] Costa D, Hertz A. Ants can colour graphs. Journal of the Operational Research Society 1997;48:295–305.
[32] Kuntz P, Snyers D. Emergent colonization and graph partitioning. Proceedings of the Third International Conference on Simulation of Adaptive Behavior: From Animals to Animats, vol. 3. Cambridge, MA: The MIT Press, 1994.
[33] Kuntz P, Layzell P, Snyers D. A colony of ant-like agents for partitioning in VLSI technology. Proceedings of the Fourth European Conference on Artificial Life. Cambridge, MA: The MIT Press, 1997.
[34] Song YH, Chou CS. Application of ant colony search algorithms in power system optimization. IEEE Power Engineering Review 1998;18:63–78.
[35] Schoonderwoerd R, Holland O, Bruten J, Rothkrantz L. Ant-based load balancing in telecommunications networks. Adaptive Behavior 1997;5:169–207.
[36] Di Caro G, Dorigo M. AntNet: distributed stigmergetic control for communications networks. Journal of Artificial Intelligence Research 1998;9:317–65.
[37] Reeves CR. A genetic algorithm for flowshop sequencing. Computers and Operations Research 1995;22:5–13.

Kuo-Ching Ying is an Instructor of Industrial Management at Huafan University. He is also a Ph.D. candidate at National Taiwan University of Science and Technology. His current research interests include the construction of ant colony optimization for various types of scheduling problems and grey theory, on which he has published several papers.

Ching-Jong Liao is a Professor of Industrial Management at the National Taiwan University of Science and Technology. He completed his Ph.D. in Industrial Engineering at the Pennsylvania State University in 1988. His current research interests include production scheduling and inventory control. His papers have appeared in Operations Research Letters, IIE Transactions, Journal of the Operational Research Society, Computers & Operations Research, OMEGA, Naval Research Logistics, etc.