Uniform parallel-machine scheduling to minimize makespan with position-based learning curves

Wen-Chiung Lee (a), Mei-Chi Chuang (c,*), Wei-Chang Yeh (b,c)

a Department of Statistics, Feng Chia University, Taichung, Taiwan
b Advanced Analytics Institute, Faculty of Engineering and Information Technology, University of Technology Sydney, NSW 2007, Australia
c Integration and Collaboration Laboratory, Department of Industrial Engineering and Engineering Management, National Tsing Hua University, P.O. Box 24-60, Hsinchu 300, Taiwan, ROC
* Corresponding author. E-mail address: [email protected] (M.-C. Chuang).

Computers & Industrial Engineering 63 (2012) 813–818. doi:10.1016/j.cie.2012.05.003. © 2012 Elsevier Ltd. All rights reserved. This manuscript was processed by Area Editor T.C. Edwin Cheng.

Article history: Received 19 December 2011; received in revised form 10 May 2012; accepted 12 May 2012; available online 18 May 2012.

Keywords: Scheduling; Learning curve; Uniform parallel-machine; Makespan

Abstract: Scheduling with learning effects has become a popular topic in the past decade; however, most of the research focuses on single-machine problems. In many situations there are machines in parallel, and the skills of the workers might differ owing to their individual experience. In this paper, we study a uniform parallel-machine problem in which the objective is to jointly find an optimal assignment of operators to machines and an optimal schedule to minimize the makespan. Two heuristic algorithms are proposed, and computational experiments are conducted to evaluate their performance.


1. Introduction

In classical scheduling, the processing times of jobs are assumed to be fixed and known. In reality, however, employees in many situations do the same job repeatedly. As a consequence, they learn and are able to perform similar jobs more efficiently. That is, the actual processing time of a job is shorter if it is scheduled later. This phenomenon is known as the "learning effect" and has become a popular topic in the past decade. Lee and Wu (2004) considered a two-machine flowshop with learning effects to minimize total completion time. Wang and Xia (2005) studied flowshop scheduling with learning effects. Wu and Lee (2008) considered a number of single-machine scheduling problems with learning effects. Cheng, Lee, and Wu (2008) considered single-machine and flowshop permutation problems with deteriorating jobs and learning effects, and provided optimal solutions for several scheduling problems. Wang, Ng, Cheng, and Liu (2008) studied single-machine scheduling with a time-dependent learning effect. Biskup (2008) provided a comprehensive review of scheduling models with learning considerations.

Janiak and Rudek (2009) considered a learning effect model in which the learning curve is S-shaped. They provided NP-hardness proofs for two cases of the problem of minimizing the makespan.


Cheng, Lai, Wu, and Lee (2009) developed a learning effect model in which the job processing time is a logarithmic function of the normal processing times of the jobs already processed. They provided optimal solutions for several single-machine problems. Cheng, Lee, and Wu (2010) considered a new scheduling model with deteriorating jobs, learning, and setup times. They obtained polynomial-time optimal solutions for single-machine problems. Janiak and Rudek (2010) suggested a new approach called multi-ability learning that generalizes existing ones and models real-life settings more precisely. They focused on the makespan problem and provided optimal polynomial-time algorithms for special cases. Ji and Cheng (2010) considered a scheduling problem with job-dependent learning effects and multiple rate-modifying activities. They showed that the problem of minimizing the total completion time is polynomially solvable. Lee, Wu, and Hsu (2010) investigated a single-machine problem with the learning effect and release times to minimize the makespan. Wang, Wang, and Wang (2010) considered resource allocation scheduling with learning effects, where the processing time of a job is a function of the job's position in a sequence and its resource allocation. They provided a polynomial algorithm to find the optimal job sequence and resource allocation. Zhang and Yan (2010) provided a general learning effect model and derived optimal solutions for single-machine and flowshop problems. Wang, Sun, and Sun (2010) and Wang and Wang (2011) provided optimal solutions for a number of single-machine problems with an exponential sum-of-actual-processing-time-based learning effect. Lai and Lee (2011) proposed a learning effect model in


which the processing time of a job is a general function of the normal processing times of the jobs already processed and its own scheduled position. Rudek (2011) provided the computational complexity and solution algorithms for flowshop scheduling problems with the learning effect. Yang and Yang (2011) considered single-machine group scheduling with setup and job processing times subject to the learning effect to minimize the makespan. They proved that the problem is polynomially solvable. Li, Hsu, Wu, and Cheng (2011) proposed a truncated position-based learning model to minimize the total completion time. They developed a branch-and-bound algorithm and three simulated annealing algorithms to solve the two-machine flowshop problem. Zhu, Sun, Chu, and Liu (2011) investigated single-machine group scheduling problems with resource allocation and learning effects. Lee (2011) proposed a general position-based learning curve model of which the plateau and S-shaped curves are special cases. He provided optimal solutions for single-objective and multiple-objective problems on a single machine. Wu, Huang, and Lee (2011) studied a two-agent scheduling problem on a single machine with learning considerations. The objective was to minimize the total tardiness of the jobs of the first agent, given that no tardy jobs are allowed for the second agent. Cheng, Cheng, Wu, Hsu, and Wu (2011) studied a two-agent single-machine problem with truncated sum-of-processing-times-based learning effects. They utilized a branch-and-bound algorithm and three simulated annealing algorithms to obtain optimal and near-optimal solutions. Cheng, Wu, Cheng, and Wu (2011) studied a two-agent problem on a single machine in which deteriorating jobs and learning effects were considered concurrently. Cheng, Wu, Chen, Wu, and Cheng (2012) considered two-machine flowshop scheduling with a truncated learning function to minimize the makespan. They proposed a branch-and-bound algorithm and a genetic algorithm to find optimal and approximate solutions. Yang, Cheng, Yang, and Hsu (2012) considered unrelated parallel-machine scheduling with aging effects and multi-maintenance activities to minimize the total machine load. They provided an efficient algorithm to solve the problem.

Pinedo (2008) pointed out that a bank of machines in parallel is a setting that is important from both a theoretical and a practical point of view. From a theoretical viewpoint, it is a generalization of the single machine and a special case of the flexible flowshop. From a practical point of view, it is important because the occurrence of resources in parallel is common in the real world. However, scheduling with learning effects on parallel machines is relatively unexplored. Eren (2009) considered a bi-criteria parallel-machine scheduling problem with a learning effect of setup times and removal times. He provided a mathematical programming model for the problem of minimizing the weighted sum of total completion time and total tardiness. Toksari and Guner (2009) studied a parallel-machine earliness/tardiness (ET) scheduling problem with different penalties under the effects of learning and deterioration. Okołowski and Gawiejnowicz (2010) considered the parallel-machine makespan problem in which the learning effect on job processing times is modeled by the general DeJong's learning curve. They proposed two exact algorithms for this NP-hard problem: a sequential branch-and-bound algorithm and a parallel branch-and-bound algorithm. Hsu, Kuo, and Yang (2011) studied an unrelated parallel-machine problem with setup times and learning effects in which the setup time of each job is past-sequence-dependent. They derived a polynomial-time solution for the total completion time problem. Using the same model, Kuo, Hsu, and Yang (2011) considered the problems of minimizing the total absolute deviation of job completion times and the total load on all machines. They showed that the proposed problems are polynomially solvable.

In many situations, there are machines operating in parallel, and the skills of the workers might differ owing to their individual experience. In this paper, we study a uniform parallel-machine scheduling problem. The objective is to jointly find a near-optimal schedule and an assignment of operators to machines to minimize the makespan. The remainder of this paper is organized as follows. The problem formulation is presented in the next section. In Section 3, a lower bound and two heuristic algorithms are proposed for this problem. Computational experiments are reported in Section 4. Conclusions are given in the last section.

2. Problem formulation

Formulation of the proposed learning effect model on uniform parallel machines is as follows. There are n jobs to be scheduled on m uniform parallel machines. All jobs are available for processing at time 0. Job j has a normal processing time p_j and can be processed on any machine i. Without loss of generality, we assume that p_1 ≥ p_2 ≥ ... ≥ p_n. Once a job starts to be processed, it must be completed without interruption. Let s_i denote the speed of machine i. Without loss of generality, we assume that s_1 ≥ s_2 ≥ ... ≥ s_m. Each machine can process one job at a time, and there is no precedence relation between the jobs. In addition, there are m operators, each to be assigned to one machine. In this paper, we consider the learning effect model proposed by Lee (2011) together with the assignment of operators to machines. The actual processing time of job j is

p_{ikj[r]} = p_j ∏_{l=0}^{r−1} a_{l,k} / s_i    (1)

if it is processed by operator k on machine i and scheduled in the rth position of a sequence, where a_{0,k} = 1 and 0 < a_{l,k} ≤ 1 for l = 1, ..., n and k = 1, ..., m. Note that a_{l,k} denotes the learning rate of operator k on processing the job in the lth position, and ∏_{l=0}^{r} a_{l,k} denotes the cumulative level of the learning effect after operator k has processed r jobs. Let S = (S_1, ..., S_m) denote a schedule, where S_i is the schedule of the subset of jobs assigned to machine i under S. Let π = (π(1), ..., π(m)) denote an assignment of operator π(i) to machine i for i = 1, ..., m. Thus, the objective of this paper is to jointly find a near-optimal schedule S* and a near-optimal assignment π* of operators to machines such that

max{C_{π*(1)}(S*), C_{π*(2)}(S*), ..., C_{π*(m)}(S*)} ≤ max{C_{π(1)}(S), C_{π(2)}(S), ..., C_{π(m)}(S)}

Using the conventional notation, our problem is denoted as Qm|LE|Cmax.
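To make Eq. (1) concrete, the following Python sketch evaluates the actual processing time of a single job; the function name and argument layout are ours, not the paper's.

```python
import functools
import operator

def actual_time(p_j, r, a_k, s_i):
    """Actual processing time per Eq. (1): p_j * prod_{l=0}^{r-1} a_{l,k} / s_i.

    p_j: normal processing time of job j
    r:   position of the job in the machine's sequence (1-indexed)
    a_k: learning rates of operator k, with a_k[0] = a_{0,k} = 1
    s_i: speed of machine i
    """
    cumulative = functools.reduce(operator.mul, a_k[:r], 1.0)
    return p_j * cumulative / s_i

# With no learning (all rates equal to 1), a job in any position takes p_j / s_i:
print(actual_time(10.0, 3, [1.0, 1.0, 1.0, 1.0], 2.0))  # 5.0
```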

3. Heuristic algorithms

The problem under study is NP-hard even without consideration of the learning effects (Lenstra, Rinnooy Kan, & Brucker, 1977). Thus, developing efficient heuristic algorithms is a sensible approach. In this paper, we utilize the genetic algorithm and the particle swarm optimization method. Before presenting the algorithms, we first provide a lower bound, adapted from Pinedo (2008), to evaluate the performance of the heuristics.

Property 1 (Pinedo, 2008). Let A = min_{1≤k≤m} ∏_{l=0}^{n} a_{l,k}. Then

LB = A · max{ p_1/s_1, (p_1 + p_2)/(s_1 + s_2), ..., ∑_{j=1}^{m−1} p_j / ∑_{i=1}^{m−1} s_i, ∑_{j=1}^{n} p_j / ∑_{i=1}^{m} s_i }

is a lower bound on the makespan.
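Property 1 translates directly into code. The sketch below uses our own naming and assumes p and s are sorted in non-increasing order, as in the formulation.

```python
from math import prod

def lower_bound(p, s, a):
    """Makespan lower bound of Property 1 (adapted from Pinedo, 2008).

    p: normal processing times, p[0] >= p[1] >= ...
    s: machine speeds, s[0] >= s[1] >= ...
    a: a[k] = learning rates of operator k, a[k][0] = 1, length >= n + 1
    """
    n, m = len(p), len(s)
    A = min(prod(a_k[:n + 1]) for a_k in a)  # min_k prod_{l=0}^{n} a_{l,k}
    ratios = [sum(p[:j + 1]) / sum(s[:j + 1]) for j in range(m - 1)]
    ratios.append(sum(p) / sum(s))  # all n jobs over all m machines
    return A * max(ratios)
```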

Fig. 1. Crossover operator.
Parent 1:   1.54  0.27  2.35  0.74  1.77  2.13  0.41  1.54
Parent 2:   2.51  0.25  1.45  1.22  2.47  2.89  1.74  2.65
Offspring:  1.54  0.25  1.45  0.74  1.77  2.89  1.74  1.54


Lee (2011) showed that the shortest processing time (SPT) order provides the optimal sequence for the single-machine makespan problem under the proposed model, as stated in Property 2. We use it to facilitate the search process of both algorithms. With the help of Property 2, we only need to determine the assignment of operators and jobs to machines, since the job sequence on each machine must follow the SPT rule to achieve optimality.

Property 2 (Lee, 2011). For the 1|LE|Cmax problem, the optimal schedule is obtained by sequencing the jobs in shortest processing time (SPT) order.
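Combining Eq. (1) with Property 2, the makespan of any candidate solution can be evaluated by sequencing each machine's jobs in SPT order. A minimal sketch with our own naming, assuming each a[k] holds at least n + 1 rates:

```python
def makespan(jobs_on_machine, operator_of_machine, p, s, a):
    """Makespan of a solution; each machine sequences its jobs in SPT order
    (optimal by Property 2).

    jobs_on_machine[i]: job indices assigned to machine i
    operator_of_machine[i]: operator assigned to machine i
    p, s, a: processing times, machine speeds, and learning rates (a[k][0] = 1)
    """
    cmax = 0.0
    for i, jobs in enumerate(jobs_on_machine):
        k = operator_of_machine[i]
        finish, cumulative = 0.0, 1.0   # cumulative = prod_{l=0}^{r-1} a_{l,k}
        for r, j in enumerate(sorted(jobs, key=lambda job: p[job]), start=1):
            finish += p[j] * cumulative / s[i]   # Eq. (1) for position r
            cumulative *= a[k][r]
        cmax = max(cmax, finish)
    return cmax
```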

3.1. Genetic algorithm (GA)

The genetic algorithm (GA) has received considerable attention for its potential as an optimization technique for many complex problems. It has been successfully applied in industrial engineering, including scheduling and sequencing, reliability design, vehicle routing, facility layout and location, transportation, and other areas.

The usual form of the GA has been described by Goldberg (1989). The GA starts with an initial population of random solutions. Each individual, or chromosome, in the population represents a solution to the problem. Chromosomes evolve through successive generations. In each generation, chromosomes are evaluated by a measure of fitness. Chromosomes with better fitness values have higher probabilities of being selected to produce offspring through the crossover and mutation operations. The crossover operator merges two chromosomes to create offspring that inherit features from their parents. The mutation operator produces a spontaneous random change in genes to prevent premature convergence. The procedures of the GA implemented for the problem are outlined below.

3.1.1. Chromosome representation

For a problem of n jobs and m machines, each chromosome has n + m genes. We randomly generate the (n + m)-dimensional position of a chromosome with n + m uniform random numbers between 0 and m. The integer part of each of the first n genes, plus 1, gives the machine that the corresponding job is assigned to. The ordering of the last m genes represents the assignment of operators to the machines. For example, in a problem with 5 jobs and 3 machines, the chromosome (2.32, 0.12, 1.56, 2.79, 1.43, 2.34, 1.45, 0.28) represents the schedule with job 2 assigned to machine 1, jobs 3 and 5 assigned to machine 2, jobs 1 and 4 assigned to machine 3, operator 3 assigned to machine 1, operator 2 assigned to machine 2, and operator 1 assigned to machine 3.
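A possible decoding of this representation is sketched below; the ranking direction of the last m genes is our assumption, chosen so that the paper's example above is reproduced.

```python
def decode(chromosome, n, m):
    """Split an (n + m)-gene chromosome into job-machine and
    operator-machine assignments (machines and operators 1-indexed)."""
    machine_of_job = [int(g) + 1 for g in chromosome[:n]]
    tail = chromosome[n:]
    order = sorted(range(m), key=lambda i: tail[i])  # machines by gene value
    operator_of_machine = [0] * m
    for op, i in enumerate(order, start=1):
        operator_of_machine[i] = op  # smallest gene -> operator 1
    return machine_of_job, operator_of_machine

# The example from the text:
print(decode([2.32, 0.12, 1.56, 2.79, 1.43, 2.34, 1.45, 0.28], 5, 3))
# ([3, 1, 2, 3, 2], [3, 2, 1])
```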

3.1.2. Initialization

Generate an initial population of N chromosomes and calculate their objective values.

3.1.3. Evaluation

Calculate the fitness value of each chromosome as follows:

fitness_i = b − log(Cmax_i)    (2)

where Cmax_i is the makespan of chromosome i and b is chosen to be a sufficiently large number such that the fitness value of each chromosome remains positive.

3.1.4. Selection

We use the roulette wheel method to select the parent chromosomes, where the selection probability is given as

p_k = fitness_k / ∑_{i=1}^{N} fitness_i    (3)

where p_k and fitness_k are the selection probability and the fitness value of chromosome k, respectively.
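Eqs. (2) and (3) can be sketched as follows; the function names are ours, and b = 100 follows the setting reported in Table 4.

```python
import math
import random

def fitness(cmax, b=100.0):
    """Eq. (2): fitness = b - log(Cmax), with b large enough to stay positive."""
    return b - math.log(cmax)

def roulette_select(fitnesses, rng=random):
    """Eq. (3): chromosome k is chosen with probability fitness_k / sum_i fitness_i."""
    u = rng.uniform(0.0, sum(fitnesses))
    acc = 0.0
    for k, f in enumerate(fitnesses):
        acc += f
        if u <= acc:
            return k
    return len(fitnesses) - 1  # guard against floating-point round-off
```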

3.1.5. Crossover

In this study, we use the flat crossover operator, which selects the value of each gene from the corresponding genes of the two parents with equal probability to form the offspring. As shown in Fig. 1, the resulting offspring would be (1.54, 0.25, 1.45, 0.74, 1.77, 2.89, 1.74, 1.54) if the 1st, 4th, 5th and 8th genes are selected from parent 1 and the other genes from parent 2.
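A sketch of this crossover (per-gene choice with equal probability, as in Fig. 1):

```python
import random

def flat_crossover(parent1, parent2, rng=random):
    """Build an offspring by taking each gene from one of the two parents
    with equal probability."""
    return [g1 if rng.random() < 0.5 else g2
            for g1, g2 in zip(parent1, parent2)]
```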

3.1.6. Mutation

The mutation mechanism randomly selects a gene and alters its value. As shown in Fig. 2, the fourth gene is randomly selected and altered to 0.66.
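The mutation step can be sketched as follows; redrawing the chosen gene uniformly from [0, m) is our assumption, consistent with the range used by the encoding.

```python
import random

def mutate(chromosome, m, rng=random):
    """Randomly pick one gene and alter its value, as in Fig. 2."""
    child = list(chromosome)
    child[rng.randrange(len(child))] = rng.uniform(0.0, m)
    return child
```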

3.2. Particle swarm optimization method (PSO)

Particle swarm optimization (PSO) is an evolutionary computational method advocated by Kennedy and Eberhart (1995). Its main concept is inspired by the collective behaviors of animals. Bees or animals exchange their experiences of searching for food, which causes the bees to fly, or the animals to move, to the food along a shorter path or in a more efficient way. In PSO, each particle carries the relevant information regarding the decision variables and a fitness value that indicates the performance of the particle. The trajectory of each particle is updated according to its own flying experience as well as that of the best particle in the swarm. The elements of the PSO implemented for the problem are shown below.

3.2.1. Initialization

The representation of the position of a particle is the real-number coding described in Section 3.1.1.

Fig. 2. Mutation procedure.
Parent:     0.54  1.47  2.48  1.89  0.45  1.25  0.47  2.77
Offspring:  0.54  1.47  2.48  0.66  0.45  1.25  0.47  2.77

Table 1
The actual values of learning rates for large job-sized problems.

Pattern     Learning rates
No LE       a_{l,k} = 1 for l = 1, ..., 9, k = 1, ..., m
Constant    a_{1,k} ~ U(0.95, 0.99) for k = 1, 2, ..., m; a_{l,k} = a_{1,k} for l = 2, ..., 4; a_{l,k} = 1 for l ≥ 5
Increasing  a_{1,k} ~ U(0.93, 0.95) for k = 1, 2, ..., m; a_{l,k} = a_{l−1,k} + 0.002 for l = 2, ..., 4; a_{l,k} = 1 for l ≥ 5
Decreasing  a_{1,k} ~ U(0.98, 1) for k = 1, 2, ..., m; a_{l,k} = a_{l−1,k} − 0.002 for l = 2, ..., 4; a_{l,k} = 1 for l ≥ 5

Table 2
Different levels of each parameter for GA and PSO.

GA
Crossover rate: 0.7 (Pc1), 0.8 (Pc2), 0.9 (Pc3)
Mutation rate: 0.01 (Pm1), 0.02 (Pm2), 0.03 (Pm3)

PSO
Maximum velocity: 28 (MV1), 29 (MV2), 30 (MV3)
Weight: 0.7 (W1), 0.8 (W2), 0.9 (W3)
Cognition learning factors: c1 = 2, c2 = 2 (CLF1); c1 = 2, c2 = 1.49 (CLF2); c1 = 1.49, c2 = 1.49 (CLF3)

Table 4
Parameter settings of GA and PSO.

GA: iteration number 500; population size 300; crossover rate 0.9; mutation rate 0.03; b set to 100.
PSO: iteration number 500; particle number 300; maximum velocity 30; weight 0.8; cognition learning factors c1 = 2, c2 = 1.49; b set to 100.


3.2.2. Calculate the fitness value

The fitness value of each particle is calculated according to Eq. (2), and the value of parameter b is the same as that in Section 3.1.3.

3.2.3. Update the particle local best

For each particle, compare its current fitness value with its best fitness value so far. Replace its best fitness value and best position if its current fitness value is better.

3.2.4. Update the global best

For each particle, compare its best fitness value with the global best fitness value. Replace the global best fitness value and the global best position if the particle's best fitness value is better than the global best.

3.2.5. Update the velocity and position of particles

At iteration t, update the velocity and position of each particle as follows:

v_{ij}^t = w^t · v_{ij}^{t−1} + c_1 · rand() · (p_{ij}^{t−1} − x_{ij}^{t−1}) + c_2 · rand() · (g^{t−1} − x_{ij}^{t−1})    (4)

Table 3
L9 orthogonal table and results for GA and PSO.

No.  GA (Crossover rate, Mutation rate, S/N ratio)   PSO (Maximum velocity, Weight, Cognition learning factor, S/N ratio)
1    Pc1, Pm1, −36.4569    MV1, W1, CLF1, −34.6270
2    Pc1, Pm2, −36.6233    MV1, W2, CLF2, −32.9278
3    Pc1, Pm3, −35.8838    MV1, W3, CLF3, −32.6328
4    Pc2, Pm1, −36.579     MV2, W1, CLF2, −32.2835
5    Pc2, Pm2, −36.9935    MV2, W2, CLF3, −33.4760
6    Pc2, Pm3, −36.2314    MV2, W3, CLF1, −34.7037
7    Pc3, Pm1, −36.4448    MV3, W1, CLF3, −33.1549
8    Pc3, Pm2, −35.8875    MV3, W2, CLF1, −33.4526
9    Pc3, Pm3, −35.7491    MV3, W3, CLF2, −32.9062

x_{ij}^t = x_{ij}^{t−1} + v_{ij}^{t−1}    (5)

where w is the inertia weight, whose value decreases as the iteration number increases; v_{ij}^{t−1} is the velocity of particle i in the jth dimension at iteration t − 1; c_1 and c_2 are the cognition learning factors; rand() is a random number between 0 and 1; p_{ij}^{t−1} is the best solution of particle i in the jth dimension up to iteration t − 1; g^{t−1} is the best solution among all particles up to iteration t − 1; and x_{ij}^{t−1} is the position of particle i in the jth dimension at iteration t − 1. In addition, we set upper and lower limits on the velocity of the particles as follows:

If v_{ij}^t > v_max, then v_{ij}^t = v_max;
if v_{ij}^t < −v_max, then v_{ij}^t = −v_max.    (6)
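A per-particle sketch of Eqs. (4)-(6), under our own naming; we apply the freshly clamped velocity in the position update, which only shifts the paper's time index by one step.

```python
import random

def pso_step(x, v, pbest, gbest, w, c1, c2, vmax, rng=random):
    """One PSO update of a particle's velocity (Eq. (4)), velocity limits
    (Eq. (6)), and position (Eq. (5))."""
    new_x, new_v = [], []
    for j in range(len(x)):
        vj = (w * v[j]
              + c1 * rng.random() * (pbest[j] - x[j])
              + c2 * rng.random() * (gbest[j] - x[j]))
        vj = max(-vmax, min(vmax, vj))  # Eq. (6): clamp the velocity
        new_v.append(vj)
        new_x.append(x[j] + vj)         # Eq. (5): advance the position
    return new_x, new_v
```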

4. Computational experiments

To evaluate the performance of the proposed algorithms, a computational experiment is conducted in this section. All the algorithms are coded in Visual Basic 6.0 and run on a personal computer with an Intel Core i3 2.94 GHz CPU and 1.9 GB of RAM under Windows XP.

The proposed heuristic algorithms are tested with two numbers of jobs, n = 100 and 200. The number of machines (m) is set at three levels, namely 10, 20 and 30. The processing times (p_j) are generated from a discrete uniform distribution U(1, 100). The speeds of the machines (s_i) are generated from a continuous uniform distribution U(1, 10). Four patterns of learning effects are studied: no learning effect (No LE), constant learning rate (Const), increasing learning rate (Inc), and decreasing learning rate (Dec). The framework for generating the actual values is presented in Table 1. A set of 100 instances is randomly generated for each situation. As a result, 24 experimental cases are conducted and a total of 2400 instances are tested.
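The instance generation just described can be sketched as follows; the function and the pattern labels are ours, and the learning-rate patterns follow Table 1.

```python
import random

def generate_instance(n, m, pattern, rng=random):
    """Random instance: p_j ~ discrete U(1, 100), s_i ~ continuous U(1, 10),
    learning rates per Table 1 (rates differ from 1 only for l = 1..4)."""
    p = sorted((rng.randint(1, 100) for _ in range(n)), reverse=True)
    s = sorted((rng.uniform(1.0, 10.0) for _ in range(m)), reverse=True)
    a = []
    for _ in range(m):  # one learning curve per operator
        rates = [1.0] * (n + 1)  # a_{0,k} = 1 and a_{l,k} = 1 for l >= 5
        if pattern == "Const":
            rates[1:5] = [rng.uniform(0.95, 0.99)] * 4
        elif pattern == "Inc":
            a1 = rng.uniform(0.93, 0.95)
            rates[1:5] = [a1 + 0.002 * l for l in range(4)]
        elif pattern == "Dec":
            a1 = rng.uniform(0.98, 1.0)
            rates[1:5] = [a1 - 0.002 * l for l in range(4)]
        a.append(rates)  # pattern "No LE" keeps all rates equal to 1
    return p, s, a
```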

In this paper, we utilize the Taguchi method to determine the values of the parameters of GA and PSO. The Taguchi method is a statistical method for robust parameter design (Cheng & Chang, 2007; Kuo, Syu, Chen, & Tien, 2012; Yildiz, 2009). After several pretests, we set each of these parameters at three levels, as shown in Table 2. The orthogonal arrays represent a set of experiments. In PSO, the total number of possible experiments is


Table 5
The computational results of GA and PSO for large job-sized problems.

m   n    Pattern  GA/LB mean  GA/LB max  PSO/LB mean  PSO/LB max  Mean CPU time (GA)  Mean CPU time (PSO)
10  100  No LE    1.3559      2.0576     1.0453       1.1826      68.9081             74.5113
10  100  Const    1.4377      2.5324     1.0931       1.3011      66.0978             71.9083
10  100  Inc      1.4095      2.1110     1.0950       1.2221      63.1877             71.3081
10  100  Dec      1.4642      2.4890     1.1048       1.3240      63.1443             71.1983
10  200  No LE    1.6092      2.7562     1.1020       1.5175      170.2711            200.6799
10  200  Const    1.5899      2.8091     1.1384       1.3778      171.9185            195.0400
10  200  Inc      1.6987      3.1736     1.1407       1.4440      171.0761            198.3597
10  200  Dec      1.5855      2.3808     1.1368       1.4762      171.2615            195.8638
20  100  No LE    1.7188      2.4287     1.2234       1.4147      65.5876             79.0461
20  100  Const    1.8630      2.7147     1.3652       1.5619      65.7636             78.3473
20  100  Inc      1.8791      2.3866     1.3537       1.5493      66.3714             79.4691
20  100  Dec      1.8773      2.5087     1.3560       1.6084      65.7289             79.5114
20  200  No LE    1.8310      2.5588     1.2528       1.4790      140.9204            172.0282
20  200  Const    1.9945      2.9792     1.3873       1.6155      141.3012            173.5059
20  200  Inc      1.9680      2.7165     1.3849       1.6936      141.3661            173.1888
20  200  Dec      1.9567      2.6474     1.3873       2.0076      144.6989            173.6605
30  100  No LE    1.9963      2.5716     1.3995       1.6705      82.5751             98.6717
30  100  Const    2.4733      3.3445     1.6510       1.8995      82.3577             102.7313
30  100  Inc      2.4035      3.2332     1.6603       2.0911      82.2678             99.7031
30  100  Dec      2.4226      3.3710     1.6614       1.9864      82.0006             102.9380
30  200  No LE    2.0876      3.1295     1.4032       1.6415      154.4243            186.1148
30  200  Const    2.4073      3.4658     1.6147       1.9137      149.0319            187.7024
30  200  Inc      2.3985      3.2602     1.6078       2.1447      148.9144            192.0494
30  200  Dec      2.4235      3.3066     1.5925       2.0043      155.3245            187.8692


3^3 = 27, but the orthogonal array needs a set of only 9 experiments. With the help of these 9 experiments, we can find a suitable level for each factor; the results are illustrated in Table 3. Each experiment is implemented three times. The best values of the parameters of GA and PSO are shown in Table 4.

The mean and maximum ratios GA/LB and PSO/LB are given in Table 5, where GA denotes the solution obtained by the genetic algorithm, PSO denotes the solution obtained by the particle swarm optimization method, and LB denotes the lower bound from Property 1. In addition, the average execution times of the two algorithms are given in Table 5. The results indicate that the performance of PSO is consistently better than that of GA when the number of jobs is large, although PSO consumes more time. The ratio with no learning effect is always the smallest among the four learning curves, which implies that the lower bound is tighter, or that the heuristics yield better solutions, in this case. It is worth mentioning that the operator with the best learning ability is not always assigned to the fastest machine in the optimal solution, although this is not shown in Table 5. Moreover, the distribution of the speeds of the machines has no influence on the performance of PSO and GA, and the ratios GA/LB and PSO/LB remain about the same as the number of jobs increases. Thus, PSO is recommended for the proposed problem.

5. Conclusion

In this paper, we utilized GA and PSO algorithms to solve a uniform parallel-machine problem with learning effects. The objective was to jointly find an optimal assignment of operators to machines and an assignment of jobs to machines such that the makespan is minimized. Computational experiments were conducted to evaluate the performance of the heuristics under several different scenarios. The results show that PSO outperforms GA, and it is therefore recommended.

Acknowledgements

The authors are grateful to the area editor and the referees, whose constructive comments have led to a substantial improvement in the presentation of the paper.

References

Biskup, D. (2008). A state-of-the-art review on scheduling with learning effect. European Journal of Operational Research, 188(2), 315–329.

Cheng, B. W., & Chang, C. L. (2007). A study on flowshop scheduling problem combining Taguchi experimental design and genetic algorithm. Expert Systems with Applications, 32(2), 415–421.

Cheng, T. C. E., Cheng, S. R., Wu, W. H., Hsu, P. H., & Wu, C. C. (2011). A two-agent single-machine scheduling problem with truncated sum-of-processing-times-based learning considerations. Computers and Industrial Engineering, 60(4), 534–541.

Cheng, T. C. E., Lai, P. J., Wu, C. C., & Lee, W. C. (2009). Single-machine scheduling with sum-of-logarithm-processing-times-based learning considerations. Information Sciences, 179(18), 3127–3135.

Cheng, T. C. E., Lee, W. C., & Wu, C. C. (2008). Some scheduling problems with deteriorating jobs and learning effects. Computers and Industrial Engineering, 54(4), 972–982.

Cheng, T. C. E., Lee, W. C., & Wu, C. C. (2010). Scheduling problems with deteriorating jobs and learning effects including proportional setup times. Computers and Industrial Engineering, 58(2), 326–331.

Cheng, T. C. E., Wu, C. C., Chen, J. C., Wu, W. H., & Cheng, S. R. (2012). Two-machine flowshop scheduling with a truncated learning function to minimize the makespan. International Journal of Production Economics. doi:10.1016/j.ijpe.2012.03.027.

Cheng, T. C. E., Wu, W. H., Cheng, S. R., & Wu, C. C. (2011). Two-agent scheduling with position-based deteriorating jobs and learning effects. Applied Mathematics and Computation, 217(21), 8804–8824.

Eren, T. (2009). A bicriteria parallel machine scheduling with a learning effect of setup and removal times. Applied Mathematical Modelling, 33(2), 1141–1150.

Goldberg, D. (1989). Genetic algorithms in search, optimization and machine learning. Reading, MA: Addison-Wesley.

Hsu, C. J., Kuo, W. H., & Yang, D. L. (2011). Unrelated parallel machine scheduling with past-sequence-dependent setup time and learning effects. Applied Mathematical Modelling, 35(3), 1492–1496.

Janiak, A., & Rudek, R. (2009). Experience-based approach to scheduling problems with the learning effect. IEEE Transactions on Systems, Man, and Cybernetics – Part A, 39(2), 344–357.

Janiak, A., & Rudek, R. (2010). A note on a makespan minimization problem with a multi-abilities learning effect. Omega, The International Journal of Management Science, 38(3–4), 213–217.

Ji, M., & Cheng, T. C. E. (2010). Scheduling with job-dependent learning effects and multiple rate-modifying activities. Information Processing Letters, 110(11), 460–463.

Kennedy, J., & Eberhart, R. (1995). Particle swarm optimization. Proceedings of IEEE International Conference on Neural Networks, 4, 1942–1948.

Kuo, W. H., Hsu, C. J., & Yang, D. L. (2011). Some unrelated parallel machine scheduling problems with past-sequence-dependent setup time and learning effects. Computers and Industrial Engineering, 61(1), 179–183.

Kuo, R. J., Syu, Y. J., Chen, Z. Y., & Tien, F. C. (2012). Integration of particle swarm optimization and genetic algorithm for dynamic clustering. Information Sciences, 195, 124–140.

818 W.-C. Lee et al. / Computers & Industrial Engineering 63 (2012) 813–818

Lai, P. J., & Lee, W. C. (2011). Single-machine scheduling with general sum-of-processing-time-based and position-based learning effect. Omega, The International Journal of Management Science, 39(5), 467–471.

Lee, W. C. (2011). Scheduling with general position-based learning curves. Information Sciences, 181(24), 5515–5522.

Lee, W. C., & Wu, C. C. (2004). Minimizing total completion time in a two-machine flowshop with a learning effect. International Journal of Production Economics, 88(1), 85–93.

Lee, W. C., Wu, C. C., & Hsu, P. H. (2010). A single-machine learning effect scheduling problem with release times. Omega, The International Journal of Management Science, 38, 3–11.

Lenstra, J. K., Rinnooy Kan, A. H. G., & Brucker, P. (1977). Complexity of machine scheduling problems. Annals of Discrete Mathematics, 1, 343–362.

Li, D. C., Hsu, P. H., Wu, C. C., & Cheng, T. C. E. (2011). Two-machine flowshop scheduling with truncated learning to minimize the total completion time. Computers and Industrial Engineering, 61(3), 656–662.

Okołowski, D., & Gawiejnowicz, S. (2010). Exact and heuristic algorithms for parallel-machine scheduling with DeJong's learning effect. Computers and Industrial Engineering, 59(2), 272–279.

Pinedo, M. (2008). Scheduling: Theory, algorithms, and systems (3rd ed.). Upper Saddle River, NJ: Prentice Hall.

Rudek, R. (2011). Computational complexity and solution algorithms for flowshop scheduling problems with the learning effect. Computers and Industrial Engineering, 61(1), 20–31.

Toksari, M. D., & Guner, E. (2009). Parallel machine earliness/tardiness scheduling problem under the effects of position based learning and linear/nonlinear deterioration. Computers and Operations Research, 36(8), 2394–2417.

Wang, J. B., Ng, C. T., Cheng, T. C. E., & Liu, L. L. (2008). Single-machine scheduling with a time-dependent learning effect. International Journal of Production Economics, 111(2), 802–811.

Wang, J. B., Sun, L. H., & Sun, L. Y. (2010). Scheduling jobs with an exponential sum-of-actual-processing-time-based learning effect. Computers and Mathematics with Applications, 60(9), 2673–2678.

Wang, J. B., & Wang, J. J. (2011). Single-machine scheduling jobs with exponential learning functions. Computers and Industrial Engineering, 60(4), 755–759.

Wang, D., Wang, M. Z., & Wang, J. B. (2010). Single-machine scheduling with learning effect and resource-dependent processing times. Computers and Industrial Engineering, 59(3), 458–462.

Wang, J. B., & Xia, Z. Q. (2005). Flow-shop scheduling with learning effect. Journal of the Operational Research Society, 56(11), 1325–1330.

Wu, C. C., Huang, S. K., & Lee, W. C. (2011). Two-agent scheduling with learning consideration. Computers and Industrial Engineering, 61(4), 1324–1335.

Wu, C. C., & Lee, W. C. (2008). Single-machine scheduling problems with a learning effect. Applied Mathematical Modelling, 32, 1191–1197.

Yang, D. L., Cheng, T. C. E., Yang, S. J., & Hsu, C. J. (2012). Unrelated parallel-machine scheduling with aging effects and multi-maintenance activities. Computers and Operations Research, 39(7), 1458–1464.

Yang, S. J., & Yang, D. L. (2011). Single-machine scheduling simultaneous with position-based and sum-of-processing-times-based learning considerations under group technology assumption. Applied Mathematical Modelling, 35(5), 2068–2074.

Yildiz, A. R. (2009). A new design optimization framework based on immune algorithm and Taguchi's method. Computers in Industry, 60(8), 613–620.

Zhang, X. G., & Yan, G. L. (2010). Machine scheduling problems with a general learning effect. Mathematical and Computer Modelling, 51(1–2), 84–90.

Zhu, Z., Sun, L., Chu, F., & Liu, M. (2011). Single-machine group scheduling with resource allocation and learning effect. Computers and Industrial Engineering, 60(1), 148–157.