Experimental analysis of optimization techniques on the road passenger transportation problem



ARTICLE IN PRESS

Engineering Applications of Artificial Intelligence 22 (2009) 374–388


Experimental analysis of optimization techniques on the road passenger transportation problem

Beatriz López a,*, Víctor Muñoz a, Javier Murillo a, Federico Barber b, Miguel A. Salido b, Montserrat Abril b, Mariamar Cervantes b, Luis F. Caro a, Mateu Villaret a

a University of Girona, Campus Montilivi, edifice P4, 17071 Girona, Spain
b DSIC, Universidad Politécnica de Valencia, Spain

Article info

Article history:

Received 14 September 2007

Received in revised form 3 March 2008

Accepted 31 October 2008

Available online 21 December 2008

Keywords:

Optimization

Constraints

Search

Distributed problems

Metaheuristics

Bioinspired approaches

0952-1976/$ - see front matter © 2008 Elsevier Ltd. All rights reserved.

doi:10.1016/j.engappai.2008.10.014

* Corresponding author. Tel.: +34 972418880; fax: +34 972418976.

E-mail address: [email protected] (B. López).

Abstract

Analyzing the state of the art in a given field in order to tackle a new problem is always a mandatory task. The literature provides surveys based on summaries of previous studies, which are often based on theoretical descriptions of the methods. An engineer, however, requires some evidence from experimental evaluations in order to make the appropriate decision when selecting a technique for a problem. This is what we have done in this paper: experimentally analyzed a set of representative state-of-the-art techniques on the problem we are dealing with, namely, the road passenger transportation problem. This is an optimization problem in which drivers should be assigned to transport services, fulfilling some constraints and minimizing a cost function. The experimental results have provided us with good knowledge of the properties of several methods, such as modeling expressiveness, anytime behavior, computational time, memory requirements, parameters, and freely downloadable tools. Based on our experience, we are able to choose a technique to solve our problem. We hope that this analysis is also helpful for other engineers facing a similar problem.

© 2008 Elsevier Ltd. All rights reserved.

1. Introduction

Whenever engineers or researchers face a new problem, they need to review the state of the art in that particular field, in order to be sure that the problem has not already been solved in the past, and to ascertain the most promising approach to follow. In each discipline, surveys are published periodically. However, most surveys are summaries of previous studies which give some hints as to what the techniques are. Based on the information provided in the surveys, it is often difficult to evaluate the suitability of the techniques for the problem at hand, unless the techniques have been applied to that particular problem. This is the situation we found ourselves in.

We are dealing with the road passenger transportation problem. Road passenger transportation has been a matter of concern for traffic authorities for years, trying to minimize bus accidents. European law is evolving in order to regulate professional driving licences and driving times, with the aim of assuring the highest level of safety for citizens who use road passenger transport. These new laws and regulations mean a considerable number of requirements that need to be met by inter-urban transport companies operating just-in-time services, that is, services required within a short period of time, usually from one day to the next (conference events, holidays, excursions). The problem facing these companies is the allocation of one-day drivers to the required services.

The road passenger transportation problem fits neatly into the category of optimization problems, in which resources (drivers) have to be assigned to tasks (transport services), fulfilling some constraints and minimizing a cost function. Although in most cases these problems are solved by state-of-the-art techniques, there are still many recent papers dealing with the driver allocation problem, such as Abbink et al. (2007), Portugal et al. (2006), and Laplagne et al. (2005), indicating that the problem is still open, due to the different problem specificities that engineers have to tackle. There do not appear to be any general guidelines for choosing the appropriate technique given the specific description of a problem. In our case, services are inter-urban, just-in-time scheduling is required, and new legislation defines new constraints. In addition, computational efficiency is the main feature used by researchers to compare techniques, while other features are also important. Expressiveness, for example, is an interesting issue regarding solution interpretation or even the subsequent incorporation of robustness into the solution. Our intention with this experimental analysis is to contribute to the understanding of current state-of-the-art techniques, beyond a pure efficiency analysis, by considering other interesting features such as model expressiveness, anytime behavior, memory requirements, parameter tuning and tool availability. The characterization of the techniques provided in this paper could be a first step towards the selection of the appropriate general technique when dealing with similar resource allocation problems and looking at the properties of a particular technique.

Fig. 1. Methods analyzed, grouped in categories:
- Constraint-based approaches: systematic search (chronological backtracking, branch and bound; constraint logic programming: general search, Prolog), constraint propagation (forward checking), distributed (synchronous backtracking, asynchronous backtracking), and mathematical optimization (mixed integer programming).
- Metaheuristics: GRASP, Tabu search.
- Market-based approaches: combinatorial auctions.
- Bioinspired approaches: genetic algorithms, ant colony optimization.

We have experimentally analyzed a total of 12 techniques, grouped into four categories: bioinspired methods, metaheuristics, constraint-based methods, and market-based methods (see Fig. 1). Note that this classification is not crisp, since some of the methods can be assigned to more than one category. For example, genetic algorithms (GA) can be classified as metaheuristics or as bioinspired methods. Nevertheless, we think that the methods chosen to perform the analysis widely cover the different methods available in the literature in the fields of both Operational Research and Artificial Intelligence, including the newest distributed and market-based approaches. It is important to note that each technique requires a specific approach to the problem. So, for each technique analyzed, we provide some generalities regarding the technique, the modeling of the problem according to its requirements, and the experimental results obtained. By providing an illustrative use of several techniques for dealing with the same combinatorial problem, we intend to help both beginners in the optimization world and experts who wish to update their scope of the area.

1 When more than one driver can be assigned to a journey (for reserve situations, for example, Ramalhinho Lourenço et al., 2001), the problem is known to be an instance of the set covering problem, also NP-complete.

2. Problem description

In the road passenger transportation problem we are presented with a set of resources (drivers) D = {d1, ..., dn} and a set of tasks (services) S = {s1, ..., sm} to be performed by using the resources. The problem consists of finding the best assignment of drivers to services, given a cost function and subject to the constraints and preferences provided by the administration (local, national or European). We are dealing, then, with a constraint optimization problem (COP), and in particular, a scheduling problem, since we are interested in knowing the schedule of each driver in order to deploy all of the services requested. More specifically, since drivers are our resources, we are dealing with a resource allocation problem.

There is an alternative approach to the problem, in which services are grouped according to possible driver journeys. A journey is then defined as the set of services that can be assigned to a single driver (the journey duties of a driver). Journey generation is known as the crew scheduling problem, which is complemented by the rostering problem, in which journeys are assigned to drivers (Ramalhinho Lourenço et al., 2001). In our experimental study, tackling the problem as a whole (service approach) or in two steps (journey approach) depends on the modeling capacities of the techniques.

Regarding the problem complexity, when only one driver is assigned to a journey, as in our case, the crew scheduling problem is known to be an instance of the set partitioning problem (Laplagne et al., 2005; Kohl, 2003), which is NP-complete (Balas and Padberg, 1976; Garey and Johnson, 1979).¹ Similarly, when facing the problem according to the service approach, the complexity is also NP-complete. In so far as we are looking for solutions that minimize the allocation costs, we are dealing with an NP-hard problem.

2.1. Problem formalization

Definition 1. A service is a tuple si = ⟨sli, fli, sti, fti⟩ where sli is the start location, fli the final location, sti the initial time, and fti the final time (sti < fti).

Definition 2. A driver is a tuple di = ⟨bci, kmci, sli, fli, hwi, hci⟩ where bci is the basic cost, kmci is the cost per kilometer, sli is the start location, fli is the final location (often sli = fli), hwi is the accumulated two-week driving time, and hci is the cost per time unit.
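To make the formalization concrete, Definitions 1 and 2 can be rendered as plain data types. The following Python sketch is a hypothetical rendering, not the authors' implementation; the field names simply mirror the tuples above:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Service:
    """Definition 1: a service <sl, fl, st, ft> with st < ft."""
    sl: str   # start location
    fl: str   # final location
    st: int   # initial time
    ft: int   # final time

    def __post_init__(self):
        # Definition 1 requires the start time to precede the final time.
        assert self.st < self.ft, "Definition 1 requires st < ft"

@dataclass(frozen=True)
class Driver:
    """Definition 2: a driver <bc, kmc, sl, fl, hw, hc>."""
    bc: float   # basic cost
    kmc: float  # cost per kilometer
    sl: str     # start location (often equal to fl)
    fl: str     # final location
    hw: float   # accumulated two-week driving time
    hc: float   # cost per time unit
```

Frozen dataclasses make the tuples immutable, matching their role as problem data rather than decision variables.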

There are two kinds of services to be considered: requested and intervening. Requested services (or services, for short) are the ones that customers have applied for, while intervening services are those required to move the driver from the end location of one service to the start location of the next service assigned to him.


Definition 3. Given two services, si and sj, with fti < stj, an intervening service between si and sj is defined as a tuple si−j = ⟨sli−j, fli−j, sti−j, fti−j⟩ where sli−j is the start location (with sli−j = fli), fli−j the final location (with fli−j = slj), sti−j the initial time, and fti−j the final time, with sti−j > fti and fti−j < stj.

Given a set of services S and a set of drivers D, a total number of k intervening services could be required. Let I be the set of such intervening services.

Definition 4. An allocation based on services is a list of pairs Ai = [(s1, di1), (s2, di2), ..., (sl, dil)] where si ∈ S ∪ I, dj ∈ D, and in which all constraints are satisfied. Furthermore, ∪si∈(Ai\I) {si} = S, that is, all requested services are covered, and ∩si∈(Ai\I) {si} = ∅, that is, no service is repeated.

Among all the possible constraints of the problem (see López, 2005, for a complete description of the problem), the following constraints have been considered for the experimental study:

- Overlapping: a driver cannot be assigned to two different services with overlapping times. In addition, a driver assigned to a service that ends at time t and location l cannot be assigned to another service that starts at time t + 1, unless the location of the new service is the same (l).
- Maximum driving time (MaxDT): the driving time required for both the requested and intervening services cannot be over the maximum driving time allowed.
- Maximum journey length (MJ): the addition of the driving time plus the free time among assigned services cannot be over the maximum journey length allowed.
- Maximum driving time per two weeks (MTB): the maximum driving time per two weeks cannot be over 90 h.

Definition 5. A journey is an ordered set ji = {s1, ..., sp} where sj ∈ S ∪ I, in which the overlapping, maximum driving time and maximum journey length constraints are satisfied.

Definition 6. An allocation based on journeys is a list of pairs Ai = [(j1, di1), (j2, di2), ..., (jl, dil)] where jk is a journey, dk ∈ D, S ⊆ ∪k jk (all services are covered), and in which all constraints are satisfied.

The cost function, which measures the individual cost of a driver di in an allocation Ak, is the following:

cost(Ak, di) = bci + α · (distance(Ak, di) · kmci) + β · (h(Ak, di) · hci),  (1)

where distance(Ak, di) is the distance covered by the driver in the Ak allocation, measured in kilometers, h(Ak, di) is the journey time of the driver in the Ak allocation (including non-occupied time), and α and β are parameters of the cost function whose purpose is to make kilometers and hours, which have different scales (kilometers are usually defined in [0, 100] while hours are in [0, 24]), comparable. After several tries, we have set α = 10.0 and β = 7.0, respectively.

The cost function that measures the cost of an allocation Ak is defined as the sum of the individual costs of the drivers, cost(Ak, di), that is,

C(Ak) = Σi∈{1,...,n} cost(Ak, di).  (2)

The road passenger transportation problem consists of finding the allocation that minimizes this cost (argmini C(Ai)) subject to the above constraints.
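Read this way, Eqs. (1) and (2) translate into a few lines of code. The sketch below is a hypothetical Python rendering: the helper values distance(Ak, di) and h(Ak, di) are assumed to be precomputed per driver, and the weighting of the two terms follows our reading of Eq. (1):

```python
ALPHA, BETA = 10.0, 7.0  # scale weights alpha and beta from Eq. (1)

def driver_cost(bc, kmc, hc, distance_km, journey_hours):
    """Eq. (1): individual cost of one driver in an allocation.
    bc: basic cost, kmc: cost per km, hc: cost per time unit."""
    return bc + ALPHA * (distance_km * kmc) + BETA * (journey_hours * hc)

def allocation_cost(per_driver):
    """Eq. (2): C(Ak) is the sum of the individual driver costs.
    per_driver: iterable of (bc, kmc, hc, distance_km, journey_hours) tuples."""
    return sum(driver_cost(*row) for row in per_driver)
```

For example, a driver with basic cost 100, 1.0 per km, 2.0 per hour, covering 10 km over 3 h, contributes 100 + 10·10 + 7·6 to the allocation cost.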

2.2. The workbench

In order to experimentally analyze the different techniques, up to 70 problem instances with differing complexity have been generated. The data corresponding to services (start and end destinations, and start and end times) and drivers (basic cost, cost per kilometer, cost per time unit, start and end location, and cumulated driving hours) have been generated randomly for each example. The first instance has been defined with the first generated service and driver, the second instance with the first two generated services and drivers, and so on until the 70th example, the complexity of the 70th instance being greater than in a real case of the application we are dealing with. In this sense, it could be said that the problem instances of our workbench have been partially stochastically generated.

With this generation procedure, we have defined three different scenarios depending on the constraints used (a time unit = 1/2 h):

- Normal: MaxDT = 22 time units (tu), MJ = 30 tu, and MTB = 180 tu.
- Relaxed: only the overlapping constraints are considered.
- Harder: MaxDT = 18 tu, MJ = 25 tu, and MTB = 180 tu.

By default, the methods have been tested using the normal scenario; the remaining scenarios have then also been used to analyze other possible method behaviors.
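A workbench of this kind can be mimicked with simple random sampling. The sketch below is illustrative only: the paper does not specify the distributions used for locations, times and costs, so every range here is an assumption; only the incremental instance sizes and the scenario constants follow the text.

```python
import random

def make_instance(k, n_locations=20, horizon=48, seed=0):
    """Instance k has k random services and k random drivers (times in 1/2-h units).
    All ranges are illustrative assumptions, not the paper's generator."""
    rng = random.Random(seed + k)
    services, drivers = [], []
    for _ in range(k):
        st = rng.randrange(0, horizon - 4)
        services.append({
            "sl": rng.randrange(n_locations), "fl": rng.randrange(n_locations),
            "st": st, "ft": st + rng.randrange(1, 5),   # 0.5 h to 2 h long
        })
        drivers.append({
            "bc": rng.uniform(50, 150), "kmc": rng.uniform(0.5, 2.0),
            "hc": rng.uniform(5, 15), "sl": rng.randrange(n_locations),
            "fl": rng.randrange(n_locations), "hw": rng.uniform(0, 60),
        })
    return services, drivers

# The three scenarios of Section 2.2 (time unit = 1/2 h).
SCENARIOS = {
    "normal":  {"MaxDT": 22, "MJ": 30, "MTB": 180},
    "relaxed": {},  # only overlapping constraints are considered
    "harder":  {"MaxDT": 18, "MJ": 25, "MTB": 180},
}
```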

3. Constraint-based approaches

Constraint-based approaches model the problem as a constraint satisfaction problem (CSP) by means of variables, their domains, and constraints that express dependencies among variable assignments. A solution in this approach is the assignment to each variable of a single value from its domain so that no constraint is violated (Dechter, 2003; Apt, 2003). A problem with a solution is termed satisfiable or consistent. A SAT problem consists of a CSP with Boolean variables, that is, each variable maintains two possible values (Rossi et al., 2006).

All optimization problems are CSPs in the general sense (Rossi et al., 2006). Thus a COP is defined as a CSP together with an optimization function which maps every solution to a numerical value, and the goal is to find the solution with the best (maximum or minimum) value.

Constraint methods studied in our analysis are organized into four groups (see Fig. 1): systematic search, constraint propagation, distributed methods and mathematical optimization. Systematic methods comprise chronological backtracking, branch and bound, and constraint logic programming (CLP), in which the flexibility and expressiveness of constraints is enhanced (Bartak, 1999; Garcia de la Banda et al., 1996). Constraint propagation techniques are combined with systematic search methods in various forms to reduce the search space (Dechter, 2003; Apt, 2003). Forward checking (FC) is the easiest example of this kind of hybrid method. The third group includes the new trends in distributed backtracking algorithms (Yokoo and Hirayama, 2000) and, finally, the fourth category corresponds to classic mathematical optimization methods.

3.1. Chronological backtracking

This is the simplest search algorithm. It explores the search tree of all possible assignment alternatives, employing a depth-first strategy. At each step, a node is expanded at the lowest level in the tree, which means that a value is assigned to a variable such that the constraints are satisfied (partial solution). This process is repeated until either a complete solution is found or a failure arises, that is, no assignment is possible. Then, the algorithm returns to a higher level, at which it resumes the node expansion (Apt, 2003; Dechter, 2003) (see Fig. 2). Some heuristics can be applied in order to sort the variables and values to be assigned first, as well as the constraints to be checked first. When a solution is found, the cost of the solution is computed and the process continues. So the complete search space is explored in order to look for alternative, lower-cost solutions.

Fig. 2. Search space of the chronological backtracking approach.
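The exploration just described can be sketched generically: assign a driver to each service in order, check the constraints on every partial assignment, backtrack on failure, and keep scanning the whole tree for the cheapest complete solution. This is an illustrative Python rendering, not the paper's Delphi implementation; the consistency and cost tests are left as pluggable functions:

```python
def chronological_backtracking(services, drivers, consistent, cost):
    """Exhaustive depth-first search keeping the lowest-cost complete assignment.
    consistent(partial) -> bool checks the constraints on a partial assignment;
    cost(assignment) -> float evaluates a complete one."""
    best = {"assignment": None, "cost": float("inf")}

    def search(i, partial):
        if i == len(services):                 # complete solution found
            c = cost(partial)
            if c < best["cost"]:
                best["assignment"], best["cost"] = dict(partial), c
            return
        for d in drivers:                      # try every value for variable x_i
            partial[services[i]] = d
            if consistent(partial):            # expand only consistent partial solutions
                search(i + 1, partial)
            del partial[services[i]]           # undo the assignment and backtrack

    search(0, {})
    return best["assignment"], best["cost"]
```

Because the whole tree is scanned, the exponential running time reported below is inherent to the method, not an implementation artifact.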

3.1.1. Problem modeling

We have formulated the service approach of our problem in terms of variables, domains and constraints as follows: services are our variables, xi; the domain of each variable is the set of drivers D, so when a variable has a value assigned, xi = dj, it means that service si has the driver dj assigned to it. Regarding heuristics, we have used the following:

- Sorting variables: services have been ordered according to their initial start time.
- Sorting values: the drivers to be assigned to each variable have been ordered according to their cost, so lower-cost drivers are tried first.
- Sorting constraints: overlapping constraint, driving time, journey length and cumulated driving time.

Constraints are modeled as follows:

- Overlapping: ∀i, j, if xi = xj, then fti + duration(si−j) < stj or ftj + duration(sj−i) < sti, where duration(sk) = ftk − stk.
- Maximum driving time: MaxDT ≥ TSPi + Σj∈duties(di) duration(sj) + TFPi, where TSPi is the time required for driving from the starting driver position to the starting location of the first service, duties(di) are the services already assigned to the driver in a partial or candidate solution, and TFPi is the time required for driving from the final location of the last service to the final position of the driver.
- Maximum journey length: MJ ≥ TJi, where TJi is the journey duration of driver di. It is computed as TJi = (ft_last(duties(di)) + TFPi) − (st_first(duties(di)) − TSPi), where the second term computes the initial time at which the driver starts his/her duties and the first term the final time.
- Maximum driving time per two weeks: MTB ≥ hwi + TJi.
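As a hedged illustration, the overlapping and journey-length checks for a single driver's duties might be coded as follows; the travel-time lookup table and the tuple layout are assumptions for the sketch, not part of the paper's model:

```python
def overlaps(duties, travel):
    """Overlapping check: for consecutive services of one driver, the
    intervening travel must fit between them.
    duties: list of (sl, fl, st, ft) tuples sorted by start time;
    travel[(a, b)]: driving time from location a to location b."""
    for (_, fl1, _, ft1), (sl2, _, st2, _) in zip(duties, duties[1:]):
        if ft1 + travel[(fl1, sl2)] > st2:
            return True                        # travel does not fit: overlap
    return False

def journey_ok(duties, travel, start_pos, final_pos, MJ):
    """Maximum journey length: MJ >= TJ = (ft_last + TFP) - (st_first - TSP)."""
    if not duties:
        return True
    TSP = travel[(start_pos, duties[0][0])]    # start point -> first service
    TFP = travel[(duties[-1][1], final_pos)]   # last service -> final point
    TJ = (duties[-1][3] + TFP) - (duties[0][2] - TSP)
    return TJ <= MJ
```

Such per-driver predicates are exactly what the `consistent` test of the backtracking search has to evaluate on every partial solution.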

3.1.2. Results

We implemented the algorithm in Delphi and ran the different workbench problems. The results obtained with simple backtracking are shown in Fig. 3 (see the CB line). The x-axis shows the complexity of the problem, while the y-axis provides the time in milliseconds. As can be seen in the graph, time increases exponentially, and no tests have been performed for cases beyond 12. We have obtained similar results in each workbench scenario.

Regarding expressiveness, constraints are coded so that any problem constraint can be programmed. The program input data are the services, drivers, locations, and the constants used in the constraints (that is, MaxDT, MJ, and MTB). The inputs and the results are easy for a programmer to provide and interpret.

3.2. Branch and bound

Whenever a partial solution is found, instead of traversing all of the search space, the branch and bound method computes its cost and compares it with the cost of the best solution found so far (the upper bound). If the cost of the partial solution is higher, the algorithm backtracks, pruning the subtree below it (Dechter, 2003). In addition, if the cost of the partial solution is not higher, but it is possible to estimate its final cost, and the estimation goes over the upper bound, it is also pruned. So the key issue in this kind of approach is to define the appropriate estimator.

3.2.1. Problem modeling

The modeling of the problem regarding variables and constraints is the same as in the chronological backtracking method (service approach). The main difficulty when using the branch and bound method was defining the appropriate function to estimate the cost of a partial solution, in order to prune the search and provide an answer in a reasonable time (and thus improve the results obtained with the chronological backtracking method). This estimation function should take into account the remaining assignments to be performed, which depend on both the requested services and the intervening services. This function can underestimate the real cost, but never overestimate it, in order to ensure that we are not pruning optimal solutions.

The estimated cost function Fe has been defined as the sum of the individual estimated costs fe of the remaining services R, that is,

Fe(R) = Σsi∈R fe(si).  (3)

The individual estimate of a remaining service, fe(si), is based on the minimum driver cost to cover the service. According to the individual driver cost of Eq. (1), two different situations can conflict when calculating the minimum: the driver with the minimum cost, dc, or the driver with the minimum distance to the start location of the service, dd. In order to solve this conflict, the distance to be covered by both drivers is analyzed, and the minimum one is selected as the value of fe(si).

Fig. 3. Times (ms) required by the different techniques analyzed (x-axis: number of services and drivers, from 1 to 70; y-axis: execution time in ms, up to 400,000; series: FC, CB, B&B, MIP, CLP, AB, GRASP, Tabu, CA, GA, Ants, SB-D, SB-C).

3.2.2. Results

The results of the branch and bound method can be seen in Fig. 3 (see the B&B line). We also tested the algorithm with relaxed and, conversely, harder constraints, in which cases the algorithm behaves in a similar way. The results are slightly better than with chronological backtracking, but the design (modeling) and implementation required greater effort. That is to say, the definition of the estimator and the search strategy took longer to achieve than the naive chronological backtracking approach.

The branch and bound method implemented here is partially anytime. In other words, if the algorithm is stopped at any moment, it provides the best solution found so far, but it cannot resume its execution.

3.3. Constraint propagation

Constraint propagation algorithms deal with search space reduction via a process of inference which reduces variable values (Apt, 2003). There are two basic schemas: look-back and look-ahead (Bartak, 1999). The former checks through variables already instantiated and solves inconsistencies when they occur. The latter schema is proposed to prevent future conflicts.

FC is the easiest schema of a look-ahead strategy. When a value is assigned to the current variable, any value in the domain of a future variable which conflicts with this assignment is temporarily removed from the domain (Bartak, 1999).
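In code, the look-ahead amounts to maintaining per-variable candidate sets: after each assignment, conflicting values are pruned from the domains of future variables, and the search backtracks as soon as any domain is wiped out. The sketch below is an illustration with a generic pairwise conflict test, not the solver used in the experiments:

```python
def forward_checking(variables, domains, conflicts):
    """Find one consistent assignment. domains: {var: set(values)};
    conflicts(v1, a, v2, b) -> True if v1=a is incompatible with v2=b."""
    assignment = {}

    def search(i):
        if i == len(variables):
            return dict(assignment)
        v = variables[i]
        for a in sorted(domains[v]):
            assignment[v] = a
            pruned = []                        # (future_var, removed_value) pairs
            ok = True
            for w in variables[i + 1:]:        # look ahead: filter future domains
                for b in [b for b in domains[w] if conflicts(v, a, w, b)]:
                    domains[w].discard(b)
                    pruned.append((w, b))
                if not domains[w]:             # domain wipe-out: a cannot work
                    ok = False
                    break
            result = search(i + 1) if ok else None
            for w, b in pruned:                # restore domains on backtrack
                domains[w].add(b)
            if result is not None:
                return result
            del assignment[v]
        return None

    return search(0)
```

Detecting the wipe-out before recursing is what lets FC fail earlier than plain chronological backtracking.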

3.3.1. Problem modeling

In this model, we have considered the SAT formulation of the service approach to the problem, whereby each variable maintains two possible values {0, 1}. More specifically, our variables are the product of drivers and services, as follows:

∀di ∈ D, ∀sj ∈ S: Xij,

where Xij = 0 when the service sj is not allocated to driver di, and Xij = 1 otherwise. According to the SAT formulation, constraints are modeled as follows:

- Each service is allocated to only one driver: ∀sj ∈ S, Σdi∈D Xij = 1.

- Overlapping: if sj and sk are services with overlapping times, ∀di ∈ D, Xij + Xik ≤ 1.
- Maximum driving time: ∀di ∈ D, MaxDT ≥ TSPi + TDi + TTi + TFPi, where:
  - TSPi is the driving time from the start point of the di driver to the start location of his first service,
    TSPi = Σsj∈S (Tspi,slj · max((Xij − max(min(a, 1), 0)), 0)),
    where the parameter Tspi,slj is the driving time from the start point of the di driver to the start location of the sj service, and a is the sum over all the services whose start time is earlier than the start time of sj.
  - TDi is the driving time dedicated to the services allocated to the di driver,
    TDi = Σsj∈S (Xij · TSj),
    where the parameter TSj is the driving time dedicated to the sj service.
  - TTi is the driving time required for the intervening services of the di driver,
    TTi = Σsj,sk Tflj,slk · max(min(Xij, Cik) − max(min(b, 1), 0), 0),
    where the parameter Tflj,slk is the driving time from the final location of the sj service to the start location of the sk service, Cik are the services ranged between the final time of si and the start of sk, and b is the sum of all the services ranged between the final time of sj and the start time of sk.
  - TFPi is the driving time from the final location of the last service of the di driver to his final point,
    TFPi = Σsj∈S (Tflj,fpi · max((Xij − max(min(d, 1), 0)), 0)),
    where the parameter Tflj,fpi is the driving time from the final location of the sj service to the final point of the di driver, and d is the sum of all services whose start time is later than the start time of sj.
- Journey length: ∀di ∈ D, MJ ≥ TJi, where TJi is the journey time of the di driver,


TJi = maxj((ftj + Tflj,fpi) · Xij) − minj((10001 − (Xij · 10000)) · (1 − Cij + stj − Tspi,slj)),

with ftj being the final time of the sj service, and stj the start time of sj (see Definition 1).

- Maximum driving time per two weeks: ∀di ∈ D, MTB ≥ hwi + TJi, where hwi is the number of hours that the di driver has driven during the last two weeks (see Definition 2).

3.3.2. Results

We have used a well-known CSP solver to evaluate our model by means of FC.² Fig. 3 shows the computational time required to solve the problem (see the FC line). As we can see, in comparison with the previous branch and bound approach, the inclusion of constraint propagation techniques improves the results: it is now possible to solve up to the 20th case in a reasonable computational time. However, modeling the problem as SAT is not so easy, and requires some additional modeling skills.

3.4. Distributed approaches. Synchronous backtracking

In a distributed constraint-based approach, variables and constraints are distributed among automated agents (Yokoo et al., 1998; Yokoo and Hirayama, 2000). In Yokoo and Hirayama (2000), the authors put forward a formalization and algorithms for solving distributed CSPs, classifying them as either synchronous or asynchronous backtracking and differentiating them from previous centralized methods. In a centralized approach, a single agent stores all of the information about variables, their domains and constraints, and solves the problem using classic constraint algorithms (such as the chronological and branch and bound methods). In a distributed approach, a set of agents are committed to a set of subproblems in order to solve the global problem. These agents can work synchronously or asynchronously. In a synchronous approach, agents agree on an instantiation order for their variables. Each agent, receiving a partial solution from the previous agent, instantiates its variables based on the constraints it is aware of. If it finds such a value, it adds it to the partial solution and passes it on to the next agent. Otherwise, it sends a backtrack message to the previous agent (Yokoo et al., 1998; Hirayama and Yokoo, 1997). In an asynchronous approach, each agent runs concurrently and asynchronously (see next section).

In this section we deal with the synchronous approach following the distributed framework of Salido and Barber (2006), in which the problem is partitioned into k subproblems that are as independent as possible. The subproblems are classified in the appropriate order and are solved concurrently.
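The synchronous scheme can be sketched in a few lines. This is a simplified illustration (not the paper's implementation), assuming each agent owns one variable and checks only the constraints it is aware of; returning None plays the role of the backtrack message:

```python
# Sketch of synchronous backtracking: agents instantiate their variables in a
# fixed order, each extending the partial solution it receives or backtracking.
def sync_backtrack(agents, partial=None, idx=0):
    """agents: list of (domain, consistent) pairs, where consistent(value,
    partial) checks the constraints that agent knows about."""
    partial = partial or []
    if idx == len(agents):
        return partial                  # every agent has instantiated its variable
    domain, consistent = agents[idx]
    for value in domain:
        if consistent(value, partial):
            result = sync_backtrack(agents, partial + [value], idx + 1)
            if result is not None:
                return result
    return None                         # "backtrack" message to the previous agent

# Toy instance: two agents whose values must differ.
agents = [
    ([0, 1], lambda v, p: True),
    ([0, 1], lambda v, p: v != p[0]),
]
print(sync_backtrack(agents))   # [0, 1]
```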

3.4.1. Problem modeling

The distribution of the road passenger transportation problem can be carried out by means of problem topological properties (number of variables) or by means of problem size (number of constraints). In this study, we tested both kinds of distributions:

• DCSP by constraints. Each agent concentrates on constraints of the same type. Thus, the first agent is committed to solving the CSP restricted to allocating and overlapping constraints; the second agent works on the constraints related to maximum driving time; the third agent works on the journey length constraints; and the fourth agent works with the constraints related to the maximum driving time per two weeks.

2 FC was obtained from CON'FLEX. It can be found at: http://www.inra.fr/bia/T/rellier/Logiciels/conflex/.

• DCSP by drivers. Each agent is committed to assigning values to the variables related to a driver. Thus, many related variables are grouped together in the same subproblem.

Regarding the method used in each isolated agent, we followed the FC method from the previous section.

3.4.2. Results

Fig. 3 shows the computational time required to solve the problem using the synchronous backtracking approach, both in the DCSP by constraints and the DCSP by drivers distributions (see the SB-C and SB-D lines, respectively). As can be observed in the plot, the run-time in all instances was better in the distributed models than in the corresponding centralized FC model (see FC line). It must be taken into account that the number of variables and constraints in the proposed model is large and the complexity grows exponentially. However, the distributed models behaved more consistently than the centralized model in all instances.

Fig. 3 also shows that synchronous backtracking by constraints exhibited better behavior than synchronous backtracking by drivers. This is due to the fact that in the constraint distribution there are fewer subproblems (four) than in the driver distribution (as many subproblems as drivers). Therefore, if there are many drivers, communication between agents becomes costly and the computational cost increases. Obviously, the computational memory required in the distributed model is lower than in the previous FC case: although each agent performs its own FC process, the problem size is distributed between all agents.

3.5. Distributed approach. Asynchronous backtracking

Asynchronous backtracking was one of the first algorithms to cope with distributed CSPs (Yokoo and Hirayama, 2000; Yokoo et al., 1998). In this method, agents act asynchronously without any global control. Each agent instantiates a single variable and communicates its value to other agents via connecting links. Two agents are neighbors if there are constraints between the variables they control. Thus, each agent is only aware of the constraints associated to the variables it controls. One of the requirements of the system, then, is to have a problem in which locality holds; that is, the set of variables can be partitioned in such a way that constraints can be managed locally.

Asynchronous backtracking has been generalized to solve DCOPs, which include an objective function so that agents coordinate in order to optimize it (Modi et al., 2005). One of the algorithms proposed in the literature is ADOPT, in which the strategy used to find the solution is called opportunistic best-first search (Modi et al., 2005). In this strategy, agents are organized into a tree structure, which establishes a priority among them (parents over children); that is, constraints are only allowed between an agent and its ancestors or descendants. The priority is used to guide the backtracking process from the lower levels to the upper ones. All agents begin by setting their variables to an initial value and sending this assignment to the lower levels. When an agent cannot perform an assignment, it asynchronously sends a nogood message to its ancestors.

3.5.1. Problem modeling

In this model we followed the service approach, so each variable represents a service and is assigned to an agent. The cost


function is distributed among variables. So, given a pair of variables s_i, s_j, the cost is represented as a "soft" constraint of compatible (good) values and their cost. For example, constraint c_{i,j} = (k, l, m) means that when s_i = k then variable s_j can be set to l, and the cost of this assignment is m. Thus, the domain of the variables cannot directly be the drivers as expected, since the cost associated to a "soft" constraint does not only depend on the two services (variables) but on all of the services assigned to the same driver on a journey. Therefore the domain of the variables is actually in [1, ..., nc], where nc is the number of combinations of journeys assigned to drivers (nc < journeys × drivers). If variable s_i is set to j, it means that service i appears in the solution in the j-th assignment (journey and driver).

"Hard" constraints (the primal constraints in our problem definition) are represented by an infinite cost. The total number of constraints required is n² × nc, where n is the number of variables (services) and nc is the number of journey combinations, as above.

3.5.2. Results

Thanks to the fact that ADOPT can be used under the GNU licence, we have had the opportunity to test it on our problem. We could not test the system for more than four variables, as shown in Fig. 3 (see AB line). The number of constraints requiring specification is huge, and so the algorithm rapidly gets overwhelmed. These results are not surprising, since ADOPT was originally designed to deal with CSP instead of DCOP problems. Recent modifications to the algorithm, such as Ali et al. (2005), could improve the results.

3.6. Mathematical optimization. Mixed integer programming

Mixed integer programming (MIP) is probably the most important technique in the field of Operational Research (Maroto et al., 2003; Wolsey and Nemhauser, 1999). In this technique, problems are represented by mathematical models in which the objective function is linear and the constraints are given by linear equations and inequalities, so variables are necessarily numerical. If the domain of the variables is integer, we are dealing with integer programming. When dealing with both integer and real variables, MIP methods are required. Furthermore, if the relationship among variables cannot be expressed by a linear function, then non-linear programming methods are necessary (Cooper and Farhangian, 1985). Other approaches, such as stochastic programming, can also be found in the literature. See Orden (1993) for a review.

3.6.1. Problem modeling

In order to model our problem in the MIP paradigm, all information about the problem should be known. In this sense, intervening services should be known in advance, in contrast to the service approach of the branch and bound or chronological backtracking methods, where they can be generated while solving the problem. Since the journey approach provides a complete yet simple formulation, we have adopted it for modeling the problem in linear programming. The variables required to model our problem are the following:

• Data: drivers, journeys and services.

• Parameters: cost of the journeys per driver (co), and services included in journeys (mat).

• Decision variables: journeys included in the solution (sol), and drivers assigned to those journeys (solution). The linear programming method should provide the values for these variables, so they are the solution to the problem. These decision variables are binary (for example, if journey j is included in the solution, the value of the corresponding decision variable is 1, otherwise 0). Consequently, decision variables are integer values ∈ {0, 1}.

3 ILOG CPLEX: http://www.ilog.com/products/cplex/.
4 GLPK, GNU Linear Programming Kit, Free Software Foundation, http://www.gnu.org/software/glpk/.

• Objective function to be minimized:

  Σ_{d ∈ D, j ∈ J} (co[j, d] · solution[j, d]).

• Constraints:

(1) Each journey should appear at most once in the solution:

    ∀j ∈ J: Σ_{d ∈ D} solution[j, d] ≤ 1.

(2) Each driver should be assigned at most once to a journey:

    ∀d ∈ D: Σ_{j ∈ J} solution[j, d] ≤ 1.

(3) Each service should appear exactly once in the solution (so journeys including the same services are incompatible):

    ∀s ∈ S: Σ_{j ∈ J} mat[j, s] · sol[j] = 1.

(4) Each journey included in the solution should have a driver assigned:

    ∀j ∈ J: Σ_{d ∈ D} solution[j, d] = sol[j].
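The semantics of this model can be checked on a toy instance. The following pure-Python sketch is not the authors' GLPK model: it brute-forces a tiny invented instance (co, mat and the sizes are made up) and applies constraints (1)-(4) literally, which is enough to see how the decision variables interact.

```python
# Brute-force check of the journey-approach model on a toy instance.
from itertools import product

J, D, S = 2, 2, 2                    # journeys, drivers, services
co = [[3, 5], [4, 2]]                # co[j][d]: cost of journey j with driver d
mat = [[1, 0], [0, 1]]              # mat[j][s]: 1 iff service s is in journey j

best, best_cost = None, float("inf")
for bits in product([0, 1], repeat=J * D):
    x = [[bits[j * D + d] for d in range(D)] for j in range(J)]   # solution[j][d]
    sol = [int(sum(x[j]) > 0) for j in range(J)]                  # sol[j]
    ok = (all(sum(x[j]) <= 1 for j in range(J))                         # (1)
          and all(sum(x[j][d] for j in range(J)) <= 1 for d in range(D))  # (2)
          and all(sum(mat[j][s] * sol[j] for j in range(J)) == 1
                  for s in range(S))                                    # (3)
          and all(sum(x[j]) == sol[j] for j in range(J)))               # (4)
    if ok:
        cost = sum(co[j][d] * x[j][d] for j in range(J) for d in range(D))
        if cost < best_cost:
            best, best_cost = x, cost

print(best_cost, best)   # 5 [[1, 0], [0, 1]]
```

Here both journeys must enter the solution (each covers one service), and the cheapest driver pairing is journey 0 with driver 0 and journey 1 with driver 1. A real solver replaces the enumeration, but the constraints are identical.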

It is interesting to note, then, that MIP allows the definition of the constraints outside the code of the algorithm, unlike the branch and bound methods, in which the constraints are coded. However, it requires a lot of data (journeys, matrix of journeys and costs, matrix of journeys and services). So, for a large problem, a pre-processing step is required in order to generate all of these data.

3.6.2. Results

Given the popularity of MIP and the benefits it provides to industries and companies, several tools have been developed. In particular, CPLEX3 is one of the best tools available off the shelf. Recently, GLPK4 has been developed under the GNU licence with an efficiency close to CPLEX. We have selected GLPK to carry out our experiments.

Fig. 3 shows the computational time required to find the optimal solution using GLPK (see MIP line). MIP is able to solve all of the problems in our workbench, up to the last one, in 32 s. In addition to this time, a pre-processing time is required to generate the journeys and other GLPK inputs (data, parameters, variables), adding up to a total of 66 s for the 70th case. We have carefully analyzed these results to understand such good behavior. In order to solve the integer programming problem, GLPK first finds a relaxed solution to the problem with continuous variables, and then it finds the closest integer solution to the original problem. In our case, we have noticed that the continuous solution almost always coincides with the integer one, and this is why such efficient results have been obtained.

Nevertheless, we should also analyze other features of MIP, for example, the amount of memory required to solve the problem (space complexity). Fig. 4 shows the memory required in each experiment. If we need to extend the problem, we need to revise our model, defining new parameters and data that could increase the space complexity and pre-processing time. In addition, the interpretation of both the model and the solution provided by this


Fig. 4. Memory in MB required to store the input files for GLPK (cases 1-70; memory ranges from 0 to 160 MB).


technique is not always straightforward when the problem becomes more complex.

3.7. Constraint logic programming

CLP is a general purpose paradigm that tackles complex problems by combining the power of constraint solvers with the versatility and declarativity of logic programming (Jaffar and Maher, 1994; Hentenryck, 1989). Unfortunately, as we will see, this "general purpose" nature becomes a weakness for our problem, in comparison to specialized solvers like MIP techniques. The most popular language that implements the CLP paradigm is Prolog, although there are several other good approaches (see Fernandez and Hill, 2000, for a comparative survey). From among them, we have selected GProlog,5 which includes a finite domain constraint solver whose performance is comparable to other commercial systems.6

The classic scheme of a CLP program consists of first creating the variables of our model and assigning them a domain, then constraining the variables depending on the requirements of our problem, and finally asking for an assignment to the variables, in accordance with their domains, which satisfies all the constraints. Intrinsic Prolog backtracking allows us to enumerate all of the solutions. In our case, the variable assignment part must deal not only with the constraints but also with an optimization requirement.

3.7.1. Problem modeling

We decided to use the journey approach because the translation from the MIP implementation is quite immediate. In this approach the possible journeys and costs are already calculated. We could have used the service approach if our problem were subject to further new constraint introductions or modifications, but for the purpose of our exploratory work we believe that the journey approach is suitable. As we will see, the Prolog code that we propose is very concise and comprehensible.

5 GNU Prolog, Free Software Foundation, http://www.gnu.org/software/gprolog/.
6 We also tried SICStus Prolog, but GProlog performance was slightly better.

First, we define a list L = [X_1, ..., X_nc] of variables with domain {0, 1}, which correspond to all the combinations of journeys assigned to drivers. X_i = 1 means that the combination has been selected, while X_i = 0 means that it has not. Then, the variable Cost is the sum of the X_i · c_i's, where c_i is the pre-calculated cost associated to the journey and the driver of the i-th combination. So the problem code starts as follows:

transports([Cost|L]) :-
    L = [X1, ..., Xnc],
    fd_domain(L, 0, 1),
    Cost #= c1*X1 + ... + cnc*Xnc,
    ...

We have to impose the constraints of the journey approach model on the variables. First of all, we require each driver to be able to do at most one journey: for each driver d we take the variables X_d^1, ..., X_d^{dj}, corresponding to all their possible dj journeys, and force them all to be equal to 0 except one, which can be equal to 1. To do so we use the finite domain predicate fd_atmost(N, List, V), which posts the constraint that at most N values of List are equal to V: fd_atmost(1, [X_d^1, ..., X_d^{dj}], 1).

Now, we require that each service be done by exactly one of the journeys proposed by the solution: for each service s, we take the variables X_s^1, ..., X_s^{st}, corresponding to all the st journeys that include the service, and force exactly one of them to be equal to 1. To do so we use the finite domain predicate fd_exactly(N, List, V), which posts the constraint that exactly N variables of List are equal to the value V. That is, fd_exactly(1, [X_s^1, ..., X_s^{st}], 1).

Finally, we ask the GProlog constraint solver to give an assignment to the list of variables L so that it minimizes the value of the variable Cost, i.e. the cost of the journeys. We do so using the following finite domain predicates:

• fd_labeling(Vars, Options), which assigns a value to each variable of the list Vars satisfying all the constraints that the variables may have. The Options parameter allows us to control the way in which the assignments are obtained. In our application, the option value_method(max), which forces the solver to enumerate the values from greater to smaller, has been empirically crucial;

• and the optimization predicate fd_minimize(Goal, X), which repeatedly calls Goal to find a value that minimizes the variable X. In fact, this predicate uses a branch and bound algorithm with restart.

The resulting combination of both predicates is the following: fd_minimize(fd_labeling(L, [value_method(max)]), Cost).


3.7.2. Results

Results with Prolog have not been very encouraging. The computational time obtained for the tests led us to stop the experimentation at the 10th example (see CLP line in Fig. 3). It is one of the worst obtained in our experiment. In addition, Prolog only provides a solution when it finishes; that is, it does not exhibit anytime behavior.

In our opinion, CLP could be a helpful tool for a dynamic problem where constraints evolve and versatility is a crucial point. In a study like this, where the constraints and the model are fixed, specialized tools like the MIP system take advantage of their specialization, with a very powerful (linear) constraint solver, and beat general purpose paradigms like CLP.

Fig. 5. Basic Tabu search procedure.

4. Metaheuristics

The word metaheuristic was coined by Glover (1986) and its meaning has been changing ever since. According to the original definition, metaheuristics are methods that combine local improvement procedures and higher level strategies to create a process capable of escaping from local optima and performing a robust search of a solution space (Glover and Kochenberger, 2003). Nowadays, metaheuristics can be seen as intelligent strategies to design or improve heuristic procedures with high performance, generally combining constructive methods, local search methods, and concepts coming from Artificial Intelligence, biological evolution and statistics (Melian et al., 2003). In this paper we have analyzed GRASP and Tabu search (TS) as representative methods. Genetic algorithms can also be found in this category, depending on the information source consulted.

4.1. GRASP

Greedy Randomized Adaptive Search Procedure (GRASP) was developed by Feo and Resende (1995). It is an iterative procedure with two phases: the first constructs an initial solution using a randomized greedy function; the second improves its quality by applying a local search procedure. The best overall solution is kept as the result.

In the first phase, a feasible solution is constructed iteratively. All of the elements are ordered in a candidate list with respect to a greedy function, which measures the benefit of selecting each element. The list of best candidates is the restricted candidate list (RCL). The factor α determines the quality of the solutions in the RCL: if α = 0 only the best candidate is in the RCL, making the algorithm pure greedy; on the other hand, if α = 1 all of the feasible candidates are on the list. One candidate of the list is chosen randomly. The heuristic is said to be adaptive because the benefit associated with every element is updated after the selection of the candidate at every iteration, to reflect the changes brought about by the selection. Using this technique, different solutions are obtained at each GRASP iteration.

The solutions generated by the construction phase are not guaranteed to be locally optimal with respect to a simple neighborhood; therefore, it is advisable to apply a local search to try to improve the constructed solution. A local search algorithm works iteratively, replacing the current solution with a better one in the neighborhood. It finishes when no better solution is found.
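The construction phase with its RCL can be sketched generically. This is an illustrative toy (not the paper's service/driver implementation): the greedy value is just the element itself, lower is better, and the data are invented.

```python
# Sketch of the GRASP construction phase: build an RCL from the alpha
# threshold, then pick a member of the RCL uniformly at random.
import random

def grasp_construct(candidates, greedy_value, alpha, rng):
    """candidates: elements to place; greedy_value(c, partial): lower = better.
    alpha = 0 -> pure greedy; alpha = 1 -> any feasible candidate may be picked."""
    solution = []
    remaining = list(candidates)
    while remaining:
        vals = {c: greedy_value(c, solution) for c in remaining}
        lo, hi = min(vals.values()), max(vals.values())
        threshold = lo + alpha * (hi - lo)
        rcl = [c for c in remaining if vals[c] <= threshold]   # restricted list
        pick = rng.choice(rcl)
        solution.append(pick)
        remaining.remove(pick)
    return solution

rng = random.Random(0)
# With alpha = 0 the construction is deterministic: cheapest element first.
out = grasp_construct([4, 1, 3], lambda c, s: c, alpha=0.0, rng=rng)
print(out)   # [1, 3, 4]
```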

4.2. Problem modeling

We have formulated the problem using the service approach. For the constructive phase, all of the services are ordered by departure time. A list of all possible drivers that can perform a service is made, and the cost of assigning the service to each driver is calculated. The RCL is built according to the randomization factor α, which was set to α = 0.1 after 20 trials. A driver from the RCL is selected randomly and all of the variables are updated. We follow this procedure until all of the services have a driver assigned to them.

The local search attempts to reduce the cost by reducing the number of assigned drivers. In order to do this, the algorithm looks for drivers that perform only one service and finds out whether another driver is able to do it.

4.3. Results

GRASP solves all of the problems on the workbench, so the solutions found by this technique satisfy the constraints (see GRASP line in Fig. 3). The objective function value is not optimal, but it is close enough. We have measured the average percentage deviation from the optimal solution over the 200,000 solutions generated: 5.07% (with σ = 0.015). Similar results were obtained when relaxing the problem or adding harder constraints. In all cases, the required computational memory is not significant.

4.4. Tabu search

TS was introduced by Glover (1986) and its main characteristic is the use of adaptive memory, which allows both the exploration of different regions of the search space (short term memory) and the intensification of the search in promising areas (long term memory). Advanced features can be found in Melian et al. (2003).

When implementing a basic TS procedure, the first step is to construct an initial solution (s). Then, a neighborhood N(s) is constructed to find adjacent solutions that can be reached from the current one. A move leads from one solution to the next in the neighborhood. The Tabu structure records a subset of the possible moves in the neighborhood as forbidden (Tabu) because they were made in the recent past. Therefore, when doing the local search, a move that is not Tabu is chosen.

The move can improve or worsen the value of the objective function, and the best overall solution is kept as the result. Fig. 5 shows a pseudocode of this procedure. A comprehensive examination of this methodology and its advanced features can be found in Glover and Laguna (1997).

4.4.1. Problem modeling

We have formulated the problem using the service approach. The initial solution is constructed using the same procedure described in Section 4.1 with α = 0. One of the key points is the


definition of the neighborhood for the problem under consideration. We propose an exchange neighborhood, i.e. remove a driver assigned to one service and add a new one to cover it. The neighborhood considers all solutions that can be obtained from the current solution by the exchange of drivers.

In this implementation, a move consists of changing one service from the current driver to a new driver. The Tabu list keeps a record of the services moved. To make a move, drivers are ordered increasingly by the number of services they have been assigned. The procedure tries to find an active driver that can do the service. A move can be carried out if all the constraints for the new driver are satisfied after the change and the move is not on the Tabu list. The value of each solution is the total cost of the drivers.

The size of the Tabu list was determined experimentally with 20 instances, and a size of n = 90% of the number of services in the problem was chosen, which means that a move stays on the Tabu list for n iterations. For each problem, one initial solution was built and 200,000 moves were performed. The best overall solution is presented as the solution of the problem.

4.4.2. Results

The solutions found by this TS implementation are a little more expensive than the ones obtained by the other metaheuristic method (GRASP): an average deviation of 8.66% (with σ = 0.30) over the optimal cost. This is due to the simple neighborhood and move design for the search. However, the algorithm has a lower computational cost, as shown in Fig. 3 (see Tabu line). In addition, when using a harder scenario for our problem, it has less impact on the total computational time than GRASP has. Again, the required computational memory is not significant.

4.5. Combinatorial auctions

Auctions have been studied in Economics as a mechanism for dealing with shared resources. Among the different types of auctions, combinatorial auctions allow bidders to submit bids on bundles or packages of items (Kalagnanam and Parkes, 2004; Cramton et al., 2006). Given a set of items I = {it_1, ..., it_n}, each bid b_j is characterized by the subset of items g(b_j) ⊆ I that the bidder (agent) requests, and its price, p(b_j). Formally: b_j = ⟨g(b_j), p(b_j)⟩.

The auctioneer is faced with a set of bids (price offers) for various bundles of goods, and his aim is to allocate the goods in a way that maximizes his revenue; this is called the winner determination problem (WDP) (Leyton-Brown, 2003). The WDP is known to be an NP-complete problem, and there are many approaches that can be used to solve it. On the one hand, there are specific algorithms created exclusively for this purpose, CABOB (Sandholm, 2002) having proved to be one of the best. On the other hand, the WDP can be modeled directly as a MIP and solved using a generic MIP solver. Due to the efficiency of current solvers like GLPK (free) or CPLEX (commercial), the research community has now mostly converged towards using MIP solvers as the default approach for solving the WDP.

4.5.1. Problem modeling

In accordance with our problem, we define drivers as the bidders, who are trying to buy services. This view of the problem means that driver constraints are managed locally by each driver, and further extensions of the problem are facilitated, such as adding drivers' preferences on services.

In addition, there is an auctioneer agent that decides upon the allocation by solving the WDP. First of all, the auctioneer opens the auction by announcing the services to be deployed. Then, drivers submit their bids according to their constraints. And lastly, the auctioneer provides the final allocation.

Each driver generates as many bids as possible, according to all of the possible combinations of services (journeys) he can accomplish. The services of a bid (g(b_i)) make up a journey, and the price (p(b_i)) is consistently related to the cost of the journey. Regarding prices, since the WDP consists of maximizing the outcome of the auction, we develop a mechanism to provide an inverse-like cost function. If c(b_i) is the cost of the g(b_i) services of a bid (i.e. journey), it is not enough to define p(b_i) = 1/c(b_i), due to price additivity, since 1/c(b_i) + 1/c(b_j) is not the same as 1/(c(b_i) + c(b_j)). Given this situation, we have defined the inverse-like function as follows:

p(b_i) = length(g(b_i)) · maxCost − c(b_i),     (4)

where maxCost is a value higher than the cost of any bid.
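Equation (4) is easy to check numerically. In the following sketch, MAX_COST and the bid data are invented for illustration; the point is that, because each service contributes one fixed maxCost term, allocations covering the same services are compared purely by (negated) cost, so maximizing total price minimizes total cost.

```python
# Numeric check of the inverse-like price: p(b) = |g(b)| * maxCost - c(b).
MAX_COST = 1000          # assumed bound, larger than any single bid's cost

def price(services, cost):
    return len(services) * MAX_COST - cost

# Two single-service bids vs. one bundle covering the same two services:
p_split = price({"s1"}, 30) + price({"s2"}, 40)   # (1000-30) + (1000-40)
p_bundle = price({"s1", "s2"}, 60)                # 2*1000 - 60
print(p_split, p_bundle)   # 1930 1940 -> cheaper total cost wins the auction
```

A naive p(b) = 1/c(b) would not have this property, since reciprocals are not additive across bids, which is exactly the pitfall described above.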

4.5.2. Results

Fig. 3 shows the computational time required to find the optimal solution (see CA line); this time includes bid generation and the time required to solve the WDP (including MIP model generation and the GLPK time). We are able to solve up to 45 cases before the problem becomes intractable due to memory constraints. Even though MIP was efficient at solving the original problem formulated with the journey approach (see Section 3.6), we had already detected an exponential increase in the memory required by the method (see Fig. 4). In the combinatorial auction approach, the number of journeys is multiplied by the number of drivers, so the MIP model that results from the WDP is n times larger than the original MIP formulation. As a consequence, GLPK runs out of memory after the 45th problem.

5. Bioinspired approaches

Evolutionary and bioinspired approaches are based on using analogies with natural or social systems to design non-deterministic and heuristic methods for searching, learning, behavior, etc. Evolutionary algorithms make use of computational models of the natural processes of evolution of populations of individuals through selection and reproduction. These approaches include GA, population-based heuristics, memetic algorithms related to cultural evolution, etc. Recently, another group of proposals inspired by biological models has arisen, such as those based on colonies of ants, societies or clusters (hives), the immune system, self-organization or artificial life.

Evolutionary and bioinspired approaches allow a great variety of optimization and search problems in complex spaces to be addressed. Due to the scope of these methods, they are also related to metaheuristic methods, such as Tabu search. For our exploratory work, we have selected GA and ant colony methods.

5.1. Genetic algorithms

GA were first introduced by John Holland in 1975, and are inspired by evolutionary rules. The main idea is that during evolution the fittest individuals are more likely to survive and reproduce, while the least fit will be eliminated.

The main features of a GA are the codification of the n individuals (population), and the fitness, selection, crossover and mutation functions. The initial population (p) is usually made up of feasible solutions. Each individual of the population can be seen as a solution, and the genetic information can be expressed as a binary vector where the solution is encoded. The evaluation function assigns a value (fitness) to each individual, usually as a


measure of its quality. The Selection Function determines which individuals will generate the new ones. Different types of selection, directly or indirectly, use the fitness value to guide the procedure towards better solutions (Alba et al., 2003). The Crossover Operator interchanges the information between parents, and the Mutation Operator modifies the information of the individual in order to introduce diversity into the population.

Batch update completely replaces the initial population with the new population, and is only performed when all of the individuals have been generated. To prevent the loss of the best individual, batch update usually transfers the best individual of the initial population into the new generation (elitism). The process of generating the new population is repeated either for a finite number of iterations or until some given condition holds.

5.1.1. Problem modeling

We model the problem using the journey approach. This allows us to divide the main problem into two subproblems: firstly, the assignment of the journeys to the solution and, secondly, the allocation of drivers to journeys. For the first subproblem we used a GA, and for the second one a greedy function. To solve the first subproblem, each individual of the population codifies the set of journeys, so individuals have as many bits as there are journeys in the problem. If the bit value is 1, the journey is in the solution, otherwise 0. To build the initial population, journeys are selected randomly until all of the services are covered.

The evaluation function counts the number of journeys that each individual has, assigning that value as the individual's fitness. The selection function is an inverted roulette, where individuals with fewer journeys are more likely to be selected for crossover.

As the crossover operator we used the fusion operator proposed in Beasley and Chu (1996), which produces only one offspring and selects the offspring bit values based on the fitness of the parents. Let f_P1 and f_P2 be the fitness of the parents P1 and P2, respectively, and let C be the offspring. Then C is generated as follows: for all i = 1, ..., n:

(1) if P1[i] = P2[i], then C[i] = P1[i] = P2[i];
(2) if P1[i] ≠ P2[i], then
    (a) C[i] = P1[i] with probability p = f_P1 / (f_P1 + f_P2),
    (b) C[i] = P2[i] with probability 1 − p.
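A minimal sketch of this fusion rule; the probability p = f_P1 / (f_P1 + f_P2) is our reading of the expression above, and the function name is illustrative:

```python
import random

def fusion_crossover(p1, p2, f1, f2):
    """Fusion operator in the style of Beasley and Chu (1996): produces
    one offspring whose bits are inherited probabilistically according to
    the parents' fitness (p = f1 / (f1 + f2) is an assumption here)."""
    child = []
    for b1, b2 in zip(p1, p2):
        if b1 == b2:
            # Rule (1): bits on which the parents agree are copied directly.
            child.append(b1)
        else:
            # Rule (2): disagreeing bits are inherited probabilistically.
            p = f1 / (f1 + f2)
            child.append(b1 if random.random() < p else b2)
    return child
```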

The mutation function is the standard bit-flip. After crossover and mutation, individuals may violate the problem constraints (i.e. some services are not covered). A repair operator was designed to make all solutions feasible. Finally, we chose a batch population update using the elitism operator.

The individual returned by the GA only contains the journeys in the solution. Thus, we eliminate the services that are covered by more than one journey by randomly selecting, for each of them, a single covering journey to keep in the solution. Finally, to assign the drivers, a greedy function finds the lowest-cost driver for the first journey, the second lowest for the second journey, and so on. This is possible thanks to the journeys' structure, which follows a decreasing pattern: the first journeys have more services than the last ones.
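The greedy allocation step can be sketched as follows; the data structures (a list of journeys already sorted by decreasing number of services, and a map from drivers to costs) are illustrative assumptions, not the paper's implementation:

```python
def assign_drivers(journeys, driver_costs):
    """Greedy allocation: the i-th journey in the (decreasing-pattern)
    list takes the i-th cheapest driver still available.

    'journeys' is assumed sorted so the first ones contain more services;
    'driver_costs' maps driver -> cost. Both names are illustrative.
    """
    available = sorted(driver_costs, key=driver_costs.get)  # cheapest first
    return {journey: driver for journey, driver in zip(journeys, available)}
```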

Fig. 6. Basic ACO pseudocode.

5.1.2. Results

First of all, we ran twenty experiments to determine a 90% crossover probability and a 1% mutation probability. We set the number of individuals and the number of generations to 100. Regarding the workbench, and in order to take advantage of the random nature of the algorithm, each problem was run 20 times, meaning that for each problem we generated 200,000 individuals. The second part of the solution is applied to the individuals in the latest generations that have the best fitness, and the best overall solution is presented as the solution of the problem.

The computing time using this technique, as shown in Fig. 3 (see GA line), improves on the results of all of the previous approaches except MIP and GRASP. In terms of the solution costs, however, the GA systematically finds better solutions than Tabu search or GRASP: the average deviation over the optimal solution is 4.09% (σ = 0.023).

5.2. Ant colony optimization

Another bioinspired technique for solving combinatorial optimization problems is ant colony optimization (ACO). This technique was proposed by Dorigo and others (Dorigo et al., 1999; Dorigo and Di Caro, 1999) and is inspired by the behavior of ants searching for food. The ACO algorithm consists of a colony of ants that looks for solutions by performing randomized walks on a completely connected graph G_C = (C, L). The nodes belong to a finite set of components C = {c1, c2, ..., cn}, and every candidate solution x is a sequence of these components, x = (ci, cj, ..., ch, ...). G_C is called the construction graph and the elements of L are called connections. Each connection has a value τ(i,j) that represents the suitability of using that connection.

We can see the pseudocode of the ACO algorithm in Fig. 6. Three main steps are considered. First, ConstructAntsSolutions consists of creating the ant colony and sending all of its individuals to find solutions in the graph (problem). Ants walk the graph. An ant at node i that has walked a partial solution x_h decides the next node to visit, c_{h+1} = j, based on the probability Pr(c_{h+1} = j | x_h). This probability is computed as follows:

Pr(c_{h+1} = j | x_h) = τ_ij^α / Σ_{(i,l) ∈ N_i^k} τ_il^α   if (i,j) ∈ N_i^k,
                      = 0                                    otherwise,   (5)

where

• N_i^k is the neighborhood of ant k at node i; the set consists of all the nodes that ant k can visit after node i;
• τ(i,j) is the connection strength of node i with its neighbors (pheromone level); it is initialized randomly and updated throughout the process;
• α is an algorithm parameter.
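Eq. (5) can be sketched as follows, with the (constraint-filtered) neighborhood passed in explicitly; the data structures and names are illustrative assumptions:

```python
def transition_probabilities(tau, i, neighborhood, alpha=2.0):
    """Probability of moving from node i to each neighbor j per Eq. (5):
    Pr(j) = tau[i][j]**alpha / sum of tau[i][l]**alpha over the neighborhood.

    'tau' is a nested dict of pheromone levels; nodes outside the
    neighborhood implicitly get probability 0.
    """
    weights = {j: tau[i][j] ** alpha for j in neighborhood}
    total = sum(weights.values())
    return {j: w / total for j, w in weights.items()}
```

An ant would then sample its next node from this distribution.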

The ants stop walking the graph when a feasible solution has been constructed or when the neighborhood is empty. In the second case, the ant is useless and is not taken into account in the following steps.

The second step of the algorithm in Fig. 6 consists of updating τ(i,j). On the one hand, the values of the connections that are part of a solution are increased. On the other hand, all connections that do not participate in a solution are decreased (pheromone evaporation). The objective of pheromone evaporation is to avoid premature convergence of the algorithm to a suboptimal solution. Finally, several centralized heuristics can be applied, when required, in the last step of the algorithm, DaemonActions.


These three steps are repeated in a loop until a termination condition is met. This condition could be, among other things, a given number of iterations or a convergence criterion. Then, the walk corresponding to the highest τ_ij values constitutes the best solution.

5.2.1. Problem modeling

To solve the problem studied in this article using ACO, we consider the construction graph G_C where C = S ∪ D ∪ {dummyNode}, which means that the nodes are drivers and services, and there is an additional initial node that is neither a service nor a driver. The graph is fully connected, so there is a label l_{i,j} for every pair of nodes ci, cj. An ant walking over the link l_{(i,j)} means that service i is assigned to driver j.

Regarding the first step of the algorithm, ConstructAntsSolutions, all ants are created at the dummy initial node. Ants decide the next node to visit according to Eq. (5) with the following modification: the probability becomes 0 when visiting a neighbor node would break some problem constraint. Thus, if i is the initial node, the neighborhood is all the services; if i is a service node, the neighborhood is all the drivers that can attend to that service without breaking the problem constraints; finally, if i is a driver node, the neighborhood is the service nodes that have not been visited by the ant.

For the second step of the algorithm, UpdatePheromones, two updating functions were defined. First, when each of the ants finds a feasible solution in the previous step of the algorithm, τ_ij is updated according to the cost. Regarding pheromone evaporation, the following function was used:

τ_ij ← τ_ij · (1 − ρ),   (6)

where ρ is the evaporation factor and is another parameter of the algorithm. Finally, no special methods have been implemented for the third step of the algorithm (DaemonActions).
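The two updating functions can be sketched together as follows; the reinforcement rule deposit/cost is an illustrative choice, since the text only states that τ_ij is updated "according to the cost":

```python
def update_pheromones(tau, solutions, rho=0.1, deposit=1.0):
    """Pheromone update: Eq. (6) evaporation on every connection, plus a
    cost-based reinforcement on connections used by feasible solutions.

    'solutions' is a list of (edges, cost) pairs; the deposit / cost
    reinforcement is an assumption, not the paper's exact rule.
    """
    # Evaporation on every connection: tau <- tau * (1 - rho), Eq. (6).
    for i in tau:
        for j in tau[i]:
            tau[i][j] *= (1.0 - rho)
    # Reinforce connections that are part of a feasible solution:
    # cheaper solutions deposit more pheromone.
    for edges, cost in solutions:
        for (i, j) in edges:
            tau[i][j] += deposit / cost
    return tau
```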

5.2.2. Results

Based on a series of experiments, the parameters used were α = 2, ρ = 0.1 and a colony size equal to 100. Since the algorithm is probabilistic, each problem was run 20 times and the best overall solution is presented as the solution to the problem. The time required for the execution of the algorithm is shown in Fig. 3 (see Ants line). We have not found the optimal solution in any of the cases, and the solutions found are much worse than those of the metaheuristic and GA approaches. This could be due to stagnation: the undesirable situation in which all ants repeatedly construct the same solutions, making any further exploration of the search space impossible. Recent versions of the algorithm propose several alternatives to avoid this by combining exploitation and exploration (Maniezzo et al., 2004). Regarding the former, ants use past information to choose the next node to visit, finding effective solutions. Exploration, on the other hand, favors the discovery of new paths, trying to avoid stagnation.

6. Discussion

Based on the experience obtained from our experimental testing of the different methods, several factors have been found to be relevant when selecting a technique. In particular, we distinguish the following:

• Modeling expressiveness: whether the methods allow the specification of constraints as part of the program (constraints coded), or whether constraints should be explicitly provided (the declarative way) by means of either compatible variable-value pairs or the use of functions. When constraints must be explicitly provided and the problem is large, some kind of pre-processing step is required to obtain the constraint set, even though the specification language sometimes provides a way to specify them by means of complex expressions.
• Anytime: whether the algorithm can be stopped and its execution resumed, giving the best solution found so far, or not, or only partially (can stop but not resume).
• Time complexity: whether the method as tested is able to solve up to the 70th test case (low), up to the 20th case (medium) or few cases (high).
• Memory complexity: the amount of memory required by the method, to store either constraints or internal data. High means that the method requires a lot of memory, and dynamic memory or other kinds of programming tricks should be used to keep handling memory in an efficient way.
• Parameter tuning: whether the method requires several runs in order to tune the required parameters. In this sense, the label ''Yes'' indicates that with the current parameter estimations the algorithm has not found the best solution.
• Tool: whether there is a free-licence off-the-shelf tool with which to test the problem or not. Note that tool availability could force the problem to be modeled according to the tool requirements. In addition, the tools available are not always the most efficient ones.

Table 1 summarizes the methods analyzed together with a checklist of the properties they exhibit. Regarding computational time, MIP is the one which exhibits the best behavior. Other recent paradigms like Tabu search, GRASP, GA and ACO are also able to deal with the whole workbench with a reasonable computational effort. In Fig. 7 the methods are organized into the three time-complexity categories according to the results obtained in our experiments. This does not mean that the optimal solution can be found with Tabu, GRASP or GA, but rather a quasi-optimal one (see Fig. 8 for a comparison of the cost of the solutions found). Much more effort should be devoted to finding the different parameters that tune these algorithms. In this sense, there is much more uncertainty in the development of the algorithm from an engineering point of view.

Regarding modeling expressiveness, in general, declarative methods are the easiest way to make initial approaches to simple problems. However, when dealing with complex problems with many constraints, search methods (chronological backtracking and branch and bound) and constraint propagation methods have proved to be the easiest way to approach the problem the first time. This has been our case. So, even though better computational times are obtained with MIP, the MIP model was hard to build from scratch, but easier after the approximation achieved with the systematic approaches.

The requirement of defining all constraints explicitly is also a hard limitation of MIP. The amount of memory required, as well as the pre-processing steps needed to generate them, are also issues that should not be forgotten, especially when dealing with large-scale problems. For example, in Lim et al. (2005), heuristic approaches performed better than CPLEX on large-scale problems. In addition, if we wish to contemplate other issues such as delays or exceptions in our problem (see the complete definition of the problem in Lopez, 2005), it is not so easy to imagine how these new constraints could be linearized. Constructive models (like chronological backtracking, branch and bound, etc.), by contrast, can manage these kinds of complex constraints. Moreover, the great importance of adequate modeling in the GRASP, Tabu and GA approaches should be taken into account, as has been shown in the respective sections.


Table 1
Summary of the analysis: techniques and their properties.

Method                        Modeling     Anytime  Time    Memory  Tuning  Tool
Chronological backtracking    Coded        Partial  High    Low     No      No
Branch and bound              Coded        Partial  High    Low     No      No
Mixed integer programming     Declarative  No/yes   Low     High    No      GLPK
Constraint logic programming  Declarative  No       High    Low     No      GProlog
Forward checking              Declarative  No       Medium  Medium  No      ConFlex
Synchronous backtracking      Declarative  No/yes   Medium  Medium  No      No
Asynchronous backtracking     Declarative  No       High    High    No      ADOPT
Tabu                          Coded        Yes      Low     Medium  Yes     No
GRASP                         Coded        Yes      Low     Medium  Yes     No
Combinatorial auctions        Coded        Partial  Medium  High    No      GLPK
Genetic algorithms            Coded        Yes      Low     Low     Yes     No
Ant colony optimization       Coded        No       Low     Medium  Yes     No

[Figure: execution time (ms) plotted against problem size (services, drivers) for the whole workbench, with one line per method (CB, B&B, CLP, AB, FC, SD, CA, GA, Ants, GRASP, MIP, Tabu), grouped into the HIGH, MEDIUM and LOW time-complexity categories.]

Fig. 7. Maximum problems managed by the methods and the associated execution time.

[Figure: solution cost plotted against stop time (ms) for MIP (66389, 4075.15), GRASP (95704, 4329.8), Tabu (3500, 4529.15), GA (292422, 4422.55) and Ants (191797, 6343.35).]

Fig. 8. Time (ms) required to find a solution for the 70th case and the cost of the solution. Only MIP finds the optimal solution.


Conversely, distributed approaches, and particularly the synchronous one, which allows the problem to be partitioned into subproblems of different sizes, achieve an average computational time and have interesting additional features, making their continued exploration worthwhile. First of all, the synchronous distributed approach can take advantage of classic techniques when solving subproblems. In our experimental analysis, each subproblem was solved using FC methods. In addition, they exhibit good modeling expressiveness. Since the approach is backtracking based, it is possible to imagine extending the algorithm to exhibit anytime behavior.

Hybridization is also present in another of the techniques analyzed: combinatorial auctions. We have used MIP techniques to solve the WDP that the combinatorial auction poses. The best current algorithm, CABOB, also uses MIP techniques (CPLEX) to perform estimations in order to prune the search space. Hybridization tries to exploit the advantages of classical methods, and it is a promising direction, since, looking at Table 1, it cannot be claimed that any single method is the best when considering all of the properties.

Finally, we should remark that in this paper only general methods and techniques for solving combinatorial problems have been analyzed. Some of these techniques make use of domain-independent heuristics in the search (for instance, constraint satisfaction techniques). Other methods (for instance, branch and bound techniques) require the design of domain-dependent heuristics. However, we have not gone in depth into the design of specific domain-dependent heuristics. It is clear that adequate domain-dependent heuristics, specifically designed for each kind of problem, can obtain good solutions with good computational times. But a more in-depth study would be required for each of the techniques, and the results would strongly depend on each domain of problems. Therefore, the scope of our results would be too narrow to be of general interest. Our goal in this first approach to our problem is to show how general techniques can be applied to combinatorial problems, and the specific features of each approach, in order to facilitate the selection of the appropriate technique and to dedicate much more effort to solving the complete problem with the most suitable one.

7. Conclusions

When facing a new complex problem, especially in optimization, several techniques are available, either from the field of Operational Research or from Artificial Intelligence. The selection of one technique or another is a critical issue. Even though similar problems have been addressed previously in the literature, slight differences in the problem data can determine whether one technique or another is more suitable. Experimentation with the problem data is often the only way to know, step by step, what the particular challenge of the problem is and what the different techniques offer. This is what we have done when trying to select the most appropriate technique for the road passenger transportation problem.

In this paper we have experimentally analyzed several techniques, from classic ones like search, constraint propagation and mixed integer programming, to newer ones such as metaheuristics, combinatorial auctions and distributed approaches. All of the results were obtained on a comparable PC configuration (Pentium IV 3 GHz, 1 GB of RAM).

The results have shown us that even though mixed integer programming outperforms the rest on non-large-scale problems, the modeling of a problem using this method is not always straightforward. A change in the problem data could require the revision of the complete model of the problem. Conversely, general search methods are more flexible and versatile and allow any new constraint to be defined as convenient, but at a higher computational cost. New distributed search methods could offer a new opportunity for general search methods to improve efficiency, in addition to guaranteeing data privacy. This latter feature is useful for our problem, since driving preferences can be added to the problem in a distributed way. In addition, distributed approaches make it possible to hybridize the solution by merging well-proven classic techniques in the subproblems.

To summarize our results, we provide a feature table in which the main differences among the analyzed methods are presented as guidelines that should help when choosing one technique or another, given a specific problem. We hope that this information will be of use to other engineers and researchers for understanding the kinds of optimization techniques currently available off the shelf.

Acknowledgments

This research work has been partially funded by the Spanish MEC Projects TIN2004-06354-C02-02 and TIN2004-06354-C02-01, and by the DURSI Automation Engineering and Distributed Systems Group, 00296.

References

Abbink, E., van't Wout, J., Huisman, D., 2007. Solving large scale crew scheduling problems by using iterative partitioning. In: ATMOS 2007, pp. 96–106.
Alba, E., Laguna, M., Marti, R., 2003. Metodos evolutivos en problemas de optimizacion. Revista de la Facultad de Ingenieria de la Universidad de Carabobo 10, 80–89.
Ali, S., Koenig, S., Tambe, M., 2005. Preprocessing techniques for accelerating the DCOP algorithm ADOPT. In: AAMAS, pp. 1041–1048.
Apt, K., 2003. Principles of Constraint Programming. Cambridge University Press, Cambridge.
Balas, E., Padberg, M.W., 1976. Set partitioning: a survey. SIAM Review 18 (4), 710–760.
Bartak, R., 1999. Constraint programming: in pursuit of the Holy Grail. In: Proceedings of the Week of Doctoral Students (WDS), Prague, Czech Republic, pp. 555–564.
Beasley, J., Chu, P., 1996. A genetic algorithm for the set covering problem. European Journal of Operational Research 94, 392–404.
Cooper, M., Farhangian, K., 1985. Multicriteria optimization for nonlinear integer-variable programming. Large Scale Systems 9, 73–78.
Cramton, P., Shoham, Y., Steinberg, R., 2006. Combinatorial Auctions. The MIT Press, Cambridge, MA.
Dechter, R., 2003. Constraint Processing. Morgan Kaufmann Publishers, Los Altos, CA.
Dorigo, M., Di Caro, G., 1999. The ant colony optimization meta-heuristic. In: Corne, D., Dorigo, M., Glover, F. (Eds.), New Ideas in Optimization. McGraw-Hill, London, pp. 11–32.
Dorigo, M., Di Caro, G., Gambardella, L., 1999. Ant algorithms for discrete optimization. Artificial Life 5 (2), 137–172.
Feo, T., Resende, M., 1995. Greedy randomized adaptive search procedures. Journal of Global Optimization 6, 109–133.
Fernandez, A.J., Hill, P.M., 2000. A comparative study of eight constraint programming languages over the Boolean and finite domains. Constraints 5 (3), 275–301.
Garcia de la Banda, M., Hermenegildo, M., Bruynooghe, M., Dumortier, V., Janssens, G., Simoens, W., 1996. Global analysis of constraint logic programs. ACM Transactions on Programming Languages and Systems 18 (5), 564–614.
Garey, M.R., Johnson, D.S., 1979. Computers and Intractability: A Guide to the Theory of NP-Completeness. W.H. Freeman, New York.
Glover, F., 1986. Future paths for integer programming and links to artificial intelligence. Computers and Operations Research 13 (5), 533–549.
Glover, F., Kochenberger, G.E., 2003. Handbook of Metaheuristics. Springer, Berlin.
Glover, F., Laguna, M., 1997. Tabu Search. Kluwer Academic Publishers, Dordrecht.
Hentenryck, P.V., 1989. Constraint Satisfaction in Logic Programming. The MIT Press, Cambridge, MA.
Hirayama, K., Yokoo, M., 1997. Distributed partial constraint satisfaction problem. In: Principles and Practice of Constraint Programming, pp. 222–236.
Jaffar, J., Maher, M.J., 1994. Constraint logic programming: a survey. Journal of Logic Programming 19/20, 503–581.
Kalagnanam, J., Parkes, D., 2004. Auctions, bidding and exchange design. In: Simchi-Levi, D., Wu, S.D., Shen, M. (Eds.), Handbook of Quantitative Supply Chain Analysis: Modeling in the E-Business Era. Kluwer Academic Publishers, Dordrecht (Chapter 5).
Kohl, N., 2003. Solving the world's largest crew scheduling problem. ORbit, 8–12.
Laplagne, I., Kwan, R.S.K., Kwan, A.S.K., 2005. A hybridised integer programming and local search method for robust train driver schedules planning. In: Burke, E., Trick, M. (Eds.), PATAT 2004. Lecture Notes in Computer Science, vol. 3616. Springer, Berlin, pp. 71–85.
Leyton-Brown, K., 2003. Resource allocation in competitive multiagent systems. Ph.D. Thesis, Stanford University.
Lim, A., Rodrigues, B., Zhu, Y., 2005. Airport gate scheduling with time windows. Artificial Intelligence Review 24, 5–31.
Lopez, B., 2005. Time control in road passenger transportation. Problem description and formalization. Research Report IIiA 05-04, University of Girona.
Maniezzo, V., Gambardella, L.M., De Luigi, F., 2004. Ant colony optimization. In: Onwubolu, G.C., Babu, B.V. (Eds.), New Optimization Techniques in Engineering. Springer, Berlin, pp. 101–117.
Maroto, C., Alcaraz, J., Ruiz, R., 2003. Investigacion Operativa. Models i tecniques d'optimitzacio. Universitat Politecnica de Valencia.
Melian, B., Moreno, J., Moreno, J., 2003. Metaheuristics: a global view. Revista Iberoamericana de Inteligencia Artificial 19, 7–28.
Modi, P., Shen, W., Tambe, M., Yokoo, M., 2005. ADOPT: asynchronous distributed constraint optimization with quality guarantees. Artificial Intelligence 161, 149–180.
Orden, A., 1993. LP from the '40s to the '90s. Interfaces 23 (5), 2–12.
Portugal, R., Ramalhinho Lourenço, H., Paixao, J.P., 2006. Driver scheduling problem modelling. Working Paper 991, Departamento de Economia y Empresa, Universitat Pompeu Fabra.
Ramalhinho Lourenço, H., Pinto Paixao, J., Portugal, R., 2001. Metaheuristics for the bus-driver scheduling problem. Transportation Science 35 (3), 331–343.
Rossi, F., van Beek, P., Walsh, T., 2006. Handbook of Constraint Programming. Elsevier, Amsterdam.
Salido, M.A., Barber, F., 2006. Distributed CSPs by graph partitioning. Applied Mathematics and Computation 183, 491–498.
Sandholm, T., 2002. Algorithm for optimal winner determination in combinatorial auctions. Artificial Intelligence 135 (1–2), 1–54.
Wolsey, L.A., Nemhauser, G.L., 1999. Integer and Combinatorial Optimization. Wiley-Interscience, New York.
Yokoo, M., Hirayama, K., 2000. Algorithms for distributed constraint satisfaction: a review. Autonomous Agents and Multi-Agent Systems 3, 185–207.
Yokoo, M., Durfee, E., Ishida, T., Kuwabara, K., 1998. The distributed constraint satisfaction problem: formalization and algorithms. IEEE Transactions on Knowledge and Data Engineering 10 (5), 673–685.