A distributed agent-based approach for simulation-based optimization


Advanced Engineering Informatics 26 (2012) 814–832


journal homepage: www.elsevier.com/locate/aei


Van Vinh Nguyen *, Dietrich Hartmann, Markus König
Institute for Computational Engineering, Ruhr-Universität Bochum, Universitätsstrasse 150, D-44780 Bochum, Germany

Article info

Article history: Received 27 September 2011; Received in revised form 24 May 2012; Accepted 10 June 2012; Available online 15 July 2012

Keywords: Non-standard optimization; Agent-based optimization; Simulation-based optimization; Computational steering

1474-0346/$ - see front matter © 2012 Elsevier Ltd. All rights reserved. http://dx.doi.org/10.1016/j.aei.2012.06.001

* Corresponding author. E-mail address: [email protected] (V.V. Nguyen).

¹ The simulation is noisy if its output is uncertain and fuzzy. Hence, the optimization criterion cannot be evaluated precisely and has to be estimated.

Abstract

Structural design and optimization in engineering are increasingly addressing non-standard optimization problems (NSPs). These problems are characterized by a complex topology of the optimization space with respect to nonlinearity, multimodality, discontinuity, etc. By that, NSPs can only be solved by means of computer simulations. In addition, the corresponding numerical approaches applied often tend to be noisy. Typical examples for NSPs occur in robust optimization, where the solution has to be robust with respect to implementation errors, production tolerances or uncertain environmental conditions. However, a generally applicable strategy for solving such problem categories always equally efficiently is not yet available.

To improve the situation, a distributed agent-based optimization approach for solving NSPs is introduced in this paper. The elaborated approach consists of a network of cooperating but also competing strategy agents that wrap various strategies, especially optimization methods (e.g. SQP, DE, ES, PSO, etc.) using different search characteristics. In particular, the strategy agents contain an expert system modeling their specific behavior in an optimization environment by means of rules and facts on a highly abstract level. Further, different common interaction patterns have been defined to describe the structure of a strategy network and its interactions.

For managing the complexity of NSPs using multi-agent systems (MASs) efficiently, a simulation and experimentation platform has been developed. Serving as a computational steering tool, it applies MAS technology and accesses a network of various optimization strategies. As a consequence, an elegant interactive steering, a customized modeling and a powerful visualization of structural optimization processes are established. To demonstrate the far-reaching applicability of the proposed approach, numerical examples are discussed, including nonlinear function and robust optimization problems. The results of the numerical experiments illustrate the potential of the agent-based strategy network approach for collaborative solving, where observed synergy effects lead to an effective and efficient solution finding.

© 2012 Elsevier Ltd. All rights reserved.

1. Introduction

In recent years, structural design and optimization in engineering have increasingly led to so-called non-standard optimization problems (NSPs). Customarily, the category of problems associated with NSPs is characterized by a multimodal, nonsmooth, nonlinear and constrained optimization search space, for which the global optimum ought to be found. In most cases, the numerical approaches or simulations applied to evaluate the optimization criteria and constraints also tend to be noisy¹ and, in general, need multi-level or even multi-physics simulations. Thus, relevant optimization criteria as well as prescribed constraints with inherent nonlinearities can only be evaluated by virtue of computer simulations (simulation-based optimization).

Typical examples of an NSP are problems stemming from robust optimization, where the design solution has to be robust with respect to, e.g., implementation errors, production tolerances, uncertain environmental conditions, etc. Furthermore, solving a robust optimization problem needs to take uncertainties into consideration. Beyer and Sendhoff [9] identify several sources of uncertainties during the design and optimization process of a technical system (see Fig. 1):

• changing environmental and operating conditions (type A),
• production tolerances and implementation errors (type B),
• uncertainties in the system output (type C), and
• feasibility uncertainties (type D).

The mathematical description of a general simulation-based problem, also considering uncertainties, takes the following form:

Fig. 1. System uncertainties [9].

Fig. 2. Multi-agent system.


$$\min_{x} \tilde{f}_{*}(x + \delta, \alpha) \quad \text{with} \quad x \in S,\; \delta \in U_\delta,\; \alpha \in U_\alpha \qquad (1)$$

$$S = \{\, x \mid x \in \mathbb{R}^n,\; x_l \le x \le x_u,\; \tilde{g}(x + \delta, \alpha) \le 0,\; \tilde{h}(x + \delta, \alpha) = 0 \,\}$$

$U_\delta, U_\alpha$ … uncertainty sets
$x$ … design vector
$\delta$ … uncertainty perturbation vector
$\alpha$ … uncertainty state vector

where S denotes the set of feasible designs, f the objective function (optimization criteria), g the inequality and h the equality constraints. The subscript "*" of the optimization criterion f indicates that the evaluation of the vector functions f, g and h requires, if necessary, multi-level computer simulations. The tilde "~" on top of f, g, h denotes that uncertainty has to be considered for the evaluation of the specified quantities. In the following, the objective function and the constraints are denoted as problem functions. A comprehensive overview of the state of the art in the field of robust optimization is given in [9,49,7].
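The role of the tilde in formulation (1) can be illustrated with a small sketch. This is not the paper's implementation: the quadratic stand-in objective, the noise level and the sampling scheme are all assumptions chosen for illustration. Because the noisy criterion cannot be evaluated precisely (footnote 1), it is estimated here by averaging repeated evaluations of designs perturbed by samples from the uncertainty set $U_\delta$:

```python
import random

def noisy_objective(x, alpha=0.0):
    """Hypothetical noisy simulation output f~(x, alpha): a cheap stand-in
    for an expensive simulation, plus Gaussian measurement noise."""
    true_value = sum(xi ** 2 for xi in x)
    return true_value + random.gauss(0.0, 0.1)

def robust_estimate(x, n_samples=200, delta_scale=0.05):
    """Estimate f~*(x + delta) by averaging over perturbations delta
    sampled from an assumed box-shaped uncertainty set U_delta."""
    total = 0.0
    for _ in range(n_samples):
        delta = [random.uniform(-delta_scale, delta_scale) for _ in x]
        perturbed = [xi + di for xi, di in zip(x, delta)]
        total += noisy_objective(perturbed)
    return total / n_samples
```

With enough samples, the estimate at the (noise-free) optimum stays close to the true value, while worse designs remain clearly distinguishable despite the noise.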

However, a general strategy for solving different problem types in an equally efficient manner is not – and will never be – available (see the "No Free Lunch" theorem by Wolpert and Macready [62]). In order to solve a simulation-based optimization problem in a sophisticated manner, extensive knowledge and experience-steered interactions between the optimization expert and the optimization system implemented are now applied.

In this context, the coupling of global and local optimization methods is often more promising than an approach based on only one optimization method (see [42,29,26,51] for effective hybrid approaches). Unfortunately, it is unknown which strategy combination has to be chosen to get a robust and an always efficient solution with regard to global optimality. A strategy combination that has previously worked appropriately may become inefficient or even unstable if only slight changes are made in the problem model. The time effort to systematically test which combination of optimization methods is best may then increase drastically. Further, the combination of optimization methods as well as their strategy parameters generally has to be defined in advance. Indeed, there are self-tuning optimization methods, e.g. the parameter-free Particle Swarm Optimization TRIBES [13] or the Evolution Strategy [50] with endogenous strategy parameters describing variances and covariances for self-adaptation. However, many state-of-the-art optimization methods still provide tunable strategy parameters whose appropriate determination is a great challenge. As a consequence, significant experience is needed to determine the "right" optimization methods along with their strategy parameters.
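As a minimal illustration of such a global–local coupling (not one of the cited hybrid approaches), the sketch below pairs a random multistart "global" phase with a simple pattern-search "local" phase. The start count and step-halving schedule are arbitrary assumptions and are exactly the kind of tunable strategy parameters discussed above:

```python
import random

def local_descent(f, x, step=0.5, tol=1e-6):
    """Local method: coordinate pattern search that halves the step
    whenever no neighboring move improves the objective."""
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for sign in (+1.0, -1.0):
                y = list(x)
                y[i] += sign * step
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5
    return x, fx

def hybrid_search(f, bounds, n_starts=20, seed=42):
    """Global method: uniform random sampling proposes start points,
    each of which is refined by the local method."""
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    for _ in range(n_starts):
        x0 = [rng.uniform(lo, hi) for lo, hi in bounds]
        x, fx = local_descent(f, x0)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f
```

On a smooth multimodal function, the multistart phase supplies diversity while the pattern search supplies precision; neither component alone would deliver both.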

To face such difficulties, an agent-based approach is suggested for solving the NSP. The paradigm of multi-agent systems (MASs) has proven to be the appropriate computational and theoretical basis and workbench to solve an NSP effectively. An MAS is defined as follows (see Fig. 2):

An MAS is a set of entities, called agents, which have autonomous behavior, diverging or common goals, and which interact with each other through messages.

As a relatively novel conceptual and programming paradigm, MAS promotes the design and implementation of distributed systems to accelerate the search for a new solution. The benefits of using an agent-based approach, compared to traditional computer science concepts, are obvious in cases where a problem solution demands interaction (e.g. coordination, cooperation and communication) between various autonomous participants. Wooldridge [63, p. 9] emphasizes that interaction is "probably the most important single characteristic of complex software". To this end, the paradigm of MAS contains a remarkable portion of autonomous and emergent behavior which, in addition, can be formalized naturally and, hence, be automatized. Hereby, agents adapt their solution behavior in an event-driven manner in harmony with different NSPs. In most cases, the basis of collaboration in such scenarios is heterogeneous and time-variant, has a partially fuzzy structure and requires an organizational network of the participants involved in the solution finding process.
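The MAS definition above can be made concrete with a minimal sketch of agents exchanging messages through a shared environment. The class and method names here are illustrative assumptions, not the architecture used in the paper:

```python
from collections import deque

class Environment:
    """Routes messages between registered agents."""
    def __init__(self):
        self.agents = {}

    def register(self, agent):
        self.agents[agent.name] = agent

    def deliver(self, sender, recipient, content):
        self.agents[recipient].inbox.append((sender, content))

class Agent:
    """Minimal autonomous entity: an identity and a message inbox."""
    def __init__(self, name, environment):
        self.name = name
        self.env = environment
        self.inbox = deque()
        environment.register(self)

    def send(self, recipient, content):
        self.env.deliver(self.name, recipient, content)

    def step(self):
        """Process one pending message; subclasses would override this
        with agent-specific, goal-directed behavior."""
        if self.inbox:
            sender, content = self.inbox.popleft()
            return (sender, content)
        return None
```

Everything beyond message passing (goals, autonomy, negotiation) lives in the `step` overrides of concrete agent types, which keeps the interaction substrate itself very small.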

Such a situation also holds for the solution of an NSP where multiple participants, i.e. competing optimization strategies interconnected by a relationship network, have to find a common solution. With the assistance of an agent-based strategy network, the experience-oriented combination of numerical optimization strategies (until now carried out more intuitively) can be replaced by a more systematic approach. As a consequence, the manual effort of initializing new optimization iterations for strategy combinations can be shifted to the agents. Intelligent and distributed optimization components, represented by means of agents, can replace human optimization experts by using acquired knowledge, incorporated in a rule base for optimization. Hereby, agents adapt their solution behavior in an event-driven manner according to the different NSPs occurring. In summary, the primary goal of the strategy network approach is not only to find the single best optimization method (out of a fixed set of methods) for given unknown NSPs. Rather, this approach is used as a high-level concept for deciding how to hybridize various optimization strategies in a flexible and consistent manner to let them find better solutions in cooperation. Hereby, (self-)adaptation mechanisms for optimization (e.g. proposed in [33,58,10]) and interactions (e.g. information exchange about regions to avoid or promising search regions) are crucial features of these strategy networks in order to gain benefit from synergy effects of concurrently running agents.

Surprisingly, at present there are only a few publications which address agent-based optimization for NSPs; for example, Kercelli et al. [32] apply MAS to global optimization, Persson et al. [43] combine MAS with optimization techniques for dynamically distributed resource allocation, Xie and Liu [65,64] use MAS for combinatorial optimization, Ullah et al. [58] present an agent-based memetic approach for solving constrained real-valued optimization problems and Weichhart et al. [61] develop an agent-based optimization system for scheduling problems.

Fig. 3. Three columns optimization concept with MAS.

What is still lacking, despite the numerous suggestions for methods in distributed, cooperative and adaptive optimization (documented e.g. by Crainic et al. [18,17], Melab et al. [37], Talbi et al. [56,57], Nedic and Ozdaglar [40], Hogg and Huberman [25], Augugliaro et al. [1]), is a holistic approach for simulation-based optimization by using AI technologies – in particular MAS and expert or knowledge-based systems – along with a computer-aided simulation and experimentation environment for optimization. Hereby, the MAS paradigm provides the overall concept of how to organize and to design a strategy network and its interactions. The knowledge-based system in turn is used to model the behavior of several strategy agents. These two concepts complement each other and form the basis for the design of the proposed agent-based approach. Further, the MAS paradigm supports the modeling of the strategy network and its coordination in a decentralized manner. The entire knowledge of the strategy network (defined by rules and facts) can be distributed over the agents, whereby the modeling complexity of the solution finding process is reduced. Hence, the global behavior of the strategy networks results rather from the local behavior of the agents.
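The idea of agent behavior "defined by rules and facts" can be sketched with a toy forward-chaining rule base. The facts and rules shown are invented examples for illustration, not the paper's actual knowledge base:

```python
class RuleBase:
    """Toy forward-chaining knowledge base: facts are strings, and a rule
    maps a set of premise facts to a single conclusion fact."""
    def __init__(self):
        self.facts = set()
        self.rules = []  # list of (premises, conclusion) pairs

    def add_rule(self, premises, conclusion):
        self.rules.append((frozenset(premises), conclusion))

    def assert_fact(self, fact):
        self.facts.add(fact)

    def infer(self):
        """Fire applicable rules repeatedly until no new fact is derived."""
        changed = True
        while changed:
            changed = False
            for premises, conclusion in self.rules:
                if premises <= self.facts and conclusion not in self.facts:
                    self.facts.add(conclusion)
                    changed = True
        return self.facts
```

Distributing such rule bases over the agents means each agent only carries the fragment of knowledge relevant to its role, which is what keeps the modeling complexity of the overall solution finding process low.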

The main objective of this work, therefore, is to achieve a new quality for the computer-based solution of non-standard problems in structural optimization and structural design by means of using an agent-based strategy network of optimization strategies. Particular emphasis has been placed on the development of a computational steering application² by which an interactive optimization, using MAS as a computational platform, is enabled. This computational steering application provides the ability to explore high-level emergent behaviors in optimization stemming from different low-level interaction rules.

As a matter of fact, this paper is a proof of concept and a report on the current achievements. It describes the main components used to create a distributed agent-based optimization approach. It is organized as follows: In Section 2, the optimization concept for agent-based optimization is introduced. In Section 3, the concept of a strategy network is presented. In Section 4, the distributed software system is introduced. Here, the essential concepts and aspects required for implementing a strategy network are described. Furthermore, the optimization framework MOPACK providing the foundation for the agent-based strategy network is specified. Finally, in Section 5, the agent-based optimization approach is validated by means of nonlinear function and robust optimization problems.

² Computational steering denotes the ability to control and to interact with a computer program during its execution.

2. Optimization concept

The well-known classical "Three-Columns-Concept" of structural optimization is a general approach for solving simulation-based optimization problems [24,21]. In this concept, the solution finding process is systematically divided into three independent but interconnected parts ("columns"): the optimization model, the optimization methods and the structure simulation. The specific parts act here as placeholders and allow the flexible combination and reuse of various optimization methods with different optimization models using specific simulations.

In the context of agent-based optimization, this optimization concept has been extended by an additional fourth column, i.e. the multi-agent system (MAS). The MAS represents the Artificial Intelligence part of the optimization concept, linking the technical optimization model (structural optimization model) to the optimization methods applied. In engineering, this concept provides a general solution methodology for simulation-based optimization, regardless of a specific application. Fig. 3 summarizes the optimization concept using MAS, emphasizing the central role of the MAS as a link between the optimization model and the optimization methods available. Accordingly, the following parts have to be established:

• The Optimization Model is responsible for transforming a specific engineering problem into a mathematical optimization problem. Here, the design vector x, the objective function f and the constraints g and h are defined. Consequently, the optimization method is decoupled from concrete problems while the mathematical optimization problem acts as a general interface for the optimization methods involved.
• The Optimization Methods include different deterministic as well as stochastic approaches, e.g. Evolutionary Algorithm (EA) [50,38], Sequential Quadratic Programming (SQP) [48], Differential Evolution (DE) [54], Probabilistic Global Search Lausanne (PGSL) [46], etc.
• The Simulation incorporates complex processes or tailor-made physical models using, e.g., the Finite Element Method (FEM), Multibody System (MBS), etc.
• The Multi-Agent System represents an agent-based strategy network and acts as a link between the optimization model and individual optimization methods. It coordinates and controls on a higher level the connection between the two columns as well as the interaction between various optimization strategies. In addition, it emphasizes the concurrent and cooperative execution of heterogeneous interacting strategies not only for optimization, but also for the adaptation of the strategy network and for the coordination and analysis of the optimization process (see Section 3.1).

Fig. 4. Concept of a strategy network.

The following sections demonstrate how the different above-mentioned parts are carried out. Firstly, the MAS part is specified in more detail. Then, it is described how the extended optimization concept is used to implement a sophisticated and user-friendly software system.

3. Agent-based approach

The solution finding process for simulation-based problems, which belong to the class of black-box problems³, leads to fundamental questions:

• Which class of optimization methods should be used and what are their best strategy parameters?
• How is the complexity of optimization problems determined?
• How are heterogeneous optimization concepts and technologies combined?
• What are the mechanisms for cooperation and interaction that are required to build up a distributed optimization system?

A promising approach for solving black-box problems in a sophisticated way is the hybridization of different concepts and strategies into a heterogeneous strategy network. In the following, the generally applicable concept of a strategy network is formally described.

³ Black-box problems are problems with unknown structure of the problem functions (objectives and constraints).

3.1. Strategy network

A strategy network SN is a set of individual agents, also called strategy agents, which have the common goal to solve an optimization problem in cooperation or in competition with each other. Hence, they must negotiate the problem solutions achieved with other agents to find an optimal solution (see Fig. 4).

Hereby, the strategy network is defined by the first tuple:

$$SN = (A, Q) \qquad (2)$$

where A is the set of strategy agents and Q is the set of ontologies. The strategy agents in turn are defined by the second tuple:

$$A = (S, P, K_L) \qquad (3)$$

where S is the set of different strategies,⁴ P is the set of different optimization models and $K_L$ represents the local knowledge of an agent comprising facts and rules.

The ontology Q is divided into an ontology $Q_P$ describing optimization problems and an ontology $Q_O$ describing the interactions between agents. The strategy network SN is embedded in an environment Env, which is represented by the third tuple:

$$Env = (SN, P, K_G, R) \qquad (4)$$

where $K_G$ represents the global knowledge of the strategy network (which can be modeled by the blackboard concept) and R stands for different external resources and services used by the agents, e.g. computing or data services, simulations, optimization methods, etc.
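Tuples (2)–(4) translate directly into simple data structures. The field types below are assumptions chosen for illustration; only the tuple shapes come from the text:

```python
from dataclasses import dataclass, field

@dataclass
class StrategyAgent:
    """A = (S, P, K_L): strategies, optimization models, local knowledge."""
    strategies: list
    problems: list
    local_knowledge: dict = field(default_factory=dict)

@dataclass
class StrategyNetwork:
    """SN = (A, Q): strategy agents plus shared ontologies."""
    agents: list
    ontologies: dict

@dataclass
class Env:
    """Env = (SN, P, K_G, R): network, problems, global knowledge
    (the blackboard) and external resources such as simulations
    or computing services."""
    network: StrategyNetwork
    problems: list
    global_knowledge: dict = field(default_factory=dict)
    resources: dict = field(default_factory=dict)
```

Keeping $K_L$ per agent and $K_G$ in the environment mirrors the split between locally held rules and the shared blackboard described above.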

In order to identify different types of strategy agents and their parameters more precisely, the following naming convention is used:

$$a : A^{id}_{t_1, t_2}(p_1, \ldots, p_{n_a}) \qquad (5)$$

where

a … strategy agent name
A … strategy agent
id … instance ID of a specific agent type
$t_1, t_2, \ldots$ … type and subtype of a strategy agent
$p_j$ … strategy agent parameter with $j \in \{1, \ldots, n_a\}$

By that, several distinct types of strategy agents A can be identified:

(a) agents responsible for the coordination of the optimization process (coordinator agents $A_C$),
(b) agents responsible for analyzing the progress and the results of the optimization process (analysis agents $A_N$),
(c) agents responsible for adapting the strategy network by analyzing the progress and the results of the optimization process (adaptation agents $A_D$),
(d) agents responsible for solving the optimization problem by using attached optimization methods (search agents $A_S$).

⁴ A strategy is defined as a well-planned and partly long-term oriented approach or specific algorithm to achieve a goal.
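The naming convention (5) together with the four agent types can be rendered by a small helper. The exact textual layout is an assumption, since (5) only fixes which pieces of information appear, not how they are printed:

```python
def agent_name(agent_type, instance_id, subtype=None, params=()):
    """Render an identifier following a : A^id_{t1,t2}(p1, ..., p_na).
    Types from the text: 'C' (coordinator), 'N' (analysis),
    'D' (adaptation), 'S' (search); the subtype might name the wrapped
    optimization method, e.g. 'DE' for a Differential Evolution agent."""
    sub = f",{subtype}" if subtype else ""
    args = ", ".join(str(p) for p in params)
    return f"A{instance_id}_{agent_type}{sub}({args})"
```

For instance, a first search agent wrapping DE with two strategy parameters would be printed as `agent_name("S", 1, "DE", (0.5, 0.9))`.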

3.2. Interaction patterns

In this work, numerous common interaction patterns (IPs) have been established to describe and to implement different strategy networks. Gamma et al. [22] note that knowledge and experience are not organized on the level of syntactic knowledge, but on the level of larger structures like algorithms and data structures, and by plans to achieve specific goals. For this purpose, a catalogue of relevant interaction patterns mapping specific situations is defined compliant to given rules. These rules allow the strategy network to adapt to unknown problems in an "intelligent" and effective manner. The IPs are used as building blocks for modeling strategy networks and, thus, for describing interaction and cooperation between the agents. In addition, closely related structures and subgoals of a subset of agents (substrategy network) belonging to a strategy network can be identified. They assist in avoiding a reinvention and re-implementation of established concepts and required components (in this case, the strategy agents). In short, they help to answer recurrent questions like [16]:

• How are the strategy network and its strategy agents coordinated?
• Which information is interchanged between the strategy agents?
• When and how is information exchanged?
• How is information handled?

The structure of IPs is based on the MAGMA framework⁵ introduced by Milano and Roli [39]. It is a conceptual hierarchical framework for describing hybrid metaheuristics by means of different types of interacting agents. These agents are assigned to four different levels: (i) on the first level LEVEL-0 (L0), the solution builders are implemented for the generation of new solutions; (ii) on the second level LEVEL-1 (L1), there are the solution improvers using local search strategies; (iii) on the third level LEVEL-2 (L2), the more sophisticated strategic agents use global search strategies; and (iv) on the last level LEVEL-3 (L3), the so-called coordination level, the agents for coordinating agents on lower levels are defined. In the context of the strategy network, the MAGMA framework is ideally suitable to describe the hierarchical structure of the IPs and the interaction between the agents. It emphasizes the increasing complexity of the applied agents and their special role for optimization in a clearly arranged manner. In the following, some selected IPs are summarized, showing, for example, how a strategy network can escape from local minima (IP RELAY SEARCH), can adapt itself (IP SELF-ADAPTATION), and can minimize the communication costs or coordinate the information exchange (IP BLACKBOARD):

� IP PARALLELIZER: A coordinator agent creates a number of specificstrategy agents and delegates the optimization problem tothem, whereas different initial conditions are assigned to eachstrategy agent (see Fig. 5a).� IP BLACKBOARD: An agent can provide useful facts to other agents

by writing them on the blackboard. Registered agents are beinginformed about new facts and can react accordingly (see Fig. 5b).

5 MAGMA: MultiAGent Metaheuristics Architecture.

� IP HYBRID STRATEGY: A global strategy agent searches for promisingsolution areas and lets these areas be examined by local strat-egy agents (see Fig. 6a).� IP RELAY SEARCH: A common weak point of monolithic optimiza-

tion methods is their early convergence in the vicinity of thelocal minimum. This is often caused by inappropriate valuesof optimization parameters due to the unknown structure ofthe optimization problem (e.g. termination criterion, steplength in ES (Evolution Strategy) or cooling speed of an SA (Sim-ulated Annealing)). To avoid this disadvantage, the interactionpattern IP RELAY SEARCH can be used, which represents a sequenceof depending optimization methods (see Fig. 6b). If a searchagent converges to a local minimum or does not find any bettersolutions, it starts a new search agent with ‘‘refreshed’’ param-eters and delegates the problem solution to it. This agent canstart the search with the best solution found so far, or from astarting point with a minimum distance to this solution. Byapplying this IP, the probability of the strategy network forovercoming local minima more successfully is increased.6

� IP PARTITION STRATEGY: For the systematic (global) search, an agentdecomposes the solution space into subspaces and delegatesthe corresponding subproblems to strategy agents, which, inturn, can decompose their assigned search space further (seeFig. 7a).� IP ANALYSIS: The classification of an optimization problem is cru-

cial for the selection of an appropriate solution strategy. For this reason, the analysis agent uses its ‘‘expert knowledge’’ (in terms of rules and facts) to classify the current optimization problem. Based on experience, the analysis agent tries to extract important characteristics of the optimization problem or useful information from the optimization process. Subsequently, the analysis agent informs the sender about the new facts as well as all other agents by using the blackboard (see also IP BLACKBOARD). Fig. 7b shows the IP ANALYSIS.

• IP SELF-ADAPTATION: This interaction pattern is used if a strategy agent has to adapt to its own optimization performance and the performance of the other agents. For example, if its own performance is bad, the search is finished, or slowed down by waiting for each optimization loop, in order to change its own strategy parameters. If the search is successful, a further acceleration is possible by re-modifying the strategy parameters (e.g. increasing step lengths). By applying this IP, the strategy network is able to adapt its global search behavior. In particular, the distribution of computing resources can be coordinated in a decentralized way, i.e. without the help of an external coordinator. There are two variants of the IP SELF-ADAPTATION: direct and blackboard. In the case of IP SELF-ADAPTATION (DIRECT), the strategy agent requests (periodically) the performance of other known strategy agents by sending a direct request. After receiving a response, it evaluates the performances (e.g. best objectives, search regions, feasible solutions, etc.) and adapts itself (see Fig. 8a). A certain disadvantage of this interaction pattern is the huge amount of message exchanges and the waiting times for responses. In the case of IP SELF-ADAPTATION (BLACKBOARD), the strategy agent requests the required information from the blackboard, on which the other strategy agents have put their data asynchronously during the search. Using this interaction pattern, the message exchange is reduced to a large extent in contrast to IP SELF-ADAPTATION (DIRECT) (see Fig. 8b).

• IP EXTERNAL ADAPTATION: This interaction pattern is similar to the IP SELF-ADAPTATION. In IP SELF-ADAPTATION, the strategy agent adapts itself, whereas in the IP EXTERNAL ADAPTATION an external coordinator agent is responsible for the adaptation. The advantage of this pattern is that the search agents are less complex, and that their implementation is easier compared to self-adapting agents (see IP SELF-ADAPTATION). This is because the coordinating behavior is concentrated in a dedicated coordinator agent. The coordination pattern distinguishes between IP EXTERNAL ADAPTATION (DIRECT) (see Fig. 9a) and IP EXTERNAL ADAPTATION (BLACKBOARD) (see Fig. 9b).

6 This interaction pattern is based on a concept which has already been applied successfully by Leary [35] and Stützle [55].

Fig. 5. Interaction pattern I.

Fig. 6. Interaction pattern II.

Fig. 7. Interaction pattern III.

Fig. 8. Interaction pattern IV.

V.V. Nguyen et al. / Advanced Engineering Informatics 26 (2012) 814–832 819
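The self-adaptation described above boils down to a feedback rule on the strategy parameters. A minimal sketch in Java (the system's implementation language) could look as follows; the class and method names are illustrative assumptions, not part of JASOE:

```java
// Minimal sketch of the IP SELF-ADAPTATION feedback rule: a strategy agent
// compares its own progress with the best performance published by the other
// agents and re-modifies its step length accordingly (minimization assumed).
public class SelfAdaptation {

    /**
     * Returns the adapted step length.
     *
     * @param stepLength  current step length of the agent
     * @param ownBest     best objective found by this agent so far
     * @param networkBest best objective published by the other agents
     */
    public static double adaptStepLength(double stepLength,
                                         double ownBest,
                                         double networkBest) {
        if (ownBest <= networkBest) {
            // Search is successful: accelerate by increasing the step length.
            return stepLength * 1.5;
        } else {
            // Own performance is bad: contract / slow down the search.
            return stepLength * 0.5;
        }
    }
}
```

In the blackboard variant of the pattern, `networkBest` would be read from the blackboard instead of being requested from each agent directly.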

3.3. Example scenario

To demonstrate the dynamic and complex behavior of the agent-based approach, the functioning of a specific strategy network is shown by considering a typical example scenario (see Fig. 10). Based on the interaction patterns introduced in Section 3.2, it is now explained how a simulation-based problem with an unknown structure is optimized. In particular, it is described how the strategy network works during the optimization process:

1. The optimization starts with the reception of an optimization problem by the main coordinator agent A_C^1. This agent decides, depending on its knowledge base, how to solve the given optimization problem.

2. To make the best decision, the coordinator agent decides to delegate the given problem to the first analysis agent A_N^1. This agent examines the most important characteristics of the problem (e.g. the shape properties of the objective and constraints, the evaluation time for a single solution or the number of optimization variables). For this purpose, the agent A_N^1 can use an available analysis service (e.g. data mining, stochastic line sampling [12], etc.). Having finished the analysis, the agent inserts the data onto the blackboard and informs the coordinator agent about its results.

3. Based on the analysis results, the coordinator agent decides which further solution strategies are reasonable. In this example, the coordinator agent decides to create different agents: (i) a group of heterogeneous search agents A_S^1, ..., A_S^4, (ii) a further coordinator agent A_C^2 for hybrid search, (iii) two adaptation agents A_D^1, A_D^2 and (iv) a further (second) analysis agent A_N^2. Then, the problem is delegated to the group of search agents A_S^1, ..., A_S^4 as well as to the coordinator agent A_C^2 for hybrid search (see IP HYBRID STRATEGY).

4. After a while the search agents find new intermediate results, which they enter into the blackboard. Thereby, the first adaptation agent A_D^1 is informed about new solutions by the blackboard.7 Based on the performance metrics, the adaptation agent A_D^1 controls the progress of all search agents. If necessary, it (i) changes the behavior of the search agents by modifying their strategy parameters, (ii) stops a search agent or (iii) creates new promising ones.

5. In parallel to step 4, the coordinator A_C^2 starts a global search agent and lets the solutions found be improved by local search agents.

6. Provided that the coordinator agent has initialized the second analysis agent A_N^2, this agent also analyzes the solution results of all search agents found on the blackboard and estimates promising or prohibitive regions using approximation models (e.g. the Kriging approach [31]). Findings are put on the blackboard.

7. The second adaptation agent A_D^2 is informed about new promising search regions and starts a new group of search agents for ongoing exploration.

Fig. 9. Interaction pattern V.

Fig. 10. Example of interactions within a strategy network.

7 In order to receive notifications, strategy agents have to register at the blackboard first.
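The blackboard-centered notification used in steps 4–7 can be sketched as a simple observer construction; all class and method names below are illustrative assumptions, not JASOE's actual API:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the blackboard used in the scenario: search agents put
// solutions on the board, and registered agents (e.g. the adaptation agent
// A_D^1) are notified about every new entry.
public class Blackboard {

    /** Callback interface for agents registered at the blackboard. */
    public interface Listener {
        void onNewSolution(double objective);
    }

    private final List<Listener> listeners = new ArrayList<>();
    private double bestObjective = Double.POSITIVE_INFINITY;

    /** Agents must register first in order to receive notifications. */
    public void register(Listener agent) {
        listeners.add(agent);
    }

    /** Called by search agents after each iteration loop. */
    public void putSolution(double objective) {
        bestObjective = Math.min(bestObjective, objective);
        for (Listener agent : listeners) {
            agent.onNewSolution(objective);
        }
    }

    public double getBestObjective() {
        return bestObjective;
    }
}
```

In the real system the entries would of course carry full solution vectors and sender identities rather than a single objective value.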

4. Optimization platform

The agent-based solution of simulation-based optimization problems and the analysis of the global behavior of a strategy network are impossible without the aid of computers. In particular, the global behavior of a complex strategy network can only be analyzed empirically through computer simulations. The macroscopic behavior of the strategy network can neither be controlled nor defined on a global level. Rather, it is defined locally by means of the microscopic behavior of the different agents. Only through adequate software is it possible to make the emergent behavior of a strategy network ‘‘visible’’.

Fig. 11. Architecture of the agent-based optimization system.

8 FEM: Finite Element Method.

9 MBS: Multibody System.

10 FIPA: Foundation for Intelligent Physical Agents.

4.1. Distributed software system

In the following, the prototype implementation of the object-oriented software system for agent-based optimization called JASOE (Java Agent Simulation and Optimization Environment) is elucidated. The architecture of JASOE is characterized by a modular and component-based design. It should be mentioned that JASOE is not a specific algorithm, but a framework for implementing specific distributed and agent-based optimization algorithms.

Therefore, JASOE forms a distributed software system which comprises, in addition to the Eclipse-based graphical user interface (GUI), the JASOE Workbench (explained in Fig. 12), further subsystems as well as applications. Fig. 11 illustrates the software architecture of the developed simulation and experimentation platform. Designed as a computational steering application, JASOE provides numerous control and interaction capabilities allowing ‘‘experiments’’ using different optimization methods and strategy networks. In detail, the following components have been established (see Fig. 11), representing the Physical View of the JASOE architecture:

• The JASOE Workbench/GUI (see Fig. 12) based on Eclipse constitutes the graphical user interface (GUI) and serves as the control center for the optimization of simulation-based problems. The GUI provides numerous operating and control mechanisms in harmony with the way in which an optimization expert works. In order to model optimization workflows in a graphical and block-building manner, the open source simulation environment PtolemyII [45] has been integrated into the JASOE Workbench.

• The MAS represents the ‘‘Artificial Intelligence’’ heart or kernel of JASOE, by which a network of autonomous and cooperating agents (strategy network) searches for the best global optimum.

• The Simulation Service provides the access to numerical simulations, e.g. FEM,8 MBS,9 and links these simulations to the agent-based strategy network for solving different categories of structural mechanics or physical problems.

• The Solver represents an optimization server for accessing both deterministic and stochastic optimization methods which are then used by the strategy network or by individual agents. Integrated optimization methods are:

(a) Deterministic methods: SCPIP [67], FSQP [34], NLPQLP [48], KNITRO [11], Polytop [4], IPOPT [60], lp_solve [36], EGO (Efficient Global Optimization) [31] and DOT [59].

(b) Stochastic methods: EES (Extended Evolution Strategy) [41], PSO (Particle Swarm Optimization) [13], DE (Differential Evolution) [54], HS (Harmony Search) [23], SA (Simulated Annealing) [8], CMA (Covariance Matrix Adaptation Evolution Strategy) [27], NSGA-II (Nondominated Sorting Genetic Algorithm II) [20] and SOMA (Self-Organizing Migrating Algorithm) [66].

• The Cluster/Server Farm is designed in a way that the evaluation of time-consuming mechanics or physics simulations is transparently distributed to and executed on a remote computer cluster (UNIX cluster).

• The central Database Server allows for storing optimization projects as well as intermediate and final results, but also for archiving rules and expert knowledge.

4.2. Design of the strategy network

In order to implement the agent-based strategy network, the FIPA10-compliant agent platform JADE [5] has been used. Hereby, the JADE agents have been appropriately extended for the agent-based optimization. JADE agents are also designated as JASOE11 agents in the following (see Section 4.1). By contrast, special JASOE agents additionally providing a JMX12-based management interface [30] are named MX agents.13

Fig. 12. JASOE Workbench.

The different types of agents have been designed in a way that both a hierarchical and a dynamic organizational structure can be materialized. A hierarchical organizational structure allows the agents to organize themselves within the strategy network in terms of subgroups. Particular emphasis has been placed on the possibility to change the organizational structure at runtime depending on the optimization progress during the exploratory phase of the solution finding process. Thus, new locally searching agents may be added to the strategy network, while unnecessary agents may be deleted. Also, data mining agents for the analysis of promising areas can be created.

In the context of a distributed optimization approach and with respect to the exploration of strategy networks, it is crucial that agents and their internal states can be managed, monitored and controlled in real time. By using JMX, the individual agents can be extended by a management interface allowing the monitoring of process activities and states in an easy and flexible manner. A further advantage of the management interface is that agents can be reconfigured during their lifetime. Likewise, a modification of the rule base of an agent or the analysis of resource requirements is always possible.

11 JASOE: Java Agent Simulation and Optimization Environment.

12 JMX: Java Management Extensions.

13 MX: Management Extension.

The management interface of each agent can be thought of as a ‘‘sensor’’ for monitoring purposes. However, the autonomy of the agents should not be weakened by using the management interface. Rather, it should only be possible to analyze the agent's state and to understand agent reactions during an optimization process, e.g. by postprocessing in the JASOE Workbench.

The MX agents have been designed and implemented such that various AI14 approaches (e.g. expert systems, neural networks, etc.) as well as access to external services (e.g. optimization server, database server, etc.) can be integrated (based on JADE behaviors). These extensions make the agent ‘‘intelligent’’ and allow an interaction with the environment of the agents. By that, the MX agent is equipped with the open source Java-based expert system Drools [19] and with components for accessing the remote optimization server. During the startup of the strategy network, the appropriate knowledge of the agents is dynamically loaded, if necessary, from the knowledge base. By using the applied expert system Drools, the behavior modeling of specific agents can be implemented in a flexible and declarative manner by means of rules. Fig. 13 schematically shows the structure of an MX agent along with components accessing external services.
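How such a JMX-based management interface can expose an agent's state is sketched below using the standard javax.management API; the agent class, its attributes and the ObjectName domain are illustrative assumptions, not JASOE's actual types:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;
import javax.management.StandardMBean;

// Minimal sketch of an MX agent: the agent's state is exposed through a JMX
// management interface so it can be monitored ("sensor") and reconfigured at
// runtime without weakening the agent's autonomy.
public class MxAgentSketch {

    /** Management view of the agent, exposed via JMX. */
    public interface AgentManagement {
        double getBestObjective();   // JMX attribute "BestObjective"
        boolean isRunning();         // JMX attribute "Running"
        void stopSearch();           // JMX operation
    }

    public static class SearchAgent implements AgentManagement {
        private volatile double bestObjective = Double.POSITIVE_INFINITY;
        private volatile boolean running = true;

        /** Called internally by the agent after each iteration. */
        public void report(double objective) {
            bestObjective = Math.min(bestObjective, objective);
        }
        @Override public double getBestObjective() { return bestObjective; }
        @Override public boolean isRunning() { return running; }
        @Override public void stopSearch() { running = false; }
    }

    /** Registers the agent's management view on the platform MBean server. */
    public static ObjectName register(SearchAgent agent, String name)
            throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName on = new ObjectName("jasoe:type=Agent,name=" + name);
        server.registerMBean(
            new StandardMBean(agent, AgentManagement.class), on);
        return on;
    }
}
```

A JMX console (or the JASOE Workbench) could then read the `BestObjective` attribute or invoke `stopSearch` remotely.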

Using distributed components like software agents, which encapsulate Artificial Intelligence technologies (e.g. expert systems with rule and knowledge base tailored for optimization), optimization experts are supported efficiently in solving their optimization problems. Moreover, the agent-based strategy network of optimization methods is able to adapt itself during the information exchange if various obstacles occur in searching the optimization space.

14 AI: Artificial Intelligence.


Fig. 13. Structure of a JASOE or MX agent.


In order to exemplify how the complex behavior of strategy agents is implemented, Listing 1 shows the behavior of an adaptive DE-search agent A_{S,SA}(DE) defined by rules (with the Drools rule language). This agent is informed indirectly over the blackboard (see Section 4.3) about new solutions of other search agents. Depending on the quality of the received solution, it is able to integrate new solutions in its search process.

4.3. Communication between agents

The use of an adequate ontology is essential for reliable communication between the agents of the strategy network. Hence, a specific ontology for the optimization (QO) has been defined which supports the coordination and the information exchange of optimization results between the individual agents. For example, the QO ontology is used to inform agents about new results or events, to request agents to analyze or to optimize a given problem or to stop ineffective agents (see Fig. 14).

In this context, the infrastructure of JASOE utilizes two types of communication processes between agents: (a) the direct and (b) the indirect communication.

(a) Direct communication (agent, agent): In direct communication, also denoted as ‘‘face-to-face’’ communication, the message exchange takes place bidirectionally between two agents, i.e. the sender and the receiver and vice versa. Direct communication is the default communication type between agents in JASOE, where the immediate feedback of the receiver (agent) is expected.

(b) Indirect communication (agent, blackboard, agent): In indirect communication, messages are exchanged through a medium or a tool. Indirect communication is implemented in JASOE using the blackboard concept, which is applied in many fields of Artificial Intelligence for the solution of inaccurately determined complex applications. The blackboard concept is based on a metaphor, where a group of different specialists is gathered around a blackboard to solve a problem collaboratively [14,15,53]. The blackboard approach combines the following advantages:

(i) The number of messages within the strategy network can be reduced significantly.

(ii) New information placed on the blackboard is received only by those agents that have registered at the blackboard.

(iii) The cooperation between different types of agents is easy to manage (e.g. between analysis agent and strategy agent).

(iv) A flexible and centralized deployment of various data is enabled.

In summary, Fig. 15 depicts schematically the hierarchical relationship of a common strategy network with links to a knowledge base and to external services such as the simulation service and the blackboard service.

The different types of implemented strategy networks, presented in the following Section 5, are designed as parameter-free optimization methods. By applying several mechanisms for adaptation (e.g. creating and removing agents according to their performance, exchanging results, etc.), a strategy network adapts itself to the optimization process. Two use cases are implemented for interacting with a strategy network: (1) the strategy network is provided as a software component with a general programming interface and (2) the strategy network is provided as an optimization service on a remote server, where optimization problems are received via Web Service, RMI, etc. Hereby, an optimization problem defined by the user is delegated to a main coordinator agent which is responsible for initializing and starting a specific strategy network for solving the problem.

5. Application examples

The applicability of the agent-based approach is exemplarily demonstrated by means of three highly nonlinear function optimization problems as well as two simulation-based problems representing robust optimization problems. Also, the potential and capabilities of the MAS paradigm for the development of sophisticated and distributed solution strategies are illustrated. By using the MAS paradigm, adaptive optimization based on loosely coupled and autonomous components (strategy agents) could be realized in an elegant manner.

5.1. Nonlinear function optimization

In the following, one deterministic and two robust function test problems are considered:

1. f_pin is a highly nonlinear, parameterized test problem defined by Pinter [44, p. 540]. As a minimization problem, the global minimum is at x* = (x_1*, ..., x_n*), where the problem parameters s = 0.025n (depending on the dimension of the problem), a_k = 1 and f_k = 1 (k ∈ {1, ..., k_max}) determine the difficulty of the problem:

\min_x f_{pin}(x) \quad \text{with} \quad f_{pin}(x) = s \sum_{i=1}^{n} (x_i - x_i^*)^2 + \sum_{k=1}^{k_{max}} a_k \sin^2\left[ f_k P_k(x - x^*) \right] \qquad (6)

P_1(x - x^*) = \sum_{i=1}^{n} (x_i - x_i^*) + \sum_{i=1}^{n} (x_i - x_i^*)^2

P_2(x - x^*) = \sum_{i=1}^{n} (x_i - x_i^*)

Listing 1. Rules of an adaptive DE-search agent.


2. f_{ber,r} is a two-dimensional robust test problem15 (uncertainty according to type B) defined by Bertsimas [6, p. 6]. Again, a minimization problem is to be solved, where ||δ||_2 ≤ ε determines the neighborhood of x = (x_0, x_1) depending on the regularization parameter ε = 0.5.16 Here, a robust solution has to be found which is insensitive to changes of the optimization vector x:

\min_x f_{ber,r}(x) = \min_x \max_{\|\delta\|_2 \le \epsilon} f_{ber}(x + \delta) \qquad (7)

f_{ber}(x) = 2x_0^6 - 12.2x_0^5 + 21.2x_0^4 + 6.2x_0 - 6.4x_0^3 - 4.7x_0^2 + x_1^6 - 11x_1^5 + 43.3x_1^4 - 10x_1 - 74.8x_1^3 + 56.9x_1^2 - 4.1x_0 x_1 - 0.1x_0^2 x_1^2 + 0.4x_1^2 x_0 + 0.4x_0^2 x_1

15 The robust problem is defined as a Worst Case Scenario problem [9].

16 The regularization parameter ε specifies the maximum size of a neighborhood for which the perturbation vector δ is defined.
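A direct transcription of the polynomial above, together with a brute-force mesh evaluation of the worst case over the disc ||δ||_2 ≤ ε (a fine-grained mesh is used for this purpose in Section 5.1.2), might look as follows; the class name and mesh resolution are illustrative:

```java
// Sketch of the Bertsimas test polynomial f_ber and a mesh-based worst-case
// evaluation for the robust counterpart f_ber,r: the disc ||delta||_2 <= eps
// around x is scanned on a regular grid.
public class BertsimasProblem {

    public static double fber(double x0, double x1) {
        return 2 * Math.pow(x0, 6) - 12.2 * Math.pow(x0, 5)
             + 21.2 * Math.pow(x0, 4) + 6.2 * x0 - 6.4 * Math.pow(x0, 3)
             - 4.7 * x0 * x0
             + Math.pow(x1, 6) - 11 * Math.pow(x1, 5)
             + 43.3 * Math.pow(x1, 4) - 10 * x1 - 74.8 * Math.pow(x1, 3)
             + 56.9 * x1 * x1
             - 4.1 * x0 * x1 - 0.1 * x0 * x0 * x1 * x1
             + 0.4 * x1 * x1 * x0 + 0.4 * x0 * x0 * x1;
    }

    /** Worst-case value of f_ber over the disc of radius eps around x. */
    public static double fberRobust(double x0, double x1,
                                    double eps, int meshPoints) {
        double worst = Double.NEGATIVE_INFINITY;
        for (int i = 0; i < meshPoints; i++) {
            for (int j = 0; j < meshPoints; j++) {
                double d0 = -eps + 2 * eps * i / (meshPoints - 1);
                double d1 = -eps + 2 * eps * j / (meshPoints - 1);
                if (d0 * d0 + d1 * d1 <= eps * eps) {  // inside ||d||_2 <= eps
                    worst = Math.max(worst, fber(x0 + d0, x1 + d1));
                }
            }
        }
        return worst;
    }
}
```

With an odd number of mesh points, δ = 0 is on the grid, so the robust value is never smaller than the nominal one.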

3. f_{jac,r} is a two-dimensional parameterized robust test problem (again uncertainty according to type B) based on the Jacob problem defined in [28, p. 267]. It represents a minimization problem with x = (x_0, x_1), a = 0.5, b = 1.3 and the regularization parameter ε = 1.8:

\min_x f_{jac,r}(x) = \min_x \max_{\|\delta\|_2 \le \epsilon} f_{jac}(x + \delta) \qquad (8)

f_{jac}(x; a, b) = -\left[ \mathrm{sinc}(x_0 + 11, x_1 + 9, 1, 1) + \mathrm{sinc}(x_0 - 11, x_1 - 3, a, b) + \mathrm{sinc}(x_0 + 6, x_1 - 9, 1, 1) \right]

Fig. 14. Optimization ontology.

Fig. 15. Relationship of a strategy network.

Fig. 16. Optimization progress of strategy network without cooperation.

Fig. 17. Optimization progress with indirect cooperation.

\mathrm{sinc}(x; a, b) = 50\, b\, \frac{\sin\left( a \sqrt{x_0^2 + x_1^2} \right)}{\sqrt{x_0^2 + x_1^2}} - \sqrt{x_0^2 + x_1^2}

5.1.1. Synergy effects

In this section, the emergent behavior of the strategy network is demonstrated by means of selected numerical experiments. Through graphical visualization, the emergent effects of the strategy network are illustrated. It can be shown that the global behavior of the strategy network cannot be concluded from the local behavior of the strategy agents. Also, the impact of different interaction patterns (see Section 3.2) on the global behavior of the strategy network and its optimization progress is studied. For this purpose, the 500-dimensional Pinter problem f_pin (with optimal solution at x* = (−30, ..., −30)) is optimized using different strategy networks of increasing complexity (different types of strategy agents, coordination mechanisms, etc.). The following numerical experiments are accomplished.

Table 1
Statistics for 25 independent runs of the proposed experiments.

Name            Best solution^a   Mean           Std. dev.       Worst
Experiment 1    0.92 × 10^4       0.954 × 10^4   0.0006 × 10^3   0.98 × 10^4
Experiment 2    0.34 × 10^4       0.356 × 10^4   0.0168 × 10^3   0.37 × 10^4
Experiment 3^b  0.00 × 10^4       0.016 × 10^4   0.1279 × 10^3   0.08 × 10^4
Experiment 4^c  0.00 × 10^4       0.008 × 10^4   0.0912 × 10^3   0.07 × 10^4

^a Best solution after 1.0 × 10^5 evaluations.
^b Quickest optimal solution found after 2.2 × 10^4 evaluations.
^c Quickest optimal solution found after 1.4 × 10^4 evaluations.

5.1.1.1. Exp. 1: parallel execution of isolated working search agents. A network of eight homogeneous DE-search agents A^0_{S,SB}(DE), ..., A^7_{S,SB}(DE) is created in this experiment. The subscript SB denotes a strategy agent which applies an optimization method given by the strategy agent parameter p1. After each iteration loop of the optimization method, it puts the intermediate result onto the blackboard. The start positions of the search agents are generated by means of the sampling method LHC.17 Accordingly, the optimization progress of this experiment is exemplarily shown in Fig. 16: to the left, the objectives of the search agents over time, and to the right, the first two components of the solution vector in a scatter plot.18 In summary, the different search agents show a similar optimization progress despite their uniformly distributed start positions.
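A minimal sketch of LHC as used for the start positions (each dimension is divided into m equally probable strata, and each stratum is hit exactly once); the implementation details are illustrative, not those of JASOE:

```java
import java.util.Random;

// Sketch of Latin Hypercube Sampling (LHC): returns m start positions in
// [lower, upper]^n such that, per dimension, every one of the m strata
// contains exactly one sample.
public class LatinHypercube {

    public static double[][] sample(int m, int n,
                                    double lower, double upper, long seed) {
        Random rnd = new Random(seed);
        double[][] samples = new double[m][n];
        for (int dim = 0; dim < n; dim++) {
            // Random permutation of the strata 0..m-1 for this dimension
            // (Fisher-Yates shuffle).
            int[] perm = new int[m];
            for (int i = 0; i < m; i++) perm[i] = i;
            for (int i = m - 1; i > 0; i--) {
                int j = rnd.nextInt(i + 1);
                int tmp = perm[i]; perm[i] = perm[j]; perm[j] = tmp;
            }
            for (int i = 0; i < m; i++) {
                // One uniform point inside stratum perm[i].
                double u = (perm[i] + rnd.nextDouble()) / m;
                samples[i][dim] = lower + u * (upper - lower);
            }
        }
        return samples;
    }
}
```

For Exp. 1, this would be called with m = 8 agents and n = 500 dimensions over the box constraints of f_pin.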

5.1.1.2. Exp. 2: parallel execution of search agents with indirect cooperation. In this experiment, the cooperation mechanisms of the strategy network based on the interaction pattern IP BLACKBOARD are activated, where the search agents exchange solutions indirectly over the blackboard. Here, four DE-search agents A^0_{S,SB}(DE), ..., A^3_{S,SB}(DE) are in charge. Also, four new adaptive DE-search agents A^0_{S,SA}(DE), ..., A^3_{S,SA}(DE) are chosen. The subscript SA denotes a strategy agent which behaves like the strategy agent A_{S,SB}(DE). In addition, however, it can handle the solutions found by other agents. For this purpose, the strategy agent registers at the blackboard for new solutions. Every time the strategy agent receives a new solution, it integrates this into its current solution set.19 In Exp. 2, cooperating (A_{S,SA}) and non-cooperating search agents (A_{S,SB}) are used to take into account the global as well as the local aspects of the search. If only strategy agents of type A_{S,SA}(DE) are used, which just exchange their best current solutions, the agents might get stuck in a local minimum at an early stage.

The optimization progress of the strategy network is exemplarily shown in Fig. 17. As can be seen, the results of the second experiment demonstrate that even a simple cooperation mechanism (the indirect cooperation over the blackboard) has considerably increased the efficiency of the strategy network. It has been possible to reduce the average objective function value of Exp. 2 in comparison to Exp. 1 by approx. 62%, from 0.954 × 10^4 to 0.356 × 10^4 (see Table 1).

17 LHC: Latin Hypercube Sampling.

18 This plot gives an impression of the solution distribution in the search space.

19 In the case of the population-based search strategy DE, the agent creates an individual out of the solution and puts it into the population. As a consequence, the search strategy can select this individual depending on its fitness for the creation of new offspring in the next generations.

5.1.1.3. Exp. 3: parallel execution of search agents with indirect cooperation and local search. In the third experiment, three more agent types are added: the adaptation agent A_{A,SB} and two types of so-called relay agents A_{S,SR} and A_{S,SRB} (see IP RELAY SEARCH). The task of the adaptation agent A_{A,SB} is to accelerate the exploration of local areas by using locally working search agents. This strategy agent is constantly informed by the blackboard about the best current solution achieved. It immediately starts the local search agent A_{S,SB}(SQP) to explore the region around this solution, if this has not already been done. In contrast, the relay agents A_{S,SR} and A_{S,SRB} take over bringing more diversity into the strategy network. They work according to the interaction pattern IP RELAY SEARCH described in Section 3.2. Using all agents chosen, the probability of getting stuck in a local minimum can be decreased. The difference between the agents A_{S,SR} and A_{S,SRB} is that an A_{S,SR} agent instructs its successor to start the search around its last solution found, whereas an A_{S,SRB} agent chooses between its best solution and the best global solution found on the blackboard as the starting point for its successor.

The optimization progress of Exp. 3 is exemplarily shown in Fig. 18. The average objective function value could be further improved from 0.356 × 10^4 (Exp. 2) to 0.016 × 10^4 (see Table 1).

5.1.1.4. Exp. 4: parallel execution of search agents with indirect cooperation, local optimization and external adaptation. A reasonable approach for adapting the strategy network during the optimization process is the continuous analysis of the optimization progress as well as of the agents' performance (feedback loop). In this experiment, a strategy agent is deployed for stopping ineffective search agents (see IP EXTERNAL ADAPTATION (DIRECT)). As a consequence, computing resources not used so far can be assigned to other waiting agents. To this end, a filter agent A_{D,F} is defined. Its task is to request other search agents for their performance data after a certain time interval. Depending on the response, the agent stops the search agents with the worst performance (according to a specified performance metric). The effect of deploying the filter agent is exemplarily shown in Fig. 19, where, in contrast to Exp. 3, one of the global search agents is stopped. As a result, it has been possible to further improve the performance of this strategy network, because after the worst working agents were stopped, the newly available resources have been used by the other agents.
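The core decision of such a filter agent, namely ranking the search agents by their reported performance and stopping the worst ones, can be sketched as follows; the names are illustrative, and the actual request/response messaging via JADE is omitted:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Map;

// Sketch of the filter agent A_{D,F} used in Exp. 4 (an instance of
// IP EXTERNAL ADAPTATION (DIRECT)): after collecting the performance data of
// all search agents, it selects the worst performers to be stopped so that
// their computing resources become available again.
public class FilterAgent {

    /**
     * Given the best objective of each agent (minimization), returns the
     * names of the k worst-performing agents.
     */
    public static List<String> selectAgentsToStop(
            Map<String, Double> bestObjectiveByAgent, int k) {
        List<Map.Entry<String, Double>> ranked =
            new ArrayList<>(bestObjectiveByAgent.entrySet());
        // Worst performance first: largest best objective value.
        ranked.sort(Comparator.comparingDouble(
            (Map.Entry<String, Double> e) -> e.getValue()).reversed());
        List<String> toStop = new ArrayList<>();
        for (int i = 0; i < k && i < ranked.size(); i++) {
            toStop.add(ranked.get(i).getKey());
        }
        return toStop;
    }
}
```

A more elaborate performance metric could also take the evaluation budget consumed by each agent into account.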

Taken together, the numerical results of Exps. 1–4 are summarized in Table 1. Hereby, the statistics of 25 independent runs of the proposed experiments are evaluated, where for each run the total number of evaluations (of all strategy agents) is limited to 1.0 × 10^5. Table 1 shows that applying interactions and special agent behaviors has a significant effect on the efficiency of the strategy network. As a result, the impact of different interactions and special strategy agents leads to more efficient and robust strategy networks than if the optimization methods applied in the experiments (DE, SQP) were executed in parallel without interactions (see Exp. 1).

5.1.2. Effectivity

A specific strategy network SN_eff with the interaction patterns IP BLACKBOARD, IP ANALYSIS, IP HYBRID STRATEGY, IP PARALLELIZER, IP RELAY SEARCH and IP EXTERNAL ADAPTATION (DIRECT) has been chosen to solve the highly nonlinear function optimization problems. The appropriate working of the strategy network is described in more detail in Section 5.2.1.

In the case of the robust, nonlinear function problems f_{ber,r} and f_{jac,r}, a fine-grained mesh is applied for evaluating the worst-case function value in the neighborhood of a specific design (bounded

Fig. 19. Optimization progress with indirect cooperation, local search and external adaptation.

Fig. 18. Optimization progress with cooperation and local search.

Fig. 20. Results of the function optimization.


Fig. 21. 18-bar truss and robust optimization result.

21 The FEA (Finite Element Analysis) is accomplished using the FEA software miniFE [3].


by ||δ||_2 ≤ ε). The results are shown in Fig. 20, where the optimization history (objective function values) of different agents is plotted over time (the timestamp is given in [ms]). The histories give an impression of how a specific strategy network works on the three different optimization problems.

To cope with the Pinter problem f_pin (see Fig. 20a), the efficient adaptive progress of the search agent bc.A-DE: A_{S,SA}(DE) (described in Section 5.1.1) is obvious. Initially, this agent searches the optimization space in parallel to the other agents while periodically adopting the currently best solutions found by others. After a while, the agent bc.A-DE is informed about the good performance of the agent sn.NSGAII: A_{S,SA}(NSGAII). It takes over sn.NSGAII's best solution in order to further improve it.20 In summary, the selected experiments exemplarily show the essential influence of interactions between different strategy agents on the optimization progress. Due to different coordination mechanisms (e.g. IP BLACKBOARD, IP SELF-ADAPTATION or IP EXTERNAL ADAPTATION), the strategy network is able to adapt itself to unknown optimization problems. Hence, both interaction and feedback lead to tremendous synergy effects within the strategy network. In order to increase the capability of the strategy network, additional or new strategies for search or adaptation (defined as agents) can be integrated without major effort.

5.2. Robust optimization

5.2.1. 18-Bar truss

In this example, a robust optimization of an 18-bar truss is carried out. Robust optimization means that implementation errors (caused by imperfect node positions and cross sections A_i) as well as the stochastic distribution of the structural material parameter E (Young's modulus) are incorporated into the structural optimization, leading to a complex nonlinear problem. This robust optimization problem is based on the classical (deterministic) 18-bar truss problem, which has been introduced, for example, by Salajegheh and Vanderplaats [47]. The objective of the classical optimization problem is to find the minimum weight of the truss structure while considering both stress and buckling constraints:

σ_S,i = −(k_i E_i A_i) / l_i² ≤ σ_p,i  ⇒  g_s,i = σ_p,i / σ_S,i − 1 ≤ 0    (9)

20 This strategy network variant starts the search agent sn.NSGAII:AS,SA(NSGAII) only for unconstrained, nonlinear optimization problems.

where σ_p,i is the compression or tensile stress, respectively, k_i the profile-dependent buckling coefficient and σ_S,i the Euler buckling stress.
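Using the symbol definitions above, the normalized buckling constraint of Eq. (9) can be evaluated per member as in the following sketch; the numeric values are illustrative only and are not taken from the 18-bar truss model.

```python
def euler_buckling_constraint(k, E, A, l, sigma_p):
    """Normalized buckling constraint g = sigma_p/sigma_S - 1 <= 0 (Eq. (9)).

    sigma_S = -k*E*A/l**2 is the (negative) Euler buckling stress; the
    constraint is meaningful for members in compression (sigma_p < 0).
    """
    sigma_S = -k * E * A / l**2
    return sigma_p / sigma_S - 1.0

# Illustrative values (not from the paper): a compressed member whose stress
# magnitude stays below the buckling stress satisfies the constraint (g <= 0).
g = euler_buckling_constraint(k=4.0, E=2.1e5, A=10.0, l=250.0, sigma_p=-100.0)
print(g <= 0.0)
```

A feasible design requires g ≤ 0 for every member i; an optimizer treats the most violated g_s,i as the active constraint.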

In order to determine the robust solution of the 18-bar truss, the deterministic problem is transformed into a robust problem or, more precisely, into a Worst Case Scenario problem [9] by means of uncertainties. The uncertainties are of type B, caused by the perturbation vector d, as well as of type A, caused by the state vector α (see Eq. (1)). Hence, the mathematical formulation of the robust problem of the 18-bar truss takes the following form:

min_x max_{d,α} f̃(x + d, α)  with  x ∈ S, d ∈ U_d, α ∈ U_α    (10)

S = {x | x ∈ ℝⁿ, x_l ≤ x ≤ x_u, g(x + d, α) ≤ 0}

d = (d_NP, d_A)ᵀ = (N(0, 3), N(0, 0.5))ᵀ,  α = (α_E) = (N(E, 1.0 × 10⁶))

Each of the vectors d_NP, d_A and α_E represents the normally distributed variation (N(μ, σ)) of the node positions of the lower chord, the cross sections A_i and the Young's modulus E, respectively.

Fig. 21 shows the 18-bar truss21 along with the probability distributions associated with their corresponding optimization parameters. The robustness measure of a certain design (maximum objective function for given uncertainty distributions subject to all constraints) is computed using the stochastic sampling method Latin Hypercube Sampling.22 The strategy network SN_eff using the interaction patterns IP BLACKBOARD, IP ANALYSIS, IP HYBRID STRATEGY, IP PARALLELIZER, IP RELAY SEARCH and IP EXTERNAL ADAPTIVE (DIRECT) is applied to solve the robust optimization problem. Fig. 21b presents the time-dependent optimization process, where the current results (best objective function values) of the strategy agents are plotted over time.
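The sampling-based robustness measure of Eq. (10) can be sketched as follows. This is a simplified Python illustration with a basic Latin Hypercube sampler and a toy quadratic objective; in the actual experiments the objective is evaluated by the FE analysis of the truss, and all names below are illustrative.

```python
import random

def latin_hypercube(n_samples, bounds, rng):
    """Basic Latin Hypercube sampling: one point per stratum per dimension,
    with the strata of each dimension shuffled independently."""
    cols = []
    for (lo, hi) in bounds:
        pts = [lo + (hi - lo) * (i + rng.random()) / n_samples
               for i in range(n_samples)]
        rng.shuffle(pts)
        cols.append(pts)
    return [[col[i] for col in cols] for i in range(n_samples)]

def worst_case_objective(f, x, pert_bounds, n_samples=100, rng=None):
    """Robustness measure: maximum of f(x + d) over sampled perturbations d,
    i.e. the inner maximization of the Worst Case Scenario problem."""
    rng = rng or random.Random(0)
    worst = float("-inf")
    for d in latin_hypercube(n_samples, pert_bounds, rng):
        worst = max(worst, f([xi + di for xi, di in zip(x, d)]))
    return worst

# Toy objective (not the truss model): the nominal value at x = (0, 0) is 0,
# but the worst case over perturbations d in [-1, 1]^2 is clearly larger.
f = lambda x: sum(xi * xi for xi in x)
wc = worst_case_objective(f, [0.0, 0.0], [(-1.0, 1.0), (-1.0, 1.0)])
print(wc > 0.0)
```

The stratification is what reduces the sample count compared to plain Monte Carlo (cf. footnote 22): every marginal stratum is guaranteed to be hit exactly once.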

During the above-considered optimization process, the following actions have been carried out23:

22 By using the sampling method Latin Hypercube, the number of samples can be reduced by a factor of 12 compared to the plain Monte Carlo method for the same accuracy of statistical values.

23 The names of the participating agents are shown in typewriter font.

Fig. 22. Connecting rod with initial geometry, optimization variables, loads and boundary conditions.


Fig. 23. Robust optimization of a connecting rod using a strategy network.


1. After the optimization problem has been transferred to the strategy network, the coordinator agent Main-Coordinator:AC starts the optimization. This agent delegates the problem to the analysis agent Classifier:AN, which analyzes and classifies the problem (IP CLASSIFIER). The results and analysis data found are added to the blackboard.

2. Due to the characteristics of the problem (nonlinear, simulation-based), various search agents sn.SQP:AS,SB(SQP), sn.DE:AS,SB(DE), sn.CMA:AS,SB(CMA), sn.HS:AS,SB(HS) as well as adaptive search agents sn.DE_req<i>:AS,SB(DE), sn.A-DE:AS,SB(DE) and relay search agents sn.DE:AS,SR(DE), sn.DE:AS,SRB(DE) associated with different initial values have been started (IP PARALLELIZER). Each of these agents begins its search from an infeasible point due to the fixed initial values of the problem.

3. In parallel to 2, the analysis agents bc.brain:AN and bc.brainDE:AN start and register at the blackboard. These analysis agents wait for promising results which are added to the blackboard by other strategy agents (IP ANALYSIS).

4. The individual strategy agents (e.g. sn.SQP:AS,SB(SQP), sn.CMA:AS,SB(CMA), sn.DE:AS,SB(DE), sn.A-DE:AS(DE), etc.) search through the search space and add their results to the blackboard (IP BLACKBOARD).

5. In the meantime, the agents bc.brain:AN and bc.brainDE:AN access the currently best results found from the blackboard. According to their knowledge and depending on the available computing resources, they decide if locally working strategy agents (in this example denoted as b.ReporterSQP_R<i>:AS,SB(SQP) and b.ReporterDE_R<i>:AS,SB(DE), respectively) can be started to explore promising results in more detail (IP HYBRID STRATEGY).
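The coordination steps above can be condensed into a small sketch. The following hypothetical Python fragment replaces the real search agents (SQP, DE, CMA, …) with trivial stand-ins in order to show only the hybrid pattern: global agents post results to a blackboard, and the most promising result is handed to a local refinement agent.

```python
import random

def coarse_global_search(f, bounds, n, rng):
    """Stand-in for a global search agent (e.g. DE or CMA): random sampling."""
    pts = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    return min(pts, key=f)

def local_refinement(f, x0, step=0.1, iters=200, rng=None):
    """Stand-in for a locally working agent (e.g. SQP): adaptive pattern search."""
    rng = rng or random.Random(1)
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        cand = [xi + rng.uniform(-step, step) for xi in x]
        fc = f(cand)
        if fc < fx:
            x, fx = cand, fc
            step *= 1.1          # expand the step on success
        else:
            step *= 0.95         # contract the step on failure
    return x, fx

# Hypothetical sketch of IP BLACKBOARD + IP HYBRID STRATEGY: global agents
# post their best points, an analysis step picks the most promising one and
# starts a local refiner on it (cf. b.ReporterSQP_R<i>).
rng = random.Random(0)
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
blackboard = []
for name in ("sn.DE", "sn.CMA", "sn.HS"):            # global search agents
    x = coarse_global_search(f, [(-5, 5), (-5, 5)], 50, rng)
    blackboard.append((f(x), x, name))
best_f, best_x, source = min(blackboard)
x_ref, f_ref = local_refinement(f, best_x)           # local refinement agent
print(f_ref <= best_f)
```

Since the refiner starts from the blackboard's best point and only accepts improvements, the hybrid result is never worse than the best global result — the "worst case" guarantee stated in the conclusion.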

The described arrangement of the strategy network has yielded the following result: the minimum weight of the 18-bar truss without taking uncertainties into account is m_truss = 4505.9 kg.24 By contrast, taking uncertainties into account, the minimum weight increases to m_truss = 5378.2 kg, where the total number of evaluations during the optimization process, n_eval, is about 1.1 × 10^5.

5.2.2. Connecting rod

In the second example for robust optimization, the shape of a connecting rod is optimized.25 The objective of the optimization is to find the minimum volume while considering the allowed equivalent stress σ_all = 1200 N/mm².

Fig. 22 shows the connecting rod with its initial geometry, the optimization variables, the corresponding loads and the boundary conditions [2].

Again, the robust optimization associated with uncertainties of type B (caused by the perturbation vector d) is solved by means of the same strategy network as already used for the robust 18-bar truss problem (see Eq. (11)).

min_x max_d f̃(x + d)  with  x ∈ S, d ∈ U_d    (11)

S = {x | x ∈ ℝⁿ, x_l ≤ x ≤ x_u, g(x + d) ≤ 0},  d = (N(0, 1))
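A direct way to read Eq. (11) is as a nested loop: an inner sampling-based maximization over the perturbation d inside an outer minimization over x. The sketch below uses plain random search and a toy 1-D objective in place of the strategy network and the rod's FE model; all names and bounds are illustrative.

```python
import random

def inner_worst_case(f, x, n=50, rng=None):
    """Inner maximization of Eq. (11): worst f(x + d) over sampled d ~ N(0, 1)."""
    rng = rng or random.Random(0)
    return max(f([xi + rng.gauss(0.0, 1.0) for xi in x]) for _ in range(n))

def robust_minimize(f, bounds, n_outer=300, rng=None):
    """Outer minimization by random search on the worst-case objective.
    A stand-in only; the paper uses a network of SQP/DE/CMA/HS agents."""
    rng = rng or random.Random(0)
    best_x, best_v = None, float("inf")
    for _ in range(n_outer):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        v = inner_worst_case(f, x, rng=rng)
        if v < best_v:
            best_x, best_v = x, v
    return best_x, best_v

# Toy 1-D objective (not the rod model): the robust optimum of f(x) = x**2 is
# still near x = 0, but its worst-case value stays strictly positive.
x_star, v_star = robust_minimize(lambda x: x[0] ** 2, [(-3.0, 3.0)])
print(v_star > 0.0)
```

This nesting also explains the evaluation counts reported for the examples: every outer candidate costs a full batch of inner samples, which is why n_eval reaches the order of 10^5.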

Fig. 23 depicts the results of the shape optimization with and without considering uncertainties. The optimized volume of the connecting rod neglecting uncertainties is V_cr = 254.3 mm³. Considering uncertainties and using the prescribed strategy network yields the minimum V_cr = 380.069 mm³, where the total number of evaluations n_eval is approximately 1.45 × 10^5.

6. Conclusion

A distributed agent-based optimization approach for solving NSPs, in particular robust optimization problems, has been introduced in this paper. The suggested approach consists of a network of cooperating and competing strategy agents wrapping various optimization methods and representing numerous search characteristics. The strategy agents rely on a knowledge-based system encapsulating behavior-relevant rules and facts. To manage the complexity associated with the solution of NSPs by virtue of MASs, a distributed simulation and experimentation platform has been developed. This platform provides the appropriate infrastructure for numerical optimization experiments using various categories of strategy networks.

The results of the numerical experiments considering nonlinear function as well as robust optimization problems demonstrate that agent-based optimization is an exceptionally promising and effective approach to solve sophisticated optimization problems in an adaptive and distributed manner. In the worst case, the agent-based strategy network is at least as good as the parallel and isolated execution of the same optimization methods used in the strategy network. In the best case, interactions (e.g. information exchange) and adaptation mechanisms (e.g. changing the network topology during runtime, preferring well-performing agents, etc.) constitute the prerequisite for synergy and emergence effects to solve unknown optimization problems more effectively. Further advantages of the proposed optimization approach are, for example, the reuse of search results stored in the blackboard to increase the efficiency, the scalability of the strategy network regarding the expandability by further optimization methods or technologies encapsulated in agents, and the decrease of modeling complexity for implementing hybrid optimization approaches. This is due to the fact that it is not the global behavior of the strategy network which has to be defined explicitly, but only the local behavior of the agents. In contrast, the main drawbacks of this approach are the increasing number of evaluations of problem functions, which is implicitly caused by concurrently executing optimization methods, and the increasing complexity of design and implementation of software for agent-based optimization.

24 The results have been determined by using the SQP algorithm NLPQLP by Schittkowski after 395 evaluations.

25 Further information about this optimization problem can be found in [52,2].

Future research is still mandatory and has to focus on the additional modeling of experience-steered knowledge for strategy agents in terms of rules, on integrating further Artificial Intelligence technologies such as data mining and, also, on an improved usability of the computational steering framework.

Acknowledgement

This work has been supported by the German Research Foundation (DFG).

References

[1] A. Augugliaro, L. Dusonchet, E.R. Sanseverino, An evolutionary parallel tabu search approach for distribution systems reinforcement planning, Advanced Engineering Informatics 16 (3) (2002) 205–215. <http://www.sciencedirect.com/science/article/pii/S1474034602000125>.

[2] M. Baitsch, D. Hartmann, Object-oriented finite element analysis for structural optimization using p-elements, in: K. Beucke, B. Firmenich, D. Donath, R. Fruchter, K. Roddis (Eds.), X. ICCCBE – Digital Conference Proceedings, Weimar, 2004.

[3] M. Baitsch, T. Sikiwat, D. Hartmann, An object-oriented approach to high order finite element analysis of three-dimensional continua, in: C.A. Mota Soares, J.A.C. Martins, H.C. Rodrigues, J.A.C. Ambrósio (Eds.), Proceedings of the IIIrd European Conference on Computational Mechanics: Solids, Structures and Coupled Problems in Engineering, Springer, Lisbon, 2006.

[4] T. Barth, B. Freisleben, M. Grauer, F. Thilo, Distributed solution of simulation-based optimization problems on networks of workstations, Computación y Sistemas 4 (2) (2000) 94–105.

[5] F. Bellifemine, G. Caire, D. Greenwood, Developing Multi-Agent Systems with JADE, Wiley & Sons, 2007.

[6] D. Bertsimas, O. Nohadani, Robust optimization with simulated annealing, Journal of Global Optimization (2009).

[7] D. Bertsimas, O. Nohadani, K.M. Teo, Robust optimization for unconstrained simulation-based problems, Operations Research 58 (1) (2010) 161–178. <http://or.journal.informs.org/cgi/content/abstract/opre.1090.0715v1>.

[8] D. Bertsimas, J. Tsitsiklis, Simulated annealing, Statistical Science 8 (1) (1993) 10–15.

[9] H.-G. Beyer, B. Sendhoff, Robust optimization – a comprehensive survey, Computer Methods in Applied Mechanics and Engineering 196 (33–34) (2007) 3190–3218. <http://dx.doi.org/10.1016/j.cma.2007.03.003>.

[10] E.K. Burke, M. Hyde, G. Kendall, G. Ochoa, E. Özcan, J. Woodward, Handbook of Metaheuristics, second ed., Springer, 2010, pp. 449–468 (Chapter: A Classification of Hyper-heuristic Approaches).

[11] R.H. Byrd, J. Nocedal, R.A. Waltz, Large-Scale Nonlinear Optimization, Springer, 2006, pp. 35–59 (Chapter: KNITRO: An Integrated Package for Nonlinear Optimization).

[12] J.W. Chinneck, Discovering the characteristics of mathematical programs via sampling, Optimization Methods and Software 17 (2) (2002) 319–352.

[13] M. Clerc, Particle Swarm Optimization, ISTE Ltd., 2006.

[14] D.D. Corkill, Blackboard systems, AI Expert 6 (9) (1991) 40–47.

[15] D.D. Corkill, Collaborating software: blackboard and multi-agent systems & the future, in: Proceedings of the International Lisp Conference, New York, 2003.

[16] T.G. Crainic, Metaheuristic Optimization via Memory and Evolution: Tabu Search and Scatter Search, Operations Research/Computer Science Interfaces Series, vol. 30, 2005, pp. 283–302 (Chapter: Parallel Computation, Co-operation, Tabu Search).

[17] T.G. Crainic, M. Toulouse, Explicit and emergent cooperation schemes for search algorithms, in: Learning and Intelligent OptimizatioN Conference, Lecture Notes in Computer Science, vol. 5315, 2008, pp. 95–109.

[18] T.G. Crainic, Y. Li, M. Toulouse, A first multilevel cooperative algorithm for capacitated multicommodity network design, Computers & Operations Research 33 (9) (2006) 2602–2622 (Part Special Issue: Anniversary Focused Issue of Computers & Operations Research on Tabu Search). <http://www.sciencedirect.com/science/article/B6VC5-4H57TFV-1/2/0f6406b782b037b1d72cf3880fe2a20c>.


[19] Drools. <http://www.jboss.org/drools>.

[20] J.J. Durillo, A.J. Nebro, jMetal: a Java framework for multi-objective optimization, Advances in Engineering Software 42 (2011) 760–771. <http://www.sciencedirect.com/science/article/pii/S0965997811001219>.

[21] H.A. Eschenauer, J. Geilen, H.J. Wahl, SAPOP: An Optimization Procedure for Multicriteria Structural Design, Birkhäuser Verlag, Basel, Switzerland, 1993. <http://dl.acm.org/citation.cfm?id=171568.171596>.

[22] E. Gamma, R. Helm, R. Johnson, J. Vlissides, Design Patterns, Addison-Wesley, 1995.

[23] Z.W. Geem, Music-Inspired Harmony Search Algorithm, Springer-Verlag, 2009.

[24] M. Grauer, H.A. Eschenauer, Decomposition and parallelization strategies for solving large-scale MDO problems, GAMM-Mitteilungen 30 (2) (2007) 269–286.

[25] T. Hogg, B.A. Huberman, Better than the best: the power of cooperation, in: L. Nadel, D. Stein (Eds.), 1992 Lectures in Complex Systems, SFI Studies in the Sciences of Complexity, vol. V, Addison-Wesley, Reading, MA, 1993, pp. 165–184.

[26] S.-F. Hwang, R.-S. He, A hybrid real-parameter genetic algorithm for function optimization, Advanced Engineering Informatics 20 (1) (2006) 7–21. <http://www.sciencedirect.com/science/article/pii/S1474034605000807>.

[27] C. Igel, T. Suttorp, N. Hansen, A computational efficient covariance matrix update and a (1+1)-CMA for evolution strategies, in: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2006), 2006, pp. 453–460.

[28] C. Jacob, Illustrating Evolutionary Computation with Mathematica, Morgan Kaufmann Publishers, 2001.

[29] A.A. Javadi, R. Farmani, T.P. Tan, A hybrid intelligent genetic algorithm, Advanced Engineering Informatics 19 (4) (2005) 255–262 (Computing in Civil Engineering). <http://www.sciencedirect.com/science/article/pii/S1474034605000601>.

[30] JMX. <http://java.sun.com/products/javamanagement>.

[31] D.R. Jones, M. Schonlau, W.J. Welch, Efficient global optimization of expensive black-box functions, Journal of Global Optimization 13 (4) (1998) 455–492. <http://www.springerlink.com/content/m5878111m101017p>.

[32] L. Kerçelli, A. Sezer, P. Yolum, Ş.İ. Birbil, F. Öztoprak, MANGO: a multiagent environment for global optimization, in: First International Workshop on Optimisation in Multi-Agent Systems, 2008.

[33] A.J. Kulkarni, K. Tai, Probability collectives: a multi-agent approach for solving combinatorial optimization problems, Applied Soft Computing 10 (3) (2010) 759–771. <http://www.sciencedirect.com/science/article/B6W86-4X8CCTW-1/2/706a223be238eebb7701000e5282f6ac>.

[34] C. Lawrence, J.L. Zhou, A.L. Tits, User's Guide for CFSQP Version 2.5, University of Maryland, 1997.

[35] R.H. Leary, Global optima of Lennard-Jones clusters, Journal of Global Optimization 11 (1997) 35–53. <http://dx.doi.org/10.1023/A:1008276425464>.

[36] lp_solve. <http://lpsolve.sourceforge.net>.

[37] N. Melab, E.-G. Talbi, S. Cahon, On parallel evolutionary algorithms on the computational grid, in: N. Nedjah, L. Mourelle, E. Alba (Eds.), Parallel Evolutionary Computations, Studies in Computational Intelligence, vol. 22, Springer, Berlin/Heidelberg, 2006, pp. 117–132. <http://dx.doi.org/10.1007/3-540-32839-4_6>.

[38] Z. Michalewicz, M. Schoenauer, Evolutionary algorithms for constrained parameter optimization problems, Evolutionary Computation 4 (1) (1996) 1–32.

[39] M. Milano, A. Roli, MAGMA: a multiagent architecture for metaheuristics, IEEE Transactions on Systems, Man and Cybernetics – Part B 34 (2) (2004) 925–941.

[40] A. Nedic, A. Ozdaglar, Distributed subgradient methods for multi-agent optimization, IEEE Transactions on Automatic Control 54 (1) (2009) 48–61. <http://dx.doi.org/10.1109/TAC.2008.2009515>.

[41] V.V. Nguyen, Development of a Framework for Structural Optimization with Evolution Strategies based on the Eclipse Platform, Master's thesis, Lehrstuhl für Ingenieurinformatik im Bauwesen, Ruhr-Universität Bochum, 2005.

[42] G.C. Onwubolu, B.V. Babu, New Optimization Techniques in Engineering, Springer-Verlag, 2004.

[43] J.A. Persson, P. Davidsson, S.J. Johansson, F. Wernstedt, Combining agent-based approaches and classical optimization techniques, in: Proceedings of the European Workshop on Multi-Agent Systems (EUMAS 2005), 2005, pp. 260–269.

[44] J.D. Pinter, Handbook of Global Optimization, vol. 2, Nonconvex Optimization and Its Applications, vol. 62, Springer, 2002, pp. 515–569 (Chapter: Global Optimization: Software, Test Problems, and Applications).

[45] Ptolemy II. <http://ptolemy.eecs.berkeley.edu/ptolemyii>.

[46] B. Raphael, I.F.C. Smith, A direct stochastic algorithm for global search, Applied Mathematics and Computation 146 (2–3) (2003) 729–758. <http://www.sciencedirect.com/science/article/B6TY8-47X6XDC-5/2/1483c1b0d7b512ff6bf010a6ac5d7b85>.

[47] E. Salajegheh, G.N. Vanderplaats, Optimum design of trusses with discrete sizing and shape variables, Structural Optimization 6 (1993) 79–85.

[48] K. Schittkowski, NLPQLP: A Fortran Implementation of a Sequential Quadratic Programming Algorithm with Distributed and Non-Monotone Line Search – User's Guide, Version 2.24, June 2007.

[49] G.I. Schuëller, H.A. Jensen, Computational methods in optimization considering uncertainties – an overview, Computer Methods in Applied Mechanics and Engineering 198 (1) (2008) 2–13. <http://www.sciencedirect.com/science/article/B6V29-4SHF4D2-1/2/341eb6c39a7902978a343732782ac295>.

[50] H.-P. Schwefel, Evolution and Optimum Seeking, John Wiley, 1995.

[51] W. Shang, S. Zhao, Y. Shen, A flexible tolerance genetic algorithm for optimal problems with nonlinear equality constraints, Advanced Engineering Informatics 23 (3) (2009) 253–264. <http://www.sciencedirect.com/science/article/pii/S1474034608000773>.

[52] J. Sienz, E. Hinton, Reliable structural optimization with error estimation, adaptivity and robust sensitivity analysis, Computers & Structures 64 (1–4) (1997) 31–63.

[53] S.K. Stegemann, B. Funk, T. Slotos, A blackboard architecture for workflows, in: CAiSE Forum, 2007.

[54] R. Storn, K. Price, Differential Evolution – A Simple and Efficient Adaptive Scheme for Global Optimization over Continuous Spaces, Technical Report TR-95-012, ICSI, 1995.

[55] T. Stützle, Iterated Local Search for the Quadratic Assignment Problem, Technical Report AIDA-99-03, FG Intellektik, TU Darmstadt, 1999.

[56] E.-G. Talbi, V. Bachelet, COSEARCH: a parallel cooperative metaheuristic, in: C. Blum, A. Roli, M. Sampels (Eds.), First International Workshop on Hybrid Metaheuristics (HM 2004), 2004, pp. 127–140.

[57] E.-G. Talbi, S. Mostaghim, T. Okabe, H. Ishibuchi, G. Rudolph, C. Coello Coello, Parallel approaches for multiobjective optimization, in: J. Branke, K. Deb, K. Miettinen, R. Slowinski (Eds.), Multiobjective Optimization, Lecture Notes in Computer Science, vol. 5252, Springer, Berlin/Heidelberg, 2008, pp. 349–372. <http://dx.doi.org/10.1007/978-3-540-88908-3_13>.

[58] A.S.S.M. Barkat Ullah, R. Sarker, D. Cornforth, C. Lokan, AMA: a new approach for solving constrained real-valued optimization problems, Soft Computing 13 (8–9) (2009) 741–762.

[59] DOT Users Manual Version 5.0, Vanderplaats Research & Development, Inc., 1999.

[60] A. Wächter, An Interior Point Algorithm for Large-Scale Nonlinear Optimization with Applications in Process Engineering, Dissertation, Carnegie Mellon University, 2002.

[61] G. Weichhart, M. Affenzeller, A. Reitbauer, S. Wagner, Modelling of an agent-based schedule optimisation system, in: Proceedings of the IMS International Forum, 2004.

[62] D.H. Wolpert, W.G. Macready, No free lunch theorems for optimization, IEEE Transactions on Evolutionary Computation 1 (1) (1997) 67–82.

[63] M. Wooldridge, An Introduction to Multiagent Systems, Wiley, 2009.

[64] X.-F. Xie, J. Liu, Graph coloring by multiagent fusion search, Journal of Combinatorial Optimization 18 (2) (2009) 99–123.

[65] X.-F. Xie, J. Liu, Multiagent optimization system for solving the traveling salesman problem (TSP), IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics 39 (2) (2009) 489–502.

[66] I. Zelinka, SOMA – self-organizing migrating algorithm, in: New Optimization Techniques in Engineering, Springer, 2004, pp. 168–217.

[67] C. Zillober, Software Manual for SCPIP 3.0, November 2004.