Classifier-assisted Constrained Evolutionary Optimization for Automated Geometry Selection of Orthodontic Retraction Spring

Dudy Lim, Yew-Soon Ong, Rachman Setiawan, and Muhammad Idris
Abstract—In orthodontics, retraction springs made of metallic wires are often used to move a tooth with respect to another by virtue of the spring-back effect. A specially selected spring form may produce the precise force and moment required to move the tooth in a direction that suits a particular patient. In current practice, the geometry is still selected manually by orthodontists, and no substantial automation of this process has been proposed to date. In this paper, we experiment with automated geometry selection of the orthodontic retraction spring using constrained evolutionary optimization. In particular, a Classifier-assisted Constrained Memetic Algorithm (CCMA) is designed for this purpose. The main feature of CCMA lies in its ability to identify appropriate spring structures that should undergo further refinement, using a classifier system to perform the inference. A comparison to the baseline canonical Genetic Algorithm (GA) and Memetic Algorithm (MA) further highlights the efficacy of the proposed approach. In addition, to assert the robustness of CCMA for general complex design, further studies on commonly used constrained benchmark problems and existing constrained evolutionary optimization methods are also reported in the paper.

I. INTRODUCTION

One important apparatus in the field of orthodontics is the metallic-wired retraction spring, formed to suit individual orthodontic cases. It is used to retract or move a tooth with respect to another by virtue of the spring-back effect. The parameters of the selected spring result in a unique force system, consisting of forces and moments, that moves the tooth in a certain direction. Currently, the geometry selection in this process still relies on manual selection by orthodontists. Hence, automation in the form of optimization would be beneficial and desirable to improve the efficiency of this process.

Early efforts towards solving design optimization are mostly based on pure mathematical analysis. For instance, in analytical constrained optimization, the Kuhn-Tucker (K-T) necessary conditions for optimality are defined and then solved for a candidate optimal solution [1].

Manuscript received February 4, 2010.

Dudy Lim is with the Centre for Computational Intelligence (C2i), School of Computer Engineering, Nanyang Technological University, Nanyang Avenue, Singapore 639798 (e-mail: [email protected]).

Yew-Soon Ong is the Director of the Centre for Computational Intelligence (C2i) at the School of Computer Engineering, Information System Division, Nanyang Avenue, Singapore 639798 (e-mail: [email protected]).

Rachman Setiawan and Muhammad Idris are with the Mechanical Engineering Design Research Division, Faculty of Mechanical & Aerospace Engineering, Institut Teknologi Bandung, West Java, Indonesia (e-mail: {rachmans, idris13}@edc.ms.itb.ac.id).

For a problem with objective function f(x), inequality constraints g(x), and equality constraints h(x), the Lagrange function to be minimized, instead of f(x), is defined as:

$$L(\mathbf{x}, \mathbf{u}, \mathbf{v}, \mathbf{s}) = f(\mathbf{x}) + \sum_{i=1}^{n_g} u_i \left( g_i(\mathbf{x}) + s_i^2 \right) + \sum_{i=1}^{n_h} v_i h_i(\mathbf{x}) \qquad (1)$$

where u and v are Lagrange multipliers, while s is a vector of slack variables that determines whether the inequality constraints are active. The i-th inequality constraint is active if $g_i(\mathbf{x}) = 0$. Based on the K-T necessary conditions, there exist u and v vectors at the stationary/optimum point x*, such that:

$$\frac{\partial L}{\partial x_j} \equiv \frac{\partial f}{\partial x_j} + \sum_{i=1}^{n_g} u_i^* \frac{\partial g_i}{\partial x_j} + \sum_{i=1}^{n_h} v_i^* \frac{\partial h_i}{\partial x_j} = 0;$$

$$h_i(\mathbf{x}^*) = 0, \quad i = 1, \dots, n_h;$$

$$g_i(\mathbf{x}^*) + s_i^2 = 0, \quad i = 1, \dots, n_g;$$

$$u_i^* s_i = 0, \quad i = 1, \dots, n_g;$$

$$u_i^* \geq 0, \quad i = 1, \dots, n_g \qquad (2)$$

where $n_g$ and $n_h$ denote the number of inequality and equality constraints, respectively.

However, due to the moderately high dimensionality of the design variables and the complex geometry constraints involved in the problem, such paper-based analytical optimization is clearly not convenient to adopt in practice.
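As a concrete illustration of the K-T machinery above, the following sketch solves a one-variable toy problem symbolically, assuming the single inequality constraint is active at the optimum. The problem (minimize f(x) = x² subject to g(x) = 1 − x ≤ 0) is an illustrative assumption and not taken from the paper.

```python
# Toy K-T example (illustrative, not from the paper): minimize f(x) = x^2
# subject to g(x) = 1 - x <= 0. Assuming the constraint is active, the
# stationarity and feasibility conditions are solved symbolically.
import sympy as sp

x, u = sp.symbols('x u', real=True)
f = x**2
g = 1 - x                      # inequality constraint, g(x) <= 0
L = f + u * g                  # Lagrange function with multiplier u

# Stationarity dL/dx = 0 together with the active constraint g(x) = 0
sol = sp.solve([sp.diff(L, x), g], [x, u], dict=True)[0]
# sol[x] = 1 and sol[u] = 2 >= 0, so the K-T conditions hold at x* = 1
```

Since the recovered multiplier u* = 2 is non-negative, the candidate point satisfies all conditions in Eq. (2).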

Beyond the analytical methods, deterministic numerical optimizers were developed to tackle the above shortcomings. The common idea behind these optimizers is to systematically iterate from an initial design and improve it until the optimality conditions are met, rather than to solve a series of complex equations analytically. Some renowned optimizers belonging to this class are the steepest descent, conjugate gradient, quadratic programming, and linear approximation methods [2]. However, being deterministic, a relatively good starting point must be carefully selected to locate the global optimum; otherwise they only reach a local optimum. Further, the unavailability of accurate gradient information, as well as noisy and multimodal functional landscapes, may also reduce their effectiveness.

The last few decades have also been marked with much prominent advancement in the field of optimization study. Among those are a group of stochastic numerical optimization algorithms inspired by Darwin’s theory of evolution, collectively known as Evolutionary Algorithms (EAs) [3]. Based on Darwin’s survival of the fittest principle,


EAs evolve a population of individuals representing candidate solutions to a given problem, which compete for survival. To date, EAs have emerged as a powerful paradigm for global optimization, solving optimization problems characterized by high-dimensional, non-separable, multimodal, constrained, and non-differentiable fitness landscapes, which are often regarded as hard to handle by their deterministic counterparts. In recent decades, almost all forms of successful stochastic optimization algorithms, including meta-heuristics and evolutionary algorithms, involve some form of lifetime learning or meme in their design. In particular, the hybridization of population-based and local heuristic search methodologies, commonly known as Memetic Algorithms (MAs) [4][5][6][7], now represents one of the most popular and fastest growing areas of memetic computing research, where many success stories on real-world applications have been reported [8][9][10][11].

In this paper, to achieve the purpose of automating the retraction spring geometry selection, we perform optimization using a Classifier-assisted Constrained MA (CCMA). It is worth noting that the main feature of CCMA lies in the ability to identify appropriate spring structures that should proceed with further refinement for better alternatives, using a classifier system to perform the inference.

The rest of this paper is structured as follows. In Section II, the retraction spring geometry problem is defined and a brief literature review on constrained evolutionary optimization is presented. Subsequently, Section III introduces the proposed Classifier-assisted Constrained MA (CCMA). Section IV, which forms the core contribution of this paper, presents the use of CCMA for retraction spring geometry selection with a comparative study against alternative evolutionary approaches, including the basic GA and MA. Besides the real-world spring geometry selection problem, this section also provides empirical results of CCMA on some representative constrained benchmark problems, to further assert the robustness of the approach in a variety of complex scenarios. Finally, Section V concludes this paper and outlines several interesting future works.

II. PROBLEM DEFINITION AND LITERATURE REVIEW

A. Problem Definition

In the field of orthodontics, an ideal retraction spring design is needed to achieve efficient tooth treatment, i.e., tooth movement that can be controlled towards the desired location. Many retraction spring designs have been developed by researchers, one of which is the T-Loop structure.

Fig. 1 shows the application of the T-Loop to retract or move a tooth. The forces (Fx, Fy) and moment are produced by the T-Loop after activation is applied. Without loss of generality, this paper focuses on the optimum design of the T-Loop type retraction spring, using an objective function derived analytically via Castigliano's theorem [12][13].

In principle, the retraction effect generates an axial force (Fx) and a bending moment (Mz) at the edge of the wire (support/bracket). Using Castigliano's theorem, the angular deflection θ is formulated as a function of the strain energy (U) and moment (Mz) as:

$$\theta = \frac{\partial U}{\partial M_z} \qquad (3)$$

whereas the activation or linear deflection $u_x$ is a function of the strain energy (U) and force (Fx), i.e.:

$$u_x = \frac{\partial U}{\partial F_x} \qquad (4)$$

Next, the total strain energy $U_t$, obtained by summing the strain energy of each wire section, is described as:

$$U_t = \sum_{n=1}^{n_{section}} \frac{1}{2EI} \int_0^{l_n} M_n^2 \, dl_n \qquad (5)$$

where E and I are the modulus of elasticity and the moment of inertia, respectively, $M_n$ is the moment equation for each wire section, and n denotes the section index. Further, the $U_t$ equation can be simplified by denoting the coefficients of the $M_z^2$, $2 M_z F_x$, and $F_x^2$ terms as $A_t$, $B_t$, and $C_t$, respectively, which results in the strain energy equation:

$$U_t = \frac{1}{2EI} \left( A_t M_z^2 + 2 B_t M_z F_x + C_t F_x^2 \right) \qquad (6)$$

The theoretical solution for the T-Loop can be determined by developing the moment equation for each wire section. The configuration of each wire section is illustrated in Fig. 2, and each equation is shown in Table 1.

Fig. 1. Force system produced by T-Loop [14]

Fig. 2. Geometry of T-Loop

Table 1. Moment equation derivation for the T-Loop; n denotes the section index for Eqn. (5), n = 1-9.

n | Moment equation ($M_n$)
1 | $M_1 = M_z + F_x l_1 \sin\theta$
2 | $M_2 = M_z + F_x (L_1 - l_2) \sin\theta$
3 | $M_3 = M_z - F_x (L_2 - L_1 \sin\theta)$
4 | $M_4 = M_z - F_x (L_2 - L_1 \sin\theta + R - R\cos\theta)$
5 | $M_5 = M_z - F_x (L_2 - L_1 \sin\theta + 2R)$
6 | $M_6 = M_z - F_x (L_2 - L_1 \sin\theta + 2R - R\cos\phi - R)$
7 | $M_7 = M_z - F_x (L_2 - L_1 \sin\theta)$
8 | $M_8 = M_z - F_x (L_2 - L_1 \sin\theta - l_6)$
9 | $M_9 = M_z - F_x ((L_2 - L_6 - L_1 \sin\theta) - l_7 \sin\theta)$

Meanwhile, the displacement is the first derivative of the strain energy with respect to $F_x$. It is basically the activation distance $u_x$ of the wire, which can be derived mathematically as:

$$u_x = \frac{\partial U_t}{\partial F_x} = \frac{1}{2EI} \left( 2 B_t M_z + 2 C_t F_x \right) \qquad (7)$$

The angular displacement is the first derivative of the strain energy with respect to the moment. This displacement is applied for positioning the teeth. When the wire is mounted on the bracket, the gable angle will be zero. Mathematically, this is described as follows:

$$\theta = \frac{\partial U_t}{\partial M_z} = \frac{1}{2EI} \left( 2 A_t M_z + 2 B_t F_x \right) \qquad (8)$$
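A quick symbolic check, assuming the strain energy of Eq. (6) takes the quadratic form $U_t = (A_t M_z^2 + 2 B_t M_z F_x + C_t F_x^2)/(2EI)$, confirms that Eqs. (7) and (8) are indeed its partial derivatives:

```python
# Symbolic sanity check: Eqs. (7) and (8) as partial derivatives of Eq. (6).
import sympy as sp

Fx, Mz, At, Bt, Ct, EI = sp.symbols('F_x M_z A_t B_t C_t EI', positive=True)
U_t = (At * Mz**2 + 2 * Bt * Mz * Fx + Ct * Fx**2) / (2 * EI)   # Eq. (6)

u_x = sp.diff(U_t, Fx)        # should reproduce Eq. (7)
theta = sp.diff(U_t, Mz)      # should reproduce Eq. (8)

assert sp.simplify(u_x - (2 * Bt * Mz + 2 * Ct * Fx) / (2 * EI)) == 0
assert sp.simplify(theta - (2 * At * Mz + 2 * Bt * Fx) / (2 * EI)) == 0
```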

Eqns. (7) and (8) can be solved in matrix form to find the $F_x$ and $M_z$ solution, as follows:

$$\frac{1}{2EI} \begin{bmatrix} 2 C_t & 2 B_t \\ 2 B_t & 2 A_t \end{bmatrix} \begin{Bmatrix} F_x \\ M_z \end{Bmatrix} = \begin{Bmatrix} u_x \\ \theta \end{Bmatrix} \qquad (9)$$
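Numerically, Eq. (9) is a 2x2 linear system; the sketch below solves it with NumPy, assuming the symmetric matrix form shown in the code. The values of A_t, B_t, and C_t are placeholder assumptions for demonstration only (in the paper they follow from integrating the section moments of Table 1), while E and I are taken from Table 2.

```python
# Solving the 2x2 system of Eq. (9) for F_x and M_z; coefficient values are
# hypothetical placeholders, while E and I follow Table 2.
import numpy as np

EI = 2e11 * 3.1e-15                      # E * I from Table 2
A_t, B_t, C_t = 2.0e-2, 1.5e-4, 2.5e-6   # assumed coefficients (not from paper)
u_x, theta = 1.0e-3, 0.0                 # 1 mm activation, zero gable angle

K = np.array([[2 * C_t, 2 * B_t],
              [2 * B_t, 2 * A_t]]) / (2 * EI)
Fx, Mz = np.linalg.solve(K, np.array([u_x, theta]))
R_t = Mz / Fx                            # analytical moment-to-force ratio
```

The resulting ratio R_t is what the objective function of Eq. (10) below compares against the target ratio R_a.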

Finally, the optimization problem can be defined as minimizing the squared relative error between the analytical ratio (Rt) and the actual ratio (Ra):

Objective function:

$$\text{minimize} \quad F(L_1, \dots, L_7, R) = \left( \frac{R_t - R_a}{R_a} \right)^2 \qquad (10)$$

where $R_t(L_1, \dots, L_7, R) = M_z / F_x$, with $F_x$ and $M_z$ obtained from:

$$\begin{Bmatrix} F_x \\ M_z \end{Bmatrix} = 2EI \begin{bmatrix} 2 C_t & 2 B_t \\ 2 B_t & 2 A_t \end{bmatrix}^{-1} \begin{Bmatrix} u_x \\ \theta \end{Bmatrix} \qquad (11)$$

Subject to:

Equality constraints:
$$h_1(\mathbf{L}) = L_3 - L_5 = 0$$
$$h_2(\mathbf{L}) = (L_4 - L_3 + L_5) - e = 0$$
$$h_3(\mathbf{L}) = (L_2 - L_6) - d = 0$$

Inequality constraints:
$$g_1(\mathbf{L}) = L_1 + L_7 + e \leq L_t$$
$$g_2(\mathbf{L}) = 2 L_6 + L_8 \leq H_t$$

Bound constraints:
$$4.5 \times 10^{-3} \leq L_1 \leq 7 \times 10^{-3} \text{ m}$$
$$4 \times 10^{-3} \leq L_2 \leq 6 \times 10^{-3} \text{ m}$$
$$3.5 \times 10^{-3} \leq L_3 \leq 5 \times 10^{-3} \text{ m}$$
$$8 \times 10^{-3} \leq L_4 \leq 16 \times 10^{-3} \text{ m}$$
$$3.5 \times 10^{-3} \leq L_5 \leq 5 \times 10^{-3} \text{ m}$$
$$4 \times 10^{-3} \leq L_6 \leq 6 \times 10^{-3} \text{ m}$$
$$4.5 \times 10^{-3} \leq L_7 \leq 7 \times 10^{-3} \text{ m}$$
$$0.5 \times 10^{-3} \leq R \leq 1.5 \times 10^{-3} \text{ m}$$
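To make the constraint set concrete, the sketch below implements a simple feasibility check over the bound constraints together with h1 (L3 = L5) and g1 (L1 + L7 + e <= Lt), the two most clearly stated constraints; the constants e and L_t follow Table 2, and the candidate point in the test is arbitrary.

```python
# Feasibility check for the spring design (bounds plus h1 and g1 only);
# e and L_t are taken from Table 2. All quantities are in metres.
E_GAP, L_TOTAL = 0.5e-3, 20e-3

BOUNDS = {
    'L1': (4.5e-3, 7e-3), 'L2': (4e-3, 6e-3),   'L3': (3.5e-3, 5e-3),
    'L4': (8e-3, 16e-3),  'L5': (3.5e-3, 5e-3), 'L6': (4e-3, 6e-3),
    'L7': (4.5e-3, 7e-3), 'R': (0.5e-3, 1.5e-3),
}

def is_feasible(x, tol=1e-9):
    """x: dict of design variables; True if bounds, h1, and g1 are satisfied."""
    if any(not lo <= x[k] <= hi for k, (lo, hi) in BOUNDS.items()):
        return False
    if abs(x['L3'] - x['L5']) > tol:              # h1: L3 - L5 = 0
        return False
    return x['L1'] + x['L7'] + E_GAP <= L_TOTAL   # g1: L1 + L7 + e <= Lt
```

For instance, a mid-range candidate with L3 = L5 passes the check, while the same candidate with L3 != L5 fails on h1.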

Table 2. Geometries and material of the wire.

Parameter | Magnitude
Material | SS, Stainless Steel
Modulus of elasticity, E | 2 x 10^11 N/m^2
Width, B | 0.5588 x 10^-3 m
Height, H | 0.4046 x 10^-3 m
Cross section, A | 2.3 x 10^-9 m^2
Moment of inertia, I | 3.1 x 10^-15 m^4
Gable angle, θ | 0°
Moment-to-force ratio (M/F), Ra | 3 x 10^-3 m
Gap distance, e | 0.5 x 10^-3 m
Total length, Lt | 20 x 10^-3 m
Total height, Ht | 15 x 10^-3 m
Offset, d | 1 x 10^-3 m

B. Literature Review on Constrained Evolutionary Optimization

As far as constrained design problems are concerned, some prominent techniques reported using evolutionary optimizers are summarized as follows:

- Penalty-based methods. The most common approach for evolutionary constraint handling is to penalize infeasible solutions. Instead of minimizing f(x), the optimizer's task now is to minimize f_c(x) = f(x) + ρ, where ρ denotes a positive penalty term if x is infeasible. Different penalty-based methods have been proposed in the literature, namely: death, static, dynamic, adaptive, and self-adaptive penalties. The death penalty simply limits the evolutionary search to the feasible regions by rejecting all infeasible solutions generated [15]. Static penalty methods penalize infeasible solutions based on the degree of constraint violation [16]. Dynamic penalty methods, on the other hand, use penalty terms that change over time [17]. Adaptive penalty methods decide on the penalty magnitude based on feedback from the evolutionary search [18]. Last but not least, self-adaptive penalty methods encode all possible penalty parameters into the chromosome of a candidate solution and evolve them together with the design vector [19].

- Repair-based methods. Another popular approach in evolutionary constrained optimization is to repair infeasible solutions [20]. As the name suggests, the basic idea is to map an infeasible solution into a feasible counterpart. This could be achieved via domain knowledge, or by going through several alternative solutions using heuristics or even greedy algorithms, to find a feasible solution associated with the particular infeasible solution.

- Ranking-based methods. Two well-known ranking schemes proposed in the literature for this purpose are the deterministic [21] and stochastic [22] ranking schemes. The deterministic ranking scheme can be summarized by the following three rules: 1) a feasible solution is preferred over an infeasible one; 2) between any two feasible solutions, the one with the better objective value is preferred; and 3) between any two infeasible solutions, the one with less constraint violation is preferred. Stochastic ranking, on the other hand, introduces randomness into the comparison criteria, i.e., whether the objective value or the constraint violation is used for comparison.

- Multi-Objective (MO)-based methods. The basic idea behind these methods is to treat constraints as objectives in the MO context. Hence, an original problem with $n_f$ objectives, $n_g$ inequality constraints, and $n_h$ equality constraints can be redefined as a multi-objective problem with $n_f + 1$ or $n_f + n_g + n_h$ objectives. In [23], constraints are treated as many objectives in an optimization framework. On the other hand, [24] proposed to use the aggregated constraints as a single additional objective.

- Hybridization-based methods. Besides the abovementioned classes of algorithms, there has also been a recent trend towards hybridization or interplay with machine learning. In [25], regression models of the objective and constraint functions are used to perform a so-called approximate ranking scheme, where expensive evaluations are performed only when the rank induced after a model update changes. Intriguing efforts in [26][27] utilize a Support Vector Machine classifier to model the feasibility structure of a problem, i.e., whether candidate solutions fall within the feasible region, near the feasibility boundary, or within the infeasible region, while enhancing search efficiency.
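As a minimal sketch of the penalty-based idea, a static penalty formulation f_c(x) = f(x) + ρ can be written as follows; the function names, the penalty coefficient μ, and the toy problem in the example are illustrative assumptions, not taken from the cited works.

```python
# Static penalty sketch: f_c(x) = f(x) + rho, where rho accumulates the
# violation of inequality (g_i(x) <= 0) and equality (h_j(x) = 0) constraints.
def penalized_fitness(f, gs, hs, x, mu=1e6, tol=1e-6):
    violation = sum(max(0.0, g(x)) for g in gs)              # g_i(x) <= 0
    violation += sum(max(0.0, abs(h(x)) - tol) for h in hs)  # |h_j(x)| <= tol
    return f(x) + mu * violation

# Example: minimize x^2 subject to 1 - x <= 0
f = lambda x: x[0] ** 2
gs = [lambda x: 1 - x[0]]
penalized_fitness(f, gs, [], [2.0])   # feasible point: plain objective, 4.0
penalized_fitness(f, gs, [], [0.0])   # infeasible point: 0.0 + 1e6 * 1.0
```

Death, dynamic, adaptive, and self-adaptive penalties differ essentially in how μ is chosen or evolved over the run.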

III. CLASSIFIER-ASSISTED CONSTRAINED MEMETIC ALGORITHM

A. Memetic Algorithms (MAs)

MAs are population-based meta-heuristic search methods that are inspired by Darwinian principles of natural evolution and Dawkins' notion of a meme, defined as a unit of cultural evolution capable of local refinements [4][5]. In its simplest form, a conventional MA, which integrates local search procedures into an EA, can be formulated as in Algorithm 1.

________________________________________________
Algorithm 1. Memetic Algorithm
________________________________________________
1: Generate and evaluate a population of design vectors
2: while termination condition is not satisfied do
3:   Generate offspring population using evolutionary operators
4:   for each offspring x chosen for refinement do
5:     Apply local search to find an improved solution, x_opt
6:     Perform replacement using Lamarckian learning, i.e.,
7:     if f(x_opt) < f(x) then
8:       x = x_opt
9:     end if
10:  end for
11: end while
________________________________________________

While canonical EAs are generally known to be capable of exploring and exploiting promising regions of the search space, they can take a relatively long time to locate the exact local optimum with high precision. MAs, on the other hand, mitigate this issue via the combination of global exploration and local exploitation.
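The steps of Algorithm 1 can be sketched on a bound-constrained toy problem as follows; the truncation selection, Gaussian mutation, and stochastic hill-climbing local search are deliberately simple stand-ins assumed for illustration, not the operators used in the paper.

```python
# Minimal memetic algorithm in the spirit of Algorithm 1: evolutionary
# offspring generation plus Lamarckian local refinement of some offspring.
import random

def local_search(f, x, step=0.1, iters=10):
    # Naive stochastic hill climbing standing in for a gradient-based solver
    best = list(x)
    for _ in range(iters):
        cand = [xi + random.uniform(-step, step) for xi in best]
        if f(cand) < f(best):
            best = cand
    return best

def memetic_algorithm(f, dim=2, pop_size=20, gens=30, refine_frac=0.2):
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=f)
        parents = pop[:pop_size // 2]              # truncation selection
        offspring = [[xi + random.gauss(0, 0.3) for xi in random.choice(parents)]
                     for _ in range(pop_size)]
        for i in range(int(refine_frac * pop_size)):
            x_opt = local_search(f, offspring[i])  # refinement
            if f(x_opt) < f(offspring[i]):         # Lamarckian replacement
                offspring[i] = x_opt
        pop = offspring
    return min(pop, key=f)
```

On the sphere function f(x) = Σ x_i², the best individual typically converges to within mutation-noise distance of the origin after a few dozen generations.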

B. Classifier-assisted Constrained MA (CCMA)

One classical challenge in MA design lies in identifying the appropriate individuals that should undergo local refinement. Being able to do so improves the efficiency of the search. To date, the typically naïve solutions to this problem are random, sampling-based, or probabilistic selection [6]. However, it is worth noting that most of these efforts have concentrated on unconstrained or only bound-constrained problems. In contrast to earlier works that concentrated on regression meta-models for enhancing the search [8][9], here we propose a classifier-assisted memetic algorithm designed specifically for non-linear constrained optimization.

In the context of constrained optimization, knowledge of the feasible-infeasible separation boundaries would assist designers in paying more attention to promising regions where good solutions may reside, while avoiding the many computationally intensive objective/fitness evaluations that evolutionary methods often demand. The fact that globally optimum solutions are often located near the feasible-infeasible separation boundaries, where one or more constraints are active, further justifies the significance of making such knowledge available. Since there exist only two classes of solutions, i.e., either feasible or infeasible, a binary classification system may be easily formulated for this purpose. Taking this cue, we propose here a Classifier-assisted Constrained Memetic Algorithm (CCMA).

CCMA begins by initializing and evaluating a population of candidate solutions using a Design of Experiment (DOE) technique. All evaluated individuals are archived into a database as training inputs for building the classifier, based on their feasibility condition. At this stage, it is possible that no classifier is built if the database has only archived one existing class, i.e., all data are either feasible or infeasible. Subsequently, the search proceeds similar to a canonical MA. In the local refinement phase, if a classifier exists, it is used to test whether an individual should undergo refinement. In particular, local refinement is performed only on misclassified individuals. However, if no classifier is available due to insufficient data archived at runtime, local refinement is performed on η randomly selected individuals. The workflow of CCMA is detailed in Algorithm 2.

________________________________________________
Algorithm 2. Classifier-assisted Constrained Memetic Algorithm
________________________________________________
1: Generate and evaluate a population of design vectors
2: Update database and classifier with every newly evaluated design vector, x
3: while computational budget is not exhausted do
4:   Generate offspring population using evolutionary operators
5:   Evaluate offspring population
6:   Update database and classifier with every newly evaluated design vector, x
7:   if classifier exists then
8:     Test offspring population against classifier
9:   end if
10:  if no misclassified offspring OR classifier does not exist then
11:    Perform local search on η random individuals
12:  else
13:    Perform local search on misclassified individuals
14:  end if
15:  Update database and classifier with every newly evaluated design vector, x
16: end while
________________________________________________

Note that every newly evaluated design vector during the search updates the database only if it is misclassified by the existing classifier. This makes good sense, since the inclusion of such new design vectors may potentially induce changes to the existing classifier. Figs. 3 and 4 illustrate two cases where the database update might trigger changes to the separation boundary. In the first case, depicted in Fig. 3, the new design vectors lie near the separation boundary; hence it can be shown that their inclusion alters the corresponding separation boundary. On the other hand, Fig. 4 depicts the case where the new design vectors lie in previously unexplored regions; hence the separation boundary defined by the classifier is updated with the newly learned knowledge. In this manner, the proposed CCMA facilitates the exploration of uncertain regions, i.e., the separation boundaries and previously unexplored regions, which not only serves to guide the evolutionary search towards good quality solutions, but also allows better discovery of knowledge about the feasible-infeasible boundaries.
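The refinement-selection rule at the heart of CCMA can be sketched as follows; here a 1-nearest-neighbour classifier is a stand-in for the feed-forward ANN used in the experiments, and all function names are illustrative assumptions.

```python
# Select offspring for local refinement: refine the ones the feasibility
# classifier misclassifies; fall back to eta individuals when no classifier
# can be built (only one class archived) or when nothing is misclassified.
def nn_predict(database, x):
    """database: list of (vector, label); label 1 = feasible, 0 = infeasible."""
    _, label = min(database,
                   key=lambda rec: sum((a - b) ** 2 for a, b in zip(rec[0], x)))
    return label

def select_for_refinement(database, offspring, is_feasible, eta=2):
    if len({label for _, label in database}) < 2:
        return offspring[:eta]          # no classifier: fall back to eta picks
    wrong = [x for x in offspring if nn_predict(database, x) != is_feasible(x)]
    return wrong if wrong else offspring[:eta]
```

The misclassified individuals are exactly those lying in uncertain regions of the feasibility model, which is why they are the ones worth spending local-search effort on.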

IV. EMPIRICAL STUDY

A. Experiments on the Retraction Spring Problem

The performance of CCMA is compared against the canonical Genetic Algorithm and Memetic Algorithm with Deterministic Ranking (GA-DR and MA-DR) on the orthodontic retraction spring problem defined in Section II. The algorithms considered are briefly discussed as follows:

• GA-DR. This is the baseline Genetic Algorithm on which CCMA builds. The deterministic ranking used, as explained in Section II-B, is based on 3 simple rules: 1) a feasible solution is preferred over an infeasible one; 2) between any two feasible solutions, the solution with the better objective value is preferred; and 3) between any two infeasible solutions, the solution with less constraint violation is preferred.
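The three deterministic-ranking rules translate directly into a sort key, assuming each candidate is summarized by its objective value and an aggregate constraint violation (zero when feasible):

```python
# Deterministic ranking as a sort key: feasible solutions first (ordered by
# objective value), then infeasible solutions (ordered by total violation).
def ranking_key(candidate):
    objective, violation = candidate
    return (0, objective) if violation == 0.0 else (1, violation)

population = [(3.0, 0.0), (1.0, 0.5), (2.0, 0.0), (9.0, 0.1)]
ranked = sorted(population, key=ranking_key)
# ranked: [(2.0, 0.0), (3.0, 0.0), (9.0, 0.1), (1.0, 0.5)]
```

Note that the infeasible candidate with the best raw objective (1.0) still ranks last because its violation is largest, which is exactly the behaviour rule 3 prescribes.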

• MA-DR. This is a hybridization of GA-DR with a local refinement procedure, or equivalently the CCMA without any classifier assistance in the search, i.e., a canonical MA. Consistent with GA-DR, the deterministic ranking scheme is also used.

• CCMA. The Classifier-assisted Constrained MA described in Section III-B. In particular, the traditional 3-layer (input, hidden, output) feed-forward Artificial Neural Network (ANN) with back-propagation learning and the Sequential Quadratic Programming (SQP) are used as the classifier and local search techniques, respectively. Note that without loss of generality, other forms of classifiers and local solvers may be employed in the proposed algorithm.

The parameters in Table 3 are used for GA-DR, MA-DR, and CCMA. For a fair comparison, 30 independent runs are conducted for each algorithm. The results obtained by the 3 algorithms are summarized in Table 4.

From the results, several important observations can be drawn. Firstly, it is obvious that the two variants of MA considered, i.e., MA-DR and CCMA, outperformed the baseline GA-DR optimizer. In particular, GA-DR could not locate any feasible solution within the first 1.0E+03 fitness evaluations. In contrast, both MA-DR and CCMA had converged to near-optimal fitness values for the same number of fitness evaluations (see Table 4). This is a typical problem of canonical GAs in solving constrained optimization problems, where they are unable to quickly identify feasible solutions at precisions comparable to MAs.

Meanwhile, between the two MA variants, through the additional classifier-assisted mechanism for selecting appropriate individuals to undergo refinement, CCMA is observed to outperform the canonical MA-DR as the search progresses. The best optimized structures designed by GA-DR, MA-DR, and CCMA are depicted in Figs. 5, 6, and 7, with fitness values of 4.9694E-09, 5.7483E-19, and 6.5208E-21, respectively.

Table 3. Experimental settings for GA-DR, MA-DR, and CCMA.

Parameters of GA-DR, MA-DR, and CCMA
Population size | 100
Crossover probability | 0.9
Mutation probability | 0.1
Maximum number of evaluations | 1.0E+04
Evolutionary operators | uniform crossover & mutation, elitism, and deterministic ranking selection
Number of independent runs | 30

Parameters of MA-DR and CCMA
Local search iterations | 10
Number of random individuals undergoing local search | 10% of population size

Table 4. Results obtained by GA-DR, MA-DR, and CCMA on the orthodontic retraction spring problem after 1.0E+03, 5.0E+03, 7.5E+03, and 1.0E+04 evaluations, respectively.

Method | Evaluation Count | Mean | Standard Deviation
GA-DR | 1.0E+03 | N.A | N.A
GA-DR | 5.0E+03 | 1.85E-05 | 2.70E-05
GA-DR | 7.5E+03 | 8.48E-06 | 2.11E-05
GA-DR | 1.0E+04 | 6.36E-06 | 2.14E-05
MA-DR | 1.0E+03 | 6.61E-11 | 2.93E-10
MA-DR | 5.0E+03 | 2.17E-15 | 4.16E-15
MA-DR | 7.5E+03 | 4.08E-16 | 5.91E-16
MA-DR | 1.0E+04 | 2.86E-16 | 3.09E-16
CCMA | 1.0E+03 | 7.77E-11 | 2.57E-10
CCMA | 5.0E+03 | 2.35E-15 | 4.74E-15
CCMA | 7.5E+03 | 7.40E-16 | 1.85E-15
CCMA | 1.0E+04 | 4.02E-17 | 8.09E-17

Fig. 5. Best design obtained by GA-DR, with fitness value of 4.9694E-09.

Fig. 6. Best design obtained by MA-DR, with fitness value of 5.7483E-19.

Fig. 7. Best design obtained by CCMA, with fitness value of 6.5208E-21.

Fig. 3. Classifier update near the separation boundary: (a) original separation boundary, (b) updated separation boundary.

Fig. 4. Classifier update in the unexplored region: (a) original separation boundary, (b) updated separation boundary.

B. Experiments on Benchmark Problems

To better assert the performance of the proposed CCMA, commonly used representative constrained benchmark problems (see Appendix) are used to pit CCMA against 4 existing constrained evolutionary algorithms, namely, Evolution Strategy with Stochastic Ranking (ES-SR) [22], Simple Multi-membered Evolution Strategy (SMES) [28], Adaptive Tradeoff Model with Evolution Strategy (ATMES) [29], and the multi-objective Hybrid Constrained Optimization EA (HCOEA) [30]. Note that the results for these 4 algorithms are taken directly from the respective sources in the literature, without any re-runs. In the previous experiment on the orthodontic spring problem, we did not present a comparison of CCMA with these 4 algorithms due to the unavailability of their original codes.

The parametric configurations tabulated in Table 3 are also used here in the study of the robustness of the CCMA, with the exception of the maximum number of evaluations used for search termination. Instead, 2.4E+05 (SMES, ATMES, and HCOEA) or 3.5E+05 (ES-SR) objective evaluations are used, to be consistent with the literature and for the sake of a fair comparison.

Preliminary results obtained by CCMA and the other 4 algorithms are summarized in Table 5, while the t-tests at the 95% confidence level are tabulated in Table 6. It is worth noting that, statistically, CCMA is shown to perform competitively with, if not better than, most of the existing algorithms reported in the literature. In several instances, including F3 and F4, CCMA is observed to outperform SMES and ATMES, and ES-SR, respectively.

Table 5. Results obtained by CCMA, ES-SR, SMES, ATMES, and HCOEA on the benchmark problems after 2.4E+05 evaluation count (3.5E+05 for ES-SR).

Benchmark Problem | Algorithm | Mean      | Standard Deviation
F1                | CCMA      | -1        | 0.00E+00
F1                | ES-SR     | -1        | 1.90E-04
F1                | SMES      | -1        | 2.09E-04
F1                | ATMES     | -1        | 5.90E-05
F1                | HCOEA     | -1        | 1.30E-12
F2                | CCMA      | -30665.54 | 0.00E+00
F2                | ES-SR     | -30665.54 | 2.00E-05
F2                | SMES      | -30665.54 | 0.00E+00
F2                | ATMES     | -30665.54 | 7.40E-12
F2                | HCOEA     | -30665.54 | 5.40E-07
F3                | CCMA      | 680.630   | 2.23E-04
F3                | ES-SR     | 680.625   | 3.40E-02
F3                | SMES      | 680.643   | 1.55E-02
F3                | ATMES     | 680.639   | 1.00E-12
F3                | HCOEA     | 680.630   | 9.41E-08
F4                | CCMA      | -6961.81  | 9.33E-13
F4                | ES-SR     | -6875.94  | 1.60E+02
F4                | SMES      | -6961.28  | 1.85E+00
F4                | ATMES     | -6961.81  | 4.60E-12
F4                | HCOEA     | -6961.81  | 8.51E-12
F5                | CCMA      | -0.089157 | 0.020524
F5                | ES-SR     | -0.095825 | 2.60E-17
F5                | SMES      | -0.095825 | 0.00E+00
F5                | ATMES     | -0.095825 | 2.80E-17
F5                | HCOEA     | -0.095825 | 2.42E-17

Table 6. Results of t-test with 95% confidence level comparing statistical values for CCMA against those of ES-SR, SMES, ATMES, and HCOEA on F1-F5, in terms of p-value and s+, s-, or ≈ to indicate whether CCMA is significantly better, significantly worse, or statistically indifferent, respectively.

Benchmark Problem | ES-SR      | SMES        | ATMES       | HCOEA
F1                | 1.0(≈)     | 1.0(≈)      | 1.0(≈)      | 1.0(≈)
F2                | 1.0(≈)     | 1.0(≈)      | 1.0(≈)      | 1.0(≈)
F3                | 0.4238(≈)  | <0.0001(s+) | <0.0001(s+) | 1.0(≈)
F4                | 0.0047(s+) | 0.1221(≈)   | 1.0(≈)      | 1.0(≈)
F5                | 0.0804(≈)  | 0.0804(≈)   | 0.0804(≈)   | 0.0804(≈)
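The significance results above can be sanity-checked from the summary statistics alone. Below is a minimal sketch of the two-sample (Welch) t-statistic computed from means and standard deviations; the assumption of n = 30 independent runs per algorithm is ours, and obtaining an actual p-value would additionally require the t-distribution CDF, which is omitted here.

```python
import math

def welch_t(mean1, std1, mean2, std2, n=30):
    """Two-sample t-statistic from summary statistics (equal run counts).

    Under minimization, a negative t favours the first sample.
    """
    se = math.sqrt(std1 ** 2 / n + std2 ** 2 / n)
    return (mean1 - mean2) / se

# F4 from Table 5: CCMA (-6961.81, 9.33E-13) vs ES-SR (-6875.94, 1.60E+02).
t = welch_t(-6961.81, 9.33e-13, -6875.94, 1.60e+02)
print(round(t, 2))  # -> -2.94, i.e. CCMA's mean is significantly lower
```

The clearly negative statistic is consistent with the s+ entry for CCMA versus ES-SR on F4 in Table 6.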

V. CONCLUSION

In this paper, an experimental study on the geometry selection of orthodontic retraction springs using constrained evolutionary optimization methodologies has been presented. In particular, constrained evolutionary optimization has been performed using GA, MA, and the proposed Classifier-assisted Constrained MA (CCMA). Experimental results highlighted the efficacy of the proposed CCMA in solving the complex real-world design problem more effectively and efficiently. An empirical study on benchmark problems further confirmed the robustness of the CCMA for general complex design, with promising results shown. As future work, it would be interesting to look at how to combine other machine learning methodologies and domain knowledge [31] that would both serve to guide the search towards good solutions more efficiently while extracting knowledge about the problem in the spirit of ‘Optinformatics’ [32].

APPENDIX

The benchmark problems (F1-F5) used in the empirical study:

F1:
f(x) = -(√d)^d ∏_{i=1}^{d} x_i
h(x) = ∑_{i=1}^{d} x_i^2 - 1 = 0
where d = 10 and 0 ≤ x_i ≤ 10 (i = 1, ..., d).

F2:
f(x) = 5.3578547 x3^2 + 0.8356891 x1 x5 + 37.293239 x1 - 40792.141
g1(x) = 85.334407 + 0.0056858 x2 x5 + 0.0006262 x1 x4 - 0.0022053 x3 x5 - 92 ≤ 0
g2(x) = -85.334407 - 0.0056858 x2 x5 - 0.0006262 x1 x4 + 0.0022053 x3 x5 ≤ 0
g3(x) = 80.51249 + 0.0071317 x2 x5 + 0.0029955 x1 x2 + 0.0021813 x3^2 - 110 ≤ 0
g4(x) = -80.51249 - 0.0071317 x2 x5 - 0.0029955 x1 x2 - 0.0021813 x3^2 + 90 ≤ 0
g5(x) = 9.300961 + 0.0047026 x3 x5 + 0.0012547 x1 x3 + 0.0019085 x3 x4 - 25 ≤ 0
g6(x) = -9.300961 - 0.0047026 x3 x5 - 0.0012547 x1 x3 - 0.0019085 x3 x4 + 20 ≤ 0
where d = 5, 78 ≤ x1 ≤ 102, 33 ≤ x2 ≤ 45, and 27 ≤ x_i ≤ 45 (i = 3, 4, 5).

F3:
f(x) = (x1 - 10)^2 + 5(x2 - 12)^2 + x3^4 + 3(x4 - 11)^2 + 10 x5^6 + 7 x6^2 + x7^4 - 4 x6 x7 - 10 x6 - 8 x7
g1(x) = -127 + 2 x1^2 + 3 x2^4 + x3 + 4 x4^2 + 5 x5 ≤ 0
g2(x) = -282 + 7 x1 + 3 x2 + 10 x3^2 + x4 - x5 ≤ 0
g3(x) = -196 + 23 x1 + x2^2 + 6 x6^2 - 8 x7 ≤ 0
g4(x) = 4 x1^2 + x2^2 - 3 x1 x2 + 2 x3^2 + 5 x6 - 11 x7 ≤ 0
where d = 7 and -10 ≤ x_i ≤ 10 (i = 1, ..., 7).

F4:
f(x) = (x1 - 10)^3 + (x2 - 20)^3
g1(x) = -(x1 - 5)^2 - (x2 - 5)^2 + 100 ≤ 0
g2(x) = (x1 - 6)^2 + (x2 - 5)^2 - 82.81 ≤ 0
where d = 2, 13 ≤ x1 ≤ 100, and 0 ≤ x2 ≤ 100.

F5:
f(x) = -sin^3(2π x1) sin(2π x2) / (x1^3 (x1 + x2))
g1(x) = x1^2 - x2 + 1 ≤ 0
g2(x) = 1 - x1 + (x2 - 4)^2 ≤ 0
where d = 2 and 0 ≤ x_i ≤ 10 (i = 1, 2).
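The benchmark definitions above translate directly into code. As a minimal sketch, F1 and F4 are implemented below and checked at their standard active points; these test points come from the common benchmark literature, not from this paper.

```python
import math

def f1(x):
    """F1 (commonly known as g03): maximize the product, posed as minimization."""
    d = len(x)
    return -(math.sqrt(d) ** d) * math.prod(x)

def h1(x):
    """Equality constraint of F1: sum of squares equals 1."""
    return sum(v * v for v in x) - 1.0

def f4(x):
    """F4 (commonly known as g06)."""
    return (x[0] - 10.0) ** 3 + (x[1] - 20.0) ** 3

def g4_constraints(x):
    """Both inequality constraints of F4 (feasible when each is <= 0)."""
    g1 = -(x[0] - 5.0) ** 2 - (x[1] - 5.0) ** 2 + 100.0
    g2 = (x[0] - 6.0) ** 2 + (x[1] - 5.0) ** 2 - 82.81
    return g1, g2

# F1: at x_i = 1/sqrt(d) the equality constraint holds and f attains -1.
d = 10
x_star = [1.0 / math.sqrt(d)] * d
print(f1(x_star), h1(x_star))

# F4: at the commonly cited optimum both inequality constraints are active.
x_opt = (14.095, 0.84296)
print(round(f4(x_opt), 2))  # -> -6961.81
```

Note that F4's optimum sits at the intersection of its two constraint boundaries, which is exactly what makes it a useful stress test for feasibility-aware search.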

REFERENCES

[1] J. S. Arora, Introduction to optimum design, McGraw-Hill, 1989.
[2] J. A. Snyman, Practical mathematical optimization: an introduction to basic optimization theory and classical and new gradient-based algorithms, Springer Publishing, 2005.

[3] T. Bäck, D. Fogel, Z. Michalewicz, Handbook of evolutionary computation, Oxford Univ. Press, 1997.

[4] Y. S. Ong, N. Krasnogor, and H. Ishibuchi, “Special Issue on Memetic Algorithms,” IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 37(1):2-5, 2007.

[5] R. Meuth, M. H. Lim, Y. S. Ong, and D. C. Wunsch II, “A proposition on memes and meta-memes in computing for higher-order learning,” Memetic Computing, 1(2):85–100, 2009.

[6] Q. H. Nguyen, Y. S. Ong, and M. H. Lim, “A probabilistic memetic framework,” IEEE Transactions on Evolutionary Computation, 13(3):604-623, 2009.

[7] Q. H. Nguyen, Y. S. Ong, M. H. Lim, N. Krasnogor, “Adaptive cellular memetic algorithms,” Evolutionary Computation Journal, 17(2):231-256, 2009.

[8] Y. S. Ong, P.B. Nair and A.J. Keane, “Evolutionary optimization of computationally expensive problems via surrogate modeling,” American Institute of Aeronautics and Astronautics Journal (AIAA), 41(4):687-696, 2003.

[9] Z. Z. Zhou, Y. S. Ong, M. H. Lim and B. S. Lee, “Memetic algorithm using multi-surrogates for computationally expensive optimization problems,” Soft Computing, 11(10):957-971, 2007.

[10] Z. Zhu, Y. S. Ong, and M. Dash, “Markov blanket-embedded genetic algorithm for gene selection,” Pattern Recognition, 40 (11), pp. 3236-3248, 2007.

[11] Q. C. Nguyen, Y. S. Ong and J.-L. Kuo, “A hierarchical approach to study the thermal behavior of protonated water clusters H+(H2O)n,” Journal of Chemical Theory and Computation, 5(10):2629-2639, 2009.

[12] J. P. Den Hartog, Advanced Strength of Materials, Dover Publications, 1987.

[13] R. Setiawan, B. B. Putra, and N. Wachyudi, “Geometry selection of orthodontic retraction spring through knowledge-based design,” Joint Meeting JSME ch. Indonesia – AUN/SEED Net regional workshop, ITB, Bandung, 23-24 July 2008.

[14] M. Ferreira and F. Oliveira, “Experimental force definition system for a new orthodontic retraction spring,” American Journal of Orthodontics and Dentofacial Orthopedics, 75:334-343, 2005.

[15] J. T. Richardson, M. R. Palmer, G. Liepins, and M. Hilliard, “Some guidelines for Genetic Algorithms with penalty functions,” Conference on Genetic Algorithms, pp. 191–197, 1989.

[16] A. Homaifar, S. H.-Y. Lai, and X. Qi, “Constrained optimization via genetic algorithms,” Simulation, 62:242–254, 1994.

[17] J. A. Joines and C. R. Houck, “On the use of non-stationary penalty functions to solve nonlinear constrained optimization problems with GAs,” IEEE World Congress on Computational Intelligence, pp. 579–584, 1994.

[18] D. W. Coit and A. E. Smith, “Penalty guided genetic search for reliability design optimization,” Computers and Industrial Engineering, 30(4):895–904, 1996.

[19] C. A. Coello Coello, “Use of a self-adaptive penalty approach for engineering optimization problems,” Computers in Industry, 41(2): 113–127, 2000.

[20] Z. Michalewicz and J. Xiao, “Evaluation of paths in evolutionary planner/navigator,” International Workshop on Biologically Inspired Evolutionary Systems, Tokyo, 1995.

[21] K. Deb, “An efficient constraint handling method for genetic algorithm,” Computer Methods in Applied Mechanics and Engineering, 186(2-4):311-338, 2000.

[22] T. P. Runarsson and X. Yao, “Stochastic ranking for constrained evolutionary optimization,” IEEE Transactions on Evolutionary Computation, 4(3):284–294, 2000.

[23] I. C. Parmee and G. Purchase, “The development of a directed genetic search technique for heavily constrained design spaces,” In I. C. Parmee(eds), Adaptive Computing in Engineering Design and Control, pp. 97–102. 1994.

[24] E. Camponogara and S. N. Talukdar, “A genetic algorithm for constrained and multi-objective optimization,” In J. T. Alander(eds.), Computer Science and Operations Research, pp. 49–62. 1997.

[25] T. P. Runarsson, “Constrained evolutionary optimization by approximate ranking and surrogate models,” Parallel Problem Solving from Nature – PPSN VIII, LNCS-3242, pp. 401-410, 2004.

[26] S. D. Handoko, C. K. Kwoh and Y. S. Ong, “Using classification for constrained memetic algorithm: a new paradigm,” IEEE International Conference on Systems, Man, and Cybernetics, pp. 547-552, 2008.

[27] S. D. Handoko, C. K. Kwoh, and Y. S. Ong, “Feasibility structure modeling: an effective chaperon for constrained memetic algorithms,” IEEE Transactions on Evolutionary Computation, Accepted 2009.

[28] E. Mezura-Montes and C. A. C. Coello, “A simple multimembered evolution strategy to solve constrained optimization problems,” IEEE Transactions on Evolutionary Computation, 9(1):1–17, 2005.

[29] Y. Wang, Z. Cai, Y. Zhou, and W. Zheng, “An adaptive tradeoff model for constrained evolutionary optimization,” IEEE Transactions on Evolutionary Computation, 12(1):80–92, 2008.

[30] Y. Wang, Z. Cai, G. Guo, and Y. Zhou, “Multiobjective optimization and hybrid evolutionary algorithm to solve constrained optimization problems,” IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 37(3):560–575, 2007.

[31] Y. S. Ong and A.J. Keane, “A domain knowledge based search advisor for design problem solving environments,” Engineering Applications of Artificial Intelligence, 15(1):105-116, 2002.

[32] M. N. Le, Y. S. Ong, and Q. H. Nguyen, “Optinformatics for schema analysis of binary genetic algorithms,” in Proc. Genetic Evolutionary Computation Conference, pp. 1121–1122, 2008.