ORIGINAL ARTICLE
Two-level refined direct optimization scheme using intermediate surrogate models for electromagnetic optimization of a switched reluctance motor
Guillaume Crevecoeur • Ahmed Abou-Elyazied Abdallh •
Ivo Couckuyt • Luc Dupre • Tom Dhaene
Received: 2 February 2011 / Accepted: 21 June 2011 / Published online: 8 July 2011
© Springer-Verlag London Limited 2011
Abstract Electromagnetic optimization procedures require
a large number of evaluations in numerical forward
models. These computer models simulate complex prob-
lems through the use of numerical techniques, e.g. finite
elements. Hence, the evaluations need a large computa-
tional time. Two-level methods such as space mapping
have been developed that include a second model so as to
accelerate the inverse procedures. Contrary to existing two-
level methods, we propose a scheme that enables acceler-
ation when the second model is based on the initial
numerical model with coarse discretizations. This paper
validates the proposed refined direct optimization method
on algebraic test functions. Moreover, we applied the
methodology to the geometrical optimization of the
magnetic circuit of a switched reluctance motor. The
obtained numerical results show the efficiency of the
optimization algorithm with respect to computational time
and accuracy.
Keywords Switched reluctance motor · Optimal design · Finite elements · Geometrical optimization · Surrogate models · Kriging
1 Introduction
Electromagnetic rotating machines are indispensable in
industry. Specifically, switched reluctance motors (SRMs)
are widely used due to their simple working principle. In
order to optimally design such machines, optimal design
procedures with high-fidelity computer models, e.g. finite
element (FE) models, are commonly utilized [1–3]. The
motor under study is a 6/4 SRM, see Fig. 1, where we aim
at optimizing the geometry of the magnetic circuit of the
motor so as to obtain an average torque profile that is as
high as possible.
In a general electromagnetic optimization framework,
the electromagnetic field computations are performed by
solving the classical equations of Maxwell, specifying the
geometry, materials, and sources. Using efficient numerical
techniques, e.g. finite element, finite difference, boundary
element methods, etc. computer models of the electro-
magnetic device can be built. These computer models solve
the so-called forward problems with high solution accu-
racy. However, these computer models are generally CPU
time-consuming, so the traditional direct optimization
approaches, which iteratively minimize a predefined cost
function using the forward model, become very time-demanding
and often impractical.
In this perspective, so-called two-level optimization
methods, e.g. space mapping [4, 5], manifold mapping [6],
response and parameter mapping [7], etc., were presented.
In these two-level optimization methods, the optimization
procedure is accelerated by incorporating, next to the high-
fidelity ‘‘fine model’’, an additional low-fidelity ‘‘coarse
model’’. Indeed, these two-level optimization methods
were successfully applied onto different electromagnetic
devices. For example, the space mapping technique was
G. Crevecoeur (✉) · A. A.-E. Abdallh · L. Dupre
Department of Electrical Energy, Systems and Automation,
Ghent University, Sint-Pietersnieuwstraat 41,
9000 Ghent, Belgium
e-mail: [email protected]

I. Couckuyt · T. Dhaene
Department of Information Technology (INTEC),
Ghent University-IBBT, Sint-Pietersnieuwstraat 41,
9000 Ghent, Belgium
Engineering with Computers (2012) 28:199–207
DOI 10.1007/s00366-011-0239-5
applied onto the efficient optimal design of electromagnetic
actuators [8, 9], optimal design of a SRM [10], a trans-
former [11] etc. In [8–10], the used coarse models were
mostly analytical models, i.e. lumped magnetic reluctance
network, where approximations were made with respect to
geometry, materials, and sources. However, the construc-
tion of such fast coarse models can be also time demanding
and difficult, especially when dealing with complex for-
ward models. Moreover, these coarse models have to be
sufficiently faster than the fine models; otherwise the
optimization procedure is not remarkably accelerated [12].
Therefore, a two-level optimization method based on
coarse models that are easier to build is needed. We
propose a novel alternative two-level optimization scheme
that solves optimization problems on the fly in a more
efficient way when the coarse model is derived directly
from the fine numerical model by coarsening its
discretization. Such a class of coarse models is attractive
in two-level procedures because these models are much
easier to build than analytical models and because they
approximate the system under study more faithfully,
incorporating the physics of the model more accurately,
e.g. when geometrical details or the nonlinearity of the
material model are important to the forward solution.
For such cases, a numerical coarse model is therefore
preferable to an analytical one.
This paper describes the proposed refined direct opti-
mization (RDO) scheme in detail and implements the
scheme within the widely used Nelder–Mead simplex
(NMS) method and nonlinear least squares methods. The
scheme is based on the framework presented in [13, 14]
and the two-level genetic algorithm [15], which employ
surrogate models. For details concerning the surrogate
models, we refer to [16]. Acceleration of the optimal
design is obtained by performing only one optimization run
of a surrogate model that is based on the coarse model with
coarse discretizations. This surrogate model is corrected by an
interpolation model calibrated with the fine model with fine
discretizations. To validate the method, we first apply
it to algebraic test functions. In a next stage,
we apply it to the optimal geometrical design of
a 6/4 SRM and compare the results with the space mapping
technique and the traditional direct optimization technique.
2 Two-level minimization methods
Minimization methods that include a second model next to
the initial computer model are called two-level
minimization methods. In forward modelling problems, the
initial ‘‘fine’’ model is mostly based on numerical tech-
niques such as the finite element method (FEM), finite
difference method (FDM), etc. This model has a high level
of accuracy that requires a large computational time. The
second ‘‘coarse’’ model has a lower level of fidelity and is
computationally fast. We denote the fine and coarse model
as f(x) and c(x) respectively with input parameter vector x.
Metamodels, denoted here as models that interpolate
input–output data and which do not solve the physics of the
problem, e.g. response surface models [17], Kriging mod-
els [18, 19], can act as ‘‘coarse’’ models in two-level
minimization methods [14, 20–22] and are constructed by
interpolating response data, obtained by evaluating the
Fig. 1 Geometry of the 6/4 SRM under study (geometrical labels: Dse, Dsi, Dg, Dre, Dri, tsp, trp). δ is the air gap at alignment condition between stator and rotor poles

Fig. 2 Refined direct optimization scheme
model f(x_i) for a certain set of sample points x_i, i = 1, ..., N_d in the design space, where N_d is the number of
design points. When dealing with complex problems, N_d
needs to be large in order to obtain a sufficiently accurate
metamodel. Metamodels can be used within optimization
schemes, i.e. metamodel-assisted optimization (MAO) (e.g.
[23]) and surrogate-based optimization (SBO) (e.g. [24]),
with or without additional evaluations of the fine model for
refinement of the metamodel. The efficient global optimi-
zation (EGO) algorithm [25] is an example of a minimi-
zation method that enables refinement of the metamodel
(Kriging model) during the optimization procedure by
performing additional fine model evaluations. Indeed,
EGO/expected improvement provides a balance between
exploration, i.e. enhancing the accuracy of the metamodel,
and exploitation, i.e. refining the metamodel solely in the
region of the current optimum. The main drawback when
using metamodels is that it becomes difficult to determine
an accurate metamodel when dealing with high-dimensional
parameter spaces and with highly nonlinear forward
models ("curse of dimensionality") [26]. The number of
evaluations that need to be carried out in the fine model for
building the metamodel can then increase to a large extent.
A second type of coarse models can be physics-based
where assumptions are made with respect to the geometry,
sources, materials, etc. Space mapping (SM) and manifold
mapping mostly include such models. In the most basic
methods, e.g. the aggressive space mapping (ASM) algo-
rithm [5], P forward model evaluations are carried out as
well as P minimizations of the coarse model for different
objectives. P is the total number of iterations in the two-
level algorithm. The total time for minimizing the objective
function equals:
TSM ¼ PTf þ PNcTc ð1Þ
with Tf and Tc being the needed computational time for
carrying out one evaluation in the fine and coarse forward
model, respectively. Nc is the average number of
evaluations that need to be carried out for minimizing the
coarse model, given certain objective(s). If we assume that
the traditional "one-level" (1L) minimization method
needs N_f ≈ N_c evaluations in the fine model, then we
can calculate the total time as T_1L = N_f T_f. The acceleration of
space mapping with respect to traditional minimization
methods can be defined as:

$$A_1 = \frac{T_{1L}}{T_{SM}} \approx \frac{N_c T_f}{P T_f + P N_c T_c}. \quad (2)$$

This acceleration depends on the ratio s = T_f / T_c, and
A_1 > 1 is obtained when N_c T_f > P T_f + P N_c T_c, or

$$s > \frac{P N_c}{N_c - P} \quad (3)$$

where we can assume N_c ≫ P, so that s needs to be larger
than the number of iterations in space mapping or manifold
mapping. It is not always possible to build a sufficiently
fast coarse model, e.g. coarse models that are numerical
models with coarse discretizations.
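As a quick numerical illustration of eqs. (1)–(3), the acceleration A_1 and the minimum speed ratio s can be evaluated for hypothetical timings; the values below are illustrative, not taken from the paper:

```python
# Hypothetical timings, for illustration only (not the paper's measurements).
def sm_acceleration(Tf, Tc, P, Nc):
    """Acceleration A1 = T_1L / T_SM of eq. (2), assuming N_f ~ N_c."""
    T_1L = Nc * Tf                  # one-level method: ~N_c fine evaluations
    T_SM = P * Tf + P * Nc * Tc     # eq. (1): P fine evals + P coarse minimizations
    return T_1L / T_SM

def min_speed_ratio(P, Nc):
    """Threshold of eq. (3): A1 > 1 requires s = Tf/Tc > P*Nc/(Nc - P)."""
    return P * Nc / (Nc - P)

# Example: coarse model 10x faster (s = 10), P = 5 iterations, Nc = 100
print(sm_acceleration(Tf=10.0, Tc=1.0, P=5, Nc=100))   # ~1.8x speed-up
print(min_speed_ratio(P=5, Nc=100))                    # s must exceed ~5.26
```

With s = 10 above the ≈5.26 threshold, space mapping indeed pays off; for s close to or below the threshold the second model no longer accelerates the procedure, which is the regime the RDO scheme of Sect. 3 targets.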
3 Refined direct optimization (RDO) scheme
3.1 Iterative scheme
As mentioned in the previous section, existing two-level
schemes cannot accelerate the procedure when the coarse
model is not sufficiently fast or when a large number of
evaluations are needed for building a metamodel. In this
paper, we carry out only one optimization of a surrogate-
based model [P = 1 in (1)] that is iteratively refined during
the optimization itself (increased number of fine model
evaluations). This surrogate model is based on the coarse
model and tries to approximate the fine model through the
use of iteratively refined metamodels. The specific feature
of this scheme is that acceleration is possible even for
relatively small s.
The basic idea of the RDO scheme is to alter the optimization
of the cost Y, e.g. the least-squares difference
between targets and simulations, of the fine model:

$$x_f^* = \arg\min_x Y(f(x)) \quad (4)$$

to the optimization of the cost of the surrogate model s(x):

$$x_s^* = \arg\min_x Y(s(x)) \quad (5)$$

where we want s(x) to approximate f(x) well near x_s^*, so
that x_s^* is close to x_f^*. Here, we use metamodels for
interpolating the coarse model response data to the fine
model response data. The relation between coarse model
response and fine model response can become less complex
and less difficult to determine. In this way, N_d can be
reduced. The surrogate model, used in the RDO scheme,
has the following form:
$$s(x) = c(x) + e(x) \quad (6)$$
with error function e(x) that is determined using meta-
models. Notice that (6) can also be of the following form:
s(x) = c(x)e(x). In this paper, we use the Kriging meta-
model, see e.g. [18], for building the error function. The
surrogate model s(x) is refined during the optimization
procedure by performing a limited number of fine model
evaluations. For more details concerning the use of Kriging
within the RDO scheme, see Sect. 3.2.
Notice that when using a coarse model that is based on the
fine model with coarse discretizations, we have to be
sure that the error model e(x) does not model the numerical
noise, but rather the physics that are lost through the coarse
discretization. For example, the discretization near an air gap in
electromagnetic devices can be so coarse that the coarse
model does not capture the physics near the air gap with
high fidelity, i.e. fringing effects are neglected.
The proposed method has the same features as the tra-
ditional direct optimization method, i.e. start value, stop-
ping criteria, etc., where the internal parameters of the
RDO method are self-tunable. The method uses a trust-
region strategy for updating the surrogate model. An out-
line is given:
Step 1: An initial set of N_init samples is generated by an
optimal maximin Latin hypercube design (LHD; [27])
around start value x^(0) within the trust-region radius
Δ^(0): x_i^(0) with i = 1, ..., N_init. Evaluations are then
made in the coarse and fine model:

$$F = \left[ f\left(x_1^{(0)}\right), \ldots, f\left(x_{N_{init}}^{(0)}\right) \right] \quad (7)$$

$$C = \left[ c\left(x_1^{(0)}\right), \ldots, c\left(x_{N_{init}}^{(0)}\right) \right]. \quad (8)$$

Step 2: Construction of surrogate model s^(0)(x) by
determining e^(0)(x) in (6), interpolating the points x_i^(0)
with the data F − C. We initialize m = 1.
Step 3: Partial run of a direct minimization method (NMS,
gradient method, etc.) using surrogate model s^(m−1)(x)
with start value x^(0). Updates x^(k), k = 1, ..., K are
carried out, depending on the used direct optimization
method. The partial run of the direct optimization method is
stopped when x^(K) is near the trust-region boundary or
when the stopping criteria of the direct minimization
method are fulfilled. x^(m) becomes x^(K).

Step 4: Determine the accuracy of the surrogate model in
order to determine the new trust region Δ^(m). The accuracy
of the previous surrogate model s^(m−1) depends on the
fidelity of the coarse model c(x) relative to the fine
model and on the accuracy of the error function e(x). The
accuracy is determined as follows [13]:

$$\rho^{(m)} = \frac{Y\left(f\left(x^{(m-1)}\right)\right) - Y\left(f\left(x^{(m)}\right)\right)}{Y\left(s^{(m-1)}\left(x^{(m-1)}\right)\right) - Y\left(s^{(m-1)}\left(x^{(m)}\right)\right)}. \quad (9)$$

On the basis of ρ^(m), we determine Δ^(m), similar to [14].
Step 5: Update of the surrogate model s^(m)(x), i.e. the error
model e^(m), in the region Δ^(m). A limited number of
evaluations R are carried out in the fine and coarse model
so as to refine the surrogate model in the next trust
region. We add the simulations to the datasets (7), (8).

Step 6: If the termination criteria of the direct optimization
method are not satisfied, then go to step 3 and set
m = m + 1.
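The six steps above can be sketched as a toy, one-dimensional loop. Random sampling, inverse-distance interpolation and a step-halving descent stand in for the paper's optimal maximin LHD, Kriging error model and NMS/SQP runs; the fine and coarse models are made-up quadratics, purely for illustration:

```python
import random

def fine(x):    return (x - 0.3) ** 2             # stand-in fine model f(x)
def coarse(x):  return (x - 0.3) ** 2 + 0.1 * x   # stand-in coarse model c(x)

def error_model(data):
    """Interpolant for e(x) = f(x) - c(x); stand-in for Kriging in eq. (6)."""
    def e(x):
        num = den = 0.0
        for xi, fi, ci in data:
            w = 1.0 / ((x - xi) ** 2 + 1e-12)     # inverse-distance weight
            num += w * (fi - ci)
            den += w
        return num / den
    return e

def minimize_1d(g, x, step):
    """Derivative-free descent; stand-in for the partial NMS/SQP run (step 3)."""
    while step > 1e-6:
        if g(x - step) < g(x):
            x -= step
        elif g(x + step) < g(x):
            x += step
        else:
            step *= 0.5
    return x

def rdo(x0, radius=1.0, n_init=8, R=4, Q=8):
    random.seed(1)
    data = []                                     # datasets (7) and (8)
    for _ in range(n_init):                       # Step 1: initial samples
        xi = x0 + random.uniform(-radius, radius)
        data.append((xi, fine(xi), coarse(xi)))
    x = x0
    for _ in range(Q):
        s = lambda z: coarse(z) + error_model(data)(z)   # Steps 2/5: eq. (6)
        x = minimize_1d(s, x, radius / 4)         # Step 3: minimize surrogate
        for _ in range(R):                        # Step 5: refine near optimum
            xi = x + random.uniform(-radius / 4, radius / 4)
            data.append((xi, fine(xi), coarse(xi)))
        radius *= 0.7                             # Step 4, crudely simplified
    return x

print(rdo(0.0))   # approaches the fine-model optimum x_f* = 0.3
```

As the trust region shrinks and the samples cluster around the current iterate, the surrogate locally matches the fine model, so the minimizer drifts toward x_f* even though the coarse model alone would be biased.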
The computationally demanding steps 1 and 4 in the two-level
refined direct method can be parallelized so as to
improve the acceleration of the procedure. The total time
of the RDO scheme (without parallelization) is
theoretically:

$$T_{RDO} = N_{init}(T_f + T_c) + Q(K T_c + R T_c + R T_f) \quad (10)$$
with Q the total number of iterations in the RDO scheme.
Remark that we can assume QK ≈ N_c, with N_c from
equation (1), i.e. the total number of iterations for
minimizing the surrogate model is close to the total
number of iterations for minimizing the coarse model. As
long as N_f > N_init + QR, acceleration with respect to the
traditional method,

$$A_2 = \frac{T_{1L}}{T_{RDO}}, \quad (11)$$

is obtained when

$$s > \frac{N_{init} + N_c + QR}{N_f - N_{init} - QR}. \quad (12)$$
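As a concrete check of the time model (10)–(11), the timings reported later in Sect. 5.2 and Table 2 (without preconditioning) can be plugged in; this reproduces the reported total times and the 1.6× acceleration:

```python
# Timings from Sect. 5.2 / Table 2 of the paper (no preconditioning).
Tf, Tc = 21.2, 8.1           # minutes per fine / coarse model evaluation
Nf_1L = 220                  # fine evaluations in the one-level SQP run
Nf_RDO, Nc_RDO = 42, 242     # fine / coarse evaluations in the RDO run

T_1L = Nf_1L * Tf                      # one-level total time
T_RDO = Nf_RDO * Tf + Nc_RDO * Tc      # eq. (10), grouped per model type

print(f"T_1L  = {T_1L / 60:.1f} h")    # 77.7 h, matching Table 2
print(f"T_RDO = {T_RDO / 60:.1f} h")   # 47.5 h, matching Table 2
print(f"A2    = {T_1L / T_RDO:.2f}")   # eq. (11): about 1.6x acceleration
```

Note that here s = 21.2/8.1 ≈ 2.6, far below the space-mapping threshold of eq. (3), yet acceleration is still obtained.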
3.2 Kriging in RDO scheme
Kriging is a popular technique to interpolate deterministic
noise-free data [20, 28]. These Gaussian process-based
surrogate models are compact and cheap to evaluate.
Kriging is applied in the RDO scheme for approximating
e(x) in equation (6). We elaborate here in a general way the
working of the Kriging modeling where a Kriging model is
made starting from a certain model m.
Let us consider the following N_Kr-dimensional base
(training) set

$$X_{B,Kr} = \{x_{kr,1}, x_{kr,2}, \ldots, x_{kr,N_{Kr}}\} \quad (13)$$

and

$$m_{B,Kr} = \{m(x_{kr,1}), m(x_{kr,2}), \ldots, m(x_{kr,N_{Kr}})\} \quad (14)$$

being the associated responses in the model m. Then, the
Kriging model m_Kr(x) with input vector x, also known as
the best linear unbiased predictor (BLUP), can be
obtained by

$$m_{Kr}(x) = M a + r(x) \Psi^{-1}(m_{B,Kr} - F a) \quad (15)$$

where M and F are Vandermonde matrices of the test point x and
the base set X_B,Kr, respectively. The coefficient vector a is
determined by generalized least squares (GLS). r(x) is a
1 × N_Kr vector of correlations between the point x and the
base set X_B,Kr, where the i-th element is given by

$$r_i(x) = \psi(x, x_{kr,i}), \quad i = 1, \ldots, N_{Kr} \quad (16)$$

and Ψ in (15) is an N_Kr × N_Kr correlation matrix, where the
entries are given by Ψ_{i,j} = ψ(x_{kr,i}, x_{kr,j}).
202 Engineering with Computers (2012) 28:199–207
123
In this work, the correlation function is chosen
Gaussian:

$$\psi(x_i, x_j) = \exp\left( -\sum_{k=1}^{n} \theta_k \left\| x_{i,k} - x_{j,k} \right\|^2 \right) \quad (17)$$

where x_{i,k} denotes the k-th component of vector x_i and n the
dimension of the input vector x. The parameters θ_k, k = 1, ..., n
of the correlation function are determined by
maximum likelihood estimation (MLE). The optimization
for MLE was performed using SQPLab [29]. The regression
function is chosen constant, i.e., F = [1 1 ... 1]^T and
M the identity matrix.
In the RDO scheme, a Kriging interpolant is built in step
2. The base set is X_B,Kr = {x_1^(0), ..., x_{N_init}^(0)}, which is
interpolated with m_B,Kr = {f(x_1^(0)) − c(x_1^(0)), ..., f(x_{N_init}^(0)) − c(x_{N_init}^(0))}. In step 5 of the iterative scheme, the training set is
extended with R points, yielding a more refined Kriging
interpolant.
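A minimal, self-contained sketch of the predictor (15)–(17) in one dimension with constant regression (so M reduces to 1 and F to a column of ones) is given below; the correlation parameter theta is simply fixed instead of being fitted by MLE as in the paper:

```python
import math

def solve(A, b):
    """Dense linear solve by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def kriging(xs, ys, theta=10.0):
    """BLUP of eq. (15) with constant regression (F = [1...1]^T, M = 1) and
    the Gaussian correlation of eq. (17), with theta fixed (no MLE)."""
    psi = lambda u, v: math.exp(-theta * (u - v) ** 2)
    n = len(xs)
    Psi = [[psi(xi, xj) for xj in xs] for xi in xs]          # correlation matrix
    a = sum(solve(Psi, ys)) / sum(solve(Psi, [1.0] * n))     # GLS coefficient
    resid = solve(Psi, [yi - a for yi in ys])                # Psi^-1 (y - F a)
    return lambda x: a + sum(psi(x, xi) * ri for xi, ri in zip(xs, resid))

xs = [0.0, 0.25, 0.5, 0.75, 1.0]
mKr = kriging(xs, [math.sin(2.0 * v) for v in xs])
print(abs(mKr(0.5) - math.sin(1.0)) < 1e-8)   # True: BLUP interpolates the data
```

Because there is no nugget term, the predictor reproduces the training responses exactly, which is the behavior wanted for the deterministic, noise-free error data F − C.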
4 Optimal design of the magnetic circuit
of a switched reluctance motor
The forward problem for the optimal design of the magnetic
circuit of the SRM consists in determining the torque profile
of the SRM for a set of geometrical variables. The SRM is
excited from stator windings which are concentric coils
wound in series on diagonally opposite stator poles, see
Fig. 1. The rotor is brushless with no windings. The variable
geometrical parameters, as depicted in Fig. 1, are the width
of the stator pole t_sp and of the rotor pole t_rp, the internal
diameter of the stator yoke D_si and the external diameter of
the rotor yoke D_re. The external diameter of the stator is
D_se = 135 mm, the internal diameter of the rotor is
D_ri = 25 mm and the air gap width is δ = 0.25 mm.
The motor can be analyzed using a magnetic equivalent
circuit [30]. However, for accurate prediction of the behavior
of the machine, i.e. correct simulation of the torque for
different rotor positions, numerical methods such as the
finite element method (FEM) are more suitable [10, 31].
The demand for servo-
type torque control requires the calculation of the instanta-
neous torque for each rotor angle [32]. The electromagnetic
torque can be computed through the following equation

$$T_{em}(\theta_0, I_0) = \frac{\partial W_{co}(\theta, I_0)}{\partial \theta} \bigg|_{\theta = \theta_0} \quad (18)$$

for a certain given rotor angle θ_0 and excitation current I_0.
W_co is the so-called co-energy defined as:

$$W_{co} = \int \Psi \, \mathrm{d}i \quad (19)$$

with Ψ the flux linkage and i the current, where the integration
can be carried out in FEM directly through global
integration over the domain of the solution, see e.g. [32].
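Equation (18) can be evaluated numerically by a central difference of the co-energy at fixed current; the analytic co-energy below is a made-up stand-in for the FE-computed W_co, for illustration only:

```python
import math

def Wco(theta, I0):
    """Toy co-energy (J): inductance-like in I0, position-dependent.
    Made-up stand-in for the FE-computed co-energy of eq. (19)."""
    return 0.5 * (1.0 + 0.3 * math.cos(theta)) * I0 ** 2

def torque(theta0, I0, h=1e-6):
    """T_em(theta0, I0) = dWco/dtheta at fixed current, eq. (18),
    approximated by a central finite difference."""
    return (Wco(theta0 + h, I0) - Wco(theta0 - h, I0)) / (2.0 * h)

# For the toy model the exact derivative is -0.15 * I0^2 * sin(theta).
print(abs(torque(0.5, 4.0) + 0.15 * 16.0 * math.sin(0.5)) < 1e-6)   # True
```

In practice each co-energy value on the right-hand side is one FE solve, which is why the torque evaluation at the five rotor angles of Sect. 4 is the expensive part of the cost function.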
The forward problem can be solved using the following
Poisson equation:

$$\nabla \times \left( \mu^{-1} \nabla \times \mathbf{A} \right) = \mathbf{J} \quad (20)$$

for the vector potential A and a certain current density
J, which is related to the enforced current I_0 in the
windings around the two opposite stator poles. Since the
current density J is oriented perpendicularly to the plane of
the magnetic circuit (J = J_z being the current density in
the z-direction), see Fig. 1, the magnetic induction and
field are oriented in this plane. The vector potential thus
has only a component perpendicular to the plane of the
magnetic circuit: A = A_z. Poisson's equation (20) can
in this way be reduced to the following equation in 2D:

$$\nabla \cdot \left( \mu^{-1} \nabla A \right) = -J \quad (21)$$
The FE calculations depend on the geometrical parameters
and on the specifications of the permeability l. For the
magnetic circuit, this permeability is nonlinear and we use
the following single-valued constitutive B - H relationship:
$$\mu = \frac{B_0}{H_0} \left( 1 + \left( \frac{B}{B_0} \right)^{m-1} \right)^{-1} \quad (22)$$

which is determined by the three parameters
[H_0, B_0, m] and which originates from the following
equation [10, 33]:

$$\frac{H}{H_0} = \frac{B}{B_0} + \left( \frac{B}{B_0} \right)^{m}. \quad (23)$$
The fine model consists of very fine discretizations of the
motor under study. The number of elements is approxi-
mately 250,000. The number of mesh elements in the
coarse model is approximately 10 times smaller. The torque
is calculated for an excitation current of I_0 = 4 A and
for 5 different rotor angles θ_0 = 25, 27.5, 30, 32.5, 35
mechanical degrees. During the optimization procedures,
the average torque is maximized for a fixed value of I0.
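The constitutive law (22)–(23) is straightforward to implement; the parameter values H0, B0 and m below are placeholders, not the material data of the studied motor:

```python
# Constitutive law of eqs. (22)-(23); H0, B0 and m are placeholder values,
# not the material data of the studied motor.
def mu(B, H0=100.0, B0=1.5, m=7):
    """Permeability of eq. (22): mu = (B0/H0) * (1 + (B/B0)^(m-1))^-1."""
    return (B0 / H0) / (1.0 + (B / B0) ** (m - 1))

def H(B, H0=100.0, B0=1.5, m=7):
    """Field strength from eq. (23): H/H0 = B/B0 + (B/B0)^m."""
    return H0 * (B / B0 + (B / B0) ** m)

# Consistency check: since mu = B/H, mu(B) * H(B) must reproduce B.
print(abs(mu(1.2) * H(1.2) - 1.2) < 1e-12)   # True
```

The check confirms that (22) is exactly the permeability implied by (23): dividing B by the H of (23) and simplifying yields the form of (22).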
5 Results and discussion
5.1 RDO of algebraic test functions
In order to validate the RDO scheme, we applied the
method onto two different algebraic test functions and
compared the results with the traditional direct optimiza-
tion scheme. Firstly, the following algebraic function was
minimized:
$$Y_1(f(x)) = f(x) = -\exp\left(-(x_1^2 + x_2^2)\right) \quad (24)$$

with x_f^* = [0, 0]^T. The coarse model is similar to the fine
model with altered output (A_c) and input (matrix B_c) space:

$$c(x) = A_c f(B_c x) \quad (25)$$

with optimal value x_c^* = [1, −1]^T. Figure 3 shows the values
of the iterates x^(k) in the traditional NMS method for
minimizing the fine and the coarse model so as to obtain x_f^*
and x_c^*, respectively. The figure additionally shows the
alternative path followed in the variable (design) space by
the RDO algorithm in order to achieve convergence to
x_s^* = x_f^*. The internal parameters for the RDO algorithm
are the following: N_init = 8, R = 4 with initial trust region
Δ^(0) = 0.2 Δ*, where Δ* denotes the whole input space
region. The total number of iterations is Q = 8. The total
number of evaluations is 40 in the fine model and 110 in the
coarse model.
Secondly, we applied the RDO to the minimization of
the two-dimensional Rosenbrock test function, with fine
model:

$$Y_2(f(x)) = 100\left(x_2 - x_1^2\right)^2 + \left(1 - x_1\right)^2. \quad (26)$$

The coarse model is again seriously altered in input and
output space. Figure 4 compares the convergence history
of the traditional method (cost log Y(f(x^(k)))) with the
RDO method (cost log Y(s(x^(k)))) in each k-th iteration.
The internal parameters are chosen as follows: N_init = 10,
R = 5, with this time Δ^(0) = Δ*. The trust region is reduced
during the minimization procedure. It can be observed from
Fig. 4 that the minimization procedure follows an alternated
path in the parameter space and that the iteratively
refined surrogate model is minimized. Near the minimal
value of the cost function, the surrogate model s^(Q)(x) is
close to the fine model f(x), so that x_s^* ≈ x_f^*. The value of
the trust-region ratio Δ^(k)/Δ* for each iteration is shown in
Fig. 5a, and the minimal value of the cost function in the
fine model, evaluated in (7) for the first iteration and in step
5 for the next iterations, is shown in Fig. 5b. Convergence
is observed after Q = 8 iterations with 50 evaluations in
the fine model and 350 evaluations in the coarse model.
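The test setup of eqs. (24)–(26) can be sketched as follows; the distortion factors A_c and B_c are illustrative choices, since the paper does not report them in closed form:

```python
import math

def f1(x1, x2):
    """Fine model of eq. (24), minimum -1 at (0, 0)."""
    return -math.exp(-(x1 ** 2 + x2 ** 2))

def f2(x1, x2):
    """Rosenbrock fine model of eq. (26), minimum 0 at (1, 1)."""
    return 100.0 * (x2 - x1 ** 2) ** 2 + (1.0 - x1) ** 2

def coarse_from_fine(f, Ac, Bc):
    """Coarse model c(x) = Ac * f(Bc x) of eq. (25): output space scaled
    by Ac, input space distorted by the 2x2 matrix Bc (illustrative)."""
    def c(x1, x2):
        y1 = Bc[0][0] * x1 + Bc[0][1] * x2
        y2 = Bc[1][0] * x1 + Bc[1][1] * x2
        return Ac * f(y1, y2)
    return c

c1 = coarse_from_fine(f1, Ac=2.0, Bc=[[1.0, 0.5], [0.0, 1.0]])
print(f1(0.0, 0.0), f2(1.0, 1.0))   # -1.0 0.0 at the respective optima
```

The distortion keeps the coarse model cheap and qualitatively similar to the fine one while displacing its response, which is exactly the discrepancy the Kriging error model e(x) of eq. (6) has to absorb.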
5.2 RDO scheme for the optimal design of a SRM
The computational time for one fine forward model evaluation is
T_f^(np) = 21.2 min and for one coarse model evaluation
T_c = 8.1 min. When using preconditioning of the fine model based on the
coarse model, i.e. the solution of the fine model is obtained by
starting from the coarse model solution, the
computational time becomes T_f^(wp) = 7.9 min. The superscript np
denotes that no preconditioning was performed, while the
superscript wp denotes that preconditioning was performed.
Preconditioning can be performed in steps 1 and 3.
The cost function that was implemented for the optimal
design calculates the maximum average torque for the rotor
angles:
Fig. 3 Minimization of Y_1, with the paths followed in the parameter space (x_1, x_2) by the iterates x^(k) for the coarse model, the fine model and the refined surrogate model
Fig. 4 Convergence history for the minimization of Y_2 using NMS and the RDO scheme, in the fine model and in the refined surrogate model
Fig. 5 Minimization of Y_2 using RDO with (a) trust region in each iteration, (b) minimal value of fine model evaluations in step 5
$$Y = -\sum_{\theta_0} T_{em}(\theta_0, I_0) \quad (27)$$

with the rotor angles θ_0 specified in Sect. 4.
The intermediate surrogate model (6) is modelled with
e(x) being a Kriging model [28]. In order to guarantee that
the numerical noise is not interpolated between the fine and
the coarse model, a co-Kriging model could be imple-
mented [34]. However, the numerical simulations showed
that a Kriging model was sufficient for obtaining a highly
accurate intermediate surrogate model.
The internal parameters taken for the RDO scheme are
the following: N_init = 24, Δ^(0) = Δ*, R = 12. The minimization
in step 3 is carried out through sequential quadratic
programming (SQP) [35]. The identified optimal geometrical
parameters x*_RDO are listed in Table 1. These optimal
parameters correspond well with the optimal parameters
x*_SQP obtained using SQP with the fine model only.
Figure 6 depicts the minimal value of the fine model
evaluated in (7) and step 5. Convergence is obtained after
Q = 5 iterations. We also added the trust region radius in
this figure. Figure 7 shows the convergence history of the
surrogate model in the partial run of direct optimization
(step 3 of the RDO scheme).
The total number of fine model evaluations N_f and
coarse model evaluations N_c in the RDO scheme and in the
one-level direct optimization (SQP) method is given in Table 2,
together with the total computational time: RDO (np) is
1.6 times faster than SQP. We observe
that when using preconditioned fine model evaluations
(RDO (wp)), the time needed in steps 1 and 5 is
reduced, so that the total computational time for the
optimization is approximately 2 times faster.
Figure 8 shows the percentage error ‖F − C‖/‖F‖ between the fine and
coarse model in step 1 of the RDO algorithm.
This figure shows the error for 5 different rotor angles
and shows that the error depends strongly on the rotor angle,
because the discretization strongly influences the
accuracy of the computer model for angles 25–30°. Comparing
this with Fig. 9, which shows the percentage error
‖F − S‖/‖F‖ between the fine and surrogate model in step 5
of the RDO scheme, we observe that the surrogate
model has a relatively good quality.
Acceleration of the RDO scheme can possibly be
achieved by evaluating the fine model mainly in the region
where the error of the Kriging model is large, as in the
EGO algorithm.
It is difficult to provide a correct quantitative compari-
son between the space mapping methodology (e.g. ASM)
Table 1 Optimal parameters of the SRM machine

Parameters   t_sp (mm)   D_si (mm)   t_rp (mm)   D_re (mm)
x*_RDO       17.1        109.4       20.12       43.79
x*_SQP       17.0        109.2       19.89       44.04
Fig. 6 Fine model evaluations in each iteration
Fig. 7 Convergence history of the first partial minimization of the surrogate model
with the RDO methodology developed here, since the
efficiency of both methods strongly depends on the used
coarse model. Indeed, their convergence will depend on the
quality of the coarse model relative to the fine model and
on the computational time of the two models, i.e. their
ratio s = T_f / T_c. Theoretical work has been carried out in
[36] with respect to the influence of the quality of the coarse
model on the convergence of space mapping-based
methods. When using a class of coarse models that are
analytically built and that approximate the fine model or the
system under study only very coarsely, s will be very high
and, because of their relatively poor quality, such models
will lead to better convergence in the space mapping method
than in the RDO method. However, when using a class of
coarse models that are numerically built and that approximate
the fine model or the system under study relatively
accurately, the ratio s will be very low and, because of
their relatively good quality, they will lead to better
convergence in the RDO scheme than in ASM. This
can qualitatively be concluded by comparing the time
equations associated with the acceleration A_1, see (2) in the
ASM, and the acceleration A_2, see (11) in the RDO. It is
difficult to provide a quantitative comparison since the time
equations depend on some constants (i.e. P in ASM and
R, K, Q in RDO). These constants will also depend on the
quality of the used coarse model. We do not claim here that
the RDO will always be better than the ASM for the class
of coarse models that are numerically based. This could for
example be the case if this numerical model has a poor
quality because of using too coarse discretizations. It is
then possible that the ASM and RDO have a comparable
total time for solving the optimization problem.
6 Conclusion
This paper proposes a methodology where a coarsely dis-
cretized model can be included in the direct optimization
scheme. Intermediate surrogate models, here Kriging
models, were used so as to interpolate the so-called coarse
model (coarse mesh) and the fine model (fine mesh). The
refined direct optimization scheme was applied onto alge-
braic test functions for validation of the methodology.
The methodology was also applied onto the computation-
ally demanding optimal design of a switched reluctance
motor. We observed that the optimization procedure calcu-
lates accurately the optimal parameters because good corre-
spondence is obtained with the one-level direct optimization
Table 2 Number of evaluations in the forward models and the total computational time

Algorithm   N_f   N_c   Total time (h)
RDO (np)    42    242   47.5
RDO (wp)    42    242   38.2
SQP         220   0     77.7
Fig. 8 Percentage error between fine and coarse model in the first step, for the initial design of experiment (rotor angle in degrees)
Fig. 9 Percentage error between F and S for the points specified in step 5 (rotor angle in degrees)
method. The proposed methodology accelerates the optimization
procedure by approximately a factor of two compared
to the direct optimization method.
Acknowledgments This work was supported by the GOA project
GOA07/GOA/006 and the IAP project IAP-P6/21. Ivo Couckuyt is
funded by the Institute for the Promotion of Innovation through
Science and Technology in Flanders (IWT-Vlaanderen). Guillaume
Crevecoeur is a postdoctoral researcher of the FWO.
References
1. Mirzaeian B, Moallem M, Tahani V, Lucas C (2002) Multiob-
jective optimization method based on a genetic algorithm for
switched reluctance motor design. IEEE Trans Magn 38:
4605–4617
2. Wu W, Dunlop J, Collocott S, Kalan B (2003) Design optimi-
zation of a switched reluctance motor by electromagnetic and
thermal finite-element analysis. IEEE Trans Magn 39:3334–3336
3. Vijayakumar K, Karthikeyan R, Paramasivam S, Arumugam R,
Srinivas K (2008) Switched reluctance motor modeling, design,
simulation, and analysis: a comprehensive review. IEEE Trans
Magn 44:4605–4617
4. Bandler J, Biernacki R, Chen S, Grobelny P, Hemmers H (1994)
Space mapping technique for electromagnetic optimization. IEEE
Trans Microw Theory Tech 42:2536–2544
5. Bandler J, Cheng Q, Dakroury S, Mohamed A, Bakr M, Madsen
K, Søndergaard J (2004) Space mapping: state of the art. IEEE
Trans Microw Theory Tech 52:337–361
6. Echeverría D, Lahaye D, Encica L, Lomonova E, Hemker P,
Vandenput A (2006) Manifold-Mapping optimization applied to
linear actuator design. IEEE Trans Magn 42:1183–1186
7. Crevecoeur G, Sergeant P, Dupre L, Vande Walle R (2008) Two-
level response and parameter mapping optimization for magnetic
shielding. IEEE Trans Magn 44:301–308
8. Encica L, Echeverria D, Lomonova E, Vandenput AJA, Hemker
P, Lahaye D (2007) Efficient optimal design of electromagnetic
actuators using space mapping. Struct Multidisc Optim 33:
481–491
9. Encica L, Paulides J, Lomonova E (2009) Space-mapping opti-
mization in electromechanics: an overview of algorithms and
applications. COMPEL Int J Comput Math Electr Electron Eng
28:1216–1226
10. Crevecoeur G, Dupre L, Vande Walle R (2007) Space mapping
optimization of the magnetic circuit of electrical machines
including local material degradation. IEEE Trans Magn 43:
2609–2611
11. Tran T, Brisset S, Brochet P (2007) Combinatorial and multi-
level optimization of a safety isolating transformer. Int J Appl
Electromagn Mech 3:201–208
12. Echeverría D (2007) Multi-level optimization: space mapping
and manifold mapping. PhD thesis, Universiteit van Amsterdam.
13. Alexandrov N, Dennis JE, Lewis RM, Torczon V (1998) A trust
region framework for managing the use of approximation models
in optimization. Struct Optim 15:16–23
14. Booker A, Dennis JJ, Frank P, Serafini D, Torczon V, Trosset M
(1999) A rigorous framework for optimization of expensive
functions by surrogates. Struct Optim 17:1–13
15. Crevecoeur G, Sergeant P, Dupre L, Vande Walle R (2010) A
two-level genetic algorithm for electromagnetic optimization.
IEEE Trans Magn 46:2585–2595
16. Gorissen D, Crombecq K, Couckuyt I, Dhaene T, Demeester P
(2010) A surrogate modeling and adaptive sampling toolbox for
computer based design. J Mach Learn Res 11:2051–2055
17. Dyck D, Lowther DA, Malik Z, Spence R, Nelder J (1999)
Response surface models of electromagnetic devices and their
application to design. IEEE Trans Magn 34:1821–1824
18. Lebensztajn L, Maretto CAR, Costa L, Coulomb J-L (2004)
Kriging: a useful tool for electromagnetic devices optimization.
IEEE Trans Magn 40:1196–1199
19. Mullur A, Messac A (2006) Metamodeling using extended radial
basis functions: a comparative approach. Eng Comput 21:
203–217
20. Simpson T, Peplinski J, Koch P, Allen J (2001) Metamodels for
computer-based engineering design: survey and recommenda-
tions. Eng Comput 17:129–150
21. Simpson T, Booker A, Ghosh D, Giunta A, Koch P, Yang R
(2004) Approximation methods in multidisciplinary analysis and
optimization: a panel discussion. Struct Multidiscip Optim 27:
302–313
22. Lahaye D, Canova A, Gruosso G, Repetto M (2007) Adaptive
manifold-mapping using multiquadratic interpolation applied to
linear actuator design. COMPEL Int J Comput Math Electr Electron
Eng 26:225–235
23. Karakasis MK, Giannakoglou KC (2006) On the use of meta-
model-assisted, multi-objective evolutionary algorithms. Eng
Optim 38:941–957
24. Couckuyt I, Declercq F, Dhaene T, Rogier H, Knockaert L (2010)
Surrogate-based infill optimization applied to electromagnetic
problems. Adv Design Optim Microw RF Circuits Syst 20:
492–501
25. Jones DR, Schonlau M, Welch WJ (1998) Efficient global opti-
mization of expensive black-box functions. J Glob Optim 13:
445–492
26. Shan S, Wang G (2010) Survey of modeling and optimization
strategies to solve high-dimensional design problems with com-
putationally-expensive black-box functions. Struct Multidiscip
Optim 41:219–241
27. Grosso A, Jamali A, Locatelli M (2009) Finding maximin latin
hypercube designs by Iterated Local Search heuristics. Eur J Oper
Res 197:541–547
28. Sacks J, Welch WJ, Mitchell T, Wynn HP (1989) Design and
analysis of computer experiments. Stat Sci 4:409–435
29. SQPLab available at: http://www-roc.inria.fr/gilbert/modulopt/
optimization-routines/sqplab/sqplab.html
30. Moallem M, Dawson G (1998) An improved magnetic equivalent
circuit method for predicting the characteristic of highly saturated
electromagnetic devices. IEEE Trans Magn 34:3632–3635
31. Arkadan AA, Kielagas BW (1994) Switched reluctance motor
drive system dynamic performance prediction and experimental
verification. IEEE Trans Energy Convers 9:36–44
32. Miller T (2002) Optimal design of switched reluctance motors.
IEEE Trans Ind Electron 49:15–27
33. Abdallh AA, Dupre L (2010) Local magnetic measurements in
magnetic circuits with highly non-uniform electromagnetic fields.
Meas Sci Technol 21:045109
34. Forrester AI, Keane AJ (2009) Recent advances in surrogate-
based optimization. Prog Aerospace Sci 45:50–79
35. Coleman TF, Li Y (1996) An interior, trust region approach for
nonlinear minimization subject to bounds. SIAM J Optim 6:
418–445
36. Koziel S, Bandler JW, Madsen K (2008) Quality assessment of
coarse models and surrogates for space mapping optimization.
Optim Eng 9:375–391