Simple Adaptive Control for SISO Nonlinear Systems Using Multiple Neural Networks

Muhammad Yasser, Agus Trisanto, Ayman Haggag, Takashi Yahagi, Hiroo Sekiya, and Jianming Lu

Graduate School of Science and Technology Chiba University

Chiba-shi, 263-8522 Japan Email: [email protected]

Abstract—This paper presents a method of continuous-time simple adaptive control (SAC) using multiple neural networks for single-input single-output (SISO) nonlinear systems with unknown parameters and dynamics, bounded-input bounded-output behavior, and bounded nonlinearities. The control input is given by the sum of the output of the simple adaptive controller and the sum of the outputs of the parallel small-scale neural networks. The parallel small-scale neural networks are used to compensate for the nonlinearity of the plant dynamics that is not taken into consideration in the usual SAC. Their role is to construct a linearized model by minimizing the output error caused by nonlinearities in the control system. Finally, the stability analysis of the proposed method is carried out, and the effectiveness of the method is confirmed through computer simulations.

I. INTRODUCTION

Adaptive control methods were developed as an attempt to overcome difficulties connected with ignorance of system structure and critical parameter values, as well as changing control regimes [1]-[3]. Most self-tuning and adaptive control algorithms use reference models, controllers, or identifiers of almost the same order as the controlled plant. Since the dimension of real-world plants may be very large or unknown, implementation of adaptive control procedures may be very difficult or impossible.

To overcome this problem, the simple adaptive control (SAC) method was developed by Sobel et al. [4] as an attempt to simplify the adaptive controllers, since no observers or identifiers are needed in the feedback loop [5]. Furthermore, the reference model is allowed to be of very low order compared with the controlled plant.

For linear plants with unknown structures, SAC is an important class of adaptive control scheme [5], [6]. However, for nonlinear plants with unknown structures, it may not be possible to ensure that the plant output perfectly follows the output of a reference model by using SAC [7]. For nonlinear plants, many control methods using neural networks have been proposed, and these methods have been shown to handle nonlinearity very well [8], [9], [10]. The combination of SAC and a neural network for a single-input single-output (SISO) nonlinear plant has been proposed and shown to give good results [7], [11].

The combination of SAC and a neural network for nonlinear continuous-time systems, together with its stability proof, was proposed in [11]. The control input is given by the sum of the output of a simple adaptive controller and the output of a neural network. The role of the neural network is to construct a linearized model by minimizing the output error caused by nonlinearities in the control system, using the backpropagation learning algorithm. The role of the simple adaptive controller is to perform model matching of the linear system with unknown structure to a given linear reference model.

In other control methods using neural networks, as the size of the neural network increases, the calculation time for each iteration of the learning process also increases. When the neural network is very large, its calculation becomes very time consuming, forcing the control system to use a large sampling time. For application to an actual plant, it is necessary to drastically reduce the calculation time of the learning process of the neural network, so that the sampling time of the controller can be made small enough to control the plant well.

The proposed method of this paper is an improvement of the methods in [11] and [12]. We propose a design method using the backpropagation learning algorithm for multiple neural networks, consisting of several small-scale neural networks with identical structures connected in parallel, in order to make SAC suitable for real-time processing. The proposed method is designed for a class of SISO nonlinear plants with unknown parameters and dynamics, bounded-input bounded-output (BIBO) behavior, and bounded nonlinearities. The time required per iteration to update the weights in the learning process can be decreased by using multiple neural networks [12]. Moreover, by parallel training of several small-scale neural networks, the learning efficiency is improved. Finally, the stability analysis of the proposed method is carried out, and the effectiveness of this method is confirmed through computer simulations.

II. SIMPLE ADAPTIVE CONTROL

In this section, we briefly describe SAC for a linear SISO plant. The simple adaptive controller is designed to make the plant output converge to the reference model output. Let us consider the following controllable and observable SISO linear plant of order $n_p$ with unknown parameters:

$\dot{x}_p(t) = A_p x_p(t) + B_p u(t)$  (1)

$y_p(t) = C_p x_p(t)$  (2)

where $x_p(t)$ is an $n_p$th-order plant state vector, $u(t)$ is the control input, $y_p(t)$ is the plant output, and $A_p$, $B_p$, and $C_p$ are matrices with appropriate dimensions.

To realize linear SAC for the plant in (1), (2), it is necessary to make the following assumption.

Assumption 1
(a) The plant (1), (2) is ASPR (almost strictly positive real); that is, there exists a constant gain $k_e^*$ such that the transfer function $G_p(s) = C_p (sI - A_c)^{-1} B_p$ is SPR (strictly positive real), where $G_p(s)$ is the plant transfer function and $A_c = A_p + k_e^* B_p C_p$.
(b) $\det \begin{bmatrix} A_p & B_p \\ C_p & 0 \end{bmatrix} \neq 0$.

Furthermore, let us consider that the plant (1), (2) is required to follow the input-output behavior of a reference model of the form

$\dot{x}_m(t) = A_m x_m(t) + B_m u_m(t)$  (3)

$y_m(t) = C_m x_m(t)$  (4)

where $x_m(t)$ is an $n_m$th-order reference model state vector, $u_m(t)$ is the reference model input, $y_m(t)$ is the reference model output, $A_m$ and $C_m$ are matrices with appropriate dimensions, and $B_m$ is a scalar. The reference model can be independent of the controlled plant, and it is permissible to assume $n_m \ll n_p$.

It is necessary to add the supplementary values of the augmented plant, which are defined as

$y_a(t) = y_p(t) + y_s(t)$  (5)

$y_s(s) = D_p(s)\, u(s)$  (6)

$e_y(t) = y_m(t) - y_a(t)$  (7)

where $D_p(s)$ is a simple parallel feedforward compensator (PFC)

$D_p(s) = \dfrac{D_p}{\rho s + 1}$  (8)

placed across the controlled plant to fulfill the condition in Assumption 1(a) and guarantee its robust stability [5], [6], [13], where $\rho$ is a positive constant. The augmented plant we use here must satisfy the following conditions: (i) the augmented plant is ASPR, (ii) $y_a(t) = y_p(t) + y_s(t) \cong y_p(t)$, which can be fulfilled by setting the value of $D_p$ to be very small [5], [6], and (iii) $D_p(s)$ is physically realizable.
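As a concrete illustration, the PFC in (6), (8) is a first-order low-pass filter with a very small gain, and its effect on the augmented output can be stepped forward in time. The sketch below assumes the form $D_p(s) = D_p/(\rho s + 1)$ with $D_p = 0.001$ and $\rho = 1$ as in the simulation section; the Euler step `dt` is an illustrative choice, not a value from the paper.

```python
# Sketch of the PFC (6), (8) discretized with forward Euler, assuming
# D_p(s) = D_p / (rho*s + 1). The small gain D_p makes y_a ~= y_p, as
# required by condition (ii).

def make_pfc(D_p=0.001, rho=1.0, dt=0.001):
    """Return a stateful PFC: each call advances one step and returns y_s."""
    state = {"x": 0.0}  # first-order filter state

    def pfc(u):
        # rho * dx/dt + x = D_p * u  ->  Euler step
        state["x"] += dt * (D_p * u - state["x"]) / rho
        return state["x"]

    return pfc

pfc = make_pfc()
u, y_p = 1.0, 0.5
y_s = pfc(u)         # supplementary output, tiny because D_p is small
y_a = y_p + y_s      # augmented output (5)
assert abs(y_a - y_p) < 1e-2
```

Because $D_p$ is small, the supplementary signal barely perturbs the plant output while still reshaping the transfer function enough to satisfy the ASPR condition.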

The control objective is to achieve the following relation:

$\lim_{t\to\infty} e_y(t) = 0$.  (9)

Since the plant is unknown, the actual control of the plant will be generated by the following adaptive algorithm using the values that can be measured, which are $e_y(t)$, $x_m(t)$, and $u_m(t)$, to obtain the low-order adaptive controller

$u_p(t) = K(t)\, r(t) = K_e(t)\, e_y(t) + K_x(t)\, x_m(t) + K_u(t)\, u_m(t)$  (10)

where

$K(t) = [K_e(t)\ \ K_x(t)\ \ K_u(t)]$  (11)

$r(t) = [e_y(t)\ \ x_m^T(t)\ \ u_m(t)]^T$  (12)

and the adaptive gains are obtained as a combination of 'proportional' and 'integral' terms as follows:

$K(t) = K_p(t) + K_i(t)$  (13)

$K_p(t) = [e_y(t)\, e_y(t)\, T_{p_e}\ \ e_y(t)\, x_m^T(t)\, T_{p_x}\ \ e_y(t)\, u_m(t)\, T_{p_u}] = e_y(t)\, r^T(t)\, T_p$  (14)

$\dot{K}_i(t) = e_y(t)\, r^T(t)\, T_i - \sigma K_i(t)$  (15)

$(T_p = T_p^T > 0,\ T_i = T_i^T > 0)$.

Then we apply the SAC control input in (10) to control the SISO linear plant (1), (2), so that the control input $u(t)$ for the plant will be

$u(t) = u_p(t)$.  (16)
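The adaptive law (10)-(15) can be sketched as a discrete-time update in which the integral gain is Euler-integrated. The sketch below is a minimal illustration, not the paper's implementation: the reference model state is taken as a scalar, `Tp_diag`/`Ti_diag` hold the diagonals of $T_p$, $T_i$, and the step size `dt` is an assumed value.

```python
import numpy as np

# One step of the SAC adaptive law: proportional gain (14), Euler-integrated
# integral gain (15), and the control input (10), (13).

def sac_step(e_y, x_m, u_m, K_i, Tp_diag, Ti_diag, sigma=1.0, dt=0.01):
    r = np.array([e_y, x_m, u_m])                        # regressor (12)
    K_p = e_y * r * Tp_diag                              # proportional part (14)
    K_i = K_i + dt * (e_y * r * Ti_diag - sigma * K_i)   # integral part (15)
    u_p = (K_p + K_i) @ r                                # control input (10)
    return u_p, K_i

K_i = np.zeros(3)
u_p, K_i = sac_step(e_y=0.1, x_m=0.2, u_m=1.0, K_i=K_i,
                    Tp_diag=np.full(3, 5e3), Ti_diag=np.full(3, 5e4))
```

The sigma term in (15) is the usual leakage that keeps the integral gains bounded when the error does not converge exactly to zero.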

III. SIMPLE ADAPTIVE CONTROL USING MULTIPLE NEURAL NETWORKS FOR NONLINEAR SYSTEMS

When the input-output characteristic of the controlled plant is nonlinear, it cannot be expressed as in (1) and (2). First, let us consider a SISO nonlinear plant with BIBO behavior, expressed by a system that consists of a linear part and a nonlinear part:

$\dot{x}_p(t) = A_p x_p(t) + B_p u(t) + f_x(x_p(t), u(t))$  (17)

$y_p(t) = C_p x_p(t) + f_y(x_p(t))$  (18)

where $x_p(t)$ is an $n_p$th-order plant state vector, $u(t)$ is the control input, $y_p(t)$ is the plant output, $A_p$, $B_p$, and $C_p$ are matrices with appropriate dimensions, $f_x(\cdot)$ is a nonlinear function vector $\in R^{n_p}$, and $f_y(\cdot)$ is a nonlinear function. We further assume that the system (17), (18) is controllable and observable. Then, it is necessary to make the following assumption [11].

Assumption 2
(a) The linear part and the nonlinear part of the plant in (17), (18) are unknown.
(b) For the plant in (17), (18), there exists an augmented plant whose linear part satisfies Assumption 1(a). This augmented plant, as in (42), (43), is formed by incorporating the plant in (17), (18) with the supplementary values in (5)-(8) [15].
(c) The nonlinear part of the plant in (17), (18), represented by $f_x(\cdot)$ and $f_y(\cdot)$, is bounded.

However, in this case, when the input in (16) is used to control the nonlinear system in (17) and (18), the problem of output error arises [7], [11].

To make the plant output $y_p(t)$ converge to the reference model output $y_m(t)$, the control input can be expressed as

$u(t) = h(y_m(t), y_a(t), y_p(t), x_p(t))$  (19)

according to (7), (17), and (18), where $h(\cdot)$ is an unknown nonlinear function.

In this paper, we synthesize the control input $u(t)$ by the following equation:

$u(t) = u_p(t) + \bar{u}_p(t)$  (20)

where $u_p(t)$ is the output of SAC, as mentioned in (10), and $\bar{u}_p(t)$ is the total control input of the neural networks, given as

$\bar{u}_p(t) = \alpha\,\hat{u}_p(t) = \alpha\, f_{ZOH}\big(\hat{u}_p(k)\big)$  (21)

$\hat{u}_p(k) = \sum_{v=1}^{n_v} \hat{u}_{pv}(k)$  (22)

where $\alpha$ is a positive constant, $\hat{u}_p(t)$ is the total continuous-time output of the neural networks, $\hat{u}_p(k)$ is the total discrete-time output of the neural networks, $f_{ZOH}(\cdot)$ is a zero-order hold function [11], $\hat{u}_{pv}(k)$ is the output of the $v$-th small-scale neural network, and $n_v$ is the number of parallel small-scale neural networks.

As in [11], a sampler with an appropriate sampling period is implemented in front of the neural networks to obtain their discrete-time inputs, and a zero-order hold is implemented to change the discrete-time output $\hat{u}_p(k)$ of the neural networks back to the continuous-time output $\hat{u}_p(t)$, as shown in (21).

Consequently, we can assume the discrete-time output $\hat{u}_p(k)$ to be of the form

$\hat{u}_p(k) = \hat{h}\big(y_m(k-1), y_p(k-1), \dots, y_p(k-n)\big)$  (23)

where $\hat{h}(\cdot)$ is an unknown nonlinear function and $n$ is the number of past plant-output data.

Using the above approach, the parallel small-scale neural networks will be trained until the output error $e(t)$, given as

$e(t) = y_m(t) - y_p(t)$  (24)

satisfies the following relation:

$\lim_{t\to\infty} e(t) = \lim_{t\to\infty}\big(y_m(t) - y_p(t)\big) \le \varepsilon$  (25)

where $\varepsilon$ is a small positive value.
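At each sampling instant, the combined control law (20)-(22) is just a sum: the SAC output plus α times the sum of the parallel network outputs, held between samples by the zero-order hold. The sketch below uses placeholder numeric values for the per-network outputs; `alpha` corresponds to the positive constant α in (21).

```python
# Sketch of the combined control law (20)-(22): the plant input is the SAC
# output plus alpha times the summed outputs of the n_v parallel networks.

def total_control(u_p, nn_outputs, alpha=1.0):
    """u(t) = u_p(t) + alpha * sum_v u_hat_pv(k), held between samples."""
    u_hat_p = sum(nn_outputs)        # (22): sum over the parallel networks
    return u_p + alpha * u_hat_p     # (20), (21)

u = total_control(u_p=0.5, nn_outputs=[0.1, -0.05, 0.02])
assert abs(u - 0.57) < 1e-9
```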

IV. COMPOSITION OF NEURAL NETWORK

Each of the parallel small-scale neural networks consists of three layers: an input layer, a hidden layer, and an output layer. For the $v$-th parallel small-scale neural network ($v = 1, 2, \dots, n_v$), let $x_{iv}(k)$ be the input to the $i$-th neuron in the input layer ($i = 1, 2, \dots, n_i$), $h_{qv}(k)$ be the input to the $q$-th neuron in the hidden layer ($q = 1, 2, \dots, n_q$), and $o_v(k)$ be the input to the neuron in the output layer. Furthermore, let $m_{iqv}(k)$ be the weight between the input layer and the hidden layer, and $m_{qjv}(k)$ be the weight between the hidden layer and the output layer.

The control input is given by the sum of the output of the simple adaptive controller and the output of the neural networks. The neural networks are used to compensate for the nonlinearity of the plant dynamics that is not taken into consideration in the usual SAC; their role is to construct a linearized model by minimizing the output error caused by nonlinearities in the control system. Referring to (23), the input $i_v(k)$ of each neural network is given as

$i_v(k) = \big[y_m(k-1), y_p(k-1), \dots, y_p(k-n)\big]$.  (26)

Therefore, the nonlinear function of the system can be approximated by the neural networks. Furthermore, the value $n$ should be chosen appropriately according to the practical nonlinear system.
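The input vector (26) can be assembled from short histories of the reference model and plant outputs. The sketch below is a minimal illustration, assuming bounded history buffers and the arbitrary choice n = 2; every parallel small-scale network would receive this same vector.

```python
from collections import deque

# Sketch of the network input (26):
# i_v(k) = [y_m(k-1), y_p(k-1), ..., y_p(k-n)].

def make_input(y_m_hist, y_p_hist, n):
    """Histories store the newest sample last, so index -1 is time k-1."""
    past_yp = [y_p_hist[-j] for j in range(1, n + 1)]  # y_p(k-1) .. y_p(k-n)
    return [y_m_hist[-1]] + past_yp

y_m_hist = deque([0.0, 0.2, 0.4], maxlen=8)
y_p_hist = deque([0.0, 0.1, 0.3], maxlen=8)
assert make_input(y_m_hist, y_p_hist, n=2) == [0.4, 0.3, 0.1]
```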

V. LEARNING OF NEURAL NETWORK

The dynamics of the $v$-th parallel small-scale neural network are given as

$h_{qv}(k) = \sum_i x_{iv}(k)\, m_{iqv}(k)$  (27)

$o_v(k) = \sum_q S_1\big(h_{qv}(k)\big)\, m_{qjv}(k)$  (28)

$\hat{u}_{pv}(k) = S_2\big(o_v(k)\big)$  (29)

where $S_1(\cdot)$ is a tangent sigmoid function and $S_2(\cdot)$ is a pure linear function. The tangent sigmoid function is chosen as

$S_1(X) = \dfrac{2}{1 + \exp(-\mu X)} - 1$  (30)

where $\mu > 0$, and the pure linear function is chosen as

$S_2(X) = X$.  (31)

Consider the case when $S_1(X) = a$. Then the derivatives of the tangent sigmoid function $S_1(\cdot)$ and the pure linear function $S_2(\cdot)$ are as follows:

$S_1'(X) = \dfrac{\mu}{2}\big(1 - a^2\big)$  (32)

$S_2'(X) = 1$.  (33)

The objective of training is to minimize the error function $E(k)$ by taking the error gradient with respect to the weight vector $m_v(k)$ that is to be adapted. The error function is defined as

$E(k) = \dfrac{1}{2}\big[y_m(k) - y_p(k)\big]^2$.  (34)

The weights are then adapted by using

$\Delta m_{qjv}(k) = -c \cdot \dfrac{\partial E(k)}{\partial m_{qjv}(k)}$  (35)

$\Delta m_{iqv}(k) = -c \cdot \dfrac{\partial E(k)}{\partial m_{iqv}(k)}$  (36)
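The derivative identity (32) can be checked numerically: with $a = S_1(X)$, a central difference of (30) should match $(\mu/2)(1-a^2)$. The value $\mu = 2$ follows the simulation section; the evaluation point `X` is arbitrary.

```python
import math

# Numeric check of the activation functions (30)-(33): with a = S1(X),
# the tangent sigmoid satisfies S1'(X) = (mu/2) * (1 - a**2).

mu = 2.0
S1 = lambda X: 2.0 / (1.0 + math.exp(-mu * X)) - 1.0
S2 = lambda X: X                        # pure linear output activation (31)

X = 0.7
a = S1(X)
analytic = 0.5 * mu * (1.0 - a * a)     # closed form (32)
numeric = (S1(X + 1e-6) - S1(X - 1e-6)) / 2e-6
assert abs(analytic - numeric) < 1e-6
assert S2(3.5) == 3.5                   # S2'(X) = 1 trivially (33)
```

Having the derivative in terms of the activation value $a$ itself is what makes the backpropagation updates in the next section cheap: no extra exponentials are needed.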

where $c > 0$ is the learning parameter. For the learning process, (35) and (36) are expanded as follows:

$\Delta m_{qjv}(k) = -c \cdot \dfrac{\partial E(k)}{\partial y_p(k)} \cdot \dfrac{\partial y_p(k)}{\partial \hat{u}_{pv}(k)} \cdot \dfrac{\partial S_2(o_v(k))}{\partial o_v(k)} \cdot \dfrac{\partial o_v(k)}{\partial m_{qjv}(k)}$
$\qquad\quad = c \cdot \big[y_m(k) - y_p(k)\big] \cdot J_{plant} \cdot S_2'\big(o_v(k)\big) \cdot S_1\big(h_{qv}(k)\big)$  (37)

$\Delta m_{iqv}(k) = -c \cdot \dfrac{\partial E(k)}{\partial y_p(k)} \cdot \dfrac{\partial y_p(k)}{\partial \hat{u}_{pv}(k)} \cdot \dfrac{\partial \hat{u}_{pv}(k)}{\partial S_2(o_v(k))} \cdot \dfrac{\partial S_2(o_v(k))}{\partial o_v(k)} \cdot \dfrac{\partial o_v(k)}{\partial S_1(h_{qv}(k))} \cdot \dfrac{\partial S_1(h_{qv}(k))}{\partial h_{qv}(k)} \cdot \dfrac{\partial h_{qv}(k)}{\partial m_{iqv}(k)}$
$\qquad\quad = c \cdot \big[y_m(k) - y_p(k)\big] \cdot J_{plant} \cdot S_2'\big(o_v(k)\big) \cdot m_{qjv}(k) \cdot S_1'\big(h_{qv}(k)\big) \cdot i_v(k)$  (38)

where

$J_{plant} = \mathrm{SGN}\!\left(\dfrac{\partial y_p(k)}{\partial \hat{u}_{pv}(k)}\right)$  (39)

which is derived from the approach mentioned in [9], [10].
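The forward pass (27)-(29) and the weight updates (37), (38) for one small-scale network can be sketched as below. The layer sizes, random initial weights, and the assumption `J_plant = +1` (the sign of the plant Jacobian in (39)) are all illustrative; with a positive output error and positive Jacobian sign, the update should push the network output upward.

```python
import numpy as np

# Sketch of one small-scale network: forward pass (27)-(29) and the
# backpropagation updates (37), (38) with the sign-of-Jacobian rule (39).
# Layer sizes, initial weights, and J_plant = +1 are assumptions.

rng = np.random.default_rng(0)
n_i, n_q = 3, 4                                 # input / hidden layer sizes
m_iq = 0.1 * rng.standard_normal((n_i, n_q))    # input-to-hidden weights
m_qj = 0.1 * rng.standard_normal(n_q)           # hidden-to-output weights
mu, c = 2.0, 0.001

def S1(X):
    return 2.0 / (1.0 + np.exp(-mu * X)) - 1.0  # tangent sigmoid (30)

def forward(i_v):
    h = i_v @ m_iq                  # hidden-layer inputs (27)
    o = S1(h) @ m_qj                # output-layer input (28)
    return h, o                     # u_hat_pv = S2(o) = o by (29), (31)

def backprop(i_v, y_m, y_p, J_plant=1.0):
    global m_iq, m_qj
    h, _ = forward(i_v)
    a = S1(h)
    delta_o = c * (y_m - y_p) * J_plant   # common factor of (37), (38); S2' = 1
    dS1 = 0.5 * mu * (1.0 - a * a)        # (32)
    m_iq = m_iq + delta_o * np.outer(i_v, m_qj * dS1)   # (38), uses old m_qjv(k)
    m_qj = m_qj + delta_o * a                            # (37)

i_v = np.array([0.4, 0.3, 0.1])
_, o_before = forward(i_v)
backprop(i_v, y_m=0.4, y_p=0.3)     # positive error, J_plant = +1
_, o_after = forward(i_v)
assert o_after > o_before           # update pushes the output upward
```

Replacing the unknown plant Jacobian by its sign, as in (39), scales the step but keeps its direction, which is why only the sign of $\partial y_p/\partial \hat{u}_{pv}$ needs to be known.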

VI. CONVERGENCE AND STABILITY

For the stability analysis of our method, we use and modify the stability analysis presented in [11] and [14]. As mentioned in Assumption 2(b), the PFC in (6), (8) is incorporated with the nonlinear system in (17), (18) to form the augmented plant, as in (42), (43), whose linear part is ASPR. For convenience, it is first necessary to express the PFC in (6), (8) in state-space form as follows:

$\dot{x}_s(t) = A_s x_s(t) + B_s u(t)$  (40)

$y_s(t) = C_s x_s(t)$  (41)

then, by applying (40), (41) to (6), (17), and (18), the augmented plant can be described as follows:

$\dot{x}(t) = A x(t) + B u_p(t) + \hat{\delta}_i(x(t), u(t), \bar{u}_p(t))$  (42)

$y_a(t) = C x(t) + \hat{\delta}_o(x(t))$  (43)

where

$x(t) = \begin{bmatrix} x_p(t) \\ x_s(t) \end{bmatrix}$  (44)

$A = \begin{bmatrix} A_p & 0 \\ 0 & A_s \end{bmatrix}$  (45)

$B = \begin{bmatrix} B_p \\ B_s \end{bmatrix}$  (46)

$C = \begin{bmatrix} C_p & C_s \end{bmatrix}$  (47)

and $\hat{\delta}_i(x(t), u(t), \bar{u}_p(t))$ and $\hat{\delta}_o(x(t))$ represent the nonlinear part of the augmented plant, described as follows:

$\hat{\delta}_i(x(t), u(t), \bar{u}_p(t)) = \begin{bmatrix} f_x(x_p(t), u(t)) \\ 0 \end{bmatrix} + B\,\bar{u}_p(t) = \delta_i(x(t), u(t)) + B\,\bar{u}_p(t)$  (48)

$\hat{\delta}_o(x(t)) = f_y(x_p(t))$.  (49)

To prove the stability of our method, we start by defining its

Lyapunov function as follows:

$V_{SACNN}(t) = V_{SAC}(t) + V_{NN}(t)$  (50)

where $V_{SAC}(t)$ is the Lyapunov function of the SAC part of our method, a modification of the Lyapunov function of SAC presented in [11], [14], and $V_{NN}(t)$ is the total Lyapunov function of the neural networks of our method.

The Lyapunov function of the SAC part is described as follows:

$V_{SAC}(t) = e_x^T(t)\, P\, e_x(t) + \mathrm{tr}\big\{[K_i(t) - \tilde{K}]\, T_i^{-1}\, [K_i(t) - \tilde{K}]^T\big\}$  (51)

where $P$ is a real symmetric positive definite matrix, $\mathrm{tr}(\cdot)$ is the trace function, and $e_x(t)$ is the state error vector given as

$e_x(t) = \hat{x}(t) - x(t) = \big[e_x(t), \dot{e}_x(t), \dots, e_x^{(n_p-1)}(t)\big]^T$  (52)

where $\hat{x}(t)$ is the ideal target state vector of the system, and $\tilde{K} = [\tilde{K}_e\ \ \tilde{K}_x\ \ \tilde{K}_u]$ is the unknown ideal gain of SAC.

The derivative of the Lyapunov function in (50) is

$\dot{V}_{SACNN}(t) = \dot{V}_{SAC}(t) + \dot{V}_{NN}(t)$.  (53)

Here the Lyapunov function of the SAC part, $V_{SAC}(t)$ in (51), is developed by replacing the disturbances used in the Lyapunov function of SAC in [14] with the nonlinear part represented by $\hat{\delta}_i(x(t), u(t), \bar{u}_p(t))$ and $\hat{\delta}_o(x(t))$; its derivative, $\dot{V}_{SAC}(t)$, can then be described as follows:

$\dot{V}_{SAC}(t) = -\,e_x^T(t)\, Q\, e_x(t) - 2\sigma\,\mathrm{tr}\big\{[K_i(t)-\tilde{K}]\, T_i^{-1}\, [K_i(t)-\tilde{K}]^T\big\}$
$\qquad -\,2\, e_y(t)\, e_y(t)\big[e_y(t)\, T_{p_e}\, e_y(t) + x_m^T(t)\, T_{p_x}\, x_m(t) + u_m^T(t)\, T_{p_u}\, u_m(t)\big]$
$\qquad +\,2\, e_x^T(t)\, P\,\mathrm{F}(t) - 2\,\hat{\delta}_o(x(t))\,[K_i(t)-\tilde{K}]\, r(t)$
$\qquad -\,2\,\hat{\delta}_o(x(t))\, e_y(t)\big[e_y(t)\, T_{p_e}\, e_y(t) + x_m^T(t)\, T_{p_x}\, x_m(t) + u_m^T(t)\, T_{p_u}\, u_m(t)\big]$  (54)

where $Q$ is a real matrix and $\mathrm{F}(t)$ is given as

$\mathrm{F}(t) = \mathrm{E}_{Bias}(t) - B\tilde{K}_e\,\hat{\delta}_o(x(t)) + \hat{\delta}_i(x(t), u(t), \bar{u}_p(t))$  (55)

where $\mathrm{E}_{Bias}(t)$ is a bias term as explained in [16].

For $\dot{V}_{NN}(t)$, we assume that it can be approximated as

$\dot{V}_{NN}(t) = \sum_{v=1}^{n_v} \dot{V}_{NNv}(t), \qquad \dot{V}_{NNv}(t) \cong \Delta V_{NNv}(k)/\Delta T$  (56)

where $\Delta V_{NNv}(k)$ is the increment of a discrete-time Lyapunov function and $\Delta T$ is the sampling time. According to [11], $\Delta V_{NNv}(k)$ can be guaranteed to be negative definite if the learning parameter $c$ fulfills the following conditions:

$0 < c < 2/n_q$  (57)

for the weights between the hidden layer and the output layer, $m_{qjv}(k)$, and

$0 < c < \dfrac{2}{n_q}\big[\max_k m_{qjv}(k) \cdot \max_k i_v(k)\big]^{-2}$  (58)

for the weights between the input layer and the hidden layer, $m_{iqv}(k)$. Furthermore, if the conditions in (57) and (58) are fulfilled for each of the parallel small-scale neural networks, the negativity of $\dot{V}_{NN}(t)$ can also be increased by reducing $\Delta T$ in (56).

The stability of our method requires $\dot{V}_{SACNN}(t)$ in (53) to be negative definite. For the derivative of the SAC Lyapunov function in (54), (55): from (20), (44), (45), Assumption 2(c), and the fulfillment of the conditions in (57) and (58), $\hat{\delta}_i(x(t), u(t), \bar{u}_p(t))$ and $\hat{\delta}_o(x(t))$ are bounded. We can then directly apply the same method as in [14] to prove the stability of the SAC part of our method. Thus, the derivative of the Lyapunov function of our proposed method in (53) can be guaranteed to be negative definite.
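The learning-parameter bounds (57), (58) can be evaluated from running maxima of the hidden-to-output weights and network inputs. The sketch below assumes (58) takes the form $0 < c < (2/n_q)[\max_k m_{qjv}(k) \cdot \max_k i_v(k)]^{-2}$; the maxima passed in are illustrative values, not quantities from the paper.

```python
# Sketch of the learning-rate bounds (57), (58) that keep each discrete-time
# Lyapunov increment negative. The stored maxima are illustrative stand-ins
# for running statistics collected during training.

def c_bounds(n_q, max_m_qj, max_i):
    hidden_out = 2.0 / n_q                                   # bound (57)
    input_hidden = (2.0 / n_q) * (max_m_qj * max_i) ** -2    # bound (58)
    return hidden_out, input_hidden

b1, b2 = c_bounds(n_q=4, max_m_qj=0.5, max_i=1.0)
c = 0.001
assert 0 < c < min(b1, b2)    # the simulation value c = 0.001 satisfies both
```

In practice such a check would run alongside training, shrinking `c` whenever the weight or input maxima grow enough to violate (58).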

VII. COMPUTER SIMULATION

We consider a SISO nonlinear plant with BIBO behavior and a bounded nonlinearity.

Example: Let us consider a SISO nonlinear plant from [11] as follows:

$\dot{x}_p = \begin{bmatrix} 0 & 1 \\ -1 & -2 \end{bmatrix} x_p + \begin{bmatrix} 0 \\ 2 \end{bmatrix} u + \begin{bmatrix} 0 \\ f_{sat\,-10}^{\,10}\big(\sin(x_1 x_2)\big) \end{bmatrix}$

$y_p = x_1 + \sin(x_1)$

where $f_{sat\,n_{lower}}^{\,n_{upper}}(\cdot)$ is a saturation function with a lower limit at $n_{lower}$ and an upper limit at $n_{upper}$. The parameters $D_p = 0.001$ in (8), $\rho = 1$ in (8), $T_p = \mathrm{diag}(5\times 10^3, 5\times 10^3, 5\times 10^3)$ in (14), $T_i = \mathrm{diag}(5\times 10^4, 5\times 10^4, 5\times 10^4)$ in (15), $\sigma = 1$ in (15), $\mu = 2$ in (30), $c = 0.001$ in (35)-(38), and $\Delta T = 0.01$ in (56) are fixed. Furthermore, we assume a first-order reference model (3), (4) with parameters $A_m = -10$, $B_m = 10$, and $C_m = 1$.
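Two of the simulation ingredients are easy to sketch directly: the first-order reference model (3), (4) with $A_m = -10$, $B_m = 10$, $C_m = 1$, Euler-integrated with the stated $\Delta T = 0.01$, and a saturation function with assumed limits of $\pm 10$ for illustration.

```python
# Sketch of the reference model (3), (4) with A_m = -10, B_m = 10, C_m = 1,
# stepped with forward Euler at dt = 0.01, plus a saturation function with
# illustrative limits of +/-10.

def f_sat(x, lower=-10.0, upper=10.0):
    return max(lower, min(upper, x))

def run_reference(u_m, steps, dt=0.01):
    x_m = 0.0
    for _ in range(steps):
        x_m += dt * (-10.0 * x_m + 10.0 * u_m)   # (3)
    return x_m                                    # y_m = C_m x_m = x_m by (4)

y_m = run_reference(u_m=1.0, steps=1000)
assert abs(y_m - 1.0) < 1e-3   # dc gain -B_m/A_m = 1, so y_m -> u_m
assert f_sat(42.0) == 10.0
```

The model's dc gain is $-B_m/A_m = 1$, so for a constant input the reference output settles at the input value, which is the trajectory the plant output is asked to track in Figs. 1-4.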

Fig. 1 shows the output of the reference model $y_m(t)$ and the plant output $y_p(t)$ using only SAC. The result in Fig. 1 shows that the error between $y_p(t)$ and $y_m(t)$ is large.

Fig. 2 shows the output of the reference model $y_m(t)$ and the plant output $y_p(t)$ using our proposed method of SAC using multiple neural networks with $n_v = 1$. The error of the system has been reduced, and the plant output $y_p(t)$ follows the output of the reference model $y_m(t)$ more closely than with SAC alone.

Fig. 3 shows the same outputs using our proposed method with $n_v = 2$. The error of the system is further reduced, and the plant output $y_p(t)$ follows the reference model output $y_m(t)$ more closely than with $n_v = 1$.

Fig. 4 shows the same outputs using our proposed method with $n_v = 3$. The error of the system is reduced still further, and the plant output $y_p(t)$ follows the reference model output $y_m(t)$ very closely.

Figs. 1-4 and Table I show that, for the same number of training iterations, the sum of squared errors of the system decreases as the number of parallel small-scale neural networks increases.

VIII. CONCLUSION

In this paper, we proposed a design method using the backpropagation learning algorithm for multiple neural networks, consisting of several small-scale neural networks with identical structures connected in parallel, in order to make SAC suitable for real-time processing. The proposed method was designed for a class of SISO nonlinear plants with unknown parameters and dynamics, bounded-input bounded-output (BIBO) behavior, and bounded nonlinearities. The time required per iteration to update the weights in the learning process could be decreased by using multiple neural networks [12]. Moreover, by parallel training of several small-scale neural networks, the learning efficiency was improved. Finally, the stability analysis of the proposed method was carried out, and the effectiveness of the method was confirmed through computer simulations.

TABLE I
COMPARISON OF SUM OF SQUARE ERROR AFTER 4501 TRAINING ITERATIONS

Number of parallel small-scale neural networks | Sum of square error
-----------------------------------------------|--------------------
SAC only (no neural network)                    | 7.9492
1                                               | 2.5220
2                                               | 1.7695
3                                               | 0.3055

REFERENCES

[1] K. J. Åström and B. Wittenmark, Adaptive Control, Addison-Wesley, 1995.
[2] J. Lu and T. Yahagi, "New design method for model reference adaptive control for nonminimum phase discrete-time systems with disturbances," IEE Proceedings-D, Control Theory and Applications, vol. 140, no. 1, pp. 34-40, 1993.
[3] J. Lu and T. Yahagi, "Discrete-time model reference adaptive control for nonminimum phase systems with disturbances," Trans. ASME, Journal of Dynamic Systems, Measurement, and Control, vol. 120, no. 3, pp. 117-123, 1998.
[4] K. Sobel, H. Kaufman, and L. Mabius, "Implicit adaptive control for a class of MIMO systems," IEEE Trans. Aerospace and Electronic Systems, vol. AES-18, no. 5, pp. 576-590, 1982.
[5] I. Bar-Kana and H. Kaufman, "Global stability and performance of a simplified adaptive algorithm," Int. J. Control, vol. 42, no. 6, pp. 1491-1505, 1985.
[6] Z. Iwai and I. Mizumoto, "Robust and simple adaptive control systems," Int. J. Control, vol. 55, no. 6, pp. 1453-1470, 1992.
[7] J. Lu, J. Phuah, and T. Yahagi, "SAC for nonlinear systems using Elman recurrent neural networks," IEICE Trans. Fundamentals, vol. E85-A, no. 8, pp. 1831-1840, 2002.
[8] J. Lu, J. Phuah, and T. Yahagi, "A method of model reference adaptive control for MIMO nonlinear systems using neural networks," IEICE Trans. Fundamentals, vol. E84-A, no. 8, pp. 1933-1941, 2001.
[9] G. Lightbody, Q. H. Wu, and G. W. Irwin, "Control applications for feedforward networks," in Neural Networks for Control and Systems, IEE Control Engineering Series 46, pp. 51-71, 1992.
[10] J. A. K. Suykens, J. P. L. Vandewalle, and B. L. R. De Moor, Artificial Neural Networks for Modelling and Control of Non-Linear Systems, Kluwer Academic Publishers, 1996.
[11] M. Yasser, A. Trisanto, J. Lu, and T. Yahagi, "A method of simple adaptive control for nonlinear systems using neural networks," IEICE Trans. Fundamentals, vol. E89-A, no. 7, pp. 2009-2018, 2006.
[12] M. Yasser, J. Phuah, J. Lu, and T. Yahagi, "A method of simple adaptive control for MIMO nonlinear continuous-time systems using multifraction neural network," Proc. IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM 2003), vol. 1, pp. 23-28, Kobe, Japan, July 2003.
[13] I. Bar-Kana and H. Kaufman, "Simple adaptive control of uncertain systems," Int. J. Adaptive Control and Signal Processing, vol. 2, pp. 133-143, 1988.
[14] H. Kaufman, I. Barkana, and K. Sobel, Direct Adaptive Control Algorithms: Theory and Applications, 2nd ed., Springer, 1997.

Fig. 1. $y_m(t)$ and $y_p(t)$ using only SAC.

Fig. 2. $y_m(t)$ and $y_p(t)$ using our proposed method of SAC with 1 parallel small-scale neural network.

Fig. 3. $y_m(t)$ and $y_p(t)$ using our proposed method of SAC with 2 parallel small-scale neural networks.

Fig. 4. $y_m(t)$ and $y_p(t)$ using our proposed method of SAC with 3 parallel small-scale neural networks.