
Soft Comput (2006) DOI 10.1007/s00500-005-0471-2

ORIGINAL PAPER

A. Chatterjee · A. Rakshit · P. Siarry

Generalised influential rule search scheme for fuzzy function approximation

Published online: 24 January 2006 © Springer-Verlag 2006

Abstract The present paper develops a fuzzy function approximator that can completely self-generate its fuzzy rule base and input-output membership functions from an input-output data set. The fuzzy system can be further adapted, by modifying its rule base and output membership functions, until it provides satisfactory performance. The proposed scheme, called the generalised influential rule search scheme, has been successfully implemented to develop pure fuzzy function approximators as well as fuzzy logic controllers. Its performance is demonstrated by implementing it for several major components of a process control loop, and its versatility is further shown on a benchmark nonlinear function approximation problem.

Keywords Adaptive fuzzy systems · Fuzzy function approximators · FCM with modified α-cuts · Fuzzy rule base punishment · Adaptation of output membership functions

1 Introduction

In recent years, the determination of a suitable function mapping from a given input-output data set, employing fuzzy systems and/or neural networks in a supervised manner, has become an active area of research [1]. Both fuzzy systems and neural networks have advantages and disadvantages when applied to a specific problem domain. Neural networks can generate effective solutions if the network can be properly trained. However, if the training is not satisfactory, it is often almost impossible to analyze why the performance is poor, and hence it cannot be improved upon. Even when a neural network application performs satisfactorily, it is very difficult for the user to understand its entire working principle, since any neural network architecture is bound to be complex in nature.

A. Chatterjee · A. Rakshit (✉) Electrical Engineering Department, Jadavpur University, Calcutta 700 032, India

P. Siarry Université Paris XII, Faculté des Sciences (LERISS), 94010 Créteil, France

Normally, every neural network is also associated with a considerable computational burden in its training phase. A fuzzy system, on the other hand, is much more analytically tractable from the user's point of view: its architecture and operating principle can be understood relatively easily, and its performance can be improved by proper tuning of its parameters. However, in many cases fuzzy system performance may not be comparable to that of its neural network counterpart. This is because the inherent strength of a fuzzy system lies in its knowledge base, which is obtained by knowledge acquisition from the domain expert. A poor exchange of domain knowledge between the knowledge engineer and the domain expert may lead to a weak fuzzy knowledge base and hence to less satisfactory performance. So, in many cases, fuzzy systems are easy to develop and simple to understand but may not provide sufficient accuracy.

To overcome these drawbacks, different methodologies have been proposed in the recent past for developing fuzzy systems that can do away with the domain expert and can completely self-construct themselves on the basis of a given input-output data set. The objective of adapting such a self-constructing fuzzy system is to adapt its topology, determining the optimum number of membership functions (MFs) for each input and output variable, the number of associated fuzzy rules, etc., and/or to determine the optimum set of free parameters that improve system performance [2,3]. Two of the earliest and most popular adaptive fuzzy systems were reported in [4] and [5]. Both implemented their fuzzy systems by dividing each input space into a fixed number of fuzzy regions characterized by unbiased choices of MFs. An important variation of [5] later proposed how a Takagi-Sugeno-Kang (TSK) fuzzy rule conclusion can be computed based on an efficient heuristic [6]. Hong and Lee [7] proposed another heuristic-based fuzzy system which is initially based on fixed fuzzy regions in the input space; automatic determination and adaptation of the fuzzy rule base and MFs are carried out through a series of merging operations. Further improvements on this scheme were later proposed in [8] and [9], which determine only the relevant input attributes and further simplify the fuzzy rule base to ease the computational burden.



Two important variations of adaptive fuzzy systems were proposed in [2] and [15], where each input space could be divided into variable fuzzy regions and the number of fuzzy sets associated with each input variable could be chosen dynamically. These systems could optimise both the structure and the free parameters of the fuzzy inference system (FIS). Several other adaptive fuzzy systems are presented in [10–14]. An excellent analysis of the approximation capabilities of fuzzy systems, which can be used as universal approximators, is given in [16].

Another important class of adaptive fuzzy systems employs neural network based architectures [17–25], which can be successfully used as universal approximators for a large class of nonlinear, multidimensional functions. Some of these schemes employ strictly supervised learning, while others use hybrid schemes combining supervised and unsupervised learning techniques [2]. Lin and Lee's neuro-fuzzy system [17] and Jang's ANFIS [18] are the two most popular neuro-fuzzy architectures, later employed by many researchers to develop their own learning algorithms (e.g. [25]). Another variation of neuro-fuzzy systems was proposed by Simpson [19], which employed adaptively growing hyperboxes to characterize the dynamic nature of fuzzy MFs. This system was further improved by incorporating two different types of hyperboxes [20] and by introducing hyperellipsoidal prototypes [21]. Nauck and Kruse proposed their neuro-fuzzy approximator (NEFPROX) [23] as an improvement over ANFIS. NEFPROX can employ both structure and parameter learning and can be used for both Mamdani- and Sugeno-type systems; ANFIS, on the other hand, follows the Sugeno philosophy and can only employ parameter learning. Recently, combinatorial metaheuristic techniques such as genetic algorithms and simulated annealing have also been successfully employed to optimize the structure and parameters of a fuzzy system [26–29]. A very interesting algorithm was proposed by Russo [30], which attempted to combine the good features of fuzzy systems, neural networks and genetic algorithms to develop an efficient function approximator.

However, most of these adaptive fuzzy systems are either complex to understand (e.g. neuro-fuzzy systems or optimised fuzzy systems) or are largely heuristic in nature. Hence these systems often sacrifice the basic inherent strength of a fuzzy system, namely easy interpretability, to improve accuracy. The present paper is an effort to develop a new adaptive fuzzy function approximation algorithm, called the generalised influential rule search scheme, which builds an adaptive fuzzy system from the basic principles of Mamdani-type inferencing and yet provides satisfactory performance. The algorithm is a generalised, extended version of our earlier proposed influential rule search scheme (IRSS) for fuzzy pattern classification [31]. It can completely self-generate the input and output MFs and the fuzzy rule base from a given input-output data set, and can subsequently modify the fuzzy rule base and/or the output MFs, in a supervised manner, to improve accuracy. The proposed adaptive fuzzy algorithm has been successfully employed to develop both general purpose fuzzy approximators and adaptive fuzzy logic controllers.

The rest of the paper is organised as follows. Section 2 describes the generalised IRSS as a fuzzy function approximator. Section 3 describes the implementation philosophies of generalised IRSS based systems as pure function approximators and fuzzy controllers. Section 4 presents the simulation studies performed. Section 5 presents the conclusions.

2 Generalised influential rule search scheme (IRSS) algorithm as a pure function approximation tool

We propose a new fuzzy function approximation system which is a generalised and modified form of the IRSS algorithm we earlier proposed as an effective fuzzy pattern classifier in [31]. The generalised IRSS algorithm for function approximation problems is summarized in Algorithm 1. Let us consider an n-input-one-output system where x_j^k denotes the kth instance of the jth input variable and y^k denotes the kth instance of the output variable. Hence the kth data pair of the data set is given as (x_1^k, x_2^k, ..., x_n^k : y^k). Let the total number of instances in the data set be m. The algorithm starts with the initial construction of membership functions (MFs) for each input and output variable separately. This initial membership function construction (IMFC) algorithm is summarized in Algorithm 2. The IMFC algorithm is applied separately to each input variable x_j (j = 1, 2, ..., n) and to the output variable y of the MISO system. It employs the fuzzy c-means (FCM) clustering algorithm in one-dimensional space to create the MFs for each variable (x_j or y). The FCM algorithm creates a set of fuzzy partitions given by:

$$M_{fcm} = \left\{ U \in [0,1]^{c \times m} \;\middle|\; \sum_{i=1}^{c} \mu_{ik} = 1,\; k = 1, \ldots, m; \quad \sum_{k=1}^{m} \mu_{ik} > 0,\; i = 1, \ldots, c \right\}. \qquad (1)$$
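As a concrete illustration of the clustering step that produces the partition matrix U of Eq. (1), the following is a minimal sketch (in Python, not the authors' code) of fuzzy c-means restricted to one dimension, as the IMFC algorithm applies it to a single sorted variable. The function name fcm_1d and the default fuzzifier value are illustrative assumptions.

```python
# Minimal sketch of fuzzy c-means in one dimension (illustrative, not the
# authors' implementation). The returned partition matrix U has shape (c, m)
# and satisfies the constraints of Eq. (1): each column sums to one and no
# cluster is empty in the fuzzy sense.
import numpy as np

def fcm_1d(x, c, fuzzifier=2.0, max_iter=100, tol=1e-5, seed=0):
    x = np.asarray(x, dtype=float)
    m = x.size
    rng = np.random.default_rng(seed)
    U = rng.random((c, m))
    U /= U.sum(axis=0)                                   # enforce sum_i mu_ik = 1
    for _ in range(max_iter):
        Um = U ** fuzzifier
        centres = (Um @ x) / Um.sum(axis=1)              # weighted cluster centres
        d = np.abs(x[None, :] - centres[:, None]) + 1e-12
        inv = d ** (-2.0 / (fuzzifier - 1.0))
        U_new = inv / inv.sum(axis=0)                    # standard FCM membership update
        if np.max(np.abs(U_new - U)) < tol:
            return centres, U_new
        U = U_new
    return centres, U

# Each row U[i, :], read against the sorted samples of x_j (or y), is taken as
# one membership function, before the modified alpha-cut smoothing is applied.
```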

Here the number of clusters, c, corresponds to the number of MFs into which x_j or y is fuzzified; the number of clusters for each x_j or y is chosen a priori. Experimentation on a number of data sets has revealed that each MF created in this manner contains small perturbations towards its two extreme ends. Hence the IMFC algorithm employs modified α-cuts of fuzzy sets to smooth out the MF variations. Contrary to the usual concept of α-cuts of fuzzy sets, this modified form first searches for the crisp value of x_j or y at which the membership value within the fuzzy set (MF) A, i.e. μ_A(x_j) or μ_A(y), is maximum. From that apex we travel in the two possible directions of increasing or decreasing x_j or y and keep calculating the corresponding μ_A(x_j) or μ_A(y). This process is carried out until we arrive at a value of x_j or y on the right-hand side of the apex where μ_A(x_j) or μ_A(y) < α; from this point onwards, for all values of x_j or y further to the right, μ_A(x_j) or μ_A(y) for the fuzzy set A is made zero. Similarly we arrive at the same condition at a point on the left-hand side of the apex, and for all points further to the left, μ_A(x_j) or μ_A(y) is made equal to zero. The threshold α is a user-defined parameter whose optimum value is chosen depending on the amount of perturbation present in each MF. The MFs created in this way are generic in shape (constrained by the FCM clustering) and depend mainly on the data variations in the data set. This is a significant deviation from many previously developed adaptive fuzzy systems, both pattern classifiers and function approximators, which employ triangular [2,4,7–11], trapezoidal [20,21] or gaussian [25] prototypes to define their MFs.

Algorithm 1 Generalised IRSS algorithm for function approximation

    Given input–output data set (x^k : y^k) where x^k ∈ R^n and y^k ∈ R;
    Set all necessary user-defined parameters (MRMSE, Krule, KMF etc.);
    Set Adapt = TRUE;
    FOR all input variables x_j DO
        Apply IMFC algorithm to construct initial input MFs;
    END FOR
    Apply IMFC algorithm to construct initial output MFs;
    Apply IFRBC algorithm to construct initial fuzzy rule base;
    Set total number of epochs e = 0;
    FOR all instances k in input–output data set DO
        Execute the fuzzy system constructed;
        Calculate error for the kth instance;
        Calculate error contribution factor (ECF) of each activated rule;
        Calculate aggregate error contribution factor (AECF) of each activated rule;
    END FOR
    Calculate RMS error (mRMSE) for the data set;
    WHILE (Adapt = TRUE)
        IF mRMSE ≥ MRMSE THEN
            IF mRMSE ≥ Krule THEN
                Apply IRSM algorithm to search those rules in the fuzzy rule base
                that are most influential in producing erroneous output, and
                modify them accordingly;
            END IF
            IF mRMSE < KMF THEN
                Apply OMFA algorithm to adapt output MFs;
            END IF
            Increment e;
        ELSE
            Set Adapt = FALSE;
            Increment e;
        END IF
    END WHILE

Algorithm 2 Initial membership function construction (IMFC) algorithm

    Set c = number of MFs for the variable x_j;
    Set α;
    Extract the array of x_j^k values from the given data set (x^k : y^k), k = 1, 2, ..., m;
    Sort the values of x_j^k in ascending order;
    Apply the FCM clustering algorithm on the sorted x_j^k values with number of clusters = c;
    Obtain the MFs for x_j from the fuzzy partition matrix U;
    Apply modified α-cuts of fuzzy sets to obtain smooth MF variations for x_j;

Algorithm 3 Initial fuzzy rule base construction (IFRBC) algorithm

    FOR all data pairs k DO
        FOR all inputs x_j^k DO
            Determine the MF(s) activated, with corresponding membership value(s);
        END FOR
        Determine the MF(s) of output y activated for y^k, with corresponding membership value(s);
        Determine D(Rule)_new for all possible r rules activated;
        FOR all r rules DO
            IF D(Rule r)_new > D(Rule r)_existing THEN
                Replace existing rule consequence by new rule consequence;
            ELSE
                Retain existing rule consequence;
            END IF
        END FOR
    END FOR
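To make the modified α-cut operation used inside the IMFC step (Algorithm 2) concrete, the sketch below (illustrative Python; the assumed data layout is one MF sampled on the sorted variable grid) finds the apex of the MF, walks outwards in both directions until the membership first falls below α, and zeroes everything beyond those points.

```python
# Minimal sketch of the modified alpha-cut described above (illustrative,
# not the authors' code). mu holds one membership function sampled on the
# sorted values of x_j or y.
import numpy as np

def modified_alpha_cut(mu, alpha=0.01):
    mu = np.asarray(mu, dtype=float).copy()
    apex = int(np.argmax(mu))                 # crisp value with maximum membership
    right = apex
    while right + 1 < mu.size and mu[right + 1] >= alpha:
        right += 1                            # walk right until membership < alpha
    left = apex
    while left - 1 >= 0 and mu[left - 1] >= alpha:
        left -= 1                             # walk left until membership < alpha
    mu[right + 1:] = 0.0                      # suppress perturbations on the far right
    mu[:left] = 0.0                           # suppress perturbations on the far left
    return mu
```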

Once all the initial MFs for the input and output variables are created, we apply the initial fuzzy rule base construction (IFRBC) algorithm to construct the initial fuzzy rule base; Algorithm 3 shows its functioning. This is a modified form of Wang and Mendel's method of self-constructing a fuzzy rule base from training exemplars [4]. For each instance or exemplar we determine the MF(s) that are activated for each input and output variable individually. For the kth instance of this n-input-one-output system, if the crisp value of each input variable activates p MFs and the output variable activates q MFs, then up to p^n antecedent combinations (rules) can be activated, and each rule consequence will be one of the q output MFs activated. For each rule activated, we calculate an associated strength of rule, D(rule), given by:

$$D(\text{rule}) = \mu_A(x_1^k) \times \mu_B(x_2^k) \times \cdots \times \mu_N(x_n^k) \times \mu_O(y^k), \qquad (2)$$

where μ_A(x_1^k) is the membership value of the kth instance of input x_1, i.e. x_1^k, in fuzzy set (MF) A, and so on. A rule with a higher value of D(rule) is assumed to be a more reliable candidate to fill a fuzzy rule base entry. If the corresponding fuzzy rule base entry is empty, it is filled with the newly determined output consequence. However, if the entry is already filled with a different output consequence, the new consequence replaces the existing one only if D(rule)_new > D(rule)_existing; otherwise the existing entry remains unchanged. This method takes the more optimistic approach of filling more than one rule base entry from a single data pair, as opposed to Wang and Mendel's recommendation of filling only one entry per data pair. Since the algorithm always makes provision for replacing a weaker rule by a stronger one, the proposed method should provide a more robust way of constructing the fuzzy rule base.
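The rule-strength competition of Eq. (2) can be sketched as follows (illustrative Python; the dictionary-based rule base layout and the function name are assumptions, not the authors' data structures). Every combination of activated input MFs proposes a consequent, and an antecedent cell keeps only the consequent with the largest D(rule).

```python
# Minimal sketch of the IFRBC competition of Eq. (2) for one data pair
# (illustrative only). rule_base maps an antecedent label tuple to the pair
# (consequent_label, strength); a stronger rule overwrites a weaker one.
from itertools import product

def ifrbc_update(rule_base, activated_inputs, activated_outputs):
    # activated_inputs: one list per input variable of (mf_label, membership) pairs
    # activated_outputs: list of (mf_label, membership) pairs for the output variable
    for antecedent in product(*activated_inputs):
        labels = tuple(lab for lab, _ in antecedent)
        strength_in = 1.0
        for _, mu in antecedent:
            strength_in *= mu                              # product of input memberships
        for out_label, out_mu in activated_outputs:
            d_rule = strength_in * out_mu                  # D(rule) of Eq. (2)
            if labels not in rule_base or d_rule > rule_base[labels][1]:
                rule_base[labels] = (out_label, d_rule)    # replace the weaker rule
    return rule_base
```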


Once the initial self-construction of the fuzzy system is completed, the adaptive fuzzy system is fed with data pairs in batch mode and trained to improve its accuracy. The IRSS algorithm employs the centre of area (COA) method as its defuzzification strategy. In the training phase, the IRSS algorithm modifies the fuzzy rule base and/or tunes the output MFs after each epoch to improve system performance; training continues until the system RMS error falls below the maximum allowable value (MRMSE). While adaptation of the fuzzy rule base corresponds to a coarse tuning of the fuzzy system architecture, updating of the output MFs provides fine tuning of the same architecture. In each epoch, for each data pair k presented to the fuzzy system, we calculate the error contribution factor (ECF) of each activated rule. The ECF of a given rule is the product of the per-unit error of the kth instance in that epoch and the per-unit output area contributed by that activated rule in the COA defuzzification; it therefore typically lies within the interval (−1, 1). Typically, for each data pair k, more than one fuzzy rule is activated and the ECF is calculated individually for each. The rule with the higher magnitude of ECF is the more influential in producing the system error for data pair k, and vice versa. In batch-mode training, different rules are activated for different data pairs, and in a given epoch a given rule may be activated for more than one data pair with different ECF values. At the end of each epoch we therefore calculate the aggregate error contribution factor (AECF) of each rule as the summation of all its individual ECFs in that epoch. Algorithm 4 shows the influential rule search and modify (IRSM) algorithm, which modifies the fuzzy rule base on the basis of the AECF of each rule. In a given epoch, the fuzzy rule r with the highest magnitude of AECF, i.e. |AECF_r|, is termed the most influential rule in producing the system RMS error, the rule with the second highest |AECF_r| is the second most influential, and so on. Accordingly, the consequence of fuzzy rule r, i.e. its output MF entry in the rule base, is punished by advancing it or forcing it backward one step, depending on the sign of AECF_r. This punishment is applied only if the resulting fuzzy label for the rule consequence belongs to one of the valid fuzzy output consequences, i.e. the defined output MFs. The whole exercise is carried out only for those rules whose |AECF_r| exceeds the rule base tuning factor β. The value of β is a user-defined parameter which has to be selected properly for a given problem: the smaller the value of β, the more fuzzy rules are modified after each epoch, and vice versa. The IRSM algorithm thus performs the coarse tuning of the adaptive fuzzy system, and β determines the amount of coarse tuning after each epoch. A higher value of β implies that fewer rules will satisfy the condition |AECF_r| > β after each epoch, so only that small number of rules will be updated; hence a higher β means smaller adaptation and less coarse tuning per epoch and, in all probability, more epochs will be needed to reach a given accuracy. On the other hand, if β is chosen too small, several rules will satisfy |AECF_r| > β in spite of having small individual values of |AECF_r|, which may lead to undesired, too frequent adaptation of too many rules after each epoch. A smaller β therefore causes a higher rate of adaptation and larger coarse tuning per epoch, and may require significantly fewer epochs to reach a given accuracy; however, with too many rules changing in each epoch while approaching the desired accuracy level, the system may show oscillatory behaviour that cannot be controlled further. Hence an optimum value of β should be chosen that does not sacrifice the speed of convergence and, at the same time, does not cause simultaneous adaptation of too many rules after each epoch, which would lead to unstable, oscillatory behaviour.

Algorithm 4 Influential rule search and modify (IRSM) algorithm

    Set a user-defined value for β;
    FOR all rules r in fuzzy rule base DO
        IF |AECF_r| > β THEN
            IF AECF_r > 0 THEN
                Punish the consequence for rule r in the fuzzy rule base
                by forcing it backward one step;
            ELSE
                Punish the consequence for rule r in the fuzzy rule base
                by forcing it forward one step;
            END IF
        END IF
    END FOR
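A compact sketch of the IRSM punishment step follows (illustrative Python; the dictionary layout, the integer indexing of output MF labels, and the sign convention of a positive AECF meaning one step backward are assumptions read off Algorithm 4, not the authors' code).

```python
# Minimal sketch of the IRSM punishment of Algorithm 4 (illustrative only).
# rule_base maps an antecedent tuple to the integer index of its output MF;
# aecf maps the same antecedent tuple to the rule's aggregate error
# contribution factor accumulated over the epoch.
def irsm_punish(rule_base, aecf, beta, n_output_mfs):
    for antecedent, a in aecf.items():
        if abs(a) <= beta:
            continue                          # rule not influential enough this epoch
        step = -1 if a > 0 else +1            # assumed sign convention from Algorithm 4
        new_label = rule_base[antecedent] + step
        if 0 <= new_label < n_output_mfs:     # punish only towards a defined output MF
            rule_base[antecedent] = new_label
    return rule_base
```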

To achieve better control over the adaptation of the fuzzy system architecture, the coarse tuning of the IRSM algorithm is followed by the fine tuning of the output membership function adaptation (OMFA) algorithm, shown in Algorithm 5. According to the OMFA algorithm, all output MFs are updated simultaneously after a given epoch and the percentage adaptation is the same for each MF. Whether the MFs are contracted or expanded depends on the cumulative summation of the individual errors for each data pair k in the given epoch. Each MF p is contracted or expanded according to the relation

$$y_{new} = y_{max} + (y_{old} - y_{max}) \times lr, \qquad (3)$$

where y_max is the crisp value of the output y at which the pth MF attains its maximum membership value, y_old is the crisp value of the output y at which the pth MF had a given membership value μ_y during the eth epoch, y_new is the updated crisp value of the output at which the pth output MF has the same membership value μ_y during the (e + 1)th epoch, and lr is the learning rate at epoch e. Updating the output MFs according to (3) implies that each MF keeps its vertex or peak fixed and expands or contracts towards its two edges from the vertex in a similar fashion. If the learning rate lr is chosen as a positive fraction, the MFs contract; if lr > 1 is chosen, the MFs expand. To achieve finer control over the OMFA scheme, lr is made a function of the system RMS error at the end of each epoch, i.e. mRMSE. As the system approaches more satisfactory performance after each epoch, mRMSE gradually decreases and lr converges more and more towards unity (irrespective of whether lr is higher or lower than unity). This implies that the output MFs are adapted less and less as system performance improves; hence the OMFA algorithm exercises finer and finer control as the system converges.

Algorithm 5 Output membership function adaptation (OMFA) algorithm

    Set a user-defined value for γ;
    Calculate mCUME for the given epoch;
    FOR all output MFs p DO
        IF mCUME > 0 THEN
            Calculate learning rate, lr = 1 − γ × mRMSE;
            Contract output MF p accordingly;
        ELSE
            Calculate learning rate, lr = 1 + γ × mRMSE;
            Expand output MF p accordingly;
        END IF
    END FOR
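As a small worked illustration of Eq. (3) (illustrative Python; the assumption that an output MF is stored as membership values sampled on a grid of crisp output values is ours), every support point of an output MF is pulled towards its vertex when lr < 1 (contraction) or pushed away when lr > 1 (expansion), while the vertex itself stays fixed.

```python
# Minimal sketch of the OMFA update of Eq. (3) and Algorithm 5 (illustrative,
# assumed MF representation: y_grid holds crisp output values, mu the MF there).
import numpy as np

def omfa_update(y_grid, mu, m_cume, m_rmse, gamma):
    lr = 1.0 - gamma * m_rmse if m_cume > 0 else 1.0 + gamma * m_rmse
    y_max = np.asarray(y_grid)[np.argmax(mu)]             # vertex of the MF, kept fixed
    y_new = y_max + (np.asarray(y_grid) - y_max) * lr     # Eq. (3), applied point-wise
    return y_new, mu                                      # memberships keep their values

# As training progresses m_rmse shrinks, so lr tends to 1 and the update
# approaches the identity: the fine tuning becomes finer and finer.
```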

The durations of activation of the IRSM and OMFA algorithms are controlled by two user-defined parameters, Krule and KMF. If the system RMS error for a given epoch, mRMSE, falls below Krule, the IRSM algorithm is deactivated: henceforth no more rules in the fuzzy rule base are adapted and the coarse tuning of the generalised IRSS fuzzy system is stopped. Coarse tuning is normally followed by fine tuning, the OMFA algorithm being activated when mRMSE < KMF. Normally Krule and KMF are chosen so that the system starts adapting by activating the IRSM algorithm first; after a few epochs the OMFA algorithm is also activated and coarse and fine tuning continue simultaneously. Then, when the system RMS error falls below Krule, coarse tuning is deactivated and only fine tuning is employed for finer adjustments. A few trial runs with different sets of Krule and KMF normally give the user a clear understanding of the proper values to choose.

In the IRSS algorithm we previously proposed as a pattern classifier [31], the output MFs were all assumed to be triangular prototypes. There, a pattern classification problem was treated as an identification problem among q possible output classes, and the output y could be discretely chosen from the universe of discourse Uy = {1, 2, ..., q}. Hence the system was composed of q triangular output MFs, each specified by a triplet, the triplets being {0, 1, 2}, {1, 2, 3}, ..., {(q − 2), (q − 1), q}, {(q − 1), q, (q + 1)}. The output of the fuzzy classifier y would belong to class q if it belonged to the interval ((q − 0.5), (q + 0.5)], i.e. (q − 0.5) < y ≤ (q + 0.5). For the function approximation scheme, however, the output y is considered a continuous variable, like each input x_j, and its universe of discourse Uy is a continuous interval. Hence, like all the input MFs, the output MFs are also determined by applying the fuzzy c-means algorithm in one-dimensional space, as described in the IMFC algorithm (Algorithm 2). The actual output y obtained from the generalised IRSS for function approximation is then taken as it is, and no further manipulation is performed on it.

3 Implementations of generalised IRSS based fuzzy systems

To demonstrate the usefulness of the proposed generalised IRSS based fuzzy system, we have chosen the example of a new fuzzy controller development scheme with saturation. This example illustrates the application of the proposed adaptive fuzzy inference system to develop (a) a fuzzy function approximator and (b) an adaptive fuzzy logic controller (FLC). The new control algorithm is explained in detail in [32], with all the nuances and finer details of the control philosophy implemented; the present paper concentrates on a detailed analysis of the efficacy of the IRSS based fuzzy system implemented as two major components of the overall system: (a) an IRSS based fuzzy inverse process estimator (IRSSIPE) and (b) an IRSS based FLC (IRSSFLC). Before discussing the development of the IRSSIPE and the IRSSFLC in detail, a primer on this control methodology is presented below.

3.1 A new control philosophy for direct application of function approximators

The new control philosophy makes provision for the direct implementation of fuzzy/neuro/neuro-fuzzy based systems to develop controllers on the basis of an ideal input–output data set [32]. It starts from the present output response of a process, determines a desired improved output response, and develops a new design procedure to implement the improved adaptive FLC necessary for the desired response. Generalised IRSS based fuzzy systems are successfully implemented to develop the necessary fuzzy building blocks, i.e. the IRSSIPE and the IRSSFLC. A detailed, step-by-step description of the proposed fuzzy control methodology is given below.

Step 1. Obtain the present process output characteristic employing the existing controller.

This characteristic can be determined independently of whether the process is currently controlled by a conventional PID controller or by a fuzzy controller.

Step 2. Obtain the ideal process output characteristic from the data obtained in Step 1.

The ideal process output characteristic, y_ideal, is determined on the basis of the existing process characteristic, with the aim of improving one or more of the performance indices, e.g. peak overshoot (%OS), rise time (t_r), settling time (t_s), etc. For our system we have chosen %OS and t_r as the two principal performance indices on the basis of which we want to improve the process characteristic.

Step 3. Develop an inverse process estimator employing the generalised IRSS (IRSSIPE) on the basis of input–output data of the existing process.

The development of the ideal adaptive FLC will be based on the ideal input–output data set for the controller, consisting of data pairs (e^k_ideal, Δe^k_ideal : Δu^k_ideal), where k denotes the sampling instant. The input instances for each k, i.e. e^k_ideal and Δe^k_ideal, can easily be obtained from the system input and the desired improved output characteristic. However, to obtain the incremental controller output at each sampling instant k, i.e. Δu^k_ideal, we have proposed the development of an inverse process estimator employing the generalised IRSS, i.e. the IRSSIPE. The objective of this IPE is to estimate the control input u^k_ideal for a known process output y^k_ideal at any given sampling instant; Δu^k_ideal can easily be generated once u^k_ideal is obtained for each k. The IRSSIPE is developed as another two-input-one-output MISO system which is trained on the data set containing the data pairs (y^k, y^{k−1} : u^k). Hence the IRSSIPE utilizes the temporal variation of the process output to form its two inputs, which are nonlinearly mapped to determine the control output u^k. The training data set containing the pairs (y^k, y^{k−1} : u^k) is obtained from the present knowledge of the process input and output, employing the existing controller.

Step 4. Determine the ideal input–output data set for the improved controller.

Once successfully trained, the IRSSIPE is fed with input data from the desired improved process output characteristic, i.e. y^k_ideal and y^{k−1}_ideal at each sampling instant k, to estimate u^k_ideal. Once u^k_ideal is obtained for each k, we can determine the set of Δu^k_ideal. Hence the ideal input–output data set for the generalised IRSS based FLC, i.e. the IRSSFLC, is completely obtained and the controller can be trained. Mathematically speaking, the proposed improved controller should be designed to give an efficient nonlinear mapping Δu^k_ideal = ϕ(e^k_ideal, Δe^k_ideal). Here e^k_ideal and Δe^k_ideal can easily be obtained from r^k (which stands for the input as well as the desired output at sampling instant k) and y^k_ideal, for each sampling instant k.
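The assembly of the ideal controller data set in Steps 3–4 can be sketched as follows (illustrative Python; the helper name build_controller_dataset and the simple first-difference treatment of the initial sample are assumptions, not part of the paper).

```python
# Minimal sketch of Steps 3-4 (illustrative only): the trained inverse process
# estimator maps (y_k, y_{k-1}) -> u_k; feeding it the ideal output trajectory
# yields u_ideal, from which the ideal controller training set
# (e_ideal, delta_e_ideal : delta_u_ideal) is assembled.
import numpy as np

def build_controller_dataset(irssipe, r, y_ideal):
    # irssipe: callable (y_k, y_km1) -> u_k; r: reference; y_ideal: desired output
    u_ideal = np.array([irssipe(y_ideal[k], y_ideal[k - 1])
                        for k in range(1, len(y_ideal))])
    du_ideal = np.diff(u_ideal, prepend=u_ideal[0])   # incremental control action
    e_ideal = np.asarray(r)[1:] - np.asarray(y_ideal)[1:]
    de_ideal = np.diff(e_ideal, prepend=e_ideal[0])   # change of error
    return e_ideal, de_ideal, du_ideal
```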

Step 5. Employ the proposed generalised IRSS based function approximator to train the improved controller, IRSSFLC.

Once the input–output data set for the improved controller is prepared, we obtain an efficient nonlinear model ϕ based on our generalised IRSS based function approximator. If the IRSSFLC is successfully trained on its ideal input–output data set, then, when this controller is actually implemented, the process output characteristic should approach the ideal process output.

Step 6. Implement the IRSSFLC for the process under consideration.

Once the IRSSFLC is successfully trained, it can be implemented to replace the existing controller configuration and augment the process performance. The performance of the proposed control scheme (implemented in PI-form) is further enhanced by introducing an additional smooth, fuzzily varying resetting action. While a normal PI-type FLC is governed by the relation

$$u^k = u^{k-1} + \Delta u^k, \qquad (4)$$

our proposed system employs the control law

$$u^k = \left(1 - K_r \cdot rs^k\right) u^{k-1} + K_u \cdot \Delta u^k. \qquad (5)$$

Here rs^k is the fuzzily varying resetting action contributed at instant k, and K_u and K_r are the output gains of the IRSSFLC and of the fuzzy resetting action controller, respectively.
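A one-line sketch of the control law (5) (illustrative Python; variable names are assumptions) shows how the fuzzy resetting term leaks the accumulated control action before the weighted increment is added.

```python
# Minimal sketch of Eqs. (4)-(5) (illustrative only).
def control_update(u_prev, du_k, rs_k, Ku, Kr):
    # Eq. (4), plain PI-type FLC, would simply be: u_prev + du_k
    # Eq. (5): the fuzzily varying resetting action rs_k scales down the
    # memorised control term before the weighted increment is added.
    return (1.0 - Kr * rs_k) * u_prev + Ku * du_k
```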

The performance of the generalised IRSS based fuzzy function approximator is further tested by implementing it for a benchmark function approximation problem considered in [35–39].

4 Simulation studies

Let us first consider the fuzzy process control problem already described in Sect. 3. The transfer function of a general second order linear process to be controlled can be given as

$$G_p(s) = \frac{\omega_n^2\, e^{-Ls}}{s^2 + 2\xi \omega_n s + \omega_n^2}, \qquad (6)$$

where ω_n is the natural angular frequency of oscillation and ξ is the damping ratio of the process. The specific process characteristic for our problem is chosen with ω_n² = 2, ξ = 0.495 and L = 0.2 s. The sampling interval is chosen as 25 ms, and the fourth order Runge-Kutta method is employed for numerical integration. A detailed, step-by-step description of the development of the IRSS based IPE and then the IRSS based FLC, along with the analysis, is presented now.
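For reference, the process of Eq. (6) with the stated parameters can be simulated roughly as below (illustrative Python, not the authors' simulation code; the delay buffer and the zero-order hold on the delayed input over each Runge-Kutta step are modelling assumptions).

```python
# Minimal sketch of simulating Eq. (6) with fourth-order Runge-Kutta
# (illustrative only). State = [y, dy/dt]; the dead time L is realised with a
# simple input delay buffer of L/h samples.
import numpy as np
from collections import deque

wn2, xi, L, h = 2.0, 0.495, 0.2, 0.025        # omega_n^2, damping, dead time, step
wn = np.sqrt(wn2)

def deriv(state, u):
    y, ydot = state
    return np.array([ydot, wn2 * u - 2.0 * xi * wn * ydot - wn2 * y])

def simulate(controller, r, n_steps):
    # controller: callable (error, state) -> control input u; r: set point
    state = np.zeros(2)
    n_delay = int(round(L / h))
    buffer = deque([0.0] * n_delay, maxlen=n_delay)
    outputs = []
    for _ in range(n_steps):
        u = controller(r - state[0], state)
        u_delayed = buffer[0]                  # input applied L seconds earlier
        buffer.append(u)
        k1 = deriv(state, u_delayed)
        k2 = deriv(state + 0.5 * h * k1, u_delayed)
        k3 = deriv(state + 0.5 * h * k2, u_delayed)
        k4 = deriv(state + h * k3, u_delayed)
        state = state + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        outputs.append(state[0])
    return np.array(outputs)
```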

4.1 Training of the generalised IRSS based fuzzy IPE (IRSSIPE)

For the construction of the IRSSIPE architecture, we have chosen five MFs for each input and output variable, and each MF is constructed applying modified α-cuts of fuzzy sets with α = 0.01. Figure 1a–c shows the MFs used by the IRSSIPE; in Fig. 1c the solid lines depict the initial output MFs constructed and the dashed lines depict the adapted output MFs after completion of the training phase. The user-chosen parameters for successful training of the IRSS algorithm are, as mentioned in Sect. 2, Krule, KMF, β and γ. While Krule and KMF effectively determine the duration of application of coarse tuning (fuzzy rule base) and fine tuning (output MFs) of the fuzzy system architecture, β and γ determine the degree of coarse and fine tuning applied after each iteration. The choices of Krule and KMF are guided by the suggested empirical rule MRMSE < Krule ≤ KMF < mRMSE1, where mRMSE1 is the root mean square error after epoch 1, i.e. before the adaptation process has started. This empirical rule means that the adaptation process starts with coarse tuning for a few epochs and then switches to the fine tuning module as the root mean square error gradually diminishes; the coarse and fine tuning modules may be simultaneously active for an overlapping period if we choose Krule < KMF. Guided by mRMSE1, we chose Krule = 0.1 and KMF = 0.15 during the training of the IRSSIPE for the process in (6). With higher choices of Krule = 0.12 and 0.15 we found that the performance remains unchanged.


Fig. 1 Membership functions of the IRSSIPE for a input 1 (y^k), b input 2 (y^{k−1}) and c output (u^k), for the process in (6)

However, with Krule = 0.05, the system performance in the training phase becomes oscillatory. This is because the coarse tuning module is then kept operative for too long, when only fine tuning is actually required to further reduce the system error in a systematic manner. The choice of a higher Krule together with a higher KMF was avoided, although it could also provide the desired accuracy: it would mean early deactivation of coarse tuning and long activation of the fine tuning module, which would unnecessarily prolong the training phase. Once Krule and KMF are decided upon, the proper choice of β and γ becomes very important. Figure 2 shows the training of the IRSSIPE with different combinations of Krule, KMF, β and γ. With Krule and KMF fixed and a chosen value of γ = 0.05, we train the IRSSIPE with different values of β (0.5, 1, 10, 30 and 50). With a higher value of β the system adapts less after each iteration and vice versa; Fig. 2a validates this argument, and the system shows the best results for β = 1. Then, with Krule, KMF and β fixed, we vary γ (from 0.01 to 0.5) to find the best combination. Figure 2b verifies that the smaller the γ, the finer the adaptation and the more epochs the system takes to converge. The converse is also true, but too high a γ may cause oscillations in the system RMS error. Hence an optimum value of γ = 0.03 is chosen.

4.2 Training of the generalised IRSS based fuzzy controller (IRSSFLC)

Here also the IRSS based FIS is constructed with five MFs for each of e^k, Δe^k and Δu^k, with α = 0.01. Figure 3a and b show the MFs created for the IRSSFLC developed for the process in (6), applying the IMFC algorithm, and Fig. 3c shows the MFs for Δu^k; the solid MFs are the initial ones created by the IMFC algorithm and the dashed MFs are the adapted ones obtained at the end of the training session by the OMFA algorithm. The training philosophy of the IRSSFLC is very similar to that of the IRSSIPE. Since mRMSE1 for training the IRSSFLC for the process in (6) was already very small, the initial constructions of the fuzzy rule base and the input–output membership functions already provide good accuracy. To improve upon this system, the tuning policy should be carefully chosen so that the training procedure avoids oscillations in the root mean square error. For this situation, where mRMSE1 already shows quite acceptable results, we activated the fine tuning module only. Complete deactivation of the coarse tuning module is easily achieved by choosing Krule > mRMSE1, in which case the fine tuning module becomes operative immediately after epoch 1. Hence Krule and KMF are both chosen as 0.02. Since Krule and mRMSE1 together ensure complete deactivation of the coarse tuning module, the value chosen for β becomes immaterial. Figure 4 shows training results with fixed Krule and KMF, where γ is varied between 0.00008 and 0.0005; an optimum choice of γ = 0.0001 is made for satisfactory performance. The IRSSIPE and IRSSFLC can be similarly trained for other processes, following the guidelines presented above.

The existing controller for the process in (6) is chosen as a conventional PID controller with saturation, with Ziegler-Nichols (Z-N) tuned parameters. Unit step responses of the various controllers employed for the process in (6) are compared on the basis of several transient and steady state performance indices: peak percentage overshoot (%OS), rise time (t_r), settling time (t_s), integral absolute error (IAE) and integral-time-multiplied absolute error (ITAE). These results are tabulated in Table 1, which clearly shows that the IRSSFLC with IRSSIPE gives the best overall performance, with the lowest IAE and ITAE measures.


Fig. 2 Variation of RMS error with iteration during the training phase of the IRSSIPE, with a (a) β = 0.5, (b) β = 1, (c) β = 10, (d) β = 30 and (e) β = 50 (Krule = 0.1, KMF = 0.15 and γ = 0.05 in each case); b (a) γ = 0.01, (b) γ = 0.03, (c) γ = 0.05, (d) γ = 0.1 and (e) γ = 0.5 (Krule = 0.1, KMF = 0.15 and β = 1 in each case)

The results of the IRSSFLC are compared with a conventional Z-N tuned PID controller, a static PI-type FLC and the self-tuned PI-type FLC proposed in [33]. In addition, we have implemented our controller development scheme with supervised neural network based function approximators [34]: (a) a back-propagation neural network based IPE and controller (referred to as BPNNIPE and BPNNC) and (b) a radial basis neural network based IPE and controller (referred to as RBNNIPE and RBNNC). This is possible because our proposed controller scheme in [32] can directly employ any neuro/fuzzy/neuro-fuzzy function approximator to develop an improved controller, and a comparison of the IRSSFLC with BPNNC and RBNNC demonstrates the effectiveness of the proposed generalised IRSS. From Table 1 we find that BPNNC and RBNNC achieve a marginal improvement in rise time compared to the IRSSFLC, but at the cost of more than 100% degradation in %OS; as a consequence, their IAE and ITAE measures are also significantly worse. Among the other controllers, the Z-N tuned PID controller shows a marginally better rise time and the static PI-type FLC a lower %OS, but in each case the other performance indices are significantly worse, resulting in increased IAE and ITAE values and degraded overall transient and steady state performance. Hence it can be inferred that the overall response of the system controlled by the IRSSFLC is superior to that of the conventional PID controller and of the non-conventional fuzzy based and neuro based controllers.


Fig. 3 Membership functions of the IRSSFLC for a input 1 (e^k), b input 2 (Δe^k) and c output (Δu^k), for the process in (6)

Figure 5 shows the responses of these controllers for the process in (6) in graphical form.

From the above experimentation it is clear that the performance of the proposed system depends on the suitable determination of the parameter set (Krule, KMF, β, γ). Here we propose a structured guideline for determining these parameters with minimum trial and error effort.

Step 1. It is suggested to employ the empirical rule

MRMSE < Krule ≤ KMF < mRMSE1 (7)

for the initial determination of Krule and KMF, such that Krule is chosen closer to MRMSE and KMF is chosen closer to mRMSE1, the RMS error after the first epoch.

Step 2. For the initial determination of β and γ we employ two empirical relations. We run the algorithm for one epoch and choose β close to (AECF_r)_max, based on the minimum and maximum values of AECF_r observed. The initial choice of γ follows the empirical rule γ × (KMF)_initial = 0.1, so that the initial value of lr is kept clamped within 0.9 and 1.1 and very fine control can be exercised over the updating of the output MFs.

Step 3. The system can now be implemented with this initial guess of (Krule, KMF, β, γ). If the system shows oscillatory performance, keep increasing Krule until it becomes stable. Keep Krule unchanged at this value and seek further performance improvement by decreasing KMF, always satisfying the empirical rule in (7). Observe the best performance and preserve the corresponding value of KMF.

Step 4. Now decrease the initially chosen value of β in steps to make the coarse tuning more aggressive, so that more rules are updated in each epoch and convergence is accelerated while the desired accuracy is still attained. This process is continued until the performance becomes oscillatory; we stop at this point and preserve the highest value of β with which oscillatory performance was just avoided.

Step 5. With these values of (Krule, KMF, β), increase γ in steps to accelerate the fine tuning of the output MFs, until a too high γ causes oscillatory performance. Preserve the value of γ with which oscillatory performance was just avoided.
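A minimal sketch of the initial guesses of Steps 1–2 follows (illustrative Python; the interpolation weights used to bias Krule towards MRMSE and KMF towards mRMSE1 are our own illustrative choice, not prescribed by the paper).

```python
# Minimal sketch of the empirical initialisation of Steps 1-2 (illustrative).
def initial_parameters(M_RMSE, m_RMSE1, aecf_first_epoch):
    # Eq. (7): M_RMSE < Krule <= KMF < m_RMSE1; the 0.25/0.75 weights below are
    # an illustrative way of biasing Krule towards M_RMSE and KMF towards m_RMSE1.
    Krule = M_RMSE + 0.25 * (m_RMSE1 - M_RMSE)
    KMF = M_RMSE + 0.75 * (m_RMSE1 - M_RMSE)
    beta = max(abs(a) for a in aecf_first_epoch)   # close to (AECF_r)_max after epoch 1
    gamma = 0.1 / KMF                              # gamma * KMF_initial = 0.1, so lr in [0.9, 1.1]
    return Krule, KMF, beta, gamma
```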

4.3 A benchmark two-input-one-output nonlinear function approximation problem

Next we consider approximating a benchmark nonlinear function, which has been considered in several earlier works [35–39]:

$$y = \left(1 + x_1^{-2} + x_2^{-1.5}\right)^2, \qquad 1 \le x_1, x_2 \le 5. \qquad (8)$$
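For reference, the training data for Eq. (8) and the mean square error figure reported in Table 2 can be generated roughly as follows (illustrative Python; the sample size and the random sampling of the domain are assumptions, since the paper does not state how its data set was drawn).

```python
# Minimal sketch of the benchmark of Eq. (8) (illustrative only).
import numpy as np

def benchmark(x1, x2):
    return (1.0 + x1 ** -2 + x2 ** -1.5) ** 2        # Eq. (8)

def make_dataset(n=50, seed=0):
    rng = np.random.default_rng(seed)
    x1 = rng.uniform(1.0, 5.0, n)
    x2 = rng.uniform(1.0, 5.0, n)
    return np.column_stack([x1, x2]), benchmark(x1, x2)

def mse(approximator, X, y):
    y_hat = np.array([approximator(a, b) for a, b in X])
    return float(np.mean((y_hat - y) ** 2))          # the MSE measure of Table 2
```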

Following the guidelines presented in Sect. 4.2, we have chosen, after a few trial runs, Krule = 0.2, KMF = 0.24, β = 1.5 and γ = 0.3. Figure 6a and b show the MFs created for the two inputs x_1 and x_2 by the IMFC algorithm, and Fig. 6c shows the initial MFs created for the output y, together with the adapted output MFs at the end of the successful training session. The training performance of the generalised IRSS is shown in Fig. 7. The performance of the trained generalised IRSS is evaluated in a testing phase by calculating the mean square error (MSE) at the end of the testing session.


Fig. 4 Variation of RMS error with iteration during the training phase of the IRSSFLC, with (a) γ = 0.00008, (b) γ = 0.0001, (c) γ = 0.0003 and (d) γ = 0.0005

Fig. 5 Unit step responses of (a) the Z-N tuned PID controller, (b) the proposed IRSSFLC (employing IRSSIPE), (c) the static PI-FLC in [33], (d) the self-tuned PI-FLC proposed in [33] and (e) the RBNNC (employing RBNNIPE), for the process in (6)

Table 1 Performance comparison of unit step responses of various controllers implemented for the second order linear process in (6)

Controller configuration   | Tunable parameters                  | %OS/US | tr (s) | ts (s) | IAE   | ITAE
Z-N tuned PID              | Kp = 2.22, Ti = 1.087, Td = 0.272   | 16.70  | 1.925  | 5.95   | 1.433 | 1.764
IRSSFLC                    | Ku = 22, Kr = 0.05                  | 10.34  | 2.0    | 5.75   | 1.409 | 1.575
Static PI-FLC in [33]      | Ke = 0.75, KΔe = 10, Ku = 0.7       | 8.50   | 3.4    | 9.275  | 2.343 | 4.359
Self-tuned PI-FLC in [33]  | Ke = 0.8, KΔe = 8, Ku = 1.1         | 14.30  | 2.925  | 8.0    | 2.283 | 4.218
RBNNC (with RBNNIPE)       | –                                   | 21.54  | 1.775  | 6.1    | 1.454 | 2.066
BPNNC (with BPNNIPE)       | –                                   | 21.5   | 1.8    | 6.125  | 1.479 | 2.103


Fig. 6 Membership functions of the generalised IRSS for a input x_1, b input x_2 and c output y, for the function approximation problem in (8)

A comparison of the testing performance of our proposed algorithm with other contemporary algorithms, presented in Table 2, shows that the performance of the generalised IRSS is comparable with the best results obtained so far by other algorithms.

Fig. 7 Training performance of the generalised IRSS for the function approximation problem in (8)

Table 2 Performance comparison in the testing phase of the generalised IRSS with other contemporary fuzzy algorithms, for the function approximation problem in (8)

Fuzzy function approximation algorithm | Mean square error (MSE)
Sugeno and Yasukawa [35]               | 0.079
Nozaki et al. [36]                     | 0.0085
Kim et al. [37]                        | 0.009
Kim et al. [38]                        | 0.0197
Tsekouras et al. [39]                  | 0.011
Generalised IRSS                       | 0.0098

5 Conclusions

The present paper proposes an adaptive fuzzy function approximation tool which can determine an efficient, multidimensional, nonlinear function mapping between the input and output variables of a given data set. The proposed system attempts to keep the easy interpretability of the fuzzy system intact, as far as practicable, and yet offers sufficiently accurate performance. The generalised IRSS algorithm has been successfully implemented to develop an inverse process estimator (built as a pure MISO function approximator) and an adaptive fuzzy logic controller. Simulation studies on the control of a second order process employing this IRSSIPE and IRSSFLC show the versatility and usefulness of the proposed fuzzy function approximator. The usefulness of the algorithm is further demonstrated by implementing it for a benchmark nonlinear function approximation problem and comparing its result with the best contemporary results obtained so far.

Acknowledgements The authors would like to thank Prof. P. K. Mukherjee, Retired Professor, Electrical Engineering Department, Jadavpur University, India, for his constant encouragement during the period when the research work was carried out.


References

1. Jang JSR, Sun CT (1995) Neuro-fuzzy modeling and control. Proc IEEE 83(3):378–405
2. Rojas I, Pomares R, Ortega J, Prieto A (2000) Self-organised fuzzy system generation from training examples. IEEE Trans Fuzzy Syst 8(1):23–36
3. Guillaume S (2001) Designing fuzzy inference systems from data: an interpretability-oriented review. IEEE Trans Fuzzy Syst 9(3):426–443
4. Wang LX, Mendel JM (1992) Generating fuzzy rules by learning from examples. IEEE Trans Syst Man Cybern 22(6):1414–1427
5. Ishibuchi H, Nozaki K, Tanaka H, Hosuka Y, Matsuda M (1994) Empirical study on learning in fuzzy systems by rice taste analysis. Fuzzy Sets Syst 64:129–144
6. Nozaki K, Ishibuchi H, Tanaka H (1997) A simple but powerful heuristic method for generating fuzzy rules from numerical data. Fuzzy Sets Syst 86:251–270
7. Hong TP, Lee CY (1996) Induction of fuzzy rules and membership functions from training examples. Fuzzy Sets Syst 84:33–47
8. Hong TP, Chen JB (1999) Finding relevant attributes and membership functions. Fuzzy Sets Syst 103:389–404
9. Hong TP, Chen JB (2000) Processing individual fuzzy attributes for fuzzy rule induction. Fuzzy Sets Syst 112:127–140
10. Zapata GOA, Galvao RKH, Yoneyama T (1999) Extracting fuzzy control rules from experimental human operator data. IEEE Trans Syst Man Cybern B Cybern 29(3):398–406
11. Wu TP, Chen SM (1999) A new method for constructing membership functions and fuzzy rules from training examples. IEEE Trans Syst Man Cybern B Cybern 29(1):25–40
12. Klawonn F, Kruse R (1997) Constructing a fuzzy controller from data. Fuzzy Sets Syst 85(2):177–193
13. Langari R, Wang L (1996) Fuzzy models, modular networks and hybrid learning. Fuzzy Sets Syst 79(2):141–150
14. Rovatti R, Guerrieri R (1996) Fuzzy sets of rules for system identification. IEEE Trans Fuzzy Syst 4(1):89–102
15. Pomares H, Rojas I, Ortega J, Gonzalez J, Prieto A (2000) A systematic approach to a self-generating fuzzy rule-table for function approximation. IEEE Trans Syst Man Cybern B Cybern 30(3):431–447
16. Zeng X-J, Singh MG (1996) Approximation accuracy analysis of fuzzy systems as function approximators. IEEE Trans Fuzzy Syst 4(1):44–63
17. Lin CT, Lee CSG (1991) Neural-network-based fuzzy logic control and decision system. IEEE Trans Comput 40(12):1320–1336
18. Jang JSR (1993) ANFIS: Adaptive-network-based fuzzy inference system. IEEE Trans Syst Man Cybern 23(3):665–685
19. Simpson PK (1992) Fuzzy min-max neural networks – Part 1: Classification. IEEE Trans Neural Netw 3(5):776–786
20. Abe S, Lan MS (1995) Fuzzy rules extraction directly from numerical data for function approximation. IEEE Trans Syst Man Cybern 25(1):119–129
21. Thawonmas R, Abe S (1999) Function approximation based on fuzzy rules extracted from partitioned numerical data. IEEE Trans Syst Man Cybern B Cybern 29(4):525–534
22. Keller J, Yager R, Tahani H (1992) Neural network implementation of fuzzy logic. Fuzzy Sets Syst 45:1–12
23. Nauck D, Kruse R (1999) Neuro-fuzzy systems for function approximation. Fuzzy Sets Syst 101:261–271
24. Cho KB, Wang BH (1996) Radial basis function based adaptive fuzzy systems and their applications to system identification and prediction. Fuzzy Sets Syst 83:325–339
25. Lotfi A, Tsoi AC (1996) Learning fuzzy inference systems using an adaptive membership function scheme. IEEE Trans Syst Man Cybern B Cybern 26(2):326–331
26. Wang C-C, Her S-M (1999) A self-generating method for fuzzy systems design. Fuzzy Sets Syst 103:13–25
27. Guely F, La R, Siarry P (1999) Fuzzy rule base learning through simulated annealing. Fuzzy Sets Syst 105:353–363
28. Guely F, Siarry P (1994) A centered formulation of Takagi-Sugeno rules for improved efficiency. Fuzzy Sets Syst 62:277–285
29. Siarry P, Guely F (1998) A genetic algorithm for optimizing Takagi-Sugeno fuzzy rule bases. Fuzzy Sets Syst 99:37–47
30. Russo M (1998) FuGeNeSys – a fuzzy genetic neural system for fuzzy modeling. IEEE Trans Fuzzy Syst 6:373–388
31. Chatterjee A, Rakshit A (2004) Influential rule search scheme (IRSS) – a new fuzzy pattern classifier. IEEE Trans Knowledge Data Eng 16(8):881–893
32. Chatterjee A, Rakshit A, Siarry P (2005) A new adaptive fuzzy controller with saturation employing influential rule search scheme (IRSS) (submitted)
33. Mudi RK, Pal NR (1999) A robust self-tuning scheme for PI- and PD-type fuzzy controllers. IEEE Trans Fuzzy Syst 7(1):2–16
34. Demuth H, Beale M (1998) Neural network toolbox for use with MATLAB, user's guide, Version 3.0. The MathWorks Inc.
35. Sugeno M, Yasukawa T (1993) A fuzzy logic based approach to qualitative modeling. IEEE Trans Fuzzy Syst 1(1):7–31
36. Nozaki K, Ishibuchi H, Tanaka H (1997) A simple but powerful method for generating fuzzy rules from numerical data. Fuzzy Sets Syst 86:251–270
37. Kim E, Lee H, Park M, Park M (1998) A simply identified Sugeno-type fuzzy model via double clustering. Inform Sci 110:25–39
38. Kim E, Park M, Seunghwan J, Ji S, Park M (1997) A new approach to fuzzy modeling. IEEE Trans Fuzzy Syst 5(3):328–337
39. Tsekouras G, Sarimveis H, Kavakli E, Bafas G (2005) A hierarchical fuzzy clustering approach to fuzzy modeling. Fuzzy Sets Syst 150:245–266