Genetic optimization of modular neural networks with fuzzy response integration for human recognition

Patricia Melin *, Daniela Sánchez, Oscar Castillo
Tijuana Institute of Technology, Calzada Tecnologico s/n, 22379 Tijuana, Mexico

Article history: Received 24 November 2010; Received in revised form 16 February 2012; Accepted 19 February 2012; Available online 27 February 2012

Keywords: Modular neural network; Type-2 fuzzy logic; Genetic algorithm; Recognition


* Corresponding author. E-mail address: [email protected] (P. Melin).

Abstract

In this paper we propose a new approach to genetic optimization of modular neural networks with fuzzy response integration. The architecture of the modular neural network and the structure of the fuzzy system (for response integration) are designed using genetic algorithms. The proposed methodology is applied to the case of human recognition based on three biometric measures, namely iris, ear, and voice. Experimental results show that optimal modular neural networks can be designed with the use of genetic algorithms and, as a consequence, the recognition rates of such networks can be improved significantly. In the case of optimization of the fuzzy system for response integration, the genetic algorithm not only adjusts the number of membership functions and rules, but also allows variation in the type of logic (type-1 or type-2) and a change in the inference model (switching between the Mamdani and Sugeno models). Another interesting finding of this work is that when human recognition is performed under noisy conditions, the response integrators of the modular networks constructed by the genetic algorithm are found to be optimal when using type-2 fuzzy logic. This could have been expected, as there has been experimental evidence from previous works that type-2 fuzzy logic is better suited to model higher levels of uncertainty.


1. Introduction

In this paper, a new approach to genetic optimization of modular neural networks with fuzzy response integration is proposed. The topology of the modular neural network and the structure of the fuzzy system are designed using genetic algorithms. The proposed approach is tested with the case of human recognition based on three biometric measures, namely iris, ear, and voice. Experimental results of the proposed approach show that optimal modular neural networks can be obtained with the genetic algorithm and, as a consequence, the human recognition rates can be improved significantly with respect to non-optimized models and other approaches in the literature for this problem.

Biometrics plays an important role in the public and information security domains. Using various physiological characteristics of humans, such as the face, facial thermograms, fingerprints, iris, retina, hand geometry, etc., biometrics can accurately identify each individual [10,24,26–29,35,40]. Recently available real-world implementations indicate that biometric techniques are much more precise and accurate than traditional identification techniques. Beyond precision, there have always been certain problems associated with existing traditional techniques [23]. As an example, consider possession and knowledge. Both can be shared, stolen, forgotten, duplicated, misplaced or taken away. However, the danger is minimized in the case of using biometric measures [25,31,32].




In this paper, a hybrid approach combining neural networks and fuzzy logic to solve the problem of pattern recognition more efficiently and accurately is proposed. The approach is represented as a modular neural network model that uses fuzzy logic for response integration. Also, a genetic algorithm is used to optimize the parameters of the fuzzy system for response integration [15–17,33]. We contribute a method that performs optimization of the whole model with two genetic algorithms: one for the modules of the neural network and the other for optimizing the parameters of the fuzzy integration system. This last genetic algorithm can select the number of membership functions to use, is able to combine different types of membership functions, and also decides whether to use type-1 or type-2 fuzzy logic. We decided to test the method with a human recognition system based on three biometric measures, and in this case we chose the iris [38], ear [36] and voice [25] biometrics. Experimental results with benchmark databases of these biometric measures show the advantages of the proposed method. Other approaches in the literature [2,6,13,20,21,30] have used MNNs or type-2 fuzzy logic for pattern recognition, but in this paper a hybrid combination of both methodologies is proposed to solve the recognition problem in a more efficient fashion.

The rest of the paper is organized as follows. In Section 2, the description of the proposed method is presented, followed by its application to the corresponding individual benchmark databases for the biometric measures. The results obtained with the proposed method are described in detail in Section 3. In Section 4 a statistical comparison of results is presented. Finally, conclusions are presented in Section 5.

2. Proposed method

This section describes the general architecture of the modular neural network with fuzzy response integration and the genetic algorithms that constitute the proposed method.

2.1. General architecture of the proposed method

The proposed method is based on a modular neural network (MNN) using a fuzzy system as response integrator. The method uses genetic algorithms to optimize the MNN architecture and the structure of the fuzzy system. The MNN consists of three modules, and each module is divided into another three sub-modules. Each sub-module contains different information [7]. For example, if we decided to use this model for human recognition with three biometric measures, the persons of a database would be divided into three parts, with each part corresponding to one sub-module, and this must be done for each biometric measure. Fuzzy logic [3,19,42–44] is used for response integration in the MNN. Genetic algorithms (GAs) are used for parameter optimization because they are a good tool for finding the best parameters of different kinds of models [34]; in our case two genetic algorithms were developed: the first genetic algorithm is responsible for the optimization of the parameters of the modular neural network, and the second genetic algorithm is responsible for the optimization of the fuzzy system's structure [1,18,34,41]. It is important to say that this last genetic algorithm was modified iteratively for performing three tests. In each test, the chromosome of the GA included more information and, accordingly, the represented structure of the fuzzy system contained more parameters. The final genetic algorithm developed for the fuzzy integrators has the ability to adjust the type of membership functions (and to combine different types of membership functions), to adjust the number of membership functions, and to create the fuzzy rules, which includes choosing the type of fuzzy logic. The detailed description of the genetic algorithm for each test is presented in the next section. Fig. 1 shows the general architecture of the proposed method.

2.1.1. Description of the genetic algorithm for MNN optimization

With the purpose of improving the recognition rate, a genetic algorithm for parameter optimization in modular neural networks was used. In this case, the number of neurons in the two hidden layers of each module, the type of learning algorithm and the value of the error goal were optimized. Fig. 2 shows the binary chromosome of 34 genes that was established for optimization of the neural networks. In this case, 7 genes were used for the first hidden layer and 6 genes for the second hidden layer. To set the final value for the number of neurons in the two hidden layers, 60 and 50 neurons were added, respectively, to the value produced by the chromosome; this was done to prevent the neural network from being trained with too few neurons. Concerning the learning algorithm, 2 genes were established to cover the 3 choices of learning algorithm (scaled conjugate gradient, gradient descent with adaptive learning rate, and gradient descent with momentum and adaptive learning rate), and the error goal was represented with 19 genes, to obtain the required number of digits. The fitness function can be expressed as:

f = \left( \sum_{i=1}^{n} X_i \right) / n \qquad (1)

where X_i is 1 if the module provides the correct result and 0 if not, and n is the total number of data points used for testing in the corresponding module.
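As an illustration of how such a chromosome can be decoded, the sketch below (a minimal Python example, not the authors' implementation; the exact gene ordering and the scaling of the error goal are assumptions) maps a 34-gene binary string to the training parameters described above and evaluates the fitness of Eq. (1).

```python
import random

LEARNING_ALGORITHMS = ["trainscg", "traingda", "traingdx"]  # 3 choices encoded in 2 genes

def decode_chromosome(bits):
    """Decode a 34-gene binary chromosome into MNN training parameters.

    Assumed layout (7 + 6 + 2 + 19 genes): neurons of hidden layer 1,
    neurons of hidden layer 2, learning algorithm, error goal.
    """
    assert len(bits) == 34
    to_int = lambda b: int("".join(map(str, b)), 2)
    neurons1 = to_int(bits[0:7]) + 60          # offsets prevent too-small layers
    neurons2 = to_int(bits[7:13]) + 50
    algorithm = LEARNING_ALGORITHMS[to_int(bits[13:15]) % 3]
    error_goal = to_int(bits[15:34]) / 2**19 * 1e-3   # illustrative scaling only
    return neurons1, neurons2, algorithm, error_goal

def fitness(correct_flags):
    """Eq. (1): fraction of test data points recognized by the module."""
    return sum(correct_flags) / len(correct_flags)

chromosome = [random.randint(0, 1) for _ in range(34)]
print(decode_chromosome(chromosome))
print(fitness([1, 1, 0, 1, 1]))   # e.g. 4 of 5 test samples correct -> 0.8
```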

Fig. 1. The general architecture of the proposed method.

Fig. 2. The binary chromosome for the MNN.


2.1.2. Description of the genetic algorithm for optimizing the fuzzy integrator

With the purpose of increasing the recognition rate even further, a genetic algorithm to optimize the fuzzy integrator was used. The genetic algorithm was developed in three phases, i.e., 3 successive tests were performed with the genetic algorithm, where the size of the chromosome was increased (genes were added in each phase). In the first phase, the optimization of the type of fuzzy system (Mamdani or Sugeno), the type of membership functions (trapezoidal and generalized bell) and the parameters of the membership functions was performed. In the second phase, the optimization of the number of rules (their antecedents and consequents are considered) and the number of membership functions (2 or 3 were used) was performed. Finally, in the third phase the genetic algorithm can decide if the fuzzy system will be based on type-1 or type-2 fuzzy logic (besides all the previous variations).

In this case, the fitness function can be expressed in the following way:

F = \left( \sum_{i=1}^{N} X_i \right) / N \qquad (2)


where X_i is 1 if the person is identified and 0 if not, and N is the total number of data points used for testing of the three modules.

2.1.2.1. Description of the chromosome for the first phase. In the first phase, the chromosome contains 100 genes to represent a particular fuzzy system. The first 13 genes are binary, followed by genes 14–100, which are real-valued, with values between 0 and 1. In this case, the fuzzy systems can only use type-1 fuzzy logic. The chromosome representation is described in detail as follows:

• Gene 1: type of fuzzy system (Mamdani or Sugeno).
• Gene 2: type of membership function for iris (low).
• Gene 3: type of membership function for iris (medium).
• Gene 4: type of membership function for iris (high).
• Gene 5: type of membership function for ear (low).
• Gene 6: type of membership function for ear (medium).
• Gene 7: type of membership function for ear (high).
• Gene 8: type of membership function for voice (low).
• Gene 9: type of membership function for voice (medium).
• Gene 10: type of membership function for voice (high).
• Gene 11: type of membership function for output (low) (if the type of system is Mamdani).
• Gene 12: type of membership function for output (medium) (if the type of system is Mamdani).
• Gene 13: type of membership function for output (high) (if the type of system is Mamdani).
• Gene 14: output value (low) (if the type of system is Sugeno).
• Gene 15: output value (medium) (if the type of system is Sugeno).
• Gene 16: output value (high) (if the type of system is Sugeno).
• Genes 17 to 100: parameters of the membership functions.

2.1.2.2. Description of the chromosome for the second phase. In the second phase, five more genes were added to the initial chromosome: four genes for the number of membership functions per variable (one for each input and one for the output) and one for the number of rules. It is important to note that this representation allows the genetic algorithm to adjust the number of membership functions that will be used for each variable (in this case with type-1 fuzzy logic).

2.1.2.3. Description of the chromosome for the third phase. In the third phase, 98 genes were added to the previous chromosome in order to be able to work with type-2 fuzzy logic. In this phase, the genetic algorithm can decide whether it will use type-1 or type-2 fuzzy logic. Fig. 3 shows the final chromosome representation.
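To make the type-1 versus type-2 option concrete, the following sketch (illustrative only; the parametrization of the footprint of uncertainty is an assumption, not the paper's encoding) builds an interval type-2 Gaussian membership function from a lower and an upper type-1 Gaussian, which is the kind of extra structure the 98 added genes must describe.

```python
import numpy as np

def gauss(x, c, sigma):
    """Type-1 Gaussian membership function."""
    return np.exp(-0.5 * ((x - c) / sigma) ** 2)

def interval_type2_gauss(x, c, sigma, spread):
    """Interval type-2 MF: lower/upper bounds forming a footprint of uncertainty."""
    lower = gauss(x, c, sigma)
    upper = gauss(x, c, sigma + spread)    # wider Gaussian bounds from above
    return lower, upper

x = np.linspace(0.0, 1.0, 101)
lo, up = interval_type2_gauss(x, c=0.5, sigma=0.1, spread=0.05)
# Membership of a crisp value is now an interval [lower, upper]
print(lo[50], up[50])   # both 1.0 at the center
print(lo[60], up[60])   # upper bound exceeds lower away from the center
```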

2.2. Proposed architecture of the modular neural network for person recognition based on the iris, ear and voice biometric measures

The proposed method was applied to the case of human recognition based on the iris, ear and voice biometric measures. The MNN consists of three modules, one for each biometric measure, and each module is divided into another three sub-modules. Each sub-module contains different information, that is, one third of the complete database of the corresponding measure. The idea of dividing the biometric databases is to improve the performance of the MNN by the divide-and-conquer principle.
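A minimal sketch of this divide-and-conquer split (assuming the 77-person databases described below; not the authors' code): the persons handled by one biometric module are divided into three nearly equal groups, one per sub-module.

```python
import numpy as np

def split_into_submodules(person_ids, n_submodules=3):
    """Split the list of person identities into contiguous, nearly equal parts."""
    return np.array_split(np.asarray(person_ids), n_submodules)

persons = np.arange(1, 78)                 # 77 persons
parts = split_into_submodules(persons)
print([len(p) for p in parts])             # [26, 26, 25]
print(parts[0][0], parts[0][-1])           # persons 1..26 go to sub-module 1
```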

Simulation results were obtained by performing experiments with three types of learning algorithms: scaled conjugate gradient (SCG), gradient descent with momentum and adaptive learning rate (GDX), and gradient descent with adaptive learning rate (GDA). The number of neurons in the first and second hidden layers was also adjusted dynamically with the GA.

The architecture of the modular neural network for person recognition based on the iris, ear and voice biometrics is shown in Fig. 4. The biometric databases used in this work are described in the following section.

2.2.1. Databases and pre-processing

The databases that were used and their pre-processing are described below.

2.2.1.1. Iris database. The human iris database from the Institute of Automation of the Chinese Academy of Sciences (CASIA) was used. The database is structured as follows: the information of 99 persons, with 14 images (7 of each eye) per person, is presented. The image dimensions are 320 × 280 pixels, in JPEG format [9]. Only the first 77 persons were used in this work. The 14 images of each person were divided as follows: 8 images were used for training and 6 for testing (see Fig. 5).

In the case of the iris, the following preprocessing steps were performed: the coordinates and radius of the iris and pupil were obtained using the method developed by Masek and Kovesi [22]; once the coordinates and radius are obtained, a cut of the iris is performed and a new image is produced; then the new image is resized to 21 × 21 pixels; and finally the images are converted from vector to matrix form.
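The sketch below outlines these steps in Python, using Pillow and NumPy as stand-ins; the iris center and radius are assumed to come from the Masek-Kovesi localization code, the file name is hypothetical, and flattening into a vector for the network input is assumed here.

```python
import numpy as np
from PIL import Image

def preprocess_iris(path, center_x, center_y, radius, size=(21, 21)):
    """Cut a square region around the detected iris, resize it and flatten it.

    The iris center and radius are assumed to be produced by the
    Masek-Kovesi localization step; they are plain inputs here.
    """
    img = Image.open(path).convert("L")                        # grayscale
    box = (center_x - radius, center_y - radius,
           center_x + radius, center_y + radius)
    iris = img.crop(box).resize(size)                          # 21 x 21 patch
    return np.asarray(iris, dtype=np.float64).ravel() / 255.0  # flat feature vector

# vec = preprocess_iris("casia_001_1_1.jpg", 160, 140, 55)     # hypothetical file
# print(vec.shape)   # (441,)
```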

Fig. 3. The final chromosome of the genetic algorithm.

Fig. 4. Architecture of the MNN for person recognition based on iris, ear and voice biometrics.


2.2.1.2. Ear database. The images from a database elaborated by the Ear Recognition Laboratory of the University of Science and Technology Beijing (USTB) were used. The database contains 4 images (of one ear) per person, and it covers 77 persons. The image dimensions are 300 × 400 pixels, in BMP format [8]. In this case, 3 images were used for training and 1 for testing, and a cross-validation process was applied (see Fig. 6).

Fig. 5. Examples of human iris images from the CASIA database.

Fig. 6. Examples of images from the Ear Recognition Laboratory of the University of Science & Technology Beijing (USTB).


In the case of the ear, the following preprocessing steps were performed: a cut of the ear was performed; then the new image is resized to 132 × 91 pixels; then the images are divided automatically into three regions of interest (helix, shell and lobe); and finally the images are converted from vector to matrix form.
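A short sketch of the region split (illustrative; equal horizontal thirds are assumed here, since the exact crop boundaries are not given):

```python
import numpy as np

def split_ear_regions(ear_image):
    """Split a resized 132 x 91 ear image into three horizontal bands
    (roughly helix, shell and lobe) and flatten each band for its sub-module."""
    top, middle, bottom = np.array_split(ear_image, 3, axis=0)
    return [region.ravel() for region in (top, middle, bottom)]

ear = np.random.rand(132, 91)          # stand-in for a preprocessed ear image
regions = split_ear_regions(ear)
print([r.size for r in regions])       # [4004, 4004, 4004]
```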

2.2.1.3. Voice database. In the case of the voice biometric measure, the signals of the database were collected from students of Tijuana Institute of Technology, and it consists of 10 voice samples per person (for 77 persons), in WAV format. In this case, 7 voice samples were used for training and 3 for testing (for each person). A particular Spanish word ("Accesar") was spoken by the students and the wave signal was collected. Mel Frequency Cepstral Coefficients were used for preprocessing the voice signal, which helps improve the results.
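A minimal sketch of this feature-extraction step, assuming the librosa library (the paper does not state which MFCC implementation was used, and the file name is hypothetical):

```python
import numpy as np
import librosa

def voice_features(wav_path, n_mfcc=13):
    """Load a spoken-word recording and summarize it as mean MFCCs per coefficient."""
    signal, sample_rate = librosa.load(wav_path, sr=None)        # keep native rate
    mfcc = librosa.feature.mfcc(y=signal, sr=sample_rate, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)                                     # fixed-length vector

# features = voice_features("accesar_person01_sample1.wav")      # hypothetical file
# print(features.shape)   # (13,)
```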

3. Experimental results

In this section we describe the results using the proposed modular approach applied to recognition.

3.1. Results of the modular neural network applied to each biometric measure

The results obtained with modular neural networks applied to each biometric measure are presented below. In all cases 20 tests were performed for each biometric measure, but only the best five results are presented for illustrative purposes. Cross-validation with different combinations, depending on the database used, was performed, as sketched below.
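As an illustration of such a cross-validation scheme (a sketch under the assumption that the test window of images is simply rotated; the authors do not spell out the exact combinations), the following generates 14 train/test splits of the 14 iris images per person:

```python
def rotating_splits(n_samples=14, n_test=6):
    """Yield (train, test) index lists by rotating a test window of size n_test."""
    indices = list(range(n_samples))
    for start in range(n_samples):
        test = [(start + k) % n_samples for k in range(n_test)]
        train = [i for i in indices if i not in test]
        yield train, test

for v, (train, test) in enumerate(rotating_splits(), start=1):
    print(f"V{v}: train {len(train)} images, test {len(test)} images")
# 14 validations, each with 8 training and 6 testing images per person
```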

3.1.1. Results for the iris

As mentioned previously, in this case 8 images were used for training and 6 for testing (per person), which produces a total of 462 images used for testing. Integration of responses in the MNN was performed using the gating network method. Table 1 shows the best trainings that were obtained with this approach. In this case, trainings EI2 and EI4 achieved the best recognition rate (97.19%, 448 correct images from a total of 462).

The results obtained with cross-validation for these trainings are shown in Table 2. The best average was 93.92%.

A review of previous works that used this same database was performed, and it was found that Gaxiola et al. [12] used 99 persons (1386 images) and obtained a recognition rate of 97.13%; they also performed 10-fold cross-validation and an average of 91.83% was achieved (the same pre-processing technique was used). Other results are presented by other authors: Sanchez-Avila et al. [37] obtained a recognition rate of 97.89%, Tisse et al. [39] obtained 89.37%, and Daugman [11] obtained 99.90% (however, these works only used 756 images of 108 persons, which means a smaller number of images for training and testing, and they also used different techniques for feature extraction).

Table 1. The best results of validation 1 for the iris.

Training  Module  Method    Neurons   Epoch  Error    Training duration  Module recognition  Total recognition
EI1       1       traingda  150, 110  2000   0.00001  00:01:59           150/156 (96.15%)    448/462 (96.97%)
          2       traingda  150, 100                  00:00:55           150/156 (96.15%)
          3       traingda  150, 110                  00:00:39           148/150 (98.67%)
EI2       1       traingda  150, 110  2000   0.00001  00:01:59           150/156 (96.15%)    448/462 (97.19%)
          2       trainscg  150, 118                  00:00:59           151/156 (96.79%)
          3       traingda  150, 110                  00:00:39           148/150 (98.67%)
EI3       1       traingda  140, 95   2000   0.00001  00:01:07           148/156 (95.51%)    445/462 (96.32%)
          2       traingda  150, 125                  00:00:59           150/156 (96.15%)
          3       traingda  155, 110                  00:00:38           147/150 (98.00%)
EI4       1       traingda  150, 100  2000   0.00001  00:02:22           150/156 (96.15%)    448/462 (97.19%)
          2       traingda  150, 115                  00:00:59           151/156 (96.79%)
          3       traingda  155, 110                  00:00:39           148/150 (98.67%)
EI5       1       traingda  150, 100  2000   0.00001  00:01:12           150/156 (96.15%)    448/462 (96.97%)
          2       traingda  150, 115                  00:00:59           150/156 (96.15%)
          3       traingda  155, 110                  00:00:44           148/150 (98.67%)

Table 2. The results of cross-validation for the iris.

Training  V1     V2     V3     V4     V5     V6     V7     V8     V9     V10    V11    V12    V13    V14    Average (%)
1         96.97  95.45  93.72  93.94  93.29  92.42  92.42  91.99  90.69  91.99  92.21  95.24  96.75  94.16  93.66
2         97.19  95.24  93.94  92.36  90.91  91.77  90.04  91.56  92.36  90.91  92.64  94.37  96.10  94.31  93.22
3         96.32  96.32  94.16  94.16  92.86  91.34  92.21  90.69  89.83  93.29  93.72  95.02  96.10  95.24  93.66
4         97.19  95.45  93.29  91.99  93.29  91.34  90.91  91.34  90.69  91.99  92.64  96.54  96.32  94.81  93.41
5         96.97  96.32  93.51  94.16  93.29  91.77  92.21  92.21  91.13  92.36  93.51  95.24  96.10  95.67  93.92


3.2. Results for the ear

As mentioned previously, 3 images of the ear were used for training and 1 for testing (per person), so in total 77 images were used for testing. In this case, integration was performed with the winner-takes-all method. Table 3 shows the best trainings that were obtained in this case. The best results were obtained in trainings EO3 and EO4, with a 100% (77/77) recognition rate.

The results obtained after performing cross-validation for these trainings are shown in Table 4, and the best average was 89.28%.

A review of other works that used this same ear database was performed, and the work of Gutierrez et al. [14] was found, in which the same number of images, persons and cross-validation was used and an average recognition rate of 94.48% was achieved. The difference in results is due to the use of a 2D wavelet analysis for preprocessing in [14].

Table 3. The best results of validation 1 for the ear.

Training  Module  Method    Neurons  Epoch  Error    Training duration  Module recognition  Total recognition
EO1       1       trainscg  80, 50   2000   0.00001  00:01:24           25/26 (96.15%)      75/77 (97.40%)
          2       trainscg  80, 60                   00:01:29           25/26 (96.15%)
          3       trainscg  80, 50                   00:01:13           25/25 (100%)
EO2       1       trainscg  80, 55   2000   0.00001  00:03:23           25/26 (96.15%)      74/77 (96.10%)
          2       trainscg  85, 65                   00:02:44           24/26 (92.30%)
          3       trainscg  80, 55                   00:02:08           25/25 (100%)
EO3       1       trainscg  83, 55   2000   0.00001  00:02:14           26/26 (100%)        77/77 (100%)
          2       trainscg  85, 60                   00:01:20           26/26 (100%)
          3       trainscg  83, 55                   00:02:53           25/25 (100%)
EO4       1       trainscg  83, 55   2000   0.00001  00:03:14           26/26 (100%)        77/77 (100%)
          2       trainscg  85, 60                   00:01:11           26/26 (100%)
          3       trainscg  83, 55                   00:02:12           25/25 (100%)
EO5       1       trainscg  80, 55   2000   0.00001  00:02:15           25/26 (96.15%)      75/77 (97.40%)
          2       trainscg  85, 65                   00:02:04           25/26 (96.15%)
          3       trainscg  80, 55                   00:02:43           25/25 (100%)

Table 4. The results of cross-validation for the ear.

Training  V1 (%)  V2 (%)  V3 (%)  V4 (%)  Average (%)
1         97.40   74.02   85.71   100     89.28
2         96.10   66.23   83.11   100     86.36
3         100     74.02   83.11   100     89.28
4         100     66.23   85.71   100     87.98
5         97.40   66.23   85.71   100     87.33


3.3. Results for the voice

As noted previously, 7 voice samples were used for training and 3 for testing (per person), so in total 231 voice samples were used for testing. To integrate the results in the MNN, a gating network method was used.

Table 5 shows the best trainings that were obtained in this case. Training EV4 had the best recognition rate (94.81%, 219 voice samples from a total of 231).

The results obtained with cross-validation for these trainings are shown in Table 6, and the best average was 93.94%.

The voice database was created by our research group, and for this reason the comparison was done with the previous work performed by Muñoz et al. [32], in which the voice of 30 persons was used and a recognition rate of 90% was obtained using Mel Frequency Cepstral Coefficients and neural networks. As is evident from Table 6, the recognition rate was improved with the proposed method.

3.4. Genetic algorithm for MNN optimization and results

In this section we can find the recognition results obtained with the optimized MNN after using a genetic algorithm for structure optimization.

3.4.1. Optimized iris results

Table 7 shows the best evolutions that were achieved for each module. In module 1, the best evolution was obtained with 165 neurons in the first hidden layer and 78 in the second hidden layer; the learning algorithm was gradient descent with adaptive learning rate (GDA), with a goal error of 0.000002 and an identification rate of 97.44% (152/156).

Table 5. The best results of validation 1 for the voice.

Training  Module  Method    Neurons   Epoch  Error     Training duration  Module recognition  Total recognition
EV1       1       traingda  190, 110  2000   0.000001  00:00:39           74/78 (94.87%)      214/231 (92.64%)
          2       traingda  190, 110                   00:00:42           72/78 (92.31%)
          3       trainscg  190, 110                   00:00:30           68/75 (90.67%)
EV2       1       traingda  190, 110  2000   0.00001   00:00:31           75/78 (96.15%)      216/231 (93.51%)
          2       traingda  190, 110                   00:00:34           73/78 (93.59%)
          3       trainscg  190, 110                   00:00:31           68/75 (90.67%)
EV3       1       traingda  190, 110  2000   0.00001   00:00:30           75/78 (96.15%)      217/231 (93.94%)
          2       traingda  190, 120                   00:00:29           74/78 (94.87%)
          3       trainscg  190, 110                   00:00:41           68/75 (90.67%)
EV4       1       traingda  190, 110  2000   0.00001   00:00:33           75/78 (96.15%)      219/231 (94.81%)
          2       traingda  190, 120                   00:00:29           74/78 (94.87%)
          3       trainscg  190, 110                   00:00:25           70/75 (93.33%)
EV7       1       traingda  190, 115  2000   0.00001   00:00:31           73/78 (93.59%)      215/231 (93.07%)
          2       traingda  190, 120                   00:00:31           73/78 (93.59%)
          3       traingda  190, 120                   00:00:31           69/75 (92.00%)

Table 6. The best results of cross-validation for the voice.

Training  V1     V2     V3     V4     V5     V6     V7     Average (%)
1         92.64  93.07  89.18  91.34  93.94  98.70  94.81  93.38
2         93.51  92.64  87.88  89.18  94.37  96.97  95.67  92.88
3         93.94  90.91  88.31  89.61  95.67  99.13  94.37  93.13
4         94.81  92.64  90.04  90.91  96.10  97.84  95.24  93.94
5         93.07  92.21  89.61  89.61  96.97  99.13  95.67  93.75

Table 7. The best evolutions for each module for the iris (validation 1).

Mod  GGAP  Cross-over  Pc   Pm    Duration  Method    Error goal  Neurons  Fitness   Module recognition  Total recognition
1    0.8   Xovmp       0.8  0.05  05:27:21  traingda  0.000002    165, 78  0.025641  152/156 (97.44%)    455/462 (98.48%)
2    0.8   Xovmp       0.8  0.06  01:05:56  trainscg  0.000002    68, 69   0.019231  153/156 (98.08%)
3    0.8   Xovmp       0.8  0.05  02:50:45  traingda  0.000002    131, 77  0.00000   150/150 (100%)


In module 2, the best evolution was obtained with 68 neurons in the first hidden layer and 69 in the second hidden layer; the learning algorithm was scaled conjugate gradient (SCG), with a goal error of 0.000002 and an identification rate of 98.08% (153/156). In module 3, the best evolution was obtained with 131 neurons in the first hidden layer and 77 in the second hidden layer; the learning algorithm was gradient descent with adaptive learning rate (GDA), with a goal error of 0.000002 and an identification rate of 100% (150/150). A final recognition rate of 98.48% was achieved, which is better than the non-optimized recognition rate (97.19%).

The results obtained with cross-validation for the optimized case are shown in Table 8. An average recognition rate of 96.11% was achieved, which is better than the average non-optimized recognition rate (93.92%).

3.4.2. Optimized ear results

Table 9 shows the best evolution that was obtained for each module for validation number 2. In module 1, the best evolution was obtained with 126 neurons in the first hidden layer and 86 in the second hidden layer; the learning algorithm was gradient descent with adaptive learning rate (GDA), with a goal error of 0.000004 and an identification rate of 80.76% (21/26). In module 2, the best evolution was obtained with 58 neurons in the first hidden layer and 49 in the second hidden layer; the learning algorithm was gradient descent with momentum and adaptive learning rate (GDX), with a goal error of 0.000002 and an identification rate of 84.61% (22/26). In module 3, the best evolution was obtained with 117 neurons in the first hidden layer and 69 in the second hidden layer; the learning algorithm was scaled conjugate gradient (SCG), with a goal error of 0.000004 and an identification rate of 84% (22/25). A final recognition rate of 83.11% was obtained, which is better than the non-optimized recognition rate (74.02%).

The results obtained with cross-validation for the optimized case are shown in Table 10. An average recognition rate of 93.82% was obtained, which is better than the average non-optimized recognition rate (89.29%).

3.4.3. Optimized voice results

Table 11 shows the best evolution that was obtained for each module. In module 1, the best evolution was obtained with 177 neurons in the first hidden layer and 86 in the second hidden layer; the learning algorithm was scaled conjugate gradient (SCG), with a goal error of 0.000010 and an identification rate of 98.72% (77/78). In module 2, the best evolution was obtained with 91 neurons in the first hidden layer and 113 in the second hidden layer; the learning algorithm was gradient descent with momentum and adaptive learning rate (GDX), with a goal error of 0.000003 and an identification rate of 96.15% (75/78). In module 3, the best evolution was obtained with 150 neurons in the first hidden layer and 87 in the second hidden layer; the learning algorithm was gradient descent with adaptive learning rate (GDA), with a goal error of 0.000012 and an identification rate of 94.67% (71/75). A final recognition rate of 96.54% was obtained, and this is better than the non-optimized recognition rate (94.81%).

Table 8. The results of cross-validation for the optimized case of the iris.

V1     V2    V3     V4     V5     V6     V7     V8    V9     V10    V11   V12    V13    V14    Average
98.48  97.4  95.88  96.32  95.67  94.58  94.37  94.8  93.72  95.67  96.1  97.61  98.26  96.75  96.11

Table 9. The best evolutions for each module for the ear (validation 2).

Mod  GGAP  Cross-over  Pc   Pm   Duration  Method    Error goal  Neurons  Fitness   Module recognition  Total recognition
1    0.8   Xovmp       0.8  0.6  09:30:12  traingda  0.000004    126, 86  0.192308  21/26 (80.76%)      64/77 (83.11%)
2    0.8   Xovmp       0.8  0.5  12:03:54  traingdx  0.000002    58, 49   0.153846  22/26 (84.61%)
3    0.8   Xovmp       0.8  0.6  11:02:23  trainscg  0.000004    117, 69  0.160000  22/25 (84%)

Table 10. The results of cross-validation for the optimized case of the ear.

V1   V2     V3     V4   Average
100  83.11  92.20  100  93.82

Table 11. The best evolutions for each module for the voice.

Mod  GGAP  Cross-over  Pc   Pm    Duration  Method    Error goal  Neurons  Fitness   Module recognition  Total recognition
1    0.8   Xovmp       0.8  0.05  02:04:44  trainscg  0.000010    177, 86  0.012321  77/78 (98.72%)      223/231 (96.54%)
2    0.8   Xovmp       0.8  0.05  00:58:15  traingdx  0.000003    91, 113  0.038462  75/78 (96.15%)
3    0.8   Xovmp       0.8  0.05  01:16:40  traingda  0.000012    150, 87  0.053333  71/75 (94.67%)


The results obtained with cross-validation for the optimized case are shown in Table 12. An average recognition rate of 96.90% was achieved, which is greater than the average non-optimized recognition rate (93.94%).

3.5. Fuzzy integration

Seven cases for studying fuzzy integration were established by combining different trainings of the iris, ear and voice, for the optimized and non-optimized results. Table 13 illustrates how the trainings were combined with each other using fuzzy response integration.

To combine the responses of the different biometric measures, 2 fuzzy integrators were established: the first (see Fig. 7) is of Mamdani type with 27 rules and trapezoidal membership functions, and the second (see Fig. 8) is of Mamdani type with 23 rules and Gaussian membership functions.

The fuzzy systems have 3 input variables (one for each biometric measure) and 1 output variable, and there are 3 membership functions for each variable (ABaja, Amedia and AAlta).
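A reduced sketch of such a Mamdani integrator, written with the scikit-fuzzy control API as a stand-in for the tools actually used by the authors; only three illustrative rules are shown instead of the 27 or 23 of the real integrators, and the membership-function parameters and English labels (low/medium/high) are assumptions.

```python
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

universe = np.arange(0.0, 1.01, 0.01)
iris = ctrl.Antecedent(universe, 'iris')
ear = ctrl.Antecedent(universe, 'ear')
voice = ctrl.Antecedent(universe, 'voice')
winner = ctrl.Consequent(universe, 'winner')

# Three membership functions per variable, mirroring ABaja / Amedia / AAlta
for var in (iris, ear, voice, winner):
    var['low'] = fuzz.trapmf(var.universe, [0.0, 0.0, 0.2, 0.4])
    var['medium'] = fuzz.trimf(var.universe, [0.3, 0.5, 0.7])
    var['high'] = fuzz.trapmf(var.universe, [0.6, 0.8, 1.0, 1.0])

rules = [
    ctrl.Rule(iris['high'] | ear['high'] | voice['high'], winner['high']),
    ctrl.Rule(iris['medium'] & ear['medium'] & voice['medium'], winner['medium']),
    ctrl.Rule(iris['low'] & ear['low'] & voice['low'], winner['low']),
]

integrator = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
# Activations of the winning class of each biometric module
integrator.input['iris'] = 0.92
integrator.input['ear'] = 0.35
integrator.input['voice'] = 0.78
integrator.compute()
print(integrator.output['winner'])   # combined confidence used for the final decision
```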

3.5.1. Fuzzy integration results using the fuzzy integrators #1 and #2

Table 14 shows the integration results using fuzzy integrators #1 and #2. It can be noted that in most cases the recognition rate was increased using fuzzy integrator #2.

3.6. Results of the genetic algorithm for the fuzzy integrator

In this section the results obtained using the genetic algorithm for optimizing the structure of the fuzzy integrator are presented. A total of 120 tests (20 per case) were performed in each phase of the genetic algorithm, but only the best test for each case is shown.

3.6.1. Optimization and results of the first test

Table 15 shows the comparison between the different fuzzy integrators, and it can be observed that the best recognition rates are for the optimized fuzzy integrators in all cases.

The genetic algorithm created different fuzzy integrators; for example, Fig. 9 shows the best fuzzy integrator for case 3, which in this case turned out to be a fuzzy system of Sugeno form.

3.6.2. Optimization and results of the second test

Table 16 shows the comparison between the different fuzzy integrators, and it can be observed that the best recognition rates were obtained in the second test in almost all cases. Only in case 5 did the recognition percentage remain at 100%.

Table 12. The results of cross-validation for the optimized case of the voice.

V1     V2    V3     V4    V5     V6   V7     Average
96.54  96.1  95.23  93.5  99.13  100  97.83  96.90

Table 13. Trainings used for forming the seven cases.

Case  Iris                     Ear                     Voice
1     EI3 (96.32%)             EO3 (66.23%)            EV1 (92.64%)
2     EI3 (96.32%)             EO1 (97.40%)            EV3 (93.94%)
3     EI5 (96.97%)             EO2 (85.71%)            EV5 (93.07%)
4     EI2 (97.19%)             EO4 (100.00%)           EV4 (94.81%)
5     Iris optimized (98.48%)  EO4 (100.00%)           Voice optimized V1 (96.54%)
6     EI4 (97.19%)             EO5 (74.02%)            EV4 (94.81%)
7     Iris optimized (98.48%)  Ear optimized (83.11%)  Voice optimized (96.54%)

Fig. 7. Membership functions of fuzzy integrator #1.

Fig. 8. Membership functions of fuzzy integrator #2.


The genetic algorithm created different fuzzy integrators; for example, Fig. 10 presents the best fuzzy integrator for case 6. In this case the fuzzy system was of Sugeno form with 7 rules, and it can be noted that one of its variables only uses two membership functions.

3.6.3. Optimization and results of the third test

Table 17 shows the comparison between the different fuzzy integrators, and it is observed that in some cases the recognition rates remained as in the second test, while in some others there was a small improvement.

Table 14. The comparison between fuzzy integrators #1 and #2.

Case  Fuzzy integrator #1  Fuzzy integrator #2
1     394/462 (85.28%)     380/462 (82.25%)
2     427/462 (92.42%)     454/462 (98.27%)
3     415/462 (89.83%)     437/462 (94.59%)
4     435/462 (94.16%)     459/462 (99.35%)
5     448/462 (96.97%)     460/462 (99.57%)
6     399/462 (86.36%)     403/462 (87.23%)
7     432/462 (93.51%)     432/462 (93.51%)

Fig. 9. The best fuzzy integrator for case 3.

Table 15. The comparison among fuzzy integrators of the first test.

Case  Fuzzy integrator #1  Fuzzy integrator #2  Optimized fuzzy integrator (first test)
1     394/462 (85.28%)     380/462 (82.25%)     448/462 (96.96%)
2     427/462 (92.42%)     454/462 (98.27%)     461/462 (99.78%)
3     415/462 (89.83%)     437/462 (94.59%)     454/462 (98.26%)
4     435/462 (94.16%)     459/462 (99.35%)     461/462 (99.78%)
5     448/462 (96.97%)     460/462 (99.57%)     462/462 (100%)
6     399/462 (86.36%)     403/462 (87.23%)     455/462 (98.48%)
7     432/462 (93.51%)     432/462 (93.51%)     458/462 (99.13%)


Table 16. The comparison between fuzzy integrators of the second test.

Case  Fuzzy integrator #1  Fuzzy integrator #2  Optimized fuzzy integrator (first test)  Optimized fuzzy integrator (second test)
1     394/462 (85.28%)     380/462 (82.25%)     448/462 (96.96%)                         460/462 (99.56%)
2     427/462 (92.42%)     454/462 (98.27%)     461/462 (99.78%)                         462/462 (100%)
3     415/462 (89.83%)     437/462 (94.59%)     454/462 (98.26%)                         460/462 (99.56%)
4     435/462 (94.16%)     459/462 (99.35%)     461/462 (99.78%)                         462/462 (100%)
5     448/462 (96.97%)     460/462 (99.57%)     462/462 (100%)                           462/462 (100%)
6     399/462 (86.36%)     403/462 (87.23%)     455/462 (98.48%)                         461/462 (99.78%)
7     432/462 (93.51%)     432/462 (93.51%)     458/462 (99.13%)                         459/462 (99.35%)

Fig. 10. The best fuzzy integrator for case 6.


The genetic algorithm created different fuzzy integrators, and now there were results with both type-1 and type-2 fuzzy logic. For example, Fig. 11 shows the best fuzzy integrator for case 1, which in this case was a fuzzy system of Sugeno form with type-2 fuzzy logic and 7 rules.

3.7. Cases with noise

When it was noted that in the third test (where type-2 fuzzy logic was already used [4,5]) there was not a significant difference in comparison with the second test (where only type-1 fuzzy logic was used), a decision was made to apply noise to the images and voice signals used for testing and training. Different levels of noise (Gaussian noise) were considered, and the same cases mentioned previously were repeated, but now with noise. The same tests were performed, with the 2 non-optimized fuzzy integrators and 3 tests with the genetic algorithm. Table 18 shows how the trainings were combined with each other.
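A sketch of how such corrupted data can be generated (illustrative; the exact noise levels used by the authors are not reported, so the standard deviation is a free parameter here):

```python
import numpy as np

def add_gaussian_noise(data, noise_level, rng=None):
    """Return a copy of an image or voice feature array corrupted with
    zero-mean Gaussian noise of the given standard deviation."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = data + rng.normal(0.0, noise_level, size=data.shape)
    return np.clip(noisy, 0.0, 1.0)    # keep normalized values in range

clean = np.random.rand(21, 21)                 # e.g. a preprocessed iris patch
for level in (0.05, 0.1, 0.2):                 # increasing noise levels
    noisy = add_gaussian_noise(clean, level)
    print(level, float(np.abs(noisy - clean).mean()))
```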

3.7.1. Non-optimized fuzzy integrators

The previously described non-optimized fuzzy integrators were used to perform the new tests.

Table 17. The comparison between fuzzy integrators of the third test.

Case  Fuzzy integrator #1  Fuzzy integrator #2  Optimized (first test)  Optimized (second test)  Optimized (third test)
1     394/462 (85.28%)     380/462 (82.25%)     448/462 (96.96%)        460/462 (99.56%)         461/462 (99.78%)
2     427/462 (92.42%)     454/462 (98.27%)     461/462 (99.78%)        462/462 (100%)           462/462 (100%)
3     415/462 (89.83%)     437/462 (94.59%)     454/462 (98.26%)        460/462 (99.56%)         460/462 (99.56%)
4     435/462 (94.16%)     459/462 (99.35%)     461/462 (99.78%)        462/462 (100%)           462/462 (100%)
5     448/462 (96.97%)     460/462 (99.57%)     462/462 (100%)          462/462 (100%)           462/462 (100%)
6     399/462 (86.36%)     403/462 (87.23%)     455/462 (98.48%)        461/462 (99.78%)         462/462 (100%)
7     432/462 (93.51%)     432/462 (93.51%)     458/462 (99.13%)        459/462 (99.35%)         461/462 (99.78%)

Fig. 11. The best fuzzy integrator for case 1.

Table 18. Trainings used for forming the seven cases with noise.

Case  Iris                     Ear                        Voice
1     EI3 (88.96%)             EO3 (66.23%)               EV1 (46.32%)
2     EI3 (50.65%)             EO1 (44.16%)               EV3 (93.94%)
3     EI5 (72.29%)             EO2 (66.23%)               EV5 (59.31%)
4     EI2 (53.25%)             EO4 (85.71%)               EV4 (87.01%)
5     Iris optimized (63.42%)  EO4 (77.92%)               Voice optimized (81.82%)
6     EI4 (68.40%)             EO5 (51.95%)               EV4 (94.81%)
7     Iris optimized (70.78%)  Ear optimized V2 (53.24%)  Voice optimized (81.82%)


3.7.1.1. Fuzzy integration results using fuzzy integrators #1 and #2. Table 19 shows the integration results using fuzzy integrators #1 and #2 for the cases with noise. From Table 19 it is noted that there exists a variation in the recognition rate, and in most cases the recognition rate was increased using fuzzy integrator #2.


3.7.2. Tests with the genetic algorithm

The same three tests with the genetic algorithm were repeated, but now with noise.

Table 19. Comparison between non-optimized fuzzy integrators.

Case  Fuzzy integrator #1  Fuzzy integrator #2
1     313/462 (67.75%)     368/462 (79.65%)
2     361/462 (78.14%)     353/462 (77.49%)
3     324/462 (70.13%)     333/462 (72.08%)
4     302/462 (65.37%)     319/462 (69.05%)
5     306/462 (66.23%)     304/462 (65.80%)
6     303/462 (65.58%)     274/462 (59.31%)
7     276/462 (59.74%)     275/462 (59.52%)

Table 20. The comparison between fuzzy integrators of the first test.

Case  Fuzzy integrator #1  Fuzzy integrator #2  Optimized fuzzy integrator (first test)
1     313/462 (67.75%)     368/462 (79.65%)     387/462 (83.76%)
2     361/462 (78.14%)     358/462 (77.49%)     443/462 (95.88%)
3     324/462 (70.13%)     333/462 (72.08%)     354/462 (76.62%)
4     302/462 (65.37%)     319/462 (69.05%)     432/462 (93.50%)
5     306/462 (66.23%)     304/462 (65.80%)     390/462 (84.41%)
6     303/462 (65.58%)     274/462 (59.31%)     412/462 (89.17%)
7     276/462 (59.74%)     275/462 (59.52%)     333/462 (72.07%)

Table 21. The comparison between fuzzy integrators of the second test.

Case  Fuzzy integrator #1  Fuzzy integrator #2  Optimized fuzzy integrator (first test)  Optimized fuzzy integrator (second test)
1     313/462 (67.75%)     368/462 (79.65%)     387/462 (83.76%)                         412/462 (89.17%)
2     361/462 (78.14%)     353/462 (77.49%)     443/462 (95.88%)                         444/462 (96.10%)
3     324/462 (70.13%)     333/462 (72.08%)     354/462 (76.62%)                         363/462 (79.65%)
4     302/462 (65.37%)     319/462 (69.05%)     432/462 (93.50%)                         432/462 (93.50%)
5     306/462 (66.23%)     304/462 (65.80%)     390/462 (84.41%)                         403/462 (87.22%)
6     303/462 (65.58%)     274/462 (59.31%)     412/462 (89.17%)                         431/462 (93.29%)
7     276/462 (59.74%)     275/462 (59.52%)     333/462 (72.07%)                         384/462 (83.11%)


3.7.2.1. Results of the first test. Table 20 shows the comparison among the different fuzzy integrators, and it is observed that in all cases the best recognition rates are for the optimized fuzzy integrators.

3.7.2.2. Results of the second test. Table 21 shows the comparison between the different fuzzy integrators, and it is observed that in the second test, in all cases, the best recognition rates are for the optimized fuzzy integrators. It is important to remember that in this test we only used type-1 fuzzy logic.

3.7.2.3. Results of the third test. Table 22 shows the comparison among the different fuzzy integrators, and it is observed that in all cases the best recognition rates are for the optimized fuzzy integrators.

Table 22. The comparison between fuzzy integrators of the third test.

Case  Fuzzy integrator #1  Fuzzy integrator #2  Optimized (first test)  Optimized (second test)  Optimized (third test)
1     313/462 (67.75%)     368/462 (79.65%)     387/462 (83.76%)        412/462 (89.17%)         423/462 (91.55%)
2     361/462 (78.14%)     358/462 (77.49%)     443/462 (95.88%)        444/462 (96.10%)         460/462 (99.56%)
3     324/462 (70.13%)     333/462 (72.08%)     354/462 (76.62%)        368/462 (79.65%)         386/462 (83.54%)
4     302/462 (65.37%)     319/462 (69.05%)     432/462 (93.50%)        432/462 (93.50%)         444/462 (96.10%)
5     306/462 (66.23%)     304/462 (65.80%)     390/462 (84.41%)        403/462 (87.22%)         418/462 (90.47%)
6     303/462 (65.58%)     274/462 (59.31%)     412/462 (89.17%)        431/462 (93.29%)         446/462 (96.53%)
7     276/462 (59.74%)     275/462 (59.52%)     333/462 (72.07%)        384/462 (83.11%)         401/462 (86.79%)

Fig. 12. Comparison between type-1 and type-2 fuzzy logic for cases with noise.


Fig. 13. Example of ear images used as inputs to the three modules.


It is important to remember that in this test the genetic algorithm can decide whether it will use type-1 or type-2 fuzzy logic. In all the cases the best fuzzy integrator was obtained with type-2 fuzzy logic. This could have been expected, as there has been experimental evidence from previous works that type-2 fuzzy logic can handle a higher degree of uncertainty in a process, which in this case was represented by the different noise levels.

Fig. 12 shows the comparison between the recognition results obtained when using type-1 and type-2 fuzzy logic; these results come from the second and third tests of the cases with noise, respectively. It is observed that in all cases the results were better when the genetic algorithm chose to use type-2 fuzzy logic.

3.8. What happens if a module fails?

One advantage that modular neural networks offer is that if one or more of the modules fail, the others will continue working; the recognition rate could change slightly, but the overall system's performance will not be seriously affected. For verifying this statement, tests were performed on the 3 ear trainings already presented in this work. In this case some white images were used as input to the modular neural network, as shown in Fig. 13. In other words, the corresponding parts in module 1 (lobe), module 2 (shell) and module 3 (helix) were replaced with white images.

The results were as follows: for the first training of the ear the recognition rate was 97.4% (75 of 77 images), and with this test the recognition rate decreased to 94.8% (73 of 77 images), which means that there were only 2 more errors; for the second and third trainings of the ear the recognition rates were 96.10% and 100%, respectively, and for these tests the recognition rates remained the same (independently of the white parts of the images).

These tests were also performed using the three biometric measures, and case number four was chosen. First, the module of the ear was forced to fail, and it is noted in Table 23 that the result did not change. Then the modules of the ear and voice were forced to fail, and it is noted in Table 24 that the result of the integration is the result of the iris. This is because if the other two modules fail, then the last one must respond. Moreover, when problems of this kind arise, the fuzzy integrator gives more weight to the modules that work correctly.

4. Statistical comparison of results

It was also important to statistically verify that the results of the optimized modular neural networks and the fuzzy integrators were better than the non-optimized ones.

Table 23. Results when the module of the ear fails.

Case  Iris          Ear       Voice         Recognition
4     EI2 (97.19%)  EO4 (0%)  EV4 (94.81%)  462/462 (100%)

Table 24. Results when the modules of the ear and voice fail.

Case  Iris          Ear       Voice     Recognition
4     EI2 (97.19%)  EO4 (0%)  EV4 (0%)  462/462 (97.19%)

Table 25. The results of t for the modular neural networks.

t-test                             Value of t
Iris non-optimized vs. optimized   -3.47
Ear non-optimized vs. optimized    -0.60
Voice non-optimized vs. optimized  -2.15

Table 26. The results of t for the fuzzy integrators.

Comparison           Fuzzy integrator #1 vs. #2  Fuzzy integrator #2 vs. first test  First test vs. second test  Second test vs. third test
Cases without noise  -0.78                       -2.13                               -12.06                      -2.68
Cases with noise     -0.38                       -3.6                                -7.41                       -4.3


For achieving this goal, statistical t-tests were performed to verify that the t values were sufficiently high to reject the null hypothesis, so that a sufficient statistical difference between the results was found. The obtained t values for the different tests are presented below.

4.1. Tests for the modular neural networks

A t-test between the non-optimized and optimized modular neural networks (one per measure) was performed, and the results are presented in Table 25. For performing the t-test of each biometric measure, in the case of the iris 14 non-optimized trainings and 14 optimized trainings were used, in the case of the ear 4 non-optimized trainings and 4 optimized trainings were used, and in the case of the voice 7 non-optimized trainings and 7 optimized trainings were used (which means using the best result of each validation for both the non-optimized and optimized cases).
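As an illustration of how such a comparison can be computed (a sketch using SciPy's two-sample t-test on per-validation iris recognition rates taken from Tables 2 and 8; the authors' exact selection of values and variance assumptions are not stated, so the resulting statistic is only indicative):

```python
from scipy import stats

# Per-validation iris recognition rates (%) taken from Table 2 (training 5,
# non-optimized) and Table 8 (optimized). The authors used the best result of
# each validation, so these inputs only approximate their test data.
non_optimized = [96.97, 96.32, 93.51, 94.16, 93.29, 91.77, 92.21,
                 92.21, 91.13, 92.36, 93.51, 95.24, 96.10, 95.67]
optimized = [98.48, 97.40, 95.88, 96.32, 95.67, 94.58, 94.37,
             94.80, 93.72, 95.67, 96.10, 97.61, 98.26, 96.75]

t_value, p_value = stats.ttest_ind(non_optimized, optimized)
print(t_value, p_value)   # a clearly negative t indicates the optimized MNN is better
```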

It can be noted from Table 25 that for the iris and voice there exists a significant statistical difference, and based on this fact it can be said that these modular neural networks are better when they are optimized than when they are not. However, it is also noted from Table 25 that there is not sufficient evidence to say that the ear results were improved after optimization.

4.2. Tests for the fuzzy integrators

Several t-tests were also performed between the non-optimized and optimized fuzzy integrators, and the results are presented in Table 26. For the non-optimized fuzzy integrators, 7 values were used for performing the t-test (one per case), and for the optimized fuzzy integrators 140 values were used (20 per case).

From Table 26 it can be noted that when tests between the non-optimized fuzzy integrators were performed (in the cases without and with noise) there is not sufficient statistical difference between the two non-optimized fuzzy integrators. However, when a comparison was performed between fuzzy integrator #2 and the first test (in the cases without and with noise), there is a statistical difference, and we can say that the first test is better than fuzzy integrator #2. When a comparison between the first and second tests was made (in the cases without and with noise), it was found that there also exists a sufficient statistical difference, and it can be concluded that the second test is better than the first test. Finally, when a comparison between the second and third tests was made (in the cases without and with noise), it was found that there is also a sufficient statistical difference, and it can be concluded that the third test is better than the second test. In particular, in this last statistical comparison for the cases with noise, we can compare type-1 and type-2 fuzzy logic, and we observed that type-2 fuzzy logic is better than type-1 when noise is present.

It is important to say that a t-test was also performed to determine whether a better recognition rate is achieved when there is no noise in the biometric measures, and the value in this t-test was 7.23. In summary, it can be said that there is sufficient statistical evidence that a better recognition rate is obtained when the data used for testing do not have any noise.

5. Conclusions

In this paper, a method that combines modular neural networks and fuzzy logic for response integration was proposed. The final genetic algorithm that was developed for the design of the fuzzy integrators has the ability to adjust the type of membership functions (and to combine different types of membership functions), to adjust the number of membership functions, and to create the fuzzy rules, besides choosing the type of fuzzy logic (type-1 or type-2).

Human recognition based on the iris, ear and voice biometrics, using modular neural networks with type-1 and type-2 fuzzy logic for response integration, was performed. Genetic algorithms were also used for optimizing the modular neural network and the fuzzy integrator, and thus the percentage of recognition was increased significantly. Cases with noise were also considered to analyze the behavior of the genetic algorithm. In all the cases with noise a better recognition rate was obtained when type-2 fuzzy logic is used in the fuzzy response integrator. With the obtained results, it can be noted that genetic algorithms are of great help in finding the optimal architectures for the neural networks and the fuzzy systems.

As future work, we will consider generalizing the proposed method to more than three biometric measures and also to biometric databases of any size, which will require automatically dividing the databases to form the inputs to the modules.

References

[1] R. Alcalá, P. Ducange, F. Herrera, B. Lazzerini, F. Marcelloni, A multiobjective evolutionary approach to concurrently learn rule and data bases of linguistic fuzzy-rule-based systems, IEEE Transactions on Fuzzy Systems 17 (2009) 1106–1122.
[2] R.A. Aliev, W. Pedrycz, B.G. Guirimov, R.R. Aliev, U. Ilhan, M. Babagil, S. Mammadli, Type-2 fuzzy neural networks with fuzzy clustering and differential evolution optimization, Information Sciences 181 (2011) 1591–1608.
[3] O. Castillo, P. Melin, Type-2 Fuzzy Logic: Theory and Applications, Springer-Verlag, Heidelberg, Germany, 2008.
[4] J.R. Castro, O. Castillo, P. Melin, An interval type-2 fuzzy logic toolbox for control applications, in: Proceedings of FUZZ-IEEE 2007, 2007, pp. 1–6.
[5] J.R. Castro, O. Castillo, P. Melin, A. Rodriguez-Diaz, Building fuzzy inference systems with a new interval type-2 fuzzy logic toolbox, Transactions on Computational Science 1 (2008) 104–114.
[6] B.-I. Choi, F. Chung-Hoon Rhee, Interval type-2 fuzzy membership function generation methods for pattern recognition, Information Sciences 179 (13) (2009) 2102–2122.


[7] H. Chris Tseng, B. Almogahed, Modular neural networks with applications to pattern profiling problems, Neurocomputing 72 (10–12) (2009) 2093–2100.
[8] Database of the Ear Recognition Laboratory of the University of Science & Technology Beijing (USTB). <http://www.ustb.edu.cn/resb/en/index.htmasp> (accessed 21.09.09).
[9] Database of Human Iris from the Institute of Automation of the Chinese Academy of Sciences (CASIA). <http://www.cbsr.ia.ac.cn/english/IrisDatabase.asp> (accessed 21.09.09).
[10] J.G. Daugman, High confidence visual recognition of persons by a test of statistical independence, IEEE Transactions on Pattern Analysis and Machine Intelligence 15 (1993) 1148–1161.
[11] J.G. Daugman, Statistical richness of visual phase information: update on recognizing persons by iris patterns, International Journal of Computer Vision 45 (2001) 25–38.
[12] F. Gaxiola, P. Melin, M. López, Modular neural networks for person recognition using the contour segmentation of the human iris biometric measurement, Studies in Computational Intelligence 312 (2010) 137–153.
[13] A. Goltsev, V. Gritsenko, Modular neural networks with Hebbian learning rule, Neurocomputing 72 (2009) 2477–2482.
[14] L. Gutiérrez, P. Melin, M. López, Modular neural network for human recognition from ear images using wavelets, Studies in Computational Intelligence 312 (2010) 121–135.
[15] D. Hidalgo, O. Castillo, P. Melin, Optimization with genetic algorithms of modular neural networks using interval type-2 fuzzy logic for response integration: the case of multimodal biometry, in: Proceedings of IJCNN 2008, 2008, pp. 738–745.
[16] D. Hidalgo, O. Castillo, P. Melin, Type-1 and type-2 fuzzy inference systems as integration methods in modular neural networks for multimodal biometry and its optimization with genetic algorithms, Studies in Computational Intelligence 154 (2008) 89–114.
[17] D. Hidalgo, P. Melin, G. Licea, O. Castillo, Optimization of type-2 fuzzy integration in modular neural networks using an evolutionary method with applications in multimodal biometry, in: Proceedings of MICAI 2009, 2009, pp. 454–465.
[18] D. Hidalgo, O. Castillo, P. Melin, Type-1 and type-2 fuzzy inference systems as integration methods in modular neural networks for multimodal biometry and its optimization with genetic algorithms, Information Sciences 179 (2009) 2123–2145.
[19] J. Jang, C. Sun, E. Mizutani, Neuro-Fuzzy and Soft Computing, Prentice Hall, New Jersey, 1997.
[20] Y. Ji, R.M. Massanari, J. Ager, J. Yen, R.E. Miller, H. Ying, A fuzzy logic-based computational recognition-primed decision model, Information Sciences 177 (2007) 4338–4353.
[21] D. Kitakoshi, H. Shioya, R. Nakano, Empirical analysis of an on-line adaptive system using a mixture of Bayesian networks, Information Sciences 180 (2010) 2856–2874.
[22] L. Masek, P. Kovesi, MATLAB Source Code for a Biometric Identification System Based on Iris Patterns, The School of Computer Science and Software Engineering, University of Western Australia, 2003.
[23] P. Melin, O. Castillo, Hybrid Intelligent Systems for Pattern Recognition Using Soft Computing, Springer-Verlag, Heidelberg, 2005.
[24] P. Melin, C. Felix, O. Castillo, Face recognition using modular neural networks and the fuzzy Sugeno integral for response integration, International Journal of Intelligent Systems 20 (2005) 275–291.
[25] P. Melin, J. Urias, D. Solano, M. Soto, M. Lopez, O. Castillo, Voice recognition with neural networks, type-2 fuzzy logic and genetic algorithms, Journal of Engineering Letters 13 (2006) 108–116.
[26] P. Melin, O. Mendoza, O. Castillo, An improved method for edge detection based on interval type-2 fuzzy logic, Expert Systems with Applications 37 (2010) 8527–8535.
[27] O. Mendoza, P. Melin, G. Licea, A new method for edge detection in image processing using interval type-2 fuzzy logic, Proceedings of Granular Computing (2007) 151–156.
[28] O. Mendoza, P. Melin, G. Licea, A hybrid approach for image recognition combining type-2 fuzzy logic, modular neural networks and the Sugeno integral, Information Sciences 179 (2009) 2078–2101.
[29] O. Mendoza, P. Melin, O. Castillo, Interval type-2 fuzzy logic and modular neural networks for face recognition applications, Applied Soft Computing 9 (2009) 1377–1387.
[30] H.B. Mitchell, Pattern recognition using type-II fuzzy sets, Information Sciences 170 (2005) 409–418.
[31] B. Moreno, A. Sanchez, J.F. Velez, On the use of outer ear images for personal identification in security applications, in: Proceedings of the IEEE 33rd Annual International Carnahan Conference on Security Technology, 1999, pp. 469–476.
[32] R. Muñoz, O. Castillo, P. Melin, Face, fingerprint and voice recognition with modular neural networks and fuzzy integration, Studies in Computational Intelligence 256 (2009) 69–79.
[33] R. Muñoz, O. Castillo, P. Melin, Optimization of fuzzy response integrators in modular neural networks with hierarchical genetic algorithms: the case of face, fingerprint and voice recognition, Studies in Computational Intelligence 257 (2009) 111–129.
[34] N. Nawa, T. Furuhashi, Fuzzy system parameters discovered by bacterial evolutionary algorithm, IEEE Transactions on Fuzzy Systems 7 (1999) 608–616.
[35] P.A. Salazar-Tejeda, P. Melin, O. Castillo, A new biometric recognition technique based on hand geometry and voice using neural networks and fuzzy logic, Studies in Computational Intelligence 154 (2008) 171–186.
[36] M. Saleh, Using Ears as a Biometric for Human Recognition, Arab Academy for Science and Technology and Maritime Transport, Cairo, Egypt, September 2006.
[37] C. Sanchez-Avila, R. Sanchez-Reillo, D. Martin-Roche, Iris recognition for biometric identification using dyadic wavelet transform zero-crossing, in: Proceedings of the IEEE International Carnahan Conference on Security Technology, 2002, pp. 272–277.
[38] A. Sarhan, Iris Recognition Using Discrete Cosine Transform and Artificial Neural Networks, Dept. of Computer Engineering, Jordan University, Amman, Jordan, 2009.
[39] C. Tisse, L. Torres, M. Robert, Person identification based on iris patterns, in: Proceedings of the 15th International Conference on Vision Interface, 2002.
[40] B. Verma, M. Blumenstein, Pattern Recognition Technologies and Applications, Information Science Reference, Hershey, New York, 2008, pp. 90–91.
[41] W. Wang, S. Bridges, Genetic Algorithm Optimization of Membership Functions for Mining Fuzzy Association Rules, Department of Computer Science, Mississippi State University, March 2, 2000.
[42] L.A. Zadeh, Fuzzy sets, Journal of Information and Control 8 (1965) 338–353.
[43] L.A. Zadeh, Towards a generalized theory of uncertainty (GTU) – an outline, Information Sciences 172 (2005) 1–40.
[44] L.A. Zadeh, Is there a need for fuzzy logic?, Information Sciences 178 (2008) 2751–2779.