
Score Level Fusion of Ear and Face Local 3D Features for Fast and Expression-Invariant Human Recognition

S.M.S. Islam, M. Bennamoun, A.S. Mian, and R. Davies

The University of Western Australia, Crawley, WA 6009, Australia
{shams,mbennamoun,ajmal,rowan}@csse.uwa.edu.au

Abstract. Increasing risks of spoof attacks and other common problems of unimodal biometric systems such as intra-class variations, non-universality and noisy data necessitate the use of multimodal biometrics. The face and the ear are highly attractive biometric traits for combination because of their physiological structure and location. Besides, both of them can be acquired non-intrusively. However, changes of facial expressions, variations in pose, scale and illumination and the presence of hair and ornaments present some genuine challenges. In this paper, a 3D local feature based approach is proposed to fuse ear and face biometrics at the score level. Experiments with FRGC v.2 and the University of Notre Dame Biometric databases show that the technique achieves an identification rate of 98.71% and a verification rate of 99.68% (at 0.001 FAR) for fusion of the ear with neutral face biometrics. It is also found to be fast and robust to facial expressions, achieving 98.1% and 96.83% identification and verification rates respectively.

1 Introduction

Multimodal biometric recognition is a comparatively new research area where multiple physiological (such as face, fingerprint, palm-print, iris, DNA etc.) or behavioral (handwriting, gait, voice etc.) characteristics of a subject are taken into consideration for automatic recognition purposes [1,2]. A system may be called multimodal if it collects data from different biometric sources or uses different types of sensors (e.g. infra-red, reflected light etc.), or uses multiple samples of data or multiple algorithms [3] to combine the data [1]. In multimodal systems, a decision can be made on the basis of different subsets of biometrics depending on their availability and confidence. These systems are also more robust to spoof attacks as it is relatively difficult to spoof multiple biometrics simultaneously.

Among the biometric traits, the face is not as accurate as DNA or retina, but in terms of acceptability, the face is considered to be the most promising due to its non-intrusiveness and feature-richness. Although 2D still image based face recognition systems have a history of around 30 years, the technology reached its maturity in the mid 1990s [4,5]. But the inherent problems of the 2D systems, such as variance to pose and illumination and sensitivity to the use of cosmetics, have motivated the research communities to investigate 3D image based biometric systems [6,7]. By using 3D or a combination of 2D and 3D face images, very high recognition rates can be obtained for faces with neutral expression. But in real life applications, facial expression changes are very common and the geometry of the face significantly changes [8] with them, which severely affects the recognition process. Occlusions caused by hair or ornaments are also a matter of concern.

Researchers have proposed fusing fingerprints [3], palm prints [9], hand geometry [10], gait [11], iris [12], voice [13] etc., and most recently the ear with the face images [14]. Among all of these alternatives, the ear has the advantage that it is located at the side of the face. Ear data can easily be collected (with the same sensor) along with the face image. Therefore, it can efficiently supplement face images. Besides, it has some other attractive characteristics of biometrics such as consistency (not changing with expressions, and with age between 8 years and 70 years), reduced spatial resolution and uniform distribution of color [15].

Different levels of fusion have been proposed for fusing ear data with face data [14]. Yuan et al. [16] proposed a data level fusion approach using Full-Space Linear Discriminant Analysis (FSLDA) and obtained 96.2% identification accuracy while testing on a database of 395 images from 79 subjects. Xu et al. [17] obtained 96.8% accuracy on a similar database for feature level fusion using Kernel Fisher Discriminant Analysis (KFDA). Xu and Mu [18] also used feature level fusion using Kernel Canonical Correlation Analysis (KCCA) and obtained a 98.68% recognition rate on a smaller database of 190 images from 38 subjects. However, score level fusion is most commonly used in biometrics [19,20,21] because it involves processing of less data and hence, it is a faster and easier way of recognizing people [22,23]. In this level of fusion, the scores generated by classifiers pertaining to the ear and the face modalities are combined. Woodard et al. [24] used this score level fusion technique for combining 3D face recognition with ear and finger recognition. Using the ICP algorithm, they obtained 97% rank-one recognition rate on a small database of 85 subjects.

Most of the ear-face multimodal approaches mentioned above are based on global features which are sensitive to occlusions and variations in pose, illumination and scale. In this work, we use 3D local features (L3DFs), first proposed by Mian et al. [25] for face recognition, to compute ear and face scores. L3DFs are found to be very fast to compute. Mian et al. [25] reported 23 matches per second on a 3.2 GHz Pentium IV machine with 1GB RAM. In our approach, at first we detect the face from the frontal face images and the ear from the profile images using the detection techniques described in [25] and [26] respectively. Following a normalization step, face and ear L3DFs are extracted and matched as described in [27] and [28] respectively. Matching scores from the ear and the face modalities are then fused according to a weighted sum rule. The performance of the system is evaluated on the largest ear-face dataset available (to the best of our knowledge), composed of 326 gallery images, 311 probes with neutral facial expression and 315 probes with non-neutral facial expressions. All the images are taken from the FRGC v.2 and the University of Notre Dame (UND) Biometric databases and there is only one instance per subject in both the gallery and the probe dataset. The proposed fusion approach achieves high accuracy for faces with both neutral and non-neutral expressions. The system is also fast and significantly robust to occlusions and pose variations.

The paper is organized as follows. The proposed approach for fusion of scores from the ear and face L3DFs is described in Section 2. The results obtained are reported and discussed in Section 3 and compared with other approaches in Section 4. Section 5 concludes.

2 Methodology

The main steps in our multimodal recognition system are shown in the block diagram of Fig. 1. Each of the components is described in this section.

[Figure: block diagram. Frontal face images pass through face detection, face feature (L3DF) extraction and face L3DF matching; profile face images pass through ear detection, ear feature (L3DF) extraction and ear L3DF matching; the two sets of matching scores are fused to produce the recognition result.]

Fig. 1. Block diagram of the proposed multimodal recognition system

2.1 Data Acquisition

The ear region is detected on 2D profile face images using the AdaBoost based detector described by Islam et al. [26]. This detector is chosen as it is fully automatic and also due to its speed and high accuracy of 99.89% on the UND profile face database with 942 images of 302 subjects [29]. The corresponding 3D data is then extracted from the co-registered 3D profile data as described in [30]. The extracted ear data includes some portion of the hair and face as a rectangular area around the ear is cropped. However, it is then normalized and sampled on a uniform grid of 132mm by 106mm.

The face region is detected from the 3D frontal face image based on the position of the nose tip as described in [25]. Face data is also normalized and sampled on a uniform grid of 160 by 160.
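As an illustration of this resampling step, the short Python sketch below interpolates an irregular 3D point cloud onto a uniform depth grid. The use of scipy.interpolate.griddata, the grid extents and the hole-filling strategy are our own assumptions for the sketch; they are not the implementation used in the paper.

```python
import numpy as np
from scipy.interpolate import griddata

def resample_to_grid(points, grid_shape=(160, 160)):
    """Resample an irregular 3D point cloud onto a uniform depth grid (sketch).

    points: (N, 3) array of x, y, z face (or ear) points after cropping.
    Returns a grid_shape array of interpolated depth values.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    gx = np.linspace(x.min(), x.max(), grid_shape[1])
    gy = np.linspace(y.min(), y.max(), grid_shape[0])
    mesh_x, mesh_y = np.meshgrid(gx, gy)
    # Linear interpolation of depth over the uniform grid; holes (e.g. from
    # hair or sensor dropouts) are filled with nearest-neighbour values.
    depth = griddata((x, y), z, (mesh_x, mesh_y), method='linear')
    holes = np.isnan(depth)
    if holes.any():
        depth[holes] = griddata((x, y), z, (mesh_x[holes], mesh_y[holes]),
                                method='nearest')
    return depth
```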

2.2 Feature Extraction

Local 3D features are extracted from 3D ear and 3D face data. A number of distinctive 3D feature point locations (keypoints) are selected on the 3D ear and 3D face region based on the asymmetrical variations in depth around them. This is determined by the difference between the first two eigenvalues in a PCA (centred on the keypoints) following [27]. The number and locations of the keypoints are found to be different for the ear and the face images of different individuals. It is also observed that these have a high degree of repeatability for the same individual [27,28,29].
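A minimal Python sketch of this keypoint selection criterion is given below. The neighbourhood radius, the eigenvalue-difference threshold and the function name are illustrative assumptions; the actual parameter values follow [27,28] rather than this sketch.

```python
import numpy as np

def select_keypoints(points, seed_indices, radius=10.0, threshold=2.0):
    """Select keypoints where the local surface varies asymmetrically in depth.

    points:       (N, 3) array of 3D face or ear points.
    seed_indices: candidate point indices (e.g. a coarse grid over the surface).
    radius, threshold: illustrative values, not taken from the paper.
    """
    keypoints = []
    for i in seed_indices:
        centre = points[i]
        # Crop a spherical neighbourhood around the candidate point.
        nbrs = points[np.linalg.norm(points - centre, axis=1) <= radius]
        if len(nbrs) < 10:
            continue
        # PCA of the mean-centred neighbourhood.
        cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
        eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]
        # Keep the point if the first two eigenvalues differ enough,
        # i.e. the local depth variation is asymmetric.
        if eigvals[0] - eigvals[1] > threshold:
            keypoints.append(i)
    return keypoints
```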

A spherical area of radius R is cropped around each selected keypoint and aligned on its principal axes. Then, a uniformly sampled (with a resolution of 1mm) 3D surface of a 30 × 30 lattice is approximated (using D'Errico's surface fitting code [31]) on the cropped data points. In order to avoid boundary effects, an inner lattice of 20 × 20 is cropped from the bigger surface and converted to a 400 dimensional vector to be used as the feature for the corresponding keypoint. Details can be found in [27,28,29].
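The sketch below gives a rough Python equivalent of this feature construction, with scipy.interpolate.griddata standing in for D'Errico's gridfit [31]; the radius value and the hole filling are assumptions added for the sketch.

```python
import numpy as np
from scipy.interpolate import griddata

def extract_l3df(points, keypoint, radius=15.0):
    """Build one local 3D feature (a sketch, not the authors' exact code)."""
    # Crop and mean-centre a spherical neighbourhood around the keypoint.
    nbrs = points[np.linalg.norm(points - keypoint, axis=1) <= radius]
    nbrs = nbrs - nbrs.mean(axis=0)
    # Align the neighbourhood on its principal axes.
    _, _, vt = np.linalg.svd(nbrs, full_matrices=False)
    aligned = nbrs @ vt.T
    # Fit a uniformly sampled 30x30 depth surface (1 mm spacing).
    xi = np.linspace(-14.5, 14.5, 30)
    gx, gy = np.meshgrid(xi, xi)
    z = griddata(aligned[:, :2], aligned[:, 2], (gx, gy), method='linear')
    # Crop the inner 20x20 lattice to avoid boundary effects and vectorise.
    inner = np.nan_to_num(z[5:25, 5:25])   # fill holes left by sparse data
    return inner.ravel()                    # 400-dimensional feature
```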

2.3 Matching Features

Features are matched using Euclidean distance and rotation angles between the underlying coordinate bases of the features are computed as described in [28]. These angles are then clustered. The largest cluster is used for coarse alignment of the matching probe and gallery dataset. We then use a modified version of the Iterative Closest Point (ICP) algorithm [25] for finer alignment. Since ICP is a computationally expensive algorithm, we extract a minimal rectangular area containing the matching features only from the whole face or ear data and apply ICP on that area of the point cloud only.
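The following sketch illustrates the matching and coarse-alignment step. It simplifies the rotation handling to a single scalar angle per feature (the method in [28] works with full coordinate bases), and the histogram bin width used for clustering is an arbitrary choice.

```python
import numpy as np

def match_l3dfs(probe_feats, gallery_feats, probe_angles, gallery_angles,
                bin_width=10.0):
    """Match probe and gallery L3DFs and estimate a coarse rotation (sketch).

    probe_feats, gallery_feats:   (P, 400) and (G, 400) feature matrices.
    probe_angles, gallery_angles: per-feature orientation angles in degrees
                                  (a simplification of the bases used in [28]).
    """
    distances, angle_diffs = [], []
    for feat, angle in zip(probe_feats, probe_angles):
        # Nearest gallery feature under Euclidean distance.
        d = np.linalg.norm(gallery_feats - feat, axis=1)
        j = int(np.argmin(d))
        distances.append(d[j])
        angle_diffs.append(angle - gallery_angles[j])
    angle_diffs = np.asarray(angle_diffs)
    # Cluster the rotation angles with a coarse histogram; the largest cluster
    # gives the dominant probe-to-gallery rotation used for coarse alignment
    # before the ICP-based refinement.
    n_bins = max(3, int(np.ptp(angle_diffs) / bin_width) + 1)
    hist, edges = np.histogram(angle_diffs, bins=n_bins)
    k = int(np.argmax(hist))
    in_cluster = (angle_diffs >= edges[k]) & (angle_diffs <= edges[k + 1])
    mean_distance = float(np.mean(distances))
    cluster_ratio = hist[k] / len(angle_diffs)      # the Rr score of Section 2.3
    coarse_rotation = float(angle_diffs[in_cluster].mean())
    return mean_distance, cluster_ratio, coarse_rotation
```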

While making the final matching decision for both the ear and the face modalities, we consider the following scores: (i) the mean of the distances for all the matched probe and gallery features, (ii) the ratio of the size of the largest cluster of rotation angles to the total number of matching features (Rr) and (iii) the ICP error.
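The ICP error in (iii) could be computed with a basic point-to-point ICP such as the one sketched below; this is a standard textbook variant using a k-d tree and the Kabsch alignment, not the modified ICP of [25].

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_error(probe_pts, gallery_pts, iterations=20):
    """Point-to-point ICP returning the final mean alignment error (sketch).

    probe_pts, gallery_pts: (N, 3) and (M, 3) point clouds, already coarsely
    aligned using the largest rotation-angle cluster.
    """
    tree = cKDTree(gallery_pts)
    src = probe_pts.copy()
    for _ in range(iterations):
        # Closest gallery point for every probe point.
        _, idx = tree.query(src)
        tgt = gallery_pts[idx]
        # Best rigid transform (Kabsch) between the corresponding point sets.
        src_c, tgt_c = src - src.mean(axis=0), tgt - tgt.mean(axis=0)
        u, _, vt = np.linalg.svd(src_c.T @ tgt_c)
        d = np.sign(np.linalg.det(u @ vt))
        rot = u @ np.diag([1.0, 1.0, d]) @ vt
        trans = tgt.mean(axis=0) - src.mean(axis=0) @ rot
        src = src @ rot + trans
    return float(tree.query(src)[0].mean())
```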

For both the ear and the face, we normalize the above score vectors to the range [0, 1] using the min-max rule. A weight factor is then computed for each score vector as the ratio of the difference between its minimum value and its mean to the difference between its second minimum value and its mean. The Rr score is subtracted from unity as it has a polarity opposite to that of the other scores (the higher this value, the better the result). The final score is then computed by summing the products of the scores and the corresponding weights (confidence weighted sum rule) [27].
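A minimal sketch of this normalization and confidence-weighted combination follows; the epsilon terms guarding against division by zero and the function names are our additions.

```python
import numpy as np

def confidence_weighted_score(mean_dist, rr, icp_err):
    """Combine the per-modality match scores (sketch of the weighted sum rule).

    mean_dist, rr, icp_err: score vectors over all gallery subjects for one
    probe (mean feature distance, largest-cluster ratio Rr and ICP error).
    """
    def minmax(v):
        return (v - v.min()) / (v.max() - v.min() + 1e-12)

    def confidence(v):
        # Ratio of (mean - minimum) to (mean - second minimum): a large gap
        # between the best and second-best match gives a confident score.
        s = np.sort(v)
        return (v.mean() - s[0]) / (v.mean() - s[1] + 1e-12)

    scores = [minmax(np.asarray(mean_dist)),
              1.0 - minmax(np.asarray(rr)),     # flip polarity: higher Rr is better
              minmax(np.asarray(icp_err))]
    weights = [confidence(s) for s in scores]
    # Confidence-weighted sum; the best gallery entry has the lowest value.
    return sum(w * s for w, s in zip(weights, scores))
```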

2.4 Fusion

The matching scores from the ear and the face data can be fused in different ways. Kittler et al. [32] and Jain et al. [23] empirically demonstrated that the sum rule provides better results than other score fusion rules. We have used the weighted sum, a generalization of the sum rule, to give more emphasis to face features, as it turned out that the L3DFs are more distinctive and reliable for the face data than for the ear data.
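Given per-gallery-subject score vectors from the two modalities, the weighted sum fusion reduces to a few lines. The default weights below mirror the double weight given to face scores in Section 3; the function name is ours.

```python
import numpy as np

def fuse_ear_face(face_scores, ear_scores, face_weight=2.0, ear_weight=1.0):
    """Weighted sum rule fusion of ear and face matching scores (sketch).

    face_scores, ear_scores: per-gallery-subject scores for one probe, aligned
    to the same gallery ordering and already min-max normalised to [0, 1].
    """
    fused = face_weight * np.asarray(face_scores) + ear_weight * np.asarray(ear_scores)
    return int(np.argmin(fused))   # index of the best-matching gallery subject
```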


3 Results and Discussion

The recognition performance of our proposed approach is evaluated in this section. Results of fusing L3DF scores with or without ICP scores are shown separately to demonstrate the strength of L3DFs and also the effect of applying the ICP.

3.1 Dataset

To perform our experiments on both ear and face data, we built a multimodal dataset comprising some common data from Collection F of the UND profile face database [33] and the Fall2003 and Spring2004 datasets of the FRGC v.2 frontal face database [7]. The UND database has images from 415 individuals and the FRGC v.2 database has images from 466 individuals. However, we found that only 326 images in the gallery of the UND database are available in the list of the images with neutral expression (which we included in the gallery dataset for face biometrics) in the FRGC v.2 database. Similarly, we found 311 and 315 probe face images with neutral and non-neutral expressions respectively in the FRGC v.2 database which are available in the probe images of the UND database. Thus, our multimodal dataset includes 326 gallery images, 311 probes with neutral expressions and 315 probes with non-neutral expressions. To the best of our knowledge, this is the largest ear-face database.

3.2 Identification Results with Local 3D Features Only

We obtained rank-1 identification rates of 72% for the ear and 96.8% for the face. However, score level fusion of these two modalities with equal weights results in a rate of 97.75%. As shown in Fig. 2(a), by simply giving double weight to the face scores, we obtain 98.71% accuracy in identification.
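For reference, the identification rates quoted in this section can be computed from a fused probe-by-gallery score matrix with a simple rank-n helper like the one below (an illustrative addition, not the authors' evaluation code).

```python
import numpy as np

def rank_n_identification_rate(score_matrix, probe_labels, gallery_labels, n=1):
    """Rank-n identification rate from a probe-by-gallery score matrix (sketch).

    score_matrix: (num_probes, num_gallery) fused scores, lower = better match.
    """
    gallery_labels = np.asarray(gallery_labels)
    hits = 0
    for scores, truth in zip(score_matrix, probe_labels):
        best_n = np.argsort(scores)[:n]
        if truth in gallery_labels[best_n]:
            hits += 1
    return hits / len(probe_labels)
```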

[Figure: two CMC plots, identification rate vs. rank (1 to 20), with curves for Face (L3DF), Ear (L3DF), Fusion (equal wts.) and Fusion (double wts. to face).]

Fig. 2. Identification results for fusion of ears and faces: (a) with neutral expression. (b) with non-neutral expressions.


[Figure: two ROC plots, verification rate vs. false accept rate (log scale), with curves for Face, Ear and Fusion (double wts. to face).]

Fig. 3. ROC curves for fusion of ears and faces: (a) with neutral expression. (b) with non-neutral expressions.

The plots in the figure also demonstrate that the face data is more stable for local 3D features than the ear data.

We also performed experiments with a gallery of neutral faces and probes of face images with different expressions such as smile, cry or anger. For the database mentioned above, we obtained rank-1 identification rates of 71.4%, 84.4% and 94.6% for the ear, the face and their score level fusion with equal weight respectively (see Fig. 2(b)). However, this result improved to 95.87% by simply assigning double weight to the scores of the face data.

3.3 Verification Results with Local 3D Features Only

We obtain a verification rate of 98.07% at a False Acceptance Rate (FAR) of 0.001 with the neutral faces only. The rate is 72.35% for the ear data only at the same FAR. Then, we fuse both the face and the ear scores and achieve a verification rate of 98.07%. However, the verification rate increases to 99.68% for the same FAR of 0.001 when we assign double weight to the face scores. For the probe dataset with faces with expression changes, the verification rate with the face only is 86.98%, which improves after fusion with equal weight to 94.60% and with double weight to the face scores to 96.19% (see Fig. 3).
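Similarly, the verification rate at a fixed FAR can be obtained from genuine and impostor score distributions along the lines of the sketch below; the thresholding strategy shown is a common choice rather than necessarily the authors' exact procedure.

```python
import numpy as np

def verification_rate_at_far(genuine_scores, impostor_scores, target_far=0.001):
    """Verification rate at a fixed false accept rate (illustrative sketch).

    genuine_scores:  fused scores for probe-gallery pairs of the same subject.
    impostor_scores: fused scores for pairs of different subjects.
    Lower scores mean better matches, as with the distance-based scores above.
    """
    impostors = np.sort(np.asarray(impostor_scores))
    n_accept = int(np.floor(target_far * len(impostors)))
    if n_accept == 0:
        # Threshold stricter than the best impostor score: FAR is effectively 0.
        threshold = impostors[0] - 1e-12
    else:
        threshold = impostors[n_accept - 1]
    return float(np.mean(np.asarray(genuine_scores) <= threshold))
```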

3.4 Recognition Results Including ICP Scores

Considering the ICP scores from both the ear and the face data during fusion, we obtained a slightly improved result. The rank-1 identification rates and verification rates at 0.001 FAR obtained for this approach are reported in Table 1. Since ICP is computationally expensive and the data with facial expressions are more critical, we perform experiments with ICP on the probe dataset with ears and non-neutral facial expressions only.

Fig. 4 shows some of the probes which are misclassified with face data only but are recognized correctly after fusion with ear data. The 2D images of the corresponding probe range images are also shown in the top rows for a clear view of the expressions.

Table 1. Fusion results including ICP scores

Scores considered   ICP from face   ICP and L3DF from face (1)   ICP from ear   ICP and L3DF from ear (2)   Fusion of (1) and (2)
Id. rates (%)       53.3            84.8                         92.4           87.3                        98.1
Ver. rates (%)      54.29           77.14                        93.97          86.67                       96.83

Fig. 4. Example of some correctly recognized probes

3.5 Misclassification

Only five out of 315 probes are misclassified. The range images of those face and ear probes are shown in the top and bottom rows respectively in Fig. 5. It is apparent that there are large expression changes in the face probes, and data losses due to hair plus large out-of-plane pose variations in the ear probes.

Fig. 5. Example of misclassified probes

4 Comparative Study

On a larger dataset, but with multi-instance gallery and probes from the FRGC v.2 database, Mian et al. [27] obtained 86.7% and 92.7% identification and verification rates respectively using face L3DFs involving non-neutral expressions. In this paper, we obtain better results (95.87% and 96.19% respectively) by fusing scores from ear L3DFs and face L3DFs (without considering ICP scores). A comparison of our approach with other ear-face multimodal approaches is illustrated in Table 2.

Table 2. Comparison with other approaches

Authors and Reference   Data Type and Database Size    Algorithm     Fusion Level   Id. Rate
This paper              3D images from 326 subjects    L3DFs, ICP    Score          98.1%
Woodard et al. [24]     3D images from 85 subjects     ICP and RMS   Score          97%
Xu and Mu [18]          190 images from 38 subjects    KCCA          Feature        98.7%
Xu et al. [17]          Images from 79 subjects        KFDA          Feature        96.8%
Yuan et al. [16]        395 images from 79 subjects    FSLDA         Data           96.2%
Chang et al. [34]       197 2D images                  PCA           Data           90.9%

5 Conclusion

In this paper, an expression-robust multimodal ear-face biometric recognition approach is proposed with fusion at the score level. The approach is based on local 3D features which are very fast to compute and robust to pose and scale variations and to occlusions due to hair and earrings. The recognition accuracy obtained significantly exceeds that of the individual modalities and is suitable for use in many real-time biometric applications.

Acknowledgments

This research is sponsored by ARC grants DP0664228 and DP0881813. We acknowledge the use of the FRGC v.2 and the UND Biometrics databases for ear and face detection and recognition. We would also like to thank D'Errico for the surface fitting code used in constructing local 3D features.

References

1. Bowyer, K.W., Chang, K.I., Yan, P., Flynn, P.J., Hansley, E., Sarkar, S.: Multi-modal biometrics: an overview. In: Proc. of Second Workshop on MultiModal User Authentication (2006)
2. Jain, A.K., Ross, A., Pankanti, S.: Biometrics: A Tool For Information Security. IEEE Trans. on Information Forensics and Security 1, 125–143 (2006)
3. Ushmaev, O., Novikov, S.: Biometric Fusion: Robust Approach. In: Proc. of the Second Int'l Workshop on Multimodal User Authentication, MMUA 2006 (2006)
4. Zhao, W., Chellappa, R., Rosenfeld, A., Phillips, P.: Face Recognition: A Literature Survey. ACM Computing Surveys, 399–458 (2003)
5. Bowyer, K., Chang, K., Flynn, P.: A survey of approaches and challenges in 3D and multi-modal 3D+2D face recognition. Computer Vision and Image Understanding 101, 1–15 (2006)


6. Bowyer, K., Chang, K., Flynn, P.: A survey of approaches and challenges in 3D and multi-modal 3D+2D face recognition. Computer Vision and Image Understanding 101, 1–15 (2006)
7. Phillips, P., Flynn, P., Scruggs, T., Bowyer, K., Chang, J., Hoffman, K., Marques, J., Min, J., Worek, W.: Overview of the face recognition grand challenge. In: Proc. of CVPR 2005, vol. 1, pp. 947–954 (2005)
8. Li, C., Barreto, A.: Evaluation of 3D Face Recognition in the presence of facial expressions: an Annotated Deformable Model approach. In: Proc. of ICASSP 2006, vol. 3, pp. 14–19 (2006)
9. Gao, Y., Maggs, M.: Feature-level fusion in personal identification. In: Proc. of CVPR 2005, vol. 1, pp. 468–473 (2005)
10. Ross, A., Govindarajan, R.: Feature Level Fusion Using Hand and Face Biometrics. In: Proc. of SPIE Conf. on Biometric Technology for Human Identification II, pp. 196–204 (2005)
11. Zhou, X., Bhanu, B.: Integrating Face and Gait for Human Recognition. In: Proc. of CVPR Workshop, pp. 55–55 (2006)
12. Wang, Y., Tan, T., Jain, A.K.: Combining face and iris biometrics for identity verification. In: Proc. of Int'l Conf. on Audio- and Video-based Person Authentication, pp. 805–813 (2003)
13. Brunelli, R., Falavigna, D.: Person identification using multiple cues. IEEE Trans. on PAMI 12, 955–966 (1995)
14. Islam, S., Bennamoun, M., Owens, R., Davies, R.: Biometric Approaches of 2D-3D Ear and Face: A Survey. In: Sobh, T. (ed.) Advances in Computer and Information Sciences and Engineering, pp. 509–514. Springer, Netherlands (2008)
15. Iannarelli, A.: Ear Identification. Forensic Identification Series. Paramount Publishing Company, Fremont, California (1989)
16. Yuan, L., Mu, Z., Liu, Y.: Multimodal recognition using face profile and ear. In: Proc. of the 1st Int'l Symposium on SCAA, pp. 887–891 (2006)
17. Xu, X.N., Mu, Z.C., Yuan, L.: Feature-level fusion method based on KFDA for multimodal recognition fusing ear and profile face. In: Proc. International Conference on ICWAPR, vol. 3, pp. 1306–1310 (2007)
18. Xu, X., Mu, Z.: Feature Fusion Method Based on KCCA for Ear and Profile Face Based Multimodal Recognition. In: Proc. IEEE International Conference on Automation and Logistics, pp. 620–623 (2007)
19. Zhou, X., Bhanu, B.: Integrating Face and Gait for Human Recognition. In: Proc. of CVPR Workshop, pp. 55–55 (2006)
20. Mian, A.S., Bennamoun, M., Owens, R.: 2D and 3D multimodal hybrid face recognition. In: Leonardis, A., Bischof, H., Pinz, A. (eds.) ECCV 2006. LNCS, vol. 3953, pp. 344–355. Springer, Heidelberg (2006)
21. Yan, P., Bowyer, K.W.: Multi-biometrics 2D and 3D ear recognition. In: Kanade, T., Jain, A., Ratha, N.K. (eds.) AVBPA 2005. LNCS, vol. 3546, pp. 503–512. Springer, Heidelberg (2005)
22. Xu, X., Mu, Z.: Multimodal Recognition Based on Fusion of Ear and Profile Face. In: Proc. of the 4th Int'l Conference on Image and Graphics, pp. 598–603 (2007)
23. Jain, A.K., Nandakumar, K., Ross, A.: Score normalization in multimodal biometric systems. Pattern Recognition 38, 2270–2285 (2005)
24. Woodard, D., Faltemier, T., Yan, P., Flynn, P., Bowyer, K.: A comparison of 3D biometric modalities. In: Proc. of CVPR Workshop, pp. 57–61 (2006)
25. Mian, A.S., Bennamoun, M., Owens, R.: An Efficient Multimodal 2D-3D Hybrid Approach to Automatic Face Recognition. IEEE Trans. on PAMI 29, 1927–1943 (2007)


26. Islam, S., Bennamoun, M., Davies, R.: Fast and Fully Automatic Ear Detection Using Cascaded AdaBoost. In: Proc. of IEEE Workshop on Application of Computer Vision, pp. 1–6 (2008)
27. Mian, A., Bennamoun, M., Owens, R.: Keypoint Detection and Local Feature Matching for Textured 3D Face Recognition. International Journal of Computer Vision 79, 1–12 (2008)
28. Islam, S., Davies, R., Mian, A., Bennamoun, M.: A Fast and Fully Automatic Ear Recognition Approach Based on 3D Local Surface Features. In: Blanc-Talon, J., Bourennane, S., Philips, W., Popescu, D., Scheunders, P. (eds.) ACIVS 2008. LNCS, vol. 5259, pp. 1081–1092. Springer, Heidelberg (2008)
29. Islam, S., Bennamoun, M., Davies, R., Mian, A.: Fast and Fully Automatic Ear Recognition. IEEE Trans. on PAMI (under review, 2008)
30. Islam, S., Bennamoun, M., Mian, A., Davies, R.: A Fully Automatic Approach for Human Recognition from Profile Images Using 2D and 3D Ear Data. In: Proc. of the 4th Int'l Symposium on 3DPVT, pp. 131–141 (2008)
31. D'Errico, J.: Surface fitting using gridfit. Available from MATLAB Central File Exchange
32. Kittler, J., Hatef, M., Duin, R., Matas, J.: On combining classifiers. IEEE Transactions on PAMI 20, 226–239 (1998)
33. Yan, P., Bowyer, K.W.: Biometric recognition using 3D ear shape. IEEE Trans. on PAMI 29, 1297–1308 (2007)
34. Chang, K., Bowyer, K.W., Sarkar, S., Victor, B.: Comparison and combination of ear and face images for appearance-based biometrics. IEEE Trans. on PAMI 25, 1160–1165 (2003)