

Neurocomputing 68 (2005) 208–216

0925-2312/$ - see front matter © 2005 Elsevier B.V. All rights reserved.

doi:10.1016/j.neucom.2005.05.003

*Corresponding author. E-mail address: [email protected] (L. Nanni).

www.elsevier.com/locate/neucom

Letters

An efficient fingerprint verification system using integrated Gabor filters and

Parzen Window Classifier

Dario Maio, Loris Nanni*

DEIS, IEIIT-CNR, Università di Bologna, Viale Risorgimento 2, 40136 Bologna, Italy

Received 5 April 2005; received in revised form 2 May 2005; accepted 3 May 2005

Available online 18 July 2005

Communicated by R.W. Newcomb

Abstract

This paper proposes a novel method of image-based fingerprint matching based on the features extracted by ‘‘FingerCode’’. The experiments show that our system outperforms the standard ‘‘FingerCode’’ recognition method and other image-based approaches. Combining the matching score generated by the proposed technique with that obtained from a minutiae-based matcher results in an overall improvement in the performance of the fingerprint matching.

© 2005 Elsevier B.V. All rights reserved.

Keywords: Fingerprint authentication; FingerCode; Image-based matcher

1. Introduction

Various approaches to automatic fingerprint matching have been proposed in the literature. Fingerprint matching techniques may be broadly classified as minutiae-based, correlation-based or image-based [2] (for a good survey see [9]).



Minutiae-based approaches first extract the minutiae from the fingerprint images; then, the matching between two fingerprints is performed using the two sets of minutia locations.

Image-based approaches usually extract the features directly from the raw image, since a grey-level fingerprint image is available; then, the decision is made using these features. Moreover, image-based approaches may be the only viable choice, for instance, when the image quality is too low to allow reliable minutia extraction. For dissimilarity matching in image-based approaches, many papers employ a simple Euclidean distance metric instead of more complex classifiers, because the authors give more relevance to the effectiveness of the feature extraction than to the classification technique in their experiments [6,12].

In this work, we show that a careful study of machine learning methods can drastically increase the performance, reducing the equal error rate (EER). The approach presented here combines the features extracted by ‘‘FingerCode’’ [6] with a Parzen Window Classifier (PWC) [4]. The outline of the paper is as follows: Sections 2 and 3 provide a short review of the proposed system and of the methodologies tested in this work, respectively. Section 4 presents the experimental results.

2. Fingerprint verification

The training stage proposed in this work, as depicted in Fig. 1, consists of six modules:

- Enhancement: the image is enhanced using the technique described in [3];
- Core Detection: the core is extracted by a Poincaré-based algorithm;
- Orientation Computation: we correct the orientation of the image;
- Extra-images Generation: creation of other images of the same fingerprint using morphological operators;
- Feature Extraction: the features are extracted using ‘‘FingerCode’’;
- Creation of the Template by PWC: during the learning phase a final template is created for each individual using a ‘‘one-class’’ PWC.

Fig. 1. The proposed system.

The verification stage proposed in this work consists of five modules: Enhancement, Core Detection, Orientation Computation, Feature Extraction, and Classification, in which a similarity between the input fingerprint and each template is calculated.

3. Methods

3.1. Enhancement

Since the input image may be noisy, it is first enhanced before applying the feature extraction. Enhancement improves the clarity of the ridge and furrow structure in the fingerprint image. We use the technique described in [3] to enhance the fingerprint image. In [3] the authors propose a Fourier-domain, block-wise contextual filter approach for enhancing fingerprint images. The image is filtered in the Fourier domain by a frequency- and orientation-selective filter whose parameters are based on the estimated local ridge orientation and frequency.
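A minimal Python sketch of block-wise contextual filtering in the Fourier domain is given below. It is not the exact filter of [3]: instead of explicitly estimating the local ridge orientation and frequency, it simply raises each block's magnitude spectrum to a power, which amplifies the dominant frequency and orientation of that block; the function name, block size and exponent are assumptions.

```python
import numpy as np

def enhance_fingerprint(img, block=32, k=0.45):
    # Block-wise Fourier-domain enhancement sketch: multiply each block's
    # spectrum by a power of its own magnitude, boosting the dominant local
    # ridge frequency/orientation.  Block size and exponent k are assumptions,
    # not the parameters of the filter proposed in [3].
    img = img.astype(float)
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            b = img[y:y + block, x:x + block]
            F = np.fft.fft2(b)
            out[y:y + block, x:x + block] = np.real(np.fft.ifft2(F * np.abs(F) ** k))
    # rescale to the 0-255 range for display
    return (255 * (out - out.min()) / (np.ptp(out) + 1e-9)).astype(np.uint8)
```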

3.2. Orientation computation

The binarized shape of the finger can be approximated by an ellipse. The parameters of the best-fitting ellipse for a given binary shape are computed using image moments [1]. The orientation of the binarized image is approximated by the major axis of the ellipse, and the required angle of rotation is the difference between the normal direction and the orientation of the image.
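The following Python sketch shows the moment-based computation: the orientation of the best-fitting ellipse is obtained from the second-order central moments of the binary shape. The function name, and the assumption that the reference ("normal") direction is the vertical axis, are ours.

```python
import numpy as np

def shape_orientation(binary):
    # Orientation (radians, w.r.t. the x-axis) of the major axis of the
    # ellipse that best fits a 0/1 foreground mask, from image moments [1].
    ys, xs = np.nonzero(binary)
    xc, yc = xs.mean(), ys.mean()                      # centroid
    mu20 = ((xs - xc) ** 2).mean()
    mu02 = ((ys - yc) ** 2).mean()
    mu11 = ((xs - xc) * (ys - yc)).mean()
    return 0.5 * np.arctan2(2 * mu11, mu20 - mu02)     # major-axis angle

# Rotation needed to bring the finger upright, assuming the reference
# ("normal") direction is the vertical axis:
# correction = np.pi / 2 - shape_orientation(mask)
```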

3.3. Extra-images generation

Morphological operators [5] are applied here to the fingerprint to create extra-images. In particular, the erode and dilate operators are adopted. Moreover, the original image is rotated by angles of 22.5° and -22.5°, and further images are generated.
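A possible realisation with SciPy is sketched below; the 3x3 structuring element and the interpolation settings are assumptions, while the erode/dilate operators and the ±22.5° rotations follow the text.

```python
import numpy as np
from scipy import ndimage

def extra_images(img, footprint=np.ones((3, 3)), angle=22.5):
    # Extra training images for one fingerprint: grey-scale erosion and
    # dilation [5] plus rotations by +angle and -angle degrees.
    return [
        ndimage.grey_erosion(img, footprint=footprint),
        ndimage.grey_dilation(img, footprint=footprint),
        ndimage.rotate(img, angle, reshape=False, mode="nearest"),
        ndimage.rotate(img, -angle, reshape=False, mode="nearest"),
    ]
```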

3.4. Feature extraction (FE)

The four main steps in the FingerCode [6] feature extraction algorithm are (a rough sketch of this pipeline follows the list):

(1) determine a reference point and region of interest for the fingerprint image;
(2) tessellate the region of interest around the reference point;
(3) filter the region of interest in eight different directions using a bank of Gabor filters (eight directions are required to completely capture the local ridge characteristics in a fingerprint, while only four directions are required to capture the global configuration);
(4) compute the average absolute deviation from the mean of grey values in individual sectors of the filtered images to define the feature vector, or FingerCode.
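The sketch below illustrates steps (2)-(4) in Python under simplifying assumptions (even-symmetric Gabor kernels, fixed band radii, 16 arcs per band, core point given). It is meant only to show how the Gabor filter bank and the sector-wise average absolute deviation interact, not to reproduce the exact FingerCode parameters of [6].

```python
import numpy as np
from scipy import ndimage

def gabor_kernel(theta, freq=0.1, sigma=4.0, size=33):
    # Even-symmetric Gabor kernel at orientation theta
    # (frequency, sigma and size are assumptions).
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * freq * xr)

def fingercode_features(img, core, radii=(40, 60, 80, 100), n_arcs=16, n_orient=8):
    # FingerCode-style features: filter the region of interest with a bank of
    # Gabor filters (8 orientations) and take the average absolute deviation
    # (AAD) of the grey values in each circular sector around the core point.
    cy, cx = core
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    r = np.hypot(yy - cy, xx - cx)
    a = np.mod(np.arctan2(yy - cy, xx - cx), 2 * np.pi)
    feats = []
    for k in range(n_orient):
        filtered = ndimage.convolve(img.astype(float), gabor_kernel(k * np.pi / n_orient))
        for r0, r1 in zip((0,) + tuple(radii[:-1]), radii):
            for j in range(n_arcs):
                sector = ((r >= r0) & (r < r1) &
                          (a >= 2 * np.pi * j / n_arcs) & (a < 2 * np.pi * (j + 1) / n_arcs))
                vals = filtered[sector]
                feats.append(np.abs(vals - vals.mean()).mean() if vals.size else 0.0)
    return np.asarray(feats)
```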

3.5. Fingerparzen (FP)

In this method, the features extracted from the region of interest around the reference point are used for creating a template of the individual by PWC.

Given a training set of an individual c, composed of a set D_c of feature vectors (mean of grey values in individual sectors in filtered images) of its fingerprints (the original images and the extra-images generated in the extra-images generation step, Section 3.3), a non-parametric estimation of its probability density function (pdf) is obtained by using the Parzen window [4] (Chapters 4.1-4.3). The Parzen window density estimate can be used to approximate the probability density p(x) of a vector of continuous random variables X. It involves the superposition of a normalized window function centered on a set of samples. Given a set of n d-dimensional samples D = {x_1, x_2, ..., x_n}, the pdf estimate by the Parzen window is given by

\hat{p}(x) = \frac{1}{n} \sum_{i=1}^{n} \varphi(x - x_i; h),    (1)

where φ(·;·) is the window function and h is the window width parameter (h = 25 in our tests). Parzen showed that \hat{p}(x) converges to the true density if φ(·;·) and h are selected properly.

We use Gaussian kernels with covariance matrix S = I (the identity) as the window function (det(·) denotes the determinant of a matrix):

\varphi(y; h) = \frac{1}{(2\pi)^{d/2} h^{d} \det(S)^{1/2}} \exp\left( -\frac{y^{T} S^{-1} y}{2h^{2}} \right).    (2)

We get the estimate of the conditional pdf p(s|c) of each class c using the Parzen window method as

\hat{p}(s \mid c) = \frac{1}{\#D_c} \sum_{s_i \in D_c} \varphi(s - s_i; h),    (3)

where #D_c denotes the cardinality of the set D_c. The verification procedure of the PWC is performed by evaluating Eq. (3). The scores \hat{p}(s \mid c) are normalized to sum to one as in [8,13].

This classifier is implemented as in PrTools 3.1.7 [13].
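As a concrete illustration, the following Python sketch evaluates Eq. (3) for one claimed identity. The kernel of Eq. (2) with S = I and h = 25 follows the paper; the function names, the NumPy implementation and the normalisation helper are our own choices (the paper uses the PrTools implementation [13]).

```python
import numpy as np

def parzen_score(s, D_c, h=25.0):
    # Parzen-window estimate of p(s|c), Eq. (3): average of Gaussian kernels
    # (Eq. (2) with identity covariance and width h) centred on the training
    # feature vectors D_c of individual c.
    D_c = np.atleast_2d(D_c)
    d = D_c.shape[1]
    norm = (2 * np.pi) ** (d / 2) * h ** d            # det(S) = 1 for S = I
    sq_dist = np.sum((D_c - s) ** 2, axis=1)
    return np.mean(np.exp(-sq_dist / (2 * h ** 2)) / norm)

def normalised_scores(s, templates, h=25.0):
    # Scores for all enrolled individuals, normalised to sum to one
    # (as done in [8,13]); `templates` maps identity -> array of vectors.
    raw = {c: parzen_score(s, D_c, h) for c, D_c in templates.items()}
    total = sum(raw.values()) or 1.0
    return {c: v / total for c, v in raw.items()}
```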

3.6. Fingerparzen 1 quadrant (F1Q), fingerparzen 2 quadrants (F2Q) and fingerparzen 3 quadrants (F3Q)

The image-based approaches suffer from two types of distortion: (1) noise, caused by the capturing device or, e.g., by dirty fingers, and (2) non-linear distortions, which cause various sub-regions of the sensed image to be distorted differently due to the non-uniform pressure applied.

In these methods (F1Q, F2Q and F3Q), the region of interest around the reference point is partitioned into four smaller sub-images (the different quadrants), as depicted in Fig. 2. Consequently, variations in the image will only affect individual sub-images, and thus the local information may be better represented.

Fig. 2. The region of interest is partitioned into four smaller sub-images.

In F1Q, the features extracted from each sub-image are used for creating a different template of the individual by means of PWC.

In F2Q, the features extracted from each combination of two sub-images are used for creating a different template of the individual by means of PWC.

In F3Q, the features extracted from each combination of three sub-images are used for creating a different template of the individual by means of PWC.

For example, in F3Q we have four combinations: 1st, 2nd and 3rd quadrants; 1st, 2nd and 4th quadrants; 1st, 3rd and 4th quadrants; 2nd, 3rd and 4th quadrants. The template created using the first combination is obtained by training a PWC with the features extracted from the related quadrants (1st, 2nd, 3rd) of the original images and of the extra-images generated in the extra-images generation step (Section 3.3).

Please note that the features used in F1Q, F2Q, and F3Q represent the average absolute deviations from the mean of grey values in individual sectors of the filtered images, as in FP or in FingerCode. We simply use the ‘‘max rule’’ [7] to combine the scores from the different templates of each individual.
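A sketch of the F3Q template construction and the max-rule combination is given below, reusing the parzen_score function from Section 3.5. The representation of an individual's training data as one feature array per quadrant, and the concatenation of quadrant features into one vector per combination, are assumptions about how the per-quadrant PWC templates are built.

```python
import numpy as np
from itertools import combinations

def build_templates(quadrant_feats, k=3):
    # quadrant_feats: list of 4 arrays, one per quadrant, each of shape
    # (n_training_images, d_q).  One template per combination of k quadrants
    # (k = 1, 2, 3 gives F1Q, F2Q, F3Q respectively).
    return {combo: np.hstack([quadrant_feats[q] for q in combo])
            for combo in combinations(range(4), k)}

def match_max_rule(query_quadrants, templates, score_fn):
    # "Max rule" [7]: the final score is the maximum of the scores obtained
    # against the individual's different (per-combination) templates.
    return max(score_fn(np.hstack([query_quadrants[q] for q in combo]), D_c)
               for combo, D_c in templates.items())
```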

3.7. Multiclassifiers

We simply use the ‘‘mean rule’’ and Dempster-Shafer [7] to combine the scores from FingerParzen and a commercial minutiae matcher [14]. Whenever training was needed for the fusion schemes (i.e., Dempster-Shafer), 2-fold cross-validation was used.
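A minimal sketch of the mean-rule fusion is shown below; the min-max normalisation used to bring the two matchers' scores onto a common scale is an assumption (the paper does not specify it), and the trained Dempster-Shafer combination used for Fus2 is not sketched.

```python
import numpy as np

def minmax_normalise(scores):
    # Map a matcher's raw scores to [0, 1] so the two matchers are comparable
    # (this normalisation choice is an assumption).
    scores = np.asarray(scores, dtype=float)
    return (scores - scores.min()) / (scores.max() - scores.min() + 1e-12)

def fuse_mean(parzen_scores, minutiae_scores):
    # "Mean rule" [7]: average of the two normalised matcher scores.
    return 0.5 * (minmax_normalise(parzen_scores) + minmax_normalise(minutiae_scores))
```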

4. Experiments and discussion

In this paper, the algorithm is evaluated on images taken from FVC2002, which is available on the DVD included in [9]. FVC2002 provided four different fingerprint databases: Db1, Db2, Db3 and Db4. Three of these databases were acquired by various sensors (low-cost and high quality, optical and capacitive). Ninety volunteers were randomly partitioned into three groups (30 persons each); each group was associated with a database and therefore with a different fingerprint scanner. The fourth database contains synthetically generated images. In our verification stage, the comparison of two fingerprints must be based on the same core point. However, the comparison can only be done if both fingerprint images contain their respective core points, but two out of the eight impressions for each finger in FVC2002 [10] have an exaggerated displacement. In our experiments, as in [12], these two impressions were excluded; hence, there are only six impressions per finger, yielding 600 fingerprint images in total for each database.

We recall that the false acceptance rate (FAR) is the frequency of fraudulent accesses, and the false rejection rate (FRR) is the frequency of rejections of people who should be correctly verified. For the performance evaluation we adopt the EER [10], i.e., the error rate when FAR and FRR assume the same value; it can be adopted as a unique measure for characterizing the security level of a biometric system.
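For completeness, a small Python routine that estimates the EER from sets of genuine and impostor match scores is sketched below; it assumes that higher scores indicate better matches and simply picks the threshold where FAR and FRR are closest.

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    # EER (in %): error rate at the threshold where the false acceptance rate
    # (impostor scores accepted) equals the false rejection rate (genuine
    # scores rejected).
    genuine, impostor = np.asarray(genuine), np.asarray(impostor)
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])
    frr = np.array([(genuine < t).mean() for t in thresholds])
    i = np.argmin(np.abs(far - frr))          # crossing point of the two rates
    return 100 * (far[i] + frr[i]) / 2
```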

The experiments were conducted by using 1-2 training images per person to form a template, while the other impressions are treated as testing images. The value 1 or 2 in the first column of Tables 1-7 is the number of images used, for each person, in the training.

Table 1
EER (%) obtained using the FingerCode matcher

Training images    Db1    Db2    Db3    Db4
1                  12.5   11.7   29     18
2                  9.2    8.7    21     12.4

Table 2
EER (%) obtained by FP

Training images    Db1    Db2    Db3    Db4
1                  7      6.3    27     16
2                  4.5    3.4    25     11

Table 3
EER (%) obtained by F1Q

Training images    Db1    Db2    Db3    Db4
1                  5.5    5.1    31     9.8
2                  3.8    3.4    27     5.8

Table 4
EER (%) obtained by F2Q

Training images    Db1    Db2    Db3    Db4
1                  5      3.99   29     10.1
2                  3.5    3.3    26     7.1

Table 5
EER (%) obtained by F3Q

Training images    Db1    Db2    Db3    Db4
1                  4.5    4.2    27     14
2                  3.1    2.7    23     8

Table 6
EER (%) obtained by [12]

Training images    Db1    Db2            Db3            Db4
1                  5.65   5.3            8.33           Not reported
2                  3.97   Not reported   Not reported   Not reported

Table 7
EER (%) obtained on Db2 by combining F3Q and BioM

Training images    Fus1   Fus2   BioM
1                  0.79   0.81   1.6
2                  0.75   0.61   1

In Tables 1-6, we show that our systems obtain improvements in comparison with the state-of-the-art image-based fingerprint matchers [6,12]. Please note:

(1) To make the comparison with FingerCode fair, the input image of FingerCode is first enhanced using [3];
(2) The test protocol used in [12] is easier than the test protocol used in this work: in [12] only one impression per individual is treated as a testing image;
(3) In Db3 the performance of our systems is similar to the performance of ‘‘FingerCode’’. In this database the binarized shape of the finger can be approximated by a circle (and not an ellipse), hence our ‘‘Orientation Computation’’ is not very reliable.

These tests are reported in Table 7:

BioM: commercial minutiae matcher;
Fus1: fusion between F3Q and BioM by the mean rule;
Fus2: fusion between F3Q and BioM by Dempster-Shafer.

Please note that we have used only Db2 because the matcher BioM works only with the images of that database.

5. Conclusions

An integrated framework that combines a FingerCode-based feature extractor and PWC is presented in this paper. The creation of the template of an individual (training stage) takes 3 s on a Pentium IV 2400 MHz processor (Matlab code), while the corresponding processing of an individual in the testing stage takes 0.4 s on the same computer. The training stage of PWC takes 0.1 s for a dataset of 100 individuals, while the testing stage of PWC takes 0.05 s per individual. Please note that the computation time of the training stage of PWC is linear in the number of individuals. Experimental results obtained on a large fingerprint database (FVC2002) show that our approach leads to a substantial improvement in the overall performance. It must be mentioned that the performance of the proposed technique is inferior to that of a minutiae-based matcher [9] (Section 4.3). However, when used alongside a minutiae matcher, an improvement in matching performance is observed. The main drawback of our method is that detecting the core point is not an easy problem: in some images the core point may not even be present, or may lie close to the boundary of the image. To circumvent this problem we plan to use the extracted features themselves to align the fingerprint (as in [11]).

Acknowledgements

This work has been supported by the Italian PRIN prot. 2004098034 and by the European Commission IST-2002-507634 Biosecure NoE projects.

References

[1] M. Baskan, M. Balut, V. Atalay, Projection-based method for segmentation of human face and its evaluation, Pattern Recognition Lett. 23 (14) (2002) 1623-1629.
[2] A.M. Bazen, G.T.B. Verwaaijen, S.H. Gerez, L.P.J. Veelenturf, B. Zwaag, Correlation-based fingerprint verification system, Proceedings of the Program for Research on Integrated Systems and Circuits, Veldhoven, The Netherlands, 29 November-1 December 2000, pp. 205-213.
[3] S. Chikkerur, C. Wu, V. Govindaraju, A systematic approach for feature extraction in fingerprint images, International Conference on Bioinformatics and its Applications, Fort Lauderdale, Florida, USA, 16-19 December 2004, pp. 344-350.
[4] R. Duda, P. Hart, D. Stork, Pattern Classification, Wiley, New York, 2000.
[5] R.C. Gonzalez, R.E. Woods, Digital Image Processing, Prentice-Hall, Englewood Cliffs, NJ, 2002.
[6] A.K. Jain, S. Prabhakar, L. Hong, S. Pankanti, Filterbank-based fingerprint matching, IEEE Trans. Image Process. 9 (5) (2000) 846-859.
[7] L.I. Kuncheva, J.C. Bezdek, R.P.W. Duin, Decision templates for multiple classifier fusion: an experimental comparison, Pattern Recognition 34 (2001) 299-314.
[8] S. Macskassy, H. Hirsch, F. Provost, V. Dhar, R. Sankaranarayanan, Information triage using prospective criteria, Workshop on Machine Learning, Information Retrieval and User Modeling, Sonthofen, Germany, 13-17 (2001), pp. 1-10.
[9] D. Maio, D. Maltoni, A.K. Jain, S. Prabhakar, Handbook of Fingerprint Recognition, Springer, New York, 2003.
[10] D. Maio, D. Maltoni, R. Cappelli, J.L. Wayman, A.K. Jain, FVC2002: second fingerprint verification competition, 16th International Conference on Pattern Recognition, Quebec City, QC, Canada, 11-15 August 2002, vol. 3, pp. 811-814.
[11] A. Ross, J. Reisman, A.K. Jain, Fingerprint matching using feature space correlation, European Conference on Computer Vision, Copenhagen, Denmark, 27 May-2 June 2002, pp. 48-57.
[12] A. Teoh, D. Ngo, O.T. Song, An efficient fingerprint verification system using integrated wavelet and Fourier-Mellin invariant transform, Image Vision Comput. J. 22 (6) (2004) 503-513.
[13] ftp://ftp.ph.tn.tudelft.nl/pub/bob/prtools/prtools3.1.7/
[14] www.biometrika.it

Loris Nanni is a Ph.D. candidate in Computer Engineering at the University of Bologna, Italy. He received his Master's Degree cum laude in 2002 from the University of Bologna. In 2002, he started his Ph.D. in Computer Engineering at DEIS, University of Bologna. His research interests include pattern recognition and biometric systems (fingerprint classification and recognition, signature verification, face recognition).

Dario Maio is a Full Professor at the University of Bologna. He is Chair of the Cesena Campus and Director of the Biometric Systems Laboratory (Cesena, Italy). He has published over 150 papers in numerous fields, including distributed computer systems, computer performance evaluation, database design, information systems, neural networks, autonomous agents and biometric systems. He is author of the book ‘‘Biometric Systems, Technology, Design and Performance Evaluation’’, Springer, London, 2005, and of the ‘‘Handbook of Fingerprint Recognition’’, Springer, New York, 2003 (which received the PSP award from the Association of American Publishers). Before joining the University of Bologna, he received a fellowship from the C.N.R. (Italian National Research Council) for working on the Air Traffic Control Project. He received a degree in Electronic Engineering from the University of Bologna in 1975. He is an IEEE member. He is with DEIS and IEIIT-C.N.R.; he teaches databases and information systems.