384 IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART C: APPLICATIONS AND REVIEWS, VOL. 40, NO. 4, JULY 2010

A Frequency-based Approach for Features Fusion in Fingerprint and Iris Multimodal Biometric

Identification Systems

Vincenzo Conti, Carmelo Militello, Filippo Sorbello, Member, IEEE, and Salvatore Vitabile, Member, IEEE

Abstract—The basic aim of a biometric identification system is to discriminate automatically between subjects in a reliable and dependable way, according to a specific target application. Multimodal biometric identification systems aim to fuse two or more physical or behavioral traits to provide optimal False Acceptance Rate (FAR) and False Rejection Rate (FRR), thus improving system accuracy and dependability. In this paper, an innovative multimodal biometric identification system based on iris and fingerprint traits is proposed. The paper is a state-of-the-art advancement of multibiometrics, offering an innovative perspective on features fusion. In greater detail, a frequency-based approach results in a homogeneous biometric vector, integrating iris and fingerprint data. Successively, a Hamming-distance-based matching algorithm deals with the unified homogeneous biometric vector. The proposed multimodal system achieves interesting results with several commonly used databases. For example, we have obtained an interesting working point with FAR = 0% and FRR = 5.71% using the entire fingerprint verification competition (FVC) 2002 DB2B database and a randomly extracted same-size subset of the BATH database. At the same time, considering the BATH database and the FVC2002 DB2A database, we have obtained a further interesting working point with FAR = 0% and FRR = 7.28% ÷ 9.7%.

Index Terms—Fusion techniques, identification systems, iris and fingerprint biometry, multimodal biometric systems.

I. INTRODUCTION

IN THE current technological scenario, where Information and Communication Technologies (ICT) provide advanced services, large-scale and heterogeneous computer systems need strong procedures to protect data and resource access from unauthorized users. Authentication procedures based on the simple username–password approach are insufficient to provide a suitable security level for applications requiring a high degree of protection for data and services.

Biometric-based authentication systems represent a valid alternative to conventional approaches. Traditionally, biometric

Manuscript received May 29, 2009; revised November 20, 2009; accepted February 7, 2010. Date of publication April 22, 2010; date of current version June 16, 2010. This paper was recommended by Associate Editor E. R. Weippl.

V. Conti, C. Militello, and F. Sorbello are with the Department of Computer Engineering, University of Palermo, Palermo 90128, Italy (e-mail: [email protected]; [email protected]; [email protected]).

S. Vitabile is with the Department of Biopathology, Medical and Forensic Biotechnologies, University of Palermo, Palermo 90127, Italy (e-mail: [email protected]).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TSMCC.2010.2045374

systems, operating on a single biometric feature, have many limitations, which are as follows [1].

1) Trouble with data sensors: Captured sensor data are often affected by noise due to environmental conditions (insufficient light, powder, etc.) or to the user's physiological and physical conditions (cold, cut fingers, etc.).

2) Distinctiveness ability: Not all biometric features have the same distinctiveness degree (for example, hand-geometry-based biometric systems are less selective than fingerprint-based ones).

3) Lack of universality: Although biometric features are expected to be universal, due to the wide variety and complexity of the human body, not everyone is endowed with the same physical features, and some subjects may lack a biometric feature that a system requires.

Multimodal biometric systems are a recent approach developed to overcome these problems. These systems demonstrate significant improvements over unimodal biometric systems in terms of higher accuracy and higher resistance to spoofing.

There is a sizeable amount of literature detailing the different approaches that have been proposed for multimodal biometric systems [1]–[4]. Multibiometric data can be combined at different levels: fusion at the data-sensor level, fusion at the feature-extraction level, fusion at the matching level, and fusion at the decision level. As pointed out in [5], feature-level fusion is easier to apply when the original characteristics are homogeneous because, in this way, a single resultant feature vector can be calculated. On the other hand, feature-level fusion is difficult to achieve because: 1) the relationship between the feature spaces may not be known; 2) the feature sets of multiple modalities may be incompatible; and 3) the computational cost to process the resultant vector may be too high.

In this paper, a template-level fusion algorithm resulting in a unified biometric descriptor integrating fingerprint and iris features is presented. Considering a limited number of meaningful descriptors for fingerprint and iris images, a frequency-based codifying approach results in a homogeneous vector composed of fingerprint and iris information. Successively, the Hamming Distance (HD) between two vectors is used to obtain their similarity degree. To evaluate and compare the effectiveness of the proposed approach, different tests on the official fingerprint verification competition (FVC) 2002 DB2 fingerprint database [30] and the University of Bath Iris Image Database (BATH) [31] have been performed. In greater detail, the tests conducted on the FVC2002 DB2B database and a subset of the BATH database (ten users) have resulted in a False Acceptance

1094-6977/$26.00 © 2010 IEEE

CONTI et al.: FREQUENCY-BASED APPROACH FOR FEATURES FUSION 385

Rate (FAR) = 0% and a False Rejection Rate (FRR) = 5.71%, while the tests conducted on the FVC2002 DB2A database and the BATH database (50 users) have resulted in an FAR = 0% and an FRR = 7.28% ÷ 9.7%.

The paper is organized as follows. Section II presents the main related works. Section III illustrates the main techniques for multimodal biometric authentication systems. Section IV describes the proposed multimodal authentication system. Section V shows the achieved experimental results. Section VI deals with the comparison with state-of-the-art solutions. Finally, some conclusions and future works are reported in Section VII.

II. RELATED WORKS

A variety of articles proposing different approaches for unimodal and multimodal biometric systems can be found. Traditionally, unimodal biometric systems have many limitations. Multimodal biometric systems are based on different biometric features and/or introduce different fusion algorithms for these features. Many researchers have demonstrated that the fusion process is effective, because fused scores provide much better discrimination than individual scores. Such results have been achieved using a variety of fusion techniques (see Section III for further details). In what follows, the most meaningful works in the aforementioned fields are described.

A unimodal fingerprint verification and classification system is presented in [7]. The system is based on a feedback path for the feature-extraction stage, followed by a feature-refinement stage to improve the matching performance. This improvement is illustrated in the context of a minutiae-based fingerprint verification system. A Gabor filter is applied to the input image to improve its quality.

Ratha et al. [9] proposed a unimodal distortion-tolerant fingerprint authentication technique based on graph representation. Using the fingerprint minutiae features, a weighted graph of minutiae is constructed for both the query fingerprint and the reference fingerprint. The proposed algorithm has been tested with excellent results on a large private database with the use of an optical biometric sensor.

Concerning iris recognition systems, in [10] a Gabor filter and a 2-D wavelet filter are used for feature extraction. This method is invariant to translation and rotation and is tolerant to illumination. On the Institute of Automation of the Chinese Academy of Sciences (CASIA) database, the classification rate using the Gabor filter is 98.3%, while the accuracy with the wavelet filter is 82.51%.

In the approach proposed in [11], multichannel Gabor filters have been used to capture local texture information of the iris, which is used to construct a fixed-length feature vector. The results obtained were FAR = 0.01% and FRR = 2.17% on the CASIA database.

Generally, unimodal biometric recognition systems present different drawbacks due to their dependency on a single biometric feature. For example, limited feature distinctiveness, feature-acquisition and processing errors, and features that are temporarily unavailable can all affect system accuracy. A multimodal biometric system should overcome the aforementioned limits by integrating two or more biometric features.

Conti et al. [12] proposed a multimodal biometric system using two different fingerprint acquisitions. The matching module integrates fuzzy-logic methods for matching-score fusion. Experimental trials using both decision-level fusion and matching-score-level fusion were performed. Experimental results have shown an improvement of 6.7% using matching-score-level fusion rather than a monomodal authentication system.

Yang and Ma [2] used fingerprint, palm print, and hand geometry to implement personal identity verification. Unlike other multimodal biometric systems, these three biometric features can be taken from the same image. They implemented matching-score fusion at different levels to establish identity, performing a first fusion of the fingerprint and palm-print features and, successively, a matching-score fusion between the multimodal system and the palm-geometry unimodal system. The system was tested on a self-constructed database containing the features of 98 subjects.

Besbes et al. [13] proposed a multimodal biometric system using fingerprint and iris features. They use a hybrid approach based on: 1) fingerprint minutiae extraction and 2) iris template encoding through a mathematical representation of the extracted iris region. This approach is based on two recognition modalities, and every part provides its own decision. The final decision is taken by combining the unimodal decisions through an "AND" operator. No experimental results have been reported for recognition performance.

Aguilar et al. [14] proposed a multibiometric method using a combination of fast Fourier transform (FFT) and Gabor filters to enhance fingerprint imaging. Successively, a novel recognition stage using local features and statistical parameters is applied. The proposed system uses the fingerprints of both thumbs. Each fingerprint is separately processed; successively, the unimodal results are compared in order to give the final fused result. The tests have been performed on a fingerprint database composed of 50 subjects, obtaining FAR = 0.2% and FRR = 1.4%.

Subbarayudu and Prasad [15] presented experimental results for a unimodal iris system, a unimodal palmprint system, and a multibiometric system (iris and palmprint). The fusion utilizes matching scores, in which each system provides a matching score indicating the similarity of the feature vector to the template vector. The experiment was conducted on the Hong Kong Polytechnic University Palmprint database, in which a total of 600 images were collected from 100 different subjects.

In contrast to the approaches found in the literature and detailed earlier, the proposed approach introduces an innovative idea to unify and homogenize the final biometric descriptor using two different strong features: the fingerprint and the iris. Unlike [2], this paper shows the improvements introduced by adopting the fusion process at the template level, as well as the related comparisons against the unimodal elements and the classical matching-score fusion-based multimodal system. In addition, the system proposed in this paper has been tested on the official fingerprint FVC2002 DB2 and iris BATH databases [30], [31].


III. MULTIMODAL BIOMETRIC AUTHENTICATION SYSTEMS

Fusion strategies can be divided into two main categories: premapping fusion (before the matching phase) and postmapping fusion (after the matching phase). The first strategy deals with the feature-vector fusion level [17]. Usually, these techniques are not used because they result in many implementation problems [1]. The second strategy is realized through fusion at the decision level, based on algorithms that combine the single decisions of each component of the system. Furthermore, the second strategy can also be based on the matching-score level, which combines the matching scores of each component system.

The biometric data can be combined at several different levels of the identification process. Input can be fused at the following levels [1], [5].

1) Data-sensor level: Data coming from different sensors can be combined, so that the resulting information is, in some sense, better than would be possible if these sources were used individually. The term better in this case can mean more accurate, more complete, or more dependable.

2) Feature-extraction level: The information extracted from sensors of different modalities is stored in vectors on the basis of their modality. These feature vectors are then combined to create a joint feature vector, which is the basis for the matching and recognition process. One of the potential problems in this strategy is that, in some cases, a very high-dimensional feature vector results from the fusion process. In addition, it is hard to generate homogeneous feature vectors from different biometrics in order to use a unified matching algorithm.

3) Matching-score level: This is based on the combination of matching scores, after separate feature extraction and comparison between stored data and test data have been completed for each subsystem. Starting from the matching scores or distance measures of each subsystem, an overall matching score is generated using linear or nonlinear weighting.

4) Decision level: With this approach, each biometric subsystem autonomously completes the processes of feature extraction, matching, and recognition. Decision strategies are usually based on Boolean functions, where the recognition yields the majority decision among all present subsystems.
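The two postmapping strategies above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the weight values, score ranges, and subsystem scores below are invented for the example.

```python
def minmax_normalize(score, lo, hi):
    """Map a raw matching score into [0, 1] given its observed range."""
    return (score - lo) / (hi - lo)

def fuse_scores(scores, weights):
    """Matching-score-level fusion: linear weighted sum of normalized scores."""
    return sum(w * s for s, w in zip(scores, weights))

def fuse_decisions(decisions):
    """Decision-level fusion: majority vote over Boolean subsystem decisions."""
    return sum(decisions) > len(decisions) / 2

# Hypothetical subsystem outputs: fingerprint score 0.82, iris score 0.67,
# with the fingerprint weighted slightly higher than the iris.
fused_score = fuse_scores([0.82, 0.67], [0.6, 0.4])
```

Nonlinear weighting would replace the plain sum with, e.g., a trained classifier over the score vector; the linear form is shown only because it is the simplest instance of the level-3 strategy described above.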

Fusion at the template level is very difficult to obtain, since biometric features have different structures and distinctiveness. In this paper, we introduce a frequency approach based on the Log-Gabor filter [18] to generate a unified homogeneous template for fingerprint and iris features. In greater detail, the proposed approach performs fingerprint matching using the segmented regions (Regions Of Interest, ROIs) surrounding fingerprint singularity points. On the other hand, iris preprocessing aims to detect the circular region surrounding the iris. As a result, we adopted a Log-Gabor-algorithm-based codifier to encode both fingerprint and iris features, obtaining a unified template. Successively, the HD on the fused template has been used for the similarity index computation.
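The similarity computation on the fused template can be sketched as a normalized HD between two equal-length binary strings; the toy 8-bit templates below are illustrative only, far shorter than a real fused template.

```python
def hamming_distance(a, b):
    """Normalized Hamming distance between two equal-length bit strings:
    the fraction of positions at which the bits differ (0.0 = identical)."""
    if len(a) != len(b):
        raise ValueError("templates must have equal length")
    return sum(x != y for x, y in zip(a, b)) / len(a)

# Two toy fused templates differing in two of eight positions.
hd = hamming_distance("10110010", "10010011")
```

A candidate is accepted when the HD falls below a decision threshold; sweeping that threshold is what trades FAR against FRR.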

Fig. 1. General schema of the proposed multimodal system.

IV. PROPOSED MULTIMODAL BIOMETRIC SYSTEM

In this paper, a multimodal biometric system based on fingerprint and iris characteristics is proposed. As shown in Fig. 1, the proposed multimodal biometric system is composed of two main stages: the preprocessing stage and the matching stage. Iris and fingerprint images are preprocessed to extract the ROIs, based on singularity regions surrounding some meaningful points. In contrast to the classic minutiae-based approach, the fingerprint-singularity-regions-based approach requires a low execution time, since image analysis is based on a few points (core and delta) rather than 30–50 minutiae. Iris image preprocessing is performed by segmenting the iris region from the eye and deleting the eyelids and eyelashes. The extracted ROIs are used as input for the matching stage. They are normalized and then processed through a frequency-based approach in order to generate a homogeneous template. A matching algorithm based on the HD is then used to find the similarity degree. In what follows, each phase is briefly described.

A. Preprocessing Stage

An ROI is a selected part of a sample or an image used to perform particular tasks. In what follows, the fingerprint singularity region extraction process and the iris region extraction process are described.

1) Fingerprint Singularity Region Extraction: Singularity regions are particular fingerprint zones surrounding singularity points, namely the "core" and the "delta." Several approaches for singularity-point detection have been proposed in the literature. They can be broadly categorized into techniques based on: 1) the Poincare index; 2) heuristics; 3) irregularity or curvature operators; and 4) template matching.

By far, the most popular method has been proposed by Kawagoe and Tojo [19]. The method is based on the Poincare


index, since it assumes that the core, double core, and delta generate a Poincare index equal to 180◦, 360◦, and −180◦, respectively. Fingerprint singularity region extraction is composed of three main blocks: directional image extraction, Poincare index calculation, and singularity-point detection.

Singularity points are not included in fingerprint images when either the acquired image is only a partial image or it is an arch fingerprint. In these cases, singularity points cannot be detected, so the whole process will fail. For the aforementioned reasons, a new technique, showing good accuracy rates and low computational cost, is introduced to detect pseudosingularity points.

a) Core and delta extraction: Singularity-point detection is performed by checking the Poincare indexes associated with the fingerprint direction matrix. As pointed out before, the singularity points with a Poincare index equal to 180◦, −180◦, and 360◦ are associated with the core, the delta, and the double core, respectively. In greater detail, the directional image extraction phase is composed of the following four sequential tasks:

1) Gx and Gy gradient calculation using Sobel operators;
2) Dx and Dy derivative calculation;
3) θ(i, j) angle calculation for the (i, j) block;
4) Gaussian smoothing filter application to the angle matrix.

Finally, the singularity points are detected according to the Poincare indexes.

b) Pseudosingularity-point detection and fingerprint classification: The extraction step described in the previous section may be affected by several drawbacks arising from the fingerprint acquisition process. In addition, arch fingerprints have no core and no delta points, so the previous process will give no points as output. Generally, fingerprint images do not contain the same number of singularity points. A whorl-class fingerprint has two core and two delta points, a left-loop or right-loop fingerprint has one core and one delta, and a tented arch fingerprint has only a core point, while an arch fingerprint has no singularity points.
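The Poincare index test described above can be sketched as follows for a closed path of ridge orientations sampled around a candidate point. The synthetic orientation paths are illustrative; a real implementation would read them from the smoothed directional image.

```python
def poincare_index(orients):
    """Poincare index of a closed path of ridge orientations (degrees,
    defined modulo 180). Sums the wrapped successive differences: a core
    yields ~ +180, a delta ~ -180, and a regular point ~ 0."""
    total = 0.0
    n = len(orients)
    for k in range(n):
        d = orients[(k + 1) % n] - orients[k]
        # Wrap each step into (-90, 90], since orientations are mod 180.
        while d > 90:
            d -= 180
        while d <= -90:
            d += 180
        total += d
    return total

# A synthetic closed path whose orientation rotates half a turn (core-like).
core_path = [0, 22.5, 45, 67.5, 90, 112.5, 135, 157.5]
```

Reversing the path direction flips the sign, which is how a delta (−180◦) is distinguished from a core (+180◦); a double core accumulates a full turn (+360◦).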

A directional map is used to identify the real class of the processed fingerprint image. Let us call α the angle between a directional segment and the horizontal axis. Fingerprint topological structure shows that the core–delta path follows only high angular-variation points in the vertical direction. For this reason, each α > π/2 is set to α − π, so that the directional map is mapped into the range [−π/2, π/2]. The new mapping makes it possible to highlight the points with an angular variation close to π/2 in the directional map [see the white curve in Fig. 2(b)].
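The remapping step is a one-liner per element; this sketch assumes input angles in [0, π), which is the usual convention for ridge orientations.

```python
import math

def remap_direction(alpha):
    """Map a directional angle alpha (radians, assumed in [0, pi)) into
    [-pi/2, pi/2]: each alpha > pi/2 is replaced by alpha - pi."""
    return alpha - math.pi if alpha > math.pi / 2 else alpha

def remap_map(direction_map):
    """Apply the remapping to a whole 2-D directional map (list of lists)."""
    return [[remap_direction(a) for a in row] for row in direction_map]
```

After remapping, near-vertical directions sit at the extremes of the range, so the core–delta path stands out as the locus of large-magnitude values.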

According to (1), for each kernel (i, j), the differences computed between each directional map element (i, j) and its 8-neighbors are used to detect the zones with the highest vertical differences. Finally, according to (2), the point having the maximum angular difference is selected:

difference_k(i, j) = angle(i, j) − neighbor_k(i, j),   k = 1, . . ., 8   (1)

max_difference_angle(i, j) = max_k(difference_k(i, j)).   (2)
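Equations (1) and (2) amount to the following neighbor scan; the small directional map below is an invented toy example with one high-variation element.

```python
def max_angular_difference(dir_map, i, j):
    """Eqs. (1)-(2): difference between directional-map element (i, j) and
    each of its 8 neighbors; returns the maximum difference. Assumes (i, j)
    is an interior element of the 2-D list dir_map."""
    center = dir_map[i][j]
    diffs = [center - dir_map[i + di][j + dj]
             for di in (-1, 0, 1) for dj in (-1, 0, 1)
             if not (di == 0 and dj == 0)]
    return max(diffs)

# Toy directional map (degrees) with a sharp variation at the center.
dmap = [[10, 12, 11],
        [13, 80, 12],
        [11, 10, 14]]
```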

Fig. 2. Classification of a partially acquired image. (a) Original fingerprint image with the overlapping line between the core and pseudodelta point. (b) Map of highest differences, where the white line direction identifies the pseudodelta point. (c) Midpoint Md of the core and pseudodelta-point segment, with the orthogonal LR segment.

In unbroken and well-acquired images, high-value points identify a path between the core and delta points.

In a partially acquired left-loop, right-loop, or tented arch fingerprint, this value identifies a path between the extracted singularity point and the missing point. Fig. 2(a) shows an example of a partial left-loop fingerprint, in which there is only one core point (highlighted by a green circle). Fig. 2(b) represents the matrix of the highest angle differences. The white line starts from the core point and goes toward the missing delta point. The proposed algorithm follows this line and approximates a "pseudodelta point" (highlighted by a red triangle), which will be used for image classification.

Fingerprint classification is performed considering the mutual position between the core and the pseudosingularity point. As shown in Fig. 2(c), the directional map analysis refers to three points, identified by the midpoint Md of the core and pseudodelta line. The midpoint is used to set the L point and the R point on the orthogonal line, in such a way that Md is the midpoint of the LR segment.

The mutual positions between the core, the pseudodelta point, Md, L, and R identify the fingerprint class by applying the following rules.

1) Left-loop class:
abs(core_delta_angle − R_angle) < abs(core_delta_angle − L_angle)
and abs(core_delta_angle − Md_angle) > tolerance_angle.

2) Right-loop class:
abs(core_delta_angle − R_angle) > abs(core_delta_angle − L_angle)
and abs(core_delta_angle − Md_angle) > tolerance_angle.

3) Tented arch class:
abs(core_delta_angle − R_angle) < abs(core_delta_angle − L_angle)
and abs(core_delta_angle − Md_angle) < tolerance_angle.

Here, "core_delta_angle" is the angle between the core and pseudodelta-point segment and the horizontal axis, "R_angle"


Fig. 3. For arch fingerprints, the pseudosingularity point refers to the maximum curvature points. (a) Arch fingerprint. (b) 2-D view of the curvature map.

Fig. 4. Iris ROI extraction scheme: boundary localization, pupil segmentation, iris segmentation, and eyelid and eyelash erosion.

is the angle of the R point in the directional map, "L_angle" is the angle of the L point in the directional map, and "Md_angle" is the angle of the Md point in the directional map.

Whorl fingerprint topology is characterized by two close and centered core points, so that double-core detection selects the fingerprint class.

Arch fingerprints have no singularity points. In this case, the proposed approach detects the maximum curvature point, namely the pseudocore point, and the neighboring pixels, i.e., the needed ROI (see Fig. 3).
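The three classification rules above can be sketched as a small decision function. The angle values and the tolerance below are illustrative assumptions; the paper does not state a numeric tolerance, and whorl/arch handling (double-core and no-point cases) is assumed to happen before this function is called.

```python
def classify(core_delta_angle, r_angle, l_angle, md_angle,
             tolerance_angle=10.0):
    """Rule-based class assignment from the mutual positions of the core,
    the pseudodelta point, and the L, Md, R reference points (degrees).
    The tolerance value is an assumption, not taken from the paper."""
    d_r = abs(core_delta_angle - r_angle)
    d_l = abs(core_delta_angle - l_angle)
    d_md = abs(core_delta_angle - md_angle)
    if d_r < d_l and d_md > tolerance_angle:
        return "left loop"
    if d_r > d_l and d_md > tolerance_angle:
        return "right loop"
    if d_r < d_l and d_md < tolerance_angle:
        return "tented arch"
    return "unclassified"

# Hypothetical angles: closer to R than L, Md well off the segment angle.
cls = classify(80.0, 30.0, 70.0, 100.0)
```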

2) Iris ROI Extraction: As shown in Fig. 4, the iris ROI segmentation process is composed of four tasks: boundary localization, pupil segmentation, iris segmentation, and eyelid and eyelash erosion.

a) Boundary localization: The boundaries in an iris image are extracted by means of edge-detection techniques to compute the parameters of the iris and pupil neighbors.

The approach aims to detect the circumference center and radius of the iris and pupil regions, even if the circumferences are usually not concentric [20]. Finally, the eyelids and eyelashes are located.

b) Pupil segmentation: The pupil-identification phase is composed of two steps. The first step is an adaptive thresholding, and the second step is a morphological opening operation [21].

Fig. 5. Pupil segmentation. Thresholding application and the morphological opening operation.

The first step is able to identify the pupil, but it cannot eliminate the noise introduced by the acquisition phase. The second step is based on a morphological opening operation performed using a structural element of circular shape. As shown in Fig. 5, the step ends when the morphological opening operation reduces the pupil area to approximately the structural element. Successively, the pupil radius and center are identified. The identification algorithm is executed in two steps: the first step detects connected and almost-connected circular areas, trying to get the best (radius, center) pair with respect to the previous phase. The second step, starting from a square around the coordinates of the obtained centroid, measures the maximum gradient variations along the circumferences centered at the identified points with different radii.
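The threshold-then-open pipeline can be sketched on a synthetic 7×7 grayscale image. This toy version uses a 3×3 square structuring element instead of the circular element in the paper, and a fixed rather than adaptive threshold; both are simplifying assumptions.

```python
def threshold(img, t):
    """Binarize a grayscale image (2-D list): pupil pixels are dark (< t)."""
    return [[1 if p < t else 0 for p in row] for row in img]

def erode(b):
    """3x3 erosion on a binary image (border rows/cols left as background)."""
    h, w = len(b), len(b[0])
    out = [[0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i][j] = int(all(b[i + di][j + dj]
                                for di in (-1, 0, 1) for dj in (-1, 0, 1)))
    return out

def dilate(b):
    """3x3 dilation on a binary image."""
    h, w = len(b), len(b[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            out[i][j] = int(any(b[i + di][j + dj]
                                for di in (-1, 0, 1) for dj in (-1, 0, 1)
                                if 0 <= i + di < h and 0 <= j + dj < w))
    return out

def opening(b):
    """Morphological opening: erosion then dilation; removes blobs smaller
    than the structuring element while preserving larger regions."""
    return dilate(erode(b))

# Synthetic eye patch: bright background, 3x3 dark pupil blob, one dark
# noise pixel that the opening should remove.
eye = [[200] * 7 for _ in range(7)]
for i in range(1, 4):
    for j in range(1, 4):
        eye[i][j] = 10
eye[5][5] = 10
mask = opening(threshold(eye, 100))
```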

c) Iris segmentation: The iris boundary is detected in two steps. First, image-intensity information is converted into a binary edge map. Successively, the set of edge points is subjected to voting to instantiate the parametric contour values.

The edge map is recovered using the Canny algorithm for edge detection [22]. This operation is based on thresholding the magnitude of the image-intensity gradient. In order to incorporate directional tuning, the image-intensity derivatives are weighted to amplify certain ranges of orientation. For example, in order to recognize the iris boundary contour, the derivatives are weighted to be selective for vertical edges. Then, a voting procedure over the points extracted by the Canny operator is applied to erase the disconnected points along the edge. In greater detail, the Hough transform [23] is defined, as in (3), for the circular boundary and a set of recovered edge points (xj, yj), with j = 1, . . ., n:

H(x_c, y_c, r) = \sum_{j=1}^{n} h(x_j, y_j, x_c, y_c, r)    (3)

where

h(x_j, y_j, x_c, y_c, r) = 1 if g(x_j, y_j, x_c, y_c, r) = 0, and 0 otherwise    (4)

with

g(x_j, y_j, x_c, y_c, r) = (x_j − x_c)^2 + (y_j − y_c)^2 − r^2.    (5)

For each edge point (x_j, y_j), g(x_j, y_j, x_c, y_c, r) = 0 for every parameter triplet (x_c, y_c, r) that represents a circle through that point.


Fig. 6. Examples of the segmentation process. (a) Original BATH database iris image. (b) Image with pupil and iris circumference, eyelid, and eyelash points. (c) Extracted iris ROI after the segmentation.

The triplet maximizing H corresponds to the largest number of edge points representing the contour of interest.
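The voting of (3)–(5) can be implemented directly as a brute-force accumulator over the candidate parameter space. This is a didactic sketch: production detectors restrict the vote space using gradient direction, which the sketch omits.

```python
import numpy as np

def circular_hough(edge_points, xc_range, yc_range, r_range):
    """Brute-force circular Hough transform as in (3)-(5).

    Each edge point (xj, yj) votes for every candidate circle
    (xc, yc, r) it lies on; the triplet maximizing the accumulator H
    is taken as the contour of interest.
    """
    H = np.zeros((len(xc_range), len(yc_range), len(r_range)), dtype=int)
    for (xj, yj) in edge_points:
        for a, xc in enumerate(xc_range):
            for b, yc in enumerate(yc_range):
                # g = (xj-xc)^2 + (yj-yc)^2 - r^2 ; vote where g == 0
                d2 = (xj - xc) ** 2 + (yj - yc) ** 2
                for c, r in enumerate(r_range):
                    if d2 == r * r:
                        H[a, b, c] += 1
    a, b, c = np.unravel_index(np.argmax(H), H.shape)
    return xc_range[a], yc_range[b], r_range[c]
```

For example, eight integer points lying exactly on a circle of center (10, 10) and radius 5 all vote for that triplet, which then maximizes H.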

d) Eyelid and eyelash erosion: Eyelids and eyelashes are considered to be “noise,” and therefore, are seen to degrade the system performance. Eyelashes are of two types: separable eyelashes and multiple eyelashes. The eyelashes present in our dataset belong to the separable type. Initially, the eyelids are isolated by fitting a line to the upper and lower eyelid using the linear Hough transform. Successively, the Canny algorithm is used to create the edge map, and only the horizontal gradient information is considered.

As shown in Fig. 6, the real part of the Gabor filter (1-D Gabor filter) in the spatial domain has been used [24] for eyelash location. The convolution between a separable eyelash and the real part of the Gabor filter is very small. For this reason, if a resultant point is smaller than an empirically predefined threshold, the point belongs to an eyelash. Each point in an eyelash must be connected to another eyelash point or to an eyelid. If one of these two criteria is fulfilled at a point, its neighboring pixels are checked to determine whether they belong to an eyelash or an eyelid. If none of the neighboring pixels has been classified as belonging to an eyelid or an eyelash, the point is not considered an eyelash pixel.

B. Matching Algorithm

Fusion is performed by combining the biometric templates extracted from every pair of fingerprints and irises representing a user. First, the identifiers extracted from the original images are stored in different feature vectors. Successively, each vector is normalized in polar coordinates. Then, they are combined. Finally, HD is used for matching score computation. In what follows, the applied techniques for ROI normalization, template generation, and HD will be described.

1) ROI Normalization: Since the fingerprint and iris images of different people may have different sizes, a normalization operation must be performed after ROI extraction. For a given person, biometric feature size may vary because of illumination changes during the iris-acquisition phase or pressure variation during the fingerprint-acquisition phase. By equalizing the histogram, ROIs show a uniform brightness level.

Fig. 7. (a) Polar coordinate system for an iris ROI and the corresponding linearized visualization. (b) Two examples showing the linearized iris and fingerprint ROI images, respectively.

The coordinate transformation process produces a 448 × 96 biometric pattern for each meaningful ROI: 448 is the number of the chosen radial samples (to avoid data loss over the round angle), while 96 pixels is the highest difference between iris and pupil radius in the iris images, or the ROI radius in the fingerprint images. In order to achieve invariance with regard to roto-translation and scaling distortion, the r polar coordinate is normalized in the [0, 1] range. Fig. 7(a) depicts the polar coordinate system for an iris ROI and the corresponding linearized visualization. In Fig. 7(b), two examples of iris and fingerprint ROI images are depicted. Each Cartesian point of the ROI image is assigned a polar coordinate pair (r, θ), with r ∈ [R1, R2] and θ ∈ [0, 2π], where R1 is the pupil radius and R2 is the iris radius. For fingerprint ROIs, R1 = 0.
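A minimal unwrapping consistent with the 448 × 96 pattern described above can be sketched as follows. Nearest-neighbour sampling is a simplification (an interpolating remap would be smoother), and the sampling order is an assumption.

```python
import numpy as np

def normalize_roi(image, center, r_inner, r_outer, n_theta=448, n_r=96):
    """Unwrap an annular ROI into a 448 x 96 rectangular pattern.

    448 angular samples over [0, 2*pi) and 96 radial samples, with the
    radial coordinate normalized to [0, 1] between the inner radius
    (pupil radius, or 0 for fingerprint ROIs) and the outer radius.
    """
    cy, cx = center
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    rs = np.linspace(0, 1, n_r)                  # normalized radius in [0, 1]
    radii = r_inner + rs * (r_outer - r_inner)   # actual pixel radius
    # Sample each (radius, angle) pair with nearest-neighbour lookup.
    ys = cy + np.outer(radii, np.sin(thetas))
    xs = cx + np.outer(radii, np.cos(thetas))
    ys = np.clip(np.rint(ys).astype(int), 0, image.shape[0] - 1)
    xs = np.clip(np.rint(xs).astype(int), 0, image.shape[1] - 1)
    return image[ys, xs].T                       # shape (n_theta, n_r)
```

On a radially symmetric test image, each column of the output (one normalized radius) is nearly constant across all 448 angles, as expected.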

Since iris eyelashes and eyelids generate some corrupted areas, a noise mask corresponding to the aforementioned corrupted areas is associated with each linearized ROI. In addition, the phase component will be meaningless in the regions where the amplitude is zero, so these regions are also added to the noise mask. Fig. 8 depicts the biometric template and the related noise masks. In this example, the noise mask associated with a fingerprint singularity region [see Fig. 8(c)] is completely black because the region is inside the fingerprint image and no noise is considered to be present. On the contrary, if ROIs are only partially available, the noise mask will contain white zones (no information), as shown in Fig. 8(a).

2) Homogenous Template Generation: The homogenous biometric vector built from fingerprint and iris data is composed of binary sequences representing the unimodal biometric templates. The resulting vector is composed of a header and a biometric pattern. The biometric pattern is composed of two subpatterns as well. The first pattern is related to the extracted fingerprint singularity points, reporting the codified and normalized ROIs.

The second pattern is related to the extracted iris code, reporting the codified and normalized ROIs. In greater detail, the 1-byte header contains the following information.


Fig. 8. (a) and (c) Noise masks used to extract useful information from the iris and fingerprint descriptors. The black area is the useful area used to perform the matching process. The white areas are related to the noisy zones, and consequently, they are discarded in the matching process. The iris and fingerprint descriptors are reported in (b) and (d), respectively.

1) Core number (2 bits): indicates the number of core points in the fingerprint image (0 if no ROI has been extracted around the core).

2) Delta number (2 bits): indicates the number of delta points in the fingerprint image (0 if no ROI has been extracted around the delta).

3) Fingerprint class (3 bits): indicates to which of the five fingerprint classes the fingerprint belongs.

4) Iris ROI extraction (1 bit): 0 if the segmentation step has failed.
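The four fields above sum to exactly 8 bits, so the header fits in one byte. A possible packing is sketched below; the paper fixes only the field widths, so the field order within the byte is an assumption.

```python
def pack_template_header(n_cores, n_deltas, fp_class, iris_ok):
    """Pack the 1-byte template header: 2 bits for the core count,
    2 for the delta count, 3 for the fingerprint class, and 1 for the
    iris-ROI flag. Field order is illustrative."""
    assert 0 <= n_cores < 4 and 0 <= n_deltas < 4 and 0 <= fp_class < 8
    return (n_cores << 6) | (n_deltas << 4) | (fp_class << 1) | int(iris_ok)

def unpack_template_header(byte):
    """Inverse of pack_template_header: (cores, deltas, class, iris flag)."""
    return (byte >> 6) & 3, (byte >> 4) & 3, (byte >> 1) & 7, byte & 1
```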

Normalized fingerprint and iris ROIs are codified using the Log-Gabor approach [18]. Different from magnitude-based methods [38], Gabor filters are a tool used to provide local frequency information [25], [37]. However, the standard Gabor filter has two limitations: its bandwidth and the limited information extraction on a broad spectrum. An alternative is the Log-Gabor filter. This filter can be designed with arbitrary bandwidth, and it represents a Gabor filter constructed as a Gaussian on a logarithmic scale. The frequency response of this filter is defined by the following equation:

G(f) = exp( − [log(f/f0)]^2 / (2 [log(σ/f0)]^2) )    (6)

where f0 is the central frequency and σ is the filter bandwidth. In our approach, the implementation of the Log-Gabor filter proposed by Masek [26] has been considered. As depicted in Fig. 9, each row of the normalized pattern is considered as a 1-D signal, processed by a convolution operation using the 1-D Log-Gabor filter.
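The frequency response (6) and the row-wise filtering can be sketched as follows. The convolution is done in the frequency domain; the parameter values (f0 = 0.1, σ/f0 = 0.5) are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def log_gabor_response(n, f0, sigma_ratio):
    """Frequency response of a 1-D Log-Gabor filter, as in (6):
    G(f) = exp(-log(f/f0)^2 / (2*log(sigma/f0)^2)), a Gaussian on a
    logarithmic frequency scale. sigma_ratio = sigma/f0 fixes the
    bandwidth. G(0) is set to 0 (log(0) undefined: no DC component)."""
    f = np.arange(n) / n                  # normalized frequency bins
    G = np.zeros(n)
    nz = f > 0
    G[nz] = np.exp(-np.log(f[nz] / f0) ** 2 / (2 * np.log(sigma_ratio) ** 2))
    return G

def log_gabor_row_code(row, f0=0.1, sigma_ratio=0.5):
    """Filter one row of the normalized pattern with the 1-D Log-Gabor
    filter (convolution performed in the frequency domain) and return
    the complex response, whose phase is quantized in the next step."""
    spectrum = np.fft.fft(row)
    return np.fft.ifft(spectrum * log_gabor_response(len(row), f0, sigma_ratio))
```

As expected from (6), the response peaks at the central frequency f0 and vanishes at DC.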

The phase component, obtained from the real and imaginary parts of the 1-D Log-Gabor filter response, is then quantized in four levels, using the Daugman method [27], [28]. Therefore, each filter generates a 2-bit coding for each iris/fingerprint ROI pixel. The phase-quantization coding is performed through the Gray code, so that only 1 bit changes when moving from one quadrant to the next one. This minimizes the number of differing bits when two patterns are slightly misaligned [26].
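The four-level phase quantization can be sketched as below: each code word is the pair of sign bits of the complex response, which is one valid Gray assignment (the exact quadrant labeling is an assumption; the paper only requires that adjacent quadrants differ in one bit).

```python
import numpy as np

def quantize_phase(response):
    """Quantize the phase of a complex filter response into 2-bit codes,
    one quadrant per code word. bit0 tracks the sign of the real part and
    bit1 the sign of the imaginary part: moving to a neighbouring quadrant
    flips exactly one sign, hence exactly one bit (a Gray code)."""
    re = (np.real(response) >= 0).astype(np.uint8)
    im = (np.imag(response) >= 0).astype(np.uint8)
    return np.stack([re, im], axis=-1)

def is_gray_step(a, b):
    """True if two 2-bit codes differ in exactly one bit."""
    return int(a[0] ^ b[0]) + int(a[1] ^ b[1]) == 1
```

Checking one sample per quadrant confirms the Gray property around the full phase circle.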

The different coded biometric patterns are then concatenated, thus obtaining a 3-D biometric pattern, where each element is represented by a voxel (see Fig. 10).

Fig. 9. Codified fingerprint or iris ROIs obtained by applying the four-level phase quantization to the 1-D Log-Gabor filter response.

Fig. 10. 3-D biometric pattern obtained by iris coding, fingerprint core region coding, and fingerprint delta region coding. The associated template header will address the meaningful voxels in the 3-D template.

3) HD-Based Matching: The matching score is calculated through the HD between two final fused templates. The template obtained in the encoding process needs a corresponding matching metric that provides a measure of the similarity degree between two templates. The result of the measure is then compared with an experimental threshold to decide whether or not the two representations belong to the same user. The metric used in this paper is also used by Daugman [27], [28] in his recognition system.

If two patterns X and Y have to be compared, the HD is defined as the sum of discordant bits in homologous positions (XOR operation between X and Y bits). In other words,

HD = (1/N) \sum_{j=1}^{N} XOR(X_j, Y_j)    (7)

where N is the total number of bits.

If two patterns are completely independent, the HD between them should be equal to 0.5, since independence implies that the two bit strings are completely random, so each bit has a 0.5 probability of being set to 1 and vice versa. If two patterns of the same biometric descriptor are processed, their distance should be zero.
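The fractional HD of (7) is a one-liner; testing it on random bit strings reproduces the behavior just described (0 for identical templates, about 0.5 for independent ones).

```python
import numpy as np

def hamming_distance(X, Y):
    """Fractional Hamming distance of (7): the fraction of discordant
    bits (XOR) between two equal-length binary templates."""
    X = np.asarray(X, dtype=bool)
    Y = np.asarray(Y, dtype=bool)
    return np.count_nonzero(X ^ Y) / X.size
```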


TABLE I
COMPOSITION AND DETAILS OF THE USED FINGERPRINT AND IRIS DATABASES (REDUCED DATABASES IN ITALIC)

The algorithm used in this paper uses a mask to identify the useful area for the matching process. Therefore, the HD is calculated based only on the significant bits of the two templates. The modified formula is (8), where X_j and Y_j are the corresponding bits to compare, Xn_j and Yn_j are the corresponding bits in the noise masks, and N is the number of bits representing each template:

HD = [1 / (N − \sum_{k=1}^{N} OR(Xn_k, Yn_k))] × \sum_{j=1}^{N} AND(XOR(X_j, Y_j), NOT(Xn_j), NOT(Yn_j)).    (8)

As suggested by Daugman [27], [28], to avoid false results caused by the rotation problem, a template is shifted to the right and to the left with respect to the corresponding template, and for each shift operation, the new HD is calculated. Each shift corresponds to a rotation of 2°. Among all obtained values, the minimum distance is considered (corresponding to the best matching between the two templates).
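The masked HD of (8) and the shift-and-minimize search can be sketched together as follows. A mask bit of 1 marks a noisy position (the white zones of Fig. 8). The sketch shifts by single columns for simplicity; with 448 angular samples, one column spans roughly 0.8°, so the paper's 2° rotation step would correspond to shifts of more than one column.

```python
import numpy as np

def masked_hamming_distance(X, Y, Xn, Yn):
    """Masked fractional Hamming distance of (8): bits flagged as noisy
    in either mask are excluded from both the XOR count and the
    normalization. All arguments are boolean arrays of equal shape."""
    noisy = Xn | Yn
    valid = X.size - np.count_nonzero(noisy)
    if valid == 0:
        return 1.0
    return np.count_nonzero((X ^ Y) & ~noisy) / valid

def best_shift_distance(X, Y, Xn, Yn, row_len, max_shift=8):
    """Shift template Y left/right along the angular axis (one column per
    step) together with its mask, and keep the minimum masked HD, as
    suggested by Daugman for rotation invariance. Templates are flat
    row-major arrays of rows of `row_len` bits."""
    X2, Y2 = X.reshape(-1, row_len), Y.reshape(-1, row_len)
    Xn2, Yn2 = Xn.reshape(-1, row_len), Yn.reshape(-1, row_len)
    best = 1.0
    for s in range(-max_shift, max_shift + 1):
        Ys = np.roll(Y2, s, axis=1)
        Yns = np.roll(Yn2, s, axis=1)
        best = min(best, masked_hamming_distance(X2, Ys, Xn2, Yns))
    return best
```

A rotated copy of a template thus realigns to distance 0 at the matching shift, and bits flagged as noisy do not contribute discordances.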

V. EXPERIMENTAL RESULTS

The proposed multimodal biometric authentication system achieves interesting results on standard and commonly used databases. To show the effectiveness of our approach, the FVC2002 DB2 database [30] and the BATH database [31] have been used for fingerprints and irises, respectively. The obtained experimental results, in terms of recognition rates and execution times, are outlined here. The listed FAR and FRR indexes have been calculated following the FVC guidelines [30]. Table I gives a brief description of the features of the used databases.

The reduced BATH-S1 database has been generated with ten random user extractions from the full iris database. For each user, the first eight iris acquisitions have been selected. The BATH-S2 database has been generated considering the 50 database users. For each user, the first eight iris acquisitions have been selected. The BATH-S3 database has been generated considering the 50 database users as well. For each user, a second pattern of eight iris acquisitions (9–16) has been selected. The FVC2002 DB2A-S1 database has been generated considering the first 50 users, while the FVC2002 DB2A-S2 database has been generated considering the last 50 users.

TABLE II
RECOGNITION RATES OF THE UNIMODAL BIOMETRIC SYSTEMS FOR THE ENTIRE DATABASES

TABLE III
TEST SETS COMPOSITION AND FEATURES

A. Recognition Analysis of the Multimodal System

The multimodal recognition system performance evaluation has been performed using the well-known FRR and FAR indexes. For an authentication system, the FAR measures how often an unauthorized access is incorrectly accepted, while the FRR measures how often an authorized access is incorrectly rejected.

To evaluate and compare the performance of the proposed approach, several tests have been conducted. The first test has been conducted on the full FVC2002 DB2A database using a classical unimodal minutiae-based recognition system [7], [8]. This approach has resulted in FAR = 0.38% and FRR = 14.29%. The performance of the fingerprint unimodal recognition system using the previously described frequency-based approach on the same full FVC2002 DB2A database has also been evaluated. This approach has resulted in FAR = 1.37% and FRR = 22.45%. Table II shows the results achieved using the two methods. In Table II, the result achieved by the iris unimodal recognition system using the previously described frequency-based approach on the full BATH database [31] is also reported.

Successively, several test sets considering the appropriate number of fingerprint and iris acquisitions have been generated to test the proposed multimodal approach. Table III shows the composition of the used test sets.

An initial test has been conducted on the DBtest1 dataset using a classical fusion approach at the matching-score level, utilizing a Euclidean metric applied to the HD of each subsystem. With this approach, the following results have been


TABLE IV
RECOGNITION RATES OF THE PROPOSED TEMPLATE-LEVEL FUSION ALGORITHM COMPARED TO UNIMODAL SYSTEMS FOR REDUCED DATABASES (TEN USERS)

Fig. 11. ROC curves for the unimodal biometric systems and the corresponding multimodal system with the DBtest1 dataset.

obtained: FAR = 0.07% and FRR = 11.78%. The score has been obtained by weighting the matching scores (0.65 for iris and 0.35 for fingerprints) of each unimodal biometric system. The aforementioned weight pair is the best tradeoff to meet the following constraints: 1) literature approaches show that iris-based systems achieve higher recognition accuracies than fingerprint-based ones [7], [9], [27], [28], and 2) our experimental trials confirm that the aforementioned weights optimize the recognition accuracy.
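The baseline matching-score-level fusion can be sketched as a weighted combination of the two unimodal HDs using the weights above. The weighted sum shown is a simplification of the Euclidean-metric combination mentioned in the text, kept here to make the role of the 0.65/0.35 weights explicit.

```python
def fused_matching_score(hd_iris, hd_fingerprint,
                         w_iris=0.65, w_fingerprint=0.35):
    """Baseline matching-score-level fusion used for comparison: the
    unimodal Hamming distances are combined with the weights from the
    text (0.65 iris, 0.35 fingerprint). Lower scores mean a better
    match; the decision compares the fused score with an experimental
    threshold."""
    return w_iris * hd_iris + w_fingerprint * hd_fingerprint
```

Because the iris weight dominates, improving the iris score lowers the fused score more than an equal improvement on the fingerprint side, reflecting constraint 1) above.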

Successively, the proposed fusion strategy at the template level has been applied and tested on the same dataset (DBtest1), obtaining the results listed in Table IV. The corresponding multimodal authentication system uses the previously described homogeneous biometric vector, and it does not require any weights for the iris and fingerprint unimodal systems to evaluate the matching score. With this approach, the following results have been obtained: FAR = 0% and FRR = 5.71%. Fig. 11 shows the receiver-operating characteristic (ROC) curves for the systems reported in Table IV. ROC curves are obtained by plotting the FAR index versus the FRR index for different values of the matching threshold.

Finally, following the items listed in Table III, the remaining four datasets have been considered to further evaluate the proposed template-level fusion strategy. Table V shows the achieved results in terms of FAR and FRR indexes. The results achieved by the two unimodal recognition systems on the same pertinent databases are also reported in Table V.

TABLE V
RECOGNITION RATES OF THE PROPOSED TEMPLATE-LEVEL FUSION ALGORITHM COMPARED TO UNIMODAL SYSTEMS FOR REDUCED DATABASES (50 USERS)

Fig. 12. ROC curves for the unimodal biometric systems and the corresponding multimodal system with the DBtest4 dataset.

As shown in Table V, the conducted tests produce comparable results on the used datasets, underlining the robustness of the presented approach. Fig. 12 shows the ROC curves for the systems dealing with the DBtest4 dataset. Analogous curves have been obtained with the remaining datasets.

B. Execution Time Analysis of the Multimodal Software System

The multimodal systems have been implemented using the MATLAB environment on a general-purpose Intel P4 3.00-GHz processor with 2-GB RAM. Table VI shows the average software execution times for the preprocessing and matching tasks. The fingerprint preprocessing time can change, since it depends on singularity-point detection, pseudosingularity-point detection, or maximum-curvature-point detection.

VI. DISCUSSIONS AND COMPARISONS

Multimodal biometric identification systems aim to fuse two or more physical or behavioral pieces of information to provide optimal FAR and FRR indexes, improving system dependability.


TABLE VI
SOFTWARE EXECUTION TIMES FOR THE PREPROCESSING AND MATCHING TASKS

In contrast to the majority of work published on this topic, which has been based on matching-score-level fusion or decision-level fusion, this paper presents a template-level fusion method for a multimodal biometric system based on fingerprints and irises. In greater detail, the proposed approach performs fingerprint matching using the segmented regions (ROIs) surrounding fingerprint singularity points. On the other hand, iris preprocessing aims to detect the circular region surrounding the iris. To achieve these results, we adopted a Log-Gabor-algorithm-based codifier to encode both fingerprint and iris features, thus obtaining a unified template. Successively, the HD on the fused template was used for the similarity index computation. The improvements introduced by adopting the fusion process at the template level are now described and discussed, together with the related comparison against the unimodal biometric systems and the classical matching-score-fusion-based multimodal systems. The proposed approach for fingerprint and iris segmentation, coding, and matching has been tested in unimodal identification systems using the official FVC2002 DB2A fingerprint database and the BATH iris database. Even if the frequency-based approach, using fingerprint (pseudo) singularity-point information, introduces an error in system recognition accuracy (see Table II), the achieved recognition results have shown interesting performance when compared with literature approaches on similar datasets. On the other hand, in the frequency-based approach, it is very difficult to use the classical minutiae information, due to their great number. In that case, the frequency-based approach would have to consider a high number of ROIs, resulting in the coding of the whole fingerprint image and, consequently, in a high-dimensional feature vector.

Shi et al. [32] proposed a novel fingerprint-matching method based on the Hough transform. They tested the method using the FVC2002 DB2A database, depicting two ROC curves with FAR and FRR indexes comparable to our results. Nagar et al. [33] used minutiae descriptors to capture orientation and ridge frequency information in a minutia's neighborhood. They validated their results on the FVC2002 DB2A database, showing a working point with FAR = 0.7% at a genuine accept rate (GAR) of 95%. However, they did not use the complete database, but only two samples for each user; therefore, they considered only 200 images. In [34], Yang et al. proposed novel helper data based on the topo-structure to reduce the alignment calculation amount. They tested their approach on FVC2002 DB2A, obtaining an FAR between 0% and 0.02% with a GAR between 88% and 92%, depending on particular thresholds.

Concerning the iris identification system, the achieved performance can be considered very interesting when compared with the results of different approaches found in the literature on the same or similar datasets. A novel technique for iris recognition using texture and phase features is proposed in [35]. Texture features are extracted from the normalized iris strip using the Haar wavelet, while phase features are obtained using the Log-Gabor wavelet. The matching scores generated from the individual modules are combined using the sum-of-scores technique. The system is tested on the BATH database, giving an accuracy of 95.62%. The combined system at matching-score-level fusion increased the system performance to FAR = 0.36% and FRR = 8.38%.

In order to test the effectiveness of the proposed multimodal approach, several datasets have been used. First, two different multimodal systems have been tested and compared on the standard FVC2002 DB2B fingerprint image database and the BATH-S1 iris image database: the former system was based on a matching-score-level fusion technique, while the latter was based on the proposed template-level fusion technique. The obtained results show that the proposed template-level fusion technique yields an enhanced system with interesting results in terms of FAR and FRR (see Tables II and IV for further details). The aforementioned result suggests that template-level fusion gives better performance than matching-score-level fusion. This statement confirms the results presented in [36]. In that paper, Khalifa and Amara presented the results of four different fusions of modalities at different levels for two unimodal biometric verification systems, based on offline signature and handwriting. The best result was obtained using a fusion strategy at the feature-extraction level. In conclusion, we can affirm that when a fusion strategy is performed at the feature-extraction level, a homogeneous template is generated, so that a unified matching algorithm can be used; the corresponding multimodal identification system then shows better results than those achieved using other fusion strategies.

Lastly, several 50-user databases have been generated, combining the available FVC2002 DB2A fingerprint database and the BATH iris database. The achieved results, reported in Table V, show uniform performance on the used datasets.

In the literature, few multimodal biometric systems based on template-level fusion have been published, making it very difficult to comment on and analyze the experimental results obtained in this paper. Besbes et al. [13] proposed a multimodal biometric system using fingerprint and iris features. They use a hybrid approach based on: 1) fingerprint minutiae extraction and 2) iris template encoding through a mathematical representation of the extracted iris region. However, no experimental results have been reported in that paper. As pointed out before, a mixed multimodal system based on features fusion and matching-score fusion has been proposed in [2]. That paper presents the overall result of the entire system on self-constructed, proprietary databases. It reports the ROC graph with the unimodal and the multimodal system results. The ROC curves show the improvements introduced by the adopted fusion strategy. No FAR and FRR values are reported. Table VII summarizes the previous results.


TABLE VII
COMPARISON OF THE RECOGNITION RATES OF OUR APPROACH AND THE OTHER LITERATURE APPROACHES

VII. CONCLUSION AND FUTURE WORKS

For an ideal authentication system, the FAR and FRR indexes are equal to 0. The aforementioned result may be reached by online biometric authentication systems, because they have the freedom to reject low-quality acquired items. On the contrary, official ready-to-use databases (FVC databases, CASIA, BATH, etc.) contain images of varying quality, including low-, medium-, and high-quality biometric acquisitions, as well as partial and corrupted images. For this reason, biometric authentication systems based on these databases do not achieve the ideal result. To increase the related security level, system parameters are then fixed in order to achieve the FAR = 0% point and a corresponding FRR point.

In this paper, a template-level fusion algorithm working on a unified biometric descriptor was presented. The aforementioned result leads to a matching algorithm that is able to process fingerprint-codified templates, iris-codified templates, and fused iris and fingerprint templates. In contrast to the classical minutiae-based approaches, the proposed system performs fingerprint matching using the segmented regions (ROIs) surrounding (pseudo) singularity points. This choice overcomes the drawbacks related to the fingerprint minutiae information: with minutiae, the frequency-based approach would have to consider a high number of ROIs, resulting in the coding of the whole fingerprint image and, consequently, in a high-dimensional feature vector.

At the same time, iris preprocessing aims to detect the circular region surrounding the feature, generating an iris ROI as well. For best results, we adopted a Log-Gabor-algorithm-based codifier to encode both fingerprint and iris features, thus obtaining a unified template. Successively, the HD on the fused template has been used for the similarity index computation.

The multimodal biometric system has been tested on different congruent datasets obtained from the official FVC2002 DB2 fingerprint database [30] and the BATH iris database [31]. The first test, conducted on ten users, has resulted in FAR = 0% and FRR = 5.71%, while the tests conducted on the FVC2002 DB2A and BATH databases resulted in FAR = 0% and FRR = 7.28% ÷ 9.7%.

Future works will aim to design and prototype an embedded recognizer integrating feature acquisition and processing in a smart device, without biometric data transmission between the different components of a biometric authentication system [6], [16].

REFERENCES

[1] A. Ross and A. Jain, “Information fusion in biometrics,” Pattern Recogn. Lett., vol. 24, pp. 2115–2125, 2003. DOI: 10.1016/S0167-8655(03)00079-5.

[2] F. Yang and B. Ma, “A new mixed-mode biometrics information fusion based-on fingerprint, hand-geometry and palm-print,” in Proc. 4th Int. IEEE Conf. Image Graph., 2007, pp. 689–693. DOI: 10.1109/ICIG.2007.39.

[3] J. Cui, J. P. Li, and X. J. Lu, “Study on multi-biometric feature fusion and recognition model,” in Proc. Int. IEEE Conf. Apperceiving Comput. Intell. Anal. (ICACIA), 2008, pp. 66–69. DOI: 10.1109/ICACIA.2008.4769972.

[4] S. K. Dahel and Q. Xiao, “Accuracy performance analysis of multimodal biometrics,” in Proc. IEEE Syst., Man Cybern. Soc., Inf. Assur. Workshop, 2003, pp. 170–173. DOI: 10.1109/SMCSIA.2003.1232417.

[5] A. Ross, K. Nandakumar, and A. K. Jain, Handbook of Multibiometrics. Berlin, Germany: Springer-Verlag. ISBN 978-0-387-22296-7.

[6] UK Biometrics Working Group (BWG). Biometrics Security Concerns. (2009, Nov.). [Online]. Available: http://www.cesg.gov.uk/policy_technologies/biometrics/index.shtml, 2003.

[7] S. Prabhakar, A. K. Jain, and J. Wang, “Minutiae verification and classification,” presented at the Dept. Comput. Eng. Sci., Michigan State Univ., East Lansing, MI, 1998.

[8] V. Conti, C. Militello, S. Vitabile, and F. Sorbello, “A multimodal technique for an embedded fingerprint recognizer in mobile payment systems,” Int. J. Mobile Inf. Syst., vol. 5, no. 2, pp. 105–124, 2009.

[9] N. K. Ratha, R. M. Bolle, V. D. Pandit, and V. Vaish, “Robust fingerprint authentication using local structural similarity,” in Proc. 5th IEEE Workshop Appl. Comput. Vis., Dec. 4–6, 2000, pp. 29–34. DOI: 10.1109/WACV.2000.895399.

[10] Y. Zhu, T. Tan, and Y. Wang, “Biometric personal identification on iris patterns,” in Proc. 15th Int. Conf. Pattern Recogn., 2000, vol. 2, pp. 805–808.

[11] L. Ma, Y. Wang, and D. Zhang, “Efficient iris recognition by characterizing key local variations,” IEEE Trans. Image Process., vol. 13, no. 6, pp. 739–750, Jun. 2004.


[12] V. Conti, G. Milici, P. Ribino, S. Vitabile, and F. Sorbello, “Fuzzy fusionin multimodal biometric systems,” in Proc. 11th LNAI Int. Conf. Knowl.-Based Intell. Inf. Eng. Syst. (KES 2007/WIRN 2007), Part I LNAI 4692.B. Apolloni et al., Eds. Berlin, Germany: Springer-Verlag, 2010,pp. 108–115.

[13] F. Besbes, H. Trichili, and B. Solaiman, “Multimodal biometric systembased on fingerprint identification and Iris recognition,” in Proc. 3rd Int.IEEE Conf. Inf. Commun. Technol.: From Theory to Applications (ICTTA2008), pp. 1–5. DOI: 10.1109/ICTTA.2008.4530129.

[14] G. Aguilar, G. Sanchez, K. Toscano, M. Nakano, and H. Perez, “Multi-modal biometric system using fingerprint,” in Proc. Int. Conf. Intell. Adv.Syst. 2007, pp. 145–150. DOI: 10.1109/ICIAS.2007.4658364.

[15] V. C. Subbarayudu and M. V. N. K. Prasad, “Multimodal biometric sys-tem,” in Proc. 1st Int. IEEE Conf. Emerging Trends Eng. Technol., 2008,pp. 635–640. DOI 10.1109/ICETET.2008.93.

[16] P. Ambalakat. Security of biometric authentication systems. 21st Com-put. Sci. Semin. (SA1-T1-1). (2009, Nov.). [Online]. Available: http://www.rh.edu/∼rhb/cs_seminar_2005/SessionA1/ambalakat.pdf, 2005

[17] A. Ross and R. Govindarajan, “Feature level fusion using hand and facebiometrics,” in Proc. SPIE Conf. Biometric Technol. Human IdentificationII, Mar. 2005, vol. 5779, pp. 196–204.

[18] D. J. Field, “Relations between the statistics of natural images and theresponse profiles of cortical cells,” J. Opt. Soc. Amer., vol. 4, pp. 2379–2394, 1987.

[19] M. Kawagoe and A. Tojo, “Fingerprint pattern classification,” Pat-tern Recogn., vol. 17, no. 3, pp. 295–303, 1984. DOI: 10.1016/0031-3203(84)90079-7.

[20] M. L. Pospisil, “The human Iris structure and its usages,” Acta Univ.Palacki Phisica, vol. 39, pp. 87–95, 2000.

[21] R. C. Gonzalez and R. E. Woods, Digital Image Processing. EnglewoodCliffs, NJ: Prentice-Hall, 2008.

[22] J. Canny, “A computational approach to edge detection,” IEEE Trans.Pattern Anal. Mach. Intell., vol. 8, no. 6, pp. 679–698, Nov. 1986.

[23] P. V. C. Hough, “Method and means for recognizing complex patterns,”U.S. Patent 3 069 654, Dec. 18, 1962.

[24] A. Kumar and G. K. H. Pang, “Defect detection in textured materialsusing gabor filters,” IEEE Trans. Ind. Appl., vol. 38, no. 2, pp. 425–440,Mar./Apr. 2002.

[25] L. Hong, Y. Wan, and A. Jain, “Fingerprint image enhancement, algorithmand performance evaluation,” IEEE Trans. Pattern Anal. Mach. Intell.,vol. 20, no. 8, pp. 777–789, Aug. 1998.

[26] L. Masek. (2003). “Recognition of human Iris patterns for bio-metric identification,” Master’s thesis, Univ. Western Australia, Aus-tralia. (2009, Nov.). [Online]. Available: http://www.csse.uwa.edu.au/-pk/studentprojects/libor/, 2003

[27] J. Daugman, “The importance of being random: Statistical principles ofiris recognition,” Pattern Recogn., vol. 36, pp. 279–291, 2003.

[28] J. Daugman, “How iris recognition works,” IEEE Trans. CircuitsSyst. Video Technol., vol. 14, no. 1, pp. 21–30, Jan. 2004. DOI:10.1109/TCSVT.2003.818350.

[29] Celoxica Ltd. (2009, Nov.). [Online]. Available: http://agilityds.com/products/

[30] Fingerprint Verification Competition FVC2002. (2009, Nov.). [Online].Available: http://bias.csr.unibo.it/fvc2002/

[31] BATH Iris Database, University of Bath Iris Image Database.(2009, Nov.). [Online]. Available: http://www.bath.ac.uk/eleceng/research/sipg/irisweb/

[32] J. Q. Z. Shi, X. Zhao, and Y. Wang, “A novel fingerprint matching methodbased on the hough transform without quantization of the hough space,”in Proc. 3rd Int. Conf. Image Graph. (ICIG 2004), pp. 262–265. ISBN0-7695-2244-0.

[33] A. Nagar, K. Nandakumar, and A. K. Jain, “Securing fingerprint template:Fuzzy vault with minutiae descriptors,” in Proc. 19th Int. Conf. PatternRecogn. (ICPR 2008), pp. 1–4. ISBN 978-1-4244-2174-9.

[34] J. Li, X. Yang, J. Tian, P. Shi, and P. Li, “Topological structure-basedalignment for fingerprint fuzzy vault,” presented at the 19th Int. Conf.Pattern Recogn. (ICPR), Bejing, China. ISBN 978-1-4244-2174-9.

[35] H. Mehrotra, B. Majhi, and P. Gupta, “Multi-algorithmic Iris authentica-tion system,” presented at the World Acad. Sci., Eng. Technol., BuenosAires, Argentina, vol. 34. 2008. ISSN 2070-3740.

[36] A. B. Khalifa and N. E. B. Amara, “Bimodal biometric verification withdifferent fusion levels,” in Proc. 6th Int. Multi-Conf. Syst., Signals Devices,2009, SSD ’09, pp. 1–6. DOI: 10.1109/SSD.2009.4956731.

[37] Y. Guo, G. Zhao, J. Chen, M. Pietikainen, and Z. Xu, “A new gabor phasedifference pattern for face and ear recognition,” presented at the 13th Int.Conf. Comput. Anal. Images Patterns, Munster, Germany, Sep. 2–4, 2009.

[38] A. Ross, A. K. Jain, and J. Reisman, “A hybrid fingerprint matcher,” Pattern Recogn., vol. 36, no. 7, pp. 1661–1673, Jul. 2003.

Vincenzo Conti received the Laurea (summa cum laude) and the Ph.D. degrees in computer engineering from the University of Palermo, Palermo, Italy, in 2000 and 2005, respectively.

Currently, he is a Postdoc Fellow with the Department of Computer Engineering, University of Palermo. His research interests include biometric recognition systems, programmable architectures, user ownership in multi-agent systems, and bioinformatics. In each of these research fields, he has produced many publications in national and international journals and conferences. He has participated in several research projects funded by industries and research institutes in his research areas.

Carmelo Militello received the Laurea (summa cum laude) degree in computer engineering in 2006 from the University of Palermo, Palermo, Italy, with the following thesis: “An Embedded Device Based on Fingerprints and SmartCard for Users Authentication. Study and Realization on Programmable Logical Devices.” From January 2007 to December 2009, he attended the Ph.D. course in the Department of Computer Engineering (DINFO), University of Palermo.

He is currently a member of the Innovative Computer Architectures (IN.C.A.) Group of the DINFO, coordinated by Prof. Filippo Sorbello. His research interests include embedded biometric systems prototyped on reconfigurable architectures.

Filippo Sorbello (M’91) received the Laurea degree in electronic engineering from the University of Palermo, Palermo, Italy, in 1970.

He is a Professor of computer engineering with the Department of Computer Engineering (DINFO), University of Palermo, Palermo, Italy. He is a founding member of the department and served as the Department Head for the first two terms. From 1995 to 2009, he served as the Director of the Office for Information Technology (CUC) of the University of Palermo. His research interests include neural network applications, real-time image processing, biometric authentication systems, multi-agent system security, and digital computer architectures. He has chaired and participated as a member of the program committees of several national and international conferences. He has coauthored more than 150 scientific publications.

Prof. Sorbello is a member of the IEEE Computer Society, the Association for Computing Machinery (ACM), the Italian Association for Artificial Intelligence (AIIA), the Italian Association for Computing (AICA), and the Italian Association of Electrical, Electronic, Control, and Computer Engineers (AEIT).

Salvatore Vitabile (M’07) received the Laurea degree in electronic engineering and the Ph.D. degree in computer science from the University of Palermo, Palermo, Italy, in 1994 and 1999, respectively.

He is currently an Assistant Professor with the Department of Biopathology, Medical and Forensic Biotechnologies (DIBIMEF), University of Palermo, Palermo, Italy. His research interests include neural network applications, biometric authentication systems, exotic architecture design and prototyping, real-time driver assistance systems, multi-agent system security, and medical image processing. He has coauthored more than 100 scientific papers in refereed journals and conferences.

Dr. Vitabile has joined the Editorial Board of the International Journal of Information Technology and Communications and Convergence. He has chaired, organized, and served as a member of the technical program committees of several international conferences, symposia, and workshops. He is currently a member of the Board of Directors of the Italian Society of Neural Networks (SIREN) and the IEEE Engineering in Medicine and Biology Society.