Iris matching using multi-dimensional artificial neural network


www.ietdl.org

Published in IET Computer Vision. Received on 26th August 2010. Revised on 24th November 2010. doi: 10.1049/iet-cvi.2010.0133

ISSN 1751-9632

R.M. Farouk¹, R. Kumar², K.A. Riad¹

¹ Department of Mathematics, Faculty of Science, Zagazig University, Egypt
² Department of Electrical and Computer Engineering, National University of Singapore, Singapore
E-mail: [email protected]

Abstract: Iris recognition is one of the most widely used biometric techniques for personal identification. Identification is achieved in this work by exploiting the fact that iris patterns are statistically unique and suitable for biometric measurements. In this study, a novel method for recognising these iris patterns is presented, based on a multi-dimensional artificial neural network (MDANN). The proposed technique has the distinct advantage of using the entire resized iris as a single input. It offers excellent pattern recognition properties, as the iris texture is unique to every person. The system is trained and tested using two publicly available databases (CASIA and UBIRIS). The proposed approach shows significant promise and potential for improvement compared with conventional matching techniques, with regard to both matching time and efficiency of results.

1 Introduction

The developments in science and technology have made it possible to use biometrics in applications to establish or confirm the identity of individuals. Applications such as passenger control in airports, access control in restricted areas, border control, database access and financial services are some of the examples where biometric technology has been applied for more reliable identification and verification.

Biometrics is inherently a more reliable and capable technique for authenticating individuals by their own physiological or behavioural characteristics. The features used for personal identification by current biometric applications include facial features, fingerprints, iris, palm-prints, retina, handwritten signature, DNA, gait, etc. [1, 2].

The work in this paper focuses on iris biometrics. The iris is the coloured ring of tissue around the pupil through which light enters the interior of the eye. The pupil region generally appears darker than the iris. However, the pupil may have specular highlights, and cataracts can lighten the pupil. Fig. 1 shows an example of an image acquired by a commercial iris biometric system (from the UBIRIS database). The unique pattern on the surface of the iris is formed during the first year of life, and pigmentation of the stroma takes place over the first few years. The minute details of the iris texture are believed to be determined randomly and not related to any genetic factors [3]. Iris patterns are statistically unique and suitable for biometric measurements [4]. The United Arab Emirates expellees tracking and border control system is an outstanding example of iris technology, as discussed in [5].


& The Institution of Engineering and Technology 2011

1.1 Related work

Research in the area of iris recognition has been receiving considerable attention, and a number of techniques and algorithms have been proposed over the last few years. The most important work in the early history of iris biometrics is that of Daugman [4], which describes an operational iris recognition system in some detail, together with Daugman's patent [6]. The approach presented by Wildes [3] combines edge detection with the Hough transform for iris localisation. Bowyer et al. [1] recently presented an excellent review of almost all iris recognition methods.

Xu et al. [7] proposed an efficient iris recognition system based on an intersecting cortical model (ICM) neural network, which mainly consists of two parts. The first part is image preprocessing, which has three steps (segmentation, normalisation and image enhancement). In the second part, the ICM neural network is used to generate iris codes, and the Hamming distance between two iris codes is calculated to measure their dissimilarity. Sarhan [8] used the discrete cosine transform for feature extraction and artificial neural networks for classification.

In the last year alone, iris recognition has attracted the attention of many researchers, and different ideas have been formulated and published. For example, in [9] a biorthogonal wavelet-based iris recognition system is modified and demonstrated to perform off-angle iris recognition. An efficient and robust segmentation of noisy iris images for non-cooperative iris recognition is described in [10]. Iris image segmentation and sub-optimal images are discussed in [11]. Comparison and combination of iris matchers for reliable personal authentication are introduced in [12]. Noisy iris

IET Comput. Vis., 2011, Vol. 5, Iss. 3, pp. 178–184, doi: 10.1049/iet-cvi.2010.0133


segmentation, with boundary regularisation and reflections removal, is discussed in [13].

1.2 Outline

In this paper, we first present the active contour models for iris preprocessing and segmentation, which is a crucial step for the success of any iris recognition system, since falsely represented iris pattern data will corrupt the generated templates, resulting in poor recognition rates. Once the iris region is successfully segmented from an eye image, it is convolved with a 2D Gabor wavelet to extract the iris texture. Finally, the MDANN is trained with a database of the convolved irises, and then tested for recognition of patterns. The flow diagram for the various processing stages of the proposed system is shown in Fig. 2.

2 Segmentation

Segmentation is the stage of locating the iris region in an eye image where, as mentioned, the iris region is the annular part between the pupil and the sclera, as shown in Fig. 1. Iris segmentation is achieved in three main steps. The first step locates the outer iris boundary using the circular Hough transform. The second locates the inner iris boundary using the discrete circular active contour (DCAC). The last step locates the eyelids, eyelashes and noise regions.

2.1 Hough transform

The Hough transform is a standard computer vision algorithm that is used to determine the parameters of simple geometric

Fig. 1 Image (Img_141_1_1) from the UBIRIS database


objects, such as lines and circles, present in an image. The circular Hough transform is employed to deduce the radius and centre coordinates of the pupil and iris regions. An automatic segmentation algorithm based on the circular Hough transform is employed by Wildes et al. [3] and Tisse et al. [14]. In the method proposed by Wildes, an edge map of the image is first obtained by thresholding the magnitude of the image intensity gradient

\left| \nabla G(x, y) \ast I(x, y) \right| \quad (1)

G(x, y) = \frac{1}{2\pi\sigma^2} \exp\!\left( -\frac{(x - x_0)^2 + (y - y_0)^2}{2\sigma^2} \right) \quad (2)

where G(x, y) is a Gaussian smoothing function with scaling parameter σ, chosen to select the proper scale of edge analysis. First, an edge map is generated by calculating the first derivatives of intensity values in an eye image and then thresholding the result. From the edge map, votes are cast in Hough space to maximise the defined Hough transform for the desired contour. The obtained edge points (x_i, y_i), i = 1, 2, ..., n, vote for the parameters of the circles passing through them. These parameters are the centre coordinates x_c and y_c, and the radius r, which together define any circle.

A Hough transform can be written as

H(x_c, y_c, r) = \sum_{i=1}^{n} h(x_i, y_i, x_c, y_c, r) \quad (3)

h(x_i, y_i, x_c, y_c, r) = \begin{cases} 1, & \text{if } g(x_i, y_i, x_c, y_c, r) = 0 \\ 0, & \text{otherwise} \end{cases} \quad (4)

where the parametric function is g(x_i, y_i, x_c, y_c, r) = (x_i − x_c)² + (y_i − y_c)² − r². In order to detect the limbus, only vertical edge information is used. The upper and lower parts, which contain the horizontal edge information, are usually covered by the two eyelids. The horizontal edge information is used for detecting the upper and lower eyelids, which are modelled as parabolic arcs.
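The voting scheme of (3) and (4) can be sketched in code. The following is an illustrative NumPy implementation, not the authors' code; the synthetic edge points, the angular sampling and the radius grid are assumptions chosen for demonstration.

```python
import numpy as np

def circular_hough(edge_points, radii, shape, n_angles=60):
    """Accumulate votes H(xc, yc, r): every edge point votes for all
    circle centres lying at distance r from it."""
    h, w = shape
    acc = np.zeros((h, w, len(radii)), dtype=np.int32)
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    for x, y in edge_points:
        for k, r in enumerate(radii):
            for t in angles:
                xc = int(round(x - r * np.cos(t)))
                yc = int(round(y - r * np.sin(t)))
                if 0 <= xc < w and 0 <= yc < h:
                    acc[yc, xc, k] += 1
    return acc

# Synthetic edge map: 40 points on a circle of radius 10 centred at (20, 20).
theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
pts = [(20 + 10 * np.cos(t), 20 + 10 * np.sin(t)) for t in theta]
acc = circular_hough(pts, radii=[8, 10, 12], shape=(40, 40))
yc, xc, k = np.unravel_index(acc.argmax(), acc.shape)
print(xc, yc, [8, 10, 12][k])  # accumulator peak at/near the true centre and radius
```

The accumulator peak recovers the generating circle because only the true (x_c, y_c, r) triple satisfies g = 0 for every edge point.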

Fig. 2 Flow diagram of the proposed system


2.2 Active contour models

Ritter [15] used active contour models for localising the pupil in eye images. In order to improve accuracy, Ritter used the variance image rather than the edge image. A point interior to the pupil is located from a variance image, and then a DCAC is created with this point as its centre. The DCAC is then moved under the influence of internal and external forces until it reaches equilibrium, and the pupil is localised.

In 2003, Ritter and Cooper [16] proposed a model that detects the pupil and limbus by activating and controlling the active contour using two defined forces: internal and external. The internal forces are responsible for expanding the contour into a perfect polygon with a radius d larger than the contour's average radius. The internal force F_int,i applied to each vertex V_i is defined as

F_{\text{int},i} = \bar{V}_i - V_i \quad (5)

where \bar{V}_i is the expected position of the vertex in the perfect polygon. The position of \bar{V}_i can be obtained with respect to C_r, the average radius of the current contour, and the contour centre C = (C_x, C_y). The centre of a contour, which is the average position of all contour vertices, is defined as

C = (C_x, C_y) = \frac{1}{n} \sum_{i=1}^{n} V_i \quad (6)

The average radius of the contour, which is the average distance of all the vertices from the defined centre point C, is defined as

C_r = \frac{1}{n} \sum_{i=1}^{n} \left\| V_i - C \right\| \quad (7)

and the position of the vertices of the expected perfect polygon is obtained as

\bar{V}_i = \left( C_x + (C_r + d)\cos(2\pi i/n),\; C_y + (C_r + d)\sin(2\pi i/n) \right) \quad (8)

where n is the total number of vertices. The internal forces are designed to expand the contour and keep it circular. The force model assumes that the pupil and limbus are globally circular, rather than locally, to minimise the undesired deformations due to specular reflections and dark patches near the pupil boundary. The contour detection


process of the model is based on the equilibrium of the defined internal forces with the external forces. The external forces are obtained from the grey level intensity values of the image and are designed to push the vertices inward. The magnitude of the external forces is defined as

\left\| F_{\text{ext},i} \right\| = I(V_i) - I\!\left( V_i + \hat{F}_{\text{ext},i} \right) \quad (9)

where I(V_i) is the grey level value of the nearest neighbour to V_i. The direction of the external force for each vertex, \hat{F}_{\text{ext},i}, is defined as a unit vector given by

\hat{F}_{\text{ext},i} = \frac{C - V_i}{\left\| C - V_i \right\|} \quad (10)

Therefore the external force over each vertex can be written as

F_{\text{ext},i} = \left\| F_{\text{ext},i} \right\| \hat{F}_{\text{ext},i} \quad (11)

The movement of the contour is based on the composition of the internal and external forces over the contour vertices. The replacement of each vertex is obtained iteratively by

V_i(t + 1) = V_i(t) + \beta F_{\text{int},i}(t) + (1 - \beta) F_{\text{ext},i}(t) \quad (12)

where β is a defined weight that controls the pace of the contour movement and sets the equilibrium condition of the internal and external forces. The final equilibrium is achieved when the average radius and centre of the contour remain unchanged over m successive iterations. The application of the DCAC to three images is shown in Fig. 4, which gives an example of avoiding the segmentation errors caused by the Hough transform shown in Fig. 3.
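The iterative contour update of (5)–(12) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the synthetic radial-ramp image, the parameter values (d = 2, β = 0.65) and the fixed iteration count are assumptions standing in for a real eye image and the equilibrium test over m iterations.

```python
import numpy as np

def dcac_step(V, img, delta=2.0, beta=0.65):
    """One iteration of a discrete circular active contour: internal forces
    expand the contour toward a perfect polygon of radius Cr + delta, while
    external forces derived from the grey levels push the vertices inward."""
    n = len(V)
    C = V.mean(axis=0)                              # contour centre, eq. (6)
    Cr = np.linalg.norm(V - C, axis=1).mean()       # average radius, eq. (7)
    ang = 2 * np.pi * np.arange(n) / n
    V_bar = C + (Cr + delta) * np.column_stack([np.cos(ang), np.sin(ang)])
    F_int = V_bar - V                               # eq. (5)
    d_hat = (C - V) / np.linalg.norm(C - V, axis=1, keepdims=True)  # eq. (10)

    def grey(p):  # nearest-neighbour grey level, clipped to the image
        y = int(np.clip(round(float(p[1])), 0, img.shape[0] - 1))
        x = int(np.clip(round(float(p[0])), 0, img.shape[1] - 1))
        return float(img[y, x])

    mag = np.array([grey(v) - grey(v + u) for v, u in zip(V, d_hat)])  # eq. (9)
    return V + beta * F_int + (1 - beta) * mag[:, None] * d_hat        # eq. (12)

# Synthetic eye: dark pupil of radius 12 centred at (32, 32); the grey level
# rises linearly outside the pupil so the external forces engage there.
yy, xx = np.mgrid[0:64, 0:64]
img = 6.0 * np.maximum(0.0, np.hypot(xx - 32, yy - 32) - 12)

ang = 2 * np.pi * np.arange(16) / 16
V = np.column_stack([32 + 5 * np.cos(ang), 32 + 5 * np.sin(ang)])  # start inside
for _ in range(40):
    V = dcac_step(V, img)
Cr = np.linalg.norm(V - V.mean(axis=0), axis=1).mean()
print(Cr)  # the average radius settles near the pupil boundary
```

Starting inside the dark pupil, only the internal forces act, so the contour expands; once it crosses the boundary the external forces push it back, and the contour hovers near the pupil radius.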

2.3 Detecting eyelids, eyelashes and noise regions

The eyelids are detected by first fitting a line to the upper and lower eyelid using the linear Hough transform. A horizontal line is then drawn which intersects with the first line at the iris edge that is closest to the pupil. A second horizontal line allows the maximum isolation of eyelid regions.

Detecting eyelashes requires a proper choice of features and classification procedure because of the complexity and randomness of the patterns. The eyelash detection proposed by Kong and Zhang [17] considers eyelashes as two groups: separable eyelashes, which are isolated in the image, and multiple eyelashes, which are bunched together and overlap in the eye; it applies two different feature extraction methods to detect them. Separable eyelashes are

Fig. 3 Errors in segmentation by the Hough transform for four different iris images

a Image from the CASIA-Iris V. 1
b Image from the CASIA-Iris V. 3-Interval
c Image from the CASIA-Iris V. 3-Lamp
d Image from the UBIRIS database


detected using a 1D Gabor filter, since the convolution of a separable eyelash with the Gaussian smoothing function results in a low output value. The edge information is also combined with the region information to localise the noise regions, as shown in Fig. 5.

3 Resizing the segmented iris image

Since the segmented irises are not of constant size, and are also large, they cannot be used directly to train the network, as this would be highly time consuming. Hence, it is necessary to reduce the size of the iris without losing the relevant information. This is performed by dividing the segmented iris into 256 (16 × 16) blocks and summing the pixel intensity values of each individual block. This leaves a 16 × 16 matrix, in which each pixel intensity value is given a maximum of 255 and a minimum of zero. This small matrix retains the relevant extracted iris texture information, and hence can be used to train the network.
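The block-summing reduction described above can be sketched as follows. The trimming of non-divisible image sizes and the rescaling of the block sums to the 0–255 range are our assumptions, as the paper does not spell these details out.

```python
import numpy as np

def block_reduce_16(iris, out=16):
    """Shrink a segmented iris to an out x out matrix by summing the pixel
    intensities inside each block, then rescaling the sums to 0-255."""
    h, w = iris.shape
    iris = iris[: h - h % out, : w - w % out]   # trim so blocks tile evenly
    bh, bw = iris.shape[0] // out, iris.shape[1] // out
    sums = iris.reshape(out, bh, out, bw).sum(axis=(1, 3)).astype(float)
    span = sums.max() - sums.min()
    if span == 0:
        return np.zeros_like(sums)
    return 255.0 * (sums - sums.min()) / span   # map block sums to [0, 255]

rng = np.random.default_rng(0)
iris = rng.integers(0, 256, size=(200, 150)).astype(float)  # stand-in iris
small = block_reduce_16(iris)
print(small.shape, small.min(), small.max())
```

Whatever the original iris size, the network input is always a fixed 16 × 16 matrix with values spanning 0 to 255.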

4 Gabor feature extraction

Once the segmented iris region is resized, the relevant texture information needs to be extracted. The resized segmented iris

region is convolved with the 2D Gabor wavelet

G(x, y, \theta, f) = \exp\!\left[ -\frac{1}{2}\left( \left( \frac{x'}{\sigma_x} \right)^2 + \left( \frac{y'}{\sigma_y} \right)^2 \right) \right] \cos(2\pi f x') \quad (13)

x' = x\cos(\theta) + y\sin(\theta)
y' = y\cos(\theta) - x\sin(\theta)

where σ_x and σ_y refer to the standard deviations of the Gaussian envelope, and θ and f refer to the orientation and the frequency of the Gabor filter. The convolution of the resized segmented iris with the 2D Gabor wavelet gives an image which illustrates the basic texture of the iris region, as shown in Fig. 6. The magnitude of the convolution is used for recognition by the MDANN.
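As an illustration, the filtering step can be sketched with a small NumPy implementation of (13); the kernel size and the filter parameters (θ, f, σ_x, σ_y) are assumed values, and a random matrix stands in for the resized segmented iris.

```python
import numpy as np

def gabor_kernel(size=15, theta=0.0, f=0.1, sigma_x=4.0, sigma_y=4.0):
    """2D Gabor wavelet of eq. (13): a Gaussian envelope modulating a
    cosine carrier of frequency f along orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xp = x * np.cos(theta) + y * np.sin(theta)
    yp = y * np.cos(theta) - x * np.sin(theta)
    env = np.exp(-0.5 * ((xp / sigma_x) ** 2 + (yp / sigma_y) ** 2))
    return env * np.cos(2 * np.pi * f * xp)

def conv2_same(img, ker):
    """Naive zero-padded 'same'-size 2D convolution (NumPy only)."""
    kh, kw = ker.shape
    padded = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    kf = ker[::-1, ::-1]  # flip the kernel for true convolution
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (padded[i:i + kh, j:j + kw] * kf).sum()
    return out

rng = np.random.default_rng(1)
iris16 = rng.random((16, 16))                         # stand-in for the resized iris
texture = np.abs(conv2_same(iris16, gabor_kernel()))  # magnitude used for matching
print(texture.shape)
```

The magnitude image has the same size as the input, so the network input stays 16 × 16 after filtering.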

5 Iris recognition by MDANN

Learning in neural networks is an easy process because it depends only on locally available information; however, since the information is mixed together at the storage elements, unambiguous retrieval of stored information is not simple.

Fig. 5 Iris localisation and segmentation

a Showing the perfect iris localisation, where black regions denote the detected eyelid and eyelash regions
b Showing the segmented irises that will be used for training the MDANN

Fig. 4 Segmentation by the DCAC


Recognition involves knowing whether or not a new input has been presented before and stored in memory. Gradient descent is generalised to a non-linear, multi-layer feed-forward network through back propagation. This is a supervised learning algorithm that learns by first computing an error signal and then propagating the error backward through the network, assuming the network weights are the same in both the backward and forward directions. Back propagation is a very popular and widely used network learning algorithm [18].

In this paper, we explain our work on training the irises on the MDANN using the back propagation algorithm and then matching the iris for recognition from the database. The MDANN used here consists of 21 neurons (the input layer has eight neurons, the hidden layer has 12 neurons, and the output layer has one neuron). Fig. 7 shows the structure of the MDANN; the connections between the input and the hidden layer are not shown completely for the purpose of clarity, and the complete structure is obtained by extending the connections to the entire image.
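As a sketch, the response of the hidden units and the network output can be written as follows. This is our reading of the structure, not the authors' code: the bilinear projection S_j1 × I × S_j2, the bipolar sigmoid and the weight shapes are assumptions consistent with the equations given in Section 5.1.

```python
import numpy as np

def mdann_output(I, S1, S2, Aw, Ab):
    """Forward pass: unit j projects the input matrix through its two weight
    vectors (S_j1 x I x S_j2), adds a bias and squashes the result with a
    bipolar sigmoid mapping to (-1, 1); the output is Aw(1) plus a weighted
    sum of the unit activations."""
    out = Aw[0]
    for j in range(1, len(Aw)):
        s = float(S1[j - 1] @ I @ S2[j - 1]) + Ab[j]
        out += Aw[j] * (2.0 / (1.0 + np.exp(-s)) - 1.0)
    return out

rng = np.random.default_rng(3)
I = rng.random((16, 16))               # resized, Gabor-filtered iris (stand-in)
n_hidden = 12                          # hidden units, as in the paper
S1 = rng.normal(size=(n_hidden, 16))   # left projection weights (assumed shape)
S2 = rng.normal(size=(n_hidden, 16))   # right projection weights (assumed shape)
Aw = rng.normal(size=n_hidden + 1)     # output weights, Aw(1) acting as a bias
Ab = rng.normal(size=n_hidden + 1)     # unit biases
out = mdann_output(I, S1, S2, Aw, Ab)
print(out)
```

Because each activation is bounded in (−1, 1), the output can deviate from the bias term by at most the sum of the output-weight magnitudes.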

5.1 Algorithms for the MDANN

Let S1 and S2 be structures containing n layers with k elements in each field, comprising the internal weights. Wdl, Wdr, Wdl1 and Wdr1 are matrices denoting the strengths of the image-to-neuron connections, and the components Aw, Adw, Ab, Ad and Ad1 are single-row matrices denoting the weights of the connections of the neurons with the image pixel points, their differential

Fig. 6 2D Gabor convolution, where

a Resized segmented iris
b Magnitude of the convolution of a with the 2D Gabor wavelet


values, the bias values for the neurons and their differential values.

† Step 1: calculation of the derivatives:
Let i refer to the number of patterns we are trying to train on the network, j refer to the number of regressors, which is one in excess of the number of neurons, k refer to the dimension of the input matrix, and n refer to the number of neurons.

D = \text{actual output} - \text{required output} \quad (14)

For the weights

A_d(j) = \begin{cases} A_d(j) + D, & \text{if } j = 1 \\ A_d(j) + D\left( \dfrac{2}{1 + \exp\left(-2\left(S_{j1} \times I(i) \times S_{j2}\right)\right)} - 1 \right), & \text{if } j > 1 \end{cases} \quad (15)

where the symbols S_{j1} and S_{j2} refer to the jth layer elements of the structures S1 and S2, respectively, and I(i) refers to the ith matrix input to the network. The values are calculated for all values of j. Here a_s refers to the activation signal value and is given by

a_s = \frac{-2}{1 + \exp\left(-2\left(S_{j1} \times I(i) \times S_{j2}\right)^2\right)} \quad (16)

† Step 2: updating the weights and bias values and setting their derivatives back to zero:

For all values of j: for the weights

A_w(j) = A_w(j) + \alpha A_{dw}(j) - \eta A_d(j) \quad (17)

A_{dw}(j) = \alpha A_{dw}(j) - \eta A_d(j) \quad (18)

A_d(j) = 0 \quad (19)

where \eta and \alpha refer to the initial learning rate and the momentum factor, respectively.
† Step 3: updating the internal weights and setting the derivatives back to zero:

Fig. 7 Structure of the MDANN

IET Comput. Vis., 2011, Vol. 5, Iss. 3, pp. 178–184doi: 10.1049/iet-cvi.2010.0133

www.ietdl.org

For all the coordinates (j, k)

S_{jk1} = S_{jk1} + \alpha W_{dl}(j, k) - \eta W_{dl1}(j, k) \quad (20)

W_{dl}(j, k) = \alpha W_{dl}(j, k) - \eta W_{dl1}(j, k) \quad (21)

W_{dl1}(j, k) = 0 \quad (22)

Similar equations hold for S_{jk2}, where S_{jk1} and S_{jk2} refer to the kth element in the jth layer of the structures S1 and S2, respectively.
† Step 4: calculating the output values after the training:

\text{Output} = \begin{cases} A_w(j), & \text{for } j = 1 \\ A_w(1) + \sum_{j=2}^{n} A_w(j)\left( \dfrac{2}{1 + \exp\left( \left(S_{j1} \times I(i) \times S_{j2}\right) + A_b(j) \right)} - 1 \right), & \text{for } j > 1 \end{cases} \quad (23)

The iteration continues until the value calculated in Step 4 reaches the target value that was assigned to the image. This value is unique for every distinct iris; when a random iris is tested on the network, this value determines whether the iris is already present in the database and, if present, the exact iris to which it corresponds.
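The update rule of (17)–(19) amounts to a momentum-style weight change, which can be sketched as follows; the numeric values of \eta, \alpha, the weights and the accumulated derivatives are illustrative assumptions.

```python
import numpy as np

def update_weights(Aw, Adw, Ad, eta=0.01, alpha=0.9):
    """Momentum-style update of eqs. (17)-(19): each weight moves by a
    momentum term (alpha x previous change) minus a learning-rate-scaled
    accumulated derivative, which is then reset to zero."""
    step = alpha * Adw - eta * Ad   # eq. (18): the change carried forward
    Aw = Aw + step                  # eq. (17)
    Ad = np.zeros_like(Ad)          # eq. (19)
    return Aw, step, Ad

Aw = np.array([0.5, -0.2, 0.1])     # illustrative weights
Adw = np.zeros(3)                   # previous weight changes (none yet)
Ad = np.array([1.0, -1.0, 0.5])     # accumulated error derivatives
Aw, Adw, Ad = update_weights(Aw, Adw, Ad)
print(Aw)
```

With zero initial momentum, each weight simply moves opposite to its accumulated derivative, scaled by the learning rate; on subsequent iterations the remembered change Adw smooths the trajectory.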

The MDANN is trained using 996 irises from the CASIA database and 723 irises from the UBIRIS database, separately. During the training stage, each iris is assigned a distinct arbitrary value. The trained MDANN is then subjected to a test to evaluate whether the network is capable of recognising an iris on which it has already been trained, and also whether it can reject an iris on which it has not been trained. We use some trained irises to test the ability of the MDANN to identify the trained irises.

6 Matching

In general, the matching metric gives a measure of similarity between two iris templates. This metric should give one range of values when comparing templates generated from the same eye, known as intra-class comparisons, and another range of values when comparing templates created from different irises, known as extra-class comparisons. These two cases should give distinct and separate values, so that a decision can be made with high confidence as to whether two templates are from the same iris or from two different irises.

In this paper, we have compared our matching method (MDANN) with three popular matching methods: the WED matching method introduced by Zhu et al. [19], the HD matching method introduced by Daugman [20] and the SP method introduced by Riad et al. [21].

7 Iris database

In this reported work, we have used two large, publicly and freely available iris databases, CASIA and UBIRIS. The CASIA iris database (CASIA-Iris V. 3-Interval) is a large open iris database, of which we use only a subset for performance evaluation. This database includes 1992 iris images collected from 249 different eyes (hence, 249


different classes), with eight images captured for each person. Each image has a resolution of 320 × 280 in 8-bit grey level. Four images of each class are selected randomly to constitute the training set, and the remaining images form the test set. In the preprocessing stage, we checked the segmentation accuracy of the iris boundaries subjectively and obtained an accuracy rate of 95.9% (81 images are not used), as shown in Table 1, which lists the different causes of failure of iris localisation.

The UBIRIS iris database [22] is composed of 1205 images collected from 241 persons (five images for each eye). Each image has a resolution of 200 × 150 in JPEG format. For each iris class, we choose three samples at random for training and use the remaining ones as test samples. After preprocessing, the obtained accuracy rate is 97.17% (34 images are not used), as shown in Table 1.

8 Results

To evaluate the effectiveness and the performance of the proposed method for iris recognition, we have used the above-described databases as the test and training data sets. The experiments are conducted in two modes: verification and identification. In the verification mode, the receiver operating characteristic (ROC) curve depicts the relationship between the false match rate (FMR), the rate at which non-authorised people are falsely recognised during feature comparison, and the false non-match rate (FNMR), the rate at which authorised people are falsely not recognised during feature comparison, for a 95% confidence interval, as shown in Fig. 9. The area under the ROC curve (denoted as Az)

Table 1 Failure analysis of locating the iris for different causes

Cause of failure                Number of images
                                CASIA    UBIRIS
occlusion by eyelids              31        6
inappropriate eye positioning     21       13
occlusion by eyelashes            23       11
noise within iris                  6        4
total                             81       34

Fig. 8 Training performance against number of iterations for training the MDANN


reflects how well the intra-class and extra-class distributions can be distinguished, and ranges from 0.5 to 1. For an ideal ROC curve, the value of Az should be 1; an Az value equal to 0.5 denotes that the intra- and extra-class distributions are inseparable. Hence, the ROC curve is suitable for measuring the accuracy of the matching process and showing the achieved performance of a recognition algorithm. FAR is the probability of accepting an imposter as an authorised subject, and FRR is the probability of a genuine authorised subject being rejected as an imposter. In the recognition mode, the correct recognition rate (CRR) is adopted to assess the efficacy of the algorithm, as shown in Table 2.
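The verification-mode quantities can be illustrated with synthetic score distributions; the normal score models and the threshold grid below are assumptions for demonstration, and Az is estimated via its equivalent probabilistic form, the probability that a genuine (intra-class) score exceeds an impostor (extra-class) score.

```python
import numpy as np

rng = np.random.default_rng(2)
genuine = rng.normal(0.9, 0.05, 1000)    # intra-class match scores (synthetic)
impostor = rng.normal(0.5, 0.10, 1000)   # extra-class match scores (synthetic)

thresholds = np.linspace(0.0, 1.0, 201)
fmr = [(impostor >= t).mean() for t in thresholds]   # impostors accepted
fnmr = [(genuine < t).mean() for t in thresholds]    # genuine users rejected
# Az equals P(genuine score > impostor score), estimated over all pairs.
az = (genuine[:, None] > impostor[None, :]).mean()
print(round(az, 3))
```

For well-separated intra- and extra-class distributions Az approaches 1, while fully overlapping distributions would give Az = 0.5.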

To assess the accuracy of the proposed algorithm, each test iris image in the database is individually matched against all the other iris images in the trained database. In the UBIRIS database, the total number of comparisons is 324 351, where the numbers of intra-class and extra-class comparisons are 1792 and 322 559, respectively.

The performance curve (Fig. 8) has been plotted against the number of iterations. It is observed that the performance steadily improves, approaching a value near 10^{-7}. The MDANN was trained using 996 and 723 iris images from the CASIA and UBIRIS databases, respectively.

9 Conclusion

We have presented a novel and effective approach for iris recognition, which operates using the active contour models

Table 2 CRRs achieved by four matching measures using the CASIA and UBIRIS databases

Matching measure    CRR %
WED                 98.73
SP                  98.26
HD                  98.22
MDANN               99.25

Fig. 9 Obtained ROC curves for four different matching measures using the CASIA database


for preprocessing and segmentation, the 2D Gabor wavelet for feature extraction and the MDANN for the matching process. This method is found to give results that are comparable in terms of recognition efficiency with other pattern recognition techniques, and the results are encouraging for improving the MDANN for better matching capabilities. Moreover, further work in this field could optimise the trade-off between the speed of recognition and the size of the database that can be supported by the network.

10 References

1 Bowyer, K.W., Hollingsworth, K., Flynn, P.J.: 'Image understanding for iris biometrics: a survey', Comput. Vis. Image Underst., 2008, 110, pp. 281–307

2 Duta, N.: 'A survey of biometric technology based on hand shape', Pattern Recognit., 2009, 42, pp. 2797–2806

3 Wildes, R.P.: 'Iris recognition: an emerging biometric technology', Proc. IEEE, 1997, 85, (9), pp. 1348–1363

4 Daugman, J.: 'High confidence visual recognition of persons by a test of statistical independence', IEEE Trans. Pattern Anal. Mach. Intell., 1993, 15, (11), pp. 1148–1161

5 Almualla, M.: 'The UAE iris expellees tracking and border control system'. Biometrics Consortium, Crystal City, VA, September 2005

6 Daugman, J.: 'Biometric personal identification system based on iris analysis'. U.S. Patent No. 5-291-560, 1994

7 Xu, G., Zhang, Z., Ma, Y.: 'An efficient iris recognition system based on intersecting cortical model neural network', Int. J. Cogn. Inform. Nat. Intell., 2008, 2, (3), pp. 43–56

8 Sarhan, A.M.: 'Iris recognition using discrete cosine transform and artificial neural networks', J. Comput. Sci., 2009, 5, (5), pp. 369–373

9 Abhyankara, A., Schuckers, S.: 'A novel biorthogonal wavelet network system for off-angle iris recognition', Pattern Recognit., 2010, 43, pp. 987–1007

10 Tan, T., He, Z., Sun, Z.: 'Efficient and robust segmentation of noisy iris images for non-cooperative iris recognition', Image Vis. Comput., 2010, 28, pp. 223–230

11 Matey, J.R., Broussard, R., Kennell, L.: 'Iris image segmentation and sub-optimal images', Image Vis. Comput., 2010, 28, pp. 215–222

12 Kumar, A., Passi, A.: 'Comparison and combination of iris matchers for reliable personal authentication', Pattern Recognit., 2010, 43, pp. 1016–1026

13 Labati, R.D., Scotti, F.: 'Noisy iris segmentation with boundary regularization and reflections removal', Image Vis. Comput., 2010, 28, pp. 270–277

14 Tisse, C., Martin, L., Torres, L., Robert, M.: 'Person identification technique using human iris recognition', Proc. Vis. Interface, 2002, pp. 294–299

15 Ritter, N.: 'Location of the pupil–iris border in slit-lamp images of the cornea'. Proc. Int. Conf. on Image Analysis and Processing, 1999

16 Ritter, N., Cooper, J.R.: 'Locating the iris: a first step to registration and identification'. Proc. Ninth IASTED Int. Conf. on Signal and Image Processing, 2003, pp. 507–512

17 Kong, W., Zhang, D.: 'Eyelash detection model for accurate iris segmentation'. Proc. ISCA 16th Int. Conf. on Computers and their Applications, 2001, pp. 204–207

18 Anderson, J.A.: 'An introduction to neural networks' (Prentice Hall Publications, 1995)

19 Zhu, Y., Tan, T., Wang, Y.: 'Biometric personal identification based on iris patterns'. Proc. 15th Int. Conf. on Pattern Recognition (ICPR), Spain, 2000, vol. 2, p. 2801

20 Daugman, J.: 'How iris recognition works', IEEE Trans. Circuits Syst. Video Technol., 2004, 14, (1), pp. 21–30

21 Riad, K.A., Farouk, R.M., Othman, I.A.: 'Time of matching reduction and improvement of sub-optimal image segmentation for iris recognition'. OSDA-2010, 2010, pp. 49–66

22 Proenca, H., Alexandre, L.A.: 'UBIRIS: a noisy iris image database'. 13th Int. Conf. on Image Analysis and Processing, 2005, http://iris.di.ubi.pt, pp. 970–977
