
Face Recognition based on Ordinal Correlation

Ronny Tjahyadi, Wanquan Liu*, Senjian An and Svetha Venkatesh

Department of Computing, Curtin University of Technology, GPO Box U1987, Perth WA 6845, Australia
Email: {tjahyadi, wanquan, senjian, svetha}@cs.curtin.edu.au
*Corresponding author

Abstract: In this paper, we propose a new face recognition system based on the ordinal correlation principle. First, we explain the ordinal similarity measure for any two images and then propose a systematic approach to face recognition based on this ordinal measure. In addition, we design an algorithm for selecting a suitable classification threshold using the information obtained from the training database. Experimentation is conducted on the Yale datasets and the results show that the proposed face recognition approach outperforms the Eigenface and 2DPCA approaches significantly and that the threshold selection algorithm works effectively. Further, we carry out experimentation with various noise algorithms and the results show that the ordinal approach outperforms the Eigenface and 2DPCA approaches under different noise algorithms.

Keywords: Ordinal measure, Face recognition, Image processing, Computer vision

Reference to this paper should be made as follows: Wanquan Liu, 'Face Recognition based on Ordinal Correlation', Int. J. Intelligent Systems Technologies and Applications, Vol. x, No. x, pp. xxx–xxx, 2006.

Biographical notes: Ronny Tjahyadi is a research assistant at the Department of Computing at Curtin University of Technology in Perth, Western Australia. He received his MSc degree in Computer Science from Curtin University of Technology in 2005. His research interests include biometrics, machine learning, image processing and artificial intelligence.

Wanquan Liu is a Senior Lecturer in the Department of Computing at Curtin University of Technology, Australia. He obtained his PhD in 1993 from Shanghai Jiaotong University, China. He held U2000, ARC and JSPS fellowships from 1998 to 2003 and has published over 100 research papers in control, signal processing and artificial intelligence. He is an Associate Editor of the International Journal of Information and Systems Sciences and an IPC member for several international conferences. His research interests include biometrics, machine learning, behavior recognition, image processing and artificial intelligence.

Senjian An is a Research Fellow in the Department of Computing at Curtin University of Technology. He obtained his PhD in 1996 from Peking University in China and was then an Associate Professor at Beijing Institute of Technology. From 1999 to 2004, he was a Research Fellow at the University of Sydney and Melbourne University. He has published over 30 papers in control, signal processing and artificial intelligence. His current research interests are in artificial intelligence and signal processing.

Dr Svetha Venkatesh is a Professor at the School of Computing at Curtin University of Technology, Perth, Western Australia. Her research is in the areas of large scale pattern recognition, image understanding and applications of computer vision to image and video indexing and retrieval. She is the author of about 200 research papers in these areas and is currently the co-director of the Center of Excellence in Intelligent Operations Management.

1 Introduction

As one of the most successful applications of image processing, face recognition is flourishing due to its broad applications in surveillance and secure systems. At least two reasons account for this trend: one is the wide range of commercial and law enforcement applications, and the other is the availability of feasible technologies after many years of research. Although current machine recognition systems have reached a certain level of maturity, they are still far from matching the human perception system.

Current approaches to face recognition include the Eigenface approach based on Principal Component Analysis (PCA) [22], Fisherfaces [2], Kernel PCA [12], Kernel Direct Discriminant Analysis [15], Two-Dimensional PCA (2DPCA) [25] and neural network approaches [9, 14, 16, 27]. For a comprehensive survey, one can refer to the recent paper [28]. These approaches are based on different methodologies and have their own advantages. However, they share a common nature: they operate on individual pixel intensity values. When these approaches are used with the Euclidean distance or other metrics, they are usually sensitive to outliers and to nonlinear monotonically increasing transforms [3]. Such outliers are generally caused by different types of noise, such as impulsive and bit-error noise, which are not avoidable in some cases. Therefore, it is necessary to find a face recognition approach based on an image similarity measure that is insensitive to such noise.

The ordinal approach can overcome the outliers mentioned above. It operates on the ranks of the pixels instead of on the pixel values directly. Thus, only the relative ordering between pixel values is of consequence in determining the similarity distance between any two images. Two famous ordinal measures used in psychological measurements are Kendall's τ and Spearman's ρ [11]. Another ordinal measure for image correspondence, κ, was presented in [3, 4]. Roughly speaking, the measure κ is based on the ranking of one image with respect to the ranks of the other, and it is robust to outliers and insensitive to rank distortions [3].
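As a small illustration of the rank-based idea (not the region-based measure used later in this paper), the sketch below compares two grey-level images with Kendall's τ and Spearman's ρ on their flattened pixel arrays; the image size and the gamma-style transform are arbitrary choices for the example.

```python
import numpy as np
from scipy.stats import kendalltau, spearmanr

def rank_similarity(img_a: np.ndarray, img_b: np.ndarray):
    """Rank-correlation similarity between two equal-size grey-level images.
    Only the relative ordering of pixel values matters, so any monotonically
    increasing intensity transform of an image leaves its scores unchanged."""
    a, b = img_a.ravel(), img_b.ravel()
    tau, _ = kendalltau(a, b)   # Kendall's tau (tie-corrected)
    rho, _ = spearmanr(a, b)    # Spearman's rho
    return tau, rho

# Toy check: a nonlinear but monotone transform does not change the ranking.
rng = np.random.default_rng(0)
x = rng.integers(0, 256, size=(57, 48)).astype(float)
y = 255.0 * (x / 255.0) ** 2.2          # monotone "gamma" transform of x
print(rank_similarity(x, y))            # both scores equal 1.0
```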

A framework for performing ordinal-based image correspondence was proposed in [8]. It was shown there that this framework includes Kendall's τ and Spearman's ρ as special cases. More importantly, it also allows one to design other correspondence measures that can incorporate region-based spatial information. This idea has been used for shape correspondence in [6] and image correspondence in [3, 8]. The attractive property of the ordinal principle is its robustness and effectiveness, which can be exploited in many different applications. To our knowledge, this ordinal approach has not been used for face recognition, though it has been used in face detection [23].

In this paper, we evaluate the performance of the region-based ordinal measure proposed in [3, 8] for face recognition. First, we introduce the framework of the region-based ordinal structure conceptually. Second, we establish a face recognition system based on the region-based ordinal measure. Then the classification threshold selection issue is investigated in detail. Experimentation is conducted on the Yale datasets and the results show that the proposed face recognition approach outperforms the Eigenface and 2DPCA approaches significantly. Finally, further experimentation is carried out to investigate the performance of the face recognition algorithms with three noise algorithms: salt and pepper, Gaussian, and Poisson. The experimental results show that the ordinal approach outperforms both the Eigenface and 2DPCA approaches under the tested noise algorithms.

2 Region-based Ordinal Image Similarity

In this section, we introduce the main idea of the ordinal correlation measure for two images, which is mainly adopted from the reference paper [8]. Figure 1 presents the structure of the region-based ordinal measure for two images. Suppose we have two images X and Y of the same size. Let X_1, X_2, ..., X_n and Y_1, Y_2, ..., Y_n be the pixels belonging to images X and Y respectively. We divide the image into a number of regions R_1, R_2, ..., R_m and extract the pixels from both images that belong to these areas. Consequently, we denote R^X_j and R^Y_j as the pixels from images X and Y in region R_j respectively.

Figure 1 Region-based Ordinal Image Measure (pixels X_k and Y_k of images X and Y in region R_j give slices S^X_k and S^Y_k; operation OP1 combines the slices of a region into metaslices M^X_j and M^Y_j, OP2 compares each pair of metaslices into a metadifference D_j, and OP3 aggregates D_1, ..., D_m into the final measure)


The aim is to propose an image similarity measure between any two images of the same size based on the regional information. To this end, we will compare R^X_j and R^Y_j for each j = 1, 2, ..., m. For each pixel X_k, we can construct a slice

S^X_k = { S^X_{k,l} : l = 1, ..., n },   (1)

where

S^X_{k,l} = 1 if X_k < X_l, and 0 otherwise.

As can be seen, slice S^X_k corresponds to pixel X_k and is a binary image of size equal to image X. Slices can be built in a similar manner for image Y as well. With the aim of comparing regions R^X_j and R^Y_j, we first combine the slices from image X corresponding to all the pixels belonging to region R^X_j. The slices are combined using an operation P1 into a so-called metaslice M^X_j, i.e.,

M^X_j = P1(S^X_k : X_k ∈ R^X_j),   j = 1, 2, ..., m.

Similarly, we can obtain M^Y_j. The metaslices defined above depend on the operation P1, which represents the relation between the region they correspond to and the entire image itself. It should be noted that the size of these metaslices is the same as that of the original images. In this paper, P1 is simply chosen to be the summation of all the S^X_k in region R^X_j, and thus M^X_j has the same size as S^X_k.

The next step is the comparison between all the pairs of metaslices M^X_j and M^Y_j using another operation P2, which results in the metadifference Q_j with

Q_j = P2(M^X_j, M^Y_j),   j = 1, 2, ..., m.

Therefore, we obtain a set of metadifferences Q = {Q_1, Q_2, ..., Q_m}. Here P2 is chosen to be the squared Euclidean distance between the metaslices, so Q_j is a scalar value. Finally, we can obtain a scalar similarity between images X and Y from the set Q, i.e.,

λ = P3(Q),

which can be chosen as λ = Σ_{j=1}^{m} Q_j.

The core idea in the process is to use the slices in (1) instead of the pixel values X_k to establish a measure λ. This measure is insensitive to outliers and nonlinear transformations, as demonstrated in [8]. The disadvantage of the ordinal approach is its computation cost due to the large size of S^X_k: every pixel value X_k corresponds to a slice S^X_k with the same size as X. Another issue is how to select the regions R_j properly in the process. Next, we will investigate this ordinal measure in a face recognition system.
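To make the construction above concrete, the following NumPy sketch implements one possible instantiation of the measure, with P1 the pixel-wise sum of the slices in a region, P2 the squared Euclidean distance and P3 the sum over regions, as described above; the function names and the default 8 × 8 non-overlapping region grid are our own illustrative choices.

```python
import numpy as np

def slices(img: np.ndarray) -> np.ndarray:
    """Build S^X_k for every pixel k: S^X_{k,l} = 1 if X_k < X_l, else 0.
    The result has shape (n_pixels, H, W); each slice is a binary image of
    the same size as the input, which is why the method is costly."""
    x = img.ravel().astype(float)
    return (x[:, None] < x[None, :]).reshape(x.size, *img.shape)

def metaslices(img: np.ndarray, region: int = 8) -> list:
    """M^X_j = P1(S^X_k : X_k in R^X_j), with P1 the sum of the slices whose
    pixels fall in region R_j (non-overlapping region x region blocks)."""
    h, w = img.shape
    s = slices(img)
    idx = np.arange(h * w).reshape(h, w)
    metas = []
    for r in range(0, h, region):
        for c in range(0, w, region):
            block = idx[r:r + region, c:c + region].ravel()
            metas.append(s[block].sum(axis=0))
    return metas

def ordinal_distance(img_x: np.ndarray, img_y: np.ndarray, region: int = 8) -> float:
    """lambda = P3(Q) with Q_j = P2(M^X_j, M^Y_j) the squared Euclidean
    distance and P3 the sum over regions; small values mean similar images."""
    mx, my = metaslices(img_x, region), metaslices(img_y, region)
    return float(sum(((a - b) ** 2).sum() for a, b in zip(mx, my)))
```

For a 57 × 48 image, the slice tensor already holds 2736 binary images of the same size, which illustrates the computation cost discussed above.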

3 A Face Recognition System based on the Ordinal Correlation

Research in face recognition has been conducted under three distinct scenarios: face verification, face identification and the watch list [17]. The aim of face verification is to verify that an individual is who he or she claims to be, whereas face identification attempts to identify an individual in a database under the assumption that the individual is known to be in the database. The watch list scenario is similar to face identification, except that the individual to be identified may not be in the database. Of these scenarios, the watch list is generally considered to be the most difficult, as face recognition under this scenario confronts a large number of false alarms [19].

In this section, we will use the ordinal correlation measure to design a face recognition system for the watch list scenario. Figure 2 illustrates the face recognition system with the region-based ordinal measure and the automatic threshold selection algorithm. In order to reduce the computation cost associated with the ordinal measure, the resolution of each training image is reduced to 57 × 48 using a bilinear interpolation method [18]. This interpolation method is usually a compromise between interpolation accuracy and speed [10]. The ordinal algorithm with a region area of 8 × 8 pixels is then performed on the resized images to create the feature matrix Ω^X = (M^X_1, M^X_2, ..., M^X_m). These feature matrices are used by the threshold selection algorithm at the training stage and for classification at the recognition stage. The threshold selection algorithm will be discussed in the next section.
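A possible preprocessing step is sketched below; the paper performs the bilinear downsizing in Matlab [18], whereas this sketch assumes the Pillow library and a hypothetical file path, so it should be read only as an illustration of the resizing step.

```python
import numpy as np
from PIL import Image

def preprocess(path: str) -> np.ndarray:
    """Load a face image, convert it to grey level and downsize it to
    57 x 48 pixels with bilinear interpolation before the ordinal step."""
    img = Image.open(path).convert("L")
    img = img.resize((48, 57), Image.BILINEAR)   # PIL expects (width, height)
    return np.asarray(img, dtype=float)
```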

Figure 2 Face Recognition System based on Ordinal Correlation Approach

To recognize a test image, the system compares the test image's feature matrix Ω to each of the feature matrices in the database using the P2 and P3 operations detailed in the last section. A straightforward pattern recognition approach is to find the face image n that minimises

λ_n = P3(P2(Ω, Ω_n)),

where λ_n is a scalar measure of similarity between the feature matrices Ω and Ω_n, and Ω_n is the feature matrix describing the nth face image.

If λ_n is below some chosen classification threshold θ, then the new image is classified as belonging to face image n; otherwise it is classified as "unknown".
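This watch-list decision rule can be written in a few lines; the sketch below reuses the metaslice features and the distance of the earlier sketch, and the function and variable names are again our own.

```python
import numpy as np

def classify(test_metas, gallery_metas, theta):
    """Return the index of the closest gallery face, or None ('unknown')
    when the smallest ordinal distance lambda_n is not below theta.
    `gallery_metas` is a list of per-image metaslice lists."""
    dists = [sum(((a - b) ** 2).sum() for a, b in zip(test_metas, g))
             for g in gallery_metas]
    n = int(np.argmin(dists))
    return n if dists[n] < theta else None
```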

It should be noted that the proposed face recognition system is similar in structure to the one we proposed in [21]. The difference lies mainly in the feature vectors and distance metrics. Previously, the feature vectors were based on the direct pixel values, whereas here the feature vectors are based on the slices S^X_k. Also, the distance metrics there were norms satisfying certain mathematical axioms, whereas here the metric is computed in two steps based on regional information.

4 Automatic Threshold Selection Approach

In order to obtain good performance from the proposed face recognition system, it is in general necessary to choose a suitable classification threshold θ. To this end, we propose an approach based on intra- and inter-class information gathered from the training dataset. The intra-class set D contains the distances between images of the same individual, calculated as shown in Algorithm 1; it indicates how similar the images of the same individual are. The inter-class set P contains the distances between the images of an individual and the images of the other individuals in the training dataset, as described in Algorithm 1; it indicates how different each image of an individual is from the images of the other individuals in the training dataset. The classification threshold θ is then calculated from the intra- and inter-class information as described in Algorithm 2. The training dataset requires a suitable number of images per individual in order to obtain sufficient distance information, and larger training datasets should result in a better estimate of the threshold. From our experimentation in [21], every individual should have at least 4 images for training. The proposed algorithm for threshold selection can be described as follows.

Denote
I′ = number of individuals
K′ = number of images per individual
I = {1, ..., I′}
K = {1, ..., K′}
Ω = feature matrices of the training images

Algorithm 1: Finding the Intra and Inter Classes

for each image F_ik where i ∈ I and k ∈ K do
    compute the intra distances
        d^{ik′}_{ik} = ‖Ω_ik − Ω_ik′‖²  where i ∈ I, k′ ∈ K and k ≠ k′
    and the inter distances
        p^{jl}_{ik} = ‖Ω_ik − Ω_jl‖²  where j ∈ I, j ≠ i and l, k ∈ K
end

Get the intra class D = {d^{ik′}_{ik} | k ≠ k′, k, k′ ∈ K, i ∈ I}
and the inter class P = {p^{jl}_{ik} | j ∈ I, j ≠ i, k, l ∈ K},
and sort D and P in ascending order.

Compute D_max = max{d^{ik′}_{ik} | k ≠ k′, k, k′ ∈ K, i ∈ I};
D_max is a measure of generalisation among the images of each individual.
Compute P_min = min{p^{jl}_{ik} | j ∈ I, j ≠ i, k, l ∈ K};
P_min is a measure of the differences between one individual and the others.
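A minimal sketch of Algorithm 1, assuming the per-image feature matrices have been flattened to NumPy vectors and stored in a dictionary keyed by individual; each unordered pair is counted once, so the set sizes match the counts ‖D‖ and ‖P‖ quoted later in this section.

```python
import numpy as np

def intra_inter_distances(features):
    """Algorithm 1 sketch: features[i] is the list of flattened feature
    vectors of individual i.  Returns the sorted intra-class distances D and
    inter-class distances P (squared Euclidean, each unordered pair once)."""
    D, P = [], []
    ids = sorted(features.keys())
    for a, i in enumerate(ids):
        for k, f in enumerate(features[i]):
            for g in features[i][k + 1:]:        # intra: same individual
                D.append(float(((f - g) ** 2).sum()))
            for j in ids[a + 1:]:                # inter: different individuals
                for g in features[j]:
                    P.append(float(((f - g) ** 2).sum()))
    return sorted(D), sorted(P)

# D_max and P_min of Algorithm 1 are then simply:
# D, P = intra_inter_distances(features); d_max, p_min = D[-1], P[0]
```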


Algorithm 2: Finding the Classification Threshold

The classification threshold θ can be defined through D_max and P_min. If D_max < P_min, then TP′ and FP′ defined below will reach 100% and 0% respectively. Thus, θ can be directly defined as

θ = (D_max + P_min) / 2.   (2)

If D_max > P_min, then we need to find the θ that maximises TP′ and minimises FP′ with the following steps.

We have obtained D and P from the training dataset, which are used to calculate True Positive′ (TP′) and False Positive′ (FP′). TP′ and FP′ measure the percentages of correct classification and misclassification respectively and are defined as follows:

TP′ = (Q / ‖D‖) × 100%,
FP′ = (L / ‖P‖) × 100%,

where Q is the number of elements in D that are less than a given threshold, ‖D‖ is the total number of elements in D, given by (K′ − 1) × K′ × I′ / 2, L is the number of elements in P that are less than a given threshold, and ‖P‖ is the total number of elements in P, given by K′² × (I′ − 1) × I′ / 2.

Next, we select a threshold to balance the correct classification and misclassification ratios. The idea is to separate D evenly into N parts and draw a curve of FP′ and TP′ for the different thresholds. We intend to find a balanced point on this curve. The details are presented below.

From Algorithm 1, we have obtained D = {d_1, d_2, ..., d_‖D‖} with d_1 ≤ d_2 ≤ ... ≤ d_‖D‖. Then we need to find indices x_1, x_2, ..., x_10 such that D is grouped into D = {D_{x_1}, D_{x_2}, ..., D_{x_10}}, where D_{x_j} = {d_{x_{j−1}+1}, ..., d_{x_j}} for j = 1, 2, ..., 10, with x_10 = ‖D‖ and x_0 = 0. Further, x_j = [x̄_j], where x̄_j satisfies (x̄_j / ‖D‖) × N = j and [x̄_j] denotes rounding x̄_j to the nearest integer. This process divides D into N groups, where N is chosen subjectively depending on the database. In the experiments of this paper, N is chosen as 10, which gives a very good performance.

Now we have the possible threshold set A = {d_{x_1}, d_{x_2}, ..., d_{x_10}}, and we can compute TP′ and FP′ for each element in A. Then the derivative T_j is calculated as

T_j = (ΔTP′_j / ΔFP′_j) × (max(FP′) / 100%),   (3)

where max(FP′) is the maximum value of FP′; ΔTP′_j = TP′_{j+1} − TP′_j for j = 1, 2, ..., 9; ΔFP′_j = FP′_{j+1} − FP′_j for j = 1, 2, ..., 9; and TP′_j and FP′_j are the TP′ and FP′ values respectively when the threshold d_{x_j} is chosen, for j = 1, 2, ..., 10. If ΔFP′_j = 0, then we define T_j = 0.

To find an appropriate classification threshold θ, we search for a j_0 ∈ {2, 3, ..., 9} such that T_{j_0−1} > 1 and T_{j_0} < 1. Then θ is chosen as the value d_{x_{j_0−1}}. If none of the T_j values is less than 1, then θ is chosen to be d_{x_10}.


The main idea of selecting the optimal threshold can be described as follows. From the intra- and inter-class information, we want to find the optimal threshold that balances the true positive and false positive rates defined above, which are indirect approximations of precision and recall [13]. To do so, we divide the set D into N partitions subjectively. Equation (3) is used to compute the approximate derivative of the curve, and we look for a point where this derivative is approximately 1. With this threshold selection algorithm, we can implement the proposed face recognition system and evaluate its performance in the next section.
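The following sketch puts Algorithm 2 together under the same assumptions as the previous sketch (flattened features, squared Euclidean distances). The candidate-index arithmetic mirrors the decile construction above, but the exact rounding and fall-back behaviour of the original implementation are not specified in the paper, so this is only our reading of it.

```python
import numpy as np

def select_threshold(D, P, N=10):
    """Algorithm 2 sketch: pick the classification threshold theta from the
    sorted intra-class distances D and inter-class distances P."""
    D, P = np.sort(np.asarray(D)), np.sort(np.asarray(P))
    d_max, p_min = D[-1], P[0]
    if d_max < p_min:                          # classes already separable
        return float((d_max + p_min) / 2.0)    # equation (2)

    # Candidate thresholds: the N quantile points d_{x_1}, ..., d_{x_N} of D.
    idx = [int(round(len(D) * j / N)) - 1 for j in range(1, N + 1)]
    cand = D[idx]
    tp = np.array([np.sum(D < t) / len(D) * 100.0 for t in cand])   # TP'
    fp = np.array([np.sum(P < t) / len(P) * 100.0 for t in cand])   # FP'

    # Scaled slope of the TP'/FP' curve, equation (3); 0 where FP' is flat.
    T = np.zeros(N - 1)
    dtp, dfp = np.diff(tp), np.diff(fp)
    nz = dfp != 0
    T[nz] = dtp[nz] / dfp[nz] * fp.max() / 100.0

    # First point where the scaled slope drops below 1 (paper's j0 = 2..9).
    for j in range(1, N - 1):
        if T[j - 1] > 1 and T[j] < 1:
            return float(cand[j - 1])
    return float(cand[-1])                     # fall back to d_{x_N}
```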

5 Experimental Results

In this section, experimentation was carried out on six datasets created from the Yale database [24]. This database contains 15 individuals (mostly male) with 11 images each. Face images in the original database with strong light configurations, such as those in Figure 3, were excluded, as the excessive light casts shadows on the background that would require preprocessing in practice. The total number of images used in this experiment is 120. In all datasets, the number of individuals used for training is 10 and the number of individuals used for testing is 15. Table 1 gives a brief description of all datasets. For each dataset, we created 10 subsets by randomly selecting the training images per individual. The remaining images not included in each training subset were used to construct the corresponding testing subsets.

These datasets were used to evaluate the proposed face recognition algorithm in two scenarios. In the first scenario, the experiments were carried out on four datasets with a fixed threshold value. The results were then compared to the well-known Eigenface approach [22] and the recent 2DPCA approach [25]. In the second scenario, we investigated the performance of the proposed automatic threshold selection algorithm on the remaining datasets. Finally, we further evaluated the performance of the proposed system over noisy images using dataset 4. The query effectiveness is evaluated using precision and recall statistics [1].
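For reference, the sketch below gives the textbook precision and recall formulas used to report the results; how true positives, false positives and false negatives are counted in the watch-list protocol follows [1] and is not restated here, so the function is only a reminder of the definitions.

```python
def precision_recall(tp: int, fp: int, fn: int):
    """Precision: fraction of accepted matches that are correct.
    Recall: fraction of relevant (enrolled) test images that are retrieved."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```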

Figure 3 Images with Light Configurations from Yale Face Database

5.1 Evaluation on the Proposed Ordinal Face Recognition System

In this subsection, we carried out the experimentation with the Eigenface, 2DPCA and ordinal approaches on all datasets in Scenario 1. The original size of the Yale images is 231 × 195. For simplicity, in the proposed ordinal approach we downsized these images to 57 × 48 pixels. We used the downsized images for the ordinal approach and the images at the original size for the Eigenface and 2DPCA approaches.

Scenario   Dataset   Training images per individual   Training images   Testing images
   1          1                    1                         10              110
   1          2                    2                         20              100
   1          3                    3                         30               90
   1          4                    4                         40               80
   2          5                    4                         40               80
   2          6                    5                         50               70

Table 1 Training and Testing Datasets

Table 2 shows the classification thresholds used for these experiments; the thresholds were selected by trial and error such that precision and recall are balanced. We ran the ordinal, Eigenface and 2DPCA approaches on the 10 subsets of each dataset in Scenario 1. The average results in Figure 4 indicate that the ordinal approach outperforms the Eigenface and 2DPCA approaches on all datasets, with much higher precision and recall rates.

Dataset   Eigenface (×10^7)   2DPCA (×10^4)   Ordinal (×10^4)
   1            8.75               3.10             3.60
   2            9.50               2.90             3.40
   3           10.00               2.80             3.40
   4           10.50               2.60             3.30

Table 2 Classification Thresholds for Datasets in Scenario 1

              Eigenface              2DPCA                Ordinal
Dataset   Prec.(%)  Recall(%)   Prec.(%)  Recall(%)   Prec.(%)  Recall(%)
   1        75.5      74.9        83.0      82.7        88.4      85.7
   2        78.8      79.3        86.9      87.3        96.5      91.8
   3        84.0      82.3        90.8      91.0        95.6      96.6
   4        81.6      84.2        85.4      87.8        98.2      96.3

Table 3 Performance of the Eigenface vs 2DPCA vs Ordinal Approaches (Average Rates)

One can see from Table 3 that the performance of the proposed face recognition approach on dataset 4 is much better than on the other datasets. This is due to the fact that dataset 4 has more training images and fewer testing images. Unlike the ordinal approach, the performance of the Eigenface and 2DPCA approaches does not improve on dataset 4. This indicates that the ordinal features become richer as more training images are used.


[Bar charts of precision and recall (%) for the Eigenface, 2DPCA and ordinal approaches on (a) dataset 1, (b) dataset 2, (c) dataset 3 and (d) dataset 4; the values correspond to Table 3.]

Figure 4 Average Recognition Accuracy of the Eigenface, 2DPCA and Ordinal

5.2 Evaluation on Automatic Threshold Selection Algorithm

In the last subsection, we compared the performance of the proposed ordinal approach with the Eigenface and 2DPCA approaches using the best thresholds selected by trial and error. In this subsection, we examine the proposed threshold selection algorithm on the two datasets in Scenario 2, where each dataset consists of 10 subsets. The results of the threshold selection algorithm on these subsets are shown in Figure 5. From this figure, one can see that the threshold selection approach performs more stably on dataset 6. This is due to the fact that dataset 6 has more training images and hence provides more intra- and inter-class distances for selecting the threshold. Overall, these results indicate that the selected classification thresholds are very stable across the subsets.

Figure 6 shows the performance on 3 subsets of dataset 6 under various thresholds. The classification thresholds selected via the proposed algorithm were 3.39 × 10^4 for subset 1, 3.23 × 10^4 for subset 4 and 3.18 × 10^4 for subset 10. As shown in Figures 6(a), 6(b) and 6(c), these thresholds perform well, with precision and recall balanced at high rates.


[Plots of the selected classification threshold (×10^4) against subset number (1–10) for (a) dataset 5 and (b) dataset 6.]

Figure 5 Selected Classification Thresholds via the Proposed Threshold Approach

[Precision and recall (%) of the ordinal approach plotted against the classification threshold (3.0–3.6 × 10^4) for (a) subset 1, (b) subset 4 and (c) subset 8.]

Figure 6 Performance of Ordinal Approach Under Various Thresholds

5.3 Evaluation on the Proposed Ordinal Face Recognition System over Noisy Images

In this subsection, we investigate the performance of the Eigenface, 2DPCA and ordinal approaches over noisy images. The experimentation was carried out on the 10 subsets of dataset 4. We applied three different noise algorithms (Table 4) to the testing images only, while the training images remained the original (noise-free) images.


Noise Algorithm      Parameters
Salt and Pepper      Noise density = 0.05
Gaussian             Mean = 0 and variance = 0.01
Poisson              —

Table 4 Parameters Used in the Noise Algorithms
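The noisy test images can be generated with the parameters of Table 4, for example with scikit-image's random_noise as sketched below; the paper does not state which implementation was used, so this is only one possible way to reproduce the corruption.

```python
import numpy as np
from skimage.util import random_noise

def add_noise(img: np.ndarray, kind: str) -> np.ndarray:
    """Apply one of the three noise models of Table 4 to a grey-level image
    given in the [0, 255] range; the noisy image is returned in that range."""
    x = img.astype(float) / 255.0                      # random_noise expects [0, 1]
    if kind == "salt_and_pepper":
        y = random_noise(x, mode="s&p", amount=0.05)   # noise density 0.05
    elif kind == "gaussian":
        y = random_noise(x, mode="gaussian", mean=0.0, var=0.01)
    elif kind == "poisson":
        y = random_noise(x, mode="poisson")
    else:
        raise ValueError(f"unknown noise model: {kind}")
    return y * 255.0
```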

Figure 7 illustrates the effects of the noise algorithms on an original image, together with the corresponding PSNR values (dB). The PSNR is a common image quality metric that measures the ratio between the peak signal and the difference between two images. Images with high PSNR values (above 32 dB) are considered to be perceptually lossless, while low PSNR values indicate a noticeable difference between the noisy image and the original [20]. From these PSNR values, we can rank the noise algorithms: the salt and pepper algorithm produced the worst image quality, followed by Gaussian and Poisson. We also measured the PSNR values of the noise algorithms on other images and found that, although the PSNR values varied, the relative ranking of the noisy images remained almost the same.

(a) Salt and Pepper (17.24 dB); (b) Gaussian (20.90 dB); (c) Poisson (27.28 dB); (d) Original Image

Figure 7 Effects of the Noise Algorithms on the Original Image
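The PSNR values quoted in Figure 7 follow the standard definition, sketched here for 8-bit grey-level images; whether the original experiments used exactly this peak value is an assumption.

```python
import numpy as np

def psnr(original: np.ndarray, noisy: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means the noisy image is
    closer to the original (above roughly 32 dB is perceptually lossless)."""
    mse = np.mean((original.astype(float) - noisy.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```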

The classification thresholds for the experiments with the noise algorithms are shown in Table 5. These thresholds were selected by trial and error such that precision and recall are balanced. Figure 8 presents the average recognition rates of the Eigenface, 2DPCA and ordinal approaches when tested with noisy images on dataset 4. Overall, this figure shows that the ordinal approach performs better than the Eigenface and 2DPCA approaches under all noise algorithms.

Noise Algorithm      Eigenface (×10^7)   2DPCA (×10^4)   Ordinal (×10^4)
Salt and Pepper           10.50                2.60             3.90
Gaussian                  10.50                2.60             3.75
Poisson                   10.50                2.60             3.30

Table 5 Classification Thresholds for the Experiments with Noise Algorithms

[Bar charts of precision and recall (%) for the Eigenface, 2DPCA and ordinal approaches on dataset 4 with noisy test images: (a) Salt and Pepper: Eigenface 81.2%/83.6%, 2DPCA 88.0%/87.0%, Ordinal 97.7%/92.8%; (b) Gaussian: Eigenface 81.8%/83.1%, 2DPCA 84.2%/88.3%, Ordinal 93.7%/88.0%; (c) Poisson: Eigenface 80.1%/84.2%, 2DPCA 84.8%/88.0%, Ordinal 94.0%/91.8%.]

Figure 8 Average Recognition Accuracy of the Eigenface, 2DPCA and Ordinal Approaches on Dataset 4 with Noise Algorithms

6 Conclusions

In this paper, we proposed a new face recognition system based on the ordinal measure for image correspondence. The new system significantly outperforms the Eigenface and 2DPCA approaches due to its robustness to different types of noise. This approach operates on the ranks of the pixels rather than directly on the pixel values, on which most current face recognition systems rely.


The disadvantage of this approach is its computation cost, since each pixel corresponds to a slice of the same size as the original image. This can be overcome by using reduced-size images for comparison, as demonstrated in this paper.

Future work should focus on choosing a suitable size for the divided regions using cross-validation algorithms [5]. Moreover, the effect of dithering and noise on the proposed measure should be investigated in detail. Additionally, this approach should be extended to operate on color images [26]. Since the ordinal measure is a kind of structural similarity between two images, we will in future apply wavelet techniques to its feature matrices to reduce its computation cost and evaluate its performance.

Throughout the experimentation, we used the ordinal approach with a region area of 8 × 8 pixels and no overlapping pixels. In the future, we will increase the overlap between regions, since Eickeler et al. [7] reported that 75% overlap (6 overlapping pixels) yields better recognition rates than no overlap.

Acknowledgements

The authors would like to acknowledge the many helpful suggestions of two anonymous reviewers.

References and Notes

1 Alvarez S. A. (2002) 'An Exact Analytical Relation among Recall, Precision, and Classification Accuracy in Information Retrieval', Technical Report BC-CS-2002-01, Computer Science Department, Boston College.

2 Belhumeur P. N., Hespanha J. P. and Kriegman D. J. (1997) 'Eigenfaces vs Fisherfaces: recognition using class specific linear projection', IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 19, pp. 711–720.

3 Bhat D. N. and Nayar S. K. (1998) 'Ordinal measures for image correspondence', IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 20, No. 4, pp. 415–423.

4 Bhat D. N. and Nayar S. K. (1996) 'Ordinal measures for visual correspondence', Columbia University, Computer Science, Tech. Rep. CUCS-009-96.

5 Cawley G. C. and Talbot N. (2003) 'Efficient leave-one-out cross validation of kernel Fisher discriminant classifiers', Pattern Recognition, Vol. 36, pp. 2585–2592.

6 Cheikh F. A., Cramariuc B., Partio M., Reijonen P. and Gabbouj M. (2002) 'Ordinal Measure Based Shape Correspondence', EURASIP Journal on Applied Signal Processing, Vol. 5, pp. 362–371.

7 Eickeler S., Muller S. and Rigoll G. (2000) 'Recognition of JPEG Compressed Face Images Based on Statistical Methods', Image and Vision Computing Journal, Special Issue on Facial Image Analysis, Vol. 18, No. 4, pp. 279–287.

8 Gabbouj G., Makela A. and Cramariuc B. (2000) 'A new image similarity measure based on ordinal correlation', International Conference on Image Processing, Vol. 3, pp. 718–721.


9 Hongtao S., Feng D. D. and Rong-Chun Z. (2002) 'Face recognition using multi-feature and radial basis function network', Pan-Sydney Area Workshop on Visual Information Processing (VIP2002), Vol. 22, pp. 51–57.

10 Kelly F. and Kokaram A. (2004) 'Fast Image Interpolation for Motion Estimation using Graphics Hardware', IS&T/SPIE Electronic Imaging – Real-Time Imaging VIII, San Jose, California, USA.

11 Kendall M. and Gibbons J. D. (1990) 'Rank Correlation Methods', New York: Edward Arnold.

12 Kim K. L., Jung K. and Kim H. J. (2002) 'Face recognition using kernel principal component analysis', IEEE Signal Processing Letters, Vol. 9, pp. 40–42.

13 Lai J. Z. C. and Huang Y. (2003) 'A hierarchical face recognition system', 16th IPPR Conference on Computer Vision, Graphics and Image Processing (CVGIP 2003).

14 Liu C. and Wechsler H. (2003) 'Independent component analysis of Gabor features for face recognition', IEEE Transactions on Neural Networks, Vol. 14, pp. 919–928.

15 Lu W., Plataniotis K. N. and Venetsanopoulos A. N. (2003) 'Face recognition using kernel direct discriminant analysis algorithms', IEEE Trans. on Neural Networks, Vol. 14, pp. 1018–1021.

16 Lu W., Plataniotis K. N. and Venetsanopoulos A. N. (2003) 'Face recognition using LDA-based algorithms', IEEE Transactions on Neural Networks, Vol. 14, pp. 195–200.

17 Lu X. (2003) 'Image analysis for face recognition', [Online] http://www.cse.msu.edu/lvxiaogu/publications/ImAna4FacRcg lu.pdf.

18 Matlab (2004) 'The language of technical computing', [Online] http://www.mathworks.com, Version 6.5, Release 13.

19 Phillips P. J., Grother P., Micheals R. J., Blackburn D. M., Tabassi E. and Bone J. M. (2003) 'FRVT 2002: Overview and summary', [Online] http://www.frvt.org/FRVT2002/documents.htm.

20 Strang G. and Nguyen T. (1997) 'Wavelets and Filter Banks', Wellesley-Cambridge, Wellesley.

21 Tjahyadi R. (2004) 'Investigations on PCA and DCT based face recognition algorithms', Master Thesis, Curtin University of Technology.

22 Turk M. and Pentland A. (1991) 'Eigenfaces for recognition', Journal of Cognitive Neuroscience, Vol. 3, No. 1, pp. 71–86.

23 Ullman S., Vidal-Naquet M. and Sali E. (2002) 'Visual features of intermediate complexity and their use in classification', Nature Neuroscience, Vol. 5, pp. 682–687.

24 Yale (2004) 'Yale University face database', [Online] http://vismod.media.mit.edu/vismod/classes/mas622-00/datasets.

25 Yang J., Zhang D., Frangi A. F. and Yang J. (2004) 'Two-dimensional PCA: a new approach to appearance-based face representation and recognition', IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 26, No. 1, pp. 131–137.

26 Yip A. and Sinha P. (2002) 'Role of color in face recognition', Perception, Vol. 31, pp. 995–1003.

27 Zhang B., Zhang H. and Ge S. (2004) 'Face recognition by applying wavelet subband representation and kernel associative memory', IEEE Transactions on Neural Networks, Vol. 15, pp. 166–177.

28 Zhao W., Chellappa R., Phillips P. J. and Rosenfeld A. (2003) 'Face recognition: A literature survey', ACM Computing Surveys, Vol. 35, pp. 399–458.