Signature Recognition using Cluster Based Global Features



2009 IEEE International Advance Computing Conference (IACC 2009), Patiala, India, 6-7 March 2009


H B Kekre, V A Bharadi, NMIMS University, Mumbai, India - 400056

Abstract: In this paper we discuss an off-line signature recognition system designed using clustering techniques. These cluster based features are mainly morphological features; they include Walsh coefficients of pixel distributions, a Vector Quantization based codeword histogram, Grid & Texture information features, and Geometric centers of a signature. In this paper we discuss the extraction and performance analysis of these features. We present the FAR and FRR achieved by the system using these features, and we compare individual performance and overall system performance.

I. INTRODUCTION

Signature verification is an important research area in the field of authentication of a person [1]. We can generally distinguish between two categories of verification systems: online, for which the signature signal is captured during the writing process, thus making dynamic information available, and offline, for which the signature is captured once the writing process is over and thus only a static image is available. In this paper we deal with an off-line signature verification system. We design a system capable of verifying the authenticity of a signature based on tests performed with genuine signatures (Verification Mode) and of identifying a person from a signature (Recognition Mode). This system uses a special set of features extracted for groups of signature points or signature segments; these are cluster features.

Over the years, many features have been proposed to represent signatures in verification tasks. We distinguish between local features, where one feature is extracted for each sample point in the input domain; global features [1][6], where one feature is extracted for the whole signature, based on all sample points in the input domain; and segmental features, where the signature is subdivided into segments and one feature is extracted for each segment. Here we use global and segmental features. Signature verification can be considered as a two-class pattern recognition problem [3], where the authentic user is one class and all the forgers form the second class. Feature selection refers to the process by which descriptors (features) extracted from the input-domain data are selected to provide maximal discrimination capability between classes. Here we consider only global features for verification.

We divide the features into two types:
1. Standard global features
2. Segmental & cluster based features (special features)

We discuss four special features; they are:
1. Grid & Texture information features
2. Walsh coefficients of horizontal and vertical pixel distributions
3. Vector Quantization based codeword histogram
4. Geometric centers of the signature template

These features are used and tested in the development of an off-line signature system. In this paper we provide an overview and performance analysis of the signature recognition system based on clustering techniques.

II. CONVENTIONAL GLOBAL FEATURES

Pre-processing is the first step of any signature recognition system: we normalize the signature to make it a binary template. This normalized template is then used for feature extraction, and the extracted feature vector is used for comparison and classification of signatures. The performance of a signature recognition system is greatly influenced by the feature set. The pre-processing is shown in Fig. 1; its steps are as follows (a minimal code sketch of this pipeline is given after the list):
1. Noise Removal
2. Intensity Normalization
3. Scaling
4. Thinning
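
The paper does not include code for these steps; the following is a minimal sketch of such a pipeline, assuming a grayscale input array and standard SciPy/scikit-image routines. The exact filters, the (rows, columns) template size of (160, 200), and the thinning method are assumptions, not the authors' implementation.

```python
# Hedged sketch of the four pre-processing steps listed above (not the authors' code).
import numpy as np
from scipy.ndimage import median_filter
from skimage.filters import threshold_otsu
from skimage.transform import resize
from skimage.morphology import skeletonize

def preprocess(gray, template_shape=(160, 200)):
    """Turn a grayscale scan into a thinned binary signature template (True = ink)."""
    denoised = median_filter(gray, size=3)                  # 1. noise removal
    binary = denoised < threshold_otsu(denoised)            # 2. intensity normalization / binarization
    scaled = resize(binary.astype(float), template_shape,   # 3. scaling to a fixed template size
                    anti_aliasing=True) > 0.5
    return skeletonize(scaled)                              # 4. thinning
```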


The conventional features [3] that are considered for system development along with the cluster features are as follows:
1. Number of pixels - the total number of black pixels in the signature template.
2. Picture height - the height in pixels of the signature template after horizontal blank spaces are removed.
3. Picture width - the width in pixels of the image with vertical blank spaces removed.
4. Maximum horizontal projection - the horizontal projection histogram is calculated and its highest value is taken as the maximum horizontal projection.
5. Maximum vertical projection - the vertical projection of the skeletonized signature image is calculated and the highest value of the projection histogram is taken as the maximum vertical projection.
6. Baseline shift - the difference between the y-coordinates of the centres of mass of the left and right parts.
7. Dominant angle feature - the dominant angle of the signature.
8. Signature surface area - here we consider the modified tri-area feature [4].

The features discussed above are shown in Table I for the signature shown in Fig. 1; a code sketch of a few of these computations follows the table. Next we discuss the special features.

TABLE I
FEATURES EXTRACTED FROM THE SIGNATURE SHOWN IN FIG. 1

Sr.   Feature                        Extracted Value
1     Number of pixels               547
2     Picture width (in pixels)      166
3     Picture height (in pixels)     137
4     Horizontal max projection      12
5     Vertical max projection        15
6     Dominant angle (normalized)    0.694
7     Baseline shift (in pixels)     47
8     Area1                          0.151325
9     Area2                          0.253030
10    Area3                          0.062878
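
As a rough illustration only, the sketch below computes a few of the conventional global features above from a thinned binary template such as the one produced by the pre-processing sketch; the dominant angle and tri-area features are omitted, and the exact definitions used by the authors may differ.

```python
import numpy as np

def conventional_features(template):
    """template: 2-D boolean array, True where the thinned signature has ink."""
    ys, xs = np.nonzero(template)
    # Crop blank rows/columns so height and width match features 2 and 3.
    cropped = template[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    h_proj = cropped.sum(axis=1)   # horizontal projection histogram (one value per row)
    v_proj = cropped.sum(axis=0)   # vertical projection histogram (one value per column)
    left, right = np.hsplit(cropped, [cropped.shape[1] // 2])
    baseline_shift = abs(_y_center(left) - _y_center(right))   # feature 6 (assumed reading)
    return {
        "num_pixels": int(template.sum()),
        "picture_height": cropped.shape[0],
        "picture_width": cropped.shape[1],
        "max_h_projection": int(h_proj.max()),
        "max_v_projection": int(v_proj.max()),
        "baseline_shift": float(baseline_shift),
    }

def _y_center(part):
    """y-coordinate of the centre of mass of a binary image part."""
    ys, _ = np.nonzero(part)
    return ys.mean() if ys.size else 0.0
```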

III. GRID & TEXTURE INFORMATION FEATURE EXTRACTION

Grid and texture features provide information about the distribution of pixels and the distribution density of the pixels [3]. The texture feature provides information about the occurrence of specific patterns in the signature template. These features are not based on a single pixel or on the whole signature but on groups of pixels or signature segments; hence they are cluster features. The grid feature gives information about the pixel density in a segment, and the texture feature gives information about the distribution of specific pixel patterns. These features are discussed in detail here.

A. Grid information feature

We have the pre-processed signature template, at a resolution of 200 x 160 pixels. To extract the grid information feature from the signature we use the following algorithm (a code sketch of it is given after Fig. 2).

Algorithm for grid feature extraction:
1. Divide the skeletonized image into 10 x 10 pixel blocks. We get a total of 320 blocks.
2. For each block, calculate the area (the sum of foreground pixels). This gives a grid feature matrix (gf) of size 20 x 16.
3. Find the minimum and maximum (min, max) values over the blocks, ignoring blocks with no pixels.
4. Normalize the grid feature matrix by replacing each nonzero element e_{i,j} by

   e'_{i,j} = (e_{i,j} - min) / (max - min)   (3.1)

   This gives a matrix with all elements in the range 0 to 1: the lowest value (for the rectangle with the smallest number of black pixels) becomes zero and the highest value (for the rectangle with the highest number of black pixels) becomes one.
5. The resulting 320 elements of the matrix (gf) form the grid feature vector.

A representation of a signature image and the corresponding grid feature vector is shown in Fig. 2. A darker rectangle indicates that the corresponding area of the skeletonized image had the maximum number of black pixels; on the contrary, a white rectangle indicates that it had the smallest number of black pixels.

Fig. 2 Representation of grid feature.
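
A minimal sketch of the grid feature algorithm above, assuming the 200 x 160 template is stored as a 160 x 200 (rows x columns) boolean array; the orientation of the resulting matrix is therefore 16 x 20 rather than 20 x 16.

```python
import numpy as np

def grid_feature(template):
    """template: 160 x 200 boolean array -> normalized 320-element grid feature vector."""
    rows, cols = template.shape
    # Steps 1-2: sum foreground pixels in each 10 x 10 block.
    gf = template.reshape(rows // 10, 10, cols // 10, 10).sum(axis=(1, 3)).astype(float)
    # Step 3: min/max over non-empty blocks only.
    nonzero = gf[gf > 0]
    lo, hi = nonzero.min(), nonzero.max()
    # Step 4: Eq. (3.1) applied to each nonzero element.
    out = np.zeros_like(gf)
    out[gf > 0] = (gf[gf > 0] - lo) / (hi - lo)
    # Step 5: flatten the matrix into the 320-element grid feature vector.
    return out.ravel()
```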

B. Texture feature [3]

The texture feature gives information about the occurrence of specific pixel patterns. To extract the texture feature group, the co-occurrence matrices of the signature image are used. In a grey-level image, the co-occurrence matrix pd[i, j] is defined by first specifying a displacement vector d = (dx, dy) and counting all pairs of pixels separated by d and having grey-level values i and j. In our case the signature image is binary, so the co-occurrence matrix is a 2 x 2 matrix describing the transitions between black and white pixels. Therefore, the co-occurrence matrix Pd[i, j] is defined as

   Pd[i, j] = | p00  p01 |
              | p10  p11 |   (1)

where p00 is the number of times that two white pixels occur separated by d, p01 is the number of times that a combination of a white and a black pixel occurs separated by d, p10 is the same as p01, and p11 is the number of times that two black pixels occur separated by d. The image is divided into eight rectangular segments (4 x 2). For each region the P(1, 0), P(1, 1), P(0, 1) and P(-1, 1) matrices are calculated, and the p01 and p11 elements of these matrices are used as texture features of the signature.

Fig. 3 Pixel positions while scanning for the displacement vector.


We use this procedure to calculate the texture feature matrix; the signature template is divided into eight segments as shown in Fig. 4.

Fig. 4 Pixel positions while scanning for the displacement vector (representation).

The corresponding texture feature matrix is shown in Fig. 5. Thus we get two matrices, one for the grid feature and one for the texture feature, as feature vectors.

Fig. 5 Texture feature matrix of the signature (p01 and p11 values for the eight segments B1-B8).
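
The sketch below shows one way to compute the 2 x 2 co-occurrence counts of Equation (1) for a binary segment and a displacement d, and to collect the p01/p11 texture features over eight segments. The (row, column) displacement convention, the 2 x 4 segment layout, and the border handling are assumptions.

```python
import numpy as np

def cooccurrence(segment, d):
    """segment: 2-D 0/1 array (1 = black); d = (dy, dx). Returns [[p00, p01], [p10, p11]]."""
    dy, dx = d
    h, w = segment.shape
    a = segment[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]   # base pixels
    b = segment[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]   # pixels displaced by d
    p = np.zeros((2, 2), dtype=int)
    for i in (0, 1):
        for j in (0, 1):
            p[i, j] = int(np.sum((a == i) & (b == j)))
    return p

def texture_feature(template):
    """Split the template into eight segments and keep p01 and p11 for four displacements."""
    feats = []
    for band in np.array_split(template.astype(int), 2, axis=0):
        for seg in np.array_split(band, 4, axis=1):
            for d in [(0, 1), (1, 1), (1, 0), (1, -1)]:   # stand-ins for P(1,0), P(1,1), P(0,1), P(-1,1)
                p = cooccurrence(seg, d)
                feats.extend([p[0, 1], p[1, 1]])
    return np.array(feats)
```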

IV. WALSH TRANSFORM OF PIXEL DISTRIBUTION

Here we propose a novel parameter for signature recognition. This parameter is derived from the pixel distribution of the signature [1][3]; this is shown in Fig. 6. We consider a normalized signature template of size 256 x 256 pixels, which gives horizontal and vertical pixel projections of 256 elements each. We apply the Hadamard transform to the horizontal pixel distribution points (Hi) and the vertical pixel distribution points (Vi); the Hadamard transform is fast to calculate and gives moderate energy compaction. This operation gives the horizontal Hadamard coefficients (HHi) and the vertical Hadamard coefficients (VHi). We use Kekre's algorithm [4] to get the Walsh coefficients from the Hadamard coefficients. This operation yields the Walsh coefficients of the histograms (SHHi, SVHi). The coefficients are plotted and shown in Fig. 7.

These coefficients are calculated for the test signature as well as for the standard signature, and the Euclidian distance between the two sequences of coefficients is evaluated to measure their similarity and hence the similarity of the two signatures.

Fig. 6 Signature and its horizontal and vertical pixel distributions.
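
As an illustration of this step, the sketch below takes the Hadamard transform of the row and column projections of a 256 x 256 template and then reorders the coefficients into Walsh (sequency) order. A generic Gray-code/bit-reversal permutation is used for the reordering here instead of Kekre's algorithm [4], which is not reproduced.

```python
import numpy as np
from scipy.linalg import hadamard

def _sequency_order(n):
    """Map sequency index k to the natural Hadamard row index (Gray code, then bit reversal)."""
    bits = n.bit_length() - 1
    perm = []
    for k in range(n):
        g = k ^ (k >> 1)                                    # Gray code of k
        perm.append(int(format(g, f"0{bits}b")[::-1], 2))   # bit-reversed index
    return np.array(perm)

def walsh_of_projections(template):
    """template: 256 x 256 binary array -> sequency-ordered coefficient vectors (SHH, SVH)."""
    H = template.sum(axis=1)          # horizontal pixel distribution (Hi)
    V = template.sum(axis=0)          # vertical pixel distribution (Vi)
    Had = hadamard(len(H))            # natural-ordered Hadamard matrix
    perm = _sequency_order(len(H))
    HH, VH = Had @ H, Had @ V         # Hadamard coefficients (HHi, VHi)
    return HH[perm], VH[perm]         # Walsh-ordered coefficients (SHHi, SVHi)
```

Two such coefficient vectors can then be compared with the Euclidian distance, e.g. np.linalg.norm(SHH_test - SHH_reference).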

V. VECTOR QUANTIZATION BASED CODEWORD HISTOGRAM

Next we discuss a feature based on vector quantization of the signature template. We segment the signature into blocks to form vectors, and these vectors are represented by codewords from a codebook. The codebook used for mapping the image blocks plays a very important role here [2][4][6]. The objective of the vector quantization is not to compress the image but to classify the signature and verify its authenticity; hence we extend the approach to serve our purpose. We use the codeword distribution pattern as a characteristic of the signature. The frequency of the codewords occurring in the distribution is calculated, a histogram of the codeword groups against the number of occurrences is plotted, and finally these distributions are compared using the similarity measure used in [2], which is based on Euclidian distance.

The normalized signature template is taken; the image size considered is 200 x 160 pixels. This image is divided into 4 x 4 pixel blocks, and each block is treated as a code vector, giving 2000 blocks overall. The codebook initially contains all 2^16 codewords; the invalid code vectors are then removed by the thinning process shown in Fig. 8. The remaining code vectors are sorted by their appearance in the gray code table so that consecutive codewords have minimal change. Such similar codewords are grouped to form codeword groups. The codeword histogram is generated for the given signature template and is used for comparison. This operation is illustrated in Fig. 10.

Fig. 8 Thinning operation illustrated; A is the input block, B is the thinned block.


Fig. 7 Hadamard coefficients of the horizontal pixel distribution (upper plot) and their sequenced arrangement (lower plot) for the signature mentioned in Fig. 3.

The thinning operation gives 11756 valid codewords. This is the set of codewords used for group formation. The next step is to form codeword groups. To form codeword groups we start with an initial codeword chosen from the set and compare it with the other codewords to find the group of codewords with minimum Hamming distance. For the current scenario we have constructed 240 groups of 50 codewords each, and we form one extra group for the codewords which do not participate in any group. The observed intra-group Hamming distance is in the range 2-6; in extreme cases a distance of 8-9 is observed.

Similarity measure [2]: Given an encoded image having a representation similar to that of a text document, image features can be extracted based on codeword frequency. The feature vector for the signature template I1 and the feature vector for the test signature I2 are given below:

   F1 = {W11, W21, ..., WN1}   (2)

For I2 it is given by

   F2 = {W12, W22, ..., WN2}   (3)

In the histogram model, Wij = Fij, where Fij is the frequency of group Ci appearing in Ij. Thus, the feature vectors of I1 and I2 are the codeword histograms. The similarity measure is defined as

   s(I2, I1) = 1 / (1 + dis(I2, I1))   (4)

where the distance function is

   dis(I2, I1) = Σ_{j=1}^{N} |Wj1 - Wj2| / (Wj1 + Wj2)   (5)

We get the codeword histogram as shown in Fig. 9.

Fig. 9 Codeword histogram for the signature shown in Fig. 4.

The codeword histograms can be compared using the similarity measure shown in Equation (5). For all these features we have used user-specific thresholds, which give better results than common thresholds. We have used a set of training signatures containing 8 signatures; the thresholds are calculated by using the training procedure discussed in [1].
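
A compact sketch of the codeword extraction and the histogram comparison in Equations (2)-(5). The mapping from codewords to groups (group_of) is assumed to be precomputed by the Hamming-distance grouping described above, which is not reproduced here; the last group index is used for ungrouped codewords.

```python
import numpy as np

def block_codewords(template):
    """Split a 160 x 200 binary template into 4 x 4 blocks and pack each into a 16-bit codeword."""
    r, c = template.shape
    blocks = template.reshape(r // 4, 4, c // 4, 4).swapaxes(1, 2).reshape(-1, 16)
    return blocks.astype(int) @ (1 << np.arange(16))        # one integer per block (2000 blocks)

def codeword_histogram(codewords, group_of, n_groups):
    """group_of: dict mapping a codeword to its group index (assumed precomputed)."""
    hist = np.zeros(n_groups)
    for cw in codewords:
        hist[group_of.get(int(cw), n_groups - 1)] += 1      # unmatched codewords -> extra group
    return hist

def similarity(hist1, hist2):
    """Equations (4) and (5): s = 1 / (1 + sum_j |Wj1 - Wj2| / (Wj1 + Wj2))."""
    denom = hist1 + hist2
    mask = denom > 0                                        # guard added to skip empty groups
    dis = np.sum(np.abs(hist1[mask] - hist2[mask]) / denom[mask])
    return 1.0 / (1.0 + dis)
```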


VI. GEOMETRIC CENTERS OF SIGNATURE TEMPLATE

We use the successive geometric centers [1] of a signature as a global parameter in the development of a signature verification system. This parameter is derived from the center of mass of an image segment. The term 'successive' describes the nature of the feature extraction process: we use the process recursively to generate a set of points called geometric centers. We find the center of mass of the given signature template and divide the template into two parts at the center of mass; this process is repeated until we get the specified number of points.

Fig. 10 Vector Quantization applied to the signature template (the 200 x 160 pixel template is split into 4 x 4 pixel blocks, mapped to 16-bit codewords sorted in gray code, and accumulated into a codeword-group histogram).

This algorithm requires splitting the image four times, and we extract 24 points in each mode; this is discussed in detail in the following part. The process requires a signature template of larger size to capture the details properly; after testing various sizes we have implemented this feature using a size of 320 x 240 pixels.

A. Geometric center of an image

The geometric center gives an idea of the distribution of pixels; physically it is the point where the center of mass of the object is located. For an image the geometric center is defined by (Cx, Cy), where

   Cx = ( Σ_{x=1}^{xmax} Σ_{y=1}^{ymax} x · b[x, y] ) / ( Σ_{x=1}^{xmax} Σ_{y=1}^{ymax} b[x, y] )
   Cy = ( Σ_{x=1}^{xmax} Σ_{y=1}^{ymax} y · b[x, y] ) / ( Σ_{x=1}^{xmax} Σ_{y=1}^{ymax} b[x, y] )   (6)

For the current case we consider a normalized signature template of 320 x 240 pixels resolution. To develop the new set of feature points we adopt a method of splitting the template at the geometric centers and finding the centers of mass of the two segments obtained after splitting. We perform the splitting in two manners, once horizontally and once vertically, and use the algorithm to generate two sets of points based on the vertical splitting mechanism and the horizontal splitting mechanism.
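
Equation (6) reduces to a short routine for a binary image; the sketch below takes b[x, y] = 1 for ink pixels and returns the center in (Cx, Cy) = (column, row) order.

```python
import numpy as np

def geometric_center(b):
    """b: 2-D binary array (rows = y, columns = x). Returns (Cx, Cy) per Equation (6)."""
    ys, xs = np.nonzero(b)
    if xs.size == 0:
        return 0.0, 0.0        # empty segment: an assumption, not specified in the paper
    return xs.mean(), ys.mean()
```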

B. Feature points based on vertical splitting


Six feature points are retrieved based on vertical splitting; here the feature points are nothing but geometric centers. The procedure for finding the feature points by vertical splitting is given in the following algorithm (a code sketch of it follows Fig. 11).

Algorithm (feature points based on vertical splitting):
Input: static signature image after moving the signature to the center of the image.
Output: v1, v2, v3, v4, v5, v6 (feature points).
1. Split the image with a vertical line at the center of the image; this gives the left and right parts of the image.
2. Calculate the geometric centers v1 and v2 for the left and right parts correspondingly.
3. Split the left part with a horizontal line at v1 and find the geometric centers v3 and v4 for the top and bottom parts of the left part correspondingly.
4. Split the right part with a horizontal line at v2 and find the geometric centers v5 and v6 for the top and bottom parts of the right part correspondingly.

Fig. 11 shows the feature points retrieved from the signature image. These features have to be calculated for every signature image in both training and testing.

Fig. 11 Feature points retrieved from the signature image by vertical splitting of depth 1.
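
A sketch of the depth-1 vertical splitting algorithm above, built on the geometric_center() helper from the previous sketch. Rounding the split coordinate to an integer row is an assumption.

```python
def vertical_split_points(image):
    """Return feature points v1..v6 as (x, y) pairs in full-image coordinates."""
    w = image.shape[1]
    mid = w // 2
    left, right = image[:, :mid], image[:, mid:]

    v1 = geometric_center(left)                      # step 2
    rx, ry = geometric_center(right)
    v2 = (rx + mid, ry)                              # shift back to full-image x

    y1 = int(v1[1])                                  # step 3: split the left part at v1
    v3 = geometric_center(left[:y1, :])
    bx, by = geometric_center(left[y1:, :])
    v4 = (bx, by + y1)

    y2 = int(ry)                                     # step 4: split the right part at v2
    tx, ty = geometric_center(right[:y2, :])
    v5 = (tx + mid, ty)
    cx, cy = geometric_center(right[y2:, :])
    v6 = (cx + mid, cy + y2)
    return v1, v2, v3, v4, v5, v6
```

Applying the same routine again to the four sub-parts yields the depth-2 points described next.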

A similar sequence is generated by horizontal splitting; next we discuss the generation of geometric centers of depth 2. The previous procedure uses a splitting depth of one to give a set of six points. We extend this concept to a splitting depth of two so that a total of 24 points is generated; for this we apply the splitting algorithm to the four subparts obtained at splitting depth 1. This operation is illustrated in Fig. 12 for vertical splitting.

Fig. 12 Feature points retrieved from the signature template by vertical splitting of depth 2.

By this procedure we obtain a total of 48 feature points (v1...v24 and h1...h24). This set of points can be used for comparing two signatures. For comparison purposes we use the coefficient of Extended Regression Squared (ER2) [2]. R-squared, also called the coefficient of determination, can be interpreted as the fraction of the variation in Y that is explained by X. R-squared can be further extended as:

   ER2 = [ Σ_{j=1}^{M} Σ_{i=1}^{n} (x_ji - x̄_i)(y_ji - ȳ_i) ]^2 / [ Σ_{j=1}^{M} Σ_{i=1}^{n} (x_ji - x̄_i)^2 · Σ_{j=1}^{M} Σ_{i=1}^{n} (y_ji - ȳ_i)^2 ]   (7)

where
n = number of dimensions (for the current scenario we have two dimensions)
x_ji = points of the first sequence
y_ji = points of the second sequence
and X, Y are the two sequences to be correlated, each of two dimensions.
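
Read directly, Equation (7) can be sketched as follows for two point sequences X and Y, each an array of shape (M, 2); taking deviations from the per-dimension means is an assumption about the exact form of the formula.

```python
import numpy as np

def er2(X, Y):
    """Extended R-squared (Equation 7) between two point sequences of shape (M, n)."""
    X, Y = np.asarray(X, dtype=float), np.asarray(Y, dtype=float)
    dx = X - X.mean(axis=0)            # deviations from the per-dimension means
    dy = Y - Y.mean(axis=0)
    num = np.sum(dx * dy) ** 2
    den = np.sum(dx ** 2) * np.sum(dy ** 2)
    return num / den if den else 0.0   # guard for degenerate sequences (assumption)
```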

VII. RESULTS

We have used a signature database consisting of 984 signatures from 75 different persons. Per person, 12 signatures are collected, out of which 8 signatures are used for threshold calculation and record creation; the remaining signatures are used as genuine test signatures. From some arbitrary persons we have collected forged signatures for testing purposes: 125 skilled forgeries and 30 casual or unskilled forgeries. The total number of signatures used for testing is 1139, scanned at 600 dpi. Out of the 1139 samples, 480 signatures were used for user enrollment, 232 were genuine test signatures, 127 were skilled forgeries, 35 were casual or unskilled forgeries, 250 were test signatures of un-enrolled users, and 30 signatures were unusable due to distortion.

We have tested individual modules for each cluster based feature discussed above and the final system implementing the feature sets together. The final system uses user-specific thresholds and the training mechanism discussed in [1], and we have implemented a weighted-comparator based classifier in it. The metrics FAR (False Acceptance Rate) and FRR (False Rejection Rate) [3] are evaluated. In the final program we have integrated all the modules and designed a training mechanism to decide the thresholds separately for each user. For training we collect 12 signatures per person and the thresholds are calculated from the variance of the feature points. This gives better performance as compared to the individual performance.
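
For reference, FAR and FRR at a given decision threshold can be sketched as below from similarity scores of genuine and forged test attempts; the score scale and threshold here are placeholders, not the system's actual comparator.

```python
import numpy as np

def far_frr(genuine_scores, forged_scores, threshold):
    """FAR = accepted forgeries / total forgeries; FRR = rejected genuine / total genuine (in %)."""
    genuine = np.asarray(genuine_scores, dtype=float)
    forged = np.asarray(forged_scores, dtype=float)
    far = np.mean(forged >= threshold) * 100     # forgeries wrongly accepted
    frr = np.mean(genuine < threshold) * 100     # genuine signatures wrongly rejected
    return far, frr
```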


We have achieved accuracy of up to 95%; the FAR-FRR plot is given below. Using this test bed we have performed a total of 353 tests in verification mode and 257 tests in recognition mode. The system has a decision threshold of 60% for both the signature verification and the signature recognition modes. Out of the 353 verification tests, 152 were for genuine signatures and 201 were for forged signatures. For the recognition mode we made 135 tests with genuine signatures and 122 tests with forged signatures. Fig. 13 shows the FAR-FRR plot for the signature recognition system in verification mode; the EER is 3.29%. At the selected threshold level of 60% we have achieved a final FAR of 2.5% and an accuracy of 95.08%.

Fig. 13 FAR-FRR plot for the Signature Recognition System in Verification Mode.

Summarizing all the results, we list the false acceptance and false rejection ratios of all the modules and of the final system in the following table.

TABLE II
FALSE ACCEPTANCE AND FALSE REJECTION RATIOS REPORTED BY THE SYSTEM

Sr.   Feature               FAR     FRR
1     Walsh Coefficients    40%     42%
2     Vector Histogram      12%     22%
3     Grid Feature          8%      12%
4     Texture Feature       14%     20%
5     Final System          2.5%    6.5%

TABLE III
GROUPWISE PERFORMANCE METRICS FOR THE FINAL SYSTEM (ALL SAMPLE SIGNATURES)

Test Samples          Ratio   Result
Genuine               TAR     93.42
                      FRR     06.58
Forged (Casual)       FAR     00.00
                      TRR     100.00
Forged (Skilled)      FAR     05.60
                      TRR     94.40

TABLE IV
PERFORMANCE METRICS IN PERCENTAGE FOR THE SIGNATURE RECOGNITION SYSTEM

Sr.   Parameter   % Value
1     FAR         02.50
2     FRR         06.58
3     TAR         93.42
4     TRR         97.50
5     CCR         95.46
6     FCR         04.54

VIII. CONCLUSION

In this paper we have discussed an off-line signature recognition system based on cluster features. We have developed an off-line signature recognition system based on these features; the system uses user-specific thresholds. On the database of 1139 signatures we have performed 600 tests covering both the signature verification and recognition modes. The system has reported an accuracy of 95.46% (CCR). These features are easy to implement and can be explored further with more sophisticated training mechanisms and classifiers such as neural networks to achieve higher recognition rates.

REFERENCES

[1] B. Majhi, Y. S. Reddy, D. Prasanna Babu (2006), "Novel Features for Off-line Signature Verification", International Journal of Computers, Communications & Control, Vol. I.
[2] L. Zhu, A. Rao, A. Zhang (2002), "Theory of Keyblock-based Image Retrieval", ACM Journal, Vol. V, No. N, pp. 1-32.
[3] H. Baltzakis, N. Papamarkos (2001), "A new signature verification technique based on a two-stage neural network classifier", Engineering Applications of Artificial Intelligence.
[4] J. K. Solanki (1978), "Image Processing using fast orthogonal transform", PhD Thesis submitted to IIT Mumbai, pp. 30-31.
[5] A. K. Jain, A. Ross, S. Prabhakar (2004), "An Introduction to Biometric Recognition", IEEE Transactions on Circuits and Systems for Video Technology, Vol. 14, No. 1.
[6] H. B. Kekre, T. Sarode (2005), "Clustering Algorithm for codebook generation using Vector Quantization", Proceedings of National Conference on Image Processing, TSEC, Mumbai.
