A SOM Combined with KNN for Classification Task

Leandro A. Silva, Emilio Del-Moral-Hernandez



Abstract—Classification is a common task that humans perform when making a decision. Artificial Neural Network (ANN) or statistical techniques are used to support automatic classification. This work addresses a method based on the Self-Organizing Map ANN (SOM) and the K-Nearest Neighbor (KNN) statistical classifier, called SOM-KNN, applied to digit recognition in car plates. While being much faster than more traditional methods, the proposed SOM-KNN keeps competitive classification rates with respect to them. The experiments presented here contrast SOM-KNN with the individual classifiers, SOM and KNN; the resulting classification rates are 89.48±5.6, 84.23±5.9 and 91.03±5.1 percent, respectively. The equivalence between the SOM-KNN and KNN recognition results is confirmed with an ANOVA test, which shows a p-value of 0.27.

I. INTRODUCTION

Classification is a habitual problem faced by humans when making a decision. In classification processes done by humans, an object is mapped into a pre-defined group or class. This mapping considers some features of the investigated object and previous knowledge about it, in order to decide to which class it belongs [1].

In an automatic classification process, the decision is based only on features of the object. Many problems in medicine, marketing and other areas can be cast as classification problems. For example, in medicine, features extracted from a mammogram can be used to recognize a tumor as benign or malignant [2]. In marketing, customer information stored in a database can be used to decide whether the customer is a potential buyer [3].

The automatic classification process can be divided into the following steps [4]:

i. representation: the target object is represented through its features. In image problems, the usual features are color, shape or texture.

ii. adaptation: selecting the best feature subset, one that ensures discriminatory information.

iii. generalization: training and evaluation of a classifier on the selected features.

There are several techniques for representation (for details, see [5], [6]), for example, image resizing and the Discrete Wavelet Transform [5], [6].

L. A. Silva is with the School of Computing and Informatics of the Mackenzie Presbyterian University, São Paulo, Brazil (e-mail: [email protected]).

Emilio Del-Moral-Hernandez is with the Polytechnic School of the University of São Paulo, São Paulo, Brazil (e-mail: [email protected]).

Selection is a step to eliminate redundant features and consequently reduce the dimension of the feature vector, which facilitates the classification process [5].

In the classification step, several techniques can be used. These can be separated into Artificial Neural Network (ANN) and statistical techniques. ANNs can be used by choosing architectures that perform classification by estimation, forecasting or clustering [1], [7-9]. A key advantage of ANNs is classification speed: once the ANN is trained, the classification process is very fast.

Statistical classifiers, on the other hand, do not have a training step, and in general their classification process is slower than that of ANNs; but the methods are non-parametric, and for this reason they are used as benchmarks in many classification problems [8], [10].

This paper introduces a classification method based on the Self-Organizing Map artificial neural network (SOM) [11] and the K-Nearest Neighbor statistical classifier (KNN) [8]. The SOM is a clustering method in which each neuron of the trained map represents a group of input patterns. If these input patterns are labeled, the units can be labeled as well [12], [13]. In a classification task, an input pattern to be classified is compared with the neurons of the map, and the label of the best match unit (BMU) is used for classification. A problem with this method arises when the BMU neurons are located in border regions. The KNN, in a classification process, compares the input pattern with all database samples and decides based on the classes of the K nearest neighbors. The time consumed in the classification task using KNN is frequently a problem of this method in real applications. On the other hand, only two parameters are required: the distance metric and the value of K.
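To make the contrast concrete, the sketch below shows the plain KNN classifier just described, written in NumPy. It is an illustration of the standard algorithm rather than the paper's own implementation (which was done in MATLAB), and the names knn_classify, train_X and train_y are ours.

```python
import numpy as np

def knn_classify(x, train_X, train_y, k=1):
    """Classify x by majority vote among its k nearest training samples."""
    # Euclidean distance from x to every stored sample: this full scan of
    # the database is exactly the cost the paper seeks to reduce.
    dists = np.linalg.norm(train_X - x, axis=1)
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]
```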

The approach addressed here explores the advantages of both SOM and KNN. It consists of using the best match units of the SOM map to define a subset of the input patterns that are potentially similar to the one to be classified. This subset is retrieved and then used by the KNN, which performs the classification. In this approach, the SOM works as a pre-processing stage for the KNN classifier, and the method is called SOM-KNN.

For a conclusive experiment, SOM-KNN is contrasted with the individual classifiers, SOM and KNN. For this, a database of plate digits is used as a case study.

The remainder of the paper is organized as follows: in Section II, the plate digits are discussed, including a brief introduction to the feature extraction method and the normalization process. Section III gives a quick explanation of Self-Organizing Maps and the methods for using them in classification.


Experimental results and a comparative discussion are given in Section IV. In the last section, some conclusions are provided.

II. PLATE DIGITS DATABASE

Feature extraction from plate images is addressed in the literature by different techniques. Zhang et al. introduce the Discrete Wavelet Transform with the Haar function to represent the images [14]. Anagnostopoulos et al. contrast Sliding Concentric Windows (SCW) and connected component analysis with the Hough Transform, Gabor Transform, Wavelet Transform and Vector Quantization [15].

In this work, the technique used for feature extraction is image resizing. The experiments aim to recognize the last plate digits. A potential practical application of this recognition system is in populous cities that define which plates may circulate on each weekday in specific metropolitan areas with restricted traffic policies. Thus, the problem of whole-plate identification is not discussed here; for that, see [16], [17].

The digits extracted from the plates for the training database are organized into classes 0 to 9. All digits are normalized by the maximum pixel intensity and resized so that the images have the same size. In Fig. 1, samples of the digit 0 and the normalization results are shown. For this sample, the images are resized to 20 lines and 10 columns (20 x 10 pixels), but other scales are tested in Section IV. As discussed in Section IV, experiments were conducted to verify the need for normalization and the best resizing. Finally, the images are transformed into pixel vectors for the classification process.
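As an illustration of this preprocessing, the sketch below normalizes a digit image by its maximum pixel intensity, resizes it to 20 x 10 and flattens it into a feature vector. The paper does not state which resizing algorithm was used, so simple nearest-neighbor sampling is assumed here; the function name preprocess_digit is ours.

```python
import numpy as np

def preprocess_digit(img, rows=20, cols=10):
    """Normalize a grayscale digit image, resize it to rows x cols and
    flatten it into a pixel vector (nearest-neighbor resizing assumed)."""
    img = img.astype(float) / img.max()  # normalize by maximum pixel intensity
    # Nearest-neighbor sampling of row and column indices.
    r_idx = (np.arange(rows) * img.shape[0] / rows).astype(int)
    c_idx = (np.arange(cols) * img.shape[1] / cols).astype(int)
    resized = img[np.ix_(r_idx, c_idx)]
    return resized.ravel()  # e.g., a 200-dimensional vector for 20 x 10
```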

III. SELF-ORGANIZING MAPS

Self-Organizing Maps (SOM) [11] consist of neurons located on a regular low-dimensional grid, usually two-dimensional (2-D). Typically, the lattice of the 2-D grid is either hexagonal or rectangular. We assume that each input pattern xi from the set of feature vectors X is defined as a real vector xi = [xi1, xi2, ..., xid]T ∈ ℜd. The SOM training algorithm is iterative. Each neuron or unit has a d-dimensional weight vector wu = [wu1, wu2, ..., wud]T ∈ ℜd. Initially, at t = 0, wu is initialized randomly, preferably from the input vector domain [9], [11]. At each training step t, an input vector xi(t) is randomly chosen from the training set X. Distances between xi(t) and all weight vectors wu are computed. The winning neuron, or best match unit (BMU), is the one whose wu is closest to xi(t). Methodologies using SOM have been successfully applied to data visualization in high dimension, manuscript text classification, data mining and other tasks (see [11]).
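A minimal sketch of this iterative training in NumPy follows. The Gaussian neighborhood and the linearly decaying learning rate and radius are common choices from the standard algorithm [11], assumed here rather than taken from this paper.

```python
import numpy as np

def train_som(X, grid_rows=10, grid_cols=10, n_steps=10000,
              lr0=0.5, sigma0=3.0, seed=0):
    """Train a 2-D SOM on the (n, d) data matrix X; returns the weight
    matrix W (one row per unit) and the grid coordinates of each unit."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Initialize the weight vectors randomly from the input domain.
    W = X[rng.integers(0, n, grid_rows * grid_cols)].astype(float)
    coords = np.array([(r, c) for r in range(grid_rows) for c in range(grid_cols)])
    for t in range(n_steps):
        frac = t / n_steps
        lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
        x = X[rng.integers(0, n)]                       # random training sample
        bmu = np.argmin(np.linalg.norm(W - x, axis=1))  # best match unit
        # Gaussian neighborhood centered on the BMU, over the 2-D grid.
        grid_d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
        h = np.exp(-grid_d2 / (2 * sigma ** 2))
        W += lr * h[:, None] * (x - W)                  # update toward x
    return W, coords
```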

Fig. 1. Examples of digit normalization.

In this work, the SOM is used as a pre-processing step. However, there are several ways to use a SOM in classification. The next subsection discusses the SOM used directly as a classifier and the approach addressed here, which uses the SOM as pre-processing.

A. SOM combined with KNN

To use the SOM in a recognition process, after it is trained, the images of the database (or new images of the same type), represented by their feature vectors, are projected onto the SOM map. This projection means measuring the Euclidean distance between the input pattern (i.e., the feature vector) and the weight vectors of the SOM map. This defines the best match unit (BMU), as well as a set of best match units, which can be ordered from first (1BMU) and second (2BMU) to last (MBMU). When the input vector has an associated label, it can also be used to label the neurons of the SOM. This labeling can be done by vote (the class with the highest frequency in a BMU neuron) or by histogram (the frequency of each class in a BMU neuron). For illustration, an artificial database distributed in three classes (C1, C2 and C3) was created, as represented in Fig. 2. With samples of those classes, a SOM map with six neurons (3×2) was trained. The units of the SOM map are identified with sequential letters (A, B, C, ..., F), as in Fig. 3. The projection (Euclidean distance) associates each input pattern with its BMU.

The first possibility for labeling the map, vote, considers all classes of the input vectors having a given SOM neuron as BMU, and takes the class with the highest frequency to label the unit. Fig. 4a illustrates this method of labeling the SOM map: neuron A represents three input patterns from class C1 and is therefore labeled with class C1 [12], [13]. The second possibility for labeling the SOM map, histogram, considers all classes mapped into each unit and labels it with the frequency of each class. In Fig. 4b, unit B (Fig. 2) represents two input patterns of C2, denoted C2(2), and one of C1, denoted C1(1).

Fig. 2. Example of an artificial database with 3 classes, plotted with the BMUs. The axes represent features, and fd stands for the high dimension of the input patterns. The hexagon with the unknown label "?" represents a new input vector not considered in the learning process.

Fig. 3. Example of a SOM map with six neurons identified by sequential letters. These letters serve only to identify each unit.

In a classification process, when a new input pattern (represented by the hexagon with the unknown label "?" in Fig. 2) is projected onto the previously labeled SOM map, classification by vote leads to class C2, because unit B is the BMU and it is labeled with that class. On the other hand, classification based on the histogram gives a fuzzy response for unit B, denoting C1(1), one input vector of class C1, and C2(2), two input patterns of class C2. In conclusion, these two ways of using the labeled SOM map in classification can show problems when a new input pattern lies in a border region, where neither labeling by vote nor labeling by histogram gives a sharp indication of the proper classification.
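The sketch below illustrates labeling the trained map by vote and classifying with the BMU label, continuing the previous sketches (W is the matrix of trained weight vectors, X and y are the training patterns and their labels; the function names are ours).

```python
import numpy as np

def label_map_by_vote(W, X, y):
    """Assign to each unit the most frequent class among the training
    patterns whose BMU is that unit (None for units with no patterns)."""
    bmus = np.argmin(np.linalg.norm(X[:, None, :] - W[None, :, :], axis=2), axis=1)
    labels = [None] * len(W)
    for u in range(len(W)):
        hist = y[bmus == u]                      # per-unit class histogram
        if hist.size:
            vals, counts = np.unique(hist, return_counts=True)
            labels[u] = vals[np.argmax(counts)]  # vote: keep the majority class
    return labels, bmus

def som_classify(x, W, unit_labels):
    """Classify x with the label of its best match unit."""
    return unit_labels[np.argmin(np.linalg.norm(W - x, axis=1))]
```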

Another classification approach using the SOM map, proposed and explored here, is to use the BMU neurons only to define the nearest region with similar patterns; then, KNN is performed on the input patterns represented by the units of that region, and the pattern to be classified is assigned the class of the closest retrieved input pattern. This is illustrated in Fig. 5. The proposed approach can be considered a pre-processing stage for the KNN classifier, because the search for nearest neighbors is limited to a region, as shown in Fig. 6, differently from the traditional KNN method, which visits all input patterns. Therefore, the classification is decided by the pattern's neighborhood, as in traditional KNN, with the important difference that this approach exploits the SOM topology: more than one BMU has to be visited to avoid the border-region problem.

Fig. 4. SOM map labeled with the classes of the input vectors it represents. In a) the map is labeled by the vote process and in b) by the histogram. The classes are taken from the input vector labels used during training.

Fig. 5. A representation of the approach explored here, using the BMUs to define the region of similar input vectors to be considered in the KNN classification.

The four steps to implement SOM-KNN are as follows (a code sketch is given after the list):

1) For a new input pattern, verify all BMUs and sort them in matching order. An alternative approach is to define only the first BMU and, after this, its closest neighbor units. For a SOM trained with a rectangular lattice topology, the four direct neighbor units are visited (called here SOM4-KNN). For a hexagonal lattice topology, the six neighbor units are considered (SOM6-KNN), and so on.

2) A parameter, called here pn (patterns number), is defined, which ensures the visit of at least two BMUs during classification. This parameter is associated with the number of input patterns that each BMU represents. Here, its value is the sum of the numbers of input patterns associated with the two most representative BMUs. For SOM4-KNN or SOM6-KNN, this step is not necessary.



3) The nearest BMUs to be visited are defined, and the input patterns that they represent are retrieved. This defines the region of potentially similar patterns.

4) The KNN is performed using the new input pattern and the input patterns retrieved from the BMUs. The class attributed to the new input pattern is that of the nearest input pattern. Here the value of K is one, but other values (K > 1) can be considered. At this point, the classification is performed.
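A sketch of these four steps is given below. It reuses W, X, y and the BMU assignments bmus from the previous sketches; the stopping rule involving pn is one plausible reading of step 2, not code from the paper.

```python
import numpy as np

def som_knn_classify(x, W, X, y, bmus, pn=6, k=1):
    """SOM-KNN: restrict the KNN search to patterns of the nearest BMUs."""
    # Step 1: sort all units by their distance to the new pattern.
    order = np.argsort(np.linalg.norm(W - x, axis=1))
    # Steps 2-3: visit BMUs in matching order until at least two units have
    # been visited and at least pn training patterns have been retrieved.
    idx, visited = [], 0
    for u in order:
        idx.extend(np.flatnonzero(bmus == u))
        visited += 1
        if visited >= 2 and len(idx) >= pn:
            break
    idx = np.array(idx)
    # Step 4: plain KNN (1-NN by default) on the retrieved subset only.
    dists = np.linalg.norm(X[idx] - x, axis=1)
    nearest = np.argsort(dists)[:k]
    vals, counts = np.unique(y[idx][nearest], return_counts=True)
    return vals[np.argmax(counts)]
```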

Thus, this SOM-KNN approach mitigates the border-region problem of the SOM map labeled by vote [13]: in the example, the categorization decision is made from the input vectors represented by neurons A and B; see Fig. 6.

This approach is similar to the well-known KNN algorithm. For this reason, in the experimental results, besides the contrast with classification using the SOM map by vote, we also compare it to the KNN; the results are shown in terms of recognition rate and time consumed in classification.

The next section discusses the experimental methodology and shows the results of the contrastive experiments.

Fig. 6. Example of the use of the SOM-KNN approach in a classification process, defining a potential region of similar vectors.

IV. EXPERIMENTAL RESULTS

The testing system was implemented on an Intel Core 2, 1800 MHz, 2 GB RAM computer using MATLAB 7.1. The SOMToolbox [18], available for download, was used, and some new specific functions were implemented to conduct the classification experiments previously discussed.

The plate digits database was created for the experiments of this paper. Some cars were photographed, the plates were manually segmented, and a total of 132 digits were extracted. The number of images per digit is shown in Fig. 7. For the experiments, the database is randomly separated into training (80%) and test (20%) sets.

Fig. 7. Histogram of the number of images per digit.

The experiments are separated into two parts. In the first part, the following experiments are carried out:

- normalization: recognition rates for non-normalized images and for images normalized by the maximum pixel intensity are obtained; the results are summarized in Table I;

- image resizing: after the need for normalization is established, experiments with different image sizes are carried out; the results are in Table II;

- size of the SOM map: the first experiments used a SOM map of 10 x 10 neurons; experiments with other sizes are then carried out; the results are in Table III.

TABLE I
NORMALIZATION TEST

                  Non-normalized   Normalized
Recognition rate  69.23            84.23

Experimental results to define the necessity of normalization of the image database.

TABLE II
IMAGE RESIZING DEFINITION

                  10x10   20x10   20x20
Recognition rate  74.23   84.23   79.23

Experimental results to define the best size of the plate images.


The results of the first part of the experiments support some conclusions: image normalization is necessary, the best image resizing is 20 lines by 10 columns (20 x 10), and the best SOM map size is 10 x 10. For these experiments, only the SOM classifier (the one that labels the map by vote) was used. In the next experiments, an exhaustive contrast between the classifiers is carried out.

For the second part of the experiments, SOM, SOM4-KNN and SOM-KNN (all discussed in Section III) are contrasted with the KNN classifier. The pn (patterns number) parameter was chosen with the support of Fig. 8, which shows the number of input patterns per neuron. The value used was 6 (pn = 6), which ensures the visit of at least two neurons. The value of K used in the experiments was 1, i.e., 1-NN.

For the comparative experiments, the database is shuffled to choose the training and test sets. This process is repeated 30 times, so the classifier results are shown in terms of mean and standard deviation. These results, together with the time consumed in the classification process, are in Table IV.

Fig. 8. Number of input patterns per neuron.

The experimental results allow some preliminary conclusions. First, the SOM has the worst recognition rate. The approaches addressed here, SOM-KNN and SOM4-KNN, are better than it, with SOM-KNN indicating a better result than SOM4-KNN, and the KNN shows the best rate of all. On the other hand, the time consumed in the experiments follows the inverse order, i.e., the SOM is the fastest.

However, the contrasted results are based on a mean of 30 experiments, and the standard deviations indicate a possible equivalence of the results.

For a conclusive discussion, an ANOVA test is performed. The result is expressed in Fig. 9 and in Table V. The p-value measures the evidence for the null hypothesis, in this case the equivalence of the results: values near 0 mean different results, and 0.2 < p < 0.4 means equivalent results [19].
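For reference, such a test can be reproduced with scipy.stats.f_oneway, as in the sketch below; the rate arrays must come from the 30 shuffled train/test runs described above, and the equivalence threshold encodes the criterion from [19] as read in this paper.

```python
import numpy as np
from scipy.stats import f_oneway

def compare_classifiers(rates_a, rates_b):
    """One-way ANOVA on two sets of per-run recognition rates; returns the
    p-value and whether 0.2 < p < 0.4, read here as equivalence [19]."""
    f_stat, p_value = f_oneway(np.asarray(rates_a), np.asarray(rates_b))
    return p_value, 0.2 < p_value < 0.4

# Usage (illustrative): p, equivalent = compare_classifiers(som_knn_rates, knn_rates)
```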

Fig. 9. Box plot of the ANOVA test for the classifiers.

With these results, it is possible to conclude that the SOM-KNN approach is better than SOM and SOM4-KNN (p = 8x10-4 and p = 0.02, respectively). The main conclusion is that the SOM-KNN recognition rate is equivalent to that of KNN (p = 0.27), which is considered a benchmark. Another important result is that SOM-KNN classifies 14.83 times faster than KNN.

TABLE III
MAP SIZE DEFINITION

Map size          5x5     7x7     10x10
Recognition rate  69.23   80.77   84.23

Experimental results to define the best size of the SOM map.

TABLE IV
RESULTS OF RECOGNITION

Classifier   Recognition rate   Time (sec.)
SOM          84.23±5.9          0.01
SOM4-KNN     85.77±6.6          1.23
SOM-KNN      89.48±5.6          1.33
KNN          91.03±5.1          19.74

Comparative study between the classifiers, expressed in terms of mean and standard deviation of the recognition rate.

TABLE V
COMPARATIVE P-VALUES

            SOM      SOM4-KNN  SOM-KNN  KNN
SOM         1        0.34      8x10-4   1x10-5
SOM4-KNN    0.34     1         0.02     9x10-4
SOM-KNN     8x10-4   0.02      1        0.27
KNN         1x10-5   9x10-4    0.27     1

Matrix of pairwise p-values contrasting the classifiers.

V. CONCLUSIONS

This paper addressed a classification method based on SOM and KNN. The proposed classification method can be implemented in two similar ways. The first, called SOM-KNN, uses the SOM to define the best match unit (BMU) for a new pattern and to find the neighbor neurons in matching order (1BMU, 2BMU, ..., MBMU). This allows the definition of a reduced set of input patterns that are closest to the pattern to be classified. With this reduced set of input samples, the method then performs the classical 1-NN to carry out the classification. The second version of the proposed method, called SOM4-KNN, differs from the first as follows: after defining the BMU, it visits the four direct SOM neighbors, considering the topology of the map.

The experiments done with the normalized plate digits, with 30 resamplings of the training and test sets, produced results in terms of mean and standard deviation of the classification rate. The close similarities among the obtained results were resolved with the ANOVA statistical test. From the literature, p-values in the interval 0.2 < p < 0.4 are considered to indicate equivalence. The experimental p-values thus support conclusions such as: the recognition rates of SOM and SOM4-KNN are similar (p = 0.34), as are those of SOM-KNN and KNN (p = 0.27). However, and this is an important advantage of the proposed method, in the performed contrastive experiments SOM-KNN classifies 14.83 times faster than KNN.

In future work on the SOM-KNN method, a detailed study toward a better definition of the pn (patterns number) parameter used in SOM-KNN is planned.

REFERENCES

[1] G. P. Zhang, "Neural Networks for Classification: A Survey", IEEE Transactions on Systems, Man and Cybernetics, vol. 30, no. 4, pp. 451-462, 2000.
[2] L. A. Silva, E. Del-Moral-Hernandez, R. M. Rangayyan, "Classification of breast masses using a committee machine of artificial neural networks", Journal of Electronic Imaging, vol. 17, no. 1, pp. 13-17, 2008.
[3] R. J. Sassi, L. A. Silva, E. Del-Moral-Hernandez, "A Methodology Using Neural Network to Cluster Validity Discovered from a Marketing Database", Brazilian Symposium on Artificial Neural Networks (SBRN), IEEE, pp. 3-8, 2008.
[4] Y. Rui, T. S. Huang, "Image retrieval: Current techniques, promising directions, and open issues", Journal of Visual Communication and Image Representation, vol. 10, pp. 39-62, 1999.
[5] V. Castelli, L. Bergman, Image Databases: Search and Retrieval of Digital Imagery, 1st ed. New York: John Wiley, 2001.
[6] R. C. Gonzalez, R. E. Woods, Digital Image Processing, Prentice Hall, Upper Saddle River, NJ, 2007.
[7] A. K. Jain, K. M. Mohiuddin, "Artificial Neural Networks: A Tutorial", Computer, vol. 29, no. 3, pp. 31-44, Mar. 1996.
[8] R. Duda, P. Hart, D. G. Stork, Pattern Classification and Scene Analysis, John Wiley, New York, 2000.
[9] S. Haykin, Neural Networks: A Comprehensive Foundation, Prentice Hall, Upper Saddle River, NJ, 1999.
[10] A. K. Jain, R. P. W. Duin, J. Mao, "Statistical Pattern Recognition: A Review", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 1, pp. 4-37, 2000.
[11] T. Kohonen, Self-Organizing Maps, 3rd extended ed., Springer, Berlin, Heidelberg, New York, 2001.
[12] L. A. Silva, E. Del-Moral-Hernandez, R. Moreno, S. S. Furuie, "Cluster-based classification using self-organising maps for medical image databases", Int. J. Innov. Comput. Appl., vol. 2, no. 1, pp. 13-22, 2009.
[13] A. Rauber, D. Merkl, "Automatic Labeling of Self-Organizing Maps: Making a Treasure-Map Reveal Its Secrets", Methodologies for Knowledge Discovery and Data Mining, pp. 228-237, 1999.
[14] H. Zhang, W. Jia, X. He, Q. Wu, "Learning-Based License Plate Detection Using Global and Local Features", 18th International Conference on Pattern Recognition (ICPR'06), vol. 2, pp. 1102-1105, 2006.
[15] C.-N. E. Anagnostopoulos, I. E. Anagnostopoulos, I. D. Psoroulas, V. Loumos, E. Kayafas, "License Plate Recognition From Still Images and Video Sequences: A Survey", IEEE Transactions on Intelligent Transportation Systems, vol. 9, no. 3, pp. 377-391, 2008.
[16] K. Deb, K.-H. Jo, "A vehicle license plate detection method for intelligent transportation system applications", International Journal of Cybernetics and Systems, vol. 40, no. 8, pp. 689-705, 2009.
[17] R. Al-Hmouz, S. Challa, "License plate localization based on a probabilistic model", Machine Vision and Applications, vol. 30, no. 10, pp. 138-164, 2008.
[18] SOMToolbox: a function package for MATLAB 5 implementing the Self-Organizing Map (SOM). April 2009.
[19] R. V. Hogg, J. Ledolter, Engineering Statistics, Macmillan, New York, 1987.
