A Novel Hybrid System for Skin Lesion Detection


A Novel Hybrid System for Skin Lesion Detection 1Andy Chiem, 2Adel Al-Jumaily, and 3Rami N. Khushaba

Mechatronics and Intelligent Systems Group Faculty of Engineering

University of Technology, Sydney 1Email: [email protected].

2,3Email: (Rkhushab, Adel)@eng.uts.edu.au.

Abstract

In this paper, a new intelligent method of classifying benign and malignant melanoma lesions is implemented. The system consists of four stages: image pre-processing, image segmentation, feature extraction, and image classification. As the first step of the image analysis, pre-processing techniques are implemented to remove noise and undesired structures from the images, using techniques such as median filtering and contrast enhancement. In the second step, a simple thresholding method is used to segment and localise the lesion; a boundary tracing algorithm is also implemented to validate the segmentation. Then, a wavelet approach is used to extract the features, specifically the Wavelet Packet Transform (WPT). Finally, the dimensionality of the selected features is reduced with Principal Component Analysis (PCA), and the features are supplied to Artificial Neural Network and Support Vector Machine classifiers for classification. The ability to correctly discriminate between benign and malignant lesions was about 95% for the Artificial Neural Network and 85% for the Support Vector Machine classifier.

1. INTRODUCTION

Skin cancer is common in Australia; in fact, it has one of the highest mortality rates in Australia compared with breast cancer and lung cancer [1]. Yet the survival rate for skin cancer is very good if it is detected early, and early detection is vital to the treatment process. Many researchers have advocated some form of automated early-diagnosis aid system to improve diagnostic accuracy and to combat the rapidly increasing incidence rate. Many experimental classifiers have been attempted, showing accuracy rates ranging from 60% to 92% [2]. The common approach to skin lesion classification from dermatoscopic images is composed mainly of four stages [3]: a pre-processing stage, a segmentation stage, a feature extraction stage, and a classification stage. The pre-processing stage involves filtering out unwanted characteristics of the dermatoscopic image; it can consist of noise suppression, removal of unwanted objects, dimensionality reduction, or colour-space transformation. After appropriate pre-processing, the area of interest is segmented from the healthy surrounding skin, from which the appropriate features can be extracted; suitable techniques include clustering, thresholding, and boundary tracing. After segmentation, a feature extractor is employed to describe characteristic properties of the data, such as colour information, shape descriptors, or texture properties. The final stage consists of developing the actual classifier: extracted features of an object are identified and a decision is made on which class the object is likely to belong to. The classifier could be a parametric model, a k-nearest neighbour classifier, or a neural network. This paper is organized as follows: section 2 presents the related work in the literature; in section 3, the proposed methodology is presented and explained; section 4 presents the practical experiments and results; and finally section 5 presents the conclusion.

2. BACKGROUND

During recent years many approaches have evolved in skin cancer recognition, most of them adapting different pattern recognition techniques to the problem. One of them is the computer-aided diagnosis (CAD) system for pigmented skin lesions presented by Schmid [4], with solutions for lesion boundary detection and for quantifying the degree of symmetry. The inputs to the system were images taken by Epiluminescence Microscopy (ELM), a technique which uses oil immersion to render the outer layer of the skin translucent. The image then went through the pre-processing stage, which detects the lesion using either boundary detection or an image segmentation technique. Once the lesion was localised, different features were quantified and used for the classification. A collection of benign and malignant lesions, together with their histopathology, was stored in a reference database that provides physicians with information during the diagnosis phase. The experiment showed that the dermatologists were not able to reproduce their results. The authors concluded that the feature extraction was restricted to a six-dimensional vector, which was used to classify the pigmented skin lesions as either benign or malignant. The classifier achieved a sensitivity of 60% and a specificity of 70% on a test of 100 skin lesion images. Aslandogan et al. [5] presented an unsupervised approach to border detection in skin lesion images. Their method is based

ISSNIP 2007

Authorized licensed use limited to: University of Technology Sydney. Downloaded on February 19, 2009 at 03:37 from IEEE Xplore. Restrictions apply.

on a modified version of the JSEG algorithm. The basic idea behind the algorithm is to separate the segmentation process into two independent phases: colour quantisation and spatial segmentation. The results were reviewed by expert dermatologists and were found to be highly accurate. Lee et al. [6] present an algorithm to identify skin lesions from digitised colour images collected in a clinical study of malignant melanomas. Their three-step algorithm separates the lesions from normal skin using a multi-stage median filter to suppress noise, a process to compute the threshold values of the lesions and the normal skin, and finally a rule-based system to identify the lesions. In the first stage, they implemented a special median filter called a Central Weighted Median Filter (CWMF). The CWMF puts more weight on the central pixel by extending the sample set with 2K copies of the central pixel prior to the median operation. The authors constructed a three-stage CWMF cascade, with eight CWMF operators in the first stage, two CWMF operators in the second stage, and one CWMF in the third stage; the construction is shown in Fig.1 below. Ganster et al. [3] developed a system to enhance the early recognition of malignant melanoma. As an initial step, the binary mask of the skin lesion is determined by several basic segmentation algorithms together with a fusion strategy. In the segmentation system, three different algorithms are used to segment a lesion, namely global thresholding, dynamic thresholding, and a 3-D colour clustering concept. A set of 122 features containing shape and radiometric features, as well as local and global parameters, is calculated to describe the malignancy of a lesion. Using statistical feature subset selection methods, 21 significant features were selected from this set. Finally, a k-nearest neighbour classifier is used for the classification, achieving a sensitivity of 81%. Maglogiannis and Zafiropoulos [7] introduced an efficient methodology for the image analysis and characterisation of digital images containing skin lesions using Support Vector Machines (SVM). Using basic image processing techniques, features of the skin lesion were extracted from the digital image; the extracted features were border-based and colour-based. The support vector machine algorithm performed very well in classifying skin lesions, achieving 94.1% correct classification. The authors compared the results of the SVM to two other classification methodologies, namely discriminant analysis and neural networks [8]. The results favoured the SVM algorithm, with discriminant analysis and neural networks performing approximately the same, achieving 88% correct classification.

Fig.1 - Three stage CWMF

where X(i, j) is the input of the filter, Y(i, j) is the output of the filter, and W is the size of the filter window.

In research conducted by Zagrouba and Barhoumi [9], colour images of skin lesions were identified by passing them through a series of pre-processing steps and using an artificial neural network classifier to distinguish melanoma from benign pigmented lesions. The first step in their system was to apply a median filter to minimise the influence of noise and small structures (such as hair) in the colour image. The authors identified the need for a specialised algorithm for the removal of thick hairs: they stated that the median filter is an insufficient method for thick-hair suppression and that the DullRazor [10] algorithm provides the best practical solution for suppressing hair in images covered by thick hairs. The median filter can intensify undesired pixels, which may degrade the segmentation process, whereas the DullRazor technique identifies the hair areas and replaces hair pixels with nearby non-hair pixels. The next step in the pre-processing sequence is to employ the Karhunen-Loeve (KL) transform to enhance the edges in the image. The KL transform works by projecting the three colour components onto the eigenvectors of their covariance matrix [9]. The system obtained 79.1% accuracy for the classification of malignant and benign lesions on real skin lesion images.
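As an illustration of the CWMF weighting described for Fig.1, here is a minimal single-stage sketch in Python; the window size and K values below are illustrative assumptions (the paper's full filter cascades eight, two, and then one such operators across three stages):

```python
import numpy as np

def central_weighted_median(img, window=3, k=2):
    """Single-stage Central Weighted Median Filter (CWMF) sketch:
    each output pixel is the median of its window neighbourhood
    extended with 2K extra copies of the central pixel, so the
    centre carries more weight than its neighbours."""
    pad = window // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = padded[i:i + window, j:j + window].ravel()
            centre = padded[i + pad, j + pad]
            # extend the sample set with 2K copies of the central pixel
            out[i, j] = np.median(np.concatenate([patch, np.full(2 * k, centre)]))
    return out
```

With a small K an isolated noise spike is still removed, while a larger K increasingly preserves the original pixel; that trade-off is what the three-stage cascade of Fig.1 exploits.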

3. METHODOLOGY

The main algorithm used is summarised in Fig.2. Each block stands for a different algorithm operation, and the subsequent sections elaborate on each stage of the proposed system.

A. Image Acquisition
The dataset of dermatoscopic images used in this paper was taken from the Department of Dermatology at the University of Iowa College of Medicine and from Derm Net. In total, 100 images were collected and used in the CAD system.


Fig.2 - Flowchart of the CAD system, showing the four main processing blocks: pre-processing, segmentation, feature extraction, and classification

Some of the images were taken using Epiluminescence Microscopy (ELM), while others were taken with a digital camera. The images were cropped to 256x256 pixels to reduce the computational resources needed to process them. ELM is an imaging technique that creates direct links between biological behaviour and distinct visual characteristics. Dermatoscopy is a non-invasive imaging technique that employs oil immersion to visualise subsurface skin structures by rendering the outer skin layer translucent [7]. The process is performed by applying immersion oil onto the skin lesion and pressing a specialised glass plate (dermatoscope or dermatoscopic camera) onto the lesion. The oil ensures that small cavities between the skin and the glass plate are filled, in order to reduce reflections. The dermatoscope is integrated with a strong light source to allow physicians to inspect the skin lesion, and the lenses of the dermatoscopic device usually have a 10x magnification. Fig.3 shows a sample of both the traditional and the dermatoscopic images used by physicians in analysing skin lesions.

B. Image Pre-processing
In the pre-processing stage, a median filter is applied to the images, followed by a contrast-enhancing technique. The median filter ensures that unwanted structures, such as fine hair, air bubbles, and noise, are eliminated from the image. The image contrast was then enhanced to ensure that the edges of the lesion were prominent; an example is shown in Fig.4. The size of the median filter was set to 11x11, which gave the best results in trial-and-error testing. The median filter is one of many filters available for image processing and is widely used because of its effectiveness in reducing unwanted structures in images. It works by taking every pixel and replacing it with the median pixel value computed in a given neighbourhood [4].

a. An example of a traditional image

b. A dermatoscopic image

Fig.3 Traditional and dermatoscopic images

Some of the properties of the median filter are summarised below [11, 12].

• The median filter is very effective in removing noise spikes that only cover a few pixels compared to the kernel neighbourhood size. Thus, it can be applied for removing thin lines, e.g. human hair, or isolated islands of pixels, e.g. small air bubbles.

• The median filter preserves spatial resolution well, i.e., it reduces blurring of edges compared to linear convolution filtering with Gaussian or averaging kernels.
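The pre-processing chain above can be sketched as follows; the 11x11 window comes from the paper's trial-and-error result, while the percentile-based contrast stretch is an assumed stand-in for the unspecified contrast-enhancement technique:

```python
import numpy as np
from scipy.ndimage import median_filter  # SciPy's n-dimensional median filter

def preprocess(img):
    """Pre-processing sketch: an 11x11 median filter suppresses fine
    hair, air bubbles, and noise spikes, then a linear contrast
    stretch makes the lesion edges more prominent. `img` is a 2-D
    greyscale array; the output is scaled to [0, 1]."""
    smoothed = median_filter(img, size=11)
    lo, hi = np.percentile(smoothed, (2, 98))  # assumed stretch limits
    return np.clip((smoothed - lo) / (hi - lo + 1e-9), 0.0, 1.0)
```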

C. Image Segmentation
The next step after pre-processing was to segment the lesion from the surrounding skin. Since a clear colour distinction existed between the lesion and the skin, thresholding was very suitable for the task. A black-and-white image was produced with its contrast adjusted to provide better segmentation, as shown in Fig.5, and a boundary tracing algorithm was also applied to the image in order to validate that the segmentation was correct. Thresholding is based on a simple concept for partitioning a digital image into mutually exclusive and exhaustive regions, and it works on either greyscale or colour images. The basic theory behind thresholding requires a parameter called the "intensity value" or brightness threshold: given an image I[M,N], if I[M,N] is greater than or equal to the brightness threshold, then I[M,N] = 1, else I[M,N] = 0. This holds if we are interested in light objects on a dark background; for dark objects on a light background, the test is reversed. Thresholding is also computationally inexpensive and thus does not require a large amount of computational time and power.

Fig 4 - Image passing through a median filter with size 11x11

The results of segmenting the lesion using the described algorithm are shown below.
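The thresholding rule described above can be written in a few lines, assuming (as the paper implies) a lesion darker than the surrounding skin; the mean-intensity fallback is an illustrative choice, since the paper does not state how the brightness threshold was selected:

```python
import numpy as np

def threshold_segment(img, brightness_threshold=None):
    """Thresholding sketch: mark pixels BELOW the threshold as lesion
    (the dark-object-on-light-background form of the rule in the text)
    and return a binary mask. If no threshold is given, fall back to
    the mean intensity (an assumed default)."""
    t = img.mean() if brightness_threshold is None else brightness_threshold
    return img < t
```

The resulting binary mask localises the lesion, and a boundary tracing pass over the mask can then be used to validate the segmentation.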

D. Feature Extraction
The next stage of the CAD system is extracting the important features of the image. The ABCD system offers an alternative approach to extracting features of interest; however, it requires a great deal of subjective judgement on the part of the practitioner. Since firm clinical inspection rules do not exist, it is not possible to build a good judgement-based classification system. If a clearly defined description of what constitutes each feature existed, it would be relatively easy to extract the necessary features and make an accurate diagnosis; unfortunately, such information does not exist and we must rely on other techniques. One technique that does not require subjective judgement in feature extraction is wavelet analysis; specifically, the Wavelet Packet Transform (WPT) was used to implement the feature extraction process. Variability appears to be what most separates malignant melanoma from benign nevi, so the best approach to feature extraction would retain as much of the data variability as possible [2]. Wavelet analysis by its very nature looks at variability within a signal, in this case colour indexes in an image. Since images are composed only of colour values, changes in texture, granularity, and colour are all represented by the same value system [2]. Wavelet analysis examines these changes over different scales, which should detect whole-lesion changes such as texture and colour, as well as local changes such as granularity. The WPT is a generalised version of the Wavelet Transform in which the high-frequency part is also split into low- and high-frequency parts, producing a decomposition tree as shown in Fig.6. The WPT provides a high-dimensional feature vector and thus more information about the images.

Fig 5 - Results for image segmentation

However, the WPT complicates the analysis process: the high dimensionality of the feature vector increases the number of learning parameters of the pattern classifier, and the convergence of the learning error deteriorates. Therefore, dimensionality reduction plays an important role before the feature vector is applied to the pattern classifier. The energies of the wavelet coefficients at the second level of decomposition were taken as features. These were in turn reduced in dimensionality with the Principal Component Analysis (PCA) algorithm. PCA is a statistical technique that identifies the dependence structure behind a multivariate stochastic observation in order to obtain a compact description of it. It works by computing the eigenvalues of the covariance matrix of the data; the eigenvectors corresponding to the largest eigenvalues are then taken as the principal components. As a result of applying PCA to the features extracted with the WPT, the resulting feature set consisted of only 32 features.
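The WPT-energy-plus-PCA pipeline can be sketched in a few lines of Python. For self-containment this uses a hand-rolled Haar filter rather than the Symmlet 10 wavelet the paper found best, and the component count is illustrative:

```python
import numpy as np

def haar_step(a):
    """One 2-D Haar analysis step: split `a` (even-sized) into the
    four subbands LL, LH, HL, HH with orthonormal scaling."""
    lo = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)
    hi = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
    split = lambda x: ((x[0::2, :] + x[1::2, :]) / np.sqrt(2),
                       (x[0::2, :] - x[1::2, :]) / np.sqrt(2))
    return [*split(lo), *split(hi)]

def wpt_energy_features(img):
    """Level-2 wavelet packet energies: unlike the plain wavelet
    transform, EVERY level-1 subband is decomposed again, giving
    16 level-2 subbands; the feature vector is their energies
    (sums of squared coefficients)."""
    level2 = [sub for band in haar_step(img) for sub in haar_step(band)]
    return np.array([np.sum(s ** 2) for s in level2])

def pca_reduce(X, n_components):
    """PCA as described in the text: eigen-decompose the covariance
    matrix of the data and project onto the eigenvectors with the
    largest eigenvalues."""
    Xc = X - X.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    top = vecs[:, np.argsort(vals)[::-1][:n_components]]
    return Xc @ top
```

Because the Haar steps are orthonormal, the 16 subband energies sum to the total energy of the image, so no variability is lost before the PCA projection.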

4. EXPERIMENTS AND PRACTICAL RESULTS

Wavelets from different families have properties that may make them more suitable for certain tasks. Three wavelets from each of three different families were tested to see whether the wavelet family itself had any impact on the classification. The results showed that the wavelets Daubechies 6, Daubechies 10, and Coiflet 1 all achieved an accuracy rate of around 85%, while Daubechies 2, Coiflet 3, Coiflet 5, and Symmlet 2 all achieved an accuracy rate of around 91%. The wavelet with the best overall classification accuracy for the back-propagation neural network classifier was the Symmlet 10, with about 95% accuracy, as shown in Table.1.

Fig 6 - A Wavelet Packet decomposition tree


TABLE 1 - CLASSIFICATION ACCURACY OF DIFFERENT MOTHER WAVELETS

Wavelet         1st run (%)   2nd run (%)   3rd run (%)   Average (%)
Daubechies 2       92.2          90.9          91.8          91.6
Daubechies 6       87.4          87.1          87.9          87.5
Daubechies 10      88.7          90.2          88.3          89.1
Coiflet 1          87.3          86.7          85.8          86.6
Coiflet 3          91.5          92.4          91.8          91.9
Coiflet 5          90.0          90.1          91.4          90.5
Symmlet 2          91.1          91.9          91.6          91.5
Symmlet 6          88.7          89.5          90.2          89.5
Symmlet 10         95.2          95.8          95.4          95.5

A. Back-Propagation Neural Network
The back-propagation algorithm is one of the best-known algorithms in neural networks, popularised by Rumelhart, Hinton, and Williams in the 1980s as a euphemism for the generalised delta rule [13]. It works by propagating the input through the network; the error is then calculated and propagated back through the network while the weights are adjusted to make the error smaller. With the back-propagation neural network, a number of different network topologies were tested. Networks with two to six hidden layers were tested, with two performing best. Since the neural network used random starting values, network accuracy varied slightly with each run, so the average accuracy over three runs was recorded. The results showed that increasing the number of nodes tends to decrease the accuracy rate slightly. In general, the maximum accuracy achieved was 95.1%; the complete results are shown in Table.2. It can be seen from Table.2 that there is a slight difference between results from different runs, due to the randomisation of the data before testing with the 10-fold technique. These results were obtained with a feature size of 32 using a Symmlet 10 mother wavelet. Originally the back-propagation neural network was tested with a feature size of 15; however, the performance was only satisfactory, achieving about 70% accuracy. It was first thought that reducing the number of features to a minimum would give good results and reduce the complexity of the neural network. Nevertheless, it was later realised that having too few features does not allow a proper description of malignant melanomas, making classification difficult and thus yielding poor accuracy. Increasing the number of features provided enough information to properly classify the lesion and hence increased the accuracy rate.
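The forward/backward weight-update loop described above can be sketched as a one-hidden-layer network in NumPy; the sigmoid units, learning rate, and epoch count are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def train_bpnn(X, y, hidden=8, lr=0.5, epochs=5000, seed=0):
    """Minimal back-propagation sketch: propagate the input forward,
    compute the output error, propagate it back through the network,
    and adjust the weights to make the error smaller. Returns a
    prediction function for binary labels."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    t = y.reshape(-1, 1).astype(float)
    n = len(X)
    for _ in range(epochs):
        h = sig(X @ W1 + b1)                  # forward pass
        out = sig(h @ W2 + b2)
        d_out = (out - t) * out * (1 - out)   # error at the output
        d_h = (d_out @ W2.T) * h * (1 - h)    # error propagated back
        W2 -= lr * (h.T @ d_out) / n; b2 -= lr * d_out.mean(axis=0)
        W1 -= lr * (X.T @ d_h) / n;   b1 -= lr * d_h.mean(axis=0)
    return lambda Xn: (sig(sig(Xn @ W1 + b1) @ W2 + b2) > 0.5).astype(int).ravel()
```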

TABLE 2 - ACCURACY RATE FOR DIFFERENT NUMBERS OF HIDDEN LAYERS IN THE BACK-PROPAGATION NEURAL NETWORK

No. of Hidden Layers   1st run (%)   2nd run (%)   3rd run (%)   Average (%)
2                         95.1          95.8          94.3          95.1
3                         94.7          94.1          94.2          94.3
4                         93.7          94.1          92.5          93.4
5                         92.5          93.1          92.6          92.7
6                         92.4          92.1          91.6          92.0

B. Support Vector Machine (SVM)
A support vector machine performs classification by constructing an N-dimensional hyper-plane that optimally separates the data into two categories. The goal of SVM modelling is to find the optimal hyper-plane separating clusters of vectors such that cases with one category of the target variable lie on one side of the plane and cases with the other category lie on the other side. The vectors nearest the hyper-plane are the support vectors. Table.3 shows the results for the Support Vector Machine classifier, obtained using different kernel functions with all other parameters kept constant (i.e. number of features, wavelet used, level of decomposition, etc.). The best classification rates were achieved with the polynomial kernel, which reached an accuracy rate above 85%.
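The kernel comparison can be sketched with scikit-learn's SVC and 10-fold cross-validation, as in the paper's protocol; the two-Gaussian synthetic data below is a stand-in for the 32 PCA-reduced wavelet features, so the numbers will not match Table 3:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for the 32-dimensional feature vectors.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 32)), rng.normal(1.5, 1.0, (50, 32))])
y = np.array([0] * 50 + [1] * 50)

# Same features, different kernels -- only the kernel function varies,
# mirroring the comparison in Table 3 (polynomial was the paper's best).
for kernel in ("poly", "rbf", "sigmoid"):
    scores = cross_val_score(SVC(kernel=kernel, degree=3, gamma="scale"), X, y, cv=10)
    print(f"{kernel}: mean accuracy {scores.mean():.3f}")
```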

5. CONCLUSION

In this paper, a new skin cancer recognition system was proposed based on a mixture of intelligent methods. In the first stage, the images were put through a pre-processing stage consisting of a median filter and contrast-enhancing techniques to remove any unwanted structures that might corrupt the image. Segmentation was achieved through simple thresholding, with a boundary tracing algorithm used to localise the lesion. The next step involved extracting important features that can be used to distinguish malignant from benign lesions. The traditional method of identifying lesions using the ABCD rule was discarded; instead, wavelet packet analysis was used. Wavelet packet analysis, and wavelet analysis in general, provides detailed information about an image on different scales, focusing on different features at each scale. The wavelet coefficients present useful information about the overall trend of the image, its texture, and its granularity. Overall, the final system uses 32 descriptors obtained from a two-level decomposition. The final process involves classifying images as either malignant or benign lesions. Two algorithms were implemented to see which performed better, namely the Back-propagation Neural Network and the Support Vector Machine. The initial testing showed that the neural network has a classification accuracy of 95% and the SVM a classification accuracy above 85%. These results are comparable to, if not better than, the system proposed by Barhoumi et al. [9], which achieved only 79.1% using 200 test images.


TABLE 3 - ACCURACY RATE FOR DIFFERENT KERNEL FUNCTIONS USING THE SVM CLASSIFIER

Kernel Function                   1st run (%)   2nd run (%)   3rd run (%)   Average (%)
Quadratic                            76.79         73.17         73.12         74.36
Polynomial                           88.25         85.53         82.16         85.3
Gaussian Radial Basis Function       68            68            68            68
Multilayer Perceptron                53            53.8          50.7          52.5

REFERENCES

[1] D. Hill, E. White, R. Marks, and R. Borland, "Changes in sun-related attitudes and behaviors, and reduced sunburn prevalence in a population at high risk of melanoma," European Journal of Cancer Prevention, vol. 2, pp. 447-56, 1993.

[2] J. Sikorski, "Identification of malignant melanoma by wavelet analysis," presented at Proceedings of Student/Faculty Research Day, CSIS, Pace University, 2004.

[3] H. Ganster, P. Pinz, R. Rohrer, E. Wildling, M. Binder, and H. Kittler, "Automated melanoma recognition," IEEE Transactions on Medical Imaging, vol. 20, pp. 233-239, 2001.

[4] P. Schmid, "Segmentation of digitized dermatoscopic images by two-dimensional color clustering," IEEE Transactions on Medical Imaging, vol. 18, pp. 164-171, 1999.

[5] M. E. Celebi, Y. A. Aslandogan, and P. R. Bergstresser, "Unsupervised border detection of skin lesion images," presented at International Conference on Information Technology: Coding and Computing, 2005. ITCC 2005. , 2005.

[6] T. Lee, V. Ng, D. McLean, A. Coldman, R. Gallagher, and J. Sale, "A multi-stage segmentation method for images of skin lesions," presented at Proceedings. IEEE Pacific Rim Conference on Communications, Computers, and Signal Processing, 1995.

[7] I. G. Maglogiannis and E. P. Zafiropoulos, "Characterization of digital medical images utilizing support vector machines," BMC Medical Informatics and Decision Making, 2004.

[8] A. K. Jain, J. Mao, and K. Mohiuddin, "Artificial neural networks: A tutorial," IEEE Computer, 1996.

[9] W. Barhoumi and E. Zagrouba, "A preliminary approach for the automated recognition of malignant melanoma," Image Analysis and Stereology Journal, vol. 23, pp. 121-135, 2004.

[10] A. Coldman and R. Gallagher, "Dullrazor: A software approach to hair removal from images," Comput Biol Med, vol. 27, pp. 533-543, 1997.

[11] A. K. Jain, Fundamentals of digital image processing: Prentice Hall, 1989.

[12] R. Boyle, V. Hlavac, and M. Sonka, Image processing, Analysis and machine vision: Chapman & Hall, London, 1993.

[13] L. Fausett, Fundamentals of neural network: architectures, algorithms and applications: Englewood Cliffs: Prentice Hall, 1994.
