A Comparative Evaluation of Preprocessing Methods for Automatic Detection of Retinal Anatomy

Aliaa A. A. Youssif, Atef Z. Ghalwash, and Amr S. Ghoneim
Department of Computer Science, the Faculty of Computers and Information, Helwan University, Cairo, Egypt
e-mail: [email protected], [email protected], [email protected]

Proceedings of the Fifth International Conference on Informatics & Systems (INFOS'07), Cairo-Egypt, March 24-26, 2007


Abstract — Digital fundus images are receiving increasing interest in the development of automated screening systems for diabetic retinopathy. Owing to the acquisition process, these images are very often of poor quality, which hinders further analysis. To date, studies still struggle with the issue of preprocessing fundus images, mainly due to the lack of literature reviews and comparative studies. Furthermore, the available methods are not evaluated on large, publicly available benchmark datasets. This paper discusses three major preprocessing methodologies described in the literature (mask generation, illumination equalization, and color normalization) and their effect on detecting retinal anatomy. For each methodology, a comparative performance evaluation based on proposed metrics is carried out among the available methods, using two publicly available fundus datasets. In addition, the paper proposes the comprehensive normalization method for color normalization, which recorded acceptable results.

Keywords — Biomedical image processing, comparative study, fundus image analysis, retinal imaging, telemedicine.

1. Introduction

Preprocessing of digital fundus images is a major issue in automatic screening systems for diabetic retinopathy. Diabetes is a disease that affects about 5.5% of the population worldwide [1]. In Egypt, over 13% of the population aged ≥ 20 years will have diabetes by the year 2025 [2]. Diabetic retinopathy is considered one of the most prevalent complications of diabetes that cause blindness [3–5]. Due to the growing number of diabetic patients, and with insufficient ophthalmologists to screen them all, automatic screening can reduce the threat of blindness by 50% [6], provide considerable cost savings, and decrease the pressure on the available infrastructure and resources [7]. Generally, digital fundus images are more appropriate for automatic screening systems [8]. Unfortunately, a significant percentage of these images are of such poor quality as to hinder further analysis, due to many factors such as patient movement, poor focus, or inadequate illumination [9]. Besides, improper focusing of light may make the optic disc (OD) appear darker than other areas of the image [10, 11].

These artifacts are significant enough to impede human grading in about 10% of retinal images [9], and the figure can reach 15% in some retinal image sets [12]. Preprocessing of the fundus images can attenuate or even remove these interferences. Although [13] is a published study comparing the performance of various preprocessing methods, those methods were not evaluated on large, publicly available benchmark datasets such as [14, 15]. This study continues the work presented in [16], which compared different preprocessing methods affecting the segmentation of the retinal vasculature using [14] and [15].

Overall, preprocessing methods for digital fundus images can be categorized into mask generation, illumination equalization, color normalization, and contrast enhancement. Literature methods related to the first three categories are reviewed in Section 2. In Section 3, a description of the material used is given. Section 4 presents and discusses the results of the conducted comparative studies. Finally, Section 5 presents the conclusion and future work.

2. Preprocessing Methods: Literature Review

2.1. Mask Generation

Mask generation (Fig. 1) labels the pixels of the (semi-)circular retinal fundus, the Region of Interest (ROI), in the entire image, and excludes the image background (i.e., pixels outside the ROI belonging to the dark surrounding region) [13, 17].

Figure 1. A typical digital fundus image from the STARE dataset (left) and the corresponding mask generated using [11] (right).

Goatman et al. [13] generated the masks automatically by simple thresholding of the green color channel followed by 5×5 median filtering. In [17], Gagnon et al. also applied thresholding, but to the three channels (the R, G, and B color bands) separately, generating a binary image for each band. The threshold value was automatically calculated using pixel-value statistics (mean and standard deviation) outside the ROI for each color band. Logical operators are then used to combine the binary results of all bands, identifying the largest common connected mask [17].

In [11], Frank ter Haar applied a threshold t = 35 to the red color band, and the morphological operators – opening, closing, and erosion – were then applied in that order (each to the result of the preceding step) using a 3×3 square kernel to give the final ROI mask.
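For illustration, the following is a minimal sketch of this mask generation step, assuming the image is loaded with OpenCV (which stores channels in BGR order); the threshold t = 35 and the 3×3 kernel follow [11], while the function name and I/O handling are illustrative only.

```python
import cv2
import numpy as np

def roi_mask_ter_haar(bgr_image: np.ndarray, t: int = 35) -> np.ndarray:
    """Sketch of the ROI mask generation described in [11]: threshold the
    red band at t, then apply opening, closing, and erosion (3x3 kernel)."""
    red = bgr_image[:, :, 2]                     # OpenCV stores images as BGR
    mask = (red > t).astype(np.uint8) * 255      # simple global threshold
    kernel = np.ones((3, 3), np.uint8)           # 3x3 square structuring element
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove isolated bright pixels
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small holes in the ROI
    mask = cv2.erode(mask, kernel)               # shrink the ROI border slightly
    return mask

# Usage (path is illustrative):
# mask = roi_mask_ter_haar(cv2.imread("fundus.png"))
```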

2.2. Illumination Equalization

The illumination in a retinal image is non-uniform due to variation in the retina's response or non-uniformity of the imaging system (e.g., vignetting and varying eye position relative to the camera). For instance, the OD is characterized as the brightest anatomical structure in a retinal image. Yet, due to uneven illumination – vignetting in particular – the OD may appear darker than other retinal regions, especially since retinal images are often captured with the fovea appearing in the middle of the image and the OD to one side [10]. Consequently, OD localization methods, particularly those based on intensity variation or on intensity values alone, will not be straightforward [11].

To overcome the non-uniform illumination, each pixel is adjusted (equalized) using the following equation [10, 11]:

$$I_{eq}(r, c) = I(r, c) + m - \bar{I}_W(r, c) \qquad (1)$$

where $m$ is the desired average intensity (128 in an 8-bit grayscale image) and $\bar{I}_W(r, c)$ is the mean intensity value of the pixels within a window $W$ of size $N \times N$ centered on $(r, c)$. In [10], the window size varies between 30 and 50.

In [11], a running window of only one size (40×40) was used to calculate the mean intensity value. Although the resulting images look very similar to those obtained with the variable-size running window, the ROI of the retinal images is shrunk by five pixels to discard the pixels near the border, where the chances of erroneous values are higher [11].

In [18], Yang et al. corrected the non-uniform illumination by dividing the image by an over-smoothed version of itself, obtained using a spatially large median filter. Usually, the illumination equalization process (Fig. 2) is applied to the green band (green image) of the retina [10, 11, 18].
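For concreteness, the following sketch implements both equalization approaches on an 8-bit green-band array; the 40×40 window follows [11], while the median-filter kernel size stands in for the "spatially large" filter of [18], and the rescaling constants are assumptions (OpenCV and NumPy are assumed dependencies).

```python
import cv2
import numpy as np

def equalize_additive(green: np.ndarray, n: int = 40, m: float = 128.0) -> np.ndarray:
    """Eq. (1): I_eq(r,c) = I(r,c) + m - mean of an NxN window around (r,c)."""
    local_mean = cv2.blur(green.astype(np.float32), (n, n))  # running NxN mean
    eq = green.astype(np.float32) + m - local_mean
    return np.clip(eq, 0, 255).astype(np.uint8)

def equalize_divisive(green: np.ndarray, k: int = 51) -> np.ndarray:
    """Division by an over-smoothed version of the image, as in [18].
    The median kernel size k is an assumed 'spatially large' value."""
    background = cv2.medianBlur(green, k).astype(np.float32) + 1e-6  # avoid divide-by-zero
    ratio = green.astype(np.float32) / background
    return np.clip(ratio * 128.0, 0, 255).astype(np.uint8)  # rescale around mid-gray
```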

2.3. Color Normalization

Color information is potentially useful in classification, given that color measurements of lesions in retinal images showed significant differences [13]. In order to consistently identify colored objects and lesions, color must relate directly to the inherent properties of the imaged objects and be independent of the imaging conditions. Usually, imaging conditions such as lighting geometry (scene illumination) and the imaging device (illuminant color) scale pixel values in an image, though they do not affect the human perception of color. Moreover, the retina's color (pigmentation) varies across the population and between patients, being strongly correlated with skin pigmentation (amount of melanin) and iris color, thus affecting classification based on the relatively small color variation between the different retinal lesions.

Figure 2. Illumination equalization applied to the green channel of the fundus image in Fig. 1, using the method proposed by [11] (left) and the method proposed by [18] (right).

2.3.1 Gray-World Normalization. It aims to eliminate the effects due to the illuminant color [13, 19] (Fig. 3(a)). The new values $(r_{new}, g_{new}, b_{new})$ for any pixel $(r, g, b)$ can be simply calculated using the following algebraic equation by Finlayson et al. [19]:

$$r_{new} = \frac{r}{R_{Avg}}, \quad g_{new} = \frac{g}{G_{Avg}}, \quad b_{new} = \frac{b}{B_{Avg}} \qquad (2)$$

where $R_{Avg}$, $G_{Avg}$, and $B_{Avg}$ represent the average (mean) of all the pixels in each of the R, G, and B bands respectively.

2.3.2 Comprehensive Normalization. Whereas gray-world normalization can successfully remove the effect of different illuminant colors, chromaticity is a representation of digital images which is invariant to lighting geometry (i.e., light source direction and power) [13, 19]. Therefore, chromaticity values represent an image while discarding the intensity (brightness) information, and can thus be thought of as hue and saturation taken together [20]. For any pixel $(R, G, B)$, the chromaticity-normalized values $(r, g, b)$ are defined as:

$$r = \frac{R}{R + G + B}, \quad g = \frac{G}{R + G + B}, \quad b = \frac{B}{R + G + B} \qquad (3)$$

Further, only two chromaticity values are needed to represent a color, since $r + g + b = 1$. Since, in practice, variations in lighting geometry and in illuminant color do not occur separately, comprehensive normalization (Fig. 3(b)) simply applies the chromaticity normalization followed by the gray-world normalization; this process is repeated (typically for 4 or 5 iterations) until the change in values is less than a certain termination threshold [19].
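A minimal sketch of this iterative scheme, assuming a float RGB array in NumPy, is given below; the termination threshold, the epsilon guard, and the iteration cap are assumed implementation choices not fixed by [19].

```python
import numpy as np

def comprehensive_normalization(rgb: np.ndarray, eps: float = 1e-6,
                                tol: float = 1e-4, max_iter: int = 5) -> np.ndarray:
    """Alternate chromaticity normalization (Eq. 3) and gray-world
    normalization (Eq. 2) until the image stops changing [19].
    tol and max_iter are assumed values; the paper reports convergence
    typically within 4 or 5 iterations."""
    img = rgb.astype(np.float64) + eps           # eps guards against division by zero
    for _ in range(max_iter):
        prev = img.copy()
        # Eq. (3): divide each pixel by the sum of its three channels
        img = img / img.sum(axis=2, keepdims=True)
        # Eq. (2): divide each band by its mean over the whole image
        img = img / img.mean(axis=(0, 1), keepdims=True)
        if np.abs(img - prev).max() < tol:       # assumed termination test
            break
    return img
```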

2.3.3 Histogram Equalization. This is a typical, well-known technique that spans the histogram of an image to a fuller range of the gray scale [20]. A histogram-equalized image (Fig. 3(c)) is obtained by mapping each pixel in the input image to a corresponding pixel in the output image using an equation based on the cumulative distribution function. Although it is primarily considered a contrast enhancement method, applying histogram equalization individually to the 3 RGB color bands of an image affects the perceived color [13]. This notably increases the influence of the blue channel in the output retinal images, which normally reflect little blue light.
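As a sketch, per-band histogram equalization of a color image can be written as follows (OpenCV is an assumed dependency; the function name is illustrative):

```python
import cv2
import numpy as np

def equalize_rgb(bgr: np.ndarray) -> np.ndarray:
    """Apply histogram equalization to each of the 3 color bands
    independently; this alters perceived color, notably boosting the
    normally weak blue channel of retinal images [13]."""
    channels = [cv2.equalizeHist(c) for c in cv2.split(bgr)]
    return cv2.merge(channels)
```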

Figure 3. Color normalization of the fundus image in Fig. 1: (a) gray-world normalization, (b) comprehensive normalization, (c) histogram equalization, (d) histogram specification using (e) as a reference image.

2.3.4 Histogram Specification (Matching). This is a technique employed to generate an image with a specific histogram [20] (i.e., a histogram with a desired, specified shape, instead of the original histogram or the uniform histogram that results from histogram equalization). The first preprocessing step in [4] was to normalize the retinal images, since the retina's color in different patients is variable, being strongly correlated with skin pigmentation and iris color. A retinal image was selected as a reference (Fig. 3(e)), and histogram specification was then used to modify the values of each image in the database such that its frequency histogram matched the reference image's distribution (see Fig. 1 (left) and Fig. 3(d)).

Histogram specification can be applied through the following summarized procedure given by [20]. Given the input image with gray levels $r_k$, the histogram $h(r_k)$ and the histogram-equalized gray levels $s_k$ are obtained. Given also the reference image – the one having the desired histogram – with gray levels $z_k$, its histogram and its histogram-equalized gray levels $v_k$ are obtained. Then we equate the histogram-equalized gray levels of the input and reference images (i.e., $v_k = s_k$), therefore:

$$v_k = G(z_k) = \sum_{j=0}^{k} p_z(z_j) = \sum_{j=0}^{k} \frac{n_j}{n} = s_k, \qquad k = 0, 1, 2, \ldots, L-1 \qquad (4)$$

Since $v_k = s_k$, then $z_k$ and $r_k$ can be related as follows:

$$z_k = G^{-1}[T(r_k)] = G^{-1}(s_k), \qquad k = 0, 1, 2, \ldots, L-1 \qquad (5)$$

where $G^{-1}$ is the inverse of the transformation function $G$. Since $v_k - s_k = 0$, to find the new gray-level values for the given image, each $r_k$ is assigned the smallest integer $z_k \in [0, L-1]$ satisfying the following condition:

$$G(z_k) - s_k \ge 0, \qquad k = 0, 1, 2, \ldots, L-1 \qquad (6)$$
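The whole procedure reduces to a lookup table built from the two cumulative histograms; a minimal NumPy sketch for 8-bit images (L = 256) follows, where each input level $r_k$ is assigned the smallest reference level $z_k$ satisfying Eq. (6).

```python
import numpy as np

def histogram_specification(source: np.ndarray, reference: np.ndarray,
                            levels: int = 256) -> np.ndarray:
    """Map source gray levels r_k to reference gray levels z_k so that the
    output histogram matches the reference histogram (Eqs. 4-6)."""
    # Normalized cumulative histograms: T(r_k) = s_k and G(z_k) = v_k
    s = np.cumsum(np.bincount(source.ravel(), minlength=levels)) / source.size
    g = np.cumsum(np.bincount(reference.ravel(), minlength=levels)) / reference.size
    # Eq. (6): smallest z_k with G(z_k) - s_k >= 0, for every level k
    lut = np.searchsorted(g, s).clip(0, levels - 1).astype(source.dtype)
    return lut[source]
```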

3. Material

Two publicly available datasets were used. The first is the DRIVE dataset [14], established to facilitate comparative studies on retinal vasculature (RV) segmentation [1, 21]. The dataset consists of a total of 40 color fundus photographs used for making actual clinical diagnoses, divided into 2 equal sets (training and test), where 33 photographs do not show any sign of diabetic retinopathy and 7 show signs of mild early diabetic retinopathy. The 24-bit, 768×584-pixel color images are in compressed JPEG format, as commonly used in screening. They were acquired using a Canon CR5 non-mydriatic 3CCD camera with a 45-degree field of view (FOV). A manually thresholded FOV is included for each image, along with a manual segmentation of the vasculature for the training set and two manual segmentations for the test set [1].

The second dataset consists of two subsets of the STARE Project's database [15], a project concerned with automatically diagnosing diseases of the human eye. The images were captured using a TopCon TRV-50 fundus camera at a 35° FOV, and subsequently digitized at 605×700 pixels, 24 bits per pixel [10, 22]. The first subset contains 20 fundus images used by Hoover et al. [22] for testing an automated vessel segmentation method, with two manual segmentations available. Ten of the images are of patients with no pathology, while the other ten contain pathology that obscures or confuses the appearance of the blood vessels. The second subset contains 81 images used by Hoover et al. [10] to evaluate their automatic OD localization method; it contains 31 images of healthy retinas and 50 of diseased retinas.


4. Results and Discussion

4.1. Results of Comparing Mask Generation Methods

To compare the performance of the 3 automatic mask generation methods described in the literature, we used the manually thresholded FOV included in the DRIVE dataset as a gold standard, and applied the 3 methods as described in the literature. For the simple threshold used by Goatman et al. [13], we used the algorithm proposed by Nobuyuki Otsu [23], which selects a global image threshold using only the gray-level histogram. As for Gagnon et al. [17], we used 'mean + 4 × std. deviation' as the threshold value.

The average sensitivity (fraction of true ROI pixels detected) and specificity (fraction of true background pixels detected) of the 3 methods when applied to the DRIVE dataset are almost equal, ranging from 98.3% to 100% and from 99.3% to 99.95% respectively, with the second method [17] giving slightly better results. Conversely, when applying the 3 methods to the STARE database, the method proposed by [11] achieved markedly better results, followed by the method in [17]. No concrete figures can be given for the latter comparison, since no manual masks – to serve as a gold standard – are included with the STARE images.
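Both rates follow directly from comparing a generated mask against the gold-standard mask; a minimal sketch, assuming two binary NumPy arrays of equal shape, is given below.

```python
import numpy as np

def mask_sensitivity_specificity(predicted: np.ndarray, gold: np.ndarray):
    """Sensitivity = fraction of gold-standard ROI pixels detected;
    specificity = fraction of gold-standard background pixels kept as background."""
    pred, truth = predicted.astype(bool), gold.astype(bool)
    tp = np.logical_and(pred, truth).sum()     # ROI pixels correctly detected
    tn = np.logical_and(~pred, ~truth).sum()   # background pixels correctly excluded
    sensitivity = tp / truth.sum()
    specificity = tn / (~truth).sum()
    return sensitivity, specificity
```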

4.2. Results of Comparing Color Normalization Methods

In [13], Goatman et al. used the distribution of chromaticity values for evaluating color normalization methods. The chromaticity coordinates were used to evaluate the effects of applying three color normalization methods (gray-world normalization, histogram equalization, and histogram specification) to retinal images in order to discriminate different retinal lesions. The clustering of the chromaticity values representing four different retinal lesions was measured before and after applying each of the normalization methods.

Instead of discriminating retinal lesions, we use the chromaticity values to discriminate vessels from non-vessels, so as to evaluate the effect of the color normalization methods on automatic retinal vasculature segmentation. The clustering of the chromaticity values representing vessels/non-vessels was measured before applying any of the normalization methods, and after applying each of the four normalization methods independently. The average vessel/non-vessel chromaticity values per image were plotted, and an ellipse was drawn centered on the mean value of each cluster, as proposed in [13]. Principal components analysis (the Hotelling transform) is used to find the direction of maximum variance, with which the major axis is aligned. The semi-major axis equals two standard deviations in the major axis's direction, while the semi-minor axis equals one standard deviation in the orthogonal direction.
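The ellipse parameters can be recovered from the eigen-decomposition of the cluster's covariance matrix (the Hotelling transform); the sketch below, assuming an (n, 2) array of chromaticity points, is one way to compute them.

```python
import numpy as np

def cluster_ellipse(points: np.ndarray):
    """Given an (n, 2) array of chromaticity values, return the ellipse
    center, axis directions, and semi-axis lengths used in the plots:
    2 std. devs. along the principal direction, 1 std. dev. orthogonally."""
    center = points.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(points, rowvar=False))
    order = np.argsort(eigvals)[::-1]          # largest variance first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    std_devs = np.sqrt(eigvals)
    semi_major = 2.0 * std_devs[0]             # two std. devs. along the major axis
    semi_minor = 1.0 * std_devs[1]             # one std. dev. along the minor axis
    return center, eigvecs, (semi_major, semi_minor)
```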

Fig. 4 shows that before applying any of the normalization methods, there was a noticeable overlap, with elements scattered in both clusters. After applying the color normalization methods the overlap persists; however, the elements in each cluster (especially the non-vessels) become more compact. A clear separation between the clusters was found only after applying histogram equalization. Comprehensive normalization showed more compact clusters and a narrower overlap compared to gray-world normalization. Histogram specification recorded the worst results, with a comprehensive overlap, although it showed the clearest separation of the four retinal-lesion clusters in [13].

4.3. Results of Comparing Illumination Equalization Methods

As mentioned before, one of the main drawbacks of uneven illumination (vignetting in particular) is the inability to simply analyze the OD. Simple thresholding of the high-intensity pixels in a standard fundus image should help in localizing the OD, or at least provide successful candidates for the OD location. Thus, to measure the effect of the equalization methods proposed by [11] and [18], we investigate whether any of the highest 2% intensity pixels lie within the OD region. This approach for roughly localizing the OD was applied to the red-band intensities by Li and Chutatape [5]. In the present work, we investigate the highest 2% intensity pixels in the green band after applying each equalization method.
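This criterion can be checked per image as in the following sketch, which assumes the (equalized) green band and a binary OD-region mask are available as NumPy arrays; both names are illustrative.

```python
import numpy as np

def top2_percent_hits_od(green_eq: np.ndarray, od_mask: np.ndarray) -> bool:
    """Return True if any of the brightest 2% of pixels in the
    (illumination-equalized) green band fall inside the OD region."""
    threshold = np.percentile(green_eq, 98)      # cut-off for the top 2% of intensities
    brightest = green_eq >= threshold
    return bool(np.logical_and(brightest, od_mask.astype(bool)).any())
```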

Using the 121 images in the DRIVE and STARE datasets, the highest 2% intensity pixels in the green band fell in the OD region in 102 images, a success rate of about 84.3%. After applying the illumination equalization methods proposed by [11] and [18], the success rates noticeably rose to 98.35% (i.e., failed in 2 images only) and 97.52% (i.e., failed in 3 images), respectively.

It is clear that applying illumination equalization to fundus images considerably improves further analysis tasks. Both equalization methods found in the literature have nearly the same influence.

5. Conclusion and Future Work

The paper presented different categories of preprocessing methods for retinal fundus images. Comparative evaluations among the different methods in each category were conducted using publicly available datasets. First, three mask generation methods were tested using 121 images, and the method applying morphological operators to the thresholded red band proved superior to the other methods. Illumination equalization methods noticeably improved illumination across fundus images, and thus improved the process of localizing successful OD candidates (success rate of 98.35%). Finally, four color normalization methods were tested (using 60 images), and histogram equalization was found to be the most effective method in clustering the vessel and non-vessel pixels.


Figure 4. Color normalization chromaticity plots: (a) before applying any normalization method, (b) gray-world normalization, (c) comprehensive normalization. The red dash-dotted ellipse and the plus signs '+' represent the non-vessels cluster, while the blue solid ellipse and the points '.' represent the vessels cluster.


An extension of this study could be to investigate other preprocessing methods (contrast enhancement, illumination equalization, and color normalization) available in the literature. In addition, more comprehensive results could be achieved by examining the effect of the presented preprocessing methods using actual retinal-anatomy detection modules, or by using image quality assessment methodologies. Moreover, a more inclusive comparison could be achieved by using larger datasets of actual screening images for evaluation.

6. Acknowledgment

The authors wish to thank their fellow authors of references [1, 6, 14, 15, 17, 21, 22] for their support in acquiring the materials and resources needed to conduct the present study.

References

[1] J. Staal, M. D. Abràmoff, M. Niemeijer, M. A. Viergever, and B. van Ginneken, "Ridge-based vessel segmentation in color images of the retina," IEEE Trans. Med. Imag., vol. 23, no. 4, pp. 501-509, April 2004.

[2] W. H. Herman, R. E. Aubert, M. A. Ali, E. S. Sous, and A. Badran, "Diabetes mellitus in Egypt: risk factors, prevalence and future burden," Eastern Mediterranean Health J., vol. 3, no. 1, pp. 144-148, 1997.

[3] M. El-Shazly, M. Zeid, and A. Osman, "Risk factors for eye complications in patients with diabetes mellitus: development and progression," Eastern Mediterranean Health J., vol. 6, no. 2-3, pp. 313-325, 2000.

[4] A. Osareh, M. Mirmehdi, B. Thomas, and R. Markham, “Classification and localisation of diabetic-related eye disease,” Proc. European Conf. on Computer Vision, Springer-Verlag, pp. 502-516, 2002.

Figure 4 (continued). Color normalization chromaticity plots: (d) histogram equalization, (e) histogram specification. The red dash-dotted ellipse and the plus signs '+' represent the non-vessels cluster, while the blue solid ellipse and the points '.' represent the vessels cluster.

Proceedings of the Fifth International Conference on Informatics & Systems (INFOS’07), Cairo-Egypt, March 24-26, 2007

- 30 -

[5] H. Li and O. Chutatape, "Fundus image features extraction," Proc. 22nd Annu. EMBS Int. Conf., Chicago, IL, pp. 3071-3073, July 23-28, 2000.

[6] C. Sinthanayothin, “Image analysis for automatic diagnosis of diabetic retinopathy,” Ph.D. Thesis, University of London (King's College London), September 1999.

[7] C. Sinthanayothin, J. F. Boyce, T. H. Williamson, H. L. Cook, E. Mensah, S. Lal, and D. Usher, “Automated detection of diabetic retinopathy on digital fundus images,” Diabetes UK – Diabetic Medicine, vol. 19, pp. 105-112, 2002.

[8] S. C. Siu, T. C. Ko, K. W. Wong, and W. N. Chan, “Effectiveness of non-mydriatic retinal photography and direct ophthalmoscopy in detecting diabetic retinopathy,” Hong Kong Med. J., vol. 4, no. 4, pp. 367-370, Dec. 1998.

[9] T. Teng, M. Lefley, and D. Claremont, "Progress towards automated diabetic ocular screening: a review of image analysis and intelligent systems for diabetic retinopathy," Med. & Biological Engineering & Computing, vol. 40, pp. 2-13, 2002.

[10] A. Hoover and M. Goldbaum, “Locating the optic nerve in a retinal image using the fuzzy convergence of the blood vessels,” IEEE Trans. Med. Imag., vol. 22, no. 8, pp. 951-958, 2003.

[11] F. ter Haar, "Automatic localization of the optic disc in digital colour images of the human retina," M.S. thesis, Utrecht University, Dec. 16, 2005.

[12] B. Liesenfeld, E. Kohner, W. Piehlmeier, S. Kluthe, S. Aldington, M. Porta, T. Bek, M. Obermaier, H. Mayer, G. Mann, R. Holle, and K.-D. Hepp, "A telemedical approach to the screening of diabetic retinopathy: digital fundus photography," Diabetes Care, vol. 23, no. 3, pp. 345-348, March 2000.

[13] K. A. Goatman, A. D. Whitwam, A. Manivannan, J. A. Olson, and P. F. Sharp, "Colour normalisation of retinal images," in Proc. Med. Imag. Understanding and Analysis, 2003.

[14] University Medical Center Utrecht, Image Sciences Institute, Research section, Digital Retinal Images for Vessel Extraction (DRIVE) database. [Online]. Available: http://www.isi.uu.nl/Research/Databases/DRIVE

[15] STARE project website. Clemson Univ., Clemson, SC. [Online]. Available: http://www.ces.clemson.edu/~ahoover/stare

[16] A. A. A. Youssif, A. Z. Ghalwash, and A. S. Ghoneim, "Comparative study of contrast enhancement and illumination equalization methods for retinal vasculature segmentation," Proc. Cairo Int. Biomedical Engineering Conf. (CIBEC'06), 21-24 Dec. 2006.

[17] L. Gagnon, M. Lalonde, M. Beaulieu, and M.-C. Boucher, "Procedure to detect anatomical structures in optical fundus images," Proc. Conf. Med. Imag. 2001: Image Processing (SPIE #4322), San Diego, pp. 1218-1225, 19-22 Feb. 2001.

[18] G. Yang, L. Gagnon, S. Wang, and M.-C. Boucher, “Algorithm for detecting micro-aneurysms in low-resolution color retinal images,” Proc. Vision Interface 2001, Ottawa, pp. 265-271, June 7-9, 2001.

[19] G. D. Finlayson, B. Schiele, and J. L. Crowley, "Comprehensive colour image normalization," Proc. European Conf. on Computer Vision, Springer-Verlag, pp. 475-490, 1998.

[20] R. C. Gonzalez and R. E. Woods, Digital Image Processing, 2nd Ed. Prentice-Hall, 2002.

[21] M. Niemeijer, J. Staal, B. van Ginneken, M. Loog, and M. D. Abràmoff, “Comparative study of retinal vessel segmentation methods on a new publicly available database,” in: SPIE Med. Imag., vol. 5370, pp. 648-656, 2004.

[22] A. Hoover, V. Kouznetsova, and M. Goldbaum, “Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response,” IEEE Trans. Med. Imag., vol. 19, no. 3, pp. 203-210, March 2000.

[23] N. Otsu, "A threshold selection method from gray-level histograms," IEEE Trans. Systems, Man, and Cybernetics, vol. SMC-9, no. 1, pp. 62-66, Jan. 1979.