
Fovea Center Detection Based on the Retina Anatomy and Mathematical Morphology

Daniel Welfer (a), Jacob Scharcanski (*, a), Diane Ruschel Marinho (b)

(a) Instituto de Informática, Universidade Federal do Rio Grande do Sul, Av. Bento Gonçalves 9500, Porto Alegre, RS, Brasil; CEP 91509-900

(b) Faculdade de Medicina, Universidade Federal do Rio Grande do Sul, Rua Ramiro Barcelos, 2400, Porto Alegre, RS, Brasil; CEP 90035-003

Abstract

In this work, we present a new fovea center detection method for color eye fundus images. This method is based on known anatomical constraints on the relative locations of retina structures, and on mathematical morphology. The detection of this anatomical feature is a prerequisite for the computer-aided diagnosis of several retinal diseases, such as Diabetic Macular Edema. The proposed method is adaptive to local illumination changes, and it is robust to local disturbances introduced by pathologies in digital color eye fundus images (e.g. exudates). Our experimental results using the DRIVE image database indicate that our method is able to detect the fovea center in 37 out of 37 images (i.e. with a success rate of 100%). Using the DIARETDB1 database, our method was able to detect the fovea center in 92.13% of all tested cases (i.e. in 82 out of 89 images). These results indicate that our approach can potentially achieve a better performance than comparable methods proposed in the literature.

* Corresponding author. Email addresses: [email protected] (Daniel Welfer), [email protected] (Jacob Scharcanski), [email protected] (Diane Ruschel Marinho).

Key words: Fovea Center Detection, Mathematical Morphology, Diabetic Macular Edema.

1. Introduction

Diabetic Retinopathy (DR) and Diabetic Macular Edema (DME) are complications caused by diabetes, and may damage the visual acuity of people at working age [1]. Diabetic Retinopathy has five severity levels, namely: 1) absence of retinopathy; 2) mild nonproliferative Diabetic Retinopathy; 3) moderate nonproliferative Diabetic Retinopathy; 4) severe nonproliferative Diabetic Retinopathy; and 5) proliferative Diabetic Retinopathy. The abnormalities that characterize Diabetic Retinopathy are the presence of microaneurysms, intraretinal hemorrhages, venous beading and neovascularization. However, as described by Ciulla et al. [1], at any time during the progression of DR, patients with diabetes can also develop DME. DME has three severity levels, namely [1], [2]: 1) mild DME; 2) moderate DME; 3) severe DME. In mild DME, some retinal thickening or hard exudates usually occur far from the macula center. Moderate DME is characterized by the occurrence of retinal thickening, or by hard exudates in the neighborhood of the macula center. Finally, severe DME presents retinal thickening or hard exudates involving the macula center. Thus, the macula center plays an important role in the detection of DME. The macula is a very important region of the eye because it contains the maximum density of cones (the color receptors of the visual system) [3]. In most fundus images, the macula is the darkest region of the image [4], and the central region of the macula is denominated the fovea [5]. Figure 1(a) illustrates a typical retinal image showing its main features, such as the macula, fovea, blood vessels and optic disk. In physical terms, the fovea region is a circle of 0.25 mm in diameter [6], with its center located a number of optic disk diameters away (e.g. 2 disk diameters) from the optic disk center, on the temporal side of the optic nerve (i.e. towards the macula center) [4], [7], [8]. Hard exudates appear as bright lesions in eye fundus images, and are the most commonly found retinal abnormalities [5]. However, as mentioned above, the detection of hard exudates is not sufficient to detect and grade Diabetic Macular Edema (DME), since the distribution of these exudates around the fovea must also be considered. For example, the further away the exudate lesions are located from the fovea region, the less severe the Diabetic Macular Edema is. Figure 1(b) illustrates the polar coordinate system (which is centered on the fovea center) used to analyze the distribution of the exudates around the fovea [8]. Thus, the identification of the fovea is a prerequisite for detecting and grading Diabetic Macular Edema. In other words, the fovea is the main retinal component used to set up the polar coordinate system, which in turn is used to estimate the severity level of DME, as mentioned above. Therefore, the severity level estimate of the Diabetic Macular Edema depends on the polar coordinate system, and the polar coordinate system depends on the fovea center.

This work is part of a larger research project that includes a method for automatically grading Diabetic Macular Edema. In this paper, we present an automatic method for detecting the fovea center, which is used for estimating the DME severity levels within the context of this project. Our method identifies the fovea center as a point (a pixel), and has been validated on two public databases of retinal images. The accuracy of the fovea detection refers to how close the automatically detected fovea center point is (i.e. the distance in pixels) to the fovea center hand-labeled by an expert.

Figure 1: (a) Annotations of the main retinal features on a typical fundus image. (b) The polar coordinate system (which is centered on the fovea center) used to analyze the distribution of the exudates around the fovea.

In Section 2, we discuss recent approaches for the detection of the fovea. In Section 3, we explain our fovea center detection method. Our experiments are discussed in Section 4, and our conclusions are presented in Section 5.

2. Detection of the Fovea in the Literature

There are several approaches for detecting the fovea in retinal images (see Table 1). Sinthanayothin et al. [4] use a fovea template to find the fovea locus in retinal images. This template is an artificial grayscale model of size 40x40 pixels that mimics a real fovea region, and is obtained using a Gaussian distribution with a fixed standard deviation [4]. In order to detect the fovea candidate regions, they first calculate the correlation coefficients between the retinal model and the eye fundus image. Afterwards, these correlation coefficients are compared with a threshold, and the candidate regions most correlated with the template are selected. Then, the center of the darkest candidate region, detected at a plausible distance from the optic disk, is selected as the fovea center.

Narasimha-Iyer et al. [9] proposed to locate the fovea center using a two-step approach, which is based on the optic disk diameter, a region of interest and an adaptive threshold. Singh et al. [10] use an appearance-based method for fovea detection: they enhance the local contrast and detect the fovea as a dark image structure. Kose et al. [11] identify the relative location of the macula with respect to the optic disk position. For example, if the macula in the left eye is detected on the right side of the image, then the macula in the right eye must be detected on the left side. However, the method proposed by Kose et al. [11] does not detect the fovea; it only detects the macula region approximately.

Niemeijer et al. [12] use a method based on a cost function as well as a point distribution model to detect and locate the fovea. They report a success rate of 94.4% for their fovea detection and location method using 500 images. Li et al. [8] combine the information provided by low-intensity pixels (characteristic of the fovea region) and the main vessels arcade, and detect the fovea with a parabola fitting method. They report a success rate of 100% using 89 color eye fundus images.

Sagar et al. [13] use the spatial relationship between the optic disk diameter and the macula region to find the fovea center. Firstly, they detect a ROI (Region of Interest) in the eye fundus image, and then they mask out all blood vessel pixels in this ROI using morphological operations. Afterwards, the darkest pixels in this ROI are identified and clustered. Then, the centroid of the largest cluster is selected as the fovea center. They reported a success rate of 96% for the correct detection of the fovea center in 100 images. Sopharak et al. [14] used a similar approach to find the fovea. They detect the fovea locus using the optic disk diameter, and then they mask out the high-contrast vessels using the morphological closing operator. Sekhar et al. [15] proposed to use the Hough Transform and some morphological operations to automatically detect the fovea. Firstly, they estimate the optic disk locus using morphological operations, and afterwards they locate the optic disk boundary using the Hough Transform. Then, using the spatial relationship between the optic disk diameter and the fovea region, a region of interest (ROI) is identified. Within this ROI, they apply thresholding and the morphological opening operator to identify the fovea center.

Most methods available in the literature rely on information such as the spatial relationship between the optic disk diameter and the fovea region, and/or the location of blood vessels (e.g. the main vessels arcade), to detect and locate the fovea. Therefore, most methods in the literature tend to be negatively affected by pathologies (e.g. bright and dark areas that are potentially associated with diabetic lesions), which often results in an inaccurate detection of the fovea.

In this paper, we introduce a new method based on the spatial relationship between the optic disk diameter and the fovea region to automatically detect and locate the fovea in color retinal images. Using this spatial relationship, we select a ROI (Region of Interest) in the eye fundus image. Inside this ROI, we automatically detect fovea candidate regions using specific morphological filters. With these filters, bright lesions (i.e. hard exudates) and dark lesions (i.e. microhemorrhages or microaneurysms) are removed before finding fovea candidate regions, making our fovea detection method robust to such image disturbances. Afterwards, the center of the darkest candidate region, located below the optic disk center, is selected as the fovea center. In addition, it shall be observed that we do not apply thresholding techniques to find fovea candidate regions. Thresholding techniques tend not to provide robust estimators for the fovea candidate regions (since the threshold estimate depends on pixel intensities, and these vary substantially within the same image). Thus, we address the problem of finding fovea candidate regions using morphological operators (i.e. our method is morphological and adaptive to typical disturbances in retinal images, e.g. lesions and local illumination variability). Our experimental results are encouraging, and indicate that our approach can potentially achieve a better performance than other known methods proposed in the literature. Table 1 summarizes some of the major features of the proposed method and of other methods available in the literature.

3. Materials and Methods

3.1. Materials

The retinal images of the DRIVE database (Digital Retinal Images for Vessel Extraction) [16] were used in our experiments. The DRIVE database is available on the Web, and consists of 40 color eye fundus images of 584 x 565 pixels, captured using a 45 degree field-of-view (FOV) digital fundus camera.

Table 1: Some major features of the proposed method and of other methods available in the literature.

| Method | Approach | Requires knowing the optic disk in advance | Requires knowing the blood vessels in advance | Negatively affected by pathologies | Success rate (a) | Dataset used for testing |
| Sinthanayothin et al. [4] | Template-based method | Yes | No | Not specified | 80.4% | Local |
| Narasimha-Iyer et al. [9] | Selection of ROI and Threshold | Yes | No | Opaque lesions | Not specified | Local |
| Singh et al. [10] | Appearance-based method | No | No | Not specified | 96.61% | Local, DRIVE and STARE |
| Kose et al. [11] | Selection of ROI, Relative locations of optic disk and macula | Yes | No | Not specified | Not specified | Local |
| Niemeijer et al. [12] | Cost function and a point distribution model | Yes | Yes | No | 94.4% | Local |
| Li et al. [8] | Parabola fitting | Yes | Yes | Not specified | 100% | Local |
| Sagar et al. [13] | Selection of ROI and Morphology | Yes | Yes | Not specified | 96% | DRIVE and STARE |
| Sopharak et al. [14] | Selection of ROI, Thresholding and Morphology | Yes | No | Not specified | Not specified | Local |
| Sekhar et al. [15] | Selection of ROI, Thresholding and Morphology | Yes | No | Not specified | 100% | DRIVE and STARE |
| Our proposed method | Selection of ROI and Morphology | Yes | No | Large opaque lesions | 100% and 96.62% | DRIVE and DIARETDB1 |

(a) The success rates (percentages) are those claimed by the authors in the corresponding references.

The DRIVE database contains 7 images with mild signs of diabetic retinopathy (i.e. 15% of the images) [17]. Also, the database images vary in quality (e.g. background illumination). In addition, the proposed method was also tested on the public domain database DIARETDB1 [18]. DIARETDB1 is an image database consisting of 89 color eye fundus images of 1500 x 1152 pixels, captured using a 50 degree field-of-view digital fundus camera. In order to save computation time, we resized the DIARETDB1 images to 640 x 480 pixels. It shall be observed that DIARETDB1 has 84 images (i.e. 94.39% of the images) containing mild non-proliferative signs of diabetic retinopathy (e.g. exudates, microaneurysms and hemorrhages), and 5 images with no signs of diabetic retinopathy [18]. Among the 84 images containing mild non-proliferative signs of diabetic retinopathy, there are 47 retinal images containing hard exudates (i.e. 55.95% of the images). A retina expert manually labeled the fovea centers in the images of the DRIVE and DIARETDB1 databases. These hand-labeled images were then taken as reference images (i.e. ground truth images used to evaluate the performance of our method). It is important to clarify that we used all 89 images of DIARETDB1 for adjusting parameters and testing our methodology and then, without changing any parameter settings in our method, we used the 37 images of the DRIVE database to validate our approach.

3.2. An Overview of Basic Morphological Operators

In this section, we provide a brief overview of the morphological operators used in this paper. We use basic morphological and geodesic transformations, where two input images are combined to produce new morphological primitives [19].

Consider two input images f and g, where f is the marker image and g is the mask image. Let δ denote the basic morphological dilation, and ε the basic morphological erosion. The geodesic dilation of order n, δ_g^(n)(f) (where f ≤ g), and the geodesic erosion of order n, ε_g^(n)(f) (where f ≥ g), can be written as follows:

δ_g^(n)(f) = δ_g^(1)[δ_g^(n-1)(f)], where δ_g^(1)(f) = δ^(1)(f) ∧ g,

ε_g^(n)(f) = ε_g^(1)[ε_g^(n-1)(f)], where ε_g^(1)(f) = ε^(1)(f) ∨ g,

where n represents successive geodesic dilations, or erosions, of f with respect to g, and ∧ and ∨ are operators representing the point-wise minimum and maximum, respectively. The condition f ≤ g indicates that, in the dilation of f, any values that exceed g are set to the corresponding values of the mask g. On the other hand, f ≥ g denotes that, in the erosion of f, any grayscale values lower than g are set to the corresponding grayscale values of g [19].

If the geodesic dilation, or erosion, is performed successively until stability, we obtain the morphological reconstruction by dilation, R_g(f), or the reconstruction by erosion, R*_g(f), respectively:

R_g(f) = δ_g^(i)(f), when δ_g^(i)(f) = δ_g^(i+1)(f),

R*_g(f) = ε_g^(i)(f), when ε_g^(i)(f) = ε_g^(i+1)(f).

In addition, from the reconstruction by erosion we can define the regional minimum of an image f. The regional minimum, RMIN, is a grayscale image in which the regions such that RMIN ≤ f are delimited. If a pixel value of f, namely f(x, y), is smaller than or equal to its neighboring pixels, then it is kept; otherwise, it is assigned to zero. That is, each pixel of f surrounded by pixels brighter than itself belongs to a regional minimum. The RMIN image can be found according to Equation 1:

RMIN(f) = R*_f(f + 1) − f. (1)
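The regional minimum of Equation 1 maps directly onto the grayscale reconstruction available in scikit-image. The following is a minimal sketch, assuming a NumPy grayscale image; the function name regional_minima is ours, not the paper's.

```python
import numpy as np
from skimage.morphology import reconstruction

def regional_minima(f):
    """RMIN(f) = R*_f(f + 1) - f (Equation 1): nonzero exactly at the
    pixels that belong to a regional minimum of f."""
    f = f.astype(float)
    # Reconstruction by erosion of (f + 1), constrained from below by f.
    rec = reconstruction(f + 1, f, method='erosion')
    return rec - f

# Tiny example: a 3x3 image with a single-pixel basin at the center.
f = np.array([[5, 5, 5],
              [5, 1, 5],
              [5, 5, 5]])
print(regional_minima(f))  # nonzero only at the central pixel
```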

3.3. Our Proposed Method

Our proposed method needs two pieces of information in order to detect the fovea: the optic disk diameter and the optic disk center. We have developed our own morphological method for detecting the optic disk because of its adequacy to our present needs [20]. Our approach to detect the optic disk relies on mathematical morphology techniques, and has two main stages, namely: 1) detection of the optic disk location; 2) detection of the optic disk boundary. The optic disk is located using the vascular tree as a reference. Thus, based on the previously detected vascular tree, we use three algorithms to detect an internal point of the optic disk and several other points in the vicinity of this internal point. Then, we use all the points previously detected inside the optic disk region to identify the optic disk boundary using the Watershed Transform from Markers [20]. Figure 2 illustrates the output of our method applied to an image of the DIARETDB1 database. After identifying the optic disk boundary, the optic disk diameter and its center are calculated as shown in Figure 2 (in this example, the disk diameter, DD, is equal to 68.91 pixels). It shall be observed that, using the DRIVE database, our method correctly located the optic disk in 100% of the images, and using DIARETDB1 the success rate for the localization of the optic disk was 97.75% (i.e. 87 correct optic disk detections in a total of 89 images) [20]. The failure in only two images was due to an incorrect identification of the vascular tree in the presence of a large amount of opaque lesions (e.g. hemorrhages). See reference [20] for more details.

Figure 2: Segmented optic disk. The optic disk boundary is used to find the optic disk center and diameter; a ROI image which contains the fovea is then identified. The plus symbol on the left indicates the selected ROI center; the plus symbol on the right indicates the optic disk center.

Given the optic disk center and diameter, a region of interest (ROI) is selected in the image, and inside this ROI the fovea is located based on anatomical information (see Figure 3 (a)). This ROI has 160 x 160 pixels in all our experiments (i.e. 2 times the average disk diameter), and its center is located 2.6 disk diameters, in pixels, away from the optic disk center, always on the temporal side of the optic disk. For example, if the input image corresponds to the right eye, as illustrated by Figure 2, then the left side of the optic disk is the temporal side.
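As a concrete illustration of this anatomical rule, the sketch below crops such a ROI, assuming the optic disk center and diameter are already known (e.g. from the optic disk detector of [20]). The function name, the 'left'/'right' side convention and the omission of image-boundary clipping are our own simplifications.

```python
def select_roi(image, od_center, od_diameter, temporal_side, roi_size=160):
    """Crop a roi_size x roi_size ROI whose center lies 2.6 optic disk
    diameters from the optic disk center, on the temporal side, and on
    the same image row as the optic disk center."""
    row, col = od_center
    offset = int(round(2.6 * od_diameter))
    roi_col = col - offset if temporal_side == 'left' else col + offset
    half = roi_size // 2
    return image[row - half:row + half, roi_col - half:roi_col + half]
```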

We developed a method to locate the side where the fovea shall be found, which is described next. The vascular tree skeleton of the original retinal image is used to help identify the optic disk position. On this vascular tree skeleton, only the curve containing and connecting the inferior and superior main vessels of the vessels arcade is identified. The endpoints of this curve are the extreme points of the main vessels arcade. Thus, if the optic disk center is at the right of the most distant endpoint, the arcade is on its left, indicating that the fovea is at the left of the optic disk (i.e. indicating that this is the right eye). Otherwise, if the main vessels arcade is on the right side of the optic disk, the fovea is located at the right of the optic disk (i.e. in this case, we have the left eye). It shall be observed that the method to locate the side of the fovea also works properly on optic-disk-centered images, whose temporal arcades are outside the image. In the case of centered images, the method attempts to identify the optic disk location using the superior and inferior nasal arcades (i.e. the arcades that are located on the opposite side of the temporal arcades) [20]. A minimal sketch of this left/right decision is given below.
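The sketch captures only the geometric comparison described above; extracting the arcade endpoints from the vessel skeleton is outside its scope, so the endpoint columns are assumed to be given. The function and parameter names are hypothetical.

```python
def fovea_side(od_center_col, arcade_endpoint_cols):
    """Decide on which side of the optic disk the fovea lies, by comparing
    the optic disk center column with the most distant arcade endpoint."""
    farthest = max(arcade_endpoint_cols, key=lambda c: abs(c - od_center_col))
    # Arcade (and fovea) to the left of the optic disk: right eye; else left eye.
    return 'left' if farthest < od_center_col else 'right'
```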

The ROI center is aligned with the optic disk center; in other words, they are on the same image row, as illustrated in Figure 2. We assume that the fovea center is inside the ROI. This is because, anatomically, the fovea and the optic disk are related, since the fovea can be located at a minimum distance of 2 times the optic disk diameter [4], [8], [7]. In order to detect the fovea center inside the ROI, morphological methods are used, as described next.

By applying the regional minima operator and the geodesic morphological reconstruction by dilation to the green channel of the original image, namely fg, we remove bright areas that are potentially associated with diabetic lesions. The regional minima of fg are computed, and then a reconstruction by dilation of fg is performed using the previously calculated regional minima image as a marker. The central idea is to identify the foreground and background of fg. We take as foreground the brighter structures (e.g. exudates), and as background all the remaining structures (e.g. thin vessels and hemorrhages). Equation 2 defines this idea:

fg1 = R_fg(RMIN(fg)), (2)

where the new image fg1 does not contain bright spots (e.g. hard exudates).

Figure 3 (a) shows the green channel of the original ROI image, containing a diabetic lesion (indicated by the white arrow). Figure 3 (b) depicts the resulting image fg1, showing that the signs of diabetic lesions have been removed.
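One possible reading of Equation 2 in code is given below, reusing the regional_minima sketch from Section 3.2: the marker keeps fg only at its regional minima and is floored elsewhere, and the reconstruction by dilation then rebuilds the image without its bright structures. This is our interpretation of the step, not the authors' reference implementation.

```python
import numpy as np
from skimage.morphology import reconstruction

def remove_bright_lesions(fg):
    """Sketch of Equation 2: suppress bright structures (e.g. hard
    exudates) by a reconstruction by dilation from the regional minima."""
    fg = fg.astype(float)
    minima = regional_minima(fg)               # from the Section 3.2 sketch
    marker = np.where(minima > 0, fg, fg.min())
    return reconstruction(marker, fg, method='dilation')
```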

However, some undesirable features remain in fg1, such as dark spots (which can be attributed to the natural eye pigmentation or to microhemorrhages) and thin vessels (e.g. capillaries). In order to remove these undesirable features, we apply the υ-minima filter [21], [22] to the fg1 image (see fg2 in Equation 3):

fg2 = υ_fg1(1000), (3)

In our experiments, we use a large threshold value (i.e. υ = 1000 in all our experiments) to eliminate from the fg1 image all basins with a volume less than or equal to 1000. The resulting image fg2 is illustrated by Figure 3 (c). See Appendix A for a more detailed explanation of the υ-minima filter.
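scikit-image does not provide a volume-attribute (υ-minima) filter out of the box; its area_closing is a related attribute filter that removes dark basins whose area, rather than volume, falls below a threshold. The line below uses it only as a rough stand-in so the pipeline can be exercised end to end; the threshold value is illustrative and is not the paper's υ = 1000 volume.

```python
from skimage.morphology import area_closing

# Rough stand-in for the v-minima filter of Equation 3: removes small
# dark basins by area rather than by volume (an approximation).
fg2 = area_closing(fg1, area_threshold=200)
```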

Next, the regional minima operator, RMIN, of fg2 is used to identify low-intensity regions as fovea candidates, as specified in Equation 4. This step produces a binary image as the output (see fg3 in Equation 4, illustrated in Figure 3 (d)):

fg3 = R_fg2(RMIN(fg2)). (4)

Figure 3: Steps for fovea detection using our approach: (a) Original ROI image fg. (b) Image without signs of bright lesions (i.e. hard exudates): fg1 = R_fg(RMIN(fg)). (c) Image without small basins (i.e. microhemorrhages or microaneurysms): fg2 = υ_fg1(1000). (d) Binary image showing the fovea candidate regions: fg3 = R_fg2(RMIN(fg2)). (e) fg4 image: only candidate regions below the optic disk center remain in this image. (f) fg5 image: the centroid of the darkest region is selected as the fovea center.

All candidate fovea regions can be found in the foreground of fg3 (see Figure 3 (d)). Thus, some criteria are needed to eliminate unlikely fovea candidate regions from fg3. Anatomically, the fovea center is located below the optic disk center [23]; therefore, all fovea candidate regions in fg3 above the ROI center are discarded. Recall that the optic disk center and the ROI center are horizontally aligned (they are on the same image row, as shown in Figure 2). Figure 3 (e) illustrates the resulting fg4 image, where only candidate fovea regions below the optic disk center remain. Finally, the average intensity of each remaining fovea candidate region is calculated, and the centroid of the fovea candidate region with the lowest average intensity is chosen as the fovea center. Figure 3 (f) illustrates the image fg5, with the candidate region selected as the most likely fovea location.
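The selection step can be sketched as follows, taking fg2 (the filtered grayscale ROI) and fg3 (the binary candidate image) from the previous steps; connected components are labeled with SciPy, and the function name is ours.

```python
import numpy as np
from scipy import ndimage

def pick_fovea_center(fg2, fg3, roi_center_row):
    """Discard candidate regions above the ROI center row, then return the
    centroid of the remaining region with the lowest mean intensity."""
    labels, n = ndimage.label(fg3 > 0)
    best, best_mean = None, np.inf
    for lab in range(1, n + 1):
        rows, cols = np.nonzero(labels == lab)
        if rows.mean() <= roi_center_row:       # region lies above the center
            continue
        mean_intensity = fg2[rows, cols].mean()
        if mean_intensity < best_mean:
            best_mean, best = mean_intensity, (rows.mean(), cols.mean())
    return best                                 # (row, col) of the fovea center
```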

The flowchart in Figure 4 shows the proposed fovea detection algorithm step by step. The input is the green channel fg of the original color image of the retina, and the output is the fovea center position, indicated by a white arrow.

Figure 4: Summary of the fovea center detection algorithm, step-by-step.
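Putting the earlier sketches together, the whole pipeline reads roughly as follows. All helper names (fovea_side, select_roi, remove_bright_lesions, regional_minima, pick_fovea_center) are the hypothetical functions defined above, and the optic disk center, diameter and arcade endpoints are assumed to come from the detector of reference [20].

```python
from skimage.morphology import area_closing

# green: green channel of the fundus image (NumPy array); od_center,
# od_diameter, endpoints: outputs of the optic disk / vessel analysis.
side = fovea_side(od_center[1], endpoints)
roi = select_roi(green, od_center, od_diameter, side)   # anatomical ROI
fg1 = remove_bright_lesions(roi)                        # Equation 2
fg2 = area_closing(fg1, area_threshold=200)             # stand-in for Eq. 3
fg3 = regional_minima(fg2) > 0                          # candidates (Eq. 4)
fovea_row, fovea_col = pick_fovea_center(fg2, fg3, roi.shape[0] // 2)
```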

4. Experimental Results and Discussion

We experimentally tested our approach on the DRIVE and DIARETDB1 databases. In these experiments, 3 images were excluded from the DRIVE database for not presenting a visually detectable fovea, according to experts, namely image#23, image#31 and image#34. The Euclidean distance was used for measuring the fovea detection accuracy of our method: we considered correct all automatically detected fovea centers within a distance of 34 pixels of the ground truth (in terms of the Euclidean distance). The idea of using a fixed distance for measuring the fovea detection accuracy has been used previously by Niemeijer et al. [12]. However, they used a distance of 50 pixels and images of size 768 x 576 pixels, so we adjusted this distance proportionally to the smallest image size used in our experiments (i.e. 640 x 480 pixels). The fovea center ground truth was manually created by an expert, who marked the fovea location in each retinal image of the DRIVE and DIARETDB1 databases, assigning the grayscale value 255 to the pixel that best represents the fovea center. Thus, a Euclidean distance value equal to zero indicates that our method detects the fovea center exactly, with no error. For example, the Euclidean distance calculated for the first image of the DRIVE database is 5.38 pixels, and for the second image it is 6.08 pixels (see Appendix B). Therefore, the fovea center identified automatically on the first DRIVE database image is nearer to the ground truth than on the second image.
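The evaluation criterion is simple to state in code; the sketch below checks a detection against the paper's 34-pixel tolerance (the function name is ours).

```python
import numpy as np

def detection_is_correct(detected, ground_truth, tolerance=34.0):
    """True if the detected fovea center lies within `tolerance` pixels
    (Euclidean distance) of the hand-labeled ground truth center."""
    d = np.hypot(detected[0] - ground_truth[0], detected[1] - ground_truth[1])
    return d <= tolerance

# Example: an error of about 5.38 pixels, as reported for DRIVE image #01.
assert detection_is_correct((100.0, 200.0), (103.0, 204.47))
```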

In our experiments, the fovea center was detected correctly by our method in 37 out of the 37 images of the DRIVE database (100% of the cases). This means that all fovea centers automatically detected by our method are within the tolerance interval of 34 pixels (in terms of the Euclidean distance to the ground truth). In addition, our method correctly detected the fovea center in 92.13% of the cases of the DIARETDB1 database (i.e. in 82 out of the 89 images). Appendix B and Appendix C show the results obtained by our method in detail for the DRIVE and DIARETDB1 databases, respectively.

Table 2: Comparison of our fovea center detection method and other methods available in the literature (DRIVE and DIARETDB1 databases). In our experiments, we considered correct all automatically detected fovea centers within 34 pixels of the ground truth.

| Method | DRIVE (%) | DIARETDB1 (%) |
| Sinthanayothin et al. [4] | 78.38% (29 out of the 37 images) | 65.16% (58 out of the 89 images) |
| Narasimha-Iyer et al. [9] | 83.78% (31 out of the 37 images) | 80.89% (72 out of the 89 images) |
| Singh et al. [10] | 86.48% (32 out of the 37 images) | 65.16% (58 out of the 89 images) |
| Sagar et al. [13] | 94.59% (35 out of the 37 images) | 84.26% (75 out of the 89 images) |
| Sopharak et al. [14] | 51.35% (19 out of the 37 images) | 38.20% (34 out of the 89 images) |
| Sekhar et al. [15] | 91.89% (34 out of the 37 images) | 67.41% (60 out of the 89 images) |
| Our proposed method | 100% (37 out of the 37 images) | 92.13% (82 out of the 89 images) |

Our method has been compared with other methods available in the literature, and the results are shown in Table 2 and in Table 3. Moreover, it shall be observed that, in order to obtain results for both databases, we implemented and tested all the methods presented in Table 2 and in Table 3 (we set the parameters of each method to obtain its best performance for the image dataset). Our experimental results based on the DRIVE and DIARETDB1 databases indicate that our method can outperform other methods available in the literature. For example, our method obtained the smallest average Euclidean distance and the smallest distance standard deviation for both the DRIVE and DIARETDB1 databases (see Table 3), indicating that our method can detect the fovea center more accurately than other comparable methods.

Table 3: Comparison of the fovea detection performance using the average Euclidean distance and the standard deviation as the criteria for measuring the fovea detection accuracy, for the proposed method and other methods available in the literature.

DRIVE database

| Method | Average distance (pixels) | Distance standard deviation (pixels) |
| Sinthanayothin et al. [4] | 62.23 | 116.45 |
| Narasimha-Iyer et al. [9] | 29.23 | 53.88 |
| Singh et al. [10] | 14.43 | 14.36 |
| Sagar et al. [13] | 12.61 | 14.92 |
| Sopharak et al. [14] | 122.06 | 142.87 |
| Sekhar et al. [15] | 10.45 | 12.13 |
| Our proposed method | 7.39 | 5.54 |

DIARETDB1 database

| Method | Average distance (pixels) | Distance standard deviation (pixels) |
| Sinthanayothin et al. [4] | 62.68 | 84.16 |
| Narasimha-Iyer et al. [9] | 32.52 | 56.24 |
| Singh et al. [10] | 37.93 | 47.55 |
| Sagar et al. [13] | 24.79 | 49.81 |
| Sopharak et al. [14] | 81.08 | 90.05 |
| Sekhar et al. [15] | 30.81 | 30.22 |
| Our proposed method | 10.12 | 14.99 |

Figure 5 illustrates the performance of our method in the presence and in the absence of diabetic signs (e.g. small dark lesions and bright lesions), and it also compares the fovea center detected automatically by our method with the fovea center detected manually. Figure 5 (a) illustrates the green channel of a color eye fundus image without diabetic signs, where the fovea center location error is relatively small (i.e. 6.08 pixels). On the other hand, Figure 5 (b) illustrates the green channel of a color eye fundus image containing diabetic signs, where the fovea center detection error is 5.84 pixels. However, we have verified experimentally that our method is not suitable for detecting the fovea center in retinal images where the optic disk was erroneously detected, or in retinal images that contain large opaque lesions (e.g. hemorrhages). Figure 5 (c) shows our detection results superimposed on the green channel of a color eye fundus image presenting an unacceptable error (i.e. the tolerance interval of 34 pixels was exceeded). The reason for this failure is the incorrect identification of the optic disk rim by the method that we have proposed. Figure 5 (d) illustrates a situation in which our method fails in the presence of large opaque lesions. Normally the υ-minima filter removes small opaque lesions, but in this case it was unable to remove the lesion, and the algorithm identified the opaque lesion as being the fovea center.

It shall be observed that our method can potentially provide the best performance among the tested methods, with the smallest average Euclidean distance and the smallest average distance standard deviation. Also, our method was able to detect the fovea center in all 37 images of the DRIVE database, and to detect the fovea center in more images of the DIARETDB1 database than the other tested methods (representative of the state of the art). These results indicate that an improvement has been obtained over other methods proposed in the literature. We believe that our method obtains better results because it does not apply thresholding techniques to find fovea candidate regions, and because it makes an effort to remove bright and dark lesions before finding the fovea candidate regions.

Figure 5: Performance of our method in the absence and in the presence of diabetic signs. (a) Detected fovea center on an image without diabetic lesions (error = 6.08 pixels). (b) Detected fovea center on an image containing diabetic lesions (error = 5.84 pixels). (c) Failure in the detection of the fovea center due to an incorrect identification of the optic disk rim (error = 53.36 pixels). (d) Failure in the detection of the fovea center due to the presence of a large opaque lesion in the fovea vicinity (error = 88.10 pixels). The dashed arrow depicts the fovea center manually detected (i.e. ground truth), and the solid arrow depicts the fovea center automatically detected. The error is the Euclidean distance between the fovea center automatically detected and the fovea center manually detected.

However, our method has the negative side effect of failing to remove large opaque lesions. Moreover, it shall be observed that we did not compare our method with all the methods available in the literature, because we gave priority to the methods based on mathematical morphology, region of interest (i.e. ROI) selection and thresholding (i.e. the methods most closely related to our approach).

5. Conclusions

This paper presented a new method for detecting the fovea center using the green channel of a color retinal image. The proposed method explores anatomic characteristics to identify the regions where the fovea center is most likely to be found. This feature of the method tends to make it more robust to the presence of signs of abnormality in the eye fundus image, a common drawback of other fovea detection methods available in the literature.

The experimental results with two public domain eye fundus databases (namely DRIVE and DIARETDB1) indicate that our approach achieves a fovea detection rate comparable to the best methods available in the literature (100% for the DRIVE database, and 92.13% for the DIARETDB1 database). Besides, these experiments confirm that our method tends to be more robust to image artifacts near the fovea region, such as exudates, microaneurysms and microhemorrhages. The quantitative analysis using the Euclidean distance to the ground truth (i.e. the manually detected fovea centers) indicates that our method tends to detect the fovea center more accurately than comparable approaches available in the literature.

However, in the presence of large hemorrhages, our method may fail. Future work shall concentrate on the automatic detection of exudate lesions, the automatic identification of subfields to qualify these exudate lesions, and the grading of the severity degree of DME.

Acknowledgments

The authors wish to thank the DRIVE and DIARETDB1 project teams for making their image databases available on the Internet, and the anonymous referees for their valuable comments, which have helped us to improve this paper. Also, the authors are particularly grateful to CNPq (National Council for Scientific and Technological Development, Brazil) for financial support. Furthermore, we thank Singh et al. [10] for providing their results obtained for the DRIVE database.

References

[1] T. A. Ciulla, A. G. Amador, B. Zinman, Diabetic retinopathy and diabetic macular edema, Diabetes Care 26 (9) (2003) 2653-2664.

[2] World Health Organization, Prevention of blindness from diabetes mellitus: Report of a WHO consultation in Geneva, Switzerland, Tech. rep. (2005).

[3] J. R. Sparrow, RPE lipofuscin: Formation, properties and relevance to retinal degeneration, in: J. Tombran-Tink, C. J. Barnstable (Eds.), Retinal Degenerations: Biology, Diagnostics, and Therapeutics, Humana Press Inc., New Jersey, 2007, Ch. 12, pp. 213-236.

[4] C. Sinthanayothin, J. F. Boyce, H. L. Cook, T. H. Williamson, Automated localisation of the optic disc, fovea, and retinal blood vessels from digital colour fundus images, British Journal of Ophthalmology 83 (1999) 902-910.

[5] P. Frith, R. Gray, S. MacLennan, P. Ambler, The Eye in Clinical Practice, 2nd Edition, Blackwell Science Ltd, London, 2001.

[6] K. W. Tobin, E. Chaum, V. P. Govindasamy, T. P. Karnowski, Detection of anatomic structures in human retinal imagery, IEEE Transactions on Medical Imaging 26 (12) (2007) 1729-1739.

[7] M. Goldbaum, S. Moezzi, A. Taylor, S. Chatterjee, J. Boyd, E. Hunter, R. Jain, Automated diagnosis and image understanding with object extraction, object classification, and inferencing in retinal images, in: International Conference on Image Processing, Vol. 3, IEEE Computer Society, 1996, pp. 695-698.

[8] H. Li, O. Chutatape, Automated feature extraction in color retinal images by a model based approach, IEEE Transactions on Biomedical Engineering 51 (2) (2004) 246-254.

[9] H. Narasimha-Iyer, A. Can, B. Roysam, C. V. Stewart, H. L. Tanenbaum, A. Majerovics, H. Singh, Robust detection and classification of longitudinal changes in color retinal fundus images for monitoring diabetic retinopathy, IEEE Transactions on Biomedical Engineering 53 (6) (2006) 1084-1098.

[10] J. Singh, G. D. Joshi, J. Sivaswamy, Appearance based object detection in colour retinal images, in: IEEE International Conference on Image Processing, IEEE, San Diego, California, U.S.A., 2008.

[11] C. Kose, U. Sevik, O. Gencaliaglu, Automatic segmentation of age-related macular degeneration in retinal fundus images, Computers in Biology and Medicine 38 (2008) 611-619.

[12] M. Niemeijer, M. D. Abramoff, B. V. Ginneken, Segmentation of the optic disc, macula and vascular arch in fundus photographs, IEEE Transactions on Medical Imaging 26 (2007) 116-127.

[13] A. V. Sagar, S. Balasubramanian, V. Chandrasekaran, Automatic detection of anatomical structures in digital fundus retinal images, in: IAPR Conference on Machine Vision Applications, Tokyo, Japan, 2007, pp. 483-486.

[14] A. Sopharak, B. Uyyanonvara, S. Barman, T. H. Williamson, Automatic detection of diabetic retinopathy exudates from non-dilated retinal images using mathematical morphology methods, Computerized Medical Imaging and Graphics 32 (2008) 720-727.

[15] S. Sekhar, W. Al-Nuaimy, A. Nandi, Automated localisation of optic disk and fovea in retinal fundus images, in: 16th European Signal Processing Conference (EUSIPCO-2008), Lausanne, Switzerland, 2008.

[16] J. Staal, M. Abramoff, M. Niemeijer, M. Viergever, B. van Ginneken, Ridge based vessel segmentation in color images of the retina, IEEE Transactions on Medical Imaging 23 (2004) 501-509.

[17] D. A. Adjeroh, U. Kandaswamy, Texton-based segmentation of retinal vessels, Journal of the Optical Society of America A 24 (5) (2007) 1384-1393.

[18] T. Kauppi, V. Kalesnykiene, J.-K. Kamarainen, L. Lensu, I. Sorri, A. Raninen, R. Voutilainen, H. Uusitalo, H. Kalviainen, J. Pietila, DIARETDB1 diabetic retinopathy database and evaluation protocol, in: Medical Image Understanding and Analysis (MIUA), 2007, pp. 61-65.

[19] B. Jahne, H. Haussecker, P. Geissler, Handbook of Computer Vision and Applications: Signal Processing and Pattern Recognition, Vol. 2, Academic Press, New York, 1999.

[20] D. Welfer, J. Scharcanski, C. M. Kitamura, M. M. D. Pizzol, L. W. Ludwig, D. R. Marinho, Segmentation of the optic disk in color eye fundus images using an adaptive morphological approach, Computers in Biology and Medicine 40 (2) (2010) 124-137.

[21] C. Vachier, Morphological scale-space analysis and feature extraction, in: Proceedings of the IEEE Intl. Conf. on Image Processing, Vol. 3, 2001, pp. 676-679.

[22] E. R. Dougherty, R. A. Lotufo, Hands-on Morphological Image Processing, Vol. TT59, SPIE Publications, 2003.

[23] T. Schlote, M. Grueb, J. M. Rohrbach, Pocket Atlas of Ophthalmology, Georg Thieme Verlag, New York, 2006.

[24] T. Geraud, G. Palma, N. V. Vliet, Fast color image segmentation based on levellings in feature space, in: Proceedings of Computer Vision and Graphics, Vol. 32, Springer, 2004, pp. 800-807.

Appendix A: Description of the υ-minima filter.

According to Dougherty et al. [22], there are two filters that use the concept of volume: the υ-maxima filter, which removes all peaks with a volume less than υ, and the υ-minima filter, which removes all basins with a volume less than υ from a grayscale image. For the sake of clarity, we next explain the concepts of volume of the peaks and basins of a grayscale image. Figure 6 shows a flowchart illustrating the concept of volume. Step (a) of this flowchart illustrates a 3 x 3 input image f(x, y). Then, using a 3 x 3 cross-shaped structuring element centered on the basin, only 5 pixels are selected from the input image f(x, y) to perform the next set of operations, namely f(x) = {5, 5, 1, 5, 5} (see Figure 6 (b)). Figure 6 (c) shows the 1-D profile f(x), where the basin is at the profile center. Afterwards, as illustrated in Figure 6 (d), the level components of the two peaks of the profile are identified (i.e. the connected components that exist at each gray level). In order to identify each level component, the 1-D profile f(x) is decomposed into the sets F_t^(i) [24], where t is the (gray) level of the 1-D profile f(x), and i is the label of the connected component at the gray level t.

There are 9 level components for the two peaks in this 1-D profile, each denoted F_t^(i), the level component at gray level t with label value i: e.g. F_2^(2) is the level component at gray level 2 (above gray level 1) with label value 2; F_2^(3) is the level component at gray level 2 (above gray level 1) with label value 3; F_3^(4); and so on. The next step is the identification of the area of each level component F_t^(i), as illustrated in Figure 6 (e) for the 1-D profile. The area of a level component, A(F_t^(i)), is the number of labeled elements of that specific connected component. For instance, the number of labeled elements of the level component F_5^(8) is 2, so its area is A(F_5^(8)) = 2; the number of labeled elements of the level component F_1^(1) is 5, so its area is A(F_1^(1)) = 5. Given the areas A(F_t^(i)), the volume can be calculated as shown in Figure 6 (f). The volume of a level component is given by adding its area to the areas of all the level components above it [21], [22], [24]. For instance, in the case of the 1-D profile shown in Figure 6 (c), the volume of F_1^(1), υ(F_1^(1)), is given by: υ(F_1^(1)) = A(F_1^(1)) + A(F_2^(2)) + A(F_3^(4)) + A(F_4^(6)) + A(F_5^(8)), which results in υ(F_1^(1)) = 5 + 2 + 2 + 2 + 2 = 13. Generalizing, the volume of a level component F_t^(i) can be computed by Equation 5:

υ(F_t^(i)) = Σ_{t' ≥ t} |F_{t'}^(i')|. (5)

For the basin, there are only 4 level components in this 1-D profile, one at each gray level, namely: F_5^(1), the level component at gray level 5 with label value 1; F_4^(2), the level component at gray level 4 (below gray level 5) with label value 2; F_3^(3), the level component at gray level 3 (below gray level 4) with label value 3; and F_2^(4), the level component at gray level 2 (below gray level 3) with label value 4. In the case of this 1-D basin profile f(x), each of these levels contains only one connected component, so all of them have only one label per level. For this basin, the number of labeled elements in each of the level components (i.e. F_5^(1), F_4^(2), F_3^(3), F_2^(4)) is 1; therefore, the areas are A(F_5^(1)) = 1, A(F_4^(2)) = 1, A(F_3^(3)) = 1 and A(F_2^(4)) = 1. Thus, given the areas A(F_t^(i)) of this basin, the volumes can be found as shown in Figure 6 (h). For instance, in the case of the 1-D profile shown in Figure 6 (c), the volume of F_5^(1), υ(F_5^(1)), is given by: υ(F_5^(1)) = A(F_5^(1)) + A(F_4^(2)) + A(F_3^(3)) + A(F_2^(4)), which results in υ(F_5^(1)) = 1 + 1 + 1 + 1 = 4.
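To make the basin-volume computation concrete, the sketch below reproduces the worked example on the 1-D profile: the profile is inverted so that the basin becomes a peak, and the areas of the connected runs containing the basin pixel are summed over the gray levels (cf. Figure 6 (g)-(h)). It is a didactic illustration, not the filter implementation used in the paper.

```python
import numpy as np

def basin_volume_1d(profile, x):
    """Volume of the basin containing index x in a 1-D profile."""
    f = np.asarray(profile)
    g = f.max() - f                       # invert: basins become peaks
    volume = 0
    for t in range(1, int(g.max()) + 1):
        mask = g >= t
        if not mask[x]:
            break                         # x no longer belongs to a component
        left, right = x, x
        while left > 0 and mask[left - 1]:
            left -= 1
        while right < len(mask) - 1 and mask[right + 1]:
            right += 1
        volume += right - left + 1        # area of the run containing x
    return volume

# Appendix example: f(x) = {5, 5, 1, 5, 5} has a basin of volume 4 at x = 2.
assert basin_volume_1d([5, 5, 1, 5, 5], 2) == 4
```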

Figure 6: Summary of the steps to obtain the volumes of the peaks and basins of an input image. (a) A 3x3 input image containing a basin. (b) Pixels selected from the input image (i.e. using a 3x3 elementary cross structuring element). (c) 1-D profile of the input image showing two peaks and one basin. (d) Level components of the two peaks. (e) Area of each level component of the two peaks. (f) Volume of each level component of the two peaks. (g) Area of each level component of the basin. (h) Volume of each level component of the basin.

Figure 7 illustrates how the υ-minima filter algorithm works; the same 3 x 3 input image matrix f(x, y) used in Figure 6 is used in Figure 7 (a). The central pixel with the grayscale level "1" (i.e. f(2, 2)) represents the basin that must be removed. First, the volume of each level component is obtained, as shown in Figure 6 (h). Afterwards, the level components below a volume threshold are removed. In other words, if the volume of a level component is less than or equal to a given volume threshold, the level component is replaced by another one with a gray level higher than that of the removed component. For instance, if we remove all level components with a volume less than or equal to 1, the gray level of f(2, 2) is replaced by 2 in f(x, y) (as shown in Figure 7 (b)). Now, if all level components with a volume less than or equal to 3 are removed, the gray level of f(2, 2) is replaced by 4 (see Figure 7 (d)). The basin depicted in Figure 7 (a) is removed when all level components with a volume less than or equal to 4 are removed, as illustrated in Figure 7 (e).

Figure 7: Removing a basin with the υ-minima filter. (a) 3 x 3 matrix representing the original image f(x, y). The basin is the central pixel with the gray level 1. (b) Output image after υ-minima filtering the original image f(x, y), using the input parameter υ = 1. All basins with a volume less than or equal to 1 are removed. (c) Resulting image after υ-minima filtering the original image f(x, y) with υ = 2. (d) Output using υ = 3. (e) Output image without basins, obtained with υ = 4.

Appendix B: Experimental Results Using the DRIVE Database.

Table 4: Fovea detection performance of our approach using the Euclidean distance for all images of the DRIVE database. If the Euclidean distance (error) is within 34 pixels, then the fovea was successfully detected; otherwise, the fovea detection failed.

| Image | Distance in pixels (error) | Evaluation of Fovea Detection |
| image#01 | 5.38 | success |
| image#02 | 6.08 | success |
| image#03 | 12.04 | success |
| image#04 | 9.84 | success |
| image#05 | 8 | success |
| image#06 | 9.05 | success |
| image#07 | 13.89 | success |
| image#08 | 15.52 | success |
| image#09 | 3.60 | success |
| image#10 | 1 | success |
| image#11 | 6.08 | success |
| image#12 | 6.32 | success |
| image#13 | 2.23 | success |
| image#14 | 13.34 | success |
| image#15 | 4 | success |
| image#16 | 3 | success |
| image#17 | 7.21 | success |
| image#18 | 4.12 | success |
| image#19 | 3.16 | success |
| image#20 | 10.44 | success |
| image#21 | 10 | success |
| image#22 | 18.38 | success |
| image#24 | 7 | success |
| image#25 | 3.16 | success |
| image#26 | 7.21 | success |
| image#27 | 5 | success |
| image#28 | 2.82 | success |
| image#29 | 3.60 | success |
| image#30 | 14.86 | success |
| image#32 | 3.16 | success |
| image#33 | 26.47 | success |
| image#35 | 3 | success |
| image#36 | 3 | success |
| image#37 | 1.41 | success |
| image#38 | 1.43 | success |
| image#39 | 5.38 | success |
| image#40 | 13.34 | success |

(Images #23, #31 and #34 were excluded, as explained in Section 4.)

Appendix C: Experimental Results Using the DIARETDB1 Database.

Table 5: Fovea detection performance of our approach using the Euclidean distance for all images of the DIARETDB1 database. If the Euclidean distance is within 34 pixels, then the fovea was successfully detected; otherwise, the fovea detection failed.

| Image | Distance in pixels (error) | Evaluation of Fovea Detection |
| image#01 | 7.21 | success |
| image#02 | 3.16 | success |
| image#03 | 3 | success |
| image#04 | 40.11 | failure (the tolerance interval of 34 pixels was exceeded) |
| image#05 | 16.64 | success |
| image#06 | 2.23 | success |
| image#07 | 9.21 | success |
| image#08 | 0 | success |
| image#09 | 5.65 | success |
| image#10 | 4.12 | success |
| image#11 | 2 | success |
| image#12 | 13.60 | success |
| image#13 | 2 | success |
| image#14 | 1.41 | success |
| image#15 | 6.40 | success |
| image#16 | 49.81 | failure (the tolerance interval of 34 pixels was exceeded) |
| image#17 | 3.16 | success |
| image#18 | 3.60 | success |
| image#19 | 9.05 | success |
| image#20 | 5.83 | success |
| image#21 | 6.40 | success |
| image#22 | 7.61 | success |
| image#23 | 38.01 | failure (the tolerance interval of 34 pixels was exceeded) |
| image#24 | 2.82 | success |
| image#25 | 20.24 | success |
| image#26 | 10.19 | success |
| image#27 | 34.01 | failure (the tolerance interval of 34 pixels was exceeded) |
| image#28 | 2 | success |
| image#29 | 75.00 | failure (the tolerance interval of 34 pixels was exceeded) |
| image#30 | 3.16 | success |
| image#31 | 6.08 | success |
| image#32 | 10.44 | success |
| image#33 | 3 | success |
| image#34 | 1.41 | success |
| image#35 | 2.82 | success |
| image#36 | 5.09 | success |
| image#37 | 9.48 | success |
| image#38 | 2.82 | success |
| image#39 | 3.60 | success |
| image#40 | 5.38 | success |
| image#41 | 5.83 | success |
| image#42 | 29.15 | success |
| image#43 | 4 | success |
| image#44 | 6.83 | success |
| image#45 | 2.23 | success |
| image#46 | 10.63 | success |
| image#47 | 2.23 | success |
| image#48 | 2.10 | success |
| image#49 | 14.95 | success |
| image#50 | 5 | success |
| image#51 | 1 | success |
| image#52 | 2.82 | success |
| image#53 | 19.41 | success |
| image#54 | 3.16 | success |
| image#55 | 2 | success |
| image#56 | 5.83 | success |
| image#57 | 4.12 | success |
| image#58 | 6 | success |
| image#59 | 53.36 | failure (the tolerance interval of 34 pixels was exceeded) |
| image#60 | 4.12 | success |
| image#61 | 2 | success |
| image#62 | 1 | success |
| image#63 | 9.05 | success |
| image#64 | 88.10 | failure (the tolerance interval of 34 pixels was exceeded) |
| image#65 | 27.07 | success |
| image#66 | 1.41 | success |
| image#67 | 14 | success |
| image#68 | 3 | success |
| image#69 | 10.29 | success |
| image#70 | 14.42 | success |
| image#71 | 7.07 | success |
| image#72 | 2.23 | success |
| image#73 | 4.47 | success |
| image#74 | 7.81 | success |
| image#75 | 3 | success |
| image#76 | 1 | success |
| image#77 | 1.10 | success |
| image#78 | 12.07 | success |
| image#79 | 5.97 | success |
| image#80 | 3.16 | success |
| image#81 | 3.82 | success |
| image#82 | 3.17 | success |
| image#83 | 1.41 | success |
| image#85 | 16.76 | success |
| image#86 | 7.07 | success |
| image#87 | 14.03 | success |
| image#88 | 2.23 | success |
| image#89 | 9.21 | success |

There are seven failures in the fovea detection using the DIARETDB1 database. Two of them were caused by the failure to detect the optic disk, and five were caused by large opaque lesions in the fovea vicinity.