The use of radial symmetry to localize retinal landmarks

Computerized Medical Imaging and Graphics 37 (2013) 369–376
Contents lists available at ScienceDirect
Computerized Medical Imaging and Graphics
journal homepage: www.elsevier.com/locate/compmedimag

The use of radial symmetry to localize retinal landmarks

A. Giachetti (a,*), L. Ballerini (b), E. Trucco (b), P.J. Wilson (c)

a Dipartimento di Informatica, Università di Verona, Italy
b VAMPIRE, School of Computing, University of Dundee, UK
c Department of Ophthalmology, Ninewells Hospital, Dundee, UK

Article history: Received 5 December 2012; Received in revised form 19 June 2013; Accepted 20 June 2013

Keywords: Optic disc; Fovea; Fast Radial Symmetry

Abstract

Locating the optic disc center and the fovea in digital fundus images is surprisingly difficult due to the variation range in color and contrast and the possible presence of pathologies creating bright spots or changing the appearance of retinal landmarks. These factors make it difficult to find good templates of optic disc and fovea shape and color for pattern matching. In this paper we propose radial symmetry as the principal cue to locate both optic disc and macula centers. Centers of bright and dark circularly symmetrical regions with arbitrary radii can be found robustly against changes in brightness and contrast by using the Fast Radial Symmetry transform. Detectors based on this transform, coupled with a weak hypothesis on vessel density (the optic disc intersects large vessels while the fovea lies in an avascular region), can provide a fast location of both OD and macula with accuracy similar to or better than state-of-the-art methods. The approach has been chosen as the default technique for fast localization of the two landmarks in the VAMPIRE software suite.

© 2013 Elsevier Ltd. All rights reserved.

* Corresponding author. Tel.: +39 045 8027998. E-mail address: [email protected] (A. Giachetti).
0895-6111/$ - see front matter © 2013 Elsevier Ltd. All rights reserved.
http://dx.doi.org/10.1016/j.compmedimag.2013.06.005

1. Introduction

One of the most important tasks in retinal image processing is the location of the optic disc (OD) and of the macula or, better, its central part, the fovea. The location of such landmarks is a prerequisite for algorithms detecting signs of several retinal diseases. OD and fovea are often identified in digital fundus images with heuristics based on color and shape priors. The significant variability of their appearance, however, makes it difficult to devise template-based detectors working well on poor- or limited-quality images and in the presence of anomalies. Solutions proposed in the literature make heavy use of contextual information (e.g. vascularization). This makes the detector more robust, but can make the localization less accurate. Furthermore, anomalous cases may create problems in training multivariate models of objects and context. A possible alternative to a rigid modelling of the context is to find detectors that capture the peculiarities of the target landmarks better and are less sensitive to noise and disturbances, adding only a "soft" contextual reasoning to cope with anomalous cases.

In this paper we show that radial symmetry is a simple and effective cue to detect the OD and fovea and locate their centers. While it is difficult to find a good template for the fovea, and while the different templates proposed so far for the quick detection of the OD do not outperform a simple bright circle, as shown by Yu et al. [1], we found that the use of a radial symmetry detector can considerably improve robustness and accuracy with respect to other methods. Both the OD and the fovea have variable appearances (shape, contrast with background, color components), but the OD, after vessel removal, is almost always characterized as a radially symmetrical bright spot, and the foveal region is better characterized as a radially symmetric dark region than as the darker part of the image, or a part with a specific shape (e.g. a circle of specific radius). Furthermore, the OD should always be crossed by the largest vessels, while the center of the foveal region is avascular.

We use the Fast Radial Symmetry (henceforth FRS) transform [2] to detect and localize centers of symmetry of dark and bright regions of arbitrary radius, independent of the presence of evident contrast or edges. This allows us to design specific OD and fovea detectors, computing this transform on vessel-inpainted and coarsened images and combining the results with a vascular density estimator.

This work has been carried out within the VAMPIRE project [3]. VAMPIRE (Vessel Assessment and Measurement Platform for Images of the REtina) is an international collaboration of 10 image processing and clinical centers, developing a software suite allowing efficient semi-automatic measurements on the retinal vasculature.

2. Related work

OD and macula detection in digital fundus images is deceptively simple: image quality can vary heavily in clinical samples of even modest size, lesions may create false targets, and even a normal OD is covered by vessels of irregular and variable shape.



Fig. 1. Fast Radial Symmetry: each pixel generates a positively affected pixel $\vec{p}_+$ at distance R along the positive gradient direction, and a negatively affected pixel $\vec{p}_-$. Summing contributions in the "affected pixels" generated by all the image pixels, we obtain functions that are maximal near centers of bright or dark circular shapes with radius R.


Approximate automatic detection of the OD center has been achieved using simple features (e.g. brightness, edges) and template matching, using ad hoc heuristics to avoid false detections [4-6]. The result is used directly to estimate parameters and regions of interest, or to initialize accurate contour segmentation methods [7,8,1]. Simpler methods can fail in noisy cases; to overcome this problem, "robust" OD location techniques using contextual information, mainly related to the geometry of the vascular tree, have been proposed by several authors using local maps [9,10] or vascular models [11-13]. Context can be advantageous when the OD is nearly invisible, but results depend on the quality of the vascular segmentation. Furthermore, such techniques sometimes do not provide good feature localization. Recent works reporting good OD localization and validated on large image sets [8,1] use multiple cues based on local brightness to find the OD center, even based on 1D projections [14], rather than training complex models.

In any case, finding good features to characterize the OD independent of image quality or lesions is still an open issue. Yu et al. [1] showed recently that complex OD templates proposed in the literature do not improve localization performance with respect to convolution with a simple binary circular template.

Fovea localization based on local image properties is even more difficult. Methods reported in the literature exploit more heavily the vasculature and other contextual information. Sinthanayothin et al. [4] used correlation with a template to locate candidate regions, selecting the darkest one. Chutatape [15] segmented the main blood vessels with an active shape model, then fitted a parabola to the result to indirectly locate the fovea. Welfer et al. [16] exploited the OD location to constrain a candidate region where the fovea was identified with morphological processing. A similar approach was used by Yu et al. [17], finding the fovea as the point of lowest matched filter response within a search area determined by the optic disc location. Chin et al. [18] used geometrical priors coupled to vascular density. Niemeijer et al. used a combination of global and local cues [19] and in [13] defined the problem of localizing retinal structures as a regression problem on a large feature space. The usual risk in learning-based or regression methods combining different parameters is the inability to provide reasonable results when the training set does not cover the characteristics of the images to be analyzed. This is a general problem in medical decisions, as it is often not obvious whether the use of complex models involving several parameters and trained classifiers is better than the use of simple heuristics [20].

The substantial number of techniques and heuristics reported, of which the cited papers are but a representative sample, suggests that it is still necessary to investigate improved retinal feature detectors, possibly not influenced by the color and quality variability of fundus images.

In this work we show that very good results for both OD and fovea location can be obtained even on difficult images by making limited and general assumptions on the bright or dark radial symmetry of the searched landmarks and on local vessel density, without training complex models on specific data, and without contour detection.

3. The proposed approach

The algorithm is based on the computation of likelihood values for pixels to be OD or fovea centers, derived from the FRS transform and vessel density estimation. We start by rescaling images to obtain a reference pixel size (reference image). A rough (morphology-based) vessel removal and an accurate inpainting procedure [21] are then performed to obtain an image with vessels and other small dark spots removed that is used for the symmetry detection. This image is further subsampled at the resolution


chosen for the symmetry analysis, which corresponds, for the images tested in the experimental section, to a 200-pixel width, an expected OD size of about 10 pixels, and a pixel size of about 0.05 mm (coarse image). This low resolution is sufficient to obtain the good results described in the experimental section while keeping the computation time small.
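The multi-resolution setup above can be sketched as follows. This is a minimal NumPy sketch, not the authors' Matlab code: the resampling scheme is not specified in the paper, so plain nearest-neighbour sampling is assumed here, and the function names are ours.

```python
import numpy as np

def resize_nn(img, new_h, new_w):
    """Nearest-neighbour resampling (an assumed interpolation scheme;
    the paper does not state which one is used)."""
    rows = (np.arange(new_h) * img.shape[0] / new_h).astype(int)
    cols = (np.arange(new_w) * img.shape[1] / new_w).astype(int)
    return img[rows[:, None], cols]

def coarse_image(reference_img, coarse_width=200):
    """Subsample the (vessel-inpainted) reference image to a ~200-pixel
    width, at which the expected OD size is about 10 pixels."""
    h, w = reference_img.shape
    new_h = max(1, round(h * coarse_width / w))
    return resize_nn(reference_img, new_h, coarse_width)
```

The coarse image keeps the aspect ratio of the reference image, so a 400 x 600 input becomes roughly 133 x 200 pixels.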

3.1. Fast Radial Symmetry transform

The Fast Radial Symmetry (FRS) transform is created with a voting procedure fully described in [2], which can be summarized in our customized version as follows. For each radius $R$ in a defined range and for each pixel location $\vec{p}$, a positively affected pixel $\vec{p}_+$ and a negatively affected pixel $\vec{p}_-$ are obtained by translating $\vec{p}$ along the gradient direction in the positive (dark to bright) and negative (bright to dark) orientations (see Fig. 1):

$$\vec{p}_+(R) = \vec{p} + \mathrm{round}\left(\frac{\nabla I(\vec{p})}{\|\nabla I(\vec{p})\|}\,R\right) \quad (1)$$

$$\vec{p}_-(R) = \vec{p} - \mathrm{round}\left(\frac{\nabla I(\vec{p})}{\|\nabla I(\vec{p})\|}\,R\right), \quad (2)$$

where round indicates rounding to the closest integer. Summing values at these points we build two maps enhancing the centers of bright and dark circularly symmetrical regions. Differently from [2], in fact, we create separate maps for the positive and negative contributions and use them to detect OD and fovea.
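A minimal NumPy sketch of this voting step for one radius (function and variable names are ours; the paper's implementation is in Matlab): each pixel with a non-negligible gradient votes at its positively and negatively affected pixels, accumulating both a counter and the gradient magnitude.

```python
import numpy as np

def frs_vote(img, R):
    """One radius of the FRS voting (sketch of Eqs. 1-2): every pixel
    votes at p+ (R steps along the gradient) and p- (R steps against it),
    accumulating counts (orientation projection O) and gradient
    magnitudes (magnitude projection M) in separate bright/dark maps."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    h, w = img.shape
    maps = {s: (np.zeros((h, w)), np.zeros((h, w))) for s in (+1, -1)}
    ys, xs = np.nonzero(mag > 1e-9)          # ignore flat regions
    uy = gy[ys, xs] / mag[ys, xs]            # unit gradient components
    ux = gx[ys, xs] / mag[ys, xs]
    for sign, (O, M) in maps.items():
        py = np.clip(np.round(ys + sign * R * uy).astype(int), 0, h - 1)
        px = np.clip(np.round(xs + sign * R * ux).astype(int), 0, w - 1)
        np.add.at(O, (py, px), 1.0)          # orientation projection image
        np.add.at(M, (py, px), mag[ys, xs])  # magnitude projection image
    return maps  # maps[+1] -> (O_R^+, M_R^+), maps[-1] -> (O_R^-, M_R^-)
```

For a bright disc of radius R, the positive votes pile up near the disc center while the negative votes scatter on a ring outside it, which is exactly the separation exploited by the OD and fovea detectors.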

To create the first one we scan the image pixels and, for each pixel position $\vec{p}$, we consider only the positively affected pixels $\vec{p}_+$. Incrementing a counter at all these locations we first obtain a map called "orientation projection image" $O_R^+$. Summing at these positions the gradient magnitudes $\|\nabla I(\vec{p})\|$ computed at the originating pixels, we obtain the so-called "magnitude projection image" $M_R^+$. Finally, the bright radial symmetry response map $S_R^+$ is defined as

$$S_R^+ = F_R^+ \cdot A_R, \quad (3)$$

where

$$F_R^+(\vec{p}) = \frac{M_R^+(\vec{p})}{k_R}\left(\frac{\min(O_R^+(\vec{p}),\, k_R)}{k_R}\right)^{\alpha} \quad (4)$$

In the above, $A_R$ is a radius-dependent Gaussian kernel, in our case taken with $\sigma = R/4$; $\alpha$ is the "radial strictness parameter" (higher values create more peaked maps), which we take equal to 2. The clamping parameter $k_R$ is taken equal to 9.9 if $R > 1$ and 8 if $R = 1$, as suggested in [2]. To detect symmetries at varying radii, Loy and Zelinsky [2] summed contributions at various values of $R$, showing that if radii are sampled densely enough (every 2 pixels according to [1]) in a given range, all the symmetric structures within that range are captured. This means that a detector for an approximately symmetric structure like the OD can be built computing $S_R^+$ in a


radius range around the expected OD size. In our downsampled, grayscale-converted, inpainted image we used the range 7-12 pixels and finally normalized the result, dividing by its maximum, to create a "symmetry based OD center likelihood" $OD_s(\vec{p})$:

$$OD_s(\vec{p}) = \frac{\sum_{R=7}^{12} S_R^+(\vec{p})}{\max_{\vec{x} \in I}\left(\sum_{R=7}^{12} S_R^+(\vec{x})\right)} \quad (5)$$
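Assuming the orientation and magnitude projection images $O_R^+$ and $M_R^+$ have already been accumulated for each radius, Eqs. (3)-(5) can be sketched as below. The clamping, $\alpha = 2$ and $k_R = 9.9$ follow the text; the radius-dependent Gaussian $A_R$ is approximated here by a simple separable kernel, and the function names are ours.

```python
import numpy as np

def od_symmetry_likelihood(O_maps, M_maps, alpha=2.0, k_R=9.9):
    """Sketch of Eqs. (3)-(5): given the orientation (O_R^+) and magnitude
    (M_R^+) projection images for R = 7..12, form F_R^+, smooth with a
    radius-dependent Gaussian A_R (sigma = R/4), sum over R and normalize
    by the global maximum to obtain the OD center likelihood ODs."""
    def gaussian_blur(img, sigma):
        r = max(1, int(round(3 * sigma)))
        x = np.arange(-r, r + 1)
        k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
        k /= k.sum()
        tmp = np.apply_along_axis(np.convolve, 0, img, k, mode='same')
        return np.apply_along_axis(np.convolve, 1, tmp, k, mode='same')

    total = np.zeros_like(O_maps[0], dtype=float)
    for R, (O, M) in enumerate(zip(O_maps, M_maps), start=7):
        F = (M / k_R) * (np.minimum(O, k_R) / k_R) ** alpha  # Eq. (4)
        total += gaussian_blur(F, R / 4.0)                   # S_R^+ = F_R^+ . A_R
    return total / total.max()                               # Eq. (5)
```

The dark (fovea) likelihood $FO_s$ of Eq. (6) follows the same pattern with the negative maps and the wider radius range 3-12.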

For each pixel location we also store, in a separate map, the value of $R$ giving the largest contribution, which allows us to obtain a rough estimate of the OD radius. This value is used in the current version of the VAMPIRE software tool to automatically initialize a module performing an accurate OD contour segmentation, using a circle positioned at the detected center with this roughly estimated local radius as the initial guess.

A symmetry-dependent fovea center likelihood function is similarly obtained by creating a dark radial symmetry response map $S_R^-$ using the same quantities summed at the negatively affected pixels $\vec{p}_-$. The only difference is that, the fovea being radially symmetric also for smaller radii (near the center), we sum contributions in the range 3-12 pixels, to obtain our "fovea center symmetry-based likelihood" $FO_s(\vec{p})$:

$$FO_s(\vec{p}) = \frac{\left|\sum_{R=3}^{12} S_R^-(\vec{p})\right|}{\max_{\vec{x} \in I}\left(\left|\sum_{R=3}^{12} S_R^-(\vec{x})\right|\right)} \quad (6)$$

3.2. Vessel density

In our method we use vessel density only to discard false positives for the OD related to bright spots in avascular regions and false positives for the fovea related to major vessels. To estimate vessel density we first compute a vesselness map on the reference image. Several methods could be applied for this task: the morphological processing used in the inpainting step is a simple example; Frangi's vesselness [22] or other creaseness functions [23] could be used as well. VAMPIRE also has its own accurate vessel segmentation module based on matched filters [24]. We tested some of these methods (e.g. morphological, Frangi's vesselness, VAMPIRE module) obtaining similar results. A novel algorithm we tested gave, however, slightly better results due to its good specificity and was therefore adopted for this task. The basic idea of this algorithm is to build a filter giving a high response for dark lines against a bright background within a specific thickness range, and a null response in the case of step edges (differently from usual vesselness measures).

In our filter, for each pixel, for a set of 4 orientations $\vec{o}_j$, $j = 1 \ldots 4$, and a set of given vessel radii $r_i \in [r_{min}, r_{max}]$, we compute two quantities on the original green channel of the image $G(\vec{x})$:

$$S(\vec{x}, r_i, \vec{o}_j) = G(\vec{x} + r_i \vec{o}_j) - 2G(\vec{x}) + G(\vec{x} - r_i \vec{o}_j) \quad (7)$$

$$D(\vec{x}, r_i, \vec{o}_j) = 2\,\left|G(\vec{x} + r_i \vec{o}_j) - G(\vec{x} - r_i \vec{o}_j)\right| \quad (8)$$

The "vesselness" (i.e. the likelihood of a vessel pixel at $\vec{x}$) is defined as the largest value over $i$, $j$ of the quantity

$$\frac{S(\vec{x}, r_i, \vec{o}_j) - \min\left(D(\vec{x}, r_i, \vec{o}_j),\, S(\vec{x}, r_i, \vec{o}_j)\right)}{r_i}, \quad (9)$$

which represents an enhancement of dark lines with a symmetrical appearance. It is easy to see that near the step edges corresponding to the borders of large contrasting regions this function is zero-valued and does not result in false positives.
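Eqs. (7)-(9) can be sketched as follows in NumPy (our own names; border handling is not specified in the paper, so wrap-around shifts are assumed here, and the diagonal offsets approximate $r\vec{o}_j$ up to a $\sqrt{2}$ factor):

```python
import numpy as np

def line_vesselness(G, radii=(5, 8, 11)):
    """Sketch of the line filter of Eqs. (7)-(9): for 4 orientations and a
    few vessel radii, S is a second difference that is high at the center
    of a dark line, D measures asymmetry, and the response
    (S - min(D, S)) / r vanishes on step edges. The final vesselness is
    the largest response over radii and orientations."""
    G = G.astype(float)
    best = np.zeros_like(G)
    dirs = ((0, 1), (1, 0), (1, 1), (1, -1))   # 4 sampling orientations
    for r in radii:
        for dy, dx in dirs:
            # G evaluated at x + r*o and x - r*o (np.roll wraps at the
            # borders, acceptable for a sketch)
            Gp = np.roll(G, (-r * dy, -r * dx), axis=(0, 1))
            Gm = np.roll(G, (r * dy, r * dx), axis=(0, 1))
            S = Gp - 2.0 * G + Gm                # Eq. (7)
            D = 2.0 * np.abs(Gp - Gm)            # Eq. (8)
            best = np.maximum(best, (S - np.minimum(D, S)) / r)  # Eq. (9)
    return best
```

On a thin dark line both samples $G(\vec{x} \pm r\vec{o}_j)$ are bright, so $D \approx 0$ and the response is $S/r > 0$; on a step edge $D \geq S$, so the subtraction cancels the response exactly as the text argues.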

We perform the vessel extraction on the green channel of the "reference image" described above (i.e. resized to have a standard pixel dimension). At this resolution we consider vessel radii between 5 and 11 pixels, corresponding to the size of the major retinal vessels. To remove noise, the map is then thresholded, setting to zero all but the 3% of pixels with the highest values.

The vesselness map is then downsampled at the resolution of the coarse image used for the symmetry detection, convolved with a disc-shaped kernel with a 10-pixel radius (close to the OD size at that scale), saturated at half the maximal value and rescaled to the range [0, 1] to obtain the final vessel kernel density estimation, $V(\vec{p})$.

We expect that the OD is located where $V(\vec{p})$ is high, and the fovea where $\bar{V}(\vec{p}) = 1 - V(\vec{p})$ is high.
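A rough NumPy sketch of this density estimation (the exact downsampling and convolution routines used in the paper are not specified; the names and nearest-neighbour downsampling are our assumptions):

```python
import numpy as np

def vessel_density(vesselness, coarse_shape, keep_frac=0.03, disc_r=10):
    """Sketch of the vessel kernel density estimation: keep the top 3% of
    vesselness values, downsample to the coarse resolution, convolve with
    a disc of radius ~10 px (about the OD size at that scale), saturate
    at half the maximum and rescale to [0, 1]."""
    v = vesselness.astype(float).copy()
    thr = np.quantile(v, 1.0 - keep_frac)
    v[v < thr] = 0.0                                # keep top 3% of pixels
    # nearest-neighbour downsample to the coarse grid
    rows = np.arange(coarse_shape[0]) * v.shape[0] // coarse_shape[0]
    cols = np.arange(coarse_shape[1]) * v.shape[1] // coarse_shape[1]
    v = v[rows[:, None], cols]
    # convolve with a disc-shaped kernel via shifted summation
    yy, xx = np.mgrid[-disc_r:disc_r + 1, -disc_r:disc_r + 1]
    disc = (yy ** 2 + xx ** 2 <= disc_r ** 2)
    padded = np.pad(v, disc_r, mode='constant')
    out = np.zeros_like(v)
    for dy in range(2 * disc_r + 1):
        for dx in range(2 * disc_r + 1):
            if disc[dy, dx]:
                out += padded[dy:dy + v.shape[0], dx:dx + v.shape[1]]
    if out.max() > 0:
        out = np.minimum(out, out.max() / 2.0)      # saturate at half max
        out = out / out.max()                       # rescale to [0, 1]
    return out
```

The saturation makes the density constant and maximal around the extracted vessels, as described for the blurring and clamping step below.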

3.3. OD and fovea location

Given the symmetry maps and the vessel density, we simply combine them to obtain corrected likelihood functions for OD and fovea location. Specifically, we define two functions that we assume to be proportional to the probabilities for each pixel to be the center of the OD or of the fovea:

$$p_{OD}(x, y) = OD_s(x, y) \cdot \max(0.1,\, V(x, y)) \quad (10)$$

$$p_F(x, y) = FO_s(x, y) \cdot \bar{V}(x, y) \quad (11)$$

where the correction to the vessel density in the first formula (a nonzero small value corresponding to zero vessel density) is introduced to take into account possible cases of poor vessel characterization. The best candidate OD and fovea centers are the locations maximizing $p_{OD}$ and $p_F$:

$$x_{OD}, y_{OD} = \operatorname{argmax}(p_{OD}(x, y)) \quad (12)$$

$$x_F, y_F = \operatorname{argmax}(p_F(x, y)) \quad (13)$$
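The combination step of Eqs. (10)-(13) reduces to a few array operations; a minimal sketch (function name ours), with `V` the vessel density in [0, 1]:

```python
import numpy as np

def locate_centers(ODs, FOs, V, floor=0.1):
    """Sketch of Eqs. (10)-(13): weight the symmetry likelihoods by the
    vessel density (the OD should lie on vessels, the fovea in an
    avascular region) and take the argmax of each corrected map."""
    p_od = ODs * np.maximum(floor, V)   # Eq. (10), floored to survive
                                        # poor vessel characterization
    p_f = FOs * (1.0 - V)               # Eq. (11), inverted density
    od_yx = np.unravel_index(np.argmax(p_od), p_od.shape)  # Eq. (12)
    f_yx = np.unravel_index(np.argmax(p_f), p_f.shape)     # Eq. (13)
    return od_yx, f_yx
```

Note that the 0.1 floor means a strong symmetry peak can still win even where no vessels were detected, which is the intended behavior for poorly segmented images.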

To deal with anomalous cases we can also extract different local maxima of the two functions, selecting the best one with more complex heuristics based, for example, on contextual information and relative positions. In practice, we perform non-maxima suppression on the probability maps, storing the maxima with the largest values. We also compute a reliability value for the best candidates, given by the ratio between the maximum and the probability value of the largest among the other local maxima.

In our experiments, however, we did not apply corrections to the previously described localization estimates based on this reliability value, also to demonstrate the stability and reliability of the symmetry+vesselness cues, which failed in a very small percentage of the tested images.
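The candidate extraction and reliability score can be sketched as follows (window size and names are our assumptions; the paper does not give the suppression neighborhood):

```python
import numpy as np

def candidates_with_reliability(p, win=3):
    """Sketch of the candidate selection: non-maxima suppression on the
    probability map p, candidates sorted by value, and a reliability
    score given by the ratio between the best and the second-best
    local maximum."""
    h, w = p.shape
    maxima = []
    for y in range(win, h - win):
        for x in range(win, w - win):
            patch = p[y - win:y + win + 1, x - win:x + win + 1]
            if p[y, x] > 0 and p[y, x] == patch.max():
                maxima.append((p[y, x], y, x))
    maxima.sort(reverse=True)                      # best candidate first
    if len(maxima) > 1:
        reliability = maxima[0][0] / maxima[1][0]  # ratio of top two peaks
    else:
        reliability = float('inf')                 # single unambiguous peak
    return maxima, reliability
```

A reliability well above 1 signals an unambiguous detection; values near 1 flag the anomalous cases where a smarter selection among candidates would be worthwhile.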

4. Experimental results

Fig. 2 shows qualitatively the advantages of using the radial symmetry detector rather than classical template matching. The OD and fovea center symmetry-based likelihoods, in fact, present well defined peaks (B and D), while the convolutions of the same grayscale image originating the maps, and of its negative, with a binary circular template (A and C) are blurred. Maxima of the symmetry functions are close to the true centers of the retinal features, while maxima of the convolution maps are often less accurately positioned or wrong (Fig. 2E). The symmetry detector therefore seems to improve the robustness and the accuracy of OD and fovea localization with respect to a simple binary circular template while, as stated in [1], more complex OD templates proposed so far in the literature have not shown the ability to outperform the circular template for OD detection. This fact is confirmed by the quantitative experiments presented in the following subsections.

The principal vessels extracted with the procedure described in Section 3.2 are similar to those obtained with other techniques (see Fig. 3B). A small advantage of the method used is, however, related


Fig. 2. Symmetry based detectors perform better than circular templates. (A) Output of the convolution of a circular bright template of the expected OD size over the image (higher values in darker grays). (B) Symmetry dependent OD center likelihood ODs computed for the same image: it clearly defines a more peaked local maximum. (C) Output of the convolution of a circular dark macula template over the image. (D) Symmetry dependent fovea center likelihood FOs. (E) The corresponding features detected as maxima (circles for symmetry based, squares for template based). Symmetry based OD center estimation is more accurate; template based macula estimation is completely wrong.


Fig. 4. Functions used to determine the position of OD and fovea. (A) Grayscale inpainted image. (B) OD probability obtained by multiplying vessel density and bright radial symmetry. (C) Fovea center probability obtained from inverted vessel density and dark radial symmetry. (D) Detected OD (dark dot, blue in the web version of the article) and fovea centers (bright dot, yellow in the online version).


to its specificity: no false vessels are extracted at the border of bright spots, as often found with other filters. This is particularly important to avoid false OD detections due to large intensities where no vessels are present. The subsequent blurring and clamping create a final vessel density estimation that is constant and maximal around the extracted vessels.

An example of the OD and fovea center likelihoods obtained by combining symmetry maps and vessel density is shown in Fig. 4. Here it is possible to see that the symmetry map avoids the false detection of a dark region as the macula without the use of more complex heuristics or multiple hypotheses (which could be, however, easily added, improving the robustness of the system).

Our OD and fovea detectors have been implemented in Matlab, as is the entire VAMPIRE software suite. The code is not optimized, but the computation time does not seem a problem for practical use: it takes only about 5 s to estimate the OD and macula centers on a Dell XPS 17 L701X laptop (Intel Core i7 Q740 CPU).

A quantitative analysis of the performance of the OD and macula localization has finally been carried out on a very large data set available online and on images previously used in the VAMPIRE framework, and is detailed in the following subsections.

4.1. Optic disc location

The localization of the OD center has been validated on images kindly provided by the Messidor program partners (http://www.messidor.crihan.fr). The image database is composed of 1200 images, divided into three groups of 400 images each, acquired in different centers (Service Ophtalmologie Lariboisière, CHU de St Etienne, LaTIM - CHU de Brest) and having different resolutions (2304 x 1536, 2240 x 1488, and 1440 x 960). Images

Fig. 3. Vessel density function estimation. (A) Green channel image. (B) Cleaned vessel enhancement filter output. (C) Kernel smoothed and saturated vascular density.


are annotated with grades of retinopathy and risk of macular edema derived from the presence of microaneurysms, hemorrhages, neovascularization, and exudates. On these images we tested whether the method correctly detects the feature and what the error in its localization is. To compute an error measure not depending on the pixel size we define a "normalized" localization error, estimating the distance between the detected OD center and the manually annotated one (center of the OD segmentation mask) and then dividing the result by the "expected" OD radius for that image, estimated from the image acquisition parameters [17]. As done in most papers, e.g. [13,17,1], we then consider the OD detection successful if the distance of the estimated OD center from the annotated one is less than the expected OD radius (half the expected OD diameter), i.e. if the normalized localization error is less than 1.
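The evaluation metric is simple enough to state in a few lines (a sketch with our own names; the expected OD radius comes from the image acquisition parameters as in [17]):

```python
import numpy as np

def normalized_od_error(detected, annotated, expected_radius_px):
    """Euclidean distance between detected and annotated OD centers,
    divided by the expected OD radius for that image; the detection is
    counted as successful when this normalized error is below 1."""
    d = float(np.hypot(detected[0] - annotated[0],
                       detected[1] - annotated[1]))
    err = d / expected_radius_px
    return err, err < 1.0
```

For example, a detected center 5 pixels from the annotation with an expected radius of 10 pixels gives a normalized error of 0.5 and counts as a success.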

The success rate obtained with our technique (Tables 1 and 2) is higher than those obtained with the other recent techniques tested on the same database (less than one third of the failures of the best compared method). The influence of the retinopathy and risk of macular edema grades also seems extremely limited.

The performance of our technique also seems good compared with other, more complex methods reported in the literature. For example, and remembering that different data sets were used, the results reported by Niemeijer et al. [13] are success rates in OD detection (evaluated with the same method applied here) of 99.4%, 99.4% and 93%, and a localization error of about 21%, 31% and 25%



Table 1
Optic disc center detection rates and normalized localization errors (see text) obtained for the different retinopathy grades annotated on the whole Messidor dataset and compared with previous works.

Retinopathy grade              0        1        2        3        Any
Avg. normalized error (our)    0.131    0.106    0.113    0.125    0.118
Success rate (our)             99.62%   100.00%  100.00%  99.23%   99.66%
Success rate [25]              99.44%   99.35%   100.00%  96.15%   98.91%
Success rate [17]              98.71%   99.34%   99.19%   96.16%   98.33%

Table 2
Optic disc center detection rates and normalized localization errors (see text) obtained for the different grades of risk of macular edema, annotated on the whole Messidor dataset and compared with previous works.

Risk of macular edema          0        1        2        Any
Avg. normalized error (our)    0.124    0.113    0.122    0.118
Success rate (our)             99.69%   100%     99.35%   99.66%
Success rate [25]              99.89%   100%     94.80%   98.91%
Success rate [17]              98.39%   100%     97.26%   98.33%

of the expected OD size for the three datasets tested (with only the third including abnormalities), while we have 100% success and an average localization error of less than 17% of the expected OD size. No images have been removed from the annotated Messidor image groups, including cases with pathologies (e.g. exudates, microaneurysms).

Fig. 6 shows a cumulative histogram with the percentage of "normalized errors" falling below increasing thresholds. The majority of the detected centers are rather close to the manually placed ones, and this is the case also for pathological cases, as shown in Fig. 5.

Fig. 5. Examples of correct OD center location (dark dots, blue in the web version of the article) in the case of differently colored images, brighter anomalous spots or poor signal. Detected fovea centers are also represented (bright dots, yellow in the web version).

Fig. 6. Cumulative histogram showing percentages of estimated OD centers within different fractions of the "expected" OD radius from the true center. More than half the detected centers are within 10% of the expected OD radius and approximately 90% are within 20%.

Fig. 7 shows an example of OD localization failure due to the large bright area around the OD. In such cases it is clear that the use of a different cue, like fuzzy convergence of blood vessels [9], would result in a correct detection, even if a similar feature alone fails in many more cases where the symmetry cue works well (see Table 3).

We plan, therefore, to improve the method with a smart selection of different detectors for critical cases, based on the evaluation of reliability values for each detector (we introduced a similar value for our symmetry based detector in Section 3). A reasonable method of this kind should, however, be trained on a large number of highly anomalous data.

Fig. 7. Example of OD center detection error. Detected point (darker dot, blue in the web version of the article) is not within one OD radius from the manually annotated one (lighter dot, red in the web version), due to a large, bright lesion surrounding the OD.


Table 3
The symmetry cue is more effective than other simple features for OD localization. Here we compare the results obtained with symmetry and vessel density with the maximum OD detection success rates we reached with combinations of simple features proposed in the literature (bright template convolution coupled with vessel density, and fuzzy vessel convergence coupled with inpainted image intensity).

                        Vess. convergence+intensity   Template+vess. density   Symmetry+vess. density
Avg. normalized error   0.629                         0.209                    0.118
Success rate            91.1%                         98.0%                    99.7%

The symmetry cue coupled with rough vessel density seems in any case a better basic detector than commonly used OD location features. Table 3 shows a comparison of the results obtained with our system with those that we would have obtained on the Messidor data by replacing the symmetry cue with a bright circular template or a fuzzy convergence measure coupled with image intensity.

Table 5
Fovea center detection rates and center position accuracy for different retinopathy grades obtained on the whole Messidor dataset, compared with previous works.

Retinopathy grade              0       1       2       3       Any
Avg. normalized error (our)    0.130   0.119   0.189   0.271   0.168
Avg. normalized error [27]     0.183   0.188   0.221   0.357   0.225
Success rate (our)             99.8    100     99.2    98.7    99.1
Success rate [27]              99.2    98.7    99.1    95.4    98.2
Success rate [17]              97.8    99.3    96.0    85.8    95.0

Table 6
Fovea center detection rates and center position accuracy obtained on the whole Messidor dataset for different annotated grades of risk of macular edema, compared with previous works.

Risk of macular edema          0       1       2       Any
Avg. normalized error (our)    0.156   0.123   0.277   0.168
Avg. normalized error [27]     0.201   0.235   0.393   0.225
Success rate (our)             99.4    100     96.9    99.1

…abnormalities), with localization errors of about 72%, 40% and 63% of the expected OD size, relevantly higher than ours (including pathologies).

A further analysis has been performed on the first 300 imagesf the Messidor database (Annotation Base 11, 12, 13, Service Oph-almologie Lariboisire). These images have been also annotated byhree experts using VAMPIRE software annotation tools. Images are240 × 1488, 8 bits per color plane and were acquired using a 3-CD camera in a Topcon TRC NW6 non-mydriatic retinograph with

45◦. field of view. Experts also annotated a “quality” (good, inter-ediate, bad) score indicating the perceived quality of the optic

isc boundaries: hard images are affected by peripapillary atrophy,cleral rim, myelinated nerve fibers or other pathologies. The qual-ty score annotated by the experts seems more correlated with theutomatic system (and human) accuracy than the “retinopathy” orrisk of macular edema” grades (see Table 4).

The inter-observer error is relevantly lower than the localiza-ion error given by our method, but this is reasonable, being the

anual annotations used the centroids of accurately extracted ODorders, while our localization algorithm based on rough approxi-ations. The inter-observer error between our experts’ annotations

nd Huelva experts’ annotations is similar to inter-observer errorsf our experts (average normalized error 0.04).

In any case, the robustness of the approximate OD center loca-ion, together with the estimate of the OD radius provided by theRS transform, makes the method suitable to initialize an accurateegmentation of the OD contour, as it is actually done in the currentAMPIRE software package.

.2. Fovea center location

We obtained single-expert annotations of the fovea center fromhe public database at the University of Huelva [26] for 1136 imagesf the Messidor dataset. We could then analyze the performance ofur macula detector on the same data used for the OD location, com-aring the results with the reference values of the Huelva system27] and with results reported in [17] (where all the 1200 imagesere annotated).

Also for the fovea center we can estimate a “normalized local-

zation error” by dividing the distance between the computed andhe annotated fovea center positions by the expected OD radiusor the same image. In the literature fovea is usually considered

able 4verage normalized (e.g. divided by the expected OD size) errors for the threexperts’ annotations, compared with inter-observer average “normalized” errors.

Easy Intermediate Hard Average

Avg. normalized error 0.09 0.13 0.15 0.12Avg. inter-observer error 0.03 0.04 0.06 0.04

Success rate [27] 98.8 98.6 95.8 98.2Success rate [17] 96.2 98.4 96.1 95

successfully detected when the distance of automatically computedcenter from the manually annotated one is lower than the expectedOD radius estimated for the same image, e.g. the normalized local-ization error is larger than 1.

Using the set of images and annotations obtained from Huelva,we obtained a success rate of 99.1% and an average localizationerror of less than 17% of the expected OD radius. The results arerelevantly better than those obtained by the reference method [27](Tables 5 and 6). Our results seem also to be relevantly better thanthose reported in [17] even a few more annotated fovea centerswere available in that work.

It is interesting to note that the effect of retinopathy on the per-formances seems negligible for low grades and very small for highgrades. Fig. 8 shows a cumulative histogram with the percentageof distances falling below different thresholds. It is possible to seethat more than 90% of the detected fovea centers are within 1/10of the expected OD size.

Using the same success rule, but using different data and anno-tations not available for our work, Niemejier et al. [13], with amulti-cue regression method, obtained success rates equal to 93.4%,96.8% and 89% for the three datasets (two of which not includ-

Fig. 8. Cumulative histogram showing percentages of estimated fovea centers of theMessidor images within different “normalized localization error” thresholds. Morethan 80% of the centers have an average normalized localization error lower than0.2.
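The evaluation protocol used throughout these comparisons can be summarized in a short sketch (the function names and the example values are hypothetical, not from the paper):

```python
import numpy as np

def normalized_error(detected, annotated, expected_od_radius):
    # Distance between the detected and ground-truth centers, expressed
    # in units of the expected OD radius for the image.
    dx = detected[0] - annotated[0]
    dy = detected[1] - annotated[1]
    return np.hypot(dx, dy) / expected_od_radius

def evaluate(detections, annotations, expected_od_radius):
    # A landmark counts as successfully detected when its normalized
    # localization error is smaller than 1, i.e. the detected center
    # lies within one expected OD radius of the annotation.
    errs = [normalized_error(d, a, expected_od_radius)
            for d, a in zip(detections, annotations)]
    success_rate = 100.0 * sum(e < 1.0 for e in errs) / len(errs)
    return float(np.mean(errs)), success_rate
```

For example, with an expected OD radius of 50 pixels, a detection 5 pixels from the annotation has a normalized error of 0.1 and counts as a success, while one 270 pixels away (error 5.4) counts as a failure.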


Table 7
Average normalized distances of the computed fovea centers from the manually placed ones, for "good", "intermediate" and "difficult" images of the Tayside diabetic retinopathy screening data, and fovea detection success rates compared with [18]. Distances are normalized by dividing by the average expected OD size.

                          Good     Interm.   Diff.    Avg.

Avg. dist. (normalized)   0.198    0.355     0.655    0.317
Success rate              100.0%   96.7%     90.0%    97.5%
Success rate [18]         92.4%    80%       70%      85.3%

Fig. 9. Example of fovea center detection error. The detected point (white dot, yellow in the web version of the article) is not within 1/2 of the expected OD size from the manually annotated one (darker dot, red in the online version) due to poor landmark characterization: the annotated fovea is not located in a dark symmetrical part. The darkest dot (blue in the web version) is the detected OD center.

The detection of the fovea center has also been compared with a previous method applied in the VAMPIRE framework. In this case the test data set was composed of 116 images acquired within the Tayside diabetic screening programme and used to validate the algorithm presented in [18]. Images are 3504 × 2336 pixels and type-2 field, i.e. centered on the macula. The images were divided into three quality classes: good (66), medium (30), and difficult (20) by a trained observer who also annotated the ground truth fovea center. Quality was determined by the visibility and integrity of the macula region. The expected average OD diameter for these images was 382 pixels at the original resolution. Using the same rule to consider the fovea centers correctly detected, our novel algorithm, despite a really challenging dataset, largely outperforms the previous method (see Table 7). The majority of the detected centers are sufficiently close to the manually placed ones; failures occur when the fovea is actually not visible in the images (Fig. 9). In this case additional constraints related to the relative position of OD and fovea candidates could, for example, fix the results.
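Such a relative-position constraint could, for instance, exploit the common anatomical prior that the fovea lies roughly 2.5 OD diameters from the OD center. A hypothetical re-scoring of fovea candidates along these lines (names, weights and the Gaussian prior are illustrative, not the authors' formulation) might look like:

```python
import numpy as np

def rescore_fovea_candidates(candidates, od_center, expected_od_diameter,
                             expected_dist_dd=2.5, sigma_dd=0.5):
    # Re-weight fovea candidate scores by how well their distance from the
    # detected OD center matches an anatomical prior of ~2.5 OD diameters,
    # modeled here as a Gaussian in disc-diameter units.
    # candidates: list of ((x, y), symmetry_score) pairs.
    rescored = []
    for (x, y), score in candidates:
        d = np.hypot(x - od_center[0], y - od_center[1]) / expected_od_diameter
        prior = np.exp(-0.5 * ((d - expected_dist_dd) / sigma_dd) ** 2)
        rescored.append(((x, y), score * prior))
    return max(rescored, key=lambda c: c[1])[0]
```

With such a prior, a weak but anatomically plausible candidate can win over a stronger dark spot located implausibly close to (or far from) the OD.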

5. Conclusions

We presented a simple, fast and effective method to locate both the OD center and the fovea center in digital fundus images, based on a combination of vessel density and radial symmetry at selected radii. The method provided good results (similar to or better than those reported in state-of-the-art papers) on large image databases and also on images of pathological subjects or images rated difficult by experts. This means that the simple detectors proposed capture well the discriminative and stable features of the structures analyzed, and that a very simple heuristic based on simple detectors can outperform techniques exploiting a huge amount of local and contextual information for the retinal landmark localization task. The method proposed has been adopted in the VAMPIRE suite to initialize OD contour extraction and to locate the fovea.
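Schematically (this is a simplified sketch, not the authors' exact formulation), the combination of cues described above amounts to favoring bright radially symmetric points with high vessel density for the OD, and dark symmetric points in avascular regions for the fovea:

```python
import numpy as np

def locate_landmarks(bright_symmetry, dark_symmetry, vessel_density):
    # All inputs are 2-D maps normalized to [0, 1]. The OD should be a
    # bright symmetric region crossed by large vessels; the fovea a dark
    # symmetric region in an avascular area.
    od_score = bright_symmetry * vessel_density
    fovea_score = dark_symmetry * (1.0 - vessel_density)
    od = np.unravel_index(int(np.argmax(od_score)), od_score.shape)
    fovea = np.unravel_index(int(np.argmax(fovea_score)), fovea_score.shape)
    return od, fovea
```

Multiplying the maps makes each cue act as a weak veto: a bright symmetric lesion in an avascular region scores low as an OD candidate, and a dark spot crossed by vessels scores low as a fovea candidate.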

A strategy we will follow to improve the OD and fovea localization further is to implement new algorithms to robustly detect the specific retinal pathologies that may cause the symmetry-based method to fail, subsequently modifying the OD and macula detectors only when those anomalies are found.

Acknowledgements

Thanks to D. Knight, L.L. Tang and J. Welch (Department of Ophthalmology, Ninewells Hospital, Dundee, UK) for providing medical support for the VAMPIRE development, and to the people at the University of Huelva for providing ground truth annotations.

References

[1] Yu H, Barriga E, Agurto C, Echegaray S, Pattichis M, Bauman W, et al. Fast localization and segmentation of optic disk in retinal images using directional matched filtering and level sets. IEEE Trans Inf Technol Biomed 2012;16(4):644–57, doi: 10.1109/TITB.2012.2198668.

[2] Loy G, Zelinsky A. Fast radial symmetry for detecting points of interest. IEEE Trans Pattern Anal Mach Intell 2003;25:959–73, doi: 10.1109/TPAMI.2003.1217601.

[3] Perez-Rovira A, MacGillivray T, Trucco E, Chin KS, Zutis K, Lupascu C, et al. VAMPIRE: vessel assessment and measurement platform for images of the retina. In: Proceedings of the IEEE Engineering in Medicine and Biology Society. IEEE; 2011. p. 3391–4.

[4] Sinthanayothin C, Boyce J, Cook H, Williamson T. Automated localisation of the optic disc, fovea, and retinal blood vessels from digital colour fundus images. Br J Ophthalmol 1999;83(8):902–10, doi: 10.1136/bjo.83.8.902.

[5] Lalonde M, Beaulieu M, Gagnon L. Fast and robust optic disc detection using pyramidal decomposition and Hausdorff-based template matching. IEEE Trans Med Imaging 2001;20(11):1193–200.

[6] Sekhar S, Al-Nuaimy W, Nandi AK. Automated localisation of retinal optic disk using Hough transform. In: ISBI. IEEE; 2008. p. 1577–80.

[7] Xu J, Chutatape O, Chew P. Automated optic disk boundary detection by modified active contour model. IEEE Trans Biomed Eng 2007;54(3):473–82, doi: 10.1109/TBME.2006.888831.

[8] Aquino A, Gegundez-Arias M, Marin D. Detecting the optic disc boundary in digital fundus images using morphological, edge detection, and feature extraction techniques. IEEE Trans Med Imaging 2010;29(11):1860–9, doi: 10.1109/TMI.2010.2053042.

[9] Hoover A, Goldbaum M. Locating the optic nerve in a retinal image using the fuzzy convergence of the blood vessels. IEEE Trans Med Imaging 2003;22(8):951–8, doi: 10.1109/TMI.2003.815900.

[10] Rangayyan R, Zhu X, Ayres F, Ells A. Detection of the optic nerve head in fundus images of the retina with Gabor filters and phase portrait analysis. J Digital Imaging 2010;23(4):438–53, doi: 10.1007/s10278-009-9261-1.

[11] Foracchia M, Grisan E, Ruggeri A. Detection of optic disc in retinal images by means of a geometrical model of vessel structure. IEEE Trans Med Imaging 2004;23(10):1189–95.

[12] Perez-Rovira A, Trucco E. Robust optic disc location via combination of weak detectors. In: Conference Proceedings of the IEEE Engineering in Medicine and Biology Society; 2008. p. 3542–5.

[13] Niemeijer M, Abràmoff M, van Ginneken B. Fast detection of the optic disc and fovea in color fundus photographs. Med Image Anal 2009;13(6):859–70, doi: 10.1016/j.media.2009.08.003. Special Section on Computational Biomechanics for Medicine.

[14] Mahfouz A, Fahmy A. Fast localization of the optic disc using projection of image features. IEEE Trans Image Process 2010;19(12):3285–9, doi: 10.1109/TIP.2010.2052280.

[15] Chutatape O. Fundus foveal localization based on vessel model. In: Proceedings of EMBS'06, 28th Annual International Conference of the IEEE; 2006. p. 4440–4.

[16] Welfer D, Scharcanski J, Marinho D. Fovea center detection based on the retina anatomy and mathematical morphology. Comput Methods Programs Biomed 2011;104(3):397–409.

[17] Yu H, Barriga S, Agurto C, Echegaray S, Pattichis M, Zamora G, et al. Fast localization of optic disc and fovea in retinal images for eye disease screening. In: Proceedings of SPIE 7963, Medical Imaging 2011; 2011. p. 796317, doi: 10.1117/12.878145.

[18] Chin KS, Trucco E, Tan L, Wilson PJ. Automatic fovea location in retinal images using anatomical priors and vessel density. Pattern Recognit Lett 2013, doi: 10.1016/j.patrec.2013.03.016.

[19] Niemeijer M, Abràmoff M, van Ginneken B. Segmentation of the optic disc, macula and vascular arch in fundus photographs. IEEE Trans Med Imaging 2007;26(1):116–27, doi: 10.1109/TMI.2006.885336.

[20] Gigerenzer G. Gut feelings: the intelligence of the unconscious. London: Penguin; 2007.

[21] Giachetti A, Chin K, Trucco E, Cobb C, Wilson P. Multiresolution localization and segmentation of the optic disc in fundus images using inpainted background and vessel information. In: ICIP; 2011. p. 2145–8.

[22] Frangi A, Niessen W, Vincken K, Viergever M. Multiscale vessel enhancement filtering. In: Wells W, Colchester A, Delp S, editors. Medical image computing and computer-assisted intervention – MICCAI'98; vol. 1496 of Lecture Notes in Computer Science. Berlin/Heidelberg: Springer; 1998. p. 130–7, doi: 10.1007/BFb0056195.

[23] López AM, Lloret D, Serrat J, Villanueva JJ. Multilocal creaseness based on the level-set extrinsic curvature. Comput Vision Image Understand 2000;77(9):111–44.

[24] Soares J, Leandro J, Cesar R, Jelinek H, Cree M. Retinal vessel segmentation using the 2-D Gabor wavelet and supervised classification. IEEE Trans Med Imaging 2006;25:1214–22, doi: 10.1109/TMI.2006.879967.

[25] Aquino A, Gegundez ME, Marin D. Automated optic disc detection in retinal images of patients with diabetic retinopathy and risk of macular edema. Int J Biol Life Sci 2012;8(2):87–92.

[26] Expert system for early automatic detection of diabetic retinopathy by analysis of digital retinal images. http://www.uhu.es/retinopathy; 2013 [accessed 30.03.13].

[27] Gegundez-Arias ME, Marin D, Bravo JM, Suero A. Locating the fovea center position in digital fundus images using thresholding and feature extraction techniques. Computerized Med Imaging Graph 2013;37(5–6):386–93 (Special Issue: Retinal Image Analysis).

Andrea Giachetti is Associate Professor at the Department of Computer Science of the University of Verona. He received the Ph.D. degree in Physics from the University of Genova, Italy, in 1993. Since 1997 he has worked with CRS4 (Center for Advanced Studies, Research and Development, Sardinia), where he has been head of the Medical Image Processing area, and key staff of various EU funded projects. His research activity is mainly focused on computer vision, computer graphics, medical image processing and shape analysis. He is author of more than 80 publications in journals and conference proceedings.