Fast Localization of the Optic Disc Using Projection of Image Features
Abstract—Optic Disc (OD) localization is an important pre-processing step that significantly
simplifies subsequent segmentation of the OD and other retinal structures. Current OD localization
techniques suffer from impractically high computation times (a few minutes per image). In this work,
we present a fast technique that requires less than a second to localize the OD. The technique is
based on obtaining two projections of certain image features that encode the x- and y- coordinates
of the OD. The resulting 1D projections are then searched to determine the location of the OD. This
avoids searching the 2D image space and thus enhances the speed of the OD localization process.
Image features such as retinal vessels orientation and the OD brightness are used in the current
method. Four publicly-available databases, including STARE and DRIVE, are used to evaluate the
proposed technique. The OD was successfully located in 330 images out of 340 images (97%) with
an average computation time of 0.65 seconds.
Index Terms—Optic disc, localization, projection, image features
I. INTRODUCTION
With the new advances in digital modalities for retinal imaging, there is a growing need for image
processing tools that provide fast and reliable segmentation of retinal anatomical structures. The optic
disc (OD) is a major retinal structure that usually appears in retinal images as a circular bright object [1].
This work is supported by a grant from Center for Informatics Science (CIS), Nile University (NU), Egypt.
A. E. Mahfouz is with the Medical Imaging & Image Processing Lab, Nile University, Egypt (e-mail: [email protected]).
A. S. Fahmy, PhD. is with the School of Communication & Information Technology, Nile University, Egypt (phone: 002-02-35342069; fax: 002-02-35392350; e-mail: [email protected]).
Ahmed E. Mahfouz and Ahmed S. Fahmy*
It is the region where the optic nerve and the retinal and choroidal vessels emerge into the eye [2]. A large
number of algorithms have been proposed in the literature to segment the OD; these include the Hough
Transform [3]–[5], active contour models [6], and Gradient Vector Flow (GVF) [7], [8]. Nevertheless,
the success and efficiency of these algorithms depend mainly on determining a seed point inside the OD,
i.e. localization of the OD [6]–[8]. Although manual localization of the OD is sufficient, the process can
be prohibitively cumbersome when dealing with a large number of images. This has stimulated several
research groups to develop algorithms for automatic localization of the OD [1], [2] and [9]–[12]. OD
localization can also be useful for a number of applications. For example, the OD location can serve as a
landmark for localizing and segmenting other anatomical structures such as the fovea (where the distance
between the OD center and the center of the fovea is roughly constant) [2]. The location can also be used
to classify left and right eyes in fovea-centered retinal images [13]. In addition, the detection of OD
location is sometimes necessary for computing some important diagnostic indices for hypertensive
retinopathy based on vasculature such as Central Retinal Artery Equivalent (CRAE) and Central Retinal
Vein Equivalent (CRVE) [10]. Also, since the OD can be easily confounded with large exudates and
lesions, the detection of its location is important to remove it from a set of candidate lesions [9].
In normal eyes, automatic localization of the OD is simple because it has well-defined features.
Nevertheless, developing fast and robust methods for automatic localization of the OD could be very
challenging due to the presence of retinal pathologies that alter the appearance of the OD significantly
and/or have similar properties to the OD [3]. OD localization methods can be classified into two main
categories, appearance-based methods and model-based methods. Appearance-based methods identify the
location of the OD as the location of the brightest round object within the retinal image. These methods
include techniques such as intensity thresholding [4] and [5], highest average variation [14], matched
spatial filter [12], and principal component analysis [10]. Although these methods are simple and have
high success rates in normal images, they fail to correctly localize the OD in diseased retinal images
where the pathologies have similar appearance properties to the OD.
Model-based methods depend mainly on extracting and analyzing the structure of the retinal vessels
and defining the location of the OD as the point where all the retinal vessels originate [1], [2], [9].
Techniques such as geometrical models [9], template matching [11], and convergence of vasculature [1],
[2] have a relatively high success rate in diseased images, but they are computationally very expensive
because they require segmentation of the retinal vessels as an initial step of the localization process. For
example, the geometrical model-based method proposed by M. Foracchia et al. [9] achieves a success rate
of more than 97.5%, but it takes an average computation time of 2 minutes to localize the OD in a given
image. Another OD localization method based on vasculature convergence has been described by A.
Youssif et al. [2]. The method achieves an accuracy of 98.77%, but it takes an average
computation time of 3.5 minutes per image to correctly locate the OD.
In this work, a novel fast technique for OD localization is proposed. The new method can be classified
as a model-based method in which the OD is considered the region where the main retinal vessels
originate in a vertical direction. The computational time of the localization process is significantly
enhanced by reducing the problem from one 2D localization problem to two 1D problems that do not
require segmentation of the retinal vessels. The remaining sections of this manuscript are organized as
follows: Section 2.1 describes the “easy-to-compute” image features that can be used to decompose the
image into two 1D signals. Section 2.2 contains the methodologies of determining the horizontal and the
vertical locations of the OD from the resulting two 1D signals. Section 2.3 proposes a geometry-based
method that can be used to enhance the robustness of the localization process. Section 3 contains the
detailed algorithm that can be used to implement the proposed technique and reproduce the results which
are displayed in Section 4. Sections 5 and 6 contain a discussion of the results and the conclusion
respectively.
II. THEORY AND METHODS
A. Projection of Image Features
Searching for the OD location in a 2D space (image space) renders any localization algorithm highly
expensive in terms of computational time. The main objective of this work is to propose a localization
algorithm with significantly enhanced speed by converting the typical 2D localization problem into two
1D localization problems, i.e. reducing the dimensionality of the problem. This reduction of
dimensionality is achieved by projecting certain features from the retinal image onto two orthogonal axes
(horizontal and vertical). The resulting two 1D signals are then searched to determine the horizontal and
vertical coordinates of the OD location. The key factor needed for the success of the dimensionality
reduction process is to obtain two meaningful 1D signals that can be used to determine the coordinates of
the OD location. A meaningful horizontal (or vertical) signal can be defined to be a signal whose
maximum value occurs at the horizontal (or vertical) location of the OD. In order to produce such 1D
signals, the set of retinal image features to be projected on either axis should be carefully determined.
Two features are used to create the two 1D projection signals. The first, and the most fundamental
feature, is based on the simple observation that the central retinal artery and vein emerge from the OD
mainly in the vertical direction and then progressively branch into main horizontal vessels, see fig. 1(a).
These main horizontal vessels branch further in all directions to feed most of the retina. This vasculature
structure of the retina suggests that a vertical window (with height equal to the image height and a proper
width) would always be dominated by vertical edges (vertical vessels) when centered at the OD. Although
the window may contain vertical edges at other locations, e.g. small vascular branches and lesions, it will
always be populated by strong horizontal edges as well, i.e. the edges of the two main horizontal branches
of the retinal vessels. Given the above retinal vessels structure, the integration of the difference between
the vertical and horizontal edges, over a region represented by the previously described vertical window, is a
possible scoring index of the horizontal location of the OD. Simple gradient operators (the kernel [1 0 -1]
and its transpose) are used to produce the vertical and horizontal edge maps of the image.
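The edge maps described above can be sketched as follows. This is a minimal pure-Python illustration, not the authors' implementation; the function name `edge_maps` and the zero-valued borders are our assumptions:

```python
def edge_maps(img):
    """Vertical and horizontal edge maps of a grayscale image (list of lists).

    The [1 0 -1] kernel applied along x responds to vertical edges (e.g.
    vertical vessels); its transpose applied along y responds to horizontal
    edges. Border pixels are left at zero.
    """
    h, w = len(img), len(img[0])
    ev = [[0.0] * w for _ in range(h)]  # vertical-edge map
    eh = [[0.0] * w for _ in range(h)]  # horizontal-edge map
    for y in range(h):
        for x in range(w):
            if 1 <= x <= w - 2:
                ev[y][x] = img[y][x - 1] - img[y][x + 1]
            if 1 <= y <= h - 2:
                eh[y][x] = img[y - 1][x] - img[y + 1][x]
    return ev, eh
```

A vertical intensity step (a vessel wall) produces a strong response in `ev` and none in `eh`, which is exactly the distinction the projection signals exploit.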
The second feature used in this work is the intensity profile of the region containing the OD. The OD is
usually a region that is brighter than its surroundings with a thin dark vertical slice in the center
(representing the vertical vessels inside the OD). The following sections give details on how to incorporate
this feature into the proposed technique.
B. OD Localization
In order to localize the OD, the process is split into two steps. In the first step, the image features are
projected onto the horizontal axis to determine the horizontal location of the OD. In the second step, the
horizontal location, determined from step 1, is searched for the correct vertical location of the OD. The
following two sections show these two steps in detail.
It is worth noting that the areas outside the camera aperture (circular region) are excluded using a
binary mask generated by thresholding the red component of the image based on the method described in
[3].
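For illustration, a crude version of such a mask can be obtained by thresholding the red channel. The method cited in [3] is more elaborate; the function name and the fixed threshold below are assumptions:

```python
def aperture_mask(red_channel, thresh=35):
    # 1 inside the camera aperture (bright red fundus background), 0 outside.
    # thresh is an assumed fixed cutoff; [3] derives the threshold from the image.
    return [[1 if v > thresh else 0 for v in row] for row in red_channel]
```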
Fig. 1. (a) A retinal image showing the sliding window at two different locations, sliding direction and
projection direction. (b) Plot of the 1D signal resulting from projecting the image features onto the
horizontal axis (Hprojection).
Horizontal Localization of the OD
In order to follow the approach described in the preceding two sections to determine the horizontal
location of the OD, define a horizontally sliding window whose width and height are equal to double the
thickness of a main vessel and the image height, respectively. Let this window scan the retinal image
from left to right and project the image features within it onto a horizontal axis to form the first 1D signal,
used later for horizontal localization. Assume that the image features of interest, the features to be
projected, are: (1) the absolute difference between the image's vertical and horizontal edges, and (2) the
image's intensity values.
Fig. 1(a) shows a retinal image with the horizontally sliding window placed at two different locations
(location 1 & location 2). When the window is located over the OD (location 1), it encloses a large number
of vertical edges and almost no horizontal edges; that is, the projection of the integration of the difference
between the vertical and horizontal edges produces a maximum value. Also at the location of the OD, the
projection of pixels' intensity within the same window returns a minimum value, because the window
contains a large number of vertical vessels (represented by low intensity pixels). When the window is
centered at any other location in the retinal image (location 2 for example), it may enclose a significant
number of vertical edges (representing small vascular branches and/or lesions), but it will always contain
a high population of horizontal edges (representing the two main horizontal branches of the retinal
vessels).
Fig. 1(b) shows the 1D signal resulting from projecting the two features described above on the
horizontal axis. The value of the signal at each horizontal location is the ratio between: (1) the projection
of the difference between the vertical and horizontal edges, and (2) the projection of the intensity values
within the window, when centered at this horizontal location. Notice that the horizontal location of the
optic disc is easily identified as the location of the maximum peak of the resulting 1D signal.
It is worth noting that the vessel thickness and the OD diameter are calculated automatically from the
image resolution, assuming that the average OD diameter in adults is 1.5 mm and the main vessel
thickness is 15% of the OD diameter [15]. Small variations in these values don't alter the resulting signal
significantly.
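The horizontal projection described above can be sketched as follows (pure Python, our naming; `edge_diff` is the per-pixel difference feature and `intensity` the pixel intensities):

```python
def h_projection(edge_diff, intensity, win_w):
    """Hproj(x): ratio of summed edge difference to summed intensity inside
    a full-height sliding window of width win_w centered at column x."""
    h, w = len(edge_diff), len(edge_diff[0])
    half = win_w // 2
    sig = []
    for x in range(w):
        lo, hi = max(0, x - half), min(w, x + half + 1)
        f = sum(edge_diff[y][c] for y in range(h) for c in range(lo, hi))
        g = sum(intensity[y][c] for y in range(h) for c in range(lo, hi))
        sig.append(f / g if g else 0.0)
    return sig
```

The horizontal OD candidate is then the index of the maximum of the returned signal; in practice the window sums can be updated incrementally as the window slides, keeping the cost linear in the number of pixels.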
Vertical Localization of the OD
Assuming that the approach followed in the previous section has successfully identified the horizontal
location of the OD, the objective now is to search this horizontal location for the correct vertical location
of the OD. Define a vertically sliding window whose width and height are equal to the diameter of the
OD. Let this window, centered at the horizontal location determined from the previous section, scan the
retinal image from top to bottom and project the image features within it onto a vertical axis to form the
second 1D signal, used later for vertical localization. The image features of interest, features to be
projected, are: (1) the summation of the image's vertical and horizontal edges, and (2) the image's
intensity values.
Fig. 2. (a) A retinal image showing the vertically sliding window, sliding direction and projection
direction. (b) Plot of the 1D signal resulting from projecting the image features onto the vertical axis
(Vprojection).
Fig. 2(a) shows the same retinal image as fig. 1 with the vertically sliding window centered at the
horizontal location of the OD, determined in the previous section. When this vertically sliding window is
located over the OD, it encloses a large number of both vertical and horizontal edges. Also at this
location, the projection of pixels' intensity values within the window has a maximum value, i.e. the
window contains a maximum number of bright pixels. At any other location along the vertical line
defining the pre-determined horizontal location of the OD, the window encloses fewer vertical and
horizontal edges and fewer bright pixels. This follows from the fact that the possibility of having lesions in the
regions above or below the OD is very small, because no retinal vessels are present in these regions.
Fig. 2(b) shows the 1D signal resulting from projecting the features described above on the vertical
axis. The value of the signal at each vertical location is the product of two quantities: (1) the
projection of the summation of the vertical and horizontal edges and (2) the projection of the intensity
values within the window, when the window is centered at this vertical location. Notice that the vertical
location of the optic disc is easily identified as the location of the maximum peak of the resulting 1D
signal.
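A corresponding sketch of the vertical projection (our naming; `edge_sum` is the per-pixel sum of the vertical and horizontal edge magnitudes):

```python
def v_projection(edge_sum, intensity, x_od, win):
    """Vproj(y): product of summed edges and summed intensity inside a
    win-by-win window centered at (x_od, y), where x_od is the horizontal
    OD location found in the previous step."""
    h, w = len(edge_sum), len(edge_sum[0])
    half = win // 2
    xlo, xhi = max(0, x_od - half), min(w, x_od + half + 1)
    sig = []
    for y in range(h):
        ylo, yhi = max(0, y - half), min(h, y + half + 1)
        f = sum(edge_sum[r][c] for r in range(ylo, yhi) for c in range(xlo, xhi))
        g = sum(intensity[r][c] for r in range(ylo, yhi) for c in range(xlo, xhi))
        sig.append(f * g)
    return sig
```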
C. Improving the Robustness
Consider the horizontal signal of the image shown in fig. 3. It can be seen that the true peak
corresponding to the OD horizontal location (peak 2 in fig. 3) is not the maximum peak. This is due to the
image artifact that appears as a bright spot to the left of the image. If we follow the algorithm described
above, the estimated OD location will be at a point that, by intuition, cannot belong to an optic disc (e.g.
belongs to a non-circular structure). On the other hand, if the second peak of the horizontal signal is
considered a candidate horizontal location for the OD, the estimated OD location will correspond to the
true location of the OD.
This observation can be used to improve the total accuracy of the technique. That is, instead of
considering the maximum peak of the horizontal signal only, a candidate list of possible horizontal OD
locations is used. The candidate list contains the locations of the n largest peaks, and the algorithm is
repeated for each candidate horizontal location. This results in n possible (2D) candidate locations of the
OD. In order to determine the final location, a set of image features is used to score each candidate
location. Then, the final location of the OD is taken as the candidate with the maximum scoring index.
Fig. 3. (a) The vertical localization signal corresponding to Peak 1. (b) A retinal image showing the two
candidate OD locations. (c) The vertical localization signal corresponding to Peak 2. (d) The horizontal
localization signal.
In this work, the candidate list contains two locations. The scoring index is calculated as the peak
strength of the horizontal signal at the candidate location multiplied by a weighting factor. The weighting
factor incorporates some a priori knowledge of the typical geometric and appearance properties of the
OD.
To calculate the weighting factor, a square window (with edge equal to twice the OD diameter) is
centered at the candidate OD location. Then, 10% of the brightest pixels within this window are
segmented. If an object (large cluster of bright pixels) exists at the candidate location, the eccentricity,
defined as the ratio of the object's minor axis length to the object's major axis length [16], of this object is
calculated. If no object exists, the eccentricity of the candidate location is set to a very small value (e.g.,
0.1). Then, the weighting factor of this location is set equal to the calculated eccentricity.
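The eccentricity measure can be computed from the second-order central moments of the segmented bright cluster, whose eigenvalues give the squared axis lengths. This is a sketch under our conventions; the paper cites [16] for the definition:

```python
import math

def eccentricity(pixels):
    """Minor-to-major axis ratio of a pixel cluster given as (x, y) tuples."""
    n = len(pixels)
    cx = sum(x for x, _ in pixels) / n
    cy = sum(y for _, y in pixels) / n
    mxx = sum((x - cx) ** 2 for x, _ in pixels) / n
    myy = sum((y - cy) ** 2 for _, y in pixels) / n
    mxy = sum((x - cx) * (y - cy) for x, y in pixels) / n
    # Eigenvalues of the 2x2 covariance matrix [[mxx, mxy], [mxy, myy]].
    d = math.sqrt(((mxx - myy) / 2) ** 2 + mxy ** 2)
    major = (mxx + myy) / 2 + d
    minor = (mxx + myy) / 2 - d
    return math.sqrt(minor / major) if major > 0 else 0.0
```

A compact, roughly square cluster scores close to 1 and a thin streak close to 0, which is exactly the discrimination the weighting factor needs.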
III. ALGORITHM
STEP 1: Get image features
1. Get an image of horizontal edges (Eh) and an image of vertical edges (Ev)
2. Calculate EdgeDiff = |Ev| - |Eh|, where |.| denotes the absolute value
3. Calculate EdgeSum = |Ev| + |Eh|
STEP 2: Projecting the image features on the horizontal axis
1. Define Whrz as a rectangular window of size (image height, 2×main vessel width) and centered at a
horizontal location x
2. Slide the window Whrz over the retinal image from left to right, and for each x:
- Calculate Fhrz = sum of EdgeDiff inside the window Whrz
- Calculate Ghrz = sum of pixels’ intensity values inside the window Whrz
- Calculate the ratio Hproj (x) = Fhrz / Ghrz
3. The horizontal location of the OD, HRZ_CAND, is the location of the maximum peak of Hproj
STEP 3: Projecting the image features on the vertical axis
1. Define Wver as a rectangular window of size (OD diameter, OD diameter) and centered at the
horizontal location HRZ_CAND and a vertical location y
2. Slide the window Wver over the image from top to bottom, and for each y:
- Calculate: Fver = sum of EdgeSum inside the window Wver
- Calculate Gver = sum of pixels’ intensity values inside the window Wver
- Calculate the value Vproj(y) = Fver × Gver
3. The vertical location of the OD, VER_CAND, is the location of the maximum peak of Vproj
STEP 4: Improving the robustness
1. For every candidate location (HRZ_CAND, VER_CAND), select a region of interest (ROI) of size
(2 × OD diameter, 2 × OD diameter)
2. Segment the brightest 10% pixels within each ROI
3. Group neighboring pixels into objects
4. Calculate the eccentricity of the largest object
eccen. = minor axis length / major axis length
5. Calculate the Scoring Index of each candidate location as:
Scoring Index(HRZ_CAND, VER_CAND) = Hproj(HRZ_CAND) × eccen(HRZ_CAND, VER_CAND)
6. Select the final OD location as the location with the largest Scoring Index
IV. RESULTS
Four publicly available databases are used to evaluate the accuracy and the computation time of the
proposed technique. The four databases are: (1) the STARE database (81 images, 605×700 pixels) [17],
(2) the DRIVE database (40 images, 565×584 pixels) [18], (3) the Standard Diabetic Retinopathy
Database 'Calibration Level 0' (DIARETDB0) (130 images, 1500×1152 pixels) [19], and (4) the Standard
Diabetic Retinopathy Database 'Calibration Level 1' (DIARETDB1) (89 images, 1500×1152 pixels) [19]. The
diseased images in the four databases contain signs of DR, such as hard exudates, soft exudates,
hemorrhages and neo-vascularization (NV). The accuracy and computation time results of evaluating the
proposed method using these databases are summarized in Table 1. The table includes the results of
applying the method without constructing the candidate list (the maximum peak of the horizontal
localization signal is selected as the correct location) and also the results with the list containing two
candidates.
Fig. 4. Success and failure cases in images selected from the four databases. (a)–(l) show successful OD localization samples. (m)–(o) show samples of failure in OD localization. The white 'X' indicates the location of the OD as detected by the proposed method.
Table 1. Accuracy and computation time of the proposed OD localization technique.
Database                         STARE      DRIVE      DIARETDB0  DIARETDB1  Total
Normal Images                    31         33         20         5          89
Diseased Images                  50         7          110        84         251
Number of Images                 81         40         130        89         340
Success Rate                     89%        100%       94.6%      96.6%      94.4%
Success Rate (with improvement)  92.6%      100%       98.5%      97.8%      97%
Computation Time                 0.46 sec.  0.32 sec.  0.98 sec.  0.98 sec.
The proposed method was implemented using Matlab® (The MathWorks, Inc.). The results shown in
Table 1 are acquired by running the developed code on a PC (2.66 GHz Intel® Core 2 Duo and 4 GB RAM).
The detected location of the OD is considered correct if it falls within 60 pixels of a manually identified
OD center, as proposed by A. Hoover et al. in [1], M. Foracchia et al. in [9] and A. Youssif et al. in [2].
The center of the OD is manually identified as the point from which all the retinal vessels emerge.
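The 60-pixel success criterion above reduces to a simple distance test (function name ours):

```python
def is_correct(detected, manual, tol=60):
    # Correct if the detected OD center lies within tol pixels (Euclidean
    # distance) of the manually identified OD center.
    dx, dy = detected[0] - manual[0], detected[1] - manual[1]
    return (dx * dx + dy * dy) ** 0.5 <= tol
```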
The proposed method achieved a total accuracy of 97% when tested using the four databases, i.e. the
OD was correctly located in 330 images out of the 340 images tested. The OD was correctly located in 75
images out of STARE's 81 images (92.6% accuracy) with an average computation time of 0.46 seconds
per image and an error of 14±15 pixels (mean ± Standard Deviation). In addition, the OD was correctly
located in all the 40 images of DRIVE database (100% accuracy) with an average computation time of
0.32 seconds per image and an error of 11±11 pixels. Fig. 4 shows the results of applying the proposed
method to selected retinal images from the four databases.
V. DISCUSSION
A new method for OD localization is presented. The method achieves fast localization by reducing the
dimensionality of the search space from a 2D space (image space) to two 1D spaces (two 1D signals). The
process of dimensionality reduction is achieved through the projection of certain image features onto two
orthogonal axes (horizontal and vertical). In this work, two features are selected for projection. The first
one is the directionality of the retinal vessels (represented by the horizontal and vertical edge maps of the
image). The second feature is the intensity profile of the OD (a bright circular region with a thin dark slab
in the middle).
The robustness of the proposed technique is demonstrated by evaluating the method using four databases
(340 images), those most commonly used for evaluation by currently available techniques [1], [2] and [9]. The
method achieved a relatively high success rate in the four databases (97%) with all the parameters of the
algorithm maintained at constant values (except for linear scaling of the sizes of the used projection
windows according to the image resolution). That is, there is no need to tailor the algorithm parameters
for different databases.
The step of reducing the dimensionality of the search space resulted in a significantly reduced
computation time (less than one second), compared to currently available techniques (3.5 minutes in [2]
and 2 minutes in [9]). The key factor that greatly enhanced the speed of the method is that the
directionality of the retinal vessels, whether vertical or horizontal, is described by the directionality of
their corresponding edges in the vertical and horizontal edge maps of the retinal image. The process of
convolving the image with a 3×1 gradient mask to produce the edge maps is the most computationally
demanding operation in the algorithm, but this operation is negligible compared to the initial step of
extracting the retinal vessels that is required in all model-based techniques. The latter is usually achieved
by applying a 2D matched filter (typically a 10×15 mask for the resolution of STARE images) with
several orientations (typically at 12 different angles) [20].
The accuracy of the proposed technique is highly dependent on the accuracy of the horizontal
localization process, because searching for the OD's vertical location is restricted by a window centered
at the estimated horizontal location. Hence, to increase the accuracy of the proposed technique, two
candidate locations of the OD are estimated and additional scoring of these candidates is done by
incorporating the appearance and geometric properties of the candidate OD. By investigating these two
candidate locations, instead of one location only, the total accuracy of the technique increased from
94.4% to 97%. Note that the process of investigating two candidate locations was applied to all the
images with no significant computation overhead, i.e. a negligible step in terms of computational time.
As shown in fig. 4, even in the presence of retinal pathologies and/or image artifacts, the selected
features were unique to the OD and thus allowed proper localization with relatively high accuracy. Fig.
4(a)–(l) show different images from the four databases where the OD location was detected correctly. Fig.
4(m)–(o) show three retinal images in which the proposed method failed to locate the OD.
VI. CONCLUSION
A novel OD localization technique is presented. The new technique achieves accurate results relative to
currently available techniques, but with a significant reduction in the computation time. The main idea
proposed is to reduce the dimensionality of the search space. This is achieved by decomposing the 2D
problem into two 1D problems by projecting certain image features (vessels structure and OD
appearance) onto two orthogonal axes.
REFERENCES
[1] A. Hoover and M. Goldbaum, "Locating the Optic Nerve in a Retinal Image Using the Fuzzy
Convergence of the Blood Vessels," IEEE Trans. Medical Imaging, vol. 22, pp. 951–958, Aug.
2003.
[2] A. Youssif, A. Ghalwash, and A. Ghoneim, "Optic Disc Detection from Normalized Digital
Fundus Images by Means of a Vessels' Direction Matched Filter," IEEE Trans. Medical Imaging,
vol. 27, pp. 11–18, Jan. 2008.
[3] F. ter Haar, “Automatic localization of the optic disc in digital color images of the human retina,”
M.S. Thesis,Computer Science Department, Utrecht University, Utrecht, The Netherlands, 2005.
[4] S. Tamura, Y. Okamoto, and K. Yanashima, "Zero crossing interval correction in tracking eye-fundus
blood vessels," in 1988 Proc. Int. Conf. on Pattern Recognition, pp. 227–233.
[5] Z. Liu, O. Chutatape, and S. M. Krishnan, "Automatic image analysis of fundus photograph," in 1997
Proc. Conf. IEEE Engineering in Medicine and Biology Society, pp. 524–525.
[6] A. Osareh, M. Mirmehdi, B. Thomas, and R. Markham, "Colour Morphology and
Snakes for Optic Disc Localisation," in 2002 Proc. 6th Medical Image Understanding and Analysis
Conf., pp. 21–24.
[7] V. Thongnuch and B. Uyyanonvara, "Automatic optic disk detection from low contrast retinal images
of ROP infant using GVF snake," Suranaree J. Sci. Technol., vol. 14, pp. 223–226, Jan.–
Dec. 2007.
[8] A. Osareh, M. Mirmehdi, and R. Markham, "Comparison of Colour Spaces for Optic Disc
Localisation in Retinal Images," in 2002 Proc. Int. Conf. on Pattern Recognition (ICPR'02), pp.
10743.
[9] M. Foracchia, E. Grisan, and A. Ruggeri, "Detection of Optic Disc in Retinal Images by Means of a
Geometrical Model of Vessel Structure," IEEE Trans. Medical Imaging, vol. 23, pp. 1189–1195,
Oct. 2004.
[10] H. Li and O. Chutatape, "Automatic Feature Extraction in Color Retinal Images by a Model Based
Approach," IEEE Trans. Biomedical Engineering, vol. 51, pp. 246–254, Feb. 2004.
[11] M. Lalonde, M. Beaulieu, and L. Gagnon, “Fast and robust optic disk detection using pyramidal
decomposition and Hausdorff-based template matching,” IEEE Trans. Medical Imaging, vol. 20, pp.
1193–1200, Nov. 2001.
[12] M. Goldbaum, S. Moezzi, A. Taylor, S. Chatterjee, J. Boyd, E. Hunter,
and R. Jain, "Automated Diagnosis and Image Understanding with Object Extraction, Object
Classification, and Inferencing in Retinal Images," in 1996 Proc. IEEE International Conf.
Image Processing (ICIP), pp. 695–698.
[13] M. Niemeijer, B. van Ginneken, and F. ter Haar, "Automatic detection of the optic disc, fovea and vascular
arch in digital color photographs of the retina," in 2005 Proc. British Machine Vision Conf., pp. 109–
118.
[14] C. Sinthanayothin, J. F. Boyce, H. L. Cook, and T. H. Williamson, "Automated location of the optic
disk, fovea, and retinal blood vessels from digital color fundus images", Br. J. Ophthalmol., vol. 83,
pp. 902–910, August 1999.
[15] D. J. Cunningham, Cunningham's Text-book of Anatomy. Oxford University Press, 1981, p. 815.
[16] M. Sonka, V. Hlavac, and R. Boyle, Image Processing, Analysis, and Machine Vision. Thomson
Learning, 2008, ch. 8.
[17] STARE Project Website, Clemson Univ., Clemson, SC [Online]. Available:
http://www.ces.clemson.edu/~ahoover/stare
[18] J.J. Staal, M.D. Abramoff, M. Niemeijer, M.A. Viergever, B. van Ginneken, "Ridge based vessel
segmentation in color images of the retina," IEEE Trans. Medical Imaging, vol. 23, pp. 501-
509, April 2004.
[19] T. Kauppi, V. Kalesnykiene, J.-K. Kamarainen, L. Lensu, I. Sorri, A. Raninen, R. Voutilainen,
H. Uusitalo, H. Kälviäinen, and J. Pietilä, "DIARETDB1 diabetic retinopathy database and evaluation
protocol," Technical report.
[20] S. Chaudhuri, S. Chatterjee, N. Katz, M. Nelson, and M. Goldbaum, “Detection of blood vessels in
retinal images using two-dimensional matched filters,” IEEE Trans. Medical Imaging, vol. 8, pp.
263–269, Sep. 1989.