Semi-automatic registration of retinal images based on line matching approach

Carmen Alina Lupaşcu
Dipartimento di Matematica e Informatica
Università degli Studi di Palermo, Palermo, Italy
[email protected]

Domenico Tegolo
Dipartimento di Matematica e Informatica
Università degli Studi di Palermo, Palermo, Italy
[email protected]

Fabio Bellavia
Dipartimento di Sistemi Informatici
Università degli Studi di Firenze, Firenze, Italy
[email protected]

Cesare Fabio Valenti
Dipartimento di Matematica e Informatica
Università degli Studi di Palermo, Palermo, Italy
[email protected]

Abstract

Accurate retinal image registration is essential to track the evolution of eye-related diseases. In this paper, a semi-automatic method based on features relying upon retinal graphs for temporal registration of retinal images is proposed. The features represent straight lines connecting vascular landmarks on the vascular tree of the retina: bifurcations, branchings, crossings and end points. In the built retinal graph, one straight line between two vascular landmarks indicates that they are connected by a vascular segment in the original retinal image. The locations of the landmarks are manually extracted from each image, in order to avoid the loss of information caused by errors in a retinal vessel segmentation algorithm. A straight line model is designed in order to compute a similarity measure to quantify the line matching between images. From the set of matching lines, corresponding points are extracted and a global transformation is computed. The performance of the registration method is evaluated in the absence of ground truth using the cumulative inverse consistency error, which shows the effectiveness of the proposed method.

1. Introduction

This paper describes a semi-automatic framework for retinal image registration, useful to diagnose and monitor the progression of a variety of eye-related diseases.

In image registration, two or more images of the same scene, taken at different times, from different viewpoints and/or by different sensors [23], are superimposed by establishing correspondences between the images. One of the images is referred to as the reference image and the second image is referred to as the target image. Image registration involves spatially transforming the target image to align with the reference image. Registration methods can be classified into several categories, including area-based approaches, intensity-based methods and feature-based techniques.

In area-based approaches, a small window of points in the target image is statistically compared with windows of the same size in the reference image. Normalized cross-correlation and mutual information are the most commonly used similarity metrics, and area-based approaches require optimization of these metrics with global search techniques [17] in order to achieve the best registration. Large textureless regions, inconsistent contrast and nonuniform illumination within images may degrade the performance of area-based approaches.
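As a concrete illustration of the window comparison underlying area-based approaches, the following minimal NumPy sketch (the function names and the brute-force scan are ours, purely for illustration) scores candidate windows in the reference image by normalized cross-correlation:

```python
import numpy as np

def normalized_cross_correlation(window, patch):
    """NCC between two equally sized image windows, in [-1, 1]."""
    w = window - window.mean()
    p = patch - patch.mean()
    denom = np.sqrt((w ** 2).sum() * (p ** 2).sum())
    return (w * p).sum() / denom if denom > 0 else 0.0

def best_match(reference, window):
    """Exhaustively scan the reference image for the position whose
    window maximizes NCC (illustrative only; real systems rely on the
    global search techniques cited in the text)."""
    h, w = window.shape
    best_score, best_pos = -np.inf, (0, 0)
    for i in range(reference.shape[0] - h + 1):
        for j in range(reference.shape[1] - w + 1):
            s = normalized_cross_correlation(window,
                                             reference[i:i + h, j:j + w])
            if s > best_score:
                best_score, best_pos = s, (i, j)
    return best_pos, best_score
```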

In intensity-based registration methods [21], the mapping transformation is estimated directly from the observed image intensities of the two images. Registration is achieved based on the optimization [15] of intensity differences, cross correlation, gradient correlation, or mutual information of the images [14]. Intensity-based methods may fail to align the images in the case of low quality images or a small overlap region between the images to align.

Feature-based techniques [18, 4, 10] involve the detection of landmark points in the retinal vascular network and the extraction of features representing the landmark points, followed by the application of a match metric to identify the correspondences between two images. Most of the feature-based methods use bifurcation points as landmarks, since they are a remarkable indicator of vasculature, but some of them also use other control points such as Harris corners [5].

Zana et al. in [22] use the branching angles of each bifurcation point to produce a probability for every point pair. A hierarchical method is proposed in [2] to solve the problem of one-to-multiple matching of bifurcation points. The correspondences are refined gradually from the coarse translation model to the fine quadratic model. This idea is extended to the dual-bootstrap iterative closest point (ICP) algorithm [19]. Laliberte et al. in [12] search for the minimal registration error by applying the transformation to every combination of feature points. This exhaustive search requires a huge amount of computation when the number of feature points increases. In order to solve this problem, Chen et al. in [6] present a new structural feature for retinal image registration. Different from point-matching techniques, the method proposed by Chen et al. is a structure-matching approach. The bifurcation structure is composed of a master bifurcation point and its three connected neighbors. The characteristic vector of each bifurcation structure consists of the normalized branching angles and lengths. The similarity measure for any bifurcation structure pair is the Euclidean distance between the characteristic vectors. After a verification step, in which a spurious correspondence is removed when it yields a different model than the others, in the final stage the refined bifurcation structures are used together to estimate the transformation models, such as the quadratic spherical transformation. Deng et al. in [8] proposed a graph-based algorithmic framework for retinal image registration. Hierarchical retinal features were used, including vessel shape models, vascular bifurcations and the underlying vascular topological structures. Thus, the image registration problem was transformed into a pairwise (edge-to-edge) correspondence problem. The retinal vessels were automatically detected and represented as vascular structure graphs. Graph matching is then performed to find global correspondences between vascular bifurcations. Finally, a revised iterative closest point (ICP) algorithm incorporating a quadratic transformation model was used at the fine level to register vessel shape models. Moreover, aligning spatial relational graphs using error-correcting graph matching was also used by Arakala et al. in [1] for biometric purposes, to separate genuine from impostor comparisons.

In order to reduce the computational time needed for feature extraction and matching in point-based feature techniques, and in order to reduce registration errors due to errors in automatic retinal vessel segmentation algorithms, we present a novel feature-based registration technique based on lines and on manually extracted landmarks. In our feature-based approach, the features represent straight lines connecting manually extracted vascular landmarks on the vascular tree of the retina: bifurcations, branchings, crossings and end points. In the built retinal graph, one straight line between two vascular landmarks indicates that they are connected by a vascular segment in the original retinal image. The characteristic vector of each straight line consists of the length and the orientation of the line segment, the Cartesian coordinates of the midpoint of the line segment, the lengths and the orientations of the line segments connecting the origin to the end points and to the midpoint of the analyzed line segment, and the perpendicular and the tangential distances from the origin to the analyzed line segment. The Euclidean distance between characteristic vectors in the two images is used as the similarity metric in order to match line segments. A quadric model is used to estimate the global transformation between the two images using the end points of the matched line segments. Experimental results based on the cumulative inverse consistency error, computed in the absence of a ground truth registration, show that our method is accurate, effective and efficient for retinal image registration.

2. Methodology

The first step is the manual extraction from the retinal tree of the vascular landmarks: bifurcations, branchings, crossings and end points. Manual landmark extraction avoids the loss of information caused by errors in a retinal vessel segmentation algorithm. We have manually extracted landmarks from the VARIA database [16], which is a set of retinal images used for authentication purposes. The database includes 233 images from 139 different individuals. The images have been acquired with a TopCon NW-100 non-mydriatic camera and are optic disc centered, with a resolution of 768×584 pixels (Figure 1).

In the manually built retinal graph, one straight line between two vascular landmarks indicates that they are connected by a vascular segment in the original retinal image (Figure 2).

2.1 Feature matching

Based on the retinal graph manually built as described in the previous section, a feature-based image registration technique is proposed, where the features are represented by the straight lines connecting the vascular landmarks on the vascular tree of the retina.

2.1.1 Straight line segment model

Our straight line segment model extends the minimal representations of line segments used in the state-of-the-art literature for line tracking, where lines correspond to edges extracted from an image belonging to a sequence of time-varying images [9, 7].

Each straight line segment with end points A(xA, yA) and B(xB, yB) is accurately described, as may be noticed in Figure 3, in terms of:

Figure 1. Three retinal images from the VARIA database belonging to the same individual: (a) image R002.png, (b) image R180.png and (c) image R181.png.

• length: |AB|;

• orientation (directional angle): α (the angle between the line segment and the x-axis);

• x coordinate of the midpoint M: Mx;

• y coordinate of the midpoint M: My;

Figure 2. Three retinal graphs from the VARIA database belonging to the same individual: (a) image R002.png, (b) image R180.png and (c) image R181.png.

• length of OM: |OM|;

• orientation of OM;

• length of OA: |OA|;

• length of OB: |OB|;

• orientation of OA;

• orientation of OB;

• distance from the origin of the Cartesian coordinate system associated to the image to the line segment: |OF|;

• distance along the line from the perpendicular intersection F to the midpoint M: |FM|.

Figure 3. Straight line segment model.

These values are derived from the end points of the segment and from its midpoint as follows:

|AB| = √((xA − xB)² + (yA − yB)²);

α = arctan((yA − yB)/(xA − xB));

Mx = (xA + xB)/2;

My = (yA + yB)/2;

|OM| = √(Mx² + My²);

OM orientation = arctan((yA + yB)/(xA + xB));

|OA| = √(xA² + yA²);

|OB| = √(xB² + yB²);

OA orientation = arctan(yA/xA);

OB orientation = arctan(yB/xB);

|OF| = Mx · sin(α) + My · cos(α);

|FM| = −Mx · cos(α) − My · sin(α).
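For illustration, these formulas translate directly into code. The following is a minimal NumPy sketch; the function name and the feature ordering are our own choices, and np.arctan2 replaces the plain arctan of the ratios so the angles stay well defined when a denominator is zero:

```python
import numpy as np

def line_segment_features(xA, yA, xB, yB):
    """12-dimensional feature vector of the segment AB, following the
    formulas above (O is the image origin, M the segment midpoint).
    Note: np.arctan2 is used instead of arctan of the ratios for
    numerical robustness."""
    length_AB = np.hypot(xA - xB, yA - yB)
    alpha = np.arctan2(yA - yB, xA - xB)           # directional angle
    Mx, My = (xA + xB) / 2.0, (yA + yB) / 2.0      # midpoint M
    OM = np.hypot(Mx, My)
    OM_orient = np.arctan2(yA + yB, xA + xB)
    OA, OB = np.hypot(xA, yA), np.hypot(xB, yB)
    OA_orient = np.arctan2(yA, xA)
    OB_orient = np.arctan2(yB, xB)
    OF = Mx * np.sin(alpha) + My * np.cos(alpha)   # perpendicular distance
    FM = -Mx * np.cos(alpha) - My * np.sin(alpha)  # tangential distance
    return np.array([length_AB, alpha, Mx, My, OM, OM_orient,
                     OA, OB, OA_orient, OB_orient, OF, FM])
```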

2.1.2 Line matching

The Euclidean distance between feature vectors is used to match correspondences between the reference image and the target image. The same matching procedure is applied with the roles of the reference image and the target image switched. From the 30 best correspondences of each procedure, only the common ones are kept, as sketched below.
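A minimal sketch of this symmetric matching step follows, assuming each image yields an array of the 12-dimensional feature vectors defined above; the function name and the exact rule used to pick the 30 best correspondences in each direction are our interpretation of the text, not the authors' code:

```python
import numpy as np
from scipy.spatial.distance import cdist

def mutual_best_matches(feats_ref, feats_tgt, k=30):
    """Match lines by Euclidean distance between feature vectors in both
    directions and keep only the correspondences common to the k best
    matches of each direction."""
    d = cdist(feats_ref, feats_tgt)  # pairwise Euclidean distances
    # k reference lines with the smallest best-match distance, each
    # paired with its nearest target line (and vice versa below)
    fwd = {(i, int(d[i].argmin())) for i in np.argsort(d.min(axis=1))[:k]}
    bwd = {(int(d[:, j].argmin()), j) for j in np.argsort(d.min(axis=0))[:k]}
    return sorted(fwd & bwd)  # only the common correspondences are kept
```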

Using the correspondences between the end points of the matched line segments, a global transformation between the two images is computed.

Figure 4. Less than 30 common best correspondences (matching lines) identified by our method between two images from the VARIA database (image R180.png on the left and image R002.png on the right).

Figure 5. Resulting end point correspondences identified by our method between two images from the VARIA database (image R180.png on the left and image R002.png on the right).

2.2 Global transformation estimation

A second-order polynomial transformation is used to register two retinal images, for its capability to describe complex non-linear distortion as well as for its small computational cost. At the same time, it can better take into account the anatomic structure of the human eye and its sphere-like shape, as well as the physical properties of the optical imaging system [2, 19, 20, 4]. The second-order polynomial model greatly reduces the registration error [3, 11], but it requires at least 12 pairs of landmark control points.

Given a point p = (x, y)ᵀ in one image, the corresponding point q = (x′, y′)ᵀ in the second image is located by the quadric spatial map θm,n according to:

q = θm,n X(p),   (1)

where θm,n is a 2 × 6 parameter matrix:

θm,n = ( θ11  θ12  θ13  θ14  θ15  θ16
         θ21  θ22  θ23  θ24  θ25  θ26 ),   (2)

and

X(p) = (x²  xy  y²  x  y  1)ᵀ.   (3)
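Each point correspondence contributes two linear equations in the twelve entries of θm,n, so the parameter matrix can be estimated by linear least squares from the end points of the matched line segments. A minimal NumPy sketch, with function names of our own choosing:

```python
import numpy as np

def monomials(p):
    """X(p) = (x^2, xy, y^2, x, y, 1)^T for a point p = (x, y)."""
    x, y = p
    return np.array([x * x, x * y, y * y, x, y, 1.0])

def fit_quadratic_transform(points_src, points_dst):
    """Least-squares estimate of the 2x6 matrix theta mapping each
    source point to its corresponding destination point (needs enough
    correspondences for the 12 parameters to be determined)."""
    X = np.array([monomials(p) for p in points_src])  # n x 6 design matrix
    Q = np.asarray(points_dst, dtype=float)           # n x 2 target points
    theta, *_ = np.linalg.lstsq(X, Q, rcond=None)     # 6 x 2 solution
    return theta.T                                    # 2 x 6, as in Eq. (2)

def apply_quadratic_transform(theta, p):
    """q = theta X(p), as in Eq. (1)."""
    return theta @ monomials(p)
```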

3. Experimental results

Accuracy validation is essential for the clinical application of medical image registration techniques, but registration validation remains a challenging problem in practice, mainly due to the lack of 'ground truth' [13]. The quality of the proposed method is therefore assessed without a ground truth registration. For a consistent registration, the forward transformation should match the inverse of the backward transformation. According to this observation, the cumulative inverse consistency error (CICE) is defined as the squared difference between the composition of the forward and reverse transformations and the identity mapping:

CICE(x) = ||hji(hij(x)) − x||²,   (4)

where hij is the transformation from each evaluated image i to the template image j, x is a vector containing the coordinates of an image pixel, and || · || is the standard Euclidean norm. CICE is computed for all pixels in the image, and its mean and standard deviation are calculated. A pair of transformations that provides good correspondence between images should have zero CICE.
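For illustration, the CICE statistics can be computed over the whole pixel grid as sketched below, assuming the estimated forward and backward transformations are available as callables mapping an (n, 2) array of points to an (n, 2) array (the function and parameter names are ours):

```python
import numpy as np

def cice_statistics(h_ij, h_ji, height, width):
    """Mean and standard deviation of CICE(x) = ||h_ji(h_ij(x)) - x||^2
    over all pixel coordinates of a height x width image."""
    ys, xs = np.mgrid[0:height, 0:width]
    pts = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
    roundtrip = h_ji(h_ij(pts))                 # forward, then backward
    err = ((roundtrip - pts) ** 2).sum(axis=1)  # squared Euclidean norm
    return err.mean(), err.std()
```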

In this preliminary study, we test our method on 3 pairs of images belonging to the same individual. The results are shown in Table 1.

Table 1. Registration results in terms of mean and standard deviation of the cumulative inverse consistency error (CICE).

Pair of images          mean CICE   std CICE   computational time
R002.png and R180.png   0.6580      0.5471     26.37 s
R181.png and R180.png   5.1600      5.2760     28.95 s
R002.png and R181.png   3.4352      3.4115     24.89 s

The best registration result in terms of CICE is shown in Figures 6(a) and 6(b), where R180.png was the reference image and R002.png was the target image, while the worst registration result in terms of CICE is shown in Figures 6(c) and 6(d), where R180.png was the reference image and R181.png was the target image.

4. Conclusion

In this paper, we presented preliminary work on a novel feature-based framework for retinal image registration. The features represent straight lines connecting manually extracted vascular landmarks on the vascular tree of the retina: bifurcations, branchings, crossings and end points. In the built retinal graph, one straight line between two vascular landmarks indicates that they are connected by a vascular segment in the original retinal image. The characteristic vector of each straight line describes the geometric and spatial location properties of a line segment. The Euclidean distance between characteristic vectors in the two images is used as the similarity metric in order to match line segments, while a quadric model is used to estimate the global transformation between the two images using the end points of the matched line segments. Based on the cumulative inverse consistency error computed in the absence of a ground truth registration, on the visual inspection of the fusion of the reference image and the transformed target image, and on the visual inspection of the fusion of the reference retinal graph and the transformed target graph, experimental results show that our method is accurate and efficient for retinal image registration.

Future work will include additional evaluation and the development of new feature matching metrics and validation schemes in order to improve the performance of the proposed retinal image registration method.

References

[1] A. Arakala, S. A. Davis, and K. J. Horadam. Retina features based on vessel graph substructures. In Biometrics (IJCB), 2011 International Joint Conference on, pages 1–6, Oct. 2011.
[2] A. Can, C. Stewart, B. Roysam, and H. Tanenbaum. A feature-based, robust, hierarchical algorithm for registering pairs of images of the curved human retina. IEEE Trans. Pattern Anal. Mach. Intell., 24:347–364, March 2002.
[3] T. Chanwimaluang, G. L. Fan, and S. R. Fransen. Hybrid retinal image registration. IEEE Transactions on Information Technology in Biomedicine, 10(1):129–142, 2006.
[4] J. Chen, R. T. Smith, J. Tian, and F. A. Laine. A novel registration method for retinal images based on local features. In IEEE Engineering in Medicine and Biology Society (EMBC), pages 2242–2245, 2008.
[5] J. Chen, J. Tian, N. Lee, J. Zheng, R. T. Smith, and F. A. Laine. A partial intensity invariant feature descriptor for multimodal retinal image registration. IEEE Transactions on Biomedical Engineering, 57(7):1707–1718, July 2010.
[6] L. Chen, Y. Xiang, Y. J. Chen, and X. Zhang. Retinal image registration using bifurcation structures. In Image Processing (ICIP), 2011 18th IEEE International Conference on, pages 2169–2172, Sept. 2011.
[7] J. L. Crowley, P. Stelmaszyk, T. Skordas, and P. Puget. Measurement and integration of 3-d structures by tracking edge lines. International Journal of Computer Vision, 8(1):29–52, July 1992.
[8] K. Deng, J. Tian, J. Zheng, X. Zhang, X. Dai, and M. Xu. Retinal fundus image registration via vascular structure graph matching. International Journal of Biomedical Imaging, pages 1–13, 2010.
[9] R. Deriche and O. Faugeras. Tracking line segments. Image and Vision Computing, 8(4):261–270, Nov. 1990.
[10] M. Fernandes, Y. Gavet, and J. C. Pinoli. A feature-based dense local registration of pairs of retinal images. In Computer Vision Theory and Applications (VISAPP), 4th International Conference on, volume 1, pages 265–268, 2009.
[11] F. Laliberte, L. Gagnon, and Y. Sheng. Registration and fusion of retinal images: a comparative study. In International Conference on Pattern Recognition, pages 715–718, Aug. 2002.
[12] F. Laliberte, L. Gagnon, and Y. L. Sheng. Registration and fusion of retinal images – an evaluation study. IEEE Trans. Medical Imaging, 22(5):661–673, May 2003.
[13] Z. Liu, X. Deng, and G. Z. Wang. Accuracy validation for medical image registration algorithms: a review. Chin Med Sci J, 27(3):176–181, Sept. 2012.
[14] J. B. A. Maintz and M. A. Viergever. A survey of medical image registration. Medical Image Analysis, 2(1):1–36, 1998.
[15] G. K. Matsopoulos, N. A. Mouravliansky, K. K. Delibasis, and K. S. Nikita. Automatic retinal image registration scheme using global optimization techniques. IEEE Transactions on Information Technology in Biomedicine, 3(1):47–60, 1999.
[16] M. Ortega, M. G. Penedo, J. Rouco, N. Barreira, and M. J. Carreira. Personal verification based on extraction and characterization of retinal feature points. Journal of Visual Languages and Computing, 20(2):80–90, 2009.
[17] N. Ritter, R. Owens, J. Cooper, R. Eikelboom, and P. Van Saarloos. Registration of stereo and temporal images of the retina. IEEE Transactions on Medical Imaging, 18(5):404–418, 1999.
[18] N. Ryan, C. Heneghan, and P. de Chazal. Registration of digital retinal images using landmark correspondence by expectation maximization. Image and Vision Computing, 22:883–898, 2004.
[19] C. V. Stewart, C. L. Tsai, and B. Roysam. The dual-bootstrap iterative closest point algorithm with application to retinal image registration. IEEE Trans. Med. Imag., 22(11):1379–1394, Nov. 2003.
[20] L. Wei, L. Pan, L. Lin, and L. Yu. The retinal image registration based on scale invariant feature. In Biomedical Engineering and Informatics (BMEI), 2010 3rd International Conference on, volume 2, pages 639–643, Oct. 2010.
[21] C. Xing and P. Qiu. Intensity-based image registration by nonparametric local smoothing. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 33(10):2081–2092, Oct. 2011.
[22] F. Zana and J. C. Klein. A multimodal registration algorithm of eye fundus images using vessels detection and Hough transform. IEEE Trans. Medical Imaging, 18(5):419–428, May 1999.
[23] B. Zitova and J. Flusser. Image registration methods: a survey. Image and Vision Computing, 21:977–1000, 2003.

Figure 6. Registration results on images from the VARIA database belonging to the same individual. (a) and (c): fusion of the reference image and the transformed target image; (b) and (d): fusion of the reference retinal graph (blue) and the transformed target retinal graph (red).