
Columbia International Publishing
American Journal of Algorithms and Computing (2013) Vol. 1 No. 1 pp. 39-49
doi:10.7726/ajac.2013.1003

Review

*Corresponding e-mail: [email protected]
1 Faculty of Engineering & Technology, Manav Rachna International University, Faridabad, India
1* Research & Development Cell, Manav Rachna International University, Faridabad, India
2 Faculty of Engineering & Technology, Jamia Millia Islamia University, New Delhi, India


Image Registration Methods: A Short Review

Sunanda Gupta 1, S. K. Chakarvarti 1*, and Zaheerudin 2

Received 4 September 2013; Published online 14 December 2013

© The author(s) 2013. Published with open access at www.uscip.us

Abstract

The purpose of this paper is to provide a review of classic as well as recent image registration methods. Image registration aims to align two or more images of the same scene taken at different times, with different instruments, or from different viewpoints. In this process two images (the reference and sensed images) are geometrically aligned. The reviewed approaches are classified according to the four basic steps of the image registration procedure: feature detection, control point matching, design of the mapping function, and image transformation and resampling, and according to their nature (area based and feature based). The paper also aims to provide a comprehensive study of different image registration methods, regardless of particular application areas.

Keywords: Image registration; Feature detection; Feature matching; Mapping function; Resampling; Area based registration

1. Introduction

Image registration is one of the most important image processing applications of geometric transformation. Its goal is to find the correspondence between images of the same scene. Many image processing applications, such as computer vision, medical imaging, and remote sensing, require image registration, which is the process of overlaying two or more images taken at different times or from different viewpoints, acquired by the same or different sensors. To register images, we need to find a geometric transformation function that aligns the images with respect to the reference image (Zitova and Flusser, 2003). Rigid, affine, projective and perspective transformations are commonly used in the image registration process. A large variety of registration techniques have been studied for different kinds of applications over the past years. The objective of this paper is to distinguish between image variations and the registration methods applied for particular variations in the image.


2. Image registration process

As mentioned above, image registration is widely used in computer vision, remote sensing, medical imaging, etc. Depending upon the image acquisition process, image registration can be divided into the following categories (Zitova and Flusser, 2003):

(a) Multiview analysis: In this kind of analysis, pictures of the same scene are taken from different viewpoints, e.g. mosaicking of images of a surveyed area.

(b) Multitemporal analysis: In this analysis, images of the same scene are taken at different times under different conditions, e.g. landscape planning or monitoring of a patient's healing process (such as tumor growth).

(c) Multimodal analysis: Pictures of the same scene are taken by different sensors. The objective is to enhance the visualization of the scene, e.g. combining a PET and an MRI scan of the same patient.

As technology develops very fast these days, today's latest technology is obsolete tomorrow. There is therefore a lot of diversity and degradation in the images to be registered, and a single registration method cannot be used for all kinds of images; every method is developed for a particular kind of image. Generally, the image registration process consists of the following steps.

2.1 Detection of features

A feature is any portion of the image which can be identified and located easily in both images. A feature can be a point, a line, or a corner. Identification of features can be done manually as well as automatically. These features are represented by their point representation and are called control points (CPs). There are basically two main approaches to feature detection:

2.1.1 Feature based methods

Feature based methods are also known as point based methods. In this approach, important features are extracted using feature extraction algorithms. Important regions (fields, lakes), lines (region boundaries, roads, rivers) and points (line intersections, points on region corners) are taken as features. The selected features should be unique and efficiently detectable in both images, and care is taken that they are spread uniformly over the image. They are more tolerant to local distortions (Zitova and Flusser, 2003). Features are expected to be invariant, i.e. stable in time, so that they stay in fixed positions. Feature based methods are used for images having large intensity variations. Fig. 2 shows the different steps of a registration process using feature detection.

Projections of high-contrast closed-boundary regions of an appropriate size (Goshtasby et al., 1986; Flusser and Suk, 1994), water reservoirs and lakes (Goshtasby and Stockman, 1985; Holm, 1991), buildings (Hsieh et al., 1992), forests (Sester et al., 1998), urban areas (Roux, 1996) or shadows (Brivio et al., 1992) are generally considered as region-like features. Region features are


detected by means of segmentation methods (Pal and Pal, 1993). The resulting registration accuracy is influenced by the accuracy of the segmentation. Nowadays, emphasis is also placed on selecting region features that are invariant with respect to change of scale. The idea of virtual circles, using the distance transform, was described by Alhichri and Kamel (2003). Matas et al. (2002) demonstrated a different approach to this problem using Maximally Stable Extremal Regions based on the homogeneity of image intensities. General line segments (Maitre and Wu, 1987; Wang and Chen, 1997; Moss and Hancock, 1997), coastal lines (Shin et al., 1997), object contours (Li et al., 1995; Dai and Khorram, 1997; Govindu et al., 1998), roads (Li et al., 1992), or elongated anatomic structures in medical imaging (Vujovic and Brzakovic, 1997) exemplify line features. The most commonly used line feature detection methods are the Canny detector (Canny, 1986) and the Laplacian of Gaussian (Marr and Hildreth, 1980). The point features group comprises approaches working with line intersections (Stockman et al., 1982; Vasileisky et al., 1998), road crossings (Roux, 1996; Growe and Tonjes, 1997), centroids of water regions, the most distinctive points with respect to a specified measure of similarity (Likar and Pernus, 1999), and corners (Wang et al., 1983; Hsieh et al., 1992; Bhattacharya and Sinha, 1997). The computational time necessary for registration increases as the number of detected points increases, so several methods have been developed to detect a smaller number of feature points without degrading the quality of the registration.
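As a small illustration of point-feature (control point) extraction, the following Python sketch uses OpenCV's Shi-Tomasi corner detector; the file names, parameter values and the choice of detector are illustrative assumptions rather than part of any specific method reviewed above.

```python
import cv2
import numpy as np

def detect_control_points(gray, max_points=200, quality=0.01, min_dist=10):
    """Detect corner-like point features to serve as candidate control points.

    Uses the Shi-Tomasi 'good features to track' detector; a Harris- or
    Canny-based detector could be substituted, as discussed in the text.
    """
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=max_points,
                                      qualityLevel=quality,
                                      minDistance=min_dist)
    # goodFeaturesToTrack returns an (N, 1, 2) array of (x, y) positions
    return corners.reshape(-1, 2) if corners is not None else np.empty((0, 2))

# Usage (file names are placeholders):
# ref  = cv2.cvtColor(cv2.imread("reference.png"), cv2.COLOR_BGR2GRAY)
# sens = cv2.cvtColor(cv2.imread("sensed.png"), cv2.COLOR_BGR2GRAY)
# cp_ref, cp_sens = detect_control_points(ref), detect_control_points(sens)
```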

2.1.2 Area based methods

Area based methods are often used for template matching, in which the location of a template is found in the reference image. The first step, feature detection, is therefore omitted in area based methods.

Fig. 3 shows the different steps of a registration process using template matching.
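As an illustrative sketch of the area based approach (cf. Fig. 3), the NumPy code below computes a normalized cross-correlation (NCC) surface by sliding a template over the reference image and returns the best-matching offset. It is a minimal brute-force version, not an optimized implementation.

```python
import numpy as np

def ncc_match(reference, template):
    """Slide the template over the reference image and return the
    normalized cross-correlation (NCC) surface plus the best match offset."""
    th, tw = template.shape
    rh, rw = reference.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    ncc = np.zeros((rh - th + 1, rw - tw + 1))
    for i in range(ncc.shape[0]):
        for j in range(ncc.shape[1]):
            w = reference[i:i + th, j:j + tw]
            w = w - w.mean()
            denom = np.sqrt((w ** 2).sum()) * t_norm
            ncc[i, j] = (w * t).sum() / denom if denom > 0 else 0.0
    best = np.unravel_index(np.argmax(ncc), ncc.shape)
    return ncc, best  # best = (row, col) of the top-left corner of the match
```

In practice a library routine such as OpenCV's cv2.matchTemplate with the cv2.TM_CCOEFF_NORMED method would normally replace the explicit double loop.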

2.2 Corresponding features matching

Once the features are detected in the reference and sensed images, they need to be matched, either using the spatial relationships between the features or using image intensity values in their close neighborhoods.

2.2.1 Feature based methods

The detected features in the reference and sensed images are called control points (CPs). In the feature matching step, the pairwise correspondence between detected features is calculated using either their spatial distribution or various feature descriptors. Methods based on spatial relations are used when the information about the detected features is ambiguous or their neighborhoods are locally distorted; they exploit the information about the distances between the CPs and about their spatial distribution.
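A toy sketch in the spirit of such spatial-relations matching follows: every candidate CP pair votes for the transformation it would imply, and only pairs consistent with the dominant vote are kept. The restriction to a pure translation, the bin size and the tolerance are illustrative assumptions; this is not the clustering algorithm of Stockman et al. (1982) itself.

```python
import numpy as np

def match_by_translation_voting(cp_ref, cp_sens, bin_size=2.0, tol=3.0):
    """Toy spatial-relations matcher: each (reference, sensed) CP pair votes
    for the translation it would imply; the dominant translation is kept and
    only CP pairs consistent with it are accepted as matches."""
    # all candidate translations (sensed -> reference), shape (Nr, Ns, 2)
    diffs = cp_ref[:, None, :] - cp_sens[None, :, :]
    votes = np.round(diffs / bin_size).reshape(-1, 2)
    # find the most frequently voted (quantized) translation
    uniq, counts = np.unique(votes, axis=0, return_counts=True)
    t = uniq[np.argmax(counts)] * bin_size
    # keep the CP pairs whose implied translation agrees with the dominant one
    matches = [(i, j) for i in range(len(cp_ref)) for j in range(len(cp_sens))
               if np.linalg.norm(diffs[i, j] - t) < tol]
    return t, matches
```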

Sunanda Gupta, S.K. Chakarvarti, and Zaheerudin/ American Journal of Algorithms and Computing (2013) Vol. 1 No. 1 pp. 39-49

42

Goshtasby and Stockman (1985) proposed a registration method based on a graph matching algorithm. Stockman et al. (1982) developed a clustering technique to match points connected by abstract edges or line segments. Estimating the correspondence of features using their descriptions is an alternative to methods exploiting spatial relations. Features with the most similar invariant descriptions are paired as corresponding ones from the sensed and reference images. The feature characteristics and the assumed geometric deformation of the images decide the choice of the type of invariant description. While looking for the best matching feature pairs in the space of feature descriptors, the minimum distance rule with thresholding is generally applied. Matching likelihood coefficients (Flusser, 1995) can provide a more robust solution for better handling of questionable situations. Guest et al. (2001) demonstrated the selection of features according to the reliability of their possible matches. The image intensity function itself, limited to the close neighborhood of the feature, is the simplest feature description (Abdelsayed et al., 1995; Lehmann, 1998); the cross-correlation computed on these neighborhoods estimates the feature correspondence. Ventura et al. (1990) used a multivalue logical tree to represent relations among image features by various descriptors (ellipticity, angle, thinness, etc.), and then found the feature correspondence by comparing the multivalue logical trees of the reference and sensed images. Brivio et al. (1992) also applied multivalue logical trees together with moment invariants.
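The minimum distance rule with thresholding mentioned above can be sketched as follows; the descriptor arrays are assumed to have been computed elsewhere, and the distance metric and threshold value are illustrative choices.

```python
import numpy as np
from scipy.spatial.distance import cdist

def match_by_descriptors(desc_ref, desc_sens, max_dist=0.2):
    """Minimum distance rule with thresholding: each reference feature is
    paired with the sensed feature whose invariant descriptor is nearest,
    provided the descriptor distance is below the threshold."""
    d = cdist(desc_ref, desc_sens)   # all pairwise descriptor distances
    matches = []
    for i, row in enumerate(d):
        j = int(np.argmin(row))
        if row[j] < max_dist:
            matches.append((i, j))   # (index in reference, index in sensed)
    return matches
```

A common robustness refinement, not required by the rule itself, is to also demand that the nearest descriptor be clearly closer than the second nearest before accepting a match.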

2.3 Estimation of geometric transformation

After establishing the feature correspondence, the geometric transformation function, also known as the mapping function, is constructed. The mapping function maps the features of one image onto the locations of the matching features in the sensed image. Generally, a particular parametric transformation model is chosen depending upon the capture geometry of the sensed image. Some methods estimate the parameters of the mapping function while searching for the feature correspondence, thus combining this step with the previous one. The sensed image should be transformed so as to overlay the reference one. The design of the mapping function uses the correspondence of the CPs from the sensed and reference images, together with the requirement that corresponding CP pairs stay as close to each other as possible. The problem is solved by choosing the type of the mapping function and estimating its parameters. The assumed geometric deformation of the sensed image, the method of image acquisition, and the required accuracy of the registration decide the type of the mapping function (Zitova and Flusser, 2003). Depending upon the amount of image data they use, models of mapping functions can be classified into two broad categories.

2.3.1 Global models

These models use all the control points to estimate a single set of mapping functions, which is used for the entire image. The similarity transform is the simplest global model; the most common transformations are rotation, shear and scaling. A transformation is a mapping from one


vector space to another, consisting of a linear part, expressed as a matrix multiplication, and an additive part, an offset or translation. Fig. 1 shows several similarity transformations applied to an image. As a mathematical and computational convenience, the transformation can be written as

[x y 1] = [w z 1] T

where T is the affine (transformation) matrix.
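Under this row-vector convention, the entries of an affine T can, for example, be estimated from matched control point pairs by least squares. The sketch below is a minimal illustration under that assumption; the function and variable names are not from the paper.

```python
import numpy as np

def estimate_affine_T(cp_src, cp_dst):
    """Estimate the affine matrix T in the row-vector convention
        [x y 1] = [w z 1] T
    from matched control points, mapping cp_src (w, z) to cp_dst (x, y).
    Whether cp_src holds the sensed or the reference CPs depends on whether
    a forward or backward mapping is required."""
    n = cp_src.shape[0]
    A = np.hstack([cp_src, np.ones((n, 1))])   # rows [w z 1]
    B = np.hstack([cp_dst, np.ones((n, 1))])   # rows [x y 1]
    T, *_ = np.linalg.lstsq(A, B, rcond=None)  # solves A @ T ~= B
    return T                                   # 3x3; last column ~ [0 0 1]^T
```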

2.3.1.1 Rotation transformation

T_r = [  cos(r)   sin(r)   0
        -sin(r)   cos(r)   0
           0        0      1 ]

where r specifies the angle of rotation.

Fig. 1. Example of global transformations: (a) original image, (b) rotated image, (c) scaled image, (d) sheared image.

2.3.1.2 Scaling Transformation

T_s = [ sx   0    0
         0   sy   0
         0   0    1 ]

where sx specifies the scale factor along the x axis and sy specifies the scale factor along the y axis.


2.3.1.3 Shear Transformation

T_sh = [  1    shy   0
         shx    1    0
          0     0    1 ]

where shx specifies the shear factor along the x axis and shy specifies the shear factor along the y axis.
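The following sketch shows how the rotation, scaling and shear matrices above can be built and applied to points in homogeneous coordinates under the row-vector convention; the composition example at the end uses arbitrary illustrative parameter values.

```python
import numpy as np

def rotation_T(r):
    """Rotation by angle r (radians), row-vector convention [x y 1] = [w z 1] T."""
    return np.array([[ np.cos(r), np.sin(r), 0.0],
                     [-np.sin(r), np.cos(r), 0.0],
                     [ 0.0,       0.0,       1.0]])

def scaling_T(sx, sy):
    return np.array([[sx, 0.0, 0.0],
                     [0.0, sy, 0.0],
                     [0.0, 0.0, 1.0]])

def shear_T(shx, shy):
    return np.array([[1.0, shy, 0.0],
                     [shx, 1.0, 0.0],
                     [0.0, 0.0, 1.0]])

def apply_T(points, T):
    """Apply T to an (N, 2) array of points using homogeneous coordinates."""
    n = points.shape[0]
    homog = np.hstack([points, np.ones((n, 1))])   # rows [w z 1]
    return (homog @ T)[:, :2]                      # rows [x y]

# Example: compose scaling, rotation and shear into one global mapping.
# With row vectors, the leftmost factor is applied first.
# T = scaling_T(1.5, 1.5) @ rotation_T(np.deg2rad(30)) @ shear_T(0.2, 0.0)
```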

A similarity transformation preserves angles between lines and changes all distances in the same ratio; it is also called a 'shape preserving mapping'.

2.3.2 Local models

In this type of modeling, a single set of mapping functions cannot be used for the entire image. The image is broken into a number of parts, each part is considered a separate image, and the parameters of the mapping function are defined for each part separately. The superiority of local registration methods over global ones was shown by Goshtasby (1988), Ehlers and Fogel (1994), Wiemker et al. (1996) and Flusser (1992). The weighted least square and weighted mean methods register images locally by adding a slight variation to the original least square method (Goshtasby, 1988). The local methods called piecewise linear mapping (Goshtasby, 1986) and piecewise cubic mapping (Goshtasby, 1987), together with Akima's quintic approach (Wiemker et al., 1996), apply a combination of CP-based image triangulation and a collection of local mapping functions, each valid within one triangle. These approaches belong to the group of interpolating methods.

2.4 Resampling image

The mapping functions estimated in the previous step are used to transform the sensed image and thus register it. Each pixel of the sensed image can be transformed directly using the estimated mapping functions; this is called the forward approach, but its implementation is complicated and, due to discretization, it can produce holes and overlaps in the output image. Therefore another approach, the backward approach, is usually used. Here the registered image data are estimated from the sensed image using the coordinates of the target pixel (in the same coordinate system as the reference image) and the inverse of the estimated mapping function. The image interpolation takes place in the sensed image on the regular grid, so neither holes nor overlaps can occur in the output image. The interpolation itself is usually realized via convolution of the image with an interpolation kernel. A survey covering the main interpolation methods was published by Lehmann et al. (1999); Parker et al. (1983) presented a comparison of interpolation methods for 2D image resampling, whereas Grevera and Udupa (1998) compared 3D image interpolation methods.
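The backward approach with bilinear interpolation can be sketched as follows, assuming an affine mapping T in the row-vector convention used earlier (mapping sensed coordinates to reference coordinates); this is a minimal illustration, not an optimized resampler, and the kernel could be replaced by any of the surveyed interpolation methods.

```python
import numpy as np

def backward_warp(sensed, T, out_shape):
    """Resample the sensed image onto the reference grid (backward approach).

    For every output pixel (x, y) the inverse mapping T^{-1} gives the
    corresponding location in the sensed image, where the intensity is
    obtained by bilinear interpolation, so no holes or overlaps occur."""
    T_inv = np.linalg.inv(T)
    h, w = out_shape
    ys, xs = np.mgrid[0:h, 0:w]
    homog = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)], axis=1)
    src = homog @ T_inv                      # row-vector convention
    sx, sy = src[:, 0], src[:, 1]

    x0, y0 = np.floor(sx).astype(int), np.floor(sy).astype(int)
    fx, fy = sx - x0, sy - y0
    x1, y1 = x0 + 1, y0 + 1

    # keep only pixels whose 2x2 interpolation neighborhood lies inside sensed
    valid = (x0 >= 0) & (y0 >= 0) & (x1 < sensed.shape[1]) & (y1 < sensed.shape[0])
    out = np.zeros(h * w)
    x0v, x1v, y0v, y1v = x0[valid], x1[valid], y0[valid], y1[valid]
    fxv, fyv = fx[valid], fy[valid]
    out[valid] = ((1 - fxv) * (1 - fyv) * sensed[y0v, x0v] +
                  fxv * (1 - fyv) * sensed[y0v, x1v] +
                  (1 - fxv) * fyv * sensed[y1v, x0v] +
                  fxv * fyv * sensed[y1v, x1v])
    return out.reshape(h, w)
```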


Fig. 2. Image registration process using feature detection: (a) reference image, (b) sensed image, (c) detected features, (d) registered image.


Fig. 3. Image registration process using template matching: (a) reference image, (b) template image, (c) NCC plot, (d) registered image.

3. Current trends and future scope

Various methods for registering images have been studied in the literature. Feature based methods use features such as edges, corners, points of intersection and centers of contours to match the sensed image with the reference image, but feature selection is often manual and thus time consuming. The correlation method is used as a similarity measure in area based methods; however, correlation tends to produce matches at multiple points in natural images such as scenery or buildings. A method combining image features with correlation therefore has many of the advantageous properties of both feature based and area based approaches, and overcomes the limitations of purely intensity based methods. Although a lot of work has been done, automatic image registration is still an open problem. Registration of images with local distortions, complex nonlinear and multimodal registration, and registration of N-D images are the most challenging tasks at this moment. When registering images with nonlinear, local geometric distortions, we face two basic problems: how to match the CPs, and how to choose the mapping functions for registration, since it is sometimes difficult to discriminate between image deformation and real changes in the scene. An automatic matching method cannot always be used for matching of control points (the first problem), since the deformation between the images can be arbitrary (Zitova and Flusser, 2003). Computational complexity is also one of the major problems in N-D image registration. In the future, an expert image registration method needs to be developed, derived from a combination of various approaches, which takes the type of the given task into account and provides an appropriate solution.


References

Abdelsayed, S., Ionescu, D., Goodenough, D., 1995. Matching and registration method for remote sensing images, Proceedings of the International Geoscience and Remote Sensing Symposium IGARSS'95, Florence, Italy, 1029-1031.
Alhichri, H.S., Kamel, M., 2003. Virtual circles: a new set of features for fast image registration, Pattern Recognition Letters 24, 1181-1190. http://dx.doi.org/10.1016/S0167-8655(02)00300-8
Barbu, A., Ionasec, R., 2009. Boosting cross-modality image registration, IEEE Joint Urban Remote Sensing Event.
Brivio, P.A., Ventura, A.D., Rampini, A., Schettini, R., 1992. Automatic selection of control points from shadow structures, International Journal of Remote Sensing 13, 1853-1860. http://dx.doi.org/10.1080/01431169208904234
Bhattacharya, D., Sinha, S., 1997. Invariance of stereo images via theory of complex moments, Pattern Recognition 30, 1373-1386. http://dx.doi.org/10.1016/S0031-3203(96)00177-X
Canny, J., 1986. A computational approach to edge detection, IEEE Transactions on Pattern Analysis and Machine Intelligence 8, 679-698. http://dx.doi.org/10.1109/TPAMI.1986.4767851
Cheng, C., Ma, L., Yu, Y., Wang, J., 2013. A novel image mosaic method based on improved SIFT algorithm and contourlet transform, Proceedings of the 32nd Chinese Control Conference, Xi'an, China, 26-28.
Dai, X., Khorram, S., 1997. Development of a feature-based approach to automated image registration for multitemporal and multisensor remotely sensed imagery, International Geoscience and Remote Sensing Symposium IGARSS'97, Singapore, 243-245.
Ehlers, M., Fogel, D.N., 1994. High-precision geometric correction of airborne remote sensing revisited: the multiquadric interpolation, Proceedings of SPIE: Image and Signal Processing for Remote Sensing 2315, 814-824. http://dx.doi.org/10.1117/12.196779
Flusser, J., 1992. An adaptive method for image registration, Pattern Recognition 25, 45-54. http://dx.doi.org/10.1016/0031-3203(92)90005-4
Flusser, J., Suk, T., 1994. A moment-based approach to registration of images with affine geometric distortion, IEEE Transactions on Geoscience and Remote Sensing 32, 382-387. http://dx.doi.org/10.1109/36.295052
Flusser, J., 1995. Object matching by means of matching likelihood coefficients, Pattern Recognition Letters 16, 893-900. http://dx.doi.org/10.1016/0167-8655(95)00032-C
Goshtasby, A., Stockman, G.C., 1985. Point pattern matching using convex hull edges, IEEE Transactions on Systems, Man and Cybernetics 15, 631-637. http://dx.doi.org/10.1109/TSMC.1985.6313439
Goshtasby, A., Stockman, G.C., Page, C.V., 1986. A region-based approach to digital image registration with subpixel accuracy, IEEE Transactions on Geoscience and Remote Sensing 24, 390-399. http://dx.doi.org/10.1109/TGRS.1986.289597
Goshtasby, A., 1986. Piecewise linear mapping functions for image registration, Pattern Recognition 19, 459-466. http://dx.doi.org/10.1016/0031-3203(86)90044-0
Goshtasby, A., 1987. Piecewise cubic mapping functions for image registration, Pattern Recognition 20, 525-533. http://dx.doi.org/10.1016/0031-3203(87)90079-3
Goshtasby, A., 1988. Image registration by local approximation methods, Image and Vision Computing 6, 255-261. http://dx.doi.org/10.1016/0262-8856(88)90016-9
Growe, S., Tonjes, R., 1997. A knowledge based approach to automatic image registration, Proceedings of the IEEE International Conference on Image Processing ICIP'97, Santa Barbara, California, 228-231. http://dx.doi.org/10.1109/ICIP.1997.632067
Govindu, V., Shekhar, C., Chellapa, R., 1998. Using geometric properties for correspondence-less image alignment, Proceedings of the International Conference on Pattern Recognition ICPR'98, Brisbane, Australia, 37-41.
Grevera, G.J., Udupa, J.K., 1998. An objective comparison of 3D image interpolation methods, IEEE Transactions on Medical Imaging 17, 642-652. http://dx.doi.org/10.1109/42.730408
Guest, E., Berry, E., Baldock, R.A., Fidrich, M., Smith, M.A., 2001. Robust point correspondence applied to two- and three-dimensional image registration, IEEE Transactions on Pattern Analysis and Machine Intelligence 23, 165-179. http://dx.doi.org/10.1109/34.908967
Holm, M., 1991. Towards automatic rectification of satellite images using feature based matching, Proceedings of the International Geoscience and Remote Sensing Symposium IGARSS'91, Espoo, Finland, 2439-2442.
Hsieh, Y.C., McKeown, D.M., Perlant, F.P., 1992. Performance evaluation of scene registration and stereo matching for cartographic feature extraction, IEEE Transactions on Pattern Analysis and Machine Intelligence 14, 214-237. http://dx.doi.org/10.1109/34.121790
Li, S.Z., Kittler, J., Petrou, M., 1992. Matching and recognition of road networks from aerial images, Proceedings of the Second European Conference on Computer Vision ECCV'92, St. Margherita, Italy, 857-861.
Li, H., Manjunath, B.S., Mitra, S.K., 1995. A contour-based approach to multisensor image registration, IEEE Transactions on Image Processing 4, 320-334. http://dx.doi.org/10.1109/83.366480
Lehmann, T.M., 1998. A two stage algorithm for model-based registration of medical images, Proceedings of the International Conference on Pattern Recognition ICPR'98, Brisbane, Australia, 344-352.
Likar, B., Pernus, F., 1999. Automatic extraction of corresponding points for the registration of medical images, Medical Physics 26, 1678-1686. http://dx.doi.org/10.1118/1.598660
Lehmann, T.M., Gönner, C., Spitzer, K., 1999. Survey: interpolation methods in medical image processing, IEEE Transactions on Medical Imaging 18, 1049-1075. http://dx.doi.org/10.1109/42.816070
Marr, D., Hildreth, E., 1980. Theory of edge detection, Proceedings of the Royal Society of London B 207, 187-217. http://dx.doi.org/10.1098/rspb.1980.0020
Maitre, H., Wu, Y., 1987. Improving dynamic programming to solve image registration, Pattern Recognition 20, 443-462. http://dx.doi.org/10.1016/0031-3203(87)90071-9
Moss, S., Hancock, E.R., 1997. Multiple line-template matching with the EM algorithm, Pattern Recognition Letters 18, 1283-1292.
Matas, J., Obdržálek, Š., Chum, O., 2002. Local affine frames for wide-baseline stereo, in: R. Kasturi, D. Laurendeau, C. Suen (Eds.), 16th International Conference on Pattern Recognition ICPR, vol. 4, 363-366.
Parker, J.A., Kenyon, R.V., Troxel, D.E., 1983. Comparison of interpolating methods for image resampling, IEEE Transactions on Medical Imaging 2, 31-39. http://dx.doi.org/10.1109/TMI.1983.4307610
Pal, N.R., Pal, S.K., 1993. A review on image segmentation techniques, Pattern Recognition 26, 1277-1294. http://dx.doi.org/10.1016/0031-3203(93)90135-J
Roux, M., 1996. Automatic registration of SPOT images and digitized maps, Proceedings of the IEEE International Conference on Image Processing ICIP'96, Lausanne, Switzerland, 625-628. http://dx.doi.org/10.1109/ICIP.1996.560951
Reji, R., Vidya, R., 2012. Comparative analysis in satellite image registration, IEEE International Conference on Computational Intelligence and Computing Research.
Stockman, G., Kopstein, S., Benett, S., 1982. Matching images to models for registration and object detection via clustering, IEEE Transactions on Pattern Analysis and Machine Intelligence 4, 229-241. http://dx.doi.org/10.1109/TPAMI.1982.4767240
Shin, D., Pollard, J.K., Muller, J.P., 1997. Accurate geometric correction of ATSR images, IEEE Transactions on Geoscience and Remote Sensing 35, 997-1006. http://dx.doi.org/10.1109/36.602542
Sester, M., Hild, H., Fritsch, D., 1998. Definition of ground control features for image registration using GIS data, Proceedings of the Symposium on Object Recognition and Scene Classification from Multispectral and Multisensor Pixels, CD-ROM, Columbus, Ohio, 7 pp.
Ventura, A.D., Rampini, A., Schettini, R., 1990. Image registration by recognition of corresponding structures, IEEE Transactions on Geoscience and Remote Sensing 28, 305-314. http://dx.doi.org/10.1109/36.54357
Vujovic, N., Brzakovic, D., 1997. Establishing the correspondence between control points in pairs of mammographic images, IEEE Transactions on Image Processing 6, 1388-1399. http://dx.doi.org/10.1109/83.624955
Vasileisky, A.S., Zhukov, B., Berger, M., 1998. Automated image coregistration based on linear feature recognition, Proceedings of the Second Conference Fusion of Earth Data, Sophia Antipolis, France, 59-66.
Wang, C.Y., Sun, H., Yadas, S., Rosenfeld, A., 1983. Some experiments in relaxation image matching using corner features, Pattern Recognition 16, 167-182. http://dx.doi.org/10.1016/0031-3203(83)90020-1
Wiemker, R., Rohr, K., Binder, L., Sprengel, R., Stiehl, H.S., 1996. Application of elastic registration to imagery from airborne scanners, International Archives for Photogrammetry and Remote Sensing XXXI-B4, 949-954.
Wang, W.H., Chen, Y.C., 1997. Image registration by control points pairing using the invariant properties of line segments, Pattern Recognition Letters 18, 269-281. http://dx.doi.org/10.1016/S0167-8655(97)00010-X
Zitova, B., Flusser, J., 2003. Image registration methods: a survey, Image and Vision Computing 21, 977-1000. http://dx.doi.org/10.1016/S0262-8856(03)00137-9