
IEEE/OSA/IAPR International Conference on Informatics, Electronics & Vision

Mixed scheme based multimodal medical image fusion using Daubechies Complex Wavelet Transform

Rajiv Singh¹, Richa Srivastava¹, Om Prakash¹,² and Ashish Khare¹

¹Department of Electronics and Communication, University of Allahabad, Allahabad, INDIA
¹Email: {rajivsingh, gaur.richa}@gmail.com, ashishkhare@hotmail.com

²Centre of Computer Education, University of Allahabad, Allahabad, INDIA
²Email: [email protected]

Abstract- Multimodal medical image fusion is an important task for retrieving complementary information from different modalities of medical images. Image fusion can be performed using either spatial or transform domain methods. Limitations of spatial domain fusion methods led to transform domain methods. Discrete wavelet transform (DWT) based fusion is one of the most widely used transform domain methods, but it suffers from shift sensitivity and does not provide any phase information. These disadvantages of DWT motivated us to use the complex wavelet transform. In the present work, we have proposed a new multimodal medical image fusion method using the Daubechies complex wavelet transform (DCxWT) which applies two separate fusion rules for approximation and detail coefficients. The shift invariance, availability of phase information and multiscale edge information properties of DCxWT improve the quality of the fused image. We have compared the proposed method with spatial domain fusion methods (PCA and linear fusion) and transform domain fusion methods (discrete and lifting wavelet transforms). Comparison of results has been done qualitatively as well as by using different fusion metrics (entropy, standard deviation, fusion factor, fusion symmetry and Q^{AB/F}). On the basis of qualitative and quantitative analysis of the obtained results, the proposed method is found to be better than the spatial domain fusion methods (PCA and linear fusion) and the transform domain fusion methods (discrete and lifting wavelet transforms).

I. INTRODUCTION

Medical image processing has been a challenging and interesting area of research for the last three decades, following the invention of multimodal imaging sensors such as X-ray, CT (Computed Tomography) and MRI (Magnetic Resonance Imaging) scanners. These inventions influenced the interest of researchers in medical imaging across a wide range of applications like denoising, registration, classification, segmentation and fusion. Unlike the other applications of medical imaging, image fusion [1] extracts complementary information from different modalities of medical images. Fusion is important for multimodal images, as a single modality provides only a specific kind of information. For example, a CT image provides the details of dense hard tissues and is useful in tumor or anatomical detection, whereas an MRI image gives information about soft tissues. Similarly, a T1-MRI image provides details about the anatomical structure of tissues, whereas a T2-MRI image gives information about normal and abnormal tissues. Thus we cannot get all kinds of information from one single modality; hence fusion of multimodal images is required. Generally, image fusion [2] is defined as the process of combining multiple source images into a single image which is more informative for human or machine perception. The basic requirement for image fusion is that the fused image should carry all relevant information with reduced noise, and that no artifact is introduced during the fusion process. The available literature [3, 4] in this area has proved the clinical importance of fusion of medical images.

978-1-4673-1154-0/12/$31.00 ©2012 IEEE

Image fusion methods can be grouped into three categories: pixel level, region level and decision level. The present work uses pixel level fusion, as it is simple and computationally efficient. There are two approaches to the fusion process: spatial domain and transform domain. Spatial domain fusion techniques are simple, and the fused image can be obtained by directly applying fusion rules to the pixel values of the source images. Simple averaging, PCA (Principal Component Analysis) [5] and linear fusion [6] are some examples of spatial domain fusion techniques. A major disadvantage of spatial domain techniques is that they introduce spatial distortions in the fused image [7] and do not provide any spectral information. Transform domain techniques overcome these disadvantages of spatial domain fusion.
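The spatial domain techniques mentioned above can be sketched in a few lines of numpy. The snippet below is a minimal illustration, not the cited implementations of [5] or [6]: simple averaging, and a PCA-weighted combination whose weights come from the dominant eigenvector of the two images' covariance matrix. All function names are illustrative, and the inputs are assumed to be co-registered grayscale arrays of equal size.

```python
import numpy as np

def average_fusion(img1, img2):
    """Pixel-level fusion by simple averaging of co-registered images."""
    return (img1.astype(float) + img2.astype(float)) / 2.0

def pca_fusion(img1, img2):
    """PCA-weighted fusion: weights come from the principal eigenvector of
    the 2x2 covariance matrix of the two (flattened) source images."""
    data = np.stack([img1.ravel(), img2.ravel()]).astype(float)
    cov = np.cov(data)                    # 2x2 covariance matrix
    vals, vecs = np.linalg.eigh(cov)      # eigh: symmetric matrix
    v = np.abs(vecs[:, np.argmax(vals)])  # dominant eigenvector
    w = v / v.sum()                       # normalize weights to sum to 1
    return w[0] * img1.astype(float) + w[1] * img2.astype(float)

a = np.random.default_rng(0).random((8, 8))
b = np.random.default_rng(1).random((8, 8))
f = average_fusion(a, b)
```

Because both rules operate directly on pixel values, they are fast, but (as the paragraph above notes) they can introduce spatial distortion and preserve no spectral information.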

Pyramid and wavelet transform based techniques are the most widely used transform domain image fusion methods. The Laplacian pyramid [8], contrast pyramid [9] and ratio of low-pass pyramid [10] are popular pyramid transform based image fusion methods. These overcome the disadvantages of spatial domain techniques but suffer from a blocking effect [7]. As a result, wavelet transform based fusion approaches [11, 12] are used, and it was found that no blocking effect occurs during the fusion process. The discrete wavelet transform (DWT) is the most widely used wavelet transform for image fusion. It provides better directional selectivity in the horizontal, vertical and diagonal directions and a better image representation. Maximum and weighted average fusion rules [7, 13] are applied for the fusion of different source images using the DWT. Another approach to image fusion is based on the lifting wavelet transform (LWT) [14] using a modulus maxima criterion. The disadvantages of the DWT and LWT are their shift sensitivity, poor directionality and lack of phase information [15]. The DWT has been found to be shift sensitive [16] due to the downsampling step in its implementation. Also, neither the DWT nor the LWT provides any phase information, as both use real filter banks.

ICIEV 2012

Due to these limitations of the real valued DWT, complex wavelet transform based fusion techniques [17] are used. Complex wavelet transforms like the dual-tree complex wavelet transform (DTCWT) [18] provide high directionality and shift invariance. But the DTCWT has a high computational requirement and, due to its redundancy, it requires more memory. Also, the DTCWT is not a true complex wavelet transform, because it uses real filter banks in its implementation. To overcome these limitations of the DTCWT, the approximately shift invariant Daubechies complex wavelet transform (DCxWT) [19, 20] was proposed; it uses complex filter banks, which makes it a true complex wavelet transform.

In the present work, we have proposed a new multimodal medical image fusion method using Daubechies complex wavelet transform (DCxWT) which applies two separate fusion rules for approximation and detail wavelet coefficients. Qualitative and quantitative comparison of the proposed method has been done with spatial domain fusion methods (PCA and Linear fusion) and wavelet transform (discrete and lifting wavelet transform) based fusion methods using different fusion performance measures.

The rest of the paper is organized as follows: the construction of the Daubechies complex wavelet transform (DCxWT) is briefly given in section II. The usefulness of DCxWT in image fusion is described in section III. Section IV explains the proposed fusion method. Experimental results and discussions are given in section V. Finally, conclusions of the work are given in section VI.

II. DAUBECHIES COMPLEX WAVELET TRANSFORM (DCxWT)

The scaling equation of multiresolution theory is given by

$$\phi(x) = 2\sum_{k} a_k\, \phi(2x-k) \qquad (1)$$

where the $a_k$ are the coefficients. The $a_k$ can be real as well as complex valued, and $\sum_k a_k = 1$. Daubechies's wavelet bases $\{\psi_{j,k}(t)\}$ in one dimension are defined through the above scaling function and a multiresolution analysis of $L^2(\mathbb{R})$. To provide a general solution, Daubechies considered the $a_k$ to be real valued only. The construction details of the Daubechies complex wavelet transform are given in [19]. The generating wavelet $\psi(t)$ is given by

$$\psi(t) = 2\sum_{n} (-1)^n\, \overline{a}_{1-n}\, \phi(2t-n) \qquad (2)$$
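As a concrete check of the normalization $\sum_k a_k = 1$ in eq. (1) and the alternating-sign construction of eq. (2), the snippet below uses the real Daubechies-4 refinement coefficients; the paper's complex-valued coefficients are not listed, so the real case stands in purely for illustration, and the modulo index wrap is an assumption of this finite example.

```python
import numpy as np

# Real Daubechies-4 refinement coefficients, scaled so that sum(a_k) = 1,
# matching the convention phi(x) = 2 * sum_k a_k * phi(2x - k) of eq. (1).
s3 = np.sqrt(3.0)
a = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / 8.0

# Wavelet coefficients following eq. (2): b_n = (-1)^n * a_{1-n}.
# The index (1 - n) is wrapped modulo len(a) for this finite example.
n = np.arange(len(a))
b = ((-1.0) ** n) * a[(1 - n) % len(a)]

print(a.sum())  # normalization: equals 1
print(b.sum())  # zeroth vanishing moment: equals 0
```

The two printed sums confirm that the scaling coefficients average to one while the derived wavelet coefficients cancel, which is what gives the wavelet its vanishing moment.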


Here $\psi(t)$ and $\phi(t)$ share the same compact support $[-N, N+1]$. Any function $f(t)$ can be decomposed into the complex scaling function and mother wavelet as:

$$f(t) = \sum_{k} c_k^{j_0}\, \phi_{j_0,k}(t) + \sum_{j=j_0}^{j_{\max}-1} \sum_{k} d_k^{j}\, \psi_{j,k}(t) \qquad (3)$$

where $j_0$ is a given resolution level, and $\{c_k^{j_0}\}$ and $\{d_k^{j}\}$ are known as the approximation and detail coefficients. The Daubechies complex wavelet transform has the following advantages:
(i) It has perfect reconstruction.
(ii) It is a non-redundant wavelet transform, unlike the dual-tree complex wavelet transform (DTCWT) [18], which has a redundancy of $2^m : 1$ for an m-dimensional signal.
(iii) It has the same number of computation steps as the DWT (although it involves complex computations), while the DTCWT needs $2^m$ times the computations of the DWT for an m-dimensional signal.
(iv) It is symmetric. This property makes it easy to handle edge points during signal reconstruction.

III. USEFULNESS OF DCxWT IN IMAGE FUSION

The Daubechies complex wavelet transform exhibits the following two important properties that directly improve the quality of the fusion process.

A. Reduced Shift Sensitivity

The Daubechies complex wavelet transform is approximately shift invariant. A transform is shift sensitive if a shift of the input signal causes an unpredictable change in the transform coefficients. In the DWT, shift sensitivity arises from the use of downsamplers in the implementation. Figure 1 shows a circular edge structure reconstructed using real and complex Daubechies wavelets at a single scale. From figure 1, it is clear that as the circular edge structure moves through space, the reconstruction from real valued DWT coefficients changes erratically, while the complex wavelet transform reconstructs all local shifts and orientations in the same manner. Shift invariance is desired during the fusion process; otherwise a mis-registration problem [21] will occur, i.e. we will get a mismatched or non-aligned image.

Figure 1: (a) A circular edge structure, (b) reconstructed using wavelet coefficients of real-valued DWT at a single scale, and (c) reconstructed using wavelet coefficients of the Daubechies complex wavelet transform at a single scale.
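The shift sensitivity caused by downsampling can be reproduced in a few lines. This sketch uses a decimated single-level Haar DWT (a real wavelet, standing in for the general decimated DWT): a step edge aligned with the decimation grid yields all-zero detail coefficients, while the same edge shifted by one sample yields non-zero ones, i.e. the representation changes unpredictably under shift rather than merely translating.

```python
import numpy as np

def haar_detail(x):
    """Single-level decimated Haar detail coefficients (the DWT high band)."""
    x = np.asarray(x, dtype=float)
    return (x[0::2] - x[1::2]) / np.sqrt(2.0)

# A step edge, and the same edge shifted by one sample.
sig = np.zeros(16)
sig[8:] = 1.0
shifted = np.roll(sig, 1)

d0 = haar_detail(sig)       # edge aligned with the decimation grid
d1 = haar_detail(shifted)   # edge shifted by a single sample

# Because of the downsampler, the one-sample shift turns an all-zero
# detail band into one with non-zero coefficients.
print(np.abs(d0))
print(np.abs(d1))
```

A shift invariant transform would instead produce (approximately) translated coefficients, which is the behavior the DCxWT approximates.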


B. Availability of Phase Information

The Daubechies complex wavelet transform provides phase information through the imaginary part of its wavelet coefficients. Most of the structural information of an image is contained in its phase. To show the importance of phase, we have taken two images, a cameraman image and a medical image, and decomposed them by DCxWT. These images are then reconstructed after exchanging their phases with each other. From figure 2, it is clear that the phase of an image represents its structural details, or skeleton. Phase is an important criterion for detecting strong features of images such as edges, corners etc. Hence, by using the Daubechies complex wavelet transform we are able to preserve more relevant information during the fusion process, which gives a better representation of the fused image.

Figure 2: (a) cameraman image, (b) medical image, (c) image reconstructed from the phase of the wavelet coefficients of the cameraman image and the modulus of the wavelet coefficients of the medical image, and (d) image reconstructed from the phase of the wavelet coefficients of the medical image and the modulus of the wavelet coefficients of the cameraman image.
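The phase-swap experiment of figure 2 can be imitated with the Fourier transform, whose modulus/phase split is analogous to the modulus and phase of complex wavelet coefficients. This sketch uses numpy's FFT rather than a DCxWT implementation, and random arrays stand in for the cameraman and medical images.

```python
import numpy as np

rng = np.random.default_rng(42)
img_a = rng.random((64, 64))   # stands in for the cameraman image
img_b = rng.random((64, 64))   # stands in for the medical image

Fa, Fb = np.fft.fft2(img_a), np.fft.fft2(img_b)

# Hybrid images: modulus of one image combined with the phase of the other.
hybrid_ab = np.real(np.fft.ifft2(np.abs(Fa) * np.exp(1j * np.angle(Fb))))
hybrid_ba = np.real(np.fft.ifft2(np.abs(Fb) * np.exp(1j * np.angle(Fa))))

# Sanity check: modulus and phase of the SAME image reconstruct it exactly.
recon_a = np.real(np.fft.ifft2(np.abs(Fa) * np.exp(1j * np.angle(Fa))))
```

With natural images, `hybrid_ab` visually resembles the phase donor `img_b`: structure follows phase, which is the point this section makes about complex wavelet coefficients.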

IV. THE PROPOSED METHOD

We have proposed a new multimodal medical image fusion method using the Daubechies complex wavelet transform, with two different fusion rules applied separately to the scaling and detail wavelet coefficients. Since the scaling or approximation coefficients of an image represent its average information, and a good fusion process should not lose this information, we have chosen the maximum fusion rule for the scaling or approximation coefficients.

The detail complex valued coefficients carry information about strong features such as edges and corners; therefore we have to fuse the detail coefficients in such a way that the fused image preserves edge information as well as representing strong features (boundaries, corners) precisely. It was found experimentally that the energy values of the detail wavelet coefficients carry most of the structural information, like edges and boundaries. Therefore, we have chosen an energy based fusion method for the detail complex valued coefficients. In the energy based fusion scheme, the energy of each detail band is computed, and the fused image is obtained by selecting detail complex valued coefficients on the basis of the energy values of the detail sub bands.

The proposed fusion algorithm can be summarized as follows:
(i) Decompose the source images $S_1(x,y)$ and $S_2(x,y)$ using the Daubechies complex wavelet transform (DCxWT) to obtain approximation $AS_1(x,y)$, $AS_2(x,y)$ and detail $DS_1(x,y)$, $DS_2(x,y)$ complex wavelet coefficients. Mathematically,

$$[AS_1(x,y)\;\; DS_1(x,y)] = DCxWT[S_1(x,y)]$$
and
$$[AS_2(x,y)\;\; DS_2(x,y)] = DCxWT[S_2(x,y)] \qquad (4)$$

(ii) For the approximation coefficients (i.e. $AS_1(x,y)$ and $AS_2(x,y)$), the maximum fusion rule is applied as below:

$$AS(x,y) = \begin{cases} AS_1(x,y), & \text{if } |AS_1(x,y)| > |AS_2(x,y)| \\ AS_2(x,y), & \text{if } |AS_2(x,y)| > |AS_1(x,y)| \end{cases} \qquad (5)$$

where $AS(x,y)$ is the approximation level wavelet coefficient of the fused image.
(iii) For the detail sub bands $DS_j(x,y)$ ($j$ ranging over the detail sub bands) of the source images $S_i(x,y)$ ($i$ ranging over the source images), the energy of each sub band, denoted by $EDS_j(x,y)$, is defined by:

$$EDS_j(x,y) = \sum_{k=1}^{n} \left[ DS_j(x,y) \right]^2 \qquad (6)$$

where $k = 1, 2, \ldots, n$ and $n$ is the maximum size of the detail sub bands.

If $EDS_1(x,y)$ and $EDS_2(x,y)$ are the energies of the detail sub bands $DS_1(x,y)$ and $DS_2(x,y)$ of the source images $S_1(x,y)$ and $S_2(x,y)$ respectively, then the selection of detail coefficients is made by the following rule:

$$DS(x,y) = \begin{cases} DS_1(x,y), & \text{if } EDS_1(x,y) > EDS_2(x,y) \\ DS_2(x,y), & \text{otherwise} \end{cases} \qquad (7)$$

where $DS(x,y)$ is the detail wavelet coefficient of the fused image.
(iv) The fused image $F(x,y)$ is obtained by taking the inverse Daubechies complex wavelet transform of $AS(x,y)$ and $DS(x,y)$, i.e.

$$F(x,y) = \text{Inverse } DCxWT[\, AS(x,y)\;\; DS(x,y)\,] \qquad (8)$$
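Steps (i)-(iv) can be sketched end to end. The code below substitutes a single-level real 2-D Haar transform for the DCxWT, so it illustrates the fusion rules of eqs. (5)-(7) rather than the complex transform itself; the transform choice and all function names are assumptions of this sketch.

```python
import numpy as np

def haar2(img):
    """One-level 2-D Haar transform -> (approx, (LH, HL, HH))."""
    a = (img[0::2] + img[1::2]) / 2.0   # row pairs: average
    d = (img[0::2] - img[1::2]) / 2.0   # row pairs: difference
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, (LH, HL, HH)

def ihaar2(LL, bands):
    """Exact inverse of haar2 (perfect reconstruction, advantage (i))."""
    LH, HL, HH = bands
    a = np.zeros((LL.shape[0], 2 * LL.shape[1]))
    d = np.zeros_like(a)
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    out = np.zeros((2 * a.shape[0], a.shape[1]))
    out[0::2], out[1::2] = a + d, a - d
    return out

def fuse(s1, s2):
    A1, D1 = haar2(s1)
    A2, D2 = haar2(s2)
    # Rule of eq. (5): keep the approximation coefficient of larger magnitude.
    A = np.where(np.abs(A1) > np.abs(A2), A1, A2)
    # Rules of eqs. (6)-(7): pick each detail sub band by larger energy.
    D = tuple(d1 if np.sum(d1 ** 2) > np.sum(d2 ** 2) else d2
              for d1, d2 in zip(D1, D2))
    return ihaar2(A, D)   # eq. (8): inverse transform of fused coefficients

# Example: fuse two synthetic 8x8 "images".
rng = np.random.default_rng(7)
s1, s2 = rng.random((8, 8)), rng.random((8, 8))
fused = fuse(s1, s2)
```

Note that the energy rule here selects whole sub bands, matching the paper's description of replacing detail coefficients on the basis of sub band energies.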

V. RESULTS AND DISCUSSIONS


We have experimented with several medical images, and here we show results on two different medical image data sets. The first set of medical images is an MRI and a CT image. The fusion result for the first data set is shown in figure 3. From the results shown in figure 3, one can easily conclude that the fused image obtained by the proposed method provides details of bony structures as well as information about soft tissues. The second set of medical images is a T1-MR and an MRA image showing an abnormality as a calcified white structure in the image. The T1-MR image provides the details of soft tissues but is unable to detect the abnormality present in the MRA image; similarly, the MRA image detects the abnormality but is unable to give information about soft tissues. From figure 4, it is clear that in this case the fused image obtained by the proposed method is more informative than the fused images obtained by the other fusion methods (PCA fusion [5], linear fusion [6], DWT based fusion [13] and LWT based fusion [14]).

Figure 3: Fusion results for first set of medical images. (a) MRI image, (b) CT image, (c) the proposed fusion method, (d) LWT based fusion [14], (e) DWT based fusion [13], (f) linear fusion [6], (g) PCA fusion [5].


Figure 4: Fusion results for second set of medical images. (a) T1-MR image, (b) MRA image, (c) the proposed fusion method, (d) LWT based fusion [14], (e) DWT based fusion [13], (f) linear fusion [6], (g) PCA fusion [5].

This visual presentation is not sufficient for analysis of the obtained fusion results; therefore we have also compared the proposed method with the others on quantitative measures. For this comparison we have used different fusion performance measures [5], [22]-[24]: entropy, standard deviation, fusion factor, fusion symmetry and Q^{AB/F}. Higher values of entropy, Q^{AB/F}, standard deviation and fusion factor indicate a high quality fused image, while a lower value of fusion symmetry indicates a good quality fused image.
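The simpler of these measures can be computed directly from histograms. The sketch below follows the usual definitions in the fusion literature: Shannon entropy of the fused image's histogram; fusion factor as the sum of the mutual information between each source and the fused image; and fusion symmetry as the deviation of that mutual information split from 0.5. It is a hedged reimplementation under the assumption of images normalized to [0, 1], not the exact code behind tables 1 and 2.

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy (bits) of the grey-level histogram; img in [0, 1]."""
    p, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = p / p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_info(x, y, bins=64):
    """Histogram-based mutual information between two images."""
    pxy, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log2(pxy[nz] / np.outer(px, py)[nz]))

def fusion_factor(a, b, f):
    """FF = MI(source1, fused) + MI(source2, fused); higher is better."""
    return mutual_info(a, f) + mutual_info(b, f)

def fusion_symmetry(a, b, f):
    """FS = |MI(source1, fused) / FF - 0.5|; lower is better."""
    return abs(mutual_info(a, f) / fusion_factor(a, b, f) - 0.5)
```

The Q^{AB/F} edge-based measure of Xydeas and Petrovic [23] needs gradient strength and orientation maps and is omitted here for brevity.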

These fusion performance measures are tabulated in the following two tables for the two medical imaging data sets:


Table 1: Performance comparison for first set of medical images

Method                  Entropy   Standard    Fusion    Fusion      Q^{AB/F}
                                  Deviation   Factor    Symmetry
The proposed method     6.0013    32.8511     3.3514    0.3669      0.6512
LWT based method [14]   6.1921    34.0205     1.0883    0.2887      0.5809
DWT based method [13]   6.0640    33.3744     1.1189    0.3997      0.5764
Linear fusion [6]       5.6557    29.6860     2.2821    0.2553      0.6409
PCA fusion [5]          5.7638    28.3914     1.8034    0.4309      0.5278

Table 2: Performance comparison for second set of medical images

Method                  Entropy   Standard    Fusion    Fusion      Q^{AB/F}
                                  Deviation   Factor    Symmetry
The proposed method     5.9913    66.9296     4.3662    0.1576      0.5725
LWT based method [14]   5.4987    43.6309     3.8142    0.1094      0.4564
DWT based method [13]   5.3948    61.5091     3.0902    0.0155      0.4556
Linear fusion [6]       5.2756    38.3802     3.8227    0.0083      0.3897
PCA fusion [5]          5.7417    56.6432     4.6493    0.2067      0.6402

Observing the values in tables 1 and 2, it is clear that no single method is better than all the others on every quantitative measure; but considering the ranking of the methods on each performance measure, it can be concluded that the proposed method excels the majority of the other methods on every performance measure.

From the quantitative and visual analysis of the obtained results (i.e. comparison of the fusion performance in figures 3 and 4 as well as the measures in tables 1 and 2), it can be concluded that the proposed mixed fusion scheme based on the Daubechies complex wavelet transform is better than the other methods and is more beneficial for low contrast medical images. The proposed method preserves detail information (edge information) in a better manner.

VI. CONCLUSIONS

Multimodal medical image fusion is an important task of medical imaging for retrieving complementary information from different modalities of medical images. In the present work, we have proposed a new multimodal medical image fusion method using the Daubechies complex wavelet transform. The proposed method uses a mixed fusion scheme, i.e. two separate fusion rules for the approximation and detail complex wavelet coefficients. This fusion scheme helps to preserve the average information of the image as well as strong features such as edges, corners etc. Experiments were done with two different sets of medical images. Performance evaluation of the obtained results was done with well defined fusion measures (entropy, standard deviation, fusion factor, fusion symmetry and Q^{AB/F}). On the basis of quantitative and visual analysis, the proposed energy based fusion method is found to be better than the other transform domain methods (LWT and DWT based methods) and spatial domain methods (PCA and linear fusion). The proposed method is also able to preserve detail information (edges, boundaries) in a better way.

ACKNOWLEDGMENTS

This work was supported in part by the Department of Science and Technology, New Delhi, India, under grant no. SR/FTP/ETA-023/2009 and the University Grants Commission, New Delhi, India, under grant no. 36-246/2008(SR).

REFERENCES

[1] B.V. Dasarathy, Information fusion in the realm of medical applications - A bibliographic glimpse at its growing appeal, Information Fusion, vol. 13, pp. 1-9, 2012.

[2] A. Goshtasby, S. Nikolov, Image Fusion: Advances in the state of the art, Guest editorial, Information Fusion, vol. 8, pp. 114-118, 2007.

[3] H. Schoder, H.W. Yeung, M. Gonen, D. Kraus, S.M. Larson, Head and Neck Cancer: Clinical Usefulness and Accuracy of PET/CT Image Fusion, Radiology, pp. 65-72, 2004.

[4] Y. Nakamoto, K. Tamai, T. Saga, T. Higashi, T. Hara, T. Suga, T. Koyama, K. Togashi, Clinical Value of Image Fusion from MR and PET in Patients with Head and Neck Cancer, Molecular Imaging and Biology, pp. 46-53, 2009.

[5] V.P.S. Naidu, J.R. Rao, Pixel-level Image Fusion using Wavelets and Principal Component Analysis, Defence Science Journal, vol. 58, no. 3, pp. 338-352, 2008.

[6] J. G. P. W. Clevers, R. Zurita-Milla, Multisensor and multiresolution image fusion using the linear mixing model, Image Fusion: Algorithms and Applications, pp. 67-84, 2008.

[7] H. Li, B. S. Manjunath, S. K. Mitra, Multisensor image fusion using the wavelet transform, Graphical Models and Image Processing, vol. 57, no. 3, pp. 235-245, 1995.

[8] P. J. Burt, R. J. Kolczynski, Enhanced image capture through fusion, in Proceedings of the 4th IEEE International Conference on Computer Vision (ICCV '93), pp. 173-182, 1993.

[9] A. Toet, J. J. Van Ruyven, J. M. Valeton, Merging thermal and visual images by a contrast pyramid, Optical Engineering, vol. 28, no. 7, pp. 789-792, 1989.

[10] A. Toet, Image fusion by a ratio of low-pass pyramid, Pattern Recognition Letters, vol. 9, no. 4, pp. 245-253, 1989.

[11] G. Pajares, J. M. Cruz, A wavelet-based image fusion tutorial, Pattern Recognition, vol. 37, no. 9, pp. 1855-1872, 2004.

[12] K. Amolins, Y. Zhang, P. Dare, Wavelet based image fusion techniques-an introduction, review and comparison, ISPRS Journal of Photogrammetry & Remote Sensing, vol. 62, no. 4, pp. 249-263, 2007.

[13] C. Shangli, H.E. Junmin, L. Zhongwei, Medical Images of PET/CT Weighted Fusion Based on Wavelet Transform, Bioinformatics and Biomedical Engineering, pp. 2523-2525, 2008.

[14] S. Kor, U. S. Tiwary, Feature Level Fusion of Multimodal Medical Images in Lifting Wavelet Transform Domain, Engineering in Medicine and Biology Society, EMBS '04, 26th Annual International Conference of the IEEE, vol. 1, pp. 1479-1482, 2004.

[15] A. Khare, U. S. Tiwary, M. Jeon, Daubechies complex wavelet transform based multilevel shrinkage for deblurring of medical images in presence of noise, International Journal on Wavelets, Multiresolution and Information Processing, vol. 7, no. 5, pp. 587-604, 2009.

[16] A. Khare, M. Khare, Y. Y. Jeong, H. Kim, M. Jeon, Despeckling of medical ultrasound images using complex wavelet transform based Bayesian shrinkage, Signal Processing, vol. 90, no. 2, pp. 428-439, 2010.

[17] P. Hill, N. Canagarajah, D. Bull, Image fusion using complex wavelets, in Proceedings of the 13th British Machine Vision Conference, Cardiff, UK, 2002.

[18] I. W. Selesnick, R. G. Baraniuk, N. G. Kingsbury, The Dual-Tree Complex Wavelet Transform, IEEE Signal Processing Magazine, vol. 22, no. 6, pp. 123-151, 2005.

[19] D. Clonda, J. M. Lina, B. Goulard, Complex Daubechies Wavelets: Properties and statistical image modeling, Signal Processing, vol. 84, pp. 1-23, 2004.


[20] A. Khare, U. S. Tiwary, W. Pedrycz, M. Jeon, Multilevel adaptive thresholding and shrinkage technique for denoising using Daubechies complex wavelet transform, The Imaging Science Journal, vol. 58, no. 6, pp. 340-358, 2010.

[21] Z. Qiang, W. Long, L. Huijuan, M. Zhaokun, Similarity-based multimodality image fusion with shiftable complex directional pyramid, Pattern Recognition Letters, vol. 32, pp. 1544-1553, 2011.

[22] K. Kotwal, S. Chaudhuri, A novel approach to quantitative evaluation of hyperspectral image fusion techniques, Information Fusion (Article in Press), 2011. doi:10.1016/j.inffus.2011.03.008

[23] C. S. Xydeas, V. Petrovic, Objective Image Fusion Performance Measure, Electronics Letters, vol. 36, no. 4, pp. 308-309, 2000.

[24] C. Ramesh, T. Ranjith, Fusion performance measures and a lifting wavelet transform based algorithm for image fusion, Proceedings of the Fifth International Conference on Information Fusion, vol. 1, pp. 317-320, 2002.
