Adaptive image sequence resolution enhancement using multiscale-decomposition-based image fusion


Jeong Ho Shin, Jung Hoon Jung, Joon Ki Paik, and Mongi A. Abidi*

Department of Image Engineering, Graduate School of Advanced Imaging Science, Multimedia, and Film, Chung-Ang University, 221 Huksuk-Dong, Tongjak-Ku, Seoul 156-756, Korea

*Department of Electrical and Computer Engineering,

University of Tennessee, Knoxville, TN, USA

ABSTRACT

This paper presents a regularized image sequence interpolation algorithm which can restore high frequency details by fusing low-resolution frames. Image fusion makes it feasible to use different data sets corresponding to the same scene to obtain better resolution and more information about the scene than can be obtained from a single data set. Based on the mathematical model of image degradation, we obtain an interpolated image which minimizes the residual between the high-resolution and the interpolated images subject to a prior constraint. In addition, by using spatially adaptive regularization parameters, directional high frequency components are preserved while noise is efficiently suppressed. The proposed algorithm provides a better interpolated image by fusing low-resolution frames. We provide experimental results which are classified into non-fusion and fusion algorithms. Based on the experimental results, the proposed algorithm provides a better interpolated image than the conventional interpolation algorithms in the sense of both subjective and objective criteria. More specifically, the proposed algorithm has the advantage of preserving high frequency components while suppressing undesirable artifacts such as noise.

Keywords: image interpolation, resolution enhancement, multiscale decomposition, regularization, image fusion.

1. INTRODUCTION

Images, in general, contain a much greater amount of information than other types of signals. A correspondingly large amount of data is therefore required for image processing, which often becomes an obstacle to efficient applications. In order to reduce the amount of data in image communications, many image compression standards, such as JPEG and MPEG, have been established. In the process of reducing image data, a drop in resolution is unavoidable.

In this paper we mainly deal with image interpolation techniques that enhance image quality in the sense of resolution. The term "resolution" represents the number of pixels in an image, which determines the physical size of the image, and at the same time it also represents the fidelity to high-frequency details in the image. For this reason, resolution is a fundamental issue in evaluating the quality of various image processing systems.

Image interpolation is used to obtain a higher resolution image from a low resolution image, and therefore it is very important in multi-resolution or high-resolution image processing. For example, the spatial scalability function in MPEG-2 and wavelet-based image processing techniques require image interpolation. High-resolution image processing applications, such as digital HDTV, aerial photography, medical imaging, and military imaging, also need high-resolution image interpolation algorithms. Interpolation is further used in converting between various image and video formats and in increasing the resolution of images.

Conventional interpolation algorithms, such as zero-order (nearest neighbor), bilinear, cubic B-spline, and DFT-based interpolation, can be classified by their basis functions, and they focus only on enlargement of the image.1-3 These algorithms have been developed under the assumption that there is no mixture among adjacent pixels in the imaging sensor, no motion blur due to the finite shutter speed of the camera, no isotropic blur due to defocus, and no aliasing in the sub-sampling process. Since these assumptions are not satisfied in general low-resolution imaging systems, it is not easy to restore the original high-resolution image using the conventional interpolation algorithms.

Further author information: J. H. Shin: E-mail: [email protected]; WWW: http://ms.cau.ac.kr/~shinj. J. H. Jung: E-mail: [email protected]; WWW: http://ms.cau.ac.kr/~jhjung. J. K. Paik: E-mail: [email protected]; WWW: http://cau.ac.kr/~paikj. M. A. Abidi: E-mail: [email protected]

In order to improve the performance of the above-mentioned algorithms, a spatially adaptive cubic interpolation method has been proposed in Ref. 4. Although it can preserve a number of directional edges in the interpolation process, it cannot easily restore the original high frequency components that are lost in the sub-sampling process. As an alternative, multi-frame interpolation techniques which use sub-pixel motion information have been proposed in Refs. 5-8 and 10-14.

It is well known that image interpolation is an ill-posed problem. More specifically, we regard the sub-sampling process as a general image degradation process. Regularized image interpolation then finds the inverse solution defined by the image degradation model subject to a priori constraints.7,8 Since the conventional regularized interpolation methods use isotropic smoothness as the prior constraint, their interpolation performance for images with various edges is limited.

As another approach to the high-resolution image interpolation problem, Schultz and Stevenson addressed a method for nonlinear single-channel image expansion which preserves the discontinuities of the original image based on the maximum a posteriori (MAP) estimation technique.9 They also proposed a video superresolution algorithm which used a MAP estimator with an edge-preserving Huber-Markov random field (HMRF) prior.10 A similar approach to the superresolution problem was suggested by Hardie, Barnard, and Armstrong, which simultaneously estimates the image registration parameters and the high-resolution image.12 However, all of these methods use similar Gibbs priors which represent non-adaptive smoothness constraints.

A spatial image interpolation algorithm using projections onto convex sets (POCS) theory, which obtains high resolution images from low resolution image sequences, has been studied in Refs. 11, 13, 28, and 29. In general, POCS methods have a simple structure to implement, but they suffer from theoretical and numerical shortcomings such as non-uniqueness of the solution and considerable computation. In the regularized interpolation method, deterministic or statistical information about the fidelity to the high-resolution image and statistical information about the smoothness prior are incorporated to obtain a feasible solution between the constraint sets. Therefore, regularized methods provide more feasible estimates than POCS.

On the other hand, a hybrid algorithm combining optimization methods, such as the maximum likelihood (ML), MAP, and POCS approaches, was presented by Elad and Feuer.14 They used a regularization weight matrix with locally adaptive smoothness to obtain higher image quality. Nevertheless, the estimated image may lack high frequency details along edges. In Ref. 15, modified edge-based line average (ELA) techniques for scanning rate conversion are proposed.

This paper proposes an adaptive image sequence resolution enhancement algorithm using multiscale-decomposition (MSD) based image fusion. Restored high resolution image frames, with high frequency components preserved along edge directions, are obtained from the low resolution image frames at every frame. Data fusion algorithms are used in applications ranging from Earth resource monitoring, weather forecasting, and vehicular traffic control to military target classification and tracking.16 In this paper, image fusion is applied to obtain enhanced high resolution images, and the fused images are used as the input images of the adaptive regularization algorithm.

This paper is organized as follows. Section 2 defines the mathematical model of a low resolution image formation system. In Section 3, we briefly summarize the iterative algorithm with a real-time structure. We propose a spatially adaptive image interpolation algorithm based on image fusion in Section 4. Experimental results for high resolution image interpolation are provided in Section 5. Finally, Section 6 concludes the paper.

2. DEGRADATION MODEL FOR A LOW RESOLUTION IMAGING SYSTEM

Let x_c(p, q) represent a two-dimensional continuous image, and x(m_1, m_2) the corresponding N × N digital image obtained by sampling x_c(p, q), such that

x(m_1, m_2) = x_c(m_1 T_v, m_2 T_h), \quad m_1, m_2 = 0, 1, \ldots, N - 1,   (1)

where T_v and T_h respectively represent the vertical and the horizontal sampling intervals. We consider the N × N digital image, x(m_1, m_2), as the high resolution original image; the (N/q) × (N/q) image, y(n_1, n_2), with q times lower resolution in both the horizontal and vertical directions, can then be represented as

y(n_1, n_2) = \frac{1}{q^2} \sum_{i=0}^{q-1} \sum_{j=0}^{q-1} x(qn_1 + i, qn_2 + j), \quad n_1, n_2 = 0, 1, \ldots, N/q - 1.   (2)

Figure 1. Relationships between two images with different resolutions: (a) sampling grids for the two images, one for y_{1/4} and one for x; (b) the array structure of photo-detectors for x(m_1, m_2); and (c) the array structure of photo-detectors for y_{1/4}(n_1, n_2).

Two different sampling grids used to obtain the image with 4 times lower resolution in both the horizontal and vertical directions, which respectively correspond to the images in (1) and (2), are shown in Fig. 1a. The corresponding array structures of photo-detectors are shown in Figs. 1b and 1c, respectively.

As shown in (1) and (2), the sub-sampling process from the high resolution image to the low resolution image can be given as

y(n_1, n_2) = \sum_{m_1} \sum_{m_2} x(m_1, m_2) \, h(n_1, n_2; m_1, m_2) + \eta(n_1, n_2),   (3)

where y(n_1, n_2), x(m_1, m_2), and \eta(n_1, n_2) respectively represent two-dimensional arrays of the low resolution image, the high resolution image, and additive noise, and h(n_1, n_2; m_1, m_2) represents the space-variant point spread function (PSF), which determines the relationship between the high resolution image and the low resolution image. In this paper we assume that the sub-sampling process is separable and space-invariant.

Under the separability assumption, the sub-sampling process from the N × N high resolution image to the N/2 × N/2 low resolution image is shown in Fig. 2.
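As a concrete illustration (ours, not part of the original paper; the function name and the noise level are placeholders), the block averaging of (2) and the additive noise of (3) can be simulated in a few lines:

import numpy as np

def subsample(x, q, noise_std=0.0, rng=None):
    """Low-resolution observation of Eqs. (2)-(3): each LR pixel is the
    average of a q x q block of HR pixels, plus additive noise eta."""
    n = x.shape[0] // q
    y = x[:n * q, :n * q].reshape(n, q, n, q).mean(axis=(1, 3))    # Eq. (2)
    if noise_std > 0:
        rng = np.random.default_rng() if rng is None else rng
        y = y + rng.normal(scale=noise_std, size=y.shape)          # eta in Eq. (3)
    return y

# example: a 512 x 512 HR image reduced by q = 4 to 128 x 128
x = np.random.default_rng(0).random((512, 512))
y = subsample(x, q=4, noise_std=0.01)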

The corresponding discrete linear space-invariant degradation model can be expressed in matrix-vector form as

y = Hx + \eta,   (4)

where the (N/2)^2 \times 1 vectors y and \eta respectively represent the lexicographically ordered low resolution image and noise, and the N^2 \times 1 vector x represents the original high resolution image. H represents an (N/2)^2 \times N^2 block Toeplitz matrix that can be written as

H = H_1 \otimes H_1,   (5)

where \otimes represents the Kronecker product, and the (N/2) \times N matrix H_1 represents one-dimensional (1D) low-pass filtering and sub-sampling by a factor of 2, for example,

H_1 = \frac{1}{2} \begin{bmatrix} 1 & 1 & 0 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 1 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & 0 & \cdots & 1 & 1 \end{bmatrix}.   (6)

Figure 2. Sub-sampling process from the N × N high resolution image to the N/2 × N/2 low resolution image: horizontal LPF and 2:1 down-sampling, followed by vertical LPF and 2:1 down-sampling, mapping x(m_1, m_2) to y(n_1, n_2).

On the right hand side of (5), the first and the second H_1 respectively represent the low-pass filtering and sub-sampling process in the horizontal and the vertical directions. In Eqs. (5) and (6), H is a matrix with an asymmetric structure. As shown in Fig. 1, one way to increase the spatial resolution is to increase the density of photo-detectors by reducing their size. However, the smaller each photo-detector is, the less incoming light energy it collects. The decrease in incoming light energy degrades the picture quality since shot noise is, in principle, unavoidable. Therefore, with current sensor technology there is a limit to how far the photo-detector size can be reduced while keeping shot noise invisible, and this limit is known to be approximately 50 μm^2 (Ref. 25). In Ref. 26 it has been noted that current CCD technology has almost reached this bound. In other words, resolution beyond that bound should be obtained by using digital image interpolation techniques.
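For small N, the operators in (4)-(6) can be formed explicitly. The following sketch (ours, under the assumption that the dense matrices fit in memory) builds H_1 of (6) and H = H_1 ⊗ H_1 of (5), and applies H to a lexicographically ordered image; the result is the 2 × 2 block average of the high resolution image.

import numpy as np

def make_H1(N):
    """(N/2) x N matrix of Eq. (6): 1D low-pass filtering (pair averaging)
    followed by sub-sampling by a factor of 2."""
    H1 = np.zeros((N // 2, N))
    for n in range(N // 2):
        H1[n, 2 * n:2 * n + 2] = 0.5
    return H1

N = 8
H1 = make_H1(N)
H = np.kron(H1, H1)                    # Eq. (5): separable 2D operator

X = np.arange(N * N, dtype=float).reshape(N, N)
y = H @ X.flatten()                    # Eq. (4) without noise
# y.reshape(N // 2, N // 2) equals the 2 x 2 block average of X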

3. ITERATIVE REGULARIZED ALGORITHM WITH REAL-TIME STRUCTURE

In order to solve (4), the regularized iterative image interpolation algorithm tries to find the estimate \hat{x} which satisfies the following optimization problem:

\hat{x} = \arg \min_{x_i} f(x_i),   (7)

where

f(x_i) = \| y_i - H x_i \|^2 + \lambda \| C x_i \|^2.   (8)

In Eq. (8), C represents a high-pass filter, and \| C x_i \| represents a stabilizing functional whose minimization suppresses high frequency components due to noise amplification. The regularization parameter \lambda controls the trade-off between fidelity to the original image and smoothness of the restored image.

The cost function given in (8) can be rewritten as

f(x_i) = \frac{1}{2} x_i^T T x_i - b_i^T x_i,   (9)

where

T = H^T H + \lambda C^T C \quad \text{and} \quad b_i = H^T y_i.   (10)

The best estimate, which minimizes (9), is the solution of the following linear equation:

T x_i = b_i.   (11)

In order to solve (11), the successive approximation describing the interpolated image x at the (k+1)-st iteration step is given by

x^{k+1} = x^k + \beta \{ H^T y^k - (H^T H + \lambda C^T C) x^k \},   (12)

where y^k represents the motion-compensated image frame of the k-th observed frame, and \beta the relaxation parameter that controls the convergence rate.21
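A minimal sketch of one update of (12) follows (ours, not the paper's implementation): H and H^T are applied as image-domain operators following the block-average model of Section 2, C is a fixed high-pass kernel, and λ, β, and the kernel values are placeholders.

import numpy as np
from scipy.signal import convolve2d

def regularized_step(xk, yk, q, lam, beta, c_kernel):
    """One successive-approximation step of Eq. (12)."""
    def H(x):       # block average by q (forward model of Section 2)
        n = x.shape[0] // q
        return x[:n * q, :n * q].reshape(n, q, n, q).mean(axis=(1, 3))
    def Ht(y):      # adjoint: spread each LR pixel over its q x q block
        return np.kron(y, np.ones((q, q))) / q ** 2
    fidelity = Ht(yk) - Ht(H(xk))                                   # H^T y^k - H^T H x^k
    Cx = convolve2d(xk, c_kernel, mode='same', boundary='symm')
    CtCx = convolve2d(Cx, c_kernel[::-1, ::-1], mode='same', boundary='symm')
    return xk + beta * (fidelity - lam * CtCx)                      # Eq. (12)

# example: start from a pixel-replicated LR frame and refine it
# laplacian = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]]) / 4.0
# x1 = regularized_step(np.kron(y, np.ones((4, 4))), y, q=4, lam=0.1, beta=1.0, c_kernel=laplacian)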

In dealing with iterative algorithms, the convergence and the rate of convergence are very important. Unless convergence is guaranteed, the iterative algorithm cannot be used. Even if convergence is guaranteed, a sufficiently high rate of convergence is needed for practical applications. An in-depth convergence analysis has been presented in Ref. 22.

4. SPATIALLY ADAPTIVE RESOLUTION ENHANCEMENT ALGORITHMS

The regularized iterative image interpolation algorithm presented in the previous section does not fully satisfy the human visual system. In this section we therefore introduce an adaptive image interpolation algorithm that is better matched to the human visual system and present its efficient implementation. In addition, the pixel values of the fused low resolution images carry the dominant features of the MSD images, which makes it possible to obtain better low resolution images from which a high resolution image can be built by the image fusion algorithm. The framework of the proposed resolution enhancement algorithm using image fusion is shown in Fig. 3. The MSD-based image fusion stage accepts sensor data, i.e., the LR image sequence from the LR imaging sensor, and fuses the data to obtain LR frames with more information. The regularized image resolution enhancement stage acts on the results of the MSD-based image fusion to interpolate the LR images. This stage fuses the data compatibility and smoothness constraints with directional edge information.

[Figure 3 block diagram: LR Images 1 to N → Observation Preprocessing (DWT, DWF, Steerable Pyramid, etc.) → Data Alignment (Registration) → MSD-based Image Fusion (pixel level) → Fused LR Images 1 to N → Orientation Analysis of Edges → Adaptive Regularized Image Resolution Enhancement Algorithm → HR Images 1 to N]

Figure 3. The framework of the proposed resolution enhancement algorithm using image fusion.

[Figure 4 block diagram: Registration → Decomposition by Discrete Wavelet Frame → Activity level measurement → Maximum selection rule → Consistency verification]

Figure 4. The structure of the proposed image fusion method.

4.1. Multiscale decomposition-based image fusion

The purpose of image interpolation is to recover or enhance the resolution of the original image or scene. Fusion can be an alternative approach to image interpolation. The reason we use the fusion concept in this paper is that it can provide better resolution and more information about the scene than can be obtained from a single data set. Data fusion can be implemented at the signal, pixel, feature, or symbolic level of representation. We use pixel-level fusion, which merges multiple images on a pixel-by-pixel basis to improve the performance of many image processing tasks.16,17 From this idea, we apply fusion to our new interpolation algorithm, which uses features of the MSD images as the fusion criterion. This processing provides more feasible input data by fusing the LR image frames at the pixel level. There are two approaches to image fusion. One is spatial-domain processing using the gradient and local variance,23 and the other is transform-domain processing using the discrete wavelet transform (DWT), the discrete wavelet frame (DWF), the steerable pyramid, etc. In this paper, the low-resolution image frames are decomposed using the DWF. Since the DWT is shift-variant, the DWF is used to mitigate misregistration problems.

In order to apply the fusion algorithm to (12), the corresponding iteration process can be rewritten as

x^{k+1} = x^k + \beta \{ b_{\mathrm{fused}}^k - (H^T H + \lambda C^T C) x^k \},   (13)

where b_{\mathrm{fused}}^k represents the fused image obtained from the low-resolution frames.

To obtain the k-th fused frame, b_{\mathrm{fused}}^k in (13), we must define an appropriate image fusion scheme, which is described in Fig. 4. First, the low-resolution frames have to be registered so that corresponding pixels are aligned. Then, the low-resolution image frames are decomposed using the DWF. We perform the activity level measurement as a preliminary step in choosing the DWF coefficients. In our implementation we use the maximum absolute value within a window as the activity measure associated with the center pixel. As a result, we can determine the maximum value among the DWF coefficients by the maximum selection rule. A binary decision map of the same size as the DWF subbands is then created to record the selection results of the maximum selection rule. Finally, a majority filter is applied to the binary decision map: we median-filter the binary decision map, which records the origin of the coefficient values, to provide a constraint in the process of consistency verification. Using the proposed fusion algorithm, we define the fused frame as

b_{\mathrm{fused}}^k(i, j) = \begin{cases} H^T y^k(i, j), & \text{if the binary decision map} = 1, \\ H^T y^{k-1}(i, j), & \text{if the binary decision map} = 0. \end{cases}   (14)

According to the results of the image fusion algorithm, H^T y^k(i, j) is chosen as the proper pixel value when the k-th binary decision map is equal to one, as in (14). After the image fusion process, we perform interpolation using the iterative regularized interpolation algorithm summarized in Sec. 3. In general, low resolution images have more lowpass components than the original high resolution image frames. Based on this observation, high resolution images interpolated from such low resolution images may be sub-optimal in the sense of resolution. In the proposed algorithm, however, the fused low resolution images keep the maximum-activity value at each pixel, which enables us to obtain better low resolution observations for making a higher resolution image. The proposed interpolation algorithm uses the current and previous low-resolution frames to obtain a high-resolution frame, which means that the information of the low-resolution frames is reused without loss.
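The selection in (14) can be sketched as follows (our illustration; a single DWF detail subband per frame is assumed, already registered on the same grid as the back-projected frames, and the window size is a placeholder):

import numpy as np
from scipy.ndimage import maximum_filter, median_filter

def fuse_frames(b_curr, b_prev, d_curr, d_prev, window=3):
    """Pixel-level fusion of Eq. (14).
    b_curr, b_prev : H^T y^k and H^T y^{k-1} (back-projected LR frames).
    d_curr, d_prev : one DWF detail subband per frame (assumed precomputed
    and registered on the same grid as b_curr and b_prev)."""
    act_curr = maximum_filter(np.abs(d_curr), size=window)     # activity level measurement
    act_prev = maximum_filter(np.abs(d_prev), size=window)
    decision = (act_curr >= act_prev).astype(np.uint8)         # maximum selection rule
    decision = median_filter(decision, size=window)            # consistency verification
    return np.where(decision == 1, b_curr, b_prev)             # Eq. (14)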

4.2. Interpolation with spatially adaptive constraints

The characteristics of human vision are partly revealed by psychophysical experiments. According to those experiments, the human visual system is sensitive to noise in flat regions, while it becomes less sensitive to noise at sharp transitions in image intensity. In other words, the human visual system is less sensitive to noise near edges than in flat regions.20 Based on the results of these experiments, various ways to subjectively improve the quality of the restored image have been proposed in Refs. 18 and 27.

The solution of (12) converges to a point between the ultra-rough least squares solution and the ultra-smooth solution, and the location of the solution can be controlled by the regularization parameters. In many non-adaptive versions of regularized image restoration, a two-dimensional (2D) isotropic high-pass filter has been used for C. The space-invariant high-pass filter, however, cannot efficiently restore high frequency details in the image; as a result, multi-frame interpolation methods have been proposed to solve this problem by utilizing the additional information of temporally adjacent frames.7,21

As an alternative to nonadaptive interpolation, we propose a spatially adaptive interpolation algorithm that uses a set of M different high-pass filters, C_j, for j = 1, ..., M, to selectively suppress the high frequency components along the corresponding edge directions. For example, each pixel in an image can be classified as monotone, horizontal edge, vertical edge, or one of two diagonal edges. In this case M is equal to 5, and each C_j represents a high-pass filter for the given direction. As a result of applying the proposed spatially adaptive constraints, the k-th regularized iteration step is given as

x^{k+1} = x^k + \beta \left( b - \sum_{i=1}^{M} I_i T_i x^k \right),   (15)

where

b = H^T y \quad \text{and} \quad T_i = H^T H + \lambda C_i^T C_i,   (16)

and I_i represents a diagonal matrix whose diagonal elements are either zero or one. More specifically, each diagonal element of I_i, which has a one-to-one correspondence with a pixel in the image, is equal to one if that pixel lies on the corresponding edge direction, and zero otherwise.

The properties of I_i can simply be summarized as

I_i I_j = 0 \;\; \text{for} \; i \neq j, \quad \text{and} \quad \sum_{i=1}^{M} I_i = I,   (17)

where I represents the N^2 \times N^2 identity matrix.
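One step of (15) can be sketched as follows (ours, not the paper's implementation), with the masks I_i represented as binary images, the constraints C_i as convolution kernels, and H, H^T, λ, and β as in the earlier sketches:

import numpy as np
from scipy.signal import convolve2d

def adaptive_step(xk, b, masks, kernels, q, lam, beta):
    """One spatially adaptive step of Eq. (15) with T_i from Eq. (16).
    masks   : binary images I_i, one per direction class (Eq. (17))
    kernels : high-pass kernels C_i, e.g. the five kernels of Eq. (18)"""
    def H(x):
        n = x.shape[0] // q
        return x[:n * q, :n * q].reshape(n, q, n, q).mean(axis=(1, 3))
    def Ht(y):
        return np.kron(y, np.ones((q, q))) / q ** 2
    HtHx = Ht(H(xk))
    Tx = np.zeros_like(xk)
    for Ii, Ci in zip(masks, kernels):
        Cx = convolve2d(xk, Ci, mode='same', boundary='symm')
        CtCx = convolve2d(Cx, Ci[::-1, ::-1], mode='same', boundary='symm')
        Tx += Ii * (HtHx + lam * CtCx)              # I_i T_i x^k
    return xk + beta * (b - Tx)                     # Eq. (15)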

5. EXPERIMENTAL RESULTS

In order to demonstrate the performance of the proposed algorithm, we generated an image sequence of twenty 130 × 90 image frames with subpixel motion. Fig. 6a shows the 520 × 360 high resolution fighter image. Fig. 6b shows the twenty 130 × 90 subsampled images obtained from artificially shifted versions of Fig. 6a using the degradation model given in (3). The 3rd and 4th image frames of the 20 LR images in Fig. 6b have a differently focused object and background, and the 11th and 12th image frames are occluded by synthetic clouds. The resulting interpolated images using zero-order interpolation, bilinear interpolation, and cubic B-spline interpolation are shown in Figs. 6c, 6d, and 6e, respectively. The cubic B-spline interpolated image of Fig. 6e was produced with the `congrid(,/cubic)' function of IDL. Figs. 7 and 8 respectively show images interpolated by the conventional non-fusion-based algorithm and by the proposed fusion-based adaptive regularized interpolation algorithm. For both, only one iteration is performed per image frame; therefore, almost real-time interpolation can be implemented based on the proposed algorithm.

The five different constraints used in (16) are

C_1 = \frac{1}{4} \begin{bmatrix} 0 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 0 \end{bmatrix}, \quad C_2 = \frac{1}{4} \begin{bmatrix} 0 & 0 & 0 \\ -1 & 2 & -1 \\ 0 & 0 & 0 \end{bmatrix}, \quad C_3 = \frac{1}{4} \begin{bmatrix} 0 & -1 & 0 \\ 0 & 2 & 0 \\ 0 & -1 & 0 \end{bmatrix},

C_4 = \frac{1}{4} \begin{bmatrix} 0 & 0 & -1 \\ 0 & 2 & 0 \\ -1 & 0 & 0 \end{bmatrix}, \quad C_5 = \frac{1}{4} \begin{bmatrix} -1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & -1 \end{bmatrix}.   (18)

[Figure 5 flow chart: the adjacent pixels of X_{i,j} are tested in sequence against a threshold M: |X_{i-1,j} - X_{i+1,j}| > M, |X_{i,j-1} - X_{i,j+1}| > M, |X_{i-1,j-1} - X_{i+1,j+1}| > M, and |X_{i+1,j-1} - X_{i-1,j+1}| > M; the first test that succeeds classifies the pixel as a vertical, horizontal, 45-degree, or 135-degree edge, respectively, and if all tests fail the pixel is classified as monotone.]

Figure 5. The process of determining the direction of edges.

In determining I_i, differences along the four directions are compared at each pixel and at each iteration. This procedure is summarized in Fig. 5.
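The classification of Fig. 5 and the kernels of (18) can be sketched as follows (our reading of the figure; the pairing of each difference test with a class index and the threshold M are assumptions). The resulting masks can be passed, together with the kernels, to the adaptive update step sketched after (17).

import numpy as np

# the five directional high-pass kernels of Eq. (18)
C = [np.array(k, dtype=float) / 4.0 for k in (
    [[0, -1, 0], [-1, 4, -1], [0, -1, 0]],      # C1 (monotone regions)
    [[0, 0, 0], [-1, 2, -1], [0, 0, 0]],        # C2
    [[0, -1, 0], [0, 2, 0], [0, -1, 0]],        # C3
    [[0, 0, -1], [0, 2, 0], [-1, 0, 0]],        # C4
    [[-1, 0, 0], [0, 2, 0], [0, 0, -1]],        # C5
)]

def classify_directions(x, M):
    """Per-pixel classification of Fig. 5: the four directional differences
    are tested against the threshold M, and the first test that succeeds
    assigns the class; pixels failing all tests are monotone (class 0).
    Returns the binary masks I_0 ... I_4 of Eq. (17)."""
    p = np.pad(x, 1, mode='edge')
    dv   = np.abs(p[:-2, 1:-1] - p[2:, 1:-1])   # |X(i-1,j)   - X(i+1,j)|
    dh   = np.abs(p[1:-1, :-2] - p[1:-1, 2:])   # |X(i,j-1)   - X(i,j+1)|
    d45  = np.abs(p[:-2, :-2]  - p[2:, 2:])     # |X(i-1,j-1) - X(i+1,j+1)|
    d135 = np.abs(p[2:, :-2]   - p[:-2, 2:])    # |X(i+1,j-1) - X(i-1,j+1)|
    label = np.select([dv > M, dh > M, d45 > M, d135 > M], [1, 2, 3, 4], default=0)
    return [(label == i).astype(float) for i in range(5)]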

To compare the performance of the algorithms, magnified images of the cockpit are shown in Figs. 9a-9e. As the experimental results show, the proposed fusion-based adaptive algorithm gives better interpolation results in the sense of both subjective and objective measures.

6. CONCLUSIONS

In this paper we proposed a spatially adaptive image interpolation algorithm using image fusion. To obtain a higher resolution image, the fused images from the low resolution image sequence are used as the input of the iterative regularization algorithm. In addition, the pixel values of the fused low resolution images carry the dominant features of the MSD images, which makes it possible to obtain better low resolution images from which a high resolution image can be constructed. Among the MSD methods, the DWF is used to decompose the low-resolution image frames; since the DWT is shift-variant, the DWF is used to mitigate misregistration problems.

Based on the observation model, we proposed a regularization-based spatially adaptive interpolation algorithm using five different constraints. The experimental results show that the proposed adaptive algorithm is able to restore high frequency details and, at the same time, gives a higher PSNR than the algorithms without image fusion.

Acknowledgments

This research was supported by the National Research Lab. Project of Korea.

REFERENCES

1. A. K. Jain, Fundamentals of Digital Image Processing, Prentice-Hall, 1989.

2. M. Unser, A. Aldroubi, and M. Eden, "Fast B-spline transforms for continuous image representation and interpolation," IEEE Trans. Pattern Analysis, Machine Intelligence, vol. 13, no. 3, pp. 277-285, March 1991.

3. J. A. Parker, R. V. Kenyon, and D. E. Troxel, "Comparison of interpolating methods for image resampling," IEEE Trans. Med. Imaging, vol. 2, no. 1, pp. 31-39, March 1983.

4. K. P. Hong, J. K. Paik, H. J. Kim, and C. H. Lee, "An edge-preserving image interpolation system for a digital camcorder," IEEE Trans. Consumer Electronics, vol. 42, no. 3, pp. 279-284, August 1996.

5. S. P. Kim, N. K. Bose, and H. M. Valenzuela, "Recursive reconstruction of high-resolution image from noisy undersampled frames," IEEE Trans. Acoust., Speech, Signal Processing, vol. 38, pp. 1013-1027, June 1990.

6. A. Patti, M. I. Sezan, and A. M. Tekalp, "High-resolution image reconstruction from a low-resolution image sequence in the presence of time-varying motion blur," Proc. 1994 Int. Conf. Image Processing, November 1994.

7. M. C. Hong, M. G. Kang, and A. K. Katsaggelos, "An iterative weighted regularized algorithm for improving the resolution of video sequences," Proc. 1997 Int. Conf. Image Processing, vol. 2, pp. 474-477, October 1997.

8. B. C. Tom and A. K. Katsaggelos, "An iterative algorithm for improving the resolution of video sequences," Proc. SPIE Visual Comm., Image Proc., pp. 1430-1438, March 1996.

9. R. R. Schultz and R. L. Stevenson, "A Bayesian approach to image expansion for improved definition," IEEE Trans. Image Processing, vol. 3, no. 3, pp. 233-242, May 1994.

10. R. R. Schultz and R. L. Stevenson, "Extraction of high-resolution frames from video sequences," IEEE Trans. Image Processing, vol. 5, no. 6, pp. 996-1011, June 1996.

11. J. H. Shin, J. H. Jung, and J. K. Paik, "Spatial interpolation of image sequences using truncated projections onto convex sets," IEICE Trans. Fundamentals of Electronics, Communications, Computer Sciences, to appear, June 1999.

12. R. C. Hardie, K. J. Barnard, and E. E. Armstrong, "Joint MAP registration and high-resolution image estimation using a sequence of undersampled images," IEEE Trans. Image Processing, vol. 6, no. 12, pp. 1621-1633, December 1997.

13. A. J. Patti, M. I. Sezan, and A. M. Tekalp, "High-resolution standards conversion of low resolution video," Proc. 1995 Int. Conf. Acoust., Speech, Signal Processing, pp. 2197-2200, 1995.

14. M. Elad and A. Feuer, "Restoration of a single superresolution image from several blurred, noisy, and undersampled measured images," IEEE Trans. Image Processing, vol. 6, no. 12, pp. 1646-1658, December 1997.

15. C. J. Kuo, C. Liao, and C. C. Lin, "Adaptive interpolation technique for scanning rate conversion," IEEE Trans. Circuits and Systems for Video Technology, vol. 6, no. 3, pp. 317-321, June 1996.

16. L. A. Klein, Sensor and Data Fusion Concepts and Applications, SPIE Optical Engineering Press, 1999.

17. D. L. Hall, Mathematical Techniques in Multisensor Data Fusion, Artech House, 1992.

18. A. K. Katsaggelos, "Iterative image restoration algorithms," Optical Engineering, vol. 28, pp. 735-748, 1989.

19. R. W. Schafer, R. M. Mersereau, and M. A. Richards, "Constrained iterative restoration algorithms," Proc. IEEE, vol. 69, no. 4, pp. 432-450, April 1981.

20. G. L. Anderson and A. N. Netravali, "Image restoration based on a subjective criterion," IEEE Trans. Sys., Man, Cybern., vol. SMC-6, no. 12, pp. 845-853, December 1976.

21. J. H. Shin, Y. C. Choung, and J. K. Paik, "A general framework of image sequence interpolation," Proc. SPIE Visual Comm., Image Proc., vol. 3309, part 1, pp. 297-304, January 1998.

22. J. H. Shin, J. S. Yoon, J. K. Paik, and M. A. Abidi, "Fast superresolution for image sequence using motion adaptive relaxation parameters," Proc. 1999 Int. Conf. Image Processing, vol. 3, pp. 676-680, October 1999.

23. J. H. Shin, J. S. Yoon, and J. K. Paik, "Image fusion-based adaptive regularization for image expansion," Proc. SPIE Image and Video Communications and Processing, January 2000.

24. J. S. Lim, Two-Dimensional Signal and Image Processing, Prentice-Hall, pp. 495-497, 1990.

25. K. Aizawa, T. Komatsu, and T. Saito, "A scheme for acquiring very high resolution images using multiple cameras," Proc. 1992 Int. Conf. Acoust., Speech, Signal Processing, vol. 3, pp. 289-292, 1992.

26. T. Ando, "Trend of high-resolution and high-performance solid state imaging technology," J. ITE Japan, vol. 44, no. 2, pp. 105-109, February 1992.

27. A. K. Katsaggelos, J. Biemond, R. W. Schafer, and R. M. Mersereau, "A regularized iterative image restoration algorithm," IEEE Trans. Signal Processing, vol. 39, no. 4, pp. 914-929, 1991.

28. H. J. Trussell and M. R. Civanlar, "The feasible solution in signal restoration," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-32, no. 2, pp. 201-212, April 1984.

29. D. C. Youla and H. Webb, "Image restoration by the method of convex projections: Part 1 - Theory," IEEE Trans. Medical Imaging, vol. MI-1, no. 2, pp. 81-94, October 1982.

30. W. T. Freeman and E. H. Adelson, "The design and use of steerable filters," IEEE Trans. Pattern Analysis, Machine Intelligence, vol. 13, no. 9, pp. 891-906, September 1991.

31. Z. Zhang and R. S. Blum, "A categorization of multiscale-decomposition-based image fusion schemes with a performance study for a digital camera application," Proc. IEEE, vol. 87, pp. 1315-1326, August 1999.

Figure 6. (a) Original image, (b) 20 synthetically subsampled images with differently focused (3rd and 4th frames) and occluded (11th and 12th frames) objects, (c) zero-order interpolated image (PSNR = 27.60 dB), (d) bilinear interpolated image (PSNR = 26.40 dB), and (e) cubic B-spline interpolated image (PSNR = 26.37 dB).

Figure 7. Regularized interpolated image without the fusion algorithm (PSNR = 29.17 dB).

Figure 8. Adaptively regularized interpolated image with the fusion algorithm (PSNR = 29.69 dB).

Figure 9. Magnified results: (a) the cockpit region of Fig. 6c (PSNR = 27.60 dB), (b) the cockpit region of Fig. 6d (PSNR = 26.40 dB), (c) the cockpit region of Fig. 6e (PSNR = 26.37 dB), (d) the cockpit region of Fig. 7 (PSNR = 29.17 dB), and (e) the cockpit region of Fig. 8 (PSNR = 29.69 dB).