Three-dimensional image sensing and reconstruction with time-division multiplexed computational integral imaging

Adrian Stern and Bahram Javidi

A method to compute high-resolution three-dimensional images based on integral imaging is presented. A sequence of integral images (IIs) is captured by means of time-division multiplexing with a moving array lenslet technique. For the acquisition of each II, the location of the lenslet array is shifted periodically within the lenslet pitch in a plane perpendicular to the optical axis. The II sequence obtained by the detector array is processed digitally with superresolution reconstruction algorithms to obtain a reconstructed image, appropriate to a viewing direction, which has a spatial resolution beyond the optical limitation. © 2003 Optical Society of America

OCIS codes: 110.6880, 100.6640, 100.6890, 100.3020, 110.4190.

1. Introduction

Integral photography is the oldest autostereoscopic three-dimensional (3-D) imaging technique. It was first invented by Lippmann in 1908 and subsequently developed by many others.1 It was referred to as integral imaging in Ref. 2 because a charge-coupled device (CCD) camera was used for pickup, followed by digital reconstruction. In a basic conventional integral imaging system, multiple elemental images that have different perspectives of a given 3-D object are generated by a lenslet array and recorded on a photographic plate (Fig. 1). The 3-D reconstruction is carried out by a reverse-ray propagation scheme in which reconstructing rays pass from the recording plate to an image through a similar lenslet array. Recently, the availability of high-resolution light-sensitive devices, such as a high-resolution CCD, replaced the photographic plate and enabled further applications that involve electronic transmission and reconstruction of 3-D images,3–7 computerized reconstruction,2 and recognition8 of 3-D objects by means of digital image processing.

A. Stern ([email protected]) and B. Javidi ([email protected]) are with the Department of Electrical and Computer Engineering, University of Connecticut, Storrs, Connecticut 06269-1157.

Received 26 March 2003; revised manuscript received 16 July 2003.

0003-6935/03/357036-07$15.00/0 © 2003 Optical Society of America

7036 APPLIED OPTICS / Vol. 42, No. 35 / 10 December 2003

In this paper we present a computational integral imaging (CII) method to reconstruct superresolution (SR) images representing the perspectives of 3-D objects. By superresolution we mean reconstruction that overcomes the optical spatial resolution limitation. The resolution limitations of integral imaging are briefly described in Section 2. To achieve SR we apply a time-division multiplexing approach by using a moving array lenslet technique (MALT).6 The time-division multiplexing approach is described in Section 3. A sequence of images obtained by time-division multiplexing is then processed digitally by means of SR methods to obtain the desired SR images. The SR techniques are presented in Section 4.

2. Spatial Resolution Limitation of Computational Integral Images

The spatial resolution of a reconstructed 3-D image from an integral image (II) is determined by many system parameters. The resolution limitations of optical reconstructions were studied by Burckhardt,9 Okoshi,10 and recently by Hoshino et al.11 It was found11 that the lateral resolution of an optically reconstructed image is limited by

ν_max = min(m ν_diff, ν_Nyq)  (cycles/rad),  (1)

where ν_Nyq is the maximum lateral spatial frequency due to the sampling, ν_diff is the cutoff spatial frequency due to the diffraction limitation, and m is the optical magnification constant determined by the optical reconstruction geometry. From the Nyquist sampling theorem, the upper lateral resolution limit due to the lenslet sampling array, ν_Nyq, is inversely proportional to the pitch p of the lenslet array:

ν_Nyq ∝ 1/p.  (2)

The diffraction cutoff spatial frequency due to the microlenses is given by

ν_diff = w/λ  (cycles/rad),  (3)

where w is the aperture size of each lens in the lenslet array and λ is the wavelength of the light.

In the case that the II is recorded digitally, the detector pixel dimensions (Δx2, Δy2) may further limit the resolution. If the pixel size is larger than the point-spread function (PSF) extent of the lenslet used in the recording step, the elemental images are blurred, which in turn degrades the resolution of the reconstructed images and causes discretization of the image perspectives. To avoid this, detectors with sufficiently small pixels should be used. Alternatively, additional optics can be inserted between the lenslet array and the camera to properly scale the image size so that the pixel size is smaller than the PSF extent of the overall optics. However, even with the resolution of the detector optimized, there is a trade-off between ν_Nyq and ν_diff. We can remedy the sampling limitation (increase ν_Nyq) by decreasing the lenslet array pitch according to relation (2). However, when we reduce p, the microlens aperture size is reduced (because w ≤ p), and diffraction of the lens limits the resolution according to Eq. (3). Therefore the resolution of an II is always limited, and scaling of the optical setup cannot increase it without limit.
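This trade-off can be sketched numerically. The snippet below is ours, not part of the paper: the unit fill factor w = p and the proportionality constant relating ν_Nyq to 1/p are assumptions chosen only to illustrate that the overall limit of Eq. (1) peaks at an intermediate pitch.

```python
# Illustrative sketch of the pitch/diffraction trade-off of relation (2) and
# Eq. (3). Assumptions (ours): aperture w = p, and nu_Nyq = c / p with an
# arbitrary geometry constant c.

wavelength = 0.5e-6          # light wavelength lambda [m] (assumed)
c = 2.0e-3                   # assumed constant so that nu_Nyq = c / p

def resolution_limit(p):
    """Overall lateral limit in the spirit of Eq. (1): min of the two limits."""
    nu_diff = p / wavelength  # Eq. (3) with aperture w = p  [cycles/rad]
    nu_nyq = c / p            # relation (2): inversely proportional to pitch
    return min(nu_diff, nu_nyq)

# Shrinking the pitch raises the Nyquist limit but lowers the diffraction
# limit, so the overall limit peaks near the crossover p* = sqrt(c * lambda).
pitches = [2.0e-3, 1.0e-3, 0.5e-3, 0.1e-3, 0.03e-3]
limits = [resolution_limit(p) for p in pitches]
best = max(range(len(pitches)), key=lambda i: limits[i])
print("best pitch of the candidates: %.2e m" % pitches[best])
```

The crossover behavior is the point of the section: no choice of pitch alone can push the limit arbitrarily high.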

3. Time-Division Multiplexing by the Moving Array Lenslet Technique

A. Sequence of Integral Images Measured by Lenslet Array Movement

A method to overcome the resolution limitation described in Section 2 is time-division multiplexing.6 In Ref. 6 an integral imaging and reconstruction method based on nonstationary micro-optics, or MALT, is suggested to overcome the limitation due to ν_Nyq. By moving the lenslet arrays, we can generate a continuum of elemental images of slightly different perspectives that are integrated by the eye during the integration time constant of the eye response. If the reconstruction of the 3-D object is performed optically as described in Ref. 6, both the pickup and the display lenslets need to be moved synchronously to avoid phase distortions.

In this paper we follow the MALT idea of Ref. 6 but take advantage of digital reconstruction. Therefore the resolution loss due to the diffraction limitation of the lenslet aperture can be compensated by digital signal processing. At the same time, the resolution limitation due to lenslet array sampling is compensated by MALT. We capture a sequence of IIs by slightly moving the lenslet array in a plane perpendicular to the optical axis (Fig. 1). The motion of the lenslet array is performed in steps smaller than the lenslet pitch size p. The range of lenslet array scanning has to be within one lenslet pitch; a larger range is not necessary because the lenslet array is periodic. The subpitch shift is represented by subpixel shifts in the computationally reconstructed image. This is because adjacent pixels of the computationally reconstructed image originate from adjacent elemental images2 that are produced by adjacent microlenses.

The sequence of captured IIs can be processed digitally by means of a SR method (see Subsection 4.B) to obtain an improved-resolution image for an arbitrary viewing angle. With the SR method we can take advantage of the subpixel displacements between the measured images to produce an image that has a resolution beyond the pixel size of an image reconstructed from one II.

B. Modeling of Integral Image Sequences

The SR imaging method developed in this study is based on time-division multiplexing. By slightly changing the location of the lenslet array, we can capture a sequence of slightly different IIs. We can view the sequence {g_k}_{k=1}^N of N IIs obtained by shifting the lenslet array as the output of a multichannel system in which each channel produces one II of the sequence. The input for each channel is the intensity emerging from a 3-D object, f(x1, y1, z1), and the output of the kth channel is the kth II, g_k(nΔx2, mΔy2), where Δx2 and Δy2 are the horizontal and vertical detector pixel dimensions.

Let us consider a simplified version of such a multichannel system in which each channel images only one perspective projection of the 3-D object. The input to each channel is the optical intensity f_θ(x′, y′) corresponding to parallel beams arriving at the lenslet array from the direction θ (Fig. 1), specified by its angles in a polar coordinate system. The output of each channel is an image g_k(n′, m′; θ), which is a subset of the II g_k(nΔx2, mΔy2), corresponding to parallel imaging with direction of observation θ. Clearly, the II g_k(nΔx2, mΔy2) can be obtained by unification of all the perspective images from all the observation angles θ. The method to compute the perspective images g_k(n′, m′; θ) from the captured IIs g_k(nΔx2, mΔy2) is described in Subsection 4.A.

Fig. 1. Integral imaging of a 3-D object f(x1, y1, z1). The lenslet pitch and aperture size are denoted by p and w, respectively. Elementary images are denoted by arrows in the detector plane. Solid arrows represent parallel rays propagating in direction θ. The lenslet array motion is performed in the (x, y) plane within one lenslet pitch.

The multichannel model used to generate a sequence of N images {g_k(n′, m′; θ)}_{k=1}^N is described in Fig. 2. The operator P_θ denotes the projection operator that projects the 3-D object f(x1, y1, z1) in direction θ: f_θ(x′, y′) = P_θ f(x1, y1, z1). Each channel consists of a shift due to the particular lenslet location, sampling by the lenslet array, and distortion due to the optics and noise corruption. The kth shift of the lenslet array is modeled by the translation operator T(d_k), where d_k = (d1k, d2k) is the lenslet displacement vector normalized to the lenslet pitch size p. We denote the lenslet displacement during the kth exposure by the vector (dxk, dyk)^T. The normalized displacement vector is given by

(d1k, d2k)^T = (dxk/p, dyk/p)^T,

where superscript T denotes the transpose of a vector. Because the lenslet array is periodic with period p, the operator T(d_k) is modulo 1; that is, T(d_k + (n1, n2)) = T(d_k) for any integers n1, n2. The downsampling operator ↓s2 models the sampling operation performed by the lenslet array. The optical distortion is represented by the overall optical PSF, h_PSF. In the setup of Fig. 2, h_PSF is the PSF of the microlenses. In the general case, it may include the PSF of other optical elements located between the lenslet array and the detector. Finally, noise η_k(n′, m′) is added in each channel, modeling the detector noise and other statistical uncertainties.
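As a rough illustration (ours, not the authors' code), one channel of this model can be simulated on a fine grid: shift, blur with the PSF, downsample, and add noise. The integer fine-grid shift, circular boundaries, delta PSF, and array sizes are all simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def channel(f, shift_fine, h_psf, s2, noise_std=0.0):
    """One channel of Fig. 2: g_k = downsample[T(d_k) f * h_PSF] + noise."""
    shifted = np.roll(f, shift_fine, axis=(0, 1))        # T(d_k), periodic
    # 2-D convolution via FFT (circular boundary, same size)
    blurred = np.real(np.fft.ifft2(np.fft.fft2(shifted)
                                   * np.fft.fft2(h_psf, s=f.shape)))
    g = blurred[::s2, ::s2]                              # operator "down s2"
    return g + noise_std * rng.standard_normal(g.shape)

# Toy scene: a single bright point; a delta PSF so channels differ only
# by their lenslet shift.
f = np.zeros((8, 8)); f[2, 2] = 1.0
h = np.zeros((8, 8)); h[0, 0] = 1.0
g0 = channel(f, (0, 0), h, s2=2)
g1 = channel(f, (0, 1), h, s2=2)                         # subpitch shift d_k
```

Note that the point visible in g0 falls between low-resolution samples in g1: a subpitch shift moves scene detail across the sampling grid, which is exactly the complementary information the SR algorithm of Section 4 exploits.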

4. Reconstruction Algorithm

The image at the output of each channel g_k(n′, m′; θ) in the system of Fig. 2 is a low-resolution, sampled version of the continuous f_θ(x′, y′). The resolution is first limited by the sampling operator ↓s2 and may be further degraded by the optics PSF. Our purpose is to reconstruct an image that is a deblurred version of g_k(n′, m′; θ) having a resolution beyond the low-resolution pixel size (Δx2, Δy2). The reconstruction is carried out with a SR technique described in Subsection 4.B. The inputs to the SR algorithm are the low-resolution images {g_k(n′, m′; θ)}_{k=1}^N generated from the measured IIs by the method described in Subsection 4.A.

A. Low-Resolution Image Generation

We can generate the perspectives {g_k(n′, m′; θ)}_{k=1}^N from the recorded II sequence {g_k(nΔx2, mΔy2)}_{k=1}^N by extracting points periodically from the measured elemental image array.2,3,5,8 The elemental image array has to be sampled with a period p2 similar to the elemental image period, which is the lenslet array pitch as imaged in the detector plane:

p2 = m2 p,  (4)

where m2 is the optical magnification between the lenslet and the detector plane. The position of the sampling grid (Fig. 3) determines the viewing angle of the reconstructed image. We can generate different perspectives appropriate to different viewing directions θ by appropriately choosing the location of the sampling grid, determined by the vector s^θ = (sx^θ, sy^θ)^T (Fig. 3).

Fig. 2. Multichannel modeling of time-division multiplexing CII.

In our application the position of the sampling grid needs to be aligned for each channel to compensate for the relative motion between the lenslet array and the CCD. The position vector s_k^θ of the sampling grid for a θ-view reconstruction from the kth II is given by

s_k^θ = (sx^θ, sy^θ)^T + m2 (dxk, dyk)^T.  (5)

Therefore we obtain s_k^θ by adding to s^θ the kth lenslet shift as it is reflected in the detector domain.
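The periodic extraction with a per-channel grid offset can be sketched as follows. This snippet is ours; integer offsets and an integer period p2 are simplifying assumptions (a real implementation would interpolate).

```python
import numpy as np

def extract_perspective(ii, p2, s_theta, d_k=(0, 0), m2=1.0):
    """Sample the II with period p2 starting at s_k = s_theta + m2*d_k (Eq. (5))."""
    sx = int(round(s_theta[0] + m2 * d_k[0]))
    sy = int(round(s_theta[1] + m2 * d_k[1]))
    return ii[sx::p2, sy::p2]

# Toy II: a 4x4 array of elemental images, each 3x3 detector pixels (p2 = 3).
ii = np.arange(12 * 12).reshape(12, 12)
g = extract_perspective(ii, p2=3, s_theta=(1, 1))        # one pixel per lenslet
```

Choosing a different s_theta selects a different pixel inside every elemental image, i.e., a different viewing direction; adding m2*d_k realigns the grid for each shifted II of the sequence.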

B. Superresolution Image Generation

Numerous SR approaches and algorithms were developed in the past two decades. Typical applications of SR methods can be found in the fields of remote sensing, medical imaging, reconnaissance, and video standard conversion. For a detailed overview of SR restoration approaches and methods see, for example, Refs. 12–14. In this paper we use the iterative backprojection (IBP) method.13,15,16 The IBP approach was borrowed by Irani and Peleg15,16 from the field of computer-aided tomography. Further improvement of the method was made by Cohen and Dinstein,17 who integrated the method with polyphase filtering. We use the IBP method in this study because of its efficiency and relative simplicity.

The set of low-resolution images {g_k(n′, m′; θ)}_{k=1}^N reconstructed from the set of measured IIs is the input for the IBP algorithm. An ideal restoration would yield a perfectly sampled version of f_θ(x′, y′) on a higher-resolution grid, a grid that is denser than that of g_k(nΔx2, mΔy2), having pixel dimensions (Δx, Δy) smaller than (Δx2, Δy2), respectively.

The restoration starts with an arbitrary guess f̂_θ^(0)(nΔx, mΔy) for the high-resolution image. A possible guess is the average of the images of the sequence.15,16 At each iteration step n, the imaging process (Fig. 2) is simulated to obtain a set of simulated low-resolution images {g_k^(n)(n′, m′; θ)}_{k=1}^N corresponding to the low-resolution image sequence reconstructed from the measurements {g_k(n′, m′; θ)}_{k=1}^N. The kth simulated image at the nth iteration step is given by

g_k^(n) = [T_k(f̂_θ^(n)) * h_PSF]↓s2,  (6)

where T_k is the translation operator of the kth channel, h_PSF is the optical PSF, ↓s2 is the decimation operator for downsampling, and * denotes the convolution operator. At each iteration step, the difference images {g_k − g_k^(n)}_{k=1}^N are used to improve the previous guess f̂^(n) by use of the following update scheme:

f̂_θ^(n+1) = f̂_θ^(n) + (1/N) Σ_{k=1}^N T_k⁻¹([g_k − g_k^(n)]↑s1 * p),  (7)

where ↑s1 is the upsampling operator inverse to ↓s2 and p is a backprojection kernel determined by h_PSF. The operator T_k⁻¹ is the inverse of the geometric transformation operator T_k. In our case T_k consists only of translation, and T_k⁻¹ is a registration operator that properly aligns the difference images by performing shifts inverse to those of T_k. For T_k that consists only of translation, as in our case, the kernel p must obey the following constraint:16

‖δ − h_PSF * p‖2 ≤ 1.  (8)

Inequality (8) can be written in the Fourier domain as

0 ≤ |1 − H(ν) P(ν)| ≤ 1 − ε,  (9)

where H(ν) and P(ν) are the Fourier transforms of h_PSF and p, respectively, and ε is a small positive constant.
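A 1-D toy sketch of the IBP iteration (ours, under simplifying assumptions: a delta PSF, a delta backprojection kernel p that trivially satisfies constraint (8), integer fine-grid shifts, and circular boundaries) shows Eqs. (6) and (7) recovering a fine-grid signal from two shifted, downsampled channels:

```python
import numpy as np

def downsample(x, s2):           # decimation operator (down-arrow s2)
    return x[::s2]

def upsample(x, s2):             # zero-filling operator (up-arrow s1)
    y = np.zeros(len(x) * s2); y[::s2] = x; return y

def ibp(gs, shifts, s2, n_iter=20):
    """f <- f + (1/N) sum_k T_k^-1[(g_k - g_k^(n)) upsampled], per Eq. (7)."""
    f = upsample(sum(gs) / len(gs), s2)          # initial guess: average
    for _ in range(n_iter):
        for g_k, d in zip(gs, shifts):
            sim = downsample(np.roll(f, d), s2)  # Eq. (6) with delta PSF
            f += np.roll(upsample(g_k - sim, s2), -d) / len(gs)
    return f

# Ground truth on the fine grid, observed through two shifted channels.
truth = np.array([0., 1., 0., 2., 0., 3., 0., 4.])
shifts = [0, 1]                                  # fine-grid shifts per channel
gs = [downsample(np.roll(truth, d), 2) for d in shifts]
est = ibp(gs, shifts, s2=2)
```

Neither channel alone sees the odd samples; together the two shifted channels constrain every fine-grid sample, and the update drives the simulated channels toward the measurements.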

5. Experimental Results

In this section we present computer reconstructions demonstrating the effectiveness of the proposed method. Figure 4 illustrates the optical setup. Two dice with linear dimensions of 15 and 7 mm are used as 3-D objects. The lenslet array has 53 × 53 lenslets. Each lenslet element is square shaped with dimensions 1.09 mm × 1.09 mm, with less than 7.6-µm separation between the lenslet elements. Therefore the lenslet pitch is p = 1.1 mm. The focal length of the lenslets is 5.2 mm. The II is formed on the CCD camera by insertion of a camera lens with a focal length of 50 mm between the CCD camera and the lenslet array. The camera we used is a Kodak Megaplus with a CCD array having 2029 horizontal × 2044 vertical elements. The elemental images are stored in a computer as 10 bits of data, so the quantization error is negligible. An example of a typical array of elementary images is shown in Fig. 5.

We measure the overall optical PSF h_PSF required in Eq. (6) by imaging a point source located approximately 2 m from the lenslet array. Each lenslet produces an image of the point source, forming an array of PSFs in the CCD plane. Because the point source was relatively far from the lenslets, the PSFs are approximately equally spaced. An example of a typical lenslet PSF is depicted in Fig. 6. We calculated the PSF in Fig. 6 by averaging 100 elemental images of the point source. The II of the point source is also used to determine p2 in Eq. (4), which we obtained by measuring the distance between the elementary images.

Fig. 3. Sampling grid (dashed–dotted lines) used to sample the II to generate an image appropriate to viewing direction θ. Circles represent elementary images.

Fig. 4. Optical setup used in the experiment.

Figure 7 demonstrates computational reconstruction appropriate to two viewing directions. The two dice, placed one upon the other, are located approximately 9 cm from the lenslet array. We captured a sequence of four IIs by shifting the lenslet array horizontally. Low-resolution reconstruction (with only one II, by use of the algorithm described in Subsection 4.A) is shown in Figs. 7(a) and 7(b). The images have a size of 39 by 39 pixels. Figures 7(c) and 7(d) show the high-resolution reconstruction of the viewing directions of Figs. 7(a) and 7(b) by use of time-division multiplexing CII with the four images of the captured sequence. It can be seen that the reconstructed image quality is improved and that the horizontal resolution has increased. Because the scanning is performed only in the horizontal direction, true resolution improvement is achieved only in the horizontal direction. To maintain the aspect ratio, we present the high-resolution images on a rectangular grid (195 × 195 pixels) by repeating each row five times and interpolating the image. We perform the interpolation to obtain a more pleasant visualization by smoothing the blocking effect caused by the repetition of the rows. Because it is done on a dense grid, it barely affects the true vertical resolution, which remains similar to that of the original image. To achieve true resolution improvement in both the horizontal and the vertical directions, the subpixel motion needs to be done in both directions. In such a case, the repetition of the rows is not required to maintain the aspect ratio, and the effect of interpolation can be combined in the iteration process.18

The SR ability of the time-division multiplexing CII method is demonstrated in Fig. 8. In this example, the small die is farther away from the lenslet array. Figure 8(a) illustrates a low-resolution reconstruction from only one II. The pixels of the die are smeared and distorted by the aliasing effect; therefore the die faces cannot be recognized. Using the proposed method, the die faces are recovered and can be recognized in the reconstructed image [Fig. 8(d)]. The reconstruction in Fig. 8(d) is carried out by use of time-division multiplexing CII with a sequence of five IIs shifted horizontally. The aspect ratio is maintained by row duplication and interpolation as for the images in Figs. 7(c) and 7(d).

In Fig. 8(b) we show the interpolation of five images of the sequence on a high-resolution grid (195 × 195 pixels), without applying the IBP method. It can be seen that the interpolated image is smoother but the pixels on the die faces cannot be resolved. This indicates the advantage of using digital reconstruction with time-division multiplexing integral imaging over the optical time-division multiplexing method6 in which the sequence of images is basically integrated by the eye.

Fig. 5. Enlarged part of a captured II.

Fig. 6. Example of a measured PSF. The horizontal and vertical axes are in units of CCD pixels.

Fig. 7. (a), (b) Low-resolution reconstructed images from only one II appropriate for two viewing directions; (c), (d) high-resolution reconstruction of (a) and (b), respectively, by use of the time-division multiplexing CII method with a sequence of four shifted IIs.

An example of the convergence rate of the iterative algorithm is demonstrated in Fig. 8(c), showing the root-mean-square error ⟨‖f̂_θ^(n) − f̂_θ^(n−1)‖²⟩^(1/2) as a function of the iteration step n. Typically the algorithm converges in approximately 5–20 iteration steps. The convergence is mainly determined by the backprojection filter p. If the backprojection filter is chosen such that the bound in inequality (9) approaches zero, the convergence is faster. As it approaches unity, the convergence is slower but the algorithm is less sensitive to noise.

In Fig. 9 we demonstrate the resolution improvement by comparing the spectra of the low- and high-resolution images. The normalized average spectrum in the horizontal direction of the image in Fig. 8(a) is denoted by the solid curve, and that of the reconstructed image in Fig. 8(d) by the dashed curve. It can be seen that the reconstructed image contains many more high-frequency components. The space–bandwidth product19,20 of the low-resolution image is 0.26, and that of the high-resolution image is almost doubled, to 0.51. To calculate the space–bandwidth product we used the root-mean-square width criterion.20
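The RMS-width criterion can be sketched as follows. This snippet is ours, and its normalization is an assumed convention for illustration only (it is not claimed to reproduce the 0.26 and 0.51 values above): the space–bandwidth product is estimated as the product of the RMS width of the signal and the RMS width of its centered power spectrum.

```python
import numpy as np

def rms_width(values, coords):
    """RMS width of |values|^2 treated as a weight over coords."""
    w = np.abs(values) ** 2
    w = w / w.sum()
    mean = (coords * w).sum()
    return np.sqrt((((coords - mean) ** 2) * w).sum())

def space_bandwidth_product(signal, dx=1.0):
    """Assumed convention: (RMS spatial width) x (RMS spectral width)."""
    n = len(signal)
    x = np.arange(n) * dx
    nu = np.fft.fftshift(np.fft.fftfreq(n, d=dx))
    spectrum = np.fft.fftshift(np.fft.fft(signal))
    return rms_width(signal, x) * rms_width(spectrum, nu)

# A modulated signal occupies more bandwidth over the same spatial support,
# so its product is larger.
smooth = np.cos(np.linspace(0, np.pi, 64)) ** 2
sharp = smooth * np.cos(np.linspace(0, 16 * np.pi, 64))
```

A reconstruction that genuinely recovers high-frequency content, rather than merely interpolating, raises this product, which is the sense in which the 0.26 → 0.51 increase above quantifies the resolution gain.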

The reconstruction in Fig. 8(d) with time-division multiplexing CII is shown in contrast with Figs. 10(a) and 10(b). Figures 10(a) and 10(b) are examples of image restoration by application of powerful image-processing techniques to only one image. Figure 10(a) illustrates restoration of the low-resolution image shown in Fig. 8(a) by use of the Wiener filter.21 The Wiener filter is the optimal linear reconstruction filter, given by

M(ν) = H*(ν) / [|H(ν)|² + Φ(ν)],  (10)

where H(ν) is the Fourier transform of h_PSF, ν is the spatial angular frequency vector, and Φ(ν) is the noise-to-signal ratio. Because the noise-to-signal ratio Φ(ν) is not known, we treat it as a free parameter Φ that we fit to achieve the best visual restoration. Figure 10(b) demonstrates reconstruction with the IBP method with only one image [Fig. 8(a)] at the input (instead of the entire sequence). The reconstruction is performed on a high-resolution grid similar to that of Fig. 8(d). It can be seen that both restoration methods improve the image quality but do not attain the SR obtained from a sequence of images.
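Equation (10) with a free scalar noise-to-signal parameter can be sketched directly in the Fourier domain. The snippet is ours; the circular convolution model and the parameter name `nsr` are assumptions for illustration.

```python
import numpy as np

def wiener_restore(g, h_psf, nsr=0.01):
    """Apply M = H* / (|H|^2 + nsr) of Eq. (10) to a blurred image g."""
    H = np.fft.fft2(h_psf, s=g.shape)
    M = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(g) * M))

# Sanity check: with a delta PSF and tiny nsr the filter is near-identity.
img = np.zeros((8, 8)); img[3, 4] = 1.0
delta = np.zeros((8, 8)); delta[0, 0] = 1.0
restored = wiener_restore(img, delta, nsr=1e-6)
```

As in the text, `nsr` is tuned by eye: a larger value suppresses noise amplification at frequencies where |H| is small, at the cost of less deblurring.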

In the following we add a few practical remarks regarding the reconstruction algorithm. The first remark concerns the filters h_PSF and p required in Eqs. (6) and (7). In the reconstructions of Figs. 7(c) and 7(d) we used the PSF h_PSF measured with a point source. However, the precise h_PSF is not strictly necessary, as demonstrated in the reconstruction of Fig. 8(d), which was performed with an estimated PSF. In the reconstruction of Fig. 8(d), the PSF is estimated to be Gaussian with a standard deviation of 0.6 pixel (of the CCD), and the algorithm provided results similar to those obtained with the precise PSF. In both examples, the backprojection filter was chosen to be p = h_PSF².16 The second remark concerns the subpixel shifts obtained by the lenslet array motion. In principle those shifts do not have to be equally spaced. For example, in both Figs. 7 and 8 the lenslet array was shifted in steps of 22 µm, appropriate to a 0.2-pixel shift (on a low-resolution grid). Therefore the subpixel shifts of the five IIs used in the example of Fig. 8 are equally spaced; that is, dk = (0, 0), (0.2, 0), (0.4, 0), (0.6, 0), (0.8, 0). However, the shifts of the four IIs used in the example of Fig. 7 are not spread uniformly along a pixel; that is, dk = (0, 0), (0.2, 0), (0.4, 0), (0.6, 0). Still, as demonstrated, good reconstruction can be obtained if careful registration is performed in Eq. (7). In general, uniform subpixel sampling is preferred because it increases the robustness of the SR algorithm.

Fig. 8. (a) Low-resolution image reconstructed from one II, (b) interpolation of five images of the sequence, (c) convergence rate, (d) high-resolution reconstruction with the time-division multiplexing CII method.

Fig. 9. Comparison of the average spectrum in the scanning (horizontal) direction of the low-resolution (LR) image (solid curve) and a high-resolution (HR) reconstruction (dashed curve) by use of time-division multiplexing CII.

Fig. 10. Image enhancement of a low-resolution image: (a) Wiener-filtered image, (b) reconstruction by application of the CII method to only one image from the sequence.

6. Conclusions

In this paper we proposed a CII method for reconstruction of high-resolution perspectives of 3-D objects. To overcome the optical limitation on the spatial resolution of a reconstructed image, we suggest performing time-division multiplexing, by which a sequence of IIs is captured with slightly different locations of the lenslet array. The lenslet array shifts are smaller than the lenslet array pitch, yielding subpixel shifts in the reconstructed image plane. Then the IBP SR method is applied digitally to the measured sequence to obtain a high-resolution image. The method is demonstrated on image sequences obtained by horizontal linear scanning. In general, SR can be obtained in both the horizontal and the vertical directions by application of a planar motion within one lenslet pitch.

The authors thank Ju-Seong Jang for his advice on integral imaging. We acknowledge the partial support of Connecticut Innovation Inc. grant 01Y11.

References
1. T. Okoshi, "Three-dimensional displays," Proc. IEEE 68, 548–564 (1980).
2. H. Arimoto and B. Javidi, "Integral three-dimensional imaging with computed reconstruction," Opt. Lett. 26, 157–159 (2001).
3. F. Okano, H. Hoshino, J. Arai, and I. Yuyama, "Real-time pickup method for a three-dimensional image based on integral photography," Appl. Opt. 36, 1598–1603 (1997).
4. J. Arai, F. Okano, H. Hoshino, and I. Yuyama, "Gradient-index lens-array method based on real-time integral photography for three-dimensional images," Appl. Opt. 37, 2034–2045 (1998).
5. T. Naemura, T. Yoshida, and H. Harashima, "3-D computer graphics based on integral photography," Opt. Express 8, 255–262 (2001), http://www.opticsexpress.org.
6. J. S. Jang and B. Javidi, "Improved viewing resolution of three-dimensional integral imaging by use of nonstationary micro-optics," Opt. Lett. 27, 324–326 (2002).
7. J. S. Jang and B. Javidi, "Three-dimensional synthetic aperture integral imaging," Opt. Lett. 27, 1144–1146 (2002).
8. Y. Frauel and B. Javidi, "Digital three-dimensional image correlation by use of computer-reconstructed integral imaging," Appl. Opt. 41, 5488–5496 (2002).
9. C. B. Burckhardt, "Optimum parameters and resolution limitation of integral photography," J. Opt. Soc. Am. 58, 71–76 (1968).
10. T. Okoshi, "Optimum design and depth resolution of lens-sheet and projection-type three-dimensional displays," Appl. Opt. 10, 2284–2291 (1971).
11. H. Hoshino, F. Okano, H. Isono, and I. Yuyama, "Analysis of resolution limitation of integral photography," J. Opt. Soc. Am. A 15, 2059–2065 (1998).
12. M. Elad and A. Feuer, "Restoration of a single superresolution image from several blurred, noisy, and undersampled measured images," IEEE Trans. Image Process. 6, 1646–1658 (1997).
13. A. Stern, E. Kempner, A. Shukrun, and N. S. Kopeika, "Restoration and resolution enhancement of a single image from a vibration-distorted image sequence," Opt. Eng. 39, 2451–2457 (2000).
14. A. Stern, Y. Porat, A. Ben-Dor, and N. S. Kopeika, "Enhanced-resolution image restoration from a sequence of low-frequency vibrated images by use of convex projections," Appl. Opt. 40, 4706–4715 (2001).
15. M. Irani and S. Peleg, "Improving resolution by image registration," CVGIP: Graph. Models Image Process. 53, 231–239 (1991).
16. M. Irani and S. Peleg, "Motion analysis for image enhancement: resolution, occlusion, and transparency," J. Visual Commun. Image Represent. 4, 324–335 (1993).
17. B. Cohen and I. Dinstein, "Polyphase back-projection filtering for image resolution enhancement," IEE Proc. Vision Image Signal Process. 147, 318–322 (2000).
18. R. L. Lagendijk, Iterative Identification and Restoration of Images (Kluwer Academic, Dordrecht, The Netherlands, 1991).
19. B. E. A. Saleh and M. C. Teich, Fundamentals of Photonics (Wiley, New York, 1991).
20. Z. Zalevsky, D. Mendlovic, and A. W. Lohmann, "Understanding superresolution in Wigner space," J. Opt. Soc. Am. A 17, 2422–2430 (2000).
21. A. K. Jain, Fundamentals of Digital Image Processing (Prentice-Hall, Englewood Cliffs, N.J., 1989).
