
Depth-enhanced three-dimensional integral imaging by use of multilayered display devices

Yunhee Kim, Jae-Hyeung Park, Heejin Choi, Joohwan Kim, Seong-Woo Cho, and Byoungho Lee

Integral imaging is one of the promising three-dimensional display techniques and has many advantages. However, one disadvantage of integral imaging is the limited image depth. The image can be displayed only around the central depth plane. We propose a depth-enhanced integral imaging using multilayered display devices. We locate transparent display devices that use liquid crystal in parallel to each other and incorporate them into an integral imaging system. As a result, the proposed method has multiple central depth planes and permits the limitation of expressible depth to be overcome. The principle of the proposed method is explained, and some experimental results are presented. © 2006 Optical Society of America

OCIS codes: 110.2990, 100.6890, 220.2740.

1. Introduction

Integral imaging (integral photography) is a three-dimensional (3D) display technique, first proposed by Lippmann in 1908.1 It is one of the multiview binocular displays, and it utilizes a lens array as an optical plate to provide 3D images. Recently, integral imaging has attracted much attention as an autostereoscopic 3D display technique for its many advantages.2–8 It does not need any special glasses and has continuous viewpoints within the viewing angle. It also provides full parallax and can display real-time 3D animated images in full color owing to the advancement of display devices. It uses incoherent light and has been considered for 3D television and visualization.

Integral imaging is composed of pickup and display steps. In the pickup step an object is imaged on a pickup device through each lens in a lens array, and the elemental images are recorded in the form of a two-dimensional (2D) image array. In the display step, the elemental images displayed on a display device are integrated through a lens array and form a 3D image. Recently, advances in electronic devices such as the charge-coupled-device (CCD) camera, high-definition television, and liquid-crystal display (LCD) have enabled real-time 3D integral imaging.

However, integral imaging still has some problems to be solved: the limitations of viewing angle, image resolution, and image depth. The limitation of image depth is the primary drawback of integral imaging. Figure 1 shows the concept of the limited depth. Generally, an integral imaging system uses a display device and a lens array. There is a focused image plane, called the central depth plane, whose position is determined by the lens law. The location of the central depth plane is calculated from the focal length of the elemental lens and the gap between the lens array and the display device as follows:

1/g + 1/l_c = 1/f,    (1)

where g is the gap between the lens array and the display panel, f is the focal length of the lens, and l_c is the distance from the lens array to the central depth plane. When the gap between the lens array and the display panel is longer than the focal length of the elemental lens, a real image is integrated. When the gap is shorter than the focal length of the elemental lens, a virtual image is integrated. Since images are focused at the central depth plane, the original 3D object is reconstructed exactly only around the central depth plane. The image quality degrades as the image moves away from the central depth plane. This is because the lens array has a limited depth of focus and thus cannot make a fully sharp

The authors are with the School of Electrical Engineering, Seoul National University, Kwanak-Gu Shinlim-Dong, Seoul 151-744, Korea. B. Lee can be reached at [email protected].

Received 28 November 2005; revised 6 March 2006; accepted 6 March 2006; posted 9 March 2006 (Doc. ID 66264).

0003-6935/06/184334-10$15.00/0
© 2006 Optical Society of America

4334 APPLIED OPTICS / Vol. 45, No. 18 / 20 June 2006

3D image along the optical (z) axis. The thickness of the 3D image to be displayed cannot be large because of the severe image quality degradation. As the image gets farther from the central depth plane, it gets more defocused and distorted. A detailed theoretical analysis of the limitation of depth is given in Refs. 9 and 10.
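The relation in Eq. (1), and the real/virtual distinction it implies, can be checked numerically. A minimal sketch in Python; the function name and the sample gaps are illustrative, while f = 22 mm matches the elemental lenses used later in the experiments:

```python
def central_depth_plane(g_mm: float, f_mm: float) -> float:
    """Distance l_c from the lens array to the central depth plane,
    from the lens law 1/g + 1/l_c = 1/f (all lengths in mm).
    Positive l_c: real mode (image in front of the lens array);
    negative l_c: virtual mode (image behind the lens array)."""
    if g_mm == f_mm:
        return float("inf")  # image at infinity when the gap equals f
    return 1.0 / (1.0 / f_mm - 1.0 / g_mm)

# f = 22 mm elemental lens, as in the experiment.
print(central_depth_plane(30.0, 22.0))  # gap > f: positive (real mode)
print(central_depth_plane(17.0, 22.0))  # gap < f: negative (virtual mode)
```

A gap of 30 mm gives l_c = 82.5 mm in front of the array; a gap of 17 mm gives a negative l_c, i.e., a virtual image behind the array.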

Many methods have been studied to alleviate the limitation of image depth.11–18 One method is moving the lens array along the longitudinal direction to change the location of the central depth plane continuously.11 However, this method involves rapid mechanical movement of the lens array and causes some problems such as air resistance and noise. Another is to use a uniaxial crystal plate and an LCD shutter to produce triple central depth planes.13 However, the problems of this scheme are the demand for a very fast shutter and the low quality of the integrated image. Another method is using a mirror barrier array,15,16 which makes two central depth planes by controlling the optical path lengths. In mode 1 the mirror barriers are located vertically between the lens array and the display panel. In mode 2 they change their position to make an angle of 45° with the lens array. The two modes have different optical paths and make two central depth planes. However, this method also needs rapid mechanical movements of the mirror barrier array to change the mode. Another method is adopting polarization devices combined with a beam splitter to make different optical paths.15 Although these do not need mechanical movement, the systems are bulky, and optical efficiency is low due to loss at the beam splitter. Recently a layered-panel integral imaging method has been reported.17 It used two parallel-layered display panels and overcame the translucence problem by the time-multiplexing method. This method enhanced the depth effectively. However, all of the above methods, whether using layered panels, a mirror barrier array, or double devices with polarization devices, make only two central depth planes.

In this paper, we propose a depth-enhanced integral imaging using multilayered display devices. For the multilayered display devices, transparent display devices using liquid crystal (LC) are used. As a result, the proposed method has multiple central depth planes and makes the expressible depth region deeper. In reconstructing images, two methods are used. One is a time-multiplexing method to overcome the translucence problem. The other is a white background method that does not need time multiplexing. To enhance the brightness, we perform an additional experiment with fewer polarizers. The principle of the proposed method is described, and the experimental results are shown.

2. Principle of the Proposed Method

In the proposed method, multilayered display devices are used instead of the conventional single display device. Figure 2 shows the configuration of the proposed method using multilayered display devices. In the conventional method, only one emissive display device is used for displaying the elemental images, and one central depth plane exists. However, in the proposed method, three or more display devices are used, and there are as many central depth planes as there are display devices.

The LCD panel displays an image by transmitting or blocking the backlight. If the backlight unit is removed, it can be used as a spatial light modulator (SLM). Since the LC panels are transparent, we can observe the image displayed in display device 1 even when display device 2 and display device 3 are displaying other images. That is to say, we can display three different elemental images at different locations at the same time. This is the main characteristic that enhances the expressible depth. The display devices are located in parallel, and thus each has a different gap to the lens array. Since the gaps are different, the locations of the central depth planes are determined differently by the lens law, as shown in Fig. 2.

In the case of Fig. 2(a), the elemental images in display device 1 are integrated by the lens array and make an arrow image located around central depth plane 1. The cube image is reconstructed by the elemental image in display device 2 and is located around central depth plane 2. The cylinder image is reconstructed by the elemental image in display device 3 and is located around central depth plane 3. We can obtain multiple central depth planes at the same time, and this is the reason that we can enhance the depth. Figure 2(a) shows the case in which all central depth planes are located in front of the lens array, i.e., all integrated images are displayed in the real mode. Figure 2(b) shows the case in which central depth plane 3 is located behind the lens array (virtual mode),9,11 while central depth planes 1 and 2 are located in front of the lens array (real mode). This can be implemented, as can be seen from Eq. (1), by setting the gap between display device 3 and the lens array to be smaller than the focal length, while the gaps for display devices 1 and 2 are kept greater than the focal length.

Fig. 1. (Color online) Limitation of depth in integral imaging.

Each display device makes a central depth plane at the corresponding location by the lens law. As the number of display devices increases, the number of central depth planes also increases. As a result, the depth over which 3D objects can be located becomes deeper. In Fig. 2(a), if display device 2 is located near display device 3 in such a way that the marginal depth of central depth plane 2 overlaps with the marginal depth of central depth plane 3, then we can display a 3D image continuously within the two marginal depths. In this way, if we use more display devices, more marginal depth planes will overlap with each other. As a result, we can expect that a thick and continuous

depth can be implemented by multilayered display devices.

Figure 2(b) shows the images used in this experiment. The cube and the cylinder are 3D images. The cube is 30 mm in size and located 80 mm behind the lens array. The cylinder is 30 mm in size and located 80 mm in front of the lens array. A sandglass 2D image is located 50 mm in front of the lens array. The elemental images are calculated by using ray optics in a reverse manner of the pickup step. That is, for a point in the object to be integrated, we follow an imaginary ray that originates from the object point, goes through the center of the corresponding lens, and arrives at the display panel. We perform this process for all lenses and all points in the object. In this calculation, computer-generated integral imaging is used.
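The reverse ray-following step just described can be sketched with a simple pinhole-lens model. This is an illustrative simplification, not the authors' actual rendering code; the function name and sample values are assumptions:

```python
def elemental_image_point(x_obj: float, z_obj: float,
                          lens_x: float, gap: float) -> float:
    """Lateral position on the display panel where a ray from an object
    point, passing through an elemental lens center, lands.
    x_obj: lateral object position; z_obj: distance from the lens array
    (positive in front); lens_x: lateral position of the lens center in
    the array plane; gap: lens-array-to-panel distance. All in mm.
    Derived by similar triangles for the ray (x_obj, z_obj) -> (lens_x, 0)
    extended to the panel plane at z = -gap."""
    return lens_x + (lens_x - x_obj) * gap / z_obj

# Hypothetical example: an object point 5 mm off-axis, 80 mm in front of
# the array; 10 mm lens pitch, 30 mm gap. Repeating this over every lens
# and every object point builds up the full set of elemental images.
for lens_x in (-10.0, 0.0, 10.0):
    print(lens_x, elemental_image_point(5.0, 80.0, lens_x, 30.0))
```

Each elemental lens records the same object point at a slightly different panel position, which is what encodes the parallax.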

Fig. 2. (Color online) Concept of the multilayered integral imaging system: (a) all in real modes, (b) two central depth planes in real mode and one in virtual mode. Locations of the integrated images used in the experiments are shown.


In the conventional case, the elemental images of the three images are calculated all together. Elemental images are generated for one display device as shown in Fig. 3(a). In the proposed method, however, the elemental image of the object at each central depth plane is calculated separately and displayed on the corresponding display panel as shown in Fig. 3(b). The elemental image of the cube is displayed on LCD 3, the elemental image of the cylinder is displayed on LCD 2, and the elemental image of the sandglass is displayed on LCD 1.

In the conventional one-display-device case, elemental images with a black background are typically used. However, in the proposed method, the conventional elemental images with a black background cannot be used, because the LC display devices block the backlight and are not transparent where they are black. The black region in the elemental image of the sandglass blocks the backlight, and hence the elemental image of the cylinder displayed in device 2 cannot be observed in that region; the same is true for display device 3. Only a portion of the elemental images is displayed because light is blocked by black areas in the rear display devices.

To solve this problem, two methods are used. One is the expanded time-multiplexing method. For the two-layered panel system, a time-multiplexing method has been proposed recently to overcome the translucence problem.17 We expand the time-multiplexing method for the multilayered system.

Figure 4(a) shows the elemental images for the time-multiplexing method. In the time-multiplexing method, mask patterns are used to solve the translucence problem.17 When we display elemental images for a rear object image, we display black elemental images to implement integrated black mask images at the locations of the front images. In the display devices showing the black elemental images, the remaining regions are set to the white state to maximally transmit the light coming from behind the devices. The black and white elemental images have binary pixel values. The binary elemental images can be generated easily using computer-generated integral imaging by setting the color of the front 3D objects to black. In mode 1, the virtual cube image that is located farthest from the observer is displayed as usual on display device 3. The sandglass and the cylinder located in front of the cube are displayed in black and cover the corresponding region of the cube as if there were a black sandglass and a black cylinder in front of the cube. Here it is worth noting that the black mask elemental images are loaded to display devices 1 and 2, which are farther from the lens array than display device 3. This is because display device 3 generates a virtual integrated image that is farthest from the observer, although display device 3 is closer to the observer than the other display devices. In mode 2, the sandglass that is located in the middle is displayed as usual. The cylinder located in front of the sandglass is displayed in black and covers the sandglass. Display device 3 is operated in the transparent (white) state. In mode 3, the cylinder that is located closest to the observer is displayed as usual, and no black image is used because the cylinder is in front of the other images and is not covered by any image. Display device 1 is used as a backlight (white), and display device 3 is operated in the transparent (white) state. If these three modes are displayed successively fast enough, the three integrated images are observed by the afterimage effect. Using this method, the front image can cover the rear image and the translucence problem can be overcome. For the afterimage effect the frequency of the display device should be high enough. For example, the three-layer display system requires a frequency higher than 180 Hz. However, there is no such high-frequency LCD in commercial use yet. In addition, if more display devices are used, the required frequency increases in proportion to the number of display devices used.

Fig. 3. (Color online) (a) Calculated elemental images in the conventional method and (b) calculated expected elemental images in the proposed method.
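The refresh-rate requirement for the expanded time-multiplexing method scales linearly with the number of layered panels. A trivial sketch, assuming 60 Hz per integrated image (the per-image rate implied by the 180 Hz figure quoted for three layers):

```python
def required_panel_frequency(num_layers: int,
                             per_image_hz: float = 60.0) -> float:
    """Panel refresh rate needed when the modes are shown successively:
    the required frequency grows in proportion to the number of layered
    display devices. per_image_hz = 60 is the assumed flicker-free rate
    per integrated image, which reproduces the 180 Hz figure for three
    layers."""
    return num_layers * per_image_hz

print(required_panel_frequency(3))  # 180.0
print(required_panel_frequency(4))  # 240.0
```

So a four-layer system would already demand a 240 Hz panel under the same assumption, illustrating why the method becomes impractical as layers are added.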

Another method is the white background method. Figure 4(b) shows the elemental images with a white background. Elemental images are displayed as usual except that the black background is whitened. When the background of the elemental images is white, although there may be overlapped translucent regions, all the displays are backlit. Each elemental image on each display panel can be observed at the same time. This method has the translucence problem that the rear image is not covered by the front image. However, it has some advantages. It does not need any complex time multiplexing. Generating the elemental images is as easy as usual, and the system is so simple that it can be implemented easily even with a large number of display devices.
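The white background method amounts to replacing the black background of a conventional elemental image with white so the LC panel transmits the backlight there. A sketch with NumPy; the darkness threshold and the synthetic image are illustrative assumptions, not values from the paper:

```python
import numpy as np

def whiten_background(elemental: np.ndarray,
                      black_thresh: int = 8) -> np.ndarray:
    """Convert a black-background elemental image (H x W x 3, uint8) to
    the white-background form: pixels darker than black_thresh in every
    channel are treated as background and set to white, so the LC panel
    transmits light from behind instead of blocking it."""
    out = elemental.copy()
    background = (elemental < black_thresh).all(axis=-1)
    out[background] = 255
    return out

# Tiny synthetic example: one colored pixel on a black background.
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = (200, 120, 40)
print(whiten_background(img)[1, 1])  # background pixel -> [255 255 255]
```

The object pixels are left untouched; only the background is switched from blocking (black) to transmitting (white).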

3. Experimental Results and Discussion

A. Depth-Enhanced Integral Imaging Using a Multilayered Display Device

In an experiment a Fresnel lens array is used as the lens system.19 It consists of 13 by 13 square elemental Fresnel lenses with a lens width of 10 mm and a focal length of 22 mm. Figure 5 shows the experimental setup, which consists of a lens array and the multilayered display devices. In this experiment three display devices are used. One is an emissive LCD

(LCD 1), and the others are LCD panels (LCD 2, LCD 3) without a backlight unit. These three are parallel-layered and adopted as the display system in integral imaging. Samsung LCDs are used; the size of each LCD is 17 inches (43 cm), with 1280 (horizontal, H) by 1024 (vertical, V) resolution. The pixel size is 0.273 mm (H) by 0.273 mm (V).

In this experiment, we displayed two real images and one virtual image as shown in Fig. 2(b). The gap between the lens array and each display panel is calculated by using the lens law. The gap is approximately 39 mm between LCD 1 and the lens array, 30 mm between LCD 2 and the lens array, and 17 mm between LCD 3 and the lens array. These gaps make central depth planes at 50 mm, 80 mm, and -80 mm, respectively. For the white background method, the cube elemental image is displayed on LCD 3, the elemental image of the cylinder is displayed on LCD 2, and the elemental image of the sandglass is displayed on LCD 1, as shown in Fig. 4(b). For the expanded time-multiplexing method, the three modes of elemental images are displayed on the corresponding display devices as shown in Fig. 4(a).

Fig. 4. (Color online) Elemental images in the proposed method (a) using the time-multiplexing method and (b) using the white background method.

Fig. 5. (Color online) Experimental setup: multilayered display devices and a lens array.
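The quoted gaps can be recovered by inverting the lens law 1/g + 1/l_c = 1/f for g. A small sketch (f = 22 mm as in the experiment; a negative l_c denotes the virtual mode):

```python
def gap_for_depth(l_c_mm: float, f_mm: float) -> float:
    """Panel-to-lens-array gap g that places the central depth plane at
    l_c, from the lens law 1/g + 1/l_c = 1/f (all lengths in mm)."""
    return 1.0 / (1.0 / f_mm - 1.0 / l_c_mm)

# Target central depth planes of the experiment: +50, +80, -80 mm.
for l_c in (50.0, 80.0, -80.0):
    print(round(gap_for_depth(l_c, 22.0), 1))  # ~39.3, 30.3, 17.3
```

The computed gaps of roughly 39, 30, and 17 mm agree with the values stated above.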

To investigate the effect of enhancing the depth, we experimentally compare the integrated image of the conventional scheme with that of the proposed scheme. Figure 6 shows the integrated images produced by the conventional method, which uses one display device. An elemental image is calculated for three different central depths. When the central depth plane is adjusted to 80 mm behind the lens array, we can observe that the cube is integrated well; however, the other images, the sandglass and the cylinder, are severely distorted. When the central depth is 50 mm, the sandglass is integrated clearly, as shown in Fig. 6(b). However, the other images are distorted. Similarly, when the central depth plane is adjusted to 80 mm in front of the lens array, the cube and the sandglass images are not good, although the cylinder is integrated correctly, as shown in Fig. 6(c). As the results show, the image quality is worse when the image is located far from the central depth plane. If we make one image integrate well, the others are distorted, and we cannot get all three correct images at the same time in the conventional one-display-device method. 3D images are displayed only within the restricted depth around one central depth plane in the conventional method.

However, in the proposed method we can display the three images correctly at the same time. Figures 7 and 8 show the experimental results obtained by using the proposed method. Figure 7 shows the integrated images using the white background method. Figure 8 shows the experimental results using the expanded time-multiplexing method in detail. The integrated images obtained in each mode are shown in Figs. 8(a)–8(c), and the expected result obtained by combining the results of the three modes is shown in Fig. 8(d). As shown in Figs. 7 and 8, the three images that have different depths are integrated with good quality without any defocus or distortion. This enhancement of the expressible depth is owing to the multiple central depth planes produced by the multilayered display devices. In the conventional method the expressible depth is less than 50 mm. However, in the proposed method the expressible depth is about 190 mm. When we consider the cube size, the cube image is integrated over the depth from -95 mm to -65 mm. The cylinder occupies the depth from 65 mm to 95 mm. The sandglass is in the middle of the cube and the cylinder, 50 mm in front of the lens array. The total depth difference is about 190 mm, and three central depths are implemented. We can easily recognize the different perspectives between the images as the observing direction is changed, as shown in Fig. 7. The results prove that the proposed method using three display panels enhances the depth remarkably.

Fig. 6. (Color online) Integrated images using the conventional method when the central depth is adjusted (a) at 80 mm behind the lens array, (b) 50 mm in front of the lens array, and (c) 80 mm in front of the lens array.

B. Brightness-Enhanced Multilayered System Using Fewer Polarization Sheets

As mentioned in the above subsection, if we develop the proposed system to have more layered display panels and hence more central depths, a system that has continuous depth can be implemented. However, there are some problems to be solved for the multilayered system. One is the brightness problem. Comparing the brightness of the images of Fig. 6 with that of Fig. 7, the image in the proposed method is less bright than the conventional one. In fact, the integrated image in the conventional method is much brighter than it appears in Fig. 6, because Fig. 6 was obtained with a 200 times shorter exposure time than Fig. 7. The results shown in Fig. 7 are dim because of the polarization sheets between the LC panels. However, polarization sheets are indispensable for displaying images properly on LC panels. If more panels are added to enhance the expressible depth, more polarization sheets are required and the brightness is reduced rapidly. Thus, to implement a desirable system that can display volumetric 3D images without depth restriction, the problem of brightness should be solved first.

To enhance the brightness, we perform an additional experiment with fewer polarization sheets. There are polarization sheets between the LCDs to display images correctly, and they are adjusted to minimize the optical loss. Figure 9(a) shows the configuration of the polarization sheets used in the experiments above, in which four polarization sheets were used. For enhancing the brightness, additional experiments are implemented using fewer polarization sheets as shown in Fig. 9(b): the two polarization sheets of display device 2 are eliminated. In this case the image in LCD 1 is displayed correctly owing to the front polarization sheet. LCD 3 needs an orthogonal polarization in the back for imaging correctly. The orthogonal polarization can be obtained when LCD 2 displays black, because in the black state the polarization direction is unchanged between the back and front of LCD 2. Since the black in LCD 2 does not block the backlight but transmits the white of LCD 1 to LCD 3, the black in LCD 2 is observed as white. Thus only if the color of LCD 2 is reversed are all three images observed correctly.

Fig. 7. (Color online) Integrated images observed from different viewing points using the proposed method with the white background method.

Fig. 8. (Color online) Integrated images using the proposed method with the expanded time-multiplexing method: (a) mode 1, (b) mode 2, (c) mode 3, and (d) expected results obtained by combining the three modes.

Figure 10 shows the elemental images for the proposed method using fewer polarization sheets. The elemental image displayed on LCD 2 is color reversed, and the other elemental images are the same as in Fig. 4(b). The observed color of LCD 2 is reversed as shown in Fig. 10(b). Elemental images with a white background are used. Figure 11 shows the experimental results with fewer polarization sheets. The three images with different depths are observed without distortion, as expected. We can see that the images in Fig. 11 are much brighter than the images in Fig. 7. However, the lower images in Fig. 11 show undesired color in the overlapped region between the cube and the cylinder. The undesired color in the overlapped region occurs because there is no polarization sheet on LCD 2. Without a polarization sheet, the colors displayed on LCD 2 depend on the colors in LCD 3. Figure 12(b) shows this situation. Thus it is necessary to insert a polarization sheet on LCD 2 to prevent undesired colors in the overlapped region. Figure 12(a) shows the configuration with the minimum number of polarization sheets, and Figs. 12(b), 12(c), and 12(d) show integrated images when two, three, and four polarization sheets are used, respectively. The required minimum number of polarization sheets is three, and Figs. 12(c) and 12(d) show the right color in the overlapped region, as expected. However, there is a clear difference in brightness among the results in Figs. 12(b), 12(c), and 12(d). Although using fewer polarization sheets is desirable for brightness, at least as many polarization sheets as display devices are needed for color control. The brightness will decrease as more display devices are used.
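The color reversal applied to the LCD 2 elemental image in the reduced-polarizer configuration is a simple per-pixel inversion. A minimal NumPy sketch (illustrative, not the authors' code; the sample image is synthetic):

```python
import numpy as np

def reverse_colors(elemental: np.ndarray) -> np.ndarray:
    """Color-reverse an elemental image (uint8): with its polarization
    sheets removed, black on LCD 2 transmits the white of LCD 1, so the
    panel is driven with the inverted image to appear correct to the
    observer."""
    return 255 - elemental

# One black pixel and one colored pixel.
img = np.array([[[0, 0, 0], [255, 128, 10]]], dtype=np.uint8)
print(reverse_colors(img))  # black -> white, and each channel inverted
```

Driving LCD 2 with the inverted image makes the observed (re-inverted) colors match the intended elemental image.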

Fig. 9. (Color online) (a) Configuration of the polarization sheets used in the experiments. (b) Configuration using a reduced number of polarization sheets.

Fig. 10. (Color online) Elemental images using the white background method with fewer polarization sheets.

Fig. 11. (Color online) Integrated images observed from different viewpoints using the proposed method with two polarization sheets.


Another problem is that the viewing angle is narrow. Generally in integral imaging the viewing angle depends on the gap between the lens array and the display device. If the gap increases, the viewing angle decreases, and the central depth plane gets closer to the lens array by the lens law. Thus the viewing angle of an image that is located close to the lens array is narrow, while the viewing angle of an image that is located far from the lens array is wide. In the experiment, the sandglass image corresponds to this situation. Since LCD 1, which displays the elemental image of the sandglass, is farthest from the lens array among the LC panels, the viewing angle of the sandglass is the narrowest. In the proposed depth-enhanced method using multilayered display devices, each image has a different viewing angle according to its location. To observe all the images at once, the observer should be within the smallest viewing angle, which restricts the viewing angle of the system.
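The gap dependence described above is often estimated with the approximation 2·arctan(p / 2g) for an elemental lens of pitch p at gap g; this formula is an assumption here, since the paper states only the qualitative trend. With the 10 mm pitch and the three experimental gaps:

```python
import math

def viewing_angle_deg(lens_pitch_mm: float, gap_mm: float) -> float:
    """Approximate full viewing angle of one elemental lens,
    2 * arctan(pitch / (2 * gap)) -- a common integral-imaging estimate
    (assumed, not taken from the paper). Larger gap -> narrower angle."""
    return math.degrees(2.0 * math.atan(lens_pitch_mm / (2.0 * abs(gap_mm))))

# 10 mm lens pitch; gaps of LCD 1, LCD 2, LCD 3. LCD 1 (39 mm gap, the
# sandglass) yields the smallest angle, which limits the whole system.
for g in (39.0, 30.0, 17.0):
    print(round(viewing_angle_deg(10.0, g), 1))
```

Under this estimate the sandglass panel has the narrowest angle (about 15°), consistent with the observation that it limits the system's viewing angle.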

In the proposed method, the concept of using multilayered transparent display devices in parallel is simple and is the main factor that enables the depth enhancement. We can display depth-enhanced 3D images. The white background method or the expanded time-multiplexing method can be used. The brightness of the integrated images is enhanced by using a minimum number of polarization sheets. If a high-frequency LCD or other display devices that do not use polarization sheets are developed, some problems like brightness and translucence may be solved easily.

The pickup procedure for the proposed multilayered integral imaging system needs more study. The proposed display system covers a depth range that is beyond the conventional pickup system of integral imaging, which uses a fixed separation between a lens array and a camera. If we could pick up elemental images of different objects around different central depth planes separately, then it would be straightforward to use those data in our display system. In the time-multiplexing method, making the white and black elemental images requires signal processing. However, this is easy and can be done in real time. In the case that we cannot pick up the objects around different central depth planes separately, an additional image-processing algorithm would be needed, which may be one that extracts the depth of the objects.20,21

4. Conclusion

A depth-enhanced integral imaging with multiple central depth planes is proposed by using multilayered display devices. Transparent LC display panels are used as the multilayered display devices. The panels are located in parallel, each with a different gap to the lens array, which enables the central depth planes to be located at different positions at the same time. Experiments are performed using both the time-multiplexing method and the white background method in reconstructing images. Additional experiments are also performed with fewer polarization sheets. The experimental results show that the expressible depth is expanded remarkably compared with that of the conventional method using a single display device. With the proposed method, the 3D image can be displayed correctly without defocus or distortion over the expanded depth region, and we finally expect to realize a deep volumetric 3D display system.

This work was supported by the Next Generation Information Display R&D Center, one of the 21st Century Frontier R&D Programs funded by the Ministry of Commerce, Industry, and Energy of Korea.

References
1. G. Lippmann, "La photographie intégrale," Compt.-Rend. Acad. Sci. 146, 446–451 (1908).
2. T. Okoshi, Three-Dimensional Imaging Techniques (Academic, 1976).
3. N. Davies, M. McCormick, and L. Yang, "Three-dimensional imaging systems: a new development," Appl. Opt. 27, 4520–4528 (1988).
4. F. Okano, H. Hoshino, J. Arai, and I. Yuyama, "Gradient-index lens-array method based on real-time integral photography for three-dimensional images," Appl. Opt. 36, 1598–1603 (1997).
5. M. C. Forman, N. Davies, and M. McCormick, "Continuous parallax in discrete pixelated integral three-dimensional displays," J. Opt. Soc. Am. A 20, 411–420 (2003).
6. B. Lee, S. Jung, and J.-H. Park, "Viewing-angle-enhanced integral imaging by lens switching," Opt. Lett. 27, 818–820 (2002).

Fig. 12. (Color online) (a) Configuration using three polarization sheets. Integrated images using the proposed methods when (b) two, (c) three, and (d) four polarization sheets are used.

4342 APPLIED OPTICS / Vol. 45, No. 18 / 20 June 2006

7. S. Jung, J.-H. Park, H. Choi, and B. Lee, "Wide-viewing integral three-dimensional imaging by use of orthogonal polarization switching," Appl. Opt. 42, 2513–2520 (2003).
8. Y. Kim, J.-H. Park, S.-W. Min, S. Jung, H. Choi, and B. Lee, "Wide-viewing-angle integral three-dimensional imaging system by curving a screen and a lens array," Appl. Opt. 44, 546–552 (2005).
9. J.-H. Park, S.-W. Min, S. Jung, and B. Lee, "Analysis of viewing parameters for two display methods based on integral photography," Appl. Opt. 40, 5217–5232 (2001).
10. J. Hong, J.-H. Park, J. Kim, and B. Lee, "Analysis of image depth in integral imaging and its enhancement by correction to elemental images," in Novel Optical Systems Design and Optimization VII, J. Koshel, P. K. Manhart, and R. C. Juergens, eds., Proc. SPIE 5524, 387–395 (2004).
11. B. Lee, S. Jung, S.-W. Min, and J.-H. Park, "Three-dimensional display by use of integral photography with dynamically variable image planes," Opt. Lett. 26, 1481–1482 (2001).
12. S.-W. Min, B. Javidi, and B. Lee, "Enhanced three-dimensional integral imaging system by use of double display devices," Appl. Opt. 42, 4186–4195 (2003).
13. J.-H. Park, S. Jung, H. Choi, and B. Lee, "Integral imaging with multiple image planes using a uniaxial crystal plate," Opt. Express 11, 1862–1875 (2003).
14. H. Choi, J.-H. Park, J. Hong, and B. Lee, "Depth-enhanced integral imaging with a stepped lens array or a composite lens array for three-dimensional display," Jpn. J. Appl. Phys. 43, 5330–5336 (2004).
15. S. Jung, J. Hong, J.-H. Park, Y. Kim, and B. Lee, "Depth-enhanced integral-imaging 3D display using different optical path lengths by polarization devices or mirror barrier array," J. Soc. Inf. Display 12, 461–467 (2004).
16. J. Hong, J.-H. Park, S. Jung, and B. Lee, "Depth-enhanced integral imaging by use of optical path control," Opt. Lett. 29, 1790–1792 (2004).
17. H. Choi, Y. Kim, J.-H. Park, J. Kim, S.-W. Cho, and B. Lee, "Layered-panel integral imaging without the translucent problem," Opt. Express 13, 5769–5776 (2005).
18. J.-S. Jang, F. Jin, and B. Javidi, "Three-dimensional integral imaging with large depth of focus by use of real and virtual image fields," Opt. Lett. 28, 1421–1423 (2003).
19. S.-W. Min, S. Jung, J.-H. Park, and B. Lee, "Study for wide-viewing integral photography using an aspheric Fresnel-lens array," Opt. Eng. 41, 2572–2576 (2002).
20. J.-H. Park, Y. Kim, J. Kim, S.-W. Min, and B. Lee, "Three-dimensional display scheme based on integral imaging with three-dimensional information processing," Opt. Express 12, 6020–6032 (2004).
21. J.-H. Park, S. Jung, H. Choi, Y. Kim, and B. Lee, "Depth extraction by use of a rectangular lens array and one-dimensional elemental image modification," Appl. Opt. 43, 4882–4895 (2004).