Random Projections Imaging With Extended Space-Bandwidth Product
Preprint. To appear in IEEE/OSA Journal of Display Technology, vol. 3, no. 3, Sept. 2007.
Abstract— We propose a novel approach to imaging that is not based on the traditional optical imaging architecture. With the new approach the image is reconstructed and visualized from random projections of the input object. The random projections are implemented within a single exposure by using a random phase mask, which can be placed on a lens. For objects that have a sparse representation in some known domain (e.g., Fourier or wavelet), the novel imaging system has a larger effective space-bandwidth product than conventional imaging systems. This implies, for example, that more object pixels may be reconstructed and visualized than the number of pixels of the image sensor. We present simulation results on the utility of the new approach. The proposed approach can have broad applications in efficient image capture, visualization, and display, given the ever-increasing demands for larger and higher-resolution images, faster image communications, and multi-dimensional image communications such as 3D TV and display.
Index Terms— Sparse representation, compressed sensing, compressed imaging, random phase encoding, space-bandwidth product, efficient image visualization and display.
I. INTRODUCTION

Performance of image visualization and display systems is very dependent on imaging sensors, hardware, and algorithms in terms of resolution, sampling size, capture speed, processing rates, power consumption, reconstruction error, etc. Clearly, there could be many benefits if an image can be visualized and/or displayed by capturing the smallest number of elements that contain enough information to represent the image with reasonable accuracy. This efficiency is critical due to the ever-increasing demands to enable larger and higher-resolution image capture and display, faster image communications, and multi-dimensional image communications such as 3D TV and display [1]. Imaging is a process that maps real object points from the "object" space into another real or virtual "image" space. The image capturing process may be carried out in one or more
Manuscript submitted May 21, 2006; accepted February 13, 2007.
A. Stern is with the Electro-Optics Engineering Unit, Ben-Gurion University of the Negev, Beer-Sheva 84105, Israel (phone: 972-8-6461840; e-mail: [email protected]).
B. Javidi is with the Department of Electrical and Computer Engineering, University of Connecticut, Storrs, CT 06269-2157 (e-mail: [email protected]).
steps depending on the imaging sensor and the optics. There are two main common requirements of the recording step in any conventional imaging process. The first is that the imaging model obeys some structural rule. For instance, two-dimensional (2D) imaging with a lens produces an inverted and magnified copy of the object plane; each object point is copied into the image plane according to the same rule. With a three-dimensional stereoscopic recording process, there are two 2D subsystems, each performing linear shift-invariant imaging. Three-dimensional holography maps each object point into a fringe pattern according to a given rule. This requirement of obeying a structural rule is desirable since it permits a simple representation of the object and a relatively simple analysis of the performance of the system. The second requirement is that the mathematical mapping model describing the imaging process should be invertible. This requirement is necessary in order to permit full reconstruction (optical or numerical) of the object. It implies that the Shannon number representing the spatial degrees of freedom of the system should be at least as high as the number of object points. For example, with a conventional digital camera one cannot expect to capture more object points than the number of pixels of the image sensor. In terms of imaging system specifications, this second requirement implies that the system space-bandwidth product (SBP) should be large enough to provide sufficient field-of-view (FOV) and depth of field to cover the entire object with a desired resolution. We note that the second requirement is strictly necessary only if there is no a priori information available about the type of the object and if the imaging system has to handle all types of objects equally well. However, in general this is not the case. For example, we usually do not use the same camera both to capture humanly-intelligible images and to capture random salt-and-pepper noise.
Even if the camera is used only to capture human-intelligible images, often not all image samples are necessary. Captured images are often redundant, and compression techniques are employed to reduce the redundancy. This raises the question: is it strictly necessary to acquire all the image samples in a pedantic way and only later compress them? Can one optically capture fewer samples without compromising the quality of reconstruction? The answer is positive, owing to the theory of compressed sensing (CS) [2]-[6]. The basic idea behind
Adrian Stern and Bahram Javidi, Fellow, IEEE
CS is that an image can be accurately reconstructed from fewer measurements than the nominal number of pixels if it is compressible by a known transform, such as a wavelet or Fourier transform. This implies that if an object is compressible by having a sparse representation in some domain, it can be captured with fewer samples than its SBP. Clearly, there are many diverse applications and benefits for imaging and image processing systems that operate on very large volumes of data [7]-[14]. The price to be paid for implementing a CS-based imaging system is giving up the first requirement mentioned above: the convenient structural form of the imaging process model. In the following we propose a non-conventional imaging approach that performs CS via random projections.
II. THE IMAGING SYSTEM

In this paper, we present a novel method to image objects that have a sparse representation in some known domain. The efficiency of the novel imaging system is higher than that of conventional imaging systems. The new imaging approach is based on decomposing the object into a set of random projections, following the scheme shown in Fig. 1.
In Fig. 1, the object f consisting of N pixels is imaged by taking a set of M random projections g, where M<N. It is assumed that f has a sparse representation in some known domain, so that it can be composed by a transform Ψ and only K nonzero coefficients of α; that is, f = Ψα, where only K (K<N) entries of α are nonzero. We will refer to such an object as a K-sparse object. Many natural images are assumed to be sparse or nearly sparse in some domain. For instance, for the purpose of image compression it is commonly assumed that images are nearly sparse in the Fourier or some wavelet domain, so that N−K coefficients are set to zero. By taking M orthogonal random projections Φ of f we obtain the M compressed sensing measurements g = ΦΨα [2]-[6]. M has to be sufficiently larger than K; typically M ≥ 3K [4]. It is also assumed that Ψ is incoherent with Φ, that is, their bases are essentially uncorrelated [2], [15]. The incoherence property holds for many pairs of bases. In particular, it holds with high probability between any arbitrary basis Ψ and the random projections Φ.
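As a concrete sketch of the measurement model g = ΦΨα, the following fragment builds a K-sparse coefficient vector and takes M < N random projections. All sizes, the seed, and the Gaussian Φ are illustrative assumptions; the paper implements Φ optically.

```python
import numpy as np

rng = np.random.default_rng(0)

N, K = 128, 8            # object length and sparsity (illustrative)
M = 3 * K                # rule of thumb from the text: M >= 3K

# K-sparse coefficient vector alpha in the representation domain Psi
alpha = np.zeros(N)
alpha[rng.choice(N, K, replace=False)] = rng.standard_normal(K)

Psi = np.eye(N)          # identity basis for simplicity; could be a wavelet basis
f = Psi @ alpha          # object: f = Psi alpha

# M random projections (rows of Phi); a Gaussian Phi is incoherent
# with any fixed basis Psi with high probability
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
g = Phi @ f              # M compressed-sensing measurements, M < N
```

Note that g has only M = 24 entries while f has N = 128; the recovery problem discussed next is therefore underdetermined and relies on the sparsity of α.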
In order to reconstruct f we first estimate the coefficients α by solving the following minimization problem [2]:

\hat{\boldsymbol\alpha}_p = \arg\min_{\boldsymbol\alpha'} \|\boldsymbol\alpha'\|_p \quad \text{subject to} \quad \mathbf{g} = \boldsymbol\Phi\boldsymbol\Psi\boldsymbol\alpha',   (1)

where \|\cdot\|_p denotes the l_p norm, given by \|\boldsymbol\alpha'\|_p = \left(\sum_{i=1}^{N} |\alpha'_i|^p\right)^{1/p}, and p \in [0,1]. Thus we find \hat{\boldsymbol\alpha}_p by choosing, from all coefficient vectors α' that are related to the measured image by g = ΦΨα', the one with the minimum p-norm. Once \hat{\boldsymbol\alpha}_p is found, the object is reconstructed by \hat{\mathbf{f}}_p = \boldsymbol\Psi\hat{\boldsymbol\alpha}_p. According to Eq. (1), the reconstruction \hat{\mathbf{f}}_p has transform coefficients with the smallest l_p norm among all objects generating the same measured data g. Ideally, a purely sparse signal can be exactly reconstructed by (1) with p = 0 [2]-[6]. The l_0 norm \|\boldsymbol\alpha'\|_0 simply counts the number of nonzero entries of α'. Therefore, a K-sparse object can be estimated from exactly M = K+1 measurements of g by solving Eq. (1) with p = 0. However, implementation of the l_0 estimator requires combinatorial enumeration of the \binom{N}{K} possible sparse subspaces, which is prohibitively complex. A more practical approach is to estimate f by solving Eq. (1) with p = 1 [2],[4],[15],[16], for which traditional linear programming techniques are available. The price to be paid for using the l_1 estimator instead of l_0 is that more than K measurements of g are required, that is, M > K+1.
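The l_1 estimator of Eq. (1) can indeed be cast as a linear program by splitting α' into its positive and negative parts. A minimal SciPy sketch follows; the sizes, seed, and the Gaussian matrix standing in for ΦΨ are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
N, K, M = 64, 4, 24                    # illustrative sizes, M >= 3K

alpha = np.zeros(N)
alpha[rng.choice(N, K, replace=False)] = rng.standard_normal(K)

A = rng.standard_normal((M, N)) / np.sqrt(M)   # stands in for Phi Psi
g = A @ alpha                                  # measured data

# min ||a||_1  s.t.  A a = g, with a = u - v, u >= 0, v >= 0:
# minimize sum(u) + sum(v) subject to [A, -A] [u; v] = g
c = np.ones(2 * N)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=g,
              bounds=[(0, None)] * (2 * N))
alpha_hat = res.x[:N] - res.x[N:]     # recovered coefficients
```

With M well above 3K and a Gaussian measurement matrix, `alpha_hat` should match `alpha` to solver tolerance; with fewer measurements the l_1 solution starts to miss small coefficients.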
In this work we implement the random projection operator Φ in Fig. 1 by using a random phase mask. One possible optical setup using such a mask is depicted in Fig. 2. The complex field amplitude at an image point r_i due to an object point r_o is given by:
u(\mathbf{r}_i, \mathbf{r}_o) = K_1 \int e^{j\pi(\mathbf{r}_\varphi - \mathbf{r}_o)^2/\lambda z_1}\, e^{j\varphi(\mathbf{r}_\varphi)}\, e^{-j\pi \mathbf{r}_\varphi^2/\lambda f_l}\, e^{j\pi(\mathbf{r}_i - \mathbf{r}_\varphi)^2/\lambda z_2}\, d\mathbf{r}_\varphi,   (2)

where λ is the wavelength, \varphi(\mathbf{r}_\varphi) is the distribution of the random phase mask, f_l is the focal length of the lens, and K_1 is a multiplicative constant. The integral in (2) is evaluated over all phase mask points \mathbf{r}_\varphi. Equation (2) defines the relation between the input and output fields. Note that due to the random phase mask \varphi(\mathbf{r}_\varphi) this relation is shift variant and spatially random. With reference to the CS model (Fig. 1), equation (2) defines the continuous operator Φ. In the appendix we show that if the correlation length ρ of the random phase is sufficiently small with respect to the other dimensions of the imaging system, then the discrete operator Φ performs the required random projections (for the exact conditions see the appendix). Consequently, Φ and Ψ are incoherent with high probability [2], as required for the CS solution via (1).
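A discretized, one-dimensional numeric sketch of the kernel in Eq. (2) illustrates the shift-variant, spatially random response. The piecewise-constant mask model, its extent, and the sampling choices below are assumptions, loosely following the parameters of the simulation section.

```python
import numpy as np

rng = np.random.default_rng(2)

lam = 0.5e-6                      # wavelength (m)
z1, z2, fl = 0.209, 0.20, 0.20    # object distance, image distance, focal length
rho = 10e-6                       # mask correlation length

# piecewise-constant random phase mask: one uniform phase per cell of width rho
n_cells = 2048
xi = (np.arange(n_cells) - n_cells // 2) * rho      # mask coordinates
phi = rng.uniform(0.0, 2.0 * np.pi, n_cells)
k = np.pi / lam

def u(x_i, x_o):
    """Discretized 1D version of Eq. (2): Fresnel kernel object->mask,
    random phase, lens quadratic phase, Fresnel kernel mask->image."""
    integrand = np.exp(1j * (k * (xi - x_o) ** 2 / z1
                             + phi
                             - k * xi ** 2 / fl
                             + k * (x_i - xi) ** 2 / z2))
    return rho * integrand.sum()

# intensity PSFs of two neighbouring object points (cf. Fig. 3(b))
x_img = np.arange(-53, 53) * 100e-6
psf0 = np.array([abs(u(x, 0.0)) ** 2 for x in x_img])
psf1 = np.array([abs(u(x, -1e-3)) ** 2 for x in x_img])
# each PSF is spread over the whole image line, and the two are
# expected to be essentially uncorrelated
```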
Figure 3 illustrates the effect of the phase mask on the intensity distribution in the image plane for two object points. The structural rule of a conventional imaging process is evident in Fig. 3(a): the two object points produce two narrow spots in the image. Both spots have the same shape, that of the diffraction-limited point spread function (PSF), and the separation between them is set by the lateral magnification of the system. In contrast, we see in Fig. 3(b) that with the random phase mask the information of each object point is randomly spread over the entire image plane, and the intensity PSFs of the two object points are uncorrelated, as required of the random projection operator Φ in Fig. 1. Fig. 3(b) illustrates the fact that with the system in Fig. 2 the object
points are randomly projected onto the image plane; therefore, according to the CS scheme in Fig. 1, numerical reconstruction of compressible objects is possible even if they are captured with insufficient SBP.
III. SIMULATION RESULTS

We have simulated, using Matlab, the setup of Fig. 2 with z_1 = 20.9 cm, focal length of the lens f_l = 20 cm, z_2 = f_l, and wavelength λ = 0.5 µm. The random phase mask is assumed to have correlation length ρ = 10 µm; its size is 16.38 × 16.38 cm². We performed our simulation in one dimension to reduce the computational burden involved in calculating the shift-variant PSF. However, for visualization purposes, we present in Fig. 4 two-dimensional images for which each column is calculated separately. The columns of the original image in Fig. 4(a) each have N = 128 pixels. Since the original image is piecewise smooth, it has a sparse representation in the Haar wavelet domain [17]. That is, if Ψ in Fig. 1 represents a discrete Haar wavelet transform operator, then fewer than 128 coefficients α are required to represent the object. Indeed, each column of the captured image in Fig. 4(b) has only M = 106 pixels. The pixel sizes in the object and image planes are 1 mm and 100 µm, respectively. For the simulation of the captured image in Fig. 4(b) we assumed totally incoherent illumination conditions, which are more common. In this case the intensity PSF is given by the squared amplitude of the coherent field: \Phi(\mathbf{r}_i, \mathbf{r}_o) = |u(\mathbf{r}_i, \mathbf{r}_o)|^2. It is noted that the scheme of Fig. 2 can be implemented with coherent illumination too.
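The Haar-sparsity assumption is easy to check numerically. The sketch below builds an orthonormal Haar matrix recursively and transforms a hypothetical piecewise-constant column (illustrative data, not the actual phantom):

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar transform matrix; n must be a power of two."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, [1.0, 1.0]) / np.sqrt(2.0)                # coarse part
    bot = np.kron(np.eye(n // 2), [1.0, -1.0]) / np.sqrt(2.0)  # finest details
    return np.vstack([top, bot])

N = 128
H = haar_matrix(N)                    # rows form an orthonormal basis

# a piecewise-constant column, qualitatively like a phantom slice
f = np.zeros(N)
f[20:60] = 1.0
f[70:75] = 0.5

alpha = H @ f                         # Haar coefficients
K = int(np.sum(np.abs(alpha) > 1e-10))
# K is far smaller than N: each jump excites only O(log N) coefficients
```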
Fig. 4(c) shows the reconstructed image. Each column of the reconstructed image has 128 pixels. The estimation of the coefficients α is performed using the Matching Pursuit algorithm [18]. Matching Pursuit is a computationally simple, iterative, greedy algorithm that yields a very good approximation of \hat{\boldsymbol\alpha}_1 if ΦΨ is incoherent and M is sufficiently large [6]. It runs as follows:

1) Set \mathbf{r}_0 = \mathbf{g}, \hat{\boldsymbol\alpha} = 0, i = 1, where \mathbf{r}_0 is the initial residuum, \hat{\boldsymbol\alpha} the estimated coefficient vector of size N, and i the iteration step.

2) Select the column vector \boldsymbol\Omega_{n_i} of the matrix \boldsymbol\Omega = \boldsymbol\Phi\boldsymbol\Psi that is most correlated with the residuum:

n_i = \arg\max_{n \in 1,\dots,N} \left|\langle \mathbf{r}_{i-1}, \boldsymbol\Omega_n \rangle\right|,   (3)

where \langle \mathbf{r}_{i-1}, \boldsymbol\Omega_n \rangle = \sum_{l=1}^{M} r_{i-1}[l]\, \Omega_n^*[l].

3) Calculate the residuum for the vector \boldsymbol\Omega_{n_i}:

\mathbf{r}_i = \mathbf{r}_{i-1} - \frac{\langle \mathbf{r}_{i-1}, \boldsymbol\Omega_{n_i} \rangle}{\|\boldsymbol\Omega_{n_i}\|^2}\, \boldsymbol\Omega_{n_i}.   (4)

4) Update the n_i'th estimated coefficient:

\hat\alpha[n_i] := \hat\alpha[n_i] + \frac{\langle \mathbf{r}_{i-1}, \boldsymbol\Omega_{n_i} \rangle}{\|\boldsymbol\Omega_{n_i}\|^2}.   (5)

5) If \|\mathbf{r}_i\|^2 > \varepsilon \|\mathbf{g}\|^2, where ε is the minimum proportion of the signal energy that may be left in the residuum, then set i = i + 1 and go to step 2; otherwise, terminate.
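Steps 1)-5) above can be written compactly in Python. The demo problem at the bottom (a Gaussian Ω and a hand-picked sparse vector) is an illustrative assumption, not the paper's optical ΦΨ:

```python
import numpy as np

def matching_pursuit(Omega, g, eps=1e-8, max_iter=5000):
    """Matching Pursuit for g = Omega @ alpha, following steps 1)-5)."""
    M, N = Omega.shape
    alpha_hat = np.zeros(N)
    r = g.astype(float).copy()            # step 1: initial residuum
    g_energy = g @ g
    norms2 = np.sum(Omega ** 2, axis=0)   # ||Omega_n||^2 for every column
    for _ in range(max_iter):
        corr = Omega.T @ r                # <r_{i-1}, Omega_n> for all n
        n_i = int(np.argmax(np.abs(corr)))          # step 2: best column
        coef = corr[n_i] / norms2[n_i]
        r = r - coef * Omega[:, n_i]      # step 3: update the residuum
        alpha_hat[n_i] += coef            # step 4: update coefficient n_i
        if r @ r <= eps * g_energy:       # step 5: stopping rule
            break
    return alpha_hat

# hypothetical demo: recover a 5-sparse vector from 60 of 128 samples
rng = np.random.default_rng(0)
Omega = rng.standard_normal((60, 128))
alpha_true = np.zeros(128)
alpha_true[[3, 40, 77, 90, 120]] = [1.0, -2.0, 0.5, 3.0, -1.5]
alpha_mp = matching_pursuit(Omega, Omega @ alpha_true)
```

The residual energy decreases at every iteration, so the loop terminates with a reconstruction consistent with the measurements; for an incoherent Ω the selected columns coincide with the true support.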
It can be seen in Fig. 4 that the FOV and the resolution of the original image are maintained (note that the white circle is one pixel wide at some locations), even though it is reconstructed from a captured image that has 17.2% fewer pixels per column than the original. With a two-dimensional implementation of the system in Fig. 2, similar results are expected with more than a 31% reduction in the number of pixels required to represent the original image.
IV. CONCLUSION

In conclusion, we have presented a novel imaging approach that utilizes the concept of CS. This approach permits more effective imaging of objects that have a sparse representation in some domain, in the sense that the required SBP of the imaging system may be less than that of the object. This means that, for a given system SBP dictated by the imaging components, a larger FOV or resolution, or both, can be obtained for compressible objects than with conventional imaging. We note that our aim in this work is only to introduce the proposed novel imaging approach and to prove its concept. The optical setup implementing Fig. 1 may be further optimized by considering a different layout than that of Fig. 2. Depending on the type of sparsity of the object, the reconstruction may be optimized by post-processing and by multi-scale compressed sensing [3]. The reconstruction algorithm may be accelerated by exploiting the structure of Ψ [5], which is beneficial if very large images are considered. As a final note, we believe that the concept presented in this paper may be extended effectively to three-dimensional imaging, as three-dimensional images are highly compressible [7]-[9], [13].
APPENDIX

For simplicity of notation we consider the one-dimensional system; the extension to two dimensions is straightforward. The relation between the samples of f(x_o) at points x_o = n\Delta_o, -N/2 \le n \le N/2 - 1, n \in \mathbb{Z}, and g(x_i) at points x_i = m\Delta_i, -M/2 \le m \le M/2 - 1, m \in \mathbb{Z}, is given by the sampled one-dimensional version of (2):
u(n\Delta_o, m\Delta_i) = K_1 \int_{-L/2}^{L/2} e^{j\pi(\xi - n\Delta_o)^2/\lambda z_1}\, e^{j\varphi(\xi)}\, e^{-j\pi\xi^2/\lambda f_l}\, e^{j\pi(m\Delta_i - \xi)^2/\lambda z_2}\, d\xi, \quad -\frac{N}{2} \le n \le \frac{N}{2}-1, \; -\frac{M}{2} \le m \le \frac{M}{2}-1,   (A1)

where L is the size of the random phase mask. Hence the
vector g consisting of the M samples of g(x_i) is related to the vector f consisting of the N samples of f(x_o) by the linear equation \mathbf{g} = \boldsymbol\Phi\mathbf{f}, where Φ is a matrix with N columns and M rows (as in Fig. 1) with entries:

\Phi_{m,n} = u(n\Delta_o, m\Delta_i).   (A2)
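The matrix Φ of (A2) can be assembled directly by evaluating the sampled kernel of (A1) numerically. The sketch below uses small illustrative sizes (an assumption; the geometry and pitches follow the simulation section) and then checks that neighbouring columns are nearly uncorrelated, as required in the sequel:

```python
import numpy as np

rng = np.random.default_rng(3)

lam, z1, z2, fl = 0.5e-6, 0.209, 0.20, 0.20   # geometry as in Section III
rho = 10e-6                                   # mask correlation length
d_o, d_i = 1e-3, 100e-6                       # object / image pixel pitches
N, M = 32, 28                                 # small sizes for a quick check

n_cells = 4096
xi = (np.arange(n_cells) - n_cells // 2) * rho       # mask sample points
phi = rng.uniform(0.0, 2.0 * np.pi, n_cells)         # random phase per cell
k = np.pi / lam
mask = np.exp(1j * (phi - k * xi ** 2 / fl))         # mask times lens phase

x_o = (np.arange(N) - N // 2) * d_o                  # object sample positions
x_i = (np.arange(M) - M // 2) * d_i                  # image sample positions

# Phi[m, n] = u(n*d_o, m*d_i), the sampled kernel of (A1)/(A2)
Phi = np.empty((M, N), dtype=complex)
for a in range(M):
    prop2 = np.exp(1j * k * (x_i[a] - xi) ** 2 / z2)
    for b in range(N):
        prop1 = np.exp(1j * k * (xi - x_o[b]) ** 2 / z1)
        Phi[a, b] = rho * np.sum(prop1 * mask * prop2)

# normalized correlation between two neighbouring columns
c0 = Phi[:, 0] / np.linalg.norm(Phi[:, 0])
c1 = Phi[:, 1] / np.linalg.norm(Phi[:, 1])
corr01 = abs(np.vdot(c0, c1))     # expected to be well below 1
```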
We want Φ to perform uncorrelated random projections; therefore the columns of Φ need to be uncorrelated. The physical meaning of this requirement is that the point responses to any two object points separated by k\Delta_o, k \in \mathbb{Z}, should be uncorrelated, that is [19]:

C_{u,u}(k) = E\!\left[u(n\Delta_o, m\Delta_i)\, u^*((n+k)\Delta_o, m\Delta_i)\right] - E\!\left[u(n\Delta_o, m\Delta_i)\right] E\!\left[u^*((n+k)\Delta_o, m\Delta_i)\right] = 0,   (A3)

where E[·] denotes the mean operator and the asterisk denotes the complex conjugate. Substituting (A1) in (A3) and using the linearity of the mean operator we obtain
C_{u,u}(k) = K_1 K_1^* \int_{-L/2}^{L/2} \int_{-L/2}^{L/2} \left\{ E\!\left[e^{j[\varphi(\eta)-\varphi(\xi)]}\right] - E\!\left[e^{j\varphi(\eta)}\right] E\!\left[e^{-j\varphi(\xi)}\right] \right\} e^{j\pi(\eta - n\Delta_o)^2/\lambda z_1}\, e^{-j\pi\eta^2/\lambda f_l}\, e^{j\pi(m\Delta_i - \eta)^2/\lambda z_2}\, e^{-j\pi(\xi - (n+k)\Delta_o)^2/\lambda z_1}\, e^{j\pi\xi^2/\lambda f_l}\, e^{-j\pi(m\Delta_i - \xi)^2/\lambda z_2}\, d\eta\, d\xi.   (A4)
For a phase mask having transmission t(\xi) = e^{j\varphi(\xi)}, where the phase \varphi(\xi) is a wide-sense stationary Gaussian random process, it can be shown that the mean and autocorrelation of t(\xi) are given by [20]:

E\!\left[e^{j\varphi(\xi)}\right] = e^{j\bar\varphi}\, e^{-\sigma^2/2},   (A5)

E\!\left[e^{j[\varphi(\eta)-\varphi(\xi)]}\right] = e^{-\sigma^2[1-\gamma(\eta-\xi)]},   (A6)

where \bar\varphi denotes the mean of \varphi(\xi), \sigma^2 is its variance, and \gamma(\eta-\xi) is the normalized autocorrelation function of \varphi(\xi) [20]. Substituting (A5) and (A6) in (A4), changing variables \eta = \xi + \tau, and rearranging terms yields
C_{u,u}(k) = K_1 K_1^*\, e^{-j\pi k(k+2n)\Delta_o^2/\lambda z_1}\, e^{-\sigma^2} \int_{-L/2}^{L/2} e^{j2\pi k\Delta_o\xi/\lambda z_1} \int \left[e^{\sigma^2\gamma(\tau)} - 1\right] e^{j\frac{\pi}{\lambda}\left(\frac{1}{z_1} - \frac{1}{f_l} + \frac{1}{z_2}\right)\tau^2}\, e^{j\frac{2\pi}{\lambda}\left[\left(\frac{1}{z_1} - \frac{1}{f_l} + \frac{1}{z_2}\right)\xi - \frac{n\Delta_o}{z_1} - \frac{m\Delta_i}{z_2}\right]\tau}\, d\tau\, d\xi.   (A7)
If the correlation length of the phase is ρ, then \gamma(\tau) is essentially zero for |\tau| > \rho/2; therefore the range of the inner integral in (A7) reduces to (-\rho/2, \rho/2), yielding
C_{u,u}(k) = K_1 K_1^*\, e^{-j\pi k(k+2n)\Delta_o^2/\lambda z_1}\, e^{-\sigma^2} \int_{-L/2}^{L/2} e^{j2\pi k\Delta_o\xi/\lambda z_1} \int_{-\rho/2}^{\rho/2} \left[e^{\sigma^2\gamma(\tau)} - 1\right] e^{j\frac{\pi}{\lambda}\left(\frac{1}{z_1} - \frac{1}{f_l} + \frac{1}{z_2}\right)\tau^2}\, e^{j\frac{2\pi}{\lambda}\left[\left(\frac{1}{z_1} - \frac{1}{f_l} + \frac{1}{z_2}\right)\xi - \frac{n\Delta_o}{z_1} - \frac{m\Delta_i}{z_2}\right]\tau}\, d\tau\, d\xi.   (A8)

If the correlation length ρ is sufficiently small so that
\rho^2\left(\frac{1}{z_1} - \frac{1}{f_l} + \frac{1}{z_2}\right) \ll \lambda,   (A9)

then the quadratic term in (A8) satisfies e^{j\frac{\pi}{\lambda}\left(\frac{1}{z_1} - \frac{1}{f_l} + \frac{1}{z_2}\right)\tau^2} \approx 1, yielding
\left|C_{u,u}(k)\right| \le \left|K_1\right|^2 e^{-\sigma^2}\left(e^{\sigma^2} - 1\right)\rho \left| \int_{-L/2}^{L/2} e^{j2\pi k\Delta_o\xi/\lambda z_1}\, \mathrm{sinc}\!\left[\frac{\rho}{\lambda}\left(\left(\frac{1}{z_1} - \frac{1}{f_l} + \frac{1}{z_2}\right)\xi - \frac{n\Delta_o}{z_1} - \frac{m\Delta_i}{z_2}\right)\right] d\xi \right|.   (A10)

Let us inspect the integral in (A10). It integrates a highly oscillating term e^{j2\pi k\Delta_o\xi/\lambda z_1} multiplied by a (shifted) sinc function. If the period of e^{j2\pi k\Delta_o\xi/\lambda z_1} is much smaller than the integration range (-L/2, L/2) and is also much smaller than the distance between the zeros of the sinc function, \xi_0 = \frac{\lambda}{\rho}\left(\frac{1}{z_1} - \frac{1}{f_l} + \frac{1}{z_2}\right)^{-1}, then the integral is essentially
zero. These conditions hold if:

\Delta_o \gg \frac{\lambda z_1}{L}   (A11)

and

\Delta_o \gg \rho z_1 \left(\frac{1}{z_1} - \frac{1}{f_l} + \frac{1}{z_2}\right).   (A12)
Thus we conclude that if the conditions (A9), (A11) and (A12) hold then the columns of Φ are uncorrelated.
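As a sanity check, conditions (A9), (A11), and (A12) can be evaluated for the parameter values of the simulation section. The numbers below are taken from the text; the factor-of-ten margin used to read "\gg" is an assumption.

```python
lam = 0.5e-6            # wavelength (m)
z1, z2, fl = 0.209, 0.20, 0.20
rho = 10e-6             # mask correlation length
L = 0.1638              # mask size (m)
d_o = 1e-3              # object pixel pitch Delta_o

q = abs(1.0 / z1 - 1.0 / fl + 1.0 / z2)   # recurring 1/z1 - 1/fl + 1/z2 term

a9_lhs = rho ** 2 * q        # (A9):  rho^2 * q << lambda
a11_rhs = lam * z1 / L       # (A11): Delta_o >> lambda * z1 / L
a12_rhs = rho * z1 * q       # (A12): Delta_o >> rho * z1 * q

ok = (a9_lhs < 0.1 * lam) and (d_o > 10 * a11_rhs) and (d_o > 10 * a12_rhs)
# all three inequalities hold comfortably for these parameter values
```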
REFERENCES
[1] B. Javidi and F. Okano, eds., Three Dimensional Television, Video, and Display Technologies, Springer-Verlag, Berlin, 2002.
[2] D. L. Donoho, "Compressed sensing," manuscript (2004). http://www-stat.stanford.edu/%7Edonoho/Reports/2004/CompressedSensing091604.pdf.
[3] E. J. Candes, J. Romberg, and T. Tao, "Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information," manuscript (2004).
[4] Y. Tsaig and D. L. Donoho, "Extensions to compressed sensing," Signal Processing, vol. 86, pp. 549-571, March 2006.
[5] D. Takhar, J. N. Laska, M. B. Wakin, M. F. Duarte, D. Baron, S. Sarvotham, K. F. Kelly, and R. G. Baraniuk, "A new compressive imaging camera architecture using optical-domain compression," in Proc. Computational Imaging IV at SPIE Electronic Imaging, San Jose, CA, Jan. 2006.
[6] M. F. Duarte, M. B. Wakin, and R. G. Baraniuk, "Fast reconstruction of piecewise smooth signals from random projections," in Proc. SPARS05, Rennes, France, 2005.
[7] T. Nomura, A. Okazaki, M. Kameda, Y. Morimoto, and B. Javidi, "Image reconstruction from compressed encrypted digital hologram," Opt. Eng., vol. 44, 2005.
[8] T. J. Naughton, Y. Frauel, B. Javidi, and E. Tajahuerce, "Compression of digital holograms for three-dimensional object reconstruction and recognition," Appl. Opt., vol. 41, pp. 4124-4132, 2002.
[9] T. J. Naughton, J. B. McDonald, and B. Javidi, "Efficient compression of digital holograms for Internet transmission of three-dimensional images," Appl. Opt., vol. 42, pp. 4758-4764, August 2003.
[10] F. Sadjadi, ed., Selected Papers on Automatic Target Recognition, SPIE Press, 1999.
[11] H. Kwon and N. M. Nasrabadi, "Kernel RX-algorithm: a nonlinear anomaly detector for hyperspectral imagery," IEEE Trans. Geosci. Remote Sens., vol. 43, pp. 388-397, 2005.
[12] R. Martínez-Cuenca, G. Saavedra, M. Martínez-Corral, and B. Javidi, "Enhanced depth of field integral imaging with sensor resolution constraints," Opt. Express, vol. 12, pp. 5237-5242, Oct. 2004. http://www.opticsinfobase.org/abstract.cfm?URI=oe-12-21-5237.
[13] S. K. Yeom, A. Stern, and B. Javidi, "Compression of 3D color integral images," Opt. Express, vol. 12, pp. 1632-1642, April 2004.
[14] A. Mahalanobis, "Processing of multi-sensor data using correlation filters," Proc. SPIE, vol. 3466, pp. 56-64, 1998.
[15] M. Elad and A. M. Bruckstein, "A generalized uncertainty principle and sparse representation in pairs of RN bases," IEEE Trans. Inf. Theory, vol. 48, pp. 2558-2567, 2002.
[16] D. L. Donoho and M. Elad, "Optimally sparse representation from overcomplete dictionaries via l1 norm minimization," Proc. Natl. Acad. Sci. USA, vol. 100, pp. 2197-2202, March 2003.
[17] S. Mallat, A Wavelet Tour of Signal Processing, 2nd ed., Academic Press, 1999.
[18] S. Mallat and Z. Zhang, "Matching pursuits with time-frequency dictionaries," IEEE Trans. Signal Process., vol. 41, no. 12, pp. 3397-3415, 1993.
[19] A. Papoulis, Probability, Random Variables and Stochastic Processes, 2nd ed., ch. 9, McGraw-Hill, 1984.
[20] J. W. Goodman, Statistical Optics, ch. 8.3, John Wiley & Sons, 1985.
Adrian Stern received his B.Sc., M.Sc. (cum laude), and Ph.D. degrees from Ben-Gurion University of the Negev, Israel, in 1988, 1997, and 2003, respectively, all in Electrical and Computer Engineering. During 2002-2003 he was a postdoctoral fellow in the Electrical and Computer Engineering Department at the University of Connecticut. He is now with the Department of Electro-Optical Engineering at Ben-Gurion University of the Negev, Israel. His current research interests include processing of image sequences, image restoration, image and video compression, three-dimensional imaging, biomedical imaging, optical encryption, and non-conventional imaging.

Bahram Javidi (Fellow, IEEE) received the B.S. degree in electrical engineering from George Washington University, Washington, D.C., and the M.S. and Ph.D. degrees in electrical engineering from the Pennsylvania State University, University Park. He is Board of Trustees Distinguished Professor at the University of Connecticut, Storrs. He has supervised over 80 masters and doctoral graduate students, postdoctoral students, and visiting professors during his academic career. He has published over 230 technical articles in major journals. He has published over 270 conference proceedings, including over 100 invited conference papers, and 60 invited presentations. His papers have been cited over 3200 times, according to the citation index of the Web of Science. His papers have appeared in Physics Today and Nature, and his research has been cited in the Frontiers in Engineering Newsletter, published by the National Academy of Engineering, IEEE Spectrum, Science, New Scientist, and the National Science Foundation Newsletter. He has completed eight books, including Physics of Automatic Target Recognition (Springer, 2007); Optical
Imaging Sensors and Systems for Homeland Security Applications (Springer, 2005); Optical and Digital Techniques for Information Security (Springer, 2005); Image Recognition: Algorithms, Systems, and Applications (Marcel Dekker, 2002); Three Dimensional Television, Video, and Display Technologies (Springer-Verlag, 2002); and Smart Imaging Systems (SPIE Press, 2001). He is currently on the Board of Editors of the Proceedings of the IEEE, the Editor-in-Chief of the Springer-Verlag series on Advanced Science and Technologies for Security Applications, and the IEEE/OSA Journal of Display Technology. He has served as topical editor for Springer-Verlag, Marcel Dekker publishing company, Optical Engineering Journal, and the IEEE/SPIE Press Series on Imaging Science and Engineering. He has held visiting positions during his sabbatical leave at the Massachusetts Institute of Technology, the United States Air Force Rome Lab at Hanscom Base, and Thomson-CSF Research Labs in Orsay, France. He is a consultant to industry in the areas of optical systems, image recognition systems, and 3-D optical imaging systems. Dr. Javidi is a Fellow of six professional societies, including the IEEE. He was awarded the Dennis Gabor Award in Diffractive Wave Technologies by the SPIE in 2005. He was the recipient of the IEEE Lasers and Electro-Optics Society Distinguished Lecturer Award twice, in 2003 and 2004. He has been awarded the University of Connecticut Board of Trustees Distinguished Professor Award, the School of Engineering Distinguished Professor Award, the University of Connecticut Alumni Association Excellence in Research Award, the Chancellor's Research Excellence Award, and the first Electrical and Computer Engineering Department Outstanding Research Award. In 1990, the National Science Foundation named him a Presidential Young Investigator.
Fig. 1. Imaging scheme of compressed sensing. f and g denote the object and captured image, respectively, written lexicographically, that is, as column vectors of sizes N and M, respectively. The image acquisition process is modeled by the measurement matrix Φ of size M by N. α is an N-length column vector of coefficients of f with respect to the transform Ψ (of size N×N). We assume that f is sparse, so that α has only K nonzero coefficients, and N > M ≥ K.
Fig. 2. One possible optical setup to implement the image acquisition in Fig. 1.
[Fig. 2 labels: object plane, random phase mask, image plane, distances z_1 and z_2. Fig. 1 block diagram: imaging acquisition f → Φ → g; digital image reconstruction: α-estimation followed by Ψ, yielding \hat{\boldsymbol\alpha}_p and \hat{\mathbf{f}}_p; object representation f = Ψα.]
Fig. 3. Two intensity PSFs corresponding to two object points located at x_o = 0 m (solid line) and x_o = -0.001 m (dashed line): (a) obtained with a conventional imaging system (the imaging system in Fig. 2 without the random phase mask); (b) obtained with the system in Fig. 2.
Fig. 4. Imaging simulation results. (a) Original image ("Shepp phantom" from Matlab). (b) Captured image; its columns have 17.2% fewer pixels than those of the original image. (c) Reconstructed image, having the same size as the original.