Compressive hyperspectral imaging by random separable projections in both the spatial and the spectral domains

Yitzhak August,1 Chaim Vachman,1 Yair Rivenson,2 and Adrian Stern1,*

1Department of Electro-Optical Engineering, Ben-Gurion University of the Negev, P.O. Box 653, Beer-Sheva 84105, Israel

2Department of Electrical & Computer Engineering, Ben-Gurion University of the Negev, P.O. Box 653, Beer-Sheva 84105, Israel

*Corresponding author: [email protected]

Received 5 November 2012; revised 18 February 2013; accepted 21 February 2013; posted 22 February 2013 (Doc. ID 179331); published 22 March 2013

An efficient method and system for compressive sensing of hyperspectral data is presented. Compression efficiency is achieved by randomly encoding both the spatial and the spectral domains of the hyperspectral datacube. Separable sensing architecture is used to reduce the computational complexity associated with the compressive sensing of a large volume of data, which is typical of hyperspectral imaging. The system enables optimizing the ratio between the spatial and the spectral compression sensing ratios. The method is demonstrated by simulations performed on real hyperspectral data. © 2013 Optical Society of America

OCIS codes: 110.4155, 110.4190, 110.4234, 110.1758.

1. Introduction

Hyperspectral (HS) images are used in numerous fields such as biomedical imaging, remote sensing, the food industry, art conservation and restoration, and many more. The amount of data typically captured with HS imaging systems is very large, and it is often highly compressible. This has motivated the application of compressive sensing techniques for HS imaging.

Compressive sensing (CS) [1–3] is a fast-emerging field in the area of digital signal sensing and processing. CS theory provides a framework for sampling sparse or compressible signals more efficiently than is possible with the Shannon–Nyquist sampling scheme. With CS, a compressed version of the signal is obtained already in the acquisition stage, thus averting the need for digital compression. Since CS requires fewer measurements, it can be applied to reduce the number of sensors or to reduce the acquisition time. One natural implementation arena of CS theory is the field of imaging.

The first implementation of CS for imaging was the single-pixel CS camera [4]. The single-pixel CS camera architecture has been used for imaging in the visible, the terahertz [5,6], and the short-wave infrared [7] spectrum. Single-pixel CS cameras are suitable in cases where large detector arrays are not available or are too expensive. Another use of the single-pixel CS camera is in aerospace remote sensing [8,9]; in this case, the motivation is to reduce the cost of data acquisition. Other compressive imaging techniques include single-shot compressive imaging [10,11], compressive holography [12,13], progressive compressive imaging [14], compressive motion tracking [15,16], and CS applications for microscopy [17–19], to name but a few. An overview of CS techniques in optics may be found in [20]. In this work we focus on using CS for HS imaging. Hyperspectral and multispectral imaging may benefit from CS, since HS data is typically highly compressible.

1559-128X/13/100D46-09$15.00/0 © 2013 Optical Society of America

D46 APPLIED OPTICS / Vol. 52, No. 10 / 1 April 2013

Hyperspectral data is typically organized in the form of a cube, which is a three-dimensional (3D) digital array, as shown in Fig. 1. The x–y plane represents the spatial information, and the third dimension is for the spectral reflection as a function of wavelength. Each point in the x–y plane has its own spectral signature, described by a spectral vector. The number of spectral bands in an HS image is in the range of dozens to thousands, where the typical wavelength width of each spectral band ranges from 0.5 up to 10 nm, with some spectral overlap. The common acquisition techniques for HS data are based on spectrometer point scanning and spectrometer line scanning [21,22]. One of the main limitations of these two methods is the relatively slow scanning process. Other limitations arise from the fact that huge amounts of data need to be processed and transmitted. CS-inspired methods can help in handling these limitations. The applicability of CS is based on the fundamental notion that data are sparse, or at least compressible, properties that HS data typically possess; different studies show that an HS cube is sparse and sometimes even extremely sparse [23–30]. If we look at a single narrow spectral window, that is, at a single x–y plane, we have a regular image, which is typically compressible in the wavelet domain. On the other hand, if we look in the spectral direction, λ, we generally also find the data to be extremely redundant. For example, the spectral signature of green grass is unique; thus all the vectors in the HS image that represent reflection from grass have the same spectral signature.

In recent years, several types of CS systems for HS imaging have been proposed [31–36]. In [37], CS HS cube acquisition is accomplished by a method called coded aperture snapshot spectral imagers (CASSI). In the CASSI architecture, the spatial information is first randomly encoded, and then the spectral information is mixed by a shearing operation. CASSI is suboptimal in terms of CS because it employs random signal multiplexing only in the x–y plane, while the spectral domain undergoes a deterministic uniform transformation.

Another implementation of a CS system for HS imaging, presented in [35], is shown in Fig. 2(b). This method follows the single-pixel CS camera technique [Fig. 2(a)], expanded to 3D imaging by replacing the standard detector (a single photodiode) in the single-pixel CS camera with a spectrometer probe. With this architecture, the spatial information is encoded while the spectral information remains unchanged. This mechanism can be considered a parallel spectral acquisition, leaving the spectral dimension unmixed and uncompressed.

In this work, we present a new method for HS image acquisition using CS separable encoding in both the spatial and the spectral domains. We propose a scheme for 3D multiplexing using two stages of multiplexing; the first stage is spatial multiplexing, which is done using the classical scheme of the single-pixel CS camera, and the second stage is spectral multiplexing, introduced in Section 4. The spectral encoding is performed in a single step, and thus the proposed method requires the same number of projections as in [27], while benefiting from random multiplexing of the wavelength domain too.

Fig. 1. (Color online) Hyperspectral cube.

Fig. 2. (Color online) (a) Schematic diagram of the single-pixel CS camera and its photodiode detector. (b) Expansion to multispectral imaging using a grating and a CCD vector.

2. Compressive Sensing

In this section, we briefly review CS theory, which is a technique to recover sparse signals from significantly fewer measurements than needed when using traditional sampling theory. A block diagram of a CS system is depicted in Fig. 3. In this figure, f represents a physical signal, e.g., an object's intensities. α is a vector of components in the sparsifying domain used to represent f; it is a mathematical representation vector that contains mainly zeros or near-zero values. In the image acquisition step, the signal vector f is sampled using the Φ operator, yielding the measurement vector g. The final step in Fig. 3 is the image reconstruction, accomplished by estimation of f using l1-type minimization [1–3].

We assume that an N × 1 vector f that is to be measured can be expressed by f = Ψα, where the N × 1 vector α contains only k ≪ N nonzero elements and Ψ is a sparsifying operator. The measurement vector g ∈ R^(M×1) is obtained by

g = Φf,   (1)

where Φ ∈ R^(M×N) is a sensing matrix. By properly choosing M and Φ, and assuming sparsity of f in the Ψ domain, the signal f can be recovered from the measurements g. The crucial step here is to build a sensing matrix Φ such that it enables accurate recovery of an N-sized f from M < N measurements g. Reconstruction of f from g is guaranteed if the number of measurements M meets the following condition [1,3]:

M ≥ Cμ²k log(N).   (2)

It can be seen that the number of required measurements M depends on the size of the signal N, on its sparsity k, and on μ, the mutual coherence between Φ and Ψ. The mutual coherence is defined by

μ(Φ,Ψ) = √N max_(i,j) |⟨Φi^H, Ψj⟩|,   (3)

where Φi, Ψj are vectors of Φ and Ψ, respectively. The value of μ is in the range 1 ≤ μ ≤ √N; the lower μ is, the better the performance of the system. The original signal f can be recovered by solving the following problem:

f = Ψα subject to min_α {‖g − ΦΨα‖₂² + γ‖α‖₁},   (4)

where ‖·‖₁ is the l1 norm and γ is a regularization weight.
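To make the minimization of Eq. (4) concrete, it can be sketched with a plain iterative shrinkage-thresholding (ISTA) loop. This is only an illustrative stand-in for the solvers cited in the text (the simulations of Section 5 use TwIST); the sizes, sparsity level, and the name `ista` are arbitrary choices for the demo, and Ψ is taken as the identity so that f = α.

```python
import numpy as np

def ista(g, A, gamma=0.05, iters=500):
    """Minimize 0.5*||g - A a||_2^2 + gamma*||a||_1 by iterative
    soft thresholding (ISTA); A plays the role of Phi @ Psi in Eq. (4)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant of the gradient
    a = np.zeros(A.shape[1])
    for _ in range(iters):
        z = a - step * (A.T @ (A @ a - g))   # gradient step on the quadratic term
        a = np.sign(z) * np.maximum(np.abs(z) - step * gamma, 0.0)  # soft threshold
    return a

# toy example: a k-sparse vector sensed with a random Gaussian Phi (Psi = I, so f = alpha)
rng = np.random.default_rng(0)
N, M, k = 128, 64, 5
alpha = np.zeros(N)
alpha[rng.choice(N, k, replace=False)] = rng.uniform(1.0, 2.0, k)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # sensing matrix with ~unit-norm columns
g = Phi @ alpha
alpha_hat = ista(g, Phi, gamma=0.01, iters=3000)
```

With M well above the bound of Eq. (2) and a small regularization weight γ, the recovered vector is close to the true sparse one.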

One of the difficulties of using the CS method for HS imaging is the huge size of the matrices Φ required to represent the sensing operation. Signals in CS theory are represented by vectors with N components. The measurement data is M-dimensional, so the sensing matrix is of size Φ ∈ R^(M×N). Hyperspectral imaging involves 3D signals F ∈ R^(N1×N2×N3), which can be converted by lexicographic ordering to an N-length vector [f = vec(F)]. Since N = N1 × N2 × N3, the sensing matrix size is of the order of (N1 × N2 × N3)². For instance, let us consider the computational aspects of randomly encoding a 3D HS datacube F ∈ R^(N1×N2×N3) with N1 = N2 = N3 = 256. In this case, the sensing matrix Φ will be of size up to 2^24 × 2^24. Such matrices cannot be handled in standard computational systems because of their challenging storage and memory requirements. The optical implementation and sensor calibration of such systems also present a great challenge, because the realization of a random Φ requires the system to have N × M nearly independent modes (degrees of freedom).
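The storage gap described above is easy to quantify. The short calculation below contrasts a dense nonseparable Φ (taking, as a worst case, M on the order of N) with three per-axis factors of the kind introduced in the next section; the float64 assumption is ours, for illustration only.

```python
# back-of-envelope storage comparison for a 256^3 hyperspectral cube
N1 = N2 = N3 = 256
N = N1 * N2 * N3                           # 2**24 voxels -> f is an N-length vector

nonseparable_entries = N * N               # dense Phi with M on the order of N
separable_entries = N1**2 + N2**2 + N3**2  # three per-axis factors

bytes_per_entry = 8                        # float64
nonseparable_pib = nonseparable_entries * bytes_per_entry / 2**50
separable_mib = separable_entries * bytes_per_entry / 2**20
print(f"nonseparable: {nonseparable_pib:.0f} PiB, separable: {separable_mib:.1f} MiB")
```

The dense operator would occupy petabytes, while the separable factors fit in under 2 MiB.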

3. Separable Compressive Sensing

Separable sensing operators are common in many optical systems (e.g., wave propagation) and are often applied in image-processing tasks. Separable CS was proposed in [38–40] to overcome the practical limitations of compressive imaging implementations involving large data, and because separable sensing operators often arise naturally in multidimensional signal processing. As shown in [38–40], a separable system matrix significantly reduces the implementation complexity at the expense of some compression efficiency loss, i.e., more samples are required, compared to nonseparable CS, to accurately reconstruct the signal.

A separable sensing operator Φ can be represented in the form Φy ⊗ Φx, where the symbol ⊗ denotes the Kronecker product, also referred to as the direct product or the tensor product. If Φy = (φy1, φy2, …, φyn) is an n × p matrix and Φx an m × q matrix, then the Kronecker product of Φy and Φx is given by

Φyx = Φy ⊗ Φx =

    [ φy(1,1) Φx   φy(1,2) Φx   ...   φy(1,p) Φx ]
    [ φy(2,1) Φx   φy(2,2) Φx   ...   φy(2,p) Φx ]
    [     ...          ...      ...       ...    ]
    [ φy(n,1) Φx   φy(n,2) Φx   ...   φy(n,p) Φx ].   (5)

Fig. 3. Compressive sensing block diagram [10].

As we described in the previous section, in the case of an n-dimensional signal we use the vec(·) operator to create a column vector from a matrix F by stacking its columns:


vec(F) = [ f1 ]
         [ f2 ]
         [ ⋮  ]
         [ fn ].   (6)

Let us consider the two-dimensional (2D) signal F = (f1, f2, …, fn) and the measurement G = (g1, g2, …, gn). F and G are matrix representations of f and g. In such a case, Eq. (1) can be written in the form [38]

vec(G) = Φyx vec(F) = (Φx^T ⊗ Φy) vec(F),   (7)

and, using properties of the Kronecker product, we can write

G = Φy F Φx.   (8)

Consequently, Eq. (4) can be rewritten as

F = ΨAΨ^T subject to min_A {‖vec(G) − vec(Φy Ψ A Ψ^T Φx)‖₂² + γ‖vec(A)‖₁},   α = vec(A).   (9)

Equation (9) provides a simple way to handle the huge matrix–vector multiplication of Eq. (4). For example, if each of Φy, Φx, and F is of size ∼1000 × 1000 entries, Eq. (9) requires operations with matrices of the same order, whereas the standard compressive sensing recovery problem, Eq. (4), involves algebraic manipulations with matrices of the order of ∼10^6 × 10^6.
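The equivalence between the Kronecker form and the small-matrix form G = Φy F Φx can be verified numerically. The sketch below uses NumPy with arbitrary small sizes; note that with column-stacking vec(·), the identity reads vec(Φy F Φx) = (Φx^T ⊗ Φy) vec(F).

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, p, q = 3, 4, 5, 2
Phi_y = rng.standard_normal((m, n))
F     = rng.standard_normal((n, p))
Phi_x = rng.standard_normal((p, q))

G = Phi_y @ F @ Phi_x                                        # small-matrix form
vecG_kron = np.kron(Phi_x.T, Phi_y) @ F.flatten(order="F")   # Kronecker form on vec(F)

# both routes give the same measurements
assert np.allclose(G.flatten(order="F"), vecG_kron)
```

The Kronecker matrix here is only 6 × 20, but it grows as the product of all dimensions, which is exactly what the separable form avoids.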

When considering CS with a separable sensing scheme, it was shown in [38] that the mutual coherence of the separable sensing system is given by

μ(Φyx, Ψyx) = μ(Φy ⊗ Φx, Ψy ⊗ Ψx) = μ(Φy, Ψy) μ(Φx, Ψx).   (10)

The mutual coherence in Eq. (10) can be shown to be larger than that of a nonseparable sensing operator. Therefore, according to Eq. (2), the number of measurements M required to accurately reconstruct the signal with the separable sensing scheme is larger. For example, if Φ is a random orthogonal matrix uniformly distributed on the unit sphere, it can be shown that

μ(Φy ⊗ Φx, Ψ) / μ(Φ, Ψ) ≈ 2 log₁₀(√N) / √(2 log₁₀(N)) = √((1/2) log₁₀(N)),   (11)

meaning that √((1/2) log₁₀(N)) times more measurements are required to accurately reconstruct the signal using a separable sensing operator than with a nonseparable random operator [38]. This is a reasonable cost for gaining the computational simplification. In practice, as was numerically demonstrated in [38], the loss in compression efficiency is quite moderate, and practically smaller than the one predicted by Eq. (11).
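The multiplicativity of the mutual coherence in Eq. (10) can be checked numerically. The sketch below implements the definition of Eq. (3) for random orthogonal sensing matrices and the canonical (identity) sparsifying basis; the sizes and seed are arbitrary choices for the demo.

```python
import numpy as np

def mu(Phi, Psi):
    """Mutual coherence of Eq. (3): sqrt(N) * max |<phi_i, psi_j>|,
    assuming unit-norm rows of Phi and columns of Psi."""
    N = Psi.shape[0]
    return np.sqrt(N) * np.max(np.abs(Phi @ Psi))

rng = np.random.default_rng(2)
Ny, Nx = 8, 16
# random orthogonal sensing matrices (orthonormal rows, via QR)
Phi_y = np.linalg.qr(rng.standard_normal((Ny, Ny)))[0]
Phi_x = np.linalg.qr(rng.standard_normal((Nx, Nx)))[0]
Psi_y, Psi_x = np.eye(Ny), np.eye(Nx)      # canonical sparsifying basis

lhs = mu(np.kron(Phi_y, Phi_x), np.kron(Psi_y, Psi_x))  # coherence of the separable system
rhs = mu(Phi_y, Psi_y) * mu(Phi_x, Psi_x)               # product of the factors, Eq. (10)
assert np.isclose(lhs, rhs)
```

The equality is exact because inner products of Kronecker-product vectors factor into products of the per-axis inner products, and √(NyNx) = √Ny · √Nx.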

4. Implementation Architecture for Spatial and Separable Spectral Encoding for Hyperspectral Compressive Sensing

In this section, we present an optical implementation scheme that permits both spatial and spectral random encoding. In Section 4.A we describe the spectral encoding method, and in Section 4.B we give the full description of the system architecture for compressive HS imaging by separable spatial and spectral operators (CHISSS).

A. Spectral Encoding

In this section we describe the principle of the proposed separable spectrum sensing operation. Figure 4 provides a schematic description of the spectral encoding principle. In this description, the input signal is the optical spatially multiplexed signal at the detector S3 in Fig. 2.

Figure 4 shows a mechanism that replaces detector a or b in Fig. 2. The input signal, S3, is the output of the single-pixel CS camera presented in Fig. 2. Thus, it is a spectral vector that we wish to encode and measure using the photosensor. In Fig. 4, the input optical signal at S3 passes through a diffractive or dispersive element working as a spectral-to-spatial converter. A spatial grating can be used to separate the spectral components in the horizontal y direction, thus converting the light spot into a spectral line. The spectral line in Fig. 4 (along the y direction) is spatially encoded using the coded aperture mask C1. Here, C1 is a single line of coded apertures. This operation gives each wavelength its own weight, i.e., each wavelength is multiplied by the local coded-aperture transmission value. To focus and collect the different spectral components, regular converging lenses can be used. In practice, we propose a parallel process for the spectral encoding with a cylindrical lens; this will be explained in the next section. The technique described above provides a single randomly encoded measurement of the spectral components. However, for CS we need M measurements that satisfy Eq. (2), where each measurement is a result of a different encoding of the datacube. Multiple encodings of the spectral vector can be achieved by time-division multiplexing, i.e., by changing the aperture pattern for each measurement of the image sensor. However, this results in a long acquisition time. Alternatively, the various spectral encodings can be achieved by space-division multiplexing. The system described in the next subsection implements such space-division multiplexing, essentially by duplicating the apparatus described in Fig. 4 in the x direction. The spectral information is multiplied by different random codes and captured by a line array of sensors. This way, parallel spectrally encoded measurements are acquired within one exposure for a given spectral vector. The ability to measure all the spectral projections within a single exposure makes it possible to measure HS images with the same number of spatial measurements as needed for a monochromatic single-pixel CS camera [4].

Fig. 4. (Color online) Schematic diagram of the spectral separable operator.
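Stripped of the optics, each sensor pixel of the spectral encoder records an inner product between the dispersed spectral vector and one random coded line; stacking the Mλ coded lines realized by space-division multiplexing gives a random spectral sensing matrix. The sketch below assumes a binary (0/1) mask, borrows the sizes Nλ = 256 and Mλ = 38 from the Iris experiment of Section 5, and uses variable names of our own choosing.

```python
import numpy as np

rng = np.random.default_rng(3)
N_lambda = 256    # spectral bands after dispersion
M_lambda = 38     # encoded spectral measurements (one per line-array pixel)

s = rng.random(N_lambda)   # spectral vector of the multiplexed signal at S3

# coded aperture C1: one random binary line per measurement
Phi_lambda = rng.integers(0, 2, (M_lambda, N_lambda)).astype(float)

# each row: disperse -> mask one coded line -> sum onto one sensor pixel
g_lambda = Phi_lambda @ s
```

The whole spectral stage is thus a single matrix–vector product per spatial projection, acquired in one exposure.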

B. System Structure

In this section we describe the proposed CHISSS architecture. The architecture implements an optical CS system using separable operators. In contrast to the previous architectures for CS HS imaging [32,41,42], the CHISSS architecture provides a way to encode both the spatial and the spectral domains using separate, random operations, with the ability to change the compression ratio between the spectral and the spatial domains. The CHISSS system uses two separable random encoding codes, one for the spatial domain and the other for the spectral domain. Figure 5 depicts the proposed CHISSS system.

The spatial multiplexing process is performed in a way similar to that of the single-pixel HS camera [35]. As in Fig. 2, the lens L1 is used to image the object on the digital micromirror device (DMD) D1. A random code of size Nx × Ny is displayed by D1. The encoded light reflected from D1 is then focused on the central point of the G1 grating using the lens L2. At this point, the spot on the G1 plane contains the same mixed spatial information for the entire spectrum. One can view the process up to this stage as a parallel encoding of the spatial data for each wavelength. Each spectral component is therefore a result of the spatial x–y multiplexing (provided by the DMD), where each component undergoes the same multiplexing process.

The spectral multiplexing is achieved by applying a second encoding operator separately. The spectral encoder is based on the method described in Fig. 4. By means of the cylindrical lenses L3 and L4 and the coded aperture C1, the spectral encoding process described in Section 4.A is performed in parallel. Grating G1 splits and diffracts the beam S3 into Nλ spectral spots, which are spread along parallel rays on the coding device C1 by means of the cylindrical lens L3. The coded aperture C1 has a random reflection pattern; therefore, each horizontal spectral geometrical line is encoded by a different random pattern. The coded aperture in Fig. 5 has Mλ horizontal elements and Nλ vertical elements. Next, the Nλ spectrally encoded components reflected from the vertical lines of C1 are summed by means of the cylindrical lens L4 and collected by the appropriate pixel in a line array sensor. The different spectral modulations pass through the L4 cylindrical lens in parallel. Note that the encoding process with the CHISSS system in Fig. 5 is separable in the x–y and λ domains. Since the spectral encoding is performed in parallel in a single step (by space-division multiplexing), the overall acquisition time is determined solely by the spatial encoding. Therefore, the CHISSS acquisition time is similar to that of the single-pixel CS camera.

We wish to note that the system and method described above perform universal HSI CS, i.e., they are designed to image arbitrary HSI data. Since no a priori information about the spatial or spectral features of the imaged scene is assumed to be available, random projections are preferable [1]. However, if a priori information about the imaged scene is available, one can imprint appropriate nonrandom masks on D1 and C1 to achieve improved task-specific CS [43]. The system in Fig. 5 can also be easily adapted to perform adaptive spectral imaging [44] by replacing the static coded mask C1 with a variable one (such as a DMD).

5. Simulation Results

We simulated the acquisition process with the CHISSS system shown in Fig. 5 and investigated the reconstructions. To simulate the system, we used a computer procedure that implements the appropriate spatial and spectral separable encoding operators.

Fig. 5. (Color online) Schematic diagram of the CHISSS system for CS HS imaging.


We used real data from an HS camera. The HS image of the Iris painting (Fig. 6, left) was taken indoors using a halogen light source, and the parking-lot image (Fig. 6, right) was taken outdoors during daylight. Both images were recorded in 256 spectral bands from 500 to 657 nm, where the spectral width of each band is about 0.61–0.62 nm. The spatial image size was 256 × 256 pixels. We used these two HS cubes as objects and sampled them according to the CHISSS system structure shown in Fig. 5.
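The separable sensing of a cube of this kind can be sketched as three per-axis contractions, which is equivalent to applying the huge Kronecker operator of Section 2 to vec(F). The sizes below are deliberately tiny (the actual cubes are 256³) and the variable names are ours; the comparison against the explicit Kronecker matrix is only feasible at this toy scale.

```python
import numpy as np

rng = np.random.default_rng(4)
Ny = Nx = Nl = 8             # small cube for illustration (the paper uses 256)
My, Mx, Ml = 4, 4, 3         # per-axis numbers of measurements

F = rng.random((Ny, Nx, Nl))           # hyperspectral datacube
Phi_y = rng.standard_normal((My, Ny))  # spatial mask factors
Phi_x = rng.standard_normal((Mx, Nx))
Phi_l = rng.standard_normal((Ml, Nl))  # spectral masks (the role of C1)

# separable sensing: contract each cube axis with its own operator
G = np.einsum("ay,bx,cl,yxl->abc", Phi_y, Phi_x, Phi_l, F)

# same measurements via the (much larger) Kronecker operator on row-major vec(F)
Phi_big = np.kron(np.kron(Phi_y, Phi_x), Phi_l)      # 48 x 512 here
G_vec = (Phi_big @ F.ravel()).reshape(My, Mx, Ml)
assert np.allclose(G, G_vec)
```

The einsum route never forms the Kronecker matrix, which is the computational point of the separable architecture.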

As described in Fig. 4, each HS cube was first spatially encoded and then spectrally encoded. In the simulation we used three orthogonal random masks, Φx, Φy, Φλ, to compose the separable sensing operator. Note that with the CHISSS system shown in Fig. 5 (as with the systems in Fig. 2), the spatial sampling operator, Φyx, does not have to be separable in the x and y directions. However, to alleviate the computational burden required for CS and reconstruction of data of size N = 256³, we chose to use spatial masks obtained from a Kronecker product of Φx and Φy. While a nonseparable spatial sensing operator, Φyx, is represented by a matrix of the order of 256² × 256², the matrices Φy and Φx are of the order of 256 × 256, and the system's forward model is implemented simply by Eq. (8). For the recovery process, we used MATLAB R2012a and the TwIST [45] solver. The programs were run on an Intel i7-2600 3.4 GHz processor with 8 GB of memory. We used 3D Haar wavelets as the sparsifying operators, Ψ, together with l1 regularization according to Eq. (9). Reconstructed images from the simulated CHISSS are shown in Fig. 6 (lower row). These results are for a total compression ratio

(Mx × My × Mλ) / (Nx × Ny × Nλ)

of 10% of the original HS datacube. For the Iris painting, the spatial-domain (x–y) compressive sensing ratio was set to

(Mx × My) / (Nx × Ny) = (217/256)² ≅ 71.2%,

while the spectral compressive sensing ratio was

Mλ / Nλ = 38/256 ≅ 14.8%.

For the parking lot, the HS data compression ratios are

(Mx × My) / (Nx × Ny) = (181/256)² ≅ 49.9%

and

Mλ / Nλ = 51/256 ≅ 19.9%,

for the spatial and spectral domains, respectively. As we can see in Fig. 6, despite the 10× compression, the reconstructions are quite similar to the original images. The reconstruction peak signal-to-noise ratio (PSNR) was ∼21 dB for the Iris painting and ∼25 dB for the parking lot.

The dependence of the reconstruction quality on the CS ratio M∕N is demonstrated in Fig. 7. Figure 7(a) shows an RGB projection of the HS source, and Figs. 7(b), 7(c), 7(d), and 7(e) show the reconstructions from data compressively sensed with ratios of 10%, 38%, 5%, and 13%, respectively. As can be seen, the results are reasonable even with compression as deep as 5%, while at compression ratios larger than 10% the degradation is hardly noticeable.

Since the sparsity of the HS datacube in the spatial dimension is typically different from that in the spectral dimension, it is interesting to investigate the dependence of the CHISSS performance on the spatial and spectral compression ratios. Figure 8 shows the PSNR for the parking-lot image compressively sampled with various spectral and spatial ratios, yielding given overall sampling ratios M∕N. Dotted contours represent the locations of the same total compression ratio.

Fig. 6. (Color online) Left: original image of “Iris painting” and (lower image) its reconstruction from 10% samples. Right: original image of “Parking lot” and (lower image) its reconstruction from 10% samples.

From Fig. 8, it is evident that, as expected, the PSNR increases as a function of the total sensing ratio. In addition, we can also see that the reconstruction PSNR increases as the spectral compression contribution to the total compression ratio becomes higher. This reflects the well-known fact that HS cubes are more compressible in the spectral dimension [26,46,47].

Figure 9 shows the reconstruction PSNR contour lines of the interpolated surface in Fig. 8. The contour lines show, from another perspective, the observation obtained from Fig. 8 that the influence of the spectral compression is larger than that of the spatial compression. For instance, observing the upper part of Fig. 9, we see that the equi-PSNR contours are approximately vertically aligned, implying that introducing spectral compression (taking more than about 70% of the spectral samples) on top of a given spatial compression induces negligible PSNR degradation. The greater influence of the spectral compression is evident in the rest of the graph too (below 70% spectral samples). For example, to achieve a PSNR of 30 dB, one can choose a spatial compression of 42% together with a spectral compression of 72% (point A), yielding a total compression of 30%. Alternatively, the same PSNR can be achieved with a spatial compression of 75% together with a spectral compression of 25% (point B), yielding a total compression of 19%.

6. Conclusion

We have presented a technique and simulations for HS compressive imaging using separable random projections in all three dimensions of the HS data. The proposed CHISSS architecture can provide both spatial and spectral random encoding in a relatively simple way. The spectral multiplexing is done in parallel and only once per single spatial multiplexing; therefore, we can acquire an HS cube with the same number of spatial projections. Simulation results demonstrate the need to balance the compression depths in the spatial and spectral domains to optimize the CHISSS performance for a given total compression sensing ratio. Because of the higher redundancy in the spectral domain, more spatial projections are needed than spectral projections.

Fig. 7. (Color online) RGB projection of a 256 × 256 × 256 HS cube. (a) Source; (b) reconstruction from 128 × 128 (spatial) × 102 (spectral) measurements = 10%; (c) reconstruction from 197 × 197 (spatial) × 163 (spectral) measurements = 38%; (d) reconstruction from 204 × 204 (spatial) × 20 (spectral) measurements = 5%; (e) reconstruction from 204 × 204 (spatial) × 51 (spectral) measurements = 13%.

Fig. 8. (Color online) Reconstruction PSNR for the “Parking lot” HS cube as a function of the spatial and spectral compression ratios. Points with the same color represent the same overall compression ratio. For visualization purposes, a surface grid was built by bilinear interpolation.

Fig. 9. (Color online) Reconstruction PSNR contour plots for the CHISSS reconstruction of the “Parking lot.”

Adrian Stern wishes to thank the Israel Science Foundation (grant No. 1039/09). The authors wish to thank Iris Tresman (Arts Department, Ben-Gurion University) for providing her painting for HS imaging. We also acknowledge Professor Ohad Ben-Shahar’s research group (the interdisciplinary Computational Vision Lab) for providing the hyperspectral camera.

References

1. E. J. Candes and M. B. Wakin, "An introduction to compressive sampling," IEEE Signal Process. Mag. 25(2), 21–30 (2008).
2. M. Stojnic, W. Xu, and B. Hassibi, "Compressed sensing of approximately sparse signals," in IEEE International Symposium on Information Theory (IEEE, 2008), pp. 2182–2186.
3. D. L. Donoho, "Compressed sensing," IEEE Trans. Inf. Theory 52, 1289–1306 (2006).
4. M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, "Single-pixel imaging via compressive sampling," IEEE Signal Process. Mag. 25(2), 83–91 (2008).
5. W. L. Chan, K. Charan, D. Takhar, K. F. Kelly, R. G. Baraniuk, and D. M. Mittleman, "A single-pixel terahertz imaging system based on compressed sensing," Appl. Phys. Lett. 93, 121105 (2008).
6. W. L. Chan, M. L. Moravec, R. G. Baraniuk, and D. M. Mittleman, "Terahertz imaging with compressed sensing and phase retrieval," Opt. Lett. 33, 974–976 (2008).
7. L. McMackin, M. A. Herman, B. Chatterjee, and M. Weldon, "A high-resolution SWIR camera via compressed sensing," Proc. SPIE 8353, 835303 (2012).
8. J. Ma, "Single-pixel remote sensing," IEEE Geosci. Remote Sens. Lett. 6, 199–203 (2009).
9. J. Ma, "A single-pixel imaging system for remote sensing by two-step iterative curvelet thresholding," IEEE Geosci. Remote Sens. Lett. 6, 676–680 (2009).
10. A. Stern and B. Javidi, "Random projections imaging with extended space-bandwidth product," J. Disp. Technol. 3, 315–320 (2007).
11. A. Stern, Y. Rivenson, and B. Javidi, "Optically compressed image sensing using random aperture coding," Proc. SPIE 6975, 69750D (2008).
12. Y. Rivenson and A. Stern, "Compressive sensing techniques in holography," in 10th Euro-American Workshop on Information Optics (WIO) (IEEE, 2011), pp. 1–2.
13. Y. Rivenson, A. Stern, and B. Javidi, "Compressive Fresnel holography," J. Disp. Technol. 6, 506–509 (2010).
14. S. Evladov, O. Levi, and A. Stern, "Progressive compressive imaging from Radon projections," Opt. Express 20, 4260–4271 (2012).
15. Y. Kashter, O. Levi, and A. Stern, "Optical compressive change and motion detection," Appl. Opt. 51, 2491–2496 (2012).
16. D. J. Townsend, P. K. Poon, S. Wehrwein, T. Osman, A. V. Mariano, E. M. Vera, M. D. Stenner, and M. E. Gehm, "Static compressive tracking," Opt. Express 20, 21160–21172 (2012).
17. M. de Moraes Marim, E. D. Angelini, and J. Olivo-Marin, "Compressed sensing in biological microscopy," Proc. SPIE 7446, 744605 (2009).
18. S. Schwartz, A. Wong, and D. A. Clausi, "Compressive fluorescence microscopy using saliency-guided sparse reconstruction ensemble fusion," Opt. Express 20, 17281–17296 (2012).
19. V. Studer, "Compressive fluorescence microscopy for biological and hyperspectral imaging," Proc. Natl. Acad. Sci. USA 109, E1679–E1687 (2012).
20. R. M. Willett, R. F. Marcia, and J. M. Nichols, "Compressed sensing for practical optical imaging systems: a tutorial," Opt. Eng. 50, 072601 (2011).
21. J. S. Sanders, R. E. Williams, R. G. Driggers, and C. E. Halford, "A novel concept for hyperspectral remote sensing," in Proceedings of IEEE Southeastcon (IEEE, 1992), Vol. 1, pp. 363–367.
22. T. Wilson and R. Felt, "Hyperspectral remote sensing technology (HRST) program," in Proceedings of the IEEE Aerospace Conference (IEEE, 1998), Vol. 5, pp. 193–200.
23. J. In, S. Shirani, and F. Kossentini, "JPEG compliant efficient progressive image coding," in Proceedings of the 1998 IEEE International Conference on Acoustics, Speech and Signal Processing (IEEE, 1998), Vol. 5, pp. 2633–2636.
24. Q. Wang and Y. Shen, "A JPEG2000 and nonlinear correlation measurement based method to enhance hyperspectral image compression," in Proceedings of the IEEE Instrumentation and Measurement Technology Conference (IEEE, 2005), pp. 2009–2011.
25. J. Lv, Y. Li, B. Huang, and C. Wu, "Hyperspectral compressive sensing," Proc. SPIE 7810, 781003 (2010).
26. S. Lim, K. Sohn, and C. Lee, "Compression for hyperspectral images using three dimensional wavelet transform," in IEEE 2001 International Geoscience and Remote Sensing Symposium (IEEE, 2001), Vol. 1, pp. 109–111.
27. S. Lim, K. H. Sohn, and C. Lee, "Principal component analysis for compression of hyperspectral images," in IEEE 2001 International Geoscience and Remote Sensing Symposium (IEEE, 2001), Vol. 1, pp. 97–99.
28. G. A. Shaw and H.-H. K. Burke, "Spectral imaging for remote sensing," Lincoln Lab. J. 14, 3–28 (2003).
29. M. Iordache, J. M. Bioucas-Dias, and A. Plaza, "Sparse unmixing of hyperspectral data," IEEE Trans. Geosci. Remote Sens. 49, 2014–2039 (2011).
30. N. Keshava and J. F. Mustard, "Spectral unmixing," IEEE Signal Process. Mag. 19(1), 44–57 (2002).
31. H. Arguello and G. R. Arce, "Code aperture optimization for spectrally agile compressive imaging," J. Opt. Soc. Am. A 28, 2400–2413 (2011).
32. Y. Wu and G. Arce, "Snapshot spectral imaging via compressive random convolution," in 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (IEEE, 2011), pp. 1465–1468.
33. H. Arguello and G. Arce, "Code aperture agile spectral imaging (CAASI)," in Imaging Systems Applications, OSA Technical Digest (CD) (Optical Society of America, 2011), paper ITuA4.
34. Y. Wu, I. O. Mirza, G. R. Arce, and D. W. Prather, "Development of a digital-micromirror-device-based multishot snapshot spectral imaging system," Opt. Lett. 36, 2692–2694 (2011).
35. T. Sun and K. Kelly, "Compressive sensing hyperspectral imager," in Computational Optical Sensing and Imaging, OSA Technical Digest (CD) (Optical Society of America, 2009), paper CTuA5.
36. C. Li, T. Sun, K. F. Kelly, and Y. Zhang, "A compressive sensing and unmixing scheme for hyperspectral data processing," IEEE Trans. Image Process. 21, 1200–1210 (2012).
37. M. E. Gehm, R. John, D. J. Brady, R. M. Willett, and T. J. Schulz, "Single-shot compressive spectral imaging with a dual-disperser architecture," Opt. Express 15, 14013–14027 (2007).
38. Y. Rivenson and A. Stern, "Compressed imaging with a separable sensing operator," IEEE Signal Process. Lett. 16, 449–452 (2009).
39. Y. Rivenson and A. Stern, "Practical compressive sensing of large images," in 16th International Conference on Digital Signal Processing (IEEE, 2009), pp. 1–9.
40. M. F. Duarte and R. G. Baraniuk, "Kronecker compressive sensing," IEEE Trans. Image Process. 21, 494–504 (2012).
41. A. A. Wagadarikar, N. P. Pitsianis, X. Sun, and D. J. Brady, "Video rate spectral imaging using a coded aperture snapshot spectral imager," Opt. Express 17, 6368–6388 (2009).
42. Q. Zhang, R. Plemmons, D. Kittle, D. Brady, and S. Prasad, "Reconstructing and segmenting hyperspectral images from compressed measurements," in 3rd Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS) (IEEE, 2011).
43. A. Ashok, P. K. Baheti, and M. A. Neifeld, "Compressive imaging system design using task-specific information," Appl. Opt. 47, 4457–4471 (2008).
44. D. Dinakarababu, D. Golish, and M. Gehm, "Adaptive feature specific spectroscopy for rapid chemical identification," Opt. Express 19, 4595–4610 (2011).
45. J. M. Bioucas-Dias and M. A. T. Figueiredo, "A new TwIST: two-step iterative shrinkage/thresholding algorithms for image restoration," IEEE Trans. Image Process. 16, 2992–3004 (2007).
46. M. J. Ryan and J. F. Arnold, "Lossy compression of hyperspectral data using vector quantization," Remote Sens. Environ. 61, 419–436 (1997).
47. S.-E. Qian, A. B. Hollinger, M. Dutkiewicz, H. A. Z. Tsang, and J. R. Freemantle, "Effect of lossy vector quantization hyperspectral data compression on retrieval of red-edge indices," IEEE Trans. Geosci. Remote Sens. 39, 1459–1470 (2001).

D54 APPLIED OPTICS / Vol. 52, No. 10 / 1 April 2013