
Estimation of In Situ 3-D Particle Distributions From a Stereo Laser Imaging Profiler

Paul Leo Drinkwater Roberts, Member, IEEE, Jonah V. Steinbuck, Jules S. Jaffe, Associate Member, IEEE, Alexander R. Horner-Devine, Peter J. S. Franks, and Fernando Simonet

Abstract—In this paper, an image processing system for estimating 3-D particle distributions from stereo light scatter images is described. The system incorporates measured, three-component velocity data to mitigate particle blur associated with instrument motion. An iterative background estimation algorithm yields a local threshold operator that dramatically reduces bias in particle counts over the full image field. Algorithms are tested on simulated particle distributions and data from an open-ocean profile collected near the Santa Barbara Channel Islands, CA. They yield over a 50% reduction in root-mean-squared error in particle size estimates, and a 30% reduction in the magnitude of the motion blur point spread function. In situ particle distributions are estimated and compared to several models. It is demonstrated that quantitative, 3-D particle distributions can be accurately estimated from these data for particles with diameter larger than 4 pixels (0.8 mm).

Index Terms—Particle measurements, statistical distributions, reconstruction algorithms.

I. INTRODUCTION

THE study of small-scale (<1 m) physical–biological interactions in the open ocean is a rapidly advancing subfield of oceanography. Recent work in this area has revealed strongly heterogeneous plankton distributions as well as prominent thin layers of highly concentrated plankton. Physical processes can play an important role in regulating the spatial structure and development of these features [1]–[7]. While the occurrence of regular patterns in the distribution of planktonic organisms has been recognized for several decades [8], it has not been until the last decade that instruments with sufficient resolution both in time and space have emerged to investigate the

Manuscript received November 07, 2009; revised March 07, 2011; accepted August 16, 2011. Date of publication September 22, 2011; date of current version October 21, 2011. This work was supported by the National Science Foundation under Grant OCE 0220213. Associate Editor: W. Carey.

P. L. D. Roberts and J. S. Jaffe are with the Scripps Institution of Oceanography and the Marine Physical Laboratory, University of California San Diego, La Jolla, CA 92093-0238 USA (e-mail: [email protected]; [email protected]).

P. J. S. Franks and F. Simonet are with the Scripps Institution of Oceanography, University of California San Diego, La Jolla, CA 92093-0238 USA (e-mail: [email protected]; [email protected]).

J. V. Steinbuck is with the Kennedy School of Government, Harvard University, Cambridge, MA 02138 USA (e-mail: [email protected]).

A. R. Horner-Devine is with the Civil and Environmental Engineering Department, University of Washington, Seattle, WA 98195-2700 USA (e-mail: [email protected]).

Digital Object Identifier 10.1109/JOE.2011.2165923

small-scale variability in these patterns. Modern optical systems include those that use single or multiple video recorders to form 2-D and 3-D images [9]–[11], scalar microstructure sensors and sensor arrays [3], [12], [13], and submersible holographic systems [14]–[16]. Acoustic sensors have also advanced dramatically in their ability to resolve plankton distributions on these scales [4], [17]. For example, in the study of flocculation processes, high-resolution acoustic, video, and still-image sensors with submillimeter resolution have become key tools for estimating floc size and sinking rates [18], [19].

In parallel with the development of scalar microstructure sensors for studying in situ particle distributions and flocculation, improvements in the capabilities of particle image velocimetry (PIV) and particle tracking velocimetry (PTV) systems have enabled measurements of 3-D velocity fields at millimeter-scale resolution [20], [21]. These techniques have an inherent advantage over methods that employ Doppler acoustics in that they can simultaneously record three-component velocities over a wide field of view. Both PIV and PTV methods are typically applied in laboratory settings where one has control over key fluid and flow parameters, including the particle density and the orientation of the imaging system relative to the flow. Recently, these methods have been applied in coastal ocean environments [22]–[24]. While major advances have been made both in scalar microstructure and in PIV/PTV, the two have, to our knowledge, yet to be combined in an open-ocean profiling system.

Inferring particle size and particle distributions from images recorded by a profiling instrument is challenging given the potential for motion-induced particle blur and nonuniformities in illumination [6]. Motion blur can significantly bias particle size and shape estimates. For applications where water motion varies in both time and space, the associated biases will also vary and therefore cannot be corrected by calibration. One method for mitigating the impact of motion blur on particle size estimates is to detect motion blur in images and exclude blurred images from analysis [6]. However, for dynamic environments where a large portion of images are blurred, this bias must be corrected. Nonuniform illumination can cause the average number of detected particles to vary over the field of view. This may lead to inferring nonrandom spatial structure in images when there is in fact none. Motion blur and nonuniform illumination present greater challenges for imaging applications involving long exposure times, a large field of view, and operation in dynamic flow and particle environments.

In this paper, algorithms for correcting motion blur and nonuniform illumination, and subsequently estimating 3-D particle distributions, are evaluated via simulation and on data collected near the Santa Barbara Channel Islands using a recently developed stereoscopic particle imaging profiler [25]. This profiler is the latest in a series of particle imaging free-fall profilers that have been developed over the past decade at Scripps Institution of Oceanography, University of California San Diego, La Jolla [26]–[29]. It is demonstrated that the image correction algorithms can significantly improve particle size and distribution estimates by reducing the influence of motion blur and nonuniform illumination.

Fig. 1. (a) Drawing of the imaging system showing the position and orientation of its primary components. The laser light sheet is emitted vertically below the base of the profiler, and cameras view both sides of the illuminated field at 45° angles. The top of the field of view is 39 cm below the camera lenses. (b) Geometry and variables associated with the imaging problem. The ADV sample volume is positioned 16 cm vertically and 50 cm horizontally from the top of the field of view. The positive cross-plane direction points towards camera-1.

II. AUTONOMOUS IMAGING PROFILER

The details of the autonomous imaging profiler and its application to in situ stereo particle velocimetry are described in [25], and additional technical details are available online.¹ Briefly, the profiler consists of a main controller, two cameras, one laser, two linear actuators, and a large aluminum frame that rigidly holds these components together [Fig. 1(a)]. The profiler is typically programmed to descend through the water column at an adjustable rate of 5–10 cm s⁻¹, controlled by the active ballasting system (linear actuators). The laser produces a vertical sheet of illumination directly below the instrument, and the two cameras positioned on either side of the light sheet image a joint field of 25 × 25 × 0.6 cm at a range of 70 cm from the camera lens. Images are recorded simultaneously by each camera at a nominal rate of 8 Hz. The cameras are Cooke Corp. (Romulus, MI) Sensicam QE cooled charge-coupled device (CCD) cameras. They use the Sony ICX285 CCD sensor with 1376 × 1040 square pixels of 6.45-μm size, and 12-bit [0–4095 digital numbers (DN)] dynamic range. Noise in images was dominated by readout noise with a mean value of around 70 DN. Exposure times during experiments ranged between 10 and 20 ms. Due to the 45° angle between the cameras and the laser sheet, the square imaged pixel size varied between 150 and 250 μm from the top to the bottom of the image, respectively. A Nortek acoustic Doppler velocimeter (ADV) was mounted on the frame and recorded three-component velocity 0.5 m to the side of and 0.16 m above the top of the optical field of view [Fig. 1(b)].

¹Additional details of the imaging platform are available at http://jaffeweb.ucsd.edu/3dpiv

The diffraction-limited resolution of the system is defined by the wavelength of light (532 nm) and the aperture and focal length of the camera lens [30]. The system used Sigma 20-mm focal length lenses with an f-number of 1.8. Converting this to numerical aperture yields NA = 0.27 and an effective lens diameter of 11 mm. Using the 20-mm focal length, the diffraction-limited spatial resolution in the image plane is approximately 1.2 μm compared to the 6.45-μm pixels. Given that the feasible optical resolution is over five times finer than the pixel resolution of the images, we consider the system resolution to be the pixel resolution and the blur functions to be only motion blur.


Fig. 2. Magnitude of motion blur in pixels for each camera image as a function of depth in a sample profile. Motion blur was estimated from ADV velocity data assuming an exposure time of 10 ms. It can be seen that the upper region of the profile experiences significant motion blur due to elevated water velocities relative to the profiler.

Images from the profiler were calibrated both before and after deployments at sea. The calibration process consisted of the estimation of a function to map the position of particles in the field of view into each camera image. Let the coordinates of a pixel in a camera image and the coordinates of a point in the field of view of the system be defined as in Fig. 1(b). The mapping function was defined as

(1)

which maps physical points to image coordinates in each camera's coordinate system. A specialized calibration procedure [25] was developed to estimate the mapping function using a polynomial model. The resulting mapping functions for each camera were highly accurate, with remapping errors on the order of 0–0.2 pixels over the entire image field.
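The original symbols in (1) did not survive this transcription. A plausible restatement of the mapping, with hypothetical notation (camera index k, image coordinates (u_k, v_k), and a low-order polynomial in the world coordinates (x, y, z), as suggested by the stated polynomial model), is

% Hypothetical restatement of the calibration mapping in (1).
% F^{(k)} maps a physical point to pixel coordinates in camera k;
% the coefficients a_{ijl}, b_{ijl} are fit during calibration.
\[
  F^{(k)}(x, y, z) = (u_k, v_k), \qquad
  u_k = \sum_{i+j+l \le n} a^{(k)}_{ijl}\, x^i y^j z^l, \qquad
  v_k = \sum_{i+j+l \le n} b^{(k)}_{ijl}\, x^i y^j z^l .
\]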

III. IMAGE CORRECTION AND RECONSTRUCTION

A. Biases in Raw, In Situ Images

Preliminary analysis of images from the profiler revealed that many of the images suffered from motion blur. Motion blur in particle images occurs when particles are displaced on the order of 1 pixel during the camera exposure time. The exposure time for images considered here was 10 ms, which was long enough to suffer significant blur for water velocities greater than 3 cm s⁻¹. Due to shear in the water column, the degree of blurring was depth dependent (Fig. 2). Particle motion during image integration has several important consequences.

• Imaged particles appear larger than they actually are.
• Some particles that start near the boundary of the imaged volume may move outside and are not detected.
• Small or weakly scattering particles may be blurred enough that they are not detected in images.

These issues have important implications for estimating particle statistics from any particle imaging system, and they motivate the development of algorithms for correcting images before particle detection and analysis. The correction algorithms described here seek to reduce motion blur using velocity data, and then remove variations in the frequency of detected particles over the image due to nonuniform illumination. A block diagram of the entire image correction and reconstruction process is shown in Fig. 3.

B. Particle Detection

As a first step in the assessment of motion blur in images, particles were segmented before applying deblurring using a spatially invariant Gaussian noise model. Parameters of the model were estimated using an entire profile of images, while excluding pixel values above 100 DN (recall that the noise floor of images was around 70 DN out of a dynamic range of 0–4095 DN). Excluding pixel values for the purpose of estimating these parameters reduced the influence of large, bright particles. The value of 100 DN was chosen because pixels with value greater than 100 DN were clearly identified as particles and not background noise by visual inspection. The threshold was set to be the mean noise level plus five standard deviations. This high threshold value was important in limiting large fluctuations in particle counts due to the sensitivity of images to small, weak-scattering particles. Very weak-scattering particles were therefore neglected, yielding a conservative estimate for the number of particles in the field. Particles were initially segmented from the background by setting only those pixels above the threshold to the value "1" and all other pixels to the value "0." Throughout the paper, pixels with value "1" after segmentation are referred to as "foreground pixels." Segmented images were then passed through one dilation and then one erosion operation using a 3 × 3 pixel mask [31] to fill in gaps in segmented particle images. The choice of threshold and the binary morphological operations on images affect the absolute number of particles remaining in the segmented image. A wide range of thresholds and mask sizes were tested, and the relative number of particles in the image at each depth was found to be insensitive to small changes in both threshold and mask size. Therefore, while the absolute number of particles using this method is biased low due to the high threshold and binary processing, the relative particle counts at different depths are likely unaffected. Once images had been segmented, statistics such as particle area and particle count were computed using an eight-neighbor rule [31].
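As a rough illustration of this detection step, the following Python sketch applies a global threshold (mean noise plus five standard deviations), a 3 × 3 dilation followed by erosion, and eight-connected labeling. The function and parameter names are assumptions made for illustration; they are not taken from the authors' code.

import numpy as np
from scipy import ndimage

def segment_particles(image, noise_mean, noise_std, k=5.0):
    """Segment bright particles from a dark background.

    image      : 2-D array of pixel values (DN)
    noise_mean : estimated mean of the readout noise (DN)
    noise_std  : estimated standard deviation of the noise (DN)
    k          : threshold in noise standard deviations (the paper uses 5)
    """
    threshold = noise_mean + k * noise_std

    # Binary segmentation: foreground pixels are "1", background "0".
    binary = image > threshold

    # One dilation followed by one erosion with a 3 x 3 mask to fill
    # small gaps inside segmented particles.
    mask = np.ones((3, 3), dtype=bool)
    binary = ndimage.binary_dilation(binary, structure=mask)
    binary = ndimage.binary_erosion(binary, structure=mask)

    # Eight-neighbor connected-component labeling.
    eight_conn = np.ones((3, 3), dtype=int)
    labels, n_particles = ndimage.label(binary, structure=eight_conn)

    # Per-particle pixel areas.
    areas = ndimage.sum(binary, labels, index=np.arange(1, n_particles + 1))
    return labels, n_particles, areas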

C. Image Formation Model

Fig. 3. Block diagram showing an overview of the processing steps described in the paper. Camera-1 and camera-2 images are first deconvolved using the estimated PSFs from ADV velocity data. The images are then thresholded using an initial global threshold and the image with fewer particles is high-boost filtered. Images are then corrected for beam pattern and segmented using an optimized spatially varying threshold. The resulting segmented image from camera-1 is then back-projected to object space using the inverse of the calibration data. These 3-D data are then forward projected using camera-2's calibration data and intersected with segmented particles in camera-2. The result is then back-projected using the inverse of camera-2's calibration data and the intersection between the two sets of 3-D points is made. Statistics are computed from this result.

To begin the image correction process, a simple model was used to capture the primary sources of bias in image formation. Note that this model focuses only on image degradation due to motion and not due to diffraction or lens aberrations, as these factors are not detectable in our images as explained previously. This model is applicable to a general laser-based particle imaging system that is not diffraction limited. Let the true particle image be defined at each depth index; the explicit dependence on depth is omitted for brevity. The camera superscript is also omitted when the camera number is clear from the context or the equation applies identically to each camera. The image acquired by each camera was then modeled as

(2)

where the blurring kernel is a velocity-dependent point spread function (PSF), the additive term is Gaussian readout noise, and the multiplicative factor is an illumination-dependent beam pattern; the blur enters through convolution and the beam pattern through pixel-wise multiplication. In general, the PSF and the beam pattern must be estimated simultaneously when correcting images. However, if it is known that the beam pattern is constant over the support of the PSF, the two factors can be estimated separately. This is the case treated here, where the laser intensity varied slowly over the imaged volume, and the size of the PSF was small compared to variations in illumination. In this case, these two factors can be estimated and corrected in any order. However, it is more efficient to estimate and correct for the PSF first, and the beam pattern second.
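The equation body of (2) did not survive this transcription. A plausible restatement consistent with the description above, using hypothetical symbols (observed image I_k, true image J_k, PSF h_k, beam pattern B_k, and readout noise n_k for camera k), is

% Hypothetical restatement of the image formation model (2).
\[
  I_k(u, v) \;=\; B_k(u, v)\,\bigl(h_k \ast J_k\bigr)(u, v) \;+\; n_k(u, v),
  \qquad n_k(u, v) \sim \mathcal{N}(\mu, \sigma^2),
\]
where \(\ast\) denotes 2-D convolution and the product with \(B_k\) is pixel-wise.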

D. Motion Blur Correction

A preliminary check of particle counts computed from raw, segmented images revealed that the number of particles detected in the two cameras was inconsistent. It was hypothesized that this inconsistency was a direct consequence of motion blur. The camera that experienced more motion blur may not detect as many small particles because these particles are blurred below the detection threshold. To mitigate inconsistencies, an image-deblurring algorithm was developed using velocity data from the ADV. In using ADV data to help correct for motion blur, the method implicitly assumes that the ADV sample volume and image volume experience similar velocities. Preliminary results from [25] support this assumption. Although the flow field likely varies over the field of view (FOV) of the images, we make the assumption that the flow is constant over the FOV and therefore we can use the ADV velocity measurements to completely define the flow.

The PSF can be obtained from ADV velocity data in a straightforward manner. Let the velocity measurement from the ADV be defined by a three-component vector, and assume that displacements are small and that the velocity remains constant over the displaced distance. For a given exposure time, the approximate displacement experienced by a particle is the product of the velocity vector and the exposure time. Next, a set of discrete points is defined on the line between zero and this displacement. Using the calibration function, the PSF can be computed as

(3)

where

(4)

and rounding to the nearest integer implements nearest neighbor (NN) interpolation. Equation (3) defines the intensity of the PSF at a pixel to be the fraction of points along the displacement vector that map to that pixel under the calibration function.


Therefore, the estimated PSF is the projection of the particle displacement during exposure of the image that would be caused by the three-component velocity measured by the ADV. Under this simple model, the PSF for each camera, estimated using (3), was deconvolved from images using the Lucy–Richardson (LR) algorithm [32]. The LR algorithm maximizes the likelihood of the input image being the convolution of the output image with the given PSF under the assumption of Poisson noise. Although our image formation model assumes Gaussian noise, the statistics of the true underlying image were found to be better modeled using Poisson noise (due to the low-probability occurrence of bright particles), and therefore the LR algorithm performed better than other methods that assume purely Gaussian models such as the Wiener filter. Using this model, the deconvolved image is

(5)

where the deconvolved image maximizes the likelihood of the observed image given the PSF. For some image pairs, significant differences in particle

counts remained after deconvolution. This was most pronounced for particles with diameter on the order of 1–5 pixels. For these image pairs, the image with fewer small particles was high-boost filtered [31] using an adaptive filter of the form

(6)

where

(7)

The scale parameter was estimated by minimizing the absolute error between the new particle count in the filtered image and the original particle count in the unfiltered image. Note that because particle detection is a nonlinear process, it is not always possible to find a scale parameter that makes the particle counts equal. Generally, this procedure left the majority of the image unchanged while slightly boosting the pixel values of small particles. In essence, this filtering only gave a small refinement to the deconvolved image. A scale parameter of zero leaves the image unchanged, and for most images the estimated value was between 0.1 and 0.5. In addition, the high-boost filtering only altered pixel values near edges. The overall shape and size of particles were not changed by this operation.
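A minimal sketch of the motion-correction step is given below, assuming the availability of a calibration mapping. It rasterizes the ADV displacement through a user-supplied mapping to build a PSF in the spirit of (3) and (4), and then applies a hand-rolled Richardson–Lucy deconvolution. The mapping function calib, the PSF support size, and the iteration count are assumptions for illustration, not the authors' implementation; the adaptive high-boost refinement of (6) and (7) is omitted.

import numpy as np
from scipy.signal import fftconvolve

def psf_from_velocity(velocity, exposure, calib, size=15, n_points=200):
    """Estimate a motion-blur PSF from a three-component velocity.

    velocity : (3,) array, ADV velocity in world coordinates (m/s)
    exposure : exposure time (s)
    calib    : function mapping a world point (x, y, z) to fractional
               pixel coordinates (u, v) for one camera (hypothetical)
    size     : side length of the square PSF support (pixels)
    """
    displacement = np.asarray(velocity) * exposure
    psf = np.zeros((size, size))
    u0, v0 = calib(np.zeros(3))  # pixel position of the starting point
    for i in range(n_points):
        point = displacement * i / (n_points - 1)
        u, v = calib(point)
        # Nearest-neighbor interpolation of the displaced position.
        row = int(round(v - v0)) + size // 2
        col = int(round(u - u0)) + size // 2
        if 0 <= row < size and 0 <= col < size:
            psf[row, col] += 1.0
    return psf / max(psf.sum(), 1e-12)

def richardson_lucy(image, psf, n_iter=20):
    """Basic Richardson-Lucy deconvolution (Poisson likelihood)."""
    image = np.maximum(image.astype(float), 1e-12)
    estimate = np.full_like(image, image.mean())
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.maximum(blurred, 1e-12)
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate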

E. Illumination Correction

Having corrected for image degradation due to motion blur, the next step was to address the issue of nonuniform illumination. Regions of higher and lower intensity in the laser sheet can cause dramatic variations in the number of detected particles throughout the field of view. Inspection of images revealed that this variability caused a significant bias (typically around 60%) in the frequency of detected particles over the field of view. To correct this, two methods were applied to images in series. The initial correction mask for each image was computed as the average beam pattern estimate over depth

(8)

where the summand is the beam pattern estimate for the image at each depth. The beam pattern estimate was computed in a two-step process of downsampling followed by upsampling using bicubic interpolation. Due to the local uniformity of the beam pattern, it is sufficient to operate on blocks in the image rather than individual pixels. The downsampled image was computed as the block-minimum pixel value over the image

(9)

where the two block dimensions define the block size of the downsampling operation, and the integer block indices are limited by the size of the image. The beam pattern estimate was then computed as

(10)

where the outer operation is a bicubic upsampling of the downsampled image back to the original pixel coordinates. This procedure is advantageous because it is insensitive to bright particles, provided that the block area is larger than the largest particle area in the image. However, any dead pixels, or erroneous low-valued pixels resulting from the motion correction, must be eliminated to avoid biasing the minimization over the image block. The beam-corrected image was obtained by pixel-wise division of the uncorrected image by the beam pattern estimate

(11)

To ensure that the beam pattern was completely corrected, the process of estimation and correction was repeated iteratively until the maximum variation in the estimated beam pattern was less than 1%, or 0.01 DN. Corrected images were then used to estimate the probability of a detected particle center occurring at each pixel in the image.
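The following sketch illustrates the block-minimum, bicubic-upsampling beam pattern estimate described by (9)–(11) and its iterative application. The block size, convergence tolerance, and the use of scipy's order-3 spline zoom as the bicubic upsampler are assumptions made for illustration.

import numpy as np
from scipy import ndimage

def estimate_beam_pattern(image, block=(64, 64)):
    """Estimate slowly varying illumination via block minima + bicubic upsampling."""
    h, w = image.shape
    bh, bw = block
    # Block-minimum downsampling (assumes the image divides into whole blocks).
    blocks = image[: h - h % bh, : w - w % bw].reshape(h // bh, bh, w // bw, bw)
    mins = blocks.min(axis=(1, 3))
    # Bicubic (order-3 spline) upsampling back to the full image size.
    beam = ndimage.zoom(mins, (h / mins.shape[0], w / mins.shape[1]), order=3)
    return beam[:h, :w]

def correct_illumination(image, block=(64, 64), tol=0.01, max_iter=20):
    """Iteratively divide out the estimated beam pattern until it is flat."""
    corrected = image.astype(float)
    for _ in range(max_iter):
        beam = estimate_beam_pattern(corrected, block)
        corrected = corrected / np.maximum(beam, 1e-9)
        # Stop once the residual pattern varies by less than `tol`.
        if beam.max() - beam.min() < tol:
            break
    return corrected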

Despite the beam-pattern correction, biases in the frequency of occurrence of particle centers in the image due to nonuniform illumination remained. To correct for these biases, the estimated bias was used to spatially adjust the threshold computed earlier. Specifically, let the average particle center frequency at each iteration of the correction process be defined as a map over the image. This map was estimated in a similar manner to the beam pattern estimate in (9) and (10), except using an image of particle centers rather than pixel intensity and replacing the "min" operation in (9) with the "sum" operation. Note that the particle center frequency depends on the threshold, and therefore will change as the threshold is updated at each iteration. The threshold was then updated according to

(12)

where

(13)


After the final iteration, the resulting local threshold was used to segment particle images according to

(14)

The update parameter was selected to be 0.45 and controlled the speed with which the update transformed the threshold. Setting it to values greater than 0.5 typically resulted in unstable oscillations in the algorithm.
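The exact update rule in (12)–(14) did not survive this transcription; the sketch below shows one plausible multiplicative form of the idea, in which the local threshold is nudged in proportion to the relative bias of the particle-center frequency map. The update formula, array names, and normalization are assumptions, not the published equations.

import numpy as np

def update_local_threshold(threshold, center_freq, gamma=0.45):
    """One iteration of a spatially varying threshold update.

    threshold   : 2-D array, current local detection threshold (DN)
    center_freq : 2-D array, smoothed frequency of detected particle
                  centers at each pixel (estimated per block, upsampled)
    gamma       : update speed; values > 0.5 tended to oscillate
    """
    mean_freq = center_freq.mean()
    # Raise the threshold where particles are detected too often and
    # lower it where they are detected too rarely.
    relative_bias = center_freq / np.maximum(mean_freq, 1e-12) - 1.0
    return threshold * (1.0 + gamma * relative_bias)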

For a uniformly random particle field, there should be no spatial bias in the probability of detecting a particle. Therefore, the particle detection frequency map should be unchanged by splitting it into subregions and regrouping. As one final check, the frequency map was broken into four quadrants, the quadrants were swapped, and the resulting image was then visually inspected. No edges or borders were found, indicating that the frequency of particle detection was likely uniform over the field of view.

F. 3-D Particle Reconstruction

With images corrected for both motion blur and nonuniform illumination, the final step in the processing was to compute a 3-D reconstruction of the particle field. The problem of localizing and tracking particles from their projections into several images has been studied extensively in the PTV literature [21], [33]–[35]. A common approach is to apply a local maxima filter to find 2-D particle centers in each image, and then use the "collinearity condition" followed by ray intersection to find correspondences between particle centers [21]. This method has proved to be highly effective in PTV algorithms. However, it does require one to compensate for rays traveling through multiple media, to handle overlapping rays from particles in the same image, and to have knowledge of the camera parameters.

To avoid some of these issues, a table-based inverse approach was used to localize particles in 3-D. This approach has the advantage that it only requires knowledge of a function to map from object space to each image space, and a set of stereo, binary particle images. The method is similar to the back-projection used in limited-view tomography [36], up to a few key differences. First, particle detection and segmentation are performed in 2-D image space, not in 3-D space. Second, constraints on the reconstruction are applied before back-projection in image space, and after reconstructing 3-D particles that are consistent with images. These deviations are justified in this case because, unlike most tomography applications, the primary objective in this problem is locating and sizing a sparse set of particles in 3-D, not reconstructing a dense 3-D volume. The method first builds up a fine resolution grid in the object space. The functions defined in (1) are then used to map grid points into each image, building up a lookup table (LUT). The method for reconstructing a single particle can be summarized as follows.

1) Project particles from either camera-1 or camera-2 (call it camera-a) into 3-D using camera-a's LUT.
2) Project the resulting 3-D grid points into the other camera image (call it camera-b) using camera-b's forward mapping, and intersect the 2-D detected particles in camera-b with the projected 3-D grid points.
3) Reproject the result of the intersection to 3-D using camera-b's LUT.
4) Take the intersection of the two sets of 3-D points as the final reconstruction.

Specifically, let a coordinate on the grid in object space be defined by an integer vector drawn from the set of grid points. The corresponding physical location of the grid point is simply the integer coordinate scaled by the grid resolution in distance per sample. For the results presented here, the grid resolution was 125 μm per sample. Using the estimated mapping functions, the projection of a grid point into camera-a is given by

(15)

The inverse mapping function (from camera-a to object space) for a given particle (step 1) was defined as the set

(16)

where the conditioning set represents the foreground pixel coordinates in the camera-a image. Recall that a foreground pixel is defined as any pixel in the image with value greater than the threshold. Although not shown in (16), the values of the back-projected pixels were retained for use later in the analysis. Equation (16) states that the inversion selects all grid points that are consistent with the given pixel under the forward mapping function. This method was advantageous due to the minimal number of assumptions required to perform the inversion, and the relative flatness of the volume illuminated by the laser sheet, which permitted the grid to fit in available computer memory. Furthermore, the approach retains information about particle volume and shape, as well as the location of the particle, which is the primary interest in PTV methods.

Using (16), the set of grid points consistent with detected particles in each image was identified in a straightforward manner. Let the back-projected grid points from camera-a form one set. The foreground pixels in camera-b (the other camera image) that are consistent with this set (step 2) were defined by the set

(17)

where the conditioning set is the set of foreground pixels originally detected in camera-b. This set was then back-projected using a similar operation to (16) (step 3) to yield the set

(18)

As the final step in the reconstruction (step 4), the intersection of the two sets of back-projected grid points was computed as

(19)

As can be seen from (19), the final set of grid points is consistent under the mapping functions with the corresponding segmented foreground pixels in camera-1 and camera-2.
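A compact sketch of the four-step LUT reconstruction is given below. The forward mapping functions map_a and map_b (stand-ins for the calibrated mappings of (1)), the grid representation, and the set operations are placeholders chosen for illustration; the authors' implementation uses precomputed lookup tables rather than on-the-fly mapping.

import numpy as np

def reconstruct_particle(grid_points, map_a, map_b, fg_a, fg_b):
    """Return the 3-D grid points consistent with both segmented images.

    grid_points : (N, 3) integer grid coordinates spanning the laser sheet
    map_a, map_b: functions mapping a grid point to an integer (u, v) pixel
                  tuple in camera-a and camera-b (hypothetical stand-ins)
    fg_a, fg_b  : sets of (u, v) foreground pixel coordinates in each camera
    """
    # Step 1: back-project camera-a foreground pixels to the grid.
    set_a = {tuple(g) for g in grid_points if map_a(g) in fg_a}

    # Step 2: forward-project those grid points into camera-b and keep
    # the camera-b foreground pixels they land on.
    hits_b = {map_b(np.array(g)) for g in set_a} & fg_b

    # Step 3: back-project the intersected camera-b pixels to the grid.
    set_b = {tuple(g) for g in grid_points if map_b(g) in hits_b}

    # Step 4: the reconstruction is the set of grid points consistent
    # with the foreground pixels of both cameras.
    return set_a & set_b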


Once the set intersection had been computed, the center and volume of each reconstructed particle were computed as

(20)

and

(21)

where the volume estimate is proportional to the number of grid points in the reconstructed set. The function in (21) is a correction factor that was applied to account for excess voxels in the reconstruction that resulted from the limited number of views. The correction factor was applied under the assumption that particles were spherical. For spherical particles with radius much larger than the pixel size, the excess can be derived analytically. However, for smaller particles where the pixel size is on the same order as the radius, this term is insufficient to model the excess volume. To correctly compensate for the excess, a Monte Carlo simulation was performed to estimate the required correction for those smaller particles. Details of the correction factor are given in the Appendix. The weights in (20) were computed as the average pixel intensities from each camera for the given grid point

(22)
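The bodies of (20)–(22) were lost in this transcription. A plausible restatement, using hypothetical symbols (reconstructed grid set G_i with N_i points, grid spacing Δ, correction factor f(·), and back-projected intensities I_a, I_b from the two cameras), is

% Hypothetical restatement of the particle center, volume, and weights.
\[
  \mathbf{c}_i = \frac{\sum_{\mathbf{g} \in \mathcal{G}_i} w(\mathbf{g})\, \mathbf{g}}
                      {\sum_{\mathbf{g} \in \mathcal{G}_i} w(\mathbf{g})}, \qquad
  V_i = \frac{N_i\, \Delta^3}{f(N_i)}, \qquad
  w(\mathbf{g}) = \tfrac{1}{2}\bigl[I_a(\mathbf{g}) + I_b(\mathbf{g})\bigr],
\]
where the division by \(f(N_i)\) stands in for the excess-volume correction described above.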

IV. IMAGE ANALYSIS ALGORITHMS

To quantify the accuracy and precision of the image correction algorithms and the 3-D reconstruction, a set of image analysis and image simulation algorithms were developed. The primary objective of these algorithms was to estimate the amount of blurring present in real particle images, and to simulate images from known particle size distributions and velocity fields to compute estimates of root mean square (RMS) error in reconstructed particle size.

A. PSF Estimation

To complement the analysis of the reconstruction algorithm on simulated data, it was desirable to estimate the amount of blurring in real images and compare this before and after correcting for motion blur. Under the assumption that the majority of imaged particles have diameter less than one pixel, the frequency domain characteristics of the particle image may be used to estimate the modulation transfer function (MTF) of the image [28]. Using this approach, the shape of the 2-D fast Fourier transform (FFT) of the particle image (the shape of the MTF) can be used to estimate the PSF.

The shape of the MTF is defined as the coordinates of all coefficients in the frequency domain that have magnitude greater than 5% of the average value in the image. The choice of 5% as a threshold was made empirically and demonstrated a good representation of the ellipsoidal shape of the MTF. As shown in [28], there will be strong correlations between the horizontal and vertical coordinates of these points for images that are blurred. To quantify the degree of blurring, an eigenvalue decomposition of the covariance matrix between vertical and horizontal coordinates was computed. The eigenvectors were used to estimate the orientation of blurring, and the eigenvalues were converted to a degree of blurring in pixels using

(23)

where the normalization is by the number of samples along each dimension used to compute the FFT, which must be equal for both dimensions of the image for (23) to hold. The factor of four is due to the factor of two scaling in each dimension of the transform, and ensures a blur of one pixel when all of the pixels in the FFT are above 5% of the average value in the image.

The MTF can then be graphically represented as an ellipse. The equation of the ellipse follows from (23)

(24)

where the orientation is given by the orthonormal matrix of eigenvectors [6].
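The following sketch estimates the blur ellipse from an image in the manner described above: threshold the FFT magnitude at 5% of the image mean, then eigendecompose the covariance of the surviving frequency coordinates. The conversion from eigenvalues to pixels follows the spirit of (23), but the exact constant is an assumption.

import numpy as np

def estimate_blur_ellipse(image, frac=0.05):
    """Estimate blur orientation and extent from the image spectrum.

    Returns an approximate blur size (pixels) along each principal axis
    and the eigenvectors giving the blur orientation.
    """
    n = min(image.shape)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image, s=(n, n))))
    rows, cols = np.nonzero(spectrum > frac * image.mean())
    coords = np.stack([rows - n / 2, cols - n / 2])
    cov = np.cov(coords)
    eigvals, eigvecs = np.linalg.eigh(cov)
    # Rough conversion of spectral spread to a blur size in pixels: a
    # wider MTF support corresponds to less blur.  The exact scaling in
    # (23) is not reproduced here.
    blur_pixels = n / (4.0 * np.sqrt(np.maximum(eigvals, 1e-12)))
    return blur_pixels, eigvecs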

B. Particle Image Simulation

To quantify the accuracy and precision of the 3-D reconstruction of particle volumes, a particle image simulation was developed using the mapping functions defined in (1). The primary objective of the image simulator was to yield images from known particle size distributions with image statistics that were similar to the real images collected by the system. These simulated images could then be used to quantify errors in estimated particle statistics due to image blurring and the improvement obtained by correcting images. The simulator assumed all particles were spherical.

To simulate the blurring of a particle due to relative motion during an exposure, define the displacement vector as the product of the exposure time and the three components of the assumed (or estimated) velocity that blurs the particle image. Let the particle diameter and the resolution of the 3-D grid be defined as above. The projection of the blurred particle into the grid is given by the set

(25)

The intensity values of each point were then normalized. To model the effect of the Gaussian distribution of laser intensity in the cross-plane direction, each point in the projection was scaled by

(26)

where

(27)


Fig. 4. Demonstration of the correlation between the difference in the number of detected particles and the velocity as measured by the ADV. (a) Plot of the particle-count difference along with the three components of velocity: the in-plane horizontal, the cross-plane horizontal, and the in-plane vertical components (cross-plane positive towards camera-1). (b) Correlation between the particle-count difference and the velocity components as a function of particle area. (c) Same as (a) but after the application of the motion correction algorithm. (d) Same as (b) but after the application of the motion correction algorithm. Dashed lines in (b) and (d) represent plus and minus one standard deviation.

representing a 6.0-mm-thick laser sheet as defined by the beam waist. Finally, points were projected into each camera to form the image

(28)

Normalizing (28) by the number of points makes the intensity of each pixel proportional to the fraction of points that get projected to that pixel. This process of synthetic image formation was then repeated for a set of particle diameters distributed according to an exponential distribution, and the final image was formed by summing over the particle images

(29)

where the mean and standard deviation of the added noise were computed from the real images in the profile.
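A skeletal version of this simulator is sketched below: diameters are drawn from an exponential distribution, each sphere is swept along the blur displacement, weighted by a Gaussian cross-plane sheet profile, projected through a camera mapping, and Gaussian readout noise is added. The mapping function, sheet thickness parameterization, domain bounds, and noise parameters are placeholders, not the authors' values.

import numpy as np

def sample_unit_ball(n, rng):
    """Uniform samples inside the unit ball (rejection sampling)."""
    pts = np.empty((0, 3))
    while len(pts) < n:
        cand = rng.uniform(-1, 1, size=(2 * n, 3))
        pts = np.vstack([pts, cand[np.sum(cand ** 2, axis=1) <= 1.0]])
    return pts[:n]

def simulate_image(n_particles, mean_diam, velocity, exposure, project,
                   shape=(1040, 1376), sheet_sigma=1.5e-3,
                   noise_mean=70.0, noise_std=10.0, rng=None):
    """Render a synthetic particle image with motion blur and readout noise.

    project : function mapping a world point (x, y, z) in meters to integer
              pixel coordinates (row, col); a stand-in for the mapping (1)
    """
    rng = np.random.default_rng() if rng is None else rng
    image = np.zeros(shape)
    displacement = np.asarray(velocity) * exposure
    diameters = rng.exponential(mean_diam, size=n_particles)
    for d in diameters:
        center = rng.uniform([-0.1, -0.003, -0.1], [0.1, 0.003, 0.1])
        ball = sample_unit_ball(200, rng)
        t = rng.uniform(0.0, 1.0, size=(len(ball), 1))
        # Sample points inside the sphere, smeared along the displacement.
        pts = center + d / 2 * ball + displacement * t
        # Gaussian laser-sheet weighting in the cross-plane (y) direction.
        weights = np.exp(-0.5 * (pts[:, 1] / sheet_sigma) ** 2) / len(pts)
        for p, w in zip(pts, weights):
            r, c = project(p)
            if 0 <= r < shape[0] and 0 <= c < shape[1]:
                image[r, c] += w
    return image + rng.normal(noise_mean, noise_std, shape)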

V. RESULTS

Image correction and reconstruction algorithms were evaluated on one of the profiles recorded by the profiler during a two-week cruise in September 2004. The profile was named SC13–2, the second profile of the 13th deployment of the system. Additional details about system deployment at sea are available in [25].

A. Validation of the Motion Correction Algorithm

Image correction algorithms were based on the principle that motion-induced blurring in images was caused by the in situ velocity field, and therefore, by deconvolving this signal, blurring and subsequent errors in estimated particle size can be reduced. Let the number of particles detected in each camera's image be counted, and the difference in the number of detected particles between cameras be

(30)

Fig. 5. Bias in the average frequency of particles over the image (a) before and (b) after correction. It can be seen that there is roughly a 60% difference in the frequency of particle centers between the middle and the edges of the image before the correction. After the correction, the maximum deviation is below 4%. Note that the scale in (b) is one twentieth that of (a).

To confirm the influence of motion blur on images, a comparison was made between the particle-count difference and all three components of measured ADV velocity data (Fig. 4). Inspection of Fig. 4(a) reveals that there is a strong qualitative correlation between cross-plane velocity and the particle-count difference. When coupled with a steady vertical velocity, cross-plane velocities will cause nonsymmetric blurring between each camera due to the system geometry [Fig. 1(b)]. One camera may image particles moving directly towards it, and therefore its image will not be blurred. The other camera will then image particles moving up and away, causing a perceptible blur. Because particles were detected in the image via a threshold operator, blurring can result in many of the smaller particles falling below the threshold. Indeed, this result is confirmed in Fig. 4(b), where a strong correlation between the cross-plane velocity and the particle-count difference is found for smaller particles. Likewise, the correlation between the particle-count difference and the other velocity components

is low for smaller particles. After applying the motion correction outlined in Section III-D, it is expected that the absolute magnitude of the particle-count difference would be reduced. This result is confirmed in Fig. 4(c), where the profile of the particle-count difference appears nearly flat, and the correlation shown by the gray curve in Fig. 4(d) is significantly increased for smaller particles. However, there is still significant correlation between the particle-count difference and the velocity [Fig. 4(d)]. This shows that while the correction is able to mitigate discrepancies in particle counts between cameras, it is unable to remove the effect of motion blur from the image entirely. This result can be explained by two important limitations of the motion blur correction. First, because of the continuous exposure time used to record images, the motion blur PSF has zeros in its transfer function and there is no direct inverse that can perfectly undo the blurring even in the absence of noise. Second, ADV velocity estimates are made at a single location in space, and do not perfectly capture the velocity field in the field of view due to the sensor being positioned some distance away. Therefore, the PSF estimate will not exactly capture the blurring experienced by images in each camera.

B. Validation of the Illumination Correction Algorithm

Fig. 6. Comparison of elliptical representations of image PSFs showing the major and minor axes of blurring along with the orientation of the axes relative to the image coordinates for a total of 50 images. The PSFs are computed from the raw (gray) and motion-corrected (black) images. The processing yielded a statistically significant reduction in spatial variance of the PSF of 30%.

To validate the illumination correction algorithm defined in Section III-E, average images of detected particle counts over the field of view were computed before and after applying the correction (Fig. 5). It can be seen that there is a large variation in the average number of detected particles over the image before applying the correction. This is directly related to a nonuniform distribution of laser intensity over the field that results from a mismatch between the laser-sheet forming optics and the laser beam shape. Due to the nonlinear threshold operation applied to segment particles, even a very small bias in laser intensity over the field of view can cause strong biases in particle detection. Indeed, the beam pattern of raw images collected by the system is known to vary by only a few digital numbers over the entire field of view [the pattern is consistent with that of Fig. 5(a)]. However, this small variation results in particle count frequencies that vary by nearly 60% over the field of view. As can be seen in Fig. 5(b), applying the illumination correction, which iteratively adjusts the global threshold to minimize variations in particle count frequency, dramatically reduces this spatial bias. The key aspect of the algorithm, which permits these biases to


be reduced, is that at each iteration the average number of particles detected at each point in the field is computed and used to update the global threshold. Because this update varies over the image, the final threshold also varies over the image in exactly the inverse of the nonuniform illumination pattern. This allows the iterative algorithm to capture the nonlinear threshold operation in its computation of the threshold correction.

C. Evaluation of Motion Correction and 3-D Reconstruction

A set of 50 images recorded by the imaging system in a high-shear region between 22- and 26-m depth were corrected using the algorithms described in Section III. Using the PSF estimation method described in Section IV-A, elliptical representations of the PSF estimates were compared before and after applying the correction algorithms (Fig. 6). Qualitatively, it can be seen that the correction algorithms significantly decrease the magnitude of the PSF. Quantitatively, the average reduction in the spatial variance of the PSF was 30% over the 50 images. This reduction was statistically significant at the 5% level (p ≪ 0.05). In addition to the 30% reduction in spatial variance, it can be seen that the spatial variance of the PSF after correcting images is on the order of 1–2 pixels, which indicates that it is nearing the resolution of the system.

To further explore the utility of the image correction methods, and to quantify the bias and variance in the 3-D reconstruction, the particle image simulator was used to generate expected images from known particle distributions and velocity fields. Fifty ADV velocity readings spanning the depth range of 22–40 m were selected and used as input to the particle simulator. Then, 20 particle diameters ranging linearly from 0.02 cm up to 0.2 cm were selected. For each particle diameter, a single image of 50 randomly positioned particles was simulated for each depth. These images were then processed through the image correction and 3-D reconstruction algorithms to yield 3-D estimates for particle position and volume [volume was estimated using (21) as defined in the Appendix]. Since true particle diameters were known in the simulation, it was possible to compute the RMS error in an estimated particle diameter before and after applying the correction [Fig. 7(a)]. It can be seen that the error is much higher between 22 and 26 m (presumably due to the larger cross-plane or in-plane horizontal velocity in that region). This is true in both uncorrected and corrected images. However, for corrected images, the difference between the error in the high-shear region and the rest of the profile is smaller, indicating that the image correction is able to dramatically improve the particle size estimates in regions of significant motion-induced blurring.

Average (over depth) relative errors in an estimated particle diameter [Fig. 7(b)] show that the image correction typically yields more than a 50% reduction over the uncorrected images for all particle sizes. It can be seen that the relative error increases as the particle diameter decreases. This is to be expected given the finite resolution of the imaging system. The resolution is between 0.15 and 0.25 mm, and Fig. 7(b) shows that even with the correction, errors for particles of similar diameter to the pixel size remain rather high, but still significantly lower than for uncorrected images. Finally, for particles larger than 0.8 mm in diameter, the corrected average error is less than 10% and displays no size dependence.

Fig. 7. (a) Simulated RMS error in estimated particle diameter as a function of depth in the SC13–2 profile, and true particle diameter. The depth in the profile was used to predict a velocity vector using the ADV data for correcting the image. (b) Simulated average RMS error in estimated particle diameter over 25 different depths as a function of particle diameter with and without motion correction. Error bars denote one standard deviation.

D. Estimation of 3-D Particle Distributions

Images from the SC13–2 profile were corrected as described in Section III. These corrected images were then passed through the image reconstruction algorithm to estimate 3-D particle distributions for each image pair in the SC13–2 profile.

An example of an in situ, 3-D distribution of particles is shown in Fig. 8(a). From this distribution, NN distance [Fig. 8(b)] and particle volume [Fig. 8(c)] probability density functions (pdfs) were estimated. From Fig. 8, a wide variation in particle volume as well as some complex reconstructed particle shapes can be seen. Locations of particles clearly span the full width of the laser sheet, indicating that images are sensitive to particles throughout the illuminated volume. The NN distribution [Fig. 8(b)] was compared with two models: 1) the analytic pdf for a uniformly random 3-D distribution of particles with infinitesimal diameter [37], [38]; and 2) a random sequential adsorption (RSA) simulation using spherical particles similar


Fig. 8. Three-dimensional particle reconstruction results for a single image pair at a depth of 25 m. (a) Three-dimensional distribution. The grayscale indicates the average intensity of back-projected pixels between the two cameras (darker is higher). (b) Comparison between the histogram of NN distance from a single image pair, an RSA model, and the analytic pdf. (c) Log-abundance per particle volume computed from a single image pair.

to [39], with the particle diameter pdf estimated from image data. The particle diameter pdf was used to select the spherical particle diameters randomly, and the domain volume and shape were set equal to the size and shape of the reconstruction grid. The comparison shows agreement between the data and both models. However, it is clear that the infinitesimal diameter model does not capture the enhanced probability of larger NN distances due to the finite sizes of the particles. The RSA simulation, however, does capture this, and yields good agreement with the NN data for this particular depth. The goodness of the fit of the analytic and RSA models to the data was evaluated by drawing samples randomly from the analytic and RSA pdfs and comparing these to the data using a two-sample Kolmogorov–Smirnov test [40]. The test indicated that the samples from the RSA pdf and the data were drawn from the same distribution (0.09 p-value), whereas the samples from the analytic pdf were drawn from a different distribution (0.0005 p-value). The log-abundance of particle volume [Fig. 8(c)] does not appear to follow a linear trend as is typically assumed. It is difficult to attribute this to either a reduced abundance of small particles or of large particles. However, given that the 3-D reconstruction is more likely to capture large particles (with enough scatter to register in both images) than small particles, it is possible that the reduced detection of small particles could explain the deviation from the logarithmic distribution of particle abundance typically assumed.
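To make this comparison concrete, the sketch below computes nearest-neighbor distances from reconstructed particle centers, draws samples from the analytic pdf for a uniform random (Poisson) 3-D field of point particles, and applies the two-sample Kolmogorov–Smirnov test. The analytic form used here is the standard Hertz nearest-neighbor distribution for a Poisson process of intensity rho; the RSA comparison from the paper is not reproduced, and the variable names are illustrative.

import numpy as np
from scipy.spatial import cKDTree
from scipy.stats import ks_2samp

def nn_distances(centers):
    """Nearest-neighbor distance for each 3-D particle center."""
    tree = cKDTree(centers)
    # k=2: the closest point is the particle itself (distance 0).
    dists, _ = tree.query(centers, k=2)
    return dists[:, 1]

def sample_poisson_nn(rho, n_samples, rng=None):
    """Samples from the analytic NN pdf of a uniform Poisson field,
    p(r) = 4*pi*rho*r^2 * exp(-(4/3)*pi*rho*r^3), via inverse transform."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(size=n_samples)
    return (-np.log(1.0 - u) * 3.0 / (4.0 * np.pi * rho)) ** (1.0 / 3.0)

# Example usage (centers and volume are hypothetical inputs):
# centers = ...                      # (N, 3) reconstructed particle centers (m)
# rho = len(centers) / volume        # number density of the imaged volume
# stat, p = ks_2samp(nn_distances(centers), sample_poisson_nn(rho, 10000))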

Histograms of NN distance and particle diameter were computed for a section of the SC13–2 profile ranging from 22 to 40 m. NN distributions for the data [Fig. 9(a)] and the models [Fig. 9(b) and (c)] were compared as a function of depth and NN distance. Fig. 9(d) shows the normalized difference between the data and RSA model outputs (positive meaning the data predict higher probability than the model). There is an interesting feature around 32 m where the number of particles dramatically drops off, and thus the NN histograms broaden substantially. In


Fig. 9. Histograms of NN distance as a function of depth for the SC13–2 profile computed from (a) reconstructed data, (b) the RSA simulation, and (c) the analytic pdf. The fractional anomaly (difference divided by the mean) between histograms computed from the reconstructed data and the RSA simulation is shown in (d). Positive anomaly indicates that the data are predicting higher probability than the model. In the regions below 32 m, the total number of particles is quite low, which leads to higher variance in the histogram.

addition, both data and model outputs show a strong variation in the width of the distribution between 22 and 24 m. The normalized difference image showed consistent trends throughout the profile, with the data generally predicting higher probabilities in the tails of the distribution than the RSA model. Below 32 m, where the abundance of particles drops substantially, histograms become much broader and have significantly higher variance. Since the difference between data and model outputs in the region between 22 and 32 m is most prominent around regions of low probability, it is difficult to say if this is an indication of a deviation from uniform randomness, or simply a bias in the histogram estimates or RSA simulation.

Finally, the total number of particles at each depth was plotted over a histogram of particle size versus depth (Fig. 10). It can be seen that there is a very high abundance of large diameter particles (>5 mm) between 22 and 24 m in the profile. These are not present below 24 m, where the largest particles have an equivalent diameter smaller than 5 mm. Around 32 m, the abundance of those particles between 2- and 4-mm equivalent diameter declines rapidly with depth. There are also variations in the number of large particles relative to the total number of particles. Fig. 10 shows that while the total number of particles generally decreases with depth and oscillates on small scales, the number of large particles drops off rather systematically at precise depths in the profile. Note that these features occur at similar locations to changes in the velocity field [Fig. 4(a)] and therefore could be confused with changes in motion blur in images. However, the motion corrected images show a relatively flat profile [Fig. 4(c)], supporting the conclusion that these variations in abundance and changes in particle size spectra are in fact real.

Fig. 10. Histogram image of equivalent particle diameter with scaled total particle count as a function of depth. Note the occurrence of very large particles (>5-mm diameter) between 22 and 24 m. The abundance of small particles dominates the total particle count, and follows a nearly identical trend.


Fig. 11. RMS error in estimated particle diameter as a function of particle diameter, motion blur, and volume estimation method. The estimation methods are described in the Appendix. The "blurred" and "corrected" curves show the error from significant (>10 cm s⁻¹) motion blur without and with motion correction, respectively.

VI. DISCUSSION

Motion blur and nonuniform illumination are general problems that can bias particle size and particle distributions estimated from in situ images. These biases are likely to be most significant in systems that employ long exposure times relative to typical particle velocities, and systems that use large fields of view or nonuniform laser illumination. Analysis of in situ data and simulations confirm that the proposed motion and illumination correction algorithms successfully mitigate these problems for large particles (diameter greater than 0.8 mm).

This result demonstrates a key advantage of stereo imaging for this application. As can be seen in Fig. 4, discrepancies in particle counts between image pairs were correlated with water motion and could be used to aid in detecting biases in images. These correlations were useful and could be exploited easily without the need for particle simulation or PSF estimation. However, differences in particle counts between cameras can underrepresent blurring when motion blur is roughly equal between cameras (for example, purely vertical blur). Therefore, it was important to quantify motion blur directly in images, both through the estimation of real image PSFs (Fig. 6) and through simulation (Fig. 7). In this paper, motion correction was implemented under the assumption that velocity data measured by an ADV were available. However, it is possible, although not yet investigated, that stereo PIV data as obtained in [25] could be used in place of ADV data, yielding a self-sufficient solution.
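To illustrate how measured velocities can enter such a correction, the sketch below builds a linear motion-blur PSF from an in-plane velocity vector and the exposure time. The pixel size, exposure, and velocity values are assumptions chosen for illustration, not the parameters of the actual system, and the deconvolution that would use this kernel (e.g., Wiener or Richardson–Lucy) is not necessarily the scheme used in this paper.

import numpy as np

def motion_blur_psf(velocity_xy, exposure_s, pixel_size_mm, size=15):
    # Line-segment PSF for in-plane motion during one exposure.
    # velocity_xy: (vx, vy) in mm/s in image-plane coordinates.
    psf = np.zeros((size, size))
    c = size // 2
    dx = velocity_xy[0] * exposure_s / pixel_size_mm   # displacement in pixels
    dy = velocity_xy[1] * exposure_s / pixel_size_mm
    n_steps = max(4 * int(np.ceil(np.hypot(dx, dy))), 1)
    for t in np.linspace(0.0, 1.0, n_steps):
        i = int(round(c + t * dy))
        j = int(round(c + t * dx))
        if 0 <= i < size and 0 <= j < size:
            psf[i, j] += 1.0
    return psf / psf.sum()

# Example: 5 cm/s lateral and 1 cm/s vertical motion, 10-ms exposure,
# 0.2-mm pixels (all assumed values, not the real system parameters).
psf = motion_blur_psf((50.0, 10.0), 0.010, 0.2)

A kernel of this kind can either be used to deconvolve the recorded frames or simply to quantify the expected blur length for a given profiling speed.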

Illumination correction was able to nearly eliminate spatial biases in the frequency of detected particles over the field of view. The key aspect of this approach was an iterative algorithm that incorporated the particle-detection frequency bias into the update step at each iteration. For suitable learning parameters, the algorithm converged well and reduced bias over the whole field of view by more than an order of magnitude.
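A minimal sketch of an iterative background/threshold update of this flavor is given below. The update rule, threshold definition, and parameter values are illustrative assumptions rather than the exact algorithm; the point is only that the spatial particle-detection frequency is fed back into the estimate at each iteration.

import numpy as np

def illumination_correction(frames, n_iter=20, eta=0.5, k=3.0):
    # Iteratively adjust a per-pixel threshold offset so that the frequency of
    # above-threshold (detected) pixels becomes spatially uniform.
    stack = np.asarray(frames, dtype=float)
    background = stack.mean(axis=0)
    noise = stack.std(axis=0)
    offset = np.zeros_like(background)
    for _ in range(n_iter):
        thresh = background + offset + k * noise
        detect_freq = (stack > thresh).mean(axis=0)
        bias = detect_freq - detect_freq.mean()   # detection-frequency bias
        offset += eta * bias * noise.mean()       # raise threshold where detections are too frequent
    return background, offset

# Synthetic example: 50 frames with a left-to-right illumination ramp plus noise.
rng = np.random.default_rng(1)
ramp = np.linspace(50.0, 150.0, 64)[None, :] * np.ones((64, 1))
frames = [ramp + rng.normal(0.0, 5.0, size=(64, 64)) for _ in range(50)]
background, offset = illumination_correction(frames)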

Using corrected images, 3-D reconstruction was accomplished with a simple back-projection approach that used known calibration data to map points in object space into each camera's image space. While the method was simple, avoided the need to estimate camera parameters, and yielded volumetric reconstructions of particles rather than estimates of particle centers only, it suffered from high memory and computation requirements. For the rather thin laser sheet used in these experiments, this was not a problem. However, for thicker sheets or larger particle fields, it may be necessary to use an approach in which only particle centroids and diameters are reconstructed.
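The geometry can be made concrete with the toy sketch below, which keeps only those grid points whose projections fall on detected (foreground) pixels in both camera masks. The orthographic projection functions are stand-ins for the calibrated object-to-image mappings, which are not reproduced here; the grid size and masks are likewise illustrative.

import numpy as np

def back_project(mask_a, mask_b, project_a, project_b, grid_pts):
    # Keep the grid points whose projections land on detected pixels in BOTH views.
    keep = np.zeros(len(grid_pts), dtype=bool)
    for n, p in enumerate(grid_pts):
        ra, ca = project_a(p)
        rb, cb = project_b(p)
        in_a = 0 <= ra < mask_a.shape[0] and 0 <= ca < mask_a.shape[1]
        in_b = 0 <= rb < mask_b.shape[0] and 0 <= cb < mask_b.shape[1]
        keep[n] = in_a and in_b and mask_a[ra, ca] and mask_b[rb, cb]
    return grid_pts[keep]

# Toy "calibrations": camera A looks along +x, camera B along +y (illustrative only).
def project_a(p):
    return int(round(p[2])), int(round(p[1]))
def project_b(p):
    return int(round(p[2])), int(round(p[0]))

mask_a = np.zeros((32, 32), dtype=bool); mask_a[10:14, 10:14] = True
mask_b = np.zeros((32, 32), dtype=bool); mask_b[10:14, 20:24] = True
g = np.arange(32)
grid = np.array(np.meshgrid(g, g, g, indexing="ij")).reshape(3, -1).T.astype(float)
voxels = back_project(mask_a, mask_b, project_a, project_b, grid)

For a single spherical particle, the retained voxels form the intersection of two cylinders, which is the geometry discussed in the Appendix.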

With regard to 3-D estimation of particle distributions, the most significant drawback of the system examined here is its limitation to two cameras. It is well known that for dense particle fields, a minimum of three cameras is required to reconstruct the field uniquely [21]. In this work, this problem was addressed by removing duplicate particles that appeared in the reconstruction and ensuring that each detected particle in an image pair could map to only a single particle in 3-D.
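One simple way to enforce this one-to-one constraint is a greedy assignment over candidate pairings ranked by a consistency score (for example, epipolar or size consistency). The sketch below is a hedged illustration of that idea, not the exact rule used in this work.

def unique_matches(candidates):
    # candidates: list of (score, blob_id_a, blob_id_b) for possible A/B pairings;
    # lower score = more consistent. Greedily keep the best pairing for each blob
    # so that no blob is used twice (removes ghost particles).
    used_a, used_b, kept = set(), set(), []
    for score, a, b in sorted(candidates):
        if a not in used_a and b not in used_b:
            kept.append((a, b))
            used_a.add(a)
            used_b.add(b)
    return kept

# Example: blob 0 in camera A could pair with blobs 0 or 1 in camera B.
pairs = unique_matches([(0.1, 0, 0), (0.4, 0, 1), (0.2, 1, 1)])
# -> [(0, 0), (1, 1)]; the ghost pairing (0, 1) is discarded.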

Preliminary estimates of NN distance, particle diameter, and particle volume were statistically similar to those for uniformly random particle fields with equivalent particle size distributions, as shown in Figs. 8–10. Although ground-truth data for these distributions are unavailable, it is likely that biases in the reconstruction algorithm would show up as strong peaks in the NN pdfs (due to favoring certain particle sizes or positions in the reconstruction), and these are not evident. In addition, the RSA model yields a good match to these data in regions where there are sufficient data for comparison. It should be noted that the 3-D reconstruction pursued here used a spherical approximation to the actual particle shape when estimating particle diameter and particle volume. This approximation was important for yielding realistic particle diameter estimates, as shown in Fig. 11. Without any correction, the error, even in the absence of motion blur, could be as high as 20% for particles between 5- and 10-mm diameter.

VII. CONCLUSION

Estimating 3-D distributions and size statistics presents several important challenges for in situ particle imaging systems. Corrections for motion blur and nonuniform illumination were explored in this work using data from a recently developed particle imaging system. It was shown that these algorithms could reduce RMS errors in particle size estimates by more than 50% and virtually eliminate bias in the frequency of detected particles over the field of view due to nonuniform illumination. Using corrected images, a 3-D reconstruction algorithm based on back-projection was implemented and used to estimate 3-D particle distributions for a sample profile from a field deployment of the profiler. From these distributions, statistics of the particle fields, including particle size, particle volume, and NN distance, were computed. In the examples considered, the reconstructed particles were consistent with a uniformly random distribution.

In situ acquisition of particle images is a key component in studying small-scale physical–biological interactions, thin-layer formation, floc settling speeds, and open-ocean carbon transport. By enhancing the accuracy of these statistics through motion and illumination correction, we hope to extract more detailed information about particle distributions that can point to new discoveries and enhanced understanding of these critical biophysical processes.


APPENDIX

PARTICLE VOLUME ESTIMATION

The most straightforward volume estimate is

$\hat{V} = N\Delta^3$  (31)

where $N$ is the number of grid points representing the particle and $\Delta$ is the grid resolution. This volume estimate contains two types of errors: 1) errors due to the limited number of views, and 2) errors due to the finite size of the grid points. Errors of the first kind can be corrected if the shape (and orientation) of the particle is known. For spherical particles, the two-view back-projection yields two intersecting cylinders with radius and length equal to the true radius of the particle. The volume of the intersection is given by

(32)

where $V_s$ is the true volume of a sphere with the same radius. For a continuous reconstruction, the excess volume in the estimate can therefore be corrected by scaling it by the corresponding factor. However, the reconstruction is discrete, and this correction therefore needs to be adjusted. The significance of the discretization error is larger for smaller particles. Under the spherical particle assumption, the volume is estimated as

(33)

where $\alpha(\hat{d})$ is a scale factor that depends on the estimated diameter $\hat{d}$ of the particle. In the optimal case, the true diameter is known, and the scale factor is selected to be the one that minimizes the error between the true and estimated diameters. To apply (33) when the true diameter is unknown, a model for $\alpha$ is developed, and $\hat{d}$ is estimated from the data. The model is defined as

(34)

where the model parameter is found by matching the model to the optimal scale factor for these data. The diameter estimate is then found by evaluating (34) over a wide range of possible diameters and picking the one that is most consistent in volume

(35)

The average RMS error in estimated particle diameter using this method is compared as a function of particle size and estimation method in Fig. 11.
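To summarize the procedure in executable form, the sketch below implements the raw voxel-count volume of (31) together with a consistency search over candidate diameters. Because the exact scale-factor model of (33)–(34) is not reproduced here, alpha is a placeholder (assumed constant), and all numerical values are purely illustrative.

import numpy as np

def raw_volume(n_grid_points, delta):
    # Uncorrected estimate of (31): occupied grid points times grid-cell volume.
    return n_grid_points * delta**3

def estimate_diameter(n_grid_points, delta, alpha, d_candidates):
    # Search over candidate diameters and keep the one whose spherical volume
    # is most consistent with the scaled voxel-count volume (the search
    # described around (33)-(35); alpha is an assumed scale-factor model).
    v_raw = raw_volume(n_grid_points, delta)
    v_sphere = (np.pi / 6.0) * d_candidates**3
    mismatch = np.abs(alpha(d_candidates) * v_raw - v_sphere)
    return d_candidates[np.argmin(mismatch)]

# Purely illustrative scale-factor model and numbers (not the paper's values).
alpha = lambda d: np.pi / 4.0 + 0.0 * d      # constant correction as a stand-in
d_candidates = np.linspace(0.2, 10.0, 500)   # mm
d_hat = estimate_diameter(n_grid_points=2100, delta=0.2, alpha=alpha,
                          d_candidates=d_candidates)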

ACKNOWLEDGMENT

The authors would like to thank the captain, crew, students, and volunteers that were instrumental in the development and successful deployment of the system, and two anonymous reviewers for providing helpful feedback about the manuscript.

REFERENCES

[1] P. Franks, "Thin layers of phytoplankton: A model of formation by near-inertial wave shear," Deep-Sea Res. I, vol. 42, pp. 75–91, 1995.

[2] M. M. Dekshenieks, P. L. Donaghay, J. M. Sullivan, J. E. B. Rines, T. R. Osborn, and M. S. Twardowski, "Temporal and spatial occurrence of thin phytoplankton layers in relation to physical processes," Mar. Ecol.—Prog. Ser., vol. 223, pp. 61–71, 2001.

[3] J. R. Seymour, R. L. Waters, and J. G. Mitchell, "Geostatistical characterization of centimeter-scale spatial structure of in vivo fluorescence," Mar. Ecol.—Prog. Ser., vol. 251, pp. 49–58, 2003.

[4] D. Holliday, P. Donaghay, C. Greenlaw, D. E. McGehee, M. McManus, J. M. Sullivan, and J. Miksis, "Advances in defining fine- and micro-scale pattern in marine plankton," Aquat. Living Resour., vol. 16, no. 3, pp. 131–136, 2003.

[5] E. Malkiel, J. N. Abras, E. A. Widder, and J. Katz, "On the spatial distribution and nearest neighbor distance between particles in the water column determined from in situ holographic measurements," J. Plankton Res., vol. 28, no. 2, pp. 149–170, Feb. 2006.

[6] P. J. Franks and J. S. Jaffe, "Microscale variability in the distributions of large fluorescent particles observed in situ with a planar laser imaging fluorometer," J. Mar. Syst., vol. 69, pp. 254–270, 2008.

[7] J. V. Steinbuck, M. T. Stacey, M. A. McManus, O. M. Cheriton, and J. P. Ryan, "Turbulence observations in a phytoplankton thin layer: Implications for formation, maintenance, and breakdown," Limnol. Oceanogr., vol. 54, pp. 1353–1368, 2009.

[8] R. W. Sheldon, W. H. Sutcliff, and A. Prakash, "Size distribution of particles in ocean," Limnol. Oceanogr., vol. 17, no. 3, pp. 327–340, 1972.

[9] S. M. Gallager, H. Yamazaki, and C. S. Davis, "Contribution of fine-scale vertical structure and swimming behavior to formation of plankton layers on Georges Bank," Mar. Ecol.—Prog. Ser., vol. 267, pp. 27–43, 2004.

[10] M. Lunven, P. Gentien, K. Kononen, E. L. Gall, and M. M. Danielou, "In situ video and diffraction analysis of marine particles," Estuar. Coast. Shelf S., vol. 57, no. 5–6, pp. 1127–1137, Aug. 2003.

[11] E. A. Widder and S. Johnsen, "3D spatial point patterns of bioluminescent plankton: A map of the 'minefield'," J. Plankton Res., vol. 22, no. 3, pp. 409–420, Mar. 2000.

[12] F. Wolk, H. Yamazaki, L. Seuront, and R. G. Lueck, "A new free-fall profiler for measuring biophysical microstructure," J. Atmos. Ocean. Technol., vol. 19, no. 5, pp. 780–793, May 2002.

[13] R. A. Desiderio, T. J. Cowles, J. N. Moum, and M. Myrick, "Microstructure profiles of laser-induced chlorophyll fluorescence spectra—Evaluation of backscatter and forward-scatter fiberoptic sensors," J. Atmos. Ocean. Technol., vol. 10, no. 2, pp. 209–224, Apr. 1993.

[14] E. Malkiel, J. N. Abras, and J. Katz, "Automated scanning and measurements of particle distributions within a holographic reconstructed volume," Meas. Sci. Technol., vol. 15, no. 4, pp. 601–612, Apr. 2004.

[15] J. Watson, S. Alexander, G. Craig, D. C. Hendry, P. R. Hobson, R. S. Lampitt, J. M. Marteau, H. Nareid, M. A. Player, K. Saw, and K. Tipping, "Simultaneous in-line and off-axis subsea holographic recording of plankton and other marine particles," Meas. Sci. Technol., vol. 12, no. 8, pp. L9–L15, Aug. 2001.

[16] R. B. Owen and A. A. Zozulya, "In-line digital holographic sensor for monitoring and characterizing marine particulates," Opt. Eng., vol. 39, no. 8, pp. 2187–2197, Aug. 2000.

[17] D. V. Holliday, P. L. Donaghay, C. F. Greenlaw, J. M. Napp, and J. M. Sullivan, "High-frequency acoustics and bio-optics in ecosystems research," ICES J. Mar. Sci., vol. 66, no. 6, pp. 974–980, Jul. 2009.

[18] P. S. Hill, J. P. Syvitski, E. A. Cowan, and R. D. Powell, "In situ observations of floc settling velocities in Glacier Bay, Alaska," Mar. Geology, vol. 145, no. 10, pp. 85–94, Feb. 1998.

[19] O. A. Mikkelsen, T. G. Milligan, P. S. Hill, and D. Moffatt, "INSSECT—An instrumented platform for investigating floc properties close to the seabed," Limnol. Oceanogr.—Meth., vol. 2, pp. 226–236, Jul. 2004.


[20] A. K. Prasad, "Stereoscopic particle image velocimetry," Exp. Fluids, vol. 29, no. 2, pp. 103–116, Aug. 2000.

[21] H. G. Maas, A. Gruen, and D. Papantoniou, "Particle tracking velocimetry in 3-dimensional flows—Part 1: Photogrammetric determination of particle coordinates," Exp. Fluids, vol. 15, no. 2, pp. 133–146, Jul. 1993.

[22] L. Bertuccioli, G. I. Roth, J. Katz, and T. R. Osborn, "A submersible particle image velocimetry system for turbulence measurements in the bottom boundary layer," J. Atmos. Ocean. Technol., vol. 16, no. 11, pp. 1635–1646, Nov. 1999.

[23] P. Doron, L. Bertuccioli, J. Katz, and T. R. Osborn, "Turbulence characteristics and dissipation estimates in the coastal ocean bottom boundary layer from PIV data," J. Phys. Oceanogr., vol. 31, no. 8, pp. 2108–2134, 2001.

[24] W. A. M. N. Smith, P. Atsavapranee, J. Katz, and T. R. Osborn, "PIV measurements in the bottom boundary layer of the coastal ocean," Exp. Fluids, vol. 33, no. 6, pp. 962–971, Dec. 2002.

[25] J. V. Steinbuck, P. L. D. Roberts, C. D. Troy, A. R. Horner-Devine, F. Simonet, A. H. Uhlman, J. S. Jaffe, P. J. S. Franks, and S. G. Monismith, "An autonomous open-ocean stereoscopic PIV profiler," J. Atmos. Ocean. Technol., vol. 27, pp. 1366–1380, 2010.

[26] D. G. Zawada, "The application of a novel multispectral imaging system to the in vivo study of fluorescent compounds in selected marine organisms," Ph.D. dissertation, Mar. Phys. Lab., Scripps Inst. Oceanogr., Univ. California San Diego, La Jolla, CA, 2002.

[27] D. G. Zawada, "Image processing of underwater multispectral imagery," IEEE J. Ocean. Eng., vol. 28, no. 4, pp. 583–594, Oct. 2003.

[28] P. Franks and J. Jaffe, "Microscale distributions of phytoplankton: Initial results from a two-dimensional imaging fluorometer, OSST," Mar. Ecol.—Prog. Ser., vol. 220, pp. 59–72, 2001.

[29] J. Jaffe, P. Franks, and A. Leising, "Simultaneous imaging of phytoplankton and zooplankton distributions," Oceanography, vol. 11, pp. 24–29, 1998.

[30] J. W. Goodman, Introduction to Fourier Optics, 3rd ed. Greenwood Village, CA: Roberts & Company, 2004, ch. 6.

[31] R. C. Gonzalez and R. E. Woods, Digital Image Processing. Englewood Cliffs, NJ: Prentice-Hall, 2002, ch. 9.

[32] J. L. Starck, E. Pantin, and F. Murtagh, "Deconvolution in astronomy: A review," Publ. Astron. Soc. Pac., vol. 114, no. 800, pp. 1051–1069, Oct. 2002.

[33] F. Pereira, H. Stuer, E. C. Graff, and M. Gharib, "Two-frame 3D particle tracking," Meas. Sci. Technol., vol. 17, no. 7, pp. 1680–1692, Jul. 2006.

[34] K. Ohmi and H. Y. Li, "Particle-tracking velocimetry with new algorithms," Meas. Sci. Technol., vol. 11, no. 6, pp. 603–616, Jun. 2000.

[35] Y. G. Guezennec, R. S. Brodkey, N. Trigui, and J. C. Kent, "Algorithms for fully automated 3-dimensional particle tracking velocimetry," Exp. Fluids, vol. 17, no. 4, pp. 209–219, Aug. 1994.

[36] A. C. Kak and M. Slaney, Principles of Computerized Tomographic Imaging. Philadelphia, PA: SIAM, 2001, ch. 3.

[37] P. Hertz, "Concerning the average mutual distance between points which are represented in space with a common density," Math. Ann., vol. 67, pp. 387–398, 1909.

[38] S. Chandrasekhar, "Stochastic problems in physics and astronomy," Rev. Mod. Phys., vol. 15, no. 1, pp. 1–89, Jan. 1943.

[39] A. Tewari and A. M. Gokhale, "Nearest-neighbor distances between particles of finite size in three-dimensional uniform random microstructures," Mat. Sci. Eng. A—Struct., vol. 385, no. 1–2, pp. 332–341, Nov. 2004.

[40] F. J. Massey, Jr., "The Kolmogorov-Smirnov test for goodness of fit," J. Amer. Stat. Assoc., vol. 46, no. 253, pp. 68–78, 1951.

Paul Leo Drinkwater Roberts (S'04–M'09) received the B.S. degree (with honors) in computer engineering and the M.S. and Ph.D. degrees in electrical engineering (applied ocean sciences) from the University of California San Diego, La Jolla, in 2002, 2004, and 2009, respectively.

He is interested in statistical signal processing, machine learning, computer vision, and oceanography. His current research efforts are in developing parameter estimation and classification algorithms to extract biophysical information (such as size, shape, and taxa) from multiview, broadband, acoustic, and optical scattering from marine animals. The goal of this research is to develop new tools that improve our ability to remotely survey marine animals in the ocean. He is currently working as a Development Engineer at the Scripps Institution of Oceanography, La Jolla, CA, where he builds acoustic and optical underwater imaging systems for studying a wide array of marine animals from micrometers to meters.

Jonah V. Steinbuck received the B.S. degree in civil engineering (environmental and water studies program) from Stanford University, Stanford, CA, in 2002, the M.S. degree in environmental engineering from the University of California Berkeley, Berkeley, in 2003, and the Ph.D. degree in environmental fluid mechanics and hydrology from Stanford University in 2009. In his doctoral research, he investigated turbulent mixing processes and phytoplankton dynamics in the Red Sea (Eilat, Israel) and off the coast of California.

He is a Kay Public Service Fellow in the MPA program at the Harvard Kennedy School of Government, Cambridge, MA, and a Visiting Fellow with the American Meteorological Society Policy Program. Previously, he served on the staff of the White House Council on Environmental Quality and the House Select Committee on Energy Independence and Global Warming.

Jules S. Jaffe (A'85) received the B.A. degree in physics from the State University of New York at Buffalo (SUNY Buffalo), Buffalo, in 1973, the M.S. degree in biomedical information science from The Georgia Institute of Technology, Atlanta, in 1974, and the Ph.D. degree in biophysics from the University of California Berkeley, Berkeley, in 1982.

He is a Research Oceanographer in the Marine Physical Laboratory, Scripps Institution of Oceanography, University of California San Diego, La Jolla. He worked at the Woods Hole Oceanographic Institution, Woods Hole, MA, before his present position. His lab specializes in the development of underwater optical and acoustical instruments and has recently launched a new initiative in building swarms of miniature vehicles for upper ocean measurements.

Dr. Jaffe is a Fellow of the Acoustical Society, a past recipient of a National Science Foundation Creativity Award, as well as a Visiting Miller Professor at the University of California Berkeley. He is currently an Associate Editor of the IEEE JOURNAL OF OCEANIC ENGINEERING as well as Limnology and Oceanography Methods.

Alexander R. Horner-Devine received the B.S.E. degree in mechanical and aerospace engineering from Princeton University, Princeton, NJ, in 1995 and the M.S. and Ph.D. degrees in civil and environmental engineering from Stanford University, Stanford, CA, in 1998 and 2003, respectively.

He is currently an Associate Professor in the Department of Civil and Environmental Engineering, University of Washington, Seattle. Before his present position he worked as a Postdoctoral Researcher at Stanford University. His research and teaching are in the area of environmental fluid mechanics. His research group uses a combination of field and laboratory experiments to describe transport and mixing processes in geophysical and environmental flows, focusing on rivers, estuaries, and coastal river plumes.


Peter J. S. Franks received the B.Sc. (honors) degree in biology from Queen's University, Kingston, ON, Canada, in 1981, the M.Sc. degree in biological oceanography from Dalhousie University, Halifax, NS, Canada, in 1984, and the Ph.D. degree in biological oceanography from the Massachusetts Institute of Technology/Woods Hole Oceanographic Institution Joint Program, Woods Hole, in 1990.

He is a Professor of Biological Oceanography at the Scripps Institution of Oceanography, University of California San Diego, La Jolla, and Director of the Integrative Oceanography Division there. He and his group specialize in the study of physical–biological interactions in the ocean, using novel technologies and computer models to gain understanding of the ways in which the physics of the ocean affects the distributions and growth rates of the plankton.

Prof. Franks is on the editorial board of the Journal of Marine Research, the Journal of Plankton Research, and the Ocean Sciences Journal.

Fernando Simonet received the B.S. and M.Eng. degrees in electrical engineering from the University of California San Diego, La Jolla, in 1999 and 2006, respectively.

He is a Senior Development Engineer at the Scripps Institution of Oceanography, University of California San Diego, where he designs and builds instrumentation for ocean exploration.