Transcript of Wideband source localization using a distributed acoustic vector-sensor array


Wideband Source Localization Using a Distributed Acoustic Vector-Sensor Array

Malcolm Hawkes, Member, IEEE, and Arye Nehorai, Fellow, IEEE

Abstract—We derive fast wideband algorithms, based on measurements of the acoustic intensity, for determining the bearings of a target using an acoustic vector sensor (AVS) situated in free space or on a reflecting boundary. We also obtain a lower bound on the mean-square angular error (MSAE) of such estimates. We then develop general closed-form weighted least-squares (WLS) and reweighted least-squares algorithms that compute the three-dimensional (3-D) location of a target whose bearing to a number of dispersed locations has been measured. We devise a scheme for adaptively choosing the weights for the WLS routine when measures of accuracy for the bearing estimates, such as the lower bound on the MSAE, are available. In addition, a measure of the potential estimation accuracy of a distributed system is developed based on a two-stage application of the Cramér–Rao bound. These 3-D results are quite independent of how bearing estimates are obtained. Naturally, the two parts of the paper are tied together by examining how well distributed arrays of AVSs located on the ground, seabed, and in free space can determine the 3-D position of a target. The results are relevant to the localization of underwater and airborne sources using freely drifting, moored, or ground sensors. Numerical simulations illustrate the effectiveness of our estimators and the new potential performance measure.

I. INTRODUCTION

ACOUSTIC emissions from battlefield or underwater sources can provide an invaluable signature by which to detect, locate, and track hostile units. The passivity of an acoustic surveillance system allows it to monitor the battlefield or ocean without giving away its own presence. Passive acoustic surveillance has long been used underwater, but its battlefield application [1] is more recent. The feasibility of acoustic localization and tracking on the battlefield has been demonstrated in [2]–[5].

We propose using a distributed array of acoustic vector sensors (AVSs) to perform the surveillance function in three separate scenarios: sensors free floating in the water column, sensors located on the seabed, and aero-acoustic vector sensors located on the ground for battlefield surveillance. These sensors measure the (scalar) acoustic pressure and all three components of the acoustic particle velocity vector at a given point and possess a number of advantages over arrays of pressure sensors [6]–[10]. The ability of a single AVS to rapidly determine the bearing of a wideband source makes them especially attractive for the present problem. Vector sensors for underwater applications have already been constructed [11], [12], and sea tested [13], [14]. The Swallow float sensor described in [11] is a freely drifting device that is perfectly suited to this application's free-space scenario. The above-referenced devices use highly sensitive moving-coil geophones to measure velocity, but other velocity sensors have been based on physical principles such as the change in inductance of a metallic glass strip [15], piezoceramics [16], and fiber-optic interferometry [17]. Recently, a new aero-acoustic velocity sensor called the Microflown [18], [19] has become commercially available, from Microflown Technologies, B.V. [20] in the Netherlands, that would be appropriate for the battlefield context. It measures the differential resistance between two micro-machined metallic strips in an air current to determine velocity. Thus, a miniature, lightweight, portable AVS could be constructed. We have analyzed the use of vector sensors near a boundary in [9] and [10], and they have been tested on a mock vessel hull [21] and at the seabed [22].

Manuscript received April 6, 2001; revised December 3, 2002. This work was supported by the Air Force Office of Scientific Research under Grants F49620-99-1-0067 and F49620-00-1-0083, the National Science Foundation under Grant CCR-0105334, and the Office of Naval Research under Grants N00014-98-1-0542 and N00014-01-1-0681. The associate editor coordinating the review of this paper and approving it for publication was Dr. Fulvio Gini.

M. Hawkes was with the Department of Electrical Engineering and Computer Science, University of Illinois, Chicago, IL 60607 USA. He is now with Ronin Capital LLC, Chicago, IL 60604 USA.

A. Nehorai is with the Department of Electrical and Computer Engineering, University of Illinois, Chicago, IL 60607 USA.

Digital Object Identifier 10.1109/TSP.2003.811225

In this paper (see also [23] and [24]), we develop a fast, wideband algorithm for finding the bearing of an acoustic source using a single aero-acoustic AVS located on the ground. Similar bearing estimation algorithms, which we will make use of here, that are applicable to the free-space and seabed scenarios were presented in [6] and [10], respectively. We also develop wideband, closed-form algorithms that combine bearing estimates from several arbitrary locations to determine a source's three-dimensional (3-D) position. The bearing estimate is based on the measured acoustic intensity vector. We derive an optimal bound on its mean-square angular error (MSAE) [25], [26] and use it to obtain a data-based measure of the bearing estimator's accuracy. Each AVS transmits its bearing estimate, the estimate of its variability, and its current location to a central processor (CP), which uses them to determine 3-D position. We propose a weighted least-squares (WLS) method to estimate position using the variability measures supplied by the sensors as weights. We also develop a reweighted least-squares (RWLS) algorithm to account for the different ranges of the sensors from the target. The WLS estimator is closed form, and the RWLS estimator requires just a single iteration of the WLS algorithm. Consequently, they are both very computationally efficient. Since the individual bearing estimates are also closed form, the system provides a very fast estimate of the location of a wideband source. Note that these 3-D position estimation algorithms are independent of how the bearing estimates are obtained; they could come from subarrays of pressure sensors or other direction finders (passive or active). Finally, we derive a novel measure of potential performance for a distributed system such as this, based on a two-stage calculation of Cramér–Rao bounds (CRBs). Like the CRB, this measure is estimator independent and can be used as a benchmark against which to compare distributed processing techniques and as a criterion for array design.

As each AVS transmits only a bearing estimate rather than all measurements to the CP, this is a decentralized processing scheme [27], [28]. The resulting 3-D position estimator is suboptimal because it does not make use of correlations between different locations, but it has numerous advantages: Sensor placement is arbitrary and need not be fixed (although it must be known), so sensors can be dropped (from the air or sea surface) and may be used in a dynamic context, free floating like the sensors in [11] or carried by battlefield units, for example; each sensor provides local target bearing information (especially valuable in the dynamic context) without the need to communicate with the CP; even when communication is made, minimal data is sent, hence minimizing the risk of detection and the telemetry requirements; last, the algorithms are wideband and very computationally efficient as they require no numerical optimization. Of the previous work on distributed arrays, [27] supposes that the source is in the far field of all subarrays, i.e., all bearings from different locations are the same; therefore, no 3-D estimate of position can be made. The approach taken in [28] could be adapted to our current situation; however, it requires the transmission of the covariance matrix of each subarray, resulting in a somewhat greater communication burden. Furthermore, both require numerical search algorithms, greatly increasing computational complexity, and use subarrays of standard omnidirectional sensors. The fact that each “subarray” in our method is an AVS contained in a single sensor package gives it great flexibility in terms of deployment and usage.

In Section II, we present the mathematical model for the sensor measurements, and Section III develops an algorithm to rapidly estimate bearing using a single vector sensor. In Section IV, we develop weighted and reweighted least-squares algorithms for determining 3-D source position given the bearing estimates from each sensor, construct an estimator to determine the weights, and give an expression for a lower bound on the bearing estimator from each sensor. We propose the new potential performance measure for a distributed system in Section IV-C and numerically illustrate the efficacy of the proposed algorithms in Section V. Section VI concludes the paper.

II. MEASUREMENT MODEL

Fig. 1. Schematic illustration. The source emits spherical waves; sensors on the boundary at r_1, . . . , r_m estimate the bearing vectors u_1, . . . , u_m.

In the following, bold-face characters represent vectors, and upper-case characters represent matrices. We assume that there is a single bandlimited acoustic source radiating bandlimited spherical waves into an isotropic homogeneous whole-space or half-space. The signal is received by vector sensors at arbitrary distinct locations. The scenario (for ground or seabed) is illustrated in Fig. 1. As long as the source-to-sensor distance is more than a few times the maximum wavelength and the sensor's dimensions are small compared with the minimum wavelength, the wavefront arriving at each sensor is essentially planar. The acoustic particle velocity and the acoustic pressure at any point that is more than a few (maximum) wavelengths from the source are related by Euler's equation [29]

(1)

where is the velocity vector, is the pressure, is the density of the medium, is the speed of sound, and is the unit vector from the source to that point.

In the free-space scenario, the output of an AVS is the four-element vector (see [6])

(2)

for , where and are the complex envelopes of the acoustic pressure and particle velocity, respectively (the latter normalized by ), is the complex envelope of the pressure signal at the sensor, and represents noise. The vector is the unit vector pointing from the sensor to the source's location at time , where is the propagation delay between the source and sensor at time . Since this propagation delay cannot be compensated for, we will simply refer to the target's location at time as its position. Note that we implicitly assume that the observation interval is short enough, relative to the inverse of the source's speed, that it is approximately constant for the whole interval.
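To make the free-space model concrete, the following is a minimal sketch of the far-field relation (1) and the AVS output (2); the symbols used here (v for velocity, p for pressure, ρ for density, c for sound speed, u for the sensor-to-source unit vector, s for the signal envelope, e for noise, N for the number of snapshots) are assumed notational choices consistent with the surrounding description, not necessarily the paper's own.

```latex
% Sketch only: assumed notation, chosen to match the description around (1)-(2).
% u(t) is the unit vector from the sensor toward the source, so the wave
% propagates along -u and the particle velocity is antiparallel to u.
\[
  \mathbf{v}(\mathbf{r},t) \;=\; -\,\frac{p(\mathbf{r},t)}{\rho c}\,\mathbf{u}(t),
  \qquad
  \mathbf{y}(t) \;=\; \begin{bmatrix} 1 \\ \mathbf{u}(t) \end{bmatrix} s(t) \;+\; \mathbf{e}(t),
  \quad t = 1,\ldots,N.
\]
```

Under this sign convention, normalizing the measured velocity channels by −ρc (one natural reading of the normalization mentioned above) aligns the signal part of the last three elements with u(t), which is what gives a single AVS its directional sensitivity.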

For the ground and seabed scenarios, we assume that the ground or seabed, which defines the x–y plane, forms a flat planar boundary on which all sensors are located. If the source is not too close to the boundary, the total field at any point on or above the interface may be obtained as the superposition of the sound fields arising from the original source and an image source (see Fig. 2). The image source is obtained by reflecting the original source in the boundary and has amplitude and phase determined by the boundary characteristics and the point of interest. The fields from the two sources are summed as if the boundary were not present. This is known as the ray acoustics or geometrical optics approximation [30]. Note that if the source is located on or very close to the boundary, ground and surface waves may exist [31]. The resultant field may still be obtained using an image source for locally reacting surfaces, but now the image source must be somewhat modified [32].

Fig. 2. The field at r is the sum of the real and image fields and satisfies the boundary condition Z = -p/v_z at r.

We therefore assume that the source is far enough away from the boundary that the resulting field can be regarded as arising from a point source radiating spherically symmetric waves and a simple point image with (complex) amplitude relative to the true source. Consider an AVS located at a point on the boundary. The bearing of the image source relative to the sensor is obtained by negating the z-component of the bearing of the true source. The output of the AVS is thus

(3)

for , where is the complex envelope of the pressure signal that would arrive at the sensor if the boundary were not present.

At the interface, the condition , where is the z-component of the total (unnormalized) velocity field and is the specific acoustic impedance of the surface at that point, must be satisfied [29]. Therefore

(4)

where is the elevation of the source with respect to the sensor; therefore

(5)

The quantity is known as the reflection coefficient. In general, and, hence, is a function of both incidence angle, i.e., the angle between the bearing and the z-axis, and frequency. Our measurement model implicitly assumes that is frequency independent. Therefore, the signal bandwidth must be such that, for any given incidence angle, is approximately constant over the frequency range. This is the only bandwidth restriction that we require. Since the distributed system of sensors may extend over many hundreds of times the smallest wavelength, this is a much less stringent requirement than the standard narrowband array processing assumption. Thus, we refer to our algorithms as wideband.

We model the ground as a locally reacting surface, i.e., one for which is independent of the incidence angle. The reflection coefficient is then given by (5), with a fixed complex constant. Experiments in [31] showed that various ground surfaces behave as if they are locally reacting. Such surfaces also arise in architectural acoustics with porous sound-absorbing materials [29]. A locally reacting surface may be characterized as one in which the sound disturbance transmitted into the lower medium does not travel along its boundary (actually, this is only strictly true for plane waves since, as mentioned above, ground waves and surface waves may exist when the source is very close to the boundary), and therefore, the normal velocity at each point is completely determined by the pressure at this point [30]. The seabed is modeled as the interface between two liquid layers, one of which is absorptive. This model is a reasonable approximation for a water-packed sandy bottom [30, p. 11], although the reflective properties of many seafloor terrains will undoubtedly be more complicated. The reflection coefficient is given by

(6)

where is the ratio of sand density to water density, and is the index of refraction. Absorption is accounted for by allowing the index of refraction to be complex, i.e., , with . For the sandy ocean bottom, typical values are , , and [30, p. 11]. Note that, as required, the reflection coefficient does not depend on frequency in these models, (5) and (6).
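For reference, the classical reflection-coefficient expressions that match the two descriptions above are sketched below; the elevation angle ψ (measured from the boundary), the surface impedance Z, the density ratio m, and the complex refraction index n are assumed symbol choices, and the paper's exact normalization of (5) and (6) may differ.

```latex
% Assumed standard forms (locally reacting surface; two-fluid interface),
% consistent with the text but not copied from the paper's (5)-(6).
\[
  \Gamma_{\mathrm{ground}}(\psi) \;=\; \frac{Z\sin\psi - \rho c}{Z\sin\psi + \rho c},
  \qquad
  \Gamma_{\mathrm{seabed}}(\psi) \;=\;
  \frac{m\sin\psi - \sqrt{\,n^{2} - \cos^{2}\psi\,}}{m\sin\psi + \sqrt{\,n^{2} - \cos^{2}\psi\,}} .
\]
```

Neither expression depends on frequency once Z, m, and n are treated as constants, which is the property the wideband argument above relies on.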

It follows from (2) and (3) that, for each of the scenarios, the measurement of a single AVS may be written

(7)

where for the free-space scenario, and

(8)

where is the source’s azimuth relative to the sensor for theground and seabed cases. In fact, (7) and (8) form the samemeasurement model as for a single vector sensor located ona boundary when illuminated by a planewave from bearing(see [10]). This occurs because of the point-like sensor assump-tion and the ray acoustic approximation. Of course, unlike theplanewave model, the bearing and, hence, the reflection coeffi-cient will differ from one location to another. These expressionsfor assume that the three velocity components are aligned withthe three coordinate axes or that the orientation of the sensor isknown and that the data have been rotated to achieve the sameeffect. The Swallow floats described in [11] contain a fluxgatecompass to provide information on the horizontal components’alignment and are carefully trimmed before deployment to min-imize tilt. For the ground and seabed sensors, it should not bedifficult to design a sensor package for which it is easy to align

1482 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 51, NO. 6, JUNE 2003

the vertical component, even when deployed by air or surfacevessel. Again, a compass could provide horizontal alignment.
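One concrete way to see the structure of (7)–(8): superposing the direct and image fields, as described above, adds the reflection coefficient to the pressure and horizontal-velocity channels and subtracts it from the vertical one. The explicit parametrization below, with azimuth φ and elevation ψ, is an assumption consistent with that description rather than the paper's exact notation.

```latex
% Plausible boundary steering vector implied by the real-plus-image superposition;
% the (phi, psi) parametrization and the ordering of channels are assumed.
\[
  \mathbf{h}(\phi,\psi) \;=\;
  \begin{bmatrix}
    1+\Gamma(\psi) \\[2pt]
    \bigl(1+\Gamma(\psi)\bigr)\cos\psi\cos\phi \\[2pt]
    \bigl(1+\Gamma(\psi)\bigr)\cos\psi\sin\phi \\[2pt]
    \bigl(1-\Gamma(\psi)\bigr)\sin\psi
  \end{bmatrix}.
\]
```

Setting Γ ≡ 0 recovers the free-space steering vector [1, u^T]^T, and the ratio of the two horizontal entries equals tan φ for any Γ, which is the property the azimuth estimator of Section III exploits.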

A. Statistical Assumptions

We assume that the signal and noise processes are zero-mean uncorrelated processes with finite second-order moments and that

(9)

(10)

for , where is the Kronecker delta function, is the identity matrix, the overbar represents conjugation, and the superscript represents complex conjugation and transposition. The assumption of spatially white noise is consistent with internal sensor noise. In the free-space case, we have shown it to be consistent with isotropic and even certain anisotropic ambient noise fields [33]. The assumption of independent time samples is consistent with spectra that are symmetric about their center frequency and sampled at the first zero of their autocorrelation function. It is included for ease of exposition and is not necessary for the following algorithms to be implemented. If there is time correlation, however, more samples will be required to achieve a given level of accuracy.
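In symbols, the assumptions just stated amount to second-order moments of the following form, where σ_s² and σ² are assumed names for the signal and noise powers.

```latex
% Restatement of the assumptions behind (9)-(10); sigma_s^2 and sigma^2 are assumed names.
\[
  \mathrm{E}\{\, s(t)\,\bar{s}(\tau) \,\} = \sigma_s^{2}\,\delta_{t\tau},
  \qquad
  \mathrm{E}\{\, \mathbf{e}(t)\,\mathbf{e}^{H}(\tau) \,\} = \sigma^{2} I\,\delta_{t\tau},
  \qquad
  \mathrm{E}\{\, s(t)\,\mathbf{e}^{H}(\tau) \,\} = \mathbf{0},
\]
```

with δ the Kronecker delta and I the 4 × 4 identity matrix.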

Letting the reference signal be the pressure signal at the origin, the complete measurement model for the distributed array of sensors is

(11)

for , , where is the th sensor’ssteering vector, is the distance of the source from the origin,are the source to sensor distances, and are thedifferential time delays with respect to the origin. The termaccounts for spherical spreading loss of the signal so that thesignal power will vary between sensors. Theaccount for thedifferential Doppler shifts between each sensor and the origin.In this model, it is implicitly assumed that eachpoints towardthe same location. Thus, the differential time delays between allsensors and the observation interval must be short enough, rel-ative to the inverse’s of the target’s speed, such thatandremain approximately constant. The algorithms in the followingsections do not make use of intersensor correlations; therefore,there is no similar requirement on and . No further statis-tical assumptions are required regarding the noise at differentsensors. Indeed, it may vary in power and even be correlatedbetween sensors at different locations. Note that for the groundand seabed scenarios, the reflection coefficients, on which

depend, will generally vary from one location to another asa result of the different bearings. However, variability in the

can also be used to account for different reflection charac-teristics of the surface at different locations.
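A sketch of how the complete model (11) plausibly reads, given the quantities defined in this paragraph; the spreading factor d_0/d_k and the symbol names (h_k, τ_k, e_k) are assumptions consistent with the text, not the paper's verbatim equation.

```latex
% Assumed form of the distributed-array model (11): spherical spreading relative
% to the origin (factor d0/dk) and a differential delay tau_k at each sensor.
\[
  \mathbf{y}_k(t) \;=\; \frac{d_0}{d_k}\,\mathbf{h}_k\, s\!\left(t-\tau_k\right) \;+\; \mathbf{e}_k(t),
  \qquad k = 1,\ldots,m,\quad t = 1,\ldots,N.
\]
```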

III. BEARING ESTIMATION

In this section, we derive fast wideband algorithms to estimate the bearing of the source from a single vector sensor. As well as providing the information on which the 3-D position estimator is based, this information may be of use in its own right. For example, in a mobile battlefield array, where each sensor is carried by a battlefield unit, such as a soldier, this estimate provides vital target information to the unit without the need for any communication, thereby minimizing detection risk.

Acoustic intensity is a vector quantity defined as the product of pressure and velocity. For the free-space problem, it is parallel to the bearing; therefore, it may be estimated to determine the bearing [6]. In the two boundary problems, since the x and y components of the intensity vector are the same for both real and image sources, the acoustic intensity vector is parallel to the projection of the bearing onto the x–y plane. However, the 3-D intensity vector is not parallel to the bearing; therefore, the same method cannot be used to find the elevation.

A. Free Space

For the free-space scenario, the bearing estimate, based on the intensity vector, is (see [6])

Re (12)

(13)
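As a sketch of how the intensity-based free-space estimator of (12)–(13) can be computed, the snippet below averages the instantaneous active intensity Re{p̄(t)v(t)} over the observation interval and normalizes the result to unit length; the array layout (pressure channel first) and the variable names are assumptions.

```python
import numpy as np

def freespace_bearing(y):
    """Intensity-based bearing estimate from a single free-space AVS.

    y : complex array of shape (4, N); row 0 is the pressure channel and
        rows 1-3 are the (normalized) velocity channels.  Layout assumed.
    Returns a unit 3-vector pointing from the sensor toward the source.
    """
    p = y[0]                                   # pressure complex envelope
    v = y[1:4]                                 # velocity complex envelopes
    # Time-averaged active intensity: (1/N) * sum_t Re{ conj(p(t)) * v(t) }
    intensity = np.mean(np.real(np.conj(p) * v), axis=1)
    return intensity / np.linalg.norm(intensity)
```

Because only a channel-wise product and an average are involved, the estimate is closed form and inherently wideband.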

The asymptotic, normalized MSAE is defined as

MSAE (14)

For a very large class of estimators, MSAE is lower bounded by (see [6] and [26])

MSAE CRB CRB (15)

where CRB is the CRB. In addition, if the estimator is an unbiased unit-length estimator, MSAE is also a lower bound for the normalized finite-sample MSAE. In [6], it was shown that when the signals and noise are Gaussian, the estimator in (12) and (13) has MSAE depending on the signal-to-noise ratio (SNR) at the sensor. It was also shown that

MSAE (16)

under the same distributional assumptions.

B. Ground and Seabed

The horizontal component of acoustic intensity is

(17)

Thus, under the noise model of (10)

(18)

Since this is purely real, we let Re , and by the strong law of large numbers, . Thus, we can estimate azimuth from

(19)
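Since the x and y components of the averaged intensity are proportional to cos φ and sin φ with a common positive factor, the azimuth estimator (19) reduces to a four-quadrant arctangent of the averaged horizontal intensity. A minimal sketch, with the same assumed channel layout as before:

```python
import numpy as np

def boundary_azimuth(y):
    """Azimuth estimate from a single AVS on the ground or seabed.

    y : complex array of shape (4, N); row 0 is the pressure channel and
        rows 1-2 are the two horizontal (x, y) velocity channels.  Layout assumed.
    Returns the azimuth in radians, measured in the boundary plane.
    """
    p = y[0]
    cx = np.mean(np.real(np.conj(p) * y[1]))   # averaged x-component of intensity
    cy = np.mean(np.real(np.conj(p) * y[2]))   # averaged y-component of intensity
    return np.arctan2(cy, cx)                  # four-quadrant inverse tangent
```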


Note that (19) is independent of the reflection coefficient, and therefore, we can use this estimator to determine the azimuth even without knowing the local reflective properties of the ground. Neither is a normal-component velocity sensor required, of course. Furthermore, even when the source is close to the boundary, so that model (3) does not hold, (19) will still produce a consistent estimator of the azimuth as long as the x and y components of the velocity signal are proportional to the cosine and sine of the azimuth, respectively, and the constant of proportionality is the same for both. When this is the case, (19) may be used to find the bearing of targets located on the boundary. Since the magnitude of the horizontal component of acoustic intensity depends strongly on the elevation, so will the accuracy of the azimuth estimate. With appropriate modification, we can apply the analysis of [6, App. B] to this azimuthal estimator to show that its asymptotic MSAE is (see Appendix B)

(20)

where is the SNR.

Obtaining an estimator of the elevation requires that the functional form of the reflection coefficient be known. The vertical component of acoustic intensity has expected value

(21)

Using (18), we see that

(22)

is a function of alone, which we estimate from the statistic

Re

(23)

The elevation estimate is then the solution to

(24)

Substituting (5) into (22), we obtain . Hence, for the ground surface, the elevation can be estimated from

Re (25)

For the seabed, we substitute (6) into (22) to obtain

(26)

and therefore, we estimate it using

Re (27)

Note that the estimates produced by (25) and (27) lie between 0 and π/2, hence incorporating the a priori information that the source lies above the surface. The bearing estimator for the seabed problem and the azimuthal estimator were developed in the context of planewave reflection in [10]. The same estimators arise here because the ray acoustics approximation results in a similar model for the AVS's steering vector.

It can be seen from the above that the bearing is estimated via the three components of intensity. Therefore, we could in theory use three orthogonally oriented single-axis intensity probes. Microflown Technologies B.V. already packages its novel aero-acoustic velocity sensor with a pressure sensor to form such an intensity probe. Intensity probes are available from other manufacturers such as Brüel and Kjær, although those currently available use two closely spaced pressure sensors, instead of true velocity sensors, to determine the velocity. As discussed in [6], such sensors are not considered appropriate for vector-sensor processing.

There is no known expression for the MSAE of the full bearing estimator in the ground and seabed scenarios. However, in Appendix A, we show that under the assumption of Gaussian signals and noise

MSAE

(28)

where

(29)

(30)

Re (31)

(32)

and is the derivative of the reflection coefficient with respect to elevation. Thus, the MSAE is a function of the SNR and the elevation but not the azimuth. Compare this with the free-space situation, in which MSAE is solely a function of the SNR.

1) Multiple Sources: The intensity-based bearing estimators outlined above are only effective when there is one dominant source: They may not be used in the presence of a strong interfering source. A detailed analysis of the case of multiple sources is beyond the scope of this paper; however, we make the following observations and suggestions for future work.

It has been shown that an AVS can identify the directions of up to two sources [34]. Therefore, the case of a source and a single interfering source can be handled with the same array structure, i.e., a single AVS at each location. To deal with the two-source situation, we suggest the following adaptations of conventional and minimum-variance beamforming-based estimators: Form the spectra

(33)


(34)

where is the sample covariance matrix, and the two quantities denote the conventional and minimum-variance spectra, respectively. The expressions for the spectra are similar to the usual conventional and minimum-variance beamforming spectra with the array steering vector replaced by the AVS steering vector, except that an additional normalizing term is added to compensate for the fact that, in the boundary scenarios, the magnitude of the steering vector is not constant but depends on the elevation angle. The conventional (minimum-variance) bearing estimates are the values corresponding to the two largest local maxima of the spectrum. The steering vector is independent of frequency because all four AVS components are co-located; therefore, unlike traditional pressure-sensor array frequency-domain beamforming estimators, these estimators are wideband. Of course, a 2-D numerical search is required so that they are not as fast as the intensity-based algorithms. The beamforming estimates can be used when there are one or two sources, and in the former case, the intensity-based estimate could be used to initialize the search. An investigation of the properties of these estimators, for one or two sources, would be an interesting direction for future research.
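A sketch of the normalized conventional and minimum-variance spectra described above, for a single AVS and a user-supplied steering-vector function. The normalization by ||h||² is one reasonable reading of the compensation term mentioned in the text, and steer(az, el) is a hypothetical helper returning the four-element steering vector for a trial bearing.

```python
import numpy as np

def avs_spectra(R_hat, steer, az_grid, el_grid):
    """Conventional and minimum-variance spectra over a bearing grid.

    R_hat   : 4x4 sample covariance matrix of one AVS.
    steer   : callable (az, el) -> length-4 complex steering vector (hypothetical).
    Returns : two 2-D arrays (conventional, minimum-variance), indexed [el, az].
    """
    R_inv = np.linalg.inv(R_hat)
    P_conv = np.zeros((len(el_grid), len(az_grid)))
    P_mv = np.zeros_like(P_conv)
    for i, el in enumerate(el_grid):
        for j, az in enumerate(az_grid):
            h = steer(az, el)
            norm2 = np.real(np.vdot(h, h))  # ||h||^2 compensates the elevation-dependent gain
            P_conv[i, j] = np.real(np.vdot(h, R_hat @ h)) / norm2
            P_mv[i, j] = norm2 / np.real(np.vdot(h, R_inv @ h))
    return P_conv, P_mv
```

The two largest local maxima of either surface then give the bearing estimates; for a single source, the intensity-based estimate can initialize the search, as noted above.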

When there are more than two sources, a small subarray of two or more spatially separated AVSs would be required at each location in order to estimate the bearings to all sources.

IV. POSITION ESTIMATION

We now consider the problem of how to combine the decentralized estimates of the target bearings to obtain a 3-D estimate of its location. The algorithms of this section apply to all scenarios. Each sensor transmits its local estimate of the direction from its location to the source, as well as its own location. The location could be determined from a lightweight GPS receiver packaged with the sensor, for example. In practice, both the local bearing and position will contain errors; however, we assume that the bearing estimate is the dominant source of error and ignore possible inaccuracies in the location. Therefore, a total of five quantities (four if all sensors are at the same altitude) need to be transmitted, three to describe the location and two for the bearing, no matter how long an observation window is used. This is a huge advantage over a centralized processing scheme in terms of communications overhead, where every single data sample from every single sensor must be sent to a central processor. The results of this section are applicable to bearing estimates obtained from any type of subarray and not just individual AVSs. In particular, many sources may be handled by these techniques if each subarray consists of multiple traditional or vector sensors and can therefore estimate the bearing to more than two sources.

A. Weighted Least Squares

If all the bearing estimates were without error, the collection of lines passing through each sensor location with the estimated direction would intersect at the true source position. Therefore, we want to choose the estimate of the source's location to be a point that is in some sense closest to all these lines. We will choose it to minimize a weighted sum of the minimum squared distances from the point to each line. By doing so, we will derive a closed-form solution for the location estimate, avoiding the need for a complex computational search. Any point along the line defined by the th sensor's location and bearing estimate is defined by the corresponding vector. For fixed location, the point of closest approach occurs at the projection of the vector from the sensor to that location onto the bearing direction. Thus, we propose a weighted least-squares (WLS) estimate of the location, which is given by

(35)

where is a weight corresponding to the accuracy of each bearing estimate. Expanding (35) and rearranging, we obtain

(36)

where we have dropped terms independent of the location. Note that the bracketed matrix is the projection matrix onto the plane orthogonal to the bearing. Differentiating with respect to the location and setting the result equal to zero gives

(37)

Hence, we arrive at the closed-form solution

(38)

where , diag , , and

(39)
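The closed-form WLS solution (38)–(39) can be implemented directly from the derivation above: each bearing contributes the projector onto the plane orthogonal to it, and the weighted normal equations are solved once. A minimal sketch with assumed variable names:

```python
import numpy as np

def wls_position(r, u, w):
    """Closed-form weighted least-squares source position estimate.

    r : (m, 3) array of sensor locations.
    u : (m, 3) array of unit bearing vectors from each sensor toward the source.
    w : (m,) array of nonnegative weights (e.g., reciprocal MSAE bounds).
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for rk, uk, wk in zip(r, u, w):
        P = np.eye(3) - np.outer(uk, uk)   # projector onto the plane orthogonal to the bearing
        A += wk * P
        b += wk * (P @ rk)
    return np.linalg.solve(A, b)           # solves (sum_k w_k P_k) p = sum_k w_k P_k r_k
```

Minimizing the weighted sum of squared distances from the candidate point to the bearing lines leads exactly to these normal equations, so no iterative search is needed.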

1) Choice of Weights: In general, the accuracy of the bearing estimates will be different from sensor to sensor due to a number of factors. There may be local variations in background noise level or ground reflectivity, signal strength will differ between sensors that are different distances from the source because of spherical spreading loss or partial occlusion, and the accuracy of the estimation algorithm may depend on the true bearing, which differs between sensors. Thus, it is important for each sensor to transmit a measure of accuracy along with its bearing estimate to the central processor. A very natural measure in this situation is the MSAE. Since no finite-sample expression is known for the MSAE, we consider instead the bound MSAE (28). In the free-space problem, if the signal and noise are Gaussian, we could also use the previously given expression for MSAE; however, in our simulations, we found no discernible difference in the resulting accuracy of the location estimate.

Let us suppose the signal and noise are Gaussian, so that MSAE is given by (16) or (28). It then depends on the unknown quantities and, for the boundary scenarios, the elevation, so that it must be estimated by plugging in estimates of the unknowns. The bearing estimator itself already provides us with an estimate of the elevation; therefore, the SNR remains to be estimated. If we knew the maximum-likelihood (ML) estimate of the bearing, then we could use closed-form expressions (see, e.g., [35]) to find the ML estimates of the signal and noise powers. Since we do not know it, we will use our actual estimate of the bearing. Therefore, we have

Re tr (40)

(41)

where , and is the sample covariance matrix. Our estimate of the SNR is the ratio of these estimates. Finally, we plug the SNR estimate and the elevation estimate into (16) or (28) to obtain MSAE. This calculation is made locally at each sensor, and the weight sent to the central processor and used in determining the location is then based on MSAE. Note that this method of choosing weights can be used with bearing estimators other than those developed in Section III. If the signal and noise are not Gaussian, (28) may not always be easy to compute. If it is particularly intractable, we expect that using the above procedure based on the Gaussian assumption will still lead to better estimates of the location than would uniform weighting for many distributions.
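The weight computation itself is a plug-in rule: each sensor estimates its SNR and (for the boundary scenarios) the elevation locally, evaluates the MSAE bound at those estimates, and transmits the reciprocal as its weight. The sketch below leaves the bound as a caller-supplied function, since its closed form, (16) or (28), depends on the scenario; all names here are assumptions.

```python
def wls_weight(snr_hat, elev_hat, msae_bound):
    """Plug-in WLS weight: reciprocal of the estimated MSAE lower bound.

    snr_hat    : locally estimated SNR at this AVS.
    elev_hat   : locally estimated elevation (ignored in the free-space case).
    msae_bound : user-supplied callable (snr, elev) -> MSAE bound, e.g. an
                 implementation of (16) or (28); hypothetical interface.
    """
    return 1.0 / msae_bound(snr_hat, elev_hat)
```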

B. Reweighted Least Squares

Errors in the bearing estimates from sensors far from the source have a much greater effect upon the position estimate than those from sensors nearby. The contribution of the th bearing estimate to the squared error criterion is approximately proportional to the squared range times the squared angular error, where the range is (as before) the distance from each measurement location to the source. Although we do not know the ranges, we do have an estimate of them after we have estimated the position using the above WLS procedure. Therefore, we propose a reweighted least-squares (RWLS) estimator constructed as follows: Find the position using the weights based on MSAE. Using this estimate, estimate the distances from each sensor to the source, and then construct a reweighted estimate, again using WLS but now with weights that also account for the estimated ranges.
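The RWLS step then reuses the same solver: after a first WLS pass, the estimated ranges rescale the weights by 1/d̂_k² and one further pass produces the final estimate. A sketch building on the wls_position function above (names assumed):

```python
import numpy as np

def rwls_position(r, u, w0):
    """Reweighted least squares: one extra WLS pass with range-corrected weights.

    r  : (m, 3) sensor locations.
    u  : (m, 3) unit bearing vectors from the sensors toward the source.
    w0 : (m,) initial weights, e.g. reciprocal MSAE bounds.
    """
    p_hat = wls_position(r, u, w0)              # first-pass WLS estimate
    d_hat = np.linalg.norm(p_hat - r, axis=1)   # estimated sensor-to-source ranges
    w1 = w0 / d_hat**2                          # weights ~ 1/(d^2 * MSAE), so squared-distance
                                                # residuals are weighted like angular errors
    return wls_position(r, u, w1)
```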

This RWLS estimator can be thought of as an extension of Stansfield's estimator [36] to three dimensions and to the case of unknown angular error variance and lengths, as we now show. In two dimensions, the bearings can be represented by a single angle. Suppose each estimated bearing is an unbiased Gaussian-distributed estimate of the true bearing with known variance and that the bearing estimates from different locations are uncorrelated. The ML estimate of the position would be

(42)

where is the angular error of the th bearing estimate. Since this has no closed-form solution, Stansfield proposed replacing

with its sine, i.e., let

(43)

which does have a closed-form solution, provided that the lengths are known as well as the angular error variances. Now, in two or three dimensions, the sine of the angular error is equal to the ratio of the length of the projection of the position error onto the subspace orthogonal to the bearing to the length of the position error, i.e.,

(44)

However, substituting this into (44), then (43), and comparing with (35), we see that Stansfield's estimator is a WLS estimator with this choice of weights.

C. Distributed Potential Performance

Calculation of the CRB on the source location for the entire array based on all measurements made by all sensors, i.e., (11), quickly becomes computationally infeasible for even a moderate number of samples because of the wideband nature of the source signals. Even for a narrowband source, however, it leads to a completely unrealistic assessment of the potential of the distributed system because it implicitly supposes that all possible cross-correlations between measurements at different locations can be estimated. Therefore, in this section, we develop an estimator-independent indicator of the potential performance achievable with a distributed system. It is based on a two-stage calculation of CRBs, made under the assumption of Gaussian signals and noise, but is not itself a CRB. Nevertheless, in the examples of Section V, it does lower bound the variance of the location estimate and is attained at high SNRs. Therefore, we expect it to provide a good benchmark against which to assess the performance of a particular distributed estimation scheme and an effective criterion for system design.

Each sensor sends a bearing estimate and an estimate of its accuracy based on the estimated values of signal power and noise power at its location. We could, of course, send the signal power and noise power estimates separately, so let us suppose that each sensor transmits this vector of quantities to the CP. At the first stage, we calculate the CRB using only those measurements made by the th vector sensor, i.e., using model (7). This may be done using Bangs' formula for zero-mean complex Gaussian data, which gives the entries of the Fisher information matrix (FIM) (see [37, p. 525], for example). Thus, the th entry of FIM is

FIM tr (45)

where the first quantity is the 4 × 4 covariance matrix of the measurement data at the th AVS, and the second is the th entry of the corresponding parameter vector.

In the second stage, we consider the per-sensor estimates as measurements that are Gaussian distributed with mean equal to the true values and covariance CRB. The ML estimate will asymptotically have these properties [38]. In addition, we suppose that the estimates at different sensors are mutually uncorrelated. For a finite number of samples, this latter assumption is unlikely to be exactly true since the measurements are correlated between different sensor locations, although it may well be true asymptotically. Nevertheless, we expect that ignoring possible correlations between different sensors will tend to lead to a lower bound on the variance, since setting the off-diagonal elements of a positive definite matrix to zero generally increases its determinant and, hence, reduces the diagonal elements of its inverse. Stacking the per-sensor vectors in a single vector, and similarly creating the corresponding vector of true values, we have the measurement model

(46)

where the error term is a zero-mean (real) Gaussian distributed vector that has a block-diagonal covariance

blkdiag CRB CRB (47)

The vector of unknowns at the CP consists of the location, the source power at the origin, and the noise powers at each of the sensors. Note that in defining the unknowns in this way, it is implicitly assumed that the noise at different AVSs is uncorrelated. We calculate the CRB using the equivalent of Bangs' formula for real Gaussian data (see [37, p. 47]); therefore, the th entry of the FIM is

FIM

tr (48)

We then invert the FIM and extract the appropriate 3 × 3 block corresponding to the CRB on the location. We denote this 3 × 3 matrix by DPP. Note that (48) requires determination of the derivative of each CRB with respect to each component of the unknown vector, which is, in most cases, unlikely to be analytically tractable. Therefore, it may have to be computed numerically, which is what was done in the examples in Section V. When the location is expressed in Cartesian coordinates, these may be converted to polar coordinates by

DPP DPP (49)

where the transformed vector contains the range, azimuth, and elevation, respectively, of the source relative to the origin. This is similar to the CRB parameter transformation formula (see, e.g., [37, p. 45]). The potential mean-square range error (MSRE, see [26]) of the distributed system, which is denoted MSRE, is just that entry of DPP corresponding to the range. The potential MSAE is, by analogy with the MSAE [see (15)]

MSAE DPP DPP (50)

D. Multiple Sources

The issue of multiple sources is beyond the scope of the paper; however, we believe the WLS and RWLS position estimation methods could be extended for use with multiple sources. If the CP can correctly associate bearing estimates from each location with the bearing estimates from all other locations, that is, it correctly decides which bearing and source power estimates correspond to the first source, which to the second, and so on, each source's position could be estimated independently of the others with the algorithms described above. Methods of association would form a very interesting topic for further work. Perhaps matching frequency spectra or signatures of the various signals would be fruitful. The problem may also be related to the problem of eigenvalue association in ESPRIT. Alternatively, the WLS could be extended to jointly find the positions of all sources by searching all associations and choosing that which minimizes the sum of the individual WLS criteria over the number of sources. Extension of the formulae for the distributed potential performance measure to multiple sources should be relatively straightforward. It clearly remains a lower bound, as its calculation would assume that the associations are correct at the central processor.

Fig. 3. Underwater scenario: Performance MSAE of the fast bearing estimator (solid) and bound MSAE (dashed) for a 3-dB source with 175 snapshots. The mean estimated bound MSAE (dash-dotted), plus and minus three empirical standard deviations (dotted), is also shown. Five hundred realizations were used.

Fig. 4. Ground scenario: Performance MSAE of the fast bearing estimator (solid) and bound MSAE (dashed) for a 20-dB source with 350 snapshots. The mean estimated bound MSAE (dash-dotted), plus and minus three empirical standard deviations (dotted), is also shown. Five hundred realizations were used.

V. NUMERICAL EXAMPLES

A. Single-Sensor Bearing Estimation

For the purposes of simulation, we use a Gaussian distributed signal and noise. For the seabed case, we use the parameter values given directly below (6), which is a 3-dB source, and 175 snapshots. For the ground problem, we take Z = 11 + 13i, which was measured by [31] for grass-covered flat ground at 215 Hz, and use a 20-dB source with 350 snapshots. The performance of the bearing estimate in free space is studied in [6]. Figs. 3 and 4 summarize the results, which are independent of azimuth, as a function of the incidence angle, i.e., the angle from the normal to the boundary. The seabed estimator is rather more accurate than the ground estimator at all angles (note the higher SNR and greater number of snapshots used in the latter example). The reason for this is the substantially lower density of air relative to the ground than that of water relative to the seabed. The large differential causes the ground surface to appear almost rigid, i.e., there is very little motion normal to the boundary and, as a result, very low signals at the normal velocity sensor, which is responsible for much of the directional sensitivity of the AVS at the boundary. In both cases, performance is close to the bound at normal incidence, i.e., with the source directly above the sensor. As incidence increases, the performance, though not the bound, worsens steadily for the ground sensor. For the seabed case, performance stays approximately constant and close to the bound before worsening rapidly, which is the same time that the bound actually decreases. In both cases, performance and bound tend to infinity as the incidence reaches grazing because the signal gain tends to zero, and therefore, so does the SNR.

Except at large incidence, the estimated quantity MSAE is seen to estimate MSAE well. This is especially true for the battlefield problem and is probably due to the higher SNR in the simulation. It is possible, on any run, with finite probability, that the argument of the inverse cosine in (25) or (27) is greater than one. In this case, our technique fails to yield an estimate of MSAE because it is theoretically infinite if the quantity really is zero. In our simulation, this never occurred below 66° incidence, and the chances of it occurring rose to about 50% within a few degrees of grazing incidence. However, we do not expect that such large incidences will need to be measured in this application, and therefore, this should not be a problem in practice. If it does occur at one sensor in the array, a solution would be to use the average of the estimated MSAE obtained from the other sensors as its weight in the WLS location procedure.

Fig. 5. Signal gain resulting from reflection and sensor directionality for the pressure (solid), in-plane (dashed), and normal (dash-dotted) components of a vector sensor at a locally reacting boundary. The sensor is located on the ground with normalized input impedance Z = 11 + 13i.

Examination of the standard deviation of the elevation and azimuthal estimates separately (not shown) reveals that the majority of the angular error, especially in the battlefield scenario, is due to errors in the elevation rather than the azimuth. Thus, the 3-D location system can determine the x–y coordinates (ground track) of a target rather more accurately than its 3-D position or height. The reason for this may be seen in Fig. 5. It shows the signal gain resulting from reflection at the boundary and the incoming signal's direction for each sensor component, i.e., the squared magnitudes of the entries of the steering vector, for the ground sensor. Actually, the illustrated in-plane gain is the gain of the sum of the in-plane components or, equivalently, the gain of one in-plane component when the bearing is in the same plane as its axis. It is seen that the normal component gain is much lower than both the pressure and in-plane gains for almost all angles. When the normal gain is larger than the in-plane gain, very near normal incidence, large errors in the azimuthal estimate have little effect on the MSAE because of the inherent singularity in the spherical coordinate system. This poor normal gain is mainly due to the size of the input impedance; if the input impedance were small, i.e., the surface were acoustically more pliable, the normal gain would improve relative to the in-plane and pressure gains, resulting in better estimation of elevation but poorer azimuthal estimation.

TABLE I. ANGLES AND RANGES FROM THE SENSORS TO THE SOURCE.

B. Position Estimation

We now give examples of the performance of the WLS and RWLS position estimators for a wideband signal. We use a stationary target with a Gaussian signal and six vector sensors. In the free-space example, the sensors are located at six distinct points in the water column. In the seabed and ground examples, they are located on the surface. In all examples, the source is at a fixed location. The resulting angles and ranges are shown in Table I. We also assume the sampling frequency in each case is equal to the speed of sound (1500 m/s in water and 330 m/s in air) so that the differential delay between sensors is an exact multiple of the sampling period, thereby avoiding the need to implement fractional delays in the simulation. The source signal is uncorrelated from one snapshot to the next. This would occur if, for example, it has a flat spectral density and its bandwidth was equal to the sampling frequency. Noise is of equal power and uncorrelated between different sensors. The SNR is defined as the ratio of the signal power at the origin to the common noise power. In keeping with model (11), the signal power at any point is determined by spherical spreading. For the boundary problems, the SNR is defined as the ratio of the signal power that would exist at the origin if the boundary were not present to the common noise power. The same reflection properties as in Section V-A are used for the ground and seabed.

Fig. 6. Angular estimation performance √MSAE for the LS (dotted), WLS (dash-dotted), and RWLS (solid) position estimators. √MSAE_DPP (dashed) is also shown. Free-space scenario.

Fig. 7. Range estimation performance √MSRE for the LS (dotted), WLS (dash-dotted), and RWLS (solid) position estimators. √MSRE_DPP (dashed) is also shown. Free-space scenario.

Fig. 8. Angular estimation performance √MSAE for the LS (dotted), WLS (dash-dotted), and RWLS (solid) position estimators. √MSAE_DPP (dashed) is also shown. Seabed scenario.

Fig. 9. Range estimation performance √MSRE for the LS (dotted), WLS (dash-dotted), and RWLS (solid) position estimators. √MSRE_DPP (dashed) is also shown. Seabed scenario.

Fig. 10. Angular estimation performance √MSAE for the LS (dotted), WLS (dash-dotted), and RWLS (solid) position estimators. √MSAE_DPP (dashed) is also shown. Ground scenario.

Fig. 11. Range estimation performance √MSRE for the LS (dotted), WLS (dash-dotted), and RWLS (solid) position estimators. √MSRE_DPP (dashed) is also shown. Ground scenario.

Figs. 6–11 show the MSAE and mean-square range error (MSRE) of the location estimate for the WLS estimator, the RWLS estimator, and an unweighted least-squares (LS) algorithm, i.e., the WLS procedure with equal weights. A total of 350 snapshots (at each AVS) were used, and the results were averaged over 1500 Monte Carlo trials. The improvement obtained by using WLS and RWLS over LS depends on the scenario; it is most noticeable in the ground problem and least in the seabed problem. In addition, the source's direction is more accurately determined than its range. This is not too surprising. It seems obvious that small errors in the local bearings will cause a larger error in the estimated range than in the estimated direction to the source, especially if the source is very far from the sensors. Indeed, it is known that the CRB on the variance of range estimators for a passive sensor array increases as the fourth power of the range [39].

We also show MSAE and MSRE, which are the angular and range potential accuracy measures computed using the distributed potential performance (DPP) measure of Section IV-C. The DPP measures lower bound the actual MSAE and MSRE in all cases. In the free-space scenario, they are essentially achieved by the RWLS estimate at high SNRs. In all cases, the MSAE of all estimates comes closer to the potential MSAE than does the MSRE to the potential MSRE, indicating that the source's bearing is more efficiently estimated than its range. The actual performance is worst, relative to the DPP, in the ground case. This is not surprising. In the free-space situation, the MSAE of each individual bearing estimate is quite close to the MSAE bound, regardless of the actual bearing (see the expressions in Section III-A), and achieves it at high SNR. In the seabed problem (see Fig. 3), the MSAE is close to the bound at most angles. However, in the ground case (see Fig. 4), the two quantities are only close near normal incidence.

VI. CONCLUSION

We developed a fast, wideband decentralized processing scheme for localizing targets in three dimensions using a distributed array of acoustic vector sensors. This method requires minimal communication between the sensors of the array and is very adaptable to a changing array configuration. We examined the cases of sensors located in free space and on two different types of surface: the seabed and the ground. For the latter case, we proposed a new fast wideband algorithm to determine the bearing from a single AVS to the source and gave a lower bound on its performance. We also showed how to estimate this bound from the data. We constructed a weighted least-squares 3-D position estimator based on combining bearings, and an estimated accuracy of each bearing, from various locations. We also proposed a reweighted least-squares method to take into account the different distances from the source to the various locations and showed the relationship between it and Stansfield's 2-D means of combining bearings [36]. We also developed a distributed potential performance measure, based on a two-step calculation of Cramér–Rao bounds, to use as a benchmark for assessing the efficacy of various distributed estimators and as a criterion for array design. Numerical examples illustrated our results.

We note several particularly interesting extensions of this work: We believe the DPP measure to be an important tool for determining the optimal performance of distributed systems in general. Further study of its theoretical properties would be very useful. We also note that the vertical stratification of the speed of sound in the ocean causes refraction of acoustic waves. As a result, when there is considerable variability in the sound speed profile between the source and the sensor, the vector sensor's direction estimate will not point directly at the target. In such a case, the direction estimates can be used to provide boundary conditions to ray-tracing algorithms. Furthermore, the location estimate can be used as an initial estimate in the iterative numerical optimizations that would be required to locate a source from simultaneous ray tracings. Development of algorithms along these lines could prove very fruitful.

Finally, we note that the algorithms herein do not require the measurement of the source over a period of time long enough that it has moved a substantial distance. Methods involving measuring the changing acoustic (pressure) signature of a source from a number of locations as it moves through space in order to determine location and to track sources have been developed (see [40] for example). We believe that the spatial information provided by the distributed vector-sensor approach could be combined very effectively with temporal techniques to produce accurate and fast source tracking systems.

APPENDIX A

In this Appendix, we derive the MSAE for a single vector sensor located on a reflecting boundary. Under the assumption of Gaussian signals and noise, the single-sensor measurement model (3) satisfies the requirements of [10, Th. 3.1]. Now, from the definition of the vector-sensor steering vector [see (8)], and noting that the reflection coefficient is a function of the elevation angle but not of the azimuth, we have that

(51)

(52)

where the derivative of the steering vector in the second expression is taken with respect to the elevation angle. From these equations, [10, Th. 3.1] gives the CRB as

CRB (53)

The MSAE (28) then follows from (15), and (29)–(32) follow from (8), (51), and (52).


APPENDIX B

In this Appendix, we determine the performance of the fast, wideband azimuthal estimator. Consider the model for measurements made by a single vector sensor in free space given by (2). In [6], the following bearing estimator was proposed for use with a single AVS:

$$
\hat{\mathbf{u}} \;=\; \frac{\operatorname{Re}\bigl\{\sum_{t=1}^{N} y_p^*(t)\,\mathbf{y}_v(t)\bigr\}}{\bigl\|\operatorname{Re}\bigl\{\sum_{t=1}^{N} y_p^*(t)\,\mathbf{y}_v(t)\bigr\}\bigr\|} \tag{54}
$$

where $y_p(t)$ and $\mathbf{y}_v(t)$ are the pressure and velocity measurements defined in (2). In [6, App. B], an expression is presented for the MSAE of this estimator that holds under the statistical assumptions of this paper, including the Gaussianity assumption (see [6, eq. (B.5)]). This expression may also be written as

MSAE tr (55)

where the two SNRs are those at the pressure and velocity sensors, respectively, and the unit-length vector is the direction being estimated. It is clear that the analysis of [6, App. B] holds regardless of the dimension of this unit vector (and of the velocity measurement). Furthermore, the MSAE depends only on the SNRs and not on the absolute values of the signal and noise powers. Consequently, the expression is applicable to the current azimuthal estimator. Making the following substitutions in (55)

(56)

(57)

(58)

gives the result (20).
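For illustration, a minimal sketch of an intensity-based azimuth estimate of the type analyzed in this Appendix is given below. The snapshot notation and array shapes are assumptions, and the sketch simply forms the azimuth of the time-averaged horizontal active intensity; it is not a transcription of the estimator whose MSAE is derived above.

```python
import numpy as np

def azimuth_from_intensity(y_p, y_v):
    """Fast wideband azimuth estimate from a single AVS.

    y_p : (N,)   complex pressure snapshots
    y_v : (N, 2) complex horizontal particle-velocity snapshots (x, y)

    Returns the azimuth (radians) of the time-averaged active intensity,
    i.e. atan2 of the averaged Re{y_p^* v_y} and Re{y_p^* v_x}.
    """
    intensity = np.real(np.conj(y_p)[:, None] * np.asarray(y_v)).mean(axis=0)
    return np.arctan2(intensity[1], intensity[0])
```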

REFERENCES

[1] D. Lake. (1998, Jan.) Battlefield acoustic signal processing and target identification. CSI/Stat. Colloq., George Mason Univ., Fairfax, VA. [Online]. Available: http://www.galaxy.gmu.edu/stats/colloquia/colljan3098.html

[2] D. Lake and D. Keenan, “Maximum likelihood estimation of geodesic subspace trajectories using approximate methods and stochastic optimization,” in Proc. 9th IEEE SP Workshop Statistical Signal Array Process., Portland, OR, Sept. 1998, pp. 148–151.

[3] T. Pham and B. Sadler, “Adaptive wideband aeroacoustic array processing,” in Proc. 8th IEEE SP Workshop Statistical Signal Array Process., Corfu, Greece, June 1996, pp. 295–298.

[4] B. G. Ferguson, “Time-delay estimation techniques applied to the acoustic detection of jet aircraft transits,” J. Acoust. Soc. Amer., vol. 106, no. 1, pp. 255–264, July 1999.

[5] B. G. Ferguson and B. G. Quinn, “Application of the short-time Fourier transform and the Wigner–Ville distribution to the acoustic localization of aircraft,” J. Acoust. Soc. Amer., vol. 96, pp. 821–827, 1994.

[6] A. Nehorai and E. Paldi, “Acoustic vector-sensor array processing,” IEEE Trans. Signal Processing, vol. 42, pp. 2481–2491, Sept. 1994.

[7] M. Hawkes and A. Nehorai, “Acoustic vector-sensor beamforming and Capon direction estimation,” IEEE Trans. Signal Processing, vol. 46, pp. 2291–2304, Sept. 1998.

[8] ——, “Effects of sensor placement on acoustic vector-sensor array performance,” IEEE J. Oceanic Eng., vol. 24, pp. 33–40, Jan. 1999.

[9] ——, “Hull-mounted acoustic vector-sensor array processing,” in Proc. 29th Asilomar Conf. Signals, Syst., Comput., Pacific Grove, CA, Oct. 1995, pp. 1046–1050.

[10] ——, “Acoustic vector-sensor processing in the presence of a reflecting boundary,” IEEE Trans. Signal Processing, vol. 48, pp. 2981–2993, Nov. 2000.

[11] G. L. D’Spain and W. S. Hodgkiss, “The simultaneous measurement of infrasonic acoustic particle velocity and acoustic pressure in the ocean by freely drifting swallow floats,” IEEE J. Oceanic Eng., vol. 16, pp. 195–207, Apr. 1991.

[12] J. C. Nickles, G. L. Edmonds, R. A. Harriss, F. H. Fisher, J. Giles, and G. L. D’Spain, “A vertical array of directional acoustic sensors,” in Proc. Mast. Oceans Tech., Newport, RI, Oct. 1992, pp. 340–345.

[13] G. L. D’Spain, W. S. Hodgkiss, and G. L. Edmonds, “Energetics of the deep ocean’s infrasonic sound field,” J. Acoust. Soc. Amer., vol. 89, pp. 1134–1158, Mar. 1991.

[14] G. L. D’Spain, W. S. Hodgkiss, G. L. Edmonds, J. C. Nickles, F. H. Fisher, and R. A. Harriss, “Initial analysis of the data from the vertical DIFAR array,” in Proc. Mast. Oceans Tech., Newport, RI, Oct. 1992, pp. 346–351.

[15] J. L. Butler, S. C. Butler, D. P. Massa, and G. H. Cavanagh, “Metallic glass velocity sensor,” in Acoustic Particle Velocity Sensors: Design, Performance and Applications, M. J. Berliner and J. F. Lindberg, Eds. Woodbury, NY: AIP, 1996, pp. 101–133.

[16] M. A. Josserand and C. Mearfield, “PVF2 velocity hydrophones,” J. Acoust. Soc. Amer., vol. 78, no. 3, pp. 861–867, Mar. 1985.

[17] N. Lagakos and J. A. Bucaro, “Planar fiber optic acoustic velocity sensor,” J. Acoust. Soc. Amer., vol. 97, pp. 1660–1663, 1995.

[18] H.-E. de Bree, P. Leussink, T. Korthorst, H. Jansen, T. Lammerink, and M. Elwenspoek, “The µ-flown: A novel device for measuring acoustic flows,” Sensors Actuators A, vol. 54, pp. 552–557, June 1996.

[19] F. van der Eerden, H.-E. de Bree, and H. Tijdeman, “Experiments with a new particle velocity sensor in an impedance tube,” Sensors Actuators A, vol. 69, pp. 126–133, Aug. 1998.

[20] Microflown Technologies, B.V. [Online]. Available: http://www.microflown.com

[21] B. A. Cray and R. A. Christman, “Acoustic and vibration performance evaluations of a velocity sensing hull array,” in Acoustic Particle Velocity Sensors: Design, Performance and Applications, M. J. Berliner and J. F. Lindberg, Eds. Woodbury, NY: AIP, 1996, pp. 177–188.

[22] R. J. Brind and N. J. Goddard, “Beamforming of a V-shaped array of sea-bed geophone sensors,” J. Acoust. Soc. Amer., pt. 2, vol. 105, no. 2, p. 1106, Feb. 1999.

[23] M. Hawkes and A. Nehorai, “Battlefield target localization using acoustic vector sensors and distributed processing,” in Proc. Meet. IRIS Specialty Group Battlefield Acoust. Seismics (Invited), Laurel, MD, Sept. 1999, pp. 111–128.

[24] ——, “Distributed processing for 3-D localization using acoustic vector sensors on the seabed or battlefield,” in Proc. 8th Annu. Adapt. Sensor Array Process. Workshop, Lexington, MA, Mar. 2000.

[25] A. Nehorai and E. Paldi, “Vector-sensor array processing for electromagnetic source localization,” IEEE Trans. Signal Processing, vol. 42, pp. 376–398, Feb. 1994.

[26] A. Nehorai and M. Hawkes, “Performance bounds for estimating vector systems,” IEEE Trans. Signal Processing, vol. 48, pp. 1737–1749, June 2000.

[27] P. Stoica, A. Nehorai, and T. Söderström, “Decentralized array processing using the MODE algorithm,” Circ., Syst., Signal Process., vol. 14, no. 1, pp. 17–38, 1995.

[28] M. Wax and T. Kailath, “Decentralized processing in sensor arrays,” IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-33, pp. 1123–1129, Oct. 1985.

[29] A. D. Pierce, Acoustics: An Introduction to Its Physical Principles and Applications. New York: McGraw-Hill, 1981.

[30] L. M. Brekhovskikh, Waves in Layered Media, 2nd ed. New York: Academic, 1980.

[31] T. F. W. Embleton, J. E. Piercy, and N. Olsen, “Outdoor sound propagation over ground of finite impedance,” J. Acoust. Soc. Amer., vol. 59, no. 2, pp. 267–277, Feb. 1976.

[32] I. Rudnick, “Propagation of an acoustic wave along a boundary,” J. Acoust. Soc. Amer., vol. 19, no. 1, pp. 348–356, Jan. 1951.

[33] M. Hawkes and A. Nehorai, “Acoustic vector-sensor correlations in ambient noise,” IEEE J. Oceanic Eng., vol. 26, pp. 337–347, July 2001.

[34] B. Hochwald and A. Nehorai, “Identifiability in array processing models with vector-sensor applications,” IEEE Trans. Signal Processing, vol. 44, pp. 83–95, Jan. 1996.

[35] B. Ottersten, M. Viberg, P. Stoica, and A. Nehorai, “Exact and large sample maximum likelihood techniques for parameter estimation and detection in array processing,” in Radar Array Processing, S. Haykin, J. Litva, and T. J. Shepherd, Eds. Berlin, Germany: Springer-Verlag, 1993, pp. 99–151.

[36] R. G. Stansfield, “Statistical theory of DF fixing,” J. IEE, pt. IIIA, vol. 94, no. 15, pp. 762–770, 1947.


[37] S. M. Kay, Fundamentals of Statistical Signal Processing: Estimation Theory. Englewood Cliffs, NJ: Prentice-Hall, 1993.

[38] T. S. Ferguson, A Course in Large Sample Theory. New York: Chapman & Hall, 1996.

[39] Y. Rockah, “Array processing in the presence of uncertainty,” Ph.D. dissertation, Yale Univ., New Haven, CT, 1986.

[40] C. Y. Chong, K. C. Chang, and S. Mori, “Tracking multiple air targets with distributed acoustic sensors,” in Proc. Amer. Contr. Conf., Minneapolis, MN, June 1997, pp. 1831–1836.

Malcolm Hawkes (S’95–M’00) was born in Stockton-on-Tees, U.K., in 1970 and grew up in Swansea, U.K. He received the B.A. degree in electrical and information science from the University of Cambridge, Cambridge, U.K., in 1992, the M.Sc. degree in applied statistics from the University of Oxford, Oxford, U.K., in 1993, and the Ph.D. degree in electrical engineering from Yale University, New Haven, CT, in 2000.

In 1988, he won a scholarship from GEC-Marconi Research Centre, Chelmsford, U.K., and worked there from 1988 to 1989, and again in 1990 and 1991, on a variety of projects. He won scholarships at Emmanuel College, Cambridge, from 1990 to 1992. He was awarded a U.K. Medical Research Council grant to pursue a Master’s program in 1992 and a Yale University Fellowship in 1993. From 1996 to 2000, he was a Visiting Scholar at the University of Illinois, Chicago, where he received the John and Grace Nuveen International Scholar Award in 1998. He is now a Research Associate with Ronin Capital, LLC, Chicago. His research interests include statistical signal processing and time series analysis with applications in finance, array processing, and biomedicine.

Arye Nehorai (S’80–M’83–SM’90–F’94) received the B.Sc. and M.Sc. degrees in electrical engineering from the Technion—Israel Institute of Technology, Haifa, Israel, in 1976 and 1979, respectively, and the Ph.D. degree in electrical engineering from Stanford University, Stanford, CA, in 1983.

After graduation, he worked as a Research Engineer for Systems Control Technology, Inc., Palo Alto, CA. From 1985 to 1995, he was with the Department of Electrical Engineering, Yale University, New Haven, CT, where he became an Associate Professor in 1989. In 1995, he joined the Department of Electrical Engineering and Computer Science, University of Illinois at Chicago (UIC), as a Full Professor. From 2000 to 2001, he was Chair of the department’s Electrical and Computer Engineering (ECE) Division, which is now a full department. In 2001, he was named a University Scholar by the University of Illinois. He holds a joint professorship with the ECE and Bioengineering Departments at UIC. His research interests are in signal processing, communications, and biomedicine.

Dr. Nehorai is Vice President—Publications of the IEEE Signal Processing Society and was Editor-in-Chief of the IEEE TRANSACTIONS ON SIGNAL PROCESSING from January 2000 to December 2002. He is currently a member of the Editorial Boards of Signal Processing, the IEEE SIGNAL PROCESSING MAGAZINE, and The Journal of the Franklin Institute. He has previously been an Associate Editor of the IEEE TRANSACTIONS ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, the IEEE SIGNAL PROCESSING LETTERS, the IEEE TRANSACTIONS ON ANTENNAS AND PROPAGATION, the IEEE JOURNAL OF OCEANIC ENGINEERING, and Circuits, Systems, and Signal Processing. He served as Chairman of the Connecticut IEEE Signal Processing Chapter from 1986 to 1995 and was a Founding Member, Vice-Chair, and later Chair of the IEEE Signal Processing Society’s Technical Committee on Sensor Array and Multichannel (SAM) Processing from 1998 to 2002. He was the co-General Chair of the First and Second IEEE SAM Signal Processing Workshops, held in 2000 and 2002. He was co-recipient, with P. Stoica, of the 1989 IEEE Signal Processing Society’s Senior Award for Best Paper. He received the Faculty Research Award from the UIC College of Engineering in 1999 and was Advisor for the UIC Outstanding Ph.D. Thesis Award that went to Aleksander Dogandzic in 2001. This year, he was elected Distinguished Lecturer of the IEEE Signal Processing Society for the years 2004 and 2005. He has been a Fellow of the Royal Statistical Society since 1996.