
862 J. Opt. Soc. Am. A/Vol. 18, No. 4 /April 2001 Mugnier et al.

Myopic deconvolution from wave-front sensing

Laurent M. Mugnier, Clélia Robert, Jean-Marc Conan, Vincent Michau, and Sélim Salem

Optics Department, Office National d'Études et de Recherches Aérospatiales, B.P. 72, F-92322 Châtillon cedex, France

Received November 30, 1999; revised manuscript received August 1, 2000; accepted October 19, 2000

Deconvolution from wave-front sensing is a powerful and low-cost high-resolution imaging technique designed to compensate for the image degradation due to atmospheric turbulence. It is based on a simultaneous recording of short-exposure images and wave-front sensor (WFS) data. Conventional data processing consists of a sequential estimation of the wave fronts given the WFS data and then of the object given the reconstructed wave fronts and the images. However, the object estimation does not take into account the wave-front reconstruction errors. A joint estimation of the object and the respective wave fronts has therefore been proposed to overcome this limitation. The aim of our study is to derive and validate a robust joint estimation approach, called myopic deconvolution from wave-front sensing. Our estimator uses all data simultaneously in a coherent Bayesian framework. It takes into account the noise in the images and in the WFS measurements and the available a priori information on the object to be restored as well as on the wave fronts. Regarding the a priori information on the object, an edge-preserving prior is implemented and validated. This method is validated on simulations and on experimental astronomical data. © 2001 Optical Society of America

OCIS codes: 010.1330, 010.7350, 100.1830, 100.3020, 100.3190, 110.6770.

1. INTRODUCTION

The performance of high-resolution imaging with large optical instruments is severely limited by atmospheric turbulence. Deconvolution from wave-front sensing (DWFS) is a powerful high-resolution imaging technique designed to compensate for the image degradation that is due to atmospheric turbulence. It can be an attractive, lightweight, and low-cost alternative to adaptive optics. It is based on a simultaneous recording of monochromatic short-exposure images and associated wave-front sensor (WFS) data. This technique was originally proposed by Fontanella in 1985 [1]. Conventional data processing consists of a sequential estimation of the wave fronts given the WFS data and then of the object given the reconstructed wave fronts and the images. This estimator was tested by Primot et al. on laboratory data [2,3]. The first astronomical results were obtained in the early 1990s [4,5].

Primot [3] also gave an analytical expression for the signal-to-noise ratio (SNR) of the technique. DWFS should be more efficient than speckle interferometry for two reasons: DWFS is not limited by speckle noise at high flux [6], and the SNR is better for extended objects, provided that a bright source is available for the WFS. This second point was confirmed in recent publications [7,8]. The estimator proposed by Primot was, however, shown to be biased [9], since it does not take into account the wave-front reconstruction errors in the object estimation. An unbiased estimator can be obtained with additional calibration data recorded on an unresolved star [10], but its high-flux performance is limited by speckle noise [11]. An alternative approach is to perform a joint estimation of the object and the wave fronts, accounting for the noise both in the images and in the WFS data [11-14].

The aim of this paper is to propose a robust joint estimator derived in a Bayesian framework that takes advantage of all the available statistical information: the noise statistics in the images and in the WFS, as well as the available a priori knowledge about the object structure and about the turbulent-phase statistics. This novel method, recently presented in Ref. 15, is called myopic deconvolution from wave-front sensing (MDWFS).

The idea of using WFS data and images jointly can be found in the pioneering work of Schulz [12]. Our contribution is twofold: First, we model our WFS data as slopes deduced from wide-band images, not as narrow-band images. This is mandatory for the processing of low-flux experimental data; indeed, in DWFS all the photons not used for speckle imaging (which is necessarily narrow band) can and should be used for wave-front sensing. Second, we incorporate regularization into the data processing, for both the phase and the object. More specifically, our approach is a joint (object and phase) maximum a posteriori (MAP) estimation, which is more robust to noise than an unregularized maximum-likelihood estimation. The prior on the turbulent phase is derived from Kolmogorov statistics. The prior used to regularize the estimation of the object can be taken as Gaussian, in which case the Bayesian interpretation of regularization provides a means to avoid any manual tuning of a regularization parameter. As an alternative, an edge-preserving object prior is also implemented and validated; it is particularly suitable for restoring asteroids or man-made objects such as satellites.

We use the term myopic deconvolution instead of blind deconvolution to underline the fact that during the deconvolution the phase is neither completely unknown (WFS measurements are used) nor assumed to be perfectly known through the WFS measurements.

The outline is as follows. The imaging models for the images and for the WFS are presented in Section 2. In Section 3 we recall the conventional framework of DWFS, which consists of a sequential estimation of (a) the wave fronts, given the WFS data, and (b) the object, given the wave-front estimates and the images; we also introduce the various terms of the criteria that can be minimized in this sequential estimation for optimal wave-front estimation and for edge-preserving object restoration. Then in Section 4 we introduce our myopic-deconvolution approach, which replaces the two previous minimizations (wave-front estimation and object estimation) with a single joint minimization. This scheme is validated by simulations in Section 5 for the case of satellite imaging, and some experimental results are given in Section 6.

2. IMAGING MODEL AND PROBLEM STATEMENT

A linear shift-invariant model is assumed for all M recorded short-exposure images i_t of the observed object o:

i_t = o ⋆ h_t + n_t,   1 ≤ t ≤ M,   (1)

where ⋆ denotes the convolution operator, h_t is the instantaneous point-spread function (PSF), at time t, of the system consisting of the atmosphere, the telescope, and the detector, and n_t is an additive noise (often predominantly photon noise). The shift-invariance assumption is valid if the field of view is within the isoplanatic patch. If the field of view is larger, then the phase variations in the field should be modeled, taking into account possible wave-front measurements in different directions as well as a priori information on the wave-front spatial variations [16,17].
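As an illustration, the model of Eq. (1) can be simulated in a few lines. The function name `simulate_frame` and the toy object below are ours, not the paper's; photon noise plays the role of n_t:

```python
import numpy as np

def simulate_frame(obj, psf, rng):
    """One short-exposure image i_t = o * h_t + n_t (circular convolution, photon noise)."""
    blurred = np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(psf)))
    blurred = np.clip(blurred, 0.0, None)        # intensities are non-negative
    return rng.poisson(blurred).astype(float)    # n_t: photon (Poisson) noise

rng = np.random.default_rng(0)
obj = np.zeros((64, 64)); obj[24:40, 24:40] = 100.0   # toy extended object (photons/pixel)
psf = np.zeros((64, 64)); psf[0, 0] = 1.0             # trivial PSF for a quick check
frame = simulate_frame(obj, psf, rng)
```

With the trivial PSF the blurred image equals the object, so the frame differs from it only by photon noise.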

The integration time of each image is considered to be short enough (typically 10 ms or less) to "freeze" the turbulence. Assuming that scintillation is negligible (near-field approximation), the PSF at time t is completely characterized by the turbulent phase φ_t in the pupil of the instrument:

h_t = |FT^-1[P exp(jφ_t)]|^2,   (2)

where P is the pupil function (1 inside, 0 outside) and FT denotes the Fourier transform. Owing to this relationship, which will be used throughout the paper, knowledge of the wave front φ_t is equivalent to knowledge of the PSF h_t. The phase is decomposed on the basis of Zernike polynomials [18] {Z_k}:

φ_t(r) = Σ_k φ_tk Z_k(r),   (3)

so the unknown phase at time t is the vector φ_t = (φ_t1, ..., φ_tk, ...)^T, where ^T denotes transposition. In practice the summation on k will be limited to some high number K (typically one hundred to a few hundred) chosen to keep the computational burden reasonable. The turbulent phase in the pupil is considered zero-mean Gaussian with Kolmogorov statistics; its covariance matrix C_φ is thus known and given by Noll's formula [18].
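The chain from Gaussian phase coefficients, Eq. (3), to a short-exposure PSF, Eq. (2), can be sketched as follows. This is a schematic stand-in, not the paper's simulation: low-order Fourier modes replace the Zernike polynomials Z_k, and a simple power-law diagonal replaces Noll's covariance matrix C_φ, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
n, K = 128, 20
y, x = np.mgrid[-n//2:n//2, -n//2:n//2]
pupil = ((x**2 + y**2) <= (n//4)**2).astype(float)       # circular pupil, 2x padded grid

# Surrogate modal basis and Kolmogorov-like coefficient statistics (schematic):
modes = [np.cos(2*np.pi*((k % 4 + 1)*x + (k // 4 + 1)*y)/n) for k in range(K)]
sigma = np.arange(1, K + 1) ** (-11.0/6.0)               # std dev per mode (power law)
coeffs = sigma * rng.standard_normal(K)                  # phi ~ N(0, C_phi)
phase = sum(c*m for c, m in zip(coeffs, modes))          # Eq. (3), surrogate basis

field = pupil * np.exp(1j * phase)                       # Eq. (2)
psf = np.abs(np.fft.ifft2(field)) ** 2
psf /= psf.sum()                                         # normalize to unit total energy
```

The resulting `psf` is a non-negative, unit-energy short-exposure speckle pattern.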

In the following sections we shall consider that the object and the image are sampled on a regular grid, hence the vector formulation of Eq. (1):

i_t = o ⋆ h_t + n_t = H_t o + n_t,   (4)

where o, i_t, and n_t are the vectors corresponding to the lexicographically ordered object, image, and zero-mean noise, respectively. H_t is the Toeplitz matrix corresponding to the convolution by the PSF h_t.

The WFS is assumed to be working in a linear domain, which is very reasonable for the Shack–Hartmann (SH) WFS considered in this paper. The WFS data recorded at time t can thus be written as

s_t = D φ_t + n'_t,   (5)

where s_t is the vector concatenating the WFS measurements (wave-front slopes in the case of a SH sensor), n'_t is a zero-mean Gaussian noise (of covariance C_n'), φ_t is the turbulent phase vector, and D is the so-called interaction matrix, which depends on the chosen WFS and on its geometrical configuration as well as on the basis chosen for φ. For example, for a SH WFS and a phase decomposed on the Zernike polynomials, the columns of D consist of the collection of the responses of the WFS to each Zernike polynomial, which are close to the spatial derivatives of these polynomials [3].

The problem at hand is to estimate the observed object o given a set of images i_t and a synchronously recorded set of wave-front measurements s_t (1 ≤ t ≤ M).

3. SEQUENTIAL ESTIMATION OF WAVE FRONTS AND OBJECT

The conventional processing of DWFS data consists of a sequential estimation of the wave fronts, given the WFS data, and then of the object, given the reconstructed wave fronts (and thus the PSFs) and the images. Both the wave-front reconstruction and the image restoration are ill-posed inverse problems and must be regularized (see Refs. 19 and 20 for reviews on regularization), in the sense that some a priori information must be introduced into their resolution in order for the solution to be unique and robust to noise.

A. Wave-Front Reconstruction

Let us first tackle the wave-front reconstruction problem. A common solution for the estimation of a wave front φ, given slope measurements s made by a SH WFS, is the least-squares one:

φ̂ = (D^T D)^-1 D^T s.   (6)

This solution can be interpreted as a maximum-likelihood solution if the noise on the WFS is stationary white Gaussian. The matrix D^T D is often ill-conditioned or even not invertible, because the number of measurements is finite (twice the number of subapertures, N_sub), whereas the dimension of the true phase vector φ is theoretically infinite (in practice a high number K, possibly greater than 2 N_sub). The usual remedy is to reduce the dimension K of the vector space of the unknown φ. This kind of regularization is not very satisfactory, because the choice of the dimension is difficult and somewhat ad hoc: The best choice depends in particular on the SNR of the WFS. A more rigorous approach is to turn to a probabilistic solution that makes use of prior information on the turbulent phase. Such a solution is, for instance, the linear minimum-variance estimator proposed by Wallner [21] (similar to the Wiener filter familiar in image restoration) and first applied to DWFS by Welsh and VonNiederhausern [22]. Another such solution is given by a MAP approach, i.e., by searching for the most likely phase given the measurements and our prior information. When, as in this paper, the noise and the turbulent phase are considered Gaussian, these two approaches are equivalent [23]. The maximization of the posterior probability of the phase is equivalent to the minimization of the neg-log-posterior probability of φ, called J_φ^MAP in the following. With use of Bayes' rule, it is straightforward to show that J_φ^MAP takes the following form:

J_φ^MAP = J_s + J_φ,   (7)

where

J_s = (1/2) (s − Dφ)^T C_n'^-1 (s − Dφ),   (8)

J_φ = (1/2) φ^T C_φ^-1 φ.   (9)

J_s is a term of fidelity to the WFS data and is the opposite of their log-likelihood, and J_φ is a term of fidelity to the prior. The solution is analytical and can be written in matrix form [24] as follows:

φ̂_MAP = (D^T C_n'^-1 D + C_φ^-1)^-1 D^T C_n'^-1 s.   (10)

Note that there is an equivalent expression for this solution that involves the inversion of a smaller matrix when the phase φ is expanded on more modes than there are WFS measurements:

φ̂_MAP = C_φ D^T (D C_φ D^T + C_n')^-1 s.   (11)

This phase estimate is more accurate than the least-squares one of Eq. (6) and in addition does not require an ad hoc tuning of the dimension K of the unknown phase. To the best of our knowledge, the phase estimate has so far always been used as the true phase [for computing the PSFs by use of Eq. (2)] for the image restoration [8-10]. In contrast, the phase estimate of Eq. (10) will be used as a starting point for our myopic method, and the corresponding criterion of Eq. (7) will be part of the criterion derived in the myopic method. We delay the presentation of this myopic method to the following section, and for the time being we consider, as is classically done, that the wave fronts estimated by the MAP approach above are the true ones.
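A small numerical check of Eqs. (10) and (11). The diagonal matrix below is a schematic stand-in for the Kolmogorov covariance C_φ (Noll's actual matrix is not reproduced here), and D is a random stand-in for the interaction matrix; the two expressions agree, with Eq. (11) inverting the smaller matrix when K exceeds the number of measurements:

```python
import numpy as np

rng = np.random.default_rng(3)
n_meas, K = 20, 30                    # more phase modes than WFS measurements
D = rng.standard_normal((n_meas, K))                      # stand-in interaction matrix
C_phi = np.diag(np.arange(1, K + 1) ** (-11.0 / 3.0))     # Kolmogorov-like prior (schematic)
C_n = 0.01 * np.eye(n_meas)                               # WFS noise covariance C_n'
s = rng.standard_normal(n_meas)                           # slope measurements

Ci_n = np.linalg.inv(C_n)
Ci_phi = np.linalg.inv(C_phi)
f10 = np.linalg.solve(D.T @ Ci_n @ D + Ci_phi, D.T @ Ci_n @ s)   # Eq. (10): K x K system
f11 = C_phi @ D.T @ np.linalg.solve(D @ C_phi @ D.T + C_n, s)    # Eq. (11): n_meas x n_meas system
```

The equality of the two forms is the standard matrix-inversion-lemma identity; high-order modes, which carry little prior variance, are strongly shrunk toward zero.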

B. Multiframe Image Restoration

Let us now turn to the image restoration problem, in the classical setting where the PSFs are known. Most deconvolution techniques boil down to the minimization (or maximization) of a criterion. The first issue is the definition of a suitable criterion for the given inverse problem. The second issue is then to find the position of the criterion's global minimum, which is defined as the solution. In some rare cases (when the criterion to be minimized is quadratic) the solution is given by an analytical expression (e.g., Wiener filtering), but most of the time one must resort to an iterative numerical method to solve the problem. For our applications we use a conjugate-gradient method.

Similarly to wave-front reconstruction, a common approach for the image restoration is to use a (multiframe) least-squares criterion. The solution, i.e., the minimum of this criterion, is analytical and is the multiframe inverse filter proposed in DWFS by Primot et al. [3]. This approach has a maximum-likelihood interpretation when the noise can be assumed to be stationary white Gaussian. It leads to unacceptable noise amplification for high noise levels and must therefore be regularized.

A natural way to regularize the inversion is functional regularization. It consists in defining the solution as the minimum of a compound criterion with two terms, say, J_i + λ J_o. The term J_i enforces fidelity to the data (it can be, e.g., a least squares), while the term J_o expresses fidelity to some prior information about the solution. Two choices must be made: One is the regularization functional, i.e., the expression of J_o, and the other is the regularization parameter λ, a scalar that adjusts the trade-off between the two terms.

The choice of the regularization parameter can be made automatically during the restoration process in a constrained least-squares formulation of the regularization [25]. Yet the regularization functional remains to be chosen by the user in a somewhat arbitrary fashion.

An alternative is to use the Bayesian interpretation of regularization, which can provide "natural" regularization functionals. This is the approach taken in this paper: The object is endowed with an a priori distribution p(o), and Bayes' rule combines the likelihood of the M images p({i_t}_{t=1}^M | o, {φ_t}_{t=1}^M) with this a priori distribution into the a posteriori probability distribution p(o | {i_t}_{t=1}^M, {φ_t}_{t=1}^M):

p(o | {i_t}_{t=1}^M, {φ_t}_{t=1}^M) ∝ p({i_t}_{t=1}^M | o, {φ_t}_{t=1}^M) × p(o).   (12)

We shall make two assumptions in the following: first, that the noise is independent between images, and second, that the time between two successive data recordings is greater than the typical evolution time of turbulence. With these two reasonable assumptions, the likelihood of the set of images can be rewritten as the product of the likelihoods of the individual images, each conditioned only by the object and by the true phase at the same time. The restored object is defined as the most probable one given the data, i.e., the one that minimizes

J_o^MAP(o) = Σ_{t=1}^M J_i(o; φ_t, i_t) + J_o(o),   (13)

where

J_i(o; φ_t, i_t) = −ln p(i_t | o, φ_t),   (14)

J_o(o) = −ln p(o).   (15)

J_i is the neg-log-likelihood of image number t, and J_o is the neg-log-prior probability of the object. The minimization of this criterion is performed on o only, and the φ_t's are a reminder of the dependency of the criterion on the phases, assumed here to be known. The forms taken by the criteria J_i and J_o depend on the statistical assumptions made on the noise and on the object, respectively, and are discussed next.

1. Noise Statistics

If the noise is zero-mean Gaussian with covariance matrix C_n, then J_i is quadratic for any image i:

J_i(o; φ, i) = (1/2) (h ⋆ o − i)^T C_n^-1 (h ⋆ o − i),   (16)

which depends on φ through h [see Eq. (2)]. In particular, if the noise is, in addition, stationary and white, of variance σ_n^2, then Eq. (16) reduces to the familiar least squares:

J_i(o; φ, i) = (1/(2 σ_n^2)) ‖h ⋆ o − i‖^2 = (1/(2 σ_n^2)) Σ_{l,m} |[h ⋆ o](l,m) − i(l,m)|^2.   (17)

In astronomical imaging the noise is often predominantly photon noise, which follows Poisson statistics. One possibility is then to derive the true MAP criterion for photon-noise statistics, which is the neg-log-likelihood of the Poisson law. In this paper J_i is taken, for simplicity, as the least-squares term of Eq. (17) with a uniform noise variance equal to the mean number of photons per pixel. This can be considered a first approximation of the photon noise in the case of a rather bright and extended object.
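A minimal sketch of this simplified data-fidelity term, Eq. (17), with σ_n^2 set to the mean number of photons per pixel; the function name and toy sizes are ours:

```python
import numpy as np

def J_i(obj, psf, image):
    """Least-squares data fidelity of Eq. (17) with a uniform photon-noise proxy."""
    model = np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(psf)))  # h * o
    var = image.mean()                  # sigma_n^2 ~ mean number of photons per pixel
    return 0.5 * np.sum((model - image) ** 2) / var

obj = np.full((32, 32), 50.0)               # flat 50-photon object
psf = np.zeros((32, 32)); psf[0, 0] = 1.0   # identity PSF: model equals object
```

With the identity PSF the model reproduces the image exactly, so the misfit vanishes up to floating-point error; any mismatch makes it strictly positive.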

2. Object Prior

The choice of a Gaussian prior probability distribution for the object can be justified from an information-theory standpoint as being the least informative, given the first two moments of the distribution. In this case a reasonable model of the object's power spectral density (PSD) can be found [26] and used to derive the criterion J_o. This is more satisfactory than an ad hoc regularization functional such as the identity or the traditional Laplacian and gives better object estimates [26]. In addition, because J_o is derived from a probability distribution, there is no regularization parameter (scaling factor between the J_i's and J_o) to be adjusted. Finally, the solution is analytical and is a multiframe Wiener filter [27].

The disadvantage of a Gaussian prior, especially for objects with sharp edges such as asteroids or artificial satellites, is that it tends to oversmooth edges. A possible remedy is to use an edge-preserving prior such as an L2-L1 criterion, quadratic for small gradients and linear for large ones [28]. The quadratic part ensures a good smoothing of the small gradients (i.e., of noise), and the linear behavior cancels the penalization of large gradients (i.e., of edges) [29]. Here we use a function that is an isotropic version of the expression suggested by Rey [30] in the context of robust estimation, used by Brette and Idier [31] for image restoration, and recently applied to imaging through turbulence [32]:

J_o(o) = μ δ^2 Σ_{l,m} [ ∇o(l,m)/δ − ln(1 + ∇o(l,m)/δ) ],   (18)

where ∇o(l,m) = [∇_x o(l,m)^2 + ∇_y o(l,m)^2]^(1/2), and ∇_x o and ∇_y o are the object finite-difference gradients along x and y, respectively.

The global factor μ and the threshold δ have to be adjusted according to the noise level and the structure of the object. This is currently done by hand, but an automatic procedure is under study. This function is convex, as is the global criterion, which ensures uniqueness and stability of the solution with respect to noise and also justifies the use of a gradient-based method for the minimization. Note that if the object is a stellar field, then a stronger prior can be used, namely, the fact that the unknown object is a collection of Dirac delta functions [33].
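The prior of Eq. (18) is straightforward to evaluate; the finite-difference scheme and toy values below are illustrative choices of ours, with μ and δ as the hand-tuned hyperparameters mentioned above:

```python
import numpy as np

def J_o(obj, mu, delta):
    """L2-L1 edge-preserving prior of Eq. (18)."""
    gx = np.diff(obj, axis=1, append=obj[:, -1:])   # finite differences along x
    gy = np.diff(obj, axis=0, append=obj[-1:, :])   # finite differences along y
    g = np.sqrt(gx ** 2 + gy ** 2) / delta          # isotropic gradient modulus / delta
    return mu * delta ** 2 * np.sum(g - np.log1p(g))

flat = np.full((16, 16), 10.0)                      # constant object: zero gradients
step = flat.copy(); step[:, 8:] += 5.0              # object with one sharp edge
```

For small gradients, g − ln(1 + g) ≈ g²/2, giving quadratic smoothing of noise; for large gradients it grows only linearly, so edges are not over-penalized.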

4. JOINT ESTIMATION OF WAVE FRONTS AND OBJECT

A. Motivation

The conventional processing of DWFS data consists of a sequential estimation of the wave fronts given the WFS data and then of the object given the images and the reconstructed wave fronts, which are regarded as the true ones. So, obviously, the information about the wave fronts is gathered only from the WFS data, not from the images.

Yet the wave-front estimates are inevitably noisy, so it would be useful to exploit both the WFS data and the images in the wave-front estimation. And there is some exploitable information about the wave front (or the PSF) in a short-exposure turbulence-degraded image. To prove this point, it is enough to recall that multiframe blind deconvolution of such images (blind meaning without a WFS) is feasible, at least at high SNRs; indeed, the PSF reparameterization of Eq. (2) by the phase, which was proposed by Schulz [34], is a strong constraint that allows this technique to work in practice, even on experimental data [34,35].

A drawback of blind criteria is that they usually exhibit local minima, and the PSF reparameterization by the phase is probably not a strong enough constraint to give the blind-deconvolution problem a unique solution. So the WFS data should definitely not be thrown away but rather used in conjunction with the images.

Our aim is thus to estimate the object and the PSFs while taking into account all the measurements (WFS data and images) simultaneously in a coherent framework.

B. Myopic-Deconvolution Approach

The myopic-deconvolution approach consists in a joint estimation of the object o and the turbulent phases φ_t that are the most likely, given the images i_t, the WFS data s_t, and the a priori information on o and the φ_t's. If we use Bayes' rule in a way similar to its use in Subsection 3.B and the independence assumptions therein, the joint posterior probability distribution of the object and the phases is


Table 1. Block Diagram of the Algorithm Used for Myopic Deconvolution from Wave-Front Sensing

Step 1 (initialization):
  φ_t^0 = φ̂_t^MAP = (D^T C_n'^-1 D + C_φ^-1)^-1 D^T C_n'^-1 s_t, for all t
    [implementation: matrix multiplication];
  o^0 = arg min_o { (1/(2σ_n^2)) Σ_t ‖h(φ_t^0) ⋆ o − i_t‖^2 + J_o(o) }
    [implementation: one conjugate-gradient minimization].

Step 2 (qth phase iteration):
  φ_t^q = arg min_φ { (1/(2σ_n^2)) ‖h(φ) ⋆ o^(q−1) − i_t‖^2 + (1/2)(s_t − Dφ)^T C_n'^-1 (s_t − Dφ) + (1/2) φ^T C_φ^-1 φ }
    [implementation: M partial conjugate-gradient minimizations].

Step 3 (qth object iteration):
  o^q = arg min_o { (1/(2σ_n^2)) Σ_t ‖h(φ_t^q) ⋆ o − i_t‖^2 + J_o(o) }
    [implementation: one partial conjugate-gradient minimization].

Step 4 (stopping rule):
  if ‖o^q − o^(q−1)‖ > 10^-7 ‖o^q‖, or if there exists t such that ‖φ_t^q − φ_t^(q−1)‖ > 10^-7 ‖φ_t^q‖, go to Step 2; otherwise stop.

p(o, {φ_t}_{t=1}^M | {i_t}_{t=1}^M, {s_t}_{t=1}^M)
    ∝ p({i_t}_{t=1}^M, {s_t}_{t=1}^M | o, {φ_t}_{t=1}^M) × p(o) × p({φ_t}_{t=1}^M)
    ∝ Π_{t=1}^M p(i_t | o, φ_t) × p(o) × Π_{t=1}^M p(s_t | φ_t) × Π_{t=1}^M p(φ_t).   (19)

So the joint MAP estimates (ô, {φ̂_t}) are the ones that minimize

J_MAP(o, {φ_t}) = Σ_{t=1}^M J_i(o, φ_t; i_t) + J_o(o) + Σ_{t=1}^M J_s(φ_t; s_t) + Σ_{t=1}^M J_φ(φ_t),   (20)

where

• The J_i's are terms of fidelity to the image data; J_i(o, φ_t; i_t) is the neg-log-likelihood of the tth image; for a Gaussian noise, J_i(o, φ_t; i_t) = (1/2)(h_t ⋆ o − i_t)^T C_n^-1 (h_t ⋆ o − i_t), which is the same as Eq. (16) except that it is now a function of both the object and the phases;

• J_o(o) is the object prior, which can be taken either as quadratic (with a simple parametric model for the object PSD [26]) or as an edge-preserving criterion [see Eq. (18)];

• The J_s's are terms of fidelity to the WFS data; under our assumptions they are quadratic and given by Eq. (8);

• The J_φ's are the phase priors; for Kolmogorov statistics they, too, are quadratic and given by Eq. (9).

C. Implementation of the Myopic Deconvolution

The criterion of Eq. (20) is minimized numerically to obtain the joint MAP estimate of the object o and the phases φ_t. The minimization is performed on the unknown object o and on the unknown phases φ_t by a fast conjugate-gradient method [36]. Because the criterion is continuously differentiable, the convergence of the conjugate-gradient method to a stationary point (in practice, to a local minimum) is guaranteed [37]. We have found that the convergence is faster if the descent direction is reinitialized regularly (typically every ten iterations for the sizes of our images and phase vectors); this result can be attributed to the fact that the criterion to be minimized is not quadratic. This modified version of the conjugate gradient is known as the partial conjugate-gradient method and has similar convergence properties (see Sec. 8.5 of Ref. 38).

A practical but severe minimization problem may occur if one simply stacks the object and the phases together into a single vector of unknowns, because the gradients of the criterion with respect to the object and to the phase may have very different orders of magnitude. One solution can be to scale the unknown object; the scaling must be adjusted so that the gradients with respect to the object and to the phase are of comparable magnitudes [39]. We have found that it is safer and faster to split the minimization into two blocks and to alternate between minimizations on the object for the current phase estimates and minimizations on the phases for the current object estimate, so that this scaling problem is avoided. Additionally, as mentioned above, we use a partial conjugate-gradient method; i.e., we perform a fixed number of conjugate-gradient iterations within each block (object or phases) and then switch to the other block.
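The alternating block strategy can be sketched on a toy coupled criterion (ours, far simpler than Eq. (20)), using scipy's conjugate gradient with a capped iteration count as the "partial" conjugate-gradient step:

```python
import numpy as np
from scipy.optimize import minimize

def criterion(o, f):
    """Toy coupled criterion: a misfit term coupling block o to block f, plus a prior on f."""
    return np.sum((o - f.mean()) ** 2) + np.sum((f - 1.0) ** 2)

o, f = np.zeros(10), np.zeros(5)                 # object-like block, phase-like block
for q in range(5):                               # alternate partial minimizations
    f = minimize(lambda f_: criterion(o, f_), f, method="CG",
                 options={"maxiter": 10}).x      # phase block, object held fixed
    o = minimize(lambda o_: criterion(o_, f), o, method="CG",
                 options={"maxiter": 10}).x      # object block, phases held fixed
```

Because each block is minimized with the other held fixed, the disparate gradient scales of the two blocks never mix, which is the point of the splitting described above.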

Finally, a careful inspection of the criterion J_MAP of Eq. (20) allows a significant reduction of the computational burden: First, only the M + 1 first terms of J_MAP depend on the object, so the last 2M terms of the criterion need not be computed when minimizing on the object. Second, when J_MAP is minimized on the phases {φ_t}_{t=1}^M for the current object estimate, the minimization can be decoupled into M minimizations, each performed on one single phase screen φ_t and involving only three terms (J_i + J_s + J_φ).

As all numerical minimizations do, this one needs an initialization point, or initial guess; in order both to speed up the descent and to avoid, in practice, the local minima associated with joint criteria, we use as an initial guess the MAP estimates obtained by the sequential processing described in Section 3. More precisely, the initialization phases are those obtained by a MAP processing of the sole WFS data, i.e., by the minimization of the two right-hand terms of Eq. (20); these phases are given analytically by Eq. (10). The initialization object is the one obtained by a MAP processing of the sole images given the initialization phases, i.e., by the minimization of the two left-hand terms of Eq. (20); this object has an analytical expression when both the object and the noise are assumed to be Gaussian (it is the multiframe Wiener filter applied to the image sequence).

The minimization is not stopped by hand but rather when the estimated object and phases no longer evolve (i.e., when their evolution from one iteration to the next is close to machine precision). The metric used to assess the quality of the object estimate is the standard mean-square error (MSE) between the true object and the estimated one, expressed in photons per pixel. We have verified that the estimated objects are not shifted by more than a fraction of a pixel with respect to the true object, so the MSE's mentioned below are a good measure of the restored object's quality.
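The quoted MSE values (fractions of a photon per pixel) suggest a root-mean-square convention; a minimal sketch under that assumption:

```python
import numpy as np

def mse_photons_per_pixel(o_true, o_est):
    # root-mean-square pixel error, in photons per pixel (assumed convention)
    return np.sqrt(np.mean((o_true - o_est) ** 2))
```

For example, a uniform 0.5-photon error on every pixel yields a metric of 0.5 photon per pixel.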

The conjugate-gradient routine needs to repeatedly compute the criterion and its gradients with respect to the object o and to the phases φ_t. These gradients can be computed analytically; they are given in Appendix A for completeness. The whole algorithm is summarized in the block diagram of Table 1. In the next section we validate the proposed myopic approach on simulated data.

5. VALIDATION BY SIMULATIONS
A set of 100 noisy images and associated wave-front measurements is simulated. The 100 turbulent wave fronts are obtained with a modal method40: The phase is expanded on the Zernike polynomial basis and is given Kolmogorov statistics.18 The strength of the turbulence corresponds to a ratio D/r0 of the telescope diameter to the Fried parameter equal to 10.

Each of these turbulent wave fronts leads to a 128 × 128 Shannon-sampled short-exposure image with use of Eqs. (1) and (2). The image noise is a uniform Gaussian noise with a variance equal to the mean flux, i.e., 10^4/(128 × 128) ≈ 0.61 photon/pixel. It approximates the photon noise for a uniform distribution of a 10^4-photon total flux over the image sensor. Figure 1 shows the original object (a model of the SPOT satellite), one of the 100 turbulent PSF's, and the corresponding noisy image.

Fig. 1. Original object (SPOT satellite, left), one of the 100 turbulent PSF's (D/r0 = 10, center), and corresponding noisy image (flux of 10^4 photons, right).

Fig. 2. Restored object with the conventional method (MAP wave-front estimation followed by a multiframe Wiener filter, left), to be compared with the object restored by myopic deconvolution (right). In both cases the same Gaussian prior is used for the object. The MSE is 0.48 photon (left) and 0.45 photon (right).

Fig. 3. Restored object with the conventional method (left): MAP wave-front estimation followed by a multiframe edge-preserving restoration. Object restored by myopic deconvolution (right). In both cases the same edge-preserving prior is used for the object, with an additional positivity constraint. The MSE is 0.45 photon (left) and 0.39 photon (right).
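The simulated image formation described above (circular convolution by a turbulent PSF, then stationary Gaussian noise approximating photon noise) can be sketched as follows; the object and the PSF here are toy stand-ins for the SPOT model and the turbulent PSF's:

```python
import numpy as np

rng = np.random.default_rng(0)
N, flux = 128, 1e4
o = np.zeros((N, N)); o[48:80, 56:72] = 1.0     # toy extended object
o *= flux / o.sum()                             # total flux of 10^4 photons
h = rng.random((N, N)); h /= h.sum()            # toy PSF, normalized to unit sum
img = np.fft.ifft2(np.fft.fft2(h) * np.fft.fft2(o)).real
sigma2 = flux / N ** 2                          # noise variance = mean flux, ~0.61 photon/pixel
img_noisy = img + rng.normal(0.0, np.sqrt(sigma2), (N, N))
```

Because the PSF is normalized to unit sum, the (circular) convolution preserves the total flux of the object.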

The WFS is of SH type with 20 × 20 subapertures, of which 308 are useful, and without a central obscuration. A white Gaussian noise is added to the computed slopes [see Eq. (5)] so that the SNR on the measured slopes, defined as the ratio of the turbulent slope variance to the noise variance, is 1.
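The slope-noise level of the simulation follows directly from the stated SNR definition (slope variance over noise variance); a sketch with hypothetical slope values:

```python
import numpy as np

rng = np.random.default_rng(0)
slopes = rng.normal(0.0, 2.0, 308)    # hypothetical turbulent slopes, one per useful subaperture
snr = 1.0
noise_var = slopes.var() / snr        # SNR = slope variance / noise variance
noisy_slopes = slopes + rng.normal(0.0, np.sqrt(noise_var), slopes.size)
```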

The Zernike expansion of the phase is limited to 190 Zernike polynomials (radial degree 18), only to keep the computational burden reasonably small; indeed, with the SNR chosen for the WFS, it can be shown that the reconstructed phase modes and the reconstructed noise have the same level for the 55th polynomial (radial degree 9).
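The modal simulation of the turbulent phases amounts to drawing correlated Zernike coefficients; given any modal covariance matrix (here a random symmetric positive-definite stand-in for the Kolmogorov covariance of Ref. 18), realizations follow from a Cholesky factorization:

```python
import numpy as np

rng = np.random.default_rng(0)
nmode = 190                                     # Zernike modes up to radial degree 18
B = rng.standard_normal((nmode, nmode))
C_phi = B @ B.T / nmode + np.eye(nmode)         # stand-in SPD modal covariance
L = np.linalg.cholesky(C_phi)                   # C_phi = L L^T
coeffs = L @ rng.standard_normal((nmode, 100))  # 100 correlated phase realizations
```

Each column of `coeffs` is one draw with covariance C_phi, to be summed against the Zernike polynomials to form a phase screen.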

Figure 2 compares the conventional sequential estimation and our myopic estimation for the same Gaussian object model with the same PSD. On the left, the conventional restoration, consisting of a MAP wave-front estimation followed by a multiframe Wiener filter, gives an MSE of 0.48 photon. On the right, the myopic joint estimation gives an MSE on the object of 0.45 photon. Several details are visibly better restored in the myopic restoration, in particular the two little bright rectangles near the dish and the separations between the solar panels. Figure 3 shows the same comparison between the conventional and the myopic restorations (left and right, respectively), but with the edge-preserving prior of Eq. (18) plus a positivity constraint instead of the Gaussian prior. The MSE is 0.45 photon and 0.39 photon, respectively. A particularly noticeable difference between these two images is the thin shadow on the lower left part of the satellite body, which is visible only in the edge-preserving myopic restoration. In both cases the images restored with our myopic method (right side of Figs. 2 and 3) are less blurred than the ones restored with the sequential scheme (left side of the same figures). And the two restorations obtained with the edge-preserving prior are also crisper and closer to the true object than are their quadratic counterparts. The restorations shown in Fig. 3 were obtained with 210 s (left) and 1450 s (right) of CPU time, respectively, on a Sun Sparc. Processing 100 frames with MDWFS therefore leads to a quite reasonable computation time.

Fig. 4. Comparison of the reconstructed phase errors, as a function of the Zernike mode radial degree, for the conventional (MAP) and the myopic (joint MAP) restoration schemes. The turbulent-phase variance is shown for comparison (dashed line).


It is remarkable that, simultaneously with the better object estimation, the myopic deconvolution provides a better estimation of the phase. We believe that the latter is due to the fact that all the available data, images included, are used for the phase estimation. A more quantitative explanation of this improvement remains to be done and is worth further study. This estimation improvement is shown in Fig. 4; the tip-tilt, the defocus, and the astigmatism, in particular, are estimated notably better than by use of the WFS data alone. One can note that the use of a prior that is well suited to the object (an L2–L1 criterion for an object that is known to have sharp edges) enhances not only the object restoration but also the phase estimation.

6. EXPERIMENTAL RESULTS
We have applied our MDWFS method to experimental images of the binary star Capella taken at the 4.2-m William Herschel Telescope, located at La Palma in the Canary Islands. Ten short-exposure images of size 128 × 128 were taken on November 8, 1990, at 4:00 universal time and processed. The exposure time is 5 ms on both the imaging and the wave-front sensing channels. Note that for fainter objects it has been shown that in speckle imaging it can be useful to increase the exposure time so as to obtain a better SNR in the images,41 even though the speckles get blurred. Preliminary internal studies tend to show that this is not the case for DWFS, because the wave-front measurements degrade quickly when the exposure time is greater than the evolution time of the turbulence.

The experimental conditions were the following: a flux of 67,500 photons per frame and an estimated D/r0 of 13. The imaging wavelength was 0.66 µm, which gives a diffraction-limited resolution λ/D of 32 × 10⁻³ arc sec. The WFS is of SH type with 29 × 29 subapertures, of which 560 are useful because of the telescope's 29% central obscuration. The estimated WFS SNR is 5, which gives the WFS noise level required in J_s. Note that this noise level can be artificially increased to account for noncommon-path aberrations in the optical setup. When this noise level goes to infinity, the algorithm becomes a multiframe regularized blind deconvolution. Figure 5 shows one of the ten processed short exposures (left) and the corresponding computed long exposure obtained by adding the ten short exposures (right); the latter image illustrates the loss in resolution associated with the long exposure.
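The quoted diffraction limit follows directly from λ/D:

```python
import math

lam, D = 0.66e-6, 4.2                   # imaging wavelength (m), telescope diameter (m)
rad_to_arcsec = 180.0 / math.pi * 3600.0
res = lam / D * rad_to_arcsec           # diffraction-limited resolution in arc sec
```

This gives ≈ 0.032 arc sec, i.e., the 32 × 10⁻³ arc sec quoted above.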

The left panel of Fig. 6 shows the result of the conventional estimation: A multiframe Wiener filter was applied to the images, with wave-front estimates obtained by a MAP reconstruction from the slopes. The PSD is taken as a constant, as is natural for a spike-like object, and the wave front is reconstructed on the first 190 Zernike polynomials. The binary nature of the star is not clearly visible. The center panel of Fig. 6 shows an improved conventional estimation, in which we have used the same flat PSD but added a positivity constraint to the restoration. The binary nature of the star begins to be visible, but the binary star is still embedded in strong fluctuations.
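The multiframe Wiener filter used for the conventional restoration has the classical closed form ỗ = Σ_t h̃_t* ĩ_t / (Σ_t |h̃_t|² + σ²/PSD_o); a sketch under assumed conventions (circular convolution, white noise of variance `sigma2`, flat object PSD `psd_obj`):

```python
import numpy as np

def multiframe_wiener(images, psfs, psd_obj, sigma2):
    # o_hat = sum_t conj(H_t) I_t / (sum_t |H_t|^2 + sigma2 / PSD_o), in Fourier space
    num = np.zeros(images[0].shape, dtype=complex)
    den = np.zeros(images[0].shape)
    for im, h in zip(images, psfs):
        H = np.fft.fft2(h)
        num += np.conj(H) * np.fft.fft2(im)
        den += np.abs(H) ** 2
    return np.fft.ifft2(num / (den + sigma2 / psd_obj)).real

# usage: with an identity PSF and weak regularization the filter returns the image
rng = np.random.default_rng(0)
img = rng.random((8, 8))
delta = np.zeros((8, 8)); delta[0, 0] = 1.0
restored = multiframe_wiener([img], [delta], psd_obj=1e12, sigma2=1.0)
```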

The right panel of Fig. 6 shows the result of our MDWFS method: the object and the wave fronts are jointly estimated by minimizing the criterion of Eq. (20); the prior used is the same flat PSD with the positivity constraint, and the starting point of the minimization is the estimate in the center panel. There is a clear gain in image quality with the MDWFS method. In particular, the two components of Capella are clearly resolved, and the estimated angular separation is 57 × 10⁻³ arc sec, in good agreement with the one predicted by orbit data at the date of the experiment (55 × 10⁻³ arc sec).

The PSD used in the regularization is the same for the conventional and for the myopic estimations: It is deduced from the average flux in the images and is not tuned manually; one may consider underestimating the PSD (i.e., performing overregularization) to make the conventional estimation smoother, but no manual tuning of the PSD can make the conventionally restored images both sharp and free from spurious ripples.

Fig. 5. Experimental short-exposure image of Capella taken on November 8, 1990 (left); corresponding long exposure (average of ten short exposures, right).

Fig. 6. Restored object with conventional methods: MAP wave-front estimation followed by a multiframe Wiener filter (left); MAP wave-front estimation followed by a quadratic restoration with a positivity constraint (center), to be compared with the object restored by the myopic deconvolution (right), with use of the same quadratic regularization and the same positivity constraint.

The restoration of Capella without the use of WFS data, by use of the bispectrum, has also been reported,42 but such higher-order methods usually require many more images to produce a result of similar quality (typically 100 images even for very simple objects such as a binary star).

The conventional restoration with the positivity constraint (center panel of Fig. 6) required 40 s of CPU time, and the myopic restoration (right panel of the figure) took 110 s, which remains fairly fast.

7. CONCLUSION
We have presented a novel approach for deconvolution from wave-front sensing, called myopic deconvolution from wave-front sensing (MDWFS). This approach consists in a joint estimation of the object of interest and the unknown turbulent phases; it considers all the data (WFS slopes and images) simultaneously in a coherent Bayesian framework. This method takes into account the noise in the images and in the wave-front sensor measurements, as well as the available prior information on the object to be restored and on the turbulent phases.

This approach has been validated on simulations and has led to a better object estimation than that obtained with the sequential processing of the WFS data and the images. An edge-preserving object prior has been implemented and should be very effective for asteroids or man-made objects such as satellites. An initial experimental astronomical application of MDWFS on Capella has also been presented.

APPENDIX A: DERIVATION OF THE GRADIENTS WITH RESPECT TO OBJECT AND PHASE PARAMETERS
The image deconvolutions shown in this paper, whether conventional or myopic, are performed by means of a conjugate-gradient routine. This routine needs to repeatedly compute the considered criterion and its gradients with respect to the object o (and with respect to the phases φ_t in the case of myopic deconvolution). The gradients of all the terms of the criteria considered here are given below.

1. Gradients of the Image Data Term
The gradient of the least-squares criterion J_i(o, φ_t; i_t) with respect to the unknown object is easily computed in matrix form and can be expressed in Fourier space for implementation by fast Fourier transform:

$$\frac{\partial J_i}{\partial o} = \frac{1}{\sigma_n^2} H_t^T (H_t o - i_t) = \frac{1}{\sigma_n^2}\,\mathrm{FT}^{-1}\!\left[\tilde{h}_t^{*}\,(\tilde{h}_t\,\tilde{o} - \tilde{\imath}_t)\right], \qquad \mathrm{(A1)}$$

where the tilde and FT^{-1}(·) denote Fourier transformation and its inverse, respectively.
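As a sanity check, the Fourier-domain gradient of Eq. (A1) can be compared with a centered finite difference on a toy problem; the object, PSF, and data below are random stand-ins, and the convolution is circular:

```python
import numpy as np

rng = np.random.default_rng(0)
n, sig2 = 16, 1.0
o = rng.random((n, n))
h = rng.random((n, n)); h /= h.sum()          # toy PSF, normalized to unit sum
i_obs = rng.random((n, n))                    # toy image data
H = np.fft.fft2(h)

def J_i(o):
    # least-squares data term (1 / 2 sigma^2) ||h * o - i||^2, circular convolution
    r = np.fft.ifft2(H * np.fft.fft2(o)).real - i_obs
    return 0.5 / sig2 * np.sum(r ** 2)

def grad_J_i(o):
    # Eq. (A1): (1 / sigma^2) FT^-1[ conj(h~) (h~ o~ - i~) ]
    r = np.fft.ifft2(H * np.fft.fft2(o)).real - i_obs
    return np.fft.ifft2(np.conj(H) * np.fft.fft2(r)).real / sig2

g = grad_J_i(o)
eps = 1e-6
e = np.zeros_like(o); e[3, 5] = eps
fd = (J_i(o + e) - J_i(o - e)) / (2 * eps)    # centered finite difference
```

The agreement reflects the fact that the adjoint H_t^T of a circular convolution is the correlation with the same kernel.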

The gradient of the least-squares criterion J_i(o, φ_t; i_t) with respect to the unknown phase φ_t is obtained in three steps. The derivation begins by calculating the gradient of J_i with respect to the PSF h_t:

$$\frac{\partial J_i}{\partial h_t} = \frac{1}{\sigma_n^2}\,\mathrm{FT}^{-1}\!\left[\tilde{o}^{*}\,(\tilde{h}_t\,\tilde{o} - \tilde{\imath}_t)\right]. \qquad \mathrm{(A2)}$$

It then consists in calculating the gradient of the PSF with respect to the phase φ_t(l, m) at pixel (l, m) in the pupil and applying the chain rule. The result, already derived in Ref. 35 in a slightly different form, is

$$\frac{\partial J_i}{\partial \varphi_t(l, m)} = \frac{2}{A}\,\mathrm{Im}\!\left\{ P(l, m) \exp[-j\varphi_t(l, m)] \left[\mathrm{FT}\!\left(\frac{\partial J_i}{\partial h_t}\right) \star P \exp(j\varphi_t)\right](l, m) \right\}, \qquad \mathrm{(A3)}$$

where A is the area of the pupil [numerically, the number of pixels where P(l, m) = 1], Im denotes the imaginary part of a complex number, and ⋆ denotes a convolution operator.

Finally, because we expand the unknown phases on the Zernike basis, we need the gradient of J_i with respect to the Zernike coefficients φ_t^k of phase φ_t; it is easily deduced from the previous expression:

$$\frac{\partial J_i}{\partial \phi_t^k} = \sum_{l,m} \frac{\partial J_i}{\partial \varphi_t(l, m)}\, Z_k(l, m) = \frac{2}{A} \sum_{l,m} Z_k(l, m)\, \mathrm{Im}\!\left\{ P(l, m) \exp[-j\varphi_t(l, m)] \left[\mathrm{FT}\!\left(\frac{\partial J_i}{\partial h_t}\right) \star P \exp(j\varphi_t)\right](l, m) \right\}. \qquad \mathrm{(A4)}$$

For a Poisson model of the image noise, the gradient of the corresponding criterion J_i could be derived similarly.43

2. Gradient of the Object Prior Term
The gradient of the edge-preserving prior of Eq. (18) can be derived and implemented by using the finite-difference operators (∇_x o)(l, m) = o(l, m) − o(l − 1, m) and (∇_y o)(l, m) = o(l, m) − o(l, m − 1). One can show that

$$\frac{\partial J_o}{\partial o} = \mu \left[ \nabla_x^T\!\left( \frac{\nabla_x o}{1 + \sqrt{(\nabla_x o)^2 + (\nabla_y o)^2}/\delta} \right) + \nabla_y^T\!\left( \frac{\nabla_y o}{1 + \sqrt{(\nabla_x o)^2 + (\nabla_y o)^2}/\delta} \right) \right], \qquad \mathrm{(A5)}$$

where the fraction line denotes pointwise division, (∇_x^T o)(l, m) = o(l, m) − o(l + 1, m), and (∇_y^T o)(l, m) = o(l, m) − o(l, m + 1).
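Equation (A5) can be checked numerically. The potential used below is the L2–L1 function μδ² Σ[|∇o|/δ − ln(1 + |∇o|/δ)], whose derivative reproduces Eq. (A5); periodic boundary conditions are an assumption made here for simplicity, so that the transposed operators are exact adjoints:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, delta = 0.1, 0.5
o = rng.random((8, 8))

dx = lambda u: u - np.roll(u, 1, axis=0)    # (grad_x u)(l, m) = u(l, m) - u(l-1, m)
dy = lambda u: u - np.roll(u, 1, axis=1)
dxT = lambda u: u - np.roll(u, -1, axis=0)  # adjoints under periodic boundaries
dyT = lambda u: u - np.roll(u, -1, axis=1)

def J_o(o):
    # L2-L1 edge-preserving potential: mu * delta^2 * sum(t/delta - log(1 + t/delta))
    t = np.sqrt(dx(o) ** 2 + dy(o) ** 2)
    return mu * delta ** 2 * np.sum(t / delta - np.log1p(t / delta))

def grad_J_o(o):
    # Eq. (A5): mu * [dxT(dx(o)/w) + dyT(dy(o)/w)], w = 1 + |grad o| / delta
    t = np.sqrt(dx(o) ** 2 + dy(o) ** 2)
    w = 1.0 + t / delta
    return mu * (dxT(dx(o) / w) + dyT(dy(o) / w))

g = grad_J_o(o)
eps = 1e-6
e = np.zeros_like(o); e[2, 3] = eps
fd = (J_o(o + e) - J_o(o - e)) / (2 * eps)  # centered finite difference
```

For small gradients the penalty behaves quadratically (L2); for large gradients it grows linearly (L1), which is what preserves edges.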

3. Gradients of the Slope Data and of the Phase Prior Terms
The gradient of criterion J_s is straightforward to compute, because this criterion is quadratic. Furthermore, in practice the wave-front noise covariance matrix is taken as C_{n'} = σ_{n'}² I (where I denotes the identity matrix), so that J_s = (1/2σ_{n'}²) ‖s_t − Dφ_t‖², and its gradient reads as

$$\frac{\partial J_s}{\partial \phi_t} = \frac{1}{\sigma_{n'}^2}\, D^T (D\phi_t - s_t). \qquad \mathrm{(A6)}$$

Similarly, the gradient of J_φ = ½ φ_t^T C_φ^{-1} φ_t reads as

$$\frac{\partial J_\phi}{\partial \phi_t} = C_\phi^{-1} \phi_t. \qquad \mathrm{(A7)}$$
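Both quadratic gradients are easy to verify numerically; the interaction matrix D, the slopes s, and the covariance C_φ below are random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(2)
nslope, nmode = 12, 6
D = rng.random((nslope, nmode))               # stand-in for the WFS matrix D
s = rng.random(nslope)                        # stand-in measured slopes
sig2 = 0.3                                    # slope noise variance sigma_n'^2
B = rng.random((nmode, nmode))
C_phi = B @ B.T + nmode * np.eye(nmode)       # stand-in SPD phase covariance
C_inv = np.linalg.inv(C_phi)
phi = rng.random(nmode)

J_s = lambda p: 0.5 / sig2 * np.sum((s - D @ p) ** 2)
J_f = lambda p: 0.5 * p @ C_inv @ p

g_s = D.T @ (D @ phi - s) / sig2              # Eq. (A6)
g_f = C_inv @ phi                             # Eq. (A7)

eps = 1e-6
e = np.zeros(nmode); e[1] = eps
fd_s = (J_s(phi + e) - J_s(phi - e)) / (2 * eps)
fd_f = (J_f(phi + e) - J_f(phi - e)) / (2 * eps)
```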

ACKNOWLEDGMENTS
This work was supported by contracts from SPOTI, Ministère de la Défense, France. The authors thank Thierry Fusco, Gérard Rousset, Jérôme Idier, and Amandine Blanc for fruitful discussions. Many thanks also to Ruy Deron, Joseph Montri, Francis Mendez, Jean Lefèvre, Didier Rabaud, and Christophe Coudrain, who made the experimental setup happen.

Corresponding author Laurent Mugnier can be reachedby e-mail at [email protected].

REFERENCES
1. J.-C. Fontanella, "Analyse de surface d'onde, déconvolution et optique active," J. Mod. Opt. 16, 257–268 (1985).
2. J. Primot, G. Rousset, and J.-C. Fontanella, "Image deconvolution from wavefront sensing: atmospheric turbulence simulation cell results," in Very Large Telescopes and Their Instrumentation, Vol. II, M.-H. Ulrich, ed., ESO Conf. and Workshop Proc. No. 30 (European Southern Observatory, Garching, Germany, 1988), pp. 683–692.
3. J. Primot, G. Rousset, and J.-C. Fontanella, "Deconvolution from wavefront sensing: a new technique for compensating turbulence-degraded images," J. Opt. Soc. Am. A 7, 1598–1608 (1990).
4. J. D. Gonglewski, D. G. Voelz, J. S. Fender, D. C. Dayton, B. K. Spielbusch, and R. E. Pierson, "First astronomical application of postdetection turbulence compensation: images of α Aurigae, ν Ursae Majoris, and α Geminorum using self-referenced speckle holography," Appl. Opt. 29, 4527–4529 (1990).
5. T. Marais, V. Michau, G. Fertin, J. Primot, and J.-C. Fontanella, "Deconvolution from wavefront sensing on a 4 m telescope," in High-Resolution Imaging by Interferometry II, J. M. Beckers and F. Merkle, eds., ESO Conf. and Workshop Proc. No. 39 (European Southern Observatory, Garching, Germany, 1992), pp. 589–597.
6. F. Roddier, "Passive versus active methods in optical interferometry," in High-Resolution Imaging by Interferometry, Part II, F. Merkle, ed., ESO Conf. and Workshop Proc. No. 29 (European Southern Observatory, Garching, Germany, 1988), pp. 565–574.
7. M. C. Roggemann, C. A. Hyde, and B. M. Welsh, "Fourier phase spectrum estimation using deconvolution from wave-front sensing and bispectrum reconstruction," in Adaptive Optics, Vol. 12 of 1996 OSA Technical Digest Series (Optical Society of America, Washington, D.C., 1996), pp. 133–135.
8. D. Dayton, J. Gonglewski, and S. Rogers, "Experimental measurements of estimator bias and the signal-to-noise ratio for deconvolution from wave-front sensing," Appl. Opt. 36, 3895–3902 (1997).
9. M. C. Roggemann and B. M. Welsh, "Signal to noise ratio for astronomical imaging by deconvolution from wave-front sensing," Appl. Opt. 33, 5400–5414 (1994).
10. M. C. Roggemann, B. M. Welsh, and J. Devey, "Biased estimators and object-spectrum estimation in the method of deconvolution from wave-front sensing," Appl. Opt. 33, 5754–5763 (1994).
11. J.-M. Conan, V. Michau, and G. Rousset, "Signal-to-noise ratio and bias of various deconvolution from wavefront sensing estimators," in Image Propagation through the Atmosphere, J. C. Dainty and L. R. Bissonnette, eds., Proc. SPIE 2828, 332–339 (1996).
12. T. J. Schulz, "Estimation-theoretic approach to the deconvolution of atmospherically degraded images with wavefront sensor measurements," in Digital Image Recovery and Synthesis II, P. S. Idell, ed., Proc. SPIE 2029, 311–320 (1993).
13. L. M. Mugnier, J.-M. Conan, V. Michau, and G. Rousset, "Imagerie à travers la turbulence par déconvolution myope multi-trame," in Seizième Colloque sur le Traitement du Signal et des Images, J.-M. Chassery and C. Jutten, eds. (Gretsi, Grenoble, France, 1997), pp. 567–570.
14. D. C. Dayton, S. C. Sandven, and J. D. Gonglewski, "Expectation maximization approach to deconvolution from wavefront sensing," in Image Reconstruction and Restoration II, T. J. Schulz, ed., Proc. SPIE 3170, 16–24 (1997).
15. L. M. Mugnier, C. Robert, J.-M. Conan, V. Michau, and S. Salem, "Regularized multiframe myopic deconvolution from wavefront sensing," in Propagation through the Atmosphere III, M. C. Roggemann and L. R. Bissonnette, eds., Proc. SPIE 3763, 134–144 (1999).
16. T. Fusco, J.-M. Conan, V. Michau, L. M. Mugnier, and G. Rousset, "Phase estimation for large field of view: application to multiconjugate adaptive optics," in Propagation through the Atmosphere III, M. C. Roggemann and L. R. Bissonnette, eds., Proc. SPIE 3763, 125–133 (1999).
17. T. Fusco, J.-M. Conan, V. Michau, L. Mugnier, and G. Rousset, "Efficient phase estimation for large-field-of-view adaptive optics," Opt. Lett. 24, 1472–1474 (1999).
18. R. J. Noll, "Zernike polynomials and atmospheric turbulence," J. Opt. Soc. Am. 66, 207–211 (1976).
19. D. M. Titterington, "General structure of regularization procedures in image reconstruction," Astron. Astrophys. 144, 381–387 (1985).
20. G. Demoment, "Image reconstruction and restoration: overview of common estimation structures and problems," IEEE Trans. Acoust., Speech, Signal Process. 37, 2024–2036 (1989).
21. E. P. Wallner, "Optimal wave-front correction using slope measurements," J. Opt. Soc. Am. 73, 1771–1776 (1983).
22. B. M. Welsh and R. N. VonNiederhausern, "Performance analysis of the self-referenced speckle-holography image-reconstruction technique," Appl. Opt. 32, 5071–5078 (1993).
23. H. L. Van Trees, Detection, Estimation, and Modulation Theory (Wiley, New York, 1968).
24. P. A. Bakut, V. E. Kirakosyants, V. A. Loginov, C. J. Solomon, and J. C. Dainty, "Optimal wavefront reconstruction from a Shack–Hartmann sensor by use of a Bayesian algorithm," Opt. Commun. 109, 10–15 (1994).
25. S. D. Ford, B. M. Welsh, and M. C. Roggemann, "Constrained least-squares estimation in deconvolution from wave-front sensing," Opt. Commun. 151, 93–100 (1998).
26. J.-M. Conan, L. M. Mugnier, T. Fusco, V. Michau, and G. Rousset, "Myopic deconvolution of adaptive optics images by use of object and point-spread function power spectra," Appl. Opt. 37, 4614–4622 (1998).
27. L. P. Yaroslavsky and H. J. Caulfield, "Deconvolution of multiple images of the same object," Appl. Opt. 33, 2157–2162 (1994).
28. P. J. Green, "Bayesian reconstructions from emission tomography data using a modified EM algorithm," IEEE Trans. Med. Imaging 9, 84–93 (1990).
29. C. Bouman and K. Sauer, "A generalized Gaussian image model for edge-preserving MAP estimation," IEEE Trans. Image Process. 2, 296–310 (1993).
30. W. J. Rey, Introduction to Robust and Quasi-Robust Statistical Methods (Springer-Verlag, Berlin, 1983).
31. S. Brette and J. Idier, "Optimized single site update algorithms for image deblurring," in Proceedings of the International Conference on Image Processing (IEEE Computer Society Press, Los Alamitos, Calif., 1996), pp. 65–68.
32. J.-M. Conan, T. Fusco, L. M. Mugnier, E. Kersalé, and V. Michau, "Deconvolution of adaptive optics images with imprecise knowledge of the point spread function: results on astronomical objects," in Astronomy with Adaptive Optics: Present Results and Future Programs, D. Bonaccini, ed., ESO Conf. and Workshop Proc. No. 56 (European Southern Observatory, Garching, Germany, 1999), pp. 121–132.
33. T. Fusco, J.-P. Véran, J.-M. Conan, and L. Mugnier, "Myopic deconvolution method for adaptive optics images of stellar fields," Astron. Astrophys., Suppl. Ser. 134, 1–10 (1999).
34. T. J. Schulz, "Multiframe blind deconvolution of astronomical images," J. Opt. Soc. Am. A 10, 1064–1073 (1993).
35. E. Thiébaut and J.-M. Conan, "Strict a priori constraints for maximum-likelihood blind deconvolution," J. Opt. Soc. Am. A 12, 485–492 (1995).
36. Groupe Problèmes Inverses, "GPAV: une grande oeuvre collective," Internal Report, Laboratoire des Signaux et Systèmes (Centre National de la Recherche Scientifique/Supélec/Université Paris-Sud, Paris, 1997).
37. D. P. Bertsekas, Nonlinear Programming (Athena Scientific, Belmont, Mass., 1995).
38. D. G. Luenberger, Introduction to Linear and Nonlinear Programming (Addison-Wesley, Reading, Mass., 1973).
39. Y.-L. You and M. Kaveh, "A regularization approach to joint blur identification and image restoration," IEEE Trans. Image Process. 5, 416–428 (1996).
40. N. Roddier, "Atmospheric wavefront simulation using Zernike polynomials," Opt. Eng. 29, 1174–1180 (1990).
41. D. W. Tyler and C. L. Matson, "Speckle imaging detector optimization and comparison," Opt. Eng. 32, 864–869 (1993).
42. E. Thiébaut, "Speckle imaging with the bispectrum and without reference star," in International Astronomical Union Symposium on Very High Angular Resolution Imaging, R. J. G. Robertson and W. J. Tango, eds. (Kluwer Academic, Dordrecht, The Netherlands, 1994), Vol. 158, p. 209.
43. R. G. Paxman, T. J. Schulz, and J. R. Fienup, "Joint estimation of object and aberrations by using phase diversity," J. Opt. Soc. Am. A 9, 1072–1085 (1992).