
Pattern Recognition 33 (2000) 555–571

Restoration of severely blurred high range images using stochastic and deterministic relaxation algorithms in compound Gauss–Markov random fields

Rafael Molina^a,*, Aggelos K. Katsaggelos^b, Javier Mateos^a, Aurora Hermoso^c, C. Andrew Segall^b

^a Departamento de Ciencias de la Computación e I.A., Universidad de Granada, 18071 Granada, Spain
^b Department of Electrical and Computer Engineering, Northwestern University, Evanston, IL 60208-3118, USA
^c Departamento de Estadística e I.O., Universidad de Granada, 18071 Granada, Spain

Received 15 March 1999

Abstract

Over the last few years, a growing number of researchers from varied disciplines have been utilizing Markov random field (MRF) models for developing optimal, robust algorithms for various problems, such as texture analysis, image synthesis, classification and segmentation, surface reconstruction, integration of several low-level vision modules, sensor fusion and image restoration. However, not much work has been reported on the use of Simulated Annealing (SA) and Iterative Conditional Mode (ICM) algorithms for maximum a posteriori estimation in image restoration problems when severe blurring is present. In this paper we examine the use of compound Gauss–Markov random fields (CGMRF) to restore severely blurred high range images. For this deblurring problem, the convergence of the SA and ICM algorithms has not been established. We propose two new iterative restoration algorithms which can be considered as extensions of the classical SA and ICM approaches and whose convergence is established. Finally, they are tested on real and synthetic images and the results compared with the restorations obtained by other iterative schemes. © 2000 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.

Keywords: Image restoration; Compound Gauss–Markov random fields; Simulated annealing; Iterative conditional mode

1. Introduction

Image restoration refers to the problem of recovering an image, f, from its blurred and noisy observation, g, for the purpose of improving its quality or obtaining some type of information that is not readily available from the degraded image.

It is well known that linear shift-invariant (LSI) image models do not, in many circumstances, lead to appropriate restoration methods. Their main problem is their inability to preserve discontinuities. To move away from simple LSI models several methods have been proposed.

This work has been supported by the "Comisión Nacional de Ciencia y Tecnología" under contract TIC97-0989.

* Corresponding author. Tel.: +34-958-244019; fax: +34-958-243317.

E-mail address: rms@decsai.ugr.es (R. Molina)

The compound Gauss–Markov random field (CGMRF) theory provides us with a means to control changes in the image model using a hidden random field. A compound random field has two levels: an upper level, which is the real image, that has certain translation-invariant linear sub-models to represent image characteristics like border regions, smoothness, texture, etc. The lower or hidden level is a finite range random field to govern the transitions between the sub-models. The use of the underlying random field, called the line process, was introduced by Geman and Geman [1] in the discrete case and extended to the continuous case by Jeng [2], Jeng and Woods [3,4] and Chellapa et al. [5].


Given the image and noise models, the process of finding the maximum a posteriori (MAP) estimate for the CGMRF is much more complex, since we no longer have a convex function to be minimized and methods like simulated annealing (SA) (see [1]) have to be used. Although this method, when blurring is not present, leads to the MAP estimate, it is a very computationally demanding method. A faster alternative is deterministic relaxation which results in local MAP estimation, also called iterative conditional mode (ICM) [6].

When blur is present, different approaches have been proposed to find the MAP. Blake and Zisserman [7] propose the use of graduated non-convexity, which can be extended to the blurring problem, Molina and Ripley [8] propose the use of a log-scale for the image model, and Green [9], Bouman and Sauer [10], Schultz and Stevenson [11] and Lange [12] use convex potentials in order to ensure uniqueness of the solution.

Additional solutions are motivated by the application of nonlinear partial differential equations in image processing, particularly anisotropic diffusion [13]. These operators are designed to smooth an image while preserving discontinuities (e.g. [14-16]). While traditionally utilized for image enhancement, embedding the diffusion inhibitor within a variational framework facilitates image restoration. Restoration using different variants of the diffusion mechanism has been presented by Geman and McClure [17], Hebert and Leahy [18], Green [9], and Bouman and Sauer [10]. Design of the edge-preserving function is critical for solution convergence and quality. Selection of suitable expressions has been explored by Charbonnier et al. [19].

In this paper we extend the use of SA to restore high dynamic range images in the presence of blurring, a case where convergence of this method has not been shown (see Jeng [2] and Jeng and Woods [3,4] for the continuous case without blurring).

In Section 2 we introduce the notation we use and the proposed model for the image and line processes as well as the noise model. Both stochastic and deterministic relaxation approaches to obtain the MAP estimate without blurring are presented in Section 3. Reasons why these algorithms may be unstable in the presence of blurring are studied in Section 4. In Section 5 we modify the SA algorithm and its corresponding relaxation approach in order to propose our modified algorithms. Convergence proofs are given in the Appendix. In Section 6 we test both algorithms on real images and compare the results with other existing methods. Section 7 concludes the paper.

2. Notation and model

2.1. Bayesian model

We will distinguish between f, the 'true' image which would be observed under ideal conditions, and g, the observed image. The aim is to reconstruct f from g. Bayesian methods start with a prior distribution, a probability distribution over images f by which they incorporate information on the expected structure within an image. It is also necessary to specify p(g | f), the probability distribution of observed images g if f were the 'true' image. The Bayesian paradigm dictates that the inference about the true f should be based on p(f | g) given by

p(f | g) = p(g | f) p(f) / p(g) ∝ p(g | f) p(f).  (1)

Maximization of Eq. (1) with respect to f yields

f̂ = arg max_f p(f | g),  (2)

the maximum a posteriori estimator. For the sake of simplicity, we will denote by f(i) the intensity of the true image at the location of pixel i on the lattice. We regard f as a p×1 column vector of values f(i). The convention applies equally to the observed image g.

2.2. Incorporation of the edge locations in the prior model

The use of CGMRF was first presented by Geman and Geman [1] using an Ising model to represent the upper level and a line process to model the abrupt transitions. Extensions to continuous range models using GMRF were presented by Jeng [2]. The CGMRF model used in this paper was proposed by Chellapa et al. [5] and it is an extension of Blake and Zisserman's weak membrane model [7] used for surface interpolation and edge detection. The convergence proof that will be given here can also be extended to the CGMRF defined by Jeng [2] and Jeng and Woods [3,4].

Let us first describe the prior model without any edges. Our prior knowledge about the smoothness of the object luminosity distribution makes it possible to model the distribution of f by a Conditional Auto-Regressive (CAR) model (see [20]). Thus,

p(f) ∝ exp{ -(1/(2σ_w²)) f^T (I - φN) f },  (3)

where N_{ij} = 1 if cells i and j are spatial neighbors (pixels at distance one), zero otherwise, and φ is just less than 0.25. The parameters can be interpreted by the following expressions describing the conditional distribution:

E(f(i) | f(j), j ≠ i) = φ ∑_{j nhbr i} f(j),

var(f(i) | f(j), j ≠ i) = σ_w²,  (4)

where the suffix 'j nhbr i' denotes the four neighbor pixels at distance one from pixel i (see Fig. 1). The parameter σ_w² measures the smoothness of the 'true' image.


Fig. 1. Image and line sites.

From Eq. (3) we have

-log p(f) = const + (1/(2σ_w²)) ∑_i f(i) (f(i) - φ(Nf)(i)).

Then, if i:+1, i:+2, i:+3, i:+4 denote the four pixels around pixel i as described in Fig. 1 and we assume a 'toroidal edge correction', we have

-log p(f) = const + (1/(2σ_w²)) ∑_i [ φ(f(i) - f(i:+1))² + φ(f(i) - f(i:+2))² + (1 - 4φ) f²(i) ].

This expression can be rewritten as

-log p(f, l) = const + (1/(2σ_w²)) ∑_i [ φ(f(i) - f(i:+1))² (1 - l([i, i:+1])) + βl([i, i:+1]) + φ(f(i) - f(i:+2))² (1 - l([i, i:+2])) + βl([i, i:+2]) + (1 - 4φ) f²(i) ],  (5)

where l([i, j]) ≡ 0 for all i and j.

We now introduce a line process by simply redefining the function l([i, j]) as taking the value zero if pixels i and j are not separated by an active line and one otherwise. We then penalize the introduction of an active line element in the position [i, j] (see Fig. 1) by the term βl([i, j]), since otherwise the expression in Eq. (5) would obtain its minimum value by setting all line elements equal to one. The intuitive interpretation of this line process is simple; it acts as an activator or inhibitor of the relation between two neighbor pixels depending on whether or not the pixels are separated by an edge.
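The prior energy of Eq. (5) is easy to evaluate directly. The sketch below is our own illustration (not code from the paper): l_h and l_v are hypothetical arrays holding the horizontal and vertical line elements l([i, i:+1]) and l([i, i:+2]), and toroidal wrap-around is used as in the derivation above.

```python
import numpy as np

def prior_energy(f, l_h, l_v, phi=0.24, beta=2.0, sigma_w2=1.0):
    """-log p(f, l) up to a constant, following Eq. (5) with toroidal wrap-around.

    f        : 2-D image array.
    l_h, l_v : binary line fields; l_h[i] separates f[i] from its right neighbour,
               l_v[i] from the neighbour below (a layout we assume for illustration).
    """
    dh = f - np.roll(f, -1, axis=1)          # f(i) - f(i:+1)
    dv = f - np.roll(f, -1, axis=0)          # f(i) - f(i:+2)
    e = (phi * dh**2 * (1 - l_h) + beta * l_h
         + phi * dv**2 * (1 - l_v) + beta * l_v
         + (1 - 4 * phi) * f**2)
    return e.sum() / (2 * sigma_w2)

# tiny usage example on random data, with all line elements switched off
f = np.random.rand(8, 8)
l0 = np.zeros_like(f)
print(prior_energy(f, l0, l0))
```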

2.3. Noise models

A simplified but very realistic noise model for many applications is to assume that it is Gaussian with mean zero and variance σ_n². This means that the observed image corresponds to the model g(i) = (Df)(i) + n(i) = ∑_j d(i - j) f(j) + n(i), where D is the p×p matrix defining the systematic blur, assumed to be known and approximated by a block circulant matrix, n(i) is the additive Gaussian noise with zero mean and variance σ_n², and d(j) are the coefficients defining the blurring function.

Then, the probability of the observed image g if f were the 'true' image is

p(g | f) ∝ exp[ -(1/(2σ_n²)) ‖g - Df‖² ].  (6)
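A minimal sketch of this observation model (our own, assuming the blur coefficients are supplied as a full-size array with its origin at pixel (0, 0)): since D is block circulant, the blur is a circular convolution and can be applied with the FFT, after which white Gaussian noise of variance σ_n² is added.

```python
import numpy as np

def degrade(f, d, sigma_n2, seed=0):
    """Form g = D f + n.  D is block circulant, so the blur is a circular
    convolution computed with the FFT; d must have the same shape as f with
    the PSF origin at pixel (0, 0).  n is white Gaussian noise of variance sigma_n2."""
    blurred = np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(d)))
    rng = np.random.default_rng(seed)
    return blurred + rng.normal(0.0, np.sqrt(sigma_n2), size=f.shape)
```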

3. MAP estimation using stochastic and deterministic relaxation

The MAP estimates of f and l, f̂ and l̂, are given by

(f̂, l̂) = arg max_{f,l} p(f, l | g).  (7)

This is an obvious extension of (2) where now we have to estimate both the image and line processes. The modified simulated annealing (MSA) algorithm we are going to propose in this work ensures convergence to a local MAP estimate regardless of the initial solution. We start by examining the SA procedure as defined by Jeng [2].

Since p(f, l | g) is nonlinear it is extremely difficult to find f̂ and l̂ by any conventional method. Simulated annealing is a relaxation technique to search for MAP estimates from degraded observations. It uses the distribution

p_T(f, l | g) = (1/Z_T) exp{ -(1/T)(1/(2σ_n²)) ‖g - Df‖² - (1/T)(1/(2σ_w²)) ∑_i [ φ(f(i) - f(i:+1))² (1 - l([i, i:+1])) + βl([i, i:+1]) + φ(f(i) - f(i:+2))² (1 - l([i, i:+2])) + βl([i, i:+2]) + (1 - 4φ) f²(i) ] },  (8)

where T is the temperature and Z_T a normalization constant.

We shall need to simulate the conditional a posteriori density function for l([i, j]), given the rest of l, f and g, and the conditional a posteriori density function for f(i) given the rest of f, l and g. To simulate the line process conditional a posteriori density function, p_T(l([i, j]) | l([k, l]): ∀[k, l] ≠ [i, j], f, g), we have

p_T(l([i, j]) = 0 | l([k, l]): ∀[k, l] ≠ [i, j], f, g) ∝ exp[ -(1/T)(φ/(2σ_w²)) (f(i) - f(j))² ],  (9)

p_T(l([i, j]) = 1 | l([k, l]): ∀[k, l] ≠ [i, j], f, g) ∝ exp[ -(1/T)(β/(2σ_w²)) ].  (10)

Furthermore, for our Gaussian noise model,

p_T(f(i) | f(j): ∀j ≠ i, l, g) ~ N(μ_{l[i]}(i), T σ²_{l[i]}(i)),  (11)

where μ_{l[i]}(i) and σ²_{l[i]}(i) are given by

μ_{l[i]}(i) = λ_{l[i]}(i) [ φ ∑_{j nhbr i} f(j)(1 - l([i, j])) / nn_{l[i]}(i) ] + (1 - λ_{l[i]}(i)) [ ((D^T g)(i) - (D^T D f)(i))/c + f(i) ],  (12)

σ²_{l[i]}(i) = σ_w² σ_n² / (nn_{l[i]}(i) σ_n² + c σ_w²),  (13)

where c is the sum of the squares of the coefficients defining the blur function, that is, c = ∑_j d(j)²,

nn_{l[i]}(i) = φ ∑_{j nhbr i} (1 - l([i, j])) + (1 - 4φ)

and

λ_{l[i]}(i) = nn_{l[i]}(i) σ_n² / (nn_{l[i]}(i) σ_n² + c σ_w²),

and l[i] is the four-dimensional vector representing the line process configuration around image pixel i.
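The per-pixel quantities in Eqs. (11)-(13) can be computed directly; the following sketch (ours, with hypothetical argument names) returns μ_{l[i]}(i) and σ²_{l[i]}(i) for a single pixel given its four neighbours, the local line configuration and the precomputed values (D^T g)(i) and (D^T D f)(i).

```python
import numpy as np

def conditional_mean_var(f_i, f_nbrs, l_nbrs, DTg_i, DTDf_i,
                         phi, sigma_w2, sigma_n2, c):
    """mu_{l[i]}(i) and sigma^2_{l[i]}(i) of Eqs. (12)-(13) for one pixel.

    f_nbrs : values of the four neighbours of pixel i.
    l_nbrs : the four line elements l([i, j]) (0 or 1) toward those neighbours.
    c      : sum of squared blur coefficients, c = sum_j d(j)^2.
    """
    f_nbrs = np.asarray(f_nbrs, dtype=float)
    l_nbrs = np.asarray(l_nbrs, dtype=float)
    nn = phi * np.sum(1.0 - l_nbrs) + (1.0 - 4.0 * phi)          # nn_{l[i]}(i)
    lam = nn * sigma_n2 / (nn * sigma_n2 + c * sigma_w2)         # lambda_{l[i]}(i)
    smooth_term = phi * np.sum(f_nbrs * (1.0 - l_nbrs)) / nn     # prior part of the mean
    data_term = (DTg_i - DTDf_i) / c + f_i                       # data part of the mean
    mu = lam * smooth_term + (1.0 - lam) * data_term             # Eq. (12)
    var = sigma_w2 * sigma_n2 / (nn * sigma_n2 + c * sigma_w2)   # Eq. (13)
    return mu, var
```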

Then the sequential SA to find the MAP, with no blurring (D = I), proceeds as follows (see [2]):

Algorithm 1 (Sequential SA procedure). Let i_t, t = 1, 2, ..., be the sequence in which the sites are visited for updating.

1. Set t = 0 and assign an initial configuration denoted as f^{-1}, l^{-1} and initial temperature T(0) = 1.
2. The evolution l^{t-1} → l^t of the line process can be obtained by sampling the next point of the line process from the raster-scanning scheme based on the conditional probability mass function defined in Eqs. (9) and (10) and keeping the rest of l^{t-1} unchanged.
3. Set t = t + 1. Go back to step 2 until a complete sweep of the field l is finished.
4. The evolution f^{t-1} → f^t of the image f can be obtained by sampling the next value of the image process from the raster-scanning scheme based on the conditional probability mass function given in Eq. (11) and keeping the rest of f^{t-1} unchanged.
5. Set t = t + 1. Go back to step 4 until a complete sweep of the field f is finished.
6. Go to step 2 until t > t_f, where t_f is a specified integer.
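Steps 2 and 3 of Algorithm 1 amount to independent Bernoulli draws for the line elements. A sketch of one such sweep (our own illustration, covering the line elements along one direction only) is:

```python
import numpy as np

def sample_line_field(f, phi, beta, sigma_w2, T, axis=1, seed=0):
    """One sweep over the line elements separating each pixel from its neighbour
    along `axis`, sampled from Eqs. (9)-(10) at temperature T."""
    rng = np.random.default_rng(seed)
    diff2 = (f - np.roll(f, -1, axis=axis)) ** 2
    e0 = phi * diff2 / (2.0 * sigma_w2)       # exponent for l([i, j]) = 0, Eq. (9)
    e1 = beta / (2.0 * sigma_w2)              # exponent for l([i, j]) = 1, Eq. (10)
    p1 = np.exp(-e1 / T) / (np.exp(-e0 / T) + np.exp(-e1 / T))
    return (rng.random(f.shape) < p1).astype(float)
```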

Note that steps 2 and 3 consist in an exact and direct sampling from the independent conditional law p_T(l | f^{t-1}, g) = ∏_{[i,j]} p_T(l([i, j]) | f^{t-1}(i), f^{t-1}(j)), while steps 4 and 5 consist in one sweep of Gibbs sampling from the conditional distribution p_T(f | l^t, g) with initial configuration f^{t-1}.

The following theorem from Jeng [2] guarantees that the SA algorithm converges to the MAP estimate in the case of no blurring.

Theorem 1. If the following conditions are satisfied:

1. |φ| < 0.25,
2. T(t) → 0 as t → ∞, such that
3. T(t) ≥ C_T / log(1 + k(t)),

then for any starting configuration f^{-1}, l^{-1}, we have

p(f^t, l^t | f^{-1}, l^{-1}, g) → p_0(f, l) as t → ∞,

where p_0(·,·) is the uniform probability distribution over the MAP solutions, C_T is a positive constant and k(t) is the sweep iteration number at time t.

Instead of using a stochastic approach, we can use a deterministic method to search for a local maximum. An advantage of the deterministic method is that its convergence is much faster than the stochastic approach, since instead of simulating the distributions, the mode of the corresponding conditional distribution is chosen. The disadvantage is the local nature of the solution obtained. This method can be seen as a particular case of SA where the temperature is always set to zero.
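For the line process this mode selection has a simple closed form: at zero temperature, comparing the exponents of Eqs. (9) and (10) switches a line element on exactly when φ(f(i) - f(j))² > β. A sketch of our own:

```python
import numpy as np

def icm_line_field(f, phi, beta, axis=1):
    """Zero-temperature (ICM) update of the line elements along `axis`:
    pick l([i, j]) = 1 whenever phi * (f(i) - f(j))^2 > beta."""
    diff2 = (f - np.roll(f, -1, axis=axis)) ** 2
    return (phi * diff2 > beta).astype(float)
```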

4. Instability of the SA and ICM solutions

Unfortunately, due to the presence of blurring the convergence of SA has not been established for this problem. The main problem of the methods is that, if c is small, as is the case for severely blurred images, the term [(D^T g)(i) - (D^T D f)(i)]/c in (12) is highly unstable. For the ICM method the problem gets worse because sudden changes in the first stages, due to the line process, become permanent (see Ref. [21]).

Let us examine intuitively and formally why we may have convergence problems with Algorithm 1 and its deterministic relaxation approximation when severe blurring is present. Let us assume for simplicity no line process and examine the iterative procedure where we update the whole image at the same time; it is important to note that this is not the parallel version of SA but an iterative procedure. We have

f^t = λφN f^{t-1} - (1 - λ) [ (D^T D/c) f^{t-1} - f^{t-1} ] + (1 - λ) D^T g / c = A f^{t-1} + const,  (14)

where t is the iteration number, understood as a sweep of the whole image, and

A = [ I - λ(I - φN) - (1 - λ) D^T D / c ].  (15)

For the method to converge, A must be a contraction mapping. However, this may not be the case. For instance, if the image suffers from severe blurring then c is close to zero and the matrix [D^T D/c] has eigenvalues greater than one. Furthermore, if the image has a high dynamic range, like astronomical images where ranges [0, 7000] are common, it is natural to assume that σ_w² is big and thus (1 - λ)[D^T D/c] has eigenvalues greater than one. Therefore, this iterative method may not converge. It is important to note that, when there is no blurring, c = 1 and A is a contraction mapping.

Let us modify A in order to have a contraction. Adding [(1 - λ)(1 - c)/c] f to both sides of Eq. (14) we have, in the iterative procedure,

(1 + [(1 - λ)(1 - c)/c]) f^t = [(1 - λ)(1 - c)/c] f^{t-1} + A f^{t-1} + const

or

f^t = ω f^{t-1} + (1 - ω)[A f^{t-1} + const],

with ω = (1 - c)σ_w² / (σ_n² + σ_w²). We then have for this new iterative procedure

f^t = Ã f^{t-1} + (1 - ω) const,

where

Ã = [ I - ρ(I - φN) - (1 - ρ) D^T D ],

with ρ = σ_n² / (σ_n² + σ_w²), is now a contraction mapping.
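Under the toroidal assumption D, N and I are all block circulant, so the eigenvalues of A and of the modified matrix Ã can be read off from 2-D DFTs. The following sketch (our own numerical check, not from the paper) computes both spectral radii for a given blur kernel:

```python
import numpy as np

def spectral_radii(d, shape, phi, sigma_w2, sigma_n2):
    """Largest |eigenvalue| of A (Eq. (15)) and of the modified matrix,
    using the fact that circulant matrices are diagonalized by the 2-D DFT.
    d : blur coefficients as a full-size array with origin at pixel (0, 0)."""
    dtd = np.abs(np.fft.fft2(d, s=shape)) ** 2                   # eigenvalues of D^T D
    c = np.sum(d ** 2)                                           # c = sum_j d(j)^2
    u = np.fft.fftfreq(shape[0])[:, None]
    v = np.fft.fftfreq(shape[1])[None, :]
    n_eig = 2 * np.cos(2 * np.pi * u) + 2 * np.cos(2 * np.pi * v)    # eigenvalues of N
    lam = sigma_n2 / (sigma_n2 + c * sigma_w2)                   # lambda, no line process
    rho = sigma_n2 / (sigma_n2 + sigma_w2)
    eig_A = 1 - lam * (1 - phi * n_eig) - (1 - lam) * dtd / c        # Eq. (15)
    eig_At = 1 - rho * (1 - phi * n_eig) - (1 - rho) * dtd           # modified matrix
    return np.max(np.abs(eig_A)), np.max(np.abs(eig_At))
```

For a severely blurred, high dynamic range setting (small c, large σ_w²) the first value typically exceeds one while the second stays below one, in line with the argument above.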

5. The modified simulated annealing algorithm

Let us now examine how to obtain a contraction for our iterative procedure. Let us rewrite Eq. (12) as an iterative procedure and add (α(1 - nn_{l[i]}(i)) + β(1 - c)) f(i) to each side of the equation; we have

(α + β) f^t(i) = (α(1 - nn_{l[i]}(i)) + β(1 - c)) f^{t-1}(i) + αφ ∑_{j nhbr i} f^{t-1}(j)(1 - l([i, j])) + β((D^T g)(i) - (D^T D f^{t-1})(i) + c f^{t-1}(i)),  (16)

where α = 1/σ_w² and β = 1/σ_n², or

f^t(i) = ω_{l^{t-1}[i]}(i) f^{t-1}(i) + (1 - ω_{l^{t-1}[i]}(i)) [ λ_{l[i]}(i) φ ∑_{j nhbr i} f^{t-1}(j)(1 - l([i, j])) / nn_{l[i]}(i) + (1 - λ_{l[i]}(i)) ( ((D^T g)(i) - (D^T D f^{t-1})(i))/c + f^{t-1}(i) ) ],

where ω_{l^{t-1}[i]}(i) = (σ_n²(1 - nn_{l^{t-1}[i]}(i)) + (1 - c)σ_w²) / (σ_n² + σ_w²).

So, in order to have a contraction, we update the whole image at the same time using the value of f(i) obtained in the previous iteration, f^{t_k-1}(i), and, instead of simulating from the normal distribution defined in Eq. (11) to obtain the new value of f(i), we simulate from the distribution

N(μ^m_{l^{t-1}[i]}(i), T σ^{2,m}_{l^{t-1}[i]}(i))  (17)

with mean

μ^m_{l^{t-1}[i]}(i) = ω_{l^{t-1}[i]}(i) f^{t_k-1}(i) + (1 - ω_{l^{t-1}[i]}(i)) μ_{l^{t-1}[i]}(i)  (18)

and

σ^{2,m}_{l^{t-1}[i]}(i) = (1 - (ω_{l^{t-1}[i]}(i))²) σ²_{l^{t-1}[i]}(i).  (19)

The reason to use this modified variance is clear if we take into account that, if

X ~ N(m, σ²)

and

Y | X ~ N(λX + (1 - λ)m, (1 - λ²)σ²),

where 0 < λ < 1, then

Y ~ N(m, σ²).

We then have for this iterative method that the transition probabilities are

π_{T(t_k)}(f^{t_k} | f^{t_k-1}, l^{t_k}, g) ∝ exp[ -(1/(2T(t_k))) [f^{t_k} - M_{l^{t_k}} f^{t_k-1} - Q_{l^{t_k}} g]^T [Q¹_{l^{t_k}}]^{-1} [f^{t_k} - M_{l^{t_k}} f^{t_k-1} - Q_{l^{t_k}} g] ],  (20)

where

M_{l^{t_k}} = Ω_{l^{t_k}} + (I - Ω_{l^{t_k}})(C_{l^{t_k}} - (D^T D)*_{l^{t_k}}),  (21)

Q_{l^{t_k}} = (I - Ω_{l^{t_k}}) B_{l^{t_k}},  (22)

where

(C_{l^{t_k}} f^{t_k})(i) = φ λ_{l^{t_k}[i]}(i) ∑_{j nhbr i} (1 - l([i, j])) f^{t_k}(j) / nn_{l^{t_k}[i]}(i)

and

((D^T D)*_{l^{t_k}} f^{t_k})(i) = (1 - λ_{l^{t_k}[i]}(i)) ( (D^T D f^{t_k})(i)/c - f^{t_k}(i) ),

Ω_{l^{t_k}} is a diagonal matrix with entries ω_{l^{t_k}[i]}(i) and Q¹_{l^{t_k}} is a diagonal matrix with entries σ^{2,m}_{l[i]}(i).

In the next section we apply the modified SA and ICM algorithms, whose convergence is established in the Appendix, to restore astronomical images.

The algorithms are the following:

Algorithm 2 (MSA procedure). Let i_t, t = 1, 2, ..., be the sequence in which the sites are visited for updating.

1. Set t = 0 and assign an initial configuration denoted as f^{-1}, l^{-1} and initial temperature T(0) = 1.
2. The evolution l^{t-1} → l^t of the line process can be obtained by sampling the next point of the line process from the raster-scanning scheme based on the conditional probability mass function defined in Eqs. (9) and (10) and keeping the rest of l^{t-1} unchanged.
3. Set t = t + 1. Go back to step 2 until a complete sweep of the field l is finished.
4. The evolution f^{t-1} → f^t of the image system can be obtained by sampling the next value of the whole image based on the conditional probability mass function given in Eq. (17).
5. Go to step 2 until t > t_f, where t_f is a specified integer.
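Putting the pieces together, a bare-bones driver for Algorithm 2 might be organised as follows (our own composition; sample_lines and sample_image are hypothetical callables standing for the line-process sweep of Eqs. (9)-(10) and the synchronous image draw of Eqs. (17)-(19), and the cooling schedule follows condition 3 of Theorem 2):

```python
import numpy as np

def msa(g, sample_lines, sample_image, C_T=1.0, n_sweeps=5000):
    """Skeleton of the MSA loop (Algorithm 2): alternate one sweep of the line
    process with one synchronous image draw, lowering the temperature as
    T(k) = C_T / log(1 + k).

    sample_lines(f, T)    -> new line fields given the current image.
    sample_image(f, l, T) -> new image given the line fields.
    """
    f = g.copy()                      # start from the observation
    l = None
    for k in range(1, n_sweeps + 1):
        T = C_T / np.log(1.0 + k)
        l = sample_lines(f, T)        # steps 2-3: sweep of the line process
        f = sample_image(f, l, T)     # step 4: synchronous draw of the whole image
    return f, l
```

The modified ICM variant is obtained from the same skeleton by always taking the mode instead of drawing a sample, i.e. effectively running the loop at zero temperature.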

The following theorem guarantees that the MSA algorithm converges to a local MAP estimate, even in the presence of blurring.

Theorem 2. If the following conditions are satisfied:

1. |φ| < 0.25,
2. T(t) → 0 as t → ∞, such that
3. T(t) ≥ C_T / log(1 + k(t)),

then for any starting configuration f^{-1}, l^{-1}, we have

p(f^t, l^t | f^{-1}, l^{-1}, g) → p_0(f, l) as t → ∞,

where p_0(·,·) is a probability distribution over local MAP solutions, C_T is a constant and k(t) is the sweep iteration number at time t.

We notice that if the method converges to a configuration (f̄, l̄), then

f̄ = arg max_f p(f | l̄, g).

Furthermore,

l̄ = arg max_l p(l | f̄, g).

We conjecture that the method we are proposing converges to a distribution over global maxima. However, the difficulty of using synchronous models prevents us from proving that result (see Ref. [22]).

The modified ICM procedure is obtained by selecting in steps 2 and 4 of Algorithm 2 the mode of the corresponding transition probabilities.

6. Test examples

Let us first examine how the modified ICM algorithm works on a synthetic star image, blurred with an atmospheric point spread function (PSF), D, given by

d(i) ∝ (1 + (u² + v²)/R²)^{-d},  (23)

with d = 3, R = 3.5, i = (u, v), and Gaussian noise with σ_n² = 64. If we use σ_w² = 24415, which is realistic for this image, and take into account that, for the PSF defined in Eq. (23), c = 0.02, then A defined in Eq. (15) is not a contraction. Figs. 2a and b depict the original and corrupted image, respectively. Restorations from the original and modified ICM methods with β = 2 for 2500 iterations are depicted in Figs. 2c and d, respectively. Similar results are obtained with 500 iterations.
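The PSF of Eq. (23) and the constant c can be generated as in the sketch below (ours); normalising the kernel to unit sum is our assumption, consistent with c being small for a wide blur.

```python
import numpy as np

def atmospheric_psf(shape, R=3.5, d_exp=3.0):
    """PSF of Eq. (23), d(i) proportional to (1 + (u^2 + v^2)/R^2)^(-d), centred at
    pixel (0, 0) so that it can be used directly with FFT-based circular convolution."""
    u = np.fft.fftfreq(shape[0]) * shape[0]        # signed pixel offsets
    v = np.fft.fftfreq(shape[1]) * shape[1]
    r2 = u[:, None] ** 2 + v[None, :] ** 2
    d = (1.0 + r2 / R ** 2) ** (-d_exp)
    d /= d.sum()                                   # unit DC gain (our assumption)
    return d

d = atmospheric_psf((256, 256))
c = np.sum(d ** 2)        # c = sum_j d(j)^2; small for wide blurs
print(c)
```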

The proposed methods were also tested on real images and compared with ARTUR, the method proposed by Charbonnier et al. [19]. ARTUR minimizes energy functions of the form

J(f) = λ² { ∑_i u[f(i) - f(i:+1)] + ∑_i u[f(i) - f(i:+2)] } + ‖g - Df‖²,  (24)

where λ is a positive constant and u is a potential function satisfying some edge-preserving conditions. The potential functions we used in our experiments, u_GM, u_HL, u_HS and u_GR, are shown in Table 1.

Charbonnier et al. [19] show that, for those u functions, it is always possible to find a function J* such that

J(f) = inf_l J*(f, l),

where J* is a dual energy which is quadratic in f when l is fixed. l can be understood as a line process which, for those potential functions, takes values in the interval [0, 1].

To minimize Eq. (24), Charbonnier et al. propose the following iterative scheme:

1. n = 0, f⁰ ≡ 0
2. Repeat
3. l^{n+1} = arg min_l [J*(f^n, l)]
4. f^{n+1} = arg min_f [J*(f, l^{n+1})]
5. n = n + 1
6. Until convergence.
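A compact sketch of this alternating scheme (ours, not the authors' implementation): for the differentiable potentials of Table 1 the auxiliary variables have the closed form l = u′(t)/(2t), and here the quadratic-in-f subproblem is decreased by plain gradient steps instead of the Gauss-Seidel solver reported later in this section; blur, blur_adj and u_prime are hypothetical callables.

```python
import numpy as np

def artur(g, blur, blur_adj, u_prime, lam2=1.0, n_outer=20, n_inner=50, step=1e-3):
    """Half-quadratic alternation in the spirit of ARTUR.

    blur(f), blur_adj(f) : apply D and D^T (e.g. FFT-based circular convolution).
    u_prime(t)           : derivative of the edge-preserving potential u.
    """
    f = g.copy()
    for _ in range(n_outer):
        # closed-form update of the auxiliary (line) variables; for the potentials
        # of Table 1 the limit of u'(t)/(2t) as t -> 0 is 1
        th = f - np.roll(f, -1, axis=1)
        tv = f - np.roll(f, -1, axis=0)
        with np.errstate(divide="ignore", invalid="ignore"):
            lh = np.where(np.abs(th) > 1e-12, u_prime(th) / (2 * th), 1.0)
            lv = np.where(np.abs(tv) > 1e-12, u_prime(tv) / (2 * tv), 1.0)
        # decrease the quadratic dual energy in f by gradient steps
        for _ in range(n_inner):
            gh = lh * (f - np.roll(f, -1, axis=1))
            gv = lv * (f - np.roll(f, -1, axis=0))
            grad_prior = (gh - np.roll(gh, 1, axis=1)) + (gv - np.roll(gv, 1, axis=0))
            grad_data = blur_adj(blur(f) - g)
            f = f - step * 2.0 * (lam2 * grad_prior + grad_data)
    return f
```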


Fig. 2. (a) Original image, (b) observed image, (c) ICM restoration, (d) restoration with the proposed ICM method.

Table 1
Edge-preserving potential functions used with ARTUR

Potential function    Expression of u(t)
u_GM                  t²/(1 + t²)
u_HL                  log(1 + t²)
u_HS                  2√(1 + t²) - 2
u_GR                  2 log[cosh(t)]

In our experiments the convergence criterion used in step 6 above was

‖f^{n+1} - f^n‖² / ‖f^n‖² < 10^{-6}.

The solution of step 4 was found by a Gauss-Seidel algorithm. The stopping criterion was

‖f^{n+1,m+1} - f^{n+1,m}‖² / ‖f^{n+1,m}‖² < 10^{-6},

where m is the iteration number.

We use images of Saturn which were obtained at the Cassegrain f/8 focus of the 1.52 m telescope at Calar Alto Observatory (Spain) in July 1991. Results are presented on an image taken through a narrow-band interference filter centered at the wavelength 9500 Å.

The blurring function defined in Eq. (23) was used. The parameters d and R were estimated from the intensity profiles of satellites of Saturn that were recorded simultaneously with the planet and of stars that were recorded very close in time and airmass to the planetary images. We found d ≈ 3 and R ≈ 3.4 pixels.

Fig. 3 depicts the original image and the restorations after running the original ICM and our proposed ICM methods for 500 iterations and the original SA and our proposed SA methods for 5000 iterations.

In all the images the improvement in spatial resolution is evident. In particular, ring light contribution has been successfully removed from equatorial regions close to the actual location of the rings and amongst the rings of Saturn, the Cassini division is enhanced in contrast, and the Encke division appears on the ansae of the rings in all deconvolved images. To examine the quality of the MAP estimate of the line process we compared it with the position of the ring and disk of Saturn, obtained from the Astronomical Almanac, corresponding to our observed image. Although all the methods detect a great part of the ring and the disk, the ICM method (Fig. 4a) shows thick lines. The SA method, on the other hand, gives us thinner lines and the details are more resolved (Fig. 4b). Obviously, there are some gaps in the line process but better results would be obtained by using eight neighbors instead of four or, in general, adding more l-terms to the energy function.

Fig. 3. (a) Original image, (b) restoration with the original ICM method and (c) its line process, (d) restoration with the original SA method and (e) its line process, (f) restoration with the proposed ICM method and (g) its line process, (h) restoration with the proposed SA method and (i) its line process.

Fig. 4. Comparison between the real edges (light) and the obtained line process (dark). (a) Proposed ICM method, (b) proposed SA method.

Fig. 5 depicts the results after running ARTUR using potential functions u_GM, u_HL, u_HS and u_GR on the Saturn image together with the results obtained by the proposed ICM method. Note that the line processes obtained by the potential functions used with ARTUR are presented in inverse gray levels. The results suggest that u_GM and u_HL capture the lines of the image better than u_HS and u_GR. Lines captured by all these functions are thicker than those obtained by the proposed ICM method; notice that the line process produced by these potential functions is continuous on the interval [0, 1]. Furthermore, u_GM also captures some low-intensity lines, due to the noise, that create some artifacts on the restoration, especially on the Saturn rings, see Fig. 5b. Finally, the potential functions used with ARTUR have captured the totality of the planet contour although the line process intensity on the contour is quite low.

Fig. 5. (a) Restoration with the proposed ICM method, and (f) and (k) its corresponding horizontal and vertical line processes. (b)-(e) show the restoration when ARTUR is run for the potentials u_GM, u_HL, u_HS and u_GR, respectively. Their corresponding horizontal line processes are shown in (g)-(j) and their vertical processes are shown in (l)-(o).

The methods were also tested on images of Jupiter, which were also obtained at the Cassegrain f/8 focus of the 1.52 m telescope at Calar Alto Observatory (Spain) in August 1992.

The blurring function was the same as in the previous experiment. Fig. 6 depicts the original image and the restorations after running the original ICM and our proposed ICM method for 500 iterations and our proposed SA method for 5000 iterations. In all the images the improvement in spatial resolution is evident. Features like the equatorial plumes and the great red spot are very well detected. ARTUR was also tested on these images, obtaining similar results to those obtained with Saturn.

In order to obtain a numerical comparison, ARTUR and our methods were tested and compared using the cameraman image. The image was blurred using the PSF defined in Eq. (23) with the parameters d = 3 and R = 4. Gaussian noise with variance 62.5 was added, obtaining an image with SNR = 20 dB. The original and observed image are shown in Fig. 7.


Fig. 6. (a) Original image, (b) restoration with the original ICM method and (c) its line process, (d) restoration with the proposed ICM method and (e) its line process, (f) restoration with the proposed SA method and (g) its line process.

Fig. 7. (a) Original cameraman image, (b) observed image.

Figs. 8 and 9 depict the restorations after running our proposed SA method for 5000 iterations, our proposed ICM method for 500 iterations and ARTUR with different potential functions.

In order to compare the quality of the restorations we used the peak signal-to-noise ratio (PSNR) which, for two images f and g of size M×N, is defined as

PSNR = 10 log_{10} [ M × N × 255² / ‖g - f‖² ].

Results, shown in Table 2, are quite similar for all the methods but they suggest that better results are obtained when our proposed SA method is used. For the two best methods in terms of the PSNR, our proposed SA and ARTUR with u_GM, we have included cross-sections of the original and restored images in Fig. 10. It can be observed that, although both profiles are quite similar, our proposed SA method obtains sharper edges than the ones obtained with u_GM.
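The PSNR used for this comparison can be computed as in the following sketch (ours), which assumes 8-bit images so that the peak value is 255:

```python
import numpy as np

def psnr(f, g):
    """Peak signal-to-noise ratio between images f and g (peak value 255)."""
    mse = np.mean((np.asarray(f, float) - np.asarray(g, float)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)
```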


Fig. 8. (a) Restoration with the proposed SA method and (b), (c) its horizontal and vertical line process, (d) restoration with the proposed ICM method and (e), (f) its horizontal and vertical line process, (g) restoration with ARTUR with the u_GM function and (h), (i) its horizontal and vertical line process.

Table 2
Comparison of the different restoration methods in terms of PSNR

Method      Observed   Proposed ICM   Proposed SA   ARTUR u_GM   ARTUR u_HL   ARTUR u_HS   ARTUR u_GR
PSNR (dB)   18.89      20.72          21.11         20.75        20.64        20.72        20.51

Table 3 shows the total computing time of the studied methods after running them on one processor of a Silicon Graphics Power Challenge XL. It also shows the relative execution time per iteration referred to the computing time of the ICM method. The small difference between the ICM and SA methods is due to the fact that most of the time is spent in convolving images.


Fig. 9. (a) Restoration with ARTUR with the u_HL function and (b), (c) its horizontal and vertical line process, (d) restoration with ARTUR with the u_HS function and (e), (f) its horizontal and vertical line process, (g) restoration with ARTUR with the u_GR function and (h), (i) its horizontal and vertical line process.

Table 3
Total computing time of the methods and relative time per iteration referred to the ICM

Method           Original ICM   Original SA   Proposed ICM   Proposed SA   ARTUR u_GM   ARTUR u_HL   ARTUR u_HS   ARTUR u_GR
Total time (s)   1149           12852         140            2250          198          38           29           29
Relative time    1.00           1.13          0.12           0.20          0.17         0.17         0.17         0.17

7. Conclusions

In this paper we have presented two new methods that can be used to restore high dynamic range images in the presence of severe blurring. These methods extend the classical ICM and SA procedures, and the convergence of the algorithms is guaranteed. The experimental results verify the derived theoretical results. Further extensions of the algorithms are under consideration.


Fig. 10. Cross section of line 153 of the original cameraman image (solid line) and reconstructed images (dotted line) with (a) proposed SA and (b) ARTUR with u_GM.

Appendix. Convergence of the MSA procedure

In this section we shall examine the convergence of the MSA algorithm. It is important to make clear that in this new iterative procedure we simulate f(i) using Eq. (17) and to simulate l([i, j]) we keep using Eqs. (9) and (10). We shall denote by π_T the corresponding transition probabilities. That is, π_{T(t_k)}(f^{t_k} | f^{t_k-1}, l^{t_k}, g) is obtained from Eq. (20) and π_{T(t_k)}(l^{t_k} | f^{t_k-1}, l^{t_k-1}) is obtained from Eqs. (9) and (10).

Since updating the whole image at the same time prevents us from having a stationary distribution, we will not be able to show the convergence to the global MAP estimates using the same proofs as in Geman and Geman [1] and Jeng and Woods [3].

To prove the convergence of the chain we need some lemmas and definitions as in Jeng [2] and Jeng and Woods [3].

We assume a measure space (Ω, Σ, μ) and a conditional density function π_n(s_n | s_{n-1}) which defines a Markov chain s_1, s_2, ..., s_n, .... In our application, the s_i are vector valued with a number of elements equal to the number of pixels in the image. For simplicity, we assume Ω is R^d and μ is Lebesgue measure on R^d. Define a Markov operator P_n : L¹ → L¹ as follows:

P_n π(s_n) = ∫_Ω π_n(s_n | s_{n-1}) π(s_{n-1}) ds_{n-1}.  (A.1)

By P^m_n we mean the composite operation P_{n+m} P_{n+m-1} ··· P_{n+2} P_{n+1}. The convergence problem we are dealing with is the same as the convergence of P^m_0 as m → ∞.

Definition A.1. Let x be a vector with components x(i) and Q be a matrix with components q(i, j). We define ‖x‖₂ and ‖Q‖₂ as follows:

‖x‖₂ = ( ∑_i |x(i)|² )^{1/2},

‖Q‖₂ = sup_x ‖Qx‖₂ / ‖x‖₂ = max_i (ρ(i))^{1/2},

where ρ(i) are the eigenvalues of matrix Q^T Q.

Definition A.2. A continuous nonnegative function V : Ω → R is a Lyapunov function if

lim_{‖s‖→∞} V(s) = ∞,  (A.2)

where ‖s‖ is a norm of s.

Denote by D the set of all pdf's with respect to Lebesgue measure and the L¹ norm defined as follows:

‖π‖₁ = ∫_Ω |π(s)| ds   ∀π ∈ L¹.

Definition A.3. Let P_n : L¹ → L¹ be a Markov operator. Then {P_n} is said to be asymptotically stable if, for any π₁, π₂ ∈ D,

lim_{m→∞} ‖P^m_0(π₁ - π₂)‖₁ = 0.  (A.3)

The following theorem from Jeng and Woods [3] gives the sufficient conditions on the asymptotic stability of P^m_0 in terms of transition density functions.

Theorem A.1. Let (Ω, Σ, μ) be a measure space and μ be Lebesgue measure on R^d. If there exists a Lyapunov function, V : Ω → R, such that

∫_Ω V(s_n) π_n(s_n | s_{n-1}) ds_n ≤ a V(s_{n-1}) + b   for 0 ≤ a < 1 and b ≥ 0,  (A.4)

and

∑_{i=1}^∞ ‖h_{m_i}‖₁ = ∞,   m_i = i m̃ for any integer m̃ > 0,  (A.5)

where

h_{m_i}(s_{m_i}) = inf_{‖s_{m_i-1}‖ ≤ r} π_{m_i}(s_{m_i} | s_{m_i-1})  (A.6)

and r is a positive number satisfying the following inequality:

V(s) > (1 + b)/(1 - a)   ∀‖s‖ > r,

then, for the Markov operator P_n : L¹ → L¹ defined by Eq. (A.1), we have that P^m_0 is asymptotically stable.

We are going to show that the sufficient conditions of Theorem A.1 are satisfied by the Markov chain defined by our MSA procedure when the parameters describing Algorithm 2 satisfy the conditions of Theorem 2. Since asymptotic stability of an inhomogeneous Markov sequence implies convergence, this will imply for our restoration problem that for any starting configuration f^{-1}, l^{-1}, we have

p(f^t, l^t | f^{-1}, l^{-1}, g) → p_0(f, l) as t → ∞,

where p_0(·,·) is the probability distribution over the MAP solutions.

Let us prove the following lemma.

Lemma A.1. If |φ| < 0.25 then, ∀l,

‖M_l‖₂ < 1,

where M_l has been defined in (21).

Proof. First we note that from (16)

f^t(i) = f^{t-1}(i) - ρ [ φ ∑_{j nhbr i} (f^{t-1}(i) - f^{t-1}(j))(1 - l([i, j])) + (1 - 4φ) f^{t-1}(i) ] + (1 - ρ)((D^T g)(i) - (D^T D f^{t-1})(i)),

where ρ = α/(α + β). So, M_l is symmetric and for any vector x

x^T M_l x = ∑_i x(i)² - ρ [ ∑_i φ(x(i) - x(i:+1))² (1 - l([i, i:+1])) ] - ρ [ ∑_i φ(x(i) - x(i:+2))² (1 - l([i, i:+2])) + (1 - 4φ) ∑_i x²(i) ] - (1 - ρ) x^T D^T D x.

Obviously if |φ| < 0.25, ∀x ≠ 0, x^T M_l x < ∑_i x(i)². Furthermore,

x^T M_l x ≥ ∑_i x(i)² - ρ [ ∑_i φ(x(i) - x(i:+1))² + ∑_i φ(x(i) - x(i:+2))² + (1 - 4φ) ∑_i x²(i) ] - (1 - ρ) x^T D^T D x = x^T (I - ρ(I - φN) - (1 - ρ) D^T D) x,

and if |φ| < 0.25, -I < (I - ρ(I - φN) - (1 - ρ) D^T D). So, if |φ| < 0.25,

-I < M_l < I

and

x^T M_l^T M_l x < x^T x,

which proves that M_l is a contraction matrix for |φ| < 0.25. □

We shall also use the following lemma from Jeng [2] and Jeng and Woods [3].

Lemma A.2. Assume B is a d-dimensional positive-definite matrix with eigenvalues ρ(1) ≥ ρ(2) ≥ ··· ≥ ρ(d) > 0 and B = J^T D J, where D is a diagonal matrix which consists of the eigenvalues. Let b > 0; then

(1/√((2π)^d |B|)) ∫_{‖x‖₂ < b} exp[-x^T B^{-1} x] dx ≥ q ( b/√(ρ(d)) )^{d-2} exp[ -b²/(2ρ(d)) ].

Using these two lemmas, let us now show that the sufficient conditions of Theorem A.1 are satisfied. The proof follows the same steps as the one given in Jeng [2] and Jeng and Woods [3].

Let V(f, l) be the Lyapunov function

V(f, l) = ‖f‖² + ‖l‖².  (A.7)

Step 1: Show that

∑_{l^{t_k}} ∫_Ω V(f^{t_k}, l^{t_k}) π_{T(t_k)}(f^{t_k}, l^{t_k} | f^{t_k-1}, l^{t_k-1}, g) df^{t_k} ≤ b + a V(f^{t_k-1}, l^{t_k-1}).


First we show that

∫_Ω ‖f^{t_k}‖² π_{T(t_k)}(f^{t_k} | f^{t_k-1}, l^{t_k}, g) df^{t_k} ≤ b₁ + a ‖f^{t_k-1}‖²   ∀l^{t_k}.  (A.8)

We have, by a change of variable,

∫_Ω ‖f^{t_k}‖² π_{T(t_k)}(f^{t_k} | f^{t_k-1}, l^{t_k}, g) df^{t_k}
= ∫_Ω ‖f̄^{t_k} + M_{l^{t_k}} f^{t_k-1} + Q_{l^{t_k}} g‖² · const · exp[ -(1/(2T(t_k))) (f̄^{t_k})^T [Q¹_{l^{t_k}}]^{-1} f̄^{t_k} ] df̄^{t_k}
≤ ∫_Ω ( ‖f̄^{t_k}‖² + ‖M_{l^{t_k}} f^{t_k-1}‖² + ‖Q_{l^{t_k}} g‖² ) · const · exp[ -(1/(2T(t_k))) (f̄^{t_k})^T [Q¹_{l^{t_k}}]^{-1} f̄^{t_k} ] df̄^{t_k}
≤ b₁ + a ‖f^{t_k-1}‖²,  (A.9)

where

a = max_l ‖M_l‖₂,

b₁ = max_l { [T(t_k)]^{1/2} [trace(Q¹_l)]^{1/2} + ‖Q_l g‖₂ },

with a < 1, since by Lemma A.1, M_l is a contraction, ∀l. Furthermore, it can be easily shown that

∑_{l^{t_k}} π_{T(t_k)}(l^{t_k} | f^{t_k-1}, l^{t_k-1}) ‖l^{t_k}‖² ≤ b₁ + a ‖l^{t_k-1}‖²;  (A.10)

since l^{t_k} has only a finite number of levels, choosing b₁ big enough, the above inequality obviously holds.

Let us now show that the inequality in Step 1 holds. We have, using Eqs. (A.9) and (A.10),

∑_{l^{t_k}} ∫_Ω V(f^{t_k}, l^{t_k}) π_{T(t_k)}(f^{t_k}, l^{t_k} | f^{t_k-1}, l^{t_k-1}, g) df^{t_k}
= ∑_{l^{t_k}} ∫_Ω ‖f^{t_k}‖² π_{T(t_k)}(f^{t_k}, l^{t_k} | f^{t_k-1}, l^{t_k-1}, g) df^{t_k} + ∑_{l^{t_k}} ∫_Ω ‖l^{t_k}‖² π_{T(t_k)}(f^{t_k}, l^{t_k} | f^{t_k-1}, l^{t_k-1}, g) df^{t_k}
≤ b₁ + a ‖f^{t_k-1}‖² + b₂ + a ‖l^{t_k-1}‖² = b + a V(f^{t_k-1}, l^{t_k-1}).

Step 2: Show that if the temperature T(t) decreases as C_T/log(1 + k(t)), then for any n₀ > 0, we have

∑_{m=1}^∞ ‖h_{m t_{n₀}}‖₁ = ∞,

where C_T is a positive constant and k(t) is the iteration number of sweeps up to time t.

This result can be obtained by establishing a lower bound for ‖h_{t_k}‖₁. First, recall that

h_{t_k} = inf_{(f^{t_k-1}, l^{t_k-1}) ∈ R_a} π_{T(t_k)}(f^{t_k}, l^{t_k} | f^{t_k-1}, l^{t_k-1}, g),

where R_a is defined as

R_a = { (f, l) | V(f, l) ≤ a }.

By definition of the L¹ norm, we have

‖h_{t_k}‖₁ = ∑_{l^{t_k}} ∫ inf_{(f^{t_k-1}, l^{t_k-1}) ∈ R_a} π_{T(t_k)}(f^{t_k}, l^{t_k} | f^{t_k-1}, l^{t_k-1}, g) df^{t_k}

and, from the definition of the iterative procedure,

‖h_{t_k}‖₁ = ∑_{l^{t_k}} ∫ inf_{(f^{t_k-1}, l^{t_k-1}) ∈ R_a} π_{T(t_k)}(f^{t_k} | f^{t_k-1}, l^{t_k}, g) π_{T(t_k)}(l^{t_k} | f^{t_k-1}, l^{t_k-1}) df^{t_k}
≥ inf_{l^{t_k}} [ inf_{(f^{t_k-1}, l^{t_k-1}) ∈ R_a} π_{T(t_k)}(l^{t_k} | f^{t_k-1}, l^{t_k-1}) ∫ inf_{(f^{t_k-1}, l^{t_k-1}) ∈ R_a} π_{T(t_k)}(f^{t_k} | f^{t_k-1}, l^{t_k}, g) df^{t_k} ]
≥ inf_{(l^{t_k}, l^{t_k-1})} inf_{‖f^{t_k-1}‖₂ ≤ a} π_{T(t_k)}(l^{t_k} | f^{t_k-1}, l^{t_k-1}) · inf_{l^{t_k}} [ ∫ inf_{‖f^{t_k-1}‖₂ ≤ a} π_{T(t_k)}(f^{t_k} | f^{t_k-1}, l^{t_k}, g) df^{t_k} ].

Let

δ_max = max { sup_{‖f‖₂ ≤ a} (φ/(2σ_w²))(f(i) - f(j))², β/(2σ_w²) }

and

δ_min = min { inf_{‖f‖₂ ≤ a} (φ/(2σ_w²))(f(i) - f(j))², β/(2σ_w²) };

then, from Eqs. (9) and (10),

inf_{(l^{t_k}, l^{t_k-1})} inf_{‖f^{t_k-1}‖₂ ≤ a} π_{T(t_k)}(l^{t_k} | f^{t_k-1}, l^{t_k-1}) ≥ ( (1/2) exp[ -(δ_max - δ_min)/T(t_k) ] )^{p̃} = C₂ exp[ -C₁/T(t_k) ],

where C₁ = p̃(δ_max - δ_min) and C₂ = (1/2)^{p̃}, with p̃ the number of line sites.

Hence,

‖h_{t_k}‖₁ ≥ C₂ exp[ -C₁/T(t_k) ] inf_{l^{t_k}} ∫ inf_{‖f^{t_k-1}‖₂ ≤ a} π_{T(t_k)}(f^{t_k} | f^{t_k-1}, l^{t_k}, g) df^{t_k}.


Now, from Eq. (20),

π_{T(t_k)}(f^{t_k} | f^{t_k-1}, l^{t_k}, g) = const · exp[ -(1/(2T(t_k))) [f^{t_k} - M_{l^{t_k}} f^{t_k-1} - Q_{l^{t_k}} g]^T [Q¹_{l^{t_k}}]^{-1} [f^{t_k} - M_{l^{t_k}} f^{t_k-1} - Q_{l^{t_k}} g] ].

It can be shown that there exists a finite number b with b ≫ a such that

∫ inf_{‖f^{t_k-1}‖₂ ≤ a} π_{T(t_k)}(f^{t_k} | f^{t_k-1}, l^{t_k}, g) df^{t_k} ≥ ( 1/[(2π)^d T(t_k) |Q¹_{l^{t_k}}|]^{1/2} ) ∫_{‖f^{t_k}‖₂ < b} exp[ -(1/(2T(t_k))) (f^{t_k})^T [Q¹_{l^{t_k}}]^{-1} f^{t_k} ] df^{t_k}.

Then, from Lemma A.2, we can establish the following inequality:

( 1/[(2π)^d T(t_k) |Q¹_{l^{t_k}}|]^{1/2} ) ∫_{‖f^{t_k}‖₂ < b} exp[ -(1/(2T(t_k))) (f^{t_k})^T [Q¹_{l^{t_k}}]^{-1} f^{t_k} ] df^{t_k} ≥ q ( b/√(T(t_k) λ_d) )^{d-2} exp[ -b²/(2T(t_k) λ_d) ],

where λ_d is the smallest eigenvalue of Q¹_{l^{t_k}} and b is a positive constant.

Obviously λ_d and b depend on l^{t_k}. Denoting by λ*_d and b* the values such that

( b*/√(T(t_k) λ*_d) )^{d-2} exp[ -b*²/(2T(t_k) λ*_d) ] = inf_{l^{t_k}} ( b/√(T(t_k) λ_d) )^{d-2} exp[ -b²/(2T(t_k) λ_d) ],

we have

‖h_{t_k}‖₁ ≥ C₂ exp[ -C₁/T(t_k) ] q ( b*/√(T(t_k) λ*_d) )^{d-2} exp[ -b*²/(2T(t_k) λ*_d) ].

Then, if we take T(t_k) = C_T/log(1 + k), we have

‖h_{t_k}‖₁ ≥ C* (log(1 + k))^{(d-2)/2} / (1 + k),

where

C* = C₂ q ( b*/√(C_T λ*_d) )^{d-2}

and

C_T = C₁ + b*²/(2λ*_d),

and so

∑_{m=1}^∞ ‖h_{m t_{n₀}}‖₁ = ∞.

References

[1] S. Geman, D. Geman, Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images, IEEE Trans. Pattern Anal. Mach. Intell. 9 (1984) 721-742.
[2] F.C. Jeng, Compound Gauss-Markov random fields for image estimation and restoration, Ph.D. Thesis, Rensselaer Polytechnic Institute, 1988.
[3] F.C. Jeng, J.W. Woods, Simulated annealing in compound Gaussian random fields, IEEE Trans. Inform. Theory 36 (1988) 94-107.
[4] F.C. Jeng, J.W. Woods, Compound Gauss-Markov models for image processing, in: A.K. Katsaggelos (Ed.), Digital Image Restoration, Springer Series in Information Science, vol. 23, Springer, Berlin, 1991.
[5] R. Chellapa, T. Simchony, Z. Lichtenstein, Image estimation using 2D noncausal Gauss-Markov random field models, in: A.K. Katsaggelos (Ed.), Digital Image Restoration, Springer Series in Information Science, vol. 23, Springer, Berlin, 1991.
[6] J. Besag, On the statistical analysis of dirty pictures, J. Roy. Statist. Soc. Ser. B 48 (1986) 259-302.
[7] A. Blake, A. Zisserman, Visual Reconstruction, MIT Press, Cambridge, 1987.
[8] R. Molina, B.D. Ripley, Using spatial models as priors in astronomical images analysis, J. Appl. Statist. 16 (1989) 193-206.
[9] P.J. Green, Bayesian reconstruction from emission tomography data using a modified EM algorithm, IEEE Trans. Med. Imaging 9 (1990) 84-92.
[10] C. Bouman, K. Sauer, A generalized Gaussian image model for edge-preserving MAP estimation, IEEE Trans. Image Process. 2 (1993) 296-310.
[11] R.R. Schultz, R.L. Stevenson, Stochastic modeling and estimation of multispectral image data, IEEE Trans. Image Process. 4 (1995) 1109-1119.
[12] K. Lange, Convergence of EM image reconstruction algorithms with Gibbs smoothing, IEEE Trans. Image Process. 4 (1990) 439-446.
[13] P. Perona, J. Malik, Scale-space and edge detection using anisotropic diffusion, IEEE Trans. Pattern Anal. Mach. Intell. 12 (1990) 629-639.
[14] Y.L. You, W. Xu, A. Tannenbaum, M. Kaveh, Behavioral analysis of anisotropic diffusion in image processing, IEEE Trans. Image Process. 5 (1996) 1539-1553.
[15] F. Catte, P.L. Lions, J.M. Morel, T. Coll, Image selective smoothing and edge detection by nonlinear diffusion, SIAM J. Numer. Anal. 29 (1992) 182-198.
[16] P. Saint-Marc, J.S. Chen, G. Medioni, Adaptive smoothing: a general tool for early vision, IEEE Trans. Pattern Anal. Mach. Intell. 13 (1991) 514-529.
[17] S. Geman, D.E. McClure, Bayesian image analysis: an application to single photon emission tomography, Proceedings of the Stat. Comput. Sect., Amer. Stat. Assoc., Washington, DC, 1985, pp. 12-18.
[18] T. Hebert, R. Leahy, A generalized EM algorithm for 3-D Bayesian reconstruction from Poisson data using Gibbs priors, IEEE Trans. Med. Imaging 8 (1989) 194-202.
[19] P. Charbonnier, L. Blanc-Feraud, G. Aubert, M. Barlaud, Deterministic edge-preserving regularization in computed imaging, IEEE Trans. Image Process. 5 (1997) 298-311.
[20] B.D. Ripley, Spatial Statistics, Wiley, New York, 1981.
[21] R. Molina, A.K. Katsaggelos, J. Mateos, J. Abad, Restoration of severely blurred high range images using compound models, Proceedings of ICIP-96, vol. 2, 1996, pp. 469-472.
[22] L. Younes, Synchronous random fields and image restoration, CMLA, Ecole Normale Supérieure de Cachan, Technical Report, 1998.

About the Author: RAFAEL MOLINA was born in 1957 and received his degree in Mathematics (Statistics) in 1979. He completed his Ph.D. Thesis in 1984 in Optimal Design in Linear Models and became Associate Professor in Computer Science and Artificial Intelligence at the University of Granada in 1989. His areas of research interest are image restoration (applications to astronomy and medicine), parameter estimation, image compression, and blind deconvolution. He is a member of the IEEE, SPIE, Royal Statistical Society and AERFAI (Asociación Española de Reconocimiento de Formas y Análisis de Imágenes) and currently the Dean of the Computer Engineering Faculty at the University of Granada.

About the Author: AGGELOS K. KATSAGGELOS received the Diploma degree in electrical and mechanical engineering from the Aristotelian University of Thessaloniki, Thessaloniki, Greece, in 1979 and the M.S. and Ph.D. degrees both in electrical engineering from the Georgia Institute of Technology, Atlanta, Georgia, in 1981 and 1985, respectively.

In 1985 he joined the Department of Electrical Engineering and Computer Science at Northwestern University, Evanston, IL, where he is currently professor, holding the Ameritech Chair of Information Technology. During the 1986-1987 academic year he was an assistant professor at Polytechnic University, Department of Electrical Engineering and Computer Science, Brooklyn, NY. His current research interests include image and video recovery, video compression, motion estimation, boundary encoding, computational vision, and multimedia signal processing. Dr. Katsaggelos is a Fellow of the IEEE, an Ameritech Fellow, a member of the Associate Staff, Department of Medicine, at Evanston Hospital, and a member of SPIE. He is a member of the Steering Committee of the IEEE Transactions on Medical Imaging, the IEEE Technical Committees on Visual Signal Processing and Communications, Image and Multi-Dimensional Signal Processing, and Multimedia Signal Processing, and the editor-in-chief of the IEEE Signal Processing Magazine. He has served as an Associate Editor for the IEEE Transactions on Signal Processing (1990-1992), an area editor for the journal Graphical Models and Image Processing (1992-1995), and a member of the Steering Committee of the IEEE Transactions on Image Processing (1992-1997). He is the editor of Digital Image Restoration (Springer-Verlag, Heidelberg, 1991), co-author of Rate-Distortion Based Video Compression (Kluwer Academic Publishers, 1997), and co-editor of Recovery Techniques for Image and Video Compression (Kluwer Academic Publishers, 1998). He has served as the General Chairman of the 1994 Visual Communications and Image Processing Conference (Chicago, IL), and as technical program co-chair of the 1998 IEEE International Conference on Image Processing (Chicago, IL).

About the Author: JAVIER MATEOS was born in Granada (Spain) in 1968. He received his Diploma and M.S. degrees in Computer Science from the University of Granada in 1990 and 1991, respectively. He completed his Ph.D. in Computer Science at the University of Granada in July 1998.

Since September 1992, he has been Assistant Professor at the Department of Computer Science and Artificial Intelligence of the University of Granada. His research interests include image restoration and image and video recovery and compression. He is a member of AERFAI (Asociación Española de Reconocimiento de Formas y Análisis de Imágenes).

About the Author: AURORA HERMOSO was born in 1957 and received her degree in Mathematics (Statistics) in 1979. She completed her Ph.D. in 1984 in Likelihood Tests in Log-Normal Processes and became Associate Professor in Statistics at the University of Granada in 1986. Her area of research interest is estimation in stochastic systems. She is a member of SEIO (Sociedad Española de Estadística e Investigación Operativa).

About the Author: ANDREW SEGALL is a Ph.D. student at Northwestern University, Evanston, Illinois, and a member of the Image and Video Processing Laboratory. His research interests are in image processing and include scale space theory, nonlinear filtering and target tracking.
