Unified Hilbert space approach to iterative least-squares linear signal restoration


Vol. 73, No. 11/November 1983/J. Opt. Soc. Am. 1455


Jorge L. C. Sanz and Thomas S. Huang

Coordinated Science Laboratory, 1101 W. Springfield Avenue, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801

Received December 22, 1982

We deal with iterative least-squares solutions of the linear signal-restoration problem g = Af. First, several existing techniques for solving this problem with different underlying models are unified. Specifically, the following are shown to be special cases of a general iterative procedure [H. Bialy, Arch. Ration. Mech. Anal. 4, 166 (1959)] for solving linear operator equations in Hilbert spaces: (1) a Van Cittert-type algorithm for deconvolution of discrete and continuous signals; (2) an iterative procedure for regularization when g is contaminated with noise; (3) a Papoulis-Gerchberg algorithm for extrapolation of continuous signals [A. Papoulis, IEEE Trans. Circuits Syst. CAS-22, 735 (1975); R. W. Gerchberg, Opt. Acta 21, 709 (1974)]; (4) an iterative algorithm for discrete extrapolation of band-limited infinite-extent discrete signals and the minimum-norm property of the extrapolation obtained by the iteration [A. Jain and S. Ranganath, IEEE Trans. Acoust. Speech Signal Process. ASSP-29 (1981)]; and (5) a certain iterative procedure for extrapolation of band-limited periodic discrete signals [V. Tom et al., IEEE Trans. Acoust. Speech Signal Process. ASSP-29, 1052 (1981)]. The Bialy algorithm also generalizes the Papoulis-Gerchberg iteration to cases in which the ideal low-pass operator is replaced by some other operators. In addition, a suitable modification of this general iteration is shown. This technique leads us to new iterative algorithms for band-limited signal extrapolation. In numerical simulations some of these algorithms provide a fast reconstruction of the sought signal.

1. INTRODUCTION

Iterative reconstruction of distorted signals has received much attention in the engineering literature. Many algorithms have been presented for different models of signals. The reader is referred to Ref. 1 for a comprehensive review.

We present an approach that unifies a number of important algorithms in the restoration of linearly distorted signals. The basic tool that we use is that of an iterative least-squares solution of linear operator equations in Hilbert spaces. The advantages of this approach, which is based on a result given by Bialy (Ref. 2), are the following:

(1) Several apparently disconnected algorithms, some of which have recently received much interest, can be considered special cases of Bialy's iteration.

(2) All these algorithms can be shown to be convergent by using a rather general tool.

(3) A simple generalization of the basic iterative procedure is shown to provide some new restoration algorithms that perform fast reconstruction of the signal for which we are looking.

Section 2 reviews some fundamentals of linear operators in Hilbert spaces. Special emphasis is put on pseudoinverse solutions and the Bialy iteration for nonnegative symmetric operators. In Section 3 we show that this iteration can be used to obtain the iterative procedures mentioned in the abstract of this paper. In particular, we obtain a generalization of the Papoulis-Gerchberg algorithm for the continuous extrapolation problem. In Section 4 we show how a simple generalization of the Bialy iteration provides some useful recursive techniques for restoration. Some numerical examples showing the performance of these algorithms are presented in Section 5, in which the application problem is band-limited signal extrapolation. A numerical comparison of these algorithms with the Papoulis-Gerchberg procedure is presented.

2. BASIC THEORY

Let us recall what is meant by bounded and compact linear operators in Hilbert spaces. Let H₁ and H₂ be two Hilbert spaces and A: H₁ → H₂ a linear operator. We say that A is bounded (also continuous) if there exists a real number C such that

‖Ax‖₂ ≤ C‖x‖₁ for all x ∈ H₁,

where ‖·‖ᵢ denotes the norm in Hᵢ.

The operator A is called compact if it maps every bounded set S ⊂ H₁ onto a set A(S) whose closure is compact. In other words, A is compact if and only if, for every bounded sequence {xₙ, n ∈ N} ⊂ H₁, there exist a subsequence {xₙₖ, k ∈ N} and y ∈ H₂ such that

‖A(xₙₖ) − y‖₂ → 0 as k → ∞.

The reader is referred to Ref. 3 for further theoretical de-tails.

Obviously, if a linear operator is compact, it will also be bounded. The converse does not hold in general. However, if H₂ is of finite dimension, both classes of operators coincide. The adjoint of A is another linear operator, A†: H₂ → H₁, characterized by the following identity:

(A†y, x)H₁ = (y, Ax)H₂,

0030-3941/83/111455-11$01.00 © 1983 Optical Society of America


where ( , )Hᵢ denotes the inner product. A linear operator A: H → H is called symmetric if A = A†. In that case, we say that A is nonnegative if (Ax, x) ≥ 0 for all x ∈ H. We concern ourselves with iterative solutions to the linear problem Ax = y, where A: H₁ → H₂ is bounded and y ∈ H₂ is given.

It happens frequently that y does not belong to the range of A, and therefore there is no x with Ax = y. In that case, one may attempt to find the minimum-norm least-squares solution. However, for infinite-dimensional spaces this approach is not always successful because the least-squares solutions may fail to exist. We need to recall some related results for our applications.

It is well known that the range of a bounded operator may not be closed. The situation is even worse for compact operators, since it can be proved that the range of such an operator is almost never closed. Undoubtedly, this result is the main drawback for a pseudoinverse approach to solving the operator equation Ax = y, because most distortion equations in signal processing are given by compact operators. The following lemmas, which are proved in Ref. 4, help in understanding the matter and are useful for the remainder of our paper.

Lemma 1

Let A: H₁ → H₂ be a linear bounded operator. For a fixed y ∈ H₂, let S = {x ∈ H₁: Ax = Qy} and N = {x ∈ H₁: A†Ax = A†y}. Then S = N. [Q: H₂ → cl A(H₁) is the projection operator onto the closure of the range of A.]

The equation A†Ax = A†y is recognized as the normal equation for A. It is obvious that if Qy does not belong to the range of A, R(A), then N = S = ∅. Therefore, since R(A) may not be closed, there exist many points y ∈ H₂ with Qy ∉ R(A). In other words, N will not be empty if and only if y ∈ R(A) + R(A)⊥ (⊥ denotes the orthogonal complement).

Lemma 2

For a fixed y E H2, the set of least-squares solutions

{u ∈ H₁: ‖Au − y‖ = inf{‖Ax − y‖: x ∈ H₁}}

coincides with the set of solutions of the normal equation A†Au = A†y.

From the comments given above, it is clear that the set of least-squares solutions will not be empty if and only if y ∈ R(A) + R(A)⊥. In that case, this set will be closed and convex, and therefore there will be an element u = A⁺y that has minimum norm among all that satisfy A†Au = A†y.

Another simple but important property is the followingone.

Lemma 3
If y ∈ R(A), then A⁺y is the minimum-norm solution of the linear equation Ax = y.

Lemma 3 says that, if a solution to the problem Ax = y exists, then the minimum-norm solution will make sense and will coincide with the generalized inverse A⁺y. This is a simple consequence of the fact that y ∈ R(A) ensures that the normal equation A†Ax = A†y has the same set of solutions as Ax = y.

One would like to have pseudoinverse solutions for every y ∈ H₂. However, as we have shown above, this will be possible if and only if R(A) is closed. In that case, the generalized inverse A⁺: H₂ → H₁ is a well-defined bounded operator. The boundedness of A⁺ shows that finding a pseudosolution A⁺y is a stable problem, i.e., small perturbations in the data produce small changes in the pseudosolution A⁺y. As we mentioned above, this will almost never be the case if A is compact.

Lemma 4
If A: H₁ → H₂ is compact and R(A) is closed, then A is degenerate, i.e., R(A) is of finite dimension.

Some examples of compact operators may clarify the matter. Let us suppose that our distortion can be written either as

g(y) = ∫_{−a}^{a} h(x, y)f(x)dx,  y ∈ (−b, b),

for all f: ∫_{−a}^{a} |f(x)|²dx < ∞,

for a continuous model, or as

g(m) = Σ_{n∈Z} h(m, n)f(n),  m ∈ Z,

for all f: Σ_{n∈Z} |f(n)|² < ∞,

for a discrete model.

In both cases, under rather general conditions on h it can

be shown that the corresponding distortion operator is compact. Sometimes the situation is even worse because the range is not only nonclosed but also dense [i.e., cl R(A) = H₂]. In practical terms, this means that, if the given datum g is contaminated with some additive noise η, the problem becomes intractable from a generalized-inverse point of view. This is because R(A)⊥ = {0}, and therefore A⁺g will never exist if the noise has any component that is outside R(A) (this is almost always the case). An example of dense range is provided by the set of band-limited functions with a fixed bandwidth Ω. It is well known that this set is dense in the set of finite-energy functions over an interval (−a, a) (see Refs. 5 and 6). We shed more light on this problem in Section 3.B.

In what follows we state the Bialy iteration, which is also useful for computing generalized-inverse solutions of Ax = y. This iteration is the core of the next section and provides the basic tool for the announced unification of algorithms.

To this end, if A is a bounded linear operator, we denote by ‖A‖ the infimum of the numbers c such that ‖Ax‖ ≤ c‖x‖ for all x ∈ H₁. We also denote by P the orthogonal projection onto the kernel of A, ker(A) = {x ∈ H₁: Ax = 0}.

Theorem 1

Let A: H → H be a linear bounded nonnegative operator. For y ∈ H, x₀ ∈ H consider the iterative process

xₙ₊₁ = xₙ + a(y − Axₙ),    (1)

where 0 < a < 2/‖A‖. Then the sequence {xₙ, n ≥ 0} converges if and only if Ax = y has a solution. In that case,

xₙ → Px₀ + x̂,

where x̂ is the minimum-norm solution.

We would like to make some remarks about Theorem 1. It is clear that if the initial approximation x₀ is zero, then {xₙ} will


approach the minimum-norm solution of the equation Ax = y. The theorem also says that this will happen if and only if the equation has at least one solution.
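As a concrete illustration, recursion (1) can be exercised numerically in a finite-dimensional space, where a symmetric nonnegative operator is simply a positive-semidefinite matrix. The sketch below is our own; the matrix, data, and step size are illustrative choices, not taken from the paper:

```python
import numpy as np

# Iteration (1): x_{n+1} = x_n + a (y - A x_n), for a symmetric
# nonnegative (positive-semidefinite) matrix A and 0 < a < 2/||A||.
def bialy_iteration(A, y, a, x0=None, n_iter=500):
    x = np.zeros_like(y, dtype=float) if x0 is None else np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x = x + a * (y - A @ x)
    return x

A = np.array([[2.0, 0.0],
              [0.0, 1.0]])        # symmetric, nonnegative; ||A|| = 2
y = np.array([2.0, 3.0])
x = bialy_iteration(A, y, a=0.5)  # 0 < 0.5 < 2/||A|| = 1
```

With x₀ = 0 the iterates approach the minimum-norm solution (here simply A⁻¹y); starting from a nonzero x₀ adds the component Px₀ lying in ker(A), exactly as the theorem states.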

Judging from the appearance of Eq. (1), it may be said that recursion (1) tries to compute a fixed point of the mapping Gx = ay + (I − aA)x. Although G is nonexpansive, it cannot be said that the fixed point exists or that the iterative procedure converges because of a contractive property of G. In fact, this situation will almost never occur. The reason for this assertion is given by the following.

Lemma 5
If A: H → H is a bounded linear operator such that iteration (1) converges for all y ∈ H for some x₀ and some a ≠ 0, then A cannot be compact unless H is of finite dimension.

The proof of this lemma is left to the reader. Lemma 5 says that, if A is a compact operator and the dimension of H is not finite (which covers most of the cases in which we are interested), recursion (1) must be divergent for some y. In particular, I − λA will not be a contraction mapping irrespective of the choice of λ.

A relevant characteristic of the hypotheses of Theorem 1 is that A is assumed to be nonnegative, apparently excluding many operators for which this condition is not met. However, Bialy's theorem can be used to compute iteratively the minimum-norm least-squares solution for any bounded linear operator. This result is obtained easily if we recall that the minimum-norm least-squares solution (whenever it exists) is the minimum-norm solution of the normal equation A†Ax = A†y. Then Bialy's theorem can be applied because A†A is a nonnegative linear bounded operator. Thus we have the following theorem.

Theorem 2
Let A: H₁ → H₂ be a bounded linear operator and y ∈ R(A) + R(A)⊥; consider the iterative equation

x₀ = 0,
xₙ = xₙ₋₁ + aA†(y − Axₙ₋₁),  n ≥ 1,    (2)

where 0 < a < 2/‖A†A‖. Then {xₙ} converges to the minimum-norm least-squares solution A⁺y.

It is worth noting that Theorem 2 assumes that y ∈ R(A) + R(A)⊥, and therefore the sought generalized-inverse solution A⁺y exists.
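In finite dimensions, iteration (2) is the classical Landweber recursion, and its limit can be checked against the pseudoinverse. The following minimal sketch uses an illustrative underdetermined system of our own choosing:

```python
import numpy as np

# Iteration (2) with x_0 = 0 converges to the minimum-norm
# least-squares solution A+ y (Theorem 2).
def landweber(A, y, a, n_iter=2000):
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + a * A.T @ (y - A @ x)
    return x

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])          # underdetermined: many solutions
y = np.array([1.0, 1.0])
a = 1.0 / np.linalg.norm(A.T @ A, 2)     # 0 < a < 2/||A†A||
x = landweber(A, y, a)
x_pinv = np.linalg.pinv(A) @ y           # A+ y, for comparison
```

Because the updates all lie in the range of Aᵀ, the iterates accumulate no component in ker(A), which is why the limit is the minimum-norm solution rather than an arbitrary one.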

To end this section, we point out some results that are connected to those of Theorems 1 and 2. Two special cases of Bialy's theorem were proved earlier for integral operators, which are a case of compact operators arising often in practical applications. In 1956, Fridman (Ref. 7) proved Theorem 1 for the case in which A is given by Af(x) = ∫_{−a}^{a} h(x, t)f(t)dt, f is any finite-energy function, i.e., f ∈ L²(−a, a), and the kernel h is positive [and symmetric: h(x, t) = h(t, x)]. In 1951, Landweber (Ref. 8) proved Theorem 2 for the case Af(x) = ∫_{−a}^{a} h(x, t)f(t)dt, removing the assumptions made on h(x, t). In both cases, h(x, t) must define a compact operator, a condition that is often met.

A final remark is in order. It can easily be deduced from Theorem 2 and the discussion on pseudosolutions presented in this section that iteration (2) will approach the minimum-norm solution of the equation Ax = y whenever y ∈ R(A). To see that, we just recall that {x ∈ H₁: A†Ax = A†y} = {x ∈ H₁: Ax = y} for y ∈ R(A). In particular, if the solution exists and is unique, it will also be obtained by procedure (2).

3. APPLICATIONS

In this section we show several applications of the results discussed in Section 2.

A. Van Cittert-Type Algorithms
We now consider a continuous-continuous deconvolution problem. Let L²(B) denote the Hilbert space of finite-energy functions defined on B, i.e., L²(B) = {f: B → R: ∫_B |f(t)|²dt < ∞}. Let h be a function such that the following linear operator is bounded:

A: L²(S) → L²(T),
f ↦ Af(t) = ∫_S h(t − s)f(s)ds,  t ∈ T.

[If T or S is bounded and h satisfies ∫_S∫_T |h(s − t)|²ds dt < ∞, then A will be bounded. In that case, it can be proved that

‖Af‖_{L²(T)} ≤ {∫_S∫_T |h(s − t)|²ds dt}^{1/2} ‖f‖_{L²(S)},

where ‖y‖_{L²(B)} stands for the norm {∫_B |y(t)|²dt}^{1/2}. Another case for which A is bounded is obtained if the function h has compact support, that is to say, h(s) = 0 if s ∉ C, where C is a compact set in Rⁿ.]

If S = T, J_S denotes truncation to S, and h satisfies the additional properties ∫_{Rⁿ} |h(t)|²dt < ∞ and ĥ(ω) ≥ 0 for all ω ∈ Rⁿ, where ˆ denotes the Fourier transform, then A is a nonnegative operator. To see this, we write

(Af, f)_{L²(S)} = ∫_S (Af)(s)f(s)ds = ∫ (h * J_S f)(s)(J_S f)(s)ds,

where the symbol * stands for convolution defined over Rⁿ. By means of Parseval's equality, we obtain (Af, f)_{L²(S)} = ∫_{Rⁿ} (h * J_S f)ˆ(ω)[(J_S f)ˆ(ω)]* dω. But (h * J_S f)ˆ(ω) = ĥ(ω)(J_S f)ˆ(ω); then (Af, f)_{L²(S)} = ∫_{Rⁿ} ĥ(ω)|(J_S f)ˆ(ω)|²dω, which is always nonnegative.

We now assume that g(s), s ∈ S, is the output of our continuous system defined by A. If we are interested in recovering the input f(s), s ∈ S, we can apply Theorem 1 to obtain a sequence {fₙ} given by

f₀ = 0,
fₙ₊₁ = fₙ + a(g − h * J_S fₙ),    (3)

which converges to the minimum-energy signal that produces the output g. Another way of writing Eqs. (3) is

fₙ₊₁(s) = fₙ(s) + a[g(s) − ∫_S h(s − t)fₙ(t)dt],  s ∈ S,

or, equivalently, fₙ₊₁(s) = ag(s) + [fₙ(s) − a∫_S h(s − t)fₙ(t)dt].

This is a Van Cittert-type recursion whose convergence is ensured. Several remarks are in order. Perhaps the most important observation is that ĥ may vanish at some frequencies without affecting the convergence of the procedure. It is also clear that many choices of a can be tried whenever 0 < a < 2/‖A‖. If S is bounded, we can choose any a that satisfies

0 < a < 2/{∫_S∫_S |h(x − t)|²dx dt}^{1/2}.

The classical Van Cittert algorithm is for the case S = T = Rⁿ. It is this assumption that makes the proof of Van Cittert's iteration [Eqs. (3)] so simple if a is chosen to satisfy |1 − aĥ(ω)| < 1 whenever ĥ(ω) ≠ 0 (Ref. 1). Therefore, if S ≠ Rⁿ, under the more stringent condition ĥ(ω) ≥ 0, Bialy's iteration provides a nontrivial extension of the classical version of Van Cittert's algorithm.

We next consider a discrete-discrete deconvolution problem. In this case, the underlying Hilbert space is l²(B) = {aₘ: m ∈ B, Σ_{m∈B} |aₘ|² < ∞}, where B is a subset of Zⁿ. Let h(m) be a sequence such that the operator

A: l²(B) → l²(C),
x(m), m ∈ B ↦ Σ_{m∈B} h(k − m)x(m),  k ∈ C,

is bounded. Several conditions on h, similar to those given for the continuous case, can be found to ensure the boundedness of A.

If h satisfies

Σ_{m∈Zⁿ} |h(m)|² < ∞,

B = C, and the Fourier series of h satisfies

ĥ(ω) = Σ_{m∈Zⁿ} h(m)e^{−2πimω} ≥ 0

for all ω, then Bialy's theorem will apply. Thus the iteration

xₘ(k) = xₘ₋₁(k) + a[g(k) − (h * J_B xₘ₋₁)(k)],  k ∈ B,    (4)

will converge to the minimum-norm solution of the problem g(m) = (h * J_B f)(m), m ∈ B, provided that at least one solution exists. Equation (4) can be written as follows:

xₘ(k) = ag(k) + xₘ₋₁(k) − a Σ_{j∈B} h(k − j)xₘ₋₁(j),  k ∈ B,    (4')

with 0 < a < 2/‖A‖. A simple rationale for choosing a is 0 < a < 2/[sup_ω ĥ(ω)].

Equation (4) [or its equivalent form, Eq. (4')] is a Van Cittert recursive formula when the models for the observed and unknown signals are both discrete.
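On a finite index set B, recursion (4') is easy to try out numerically. In the sketch below (signal, kernel, and sizes are our own illustrative choices), the kernel h = (1/4, 1/2, 1/4) has nonnegative Fourier series ĥ(ω) = 1/2 + (1/2)cos 2πω, so the hypothesis of Theorem 1 holds:

```python
import numpy as np

# Van Cittert recursion (4'): x_m = a g + x_{m-1} - a (h * J_B x_{m-1}),
# restricted to the finite set B; here B = {0, ..., 4}.
h = np.array([0.25, 0.5, 0.25])           # symmetric kernel, h^(w) >= 0
f_true = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
g = np.convolve(f_true, h, mode="same")   # observed g = (h * J_B f) on B

a = 1.0                                   # 0 < a < 2/sup h^(w) = 2
x = np.zeros_like(g)
for _ in range(300):
    x = a * g + x - a * np.convolve(x, h, mode="same")
```

Because the truncated convolution matrix is nonsingular in this example, the iterates recover f exactly; when solutions are nonunique, the limit is the minimum-norm one.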

It is worth noticing that Eq. (4') can also be written by using an operator-type notation:

xₘ(k) = ag(k) + [(δ − ah) * J_B xₘ₋₁](k),  k ∈ B,    (5)

where δ: Zⁿ → C, δ(0) = 1, δ(k) = 0 if k ≠ 0.

Recursion (5) was also considered in Ref. 1 for the special case in which B is bounded. Under this assumption, Eq. (5) was shown in Ref. 9 to converge. However, in Ref. 9 it was also proved that if B is bounded, a can be chosen independently of B so that (δ − ah) * J_B is a contraction mapping under mild assumptions on h [which include the case ĥ(ω) > 0 for all ω]. We think that the theoretical importance of the a priori constraint that the signal sought is limited to the set B is now understood. From Lemma 5 and related discussions, it is seen that (δ − λh) * J_B will be a contraction mapping for a certain λ and for a rather general h only if B is bounded. On the other hand, if B is not bounded, Eq. (5) will converge to the minimum-norm solution of the deconvolution problem (Theorem 1), as was shown, but the contractive property of (δ − λh) * J_B will not hold in general.

To conclude this subsection, we emphasize that, for the continuous-continuous model, even if the set S where the input signal f is nonzero is bounded, the operator I − a(h * J_S ·) will not be a contraction mapping in general. This is a major difference between the continuous and the discrete models: l²(B) is of finite dimension if B is bounded, whereas L²(S) does not have this property.

B. Pseudoinverse Regularization
The deconvolution problem that was discussed in Section 3.A usually requires a more involved solution because of the following facts:

(1) g is given with noise, and therefore the solution to theproblem g = Af may not exist.

(2) h may not satisfy ĥ(ω) ≥ 0 for all ω.

(3) The period of time in which the observation g is given, C, may not coincide with the support B of the signal sought.

A full answer to problems (2) and (3) and a partial solution to problem (1) are given in this section. To this end, we show the convergence of an iterative reconstruction algorithm. We consider the discrete-discrete model only. Similar comments and results hold for the continuous-continuous case. With the same notation as in Section 3.A, let us suppose that our observation g(m), m ∈ C, is given with noise. Assume that g ∈ R(A) + R(A)⊥. Then the following problem will always have a solution: A†Af = A†g, where A† denotes the adjoint of A (see Section 2). For the convolution case

(Af)(m) = Σ_{k∈B} h(m − k)f(k),  m ∈ C,

then (A†q)(k) = Σ_{m∈C} h(−k + m)q(m), k ∈ B. This means that A† is also given in terms of a convolution in which the new kernel is h*(−m). Specifically, A†A is given by

(A†Af)(j) = Σ_{k∈B} [Σ_{m∈C} h(m − j)h(m − k)] f(k),  j ∈ B,

which is always nonnegative, as was pointed out in Section 2.

Thus we can apply Theorem 2 for computing the minimum-norm least-squares solution of Af = g, if g ∈ R(A) + R(A)⊥, by means of the iteration

x₀ = 0,
xₘ = xₘ₋₁ + aA†(g − Axₘ₋₁),  m ≥ 1,    (6)

where 0 < a < 2/‖A‖².

An equivalent expression for Eqs. (6) is obtained by replacing A and A†:

x₀ = 0,
xₘ(k) = a Σ_{m∈C} h(m − k)g(m) + xₘ₋₁(k) − a Σ_{j∈B} [Σ_{m∈C} h(m − k)h(m − j)] xₘ₋₁(j),  k ∈ B.    (6')

It is worth pointing out that Eqs. (6) or Eqs. (6') will not converge if the noisy data g ∉ R(A) + R(A)⊥. However, we think that this approach is useful for understanding the limitations of the technique and for setting a condition to ensure convergence or divergence of the iteration.

A particular case is obtained for B = C = Zⁿ. In that case, the technique that consists of convolving the equation h * f = g with h(−m) has been proposed independently by several authors (Refs. 1 and 10), but the approaches used were conceptually different. For B = C = Zⁿ, the operator A†A is given by a convolution whose kernel is h(m) * h(−m). Then the transfer function of the system A†A is |ĥ(ω)|², ω ∈ Rⁿ. Since |ĥ(ω)|² is always nonnegative, Van Cittert's algorithm applies to the equation [h(−m) * h(m)] * f = h(−m) * g, producing the following frequency-space recursion:

x̂₀(ω) = 0,
x̂ₘ(ω) = aĥ*(ω)ĝ(ω) + [1 − a|ĥ(ω)|²]x̂ₘ₋₁(ω),    (7)

which is, of course, equivalent to Eqs. (6') when B = C = Zⁿ. It is worth pointing out that there should be a solution to the problem h(m) * f = g in order to obtain a finite-energy discrete signal whose Fourier transform is the limit of recursions (7).
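Recursion (7) can be sketched with the DFT standing in for the Fourier transform (a periodic surrogate for B = C = Zⁿ; the kernel and signal below are our own illustrative choices):

```python
import numpy as np

# Frequency-space recursion (7), applied independently per DFT bin:
# X_m = a conj(H) G + (1 - a |H|^2) X_{m-1}.
rng = np.random.default_rng(0)
f_true = rng.standard_normal(64)
h = np.zeros(64)
h[:3] = [0.5, 0.3, 0.2]                   # blur kernel; H has no zeros here
H = np.fft.fft(h)
G = np.fft.fft(f_true) * H                # noiseless data g = h * f

a = 1.0 / np.max(np.abs(H)) ** 2          # 0 < a < 2/sup|H|^2
X = np.zeros_like(G)
for _ in range(5000):
    X = a * np.conj(H) * G + (1.0 - a * np.abs(H) ** 2) * X
x = np.fft.ifft(X).real
```

If ĥ vanished on part of the spectrum, the corresponding bins of X would simply remain at zero, which is the minimum-norm (pseudoinverse) behavior described above.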

As we showed above, the pseudoinverse approach providesa full answer to the problem when B and C are any subset ofZn, and the convergence of Eqs. (6) is characterized in termsof the data g.

To end this subsection, we would like to remark that if g ∉ R(A) + R(A)⊥, then Eqs. (6) will not converge. Therefore caution is recommended when iterations (6) [or Eqs. (6')] are used for theoretical derivations. On the other hand, when Eqs. (6) [or Eqs. (6')] are implemented numerically, a finite piece of the signals is used, and therefore convergence of the iteration is guaranteed. The conceptual point is that, for implementation purposes, the underlying model for the distortion is g(m) = (h * J_B f)(m), m ∈ C, where both B and C are finite. Then the pseudoinverse solution will always exist and can be approximated by means of Eqs. (6').

C. Papoulis-Gerchberg Iteration
Let us assume that g: F → C is a piece of an Ω band-limited function [i.e., ĝ(ω) = 0, ω ∉ Ω], where F ≠ ∅ is an open subset of Rⁿ. Let us suppose that the complete function g satisfies the finite-energy constraint ∫_{Rⁿ} |g(x)|²dx < ∞.

Papoulis (Ref. 6) and Gerchberg (Ref. 11) proposed the following algorithm for computing the continuation of g or its Fourier transform:

g₀ = 0,
gₘ = sinc_Ω * [J_F g + (I − J_F)gₘ₋₁],  m ≥ 1,    (8)

where sinc_Ω denotes the function whose Fourier transform is the indicator of Ω.


In Ref. 6, Eq. (8) was shown to be convergent to g in the energy norm for the one-dimensional case. In Ref. 12 another approach was shown to prove convergence of Eq. (8), which is also valid for the multidimensional case. However, in Ref. 13 this algorithm was presented as a special case of Landweber's iteration (Ref. 8). The underlying operator equation is (Af)(x) = g(x), where A is an integral operator given by

(Af)(x) = ∫_Ω f(ω)exp(−2πiωx)dω,  x ∈ F;    (9)

f ∈ L²(Ω), and F and Ω are assumed to be bounded. It is obvious that the solution sought is f = ĝ and is also unique. We can now apply Theorem 2 to get a recursion:

f₀ = 0,
fₘ = fₘ₋₁ + aA†(g − Afₘ₋₁),    (10)

where 0 < a < 2/‖A†A‖ and A† is the adjoint integral operator given by (A†h)(ω) = ∫_F h(x)exp(2πiωx)dx, ω ∈ Ω.

It is easy to verify that ‖A†A‖ ≤ 1; then a = 1 is an admissible value and, from Eqs. (10),

f₀ = 0,
fₘ = fₘ₋₁ + A†(g − Afₘ₋₁)    (10')

will converge in the energy norm to the unique solution f of the equation (Af)(x) = g(x), x ∈ F. But in that case, f̌ₘ → f̌ also in the energy norm (ˇ denotes the inverse Fourier transform). Since fₘ and f are supported on Ω, f̌ₘ(x) = ∫_Ω fₘ(ω)exp(−2πixω)dω and f̌(x) = ∫_Ω f(ω)exp(−2πiωx)dω, x ∈ Rⁿ. We now apply the inverse transform to both sides of Eqs. (10'):

f̌₀ = 0,
f̌ₘ = f̌ₘ₋₁ + {∫_F [g(x) − ∫_Ω fₘ₋₁(ω)exp(−2πixω)dω] exp(2πiωx)dx}ˇ.    (11)

If we call gₘ = f̌ₘ, m ≥ 0, we will obtain the recursion

g₀ = 0,
gₘ = gₘ₋₁ + sinc_Ω * (J_F g − J_F gₘ₋₁).    (11')

Since gₘ₋₁ is Ω band limited, Eqs. (11') are equivalent to the following:

g₀ = 0,
gₘ = sinc_Ω * [J_F g + (I − J_F)gₘ₋₁],  m ≥ 1.    (12)

Equations (12) are the Papoulis-Gerchberg iteration [Eqs. (8)].
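A discrete analogue of iteration (8) is easy to simulate, with the DFT playing the role of the Fourier transform. The signal, band Ω, and window F below are our own illustrative choices:

```python
import numpy as np

# Papoulis-Gerchberg iteration (8), DFT version: re-impose the known
# samples on F, then project back onto the band-limited signals.
N = 64
n = np.arange(N)
f_true = 1.0 + np.cos(2 * np.pi * n / N + 0.7)  # band limited to bins {0, 1, -1}
band = np.zeros(N, dtype=bool)
band[[0, 1, N - 1]] = True                      # the band Ω
F = n < 32                                      # observation window F

def lowpass(x):                                 # circular convolution with sinc_Ω
    X = np.fft.fft(x)
    X[~band] = 0.0
    return np.fft.ifft(X).real

g = np.zeros(N)
for _ in range(2000):
    g = lowpass(np.where(F, f_true, g))         # g_m = sinc_Ω*[J_F f + (I - J_F)g_{m-1}]
```

Here the band-limited extension of the observed half of the signal is unique, so the iterates converge to the full signal; the rate degrades quickly as the band widens or the window shrinks, which is one motivation for the faster algorithms discussed in Sections 4 and 5.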

We now show a generalization of Eqs. (8) to cases in which the low-pass convolution is performed by some other operator.

To this end, we need to assume some further informationrelated to, = f. Let us suppose that, for a certain nonnegativebounded function h(w), w e £, h1(w) = 0, w $ Q, g satisfies

1460 J. Opt. Soc. Am./Vol. 73, No. 11/November 1983

Then, if we consider the operator A: L2 (Q) - L2(F),

(As)(x) = I hi1/2(w)exp(2 riwx)s(w)dw, x e F,

J. L. C. Sanz and T. S. Huang

(13)(2) Compute

E sinc(k-m)x(m) = y(k),meF

(14)

the equation g(x) = (As)(x), x e F will have a solution inL 2 (Q) (which is obviously g//i 1/2). It readily follows that thesolution is also unique. We can now apply Theorem 2 to theequation g(x) = (As)(x), x e F for A given in Eq. (14) to getan iterative procedure:

fo = 0,

fm = fm-i + aAt (g - Afm-i),

which converges to the unique solution. IEqs. (15) become

Ao = 0,

fm(W) f m-(W) + a 5 11/2(x)exp(27riwj

x [W(x - If /12(z)exp(-2rixz)f 1

If we now multiply both sides of Eqs.exp(-2 riwy), integrate with respect to w E

fo fm(W)I112(w)exp(-27riwy)dw, we obi

go = 0,

y e Rng,(y) = g,-i(y) + a SF {SI f (w"

X exp[2lriw(x - y)]dw} [g(x) - g,

Then it was shown that y can be computed by the followingiterative algorithm:

Yo = 0,

k E Zn, ym (k) = Ym-.i(k) + a E sincu (k -j)jeF

X [z j) -Yr-,(j)], m 2 1 (19)

for 0 < a < 2. (Both results were extended for arbitrarym > 1, (15) multidimensional F and Q in Ref. 17; for a relationship be-

tween this discrete solution and the continuous extrapolationMore specifically, problem given in Section 3.C, see Ref. 17.)

Perhaps the earliest reference to the technique given byEqs. (18) is Ref. 18 in which this problem was addressed undera rather different name and by using a quite general ap-proach.

The fact that iteration (19) computes the same sequenceas that of Eqs. 18 is simple. In this section, we show that it-

1. eration (19) can be obtained from Bialy's iteration for a certainL(z)dz]dx. (15') operator equation problem. The minimum-norm property

of the limit sequence is readily derived as a by-product.Let A: L 2(,Q) - 12(F) be the following linear operator:

k1D'J DY n-ktw)

Q. and call gm (Y)tain

m-l(x)]dx. (16)

If we call h(z) = ∫_Ω h_1(w) exp(2πiwz) dw, Eqs. (16) become

g_0 = 0,

g_m = g_{m-1} + α h(−z) * [J_F(g − g_{m-1})], m ≥ 1, (17)

which converges to g uniformly over compact sets in R^n when 0 < α < 2/[sup_w h_1(w)] and condition (13) is satisfied.
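In a discretized setting, iteration (15) is the familiar Landweber recursion, and its convergence is easy to observe numerically. A minimal sketch (a random full-column-rank matrix stands in for the integral operator of Eq. (14); all names and sizes are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discrete stand-in for the operator A of Eq. (14).
A = rng.standard_normal((30, 20))
f_true = rng.standard_normal(20)
g = A @ f_true                             # consistent data: g lies in R(A)

alpha = 1.0 / np.linalg.norm(A, 2) ** 2    # step size with 0 < alpha < 2/||A||^2
f = np.zeros(20)                           # f_0 = 0
for _ in range(5000):
    f = f + alpha * A.T @ (g - A @ f)      # f_m = f_{m-1} + alpha A^t (g - A f_{m-1})

print(np.allclose(f, f_true, atol=1e-6))   # A has full column rank, so f_true is recovered
```

Here the error contracts by a factor 1 − αλ per step in each eigendirection of AᵗA with eigenvalue λ, which is exactly the mechanism behind the convergence bound 0 < α < 2/‖A‖² above.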

D. Iterative Extrapolation of Infinite-Extent Discrete Signals
Let F be a finite subset of Z^n and z(m), m ∈ F, a sequence of numbers. The discrete band-limited extrapolation problem consists of finding an infinite sequence y(m), m ∈ Z^n, such that y(m) = z(m), m ∈ F, and y(m) is Ω band limited, i.e., ŷ(w) = Σ_{m∈Z^n} y(m) exp(−2πimw) = 0 if w ∉ Ω (a fixed bounded set of frequencies Ω ⊂ [−1, 1]^n) (see Refs. 14-17).

The solution to this problem is nonunique.¹,¹⁴ In Ref. 14 it was shown that the minimum-norm discrete extrapolation y can be computed by means of the following two-step procedure:

(1) Solve for x:

Σ_{m∈F} sinc_Ω(k − m) x(m) = z(k), k ∈ F. (18a)

(2) Compute

y(k) = Σ_{m∈F} sinc_Ω(k − m) x(m), k ∈ Z^n. (18b)

Then it was shown that y can be computed by the following iterative algorithm:

y_0 = 0,

y_m(k) = y_{m-1}(k) + α Σ_{j∈F} sinc_Ω(k − j) [z(j) − y_{m-1}(j)], k ∈ Z^n, m ≥ 1, (19)

for 0 < α < 2. (Both results were extended to arbitrary multidimensional F and Ω in Ref. 17; for a relationship between this discrete solution and the continuous extrapolation problem given in Section 3.C, see Ref. 17.)

Perhaps the earliest reference to the technique given by Eqs. (18) is Ref. 18, in which this problem was addressed under a rather different name and by using a quite general approach.

The fact that iteration (19) computes the same sequence as that of Eqs. (18) is simple. In this section, we show that iteration (19) can be obtained from Bialy's iteration for a certain operator-equation problem. The minimum-norm property of the limit sequence is readily derived as a by-product.

Let A: L²(Ω) → l²(F) be the following linear operator:

(Af)(m) = ∫_Ω f(w) exp(2πimw) dw, m ∈ F.

It is clear that A is bounded when L²(Ω) and l²(F) are equipped with the norms ∫_Ω |f(w)|² dw and Σ_{m∈F} |s(m)|², respectively. It is clear that the discrete extrapolation problem can be put in this equivalent way: Find

f ∈ L²(Ω): (Af)(m) = z(m), m ∈ F. (20)

From Parseval's formula it is seen that the minimum-norm extrapolation corresponds to minimizing ‖f‖_2, where f satisfies Eq. (20). We can now solve Eq. (20) by means of Bialy's iteration. To this end, we need to compute A†. It is simple to verify that, if s ∈ l²(F), then

(A†s)(w) = Σ_{m∈F} s(m) exp(−2πimw), w ∈ [−1, 1]^n.

Thus Bialy's iteration given by Theorem 2 becomes

f_0 = 0,

f_m(w) = f_{m-1}(w) + α Σ_{k∈F} [z(k) − ∫_Ω f_{m-1}(z) exp(2πizk) dz] exp(−2πikw), w ∈ Ω, (21)

and f_m converges to the minimum-norm solution of Eq. (20) in the L²(Ω) norm. Therefore ∫_Ω f_m(w) exp(2πiwk) dw, k ∈ Z^n, approaches the minimum-norm Ω band-limited extrapolation y(k), k ∈ Z^n, as m → ∞, in the l²(Z^n) norm (Parseval's formula). Then, if we call y_m(k) = ∫_Ω f_m(w) exp(2πikw) dw, k ∈ Z^n, Eqs. (21) become


y_0 = 0,

y_m(k) = y_{m-1}(k) + α Σ_{j∈F} sinc_Ω(k − j) [z(j) − y_{m-1}(j)], k ∈ Z^n, m ≥ 1, (19)

and convergence to y(k), k ∈ Z^n, is ensured for 0 < α < 2.

A final remark is in order. The operator A given by Eq. (20) satisfies R(A) = l²(F) because F is finite; therefore iteration (21) or, equivalently, iteration (19) will always converge to the minimum-energy solution of the problem. This means that the algorithm does not distinguish signal from noise.
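In one dimension the agreement between the two-step procedure of Eqs. (18) and iteration (19) can be checked directly. A small sketch (Ω = (−1/4, 1/4), three known samples, and arbitrarily chosen data values; everything here is illustrative):

```python
import numpy as np

def sinc_omega(k, W=0.25):
    # sinc_Omega(k) = integral of exp(2 pi i w k) over Omega = (-W, W) = 2W sinc(2Wk)
    return 2 * W * np.sinc(2 * W * np.asarray(k, dtype=float))

F = np.array([-1, 0, 1])                 # the finite set F of known positions in Z
z = np.array([0.3, 1.0, -0.5])           # hypothetical data z(m), m in F
K = np.arange(-30, 31)                   # positions at which the extrapolation is evaluated
G = sinc_omega(K[:, None] - F[None, :])  # sinc_Omega(k - j), k in K, j in F
S = sinc_omega(F[:, None] - F[None, :])  # the system matrix of Eq. (18a)

# Two-step procedure, Eqs. (18a) and (18b)
x = np.linalg.solve(S, z)                # (18a)
y_direct = G @ x                         # (18b), restricted to K

# Iteration (19) with alpha = 1 (any 0 < alpha < 2 works)
y = np.zeros(K.size)
on_F = np.isin(K, F)
for _ in range(1000):
    y = y + G @ (z - y[on_F])

print(np.allclose(y, y_direct, atol=1e-8))  # both yield the minimum-norm extrapolation
```

The iteration converges because the eigenvalues of the sinc_Ω matrix S lie in (0, 1), so the per-step error factor I − αS is a contraction for α = 1.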

E. Iterative Extrapolation of Periodic Discrete Signals
Another related discrete approach to band-limited extrapolation is to solve the following problem: Given

z(k), k = −k_0, ..., k_0; k_0 < N,

find

y(k), −N ≤ k ≤ N: y(k) = z(k), −k_0 ≤ k ≤ k_0,

Σ_{k=−N}^{N} y(k) exp(−2πikn/M) = 0, |n| > k_0, (22)

where M = 2N + 1. In this case, the band-limited property of y(k), −N ≤ k ≤ N, is given in terms of the discrete Fourier transform (DFT) ŷ(n) = Σ_{k=−N}^{N} y(k) exp(−2πikn/M).

In Ref. 19 the following iterative algorithm for computing extrapolation (22) was shown to be convergent:

y_0(k) = 0, −N ≤ k ≤ N,

y_n = IDFT{ŷ_n(m), −k_0 ≤ m ≤ k_0; 0, otherwise}, (23a)

where

ŷ_n = DFT{z(k), −k_0 ≤ k ≤ k_0; y_{n-1}(k), |k| > k_0} (23b)

[IDFT stands for the inverse discrete Fourier transform, given by (1/M) Σ_{k=−N}^{N} x̂(k) exp(2πikn/M), n = −N, ..., N].

It is clear that procedure (23) incorporates at every iteration the information available in both the time and frequency domains. In Ref. 19, the proof of the convergence of this recursion was carried out by means of a certain nonexpansive property of an operator in C^M.

In this section, we show that procedure (23) can also be considered a special case of Bialy's theorem. Perhaps this is the simplest of the examples presented in this section because of the finite-dimensional nature of the underlying Hilbert spaces.

Specifically, let A: C^{2k_0+1} → C^{2k_0+1} be given by the IDFT operator

(Ax)(n) = (1/M) Σ_{k=−k_0}^{k_0} x(k) exp(2πikn/M), −k_0 ≤ n ≤ k_0.

It is obvious that problem (22) can be restated as that of finding a vector x ∈ C^{2k_0+1} such that (Ax)(k) = z(k), −k_0 ≤ k ≤ k_0. It is known that this system of equations has a unique solution. We can apply Bialy's iteration [Eqs. (2)] for computing the solution x. So we obtain

x_0(k) = 0, −k_0 ≤ k ≤ k_0,

x_n = x_{n-1} + αA†(z − Ax_{n-1}), n ≥ 1, (24)

where α can be chosen as 1. (Here, A† is the conjugate transpose of A.) We now take the M-point DFT on both sides of Eqs. (24) to obtain

y_0 = 0,

y_n(k) = y_{n-1}(k) + (1/M) Σ_{h=−k_0}^{k_0} exp(2πihk/M) Σ_{m=−k_0}^{k_0} exp(−2πihm/M) [z(m) − y_{n-1}(m)], −N ≤ k ≤ N. (25)

It is easy to verify that Eqs. (25) can also be written in the following way:

y_0 = 0,

y_n(k) = (1/M) Σ_{h=−k_0}^{k_0} exp(2πihk/M) Σ_{m=−N}^{N} exp(−2πihm/M) [J_{k_0} z + (I − J_{k_0}) y_{n-1}](m), −N ≤ k ≤ N, (26)

where J_{k_0} denotes truncation to [−k_0, k_0]. It turns out that recursion (26) is the same as Eqs. (23), and therefore the convergence of y_n to the sought extrapolation is ensured.

In the derivation presented above it was assumed, to simplify notation, that the length of the DFT is odd: 2N + 1.

The advantage of this approach to interpreting iteration (23) is that it makes it possible to characterize the convergence of a similar procedure when the number of samples in the time and frequency domains is not the same. In such a case, the extrapolation problem obviously has either no solution or an infinite number of solutions. In both cases, the corresponding equations [Eqs. (26)] will provide the minimum-norm least-squares extrapolation.
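The alternating time/frequency substitution of Eqs. (23) is straightforward to reproduce with an FFT. A toy sketch (M = 7, k_0 = 1; the band-limited test vector is a hypothetical choice), exploiting the fact that the solution of problem (22) is unique:

```python
import numpy as np

M, k0 = 7, 1                       # M = 2N + 1 with N = 3
rng = np.random.default_rng(4)

freqs = np.rint(np.fft.fftfreq(M) * M).astype(int)   # 0, 1, 2, 3, -3, -2, -1
band = np.abs(freqs) <= k0                           # band limitation: |n| <= k0
known = np.abs(freqs) <= k0                          # known samples: |k| <= k0 (same index pattern)

Y = np.zeros(M, dtype=complex)
Y[band] = rng.standard_normal(band.sum()) + 1j * rng.standard_normal(band.sum())
y_true = np.fft.ifft(Y)            # a band-limited periodic signal solving problem (22)

y = np.zeros(M, dtype=complex)     # y_0 = 0
for _ in range(3000):
    w = y.copy()
    w[known] = y_true[known]       # substitute the known samples z(k), Eq. (23b)
    Yh = np.fft.fft(w)
    Yh[~band] = 0                  # enforce the band limitation, Eq. (23a)
    y = np.fft.ifft(Yh)

print(np.allclose(y, y_true, atol=1e-8))   # converges to the unique extrapolation
```

Note that `np.fft.fft` uses the kernel exp(−2πikn/M), matching the DFT convention of Eq. (22).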

4. EXTENSIONS

In this section we show that some extensions of the Bialy iteration can be useful for obtaining new algorithms for signal restoration and extrapolation.

In Ref. 20 we presented the following iteration:

f_0 = 0,

f_n = f_{n-1} + DA†(g − Af_{n-1}), n ≥ 1, (27)

and we related it to the numerical continuation of analytic functions. Let us assume that A: H_1 → H_2 is a bounded linear operator, g ∈ R(A) + R(A)⊥, and that D: R(A†) → R(A†) is a bounded linear symmetric operator that is one to one (i.e., Dx = 0 implies x = 0) and such that DA†A is nonnegative. Under these assumptions it can easily be shown that

lim_{n→∞} f_n = f,

where f is the minimum-norm least-squares solution of Af = g, if D is chosen so that ‖DA†A‖ < 2.

In the case when A is compact, the condition that DA†A is nonnegative may be put in terms of the eigenvectors of A†A.


This case was extensively analyzed in Ref. 21. The effects of different D's on the speed of convergence of iteration (27) were also studied in Ref. 21 for the compact case.

We show next how iteration (27) can be used for obtaining some generalizations of the Landweber-Papoulis-Gerchberg algorithm discussed in Section 3.C.

To this end, let us call A: L²(Ω) → L²(F) the compact operator given in Section 3.C,

(Af)(x) = ∫_Ω f(w) exp(−2πiwx) dw, x ∈ F. (9)

Since A†A is another linear nonnegative compact operator, there exists a family of eigenvalue-eigenfunction pairs (λ_i, φ_i), i = 1, 2, ..., of A†A such that

A†Aφ_n = λ_nφ_n, n = 1, 2, ... (see Ref. 3).

To ensure convergence of iteration (27) it is sufficient that the following conditions be met (see Ref. 21):

(1) Dφ_n = ρ_nφ_n, n = 1, 2, ...;
(2) ρ_n satisfies 0 < ρ_nλ_n < 2 for all n;
(3) ρ_n, n = 1, 2, ..., is a bounded sequence.

It is interesting to remark that the operator A†A is given by the integral kernel sinc_F, and therefore φ_n, n = 1, 2, ..., are the prolate spheroidal wave functions.⁵

Many operators can be chosen to satisfy conditions (1)-(3). In Ref. 21, it was shown that it is sufficient to pick D = G(A†A), where G(λ) is a polynomial or rational function such


Fig. 1. Fourier transform of the ideal signal.

Fig. 2. DFT of 129 samples in (−1, 1).



Fig. 3. DFT of 129 samples in (-1/2, 1/2).


Fig. 4. (a) Papoulis-Gerchberg algorithm; 20 iterations. (b) Papoulis-Gerchberg algorithm; 500 iterations.

that 0 < λG(λ) < 2 for 0 < λ ≤ 1. If D is so chosen, iteration (27) will converge in the L²(Ω) norm to the solution of the problem (Af)(x) = g(x), x ∈ F, where g: R^n → C is assumed to be an Ω band-limited function. If we now apply an inverse Fourier transform to both sides of iteration (27) we get the following recursion:

g_0 = 0,

g_n(x) = g_{n-1}(x) + ∫_Ω exp(2πiwx) D{∫_F exp(−2πiwz) [g(z) − g_{n-1}(z)] dz}(w) dw. (28)


Observe that when D = I, Eqs. (28) become the Landweber-Papoulis-Gerchberg algorithm. Equations (28) are thus a quite general version of this classical situation.

In the remainder of this section, we present some numerical simulation results comparing generalization (27) with the classical iteration (10'). To this end, let us define

g: R → R: g(x) = {sin[(π/2)x]/[(π/2)x]}² 2 cos πx.

The Fourier transform of this signal is plotted in Fig. 1. If we take the interval F = (−1, 1) as the known part of g, a fairly reasonable reconstruction of the Fourier transform can be obtained by means of the DFT of 129 samples. This result is plotted in Fig. 2. It is clear that the two peaks are easily distinguished. On the other hand, if F = (−1/2, 1/2) the situation will be completely different. Figure 3 plots the result obtained for the DFT of 129 samples in (−1/2, 1/2). This means that restricting the known part to (−1/2, 1/2) represents an irretrievable loss for the application of the naive inversion technique. In other words, by means of the DFT of samples of g on (−1/2, 1/2) the outstanding features of the spectrum of g are lost. Therefore we think that g: [−1/2, 1/2] → R is a reasonable test example for our numerical simulations.

We first apply the Landweber-Papoulis-Gerchberg iteration. Figure 4(a) shows the very poor result obtained after 20 iterations. Figure 4(b) plots the reconstructed Fourier transform after 500 iterations. In this case the result is better.



Fig. 6. (a) D = (A†A + 0.005I)⁻¹; 10 iterations. (b) D = (A†A + 0.005I)⁻¹; 50 iterations.

We now apply the more general procedure given by Eq. (27) for three different D's.

1. D = (A†A + γI)⁻¹
For this operator, iteration (27) is closely related to the Twomey-Tikhonov regularization method.²² In this case, iteration (27) becomes

f_0 = 0,

(A†A + γI)f_n = γf_{n-1} + A†g, n ≥ 1. (29)

Fig. 5. (a) D = (A†A + 0.00005I)⁻¹; 10 iterations. (b) D = (A†A + 0.00005I)⁻¹; 20 iterations.

It is worth pointing out that γ should be chosen positive. In that case, A†A + γI is always invertible.

Figure 5(a) shows the result obtained after 10 iterations when γ = 0.00005, and Fig. 5(b) plots the reconstructed Fourier transform after 20 iterations with the same parameter γ. In both cases the reconstructions are of good quality.

Fixing a value for γ and determining the number of iterations are by no means trivial matters. By comparing Figs. 5(a) and 5(b) it is seen that the reconstruction is quite sensitive to the number of iterations. We think that the sensitivity also depends on the parameter γ.

Figure 6(a) shows the result after 10 iterations obtained by applying iteration (29) when γ = 0.005. Figure 6(b) plots the corresponding result for γ = 0.005 and 50 iterations. By comparing Figs. 5(a) and 6(a) it is seen that the reconstruction is sensitive to the parameter γ when the number of iterations is fixed.
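The acceleration that iteration (29) provides over the plain Landweber case D = I is easy to reproduce on a discretized toy problem. A sketch (the 40 x 40 operator with logarithmically spaced singular values is a hypothetical stand-in for the band-limiting A; all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40

# Hypothetical ill-conditioned operator standing in for A.
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(np.logspace(0, -4, n)) @ V.T
f_true = rng.standard_normal(n)
g = A @ f_true

def landweber(n_iter, alpha=1.0):
    # Iteration (27) with D = alpha I (alpha = 1 is admissible since ||A|| = 1).
    f = np.zeros(n)
    for _ in range(n_iter):
        f = f + alpha * A.T @ (g - A @ f)
    return f

def regularized(n_iter, gamma=1e-6):
    # Iteration (29): (A^t A + gamma I) f_k = gamma f_{k-1} + A^t g.
    AtA, Atg = A.T @ A, A.T @ g
    f = np.zeros(n)
    for _ in range(n_iter):
        f = np.linalg.solve(AtA + gamma * np.eye(n), gamma * f + Atg)
    return f

err_lw = np.linalg.norm(landweber(20) - f_true)
err_rg = np.linalg.norm(regularized(20) - f_true)
print(err_rg < err_lw)    # the regularized iteration is far closer after 20 steps
```

The per-mode error factor is 1 − λ for Landweber but γ/(λ + γ) for iteration (29), which is tiny for every eigenvalue λ of AᵗA well above γ; this is the mechanism behind the faster reconstructions of Figs. 5 and 6.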



Fig. 7. D = (A†A + 0.000005I)⁻¹; 10 iterations.


Fig. 8. (a) D = G(A†A); G sixth-order polynomial; 10 iterations. (b) D = G(A†A); G sixth-order polynomial; 200 iterations.

Figure 7 shows the reconstruction obtained for γ = 0.000005 after 10 iterations. It is clear that, for a fixed number of iterations, the smaller the parameter γ is, the more distorted (because of the propagation of roundoff errors) the reconstruction will be.

In spite of some unanswered questions, the main conclusion that can be drawn from these examples is that the resolution obtained in Figs. 5(a), 5(b), 6(a), and 7 is much better than that of Figs. 4(a) and 4(b).

2. D = G(A†A) with G(λ) = 804.375λ⁶ − 3003λ⁵ + 4504.5λ⁴ − 3465λ³ + 1443.75λ² − 315λ + 31.5

Some reasons for choosing such a D are well documented in Ref. 21. We think that this example is also useful to illustrate that the reconstruction is sensitive to the choice of D. For this D, Fig. 8(a) shows the result after 10 iterations. It is seen that the resolution is poor. However, Fig. 8(b) plots the reconstructed Fourier transform for 200 iterations, which is a good result. This means that the procedure is slower compared with those given where D = (A†A + γI)⁻¹.

It is also remarkable that, by using fewer iterations than those necessary for the classical Landweber-Papoulis-Gerchberg algorithm, recursion (27) provides a better reconstruction [compare Figs. 4(b) and 8(b)].
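Condition (2) of this section can be checked directly for this polynomial: with D = G(A†A), the requirement 0 < ρ_nλ_n < 2 becomes 0 < λG(λ) < 2 on the spectrum of A†A. A quick numerical check (the sampling grid is an arbitrary choice):

```python
import numpy as np

# The sixth-order polynomial of Example 2 (coefficients in ascending order).
G = np.polynomial.Polynomial(
    [31.5, -315.0, 1443.75, -3465.0, 4504.5, -3003.0, 804.375]
)

lam = np.linspace(1e-4, 1.0, 100001)       # eigenvalues of A^t A lie in (0, 1]
rho_lam = lam * G(lam)                     # per-mode error factor is 1 - lam*G(lam)
print(rho_lam.min() > 0 and rho_lam.max() < 2)   # condition (2) holds on this grid
```

On this grid λG(λ) stays close to 1 over most of (0, 1], i.e., G behaves like a polynomial approximation of 1/λ, which explains why this D accelerates the iteration for all but the smallest eigenvalues.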

3. D = G(A†A) with G(λ) = λ¹⁶
This case is intended to be an example in which the speed of the reconstruction seems to be similar to that of the classical approach [Eqs. (9) and (10')].

Figure 9(a) shows the reconstruction obtained after 500 iterations. By comparing Figs. 9(a) and 4(b) it is seen that the results look much the same. Figure 9(b) plots the result obtained after 1000 iterations. If Fig. 9(b) is compared with Fig. 9(a), it is noticeable that in the former the reconstruction of the Fourier transform was improved at the cost of double the computational effort.

It has been assumed so far that the given signal is not contaminated with any noise. Since the techniques presented in Examples 1 and 2 above represent a substantial improvement over the classical iteration procedure [Eqs. (8)-(10')], it is expected that the noise will also propagate much faster in the reconstruction. Therefore a stopping rule is of great importance for practical applications. It is also important to ana-

Fig. 9. (a) D = (A†A)¹⁶; 500 iterations. (b) D = (A†A)¹⁶; 1000 iterations.

lyze what the performance of the iteration is when the known range of g is smaller. Some related examples and further analyses are given in Ref. 23.

ACKNOWLEDGMENTS

The research of J. L. C. Sanz was supported by the National Council of Scientific and Technical Research (Argentina). That of T. S. Huang was supported by the Joint Services Electronics Program under contract no. N00014-79-C-0424.

REFERENCES

1. R. Schaefer, R. Mersereau, and M. Richards, "Constrained iterative restoration algorithms," Proc. IEEE 69, 432-450 (1981).

2. H. Bialy, "Iterative Behandlung linearer Funktionalgleichungen," Arch. Ration. Mech. Anal. 4, 166-176 (1959).

3. A. Kolmogorov and S. Fomin, "Functional analysis," in Measure, The Lebesgue Integral, Hilbert Spaces, Vol. 2 (Graylock, Albany, N.Y., 1961).

4. W. Kammerer and M. Nashed, "Iterative methods for approximate solutions of linear integral equations of the first and second kinds," MRC Tech. Rep. No. 1117 (Mathematics Research Center, Madison, Wisc., 1971).

5. D. Slepian and H. O. Pollak, "Prolate spheroidal wave functions (I)," Bell Syst. Tech. J. 40, 43-63 (1961).

6. A. Papoulis, "A new algorithm in spectral analysis and band-limited extrapolation," IEEE Trans. Circuits Syst. CAS-22, 735-742 (1975).

7. V. Fridman, "A method of successive approximation for Fredholm integral equations of the first kind," Usp. Mat. Nauk 11, 232-234 (1956).

8. L. Landweber, "An iteration formula for Fredholm integral equations of the first kind," Am. J. Math. 73, 615-624 (1951).

9. J. L. C. Sanz and T. S. Huang, "Iterative time-limited signal restoration," Tech. Rep. R972-UILU-ENG 82-2238, Coordinated Science Laboratory, University of Illinois at Urbana-Champaign; IEEE Trans. Acoust. Speech Signal Process. (to be published).

10. G. Thomas, "A modified version of Van Cittert's iterative deconvolution procedure," IEEE Trans. Acoust. Speech Signal Process. ASSP-29, 938-939 (1981).

11. R. W. Gerchberg, "Super-resolution through error energy reduction," Opt. Acta 21, 709-720 (1974).

12. J. Cadzow, "An extrapolation procedure for band-limited signals," IEEE Trans. Acoust. Speech Signal Process. ASSP-27, 4-12 (1979).

13. J. L. C. Sanz and T. S. Huang, "On the Papoulis-Gerchberg algorithm," Tech. Rep. R972-UILU-ENG 82-2239, Coordinated Science Laboratory, University of Illinois at Urbana-Champaign; IEEE Trans. Circuits Syst. (to be published).

14. A. Jain and S. Ranganath, "Extrapolation algorithms for discrete signals with applications in spectral estimation," IEEE Trans. Acoust. Speech Signal Process. ASSP-29, 830-845 (1981).

15. J. L. C. Sanz and T. S. Huang, "Some aspects of band-limited signal extrapolation: models, discrete approximations, and noise," Tech. Rep. R972-UILU-ENG 82-2238, Coordinated Science Laboratory, University of Illinois at Urbana-Champaign; IEEE Trans. Acoust. Speech Signal Process. (to be published).

16. T. S. Huang and J. L. C. Sanz, "Four models for the band-limited signal extrapolation problem," presented at the Topical Meeting on Signal Recovery and Synthesis with Incomplete Information and Partial Constraint, Incline Village, Nevada, 1983.

17. J. L. C. Sanz and T. S. Huang, "Discrete and continuous band-limited signal extrapolation," Tech. Rep. R972-UILU-ENG 82-2238, Coordinated Science Laboratory, University of Illinois at Urbana-Champaign; IEEE Trans. Acoust. Speech Signal Process. (to be published).

18. K. Yao, "Applications of reproducing kernel Hilbert spaces: band-limited signal models," Inf. Control 11, 429-444 (1967).

19. V. Tom, T. Quatieri, M. Hayes, and J. McClellan, "Convergence of iterative nonexpansive signal reconstruction algorithms," IEEE Trans. Acoust. Speech Signal Process. ASSP-29, 1052-1058 (1981).

20. J. L. C. Sanz and T. S. Huang, "Continuation techniques for a certain class of analytic functions," Tech. Rep. R973-UILU-ENG 82-2239, Coordinated Science Laboratory, University of Illinois at Urbana-Champaign; SIAM J. Appl. Math. (to be published).

21. O. N. Strand, "Theory and methods related to the singular-function expansion and Landweber's iteration for integral equations of the first kind," SIAM J. Numer. Anal. 11, 798-825 (1974).

22. S. Twomey, "On the numerical solution of Fredholm integral equations of the first kind by the inversion of the linear systems produced by quadrature," J. Assoc. Comput. Mach. 10, 79-101 (1963).

23. J. L. C. Sanz and T. S. Huang, "Support-limited signal and image extrapolation," in Advances in Computer Vision and Image Processing, Vol. 1, T. S. Huang, ed. (JAI Press, Greenwich, Conn., 1983).