The spectral factorization problem for multivariable distributed parameter systems




Frank M. CALLIER and Joseph J. WINKIN

This paper studies the solution of the spectral factorization problem for multivariable distributed parameter systems with an impulse response having an infinite number of delayed impulses. A coercivity criterion for the existence of an invertible spectral factor is given for the cases that the delays are a) arbitrary (not necessarily commensurate) and b) equally spaced (commensurate); for the latter case the criterion is applied to a system consisting of two parallel transmission lines without distortion. In all cases, it is essentially shown that, under the given criterion, the spectral density matrix has a spectral factor whenever this is true for its singular atomic part, i.e. its series of delayed impulses (with almost periodic symbol). Finally, a small-gain type sufficient condition is studied for the existence of spectral factors with arbitrary delays. The latter condition is meaningful from the system theoretic point of view, since it guarantees feedback stability robustness with respect to small delays in the feedback loop. Moreover its proof contains constructive elements.

1 Introduction

In the control literature, much attention has been paid to the spectral factorization problem for linear time-invariant lumped- and distributed-parameter systems. This problem is met under different forms in several applications. A classical one is the solution of Wiener-Hopf type problems, i.e. systems of integral equations on a half line, with kernels of a specific type. A lot of results in this field have been developed by Gohberg and Krein: see e.g. [18], [22] and the book [17]. Another context where the factorization problem arises is the theory of feedback control system design: LQ-theory and robust feedback stability, see below; another issue may be the multiplier technique in passivity theory (circle criterion, etc.): see e.g. [36], [16]. The specific spectral factorization problem of this paper is motivated by applications in feedback control system design, more precisely in the analysis of closed-loop stability robustness, i.e. the analysis of the graph distance between two possibly unstable systems for obtaining robustness estimates of feedback stability, and in the solution to the Linear-Quadratic optimal control problem by frequency domain techniques for distributed parameter, i.e. infinite-dimensional state-space, systems with bounded or unbounded control

and/or observation operators: see e.g. [8]-[10], [39], [27]-[31], [33]-[34] and references therein.

This paper studies the multivariable spectral factorization problem in the framework of the Callier-Desoer algebra of possibly unstable distributed parameter system transfer functions (see e.g. the survey paper [11] or the book [15]), whose corresponding subalgebra of proper stable transfer functions is denoted by Â−. The starting points are references [8] and [9], where one investigates respectively single-variable general spectral factorization and multivariable spectral factorization with no delayed impulses. The results obtained in those papers are extended here to multivariable distributed parameter systems with an impulse response having an infinite number of delayed impulses. Criteria for the existence of an invertible spectral factor are given for the cases that the delays are a) arbitrary and b) equally spaced (commensurate case). The analysis of case a) is based on a result of [26] ([1], [2]), while that of case b) is a corollary (already present in [39]). In both cases the condition is the coercivity on the extended imaginary axis of the matrix spectral density. An essential step in the sufficiency proof of this condition is to show that, once the singular atomic part of the spectral density (i.e. its series of delayed impulses) has an invertible spectral factor, then so does the overall spectral density. Indeed, the problem is then reduced to spectral factorization with no delayed impulses, whose solution is known, see [9]. It is also recalled that the coercivity condition implies the existence of invertible matrix spectral factors with entries in a larger algebra than Â−, viz. H∞, see e.g. [25], [34] and references therein. Finally a stronger small-gain type condition is proved to be sufficient for the existence of matrix spectral factors with entries in Â−.
This last condition is seen to be meaningful from the system theoretic point of view, since it guarantees feedback stability robustness with respect to small delays in the feedback loop. The results are illustrated by some examples. In particular, the results of the case of commensurate delays are applied to a system consisting of two transmission lines in parallel without distortion.

The paper is organized as follows. Section 1 contains the present introduction and a list of notations and abbreviations. The solution of the general spectral factorization problem of spectral densities with arbitrary delays is described in Section 2, whereas the detailed proofs are given in Section 3. The next section is devoted to two particular cases which are important in applications. The final section contains some conclusions.

A list of notations and abbreviations and a remark on the causal-anticausal decomposition of a distribution with support on the real axis are given below.

List of notations and abbreviations:

IR (respectively IR−, IR+) := set of real (respectively nonpositive-, nonnegative-real) numbers;
IC := field of complex numbers;
IC_σ+ (respectively IC^0_σ+) := {s ∈ IC : Re(s) ≥ σ (respectively > σ)} (σ is omitted if σ = 0);
S_σ (respectively S^0_σ) := {s ∈ IC : σ ≤ Re(s) ≤ −σ (respectively σ < Re(s) < −σ)};

LTD (respectively LTD−, LTD+) := set of IC-valued Laplace transformable distributions with support on IR (respectively IR−, IR+);
δ(·) := Dirac delta distribution (Dirac impulse);
f̂(·) := (two-sided) Laplace transform of f ∈ LTD;
Â := set of Laplace transforms of all f ∈ A;
L1_σ := class of all functions f, with support on IR+, such that ∫₀^∞ |f(t)| exp(−σt) dt < ∞;
Mat(A) := set of matrices having entries in A;
A^{n×n} := set of n-by-n matrices having entries in A;
M* := hermitian transpose of the matrix M;
F~(t) := F(−t)*, parahermitian transpose of F ∈ Mat(LTD); equivalently F̂~(s) = F̂(−s̄)* (= F̂(jω)* for s = jω);
M ≥ 0 (respectively > 0) := M is a positive semi-definite (respectively positive definite) matrix;
log+ x := max(log x, 0).

Preliminary remark:

To any distribution f = fa(·) + fsa(·) := fa(·) + Σ_{i=−∞}^{+∞} fi δ(· − ti) ∈ LTD (where fa is a IC-valued function and fi ∈ IC, i = 0, ±1, ...), we associate f+ := f+a(·) + f+sa(·) ∈ LTD+ and f− := f−a(·) + f−sa(·) ∈ LTD− such that f+a := fa almost everywhere (with respect to the Lebesgue measure) on IR+, with f+sa := 2^{−1} f0 δ(·) + Σ_{i=1}^{∞} fi δ(· − ti); f−a := fa almost everywhere on IR−, with f−sa := 2^{−1} f0 δ(·) + Σ_{i=−∞}^{−1} fi δ(· − ti); whence f = f+ + f−. The distributions f+ and f− are called respectively the causal part and the anticausal part of f.
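The causal/anticausal split above is easy to mechanize for the singular atomic part. The following minimal Python sketch (a hypothetical finite representation, not from the paper) stores fsa as a dict {t_i: f_i} and shares the impulse at t = 0 half-and-half, as in the preliminary remark:

```python
# Hypothetical finite-support sketch of the causal/anticausal split: a singular
# atomic distribution f_sa = sum_i f_i * delta(. - t_i) is represented as a
# dict {t_i: f_i}; the impulse at t = 0 is split evenly between f+ and f-.

def split_atomic(impulses):
    """Split {t_i: f_i} into causal part f+ and anticausal part f-."""
    f_plus, f_minus = {}, {}
    for t, f in impulses.items():
        if t > 0:
            f_plus[t] = f
        elif t < 0:
            f_minus[t] = f
        else:                      # the impulse at 0 is shared half-half
            f_plus[0.0] = 0.5 * f
            f_minus[0.0] = 0.5 * f
    return f_plus, f_minus

f = {-1.5: 2.0, 0.0: 4.0, 2.0: 1.0}
fp, fm = split_atomic(f)
print(fp)   # {0.0: 2.0, 2.0: 1.0}
print(fm)   # {-1.5: 2.0, 0.0: 2.0}
```

Summing the two returned parts impulse-by-impulse recovers f, consistent with f = f+ + f−.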

2 Spectral Factorization with Arbitrary Delays

2.1 Description of the problem

The spectral factorization problem is studied here in the framework of the distributed parameter system transfer function algebras Â− and B̂, which have been introduced and analyzed in [5], [6], [7]. See also the tutorial paper [11] and the book [15].

Let σ ≤ 0. An impulse response f ∈ LTD+ is said to be in A(σ) iff, on t ≥ 0, f(t) = fa(t) + fsa(t), where its regular functional part fa is such that ∫₀^∞ |fa(t)| exp(−σt) dt < ∞, and its singular atomic part fsa := Σ_{i=0}^∞ fi δ(· − ti), where t0 = 0, ti > 0 for i ≥ 1 and fi ∈ IC for i ≥ 0 with Σ_{i=0}^∞ |fi| exp(−σti) < ∞. An impulse response f is said to be in A− iff f ∈ A(σ) for some σ < 0. Â− denotes the algebra of distributed parameter system proper-stable transfer functions, i.e. Laplace transforms of elements in A−.

Definition 2.1 Let F = Fa + Fsa := Fa(·) + Σ_{i=−∞}^∞ Fi δ(· − ti) be a matrix of Laplace transformable distributions of a real variable t ∈ IR, where

a) Fa(·) is an n×n-matrix valued function, Fi is a constant n×n-matrix for i = 0, ±1, ...,

t0 = 0, ti > 0 and t−i < 0 for i ≥ 1, and δ(·) denotes the Dirac delta distribution (Dirac impulse);

b) F is parahermitian self-adjoint, i.e. F(t) = F~(t) := F(−t)*, or equivalently

Fa(t) = (Fa)~(t), F−i = Fi* and t−i = −ti for i ≥ 0; (1)

c) the Laplace transform of its causal part (i.e. its part with support on the nonnegative real numbers), viz. F+ = F+a(·) + 2^{−1} F0 δ(·) + Σ_{i=1}^∞ Fi δ(· − ti), is a matrix with all its entries in the algebra Â−, i.e.

F̂+ ∈ Mat(Â−); (2)

and d) F̂ is positive semi-definite on the imaginary axis, i.e.

F̂(jω) (= F̂~(jω) = F̂(jω)*) ≥ 0 for all ω ∈ IR. (3)

A spectral factor of the matrix spectral density F̂ is a matrix valued function R̂ = R̂a + R̂sa = R̂a(·) + Σ_{k=0}^∞ Rk exp(−(·)τk), which is in Mat(Â−), i.e. has all entries in Â−, such that

F̂(jω) = R̂(jω)* · R̂(jω) for all ω ∈ IR. □ (4)

The spectral factorization problem which is considered here consists of finding a necessary and sufficient condition on the matrix spectral density F̂ under which it admits an invertible spectral factor R̂, i.e. such that R̂ and R̂^{−1} are in Mat(Â−).
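To make Definition 2.1 concrete, consider a scalar illustration (an assumed example, not taken from the paper): the almost periodic density F̂(jω) = 5/2 + 2 cos ω satisfies (1)-(3), and R̂(s) = √2 + 2^{−1/2} e^{−s} is an invertible spectral factor, since |R̂(s)| ≥ √2 − 2^{−1/2} > 0 on IC+. A quick numeric check:

```python
# Numeric check of the assumed scalar example: the almost periodic density
# F(jw) = 5/2 + 2*cos(w) has spectral factor R(s) = sqrt(2) + exp(-s)/sqrt(2),
# since |R(jw)|^2 reproduces F(jw) on the imaginary axis.
import cmath, math

def F(w):
    return 2.5 + 2.0 * math.cos(w)

def R(s):
    return math.sqrt(2) + cmath.exp(-s) / math.sqrt(2)

# Spectral factorization identity (4) on sampled frequencies.
for w in [0.0, 0.7, 3.1, -2.2]:
    assert abs(abs(R(1j * w)) ** 2 - F(w)) < 1e-12

# Invertibility: |R(s)| >= sqrt(2) - 1/sqrt(2) > 0 in the right half-plane.
print(min(abs(R(x + 1j * y)) for x in [0, 1, 5] for y in [0, 1, 3]))
```

The printed minimum over the sampled right-half-plane points stays bounded away from zero, in line with the required invertibility of R̂ in Mat(Â−).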

2.2 Main Results

Theorem 2.1 [Spectral factorization with arbitrary delays] Let a matrix spectral density F̂ be given as in Definition 2.1, such that conditions (1)-(3) hold. Under these conditions, F̂ has an invertible spectral factor R̂, such that R̂ is in Mat(Â−) together with its inverse, if and only if it is (uniformly) coercive on the imaginary axis, i.e. the following inequality holds:

there exists ε > 0 such that F̂(jω) ≥ εI for all ω ∈ IR. (5)

Moreover, if (5) holds, then all invertible spectral factors of F̂ are unique up to left multiplication by a constant unitary matrix U. □

Remark 2.1 a) The proof of this result is essentially based on the fact that, under condition (5), a given spectral density matrix has an invertible spectral factor whenever so has its almost periodic part F̂sa, and on the fact that the latter holds by [26, Corollary 1], which is a byproduct of [2, Theorem 3]; see also [24, Theorem 5.1], [3, proof of Theorem 2.3 (especially Lemma 3.2)] and references cited therein for additional information. Then, by using a symmetric extraction technique, the problem can be reduced to the spectral factorization of a spectral density without delays, which has been solved in [9, Theorem 1]. See Subsection 3.1.

b) Theorem 2.1 is a generalization of a similar earlier result on invertible factorizability of a matrix spectral density with equally-spaced delays, see [39, Theorem 3.1M]. See Theorem 4.1 below for more detail.

c) Condition (5) is also known to be a necessary and sufficient condition for the existence of invertible spectral factors in Mat(H∞), see [25, Theorem 3.7, p.54]; see also [34, p.316]. Moreover, condition (5) is stronger than

ω ∈ IR ↦ (1 + ω²)^{−1} · log+ ‖F̂(jω)^{−1}‖ is integrable, (6)

where log+ x := max{log x, 0} for any real number x > 0. The latter condition is necessary and sufficient for the existence of an outer spectral factor in Mat(H∞) (analytic with no zeros in the open right half-plane, bounded on the imaginary axis, with zeros on the axis still allowed), see [25, Theorem 6.14, p.124].

d) Theorem 2.1 confirms the conjecture stated in [27, p.174] in the case of the algebra Â−, and it solves the open problem described in [12].

A direct and important byproduct of Theorem 2.1 is the existence of normalized coprime fractions for distributed parameter system transfer functions. Recall that Â− is our selected class of distributed proper-stable transfer functions. It contains the multiplicative subset Â−^∞, i.e. of transfer functions that are bounded away from zero at infinity in IC+, i.e. that are biproper-stable. Possibly unstable transfer functions are selected to be in the algebra B̂, where f̂ ∈ B̂ iff f̂ = n̂·d̂^{−1} with n̂ ∈ Â− and d̂ ∈ Â−^∞. Note that by [5, Theorem 3.3] a transfer function is in B̂ iff it is the sum of a completely unstable strictly proper rational function and a stable transfer function in Â−; hence d̂ above can always be chosen biproper-stable rational, e.g. [32, Fact 20, p.13]. Multivariable plants have transfer matrices P̂ in Mat(B̂) described by a right matrix fraction P̂ = N̂D̂^{−1}, where N̂ and D̂ are in Mat(Â−) and det D̂ is in Â−^∞; if this holds and there exist Û and V̂ in Mat(Â−) such that ÛN̂ + V̂D̂ = I (Bezout identity) (or equivalently [N̂(s)ᵀ D̂(s)ᵀ]ᵀ has full column rank in IC+), then P̂ in Mat(B̂) is said to have a right coprime fraction (N̂, D̂) in Mat(Â−); right coprime fractions are unique up to multiplication on the right by a factor in Mat(Â−) together with its inverse; moreover D̂ above can always be chosen biproper-stable rational such that D̂(∞) = I, see [7, proof of Theorem 2.1].

Definition 2.2 Let P̂ ∈ Mat(B̂) have a right coprime fraction (N̂, D̂) in Mat(Â−): (N̂, D̂) is said to be normalized iff

(N̂~N̂ + D̂~D̂)(jω) = I for all ω ∈ IR. (7)

We call (right) coprime fraction spectral density the expression

F̂ := N̂~N̂ + D̂~D̂. □ (8)

Remark 2.2 It follows from the proof of Theorem 2.2 below and from Theorem 2.1 that normalized right coprime fractions of a transfer matrix P̂ ∈ Mat(B̂) are unique up to right multiplication by a constant unitary matrix: see Subsection 3.2.

Theorem 2.2 [Normalized coprime fractions] Every transfer matrix P̂ ∈ Mat(B̂) has normalized right coprime fractions, unique up to right multiplication by a unitary matrix. □
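A rational scalar illustration of (7)-(8) (an assumed example, not from the paper): for P̂(s) = 1/(s+1) with fraction (N̂, D̂) = (1/(s+1), 1), the coprime fraction spectral density on the imaginary axis is F̂(jω) = (2+ω²)/(1+ω²), with invertible spectral factor R̂(s) = (s+√2)/(s+1), and (N̂R̂^{−1}, D̂R̂^{−1}) = (1/(s+√2), (s+1)/(s+√2)) is normalized:

```python
# Sanity check of the assumed rational example: for P(s) = 1/(s+1) with
# (N, D) = (1/(s+1), 1), the factor R(s) = (s+sqrt(2))/(s+1) normalizes the
# fraction, i.e. |N/R|^2 + |D/R|^2 = 1 on the imaginary axis, which is (7).
import math

def normalized_pair(w):
    s = 1j * w
    R = (s + math.sqrt(2)) / (s + 1)          # spectral factor of F
    N, D = 1 / (s + 1), 1.0
    Nn, Dn = N / R, D / R                     # candidate normalized fraction
    return abs(Nn) ** 2 + abs(Dn) ** 2        # should equal 1 for every w

for w in [0.0, 0.5, 2.0, 10.0]:
    assert abs(normalized_pair(w) - 1.0) < 1e-12
print("normalization (7) holds on sampled frequencies")
```

Here R̂~R̂ = (2−s²)/(1−s²) indeed equals F̂ on s = jω, and both R̂ and R̂^{−1} are proper stable, as Theorem 2.1 requires.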

Example 1: Consider a system consisting of the parallel interconnection of a strictly proper stable system with transfer function P̂1 = (P̂1)a ∈ L̂1_σ ⊂ Â− for some σ < 0 (i.e. the impulse response P1 is absolutely continuous) and a proper stable system with transfer function P̂2 = (P̂2)sa ∈ L̂A+(σ) ⊂ Â− (i.e. the impulse response P2 is purely singular atomic). The system transfer function is given by P̂(s) = [P̂1(s), P̂2(s)] and is proper stable, i.e. P̂ is in Â−^{1×2}. It follows that the pair (N̂, D̂) := (P̂, I), where I is the two-by-two identity matrix, is a right coprime fraction of P̂; whence, by the proof of Theorem 2.2, the corresponding right coprime fraction power spectral density F̂ := N̂~N̂ + D̂~D̂ = I + P̂~P̂ has a spectral factor R̂ invertible in Â−^{2×2}, whence (N̂R̂^{−1}, D̂R̂^{−1}) is a normalized right coprime fraction of P̂. Moreover the singular atomic part of F̂ is a diagonal matrix which is given by

F̂sa = diag[1, 1 + P̂2~P̂2].

By Lemma 3.3, F̂sa has a spectral factor R̂1 which is given by

R̂1 = diag[1, Ŵ],

where Ŵ ∈ Â− is a spectral factor (invertible in Mat(Â−)) of 1 + P̂2~P̂2. Therefore, by the proof of Theorem 2.1, a spectral factor R̂ ∈ Mat(Â−) (invertible in Mat(Â−)) of F̂ is given by R̂ = R̂2 · R̂1, where R̂2 = R̂2a + R20 ∈ Mat(Â−) is a spectral factor (invertible in Mat(Â−)) of ((R̂1~)^{−1} F̂a (R̂1)^{−1}) + I.

3 Proofs of the Main Results

3.1 Proof of Theorem 2.1

Throughout this subsection, it is assumed that we are given a matrix spectral density F̂ as in Definition 2.1, such that conditions (1)-(3) hold. We first prove the uniqueness of spectral factors, up to left multiplication by a constant unitary matrix, under condition (5).

Lemma 3.1 [Multiplicity of spectral factors] Let a matrix spectral density F̂ be given as in Definition 2.1, such that conditions (1)-(3) hold, and assume that (5) holds. If

R̂1 and R̂2 are spectral factors of F̂, (9)

then there exists a constant unitary matrix U, i.e. U ∈ Mat(IC) with U* = U^{−1}, such that

R̂2 = UR̂1. □ (10)

Proof: Let R̂1 and R̂2 be two spectral factors of the spectral density F̂. Hence the following identity holds on the imaginary axis:

R̂1 R̂2^{−1} = (R̂1~)^{−1} R̂2~ = ((R̂1 R̂2^{−1})~)^{−1}.

Since the matrix function on the left-hand side of this identity is in Mat(Â−), it is bounded and holomorphic in an open right half-plane containing IC+, whence it can be analytically extended to a bounded entire function U. Then, by Liouville's theorem, see e.g. [21, pp.203-204], U is a constant unitary matrix such that (10) holds. □

Necessity: If R̂ is an invertible spectral factor, in Mat(Â−), of F̂, then, by e.g. [11, Theorem 1] or [15, Lemmas 7.1.5 and 7.2.1] (see also Fact 1c below), inf{|det R̂(s)| : s ∈ IC+} > 0. Since R̂~(jω) = R̂(jω)*, it follows that inf{det F̂(jω) : ω ∈ IR} = inf{|det R̂(jω)|² : ω ∈ IR} > 0. Hence (5) holds. □

Sufficiency: The proof of sufficiency of condition (5) for the existence of invertible spectral factors is divided into three steps, which correspond to the following three lemmas, viz. Lemmas 3.2, 3.3 and 3.4. In order to prove these results, the following classes of Laplace transformable distributions should be introduced.

Definition 3.1 Let σ ≤ 0. The sets LA+(σ), LA−(σ) and LA(σ) of Laplace transformable distributions are given respectively by

LA+(σ) := A(σ); (11)

LA−(σ) := {f ∈ LTD− : f(−·) ∈ LA+(σ)}; (12)

and

LA(σ) := LA+(σ) + LA−(σ) := {f ∈ LTD : f = fa + fsa = fa(·) + Σ_{i=−∞}^∞ fi δ(· − ti), with fa(·) a IC-valued function, fi ∈ IC for i = 0, ±1, ..., and t0 = 0, ti > 0 for i = 1, 2, ... and ti < 0 for i = −1, −2, ..., such that

∫_{−∞}^∞ |fa(t)| exp(−σ|t|) dt < ∞ and Σ_{i=−∞}^∞ |fi| exp(−σ|ti|) < ∞}. □ (13)

The class LA(σ) is known, in the literature, as a Beurling-type algebra, see e.g. [20, Chapter 4], and is sometimes called the Wiener-Pitt (WP) algebra, see e.g. [1], [2]. LA(σ) is equipped with a two-sided convolution product as follows. Let f = fa(·) + Σ_{k=−∞}^∞ fk δ(· − tk) and g = ga(·) + Σ_{l=−∞}^∞ gl δ(· − τl) belong to LA(σ); then

(f ∗ g)(t) := ∫_{−∞}^∞ f(t − s) g(s) ds = (f ∗ g)a(t) + Σ_{k=−∞}^∞ Σ_{l=−∞}^∞ fk gl δ(t − tk − τl), (14a)

where

(f ∗ g)a(t) := (fa ∗ ga)(t) + Σ_{k=−∞}^∞ fk ga(t − tk) + Σ_{l=−∞}^∞ gl fa(t − τl). (14b)

Moreover LA(σ) can also be equipped with a norm ‖·‖_{1σ}, which is defined by

‖f‖_{1σ} := ∫_{−∞}^∞ |fa(t)| exp(−σ|t|) dt + Σ_{k=−∞}^∞ |fk| exp(−σ|tk|). (15)

Observe that (see e.g. [8, Fact 3.1]), under the norm ‖·‖_{1σ},

LA(σ) is a commutative convolution Banach algebra with unit element δ(·), (16)

and

LA+(σ) and LA−(σ) are closed subalgebras of LA(σ). (17)
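For purely singular atomic elements, the double sum in (14a) and the atomic part of the norm (15) can be checked on finite truncations; the helper names below are hypothetical, not from the paper:

```python
# Sketch on finite truncations: convolution of two purely singular atomic
# elements of LA(sigma), per the delta-series double sum in (14a), and the
# atomic part of the weighted norm (15).
import math
from collections import defaultdict

def conv_atomic(f, g):
    """(f*g)_sa for f, g given as {t_k: f_k}: impulses at t_k + tau_l."""
    out = defaultdict(float)
    for tk, fk in f.items():
        for tl, gl in g.items():
            out[tk + tl] += fk * gl
    return dict(out)

def norm_atomic(f, sigma):
    """Atomic part of ||f||_{1,sigma} = sum_k |f_k| exp(-sigma |t_k|)."""
    return sum(abs(fk) * math.exp(-sigma * abs(tk)) for tk, fk in f.items())

f = {0.0: 1.0, 1.0: 0.5}
g = {-1.0: 2.0, 0.0: 1.0}
h = conv_atomic(f, g)
print(h)  # {-1.0: 2.0, 0.0: 2.0, 1.0: 0.5}

# Submultiplicativity of the Banach algebra norm: ||f*g|| <= ||f||.||g||.
s = -0.5
assert norm_atomic(h, s) <= norm_atomic(f, s) * norm_atomic(g, s) + 1e-12
```

The submultiplicativity check illustrates why (16) makes LA(σ) a convolution Banach algebra under the norm (15).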

The distributions belonging to the classes defined above satisfy several important and useful properties, which are listed in the following two facts; see e.g. [5] and [8, Fact 3.1]. Recall that a complex valued function ĝ is almost periodic in a vertical strip S(σ, σ′) if, for any ε > 0, there exists L > 0 such that each interval of length L on the imaginary axis contains at least one ε-translation number jτ, i.e. one point jτ such that

|ĝ(σ1 + j(ω + τ)) − ĝ(σ1 + jω)| < ε for all ω ∈ IR and σ1 ∈ [σ, σ′];

see e.g. [14, p.73, Corollary p.75]. Moreover ĝ is almost periodic in a right half-plane IC_σ+ if ĝ is almost periodic in every vertical strip contained in IC_σ+.

Fact 1 [Properties of distributions in LA+(σ)] a) If f = fa(·) + fsa(·) is a distribution in LA+(σ), then its Laplace transform f̂ satisfies the following properties:
(i) f̂ is uniformly continuous in IC_σ+;
(ii) f̂ is holomorphic in IC^0_σ+;
(iii) f̂sa is analytic almost periodic in IC_σ+;
(iv) [Riemann-Lebesgue]

(f̂ − f̂sa)(s) → 0 as |s| → ∞ in IC_σ+,

where in particular, for any σ′ ≥ σ,

|f̂a(σ1 + jω)| → 0 as |ω| → ∞, uniformly in σ1 ∈ [σ, σ′].

b) F̂ ∈ Mat(L̂A+(σ)) is invertible in Mat(L̂A+(σ)) iff

inf{|det F̂(s)| : s ∈ IC_σ+} > 0. (18)

c) Let F̂ be in Mat(L̂A+(σ1)) for some σ1 < 0, or equivalently in Mat(Â−). Then F̂^{−1} ∈ Mat(Â−), i.e. F̂^{−1} ∈ Mat(L̂A+(σ)) for some σ < 0, iff

inf{|det F̂(s)| : s ∈ IC+} > 0. □ (19)

Fact 2 [Properties of distributions in LA(σ)] Let σ ≤ 0. Then a) If f = fa + fsa is a distribution in LA(σ), then its Laplace transform f̂ satisfies the following properties:
(i) f̂ is uniformly continuous in S_σ;
(ii) f̂ is holomorphic in S^0_σ;
(iii) f̂sa is analytic almost periodic in S_σ;
(iv) [Riemann-Lebesgue]

(f̂ − f̂sa)(s) → 0 as |s| → ∞ in S_σ,

where in particular

|f̂a(σ1 + jω)| → 0 as |ω| → ∞, uniformly in σ1 ∈ [σ, −σ].

b) F̂ ∈ Mat(L̂A(σ)) is invertible in Mat(L̂A(σ)) iff

inf{|det F̂(s)| : s ∈ S_σ} > 0. □ (20)

Note that LA(σ)^{n×n}, where n is a positive integer, can also be equipped with a norm, denoted by ‖·‖_{1σ}, which is given by (15) when n = 1 and by (21) below when n ≥ 2; viz. the norm of a matrix distribution F = Fa(·) + Σ_{k=−∞}^∞ Fk δ(· − tk) in LA(σ)^{n×n} is defined by

‖F‖_{1σ} := ∫_{−∞}^∞ ‖Fa(t)‖ exp(−σ|t|) dt + Σ_{k=−∞}^∞ ‖Fk‖ exp(−σ|tk|), (21)

where ‖·‖ is any induced (matrix) norm on IC^{n×n} (e.g. the norm induced by the euclidean vector norm). It is important to observe that, under the norm ‖·‖_{1σ},

LA(σ)^{n×n} is a convolution Banach algebra with unit element Iδ(·), (22)

where I is the n-by-n identity matrix, and

LA(σ)^{n×n} is not a commutative algebra unless n = 1. (23)

Lemma 3.2 [Coercivity] Let a matrix spectral density F̂ be given as in Definition 2.1, such that conditions (1)-(3) hold. Under these conditions, if F̂ is coercive on the imaginary axis, i.e. (5) holds, then so is its almost periodic part F̂sa, i.e.

there exists ε > 0 such that F̂sa(jω) ≥ εI for all ω ∈ IR. □ (24)

Proof: First observe that condition (24) holds iff

inf{det F̂sa(jω) : ω ∈ IR} > 0. (25)

Now, by the uniform continuity of det F̂ in some vertical strip (see Fact 2a(i)), it follows from the coercivity of F̂, i.e. (5), or equivalently

inf{det F̂(jω) : ω ∈ IR} > 0, (26)

that det F̂ is bounded away from zero in a vertical strip, i.e. there exists a σ < 0 such that

inf{|det F̂(s)| : s ∈ S_σ} > 0

(see also [8, Lemma 3.1b and Fact 3.1c] (where σ′ has been set equal to 0)). Thus, in view of Fact 2a(iv), and since det F̂sa = (det F̂)sa is analytic almost periodic in S_σ (see Fact 2a(iii)), it follows by e.g. [14, Theorem 3.6] that

inf{|det F̂sa(s)| : s ∈ S_σ} > 0

(see [8, Fact 3.1c]); whence (25) holds. □

Lemma 3.3 [Almost periodic spectral factorization] Let a matrix spectral density F̂ be given as in Definition 2.1, such that conditions (1)-(3) hold, and assume that it is almost periodic, i.e.

F̂ = F̂sa. (27)

Under these conditions, if F̂ is coercive on the imaginary axis, i.e. (24) holds, then F̂ has an invertible almost periodic spectral factor R̂, such that R̂ = R̂sa is in Mat(Â−) together with its inverse. □

Proof: We proceed in two steps.

Step 1: Assumptions (1)-(3) and condition (24) imply the existence of an almost periodic square matrix-valued function R̂ = R̂sa ∈ Mat(L̂TD+) such that

a)

F̂sa(jω) = R̂~(jω) · R̂(jω) for all ω ∈ IR; (28)

b)

R̂ ∈ Mat(L̂A+(0)) and inf{|det R̂(s)| : s ∈ IC+} > 0, (29a)

or equivalently, by Fact 1b,

R̂ and R̂^{−1} are in Mat(L̂A+(0)). (29b)

Indeed, assume that F̂ = F̂sa is a IC^{n×n}-valued matrix function. By condition (24), F̂ satisfies the following inequality on the imaginary axis:

x*F̂(·)x ≥ ε‖x‖² for all x ∈ IC^n.

Moreover, since, for all ω ∈ IR, F̂(jω) is a hermitian matrix (see (3)), the mean motion parameter κ given by

κ := lim_{ω→∞} (2ω)^{−1} [arg(x*F̂(s)x)]_{s=−jω}^{s=jω},

which is independent of the nonzero vector x ∈ IC^n, is such that κ = 0. It follows, by [2, Theorem 3], or equivalently by [26, Corollary 1], that (28)-(29b) hold for some invertible almost periodic matrix-valued function R̂ = R̂sa.

Step 2: [Analytic extension] The square matrix-valued function R̂ of Step 1 satisfies

R̂ and R̂^{−1} are in Mat(Â−). (30)

Indeed, denote by S(σ1, σ2) (respectively S^0(σ1, σ2)) the vertical strip {s ∈ IC : σ1 ≤ Re s ≤ σ2} (respectively {s ∈ IC : σ1 < Re s < σ2}). Rewrite now (28) as

R̂ = (R̂~)^{−1} · F̂sa. (31)

Observe (29b), where (R̂~)^{−1}(s) = R̂^{−1}(−s̄)*, and note that F̂ is in Mat(L̂A(σ)) for some σ < 0. Then by Fact 1a(i)-(ii) and Fact 2a(i)-(ii),

a) the right-hand side of equation (31) is holomorphic in S^0(σ, 0) and continuous in S(σ, 0);

b) the left-hand side of (31) has the same properties with respect to IC^0+ and IC+ respectively.

Hence [21, Theorem 7.7.1] can be applied to (31), such that by analytic extension (continuous up to the boundary) equation (31) holds in the strip S(σ, 0). This implies

R̂(σ + ·) = (R̂~)^{−1}(σ + ·) · F̂sa(σ + ·) on the jω-axis. (32)

Note now that in (32),

F̂sa(σ + ·) ∈ Mat(L̂A(0)), (33)

or equivalently exp(−σ·)Fsa(·) ∈ Mat(LA(0)); similarly

(R̂~)^{−1}(σ + ·) ∈ Mat(L̂A−(0)), (34)

or equivalently exp(σ·)R^{−1}(·) ∈ Mat(LA+(0)). Hence by (31)-(34), using the convolution in LA(0), there follows by (16) that R̂(σ + ·) ∈ Mat(L̂A(0)). Now note that exp(−σ·)R(·) has its support on t ≥ 0. Hence we get R̂(σ + ·) ∈ Mat(L̂A+(0)), or equivalently

R̂ ∈ Mat(L̂A+(σ)) for some σ < 0. (35)

Finally observe that by (29a), inf{|det R̂(s)| : s ∈ IC+} > 0; this together with (35) implies the conclusion (30) by Fact 1c. □

Lemma 3.4 [Spectral factorization without delays] Let a matrix spectral density F̂ be given as in Definition 2.1, such that conditions (1)-(3) hold, and assume that it has no delays, i.e. without loss of generality

F̂ = F̂a + I. (36)

Under these conditions, if F̂ is coercive on the imaginary axis, i.e. (5) holds, then F̂ has an invertible spectral factor R̂ without delays, such that R̂ = R̂a + R0 is in Mat(Â−) together with its inverse. □

Proof: See [9, proof of Theorem 1]. □

By Lemmas 3.2 and 3.3, it follows from condition (5) that the almost periodic part F̂sa of F̂ has an invertible almost periodic spectral factor R̂1, such that R̂1 = (R̂1)sa is in Mat(Â−) together with its inverse. It follows that the spectral density Ĝ given by

Ĝ := (R̂1~)^{−1} F̂ (R̂1)^{−1} = ((R̂1~)^{−1} F̂a (R̂1)^{−1}) + I

has no delays and is coercive on the imaginary axis, whence, by Lemma 3.4, it has an invertible spectral factor R̂2 without delays, such that R̂2 = (R̂2)a + R20, where R20 is a constant unitary matrix, is in Mat(Â−) together with its inverse. It follows that the square matrix-valued function R̂ := R̂2R̂1 ∈ Mat(Â−) is a (right) spectral factor, invertible in Mat(Â−), of the spectral density F̂. Finally, the uniqueness of R̂ up to left multiplication by a constant unitary matrix holds by Lemma 3.1. This concludes the proof of Theorem 2.1. □

Remark 3.1 Without loss of generality, a matrix spectral density F̂ which is given as in Theorem 2.1 and which is coercive on the imaginary axis, i.e. such that (1)-(3) and (5) hold, is close to the identity matrix, i.e. such that

‖F̂ − I‖∞ := sup{‖F̂(jω) − I‖ : ω ∈ IR} < 1.

Indeed, since Â− ⊂ H∞, it follows from (1)-(3) and (5) that

αI ≤ F̂(jω) ≤ βI for all ω ∈ IR, for some α and β > 0.

Hence F̂ can be rewritten as F̂ = 2^{−1}(α + β)(I + Ĝ), where Ĝ := 2(α + β)^{−1}F̂ − I is such that ‖Ĝ‖∞ ≤ (β − α)(β + α)^{−1} < 1. The idea of factorizing a spectral density close to the identity is paramount in spectral factorization theory and has been extensively used in the literature, see e.g. [35], [36], [16], [2], [26], [17], [39] and [24]. In particular the proof of [2, Theorem 3] (which is used in the proof of Lemma 3.3) is based on this idea, which leads to a conceptual method for the computation of spectral factors; see also Subsection 4.2.
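The rescaling in Remark 3.1 can be checked numerically at a single frequency sample; the matrix below is assumed data chosen for illustration:

```python
# Numeric illustration of Remark 3.1 at one frequency sample (assumed data):
# if alpha*I <= F <= beta*I, then G = 2*(alpha+beta)^{-1} F - I satisfies
# ||G|| <= (beta - alpha)/(beta + alpha) < 1, so I + G is close to the identity.
import numpy as np

F = np.array([[3.0, 1.0], [1.0, 2.0]])      # hermitian, positive definite
eigs = np.linalg.eigvalsh(F)
alpha, beta = eigs[0], eigs[-1]             # tightest coercivity bounds

G = 2.0 / (alpha + beta) * F - np.eye(2)
norm_G = np.linalg.norm(G, 2)               # spectral (induced euclidean) norm

assert alpha > 0
assert norm_G <= (beta - alpha) / (beta + alpha) + 1e-12
print(norm_G < 1.0)  # True: F is rescaled to within distance 1 of I
```

In this sample the bound is attained exactly, since G has eigenvalues ±(β − α)/(β + α).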

3.2 Proof of Theorem 2.2

This proof is based on the following two lemmas.

Lemma 3.5 [Right coprime fraction] Every transfer matrix P̂ in Mat(B̂) has a right coprime fraction (N̂, D̂) in Mat(L̂A+(σ)) ⊂ Mat(Â−) for some σ < 0, where

N(t) = Na(t) + Σ_{i=0}^∞ Ni δ(t − ti) and D(t) = Da(t) + Iδ(t),

with Na(·) and Da(·) in Mat(L1_σ) for some σ < 0. This structure is necessary as soon as one requires the denominator distribution D(·) to have no delayed impulses, i.e. to have singular atomic part Dsa(·) = Iδ(·). □

Proof: This result follows e.g. from the proof of [11, Theorem 6] or from similar results in [7, Theorem 2.1] and [15, Chapter 7]. □

Lemma 3.6 [Spectral factorization of a coprime fraction spectral density] Let P̂ ∈ Mat(B̂) have a right coprime fraction (N̂, D̂) in Mat(Â−) given, without loss of generality, as in Lemma 3.5. Then the right coprime fraction power spectral density F̂ given by (8) has a spectral factor R̂ invertible in Mat(Â−). □

Proof: It follows from identity (8) and from the fact that the right coprime fraction (N̂, D̂) is in Mat(L̂A+(σ)) for some σ < 0 (see Lemma 3.5) that F̂ satisfies conditions (1)-(3). In view of Theorem 2.1 and the proof of Lemma 3.2, it remains to be shown that

inf{det F̂(jω) : ω ∈ IR} > 0. (26)

Since (N̂, D̂) is a right coprime fraction, given as in Lemma 3.5,

Dsa = Iδ(·) (37)

and

[D̂(jω)ᵀ N̂(jω)ᵀ]ᵀ has full column rank for all ω ∈ IR. (38)

Now (37) implies that

F̂sa(jω) = (N̂sa~N̂sa + D̂sa~D̂sa)(jω) = (N̂sa~N̂sa)(jω) + I > 0 for all ω ∈ IR,

whence

(det F̂)sa(jω) = det F̂sa(jω) > 0 for all ω ∈ IR,

or equivalently, by Fact 2a(iv), there exists an Ω > 0 such that

inf{det F̂(jω) : |ω| > Ω} > 0. (39)

Furthermore, in view of (38),

det F̂(jω) > 0 for all ω ∈ IR, (40)

since, by (8),

F̂(jω) = [D̂(jω)* N̂(jω)*] · [D̂(jω)ᵀ N̂(jω)ᵀ]ᵀ ≥ 0 for all ω ∈ IR.

Hence, by (39), (40) and the continuity of det F̂ on the compact set {jω : |ω| ≤ Ω}, condition (26) holds. □

Now consider any right coprime fraction (N̂, D̂) of P̂, such that (N̂, D̂) ∈ Mat(Â−) is given as in Lemma 3.5. By Lemma 3.6, the right coprime fraction power spectral density F̂ = N̂~N̂ + D̂~D̂ has a spectral factor R̂ invertible in Mat(Â−). Hence (N̂R̂^{−1}, D̂R̂^{−1}) is a normalized right coprime fraction of P̂. This concludes the proof of Theorem 2.2. □

4 Important Particular Cases

4.1 Spectral Densities with Equally-Spaced Delays

An important particular case in applications is the spectral factorization of spectral densities with equally spaced delays, resulting in spectral factors also having equally spaced delays; see e.g. Example 2 below. It can be shown that a result similar to Theorem 2.1 holds in this framework: see Theorem 4.1 below. This can be done by using the algebras LAT(σ), LA+T(σ) and LA−T(σ), where T is a fixed positive real number, which are closed subalgebras respectively of LA(σ), LA+(σ) and LA−(σ), and which are defined as follows:

LA+T(σ) := {f ∈ A(σ) : fsa(·) = Σ_{i=0}^∞ fi δ(· − iT), where fi ∈ IC for i = 0, 1, ...}; (41)

LA−T(σ) := {f ∈ LTD− : f(−·) ∈ LA+T(σ)}; (42)

and LAT(σ) denotes the set of all distributions f ∈ LTD of the form f = fa + fsa = fa(·) + Σ_{i=−∞}^∞ fi δ(· − iT), with fa(·) a IC-valued function and fi ∈ IC for i = 0, ±1, ..., such that

∫_{−∞}^∞ |fa(t)| exp(−σ|t|) dt < ∞ and Σ_{i=−∞}^∞ |fi| exp(−σ|i|T) < ∞. (43)

It turns out that these algebras enjoy the same properties as those used in the previous section (see, in particular, Facts 1 and 2) and that the proof of Theorem 4.1 goes along the lines of the proof of Theorem 2.1, where one should use [18, §14] instead of [2, Theorem 3] in the proof of Lemma 3.3 (Step 1). See [39] for more detail.

Theorem 4.1 [Spectral factorization with equally-spaced delays] Let a matrix spectral density F̂ be given as in Definition 2.1, such that conditions (1)-(3) hold; assume, in addition, that F̂ has T-equally-spaced delays, i.e. F̂+ ∈ Mat(L̂A+T(σ)) for some σ < 0. Under these conditions, F̂ has an invertible spectral factor R̂, such that R̂ is in Mat(Â−) with T-equally-spaced delays together with its inverse, i.e. R̂ and R̂^{−1} ∈ Mat(L̂A+T(σ)) for some σ < 0, if and only if F̂ is (uniformly) coercive on the imaginary axis, i.e. (5) holds. Moreover, if this condition holds, then all invertible spectral factors of F̂ are unique up to left multiplication by a constant unitary matrix and thus have T-equally-spaced delays. □

Remark 4.1 Any spectral factor R̂ of a spectral density with T-equally-spaced delays is such that R̂sa is periodic along any vertical line in IC+. More precisely, for any σ0 ≥ 0, the complex-matrix valued function of a real variable ω ↦ R̂sa(σ0 + jω) is periodic of period 2πT^{−1}.

Remark 4.2 A result similar to Theorem 4.1 can be stated for the case that the delays are integer multiples of a finite number of positive real numbers that are linearly independent over ZZ. In that case, F̂sa is quasi-periodic [1], and one can use [1, Theorem 3] for getting a result similar to Lemma 3.3. The resulting spectral factor has delays that are integer multiples of the aforementioned real numbers.

[Figure 1: Distributed parameter circuit with transmission lines. The figure shows two RLCG transmission lines (parameters Rk, Lk, Ck, Gk with σk = Rk/Lk = Gk/Ck), loaded by Z1 and Z2, driven by the inputs u1(t) and u2(t), with summed output y(t).]

We conclude this section by a simple but nontrivial illustrative example.

Example 2: Consider the distributed parameter circuit depicted in Fig. 1, consisting of two RLCG transmission lines without distortion, i.e. Rk/Lk = Gk/Ck =: σk, k = 1, 2, loaded by resistances Zk, k = 1, 2, and coupled by an ideal summator realized by an operational amplifier. The inputs (controls) are the voltages u1(t), u2(t) and the output (observation) is the measured voltage y(t) = y1(t) + y2(t), where yk(t), k = 1, 2, denotes the voltage on Zk at time t.

It has been proved in [19, Formula (13)] (see the appendix for some detail) that the transfer function P̂k, k = 1, 2, of each transmission line is given by the formula

P_k(s) = y_k(s)/u_k(s) = [((1 + ρ_k)/μ_k) e^{−s r_k}] / [1 + (ρ_k/μ_k²) e^{−2 s r_k}],  k = 1, 2,

where

z_k = √(L_k/C_k),  r_k = 1/v_k = √(L_k C_k),  ρ_k = (Z_k − z_k)/(Z_k + z_k),  and  μ_k = e^{δ_k r_k}.

The parameters z_k, v_k and ρ_k are called, respectively, the wave impedance, the velocity of wave propagation and the reflection coefficient. Now, since

y(t) = y_1(t) + y_2(t),

the circuit has the following transfer function matrix:

P(s) = [P_1(s), P_2(s)] = [ ((1 + ρ_1)/μ_1) e^{−s r_1} / (1 + (ρ_1/μ_1²) e^{−2 s r_1}) ,  ((1 + ρ_2)/μ_2) e^{−s r_2} / (1 + (ρ_2/μ_2²) e^{−2 s r_2}) ].

Since |ρ_k/μ_k²| < 1, k = 1, 2 (see [19, Text after formula (9)]), the following geometric series expansion holds:

P_k(s) = ((1 + ρ_k)/μ_k) e^{−s r_k} Σ_{n=0}^{∞} (−ρ_k/μ_k²)^n e^{−2 r_k n s}.
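As a quick numeric sanity check of this expansion (the parameter values ρ_k = 0.5, μ_k = 1.2, r_k = 1 and the evaluation point below are our own illustrative choices, satisfying |ρ_k/μ_k²| < 1; they are not taken from [19]), a truncated geometric series reproduces the closed-form P_k(s):

```python
import cmath

rho, mu, r = 0.5, 1.2, 1.0      # illustrative values with |rho/mu**2| < 1
s = 0.4 + 2.0j                  # sample point in the closed right half plane

# closed-form transfer function of one distortionless line
closed = (1 + rho) / mu * cmath.exp(-s * r) / (1 + rho / mu**2 * cmath.exp(-2 * s * r))

# truncated geometric series expansion of the same transfer function
series = (1 + rho) / mu * cmath.exp(-s * r) * sum(
    (-rho / mu**2) ** n * cmath.exp(-2 * r * n * s) for n in range(60)
)
```

Since the common ratio has modulus |ρ/μ²| e^{−2·Re(s)·r} < 1, the truncation error after 60 terms is far below machine precision.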

By applying the inverse Laplace transform to the last identity, it follows that

P_k(t) = ((1 + ρ_k)/μ_k) Σ_{n=0}^{∞} (−ρ_k/μ_k²)^n δ(t − (2n + 1) r_k).

Since each P_k has impulses located at the points (2n + 1) r_k, n ≥ 0, the impulse response matrix P(t) has equally spaced delayed impulses if and only if r_1 and r_2 are commensurate (i.e. rationally related), i.e. the velocities of wave propagation in the two lines are commensurate. More precisely, if r_1 and r_2 are rationally related such that r_1 = q · r_2 for some rational number q = n_q/d_q, where n_q and d_q are integers, then P(t) has T-equally-spaced delayed impulses with T = r_1/n_q = r_2/d_q.

Moreover, P is proper-stable; more precisely, P = P_sa is in A_-^{1×2}. It follows that the pair (N, D) := (P, I), where I is the two-by-two identity matrix, is a right coprime fraction of P. By the proof of Theorem 2.2 and by Theorem 4.1, the corresponding right coprime fraction power spectral density F := N*N + D*D = I + P*P = F_sa has a spectral factor R = R_sa = Σ_{k=0}^{∞} R_k exp(−skT) invertible in A_-^{2×2}. In addition, the power spectral density F reads F(s) = Σ_{i=−∞}^{+∞} F_i exp(−siT), such that, under the conformal mapping transformation z = exp(−sT), an approximate spectral factor R of F can be computed, e.g., by a Bauer-type method [40], [39], by a Newton-type method [37], [38], or via a discrete-time Riccati equation, see e.g. [28].
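The Bauer-type step can be sketched numerically in the scalar case (a minimal illustration under our own conventions, not the full matrix method of [40]; the function name bauer_factor and the truncation order n are assumptions): after the conformal mapping z = exp(−sT), F becomes a Laurent series in z with coefficients F_i, and the last row of the Cholesky factor of a large banded Toeplitz section with symbol F approximates the coefficients of the spectral factor.

```python
import numpy as np

def bauer_factor(f, n=400):
    """Bauer-type spectral factorization sketch, scalar case.

    f[k] = F_k for k = 0..m are the coefficients of a spectral density
    F(z) = sum_{|i|<=m} F_|i| z^i (real symmetric symbol), assumed coercive,
    i.e. F > 0 on the unit circle.  Returns r[0..m] such that
    F(z) = R(z) R(1/z) approximately, with R(z) = sum_k r[k] z^k.
    """
    m = len(f) - 1
    c = np.zeros(n)
    c[: m + 1] = f
    # n-by-n banded Toeplitz section of the symbol F
    T = np.array([[c[abs(i - j)] for j in range(n)] for i in range(n)])
    L = np.linalg.cholesky(T)          # lower-triangular Cholesky factor
    # as n grows, the last row of L converges to the factor coefficients
    return L[-1, -(m + 1):][::-1]
```

For instance, for F(z) = 1.25 + 0.5(z + z^{-1}) = (1 + 0.5 z)(1 + 0.5 z^{-1}), bauer_factor([1.25, 0.5]) returns approximately [1.0, 0.5].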

Remark 4.3 Observe that, in view of Theorem 2.1 and the proof of Lemma 3.6, the commensurability assumption is not needed for the existence of an invertible spectral factor of the spectral density F considered in Example 2 above. Hence Theorem 2.1 provides here a tool for investigating spectral factors of spectral densities in the neighborhood of the spectral density above; the latter deserves to be called a nominal spectral density all the more because a spectral factor of such a spectral density is computable by discrete spectral factorization algorithms such as those mentioned above.

4.2 Spectral Densities Close to the Identity

Inspired by Remark 3.1, we conclude this section by giving a sufficient condition for the existence of spectral factors with arbitrary delays. This condition concerns spectral densities that are close to the identity matrix. It leads to a conceptual method for the computation of a spectral factor, which is based on the alternating projection principle, see e.g. [35], [2, proof of Theorem 1], or, equivalently, on a fixed point equation leading to a causal power series expansion of the spectral factor, see e.g. [16, Section 9.5] or [13, Corollary 1.2, p. 39].

Theorem 4.2 [Spectral factorization close to the identity] Let a matrix spectral density F be given as in Definition 2.1, such that conditions (1)-(3) hold. Under these conditions, if F_sa is of the form

F_sa = k (I − G),   (44)

where k is a positive constant and G, in Mat(LA(σ)) for some σ < 0, is such that

‖G‖_{1,0} < 1,   (45)

then F has a spectral factor R invertible in Mat(A_-). Moreover this spectral factor is given by

R = R_2 R_1,   (46)

where R_1 is the spectral factor of F_sa which is given by

R_1 = k^{1/2} (I − S),   (47)

where

S := L[(N ∗ G)_+] ∈ Mat(A_-),   (48)

where L[F] denotes the Laplace transform of F ∈ Mat(LTD), and N ∈ Mat(LTD_-) is the solution to the fixed point equation

N = I + (N ∗ G)_-,   (49)

and where

R_2 is a spectral factor of (R_1^{-*} F_a R_1^{-1}) + I. □   (50)

Proof: First consider, for any σ_1 ≤ 0, the following class of Laplace transformable distributions with support on ℝ:

A(σ_1) := {f ∈ LA(σ_1) : f_a ≡ 0}.   (51)

Note that A(σ_1) is a closed, i.e. Banach, subalgebra of LA(σ_1), for any σ_1 ≤ 0. Now observe that F_sa is in Mat(A(0)). Therefore, by using the factorization theorem in [16, Section 9.5, pp. 211-215] in the framework of the (noncommutative) Banach algebra Mat(A(0)) (see also e.g. [36, Section 3.7, pp. 72-81]), it follows from (44)-(45) that F_sa has a spectral factor R_1 in Mat(A(0)), which is given by (47)-(49). Now, by arguments similar to those used in Step 2 of the proof of Lemma 3.3 (i.e. essentially by using an analytic extension technique), it can be proved that R_1 is actually in Mat(A_-) together with its inverse. The final conclusion is obtained by following the lines of the proof of Theorem 2.1. □
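The fixed point equation (49) can be made concrete in a scalar toy example with commensurate unit delays (a sketch under our own conventions, not the paper's distributional setting: the Laurent-coefficient representation, the function names, the finite iteration count, and the choice of assigning the zero-delay term to the causal projection are all assumptions). The iteration N ← I + (N ∗ G)_- is a contraction whenever the norm of G is less than one, and the computed causal part S = (N ∗ G)_+ then satisfies the splitting identity N(z)(1 − G(z)) = 1 − S(z), from which a spectral factor is built as in (47).

```python
from collections import defaultdict

def conv(a, b):
    # multiply two Laurent series given as {power: coefficient} dicts
    c = defaultdict(float)
    for p, x in a.items():
        for q, y in b.items():
            c[p + q] += x * y
    return dict(c)

def proj_minus(a):
    # (.)_- : keep the strictly anticausal terms (powers < 0)
    return {p: x for p, x in a.items() if p < 0}

def fixed_point_split(g, iters=50):
    # iterate N <- 1 + (N * g)_-, a contraction since sum(|g_i|) < 1;
    # then S := (N * g)_+ gives the splitting N(z) (1 - g(z)) = 1 - S(z)
    n = {0: 1.0}
    for _ in range(iters):
        n = {0: 1.0, **proj_minus(conv(n, g))}
    s = {p: x for p, x in conv(n, g).items() if p >= 0}
    return n, s
```

For g(z) = 0.3(z + z^{-1}), so that the l¹-norm of g is 0.6 < 1, the computed N is anticausal, S is causal, and the splitting identity holds to machine precision on the unit circle.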

Corollary 4.1 Consider a transfer (function) matrix P ∈ Mat(B) given by P = P_s + P_u, where P_s = P_a + P_sa is a proper stable transfer matrix in Mat(A_-) and P_u is a completely unstable strictly proper rational matrix. Let P have a right coprime fraction (N, D) in Mat(A_-) given by

N = P_s · D_u + N_u ,  D = D_u ,   (52)

where (N_u, D_u) is a proper stable rational right coprime fraction of the unstable part P_u of P, such that, without loss of generality,

D(∞) = D_u(∞) = I.   (53)

Under these conditions, if

‖P_sa‖_{1,0} = ‖P_sa‖_{A(0)} < 1,   (54)

then the right coprime fraction power spectral density F given by

F := N*N + D*D   (7)

has a spectral factor R = R_a + R_sa invertible in Mat(A_-), whence P ∈ Mat(B) has normalized right coprime fractions in Mat(A_-), unique up to multiplication by a unitary constant matrix. □

Proof: Observe that, in view of (52)-(53), the singular atomic part F_sa of F is given by (44), with k = 1 and G = −(P_sa)* P_sa, so that, by (54), (45) holds. The conclusion follows by Theorem 4.2. □

Remark 4.4 Recall that the norm ‖·‖_{1,0} is an upper bound of the H∞-norm ‖·‖_∞. It follows that the small-gain type sufficient condition (54) guarantees feedback stability robustness with respect to small delays in the feedback loop, see e.g. [4, Theorem 1] and [23, Theorem 1.1 and Remark 8.6]. Condition (54) is thus meaningful from the system theoretic point of view.

5 Conclusion

The solution to the spectral factorization problem has been analyzed for multivariable distributed parameter systems with an impulse response having an infinite number of arbitrarily delayed impulses. A coercivity criterion for the existence of a spectral factor has been derived in the general case and particularized to the important special case of equally-spaced delays. In the latter case, it has been applied to a system consisting of the parallel interconnection of two transmission lines without distortion. In addition, a small-gain type sufficient condition has been derived for the existence of spectral factors with arbitrary delays. The latter condition has also been shown to be meaningful from the system theoretic point of view.

Finally, an interesting subject of future research is the development of computational methods for spectral factorization, especially in order to obtain numerical procedures for solving the Linear-Quadratic optimal control problem for infinite-dimensional state-space semigroup systems with bounded or unbounded observation and/or control operators, and for solving related Riccati equations.

Acknowledgment

This work was supported by the Human Capital and Mobility European programme (project number CHRXCT-930-402) and by a Research Grant (1997-2001) from the Facultés Universitaires Notre-Dame de la Paix at Namur.

The authors thank Dr. Piotr Grabowski (Academy of Mining and Metallurgy, Institute of Automatics, Cracow, Poland) and Prof. L. Rodman and I.M. Spitkovsky (College of William & Mary, Dept. of Mathematics, Williamsburg, Virginia, USA); the former for helpful discussions concerning Example 2, the latter two for indications concerning the literature. They also thank the reviewers of this paper for their constructive remarks; they especially acknowledge that one of the reviewers mentioned references [2] and [26], which led to the proof of the sufficiency part in Theorem 2.1; these references were translated by Prof. G. Plotnikova (FUNDP, Dept. of Mathematics, Namur, Belgium), whose help is gratefully acknowledged.

Appendix: Transfer function of a transmission line

A transmission line model as in Example 2 is described by the following equations:

L_k ∂i_k/∂t(x, t) = −∂v_k/∂x(x, t) − R_k i_k(x, t),  0 ≤ x ≤ 1, t ≥ 0,
C_k ∂v_k/∂t(x, t) = −∂i_k/∂x(x, t) − G_k v_k(x, t),  0 ≤ x ≤ 1, t ≥ 0,
i_k(1, t) Z_k = v_k(1, t),  t ≥ 0,
v_k(0, t) = u_k(t),  t ≥ 0,
y_k(t) = v_k(1, t).   (55)

The d'Alembert solution of the first two equations above is

i_k(x, t) = e^{−δ_k t} [φ_k(x − v_k t) − ψ_k(x + v_k t)] / (2 z_k),
v_k(x, t) = e^{−δ_k t} [φ_k(x − v_k t) + ψ_k(x + v_k t)] / 2,   (56)

where ψ_k, φ_k are arbitrary smooth functions. Substituting (56) into the boundary conditions, we get

ψ_k(1 + v_k t) = ρ_k φ_k(1 − v_k t),   (57)
(1/2) e^{−δ_k t} [φ_k(−v_k t) + ψ_k(v_k t)] = u_k(t),   (58)
y_k(t) = (1/2) e^{−δ_k t} [φ_k(1 − v_k t) + ψ_k(1 + v_k t)].   (59)

Since the first equation holds for all t ∈ ℝ, replacing t by t − r_k yields

ψ_k(v_k t) = ρ_k φ_k(2 − v_k t).   (60)

Taking (60) and (57) into account in (58) and (59), we get

(1/2) e^{−δ_k t} [φ_k(−v_k t) + ρ_k φ_k(2 − v_k t)] = u_k(t),
y_k(t) = ((1 + ρ_k)/2) e^{−δ_k t} φ_k(1 − v_k t).   (61)

Now define the following new state variables:

x_{2k}(t) := (e^{−δ_k t}/2) φ_k(−v_k t),
x_{1k}(t) := x_{2k}(t − r_k) = μ_k (e^{−δ_k t}/2) φ_k(1 − v_k t).   (62)

It follows that

x_{1k}(t) = x_{2k}(t − r_k),
x_{2k}(t) + (ρ_k/μ_k²) x_{1k}(t − r_k) = u_k(t),
y_k(t) = ((1 + ρ_k)/μ_k) x_{2k}(t − r_k).   (63)

Indeed, the first equation of (63) clearly follows from the second equation of (62), and the third equation of (63) from the second equation of (61) and the second equation of (62). Substituting

(1/μ_k²) x_{1k}(t − r_k) = (1/μ_k²) x_{2k}(t − 2r_k) = (e^{−δ_k(t − 2r_k)}/(2μ_k²)) φ_k(−v_k(t − 2r_k)) = (e^{−δ_k t}/2) φ_k(2 − v_k t)

and the first equation of (62) into the first equation of (61) yields the second equation of (63). Finally, the transmission line transfer function P_k is obtained by applying the Laplace transform to equations (63).
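Spelling out this last step (a routine computation; the hatted symbols denote Laplace transforms), each delay t − r_k becomes a factor e^{−s r_k}, so (63) transforms into

```latex
\hat{x}_{1k}(s) = e^{-s r_k}\,\hat{x}_{2k}(s), \qquad
\Bigl(1 + \tfrac{\rho_k}{\mu_k^{2}}\, e^{-2 s r_k}\Bigr)\hat{x}_{2k}(s) = \hat{u}_k(s), \qquad
\hat{y}_k(s) = \tfrac{1+\rho_k}{\mu_k}\, e^{-s r_k}\,\hat{x}_{2k}(s),
```

whence P_k(s) = ŷ_k(s)/û_k(s) = ((1 + ρ_k)/μ_k) e^{−s r_k} / (1 + (ρ_k/μ_k²) e^{−2 s r_k}), as used in Example 2.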

References

[1] R.G. Babadzhanyan and V.S. Rabinovich, Systems of integral-difference equations on the half-line, Dokl. Akad. Nauk ArmSSR, Vol. 81, No. 3, 1985, pp. 107-111. (in Russian)
[2] R.G. Babadzhanyan and V.S. Rabinovich, On the factorization of almost-periodic functional operators, Differential and Integral Equations and Complex Analysis, University Press, Elista, 1986, pp. 13-22. (in Russian)
[3] J.A. Ball, Yu.J. Karlovich, L. Rodman and I.M. Spitkovsky, Sarason interpolation and Toeplitz Corona theorem for almost periodic matrix functions, manuscript, 1998.
[4] J.F. Barman, F.M. Callier and C.A. Desoer, L2-stability and L2-instability of linear time-invariant distributed feedback systems perturbed by a small delay in the loop, IEEE Transactions on Automatic Control, Vol. AC-18, No. 5, 1973, pp. 479-484.
[5] F.M. Callier and C.A. Desoer, An algebra of transfer functions for distributed linear time-invariant systems, IEEE Transactions on Circuits and Systems, Vol. 25, 1978, pp. 651-662 (Ibidem, Vol. 26, 1979, p. 360).
[6] F.M. Callier and C.A. Desoer, Simplifications and clarifications on the paper "An algebra of transfer functions for distributed linear time-invariant systems", IEEE Transactions on Circuits and Systems, Vol. 27, 1980, pp. 320-323.
[7] F.M. Callier and C.A. Desoer, Stabilization, tracking and disturbance rejection in multivariable convolution systems, Annales de la Société Scientifique de Bruxelles, T. 94, 1980, pp. 7-51.
[8] F.M. Callier and J. Winkin, The spectral factorization problem for SISO distributed systems, in "Modelling, robustness and sensitivity reduction in control systems" (R.F. Curtain, ed.), NATO ASI Series, Vol. F34, Springer-Verlag, Berlin Heidelberg, 1987, pp. 463-489.
[9] F.M. Callier and J. Winkin, On spectral factorization and LQ-optimal regulation for multivariable distributed systems, International Journal of Control, Vol. 52, No. 1, July 1990, pp. 55-75.
[10] F.M. Callier and J. Winkin, LQ-optimal control of infinite-dimensional systems by spectral factorization, Automatica, Vol. 28, No. 4, 1992, pp. 757-770.
[11] F.M. Callier and J. Winkin, Infinite dimensional system transfer functions, in Analysis and optimization of systems: state and frequency domain approaches to infinite-dimensional systems, R.F. Curtain, A. Bensoussan and J.L. Lions (eds.), Lecture Notes in Control and Information Sciences, Springer-Verlag, Berlin, New York, 1993, pp. 72-101.
[12] F.M. Callier and J. Winkin, Spectral factorization of a spectral density with arbitrary delays, in Open problems in mathematical systems and control theory, V.D. Blondel, E.D. Sontag, M. Vidyasagar and J.C. Willems (eds.), Springer-Verlag, London, 1999, Chapter 17, pp. 79-82.
[13] K. Clancey and I. Gohberg, Factorization of matrix functions and singular integral operators, Birkhäuser Verlag, Basel, 1981.
[14] C. Corduneanu, Almost periodic functions, Interscience Publishers, John Wiley & Sons, NY, 1968.
[15] R.F. Curtain and H. Zwart, An introduction to infinite-dimensional linear systems theory, Springer-Verlag, New York, 1995.
[16] C.A. Desoer and M. Vidyasagar, Feedback systems: Input-Output properties, Academic Press, New York, 1975.
[17] I.C. Gohberg and I.A. Fel'dman, Convolution equations and projection methods for their solution, AMS Translations, Providence, RI, 1974.
[18] I.C. Gohberg and M.G. Krein, Systems of integral equations on a half line with kernels depending on the difference of arguments, AMS Translations, Vol. 2, 1960, pp. 217-287.
[19] P. Grabowski, The LQ-controller problem: An example, IMA Journal of Mathematical Control and Information, Vol. 11, 1994, pp. 355-368.
[20] G. Gripenberg, S-O. Londen and O.J. Staffans, Volterra integral and functional equations, Encyclopedia of Math. and its Appl., Vol. 34, Cambridge University Press, Cambridge, 1990.
[21] E. Hille, Analytic function theory, Vol. 1, Ginn and Co., Waltham, MA, 1959.
[22] M.G. Krein, Integral equations on a half line with kernels depending upon the difference of the arguments, AMS Translations, Vol. 22, 1962, pp. 163-288.
[23] H. Logemann, R. Rebarber and G. Weiss, Conditions for robustness and nonrobustness of the stability of feedback systems with respect to small delays in the feedback loop, SIAM Journal on Control and Optimization, Vol. 34, 1996, pp. 572-600.
[24] L. Rodman, I.M. Spitkovsky and H.J. Woerdeman, Caratheodory-Toeplitz and Nehari problems for matrix valued almost periodic functions, Transactions of the AMS, to appear.
[25] M. Rosenblum and J. Rovnyak, Hardy classes and operator theory, Oxford University Press, New York, 1985.
[26] I.M. Spitkovsky, Factorization of almost periodic matrix-valued functions, Mathematical Notes, Vol. 45, 1989, pp. 482-488.
[27] O.J. Staffans, Quadratic optimal control of stable systems through spectral factorization, Math. Control Signals Systems, Vol. 8, 1995, pp. 167-197.
[28] O.J. Staffans, On the discrete and continuous time infinite-dimensional algebraic Riccati equations, Systems and Control Letters, Ser. A, No. 178, 1996.
[29] O.J. Staffans, Quadratic optimal control through coprime and spectral factorizations, Åbo Akademi Reports on Computer Science and Mathematics, Vol. 29, 1996, pp. 131-138.
[30] O.J. Staffans, Coprime factorizations and well-posed linear systems, submitted to SIAM Journal on Control and Optimization, 1997.
[31] O.J. Staffans, Quadratic optimal control of well-posed linear systems, submitted to SIAM Journal on Control and Optimization, 1997.
[32] M. Vidyasagar, Control system synthesis: A factorization approach, MIT Press, Cambridge, MA, 1985.
[33] M. Weiss, Riccati equations in Hilbert spaces: A Popov function approach, Doctoral Thesis, University of Groningen (NL), 1994.
[34] M. Weiss and G. Weiss, Optimal control of stable weakly regular linear systems, Math. Control Signals Systems, Vol. 10, 1997, pp. 287-330.
[35] N. Wiener and P. Masani, The prediction theory of multivariate stochastic processes, II: The linear predictor, Acta Math., Vol. 99, 1958, pp. 93-137.
[36] J.C. Willems, The analysis of feedback systems, The M.I.T. Press, Cambridge, MA, 1971.
[37] G.T. Wilson, The factorization of matricial spectral densities, SIAM Journal on Applied Mathematics, Vol. 23, 1972, pp. 420-426.
[38] G.T. Wilson, A convergence theorem for spectral factorization, Journal of Multivariate Analysis, Vol. 8, 1978, pp. 222-232.
[39] J. Winkin, Spectral factorization and feedback control for infinite-dimensional systems, Doctoral Thesis, Department of Mathematics, Facultés Universitaires Notre-Dame de la Paix, Namur (Belgium), May 1989.
[40] D.C. Youla and N.N. Kazanjian, Bauer-type factorization of positive matrices and the theory of matrix polynomials orthogonal on the unit circle, IEEE Transactions on Circuits and Systems, Vol. 25, 1978, pp. 57-69.

Facultés Universitaires Notre-Dame de la Paix,
Department of Mathematics,
Rempart de la Vierge 8, B-5000 Namur, Belgium;
e-mail: [email protected], [email protected]

93C35, 93C22, 93C80, 49N10, 93D09