Orthogonal Rational Functions and Structured Matrices


Orthogonal Rational Functions and Structured Matrices

Marc Van Barel, Dario Fasino, Luca Gemignani, Nicola Mastronardi

Report TW 350, November 2002

Katholieke Universiteit Leuven, Department of Computer Science, Celestijnenlaan 200A – B-3001 Heverlee (Belgium)


Abstract

The space of all proper rational functions with prescribed poles is considered. Given a set of points z_i in the complex plane and the weights w_i, we define the discrete inner product

⟨φ, ψ⟩ := \sum_{i=0}^{n} |w_i|^2 φ(z_i) \overline{ψ(z_i)}.

In this paper we derive a method to compute the coefficients of a recurrence relation generating a set of orthonormal rational basis functions with respect to the discrete inner product. We will show that these coefficients can be computed by solving an inverse eigenvalue problem for a matrix having a specific structure. In the case where all the points z_i lie on the real line or on the unit circle, the computational complexity is reduced by an order of magnitude.

Keywords: orthogonal rational functions, structured matrices, diagonal-plus-semiseparable matrices, inverse eigenvalue problems, recurrence relation
AMS(MOS) Classification: 42C05, 65F18, 65D15.

ORTHOGONAL RATIONAL FUNCTIONS AND STRUCTURED MATRICES

MARC VAN BAREL∗, DARIO FASINO†, LUCA GEMIGNANI‡, AND NICOLA MASTRONARDI§


1. Introduction and motivation. Proper rational functions are an essential tool in many areas of engineering, such as system theory and digital filtering, where polynomial models are inappropriate due to their unboundedness at infinity. In fact, for physical reasons the transfer functions describing linear time-invariant systems often have to be bounded on the real line. Furthermore, approximation problems with rational functions are at the core of, e.g., the partial realization problem [20], model reduction problems [4, 5, 11], and robust system identification [5, 24].

Recently, strong interest has been devoted to a variety of rational interpolation problems where a given function is to be approximated by means of a rational function with prescribed poles (see [6, 7, 32] and the references given therein). By linearization, such problems naturally lead to linear algebra computations involving structured matrices. Exploiting the close connections between the functional problem and its matrix counterpart generally allows us to take advantage of the special structure of these matrices to speed up the approximation scheme. For example, in [25] efficient algorithms are designed for rational function evaluation and interpolation based on their connection with displacement-structured matrices.

∗Department of Computer Science, Katholieke Universiteit Leuven, Celestijnenlaan 200A, B-3001 Heverlee, Belgium, e-mail: [email protected]. The research of the first author is partially supported by the Fund for Scientific Research (FWO), “SMA: Structured matrices and their applications”, grant G#0078.01, “ANCILA: Asymptotic analysis of the convergence behavior of iterative methods in numerical linear algebra”, grant #G.0176.02, the K.U.Leuven research project “SLAP: Structured linear algebra package”, grant OT-00-16, and the Belgian Programme on Interuniversity Poles of Attraction, initiated by the Belgian State, Prime Minister’s Office for Science, Technology and Culture, grant IPA V/22. The scientific responsibility rests with the authors.

†Dipartimento di Matematica e Informatica, Università degli Studi di Udine, Viale delle Scienze 208, 33100 Udine, Italy, e-mail: [email protected]

‡Dipartimento di Matematica, Università di Pisa, Via F. Buonarroti 2, 56127 Pisa, Italy, e-mail: [email protected]. The research of the third author is partially supported by G.N.C.S. and MIUR funds.

§Istituto per le Applicazioni del Calcolo “M. Picone”, sez. Bari, Consiglio Nazionale delle Ricerche, Via G. Amendola 122/I, I-70126 Bari, Italy, e-mail: [email protected]



The purpose of this paper is to devise a procedure to construct a set of proper rational functions with prescribed poles that are orthogonal with respect to a discrete inner product. Orthogonal rational functions are useful in solving multipoint generalizations of classical moment problems and associated interpolation problems; see [7] for further references on this topic. We also mention the recent appearance in the numerical analysis literature of quadrature formulas that are exact for sets of rational functions having prescribed poles, see e.g. [8, 19]. Such formulas provide greater accuracy than standard quadrature formulas when the poles are chosen in such a way as to mimic the poles present in the integrand. The construction of Gauss-type quadrature formulas is known to be a task closely related to that of orthogonalizing a set of prescribed basis functions. In the polynomial case, this fact was explored by L. Reichel [26, 27]. Indeed, in these papers the construction of polynomial sequences that are orthogonal with respect to a discrete inner product by means of their three-term recurrence relation is tied to the solution of an inverse eigenvalue problem for symmetric tridiagonal matrices, which is equivalent to the construction of Gauss quadrature formulas.

In this paper, we adapt the technique laid down in [27] for polynomial sequences to a specific set of proper rational functions. The goal is the computation of an orthonormal basis of the linear space R_n of proper rational functions φ(z) = n(z)/d(z) w.r.t. a discrete inner product ⟨·,·⟩. Here deg(n(z)) ≤ deg(d(z)) ≤ n and d(z) has a prescribed set {y_1, . . . , y_n}, y_i ∈ C, of possible zeros; moreover, we set ⟨φ, ψ⟩ := \sum_{i=0}^{n} |w_i|^2 φ(z_i) \overline{ψ(z_i)}, for φ(z), ψ(z) ∈ R_n. Such a computation arises in the solution of least squares approximation problems with rational functions with prescribed poles. Moreover, it is also closely related to the computation of an orthogonal factorization of Cauchy-like matrices whose nodes are the points z_i and y_i [16, 14].

We prove that an orthonormal basis of (R_n, ⟨·,·⟩) can be generated by means of a suitable recurrence relation. When the points z_i as well as the points y_i are all real, fast O(n²) Stieltjes-like procedures for computing the coefficients of such a relation were first devised in [14, 16]. However, as in the polynomial (Vandermonde) case [26], these fast algorithms turn out to be quite sensitive to roundoff errors, so that the computed functions are far from orthogonal. Therefore, in this paper we propose a different approach based on the reduction of the considered problem to the following inverse eigenvalue problem (DS-IEP): Find a matrix S of order n+1 whose lower triangular part is the lower triangular part of a rank 1 matrix, and a unitary matrix Q of order n+1 such that Q^H ~w = ‖~w‖ ~e_1 and Q^H D_z Q = S + D_y. Here and below ~w = [w_0, . . . , w_n]^T, D_z = diag[z_0, . . . , z_n] and D_y = diag[y_0, . . . , y_n], where y_0 can be chosen arbitrarily. Moreover, we denote by S_k the class of k × k matrices S whose lower triangular part is the lower triangular part of a rank 1 matrix. If both S and S^H belong to S_k, then S is called a semiseparable matrix.

A quite similar reduction to an inverse eigenvalue problem for a symmetric tridiagonal matrix (T-IEP) or for a unitary Hessenberg matrix (H-IEP) was also exploited in the theory of the construction of orthonormal polynomials w.r.t. a discrete inner product (see [29, 21, 3, 13, 26, 2, 28, 17] for a survey of the theory and applications of T-IEP and H-IEP). This theory can be generalized to orthonormal vector polynomials. We refer the interested reader to [1, 30, 31, 33, 9, 34]. Since invertible semiseparable matrices are the inverses of tridiagonal ones [18], we find that DS-IEP gives a generalization of T-IEP and, in particular, it reduces to T-IEP in the case where y_i, z_i ∈ R and all prescribed poles y_i are equal.

We devise a method for solving DS-IEP which fully exploits its recursive properties. This method proceeds by applying a sequence of carefully chosen Givens rotations to update the solution at the k-th step by adding new data (w_{k+1}, z_{k+1}, y_{k+1}). The unitary matrix Q can thus be determined in its factored form as a product of O(n²) Givens rotations at the cost of O(n²) arithmetic operations (ops). The complexity of forming the matrix S depends on the structural properties of its upper triangular part and, in general, it requires O(n³) ops. In the case where all the points z_i lie on the real axis, we show that S is a semiseparable matrix, so that the computation of S can be carried out using only O(n²) ops. In addition, the class S_{n+1} turns out to be closed under bilinear rational (Moebius) transformations of the form z → (αz+β)/(γz+δ). Hence, by combining these two facts, we are also able to prove that the process of forming S can be performed at the cost of O(n²) ops whenever all points z_i belong to a generalized circle (ordinary circles and straight lines) in the complex plane.

This paper is organized in the following way. In Section 2 we reduce the computation of a sequence of orthonormal rational basis functions to the solution of an inverse eigenvalue problem for matrices of the form diag[y_0, . . . , y_n] + S, with S ∈ S_{n+1}. By exploiting this reduction, we also determine relations for the recursive construction of such functions. Section 3 provides our method for solving DS-IEP in the general case, whereas the more specific situations corresponding to points lying on the real axis, on the unit circle, or on a generic circle in the complex plane are considered in Section 4. In Section 5 we present and discuss numerical experiments confirming the effectiveness and the accuracy of the proposed method and, finally, conclusions and further developments are drawn in Section 6.

2. The computation of orthonormal rational functions and its matrix framework. In this section we will study the properties of a sequence of proper rational functions with prescribed poles that are orthonormal with respect to a certain discrete inner product. We will also design an algorithm to compute such a sequence via a suitable recurrence relation. The derivation of this algorithm follows from reducing the functional problem, in a matrix setting, to the solution of an inverse eigenvalue problem involving structured matrices.

2.1. The functional problem. Let the complex numbers y_1, y_2, . . . , y_n, all different from each other, be given. Let us consider the vector space R_n of all proper rational functions having possible poles in y_1, y_2, . . . , y_n:

R_n := \mathrm{span}\left\{ 1, \frac{1}{z − y_1}, \frac{1}{z − y_2}, . . . , \frac{1}{z − y_n} \right\}.

The vector space R_n can be equipped with the inner product ⟨·,·⟩ defined below.

Definition 2.1 (Bilinear form). Given the complex numbers z_0, z_1, . . . , z_n, which together with the numbers y_i are all different from each other, and the “weights” w_i ≠ 0, i = 0, 1, . . . , n, we define a bilinear form ⟨·,·⟩ : R_n × R_n → C by

⟨φ, ψ⟩ := \sum_{i=0}^{n} |w_i|^2 φ(z_i) \overline{ψ(z_i)}.

Since there is no proper rational function φ(z) = n(z)/d(z) with deg(n(z)) ≤ deg(d(z)) ≤ n, different from the zero function, such that φ(z_i) = 0 for i = 0, . . . , n, this bilinear form defines a positive definite inner product on the space R_n.
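As a concrete illustration of this inner product, the following sketch (assuming NumPy; the nodes, poles and weights are arbitrary illustrative values, and discrete_inner_product is a hypothetical helper) evaluates ⟨φ, ψ⟩ from the function values at the nodes and checks positive definiteness on the natural basis of R_n via its Gram matrix.

```python
import numpy as np

def discrete_inner_product(phi_vals, psi_vals, w):
    """<phi, psi> = sum_i |w_i|^2 phi(z_i) conj(psi(z_i)), given values at the nodes z_i."""
    return np.sum(np.abs(w) ** 2 * phi_vals * np.conj(psi_vals))

# Arbitrary illustrative data: nodes z_i, poles y_j, weights w_i.
n = 4
z = np.linspace(1.0, 2.0, n + 1)
y = np.linspace(3.0, 4.0, n)
w = np.ones(n + 1)

# Basis of R_n: 1, 1/(z - y_1), ..., 1/(z - y_n), evaluated at the nodes.
basis_vals = np.column_stack([np.ones(n + 1)] +
                             [1.0 / (z - y[j]) for j in range(n)])

# Gram matrix of the basis; it is positive definite because no nonzero
# function in R_n vanishes at all n+1 nodes.
G = np.array([[discrete_inner_product(basis_vals[:, i], basis_vals[:, j], w)
               for j in range(n + 1)] for i in range(n + 1)])
print(np.all(np.linalg.eigvalsh(G) > 0))    # expected: True
```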

The aim of this paper is to develop an efficient algorithm for the solution of the following functional problem:


Problem 1 (Computing a sequence of orthonormal rational basis functions). Construct an orthonormal basis

~α_n(z) := [α_0(z), α_1(z), . . . , α_n(z)]

of (R_n, ⟨·,·⟩) satisfying the properties

α_j(z) ∈ R_j \ R_{j−1}  (R_{−1} := ∅),
⟨α_i, α_j⟩ = δ_{i,j}  (Kronecker delta),

for i, j = 0, 1, 2, . . . , n.

We will show later that the computation of such an orthonormal basis ~α_n(z) is equivalent to the solution of an inverse eigenvalue problem for matrices of the form diag[y_0, . . . , y_n] + S, where S ∈ S_{n+1}.

2.2. The inverse eigenvalue problem. Let D_y = diag[y_0, . . . , y_n] be the diagonal matrix whose diagonal elements are y_0, y_1, . . . , y_n, where y_0 can be chosen arbitrarily; analogously, set D_z = diag[z_0, . . . , z_n]. Recall that S_k is the class of k × k matrices S whose lower triangular part is the lower triangular part of a rank 1 matrix. Furthermore, denote by ‖~w‖ the Euclidean norm of the vector ~w = [w_0, w_1, . . . , w_n]^T.

Our approach to solving Problem 1 mainly relies upon the equivalence between that problem and the following inverse eigenvalue problem (DS-IEP):

Problem 2 (Solving an inverse eigenvalue problem). Given the numbers w_i, z_i, y_i, find a matrix S ∈ S_{n+1} and a unitary matrix Q such that

Q^H ~w = ‖~w‖ ~e_1,    (2.1)
Q^H D_z Q = S + D_y.    (2.2)

Observe that, if (Q, S) is a solution of Problem 2, then S cannot have zero rows or columns. By contradiction, if we suppose that S ~e_j = ~0, where ~e_j is the j-th column of the identity matrix I_{n+1} of order n+1, then D_z Q ~e_j = Q D_y ~e_j = y_{j−1} Q ~e_j, from which it would follow that y_{j−1} = z_i for a certain i, contradicting the assumption that the points z_i and y_j are pairwise distinct.

Results concerning the existence and the uniqueness of the solution of Problem 2 were first proven in the papers [14, 15, 16] for the specific case where y_i, z_i ∈ R and S is a semiseparable matrix. In particular, under such auxiliary assumptions, it was shown that the matrix Q is simply the orthogonal factor of a QR decomposition of a Cauchy-like matrix built from the nodes y_i and z_i. Next we give a generalization of the results of [14, 15, 16] to cover the more general situation considered here.

Theorem 2.2. Problem 2 has at least one solution. If (Q_1, S_1) and (Q_2, S_2) are two solutions of Problem 2, then there exists a unitary diagonal matrix F = diag[1, e^{iθ_1}, . . . , e^{iθ_n}] such that

Q_2 = Q_1 F,    S_2 = F^H S_1 F.

Proof. It is certainly possible to find two vectors ~u = [u_0, . . . , u_n]^T and ~v = [v_0, . . . , v_n]^T with v_i, u_i ≠ 0 and u_i v_0/(z_i − y_0) = w_i, for 0 ≤ i ≤ n. Indeed, it is sufficient to set, for example, v_i = 1 and u_i = w_i(z_i − y_0). Hence, let us consider the nonsingular Cauchy-like matrix C ≡ (u_{i−1} v_{j−1}/(z_{i−1} − y_{j−1})) and let C = QR be a QR factorization of C. From D_z C − C D_y = ~u ~v^T one easily finds that

Q^H D_z Q = R D_y R^{-1} + Q^H ~u ~v^T R^{-1} = D_y + S,


where

S = R D_y R^{-1} − D_y + Q^H ~u ~v^T R^{-1} ∈ S_{n+1}.

Moreover, Q ~e_1 = C R^{-1} ~e_1 = ~w/‖~w‖ by construction. Hence, the matrices Q and S = Q^H D_z Q − D_y solve Problem 2.

Concerning uniqueness, assume that (Q, S) is a solution of Problem 2 with S ≡ (s_{i,j}) and s_{i,j} = u_{i−1} v_{j−1} for 1 ≤ j ≤ i ≤ n+1. As S ~e_1 ≠ ~0, it follows that v_0 ≠ 0 and, therefore, we may assume v_0 = 1. Moreover, from (2.2) it is easily found that

D_z Q ~e_1 = Q ~u + y_0 Q ~e_1,

where ~u = [u_0, . . . , u_n]^T. From (2.1) we have

~u = Q^H (D_z − y_0 I_{n+1}) ~w / ‖~w‖.    (2.3)

Relation (2.2) can be rewritten as

Q^H D_z Q = ~u ~v^T + U = ~u ~v^T + R D_y R^{-1},

where U is an upper triangular matrix with diagonal entries y_i and U = R D_y R^{-1} gives its Jordan decomposition, defined up to a suitable scaling of the columns of the upper triangular eigenvector matrix R. Hence, we find that

D_z Q R − Q R D_y = Q ~u ~v^T R =: \tilde{u} \tilde{v}^T,

with \tilde{u} = Q ~u and \tilde{v} = R^T ~v, and, therefore, Q R = C ≡ (\tilde{u}_{i−1} \tilde{v}_{j−1}/(z_{i−1} − y_{j−1})) is a Cauchy-like matrix, with \tilde{u} = Q ~u uniquely determined by (2.3). This means that all the eligible Cauchy-like matrices C are obtained one from each other by multiplication on the right by a suitable diagonal matrix. In this way, from the essential uniqueness of the orthogonal factorization of a given matrix, we may conclude that Q is uniquely determined up to multiplication on the right by a unitary diagonal matrix F whose first diagonal entry is fixed equal to 1. Finally, the result for S immediately follows from using relation (2.2) again.

The above theorem says that the solution of Problem 2 is essentially unique up to a diagonal scaling. Furthermore, once the weight vector ~w and the points z_i are fixed, the determinant of S turns out to be a rational function in the variables y_0, . . . , y_n whose numerator is not identically zero. Hence, we can show that, for almost any choice of y_0, . . . , y_n, the resulting matrix S is nonsingular. The paper [16] dealt with this regular case, in the framework of the orthogonal factorization of real Cauchy matrices. In particular, it is shown there that the matrix S is nonsingular when all the nodes y_i, z_i are real and there exists an interval, either finite or infinite, containing all nodes y_i and none of the nodes z_i.
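The existence argument above is constructive. The following sketch (assuming NumPy; solve_dsiep_dense is a hypothetical helper and the data are arbitrary illustrative values) builds (Q, S) densely via the QR factorization of the Cauchy-like matrix and checks condition (2.1) together with the rank-1 structure of the lower triangular part of S. This is an O(n³) reference construction, not the fast algorithm developed in Section 3.

```python
import numpy as np

def solve_dsiep_dense(w, z, y0, y):
    """Build a solution (Q, S) of Problem 2 as in the existence proof of Theorem 2.2:
    QR-factorize the Cauchy-like matrix C = (u_i v_j / (z_i - y_j)) with v_j = 1 and
    u_i = w_i (z_i - y0), then set S = Q^H D_z Q - D_y."""
    yy = np.concatenate(([y0], y))                      # diagonal of D_y
    C = (w * (z - y0))[:, None] / (z[:, None] - yy[None, :])
    Q, R = np.linalg.qr(C)
    Q = Q * np.sign(np.diag(R))                         # make the first column of Q equal w/||w||
    S = Q.conj().T @ np.diag(z) @ Q - np.diag(yy)
    return Q, S

n = 5
z = np.linspace(2.0, 7.0, n + 1)                        # points z_i
y = np.linspace(0.1, 0.9, n)                            # poles y_1, ..., y_n
w = np.ones(n + 1)                                      # weights
Q, S = solve_dsiep_dense(w, z, -0.3, y)                 # y_0 = -0.3 chosen arbitrarily

e1 = np.zeros(n + 1); e1[0] = 1.0
print(np.allclose(Q.conj().T @ w, np.linalg.norm(w) * e1))          # condition (2.1)
u_gen, v_gen = S[:, 0], S[-1, :] / S[-1, 0]                          # rank-1 generators (v_0 = 1)
print(np.allclose(np.tril(S), np.tril(np.outer(u_gen, v_gen))))     # lower part of S has rank 1
```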

In what follows we assume that S^{-1} = H exists. It is well known that the inverse of a matrix whose lower triangular part is the lower triangular part of a rank 1 matrix is an irreducible Hessenberg matrix [18]. Hence, we will use the following notation: the matrix H = S^{-1} is upper Hessenberg with subdiagonal elements b_0, b_1, . . . , b_{n−1}; for j = 0, . . . , n−1, the j-th column H_j of H has the form

H_j^T =: [~h_j^T, b_j, ~0],    b_j ≠ 0.


The outline of the remainder of this section is as follows. First we assume that we know a unitary matrix Q and the corresponding matrix S solving Problem 2. Then we provide a recurrence relation between the columns Q_j of Q and, in addition to that, we give a connection between the columns Q_j and the values at the points z_i attained by certain rational functions satisfying a similar recurrence relation. Finally, we show that these rational functions form the basis we are looking for.

2.3. Recurrence relation for the columns of Q. Let the columns of Q be denoted as follows:

Q =: [Q_0, Q_1, . . . , Q_n].

Theorem 2.3 (Recurrence relation). For j = 0, 1, . . . , n, the columns Q_j satisfy the recurrence relation

b_j (D_z − y_{j+1} I_{n+1}) Q_{j+1} = Q_j + ([Q_0, Q_1, . . . , Q_j] D_{y,j} − D_z [Q_0, Q_1, . . . , Q_j]) ~h_j,

with Q_0 = ~w/‖~w‖, Q_{n+1} = 0 and D_{y,j} = diag[y_0, . . . , y_j].

Proof. Since Q^H ~w = ~e_1 ‖~w‖, it follows that Q_0 = ~w/‖~w‖. Multiplying relation (2.2) on the left by Q, we have

D_z Q = Q(S + D_y).

Multiplying this on the right by H = S^{-1} gives us

D_z Q H = Q(I_{n+1} + D_y H).    (2.4)

Considering the j-th column of the left- and right-hand sides of the equation above, we have the claim.

2.4. Recurrence relation for the orthonormal rational functions. In this section we define an orthonormal basis ~α_n(z) = [α_0(z), α_1(z), . . . , α_n(z)] for R_n using a recurrence relation built by means of the information contained in the matrix H.

Definition 2.4 (Recurrence for the orthonormal rational functions). Let us define α_0(z) = 1/‖~w‖ and

α_{j+1}(z) = \frac{α_j(z) + ([α_0(z), . . . , α_j(z)] D_{y,j} − z [α_0(z), . . . , α_j(z)]) ~h_j}{b_j (z − y_{j+1})},

for 0 ≤ j ≤ n − 1.

In the next theorem, we prove that the rational functions α_j(z) evaluated at the points z_i are connected to the elements of the unitary matrix Q. This will allow us to prove in Theorem 2.6 that the rational functions α_j(z) are indeed the orthonormal rational functions we are looking for. In what follows, we use the notation D_w = diag[w_0, . . . , w_n].

Theorem 2.5 (Connection between α_j(z_i) and the elements of Q). Let

~α_j = [α_j(z_0), . . . , α_j(z_n)]^T ∈ C^{n+1},    0 ≤ j ≤ n.

For j = 0, 1, . . . , n, we have Q_j = D_w ~α_j.

Proof. Substituting z_i for z in the recurrence relation for α_{j+1}(z), we get

b_j (D_z − y_{j+1} I_{n+1}) ~α_{j+1} = ~α_j + ([~α_0, . . . , ~α_j] D_{y,j} − D_z [~α_0, . . . , ~α_j]) ~h_j.


Since Q_0 = ~w/‖~w‖ = D_w ~α_0, the theorem is proved by finite induction on j, comparing the preceding recurrence with the one in Theorem 2.3.

Now it is easy to prove the orthonormality of the rational functions α_j(z).

Theorem 2.6 (Orthonormality of ~α_n(z)). The functions α_0(z), . . . , α_n(z) form an orthonormal basis for R_n with respect to the inner product ⟨·,·⟩. Moreover, we have α_j(z) ∈ R_j \ R_{j−1}.

Proof. First, we prove that ⟨α_i, α_j⟩ = δ_{i,j}. This follows immediately from the fact that Q = D_w [~α_0, . . . , ~α_n] and Q is unitary. Now we have to prove that α_j(z) ∈ R_j \ R_{j−1}. This is clearly true for j = 0 (recall that R_{−1} = ∅). Suppose it is true for j = 0, 1, 2, . . . , k < n. From the recurrence relation, we derive that α_{k+1}(z) has the form

α_{k+1}(z) = \frac{\text{rational function with possible poles in } y_0, y_1, . . . , y_k}{z − y_{k+1}}.

Also lim_{z→∞} α_{k+1}(z) ∈ C and, therefore, α_{k+1}(z) ∈ R_{k+1}. Note that cancellation of the factor (z − y_{k+1}) does not occur in the previous formula for α_{k+1}(z), because Q_{k+1} = D_w ~α_{k+1} is linearly independent of the previous columns of Q. Hence, α_{k+1}(z) ∈ R_{k+1} \ R_k.
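The recurrence of Definition 2.4 is straightforward to run numerically once H = S^{-1} is available. The sketch below (assuming NumPy; the data are arbitrary illustrative values chosen so that S is nonsingular, and the dense Cauchy-like QR construction of Theorem 2.2 is used in place of the fast algorithm) generates the α_j and checks the relation Q_j = D_w ~α_j of Theorem 2.5, from which the orthonormality of Theorem 2.6 follows.

```python
import numpy as np

n = 5
z = np.linspace(2.0, 7.0, n + 1)
y = np.linspace(0.1, 0.9, n)
w = np.ones(n + 1)
yy = np.concatenate(([-0.3], y))              # y_0, y_1, ..., y_n (y_0 arbitrary)

# Dense construction of (Q, S) via the Cauchy-like QR factorization (Theorem 2.2).
C = (w * (z - yy[0]))[:, None] / (z[:, None] - yy[None, :])
Q, R = np.linalg.qr(C)
Q = Q * np.sign(np.diag(R))
S = Q.T @ np.diag(z) @ Q - np.diag(yy)
H = np.linalg.inv(S)                          # upper Hessenberg, assumed to exist

def alpha_values(x):
    """Evaluate [alpha_0(x), ..., alpha_n(x)] by the recurrence of Definition 2.4."""
    a = [1.0 / np.linalg.norm(w)]
    for j in range(n):
        h_j, b_j = H[:j + 1, j], H[j + 1, j]
        numer = a[j] + sum(a[i] * (yy[i] - x) * h_j[i] for i in range(j + 1))
        a.append(numer / (b_j * (x - yy[j + 1])))
    return np.array(a)

# Theorem 2.5: the j-th column of Q is D_w times the vector of values alpha_j(z_i).
A = np.column_stack([alpha_values(zi) for zi in z]).T    # A[i, j] = alpha_j(z_i)
print(np.allclose(np.diag(w) @ A, Q))                     # expected: True
```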

In the next theorem, we give an alternative relation among the rational functions α_j(z).

Theorem 2.7 (Alternative relation). We have

z ~α_n(z) = ~α_n(z)(S + D_y) + α_{n+1}(z) ~s_n,    (2.5)

where ~s_n is the last row of the matrix S and the function α_{n+1}(z) is given by

α_{n+1}(z) = c \prod_{j=0}^{n} (z − z_j) \Big/ \prod_{j=1}^{n} (z − y_j)

for some constant c.

Proof. Let H_n be the last column of H = S^{-1}, and define

α_{n+1}(z) = ~α_n(z)(z I_{n+1} − D_y) H_n − α_n(z).    (2.6)

Thus, the recurrence relation given in Definition 2.4 can also be written as

~α_n(z)(z I_{n+1} − D_y) H = ~α_n(z) + α_{n+1}(z) ~e_{n+1}^T.

Multiplying on the right by S = H^{-1}, we obtain formula (2.5). To determine the form of α_{n+1}(z) we look at the definition (2.6). It follows that α_{n+1}(z) is a rational function whose numerator degree is at most one more than the degree of its denominator, and having possible poles in y_1, y_2, . . . , y_n. Recalling from Theorem 2.5 the notation ~α_j = [α_j(z_0), . . . , α_j(z_n)]^T and the equation Q = D_w [~α_0, . . . , ~α_n], we can evaluate the previous equation at the points z_i and obtain

D_z [~α_0, . . . , ~α_n] H − [~α_0, . . . , ~α_n] D_y H = [~α_0, . . . , ~α_n] + ~α_{n+1} ~e_{n+1}^T.

Since D_w D_z = D_z D_w, multiplying on the left by D_w we obtain

D_z Q H − Q D_y H = Q + D_w ~α_{n+1} ~e_{n+1}^T.


From equation (2.4) we obtain that D_w ~α_{n+1} ~e_{n+1}^T is a zero matrix; hence, it follows that α_{n+1}(z_i) = 0, for i = 0, 1, . . . , n, and this proves the theorem.

Note that α_{n+1}(z) is orthogonal to all α_i(z), i = 0, 1, . . . , n, since α_{n+1}(z) ∉ R_n and its norm is

‖α_{n+1}‖^2 = \sum_{i=0}^{n} |w_i α_{n+1}(z_i)|^2 = 0.

3. Solving the inverse eigenvalue problem. In this section we devise an efficient recursive procedure for the construction of the matrices Q and S solving Problem 2 (DS-IEP). The case n = 0 is trivial: it is sufficient to set Q = w_0/|w_0| and S = z_0 − y_0. Let us assume we have already constructed a unitary matrix Q_k and a matrix S_k for the first k+1 points z_0, z_1, . . . , z_k with the corresponding weights w_0, w_1, . . . , w_k. That is, (Q_k, S_k) satisfies

Q_k^H ~w_k = ‖~w_k‖ ~e_1,
Q_k^H D_{z,k} Q_k = S_k + D_{y,k},

where ~w_k = [w_0, . . . , w_k]^T, S_k ∈ S_{k+1}, D_{z,k} = diag[z_0, . . . , z_k] and, similarly, D_{y,k} = diag[y_0, . . . , y_k]. The idea is now to add a new point z_{k+1} with corresponding weight w_{k+1} and construct the corresponding matrices Q_{k+1} and S_{k+1}.

Hence, we start with the following relations:

\begin{bmatrix} 1 & 0 \\ 0 & Q_k^H \end{bmatrix} \begin{bmatrix} w_{k+1} \\ ~w_k \end{bmatrix} = \begin{bmatrix} w_{k+1} \\ ‖~w_k‖ ~e_1 \end{bmatrix},

\begin{bmatrix} 1 & 0 \\ 0 & Q_k^H \end{bmatrix} \begin{bmatrix} z_{k+1} & 0 \\ 0 & D_{z,k} \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & Q_k \end{bmatrix} = \begin{bmatrix} z_{k+1} & 0 \\ 0 & S_k + D_{y,k} \end{bmatrix}.

Then, we find complex Givens rotations G_i = I_{i−1} ⊕ G_{i,i+1} ⊕ I_{k−i+1},

G_{i,i+1} =: \begin{bmatrix} c & s \\ −\bar{s} & \bar{c} \end{bmatrix},    G_{i,i+1}^H G_{i,i+1} = I_2,    (3.1)

such that

G_k^H \cdots G_1^H \begin{bmatrix} 1 & 0 \\ 0 & Q_k^H \end{bmatrix} \begin{bmatrix} w_{k+1} \\ ~w_k \end{bmatrix} = \begin{bmatrix} ‖~w_{k+1}‖ \\ 0 \end{bmatrix},

and, moreover,

G_k^H \cdots G_1^H \begin{bmatrix} 1 & 0 \\ 0 & Q_k^H \end{bmatrix} \begin{bmatrix} z_{k+1} & 0 \\ 0 & D_{z,k} \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & Q_k \end{bmatrix} G_1 \cdots G_k − D_{y,k+1} ∈ S_{k+2}.

Finally, we set

Q_{k+1} = \begin{bmatrix} 1 & 0 \\ 0 & Q_k \end{bmatrix} G_1 \cdots G_k

and

S_{k+1} = G_k^H \cdots G_1^H \begin{bmatrix} z_{k+1} & 0 \\ 0 & S_k + D_{y,k} \end{bmatrix} G_1 \cdots G_k − D_{y,k+1}.


With the notation

SS([u_0, u_1, . . . , u_k]; [v_0, v_1, . . . , v_k])

we denote the lower triangular matrix whose nonzero part equals the lower triangular part of the rank 1 matrix [u_i v_j]_{i,j=0,...,k}. Moreover, with the notation

RR([η_0, η_1, . . . , η_{k−1}]; [~r_0, ~r_1, . . . , ~r_{k−2}])

we denote the strictly upper triangular matrix whose (i+1)-st row, 0 ≤ i ≤ k − 2, is equal to [~0, η_i, ~r_i^T].
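In NumPy terms, these two parametrizations can be realized as follows (a minimal sketch; the helper names SS and RR mirror the paper's notation, and the numerical values are arbitrary).

```python
import numpy as np

def SS(u, v):
    """Lower triangular matrix whose nonzero part is the lower triangular
    part of the rank-1 matrix [u_i v_j]."""
    return np.tril(np.outer(u, v))

def RR(eta, r):
    """Strictly upper triangular matrix whose i-th row is [0,...,0, eta_i, r_i^T];
    eta has k entries, and r[i] is a row vector of length k - 1 - i."""
    k = len(eta)
    T = np.zeros((k + 1, k + 1))
    for i in range(k):
        T[i, i + 1] = eta[i]
        T[i, i + 2:] = r[i] if i < len(r) else []
    return T

# Example: S_k = SS(u, v) + RR(eta, r) is the parametrization used in the update.
u, v = np.array([1.0, 2.0, 3.0]), np.array([4.0, 5.0, 6.0])
eta = [0.7, 0.8]
r = [np.array([0.9])]
print(SS(u, v) + RR(eta, r))
```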

Let us now describe in what way the Givens rotations are selected in order to perform the updating of Q_k and S_k. In the first step we construct a Givens rotation working on the new weight. Let G_{1,2} be a Givens rotation as in (3.1) such that

G_{1,2}^H \begin{bmatrix} w_{k+1} \\ ‖~w_k‖ \end{bmatrix} = \begin{bmatrix} ‖~w_{k+1}‖ \\ 0 \end{bmatrix}.

The matrix S_k is updated as follows. We know that

S_k = SS([u_0, u_1, . . . , u_k]; [v_0, v_1, . . . , v_k]) + RR([η_0, η_1, . . . , η_{k−1}]; [~r_0, ~r_1, . . . , ~r_{k−2}]).

Let

S_{k+1,1} + D_{y,k+1,1} := \begin{bmatrix} G_{1,2}^H & 0 \\ 0 & I_k \end{bmatrix} \begin{bmatrix} z_{k+1} & 0 \\ 0 & S_k + D_{y,k} \end{bmatrix} \begin{bmatrix} G_{1,2} & 0 \\ 0 & I_k \end{bmatrix},

where S_{k+1,1} and D_{y,k+1,1} are defined as follows:

S_{k+1,1} = SS([\hat u_0, \hat u_1, u_1, u_2, . . . , u_k]; [\hat v_0, \hat v_1, v_1, v_2, . . . , v_k]) + RR([\hat η_0, \hat η_1, η_1, . . . , η_{k−1}]; [\hat r_0, \hat r_1, ~r_1, . . . , ~r_{k−2}])

and

D_{y,k+1,1} = diag(y_0, \hat y_1, y_1, y_2, . . . , y_k),

with

\begin{bmatrix} α & δ \\ γ & β \end{bmatrix} := G_{1,2}^H \begin{bmatrix} z_{k+1} & 0 \\ 0 & y_0 + u_0 v_0 \end{bmatrix} G_{1,2}

and

\hat v_0 = −\bar{s} v_0,   \hat u_0 = (α − y_0)/\hat v_0,   \hat η_0 = δ,
\hat v_1 = \bar{c} v_0,   \hat y_1 = β − \hat u_1 \hat v_1,   \hat u_1 = γ/\hat v_0,
\hat η_1 = c η_0,   \hat r_0 = [−s η_0, −s ~r_0^T]^T,   \hat r_1 = c ~r_0.

In the next steps, we transform D_{y,k+1,1} into D_{y,k+1}. The first of these steps is as follows. If \hat u_1 v_1 − \hat η_1 ≠ 0, we choose t such that

t = \frac{y_1 − \hat y_1}{\hat u_1 v_1 − \hat η_1},

and define the Givens rotation working on the 2-nd and 3-rd row and column as

G_{2,3} = \begin{bmatrix} 1 & t \\ −\bar{t} & 1 \end{bmatrix} \Big/ \sqrt{1 + |t|^2}.

Otherwise, if \hat u_1 v_1 − \hat η_1 = 0, we set

G_{2,3} = \begin{bmatrix} 0 & 1 \\ −1 & 0 \end{bmatrix}.

It turns out that the associated similarity transforms S_{k+1,1} and D_{y,k+1,1} into S_{k+1,2} and D_{y,k+1,2} given by

S_{k+1,2} = SS([\hat u_0, \hat u_1, \hat u_2, u_2, . . . , u_k]; [\hat v_0, \hat v_1, \hat v_2, v_2, . . . , v_k]) + RR([\hat η_0, \hat η_1, \hat η_2, η_2, . . . , η_{k−1}]; [\hat r_0, \hat r_1, \hat r_2, ~r_2, . . . , ~r_{k−2}]),

D_{y,k+1,2} = diag(y_0, y_1, \hat y_2, y_2, y_3, . . . , y_k),

with

G_{2,3}^H \begin{bmatrix} \hat u_1 \\ u_1 \end{bmatrix} = \begin{bmatrix} \hat u_1 \\ \hat u_2 \end{bmatrix},    [\hat v_1, v_1] G_{2,3} = [\hat v_1, \hat v_2],

where the hatted quantities on the right-hand sides denote the updated values.

Moreover, \hat y_2 = \hat y_1, the updated \hat η_1 is the (1,2)-entry of

G_{2,3}^H \begin{bmatrix} \hat u_1 \hat v_1 + \hat y_1 & \hat η_1 \\ u_1 \hat v_1 & u_1 v_1 + y_1 \end{bmatrix} G_{2,3},

and

G_{2,3}^H \begin{bmatrix} \hat r_1^T \\ [η_1, ~r_1^T] \end{bmatrix} = \begin{bmatrix} \hat r_1^T \\ [\hat η_2, \hat r_2^T] \end{bmatrix}.    (3.2)

At the very end, after k steps, we obtain

S_{k+1,k} = SS([\hat u_0, \hat u_1, . . . , \hat u_k, \hat u_{k+1}]; [\hat v_0, \hat v_1, . . . , \hat v_k, \hat v_{k+1}]) + RR([\hat η_0, \hat η_1, \hat η_2, . . . , \hat η_k]; [\hat r_0, \hat r_1, \hat r_2, . . . , \hat r_{k−1}])

and

D_{y,k+1,k} = diag(y_0, y_1, . . . , y_k, \hat y_{k+1}).

The last step transforms \hat y_{k+1} into y_{k+1} by applying the transformation

u_{k+1} ← \hat u_{k+1},
v_{k+1} ← (\hat y_{k+1} − y_{k+1} + \hat u_{k+1} \hat v_{k+1})/\hat u_{k+1}.

The computational complexity of the algorithm is dominated by the cost of performing the multiplications (3.2). In general, adding the new data (w_{k+1}, z_{k+1}, y_{k+1}) requires O(k²) ops and, hence, computing S_n = S requires O(n³) ops. In the next section we will show that these estimates reduce by an order of magnitude in the case where some special distributions of the points z_i are considered, which lead to a matrix S with a structured upper triangular part. We stress the fact that, in the light of Theorem 2.2, the above procedure to solve DS-IEP can also be seen as a method to compute the orthogonal factor in a QR factorization of a suitable Cauchy-like matrix.


4. Special configurations of points z_i. In this section we specialize our algorithm for the solution of DS-IEP to cover the important case where the points z_i are assumed to lie on the real axis or on the unit circle in the complex plane. Under this assumption on the distribution of the points z_i, it will be shown that the resulting matrix S also possesses a semiseparable structure. The exploitation of this property allows us to avoid the multiplications (3.2) and to construct the matrix S_n = S by means of a simpler parametrization, using O(n) ops per point, so that the overall cost of forming S reduces to O(n²) ops.

4.1. Special case: all points z_i are real. When all the points z_i are real, we have that

S + D_y = Q^H D_z Q = (Q^H D_z Q)^H = (S + D_y)^H.

Hence, the matrix S + D_y can be written as

S + D_y = tril(~u ~v^T, 0) + D_y + triu(\bar{~v} ~u^H, 1),    (4.1)

with \bar{~v} the complex conjugate of the vector ~v. Here we adopt the Matlab¹ notation triu(B, p) for the upper triangular portion of a square matrix B, where all entries below the p-th diagonal are set to zero (p = 0 is the main diagonal, p > 0 is above the main diagonal, and p < 0 is below the main diagonal). Analogously, the matrix tril(B, p) is formed from the lower triangular portion of B by setting to zero all its entries above the p-th diagonal. In particular, the matrix S is a Hermitian semiseparable matrix, and its computation requires only O(n) ops per point, since its upper triangular part need not be computed via (3.2). Moreover, its inverse matrix H is tridiagonal; hence the vectors ~h_j occurring in Definition 2.4 have only one nonzero entry.

When all the poles y_i (and the weights w_i) are also real, all computations can be performed in real arithmetic instead of operating on complex numbers. When the poles are real or come in complex conjugate pairs, all computations can likewise be done using only real arithmetic; however, the algorithm then works with a block diagonal D_y instead of a diagonal matrix. The details of this algorithm are rather elaborate, so we do not go into them here.
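A small numerical check of these structural claims (assuming NumPy; the real data are arbitrary illustrative values, and S is built with the dense Cauchy-like QR construction of Theorem 2.2 rather than the fast update):

```python
import numpy as np

n = 5
z = np.linspace(2.0, 7.0, n + 1)               # real points z_i
y = np.linspace(0.1, 0.9, n)
w = np.ones(n + 1)
yy = np.concatenate(([-0.3], y))               # y_0 chosen arbitrarily

C = (w * (z - yy[0]))[:, None] / (z[:, None] - yy[None, :])
Q, R = np.linalg.qr(C)
S = Q.T @ np.diag(z) @ Q - np.diag(yy)

print(np.allclose(S + np.diag(yy), (S + np.diag(yy)).T))      # S + D_y is symmetric
H = np.linalg.inv(S)
band = np.tril(np.triu(H, -1), 1)                              # tridiagonal part of H
print(np.allclose(H, band, atol=1e-8 * np.linalg.norm(H)))     # H = S^{-1} is tridiagonal
```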

4.2. Special case: all points z_i lie on the unit circle. The case of points z_i located on the unit circle T = {z ∈ C : |z| = 1} in the complex plane can be reduced to the real case treated in the preceding subsection by using the concept of rational bilinear (Moebius) transformation [22]. To be specific, a function M : C ∪ {∞} → C ∪ {∞} is a Moebius transformation if

M(z) = \frac{αz + β}{γz + δ},    αδ − βγ ≠ 0,  α, β, γ, δ ∈ C.

Interesting properties of Moebius transformations are collected in [22]. In particular, a Moebius transformation defines a one-to-one mapping of the extended complex plane onto itself and, moreover, the inverse of a Moebius transformation is still a Moebius transformation, given by

M^{-1}(z) = \frac{δz − β}{−γz + α}.    (4.2)

¹Matlab is a registered trademark of The MathWorks.


The Moebius transformation M(S) of a matrix S is defined as

M(S) = (αS + βI)(γS + δI)^{-1}

if the matrix γS + δI is nonsingular. The basic fact relating semiseparable matrices to Moebius transformations is that, in a certain sense, the semiseparable structure is maintained under a Moebius transformation of the matrix. More precisely, we have the following.

Theorem 4.1. Let S ∈ S_{n+1} with S ≡ (s_{i,j}), s_{i,j} = u_{i−1} v_{j−1} for 1 ≤ j ≤ i ≤ n+1, and v_0 ≠ 0. Moreover, let D_y = diag[y_0, . . . , y_n] and assume that M maps the eigenvalues of both S + D_y and D_y into points of the ordinary complex plane, i.e., −δ/γ is different from all the points y_i, z_i. Then we find that

M(S + D_y) − M(D_y) ∈ S_{n+1}.

Proof. Observe that S ∈ S_{n+1} implies that RSU ∈ S_{n+1} for R and U upper triangular matrices. Hence, if we define R = I − ~e_1 [0, v_1/v_0, . . . , v_n/v_0], the theorem is proven by showing that

R^{-1}(M(S + D_y) − M(D_y))R ∈ S_{n+1},

which is equivalent to

R^{-1} M(S + D_y) R − M(D_y) ∈ S_{n+1}.

One immediately finds that

R^{-1} M(S + D_y) R = ((γ(S + D_y) + δI)R)^{-1} (α(S + D_y) + βI)R,

from which it follows

R^{-1} M(S + D_y) R = (γ v_0 ~u ~e_1^T + R_1)^{-1} (α v_0 ~u ~e_1^T + R_2),

where R_1 and R_2 are upper triangular matrices with diagonal entries γy_i + δ and αy_i + β, respectively. In particular, R_1 is invertible and, by applying the Sherman-Morrison formula, we obtain

R^{-1} M(S + D_y) R = (I − σ R_1^{-1} ~u ~e_1^T)(α v_0 R_1^{-1} ~u ~e_1^T + R_1^{-1} R_2),

for a suitable σ. The thesis is now established by observing that the diagonal entries of R_1^{-1} R_2 coincide with those of M(D_y) and, moreover, from the previous relation one gets

R^{-1} M(S + D_y) R − R_1^{-1} R_2 ∈ S_{n+1},

and the proof is complete.

This theorem has several interesting consequences, since it is well known that we can determine Moebius transformations mapping the unit circle T, except for one point, onto the real axis in the complex plane. To see this, let us first consider Moebius transformations of the form

M_1(z) = \frac{z + α}{z + \bar{α}},    α ∈ C \ R.

It is immediately found that M_1(z) is invertible and, moreover, M_1(z) ∈ T whenever z ∈ R. For the sake of generality, we also introduce Moebius transformations of the form

M_2(z) = \frac{z − β}{1 − \bar{β}z},    |β| ≠ 1,

which are invertible and map the unit circle T into itself. Then, by composition of M_2(z) with M_1(z) we find a fairly general bilinear transformation M(z) mapping the real axis into the unit circle:

M(z) = M_2(M_1(z)) = \frac{(1 − β)z + (α − β\bar{α})}{(1 − \bar{β})z + (\bar{α} − α\bar{β})}.    (4.3)

Hence, the inverse transformation M^{-1}(z) = M_1^{-1}(M_2^{-1}(z)), where

M_1^{-1}(z) = \frac{\bar{α}z − α}{−z + 1},    M_2^{-1}(z) = \frac{z + β}{\bar{β}z + 1},

is the desired invertible transformation, which maps the unit circle (except for one point) onto the real axis.
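A minimal numerical sketch of these maps (assuming NumPy; the values of α and β are arbitrary): M sends real abscissas onto the unit circle, and M^{-1}, built from M_1^{-1} and M_2^{-1} as above, inverts it.

```python
import numpy as np

alpha, beta = 1.0 + 1.0j, 0.3 + 0.2j          # alpha not real, |beta| != 1

M1 = lambda z: (z + alpha) / (z + np.conj(alpha))
M2 = lambda z: (z - beta) / (1.0 - np.conj(beta) * z)
M = lambda z: M2(M1(z))                        # the composition (4.3)
M1inv = lambda z: (np.conj(alpha) * z - alpha) / (-z + 1.0)
M2inv = lambda z: (z + beta) / (np.conj(beta) * z + 1.0)
Minv = lambda z: M1inv(M2inv(z))               # the inverse, via (4.2)

x = np.linspace(-5.0, 5.0, 11)                 # real abscissas
print(np.allclose(np.abs(M(x)), 1.0))          # M maps R into the unit circle: True
print(np.allclose(Minv(M(x)), x))              # M^{-1} inverts M on these points: True
```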

By combining these properties with Theorem 4.1, we obtain efficient procedures for the solution of Problem 2 in the case where all the points z_i belong to the unit circle T.

Let D_y = diag[y_0, . . . , y_n] and D_z = diag[z_0, . . . , z_n] with |z_i| = 1. Moreover, let M(z) be as in (4.3), such that M^{-1}(z_i) and M^{-1}(y_i) are finite, i.e., z_i, y_i ≠ (1 − β)/(1 − \bar{β}) = M_2(1), 0 ≤ i ≤ n. The solution (\hat Q, \hat S) of Problem 2 with input data ~w, {M^{-1}(z_i)} and {M^{-1}(y_i)} is such that

\hat Q^H diag[M^{-1}(z_0), . . . , M^{-1}(z_n)] \hat Q = \hat S + diag[M^{-1}(y_0), . . . , M^{-1}(y_n)],

from which it follows that

M(\hat Q^H diag[M^{-1}(z_0), . . . , M^{-1}(z_n)] \hat Q) = M(\hat S + diag[M^{-1}(y_0), . . . , M^{-1}(y_n)]).

By invoking Theorem 4.1, this relation gives

M(\hat Q^H diag[M^{-1}(z_0), . . . , M^{-1}(z_n)] \hat Q) = \hat Q^H D_z \hat Q = S + D_y,    S ∈ S_{n+1},

and, therefore, a solution of the original inverse eigenvalue problem with points z_i ∈ T is (Q, S), where Q = \hat Q and S is such that

S + D_y = M(\hat S + diag[M^{-1}(y_0), . . . , M^{-1}(y_n)]).    (4.4)

Having shown in (4.1) that the matrix \hat S satisfies

\hat S = tril(\hat u \hat v^T, 0) + triu(\bar{\hat v} \hat u^H, 1)

for suitable vectors \hat u and \hat v, we can use (4.4) to further investigate the structure of S. From (4.4) we deduce that

S^H + D_y^H = \bar{M}(\hat S^H + diag[M^{-1}(y_0), . . . , M^{-1}(y_n)]^H).


Here, for a Moebius transformation M(z) = (αz + β)/(γz + δ), \bar{M} denotes the transformation with conjugated coefficients, acting on a matrix S as

\bar{M}(S) = (\bar{γ}S + \bar{δ}I)^{-1}(\bar{α}S + \bar{β}I).

By applying Theorem 4.1 again, assuming that all y_i are different from zero, this implies that

S^H + D ∈ S_{n+1},

for a certain diagonal matrix D. Summing up, we obtain that

S = tril(~u ~v^T, 0) + triu(~p ~q^T, 1),    (4.5)

for suitable vectors ~u, ~v, ~p and ~q. In case one or more of the y_i are equal to zero, it can be shown that S is block lower triangular, where each of the diagonal blocks has the desired structure. The proof is rather technical; therefore, we omit it here.

From a computational viewpoint, these results can be used to devise several different procedures for solving Problem 2, in the case of points z_i lying on the unit circle, at the cost of O(n²) ops. By taking into account the semiseparable structure (4.5) of S, we can simply modify the algorithm stated in the previous section in such a way as to compute its upper triangular part without performing the multiplications (3.2). A different approach is outlined in the next subsection.

4.3. Special case: all points z_i lie on a generic circle. Another approach to deal with the preceding special case, which generalizes immediately to the case where the nodes z_i belong to a given circle in the complex plane, {z ∈ C : |z − p| = r}, exploits an invariance property of Cauchy-like matrices under a Moebius transformation of the nodes. Such a property is presented in the next lemma for the case of classical Cauchy matrices; the Cauchy-like case can be dealt with by introducing suitable diagonal scalings. With minor changes, all forthcoming arguments also apply to the case where all abscissas lie on a generic line in the complex plane, since the image of R under a Moebius transformation is either a circle or a line.

Lemma 4.2. Let z_i, y_j, for 1 ≤ i, j ≤ n, be pairwise distinct complex numbers, let

M(z) = \frac{αz + β}{γz + δ},    αδ − βγ ≠ 0,

be a Moebius transformation, and let C_M ≡ (1/(M(z_i) − M(y_j))). Then C_M is a Cauchy-like matrix with nodes z_i, y_j.

Proof. Using the notation above, we have

\frac{1}{M(z_i) − M(y_j)} = \frac{1}{αδ − βγ} \, \frac{(γz_i + δ)(γy_j + δ)}{z_i − y_j}.

Hence C_M has the form C_M ≡ (u_i v_j/(z_i − y_j)).

In the next theorem, we show how to construct a Moebius transformation mapping R onto a prescribed circle without one point, thus generalizing formula (4.3). Together with the preceding lemma, it will allow us to translate Problem 2 with nodes on a circle into a corresponding problem with real nodes. The latter can be solved with the technique laid down in Subsection 4.1.

Theorem 4.3. Let the center p ∈ C of the circle and its radius r > 0 be given. Consider the following algorithm:
1. Choose arbitrary nonzero complex numbers γ = |γ|e^{iθ_γ} and δ = |δ|e^{iθ_δ} such that e^{2i(θ_γ−θ_δ)} ≠ 1; moreover, choose θ ∈ [0, 2π].
2. Set α = pγ + r|γ|e^{iθ}.
3. Set \tilde{θ} = θ + θ_γ − θ_δ.
4. Set β = pδ + r|δ|e^{i\tilde{θ}}.
Then the function M(z) = (αz + β)/(γz + δ) is a Moebius transformation mapping the real line onto the circle {z ∈ C : |z − p| = r} without the point z = α/γ.
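Before the proof, a quick numerical check of this construction (a sketch assuming NumPy; the circle, the angles and the moduli below are arbitrary illustrative choices):

```python
import numpy as np

p, r = 2.0 + 1.0j, 3.0                       # center and radius of the circle
theta_g, theta_d, theta = 0.7, 0.2, 1.1      # arguments of gamma and delta, and theta
gamma = 1.5 * np.exp(1j * theta_g)
delta = 0.8 * np.exp(1j * theta_d)
alpha = p * gamma + r * abs(gamma) * np.exp(1j * theta)
theta_t = theta + theta_g - theta_d          # the modified angle of step 3
beta = p * delta + r * abs(delta) * np.exp(1j * theta_t)

M = lambda z: (alpha * z + beta) / (gamma * z + delta)
x = np.linspace(-50.0, 50.0, 101)            # real abscissas
print(np.allclose(np.abs(M(x) - p), r))            # images lie on the circle: True
print(abs(alpha * delta - beta * gamma) > 1e-12)   # M is a genuine Moebius map: True
```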

Proof. After simple manipulations, the equation

\left| \frac{αz + β}{γz + δ} − p \right|^2 = r^2

leads to the equation

z^2 |α − pγ|^2 + 2z \Re\big((α − pγ)\overline{(β − pδ)}\big) + |β − pδ|^2 = z^2 r^2 |γ|^2 + 2z r^2 \Re(γ\bar{δ}) + r^2 |δ|^2.    (4.6)

Here and in the following, \Re(z) denotes the real part of z ∈ C. By construction, we have |α − pγ| = r|γ| and |β − pδ| = r|δ|. Moreover,

\Re\big((α − pγ)\overline{(β − pδ)}\big) = r^2 |γδ| \Re(e^{i(θ−\tilde{θ})}) = r^2 |γδ| \Re(e^{i(θ_δ−θ_γ)}) = r^2 \Re(γ\bar{δ}).

Hence equation (4.6) is fulfilled for any real z. The missing point is given by

\lim_{z→∞} \frac{αz + β}{γz + δ} = \frac{α}{γ}.

It remains to prove that αδ − βγ ≠ 0. Indeed, we have

αδ − βγ = (pγ + r|γ|e^{iθ})δ − (pδ + r|δ|e^{i\tilde{θ}})γ
        = r|γ|δ e^{iθ} − rγ|δ| e^{i\tilde{θ}}
        = r|γδ| (e^{i(θ+θ_δ)} − e^{i(\tilde{θ}+θ_γ)})
        = r|γδ| e^{i(θ+θ_δ)} (1 − e^{2i(θ_γ−θ_δ)}).

Since e^{2i(θ_γ−θ_δ)} ≠ 1, we obtain αδ − βγ ≠ 0.

Suppose we want to solve Problem 2 with data w_i, z_i, y_i, where |z_i − p| = r. As seen from the proof of Theorem 2.2, if we let C ≡ (w_{i−1}(z_{i−1} − y_0)/(z_{i−1} − y_{j−1})) and C = QR, then a solution is (Q, S), with S = Q^H D_z Q − D_y. Let M(z) = (αz + β)/(γz + δ) be a Moebius transformation built from Theorem 4.3. Recalling the inversion formula (4.2), let \hat z_i = M^{-1}(z_i), \hat y_i = M^{-1}(y_i), \hat v_i = γ\hat y_i + δ, and

\hat w_i = w_i \, \frac{z_i − y_0}{\hat z_i − \hat y_0} \, \frac{γ\hat z_i + δ}{αδ − βγ},    0 ≤ i ≤ n.

Note that \hat z_i ∈ R, by construction. From Lemma 4.2, we also have

C ≡ \left( \frac{\hat w_{i−1}(\hat z_{i−1} − \hat y_0) \hat v_{j−1}}{\hat z_{i−1} − \hat y_{j−1}} \right).


Fig. 5.1. Computational complexity (execution time in microseconds divided by (size of the matrix)², plotted against the size of the matrix).

Again from Theorem 2.2, we see that the solution of Problem 2 with data \hat w_i, \hat z_i, \hat y_i is (Q, \hat S), where

\hat S = Q^H M^{-1}(D_z) Q − M^{-1}(D_y).

Let \tilde S = \hat S + M^{-1}(D_y). Observe that \tilde S is a diagonal-plus-semiseparable matrix [10, 12, 15]. After simple manipulations, we have

S = M(\tilde S) − D_y = (α\tilde S + βI)(γ\tilde S + δI)^{-1} − D_y.

Hence S can be recovered from \tilde S by determining the entries in its first and last rows and columns. This latter task can be carried out at a linear cost by means of several different algorithms for the solution of diagonal-plus-semiseparable linear systems. See, e.g., [10, 12, 23, 35].

5. Numerical experiments. In this section, we show the numerical behaviour of the solution of the inverse eigenvalue problem for some real points z_i and real poles y_i, i = 0, 1, . . . , n. The points are z_i = i + n, for i = 0, 1, 2, . . . , n, with corresponding weights w_i = 1. The poles are y_i = i + n − 1/2. We implemented the O(n²) algorithm in Matlab on a PC running at 833 MHz and having 512 MB of RAM. To show that the algorithm is indeed O(n²), we plot in Figure 5.1 the execution time divided by n² for the different sizes of the problem. Here we set n = 10, 20, 30, . . . , 1000. Figure 5.2 gives the maximum relative error on the eigenvalues of the computed diagonal-plus-semiseparable matrix compared to the original points z_i, for n = 10, 20, 30, . . . , 500. In Figure 5.3, the same is done for the weights. Figures 5.2 and 5.3 show that the algorithm is accurate for this specific data set. We have tried other data sets resulting in less accurate results. It seems that much depends on the layout of the poles y_i with respect to the points z_i. More research has to be done to develop a robust and accurate algorithm.
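The accuracy measure of Figures 5.2 and 5.3 can be reproduced on a smaller scale with a dense reference computation (a sketch assuming NumPy; it uses the O(n³) Cauchy-like QR construction of Theorem 2.2 rather than the fast O(n²) algorithm, so it checks only accuracy, not timing):

```python
import numpy as np

n = 50
i = np.arange(n + 1)
z = i + n                        # points z_i = i + n
w = np.ones(n + 1)               # weights w_i = 1
y = i + n - 0.5                  # poles y_i = i + n - 1/2 (y_0 acts as the free parameter)

C = (w * (z - y[0]))[:, None] / (z[:, None] - y[None, :])
Q, R = np.linalg.qr(C)
Q = Q * np.sign(np.diag(R))      # fix signs so that Q e_1 = w / ||w||
S = Q.T @ np.diag(z) @ Q - np.diag(y)

# Eigenvalues of the diagonal-plus-semiseparable matrix S + D_y must be the z_i.
eigs = np.sort(np.linalg.eigvalsh(S + np.diag(y)))
print(np.max(np.abs(eigs - z) / np.abs(z)))                          # max relative error, eigenvalues
# Recovered weights: Q^H w = ||w|| e_1, so w equals the first column of Q times ||w||.
print(np.max(np.abs(Q[:, 0] * np.linalg.norm(w) - w) / np.abs(w)))   # max relative error, weights
```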


Fig. 5.2. Relative accuracy of the eigenvalues (maximum relative error on the eigenvalues, plotted against the value of n).

Fig. 5.3. Relative accuracy of the weights (maximum relative error on the weights, plotted against the value of n).

6. Conclusions and further developments. In this paper, we have shown that solving a certain inverse eigenvalue problem gives all the information necessary to construct orthogonal rational functions in an efficient way. The algorithm developed here gives accurate results for many data sets, but we found examples for which the algorithm does not perform so well. Further research is necessary to identify whether, for these data sets, the problem is ill-conditioned or the algorithm is numerically unstable.

REFERENCES

[1] G. Ammar and W. Gragg, O(n²) reduction algorithms for the construction of a band matrix from spectral data, SIAM J. Matrix Anal. Appl., 12 (1991), pp. 426–431.

[2] G. Ammar, W. Gragg, and L. Reichel, Constructing a unitary Hessenberg matrix from spectral data, in Numerical Linear Algebra, Digital Signal Processing and Parallel Algorithms, G. Golub and P. Van Dooren, eds., vol. 70 of NATO-ASI Series F: Computer and Systems Sciences, Springer-Verlag, Berlin, 1991, pp. 385–395.

[3] D. Boley and G. Golub, A survey of matrix inverse eigenvalue problems, Inverse Problems, 3 (1987), pp. 595–622.

[4] A. Bultheel and M. Van Barel, Padé techniques for model reduction in linear system theory, J. Comput. Appl. Math., 14 (1986), pp. 401–438.

[5] A. Bultheel and B. De Moor, Rational approximation in linear systems and control, J. Comput. Appl. Math., 121 (2000), pp. 355–378.

[6] A. Bultheel, P. Gonzalez-Vera, E. Hendriksen, and O. Njastad, Orthogonal rational functions with poles on the unit circle, J. Math. Anal. Appl., 182 (1994), pp. 221–243.

[7] A. Bultheel, P. Gonzalez-Vera, E. Hendriksen, and O. Njastad, Orthogonal Rational Functions, vol. 5 of Cambridge Monographs on Applied and Computational Mathematics, Cambridge University Press, 1999.

[8] A. Bultheel, P. Gonzalez-Vera, E. Hendriksen, and O. Njastad, Quadrature and orthogonal rational functions, J. Comput. Appl. Math., 127 (2001), pp. 67–91.

[9] A. Bultheel and M. Van Barel, Vector orthogonal polynomials and least squares approximation, SIAM J. Matrix Anal. Appl., 16 (1995), pp. 863–885.

[10] S. Chandrasekaran and M. Gu, A fast and stable solver for recursively semi-separable systems of linear equations, in Structured Matrices in Mathematics, Computer Science, and Engineering, II (Boulder, CO, 1999), vol. 281 of Contemp. Math., Amer. Math. Soc., Providence, RI, 2001, pp. 39–53.

[11] P. Delsarte, Y. Genin, and Y. Kamp, On the role of the Nevanlinna-Pick problem in circuit and system theory, Internat. J. Circuit Theory Appl., 9 (1981), pp. 177–187.

[12] Y. Eidelman and I. Gohberg, A look-ahead block Schur algorithm for diagonal plus semiseparable matrices, Comput. Math. Appl., 35 (1998), pp. 25–34.

[13] S. Elhay, G. Golub, and J. Kautsky, Updating and downdating of orthogonal polynomials with data fitting applications, SIAM J. Matrix Anal. Appl., 12 (1991), pp. 327–353.

[14] D. Fasino and L. Gemignani, A Lanczos-type algorithm for the QR factorization of Cauchy-like matrices. To appear in Contemporary Mathematics.

[15] D. Fasino and L. Gemignani, Direct and inverse eigenvalue problems for diagonal-plus-semiseparable matrices. Submitted, 2001.

[16] D. Fasino and L. Gemignani, A Lanczos-type algorithm for the QR factorization of regular Cauchy matrices, Numer. Linear Algebra Appl., 9 (2002), pp. 305–319.

[17] B. Fischer and G. Golub, How to generate unknown orthogonal polynomials out of known orthogonal polynomials, J. Comput. Appl. Math., 43 (1992), pp. 99–115.

[18] F. R. Gantmacher and M. G. Krein, Oszillationsmatrizen, Oszillationskerne und kleine Schwingungen mechanischer Systeme, Akademie-Verlag, Berlin, 1960.

[19] W. Gautschi, The use of rational functions in numerical quadrature, J. Comput. Appl. Math., 133 (2001), pp. 111–126.

[20] W. Gragg and A. Lindquist, On the partial realization problem, Linear Algebra Appl., 50 (1983), pp. 277–319.

[21] W. B. Gragg and W. J. Harrod, The numerically stable reconstruction of Jacobi matrices from spectral data, Numer. Math., 44 (1984), pp. 317–335.

[22] P. Henrici, Applied and Computational Complex Analysis, vol. 1, Wiley, 1974.

[23] I. Koltracht, Linear complexity algorithm for semiseparable matrices, Integral Equations Operator Theory, 29 (1997), pp. 313–319.

[24] B. Ninness and F. Gustafsson, A unifying construction of orthonormal bases for system identification, IEEE Trans. Automat. Control, 42 (1997), pp. 515–521.

[25] V. Olshevsky and V. Pan, Polynomial and rational evaluation and interpolation (with structured matrices), in Automata, Languages and Programming, vol. 1644 of Lecture Notes in Comput. Sci., Springer, Berlin, 1999, pp. 585–594.


[26] L. Reichel, Fast QR decomposition of Vandermonde-like matrices and polynomial least squares approximation, SIAM J. Matrix Anal. Appl., 12 (1991), pp. 552–564.

[27] L. Reichel, Construction of polynomials that are orthogonal with respect to a discrete bilinear form, Adv. Comput. Math., 1 (1993), pp. 241–258.

[28] L. Reichel, G. Ammar, and W. Gragg, Discrete least squares approximation by trigonometric polynomials, Math. Comp., 57 (1991), pp. 273–289.

[29] H. Rutishauser, On Jacobi rotation patterns, in Proceedings of Symposia in Applied Mathematics, vol. 15, Experimental Arithmetic, High Speed Computing and Mathematics, Amer. Math. Society, Providence, 1963, pp. 219–239.

[30] M. Van Barel and A. Bultheel, A new approach to the rational interpolation problem: the vector case, J. Comput. Appl. Math., 33 (1990), pp. 331–346.

[31] M. Van Barel and A. Bultheel, A parallel algorithm for discrete least squares rational approximation, Numer. Math., 63 (1992), pp. 99–121.

[32] M. Van Barel and A. Bultheel, Discrete linearized least-squares rational approximation on the unit circle, J. Comput. Appl. Math., 50 (1994), pp. 545–563.

[33] M. Van Barel and A. Bultheel, Discrete linearized least-squares rational approximation on the unit circle, J. Comput. Appl. Math., 50 (1994), pp. 545–563.

[34] M. Van Barel and A. Bultheel, Orthonormal polynomial vectors and least squares approximation for a discrete inner product, Electronic Transactions on Numerical Analysis, 3 (1995), pp. 1–23.

[35] E. Van Camp, N. Mastronardi, and M. Van Barel, Two fast algorithms for solving diagonal-plus-semiseparable linear systems, Tech. Rep. 17/2002, Istituto per le Applicazioni del Calcolo “M. Picone”, Sez. Bari (Consiglio Nazionale delle Ricerche), Italy, Aug. 2002.