
Reshetov L. A.

The best linear unbiased estimation with stabilization (BLUES): between OLSE and BLUE

At the moment I am modeling the stabilized estimator BLUES, which is designed to work under non-standard conditions with a poorly specified model. I see that in some cases this compromise estimator yields a gain in signal-to-noise ratio of two to three times compared with OLSE and BLUE. I recommend that anyone engaged in applied problems of estimation and signal processing pay attention to this relatively simple technique. Perhaps it will be useful in your research.

Sincerely, Leonid Reshetov

Comparison of the estimators BLUE and OLSE and their application to the problem of detection of fluctuating targets

In this informational article we mainly use the notation of the review article by S. Puntanen and G. P. H. Styan, "Best Linear Unbiased Estimation in Linear Models", and of the article by S. Puntanen and A. J. Scott, "Some Further Remarks on the Singular Linear Model", Linear Algebra and Its Applications, 237/238, pp. 313-327, 1996.

Let the triplet {y, Xβ, V} denote the general linear model, in which y = Xβ + ε is an n×1 observable vector with expectation vector E(y) = Xβ and covariance matrix cov(y) = E(εε*) = V. In what follows we are interested in estimators of the vector Xβ. The orthogonal projector onto the column space of X is P_X = H = XX⁺, where X⁺ denotes the (unique) Moore-Penrose inverse of X. The ordinary least squares estimator (OLSE) is defined as

Xβ̂_OLSE = Hy .

On the other hand, the best linear unbiased estimator (BLUE) of Xβ [1],[2] is

Xβ̂_BLUE = [H − HVM(MVM)⁺M]y = Xβ̂_OLSE − HVM(MVM)⁺My ,

where M = I_n − H, M² = M, with I_n denoting the n×n identity matrix.

The matrix identities

M(MVM)⁺M = M(MVM)⁺ = (MVM)⁺M = (MVM)⁺

offer other equivalent ways to express Xβ̂_BLUE. We note that these results are equivalent to the result of A. Albert [3],

Xβ̂_BLUE = Xβ̂_OLSE − HV(MV)⁺y .

The difference of the estimators Xβ̂_OLSE and Xβ̂_BLUE can be represented as follows:

Xβ̂_OLSE − Xβ̂_BLUE = Dy , D = HVM(MVM)⁺M = HV(MVM)⁺ = HV(MV)⁺ . (1)

When D = 0, the estimators Xβ̂_OLSE and Xβ̂_BLUE coincide. Since HM = 0, the condition

HV = VH (2)

implies D = 0, which corresponds to equality of the estimators. It is of interest to study the dependence of the matrix D and of its norm ||D|| on the angular distance between the column subspaces of the matrices H and V. The angle between subspaces has no single accepted definition, especially in the complex case [3],[4]. Typically, the angle between subspaces is defined in terms of the principal angles or combinations thereof [5]. For example, if the columns of the matrices A and B belong to the complex space C^n, then every eigenvalue of the matrix AA⁺BB⁺ is the square of the cosine of a principal angle. It follows that the trace of this matrix or its determinant, as well as the arithmetic mean of the largest and smallest principal angles, can serve as an estimate of the angle between the subspaces. Note that the smallest principal angle is called the minimal angle between the subspaces. The angle between the subspaces can also be expressed through a variety of natural angles. If each matrix element represents, for example, a separate output signal of a receiver of a linear equidistant array, then for a signal detected in the far field we can write

a_ik = s_k exp{j(2πih/λ) sin α_k} , b_il = s_l exp{j(2πih/λ) sin α_l} .

Here s_k, s_l are the complex amplitudes of the signals from the respective radiation sources. We have introduced the following notation: h is the distance between the antenna array elements, and λ is the wavelength of the emitted or reflected signals. For moving targets, the parameter λ should take into account the Doppler shifts λ_k, λ_l. Here α_k corresponds to the set of angles of the first group of sources and α_l to the angles of the second group of sources. Everywhere j is the imaginary unit √(−1).
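As a Python/NumPy sketch of the computation just described (A and B are arbitrary random stand-ins), the cosines of the principal angles can be obtained from the singular values of Q_A*Q_B, where Q_A, Q_B are orthonormal bases of the two column spaces; their squares coincide with the nonzero eigenvalues of AA⁺BB⁺:

```python
import numpy as np

def principal_angles(A, B):
    """Principal angles (in radians) between col(A) and col(B)."""
    Qa, _ = np.linalg.qr(A)                      # orthonormal basis of col(A)
    Qb, _ = np.linalg.qr(B)                      # orthonormal basis of col(B)
    cosines = np.linalg.svd(Qa.conj().T @ Qb, compute_uv=False)
    return np.arccos(np.clip(cosines, 0.0, 1.0))

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 3)) + 1j * rng.standard_normal((8, 3))
B = rng.standard_normal((8, 2)) + 1j * rng.standard_normal((8, 2))

angles = principal_angles(A, B)
print(np.degrees(angles))                        # smallest entry = minimal angle

# squared cosines agree with the nonzero eigenvalues of AA+BB+
P = A @ np.linalg.pinv(A) @ B @ np.linalg.pinv(B)
print(np.sort(np.linalg.eigvals(P).real)[::-1][:2], np.cos(angles) ** 2)
```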

Similar signal models are encountered in radar tracking tasks and in the antenna-array signal processing of sonar. Moreover, the group of vectors forming one of the subspaces is considered useful, and the other group is considered interfering. Interfering signals can be both artificial and natural, the latter due to reflections from the sea surface or other hard surfaces (e.g., the surface of the Earth or the Moon) [6].

Regardless of the definition of the angle between the column subspaces of the matrices X and V, we can give an unambiguous definition of the angle in two extreme cases. If we consider the singular version of the matrix V, i.e. rank(V) < n, then the following assertions hold:

1) The angle between the column subspaces of the matrices X and V is equal to zero if all the principal angles are equal to zero. In this case we obtain VM = 0, VH = HV, D = 0.

2) The angle between the column subspaces of the matrices X and V is 90° if all the principal angles are equal to 90°. In this case we obtain HV = 0, VH = HV, D = 0.

These conditions for the equality of BLUE(Xβ) and OLSE(Xβ) can also be formulated in terms of the natural angles.

To compute the difference between OLSE(Xβ) and BLUE(Xβ) as a function of the angle between the column subspaces of the matrices X and V, the structure of the matrix V must be specified. We assume that the matrix V has the following form:

V = σ²I_n + SQS* . (3)

In this formula the matrix S has size n×p and rank p. Here S* denotes the conjugate transpose of the matrix S; in the real case the usual transpose Sᵀ should be used. We assume that the matrix Q is a diagonal matrix whose diagonal elements are all greater than zero.

1) Singular case σ = 0

For the description of the matrix D we can use any expression in formula (1), for example

D = HV(MV)⁺ = HSQS*(MSQS*)⁺ .

We restrict ourselves to the assumption that the angles between the column subspaces of the matrices S and X are significantly greater than zero. This requirement can be expressed as a rank-conservation condition: we assume that rank(MS) = rank(S). Since the matrix S has full rank, the matrix MSQ also has full rank. Then the matrix D takes the form

D = HSQS*(S*)⁺(MSQ)⁺ = HSQ(MSQ)⁺ = X(X⁺S)(MS)⁺ = X(X⁺S)(S*MS)⁻¹S*M . (4)
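A Python/NumPy sketch verifying (4) in the singular case against the (MVM)⁺-based form of D (S and X are random stand-ins with p = 2):

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, p = 8, 2, 2
X = rng.standard_normal((n, k))
S = rng.standard_normal((n, p))
Q = np.diag([2.0, 0.5])                  # diagonal, positive
V = S @ Q @ S.T                          # sigma = 0: singular covariance

pinv = np.linalg.pinv
H = X @ pinv(X)
M = np.eye(n) - H

D = H @ V @ pinv(M @ V @ M)                                    # from (1)
D4 = X @ (pinv(X) @ S) @ np.linalg.inv(S.T @ M @ S) @ S.T @ M  # formula (4)
assert np.allclose(D, D4)                # rank(MS) = rank(S) holds generically
```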

In this formula the factor X⁺S explains the behavior of the matrix D at large angular distances close to 90°; it is zero for the orthogonal position of the subspaces. The behavior of the matrix D in the region of small angles is connected with the factor (MS)⁺: if the angle is close to zero, this factor increases dramatically, yet it is zero at exactly zero angle between the subspaces. This behavior is explained by the discontinuity of the Moore-Penrose inverse.

We will quantify the degree of deviation of OLSE from BLUE by the ratio of matrix norms ||D||_F / ||H||_F. Here ||·||_F is the Frobenius norm of a matrix.

Example

Let X and S be complex vectors in C^n. If the angle between the vectors is defined as [5]

φ = arccos( |⟨X, S⟩| / (||X|| ||S||) ) ,

then the ratio of matrix norms is as follows:

δ = ||D||_F / ||H||_F = cot φ , 0 < φ ≤ 90° ,

δ = 0 , φ = 0 .
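A quick numerical confirmation of this example (a sketch; random complex vectors, q = 1, σ = 0):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
x = rng.standard_normal((n, 1)) + 1j * rng.standard_normal((n, 1))
s = rng.standard_normal((n, 1)) + 1j * rng.standard_normal((n, 1))

pinv = np.linalg.pinv
H = x @ pinv(x)
M = np.eye(n) - H
V = s @ s.conj().T                        # Q = 1, sigma = 0

D = H @ V @ pinv(M @ V @ M)
phi = np.arccos(abs((x.conj().T @ s)[0, 0]) / (np.linalg.norm(x) * np.linalg.norm(s)))
delta = np.linalg.norm(D, 'fro') / np.linalg.norm(H, 'fro')
assert np.isclose(delta, 1.0 / np.tan(phi))   # delta = cot(phi)
```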

2) Non-singular case σ > 0

We use the representation of the matrix D in the form D = HV(MVM)⁺. We first find an expression for the parenthesized factor of this formula:

(MVM)⁺ = (M(σ²I_n + SQS*)M)⁺ = (σ²M + MSQS*M)⁺ .

If we put A = σ²M and B = MSQS*M, then we can see that the projection conditions AA⁺B = B, BA⁺A = B are satisfied. Fulfillment of these conditions allows us to apply the generalization of the Sherman-Morrison-Woodbury formula [8],[9]:

(MVM)⁺ = σ⁻²M − σ⁻²MS(Q⁻¹ + σ⁻²S*MS)⁻¹S*Mσ⁻² . (5)
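A Python/NumPy sketch checking (5) against a direct pseudo-inverse (random stand-ins):

```python
import numpy as np

rng = np.random.default_rng(4)
n, k, p = 7, 2, 2
X = rng.standard_normal((n, k))
S = rng.standard_normal((n, p))
Q = np.diag([1.5, 0.7])
s2 = 0.3                                  # sigma^2

pinv, inv = np.linalg.pinv, np.linalg.inv
H = X @ pinv(X)
M = np.eye(n) - H
V = s2 * np.eye(n) + S @ Q @ S.T

lhs = pinv(M @ V @ M)
rhs = M / s2 - (M @ S / s2) @ inv(inv(Q) + S.T @ M @ S / s2) @ (S.T @ M / s2)
assert np.allclose(lhs, rhs)              # formula (5)
```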

Then the matrix D can be written as

D = HV(MVM)⁺ = HVMσ⁻²[I_n − S(Q⁻¹ + σ⁻²S*MS)⁻¹S*Mσ⁻²] =

= X(X⁺S)Q(S*M)σ⁻²[I_n − S(Q⁻¹ + σ⁻²S*MS)⁻¹S*Mσ⁻²] . (6)

Equation (6) admits a number of asymptotics. Here are only two of them.

a) σ→∞ (low level of interference)

D ≈ X(X⁺S)Q(S*M)σ⁻² . (7)

Example. Let X and S be complex vectors in C^n and let the matrix Q be a scalar q. Then the ratio of the norm of the matrix D to the norm of the matrix H is

δ = μ sin 2φ . (8)

Here μ = (1/2)q(S*S)σ⁻² is the interference-to-noise energy ratio. We see that in this approximation the greatest deviation of BLUE from OLSE is reached at φ = 45°. It is clear that to calculate the physical angle one must take into account the geometry of the receiving antenna. If we consider reception of the field by a linear equidistant antenna array consisting of three receiving elements, the vectors X and S have the form

X = {exp[j(2πh/λ) sin α_X] , 1 , exp[−j(2πh/λ) sin α_X]}ᵀ ,

S = {exp[j(2πh/λ) sin α_S] , 1 , exp[−j(2πh/λ) sin α_S]}ᵀ .

Here α_X and α_S indicate the directions of arrival of the desired signal and of the interfering signal, respectively. When α_X = 0, the angles φ and α_S are connected by the relation

φ = arccos |R(α_S)| ,

where R(α_S) is the antenna pattern.

If the receiving antenna has a length of 2λ, the antenna pattern has the form R(α_S) = (1/3)[1 + 2 cos(2π sin α_S)]. With such an antenna configuration, the largest difference between the estimators OLSE and BLUE is achieved for the interference arrival angle α_S = 8.9°.
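The quoted angle is easy to reproduce: δ in (8) is maximized where φ(α_S) = 45°, i.e. where |R(α_S)| = cos 45°. A sketch using a grid search over the pattern above:

```python
import numpy as np

alpha = np.radians(np.linspace(0.1, 30.0, 100001))      # scan of arrival angles
R = (1.0 + 2.0 * np.cos(2.0 * np.pi * np.sin(alpha))) / 3.0
phi = np.arccos(np.clip(np.abs(R), 0.0, 1.0))
best = alpha[np.argmax(np.sin(2.0 * phi))]               # delta in (8) peaks at phi = 45 deg
print(np.degrees(best))                                  # ~8.9 degrees
```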

b) σ → 0 (low noise)

We rewrite the expression in round brackets of formula (6):

(Q⁻¹ + σ⁻²S*MS)⁻¹ = (I_p + σ⁻²QS*MS)⁻¹Q = (σ⁻²QS*MS)⁻¹(I_p + (σ⁻²QS*MS)⁻¹)⁻¹Q .

Assuming that the norm of the matrix MS is not equal to zero, for σ → 0 the norm of the matrix (σ⁻²QS*MS)⁻¹ is less than 1, and we can then replace the matrix (I_p + (σ⁻²QS*MS)⁻¹)⁻¹ by its series. Keeping only the first two terms of this series, we obtain an approximate expression for the ratio of matrix norms:

δ ≈ cot φ . (9)

Note that in the singular case this expression is exact. The main conclusion that can be drawn from the analysis of the asymptotic formulas is that, as the interference-to-noise ratio changes, the functional dependence of δ = ||D||_F / ||H||_F on the angle φ varies from μ sin 2φ to cot φ. To illustrate this conclusion we computed δ in accordance with formula (6) in the special case p = 1. After making the appropriate calculations, we obtain

δ = μ sin 2φ / (1 + 2μ sin²φ) . (10)

For μ → 0 we obtain (8), and for μ → ∞ we obtain (9). To clarify the meaning of formula (10), Figure 1 shows the results of the calculation of δ for three values of μ.
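For reference, the three curves of Fig. 1 follow directly from formula (10); a minimal sketch (assuming matplotlib is available for plotting):

```python
import numpy as np
import matplotlib.pyplot as plt

phi = np.radians(np.linspace(0.5, 90.0, 500))
for mu, color in [(0.2, 'red'), (1.0, 'blue'), (5.0, 'green')]:
    delta = mu * np.sin(2 * phi) / (1 + 2 * mu * np.sin(phi) ** 2)   # formula (10)
    plt.plot(np.degrees(phi), delta, color=color, label=f'mu = {mu}')
plt.xlabel('phi, degrees'); plt.ylabel('delta'); plt.legend(); plt.show()
```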

Fig. 1

In the figure, the red curve corresponds to μ = 0.2, the blue curve is calculated with μ = 1, and the green curve corresponds to μ = 5; the angle φ is measured in degrees. We will now find the covariance of the difference of OLSE and BLUE in a linear model with mismatch. Before starting, we obtain another useful representation of the matrix (MVM)⁺ in the form of a matrix series. Whereas earlier we used the energy parameter μ as the leading parameter, here we will bound the domain of existence of the matrix series with the help of the angular distance between the column subspaces of the matrices X and S. Namely, we assume that the angular distance is large and all the principal angles between the subspaces are close to 90°. For reception of a signal by a linear antenna, this situation corresponds to interference arriving in the sidelobe angles of the antenna array.

Rewrite (5) as follows:

(MVM)⁺ = σ⁻²M − σ⁻²MS[(Q⁻¹ + σ⁻²S*S) − σ⁻²S*XX⁺S]⁻¹S*Mσ⁻² . (11)

The norm of the matrix S*XX⁺S depends on the angle between the column subspaces of the matrices X and S. Indeed, if the columns of the matrix S are orthonormal vectors, then the eigenvalues of the matrix S*XX⁺S are the squared cosines of the principal angles. In general, the matrix S*XX⁺S takes into account the energy of the interference. Since we assume that the principal angles are large, we may assume that the norm of the matrix G = (Q⁻¹ + σ⁻²S*S)⁻¹σ⁻²S*XX⁺S is less than 1. Under this condition ||G|| < 1 the matrix (I_p − G)⁻¹ can be replaced by the matrix series

(I_p − G)⁻¹ = Σ_{i=0}^∞ Gⁱ . (12)

Substituting (12) into (11), we obtain

(MVM)⁺ = σ⁻²M − σ⁻²MS(Σ_{i=0}^∞ Gⁱ)(Q⁻¹ + σ⁻²S*S)⁻¹S*σ⁻²M =

= σ⁻²M − σ⁻²MS(I_p + Σ_{i=1}^∞ Gⁱ)(Q⁻¹ + σ⁻²S*S)⁻¹S*σ⁻²M =

= σ⁻²M − σ⁻²MS(Q⁻¹ + σ⁻²S*S)⁻¹S*σ⁻²M − σ⁻²MS(Σ_{i=1}^∞ Gⁱ)(Q⁻¹ + σ⁻²S*S)⁻¹S*σ⁻²M =

= M[V⁺ − σ⁻²S(Σ_{i=1}^∞ Gⁱ)(Q⁻¹ + σ⁻²S*S)⁻¹S*σ⁻²]M . (13)

As a result we obtain two approximations for the matrix (MVM)⁺, valid provided that ||G|| < 1:

(MVM)⁺₀ = MV⁺M , (14)

(MVM)⁺₁ = MV⁺M − σ⁻²MS(Q⁻¹ + σ⁻²S*S)⁻¹ σ⁻²S*XX⁺S (Q⁻¹ + σ⁻²S*S)⁻¹S*σ⁻²M . (15)
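A numerical check of the approximations (14) and (15); a sketch with random stand-ins (note that the first-order correction enters with a minus sign, consistent with the expansion (A − B)⁻¹ = A⁻¹ + A⁻¹BA⁻¹ + ...):

```python
import numpy as np

rng = np.random.default_rng(5)
n, k, p = 7, 2, 2
X = rng.standard_normal((n, k))
S = rng.standard_normal((n, p))
Q = np.diag([1.2, 0.8])
s2 = 2.0                                        # sigma^2

pinv, inv = np.linalg.pinv, np.linalg.inv
H = X @ pinv(X)
M = np.eye(n) - H
V = s2 * np.eye(n) + S @ Q @ S.T

A = inv(Q) + S.T @ S / s2
G = inv(A) @ (S.T @ H @ S) / s2                 # series requires ||G|| < 1

exact = pinv(M @ V @ M)
approx0 = M @ inv(V) @ M                        # (14)
corr = (M @ S / s2) @ inv(A) @ (S.T @ H @ S / s2) @ inv(A) @ (S.T @ M / s2)
approx1 = approx0 - corr                        # (15)
print(np.linalg.norm(G, 2))                     # should be < 1
print(np.linalg.norm(exact - approx0),          # error of (14)
      np.linalg.norm(exact - approx1))          # smaller error of (15)
```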

We proceed to calculate the covariance matrices of the estimators. It is understood that in an actual implementation of the linear model its parameters will differ from the parameters of the intended model: instead of the model {y, Xβ, V} we will have a model {y, X~β, V~} with other matrices X~, V~, where the consistency condition y ∈ R[X~, V~] is assumed to hold with probability 1. Throughout, we will assume that X has no distortion. This condition ensures unbiasedness of the estimators OLSE and BLUE. However, the property of being "best" in the mean-square sense can be lost, and it may be necessary to adopt a different optimality criterion.

Suppose first that the matrices X and V are specified without distortion. Then the covariance matrix of the estimator OLSE is

Cov(Xβ̂_OLSE) = HVH . (16)

The estimator BLUE, as we have seen, can be represented in the form Xβ̂_BLUE = Hy − Dy = Xβ̂_OLSE − Dy. Since BLUE is an unbiased estimator, its covariance matrix is

Cov(Xβ̂_BLUE) = (H − D)V(H − D)* = H[I_n − V(MVM)⁺]V[I_n − (MVM)⁺V]H =

= HVH + (HV(MVM)⁺V(MVM)⁺VH − 2HV(MVM)⁺VH) =

= HVH + (HV(MVM)⁺MVM(MVM)⁺VH − 2HV(MVM)⁺VH) =

= HVH − HV(MVM)⁺VH = Cov(Xβ̂_OLSE) − HV(MVM)⁺VH . (17)

Formula (17) is based on the symmetry property of the Moore-Penrose inverse [10], and as a result of its application we obtain the important relations [1],[11]:

V(MV)⁺ = V(MVM)⁺ , (MVM)⁺V = (VM)⁺V .

Covariance matrices of the estimators in similar notation can be found in [12]. Since we defined the difference between OLSE and BLUE as Dy, the covariance matrix of the difference equals

Cov(Dy) = DVD* = HV(MVM)⁺VH .

Let us compare the covariance matrices of OLSE and BLUE for the case of a singular model, i.e. when rank V < n. Suppose, for example, that V = SQS*. Then we have

Cov(Xβ̂_OLSE) = HVH = XX⁺SQS*XX⁺ .

If we assume that the rank-conservation condition rank S*M = rank S* holds, which is satisfied if the minimal angle between the column subspaces of the matrices X and S is greater than zero, or, equivalently, if all the principal angles are greater than zero, the matrix (MVM)⁺ can be written as

(MVM)⁺ = (S*M)⁺Q⁻¹(MS)⁺ . (18)

To calculate the covariance matrix Cov(Xβ̂_BLUE) we use expression (17):

Cov(Xβ̂_BLUE) = Cov(Xβ̂_OLSE) − HV(MVM)⁺VH = Cov(Xβ̂_OLSE) − HVM(MVM)⁺MVH =

= Cov(Xβ̂_OLSE) − HSQS*M(MVM)⁺MSQS*H .

If we use formula (18), obtained under conservation of the ranks of the matrices S* and S*M, and hence of the matrix VM, we obtain the following result:

Cov(Xβ̂_BLUE) = Cov(Xβ̂_OLSE) − HSQS*M(S*M)⁺Q⁻¹(MS)⁺MSQS*H .

Since by assumption the matrix S is a matrix of full rank, in the end we see that

Cov(Xβ̂_BLUE) = Cov(Xβ̂_OLSE) − HSQS*H = Cov(Xβ̂_OLSE) − XX⁺SQS*XX⁺ = 0 . (19)

This result also follows from the work [13] (see Eq. 2.16):

rank(Cov(Xβ̂_BLUE)) = rank V^(1/2) − rank V^(1/2)M .

Given that rank V^(1/2) = rank V and rank V^(1/2)M = rank VM, we rewrite the last expression as rank(Cov(Xβ̂_BLUE)) = rank V − rank VM. Recalling our rank-conservation condition rank V = rank VM, we get the zero result rank(Cov(Xβ̂_BLUE)) = 0, which corresponds to formula (19). Note that Cov(Xβ̂_OLSE) = 0 if and only if the angle between the column subspaces of X and S is 90°, i.e. X*S = 0. From the previous results it follows that the covariance matrix of the difference of OLSE and BLUE equals

Cov(Dy) = HSQS*H .
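A sketch checking (19) in the singular model (random full-rank S with a nonzero minimal angle to X):

```python
import numpy as np

rng = np.random.default_rng(6)
n, k, p = 8, 2, 3
X = rng.standard_normal((n, k))
S = rng.standard_normal((n, p))
Q = np.diag([2.0, 1.0, 0.5])
V = S @ Q @ S.T                          # singular: rank V = p < n

pinv = np.linalg.pinv
H = X @ pinv(X)
M = np.eye(n) - H

cov_olse = H @ V @ H                                     # (16)
cov_blue = cov_olse - H @ V @ pinv(M @ V @ M) @ V @ H    # (17)
print(np.linalg.norm(cov_blue, 'fro'))   # ~0, in agreement with (19)
```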

We can draw some conclusions and make interim findings. The significant benefits that accrue to BLUE with respect to OLSE are caused primarily by the fact that BLUE is matched both to the desired signal and to the interference, whereas OLSE is matched only to the signal. These benefits are particularly noticeable in the case of interference that is large relative to the noise, which approximately corresponds to the singular problem. But the realization of these benefits is only possible when the model fully agrees with reality. Model misspecification can destroy all these advantages and turn them into significant shortcomings. It is possible that under conditions of significant model deviations one has to sacrifice the optimality of the estimator BLUE and subject it to coarsening [14],[16]. We understand that every element of the matrices X or V may be wrong, but their influence on the bias of the estimators and on their covariance matrices may be significantly different. We will look at some examples of the magnitude of the impact of an incorrect linear model on the characteristics of estimators that were originally built as OLSE and BLUE. We first define the impact on the covariance matrices of OLSE and BLUE of unobserved interference, i.e. an additional interference about whose existence we know nothing. Then the covariance matrix of y for the singular model has the form

V~ = SQS* + TBT* ,

where the matrix T has size n×h and rank h, and the matrix B is a square matrix of rank h. For the interference to be unobserved, we assume that T*X = 0.

Under this assumption the covariance matrix of the estimator OLSE remains unchanged:

Cov(Xβ̂_OLSE) = HVH = XX⁺SQS*XX⁺ = HV~H . (20)

The estimator BLUE behaves quite differently. Its covariance matrix receives an additional term:

Cov(Xβ̂_BLUE) = HV(MVM)⁺TBT*(MVM)⁺VH = HS(MS)⁺TBT*(S*M)⁺S*H . (21)

We can see that for T*S ≠ 0 the covariance matrix of BLUE depends on the norm of the matrix B. If ||B|| = 0, then Cov(Xβ̂_BLUE) = 0, but as ||B|| → ∞ the norm ||Cov(Xβ̂_BLUE)|| increases indefinitely. Then it is possible that Cov(Xβ̂_BLUE) ≥_L Cov(Xβ̂_OLSE). Here B ≥_L A means that A is below B with respect to the Löwner partial ordering, i.e. that the difference B − A is a symmetric nonnegative definite matrix [15].
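Numerically, the Löwner comparison reduces to the smallest eigenvalue of the Hermitian difference; a small sketch (the helper name is ours, for illustration):

```python
import numpy as np

def loewner_geq(B, A, tol=1e-10):
    """True if B - A is (numerically) nonnegative definite, i.e. B >=_L A."""
    return np.linalg.eigvalsh(B - A).min() >= -tol
```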

We have used equations (20) and (21) to demonstrate the relative stability of the estimators OLSE and BLUE with respect to perturbations of the linear model. For the case p = 1 we calculated Cov(Xβ̂_OLSE) and Cov(Xβ̂_BLUE), and they were also directly simulated under the assumption that the initial covariance matrix of the observed interference SQS* was suddenly supplemented with the covariance matrix of the unobserved interference TBT*. The matrix X, in our case the vector X, we take equal to X = (1, 1, 1)ᵀ. The vector T, which determines the unobserved interference, we put equal to T = (1, 0, −1)ᵀ. The vector S we introduce through an adjustable parameter e:

S = eX + (1 − e)T , e ∈ [0, 1] .

By changing the parameter e we can change the angular distance between the signal vector X and the observed-interference vector S from 0° to 90°. The norms of the vectors X, S, and T were then normalized to one. Since the norm of the vector X is now equal to 1, Q = q is the ratio of the power of the observed interference to the power of the signal, while B = b is the ratio of the power of the unobserved interference to the signal power. In accordance with formulas (20) and (21) we calculated the norms ||Cov(Xβ̂_OLSE)||_F and ||Cov(Xβ̂_BLUE)||_F for several values of φ, q, and b. The values of the unobserved-interference power b at which the norms of the covariance matrices are equal, ||Cov(Xβ̂_OLSE)||_F = ||Cov(Xβ̂_BLUE)||_F, are placed in Table 1. The angle φ takes the three values 10°, 45°, and 80°. The results are given in the table for three values of the observed interference power: q = 0.25, q = 1, q = 4.

Table 1. Power b of the unobserved interference at which ||Cov(Xβ̂_OLSE)||_F = ||Cov(Xβ̂_BLUE)||_F

q \ φ      10°        45°       80°
0.25       0.0075     0.125     0.245
1          0.03       0.504     0.97
4          0.121      2.016     3.881
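Table 1 can be reproduced from (20) and (21); a sketch with the vectors of the article, parameterizing S directly by the angle φ instead of e (legitimate here since X ⊥ T). Because (21) is linear in b, the equalizing b is found in closed form; analytically it comes out as b = q·sin²φ, which matches the table up to the discretization of the original computation:

```python
import numpy as np

pinv = np.linalg.pinv
X = np.array([[1.0], [1.0], [1.0]]) / np.sqrt(3.0)
T = np.array([[1.0], [0.0], [-1.0]]) / np.sqrt(2.0)
H = X @ pinv(X)
M = np.eye(3) - H

for q in (0.25, 1.0, 4.0):
    row = []
    for phi_deg in (10.0, 45.0, 80.0):
        phi = np.radians(phi_deg)
        S = np.cos(phi) * X + np.sin(phi) * T        # unit vector at angle phi to X
        cov_olse = q * (H @ S) @ (S.T @ H)           # (20)
        C = H @ S @ pinv(M @ S) @ T @ T.T @ pinv(S.T @ M) @ S.T @ H
        b = np.linalg.norm(cov_olse, 'fro') / np.linalg.norm(C, 'fro')
        row.append(round(float(b), 4))
    print(q, row)                                    # rows of Table 1
```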

Analysis of the calculation results shows that the requirement of effective interference suppression by the estimator BLUE reduces the stability of the procedure to perturbations of the model. As the signal and the interference accounted for in the model through the matrix V converge in angle, the influence of other unrecorded factors (in this case, the unobserved interference) increases. So, if the angle between the signal and the observed interference is 10°, then an unobserved interference of only a few percent of the signal creates the same level of random distortion in the estimator BLUE as the observed interference creates in the estimator OLSE, where no action to suppress interference is taken at all. Thus, the increased sensitivity of optimal estimators to a certain kind of distortion of the assumed model requires the introduction of restrictions on the use of such estimators and, possibly, their replacement by simpler but more robust ones. One also needs to specify the range of application of optimal estimators under non-standard conditions. As we saw, under model perturbations in the form of unobserved interference the effective application of such estimators is possible only for a sufficiently large angular distance between the column subspaces of the matrices X and V; in our simple example this angle is 10°. On the other hand, the use of the optimal estimator BLUE at very large angles is also pointless, because there the estimator BLUE smoothly transits into the estimator OLSE. The angle between subspaces that determines the range of use of the estimator BLUE in the singular case is determined very simply: it is enough to find the minimal principal angle between the column subspaces of the matrices X and S. In the non-singular case, i.e. when the matrix V is organized in a more complex way than V = SQS*, it is necessary to use as the control parameter the minimal numerical angle between the column subspaces of the matrices X and V; but before that one needs to select the largest eigenvalues of these matrices. It should be noted that the degree of an estimator's stability to perturbations of the model depends on the type of perturbation. Consider, for example, an internal disturbance of the interference: instead of the matrix V = SQS*, the covariance matrix of the interference contained in the vector y is V~ = SQ~S*. Provided that rank MS = rank S (equivalently, rank S*M = rank S*), which holds if the minimal angle between the column subspaces of the matrices X and S is greater than zero, the covariance matrix of the estimator BLUE, according to formula (17), will have the form

Cov(Xβ̂_BLUE) = Cov(Xβ̂_OLSE) − HSQ~S*H .

Since the covariance matrix of the estimator OLSE equals HV~H = HSQ~S*H, the covariance matrix Cov(Xβ̂_BLUE) is equal to the zero matrix for any matrix Q~. The influence of an external disturbance of the interference, defined as the substitution of V~ = S~QS~* for V = SQS*, may vary depending on the random or deterministic nature of the disturbance. Random perturbations can be explained by random distortions of the wavefront along the propagation path of the radiated field, by random deformations of the antenna, and by possible failures of the field receivers. Deterministic perturbations usually fall within the range of a parametric model S(α). Here the vector α is a vector of unknown parameters; its dimension and physical content depend on the specific problem statement. Most often the vector α describes the coordinates of a suppressed or tracked target, and in the case of multiple targets it is the set of coordinates of each reflector or emitter. To illustrate the above considerations, we present the results of probabilistic modeling for the singular linear model with a certain kind of perturbation of this model. The vectors X and T were defined earlier. Recall that the vector X is the signal in the input mixture and the vector T corresponds to the unobserved interference; the angular distance between these vectors is 90°. To describe the development of the process in time, we introduce a discrete time with index k. The total number of digital samples is 50. Figure 2 shows the time variation of the norm of the vector 5X when the signal duration is 3 digital samples and the signal is present in the time cells k = 20, k = 21, k = 22.

Fig. 2

To determine the observed interference, we chose the vector S at an angle of 45° to the vector X. The norms of all three vectors X, S, and T were normalized to one. As a result we obtain X = (0.577, 0.577, 0.577)ᵀ, S = (0.908, 0.408, −0.092)ᵀ, T = (0.707, 0, −0.707)ᵀ. The changes of the vectors S and T in time are defined using sequences of random normal numbers n_kS and n_kT; these independent random numbers have zero mean and unit variance. Figure 3 shows a realization of the norm of the vector y_k = 5X_k + Sn_kS + Tn_kT, which is the sum of the signal vector 5X_k and the vectors of the observed (Sn_kS) and unobserved (Tn_kT) interference.

Fig. 3

We see that this realization is obtained at a ratio of the signal amplitude to the amplitudes of the observed and unobserved interference equal to 5.

The effect of the filter matched to the signal can be observed in Figure 4, which shows the norm of the vector Hy_k, H = XX⁺, at the output of the estimator OLSE.

Fig. 4

Since OLSE is an unbiased estimator, the signal remains unchanged, the observed interference is suppressed in accordance with the angular distance between the vectors X and S, and the unobserved interference T is suppressed completely. For the same conditions we present the results of applying the best estimator BLUE to the input vector y. Figure 5 shows the time dependence of the norm of the vector [H − HV(MVM)⁺]y_k for the exact assignment of the matrix V = SSᵀ.

Fig. 5. The norm of the vector at the output of the estimator BLUE for exactly specified elements of the covariance matrix V = SSᵀ under the erroneous (misspecified) real covariance matrix V~ = SSᵀ + TTᵀ.

Because the filter parameter V is set exactly and the zero of the transfer function [H − HV(MVM)⁺] is fixed at 45°, the observed interference S is suppressed completely, and the random component of the realization is due to the presence of the unobserved interference T. The increase of the noise level, and hence the reduction of the signal-to-unobserved-interference ratio, is explained by the inner nature of the estimator BLUE, which is associated with the presence of the pseudo-inverse in the formula [H − HV(MVM)⁺]. The significant deterioration of the quality of the estimator BLUE caused by an additional interference leads to the idea of regulating the ratio of OLSE and BLUE within an overall estimation

framework. Figure 6 presents the results of calculating the norm of the vector y_k at the output of the filter H − ξHV(MVM)⁺ with ξ = 0.5, which is a combination of the estimators OLSE and BLUE. An estimator of this type we decided to name the best linear unbiased estimator with stabilization (BLUES). The parameter ξ, by analogy with the regularization parameter, we will call the stabilization parameter.
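A sketch of the simulation behind Figures 2-6 (using the vectors given above; the BLUES filter is H − ξHV(MVM)⁺ with ξ = 0.5):

```python
import numpy as np

rng = np.random.default_rng(7)
pinv = np.linalg.pinv

X = np.array([[0.577], [0.577], [0.577]])
S = np.array([[0.908], [0.408], [-0.092]])
T = np.array([[0.707], [0.0], [-0.707]])

H = X @ pinv(X)
M = np.eye(3) - H
V = S @ S.T                                  # assumed covariance of the observed interference
blue = H - H @ V @ pinv(M @ V @ M)           # BLUE filter
xi = 0.5
blues = H - xi * H @ V @ pinv(M @ V @ M)     # BLUES filter, stabilization parameter xi

K = 50
out = {'OLSE': [], 'BLUE': [], 'BLUES': []}
for k in range(K):
    signal = 5.0 * X if k in (20, 21, 22) else np.zeros((3, 1))
    y = signal + S * rng.standard_normal() + T * rng.standard_normal()
    out['OLSE'].append(np.linalg.norm(H @ y))
    out['BLUE'].append(np.linalg.norm(blue @ y))
    out['BLUES'].append(np.linalg.norm(blues @ y))
# the three norm sequences over k reproduce the behavior of Figs. 4-6
```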

Fig. 6

Selection of the optimal stabilization parameter ξ, which gives the minimum value of the estimation error of BLUES, depends on many factors and is an independent task. In order to show the dependence of the interference level at the output of the estimator BLUES, we measured the average interference level (the norm of the vector y) in the absence of signal, with a sample volume of 2×10⁴; the measurement error for this data volume was less than half a percent.

References

1. Pukelsheim F. Schätzen von Mittelwert und Streuungsmatrix in Gauss-Markoff-Modellen. Diplomarbeit, Freiburg im Breisgau, 1974.
2. Searle S.R. Extending some results and proofs for the singular linear model. Linear Algebra and its Applications, 1994, 210, pp. 139-151.
3. Albert A. The Gauss-Markov theorem for regression models with possibly singular covariances. SIAM J. Appl. Math., 1973, 24, pp. 182-187.
4. Baksalary O.M., Trenkler G. On angles and distances between subspaces. Linear Algebra and its Applications, 2009, 431, pp. 2243-2260.
5. Galántai A., Hegedűs C.J. Jordan's principal angles in complex vector spaces. Numerical Linear Algebra with Applications, 2006, 13, pp. 589-598.
6. Risteski I.B., Trenčevski K.G. Principal values and principal subspaces of two subspaces of vector spaces with inner product. Beiträge zur Algebra und Geometrie, 2001, 42, pp. 289-300.
7. Górski T. Space-Time Adaptive Signal Processing for Sea Surveillance Radars. Military University of Technology in Warsaw, 2008.
8. Cline R.E., Funderlic R.E. The rank of a difference of matrices and associated generalized inverses. Linear Algebra and its Applications, 1979, 24, pp. 185-215.
9. Deng C.Y. A generalization of the Sherman-Morrison-Woodbury formula. Applied Mathematics Letters, 2011, 24, pp. 1561-1564.
10. Penrose R. A generalized inverse for matrices. Proceedings of the Cambridge Philosophical Society, 1955, 51, pp. 406-413.
11. Gross J. On the product of orthogonal projectors. Linear Algebra and its Applications, 1999, 289, pp. 141-150.
12. Liski E.P., Puntanen S., Wang S.G. Bounds for the trace of the difference of the covariance matrices of the OLSE and BLUE. Linear Algebra and its Applications, 1992, 176, pp. 121-130.
13. Puntanen S., Scott A.J. Some further remarks on the singular linear model. Linear Algebra and its Applications, 1996, 237/238, pp. 313-327.
14. Tian Y., Puntanen S. On the equivalence of estimations under a general linear model and its transformed models. Linear Algebra and its Applications, 2009, 430, pp. 2622-2641.
15. Baksalary J.K., Puntanen S. Characterizations of the best linear unbiased estimator in the general Gauss-Markov model with the use of matrix partial orderings. Linear Algebra and its Applications, 1990, 127, pp. 363-370.
16. Tian Y.G. On equalities for BLUEs under misspecified Gauss-Markov models. Acta Mathematica Sinica, 2009, Vol. 25, No. 11, pp. 1907-1920.
17. Friedrichs K. On certain inequalities and characteristic value problems for analytic functions and for functions of two variables. Trans. Amer. Math. Soc., 1937, 41, pp. 321-364.
18. Scharnhorst K. Angles in complex vector spaces. Acta Applicandae Mathematicae, 2001, 69, pp. 95-103.

Comparison of the estimators OLSE, BLUE, and BLUES on an incorrect (misspecified) linear model

When is the covariance matrix of BLUE greater than the covariance matrix of OLSE, Cov(Xβ̂_BLUE) ≥_L Cov(Xβ̂_OLSE)?

The minimal numerical angle Θ_rs between the column subspaces of the matrices A and B.

Let A and B be square symmetric nonnegative definite matrices of size n×n, with the eigenvalues of A and B sorted in non-increasing order. We denote by R the set of eigenvectors e_i, i = 1,...,r, corresponding to the first r eigenvalues of the matrix A. Proceeding similarly, we distinguish the first s largest eigenvalues of the matrix B and their eigenvectors e_j, j = 1,...,s; the set of these vectors will be denoted by S. Then we call the smallest principal angle between the subspaces R and S the minimal numerical angle Θ_rs. We used this definition of the angle between subspaces in determining the stability of the best estimator BLUE for the linear Gauss-Markov model {y, Xβ, V} applied to the incorrect linear model {y, X~β, V~}. Clearly, this definition of the angle is not unique, but often the matrix V consists of two well-separated components, one belonging to the noise and the other only to the interference. In this case the eigenvalues of the matrix V are well separated into two groups, large and small, and the selection of the largest eigenvalues gives an unambiguous result. Thus, if V has the form V = σ²I_n + SQS*, the stability of the model estimate to perturbations, such as the emergence of an unobserved interference TBT*, is completely determined by the angular distance between the column subspaces of the matrices X and S.
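A Python/NumPy sketch of this definition (the function name is ours, for illustration; A and B are assumed symmetric nonnegative definite):

```python
import numpy as np

def minimal_numerical_angle(A, B, r, s):
    """Theta_rs: smallest principal angle between the spans of the
    leading r eigenvectors of A and the leading s eigenvectors of B."""
    wa, Ua = np.linalg.eigh(A)                # eigh returns ascending eigenvalues
    wb, Ub = np.linalg.eigh(B)
    R = Ua[:, np.argsort(wa)[::-1][:r]]       # r leading eigenvectors of A
    S = Ub[:, np.argsort(wb)[::-1][:s]]       # s leading eigenvectors of B
    cosines = np.linalg.svd(R.T @ S, compute_uv=False)
    return np.arccos(np.clip(cosines.max(), 0.0, 1.0))  # largest cosine = smallest angle
```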
