Random Signals and Noise – Winter Semester 2020-21

Solution to Problem Set 8

Problem 1:

1. The sample functions of the process: the lower graph is for θ = 1, the middle one is for θ = 2, and the upper one is for θ = 3.

2. We will "freeze" the time parameter at t_0 = 1. We get a random variable with the following distribution:

X(t_0) = { 1 w.p. 1/2 ; e^(-1) w.p. 1/4 ; 2e^(-2) w.p. 1/4 }

The density function of this random variable is a sum of three impulses:

f_{X(t_0)}(x) = (1/2)δ(x - 1) + (1/4)δ(x - e^(-1)) + (1/4)δ(x - 2e^(-2))

3. Given that X(t_0 = 0) = 1 holds, it is obvious that θ ∈ {1, 2} (for θ = 3 we would have gotten X(t_0 = 0) = 2e^0 = 2, contrary to the given). Therefore, for X(1), there are two possible values:

Pr(X(1) = 1 | X(0) = 1) = Pr(X(1) = 1, X(0) = 1) / Pr(X(0) = 1) = Pr(Θ = 1) / (Pr(Θ = 1) + Pr(Θ = 2)) = (1/2) / (3/4) = 2/3

Pr(X(1) = e^(-1) | X(0) = 1) = Pr(X(1) = e^(-1), X(0) = 1) / Pr(X(0) = 1) = Pr(Θ = 2) / (Pr(Θ = 1) + Pr(Θ = 2)) = (1/4) / (3/4) = 1/3

Thus, overall, we get the conditional distribution:

X(1) | X(0) = 1 = { 1 w.p. 2/3 ; e^(-1) w.p. 1/3 }
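These conditional probabilities can also be checked by simulation. A minimal sketch, assuming the sample functions x_1(t) = 1, x_2(t) = e^(-t), x_3(t) = 2e^(-2t) (a reconstruction consistent with this problem's values, not given explicitly here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw the random parameter: theta = 1 w.p. 1/2, 2 w.p. 1/4, 3 w.p. 1/4
theta = rng.choice([1, 2, 3], p=[0.5, 0.25, 0.25], size=200_000)

# Assumed (reconstructed) sample functions: x1(t)=1, x2(t)=exp(-t), x3(t)=2*exp(-2t)
def x(th, t):
    return np.where(th == 1, 1.0, np.where(th == 2, np.exp(-t), 2 * np.exp(-2 * t)))

x0 = x(theta, 0.0)                     # X(0) = 1 for theta in {1, 2}, = 2 for theta = 3
cond = theta[np.isclose(x0, 1.0)]      # condition on the event X(0) = 1
x1 = x(cond, 1.0)                      # X(1) under the conditioning
p1 = np.mean(np.isclose(x1, 1.0))              # estimates Pr(X(1) = 1      | X(0) = 1)
pe = np.mean(np.isclose(x1, np.exp(-1.0)))     # estimates Pr(X(1) = e^{-1} | X(0) = 1)
print(p1, pe)                          # close to 2/3 and 1/3
```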

Problem 2:

1. E{X(t)} = ∫ e^(-at) f_A(a) da

2. R_X(t1, t2) = E{X(t1)X(t2)} = ∫ e^(-a·t1) e^(-a·t2) f_A(a) da = ∫ e^(-a(t1 + t2)) f_A(a) da

3. We need to find f_X(x; t). X(t) is a one-to-one function of A. That is to say, for every value of A there is a single value of X(t), and there are no a ≠ a' such that x_a(t) = x_{a'}(t). Therefore, based on the transition formula between the two density functions, with x = e^(-at) and hence a = -ln(x)/t:

f_X(x; t) = f_A(a) / |dx/da| evaluated at a = -ln(x)/t

Since dx/da = -t e^(-at), we have |dx/da| = |t| e^(-at) = |t| x, and therefore:

f_X(x; t) = (1 / (|t| x)) f_A(-ln(x) / t)
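The resulting density can be sanity-checked numerically for a concrete f_A; as an illustration only (not part of the problem), take A ~ Exp(1), so f_A(a) = e^(-a) for a > 0, and t = 2:

```python
import numpy as np

rng = np.random.default_rng(1)
t = 2.0
a = rng.exponential(1.0, 1_000_000)   # illustrative choice: A ~ Exp(1)
x_samples = np.exp(-a * t)            # X(t) = e^{-At}

# Empirical density on a grid vs. the formula f_X(x;t) = f_A(-ln(x)/t) / (|t| x)
hist, edges = np.histogram(x_samples, bins=50, range=(0.05, 0.95))
width = edges[1] - edges[0]
emp = hist / (len(x_samples) * width)          # empirical density per bin
centers = 0.5 * (edges[:-1] + edges[1:])
f_formula = np.exp(-(-np.log(centers) / t)) / (np.abs(t) * centers)
err = np.max(np.abs(emp - f_formula))
print(err)                                     # small deviation
```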

Problem 3:

1. a. The sum of the probabilities of N_0 taking each whole number must be 1. From here, with Pr(N_0 = n) = A(1 - |n|/M) for n = -(M-1), ..., M-1:

Σ_{n=-(M-1)}^{M-1} A(1 - |n|/M) = A[(2M - 1) - (2/M) Σ_{n=1}^{M-1} n] = A[(2M - 1) - (2/M)·(M-1)M/2] = A[(2M - 1) - (M - 1)] = AM = 1

⇒ A = 1/M

b. The expected value is 0, since the probability is a symmetric function:

Pr(N_0 = n) = Pr(N_0 = -n), ∀n

c. Since the expectation is 0, we get:

VAR(N_0) = E[N_0^2] - (E[N_0])^2 = E[N_0^2]

E[N_0^2] = Σ_{n=-(M-1)}^{M-1} n^2 · (1/M)(1 - |n|/M) = (2/M)[Σ_{n=1}^{M-1} n^2 - (1/M) Σ_{n=1}^{M-1} n^3] = (2/M)[(M-1)M(2M-1)/6 - (M-1)^2 M / 4] = (M^2 - 1)/6
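The normalization A = 1/M, the zero mean, and the variance (M^2 - 1)/6 can be verified numerically for a concrete M (M = 7 here is an arbitrary choice):

```python
import numpy as np

M = 7                                    # arbitrary example value
n = np.arange(-(M - 1), M)               # n = -(M-1), ..., M-1
p = (1 / M) * (1 - np.abs(n) / M)        # Pr(N0 = n) with A = 1/M

total = p.sum()                          # should be 1
mean = (n * p).sum()                     # should be 0 (symmetric pmf)
var = (n**2 * p).sum()                   # should equal (M^2 - 1)/6
print(total, mean, var, (M**2 - 1) / 6)
```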

2. a.

E[N_k] = E[N_0 + Σ_{j=1}^{k} W_j] = E[N_0] + Σ_{j=1}^{k} E[W_j] = 0

b. Let us first assume that l ≥ k, and write l = k + κ with κ ≥ 0:

R_N(k, k + κ) = E[N_{k+κ} N_k] = E[(N_k + Σ_{j=k+1}^{k+κ} W_j) N_k]
= E[N_k^2] + Σ_{j=k+1}^{k+κ} E[W_j] E[N_k] = E[N_k^2] = Var(N_k)
(a)= Var(N_0 + Σ_{j=1}^{k} W_j) = Var(N_0) + Σ_{j=1}^{k} Var(W_j)

where (a) is from the fact that {W_j} is i.i.d. and independent of N_0.

Var(W_k) = E[W_k^2] - (E[W_k])^2 = E[W_k^2] = (1/4)·(-1)^2 + (1/2)·0^2 + (1/4)·1^2 = 1/2

Let us use the variance of N_0 from section 1:

R_N(k, k + κ) = Var(N_0) + Σ_{j=1}^{k} Var(W_j) = (M^2 - 1)/6 + k/2

And in the general case: R_N(k, l) = (M^2 - 1)/6 + min(k, l)/2.
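A Monte Carlo check of R_N(k, l) = (M^2 - 1)/6 + min(k, l)/2, with illustrative values M = 5, k = 3, l = 7:

```python
import numpy as np

rng = np.random.default_rng(2)
M, trials = 5, 400_000
k, l = 3, 7                                   # check R_N(k, l) for k < l

# N0 with the triangular pmf of section 1
n_vals = np.arange(-(M - 1), M)
p = (1 / M) * (1 - np.abs(n_vals) / M)
N0 = rng.choice(n_vals, p=p, size=trials)

# W[j] i.i.d.: -1, 0, 1 w.p. 1/4, 1/2, 1/4, independent of N0
W = rng.choice([-1, 0, 1], p=[0.25, 0.5, 0.25], size=(trials, l))
Nk = N0 + W[:, :k].sum(axis=1)
Nl = N0 + W[:, :l].sum(axis=1)

R_emp = np.mean(Nk * Nl)
R_theory = (M**2 - 1) / 6 + min(k, l) / 2     # = 4 + 1.5 = 5.5 for these values
print(R_emp, R_theory)
```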

c. The process is not stationary in either sense, since its auto-correlation function does not depend only on the time difference.

3. a. The optimal estimator is, of course, the conditional expectation estimator:

N̂_opt[k] = E[N_k | N_{k-1}] = E[N_{k-1} + W_k | N_{k-1}]
= E[N_{k-1} | N_{k-1}] + E[W_k | N_{k-1}]
= N_{k-1} + E[W_k] = N_{k-1}

where we made use of the fact that W_k is independent of N_{k-1}, since the latter is a function of N_0 and of W_j, j < k.

b. The optimal estimator is the conditional expectation estimator:

N̂_opt[k] = E[N_k | N_{k-1}, N_{k-2}] = E[N_{k-1} + W_k | N_{k-1}, N_{k-2}]
= E[N_{k-1} | N_{k-1}, N_{k-2}] + E[W_k | N_{k-1}, N_{k-2}]
= N_{k-1}

* The last equality is due to the fact that N_{k-1}, N_{k-2} are a deterministic function of N_0, W_1, ..., W_{k-1}. W_k is independent of N_0, W_1, ..., W_{k-1}, and, thus, also of N_{k-1}, N_{k-2}.

c. Notice that the optimal estimator of N_k from the pair of samples came out linear:

N̂_opt[k] = N_{k-1} = 1·N_{k-1} + 0·N_{k-2}

Therefore, this is also the optimal linear estimator.

Calculation of the MSE: for the three cases, the estimator is the same, N̂_opt[k] = N_{k-1}, and, thus, the MSE for all of them is:

MSE = E[(N_k - N̂_opt[k])^2] = E[(N_k - N_{k-1})^2] = E[W_k^2] = 1/2

4. a. Let us first calculate the optimal estimator of N_k from the samples vector. For the same reason as in section 3.b., we get that N̂_opt[k] = N_{k-1}:

N̂_opt[k] = E[N_k | N_{k-1}, ..., N_{k-10}]
= E[N_{k-1} + W_k | N_{k-1}, ..., N_{k-10}]
= E[N_{k-1} | N_{k-1}, ..., N_{k-10}] + E[W_k | N_{k-1}, ..., N_{k-10}]
= N_{k-1}

Here, too, the estimator we got is linear, and, therefore, it is the optimal linear estimator from the samples vector.

b. Calculation of the MSE: as in the previous section:

MSE = E[(N_k - N̂_opt[k])^2] = E[W_k^2] = 1/2.

Problem 4:

1. X(t) is S.S.S., that is to say, for every n, every t1, ..., tn, and every shift τ:

f_{X(t1),...,X(tn)}(x1, ..., xn; t1, ..., tn) = f_{X(t1+τ),...,X(tn+τ)}(x1, ..., xn; t1+τ, ..., tn+τ)

Let us take a look at:

f_{Z(t1),...,Z(tn)}(z1, ..., zn; t1, ..., tn)

Since the system is memoryless, each Z_i is a function of X_i alone, and, thus, in order to calculate said PDF, we can use the formula for the PDF of a function of a random vector.

Let us assume that for each z_i, i = 1, 2, ..., n, there exist m_i solutions to the equation g(x) = z_i, and we will denote them S_i = {x_{ij}}_{j=1}^{m_i}.

Overall, the set of solutions of the system of equations {g(x_i) = z_i}_{i=1}^{n} is:

S = {(x1, x2, ..., xn) | x1 ∈ S_1, x2 ∈ S_2, ..., xn ∈ S_n}

The Jacobian of the system is diagonal:

J(z1, ..., zn | x1, ..., xn) = diag(g'(x1), g'(x2), ..., g'(xn))

And, therefore, based on the formula:

f_{Z(t1),...,Z(tn)}(z1, ..., zn; t1, ..., tn) = Σ_{(x1,...,xn)∈S} f_{X(t1),...,X(tn)}(x1, ..., xn; t1, ..., tn) / |J(z1, ..., zn | x1, ..., xn)|

The denominator does not depend on time at all, only on the values {z_i}_{i=1}^{n}, and the numerator does not depend on shifting because X(t) is S.S.S. In particular:

f_{Z(t1+τ),...,Z(tn+τ)}(z1, ..., zn; t1+τ, ..., tn+τ)
= Σ_{(x1,...,xn)∈S} f_{X(t1+τ),...,X(tn+τ)}(x1, ..., xn; t1+τ, ..., tn+τ) / |J(z1, ..., zn | x1, ..., xn)|
= Σ_{(x1,...,xn)∈S} f_{X(t1),...,X(tn)}(x1, ..., xn; t1, ..., tn) / |J(z1, ..., zn | x1, ..., xn)|
= f_{Z(t1),...,Z(tn)}(z1, ..., zn; t1, ..., tn)

And this is true for every t_i, n, and τ.

⇒ Z(t) is S.S.S.

2. An example of a W.S.S. process such that operating on it with a memoryless system creates a process which is not W.S.S.:

Let us define the following random process:

X(t) = A cos(t) + B sin(t)

where A and B are independent random variables:

A = { +1 w.p. 1/2 ; -1 w.p. 1/2 },  B ~ N(0, 1)

Let us check whether X(t) is W.S.S.:

E[X(t)] = E[A cos(t) + B sin(t)] = E[A] cos(t) + E[B] sin(t) = 0·cos(t) + 0·sin(t) = 0, ∀t

E[X(t1)X(t2)] = E[A^2] cos(t1) cos(t2) + E[B^2] sin(t1) sin(t2) = 1·cos(t1) cos(t2) + 1·sin(t1) sin(t2) = cos(t1 - t2) = R_X(t1 - t2)

(the cross terms vanish since A and B are independent with zero mean). We got that X(t) is W.S.S.

Now, let us examine the random process Z(t) = (X(t))^2:

Z(t) = (A cos(t) + B sin(t))^2 = A^2 cos^2(t) + B^2 sin^2(t) + 2AB cos(t) sin(t)

At t = 0 we get:

Z(0) = A^2 cos^2(0) + B^2 sin^2(0) + 2AB cos(0) sin(0) = A^2 = 1 w.p. 1

Namely, var(Z(0)) = 0.

At t = π/2, we get:

Z(π/2) = A^2 cos^2(π/2) + B^2 sin^2(π/2) + 2AB cos(π/2) sin(π/2) = B^2

Namely, var(Z(π/2)) ≠ 0.

Overall we got:

var(Z(0)) ≠ var(Z(π/2))  (*)

Since variance constant in time is a necessary condition for W.S.S., it follows that Z(t) is not W.S.S.

Explanation: if we assume, for contradiction, that Z(t) is W.S.S., we get:

var(Z(0)) = E[Z^2(0)] - (E[Z(0)])^2 = R_Z(0, 0) - (E[Z(0)])^2 (1)= R_Z(π/2, π/2) - (E[Z(π/2)])^2 = E[Z^2(π/2)] - (E[Z(π/2)])^2 = var(Z(π/2))

where (1) is from the assumed W.S.S. property of Z(t) (expectation constant in time and auto-correlation dependent on the time difference only).

Namely, we got that var(Z(0)) = var(Z(π/2)), in contradiction to (*), and, thus, the assumption that Z(t) is W.S.S. is incorrect.
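The variance gap between Z(0) and Z(π/2) can also be seen numerically:

```python
import numpy as np

rng = np.random.default_rng(3)
trials = 500_000
A = rng.choice([-1.0, 1.0], size=trials)   # A = +/-1 w.p. 1/2 each
B = rng.normal(0.0, 1.0, trials)           # B ~ N(0, 1), independent of A

def Z(t):
    # Z(t) = X(t)^2 with X(t) = A cos(t) + B sin(t)
    return (A * np.cos(t) + B * np.sin(t)) ** 2

v0 = np.var(Z(0.0))            # Z(0) = A^2 = 1 always, so variance 0
v_half = np.var(Z(np.pi / 2))  # Z(pi/2) = B^2, whose variance is 2 for N(0,1)
print(v0, v_half)
```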

Problem 5:

1. It is clear that in this case the probability is 1, since the sine function is bounded

between -1 and 1.

2. Throughout a complete cycle, sin necessarily takes negative values, therefore

the probability is 0.

3. Notice that the sine function takes positive values throughout half a cycle and

negative values throughout the following half of the cycle. We would like the

given section, whose length is a quarter of a cycle, to be completely contained

in the half cycle where sin takes positive values; from here, the probability is 0.25.

4. The answer in this case is 0.5 (the value 1 is taken only once per cycle, and we want a section whose length is half a cycle to contain this point).

5. There are two solutions in the section [-π, π] to the equation sin(y) = sin(θ), and they are y_1 = θ, y_2 = π - θ. Therefore, based on the given, we conclude that Θ = θ or Θ = π - θ, and these angles are obtained with equal probability, since Θ distributes uniformly in the section [-π, π]. Thus:

X(t) | X(0) = sin(θ) = { sin(ω_0 t + θ) w.p. 1/2 ; sin(ω_0 t + π - θ) w.p. 1/2 }

6. The expected value of the process:

E[X(t)] = E[sin(ω_0 t + Θ)] = (1/2π) ∫_{-π}^{π} sin(ω_0 t + θ) dθ = 0

7. "Expectation in time": notice that the integral of X(t) over every complete cycle is zero, so the integral over [0, T] reduces to the integral over at most one leftover partial cycle, which is bounded since X(t) ∈ [-1, 1]. Division by T → ∞ therefore gives zero:

(1/T) ∫_0^T X(t) dt → 0, as T → ∞

8. For simplicity of the solution, we will assume for the following section that ω_0 = 1. For every value of t, when -1 < x < 1, two solutions for Θ exist that lead to x:

θ_1 = arcsin(x) - t,  θ_2 = π - arcsin(x) - t

f_X(x; t) = Σ_{i=1,2} f_Θ(θ_i) / |dx/dθ|_{θ=θ_i} = Σ_{i=1,2} (1/2π) · 1/|cos(t + θ_i)| = 2 · (1/2π) · 1/|cos(arcsin(x))| = (1/π) · 1/sqrt(1 - sin^2(arcsin(x))) = 1 / (π sqrt(1 - x^2))

for |x| < 1, and 0 otherwise.
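A histogram check of the arcsine density 1/(π sqrt(1 - x^2)) (taking ω_0 = 1 as in this section; t = 0.7 is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(4)
t = 0.7                                       # any fixed time (omega_0 = 1)
theta = rng.uniform(-np.pi, np.pi, 1_000_000)
x_samples = np.sin(t + theta)                 # samples of X(t)

hist, edges = np.histogram(x_samples, bins=40, range=(-0.9, 0.9))
width = edges[1] - edges[0]
emp = hist / (len(x_samples) * width)         # empirical density per bin
centers = 0.5 * (edges[:-1] + edges[1:])
f_arcsine = 1 / (np.pi * np.sqrt(1 - centers**2))
err = np.max(np.abs(emp - f_arcsine))
print(err)                                    # small deviation
```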

9.

f_{X(t1)X(t2)}(x1, x2; t1, t2) = f_{X(t1)|X(t2)}(x1 | x2; t1, t2) · f_{X(t2)}(x2; t2)

When X(t2) is known, Θ can take two values; that is to say, there are two different possible sample functions that satisfy X(t2), and their probabilities are equal. Namely, given X(t2), X(t1) can take one of two values, each with probability 0.5:

x(t1) = sin(t1 + θ) = sin((t1 - t2) + (t2 + θ))
= sin(t1 - t2) cos(t2 + θ) + cos(t1 - t2) sin(t2 + θ)
= ± sqrt(1 - x^2(t2)) sin(t1 - t2) + x(t2) cos(t1 - t2)

The last equality is true due to:

cos(t2 + θ) = ± sqrt(1 - x^2(t2)),  sin(t2 + θ) = x(t2)

And, therefore:

f_{X(t1)X(t2)}(x1, x2; t1, t2) = f_{X(t1)|X(t2)}(x1 | x2; t1, t2) · f_{X(t2)}(x2; t2)
= [ (1/2) δ(x1 - x2 cos(t1 - t2) - sqrt(1 - x2^2) sin(t1 - t2)) + (1/2) δ(x1 - x2 cos(t1 - t2) + sqrt(1 - x2^2) sin(t1 - t2)) ] · 1/(π sqrt(1 - x2^2)), for |x1| ≤ 1, |x2| ≤ 1
= 0, otherwise.

Problem 6:

1. The initial condition is X_0 = 0; this means that it is deterministically known (its value is 0 with probability 1) and, thus, f_{X_0}(x) = δ(x).

Let us calculate for the following samples:

X_1 = (1/2)X_0 + W_1 = W_1 = { 1 w.p. 1/2 ; 0 w.p. 1/2 }

f_{X_1}(x) = (1/2)δ(x) + (1/2)δ(x - 1)

X_2 = (1/2)X_1 + W_2 = { 1.5 w.p. 1/4 ; 1 w.p. 1/4 ; 0.5 w.p. 1/4 ; 0 w.p. 1/4 }

f_{X_2}(x) = (1/4)[δ(x) + δ(x - 0.5) + δ(x - 1) + δ(x - 1.5)]

Similarly, for a general n we get:

f_{X_n}(x) = (1/2^n) Σ_{k=0}^{2^n - 1} δ(x - k·2^{-(n-1)})

We got that X_n distributes uniformly over 2^n points, spaced uniformly in the interval between 0 and 2 - 2^{-(n-1)}.

It can be seen that, in the limit n → ∞, the distribution of the sample X_n approaches the uniform distribution Unif(0, 2).
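The support of X_n can be enumerated directly; a sketch for n = 4:

```python
import numpy as np
from itertools import product

# Enumerate X_n over all 2^n binary noise sequences, via X_k = X_{k-1}/2 + W_k, X_0 = 0
def support(n):
    vals = set()
    for w in product([0, 1], repeat=n):
        x = 0.0
        for wk in w:
            x = 0.5 * x + wk
        vals.add(x)
    return sorted(vals)

s = support(4)
print(len(s), s[0], s[-1])            # 2^4 = 16 points from 0 to 2 - 2^{-3} = 1.875
spacing = np.diff(s)
print(spacing.min(), spacing.max())   # uniform spacing 2^{-3} = 0.125
```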

2.

X_n = (1/2)X_{n-1} + W_n = (1/2)((1/2)X_{n-2} + W_{n-1}) + W_n = (1/4)X_{n-2} + (1/2)W_{n-1} + W_n = ... = (1/2^n)X_0 + Σ_{k=1}^{n} (1/2^{n-k}) W_k

For initial condition X_0 = 0 we get:

X_n = Σ_{k=1}^{n} (1/2^{n-k}) W_k

Notice that this form suits a binary expansion of a number from [0, 2]:

W_n . W_{n-1} W_{n-2} ... W_1

where each W_k takes the value 0 or 1. In the limit, when n → ∞, this approaches an infinite binary expansion of a number in the interval [0, 2].
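A quick check that the recursion and the binary-expansion form agree (n = 12 is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 12
W = rng.integers(0, 2, n)               # W_1, ..., W_n in {0, 1}

# Recursion: X_k = X_{k-1}/2 + W_k with X_0 = 0
x = 0.0
for wk in W:
    x = 0.5 * x + wk

# Closed form: X_n = sum_k 2^{-(n-k)} W_k, i.e. the binary string W_n . W_{n-1} ... W_1
closed = sum(W[k] * 2.0 ** (-(n - 1 - k)) for k in range(n))
print(x, closed)                        # identical (both are exact dyadic values)
```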

3. Let us examine the distribution Unif(0, 2) as a stationary distribution. Let us assume that X_{n-1} ~ Unif(0, 2) and check what the distribution of X_n is in this case:

X_n = (1/2)X_{n-1} + W_n

W_n is independent of X_{n-1} (which is a function of X_0 and W_1, ..., W_{n-1}), so by total probability over W_n:

f_{X_n}(x) = (1/2) f_{(1/2)X_{n-1}}(x) + (1/2) f_{(1/2)X_{n-1}}(x - 1)

Since X_{n-1} ~ Unif(0, 2), we have (1/2)X_{n-1} ~ Unif(0, 1), i.e. f_{(1/2)X_{n-1}}(x) = 1 for 0 ≤ x ≤ 1 and 0 elsewhere. Therefore:

f_{X_n}(x) = (1/2)·1{0 ≤ x ≤ 1} + (1/2)·1{0 ≤ x - 1 ≤ 1} = 1/2 for 0 ≤ x ≤ 2

We got that f_{X_n}(x) = f_{X_{n-1}}(x) and, thus, Unif(0, 2) is a stationary distribution. In other words, we get an S.S.S. process if we choose:

X_0 ~ Unif(0, 2)
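The stationarity of Unif(0, 2) can be checked empirically over one step of the recursion:

```python
import numpy as np

rng = np.random.default_rng(6)
trials = 1_000_000
x_prev = rng.uniform(0.0, 2.0, trials)   # X_{n-1} ~ Unif(0, 2)
w = rng.integers(0, 2, trials)           # W_n in {0, 1} equiprobable
x_next = 0.5 * x_prev + w                # X_n = X_{n-1}/2 + W_n

# If Unif(0, 2) is stationary, X_n should again be uniform on [0, 2]:
hist, _ = np.histogram(x_next, bins=20, range=(0.0, 2.0))
emp = hist / trials                      # each bin should hold ~ 1/20 of the mass
dev = np.max(np.abs(emp - 1 / 20))
print(dev)                               # small deviation
```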

4. Code:

import numpy as np
import matplotlib.pyplot as plt

L = 100   # number of time steps
M = 10    # number of realizations to average over

# W[n] is i.i.d., taking 0 or 1 with probability 1/2 each
w_n = np.random.randint(0, 2, (M, L - 1))

for j in range(2):
    x_n = np.zeros((M, L))
    if j == 0:
        # part a: arbitrary initial condition with a large expectation
        x_n[:, 0] = np.random.randint(0, 100, M)
    else:
        # part b: stationary initial condition X_0 ~ Unif(0, 2), per section 3
        x_n[:, 0] = np.random.uniform(0, 2, M)
    for i in range(L - 1):
        x_n[:, i + 1] = 0.5 * x_n[:, i] + w_n[:, i]
    plt.plot(np.mean(x_n, axis=0))

plt.xlabel('n')
plt.ylabel('$X[n]$')
plt.legend(['$X_a[n]$', '$X_b[n]$'])
plt.show()

5. The plots are given on the same graph.

6. In part a, X_0 is chosen arbitrarily with a large expectation; we can see that it takes some time for the mean of the process to converge to the stationary mean. That is to say, the process is asymptotically S.S.S., and therefore its mean is not constant but converges to a constant for large values of n.

However, in part b, X_0 is chosen according to section 3 of the problem. This yields an S.S.S. process with a constant mean.