Homework #1 Odd Numbered Answers
1. From Wikipedia, there are many very readable accounts of the life and technical achievements of J.C. Maxwell, H.R. Hertz, G. Marconi, and N. Tesla. Their backgrounds, personalities, professional recognitions, and financial successes from their works varied greatly. "History of Radio" is another interesting Wikipedia article on wireless communications. This article also referenced a PBS documentary program by Ken Burns entitled "Empire of the Air: The Men Who Made Radio," 1992. Besides the book "History of Wireless," by T.K. Sarkar et al., J. Wiley, 2006, many other books on early workers in this area appeared in the references at the end of the article "History of Radio."

3. a. For R = 1 Gbit/s, the new power requirement becomes P_T = 5.68 × 10^−5 watts.
b. For R = 100 kbit/s, the new power requirement becomes P_T = 5.54 × 10^−5 watts. If this power is too demanding, we can get closer to the base station with a range of r = 1 km. Then P_T = 2.22 × 10^−6 watts. If this power is still considered to be too high, perhaps we can attach an external higher-gain antenna with G_T = 2. Then P_T = 1.11 × 10^−6 watts.
c. Under the new conditions, the power requirement becomes P_T = 1,680 watts. This power requirement is definitely not reasonable for a small UAV. If G_T = 100, then P_T = 168 watts, which perhaps may still be too demanding.
d. For a low-altitude satellite, the power becomes P_T = 3.65 × 10^−5 watts. By using a lower-gain antenna with G_T = 1,000, the power becomes P_T = 3.65 × 10^−4 watts, which is still very modest.
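A minimal MATLAB sketch of the free-space (Friis) link-budget computation behind numbers of this kind follows. All parameter values here (carrier frequency, gains, range, data rate, required Eb/N0, noise density) are illustrative assumptions, not the values of the original problem statement.

c = 3e8;            % speed of light, m/s
f0 = 2.4e9;         % assumed carrier frequency, Hz
lambda = c/f0;      % wavelength, m
GT = 1; GR = 1;     % assumed antenna gains (linear)
r = 10e3;           % assumed range, m
R = 1e5;            % assumed data rate, bits/s
EbN0 = 10^(10/10);  % assumed required Eb/N0 of 10 dB (linear)
N0 = 4e-21;         % assumed noise density, W/Hz
PR = EbN0*N0*R;                     % required received power, W
PT = PR*(4*pi*r/lambda)^2/(GT*GR);  % required transmit power from the Friis equation
fprintf('PT = %g W\n', PT)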
5. Using the Matlab fminsearch.m function to minimize the residual ||y − as||_l1, with an initial starting point of 0.4, we obtained â₀^AE = 0.5545312500000005 with an AE error of ε_AE(â₀^AE) = 4.724562499999993. Using an initial starting point of 0.6, we obtained â₀^AE = 0.5545605468749999 with an AE error of ε_AE(â₀^AE) = 4.724484374999999. We can conclude that, regardless of the initial starting points, both solutions are extremely close. Furthermore, ε_AE(ã₁) = 11.208 and ε_AE(ã₂) = 5.188. Clearly, both ε_AE(ã₁) and ε_AE(ã₂) are greater than ε_AE(â₀^AE), since ε_AE(â₀^AE) is the minimum AE error.

7. Using (6-15) for x and y2, the explicit solution of the optimum LSE slope is â₀^LS = 1.601464615384616. Using the Matlab fminsearch.m function to minimize the residual ||y2 − as||_l1, with an initial starting point of 0.53, we obtained â₀^AE = 0.5558271484375. The left figure below plots x vs. y2 and a straight line of slope â₀^LS. The right figure plots x vs. y2 and a straight line of slope â₀^AE. In both figures, we notice two outliers at x = 3 and x = 5. However, in the right figure, most of the values of y2 are quite close to the straight line with the â₀^AE slope, while in the left figure most of the values of y2 are quite far from the straight line with the â₀^LS slope. This example shows the robustness of linear estimation based on the LAE criterion over that based on the LSE criterion.
[Figures: left, x vs. y2 with the straight line of slope â₀^LS; right, x vs. y2 with the straight line of slope â₀^AE. Both panels plot y from 0 to 100 over x from 0 to 12.]
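The â₀^AE values quoted in Problems 5 and 7 can be reproduced with fminsearch along the following lines; the data vectors x and y below are assumed stand-ins for the problem's data set, which is not reproduced here.

x = (1:10)';               % assumed abscissas
y = 0.55*x + randn(10,1);  % assumed ordinates
epsAE = @(a) sum(abs(y - a*x));   % l1 (absolute-error) residual of slope a
aAE = fminsearch(epsAE, 0.4);     % LAE slope from a starting point of 0.4
aLS = (x'*y)/(x'*x);              % explicit LSE slope, cf. (6-15)
fprintf('aAE = %g, eps_AE = %g, aLS = %g\n', aAE, epsAE(aAE), aLS)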
9. From (6-35)-(6-37), we obtained for n = 1, â_n = [0.4]^T and ε_MS(â_n) = 0.68. For n = 2, â_n = …
Chapter 2 Review of Probability and Random Processes Odd Numbered Homework Solutions

1. Bernoulli: p, q ≥ 0, p + q = 1.
f_X(x) = qδ(x) + pδ(x − 1), F_X(x) = qu(x) + pu(x − 1), μ_X = p, σ²_X = pq.

Binomial: p, q ≥ 0, p + q = 1, n ≥ 1.
P_n(k) = (n choose k) p^k q^{n−k}, 0 ≤ k ≤ n,
f_X(x) = Σ_{k=0}^{n} P_n(k) δ(x − k), F_X(x) = Σ_{k=0}^{n} P_n(k) u(x − k), μ_X = np, σ²_X = npq.

Poisson: b > 0.
P_k = (b^k/k!) e^{−b}, k = 0, 1, 2, . . . ,
f_X(x) = Σ_{k=0}^{∞} P_k δ(x − k), F_X(x) = Σ_{k=0}^{∞} P_k u(x − k), μ_X = b, σ²_X = b.

Cauchy: a > 0.
f_X(x) = a/(π(x² + a²)), −∞ < x < ∞, F_X(x) = (1/π) tan⁻¹(x/a) + 1/2, −∞ < x < ∞.
The mean and variance of the Cauchy random variable are undefined.

Exponential: λ > 0.
f_X(x) = λe^{−λx} for x ≥ 0, and 0 otherwise; F_X(x) = 1 − e^{−λx} for x ≥ 0, and 0 otherwise; μ_X = 1/λ, σ²_X = 1/λ².

Gaussian:
f_X(x) = (1/(√(2π) σ_X)) exp(−(x − μ_X)²/(2σ²_X)), −∞ < x < ∞,
F_X(x) = Φ((x − μ_X)/σ_X), −∞ < x < ∞, μ_X = E[X], σ²_X = Var[X].

Laplace: b > 0.
f_X(x) = (b/2) exp(−b|x|), −∞ < x < ∞, μ_X = 0, σ²_X = 2/b²,
F_X(x) = (1/2)e^{bx} for −∞ < x < 0, and 1 − (1/2)e^{−bx} for 0 ≤ x < ∞.

Rayleigh: b > 0.
f_X(x) = 2bx exp(−bx²) for x ≥ 0, and 0 otherwise; F_X(x) = 1 − exp(−bx²) for x ≥ 0, and 0 otherwise;
μ_X = √(π/(4b)), σ²_X = (4 − π)/(4b).

3. a.
1 = c ∫₀¹ ∫₀¹ (x + y) dx dy = c ∫₀¹ ((1/2)x² + xy)|₀¹ dy = c ∫₀¹ ((1/2) + y) dy = c((1/2)y + (1/2)y²)|₀¹ = c ⟺ c = 1.
b.
F_XY(x, y) = 0 for x ≤ 0 or y ≤ 0; xy(x + y)/2 for 0 < x ≤ 1, 0 < y ≤ 1; x(x + 1)/2 for 0 < x ≤ 1, 1 < y; y(y + 1)/2 for 1 < x, 0 < y ≤ 1; and 1 for 1 < x, 1 < y.

c.
F_X(x) = F_XY(x, 1) = x(x + 1)/2, 0 < x < 1 ⟺ f_X(x) = (d/dx)F_X(x) = x + 1/2 for 0 < x < 1, and 0 elsewhere.
Similarly, f_Y(y) = y + 1/2 for 0 < y < 1, and 0 elsewhere.
5. a. [x₁ x₂ x₃] Λ⁻¹ [x₁ x₂ x₃]^T = 2x₁² − x₁x₂ + x₂² − 2x₁x₃ + 4x₃². Denote Λ⁻¹ = [σ_ij]. Then σ₁₁ = 2, σ₂₂ = 1, σ₃₃ = 4, σ₂₃ = σ₃₂ = 0, σ₁₂ = σ₂₁ = −1/2, and σ₁₃ = σ₃₁ = −1. Then
Λ⁻¹ = [2 −1/2 −1; −1/2 1 0; −1 0 4], Λ = [2/3 1/3 1/6; 1/3 7/6 1/12; 1/6 1/12 7/24].
b. Since |Λ| = 1/6, by direct computation, C = ((2π)^{3/2} |Λ|^{1/2})⁻¹ = √6/(2π)^{3/2} ≃ 0.1555.
c. [Y₁ Y₂ Y₃]^T = B[X₁ X₂ X₃]^T yields Y₁ = X₁ − X₂/4 − X₃/2; Y₂ = X₂ − 2X₃/7; Y₃ = X₃.
d. [X₁ X₂ X₃]^T = B⁻¹[Y₁ Y₂ Y₃]^T yields X₁ = Y₁ + Y₂/4 + 4Y₃/7; X₂ = Y₂ + 2Y₃/7; X₃ = Y₃.
e. The Jacobian of the transformation yields J = |B| = 1. Then
−(1/2)(2x₁² − x₁x₂ + x₂² − 2x₁x₃ + 4x₃²) = −(1/2)(2y₁² + (7/8)y₂² + (24/7)y₃²),
and f_Y(y) = f_X(x) J(X|Y) = f_X(x). Thus,
f_Y(y) = (6/(2π)³)^{1/2} exp(−(1/2)(2y₁² + (7/8)y₂² + (24/7)y₃²)) = [(2/2π)^{1/2} exp(−y₁²)] × [(1/(2π · 8/7))^{1/2} exp(−(7/16)y₂²)] × [(1/(2π · 7/24))^{1/2} exp(−(12/7)y₃²)].
f. Since the 3-dimensional pdf factors as the product of three 1-dimensional pdfs, the three r.v.'s are mutually independent. By inspection: μ_{Y₁} = μ_{Y₂} = μ_{Y₃} = 0 and σ²_{Y₁} = 1/2, σ²_{Y₂} = 8/7, σ²_{Y₃} = 7/24.
7. a. μ_Z = E{Z} = μ_X + μ_Y = λ_X + λ_Y. b. σ²_Z = Var{Z} = σ²_X + σ²_Y = λ_X + λ_Y.
c.
φ_Z(t) = E{e^{t(X+Y)}} = E{e^{tX}} E{e^{tY}} = e^{−λ_X} e^{e^t λ_X} e^{−λ_Y} e^{e^t λ_Y} = e^{−(λ_X+λ_Y)} e^{e^t (λ_X+λ_Y)}
⟺ Z is a Poisson rv with μ_Z = λ_X + λ_Y and σ²_Z = λ_X + λ_Y.
d.
P(X + Y = k) = P(X = 0, Y = k or X = 1, Y = k−1 or . . . X = k, Y = 0)
= P(X = 0)P(Y = k) + P(X = 1)P(Y = k−1) + . . . + P(X = k)P(Y = 0)
= e^{−λ_X}(λ_X⁰/0!) e^{−λ_Y}(λ_Y^k/k!) + e^{−λ_X}(λ_X¹/1!) e^{−λ_Y}(λ_Y^{k−1}/(k−1)!) + . . . + e^{−λ_X}(λ_X^k/k!) e^{−λ_Y}(λ_Y⁰/0!)
= e^{−(λ_X+λ_Y)} [λ_X⁰λ_Y^k/(0! k!) + λ_X¹λ_Y^{k−1}/(1! (k−1)!) + . . . + λ_X^k λ_Y⁰/(k! 0!)].
But the sum in the bracket = (λ_X + λ_Y)^k/k!. Thus,
P(X + Y = k) = e^{−(λ_X+λ_Y)} (λ_X + λ_Y)^k/k!, k = 0, 1, . . .
⟺ Z is a Poisson rv with μ_Z = λ_X + λ_Y and σ²_Z = λ_X + λ_Y.
e. Use the moment generating function method to attack this problem.
φ_Z(t) = E{e^{t(2X+3Y)}} = E{e^{2tX}} E{e^{3tY}} = e^{−λ_X} e^{e^{2t} λ_X} e^{−λ_Y} e^{e^{3t} λ_Y} = e^{−(λ_X+λ_Y)} e^{e^{2t} λ_X + e^{3t} λ_Y}.
No, Z is not a Poisson rv, since its φ_Z(t) is not of the form e^{−λ} e^{e^t λ}.

9.
S(ω) = R(0) + R(1)[e^{iω} + e^{−iω}] + R(2)[e^{i2ω} + e^{−i2ω}] = 1 + cos(ω) + (1/2) cos(2ω), −π ≤ ω < π.
R(n) = (1/2π) ∫_{−π}^{π} S(ω) e^{iωn} dω = (1/2π) ∫_{−π}^{π} [1 + cos(ω) + (1/2) cos(2ω)] e^{iωn} dω
= (1/2π) ∫_{−π}^{π} [e^{iωn} + (1/2)(e^{iω(n+1)} + e^{iω(n−1)}) + (1/4)(e^{iω(n+2)} + e^{iω(n−2)})] dω
= (1/2π) ∫_{−π}^{π} {cos(ωn) + (1/2)[cos(ω(n+1)) + cos(ω(n−1))] + (1/4)[cos(ω(n+2)) + cos(ω(n−2))]} dω
= (1/2π) [2 sin(πn)/n + sin(π(n+1))/(n+1) + sin(π(n−1))/(n−1) + (1/2) sin(π(n+2))/(n+2) + (1/2) sin(π(n−2))/(n−2)].
But
sin(πm)/m = π for m = 0, and 0 for m ≠ 0.
Thus, R(n) indeed yields
R(n) = 1 for n = 0; 1/2 for n = ±1; 1/4 for n = ±2; and 0 for all other integral n.

11. a.
E{X(t, Θ)} = ∫₀^{2π} A cos(ωt + θ) dθ/(2π) = 0.
b.
R_X(t, t + τ) = E{X(t, θ) X(t + τ, θ)} = ∫₀^{2π} A cos(ωt + θ) A cos(ω(t + τ) + θ) dθ/(2π)
= (A²/2) ∫₀^{2π} [cos(ωτ) + cos(2ωt + ωτ + 2θ)] dθ/(2π) = (A²/2) cos(ωτ),
where we used the identity cos(u) cos(v) = (1/2) cos(u − v) + (1/2) cos(u + v).
c.
⟨X(t, θ)⟩ = lim_{T→∞} (1/2T) ∫_{−T}^{T} A cos(ωt + θ) dt = 0.
This integral is zero since there is as much area above the abscissa as below, due to the symmetry of the problem. Of course, if we do the integration, we will find the same result.
d.
⟨X(t, θ) X(t + τ, θ)⟩ = lim_{T→∞} (1/2T) ∫_{−T}^{T} A cos(ωt + θ) A cos(ω(t + τ) + θ) dt
= lim_{T→∞} (A²/2)(1/2T) {∫_{−T}^{T} cos(ωτ) dt + ∫_{−T}^{T} cos(2ωt + ωτ + 2θ) dt} = (A²/2) cos(ωτ).
e. Yes, both the ensemble and time-averaged means and autocorrelations are the same for this random process. Of course, we cannot claim this process is ergodic, having shown only that these two averages are equivalent.
Chapter 3 Hypothesis Testing Odd Numbered Homework Solutions
1. a.
Q(x) = ∫_x^∞ (1/√(2π)) e^{−t²/2} dt = 1/2 − (1/2) erf(x/√2) = (1/2) erfc(x/√2),
erf(x) = (2/√π) ∫₀^x e^{−t²} dt = 1 − 2Q(√2 x).

b&c.
x      Q(x)     Q1(x)    Q2(x)    log10 Q(x)  log10 Q1(x)  log10 Q2(x)
0.5    0.3085   0.7041   0.3085   -0.5107     -0.1523      -0.5107
1.0    0.1587   0.2420   0.1587   -0.7995     -0.6162      -0.7995
1.5    0.0668   0.0863   0.0668   -1.1752     -1.0638      -1.1752
2.0    0.0228   0.0270   0.0228   -1.6430     -1.5687      -1.6430
2.5    0.0062   0.0070   0.0062   -2.2069     -2.1542      -2.2069
3.0    0.0013   0.0015   0.0013   -2.8697     -2.8305      -2.8697
3.5    0.0002   0.0002   0.0002   -3.6333     -3.6032      -3.6333
4.0    0.0000   0.0000   0.0000   -4.4993     -4.4755      -4.4991
4.5    0.0000   0.0000   0.0000   -5.4688     -5.4495      -5.4684
5.0    0.0000   0.0000   0.0000   -6.5426     -6.5267      -6.5420

[Figure: approximations of log10 Q(x) by log10 Q1(x) and log10 Q2(x), plotted for 0.5 ≤ x ≤ 5.]
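The exact Q(x) and log10 Q(x) columns of the table can be regenerated as follows; the approximations Q1(x) and Q2(x) are those defined in the problem statement and are not reproduced in this sketch.

x = 0.5:0.5:5;
Qx = 0.5*erfc(x/sqrt(2));   % Q(x) = (1/2) erfc(x/sqrt(2))
disp([x' Qx' log10(Qx)'])   % reproduces the Q(x) and log10 Q(x) columns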
3. Under H₀, p₀(x) = (1/2)e^{−x/2}, x ≥ 0; under H₁, p₁(x) = (x²/16)e^{−x/2}, x ≥ 0. Then Λ(x) = x²/8, x ≥ 0. Thus,
R₁ = {x : Λ(x) ≥ Λ₀ = 1} = {x : x² ≥ 8} = {x : x ≥ 2√2} and R₀ = {x : 0 ≤ x ≤ 2√2}.
Then
P_FA = P(x ∈ R₁|H₀) = ∫_{2√2}^{∞} (1/2) exp(−x/2) dx = exp(−√2) = 0.243,
P_D = P(x ∈ R₁|H₁) = ∫_{2√2}^{∞} (x²/16) exp(−x/2) dx = (1/16) exp(−√2)(32 + 16√2) = 0.830.
5. We are given
p_N(n) = e^{−n} for 0 ≤ n < ∞, and 0 for n < 0.
Then
p₀(x) = e^{−x} for 0 ≤ x < ∞, and 0 for x < 0,
p₁(x) = e^{−(x−1)} for 1 ≤ x < ∞, and 0 for x < 1.
Thus,
Λ(x) = p₁(x)/p₀(x) = e for 1 ≤ x < ∞, and 0 for 0 ≤ x < 1.
We note, Λ(x) is not defined (and need not be defined) for x < 0, since p₀(x) and p₁(x) are zero for x < 0.
If P({x : Λ(x) = Λ₀}) = 0, then we can arbitrarily associate the equality of Λ(x) = Λ₀ with R₁ and have
R₁ = {x : Λ(x) ≥ Λ₀}, R₀ = {x : Λ(x) < Λ₀}.
If P({x : Λ(x) = Λ₀}) > 0 (which is the case for this problem), we need to associate the equality with R₁ and R₀ in a "fairer" manner consistent with other constraints.
If we set 0 < Λ₀ < e, then
R₁ = {1 ≤ x < ∞} = {x : e = Λ(x) > Λ₀}, R₀ = {0 ≤ x < 1} = {x : 0 = Λ(x) < Λ₀}.
For this case,
P_FA = ∫_{R₁} p₀(x) dx = ∫₁^∞ e^{−x} dx = e^{−1} ≃ 0.37 ≠ 0.1,
as required. Thus 0 < Λ₀ < e is not possible. If we set e < Λ₀ < ∞, then
{1 ≤ x < ∞ : e = Λ(x) < Λ₀} ⊂ R₀ and {0 ≤ x < 1 : 0 = Λ(x) < Λ₀} ⊂ R₀.
Thus,
R₀ = {0 ≤ x < ∞}, R₁ = ∅.
For this case,
P_FA = ∫_{R₁} p₀(x) dx = 0 ≠ 0.1,
as required. Thus, e < Λ₀ < ∞ is not possible.
Thus, we must have Λ₀ = e. In this case R₁ cannot contain any x ∈ {0 ≤ x < 1}, since these x's satisfy 0 = Λ(x) < Λ₀ = e. Then R₁ includes only those x ∈ [1, ∞) such that
P_FA = ∫_{R₁} p₀(x) dx = 0.1. (1)
a. Clearly, there are many possible R₁ that can satisfy (1). Two examples are given in (b) and (c). Thus, R₁ and R₀ are not unique.
b. Suppose we pick R₁ = {x : 1 ≤ x₁ ≤ x}. Then we need
P_FA = ∫_{x₁}^∞ e^{−x} dx = e^{−x₁} = 0.1.
Thus, x₁ = −ln 0.1 = ln 10 ≃ 2.3 and
P_D₁ = ∫_{R₁} p₁(x) dx = ∫_{x₁}^∞ e^{−(x−1)} dx = e ∫_{x₁}^∞ e^{−x} dx = 0.1e.
This shows,
R₁ = {x : x₁ = ln 10 ≤ x}, R₀ = {x : 0 ≤ x < x₁ = ln 10}.
c. Suppose we pick R₁ = {x : 1 ≤ x ≤ x₂}. Then
P_FA = ∫_{R₁} p₀(x) dx = ∫₁^{x₂} e^{−x} dx = e^{−1} − e^{−x₂} = 0.1.
Thus, x₂ = −ln(e^{−1} − 0.1) ≃ 1.32 and
P_D₂ = ∫_{R₁} p₁(x) dx = ∫₁^{x₂} e^{−(x−1)} dx = e ∫₁^{x₂} e^{−x} dx = 0.1e.
This shows,
R₁ = {x : 1 ≤ x ≤ x₂ = −ln(e^{−1} − 0.1)}, R₀ = {x : x₂ < x} ∪ {x : 0 ≤ x < 1}.
d. Consider any R₁ ⊂ [1, ∞) such that
P_FA = ∫_{R₁} p₀(x) dx = 0.1.
For x ∈ R₁, Λ(x) = p₁(x)/p₀(x) = Λ₀ = e, or p₁(x) = p₀(x)e. Thus,
P_D = ∫_{R₁} p₁(x) dx = e ∫_{R₁} p₀(x) dx = 0.1e.
Indeed, it is consistent that P_D = P_D₁ = P_D₂ = 0.1e.
7. Often we denote by P_I, the probability of error of type I, the probability of declaring H₁ given H₀ is true, and by P_II, the probability of error of type II, the probability of declaring H₀ given H₁ is true. Thus, P_FA = P_I = P(Declare H₁|H₀) = P(X ≤ 5|p = 0.05) = Σ_{k=0}^{5} C(200, k)(0.05)^k (0.95)^{200−k}. We can use the Poisson approximation to the binomial probability, using λ = np = 200 × 0.05 = 10. Thus, P_FA ≈ Σ_{k=0}^{5} exp(−10) 10^k/k! = 0.067, or P(Declare H₀|H₀) = 1 − 0.067 = 0.933. Similarly, P_M = P_II = P(Declare H₀|H₁) = P(X > 5|p = 0.02) = Σ_{k=6}^{200} C(200, k)(0.02)^k (0.98)^{200−k}. Now, λ = np = 200 × 0.02 = 4, and P_M ≈ 1 − Σ_{k=0}^{5} exp(−4) 4^k/k! = 0.215. In plain English, if one uses the existing manufacturing process (i.e., H₀), then one has a probability of 0.933 of declaring more than 5 defective items in a sample of 200 items, while if one uses the new manufacturing process (i.e., H₁), then one has a much lower probability of 0.215 of declaring more than 5 defective items in a sample of 200 items. The manufacturing manager will be much happier with the new manufacturing process.
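With the Statistics Toolbox, the exact binomial values and their Poisson approximations in Problem 7 can be checked directly; a minimal sketch:

PFA_bin = binocdf(5, 200, 0.05);      % P(X <= 5 | p = 0.05), exact
PFA_poi = poisscdf(5, 10);            % Poisson approximation, lambda = 10
PM_bin  = 1 - binocdf(5, 200, 0.02);  % P(X > 5 | p = 0.02), exact
PM_poi  = 1 - poisscdf(5, 4);         % Poisson approximation, lambda = 4
fprintf('PFA: %.4f (binomial), %.4f (Poisson)\n', PFA_bin, PFA_poi)
fprintf('PM : %.4f (binomial), %.4f (Poisson)\n', PM_bin, PM_poi)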
9. a. From the LR test, we know R₁ = {x : x > x₀} and R₀ = {x : x < x₀}, where x₀ = 1/2 + ln Λ₀ = 1/2 + ln(π/(1 − π)). Then
C̄(π) = C̄(π, π) = π ∫_{x₀}^∞ (1/√(2π)) e^{−x²/2} dx + (1 − π) ∫_{−∞}^{x₀} (1/√(2π)) e^{−(x−1)²/2} dx = πQ(x₀) + (1 − π)(1 − Q(x₀ − 1)).

b. See Table 1.

c. C̄(π, π₀) = πQ(x₁) + (1 − π)(1 − Q(x₁ − 1)), where x₁ = 1/2 + ln(π₀/(1 − π₀)).

d. See Table 1.

π       C̄(π)      C̄(π, 0.1)   C̄(π, 0.5)   C̄(π, 0.9)
0.001   1E-3      4.4477E-3   0.30854     0.95422
0.01    0.01      0.013013    0.30854     0.94566
0.1     0.098664  0.098664    0.30854     0.86001
0.2     0.18616   0.19383     0.30854     0.76484
0.3     0.253     0.289       0.30854     0.66967
0.4     0.2945    0.38417     0.30854     0.5745
0.5     0.30854   0.47933     0.30854     0.47933
0.6     0.2945    0.5745      0.30854     0.38417
0.7     0.253     0.66967     0.30854     0.289
0.8     0.18616   0.76484     0.30854     0.19383
0.9     0.098664  0.86001     0.30854     0.098664
0.99    0.01      0.94566     0.30854     0.013013
0.999   1E-3      0.95422     0.30854     4.4477E-3
Table 1. C̄(π) and C̄(π, π₀) vs. π.

e. Plots of C̄(π) and C̄(π, π₀) vs. π.
11. From the solution of Problem 6, we know
Λ(x) = p₁(x)/p₀(x) = (m₁/m₀)(m₀² + x²)/(m₁² + x²), −∞ < x < ∞, m₁ > m₀,
Λ(x) ≥ Λ₀ ⟺ (m₀² + x²)/(m₁² + x²) ≥ (m₀/m₁)Λ₀, which rearranges to x² ≥ γ, for a constant γ determined by m₀, m₁, and Λ₀.
If γ ≤ 0, then R₁ = ℝ and R₀ = ∅. If γ > 0, then R₁ = {x : |x| ≥ √γ} and R₀ = {x : |x| ≤ √γ}. This shows that R₁ and R₀ are functions of the parameter m₁. Furthermore,
P_FA = ∫_{−∞}^{−√γ} p₀(x) dx + ∫_{√γ}^{∞} p₀(x) dx = 1 − ∫_{−√γ}^{√γ} (m₀/π)/(m₀² + x²) dx = 1 − (2/π) tan⁻¹(√γ/m₀)
⟺ m₀ tan((π/2)(1 − P_FA)) = √γ.
This shows that P_FA is indeed a function of the parameter m₁, and thus this LR test is not UMP.
13. a.
L_m(x_m) = [θ₁^{x_m} (1 − θ₁)^{m−x_m}]/[θ₀^{x_m} (1 − θ₀)^{m−x_m}].

b.
log(B) < x_m log(θ₁/θ₀) + (m − x_m) log((1 − θ₁)/(1 − θ₀)) < log(A) ⟺ b + cm < x_m < a + cm,
b = log(B)/log(θ₁(1 − θ₀)/(θ₀(1 − θ₁))), a = log(A)/log(θ₁(1 − θ₀)/(θ₀(1 − θ₁))), c = log((1 − θ₀)/(1 − θ₁))/log(θ₁(1 − θ₀)/(θ₀(1 − θ₁))).

c.
A ≈ (1 − β)/α = 0.95/0.05 = 19, B ≈ β/(1 − α) = 0.05/0.95 = 0.0526,
n̄_{H₀} ≈ [log(B)(1 − α) + log(A)α]/E{z|H₀}, n̄_{H₁} ≈ [log(B)β + log(A)(1 − β)]/E{z|H₁},
E{z|H₀} = E_{H₀}{log[p₁(x_i|H₁)/p₀(x_i|H₀)]} = E_{H₀}{log[θ₁^{x_i}(1 − θ₁)^{1−x_i}/(θ₀^{x_i}(1 − θ₀)^{1−x_i})]}
= θ₀ log(θ₁/θ₀) + (1 − θ₀) log((1 − θ₁)/(1 − θ₀)) = θ₀ log(θ₁(1 − θ₀)/(θ₀(1 − θ₁))) + log((1 − θ₁)/(1 − θ₀))
= 0.2 log((0.6 × 0.8)/(0.2 × 0.4)) + log(0.4/0.8) = −0.3348,
n̄_{H₀} ≈ (−2.79778 + 0.14722)/(−0.3348) = 7.92,
E{z|H₁} = E_{H₁}{log[p₁(x_i|H₁)/p₀(x_i|H₀)]} = 0.3820, n̄_{H₁} ≈ (−0.14725 + 2.7972)/0.3820 = 6.94.

d. Under the H₀ simulation, the 10 L_n values are:
{4.6875e-002, 4.2235e-002, 3.1250e-002, 3.1250e-002, 3.5156e-002, 3.1250e-002, 3.1250e-002, 4.6875e-002, 4.6875e-002, 4.6875e-002}.
The associated n values are: {7, 33, 5, 5, 10, 5, 5, 7, 7, 7}. Thus, the average number of terms of these realizations is n̄ = 9.10. The theoretical value is given by n̄_{H₀} = 7.92. We note, the simulation result of n̄ is fairly close to the theoretical value of n̄_{H₀}.
Under the H₁ simulation, the 10 L_n values are:
{4.5562e+001, 2.7000e+001, 3.0375e+001, 2.0250e+001, 2.0250e+001, 2.7000e+001, /, 2.7000e+001, 2.5629e+001, 2.5629e+001}.
The associated n values are: {10, 3, 8, 6, 6, 3, /, 3, 3, 16, 15}. Thus, the average number of terms of these realizations is n̄ = 7.79. The theoretical value is given by n̄_{H₁} = 6.94. We note, in realization 7, even after considering up to n = 35, the L_n value still did not exceed the threshold value of A = 19. Thus, by omitting that outlier, only 9 realizations were used in the evaluation of n̄. We also note, the simulation result of n̄ is fairly close to the theoretical value of n̄_{H₁}.
15. There are three typos on p. 90. In Table 3.3: 1. for n = 20, α_A should = 0.13159, and 2. for n = 10, α_A should = 0.005909. 3. Five lines below Table 3.3, α should = Φ(−γ₀/√n). The answer to this problem is tabulated below in Table 1.

n = 10: α_A = 0.17188, β_G = 0.95680 > β_ST = 0.87913; α_A = 0.010742, β_G = 0.64135 > β_ST = 0.37581.
n = 20: α_A = 0.13159, β_G = 0.99591 > β_ST = 0.96786; α_A = 0.005909, β_G = 0.89368 > β_ST = 0.62965.
Table 1: Comparisons of the probability of detection P_D(Gaussian) = β_G to P_D(SignTest) = β_ST for n = 10, α_A = 0.17188 and 0.010742, and for n = 20, α_A = 0.13159 and 0.005909, with p = 0.8.
Chapter 4 Detection of Known Binary Deterministic Signals in Gaussian Noises Odd Numbered Homework Solutions
1.
||s₀||² = 1² + 2² + 3² = 1 + 4 + 9 = 14 = ||s₁||²,
Λ₀ = π₀(C₁₀ − C₀₀)/(π₁(C₀₁ − C₁₁)) = (0.25 × (2 − 0))/(0.75 × (1 − 0)) = 0.50/0.75 = 2/3,
γ = σ² ln Λ₀ + (1/2)||s₁||² − (1/2)||s₀||² = −0.406, with σ² = 1.
The decision rule is x^T(s₁ − s₀) ≥ γ ⟹ H₁. With s₀ = [1, 2, 3]^T and s₁ = −s₀, this becomes
η₁ = x^T s₀ = x₁ + 2x₂ + 3x₃ ≤ 0.406/2 = 0.203 ⟹ H₁,
R₁ = {x : η₁ ≤ 0.203}, R₀ = {x : η₁ ≥ 0.203}.
E{η₁|H₀} = ||s₀||² = 14, E{η₁|H₁} = −14, σ²_{η₁} = σ²||s₀||² = 14.
P_FA = P{η₁ ≤ 0.203|H₀} = (1/√(2π·14)) ∫_{−∞}^{0.203} e^{−(η₁−14)²/28} dη₁ = (1/√(2π)) ∫_{−∞}^{−13.8/3.74} e^{−t²/2} dt = Q(3.69) = 0.000112.
P_D = P{η₁ ≤ 0.203|H₁} = (1/√(2π·14)) ∫_{−∞}^{0.203} e^{−(η₁+14)²/28} dη₁ = (1/√(2π)) ∫_{−∞}^{14.203/3.74} e^{−t²/2} dt = 1 − Q(3.80) = 0.999928.
Since −s₁ has the same norm as s₀ = [1, 2, 3]^T = −s₁, all the previous results on R₀, R₁, P_FA, and P_D apply.
3. a. From theory, we know that for binary detection of a deterministic waveform in WGN, the sufficient statistic is given by η = ∫₀^T x(t)[s₁(t) − s₀(t)] dt = 10 ∫₀^T x(t) sin(2πf₀t) dt. Furthermore, the threshold constant for η is given by γ = (N₀/2) ln(Λ₀) + 0.5 ∫₀^T [s₁²(t) − s₀²(t)] dt = 50 ∫₀^T sin²(2πf₀t) dt = 25. Then R₁ = {x(t) : η ≥ γ} and R₀ = {x(t) : η ≤ γ}.

b. Since η is a Gaussian r.v., we need to find its means and variances under H₀ and H₁. μ₀ = E{η|H₀} = 0. μ₁ = E{η|H₁} = 100 ∫₀^T sin²(2πf₀t) dt = 50. σ²_η = E{(η − μ₀)²|H₀} = 50. Then P_FA = Q(γ/σ_η) = Q(25/√50) = Q(3.5355) = 2.035E−4, and P_D = Q((γ − μ₁)/σ_η) = Q(−3.5355) = 0.9998.

c. Now, η′ = ∫₀^T x(t)s′(t) dt. Then μ′₀ = E{η′|H₀} = 0, μ′₁ = E{η′|H₁} = 200/π = 63.6620, and σ′²_η = 100 ∫₀^T (s′(t))² dt = 100 × 1 = 100.
P′_FA = Q(γ/σ′_η) = Q(25/10) = Q(2.5) = 6.2E−3.
P′_D = Q((25 − 63.662)/10) = Q(−3.8662) = 0.9999.
We note, P′_D > P_D (which is good), but P′_FA > P_FA (which is bad). Thus, it is hard to compare the performance of the receiver using s′(t) instead of s(t) explicitly if we allow both the probability of false alarm and the probability of detection to vary as compared to the original optimum case.

d. In order to find a new γ′, we set P′_FA = P_FA(γ′) = 2.035E−4. Then, we must have 3.5355 = γ′/10, or γ′ = 35.355. Then P′_D(γ′) = Q((γ′ − 63.662)/10) = Q(−2.8307) = 0.9977. We note, this P′_D(γ′) < P_D. In other words, having used the sub-optimum s′(t) instead of the optimum s(t) at the receiver, for the same fixed probability of false alarm, the probability of detection went down from 0.9998 to 0.9977. The degradation of the probability of detection is very small. However, the reduction of the complexity of the receiver is fairly significant.
5. Under H₁, the output signal is given by
s_o(t_o) = 0 for t_o < 0; ∫₀^{t_o} s(τ) h(t_o − τ) dτ for 0 ≤ t_o ≤ T; ∫₀^T s(τ) h(t_o − τ) dτ for T ≤ t_o.
Under H₀, s_o(t_o) = 0. The output noise under both hypotheses is given by
Ñ(t_o) = ∫₀^{t_o} N(τ) h(t_o − τ) dτ.
Since the mean of Ñ(·) is zero, its variance is given by
σ²_o(t_o) = 0 for t_o < 0; (N₀/2) ∫₀^{t_o} h²(t_o − τ) dτ for 0 ≤ t_o ≤ T; (N₀/2) ∫₀^T h²(t_o − τ) dτ for T ≤ t_o.
The output SNR can be defined by SNR(t_o) = s²_o(t_o)/σ²_o(t_o). For t_o < 0, SNR(t_o) is not defined. Under H₀, for 0 < t_o, SNR(t_o) = 0. From the Schwarz inequality, we have (∫_a^b f(t)h(t) dt)² ≤ ∫_a^b f²(t) dt ∫_a^b h²(t) dt, with equality if and only if f(t) = ch(t) for some constant c. Then under H₁, we have
Max_{h(·)} SNR(t_o) = (2/N₀) ∫₀^{t_o} s²(τ) dτ, 0 ≤ t_o ≤ T,
Max_{h(·)} SNR(t_o) = (2/N₀) ∫₀^T s²(τ) dτ, T ≤ t_o.
Thus,
Max_{0 ≤ t_o; h(·)} SNR(t_o) = (2/N₀) ∫₀^T s²(τ) dτ.

7. Let K(t) = h(t) and f(t) = exp(iω₀t). Then denote t − s = u and ds = −du. Thus,
∫_{−∞}^{∞} K(t − s) f(s) ds = ∫_{−∞}^{∞} h(u) exp(iω₀(t − u)) du = [∫_{−∞}^{∞} h(u) exp(−iω₀u) du] exp(iω₀t) = H(ω₀) exp(iω₀t).
This shows that the eigenvalue is λ = H(ω₀) and the eigenfunction is f(t) = exp(iω₀t).
9. a. R = [8 2; 2 5].
|R − λI| = 0 ⟹ (8 − λ)(5 − λ) − 4 = λ² − 13λ + 36 = (λ − 9)(λ − 4) = 0 ⟹ λ₁ = 9, λ₂ = 4.
[8 2; 2 5][θ₁₁; θ₁₂] = 9[θ₁₁; θ₁₂] ⟹ 8θ₁₁ + 2θ₁₂ = 9θ₁₁ and 2θ₁₁ + 5θ₁₂ = 9θ₁₂ ⟹ θ₁₁ = 2θ₁₂.
With 4θ₁₂² + θ₁₂² = 1 ⟹ θ₁₂ = 1/√5, θ₁₁ = 2/√5, so θ₁ = (1/√5)[2; 1].
[8 2; 2 5][θ₂₁; θ₂₂] = 4[θ₂₁; θ₂₂] ⟹ 2θ₂₁ + θ₂₂ = 0 ⟹ θ₂ = (1/√5)[1; −2].

b. R^{−1/2} = T D^{−1/2} T^T, where
T = (1/√5)[2 1; 1 −2], D = [9 0; 0 4], D^{−1/2} = [1/3 0; 0 1/2].

c. Whitening filter.
R^{−1/2} = (1/5)[2 1; 1 −2][1/3 0; 0 1/2][2 1; 1 −2] = (1/5)[4/3 + 1/2, 2/3 − 1; 2/3 − 1, 1/3 + 2] = (1/5)[11/6 −1/3; −1/3 7/3] = (1/30)[11 −2; −2 14].
Check:
R⁻¹ = (R^{−1/2})² = (1/900)[11 −2; −2 14]² = (1/900)[125 −50; −50 200],
R R⁻¹ = (1/900)[8 2; 2 5][125 −50; −50 200] = (1/900)[1000 − 100, −400 + 400; 250 − 250, −100 + 1000] = [1 0; 0 1].

d.
[Block diagrams: the conventional approach to the LR receiver for CGN correlates x directly with R⁻¹(s₁ − s₀); the whitening-filter approach first forms x̃ = R^{−1/2}x and s̃₁ − s̃₀ = R^{−1/2}(s₁ − s₀), and then correlates.]
η = x̃^T(s̃₁ − s̃₀) = x^T R⁻¹(s₁ − s₀) = x^T (1/900)[125 −50; −50 200][−60; 60] = (5/3) x^T [−7; 10] ≷_{H₀}^{H₁} 0 ⟹ (−7x₁ + 10x₂) ≷_{H₀}^{H₁} 0.
R₁ = {10x₂ − 7x₁ ≥ 0}, R₀ = {10x₂ − 7x₁ ≤ 0}.
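The eigendecomposition and the whitening matrix R^{−1/2} of Problem 9 can be verified numerically; a minimal sketch:

R = [8 2; 2 5];
[T, D] = eig(R);                    % orthonormal eigenvectors and eigenvalues
Rmh = T*diag(1./sqrt(diag(D)))*T';  % R^(-1/2)
disp(Rmh*R*Rmh')                    % should be the 2x2 identity
disp(Rmh - (1/30)*[11 -2; -2 14])   % should be (numerically) zero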
11. The integral equation of interest has the form
∫₀^T R(t − t′) q(t′) dt′ = λ q(t), 0 ≤ t ≤ T. (1)
The power spectral density function of R(τ) is given by S(ω) = F{R(τ)}. Since the integral on the left-hand side of (1) is a convolution, by using the Fourier transform this integral equation can be converted to
S(ω) Q(jω) = λ Q(jω). (2)
Since S(ω) is an even function, and if it is assumed to be a rational function, we can write it as
S(ω) = N(ω²)/D(ω²),
and (2) becomes
N(ω²) Q(jω) = λ D(ω²) Q(jω), (3)
where
N(ω²) = Σ_{k=0}^{m} b_k (ω²)^k and D(ω²) = Σ_{k=0}^{n} a_k (ω²)^k. (4)
We also have
−ω² e^{jωt} = d²(e^{jωt})/dt², i.e., (−ω²)^k e^{jωt} = d^{2k}(e^{jωt})/dt^{2k}, (5)
thus from operational calculus (3) and (4) become
Σ_{k=0}^{m} b_k (−1)^k (d^{2k}/dt^{2k}) q(t) = λ Σ_{k=0}^{n} a_k (−1)^k (d^{2k}/dt^{2k}) q(t). (6)
We know that q(t) cannot contain any impulse function, and therefore the solutions to the above differential equation are also solutions to the integral equation. Let us consider the specific kernel of
R(t − t′) = α exp(−β|t − t′|). (7)
Then (1) becomes
∫₀^T α exp(−β|t − t′|) q(t′) dt′ = λ q(t), 0 ≤ t ≤ T. (8)
The Fourier transform of the kernel of (7) is given by
S(ω) = 2αβ/(β² + ω²). (9)
Then the integral equation of (8) can be converted to the following homogeneous differential equation,
2αβ q(t) = λβ² q(t) − λ q″(t), (10)
which can be rewritten as
q″(t) − (β² − 2αβ/λ) q(t) = 0. (11)
The solution to this differential equation is given by
q(t) = A exp(γt) + B exp(−γt), (12)
where
γ = (β² − 2αβ/λ)^{1/2}. (13)
Substituting (12) into (1), and after some manipulation, we obtain
A(γ − β) + B(γ + β) = 0, (14a)
A(γ + β) exp(γT) + B(γ − β) exp(−γT) = 0. (14b)
For a solution to exist, the determinant of the matrix of the coefficients of these two equations must be 0. Therefore
exp(γT) = (γ − β)/(γ + β), (15)
where γ is a complex number. To proceed, let
γ = jω = j(2αβ/λ − β²)^{1/2}, (16)
then (15) becomes
cos(ωT) + j sin(ωT) = (ω + jβ)²/(ω² + β²). (17)
From the real and imaginary components, we obtain
cos(ωT) = (ω² − β²)/(ω² + β²), (18a)
sin(ωT) = 2βω/(ω² + β²), (18b)
which, when combined with (16), result in
tan(ωT) = 2βω/(ω² − β²). (19)
Writing tan(ωT) = 2 tan(ωT/2)/(1 − tan²(ωT/2)) and comparing with (19), the solutions of (19) split into
tan(ωT/2) = (βT/2)/(ωT/2) (20)
and
−cot(ωT/2) = (βT/2)/(ωT/2). (21)
Let x_k = ω_k T/2; then from (20), we obtain
tan(x_k) = (βT/2)/x_k, (22)
and from (21), we obtain
−cot(x_k) = (βT/2)/x_k. (23)
The solutions to (22) are the intersections of the tangent function with a hyperbola, and the solutions of (23) are the intersections of the negative cotangent function with a hyperbola. Both of these equations have a countably infinite number of solutions, and thus there is a countably infinite number of eigenvalues.
The hyperbola is monotonically decreasing on (0, ∞), and the tangent function and negative cotangent function have period π and are monotonically increasing in each period. In addition, they have discontinuities at multiples of π/2. Thus, the hyperbola intersects these two functions alternately, and the locations of the intersections increase with increasing βT/2. In order to have a clearer picture of the solutions, these three functions are plotted in Fig. 1 with α = 10 and β = 5.5. (Without loss of generality, we picked T = 1.) From Fig. 1, we can find the first three intersections, corresponding to the three largest eigenvalues: they occur at x_k values of about 1.1 for k = 1, about 2.4 for k = 2, and about 3.7 for k = 3. Upon solving (22) and (23) more precisely (e.g., using Newton's method), x₁ = ω₁T/2 = 1.16888, x₂ = ω₂T/2 = 2.41995, and x₃ = ω₃T/2 = 3.77161. We note the plot on page 368 of McDonough and Whalen (2nd edition) is incorrect.
[Figure 1. Intersections of (βT/2)/x with tan(x) and −cot(x), for α = 10 and β = 5.5, plotted over 0 ≤ x ≤ 7.]
We note from (16), we can obtain the eigenvalues
λ_k = 2αβ/(ω_k² + β²), k = 1, 2, . . . . (24)
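The intersections in Fig. 1 can also be located numerically with fzero, using bracketing intervals read off the plot; a minimal sketch for α = 10, β = 5.5, T = 1:

alpha = 10; beta = 5.5; T = 1; h = beta*T/2;
x1 = fzero(@(x) tan(x) - h./x, [1.0 1.4]);   % first tangent intersection, (22)
x2 = fzero(@(x) -cot(x) - h./x, [2.0 3.0]);  % first -cotangent intersection, (23)
x3 = fzero(@(x) tan(x) - h./x, [3.3 4.3]);   % second tangent intersection, (22)
w = 2*[x1 x2 x3]/T;                          % omega_k = 2 x_k / T
lam = 2*alpha*beta./(w.^2 + beta^2);         % eigenvalues from (24)
fprintf('x_k = %.5f %.5f %.5f\n', x1, x2, x3)   % 1.16888, 2.41995, 3.77161
fprintf('lambda_k = %.5f %.5f %.5f\n', lam)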
For α = 10 and β = 5.5, (24) yields the three largest eigenvalues by the differential equation approach to be
λ₁^{(DE)} = 3.07992, λ₂^{(DE)} = 2.04939, and λ₃^{(DE)} = 1.26219. (25)
The superscript (DE) indicates these eigenvalues were obtained based on the differential equation method. Shortly, we will evaluate these three largest eigenvalues by the matrix eigenvalue method. Substituting the above values of λ_k^{(DE)} in (25) into (16) and then into (12), we obtain
q(t) = a cos(ωt) + b sin(ωt),
where a and b are arbitrary constants. To find the values of a and b, we substitute this q(t) into the original integral equation and obtain
aβ − bω = 0, (26a)
[β cos(ωT) − ω sin(ωT)] a + [β sin(ωT) + ω cos(ωT)] b = 0, (26b)
which yields
[tan(ωT) − 2βω/(ω² − β²)] a = 0. (27)
From (19), this equation is already satisfied for any ω_k which is the solution to (22) and (23). The parameter a can be arbitrary, and the eigenfunctions we are looking for are
q_k(t) = a_k [cos(ω_k t) + (β/ω_k) sin(ω_k t)], k = 1, 2, . . . , (28)
with a_k satisfying
1 = a_k² ∫₀^T [cos(ω_k t) + (β/ω_k) sin(ω_k t)]² dt, k = 1, 2, . . . , (29)
so that these eigenfunctions will form an orthonormal set. The three associated eigenfunctions corresponding to the eigenvalues in (25) are given by
q₁(t) = 0.483657 [cos(2.33777t) + (2.35267) sin(2.33777t)], (30)
q₂(t) = 0.850800 [cos(4.83990t) + (1.13639) sin(4.83990t)], (31)
q₃(t) = 1.07616 [cos(7.54321t) + (0.729132) sin(7.54321t)]. (32)
Plots of these three eigenfunctions (given by (30)-(32)) obtained by the differential equation method are shown in Fig. 2.
[Figure 2. Eigenfunctions associated with the three largest eigenvalues (λ₁^{(DE)} = 3.07992, λ₂^{(DE)} = 2.04939, λ₃^{(DE)} = 1.26219) obtained by the differential equation method, plotted over 0 ≤ t ≤ 1.]
From the plots in Figure 2, we can see the eigenfunction associated with the largest eigenvalue 3.08 has no zero-crossing, the eigenfunction associated with eigenvalue 2.05 has one, and the eigenfunction associated with eigenvalue 1.26 has two. The number of zero-crossings increases with decreasing eigenvalues. Now, consider the solution of this problem using the discretized matrix eigenvalue method as considered in Problem 10. The three associated eigenvalues are now given by
λ₁^{(ME)} = 3.08064, λ₂^{(ME)} = 2.05085, and λ₃^{(ME)} = 1.26363. (33)
The three largest eigenvalues obtained from the matrix eigenvalue method shown in (33) are extremely close to those three largest eigenvalues obtained from the differential equation method shown in (25). Clearly, there are some numerical errors in the computations of these eigenvalues in both methods. Figure 3 shows the plots of the three eigenvectors corresponding to the eigenvalues of (33) evaluated based on the matrix eigenvalue method.
[Figure 3. Eigenfunctions associated with the three largest eigenvalues obtained with the discretized matrix eigenvalue method, plotted over 0 ≤ t ≤ 1.]
Figure 4 plots the two eigenfunctions corresponding to λ₁^{(DE)} and λ₁^{(ME)}. Similarly, Figure 5 plots the two eigenfunctions corresponding to λ₂^{(DE)} and λ₂^{(ME)}, and Figure 6 plots the two eigenfunctions corresponding to λ₃^{(DE)} and λ₃^{(ME)}. We note, in Figures 4, 5, and 6, values of the eigenfunctions obtained from the differential equation method are essentially identical to those obtained from the matrix eigenvalue method.
[Figure 4. Eigenfunctions of the largest eigenvalue with the two different methods.]
[Figure 5. Eigenfunctions of the second largest eigenvalue with the two different methods.]
[Figure 6. Eigenfunctions of the third largest eigenvalue with the two different methods.]
KY wants to acknowledge the contributions of [16; pp. 361-370] and various students (and particularly S. Lee) in a class project dealing with this problem.
13. a. The optimum correlation function {g(t), 0 ≤ t ≤ T} for the CGN problem is obtained from the solution of
∫₀^T R(t − t′) g(t′) dt′ = s(t), 0 ≤ t ≤ T. (1)
The CGN matched filter impulse response function h(t) is given by
h(t) = g(T − t) for 0 ≤ t ≤ T, and 0 elsewhere. (2)
An approximation for the solution of g(t) in (1) for large T can be obtained from
∫_{−∞}^{∞} R(t − t′) g(t′) dt′ = s(t), 0 ≤ t ≤ T. (3)
Denote N(ω) = F{R(t)} = ∫_{−∞}^{∞} R(t) exp(−jωt) dt, S(ω) = F{s(t)}, G(ω) = F{g(t)}, and H(ω) = F{h(t)}. Apply the Fourier transform to the convolutional equation in (3). Then
N(ω)G(ω) = S(ω), −∞ < ω < ∞. (4)
Apply the Fourier transform to (2). Then
H(ω) = G*(ω) e^{−jωT}, −∞ < ω < ∞. (5)
Combine (4) with (5) to obtain
H(ω) = (S*(ω)/N*(ω)) e^{−jωT} = (S*(ω)/N(ω)) e^{−jωT}, −∞ < ω < ∞. (6)

b. If the input is a CGN process of power spectral density
N(ω) = K/(ω² + ω₀²) = (K^{1/2}/(jω + ω₀)) × (K^{1/2}/(−jω + ω₀)),
then the causal whitening filter H_W(ω) satisfies N(ω)|H_W(ω)|² = C, for any C > 0. Thus,
|H_W(ω)|² = C/N(ω) = (C/K)(ω² + ω₀²),
and
H_W(ω) = C₀(jω + ω₀), C₀ = (C/K)^{1/2}.

c. The Fourier transform of the input signal is S(ω). Then the signal component of the output of the matched filter in the frequency domain is given by (|S(ω)|²/N(ω)) exp(−jωT). Thus, the signal component of the output of the matched filter in the time domain is given by
s₀(t) = (1/2π) ∫_{−∞}^{∞} (|S(ω)|²/N(ω)) e^{jω(t−T)} dω.

d. From (4.211), we have P_D = Q((γ_{0c} − μ₁)/σ₁), where from (4.206)
γ_{0c} = (1/2) Σ_{i=0}^{∞} s_i²/λ_i + ln(Λ₀),
from (4.208)
μ₁ = Σ_{i=0}^{∞} s_i²/λ_i,
and from (4.209)
σ₁ = (Σ_{i=0}^{∞} s_i²/λ_i)^{1/2}.
Then
P_D = Q( ln(Λ₀)/(Σ_{i=0}^{∞} s_i²/λ_i)^{1/2} − (1/2)(Σ_{i=0}^{∞} s_i²/λ_i)^{1/2} ).
From Parseval's theorem,
Σ_{i=0}^{∞} s_i²/λ_i = Σ_{i=0}^{∞} (power of the signal s(t) in the θ_i coordinate)/(av. power of the noise N(t) in the θ_i coordinate) = (1/2π) ∫_{−∞}^{∞} |S(ω)|²/N(ω) dω = ∞.
Thus, P_D = Q(−∞) = 1.

e. If N(ω) = 0 for ω ∈ {(−b, −a) ∪ (a, b)}, but S(ω) ≠ 0 for ω ∈ {(−b, −a) ∪ (a, b)}, then
(1/2π) ∫_{−∞}^{∞} |S(ω)|²/N(ω) dω = ∞.
Thus, P_D = Q(−∞) = 1.
15. In order to obtain the upper bound of s^T Λ s · s^T Λ⁻¹ s in (4.242), where s is taken to have unit norm, we can perform the following eigenvalue decomposition. Denote Λ = UDU^T and Λ⁻¹ = UD⁻¹U^T, where diag(D) = [λ₁, . . . , λ_n], with λ₁ > λ₂ > · · · > λ_n > 0. Since U is an orthogonal matrix, define a new n × 1 vector z = U^T s, or s = Uz and s^T = z^T U^T, with ||z||² = 1. Thus,
s^T Λ⁻¹ s · s^T Λ s = z^T U^T U D⁻¹ U^T U z · z^T U^T U D U^T U z = z^T D⁻¹ z · z^T D z = (Σ_{i=1}^{n} z_i²/λ_i)(Σ_{i=1}^{n} z_i² λ_i). (A1)
Define n new variables y_i = z_i² ≥ 0, i = 1, . . . , n, and substitute them into (A1). Then
s^T Λ⁻¹ s · s^T Λ s = (Σ_{i=1}^{n} y_i/λ_i)(Σ_{i=1}^{n} y_i λ_i). (A2)
Thus, the upper bound of s^T Λ s · s^T Λ⁻¹ s is given by the solution of the following theorem.

Theorem. The maximum of
(Σ_{i=1}^{n} y_i/λ_i)(Σ_{i=1}^{n} y_i λ_i), (A3)
subject to
Σ_{i=1}^{n} y_i = 1, y_i ≥ 0, i = 1, . . . , n, (A4)
and
λ₁ > λ₂ > · · · > λ_n > 0, (A5)
is attained by using
ŷ₁ = ŷ_n = 1/2, ŷ_i = 0, i = 2, . . . , n − 1. (A6)

Proof: Consider the nonlinear minimization problem of
Min_y f(y), (A7)
subject to
g_i(y) ≤ 0, i = 1, . . . , n, (A8)
h(y) = 0. (A9)
The celebrated Karush-Kuhn-Tucker (KKT) [4] necessary conditions for y = ŷ = [ŷ₁, . . . , ŷ_n]^T to be a local minimum solution of (A7) subject to (A8) and (A9) are such that there exist constants μ_i, i = 1, . . . , n, and ν satisfying
1. ∇f(ŷ) + Σ_{i=1}^{n} μ_i ∇g_i(ŷ) + ν ∇h(ŷ) = 0, (A10)
2. g_i(ŷ) ≤ 0, i = 1, . . . , n, (A11)
3. h(ŷ) = 0, (A12)
4. μ_i ≥ 0, i = 1, . . . , n, (A13)
5. μ_i g_i(ŷ) = 0, i = 1, . . . , n. (A14)
In order to use the KKT method for our maximization of (A3) with the constraints of (A4) and (A5), denote
f(y) = −A(y) B(y), (A15)
g_i(y) = −y_i, i = 1, . . . , n, (A16)
h(y) = Σ_{i=1}^{n} y_i − 1, (A17)
λ₁ > λ₂ > · · · > λ_n > 0, (A18)
where we define
A(y) = Σ_{i=1}^{n} y_i/λ_i, B(y) = Σ_{i=1}^{n} y_i λ_i. (A19)
Now, we show the conditions (1)-(5) of (A10)-(A14) are satisfied for the expressions of (A15)-(A18). From Condition 1, by taking its partial derivative with respect to y_i, i = 1, . . . , n, we have
−(1/λ_i)B − λ_i A − μ_i + ν = 0. (A20)
Multiplying (A20) by y_i yields
−y_i(1/λ_i)B − λ_i y_i A − μ_i y_i + ν y_i = 0. (A21)
From Condition 5 and g_i(y) = −y_i, we have μ_i y_i = 0. Thus, (A21) becomes
−y_i(1/λ_i)B − λ_i y_i A + ν y_i = 0. (A22)
Summing (A22) over all i = 1, . . . , n yields
−AB − BA + ν Σ_{i=1}^{n} y_i = 0,
and from (A4), we have
ν = 2AB. (A23)
Substituting (A23) into (A22) yields
−y_i(1/λ_i)B − λ_i y_i A + 2y_i AB = 0, or y_i [(1/λ_i)B + λ_i A − 2AB] = 0. (A24)
Thus, (A24) shows either y_i = 0 or
(1/λ_i)B + λ_i A − 2AB = 0. (A25)
Multiplying (A25) by λ_i, we obtain
λ_i² A − 2λ_i AB + B = 0. (A26)
The quadratic equation of (A26) in λ_i has two non-zero solutions given by
λ_i = (2AB ± √(4A²B² − 4AB))/(2A). (A27)
The remaining (n − 2) of the y_i = 0. Denote the indices of the two non-zero y_i by a and b. Then the maximization of A(y)B(y) in (A3) reduces to the maximization of
(y_a/λ_a + y_b/λ_b)(y_a λ_a + y_b λ_b) = y_a² + y_b² + y_a y_b (λ_a/λ_b + λ_b/λ_a) = (y_a + y_b)² + y_a y_b (λ_a/λ_b + λ_b/λ_a − 2). (A28)
By denoting y_b = 1 − y_a in (A28), we obtain
(y_a/λ_a + y_b/λ_b)(y_a λ_a + y_b λ_b) = 1 + (λ_a/λ_b + λ_b/λ_a − 2)(1 − y_a) y_a ≡ H(y_a, λ_a, λ_b). (A29)
Thus, the maximization in (A3) reduces to the maximization of H(y_a, λ_a, λ_b) in (A29). We note H(y_a, λ_a, λ_b) is a quadratic function of y_a. Taking the partial derivative of H(y_a, λ_a, λ_b) with respect to y_a yields
∂H(y_a, λ_a, λ_b)/∂y_a = (λ_a/λ_b + λ_b/λ_a − 2)(1 − 2y_a) = 0,
or
y_a = y_b = 1/2, (A30)
since λ_a/λ_b + λ_b/λ_a − 2 > 0 for λ_a ≠ λ_b. The second partial derivative of H(y_a, λ_a, λ_b) shows
∂²H(y_a, λ_a, λ_b)/∂y_a² = −2(λ_a/λ_b + λ_b/λ_a − 2) = −2(λ_a − λ_b)²/(λ_a λ_b) < 0,
since λ_a > 0 and λ_b > 0. Thus, the local maximum solution of y_a = y_b = 1/2 in (A30) is a global maximum solution of H(y_a, λ_a, λ_b). By using y_a = y_b = 1/2 and denoting ρ = λ_a/λ_b > 1, with the assumption of λ_a > λ_b > 0, then H(y_a, λ_a, λ_b) of (A29) can be expressed as
H(ρ) = 1 + (1/4)(ρ + 1/ρ − 2), dH(ρ)/dρ = (1/4)(ρ − 1)(ρ + 1)/ρ² > 0. (A31)
This shows H(ρ) is a positive, monotonically increasing function of ρ. Since ρ = λ_a/λ_b and λ_a > λ_b > 0, the maximum of H(ρ) is attained by using ρ_max = λ₁/λ_n. This means
ŷ₁ = ŷ_n = 1/2, ŷ_i = 0, i = 2, . . . , n − 1. (A32)
Thus, the solution given by (A32) is the only solution that satisfies the KKT Condition 1 of (A10) that provides the local minimum of f(ŷ) = −(Σ_{i=1}^{n} y_i/λ_i)(Σ_{i=1}^{n} y_i λ_i), or the local maximum of (Σ_{i=1}^{n} y_i/λ_i)(Σ_{i=1}^{n} y_i λ_i). But H(y_a, λ_a, λ_b) of (A29) is a quadratic function, thus the solution given by (A32) yields the global maximum of (A3), or the upper bound of (A2) and (A1). (KY wants to thank discussions with Prof. L. Vandenberghe and various students in a class project of EE230A for various inputs in the solution of this problem.)
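A quick Monte Carlo sanity check of the theorem: for a diagonal Λ with distinct positive eigenvalues, random unit-norm vectors s never exceed the bound attained by splitting the unit norm equally between the extreme eigenvectors. A minimal sketch:

n = 5; lam = sort(rand(n,1)+0.5, 'descend'); L = diag(lam);
bound = (0.5/lam(1) + 0.5/lam(n))*(0.5*lam(1) + 0.5*lam(n));  % from (A6)
worst = 0;
for trial = 1:1e4
    s = randn(n,1); s = s/norm(s);               % random unit-norm s
    worst = max(worst, (s'*(L\s))*(s'*L*s));
end
fprintf('max over trials = %.4f <= bound = %.4f\n', worst, bound)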
Chapter 5 M-ary Detection Odd Numbered Homework Solutions
1. The M-ary MAP decision rule states that we decide for hypothesis H_j if p(H_j|x) > p(H_i|x), −∞ < x < ∞, for all i ≠ j, 1 ≤ i, j ≤ M. From Bayes' rule, p(H_i|x) = p(x|H_i)π_i/p(x), 1 ≤ i ≤ M, and p(x) = Σ_{k=1}^{M} p(x|H_k)π_k > 0. Thus, we decide for hypothesis H_j if p(x|H_j)π_j > p(x|H_i)π_i, for all i ≠ j, 1 ≤ i, j ≤ M.
3. List of some M-ary orthogonal transmission systems using a carrier frequency includes: a. M-ary PSK (including conventional QPSK, offset QPSK, π/4-QPSK, etc.); b. M-FSK.
List of some M-ary orthogonal baseband signals modulating a carrier frequency includes: a. Time-multiplexed PPM signals; b. Time-multiplexed PWM signals; c. Time-multiplexed PAM signals; d. Time-multiplexed PCM signals.
List of some M-ary non-orthogonal transmission system using a carrier frequency includes: a. M-QAM; b. M-CPM (including MSK, etc.).
List of some M-ary non-orthogonal baseband signals modulating a carrier frequency includes: a. Non-time-multiplexed PPM signals; b. Non-time-multiplexed PWM signals; c. Non-time-multiplexed PAM signals; d. Non-time-multiplexed PCM signals.
5. In an M-ary PAM system, the signal set is given by {±A, ±3A, . . . , ±(M − 1)A}. We assume M is an even integer, and denote s_m = (2m − 1)A, m = 1, . . . , M/2, and s_{−m} = −(2m − 1)A, m = 1, . . . , M/2. The transmitted signal is disturbed by an AWGN noise of zero mean and variance σ² = N₀/2. Then under the minimum-distance decision rule, the decision regions R_m for the received value x are given by
R_m = {x : 2(m − 1)A ≤ x < 2mA}, for m = 1, 2, . . . , (M/2) − 1, (1a)
R_m = {x : (M − 2)A ≤ x < ∞}, for m = M/2, (1b)
R_m = {x : 2mA < x ≤ 2(m + 1)A}, for m = −1, −2, . . . , −(M/2) + 1, (1c)
R_m = {x : −∞ < x ≤ −(M − 2)A}, for m = −M/2. (1d)
For the two extreme symbols in (1b) and (1d), their two error probabilities are given by 2Q(A/σ), while the other (M − 2) error probabilities due to (1a) and (1c) are given by (M − 2)2Q(A/σ). Thus, the average symbol error probability is given by
P_es = (1/M)[2Q(A/σ) + (M − 2)2Q(A/σ)] = (2(M − 1)/M) Q(A/σ). (5.2)
The average energy is E_av = (1/M) Σ_m s_m² = (M² − 1)A²/3, where the sum runs over the M symbol indices m = ±1, . . . , ±M/2; thus A/σ = √(3E_av/(σ²(M² − 1))). The energy per bit is given by E_b = E_av/log₂M. Thus, the average symbol error probability can be expressed as
P_es = (2(M − 1)/M) Q(√(6(E_b/N₀) log₂M/(M² − 1))). (5.3)
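Equation (5.3) is straightforward to evaluate; a minimal sketch plotting the average symbol error probability against Eb/N0 for several values of M:

Q = @(x) 0.5*erfc(x/sqrt(2));
EbN0dB = 0:0.5:20; EbN0 = 10.^(EbN0dB/10);
for M = [2 4 8 16]
    Pes = (2*(M-1)/M)*Q(sqrt(6*EbN0*log2(M)/(M^2-1)));  % eq. (5.3)
    semilogy(EbN0dB, Pes); hold on
end
xlabel('E_b/N_0 (dB)'); ylabel('P_{es}')
legend('M = 2','M = 4','M = 8','M = 16')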
7. Let P_b(ψ) be the probability of bit error given a phase error of ψ, where ψ is the realization of a random variable Ψ with pdf equal to
p_Ψ(ψ) = (1/√(2πσ²_Ψ)) exp(−ψ²/(2σ²_Ψ)),
defined over ψ ∈ (−π/2, π/2] (note that we assume that Q(π/(2σ_Ψ)) ≈ 0). Then,
P_b(ψ) = Q(√(2E/N₀) cos(ψ)).
The average BER is equal to
P_b = ∫_{−π/2}^{π/2} P_b(ψ) p_Ψ(ψ) dψ.
If we expand P_b(ψ) into a power series up to the second-order term, we have
P_b(ψ) ≈ P_b(0) + (dP_b(ψ)/dψ)|_{ψ=0} ψ + (1/2)(d²P_b(ψ)/dψ²)|_{ψ=0} ψ²,
where
P_b(0) = Q(√(2E/N₀)),
(dP_b(ψ)/dψ)|_{ψ=0} = √(E/(πN₀)) exp(−(E/N₀) cos²(ψ)) sin(ψ)|_{ψ=0} = 0,
(d²P_b(ψ)/dψ²)|_{ψ=0} = √(E/(πN₀)) exp(−(E/N₀) cos²(ψ)) [2(E/N₀) cos(ψ) sin²(ψ) + cos(ψ)]|_{ψ=0} = √(E/(πN₀)) exp(−E/N₀).
With this approximation, we obtain for P_b
P_b ≈ Q(√(2E/N₀)) + (1/2) √(E/(πN₀)) exp(−E/N₀) σ²_Ψ.
9. Consider an uncorrelated random sequence {B_n, −∞ < n < ∞}, with E{B_n} = μ, E{B_n B_{n+m}} = μ² for m ≠ 0, and its variance denoted by σ²_{B_n}.

a. In s(t) = Σ_{n=−N}^{N} B_n g(t − nT), −∞ < t < ∞, first assume the B_n's are deterministic and then take the Fourier transform to yield
C(f) = G(f) Σ_{n=−N}^{N} B_n e^{−in2πfT}, −∞ < f < ∞. (9.1)
After taking the time and statistical averages of |C(f)|², we obtain
S_S(f) = |G(f)|² lim_{N→∞} (1/((2N+1)T)) E{|Σ_{n=−N}^{N} B_n e^{−in2πfT}|²}
= (|G(f)|²/T) lim_{N→∞} (1/(2N+1)) Σ_{n=−N}^{N} Σ_{m=−N}^{N} R(m − n) e^{−i(m−n)2πfT}
= (|G(f)|²/T) Σ_{m=−∞}^{∞} R(m) e^{−im2πfT}. (9.2)

b. Since
R(m) = E{B_{n+m} B_n} = μ² for m ≠ 0, and σ²_{B_n} + μ² for m = 0, (9.3)
upon substituting (9.3) into (9.2), we obtain
S_S(f) = (|G(f)|²/T) [σ²_{B_n} + μ² Σ_{m=−∞}^{∞} e^{−im2πfT}]. (9.4)
By using the Poisson formula
Σ_{m=−∞}^{∞} e^{−im2πfT} = (1/T) Σ_{m=−∞}^{∞} δ(f − m/T) (9.5)
in (9.4), we obtain
S_S(f) = (|G(f)|² σ²_{B_n}/T) + (μ²/T²) Σ_{m=−∞}^{∞} |G(m/T)|² δ(f − m/T). (9.6)
11. % input data
X=[[-.225 0.93]',[-1 2]',[-2.5 .5]',[-3 -1]'];
Y=[[-0.75 0.75]',[1 .8]',[2 -1]',[2.5 1.5]',[3 1]'];
N=4; M=5;
% Solution via CVX
cvx_begin
variables a(2,1) b u(4,1) v(5,1)
X'*a - b >= 1 - u;
Y'*a - b <= -(1-v);
% a'*X(:,i) - b >= 1 - u(i), i = 1,2,3,4;
% a'*Y(:,j) - b <= -(1 - v(j)), j = 1,2,3,4,5;
u >= 0;
v >= 0;
minimize([1 1 1 1]*u + [1 1 1 1 1]*v)
cvx_end
b
a
% Displaying results
linewidth = 0.5; % for the squares and circles
t_min = min([X(1,:),Y(1,:)]);
t_max = max([X(1,:),Y(1,:)]);
tt = linspace(t_min-1,t_max+1,100);
p = -a(1)*tt/a(2) + b/a(2);
p1 = -a(1)*tt/a(2) + (b+1)/a(2);
p2 = -a(1)*tt/a(2) + (b-1)/a(2);
graph = plot(X(1,:),X(2,:),'o', Y(1,:),Y(2,:),'o', -4:.01:4,zeros(1,801),'k', zeros(1,601),-3:.01:3,'k');
set(graph(1),'LineWidth',linewidth);
set(graph(2),'LineWidth',linewidth);
set(graph(1),'MarkerFaceColor',[0 0.5 0]);
hold on;
plot(tt,p,'-r', tt,p1,'--r', tt,p2,'--r');
axis([-4 4 -3 3])
xlabel('x(1)')
ylabel('x(2)')
Homework #6 Answers
1. Since X and Y are two independent Gaussian r.v.'s, then
p_{X,Y}(x, y) = (e^{−x²/(2σ²)}/√(2πσ²))(e^{−y²/(2σ²)}/√(2πσ²)) = e^{−(x²+y²)/(2σ²)}/(2πσ²).
Let x = r cos(θ) and y = r sin(θ). Upon converting from the rectangular coordinates to the polar coordinates, r² = x² + y² and |r dr dθ| = |dx dy|. Then
p_{R,Θ}(r, θ)|dr dθ| = p_{X,Y}(x, y)|dx dy| = (e^{−(x²+y²)/(2σ²)}/(2πσ²)) r |dr dθ| = (r e^{−r²/(2σ²)}/(2πσ²)) |dr dθ|,
p_R(r) dr = (∫₀^{2π} p_{R,Θ}(r, θ) dθ) dr = (r e^{−r²/(2σ²)}/σ²) dr, 0 ≤ r < ∞.
Thus, R = √(X² + Y²) is a Rayleigh r.v. with a pdf given by
p_R(r) = (r/σ²) e^{−r²/(2σ²)}, 0 ≤ r < ∞.

3. See Fig. 1.
[Fig. 1. Plots of Rician pdf's for five values of μ and σ.]

5. a. Using the Matlab function mean, we obtained the estimated mean μ̃_R = mean(r) = 1.2619.
b. Using the Matlab function var, we obtained the estimated variance σ̃²_R = var(r) = 0.43366.
c. Using σ̃₁ = μ̃_R √(2/π), we obtained σ̃₁ = 1.0069.
d. Using σ̃₂ = √(2σ̃²_R/(4 − π)), we obtained σ̃₂ = 1.0052.
e. Thus, both σ̃₁ and σ̃₂ are very close to the original σ = 1 of the Gaussian r.v.
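The estimates in parts (a)-(d) can be reproduced along the following lines; the seed and the sample size here are assumptions, since the problem's prescribed values are not reproduced above.

randn('state',55);                               % assumed seed
r = sqrt(randn(1,5000).^2 + randn(1,5000).^2);   % Rayleigh samples, sigma = 1
sig1 = mean(r)*sqrt(2/pi);                       % estimate from the mean
sig2 = sqrt(2*var(r)/(4-pi));                    % estimate from the variance
fprintf('sig1 = %.4f, sig2 = %.4f (true sigma = 1)\n', sig1, sig2)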
[Fig. 2. Plot of the Rayleigh cdf F_R(r) vs. r, for σ = 1, 0 ≤ r ≤ 4.]
f. Fig. 3 shows that the empirical cdf and the Rayleigh cdf match each other well.
[Fig. 3. Plots of the empirical cdf and the Rayleigh cdf F_R(r) vs. r.]

7. In Fig. 8, the Rician pdf is represented by the solid curve. The Gaussian pdf with mean of μ = 2 and σ = 0.5 is represented by the dashed curve. We see this Gaussian approximation pdf appears to have an offset to the left of the Rician pdf. However, the Gaussian approximation pdf with a modified mean of √(μ² + σ²) and σ = 0.5, represented by the dotted curve, provides a much better approximation to the desired Rician pdf.
[Fig. 8. Plots of the Rician pdf, the Gaussian pdf, and the Gaussian pdf with a modified mean, for 0 ≤ x ≤ 4.]
9. Fig. 9 compares the Rician pdf with Nakagami-m pdf's of matched parameters.
[Fig. 9. Plots of Rician pdf and Nakagami-m pdf. (a) Rician (K = 10, s = 2), Nakagami (m = 5.76). (b) Rician (K = 2, s = 1), Nakagami (m = 1.80).]
Homework #7 Answers
1. a.
ε² = E{(s(t + τ) − as(t))²} = R(0) + a²R(0) − 2aR(τ),
∂ε²/∂a = 2âR(0) − 2R(τ) = 0 ⟹ â = R(τ)/R(0),
ε²_Min = E{(s(t + τ) − âs(t))s(t + τ)} = R(0) − âR(τ) = R(0) − R²(τ)/R(0).

b. For R(t) = δ(t), −∞ < t < ∞, denote
R_C(τ) = 1/c for −c/2 < τ < c/2, and 0 elsewhere,
then define δ(τ) = lim_{c→0} R_C(τ). Thus,
â = lim_{c→0} R(τ)/(1/c) = 0 for τ ≠ 0, and â = lim_{c→0} (1/c)/(1/c) = 1 for τ = 0;
ε²_Min = 0 for τ = 0, and ε²_Min = ∞ for τ ≠ 0.

c. Consider
E{(s(t + τ) − âs(t))s(u)} = R(t + τ − u) − âR(t − u) = e^{−α|t+τ−u|} − e^{−α|τ|} e^{−α|t−u|} = e^{−α(t+τ−u)} − e^{−ατ} e^{−α(t−u)} = 0, τ > 0, t ≥ u.
This means
ε₁² = E{(s(t + τ) − âs(t) − Σ_{i=1}^{N} b_i s(u_i))²} = E{(s(t + τ) − âs(t))²} + E{(Σ_{i=1}^{N} b_i s(u_i))²} ≥ E{(s(t + τ) − âs(t))²} = ε²_Min.
Thus, the minimum m.s. error is attained with all b_i = 0. That is, the optimal linear estimate is âs(t), where â = R(τ)/R(0), for τ > 0 and t > u_i for all i.

3. a. From the Orthogonality Principle, we obtain
E{[S(n) − Σ_{k=−∞}^{∞} h(k)X(n − k)] X(n − j)} = 0, −∞ < j < ∞,
or R_SX(j) = Σ_{k=−∞}^{∞} h(k) R_X(j − k), −∞ < j < ∞. Without additional information, it is not possible to solve for the h(k) directly.

b. By taking the Fourier transform of the last expression of part (a), we obtain S_SX(f) = H(f) S_X(f), −1/2 ≤ f < 1/2. Thus, H(f) = S_SX(f)/S_X(f), −1/2 ≤ f < 1/2.

c.
R_SX(n) = E{S(m)X(n + m)} = E{S(m)(S(n + m) + V(n + m))} = R_S(n) = a^{|n|} σ²_S, −∞ < n < ∞,
R_X(n) = E{X(m)X(n + m)} = E{(S(m) + V(m))(S(n + m) + V(n + m))} = R_S(n) + δ(n), −∞ < n < ∞,
S_SX(f) = S_S(f) = σ²_S Σ_{k=−∞}^{∞} a^{|k|} exp(−i2πkf) = σ²_S(1 − |a|²)/(1 − 2a cos(2πf) + |a|²), −1/2 ≤ f < 1/2,
S_X(f) = S_S(f) + 1 = (1 + |a|² − 2a cos(2πf) + σ²_S(1 − |a|²))/(1 − 2a cos(2πf) + |a|²), −1/2 ≤ f < 1/2,
H(f) = S_SX(f)/S_X(f) = σ²_S(1 − |a|²)/(1 + |a|² − 2a cos(2πf) + σ²_S(1 − |a|²)), −1/2 ≤ f < 1/2.

d. From the Orthogonality Principle, we have
ε²_Min = E{(S(n) − Ŝ(n))²} = E{(S(n) − Ŝ(n))S(n)} = E{[S(n) − Σ_{k=−∞}^{∞} h(k)X(n − k)] S(n)} = R_S(0) − Σ_{k=−∞}^{∞} h(k) R_S(−k).
But we also have
R_S(0) = ∫_{−1/2}^{1/2} S_S(f) df, R_S(−k) = ∫_{−1/2}^{1/2} S_S(f) exp(−i2πfk) df, H(f) = Σ_{k=−∞}^{∞} h(k) exp(−i2πfk).
Combining the above two sets of equations, we obtain
ε²_Min = ∫_{−1/2}^{1/2} [S_S(f) − H(f)S_S(f)] df = ∫_{−1/2}^{1/2} [S_S(f) − (S_S(f))²/(S_S(f) + S_V(f))] df = ∫_{−1/2}^{1/2} S_S(f)S_V(f)/(S_S(f) + S_V(f)) df.
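The noncausal Wiener filter of parts (b)-(c) can be visualized directly; a minimal sketch with assumed values a = 1/2 and σ²_S = 1:

a = 0.5; sS2 = 1;                               % assumed model parameters
f = linspace(-0.5, 0.5, 1001);
SS = sS2*(1-a^2)./(1 - 2*a*cos(2*pi*f) + a^2);  % S_S(f)
H = SS./(SS + 1);                               % S_V(f) = 1 (unit-variance white noise)
plot(f, H); xlabel('f'); ylabel('H(f)')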
5. From the Orthogonality Principle, we obtain
E{[S(t + a) − ∫₀^∞ h(τ)X(t − τ) dτ] X(t − s)} = 0, −∞ < t < ∞, 0 ≤ s < ∞,
or
R_S(s + a) = ∫₀^∞ h(τ) R_X(s − τ) dτ, 0 ≤ s < ∞,
where R_X(τ) = R_S(τ) + R_V(τ), −∞ < τ < ∞.

7. By comparing the given autocorrelation function R_S(k) = 4 × 2^{−|k|}, −∞ < k < ∞, to the autocorrelation function R_S(k) of (2.3.2-6), we obtain
(σ²_W/(1 − a²)) a^{|k|} = 4 (1/2)^{|k|}, −∞ < k < ∞,
or
ln{σ²_W/(1 − a²)} + |k| ln{a} = ln{4} − |k| ln{2},
which shows that a = 1/2 and (4/3)σ²_W = 4, or σ²_W = 3. Thus, this random sequence S(k) is a first-order autoregressive sequence modeled by (2.3.2-1). Then the recursive Kalman filtering algorithm of Sec. 2.3.2 is valid, and the steady-state version of the Kalman gain p of (2.3.2-15) reduces to ap² + bp + c = 0, where a = 1, b = 3 + SNR, c = −4 SNR, and SNR = 3/σ²_V. A plot of p versus SNR is given here (the figure appears after Problem 9).
9. The density of X is given by p_X(u; A) = p_W(u − A). Since (d/dA) ln p_X(u; A) = ((d/dA)p_W(u − A))/p_W(u − A), we have that
(E{[(d/dA) ln p_X(u; A)]²})⁻¹ = (∫_{−∞}^{∞} [(d/dA)p_W(u − A)]²/p_W(u − A) du)⁻¹ = (∫_{−∞}^{∞} [(d/du)p_W(u)]²/p_W(u) du)⁻¹.
For the Laplacian pdf we have
dp_W(u)/du = −(1/σ²)e^{−√2 u/σ} for u ≥ 0, and (1/σ²)e^{√2 u/σ} for u < 0,
so that ((d/du)p_W(u))²/p_W(u) = (√2/σ³) e^{−√2|u|/σ} = (2/σ²) p_W(u).
The CRLB is therefore equal to σ²/2. For the Gaussian pdf, the CRLB is equal to σ².
[Figure (Problem 7): steady-state value of p vs. SNR, for 0 ≤ SNR ≤ 14.]
11.
p(x; a) = (1/√((2π)^N det(C))) e^{−(1/2)(x − a1)^T C⁻¹ (x − a1)},
where 1 = [1, . . . , 1]^T. Let C⁻¹ = (C̄_ij) and, by symmetry, C̄_ij = C̄_ji.
d ln p(x; a)/da = −(1/2)(d/da) Σ_{i=1}^{N} Σ_{j=1}^{N} C̄_ij (x_i − a)(x_j − a) = (1/2) Σ_i Σ_j C̄_ij [(x_i − a) + (x_j − a)] = Σ_i Σ_j C̄_ij (x_i − a),
−E{d² ln p(X; a)/da²} = Σ_{i=1}^{N} Σ_{j=1}^{N} C̄_ij,
CRLB = 1/(Σ_{i=1}^{N} Σ_{j=1}^{N} C̄_ij).
The estimator â = (Σ_i Σ_j C̄_ij x_i)/(Σ_i Σ_j C̄_ij) is unbiased (E{X_i} = a, so E{Â} = a). Therefore, it is efficient, with variance equal to 1/(Σ_i Σ_j C̄_ij).

13.
p(x; n₀) = Π_{n=0}^{n₀−1} (1/√(2πσ²)) e^{−x²(n)/(2σ²)} · Π_{n=n₀}^{n₀+M−1} (1/√(2πσ²)) e^{−(x(n) − s(n−n₀))²/(2σ²)} · Π_{n=n₀+M}^{N−1} (1/√(2πσ²)) e^{−x²(n)/(2σ²)}
= (1/(2πσ²)^{N/2}) e^{−(1/(2σ²)) Σ_{n=0}^{N−1} x²(n)} e^{−(1/(2σ²)) Σ_{n=n₀}^{n₀+M−1} [−2x(n)s(n − n₀) + s²(n − n₀)]}.
Therefore, the maximum likelihood estimator is found by minimizing
Σ_{n=n₀}^{n₀+M−1} [s²(n − n₀) − 2x(n)s(n − n₀)].
Since Σ_{n=n₀}^{n₀+M−1} s²(n − n₀) = Σ_{n=0}^{M−1} s²(n) does not depend on n₀, the maximum likelihood estimator for n₀ is found by maximizing Σ_{n=n₀}^{n₀+M−1} x(n)s(n − n₀).
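The ML delay estimator of Problem 13 is a running correlation of the data with the known pulse; a minimal simulation sketch, with an assumed pulse shape, delay, and noise level:

N = 200; M = 20; n0_true = 73; sigma = 0.5;  % assumed values
s = sin(2*pi*(0:M-1)/M);                     % assumed known pulse s(n)
x = sigma*randn(1,N);
x(n0_true+1:n0_true+M) = x(n0_true+1:n0_true+M) + s;   % embed pulse at n0
metric = zeros(1, N-M+1);
for n0 = 0:N-M
    metric(n0+1) = x(n0+1:n0+M)*s';          % sum of x(n)s(n - n0)
end
[mx, idx] = max(metric); n0_hat = idx - 1;   % ML estimate of n0
fprintf('n0_hat = %d (true n0 = %d)\n', n0_hat, n0_true)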
Chapter 8 Odd Numbered Homework Solutions

1. %%
rand('state',49);
u = rand(1,501);
un = u(1:500);
un1 = u(2:501);
plot(un,un1,'k*')
xlabel('u_n')
ylabel('u_{n+1}')
%% Yields
[Fig. 1 here.]
Fig. 1. Plot of U_{n+1} vs. U_n, n = 1, ..., 500, for the rand generator. The 500 points in this plot seem to be fairly randomly distributed, with no obvious similar patterns among adjacent values.
%%
3. s0 = 49; xx = []; m = 2^31 - 1;
for n = 1:502,
    s = mod((7^5)*s0, m);
    x = s/m;
    xx = [xx, x];
    s0 = s;
end
subplot(1,2,1)
plot(xx(1:500),xx(2:501),'k*')
xlabel('x_n')
ylabel('x_{n+1}')
axis([0 1 0 1])
%%
subplot(1,2,2)
plot(xx(1:500),xx(3:502),'k*')
xlabel('x_n')
ylabel('x_{n+2}')
axis([0 1 0 1])
yields
[Figs. 3a and 3b here.]
Fig. 3a. Plot of x_{n+1} vs. x_n, n = 1, ..., 500. Fig. 3b. Plot of x_{n+2} vs. x_n, n = 1, ..., 500.
From these two plots, there does not appear to be any regular pattern. Thus, from these simple visual inspections, there does not appear to be much correlation among the elements of this sequence.
5. For these two x1 and x2 sequences, the Matlab two-sample function kstest2(x1, x2) yields 0. This means we cannot reject the null hypothesis that these two sequences have the same distribution in a 5% significance-level statistical test. Loosely speaking, while the KS (and some other) statistical tests cannot confirm that two sequences have the same distribution, they can state with fairly high confidence that one cannot conclude the two sequences came from two different distributions.
See Table 8.1 of Ex. 8.2 of Chapter 8 on p. 275.
9.
See Table 8.4 of Ex. 8.4 of Chapter 8 on p. 278.
11. randn('state',19); r = raylrnd(1,[1,5000]); randn('state',47); x = r.*v; mean(x) = 1.0122; xx=0:.01:6; hist(x,xx), axis[[0 6 0 60]) Histogram of x 60
50
40
30
20
10
0
0
1
2
3 x
4
5
6
The histogram of X appears to have an exponential pdf with a mean of 1.0122. Compare the empirical cdf of X with an exponential cdf with a men of 1.0122. F = cdfplot(x), axis([0 6 0 1]); ecdf=expcdf(xx,1.0122); hold on G = plot(xx,expcdf(xx,1.0122),'r--'); title('Empirical Exponential cdf and Exponential cdf') legend([F,G], 'Empirical cdf', 'Exponential cdf','Location','NW')