EE132B-HW Set #6
UCLA 2014 Fall
Prof. Izhak Rubin
Problem 1

(a) We calculate the stationary distribution by using the following equations:

π = πP;  Σ_{i∈S} π_i = 1.  (1)

This set of equations yields π = [52/93, 21/93, 20/93].
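Since the transition matrix P for this problem is not reproduced in the solutions, the sketch below solves π = πP together with Σ_i π_i = 1 for a hypothetical 3-state matrix; only the solver itself is generic.

```python
import numpy as np

def stationary_distribution(P):
    """Solve pi = pi P together with sum(pi) = 1 as a least-squares system."""
    n = P.shape[0]
    # Balance equations (P^T - I) pi = 0, stacked with the normalization row.
    A = np.vstack([P.T - np.eye(n), np.ones((1, n))])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Hypothetical birth-death chain (NOT the matrix from the problem set).
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
pi = stationary_distribution(P)
print(pi)  # sums to 1 and satisfies pi = pi P
```

For this hypothetical matrix the solver returns π = [1/4, 1/2, 1/4], which can be checked by substituting back into π = πP.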
(b) Let A_0 = {X_0 = c}, A_1 = {X_1 = b}, A_2 = {X_2 = c}, A_3 = {X_3 = a}, A_4 = {X_4 = c}, A_5 = {X_5 = a}, A_6 = {X_6 = c}, and A_7 = {X_7 = b}. Then we have:

P(∩_{i=1}^{7} A_i | A_0) = ∏_{k=1}^{7} P(A_k | A_0 ∩ ⋯ ∩ A_{k−1}) = ∏_{k=1}^{7} P(A_k | A_{k−1})  (2)
= p(c, b) p(b, c) p(c, a) p(a, c) p(c, a) p(a, c) p(c, b) = 3/2500,

where the last equality in (2) follows from the Markov property.

(c) Due to the time-homogeneous property, we have

P(X_{k+2} = c | X_k = b) = P^{(2)}(b, c) = Σ_{m∈S} p(b, m) p(m, c) = 1/6.  (3)
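The two-step probability in (3) is just the (b, c) entry of P². A sketch with a hypothetical matrix (state order a, b, c assumed; not the matrix from the problem set), checking the matrix-square entry against the explicit sum over the intermediate state:

```python
import numpy as np

# Hypothetical transition matrix over states (a, b, c); NOT the one from the problem set.
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
state = {"a": 0, "b": 1, "c": 2}

P2 = P @ P  # two-step transition matrix
p2_bc = P2[state["b"], state["c"]]
# The same value via the explicit Chapman-Kolmogorov sum over the intermediate state.
p2_sum = sum(P[state["b"], k] * P[k, state["c"]] for k in range(3))
print(p2_bc, p2_sum)  # both equal P^(2)(b, c)
```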
Problem 2

(a) We calculate the stationary distribution by using the following equations:

π = πP;  Σ_{i∈S} π_i = 1.  (4)

This set of equations yields π = [1/4, 1/3, 5/12].
(b) We have

P(X_1 = b, X_3 = a, X_4 = c, X_6 = b | X_0 = a)
= Σ_{n∈S} Σ_{m∈S} P(X_1 = b, X_2 = m, X_3 = a, X_4 = c, X_5 = n, X_6 = b | X_0 = a)
= Σ_{n∈S} Σ_{m∈S} P(X_1 = b | X_0 = a) P(X_2 = m | X_1 = b) P(X_3 = a | X_2 = m) P(X_4 = c | X_3 = a) P(X_5 = n | X_4 = c) P(X_6 = b | X_5 = n)
= 1/180.  (5)
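Summing the chain-rule product over the unobserved states X_2 = m and X_5 = n is equivalent to collapsing each sum into a two-step transition matrix entry. A sketch with a hypothetical strictly positive matrix (not the one from the problem set):

```python
import numpy as np

# Hypothetical transition matrix over states (a, b, c); NOT the one from the problem set.
P = np.array([[0.20, 0.50, 0.30],
              [0.30, 0.40, 0.30],
              [0.50, 0.25, 0.25]])
a, b, c = 0, 1, 2
P2 = P @ P

# Direct double sum over the unobserved states X2 = m and X5 = n.
direct = sum(P[a, b] * P[b, m] * P[m, a] * P[a, c] * P[c, n] * P[n, b]
             for m in range(3) for n in range(3))
# Equivalent factorization using two-step matrix entries.
factored = P[a, b] * P2[b, a] * P[a, c] * P2[c, b]
print(direct, factored)  # the two values agree
```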
(c) We have

P(X_1 = b, X_2 = b, X_3 = a) = Σ_{n∈S} P(X_1 = b, X_2 = b, X_3 = a, X_0 = n)
= Σ_{n∈S} P(X_3 = a | X_2 = b) P(X_2 = b | X_1 = b) P(X_1 = b | X_0 = n) P(X_0 = n)  (6)
= 51/960.
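Equation (6) averages the path probability over the distribution of X_0. A numerical sketch with hypothetical values (neither the matrix nor the initial distribution is the one from the problem set):

```python
import numpy as np

# Hypothetical transition matrix over states (a, b, c) and an assumed
# distribution of X0; NOT the values from the problem set.
P = np.array([[0.20, 0.50, 0.30],
              [0.30, 0.40, 0.30],
              [0.50, 0.25, 0.25]])
pi0 = np.array([0.3, 0.3, 0.4])  # assumed P(X0 = n) for n = a, b, c
a, b, c = 0, 1, 2

# P(X1=b, X2=b, X3=a) = [sum_n P(X0=n) p(n,b)] * p(b,b) * p(b,a)
prob = sum(pi0[n] * P[n, b] for n in range(3)) * P[b, b] * P[b, a]
print(prob)
```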
Problem 3

(a) To prove that N is a Markov chain, we need to show that

P(N_{n+1} = i | N_n, N_{n−1}, . . . , N_0) = P(N_{n+1} = i | N_n),  (7)

for all i in S. Let M_n denote the number of successes in the nth trial, i.e., M_n = 1 if the nth trial is successful, and M_n = 0 otherwise. Then, for n = 0, 1, . . . , we have

N_{n+1} = N_n + M_{n+1}.  (8)

Since M_{n+1} is independent of N_n, N_{n−1}, . . . , N_0, we have

P(N_{n+1} = i | N_n, N_{n−1}, . . . , N_0) = P(N_n + M_{n+1} = i | N_n, N_{n−1}, . . . , N_0)
= P(N_n + M_{n+1} = i | N_n)
= P(N_{n+1} = i | N_n).  (9)

Therefore, N is a Markov chain.

(b) Since N_0 = 0, the initial distribution for N is:

π_0(i) = 1 for i = 0, and π_0(i) = 0 otherwise.  (10)

We obtain the transition probabilities as follows:

p(i, j) = P(N_{n+1} = j | N_n = i) = P(N_n + M_{n+1} = j | N_n = i) = P(M_{n+1} = j − i)
= p for j = i + 1; 1 − p for j = i; 0 otherwise, for all i ≥ 0.  (11)
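The counting process above is easy to simulate and check empirically. A minimal sketch, with an assumed success probability p = 0.3 (the problem's p is generic):

```python
import random

def simulate_counts(p, steps, seed=0):
    """Simulate N_n = number of successes in the first n Bernoulli(p) trials."""
    rng = random.Random(seed)
    path = [0]  # N_0 = 0
    for _ in range(steps):
        # M_{n+1} is 1 with probability p, else 0; N_{n+1} = N_n + M_{n+1}.
        path.append(path[-1] + (1 if rng.random() < p else 0))
    return path

path = simulate_counts(p=0.3, steps=20)
increments = [m - n for n, m in zip(path, path[1:])]  # each M_n is 0 or 1

# Over a long run, the fraction of unit jumps estimates p(i, i+1) = p.
long_path = simulate_counts(p=0.3, steps=100_000)
frac = long_path[-1] / 100_000
print(path)
print(frac)  # close to 0.3
```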
Problem 4

(a) We have

P(X_{n+1} = j | X_n, . . . , X_0) = P(Σ_{k=1}^{n+1} Y_k = j | X_n, . . . , X_0)
= P(Y_{n+1} + Σ_{k=1}^{n} Y_k = j | X_n, . . . , X_0), where Σ_{k=1}^{n} Y_k = X_n,
= P(Y_{n+1} + X_n = j | X_n), since Y_{n+1} is independent of X_n, . . . , X_0,
= P(X_{n+1} = j | X_n).  (12)

Therefore, X is a Markov chain.

(b) We calculate the transition probabilities as follows:

p(i, j) = P(X_{n+1} = j | X_n = i) = P(Y_{n+1} + X_n = j | X_n = i) = P(Y_{n+1} = j − i)
= p_{j−i} for j ≥ i ≥ 0; 0 otherwise.  (13)
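The jump distribution p(i, j) = p_{j−i} can be checked empirically: every jump X_{n+1} − X_n is a fresh draw of Y, regardless of the current state. A sketch with an assumed increment distribution (p_0, p_1, p_2) = (0.5, 0.3, 0.2), not taken from the problem set:

```python
import random
from collections import Counter

# Hypothetical increment distribution p_k = P(Y = k) for k = 0, 1, 2.
p = {0: 0.5, 1: 0.3, 2: 0.2}

def jump_frequencies(trials, seed=0):
    """Empirical distribution of the jump X_{n+1} - X_n = Y_{n+1}."""
    rng = random.Random(seed)
    jumps = Counter()
    for _ in range(trials):
        u = rng.random()
        # Inverse-CDF sampling of Y from p: [0, 0.5) -> 0, [0.5, 0.8) -> 1, else 2.
        if u < 0.5:
            jumps[0] += 1
        elif u < 0.8:
            jumps[1] += 1
        else:
            jumps[2] += 1
    return {k: v / trials for k, v in jumps.items()}

est = jump_frequencies(100_000)
print(est)  # close to {0: 0.5, 1: 0.3, 2: 0.2}
```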