Homework # 6
11.4. By assumption
$$f_X(x) = \frac{1}{\pi}\,\frac{1}{1+(x-\alpha)^2}.$$
We first consider the case $a > 0$. When $y > 0$,
$$F_Y(y) = P\{a/X \le y\} = P\{X \le 0\} + P\{X \ge a/y\} = F_X(0) + 1 - F_X(a/y).$$
When $y < 0$,
$$F_Y(y) = P\{a/X \le y\} = P\{a/y \le X < 0\} = F_X(0) - F_X(a/y).$$
In either case, differentiating $F_Y(y)$ in $y$ gives
$$f_Y(y) = \frac{a}{y^2}\, f_X\!\left(\frac{a}{y}\right)
         = \frac{a}{y^2}\cdot\frac{1}{\pi\left[1+\left(\frac{a}{y}-\alpha\right)^2\right]}
         = \frac{a}{\pi\left[y^2+(a-\alpha y)^2\right]}
         = \frac{1}{\tilde\beta\pi\left[1+(y-\tilde\alpha)^2/\tilde\beta^2\right]},$$
where
$$\tilde\alpha = \frac{a\alpha}{1+\alpha^2} \quad\text{and}\quad \tilde\beta = \frac{a}{1+\alpha^2}.$$
So $Y$ has Cauchy distribution with parameters $\tilde\alpha$ and $\tilde\beta$.

Assume now $a < 0$ and write $Y = (-a)/(-X)$. Notice that $-X$ has Cauchy distribution with parameters $-\alpha$ and $1$. Applying what we obtained in the previous case, $Y$ has Cauchy distribution with parameters
$$\hat\alpha = \frac{(-a)(-\alpha)}{1+\alpha^2} = \frac{a\alpha}{1+\alpha^2} \quad\text{and}\quad \hat\beta = \frac{-a}{1+\alpha^2}.$$
In summary, in either case $Y$ has Cauchy distribution with parameters
$$\bar\alpha = \frac{a\alpha}{1+\alpha^2} \quad\text{and}\quad \bar\beta = \frac{|a|}{1+\alpha^2}.$$
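As an informal numerical check (not part of the assigned solution), one can sample $Y = a/X$ and compare it with the Cauchy$(\bar\alpha, \bar\beta)$ distribution derived above; the values of $\alpha$ and $a$ below are arbitrary illustrative choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, a = 1.5, -2.0                     # arbitrary illustrative parameters
x = alpha + rng.standard_cauchy(200_000)  # X ~ Cauchy(alpha, 1)
y = a / x

# Parameters derived in the solution above
alpha_bar = a * alpha / (1 + alpha**2)
beta_bar = abs(a) / (1 + alpha**2)

# Kolmogorov-Smirnov distance between the sample and Cauchy(alpha_bar, beta_bar)
d, p = stats.kstest(y, stats.cauchy(loc=alpha_bar, scale=beta_bar).cdf)
print(f"KS distance = {d:.4f}, p-value = {p:.3f}")  # small distance / large p expected
```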
11.13. Under the extra assumption that $F(x)$ is strictly increasing, the inverse $F^{-1}(x)$ of $F(x)$ exists. Then for each $0 < y < 1$,
$$F_Y(y) = P\{F(X) \le y\} = P\{X \le F^{-1}(y)\} = F(F^{-1}(y)) = y.$$
So $Y$ is uniformly distributed on $(0, 1)$.

Without the extra assumption we define, for each $0 < y < 1$,
$$F^{-1}(y) = \inf\{x;\ F(x) \ge y\}.$$
The key observation is that
$$\{x;\ F(x) < y\} = \left(-\infty,\ F^{-1}(y)\right) \quad\text{and}\quad F(F^{-1}(y)) = y. \qquad (*)$$
(Note: $F^{-1}(y)$ here is new notation that we have introduced; it does not have to be an inverse function, because the equality $F^{-1}(F(x)) = x$ need not hold in general.)
The first equality follows directly from the definition of $F^{-1}(y)$. As for the second equality, notice that for any $x' > F^{-1}(y)$, $F(x') \ge y$. Letting $x' \to F^{-1}(y)+$ and using the right continuity of the distribution function, we obtain
$$F(F^{-1}(y)) \ge y.$$
On the other hand, $F(x'') < y$ for any $x'' < F^{-1}(y)$. By the continuity assumption on $F$, letting $x'' \to F^{-1}(y)-$ leads to
$$F(F^{-1}(y)) \le y.$$
Finally, for any $0 < y < 1$,
$$F_Y(y-) = P\{F(X) < y\} = P\{X < F^{-1}(y)\} = P\{X \le F^{-1}(y)\} = F(F^{-1}(y)) = y,$$
where the second equality follows from the first equation in $(*)$, the third equality follows from the continuity assumption on $F$ (so that $X$ puts no mass at the single point $F^{-1}(y)$), and the last equality follows from the second equation in $(*)$. By monotonicity of $F_Y$ we then have $F_Y(y) = y$ for all $0 < y < 1$: indeed, $y = F_Y(y-) \le F_Y(y) \le F_Y(y'-) = y'$ for every $y' > y$, and letting $y' \downarrow y$ gives $F_Y(y) = y$. So $Y$ is uniformly distributed on $(0, 1)$.

Remark. Can you prove it without the continuity assumption on $F$? (Of course, as a distribution function, $F$ is always right continuous.)
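A small simulation (outside the scope of the written solution) illustrates the statement: applying a continuous, strictly increasing $F$ to its own samples produces uniform values. The exponential distribution below is just a convenient example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# X exponential with rate 1; its distribution function F is continuous
# and strictly increasing on (0, inf)
x = rng.exponential(scale=1.0, size=100_000)
y = 1.0 - np.exp(-x)          # Y = F(X)

# Y should be (approximately) uniform on (0, 1)
d, p = stats.kstest(y, "uniform")
print(f"KS distance = {d:.4f}, p-value = {p:.3f}")
```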
11.14. Notice that $F_U(u) = u$ for any $0 \le u \le 1$. Then
$$F_X(x) = P\{F^{-1}(U) \le x\} = P\{U \le F(x)\} = F_U(F(x)) = F(x),$$
where the second equality uses that $F^{-1}(u) \le x$ if and only if $u \le F(x)$.
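This fact underlies inverse transform sampling. As an informal check (not part of the solution), the sketch below uses the generalized inverse $F^{-1}(u) = \inf\{x;\ F(x) \ge u\}$ for a discrete distribution, where an ordinary inverse does not exist, and verifies that $F^{-1}(U)$ reproduces the target probabilities; the particular distribution is an arbitrary example.

```python
import numpy as np

rng = np.random.default_rng(2)
values = np.array([0, 1, 2, 5])            # support of an arbitrary discrete target
probs = np.array([0.1, 0.4, 0.3, 0.2])     # target probabilities
cdf = np.cumsum(probs)

u = rng.uniform(size=200_000)
# Generalized inverse: F^{-1}(u) = inf{x : F(x) >= u}
idx = np.searchsorted(cdf, u, side="left")
x = values[idx]

empirical = np.array([(x == v).mean() for v in values])
print(np.round(empirical, 3), "vs target", probs)
```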
12.1.
$$\int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} e^{-\frac{x^2+y^2}{2\sigma^2}}\,dx\,dy
  = \int_0^{2\pi} d\theta \int_0^{\infty} e^{-\frac{r^2}{2\sigma^2}}\, r\,dr
  = 2\pi \int_0^{\infty} e^{-\frac{r^2}{2\sigma^2}}\, r\,dr
  = 2\pi\sigma^2.$$
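A quick numerical check of this identity (not required by the problem), using SciPy's double quadrature and an arbitrary choice of $\sigma$:

```python
import numpy as np
from scipy import integrate

sigma = 1.3   # arbitrary illustrative value
val, err = integrate.dblquad(
    lambda y, x: np.exp(-(x**2 + y**2) / (2 * sigma**2)),
    -np.inf, np.inf,                        # x limits
    lambda x: -np.inf, lambda x: np.inf,    # y limits
)
print(val, 2 * np.pi * sigma**2)            # the two numbers should agree
```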
12.3.
$$f_X(x) = \int_{-\infty}^{\infty} f(x, y)\,dy.$$
Write
$$\frac{(x-\mu_1)^2}{\sigma_1^2} - \frac{2r(x-\mu_1)(y-\mu_2)}{\sigma_1\sigma_2} + \frac{(y-\mu_2)^2}{\sigma_2^2}
  = \left[\frac{y-\mu_2}{\sigma_2} - \frac{r(x-\mu_1)}{\sigma_1}\right]^2 + (1-r^2)\,\frac{(x-\mu_1)^2}{\sigma_1^2}.$$
Then
$$\begin{aligned}
f_X(x) &= \frac{1}{2\pi\sigma_1\sigma_2\sqrt{1-r^2}}\,
          \exp\left\{-\frac{(x-\mu_1)^2}{2\sigma_1^2}\right\}
          \int_{-\infty}^{\infty}\exp\left\{-\frac{1}{2(1-r^2)}
          \left[\frac{y-\mu_2}{\sigma_2} - \frac{r(x-\mu_1)}{\sigma_1}\right]^2\right\} dy \\
       &= \frac{1}{2\pi\sigma_1\sqrt{1-r^2}}\,
          \exp\left\{-\frac{(x-\mu_1)^2}{2\sigma_1^2}\right\}
          \int_{-\infty}^{\infty}\exp\left\{-\frac{z^2}{2(1-r^2)}\right\} dz \\
       &= \frac{1}{\sqrt{2\pi}\,\sigma_1}\,
          \exp\left\{-\frac{(x-\mu_1)^2}{2\sigma_1^2}\right\},
\end{aligned}$$
where in the second line we substituted $z = \dfrac{y-\mu_2}{\sigma_2} - \dfrac{r(x-\mu_1)}{\sigma_1}$, so that $dy = \sigma_2\,dz$.
Therefore,
$$\begin{aligned}
f_{Y\mid X=x}(y) &= \frac{f(x, y)}{f_X(x)}
  = \frac{1}{\sqrt{2\pi(1-r^2)}\,\sigma_2}\,
    \exp\left\{-\frac{1}{2(1-r^2)}
    \left[\frac{y-\mu_2}{\sigma_2} - \frac{r(x-\mu_1)}{\sigma_1}\right]^2\right\} \\
  &= \frac{1}{\sqrt{2\pi(1-r^2)}\,\sigma_2}\,
    \exp\left\{-\frac{1}{2(1-r^2)\sigma_2^2}
    \left[y - \sigma_2\left(\frac{\mu_2}{\sigma_2} + \frac{r(x-\mu_1)}{\sigma_1}\right)\right]^2\right\}.
\end{aligned}$$
This result says that, conditioning on $X = x$, $Y$ has the normal distribution with expectation
$$\mu_x = \sigma_2\left(\frac{\mu_2}{\sigma_2} + \frac{r(x-\mu_1)}{\sigma_1}\right) = \mu_2 + \frac{r\sigma_2}{\sigma_1}(x-\mu_1)$$
and variance $(1-r^2)\sigma_2^2$.
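As an informal check of the conditional distribution just derived (not part of the solution), one can sample from a bivariate normal and compare the mean and variance of $Y$ among points with $X$ near a fixed $x$ to $\mu_x$ and $(1-r^2)\sigma_2^2$; all parameter values below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
mu1, mu2, s1, s2, r = 1.0, -2.0, 2.0, 0.5, 0.7   # arbitrary parameters
cov = [[s1**2, r * s1 * s2], [r * s1 * s2, s2**2]]
xy = rng.multivariate_normal([mu1, mu2], cov, size=1_000_000)

x0 = 2.0                                  # condition on X close to x0
sel = np.abs(xy[:, 0] - x0) < 0.02
y_cond = xy[sel, 1]

mu_x = mu2 + r * s2 * (x0 - mu1) / s1     # conditional mean from the solution
var_x = (1 - r**2) * s2**2                # conditional variance from the solution
print(y_cond.mean(), mu_x)
print(y_cond.var(), var_x)
```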
12.11. First notice that $Z \ge 0$ and $-\pi/2 < W < \pi/2$. Let $z > 0$ and $-\pi/2 < w < \pi/2$. Then
$$F_{Z,W}(z, w) = P\left\{\sqrt{X^2+Y^2} \le z,\ \arctan\frac{Y}{X} \le w\right\}
  = \frac{1}{2\pi\sigma^2}\iint_D \exp\left\{-\frac{x^2+y^2}{2\sigma^2}\right\} dx\,dy,$$
where
$$D = \left\{(x, y);\ \sqrt{x^2+y^2} \le z \ \text{and}\ \arctan\frac{y}{x} \le w\right\}.$$
We now do the polar substitution
$$x = r\cos\theta \quad\text{and}\quad y = r\sin\theta, \qquad\text{or}\qquad r = \sqrt{x^2+y^2} \quad\text{and}\quad \theta = \arctan\frac{y}{x}.$$
The Jacobian determinant is
$$\frac{\partial\{x, y\}}{\partial\{r, \theta\}} = r.$$
Thus
$$F_{Z,W}(z, w) = \frac{2}{2\pi\sigma^2}\int_{-\pi/2}^{w} d\theta \int_0^{z} \exp\left\{-\frac{r^2}{2\sigma^2}\right\} r\,dr
  = \frac{1}{\pi\sigma^2}\left(w + \frac{\pi}{2}\right)\int_0^{z} \exp\left\{-\frac{r^2}{2\sigma^2}\right\} r\,dr$$
(the factor $2$ appears because $D$ consists of two congruent sectors, one in the half-plane $x > 0$ and one in the half-plane $x < 0$, on which $\arctan(y/x)$ takes the same values). So we have
$$f_{Z,W}(z, w) = \frac{\partial^2 F_{Z,W}}{\partial z\,\partial w} = f_Z(z)\, f_W(w),$$
where
$$f_W(w) = \begin{cases} \dfrac{1}{\pi} & -\dfrac{\pi}{2} \le w \le \dfrac{\pi}{2} \\[4pt] 0 & \text{else} \end{cases}
\qquad
f_Z(z) = \begin{cases} \dfrac{z}{\sigma^2}\exp\left\{-\dfrac{z^2}{2\sigma^2}\right\} & z \ge 0 \\[4pt] 0 & z < 0. \end{cases}$$
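The factorization $f_{Z,W} = f_Z f_W$ can be illustrated numerically (an informal check, not part of the solution): with $X, Y$ independent $N(0, \sigma^2)$, $Z$ should follow the Rayleigh density above, $W$ should be uniform on $(-\pi/2, \pi/2)$, and the two should be uncorrelated. The value of $\sigma$ below is an arbitrary choice.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
sigma = 2.0                                    # arbitrary illustrative value
x = rng.normal(0.0, sigma, size=500_000)
y = rng.normal(0.0, sigma, size=500_000)

z = np.sqrt(x**2 + y**2)
w = np.arctan(y / x)

# Z ~ Rayleigh(sigma), W ~ Uniform(-pi/2, pi/2)
print(stats.kstest(z, stats.rayleigh(scale=sigma).cdf))
print(stats.kstest(w, stats.uniform(loc=-np.pi / 2, scale=np.pi).cdf))
print(np.corrcoef(z, w)[0, 1])                 # close to 0, as independence requires
```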