MATH254: Tutorial Exercise for Week 7
1. Let X and Y be independent random variables with common distribution function F and density function f. Let U = max{X, Y} and determine the distribution and density functions of U. Also, find the distribution and density functions of V = min{X, Y}.
2. Continuous random variables X and Y have joint pdf
f(x, y) = x + y, 0 ≤ x ≤ 1, 0 ≤ y ≤ 1,
and zero otherwise.
(a) Find the marginal pdfs of X and Y .
(b) Find the two conditional densities.
(c) By integrating over the appropriate regions, find
(i) Pr(X > Y),
(ii) Pr(X + Y ≤ 1),
(iii) Pr(X ≤ 0.5).
3. Suppose the joint density of X and Y is given by
f(x, y) = 4y(x − y)e^{−(x+y)}, 0 ≤ x < ∞, 0 ≤ y ≤ x,
and zero otherwise. Compute E[X | Y = y].
MATH254: Solutions of Tutorial Exercise for Week 7
1. Note that max{X, Y} ≤ u if and only if X ≤ u and Y ≤ u. Hence, by independence,
P(U ≤ u) = P(X ≤ u, Y ≤ u) = P(X ≤ u)P(Y ≤ u) = F(u)².
Differentiate to obtain the density function f_U(u) = 2F(u)f(u).
In a similar way, min{X, Y} > v if and only if X > v and Y > v. Hence, the distribution function F_V(v) is
P(V ≤ v) = 1 − P(V > v) = 1 − P(X > v)P(Y > v) = 1 − [1 − F(v)]².
Further, f_V(v) = 2f(v)[1 − F(v)].
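As a quick sanity check (an addition to the solution), the simulation below takes F to be the Exponential(1) distribution, purely as an assumed example, and compares the empirical distribution functions of U and V against F(u)² and 1 − [1 − F(v)]²:

```python
# Minimal sketch, assuming F is the Exponential(1) cdf F(t) = 1 - e^{-t}.
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(1.0, 100_000)
y = rng.exponential(1.0, 100_000)
u, v = np.maximum(x, y), np.minimum(x, y)

F = lambda t: 1 - np.exp(-t)
for t in (0.5, 1.0, 2.0):
    print(np.mean(u <= t), F(t)**2,            # empirical vs F(u)^2
          np.mean(v <= t), 1 - (1 - F(t))**2)  # empirical vs 1 - [1 - F(v)]^2
```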
2. (a) Marginal pdfs:
f_X(x) = ∫_0^1 (x + y) dy = [xy + y²/2]_{y=0}^{1} = x + 0.5, 0 ≤ x ≤ 1,
f_Y(y) = ∫_0^1 (x + y) dx = [x²/2 + xy]_{x=0}^{1} = 0.5 + y, 0 ≤ y ≤ 1.
(b) Conditional densities: for y ∈ (0, 1), we have
f_{X|Y}(x|y) = f(x, y)/f_Y(y) = (x + y)/(0.5 + y), 0 ≤ x ≤ 1,
and for x ∈ (0, 1), we have
f_{Y|X}(y|x) = f(x, y)/f_X(x) = (x + y)/(x + 0.5), 0 ≤ y ≤ 1.
(c) Probabilities:
(i) Pr(X > Y) = ∫_0^1 ∫_0^x (x + y) dy dx = ∫_0^1 [xy + y²/2]_{y=0}^{x} dx
= ∫_0^1 (x² + x²/2) dx = ∫_0^1 (3/2)x² dx = [x³/2]_0^1 = 1/2.
(Also follows by symmetry, since f(x, y) = f(y, x), so that Pr(X > Y) = Pr(Y > X) = 0.5.)
(ii) Pr(X + Y ≤ 1) = ∫_0^1 ∫_0^{1−x} (x + y) dy dx = ∫_0^1 [xy + y²/2]_{y=0}^{1−x} dx
= ∫_0^1 (x(1 − x) + (1 − x)²/2) dx
= ∫_0^1 (x − x² + 0.5 − x + 0.5x²) dx
= ∫_0^1 (0.5 − 0.5x²) dx = [x/2 − x³/6]_{x=0}^{1} = 1/3.
(iii) Pr(X ≤ 0.5) = ∫_0^{0.5} ∫_0^1 (x + y) dy dx = ∫_0^{0.5} [xy + y²/2]_{y=0}^{1} dx
= ∫_0^{0.5} (x + 0.5) dx = [x²/2 + x/2]_{x=0}^{0.5} = 3/8.
(Or could use the marginal pdf f_X(x) to find Pr(X ≤ 0.5).)
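As an optional numerical check (not part of the original solution), scipy.integrate.dblquad can integrate f(x, y) = x + y over the three regions directly:

```python
# dblquad integrates func(y, x), with the y-limits allowed to depend on x.
from scipy.integrate import dblquad

f = lambda y, x: x + y

p1, _ = dblquad(f, 0, 1, lambda x: 0, lambda x: x)      # Pr(X > Y)
p2, _ = dblquad(f, 0, 1, lambda x: 0, lambda x: 1 - x)  # Pr(X + Y <= 1)
p3, _ = dblquad(f, 0, 0.5, lambda x: 0, lambda x: 1)    # Pr(X <= 0.5)
print(p1, p2, p3)  # expect 0.5, 0.333..., 0.375
```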
3. For y < 0 we have f_Y(y) = 0, so the conditional density is not defined there; we consider only the case y ≥ 0. The conditional density of X, given that Y = y, for x > y, is given by
f_{X|Y}(x|y) = f(x, y)/f_Y(y) = 4y(x − y)e^{−(x+y)} / ∫_y^∞ 4y(x − y)e^{−(x+y)} dx
= (x − y)e^{−(x+y)} / ∫_y^∞ (x − y)e^{−(x+y)} dx
= (x − y)e^{−x} / ∫_y^∞ (x − y)e^{−x} dx.
Integrating by parts shows that ∫_y^∞ (x − y)e^{−x} dx = e^{−y}, so the above gives
f_{X|Y}(x|y) = (x − y)e^{−x}/e^{−y} = (x − y)e^{−(x−y)}, x > y.
Therefore,
E[X | Y = y] = ∫_{−∞}^{∞} x f_{X|Y}(x|y) dx = ∫_y^∞ x(x − y)e^{−(x−y)} dx.
Integration by parts yields
E[X | Y = y] = [−x(x − y)e^{−(x−y)}]_{x=y}^{∞} + ∫_y^∞ (2x − y)e^{−(x−y)} dx = ∫_y^∞ (2x − y)e^{−(x−y)} dx = y + 2.
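A short symbolic check (added here, not in the original solution) confirms both the conditional density and E[X | Y = y] = y + 2:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
joint = 4*y*(x - y)*sp.exp(-(x + y))
fY = sp.integrate(joint, (x, y, sp.oo))        # marginal density of Y
cond = sp.simplify(joint / fY)                 # conditional density of X given Y = y
print(cond)                                    # (x - y)*exp(y - x)
print(sp.simplify(sp.integrate(x*cond, (x, y, sp.oo))))  # y + 2
```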
MATH254: Tutorial Exercise for Week 8
1. Discrete random variables X and Y have joint probability mass function given by
Y = 1 Y = 2 Y = 3 Y = 4
X = 2 0.25 0.25 0 0
X = 4 0 0 0.25 0.25
Find the correlation Corr[X,Y ].
Comment on your answer.
2. Continuous random variables X and Y have joint pdf
f(x, y) = 60x²y, 0 ≤ x, 0 ≤ y, x + y ≤ 1,
and zero otherwise.
(a) Find the marginal pdfs of X and Y .
(b) Find the conditional density of Y given X = x, and hence evaluate P (Y > 0.1 | X = 0.5).
(c) Find the correlation Corr[X,Y ].
3. Continuous random variables X and Y have joint pdf
f(x, y) = (2/π) exp(−(x² + y²)/2), x, y > 0,
and zero otherwise.
(a) Are X and Y independent? Justify your answer.
(b) Random variables U and V are defined by
U = X + 2Y, V = X/Y.
Find the joint pdf of U and V .
MATH254: Solutions of Tutorial Exercise for Week 8
1. Marginals:
P (X = 2) = 0.5, P (X = 4) = 0.5,
P (Y = 1) = 0.25, P (Y = 2) = 0.25, P (Y = 3) = 0.25, P (Y = 4) = 0.25.
So the means and variances are
E[X] = 0.5×2 + 0.5×4 = 3
E[Y] = 0.25×1 + 0.25×2 + 0.25×3 + 0.25×4 = 2.5
Var[X] = (0.5×2² + 0.5×4²) − 3² = 10 − 9 = 1
Var[Y] = (0.25×1² + 0.25×2² + 0.25×3² + 0.25×4²) − 2.5² = 7.5 − 6.25 = 1.25
and now
Cov[X,Y] = (0.25×2×1 + 0.25×2×2 + 0.25×4×3 + 0.25×4×4) − 3×2.5 = 8.5 − 7.5 = 1,
Corr[X,Y] = 1/√(1×1.25) = √(4/5) ≈ 0.894.
Comment: Correlation is large and positive, indicating that X and Y tend to be large together and
small together. This can be seen from the joint probability mass function given in the question:
when X is small (X = 2), then only the small Y values have non-zero probability (Y = 1, 2) whereas
when X is large (X = 4) only the large Y values have non-zero probability (Y = 3, 4).
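A quick numerical check of these moments (an addition, computed straight from the pmf table in the question):

```python
import numpy as np

xs, ys = np.array([2, 4]), np.array([1, 2, 3, 4])
pmf = np.array([[0.25, 0.25, 0.0, 0.0],    # row X = 2
                [0.0, 0.0, 0.25, 0.25]])   # row X = 4

EX = (pmf.sum(axis=1) * xs).sum()
EY = (pmf.sum(axis=0) * ys).sum()
EXY = (pmf * np.outer(xs, ys)).sum()
VX = (pmf.sum(axis=1) * xs**2).sum() - EX**2
VY = (pmf.sum(axis=0) * ys**2).sum() - EY**2
print((EXY - EX*EY) / np.sqrt(VX * VY))    # expect about 0.894
```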
2. (a) Marginals:
f_X(x) = ∫_0^{1−x} 60x²y dy = [30x²y²]_{y=0}^{1−x} = 30x²(1 − x)², 0 ≤ x ≤ 1,
f_Y(y) = ∫_0^{1−y} 60x²y dx = [20x³y]_{x=0}^{1−y} = 20y(1 − y)³, 0 ≤ y ≤ 1.
(b) Conditional density: for 0 < x < 1,
f_{Y|X}(y|x) = f(x, y)/f_X(x) = 60x²y / (30x²(1 − x)²) = 2y/(1 − x)², 0 ≤ y ≤ 1 − x,
and hence
P(Y > 0.1 | X = 0.5) = ∫_{0.1}^{1−0.5} 2y/(1 − 0.5)² dy = ∫_{0.1}^{0.5} 8y dy = [4y²]_{y=0.1}^{0.5} = 4(0.5² − 0.1²) = 0.96.
(c) From the marginals,
E[X] = ∫_0^1 30x³(1 − x)² dx = ∫_0^1 30(x³ − 2x⁴ + x⁵) dx
= 30[x⁴/4 − 2x⁵/5 + x⁶/6]_0^1 = 7.5 − 12 + 5 = 0.5
E[Y] = ∫_0^1 20y²(1 − y)³ dy = ∫_0^1 20(y² − 3y³ + 3y⁴ − y⁵) dy
= 20[y³/3 − 3y⁴/4 + 3y⁵/5 − y⁶/6]_0^1 = 20/3 − 15 + 12 − 10/3 = 1/3
Var[X] = ∫_0^1 30x⁴(1 − x)² dx − 0.5² = ∫_0^1 30(x⁴ − 2x⁵ + x⁶) dx − 0.25
= 30[x⁵/5 − x⁶/3 + x⁷/7]_0^1 − 0.25 = (6 − 10 + 30/7) − 0.25 = 1/28
Var[Y] = ∫_0^1 20y³(1 − y)³ dy − (1/3)² = ∫_0^1 20(y³ − 3y⁴ + 3y⁵ − y⁶) dy − 1/9
= 20[y⁴/4 − 3y⁵/5 + y⁶/2 − y⁷/7]_0^1 − 1/9 = (5 − 12 + 10 − 20/7) − 1/9 = 2/63.
From the joint density:
E[XY] = ∫_0^1 ∫_0^{1−x} (xy)(60x²y) dy dx = ∫_0^1 ∫_0^{1−x} 60x³y² dy dx
= ∫_0^1 [20x³y³]_{y=0}^{1−x} dx = ∫_0^1 20x³(1 − x)³ dx
= ∫_0^1 20(x³ − 3x⁴ + 3x⁵ − x⁶) dx = 20[x⁴/4 − 3x⁵/5 + x⁶/2 − x⁷/7]_{x=0}^{1}
= 5 − 12 + 10 − 20/7 = 1/7,
and so
Cov[X,Y] = 1/7 − (1/2)×(1/3) = −1/42
Corr[X,Y] = (−1/42)/√((1/28)×(2/63)) = −1/√2 ≈ −0.7071.
(The negative correlation reflects the constraint x + y ≤ 1: a large X forces a small Y.)
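A numerical cross-check (added) of E[XY] and the correlation, integrating against the joint density:

```python
import numpy as np
from scipy.integrate import dblquad

f = lambda y, x: 60 * x**2 * y  # dblquad expects func(y, x)
moment = lambda g: dblquad(lambda y, x: g(x, y) * f(y, x),
                           0, 1, lambda x: 0, lambda x: 1 - x)[0]

EX, EY = moment(lambda x, y: x), moment(lambda x, y: y)
EXY = moment(lambda x, y: x * y)
VX = moment(lambda x, y: x**2) - EX**2
VY = moment(lambda x, y: y**2) - EY**2
print(EXY, (EXY - EX*EY) / np.sqrt(VX * VY))  # expect 1/7 and about -0.7071
```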
3. (a) X and Y are independent: the range of each variable does not depend on the value of the other, and the density factorises as (2/π)e^{−(x²+y²)/2} = [√(2/π) e^{−x²/2}] × [√(2/π) e^{−y²/2}], a product of a function of x alone and a function of y alone.
(b) Inverting the relationship: V = X/Y ⇒ X = YV; substituting for X, U = X + 2Y = YV + 2Y = Y(V + 2), so Y = U/(V + 2) and hence X = YV = UV/(V + 2).
Jacobian:
∂x/∂u = v/(v + 2),   ∂x/∂v = [(v + 2)u − uv]/(v + 2)² = 2u/(v + 2)²,
∂y/∂u = 1/(v + 2),   ∂y/∂v = −u/(v + 2)²,
and so
J = ∂(x, y)/∂(u, v) = (v/(v + 2))(−u/(v + 2)²) − (1/(v + 2))(2u/(v + 2)²) = −u/(v + 2)².
The joint pdf of (U, V) is therefore given by
f_{U,V}(u, v) = f_{X,Y}(x(u, v), y(u, v)) |∂(x, y)/∂(u, v)|
= (2/π) exp{−[(uv/(v + 2))² + (u/(v + 2))²]/2} × u/(v + 2)²
= (2u/(π(v + 2)²)) exp{−(u/(v + 2))² (v² + 1)/2}
for u > 0, v > 0. Otherwise, f_{U,V}(u, v) = 0.
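A Monte Carlo sanity check (an addition): by part (a), (X, Y) can be simulated as a pair of independent half-normals, and a probability read off the samples of (U, V) should match the same probability integrated under the derived density:

```python
import numpy as np
from scipy.integrate import dblquad

rng = np.random.default_rng(1)
X = np.abs(rng.normal(size=200_000))   # half-normal, density sqrt(2/pi) e^{-x^2/2}
Y = np.abs(rng.normal(size=200_000))
U, V = X + 2*Y, X / Y

fUV = lambda v, u: (2*u / (np.pi*(v + 2)**2)) * np.exp(-(u/(v + 2))**2 * (v**2 + 1)/2)
# P(U < 2, V < 1) two ways: empirical vs the derived joint density.
print(np.mean((U < 2) & (V < 1)),
      dblquad(fUV, 0, 2, lambda u: 0, lambda u: 1)[0])
```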
MATH254: Tutorial Exercise for Week 9
1. Consider a fair 3-sided die with faces 1, 2 and 4, each occurring with probability 1/3. Write down the
PGF of the score for one throw of the die. Hence find the PGF of the total score for n independent
throws. Using the PGF obtain the mean and variance of the total score when n = 2.
2. Find the PGF of the discrete random variable X with probability mass function
P(X = k) = (2/3) × (1/3)^{k−1} for k = 1, 2, . . .
Hence find the mean and variance of X.
3. Find the MGF of
(a) the Uniform[0, 1] distribution;
(b) the discrete random variable X with P (X = 4) = 1;
(c) the continuous random variable Y with probability density function f(y) = 2y for 0 ≤ y ≤ 1, density zero elsewhere.
4. For the standard normal random variable Z ~ N(0, 1), write down the MGF M_Z(t) of Z. Defining X = μ + σZ for real numbers μ, σ with σ > 0, what is the distribution of X? Use the MGF M_Z(t) to find the MGF M_X(t) of X.
MATH254: Solutions of Tutorial Exercise for Week 9
1. Denoting by X the score on one throw, the probability mass function of X is
P(X = 1) = 1/3, P(X = 2) = 1/3, P(X = 4) = 1/3,
and so the probability generating function of X is
G_X(s) = E[s^X] = (1/3)(s + s² + s⁴).
Denoting by S_n the sum of the scores on n independent throws, and by X_i the score on throw i, then
G_{S_n}(s) = E[s^{S_n}] = E[s^{X_1+···+X_n}] = E[s^{X_1} × ··· × s^{X_n}] = (E[s^{X_1}])^n = (1/3)^n (s + s² + s⁴)^n.
When n = 2, then
G_{S_2}(s) = (1/9)(s + s² + s⁴)²
G'_{S_2}(s) = (2/9)(s + s² + s⁴)(1 + 2s + 4s³)
G''_{S_2}(s) = (2/9)[(1 + 2s + 4s³)² + (s + s² + s⁴)(2 + 12s²)]
hence
G'_{S_2}(1) = (2/9) × 3 × 7 = 14/3
G''_{S_2}(1) = (2/9)(7 × 7 + 3 × 14) = 182/9
and so
E[S_2] = G'_{S_2}(1) = 14/3
Var[S_2] = G''_{S_2}(1) + G'_{S_2}(1) − (G'_{S_2}(1))² = 182/9 + 14/3 − (14/3)² = 28/9.
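A brute-force check (added) of the n = 2 answers, enumerating all nine equally likely outcomes of two throws:

```python
import itertools

faces = [1, 2, 4]
totals = [a + b for a, b in itertools.product(faces, repeat=2)]
mean = sum(totals) / 9
var = sum(t**2 for t in totals) / 9 - mean**2
print(mean, var)  # expect 14/3 = 4.666... and 28/9 = 3.111...
```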
2. PGF:
G_X(s) = E[s^X] = Σ_{k=1}^∞ s^k (2/3)(1/3)^{k−1} = (2s/3) Σ_{k=0}^∞ (s/3)^k
= (2s/3) · 1/(1 − s/3) for −3 < s < 3
= 2s/(3 − s).
(Note the range of validity −3 < s < 3.)
To find the mean and variance:
G'_X(s) = [(3 − s) × 2 − 2s × (−1)]/(3 − s)² = 6/(3 − s)², so G'_X(1) = 3/2,
G''_X(s) = 12/(3 − s)³, so G''_X(1) = 3/2,
E[X] = G'_X(1) = 3/2,
Var[X] = G''_X(1) + G'_X(1) − (G'_X(1))² = (3/2) + (3/2) − (3/2)² = 3/4.
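Since this pmf is that of a Geometric distribution with success probability p = 2/3, the answers can also be checked numerically (an added sketch, truncating the infinite sum):

```python
p = 2/3
ks = range(1, 200)                       # truncation is harmless: terms decay fast
pmf = [p * (1 - p)**(k - 1) for k in ks]
mean = sum(k * q for k, q in zip(ks, pmf))
var = sum(k**2 * q for k, q in zip(ks, pmf)) - mean**2
print(mean, var)  # expect 1.5 and 0.75
```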
3. (a) For X ~ Uniform[0, 1], the pdf is f(x) = 1 for 0 ≤ x ≤ 1, density zero elsewhere, so
M_X(t) = E[e^{tX}] = ∫_0^1 e^{tx} dx = [e^{tx}/t]_{x=0}^{1} = (e^t − 1)/t for t ≠ 0, with M_X(0) = 1.
(b) With P(X = 4) = 1, then M_X(t) = E[e^{tX}] = e^{4t}.
(c) Integrating by parts,
M_Y(t) = E[e^{tY}] = ∫_0^1 e^{ty} · 2y dy = [2y e^{ty}/t]_{y=0}^{1} − ∫_0^1 (2e^{ty}/t) dy
= 2e^t/t − [2e^{ty}/t²]_{y=0}^{1} = 2e^t/t − 2e^t/t² + 2/t²
= 2((t − 1)e^t + 1)/t².
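A symbolic verification (added) of the three MGFs; t is declared positive only to sidestep the removable t = 0 case:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
x, y = sp.symbols('x y')
print(sp.simplify(sp.integrate(sp.exp(t*x), (x, 0, 1))))       # (a): (e^t - 1)/t
print(sp.exp(4*t))                                             # (b): point mass at 4
print(sp.simplify(sp.integrate(sp.exp(t*y)*2*y, (y, 0, 1))))   # (c)
```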
4. For Z ~ N(0, 1), the MGF is M_Z(t) = e^{t²/2}.
Defining X = μ + σZ, then X ~ N(μ, σ²).
The MGF of X is
M_X(t) = E[e^{t(μ+σZ)}] = E[e^{tμ} e^{tσZ}] = e^{tμ} M_Z(σt) = e^{tμ} e^{(σt)²/2} = exp(tμ + σ²t²/2).
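A Monte Carlo spot check (added) of M_X(t) = exp(tμ + σ²t²/2), for one assumed parameter choice μ = 1, σ = 2, t = 0.3:

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, t = 1.0, 2.0, 0.3               # assumed illustration values
z = rng.normal(size=1_000_000)
print(np.mean(np.exp(t*(mu + sigma*z))),   # empirical E[e^{tX}]
      np.exp(t*mu + sigma**2 * t**2 / 2))  # closed form
```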
MATH254: Tutorial Exercise for Week 10
1. A sweet maker produces mints whose weights are normally distributed with mean 21.37 and variance 0.16.
(a) Let X denote the weight of a single mint selected at random from the production line. Find P(X < 20.857).
(b) Let X̄ denote the mean weight of a sample of 100 mints. Find P(21.31 ≤ X̄ ≤ 21.39).
2. The random variables X_1, X_2, . . . are independent and identically distributed with probability mass function Pr(X = −1) = Pr(X = 3) = 0.5.
(a) Calculate μ = E(X) and σ² = Var(X).
(b) For the case n = 200, use the Central Limit Theorem to approximate the probability Pr{ln(X̄) > 0}, where
X̄ = (1/200) Σ_{i=1}^{200} X_i
is the sample mean and ln denotes the natural logarithm (i.e., logarithm with base e).
(c) Find the minimum n for which
Pr{Σ_{i=1}^{n} X_i > 190} > 0.99.
3. For V ~ Binomial(100, 0.2), compute P(V = 20) (i) exactly; (ii) using a Poisson approximation;
(iii) using a normal approximation. Comment on your results.
4. Suppose that a measurement has mean μ and variance σ² = 25. Let X̄ be the average of n such independent measurements. Use the Central Limit Theorem to estimate how large n should be so that P(|X̄ − μ| < 1) = 0.95.
MATH254: Solutions of Tutorial Exercise Week 10
1. (a) We are told that X ~ N(21.37, 0.16), so
P(X < 20.857) = P(Z < (20.857 − 21.37)/0.4), where Z ~ N(0, 1),
= P(Z < −1.2825) = P(Z ≥ 1.2825) ≈ 1 − Φ(1.28) = 1 − 0.8997 = 0.1003.
(b) With n = 100, then X̄ ~ N(21.37, 0.16/100) = N(21.37, 0.04²), so
P(21.31 ≤ X̄ ≤ 21.39) = P((21.31 − 21.37)/0.04 ≤ Z ≤ (21.39 − 21.37)/0.04)
= P(−1.5 ≤ Z ≤ 0.5) = Φ(0.5) − Φ(−1.5) = Φ(0.5) − (1 − Φ(1.5))
= 0.6915 − (1 − 0.9332) = 0.6247.
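A direct check (added) with scipy.stats.norm, which avoids the table rounding:

```python
from scipy.stats import norm

print(norm.cdf(20.857, loc=21.37, scale=0.4))    # about 0.0998 (tables give 0.1003)
print(norm.cdf(21.39, loc=21.37, scale=0.04)
      - norm.cdf(21.31, loc=21.37, scale=0.04))  # about 0.6247
```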
2. (a) μ = (−1)·(1/2) + 3·(1/2) = 1; E[X²] = (−1)²·(1/2) + 3²·(1/2) = 5; σ² = Var(X) = E[X²] − (E[X])² = 4.
(b)
Pr{ln(X̄) > 0} = Pr{X̄ > 1} = Pr{Σ_{i=1}^{200} X_i > 200}
= Pr{(Σ_{i=1}^{200} X_i − 200·1)/(2√200) > (200 − 200·1)/(2√200)} ≈ Pr{Z > 0} = 0.5
(here Z has the standard normal distribution).
(c)
Pr{Σ_{i=1}^{n} X_i > 190} ≈ Pr{Z > (190 − n·1)/(2√n)} > 0.99.
The 99% critical value is 2.33, so we want to find n such that
(190 − n)/(2√n) < −2.33.
Set n = x² and solve the equation 190 − x² = −4.66x, i.e.,
x² − 4.66x − 190 = 0.
The discriminant is Δ = 4.66² + 4·190 = 781.72, and √Δ = 27.96, so
x_1 = (4.66 + 27.96)/2 = 16.31, x_2 = (4.66 − 27.96)/2 = −11.65.
Thus n_1 = 266.02 and n_2 = 135.72. Clearly 190 − n_2 > 0, so n_2 does not satisfy our requirement. Thus the minimum n is 267.
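A numerical re-check (added) of part (c), scanning for the first n that satisfies the solution's criterion (190 − n)/(2√n) < −2.33:

```python
from math import sqrt

n = 1
while (190 - n) / (2 * sqrt(n)) >= -2.33:  # left-hand side is decreasing in n
    n += 1
print(n)  # expect 267
```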
3. (i) Exact: P(V = 20) = C(100, 20) × 0.2²⁰ × 0.8⁸⁰ = 0.099300.
(ii) Poisson approximation: E[V] = 100 × 0.2 = 20, so approximate by the Poisson(20) distribution, and then P(V = 20) ≈ (20²⁰/20!)e^{−20} = 0.088835.
(iii) Normal approximation: Var[V] = 100 × 0.2 × 0.8 = 16, so approximate by N(20, 16) with a continuity correction, and then
P(V = 20) = P(19.5 ≤ V ≤ 20.5) ≈ P((19.5 − 20)/4 ≤ Z ≤ (20.5 − 20)/4) = P(−0.125 ≤ Z ≤ 0.125)
= Φ(0.125) − Φ(−0.125) = Φ(0.125) − (1 − Φ(0.125)) = 2Φ(0.125) − 1
≈ Φ(0.12) + Φ(0.13) − 1 = (0.5478 + 0.5517) − 1 = 0.0995.
Comment: The normal approximation is rather more accurate (0.0995/0.099300 = 1.002, close to 1, whereas 0.088835/0.099300 = 0.8946, not so close to 1), as we would expect, since np = 20 > 5 and n(1 − p) = 80 > 5, so the conditions for the normal approximation hold, whereas the condition np < 5 for the Poisson approximation is not satisfied.
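The same three numbers can be reproduced with scipy (an added verification):

```python
from scipy.stats import binom, norm, poisson

exact = binom.pmf(20, 100, 0.2)
pois = poisson.pmf(20, 20)
normal = norm.cdf(20.5, loc=20, scale=4) - norm.cdf(19.5, loc=20, scale=4)
print(exact, pois, normal)  # expect about 0.0993, 0.0888, 0.0995
```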
4. By the CLT, for large n we approximately have X̄ ~ N(μ, 25/n). Hence
P(|X̄ − μ| < 1) = P(|X̄ − μ|/(5/√n) < 1/(5/√n)) ≈ P(|Z| < √n/5), where Z ~ N(0, 1),
= P(−√n/5 < Z < √n/5) = Φ(√n/5) − (1 − Φ(√n/5)) = 2Φ(√n/5) − 1,
so we require
2Φ(√n/5) − 1 = 0.95 ⇒ Φ(√n/5) = 0.975 ⇒ √n/5 = 1.96 ⇒ n = (1.96 × 5)² = 96.04.
So we require n = 96 (approximately); rounding up, n = 97 guarantees the probability is at least 0.95.
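A quick check (added) of the two candidate sample sizes under the CLT approximation:

```python
from math import sqrt
from scipy.stats import norm

for n in (96, 97):
    print(n, 2 * norm.cdf(sqrt(n) / 5) - 1)  # n = 96 falls just under 0.95, n = 97 just over
```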
MATH254: Tutorial Exercise for Week 11
1. Suppose U and V are independent chi-square random variables with m and n degrees of freedom, respectively. Show that the random variable W defined by
W = (U/m)/(V/n)
has density
f_W(w) = [Γ((m+n)/2) / (Γ(m/2)Γ(n/2))] (m/n)^{m/2} w^{m/2−1} (1 + (m/n)w)^{−(m+n)/2}, w > 0.
Hint: Recall that the density of a chi-square distribution with n degrees of freedom is
f_V(v) = [(1/2)^{n/2}/Γ(n/2)] v^{n/2−1} e^{−v/2}.
Recall also that if A and B are independent random variables, then the density of the quotient C = B/A is
f_C(c) = ∫ |a| f_A(a) f_B(ca) da.
Finally, recall that the gamma density is
f_{α,λ}(x) = (λ^α/Γ(α)) x^{α−1} e^{−λx}, x > 0, α, λ > 0.
2. Suppose that (X, Y) is jointly Normally distributed. Identify the conditional density functions f_{Y|X}(y | x) and f_{X|Y}(x | y).
3. Suppose that (X, Y) has a standard jointly Normal distribution with parameter ρ ∈ (−1, 1). Let
V = (Y − ρX)/√(1 − ρ²).
Verify that X and V are independent, standard Normal random variables.
MATH254: Solutions of Tutorial Exercise Week 11
1. We know that the density of V is
f_V(v) = [(1/2)^{n/2}/Γ(n/2)] v^{n/2−1} e^{−v/2}.
Consider A = V/n. Using the transformation method,
f_A(a) = f_V(na)·n = [(1/2)^{n/2}/Γ(n/2)] (na)^{n/2−1} e^{−na/2} n.
Similarly, setting B = U/m, we obtain
f_B(b) = [(1/2)^{m/2}/Γ(m/2)] (mb)^{m/2−1} e^{−mb/2} m.
Since A and B are independent, we can use the formula for the density of the quotient W = B/A:
f_W(w) = ∫_0^∞ |a| f_A(a) f_B(wa) da
= ∫_0^∞ a · [(1/2)^{(n+m)/2}/(Γ(n/2)Γ(m/2))] (na)^{n/2−1} n (mwa)^{m/2−1} m e^{−na/2} e^{−mwa/2} da
= [(1/2)^{(n+m)/2}/(Γ(n/2)Γ(m/2))] n^{n/2−1}(mw)^{m/2−1} n m ∫_0^∞ a^{n/2+m/2−1} e^{−(n+mw)a/2} da.
Note that the gamma density is
f_{α,λ}(x) = (λ^α/Γ(α)) x^{α−1} e^{−λx}.
In our case λ = (n + mw)/2 and α = (n + m)/2, and a plays the role of x, so the integral equals Γ(α)/λ^α. So
f_W(w) = [(1/2)^{(n+m)/2}/(Γ(n/2)Γ(m/2))] n^{n/2} m^{m/2} w^{m/2−1} Γ((m+n)/2) [(n + mw)/2]^{−(n+m)/2}
= [Γ((m+n)/2)/(Γ(n/2)Γ(m/2))] (m/n)^{m/2} w^{m/2−1} (1 + (m/n)w)^{−(n+m)/2}.
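The density just derived is that of the F(m, n) distribution, so it can be spot-checked against scipy.stats.f at assumed test values m = 3, n = 7, w = 1.4 (an addition):

```python
from math import gamma
from scipy.stats import f

m, n, w = 3, 7, 1.4
derived = (gamma((m + n)/2) / (gamma(m/2) * gamma(n/2))
           * (m/n)**(m/2) * w**(m/2 - 1) * (1 + m*w/n)**(-(m + n)/2))
print(derived, f.pdf(w, m, n))  # the two values should agree
```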
2. By a direct computation, we have
f_{Y|X}(y | x) = f_{X,Y}(x, y)/f_X(x) = [1/(σ_Y√(2π(1 − ρ²)))] exp{−[y − μ_Y − ρ(σ_Y/σ_X)(x − μ_X)]²/(2σ_Y²(1 − ρ²))}.
This is a normal density function with mean μ_Y + ρ(x − μ_X)σ_Y/σ_X and variance σ_Y²(1 − ρ²). In a similar way, we can get
f_{X|Y}(x | y) = f_{X,Y}(x, y)/f_Y(y) = [1/(σ_X√(2π(1 − ρ²)))] exp{−[x − μ_X − ρ(σ_X/σ_Y)(y − μ_Y)]²/(2σ_X²(1 − ρ²))}.
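A simulation check (added) of the conditional mean and variance formulas, with assumed parameters μ_X = 1, μ_Y = −2, σ_X = 2, σ_Y = 3, ρ = 0.6:

```python
import numpy as np

rng = np.random.default_rng(3)
muX, muY, sX, sY, rho = 1.0, -2.0, 2.0, 3.0, 0.6   # assumed illustration values
cov = [[sX**2, rho*sX*sY], [rho*sX*sY, sY**2]]
xy = rng.multivariate_normal([muX, muY], cov, size=500_000)

x0 = 2.0
sel = np.abs(xy[:, 0] - x0) < 0.05        # condition on X being near x0
print(xy[sel, 1].mean(), muY + rho*(x0 - muX)*sY/sX)  # conditional mean
print(xy[sel, 1].var(), sY**2 * (1 - rho**2))         # conditional variance
```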
3. By the last theorem in Chapter 9, since X and V are linear combinations of X and Y, it follows that any linear combination of X and V is a linear combination of X and Y, and so is normally distributed (in particular, X and V are each Normal); hence (X, V) has a bivariate Normal distribution.
Noting that E(V) = 0 and
Var(V) = (1 + ρ² − 2ρ²)/(1 − ρ²) = 1,
the random variable V is standard Normal distributed, like the random variable X.
To prove the independence of X and V, we need only verify that Cov(X, V) = 0, since for jointly Normal variables zero covariance implies independence. This is immediate from
Cov(X, V) = (Cov(X, Y) − ρ·Var(X))/√(1 − ρ²) = (ρ − ρ)/√(1 − ρ²) = 0.
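A simulation check (added) that X and V are (approximately) uncorrelated standard normals, for an assumed ρ = 0.7:

```python
import numpy as np

rng = np.random.default_rng(4)
rho = 0.7
x = rng.normal(size=500_000)
y = rho*x + np.sqrt(1 - rho**2)*rng.normal(size=500_000)  # standard bivariate normal pair
v = (y - rho*x) / np.sqrt(1 - rho**2)
print(v.mean(), v.var(), np.corrcoef(x, v)[0, 1])  # expect about 0, 1, 0
```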
1
essay、essay代写