MCMC methods
STAT802 Advanced Topics in Analytics
Patricio Maturana-Russel
p.maturana.russel@aut.ac.nz
Department of Mathematical Sciences, School of Engineering, Computer and Mathematical
Sciences, Auckland University of Technology, Auckland, New Zealand
Semester 1, 2021
Paper contents
6. Resampling methods
8. MCMC methods
- Metropolis-Hastings
- Gibbs sampling
- Parallel tempering
- Lab exercises
9. Bayesian statistics
10. Bayesian statistics
11. Bayesian analysis
12. Model selection
MCMC methods
MCMC
Markov chain Monte Carlo (MCMC) is a simulation algorithm that generates a dependent sample θ1, . . . , θn from the target distribution f(θ).
Use
An expected value can be approximated as follows:

E(h(θ)) = ∫_Θ h(θ) f(θ) dθ ≈ (1/n) Σ_{i=1}^n h(θi),

where θi ∼ f(θ), for i = 1, . . . , n.
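As a quick illustration (a sketch added here, not from the original slides), consider h(θ) = θ² under a standard normal target, where the true value is E(θ²) = 1:

set.seed(1);
theta = rnorm(1e5); # independent draws from the target f = N(0,1)
mean(theta^2);      # Monte Carlo estimate of E(theta^2), close to 1

With independent draws this is plain Monte Carlo; MCMC provides the draws when f cannot be sampled directly.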
Use: Bayesian statistics
Bayes' theorem is given by

p(θ|M,X) = L(X|M,θ) π(θ|M) / z,

where θ ∈ Θ is the parameter vector, L the likelihood, π the prior, p the posterior, X the data set, M the model, and z the marginal likelihood given by

z = ∫_Θ L(X|M,θ) π(θ|M) dθ.
We use MCMC methods:
• to study the posterior distribution,
• to estimate z.
Metropolis-Hastings algorithm
History
• MCMC was introduced by physicists (Metropolis et al. 1953).
N. Metropolis, A.W. Rosenbluth, M.N. Rosenbluth, A.H. Teller, and E.
Teller. “Equations of State Calculations by Fast Computing Machines”.
Journal of Chemical Physics, 21(6):1087-1092, 1953.
• The Metropolis algorithm was generalized in the 1970s by the statistician Hastings.
• Applied mathematicians Geman and Geman (1984) used the Gibbs sampler in image restoration problems.
• Since the early 1990s, MCMC methods have revolutionized Bayesian statistics.
Metropolis-Hastings algorithm
A proposal θ∗ ∼ q(θ∗|θn) is accepted according to the acceptance ratio

α(θ∗|θn) = min{ 1, [f(θ∗) q(θn|θ∗)] / [f(θn) q(θ∗|θn)] }.

Otherwise, the chain stays at the point θn.
• f is the target distribution.
• q is the proposal distribution.
• θn is the current position.
• θ∗ is the proposed position.
Metropolis algorithm
If q is symmetric, i.e., q(θn|θ∗) = q(θ∗|θn), we have that

α(θ∗|θn) = min{ 1, f(θ∗)/f(θn) }.

Note that the M-H algorithm requires knowing f only up to a (normalizing) constant, i.e., if f(θ) = g(θ)/c we have that

f(θ∗)/f(θn) = g(θ∗)/g(θn),

therefore we only need to know g(θ).
This is why z (the marginal likelihood in Bayes' theorem) is generally ignored in the Bayesian parameter inference process.
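To make this concrete, here is a minimal sketch (added here, not from the original slides) of a Metropolis sampler in R that only ever evaluates an unnormalized density g; the target is N(0,1) written without its normalizing constant:

g = function(theta) exp(-theta^2 / 2); # N(0,1) kernel, constant dropped
n = 5000;
out = numeric(n);
current = 0;
for(i in 1:n){
  proposal = rnorm(1, mean = current, sd = 1); # symmetric proposal
  if(runif(1) <= g(proposal) / g(current)){    # alpha = g(prop)/g(curr)
    current = proposal;
  }
  out[i] = current;
}

The histogram of out matches N(0,1) even though the constant 1/√(2π) never appears in the code.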
MH proposal
Consider α(θ∗|θn) = min{ 1, f(θ∗)/f(θn) } in the following cases:

[Figure: the target density f(θ) with the current position θn and two proposed positions θ̂ and θ˜, where f(θ̂) < f(θn) and f(θ˜) > f(θn).]

where θn is the current position and θ̂ and θ˜ are the proposed positions (θ∗).
• θ̂ is accepted with probability f(θ̂)/f(θn).
• θ˜ is accepted with probability 1.
MH algorithm
1. Initialize θ0.
2. For n = 0, . . . , N − 1:
   i. Sample θ∗ from q(θ|θn).
   ii. Sample u from Uniform(0,1).
   iii. If u ≤ α(θ∗|θn), set θn+1 = θ∗; else θn+1 = θn.
Exercise 8.1
Sample from N(0,1) using proposals from Uniform(θn − 2, θn + 2).
Specifications:
• Starting value = 0.
• Starting value = -100.
• Starting value = -100 and use log-densities.
Simple R implementation of the MH algorithm.
mh = function(n, starting_value, logden = FALSE){
out = NULL; # to store the points
current_value = starting_value;
current_den = dnorm(current_value, log = logden);
for(i in 1:n){
proposal_value = runif(1, min = current_value - 2,
max = current_value + 2);
proposal_den = dnorm(proposal_value, log = logden);
if(logden == TRUE){
alpha = proposal_den - current_den; # log acceptance probability
u = log(runif(1));
}else{
alpha = proposal_den / current_den;
u = runif(1);
}
if(u <= alpha){ # the proposal is accepted
out[i] = proposal_value;
current_value = proposal_value;
current_den = proposal_den;
}else{ # the proposal is rejected
out[i] = current_value;
}
} # end loop
return(out);
}
mh(n = 5000, starting_value = 0, logden = FALSE)
[Figure: histogram of the resulting samples matching the N(0,1) density, and the trace plot of x over the 5,000 iterations.]
> x = mh(n=5000, starting_value=-100, logden = FALSE)
Error in if (u <= alpha) { : missing value where TRUE/FALSE needed
Problem:
dnorm(proposal_value, log = FALSE) = 0 and dnorm(current_value, log = FALSE) = 0
⇒ alpha = proposal_den / current_den = 0/0 = NaN

In phylogenetics the likelihood values are extremely small, so the log-likelihood is used instead.
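A quick check of the underflow (an added illustration, not in the original slides): at θ = −100 the normal density is far below the smallest representable double, so R returns 0, while the log-density is perfectly well behaved.

dnorm(-100)             # 0: the density underflows
dnorm(-100, log = TRUE) # -5000.919: the log-density is finite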
x = mh(n = 5000, starting_value = -100, logden = TRUE)
[Figure: histogram of all 5,000 samples spread over x ∈ (−100, 0), and the trace plot showing the chain travelling from −100 up to the target region around 0.]
Solution?
x = x[-c(1:200)];
[Figure: after discarding the first 200 points, the histogram matches the N(0,1) density and the trace plot is stable around 0 over the remaining 4,800 iterations.]
The discarded initial segment is known as the burn-in period.
x contains 4,800 samples.
What if we discard half of the points as follows:
y = x[seq(1, length(x), by = 2)]?
→ length(y) = 2400
Min. 1st Qu. Median Mean 3rd Qu. Max. SD
x -3.438 -0.674 0.009 0.007 0.738 4.008 1.032
y -3.438 -0.668 0.008 0.004 0.719 3.089 1.030
The number of steps to thin the chain is known as the thinning factor (in this example = 2).
[Figure: histogram and trace plot of the thinned chain y; the histogram still matches the N(0,1) density.]
Practical implementation issues
• Setting starting value θ0.
• Defining proposal density q.
• Deciding how long to run the Markov chain.
• Deciding the burn-in period.
• Deciding how to use the output to approximate the target distribution f.
Mixing
[Figure: three trace plots of θ over 1,200 iterations; the first shows good mixing, the other two show bad mixing.]
Effective sample size (ESS)
The ESS is the number of independent draws from the target distribution
that the Markov chain is equivalent to.
Study of gene expressions: suppose we sample two twins, and the correlation between them is 0.7. Then

ESS = (number of samples) / (1 + (number of samples − 1) × correlation) = 2 / (1 + (2 − 1) × 0.7) ≈ 1.18.

They are counted as 1.18 people.
Samples that are highly correlated do not count as individual samples.
Source: www.youtube.com/watch?v=67zCIqdeXpo
StatQuest - Sample Size and Effective Sample Size, Clearly Explained
Study of gene expressions: with the same two twins, suppose the correlation between them is 0.1. Then

ESS = 2 / (1 + (2 − 1) × 0.1) ≈ 1.82.

They are counted as 1.82 people.
The effective sample size of N samples generated by a process with autocorrelations ρt at lag t ≥ 0 is defined by

ESS = N / (1 + 2 Σ_{t=1}^∞ ρt).
In practice, it is estimated.
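For intuition, here is a sketch added here (not in the original slides) comparing the estimated ESS with its theoretical value for an AR(1) chain: the lag-t autocorrelation is φ^t, so the formula above gives ESS = N(1 − φ)/(1 + φ).

set.seed(1);
N = 10000; phi = 0.7;
x = as.numeric(arima.sim(model = list(ar = phi), n = N));
N * (1 - phi) / (1 + phi) # theoretical ESS, approximately 1765
coda::effectiveSize(x)    # estimated ESS, close to the theoretical value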
Tracer¹ flags ESSs < 100, and also indicates whether ESSs ∈ [100, 200]. But this may be a bit liberal and ESSs > 200 are more desirable. Larger than 10,000 is almost certainly a waste of computational resources (Drummond and Bouckaert, 2015).
¹Tracer is a program for analysing the trace files generated by Bayesian MCMC runs (that is, the continuous parameter values sampled from the chain). It can be used to analyse runs of BEAST, MrBayes, LAMARC and possibly other MCMC programs.
In the previous example:
x = mh(n=5000, starting_value=-100, logden=TRUE);
x = x[-c(1:200)]; # burn-in period -> 4800 samples
y = x[seq(1, length(x), by = 2)]; # thinning -> 2400 samples
If we calculate their ESS values:
> coda::effectiveSize(x)
828.15
> coda::effectiveSize(y)
882.12
Consider the following case:
> coda::effectiveSize(rnorm(1000))
1000
Gibbs sampling
It is an MCMC algorithm widely used in Bayesian statistics.
It originated in the context of image processing, where the posterior distribution of interest is a Gibbs distribution.
It works by sampling from full conditional distributions.
Consider θ = (θ1, θ2, . . . , θp)ᵀ ∼ f(·).
Given an arbitrary set of starting values θ^(0) = (θ1^(0), θ2^(0), . . . , θp^(0))ᵀ, the (k + 1)th iteration of the algorithm is given by:

Draw θ1^(k+1) ∼ f(θ1 | θ2^(k), θ3^(k), . . . , θp^(k))
Draw θ2^(k+1) ∼ f(θ2 | θ1^(k+1), θ3^(k), . . . , θp^(k))
...
Draw θp^(k+1) ∼ f(θp | θ1^(k+1), . . . , θp−1^(k+1))

yielding θ^(k+1) = (θ1^(k+1), θ2^(k+1), . . . , θp^(k+1))ᵀ.
Repeat this process N times.
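As an added sketch (not in the original slides), this scheme can be written as a generic R skeleton, where cond is an assumed, user-supplied list of full-conditional samplers: cond[[j]](theta) must return a draw of θj given the current values of all other components.

gibbs_generic = function(N, theta0, cond){
  p = length(theta0);
  out = matrix(NA, nrow = N, ncol = p); # one row per iteration
  theta = theta0;
  for(k in 1:N){
    for(j in 1:p){
      theta[j] = cond[[j]](theta); # components 1..j-1 are already updated
    }
    out[k, ] = theta;
  }
  return(out);
}

The bivariate normal sampler in Lab 8.3 is exactly this scheme with p = 2.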
Bivariate normal sampling
To sample from a bivariate normal distribution

(X, Y)ᵀ ∼ N( (µX, µY)ᵀ, Σ ), with Σ = [ σX²  ρσXσY ; ρσXσY  σY² ],

we can sample from

X | Y = y ∼ N( µX + ρσX(y − µY)/σY, (1 − ρ²)σX² )
Y | X = x ∼ N( µY + ρσY(x − µX)/σX, (1 − ρ²)σY² )
Parallel tempering
A sampling problem
Problem:
[Figure: a target density f(θ) with well-separated modes, which a single random-walk chain struggles to explore.]
A solution:
[Figure: the same density f(θ) together with flatter, tempered versions of it, which are easier to traverse.]
Sample from the transitional distributions

fβ(θ) = f(θ)^β / zβ,  where 0 ≤ β ≤ 1,

where zβ is the normalizing constant.
In Bayesian statistics, we define the power posterior density

pβ(θ|M,X) = L(X|M,θ)^β π(θ|M) / zβ.

This density defines a path between the prior π (β = 0) and the posterior p (β = 1).
Use
This algorithm is used for Bayesian parameter estimation in the gravitational-wave field.
See for instance: "Parameter estimation for compact binaries with ground-based gravitational-wave observations using the LALInference software library" by J. Veitch et al. (2015).
Algorithm
Initialize θ0^(β0), . . . , θ0^(βK−1).
1. Randomly select a swap index k ∈ {1, . . . , K − 1}.
2. Calculate the swap rate

α = min{ 1, ( f(θn^(βk−1)) / f(θn^(βk)) )^(βk − βk−1) }.

3. Sample u from Uniform(0,1).
4. If u ≤ α, set θn+1^(βk) = θn^(βk−1) and θn+1^(βk−1) = θn^(βk) (swap);
   else θn+1^(βk) = θn^(βk) and θn+1^(βk−1) = θn^(βk−1).
5. For i ∈ {0, . . . , K − 1} \ {k − 1, k}:
   θn+1^(βi) = RWM(θn^(βi), fβi)

where RWM(θ, f) stands for a random walk Metropolis (or related) algorithm performed to update θ targeting the distribution f.
[Figure: trace plots of several tempered chains over 350 iterations; each chain is updated by RWM, and pairs of chains swap their positions with probability α.]
Note that here we swap the points after some iterations.
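As a rough illustration of steps 1-5 (a sketch under assumptions, not code from the course), here is a minimal parallel tempering loop in R for a bimodal unnormalized target g, with chains indexed 1, . . . , K instead of 0, . . . , K − 1 and one swap attempt per iteration:

g = function(theta) exp(-(theta - 5)^2 / 2) + exp(-(theta + 5)^2 / 2);
K = 4;
beta = c(0.1, 0.3, 0.6, 1); # tempering schedule; beta = 1 is the target
n = 5000;
theta = rep(0, K);          # current position of each tempered chain
out = numeric(n);           # stores the beta = 1 chain
for(i in 1:n){
  k = sample(2:K, 1);       # swap index (step 1)
  alpha = min(1, (g(theta[k - 1]) / g(theta[k]))^(beta[k] - beta[k - 1]));
  if(runif(1) <= alpha){    # steps 2-4: swap with probability alpha
    theta[c(k - 1, k)] = theta[c(k, k - 1)];
  }
  for(j in setdiff(1:K, c(k - 1, k))){ # step 5: RWM on the other chains
    proposal = rnorm(1, mean = theta[j], sd = 2);
    if(runif(1) <= (g(proposal) / g(theta[j]))^beta[j]){
      theta[j] = proposal;
    }
  }
  out[i] = theta[K];
}

The trace of out hops between the modes at −5 and 5, which a single random-walk chain targeting f directly rarely manages.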
Other algorithms
• Slice sampling.
• Hamiltonian Monte Carlo.
• Reversible jump Markov chain Monte Carlo (RJMCMC).
• Nested sampling.
Laboratory
Lab 8.1
Generate 10,000 points using the M-H algorithm from a Gamma distribution with parameters shape = 9 and rate = 2. Use a N(θn, σ²) as proposal density.
• Find σ such that the acceptance proportion is between 0.3 and 0.4.
• Define a burn-in period if necessary.
• Use a thinning factor of 5.
• Plot the resulting histogram along with the target pdf.
• Draw the trace plot for the acceptance proportion.
• Estimate the mean and variance via the Monte Carlo method and compare them to the true values.
Lab 8.2
Simulate observations using the M-H algorithm from a bivariate normal distribution (X1, X2)ᵀ ∼ N2(µ, Σ), where

µ = (µ1, µ2)ᵀ and Σ = [ σ1²  ρσ1σ2 ; ρσ1σ2  σ2² ].

- Case 1: µ = (1, 2)ᵀ, σ1 = 2, σ2 = 3, and ρ = 0.
- Case 2: ρ = 0.9.

Consider a Uniform(xi∗ − si, xi∗ + si) as proposal distribution, where xi∗ is the current position of the Xi variable, and si is the proposal length, for i = 1, 2.
Do the following:
• C1 & C2: Simulate 2,000 observations, using a thinning factor of 1.
• C2: Generate 2,000 observations using a thinning factor of 10.
• C2: Calculate the ESSs.
Generate the trace plots for each simulated component in both cases. What can you conclude?
Hint: you might use mvtnorm::dmvnorm() to calculate the multivariate normal density.
Lab 8.3
Simulate from the bivariate normal distribution defined in Lab 8.2 but now
using the Gibbs sampling algorithm. How does it perform in comparison
to the Metropolis-Hastings algorithm?
Lab Solution
Lab 8.1
# Metropolis algorithm
N = 10000; # number of samples
current_value = 1;
current_den = dgamma(current_value, shape=9, rate=2, log=TRUE);
sigma = 1; # proposal length
out = NULL; # to store samples
count = NULL; # to count the number of proposals accepted
for(i in 1:N){
proposal_value = rnorm(1, mean=current_value, sd=sigma);
proposal_den = dgamma(proposal_value, shape=9, rate=2, log=TRUE);
alpha = proposal_den - current_den; # log acceptance probability
u = log(runif(1));
if(u <= alpha){
current_value = proposal_value;
current_den = proposal_den;
count = c(count, 1);
}else{
count = c(count, 0);
}
if(mean(count) > 0.4){
sigma = sigma * 1.1; # increase proposal steps
}else if(mean(count) < 0.3){
sigma = sigma * 0.9; # decrease proposal steps
}
out = c(out, current_value);
} # end for
Observation: the proposed tuning of the proposal step can fail in certain circumstances, for instance, if the starting value is far away from the target distribution.
1. sigma = 4.456
2. plot(out, type="l", xlab="Iteration", ylab="X", main="Trace plot")
[Figure: trace plot of the 10,000 draws, fluctuating between roughly 2 and 12 from the start.]
A burn-in period is not necessary.
3. x = out[seq(1,N, by=5)]
4. hist(x, freq = FALSE, ylim = c(0, max(dgamma(x, shape=9, rate=2))),
main = "Gamma distribution");
curve(dgamma(x, shape = 9, rate = 2), add=TRUE);
[Figure: histogram of the thinned samples with the Gamma(9, 2) density curve overlaid.]
5. plot(cumsum(count)/(1:N), type = "l", ylab = "Acceptance rate",
xlab="Iteration");
abline(h = 0.3, lty = 2); abline(h = 0.4, lty = 2);
mtext(paste("Mean = ", mean(count)), side = 3)
[Figure: running acceptance rate over the 10,000 iterations, settling between the 0.3 and 0.4 reference lines; mean = 0.3629.]
6. The sample mean and variance compared to the true values.
Estimate True
Mean 4.52 4.50
Variance 2.25 2.25
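A brief sketch of how these figures can be obtained (assuming x is the thinned chain from step 3; the true Gamma(shape, rate) moments are mean = shape/rate and variance = shape/rate²):

mean(x); var(x); # Monte Carlo estimates
9 / 2;           # true mean = 4.5
9 / 2^2;         # true variance = 2.25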
Lab 8.2
library(mvtnorm)
mh = function(n, mu, sigma, starting_value, step){
current_value = starting_value;
current_den = dmvnorm(current_value, mean=mu, sigma=sigma, log=TRUE);
proposal_value = current_value;
out = NULL; # to store output
step1 = step[1]; # proposal step lengths - First component
step2 = step[2]; # proposal step lengths - Second component
for(i in 1:n){
## updating first component ##
proposal_value[1]=runif(1, current_value[1]-step1, current_value[1]+step1);
proposal_den = dmvnorm(proposal_value, mean=mu, sigma=sigma, log=TRUE);
alpha = proposal_den - current_den; # acceptance probability
u = -rexp(1); # equivalent to log(runif(1))
if(u <= alpha){
current_value[1] = proposal_value[1];
current_den = proposal_den;
}else{
proposal_value[1] = current_value[1]; # reset proposal
}
## updating second component ##
proposal_value[2]=runif(1, current_value[2]-step2, current_value[2]+step2);
proposal_den = dmvnorm(proposal_value, mean=mu, sigma=sigma, log = TRUE);
alpha = proposal_den - current_den;
u = -rexp(1); # equivalent to log(runif(1))
if(u <= alpha){
current_value[2] = proposal_value[2];
current_den = proposal_den;
}else{
proposal_value[2] = current_value[2]; # reset proposal
}
out = rbind(out, current_value);
}
rownames(out) = NULL;
return(out);
}
### case 1 ###
n = 2000; # sample size
mu = c(1,2);
rho = 0.0;
s1 = 2; s2 = 3; s12 = s1 * s2 * rho;
sigma = matrix(c(s1^2, s12, s12, s2^2), ncol=2);
set.seed(1)
R1 = mh(n, mu, sigma, starting_value = c(1,1), step = c(4,5));
> coda::effectiveSize(R1[,1])
405.1305
> coda::effectiveSize(R1[,2])
312.6341
[Figure: Case 1 trace plots of X1 and X2 over 2,000 iterations, the (X1, X2) scatter plot, and the marginal histograms of X1 and X2.]
### case 2 ###
rho = 0.9;
s12 = s1 * s2 * rho;
sigma = matrix(c(s1^2, s12, s12, s2^2), ncol=2);
set.seed(1);
R2 = mh(n, mu, sigma, starting_value = c(1,1), step = c(4,5))
> coda::effectiveSize(R2[,1])
74.21516
> coda::effectiveSize(R2[,2])
62.31479
[Figure: Case 2 trace plots, scatter plot, and marginal histograms; the samples concentrate along a narrow diagonal band and the traces move slowly.]
Note that the correlation affects the mixing.
set.seed(1)
R3 = mh(n = 20000, mu, sigma, starting_value = c(1,1), step = c(4,5));
R3 = R3[seq(1,20000, by = 10),];
> coda::effectiveSize(R3[,1]);
608.4707
> coda::effectiveSize(R3[,2]);
649.8742
[Figure: trace plots, scatter plot, and marginal histograms for the longer, thinned Case 2 run; the mixing is visibly improved.]
A longer chain combined with thinning helps the mixing and the ESS.
Lab 8.3
To sample from a bivariate normal distribution

(X, Y)ᵀ ∼ N( (µX, µY)ᵀ, Σ ), with Σ = [ σX²  ρσXσY ; ρσXσY  σY² ],

we use the following conditional distributions for the Gibbs sampling algorithm:

X | Y = y ∼ N( µX + ρσX(y − µY)/σY, (1 − ρ²)σX² )
Y | X = x ∼ N( µY + ρσY(x − µX)/σX, (1 − ρ²)σY² )
# Gibbs sampling algorithm
gibbs = function(n, mu1, mu2, s1, s2, rho, x0){
x1_0 = x0[1]; # starting values
x2_0 = x0[2];
out = NULL;
for(i in 1:n){
x1_0 = rnorm(1, mean = mu1 + rho * s1 * (x2_0 - mu2) / s2,
sd = sqrt((1-rho^2) * s1^2));
x2_0 = rnorm(1, mean = mu2 + rho * s2 * (x1_0 - mu1) / s1,
sd = sqrt((1-rho^2) * s2^2));
out = rbind(out, c(x1_0, x2_0));
}
return(out);
}
### case 1 ###
R1 = gibbs(n = 2000, mu1 = mu[1], mu2 = mu[2], s1 = s1, s2 = s2,
rho = 0, x0 = c(1,1));
> coda::effectiveSize(R1[,1])
2000
> coda::effectiveSize(R1[,2])
2000
[Figure: Gibbs Case 1 trace plots of X1 and X2, scatter plot, and marginal histograms; the draws behave like independent samples.]
The mixing is excellent.
### case 2 ###
R2 = gibbs(n = 2000, mu1 = mu[1], mu2 = mu[2], s1 = s1, s2 = s2,
rho = 0.9, x0 = c(1,1));
> coda::effectiveSize(R2[,1])
213.3355
> coda::effectiveSize(R2[,2])
269.3241
[Figure: Gibbs Case 2 trace plots, scatter plot along the diagonal, and marginal histograms.]
The mixing deteriorates as the correlation increases, but it is still far better than that of the M-H algorithm (ESSs of 213 and 269 versus 74 and 62).