Discrete Models: Monte
Carlo Simulation
Prof. Andy Mullis
CAPE 5710
Monte Carlo Models
 The term ‘Monte Carlo’ simulation is currently applied
to such a large class of simulation techniques that it is
often difficult to determine what is common amongst
them. At its heart, any type of simulation that
fundamentally relies on performing random sampling
to provide statistical estimation may be termed Monte
Carlo. Such models include ‘random walk’
simulations (e.g. radiation flux through an absorbing
material), random packing of particles and statistical
thermodynamics.
Monte Carlo Models
 Surprisingly though, even problems with no stochastic
content can be solved using Monte Carlo techniques.
One of the most basic (and first) applications of the
technique is in determining the volume of an arbitrary
region in M-dimensional space. More generally, the
problem is that of evaluating the integral of a function,
F, defined over such a region.
Monte Carlo Models
 Let R denote a region of
unknown area, λ(R),
bounded by an arbitrary
line on the domain
[0, 1] × [0, 1]. We wish to
estimate the area λ(R) but
the definition of R is such
as to make an analytical
estimation intractable.
Estimating Volume & Area
 One way we could do this
would be as follows:
generate a sequence of
ordered points x(j): x(1),
…, x(n), as shown.
j  1 and S  0
while j  n
generate
)( jx
  0 )( jx
if 
)( jx then   1 )( jx
  )( jxSS 
j  j+ 1
nSn / )( 
Estimating Volume & Area
 By assuming that each point is at the centre of a
square of area 1/n, the area λ can be estimated using
the algorithm above.
 Here λ(n) is an estimate of the true area λ. The
area estimated by this algorithm and the associated
errors are also shown.
Estimating Volume & Area
 However, this strategy has a serious flaw. How
should we choose n such that the likely error, ε, is
within a given band? Moreover, if we perform the
calculation once with a given n, say n1, it is very difficult
to redo the calculation with a different n, say n2,
without starting over from scratch.
Estimating Volume & Area
 The area can also be estimated with the Monte
Carlo technique, using the algorithm above with one
modification. Rather than using an ordered set of
point descriptors, x(j), we use a set of independent
random samples, X(j), drawn from a uniform
probability distribution. This strategy has a number of
advantages.
Estimating Volume & Area
 Firstly, because each sample, X(j), is independent of
all previous samples, it is easy to add more samples,
i.e. to change n during the computation if required.
This allows for convergence testing during
the evaluation. Secondly, because the sampling is
random, probability theory can be used to estimate ε
and hence to estimate a priori the value of n to use.
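A sketch of the random-sampling variant (again using the hypothetical quarter-disc region of true area π/4), drawing independent uniform samples X(j) in batches so that n can be grown, and convergence monitored, mid-computation:

```python
import random

def in_R(x, y):
    # Hypothetical region as before: quarter disc of true area pi/4.
    return x * x + y * y <= 1.0

def mc_area(n_batches, batch_size, seed=1):
    # Each sample X(j) is independent, so extending the run is trivial:
    # keep the running count S and simply draw more batches as needed.
    rng = random.Random(seed)
    S, n, history = 0, 0, []
    for _ in range(n_batches):
        for _ in range(batch_size):
            if in_R(rng.random(), rng.random()):
                S += 1
        n += batch_size
        history.append(S / n)   # running estimate lambda(n)
    return history

estimates = mc_area(10, 10_000)
print(estimates[-1])  # approaches pi/4 as n grows
```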
Estimating Volume & Area
 From random sampling theory it can be shown that:

  var λ(n) = (1/n²) var(S) = λ(1 − λ)/n

Since the true λ is unknown, this is estimated from the sample itself as

  σ² = (1/(n − 1)) (S/n)(1 − S/n)

 where σ is the standard error of λ(n) and
serves as a rough measure of the statistical error in λ(n).
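A numerical sketch of these formulas (illustrative only, reusing the assumed quarter-disc region): the estimate S/n is reported together with the sample-based standard error.

```python
import math
import random

def estimate_with_error(n, seed=2):
    # lambda(n) = S/n for the quarter disc x^2 + y^2 <= 1, plus the
    # sample standard error sigma = sqrt((S/n)(1 - S/n)/(n - 1)).
    rng = random.Random(seed)
    S = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            S += 1
    lam = S / n
    sigma = math.sqrt(lam * (1.0 - lam) / (n - 1))
    return lam, sigma

lam, sigma = estimate_with_error(100_000)
print(f"lambda = {lam:.4f} +/- {sigma:.4f}")  # true area is pi/4 ~ 0.7854
```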
Generalization to M-Dimensions
 Monte Carlo methods are also more computationally
efficient for a given maximum error. Consider again
the ordered sampling case. We may estimate the
worst case value of n(ε), that is the maximum number
of points n required to ensure the error is below a
given value ε.
 The error is bounded as follows.
Generalization to M-Dimensions
 Let s(R) be the length of the perimeter and 1/k be
the maximal width of a rectangle containing the errors
on the boundary. Hence the absolute error has upper
bound

  ε(n) ≤ s(R)/k

Now, since n = k^m,

  ε(n) ≤ s(R) n^(−1/m)

and so the number of points needed for a given error ε is

  n(ε) = (s(R)/ε)^m
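To make the exponential growth concrete, a small calculation of n(ε) = (s/ε)^m with illustrative values s = 4 and ε = 0.01 (both assumptions, not from the slides):

```python
# Worst-case grid size n(eps) = (s/eps)^m grows exponentially with the
# dimension m; s and eps below are illustrative values only.
s, eps = 4.0, 0.01

for m in (2, 3, 6):
    n = (s / eps) ** m
    print(f"m = {m}: n = {n:.1e} points")
```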
Generalization to M-Dimensions
 That is, n increases exponentially with m. For m = 2
(as above) we are calculating area, for m = 3 volume.
For m > 3 we are calculating the ‘volume’ of the region
contained in the m-dimensional hypercube. Note that
problems with m > 3 are not purely hypothetical.
Many problems can be formulated in m-dimensional
space, where m is simply the number of independent
variables. For instance, elasticity problems can be
formulated with m = 6, corresponding to the three
orthogonal spatial directions and the three
components of the stress.
Generalization to M-Dimensions
 Returning now to the Monte Carlo formulation,
Chebyshev’s inequality gives the worst case value of
n for an error ε with a confidence limit δ as

  n(ε, δ) = 1/(4δε²)

 I.e. for Monte Carlo n is independent of the problem
dimension.
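For comparison with the grid case, the Chebyshev estimate contains no dimension m at all. With the same ε = 0.01 and an illustrative confidence parameter δ = 0.05:

```python
# Chebyshev worst-case sample size n(eps, delta) = 1/(4 * delta * eps^2).
# There is no dependence on the dimension m; eps and delta here are
# illustrative values only.
eps, delta = 0.01, 0.05
n = 1.0 / (4.0 * delta * eps ** 2)
print(f"n = {n:.0f} samples in any dimension")
```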
Particle Packing
 A quite different type of Monte Carlo simulation can be
used to simulate particle packing. For uniform
spheres without inter-particle forces, approximate
analytical methods yield random loose packing
fractions between 0.58 and 0.65. However, for non-
uniform spherical particles or particles which interact
(e.g. by surface adhesion, ‘sticky particles’), analytical
solutions are not possible.
Particle Packing
 However, the packing fraction and packing geometry can
be estimated by a form of Monte Carlo simulation.
The principle, though, can be understood with
reference to uniform non-interacting spheres. We
consider a box, with sides L, much larger than the
radius, a, of the unit sphere.
Particle Packing
 Particles are ‘fired’ into the box one at a time. Their
starting position (x, y, z = 0) at the top of the box is set
by an independent random variable for each particle,
as is the initial deflection from the vertical, θ.
Depending upon the algorithm used, a particle whose
initial trajectory would take it out of the box can either
be discarded or reflected from the walls of the box.
Each particle follows simple Newtonian mechanics until
it comes to rest, which is assumed to occur when it
has formed contact with three other spheres.
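The full contact-resolution algorithm is involved, but a much-simplified sketch (my illustration, not the lecture's algorithm) captures the flavour: each sphere falls vertically from a random position and rests at its first contact with the floor or a previously placed sphere. No lateral rolling or three-point stability check is attempted.

```python
import math
import random

def deposit_spheres(n, L=10.0, a=0.5, seed=3):
    # Simplified sequential deposition: each sphere of radius a falls
    # straight down from a random (x, y) inside the box of side L and
    # stops at the highest contact height, so it never overlaps an
    # earlier sphere. (Illustrative sketch only.)
    rng = random.Random(seed)
    placed = []                       # list of (x, y, z) centres
    for _ in range(n):
        x, y = rng.uniform(a, L - a), rng.uniform(a, L - a)
        z = a                         # floor contact by default
        for (px, py, pz) in placed:
            dh2 = (x - px) ** 2 + (y - py) ** 2
            if dh2 < (2 * a) ** 2:    # horizontally close enough to touch
                z = max(z, pz + math.sqrt(4 * a * a - dh2))
        placed.append((x, y, z))
    return placed

spheres = deposit_spheres(500)
height = max(z for (_, _, z) in spheres)
print(f"bed height ~ {height:.1f}")
```

Taking the maximum contact height over all nearby spheres guarantees the new centre is at least 2a from every existing centre.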
Particle Packing
 Typically, several thousands or tens of thousands of
spheres are required for good convergence of the
packing statistics. The structures that can be formed
include random close packed, random loose packed
and bridge-free random loose packed.
Particle Packing
 Although it is possible to add non-uniform spheres
there are certain computational restrictions. Consider
a dispersion containing two types of spherical particle,
large and small, of the same material. The volume
ratio of large to small particles is 100. Equal masses
(i.e. solid volumes) of the two powders are mixed (i.e.
there are 100 small spheres for each large sphere). If
we require 10,000 large spheres to ensure
convergence of the packing statistics then we have
10^6 small spheres!
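The counting argument above can be checked directly (a trivial sketch):

```python
# Equal solid volumes of the two powders with a particle-volume ratio of
# 100 means 100 small spheres per large sphere, so the total particle
# count is dominated by the small spheres.
volume_ratio = 100      # volume of one large sphere / one small sphere
n_large = 10_000        # large spheres needed for converged statistics
n_small = n_large * volume_ratio
print(n_small)  # 1000000
```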
Particle Packing & Flow
Random Walk Processes
 As the final example of a Monte Carlo process we
consider the shielding of a radiation source. Similar
arguments can be applied to modelling the
penetration depths of electrons bombarding a material
surface. The source S is shielded by a cylindrical
container with inner radius r and outer radius r + d.
The source emits a neutron at energy level E0. The
neutron will then have a mean free path λ between
collisions. The actual distance between collisions will
be a random variable distributed such that its RMS
value is λ (the distribution of which may or may not be
normal).
Random Walk Processes
[Figure: neutron random walks from the source S through a shield of thickness d; each path ends either absorbed within the containment or escaped.]
Random Walk Processes
 Each collision may be either elastic (kinetic energy
conserved but direction changed) or inelastic, again
decided randomly. In an inelastic collision the
direction of the neutron is changed and its kinetic
energy reduced. If the incident energy is below a
critical level, E*, it is assumed to be absorbed.
Repeated random walk processes allow the
probability of the neutron escaping, Q(E0, d), to be
estimated. Other input parameters include the
relative probability of elastic or inelastic collision and
the mean energy loss in an inelastic collision. Both
are functions of the material from which the shield is
constructed.
Random Walk Processes
 Let the outcome of an individual trial be:

  X(i) = 1  Neutron escapes
  X(i) = 0  Otherwise

Then

  Q̂(E0, d) = (1/n) Σᵢ X(i), i = 1 … n

is an unbiased estimator of Q(E0, d). Moreover, it can
be shown that the variance is

  var Q̂(E0, d) = (1/n) Q(E0, d) (1 − Q(E0, d))
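A heavily simplified sketch of the whole simulation (my illustration: a 1-D slab of thickness d rather than the cylindrical shield, with made-up values for E0, the mean free path, the elastic probability, the fractional energy loss and E*). Each trial is one random walk; the fraction of escapes gives Q̂(E0, d), with variance Q̂(1 − Q̂)/n.

```python
import math
import random

def escape_probability(n_trials, E0=10.0, d=5.0, mfp=1.0, p_elastic=0.5,
                       frac_loss=0.3, E_star=1.0, seed=4):
    # Simplified 1-D slab model with illustrative parameters. A neutron
    # enters at x = 0 heading into the shield; free-flight lengths are
    # exponential with mean mfp. Elastic collisions only redirect the
    # neutron; inelastic ones also remove a fraction frac_loss of its
    # energy, and below E_star the neutron is absorbed.
    rng = random.Random(seed)
    escapes = 0
    for _ in range(n_trials):
        x, mu, E = 0.0, 1.0, E0           # position, direction cosine, energy
        while True:
            x += mu * rng.expovariate(1.0 / mfp)
            if x >= d:                    # X(i) = 1: escaped the shield
                escapes += 1
                break
            if x <= 0.0:                  # X(i) = 0: returned to source side
                break
            mu = rng.uniform(-1.0, 1.0)   # collision: new random direction
            if rng.random() > p_elastic:  # inelastic collision
                E *= 1.0 - frac_loss
                if E < E_star:            # X(i) = 0: absorbed
                    break
    Q = escapes / n_trials
    var_Q = Q * (1.0 - Q) / n_trials
    return Q, var_Q

Q, var_Q = escape_probability(20_000)
print(f"Q = {Q:.3f} +/- {math.sqrt(var_Q):.4f}")
```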
Random Processes
 Finally, we note that the last two examples are of
Monte Carlo techniques applied to stochastic
processes, while the first example is a non-stochastic
process.
 When considering Monte Carlo simulations it must be
borne in mind that the simulation can only be as good
as the random number generator used. All computers
generate pseudo-random numbers rather than truly
random numbers, usually by taking the modulus of a
complex binary operation. It is important that the
distribution of ‘random’ numbers be unbiased.
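As a minimal illustration of such a generator, the classic Park–Miller ‘minimal standard’ multiplicative congruential scheme: each value is the modulus of a multiplication. A crude check of unbiasedness is that the sample mean should sit near 0.5; real simulations should use a well-tested library generator.

```python
def lcg(seed, n, a=16807, m=2**31 - 1):
    # Park-Miller 'minimal standard' generator: x <- (a * x) mod m,
    # returned as floats in (0, 1). Pseudo-random, not truly random.
    x = seed
    values = []
    for _ in range(n):
        x = (a * x) % m
        values.append(x / m)
    return values

sample = lcg(12345, 100_000)
print(sum(sample) / len(sample))  # near 0.5 if the generator is unbiased
```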
Further Reading
 1/ ‘Monte Carlo – Concepts, Algorithms &
Applications’, G.S. Fishman.
 2/ ‘A Guide to Monte Carlo Simulation in Statistical
Physics’, D.P. Landau & K. Binder.
 3/ ‘Monte Carlo Simulation in Statistical Physics’, K.
Binder & D.W. Heermann.