Matthew Dennison

Approximating π using Monte Carlo Integration


In this example we calculate π by placing random points in the square below and counting how many fall within the circle. The ratio of points inside the circle to the total number of points (the number of MC iterations, niter) approaches the ratio of the two areas, π/4, so multiplying it by 4 gives an estimate of π.
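A rough Python sketch of the idea is given below (the unit square with an inscribed quarter circle is an assumption about the figure, but any square with an inscribed circle gives the same π/4 ratio):

import random

def estimate_pi(niter):
    """Scatter niter random points in the unit square and count how
    many land inside the quarter circle of radius 1."""
    hits = 0
    for _ in range(niter):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:   # point lies inside the circle
            hits += 1
    # fraction of points inside ~ pi/4, so multiply by 4
    return 4.0 * hits / niter

print(estimate_pi(100_000))   # typically prints something close to 3.14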

The MC iterations are split into batches of n_trial iterations. After each batch an estimate of π, πsub, is calculated. The average of these gives our estimate of π, while their standard deviation is used to find the statistical error (given here as the 95% confidence interval), shown in the top left plot. The error decreases as (niter)^-0.5, shown in the bottom left plot.
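A minimal Python sketch of this batching and error estimate follows; the 1.96 factor corresponds to a 95% confidence interval, and the exact bookkeeping used by the page is an assumption:

import math
import random

def batch_estimates(niter, n_trial):
    """Split niter MC iterations into batches of n_trial points and
    return one estimate of pi (pi_sub) per batch."""
    pi_subs = []
    for _ in range(niter // n_trial):
        hits = sum(1 for _ in range(n_trial)
                   if random.random() ** 2 + random.random() ** 2 <= 1.0)
        pi_subs.append(4.0 * hits / n_trial)
    return pi_subs

pi_subs = batch_estimates(niter=100_000, n_trial=1_000)
n_sub = len(pi_subs)
mean = sum(pi_subs) / n_sub
var = sum((p - mean) ** 2 for p in pi_subs) / n_sub
err95 = 1.96 * math.sqrt(var / n_sub)   # 95% confidence interval on the mean
print(f"pi = {mean:.4f} +/- {err95:.4f}")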

We are essentially sampling from a binomial distribution with probability P = π/4 of success (i.e. the point falls within the circle) for n = n_trial independent trials. According to the central-limit theorem, the distribution of sample means drawn from (almost) any underlying distribution approaches a normal distribution once the sample size is large enough (typically > 30), so the distribution of πsub values should be approximately normal, as shown in the bottom right plot. Try varying n_trial to see how this changes.
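The Python sketch below makes this comparison directly, histogramming πsub values and evaluating the normal distribution predicted by the central-limit theorem (mean π, standard deviation 4·sqrt(P(1−P)/n_trial)); the page's plotting is not reproduced:

import math
import random
from collections import Counter

n_trial = 100        # points per batch
n_batches = 20_000   # number of pi_sub samples to histogram

# One pi_sub estimate per batch of n_trial random points.
pi_subs = []
for _ in range(n_batches):
    hits = sum(1 for _ in range(n_trial)
               if random.random() ** 2 + random.random() ** 2 <= 1.0)
    pi_subs.append(4.0 * hits / n_trial)

# Empirical probability density: one bin per possible hit count (width 4/n_trial).
width = 4.0 / n_trial
counts = Counter(round(p / width) for p in pi_subs)

# Normal distribution predicted by the central-limit theorem.
P = math.pi / 4.0
sigma = 4.0 * math.sqrt(P * (1.0 - P) / n_trial)
for k in sorted(counts):
    x = k * width
    measured = counts[k] / (n_batches * width)
    expected = math.exp(-(x - math.pi) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))
    print(f"pi_sub = {x:5.2f}   measured pdf = {measured:5.2f}   normal pdf = {expected:5.2f}")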


π against number of MC iterations


% error in the calculation



Percentage error (given here as the 95% confidence interval) in π against the number of MC iterations, niter. Red circles show the calculated result, while the solid black line shows the expected error, given by the standard deviation of the underlying distribution divided by the square root of the number of iterations, i.e. scaling as (niter)^-0.5.
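For instance, each scaled point contributes 4 with probability P = π/4 and 0 otherwise, so the underlying distribution has standard deviation 4√(P(1−P)) ≈ 1.64; the expected 95% error is then roughly 1.96 × 1.64/√niter, or about 1% of π after 10,000 iterations.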

The error in our result is given by

error = 1.96 √( ⟨πsub²⟩ − ⟨πsub⟩² ) / √(niter/n_trial)

where ⟨...⟩ denotes the mean after niter iterations and niter/n_trial is the number of completed batches.

Distribution of results



Probability density function of the πsub values. Red circles show the calculated result; the solid black line shows the result expected from the central-limit theorem, i.e. a normal distribution.