Monte Carlo Pi

Math & Simulation

Estimates the value of pi by randomly sampling points inside a unit square.

This visualization estimates the value of pi using a Monte Carlo method -- a technique that uses random sampling to approximate numerical results. The name "Monte Carlo" was suggested by Nicholas Metropolis in the 1940s for the secret simulation work of Stanislaw Ulam and John von Neumann on nuclear weapons at Los Alamos, after the famous Monte Carlo Casino in Monaco as a nod to the central role of randomness. The idea of estimating pi by random sampling, however, traces back to Buffon's needle problem, posed by Georges-Louis Leclerc, Comte de Buffon, in 1777 -- one of the earliest known problems in geometric probability. This visualization demonstrates the core Monte Carlo concept: randomly scattering points in a square and using the fraction that fall inside an inscribed quarter circle to estimate pi.

How It Works

  1. Setup: Consider a unit square (1 by 1) with a quarter circle of radius 1 inscribed in its corner. The quarter circle's area is exactly pi/4, while the square's area is 1.
  2. Sample: Generate random points (x, y) where both x and y are uniformly distributed between 0 and 1.
  3. Test: Check if each point falls inside the quarter circle by evaluating whether x squared plus y squared is less than or equal to 1 -- the condition for lying within a circle of radius 1 centered at the origin.
  4. Estimate: The ratio of points inside the circle to total points approximates pi/4, since a uniformly random point lands inside the quarter circle with probability equal to its area relative to the square.
  5. Calculate: Multiply the ratio by 4 to get the estimate of pi. As more points are sampled, the estimate converges toward the true value of 3.14159265...
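The five steps above can be sketched in a few lines of Python (a minimal illustration; the `estimate_pi` name and the fixed seed are assumptions for reproducibility, not part of the visualization):

```python
import random

def estimate_pi(n, seed=0):
    """Estimate pi: sample n points in the unit square and count
    how many land inside the quarter circle x^2 + y^2 <= 1."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:  # point lies inside the quarter circle
            inside += 1
    return 4.0 * inside / n  # inside/n approximates pi/4

print(estimate_pi(1_000_000))
```

With a million points the printed value typically agrees with pi to about three decimal places, matching the convergence table below.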

The Mathematics Behind It

The method works because of the law of large numbers from probability theory: if you generate n random points uniformly in the unit square, the fraction that lands inside the quarter circle converges to the true probability pi/4 as n approaches infinity. Multiplying by 4 gives an unbiased estimator of pi -- its expected value is exactly pi no matter how many points are used.
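Unbiasedness can be checked empirically: any single run is noisy, but the average of many independent estimates clusters around pi. A small sketch (the helper name and seed are illustrative):

```python
import random

def estimate_pi(n, rng):
    """One plain Monte Carlo estimate of pi from n uniform points."""
    inside = sum(1 for _ in range(n)
                 if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * inside / n

rng = random.Random(42)
# Each 1,000-point estimate is noisy, but by the law of large numbers
# the mean of many independent estimates settles near pi.
runs = [estimate_pi(1_000, rng) for _ in range(500)]
mean = sum(runs) / len(runs)
print(f"mean of 500 estimates: {mean:.4f}")
```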

Convergence

Points         Typical accuracy
100            ~1 decimal place
10,000         ~2 decimal places
1,000,000      ~3 decimal places
100,000,000    ~4 decimal places

The error decreases as O(1/sqrt(n)), so each additional digit of accuracy requires 100 times more points. This slow convergence is the fundamental limitation of all basic Monte Carlo methods. After one million points, you typically have only three correct decimal places of pi -- vastly inferior to dedicated algorithms like the Chudnovsky formula, which can compute billions of digits. However, the Monte Carlo approach generalizes to problems where no such specialized formula exists.
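The 100-times-more-points-per-digit rule can be verified by measuring the root-mean-square error over many independent runs (a sketch; the helper name, seed, and trial count are assumptions): 100 times more points should shrink the RMS error by roughly a factor of ten.

```python
import math
import random

def estimate_pi(n, rng):
    """One plain Monte Carlo estimate of pi from n uniform points."""
    inside = sum(1 for _ in range(n)
                 if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * inside / n

rng = random.Random(1)
trials = 50
rms_err = {}
for n in (100, 10_000):
    # Root-mean-square error of the estimator over many runs of size n.
    rms_err[n] = math.sqrt(sum((estimate_pi(n, rng) - math.pi) ** 2
                               for _ in range(trials)) / trials)
    print(f"{n:>6} points: RMS error ~ {rms_err[n]:.4f}")
```

The ratio of the two printed errors should land near ten, reflecting the O(1/sqrt(n)) rate.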

Variance Reduction

Several techniques can improve convergence beyond basic random sampling. Stratified sampling divides the square into subregions and samples from each, ensuring more uniform coverage. Importance sampling concentrates points near the circle boundary where the classification matters most. Antithetic variates pair each point (x, y) with (1-x, 1-y) to reduce variance. These techniques are widely used in production Monte Carlo systems.
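As a sketch of the first of these techniques, stratified sampling can be implemented by placing one random point in each cell of a k-by-k grid instead of sampling the whole square at random (the `stratified_pi` name is hypothetical, not from the visualization):

```python
import random

def stratified_pi(k, rng):
    """Stratified sampling: one uniform point per cell of a k-by-k grid,
    for n = k*k points total. Only cells crossed by the circle boundary
    contribute variance, so the estimate is steadier than plain sampling."""
    inside = 0
    for i in range(k):
        for j in range(k):
            x = (i + rng.random()) / k  # uniform within cell (i, j)
            y = (j + rng.random()) / k
            if x * x + y * y <= 1.0:
                inside += 1
    return 4.0 * inside / (k * k)

rng = random.Random(7)
print(stratified_pi(100, rng))  # 10,000 stratified points
```

With 10,000 stratified points the error is typically well below that of 10,000 plain random points, since most grid cells are classified deterministically.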

When It's Used

Monte Carlo methods are used throughout science and engineering when analytical solutions are intractable. In finance, they price complex derivatives and model portfolio risk. In physics, they simulate particle interactions and compute quantum mechanical properties. In computer graphics, ray tracing and path tracing use Monte Carlo sampling to render photorealistic images. In statistics, Markov Chain Monte Carlo (MCMC) methods underpin modern Bayesian inference. Pi estimation is a simple, visual introduction to this entire family of techniques, demonstrating both their power -- they work for any geometry without requiring a formula -- and their fundamental limitation of slow convergence.