Estimates the value of pi by randomly sampling points inside a unit square.
This visualization estimates the value of pi using a Monte Carlo method -- a technique that uses random sampling to approximate numerical results. The method was developed by Stanislaw Ulam and John von Neumann in the 1940s during their work on nuclear weapons at Los Alamos National Laboratory; the name "Monte Carlo," suggested by their colleague Nicholas Metropolis, refers to the famous Monte Carlo Casino in Monaco as a nod to the central role of randomness. The idea of estimating pi by random sampling, however, traces back to Buffon's needle problem, posed by Georges-Louis Leclerc, Comte de Buffon, in 1777 -- one of the earliest known problems in geometric probability. This visualization demonstrates the core Monte Carlo concept: randomly scattering points in a square and using the fraction that fall inside an inscribed quarter circle to estimate pi.
The method works because of the law of large numbers from probability theory. If you generate n random points uniformly in the unit square, the fraction that land inside the quarter circle converges to the true probability pi/4 as n approaches infinity; multiplying the observed fraction by 4 therefore yields the pi estimate. This estimate is unbiased, meaning its expected value is exactly pi regardless of how many points are used.
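A minimal sketch of the estimator in Python (the function name `estimate_pi` and the `seed` parameter are illustrative, not part of the visualization's actual code):

```python
import random

def estimate_pi(n, seed=None):
    """Estimate pi by sampling n points uniformly in the unit square.

    The fraction landing inside the inscribed quarter circle
    (x^2 + y^2 <= 1) approximates pi/4, so we multiply by 4.
    """
    rng = random.Random(seed)
    inside = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n
```

Because each point is an independent Bernoulli trial with success probability pi/4, the count of points inside follows a binomial distribution, which is where the unbiasedness comes from.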
| Points | Typical accuracy |
|---|---|
| 100 | ~1 decimal place |
| 10,000 | ~2 decimal places |
| 1,000,000 | ~3 decimal places |
| 100,000,000 | ~4 decimal places |
The error decreases as O(1/sqrt(n)), so getting one additional digit of accuracy requires 100 times more points. This slow convergence is the fundamental limitation of all basic Monte Carlo methods. After one million points, you typically have only three correct decimal places of pi -- vastly inferior to dedicated algorithms like the Chudnovsky formula that can compute billions of digits. However, the Monte Carlo approach generalizes to problems where no such specialized formula exists.
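The 1/sqrt(n) scaling can be checked with a quick seeded simulation (a hedged sketch; individual runs fluctuate, so the printed errors shrink on average rather than strictly monotonically):

```python
import math
import random

rng = random.Random(0)  # fixed seed for reproducibility
for n in (100, 10_000, 1_000_000):
    # Count points falling inside the quarter circle x^2 + y^2 <= 1.
    inside = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(n))
    err = abs(4.0 * inside / n - math.pi)
    print(f"n = {n:>9,}   |estimate - pi| = {err:.5f}")
```

Each 100x increase in n should cut the typical error by roughly a factor of 10, matching the table above.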
Several techniques can improve convergence beyond basic random sampling. Stratified sampling divides the square into subregions and samples from each, ensuring more uniform coverage. Importance sampling concentrates points near the circle boundary where the classification matters most. Antithetic variates pair each point (x, y) with (1-x, 1-y) to reduce variance. These techniques are widely used in production Monte Carlo systems.
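Of these, antithetic variates is the simplest to sketch. The pairing works here because a point near the origin (inside the quarter circle) maps to a point near (1, 1) (outside), so the paired indicators are negatively correlated and their average has lower variance. The function name below is illustrative:

```python
import random

def estimate_pi_antithetic(n_pairs, seed=None):
    """Monte Carlo pi estimate using antithetic variates.

    Each uniform point (x, y) is paired with its mirror (1-x, 1-y);
    the negatively correlated pair reduces the estimator's variance.
    """
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_pairs):
        x, y = rng.random(), rng.random()
        inside += x * x + y * y <= 1.0              # original point
        inside += (1 - x) ** 2 + (1 - y) ** 2 <= 1.0  # antithetic mirror
    return 4.0 * inside / (2 * n_pairs)
```

The estimator remains unbiased -- each point is still uniform on the unit square -- but for a fixed total number of points its variance is lower than independent sampling.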
Monte Carlo methods are used throughout science and engineering when analytical solutions are intractable. In finance, they price complex derivatives and model portfolio risk. In physics, they simulate particle interactions and compute quantum mechanical properties. In computer graphics, ray tracing and path tracing use Monte Carlo sampling to render photorealistic images. In statistics, Markov chain Monte Carlo (MCMC) methods underpin modern Bayesian inference. Estimating pi this way is a simple, visual introduction to the entire family of Monte Carlo techniques, demonstrating both the approach's power -- it works for any geometry without requiring a formula -- and its fundamental limitation of slow convergence.