Monte Carlo Integration

Math & Simulation

Approximates definite integrals by randomly sampling points under a curve.

Monte Carlo Integration approximates the value of a definite integral -- the area under a curve -- by randomly sampling points and using statistical averaging to converge on the true answer. While numerical integration has been studied since Newton and Leibniz developed calculus in the 17th century, the Monte Carlo approach emerged in the 1940s from the work of Stanislaw Ulam and John von Neumann at Los Alamos. They realized that random sampling could solve integrals that were intractable by traditional methods, particularly the multidimensional integrals arising in nuclear physics. Today Monte Carlo integration is indispensable in fields ranging from computational physics to computer graphics, especially for high-dimensional problems where deterministic methods collapse under the curse of dimensionality.

How It Works

There are two primary Monte Carlo approaches to integration:

Hit-or-miss method:

  1. Bound the area: Define a rectangle that contains the region under the curve f(x) over the interval [a, b], with height equal to the maximum value of f in that interval.
  2. Sample randomly: Generate random points (x, y) uniformly within the bounding rectangle.
  3. Test: Check if each point falls below the curve -- that is, whether y is less than or equal to f(x).
  4. Estimate: The integral is approximately the rectangle area multiplied by the fraction of points that fell below the curve.
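The four steps above can be sketched in a few lines of Python. The function name `hit_or_miss` and the test integrand sin(x) on [0, π] (whose integral is exactly 2) are illustrative choices, not part of any standard library:

```python
import random
import math

def hit_or_miss(f, a, b, f_max, n=100_000):
    """Estimate the integral of f over [a, b] by hit-or-miss sampling.

    f_max must be an upper bound for f on [a, b]; the bounding
    rectangle then has area (b - a) * f_max.
    """
    hits = 0
    for _ in range(n):
        x = random.uniform(a, b)          # step 2: sample a point
        y = random.uniform(0.0, f_max)    # uniformly in the rectangle
        if y <= f(x):                     # step 3: below the curve?
            hits += 1
    # Step 4: rectangle area times the fraction of hits.
    return (b - a) * f_max * hits / n

# Example: the integral of sin(x) over [0, pi] is exactly 2.
estimate = hit_or_miss(math.sin, 0.0, math.pi, f_max=1.0)
```

With 100,000 samples the estimate typically lands within a few hundredths of the true value, consistent with the O(1/√n) error discussed below.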

Mean value method:

  1. Sample x values: Generate n random x values uniformly distributed in [a, b].
  2. Evaluate: Compute f(x) at each sampled point.
  3. Average: The integral is approximately (b - a) multiplied by the average of all f(x) values.

The mean value method is generally more efficient because it uses every sample to contribute information about the function's value, rather than reducing each sample to a binary hit-or-miss classification.
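The mean value method is even shorter to sketch. As above, the function name `mean_value` and the sin(x) example are illustrative assumptions:

```python
import random
import math

def mean_value(f, a, b, n=100_000):
    """Estimate the integral of f over [a, b] by the mean value method:
    (b - a) times the average of f at n uniform random samples."""
    total = sum(f(random.uniform(a, b)) for _ in range(n))
    return (b - a) * total / n

# Example: again approximates the integral of sin(x) over [0, pi].
estimate = mean_value(math.sin, 0.0, math.pi)
```

Note that every sample contributes the actual function value f(x), not just a 0-or-1 hit indicator, which is why this estimator typically has lower variance than hit-or-miss for the same sample count.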

Convergence

Method                     Error rate
Hit-or-miss Monte Carlo    O(1/√n)
Mean value Monte Carlo     O(1/√n)
Trapezoidal rule           O(1/n^2)
Simpson's rule             O(1/n^4)

For one-dimensional integrals, Monte Carlo methods converge far more slowly than classical quadrature rules. The trapezoidal rule achieves two extra digits of accuracy per 10x increase in sample points, while Monte Carlo gains only one digit per 100x increase. This makes deterministic methods vastly superior in low dimensions.
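The O(1/√n) rate can be checked empirically by measuring the root-mean-square error of repeated estimates at two sample sizes. This is a minimal sketch; the mean value estimator, the sin(x) test integral, and the helper names are assumptions for illustration:

```python
import random
import math

def mc_mean_value(f, a, b, n):
    """Mean value Monte Carlo estimate of the integral of f over [a, b]."""
    total = sum(f(random.uniform(a, b)) for _ in range(n))
    return (b - a) * total / n

def rms_error(n, trials=200, true_value=2.0):
    """RMS error of the estimator over repeated independent runs,
    against the known integral of sin(x) over [0, pi]."""
    sq_errs = [(mc_mean_value(math.sin, 0.0, math.pi, n) - true_value) ** 2
               for _ in range(trials)]
    return math.sqrt(sum(sq_errs) / trials)

# Increasing n by 16x should roughly quarter the RMS error (1/sqrt(n)).
```

Running `rms_error(250)` against `rms_error(4000)` should show roughly a 4x error reduction for the 16x increase in samples, matching the square-root law.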

The Curse of Dimensionality

The critical advantage of Monte Carlo integration emerges in higher dimensions. For a d-dimensional integral, the trapezoidal rule's error rate degrades to O(1/n^(2/d)) -- requiring exponentially more points as dimensions increase. Matching the accuracy of 100 points in 1D, where the error is about 100^(-2) = 10^(-4), a 10-dimensional trapezoidal rule would need n^(-2/10) = 10^(-4), or n = 100^10 = 10^20 points. Monte Carlo integration, by contrast, maintains its O(1/√n) error rate regardless of dimensionality. In 10 dimensions, 100 dimensions, or 1000 dimensions, the convergence rate is the same. This dimension-independence is the fundamental reason Monte Carlo methods dominate high-dimensional computation.
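A classic high-dimensional illustration is estimating the volume of the 10-dimensional unit ball by hit-or-miss sampling in the enclosing cube, a computation where any grid-based rule would already be impractical. The function name is an assumption; the exact answer, π^(d/2)/Γ(d/2 + 1), is about 2.55 for d = 10:

```python
import random
import math

def ball_volume_mc(d, n=200_000):
    """Estimate the volume of the d-dimensional unit ball by sampling
    uniformly in the cube [-1, 1]^d and counting points inside the ball."""
    hits = 0
    for _ in range(n):
        # A point is inside the ball if the sum of squared coordinates <= 1.
        if sum(random.uniform(-1.0, 1.0) ** 2 for _ in range(d)) <= 1.0:
            hits += 1
    # Cube volume 2^d times the fraction of hits.
    return (2.0 ** d) * hits / n

# Exact volume is pi^5 / 120, approximately 2.55, for d = 10.
volume = ball_volume_mc(10)
```

The cost grows only linearly in d (one coordinate per dimension per sample), while a grid with even 10 points per axis would need 10^10 function evaluations.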

Variance Reduction Techniques

Several techniques improve upon basic Monte Carlo sampling. Importance sampling concentrates samples in regions where the integrand is large, dramatically reducing variance for peaked functions. Stratified sampling divides the domain into subregions and samples each, ensuring uniform coverage. Control variates use a known function similar to the integrand to reduce variance. Quasi-Monte Carlo methods replace random points with low-discrepancy sequences (such as Halton or Sobol sequences) that fill space more uniformly, often achieving convergence rates closer to O(1/n) in practice.
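Of the techniques above, stratified sampling is the simplest to sketch: the domain is cut into equal strata and each is sampled the same number of times, so no region is left uncovered by chance. The function name and the sin(x) example are illustrative assumptions:

```python
import random
import math

def stratified_mc(f, a, b, strata=100, per_stratum=100):
    """Stratified Monte Carlo: split [a, b] into equal-width strata,
    draw the same number of uniform samples in each, and combine the
    per-stratum averages into one integral estimate."""
    width = (b - a) / strata
    total = 0.0
    for i in range(strata):
        lo = a + i * width
        s = sum(f(random.uniform(lo, lo + width)) for _ in range(per_stratum))
        total += s / per_stratum          # mean of f within this stratum
    return width * total                  # sum of width * stratum mean

# Example: integral of sin(x) over [0, pi], exactly 2.
estimate = stratified_mc(math.sin, 0.0, math.pi)
```

Because f varies little within each narrow stratum, the combined estimate is typically far more accurate than plain Monte Carlo with the same total number of samples; importance sampling follows the same pattern but reweights samples drawn from a non-uniform proposal instead.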

When It's Used

Monte Carlo integration dominates in high-dimensional applications where traditional quadrature methods are computationally infeasible. In physics, it computes partition functions in statistical mechanics, evaluates Feynman path integrals in quantum field theory, and simulates neutron transport. In finance, it prices complex derivatives such as Asian options and mortgage-backed securities whose payoffs depend on many correlated variables. In computer graphics, path tracing algorithms use Monte Carlo integration to compute global illumination by averaging the light arriving at each pixel from thousands of random ray paths. For low-dimensional integrals (one to three dimensions), deterministic methods like Simpson's rule or Gaussian quadrature are more efficient and should be preferred.