Numerical integration

After the corresponding transformations, we can assume that the problem of interest is the estimation of the d-dimensional integral

{\color[rgb]{1,1,.99} I(f)=\int_{[0,1)^d}f(\vec{x})d\vec{x}.}
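As a concrete sketch of such a transformation (the payoff function below is made up purely for illustration), an expectation over a standard normal vector can be rewritten as an integral over the unit cube through the inverse CDF:

```python
import numpy as np
from scipy.stats import norm

# E[f(Z)] with Z ~ N(0, I_d) equals the integral of f(Phi^{-1}(u)) over [0, 1)^d,
# where Phi^{-1} is the standard normal inverse CDF applied componentwise.
def integrand_on_unit_cube(u, f):
    return f(norm.ppf(u))

# Hypothetical integrand, used only for illustration.
def gaussian_payoff(z):
    return np.exp(-0.5 * np.sum(z**2, axis=-1))

u = np.random.default_rng(0).random((10_000, 5))          # points in [0, 1)^5
print(integrand_on_unit_cube(u, gaussian_payoff).mean())   # crude estimate of E[f(Z)]
```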

One can use a classical numerical method, such as designing a particular Gaussian quadrature rule. However, most numerical integration methods suffer from the curse of dimensionality: the number of function evaluations needed for a given accuracy grows exponentially with the dimension, which shows up as a convergence rate that degrades with d. Below are the typical convergence rates of some common methods (n is the number of function evaluations, d the dimension):

Trapezoidal rule: ${\color[rgb]{1,1,.99}\mathcal{O}(n^{-2/d})}$
Simpson's rule: ${\color[rgb]{1,1,.99}\mathcal{O}(n^{-4/d})}$
IID Monte Carlo: ${\color[rgb]{1,1,.99}\mathcal{O}(n^{-1/2})}$
Quasi-Monte Carlo: ${\color[rgb]{1,1,.99}\mathcal{O}(n^{-1+\varepsilon})}$

Thus, the convergence rates of Monte Carlo and quasi-Monte Carlo methods do not depend on the dimension. These rates come from the central limit theorem (MC) and the Koksma-Hlawka inequality (qMC). A quick empirical comparison of the two is sketched below.
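This is a minimal sketch, assuming scipy.stats.qmc is available, that compares plain MC with scrambled Sobol' qMC on an arbitrary smooth test function whose exact integral is 1:

```python
import numpy as np
from scipy.stats import qmc

d = 5
f = lambda x: np.prod(1 + (x - 0.5) / 3, axis=1)   # smooth test function, exact integral = 1

rng = np.random.default_rng(42)
for m in (8, 10, 12, 14):                          # n = 2**m points
    n = 2**m
    x_mc = rng.random((n, d))
    x_qmc = qmc.Sobol(d, scramble=True, seed=rng).random_base2(m)
    err_mc = abs(f(x_mc).mean() - 1.0)
    err_qmc = abs(f(x_qmc).mean() - 1.0)
    print(f"n=2^{m:2d}  MC error={err_mc:.2e}  qMC error={err_qmc:.2e}")
```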

Quasi-Monte Carlo methods

Quasi-Monte Carlo therefore seems like a good choice for estimating high-dimensional integrals. Convergence rates can be improved by scrambling Sobol' sequences, as described by Art Owen (Scrambled net variance for integrals of smooth functions), or by using Josef Dick's higher-order nets (How to generate higher order Sobol points).
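Higher-order nets are not covered here, but scrambled Sobol' sequences are readily available in SciPy's qmc module. A minimal sketch, with an arbitrary dimension, sample size, and seed:

```python
from scipy.stats import qmc

# Scrambled Sobol' sequence in d dimensions (the scramble=True option).
sampler = qmc.Sobol(d=77, scramble=True, seed=2023)
points = sampler.random_base2(m=11)   # 2**11 = 2048 points in [0, 1)^77
print(points.shape)                   # (2048, 77)
```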

However, low-discrepancy sequences (the qMC points) also have weaknesses. One can see how the uniformity of the points usually degrades in the higher dimensions, as the two projections below illustrate:

Projection 1-2: the first 2048 scrambled Sobol' points, projected onto dimensions 1 and 2.

Projection 75-77: the first 2048 scrambled Sobol' points, projected onto dimensions 75 and 77.
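A sketch to reproduce plots of this kind, assuming matplotlib and scipy.stats.qmc are available (the seed is arbitrary, and dimensions are 0-indexed in the code):

```python
import matplotlib.pyplot as plt
from scipy.stats import qmc

# First 2048 scrambled Sobol' points in 77 dimensions.
points = qmc.Sobol(d=77, scramble=True, seed=7).random_base2(m=11)

fig, axes = plt.subplots(1, 2, figsize=(10, 5))
for ax, (i, j) in zip(axes, [(0, 1), (74, 76)]):   # 0-based: dims 1-2 and 75-77
    ax.scatter(points[:, i], points[:, j], s=2)
    ax.set_title(f"Projection {i + 1}-{j + 1}")
    ax.set_aspect("equal")
plt.show()
```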


Indeed, qMC can be thought of as a highly stratified MC whose stratification is excellent in the lower dimensions but deteriorates as the dimension increases. This makes qMC a very effective method for integrals of low effective dimension. The concept of effective dimension therefore plays an important role when using qMC. As its name suggests, the effective dimension of a function tries to measure how important each dimension is in the integration process (from the probability point of view, it can be measured as the fraction of the variance explained by each dimension, or by each group of dimensions). A simple example is the function ${\color[rgb]{1,1,.99}f(x_1,x_2,x_3)=x_1}$, whose nominal dimension is 3 but whose effective dimension is 1.
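One way to see this numerically is a hand-rolled pick-and-freeze (Saltelli-style) estimator of the first-order Sobol' indices, i.e. the fraction of the variance explained by each input taken alone. This is only a sketch; for the toy function above, essentially all of the variance should be attributed to the first variable:

```python
import numpy as np

def first_order_indices(f, d, n, rng):
    """Estimate first-order Sobol' indices S_i = Var(E[f | x_i]) / Var(f)
    with a pick-and-freeze scheme."""
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(d)
    for i in range(d):
        AB = A.copy()
        AB[:, i] = B[:, i]        # A with column i taken from B
        S[i] = np.mean(fB * (f(AB) - fA)) / var
    return S

f = lambda x: x[:, 0]             # nominal dimension 3, effective dimension 1
print(first_order_indices(f, d=3, n=2**16, rng=np.random.default_rng(1)))
# roughly [1, 0, 0]: only the first variable contributes to the variance
```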

Last but not least, qMC methods are proven to work well when the integrand has bounded variation in the sense of Hardy and Krause, which is the regularity assumed by the Koksma-Hlawka inequality. Hence, functions with jumps (discontinuities not aligned with the coordinate axes typically have infinite Hardy-Krause variation) might cause problems when applying qMC.
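To experiment with this, one can compare a smooth integrand with an indicator whose jump is not aligned with the coordinate axes (both chosen arbitrarily here, each with exact integral 0.5); the error for the discontinuous integrand will typically decay more slowly:

```python
import numpy as np
from scipy.stats import qmc

# Smooth integrand vs. an indicator with a diagonal jump, both on [0, 1)^2.
smooth = lambda x: 2 * x[:, 0] * x[:, 1]             # exact integral 0.5
jump = lambda x: (x[:, 0] + x[:, 1] <= 1.0) * 1.0    # indicator, exact integral 0.5

for m in (10, 14, 18):
    x = qmc.Sobol(d=2, scramble=True, seed=0).random_base2(m)
    print(f"n=2^{m:2d}  smooth error={abs(smooth(x).mean() - 0.5):.2e}  "
          f"jump error={abs(jump(x).mean() - 0.5):.2e}")
```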