# Monte Carlo and quasi-Monte Carlo error

The most common Monte Carlo integration error measure is the mean squared error. The mean squared error of estimating the expectation of $Y$ is ${\rm Var}(Y)/n$. Similarly, for quasi-Monte Carlo, one can bound the integration error using the Koksma-Hlawka inequality, more specifically by ${\rm V}(f)D^*(P_n)$, where ${\rm V}(f)$ is the variation, in the sense of Hardy and Krause, of the function to integrate and $D^*(P_n)$ is the star discrepancy of the point set $P_n$. In high dimensions, the variation of the function is very hard to estimate, so other methods of bounding the error need to be explored.
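The root mean squared error claim for plain Monte Carlo can be checked numerically. The sketch below (not from the text; the product-form test integrand and all parameter values are assumptions for illustration) estimates the RMSE over many replications and compares it against $\sqrt{{\rm Var}(Y)/n}$:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, reps = 3, 4096, 2000

def f(x):
    # arbitrary smooth test integrand on [0,1]^d with known mean 1
    return np.prod(1.0 + 0.5 * (x - 0.5), axis=-1)

true_mean = 1.0  # each factor integrates to 1 over [0,1]

# Empirical RMSE of the n-point MC estimate, over many replications
errs = []
for _ in range(reps):
    x = rng.random((n, d))
    errs.append(f(x).mean() - true_mean)
rmse = np.sqrt(np.mean(np.square(errs)))

# Theoretical prediction sqrt(Var(Y)/n), with Var(Y) estimated once
var_y = f(rng.random((200_000, d))).var()
print(rmse, np.sqrt(var_y / n))
```

The two printed numbers should agree up to the sampling noise in the RMSE estimate itself, consistent with the ${\rm Var}(Y)/n$ rate.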

A different perspective on these error measures can be described as follows. Consider the Fourier series decomposition of the function to integrate,

$$ f(\vec{x}) = \sum_{\vec{\kappa}} \hat{f}(\vec{\kappa})\, \phi_{\vec{\kappa}}(\vec{x}), $$

where the $\hat{f}(\vec{\kappa})$ are the Fourier coefficients with respect to an orthonormal basis $\{\phi_{\vec{\kappa}}\}$ indexed by wavenumbers $\vec{\kappa}$.

Many details are skipped here, but the most important points to keep in mind are that the wavenumber space is given a special ordering, and that the orthonormal basis $\phi_{\vec{\kappa}}(\vec{x})$ is chosen according to the type of points used (particularly rank-1 lattices and digital nets).

Given this decomposition, the numerical integration error is

$$ \frac{1}{n}\sum_{i=0}^{n-1} f(\vec{x}_i) - \int_{[0,1]^d} f(\vec{x})\,{\rm d}\vec{x} = \sum_{\vec{\kappa} \neq \vec{0}} \hat{f}(\vec{\kappa}) \left( \frac{1}{n}\sum_{i=0}^{n-1} \phi_{\vec{\kappa}}(\vec{x}_i) \right). $$

For a rank-1 lattice with a matched basis, the inner average equals $1$ when $\vec{\kappa}$ lies in the dual lattice and $0$ otherwise, so only the dual-lattice coefficients contribute; for IID points, every nonzero coefficient contributes on average.

Therefore, the error for IID MC comes from the sum of all the Fourier coefficients, while for qMC it comes only from a subset of them. This analysis can suggest which method will perform better for a particular integrand. In addition, one can see that adaptive algorithms need to be designed differently for each case.
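The dual-lattice picture can be made concrete in one dimension, where an $n$-point rank-1 lattice is simply the equispaced set $\{i/n\}$. The sketch below (an assumed test setup, not from the text; the cosine integrand and its coefficients are chosen for illustration) verifies that the lattice rule's error is exactly the sum of the Fourier coefficients whose frequencies lie on the dual lattice, i.e. the multiples of $n$:

```python
import numpy as np

# In 1D, the n-point rank-1 lattice is the equispaced set {i/n}.
# Averaging cos(2*pi*k*x) over these points gives 0 unless n divides k,
# so the lattice error is the sum of coefficients on the dual lattice.
n = 8
x = np.arange(n) / n

# Integrand with a few known cosine coefficients c_k at frequencies k
coeffs = {1: 0.5, 3: 0.25, 8: 0.125, 16: 0.0625}
def f(t):
    return sum(c * np.cos(2 * np.pi * k * t) for k, c in coeffs.items())

true_integral = 0.0  # every k is nonzero, so the exact integral vanishes
lattice_error = f(x).mean() - true_integral

# Predicted error: sum of coefficients with k on the dual lattice of P_n
predicted = sum(c for k, c in coeffs.items() if k % n == 0)
print(lattice_error, predicted)  # both equal 0.1875
```

Only the $k=8$ and $k=16$ terms survive the averaging; the low-frequency coefficients, which would all contribute to the IID MC error, are integrated exactly by the lattice rule.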