evidence

This page is on the Bayesian evidence, with the notation $\text{Pr}(D \vert M)$: the probability of data $D$ under model $M$.

Written in an alternative form, Bayes' rule is:

$$p(\Theta) = \frac{\mathcal{L}(\Theta)\pi(\Theta)}{\mathcal{Z}}$$

where $\mathcal{Z}=\int_{\Omega_{\Theta}} \mathcal{L}(\Theta)\pi(\Theta)~d\Theta$; i.e. the posterior $p(\Theta)$ is given by the normalized product of likelihood and prior, with $\Theta$ ranging over the hypothesis space $\Omega_\Theta$. In essence, the posterior is just a set of weights over $\Theta$.
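
As a concrete illustration (a toy 1D Gaussian model with known noise, assumed here purely for the sketch), the normalization can be done by brute-force quadrature over a parameter grid, which makes the "weights over $\Theta$" reading explicit:

```python
import numpy as np

# Toy setup (assumed for illustration): data ~ N(theta_true=1, sigma=0.5),
# standard normal prior pi(theta) = N(0, 1) on a single parameter theta.
rng = np.random.default_rng(0)
sigma = 0.5
data = rng.normal(loc=1.0, scale=sigma, size=20)

theta = np.linspace(-5.0, 5.0, 2001)                  # grid over Omega_Theta
dtheta = theta[1] - theta[0]
prior = np.exp(-0.5 * theta**2) / np.sqrt(2 * np.pi)  # pi(theta) = N(0, 1)

# Log-likelihood log L(theta) = sum_i log N(d_i | theta, sigma) on the grid
log_like = np.sum(
    -0.5 * ((data[None, :] - theta[:, None]) / sigma) ** 2
    - np.log(sigma * np.sqrt(2 * np.pi)),
    axis=1,
)
like = np.exp(log_like - log_like.max())              # shift for numerical stability

# Z = integral of L(theta) * pi(theta) d(theta); posterior = L * pi / Z
Z_shifted = np.sum(like * prior) * dtheta
posterior = like * prior / Z_shifted                  # normalized weights over theta
log_Z = np.log(Z_shifted) + log_like.max()            # undo the stability shift

print(f"log-evidence log Z ~ {log_Z:.3f}")
print(f"posterior integrates to {np.sum(posterior) * dtheta:.3f}")  # ~1
```

Grid quadrature like this only scales to a handful of parameters, which is exactly why dedicated evidence estimators are needed in practice.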

The motivation behind this is to be able to compare models $M$, which involves integration over volumes of parameter space, as opposed to the posterior surface sampled by conventional MCMC.
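
Concretely, comparing two models $M_1$ and $M_2$ on the same data $D$ amounts to taking the ratio of their evidences (the Bayes factor, assuming equal prior odds on the models):

$$K_{12} = \frac{\text{Pr}(D \vert M_1)}{\text{Pr}(D \vert M_2)} = \frac{\mathcal{Z}_1}{\mathcal{Z}_2}$$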

One way to estimate $\mathcal{Z}$ is (dynamic) [[nested-sampling]].
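
A minimal sketch of that, assuming the `dynesty` package (plus scipy for the inverse normal CDF) and the same toy Gaussian model as above; the sampler reports an estimate of $\ln\mathcal{Z}$ and its uncertainty directly:

```python
import numpy as np
from scipy.special import ndtri
from dynesty import DynamicNestedSampler

# Same toy model as the quadrature sketch above (assumed, for illustration only)
rng = np.random.default_rng(0)
sigma = 0.5
data = rng.normal(loc=1.0, scale=sigma, size=20)

def log_likelihood(theta):
    # log L(theta) = sum_i log N(d_i | theta, sigma)
    return np.sum(
        -0.5 * ((data - theta[0]) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))
    )

def prior_transform(u):
    # Map the unit cube to the N(0, 1) prior via the inverse CDF
    return ndtri(u)

sampler = DynamicNestedSampler(log_likelihood, prior_transform, ndim=1)
sampler.run_nested()
res = sampler.results

# logz accumulates over iterations; the final entry is the evidence estimate
print(f"ln Z = {res.logz[-1]:.3f} +/- {res.logzerr[-1]:.3f}")
```

Running the same setup through both sketches is a quick sanity check that the nested-sampling estimate agrees with the brute-force integral in low dimensions.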