evidence
This page is on the Bayesian evidence, with the notation $Z = p(D \mid M)$: the probability of data $D$ under model $M$.
In an alternative way of writing Bayes' rule:
$$p(\theta \mid D) = \frac{p(D \mid \theta)\, p(\theta)}{Z}$$
where $Z = \int_\Theta p(D \mid \theta)\, p(\theta)\, \mathrm{d}\theta$; i.e. the posterior is given by the normalized product of likelihood and prior, with $\theta$ in the hypothesis space $\Theta$. In essence, the posterior is just a vector of weights assigned to the hypotheses $\theta \in \Theta$.
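The "vector of weights" view can be made concrete on a discrete hypothesis grid, where the evidence $Z$ is just the sum that normalizes likelihood times prior. A minimal sketch (all numbers and variable names here are illustrative, not from this note):

```python
import numpy as np

# Hypothesis space: a grid of possible coin biases theta in [0, 1]
theta = np.linspace(0.0, 1.0, 101)
prior = np.ones_like(theta) / len(theta)  # uniform prior weights

# Likelihood of observing 7 heads in 10 flips under each hypothesis
heads, flips = 7, 10
likelihood = theta**heads * (1.0 - theta)**(flips - heads)

unnormalized = likelihood * prior
Z = unnormalized.sum()          # evidence: the normalizing constant
posterior = unnormalized / Z    # a vector of weights summing to 1

print(theta[np.argmax(posterior)])  # posterior peaks at theta = 0.7
```

The posterior really is nothing more than the prior weights re-scaled by how well each hypothesis explains the data, with $Z$ restoring normalization.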
The motivation behind this is to be able to compare between models $M_i$, which involves integrating over volumes of parameter space, as opposed to the posterior surface sampled by conventional MCMC.
One way to do this is using (dynamic) [[nested-sampling]].
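In low dimensions the model-comparison integral can be done by brute-force quadrature, which illustrates what nested sampling computes more cleverly in high dimensions. A hypothetical sketch comparing a fixed-parameter model against one with a free location parameter under a uniform prior (the data, models, and grid are all invented for illustration):

```python
import numpy as np

# Toy data, assumed drawn near zero with known sigma = 0.2
data = np.array([0.1, -0.2, 0.15, 0.05, -0.1])
sigma = 0.2

def log_likelihood(mu):
    """Gaussian log-likelihood of the data for location mu."""
    return (-0.5 * np.sum((data - mu) ** 2) / sigma**2
            - len(data) * np.log(sigma * np.sqrt(2 * np.pi)))

# Model 1: mu fixed at 0, no free parameters -> Z1 is just the likelihood
Z1 = np.exp(log_likelihood(0.0))

# Model 2: mu free, uniform prior on [-1, 1] -> integrate over the volume
mu_grid = np.linspace(-1.0, 1.0, 2001)
prior_density = 1.0 / 2.0  # uniform density on an interval of width 2
like = np.exp(np.array([log_likelihood(m) for m in mu_grid]))
Z2 = np.sum(like * prior_density) * (mu_grid[1] - mu_grid[0])

bayes_factor = Z1 / Z2  # > 1 here: the simpler model wins (Occam penalty)
```

The free parameter in model 2 dilutes its evidence across prior volume where the likelihood is negligible, which is the automatic Occam's-razor effect of the evidence; nested sampling performs this same integral when a grid is infeasible.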
Backlinks
nested-sampling
Sampling bounded shells of the hypothesis space, in order to efficiently compute the Bayesian [[evidence]].
bayes-rule
where $p(y)$ is the [[evidence]], $p(x)$ is the [[prior]], $p(x \vert y)$ is the [[posterior distribution]], and $p(y \vert x)$ is the [[conditional-distribution]] known as the likelihood.