Variational Inference
VI is another way of approximating the posterior distribution (or likelihood), but it uses a parametric distribution. This makes it computationally simpler, at the cost of an approximation error that never fully vanishes and so persists in the final distribution.
VI starts by approximating the true posterior p with a simpler distribution q (e.g., a Gaussian), and then improves this approximation by minimizing the dissimilarity between q and p. The most commonly used dissimilarity measure is the KL divergence. A popular objective used for this optimization is the ELBO: maximizing the ELBO is equivalent to minimizing KL(q || p), since the two sum to the (fixed) log evidence.
Thus, in essence, this can be seen as an extension of techniques like Expectation Maximization (EM) or Maximum A Posteriori (MAP) estimation.
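A minimal sketch of the idea, assuming PyTorch: a Gaussian q(x) = N(mu, sigma^2) is fit to a toy unnormalized log-posterior by maximizing a Monte Carlo estimate of the ELBO with the reparameterization trick. The Gamma(3, 2) target (log-transformed to the real line) and all parameter values here are hypothetical choices for illustration, not part of the original note.

```python
import torch

# Toy unnormalized log-posterior (hypothetical): a Gamma(3, 2) target over
# z > 0, transformed to the real line via z = exp(x). Including the Jacobian,
# log p(x) = 3x - 2*exp(x) up to an additive constant.
def log_p(x):
    return 3.0 * x - 2.0 * torch.exp(x)

# Variational parameters of the Gaussian approximation q(x) = N(mu, sigma^2).
mu = torch.zeros(1, requires_grad=True)
log_sigma = torch.zeros(1, requires_grad=True)

opt = torch.optim.Adam([mu, log_sigma], lr=0.05)

for step in range(2000):
    q = torch.distributions.Normal(mu, log_sigma.exp())
    x = q.rsample((256,))                      # reparameterized samples from q
    elbo = (log_p(x) - q.log_prob(x)).mean()   # Monte Carlo ELBO estimate
    loss = -elbo                               # maximize ELBO = minimize -ELBO
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"mu = {mu.item():.3f}, sigma = {log_sigma.exp().item():.3f}")
```

Because the ELBO is maximized by stochastic gradient ascent on q's parameters, the same loop structure carries over to richer variational families; only the family (and thus rsample/log_prob) changes.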
See also: evidence-lower-bound
AKA: variational Bayesian methods