Eugenia Koblents Lapteva, a PhD student in the Signal Processing Group of Universidad Carlos III de Madrid, will defend her doctoral thesis, titled «Nonlinear Population Monte Carlo Methods for Bayesian Inference», on March 5th.
- Title: «Nonlinear Population Monte Carlo Methods for Bayesian Inference»
- Advisor: Joaquín Míguez Arenas.
- Event Date: Thursday, March 5, 2015, 11:30 am.
- Location: Room Adoración de Miguel (1.2.C16), Agustín de Betancourt Building, Leganés Campus, Universidad Carlos III de Madrid.
In the present work we address the problem of Monte Carlo approximation of posterior probability distributions and associated integrals in the Bayesian framework. In particular, we investigate a technique known as population Monte Carlo (PMC), which is based on an iterative importance sampling (IS) approach. The PMC method displays important advantages over the widely used family of Markov chain Monte Carlo (MCMC) algorithms. Unlike MCMC methods, the PMC algorithm yields independent samples, allows for a simpler parallel implementation, and does not require a convergence (burn-in) period. However, both IS and PMC suffer from the well-known problem of degeneracy of the importance weights (IWs), which is closely related to the curse of dimensionality and limits their applicability in large-scale practical problems.
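As a rough illustration of the degeneracy problem described above (not taken from the thesis), the sketch below performs one importance sampling step and measures the effective sample size (ESS), a standard diagnostic that collapses toward 1 when a few weights dominate. The target, the proposals, and all function names are illustrative assumptions.

```python
import numpy as np

def importance_sampling(target_logpdf, proposal_sample, proposal_logpdf, n):
    """Basic IS step: estimate the target mean from samples of a proposal,
    and report the effective sample size (ESS) of the normalised weights."""
    x = proposal_sample(n)                        # draw n samples from the proposal
    logw = target_logpdf(x) - proposal_logpdf(x)  # log importance weights
    w = np.exp(logw - logw.max())                 # subtract max for numerical stability
    w /= w.sum()                                  # normalise the weights
    ess = 1.0 / np.sum(w ** 2)                    # low ESS signals weight degeneracy
    return np.sum(w * x), ess

rng = np.random.default_rng(0)
target_logpdf = lambda x: -0.5 * x ** 2           # target: N(0, 1), up to a constant

# Well-adapted proposal N(0, 1): weights are uniform, ESS equals n.
mean_good, ess_good = importance_sampling(
    target_logpdf, lambda n: rng.normal(0.0, 1.0, n),
    lambda x: -0.5 * x ** 2, 5000)

# Poorly adapted proposal N(3, 1): a few samples dominate, ESS collapses.
mean_bad, ess_bad = importance_sampling(
    target_logpdf, lambda n: rng.normal(3.0, 1.0, n),
    lambda x: -0.5 * (x - 3.0) ** 2, 5000)
```

With the mismatched proposal, the ESS drops by orders of magnitude even though both estimators use the same number of samples, which is the degeneracy that worsens with dimension.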
In this thesis we present a novel family of PMC algorithms that specifically addresses the degeneracy problem arising in high-dimensional settings. In particular, we propose to apply nonlinear transformations to the IWs in order to smooth their variations and increase the efficiency of the underlying IS procedure, especially when drawing from proposal functions that are poorly adapted to the true posterior. This technique, termed nonlinear PMC (NPMC), avoids the need for a careful selection of the proposal distribution and can be applied in fairly general settings. We propose a basic NPMC algorithm with a multivariate Gaussian proposal distribution, which is better suited for unimodal target distributions. For general multimodal target distributions, we propose a nonlinear extension of the mixture PMC (MPMC) algorithm, termed the adaptive nonlinear MPMC (NMPMC) method, which constructs the importance functions as mixtures of kernels. Additionally, the new technique incorporates an adaptation step for the number of mixture components, which provides valuable information about the target distribution.
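One example of a nonlinear transformation that smooths the variation of the IWs is clipping: the largest log-weights are truncated at the value of the m-th largest, flattening the extreme tail of the weight distribution at the cost of a controlled bias. The sketch below is a minimal illustration of this idea under our own naming, not the exact procedure from the thesis.

```python
import numpy as np

def clip_weights(logw, m):
    """Nonlinear transformation of log-IWs: clip everything above the
    m-th largest log-weight (requires 1 <= m <= len(logw)), then
    normalise. This tempers the dominance of a few large weights."""
    logw = np.asarray(logw, dtype=float)
    thresh = np.sort(logw)[-m]          # m-th largest log-weight
    logw_t = np.minimum(logw, thresh)   # truncate the extreme log-weights
    w = np.exp(logw_t - logw_t.max())   # exponentiate with a stable shift
    return w / w.sum()

# Degenerate example: one particle carries almost all the raw weight.
logw = np.zeros(100)
logw[0] = 10.0
w_clipped = clip_weights(logw, m=5)     # after clipping, weights are even again
```

In this toy case the raw normalised weights have an ESS close to 1, while the clipped weights are uniform, so the transformed IS estimator uses the whole sample.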
We also introduce a particle NPMC (PNPMC) algorithm for offline Bayesian inference in state-space models, which makes it possible to approximate the posterior distribution of both the model parameters and the hidden states given a set of observed data. A major difficulty associated with this problem is that the likelihood function becomes intractable in general nonlinear, non-Gaussian state-space models. To overcome this drawback, the new technique resorts to a particle filter (PF) approximation of the likelihood, in a manner equivalent to the widely used particle MCMC (PMCMC) algorithm.
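The abstract does not give implementation details, but the idea of replacing an intractable likelihood with a PF estimate can be sketched with a bootstrap particle filter on a toy linear-Gaussian model. The model, the interface, and all names below are assumptions for illustration only.

```python
import numpy as np

def pf_loglik(y, theta, n_particles, rng):
    """Bootstrap PF estimate of log p(y | theta) for the toy model
        x_t = theta * x_{t-1} + u_t,   y_t = x_t + v_t,
    with u_t, v_t ~ N(0, 1) and x_0 ~ N(0, 1)."""
    x = rng.normal(0.0, 1.0, n_particles)   # particles from the prior p(x_0)
    loglik = 0.0
    for yt in y:
        x = theta * x + rng.normal(0.0, 1.0, n_particles)      # propagate particles
        logw = -0.5 * (yt - x) ** 2 - 0.5 * np.log(2 * np.pi)  # log p(y_t | x_t)
        m = logw.max()
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())       # accumulate log p(y_t | y_{1:t-1})
        x = rng.choice(x, size=n_particles, p=w / w.sum())     # multinomial resampling
    return loglik

rng = np.random.default_rng(1)
# Simulate a short observation record from the model with theta = 0.5.
theta_true, T = 0.5, 50
state, ys = 0.0, []
for _ in range(T):
    state = theta_true * state + rng.normal()
    ys.append(state + rng.normal())
ll = pf_loglik(np.array(ys), theta_true, 500, rng)
```

The returned value is an unbiased estimate of the (intractable, in general models) likelihood, which is what allows it to stand in for the exact likelihood inside PMCMC-style samplers.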