## 2015

## Inproceedings

Martino, Luca; Elvira, Victor; Luengo, David; Artés-Rodríguez, Antonio; Corander, J.: Smelly Parallel MCMC Chains. In: *2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*, pp. 4070–4074, IEEE, Brisbane, 2015, ISBN: 978-1-4673-6997-8.

```bibtex
@inproceedings{Martino2015a,
  title     = {Smelly Parallel MCMC Chains},
  author    = {Martino, Luca and Elvira, Victor and Luengo, David and Artés-Rodríguez, Antonio and Corander, J.},
  url       = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=7178736 http://www.tsc.uc3m.es/~velvira/papers/ICASSP2015_martino.pdf},
  doi       = {10.1109/ICASSP.2015.7178736},
  isbn      = {978-1-4673-6997-8},
  year      = {2015},
  date      = {2015-04-01},
  booktitle = {2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages     = {4070--4074},
  publisher = {IEEE},
  address   = {Brisbane},
  abstract  = {Monte Carlo (MC) methods are useful tools for Bayesian inference and stochastic optimization that have been widely applied in signal processing and machine learning. A well-known class of MC methods are Markov Chain Monte Carlo (MCMC) algorithms. In this work, we introduce a novel parallel interacting MCMC scheme, where the parallel chains share information, thus yielding a faster exploration of the state space. The interaction is carried out by generating a dynamic repulsion among the "smelly" parallel chains that takes into account the entire population of current states. The ergodicity of the scheme and its relationship with other sampling methods are discussed. Numerical results show the advantages of the proposed approach in terms of mean square error and robustness w.r.t. initial values and parameter choice.},
  keywords  = {Bayesian inference, learning (artificial intelligence), Machine learning, Markov chain Monte Carlo, Markov chain Monte Carlo algorithms, Markov processes, MC methods, MCMC algorithms, MCMC scheme, mean square error, mean square error methods, Monte Carlo methods, optimisation, parallel and interacting chains, Probability density function, Proposals, robustness, Sampling methods, Signal processing, Signal processing algorithms, signal sampling, smelly parallel chains, smelly parallel MCMC chains, Stochastic optimization},
  pubstate  = {published},
  tppubtype = {inproceedings}
}
```
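The core idea in the abstract — parallel chains that repel each other based on the whole population of current states — can be illustrated with a minimal sketch. This is not the paper's algorithm: it runs plain random-walk Metropolis chains whose proposals are nudged away from the mean of the other chains, and (unlike the paper, which discusses ergodicity properly) it ignores the proposal-asymmetry correction that the drift would require in a rigorous scheme. All names (`smelly_parallel_mh`, `repulsion`, the bimodal target) are mine, for illustration only.

```python
import numpy as np

def smelly_parallel_mh(log_target, n_chains=4, n_iters=2000, step=1.0,
                       repulsion=0.5, seed=0):
    """Parallel random-walk Metropolis chains with a population-based
    repulsion drift (illustrative sketch, not the paper's exact scheme)."""
    rng = np.random.default_rng(seed)
    states = rng.normal(0.0, 3.0, size=n_chains)      # spread-out initial states
    logp = np.array([log_target(x) for x in states])
    samples = np.empty((n_iters, n_chains))
    for t in range(n_iters):
        for i in range(n_chains):
            # Repulsion: push the proposal away from where the rest
            # of the population currently sits.
            others = np.delete(states, i)
            drift = repulsion * np.sign(states[i] - others.mean())
            prop = states[i] + drift + step * rng.normal()
            # Standard Metropolis accept/reject (asymmetry of the drifted
            # proposal is deliberately ignored in this toy version).
            if np.log(rng.uniform()) < log_target(prop) - logp[i]:
                states[i], logp[i] = prop, log_target(prop)
        samples[t] = states
    return samples

# Bimodal target: mixture of two unit-variance Gaussians at -3 and +3,
# the kind of multimodal density where interacting chains help exploration.
def log_target(x):
    return np.logaddexp(-0.5 * (x + 3.0) ** 2, -0.5 * (x - 3.0) ** 2)

samples = smelly_parallel_mh(log_target)
```

With independent chains, all four can collapse into one mode; the repulsion term makes a chain's proposals lean away from wherever the population has accumulated, encouraging coverage of both modes.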

## 2010

## Journal Articles

Zoubir, A.; Viberg, M.; Yang, B.; Miguez, Joaquin: Analysis of a Sequential Monte Carlo Method for Optimization in Dynamical Systems. In: *Signal Processing*, 90 (5), pp. 1609–1622, 2010.

```bibtex
@article{Zoubir2010,
  title     = {Analysis of a Sequential Monte Carlo Method for Optimization in Dynamical Systems},
  author    = {Zoubir, A. and Viberg, M. and Yang, B. and Miguez, Joaquin},
  url       = {http://www.sciencedirect.com/science/article/pii/S0165168409004708},
  year      = {2010},
  date      = {2010-01-01},
  journal   = {Signal Processing},
  volume    = {90},
  number    = {5},
  pages     = {1609--1622},
  abstract  = {We investigate a recently proposed sequential Monte Carlo methodology for recursively tracking the minima of a cost function that evolves with time. These methods, subsequently referred to as sequential Monte Carlo minimization (SMCM) procedures, have an algorithmic structure similar to particle filters: they involve the generation of random paths in the space of the signal of interest (SoI), the stochastic selection of the fittest paths and the ranking of the survivors according to their cost. In this paper, we propose an extension of the original SMCM methodology (that makes it applicable to a broader class of cost functions) and introduce an asymptotic-convergence analysis. Our analytical results are based on simple induction arguments and show how the SoI-estimates computed by an SMCM algorithm converge, in probability, to a sequence of minimizers of the cost function. We illustrate these results by means of two computer simulation examples.},
  keywords  = {Dynamic optimization, Nonlinear dynamics, Nonlinear tracking, Sequential Monte Carlo, Stochastic optimization},
  pubstate  = {published},
  tppubtype = {article}
}
```
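The three-step loop the abstract describes — generate random paths, stochastically select the fittest, rank survivors by cost — can be sketched on a toy time-varying cost. This is a sketch of the SMCM idea under my own assumptions (Gaussian jitter for the paths, `exp(-beta * cost)` fitness weights, a drifting quadratic cost), not the paper's algorithm or its convergence conditions; `smc_minimize`, `beta`, and `jitter` are names I introduce here.

```python
import numpy as np

def smc_minimize(cost_seq, n_particles=500, jitter=0.2, beta=5.0, seed=0):
    """Track the minimizer of a time-varying cost with a particle scheme
    (a sketch of the SMCM structure, not the paper's exact procedure)."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for cost in cost_seq:
        # 1. Generate random paths in the space of the signal of interest.
        particles = particles + jitter * rng.normal(size=n_particles)
        # 2. Stochastically select the fittest paths (low cost -> high weight).
        c = cost(particles)
        w = np.exp(-beta * (c - c.min()))          # shift for numerical stability
        idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())
        particles = particles[idx]
        # 3. Rank the survivors and report the best one as the estimate.
        estimates.append(particles[np.argmin(cost(particles))])
    return np.array(estimates)

# Quadratic cost whose minimizer drifts from 0 to 2 over 101 time steps.
costs = [lambda x, m=t / 50.0: (x - m) ** 2 for t in range(101)]
est = smc_minimize(costs)
```

As the minimizer drifts, the jitter step lets the particle cloud follow it, while selection keeps the cloud concentrated near the current minimum — the particle-filter-like behaviour the abstract points to.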

## 2009

## Inproceedings

Miguez, Joaquin; Maiz, Cristina S.; Djuric, Petar M.; Crisan, Dan: Sequential Monte Carlo Optimization Using Artificial State-Space Models. In: *2009 IEEE 13th Digital Signal Processing Workshop and 5th IEEE Signal Processing Education Workshop*, pp. 268–273, IEEE, Marco Island, FL, 2009.

```bibtex
@inproceedings{Miguez2009,
  title     = {Sequential Monte Carlo Optimization Using Artificial State-Space Models},
  author    = {Miguez, Joaquin and Maiz, Cristina S. and Djuric, Petar M. and Crisan, Dan},
  url       = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=4785933},
  year      = {2009},
  date      = {2009-01-01},
  booktitle = {2009 IEEE 13th Digital Signal Processing Workshop and 5th IEEE Signal Processing Education Workshop},
  pages     = {268--273},
  publisher = {IEEE},
  address   = {Marco Island, FL},
  abstract  = {We introduce a method for sequential minimization of a certain class of (possibly non-convex) cost functions with respect to a high dimensional signal of interest. The proposed approach involves the transformation of the optimization problem into one of estimation in a discrete-time dynamical system. In particular, we describe a methodology for constructing an artificial state-space model which has the signal of interest as its unobserved dynamic state. The model is "adapted" to the cost function in the sense that the maximum a posteriori (MAP) estimate of the system state is also a global minimizer of the cost function. The advantage of the estimation framework is that we can draw from a pool of sequential Monte Carlo methods, for particle approximation of probability measures in dynamic systems, that enable the numerical computation of MAP estimates. We provide examples of how to apply the proposed methodology, including some illustrative simulation results.},
  keywords  = {Acceleration, Cost function, Design optimization, discrete-time dynamical system, Educational institutions, Mathematics, maximum a posteriori estimate, maximum likelihood estimation, minimisation, Monte Carlo methods, Optimization methods, Probability distribution, sequential Monte Carlo optimization, Sequential optimization, Signal design, State-space methods, state-space model, Stochastic optimization},
  pubstate  = {published},
  tppubtype = {inproceedings}
}
```
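The construction in the abstract — an artificial state-space model whose MAP estimate coincides with a minimizer of the cost — can be sketched in one dimension. Under my own simplifying assumptions (random-walk "dynamics" as the artificial prior, `exp(-beta * cost)` as the artificial likelihood, and the best weighted particle standing in for the MAP estimate), a particle scheme settles on the global minimum of a non-convex double-well cost. This is an illustration of the idea, not the authors' method; `minimize_via_filtering` and its parameters are hypothetical names.

```python
import numpy as np

def minimize_via_filtering(cost, n_particles=1000, n_steps=30,
                           jitter=0.1, beta=2.0, seed=0):
    """Minimize a (possibly non-convex) cost by recasting it as state
    estimation: low-cost states are made the most probable ones, so a
    MAP-style point estimate is also a minimizer (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 2.0, n_particles)            # initial state particles
    for _ in range(n_steps):
        x = x + jitter * rng.normal(size=n_particles)   # artificial dynamics
        c = cost(x)
        w = np.exp(-beta * (c - c.min()))               # artificial likelihood
        x = x[rng.choice(n_particles, n_particles, p=w / w.sum())]
    return x[np.argmin(cost(x))]                     # best particle ~ MAP estimate

# Double-well cost: local minimum near +1, global minimum near -1.
cost = lambda x: (x**2 - 1.0) ** 2 + 0.3 * x
x_star = minimize_via_filtering(cost)
```

Repeated reweighting by the artificial likelihood concentrates the particle population in the deeper well, so the scheme escapes the local minimum near +1 that a gradient descent started at a positive point would get stuck in.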