## 2015

## Journal Articles

Elvira, Victor; Martino, Luca; Luengo, David; Bugallo, Monica F.: Efficient Multiple Importance Sampling Estimators. IEEE Signal Processing Letters, 22(10), pp. 1757–1761, 2015, ISSN: 1070-9908.

```bibtex
@article{Elvira2015b,
  title     = {Efficient Multiple Importance Sampling Estimators},
  author    = {Elvira, Victor and Martino, Luca and Luengo, David and Bugallo, Monica F.},
  url       = {http://ieeexplore.ieee.org/articleDetails.jsp?arnumber=7105865},
  doi       = {10.1109/LSP.2015.2432078},
  issn      = {1070-9908},
  year      = {2015},
  date      = {2015-10-01},
  journal   = {IEEE Signal Processing Letters},
  volume    = {22},
  number    = {10},
  pages     = {1757--1761},
  publisher = {IEEE},
  keywords  = {Adaptive importance sampling, classical mixture approach, computational complexity, Computational efficiency, Computer Simulation, deterministic mixture, estimation theory, Monte Carlo methods, multiple importance sampling, multiple importance sampling estimator, partial deterministic mixture MIS estimator, signal sampling, variance reduction, weight calculation},
  pubstate  = {published},
  tppubtype = {article}
}
```

Abstract: Multiple importance sampling (MIS) methods use a set of proposal distributions from which samples are drawn. Each sample is then assigned an importance weight that can be obtained according to different strategies. This work is motivated by the trade-off between variance reduction and computational complexity of the different approaches (classical vs. deterministic mixture) available for the weight calculation. A new method that achieves an efficient compromise between both factors is introduced in this letter. It is based on forming a partition of the set of proposal distributions and computing the weights accordingly. Computer simulations show the excellent performance of the associated partial deterministic mixture MIS estimator.
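The weighting trade-off this abstract describes can be sketched concretely. Below is a minimal, illustrative comparison of the three weighting schemes on a toy 1-D problem; the Gaussian proposals, the two-block partition, and all function names are our assumptions for illustration, not taken from the paper.

```python
import numpy as np

def gauss_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

rng = np.random.default_rng(0)
mus = np.array([-3.0, -1.0, 1.0, 3.0])   # N = 4 Gaussian proposals (toy choice)
sigmas = np.ones(4)
xs = rng.normal(mus, sigmas)             # one sample x_n from each proposal q_n

def target_pdf(x):                       # toy target: standard Gaussian
    return gauss_pdf(x, 0.0, 1.0)

# Classical MIS weight: w_n = pi(x_n) / q_n(x_n) -- O(N) density evaluations.
w_classical = target_pdf(xs) / gauss_pdf(xs, mus, sigmas)

# Full deterministic-mixture weight: w_n = pi(x_n) / ((1/N) sum_k q_k(x_n)).
# Lower variance, but O(N^2) density evaluations.
mix = np.mean(gauss_pdf(xs[:, None], mus[None, :], sigmas[None, :]), axis=1)
w_dm = target_pdf(xs) / mix

# Partial deterministic mixture: partition the proposals and apply the
# mixture denominator only within each block (here, two blocks of two).
blocks = [np.array([0, 1]), np.array([2, 3])]
w_pdm = np.empty_like(xs)
for b in blocks:
    part = np.mean(gauss_pdf(xs[b][:, None], mus[b][None, :], sigmas[b][None, :]), axis=1)
    w_pdm[b] = target_pdf(xs[b]) / part

def sn_estimate(w):                      # self-normalized estimate of E[X]
    return np.sum(w * xs) / np.sum(w)
```

All three weightings yield consistent self-normalized estimators; the partial scheme interpolates between the O(N) classical and O(N^2) full-mixture costs.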

Martino, Luca; Elvira, Victor; Luengo, David; Corander, Jukka: An Adaptive Population Importance Sampler: Learning From Uncertainty. IEEE Transactions on Signal Processing, 63(16), pp. 4422–4437, 2015, ISSN: 1053-587X.

```bibtex
@article{Martino2015b,
  title     = {An Adaptive Population Importance Sampler: Learning From Uncertainty},
  author    = {Martino, Luca and Elvira, Victor and Luengo, David and Corander, J.},
  url       = {http://ieeexplore.ieee.org/articleDetails.jsp?arnumber=7117437},
  doi       = {10.1109/TSP.2015.2440215},
  issn      = {1053-587X},
  year      = {2015},
  date      = {2015-08-01},
  journal   = {IEEE Transactions on Signal Processing},
  volume    = {63},
  number    = {16},
  pages     = {4422--4437},
  publisher = {IEEE},
  keywords  = {Adaptive importance sampling, adaptive multiple IS, adaptive population importance sampler, AMIS, APIS, Importance sampling, IS estimators, iterative estimation, iterative methods, Monte Carlo (MC) methods, population Monte Carlo, temporal adaptation, Wireless Sensor Networks},
  pubstate  = {published},
  tppubtype = {article}
}
```

Abstract: Monte Carlo (MC) methods are well-known computational techniques, widely used in different fields such as signal processing, communications and machine learning. An important class of MC methods is composed of importance sampling (IS) and its adaptive extensions, such as population Monte Carlo (PMC) and adaptive multiple IS (AMIS). In this paper, we introduce a novel adaptive and iterated importance sampler using a population of proposal densities. The proposed algorithm, named adaptive population importance sampling (APIS), provides a global estimation of the variables of interest iteratively, making use of all the samples previously generated. APIS combines a sophisticated scheme to build the IS estimators (based on the deterministic mixture approach) with a simple temporal adaptation (based on epochs). In this way, APIS is able to keep all the advantages of both AMIS and PMC, while minimizing their drawbacks. Furthermore, APIS is easily parallelizable. The cloud of proposals is adapted in such a way that local features of the target density can be better taken into account compared to single global adaptation procedures. The result is a fast, simple, robust, and high-performance algorithm applicable to a wide range of problems. Numerical results show the advantages of the proposed sampling scheme in four synthetic examples and a localization problem in a wireless sensor network.
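The epoch-based temporal adaptation mentioned in the abstract can be illustrated with a toy sketch: after every epoch, each proposal in the cloud is moved to its own local self-normalized IS estimate. The target, sample sizes, and this particular update rule are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)
target_mu, target_s = 1.5, 1.0
log_pi = lambda x: -0.5 * ((x - target_mu) / target_s) ** 2  # log target, up to a constant

mus = np.array([-4.0, 0.0, 4.0])       # cloud of proposal means, deliberately misplaced
sigma = 1.0

for epoch in range(15):                # epochs of the temporal adaptation
    new_mus = np.empty_like(mus)
    for i, mu in enumerate(mus):
        x = rng.normal(mu, sigma, 200)                      # samples from proposal i
        log_w = log_pi(x) + 0.5 * ((x - mu) / sigma) ** 2   # log(pi/q); constants cancel
        w = np.exp(log_w - log_w.max())
        new_mus[i] = np.sum(w * x) / np.sum(w)              # local self-normalized IS mean
    mus = new_mus                      # move each proposal to its local estimate
```

Even badly initialized proposals drift toward the region where the target has mass, which is the intuition behind adapting the cloud per epoch.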

## Inproceedings

Martino, Luca; Elvira, Victor; Luengo, David; Corander, Jukka: Parallel interacting Markov adaptive importance sampling. 2015 23rd European Signal Processing Conference (EUSIPCO), pp. 499–503, IEEE, Nice, 2015, ISBN: 978-0-9928-6263-3.

```bibtex
@inproceedings{Martino2015c,
  title     = {Parallel interacting Markov adaptive importance sampling},
  author    = {Martino, Luca and Elvira, Victor and Luengo, David and Corander, Jukka},
  url       = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=7362433 http://www.eurasip.org/Proceedings/Eusipco/Eusipco2015/papers/1570111267.pdf},
  doi       = {10.1109/EUSIPCO.2015.7362433},
  isbn      = {978-0-9928-6263-3},
  year      = {2015},
  date      = {2015-08-01},
  booktitle = {2015 23rd European Signal Processing Conference (EUSIPCO)},
  pages     = {499--503},
  publisher = {IEEE},
  address   = {Nice},
  keywords  = {Adaptive importance sampling, Bayesian inference, MCMC methods, Monte Carlo methods, Parallel Chains, Probability density function, Signal processing algorithms},
  pubstate  = {published},
  tppubtype = {inproceedings}
}
```

Abstract: Monte Carlo (MC) methods are widely used for statistical inference in signal processing applications. A well-known class of MC methods is importance sampling (IS) and its adaptive extensions. In this work, we introduce an iterated importance sampler using a population of proposal densities, which are adapted according to an MCMC technique over the population of location parameters. The novel algorithm provides a global estimation of the variables of interest iteratively, using all the samples weighted according to the deterministic mixture scheme. Numerical results, on a multi-modal example and a localization problem in wireless sensor networks, show the advantages of the proposed schemes.

Elvira, Victor; Martino, Luca; Luengo, David; Corander, Jukka: A Gradient Adaptive Population Importance Sampler. 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4075–4079, IEEE, Brisbane, 2015, ISBN: 978-1-4673-6997-8.

```bibtex
@inproceedings{Elvira2015a,
  title     = {A Gradient Adaptive Population Importance Sampler},
  author    = {Elvira, Victor and Martino, Luca and Luengo, David and Corander, Jukka},
  url       = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=7178737 http://www.tsc.uc3m.es/~velvira/papers/ICASSP2015_elvira.pdf},
  doi       = {10.1109/ICASSP.2015.7178737},
  isbn      = {978-1-4673-6997-8},
  year      = {2015},
  date      = {2015-04-01},
  booktitle = {2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages     = {4075--4079},
  publisher = {IEEE},
  address   = {Brisbane},
  keywords  = {Adaptive importance sampling, gradient adaptive population, Hamiltonian Monte Carlo, Hessian matrix, Machine learning, Monte Carlo methods, population Monte Carlo (PMC), proposal densities, Signal processing, target distribution},
  pubstate  = {published},
  tppubtype = {inproceedings}
}
```

Abstract: Monte Carlo (MC) methods are widely used in signal processing and machine learning. A well-known class of MC methods is composed of importance sampling and its adaptive extensions (e.g., population Monte Carlo). In this paper, we introduce an adaptive importance sampler using a population of proposal densities. The novel algorithm dynamically optimizes the cloud of proposals, adapting them using information about the gradient and Hessian matrix of the target distribution. Moreover, a new kind of interaction in the adaptation of the proposal densities is introduced, establishing a trade-off between attaining a good performance in terms of mean square error and robustness to initialization.
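Adapting proposal locations with gradient and Hessian information, as this abstract describes, amounts to Newton-type moves on the log-target. The sketch below shows that idea on a toy 1-D Gaussian target; the target, step size, and `newton_adapt` are our illustrative assumptions, and the paper's exact update may differ.

```python
import numpy as np

# Toy 1-D Gaussian target: gradient and Hessian of log pi are known in closed
# form here; in general they would come from autodiff or hand derivation.
mu_t, s_t = 2.0, 0.7
grad_log_pi = lambda x: -(x - mu_t) / s_t ** 2
hess_log_pi = lambda x: -1.0 / s_t ** 2

def newton_adapt(mu, n_steps=5, step=1.0):
    """Damped Newton ascent on log pi: mu <- mu - step * grad(mu) / H(mu).
    Mirrors the idea of adapting proposal locations with gradient/Hessian
    information about the target."""
    for _ in range(n_steps):
        mu = mu - step * grad_log_pi(mu) / hess_log_pi(mu)
    return mu

cloud = [-5.0, 0.0, 4.0]                 # initial proposal means
adapted = [newton_adapt(m) for m in cloud]
```

For a Gaussian target a full Newton step lands exactly on the mode, so the whole cloud collapses onto `mu_t`; for multimodal targets each proposal would climb its nearest mode instead.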

Martino, Luca; Elvira, Victor; Luengo, David; Artés-Rodríguez, Antonio; Corander, Jukka: Smelly Parallel MCMC Chains. 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4070–4074, IEEE, Brisbane, 2015, ISBN: 978-1-4673-6997-8.

```bibtex
@inproceedings{Martino2015a,
  title     = {Smelly Parallel MCMC Chains},
  author    = {Martino, Luca and Elvira, Victor and Luengo, David and Artés-Rodríguez, Antonio and Corander, J.},
  url       = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=7178736 http://www.tsc.uc3m.es/~velvira/papers/ICASSP2015_martino.pdf},
  doi       = {10.1109/ICASSP.2015.7178736},
  isbn      = {978-1-4673-6997-8},
  year      = {2015},
  date      = {2015-04-01},
  booktitle = {2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages     = {4070--4074},
  publisher = {IEEE},
  address   = {Brisbane},
  keywords  = {Bayesian inference, Machine learning, Markov chain Monte Carlo, mean square error, Monte Carlo methods, parallel and interacting chains, Sampling methods, smelly parallel MCMC chains, Stochastic optimization},
  pubstate  = {published},
  tppubtype = {inproceedings}
}
```

Abstract: Monte Carlo (MC) methods are useful tools for Bayesian inference and stochastic optimization that have been widely applied in signal processing and machine learning. A well-known class of MC methods are Markov Chain Monte Carlo (MCMC) algorithms. In this work, we introduce a novel parallel interacting MCMC scheme, where the parallel chains share information, thus yielding a faster exploration of the state space. The interaction is carried out generating a dynamic repulsion among the "smelly" parallel chains that takes into account the entire population of current states. The ergodicity of the scheme and its relationship with other sampling methods are discussed. Numerical results show the advantages of the proposed approach in terms of mean square error and robustness w.r.t. initial values and parameter choice.

Luengo, David; Martino, Luca; Elvira, Victor; Bugallo, Monica F.: Efficient Linear Combination of Partial Monte Carlo Estimators. 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4100–4104, IEEE, Brisbane, 2015, ISBN: 978-1-4673-6997-8.

```bibtex
@inproceedings{Luengo2015b,
  title     = {Efficient Linear Combination of Partial Monte Carlo Estimators},
  author    = {Luengo, David and Martino, Luca and Elvira, Victor and Bugallo, Monica F.},
  url       = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=7178742 http://www.tsc.uc3m.es/~velvira/papers/ICASSP2015_luengo.pdf},
  doi       = {10.1109/ICASSP.2015.7178742},
  isbn      = {978-1-4673-6997-8},
  year      = {2015},
  date      = {2015-04-01},
  booktitle = {2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages     = {4100--4104},
  publisher = {IEEE},
  address   = {Brisbane},
  keywords  = {covariance matrices, fusion, global estimators, linear combination, minimum mean squared error estimators, Monte Carlo estimation, partial Monte Carlo estimators},
  pubstate  = {published},
  tppubtype = {inproceedings}
}
```

Abstract: In many practical scenarios, including those dealing with large data sets, calculating global estimators of unknown variables of interest becomes unfeasible. A common solution is obtaining partial estimators and combining them to approximate the global one. In this paper, we focus on minimum mean squared error (MMSE) estimators, introducing two efficient linear schemes for the fusion of partial estimators. The proposed approaches are valid for any type of partial estimators, although in the simulated scenarios we concentrate on the combination of Monte Carlo estimators due to the nature of the problem addressed. Numerical results show the good performance of the novel fusion methods with only a fraction of the cost of the asymptotically optimal solution.
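One simple instance of linearly combining partial estimators is inverse-variance weighting, a standard choice for independent unbiased estimators; the paper's two schemes are more elaborate, so treat the sketch below only as the baseline idea, with toy data of our choosing.

```python
import numpy as np

rng = np.random.default_rng(1)
true_mean = 5.0

# Partial Monte Carlo estimators computed on disjoint chunks of different
# sizes, so they have different (estimated) variances.
sizes = [50, 200, 1000]
chunks = [rng.normal(true_mean, 1.0, n) for n in sizes]
partial = np.array([c.mean() for c in chunks])
variances = np.array([c.var(ddof=1) / len(c) for c in chunks])

# Linear fusion with weights inversely proportional to the partial variances,
# normalized to sum to one so the combined estimator stays unbiased.
w = (1.0 / variances) / np.sum(1.0 / variances)
fused = float(np.sum(w * partial))
```

The largest chunk (lowest variance) receives the largest weight, and the fused estimate is typically tighter than any single partial one.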

## 2014

## Journal Articles

Read, Jesse; Martino, Luca; Luengo, David: Efficient Monte Carlo Methods for Multi-Dimensional Learning with Classifier Chains. Pattern Recognition, 47(3), pp. 1535–1546, 2014.

```bibtex
@article{Read2014,
  title   = {Efficient Monte Carlo Methods for Multi-Dimensional Learning with Classifier Chains},
  author  = {Read, Jesse and Martino, Luca and Luengo, David},
  url     = {http://www.sciencedirect.com/science/article/pii/S0031320313004160},
  year    = {2014},
  date    = {2014-01-01},
  journal = {Pattern Recognition},
  volume  = {47},
  number  = {3},
  pages   = {1535--1546},
  keywords  = {Bayesian inference, Classifier chains, Monte Carlo methods, Multi-dimensional classification, Multi-label classification},
  pubstate  = {published},
  tppubtype = {article}
}
```

Abstract: Multi-dimensional classification (MDC) is the supervised learning problem where an instance is associated with multiple classes, rather than with a single class, as in traditional classification problems. Since these classes are often strongly correlated, modeling the dependencies between them allows MDC methods to improve their performance – at the expense of an increased computational cost. In this paper we focus on the classifier chains (CC) approach for modeling dependencies, one of the most popular and highest-performing methods for multi-label classification (MLC), a particular case of MDC which involves only binary classes (i.e., labels). The original CC algorithm makes a greedy approximation, and is fast but tends to propagate errors along the chain. Here we present novel Monte Carlo schemes, both for finding a good chain sequence and performing efficient inference. Our algorithms remain tractable for high-dimensional data sets and obtain the best predictive performance across several real data sets.
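The contrast between greedy classifier-chain inference and Monte Carlo inference can be sketched in a few lines. The toy conditionals below stand in for trained per-label classifiers and are entirely our invention; the paper's schemes for chain-sequence search are not shown.

```python
import random
from collections import Counter

random.seed(0)

# Toy chain over L = 3 binary labels: each function stands in for a trained
# classifier giving p(y_j = 1 | x, y_1..y_{j-1}).
def p1(x):          return 0.4
def p2(x, y1):      return 0.9 if y1 else 0.2
def p3(x, y1, y2):  return 0.8 if (y1 and y2) else 0.1

chain = [lambda x, prev: p1(x),
         lambda x, prev: p2(x, prev[0]),
         lambda x, prev: p3(x, prev[0], prev[1])]

def greedy_cc(x):
    """Original CC inference: threshold each conditional in turn.
    Fast, but an early mistake propagates down the chain."""
    prev = []
    for cond in chain:
        prev.append(int(cond(x, prev) >= 0.5))
    return tuple(prev)

def mc_cc(x, n_samples=2000):
    """Monte Carlo inference: sample whole label vectors from the chain and
    return the most frequently drawn one (an approximate joint-MAP search)."""
    counts = Counter()
    for _ in range(n_samples):
        prev = []
        for cond in chain:
            prev.append(int(random.random() < cond(x, prev)))
        counts[tuple(prev)] += 1
    return counts.most_common(1)[0][0]
```

On this toy chain both inferences agree; the MC version pays more computation but scores entire label vectors jointly rather than label by label.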

## Inproceedings

Miguez, Joaquin: On the uniform asymptotic convergence of a distributed particle filter. 2014 IEEE 8th Sensor Array and Multichannel Signal Processing Workshop (SAM), pp. 241–244, IEEE, A Coruña, 2014, ISBN: 978-1-4799-1481-4.

```bibtex
@inproceedings{Miguez2014,
  title     = {On the uniform asymptotic convergence of a distributed particle filter},
  author    = {Miguez, Joaquin},
  url       = {http://ieeexplore.ieee.org/articleDetails.jsp?arnumber=6882385},
  doi       = {10.1109/SAM.2014.6882385},
  isbn      = {978-1-4799-1481-4},
  year      = {2014},
  date      = {2014-06-01},
  booktitle = {2014 IEEE 8th Sensor Array and Multichannel Signal Processing Workshop (SAM)},
  pages     = {241--244},
  publisher = {IEEE},
  address   = {A Coruña},
  keywords  = {ad hoc networks, approximation errors, convergence of numerical methods, distributed particle filter scheme, distributed signal processing algorithms, Monte Carlo methods, parallel computing systems, particle filtering (numerical methods), stability assumptions, uniform asymptotic convergence, Wireless Sensor Networks},
  pubstate  = {published},
  tppubtype = {inproceedings}
}
```

Abstract: Distributed signal processing algorithms suitable for implementation over wireless sensor networks (WSNs) and ad hoc networks with communications and computing capabilities have become a hot topic during the past years. One class of methods that has received special attention is particle filters. However, most distributed versions of this type of method involve various heuristic or simplifying approximations and, as a consequence, classical convergence theorems for standard particle filters do not hold for their distributed counterparts. In this paper, we look into a distributed particle filter scheme that has been proposed for implementation in both parallel computing systems and WSNs, and prove that, under certain stability assumptions regarding the physical system of interest, its asymptotic convergence is guaranteed. Moreover, we show that convergence is attained uniformly over time. This means that approximation errors can be kept bounded for an arbitrarily long period of time without having to progressively increase the computational effort.

Stinner, Markus; Olmos, Pablo M.: Analyzing Finite-length Protograph-Based Spatially Coupled LDPC Codes. 2014 IEEE International Symposium on Information Theory, pp. 891–895, IEEE, Honolulu, 2014, ISBN: 978-1-4799-5186-4.

```bibtex
@inproceedings{Stinner2014,
  title     = {Analyzing Finite-length Protograph-Based Spatially Coupled LDPC Codes},
  author    = {Stinner, Markus and Olmos, Pablo M.},
  url       = {http://ieeexplore.ieee.org/articleDetails.jsp?arnumber=6874961},
  isbn      = {978-1-4799-5186-4},
  year      = {2014},
  date      = {2014-01-01},
  booktitle = {2014 IEEE International Symposium on Information Theory},
  pages     = {891--895},
  publisher = {IEEE},
  address   = {Honolulu},
  keywords  = {binary erasure channel, covariance evolution, degree-one check nodes, finite-length scaling, mean evolution, Monte Carlo methods, parity check codes, peeling decoding, protograph-based SC-LDPC codes, spatially coupled low-density parity-check codes, stable decoding phase},
  pubstate  = {published},
  tppubtype = {inproceedings}
}
```

Abstract: The peeling decoding for spatially coupled low-density parity-check (SC-LDPC) codes is analyzed for a binary erasure channel. An analytical calculation of the mean evolution of degree-one check nodes of protograph-based SC-LDPC codes is given and an estimate for the covariance evolution of degree-one check nodes is proposed in the stable decoding phase where the decoding wave propagates along the chain of coupled codes. Both results are verified numerically. Protograph-based SC-LDPC codes turn out to have a more robust behavior than unstructured random SC-LDPC codes. Using the analytically calculated parameters, the finite-length scaling laws for these constructions are given and verified by numerical simulations.
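The peeling decoder the abstract analyzes has a very compact description on the binary erasure channel: repeatedly find a check node whose neighborhood contains exactly one erased bit and recover that bit by parity. The sketch below uses a tiny hand-made parity-check matrix, not a protograph-based SC-LDPC code, purely to show the mechanism.

```python
import numpy as np

# Tiny parity-check matrix (rows = check nodes, columns = variable nodes).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1]])

def peel(H, codeword, erased):
    """BEC peeling decoder: repeatedly find a check node with exactly one
    erased neighbour and recover that bit from the parity equation."""
    bits = codeword.copy()
    erased = set(int(v) for v in erased)
    progress = True
    while erased and progress:
        progress = False
        for row in H:
            unknown = [int(v) for v in np.flatnonzero(row) if int(v) in erased]
            if len(unknown) == 1:        # degree-one check node w.r.t. the erasures
                v = unknown[0]
                others = [int(u) for u in np.flatnonzero(row) if int(u) != v]
                bits[v] = np.bitwise_xor.reduce(bits[others])
                erased.discard(v)
                progress = True
    return bits, erased                  # leftover erasures indicate a stopping set

codeword = np.array([1, 1, 1, 0, 0, 1])  # satisfies H @ c = 0 (mod 2)
decoded, stuck = peel(H, codeword, erased=[1, 4])
```

The number of degree-one check nodes available at each step is exactly the quantity whose mean and covariance evolution the paper tracks: decoding succeeds as long as it stays positive.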

## 2013

## Inproceedings

Read, Jesse; Martino, Luca; Luengo, David: Efficient Monte Carlo Optimization for Multi-Label Classifier Chains. ICASSP 2013: The 38th International Conference on Acoustics, Speech, and Signal Processing, Vancouver, 2013.

```bibtex
@inproceedings{Read2013,
  title     = {Efficient Monte Carlo Optimization for Multi-Label Classifier Chains},
  author    = {Read, Jesse and Martino, Luca and Luengo, David},
  year      = {2013},
  date      = {2013-01-01},
  booktitle = {ICASSP 2013: The 38th International Conference on Acoustics, Speech, and Signal Processing},
  address   = {Vancouver},
  keywords  = {Bayesian inference, Classifier chains, Monte Carlo methods, Multi-dimensional classification, Multi-label classification},
  pubstate  = {published},
  tppubtype = {inproceedings}
}
```

Abstract: Multi-dimensional classification (MDC) is the supervised learning problem where an instance is associated with multiple classes, rather than with a single class, as in traditional classification problems. Since these classes are often strongly correlated, modeling the dependencies between them allows MDC methods to improve their performance at the expense of an increased computational cost. In this paper we focus on the classifier chains (CC) approach for modeling dependencies, one of the most popular and highest-performing methods for multi-label classification (MLC), a particular case of MDC which involves only binary classes (i.e., labels). The original CC algorithm makes a greedy approximation, and is fast but tends to propagate errors along the chain. Here we present novel Monte Carlo schemes, both for finding a good chain sequence and performing efficient inference. Our algorithms remain tractable for high-dimensional data sets and obtain the best predictive performance across several real data sets.

Koblents, Eugenia; Miguez, Joaquin A Population Monte Carlo Scheme for Computational Inference in High Dimensional Spaces (Inproceeding) 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 6318–6322, IEEE, Vancouver, 2013, ISSN: 1520-6149. (Abstract | Links | BibTeX | Tags: Approximation methods, computational inference, degeneracy of importance weights, high dimensional spaces, Importance sampling, importance weights, iterative importance sampling, iterative methods, mixture-PMC, mixture-PMC algorithm, Monte Carlo methods, MPMC, nonlinear transformations, population Monte Carlo, population Monte Carlo scheme, Probability density function, probability distributions, Proposals, Sociology, Standards) @inproceedings{Koblents2013a, title = {A Population Monte Carlo Scheme for Computational Inference in High Dimensional Spaces}, author = {Koblents, Eugenia and Miguez, Joaquin}, url = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6638881}, issn = {1520-6149}, year = {2013}, date = {2013-01-01}, booktitle = {2013 IEEE International Conference on Acoustics, Speech and Signal Processing}, pages = {6318--6322}, publisher = {IEEE}, address = {Vancouver}, abstract = {In this paper we address the Monte Carlo approximation of integrals with respect to probability distributions in high-dimensional spaces. In particular, we investigate the population Monte Carlo (PMC) scheme, which is based on an iterative importance sampling (IS) approach. Both IS and PMC suffer from the well-known problem of degeneracy of the importance weights (IWs), which is closely related to the curse of dimensionality, and limits their applicability in large-scale practical problems. In this paper we investigate a novel PMC scheme that consists in performing nonlinear transformations of the IWs in order to smooth their variations and avoid degeneracy.
We apply the modified IS scheme to the well-known mixture-PMC (MPMC) algorithm, which constructs the importance functions as mixtures of kernels. We present numerical results that show how the modified version of MPMC clearly outperforms the original scheme.}, keywords = {Approximation methods, computational inference, degeneracy of importance weights, high dimensional spaces, Importance sampling, importance weights, iterative importance sampling, iterative methods, mixture-PMC, mixture-PMC algorithm, Monte Carlo methods, MPMC, nonlinear transformations, population Monte Carlo, population Monte Carlo scheme, Probability density function, probability distributions, Proposals, Sociology, Standards}, pubstate = {published}, tppubtype = {inproceedings} } In this paper we address the Monte Carlo approximation of integrals with respect to probability distributions in high-dimensional spaces. In particular, we investigate the population Monte Carlo (PMC) scheme, which is based on an iterative importance sampling (IS) approach. Both IS and PMC suffer from the well-known problem of degeneracy of the importance weights (IWs), which is closely related to the curse of dimensionality, and limits their applicability in large-scale practical problems. In this paper we investigate a novel PMC scheme that consists in performing nonlinear transformations of the IWs in order to smooth their variations and avoid degeneracy. We apply the modified IS scheme to the well-known mixture-PMC (MPMC) algorithm, which constructs the importance functions as mixtures of kernels. We present numerical results that show how the modified version of MPMC clearly outperforms the original scheme. |
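A nonlinear transformation of importance weights of the kind this abstract describes can be illustrated with a simple clipping rule: flatten the few dominant unnormalized weights to a common threshold so no single sample monopolizes the estimator. The specific threshold choice below (the n_clip-th largest weight) is an assumption for illustration, not necessarily the transformation used in the paper.

```python
import numpy as np

def clip_weights(logw, n_clip):
    """Clipping transformation of importance weights (illustrative sketch).

    logw: array of log unnormalized importance weights.
    n_clip: the n_clip largest weights are flattened to the value of the
    n_clip-th largest one before normalization.
    """
    w = np.exp(logw - np.max(logw))             # numerically stable weights
    thresh = np.partition(w, -n_clip)[-n_clip]  # n_clip-th largest weight
    w_clipped = np.minimum(w, thresh)           # smooth out dominant weights
    return w_clipped / w_clipped.sum()          # renormalize
```

With a highly skewed weight set, the clipped estimator trades a small bias for a large variance reduction, which is the degeneracy trade-off the abstract refers to.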

## 2011 |

## Inproceedings |

Maiz, Cristina; Miguez, Joaquin On the Optimization of Transportation Routes with Multiple Destinations in Random Networks (Inproceeding) 2011 IEEE Statistical Signal Processing Workshop (SSP), pp. 349–352, IEEE, Nice, 2011, ISBN: 978-1-4577-0569-4. (Abstract | Links | BibTeX | Tags: Approximation algorithms, communication networks, Estimation, graph theory, Histograms, intelligent transportation, Monte Carlo algorithm, Monte Carlo methods, multiple destinations, optimisation, Optimization, random networks, route optimization, routing, Sequential Monte Carlo, Signal processing algorithms, stochastic graph, Stochastic processes, telecommunication network routing, time-varying graph, transportation routes) @inproceedings{Maiz2011, title = {On the Optimization of Transportation Routes with Multiple Destinations in Random Networks}, author = {Maiz, Cristina S. and Miguez, Joaquin}, url = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=5967701}, isbn = {978-1-4577-0569-4}, year = {2011}, date = {2011-01-01}, booktitle = {2011 IEEE Statistical Signal Processing Workshop (SSP)}, pages = {349--352}, publisher = {IEEE}, address = {Nice}, abstract = {Various practical problems in transportation research and routing in communication networks can be reduced to the computation of the best path that traverses a certain graph and visits a set of D specified destination nodes. Simple versions of this problem have received attention in the literature. Optimal solutions exist for the cases in which (a) D > 1 and the graph is deterministic or (b) D = 1 and the graph is stochastic (and possibly time-dependent). Here, we address the general problem in which both D > 1 and the costs of the edges in the graph are stochastic and time-varying. We tackle this complex global optimization problem by first converting it into an equivalent estimation problem and then computing a numerical solution using a sequential Monte Carlo algorithm.
The advantage of the proposed technique over some standard methods (devised for graphs with time-invariant statistics) is illustrated by way of computer simulations.}, keywords = {Approximation algorithms, communication networks, Estimation, graph theory, Histograms, intelligent transportation, Monte Carlo algorithm, Monte Carlo methods, multiple destinations, optimisation, Optimization, random networks, route optimization, routing, Sequential Monte Carlo, Signal processing algorithms, stochastic graph, Stochastic processes, telecommunication network routing, time-varying graph, transportation routes}, pubstate = {published}, tppubtype = {inproceedings} } Various practical problems in transportation research and routing in communication networks can be reduced to the computation of the best path that traverses a certain graph and visits a set of D specified destination nodes. Simple versions of this problem have received attention in the literature. Optimal solutions exist for the cases in which (a) D > 1 and the graph is deterministic or (b) D = 1 and the graph is stochastic (and possibly time-dependent). Here, we address the general problem in which both D > 1 and the costs of the edges in the graph are stochastic and time-varying. We tackle this complex global optimization problem by first converting it into an equivalent estimation problem and then computing a numerical solution using a sequential Monte Carlo algorithm. The advantage of the proposed technique over some standard methods (devised for graphs with time-invariant statistics) is illustrated by way of computer simulations. |

## 2010 |

## Inproceedings |

Achutegui, Katrin; Rodas, Javier; Escudero, Carlos; Miguez, Joaquin A Model-Switching Sequential Monte Carlo Algorithm for Indoor Tracking with Experimental RSS Data (Inproceeding) 2010 International Conference on Indoor Positioning and Indoor Navigation, pp. 1–8, IEEE, Zurich, 2010, ISBN: 978-1-4244-5862-2. (Abstract | Links | BibTeX | Tags: Approximation methods, Computational modeling, Data models, generalized IMM system, GIMM approach, indoor radio, Indoor tracking, Kalman filters, maneuvering target motion, Mathematical model, model switching sequential Monte Carlo algorithm, Monte Carlo methods, multipath propagation, multiple model interaction, propagation environment, radio receivers, radio tracking, radio transmitters, random processes, Rao-Blackwellized sequential Monte Carlo tracking, received signal strength, RSS data, sensors, state space model, target position dependent data, transmitter-to-receiver distance, wireless technology) @inproceedings{Achutegui2010, title = {A Model-Switching Sequential Monte Carlo Algorithm for Indoor Tracking with Experimental RSS Data}, author = {Achutegui, Katrin and Rodas, Javier and Escudero, Carlos J. and Miguez, Joaquin}, url = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=5648053}, isbn = {978-1-4244-5862-2}, year = {2010}, date = {2010-01-01}, booktitle = {2010 International Conference on Indoor Positioning and Indoor Navigation}, pages = {1--8}, publisher = {IEEE}, address = {Zurich}, abstract = {In this paper we address the problem of indoor tracking using received signal strength (RSS) as position-dependent data. This type of measurement is very appealing because it can be easily obtained with a variety of (inexpensive) wireless technologies. However, the extraction of accurate location information from RSS in indoor scenarios is not an easy task.
Due to the multipath propagation, it is hard to adequately model the correspondence between the received power and the transmitter-to-receiver distance. For that reason, we propose the use of a compound model that combines several sub-models, whose parameters are adjusted to different propagation environments. This methodology, called Interacting Multiple Models (IMM), has been used in the past either for modeling the motion of maneuvering targets or the relationship between the target position and the observations. Here, we extend its application to handle both types of uncertainty simultaneously and we refer to the resulting state-space model as a generalized IMM (GIMM) system. The flexibility of the GIMM approach is attained at the expense of an increase in the number of random processes that must be accurately tracked. To overcome this difficulty, we introduce a Rao-Blackwellized sequential Monte Carlo tracking algorithm that exhibits good performance both with synthetic and experimental data.}, keywords = {Approximation methods, Computational modeling, Data models, generalized IMM system, GIMM approach, indoor radio, Indoor tracking, Kalman filters, maneuvering target motion, Mathematical model, model switching sequential Monte Carlo algorithm, Monte Carlo methods, multipath propagation, multiple model interaction, propagation environment, radio receivers, radio tracking, radio transmitters, random processes, Rao-Blackwellized sequential Monte Carlo tracking, received signal strength, RSS data, sensors, state space model, target position dependent data, transmitter-to-receiver distance, wireless technology}, pubstate = {published}, tppubtype = {inproceedings} } In this paper we address the problem of indoor tracking using received signal strength (RSS) as position-dependent data. This type of measurement is very appealing because it can be easily obtained with a variety of (inexpensive) wireless technologies.
However, the extraction of accurate location information from RSS in indoor scenarios is not an easy task. Due to the multipath propagation, it is hard to adequately model the correspondence between the received power and the transmitter-to-receiver distance. For that reason, we propose the use of a compound model that combines several sub-models, whose parameters are adjusted to different propagation environments. This methodology, called Interacting Multiple Models (IMM), has been used in the past either for modeling the motion of maneuvering targets or the relationship between the target position and the observations. Here, we extend its application to handle both types of uncertainty simultaneously and we refer to the resulting state-space model as a generalized IMM (GIMM) system. The flexibility of the GIMM approach is attained at the expense of an increase in the number of random processes that must be accurately tracked. To overcome this difficulty, we introduce a Rao-Blackwellized sequential Monte Carlo tracking algorithm that exhibits good performance both with synthetic and experimental data. |

## 2009 |

## Inproceedings |

Martino, Luca; Miguez, Joaquin A Novel Rejection Sampling Scheme for Posterior Probability Distributions (Inproceeding) 2009 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 2921–2924, IEEE, Taipei, 2009, ISSN: 1520-6149. (Abstract | Links | BibTeX | Tags: Additive noise, arbitrary target probability distributions, Bayes methods, Bayesian methods, Monte Carlo integration, Monte Carlo methods, Monte Carlo techniques, Overbounding, posterior probability distributions, Probability density function, Probability distribution, Proposals, Rejection sampling, rejection sampling scheme, Sampling methods, Signal processing algorithms, signal sampling, Upper bound) @inproceedings{Martino2009, title = {A Novel Rejection Sampling Scheme for Posterior Probability Distributions}, author = {Martino, Luca and Miguez, Joaquin}, url = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=4960235}, issn = {1520-6149}, year = {2009}, date = {2009-01-01}, booktitle = {2009 IEEE International Conference on Acoustics, Speech and Signal Processing}, pages = {2921--2924}, publisher = {IEEE}, address = {Taipei}, abstract = {Rejection sampling (RS) is a well-known method to draw from arbitrary target probability distributions, which has important applications by itself or as a building block for more sophisticated Monte Carlo techniques. The main limitation to the use of RS is the need to find an adequate upper bound for the ratio of the target probability density function (pdf) over the proposal pdf from which the samples are generated. There are no general methods to analytically find this bound, except in the particular case in which the target pdf is log-concave. In this paper we adopt a Bayesian view of the problem and propose a general RS scheme to draw from the posterior pdf of a signal of interest using its prior density as a proposal function. 
The method enables the analytical calculation of the bound and can be applied to a large class of target densities. We illustrate its use with a simple numerical example.}, keywords = {Additive noise, arbitrary target probability distributions, Bayes methods, Bayesian methods, Monte Carlo integration, Monte Carlo methods, Monte Carlo techniques, Overbounding, posterior probability distributions, Probability density function, Probability distribution, Proposals, Rejection sampling, rejection sampling scheme, Sampling methods, Signal processing algorithms, signal sampling, Upper bound}, pubstate = {published}, tppubtype = {inproceedings} } Rejection sampling (RS) is a well-known method to draw from arbitrary target probability distributions, which has important applications by itself or as a building block for more sophisticated Monte Carlo techniques. The main limitation to the use of RS is the need to find an adequate upper bound for the ratio of the target probability density function (pdf) over the proposal pdf from which the samples are generated. There are no general methods to analytically find this bound, except in the particular case in which the target pdf is log-concave. In this paper we adopt a Bayesian view of the problem and propose a general RS scheme to draw from the posterior pdf of a signal of interest using its prior density as a proposal function. The method enables the analytical calculation of the bound and can be applied to a large class of target densities. We illustrate its use with a simple numerical example. |
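The scheme in this abstract reduces to a simple accept/reject loop once the likelihood bound is available: draw from the prior, then accept with probability equal to the likelihood divided by its upper bound. The sketch below assumes a one-dimensional signal and a given bound `lik_max`; in the paper that bound is obtained analytically, while here it is simply passed in.

```python
import random, math

def posterior_rs(sample_prior, likelihood, lik_max, n, rng=random):
    """Rejection sampling from p(x|y) ∝ likelihood(x) * prior(x), using the
    prior as proposal (illustrative sketch). lik_max must satisfy
    likelihood(x) <= lik_max for all x."""
    samples = []
    while len(samples) < n:
        x = sample_prior()                          # candidate from the prior
        if rng.random() * lik_max <= likelihood(x): # accept w.p. lik/lik_max
            samples.append(x)
    return samples
```

Using the prior as proposal makes the target/proposal ratio equal to the likelihood alone, which is why bounding the likelihood suffices.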

Achutegui, Katrin; Martino, Luca; Rodas, Javier; Escudero, Carlos; Miguez, Joaquin A Multi-Model Particle Filtering Algorithm for Indoor Tracking of Mobile Terminals Using RSS Data (Inproceeding) 2009 IEEE International Conference on Control Applications, pp. 1702–1707, IEEE, Saint Petersburg, 2009, ISBN: 978-1-4244-4601-8. (Abstract | Links | BibTeX | Tags: Bayesian methods, Control systems, Filtering algorithms, generalized interacting multiple model, GIMM, indoor radio, Indoor tracking, mobile radio, mobile terminal, Monte Carlo methods, multipath propagation, position-dependent data measurement, random process, random processes, Rao-Blackwellized sequential Monte Carlo tracking, received signal strength, RSS data, Sliding mode control, State-space methods, state-space model, Target tracking, tracking, transmitter-to-receiver distance, wireless network, wireless technology) @inproceedings{Achutegui2009, title = {A Multi-Model Particle Filtering Algorithm for Indoor Tracking of Mobile Terminals Using RSS Data}, author = {Achutegui, Katrin and Martino, Luca and Rodas, Javier and Escudero, Carlos J. and Miguez, Joaquin}, url = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=5280960}, isbn = {978-1-4244-4601-8}, year = {2009}, date = {2009-01-01}, booktitle = {2009 IEEE International Conference on Control Applications}, pages = {1702--1707}, publisher = {IEEE}, address = {Saint Petersburg}, abstract = {In this paper we address the problem of indoor tracking using received signal strength (RSS) as a position-dependent data measurement. This type of measurement is very appealing because it can be easily obtained with a variety of wireless technologies which are relatively inexpensive. The extraction of accurate location information from RSS in indoor scenarios is not an easy task, though.
Since RSS is highly influenced by multipath propagation, it turns out to be very hard to adequately model the correspondence between the received power and the transmitter-to-receiver distance. The measurement models proposed in the literature are site-specific and require a great deal of information regarding the structure of the building where the tracking will be performed, and therefore are not useful for a general application. For that reason we propose the use of a compound model that combines several sub-models, whose parameters are adjusted to specific and different propagation environments. This methodology, called interacting multiple models (IMM), has been used in the past for modeling the motion of maneuvering targets. Here, we extend its application to handle also the uncertainty in the RSS observations and we refer to the resulting state-space model as a generalized IMM (GIMM) system. The flexibility of the GIMM approach is attained at the expense of an increase in the number of random processes that must be accurately tracked. To overcome this difficulty, we introduce a Rao-Blackwellized sequential Monte Carlo tracking algorithm that exhibits good performance both with synthetic and experimental data.}, keywords = {Bayesian methods, Control systems, Filtering algorithms, generalized interacting multiple model, GIMM, indoor radio, Indoor tracking, mobile radio, mobile terminal, Monte Carlo methods, multipath propagation, position-dependent data measurement, random process, random processes, Rao-Blackwellized sequential Monte Carlo tracking, received signal strength, RSS data, Sliding mode control, State-space methods, state-space model, Target tracking, tracking, transmitter-to-receiver distance, wireless network, wireless technology}, pubstate = {published}, tppubtype = {inproceedings} } In this paper we address the problem of indoor tracking using received signal strength (RSS) as a position-dependent data measurement.
This type of measurement is very appealing because it can be easily obtained with a variety of wireless technologies which are relatively inexpensive. The extraction of accurate location information from RSS in indoor scenarios is not an easy task, though. Since RSS is highly influenced by multipath propagation, it turns out to be very hard to adequately model the correspondence between the received power and the transmitter-to-receiver distance. The measurement models proposed in the literature are site-specific and require a great deal of information regarding the structure of the building where the tracking will be performed, and therefore are not useful for a general application. For that reason we propose the use of a compound model that combines several sub-models, whose parameters are adjusted to specific and different propagation environments. This methodology, called interacting multiple models (IMM), has been used in the past for modeling the motion of maneuvering targets. Here, we extend its application to handle also the uncertainty in the RSS observations and we refer to the resulting state-space model as a generalized IMM (GIMM) system. The flexibility of the GIMM approach is attained at the expense of an increase in the number of random processes that must be accurately tracked. To overcome this difficulty, we introduce a Rao-Blackwellized sequential Monte Carlo tracking algorithm that exhibits good performance both with synthetic and experimental data. |

Martino, Luca; Miguez, Joaquin An Adaptive Accept/Reject Sampling Algorithm for Posterior Probability Distributions (Inproceeding) 2009 IEEE/SP 15th Workshop on Statistical Signal Processing, pp. 45–48, IEEE, Cardiff, 2009, ISBN: 978-1-4244-2709-3. (Abstract | Links | BibTeX | Tags: adaptive accept/reject sampling, Adaptive rejection sampling, arbitrary target probability distributions, Computer Simulation, Filtering, Monte Carlo integration, Monte Carlo methods, posterior probability distributions, Probability, Probability density function, Probability distribution, Proposals, Rejection sampling, Sampling methods, sensor networks, Signal processing algorithms, signal sampling, Testing) @inproceedings{Martino2009b, title = {An Adaptive Accept/Reject Sampling Algorithm for Posterior Probability Distributions}, author = {Martino, Luca and Miguez, Joaquin}, url = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=5278644}, isbn = {978-1-4244-2709-3}, year = {2009}, date = {2009-01-01}, booktitle = {2009 IEEE/SP 15th Workshop on Statistical Signal Processing}, pages = {45--48}, publisher = {IEEE}, address = {Cardiff}, abstract = {Accept/reject sampling is a well-known method to generate random samples from arbitrary target probability distributions. It demands the design of a suitable proposal probability density function (pdf) from which candidate samples can be drawn. These samples are either accepted or rejected depending on a test involving the ratio of the target and proposal densities. In this paper we introduce an adaptive method to build a sequence of proposal pdf's that approximate the target density and hence can ensure a high acceptance rate. 
In order to illustrate the application of the method we design an accept/reject particle filter and then assess its performance and sampling efficiency numerically, by means of computer simulations.}, keywords = {adaptive accept/reject sampling, Adaptive rejection sampling, arbitrary target probability distributions, Computer Simulation, Filtering, Monte Carlo integration, Monte Carlo methods, posterior probability distributions, Probability, Probability density function, Probability distribution, Proposals, Rejection sampling, Sampling methods, sensor networks, Signal processing algorithms, signal sampling, Testing}, pubstate = {published}, tppubtype = {inproceedings} } Accept/reject sampling is a well-known method to generate random samples from arbitrary target probability distributions. It demands the design of a suitable proposal probability density function (pdf) from which candidate samples can be drawn. These samples are either accepted or rejected depending on a test involving the ratio of the target and proposal densities. In this paper we introduce an adaptive method to build a sequence of proposal pdf's that approximate the target density and hence can ensure a high acceptance rate. In order to illustrate the application of the method we design an accept/reject particle filter and then assess its performance and sampling efficiency numerically, by means of computer simulations. |

Miguez, Joaquin; Maiz, Cristina; Djuric, Petar; Crisan, Dan Sequential Monte Carlo Optimization Using Artificial State-Space Models (Inproceeding) 2009 IEEE 13th Digital Signal Processing Workshop and 5th IEEE Signal Processing Education Workshop, pp. 268–273, IEEE, Marco Island, FL, 2009. (Abstract | Links | BibTeX | Tags: Acceleration, Cost function, Design optimization, discrete-time dynamical system, Educational institutions, Mathematics, maximum a posteriori estimate, maximum likelihood estimation, minimisation, Monte Carlo methods, Optimization methods, Probability distribution, sequential Monte Carlo optimization, Sequential optimization, Signal design, State-space methods, state-space model, Stochastic optimization) @inproceedings{Miguez2009, title = {Sequential Monte Carlo Optimization Using Artificial State-Space Models}, author = {Miguez, Joaquin and Maiz, Cristina S. and Djuric, Petar M. and Crisan, Dan}, url = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=4785933}, year = {2009}, date = {2009-01-01}, booktitle = {2009 IEEE 13th Digital Signal Processing Workshop and 5th IEEE Signal Processing Education Workshop}, pages = {268--273}, publisher = {IEEE}, address = {Marco Island, FL}, abstract = {We introduce a method for sequential minimization of a certain class of (possibly non-convex) cost functions with respect to a high dimensional signal of interest. The proposed approach involves the transformation of the optimization problem into one of estimation in a discrete-time dynamical system. In particular, we describe a methodology for constructing an artificial state-space model which has the signal of interest as its unobserved dynamic state. The model is "adapted" to the cost function in the sense that the maximum a posteriori (MAP) estimate of the system state is also a global minimizer of the cost function. 
The advantage of the estimation framework is that we can draw from a pool of sequential Monte Carlo methods, for particle approximation of probability measures in dynamic systems, that enable the numerical computation of MAP estimates. We provide examples of how to apply the proposed methodology, including some illustrative simulation results.}, keywords = {Acceleration, Cost function, Design optimization, discrete-time dynamical system, Educational institutions, Mathematics, maximum a posteriori estimate, maximum likelihood estimation, minimisation, Monte Carlo methods, Optimization methods, Probability distribution, sequential Monte Carlo optimization, Sequential optimization, Signal design, State-space methods, state-space model, Stochastic optimization}, pubstate = {published}, tppubtype = {inproceedings} } We introduce a method for sequential minimization of a certain class of (possibly non-convex) cost functions with respect to a high dimensional signal of interest. The proposed approach involves the transformation of the optimization problem into one of estimation in a discrete-time dynamical system. In particular, we describe a methodology for constructing an artificial state-space model which has the signal of interest as its unobserved dynamic state. The model is "adapted" to the cost function in the sense that the maximum a posteriori (MAP) estimate of the system state is also a global minimizer of the cost function. The advantage of the estimation framework is that we can draw from a pool of sequential Monte Carlo methods, for particle approximation of probability measures in dynamic systems, that enable the numerical computation of MAP estimates. We provide examples of how to apply the proposed methodology, including some illustrative simulation results. |
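The optimization-as-estimation idea in this abstract can be sketched with a toy sequential Monte Carlo minimizer: treat the candidate minimizer as the hidden state, use exp(-cost) as a pseudo-likelihood, and track the best state found. The random-walk dynamics and the fixed jitter schedule below are assumptions chosen for illustration, not the artificial state-space construction of the paper.

```python
import random, math

def smc_minimize(cost, x0, n_particles=200, n_iter=50, step=0.5, rng=None):
    """Toy SMC minimizer (illustrative sketch): particles evolve under
    artificial random-walk dynamics, are weighted by exp(-cost), and are
    resampled; the lowest-cost particle approximates the global minimizer."""
    rng = rng or random.Random()
    particles = [x0 + rng.gauss(0, 1) for _ in range(n_particles)]
    best = min(particles, key=cost)
    for _ in range(n_iter):
        # Propagate: artificial dynamics (Gaussian jitter).
        particles = [x + rng.gauss(0, step) for x in particles]
        # Weight by the pseudo-likelihood and resample.
        w = [math.exp(-cost(x)) for x in particles]
        particles = rng.choices(particles, weights=w, k=n_particles)
        cand = min(particles, key=cost)
        if cost(cand) < cost(best):
            best = cand
    return best
```

The pseudo-likelihood exp(-cost) makes low-cost states the high-probability states, so the particle approximation of the "posterior" concentrates around minimizers.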