## 2018

## Journal Articles

Koch, Tobias; Vazquez-Vilar, Gonzalo: A Rigorous Approach to High-Resolution Entropy-Constrained Vector Quantization. IEEE Transactions on Information Theory, 64 (4), pp. 2609–2625, 2018, ISSN: 0018-9448.

```bibtex
@article{koch-TIT2018a,
  title     = {A Rigorous Approach to High-Resolution Entropy-Constrained Vector Quantization},
  author    = {Tobias Koch and Gonzalo Vazquez-Vilar},
  doi       = {10.1109/TIT.2018.2803064},
  issn      = {0018-9448},
  year      = {2018},
  date      = {2018-04-01},
  journal   = {IEEE Transactions on Information Theory},
  volume    = {64},
  number    = {4},
  pages     = {2609--2625},
  keywords  = {Distortion, Distortion measurement, Entropy, Entropy constrained, high resolution, Probability density function, quantization, Rate-distortion, Rate-distortion theory, Vector quantization},
  pubstate  = {published},
  tppubtype = {article}
}
```

## 2015

## Inproceedings

Luengo, David; Martino, Luca; Elvira, Victor; Bugallo, Monica F: Bias correction for distributed Bayesian estimators. In: 2015 IEEE 6th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), pp. 253–256, IEEE, Cancun, 2015, ISBN: 978-1-4799-1963-5.

```bibtex
@inproceedings{Luengo2015a,
  title     = {Bias correction for distributed Bayesian estimators},
  author    = {David Luengo and Luca Martino and Victor Elvira and Monica F Bugallo},
  url       = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=7383784},
  doi       = {10.1109/CAMSAP.2015.7383784},
  isbn      = {978-1-4799-1963-5},
  year      = {2015},
  date      = {2015-12-01},
  booktitle = {2015 IEEE 6th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP)},
  pages     = {253--256},
  publisher = {IEEE},
  address   = {Cancun},
  abstract  = {Dealing with the whole dataset in big data estimation problems is usually unfeasible. A common solution then consists of dividing the data into several smaller sets, performing distributed Bayesian estimation and combining these partial estimates to obtain a global estimate. A major problem of this approach is the presence of a non-negligible bias in the partial estimators, due to the mismatch between the unknown true prior and the prior assumed in the estimation. A simple method to mitigate the effect of this bias is proposed in this paper. Essentially, the approach is based on using a reference data set to obtain a rough estimation of the parameter of interest, i.e., a reference parameter. This information is then communicated to the partial filters that handle the smaller data sets, which can thus use a refined prior centered around this parameter. Simulation results confirm the good performance of this scheme.},
  keywords  = {Bayes methods, Big data, Distributed databases, Estimation, Probability density function, Wireless Sensor Networks},
  pubstate  = {published},
  tppubtype = {inproceedings}
}
```
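The recentered-prior idea described in the abstract can be sketched in a few lines. The sketch below is illustrative only: it assumes a conjugate Gaussian model with known unit noise variance, and the shard sizes, prior parameters, and the choice of the first shard as the reference set are invented for the example, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic problem: unknown mean theta with Gaussian likelihood N(theta, 1).
theta_true = 5.0
data = rng.normal(theta_true, 1.0, size=10_000)
shards = np.array_split(data, 1_000)          # 1000 shards of 10 points each

def partial_posterior_mean(x, prior_mean, prior_var=1.0, noise_var=1.0):
    """Conjugate posterior mean of a Gaussian mean for one data shard."""
    post_var = 1.0 / (1.0 / prior_var + len(x) / noise_var)
    return post_var * (prior_mean / prior_var + x.sum() / noise_var)

# Naive distributed estimate: every shard uses the same mismatched prior
# N(0, 1), so each partial estimate is shrunk toward 0 and the average
# inherits that bias.
naive = np.mean([partial_posterior_mean(s, prior_mean=0.0) for s in shards])

# Bias-corrected variant: a reference shard yields a rough estimate of
# theta, and every partial estimator recenters its prior around it.
theta_ref = shards[0].mean()
corrected = np.mean([partial_posterior_mean(s, prior_mean=theta_ref)
                     for s in shards])

print(naive, corrected)
```

Because each partial posterior shrinks its shard mean toward the assumed prior mean, the naive average stays biased toward that prior; recentering the prior at the rough reference estimate removes most of the bias at the cost of one extra communication round.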

Martino, Luca; Elvira, Victor; Luengo, David; Corander, Jukka: Parallel interacting Markov adaptive importance sampling. In: 2015 23rd European Signal Processing Conference (EUSIPCO), pp. 499–503, IEEE, Nice, 2015, ISBN: 978-0-9928-6263-3.

```bibtex
@inproceedings{Martino2015bb,
  title     = {Parallel interacting Markov adaptive importance sampling},
  author    = {Luca Martino and Victor Elvira and David Luengo and Jukka Corander},
  url       = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=7362433 http://www.eurasip.org/Proceedings/Eusipco/Eusipco2015/papers/1570111267.pdf},
  doi       = {10.1109/EUSIPCO.2015.7362433},
  isbn      = {978-0-9928-6263-3},
  year      = {2015},
  date      = {2015-08-01},
  booktitle = {2015 23rd European Signal Processing Conference (EUSIPCO)},
  pages     = {499--503},
  publisher = {IEEE},
  address   = {Nice},
  abstract  = {Monte Carlo (MC) methods are widely used for statistical inference in signal processing applications. A well-known class of MC methods is importance sampling (IS) and its adaptive extensions. In this work, we introduce an iterated importance sampler using a population of proposal densities, which are adapted according to an MCMC technique over the population of location parameters. The novel algorithm provides a global estimation of the variables of interest iteratively, using all the samples weighted according to the deterministic mixture scheme. Numerical results, on a multi-modal example and a localization problem in wireless sensor networks, show the advantages of the proposed schemes.},
  keywords  = {Adaptive importance sampling, Bayesian inference, MCMC methods, Monte Carlo methods, Parallel Chains, Probability density function, Proposals, Signal processing, Signal processing algorithms, Sociology},
  pubstate  = {published},
  tppubtype = {inproceedings}
}
```
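The deterministic mixture weighting that the abstract builds on can be illustrated with one non-adaptive iteration. In the sketch below the bimodal target, the proposal locations, and the proposal scale are invented for illustration, and the MCMC adaptation of the location parameters is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

# Target: a symmetric bimodal mixture, known only up to a constant
# (self-normalized IS only needs unnormalized evaluations).
def target_pdf(x):
    return 0.5 * np.exp(-0.5 * (x - 2.0) ** 2) + 0.5 * np.exp(-0.5 * (x + 2.0) ** 2)

def normal_pdf(x, mu, s):
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

# Population of Gaussian proposals with different location parameters.
locs = np.array([-3.0, -1.0, 0.0, 1.0, 3.0])
sigma = 1.5
M = 2_000                                    # samples drawn per proposal

samples = np.concatenate([rng.normal(mu, sigma, M) for mu in locs])

# Deterministic-mixture weights: the denominator is the full mixture of all
# proposals evaluated at each sample, not just the proposal that drew it.
mix_den = np.mean([normal_pdf(samples, mu, sigma) for mu in locs], axis=0)
w = target_pdf(samples) / mix_den
w /= w.sum()

est_mean = np.sum(w * samples)               # true target mean is 0 by symmetry
print(est_mean)
```

The deterministic-mixture denominator trades a higher per-sample cost (every proposal is evaluated at every sample) for weights with much lower variance than the standard one-proposal-per-sample weighting.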

Martino, Luca; Elvira, Victor; Luengo, David; Artés-Rodríguez, Antonio; Corander, Jukka: Smelly Parallel MCMC Chains. In: 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4070–4074, IEEE, Brisbane, 2015, ISBN: 978-1-4673-6997-8.

```bibtex
@inproceedings{Martino2015a,
  title     = {Smelly Parallel MCMC Chains},
  author    = {Luca Martino and Victor Elvira and David Luengo and Antonio Artés-Rodríguez and Jukka Corander},
  url       = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=7178736 http://www.tsc.uc3m.es/~velvira/papers/ICASSP2015_martino.pdf},
  doi       = {10.1109/ICASSP.2015.7178736},
  isbn      = {978-1-4673-6997-8},
  year      = {2015},
  date      = {2015-04-01},
  booktitle = {2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages     = {4070--4074},
  publisher = {IEEE},
  address   = {Brisbane},
  abstract  = {Monte Carlo (MC) methods are useful tools for Bayesian inference and stochastic optimization that have been widely applied in signal processing and machine learning. A well-known class of MC methods are Markov Chain Monte Carlo (MCMC) algorithms. In this work, we introduce a novel parallel interacting MCMC scheme, where the parallel chains share information, thus yielding a faster exploration of the state space. The interaction is carried out by generating a dynamic repulsion among the "smelly" parallel chains that takes into account the entire population of current states. The ergodicity of the scheme and its relationship with other sampling methods are discussed. Numerical results show the advantages of the proposed approach in terms of mean square error and robustness w.r.t. initial values and parameter choice.},
  keywords  = {Bayesian inference, learning (artificial intelligence), Machine learning, Markov chain Monte Carlo, Markov chain Monte Carlo algorithms, Markov processes, MC methods, MCMC algorithms, MCMC scheme, mean square error, mean square error methods, Monte Carlo methods, optimisation, parallel and interacting chains, Probability density function, Proposals, robustness, Sampling methods, Signal processing, Signal processing algorithms, signal sampling, smelly parallel chains, smelly parallel MCMC chains, Stochastic optimization},
  pubstate  = {published},
  tppubtype = {inproceedings}
}
```

## 2013

## Journal Articles

Salamanca, Luis; Olmos, Pablo M; Perez-Cruz, Fernando; Murillo-Fuentes, Juan Jose: Tree-Structured Expectation Propagation for LDPC Decoding over BMS Channels. IEEE Transactions on Communications, 61 (10), pp. 4086–4095, 2013, ISSN: 0090-6778.

```bibtex
@article{Salamanca2013a,
  title     = {Tree-Structured Expectation Propagation for LDPC Decoding over BMS Channels},
  author    = {Luis Salamanca and Pablo M Olmos and Fernando Perez-Cruz and Juan Jose Murillo-Fuentes},
  url       = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6587624},
  issn      = {0090-6778},
  year      = {2013},
  date      = {2013-01-01},
  journal   = {IEEE Transactions on Communications},
  volume    = {61},
  number    = {10},
  pages     = {4086--4095},
  abstract  = {In this paper, we put forward the tree-structured expectation propagation (TEP) algorithm for decoding block and convolutional low-density parity-check codes over any binary channel. We have already shown that TEP improves belief propagation (BP) over the binary erasure channel (BEC) by imposing marginal constraints over a set of pairs of variables that form a tree or a forest. The TEP decoder is a message-passing algorithm that sequentially builds a tree/forest of erased variables to capture additional information disregarded by the standard BP decoder, which leads to a noticeable reduction of the error rate for finite-length codes. In this paper, we show how the TEP can be extended to any channel, specifically to binary memoryless symmetric (BMS) channels. We particularly focus on how the TEP algorithm can be adapted for any channel model and, more importantly, how to choose the tree/forest to keep the gains observed for block and convolutional LDPC codes over the BEC.},
  keywords  = {Approximation algorithms, Approximation methods, BEC, belief propagation, binary erasure channel, binary memoryless symmetric channels, BMS channels, Channel Coding, Complexity theory, convolutional codes, convolutional low-density parity-check codes, Decoding, decoding block, expectation propagation, finite-length codes, LDPC decoding, message-passing algorithm, parity check codes, Probability density function, sparse linear codes, TEP algorithm, tree-structured expectation propagation, trees (mathematics), Vegetation},
  pubstate  = {published},
  tppubtype = {article}
}
```

Olmos, Pablo M; Murillo-Fuentes, Juan Jose; Perez-Cruz, Fernando: Tree-Structure Expectation Propagation for LDPC Decoding Over the BEC. IEEE Transactions on Information Theory, 59 (6), pp. 3354–3377, 2013, ISSN: 0018-9448.

```bibtex
@article{Olmos2013b,
  title     = {Tree-Structure Expectation Propagation for LDPC Decoding Over the BEC},
  author    = {Pablo M Olmos and Juan Jose Murillo-Fuentes and Fernando Perez-Cruz},
  url       = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6451276},
  issn      = {0018-9448},
  year      = {2013},
  date      = {2013-01-01},
  journal   = {IEEE Transactions on Information Theory},
  volume    = {59},
  number    = {6},
  pages     = {3354--3377},
  abstract  = {We present the tree-structure expectation propagation (Tree-EP) algorithm to decode low-density parity-check (LDPC) codes over discrete memoryless channels (DMCs). Expectation propagation generalizes belief propagation (BP) in two ways. First, it can be used with any exponential family distribution over the cliques in the graph. Second, it can impose additional constraints on the marginal distributions. We use this second property to impose pairwise marginal constraints over pairs of variables connected to a check node of the LDPC code's Tanner graph. Thanks to these additional constraints, the Tree-EP marginal estimates for each variable in the graph are more accurate than those provided by BP. We also reformulate the Tree-EP algorithm for the binary erasure channel (BEC) as a peeling-type algorithm (TEP) and we show that the algorithm has the same computational complexity as BP and it decodes a higher fraction of errors. We describe the TEP decoding process by a set of differential equations that represents the expected residual graph evolution as a function of the code parameters. The solution of these equations is used to predict the TEP decoder performance in both the asymptotic regime and the finite-length regimes over the BEC. While the asymptotic threshold of the TEP decoder is the same as the BP decoder for regular and optimized codes, we propose a scaling law for finite-length LDPC codes, which accurately approximates the TEP improved performance and facilitates its optimization.},
  keywords  = {Algorithm design and analysis, Approximation algorithms, Approximation methods, BEC, belief propagation, Belief-propagation (BP), binary erasure channel, Complexity theory, decode low-density parity-check codes, Decoding, discrete memoryless channels, expectation propagation, finite-length analysis, LDPC codes, LDPC decoding, parity check codes, peeling-type algorithm, Probability density function, random graph evolution, Tanner graph, tree-structure expectation propagation},
  pubstate  = {published},
  tppubtype = {article}
}
```

## Inproceedings

Koblents, Eugenia; Miguez, Joaquin: A Population Monte Carlo Scheme for Computational Inference in High Dimensional Spaces. In: 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 6318–6322, IEEE, Vancouver, 2013, ISSN: 1520-6149.

```bibtex
@inproceedings{Koblents2013a,
  title     = {A Population Monte Carlo Scheme for Computational Inference in High Dimensional Spaces},
  author    = {Eugenia Koblents and Joaquin Miguez},
  url       = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6638881},
  issn      = {1520-6149},
  year      = {2013},
  date      = {2013-01-01},
  booktitle = {2013 IEEE International Conference on Acoustics, Speech and Signal Processing},
  pages     = {6318--6322},
  publisher = {IEEE},
  address   = {Vancouver},
  abstract  = {In this paper we address the Monte Carlo approximation of integrals with respect to probability distributions in high-dimensional spaces. In particular, we investigate the population Monte Carlo (PMC) scheme, which is based on an iterative importance sampling (IS) approach. Both IS and PMC suffer from the well-known problem of degeneracy of the importance weights (IWs), which is closely related to the curse of dimensionality and limits their applicability in large-scale practical problems. In this paper we investigate a novel PMC scheme that consists of performing nonlinear transformations of the IWs in order to smooth their variations and avoid degeneracy. We apply the modified IS scheme to the well-known mixture-PMC (MPMC) algorithm, which constructs the importance functions as mixtures of kernels. We present numerical results that show how the modified version of MPMC clearly outperforms the original scheme.},
  keywords  = {Approximation methods, computational inference, degeneracy of importance weights, high dimensional spaces, Importance sampling, importance weights, iterative importance sampling, iterative methods, mixture-PMC, mixture-PMC algorithm, Monte Carlo methods, MPMC, nonlinear transformations, population Monte Carlo, population Monte Carlo scheme, Probability density function, probability distributions, Proposals, Sociology, Standards},
  pubstate  = {published},
  tppubtype = {inproceedings}
}
```
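The weight-smoothing idea can be illustrated with the simplest such nonlinear transformation: clipping the largest log-weights at the value of the k-th largest one (the paper's exact transformation may differ in detail). The Gaussian target/proposal mismatch below is invented purely to force weight degeneracy.

```python
import numpy as np

rng = np.random.default_rng(2)

# A deliberately mismatched high-dimensional setting where raw importance
# weights degenerate: target N(1, I_d), proposal N(0, I_d).
d, N = 20, 5_000
x = rng.normal(0.0, 1.0, size=(N, d))

# log importance weight = log target - log proposal (up to a constant)
log_w = -0.5 * np.sum((x - 1.0) ** 2, axis=1) + 0.5 * np.sum(x ** 2, axis=1)

def ess(log_weights):
    """Effective sample size of a set of normalized importance weights."""
    w = np.exp(log_weights - log_weights.max())
    w /= w.sum()
    return 1.0 / np.sum(w ** 2)

# Nonlinear transformation: clip every log-weight at the k-th largest value,
# flattening the dominant weights and smoothing their variation.
k = 100
thresh = np.partition(log_w, -k)[-k]
log_w_clipped = np.minimum(log_w, thresh)

print(ess(log_w), ess(log_w_clipped))
```

Clipping introduces some bias in exchange for a large variance reduction: the effective sample size of the transformed weights is far higher than that of the raw, degenerate ones.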

## 2012

## Journal Articles

Maiz, Cristina S; Molanes-Lopez, Elisa M; Miguez, Joaquin; Djuric, Petar M: A Particle Filtering Scheme for Processing Time Series Corrupted by Outliers. IEEE Transactions on Signal Processing, 60 (9), pp. 4611–4627, 2012, ISSN: 1053-587X.

```bibtex
@article{Maiz2012,
  title     = {A Particle Filtering Scheme for Processing Time Series Corrupted by Outliers},
  author    = {Cristina S Maiz and Elisa M Molanes-Lopez and Joaquin Miguez and Petar M Djuric},
  url       = {http://www.tsc.uc3m.es/~jmiguez/papers/P34_2012_A Particle Filtering Scheme for Processing Time Series Corrupted by Outliers.pdf http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6203606},
  issn      = {1053-587X},
  year      = {2012},
  date      = {2012-01-01},
  journal   = {IEEE Transactions on Signal Processing},
  volume    = {60},
  number    = {9},
  pages     = {4611--4627},
  abstract  = {The literature in engineering and statistics abounds with techniques for detecting and properly processing anomalous observations in the data. Most of these techniques have been developed in the framework of static models and it is only in recent years that we have seen attempts that address the presence of outliers in nonlinear time series. For a target tracking problem described by a nonlinear state-space model, we propose the online detection of outliers by including an outlier detection step within the standard particle filtering algorithm. The outlier detection step is implemented by a test involving a statistic of the predictive distribution of the observations, such as a concentration measure or an extreme upper quantile. We also provide asymptotic results about the convergence of the particle approximations of the predictive distribution (and its statistics) and assess the performance of the resulting algorithms by computer simulations of target tracking problems with signal power observations.},
  keywords  = {Kalman filters, Mathematical model, nonlinear state space model, Outlier detection, prediction theory, predictive distribution, Probability density function, State-space methods, state-space models, statistical distributions, Target tracking, time serie processing, Vectors, Yttrium},
  pubstate  = {published},
  tppubtype = {article}
}
```
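A toy bootstrap particle filter with the predictive-quantile outlier test the abstract describes is sketched below. The scalar AR(1) model, noise levels, outlier magnitude, and the 99% acceptance band are all invented for the example; the paper's convergence analysis is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)

# Scalar state-space model: x_t = 0.9 x_{t-1} + u_t,  y_t = x_t + v_t,
# with occasional large outliers added to y_t.
T, N = 200, 500
x_true = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x_true[t] = 0.9 * x_true[t - 1] + rng.normal(0, 1)
    y[t] = x_true[t] + rng.normal(0, 0.5)
    if rng.random() < 0.05:                  # inject an outlier
        y[t] += 25.0

particles = rng.normal(0, 1, N)
est = np.zeros(T)
for t in range(1, T):
    particles = 0.9 * particles + rng.normal(0, 1, N)   # propagate
    # Particle approximation of the predictive distribution of y_t.
    y_pred = particles + rng.normal(0, 0.5, N)
    lo, hi = np.quantile(y_pred, [0.005, 0.995])
    if lo <= y[t] <= hi:                     # ordinary weight/resample update
        logw = -0.5 * ((y[t] - particles) / 0.5) ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        particles = rng.choice(particles, size=N, p=w)
    # else: y_t is flagged as an outlier and the update step is skipped
    est[t] = particles.mean()

rmse = np.sqrt(np.mean((est - x_true) ** 2))
print(rmse)
```

Skipping the update when the observation falls outside an extreme predictive quantile keeps a single +25 outlier from collapsing the particle cloud onto a wildly wrong state.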

## 2011

## Inproceedings

Plata-Chaves, Jorge; Lazaro, Marcelino; Artés-Rodríguez, Antonio: Optimal Neyman-Pearson Fusion in Two-Dimensional Sensor Networks with Serial Architecture and Dependent Observations. In: Proceedings of the 14th International Conference on Information Fusion (FUSION), pp. 1–6, Chicago, 2011, ISBN: 978-1-4577-0267-9.

```bibtex
@inproceedings{Plata-Chaves2011b,
  title     = {Optimal Neyman-Pearson Fusion in Two-Dimensional Sensor Networks with Serial Architecture and Dependent Observations},
  author    = {Jorge Plata-Chaves and Marcelino Lazaro and Antonio Artés-Rodríguez},
  url       = {http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=5977545&searchWithin%3Dartes+rodriguez%26sortType%3Dasc_p_Sequence%26filter%3DAND%28p_IS_Number%3A5977431%29},
  isbn      = {978-1-4577-0267-9},
  year      = {2011},
  date      = {2011-01-01},
  booktitle = {Information Fusion (FUSION), 2011 Proceedings of the 14th International Conference on},
  pages     = {1--6},
  address   = {Chicago},
  abstract  = {In this correspondence, we consider a sensor network with serial architecture. When solving a binary distributed detection problem where the sensor observations are dependent under each one of the two possible hypotheses, each fusion stage of the network applies a local decision rule. We assume that, based on the information available at each fusion stage, the decision rules provide a binary message regarding the presence or absence of an event of interest. Under this scenario and under a Neyman-Pearson formulation, we derive the optimal decision rules associated with each fusion stage. As happens when the sensor observations are independent, we are able to show that, under the Neyman-Pearson criterion, the optimal fusion rules of a serial configuration with dependent observations also match optimal Neyman-Pearson tests.},
  keywords  = {Bayesian methods, binary distributed detection problem, decision theory, dependent observations, Joints, local decision rule, Measurement uncertainty, Network topology, Neyman-Pearson criterion, optimal Neyman-Pearson fusion, optimum distributed detection, Parallel architectures, Performance evaluation, Probability density function, sensor dependent observations, sensor fusion, serial architecture, serial network topology, two-dimensional sensor networks, Wireless Sensor Networks},
  pubstate  = {published},
  tppubtype = {inproceedings}
}
```

Balasingam, Balakumar; Bolic, Miodrag; Djuric, Petar M; Miguez, Joaquin: Efficient Distributed Resampling for Particle Filters. In: 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 3772–3775, IEEE, Prague, 2011, ISSN: 1520-6149.

```bibtex
@inproceedings{Balasingam2011,
  title     = {Efficient Distributed Resampling for Particle Filters},
  author    = {Balakumar Balasingam and Miodrag Bolic and Petar M Djuric and Joaquin Miguez},
  url       = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=5947172},
  issn      = {1520-6149},
  year      = {2011},
  date      = {2011-01-01},
  booktitle = {2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages     = {3772--3775},
  publisher = {IEEE},
  address   = {Prague},
  abstract  = {In particle filtering, resampling is the only step that cannot be fully parallelized. Recently, we have proposed algorithms for distributed resampling implemented on architectures with concurrent processing elements (PEs). The objective of distributed resampling is to reduce the communication among the PEs while not compromising the performance of the particle filter. In this paper, we report an improved version of the distributed resampling algorithm that optimally selects the particles for communication between the PEs of the distributed scheme. Computer simulations are provided that demonstrate the improved performance of the proposed algorithm.},
  keywords  = {Approximation algorithms, Copper, Covariance matrix, distributed resampling, Markov processes, Probability density function, Sequential Monte-Carlo methods, Signal processing, Signal processing algorithms},
  pubstate  = {published},
  tppubtype = {inproceedings}
}
```
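The general shape of distributed resampling, local resampling per PE followed by a small inter-PE exchange, can be sketched as below. The particle-selection rule here (each PE ships its first m particles to a ring neighbour) is a deliberately naive placeholder for the optimal selection proposed in the paper, and the toy weights are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(4)

K, n, m = 4, 250, 25        # PEs, particles per PE, particles exchanged per PE
particles = rng.normal(0.0, 1.0, (K, n))
logw = -0.5 * (particles - 1.0) ** 2     # unnormalized log-weights (toy likelihood)

# Local step: each PE resamples its own block using only its local weights,
# so this stage needs no communication at all.
resampled = np.empty_like(particles)
for k in range(K):
    w = np.exp(logw[k] - logw[k].max())
    w /= w.sum()
    resampled[k] = rng.choice(particles[k], size=n, p=w)

# Exchange step: each PE ships m particles to its ring neighbour, for O(K*m)
# communication instead of the O(K*n) cost of gathering everything centrally.
resampled[:, :m] = np.roll(resampled[:, :m], shift=1, axis=0)

print(resampled.shape, resampled.mean())
```

The exchange lets a PE whose local cloud has drifted into a low-probability region receive particles from better-placed neighbours; which particles to ship, and how many, is exactly the design question the paper addresses.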

## 2009 |

## Inproceedings |

Martino, Luca; Miguez, Joaquin An Adaptive Accept/Reject Sampling Algorithm for Posterior Probability Distributions Inproceedings 2009 IEEE/SP 15th Workshop on Statistical Signal Processing, pp. 45–48, IEEE, Cardiff, 2009, ISBN: 978-1-4244-2709-3. Abstract | Links | BibTeX | Tags: adaptive accept/reject sampling, Adaptive rejection sampling, arbitrary target probability distributions, Computer Simulation, Filtering, Monte Carlo integration, Monte Carlo methods, posterior probability distributions, Probability, Probability density function, Probability distribution, Proposals, Rejection sampling, Sampling methods, sensor networks, Signal processing algorithms, signal sampling, Testing @inproceedings{Martino2009b, title = {An Adaptive Accept/Reject Sampling Algorithm for Posterior Probability Distributions}, author = {Luca Martino and Joaquin Miguez}, url = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=5278644}, isbn = {978-1-4244-2709-3}, year = {2009}, date = {2009-01-01}, booktitle = {2009 IEEE/SP 15th Workshop on Statistical Signal Processing}, pages = {45--48}, publisher = {IEEE}, address = {Cardiff}, abstract = {Accept/reject sampling is a well-known method to generate random samples from arbitrary target probability distributions. It demands the design of a suitable proposal probability density function (pdf) from which candidate samples can be drawn. These samples are either accepted or rejected depending on a test involving the ratio of the target and proposal densities. In this paper we introduce an adaptive method to build a sequence of proposal pdf's that approximate the target density and hence can ensure a high acceptance rate. 
In order to illustrate the application of the method we design an accept/reject particle filter and then assess its performance and sampling efficiency numerically, by means of computer simulations.}, keywords = {adaptive accept/reject sampling, Adaptive rejection sampling, arbitrary target probability distributions, Computer Simulation, Filtering, Monte Carlo integration, Monte Carlo methods, posterior probability distributions, Probability, Probability density function, Probability distribution, Proposals, Rejection sampling, Sampling methods, sensor networks, Signal processing algorithms, signal sampling, Testing}, pubstate = {published}, tppubtype = {inproceedings} } |
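The accept/reject test described in this abstract can be sketched as follows. This is plain rejection sampling with a fixed proposal (the target, proposal, and bound below are an illustrative toy choice, not from the paper); the paper's contribution is the adaptive construction of a sequence of proposals so that the bound approaches 1 and the acceptance rate stays high:

```python
import numpy as np

rng = np.random.default_rng(1)

def rejection_sample(target_pdf, proposal_pdf, sample_proposal, bound, n):
    """Basic accept/reject: propose x ~ q, accept if u * bound * q(x) <= p(x)."""
    out = []
    while len(out) < n:
        x = sample_proposal()
        if rng.random() * bound * proposal_pdf(x) <= target_pdf(x):
            out.append(x)
    return np.array(out)

# Toy example: target N(0, 1), proposal N(0, 4).
target = lambda x: np.exp(-x * x / 2) / np.sqrt(2 * np.pi)
proposal = lambda x: np.exp(-x * x / 8) / np.sqrt(8 * np.pi)
# Ratio target/proposal = 2 * exp(-3 x^2 / 8) <= 2, so bound = 2 suffices.
samples = rejection_sample(target, proposal,
                           lambda: 2 * rng.standard_normal(), 2.0, 2000)
```

The acceptance rate here is 1/bound = 0.5; a tighter proposal (which the adaptive scheme constructs automatically) would raise it.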

Martino, Luca; Miguez, Joaquin A Novel Rejection Sampling Scheme for Posterior Probability Distributions Inproceedings 2009 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 2921–2924, IEEE, Taipei, 2009, ISSN: 1520-6149. Abstract | Links | BibTeX | Tags: Additive noise, arbitrary target probability distributions, Bayes methods, Bayesian methods, Monte Carlo integration, Monte Carlo methods, Monte Carlo techniques, Overbounding, posterior probability distributions, Probability density function, Probability distribution, Proposals, Rejection sampling, rejection sampling scheme, Sampling methods, Signal processing algorithms, signal sampling, Upper bound @inproceedings{Martino2009, title = {A Novel Rejection Sampling Scheme for Posterior Probability Distributions}, author = {Luca Martino and Joaquin Miguez}, url = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=4960235}, issn = {1520-6149}, year = {2009}, date = {2009-01-01}, booktitle = {2009 IEEE International Conference on Acoustics, Speech and Signal Processing}, pages = {2921--2924}, publisher = {IEEE}, address = {Taipei}, abstract = {Rejection sampling (RS) is a well-known method to draw from arbitrary target probability distributions, which has important applications by itself or as a building block for more sophisticated Monte Carlo techniques. The main limitation to the use of RS is the need to find an adequate upper bound for the ratio of the target probability density function (pdf) over the proposal pdf from which the samples are generated. There are no general methods to analytically find this bound, except in the particular case in which the target pdf is log-concave. In this paper we adopt a Bayesian view of the problem and propose a general RS scheme to draw from the posterior pdf of a signal of interest using its prior density as a proposal function. 
The method enables the analytical calculation of the bound and can be applied to a large class of target densities. We illustrate its use with a simple numerical example.}, keywords = {Additive noise, arbitrary target probability distributions, Bayes methods, Bayesian methods, Monte Carlo integration, Monte Carlo methods, Monte Carlo techniques, Overbounding, posterior probability distributions, Probability density function, Probability distribution, Proposals, Rejection sampling, rejection sampling scheme, Sampling methods, Signal processing algorithms, signal sampling, Upper bound}, pubstate = {published}, tppubtype = {inproceedings} } |
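The idea of using the prior as the proposal can be sketched in a few lines. Since the posterior satisfies p(x | y) ∝ likelihood(x) · prior(x), proposing from the prior and accepting with probability likelihood(x) / sup likelihood targets the posterior, and the bound reduces to the supremum of the likelihood, which is often available analytically. The Gaussian model below is an illustrative assumption, not the specific setting of the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative observation model: y = x + noise, noise ~ N(0, sigma^2).
y_obs, sigma = 1.5, 0.5

def likelihood(x):
    return np.exp(-(y_obs - x) ** 2 / (2 * sigma ** 2))  # unnormalised

L_MAX = 1.0  # sup_x likelihood(x), attained at x = y_obs

def sample_posterior(n):
    """Rejection sampling from p(x | y) using the prior N(0, 1) as proposal."""
    out = []
    while len(out) < n:
        x = rng.standard_normal()                 # draw from the prior
        if rng.random() * L_MAX <= likelihood(x):  # accept with prob L(x)/L_MAX
            out.append(x)
    return np.array(out)

post = sample_posterior(3000)
# For this conjugate toy model the exact posterior is
# N(y / (1 + sigma^2), sigma^2 / (1 + sigma^2)) = N(1.2, 0.2).
```

The conjugate toy model is chosen only so the result can be checked against the exact posterior; the point of the paper is that the bound L_MAX is analytic for a much larger class of models.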