## 2010

## Journal Articles

Perez-Cruz, Fernando; Kulkarni, S. R.: Robust and Low Complexity Distributed Kernel Least Squares Learning in Sensor Networks. Journal Article. IEEE Signal Processing Letters, 17 (4), pp. 355–358, 2010, ISSN: 1070-9908.

    @article{Perez-Cruz2010,
      title     = {Robust and Low Complexity Distributed Kernel Least Squares Learning in Sensor Networks},
      author    = {Perez-Cruz, Fernando and Kulkarni, S. R.},
      url       = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=5395679},
      issn      = {1070-9908},
      year      = {2010},
      date      = {2010-01-01},
      journal   = {IEEE Signal Processing Letters},
      volume    = {17},
      number    = {4},
      pages     = {355--358},
      abstract  = {We present a novel mechanism for consensus building in sensor networks. The proposed algorithm has three main properties that make it suitable for sensor network learning. First, the proposed algorithm is based on robust nonparametric statistics and thereby needs little prior knowledge about the network and the function that needs to be estimated. Second, the algorithm uses only local information about the network and it communicates only with nearby sensors. Third, the algorithm is completely asynchronous and robust. It does not need to coordinate the sensors to estimate the underlying function and it is not affected if other sensors in the network stop working. Therefore, the proposed algorithm is an ideal candidate for sensor networks deployed in remote and inaccessible areas, which might need to change their objective once they have been set up.},
      keywords  = {communication complexity, Consensus, distributed learning, kernel methods, learning (artificial intelligence), low complexity distributed kernel least squares le, message passing, message-passing algorithms, robust nonparametric statistics, sensor network learning, sensor networks, telecommunication computing, Wireless Sensor Networks},
      pubstate  = {published},
      tppubtype = {article}
    }
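The abstract does not spell out the update rule, but its three ingredients (robust nonparametric statistics, purely local communication, asynchrony) can be illustrated with a toy sketch in which a randomly chosen sensor replaces its estimate by the median of its own and its neighbors' values. The function `median_consensus`, the ring topology, and all numbers below are hypothetical illustrations, not taken from the paper.

```python
import random

def median_consensus(values, neighbors, rounds=50, seed=0):
    """Toy asynchronous consensus: at each step one random sensor replaces
    its estimate with the median of its closed neighborhood. The median is
    a robust statistic, so a single outlier reading cannot propagate."""
    rng = random.Random(seed)
    est = list(values)
    for _ in range(rounds * len(est)):
        i = rng.randrange(len(est))           # asynchronous: one node at a time
        local = sorted([est[i]] + [est[j] for j in neighbors[i]])
        est[i] = local[len(local) // 2]       # robust local update
    return est

# 4-node ring; node 3 holds an outlier reading of 10.0
vals = [1.0, 1.1, 0.9, 10.0]
nbrs = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
est = median_consensus(vals, nbrs)
```

Because each update stays within the range of the neighborhood's values, the outlier is filtered out after node 3's first update and the estimates settle inside the range of the honest readings.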

Martino, Luca; Miguez, Joaquin: Generalized Rejection Sampling Schemes and Applications in Signal Processing. Journal Article. Signal Processing, 90 (11), pp. 2981–2995, 2010.

    @article{Martino2010a,
      title     = {Generalized Rejection Sampling Schemes and Applications in Signal Processing},
      author    = {Martino, Luca and Miguez, Joaquin},
      url       = {http://www.sciencedirect.com/science/article/pii/S0165168410001866},
      year      = {2010},
      date      = {2010-01-01},
      journal   = {Signal Processing},
      volume    = {90},
      number    = {11},
      pages     = {2981--2995},
      abstract  = {Bayesian methods and their implementations by means of sophisticated Monte Carlo techniques, such as Markov chain Monte Carlo (MCMC) and particle filters, have become very popular in signal processing over the last years. However, in many problems of practical interest these techniques demand procedures for sampling from probability distributions with non-standard forms, hence we are often brought back to the consideration of fundamental simulation algorithms, such as rejection sampling (RS). Unfortunately, the use of RS techniques demands the calculation of tight upper bounds for the ratio of the target probability density function (pdf) over the proposal density from which candidate samples are drawn. Except for the class of log-concave target pdf's, for which an efficient algorithm exists, there are no general methods to analytically determine this bound, which has to be derived from scratch for each specific case. In this paper, we introduce new schemes for (a) obtaining upper bounds for likelihood functions and (b) adaptively computing proposal densities that approximate the target pdf closely. The former class of methods provides the tools to easily sample from a posteriori probability distributions (that appear very often in signal processing problems) by drawing candidates from the prior distribution. However, they are even more useful when they are exploited to derive the generalized adaptive RS (GARS) algorithm introduced in the second part of the paper. The proposed GARS method yields a sequence of proposal densities that converge towards the target pdf and enable a very efficient sampling of a broad class of probability distributions, possibly with multiple modes and non-standard forms. We provide some simple numerical examples to illustrate the use of the proposed techniques, including an example of target localization using range measurements, often encountered in sensor network applications.},
      keywords  = {Adaptive rejection sampling, Gibbs sampling, Monte Carlo integration, Rejection sampling, sensor networks, Target localization},
      pubstate  = {published},
      tppubtype = {article}
    }
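The plain rejection sampling that this paper generalizes is easy to state: draw a candidate from a proposal q, and accept it with probability p(x) / (M q(x)), where M upper-bounds the ratio p/q. A minimal sketch follows, using a Beta(2,2) target and a uniform proposal as an illustrative example; the GARS algorithm itself (the adaptive construction of the bound and proposal) is not reproduced here.

```python
import random

def rejection_sample(target_pdf, proposal_sample, proposal_pdf, bound, n, seed=0):
    """Basic rejection sampling: draw x ~ proposal, accept with probability
    target_pdf(x) / (bound * proposal_pdf(x)), where bound must satisfy
    bound >= sup_x target_pdf(x) / proposal_pdf(x)."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        x = proposal_sample(rng)
        # accept iff u <= p(x) / (M q(x)), written without division
        if rng.random() * bound * proposal_pdf(x) <= target_pdf(x):
            out.append(x)
    return out

# Example: Beta(2,2) target p(x) = 6x(1-x) on [0,1], uniform(0,1) proposal.
# The ratio p/q is maximised at x = 0.5, giving the tight bound M = 1.5.
beta22 = lambda x: 6.0 * x * (1.0 - x)
samples = rejection_sample(beta22, lambda r: r.random(), lambda x: 1.0, 1.5, 5000)
```

The abstract's point is exactly the weak spot of this recipe: the bound M must be derived analytically for each target, which is what the paper's schemes for bounding likelihood ratios address.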

## 2009

## Journal Articles

Lazaro, M.; Sanchez-Fernandez, M.; Artés-Rodríguez, Antonio: Optimal Sensor Selection in Binary Heterogeneous Sensor Networks. Journal Article. IEEE Transactions on Signal Processing, 57 (4), pp. 1577–1587, 2009, ISSN: 1053-587X.

    @article{Lazaro2009,
      title     = {Optimal Sensor Selection in Binary Heterogeneous Sensor Networks},
      author    = {Lazaro, M. and Sanchez-Fernandez, M. and Artés-Rodríguez, Antonio},
      url       = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=4749309},
      issn      = {1053-587X},
      year      = {2009},
      date      = {2009-01-01},
      journal   = {IEEE Transactions on Signal Processing},
      volume    = {57},
      number    = {4},
      pages     = {1577--1587},
      abstract  = {We consider the problem of sensor selection in a heterogeneous sensor network when several types of binary sensors with different discrimination performance and costs are available. We want to analyze what is the optimal proportion of sensors of each class in a target detection problem when a total cost constraint is specified. We obtain the conditional distributions of the observations at the fusion center given the hypotheses, necessary to perform an optimal hypothesis test in this heterogeneous scenario. We characterize the performance of the tests by means of the symmetric Kullback-Leibler divergence, or J-divergence, applied to the conditional distributions under each hypothesis. By formulating the sensor selection as a constrained maximization problem, and showing the linearity of the J-divergence with the number of sensors of each class, we found that the optimal proportion of sensors is ``winner takes all'' like. The sensor class with the best performance/cost ratio is selected.},
      keywords  = {binary heterogeneous sensor networks, discrimination performance, Energy scaling, object detection, optimal sensor selection, performance-cost ratio, sensor networks, sensor selection, symmetric Kullback-Leibler divergence, target detection problem, Wireless Sensor Networks},
      pubstate  = {published},
      tppubtype = {article}
    }
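The winner-takes-all conclusion follows directly from the linearity the abstract mentions: if the total J-divergence is sum_k n_k J_k and the budget constraint is sum_k n_k c_k <= C, the linear program is maximised by spending the whole budget on the class with the largest ratio J_k / c_k. A minimal sketch of that selection rule, with all numbers hypothetical:

```python
def select_sensor_class(j_divs, costs, budget):
    """Winner-takes-all selection: because the objective sum_k n_k * J_k is
    linear in the sensor counts, the budget is best spent entirely on the
    class maximising the performance/cost ratio J_k / c_k."""
    best = max(range(len(j_divs)), key=lambda k: j_divs[k] / costs[k])
    n_best = int(budget // costs[best])   # as many sensors of that class as fit
    return best, n_best

# Hypothetical classes: per-sensor J-divergences and costs, budget 20.0.
cls, n = select_sensor_class([0.8, 1.0, 2.5], [2.0, 1.0, 4.0], 20.0)
```

Here class 1 wins (ratio 1.0 versus 0.4 and 0.625), even though class 2 has the largest per-sensor divergence, which is the essence of the paper's result.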

## Inproceedings

Martino, Luca; Miguez, Joaquin: An Adaptive Accept/Reject Sampling Algorithm for Posterior Probability Distributions. Inproceedings. 2009 IEEE/SP 15th Workshop on Statistical Signal Processing, pp. 45–48, IEEE, Cardiff, 2009, ISBN: 978-1-4244-2709-3.

    @inproceedings{Martino2009b,
      title     = {An Adaptive Accept/Reject Sampling Algorithm for Posterior Probability Distributions},
      author    = {Martino, Luca and Miguez, Joaquin},
      url       = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=5278644},
      isbn      = {978-1-4244-2709-3},
      year      = {2009},
      date      = {2009-01-01},
      booktitle = {2009 IEEE/SP 15th Workshop on Statistical Signal Processing},
      pages     = {45--48},
      publisher = {IEEE},
      address   = {Cardiff},
      abstract  = {Accept/reject sampling is a well-known method to generate random samples from arbitrary target probability distributions. It demands the design of a suitable proposal probability density function (pdf) from which candidate samples can be drawn. These samples are either accepted or rejected depending on a test involving the ratio of the target and proposal densities. In this paper we introduce an adaptive method to build a sequence of proposal pdf's that approximate the target density and hence can ensure a high acceptance rate. In order to illustrate the application of the method we design an accept/reject particle filter and then assess its performance and sampling efficiency numerically, by means of computer simulations.},
      keywords  = {adaptive accept/reject sampling, Adaptive rejection sampling, arbitrary target probability distributions, Computer Simulation, Filtering, Monte Carlo integration, Monte Carlo methods, posterior probability distributions, Probability, Probability density function, Probability distribution, Proposals, Rejection sampling, Sampling methods, sensor networks, Signal processing algorithms, signal sampling, Testing},
      pubstate  = {published},
      tppubtype = {inproceedings}
    }
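The idea of a sequence of proposals that approach the target can be illustrated with a toy adaptive envelope. The sketch below assumes a monotonically decreasing density on [0, 1], so that a piecewise-constant envelope taking the value of the pdf at each cell's left edge is always valid; every rejection splits the offending cell, tightening the envelope. This is only an illustration of the adaptive principle under that assumption, not the authors' algorithm (which targets general posterior distributions).

```python
import bisect
import random

def adaptive_reject_sample(pdf, n, seed=0):
    """Adaptive accept/reject sketch for a decreasing density on [0, 1].
    Envelope: piecewise constant, equal to pdf(left edge) on each cell,
    which upper-bounds pdf there. Each rejection refines the partition,
    so later proposals hug the target more closely."""
    rng = random.Random(seed)
    edges = [0.0, 1.0]                      # current partition of [0, 1]
    out = []
    while len(out) < n:
        heights = [pdf(edges[i]) for i in range(len(edges) - 1)]
        widths = [edges[i + 1] - edges[i] for i in range(len(edges) - 1)]
        masses = [h * w for h, w in zip(heights, widths)]
        # draw a cell proportionally to its envelope mass, then a point in it
        u = rng.random() * sum(masses)
        i = 0
        while u > masses[i]:
            u -= masses[i]
            i += 1
        x = edges[i] + rng.random() * widths[i]
        if rng.random() * heights[i] <= pdf(x):
            out.append(x)
        else:
            # adapt: split the cell where the candidate was rejected
            bisect.insort(edges, 0.5 * (edges[i] + edges[i + 1]))
    return out

tri = lambda x: 2.0 * (1.0 - x)   # decreasing triangular density on [0, 1]
xs = adaptive_reject_sample(tri, 3000)
```

As the partition refines, the gap between envelope and target shrinks, so the acceptance rate rises over time, which is the property the paper exploits inside its accept/reject particle filter.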