## 2016

Míguez, Joaquín; Vázquez, Manuel A.: A Proof of Uniform Convergence Over Time for a Distributed Particle Filter. Journal article, Signal Processing, 122, pp. 152–163, 2016, ISSN: 0165-1684. DOI: 10.1016/j.sigpro.2015.11.015. URL: http://www.sciencedirect.com/science/article/pii/S0165168415004077
Tags: Convergence analysis, Distributed algorithms, Journal, Parallelization, Particle filtering, Wireless Sensor Networks

Abstract: Distributed signal processing algorithms have become a hot topic in recent years. One class of algorithms that has received special attention is particle filters (PFs). However, most distributed PFs involve various heuristic or simplifying approximations and, as a consequence, classical convergence theorems for standard PFs do not hold for their distributed counterparts. In this paper, we analyze a distributed PF based on the non-proportional weight-allocation scheme of Bolic et al. (2005) and prove rigorously that, under certain stability assumptions, its asymptotic convergence is guaranteed uniformly over time, in such a way that approximation errors can be kept bounded with a fixed computational budget. To illustrate the theoretical findings, we carry out computer simulations for a target tracking problem. The numerical results show that the distributed PF has a negligible performance loss (compared to a centralized filter) for this problem and enable us to empirically validate the key assumptions of the analysis.

## 2015

Luengo, David; Martino, Luca; Elvira, Victor; Bugallo, Monica F.: Bias correction for distributed Bayesian estimators. In proceedings of the 2015 IEEE 6th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), pp. 253–256, IEEE, Cancun, 2015, ISBN: 978-1-4799-1963-5. DOI: 10.1109/CAMSAP.2015.7383784. URL: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=7383784
Tags: Bayes methods, Big data, Distributed databases, Estimation, Probability density function, Wireless Sensor Networks

Abstract: Dealing with the whole dataset in big data estimation problems is usually infeasible. A common solution consists of dividing the data into several smaller sets, performing distributed Bayesian estimation and combining the partial estimates to obtain a global estimate. A major problem of this approach is the presence of a non-negligible bias in the partial estimators, due to the mismatch between the unknown true prior and the prior assumed in the estimation. A simple method to mitigate the effect of this bias is proposed in this paper. Essentially, the approach is based on using a reference data set to obtain a rough estimate of the parameter of interest, i.e., a reference parameter. This information is then communicated to the partial filters that handle the smaller data sets, which can thus use a refined prior centered around this parameter. Simulation results confirm the good performance of this scheme.

Martino, Luca; Elvira, Victor; Luengo, David; Corander, Jukka: An Adaptive Population Importance Sampler: Learning From Uncertainty. Journal article, IEEE Transactions on Signal Processing, 63 (16), pp. 4422–4437, 2015, ISSN: 1053-587X. DOI: 10.1109/TSP.2015.2440215. URL: http://ieeexplore.ieee.org/articleDetails.jsp?arnumber=7117437
Tags: Adaptive importance sampling, adaptive multiple IS, adaptive population importance sampler, AMIS, APIS, Estimation, Importance sampling, IS estimators, iterative estimation, iterative methods, Journal, MC methods, Monte Carlo (MC) methods, Monte Carlo methods, population Monte Carlo, Proposals, Signal processing algorithms, simple temporal adaptation, Sociology, Standards, Wireless sensor network, Wireless Sensor Networks

Abstract: Monte Carlo (MC) methods are well-known computational techniques, widely used in different fields such as signal processing, communications and machine learning. An important class of MC methods is composed of importance sampling (IS) and its adaptive extensions, such as population Monte Carlo (PMC) and adaptive multiple IS (AMIS). In this paper, we introduce a novel adaptive and iterated importance sampler using a population of proposal densities. The proposed algorithm, named adaptive population importance sampling (APIS), provides a global estimation of the variables of interest iteratively, making use of all the samples previously generated. APIS combines a sophisticated scheme to build the IS estimators (based on the deterministic mixture approach) with a simple temporal adaptation (based on epochs). In this way, APIS is able to keep all the advantages of both AMIS and PMC, while minimizing their drawbacks. Furthermore, APIS is easily parallelizable. The cloud of proposals is adapted in such a way that local features of the target density can be better taken into account compared to single global adaptation procedures. The result is a fast, simple, robust, and high-performance algorithm applicable to a wide range of problems. Numerical results show the advantages of the proposed sampling scheme in four synthetic examples and a localization problem in a wireless sensor network.

## 2014

Miguez, Joaquin: On the uniform asymptotic convergence of a distributed particle filter. In proceedings of the 2014 IEEE 8th Sensor Array and Multichannel Signal Processing Workshop (SAM), pp. 241–244, IEEE, A Coruña, 2014, ISBN: 978-1-4799-1481-4. DOI: 10.1109/SAM.2014.6882385. URL: http://ieeexplore.ieee.org/articleDetails.jsp?arnumber=6882385
Tags: ad hoc networks, Approximation algorithms, approximation errors, Approximation methods, classical convergence theorems, Convergence, convergence of numerical methods, distributed particle filter scheme, distributed signal processing algorithms, Monte Carlo methods, parallel computing systems, particle filtering (numerical methods), Signal processing, Signal processing algorithms, stability assumptions, uniform asymptotic convergence, Wireless Sensor Networks, WSNs

Abstract: Distributed signal processing algorithms suitable for implementation over wireless sensor networks (WSNs) and ad hoc networks with communications and computing capabilities have become a hot topic in recent years. One class of algorithms that has received special attention is particle filters. However, most distributed versions of these methods involve various heuristic or simplifying approximations and, as a consequence, classical convergence theorems for standard particle filters do not hold for their distributed counterparts. In this paper, we look into a distributed particle filter scheme that has been proposed for implementation in both parallel computing systems and WSNs, and prove that, under certain stability assumptions regarding the physical system of interest, its asymptotic convergence is guaranteed. Moreover, we show that convergence is attained uniformly over time. This means that approximation errors can be kept bounded for an arbitrarily long period of time without having to progressively increase the computational effort.

## 2012

Henao-Mazo, W.; Bravo-Santos, Ángel M.: Finding Diverse Shortest Paths for the Routing Task in Wireless Sensor Networks. In proceedings of ICSNC 2012, The Seventh International Conference on Systems and Networks Communications, Lisboa, 2012. URL: http://www.iaria.org/conferences2012/ProgramICSNC12.html
Tags: Diverse Paths, K Shortest Paths, Wireless Sensor Networks

Abstract: Wireless sensor networks are deployed with the idea of collecting field information on different variables such as temperature, position, humidity, etc., from several resource-constrained sensor nodes, and then relaying those data to a sink node or base station. Therefore, path finding for routing must be carried out with strategies that make it possible to manage the network's limited resources efficiently, while at the same time keeping the network throughput at appreciable levels. Many routing schemes search for a single path with low power dissipation, which may not be convenient for increasing the network lifetime and long-term connectivity. In an attempt to overcome such eventualities, we propose a relaying scenario that uses multiple diverse paths, obtained by considering the links among network nodes, which can provide reliable data transmission. When data is transmitted across diverse paths that offer low retransmission rates, the battery demand can be decreased and the network lifetime extended. We show, by using simulations, that the reliability in packet reception and the power dissipation that our scheme offers compare favourably with similar implementations in the literature.

## 2011

Achutegui, Katrin; Miguez, Joaquin: A Parallel Resampling Scheme and its Application to Distributed Particle Filtering in Wireless Networks. In proceedings of the 2011 4th IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), pp. 81–84, IEEE, San Juan, 2011, ISBN: 978-1-4577-2105-2. URL: http://ieeexplore.ieee.org/articleDetails.jsp?arnumber=6136051
Tags: Approximation algorithms, Approximation methods, Artificial neural networks, distributed resampling, DRNA technique, Markov processes, nonproportional allocation algorithm, parallel resampling scheme, PF, quantization, Signal processing, Vectors, Wireless sensor network, Wireless Sensor Networks, WSN

Abstract: We address the design of a particle filter (PF) that can be implemented in a distributed manner over a network of wireless sensor nodes, each of them collecting its own local data. This is a problem that has received considerable attention lately, and several methods based on consensus, the transmission of likelihood information, and the truncation and/or quantization of data have been proposed. However, all existing schemes suffer from limitations related either to the amount of communication required among the nodes or to the accuracy of the filter outputs. In this work we propose a novel distributed PF that is built around the distributed resampling with non-proportional allocation (DRNA) algorithm. This scheme guarantees the properness of the particle approximations produced by the filter and has been shown to be both efficient and accurate when compared with centralized PFs. The standard DRNA technique, however, places stringent demands on the communications among nodes that turn out to be impractical for a typical wireless sensor network (WSN). In this paper we investigate how to reduce this communication load by using (i) a random model for the spread of data over the WSN and (ii) methods that enable the out-of-sequence processing of sensor observations. A simple numerical illustration of the performance of the new algorithm compared with a centralized PF is provided.

Plata-Chaves, Jorge; Lazaro, Marcelino; Artés-Rodríguez, Antonio: Optimal Neyman-Pearson Fusion in Two-Dimensional Sensor Networks with Serial Architecture and Dependent Observations. In proceedings of Information Fusion (FUSION), 2011 Proceedings of the 14th International Conference on, pp. 1–6, Chicago, 2011, ISBN: 978-1-4577-0267-9. URL: http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=5977545&searchWithin%3Dartes+rodriguez%26sortType%3Dasc_p_Sequence%26filter%3DAND%28p_IS_Number%3A5977431%29
Tags: Bayesian methods, binary distributed detection problem, decision theory, dependent observations, Joints, local decision rule, Measurement uncertainty, Network topology, Neyman-Pearson criterion, optimal Neyman-Pearson fusion, optimum distributed detection, Parallel architectures, Performance evaluation, Probability density function, sensor dependent observations, sensor fusion, serial architecture, serial network topology, two-dimensional sensor networks, Wireless Sensor Networks

Abstract: In this correspondence, we consider a sensor network with a serial architecture. When solving a binary distributed detection problem where the sensor observations are dependent under each of the two possible hypotheses, each fusion stage of the network applies a local decision rule. We assume that, based on the information available at each fusion stage, the decision rules provide a binary message regarding the presence or absence of an event of interest. Under this scenario and under a Neyman-Pearson formulation, we derive the optimal decision rules associated with each fusion stage. As happens when the sensor observations are independent, we are able to show that, under the Neyman-Pearson criterion, the optimal fusion rules of a serial configuration with dependent observations also match optimal Neyman-Pearson tests.

## 2010

Perez-Cruz, Fernando; Kulkarni, S. R.: Robust and Low Complexity Distributed Kernel Least Squares Learning in Sensor Networks. Journal article, IEEE Signal Processing Letters, 17 (4), pp. 355–358, 2010, ISSN: 1070-9908. URL: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=5395679
Tags: communication complexity, Consensus, distributed learning, kernel methods, learning (artificial intelligence), low complexity distributed kernel least squares learning, message passing, message-passing algorithms, robust nonparametric statistics, sensor network learning, sensor networks, telecommunication computing, Wireless Sensor Networks

Abstract: We present a novel mechanism for consensus building in sensor networks. The proposed algorithm has three main properties that make it suitable for sensor network learning. First, the proposed algorithm is based on robust nonparametric statistics and thereby needs little prior knowledge about the network and the function that needs to be estimated. Second, the algorithm uses only local information about the network and it communicates only with nearby sensors. Third, the algorithm is completely asynchronous and robust. It does not need to coordinate the sensors to estimate the underlying function and it is not affected if other sensors in the network stop working. Therefore, the proposed algorithm is an ideal candidate for sensor networks deployed in remote and inaccessible areas, which might need to change their objective once they have been set up.

## 2009

Bravo-Santos, Ángel M.; Djuric, Petar M.: Cooperative Relay Communications in Mesh Networks. In proceedings of the 2009 IEEE 10th Workshop on Signal Processing Advances in Wireless Communications, pp. 499–503, IEEE, Perugia, 2009, ISBN: 978-1-4244-3695-8. URL: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=5161835
Tags: binary transmission, bit error probability, Bit error rate, cooperative relay communications, decode-and-forward relays, Detectors, error statistics, Maximum likelihood decoding, maximum likelihood detection, Mesh networks, mesh wireless networks, multi-hop networks, Network topology, optimal node decision rules, Peer to peer computing, radio networks, Relays, spread spectrum communication, telecommunication network topology, Wireless Sensor Networks

Abstract: In the previous literature on cooperative relay communications, the emphasis has been on the study of multi-hop networks. In this paper we address mesh wireless networks that use decode-and-forward relays, for which we derive the optimal node decision rules in the case of binary transmission. We also obtain the expression for the overall bit error probability. We compare the mesh networks with multi-hop networks and show the improvement in performance that can be achieved with them when both networks have the same number of nodes and an equal number of hops.

Perez-Cruz, Fernando; Kulkarni, S. R.: Distributed Least Square for Consensus Building in Sensor Networks. In proceedings of the 2009 IEEE International Symposium on Information Theory, pp. 2877–2881, IEEE, Seoul, 2009, ISBN: 978-1-4244-4312-3. URL: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=5205336
Tags: Change detection algorithms, Channel Coding, Distributed computing, distributed least square method, graphical models, Inference algorithms, Kernel, Least squares methods, nonparametric statistics, Parametric statistics, robustness, sensor-network learning, statistical analysis, Telecommunication network reliability, Wireless sensor network, Wireless Sensor Networks

Abstract: We present a novel mechanism for consensus building in sensor networks. The proposed algorithm has three main properties that make it suitable for general sensor-network learning. First, the proposed algorithm is based on robust nonparametric statistics and thereby needs little prior knowledge about the network and the function that needs to be estimated. Second, the algorithm uses only local information about the network and it communicates only with nearby sensors. Third, the algorithm is completely asynchronous and robust. It does not need to coordinate the sensors to estimate the underlying function and it is not affected if other sensors in the network stop working. Therefore, the proposed algorithm is an ideal candidate for sensor networks deployed in remote and inaccessible areas, which might need to change their objective once they have been set up.

Lazaro, Marcelino; Sanchez-Fernandez, Matilde; Artés-Rodríguez, Antonio Optimal Sensor Selection in Binary Heterogeneous Sensor Networks Journal Article IEEE Transactions on Signal Processing, 57 (4), pp. 1577–1587, 2009, ISSN: 1053-587X. Abstract | Links | BibTeX | Tags: binary heterogeneous sensor networks, discrimination performance, Energy scaling, object detection, optimal sensor selection, performance-cost ratio, sensor networks, sensor selection, symmetric Kullback-Leibler divergence, target detection problem, Wireless Sensor Networks @article{Lazaro2009bb, title = {Optimal Sensor Selection in Binary Heterogeneous Sensor Networks}, author = {Marcelino Lazaro and Matilde Sanchez-Fernandez and Antonio Artés-Rodríguez}, url = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=4749309}, issn = {1053-587X}, year = {2009}, date = {2009-01-01}, journal = {IEEE Transactions on Signal Processing}, volume = {57}, number = {4}, pages = {1577--1587}, abstract = {We consider the problem of sensor selection in a heterogeneous sensor network when several types of binary sensors with different discrimination performance and costs are available. We want to analyze what is the optimal proportion of sensors of each class in a target detection problem when a total cost constraint is specified. We obtain the conditional distributions of the observations at the fusion center given the hypotheses, necessary to perform an optimal hypothesis test in this heterogeneous scenario. We characterize the performance of the tests by means of the symmetric Kullback-Leibler divergence, or J-divergence, applied to the conditional distributions under each hypothesis. By formulating the sensor selection as a constrained maximization problem, and showing the linearity of the J-divergence with the number of sensors of each class, we found that the optimal proportion of sensors is "winner takes all"-like.
The sensor class with the best performance/cost ratio is selected.}, keywords = {binary heterogeneous sensor networks, discrimination performance, Energy scaling, object detection, optimal sensor selection, performance-cost ratio, sensor networks, sensor selection, symmetric Kullback-Leibler divergence, target detection problem, Wireless Sensor Networks}, pubstate = {published}, tppubtype = {article} } We consider the problem of sensor selection in a heterogeneous sensor network when several types of binary sensors with different discrimination performance and costs are available. We want to analyze what is the optimal proportion of sensors of each class in a target detection problem when a total cost constraint is specified. We obtain the conditional distributions of the observations at the fusion center given the hypotheses, necessary to perform an optimal hypothesis test in this heterogeneous scenario. We characterize the performance of the tests by means of the symmetric Kullback-Leibler divergence, or J-divergence, applied to the conditional distributions under each hypothesis. By formulating the sensor selection as a constrained maximization problem, and showing the linearity of the J-divergence with the number of sensors of each class, we found that the optimal proportion of sensors is "winner takes all"-like. The sensor class with the best performance/cost ratio is selected. |