## 2015

```bibtex
@inproceedings{Santos2015,
  title     = {Block Expectation Propagation Equalization for ISI Channels},
  author    = {Irene Santos and Juan Jose Murillo-Fuentes and Pablo M Olmos},
  url       = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=7362409},
  doi       = {10.1109/EUSIPCO.2015.7362409},
  isbn      = {978-0-9928-6263-3},
  year      = {2015},
  date      = {2015-08-01},
  booktitle = {2015 23rd European Signal Processing Conference (EUSIPCO)},
  pages     = {379--383},
  publisher = {IEEE},
  address   = {Nice},
  abstract  = {Current communication systems use high-order modulations and channels with memory. However, as the memory of the channel and the order of the constellation grow, optimal equalizers such as the BCJR algorithm become computationally intractable, since their complexity increases exponentially with the number of taps and the size of the modulation. In this paper, we propose a novel low-complexity hard- and soft-output equalizer based on the Expectation Propagation (EP) algorithm that provides high-accuracy posterior probability estimates at the input of the channel decoder, with a computational complexity similar to that of the linear MMSE equalizer. We experimentally show that this quasi-optimal solution outperforms classical solutions, reducing the bit error probability at low complexity when LDPC channel decoding is used, and avoids the curse of dimensionality in channel memory and constellation size.},
  keywords  = {Approximation algorithms, Approximation methods, BCJR algorithm, channel equalization, Complexity theory, Decoding, Equalizers, expectation propagation, ISI, low complexity, Signal processing algorithms},
  pubstate  = {published},
  tppubtype = {inproceedings}
}
```

## 2014

```bibtex
@inproceedings{Miguez2014,
  title     = {On the uniform asymptotic convergence of a distributed particle filter},
  author    = {Joaquin Miguez},
  url       = {http://ieeexplore.ieee.org/articleDetails.jsp?arnumber=6882385},
  doi       = {10.1109/SAM.2014.6882385},
  isbn      = {978-1-4799-1481-4},
  year      = {2014},
  date      = {2014-06-01},
  booktitle = {2014 IEEE 8th Sensor Array and Multichannel Signal Processing Workshop (SAM)},
  pages     = {241--244},
  publisher = {IEEE},
  address   = {A Coruña},
  abstract  = {Distributed signal processing algorithms suitable for implementation over wireless sensor networks (WSNs) and ad hoc networks with communication and computing capabilities have become a hot topic in recent years. One class of algorithms that has received special attention is particle filters. However, most distributed versions of these methods involve various heuristic or simplifying approximations and, as a consequence, classical convergence theorems for standard particle filters do not hold for their distributed counterparts. In this paper, we look into a distributed particle filter scheme that has been proposed for implementation in both parallel computing systems and WSNs, and prove that, under certain stability assumptions regarding the physical system of interest, its asymptotic convergence is guaranteed. Moreover, we show that convergence is attained uniformly over time. This means that approximation errors can be kept bounded for an arbitrarily long period of time without having to progressively increase the computational effort.},
  keywords  = {ad hoc networks, Approximation algorithms, approximation errors, Approximation methods, classical convergence theorems, Convergence, convergence of numerical methods, distributed particle filter scheme, distributed signal processing algorithms, Monte Carlo methods, parallel computing systems, particle filtering (numerical methods), Signal processing, Signal processing algorithms, stability assumptions, uniform asymptotic convergence, Wireless Sensor Networks, WSNs},
  pubstate  = {published},
  tppubtype = {inproceedings}
}
```
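The distributed scheme analyzed above builds on the standard (centralized) particle filter. As a rough illustration of the basic propagate/weight/resample loop whose convergence is at stake, here is a minimal bootstrap particle filter for a hypothetical 1-D linear-Gaussian model; the model, parameter values, and function name are illustrative, not from the paper.

```python
import numpy as np

def bootstrap_pf(observations, n_particles=500, a=0.9, q=1.0, r=1.0, rng=None):
    """Track x_t = a*x_{t-1} + u_t, y_t = x_t + v_t, with u~N(0,q), v~N(0,r).

    A minimal bootstrap particle filter sketch; distributed variants split the
    particle set across nodes, but the core loop below is the same.
    """
    rng = np.random.default_rng(rng)
    x = rng.normal(0.0, 1.0, n_particles)       # initial particle cloud
    estimates = []
    for y in observations:
        x = a * x + rng.normal(0.0, np.sqrt(q), n_particles)  # propagate
        logw = -0.5 * (y - x) ** 2 / r                        # log-likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()                                          # normalize weights
        estimates.append(np.dot(w, x))                        # posterior mean
        x = x[rng.choice(n_particles, n_particles, p=w)]      # resample
    return np.array(estimates)
```

On simulated data from the same model, the filter's posterior-mean estimates should have a lower mean-squared error than the raw observations.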

```bibtex
@article{Cespedes2014,
  title     = {Expectation Propagation Detection for High-order High-dimensional MIMO Systems},
  author    = {Javier Cespedes and Pablo M Olmos and Matilde Sanchez-Fernandez and Fernando Perez-Cruz},
  url       = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6841617},
  issn      = {0090-6778},
  year      = {2014},
  date      = {2014-01-01},
  journal   = {IEEE Transactions on Communications},
  volume    = {PP},
  number    = {99},
  pages     = {1--1},
  abstract  = {Modern communication systems use multiple-input multiple-output (MIMO) and high-order QAM constellations to maximize spectral efficiency. However, as the number of antennas and the order of the constellation grow, the design of efficient and low-complexity MIMO receivers poses significant technical challenges. For example, symbol detection can no longer rely on maximum likelihood detection or sphere-decoding methods, as their complexity increases exponentially with the number of transmitters/receivers. In this paper, we propose a low-complexity, high-accuracy MIMO symbol detector based on the Expectation Propagation (EP) algorithm. EP iteratively approximates, in polynomial time, the posterior distribution of the transmitted symbols. We also show that our EP MIMO detector outperforms classic and state-of-the-art solutions, reducing the symbol error rate at reduced computational complexity.},
  keywords  = {Approximation methods, computational complexity, Detectors, MIMO, Signal to noise ratio, Vectors},
  pubstate  = {published},
  tppubtype = {article}
}
```
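As a loose illustration of the kind of detector described above, the sketch below implements a toy EP-style detector for a real-valued channel with a binary alphabet, iterating Gaussian cavity and moment-matching updates against the discrete symbol prior. It is a simplified sketch under stated assumptions (real signals, BPSK symbols, ad hoc damping and variance clipping), not the authors' exact algorithm.

```python
import numpy as np

def ep_mimo_detect(y, H, sigma2, symbols=(-1.0, 1.0), n_iter=10, damp=0.7):
    """Toy EP symbol detector for a real-valued model y = H x + n (sketch)."""
    n = H.shape[1]
    s = np.asarray(symbols)
    m = np.zeros(n)              # means of the Gaussian approximating factors
    v = np.full(n, 10.0)         # variances (start diffuse)
    for _ in range(n_iter):
        # Gaussian posterior given the current factor approximations
        Sigma = np.linalg.inv(H.T @ H / sigma2 + np.diag(1.0 / v))
        mu = Sigma @ (H.T @ y / sigma2 + m / v)
        d = np.diag(Sigma)
        # Cavity marginals: remove each factor's own contribution
        v_cav = 1.0 / np.maximum(1.0 / d - 1.0 / v, 1e-8)
        m_cav = v_cav * (mu / d - m / v)
        # Moment matching against the discrete symbol alphabet
        logp = -0.5 * (s[None, :] - m_cav[:, None]) ** 2 / v_cav[:, None]
        p = np.exp(logp - logp.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        mean_t = p @ s
        var_t = np.maximum(p @ s ** 2 - mean_t ** 2, 1e-8)
        # Damped factor update, clipped to keep variances positive
        v_new = 1.0 / np.maximum(1.0 / var_t - 1.0 / v_cav, 1e-8)
        m_new = v_new * (mean_t / var_t - m_cav / v_cav)
        v = damp * v_new + (1.0 - damp) * v
        m = damp * m_new + (1.0 - damp) * m
    hard = s[np.argmin(np.abs(mean_t[:, None] - s[None, :]), axis=1)]
    return hard, mean_t
```

At high SNR the hard decisions recover the transmitted symbols, while `mean_t` plays the role of the soft (posterior-mean) output.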

```bibtex
@inproceedings{Cespedes2014b,
  title     = {Improved Performance of LDPC-Coded MIMO Systems with EP-based Soft-Decisions},
  author    = {Javier Cespedes and Pablo M Olmos and Matilde Sanchez-Fernandez and Fernando Perez-Cruz},
  url       = {http://ieeexplore.ieee.org/articleDetails.jsp?arnumber=6875183},
  isbn      = {978-1-4799-5186-4},
  year      = {2014},
  date      = {2014-01-01},
  booktitle = {2014 IEEE International Symposium on Information Theory},
  pages     = {1997--2001},
  publisher = {IEEE},
  address   = {Honolulu},
  abstract  = {Modern communication systems use efficient encoding schemes, multiple-input multiple-output (MIMO) and high-order QAM constellations to maximize spectral efficiency. However, as the dimensions of the system grow, the design of efficient and low-complexity MIMO receivers poses technical challenges. Symbol detection can no longer rely on conventional approaches for posterior probability computation due to their complexity. Marginalizing this posterior to obtain the per-antenna soft-bit probabilities fed to a channel decoder is computationally challenging when realistic signaling is used. In this work, we propose to use the Expectation Propagation (EP) algorithm to provide an accurate low-complexity Gaussian approximation to the posterior, easily solving the posterior marginalization problem. EP soft-bit probabilities are used in an LDPC-coded MIMO system, achieving an outstanding performance improvement compared to similar approaches in the literature for low-complexity LDPC MIMO decoding.},
  keywords  = {Approximation algorithms, Approximation methods, approximation theory, Channel Coding, channel decoder, communication complexity, complexity, Complexity theory, Detectors, encoding scheme, EP soft bit probability, EP-based soft decision, error statistics, expectation propagation, expectation-maximisation algorithm, expectation-propagation algorithm, Gaussian approximation, Gaussian channels, LDPC, LDPC coded MIMO system, Low Complexity receiver, MIMO, MIMO communication, MIMO communication systems, MIMO receiver, modern communication system, multiple input multiple output, parity check codes, per-antenna soft bit probability, posterior marginalization problem, posterior probability computation, QAM constellation, Quadrature amplitude modulation, radio receivers, signaling, spectral analysis, spectral efficiency maximization, symbol detection, telecommunication signalling, Vectors},
  pubstate  = {published},
  tppubtype = {inproceedings}
}
```

## 2013

```bibtex
@article{Salamanca2013a,
  title     = {Tree-Structured Expectation Propagation for LDPC Decoding over BMS Channels},
  author    = {Luis Salamanca and Pablo M Olmos and Fernando Perez-Cruz and Juan Jose Murillo-Fuentes},
  url       = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6587624},
  issn      = {0090-6778},
  year      = {2013},
  date      = {2013-01-01},
  journal   = {IEEE Transactions on Communications},
  volume    = {61},
  number    = {10},
  pages     = {4086--4095},
  abstract  = {In this paper, we put forward the tree-structured expectation propagation (TEP) algorithm for decoding block and convolutional low-density parity-check codes over any binary channel. We have already shown that TEP improves belief propagation (BP) over the binary erasure channel (BEC) by imposing marginal constraints over a set of pairs of variables that form a tree or a forest. The TEP decoder is a message-passing algorithm that sequentially builds a tree/forest of erased variables to capture additional information disregarded by the standard BP decoder, which leads to a noticeable reduction of the error rate for finite-length codes. In this paper, we show how the TEP can be extended to any channel, specifically to binary memoryless symmetric (BMS) channels. We particularly focus on how the TEP algorithm can be adapted to any channel model and, more importantly, on how to choose the tree/forest to keep the gains observed for block and convolutional LDPC codes over the BEC.},
  keywords  = {Approximation algorithms, Approximation methods, BEC, belief propagation, binary erasure channel, binary memoryless symmetric channels, BMS channels, Channel Coding, Complexity theory, convolutional codes, convolutional low-density parity-check codes, Decoding, decoding block, expectation propagation, finite-length codes, LDPC decoding, message-passing algorithm, parity check codes, Probability density function, sparse linear codes, TEP algorithm, tree-structured expectation propagation, trees (mathematics), Vegetation},
  pubstate  = {published},
  tppubtype = {article}
}
```

```bibtex
@article{Salamanca2013b,
  title     = {Tree Expectation Propagation for ML Decoding of LDPC Codes over the BEC},
  author    = {Luis Salamanca and Pablo M Olmos and Juan Jose Murillo-Fuentes and Fernando Perez-Cruz},
  url       = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6384612},
  issn      = {0090-6778},
  year      = {2013},
  date      = {2013-01-01},
  journal   = {IEEE Transactions on Communications},
  volume    = {61},
  number    = {2},
  pages     = {465--473},
  abstract  = {We propose a decoding algorithm for LDPC codes that achieves the maximum likelihood (ML) solution over the binary erasure channel (BEC). In this channel, the tree-structured expectation propagation (TEP) decoder improves the peeling decoder (PD) by processing check nodes of degree one and two. However, it does not achieve the ML solution, as the tree structure of the TEP allows only for approximate inference. In this paper, we provide the procedure to construct the structure needed for exact inference. This algorithm, denoted generalized tree-structured expectation propagation (GTEP), modifies the code graph by recursively eliminating any check node and merging this information into the remaining graph. Upon completion, the GTEP decoder either provides the unique ML solution or a tree graph in which the number of parent nodes indicates the multiplicity of the ML solution. We also explain the algorithm as a Gaussian elimination method, relating the GTEP to other ML solutions. Compared to previous approaches, it presents equivalent complexity, exhibits a simpler graphical message-passing procedure and, most interestingly, can be generalized to other channels.},
  keywords  = {approximate inference, Approximation algorithms, Approximation methods, BEC, binary codes, binary erasure channel, code graph, Complexity theory, equivalent complexity, Gaussian elimination method, Gaussian processes, generalized tree-structured expectation propagation, graphical message-passing procedure, graphical models, LDPC codes, Maximum likelihood decoding, maximum likelihood solution, ML decoding, parity check codes, peeling decoder, tree expectation propagation, tree graph, Tree graphs, tree-structured expectation propagation, tree-structured expectation propagation decoder, trees (mathematics)},
  pubstate  = {published},
  tppubtype = {article}
}
```
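The abstract above explains the GTEP decoder as a Gaussian elimination method. As a bare-bones illustration of that viewpoint (a generic sketch, not the GTEP message-passing algorithm itself), erased bits can be recovered by GF(2) Gaussian elimination on the parity-check system, with the number of free (unpivoted) columns indicating the multiplicity of the ML solution:

```python
import numpy as np

def gf2_solve_erasures(H, y):
    """Recover erased bits by GF(2) Gaussian elimination (illustrative sketch).

    y entries are 0, 1, or None (erased). Returns (filled word, n_free):
    n_free unpivoted columns means the ML solution has multiplicity 2**n_free.
    """
    H = np.asarray(H, dtype=int) % 2
    erased = [i for i, bit in enumerate(y) if bit is None]
    known = np.array([bit if bit is not None else 0 for bit in y], dtype=int)
    A = H[:, erased].copy()          # subsystem over the erased positions
    b = (H @ known) % 2              # syndrome contributed by the known bits
    rows, cols = A.shape
    pivots, r = [], 0
    for c in range(cols):            # Gauss-Jordan elimination over GF(2)
        piv = next((i for i in range(r, rows) if A[i, c]), None)
        if piv is None:
            continue                 # no pivot: c is a free column
        A[[r, piv]] = A[[piv, r]]
        b[[r, piv]] = b[[piv, r]]
        for i in range(rows):
            if i != r and A[i, c]:
                A[i] ^= A[r]
                b[i] ^= b[r]
        pivots.append(c)
        r += 1
    x = np.zeros(cols, dtype=int)
    for i, c in enumerate(pivots):
        x[c] = b[i]                  # free variables stay at 0
    filled = list(y)
    for pos, val in zip(erased, x):
        filled[pos] = int(val)
    return filled, cols - len(pivots)
```

With few enough erasures the system has full column rank and the unique ML codeword is returned; otherwise `n_free > 0` flags the multiplicity.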

```bibtex
@article{Olmos2013b,
  title     = {Tree-Structure Expectation Propagation for LDPC Decoding Over the BEC},
  author    = {Pablo M Olmos and Juan Jose Murillo-Fuentes and Fernando Perez-Cruz},
  url       = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6451276},
  issn      = {0018-9448},
  year      = {2013},
  date      = {2013-01-01},
  journal   = {IEEE Transactions on Information Theory},
  volume    = {59},
  number    = {6},
  pages     = {3354--3377},
  abstract  = {We present the tree-structure expectation propagation (Tree-EP) algorithm to decode low-density parity-check (LDPC) codes over discrete memoryless channels (DMCs). Expectation propagation generalizes belief propagation (BP) in two ways. First, it can be used with any exponential family distribution over the cliques in the graph. Second, it can impose additional constraints on the marginal distributions. We use this second property to impose pairwise marginal constraints over pairs of variables connected to a check node of the LDPC code's Tanner graph. Thanks to these additional constraints, the Tree-EP marginal estimates for each variable in the graph are more accurate than those provided by BP. We also reformulate the Tree-EP algorithm for the binary erasure channel (BEC) as a peeling-type algorithm (TEP) and show that it has the same computational complexity as BP while decoding a higher fraction of errors. We describe the TEP decoding process by a set of differential equations that represents the expected residual graph evolution as a function of the code parameters. The solution of these equations is used to predict the TEP decoder performance in both the asymptotic and finite-length regimes over the BEC. While the asymptotic threshold of the TEP decoder is the same as that of the BP decoder for regular and optimized codes, we propose a scaling law for finite-length LDPC codes, which accurately approximates the TEP's improved performance and facilitates its optimization.},
  keywords  = {Algorithm design and analysis, Approximation algorithms, Approximation methods, BEC, belief propagation, Belief-propagation (BP), binary erasure channel, Complexity theory, decode low-density parity-check codes, Decoding, discrete memoryless channels, expectation propagation, finite-length analysis, LDPC codes, LDPC decoding, parity check codes, peeling-type algorithm, Probability density function, random graph evolution, Tanner graph, tree-structure expectation propagation},
  pubstate  = {published},
  tppubtype = {article}
}
```
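The TEP reformulation above is a peeling-type algorithm. For reference, here is a minimal peeling decoder for the BEC, the baseline that TEP extends: it repeatedly finds a check node with exactly one erased neighbour and solves it by parity. Function and variable names are illustrative.

```python
import numpy as np

def peel_decode(H, y):
    """Peeling decoder over the BEC (sketch). y entries: 0, 1, or None (erased)."""
    y = list(y)
    progress = True
    while progress:
        progress = False
        for row in np.asarray(H):
            nbrs = np.flatnonzero(row)
            missing = [i for i in nbrs if y[i] is None]
            if len(missing) == 1:           # degree-one check: solvable
                # parity forces the erased bit to the XOR of the known ones
                y[missing[0]] = sum(y[i] for i in nbrs if y[i] is not None) % 2
                progress = True
    return y
```

The decoder stops when no degree-one check remains; any entries still `None` form the residual (stopping-set) graph that TEP processes further via degree-two checks.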

```bibtex
@inproceedings{Koblents2013a,
  title     = {A Population Monte Carlo Scheme for Computational Inference in High Dimensional Spaces},
  author    = {Eugenia Koblents and Joaquin Miguez},
  url       = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6638881},
  issn      = {1520-6149},
  year      = {2013},
  date      = {2013-01-01},
  booktitle = {2013 IEEE International Conference on Acoustics, Speech and Signal Processing},
  pages     = {6318--6322},
  publisher = {IEEE},
  address   = {Vancouver},
  abstract  = {In this paper we address the Monte Carlo approximation of integrals with respect to probability distributions in high-dimensional spaces. In particular, we investigate the population Monte Carlo (PMC) scheme, which is based on an iterative importance sampling (IS) approach. Both IS and PMC suffer from the well-known problem of degeneracy of the importance weights (IWs), which is closely related to the curse of dimensionality and limits their applicability in large-scale practical problems. In this paper we investigate a novel PMC scheme that consists of performing nonlinear transformations of the IWs in order to smooth their variations and avoid degeneracy. We apply the modified IS scheme to the well-known mixture-PMC (MPMC) algorithm, which constructs the importance functions as mixtures of kernels. We present numerical results that show how the modified version of MPMC clearly outperforms the original scheme.},
  keywords  = {Approximation methods, computational inference, degeneracy of importance weights, high dimensional spaces, Importance sampling, importance weights, iterative importance sampling, iterative methods, mixture-PMC, mixture-PMC algorithm, Monte Carlo methods, MPMC, nonlinear transformations, population Monte Carlo, population Monte Carlo scheme, Probability density function, probability distributions, Proposals, Sociology, Standards},
  pubstate  = {published},
  tppubtype = {inproceedings}
}
```
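The key idea above is to transform the importance weights nonlinearly so that no single weight dominates. One simple such transformation is clipping the largest weights; the sketch below applies it inside a plain self-normalized importance sampler. This is a hedged illustration of the weight-transformation idea, not the MPMC algorithm.

```python
import numpy as np

def clipped_is_estimate(target_logpdf, proposal_sampler, proposal_logpdf,
                        n=5000, clip_frac=0.05, rng=None):
    """Self-normalized importance sampling with the largest weights clipped,
    one simple nonlinear transformation of the importance weights (sketch)."""
    rng = np.random.default_rng(rng)
    x = proposal_sampler(rng, n)
    logw = target_logpdf(x) - proposal_logpdf(x)
    w = np.exp(logw - logw.max())          # unnormalized importance weights
    k = max(1, int(clip_frac * n))
    thresh = np.partition(w, -k)[-k]       # k-th largest weight
    w = np.minimum(w, thresh)              # clip: flatten the heaviest weights
    w /= w.sum()
    return np.dot(w, x)                    # estimate of the target mean
```

Clipping trades a small bias for a large variance reduction when the raw weights would otherwise degenerate to a handful of samples.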

Luengo, David; Via, Javier; Monzon, Sandra; Trigano, Tom; Artés-Rodríguez, Antonio Cross-Products LASSO Inproceedings 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 6118–6122, IEEE, Vancouver, 2013, ISSN: 1520-6149. Abstract | Links | BibTeX | Tags: Approximation methods, approximation theory, concave programming, convex programming, Cost function, cross-product LASSO cost function, Dictionaries, dictionary, Encoding, LASSO, learning (artificial intelligence), negative co-occurrence, negative cooccurrence phenomenon, nonconvex optimization problem, Signal processing, signal processing application, signal reconstruction, sparse coding, sparse learning approach, Sparse matrices, sparsity-aware learning, successive convex approximation, Vectors @inproceedings{Luengo2013, title = {Cross-Products LASSO}, author = {David Luengo and Javier Via and Sandra Monzon and Tom Trigano and Antonio Artés-Rodríguez}, url = {http://ieeexplore.ieee.org/articleDetails.jsp?arnumber=6638840}, issn = {1520-6149}, year = {2013}, date = {2013-01-01}, booktitle = {2013 IEEE International Conference on Acoustics, Speech and Signal Processing}, pages = {6118--6122}, publisher = {IEEE}, address = {Vancouver}, abstract = {Negative co-occurrence is a common phenomenon in many signal processing applications. In some cases the signals involved are sparse, and this information can be exploited to recover them. In this paper, we present a sparse learning approach that explicitly takes into account negative co-occurrence. This is achieved by adding a novel penalty term to the LASSO cost function based on the cross-products between the reconstruction coefficients. Although the resulting optimization problem is non-convex, we develop a new and efficient method for solving it based on successive convex approximations. 
Results on synthetic data, for both complete and overcomplete dictionaries, are provided to validate the proposed approach.}, keywords = {Approximation methods, approximation theory, concave programming, convex programming, Cost function, cross-product LASSO cost function, Dictionaries, dictionary, Encoding, LASSO, learning (artificial intelligence), negative co-occurrence, negative cooccurrence phenomenon, nonconvex optimization problem, Signal processing, signal processing application, signal reconstruction, sparse coding, sparse learning approach, Sparse matrices, sparsity-aware learning, successive convex approximation, Vectors}, pubstate = {published}, tppubtype = {inproceedings} } Negative co-occurrence is a common phenomenon in many signal processing applications. In some cases the signals involved are sparse, and this information can be exploited to recover them. In this paper, we present a sparse learning approach that explicitly takes into account negative co-occurrence. This is achieved by adding a novel penalty term to the LASSO cost function based on the cross-products between the reconstruction coefficients. Although the resulting optimization problem is non-convex, we develop a new and efficient method for solving it based on successive convex approximations. Results on synthetic data, for both complete and overcomplete dictionaries, are provided to validate the proposed approach. |
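As an illustration of the penalty described in this abstract, the sketch below adds a cross-product term to a LASSO-style cost so that two coefficients are penalized for being active simultaneously. The exact form and weighting of the penalty in the paper may differ; the dictionary, signal, and parameter values here are toy assumptions:

```python
import numpy as np

def cross_products_lasso_cost(y, D, x, lam, gamma):
    """Illustrative cost: squared error + L1 penalty + a cross-product term
    sum_{i != j} |x_i| |x_j| that discourages co-occurrence of coefficients.
    Not the authors' exact formulation; a hedged sketch of the idea."""
    r = y - D @ x
    l1 = np.sum(np.abs(x))
    cross = l1**2 - np.sum(x**2)   # equals the sum over i != j of |x_i| |x_j|
    return 0.5 * r @ r + lam * l1 + gamma * cross

rng = np.random.default_rng(1)
D = rng.normal(size=(20, 40))      # toy overcomplete dictionary: 40 atoms in R^20
y = D[:, 3]                        # signal generated by a single atom

x_single = np.zeros(40); x_single[3] = 1.0                 # one active atom
x_pair = np.zeros(40); x_pair[3] = 0.5; x_pair[7] = 0.5    # two co-occurring atoms

c_single = cross_products_lasso_cost(y, D, x_single, lam=0.1, gamma=1.0)
c_pair = cross_products_lasso_cost(y, D, x_pair, lam=0.1, gamma=1.0)
```

Both candidate solutions have the same L1 norm, but the cross-product term vanishes when only one coefficient is active, so the single-atom solution is cheaper.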

Salamanca, Luis; Murillo-Fuentes, Juan Jose; Olmos, Pablo M; Perez-Cruz, Fernando Improving the BP Estimate over the AWGN Channel Using Tree-Structured Expectation Propagation Inproceedings 2013 IEEE International Symposium on Information Theory, pp. 2990–2994, IEEE, Istanbul, 2013, ISSN: 2157-8095. Abstract | Links | BibTeX | Tags: Approximation algorithms, Approximation methods, AWGN channels, BEC, belief propagation decoding, BI-AWGN channel, binary additive white Gaussian noise channel, binary erasure channel, BP estimation, Channel Coding, Complexity theory, error rate reduction, error statistics, Expectation, finite-length codes, Iterative decoding, LDPC codes, LDPC decoding, low-density parity-check decoding, Maximum likelihood decoding, parity check codes, posterior distribution, Propagation, TEP algorithm, tree-structured expectation propagation algorithm, trees (mathematics) @inproceedings{Salamanca2013, title = {Improving the BP Estimate over the AWGN Channel Using Tree-Structured Expectation Propagation}, author = {Luis Salamanca and Juan Jose Murillo-Fuentes and Pablo M Olmos and Fernando Perez-Cruz}, url = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6620774}, issn = {2157-8095}, year = {2013}, date = {2013-01-01}, booktitle = {2013 IEEE International Symposium on Information Theory}, pages = {2990--2994}, publisher = {IEEE}, address = {Istanbul}, abstract = {In this paper, we propose the tree-structured expectation propagation (TEP) algorithm for low-density parity-check (LDPC) decoding over the binary additive white Gaussian noise (BI-AWGN) channel. By approximating the posterior distribution by a tree-structure factorization, the TEP has been proven to improve belief propagation (BP) decoding over the binary erasure channel (BEC). 
We show for the AWGN channel how the TEP decoder is also able to capture additional information disregarded by the BP solution, which leads to a noticeable reduction of the error rate for finite-length codes. We show that for the range of codes of interest, the TEP gain is obtained with a slight increase in complexity over that of the BP algorithm. An efficient way of constructing the tree-like structure is also described.}, keywords = {Approximation algorithms, Approximation methods, AWGN channels, BEC, belief propagation decoding, BI-AWGN channel, binary additive white Gaussian noise channel, binary erasure channel, BP estimation, Channel Coding, Complexity theory, error rate reduction, error statistics, Expectation, finite-length codes, Iterative decoding, LDPC codes, LDPC decoding, low-density parity-check decoding, Maximum likelihood decoding, parity check codes, posterior distribution, Propagation, TEP algorithm, tree-structured expectation propagation algorithm, trees (mathematics)}, pubstate = {published}, tppubtype = {inproceedings} } In this paper, we propose the tree-structured expectation propagation (TEP) algorithm for low-density parity-check (LDPC) decoding over the binary additive white Gaussian noise (BI-AWGN) channel. By approximating the posterior distribution by a tree-structure factorization, the TEP has been proven to improve belief propagation (BP) decoding over the binary erasure channel (BEC). We show for the AWGN channel how the TEP decoder is also able to capture additional information disregarded by the BP solution, which leads to a noticeable reduction of the error rate for finite-length codes. We show that for the range of codes of interest, the TEP gain is obtained with a slight increase in complexity over that of the BP algorithm. An efficient way of constructing the tree-like structure is also described. |

## 2012 |

Salamanca, Luis; Murillo-Fuentes, Juan Jose; Olmos, Pablo M; Perez-Cruz, Fernando Tree-Structured Expectation Propagation for LDPC Decoding over the AWGN Channel Inproceedings 2012 IEEE International Workshop on Machine Learning for Signal Processing, pp. 1–6, IEEE, Santander, 2012, ISSN: 1551-2541. Abstract | Links | BibTeX | Tags: additive white Gaussian noise channel, Approximation algorithms, Approximation methods, approximation theory, AWGN channel, AWGN channels, belief propagation solution, Bit error rate, Decoding, error floor reduction, finite-length regime, Gain, Joints, LDPC decoding, low-density parity-check decoding, pairwise marginal constraint, parity check codes, TEP decoder, tree-like approximation, tree-structured expectation propagation, trees (mathematics) @inproceedings{Salamanca2012, title = {Tree-Structured Expectation Propagation for LDPC Decoding over the AWGN Channel}, author = {Luis Salamanca and Juan Jose Murillo-Fuentes and Pablo M Olmos and Fernando Perez-Cruz}, url = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6349716}, issn = {1551-2541}, year = {2012}, date = {2012-01-01}, booktitle = {2012 IEEE International Workshop on Machine Learning for Signal Processing}, pages = {1--6}, publisher = {IEEE}, address = {Santander}, abstract = {In this paper, we propose the tree-structured expectation propagation (TEP) algorithm for low-density parity-check (LDPC) decoding over the additive white Gaussian noise (AWGN) channel. By imposing a tree-like approximation over the graphical model of the code, this algorithm introduces pairwise marginal constraints over pairs of variables, which provide joint information about the related variables. Thanks to this, the proposed TEP decoder improves the performance of the standard belief propagation (BP) solution. An efficient way of constructing the tree-like structure is also described.
The simulation results illustrate the TEP decoder gain in the finite-length regime, compared to the standard BP solution. For code lengths shorter than n = 512, the gain in the waterfall region reaches up to 0.25 dB. We also notice a remarkable reduction of the error floor.}, keywords = {additive white Gaussian noise channel, Approximation algorithms, Approximation methods, approximation theory, AWGN channel, AWGN channels, belief propagation solution, Bit error rate, Decoding, error floor reduction, finite-length regime, Gain, Joints, LDPC decoding, low-density parity-check decoding, pairwise marginal constraint, parity check codes, TEP decoder, tree-like approximation, tree-structured expectation propagation, trees (mathematics)}, pubstate = {published}, tppubtype = {inproceedings} } In this paper, we propose the tree-structured expectation propagation (TEP) algorithm for low-density parity-check (LDPC) decoding over the additive white Gaussian noise (AWGN) channel. By imposing a tree-like approximation over the graphical model of the code, this algorithm introduces pairwise marginal constraints over pairs of variables, which provide joint information about the related variables. Thanks to this, the proposed TEP decoder improves the performance of the standard belief propagation (BP) solution. An efficient way of constructing the tree-like structure is also described. The simulation results illustrate the TEP decoder gain in the finite-length regime, compared to the standard BP solution. For code lengths shorter than n = 512, the gain in the waterfall region reaches up to 0.25 dB. We also notice a remarkable reduction of the error floor. |

Durisi, Giuseppe; Koch, Tobias; Polyanskiy, Yury Diversity Versus Channel Knowledge at Finite Block-Length Inproceedings 2012 IEEE Information Theory Workshop, pp. 572–576, IEEE, Lausanne, 2012, ISBN: 978-1-4673-0223-4. Abstract | Links | BibTeX | Tags: Approximation methods, block error probability, channel coherence time, Channel estimation, channel knowledge, Coherence, diversity, diversity reception, error statistics, Fading, finite block-length, maximal achievable rate, noncoherent setting, Rayleigh block-fading channels, Rayleigh channels, Receivers, Signal to noise ratio, Upper bound @inproceedings{Durisi2012, title = {Diversity Versus Channel Knowledge at Finite Block-Length}, author = {Giuseppe Durisi and Tobias Koch and Yury Polyanskiy}, url = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6404740}, isbn = {978-1-4673-0223-4}, year = {2012}, date = {2012-01-01}, booktitle = {2012 IEEE Information Theory Workshop}, pages = {572--576}, publisher = {IEEE}, address = {Lausanne}, abstract = {We study the maximal achievable rate R*(n, ε) for a given block-length n and block error probability ε over Rayleigh block-fading channels in the noncoherent setting and in the finite block-length regime.
Our results show that for a given block-length and error probability, R*(n, ε) is not monotonic in the channel's coherence time, but there exists a rate-maximizing coherence time that optimally trades between diversity and cost of estimating the channel.}, keywords = {Approximation methods, block error probability, channel coherence time, Channel estimation, channel knowledge, Coherence, diversity, diversity reception, error statistics, Fading, finite block-length, maximal achievable rate, noncoherent setting, Rayleigh block-fading channels, Rayleigh channels, Receivers, Signal to noise ratio, Upper bound}, pubstate = {published}, tppubtype = {inproceedings} } We study the maximal achievable rate R*(n, ε) for a given block-length n and block error probability ε over Rayleigh block-fading channels in the noncoherent setting and in the finite block-length regime. Our results show that for a given block-length and error probability, R*(n, ε) is not monotonic in the channel's coherence time, but there exists a rate-maximizing coherence time that optimally trades between diversity and cost of estimating the channel. |

Garcia-Moreno, Pablo; Artés-Rodríguez, Antonio; Hansen, Lars Kai A Hold-out Method to Correct PCA Variance Inflation Inproceedings 2012 3rd International Workshop on Cognitive Information Processing (CIP), pp. 1–6, IEEE, Baiona, 2012, ISBN: 978-1-4673-1878-5. Abstract | Links | BibTeX | Tags: Approximation methods, classification scenario, computational complexity, computational cost, Computational efficiency, correction method, hold-out method, hold-out procedure, leave-one-out procedure, LOO method, LOO procedure, Mathematical model, PCA algorithm, PCA variance inflation, Principal component analysis, singular value decomposition, Standards, SVD, Training @inproceedings{Garcia-Moreno2012, title = {A Hold-out Method to Correct PCA Variance Inflation}, author = {Pablo Garcia-Moreno and Antonio Artés-Rodríguez and Lars Kai Hansen}, url = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6232926}, isbn = {978-1-4673-1878-5}, year = {2012}, date = {2012-01-01}, booktitle = {2012 3rd International Workshop on Cognitive Information Processing (CIP)}, pages = {1--6}, publisher = {IEEE}, address = {Baiona}, abstract = {In this paper we analyze the problem of variance inflation experienced by the PCA algorithm when working in an ill-posed scenario where the dimensionality of the training set is larger than its sample size. In an earlier article, a correction method based on a Leave-One-Out (LOO) procedure was introduced. We propose a Hold-out procedure whose computational cost is lower and, unlike the LOO method, the number of SVDs does not scale with the sample size. We analyze its properties from a theoretical and empirical point of view.
Finally, we apply it to a real classification scenario.}, keywords = {Approximation methods, classification scenario, computational complexity, computational cost, Computational efficiency, correction method, hold-out method, hold-out procedure, leave-one-out procedure, LOO method, LOO procedure, Mathematical model, PCA algorithm, PCA variance inflation, Principal component analysis, singular value decomposition, Standards, SVD, Training}, pubstate = {published}, tppubtype = {inproceedings} } In this paper we analyze the problem of variance inflation experienced by the PCA algorithm when working in an ill-posed scenario where the dimensionality of the training set is larger than its sample size. In an earlier article, a correction method based on a Leave-One-Out (LOO) procedure was introduced. We propose a Hold-out procedure whose computational cost is lower and, unlike the LOO method, the number of SVDs does not scale with the sample size. We analyze its properties from a theoretical and empirical point of view. Finally, we apply it to a real classification scenario. |
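The variance inflation this abstract refers to is easy to reproduce. The NumPy sketch below fits PCA on one half of an isotropic data set (where every direction truly has unit variance) and re-estimates the per-component variances on the held-out half; the split sizes and data model are illustrative assumptions, and this shows only the hold-out evaluation idea, not the authors' full correction method:

```python
import numpy as np

rng = np.random.default_rng(2)

# Ill-posed setting: dimensionality p much larger than sample size n.
n, p = 40, 500
X = rng.normal(size=(n, p))          # isotropic data: no true low-dimensional structure

X_train, X_hold = X[:n // 2], X[n // 2:]
mu = X_train.mean(axis=0)

# One SVD of the centered training split gives the principal directions.
U, s, Vt = np.linalg.svd(X_train - mu, full_matrices=False)
train_var = s**2 / (len(X_train) - 1)   # training variances: inflated when p >> n

# Hold-out estimate: variance of the held-out data projected on the same
# principal directions, which were not fitted on this split.
proj = (X_hold - mu) @ Vt.T
hold_var = (proj**2).mean(axis=0)
```

Since the true variance in every direction is 1, the gap between `train_var` and `hold_var` directly exhibits the inflation of the training-set eigenvalues.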

Olmos, Pablo M; Perez-Cruz, Fernando; Salamanca, Luis; Murillo-Fuentes, Juan Jose Finite-Length Analysis of the TEP Decoder for LDPC Ensembles over the BEC Inproceedings 2012 IEEE International Symposium on Information Theory Proceedings, pp. 2346–2350, IEEE, Cambridge, MA, 2012, ISSN: 2157-8095. Abstract | Links | BibTeX | Tags: Approximation methods, BEC, binary codes, binary erasure channel, Decoding, Error analysis, error probability, finite-length analysis, LDPC ensembles, low-density parity check ensembles, parity check codes, TEP decoder, Trajectory, tree-expectation propagation algorithm, waterfall region @inproceedings{Olmos2012a, title = {Finite-Length Analysis of the TEP Decoder for LDPC Ensembles over the BEC}, author = {Pablo M Olmos and Fernando Perez-Cruz and Luis Salamanca and Juan Jose Murillo-Fuentes}, url = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6283932}, issn = {2157-8095}, year = {2012}, date = {2012-01-01}, booktitle = {2012 IEEE International Symposium on Information Theory Proceedings}, pages = {2346--2350}, publisher = {IEEE}, address = {Cambridge, MA}, abstract = {In this work, we analyze the finite-length performance of low-density parity check (LDPC) ensembles decoded over the binary erasure channel (BEC) using the tree-expectation propagation (TEP) algorithm. In a previous paper, we showed that the TEP improves the BP performance for decoding regular and irregular short LDPC codes, but the perspective was mainly empirical. In this work, given the degree-distribution of an LDPC ensemble, we explain and predict the range of code lengths for which the TEP improves the BP solution. In addition, for LDPC ensembles that present a single critical point, we propose a scaling law to accurately predict the performance in the waterfall region. 
These results are of critical importance to design practical LDPC codes for the TEP decoder.}, keywords = {Approximation methods, BEC, binary codes, binary erasure channel, Decoding, Error analysis, error probability, finite-length analysis, LDPC ensembles, low-density parity check ensembles, parity check codes, TEP decoder, Trajectory, tree-expectation propagation algorithm, waterfall region}, pubstate = {published}, tppubtype = {inproceedings} } In this work, we analyze the finite-length performance of low-density parity check (LDPC) ensembles decoded over the binary erasure channel (BEC) using the tree-expectation propagation (TEP) algorithm. In a previous paper, we showed that the TEP improves the BP performance for decoding regular and irregular short LDPC codes, but the perspective was mainly empirical. In this work, given the degree-distribution of an LDPC ensemble, we explain and predict the range of code lengths for which the TEP improves the BP solution. In addition, for LDPC ensembles that present a single critical point, we propose a scaling law to accurately predict the performance in the waterfall region. These results are of critical importance to design practical LDPC codes for the TEP decoder. |

Salamanca, Luis; Murillo-Fuentes, Juan Jose; Perez-Cruz, Fernando Bayesian Equalization for LDPC Channel Decoding Journal Article IEEE Transactions on Signal Processing, 60 (5), pp. 2672–2676, 2012, ISSN: 1053-587X. Abstract | Links | BibTeX | Tags: Approximation methods, Bayes methods, Bayesian equalization, Bayesian estimation problem, Bayesian inference, Bayesian methods, BCJR (Bahl–Cocke–Jelinek–Raviv) algorithm, BCJR algorithm, Channel Coding, channel decoding, channel equalization, channel equalization problem, Channel estimation, channel state information, CSI, Decoding, equalisers, Equalizers, expectation propagation, expectation propagation algorithm, fading channels, graphical model representation, intersymbol interference, Kullback-Leibler divergence, LDPC, LDPC coding, low-density parity-check decoder, Modulation, parity check codes, symbol posterior estimates, Training @article{Salamanca2012b, title = {Bayesian Equalization for LDPC Channel Decoding}, author = {Luis Salamanca and Juan Jose Murillo-Fuentes and Fernando Perez-Cruz}, url = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6129544}, issn = {1053-587X}, year = {2012}, date = {2012-01-01}, journal = {IEEE Transactions on Signal Processing}, volume = {60}, number = {5}, pages = {2672--2676}, abstract = {We describe the channel equalization problem, and its prior estimate of the channel state information (CSI), as a joint Bayesian estimation problem to improve each symbol's posterior estimates at the input of the channel decoder. Our approach takes into consideration not only the uncertainty due to the noise in the channel, but also the uncertainty in the CSI estimate. However, this solution cannot be computed in linear time, because it depends on all the transmitted symbols.
Hence, we also put forward an approximation for each symbol's posterior, using the expectation propagation algorithm, which is optimal from the Kullback-Leibler divergence viewpoint and yields an equalization with a complexity identical to the BCJR algorithm. We also use a graphical model representation of the full posterior, in which the proposed approximation can be readily understood. The proposed posterior estimates are more accurate than those computed using the ML estimate for the CSI. In order to illustrate this point, we measure the error rate at the output of a low-density parity-check decoder, which needs the exact posterior for each symbol to detect the incoming word and is sensitive to a mismatch in those posterior estimates. For example, for QPSK modulation and a channel with three taps, we can expect gains over 0.5 dB with the same computational complexity as the ML receiver.}, keywords = {Approximation methods, Bayes methods, Bayesian equalization, Bayesian estimation problem, Bayesian inference, Bayesian methods, BCJR (Bahl–Cocke–Jelinek–Raviv) algorithm, BCJR algorithm, Channel Coding, channel decoding, channel equalization, channel equalization problem, Channel estimation, channel state information, CSI, Decoding, equalisers, Equalizers, expectation propagation, expectation propagation algorithm, fading channels, graphical model representation, intersymbol interference, Kullback-Leibler divergence, LDPC, LDPC coding, low-density parity-check decoder, Modulation, parity check codes, symbol posterior estimates, Training}, pubstate = {published}, tppubtype = {article} } We describe the channel equalization problem, and its prior estimate of the channel state information (CSI), as a joint Bayesian estimation problem to improve each symbol's posterior estimates at the input of the channel decoder. Our approach takes into consideration not only the uncertainty due to the noise in the channel, but also the uncertainty in the CSI estimate.
However, this solution cannot be computed in linear time, because it depends on all the transmitted symbols. Hence, we also put forward an approximation for each symbol's posterior, using the expectation propagation algorithm, which is optimal from the Kullback-Leibler divergence viewpoint and yields an equalization with a complexity identical to the BCJR algorithm. We also use a graphical model representation of the full posterior, in which the proposed approximation can be readily understood. The proposed posterior estimates are more accurate than those computed using the ML estimate for the CSI. In order to illustrate this point, we measure the error rate at the output of a low-density parity-check decoder, which needs the exact posterior for each symbol to detect the incoming word and is sensitive to a mismatch in those posterior estimates. For example, for QPSK modulation and a channel with three taps, we can expect gains over 0.5 dB with the same computational complexity as the ML receiver. |

## 2011 |

Achutegui, Katrin; Miguez, Joaquin A Parallel Resampling Scheme and its Application to Distributed Particle Filtering in Wireless Networks Inproceedings 2011 4th IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), pp. 81–84, IEEE, San Juan, 2011, ISBN: 978-1-4577-2105-2. Abstract | Links | BibTeX | Tags: Approximation algorithms, Approximation methods, Artificial neural networks, distributed resampling, DRNA technique, Markov processes, nonproportional allocation algorithm, parallel resampling scheme, PF, quantization, Signal processing, Vectors, Wireless sensor network, Wireless Sensor Networks, WSN @inproceedings{Achutegui2011, title = {A Parallel Resampling Scheme and its Application to Distributed Particle Filtering in Wireless Networks}, author = {Katrin Achutegui and Joaquin Miguez}, url = {http://ieeexplore.ieee.org/articleDetails.jsp?arnumber=6136051}, isbn = {978-1-4577-2105-2}, year = {2011}, date = {2011-01-01}, booktitle = {2011 4th IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP)}, pages = {81--84}, publisher = {IEEE}, address = {San Juan}, abstract = {We address the design of a particle filter (PF) that can be implemented in a distributed manner over a network of wireless sensor nodes, each of them collecting their own local data. This is a problem that has received considerable attention lately and several methods based on consensus, the transmission of likelihood information, the truncation and/or the quantization of data have been proposed. However, all existing schemes suffer from limitations related either to the amount of required communications among the nodes or the accuracy of the filter outputs. In this work we propose a novel distributed PF that is built around the distributed resampling with non-proportional allocation (DRNA) algorithm. 
This scheme guarantees the properness of the particle approximations produced by the filter and has been shown to be both efficient and accurate when compared with centralized PFs. The standard DRNA technique, however, places stringent demands on the communications among nodes that turn out to be impractical for a typical wireless sensor network (WSN). In this paper we investigate how to reduce this communication load by using (i) a random model for the spread of data over the WSN and (ii) methods that enable the out-of-sequence processing of sensor observations. A simple numerical illustration of the performance of the new algorithm compared with a centralized PF is provided.}, keywords = {Approximation algorithms, Approximation methods, Artificial neural networks, distributed resampling, DRNA technique, Markov processes, nonproportional allocation algorithm, parallel resampling scheme, PF, quantization, Signal processing, Vectors, Wireless sensor network, Wireless Sensor Networks, WSN}, pubstate = {published}, tppubtype = {inproceedings} } We address the design of a particle filter (PF) that can be implemented in a distributed manner over a network of wireless sensor nodes, each of them collecting their own local data. This is a problem that has received considerable attention lately and several methods based on consensus, the transmission of likelihood information, the truncation and/or the quantization of data have been proposed. However, all existing schemes suffer from limitations related either to the amount of required communications among the nodes or the accuracy of the filter outputs. In this work we propose a novel distributed PF that is built around the distributed resampling with non-proportional allocation (DRNA) algorithm. This scheme guarantees the properness of the particle approximations produced by the filter and has been shown to be both efficient and accurate when compared with centralized PFs.
The standard DRNA technique, however, places stringent demands on the communications among nodes that turn out to be impractical for a typical wireless sensor network (WSN). In this paper we investigate how to reduce this communication load by using (i) a random model for the spread of data over the WSN and (ii) methods that enable the out-of-sequence processing of sensor observations. A simple numerical illustration of the performance of the new algorithm compared with a centralized PF is provided. |
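A minimal one-step NumPy sketch of the distributed-resampling-with-non-proportional-allocation (DRNA) principle that this abstract builds on: each node resamples only its own particles, and the aggregate node weights are preserved so the global approximation remains proper. The particle model, node count, and variable names are illustrative assumptions; the paper's full scheme additionally addresses data spread over the WSN and out-of-sequence processing:

```python
import numpy as np

rng = np.random.default_rng(3)

# N = K * m particles split across K nodes (non-proportional allocation:
# every node keeps m particles regardless of how much weight it holds).
K, m = 4, 50
x = rng.normal(size=(K, m))          # particle values, one row per node
w = rng.random(size=(K, m))
w /= w.sum()                         # globally normalized importance weights

W = w.sum(axis=1)                    # aggregate node weights, preserved below

# Local resampling: each node draws m particles from its own set according
# to its local (within-node) weights -- no inter-node communication needed.
x_res = np.empty_like(x)
for k_node in range(K):
    idx = rng.choice(m, size=m, p=w[k_node] / W[k_node])
    x_res[k_node] = x[k_node, idx]

# After resampling, the m particles of node k share the weight W[k] / m, so
# the global particle approximation stays proper (weights still sum to 1).
w_res = np.tile((W / m)[:, None], (1, m))
```

Keeping each node's total weight fixed is what makes the resampling step parallel: the only quantities that would ever need to be exchanged are the K scalars in `W`, not the particles themselves.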

## 2010 |

Achutegui, Katrin; Rodas, Javier; Escudero, Carlos J; Miguez, Joaquin A Model-Switching Sequential Monte Carlo Algorithm for Indoor Tracking with Experimental RSS Data Inproceedings 2010 International Conference on Indoor Positioning and Indoor Navigation, pp. 1–8, IEEE, Zurich, 2010, ISBN: 978-1-4244-5862-2. Abstract | Links | BibTeX | Tags: Approximation methods, Computational modeling, Data models, generalized IMM system, GIMM approach, indoor radio, Indoor tracking, Kalman filters, maneuvering target motion, Mathematical model, model switching sequential Monte Carlo algorithm, Monte Carlo methods, multipath propagation, multiple model interaction, propagation environment, radio receivers, radio tracking, radio transmitters, random processes, Rao-Blackwellized sequential Monte Carlo tracking, received signal strength, RSS data, sensors, state space model, target position dependent data, transmitter-to-receiver distance, wireless technology @inproceedings{Achutegui2010, title = {A Model-Switching Sequential Monte Carlo Algorithm for Indoor Tracking with Experimental RSS Data}, author = {Katrin Achutegui and Javier Rodas and Carlos J Escudero and Joaquin Miguez}, url = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=5648053}, isbn = {978-1-4244-5862-2}, year = {2010}, date = {2010-01-01}, booktitle = {2010 International Conference on Indoor Positioning and Indoor Navigation}, pages = {1--8}, publisher = {IEEE}, address = {Zurich}, abstract = {In this paper we address the problem of indoor tracking using received signal strength (RSS) as position-dependent data. Measurements of this type are very appealing because they can be easily obtained with a variety of (inexpensive) wireless technologies. However, the extraction of accurate location information from RSS in indoor scenarios is not an easy task. Due to the multipath propagation, it is hard to adequately model the correspondence between the received power and the transmitter-to-receiver distance.
For that reason, we propose the use of a compound model that combines several sub-models, whose parameters are adjusted to different propagation environments. This methodology, called Interacting Multiple Models (IMM), has been used in the past either for modeling the motion of maneuvering targets or the relationship between the target position and the observations. Here, we extend its application to handle both types of uncertainty simultaneously and we refer to the resulting state-space model as a generalized IMM (GIMM) system. The flexibility of the GIMM approach is attained at the expense of an increase in the number of random processes that must be accurately tracked. To overcome this difficulty, we introduce a Rao-Blackwellized sequential Monte Carlo tracking algorithm that exhibits good performance both with synthetic and experimental data.}, keywords = {Approximation methods, Computational modeling, Data models, generalized IMM system, GIMM approach, indoor radio, Indoor tracking, Kalman filters, maneuvering target motion, Mathematical model, model switching sequential Monte Carlo algorithm, Monte Carlo methods, multipath propagation, multiple model interaction, propagation environment, radio receivers, radio tracking, radio transmitters, random processes, Rao-Blackwellized sequential Monte Carlo tracking, received signal strength, RSS data, sensors, state space model, target position dependent data, transmitter-to-receiver distance, wireless technology}, pubstate = {published}, tppubtype = {inproceedings} } In this paper we address the problem of indoor tracking using received signal strength (RSS) as position-dependent data. Measurements of this type are very appealing because they can be easily obtained with a variety of (inexpensive) wireless technologies. However, the extraction of accurate location information from RSS in indoor scenarios is not an easy task.
Due to the multipath propagation, it is hard to adequately model the correspondence between the received power and the transmitter-to-receiver distance. For that reason, we propose the use of a compound model that combines several sub-models, whose parameters are adjusted to different propagation environments. This methodology, called Interacting Multiple Models (IMM), has been used in the past either for modeling the motion of maneuvering targets or the relationship between the target position and the observations. Here, we extend its application to handle both types of uncertainty simultaneously and we refer to the resulting state-space model as a generalized IMM (GIMM) system. The flexibility of the GIMM approach is attained at the expense of an increase in the number of random processes that must be accurately tracked. To overcome this difficulty, we introduce a Rao-Blackwellized sequential Monte Carlo tracking algorithm that exhibits good performance both with synthetic and experimental data. |