## 2015

## Journal Articles

@article{Luengo2014b,
  title    = {Blind Analysis of Atrial Fibrillation Electrograms: A Sparsity-Aware Formulation},
  author   = {Luengo, David and Monzon, Sandra and Trigano, Tom and Vía, Javier and Artés-Rodríguez, Antonio},
  journal  = {Integrated Computer-Aided Engineering},
  volume   = {22},
  number   = {1},
  pages    = {71--85},
  year     = {2015},
  url      = {http://content.iospress.com/articles/integrated-computer-aided-engineering/ica00471 http://www.tsc.uc3m.es/~dluengo/sparseEGM.pdf},
  abstract = {The problem of blind sparse analysis of electrogram (EGM) signals under atrial fibrillation (AF) conditions is considered in this paper. A mathematical model for the observed signals that takes into account the multiple foci typically appearing inside the heart during AF is first introduced. Then, a reconstruction model based on a fixed dictionary is developed and several alternatives for choosing the dictionary are discussed. In order to obtain a sparse solution that also respects the biological restrictions of the problem, the paper proposes using Least Absolute Shrinkage and Selection Operator (LASSO) regularization followed by a post-processing stage that removes low-amplitude coefficients violating the refractory period characteristic of cardiac cells. Finally, spectral analysis is performed on the clean activation sequence obtained from the sparse learning stage in order to estimate the number of latent foci and their frequencies. Simulations on synthetic signals and applications on real data are provided to validate the proposed approach.},
  keywords = {atrial fibrillation, biomedical signal processing, Journal}
}
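The two-stage procedure summarized in the abstract above — LASSO recovery of a sparse activation sequence over a dictionary of shifted pulses, then pruning of low-amplitude coefficients that violate the refractory period — can be sketched roughly as follows. The pulse shape, regularization weight, and 50-sample refractory period are illustrative assumptions for the sketch, not the paper's actual choices:

```python
import numpy as np
from scipy.linalg import toeplitz
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Illustrative biphasic pulse; the paper discusses several dictionary
# choices -- this shape is just an assumption for the sketch.
t = np.arange(20)
pulse = (np.exp(-0.5 * ((t - 5) / 2.0) ** 2)
         - 0.6 * np.exp(-0.5 * ((t - 10) / 3.0) ** 2))

N = 300
# Fixed dictionary: column j holds the pulse starting at sample j.
col = np.zeros(N)
col[:len(pulse)] = pulse
D = toeplitz(col, np.zeros(N))

# Synthetic "EGM": activations at known instants plus noise.
# 130 and 132 are closer than any realistic refractory period.
true_act = np.zeros(N)
for k in [40, 130, 132, 220]:
    true_act[k] = 1.0
y = D @ true_act + 0.05 * rng.standard_normal(N)

# Stage 1: sparse recovery via LASSO (non-negative activations).
lasso = Lasso(alpha=0.001, positive=True, max_iter=10000)
lasso.fit(D, y)
a = lasso.coef_.copy()

# Stage 2: post-processing -- greedily keep the largest coefficients,
# discarding weaker ones closer than the (assumed) refractory period.
REFRACTORY = 50
kept = []
for idx in np.argsort(a)[::-1]:
    if a[idx] <= 1e-3:
        break
    if all(abs(idx - j) >= REFRACTORY for j in kept):
        kept.append(idx)
activations = sorted(kept)
print(activations)
```

The clean activation sequence in `activations` is what the paper's final spectral-analysis stage would consume.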

@article{Varando2015,
  title    = {Decision boundary for discrete Bayesian network classifiers},
  author   = {Varando, Gherardo and Bielza, Concha and Larrañaga, Pedro},
  journal  = {Journal of Machine Learning Research},
  year     = {2015},
  url      = {http://cig.fi.upm.es/node/881 http://cig.fi.upm.es/articles/2015/Varando-2015-JMLR.pdf},
  abstract = {Bayesian network classifiers are a powerful machine learning tool. In order to evaluate the expressive power of these models, we compute families of polynomials that sign-represent decision functions induced by Bayesian network classifiers. We prove that those families are linear combinations of products of Lagrange basis polynomials. In the absence of V-structures in the predictor sub-graph, we are also able to prove that this family of polynomials characterizes the specific classifier considered. We then use this representation to bound the number of decision functions representable by Bayesian network classifiers with a given structure.},
  keywords = {Bayesian networks, CASI CAM CM, CIG UPM, decision boundary, Journal, Lagrange basis, polynomial, supervised classification, threshold function}
}
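The sign-representation described in this abstract can be illustrated in its simplest case, a naive Bayes classifier over binary predictors: the Lagrange basis on {0, 1} is l_0(t) = 1 - t, l_1(t) = t, and the log-posterior ratio becomes a polynomial in these basis functions whose sign reproduces the classifier. The toy parameters below are assumptions for the sketch, not taken from the paper:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)
n = 5                                  # number of binary predictors (toy size)

# Random naive Bayes parameters: class prior and P(X_j = 1 | C = c).
prior = rng.uniform(0.2, 0.8)
p = rng.uniform(0.1, 0.9, size=(2, n))  # p[c, j] = P(X_j = 1 | C = c)

def nb_log_ratio(x):
    """Direct naive Bayes decision function: log P(C=1|x) - log P(C=0|x)."""
    r = np.log(prior / (1 - prior))
    for j, xj in enumerate(x):
        r += np.log(p[1, j] if xj else 1 - p[1, j])
        r -= np.log(p[0, j] if xj else 1 - p[0, j])
    return r

# Polynomial sign-representation via the Lagrange basis on {0, 1}:
# each feature contributes (1 - x_j) * a[j, 0] + x_j * a[j, 1], where
# a[j, v] = log P(X_j = v | C = 1) - log P(X_j = v | C = 0).
a = np.zeros((n, 2))
for j in range(n):
    a[j, 0] = np.log(1 - p[1, j]) - np.log(1 - p[0, j])
    a[j, 1] = np.log(p[1, j]) - np.log(p[0, j])
bias = np.log(prior / (1 - prior))

def poly(x):
    x = np.asarray(x, float)
    return bias + np.sum((1 - x) * a[:, 0] + x * a[:, 1])

# The polynomial's sign matches the classifier on every input.
agree = all(np.sign(poly(x)) == np.sign(nb_log_ratio(x))
            for x in product([0, 1], repeat=n))
print(agree)
```

For naive Bayes (no V-structures among predictors) the polynomial is linear in the Lagrange basis; richer network structures yield products of basis polynomials, as the paper proves.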

@article{Manzano2015b,
  title    = {Cognitive Self-Scheduled Mechanism for Access Control in Noisy Vehicular Ad Hoc Networks},
  author   = {Manzano, Mario and Espinosa, Felipe and Bravo-Santos, Ángel M. and Gardel-Vicente, Alfredo},
  journal  = {Mathematical Problems in Engineering},
  volume   = {2015},
  pages    = {1--12},
  year     = {2015},
  url      = {http://www.hindawi.com/journals/mpe/2015/354292/},
  doi      = {10.1155/2015/354292},
  abstract = {Within the challenging environment of intelligent transportation systems (ITS), networked control systems such as platooning guidance of autonomous vehicles require innovative mechanisms to provide real-time communications. Although several proposals are currently under discussion, the design of a rapid, efficient, flexible, and reliable medium access control mechanism which meets the specific constraints of such real-time communications applications remains unsolved in this highly dynamic environment. However, cognitive radio (CR) combines the capacity to sense the radio spectrum with the flexibility to adapt to transmission parameters in order to maximize system performance and has thus become an effective approach for the design of dynamic spectrum access (DSA) mechanisms. This paper presents the enhanced noncooperative cognitive division multiple access (ENCCMA) proposal combining time division multiple access (TDMA) and frequency division multiple access (FDMA) schemes with CR techniques to obtain a mechanism fulfilling the requirements of real-time communications. The analysis presented here considers the IEEE WAVE and 802.11p as reference standards; however, the proposed medium access control (MAC) mechanism can be adapted to operate on the physical layer of different standards. The mechanism also offers the advantage of avoiding signaling, thus enhancing system autonomy as well as behavior in adverse scenarios.},
  keywords = {Journal}
}

@article{Martin-Fernandez2015,
  title    = {A Bayesian Method for Model Selection in Environmental Noise Prediction},
  author   = {Martín-Fernández, Laura and Ruiz, Diego and Torija, Antonio and Miguez, Joaquin},
  journal  = {Journal of Environmental Informatics},
  volume   = {January 20},
  year     = {2015},
  issn     = {1726-2135},
  url      = {http://www.researchgate.net/publication/268213140_A_Bayesian_method_for_model_selection_in_environmental_noise_prediction},
  abstract = {Environmental noise prediction and modeling are key factors for proper planning and management of urban sound environments. In this paper we propose a maximum a posteriori (MAP) method to compare nonlinear state-space models that describe the problem of predicting environmental sound levels. The numerical implementation of this method is based on particle filtering, and we use a Markov chain Monte Carlo technique to improve the resampling step. In order to demonstrate the validity of the proposed approach for this particular problem, we have conducted a set of experiments in which two prediction models are quantitatively compared using real noise measurement data collected in different urban areas.}
}

@article{Bravo-Santos2014b,
  title     = {Detectors for Cooperative Mesh Networks with Decode-and-Forward Relays},
  author    = {Bravo-Santos, Ángel M. and Djuric, Petar M.},
  journal   = {IEEE Transactions on Signal Processing},
  volume    = {63},
  number    = {1},
  pages     = {5--17},
  year      = {2015},
  issn      = {1053-587X},
  doi       = {10.1109/TSP.2014.2364016},
  publisher = {IEEE},
  url       = {http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6928514},
  abstract  = {We consider mesh networks composed of groups of relaying nodes which operate in decode-and-forward mode. Each node from a group relays information to all the nodes in the next group. We study these networks in two setups: one where the nodes have complete state information about the channels through which they receive the signals, and another where they only have the statistics of the channels. We derive recursive expressions for the error probabilities of the nodes and present several implementations of detectors used in these networks. We compare the mesh networks with multi-hop networks formed by a set of parallel sections of multiple relaying nodes. We demonstrate with numerous simulations that mesh networks significantly outperform multi-hop networks in various scenarios.},
  keywords  = {Cooperative systems, Detectors, Mesh networks, Modulation, Relays, spread spectrum communication, Wireless communication}
}

## Inproceedings

@inproceedings{Valera2015a,
  title     = {Infinite Factorial Dynamical Model},
  author    = {Valera, Isabel and Ruiz, Francisco J. R. and Svensson, Lennart and Perez-Cruz, Fernando},
  booktitle = {Advances in Neural Information Processing Systems},
  pages     = {1657--1665},
  address   = {Montreal},
  year      = {2015},
  url       = {http://papers.nips.cc/paper/5667-infinite-factorial-dynamical-model},
  abstract  = {We propose the infinite factorial dynamic model (iFDM), a general Bayesian nonparametric model for source separation. Our model builds on the Markov Indian buffet process to consider a potentially unbounded number of hidden Markov chains (sources) that evolve independently according to some dynamics, in which the state space can be either discrete or continuous. For posterior inference, we develop an algorithm based on particle Gibbs with ancestor sampling that can be efficiently applied to a wide range of source separation problems. We evaluate the performance of our iFDM on four well-known applications: multitarget tracking, cocktail party, power disaggregation, and multiuser detection. Our experimental results show that our approach for source separation not only outperforms previous approaches, but can also handle problems that were computationally intractable for existing approaches.},
  keywords  = {CASI CAM CM, GAMMA-L+ UC3M}
}

@inproceedings{Dashi2015,
  title     = {RSSI Localization with Gaussian Processes and Tracking},
  author    = {Dashi, M. and Yiu, S. and Yousefi, Siamak and Perez-Cruz, Fernando and Claussen, H.},
  booktitle = {IEEE Globecom},
  address   = {San Diego},
  year      = {2015},
  url       = {http://globecom2015.ieee-globecom.org/}
}

@inproceedings{Luengo2015a,
  title     = {Bias correction for distributed Bayesian estimators},
  author    = {Luengo, David and Martino, Luca and Elvira, Victor and Bugallo, Monica F.},
  booktitle = {2015 IEEE 6th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP)},
  pages     = {253--256},
  publisher = {IEEE},
  address   = {Cancun},
  year      = {2015},
  isbn      = {978-1-4799-1963-5},
  doi       = {10.1109/CAMSAP.2015.7383784},
  url       = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=7383784},
  abstract  = {Dealing with the whole dataset in big data estimation problems is usually unfeasible. A common solution then consists of dividing the data into several smaller sets, performing distributed Bayesian estimation and combining these partial estimates to obtain a global estimate. A major problem of this approach is the presence of a non-negligible bias in the partial estimators, due to the mismatch between the unknown true prior and the prior assumed in the estimation. A simple method to mitigate the effect of this bias is proposed in this paper. Essentially, the approach is based on using a reference data set to obtain a rough estimation of the parameter of interest, i.e., a reference parameter. This information is then communicated to the partial filters that handle the smaller data sets, which can thus use a refined prior centered around this parameter. Simulation results confirm the good performance of this scheme.},
  keywords  = {Bayes methods, Big data, Distributed databases, Estimation, Probability density function, Wireless Sensor Networks}
}
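The bias-correction idea in this abstract can be illustrated with a conjugate Gaussian toy problem: partial posterior means computed under a mismatched prior are biased, while re-centering the prior on a rough reference estimate removes most of the bias. The shard sizes, prior variance, and reference-set size below are assumptions for the sketch, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(2)

theta_true, sigma = 2.0, 1.0
data = theta_true + sigma * rng.standard_normal(10_000)
shards = np.array_split(data, 100)        # 100 distributed partial estimators

tau2 = 0.5 ** 2                            # assumed prior variance

def partial_posterior_mean(x, prior_mean):
    # Conjugate posterior mean for an N(theta, sigma^2) likelihood
    # and an N(prior_mean, tau2) prior on theta.
    n = len(x)
    return ((prior_mean / tau2 + x.sum() / sigma**2)
            / (1 / tau2 + n / sigma**2))

# Naive combination: every shard uses a mismatched prior centered at 0,
# so each partial estimate is pulled toward 0.
naive = np.mean([partial_posterior_mean(s, 0.0) for s in shards])

# Bias-corrected: a small reference set gives a rough estimate of theta,
# and the shards then use a refined prior centered on that reference value.
theta_ref = data[:200].mean()
corrected = np.mean([partial_posterior_mean(s, theta_ref) for s in shards])

print(f"naive combined estimate:     {naive:.3f}")
print(f"corrected combined estimate: {corrected:.3f}")
```

With 100-sample shards the naive estimate is visibly shrunk toward the assumed prior mean, while the corrected one lands close to the true parameter.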

@inproceedings{Elvira2015b,
  title     = {On Sample Generation and Weight Calculation in Multiple Importance Sampling},
  author    = {Elvira, Victor and Martino, Luca and Luengo, David and Bugallo, Monica F.},
  booktitle = {IEEE Conference on Signals, Systems, and Computers (ASILOMAR 2015)},
  address   = {Pacific Groove},
  year      = {2015},
  url       = {http://www.asilomarsscconf.org/webpage/asil15/Asilomar 2015 Book of Abstracts v005.pdf},
  abstract  = {We investigate various sampling and weight updating techniques, which are the two crucial steps of importance sampling. We discuss the standard mixture sampling that randomly draws samples from a set of proposals and the deterministic mixture sampling, where exactly one sample is drawn from each proposal. For weight calculation, we either compute the weights by considering the particular proposal used for each sample or by interpreting the proposal as a mixture formed by all available proposals. All combinations of sampling and weight calculation and some modifications that improve the performance and/or reduce the computational complexity are examined through computer simulations.}
}
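Two of the combinations discussed in this abstract can be sketched side by side: deterministic mixture sampling (one batch of draws per proposal) combined with either per-proposal weights or full-mixture weights in the denominator. The target, proposal locations, and sample counts below are illustrative assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

target = lambda x: np.exp(-0.5 * x**2)   # N(0,1) target, known up to a constant
f = lambda x: x**2                        # estimate E_target[f] = 1

means = np.array([-3.0, -1.0, 0.0, 1.0, 3.0])   # assumed Gaussian proposals
M, S = len(means), 2000                          # S samples per proposal

# Deterministic mixture sampling: exactly S draws from each proposal.
x = means[:, None] + rng.standard_normal((M, S))          # shape (M, S)

# Option A: "standard" weights -- each sample is weighted only by the
# density of the proposal that generated it.
w_std = target(x) / stats.norm.pdf(x, loc=means[:, None], scale=1.0)

# Option B: "mixture" weights -- the denominator is the full mixture
# density, regardless of which proposal generated the sample.
mix_pdf = np.mean(
    [stats.norm.pdf(x, loc=m, scale=1.0) for m in means], axis=0)
w_mix = target(x) / mix_pdf

# Self-normalized estimates (target is unnormalized).
est_std = np.sum(w_std * f(x)) / np.sum(w_std)
est_mix = np.sum(w_mix * f(x)) / np.sum(w_mix)
print(f"per-proposal weights: {est_std:.3f}")
print(f"mixture weights:      {est_mix:.3f}")
```

Both estimators are consistent, but the mixture-weight variant typically has much lower variance because a sample landing far from its own proposal no longer receives an enormous weight.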

@inproceedings{Acer2015,
  title     = {Sensing WiFi Network for Personal IoT Analytics},
  author    = {Acer, Utku Gunay and Boran, Aidan and Forlivesi, Claudio and Liekens, Werner and Perez-Cruz, Fernando and Kawsar, Fahim},
  booktitle = {2015 5th International Conference on the Internet of Things (IOT)},
  pages     = {104--111},
  publisher = {IEEE},
  address   = {Seoul},
  year      = {2015},
  isbn      = {978-1-4673-8056-0},
  doi       = {10.1109/IOT.2015.7356554},
  url       = {http://ieeexplore.ieee.org/articleDetails.jsp?arnumber=7356554},
  abstract  = {We present the design, implementation and evaluation of an enabling platform for locating and querying physical objects using an existing WiFi network. We propose the use of WiFi management probes as a data transport mechanism for physical objects that are tagged with WiFi-enabled accelerometers and are capable of determining their state of use based on motion signatures. A local WiFi gateway captures these probes emitted by the connected objects and stores them locally after annotating them with a coarse-grained location estimate obtained from a proximity ranging algorithm. External applications can query the aggregated views of state-of-use and location traces of connected objects through a cloud-based query server. We present the technical architecture and algorithms of the proposed platform together with a prototype personal object analytics application, and assess the feasibility of our design decisions. This work demonstrates that it is possible to build a purely network-based IoT analytics platform using only the location and motion signatures of connected objects, and that the WiFi network is a key enabler for future IoT applications.},
  keywords  = {Accelerometers, cloud based query server, cloud computing, data transport mechanism, digital signatures, Distance measurement, Internet of Things, internetworking, IoT analytic, Logic gates, Mobile communication, motion signatures, network servers, Probes, proximity ranging algorithm, Search problems, telecommunication network management, WiFi gateway captures, WiFi management probes, WiFi network, wireless LAN}
}

@inproceedings{Luengo2015,
  title     = {Causality Analysis of Atrial Fibrillation Electrograms},
  author    = {Luengo, David and Ríos-Muñoz, Gonzalo and Elvira, Victor},
  booktitle = {Computing in Cardiology},
  address   = {Nice},
  year      = {2015},
  url       = {http://www.cinc2015.org/}
}

@inproceedings{Valera2015,
  title     = {A Bayesian Nonparametric Approach for Blind Multiuser Channel Estimation},
  author    = {Valera, Isabel and Ruiz, Francisco J. R. and Svensson, Lennart and Perez-Cruz, Fernando},
  booktitle = {2015 23rd European Signal Processing Conference (EUSIPCO)},
  pages     = {2766--2770},
  publisher = {IEEE},
  address   = {Nice},
  year      = {2015},
  isbn      = {978-0-9928-6263-3},
  doi       = {10.1109/EUSIPCO.2015.7362888},
  url       = {http://ieeexplore.ieee.org/articleDetails.jsp?arnumber=7362888 http://www.eurasip.org/Proceedings/Eusipco/Eusipco2015/papers/1570096659.pdf},
  abstract  = {In many modern multiuser communication systems, users are allowed to enter and leave the system at any given time. Thus, the number of active users is an unknown and time-varying parameter, and the performance of the system depends on how accurately this parameter is estimated over time. We address the problem of blind joint channel parameter and data estimation in a multiuser communication channel in which the number of transmitters is not known. For that purpose, we develop a Bayesian nonparametric model based on the Markov Indian buffet process and an inference algorithm that makes use of slice sampling and particle Gibbs with ancestor sampling. Our experimental results show that the proposed approach can effectively recover the data-generating process for a wide range of scenarios.},
  keywords  = {Bayes methods, Bayesian nonparametric, communication systems, factorial HMM, Hidden Markov models, machine-to-machine, multiuser communication, Receiving antennas, Signal to noise ratio, Transmitters}
}

Santos, Irene; Murillo-Fuentes, Juan Jose; Olmos, Pablo Block Expectation Propagation Equalization for ISI Channels (Inproceeding) 2015 23rd European Signal Processing Conference (EUSIPCO), pp. 379–383, IEEE, Nice, 2015, ISBN: 978-0-9928-6263-3. (Abstract | Links | BibTeX | Tags: Approximation algorithms, Approximation methods, BCJR algorithm, channel equalization, Complexity theory, Decoding, Equalizers, expectation propagation, ISI, low complexity, Signal processing algorithms) @inproceedings{Santos2015, title = {Block Expectation Propagation Equalization for ISI Channels}, author = {Santos, Irene and Murillo-Fuentes, Juan Jose and Olmos, Pablo M.}, url = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=7362409}, doi = {10.1109/EUSIPCO.2015.7362409}, isbn = {978-0-9928-6263-3}, year = {2015}, date = {2015-08-01}, booktitle = {2015 23rd European Signal Processing Conference (EUSIPCO)}, pages = {379--383}, publisher = {IEEE}, address = {Nice}, abstract = {Modern communication systems use high-order modulations and channels with memory. However, as the channel memory and the constellation order grow, optimal equalizers such as the BCJR algorithm become computationally intractable, since their complexity increases exponentially with the number of taps and the modulation size. In this paper, we propose a novel low-complexity hard and soft output equalizer based on the Expectation Propagation (EP) algorithm that provides high-accuracy posterior probability estimations at the input of the channel decoder with computational complexity similar to that of the linear MMSE.
We experimentally show that this quasi-optimal solution outperforms classical solutions, reducing the bit error probability at low complexity when LDPC channel decoding is used and avoiding the curse of dimensionality with channel memory and constellation size.}, keywords = {Approximation algorithms, Approximation methods, BCJR algorithm, channel equalization, Complexity theory, Decoding, Equalizers, expectation propagation, ISI, low complexity, Signal processing algorithms}, pubstate = {published}, tppubtype = {inproceedings} } Modern communication systems use high-order modulations and channels with memory. However, as the channel memory and the constellation order grow, optimal equalizers such as the BCJR algorithm become computationally intractable, since their complexity increases exponentially with the number of taps and the modulation size. In this paper, we propose a novel low-complexity hard and soft output equalizer based on the Expectation Propagation (EP) algorithm that provides high-accuracy posterior probability estimations at the input of the channel decoder with computational complexity similar to that of the linear MMSE. We experimentally show that this quasi-optimal solution outperforms classical solutions, reducing the bit error probability at low complexity when LDPC channel decoding is used and avoiding the curse of dimensionality with channel memory and constellation size. |
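For readers unfamiliar with EP, the elementary operation behind equalizers of this kind is projecting a discrete symbol posterior onto a Gaussian by moment matching. The sketch below illustrates that generic projection step only, assuming a BPSK alphabet and a Gaussian "cavity" message; the names and setup are illustrative, not the block equalizer of the paper:

```python
import math

def project_to_gaussian(symbols, likelihoods, cav_mean, cav_var):
    """EP-style moment matching: combine a Gaussian cavity message with
    discrete symbol likelihoods, then return the mean and variance of the
    resulting tilted distribution (its best Gaussian approximation)."""
    # unnormalized posterior mass on each constellation point
    w = [lik * math.exp(-0.5 * (s - cav_mean) ** 2 / cav_var)
         for s, lik in zip(symbols, likelihoods)]
    z = sum(w)
    mean = sum(wi * s for wi, s in zip(w, symbols)) / z
    var = sum(wi * (s - mean) ** 2 for wi, s in zip(w, symbols)) / z
    return mean, var

# BPSK alphabet; a cavity mean of 0.5 tilts the posterior toward +1
mean, var = project_to_gaussian([-1.0, 1.0], [1.0, 1.0], 0.5, 1.0)
```

In a full equalizer this projection is iterated across the block, which is what keeps the complexity close to a linear MMSE filter instead of the exponential BCJR trellis.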

Martino, Luca; Elvira, Victor; Luengo, David; Corander, Jukka Parallel interacting Markov adaptive importance sampling (Inproceeding) 2015 23rd European Signal Processing Conference (EUSIPCO), pp. 499–503, IEEE, Nice, 2015, ISBN: 978-0-9928-6263-3. (Abstract | Links | BibTeX | Tags: Adaptive importance sampling, Bayesian inference, MCMC methods, Monte Carlo methods, Parallel Chains, Probability density function, Proposals, Signal processing, Signal processing algorithms, Sociology) @inproceedings{Martino2015b, title = {Parallel interacting Markov adaptive importance sampling}, author = {Martino, Luca and Elvira, Victor and Luengo, David and Corander, Jukka}, url = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=7362433 http://www.eurasip.org/Proceedings/Eusipco/Eusipco2015/papers/1570111267.pdf}, doi = {10.1109/EUSIPCO.2015.7362433}, isbn = {978-0-9928-6263-3}, year = {2015}, date = {2015-08-01}, booktitle = {2015 23rd European Signal Processing Conference (EUSIPCO)}, pages = {499--503}, publisher = {IEEE}, address = {Nice}, abstract = {Monte Carlo (MC) methods are widely used for statistical inference in signal processing applications. A well-known class of MC methods is importance sampling (IS) and its adaptive extensions. In this work, we introduce an iterated importance sampler using a population of proposal densities, which are adapted according to an MCMC technique over the population of location parameters. The novel algorithm provides a global estimation of the variables of interest iteratively, using all the samples weighted according to the deterministic mixture scheme. 
Numerical results, on a multi-modal example and a localization problem in wireless sensor networks, show the advantages of the proposed schemes.}, keywords = {Adaptive importance sampling, Bayesian inference, MCMC methods, Monte Carlo methods, Parallel Chains, Probability density function, Proposals, Signal processing, Signal processing algorithms, Sociology}, pubstate = {published}, tppubtype = {inproceedings} } Monte Carlo (MC) methods are widely used for statistical inference in signal processing applications. A well-known class of MC methods is importance sampling (IS) and its adaptive extensions. In this work, we introduce an iterated importance sampler using a population of proposal densities, which are adapted according to an MCMC technique over the population of location parameters. The novel algorithm provides a global estimation of the variables of interest iteratively, using all the samples weighted according to the deterministic mixture scheme. Numerical results, on a multi-modal example and a localization problem in wireless sensor networks, show the advantages of the proposed schemes. |
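As context for the deterministic mixture scheme mentioned in the abstract: each sample is weighted against the whole mixture of proposals rather than only the proposal that generated it. A minimal, non-adaptive sketch with Gaussian proposals and a toy unnormalized Gaussian target (all names and values are illustrative):

```python
import math
import random

def gauss_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def target(x):
    """Unnormalized toy target, a Gaussian centered at 3; importance
    sampling only needs the target up to a constant."""
    return math.exp(-0.5 * (x - 3.0) ** 2)

def dm_is_mean(locations, sigma=1.0, n_per_proposal=2000, seed=0):
    """Self-normalized IS estimate of E[x] with deterministic mixture
    weights: w(x) = target(x) / mixture(x), where mixture(x) averages
    the densities of ALL proposals, not just the one that drew x."""
    rng = random.Random(seed)
    num = den = 0.0
    for mu in locations:
        for _ in range(n_per_proposal):
            x = rng.gauss(mu, sigma)
            mix = sum(gauss_pdf(x, m, sigma) for m in locations) / len(locations)
            w = target(x) / mix
            num += w * x
            den += w
    return num / den

est = dm_is_mean([0.0, 2.0, 4.0, 6.0])  # should approach the target mean, 3
```

The deterministic mixture weighting is what lets a population of poorly placed proposals still produce stable weights; the adaptation of the proposal locations (via MCMC in the paper) is omitted here.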

Olmos, Pablo; Mitchell, David; Costello, Daniel Analyzing the Finite-Length Performance of Generalized LDPC Codes (Inproceeding) 2015 IEEE International Symposium on Information Theory (ISIT), pp. 2683–2687, IEEE, Hong Kong, 2015, ISBN: 978-1-4673-7704-1. (Abstract | Links | BibTeX | Tags: BEC, binary codes, binary erasure channel, Block codes, Codes on graphs, Decoding, Differential equations, error probability, finite-length generalized LDPC block codes, finite-length performance analysis, generalized LDPC codes, generalized peeling decoder, GLDPC block codes, graph degree distribution, graph theory, Iterative decoding, parity check codes, protographs) @inproceedings{Olmos2015b, title = {Analyzing the Finite-Length Performance of Generalized LDPC Codes}, author = {Olmos, Pablo M. and Mitchell, David G. M. and Costello, Daniel J.}, url = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=7282943}, doi = {10.1109/ISIT.2015.7282943}, isbn = {978-1-4673-7704-1}, year = {2015}, date = {2015-06-01}, booktitle = {2015 IEEE International Symposium on Information Theory (ISIT)}, pages = {2683--2687}, publisher = {IEEE}, address = {Hong Kong}, abstract = {In this paper, we analyze the performance of finite-length generalized LDPC (GLDPC) block codes constructed from protographs when transmission takes place over the binary erasure channel (BEC). A generalized peeling decoder is proposed and we derive a system of differential equations that gives the expected evolution of the graph degree distribution during decoding. We then show that the finite-length performance of a GLDPC code can be estimated by means of a simple scaling law, where a single scaling parameter represents the finite-length properties of the code. 
We also show that, as we consider stronger component codes, both the asymptotic threshold and the finite-length scaling parameter are improved.}, keywords = {BEC, binary codes, binary erasure channel, Block codes, Codes on graphs, Decoding, Differential equations, error probability, finite-length generalized LDPC block codes, finite-length performance analysis, generalized LDPC codes, generalized peeling decoder, GLDPC block codes, graph degree distribution, graph theory, Iterative decoding, parity check codes, protographs}, pubstate = {published}, tppubtype = {inproceedings} } In this paper, we analyze the performance of finite-length generalized LDPC (GLDPC) block codes constructed from protographs when transmission takes place over the binary erasure channel (BEC). A generalized peeling decoder is proposed and we derive a system of differential equations that gives the expected evolution of the graph degree distribution during decoding. We then show that the finite-length performance of a GLDPC code can be estimated by means of a simple scaling law, where a single scaling parameter represents the finite-length properties of the code. We also show that, as we consider stronger component codes, both the asymptotic threshold and the finite-length scaling parameter are improved. |

Stinner, Markus; Olmos, Pablo Finite-Length Performance of Multi-Edge Protograph-Based Spatially Coupled LDPC Codes (Inproceeding) 2015 IEEE International Symposium on Information Theory (ISIT), pp. 889–893, IEEE, Hong Kong, 2015, ISBN: 978-1-4673-7704-1. (Abstract | Links | BibTeX | Tags: binary erasure channel, Block codes, Couplings, Decoding, Error analysis, finite length performance, finite-length performance, graph theory, Iterative decoding, low density parity check codes, multiedge protograph, parity check codes, spatially coupled LDPC codes, spatially-coupled LDPC codes, Steady-state) @inproceedings{Stinner2015, title = {Finite-Length Performance of Multi-Edge Protograph-Based Spatially Coupled LDPC Codes}, author = {Stinner, Markus and Olmos, Pablo M.}, url = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=7282583}, doi = {10.1109/ISIT.2015.7282583}, isbn = {978-1-4673-7704-1}, year = {2015}, date = {2015-06-01}, booktitle = {2015 IEEE International Symposium on Information Theory (ISIT)}, pages = {889--893}, publisher = {IEEE}, address = {Hong Kong}, abstract = {The finite-length performance of multi-edge spatially coupled low-density parity-check (SC-LDPC) codes over the binary erasure channel (BEC) is analyzed. Existing scaling laws are extended to arbitrary protograph base matrices that include puncturing patterns and multiple edges between nodes. A regular protograph-based SC-LDPC construction based on the (4, 8)-regular LDPC block code works well in the waterfall region compared to more involved rate-1/2 structures proposed to improve the threshold to minimum distance trade-off. Scaling laws are also used for code design and to estimate the block length of a given SC-LDPC code ensemble to match the performance of some other code.
Estimates on the performance degradation are developed if the chain length varies.}, keywords = {binary erasure channel, Block codes, Couplings, Decoding, Error analysis, finite length performance, finite-length performance, graph theory, Iterative decoding, low density parity check codes, multiedge protograph, parity check codes, spatially coupled LDPC codes, spatially-coupled LDPC codes, Steady-state}, pubstate = {published}, tppubtype = {inproceedings} } The finite-length performance of multi-edge spatially coupled low-density parity-check (SC-LDPC) codes over the binary erasure channel (BEC) is analyzed. Existing scaling laws are extended to arbitrary protograph base matrices that include puncturing patterns and multiple edges between nodes. A regular protograph-based SC-LDPC construction based on the (4, 8)-regular LDPC block code works well in the waterfall region compared to more involved rate-1/2 structures proposed to improve the threshold to minimum distance trade-off. Scaling laws are also used for code design and to estimate the block length of a given SC-LDPC code ensemble to match the performance of some other code. Estimates on the performance degradation are developed if the chain length varies. |

Vazquez-Vilar, Gonzalo; Martinez, Alfonso; Guillen i Fabregas, Albert A derivation of the Cost-Constrained Sphere-Packing Exponent (Inproceeding) 2015 IEEE International Symposium on Information Theory (ISIT), pp. 929–933, IEEE, Hong Kong, 2015, ISBN: 978-1-4673-7704-1. (Links | BibTeX | Tags: Channel Coding, channel-coding cost-constrained sphere-packing exp, continuous channel, continuous memoryless channel, cost constraint, error probability, hypothesis testing, Lead, Memoryless systems, Optimization, per-codeword cost constraint, reliability function, spherepacking exponent, Testing) @inproceedings{Vazquez-Vilar2015, title = {A derivation of the Cost-Constrained Sphere-Packing Exponent}, author = {Vazquez-Vilar, Gonzalo and Martinez, Alfonso and Guillen i Fabregas, Albert}, url = {http://ieeexplore.ieee.org/articleDetails.jsp?arnumber=7282591}, doi = {10.1109/ISIT.2015.7282591}, isbn = {978-1-4673-7704-1}, year = {2015}, date = {2015-06-01}, booktitle = {2015 IEEE International Symposium on Information Theory (ISIT)}, pages = {929--933}, publisher = {IEEE}, address = {Hong Kong}, keywords = {Channel Coding, channel-coding cost-constrained sphere-packing exp, continuous channel, continuous memoryless channel, cost constraint, error probability, hypothesis testing, Lead, Memoryless systems, Optimization, per-codeword cost constraint, reliability function, spherepacking exponent, Testing}, pubstate = {published}, tppubtype = {inproceedings} } |

Elvira, Victor; Martino, Luca; Luengo, David; Corander, Jukka A Gradient Adaptive Population Importance Sampler (Inproceeding) 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4075–4079, IEEE, Brisbane, 2015, ISBN: 978-1-4673-6997-8. (Abstract | Links | BibTeX | Tags: adaptive extensions, adaptive importance sampler, Adaptive importance sampling, gradient adaptive population, gradient matrix, Hamiltonian Monte Carlo, Hessian matrices, Hessian matrix, learning (artificial intelligence), Machine learning, MC methods, Monte Carlo, Monte Carlo methods, population Monte Carlo (PMC), proposal densities, Signal processing, Sociology, statistics, target distribution) @inproceedings{Elvira2015a, title = {A Gradient Adaptive Population Importance Sampler}, author = {Elvira, Victor and Martino, Luca and Luengo, David and Corander, Jukka}, url = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=7178737 http://www.tsc.uc3m.es/~velvira/papers/ICASSP2015_elvira.pdf}, doi = {10.1109/ICASSP.2015.7178737}, isbn = {978-1-4673-6997-8}, year = {2015}, date = {2015-04-01}, booktitle = {2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages = {4075--4079}, publisher = {IEEE}, address = {Brisbane}, abstract = {Monte Carlo (MC) methods are widely used in signal processing and machine learning. A well-known class of MC methods is composed of importance sampling and its adaptive extensions (e.g., population Monte Carlo). In this paper, we introduce an adaptive importance sampler using a population of proposal densities. The novel algorithm dynamically optimizes the cloud of proposals, adapting them using information about the gradient and Hessian matrix of the target distribution. 
Moreover, a new kind of interaction in the adaptation of the proposal densities is introduced, establishing a trade-off between attaining a good performance in terms of mean square error and robustness to initialization.}, keywords = {adaptive extensions, adaptive importance sampler, Adaptive importance sampling, gradient adaptive population, gradient matrix, Hamiltonian Monte Carlo, Hessian matrices, Hessian matrix, learning (artificial intelligence), Machine learning, MC methods, Monte Carlo, Monte Carlo methods, population Monte Carlo (PMC), proposal densities, Signal processing, Sociology, statistics, target distribution}, pubstate = {published}, tppubtype = {inproceedings} } Monte Carlo (MC) methods are widely used in signal processing and machine learning. A well-known class of MC methods is composed of importance sampling and its adaptive extensions (e.g., population Monte Carlo). In this paper, we introduce an adaptive importance sampler using a population of proposal densities. The novel algorithm dynamically optimizes the cloud of proposals, adapting them using information about the gradient and Hessian matrix of the target distribution. Moreover, a new kind of interaction in the adaptation of the proposal densities is introduced, establishing a trade-off between attaining a good performance in terms of mean square error and robustness to initialization. |

Fernandez-Bes, Jesus; Elvira, Victor; Van Vaerenbergh, Steven A Probabilistic Least-Mean-Squares Filter (Inproceeding) 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 2199–2203, IEEE, Brisbane, 2015, ISBN: 978-1-4673-6997-8. (Abstract | Links | BibTeX | Tags: adaptable step size LMS algorithm, Adaptation models, adaptive filtering, Approximation algorithms, Bayesian machine learning techniques, efficient approximation algorithm, filtering theory, Least squares approximations, least-mean-squares, probabilistic least mean squares filter, Probabilistic logic, probabilistic models, Probability, Signal processing algorithms, Standards, state-space models) @inproceedings{Fernandez-Bes2015, title = {A Probabilistic Least-Mean-Squares Filter}, author = {Fernandez-Bes, Jesus and Elvira, Victor and Van Vaerenbergh, Steven}, url = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=7178361 http://www.tsc.uc3m.es/~velvira/papers/ICASSP2015_bes.pdf}, doi = {10.1109/ICASSP.2015.7178361}, isbn = {978-1-4673-6997-8}, year = {2015}, date = {2015-04-01}, booktitle = {2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages = {2199--2203}, publisher = {IEEE}, address = {Brisbane}, abstract = {We introduce a probabilistic approach to the LMS filter. By means of an efficient approximation, this approach provides an adaptable step-size LMS algorithm together with a measure of uncertainty about the estimation. In addition, the proposed approximation preserves the linear complexity of the standard LMS. Numerical results show the improved performance of the algorithm with respect to standard LMS and state-of-the-art algorithms with similar complexity.
The goal of this work, therefore, is to open the door to bringing some more Bayesian machine learning techniques to adaptive filtering.}, keywords = {adaptable step size LMS algorithm, Adaptation models, adaptive filtering, Approximation algorithms, Bayesian machine learning techniques, efficient approximation algorithm, filtering theory, Least squares approximations, least-mean-squares, probabilistic least mean squares filter, Probabilistic logic, probabilistic models, Probability, Signal processing algorithms, Standards, state-space models}, pubstate = {published}, tppubtype = {inproceedings} } We introduce a probabilistic approach to the LMS filter. By means of an efficient approximation, this approach provides an adaptable step-size LMS algorithm together with a measure of uncertainty about the estimation. In addition, the proposed approximation preserves the linear complexity of the standard LMS. Numerical results show the improved performance of the algorithm with respect to standard LMS and state-of-the-art algorithms with similar complexity. The goal of this work, therefore, is to open the door to bringing some more Bayesian machine learning techniques to adaptive filtering. |
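As a classical baseline for the adaptable step size the abstract refers to, the normalized LMS (NLMS) filter scales its step by the instantaneous input energy. The sketch below is the standard NLMS, shown as context only, not the paper's probabilistic derivation; the channel and parameters are illustrative:

```python
import random

def nlms(x, d, order=4, mu=0.5, eps=1e-6):
    """Normalized LMS: the effective step size mu/||u||^2 adapts to the
    energy of the current input window, giving stable convergence
    regardless of the input scale."""
    w = [0.0] * order
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]          # [x[n], x[n-1], ..., x[n-order+1]]
        y = sum(wi * ui for wi, ui in zip(w, u))  # filter output
        e = d[n] - y                              # a priori error
        norm = eps + sum(ui * ui for ui in u)
        w = [wi + (mu / norm) * e * ui for wi, ui in zip(w, u)]
    return w

# toy system identification: recover a known FIR channel from white noise
rng = random.Random(1)
h = [0.5, -0.3, 0.2, 0.1]
x = [rng.gauss(0.0, 1.0) for _ in range(3000)]
d = [sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0) for n in range(len(x))]
w_hat = nlms(x, d)
```

The probabilistic formulation in the paper goes further than this baseline: it derives the step size from an uncertainty estimate instead of fixing mu by hand.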

Martino, Luca; Elvira, Victor; Luengo, David; Artés-Rodríguez, Antonio; Corander, Jukka Smelly Parallel MCMC Chains (Inproceeding) 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4070–4074, IEEE, Brisbane, 2015, ISBN: 978-1-4673-6997-8. (Abstract | Links | BibTeX | Tags: Bayesian inference, learning (artificial intelligence), Machine learning, Markov chain Monte Carlo, Markov chain Monte Carlo algorithms, Markov processes, MC methods, MCMC algorithms, MCMC scheme, mean square error, mean square error methods, Monte Carlo methods, optimisation, parallel and interacting chains, Probability density function, Proposals, robustness, Sampling methods, Signal processing, Signal processing algorithms, signal sampling, smelly parallel chains, smelly parallel MCMC chains, Stochastic optimization) @inproceedings{Martino2015a, title = {Smelly Parallel MCMC Chains}, author = {Martino, Luca and Elvira, Victor and Luengo, David and Artés-Rodríguez, Antonio and Corander, J.}, url = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=7178736 http://www.tsc.uc3m.es/~velvira/papers/ICASSP2015_martino.pdf}, doi = {10.1109/ICASSP.2015.7178736}, isbn = {978-1-4673-6997-8}, year = {2015}, date = {2015-04-01}, booktitle = {2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages = {4070--4074}, publisher = {IEEE}, address = {Brisbane}, abstract = {Monte Carlo (MC) methods are useful tools for Bayesian inference and stochastic optimization that have been widely applied in signal processing and machine learning. A well-known class of MC methods is Markov Chain Monte Carlo (MCMC) algorithms. In this work, we introduce a novel parallel interacting MCMC scheme, where the parallel chains share information, thus yielding a faster exploration of the state space.
The interaction is carried out by generating a dynamic repulsion among the “smelly” parallel chains that takes into account the entire population of current states. The ergodicity of the scheme and its relationship with other sampling methods are discussed. Numerical results show the advantages of the proposed approach in terms of mean square error and robustness w.r.t. initial values and parameter choice.}, keywords = {Bayesian inference, learning (artificial intelligence), Machine learning, Markov chain Monte Carlo, Markov chain Monte Carlo algorithms, Markov processes, MC methods, MCMC algorithms, MCMC scheme, mean square error, mean square error methods, Monte Carlo methods, optimisation, parallel and interacting chains, Probability density function, Proposals, robustness, Sampling methods, Signal processing, Signal processing algorithms, signal sampling, smelly parallel chains, smelly parallel MCMC chains, Stochastic optimization}, pubstate = {published}, tppubtype = {inproceedings} } Monte Carlo (MC) methods are useful tools for Bayesian inference and stochastic optimization that have been widely applied in signal processing and machine learning. A well-known class of MC methods is Markov Chain Monte Carlo (MCMC) algorithms. In this work, we introduce a novel parallel interacting MCMC scheme, where the parallel chains share information, thus yielding a faster exploration of the state space. The interaction is carried out by generating a dynamic repulsion among the “smelly” parallel chains that takes into account the entire population of current states. The ergodicity of the scheme and its relationship with other sampling methods are discussed. Numerical results show the advantages of the proposed approach in terms of mean square error and robustness w.r.t. initial values and parameter choice. |

Luengo, David; Martino, Luca; Elvira, Victor; Bugallo, Monica Efficient Linear Combination of Partial Monte Carlo Estimators (Inproceeding) 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4100–4104, IEEE, Brisbane, 2015, ISBN: 978-1-4673-6997-8. (Abstract | Links | BibTeX | Tags: covariance matrices, efficient linear combination, Estimation, fusion, Global estimator, global estimators, least mean squares methods, linear combination, minimum mean squared error estimators, Monte Carlo estimation, Monte Carlo methods, partial estimator, partial Monte Carlo estimators, Xenon) @inproceedings{Luengo2015b, title = {Efficient Linear Combination of Partial Monte Carlo Estimators}, author = {Luengo, David and Martino, Luca and Elvira, Victor and Bugallo, Monica F.}, url = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=7178742 http://www.tsc.uc3m.es/~velvira/papers/ICASSP2015_luengo.pdf}, doi = {10.1109/ICASSP.2015.7178742}, isbn = {978-1-4673-6997-8}, year = {2015}, date = {2015-04-01}, booktitle = {2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages = {4100--4104}, publisher = {IEEE}, address = {Brisbane}, abstract = {In many practical scenarios, including those dealing with large data sets, calculating global estimators of unknown variables of interest becomes unfeasible. A common solution is obtaining partial estimators and combining them to approximate the global one. In this paper, we focus on minimum mean squared error (MMSE) estimators, introducing two efficient linear schemes for the fusion of partial estimators. The proposed approaches are valid for any type of partial estimators, although in the simulated scenarios we concentrate on the combination of Monte Carlo estimators due to the nature of the problem addressed. 
Numerical results show the good performance of the novel fusion methods with only a fraction of the cost of the asymptotically optimal solution.}, keywords = {covariance matrices, efficient linear combination, Estimation, fusion, Global estimator, global estimators, least mean squares methods, linear combination, minimum mean squared error estimators, Monte Carlo estimation, Monte Carlo methods, partial estimator, partial Monte Carlo estimators, Xenon}, pubstate = {published}, tppubtype = {inproceedings} } In many practical scenarios, including those dealing with large data sets, calculating global estimators of unknown variables of interest becomes unfeasible. A common solution is obtaining partial estimators and combining them to approximate the global one. In this paper, we focus on minimum mean squared error (MMSE) estimators, introducing two efficient linear schemes for the fusion of partial estimators. The proposed approaches are valid for any type of partial estimators, although in the simulated scenarios we concentrate on the combination of Monte Carlo estimators due to the nature of the problem addressed. Numerical results show the good performance of the novel fusion methods with only a fraction of the cost of the asymptotically optimal solution. |
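One standard instance of such a linear combination is inverse-variance weighting of unbiased partial estimators, shown below as generic context (the paper's schemes additionally address the practical case where the weights must be obtained at a fraction of the cost of the asymptotically optimal solution):

```python
def fuse(estimates, variances):
    """Linear fusion of unbiased, uncorrelated partial estimators with
    weights proportional to 1/variance. This choice minimizes the variance
    of the combined estimator, which equals 1/sum(1/variances)."""
    inv = [1.0 / v for v in variances]
    s = sum(inv)
    combined = sum(wi * e for wi, e in zip(inv, estimates)) / s
    return combined, 1.0 / s

# two partial estimators of the same quantity with equal variance:
# the fused estimate is their plain average, with half the variance
est, var = fuse([1.0, 3.0], [2.0, 2.0])
```

When a variance is smaller, its estimator dominates the combination; with equal variances the rule reduces to the simple average of the partial estimators.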

Nazabal, Alfredo; Artés-Rodríguez, Antonio Discriminative spectral learning of hidden markov models for human activity recognition (Inproceeding) 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1966–1970, IEEE, Brisbane, 2015, ISBN: 978-1-4673-6997-8. (Abstract | Links | BibTeX | Tags: Accuracy, Bayesian estimation, classify sequential data, Data models, Databases, Discriminative learning, discriminative spectral learning, Hidden Markov models, HMM parameters, Human activity recognition, learning (artificial intelligence), maximum likelihood, maximum likelihood estimation, ML, moment matching learning technique, Observable operator models, sensors, Spectral algorithm, spectral learning, Speech recognition, Training) @inproceedings{Nazabal2015, title = {Discriminative spectral learning of hidden markov models for human activity recognition}, author = {Nazabal, Alfredo and Artés-Rodríguez, Antonio}, url = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=7178314}, doi = {10.1109/ICASSP.2015.7178314}, isbn = {978-1-4673-6997-8}, year = {2015}, date = {2015-04-01}, booktitle = {2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages = {1966--1970}, publisher = {IEEE}, address = {Brisbane}, abstract = {Hidden Markov Models (HMMs) are one of the most important techniques to model and classify sequential data. Maximum Likelihood (ML) and (parametric and non-parametric) Bayesian estimation of the HMM parameters suffer from local maxima, and in massive datasets they can be especially time consuming. In this paper, we extend the spectral learning of HMMs, a moment matching learning technique free from local maxima, to discriminative HMMs. The resulting method provides the posterior probabilities of the classes without explicitly determining the HMM parameters, and is able to deal with missing labels.
We apply the method to Human Activity Recognition (HAR) using two different types of sensors: portable inertial sensors, and fixed, wireless binary sensor networks. Our algorithm outperforms the standard discriminative HMM learning in both complexity and accuracy.}, keywords = {Accuracy, Bayesian estimation, classify sequential data, Data models, Databases, Discriminative learning, discriminative spectral learning, Hidden Markov models, HMM parameters, Human activity recognition, learning (artificial intelligence), maximum likelihood, maximum likelihood estimation, ML, moment matching learning technique, Observable operator models, sensors, Spectral algorithm, spectral learning, Speech recognition, Training}, pubstate = {published}, tppubtype = {inproceedings} } Hidden Markov Models (HMMs) are one of the most important techniques to model and classify sequential data. Maximum Likelihood (ML) and (parametric and non-parametric) Bayesian estimation of the HMM parameters suffer from local maxima, and in massive datasets they can be especially time consuming. In this paper, we extend the spectral learning of HMMs, a moment matching learning technique free from local maxima, to discriminative HMMs. The resulting method provides the posterior probabilities of the classes without explicitly determining the HMM parameters, and is able to deal with missing labels. We apply the method to Human Activity Recognition (HAR) using two different types of sensors: portable inertial sensors, and fixed, wireless binary sensor networks. Our algorithm outperforms the standard discriminative HMM learning in both complexity and accuracy. |

Perez-Cruz, Fernando; Huang, Howard A Blind Nonparametric Non-line of Sight Bias Model for Accurate Localization (Inproceeding) Information Theory and Applications (ITA), San Diego, 2015. (Abstract | Links | BibTeX | Tags: ) @inproceedings{Perez-Cruz2015, title = {A Blind Nonparametric Non-line of Sight Bias Model for Accurate Localization}, author = {Perez-Cruz, Fernando and Huang, Howard}, url = {http://ita.ucsd.edu/workshop/15/files/abstract/abstract_1462.txt}, year = {2015}, date = {2015-02-01}, booktitle = {Information Theory and Applications (ITA)}, address = {San Diego}, abstract = {One of the most promising solutions for accurate localization services is estimating the Time Difference of Arrival (TDoA) with a cellular infrastructure and triangulating the position of the sought device. There are three different elements that limit the accuracy of TDoA: bandwidth/SNR, clock accuracy and non-line-of-sight (NLOS) bias. The Cramer-Rao lower bound is well known and can be made sufficiently low (centimeters) with existing technologies. GPS clock accuracy is below 15 ns (less than 5 meters). NLOS bias is difficult to characterize and depends heavily on the environment: we cannot rely on simple distributions to model it, and we should not expect it to follow a few typical scenarios. In this talk, we present a nonparametric model for estimating the NLOS bias and an algorithm that learns the model on the fly without feedback on the true position. This procedure provides accurate localization in any environment, without needing to fine-tune the NLOS prior for each base station. The actual accuracy depends on the number of base stations that hear the device, but uncontrolled outliers no longer limit it.
For a dense infrastructure, we show that the localization error is on the order of a few meters.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } One of the most promising solutions for accurate localization services is estimating the Time Difference of Arrival (TDoA) with a cellular infrastructure and triangulating the position of the sought device. There are three different elements that limit the accuracy of TDoA: bandwidth/SNR, clock accuracy and non-line-of-sight (NLOS) bias. The Cramer-Rao lower bound is well known and can be made sufficiently low (centimeters) with existing technologies. GPS clock accuracy is below 15 ns (less than 5 meters). NLOS bias is difficult to characterize and depends heavily on the environment: we cannot rely on simple distributions to model it, and we should not expect it to follow a few typical scenarios. In this talk, we present a nonparametric model for estimating the NLOS bias and an algorithm that learns the model on the fly without feedback on the true position. This procedure provides accurate localization in any environment, without needing to fine-tune the NLOS prior for each base station. The actual accuracy depends on the number of base stations that hear the device, but uncontrolled outliers no longer limit it. For a dense infrastructure, we show that the localization error is on the order of a few meters. |

## 2014 |

## Journal Articles |

Santiago-Mozos, Ricardo; Perez-Cruz, Fernando; Madden, Michael; Artés-Rodríguez, Antonio An Automated Screening System for Tuberculosis (Journal Article) IEEE Journal of Biomedical and Health Informatics, 18 (3), pp. 855-862, 2014, ISSN: 2168-2208. (Abstract | Links | BibTeX | Tags: Automated screening, Bayesian, Decision making, Sequential analysis, Tuberculosis) @article{Santiago-Mozos2013, title = {An Automated Screening System for Tuberculosis}, author = {Santiago-Mozos, Ricardo and Perez-Cruz, Fernando and Madden, Michael and Artés-Rodríguez, Antonio}, url = {http://www.tsc.uc3m.es/~antonio/papers/P47_2014_An Automated Screening System for Tuberculosis.pdf http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6630069}, issn = {2168-2208}, year = {2014}, date = {2014-05-01}, journal = {IEEE Journal of Biomedical and Health Informatics}, volume = {18}, number = {3}, pages = {855-862}, publisher = {IEEE}, abstract = {Automated screening systems are commonly used to detect some agent in a sample and take a global decision about the subject (e.g. ill/healthy) based on these detections. We propose a Bayesian methodology for taking decisions in (sequential) screening systems that considers the false alarm rate of the detector. Our approach assesses the quality of its decisions and provides lower bounds on the achievable performance of the screening system from the training data. In addition, we develop a complete screening system for sputum smears in tuberculosis diagnosis, and show, using a real-world database, the advantages of the proposed framework when compared to the commonly used count detections and threshold approach.}, keywords = {Automated screening, Bayesian, Decision making, Sequential analysis, Tuberculosis}, pubstate = {published}, tppubtype = {article} } Automated screening systems are commonly used to detect some agent in a sample and take a global decision about the subject (e.g. ill/healthy) based on these detections.
We propose a Bayesian methodology for taking decisions in (sequential) screening systems that considers the false alarm rate of the detector. Our approach assesses the quality of its decisions and provides lower bounds on the achievable performance of the screening system from the training data. In addition, we develop a complete screening system for sputum smears in tuberculosis diagnosis, and show, using a real-world database, the advantages of the proposed framework when compared to the commonly used count detections and threshold approach. |
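The idea of a sequential Bayesian screening decision that accounts for the detector's false alarm rate can be illustrated with a toy version. The detection probability, false alarm probability, prior, and stopping thresholds below are assumed values, not those of the paper:

```python
# Toy sequential screening: update P(ill) from per-field detections, where
# the detector has detection probability p_d and false alarm probability
# p_fa (all numbers invented for illustration).
p_d, p_fa, prior = 0.7, 0.1, 0.05

def update(p_ill, detected):
    # Likelihood of the observation under each hypothesis, then Bayes' rule.
    l_ill = p_d if detected else 1 - p_d
    l_healthy = p_fa if detected else 1 - p_fa
    num = l_ill * p_ill
    return num / (num + l_healthy * (1 - p_ill))

p = prior
for obs in [True, False, True, True]:   # detections in successive fields
    p = update(p, obs)
    if p > 0.99 or p < 0.001:           # stop scanning once confident enough
        break
print(f"posterior P(ill) = {p:.3f}")
```

Note that a single false alarm barely moves the posterior when p_fa is properly modelled, which is the point of going beyond the count-and-threshold approach.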

Read, Jesse; Martino, Luca; Luengo, David Efficient Monte Carlo Methods for Multi-Dimensional Learning with Classifier Chains (Journal Article) Pattern Recognition, 47 (3), pp. 1535–1546, 2014. (Abstract | Links | BibTeX | Tags: Bayesian inference, Classifier chains, Monte Carlo methods, Multi-dimensional classification, Multi-label classification) @article{Impedovo2014b, title = {Efficient Monte Carlo Methods for Multi-Dimensional Learning with Classifier Chains}, author = {Read, Jesse and Martino, Luca and Luengo, David}, url = {http://www.sciencedirect.com/science/article/pii/S0031320313004160}, year = {2014}, date = {2014-01-01}, journal = {Pattern Recognition}, volume = {47}, number = {3}, pages = {1535--1546}, abstract = {Multi-dimensional classification (MDC) is the supervised learning problem where an instance is associated with multiple classes, rather than with a single class, as in traditional classification problems. Since these classes are often strongly correlated, modeling the dependencies between them allows MDC methods to improve their performance – at the expense of an increased computational cost. In this paper we focus on the classifier chains (CC) approach for modeling dependencies, one of the most popular and highest-performing methods for multi-label classification (MLC), a particular case of MDC which involves only binary classes (i.e., labels). The original CC algorithm makes a greedy approximation, and is fast but tends to propagate errors along the chain. Here we present novel Monte Carlo schemes, both for finding a good chain sequence and performing efficient inference.
Our algorithms remain tractable for high-dimensional data sets and obtain the best predictive performance across several real data sets.}, keywords = {Bayesian inference, Classifier chains, Monte Carlo methods, Multi-dimensional classification, Multi-label classification}, pubstate = {published}, tppubtype = {article} } Multi-dimensional classification (MDC) is the supervised learning problem where an instance is associated with multiple classes, rather than with a single class, as in traditional classification problems. Since these classes are often strongly correlated, modeling the dependencies between them allows MDC methods to improve their performance – at the expense of an increased computational cost. In this paper we focus on the classifier chains (CC) approach for modeling dependencies, one of the most popular and highest-performing methods for multi-label classification (MLC), a particular case of MDC which involves only binary classes (i.e., labels). The original CC algorithm makes a greedy approximation, and is fast but tends to propagate errors along the chain. Here we present novel Monte Carlo schemes, both for finding a good chain sequence and performing efficient inference. Our algorithms remain tractable for high-dimensional data sets and obtain the best predictive performance across several real data sets. |
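The contrast between greedy chain inference and Monte Carlo inference over whole label vectors can be sketched on a toy chain of three binary labels. The conditional probabilities below are invented for illustration and are not the paper's models:

```python
import numpy as np
rng = np.random.default_rng(1)

# Toy classifier chain: p(y1|x), p(y2|x,y1), p(y3|x,y1,y2), all made up.
def p_y1(x): return 0.45
def p_y2(x, y1): return 0.95 if y1 else 0.25
def p_y3(x, y1, y2): return 0.98 if (y1 and y2) else 0.05

x = None  # the instance; unused in this toy model

# Greedy CC inference: threshold each label in turn (can propagate errors).
y1 = p_y1(x) > 0.5            # 0.45 -> False
y2 = p_y2(x, y1) > 0.5        # 0.25 -> False
y3 = p_y3(x, y1, y2) > 0.5    # 0.05 -> False
greedy = (y1, y2, y3)

# Monte Carlo inference: sample whole label vectors from the chain and
# keep the most frequent one (an approximate joint mode).
counts = {}
for _ in range(5000):
    s1 = rng.random() < p_y1(x)
    s2 = rng.random() < p_y2(x, s1)
    s3 = rng.random() < p_y3(x, s1, s2)
    counts[(s1, s2, s3)] = counts.get((s1, s2, s3), 0) + 1
mc = max(counts, key=counts.get)
print("greedy:", greedy, "monte carlo:", mc)
```

Because the labels are strongly coupled, the greedy pass commits to y1 = False and never revisits it, while sampling whole vectors lets high-probability joint configurations compete.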

Read, Jesse; Achutegui, Katrin; Miguez, Joaquin A Distributed Particle Filter for Nonlinear Tracking in Wireless Sensor Networks (Journal Article) Signal Processing, 98 , pp. 121–134, 2014. (Abstract | Links | BibTeX | Tags: Distributed filtering, Target tracking, Wireless sensor network) @article{Read2014b, title = {A Distributed Particle Filter for Nonlinear Tracking in Wireless Sensor Networks}, author = {Read, Jesse and Achutegui, Katrin and Miguez, Joaquin}, url = {http://www.tsc.uc3m.es/~jmiguez/papers/P40_2014_A Distributed Particle Filter for Nonlinear Tracking in Wireless Sensor Networks.pdf http://www.sciencedirect.com/science/article/pii/S0165168413004568}, year = {2014}, date = {2014-01-01}, journal = {Signal Processing}, volume = {98}, pages = {121--134}, abstract = {The use of distributed particle filters for tracking in sensor networks has become popular in recent years. The distributed particle filters proposed in the literature up to now are only approximations of the centralized particle filter or, if they are a proper distributed version of the particle filter, their implementation in a wireless sensor network demands a prohibitive communication capability. In this work, we propose a mathematically sound distributed particle filter for tracking in a real-world indoor wireless sensor network composed of low-power nodes. We provide formal and general descriptions of our methodology and then present the results of both real-world experiments and/or computer simulations that use models fitted with real data. With the same number of particles as a centralized filter, the distributed algorithm is over four times faster, yet our simulations show that, even assuming the same processing speed, the accuracy of the centralized and distributed algorithms is practically identical. The main limitation of the proposed scheme is the need to make all the sensor observations available to every processing node. 
Therefore, it is better suited to broadcast networks or multihop networks where the volume of generated data is kept low, e.g., by an adequate local pre-processing of the observations.}, keywords = {Distributed filtering, Target tracking, Wireless sensor network}, pubstate = {published}, tppubtype = {article} } The use of distributed particle filters for tracking in sensor networks has become popular in recent years. The distributed particle filters proposed in the literature up to now are only approximations of the centralized particle filter or, if they are a proper distributed version of the particle filter, their implementation in a wireless sensor network demands a prohibitive communication capability. In this work, we propose a mathematically sound distributed particle filter for tracking in a real-world indoor wireless sensor network composed of low-power nodes. We provide formal and general descriptions of our methodology and then present the results of both real-world experiments and/or computer simulations that use models fitted with real data. With the same number of particles as a centralized filter, the distributed algorithm is over four times faster, yet our simulations show that, even assuming the same processing speed, the accuracy of the centralized and distributed algorithms is practically identical. The main limitation of the proposed scheme is the need to make all the sensor observations available to every processing node. Therefore, it is better suited to broadcast networks or multihop networks where the volume of generated data is kept low, e.g., by an adequate local pre-processing of the observations. |
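As context for the distributed filter, a plain (centralized) bootstrap particle filter on a toy 1-D tracking model looks as follows; the model, noise levels, and observation function are invented, and the distributed, sensor-network aspects of the paper are not reproduced:

```python
import numpy as np
rng = np.random.default_rng(2)

T, N = 50, 500
q, r = 0.5, 1.0                      # process / observation noise std, assumed

def h(x):                            # mildly nonlinear observation function
    return x + 0.05 * x ** 3

xs, ys, x = [], [], 0.0
for _ in range(T):                   # simulate trajectory and observations
    x += rng.normal(0, q)
    xs.append(x)
    ys.append(h(x) + rng.normal(0, r))

particles = rng.normal(0, 1, N)
est = []
for y in ys:
    particles = particles + rng.normal(0, q, N)              # propagate
    w = np.exp(-0.5 * ((y - h(particles)) / r) ** 2) + 1e-300
    w /= w.sum()                                             # normalize weights
    est.append(float(np.sum(w * particles)))                 # posterior mean
    particles = particles[rng.choice(N, N, p=w)]             # resample

rmse = float(np.sqrt(np.mean((np.array(est) - np.array(xs)) ** 2)))
print(f"tracking RMSE: {rmse:.2f}")
```

The weighting step is where every observation is consumed, which is why the paper's scheme requires all sensor observations to reach every processing node.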

Alvarado, Alex; Brannstrom, Fredrik; Agrell, Erik; Koch, Tobias High-SNR Asymptotics of Mutual Information for Discrete Constellations With Applications to BICM (Journal Article) IEEE Transactions on Information Theory, 60 (2), pp. 1061–1076, 2014, ISSN: 0018-9448. (Abstract | Links | BibTeX | Tags: additive white Gaussian noise channel, Anti-Gray code, bit-interleaved coded modulation, discrete constellations, Entropy, Gray code, high-SNR asymptotics, IP networks, Labeling, minimum-mean square error, Modulation, Mutual information, Signal to noise ratio, Vectors) @article{Alvarado2014, title = {High-SNR Asymptotics of Mutual Information for Discrete Constellations With Applications to BICM}, author = {Alvarado, Alex and Brannstrom, Fredrik and Agrell, Erik and Koch, Tobias}, url = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6671479 http://www.tsc.uc3m.es/~koch/files/IEEE_TIT_60%282%29.pdf}, issn = {0018-9448}, year = {2014}, date = {2014-01-01}, journal = {IEEE Transactions on Information Theory}, volume = {60}, number = {2}, pages = {1061--1076}, abstract = {Asymptotic expressions of the mutual information between any discrete input and the corresponding output of the scalar additive white Gaussian noise channel are presented in the limit as the signal-to-noise ratio (SNR) tends to infinity. Asymptotic expressions of the symbol-error probability (SEP) and the minimum mean-square error (MMSE) achieved by estimating the channel input given the channel output are also developed. It is shown that for any input distribution, the conditional entropy of the channel input given the output, MMSE, and SEP have an asymptotic behavior proportional to the Gaussian Q-function. The argument of the Q-function depends only on the minimum Euclidean distance (MED) of the constellation and the SNR, and the proportionality constants are functions of the MED and the probabilities of the pairs of constellation points at MED. 
The developed expressions are then generalized to study the high-SNR behavior of the generalized mutual information (GMI) for bit-interleaved coded modulation (BICM). By means of these asymptotic expressions, the long-standing conjecture that Gray codes are the binary labelings that maximize the BICM-GMI at high SNR is proven. It is further shown that for any equally spaced constellation whose size is a power of two, there always exists an anti-Gray code giving the lowest BICM-GMI at high SNR.}, keywords = {additive white Gaussian noise channel, Anti-Gray code, bit-interleaved coded modulation, discrete constellations, Entropy, Gray code, high-SNR asymptotics, IP networks, Labeling, minimum-mean square error, Modulation, Mutual information, Signal to noise ratio, Vectors}, pubstate = {published}, tppubtype = {article} } Asymptotic expressions of the mutual information between any discrete input and the corresponding output of the scalar additive white Gaussian noise channel are presented in the limit as the signal-to-noise ratio (SNR) tends to infinity. Asymptotic expressions of the symbol-error probability (SEP) and the minimum mean-square error (MMSE) achieved by estimating the channel input given the channel output are also developed. It is shown that for any input distribution, the conditional entropy of the channel input given the output, MMSE, and SEP have an asymptotic behavior proportional to the Gaussian Q-function. The argument of the Q-function depends only on the minimum Euclidean distance (MED) of the constellation and the SNR, and the proportionality constants are functions of the MED and the probabilities of the pairs of constellation points at MED. The developed expressions are then generalized to study the high-SNR behavior of the generalized mutual information (GMI) for bit-interleaved coded modulation (BICM). 
By means of these asymptotic expressions, the long-standing conjecture that Gray codes are the binary labelings that maximize the BICM-GMI at high SNR is proven. It is further shown that for any equally spaced constellation whose size is a power of two, there always exists an anti-Gray code giving the lowest BICM-GMI at high SNR. |
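The Gaussian Q-function that governs these asymptotics, and the schematic shape of the result for the symbol-error probability, can be written as follows (the constant c depends on the constellation through the number and probabilities of minimum-distance pairs; see the paper for the exact expressions and the analogous statements for the MMSE and the conditional entropy):

```latex
% Schematic high-SNR behavior; d is the minimum Euclidean distance of the
% (unit-energy) constellation and c a constellation-dependent constant.
\[
  \mathrm{SEP}(\mathsf{snr}) \sim c \, Q\!\left(\frac{d}{2}\sqrt{\mathsf{snr}}\right),
  \qquad
  Q(x) = \int_x^{\infty} \frac{1}{\sqrt{2\pi}} \, e^{-t^2/2} \, dt .
\]
```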

Martin-Fernandez, L.; Gilioli, G.; Lanzarone, E.; Miguez, Joaquin; Pasquali, S.; Ruggeri, F.; Ruiz, D. P. A Rao-Blackwellized Particle Filter for Joint Parameter Estimation and Biomass Tracking in a Stochastic Predator-Prey System (Journal Article) Mathematical Biosciences and Engineering, 11 (3), pp. 573–597, 2014. (Abstract | Links | BibTeX | Tags: ) @article{Martin-Fernandez2014, title = {A Rao-Blackwellized Particle Filter for Joint Parameter Estimation and Biomass Tracking in a Stochastic Predator-Prey System}, author = {Martin-Fernandez, L. and Gilioli, G. and Lanzarone, E. and Miguez, Joaquin and Pasquali, S. and Ruggeri, F. and Ruiz, D. P.}, url = {http://www.tsc.uc3m.es/~jmiguez/papers/P42_2014_A Rao-Blackwellized Particle Filter for Joint Parameter Estimation and Biomass Tracking in a Stochastic Predator-Prey System.pdf https://www.aimsciences.org/journals/pdfs.jsp?paperID=9557&mode=full http://gts.tsc.uc3m.es/wp-content/uploads/2014/01/LMF_et_al_MBE13_A-RAO-BLACKWELLIZED-PARTICLE-FILTER_-jma.pdf https://www.aimsciences.org/journals/displayArticlesnew.jsp?paperID=9557}, year = {2014}, date = {2014-01-01}, journal = {Mathematical Biosciences and Engineering}, volume = {11}, number = {3}, pages = {573--597}, abstract = {Functional response estimation and population tracking in predator-prey systems are critical problems in ecology. In this paper we consider a stochastic predator-prey system with a Lotka-Volterra functional response and propose a particle filtering method for: (a) estimating the behavioral parameter representing the rate of effective search per predator in the functional response and (b) forecasting the population biomass using field data. In particular, the proposed technique combines a sequential Monte Carlo sampling scheme for tracking the time-varying biomass with the analytical integration of the unknown behavioral parameter.
In order to assess the performance of the method, we show results for both synthetic and observed data collected in an acarine predator-prey system, namely the pest mite Tetranychus urticae and the predatory mite Phytoseiulus persimilis.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Functional response estimation and population tracking in predator-prey systems are critical problems in ecology. In this paper we consider a stochastic predator-prey system with a Lotka-Volterra functional response and propose a particle filtering method for: (a) estimating the behavioral parameter representing the rate of effective search per predator in the functional response and (b) forecasting the population biomass using field data. In particular, the proposed technique combines a sequential Monte Carlo sampling scheme for tracking the time-varying biomass with the analytical integration of the unknown behavioral parameter. In order to assess the performance of the method, we show results for both synthetic and observed data collected in an acarine predator-prey system, namely the pest mite Tetranychus urticae and the predatory mite Phytoseiulus persimilis. |
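For context, a stochastic predator-prey model with a Lotka-Volterra (mass-action) functional response is commonly written as below; this is a generic textbook form, and the paper's exact parameterization and noise structure may differ:

```latex
% x_t: prey biomass, y_t: predator biomass; beta is the rate of effective
% search per predator (the behavioral parameter estimated in the paper).
\begin{aligned}
  dx_t &= \left(\alpha x_t - \beta x_t y_t\right) dt + \sigma_x \, dW_t^{(1)},\\
  dy_t &= \left(\gamma \beta x_t y_t - \delta y_t\right) dt + \sigma_y \, dW_t^{(2)}.
\end{aligned}
```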

Piñeiro-Ave, José; Blanco-Velasco, Manuel; Cruz-Roldán, Fernando; Artés-Rodríguez, Antonio Target Detection for Low Cost Uncooled MWIR Cameras Based on Empirical Mode Decomposition (Journal Article) Infrared Physics & Technology, 63 , pp. 222–231, 2014, ISSN: 13504495. (Abstract | Links | BibTeX | Tags: Background subtraction, Change detection, Denoising, Drift, Empirical Mode Decomposition (EMD), Intrinsic Mode Function (IMF)) @article{Pineiro-Ave2014, title = {Target Detection for Low Cost Uncooled MWIR Cameras Based on Empirical Mode Decomposition}, author = {Piñeiro-Ave, José and Blanco-Velasco, Manuel and Cruz-Roldán, Fernando and Artés-Rodríguez, Antonio}, url = {http://www.tsc.uc3m.es/~antonio/papers/P49_2014_Target Detection for Low Cost Uncooled MWIR Cameras Based on Empirical Mode Decomposition.pdf http://www.sciencedirect.com/science/article/pii/S1350449514000085}, issn = {13504495}, year = {2014}, date = {2014-01-01}, journal = {Infrared Physics & Technology}, volume = {63}, pages = {222--231}, abstract = {In this work, a novel method for detecting low intensity fast moving objects with low cost Medium Wavelength Infrared (MWIR) cameras is proposed. The method is based on background subtraction in a video sequence obtained with a low density Focal Plane Array (FPA) of the newly available uncooled lead selenide (PbSe) detectors. Thermal instability along with the lack of specific electronics and mechanical devices for canceling the effect of distortion make background image identification very difficult. As a result, the identification of targets is performed in low signal to noise ratio (SNR) conditions, which may considerably restrict the sensitivity of the detection algorithm. These problems are addressed in this work by means of a new technique based on the empirical mode decomposition, which accomplishes drift estimation and target detection. 
Given that background estimation is the most important stage for detecting, a previous denoising step enabling a better drift estimation is designed. Comparisons are conducted against a denoising technique based on the wavelet transform and also with traditional drift estimation methods such as Kalman filtering and running average. The results reported by the simulations show that the proposed scheme has superior performance.}, keywords = {Background subtraction, Change detection, Denoising, Drift, Empirical Mode Decomposition (EMD), Intrinsic Mode Function (IMF)}, pubstate = {published}, tppubtype = {article} } In this work, a novel method for detecting low intensity fast moving objects with low cost Medium Wavelength Infrared (MWIR) cameras is proposed. The method is based on background subtraction in a video sequence obtained with a low density Focal Plane Array (FPA) of the newly available uncooled lead selenide (PbSe) detectors. Thermal instability along with the lack of specific electronics and mechanical devices for canceling the effect of distortion make background image identification very difficult. As a result, the identification of targets is performed in low signal to noise ratio (SNR) conditions, which may considerably restrict the sensitivity of the detection algorithm. These problems are addressed in this work by means of a new technique based on the empirical mode decomposition, which accomplishes drift estimation and target detection. Given that background estimation is the most important stage for detecting, a previous denoising step enabling a better drift estimation is designed. Comparisons are conducted against a denoising technique based on the wavelet transform and also with traditional drift estimation methods such as Kalman filtering and running average. The results reported by the simulations show that the proposed scheme has superior performance. |
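The running-average drift estimator that the paper uses as a traditional baseline can be sketched on synthetic data as follows (the EMD-based method itself is not reproduced, and all the constants are invented for illustration):

```python
import numpy as np
rng = np.random.default_rng(3)

T = 200
drift = np.cumsum(rng.normal(0, 0.05, T))          # slow sensor drift
frames = drift + rng.normal(0, 0.5, T)             # noisy pixel intensity
frames[120] += 8.0                                 # a brief bright target

alpha = 0.05                                       # forgetting factor, assumed
bg, resid = frames[0], []
for f in frames:
    resid.append(f - bg)                           # background-subtracted value
    bg = (1 - alpha) * bg + alpha * f              # running-average update

detections = [t for t, r in enumerate(resid) if abs(r) > 4.0]
print("detections at frames:", detections)
```

A slow running average follows the drift but not the fast target, so the target survives background subtraction; EMD-based drift estimation aims to do this more accurately under the low-SNR conditions the abstract describes.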

Koblents, Eugenia; Miguez, Joaquin A Population Monte Carlo Scheme with Transformed Weights and Its Application to Stochastic Kinetic Models (Journal Article) Statistics and Computing, (to appear), 2014, ISSN: 0960-3174. (Abstract | Links | BibTeX | Tags: degeneracy of importance weights, Importance sampling, population Monte Carlo, Stochastic kinetic models) @article{Koblents2014, title = {A Population Monte Carlo Scheme with Transformed Weights and Its Application to Stochastic Kinetic Models}, author = {Koblents, Eugenia and Miguez, Joaquin}, url = {http://link.springer.com/10.1007/s11222-013-9440-2 http://gts.tsc.uc3m.es/wp-content/uploads/2014/01/NPMC_A-population-Monte-Carlo-scheme-with-transformed_jma.pdf}, issn = {0960-3174}, year = {2014}, date = {2014-01-01}, journal = {Statistics and Computing}, number = {(to appear)}, abstract = {This paper addresses the Monte Carlo approximation of posterior probability distributions. In particular, we consider the population Monte Carlo (PMC) technique, which is based on an iterative importance sampling (IS) approach. An important drawback of this methodology is the degeneracy of the importance weights (IWs) when the dimension of either the observations or the variables of interest is high. To alleviate this difficulty, we propose a new method that performs a nonlinear transformation of the IWs. This operation reduces the weight variation, hence it avoids degeneracy and increases the efficiency of the IS scheme, especially when drawing from proposal functions which are poorly adapted to the true posterior. For the sake of illustration, we have applied the proposed algorithm to the estimation of the parameters of a Gaussian mixture model. This is a simple problem that enables us to discuss the main features of the proposed technique. As a practical application, we have also considered the challenging problem of estimating the rate parameters of a stochastic kinetic model (SKM).
SKMs are multivariate systems that model molecular interactions in biological and chemical problems. We introduce a particularization of the proposed algorithm to SKMs and present numerical results.}, keywords = {degeneracy of importance weights, Importance sampling, population Monte Carlo, Stochastic kinetic models}, pubstate = {published}, tppubtype = {article} } This paper addresses the Monte Carlo approximation of posterior probability distributions. In particular, we consider the population Monte Carlo (PMC) technique, which is based on an iterative importance sampling (IS) approach. An important drawback of this methodology is the degeneracy of the importance weights (IWs) when the dimension of either the observations or the variables of interest is high. To alleviate this difficulty, we propose a new method that performs a nonlinear transformation of the IWs. This operation reduces the weight variation, hence it avoids degeneracy and increases the efficiency of the IS scheme, especially when drawing from proposal functions which are poorly adapted to the true posterior. For the sake of illustration, we have applied the proposed algorithm to the estimation of the parameters of a Gaussian mixture model. This is a simple problem that enables us to discuss the main features of the proposed technique. As a practical application, we have also considered the challenging problem of estimating the rate parameters of a stochastic kinetic model (SKM). SKMs are multivariate systems that model molecular interactions in biological and chemical problems. We introduce a particularization of the proposed algorithm to SKMs and present numerical results. |
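The effect of a nonlinear transformation of the importance weights on weight degeneracy can be illustrated with a simple clipping rule that caps the largest unnormalized weights (one transformation in the spirit of the paper; the exact scheme and its theory differ). Here the proposal is deliberately mismatched to the target:

```python
import numpy as np
rng = np.random.default_rng(4)

# Importance sampling of N(3,1) using a poorly adapted N(0,1) proposal.
target_mu, proposal_mu = 3.0, 0.0
x = rng.normal(proposal_mu, 1.0, 1000)
logw = -0.5 * (x - target_mu) ** 2 + 0.5 * (x - proposal_mu) ** 2
w = np.exp(logw - logw.max())            # unnormalized importance weights

K = 100
w_clip = np.minimum(w, np.sort(w)[-K])   # flatten the K largest weights

def ess(w):
    w = w / w.sum()
    return 1.0 / np.sum(w ** 2)          # effective sample size

print(f"ESS plain: {ess(w):.1f}, ESS clipped: {ess(w_clip):.1f}")
```

Clipping trades a little bias for a much larger effective sample size, which is the mechanism by which the transformed-weight scheme survives poorly adapted proposals.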

Crisan, Dan; Miguez, Joaquin Particle-Kernel Estimation of the Filter Density in State-Space Models (Journal Article) Bernoulli, (to appear), 2014. (Abstract | Links | BibTeX | Tags: density estimation, Markov systems, Models, Sequential Monte Carlo, state-space, stochastic filtering) @article{Crisan2014, title = {Particle-Kernel Estimation of the Filter Density in State-Space Models}, author = {Crisan, Dan and Miguez, Joaquin}, url = {http://www.tsc.uc3m.es/~jmiguez/papers/P43_2014_Particle-Kernel Estimation of the Filter Density in State-Space Models.pdf http://www.bernoulli-society.org/index.php/publications/bernoulli-journal/bernoulli-journal-papers}, year = {2014}, date = {2014-01-01}, journal = {Bernoulli}, volume = {(to appear)}, abstract = {Sequential Monte Carlo (SMC) methods, also known as particle filters, are simulation-based recursive algorithms for the approximation of the a posteriori probability measures generated by state-space dynamical models. At any given time t, an SMC method produces a set of samples over the state space of the system of interest (often termed “particles”) that is used to build a discrete and random approximation of the posterior probability distribution of the state variables, conditional on a sequence of available observations. One potential application of the methodology is the estimation of the densities associated to the sequence of a posteriori distributions. While practitioners have rather freely applied such density approximations in the past, the issue has received less attention from a theoretical perspective. In this paper, we address the problem of constructing kernel-based estimates of the posterior probability density function and its derivatives, and obtain asymptotic convergence results for the estimation errors.
In particular, we find convergence rates for the approximation errors that hold uniformly on the state space and guarantee that the error vanishes almost surely as the number of particles in the filter grows. Based on this uniform convergence result, we first show how to build continuous measures that converge almost surely (with known rate) toward the posterior measure and then address a few applications. The latter include maximum a posteriori estimation of the system state using the approximate derivatives of the posterior density and the approximation of functionals of it, e.g., Shannon’s entropy.}, keywords = {density estimation, Markov systems, Models, Sequential Monte Carlo, state-space, stochastic filtering}, pubstate = {published}, tppubtype = {article} } Sequential Monte Carlo (SMC) methods, also known as particle filters, are simulation-based recursive algorithms for the approximation of the a posteriori probability measures generated by state-space dynamical models. At any given time t, an SMC method produces a set of samples over the state space of the system of interest (often termed “particles”) that is used to build a discrete and random approximation of the posterior probability distribution of the state variables, conditional on a sequence of available observations. One potential application of the methodology is the estimation of the densities associated to the sequence of a posteriori distributions. While practitioners have rather freely applied such density approximations in the past, the issue has received less attention from a theoretical perspective. In this paper, we address the problem of constructing kernel-based estimates of the posterior probability density function and its derivatives, and obtain asymptotic convergence results for the estimation errors.
In particular, we find convergence rates for the approximation errors that hold uniformly on the state space and guarantee that the error vanishes almost surely as the number of particles in the filter grows. Based on this uniform convergence result, we first show how to build continuous measures that converge almost surely (with known rate) toward the posterior measure and then address a few applications. The latter include maximum a posteriori estimation of the system state using the approximate derivatives of the posterior density and the approximation of functionals of it, e.g., Shannon’s entropy. |
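A particle-kernel density estimate of the kind studied here amounts to placing a kernel on each particle of the filter. The particle set below is a synthetic stand-in and the bandwidth is an arbitrary choice; the paper's bandwidth selection and convergence rates are not reproduced:

```python
import numpy as np
rng = np.random.default_rng(5)

particles = rng.normal(2.0, 1.0, 2000)       # stand-in for filter particles
weights = np.full(2000, 1 / 2000)            # equal weights after resampling

h = 0.3                                      # kernel bandwidth, assumed
def density(z):
    # Weighted Gaussian kernel estimate of the filter density at z.
    return np.sum(weights * np.exp(-0.5 * ((z - particles) / h) ** 2)) \
           / (h * np.sqrt(2 * np.pi))

grid = np.linspace(-2, 6, 81)                # grid step 0.1
vals = [float(density(z)) for z in grid]
mode = float(grid[int(np.argmax(vals))])     # approximate MAP state estimate
print(f"estimated mode: {mode:.2f}")
```

Taking the argmax of such a continuous estimate is exactly the kind of maximum a posteriori application the abstract mentions.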

Ruiz, Francisco; Valera, Isabel; Blanco, Carlos; Perez-Cruz, Fernando Bayesian Nonparametric Comorbidity Analysis of Psychiatric Disorders (Journal Article) Journal of Machine Learning Research, 15 (1), pp. 1215–1248, 2014. (Abstract | Links | BibTeX | Tags: ALCIT, Bayesian Non-parametrics, categorical observations, Indian Buffet Process, Laplace approximation, multinomial-logit function, variational inference) @article{Ruiz2014, title = {Bayesian Nonparametric Comorbidity Analysis of Psychiatric Disorders}, author = {Ruiz, Francisco J. R. and Valera, Isabel and Blanco, Carlos and Perez-Cruz, Fernando}, url = {http://jmlr.org/papers/volume15/ruiz14a/ruiz14a.pdf http://arxiv.org/abs/1401.7620}, year = {2014}, date = {2014-01-01}, journal = {Journal of Machine Learning Research}, volume = {15}, number = {1}, pages = {1215--1248}, abstract = {The analysis of comorbidity is an open and complex research field in the branch of psychiatry, where clinical experience and several studies suggest that the relation among the psychiatric disorders may have etiological and treatment implications. In this paper, we are interested in applying latent feature modeling to find the latent structure behind the psychiatric disorders that can help to examine and explain the relationships among them. To this end, we use the large amount of information collected in the National Epidemiologic Survey on Alcohol and Related Conditions (NESARC) database and propose to model these data using a nonparametric latent model based on the Indian Buffet Process (IBP). Due to the discrete nature of the data, we first need to adapt the observation model for discrete random variables. We propose a generative model in which the observations are drawn from a multinomial-logit distribution given the IBP matrix. The implementation of an efficient Gibbs sampler is accomplished using the Laplace approximation, which allows integrating out the weighting factors of the multinomial-logit likelihood model.
We also provide a variational inference algorithm for this model, which provides a complementary (and less expensive in terms of computational complexity) alternative to the Gibbs sampler allowing us to deal with a larger amount of data. Finally, we use the model to analyze comorbidity among the psychiatric disorders diagnosed by experts from the NESARC database.}, keywords = {ALCIT, Bayesian Non-parametrics, categorical observations, Indian Buffet Process, Laplace approximation, multinomial-logit function, variational inference}, pubstate = {published}, tppubtype = {article} } The analysis of comorbidity is an open and complex research field in the branch of psychiatry, where clinical experience and several studies suggest that the relation among the psychiatric disorders may have etiological and treatment implications. In this paper, we are interested in applying latent feature modeling to find the latent structure behind the psychiatric disorders that can help to examine and explain the relationships among them. To this end, we use the large amount of information collected in the National Epidemiologic Survey on Alcohol and Related Conditions (NESARC) database and propose to model these data using a nonparametric latent model based on the Indian Buffet Process (IBP). Due to the discrete nature of the data, we first need to adapt the observation model for discrete random variables. We propose a generative model in which the observations are drawn from a multinomial-logit distribution given the IBP matrix. The implementation of an efficient Gibbs sampler is accomplished using the Laplace approximation, which allows integrating out the weighting factors of the multinomial-logit likelihood model. We also provide a variational inference algorithm for this model, which provides a complementary (and less expensive in terms of computational complexity) alternative to the Gibbs sampler allowing us to deal with a larger amount of data.
Finally, we use the model to analyze comorbidity among the psychiatric disorders diagnosed by experts from the NESARC database. |
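The entry above builds on the Indian Buffet Process prior over binary feature matrices. As a point of reference for readers unfamiliar with the IBP, here is a minimal sketch of its standard "restaurant" generative process (this is the generic prior only, not the paper's multinomial-logit observation model or its Gibbs/variational inference):

```python
import numpy as np

def sample_ibp(num_objects, alpha, rng):
    """Sample a binary feature matrix Z from the Indian Buffet Process prior.

    Customer n takes an existing dish k with probability m_k / n (m_k =
    number of previous customers who took it), then samples Poisson(alpha/n)
    brand-new dishes.
    """
    dish_counts = []      # dish_counts[k] = customers who took dish k so far
    rows = []
    for n in range(1, num_objects + 1):
        row = [1 if rng.random() < m / n else 0 for m in dish_counts]
        new_dishes = rng.poisson(alpha / n)
        row.extend([1] * new_dishes)
        dish_counts = [m + r for m, r in zip(dish_counts, row)] + [1] * new_dishes
        rows.append(row)
    K = len(dish_counts)
    Z = np.zeros((num_objects, K), dtype=int)
    for i, row in enumerate(rows):
        Z[i, :len(row)] = row   # earlier rows are shorter; left-aligned
    return Z

# small demo: 10 "patients", concentration alpha = 2
Z = sample_ibp(num_objects=10, alpha=2.0, rng=np.random.default_rng(0))
```

The number of columns K is data-dependent, which is what makes the model nonparametric.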

Gil Taborda, Camilo; Guo, Dongning; Perez-Cruz, Fernando Information--Estimation Relationships over Binomial and Negative Binomial Models (Journal Article) IEEE Transactions on Information Theory, to appear , pp. 1–1, 2014, ISSN: 0018-9448. (Abstract | Links | BibTeX | Tags: ALCIT) @article{GilTaborda2014, title = {Information--Estimation Relationships over Binomial and Negative Binomial Models}, author = {Gil Taborda, Camilo and Guo, Dongning and Perez-Cruz, Fernando}, url = {http://ieeexplore.ieee.org/articleDetails.jsp?arnumber=6746122}, issn = {0018-9448}, year = {2014}, date = {2014-01-01}, journal = {IEEE Transactions on Information Theory}, volume = {to appear}, pages = {1--1}, publisher = {IEEE}, abstract = {In recent years, a number of new connections between information measures and estimation have been found under various models, including, predominantly, Gaussian and Poisson models. This paper develops similar results for the binomial and negative binomial models. In particular, it is shown that the derivative of the relative entropy and the derivative of the mutual information for the binomial and negative binomial models can be expressed through the expectation of closed-form expressions that have conditional estimates as the main argument. Under mild conditions, those derivatives take the form of an expected Bregman divergence}, keywords = {ALCIT}, pubstate = {published}, tppubtype = {article} } In recent years, a number of new connections between information measures and estimation have been found under various models, including, predominantly, Gaussian and Poisson models. This paper develops similar results for the binomial and negative binomial models. In particular, it is shown that the derivative of the relative entropy and the derivative of the mutual information for the binomial and negative binomial models can be expressed through the expectation of closed-form expressions that have conditional estimates as the main argument. 
Under mild conditions, those derivatives take the form of an expected Bregman divergence. |
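For context on the "expected Bregman divergence" the abstract refers to, the general definition is below; the specific convex function for the binomial and negative binomial cases is derived in the paper and not reproduced here.

```latex
% Bregman divergence induced by a differentiable convex function \phi:
\[
  D_\phi(x, y) \;=\; \phi(x) - \phi(y) - \phi'(y)\,(x - y).
\]
% For example, \phi(x) = x \log x gives the divergence underlying
% relative-entropy-type expressions:
\[
  D_\phi(x, y) \;=\; x \log\frac{x}{y} - (x - y).
\]
```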

O'Mahony, Niamh; Florentino-Liaño, Blanca; Carballo, Juan; Baca-García, Enrique; Artés-Rodríguez, Antonio Objective diagnosis of ADHD using IMUs (Journal Article) Medical engineering & physics, 36 (7), pp. 922–6, 2014, ISSN: 1873-4030. (Abstract | Links | BibTeX | Tags: Attention deficit/hyperactivity disorder, Classification, Inertial sensors, Machine learning, Objective diagnosis) @article{O'Mahony2014, title = {Objective diagnosis of ADHD using IMUs}, author = {O'Mahony, Niamh and Florentino-Liaño, Blanca and Carballo, Juan J and Baca-García, Enrique and Artés-Rodríguez, Antonio}, url = {http://www.tsc.uc3m.es/~antonio/papers/P50_2014_Objective Diagnosis of ADHD Using IMUs.pdf http://www.sciencedirect.com/science/article/pii/S1350453314000459}, issn = {1873-4030}, year = {2014}, date = {2014-01-01}, journal = {Medical engineering & physics}, volume = {36}, number = {7}, pages = {922--6}, abstract = {This work proposes the use of miniature wireless inertial sensors as an objective tool for the diagnosis of ADHD. The sensors, consisting of both accelerometers and gyroscopes to measure linear and rotational movement, respectively, are used to characterize the motion of subjects in the setting of a psychiatric consultancy. A support vector machine is used to classify a group of subjects as either ADHD or non-ADHD and a classification accuracy of greater than 95% has been achieved. Separate analyses of the motion data recorded during various activities throughout the visit to the psychiatric consultancy show that motion recorded during a continuous performance test (a forced concentration task) provides a better classification performance than that recorded during \"free time\".}, keywords = {Attention deficit/hyperactivity disorder, Classification, Inertial sensors, Machine learning, Objective diagnosis}, pubstate = {published}, tppubtype = {article} } This work proposes the use of miniature wireless inertial sensors as an objective tool for the diagnosis of ADHD. 
The sensors, consisting of both accelerometers and gyroscopes to measure linear and rotational movement, respectively, are used to characterize the motion of subjects in the setting of a psychiatric consultancy. A support vector machine is used to classify a group of subjects as either ADHD or non-ADHD and a classification accuracy of greater than 95% has been achieved. Separate analyses of the motion data recorded during various activities throughout the visit to the psychiatric consultancy show that motion recorded during a continuous performance test (a forced concentration task) provides a better classification performance than that recorded during "free time". |
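Since the abstract above describes feeding inertial-sensor motion summaries to an SVM, here is an illustrative sketch of turning raw accelerometer/gyroscope recordings into a feature vector for such a classifier. The specific features (magnitude mean/std) are assumptions for illustration, not the paper's actual feature set:

```python
import numpy as np

def motion_features(accel, gyro):
    """Summarize an (N x 3) accelerometer and gyroscope recording into a
    small feature vector for a downstream classifier (e.g. an SVM, as in
    the paper). Features here are illustrative only.
    """
    mag_a = np.linalg.norm(accel, axis=1)   # linear movement intensity
    mag_g = np.linalg.norm(gyro, axis=1)    # rotational movement intensity
    return np.array([
        mag_a.mean(), mag_a.std(),          # activity level / variability
        mag_g.mean(), mag_g.std(),
    ])

# demo on synthetic constant data (no movement variability)
feats = motion_features(np.ones((200, 3)), np.zeros((200, 3)))
```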

Montoya-Martinez, Jair; Artés-Rodríguez, Antonio; Pontil, Massimiliano; Kai Hansen, Lars A Regularized Matrix Factorization Approach to Induce Structured Sparse-Low Rank Solutions in the EEG Inverse Problem (Journal Article) EURASIP Journal on Advances in Signal Processing, 2014 (1), pp. 97, 2014, ISSN: 1687-6180. (Abstract | Links | BibTeX | Tags: Low rank, Matrix factorization, Nonsmooth-nonconvex optimization, Regularization, Structured sparsity) @article{Montoya-Martinez2014b, title = {A Regularized Matrix Factorization Approach to Induce Structured Sparse-Low Rank Solutions in the EEG Inverse Problem}, author = {Montoya-Martinez, Jair and Artés-Rodríguez, Antonio and Pontil, Massimiliano and Kai Hansen, Lars}, url = {http://www.tsc.uc3m.es/~antonio/papers/P48_2014_A Regularized Matrix Factorization Approach to Induce Structured Sparse-Low Rank Solutions in the EEG Inverse Problem.pdf http://asp.eurasipjournals.com/content/2014/1/97/abstract}, issn = {1687-6180}, year = {2014}, date = {2014-01-01}, journal = {EURASIP Journal on Advances in Signal Processing}, volume = {2014}, number = {1}, pages = {97}, publisher = {Springer}, abstract = {We consider the estimation of the Brain Electrical Sources (BES) matrix from noisy Electroencephalographic (EEG) measurements, commonly named as the EEG inverse problem. We propose a new method to induce neurophysiologically meaningful solutions, which takes into account the smoothness, structured sparsity and low rank of the BES matrix. The method is based on the factorization of the BES matrix as a product of a sparse coding matrix and a dense latent source matrix. The structured sparse-low rank structure is enforced by minimizing a regularized functional that includes the l21-norm of the coding matrix and the squared Frobenius norm of the latent source matrix. We develop an alternating optimization algorithm to solve the resulting nonsmooth-nonconvex minimization problem.
We analyze the convergence of the optimization procedure, and we compare, under different synthetic scenarios, the performance of our method with respect to the Group Lasso and Trace Norm regularizers when they are applied directly to the target matrix.}, keywords = {Low rank, Matrix factorization, Nonsmooth-nonconvex optimization, Regularization, Structured sparsity}, pubstate = {published}, tppubtype = {article} } We consider the estimation of the Brain Electrical Sources (BES) matrix from noisy Electroencephalographic (EEG) measurements, commonly named as the EEG inverse problem. We propose a new method to induce neurophysiologically meaningful solutions, which takes into account the smoothness, structured sparsity and low rank of the BES matrix. The method is based on the factorization of the BES matrix as a product of a sparse coding matrix and a dense latent source matrix. The structured sparse-low rank structure is enforced by minimizing a regularized functional that includes the l21-norm of the coding matrix and the squared Frobenius norm of the latent source matrix. We develop an alternating optimization algorithm to solve the resulting nonsmooth-nonconvex minimization problem. We analyze the convergence of the optimization procedure, and we compare, under different synthetic scenarios, the performance of our method with respect to the Group Lasso and Trace Norm regularizers when they are applied directly to the target matrix. |
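One building block of the l21-regularized alternating optimization described above is the proximal operator of the l21-norm (row-wise soft thresholding), which zeroes entire rows of the coding matrix and thereby induces the structured sparsity. A minimal sketch of that operator alone, not of the full algorithm:

```python
import numpy as np

def prox_l21(C, tau):
    """Proximal operator of tau * ||C||_{2,1} (sum of row l2-norms):
    shrink every row of C toward zero, dropping rows whose norm is
    below tau entirely.
    """
    norms = np.linalg.norm(C, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return scale * C

P = prox_l21(np.array([[3.0, 4.0], [0.1, 0.0]]), tau=1.0)
# the strong row is shrunk, the weak row is zeroed out completely
```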

Pastore, A.; Koch, Tobias; Fonollosa, Javier Rodriguez A Rate-Splitting Approach to Fading Channels With Imperfect Channel-State Information (Journal Article) IEEE Transactions on Information Theory, 60 (7), pp. 4266–4285, 2014, ISSN: 0018-9448. (Abstract | Links | BibTeX | Tags: channel capacity, COMONSENS, DEIPRO, Entropy, Fading, fading channels, flat fading, imperfect channel-state information, MobileNET, Mutual information, OTOSiS, Random variables, Receivers, Signal to noise ratio, Upper bound) @article{Pastore2014a, title = {A Rate-Splitting Approach to Fading Channels With Imperfect Channel-State Information}, author = {Pastore, A. and Koch, Tobias and Fonollosa, Javier Rodriguez}, url = {http://ieeexplore.ieee.org/articleDetails.jsp?arnumber=6832779 http://www.tsc.uc3m.es/~koch/files/IEEE_TIT_60(7).pdf http://arxiv.org/pdf/1301.6120.pdf}, issn = {0018-9448}, year = {2014}, date = {2014-01-01}, journal = {IEEE Transactions on Information Theory}, volume = {60}, number = {7}, pages = {4266--4285}, publisher = {IEEE}, abstract = {As shown by Médard, the capacity of fading channels with imperfect channel-state information can be lower-bounded by assuming a Gaussian channel input $X$ with power $P$ and by upper-bounding the conditional entropy $h(X|Y,\hat{H})$ by the entropy of a Gaussian random variable with variance equal to the linear minimum mean-square error in estimating $X$ from $(Y,\hat{H})$. We demonstrate that, using a rate-splitting approach, this lower bound can be sharpened: by expressing the Gaussian input $X$ as the sum of two independent Gaussian variables $X_1$ and $X_2$ and by applying Médard\'s lower bound first to bound the mutual information between $X_1$ and $Y$ while treating $X_2$ as noise, and by applying it a second time to the mutual information between $X_2$ and $Y$ while assuming $X_1$ to be known, we obtain a capacity lower bound that is strictly larger than Médard\'s lower bound. 
We then generalize this approach to an arbitrary number $L$ of layers, where $X$ is expressed as the sum of $L$ independent Gaussian random variables of respective variances $P_\ell$, $\ell = 1,\dotsc,L$, summing up to $P$. Among all such rate-splitting bounds, we determine the supremum over power allocations $P_\ell$ and total number of layers $L$. This supremum is achieved for $L \rightarrow \infty$ and gives rise to an analytically expressible capacity lower bound. For Gaussian fading, this novel bound is shown to converge to the Gaussian-input mutual information as the signal-to-noise ratio (SNR) grows, provided that the variance of the channel estimation error $H-\hat{H}$ tends to zero as the SNR tends to infinity.}, keywords = {channel capacity, COMONSENS, DEIPRO, Entropy, Fading, fading channels, flat fading, imperfect channel-state information, MobileNET, Mutual information, OTOSiS, Random variables, Receivers, Signal to noise ratio, Upper bound}, pubstate = {published}, tppubtype = {article} } As shown by Médard, the capacity of fading channels with imperfect channel-state information can be lower-bounded by assuming a Gaussian channel input $X$ with power $P$ and by upper-bounding the conditional entropy $h(X|Y,\hat{H})$ by the entropy of a Gaussian random variable with variance equal to the linear minimum mean-square error in estimating $X$ from $(Y,\hat{H})$. We demonstrate that, using a rate-splitting approach, this lower bound can be sharpened: by expressing the Gaussian input $X$ as the sum of two independent Gaussian variables $X_1$ and $X_2$ and by applying Médard's lower bound first to bound the mutual information between $X_1$ and $Y$ while treating $X_2$ as noise, and by applying it a second time to the mutual information between $X_2$ and $Y$ while assuming $X_1$ to be known, we obtain a capacity lower bound that is strictly larger than Médard's lower bound. 
We then generalize this approach to an arbitrary number $L$ of layers, where $X$ is expressed as the sum of $L$ independent Gaussian random variables of respective variances $P_\ell$, $\ell = 1,\dotsc,L$, summing up to $P$. Among all such rate-splitting bounds, we determine the supremum over power allocations $P_\ell$ and total number of layers $L$. This supremum is achieved for $L \rightarrow \infty$ and gives rise to an analytically expressible capacity lower bound. For Gaussian fading, this novel bound is shown to converge to the Gaussian-input mutual information as the signal-to-noise ratio (SNR) grows, provided that the variance of the channel estimation error $H-\hat{H}$ tends to zero as the SNR tends to infinity. |
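The rate-splitting decomposition in the abstract above rests on the chain rule for mutual information, which can be stated compactly:

```latex
% With X = X_1 + X_2, X_1 independent of X_2, and Y depending on
% (X_1, X_2) only through X, the mutual information splits as
\[
  I(X;Y) \;=\; I(X_1, X_2; Y) \;=\; I(X_1; Y) + I(X_2; Y \mid X_1),
\]
% so each summand can be lower-bounded separately (treating X_2 as
% noise in the first term, and X_1 as known in the second).
```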

Tauste Campo, A.; Vazquez-Vilar, Gonzalo; Guillen i Fabregas, Albert; Koch, Tobias; Martinez, A. A Derivation of the Source-Channel Error Exponent Using Nonidentical Product Distributions (Journal Article) IEEE Transactions on Information Theory, 60 (6), pp. 3209–3217, 2014, ISSN: 0018-9448. (Abstract | Links | BibTeX | Tags: ALCIT, Channel Coding, COMONSENS, DEIPRO, error probability, joint source-channel coding, Joints, MobileNET, Probability distribution, product distributions, random coding, Reliability, reliability function, sphere-packing bound, Upper bound) @article{TausteCampo2014, title = {A Derivation of the Source-Channel Error Exponent Using Nonidentical Product Distributions}, author = {Tauste Campo, A. and Vazquez-Vilar, Gonzalo and Guillen i Fabregas, Albert and Koch, Tobias and Martinez, A.}, url = {http://ieeexplore.ieee.org/articleDetails.jsp?arnumber=6803047 http://www.tsc.uc3m.es/~koch/files/IEEE_TIT_60(6).pdf}, issn = {0018-9448}, year = {2014}, date = {2014-01-01}, journal = {IEEE Transactions on Information Theory}, volume = {60}, number = {6}, pages = {3209--3217}, publisher = {IEEE}, abstract = {This paper studies the random-coding exponent of joint source-channel coding for a scheme where source messages are assigned to disjoint subsets (referred to as classes), and codewords are independently generated according to a distribution that depends on the class index of the source message.
For discrete memoryless systems, two optimally chosen classes and product distributions are found to be sufficient to attain the sphere-packing exponent in those cases where it is tight.}, keywords = {ALCIT, Channel Coding, COMONSENS, DEIPRO, error probability, joint source-channel coding, Joints, MobileNET, Probability distribution, product distributions, random coding, Reliability, reliability function, sphere-packing bound, Upper bound}, pubstate = {published}, tppubtype = {article} } This paper studies the random-coding exponent of joint source-channel coding for a scheme where source messages are assigned to disjoint subsets (referred to as classes), and codewords are independently generated according to a distribution that depends on the class index of the source message. For discrete memoryless systems, two optimally chosen classes and product distributions are found to be sufficient to attain the sphere-packing exponent in those cases where it is tight. |

Yang, W.; Durisi, Giuseppe; Koch, Tobias; Polyanskiy, Yury Quasi-Static Multiple-Antenna Fading Channels at Finite Blocklength (Journal Article) IEEE Transactions on Information Theory, 60 (7), pp. 4232–4265, 2014, ISSN: 0018-9448. (Abstract | Links | BibTeX | Tags: channel dispersion, Decoding, error probability, finite blocklength regime, MIMO, MIMO channel, outage probability, quasi-static fading channel, Rayleigh channels, Receivers, Transmitters) @article{Yang2014, title = {Quasi-Static Multiple-Antenna Fading Channels at Finite Blocklength}, author = {Yang, W. and Durisi, Giuseppe and Koch, Tobias and Polyanskiy, Yury}, url = {http://ieeexplore.ieee.org/articleDetails.jsp?arnumber=6802432 http://arxiv.org/abs/1311.2012}, issn = {0018-9448}, year = {2014}, date = {2014-01-01}, journal = {IEEE Transactions on Information Theory}, volume = {60}, number = {7}, pages = {4232--4265}, publisher = {IEEE}, abstract = {This paper investigates the maximal achievable rate for a given blocklength and error probability over quasi-static multiple-input multiple-output fading channels, with and without channel state information at the transmitter and/or the receiver. The principal finding is that outage capacity, despite being an asymptotic quantity, is a sharp proxy for the finite-blocklength fundamental limits of slow-fading channels. Specifically, the channel dispersion is shown to be zero regardless of whether the fading realizations are available at both transmitter and receiver, at only one of them, or at neither of them. These results follow from analytically tractable converse and achievability bounds. Numerical evaluation of these bounds verifies that zero dispersion may indeed imply fast convergence to the outage capacity as the blocklength increases.
In the example of a particular $1 \times 2$ single-input multiple-output Rician fading channel, the blocklength required to achieve 90% of capacity is about an order of magnitude smaller compared with the blocklength required for an AWGN channel with the same capacity. For this specific scenario, the coding/decoding schemes adopted in the LTE-Advanced standard are benchmarked against the finite-blocklength achievability and converse bounds.}, keywords = {channel dispersion, Decoding, error probability, finite blocklength regime, MIMO, MIMO channel, outage probability, quasi-static fading channel, Rayleigh channels, Receivers, Transmitters}, pubstate = {published}, tppubtype = {article} } This paper investigates the maximal achievable rate for a given blocklength and error probability over quasi-static multiple-input multiple-output fading channels, with and without channel state information at the transmitter and/or the receiver. The principal finding is that outage capacity, despite being an asymptotic quantity, is a sharp proxy for the finite-blocklength fundamental limits of slow-fading channels. Specifically, the channel dispersion is shown to be zero regardless of whether the fading realizations are available at both transmitter and receiver, at only one of them, or at neither of them. These results follow from analytically tractable converse and achievability bounds. Numerical evaluation of these bounds verifies that zero dispersion may indeed imply fast convergence to the outage capacity as the blocklength increases. In the example of a particular $1 \times 2$ single-input multiple-output Rician fading channel, the blocklength required to achieve 90% of capacity is about an order of magnitude smaller compared with the blocklength required for an AWGN channel with the same capacity. For this specific scenario, the coding/decoding schemes adopted in the LTE-Advanced standard are benchmarked against the finite-blocklength achievability and converse bounds. |
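The outage capacity that the abstract above uses as a proxy is easy to estimate numerically. A minimal Monte Carlo sketch for a 1 x 2 SIMO channel with Rayleigh fading (the paper's example uses Rician fading and, crucially, derives finite-blocklength bounds, neither of which is reproduced here):

```python
import numpy as np

def simo_outage_prob(rate_bits, snr, num_trials=100_000, seed=0):
    """Monte Carlo estimate of P[log2(1 + SNR * ||h||^2) < R] for a
    1 x 2 SIMO channel with i.i.d. Rayleigh fading. Illustrates the
    outage quantity only, not the paper's finite-blocklength bounds.
    """
    rng = np.random.default_rng(seed)
    # two i.i.d. unit-variance complex Gaussian channel gains per trial
    h = (rng.standard_normal((num_trials, 2)) +
         1j * rng.standard_normal((num_trials, 2))) / np.sqrt(2)
    gain = np.sum(np.abs(h) ** 2, axis=1)
    return float(np.mean(np.log2(1.0 + snr * gain) < rate_bits))

p1 = simo_outage_prob(rate_bits=1.0, snr=10.0)
p2 = simo_outage_prob(rate_bits=4.0, snr=10.0)
# outage probability grows with the target rate R
```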

Cespedes, Javier; Olmos, Pablo; Sanchez-Fernandez, Matilde; Perez-Cruz, Fernando Expectation Propagation Detection for High-order High-dimensional MIMO Systems (Journal Article) IEEE Transactions on Communications, PP (99), pp. 1–1, 2014, ISSN: 0090-6778. (Abstract | Links | BibTeX | Tags: Approximation methods, computational complexity, Detectors, MIMO, Signal to noise ratio, Vectors) @article{Cespedes2014, title = {Expectation Propagation Detection for High-order High-dimensional MIMO Systems}, author = {Cespedes, Javier and Olmos, Pablo M. and Sanchez-Fernandez, Matilde and Perez-Cruz, Fernando}, url = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6841617}, issn = {0090-6778}, year = {2014}, date = {2014-01-01}, journal = {IEEE Transactions on Communications}, volume = {PP}, number = {99}, pages = {1--1}, abstract = {Modern communications systems use multiple-input multiple-output (MIMO) and high-order QAM constellations for maximizing spectral efficiency. However, as the number of antennas and the order of the constellation grow, the design of efficient and low-complexity MIMO receivers poses big technical challenges. For example, symbol detection can no longer rely on maximum likelihood detection or sphere-decoding methods, as their complexity increases exponentially with the number of transmitters/receivers. In this paper, we propose a low-complexity high-accuracy MIMO symbol detector based on the Expectation Propagation (EP) algorithm. EP allows iteratively approximating, in polynomial time, the posterior distribution of the transmitted symbols. 
We also show that our EP MIMO detector outperforms classic and state-of-the-art solutions, reducing the symbol error rate at a reduced computational complexity.}, keywords = {Approximation methods, computational complexity, Detectors, MIMO, Signal to noise ratio, Vectors}, pubstate = {published}, tppubtype = {article} } Modern communications systems use multiple-input multiple-output (MIMO) and high-order QAM constellations for maximizing spectral efficiency. However, as the number of antennas and the order of the constellation grow, the design of efficient and low-complexity MIMO receivers poses big technical challenges. For example, symbol detection can no longer rely on maximum likelihood detection or sphere-decoding methods, as their complexity increases exponentially with the number of transmitters/receivers. In this paper, we propose a low-complexity high-accuracy MIMO symbol detector based on the Expectation Propagation (EP) algorithm. EP allows iteratively approximating, in polynomial time, the posterior distribution of the transmitted symbols. We also show that our EP MIMO detector outperforms classic and state-of-the-art solutions, reducing the symbol error rate at a reduced computational complexity. |
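For readers who want a concrete reference point for the MIMO detection problem above, here is the classic linear MMSE detector, one of the standard baselines an EP detector is compared against. This is emphatically not the paper's EP algorithm, just a real-valued sketch of the system model y = Hx + n and its simplest polynomial-time detector:

```python
import numpy as np

def lmmse_detect(y, H, noise_var, constellation):
    """Linear MMSE symbol detection for y = H x + n (real-valued sketch,
    unit-energy symbols assumed), followed by per-symbol quantization to
    the nearest constellation point.
    """
    Nt = H.shape[1]
    # regularized pseudo-inverse: (H^T H + sigma^2 I)^{-1} H^T
    G = np.linalg.solve(H.T @ H + noise_var * np.eye(Nt), H.T)
    x_soft = G @ y
    # hard decision: snap each soft estimate to the closest symbol
    return constellation[np.argmin(
        np.abs(x_soft[:, None] - constellation[None, :]), axis=1)]
```

EP improves on this by iteratively refining a Gaussian approximation of the discrete posterior rather than quantizing a single linear estimate.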

Read, Jesse; Bielza, Concha; Larranaga, Pedro Multi-Dimensional Classification with Super-Classes (Journal Article) IEEE Transactions on Knowledge and Data Engineering, 26 (7), pp. 1720–1733, 2014, ISSN: 1041-4347. (Abstract | Links | BibTeX | Tags: Accuracy, Bayes methods, Classification, COMPRHENSION, conditional dependence, Context, core goals, data instance, evaluation metrics, Integrated circuit modeling, modeling class dependencies, multi-dimensional, Multi-dimensional classification, multidimensional classification problem, multidimensional datasets, multidimensional learners, multilabel classification, multilabel research, multiple class variables, ordinary class, pattern classification, problem transformation, recently-popularized task, super classes, super-class partitions, tractable running time, Training, Vectors) @article{Read2014, title = {Multi-Dimensional Classification with Super-Classes}, author = {Read, Jesse and Bielza, Concha and Larranaga, Pedro}, url = {http://ieeexplore.ieee.org/articleDetails.jsp?arnumber=6648319}, issn = {1041-4347}, year = {2014}, date = {2014-01-01}, journal = {IEEE Transactions on Knowledge and Data Engineering}, volume = {26}, number = {7}, pages = {1720--1733}, publisher = {IEEE}, abstract = {The multi-dimensional classification problem is a generalisation of the recently-popularised task of multi-label classification, where each data instance is associated with multiple class variables. There has been relatively little research carried out specific to multi-dimensional classification and, although one of the core goals is similar (modelling dependencies among classes), there are important differences; namely a higher number of possible classifications. In this paper we present a method for multi-dimensional classification, drawing from the most relevant multi-label research, and combining it with important novel developments.
Using a fast method to model the conditional dependence between class variables, we form super-class partitions and use them to build multi-dimensional learners, learning each super-class as an ordinary class, and thus explicitly modelling class dependencies. Additionally, we present a mechanism to deal with the many class values inherent to super-classes, and thus make learning efficient. To investigate the effectiveness of this approach we carry out an empirical evaluation on a range of multi-dimensional datasets, under different evaluation metrics, and in comparison with high-performing existing multi-dimensional approaches from the literature. Analysis of results shows that our approach offers important performance gains over competing methods, while also exhibiting tractable running time.}, keywords = {Accuracy, Bayes methods, Classification, COMPRHENSION, conditional dependence, Context, core goals, data instance, evaluation metrics, Integrated circuit modeling, modeling class dependencies, multi-dimensional, Multi-dimensional classification, multidimensional classification problem, multidimensional datasets, multidimensional learners, multilabel classification, multilabel research, multiple class variables, ordinary class, pattern classification, problem transformation, recently-popularized task, super classes, super-class partitions, tractable running time, Training, Vectors}, pubstate = {published}, tppubtype = {article} } The multi-dimensional classification problem is a generalisation of the recently-popularised task of multi-label classification, where each data instance is associated with multiple class variables. There has been relatively little research carried out specific to multi-dimensional classification and, although one of the core goals is similar (modelling dependencies among classes), there are important differences; namely a higher number of possible classifications. 
In this paper we present a method for multi-dimensional classification, drawing from the most relevant multi-label research, and combining it with important novel developments. Using a fast method to model the conditional dependence between class variables, we form super-class partitions and use them to build multi-dimensional learners, learning each super-class as an ordinary class, and thus explicitly modelling class dependencies. Additionally, we present a mechanism to deal with the many class values inherent to super-classes, and thus make learning efficient. To investigate the effectiveness of this approach we carry out an empirical evaluation on a range of multi-dimensional datasets, under different evaluation metrics, and in comparison with high-performing existing multi-dimensional approaches from the literature. Analysis of results shows that our approach offers important performance gains over competing methods, while also exhibiting tractable running time. |
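The core problem transformation described above — grouping dependent class variables and learning each group's joint value as one ordinary class — can be sketched in a few lines. How the partition is chosen (from estimated conditional dependencies) is the paper's contribution and is not shown; here the partition is simply given:

```python
def to_super_classes(Y, partition):
    """Turn a multi-dimensional label matrix Y (rows = instances,
    columns = class variables) into one tuple-valued 'super-class' per
    group of columns, so each group can be learned as an ordinary class.
    `partition` is a list of column-index groups, e.g. [[0, 2], [1]].
    """
    return [
        [tuple(row[j] for j in group) for group in partition]
        for row in Y
    ]

Y = [[0, 1, 0],
     [0, 0, 0],
     [1, 1, 0]]
# class variables 0 and 2 assumed dependent, variable 1 on its own
S = to_super_classes(Y, [[0, 2], [1]])
```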

Salamanca, Luis; Murillo-Fuentes, Juan; Olmos, Pablo; Perez-Cruz, Fernando; Verdu, Sergio Near DT Bound Achieving Linear Codes in the Short Blocklength Regime (Journal Article) IEEE Communications Letters, PP (99), pp. 1–1, 2014, ISSN: 1089-7798. (Abstract | Links | BibTeX | Tags: binary erasure channel, Channel Coding, Complexity theory, finite blocklength regime, LDPC codes, Maximum likelihood decoding, ML decoding, parity check codes, random coding) @article{Salamanca2014, title = {Near DT Bound Achieving Linear Codes in the Short Blocklength Regime}, author = {Salamanca, Luis and Murillo-Fuentes, Juan and Olmos, Pablo M. and Perez-Cruz, Fernando and Verdu, Sergio}, url = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6957577}, issn = {1089-7798}, year = {2014}, date = {2014-01-01}, journal = {IEEE Communications Letters}, volume = {PP}, number = {99}, pages = {1--1}, abstract = {The dependence-testing (DT) bound is one of the strongest achievability bounds for the binary erasure channel (BEC) in the finite block length regime. In this paper, we show that maximum likelihood decoded regular low-density parity-check (LDPC) codes with at least 5 ones per column almost achieve the DT bound. Specifically, using quasi-regular LDPC codes with block length of 256 bits, we achieve a rate that is less than 1% away from the rate predicted by the DT bound for a word error rate below $10^{-3}$. The results also indicate that the maximum-likelihood solution is computationally feasible for decoding block codes over the BEC with several hundred bits.}, keywords = {binary erasure channel, Channel Coding, Complexity theory, finite blocklength regime, LDPC codes, Maximum likelihood decoding, ML decoding, parity check codes, random coding}, pubstate = {published}, tppubtype = {article} } The dependence-testing (DT) bound is one of the strongest achievability bounds for the binary erasure channel (BEC) in the finite block length regime.
In this paper, we show that maximum likelihood decoded regular low-density parity-check (LDPC) codes with at least 5 ones per column almost achieve the DT bound. Specifically, using quasi-regular LDPC codes with block length of 256 bits, we achieve a rate that is less than 1% away from the rate predicted by the DT bound for a word error rate below $10^{-3}$. The results also indicate that the maximum-likelihood solution is computationally feasible for decoding block codes over the BEC with several hundred bits. |
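ML decoding over the BEC is computationally feasible because it reduces to linear algebra over GF(2): the erased bits x_E must satisfy H_E x_E = H_K x_K (mod 2), where H_E and H_K are the parity-check columns at erased and known positions. A minimal sketch via Gaussian elimination, assuming the erasure pattern is uniquely resolvable (full column rank); the paper's quasi-regular code constructions are not reproduced:

```python
import numpy as np

def solve_erasures(H, received, erased):
    """Recover erased bits of a codeword over the BEC by solving
    H_E x_E = H_K x_K (mod 2) with Gaussian elimination over GF(2).
    `received` values at erased positions are ignored placeholders.
    """
    H = np.array(H) % 2
    known = [j for j in range(H.shape[1]) if j not in erased]
    A = H[:, erased].copy()
    b = (H[:, known] @ np.array([received[j] for j in known])) % 2
    M = np.concatenate([A, b[:, None]], axis=1)   # augmented system [A | b]
    rows, cols = M.shape[0], len(erased)
    r, pivots = 0, []
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i, c]), None)
        if pivot is None:
            continue
        M[[r, pivot]] = M[[pivot, r]]
        for i in range(rows):                      # reduce above and below
            if i != r and M[i, c]:
                M[i] = (M[i] + M[r]) % 2
        pivots.append(c)
        r += 1
    x = np.zeros(cols, dtype=int)
    for i, c in enumerate(pivots):
        x[c] = M[i, -1]
    out = list(received)
    for idx, j in enumerate(erased):
        out[j] = int(x[idx])
    return out

# demo with the (7,4) Hamming parity-check matrix; the all-ones word is a
# codeword, and bits 0, 1, 3 are erased (placeholder value 0)
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
recovered = solve_erasures(H, [0, 0, 1, 0, 1, 1, 1], erased=[0, 1, 3])
```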

Luengo, David; Monzón, Sandra; Trigano, Tom; Vía, Javier; Artés-Rodríguez, Antonio Blind Analysis of Atrial Fibrillation Electrograms: A Sparsity-Aware Formulation (Journal Article) 2014. (Abstract | Links | BibTeX | Tags: atrial fibrillation, biomedical signal processing) @article{Luengo2014, title = {Blind Analysis of Atrial Fibrillation Electrograms: A Sparsity-Aware Formulation}, author = {Luengo, David and Monzón, Sandra and Trigano, Tom and Vía, Javier and Artés-Rodríguez, Antonio}, url = {http://www.tsc.uc3m.es/~antonio/papers/P46_2014_Blind Analysis of Atrial Fibrillation Electrograms A Sparsity Aware Formulation.pdf http://iospress.metapress.com/content/e6313w6767u73462/}, year = {2014}, date = {2014-01-01}, urldate = {20/11/14}, booktitle = {Integrated Computer-Aided Engineering}, abstract = {The problem of blind sparse analysis of electrogram (EGM) signals under atrial fibrillation (AF) conditions is considered in this paper. A mathematical model for the observed signals that takes into account the multiple foci typically appearing inside the heart during AF is firstly introduced. Then, a reconstruction model based on a fixed dictionary is developed and several alternatives for choosing the dictionary are discussed. In order to obtain a sparse solution, which takes into account the biological restrictions of the problem at the same time, the paper proposes using a Least Absolute Shrinkage and Selection Operator (LASSO) regularization followed by a post-processing stage that removes low amplitude coefficients violating the refractory period characteristic of cardiac cells. Finally, spectral analysis is performed on the clean activation sequence obtained from the sparse learning stage in order to estimate the number of latent foci and their frequencies. 
Simulations on synthetic signals and applications on real data are provided to validate the proposed approach.}, keywords = {atrial fibrillation, biomedical signal processing}, pubstate = {published}, tppubtype = {article} } The problem of blind sparse analysis of electrogram (EGM) signals under atrial fibrillation (AF) conditions is considered in this paper. A mathematical model for the observed signals that takes into account the multiple foci typically appearing inside the heart during AF is firstly introduced. Then, a reconstruction model based on a fixed dictionary is developed and several alternatives for choosing the dictionary are discussed. In order to obtain a sparse solution, which takes into account the biological restrictions of the problem at the same time, the paper proposes using a Least Absolute Shrinkage and Selection Operator (LASSO) regularization followed by a post-processing stage that removes low amplitude coefficients violating the refractory period characteristic of cardiac cells. Finally, spectral analysis is performed on the clean activation sequence obtained from the sparse learning stage in order to estimate the number of latent foci and their frequencies. Simulations on synthetic signals and applications on real data are provided to validate the proposed approach. |
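The post-processing stage described in the abstract above — discarding low-amplitude activations that violate the refractory period of cardiac cells — can be sketched as a simple greedy pass over the detected activations. Threshold values and the greedy strategy are illustrative assumptions; the paper derives the activations from a LASSO solution:

```python
def enforce_refractory(times, amps, refractory, min_amp):
    """Greedy sweep in time order: drop any activation that is both
    low-amplitude (below `min_amp`) and closer than `refractory` to the
    previously kept activation, since a cardiac cell cannot re-activate
    that quickly.
    """
    kept = []
    last_t = None
    for t, a in sorted(zip(times, amps)):
        if last_t is not None and t - last_t < refractory and a < min_amp:
            continue   # spurious low-amplitude coefficient
        kept.append((t, a))
        last_t = t
    return kept

# a weak activation 50 ms after a strong one is removed
clean = enforce_refractory([0.0, 0.05, 0.5], [1.0, 0.1, 0.9],
                           refractory=0.2, min_amp=0.3)
```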

## Inproceedings

Farajtabar, Mehrdad; Du, Nan; Gomez-Rodriguez, Manuel; Valera, Isabel; Zha, Hongyuan; Song, Le Shaping Social Activity by Incentivizing Users (Inproceedings) Advances in Neural Information Processing Systems, pp. 2474–2482, Montreal, 2014. @inproceedings{Farajtabar2014, title = {Shaping Social Activity by Incentivizing Users}, author = {Farajtabar, Mehrdad and Du, Nan and Gomez-Rodriguez, Manuel and Valera, Isabel and Zha, Hongyuan and Song, Le}, url = {http://papers.nips.cc/paper/5365-shaping-social-activity-by-incentivizing-users.pdf}, year = {2014}, date = {2014-12-01}, booktitle = {Advances in Neural Information Processing Systems}, pages = {2474--2482}, address = {Montreal}, abstract = {Events in an online social network can be categorized roughly into endogenous events, where users just respond to the actions of their neighbors within the network, and exogenous events, where users take actions due to drives external to the network. How much external drive should be provided to each user so that the network activity can be steered towards a target state? In this paper, we model social events using multivariate Hawkes processes, which can capture both endogenous and exogenous event intensities, and derive a time-dependent linear relation between the intensity of exogenous events and the overall network activity. Exploiting this connection, we develop a convex optimization framework for determining the required level of external drive in order for the network to reach a desired activity level. We experiment with event data gathered from Twitter and show that our method can steer the activity of the network more accurately than alternatives.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} }
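In the stationary regime, the linear relation mentioned in the abstract reduces to the classical Hawkes identity lam_bar = (I - A)^{-1} mu, where A is the branching matrix and mu the vector of exogenous base rates. A minimal sketch of steering the network to a target activity follows; the numbers are invented, and the paper's actual relation is time-dependent and its optimization carries budget constraints, neither of which is modeled here.

```python
import numpy as np

# Branching matrix A (A[i, j] = expected number of events of user i directly
# triggered by one event of user j); values invented for the illustration.
A = np.array([[0.2, 0.1, 0.0],
              [0.0, 0.3, 0.2],
              [0.1, 0.0, 0.1]])
assert np.max(np.abs(np.linalg.eigvals(A))) < 1.0    # stationarity condition

lam_target = np.array([2.0, 1.5, 1.0])               # desired events per unit time
mu = (np.eye(3) - A) @ lam_target                    # required exogenous drive
mu = np.maximum(mu, 0.0)                             # rates must be nonnegative
lam_achieved = np.linalg.solve(np.eye(3) - A, mu)    # activity the drive induces
```

When the unconstrained solution is already nonnegative, as here, the induced stationary activity matches the target exactly.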

Taborda, Camilo; Perez-Cruz, Fernando; Guo, Dongning New Information-Estimation Results for Poisson, Binomial and Negative Binomial Models (Inproceedings) 2014 IEEE International Symposium on Information Theory, pp. 2207–2211, IEEE, Honolulu, 2014, ISBN: 978-1-4799-5186-4. @inproceedings{Taborda2014, title = {New Information-Estimation Results for Poisson, Binomial and Negative Binomial Models}, author = {Taborda, Camilo G. and Perez-Cruz, Fernando and Guo, Dongning}, url = {http://ieeexplore.ieee.org/articleDetails.jsp?arnumber=6875225}, doi = {10.1109/ISIT.2014.6875225}, isbn = {978-1-4799-5186-4}, year = {2014}, date = {2014-06-01}, booktitle = {2014 IEEE International Symposium on Information Theory}, pages = {2207--2211}, publisher = {IEEE}, address = {Honolulu}, abstract = {In recent years, a number of mathematical relationships have been established between information measures and estimation measures for various models, including Gaussian, Poisson and binomial models. In this paper, it is shown that the second derivative of the input-output mutual information with respect to the input scaling can be expressed as the expectation of a certain Bregman divergence pertaining to the conditional expectations of the input and the input power. This result is similar to that found for the Gaussian model, where the Bregman divergence therein is the squared distance. In addition, the Poisson, binomial and negative binomial models are shown to be similar in the small scaling regime, in the sense that the derivative of the mutual information and the derivative of the relative entropy converge to the same value.}, keywords = {Bregman divergence, Estimation, estimation measures, Gaussian models, Gaussian processes, information measures, information theory, information-estimation results, negative binomial models, Poisson models, Stochastic processes}, pubstate = {published}, tppubtype = {inproceedings} }
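The Bregman divergence the abstract refers to is easy to illustrate numerically: for a convex F, D_F(x, y) = F(x) - F(y) - F'(y)(x - y). The particular functions and evaluation points below are chosen for the illustration only; the paper's divergence is taken over conditional expectations, which this scalar sketch does not model.

```python
import numpy as np

def bregman(F, dF, x, y):
    """Bregman divergence D_F(x, y) = F(x) - F(y) - F'(y) * (x - y)."""
    return F(x) - F(y) - dF(y) * (x - y)

# F(x) = x^2 recovers the squared distance of the Gaussian case ...
sq = bregman(lambda x: x ** 2, lambda x: 2.0 * x, 3.0, 1.0)
# ... while F(x) = x*log(x) gives the divergence x*log(x/y) - x + y that
# appears in Poisson-type information-estimation identities.
kl = bregman(lambda x: x * np.log(x), lambda x: np.log(x) + 1.0, 2.0, 1.0)
```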

Miguez, Joaquin On the uniform asymptotic convergence of a distributed particle filter (Inproceedings) 2014 IEEE 8th Sensor Array and Multichannel Signal Processing Workshop (SAM), pp. 241–244, IEEE, A Coruña, 2014, ISBN: 978-1-4799-1481-4. @inproceedings{Miguez2014, title = {On the uniform asymptotic convergence of a distributed particle filter}, author = {Miguez, Joaquin}, url = {http://ieeexplore.ieee.org/articleDetails.jsp?arnumber=6882385}, doi = {10.1109/SAM.2014.6882385}, isbn = {978-1-4799-1481-4}, year = {2014}, date = {2014-06-01}, booktitle = {2014 IEEE 8th Sensor Array and Multichannel Signal Processing Workshop (SAM)}, pages = {241--244}, publisher = {IEEE}, address = {A Coruña}, abstract = {Distributed signal processing algorithms suitable for implementation over wireless sensor networks (WSNs) and ad hoc networks with communications and computing capabilities have become a hot topic over the past few years. One class of algorithms that has received special attention is particle filters. However, most distributed versions of these methods involve various heuristic or simplifying approximations and, as a consequence, classical convergence theorems for standard particle filters do not hold for their distributed counterparts. In this paper, we look into a distributed particle filter scheme that has been proposed for implementation in both parallel computing systems and WSNs, and prove that, under certain stability assumptions regarding the physical system of interest, its asymptotic convergence is guaranteed. Moreover, we show that convergence is attained uniformly over time. This means that approximation errors can be kept bounded for an arbitrarily long period of time without having to progressively increase the computational effort.}, keywords = {ad hoc networks, Approximation algorithms, approximation errors, Approximation methods, classical convergence theorems, Convergence, convergence of numerical methods, distributed particle filter scheme, distributed signal processing algorithms, Monte Carlo methods, parallel computing systems, particle filtering (numerical methods), Signal processing, Signal processing algorithms, stability assumptions, uniform asymptotic convergence, Wireless Sensor Networks, WSNs}, pubstate = {published}, tppubtype = {inproceedings} }
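For reference, the centralized baseline whose distributed variants the paper analyzes is the standard bootstrap particle filter. A minimal sketch for a scalar linear-Gaussian state-space model follows; the model parameters, particle count, and resampling scheme are chosen for the illustration, not taken from the paper.

```python
import numpy as np

def bootstrap_pf(y, n_part=500, a=0.9, q=0.5, r=0.5, seed=1):
    """Standard bootstrap particle filter for the scalar model
    x_t = a*x_{t-1} + q*N(0,1),  y_t = x_t + r*N(0,1)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_part)                 # initial particle cloud
    est = []
    for yt in y:
        x = a * x + q * rng.standard_normal(n_part)      # propagate
        logw = -0.5 * ((yt - x) / r) ** 2                # Gaussian likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        est.append(w @ x)                                # posterior-mean estimate
        x = x[rng.choice(n_part, n_part, p=w)]           # multinomial resampling
    return np.array(est)

# Simulate ground truth and noisy observations, then filter.
rng = np.random.default_rng(0)
T, a, q, r = 100, 0.9, 0.5, 0.5
xs = np.zeros(T)
for t in range(1, T):
    xs[t] = a * xs[t - 1] + q * rng.standard_normal()
ys = xs + r * rng.standard_normal(T)
est = bootstrap_pf(ys)
rmse = np.sqrt(np.mean((est - xs) ** 2))
```

On this easy model the filtered estimate should beat the raw observations (noise std 0.5) on average, which is a quick sanity check rather than a convergence statement.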

Crisan, Dan; Miguez, Joaquin Nested Particle Filters for Sequential Parameter Estimation in Discrete-time State-space Models (Inproceedings) SIAM 2014 Conference on Uncertainty Quantification, Savannah, 2014. @inproceedings{Crisan2014b, title = {Nested Particle Filters for Sequential Parameter Estimation in Discrete-time State-space Models}, author = {Crisan, Dan and Miguez, Joaquin}, year = {2014}, date = {2014-03-01}, booktitle = {SIAM 2014 Conference on Uncertainty Quantification}, address = {Savannah}, abstract = {The problem of estimating the parameters of nonlinear, possibly non-Gaussian discrete-time state-space models has drawn considerable attention during the past few years, leading to the appearance of general methodologies (SMC2, particle MCMC, recursive ML) that have improved on earlier, simpler extensions of the standard particle filter. However, there is still a lack of recursive (online) methods that can provide a theoretically grounded approximation of the joint posterior probability distribution of the parameters and the dynamic state variables of the model. In this talk, we describe a two-layer particle filtering scheme that addresses this problem. Both a recursive algorithm, suitable for online implementation, and some results regarding its asymptotic convergence are presented.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} }

Martino, Luca; Elvira, Víctor; Luengo, David An Adaptive Population Importance Sampler (Inproceedings) IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2014), Florence, 2014. @inproceedings{Martino2014, title = {An Adaptive Population Importance Sampler}, author = {Martino, Luca and Elvira, Víctor and Luengo, David}, url = {http://www.icassp2014.org/home.html}, year = {2014}, date = {2014-01-01}, booktitle = {IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2014)}, address = {Florence}, keywords = {ALCIT, COMPREHENSION}, pubstate = {published}, tppubtype = {inproceedings} }

Pastore, Adriano; Koch, Tobias; Fonollosa, Javier Rodriguez A Rate-Splitting Approach to Fading Multiple-Access Channels with Imperfect Channel-State Information (Inproceedings) International Zurich Seminar on Communications (IZS), Zurich, 2014. @inproceedings{Pastore2014, title = {A Rate-Splitting Approach to Fading Multiple-Access Channels with Imperfect Channel-State Information}, author = {Pastore, Adriano and Koch, Tobias and Fonollosa, Javier Rodriguez}, url = {http://www.tsc.uc3m.es/~koch/files/IZS_2014_009-012.pdf http://e-collection.library.ethz.ch/eserv/eth:8192/eth-8192-01.pdf}, year = {2014}, date = {2014-01-01}, booktitle = {International Zurich Seminar on Communications (IZS)}, address = {Zurich}, abstract = {As shown by Médard, the capacity of fading channels with imperfect channel-state information (CSI) can be lower-bounded by assuming a Gaussian channel input and by treating the unknown portion of the channel multiplied by the channel input as independent worst-case (Gaussian) noise. Recently, we have demonstrated that this lower bound can be sharpened by a rate-splitting approach: by expressing the channel input as the sum of two independent Gaussian random variables (referred to as layers), say X = X1 + X2, and by applying Médard's bounding technique to first lower-bound the capacity of the virtual channel from X1 to the channel output Y (while treating X2 as noise), and then lower-bound the capacity of the virtual channel from X2 to Y (while assuming X1 to be known), one obtains a lower bound that is strictly larger than Médard's bound. This rate-splitting approach is reminiscent of an approach used by Rimoldi and Urbanke to achieve points on the capacity region of the Gaussian multiple-access channel (MAC). Here we blend these two rate-splitting approaches to derive a novel inner bound on the capacity region of the memoryless fading MAC with imperfect CSI. Generalizing the above rate-splitting approach to more than two layers, we show that, irrespective of how we assign powers to each layer, the supremum of all rate-splitting bounds is approached as the number of layers tends to infinity, and we derive an integral expression for this supremum. We further derive an expression for the vertices of the best inner bound, maximized over the number of layers and over all power assignments.}, keywords = {ALCIT}, pubstate = {published}, tppubtype = {inproceedings} }

Montoya-Martinez, Jair; Artés-Rodríguez, Antonio; Pontil, Massimiliano Structured Sparse-Low Rank Matrix Factorization for the EEG Inverse Problem (Inproceedings) 4th International Workshop on Cognitive Information Processing (CIP 2014), Copenhagen, 2014. @inproceedings{Montoya-Martinez2014, title = {Structured Sparse-Low Rank Matrix Factorization for the EEG Inverse Problem}, author = {Montoya-Martinez, Jair and Artés-Rodríguez, Antonio and Pontil, Massimiliano}, url = {http://www.conwiz.dk/cgi-all/cip2014/view_abstract.pl?idno=21}, year = {2014}, date = {2014-01-01}, booktitle = {4th International Workshop on Cognitive Information Processing (CIP 2014)}, address = {Copenhagen}, abstract = {We consider the estimation of the Brain Electrical Sources (BES) matrix from noisy EEG measurements, commonly known as the EEG inverse problem. We propose a new method based on the factorization of the BES matrix as the product of a sparse coding matrix and a dense latent source matrix. This structure is enforced by minimizing a regularized functional that includes the $\ell_{2,1}$-norm of the coding matrix and the squared Frobenius norm of the latent source matrix. We develop an alternating optimization algorithm to solve the resulting nonsmooth-nonconvex minimization problem. We evaluate our approach in a simulated scenario consisting of estimating a synthetic BES matrix with 5124 sources, and compare the performance of our method with respect to the Lasso, Group Lasso, Sparse Group Lasso and trace norm regularizers.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} }
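A minimal sketch of the kind of alternating scheme the abstract describes: a closed-form ridge update for the dense latent source matrix, alternated with a proximal-gradient step using the ℓ2,1 prox (which zeroes whole rows) for the sparse coding matrix. The toy dimensions, step-size rule, penalty weights, and stopping rule are all invented for the illustration; the paper's functional and algorithm differ in detail.

```python
import numpy as np

def row_prox(C, t):
    """Prox of t * ||C||_{2,1}: soft-threshold each row's Euclidean norm,
    which drives entire rows (candidate sources) to zero."""
    norms = np.linalg.norm(C, axis=1, keepdims=True)
    return C * np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)

def sparse_low_rank(Y, A, r, lam=0.01, mu=1e-3, n_iter=200, seed=0):
    """Alternate a ridge update for the dense latent sources S with one
    proximal-gradient step for the row-sparse coding matrix C,
    fitting Y ≈ A @ (C @ S)."""
    rng = np.random.default_rng(seed)
    C = 0.1 * rng.standard_normal((A.shape[1], r))
    S = 0.1 * rng.standard_normal((r, Y.shape[1]))
    for _ in range(n_iter):
        M = A @ C                                  # S-step: closed-form ridge
        S = np.linalg.solve(M.T @ M + mu * np.eye(r), M.T @ Y)
        G = A.T @ (A @ C @ S - Y) @ S.T            # C-step: prox-gradient
        step = 1.0 / (np.linalg.norm(A, 2) ** 2 * np.linalg.norm(S, 2) ** 2 + mu)
        C = row_prox(C - step * G, step * lam)
    return C, S

# Toy recovery: a small "leadfield" A and a coding matrix with 3 active rows.
rng = np.random.default_rng(3)
A = rng.standard_normal((8, 10))
C0 = np.zeros((10, 2))
C0[[1, 4, 7]] = rng.standard_normal((3, 2))
S0 = rng.standard_normal((2, 50))
Y = A @ C0 @ S0                                    # noiseless measurements
C, S = sparse_low_rank(Y, A, r=2)
rel_err = np.linalg.norm(A @ C @ S - Y) / np.linalg.norm(Y)
```

Both alternating steps are guaranteed not to increase the regularized objective (the ridge step is an exact minimization over S; the prox-gradient step uses a step size below the inverse Lipschitz constant), which is the basic descent property such schemes rely on.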