## 2015

## Journal Articles

Elvira, Victor; Martino, Luca; Luengo, David; Bugallo, Monica F.: "Efficient Multiple Importance Sampling Estimators" (Journal Article). IEEE Signal Processing Letters, 22(10), pp. 1757–1761, 2015. ISSN: 1070-9908. DOI: 10.1109/LSP.2015.2432078. URL: http://ieeexplore.ieee.org/articleDetails.jsp?arnumber=7105865

Abstract: Multiple importance sampling (MIS) methods use a set of proposal distributions from which samples are drawn. Each sample is then assigned an importance weight that can be obtained according to different strategies. This work is motivated by the trade-off between variance reduction and computational complexity of the different approaches (classical vs. deterministic mixture) available for the weight calculation. A new method that achieves an efficient compromise between both factors is introduced in this letter. It is based on forming a partition of the set of proposal distributions and computing the weights accordingly. Computer simulations show the excellent performance of the associated partial deterministic mixture MIS estimator.
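
As a rough illustration of the trade-off discussed in the abstract, the two standard weighting schemes for a sample $x_n$ drawn from proposal $q_n$ (out of $N$ proposals) and target $\pi$ are written below; the partition-based weight is a hedged sketch of the idea, not the letter's exact notation.

```latex
% Standard (classical) MIS weight: each sample is weighted by its own proposal only.
w_n^{\mathrm{std}} = \frac{\pi(x_n)}{q_n(x_n)}

% Full deterministic-mixture weight: the denominator is the mixture of all N proposals
% (lower variance, but N proposal evaluations per sample).
w_n^{\mathrm{DM}} = \frac{\pi(x_n)}{\frac{1}{N}\sum_{j=1}^{N} q_j(x_n)}

% Partial deterministic mixture (sketch): proposals are split into disjoint subsets;
% a sample is weighted with the mixture of the proposals in its own subset S(n) only,
% trading some variance reduction for lower cost.
w_n^{\mathrm{pDM}} = \frac{\pi(x_n)}{\frac{1}{|S(n)|}\sum_{j \in S(n)} q_j(x_n)}
```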

## 2011

## Journal Articles

Santiago-Mozos, Ricardo; Perez-Cruz, Fernando; Artés-Rodríguez, Antonio: "Extended Input Space Support Vector Machine" (Journal Article). IEEE Transactions on Neural Networks, 22(1), pp. 158–163, 2011. ISSN: 1941-0093. URLs: http://www.tsc.uc3m.es/~antonio/papers/P38_2011_Extended Input Space Support Vector Machine.pdf ; http://www.ncbi.nlm.nih.gov/pubmed/21095866

Abstract: In some applications, the probability of error of a given classifier is too high for its practical application, but we are allowed to gather more independent test samples from the same class to reduce the probability of error of the final decision. From the point of view of hypothesis testing, the solution is given by the Neyman-Pearson lemma. However, there is no equivalent result to the Neyman-Pearson lemma when the likelihoods are unknown, and we are given a training dataset. In this brief, we explore two alternatives. First, we combine the soft (probabilistic) outputs of a given classifier to produce a consensus labeling for K test samples. In the second approach, we build a new classifier that directly computes the label for K test samples. For this second approach, we need to define an extended input space training set and incorporate the known symmetries in the classifier. This latter approach gives more accurate results, as it only requires an accurate classification boundary, while the former needs an accurate posterior probability estimate for the whole input space. We illustrate our results with well-known databases.
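
The first alternative mentioned in the abstract (combining the soft outputs of a single-sample classifier over K test samples that share a class) can be sketched as a simple log-odds fusion. This is a hedged illustration of the general idea, not the exact rule used in the paper; `fuse_soft_outputs` and its arguments are hypothetical names.

```python
import numpy as np

def fuse_soft_outputs(probs):
    """Consensus label for K test samples known to share one class.

    probs: array of shape (K,) with P(class = +1 | x_k) from a base classifier.
    Assuming the samples are conditionally independent given the class,
    the per-sample log-odds simply add up; the sign gives the consensus label.
    """
    probs = np.clip(np.asarray(probs, dtype=float), 1e-12, 1 - 1e-12)
    log_odds = np.log(probs) - np.log(1.0 - probs)
    return 1 if log_odds.sum() >= 0.0 else -1

# Example: three individually weak decisions still give a confident consensus.
print(fuse_soft_outputs([0.55, 0.62, 0.58]))  # -> 1
```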

## 2009

## Conference Papers

Djuric, Petar M.; Miguez, Joaquin: "Model Assessment with Kolmogorov-Smirnov Statistics" (Conference Paper). 2009 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 2973–2976, IEEE, Taipei, 2009. ISSN: 1520-6149. URL: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=4960248

Abstract: One of the most basic problems in science and engineering is the assessment of a considered model. The model should describe a set of observed data, and the objective is to find ways of deciding whether the model should be rejected. This appears to be an ill-conditioned problem, because the model would have to be tested against all possible alternative models. In this paper we use the Kolmogorov-Smirnov statistic to develop a test that indicates whether the model should be kept or rejected. We explain how this test can be implemented in the context of particle filtering. We demonstrate the performance of the proposed method by computer simulations.
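
A minimal sketch of a Kolmogorov-Smirnov style check of the kind the abstract describes: if the model is correct, the predictive CDF evaluated at each observation should be uniform on [0, 1], and the KS statistic measures the departure from uniformity. The particle-filter approximation of the predictive CDF is only hinted at in a comment; names such as `predictive_cdf_values` are assumptions, not taken from the paper.

```python
import numpy as np
from scipy import stats

def ks_model_check(predictive_cdf_values, alpha=0.05):
    """Kolmogorov-Smirnov check of a model via its predictive CDF values.

    predictive_cdf_values: u_t = F_t(y_t), the model's predictive CDF evaluated
    at each observation (with a particle filter these would be approximated by
    the fraction of predicted particles below y_t).
    Returns (ks_statistic, p_value, reject_model).
    """
    u = np.asarray(predictive_cdf_values, dtype=float)
    ks_stat, p_value = stats.kstest(u, "uniform")  # compare against U(0, 1)
    return ks_stat, p_value, p_value < alpha

# Example with synthetic, well-calibrated values: the model is not rejected.
rng = np.random.default_rng(0)
print(ks_model_check(rng.uniform(size=500)))
```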

Martino, Luca; Miguez, Joaquin: "An Adaptive Accept/Reject Sampling Algorithm for Posterior Probability Distributions" (Conference Paper). 2009 IEEE/SP 15th Workshop on Statistical Signal Processing, pp. 45–48, IEEE, Cardiff, 2009. ISBN: 978-1-4244-2709-3. URL: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=5278644

Abstract: Accept/reject sampling is a well-known method to generate random samples from arbitrary target probability distributions. It demands the design of a suitable proposal probability density function (pdf) from which candidate samples can be drawn. These samples are either accepted or rejected depending on a test involving the ratio of the target and proposal densities. In this paper we introduce an adaptive method to build a sequence of proposal pdfs that approximate the target density and hence can ensure a high acceptance rate. To illustrate the application of the method, we design an accept/reject particle filter and then assess its performance and sampling efficiency numerically, by means of computer simulations.
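
For reference, the basic (non-adaptive) accept/reject step that the paper builds on can be sketched as below; the adaptive construction of the proposal sequence is specific to the paper and is not reproduced here, and the function and argument names are illustrative.

```python
import numpy as np
from scipy.stats import norm

def rejection_sample(target_pdf, proposal_sample, proposal_pdf, bound, n_samples, rng=None):
    """Draw samples from target_pdf by accept/reject with a given proposal.

    bound must satisfy target_pdf(x) <= bound * proposal_pdf(x) for all x
    (the paper's adaptive scheme refines the proposal so the envelope stays tight).
    """
    rng = np.random.default_rng() if rng is None else rng
    out = []
    while len(out) < n_samples:
        x = proposal_sample(rng)
        # Accept with probability target(x) / (bound * proposal(x)).
        if rng.uniform() * bound * proposal_pdf(x) <= target_pdf(x):
            out.append(x)
    return np.array(out)

# Example: sample a standard normal using a wider normal as the proposal.
samples = rejection_sample(
    target_pdf=norm(0, 1).pdf,
    proposal_sample=lambda rng: rng.normal(0, 2),
    proposal_pdf=norm(0, 2).pdf,
    bound=2.5,  # sup_x N(x;0,1)/N(x;0,2) = 2 < 2.5, so the envelope is valid
    n_samples=1000,
)
print(samples.mean(), samples.std())
```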

Vinuelas-Peris, Pablo; Artés-Rodríguez, Antonio: "Sensing Matrix Optimization in Distributed Compressed Sensing" (Conference Paper). 2009 IEEE/SP 15th Workshop on Statistical Signal Processing, pp. 638–641, IEEE, Cardiff, 2009. ISBN: 978-1-4244-2709-3. URL: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=5278496

Abstract: Distributed compressed sensing (DCS) seeks to simultaneously measure signals that are each individually sparse in some domain(s) and also mutually correlated. In this paper we consider the scenario in which the (overcomplete) bases for the common component and the innovations are different. We propose and analyze a distributed coding strategy for the common component, as well as the use of the efficient projection (EP) method for optimizing the sensing matrices in this setting. We show the effectiveness of our approach by computer simulations using orthogonal matching pursuit (OMP) as the joint recovery method, and we discuss the configuration of the distribution strategy.
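
Since the abstract relies on orthogonal matching pursuit (OMP) for recovery, a minimal single-signal OMP sketch is included for context; the distributed/joint recovery variant used in the paper is not reproduced, and all names here are illustrative.

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal matching pursuit: recover a `sparsity`-sparse x with y ≈ A @ x.

    A: (n_measurements, n_atoms) sensing matrix / dictionary with roughly
    unit-norm columns. Greedy loop: pick the atom most correlated with the
    residual, then re-fit all selected atoms by least squares.
    """
    support, residual = [], y.copy()
    x = np.zeros(A.shape[1])
    for _ in range(sparsity):
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    x[support] = coeffs
    return x

# Example: recovery of a 3-sparse vector from 40 random Gaussian measurements.
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.0, -2.0, 0.5]
print(np.allclose(omp(A, A @ x_true, 3), x_true, atol=1e-6))
```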

## 2008

## Conference Papers

Vazquez, Manuel A.; Miguez, Joaquin: "A Per-Survivor Processing Algorithm for Maximum Likelihood Equalization of MIMO Channels with Unknown Order" (Conference Paper). 2008 International ITG Workshop on Smart Antennas, pp. 387–391, IEEE, Vienna, 2008. ISBN: 978-1-4244-1756-8. URL: http://ieeexplore.ieee.org/articleDetails.jsp?arnumber=4475587

Abstract: In the equalization of frequency-selective multiple-input multiple-output (MIMO) channels it is usually assumed that the length of the channel impulse response (CIR), also referred to as the channel order, is known. However, this is not true in most practical situations and, in order to avoid the serious performance degradation that occurs when the CIR length is underestimated, a channel with "more than enough" taps is usually considered. This can mean overestimating the channel order, which is undesirable since the computational complexity of maximum likelihood sequence detection (MLSD) in frequency-selective channels grows exponentially with the channel order. In addition, the higher the channel order considered, the larger the number of channel coefficients that must be estimated from the same set of observations. In this paper, we introduce an algorithm for MLSD that incorporates the full estimation of the MIMO CIR parameters, including its order. The proposed technique is based on the per-survivor processing (PSP) methodology, admits both blind and semiblind implementations, depending on the availability of pilot data, and is designed to work with time-selective channels. Besides the analytical derivation of the algorithm, we provide computer simulation results that illustrate the effectiveness of the resulting receiver.
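
To make the complexity claim in the abstract concrete: for a symbol alphabet $\mathcal{A}$, $N_T$ transmit antennas and a channel impulse response spanning $m$ symbol periods, a Viterbi-type MLSD trellis needs roughly the following number of states. This is a standard back-of-the-envelope figure given as context, not a quantity quoted in the paper.

```latex
% Trellis state count for MLSD over a frequency-selective MIMO channel:
% the state collects the m-1 most recent symbol vectors, each with N_T entries.
|\mathcal{S}| \approx |\mathcal{A}|^{N_T (m-1)}
% Hence complexity grows exponentially with the assumed channel order m,
% which is why systematically overestimating m is costly.
```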

## 2007

## Journal Articles

Leiva-Murillo, Jose M.; Artés-Rodríguez, Antonio: "Maximization of Mutual Information for Supervised Linear Feature Extraction" (Journal Article). IEEE Transactions on Neural Networks, 18(5), pp. 1433–1441, 2007. ISSN: 1045-9227. URL: http://ieeexplore.ieee.org/articleDetails.jsp?arnumber=4298118

Abstract: In this paper, we present a novel scheme for linear feature extraction in classification. The method is based on the maximization of the mutual information (MI) between the extracted features and the classes. The sum of the MI corresponding to each of the features is taken as a heuristic that approximates the MI of the whole output vector. Then, a component-by-component gradient-ascent method is proposed for the maximization of the MI, similar to the gradient-based entropy optimization used in independent component analysis (ICA). The simulation results show that the method is not only competitive with existing supervised feature extraction methods in all cases studied, but also remarkably outperforms them when the data are characterized by strongly nonlinear boundaries between classes.
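
The optimization described in the abstract can be summarized as follows, with the per-component sum acting as a surrogate for the joint mutual information; the notation is an assumption made for illustration rather than the paper's own.

```latex
% Linear features y_i = w_i^T x; C is the class label.
% The surrogate objective sums the per-feature mutual information terms:
W^{\star} = \arg\max_{W = [w_1, \dots, w_d]} \; \sum_{i=1}^{d} I\!\left(w_i^{\top} x ;\, C\right)
% Each w_i is updated by gradient ascent on I(w_i^T x; C), in the spirit of the
% gradient-based entropy optimization used in ICA.
```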