## 2009

## Inproceedings

Perez-Cruz, Fernando; Rodrigues, Miguel R. D.; Verdu, Sergio: *Optimal Precoding for Multiple-Input Multiple-Output Gaussian Channels*. Seminar PIIRS, Princeton, 2009.

```bibtex
@inproceedings{Perez-Cruz2009a,
  title     = {Optimal Precoding for Multiple-Input Multiple-Output Gaussian Channels},
  author    = {Perez-Cruz, Fernando and Rodrigues, Miguel R. D. and Verdu, Sergio},
  booktitle = {Seminar PIIRS},
  address   = {Princeton},
  year      = {2009},
  url       = {http://eprints.pascal-network.org/archive/00006754/},
  keywords  = {Theory & Algorithms}
}
```

We investigate the linear precoding and power allocation policies that maximize the mutual information for general multiple-input multiple-output (MIMO) Gaussian channels with arbitrary input distributions, by capitalizing on the relationship between mutual information and minimum mean-square error. The optimal linear precoder satisfies a fixed-point equation as a function of the channel and the input constellation. For non-Gaussian inputs, a nondiagonal precoding matrix in general increases the information transmission rate, even for parallel noninteracting channels. Whenever precoding is precluded, the optimal power allocation policy also satisfies a fixed-point equation; we put forth a generalization of the mercury/waterfilling algorithm, previously proposed for parallel noninterfering channels, in which the mercury level accounts not only for the non-Gaussian input distributions but also for the interference among inputs.
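The mercury/waterfilling algorithm mentioned in the abstract generalizes the classical waterfilling power allocation that is optimal for Gaussian inputs on parallel channels. As a point of reference only, here is a minimal numpy sketch of that classical baseline (the channel gains and power budget are made-up values; this is not the paper's generalized algorithm):

```python
import numpy as np

def waterfilling(gains, total_power, tol=1e-9):
    """Classical waterfilling for parallel Gaussian channels:
    p_i = max(0, mu - 1/g_i), with the water level mu found by
    bisection so that sum(p_i) equals the power budget."""
    gains = np.asarray(gains, dtype=float)
    inv = 1.0 / gains
    lo, hi = 0.0, inv.max() + total_power  # mu is bracketed in [lo, hi]
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        used = np.maximum(0.0, mu - inv).sum()
        if used > total_power:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.0, mu - inv)

# illustrative gains and budget (not from the paper)
powers = waterfilling([2.0, 1.0, 0.25], total_power=3.0)
```

With these gains the weakest subchannel falls below the water level and receives no power; the mercury/waterfilling generalization adjusts the per-channel levels to account for non-Gaussian inputs and, in this paper, for interference among inputs.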

Miguez, Joaquin; Maiz, Cristina S.; Djuric, Petar M.; Crisan, Dan: *Sequential Monte Carlo Optimization Using Artificial State-Space Models*. 2009 IEEE 13th Digital Signal Processing Workshop and 5th IEEE Signal Processing Education Workshop, pp. 268–273, IEEE, Marco Island, FL, 2009.

```bibtex
@inproceedings{Miguez2009,
  title     = {Sequential Monte Carlo Optimization Using Artificial State-Space Models},
  author    = {Miguez, Joaquin and Maiz, Cristina S. and Djuric, Petar M. and Crisan, Dan},
  booktitle = {2009 IEEE 13th Digital Signal Processing Workshop and 5th IEEE Signal Processing Education Workshop},
  pages     = {268--273},
  publisher = {IEEE},
  address   = {Marco Island, FL},
  year      = {2009},
  url       = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=4785933},
  keywords  = {Acceleration, Cost function, Design optimization, discrete-time dynamical system, Educational institutions, Mathematics, maximum a posteriori estimate, maximum likelihood estimation, minimisation, Monte Carlo methods, Optimization methods, Probability distribution, sequential Monte Carlo optimization, Sequential optimization, Signal design, State-space methods, state-space model, Stochastic optimization}
}
```

We introduce a method for sequential minimization of a certain class of (possibly non-convex) cost functions with respect to a high-dimensional signal of interest. The proposed approach involves the transformation of the optimization problem into one of estimation in a discrete-time dynamical system. In particular, we describe a methodology for constructing an artificial state-space model which has the signal of interest as its unobserved dynamic state. The model is "adapted" to the cost function in the sense that the maximum a posteriori (MAP) estimate of the system state is also a global minimizer of the cost function. The advantage of the estimation framework is that we can draw from a pool of sequential Monte Carlo methods, for particle approximation of probability measures in dynamic systems, that enable the numerical computation of MAP estimates. We provide examples of how to apply the proposed methodology, including some illustrative simulation results.
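To give a flavor of the propagate-weight-resample machinery behind such particle schemes, the toy numpy sketch below builds an artificial random-walk state with a pseudo-likelihood exp(-cost/T) and tracks the lowest-cost particle as a MAP proxy. The cost function, dynamics, and all tuning constants are illustrative assumptions, not the authors' construction:

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(x):
    # non-convex toy cost with global minimum at x = 0
    return x**2 + 10.0 * (1.0 - np.cos(2.0 * np.pi * x))

# artificial state-space model: random-walk dynamics, and a
# "likelihood" exp(-cost/T) so low-cost states get high weight
n_particles, n_steps = 500, 60
particles = rng.uniform(-4.0, 4.0, n_particles)
temperature = 1.0
best_x, best_c = None, np.inf

for t in range(n_steps):
    particles = particles + rng.normal(0.0, 0.3, n_particles)  # propagate
    weights = np.exp(-cost(particles) / temperature)           # weight
    weights /= weights.sum()
    idx = rng.choice(n_particles, n_particles, p=weights)      # resample
    particles = particles[idx]
    c = cost(particles)
    i = c.argmin()
    if c[i] < best_c:                                          # MAP proxy
        best_c, best_x = c[i], particles[i]

# best_x should land near the global minimizer x = 0
```

Resampling concentrates the particle cloud in the low-cost basin, so the best particle approaches the global minimizer even though the cost has many local minima.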

Fresia, Maria; Perez-Cruz, Fernando; Poor, H. Vincent; Verdu, Sergio: *Joint Source-Channel Coding with Concatenated LDPC Codes*. Information Theory and Applications (ITA), San Diego, 2009.

```bibtex
@inproceedings{Fresia2009a,
  title     = {Joint Source-Channel Coding with Concatenated LDPC Codes},
  author    = {Fresia, Maria and Perez-Cruz, Fernando and Poor, H. Vincent and Verdu, Sergio},
  booktitle = {Information Theory and Applications (ITA)},
  address   = {San Diego},
  year      = {2009},
  url       = {http://eprints.pascal-network.org/archive/00004905/},
  keywords  = {Learning/Statistics & Optimisation}
}
```

The separation principle, a milestone in information theory, establishes that for stationary sources and channels there is no loss of optimality when a channel-independent source encoder followed by a source-independent channel encoder are used to transmit the data, as the code length tends to infinity. Source and channel encoding have therefore typically been treated as independent problems. For finite-length codes, the separation principle does not hold, and a joint encoder and decoder can potentially increase the achieved information transmission rate. In this paper, a scheme for joint source-channel coding based on low-density parity-check (LDPC) codes is presented. The source is compressed and protected with two concatenated LDPC codes, and a joint belief propagation decoder is implemented. The EXIT chart performance of the proposed schemes is studied. The results are verified with some illustrative experiments.

Goez, Roger; Lazaro, Marcelino: *Training of Neural Classifiers by Separating Distributions at the Hidden Layer*. 2009 IEEE International Workshop on Machine Learning for Signal Processing, pp. 1–6, IEEE, Grenoble, 2009, ISBN: 978-1-4244-4947-7.

```bibtex
@inproceedings{Goez2009,
  title     = {Training of Neural Classifiers by Separating Distributions at the Hidden Layer},
  author    = {Goez, Roger and Lazaro, Marcelino},
  booktitle = {2009 IEEE International Workshop on Machine Learning for Signal Processing},
  pages     = {1--6},
  publisher = {IEEE},
  address   = {Grenoble},
  year      = {2009},
  isbn      = {978-1-4244-4947-7},
  url       = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=5306240},
  keywords  = {Artificial neural networks, Bayesian methods, Cost function, Curve fitting, Databases, Function approximation, Neural networks, Speech recognition, Support vector machine classification, Support vector machines}
}
```

A new cost function for training binary classifiers based on neural networks is proposed. This cost function aims at separating the distributions of the patterns of each class at the output of the hidden layer of the network. It has been implemented in a Generalized Radial Basis Function (GRBF) network and its performance has been evaluated on three different databases, showing advantages over the conventional Mean Squared Error (MSE) cost function. Compared with the Support Vector Machine (SVM) classifier, the proposed method also has advantages in terms of both performance and complexity.

Alvarez, Mauricio; Luengo, David; Lawrence, N. D.: *Latent Force Models*. Conf. on Artificial Intelligence and Statistics, Clearwater Beach, 2009.

```bibtex
@inproceedings{Alvarez2009,
  title     = {Latent Force Models},
  author    = {Alvarez, Mauricio and Luengo, David and Lawrence, N. D.},
  booktitle = {Conf. on Artificial Intelligence and Statistics},
  address   = {Clearwater Beach},
  year      = {2009}
}
```

Plata-Chaves, Jorge; Lazaro, Marcelino: *Closed-Form Error Exponent for the Neyman-Pearson Fusion of Markov Local Decisions*. 2009 IEEE/SP 15th Workshop on Statistical Signal Processing, pp. 533–536, IEEE, Cardiff, 2009, ISBN: 978-1-4244-2709-3.

```bibtex
@inproceedings{Plata-Chaves2009,
  title     = {Closed-Form Error Exponent for the Neyman-Pearson Fusion of Markov Local Decisions},
  author    = {Plata-Chaves, Jorge and Lazaro, Marcelino},
  booktitle = {2009 IEEE/SP 15th Workshop on Statistical Signal Processing},
  pages     = {533--536},
  publisher = {IEEE},
  address   = {Cardiff},
  year      = {2009},
  isbn      = {978-1-4244-2709-3},
  url       = {http://ieeexplore.ieee.org/articleDetails.jsp?arnumber=5278522}
}
```

In this correspondence, we derive a closed-form expression for the error exponent associated with the binary Neyman-Pearson test performed at the fusion center of a distributed detection system in which a large number of local detectors take dependent binary decisions regarding a specific phenomenon. We assume that the sensors are equally spaced along a straight line, that their local decisions are taken without any cooperation, and that they are transmitted to the fusion center over an error-free parallel access channel. Under each of the two possible hypotheses, H0 and H1, the correlation structure of the local binary decisions is modelled with a first-order binary Markov chain whose transition probabilities are linked to different physical parameters of the network. Through simulations based on the error exponent and a deterministic physical model of these transition probabilities, we study the effect of network density on the overall detection performance.

## 2008

## Journal Articles

Perez-Cruz, Fernando; Murillo-Fuentes, Juan Jose; Caro, S.: *Nonlinear Channel Equalization With Gaussian Processes for Regression*. IEEE Transactions on Signal Processing, 56 (10), pp. 5283–5286, 2008, ISSN: 1053-587X.

```bibtex
@article{Perez-Cruz2008c,
  title    = {Nonlinear Channel Equalization With Gaussian Processes for Regression},
  author   = {Perez-Cruz, Fernando and Murillo-Fuentes, Juan Jose and Caro, S.},
  journal  = {IEEE Transactions on Signal Processing},
  volume   = {56},
  number   = {10},
  pages    = {5283--5286},
  year     = {2008},
  issn     = {1053-587X},
  url      = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=4563433},
  keywords = {Channel estimation, digital communications receivers, equalisers, equalization, Gaussian processes, kernel adaline, least mean squares methods, maximum likelihood estimation, nonlinear channel equalization, nonlinear equalization, nonlinear minimum mean square error estimator, regression, regression analysis, short training sequences, Support vector machines}
}
```

We propose Gaussian processes for regression (GPR) as a novel nonlinear equalizer for digital communications receivers. GPR's main advantage over previous nonlinear estimation approaches lies in its capability to optimize the kernel hyperparameters by maximum likelihood, which significantly improves its performance for short training sequences. Moreover, GPR can be understood as a nonlinear minimum mean square error estimator, a standard criterion for training equalizers that trades off the inversion of the channel against the amplification of the noise. In the experimental section, we show that the GPR-based equalizer clearly outperforms support vector machine and kernel adaline approaches, exhibiting outstanding results for short training sequences.
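The nonlinear MMSE view comes from the GP predictive mean, k_\*(K + sigma^2 I)^{-1} y, evaluated on windows of received samples. The numpy sketch below illustrates that idea on a made-up dispersive nonlinear channel with BPSK symbols; the channel model, window width, and fixed hyperparameters are assumptions for illustration (the paper's point is precisely that the hyperparameters can instead be learned by maximum likelihood):

```python
import numpy as np

rng = np.random.default_rng(1)

def channel(s, noise_std=0.1):
    # toy dispersive channel with a memoryless nonlinearity
    # (assumption: not the channel model used in the paper)
    x = 0.9 * s + 0.4 * np.concatenate(([0.0], s[:-1]))  # one-tap ISI
    return np.tanh(x) + rng.normal(0.0, noise_std, s.size)

def rbf(a, b, length=1.0):
    # squared-exponential kernel between two sets of window vectors
    d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d / length**2)

def gpr_fit_predict(x_tr, y_tr, x_te, noise=1e-2):
    # GP regression predictive mean: k_* (K + sigma^2 I)^{-1} y
    K = rbf(x_tr, x_tr) + noise * np.eye(len(x_tr))
    alpha = np.linalg.solve(K, y_tr)
    return rbf(x_te, x_tr) @ alpha

def windows(r, width=3):
    # stack sliding windows of received samples as regression inputs
    pad = np.concatenate((np.zeros(width - 1), r))
    return np.stack([pad[i:i + width] for i in range(r.size)])

# short training sequence, as the paper emphasizes
s_tr = rng.choice([-1.0, 1.0], 100)
s_te = rng.choice([-1.0, 1.0], 500)
r_tr, r_te = channel(s_tr), channel(s_te)

y_hat = gpr_fit_predict(windows(r_tr), s_tr, windows(r_te))
ber = np.mean(np.sign(y_hat) != s_te)  # bit error rate after sign detection
```

Thresholding the predictive mean recovers the transmitted BPSK symbols; on this easy toy channel the bit error rate is low even with only 100 training symbols.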

Baca-García, Enrique; Perez-Rodriguez, M. Mercedes; Basurte-Villamor, Ignacio; Quintero-Gutierrez, F. Javier; Sevilla-Vicente, Juncal; Martinez-Vigo, Maria; Artés-Rodríguez, Antonio; Fernandez del Moral, Antonio L.; Jimenez-Arriero, Miguel A.; Gonzalez de Rivera, Jose L.: *Patterns of Mental Health Service Utilization in a General Hospital and Outpatient Mental Health Facilities: Analysis of 365,262 Psychiatric Consultations*. European Archives of Psychiatry and Clinical Neuroscience, 258 (2), pp. 117–123, 2008, ISSN: 0940-1334.

```bibtex
@article{Baca-Garcia2008,
  title    = {Patterns of Mental Health Service Utilization in a General Hospital and Outpatient Mental Health Facilities: Analysis of 365,262 Psychiatric Consultations},
  author   = {Baca-García, Enrique and Perez-Rodriguez, M Mercedes and Basurte-Villamor, Ignacio and Quintero-Gutierrez, F Javier and Sevilla-Vicente, Juncal and Martinez-Vigo, Maria and Artés-Rodríguez, Antonio and Fernandez del Moral, Antonio L and Jimenez-Arriero, Miguel A and Gonzalez de Rivera, Jose L},
  journal  = {European Archives of Psychiatry and Clinical Neuroscience},
  volume   = {258},
  number   = {2},
  pages    = {117--123},
  year     = {2008},
  issn     = {0940-1334},
  url      = {http://www.ncbi.nlm.nih.gov/pubmed/17990050},
  keywords = {80 and over, Adolescent, Adult, Age Distribution, Aged, Ambulatory Care, Ambulatory Care: statistics & numerical data, Ambulatory Care: utilization, Child, Diagnosis-Related Groups, Female, General, General: statistics & numerical data, General: utilization, Health Care Costs, Health Care Costs: statistics & numerical data, Health Services Accessibility, Health Services Accessibility: statistics & numeri, Health Services Needs and Demand, Health Services Needs and Demand: statistics & num, Hospitals, Humans, Male, Mental Disorders, Mental Disorders: classification, Mental Disorders: diagnosis, Mental Disorders: epidemiology, Mental Disorders: therapy, Mental Health Services, Mental Health Services: economics, Mental Health Services: utilization, Middle Aged, Outcome and Process Assessment (Health Care), Preschool, Psychiatry, Psychiatry: economics, Psychiatry: statistics & numerical data, Sex Distribution, Spain, Spain: epidemiology, Utilization Review, Utilization Review: statistics & numerical data}
}
```

PURPOSE: Mental health is one of the priorities of the European Commission. Studies of the use and cost of mental health facilities are needed in order to improve the planning and efficiency of mental health resources. We analyze the patterns of mental health service use in multiple clinical settings to identify factors associated with high cost. SUBJECTS AND METHODS: 22,859 patients received psychiatric care in the catchment area of a Spanish hospital (2000-2004). They had 365,262 psychiatric consultations in multiple settings. Two groups were selected that generated 80% of total costs: the medium-cost group (N = 4,212; 50% of costs) and the high-cost group (N = 236; 30% of costs). Statistical analyses were performed using univariate and multivariate techniques. Variables significant in the univariate analyses were introduced as independent variables in a logistic regression analysis with "high cost" (>7,263$) as the dependent variable. RESULTS: Costs were not evenly distributed throughout the sample: 19.4% of patients generated 80% of costs. The variables associated with high cost were: age group 1 (0-14 years) at the first evaluation, permanent disability, and the ICD-10 diagnoses: organic, including symptomatic, mental disorders; mental and behavioural disorders due to psychoactive substance use; schizophrenia, schizotypal and delusional disorders; behavioural syndromes associated with physiological disturbances and physical factors; external causes of morbidity and mortality; and factors influencing health status and contact with health services. DISCUSSION: Mental healthcare costs were not evenly distributed throughout the patient population. The highest costs are associated with early onset of the mental disorder, permanent disability, organic mental disorders, substance-related disorders, psychotic disorders, and external factors that influence health status and contact with health services or cause morbidity and mortality. CONCLUSION: Variables related to psychiatric diagnoses and sociodemographic factors influence the cost of mental healthcare.

Perez-Cruz, Fernando; Murillo-Fuentes, Juan Jose: *Digital Communication Receivers Using Gaussian Processes for Machine Learning*. EURASIP Journal on Advances in Signal Processing, 2008 (1), pp. 1–13, 2008, ISSN: 1687-6172.

```bibtex
@article{Perez-Cruz2008d,
  title     = {Digital Communication Receivers Using Gaussian Processes for Machine Learning},
  author    = {Perez-Cruz, Fernando and Murillo-Fuentes, Juan Jose},
  journal   = {EURASIP Journal on Advances in Signal Processing},
  volume    = {2008},
  number    = {1},
  pages     = {1--13},
  publisher = {Springer},
  year      = {2008},
  issn      = {1687-6172},
  url       = {http://asp.eurasipjournals.com/content/2008/1/491503}
}
```

We propose Gaussian processes (GPs) as a novel nonlinear receiver for digital communication systems. The GP framework can be used to solve both classification (GPC) and regression (GPR) problems. The minimum mean squared error solution is the expectation of the transmitted symbol given the information at the receiver, which is a nonlinear function of the received symbols for discrete inputs. GPR can be presented as a nonlinear MMSE estimator and is thus capable of achieving optimal performance from the MMSE viewpoint. Also, the design of digital communication receivers can be viewed as a detection problem, for which GPC is especially suited as it assigns posterior probabilities to each transmitted symbol. We explore the suitability of GPs as nonlinear digital communication receivers. GPs are Bayesian machine learning tools that formulate a likelihood function for their hyperparameters, which can then be set optimally. GPs outperform state-of-the-art nonlinear machine learning approaches that prespecify their hyperparameters or rely on cross validation. We illustrate the advantages of GPs as digital communication receivers for linear and nonlinear channel models with short training sequences, and compare them to state-of-the-art nonlinear machine learning tools such as support vector machines.

Leiva-Murillo, Jose M.; Salcedo-Sanz, Sancho; Gallardo-Antolín, Ascensión; Artés-Rodríguez, Antonio: *A Simulated Annealing Approach to Speaker Segmentation in Audio Databases*. Engineering Applications of Artificial Intelligence, 21 (4), pp. 499–508, 2008.

```bibtex
@article{Leiva-Murillo2008,
  title    = {A Simulated Annealing Approach to Speaker Segmentation in Audio Databases},
  author   = {Leiva-Murillo, Jose M. and Salcedo-Sanz, Sancho and Gallardo-Antolín, Ascensión and Artés-Rodríguez, Antonio},
  journal  = {Engineering Applications of Artificial Intelligence},
  volume   = {21},
  number   = {4},
  pages    = {499--508},
  year     = {2008},
  url      = {http://www.sciencedirect.com/science/article/pii/S0952197607000954},
  keywords = {Audio indexing, information theory, Simulated annealing, Speaker segmentation}
}
```

In this paper we present a novel approach to the problem of speaker segmentation, a necessary preliminary step for audio indexing. Mutual information is used to evaluate the accuracy of the segmentation, as the function to be maximized by a simulated annealing (SA) algorithm. We introduce a novel mutation operator for the SA, the Consecutive Bits Mutation operator, which improves the performance of the SA on this problem. We also use the so-called Compaction Factor, which allows the SA to operate in a reduced search space. Our algorithm has been tested on the segmentation of real audio databases and compared to several existing algorithms for speaker segmentation, obtaining very good results on the test problems considered.
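To give a flavor of the search procedure, the toy numpy sketch below runs simulated annealing over a single change point of a mean-shifted signal. The within-segment-variance score and the synthetic "speakers" are illustrative stand-ins for the paper's mutual-information criterion and real audio features, and none of the constants come from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# toy "audio" stream: two speakers modeled as mean-shifted noise,
# true boundary at sample 120 (assumption: stand-in for real features)
signal = np.concatenate((rng.normal(0.0, 1.0, 120),
                         rng.normal(2.5, 1.0, 80)))

def score(boundary):
    # segmentation quality proxy: negative within-segment variance
    # (the paper maximizes mutual information instead)
    left, right = signal[:boundary], signal[boundary:]
    return -(left.var() * left.size + right.var() * right.size)

def simulated_annealing(n_iter=2000, t0=1.0):
    b = int(rng.integers(10, signal.size - 10))
    best_b, best_s = b, score(b)
    for k in range(n_iter):
        t = t0 * (1.0 - k / n_iter) + 1e-6        # linear cooling schedule
        cand = int(np.clip(b + rng.integers(-15, 16), 10, signal.size - 10))
        delta = score(cand) - score(b)
        if delta > 0 or rng.random() < np.exp(delta / t):  # Metropolis step
            b = cand
            if score(b) > best_s:
                best_b, best_s = b, score(b)
    return best_b

boundary = simulated_annealing()
```

The Metropolis acceptance rule lets the search escape poor local boundaries early on, while the cooling schedule makes it greedy near the end; the best boundary found lands close to the true change point at sample 120.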

Vazquez, Manuel A; Bugallo, Monica F; Miguez, Joaquin Sequential Monte Carlo Methods for Complexity-Constrained MAP Equalization of Dispersive MIMO Channels Journal Article Signal Processing, 88 (4), pp. 1017–1034, 2008. Abstract | Links | BibTeX | Tags: joint channel and data estimation, Multiple Input Multiple Output (MIMO), Sequential Monte Carlo (SMC) @article{Vazquez2008b, title = {Sequential Monte Carlo Methods for Complexity-Constrained MAP Equalization of Dispersive MIMO Channels}, author = {Vazquez, Manuel A. and Bugallo, Monica F. and Miguez, Joaquin}, url = {http://www.sciencedirect.com/science/article/pii/S0165168407003763}, year = {2008}, date = {2008-01-01}, journal = {Signal Processing}, volume = {88}, number = {4}, pages = {1017--1034}, abstract = {The ability to perform nearly optimal equalization of multiple input multiple output (MIMO) wireless channels using sequential Monte Carlo (SMC) techniques has recently been demonstrated. SMC methods allow to recursively approximate the a posteriori probabilities of the transmitted symbols, as observations are sequentially collected, using samples from adequate probability distributions. Hence, they are a class of online (adaptive) algorithms, suitable to handle the time-varying channels typical of high speed mobile communication applications. The main drawback of the SMC-based MIMO-channel equalizers so far proposed is that their computational complexity grows exponentially with the number of input data streams and the length of the channel impulse response, rendering these methods impractical. In this paper, we introduce novel SMC schemes that overcome this limitation by the adequate design of proposal probability distribution functions that can be sampled with a lesser computational burden, yet provide a close-to-optimal performance in terms of the resulting equalizer bit error rate and channel estimation error. 
We show that the complexity of the resulting receivers grows polynomially with the number of input data streams and the length of the channel response, and present computer simulation results that illustrate their performance in some typical scenarios.}, keywords = {joint channel and data estimation, Multiple Input Multiple Output (MIMO), Sequential Monte Carlo (SMC)}, pubstate = {published}, tppubtype = {article} } |
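As background for the SMC machinery discussed above, a minimal bootstrap (SIR) particle filter for a scalar toy model can be sketched as follows. The random-walk state model and Gaussian likelihood are placeholders of our choosing, not the paper's MIMO equalizer or its reduced-complexity proposal distributions.

```python
import numpy as np

def bootstrap_filter(obs, n_particles=500, q=0.1, r=1.0, seed=0):
    """Bootstrap (SIR) particle filter for the toy model
    x_t = x_{t-1} + N(0, q^2),  y_t = x_t + N(0, r^2).
    Returns the posterior-mean estimate of x_t at each step."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for y in obs:
        # Propagate particles through the transition prior (random walk here).
        particles = particles + rng.normal(0.0, q, n_particles)
        # Weight each particle by the Gaussian likelihood of the observation.
        logw = -0.5 * ((y - particles) / r) ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        estimates.append(np.sum(w * particles))
        # Multinomial resampling to combat weight degeneracy.
        idx = rng.choice(n_particles, size=n_particles, p=w)
        particles = particles[idx]
    return np.array(estimates)
```

On data simulated from this model, the filtered posterior mean tracks the hidden state far more accurately than the raw observations, which is the basic payoff of the recursive approximation described in the abstract.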

## Inproceedings |

Vazquez-Vilar, Gonzalo ; Majjigi, Vinay ; Sezgin, Aydin ; Paulraj, Arogyaswami Mobility Dependent Feedback Scheme for point-to-point MIMO Systems Inproceedings Asilomar Conference on Signals, Systems, and Computers (Asilomar SSC 2008), Pacific Grove, CA, U.S.A., 2008. BibTeX | Tags: @inproceedings{asilomar2008, title = {Mobility Dependent Feedback Scheme for point-to-point MIMO Systems}, author = {Vazquez-Vilar, Gonzalo and Majjigi, Vinay and Sezgin, Aydin and Paulraj, Arogyaswami}, year = {2008}, date = {2008-10-01}, booktitle = {Asilomar Conference on Signals, Systems, and Computers (Asilomar SSC 2008)}, address = {Pacific Grove, CA, U.S.A.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } |

Koch, Tobias ; Lapidoth, Amos On Multipath Fading Channels at High SNR Inproceedings 2008 IEEE International Symposium on Information Theory, pp. 1572–1576, IEEE, Toronto, 2008, ISBN: 978-1-4244-2256-2. Abstract | Links | BibTeX | Tags: channel capacity, Delay, discrete time systems, discrete-time channels, Entropy, Fading, fading channels, Frequency, Mathematical model, multipath channels, multipath fading channels, noncoherent channel model, Random variables, Signal to noise ratio, signal-to-noise ratios, SNR, statistics, Transmitters @inproceedings{Koch2008, title = {On Multipath Fading Channels at High SNR}, author = {Koch, Tobias and Lapidoth, Amos}, url = {http://ieeexplore.ieee.org/articleDetails.jsp?arnumber=4595252}, isbn = {978-1-4244-2256-2}, year = {2008}, date = {2008-01-01}, booktitle = {2008 IEEE International Symposium on Information Theory}, pages = {1572--1576}, publisher = {IEEE}, address = {Toronto}, abstract = {This paper studies the capacity of discrete-time multipath fading channels. It is assumed that the number of paths is finite, i.e., that the channel output is influenced by the present and by the L previous channel inputs. A noncoherent channel model is considered where neither transmitter nor receiver is cognizant of the fading's realization, but both are aware of its statistics. The focus is on capacity at high signal-to-noise ratios (SNR). In particular, the capacity pre-loglog, defined as the limiting ratio of the capacity to loglog(SNR) as SNR tends to infinity, is studied. 
It is shown that, irrespective of the number of paths L, the capacity pre-loglog is 1.}, keywords = {channel capacity, Delay, discrete time systems, discrete-time channels, Entropy, Fading, fading channels, Frequency, Mathematical model, multipath channels, multipath fading channels, noncoherent channel model, Random variables, Signal to noise ratio, signal-to-noise ratios, SNR, statistics, Transmitters}, pubstate = {published}, tppubtype = {inproceedings} } |
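Written out, the pre-loglog studied in this abstract is the limit

```latex
% capacity pre-loglog, as defined in the abstract
\Lambda \;\triangleq\; \lim_{\mathrm{SNR}\to\infty}
  \frac{C(\mathrm{SNR})}{\log\log \mathrm{SNR}},
```

and the paper's result is that $\Lambda = 1$ for every finite number of paths $L$.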

Vazquez, Manuel A; Miguez, Joaquin A Per-Survivor Processing Algorithm for Maximum Likelihood Equalization of MIMO Channels with Unknown Order Inproceedings 2008 International ITG Workshop on Smart Antennas, pp. 387–391, IEEE, Vienna, 2008, ISBN: 978-1-4244-1756-8. Abstract | Links | BibTeX | Tags: Channel estimation, channel impulse response, computational complexity, Computer science education, Computer Simulation, Degradation, Frequency, frequency-selective multiple-input multiple-output, maximum likelihood detection, maximum likelihood equalization, maximum likelihood estimation, maximum likelihood sequence detection, maximum likelihood sequence estimation, MIMO, MIMO channels, MIMO communication, per-survivor processing algorithm, time-selective channels, Transmitting antennas @inproceedings{Vazquez2008, title = {A Per-Survivor Processing Algorithm for Maximum Likelihood Equalization of MIMO Channels with Unknown Order}, author = {Vazquez, Manuel A. and Miguez, Joaquin}, url = {http://ieeexplore.ieee.org/articleDetails.jsp?arnumber=4475587}, isbn = {978-1-4244-1756-8}, year = {2008}, date = {2008-01-01}, booktitle = {2008 International ITG Workshop on Smart Antennas}, pages = {387--391}, publisher = {IEEE}, address = {Vienna}, abstract = {In the equalization of frequency-selective multiple-input multiple-output (MIMO) channels it is usually assumed that the length of the channel impulse response (CIR), also referred to as the channel order, is known. However, this is not true in most practical situations and, in order to avoid the serious performance degradation that occurs when the CIR length is underestimated, a channel with "more than enough" taps is usually considered. This possibly means overestimating the channel order, and is not desirable since the computational complexity of maximum likelihood sequence detection (MLSD) in frequency-selective channels grows exponentially with the channel order. 
In addition, the higher the channel order considered, the greater the number of channel coefficients that must be estimated from the same set of observations. In this paper, we introduce an algorithm for MLSD that incorporates the full estimation of the MIMO CIR parameters, including its order. The proposed technique is based on the per survivor processing (PSP) methodology; it admits both blind and semiblind implementations, depending on the availability of pilot data, and is designed to work with time-selective channels. Besides the analytical derivation of the algorithm, we provide computer simulation results that illustrate the effectiveness of the resulting receiver.}, keywords = {Channel estimation, channel impulse response, computational complexity, Computer science education, Computer Simulation, Degradation, Frequency, frequency-selective multiple-input multiple-output, maximum likelihood detection, maximum likelihood equalization, maximum likelihood estimation, maximum likelihood sequence detection, maximum likelihood sequence estimation, MIMO, MIMO channels, MIMO communication, per-survivor processing algorithm, time-selective channels, Transmitting antennas}, pubstate = {published}, tppubtype = {inproceedings} } |

Miguez, Joaquin Analysis of a Sequential Monte Carlo Optimization Methodology Inproceedings 16th European Signal Processing Conference (EUSIPCO 2008), Lausanne, 2008. Abstract | Links | BibTeX | Tags: @inproceedings{Miguez2008, title = {Analysis of a Sequential Monte Carlo Optimization Methodology}, author = {Miguez, Joaquin}, url = {http://www.eurasip.org/Proceedings/Eusipco/Eusipco2008/papers/1569105254.pdf}, year = {2008}, date = {2008-01-01}, booktitle = {16th European Signal Processing Conference (EUSIPCO 2008)}, address = {Lausanne}, abstract = {We investigate a family of stochastic exploration methods that has been recently proposed to carry out estimation and prediction in discrete-time random dynamical systems. The key to the novel approach is to identify a cost function whose minima provide valid estimates of the system state at successive time instants. This function is recursively optimized using a sequential Monte Carlo minimization (SMCM) procedure which is similar to standard particle filtering algorithms but does not require an explicit probabilistic model to be imposed on the system. In this paper, we analyze the asymptotic convergence of SMCM methods and show that a properly designed algorithm produces a sequence of system-state estimates with individually minimal contributions to the cost function. We apply the SMCM method to a target tracking problem in order to illustrate how convergence is achieved in the way predicted by the theory.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } |

Perez-Cruz, Fernando Kullback-Leibler Divergence Estimation of Continuous Distributions Inproceedings 2008 IEEE International Symposium on Information Theory, pp. 1666–1670, IEEE, Toronto, 2008, ISBN: 978-1-4244-2256-2. Abstract | Links | BibTeX | Tags: Convergence, density estimation, Density measurement, Entropy, Frequency estimation, H infinity control, information theory, k-nearest-neighbour density estimation, Kullback-Leibler divergence estimation, Machine learning, Mutual information, neuroscience, Random variables, statistical distributions, waiting-times distributions @inproceedings{Perez-Cruz2008, title = {Kullback-Leibler Divergence Estimation of Continuous Distributions}, author = {Perez-Cruz, Fernando}, url = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=4595271}, isbn = {978-1-4244-2256-2}, year = {2008}, date = {2008-01-01}, booktitle = {2008 IEEE International Symposium on Information Theory}, pages = {1666--1670}, publisher = {IEEE}, address = {Toronto}, abstract = {We present a method for estimating the KL divergence between continuous densities and we prove that it converges almost surely. Divergence estimation is typically solved by estimating the densities first. Our main result shows that this intermediate step is unnecessary and that the divergence can be estimated using either the empirical cdf or k-nearest-neighbour density estimation, which does not converge to the true measure for finite k. The convergence proof is based on describing the statistics of our estimator using waiting-times distributions, such as the exponential or Erlang. 
We illustrate the proposed estimators and show how they compare to existing methods based on density estimation, and we also outline how our divergence estimators can be used for solving the two-sample problem.}, keywords = {Convergence, density estimation, Density measurement, Entropy, Frequency estimation, H infinity control, information theory, k-nearest-neighbour density estimation, Kullback-Leibler divergence estimation, Machine learning, Mutual information, neuroscience, Random variables, statistical distributions, waiting-times distributions}, pubstate = {published}, tppubtype = {inproceedings} } |
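The k-nearest-neighbour divergence estimator discussed in this abstract can be sketched in a few lines. This is the standard k-NN form with brute-force distances; the waiting-time convergence analysis is the paper's contribution and is not reproduced here.

```python
import numpy as np

def knn_kl_divergence(x, y, k=1):
    """k-NN estimate of D(P||Q) from samples x ~ P (n x d) and y ~ Q (m x d):
    (d/n) * sum_i log(nu_k(i) / rho_k(i)) + log(m / (n - 1)),
    where rho_k(i) is the k-th nearest-neighbour distance of x_i within x
    (excluding x_i itself) and nu_k(i) is its k-th NN distance to y."""
    x, y = np.atleast_2d(x), np.atleast_2d(y)
    n, d = x.shape
    m = y.shape[0]
    dxx = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    np.fill_diagonal(dxx, np.inf)           # exclude the self-distance
    dxy = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)
    rho = np.sort(dxx, axis=1)[:, k - 1]    # k-th NN of x_i within x \ {x_i}
    nu = np.sort(dxy, axis=1)[:, k - 1]     # k-th NN of x_i within y
    return d / n * np.sum(np.log(nu / rho)) + np.log(m / (n - 1))
```

For example, for samples from N(0,1) and N(1,1) the true divergence is 1/2, and the estimate approaches it as the sample sizes grow, even though the underlying k-NN density estimate is not consistent for fixed k.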

Perez-Cruz, Fernando ; Rodrigues, Miguel R D; Verdu, Sergio Optimal Precoding for Digital Subscriber Lines Inproceedings 2008 IEEE International Conference on Communications, pp. 1200–1204, IEEE, Beijing, 2008, ISBN: 978-1-4244-2075-9. Abstract | Links | BibTeX | Tags: Bit error rate, channel matrix diagonalization, Communications Society, Computer science, digital subscriber lines, DSL, Equations, fixed-point equation, Gaussian channels, least mean squares methods, linear codes, matrix algebra, MIMO, MIMO communication, MIMO Gaussian channel, minimum mean squared error method, MMSE, multiple-input multiple-output communication, Mutual information, optimal linear precoder, precoding, Telecommunications, Telephony @inproceedings{Perez-Cruz2008a, title = {Optimal Precoding for Digital Subscriber Lines}, author = {Perez-Cruz, Fernando and Rodrigues, Miguel R. D. and Verdu, Sergio}, url = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=4533270}, isbn = {978-1-4244-2075-9}, year = {2008}, date = {2008-01-01}, booktitle = {2008 IEEE International Conference on Communications}, pages = {1200--1204}, publisher = {IEEE}, address = {Beijing}, abstract = {We determine the linear precoding policy that maximizes the mutual information for general multiple-input multiple-output (MIMO) Gaussian channels with arbitrary input distributions, by capitalizing on the relationship between mutual information and minimum mean squared error (MMSE). The optimal linear precoder can be computed by means of a fixed-point equation as a function of the channel and the input constellation. We show that diagonalizing the channel matrix does not maximize the information transmission rate for nonGaussian inputs. A full precoding matrix may significantly increase the information transmission rate, even for parallel non-interacting channels. 
We illustrate the application of our results to typical Gigabit DSL systems.}, keywords = {Bit error rate, channel matrix diagonalization, Communications Society, Computer science, digital subscriber lines, DSL, Equations, fixed-point equation, Gaussian channels, least mean squares methods, linear codes, matrix algebra, MIMO, MIMO communication, MIMO Gaussian channel, minimum mean squared error method, MMSE, multiple-input multiple-output communication, Mutual information, optimal linear precoder, precoding, Telecommunications, Telephony}, pubstate = {published}, tppubtype = {inproceedings} } |

Koch, Tobias ; Lapidoth, Amos Multipath Channels of Bounded Capacity Inproceedings 2008 IEEE Information Theory Workshop, pp. 6–10, IEEE, Oporto, 2008, ISBN: 978-1-4244-2269-2. Abstract | Links | BibTeX | Tags: @inproceedings{Koch2008a, title = {Multipath Channels of Bounded Capacity}, author = {Koch, Tobias and Lapidoth, Amos}, url = {http://www.researchgate.net/publication/4353168_Multipath_channels_of_bounded_capacity}, isbn = {978-1-4244-2269-2}, year = {2008}, date = {2008-01-01}, booktitle = {2008 IEEE Information Theory Workshop}, pages = {6--10}, publisher = {IEEE}, address = {Oporto}, abstract = {The capacity of discrete-time, non-coherent, multi-path fading channels is considered. It is shown that if the delay spread is large in the sense that the variances of the path gains do not decay faster than geometrically, then capacity is bounded in the signal-to-noise ratio.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } |

Leiva-Murillo, Jose M; Artés-Rodríguez, Antonio Linear Dimensionality Reduction With Gaussian Mixture Models Inproceedings Cognitive Information Processing (CIP) 2008, Santorini, 2008. Abstract | Links | BibTeX | Tags: @inproceedings{JoseM.Leiva-murillo2008, title = {Linear Dimensionality Reduction With Gaussian Mixture Models}, author = {Leiva-Murillo, Jose M. and Artés-Rodríguez, Antonio}, url = {http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.167.798}, year = {2008}, date = {2008-01-01}, booktitle = {Cognitive Information Processing (CIP) 2008}, address = {Santorini}, abstract = {In this paper, we explore the application of several information-theoretic criteria to the problem of reducing the dimension in pattern recognition. We consider the use of Gaussian mixture models for estimating the distribution of the data. Three algorithms are proposed for linear feature extraction by the maximization of the mutual information, the likelihood, or the hypothesis test, respectively. The experiments show that the proposed methods outperform the classical methods based on parametric Gaussian models, and avoid the intense computational complexity of nonparametric kernel density estimators.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } |

Koch, Tobias ; Lapidoth, Amos Multipath Channels of Unbounded Capacity Inproceedings 2008 IEEE 25th Convention of Electrical and Electronics Engineers in Israel, pp. 640–644, IEEE, Eilat, 2008, ISBN: 978-1-4244-2481-8. Abstract | Links | BibTeX | Tags: channel capacity, discrete-time capacity, Entropy, Fading, fading channels, Frequency, H infinity control, Information rates, multipath channels, multipath fading channels, noncoherent, noncoherent capacity, path gains decay, Signal to noise ratio, statistics, Transmitters, unbounded capacity @inproceedings{Koch2008b, title = {Multipath Channels of Unbounded Capacity}, author = {Koch, Tobias and Lapidoth, Amos}, url = {http://ieeexplore.ieee.org/articleDetails.jsp?arnumber=4736611}, isbn = {978-1-4244-2481-8}, year = {2008}, date = {2008-01-01}, booktitle = {2008 IEEE 25th Convention of Electrical and Electronics Engineers in Israel}, pages = {640--644}, publisher = {IEEE}, address = {Eilat}, abstract = {The capacity of discrete-time, noncoherent, multipath fading channels is considered. It is shown that if the variances of the path gains decay faster than exponentially, then capacity is unbounded in the transmit power.}, keywords = {channel capacity, discrete-time capacity, Entropy, Fading, fading channels, Frequency, H infinity control, Information rates, multipath channels, multipath fading channels, noncoherent, noncoherent capacity, path gains decay, Signal to noise ratio, statistics, Transmitters, unbounded capacity}, pubstate = {published}, tppubtype = {inproceedings} } |

Rodrigues, Miguel R D; Perez-Cruz, Fernando ; Verdu, Sergio Multiple-Input Multiple-Output Gaussian Channels: Optimal Covariance for Non-Gaussian Inputs Inproceedings 2008 IEEE Information Theory Workshop, pp. 445–449, IEEE, Porto, 2008, ISBN: 978-1-4244-2269-2. Abstract | Links | BibTeX | Tags: Binary phase shift keying, covariance matrices, Covariance matrix, deterministic MIMO Gaussian channel, fixed-point equation, Gaussian channels, Gaussian noise, Information rates, intersymbol interference, least mean squares methods, Magnetic recording, mercury-waterfilling power allocation policy, MIMO, MIMO communication, minimum mean-squared error, MMSE, MMSE matrix, multiple-input multiple-output system, Multiple-Input Multiple-Output Systems, Mutual information, Optimal Input Covariance, Optimization, Telecommunications @inproceedings{Rodrigues2008, title = {Multiple-Input Multiple-Output Gaussian Channels: Optimal Covariance for Non-Gaussian Inputs}, author = {Rodrigues, Miguel R. D. and Perez-Cruz, Fernando and Verdu, Sergio}, url = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=4578704}, isbn = {978-1-4244-2269-2}, year = {2008}, date = {2008-01-01}, booktitle = {2008 IEEE Information Theory Workshop}, pages = {445--449}, publisher = {IEEE}, address = {Porto}, abstract = {We investigate the input covariance that maximizes the mutual information of deterministic multiple-input multiple-output (MIMO) Gaussian channels with arbitrary (not necessarily Gaussian) input distributions, by capitalizing on the relationship between the gradient of the mutual information and the minimum mean-squared error (MMSE) matrix. We show that the optimal input covariance satisfies a simple fixed-point equation involving key system quantities, including the MMSE matrix. We also specialize the form of the optimal input covariance to the asymptotic regimes of low and high snr. 
We demonstrate that in the low-snr regime the optimal covariance fully correlates the inputs to better combat noise. In contrast, in the high-snr regime the optimal covariance is diagonal with diagonal elements obeying the generalized mercury/waterfilling power allocation policy. Numerical results illustrate that covariance optimization may lead to significant gains with respect to conventional strategies based on channel diagonalization followed by mercury/waterfilling or waterfilling power allocation, particularly in the regimes of medium and high snr.}, keywords = {Binary phase shift keying, covariance matrices, Covariance matrix, deterministic MIMO Gaussian channel, fixed-point equation, Gaussian channels, Gaussian noise, Information rates, intersymbol interference, least mean squares methods, Magnetic recording, mercury-waterfilling power allocation policy, MIMO, MIMO communication, minimum mean-squared error, MMSE, MMSE matrix, multiple-input multiple-output system, Multiple-Input Multiple-Output Systems, Mutual information, Optimal Input Covariance, Optimization, Telecommunications}, pubstate = {published}, tppubtype = {inproceedings} } |
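For parallel noninterfering channels, the mercury/waterfilling policy that this abstract generalizes admits a compact statement through the I-MMSE relationship. The following is background on the classical condition (due to Lozano, Tulino and Verdú), not the paper's new fixed-point equation:

```latex
% Maximize \sum_i I_i(\gamma_i p_i) subject to \sum_i p_i \le P.
% Since dI_i/dp_i = \gamma_i \, \mathrm{mmse}_i(\gamma_i p_i), the optimum satisfies
\gamma_i \, \mathrm{mmse}_i(\gamma_i p_i) = \eta \quad \text{if } p_i > 0,
\qquad
\gamma_i \, \mathrm{mmse}_i(0) \le \eta \quad \text{if } p_i = 0,
```

where $\gamma_i$ are the channel gains, $\mathrm{mmse}_i$ is the MMSE of estimating the $i$-th input from its noisy output, and the level $\eta$ is set by the total power constraint.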

Leiva-Murillo, Jose M; Artés-Rodríguez, Antonio Algorithms for Gaussian Bandwidth Selection in Kernel Density Estimators Inproceedings NIPS 2008, Workshop on Optimization for Machine Learning, Vancouver, 2008. Abstract | Links | BibTeX | Tags: @inproceedings{Leiva-Murillo2008a, title = {Algorithms for Gaussian Bandwidth Selection in Kernel Density Estimators}, author = {Leiva-Murillo, Jose M. and Artés-Rodríguez, Antonio}, url = {http://www.researchgate.net/publication/228859873_Algorithms_for_gaussian_bandwidth_selection_in_kernel_density_estimators}, year = {2008}, date = {2008-01-01}, booktitle = {NIPS 2008, Workshop on Optimization for Machine Learning}, address = {Vancouver}, abstract = {In this paper we study the classical statistical problem of choosing an appropriate bandwidth for Kernel Density Estimators. For the special case of the Gaussian kernel, two algorithms are proposed for the spherical covariance matrix and for the general case, respectively. These methods avoid the unsatisfactory procedure of tuning the bandwidth while evaluating the likelihood, which is impractical with multivariate data in the general case. The convergence conditions are provided together with the proposed algorithms. We measure the accuracy of the models obtained by a set of classification experiments.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } |

de-Prado-Cumplido, Mario ; Artés-Rodríguez, Antonio SVM Discovery of Causation Direction by Machine Learning Techniques Inproceedings NIPS’08, Workshop on Causality, Vancouver, 2008. BibTeX | Tags: @inproceedings{Mariode-Prado-Cumplido2008, title = {SVM Discovery of Causation Direction by Machine Learning Techniques}, author = {de-Prado-Cumplido, Mario and Artés-Rodríguez, Antonio}, year = {2008}, date = {2008-01-01}, booktitle = {NIPS’08, Workshop on Causality}, address = {Vancouver}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } |

Martinez Ruiz, Manuel ; Artés-Rodríguez, Antonio ; Sabatini, R Progressive Still Image Transmission over a Tactical Data Link Network Inproceedings RTO 2008 Information Systems Technology Panel (IST) Symposium, Prague, 2008. @inproceedings{MartinezRuiz2008, title = {Progressive Still Image Transmission over a Tactical Data Link Network}, author = {Martinez Ruiz, Manuel and Artés-Rodríguez, Antonio and Sabatini, R.}, year = {2008}, date = {2008-01-01}, booktitle = {RTO 2008 Information Systems Technology Panel (IST) Symposium}, address = {Prague}, abstract = {Future military communications will be required to provide higher data capacity and wider bandwidth in real time, together with greater flexibility, reliability, robustness and seamless networking capabilities. The next generation of communication systems and standards should be able to perform well in a littoral combat environment with a high density of civilian emissions and “ad-hoc” spot jammers. In this operational context it is extremely important to ensure the proper performance of the information grid and to provide not all the available information but only the required information in real time, either by broadcasting or upon demand, with the best possible “quality of service”. Existing tactical data link systems and standards have been designed to convey mainly textual information such as surveillance and identification data, electronic warfare parameters, aircraft control information and coded voice. Future tactical data link systems and standards should take into consideration the multimedia nature of most of the dispersed and “fuzzy” information available in the battlefield, correlating the ISR components so as to better contribute to Network Centric Operations. For this to be accomplished, new wideband coalition waveforms should be developed and new coding and image compression standards should be taken into account, such as MPEG-7 (Multimedia Content Description Interface), MPEG-21, JPEG2000 and many others.
In the meantime it is important to find new applications for the current tactical data links in order to better exploit their capabilities and to overcome or minimize their limitations.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } |

Bravo-Santos, Ángel M Multireception Systems in Mobile Environments Inproceedings 2008 International Workshop on Advances in Communications, Victoria BC, 2008. BibTeX | Tags: @inproceedings{Bravo-Santos2008, title = {Multireception Systems in Mobile Environments}, author = {Bravo-Santos, Ángel M.}, year = {2008}, date = {2008-01-01}, booktitle = {2008 International Workshop on Advances in Communications}, address = {Victoria BC}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } |

Plata-Chaves, Jorge ; Lázaro, Marcelino ; Artés-Rodríguez, Antonio Decentralized Detection in a Dense Wireless Sensor Network with Correlated Observations Inproceedings International Workshop on Information Theory for Sensor Networks (WITS 2008), Santorini, 2008. @inproceedings{Plata-Chaves2008, title = {Decentralized Detection in a Dense Wireless Sensor Network with Correlated Observations}, author = {Plata-Chaves, Jorge and Lázaro, Marcelino and Artés-Rodríguez, Antonio}, url = {http://www.dcc.fc.up.pt/wits08/wits-advance-program.pdf}, year = {2008}, date = {2008-01-01}, booktitle = {International Workshop on Information Theory for Sensor Networks (WITS 2008)}, address = {Santorini}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } |

Santiago-Mozos, Ricardo ; Fernandez-Lorenzana, R; Perez-Cruz, Fernando ; Artés-Rodríguez, Antonio On the Uncertainty in Sequential Hypothesis Testing Inproceedings 2008 5th IEEE International Symposium on Biomedical Imaging: From Nano to Macro, pp. 1223–1226, IEEE, Paris, 2008, ISBN: 978-1-4244-2002-5. Abstract | Links | BibTeX | Tags: binary hypothesis test, Biomedical imaging, Detectors, H infinity control, likelihood ratio, Medical diagnostic imaging, medical image application, medical image processing, Medical tests, patient diagnosis, Probability, Random variables, Sequential analysis, sequential hypothesis testing, sequential probability ratio test, Signal processing, Testing, tuberculosis diagnosis, Uncertainty @inproceedings{Santiago-Mozos2008, title = {On the Uncertainty in Sequential Hypothesis Testing}, author = {Santiago-Mozos, Ricardo and Fernandez-Lorenzana, R. and Perez-Cruz, Fernando and Artés-Rodríguez, Antonio}, url = {http://ieeexplore.ieee.org/articleDetails.jsp?arnumber=4541223}, isbn = {978-1-4244-2002-5}, year = {2008}, date = {2008-01-01}, booktitle = {2008 5th IEEE International Symposium on Biomedical Imaging: From Nano to Macro}, pages = {1223--1226}, publisher = {IEEE}, address = {Paris}, abstract = {We consider the problem of sequential hypothesis testing when the exact pdfs are not known but, instead, a set of iid samples is used to describe the hypotheses. We modify the classical test by introducing a likelihood ratio interval which accommodates the uncertainty in the pdfs. The test finishes when the whole likelihood ratio interval crosses one of the thresholds, and it reduces to the classical test as the number of samples describing the hypotheses tends to infinity. We illustrate the performance of this test in a medical image application related to tuberculosis diagnosis. We show in this example how the test confidence level can be accurately determined.}, keywords = {binary hypothesis test, Biomedical imaging, Detectors, H infinity control, likelihood ratio, Medical diagnostic imaging, medical image application, medical image processing, Medical tests, patient diagnosis, Probability, Random variables, Sequential analysis, sequential hypothesis testing, sequential probability ratio test, Signal processing, Testing, tuberculosis diagnosis, Uncertainty}, pubstate = {published}, tppubtype = {inproceedings} } |
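For context, the classical test that this paper modifies (Wald's sequential probability ratio test with exactly known pdfs) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' interval-based variant; the function name and the toy data are hypothetical.

```python
import math

def sprt(samples, logpdf0, logpdf1, alpha=0.05, beta=0.05):
    """Classical Wald sequential probability ratio test.

    Accumulates the log-likelihood ratio sample by sample and stops as
    soon as it crosses the upper threshold (accept H1) or the lower
    threshold (accept H0). alpha and beta are the target error rates.
    """
    upper = math.log((1 - beta) / alpha)  # accept H1 at or above this
    lower = math.log(beta / (1 - alpha))  # accept H0 at or below this
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        llr += logpdf1(x) - logpdf0(x)
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", len(samples)

# Toy example (hypothetical data): H0 is N(0, 1), H1 is N(1, 1).
# Additive constants of the Gaussian log-pdf cancel in the ratio.
data = [1.2, 0.8, 1.5, 0.9, 1.1, 1.3, 0.7, 1.4]
decision, n_used = sprt(data,
                        lambda x: -0.5 * x ** 2,
                        lambda x: -0.5 * (x - 1.0) ** 2)
print(decision, n_used)  # -> H1 5 (decides for H1 after five samples)
```

In the paper's variant, a whole interval of likelihood ratios, which reflects the uncertainty introduced by describing each pdf with iid samples, must lie beyond a threshold before the test stops.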

Vila-Forcen, J E; Artés-Rodríguez, Antonio ; Garcia-Frias, J Compressive Sensing Detection of Stochastic Signals Inproceedings 2008 42nd Annual Conference on Information Sciences and Systems, pp. 956–960, IEEE, Princeton, 2008, ISBN: 978-1-4244-2246-3. Abstract | Links | BibTeX | Tags: Additive white noise, AWGN, compressive sensing detection, dimensionality reduction techniques, Distortion measurement, Gaussian noise, matrix algebra, Mutual information, optimized projections, projection matrix, signal detection, Signal processing, signal reconstruction, Stochastic processes, stochastic signals, Support vector machine classification, Support vector machines, SVM @inproceedings{Vila-Forcen2008, title = {Compressive Sensing Detection of Stochastic Signals}, author = {Vila-Forcen, J.E. and Artés-Rodríguez, Antonio and Garcia-Frias, J.}, url = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=4558656}, isbn = {978-1-4244-2246-3}, year = {2008}, date = {2008-01-01}, booktitle = {2008 42nd Annual Conference on Information Sciences and Systems}, pages = {956--960}, publisher = {IEEE}, address = {Princeton}, abstract = {Inspired by recent work in compressive sensing, we propose a framework for the detection of stochastic signals from optimized projections. In order to generate a good projection matrix, we use dimensionality reduction techniques based on the maximization of the mutual information between the projected signals and their corresponding class labels. In addition, classification techniques based on support vector machines (SVMs) are applied for the final decision process. 
Simulation results show that the realizations of the stochastic process are detected with higher accuracy and lower complexity than a scheme performing signal reconstruction first, followed by detection based on the reconstructed signal.}, keywords = {Additive white noise, AWGN, compressive sensing detection, dimensionality reduction techniques, Distortion measurement, Gaussian noise, matrix algebra, Mutual information, optimized projections, projection matrix, signal detection, Signal processing, signal reconstruction, Stochastic processes, stochastic signals, Support vector machine classification, Support vector machines, SVM}, pubstate = {published}, tppubtype = {inproceedings} } |

Perez-Cruz, Fernando Estimation of Information Theoretic Measures for Continuous Random Variables Inproceedings Advances in Neural Information Processing Systems, pp. 1257–1264, Vancouver, 2008. @inproceedings{Perez-Cruz2008b, title = {Estimation of Information Theoretic Measures for Continuous Random Variables}, author = {Perez-Cruz, Fernando}, url = {http://papers.nips.cc/paper/3417-estimation-of-information-theoretic-measures-for-continuous-random-variables}, year = {2008}, date = {2008-01-01}, booktitle = {Advances in Neural Information Processing Systems}, pages = {1257--1264}, address = {Vancouver}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } |

## 2007 |

## Journal Articles |

Leiva-Murillo, Jose M; Artés-Rodríguez, Antonio Maximization of Mutual Information for Supervised Linear Feature Extraction Journal Article IEEE Transactions on Neural Networks, 18 (5), pp. 1433–1441, 2007, ISSN: 1045-9227. Abstract | Links | BibTeX | Tags: Algorithms, Artificial Intelligence, Automated, component-by-component gradient-ascent method, Computer Simulation, Data Mining, Entropy, Feature extraction, gradient methods, gradient-based entropy, Independent component analysis, Information Storage and Retrieval, information theory, Iron, learning (artificial intelligence), Linear discriminant analysis, Linear Models, Mutual information, Optimization methods, Pattern recognition, Reproducibility of Results, Sensitivity and Specificity, supervised linear feature extraction, Vectors @article{Leiva-Murillo2007, title = {Maximization of Mutual Information for Supervised Linear Feature Extraction}, author = {Leiva-Murillo, Jose M. and Artés-Rodríguez, Antonio}, url = {http://ieeexplore.ieee.org/articleDetails.jsp?arnumber=4298118}, issn = {1045-9227}, year = {2007}, date = {2007-01-01}, journal = {IEEE Transactions on Neural Networks}, volume = {18}, number = {5}, pages = {1433--1441}, publisher = {IEEE}, abstract = {In this paper, we present a novel scheme for linear feature extraction in classification. The method is based on the maximization of the mutual information (MI) between the features extracted and the classes. The sum of the MI corresponding to each of the features is taken as a heuristic that approximates the MI of the whole output vector. Then, a component-by-component gradient-ascent method is proposed for the maximization of the MI, similar to the gradient-based entropy optimization used in independent component analysis (ICA). The simulation results show that not only is the method competitive when compared to existing supervised feature extraction methods in all cases studied, but it also remarkably outperforms them when the data are characterized by strongly nonlinear boundaries between classes.}, keywords = {Algorithms, Artificial Intelligence, Automated, component-by-component gradient-ascent method, Computer Simulation, Data Mining, Entropy, Feature extraction, gradient methods, gradient-based entropy, Independent component analysis, Information Storage and Retrieval, information theory, Iron, learning (artificial intelligence), Linear discriminant analysis, Linear Models, Mutual information, Optimization methods, Pattern recognition, Reproducibility of Results, Sensitivity and Specificity, supervised linear feature extraction, Vectors}, pubstate = {published}, tppubtype = {article} } |