## 2018

## Journal Articles

Koch, Tobias; Vazquez-Vilar, Gonzalo. "A Rigorous Approach to High-Resolution Entropy-Constrained Vector Quantization." IEEE Transactions on Information Theory, 64(4), pp. 2609–2625, 2018. ISSN: 0018-9448. DOI: 10.1109/TIT.2018.2803064.

## 2014

## Journal Articles

Alvarado, Alex; Brannstrom, Fredrik; Agrell, Erik; Koch, Tobias. "High-SNR Asymptotics of Mutual Information for Discrete Constellations With Applications to BICM." IEEE Transactions on Information Theory, 60(2), pp. 1061–1076, 2014. ISSN: 0018-9448. URL: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6671479; http://www.tsc.uc3m.es/~koch/files/IEEE_TIT_60%282%29.pdf

Abstract: Asymptotic expressions of the mutual information between any discrete input and the corresponding output of the scalar additive white Gaussian noise channel are presented in the limit as the signal-to-noise ratio (SNR) tends to infinity. Asymptotic expressions of the symbol-error probability (SEP) and the minimum mean-square error (MMSE) achieved by estimating the channel input given the channel output are also developed. It is shown that for any input distribution, the conditional entropy of the channel input given the output, MMSE, and SEP have an asymptotic behavior proportional to the Gaussian Q-function. The argument of the Q-function depends only on the minimum Euclidean distance (MED) of the constellation and the SNR, and the proportionality constants are functions of the MED and the probabilities of the pairs of constellation points at MED. The developed expressions are then generalized to study the high-SNR behavior of the generalized mutual information (GMI) for bit-interleaved coded modulation (BICM). By means of these asymptotic expressions, the long-standing conjecture that Gray codes are the binary labelings that maximize the BICM-GMI at high SNR is proven. It is further shown that for any equally spaced constellation whose size is a power of two, there always exists an anti-Gray code giving the lowest BICM-GMI at high SNR.
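The shape of these asymptotics can be written schematically. The rendering below is a sketch, not the paper's exact statement: it assumes the normalization $Y = \sqrt{\mathsf{snr}}\,X + Z$ with $Z \sim \mathcal{N}(0,1)$ and a unit-average-energy constellation with minimum Euclidean distance $d$; the constants $K_{\mathrm{H}}$, $K_{\mathrm{M}}$, $K_{\mathrm{S}}$ depend on $d$ and on the probabilities of the constellation-point pairs at distance $d$, as stated in the abstract.

```latex
H(X \mid Y) \sim K_{\mathrm{H}}\, Q\!\Bigl(\tfrac{d}{2}\sqrt{\mathsf{snr}}\Bigr), \qquad
\mathrm{MMSE}(\mathsf{snr}) \sim K_{\mathrm{M}}\, Q\!\Bigl(\tfrac{d}{2}\sqrt{\mathsf{snr}}\Bigr), \qquad
\mathrm{SEP}(\mathsf{snr}) \sim K_{\mathrm{S}}\, Q\!\Bigl(\tfrac{d}{2}\sqrt{\mathsf{snr}}\Bigr),
\quad \mathsf{snr} \to \infty .
```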

Pastore, Adriano; Koch, Tobias; Fonollosa, Javier Rodriguez. "A Rate-Splitting Approach to Fading Channels With Imperfect Channel-State Information." IEEE Transactions on Information Theory, 60(7), pp. 4266–4285, 2014. ISSN: 0018-9448. URL: http://ieeexplore.ieee.org/articleDetails.jsp?arnumber=6832779; http://www.tsc.uc3m.es/~koch/files/IEEE_TIT_60(7).pdf; http://arxiv.org/pdf/1301.6120.pdf

Abstract: As shown by Médard, the capacity of fading channels with imperfect channel-state information can be lower-bounded by assuming a Gaussian channel input $X$ with power $P$ and by upper-bounding the conditional entropy $h(X \mid Y, \hat{H})$ by the entropy of a Gaussian random variable with variance equal to the linear minimum mean-square error in estimating $X$ from $(Y, \hat{H})$. We demonstrate that, using a rate-splitting approach, this lower bound can be sharpened: by expressing the Gaussian input $X$ as the sum of two independent Gaussian variables $X_1$ and $X_2$, by applying Médard's lower bound first to bound the mutual information between $X_1$ and $Y$ while treating $X_2$ as noise, and by applying it a second time to the mutual information between $X_2$ and $Y$ while assuming $X_1$ to be known, we obtain a capacity lower bound that is strictly larger than Médard's lower bound. We then generalize this approach to an arbitrary number $L$ of layers, where $X$ is expressed as the sum of $L$ independent Gaussian random variables of respective variances $P_\ell$, $\ell = 1, \dotsc, L$, summing up to $P$. Among all such rate-splitting bounds, we determine the supremum over power allocations $P_\ell$ and total number of layers $L$. This supremum is achieved for $L \to \infty$ and gives rise to an analytically expressible capacity lower bound. For Gaussian fading, this novel bound is shown to converge to the Gaussian-input mutual information as the signal-to-noise ratio (SNR) grows, provided that the variance of the channel estimation error $H - \hat{H}$ tends to zero as the SNR tends to infinity.
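The reason splitting cannot hurt is the chain rule of mutual information. The identity below is a standard step consistent with the construction described in the abstract (notation assumed here: $X = X_1 + X_2$ with $X_1$, $X_2$ independent Gaussians, and $Y$ depending on $(X_1, X_2)$ only through their sum $X$); the paper's contribution is showing that applying Médard's bound to each term on the right yields a strictly larger total than applying it once to the left-hand side.

```latex
I(X;\,Y \mid \hat{H})
  \;=\; I(X_1, X_2;\, Y \mid \hat{H})
  \;=\; I(X_1;\,Y \mid \hat{H}) \;+\; I(X_2;\,Y \mid X_1, \hat{H}) .
```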

## Conference Papers

Koch, Tobias. "On the Dither-Quantized Gaussian Channel at Low SNR." In 2014 IEEE International Symposium on Information Theory, pp. 186–190, IEEE, Honolulu, 2014. ISBN: 978-1-4799-5186-4. URL: http://ieeexplore.ieee.org/articleDetails.jsp?arnumber=6874820

Abstract: We study the capacity of the peak-and-average-power-limited Gaussian channel when its output is quantized using a dithered, infinite-level, uniform quantizer of step size $\Delta$. We focus on the low signal-to-noise-ratio (SNR) regime, where communication at low spectral efficiencies takes place. We show that, when the peak-power constraint is absent, the low-SNR asymptotic capacity is equal to that of the unquantized channel irrespective of $\Delta$. We further derive an expression for the low-SNR asymptotic capacity for finite peak-to-average-power ratios and evaluate it in the low- and high-resolution limits. We demonstrate that, in this case, the low-SNR asymptotic capacity converges to that of the unquantized channel when $\Delta$ tends to zero, and it tends to zero when $\Delta$ tends to infinity.
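For orientation, one standard formulation of a dither-quantized Gaussian channel is sketched below; this is an assumption for illustration, not quoted from the paper, whose model may be defined differently (e.g., with the dither subtracted at the receiver).

```latex
% AWGN followed by an infinite-level uniform quantizer of step size
% \Delta with dither D ~ Unif(-\Delta/2, \Delta/2), independent of the
% input X and the noise Z and known at the receiver:
V \;=\; q_\Delta\bigl(X + Z + D\bigr),
\qquad
q_\Delta(y) \;=\; \Delta \left\lfloor \frac{y}{\Delta} + \frac{1}{2} \right\rfloor .
```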

## 2013

## Conference Papers

Alvarado, Alex; Brannstrom, Fredrik; Agrell, Erik; Koch, Tobias. "High-SNR Asymptotics of Mutual Information for Discrete Constellations." In 2013 IEEE International Symposium on Information Theory, pp. 2274–2278, IEEE, Istanbul, 2013. ISSN: 2157-8095. URL: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6620631

Abstract: The asymptotic behavior of the mutual information (MI) at high signal-to-noise ratio (SNR) for discrete constellations over the scalar additive white Gaussian noise channel is studied. Exact asymptotic expressions for the MI for arbitrary one-dimensional constellations and input distributions are presented in the limit as the SNR tends to infinity. Asymptotics of the minimum mean-square error (MMSE) are also developed. It is shown that for any input distribution, the MI and the MMSE have an asymptotic behavior proportional to a Gaussian Q-function, whose argument depends on the minimum Euclidean distance of the constellation and the SNR. Closed-form expressions for the coefficients of these Q-functions are calculated.

## 2012

## Journal Articles

Leiva-Murillo, Jose M.; Artés-Rodríguez, Antonio. "Information-Theoretic Linear Feature Extraction Based on Kernel Density Estimators: A Review." IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 42(6), pp. 1180–1189, 2012. ISSN: 1094-6977. URL: http://www.tsc.uc3m.es/~antonio/papers/P44_2012_Information Theoretic Linear Feature Extraction Based on Kernel Density Estimators A Review.pdf; http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6185689

Abstract: In this paper, we provide a unified study of the application of kernel density estimators to supervised linear feature extraction by means of criteria inspired by information and detection theory. We enrich this study by incorporating two novel criteria, the mutual information and the likelihood ratio test, and perform both a theoretical and an experimental comparison between the new methods and others previously described in the literature. The impact of the bandwidth selection of the density estimator on the classification performance is discussed. Some theoretical results that bound classification performance as a function of mutual information are also compiled. A set of experiments on different real-world datasets allows us to perform an empirical comparison of the methods, in terms of both accuracy and computational complexity. We show the suitability of these methods for determining the dimension of the subspace that contains the discriminative information.
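As a concrete illustration of the kind of criterion the review studies, the sketch below estimates the mutual information between a one-dimensional linear projection and the class label using Gaussian kernel density estimates. It is a minimal plug-in estimator written for this summary; the bandwidth selection and the exact estimators compared in the paper differ.

```python
import numpy as np
from scipy.stats import gaussian_kde

def mi_criterion(w, X, y):
    """Plug-in estimate of I(w^T x; C) for a linear projection w.

    A minimal sketch (not the paper's exact estimator): the marginal and
    class-conditional densities of the projected feature are replaced by
    Gaussian KDEs, and the expectation is taken over the samples.
    """
    z = X @ w                              # 1-D projected feature
    p_z = gaussian_kde(z)                  # KDE of the marginal p(z)
    mi = 0.0
    for c in np.unique(y):
        zc = z[y == c]
        p_zc = gaussian_kde(zc)            # KDE of p(z | class c)
        prior = zc.size / z.size           # empirical class prior
        # Monte Carlo average of log p(z|c)/p(z) over class-c samples
        mi += prior * np.mean(np.log(p_zc(zc) / p_z(zc)))
    return mi                              # in nats
```

A gradient-based feature extractor would ascend this criterion over $w$ (or over the columns of a projection matrix), which is the setting in which the review compares the different criteria.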

## Conference Papers

Koch, Tobias; Martinez, Alfonso; Guillen i Fabregas, Albert. "The Capacity Loss of Dense Constellations." In 2012 IEEE International Symposium on Information Theory Proceedings, pp. 572–576, IEEE, Cambridge, MA, 2012. ISSN: 2157-8095. URL: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6283482

Abstract: We determine the loss in capacity incurred by using signal constellations with a bounded support over general complex-valued additive-noise channels for suitably high signal-to-noise ratio. Our expression for the capacity loss recovers the power loss of 1.53 dB for square signal constellations.
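The 1.53 dB figure is consistent with the classical shaping loss: at high SNR, a uniform input on a square region costs a factor $\pi e / 6$ in power relative to a Gaussian input, and

```latex
10 \log_{10}\!\left(\frac{\pi e}{6}\right) \;\approx\; 1.53\ \text{dB}.
```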

Taborda, Camilo G.; Perez-Cruz, Fernando. "Derivative of the Relative Entropy over the Poisson and Binomial Channel." In 2012 IEEE Information Theory Workshop, pp. 386–390, IEEE, Lausanne, 2012. ISBN: 978-1-4673-0223-4. URL: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6404699

Abstract: In this paper it is found that, regardless of the statistics of the input, the derivative of the relative entropy over the Binomial channel can be seen as the expectation of a function whose argument is the mean of the conditional distribution that models the channel. Based on this relationship, we formulate a similar expression for the mutual information. In addition, using the connection between the Binomial and Poisson distributions, we develop similar results for the Poisson channel. The novelty of the results presented here lies in the fact that the expressions obtained can be applied to a wide range of scenarios.

Pastore, Adriano; Koch, Tobias; Fonollosa, Javier Rodriguez. "Improved Capacity Lower Bounds for Fading Channels with Imperfect CSI Using Rate Splitting." In 2012 IEEE 27th Convention of Electrical and Electronics Engineers in Israel, pp. 1–5, IEEE, Eilat, 2012. ISBN: 978-1-4673-4681-8. URL: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6377031

Abstract: As shown by Médard ("The effect upon channel capacity in wireless communications of perfect and imperfect knowledge of the channel," IEEE Trans. Inform. Theory, May 2000), the capacity of fading channels with imperfect channel-state information (CSI) can be lower-bounded by assuming a Gaussian channel input X, and by upper-bounding the conditional entropy h(X | Y, Ĥ), conditioned on the channel output Y and the CSI Ĥ, by the entropy of a Gaussian random variable with variance equal to the linear minimum mean-square error in estimating X from (Y, Ĥ). We demonstrate that, by using a rate-splitting approach, this lower bound can be sharpened: we show that by expressing the Gaussian input X as the sum of two independent Gaussian variables X(1) and X(2), by applying Médard's lower bound first to analyze the mutual information between X(1) and Y conditioned on Ĥ while treating X(2) as noise, and by applying the lower bound a second time to analyze the mutual information between X(2) and Y conditioned on (X(1), Ĥ), we obtain a lower bound on the capacity that is larger than Médard's lower bound.

Taborda, Camilo G.; Perez-Cruz, Fernando. "Mutual Information and Relative Entropy over the Binomial and Negative Binomial Channels." In 2012 IEEE International Symposium on Information Theory Proceedings, pp. 696–700, IEEE, Cambridge, MA, 2012. ISSN: 2157-8095. URL: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6284304

Abstract: We study the relation of the mutual information and relative entropy over the Binomial and Negative Binomial channels to estimation-theoretic quantities, extending already known results for Gaussian and Poisson channels. We establish general expressions for these information-theoretic concepts with a direct connection to estimation theory through conditional mean estimation and a particular loss function.

## 2009

## Conference Papers

Fresia, Maria; Perez-Cruz, Fernando; Poor, H. Vincent. "Optimized Concatenated LDPC Codes for Joint Source-Channel Coding." In 2009 IEEE International Symposium on Information Theory, pp. 2131–2135, IEEE, Seoul, 2009. ISBN: 978-1-4244-4312-3. URL: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=5205766

Abstract: In this paper, a scheme for joint source-channel coding based on low-density parity-check (LDPC) codes is investigated. Two concatenated independent LDPC codes are used in the transmitter, one for source coding and the other for channel coding, with a joint belief-propagation decoder. The asymptotic behavior is analyzed using extrinsic information transfer (EXIT) charts, and this approximation is corroborated with illustrative experiments. The optimization of the degree distributions of the sparse code to maximize the information transmission rate is also considered.

## 2008

## Conference Papers

Koch, Tobias; Lapidoth, Amos. "On Multipath Fading Channels at High SNR." In 2008 IEEE International Symposium on Information Theory, pp. 1572–1576, IEEE, Toronto, 2008. ISBN: 978-1-4244-2256-2. URL: http://ieeexplore.ieee.org/articleDetails.jsp?arnumber=4595252

Abstract: This paper studies the capacity of discrete-time multipath fading channels. It is assumed that the number of paths is finite, i.e., that the channel output is influenced by the present and by the L previous channel inputs. A noncoherent channel model is considered, where neither transmitter nor receiver is cognizant of the fading's realization, but both are aware of its statistics. The focus is on capacity at high signal-to-noise ratio (SNR). In particular, the capacity pre-loglog, defined as the limiting ratio of the capacity to loglog(SNR) as SNR tends to infinity, is studied. It is shown that, irrespective of the number of paths L, the capacity pre-loglog is 1.
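In symbols, the result quoted in the abstract reads (the limit notation is ours, matching the definition given there):

```latex
\lim_{\mathsf{SNR} \to \infty} \frac{C(\mathsf{SNR})}{\log\log \mathsf{SNR}} \;=\; 1
\qquad \text{for every finite number of paths } L .
```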

Perez-Cruz, Fernando. "Kullback-Leibler Divergence Estimation of Continuous Distributions." In 2008 IEEE International Symposium on Information Theory, pp. 1666–1670, IEEE, Toronto, 2008. ISBN: 978-1-4244-2256-2. URL: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=4595271

Abstract: We present a method for estimating the KL divergence between continuous densities and we prove that it converges almost surely. Divergence estimation is typically solved by estimating the densities first. Our main result shows that this intermediate step is unnecessary and that the divergence can be estimated using either the empirical cdf or k-nearest-neighbour density estimation, which does not converge to the true measure for finite k. The convergence proof is based on describing the statistics of our estimator using waiting-times distributions, such as the exponential or Erlang. We illustrate the proposed estimators, show how they compare to existing methods based on density estimation, and also outline how our divergence estimators can be used for solving the two-sample problem.
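A common form of the k-nearest-neighbour divergence estimator described in the abstract is sketched below. This is an assumed formulation for illustration; the paper's exact construction (e.g., its empirical-cdf variant and correction terms) may differ.

```python
import numpy as np
from scipy.spatial import cKDTree

def kl_knn(x, y, k=1):
    """k-NN estimate of D(P||Q) from samples x ~ P and y ~ Q.

    x: (n, d) array, y: (m, d) array. A sketch of the usual k-NN form;
    the paper's estimator may differ in its exact constants.
    """
    x, y = np.atleast_2d(x), np.atleast_2d(y)
    n, d = x.shape
    m = y.shape[0]
    # rho: distance from each x_i to its k-th nearest neighbour within
    # x itself (query k+1 neighbours, since the closest is x_i at 0).
    rho = cKDTree(x).query(x, k=k + 1)[0][:, -1]
    # nu: distance from each x_i to its k-th nearest neighbour in y.
    nu = cKDTree(y).query(x, k=k)[0]
    if k > 1:
        nu = nu[:, -1]
    return d * np.mean(np.log(nu / rho)) + np.log(m / (n - 1))
```

For finite k the underlying k-NN density estimates are not pointwise consistent, yet, as the abstract notes, the resulting divergence estimate still converges almost surely.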

Koch, Tobias; Lapidoth, Amos. "Multipath Channels of Unbounded Capacity." In 2008 IEEE 25th Convention of Electrical and Electronics Engineers in Israel, pp. 640–644, IEEE, Eilat, 2008. ISBN: 978-1-4244-2481-8. URL: http://ieeexplore.ieee.org/articleDetails.jsp?arnumber=4736611

Abstract: The capacity of discrete-time, noncoherent, multipath fading channels is considered. It is shown that if the variances of the path gains decay faster than exponentially, then capacity is unbounded in the transmit power.

## 2007

## Journal Articles

Leiva-Murillo, Jose M.; Artés-Rodríguez, Antonio. "Maximization of Mutual Information for Supervised Linear Feature Extraction." IEEE Transactions on Neural Networks, 18(5), pp. 1433–1441, 2007. ISSN: 1045-9227. URL: http://ieeexplore.ieee.org/articleDetails.jsp?arnumber=4298118

Abstract: In this paper, we present a novel scheme for linear feature extraction in classification. The method is based on the maximization of the mutual information (MI) between the extracted features and the classes. The sum of the MI corresponding to each of the features is taken as a heuristic that approximates the MI of the whole output vector. Then, a component-by-component gradient-ascent method is proposed for the maximization of the MI, similar to the gradient-based entropy optimization used in independent component analysis (ICA). The simulation results show that the method is not only competitive with existing supervised feature extraction methods in all the cases studied, but also remarkably outperforms them when the data are characterized by strongly nonlinear boundaries between classes.
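In notation suggested by the abstract (assumed here, with $m$ extracted features $\mathbf{w}_i^{\mathsf{T}}\mathbf{x}$ and class variable $C$), the heuristic objective and its component-by-component ascent read:

```latex
\max_{W} \;\sum_{i=1}^{m} I\bigl(\mathbf{w}_i^{\mathsf{T}}\mathbf{x};\, C\bigr),
\qquad
\mathbf{w}_i \;\leftarrow\; \mathbf{w}_i + \eta\, \nabla_{\mathbf{w}_i}\, I\bigl(\mathbf{w}_i^{\mathsf{T}}\mathbf{x};\, C\bigr),
```

where the sum of per-feature MIs stands in for the MI of the whole output vector, as described above.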