## 2014

## Inproceedings

Djuric, Petar M.; Bravo-Santos, Ángel M.: Cooperative Mesh Networks with EGC Detectors. In: *2014 IEEE 8th Sensor Array and Multichannel Signal Processing Workshop (SAM)*, pp. 225–228, IEEE, A Coruña, 2014, ISBN: 978-1-4799-1481-4. [Link](http://ieeexplore.ieee.org/articleDetails.jsp?arnumber=6882381)

**Abstract:** We address mesh networks with decode-and-forward relays that use binary modulations. For detection, the nodes employ equal gain combining (EGC), which is appealing because it is very easy to implement. We study the performance of these networks and compare it to that of multihop multi-branch networks. We also examine the performance of the networks when both the number of groups and the total number of nodes are fixed but the topology of the network varies. We demonstrate the performance of these networks where the channels are modeled with Nakagami distributions and the noise is zero-mean Gaussian.
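As a rough illustration of the detector described in the abstract (not the authors' code; the Nakagami parameter, SNR, and branch counts below are arbitrary choices), a minimal Monte-Carlo sketch of BPSK detection with equal gain combining over Nakagami-m fading:

```python
import numpy as np

rng = np.random.default_rng(0)

def nakagami(m, omega, size, rng):
    # A Nakagami-m amplitude is the square root of a Gamma(m, omega/m) variate.
    return np.sqrt(rng.gamma(shape=m, scale=omega / m, size=size))

def egc_ber(n_branches, snr_db, n_bits=200_000, m=1.5, rng=rng):
    """Simulated BER of BPSK with L-branch equal gain combining."""
    bits = rng.integers(0, 2, n_bits)
    s = 2.0 * bits - 1.0                                # BPSK symbols +/-1
    h = nakagami(m, 1.0, (n_branches, n_bits), rng)     # per-branch fading amplitudes
    sigma = 10 ** (-snr_db / 20)                        # per-branch noise std
    noise = sigma * rng.standard_normal((n_branches, n_bits))
    r = h * s + noise                                   # coherently detected branch outputs
    decision = r.sum(axis=0) > 0                        # EGC: equal-weight sum, threshold at 0
    return np.mean(decision != bits)

ber_1 = egc_ber(1, snr_db=4)
ber_3 = egc_ber(3, snr_db=4)
```

Summing three branches with equal weights should yield a noticeably lower error rate than a single branch at the same per-branch SNR, which is the diversity effect the paper studies in the mesh setting.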

## 2012

## Inproceedings

Zhong, Jingshan; Dauwels, Justin; Vazquez, Manuel A.; Waller, Laura: Efficient Gaussian Inference Algorithms for Phase Imaging. In: *2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*, pp. 617–620, IEEE, Kyoto, 2012, ISSN: 1520-6149. [Link](http://ieeexplore.ieee.org/articleDetails.jsp?arnumber=6287959)

**Abstract:** Novel efficient algorithms are developed to infer the phase of a complex optical field from a sequence of intensity images taken at different defocus distances. The nonlinear observation model is approximated by a linear model. The complex optical field is inferred by iterative Kalman smoothing in the Fourier domain: forward and backward sweeps of Kalman recursions are alternated, and in each such sweep the approximate linear model is refined. By limiting the number of iterations, one can trade off accuracy against complexity. The complexity of each iteration of the proposed algorithm is on the order of N log N, where N is the number of pixels per image, and the required storage scales linearly with N. In contrast, the complexity of existing phase inference algorithms scales with N³ and the required storage with N². The proposed algorithms may enable real-time estimation of optical fields from noisy intensity images.
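The forward/backward Kalman sweeps mentioned in the abstract can be illustrated in a much-reduced setting: a generic scalar Rauch–Tung–Striebel smoother (not the authors' Fourier-domain algorithm; the random-walk model and noise levels are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

# Scalar random-walk state with noisy observations: x_t = x_{t-1} + w_t, y_t = x_t + v_t.
T, q, r = 200, 0.05, 1.0
x = np.cumsum(rng.normal(0, np.sqrt(q), T))
y = x + rng.normal(0, np.sqrt(r), T)

# Forward sweep: Kalman filter.
mf = np.zeros(T); Pf = np.zeros(T)          # filtered means / variances
mp = np.zeros(T); Pp = np.zeros(T)          # predicted means / variances
m, P = 0.0, 10.0                            # diffuse prior
for t in range(T):
    mp[t], Pp[t] = m, P + q                 # predict
    K = Pp[t] / (Pp[t] + r)                 # Kalman gain
    m = mp[t] + K * (y[t] - mp[t])          # measurement update
    P = (1 - K) * Pp[t]
    mf[t], Pf[t] = m, P

# Backward sweep: RTS smoothing recursions.
ms = mf.copy(); Ps = Pf.copy()
for t in range(T - 2, -1, -1):
    G = Pf[t] / Pp[t + 1]                   # smoother gain
    ms[t] = mf[t] + G * (ms[t + 1] - mp[t + 1])
    Ps[t] = Pf[t] + G**2 * (Ps[t + 1] - Pp[t + 1])
```

Here each sweep costs O(T); the paper's contribution is making the analogous sweeps over N-pixel images cost O(N log N) by working in the Fourier domain, and alternating them while refining the linearized observation model.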

Campo, Adria Tauste; Vazquez-Vilar, Gonzalo; Guillen i Fabregas, Albert; Koch, Tobias; Martinez, Alfonso: Achieving Csiszár's Source-Channel Coding Exponent with Product Distributions. In: *2012 IEEE International Symposium on Information Theory Proceedings*, pp. 1548–1552, IEEE, Cambridge, MA, 2012, ISSN: 2157-8095. [Link](http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6283524)

**Abstract:** We derive a random-coding upper bound on the average probability of error of joint source-channel coding that recovers Csiszár's error exponent when used with product distributions over the channel inputs. Our proof technique for the error probability analysis employs a code construction for which source messages are assigned to subsets and codewords are generated with a distribution that depends on the subset.
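For context, the quantities named in the abstract have standard textbook forms (stated here for one source symbol per channel use; these definitions are background, not reproduced from the paper): Gallager's random-coding exponent for a channel W and Csiszár's joint source-channel exponent for a source P built from it and the source reliability function e(R).

```latex
% Gallager's E_0 function and channel random-coding exponent:
E_0(\rho, Q) = -\log \sum_{y} \Bigl( \sum_{x} Q(x)\, W(y \mid x)^{1/(1+\rho)} \Bigr)^{1+\rho},
\qquad
E_r(R) = \max_{0 \le \rho \le 1} \max_{Q} \bigl[ E_0(\rho, Q) - \rho R \bigr].

% Source reliability function and Csiszár's joint source-channel exponent:
e(R) = \min_{V :\, H(V) \ge R} D(V \,\|\, P),
\qquad
E_{\mathrm{J}} = \min_{R} \bigl[ e(R) + E_r(R) \bigr].
```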

## 2011

## Inproceedings

Goparaju, S.; Calderbank, A. R.; Carson, W. R.; Rodrigues, Miguel R. D.; Perez-Cruz, Fernando: When to Add Another Dimension when Communicating over MIMO Channels. In: *2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*, pp. 3100–3103, IEEE, Prague, 2011, ISSN: 1520-6149. [Link](http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=5946351)

**Abstract:** This paper introduces a divide-and-conquer approach to the design of transmit and receive filters for communication over a multiple-input multiple-output (MIMO) Gaussian channel subject to an average power constraint. It involves conversion to a set of parallel scalar channels, possibly with very different gains, followed by coding per sub-channel (i.e., over time) rather than coding across sub-channels (i.e., over time and space). The loss in performance is negligible at high signal-to-noise ratio (SNR) and not significant at medium SNR. The advantages are a reduction in signal processing complexity and greater insight into the SNR thresholds at which a channel is first allocated power. This insight is a consequence of formulating the optimal power allocation in terms of an upper bound on error rate that is determined by parameters of the input lattice, such as the minimum distance and the kissing number. The resulting thresholds are given explicitly in terms of these lattice parameters. By contrast, when the optimization problem is phrased in terms of maximizing mutual information, the solution is mercury/waterfilling, and the thresholds are implicit.
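The mutual-information benchmark mentioned at the end of the abstract reduces, for Gaussian inputs, to classical waterfilling over parallel scalar channels. A generic sketch (not the paper's lattice-based thresholds, nor mercury/waterfilling; the gains and budgets are made up) that shows the same threshold behavior, a weak sub-channel receiving no power until the budget is large enough:

```python
import numpy as np

def waterfill(gains, total_power, iters=100):
    """Classical waterfilling: p_i = max(0, mu - 1/g_i) with sum(p) = total_power."""
    g = np.asarray(gains, dtype=float)
    lo, hi = 0.0, total_power + 1.0 / g.min()   # the water level mu lies in this bracket
    for _ in range(iters):                      # bisection on mu
        mu = 0.5 * (lo + hi)
        if np.maximum(0.0, mu - 1.0 / g).sum() > total_power:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.0, 0.5 * (lo + hi) - 1.0 / g)

# The weakest sub-channel (gain 0.1) stays unused at a small power budget
# and is first allocated power only once the budget crosses its threshold.
p_small = waterfill([2.0, 1.0, 0.1], total_power=1.0)
p_large = waterfill([2.0, 1.0, 0.1], total_power=30.0)
```

The paper's point is that when error rate (rather than mutual information) is the objective, the analogous activation thresholds can be written explicitly in terms of lattice parameters instead of being implicit in a waterfilling condition.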