## 2012

## Journal Articles

Salamanca, Luis; Murillo-Fuentes, Juan Jose; Perez-Cruz, Fernando: Bayesian Equalization for LDPC Channel Decoding. IEEE Transactions on Signal Processing, 60 (5), pp. 2672–2676, 2012, ISSN: 1053-587X. Link: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6129544

Abstract: We describe the channel equalization problem, together with the prior estimate of the channel state information (CSI), as a joint Bayesian estimation problem whose aim is to improve the posterior estimate of each symbol at the input of the channel decoder. Our approach takes into account not only the uncertainty due to the noise in the channel, but also the uncertainty in the CSI estimate. However, this solution cannot be computed in linear time, because it depends on all the transmitted symbols. Hence, we also put forward an approximation of each symbol's posterior, based on the expectation propagation algorithm, which is optimal from the Kullback-Leibler divergence viewpoint and yields an equalizer whose complexity is identical to that of the BCJR algorithm. We also use a graphical model representation of the full posterior, in which the proposed approximation can be readily understood. The proposed posterior estimates are more accurate than those computed using the ML estimate of the CSI. To illustrate this point, we measure the error rate at the output of a low-density parity-check (LDPC) decoder, which needs the exact posterior of each symbol to detect the incoming word and is therefore sensitive to a mismatch in those posterior estimates. For example, for QPSK modulation and a channel with three taps, we can expect gains of over 0.5 dB with the same computational complexity as the ML receiver.

Tags: approximation methods, Bayesian equalization, Bayesian inference, BCJR (Bahl–Cocke–Jelinek–Raviv) algorithm, channel coding, channel decoding, channel equalization, channel estimation, channel state information (CSI), equalizers, expectation propagation, fading channels, graphical models, intersymbol interference, Kullback-Leibler divergence, LDPC codes, modulation, symbol posterior estimates, training

## 2011

## Inproceedings

Plata-Chaves, Jorge; Lazaro, M.; Artés-Rodríguez, Antonio: Optimal Neyman-Pearson Fusion in Two-Dimensional Sensor Networks with Serial Architecture and Dependent Observations. In: Information Fusion (FUSION), 2011 Proceedings of the 14th International Conference on, pp. 1–6, Chicago, 2011, ISBN: 978-1-4577-0267-9. Link: http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=5977545&searchWithin%3Dartes+rodriguez%26sortType%3Dasc_p_Sequence%26filter%3DAND%28p_IS_Number%3A5977431%29

Abstract: In this correspondence, we consider a sensor network with a serial architecture. When solving a binary distributed detection problem in which the sensor observations are dependent under each of the two possible hypotheses, each fusion stage of the network applies a local decision rule. We assume that, based on the information available at each fusion stage, the decision rules provide a binary message regarding the presence or absence of an event of interest. Under this scenario and a Neyman-Pearson formulation, we derive the optimal decision rules associated with each fusion stage. As in the case of independent sensor observations, we are able to show that, under the Neyman-Pearson criterion, the optimal fusion rules of a serial configuration with dependent observations are also optimal Neyman-Pearson tests.

Tags: binary distributed detection, decision theory, dependent observations, local decision rules, measurement uncertainty, network topology, Neyman-Pearson criterion, optimum distributed detection, sensor fusion, serial network topology, two-dimensional sensor networks, wireless sensor networks
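Since the fusion rules derived in this paper reduce to Neyman-Pearson tests, a minimal single-sensor illustration of the Neyman-Pearson criterion may be useful. The toy example below (Gaussian hypotheses with invented parameters, not taken from the paper) fixes the false-alarm rate at a design value and checks the resulting detection probability by Monte Carlo:

```python
import numpy as np
from math import erf, sqrt

# Standard normal CDF and a bisection-based inverse (no SciPy dependency).
def std_normal_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def std_normal_ppf(p, lo=-10.0, hi=10.0):
    for _ in range(80):  # bisection to high precision
        mid = 0.5 * (lo + hi)
        if std_normal_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy hypotheses: H0: x ~ N(0, 1) versus H1: x ~ N(1, 1).
# The likelihood ratio is monotone in x, so the Neyman-Pearson test of
# size alpha is simply "decide H1 when x > gamma".
alpha = 0.1
gamma = std_normal_ppf(1.0 - alpha)

rng = np.random.default_rng(0)
pfa = np.mean(rng.normal(0.0, 1.0, 200_000) > gamma)  # empirical false-alarm rate
pd = np.mean(rng.normal(1.0, 1.0, 200_000) > gamma)   # empirical detection probability
```

The empirical false-alarm rate lands close to the design value `alpha`, and, by the Neyman-Pearson lemma, the resulting detection probability is the largest achievable at that size; the serial fusion stages in the paper apply the same criterion to richer statistics that include the previous stage's decision.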

## 2010

## Inproceedings

Salamanca, Luis; Murillo-Fuentes, Juan Jose; Perez-Cruz, Fernando: Bayesian BCJR for Channel Equalization and Decoding. In: 2010 IEEE International Workshop on Machine Learning for Signal Processing, pp. 53–58, IEEE, Kittila, 2010, ISSN: 1551-2541. Link: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=5589201

Abstract: In this paper we focus on probabilistic channel equalization in digital communications. We use the single-input single-output (SISO) model to show how statistical information about the multipath channel can be exploited to further improve the estimates of the a posteriori probabilities (APPs) during the equalization process. We consider not only the uncertainty due to the noise in the channel, but also the uncertainty in the estimate of the channel state information (CSI). Thus, we resort to a Bayesian approach for the computation of the APPs. This novel algorithm has the same complexity as the BCJR, and it exhibits a lower bit error rate at the output of the channel decoder than the standard BCJR, which relies on the maximum likelihood (ML) estimate of the CSI.

Tags: a posteriori probability, Bayesian BCJR, bit error rate, channel decoding, channel estimation, channel state information (CSI), digital communications, equalizers, Markov processes, maximum likelihood estimation, multipath channels, probabilistic channel equalization, SISO model, training

Vinuelas-Peris, Pablo; Artés-Rodríguez, Antonio: Bayesian Joint Recovery of Correlated Signals in Distributed Compressed Sensing. In: 2010 2nd International Workshop on Cognitive Information Processing, pp. 382–387, IEEE, Elba, 2010, ISBN: 978-1-4244-6459-3. Link: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=5604103

Abstract: In this paper we address the problem of Distributed Compressed Sensing (DCS) of correlated signals. We model the correlation using the correlation coefficient of the signals' sparse components, a general and simple measure. We develop a sparse Bayesian learning method for this setting that can be applied to both random and optimized projection matrices. As a result, we obtain a reduction in the number of measurements needed for a given recovery error that depends on the correlation coefficient, as shown by computer simulations in different scenarios.

Tags: Bayesian joint recovery, correlated signals, correlation methods, covariance matrices, dictionaries, distributed compressed sensing, matrix decomposition, sparse component correlation coefficient

Salamanca, Luis; Murillo-Fuentes, Juan Jose; Perez-Cruz, Fernando: Channel Decoding with a Bayesian Equalizer. In: 2010 IEEE International Symposium on Information Theory, pp. 1998–2002, IEEE, Austin, TX, 2010, ISBN: 978-1-4244-7892-7. Link: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=5513348

Abstract: Low-density parity-check (LDPC) decoders assume that the channel state information (CSI) is known and that the true a posteriori probability (APP) for each transmitted bit is available. In most cases of interest, however, the CSI must be estimated with the help of a short training sequence, and the LDPC decoder has to decode the received word using faulty APP estimates. In this paper, we study the uncertainty in the CSI estimate and how it affects the bit error rate (BER) at the output of the LDPC decoder. To improve these APP estimates, we propose a Bayesian equalizer that takes into consideration not only the uncertainty due to the noise in the channel, but also the uncertainty in the CSI estimate, reducing the BER after the LDPC decoder.

Tags: a posteriori probability, Bayesian equalizer, bit error rate (BER), channel coding, channel decoding, channel state information (CSI), equalizers, LDPC decoders, maximum likelihood estimation, noise reduction, uncertainty

## 2009

## Inproceedings

Djuric, Petar M.; Miguez, Joaquin: Model Assessment with Kolmogorov-Smirnov Statistics. In: 2009 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 2973–2976, IEEE, Taipei, 2009, ISSN: 1520-6149. Link: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=4960248

Abstract: One of the most basic problems in science and engineering is the assessment of a considered model. The model should describe a set of observed data, and the objective is to find ways of deciding whether the model should be rejected. This appears to be an ill-conditioned problem, because we would have to test the model against every possible alternative model. In this paper we use the Kolmogorov-Smirnov statistic to develop a test that indicates whether the model should be kept or rejected. We explain how this testing can be implemented in the context of particle filtering, and we demonstrate the performance of the proposed method by computer simulations.

Tags: Kolmogorov-Smirnov statistics, model assessment, particle filtering, predictive models, statistical analysis, testing
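A minimal sketch of the Kolmogorov-Smirnov model-assessment idea (outside the particle-filtering context of the paper, and with synthetic data invented for illustration): compare the empirical CDF of the observations against the CDF claimed by the model, and reject when the largest gap exceeds an asymptotic critical value:

```python
import numpy as np
from math import erf, sqrt

def normal_cdf(x, mu=0.0, sigma=1.0):
    """Gaussian CDF, vectorized over x."""
    x = np.asarray(x, dtype=float)
    return 0.5 * (1.0 + np.vectorize(erf)((x - mu) / (sigma * sqrt(2.0))))

def ks_statistic(data, model_cdf):
    """One-sample KS statistic D_n = sup_x |F_n(x) - F(x)|."""
    x = np.sort(data)
    n = len(x)
    f = model_cdf(x)
    # The empirical CDF jumps at each sample; check both sides of each jump.
    d_plus = np.max(np.arange(1, n + 1) / n - f)
    d_minus = np.max(f - np.arange(0, n) / n)
    return max(d_plus, d_minus)

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, 500)       # synthetic observations
critical = 1.36 / sqrt(len(data))      # approximate 5% critical value for large n

d_good = ks_statistic(data, normal_cdf)                      # true model
d_bad = ks_statistic(data, lambda x: normal_cdf(x, mu=2.0))  # misspecified model
# With high probability d_good stays below the critical value,
# while d_bad far exceeds it and the shifted model is rejected.
```

The paper's contribution is to embed this kind of test in a particle filter, where the predictive distribution of the observations plays the role of the model CDF above.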

Martino, Luca; Miguez, Joaquin: A Novel Rejection Sampling Scheme for Posterior Probability Distributions. In: 2009 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 2921–2924, IEEE, Taipei, 2009, ISSN: 1520-6149. Link: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=4960235

Abstract: Rejection sampling (RS) is a well-known method to draw from arbitrary target probability distributions, with important applications by itself or as a building block for more sophisticated Monte Carlo techniques. The main limitation to the use of RS is the need to find an adequate upper bound for the ratio of the target probability density function (pdf) over the proposal pdf from which the samples are generated. There are no general methods to find this bound analytically, except in the particular case in which the target pdf is log-concave. In this paper we adopt a Bayesian view of the problem and propose a general RS scheme to draw from the posterior pdf of a signal of interest using its prior density as the proposal function. The method enables the analytical calculation of the bound and can be applied to a large class of target densities. We illustrate its use with a simple numerical example.

Tags: Monte Carlo integration, posterior probability distributions, probability density functions, proposal distributions, rejection sampling, sampling methods, signal sampling, upper bounds
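The paper's key device — proposing from the prior and bounding only the likelihood, whose supremum over the unknown is often available in closed form — can be sketched for a scalar Gaussian model. All parameters below are invented for illustration; the conjugate setup is chosen only because the exact posterior is known and gives a check:

```python
import numpy as np

def posterior_rejection_sampler(y, likelihood, like_bound, prior_sampler, n, rng):
    """Draw n samples from p(x|y) ∝ likelihood(y, x) * prior(x).

    Proposal = prior; acceptance probability = likelihood(y, x) / like_bound,
    where like_bound >= sup_x likelihood(y, x).  Because the prior cancels in
    the target-over-proposal ratio, only the likelihood needs bounding.
    """
    samples = []
    while len(samples) < n:
        x = prior_sampler(rng)
        if rng.uniform() < likelihood(y, x) / like_bound:
            samples.append(x)
    return np.array(samples)

# Toy model: prior x ~ N(0, 1); observation y = x + noise, noise ~ N(0, 0.5^2).
sigma_n = 0.5

def likelihood(y, x):
    return np.exp(-((y - x) ** 2) / (2.0 * sigma_n ** 2)) / (sigma_n * np.sqrt(2.0 * np.pi))

like_bound = 1.0 / (sigma_n * np.sqrt(2.0 * np.pi))  # sup over x, attained at x = y

rng = np.random.default_rng(1)
samples = posterior_rejection_sampler(1.0, likelihood, like_bound,
                                      lambda r: r.normal(), 5000, rng)
# For this conjugate pair with y = 1.0 the exact posterior is N(0.8, 0.2),
# so the sample mean and variance give a direct sanity check.
```

The same pattern applies whenever the likelihood's maximum over the signal is computable, which is what makes the bound analytical for the class of densities considered in the paper.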

Achutegui, Katrin; Martino, Luca; Rodas, Javier; Escudero, Carlos J.; Miguez, Joaquin: A Multi-Model Particle Filtering Algorithm for Indoor Tracking of Mobile Terminals Using RSS Data. In: 2009 IEEE International Conference on Control Applications, pp. 1702–1707, IEEE, Saint Petersburg, 2009, ISBN: 978-1-4244-4601-8. Link: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=5280960

Abstract: In this paper we address the problem of indoor tracking using the received signal strength (RSS) as a position-dependent data measurement. This type of measurement is very appealing because it can be easily obtained with a variety of relatively inexpensive wireless technologies. The extraction of accurate location information from RSS in indoor scenarios is not an easy task, though. Since the RSS is highly influenced by multipath propagation, it is very hard to adequately model the correspondence between the received power and the transmitter-to-receiver distance. The measurement models proposed in the literature are site-specific and require a great deal of information about the structure of the building where the tracking will be performed; therefore, they are not useful for general applications. For that reason, we propose the use of a compound model that combines several sub-models whose parameters are adjusted to specific and different propagation environments. This methodology, called interacting multiple models (IMM), has been used in the past for modeling the motion of maneuvering targets. Here, we extend its application to also handle the uncertainty in the RSS observations, and we refer to the resulting state-space model as a generalized IMM (GIMM) system. The flexibility of the GIMM approach is attained at the expense of an increase in the number of random processes that must be accurately tracked. To overcome this difficulty, we introduce a Rao-Blackwellized sequential Monte Carlo tracking algorithm that exhibits good performance with both synthetic and experimental data.

Tags: generalized interacting multiple model (GIMM), indoor tracking, mobile terminals, multipath propagation, Rao-Blackwellized sequential Monte Carlo, received signal strength (RSS), state-space models, target tracking, wireless networks
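As a rough illustration of the multi-model idea (a much-simplified sketch, not the paper's Rao-Blackwellized algorithm): a bootstrap particle filter can carry a per-particle sub-model index alongside the position, so that resampling concentrates on the propagation model that best explains the RSS data. Everything below — the 1-D random-walk motion, the two log-distance path-loss sub-models, and their parameters — is invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two hypothetical path-loss sub-models: (exponent, reference power at 1 m in dBm).
MODELS = [(2.0, -30.0), (3.0, -40.0)]

def rss_dbm(d, model):
    """Log-distance path-loss prediction of RSS at distance d (meters)."""
    n_exp, p0 = model
    return p0 - 10.0 * n_exp * np.log10(np.maximum(d, 0.1))

def multi_model_pf(observations, n_particles=2000, sigma_obs=2.0, sigma_move=0.2):
    """Bootstrap particle filter over (position, sub-model index) pairs."""
    pos = rng.uniform(0.5, 10.0, n_particles)        # initial positions (m)
    idx = rng.integers(0, len(MODELS), n_particles)  # per-particle sub-model
    estimates = []
    for z in observations:
        pos = pos + sigma_move * rng.normal(size=n_particles)  # random-walk motion
        pred = np.array([rss_dbm(p, MODELS[i]) for p, i in zip(pos, idx)])
        w = np.exp(-((z - pred) ** 2) / (2.0 * sigma_obs ** 2))  # Gaussian RSS noise
        w /= w.sum()
        estimates.append(float(np.dot(w, pos)))          # posterior-mean position
        keep = rng.choice(n_particles, n_particles, p=w)  # multinomial resampling
        pos, idx = pos[keep], idx[keep]
    return estimates

# Static target at 3 m whose RSS actually follows the second sub-model.
observations = rss_dbm(3.0, MODELS[1]) + 2.0 * rng.normal(size=25)
estimates = multi_model_pf(observations)
```

Particles carrying the wrong sub-model predict the wrong RSS, receive low weights, and die out in resampling, so the position estimate settles near the true 3 m; the paper's Rao-Blackwellization additionally marginalizes part of this enlarged state analytically to keep the particle count manageable.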

Goez, Roger; Lazaro, Marcelino: Training of Neural Classifiers by Separating Distributions at the Hidden Layer. In: 2009 IEEE International Workshop on Machine Learning for Signal Processing, pp. 1–6, IEEE, Grenoble, 2009, ISBN: 978-1-4244-4947-7. Link: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=5306240

Abstract: A new cost function for the training of binary classifiers based on neural networks is proposed. This cost function aims at separating the distributions of the patterns of each class at the output of the hidden layer of the network. It has been implemented in a Generalized Radial Basis Function (GRBF) network, and its performance has been evaluated on three different databases, showing advantages with respect to the conventional mean squared error (MSE) cost function. With respect to the support vector machine (SVM) classifier, the proposed method also has advantages in terms of both performance and complexity.

Tags: artificial neural networks, cost functions, function approximation, radial basis function networks, support vector machines