Publication


Featured research published by Luis Gonzalo Sánchez Giraldo.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2014

Information Theoretic Shape Matching

Erion Hasanbelliu; Luis Gonzalo Sánchez Giraldo; Jose C. Principe

In this paper, we describe two related algorithms that provide both rigid and non-rigid point set registration with different computational complexity and accuracy. The first algorithm utilizes a nonlinear similarity measure known as correntropy. The measure combines second- and higher-order moments in its decision statistic, showing improvements especially in the presence of impulsive noise. The algorithm requires the correspondence between the point sets, which is established using the surprise metric. The second algorithm removes the need to establish a correspondence by representing the point sets as probability density functions (PDFs). The registration problem is then treated as distribution alignment. The method utilizes the Cauchy-Schwarz divergence to measure the similarity/distance between the point sets and recover the spatial transformation function needed to register them. Both algorithms utilize information theoretic descriptors; however, correntropy works at the level of realizations, whereas the Cauchy-Schwarz divergence works at the PDF level. This makes correntropy less computationally expensive and, when the correspondence is correct, more accurate. The two algorithms are robust against noise and outliers and perform well under varying levels of distortion. They outperform several well-known and state-of-the-art methods for point set registration.
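
As a rough illustration of the first similarity measure, empirical correntropy between corresponding point sets has a simple closed form. A minimal sketch, assuming a Gaussian kernel; the function name, bandwidth default, and array shapes are illustrative, not taken from the paper:

```python
import numpy as np

def correntropy(x, y, sigma=1.0):
    """Empirical correntropy of two corresponding point sets x, y of shape (n, d).

    The Gaussian kernel bounds each pair's contribution by 1, so outlier
    pairs (impulsive noise) barely move the statistic, unlike a squared error.
    """
    d2 = np.sum((x - y) ** 2, axis=1)                 # per-pair squared distance
    return np.mean(np.exp(-d2 / (2.0 * sigma ** 2)))
```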


IEEE Transactions on Information Theory | 2015

Measures of Entropy from Data Using Infinitely Divisible Kernels

Luis Gonzalo Sánchez Giraldo; Murali Rao; Jose C. Principe

Information theory provides principled ways to analyze different inference and learning problems, such as hypothesis testing, clustering, dimensionality reduction, and classification. However, the use of information theoretic quantities as test statistics, that is, as quantities obtained from empirical data, poses a challenging estimation problem that often leads to strong simplifications, such as Gaussian models, or to the use of plug-in density estimators that are restricted to certain representations of the data. In this paper, a framework to nonparametrically obtain measures of entropy directly from data, using operators in reproducing kernel Hilbert spaces defined by infinitely divisible kernels, is presented. The entropy functionals, which bear a resemblance to quantum entropies, are defined on positive definite matrices and satisfy axioms similar to those of Rényi's definition of entropy. Convergence of the proposed estimators follows from concentration results on the difference between the ordered spectrum of the Gram matrices and that of the integral operators associated with the population quantities. In this way, capitalizing on both the axiomatic definition of entropy and the representation power of positive definite kernels, the proposed measure of entropy avoids estimating the probability distribution underlying the data. Moreover, estimators of kernel-based conditional entropy and mutual information are also defined. Numerical experiments on independence tests compare favorably with the state of the art.
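
As a sketch of how such an entropy can be computed without a density estimate, the order-α functional can be evaluated from the eigenspectrum of a trace-normalized Gram matrix. A minimal sketch assuming a Gaussian kernel (which is infinitely divisible); parameter defaults and the numerical tolerance are illustrative:

```python
import numpy as np

def matrix_renyi_entropy(X, alpha=2.0, sigma=1.0):
    """Order-alpha matrix-based entropy,
    S_alpha(A) = log2(sum_i lam_i ** alpha) / (1 - alpha),
    where A is the trace-normalized Gram matrix of the data X of shape (n, d)."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-d2 / (2.0 * sigma ** 2))   # Gaussian (infinitely divisible) kernel
    A = K / np.trace(K)                    # eigenvalues now sum to 1, like probabilities
    lam = np.linalg.eigvalsh(A)
    lam = lam[lam > 1e-12]                 # discard numerical zeros before the power
    return np.log2(np.sum(lam ** alpha)) / (1.0 - alpha)
```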


International Workshop on Machine Learning for Signal Processing | 2011

A robust point matching algorithm for non-rigid registration using the Cauchy-Schwarz divergence

Erion Hasanbelliu; Luis Gonzalo Sánchez Giraldo; Jose C. Principe

In this paper, we describe an algorithm that provides both rigid and non-rigid point-set registration. The point sets are represented as probability density functions (PDFs), and the registration problem is treated as distribution alignment. Using the PDFs instead of the points provides a more robust way of dealing with outliers and noise, and it removes the need to establish a correspondence between the points in the two sets. The algorithm operates on the distance between the two PDFs to recover the spatial transformation function needed to register the two point sets. The distance measure used is the Cauchy-Schwarz divergence. The algorithm is robust to noise and outliers, and performs well under varying degrees of transformation and noise.
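
For Gaussian Parzen estimates, every integral in the Cauchy-Schwarz divergence has a closed form, since the integral of a product of two Gaussians is again a Gaussian evaluated at the difference of their centers. A minimal sketch under that assumption; function names and the shared bandwidth are illustrative:

```python
import numpy as np

def _cross_information_potential(A, B, sigma):
    """(1/NM) * sum_ij G_{sigma*sqrt(2)}(a_i - b_j): the integral of the
    product of the two Gaussian Parzen estimates, in closed form."""
    d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    s2 = 2.0 * sigma ** 2                              # variance of the convolved kernel
    dim = A.shape[1]
    return np.mean(np.exp(-d2 / (2.0 * s2))) / (2.0 * np.pi * s2) ** (dim / 2.0)

def cauchy_schwarz_divergence(X, Y, sigma=1.0):
    """D_CS = -log( <p,q> / sqrt(<p,p><q,q>) ); nonnegative, zero iff p = q."""
    pq = _cross_information_potential(X, Y, sigma)
    pp = _cross_information_potential(X, X, sigma)
    qq = _cross_information_potential(Y, Y, sigma)
    return -np.log(pq / np.sqrt(pp * qq))
```

Minimizing this quantity over a family of spatial transformations of one point set is what aligns the two distributions.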


International Workshop on Machine Learning for Signal Processing | 2011

Stochastic kernel temporal difference for reinforcement learning

Jihye Bae; Luis Gonzalo Sánchez Giraldo; Pratik Y. Chhatbar; Joseph T. Francis; Justin C. Sanchez; Jose C. Principe

This paper introduces a kernel adaptive filter using the stochastic gradient on temporal differences, kernel TD(λ), to estimate the state-action value function Q in reinforcement learning. Kernel methods are powerful for solving nonlinear problems, but their growing computational complexity and memory requirements limit their applicability in practical scenarios. To overcome this, the quantization approach introduced in [1] is applied. To help understand the behavior and illustrate the role of the parameters, we apply the algorithm to a two-dimensional spatial navigation task. Eligibility traces are commonly applied in TD learning to improve data efficiency, so the relationships among the eligibility trace parameter λ, the step size, and the filter size are examined. Moreover, kernel TD(0) is applied to neural decoding of an 8-target center-out reaching task performed by a monkey. Results show the method can effectively learn the brain-state-to-action mapping for this task.
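
To make the mechanics concrete, here is a heavily simplified sketch of kernel TD(λ) for value estimation: every visited state becomes a kernel center (the quantization step from [1] is omitted), and eligibility traces spread each TD error back over earlier centers. Class and parameter names are illustrative, not the paper's:

```python
import numpy as np

class KernelTD:
    """Simplified kernel TD(lambda): Gaussian kernel, no quantization."""
    def __init__(self, sigma=1.0, eta=0.1, gamma=0.9, lam=0.5):
        self.sigma, self.eta, self.gamma, self.lam = sigma, eta, gamma, lam
        self.centers, self.weights, self.traces = [], [], []

    def value(self, x):
        return sum(w * np.exp(-np.sum((x - c) ** 2) / (2 * self.sigma ** 2))
                   for w, c in zip(self.weights, self.centers))

    def update(self, x, reward, x_next, terminal=False):
        v_next = 0.0 if terminal else self.value(x_next)
        delta = reward + self.gamma * v_next - self.value(x)  # TD error
        self.traces = [self.gamma * self.lam * e for e in self.traces]
        self.centers.append(np.asarray(x, dtype=float))       # new kernel center
        self.weights.append(0.0)
        self.traces.append(1.0)                               # fresh eligibility
        for i, e in enumerate(self.traces):
            self.weights[i] += self.eta * delta * e           # credit by eligibility
```

The filter size here grows with every sample, which is exactly the cost that the quantization approach keeps in check.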


International Conference of the IEEE Engineering in Medicine and Biology Society | 2013

Information-theoretic metric learning: 2-D linear projections of neural data for visualization

Austin J. Brockmeier; Luis Gonzalo Sánchez Giraldo; Matthew Emigh; Joonbum Bae; Jin Soo Choi; Joseph T. Francis; Jose C. Principe

Intracortical neural recordings are typically high-dimensional due to many electrodes, channels, or units and high sampling rates, making it very difficult to visually inspect differences among responses to various conditions. By representing the neural response in a low-dimensional space, a researcher can visually evaluate the amount of information the response carries about the conditions. We consider a linear projection to 2-D space that also parametrizes a metric between neural responses. The projection, and corresponding metric, should preserve class-relevant information pertaining to different behavior or stimuli. We find the projection as a solution to the information-theoretic optimization problem of maximizing the information between the projected data and the class labels. The method is applied to two datasets using different types of neural responses: motor cortex neuronal firing rates of a macaque during a center-out reaching task, and local field potentials in the somatosensory cortex of a rat during tactile stimulation of the forepaw. In both cases, projected data points preserve the natural topology of targets or peripheral touch sites. Using the learned metric on the neural responses increases the nearest-neighbor classification rate versus the original data; thus, the metric is tuned to distinguish among the conditions.
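
The core object in this construction is simple: a 2-by-d matrix that both produces the visualization and induces the metric. A minimal sketch of that correspondence, with illustrative names; the information-theoretic optimization of W is not shown:

```python
import numpy as np

def projected_metric(x, y, W):
    """d_W(x, y) = ||W x - W y||: the metric induced by a 2-D linear
    projection W of shape (2, d); x, y are neural-response vectors."""
    return np.linalg.norm(W @ (x - y))
```

Nearest-neighbor classification under d_W, rather than under the Euclidean distance on the raw responses, is the evaluation the abstract describes.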


Neurocomputing | 2010

Weighted feature extraction with a functional data extension

Luis Gonzalo Sánchez Giraldo; Germán Castellanos Domínguez

Dimensionality reduction has proved to be a beneficial tool in learning problems. Two of the main advantages provided by dimensionality reduction are interpretability and generalization. Typically, dimensionality reduction is addressed in two separate ways: variable selection and feature extraction. In recent years, however, there has been growing interest in developing combined schemes such as feature extraction with built-in feature selection. In this paper, we look at dimensionality reduction as a rank-deficient problem that embraces variable selection and feature extraction simultaneously. From our analysis, we derive a weighting algorithm that is able to select and linearly transform variables by fixing the dimensionality of the space where a relevance criterion is evaluated. This step enforces sparseness in the resulting weights. Our main goal is dimensionality reduction for classification problems. Namely, we introduce modified versions of principal component analysis (PCA) by expectation maximization (EM) and of linear regularized discriminant analysis (RDA). Finally, we propose a simple extension of weighted RDA (WRDA) that deals with functional features. In this case, observations are described by a set of functions defined over the same domain. The methods were put to the test on artificial and real data sets, showing high levels of generalization even for small training samples.
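
A rough sketch of the combined scheme the abstract describes: a sparse, nonnegative weight per variable performs the selection, and a linear transform extracts features from the surviving variables. The optimization of w and B against a relevance criterion is omitted; names and shapes are illustrative:

```python
import numpy as np

def weighted_feature_extraction(X, w, B):
    """X: (n, d) data; w: (d,) nonnegative variable weights (zeros drop
    variables); B: (d, k) linear feature extractor. Scaling then projecting
    realizes selection and extraction in a single linear map."""
    return (X * w) @ B
```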


International Workshop on Machine Learning for Signal Processing | 2011

A reproducing kernel Hilbert space formulation of the principle of relevant information

Luis Gonzalo Sánchez Giraldo; Jose C. Principe

Information theory allows one to pose problems in principled terms that very often have a direct interpretation. For instance, capturing the structure in data based on its statistical regularities can be thought of as a problem of relevance determination, that is, information preservation under limited resources. The principle of relevant information is an information theoretic objective function that attempts to capture the statistical regularities through entropy minimization under an information preservation constraint. Here, we employ an information theoretic reproducing kernel Hilbert space (RKHS) formulation, which can overcome some of the limitations of previous approaches based on Parzen density estimation. Results are competitive with kernel-based feature extractors such as kernel PCA. Moreover, the proposed framework sheds further light on the relation between information theoretic learning, kernel methods, and support vector algorithms.
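
Schematically, the principle of relevant information trades entropy minimization against information preservation. The compact form below is a paraphrase under assumed notation, not a quotation from the paper:

```latex
% Y: reduced description of the data X; beta >= 0 sets the trade-off
% between compressing Y (low entropy) and keeping Y faithful to X.
\min_{Y} \; H(Y) \;+\; \beta \, D(Y \,\|\, X)
```

The RKHS formulation replaces the Parzen-based estimates of H and D with kernel-based quantities.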


Computational Intelligence and Neuroscience | 2015

Kernel temporal differences for neural decoding

Jihye Bae; Luis Gonzalo Sánchez Giraldo; Eric A. Pohlmeyer; Joseph T. Francis; Justin C. Sanchez; Jose C. Principe

We study the feasibility and capability of the kernel temporal difference algorithm, KTD(λ), for neural decoding. KTD(λ) is an online, kernel-based learning algorithm that has been introduced to estimate value functions in reinforcement learning. It combines kernel-based representations with the temporal difference approach to learning. One of our key observations is that by using strictly positive definite kernels, the algorithm's convergence can be guaranteed for policy evaluation. The algorithm's nonlinear function approximation capabilities are shown in both simulations of policy evaluation and neural decoding problems (policy improvement). KTD can handle high-dimensional neural states containing spatio-temporal information at a reasonable computational complexity, allowing real-time applications. When the algorithm seeks a proper mapping between a monkey's neural states and desired positions of a computer cursor or a robot arm, in both open-loop and closed-loop experiments, it can effectively learn the neural-state-to-action mapping. Finally, a visualization of the coadaptation process between the decoder and the subject shows the algorithm's capabilities in reinforcement learning brain-machine interfaces.
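
One plausible way to read "combines kernel representations with temporal differences for decoding" is one kernel expansion per discrete action, queried by an ε-greedy policy. This is a hypothetical gluing of the pieces, not the paper's exact experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def greedy_action(q_models, x, eps=0.1):
    """q_models[a].value(x) plays the role of Q(x, a), e.g. one KernelTD
    instance per action as in the earlier sketch; eps-greedy exploration."""
    if rng.random() < eps:
        return int(rng.integers(len(q_models)))
    return int(np.argmax([m.value(x) for m in q_models]))
```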


International Conference on Acoustics, Speech, and Signal Processing | 2014

Projentropy: Using entropy to optimize spatial projections

Austin J. Brockmeier; Eder Santanna; Luis Gonzalo Sánchez Giraldo; Jose C. Principe

Methods for hypothesis testing on zero-mean vector-valued signals often rely on a Gaussian assumption, under which the second-order statistics of the observed sample are sufficient statistics of the conditional distribution. This yields fast and simple tests, but by using information-theoretic statistics one can relax the Gaussian assumption. We propose using Rényi's quadratic entropy as an alternative to the covariance and show how a linear projection can be optimized to maximize the difference between the conditional entropies. In addition, if the observed sample is actually a window of a multivariate time series, then the temporal structure can be exploited using the generalized autocorrelation function, correntropy, of the projected sample. This both reduces the computational complexity and increases the performance. These tests can be applied to decoding the brain state from electroencephalogram (EEG) recordings. Preliminary results are demonstrated on a brain-computer interface competition dataset. On unfiltered signals, the projections optimized with the entropy-based statistic perform better, in terms of classification performance, than those of the common spatial pattern (CSP) algorithm.
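
The statistic replacing the covariance has a simple plug-in estimator. A minimal sketch, assuming a Gaussian Parzen window with an illustrative bandwidth default; in the test this would be computed on each class-conditional sample of the projected signal:

```python
import numpy as np

def renyi_quadratic_entropy(X, sigma=1.0):
    """H2(X) = -log( (1/N^2) sum_ij G_{sigma*sqrt(2)}(x_i - x_j) ) for a
    Gaussian Parzen estimate of the projected sample X of shape (n, d)."""
    n, d = X.shape
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    ip = np.mean(np.exp(-d2 / (4.0 * sigma ** 2)))     # information potential
    ip /= (4.0 * np.pi * sigma ** 2) ** (d / 2.0)      # Gaussian normalization
    return -np.log(ip)
```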


International Symposium on Neural Networks | 2014

Correntropy kernel temporal differences for reinforcement learning brain machine interfaces

Jihye Bae; Luis Gonzalo Sánchez Giraldo; Jose C. Principe; Joseph T. Francis

This paper introduces a novel temporal difference algorithm to estimate a value function in reinforcement learning. It is a kernel adaptive system using a robust cost function called correntropy; we call it correntropy kernel temporal differences (CKTD). The algorithm is integrated with Q-learning to find a proper policy (Q-learning via correntropy kernel temporal differences). The proposed method was tested on a synthetic problem, and its robustness under a changing policy was quantified. The same algorithm was applied to the decoding of a monkey's neural states in a reinforcement learning brain-machine interface (RLBMI) during a center-out reaching task. The results showed the potential advantage of the proposed algorithm in the RLBMI framework.
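
The effect of the correntropy cost on the temporal-difference update can be stated in one line: the step is scaled by a Gaussian of the TD error, so large, outlier errors are automatically down-weighted. A schematic sketch under that reading; names and defaults are illustrative:

```python
import numpy as np

def correntropy_td_step(delta, eta=0.1, sigma_c=1.0):
    """Maximizing a correntropy criterion on the TD error delta yields an
    update proportional to exp(-delta**2 / (2 * sigma_c**2)) * delta:
    near-MSE behavior for small errors, heavy damping of impulsive ones."""
    return eta * np.exp(-delta ** 2 / (2.0 * sigma_c ** 2)) * delta
```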

Collaboration


Dive into Luis Gonzalo Sánchez Giraldo's collaborations.

Top Co-Authors

Joseph T. Francis
SUNY Downstate Medical Center

Jihye Bae
University of Florida

Badong Chen
Xi'an Jiaotong University