Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Erion Hasanbelliu is active.

Publication


Featured research published by Erion Hasanbelliu.


Pattern Recognition | 2009

The correntropy MACE filter

Kyu-Hwa Jeong; Weifeng Liu; Seungju Han; Erion Hasanbelliu; Jose C. Principe

The minimum average correlation energy (MACE) filter is well known for object recognition. This paper proposes a nonlinear extension to the MACE filter using the recently introduced correntropy function. Correntropy is a positive definite function that generalizes the concept of correlation by utilizing second- and higher-order moments of the signal statistics. Because of its positive definite nature, correntropy induces a new reproducing kernel Hilbert space (RKHS). Taking advantage of the linear structure of the RKHS, it is possible to formulate the MACE filter equations in the RKHS induced by correntropy and obtain an approximate solution. Due to the nonlinear relation between the feature space and the input space, the correntropy MACE (CMACE) can potentially improve upon the MACE performance while preserving the shift-invariant property (additional computation for all shifts will be required in the CMACE). To alleviate the computational complexity of the solution, this paper also presents the fast CMACE using the fast Gauss transform (FGT). We apply the CMACE filter to the MSTAR public release synthetic aperture radar (SAR) data set as well as the PIE database of human faces and show that the proposed method exhibits better distortion tolerance and outperforms the linear MACE in both generalization and rejection abilities.
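The correntropy function at the heart of this paper is estimated from samples as the mean of a kernel evaluated on the difference between two signals. The sketch below is a minimal sample estimator with a Gaussian kernel (normalization constant omitted); the function name and defaults are illustrative, not the paper's implementation:

```python
import numpy as np

def correntropy(x, y, sigma=1.0):
    """Sample estimate of correntropy V(X, Y) = E[k_sigma(X - Y)]
    using a Gaussian kernel (normalization constant omitted)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    diff = x - y
    return float(np.mean(np.exp(-diff**2 / (2.0 * sigma**2))))

# Identical signals attain the kernel maximum of 1.0;
# the estimate decays as the signals diverge.
x = np.linspace(0.0, 1.0, 100)
print(correntropy(x, x))        # 1.0
print(correntropy(x, x + 0.5))  # smaller than 1.0
```

Because the Gaussian kernel Taylor-expands into all even moments of the difference, this single statistic captures the second- and higher-order information the abstract refers to.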


International Symposium on Neural Networks | 2011

Closed-form Cauchy-Schwarz PDF divergence for mixture of Gaussians

Kittipat Kampa; Erion Hasanbelliu; Jose C. Principe

This paper presents an efficient approach to calculating the difference between two probability density functions (pdfs), each of which is a mixture of Gaussians (MoG). Unlike the Kullback-Leibler divergence (DKL), the Cauchy-Schwarz (CS) pdf divergence measure (DCS) admits an analytic, closed-form expression for MoGs. This property of the DCS makes fast and efficient calculation possible, which is highly desirable in real-world applications where the dimensionality of the data/features is very high. We show that DCS follows similar trends to DKL but can be computed much faster, especially when the dimensionality is high. Moreover, the proposed method significantly outperforms DKL in classifying real-world 2D and 3D objects, and in static hand posture recognition based on distances alone.
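The closed form follows from the Gaussian product identity: the integral of the product of two Gaussians is itself a Gaussian evaluated at one mean with the summed covariances. A one-dimensional sketch (assuming diagonal, scalar variances; names are illustrative):

```python
import numpy as np

def gauss(x, mu, var):
    """1-D Gaussian density N(x; mu, var)."""
    return np.exp(-(x - mu)**2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

def cross_ip(w1, mu1, var1, w2, mu2, var2):
    """Closed-form integral of the product of two 1-D Gaussian mixtures:
    int p(x) q(x) dx = sum_ij w_i v_j N(mu_i; m_j, var_i + var_j)."""
    total = 0.0
    for wi, mi, vi in zip(w1, mu1, var1):
        for wj, mj, vj in zip(w2, mu2, var2):
            total += wi * wj * gauss(mi, mj, vi + vj)
    return total

def cs_divergence(w1, mu1, var1, w2, mu2, var2):
    """D_CS(p, q) = -log( <p,q> / sqrt(<p,p><q,q>) ); zero iff p = q."""
    pq = cross_ip(w1, mu1, var1, w2, mu2, var2)
    pp = cross_ip(w1, mu1, var1, w1, mu1, var1)
    qq = cross_ip(w2, mu2, var2, w2, mu2, var2)
    return float(-np.log(pq / np.sqrt(pp * qq)))
```

Every term is a finite sum of Gaussian evaluations, so no numerical integration or sampling is needed, which is what makes the measure tractable in high dimensions.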


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2014

Information Theoretic Shape Matching

Erion Hasanbelliu; Luis Gonzalo Sánchez Giraldo; Jose C. Principe

In this paper, we describe two related algorithms that provide both rigid and non-rigid point set registration with different computational complexity and accuracy. The first algorithm utilizes a nonlinear similarity measure known as correntropy. The measure combines second- and higher-order moments in its decision statistic, showing improvements especially in the presence of impulsive noise. The algorithm assumes that the correspondence between the point sets is known; here, the correspondence is determined with the surprise metric. The second algorithm mitigates the need to establish a correspondence by representing the point sets as probability density functions (PDF). The registration problem is then treated as a distribution alignment. The method utilizes the Cauchy-Schwarz divergence to measure the similarity/distance between the point sets and recover the spatial transformation function needed to register them. Both algorithms utilize information theoretic descriptors; however, correntropy works at the realizations level, whereas the Cauchy-Schwarz divergence works at the PDF level. This makes correntropy less computationally expensive and, given correct correspondence, more accurate. The two algorithms are robust against noise and outliers and perform well under varying levels of distortion. They outperform several well-known and state-of-the-art methods for point set registration.


International Workshop on Machine Learning for Signal Processing | 2011

A robust point matching algorithm for non-rigid registration using the Cauchy-Schwarz divergence

Erion Hasanbelliu; Luis Gonzalo Sánchez Giraldo; Jose C. Principe

In this paper, we describe an algorithm that provides both rigid and non-rigid point-set registration. The point sets are represented as probability density functions and the registration problem is treated as distribution alignment. Using the PDFs instead of the points provides a more robust way of dealing with outliers and noise, and it mitigates the need to establish a correspondence between the points in the two sets. The algorithm operates on the distance between the two PDFs to recover the spatial transformation function needed to register the two point sets. The distance measure used is the Cauchy-Schwarz divergence. The algorithm is robust to noise and outliers, and performs well under varying degrees of transformation and noise.
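With Gaussian Parzen windows, the Cauchy-Schwarz divergence between two point sets reduces to sums of pairwise kernel evaluations (information potentials), since the normalization constants cancel in the ratio. A minimal sketch of that divergence (kernel width and function names are assumptions, not the paper's code):

```python
import numpy as np

def ip(a, b, sigma):
    """Cross information potential of two point sets under Gaussian
    Parzen estimates: mean Gaussian kernel over all pairs. The effective
    kernel variance doubles (sigma^2 + sigma^2 from the two windows)."""
    d2 = np.sum((a[:, None, :] - b[None, :, :])**2, axis=-1)
    return np.mean(np.exp(-d2 / (4.0 * sigma**2)))

def cs_div_pointsets(x, y, sigma=0.5):
    """D_CS between the Parzen densities of point sets x and y:
    -log<p,q> + 0.5*log<p,p> + 0.5*log<q,q>; zero when x = y."""
    return float(0.5 * np.log(ip(x, x, sigma))
                 + 0.5 * np.log(ip(y, y, sigma))
                 - np.log(ip(x, y, sigma)))
```

Registration would then minimize this quantity over the parameters of the spatial transformation applied to one of the sets, e.g. by gradient descent.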


Systems, Man and Cybernetics | 2009

Correntropy based matched filtering for classification in sidescan sonar imagery

Erion Hasanbelliu; Jose C. Principe; K. Clint Slatton

This paper presents an automated way of classifying mines in sidescan sonar imagery. A nonlinear extension to the matched filter is introduced using a new metric called correntropy. This method incorporates higher-order moments in the decision statistic, showing improvements in classification especially in the presence of noise. Templates have been designed using prior knowledge about the objects in the dataset. During classification, these templates are linearly transformed to accommodate the shape variability in the observation. The template yielding the largest correntropy cost function is chosen as the object category. The method is tested on real sonar images, producing promising results considering the low number of images required to design the templates.


International Symposium on Neural Networks | 2012

Online learning using a Bayesian surprise metric

Erion Hasanbelliu; Kittipat Kampa; Jose C. Principe; James Tory Cobb

Dictionary.com defines learning as the process of acquiring knowledge. In psychology, learning is defined as the modification of behavior through training. In our work, we combine these definitions to define learning as the modification of a system model to incorporate the knowledge acquired by new observations. During learning, the system creates and modifies a model to improve its performance. As new samples are introduced, the system updates its model based on the new information provided by the samples. However, this update may not necessarily improve the model. We propose a Bayesian surprise metric to differentiate good data (beneficial) from outliers (detrimental), and thus help to selectively adapt the model parameters. The surprise metric is calculated based on the difference between the prior and the posterior distributions of the model when a new sample is introduced. The metric is useful not only to identify outlier data, but also to differentiate between the data carrying useful information for improving the model and those carrying no new information (redundant). Allowing only the relevant data to update the model would speed up the learning process and prevent the system from overfitting. The method is demonstrated in all three learning procedures: supervised, semi-supervised and unsupervised. The results show the benefit of surprise in both clustering and outlier detection.
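The surprise metric described above is the divergence between the model's prior and its posterior after seeing a new sample. A minimal sketch for a conjugate Gaussian model of an unknown mean (the KL divergence is one natural choice of difference measure; the prior/noise parameters below are illustrative):

```python
import numpy as np

def kl_gauss(mu1, var1, mu0, var0):
    """KL divergence KL( N(mu1, var1) || N(mu0, var0) )."""
    return 0.5 * (var1 / var0 + (mu1 - mu0)**2 / var0
                  - 1.0 + np.log(var0 / var1))

def surprise(x, mu0, var0, noise_var):
    """Bayesian surprise of observation x: KL between the posterior
    over the mean (after conjugate update with x) and the prior."""
    post_var = 1.0 / (1.0 / var0 + 1.0 / noise_var)
    post_mu = post_var * (mu0 / var0 + x / noise_var)
    return float(kl_gauss(post_mu, post_var, mu0, var0))

# An observation near the prior mean barely moves the model;
# a distant one (an outlier candidate) yields far larger surprise.
print(surprise(0.1, mu0=0.0, var0=1.0, noise_var=1.0))
print(surprise(5.0, mu0=0.0, var0=1.0, noise_var=1.0))
```

Thresholding this value separates redundant samples (near-zero surprise), informative samples (moderate surprise), and outliers (large surprise), matching the three-way distinction in the abstract.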


IEEE Journal of Oceanic Engineering | 2012

Deformable Bayesian Network: A Robust Framework for Underwater Sensor Fusion

Kittipat Kampa; Erion Hasanbelliu; James Tory Cobb; Jose C. Principe; K. C. Slatton

The dynamic tree (DT) graphical model is a popular analytical tool for image segmentation and object classification tasks. A DT is a useful model in this context because its hierarchical property enables the user to examine information at multiple scales, and its flexible structure can more easily fit complex region boundaries compared to rigid quadtree structures such as tree-structured Bayesian networks. This paper proposes a novel framework for data fusion called a deformable Bayesian network (DFBN), using a DT model to fuse measurements from multiple sensing platforms into a nonredundant representation. The structural flexibility of the DFBN is used to fuse common information across different sensor measurements. The appropriate structure update strategies for the DFBN and its parameters for the data fusion application are discussed. A real-world example application using sonar images collected from a survey mission is presented. The fusion results using the presented DFBN framework are shown to outperform state-of-the-art approaches such as the Gaussian mean shift and spectral clustering algorithms. The DFBN's complexity and scalability are discussed to address its potential for larger data sets.


International Conference on Multimedia Information Networking and Security | 2011

Bayesian surprise metric for outlier detection in on-line learning

Erion Hasanbelliu; Kittipat Kampa; James Tory Cobb; Jose C. Principe

Our previous work developed an online learning Bayesian framework (dynamic tree) for data organization and clustering. To continuously adapt the system during operation, we concurrently perform outlier detection to prevent outliers from incorrectly modifying the system. We propose a new Bayesian surprise metric to differentiate outliers from the training data and thus help to selectively adapt the model parameters. The metric is calculated based on the difference between the prior and the posterior distributions of the model when a new sample is introduced. A good training datum would sufficiently but not excessively change the model; consequently, the difference between the prior and the posterior distributions would be commensurate with the amount of new information present in the datum. However, an outlier carries an element of surprise that would significantly change the model. In such a case, the posterior distribution would differ greatly from the prior, resulting in a large value of the surprise metric. We categorize such a datum as an outlier, and other means (e.g., a human operator) must be used to handle these cases. The surprise metric is calculated based on the model distribution, and as such, it adapts with the model; the surprise factor depends on the state of the system. This speeds up the learning process by considering only the relevant new data. Both the model parameters and even the structure of the dynamic tree can be updated under this approach.


International Workshop on Machine Learning for Signal Processing | 2008

Content addressable memories in reproducing kernel Hilbert spaces

Erion Hasanbelliu; Jose C. Principe

Content addressable memories (CAM) are one of the few technologies that provide the capability to store and retrieve information based on content. Even more useful is their ability to recall data from noisy or incomplete inputs. However, the input data dimension limits the amount of data that CAMs can store and successfully retrieve. We propose to increase the amount of information that can be stored by implementing CAMs in a reproducing kernel Hilbert space where the input dimension is practically infinite, effectively lifting this CAM limitation. We show the advantages of kernel CAMs over CAMs by comparing their performance in information retrieval, generalization, storage, and online learning.
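One simple way to realize an associative memory in an RKHS is kernel ridge regression from each stored pattern back to itself, so that a noisy or partial probe is pulled toward the nearest stored pattern. The sketch below is a hypothetical minimal variant for illustration, not the paper's exact construction:

```python
import numpy as np

class KernelCAM:
    """Sketch of an auto-associative memory in an RKHS: kernel ridge
    regression mapping stored patterns to themselves, so recall from a
    noisy probe lands near the closest stored pattern."""
    def __init__(self, sigma=1.0, ridge=1e-3):
        self.sigma, self.ridge = sigma, ridge

    def _kernel(self, a, b):
        # Gaussian kernel matrix between rows of a and rows of b.
        d2 = np.sum((a[:, None, :] - b[None, :, :])**2, axis=-1)
        return np.exp(-d2 / (2.0 * self.sigma**2))

    def store(self, patterns):
        self.P = np.asarray(patterns, dtype=float)
        K = self._kernel(self.P, self.P)
        # Solve (K + ridge*I) alpha = P for the recall weights.
        self.alpha = np.linalg.solve(
            K + self.ridge * np.eye(len(self.P)), self.P)

    def recall(self, probe):
        probe = np.atleast_2d(np.asarray(probe, dtype=float))
        return self._kernel(probe, self.P) @ self.alpha
```

Because the kernel trick replaces the input dimension with the (effectively infinite) RKHS dimension, capacity is governed by the number and separation of stored patterns rather than by the raw input size, which is the limitation the abstract targets.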


Computer Vision and Pattern Recognition | 2017

Group-Wise Point-Set Registration Based on Rényi's Second Order Entropy

Luis Gonzalo Sánchez Giraldo; Erion Hasanbelliu; Murali Rao; Jose C. Principe

In this paper, we describe a set of robust algorithms for group-wise registration using both rigid and non-rigid transformations of multiple unlabelled point-sets with no bias toward a given set. These methods mitigate the need to establish a correspondence among the point-sets by representing them as probability density functions, where the registration is treated as a multiple distribution alignment. Hölder's and Jensen's inequalities provide a notion of similarity/distance among point-sets, and Rényi's second-order entropy yields a closed-form solution to the cost function and update equations. We also show that the methods can be improved by normalizing the entropy with a scale factor. These provide simple, fast and accurate algorithms to compute the spatial transformation function needed to register multiple point-sets. The algorithms are compared against two well-known methods for group-wise point-set registration. The results show an improvement in both accuracy and computational complexity.
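The closed form exists because Rényi's second-order entropy of a Gaussian Parzen density is the negative log of a pairwise kernel sum (the information potential), with no integral left to approximate. A minimal sketch (kernel width is an assumed parameter):

```python
import numpy as np

def renyi2_entropy(x, sigma=0.5):
    """Closed-form Rényi second-order entropy of a Gaussian Parzen
    density over the rows of x:
    H2 = -log( (1/N^2) sum_ij G_{sigma*sqrt(2)}(x_i - x_j) )."""
    x = np.asarray(x, dtype=float)
    n, d = x.shape
    d2 = np.sum((x[:, None, :] - x[None, :, :])**2, axis=-1)
    c = (4.0 * np.pi * sigma**2) ** (-d / 2.0)  # Gaussian normalization
    info_potential = c * np.mean(np.exp(-d2 / (4.0 * sigma**2)))
    return float(-np.log(info_potential))
```

A group-wise cost can then be built from the entropy of the pooled (transformed) point-sets: tightly aligned sets concentrate the pooled density and lower the entropy, while misaligned sets spread it out and raise it.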

Collaboration


Dive into Erion Hasanbelliu's collaborations.

Top Co-Authors

James Tory Cobb

Naval Surface Warfare Center
