
Publications


Featured research published by Alexander Hasenfuss.


Neural Computation | 2010

Topographic mapping of large dissimilarity data sets

Barbara Hammer; Alexander Hasenfuss

Topographic maps such as the self-organizing map (SOM) or neural gas (NG) constitute powerful data mining techniques that allow one to simultaneously cluster data and infer their topological structure, so that additional features, for example browsing, become available. Both methods were introduced for vectorial data sets; they require a classical feature encoding of the information. Often, data are available in the form of pairwise distances only, such as arise from a kernel matrix, a graph, or some general dissimilarity measure. In such cases, NG and SOM cannot be applied directly. In this article, we introduce relational topographic maps as an extension of relational clustering algorithms, which offer prototype-based representations of dissimilarity data, to incorporate neighborhood structure. These methods are equivalent to the standard (vectorial) techniques if a Euclidean embedding exists, while avoiding the need to compute such an embedding explicitly. Extending these techniques to the general case of non-Euclidean dissimilarities makes it possible to interpret relational clustering as clustering in pseudo-Euclidean space. We compare the methods to well-known clustering methods for proximity data based on deterministic annealing and discuss how far convergence can be guaranteed in the general case. Relational clustering is quadratic in the number of data points, which makes the algorithms infeasible for huge data sets. We propose an approximate patch version of relational clustering that runs in linear time. The effectiveness of the methods is demonstrated in a number of examples.
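The relational trick behind these maps can be made concrete: if D holds pairwise squared Euclidean distances and each prototype is a convex combination of the (unknown) data points, distances between points and prototypes can be computed from D alone. A minimal NumPy sketch of this identity (function name and shapes are our own choices, not the paper's notation):

```python
import numpy as np

def relational_distances(D, alpha):
    """Distance from each data point to each relational prototype.

    D     : (n, n) matrix of pairwise *squared* Euclidean distances.
    alpha : (k, n) coefficients; each row sums to 1 and represents a
            prototype w_j = sum_i alpha[j, i] * x_i of the unseen points.
    Uses the identity d(x_i, w_j) = [D a_j]_i - 0.5 * a_j^T D a_j,
    which needs only D, never the embedding itself.
    """
    Da = D @ alpha.T                                   # (n, k): [D a_j]_i
    corr = 0.5 * np.sum((alpha @ D) * alpha, axis=1)   # (k,): a_j^T D a_j / 2
    return Da - corr[None, :]
```

For non-Euclidean dissimilarities the same formula still evaluates, which is what permits the pseudo-Euclidean interpretation mentioned in the abstract, though the resulting "distances" can then be negative.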


KI '07 Proceedings of the 30th Annual German Conference on Advances in Artificial Intelligence | 2007

Relational Neural Gas

Barbara Hammer; Alexander Hasenfuss

We introduce relational variants of neural gas, a very efficient and powerful neural clustering algorithm, which allow clustering and mining of data given in terms of a pairwise similarity or dissimilarity matrix. It is assumed that this matrix stems from a Euclidean distance or dot product, respectively; however, the underlying embedding of points is unknown. Batch optimization can be formulated equivalently in terms of the given similarities or dissimilarities, thus providing a way to transfer batch optimization to relational data. For this procedure, convergence is guaranteed, and extensions such as the integration of label information can readily be transferred to this framework.
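A hedged sketch of the batch iteration this abstract describes: prototypes live as coefficient rows over the data, each step ranks prototypes per point, turns ranks into exponentially decaying neighborhood weights, and renormalizes them. The annealing schedule, parameter names, and initialization are our own simplifications, not the paper's exact formulation.

```python
import numpy as np

def relational_neural_gas(D, k=2, iters=50, lam0=2.0, lam_end=0.01, seed=0):
    """Batch relational neural gas on a squared-dissimilarity matrix D (n x n).

    Prototypes are coefficient rows alpha (k x n) summing to 1; the
    neighborhood range lam is annealed from lam0 down to lam_end."""
    n = D.shape[0]
    rng = np.random.default_rng(seed)
    alpha = rng.random((k, n))
    alpha /= alpha.sum(axis=1, keepdims=True)

    def rel_dist(a):
        # d(x_i, w_j) = [D a_j]_i - 0.5 * a_j^T D a_j (relational trick)
        return D @ a.T - 0.5 * np.sum((a @ D) * a, axis=1)

    for t in range(iters):
        lam = lam0 * (lam_end / lam0) ** (t / max(iters - 1, 1))
        dist = rel_dist(alpha)
        ranks = np.argsort(np.argsort(dist, axis=1), axis=1)   # (n, k)
        h = np.exp(-ranks / lam)                               # neighborhood
        alpha = (h / h.sum(axis=0)).T                          # rows sum to 1
    return alpha, rel_dist(alpha).argmin(axis=1)
```

On two well-separated groups of points, the returned assignment recovers the groups while never touching an explicit embedding.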


Neurocomputing | 2009

Patch clustering for massive data sets

Nikolai Alex; Alexander Hasenfuss; Barbara Hammer

The presence of huge data sets poses new problems for popular clustering and visualization algorithms such as neural gas (NG) and the self-organizing map (SOM) due to memory and time constraints. In such situations, it is no longer possible to store all data points in main memory at once, and only a few passes, ideally a single pass, over the whole data set are affordable to achieve a feasible training time. In this contribution, we propose single-pass extensions of the classical clustering algorithms NG and SOM which are based on a simple patch decomposition of the data set and fast batch optimization schemes for the underlying cost function. The algorithms require only a fixed amount of memory. They maintain the benefits of the original methods, including easy implementation and interpretation as well as great flexibility and adaptability. We demonstrate that parallelization of the methods becomes easily possible, and we show the efficiency of the approach in a variety of experiments.
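The patch decomposition can be sketched as follows: each patch is clustered together with the previous prototypes, which enter the batch step as pseudo-points weighted by the size of their receptive fields. This is a simplified reading of the scheme in the abstract; the patch handling, schedule, and multiplicity bookkeeping below are our own assumptions.

```python
import numpy as np

def batch_ng(X, w, wt, iters=25, lam0=2.0, lam_end=0.05):
    """Weighted batch neural gas: sample i enters with multiplicity wt[i]."""
    for t in range(iters):
        lam = lam0 * (lam_end / lam0) ** (t / (iters - 1))
        dist = ((X[:, None, :] - w[None, :, :]) ** 2).sum(-1)
        ranks = np.argsort(np.argsort(dist, axis=1), axis=1)
        h = np.exp(-ranks / lam) * wt[:, None]
        w = (h.T @ X) / h.sum(axis=0)[:, None]
    return w

def patch_ng(patches, k=2, seed=0):
    """Single-pass patch NG sketch: memory stays O(patch size + k)."""
    rng = np.random.default_rng(seed)
    w, mult = None, None
    for patch in patches:
        if w is None:                      # initialize from the first patch
            w = patch[rng.choice(len(patch), size=k, replace=False)].copy()
            mult = np.zeros(k)
        X = np.vstack([patch, w])          # patch plus weighted pseudo-points
        wt = np.concatenate([np.ones(len(patch)), mult])
        w = batch_ng(X, w, wt)
        # carry forward: multiplicity = total weight each prototype captured
        win = ((X[:, None, :] - w[None, :, :]) ** 2).sum(-1).argmin(axis=1)
        mult = np.array([wt[win == j].sum() for j in range(k)])
    return w
```

Because only the current patch and k weighted prototypes are ever in memory, the stream can be arbitrarily long, and patches could be processed on separate workers before merging their prototypes.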


Neurocomputing | 2007

Magnification control for batch neural gas

Barbara Hammer; Alexander Hasenfuss; Thomas Villmann

Neural gas (NG) constitutes a very robust clustering algorithm which can be derived as stochastic gradient descent on a cost function closely connected to the quantization error. In the limit, an NG network samples the underlying data distribution. However, this connection is not linear; rather, it follows a power law with a magnification exponent different from the information-theoretically optimal one for adaptive map formation. A couple of schemes exist to explicitly control the exponent, such as local learning, which requires only a small change to the learning algorithm of NG. Batch NG is a fast alternative optimization scheme for NG vector quantizers which has been derived from the same cost function and which amounts to a fast Newton optimization scheme. It possesses the same magnification factor (different from 1) as standard online NG. In this paper, we propose a method to integrate magnification control by local learning into batch NG. The key observation is a link between local learning and an underlying cost function, which opens the way towards alternative, e.g. batch, optimization schemes. We validate the learning rule derived from this altered cost function in an artificial experimental setting, and we demonstrate the benefit of magnification control for sampling rare events in a real data set.
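To illustrate the underlying idea that reweighting the cost function shifts which regions attract prototypes, here is a toy batch-NG variant in which each sample's contribution is scaled by a density estimate raised to a control exponent. This is explicitly not the paper's local-learning rule; the kNN density proxy and the exponent `m` are illustrative assumptions.

```python
import numpy as np

def reweighted_batch_ng(X, k=5, m=0.0, iters=60, seed=0):
    """Batch NG where sample i contributes with weight phat_i ** m.

    phat is a crude 5th-nearest-neighbor density proxy; m = 0 recovers
    plain batch NG, while negative m boosts low-density (rare) regions."""
    n = X.shape[0]
    rng = np.random.default_rng(seed)
    w = X[rng.choice(n, size=k, replace=False)].copy()
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    phat = 1.0 / (np.sort(d, axis=1)[:, 5] + 1e-9)   # density ~ 1 / 5th-NN dist
    sw = phat ** m                                   # per-sample weight
    for t in range(iters):
        lam = 2.0 * (0.01 / 2.0) ** (t / (iters - 1))
        dist = ((X[:, None, :] - w[None, :, :]) ** 2).sum(-1)
        ranks = np.argsort(np.argsort(dist, axis=1), axis=1)
        h = np.exp(-ranks / lam) * sw[:, None]
        w = (h.T @ X) / h.sum(axis=0)[:, None]
    return w
```

With m = -1, a sparsely populated region of the data receives markedly more prototypes than under plain batch NG, which is the "sampling rare events" effect the abstract refers to.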


Neurocomputing | 2011

Local matrix adaptation in topographic neural maps

Banchar Arnonkijpanich; Alexander Hasenfuss; Barbara Hammer

The self-organizing map (SOM), neural gas (NG), and generalizations thereof such as the generative topographic map constitute popular algorithms to represent data by means of prototypes arranged on a (hopefully) topology-representing map. Most standard methods rely on the Euclidean metric; hence the resulting clusters tend to be isotropic and cannot account for local distortions or correlations of the data. For this reason, several proposals exist in the literature which extend prototype-based clustering towards more general models that, for example, incorporate local principal directions into the winner computation. This allows data to be represented faithfully using fewer prototypes. In this contribution, we establish a link between models which rely on local principal components (PCA), matrix learning, and a formal cost function of NG and SOM which allows convergence of the algorithm to be shown. For this purpose, we consider an extension of prototype-based clustering algorithms such as NG and SOM towards a more general metric which is given by a full adaptive matrix, such that ellipsoidal clusters are accounted for. The approach is derived from a natural extension of the standard cost functions of NG and SOM (in the form of Heskes). We obtain batch optimization learning rules for prototype and matrix adaptation based on these generalized cost functions, and we show convergence of the algorithm. The batch optimization schemes can be interpreted as local principal component analysis (PCA), and the local eigenvectors correspond to the main axes of the ellipsoidal clusters. Thus, this approach provides a cost function associated with proposals in the literature which combine SOM or NG with local PCA models. We demonstrate the behavior of matrix NG and SOM in several benchmark examples and in an application to image compression.
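A sketch of matrix NG under the assumptions suggested by the abstract: each prototype j carries a full metric matrix L_j used in the distance, set from the inverse of the neighborhood-weighted local covariance and normalized to determinant 1 so the metric cannot collapse. The exact update formulas are in the paper; the normalization choice and regularization below are our own.

```python
import numpy as np

def matrix_ng(X, k=2, iters=40, seed=0):
    """Batch NG with a full adaptive metric per prototype:
    d_j(x) = (x - w_j)^T L_j (x - w_j), with det(L_j) = 1 enforced."""
    n, dim = X.shape
    rng = np.random.default_rng(seed)
    w = X[rng.choice(n, size=k, replace=False)].copy()
    L = np.stack([np.eye(dim)] * k)
    for t in range(iters):
        lam = 2.0 * (0.01 / 2.0) ** (t / (iters - 1))
        diff = X[:, None, :] - w[None, :, :]                  # (n, k, dim)
        dist = np.einsum('nkd,kde,nke->nk', diff, L, diff)    # metric distances
        ranks = np.argsort(np.argsort(dist, axis=1), axis=1)
        h = np.exp(-ranks / lam)
        w = (h.T @ X) / h.sum(axis=0)[:, None]                # prototype update
        diff = X[:, None, :] - w[None, :, :]
        for j in range(k):
            # weighted local covariance, inverted and scaled to det = 1
            S = (h[:, j, None] * diff[:, j, :]).T @ diff[:, j, :] / h[:, j].sum()
            S += 1e-6 * np.eye(dim)                           # keep invertible
            L[j] = np.linalg.inv(S) * np.linalg.det(S) ** (1.0 / dim)
    return w, L
```

The eigenvectors of each L_j are the local principal directions, so each prototype's receptive field is an ellipsoid aligned with the local data structure.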


Intelligent Data Analysis | 2007

Relational topographic maps

Alexander Hasenfuss; Barbara Hammer

We introduce relational variants of neural topographic maps, including the self-organizing map and neural gas, which allow clustering and visualization of data given as pairwise similarities or dissimilarities with continuous prototype updates. It is assumed that the (dis-)similarity matrix originates from Euclidean distances; however, the underlying embedding of points is unknown. Batch optimization schemes for topographic map formation are formulated in terms of the given (dis-)similarities, and convergence is guaranteed, thus providing a way to transfer batch optimization to relational data.


GbRPR '09 Proceedings of the 7th IAPR-TC-15 International Workshop on Graph-Based Representations in Pattern Recognition | 2009

Graph-Based Representation of Symbolic Musical Data

Bassam Mokbel; Alexander Hasenfuss; Barbara Hammer

In this work, we present an approach that utilizes a graph-based representation of symbolic musical data in the context of automatic topographic mapping. A novel representation of melodic progressions as graph structures is introduced, providing a dissimilarity measure which complies with the invariances in the human perception of melodies. That way, music collections can be processed by non-Euclidean variants of neural gas or self-organizing maps for clustering, classification, or topographic mapping for visualization. We demonstrate the performance of the technique on several datasets of classical music.
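As a much simpler illustration of a perceptually invariant melody dissimilarity (this is not the paper's graph-based measure), one can compare melodies by the edit distance between their pitch-interval sequences, which makes the measure invariant to transposition, one of the invariances of human melody perception:

```python
import numpy as np

def interval_edit_distance(a, b):
    """Dissimilarity between two melodies given as MIDI pitch sequences.

    Consecutive-pitch *intervals* are compared instead of absolute pitches,
    so transposing a melody leaves the distance unchanged; the distance
    itself is plain Levenshtein edit distance on the interval sequences."""
    ia = [q - p for p, q in zip(a, a[1:])]
    ib = [q - p for p, q in zip(b, b[1:])]
    m, n = len(ia), len(ib)
    D = np.zeros((m + 1, n + 1), dtype=int)
    D[:, 0] = np.arange(m + 1)            # cost of deleting all of ia
    D[0, :] = np.arange(n + 1)            # cost of inserting all of ib
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ia[i - 1] == ib[j - 1] else 1
            D[i, j] = min(D[i - 1, j] + 1,        # deletion
                          D[i, j - 1] + 1,        # insertion
                          D[i - 1, j - 1] + cost) # match / substitution
    return int(D[m, n])
```

Such a symmetric dissimilarity matrix over a melody collection is exactly the kind of non-Euclidean input the relational NG and SOM variants above can cluster.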


Artificial Neural Networks in Pattern Recognition | 2006

Supervised batch neural gas

Barbara Hammer; Alexander Hasenfuss; Frank-Michael Schleif; Thomas Villmann

Recently, two extensions of neural gas have been proposed: a fast batch version of neural gas for data given in advance, and extensions of neural gas that learn a (possibly fuzzy) supervised classification. Here we propose a batch version of supervised neural gas training which allows a prototype-based classification to be learned efficiently, provided the training data are given beforehand. The method relies on a simpler cost function than online supervised neural gas and leads to simpler update formulas. We prove convergence of the algorithm in a general framework which also incorporates supervised k-means and supervised batch-SOM, and which opens the way towards metric adaptation as well as application to proximity data not embedded in a real vector space.
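A hedged sketch of one way such a supervised batch scheme can look: each prototype carries a fuzzy label vector, the ranking cost mixes the squared distance with a label-mismatch penalty, and both positions and labels are updated as neighborhood-weighted averages. The penalty weight `beta` and the one-hot averaging are our own simplifications, not the paper's exact cost function.

```python
import numpy as np

def supervised_batch_ng(X, y, k=4, beta=5.0, iters=40, seed=0):
    """Supervised batch NG sketch: ranking cost
    |x - w_j|^2 + beta * |onehot(y) - c_j|^2 for prototype label vectors c_j."""
    n, dim = X.shape
    classes = np.unique(y)
    Y = (y[:, None] == classes[None, :]).astype(float)   # one-hot labels
    rng = np.random.default_rng(seed)
    w = X[rng.choice(n, size=k, replace=False)].copy()
    c = np.full((k, len(classes)), 1.0 / len(classes))   # fuzzy labels
    for t in range(iters):
        lam = 2.0 * (0.01 / 2.0) ** (t / (iters - 1))
        cost = ((X[:, None, :] - w[None, :, :]) ** 2).sum(-1) \
             + beta * ((Y[:, None, :] - c[None, :, :]) ** 2).sum(-1)
        ranks = np.argsort(np.argsort(cost, axis=1), axis=1)
        h = np.exp(-ranks / lam)
        w = (h.T @ X) / h.sum(axis=0)[:, None]           # position update
        c = (h.T @ Y) / h.sum(axis=0)[:, None]           # label update
    return w, c

def predict(X, w, c):
    """Classify by nearest prototype, returning its dominant label index."""
    j = ((X[:, None, :] - w[None, :, :]) ** 2).sum(-1).argmin(axis=1)
    return c[j].argmax(axis=1)
```

Because the label penalty enters the same cost that the positions minimize, prototypes are pulled towards regions where their class dominates, yielding a prototype-based classifier.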


Workshop on Self-Organizing Maps | 2011

Topographic mapping of dissimilarity data

Barbara Hammer; Andrej Gisbrecht; Alexander Hasenfuss; Bassam Mokbel; Frank-Michael Schleif; Xibin Zhu

Topographic mapping offers a very flexible tool to inspect large quantities of high-dimensional data in an intuitive way. Often, electronic data are inherently non-Euclidean, and modern data formats are connected to dedicated non-Euclidean dissimilarity measures for which classical topographic mapping cannot be used. We give an overview of extensions of topographic mapping to general dissimilarities by means of median or relational extensions. Further, we discuss efficient approximations that avoid the usually quadratic time complexity.


Neural Networks | 2010

2010 Special Issue: Local matrix learning in clustering and applications for manifold visualization

Banchar Arnonkijpanich; Alexander Hasenfuss; Barbara Hammer

Electronic data sets are increasing rapidly with respect to both the size of the data sets and the data resolution, i.e. dimensionality, such that adequate data inspection and data visualization have become central issues of data mining. In this article, we present an extension of classical clustering schemes by local matrix adaptation, which allows a better representation of data by means of clusters of arbitrary ellipsoidal shape. Unlike previous proposals, the method is derived from a global cost function. The focus of this article is to demonstrate the applicability of this matrix clustering scheme to low-dimensional data embedding for data inspection. The proposed method is based on matrix learning for neural gas and manifold charting. This provides an explicit mapping from a given high-dimensional data space to low dimensionality. We demonstrate the usefulness of this method for data inspection and manifold visualization.
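The ingredient that manifold charting consumes can be sketched directly: once the data are clustered, each cluster yields local PCA coordinates, i.e. projections onto its top principal directions. The global affine alignment of these charts into one map is omitted here; this sketch only produces the per-cluster local coordinates.

```python
import numpy as np

def local_pca_coords(X, labels, k, d=2):
    """Per-cluster local PCA coordinates.

    For each cluster j, center its points and project them onto the top-d
    principal directions of that cluster. These local charts are what
    manifold charting would subsequently align into a single global
    low-dimensional map (alignment step not shown)."""
    coords = np.zeros((len(X), d))
    for j in range(k):
        mask = labels == j
        Z = X[mask] - X[mask].mean(axis=0)
        # rows of Vt are principal directions sorted by singular value
        _, _, Vt = np.linalg.svd(Z, full_matrices=False)
        coords[mask] = Z @ Vt[:d].T
    return coords
```

When a cluster's points lie in a d-dimensional subspace, the local chart preserves all of their variance, which is why ellipsoidal (matrix) clusters pair naturally with this visualization step.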
