Sven Haase
Hochschule Mittweida
Publications
Featured research published by Sven Haase.
Neural Computation | 2011
Thomas Villmann; Sven Haase
Supervised and unsupervised vector quantization methods for classification and clustering traditionally use dissimilarities, frequently taken as Euclidean distances. In this article, we investigate the applicability of divergences instead, focusing on online learning. We deduce the mathematical fundamentals for their utilization in gradient-based online vector quantization algorithms. This relies on the generalized derivatives of the divergences, known as Fréchet derivatives in functional analysis, which reduce in a natural way to partial derivatives in finite-dimensional problems. We demonstrate the application of this methodology for widely applied supervised and unsupervised online vector quantization schemes, including self-organizing maps, neural gas, and learning vector quantization. Additionally, principles for hyperparameter optimization and relevance learning for parameterized divergences in the case of supervised vector quantization are given to achieve improved classification accuracy.
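For illustration, the following minimal sketch (not code from the paper; function names, the learning rate, and the winner-take-all update are assumptions) shows how the Fréchet derivative of the generalized Kullback-Leibler divergence, which reduces to the partial derivatives 1 - p_i/w_i in the finite-dimensional case, can replace the Euclidean gradient in an online prototype update:

```python
import numpy as np

def gkl_divergence(p, w, eps=1e-12):
    """Generalized Kullback-Leibler divergence D(p || w) for positive vectors."""
    p, w = np.maximum(p, eps), np.maximum(w, eps)
    return np.sum(p * np.log(p / w) - p + w)

def gkl_gradient_wrt_w(p, w, eps=1e-12):
    """Partial (Frechet) derivative of D(p || w) with respect to the prototype w."""
    return 1.0 - np.maximum(p, eps) / np.maximum(w, eps)

def online_vq_step(x, prototypes, lr=0.05):
    """One winner-take-all online update with the divergence in place of the
    squared Euclidean distance (illustrative sketch only)."""
    dists = np.array([gkl_divergence(x, w) for w in prototypes])
    k = int(np.argmin(dists))                        # best-matching prototype
    prototypes[k] -= lr * gkl_gradient_wrt_w(x, prototypes[k])
    prototypes[k] = np.maximum(prototypes[k], 1e-12)  # keep the prototype positive
    return k
```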
Neurocomputing | 2012
Kerstin Bunte; Sven Haase; Michael Biehl; Thomas Villmann
We present a systematic approach to the mathematical treatment of the t-distributed stochastic neighbor embedding (t-SNE) and the stochastic neighbor embedding (SNE) method. This allows an easy adaptation of the methods or an exchange of their respective modules. In particular, the divergence that measures the difference between the probability distributions in the original and the embedding space can be treated independently of other components, such as the similarity of data points or the data distribution. We focus on the extension to different divergences and propose a general framework based on the consideration of Fréchet derivatives. In this way, the general approach can be adapted to user-specific needs.
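As an illustration of this modular view, the sketch below (a hypothetical interface, not taken from the paper) defines two interchangeable divergence modules, each returning the divergence value and its element-wise derivative with respect to the embedding similarities q. Only this module changes when the divergence is exchanged; the chain rule through q(Y) stays the same.

```python
import numpy as np

def kl_module(p, q, eps=1e-12):
    """Kullback-Leibler divergence and its element-wise derivative w.r.t. q."""
    p, q = np.maximum(p, eps), np.maximum(q, eps)
    value = np.sum(p * np.log(p / q))
    grad_q = -p / q                      # dD_KL / dq
    return value, grad_q

def itakura_saito_module(p, q, eps=1e-12):
    """Itakura-Saito divergence and its element-wise derivative w.r.t. q."""
    p, q = np.maximum(p, eps), np.maximum(q, eps)
    value = np.sum(p / q - np.log(p / q) - 1.0)
    grad_q = (q - p) / q ** 2            # dD_IS / dq
    return value, grad_q
```

Either module can be composed with the derivative of q with respect to the embedding coordinates to obtain the gradient of the embedding cost.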
Neurocomputing | 2015
Thomas Villmann; Sven Haase; Marika Kaden
Prototype-based vector quantization is usually carried out in the Euclidean data space. In recent years, non-standard metrics have also become popular. For classification by support vector machines, Hilbert space representations based on so-called kernel metrics appear to be very successful. In this paper we show that gradient-based learning in prototype-based vector quantization is possible by means of kernel metrics instead of the standard Euclidean distance. We show that an appropriate handling requires differentiable universal kernels defining the feature space metric. This allows a prototype adaptation in the original data space, but equipped with a metric determined by the kernel and therefore isomorphic to the respective kernel Hilbert space. However, this approach avoids the Hilbert space representation as known for support vector machines. We give the mathematical justification for the isomorphism and demonstrate the abilities and usefulness of this approach on several examples, including both artificial and real-world datasets.
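A minimal sketch of the idea, assuming a Gaussian RBF kernel as the differentiable universal kernel (the kernel choice and parameters here are illustrative, not taken from the paper): the squared kernel distance d_k(x, w)^2 = k(x, x) - 2 k(x, w) + k(w, w) is differentiated with respect to the prototype, so adaptation takes place in the original data space under the kernel-induced metric.

```python
import numpy as np

def rbf_kernel(x, w, sigma=1.0):
    """Gaussian RBF kernel; differentiable in both arguments."""
    d = x - w
    return np.exp(-np.dot(d, d) / (2.0 * sigma ** 2))

def kernel_distance_sq(x, w, sigma=1.0):
    """Squared kernel-induced distance d_k(x, w)^2 = k(x,x) - 2 k(x,w) + k(w,w)."""
    return rbf_kernel(x, x, sigma) - 2.0 * rbf_kernel(x, w, sigma) + rbf_kernel(w, w, sigma)

def grad_kernel_distance_sq(x, w, sigma=1.0):
    """Gradient of d_k^2 with respect to the prototype w.
    For the RBF kernel k(w, w) = 1 is constant, so only the cross term contributes."""
    return -2.0 * rbf_kernel(x, w, sigma) * (x - w) / sigma ** 2

def prototype_step(x, w, lr=0.1, sigma=1.0):
    """One attraction step of a prototype toward x in the original data space,
    but with the metric determined by the kernel (illustrative sketch only)."""
    return w - lr * grad_kernel_distance_sq(x, w, sigma)
```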
WSOM | 2013
Thomas Villmann; Sven Haase; Marika Kästner
Supervised and unsupervised prototype-based vector quantization are frequently carried out in Euclidean space. In recent years, non-standard metrics have also become popular. For classification by support vector machines, Hilbert space representations based on so-called kernel metrics are very successful. In this paper we give the mathematical justification that gradient-based learning in prototype-based vector quantization is possible by means of kernel metrics instead of the standard Euclidean distance. We show that an appropriate handling requires differentiable universal kernels defining the kernel metric. This allows a prototype adaptation in the original data space, but equipped with a metric determined by the kernel. This approach avoids the Hilbert space representation as known for support vector machines. Moreover, we give prominent examples of differentiable universal kernels based on information-theoretic concepts and show exemplary applications.
International Conference on Artificial Intelligence and Soft Computing | 2010
Thomas Villmann; Sven Haase; Frank-Michael Schleif; Barbara Hammer
We propose the utilization of divergences in gradient descent learning of supervised and unsupervised vector quantization as an alternative to the squared Euclidean distance. The approach is based on the determination of the Fréchet derivatives of the divergences, which can be immediately plugged into the online learning rules.
Workshop on Self-Organizing Maps | 2011
Marika Kästner; Andreas Backhaus; Tina Geweniger; Sven Haase; Udo Seiffert; Thomas Villmann
We propose relevance learning for unsupervised online vector quantization algorithms based on stochastic gradient descent learning according to the given vector quantization cost function. We consider several widely used models, including the neural gas algorithm, the Heskes variant of self-organizing maps, and fuzzy c-means. We apply the relevance learning scheme to divergence-based similarity measures between prototypes and data vectors in these vector quantization schemes.
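A rough sketch of how relevance weights might enter a divergence-based dissimilarity (the weighted divergence, the simplex projection, and the learning rate are illustrative assumptions, not details taken from the paper):

```python
import numpy as np

def weighted_gkl(p, w, lam, eps=1e-12):
    """Relevance-weighted generalized Kullback-Leibler divergence:
    dimension i contributes with weight lam[i] >= 0, sum(lam) = 1."""
    p, w = np.maximum(p, eps), np.maximum(w, eps)
    return np.sum(lam * (p * np.log(p / w) - p + w))

def relevance_gradient(p, w, eps=1e-12):
    """Derivative of the weighted divergence with respect to the relevance weights.
    In a full scheme the gradient comes from the vector quantization cost function;
    only the per-sample dissimilarity term is shown here."""
    p, w = np.maximum(p, eps), np.maximum(w, eps)
    return p * np.log(p / w) - p + w

def relevance_step(p, w, lam, lr=0.01):
    """One stochastic gradient step for the relevances, followed by projection
    back onto nonnegative, normalized weights (illustrative sketch only)."""
    lam = lam - lr * relevance_gradient(p, w)
    lam = np.maximum(lam, 0.0)
    return lam / max(lam.sum(), 1e-12)
```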
Artificial Neural Networks in Pattern Recognition | 2010
Thomas Villmann; Sven Haase; Frank-Michael Schleif; Barbara Hammer; Michael Biehl
We propose the utilization of divergences in gradient descent learning of supervised and unsupervised vector quantization as an alternative to the squared Euclidean distance. The approach is based on the determination of the Fréchet derivatives of the divergences, which can be immediately plugged into the online learning rules. We provide the mathematical foundation of the respective framework. This framework includes the usual gradient descent learning of prototypes as well as parameter optimization and relevance learning for improving the performance.
International Symposium on Neural Networks | 2011
Thomas Villmann; Sven Haase
In this paper, we consider the magnification behavior of neural maps using several (parameterized) divergences as the dissimilarity measure instead of the Euclidean distance. We show experimentally that optimal magnification, i.e., information-optimal data coding by the prototypes, can be achieved for properly chosen divergence parameters. The divergences considered here represent all main classes of divergences. Hence, we conclude that information-optimal vector quantization can be achieved independently of the divergence class by an appropriate parameter setting.
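As an example of a parameterized divergence family of the kind such experiments sweep over (the concrete families studied in the paper may differ), the Rényi divergence interpolates toward the Kullback-Leibler divergence as its parameter approaches one:

```python
import numpy as np

def renyi_divergence(p, q, alpha, eps=1e-12):
    """Renyi divergence D_alpha(p || q) for probability vectors, alpha > 0, alpha != 1.
    As alpha -> 1 it approaches the Kullback-Leibler divergence."""
    p = np.maximum(p, eps); p = p / p.sum()
    q = np.maximum(q, eps); q = q / q.sum()
    return np.log(np.sum(p ** alpha * q ** (1.0 - alpha))) / (alpha - 1.0)

# Sweeping alpha (e.g. over [0.5, 0.9, 1.1, 2.0]) and measuring the resulting
# prototype density is the kind of experiment used to probe how the divergence
# parameter influences the magnification of the map (illustrative only).
```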
Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing | 2010
Thomas Villmann; Sven Haase
Unsupervised and supervised vector quantization models for clustering and classification are usually designed for the processing of Euclidean vectorial data. Yet in this scenario the physical context might not be adequately reflected. For example, spectra can be seen as positive functions (positive measures), but this context information is not used in Euclidean vector quantization. In this contribution we propose a methodology for extending gradient-based vector quantization approaches by utilizing divergences as the dissimilarity measure instead of the Euclidean distance for positive measures. Divergences are specifically designed to judge the dissimilarity between positive measures and frequently have an underlying physical meaning. We present the mathematical foundation for plugging divergences into vector quantization schemes and their adaptation rules. Thereafter, we demonstrate the ability of this methodology for the self-organizing map as a widely used vector quantizer, applying it to the topographic clustering of a hyperspectral AVIRIS image cube taken from a lunar crater volcanic field.
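A compact sketch of a divergence-based SOM of the kind described here, assuming the generalized Kullback-Leibler divergence as the dissimilarity for positive spectra (grid size, learning rate, and neighborhood width are placeholder choices, not values from the paper):

```python
import numpy as np

def gkl(p, w, eps=1e-12):
    """Generalized Kullback-Leibler divergence between a spectrum p and a prototype w."""
    p, w = np.maximum(p, eps), np.maximum(w, eps)
    return np.sum(p * np.log(p / w) - p + w)

def train_divergence_som(spectra, grid_h=5, grid_w=5, epochs=10, lr=0.1, sigma=1.5):
    """Minimal SOM training loop with a divergence as the dissimilarity measure.
    A rough sketch only; initialization and neighborhood cooling are simplified."""
    n_bands = spectra.shape[1]
    rng = np.random.default_rng(0)
    protos = rng.uniform(0.1, 1.0, size=(grid_h * grid_w, n_bands))
    coords = np.array([(i, j) for i in range(grid_h) for j in range(grid_w)], dtype=float)
    for _ in range(epochs):
        for x in spectra:
            dists = np.array([gkl(x, w) for w in protos])
            winner = int(np.argmin(dists))
            # Gaussian neighborhood on the map grid around the winner
            h = np.exp(-np.sum((coords - coords[winner]) ** 2, axis=1) / (2 * sigma ** 2))
            # Gradient of the generalized KL divergence w.r.t. each prototype: 1 - x / w
            protos -= lr * h[:, None] * (1.0 - x[None, :] / np.maximum(protos, 1e-12))
            protos = np.maximum(protos, 1e-12)
    return protos
```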
Neurocomputing | 2011
Ernest Mwebaze; Petra Schneider; Frank-Michael Schleif; Jennifer R. Aduwo; John A. Quinn; Sven Haase; Thomas Villmann; Michael Biehl