

Publication


Featured research published by Thomas Villmann.


Neural Networks | 2002

Generalized relevance learning vector quantization

Barbara Hammer; Thomas Villmann

We propose a new scheme for enlarging generalized learning vector quantization (GLVQ) with weighting factors for the input dimensions. The factors allow an appropriate scaling of the input dimensions according to their relevance. They are adapted automatically during training according to the specific classification task, whereby training can be interpreted as stochastic gradient descent on an appropriate error function. This method leads to a more powerful classifier and to an adaptive metric with little extra cost compared to standard GLVQ. Moreover, the size of the weighting factors indicates the relevance of the input dimensions, which suggests a scheme for automatically pruning irrelevant input dimensions. The algorithm is verified on artificial data sets and the Iris data from the UCI repository. Afterwards, the method is compared to several well-known algorithms that determine the intrinsic data dimension on real-world satellite image data.
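The relevance-weighted update the abstract describes can be sketched as a single stochastic-gradient step (a minimal sketch: variable names, learning rates, and the clipping/renormalization of the relevance factors are illustrative choices, not taken from the paper; the gradient follows the stated GLVQ cost):

```python
import numpy as np

def grlvq_step(x, y, protos, proto_labels, lam, lr_w=0.05, lr_l=0.005):
    """One stochastic-gradient GRLVQ-style step (illustrative sketch).

    protos: (k, d) prototype vectors with labels proto_labels (k,)
    lam:    (d,) nonnegative relevance factors, normalized to sum 1
    """
    # relevance-weighted squared distances to all prototypes
    d2 = ((protos - x) ** 2 * lam).sum(axis=1)
    correct = proto_labels == y
    jp = np.flatnonzero(correct)[np.argmin(d2[correct])]    # closest correct
    jm = np.flatnonzero(~correct)[np.argmin(d2[~correct])]  # closest wrong
    dp, dm, denom = d2[jp], d2[jm], (d2[jp] + d2[jm]) ** 2
    diff_p, diff_m = x - protos[jp], x - protos[jm]
    # gradient descent on the GLVQ cost mu = (dp - dm) / (dp + dm)
    protos[jp] += lr_w * (4 * dm / denom) * lam * diff_p    # attract correct
    protos[jm] -= lr_w * (4 * dp / denom) * lam * diff_m    # repel wrong
    # relevance factors follow the same gradient; clip and renormalize
    lam = lam - lr_l * (2 * dm / denom * diff_p ** 2
                        - 2 * dp / denom * diff_m ** 2)
    lam = np.clip(lam, 0.0, None)
    return protos, lam / lam.sum()
```

After training, small entries of `lam` mark input dimensions that the classification never relied on, which is the pruning signal mentioned above.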


IEEE Transactions on Neural Networks | 1997

Topology preservation in self-organizing feature maps: exact definition and measurement

Thomas Villmann; Ralf Der; J. Michael Herrmann; Thomas Martinetz

The neighborhood preservation of self-organizing feature maps like the Kohonen map is an important property which is exploited in many applications. However, if a dimensional conflict arises, this property is lost. Various qualitative and quantitative approaches are known for measuring the degree of topology preservation. They are based on using the locations of the synaptic weight vectors. These approaches, however, may fail in the case of nonlinear data manifolds. To overcome this problem, in this paper we present an approach which uses what we call the induced receptive fields for determining the degree of topology preservation. We first introduce a precise definition of topology preservation and then propose a tool for measuring it, the topographic function. The topographic function vanishes if and only if the map is topology preserving. We demonstrate the power of this tool for various examples of data manifolds.
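The core counting idea can be sketched for the positive branch of the topographic function (a simplified sketch: the receptive-field adjacency is assumed to be precomputed, e.g. from a Delaunay triangulation of the induced receptive fields, and only violations in one direction are counted here):

```python
import numpy as np

def topographic_function_pos(adj, grid_pos, k):
    """Sketch of Phi(k) for k > 0: the average number of neuron pairs whose
    induced receptive fields are adjacent in input space but whose output
    grid positions are more than k apart (maximum norm). The map is
    topology preserving exactly when no such violation exists.

    adj:      (n, n) symmetric boolean receptive-field adjacency (assumed given)
    grid_pos: (n, m) integer neuron coordinates in the output grid
    """
    # pairwise output-grid distances in the maximum norm
    gd = np.abs(grid_pos[:, None, :] - grid_pos[None, :, :]).max(axis=2)
    violations = adj & (gd > k)          # adjacent in input, far on the grid
    return violations.sum() / len(grid_pos)
```

A faithful one-dimensional chain yields zero for every k, while a fold in the map (receptive fields of distant grid neurons touching) makes the count positive.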


Neural Networks | 2003

Neural maps in remote sensing image analysis

Thomas Villmann; Erzsébet Merényi; Barbara Hammer

We study the application of self-organizing maps (SOMs) for the analysis of remote-sensing spectral images. Advanced airborne and satellite-based imaging spectrometers produce very high-dimensional spectral signatures that provide key information to many scientific investigations about the surface and atmosphere of Earth and other planets. These new, sophisticated data demand new and advanced approaches to cluster detection, visualization, and supervised classification. In this article we concentrate on the issue of faithful topological mapping in order to avoid false interpretations of cluster maps created by an SOM. We describe several new extensions of the standard SOM, developed in the past few years: the growing SOM, magnification control, and generalized relevance learning vector quantization, and demonstrate their effect on both low-dimensional traditional multi-spectral imagery and approximately 200-dimensional hyperspectral imagery.


Neural Processing Letters | 2005

Supervised Neural Gas with General Similarity Measure

Barbara Hammer; Marc Strickert; Thomas Villmann

Prototype-based classification offers intuitive and sparse models with excellent generalization ability. However, these models usually crucially depend on the underlying Euclidean metric; moreover, online variants likely suffer from the problem of local optima. We here propose a generalization of learning vector quantization with three additional features: (I) it directly integrates neighborhood cooperation, hence is less affected by local optima; (II) the method can be combined with any differentiable similarity measure, whereby metric parameters such as relevance factors of the input dimensions can automatically be adapted according to the given data; (III) it obeys a gradient dynamics, hence shows very robust behavior, and the chosen objective is related to margin optimization.
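Feature (I), neighborhood cooperation among the prototypes of the correct class, can be sketched as follows (a simplified illustration of the idea only, not the paper's exact cost gradient; names and learning rates are assumptions):

```python
import numpy as np

def sng_step(x, y, protos, proto_labels, lam, sigma=1.0, lr=0.05):
    """Simplified supervised-neural-gas-style step (illustrative sketch).

    All prototypes of the correct class are attracted with rank-based
    neighborhood weights, which reduces the dependence on initialization;
    the closest wrong prototype is repelled as in GLVQ. lam holds
    adaptive relevance factors for the similarity measure (feature II).
    """
    d2 = ((protos - x) ** 2 * lam).sum(axis=1)       # adaptive metric
    correct = np.flatnonzero(proto_labels == y)
    wrong = np.flatnonzero(proto_labels != y)
    ranks = np.argsort(np.argsort(d2[correct]))      # 0 = closest correct
    h = np.exp(-ranks / sigma)                       # neighborhood cooperation
    for j, hj in zip(correct, h):
        protos[j] += lr * hj * lam * (x - protos[j])
    jm = wrong[np.argmin(d2[wrong])]
    protos[jm] -= lr * lam * (x - protos[jm])
    return protos
```

Because every correct-class prototype receives a (rank-discounted) attraction, a prototype stuck far from its class is still pulled toward the data, which is what mitigates local optima.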


IEEE Transactions on Neural Networks | 1997

Growing a hypercubical output space in a self-organizing feature map

Hans-Ulrich Bauer; Thomas Villmann

Neural maps project data from an input space onto a neuron position in a (often lower dimensional) output space grid in a neighborhood preserving way, with neighboring neurons in the output space responding to neighboring data points in the input space. A map-learning algorithm can achieve an optimal neighborhood preservation only if the output space topology roughly matches the effective structure of the data in the input space. We here present a growth algorithm, called the GSOM or growing self-organizing map, which enhances a widespread map self-organization process, Kohonen's self-organizing feature map (SOFM), by an adaptation of the output space grid during learning. The GSOM restricts the output space structure to a general hypercubical shape, with the overall dimensionality of the grid and its extensions along the different directions being subject to adaptation. This constraint meets the demands of many larger information processing systems, of which the neural map can be a part. We apply our GSOM algorithm to three examples, two of which involve real world data. Using recently developed methods for measuring the degree of neighborhood preservation in neural maps, we find the GSOM algorithm to produce maps which preserve neighborhoods in a nearly optimal fashion.
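The output-space adaptation amounts to growing a hypercubical grid one step at a time; the bookkeeping can be sketched as below (a sketch only: how the residual error per candidate direction is measured is the heart of the algorithm and is merely assumed here as an input):

```python
import numpy as np

def grow_grid(dims, errors_along):
    """GSOM-style output-space growth step (illustrative sketch).

    dims:         current grid shape, e.g. (4, 3) for a 4x3 map
    errors_along: len(dims) + 1 values, one residual-error estimate per
                  existing axis plus one for opening a new axis (assumed
                  to be computed elsewhere from the map's receptive fields)
    """
    k = int(np.argmax(errors_along))
    if k < len(dims):                     # extend an existing direction
        return dims[:k] + (dims[k] + 1,) + dims[k + 1:]
    return dims + (2,)                    # open a new grid dimension
```

Repeated growth steps interleaved with ordinary SOFM learning let both the dimensionality of the grid and its extension along each axis adapt to the data, while the grid always stays a hypercube.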


Workshop on Self-Organizing Maps | 2006

Batch and median neural gas

Marie Cottrell; Barbara Hammer; Alexander Hasenfuß; Thomas Villmann

Given Euclidean data, Neural Gas (NG) constitutes a very robust clustering algorithm that suffers neither from the problem of local minima, like simple vector quantization, nor from topological restrictions, like the self-organizing map. Based on the cost function of NG, we introduce a batch variant of NG which shows much faster convergence and which can be interpreted as an optimization of the cost function by the Newton method. This formulation has the additional benefit that, based on the notion of the generalized median in analogy to Median SOM, a variant for non-vectorial proximity data can be introduced. We prove convergence of batch and median versions of NG, SOM, and k-means in a unified formulation, and we investigate the behavior of the algorithms in several experiments.
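The batch variant can be sketched directly from the NG cost function: in each epoch every prototype is recomputed as a rank-weighted mean of all data points (a minimal sketch; the annealing schedule and constants are illustrative choices, not the paper's):

```python
import numpy as np

def batch_neural_gas(X, n_protos=2, sigma0=2.0, iters=30, seed=0):
    """Batch Neural Gas sketch: prototypes are set to rank-weighted means
    of the whole data set, with the neighborhood range annealed to zero.
    """
    rng = np.random.default_rng(seed)
    W = X[rng.choice(len(X), n_protos, replace=False)].astype(float)
    for t in range(iters):
        sigma = sigma0 * (0.01 / sigma0) ** (t / max(iters - 1, 1))
        d = ((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=2)   # (n, k)
        ranks = np.argsort(np.argsort(d, axis=1), axis=1)        # prototype rank per point
        h = np.exp(-ranks / sigma)                               # neighborhood weights
        W = (h.T @ X) / h.sum(axis=0)[:, None]                   # rank-weighted means
    return W
```

As sigma shrinks, the weighted means degenerate into hard cluster means, so the procedure ends near a k-means solution while the early soft phase provides NG's robustness against bad initialization.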


Psychiatry Research-neuroimaging | 2005

Serotonin and dopamine transporter imaging in patients with obsessive-compulsive disorder.

Swen Hesse; Ulrich Müller; Thomas Lincke; Henryk Barthel; Thomas Villmann; Matthias C. Angermeyer; Osama Sabri; Katarina Stengler-Wenzke

In obsessive-compulsive disorder (OCD), the success of pharmacological treatment with serotonin re-uptake inhibitors and atypical antipsychotic drugs suggests that both the central serotonergic and dopaminergic systems are involved in the pathophysiology of the disorder. We applied [123I]-2beta-carbomethoxy-3beta-(4-iodophenyl)tropane (beta-CIT) and a brain-dedicated high-resolution single photon emission computed tomography (SPECT) system to quantify dopamine transporter (DAT) and serotonin transporter (SERT) availability. By comparing 15 drug-naïve patients with OCD and 10 controls, we found a significantly reduced availability (corrected for age) of striatal DAT and of thalamic/hypothalamic, midbrain and brainstem SERT in OCD patients. Severity of OCD symptoms showed a significant negative correlation with thalamic/hypothalamic SERT availability, corrected for age and duration of symptoms. Our data provide evidence for imbalanced monoaminergic neurotransmitter modulation in OCD. Further studies with more selective DAT and SERT radiotracers are needed.


Neural Networks | 1999

Neural maps and topographic vector quantization

Hans-Ulrich Bauer; J. Michael Herrmann; Thomas Villmann

Neural maps combine the representation of data by codebook vectors, like a vector quantizer, with the property of topography, like a continuous function. While the quantization error is simple to compute and to compare between different maps, topography of a map is difficult to define and to quantify. Yet, topography of a neural map is an advantageous property, e.g. in the presence of noise in a transmission channel, in data visualization, and in numerous other applications. In this article we review some conceptual aspects of definitions of topography, and some recently proposed measures to quantify topography. We apply the measures first to neural maps trained on synthetic data sets, and check the measures for properties like reproducibility, scalability, systematic dependence of the value of the measure on the topology of the map, etc. We then test the measures on maps generated for four real-world data sets: a chaotic time series, speech data, and two sets of image data. The measures are found to do an imperfect but adequate job in selecting a topographically optimal output space dimension, while they consistently single out particular maps as non-topographic.


Neural Processing Letters | 2005

On the Generalization Ability of GRLVQ Networks

Barbara Hammer; Marc Strickert; Thomas Villmann

We derive a generalization bound for prototype-based classifiers with adaptive metric. The bound depends on the margin of the classifier and is independent of the dimensionality of the data. It holds for classifiers based on the Euclidean metric extended by adaptive relevance terms. In particular, the result holds for relevance learning vector quantization (RLVQ) [4] and generalized relevance learning vector quantization (GRLVQ) [19].


Neural Computation | 2006

Magnification Control in Self-Organizing Maps and Neural Gas

Thomas Villmann; Jens Christian Claussen

We consider different ways to control the magnification in self-organizing maps (SOM) and neural gas (NG). Starting from early approaches of magnification control in vector quantization, we then concentrate on different approaches for SOM and NG. We show that three structurally similar approaches can be applied to both algorithms: localized learning, concave-convex learning, and winner-relaxing learning. Thereby, the approach of concave-convex learning in SOM is extended to a more general description, whereas the concave-convex learning for NG is new. In general, the control mechanisms generate only slightly different behavior in the two neural algorithms. However, we emphasize that the NG results are valid for any data dimension, whereas in the SOM case, the results hold only for the one-dimensional case.
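One of the three control schemes, concave-convex learning, can be sketched for NG as follows (a heavily hedged sketch: the signed-power form of the update is one common formulation of concave-convex learning and is assumed here, as are all constants; it is not claimed to be the paper's exact rule):

```python
import numpy as np

def ng_concave_convex_step(x, W, sigma=1.0, lr=0.05, xi=1.0):
    """One online NG step with concave-convex learning (illustrative).

    The plain NG update lr * h(rank) * (x - w) is replaced by an
    elementwise signed power of the difference vector; the exponent xi
    is the control parameter (xi = 1 recovers standard NG, xi != 1
    shifts the magnification exponent of the prototype density).
    """
    ranks = np.argsort(np.argsort(np.linalg.norm(W - x, axis=1)))
    h = np.exp(-ranks / sigma)[:, None]          # rank-based neighborhood
    diff = x - W
    return W + lr * h * np.sign(diff) * np.abs(diff) ** xi
```

Changing the magnification in this way controls how strongly dense regions of the input distribution attract prototypes, e.g. to approach information-optimal codebooks.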

Collaboration


Dive into Thomas Villmann's collaboration.

Top Co-Authors

Marc Strickert

University of Osnabrück
