
Publication


Featured research published by Thomas Martinetz.


IEEE Transactions on Neural Networks | 1993

'Neural-gas' network for vector quantization and its application to time-series prediction

Thomas Martinetz; Stanislav G. Berkovich; Klaus Schulten

A neural network algorithm based on a soft-max adaptation rule is presented. The algorithm exhibits good performance in minimizing a cost function for vector quantization data compression. The soft-max rule employed is an extension of the standard K-means clustering procedure and takes into account a neighborhood ranking of the reference (weight) vectors. It is shown that the dynamics of the reference (weight) vectors during the input-driven adaptation procedure are determined by the gradient of an energy function whose shape can be modulated through a neighborhood-determining parameter, and resemble the dynamics of Brownian particles moving in a potential determined by the data-point density. The network is used to represent the attractor of the Mackey-Glass equation and to predict the Mackey-Glass time series, with additional local linear mappings for generating output values. The results obtained for the time-series prediction compare favorably with those achieved by backpropagation and radial basis function networks.
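The soft-max adaptation rule described above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's reference code; the parameter names `eps` (learning rate) and `lam` (neighborhood range λ) are our assumptions:

```python
import numpy as np

def neural_gas_step(W, x, eps, lam):
    """One adaptation step of the neural-gas rule (sketch).

    W: (N, D) array of reference (weight) vectors.
    x: (D,) input vector.
    eps: learning rate; lam: neighborhood-range parameter lambda.
    """
    # Rank all units by distance to the input ("neighborhood ranking").
    dists = np.linalg.norm(W - x, axis=1)
    ranks = np.argsort(np.argsort(dists))  # rank 0 = closest unit
    # Soft-max update: every unit moves toward x, weighted by exp(-rank/lambda).
    h = np.exp(-ranks / lam)
    return W + eps * h[:, None] * (x - W)
```

Annealing `lam` from large to small moves the rule from broad soft-max averaging toward plain winner-take-all (K-means-like) updating.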


Neural Networks | 1994

Topology representing networks

Thomas Martinetz; Klaus Schulten

A Hebbian adaptation rule with winner-take-all-like competition is introduced. It is shown that this competitive Hebbian rule forms so-called Delaunay triangulations, which play an important role in computational geometry for efficiently solving proximity problems. Given a set of neural units i, i = 1,…, N, whose synaptic weights can be interpreted as pointers wi, i = 1,…, N in RD, the competitive Hebbian rule leads to a connectivity structure between the units i that corresponds to the Delaunay triangulation of the set of pointers wi. This competitive Hebbian rule develops connections (Cij > 0) between neural units i, j with neighboring receptive fields (Voronoi polygons) Vi, Vj, whereas between all other units i, j no connections evolve (Cij = 0). Combined with a procedure that distributes the pointers wi over a given feature manifold M, for example a submanifold M ⊂ RD, the competitive Hebbian rule provides a novel approach to the problem of constructing topology-preserving feature maps and representing intricately structured manifolds. The competitive Hebbian rule connects only neural units whose receptive fields (Voronoi polygons) Vi, Vj are adjacent on the given manifold M. This leads to a connectivity structure that defines a perfectly topology-preserving map and forms a discrete, path-preserving representation of M, also in cases where M has an intricate topology. This makes the approach particularly useful in all applications where neighborhood relations have to be exploited or the shape and topology of submanifolds have to be taken into account.
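In a common formulation of the rule, each input simply connects its two closest units. A minimal sketch (function and variable names are ours, not the paper's):

```python
import numpy as np

def competitive_hebb(W, X):
    """Build the connectivity C of a competitive Hebbian rule (sketch).

    W: (N, D) pointers (synaptic weight vectors); X: iterable of inputs.
    For each input, connect the closest and second-closest units; the
    resulting edges approximate the Delaunay triangulation of W.
    """
    N = len(W)
    C = np.zeros((N, N), dtype=int)
    for x in X:
        d = np.linalg.norm(W - x, axis=1)
        i, j = np.argsort(d)[:2]   # winner and runner-up
        C[i, j] = C[j, i] = 1      # C_ij > 0 for adjacent receptive fields
    return C
```

Edges only form between units whose Voronoi polygons both lie close to some input, which is why only neighboring receptive fields end up connected.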


Journal of the Operational Research Society | 1992

Neural computation and self-organizing maps: an introduction

Helge Ritter; Thomas Martinetz; Klaus Schulten; D. Barsky; Marcus Tesch; Ronald Kates



IEEE Transactions on Neural Networks | 1997

Topology preservation in self-organizing feature maps: exact definition and measurement

Thomas Villmann; Ralf Der; J. Michael Herrmann; Thomas Martinetz

The neighborhood preservation of self-organizing feature maps like the Kohonen map is an important property which is exploited in many applications. However, if a dimensional conflict arises this property is lost. Various qualitative and quantitative approaches are known for measuring the degree of topology preservation. They are based on using the locations of the synaptic weight vectors. These approaches, however, may fail in case of nonlinear data manifolds. To overcome this problem, in this paper we present an approach which uses what we call the induced receptive fields for determining the degree of topology preservation. We first introduce a precise definition of topology preservation and then propose a tool for measuring it, the topographic function. The topographic function vanishes if and only if the map is topology preserving. We demonstrate the power of this tool for various examples of data manifolds.
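As a rough illustration of the idea (not the paper's topographic function, which is defined via induced receptive fields and graph distances), a dimensional conflict can already be detected by comparing the map's lattice graph with the data-induced neighborhood graph:

```python
import numpy as np

def topology_defects(A_lattice, A_induced):
    """Count edges on which the lattice neighborhood graph and the
    receptive-field-induced neighborhood graph disagree.

    A crude stand-in for the topographic function: it vanishes if and
    only if the two adjacency matrices coincide, i.e. the map is
    topology preserving in this simplified sense.
    """
    A_lattice = np.asarray(A_lattice)
    A_induced = np.asarray(A_induced)
    # Each undirected edge appears twice in a symmetric adjacency matrix.
    return int(np.sum(A_lattice != A_induced) // 2)
```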


Neural Networks | 1989

Topology-conserving maps for learning visuo-motor-coordination

Helge Ritter; Thomas Martinetz; Klaus Schulten

We investigate the application of an extension of Kohonen's self-organizing mapping algorithm to the learning of visuo-motor coordination of a simulated robot arm. We show that both arm kinematics and arm dynamics can be learned if a suitable representation for the map output is used. Due to the topology-conserving property of the map, spatially neighboring neurons can learn cooperatively, which greatly improves the robustness and convergence properties of the algorithm.
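The underlying Kohonen adaptation step, which the paper extends with an output representation for arm commands, can be sketched in its simplified textbook form (names and parameters are assumptions):

```python
import numpy as np

def som_step(W, grid, x, eps, sigma):
    """One step of Kohonen's self-organizing map rule (sketch).

    W: (N, D) weights; grid: (N, d) lattice coordinates of the N neurons.
    The winner and its lattice neighbors all move toward the input x,
    which is what lets spatially neighboring neurons learn cooperatively.
    """
    s = np.argmin(np.linalg.norm(W - x, axis=1))  # winning neuron
    # Gaussian neighborhood on the lattice, centered at the winner.
    h = np.exp(-np.sum((grid - grid[s]) ** 2, axis=1) / (2 * sigma ** 2))
    return W + eps * h[:, None] * (x - W)
```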


International Conference on Artificial Neural Networks | 1993

Competitive Hebbian Learning Rule Forms Perfectly Topology Preserving Maps

Thomas Martinetz

The problem of forming perfectly topology preserving maps of feature manifolds is studied. First, through introducing “masked Voronoi polyhedra” as a geometrical construct for determining neighborhood on manifolds, a rigorous definition of the term “topology preserving feature map” is given. Starting from this definition, it is shown that a network G of neural units i, i = 1, …, N has to have a lateral connectivity structure A, Aij ∈ {0, 1}, i, j = 1,…, N which corresponds to the “induced Delaunay triangulation” of the synaptic weight vectors wi ∈ ℜD in order to form a perfectly topology preserving map of a given manifold M ⊆ ℜD of features v ∈ M. The lateral connections determine the neighborhood relations between the units in the network, which have to match the neighborhood relations of the features on the manifold. If all the weight vectors wi are distributed over the given feature manifold M, and if this distribution resolves the shape of M, it can be shown that Hebbian learning with competition leads to lateral connections i-j (Aij = 1) that correspond to the edges of the “induced Delaunay triangulation” and, hence, leads to a network structure that forms a perfectly topology preserving map of M, independent of M’s topology. This yields a means for constructing perfectly topology preserving maps of arbitrarily structured feature manifolds.


IEEE Transactions on Neural Networks | 1990

Three-dimensional neural net for learning visuomotor coordination of a robot arm

Thomas Martinetz; Helge Ritter; Klaus Schulten

An extension of T. Kohonen's (1982) self-organizing mapping algorithm together with an error-correction scheme based on the Widrow-Hoff learning rule is applied to develop a learning algorithm for the visuomotor coordination of a simulated robot arm. Learning occurs by a sequence of trial movements without the need for an external teacher. Using input signals from a pair of cameras, the closed robot arm system is able to reduce its positioning error to about 0.3% of the linear dimensions of its work space. This is achieved by choosing the connectivity of a three-dimensional lattice consisting of the units of the neural net.


Neuron | 2013

Auditory Closed-Loop Stimulation of the Sleep Slow Oscillation Enhances Memory

Hong-Viet V. Ngo; Thomas Martinetz; Jan Born; Matthias Mölle

Brain rhythms regulate information processing in different states to enable learning and memory formation. The <1 Hz sleep slow oscillation hallmarks slow-wave sleep and is critical to memory consolidation. Here we show in sleeping humans that auditory stimulation in phase with the ongoing rhythmic occurrence of slow oscillation up states profoundly enhances the slow oscillation rhythm, phase-coupled spindle activity, and, consequently, the consolidation of declarative memory. Stimulation out of phase with the ongoing slow oscillation rhythm remained ineffective. Closed-loop in-phase stimulation provides a straightforward tool to enhance sleep rhythms and their functional efficacy.
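A toy version of the closed-loop idea (detect the slow-oscillation down state, then schedule a click roughly half a period later, near the following up state) might look like this; the threshold and delay values are illustrative placeholders, not the study's parameters:

```python
import numpy as np

def schedule_in_phase_clicks(eeg, fs, thresh=-80e-6, delay=0.5):
    """Toy closed-loop trigger (illustrative only).

    Find downward crossings of a negative threshold (entering the
    slow-oscillation down state) and schedule an auditory click `delay`
    seconds later, aiming at the next up state. Returns click times in
    seconds; eeg is in volts, fs in Hz.
    """
    below = eeg < thresh
    # Indices where the signal first drops below the threshold.
    onsets = np.flatnonzero(below[1:] & ~below[:-1]) + 1
    return onsets / fs + delay
```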


Physics Reports | 2001

Dynamic fitness landscapes in molecular evolution

Claus O. Wilke; Christopher Ronnewinkel; Thomas Martinetz

We study self-replicating molecules under externally varying conditions. Changing conditions such as temperature variations and/or alterations in the environment’s resource composition lead to both non-constant replication and decay rates of the molecules. In general, therefore, molecular evolution takes place in a dynamic rather than a static fitness landscape. We incorporate dynamic replication and decay rates into the standard quasispecies theory of molecular evolution, and show that for periodic time-dependencies, a system of evolving molecules enters a limit cycle for t → ∞. For fast periodic changes, we show that molecules adapt to the time-averaged fitness landscape, whereas for slow changes they track the variations in the landscape arbitrarily closely. We derive a general approximation method that allows us to calculate the attractor of time-periodic landscapes, and demonstrate using several examples that the results of the approximation and the limiting cases of very slow and very fast changes are in perfect agreement. We also discuss landscapes with arbitrary time dependencies, and show that very fast changes again lead to a system that adapts to the time-averaged landscape. Finally, we analyze the dynamics of a finite population of molecules in a dynamic landscape, and discuss its relation to the infinite population limit.
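The quasispecies dynamics underlying this analysis can be simulated directly; a minimal sketch in our own notation (x concentrations, f time-dependent fitnesses, Q the mutation matrix):

```python
import numpy as np

def quasispecies_step(x, f, Q, dt):
    """One Euler step of the quasispecies equation
        dx_i/dt = sum_j Q_ij f_j x_j - fbar * x_i,
    where fbar = sum_j f_j x_j keeps the concentrations normalized."""
    fbar = f @ x
    dx = Q @ (f * x) - fbar * x
    return x + dt * dx
```

With a periodic fitness vector f(t), iterating this step over many periods illustrates the limit-cycle behavior; for very fast oscillations the trajectory approaches that of the time-averaged landscape.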


IEEE Transactions on Neural Networks | 2008

Simple Method for High-Performance Digit Recognition Based on Sparse Coding

Kai Labusch; Erhardt Barth; Thomas Martinetz

In this brief paper, we propose a method of feature extraction for digit recognition that is inspired by vision research: a sparse-coding strategy and a local maximum operation. We show that our method, despite its simplicity, yields state-of-the-art classification results on a highly competitive digit-recognition benchmark. We first employ the unsupervised Sparsenet algorithm to learn a basis for representing patches of handwritten digit images. We then use this basis to extract local coefficients. In a second step, we apply a local maximum operation to implement local shift invariance. Finally, we train a support vector machine (SVM) on the resulting feature vectors and obtain state-of-the-art classification performance in the digit recognition task defined by the MNIST benchmark. We compare the different classification performances obtained with sparse coding, Gabor wavelets, and principal component analysis (PCA). We conclude that the learning of a sparse representation of local image patches combined with a local maximum operation for feature extraction can significantly improve recognition performance.
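The local maximum operation, the part of the pipeline providing local shift invariance, is easy to sketch; the Sparsenet basis learning and the SVM are omitted here, and the array shapes and names are assumptions:

```python
import numpy as np

def local_max_features(coeff_maps, pool=2):
    """Local maximum operation over sparse-coefficient maps (sketch).

    coeff_maps: (K, H, W) array of per-basis-function coefficient maps.
    Returns the max-pooled maps flattened to (K, (H//pool)*(W//pool)),
    giving a locally shift-invariant feature vector for a classifier.
    """
    K, H, W = coeff_maps.shape
    # Crop so height and width divide evenly into pool x pool blocks.
    m = coeff_maps[:, :H - H % pool, :W - W % pool]
    m = m.reshape(K, H // pool, pool, W // pool, pool)
    return m.max(axis=(2, 4)).reshape(K, -1)
```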

Collaboration


Dive into Thomas Martinetz's collaborations.

Top Co-Authors


Claus O. Wilke

University of Texas at Austin
