Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Marc Strickert is active.

Publication


Featured research published by Marc Strickert.


Neural Networks | 2004

Recursive self-organizing network models

Barbara Hammer; Alessandro Sperduti; Marc Strickert

Self-organizing models constitute valuable tools for data visualization, clustering, and data mining. Here, we focus on extensions of basic vector-based models by recursive computation in such a way that sequential and tree-structured data can be processed directly. The aim of this article is to give a unified review of important models recently proposed in the literature, to investigate fundamental mathematical properties of these models, and to compare the approaches by experiments. We first review several models proposed in the literature from a unifying perspective, making use of an underlying general framework which also includes supervised recurrent and recursive models as special cases. We briefly discuss how the models can be related to different neuron lattices. Then, we investigate theoretical properties of the models in detail: we explicitly formalize how structures are internally stored in different context models and which similarity measures are induced by the recursive mapping onto the structures. We assess the representational capabilities of the models, and we briefly discuss the issues of topology preservation and noise tolerance. The models are compared in an experiment with time series data. Finally, we add an experiment on one context model for tree-structured data to demonstrate its capability to process complex structures.


Neurocomputing | 2004

A general framework for unsupervised processing of structured data

Barbara Hammer; Alessandro Sperduti; Marc Strickert

We propose a general framework for unsupervised recurrent and recursive networks. This proposal covers various popular approaches such as the standard self-organizing map (SOM), temporal Kohonen maps, recursive SOM, and SOM for structured data. We define Hebbian learning within this general framework. We show how approaches based on an energy function, like neural gas, can be transferred to this abstract framework, so that proposals for new learning algorithms emerge.


international conference on artificial intelligence and soft computing | 2004

Relevance LVQ versus SVM

Barbara Hammer; Marc Strickert; Thomas Villmann

The support vector machine (SVM) constitutes one of the most successful current learning algorithms, with excellent classification accuracy on large real-life problems and a strong theoretical background. However, an SVM solution is given by a non-intuitive classification in terms of extreme values of the training set, and the size of an SVM classifier scales with the number of training data. Generalized relevance learning vector quantization (GRLVQ) has recently been introduced as a simple though powerful expansion of basic LVQ. Unlike SVM, it provides a very intuitive classification in terms of prototypical vectors, the number of which is independent of the size of the training set. Here, we discuss GRLVQ in comparison to the SVM and point out its beneficial theoretical properties, which are similar to those of SVM while providing sparse and intuitive solutions. In addition, the competitive performance of GRLVQ is demonstrated in one experiment from computational biology.


international conference on artificial neural networks | 2002

Rule Extraction from Self-Organizing Networks

Barbara Hammer; Andreas Rechtien; Marc Strickert; Thomas Villmann

Generalized relevance learning vector quantization (GRLVQ) [4] constitutes a prototype-based clustering algorithm based on LVQ [5] with an energy function and an adaptive metric. We propose a method for extracting logical rules from a trained GRLVQ network. Real-valued attributes are automatically transformed to symbolic values. The rules are given in the form of a decision tree, yielding several advantages: hybrid symbolic/subsymbolic descriptions can be obtained as an alternative, and the complexity of the rules can be controlled.


Neurocomputing | 2005

Unsupervised recursive sequence processing

Marc Strickert; Barbara Hammer; Sebastian Blohm

The self-organizing map (SOM) is a valuable tool for data visualization and data mining for potentially high-dimensional data of an a priori fixed dimensionality. We investigate SOMs for sequences and propose the SOM-S architecture for sequential data. Sequences of potentially infinite length are recursively processed by integrating the currently presented item and the recent map activation, as proposed in the SOMSD presented in (IEEE Trans. Neural Networks 14(3) (2003) 491). We combine that approach with the hyperbolic neighborhood of Ritter (Proceedings of PKDD-01, Springer, Berlin, 2001, pp. 338-349), in order to account for the representation of possibly exponentially increasing sequence diversification over time. Discrete and real-valued sequences can be processed efficiently with this method, as we will show in experiments. Temporal dependencies can be reliably extracted from a trained SOM. U-matrix methods, adapted to sequence processing SOMs, allow the detection of clusters also for real-valued sequence elements.
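The recursive processing step described in the abstract, in which each neuron matches both the current sequence item and the previous map activation, can be sketched roughly as follows. This is a minimal illustration of a SOMSD-style winner search, not the paper's implementation; all names and the blending weights `alpha` and `beta` are assumptions.

```python
import numpy as np

def somsd_winner(x_t, prev_winner_pos, weights, contexts, alpha=0.5, beta=0.5):
    """Recursive winner search in the style of SOMSD sequence SOMs:
    each neuron i blends its distance to the current item x_t with the
    distance of its context vector to the previous winner's grid position."""
    d_item = np.sum((weights - x_t) ** 2, axis=1)              # match current element
    d_ctx = np.sum((contexts - prev_winner_pos) ** 2, axis=1)  # match recent activation
    return np.argmin(alpha * d_item + beta * d_ctx)

# process a whole sequence, carrying the winner position forward as context
rng = np.random.default_rng(0)
n_neurons, dim = 16, 1
weights = rng.normal(size=(n_neurons, dim))
grid = np.arange(n_neurons, dtype=float).reshape(-1, 1)  # 1-D lattice positions
contexts = rng.normal(size=(n_neurons, 1))

winner = grid[0]
for x in rng.normal(size=(20, dim)):
    i = somsd_winner(x, winner, weights, contexts)
    winner = grid[i]
```

In a full SOM-S, the weights and contexts would also be adapted toward the winner with a (possibly hyperbolic) neighborhood function; only the recursive distance computation is shown here.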


PLOS ONE | 2014

Comprehensive Transcriptome Analysis Unravels the Existence of Crucial Genes Regulating Primary Metabolism during Adventitious Root Formation in Petunia hybrida

Amirhossein Ahkami; Uwe Scholz; Burkhard Steuernagel; Marc Strickert; Klaus-Thomas Haensch; Uwe Druege; Didier Reinhardt; Eva Nouri; Nicolaus von Wirén; Philipp Franken; Mohammad-Reza Hajirezaei

To identify specific genes determining the initiation and formation of adventitious roots (AR), a microarray-based transcriptome analysis in the stem base of cuttings of Petunia hybrida (line W115) was conducted. A microarray carrying 24,816 unique, non-redundant annotated sequences was hybridized to probes derived from different stages of AR formation. After exclusion of wound-responsive and root-regulated genes, 1,354 genes were identified that were significantly and specifically induced during various phases of AR formation. Based on a recent physiological model distinguishing three metabolic phases in AR formation, the present paper focuses on the response of genes related to particular metabolic pathways. Key genes involved in primary carbohydrate metabolism, such as those mediating apoplastic sucrose unloading, were induced at the early sink establishment phase of AR formation. Transcriptome changes also pointed to a possible role of trehalose metabolism and SnRK1 (sucrose non-fermenting 1-related protein kinase) in sugar sensing during this early step of AR formation. Symplastic sucrose unloading and nucleotide biosynthesis were the major processes induced during the later recovery and maintenance phases. Moreover, transcripts involved in peroxisomal beta-oxidation were up-regulated during different phases of AR formation. In addition to metabolic pathways, the analysis revealed the activation of cell division at the two later phases and in particular the induction of G1-specific genes in the maintenance phase. Furthermore, results point towards a specific demand for certain mineral nutrients starting in the recovery phase.


international conference on artificial neural networks | 2001

Generalized Relevance LVQ for Time Series

Marc Strickert; Thorsten Bojer; Barbara Hammer

An application of the recently proposed generalized relevance learning vector quantization (GRLVQ) to the analysis and modeling of time series data is presented. We use GRLVQ for two tasks: first, for obtaining a phase space embedding of a scalar time series, and second, for short-term and long-term data prediction. The proposed embedding method is tested with a signal from the well-known Lorenz system. Afterwards, it is applied to daily lysimeter observations of water runoff. A one-step prediction of the runoff dynamic is obtained from the classification of high-dimensional subseries data vectors, from which a promising technique for long-term forecasts is derived.
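The high-dimensional subseries vectors mentioned above come from a time-delay embedding of the scalar series. The sketch below shows only that plain embedding step; the paper's contribution of learning embedding relevances with GRLVQ is not reproduced, and the function name and parameters are assumptions.

```python
import numpy as np

def delay_embed(series, dim, lag=1):
    """Turn a scalar time series into overlapping subseries vectors
    (a standard time-delay / phase-space embedding)."""
    n = len(series) - (dim - 1) * lag
    return np.array([series[i:i + dim * lag:lag] for i in range(n)])

x = np.sin(np.linspace(0, 8 * np.pi, 100))   # toy scalar series
X = delay_embed(x, dim=3, lag=2)
# each row X[t] is (x_t, x_{t+2}, x_{t+4}); the following value of the
# series would serve as the one-step prediction target for that row
```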


computational intelligence and data mining | 2013

Regularization and improved interpretation of linear data mappings and adaptive distance measures

Marc Strickert; Barbara Hammer; Thomas Villmann; Michael Biehl

Linear data transformations are essential operations in many machine learning algorithms, helping to make such models more flexible or to emphasize certain data directions. For high-dimensional data sets in particular, however, linear transformations are not necessarily uniquely determined, and alternative parameterizations exist which do not change the mapping of the training data. Thus, regularization is required to make the model robust to noise and more interpretable for the user. In this contribution, we characterize the group of transformations which leave a linear mapping invariant for a given finite data set, and we discuss the consequences for the interpretability of the models. We propose an intuitive regularization mechanism to avoid problems in under-determined configurations, and we test the approach in two machine learning models.
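The non-uniqueness the abstract refers to is easy to demonstrate: when there are fewer samples than input dimensions, any perturbation of the mapping that vanishes on the data's span leaves every training point's image unchanged. This is a small numerical illustration, not the paper's regularizer; all variable names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n, k = 5, 3, 2                  # input dim d exceeds sample count n
X = rng.normal(size=(d, n))        # data matrix, one sample per column
A = rng.normal(size=(k, d))        # a linear mapping x -> A @ x

# build B whose rows lie in the left null space of X, so B @ X == 0
U, s, Vt = np.linalg.svd(X, full_matrices=True)
null_basis = U[:, n:]                          # d - n directions unseen by the data
B = rng.normal(size=(k, d - n)) @ null_basis.T

assert np.allclose(B @ X, 0)                   # B annihilates every training point
assert np.allclose(A @ X, (A + B) @ X)         # A and A + B map the data identically
```

A and A + B are indistinguishable on the training set yet can assign very different relevances to input dimensions, which is exactly why regularization is needed before interpreting the mapping's coefficients.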


international conference on artificial neural networks | 2002

Learning Vector Quantization for Multimodal Data

Barbara Hammer; Marc Strickert; Thomas Villmann

Learning vector quantization (LVQ) as proposed by Kohonen is a simple and intuitive, yet very successful, prototype-based clustering algorithm. Generalized relevance LVQ (GRLVQ) constitutes a modification which obeys the dynamics of a gradient descent and allows an adaptive metric utilizing relevance factors for the input dimensions. As iterative algorithms with local learning rules, LVQ and its modifications crucially depend on the initialization of the prototypes. They often fail for multimodal data. We propose a variant of GRLVQ which introduces ideas of the neural gas algorithm, incorporating a global neighborhood coordination of the prototypes. The resulting learning algorithm, supervised relevance neural gas, is capable of learning highly multimodal data, while sharing the benefits of a gradient dynamics and an adaptive metric with GRLVQ.
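The prototype and relevance adaptation that GRLVQ builds on can be sketched as below. This is a simplified heuristic version for illustration only: the actual GRLVQ gradient involves a sigmoidal function of the relative distances, and the neural-gas neighborhood coordination of the paper's variant is omitted. All names and learning rates are assumptions.

```python
import numpy as np

def grlvq_step(x, y, prototypes, labels, lam, lr_w=0.05, lr_l=0.005):
    """One simplified GRLVQ-style update: find the closest correct and
    closest wrong prototype under the relevance-weighted metric
    d(x, w) = sum_i lam_i * (x_i - w_i)^2, move them, and adapt lam."""
    d = ((prototypes - x) ** 2 * lam).sum(axis=1)
    correct = labels == y
    j_p = np.where(correct)[0][np.argmin(d[correct])]    # best matching correct
    j_m = np.where(~correct)[0][np.argmin(d[~correct])]  # best matching wrong
    diff_p, diff_m = x - prototypes[j_p], x - prototypes[j_m]
    prototypes[j_p] += lr_w * lam * diff_p               # attract correct prototype
    prototypes[j_m] -= lr_w * lam * diff_m               # repel wrong prototype
    # relevance update: dimensions that separate the classes gain weight
    lam = np.clip(lam - lr_l * (diff_p ** 2 - diff_m ** 2), 0, None)
    return prototypes, lam / lam.sum()                   # keep lam normalized

# toy usage with two prototypes per class
rng = np.random.default_rng(3)
protos = rng.normal(size=(4, 2))
plabels = np.array([0, 0, 1, 1])
lam = np.full(2, 0.5)
protos, lam = grlvq_step(np.array([1.0, -1.0]), 0, protos, plabels, lam)
```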


IEEE Transactions on Knowledge and Data Engineering | 2016

CavSimBase: A Database for Large Scale Comparison of Protein Binding Sites

Matthias Leinweber; Thomas Fober; Marc Strickert; Lars Baumgartner; Gerhard Klebe; Bernd Freisleben; Eyke Hüllermeier

CavBase is a database containing information about the three-dimensional geometry and the physicochemical properties of putative protein binding sites. Analyzing CavBase data typically involves computing the similarity of pairs of binding sites. In contrast to sequence alignment, however, a structural comparison of protein binding sites is a computationally challenging problem, making large-scale studies difficult or even infeasible. One possibility to overcome this obstacle is to precompute pairwise similarities in an all-against-all comparison, and to make these similarities subsequently accessible to data analysis methods. Pairwise similarities, once computed, can also be used to equip CavBase with a neighborhood structure. Taking advantage of this structure, methods for problems such as similarity retrieval can be implemented efficiently. In this paper, we tackle the problem of performing an all-against-all comparison of CavBase, which contains more than 200,000 protein cavities, by means of parallel computation and cloud computing techniques. We present the conceptual design and technical realization of a large-scale study to create a similarity database called CavSimBase. We illustrate how CavSimBase is constructed, accessed, and used to answer biological questions by data analysis and similarity retrieval.
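The precompute-then-query pattern described above can be sketched in miniature. The stand-in cosine similarity below replaces the expensive structural comparison of real binding sites, and all names are assumptions; the point is only that the symmetric table is filled once so retrieval becomes a lookup.

```python
import numpy as np
from itertools import combinations

def all_against_all(items, similarity):
    """Precompute the symmetric pairwise-similarity table once, so that
    later queries are simple lookups instead of expensive comparisons."""
    n = len(items)
    S = np.eye(n)                           # self-similarity fixed at 1
    for i, j in combinations(range(n), 2):  # each unordered pair once
        S[i, j] = S[j, i] = similarity(items[i], items[j])
    return S

def retrieve(S, query_idx, k=3):
    """Similarity retrieval straight from the precomputed table."""
    order = np.argsort(-S[query_idx])
    return [i for i in order if i != query_idx][:k]

# toy stand-in: cosine similarity between random "cavity descriptors"
rng = np.random.default_rng(2)
cavities = rng.normal(size=(6, 4))
cos = lambda a, b: float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
S = all_against_all(cavities, cos)
```

At CavBase scale (over 200,000 cavities, i.e. roughly 2 x 10^10 pairs), the pair loop is exactly the part that the paper distributes across parallel cloud workers.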

Collaboration


Dive into Marc Strickert's collaboration.

Top Co-Authors

Kerstin Bunte

University of Birmingham
