Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Marko Tscherepanow is active.

Publication


Featured research published by Marko Tscherepanow.


Pattern Recognition | 2012

A saliency map based on sampling an image into random rectangular regions of interest

Tadmeri Narayan Vikram; Marko Tscherepanow; Britta Wrede

In this article we propose a novel approach to compute an image saliency map based on computing local saliencies over random rectangular regions of interest. Unlike many of the existing methods, the proposed approach does not require any training bases, operates on the image at the original scale and has only a single parameter which requires tuning. It has been tested on the two distinct tasks of salient region detection (using MSRA dataset) and eye gaze prediction (using York University and MIT datasets). The proposed method achieves state-of-the-art performance on the eye gaze prediction task as compared with nine other state-of-the-art methods.
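
The core idea, local saliency accumulated over randomly sampled rectangular regions, can be sketched in a few lines. The Python snippet below is a minimal illustrative reading of that idea, not the authors' exact formulation: the deviation-from-patch-mean measure and the normalisation are assumptions made here, and the number of rectangles plays the role of the single tunable parameter mentioned in the abstract.

```python
import numpy as np

def random_rectangle_saliency(gray, n_rects=1000, rng=None):
    """Toy saliency map: accumulate, over random rectangles, each pixel's
    absolute deviation from the rectangle's mean intensity.
    gray: 2D greyscale image array. Illustrative sketch only."""
    rng = np.random.default_rng(rng)
    h, w = gray.shape
    acc = np.zeros((h, w), dtype=np.float64)
    counts = np.zeros((h, w), dtype=np.float64)
    for _ in range(n_rects):
        y0, y1 = np.sort(rng.integers(0, h, size=2))
        x0, x1 = np.sort(rng.integers(0, w, size=2))
        y1 += 1; x1 += 1                        # ensure a non-empty region
        patch = gray[y0:y1, x0:x1]
        acc[y0:y1, x0:x1] += np.abs(patch - patch.mean())
        counts[y0:y1, x0:x1] += 1.0
    sal = acc / np.maximum(counts, 1.0)
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)

# Hypothetical usage: sal = random_rectangle_saliency(image, n_rects=2000)
```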


Industrial Conference on Data Mining | 2008

Automatic Segmentation of Unstained Living Cells in Bright-Field Microscope Images

Marko Tscherepanow; Frank G. Zöllner; Matthias Hillebrand; Franz Kummert

The automatic subcellular localisation of proteins in living cells is a critical step in determining their function. The evaluation of fluorescence images constitutes a common method of localising these proteins. For this, additional knowledge about the position of the considered cells within an image is required. In an automated system, it is advantageous to recognise these cells in bright-field microscope images taken in parallel with the corresponding fluorescence micrographs. Unfortunately, currently available cell recognition methods are only of limited use in the context of protein localisation, since they frequently require microscopy techniques that yield images of higher contrast (e.g. phase contrast microscopy or additional dyes) or can only be employed at magnifications that are too low. Therefore, this article introduces a novel approach to the robust automatic recognition of unstained living cells in bright-field microscope images. Here, the focus is on the automatic segmentation of cells.


Neural Networks | 2011

2011 Special Issue: A hierarchical ART network for the stable incremental learning of topological structures and associations from noisy data

Marko Tscherepanow; Marco Kortkamp; Marc Kammer

In this article, a novel unsupervised neural network combining elements from Adaptive Resonance Theory and topology-learning neural networks is presented. It enables stable on-line clustering of stationary and non-stationary input data by learning their inherent topology. Here, two network components representing two different levels of detail are trained simultaneously. By virtue of several filtering mechanisms, the sensitivity to noise is diminished, which renders the proposed network suitable for the application to real-world problems. Furthermore, we demonstrate that this network constitutes an excellent basis to learn and recall associations between real-world associative keys. Its incremental nature ensures that the capacity of the corresponding associative memory fits the amount of knowledge to be learnt. Moreover, the formed clusters efficiently represent the relations between the keys, even if noisy data is used for training. In addition, we present an iterative recall mechanism to retrieve stored information based on one of the associative keys used for training. As different levels of detail are learnt, the recall can be performed with different degrees of accuracy.
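
Networks of this kind build on Fuzzy ART modules. The sketch below shows only the standard Fuzzy ART choice/match/learn cycle with complement coding, which the hierarchical network described above extends with topology edges, node counters and noise filtering; the parameters alpha, beta and rho are the usual Fuzzy ART ones, and nothing here is specific to this paper.

```python
import numpy as np

class FuzzyART:
    """Minimal Fuzzy ART module (the building block that TopoART-style
    networks extend). alpha: choice parameter, beta: learning rate,
    rho: vigilance. Inputs are expected in [0, 1]^dim."""
    def __init__(self, dim, alpha=0.001, beta=1.0, rho=0.8):
        self.alpha, self.beta, self.rho = alpha, beta, rho
        self.w = np.empty((0, 2 * dim))          # complement-coded weights

    def _code(self, x):
        x = np.asarray(x, dtype=float)
        return np.concatenate([x, 1.0 - x])      # complement coding

    def train(self, x):
        i = self._code(x)
        if len(self.w) == 0:
            self.w = i[None, :].copy()
            return 0
        match = np.minimum(i, self.w).sum(axis=1)           # |I ∧ w_j|
        choice = match / (self.alpha + self.w.sum(axis=1))  # choice function
        for j in np.argsort(choice)[::-1]:                  # best-first search
            if match[j] / i.sum() >= self.rho:              # vigilance test
                self.w[j] = self.beta * np.minimum(i, self.w[j]) \
                            + (1 - self.beta) * self.w[j]
                return j
        self.w = np.vstack([self.w, i])                     # commit new node
        return len(self.w) - 1
```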


International Machine Vision and Image Processing Conference (IMVIP 2007) | 2007

Recognition of Unstained Live Drosophila Cells in Microscope Images

Marko Tscherepanow; Nickels Jensen; Franz Kummert

In order to localise tagged proteins in living cells, the surrounding cells must be recognised first. Based on previous work regarding cell recognition in bright-field images, we propose an approach to the automated recognition of unstained live Drosophila cells, which are of high biological relevance. In order to achieve this goal, the original methods were extended to enable the additional application of an alternative microscopy technique, since the exclusive usage of bright-field images does not allow for an accurate segmentation of the considered cells. In order to cope with the increased number of parameters to be set, a genetic algorithm is applied. Furthermore, the employed segmentation and classification techniques needed to be adapted to the new cell characteristics. Therefore, a modified active contour approach and an enhanced feature set, allowing for a more detailed description of the obtained segments, are introduced.
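
The abstract mentions a genetic algorithm for coping with the enlarged parameter set. A generic real-valued GA for such a parameter search might look as follows; the fitness callable (e.g. a hypothetical segmentation_quality(params) score) and all hyperparameters are placeholders, not details taken from the paper.

```python
import numpy as np

def genetic_parameter_search(fitness, bounds, pop_size=20, generations=50,
                             mutation_scale=0.1, rng=None):
    """Generic GA for tuning a real-valued parameter vector.
    fitness: callable mapping a parameter vector to a score (higher is better).
    bounds: list of (low, high) pairs, one per parameter."""
    rng = np.random.default_rng(rng)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(bounds)))
    for _ in range(generations):
        scores = np.array([fitness(p) for p in pop])
        order = np.argsort(scores)[::-1]
        elite = pop[order[: pop_size // 2]]                 # truncation selection
        parents = elite[rng.integers(0, len(elite), size=(pop_size, 2))]
        mask = rng.random((pop_size, len(bounds))) < 0.5    # uniform crossover
        children = np.where(mask, parents[:, 0], parents[:, 1])
        children += rng.normal(0, mutation_scale * (hi - lo), children.shape)
        pop = np.clip(children, lo, hi)
        pop[0] = elite[0]                                   # elitism
    scores = np.array([fitness(p) for p in pop])
    return pop[np.argmax(scores)]
```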


International Conference on Pattern Recognition | 2006

Classification of Segmented Regions in Brightfield Microscope Images

Marko Tscherepanow; Frank G. Zöllner; Franz Kummert

The subcellular localisation of proteins in living cells is an important step in determining their function. A common method is the evaluation of fluorescence images. The position of marked proteins, visible as bright spots, enables conclusions concerning their function. In order to determine the subcellular localisation, it is crucial to know the exact positions of the considered cells within an image. These are provided by the segmentation of a corresponding brightfield microscope image. As the resulting segments do not exclusively comprise cells, they have to be classified. Therefore, we propose an approach for the classification of the resulting segments into cells and non-cells, which is an essential step of the automatic recognition of cells and thus of the automatic subcellular localisation of proteins in living cells.


International Conference on Artificial Neural Networks | 2010

TopoART: a topology learning hierarchical ART network

Marko Tscherepanow

In this paper, a novel unsupervised neural network combining elements from Adaptive Resonance Theory and topology learning neural networks, in particular the Self-Organising Incremental Neural Network, is introduced. It enables stable on-line clustering of stationary and non-stationary input data. In addition, two representations reflecting different levels of detail are learnt simultaneously. Furthermore, the network is designed in such a way that its sensitivity to noise is diminished, which renders it suitable for the application to real-world problems.
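
Building on the FuzzyART sketch given earlier (after the Neural Networks 2011 entry), the two-level structure can be imitated by training a second, more finely tuned module only on samples the first module already covers. Assuming the second module uses the stricter vigilance rho_b = (rho_a + 1) / 2, a toy composition might look as follows; the gating rule is a deliberate simplification of the network's counter-based noise filtering and topology learning, not the published mechanism.

```python
# Toy two-level composition, reusing the FuzzyART class from the earlier sketch.
class TwoLevelART:
    def __init__(self, dim, rho_a=0.8):
        self.a = FuzzyART(dim, rho=rho_a)                  # coarse module
        self.b = FuzzyART(dim, rho=(rho_a + 1.0) / 2.0)    # fine module

    def train(self, x):
        nodes_before = len(self.a.w)
        self.a.train(x)
        if len(self.a.w) == nodes_before:   # x matched an existing coarse node
            self.b.train(x)                 # refine the representation
```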


Workshop on Applications of Computer Vision | 2011

A random center surround bottom up visual attention model useful for salient region detection

Tadmeri Narayan Vikram; Marko Tscherepanow; Britta Wrede

In this article, we propose a bottom-up saliency model which works by capturing the contrast between random pixels in an image. The model is explained on the basis of the stimulus bias between two given stimuli (pixel intensity values) in an image and has a minimal set of tunable parameters. The methodology does not require any training bases or priors. We followed an established experimental setting and obtained state-of-the-art results for salient region detection on the MSRA dataset. Further experiments demonstrate that our method is robust to noise and has, in comparison to six other state-of-the-art models, a consistent performance in terms of recall, precision and F-measure.
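
As with the rectangle-based method above, the pixel-pair idea lends itself to a compact sketch. The snippet below is one plausible reading of "contrast between random pixels", accumulating absolute intensity differences over randomly drawn pixel pairs; it is not the published model, and the normalisation choices are assumptions.

```python
import numpy as np

def random_pixel_contrast_saliency(gray, n_pairs=200000, rng=None):
    """Toy saliency: accumulate absolute intensity differences between
    randomly drawn pixel pairs, crediting both pixels of each pair."""
    rng = np.random.default_rng(rng)
    flat = gray.astype(float).ravel()
    p = rng.integers(0, flat.size, size=n_pairs)
    q = rng.integers(0, flat.size, size=n_pairs)
    diff = np.abs(flat[p] - flat[q])
    acc = np.zeros_like(flat)
    cnt = np.zeros_like(flat)
    np.add.at(acc, p, diff); np.add.at(acc, q, diff)
    np.add.at(cnt, p, 1.0);  np.add.at(cnt, q, 1.0)
    sal = (acc / np.maximum(cnt, 1.0)).reshape(gray.shape)
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)
```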


IEEE-RAS International Conference on Humanoid Robots | 2009

Direct imitation of human facial expressions by a user-interface robot

Marko Tscherepanow; Matthias Hillebrand; Frank Hegel; Britta Wrede; Franz Kummert

Imitating the facial expressions of another person is a meaningful signal within interpersonal communication. Providing a robot with the capability of imitating the face of an interactant marks a first step towards implementing a communication model of mimicry. In this paper, we present a novel approach to facial expression imitation which does not require observed expressions to be assigned to a set of basic emotional expressions. Rather, arbitrary expressions are directly imitated exclusively based on camera images of the interactant's face. Consequently, the repertoire of displayable expressions is extended significantly and becomes more appropriate for interactions with humans.


BMC Bioinformatics | 2008

An incremental approach to automated protein localisation

Marko Tscherepanow; Nickels Jensen; Franz Kummert

Background: The subcellular localisation of proteins in intact living cells is an important means for gaining information about protein functions. Even dynamic processes can be captured, which can barely be predicted based on amino acid sequences. Besides increasing our knowledge about intracellular processes, this information facilitates the development of innovative therapies and new diagnostic methods. In order to perform such a localisation, the proteins under analysis are usually fused with a fluorescent protein, so that they can be observed by means of a fluorescence microscope and analysed. In recent years, several automated methods have been proposed for performing such analyses. Here, two different types of approaches can be distinguished: techniques which enable the recognition of a fixed set of protein locations and methods that identify new ones. To our knowledge, a combination of both approaches – i.e. a technique which enables supervised learning using a known set of protein locations and is able to identify and incorporate new protein locations afterwards – has not been presented yet. Furthermore, associated problems, e.g. the recognition of the cells to be analysed, have usually been neglected.

Results: We introduce a novel approach to automated protein localisation in living cells. In contrast to well-known techniques, the protein localisation technique presented in this article aims at combining the two types of approaches described above: after an automatic identification of unknown protein locations, a potential user is enabled to incorporate them into the pre-trained system. An incremental neural network allows the classification of a fixed set of protein locations as well as the detection, clustering and incorporation of additional patterns that occur during an experiment. Here, the proposed technique achieves promising results with respect to both tasks. In addition, the protein localisation procedure has been adapted to an existing cell recognition approach. Therefore, it is especially well-suited for high-throughput investigations where user interactions have to be avoided.

Conclusion: We have shown that several aspects required for developing an automatic protein localisation technique – namely the recognition of cells, the classification of protein distribution patterns into a set of learnt protein locations, and the detection and learning of new locations – can be combined successfully. So, the proposed method constitutes a crucial step towards rendering image-based protein localisation techniques amenable to large-scale experiments.
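
The combination the abstract describes, classifying into known protein locations while detecting and incorporating new ones, can be illustrated with a toy open-set classifier. The nearest-prototype scheme and the distance threshold below are stand-ins for the incremental neural network actually used; names such as IncrementalLocaliser and new_location_0 are hypothetical.

```python
import numpy as np

class IncrementalLocaliser:
    """Toy nearest-prototype classifier that flags and incorporates novel
    protein-location patterns. Illustrates the 'known classes plus detection
    of new ones' idea only; not the published incremental neural network."""
    def __init__(self, novelty_threshold=1.0):
        self.prototypes = []        # list of (label, feature-vector) pairs
        self.threshold = novelty_threshold
        self._next_new = 0

    def fit_known(self, features, labels):
        for x, y in zip(features, labels):
            self.prototypes.append((y, np.asarray(x, dtype=float)))

    def classify(self, x):
        x = np.asarray(x, dtype=float)
        dists = [np.linalg.norm(x - p) for _, p in self.prototypes]
        if not dists or min(dists) > self.threshold:
            label = f"new_location_{self._next_new}"   # novel pattern detected
            self._next_new += 1
            self.prototypes.append((label, x))          # incorporate it
            return label
        return self.prototypes[int(np.argmin(dists))][0]
```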


Neurocomputing | 2013

ART-based fusion of multi-modal perception for robots

Elmar Berghöfer; Denis Schulze; Christian Rauch; Marko Tscherepanow; Tim Köhler; Sven Wachsmuth

Robotic application scenarios in uncontrolled environments pose high demands on mobile robots. This is especially true if human-robot interaction or robot-robot interaction is involved. Here, potential interaction partners need to be identified. To tackle challenges like this, robots make use of different sensory systems. In many cases, these robots have to deal with erroneous data from different sensory systems which often are processed separately. A possible strategy to improve identification results is to combine different processing results of complementary sensors. Their relation is often hard coded and difficult to learn incrementally if new kinds of objects or events occur. In this paper, we present a new fusion strategy which we call the Simplified Fusion ARTMAP (SiFuAM) which is very flexible and therefore can be easily adapted to new domains or sensor configurations. As our approach is based on the Adaptive Resonance Theory (ART) it is inherently capable of incremental on-line learning. We show its applicability in different robotic scenarios and platforms and give an overview of its performance.
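
One generic way to picture ART-based sensor fusion is early fusion: scale each sensory channel to the unit interval, concatenate the channels, and hand the result to an ART-style learner such as the FuzzyART sketch given earlier. The snippet below illustrates only this fusion-by-concatenation idea; SiFuAM's actual architecture (per-channel modules, map field, supervised labels) is not reproduced, and the sensor names and ranges in the usage comment are hypothetical.

```python
import numpy as np

def fuse_and_train(art, sensor_readings, ranges):
    """Generic early-fusion step: scale each sensory channel to [0, 1],
    concatenate, and feed the result to an ART-style module exposing a
    train() method (e.g. the FuzzyART sketch above)."""
    parts = []
    for reading, (lo, hi) in zip(sensor_readings, ranges):
        x = (np.asarray(reading, dtype=float) - lo) / (hi - lo)
        parts.append(np.clip(x, 0.0, 1.0))
    return art.train(np.concatenate(parts))

# Hypothetical usage: camera features (length 4) and laser features (length 2)
# art = FuzzyART(dim=6, rho=0.7)
# fuse_and_train(art, [cam_feats, laser_feats], [(0, 255), (0.0, 5.0)])
```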

Collaboration


Dive into Marko Tscherepanow's collaborations.
