Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Stephan Tschechne is active.

Publication


Featured research published by Stephan Tschechne.


Affective Computing and Intelligent Interaction | 2011

Multiple classifier systems for the classification of audio-visual emotional states

Michael Glodek; Stephan Tschechne; Georg Layher; Martin Schels; Tobias Brosch; Stefan Scherer; Markus Kächele; Miriam Schmidt; Heiko Neumann; Günther Palm; Friedhelm Schwenker

Research activities in the field of human-computer interaction have increasingly addressed the integration of some form of emotional intelligence. Human emotions are expressed through different modalities such as speech, facial expressions, and hand or body gestures; therefore, the classification of human emotions should be considered a multimodal pattern recognition problem. The aim of our paper is to investigate multiple classifier systems utilizing audio and visual features to classify human emotional states. To that end, a variety of features has been derived. From the audio signal, the fundamental frequency, LPC and MFCC coefficients, and RASTA-PLP features have been used. In addition, two types of visual features have been computed, namely form and motion features of intermediate complexity. The numerical evaluation has been performed on the four emotional labels Arousal, Expectancy, Power, and Valence as defined in the AVEC data set. As classifier architectures, multiple classifier systems are applied; these have proven to be accurate and robust against missing and noisy data.
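
As a rough illustration of the late-fusion idea behind such multiple classifier systems, the sketch below trains one probabilistic classifier per modality and averages their class posteriors. It assumes scikit-learn is available, uses random toy features in place of the real MFCC/RASTA-PLP and form/motion descriptors, and is not the architecture evaluated in the paper.

```python
# Minimal late-fusion sketch of a multiple classifier system; per-modality
# feature matrices are assumed to be extracted already. Illustrative only.
import numpy as np
from sklearn.svm import SVC

def train_fusion(feature_sets, labels):
    """Train one probabilistic SVM per modality."""
    return [SVC(probability=True).fit(X, labels) for X in feature_sets]

def predict_fusion(classifiers, feature_sets):
    """Average the class posteriors of all base classifiers (sum rule)."""
    probs = [clf.predict_proba(X) for clf, X in zip(classifiers, feature_sets)]
    return np.mean(probs, axis=0).argmax(axis=1)

# Toy usage: two modalities, binary labels (e.g., high vs. low Arousal).
rng = np.random.default_rng(0)
audio, video = rng.normal(size=(40, 13)), rng.normal(size=(40, 20))
y = rng.integers(0, 2, size=40)
clfs = train_fusion([audio, video], y)
print(predict_fusion(clfs, [audio, video])[:10])
```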


Frontiers in Neuroscience | 2015

On event-based optical flow detection

Tobias Brosch; Stephan Tschechne; Heiko Neumann

Event-based sensing, i.e., the asynchronous detection of luminance changes, promises low energy consumption, high dynamic range, and sparse sensing. This stands in contrast to whole-image frame-wise acquisition by standard cameras. Here, we systematically investigate the implications of event-based sensing in the context of visual motion, or flow, estimation. Starting from a common theoretical foundation, we discuss different principal approaches for optical flow detection, ranging from gradient-based methods over plane-fitting to filter-based methods, and identify strengths and weaknesses of each class. Gradient-based methods for local motion integration are shown to suffer from the sparse encoding in address-event representations (AER). Approaches exploiting the local plane-like structure of the event cloud, on the other hand, are shown to be well suited. Within this class, filter-based approaches are shown to define a proper detection scheme which can also deal with the problem of representing multiple motions at a single location (motion transparency). A novel biologically inspired efficient motion detector is proposed, analyzed, and experimentally validated. Furthermore, a stage of surround normalization is incorporated. Together with the filtering, this defines a canonical circuit for motion feature detection. The theoretical analysis shows that such an integrated circuit reduces motion ambiguity in addition to decorrelating the representation of motion-related activations.
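
The plane-fitting class of approaches discussed above can be illustrated compactly: events produced by a moving edge form a roughly planar surface in (x, y, t), and the plane's gradient yields the local velocity. The sketch below is a minimal least-squares version with a simplified event format; neighborhood selection and the robustness measures of real implementations are omitted.

```python
# Fit t = a*x + b*y + c to the events in a local spatio-temporal
# neighborhood; the gradient (a, b) of this time surface encodes motion.
import numpy as np

def flow_from_events(xs, ys, ts):
    """Least-squares plane fit; returns velocity (vx, vy)."""
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    (a, b, _), *_ = np.linalg.lstsq(A, ts, rcond=None)
    n2 = a * a + b * b
    if n2 < 1e-12:           # degenerate fit: no consistent motion
        return 0.0, 0.0
    # Velocity is the inverse gradient of the time surface.
    return a / n2, b / n2

# Toy usage: events from an edge moving at 2 px per time unit in x.
xs = np.array([0.0, 1.0, 2.0, 3.0])
ts = xs / 2.0
print(flow_from_events(xs, np.zeros_like(xs), ts))  # ~ (2.0, 0.0)
```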


Frontiers in Computational Neuroscience | 2014

Hierarchical representation of shapes in visual cortex: from localized features to figural shape segregation

Stephan Tschechne; Heiko Neumann

Visual structures in the environment are segmented into image regions, and these are combined into a representation of surfaces and prototypical objects. Such a perceptual organization is performed by complex neural mechanisms in the visual cortex of primates. Multiple mutually connected areas in the ventral cortical pathway receive visual input and extract local form features that are subsequently grouped into increasingly complex, more meaningful image elements. Such a distributed processing network must be capable of making accessible highly articulated changes in the shape boundary as well as very subtle curvature changes that contribute to the perception of an object. We propose a recurrent computational network architecture that utilizes hierarchical distributed representations of shape features to encode surface and object boundaries over different scales of resolution. Our model makes use of neural mechanisms that model the processing capabilities of early and intermediate stages in visual cortex, namely areas V1–V4 and IT. We suggest that multiple specialized component representations interact by feedforward hierarchical processing that is combined with feedback signals driven by representations generated at higher stages. Based on this, global configurational as well as local information is made available to distinguish changes in the object's contour. Once the outline of a shape has been established, contextual contour configurations are used to assign border-ownership directions and thus achieve segregation of figure and ground. The model thus proposes how separate mechanisms contribute to a distributed hierarchical cortical shape representation and combine with processes of figure-ground segregation. We probe our model with a selection of stimuli to illustrate results at different stages of processing. We especially highlight how modulatory feedback connections contribute to the processing of visual input at various stages in the processing hierarchy.
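
A minimal way to picture the modulatory feedback principle the abstract describes: feedback from a higher stage multiplicatively enhances feedforward activity without creating activity where there is none, followed by divisive normalization. The gain value and toy activity maps below are illustrative assumptions, not parameters of the actual V1–V4/IT model.

```python
# Modulatory feedback: higher-stage predictions bias, but never create,
# feedforward activity; divisive normalization follows within the stage.
import numpy as np

def modulated_response(ff, fb, gain=2.0, eps=1e-6):
    """ff: feedforward activity map; fb: reentrant feedback prediction."""
    enhanced = ff * (1.0 + gain * fb)         # zero ff stays zero
    return enhanced / (eps + enhanced.sum())  # divisive normalization

ff = np.array([0.0, 0.2, 0.8, 0.2])           # local contour evidence
fb = np.array([1.0, 1.0, 0.0, 0.0])           # higher-stage shape hypothesis
print(modulated_response(ff, fb))  # index 0 stays silent; index 1 is enhanced
```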


Artificial Neural Networks in Pattern Recognition | 2014

Bio-Inspired Optic Flow from Event-Based Neuromorphic Sensor Input

Stephan Tschechne; Roman Sailer; Heiko Neumann

Computational models of visual processing often use frame-based image acquisition techniques to process a temporally changing stimulus. This approach is unlike biological mechanisms that are spike-based and independent of individual frames. The neuromorphic Dynamic Vision Sensor (DVS) [Lichtsteiner et al., 2008] provides a stream of independent visual events that indicate local illumination changes, resembling spiking neurons at a retinal level. We introduce a new approach for the modelling of cortical mechanisms of motion detection along the dorsal pathway using this type of representation. Our model combines filters with spatio-temporal tunings also found in visual cortex to yield spatio-temporal and direction specificity. We probe our model with recordings of test stimuli, articulated motion and ego-motion. We show how our approach robustly estimates optic flow and also demonstrate how this output can be used for classification purposes.
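
To make the filter-based mechanism concrete, the sketch below correlates a short space-time volume of accumulated events with a spatio-temporally tilted kernel, which responds preferentially to one motion direction. The kernel shape and event-volume format are illustrative assumptions rather than the model's actual cortical filters.

```python
# Direction selectivity via a space-time tilted kernel: a kernel whose
# positive lobe shifts rightward over time responds to rightward motion.
import numpy as np

def tilted_kernel(size=5, steps=3, v=1.0):
    """Space-time kernel whose positive lobe shifts by v pixels per step."""
    k = np.zeros((steps, size))
    for t in range(steps):
        x = int(round((size // 2) + v * (t - steps // 2)))
        if 0 <= x < size:
            k[t, x] = 1.0
    return k - k.mean()                      # zero-mean: ignore static input

def direction_response(volume, v):
    """Correlate an event volume (time x space) with a motion-tuned kernel."""
    return float((volume * tilted_kernel(volume.shape[1], volume.shape[0], v)).sum())

# Toy event volume: an edge moving right by 1 px per time step.
vol = np.zeros((3, 5))
for t, x in enumerate([1, 2, 3]):
    vol[t, x] = 1.0
print(direction_response(vol, 1.0), direction_response(vol, -1.0))
```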


BICT '14 Proceedings of the 8th International Conference on Bioinspired Information and Communications Technologies | 2014

On event-based motion detection and integration

Stephan Tschechne; Tobias Brosch; Roman Sailer; Nora von Egloffstein; Luma Issa Abdul-Kreem; Heiko Neumann

Event-based vision sensors sample individual pixels at a much higher temporal resolution and provide a representation of the visual input available in their receptive fields that is temporally independent of neighboring pixels. The information available at the pixel level for subsequent processing stages is reduced to representations of changes in the local intensity function. In this paper, we present theoretical implications of this condition with respect to the structure of light fields for stationary observers and locally moving contrasts in the luminance function. On this basis, we derive several constraints on what kind of information can be extracted from event-based sensory acquisition using the address-event representation (AER) principle. We discuss how subsequent visual mechanisms can build upon such representations in order to integrate motion and static shape information. On this foundation, we present approaches for motion detection and integration in a neurally inspired model that demonstrates the interaction of early and intermediate stages of visual processing. Results replicating experimental findings demonstrate the abilities of the initial and subsequent stages of the model in the domain of motion processing.
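
For readers unfamiliar with the AER principle mentioned above, a minimal sketch of the event representation such sensors deliver is shown below: a stream of pixel-address/timestamp/polarity tuples from which later stages select spatio-temporal neighborhoods. Field names and the windowing helper are assumptions for illustration.

```python
# Minimal address-event representation (AER): each event carries a pixel
# address, a timestamp, and the sign of the local intensity change.
from dataclasses import dataclass
from typing import Iterable, List

@dataclass(frozen=True)
class Event:
    x: int          # pixel column
    y: int          # pixel row
    t: float        # timestamp in seconds
    polarity: int   # +1 brightness increase, -1 decrease

def events_in_window(stream: Iterable[Event], t0: float, t1: float) -> List[Event]:
    """Collect events in a time window for a subsequent processing stage."""
    return [e for e in stream if t0 <= e.t < t1]

stream = [Event(3, 4, 0.001, +1), Event(3, 5, 0.002, -1), Event(4, 4, 0.015, +1)]
print(len(events_in_window(stream, 0.0, 0.01)))  # 2
```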


Conference of the International Speech Communication Association | 2014

On Annotation and Evaluation of Multi-modal Corpora in Affective Human-Computer Interaction

Markus Kächele; Martin Schels; Sascha Meudt; Viktor Kessler; Michael Glodek; Patrick Thiam; Stephan Tschechne; Günther Palm; Friedhelm Schwenker

In this paper, we discuss the topic of affective human-computer interaction from a data-driven viewpoint. This comprises the collection of databases with emotional content, feasible annotation procedures, and software tools that are able to conduct a suitable labeling process. A further issue discussed in this paper is the evaluation of results computed using statistical classifiers. Based on this, we propose to use fuzzy memberships to model affective user states and endorse corresponding fuzzy performance measures.
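
A small sketch of the fuzzy-membership idea: each sample carries graded memberships over affective classes instead of one hard label, and performance is measured by comparing membership vectors. The agreement measure below (one minus the mean absolute membership difference) is one simple illustrative choice, not necessarily the measure endorsed in the paper.

```python
# Fuzzy labels for affective states: graded memberships instead of one
# hard label; a toy agreement measure between membership vectors.
import numpy as np

def fuzzy_score(predicted, annotated):
    """Agreement between predicted and annotated memberships in [0, 1]."""
    predicted, annotated = np.asarray(predicted), np.asarray(annotated)
    return 1.0 - np.abs(predicted - annotated).mean()

# Memberships over (neutral, amused, annoyed); rows sum to 1.
annotated = [[0.6, 0.3, 0.1], [0.1, 0.8, 0.1]]
predicted = [[0.5, 0.4, 0.1], [0.2, 0.6, 0.2]]
print(round(fuzzy_score(predicted, annotated), 3))  # 0.9
```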


International Conference on Artificial Neural Networks | 2013

A Biologically Inspired Model for the Detection of External and Internal Head Motions

Stephan Tschechne; Georg Layher; Heiko Neumann

Non-verbal communication signals are to a large part conveyed by visual motion information of the user's facial components (intrinsic motion) and head (extrinsic motion). An observer perceives the visual flow as a superposition of both types of motion. However, when visual signals are used to train classifiers for non-articulated communication signals, a decomposition is advantageous. We propose a biologically inspired model that builds upon the known functional organization of cortical motion processing at low and intermediate stages to decompose the composite motion signal. The approach extends previous models by incorporating mechanisms that represent motion gradients and direction sensitivity. The neural models operate on larger spatial scales to capture properties of flow patterns elicited by turning head movements. Center-surround mechanisms build contrast-sensitive cells and detect local facial motion. The model is probed with video samples and outputs occurrences and magnitudes of extrinsic and intrinsic motion patterns.
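
The center-surround mechanism mentioned above can be caricatured as a motion-contrast operator: the response at a location compares the local flow vector with the average flow of its surround, so globally coherent (extrinsic) head motion is suppressed while local (intrinsic) facial motion stands out. Window sizes and the toy flow field below are illustrative assumptions, not the model's tuning.

```python
# Center-surround motion contrast: per-pixel deviation of the local flow
# vector from the mean flow of its spatial surround.
import numpy as np

def motion_contrast(flow, surround=5):
    """flow: (H, W, 2) flow field; returns per-pixel contrast magnitude."""
    h, w, _ = flow.shape
    pad = surround // 2
    padded = np.pad(flow, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    contrast = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + surround, j:j + surround]
            contrast[i, j] = np.linalg.norm(flow[i, j] - patch.mean(axis=(0, 1)))
    return contrast

# Uniform rightward flow (head turn) with one locally deviating patch (mouth).
flow = np.tile([1.0, 0.0], (20, 20, 1))
flow[9:12, 9:12] = [0.0, 1.0]
mc = motion_contrast(flow)
print(mc.max() > mc[0, 0])  # True: local facial motion stands out
```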


International Conference on Image Analysis and Processing | 2011

Half ellipse detection

Nikolai Sergeev; Stephan Tschechne

This paper presents an algorithm for half ellipse detection in color images. Additionally, the algorithm extracts two average color values, one along each side of a half ellipse. In contrast to standard methods, the new one finds not only the parameters of the entire ellipse but also the endpoints of a half ellipse. The paper introduces a new way of edge and line detection. The new edge detector for color images was designed to extract the color on both sides of an edge. The new line detector is designed to optimize the detection of line endpoints.
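
The paper's detector is its own construction; purely as a generic point of reference, the sketch below shows a direct least-squares conic fit to edge points, with the half-arc endpoints naively taken as the first and last input points. This is an illustrative stand-in, not the algorithm from the paper.

```python
# Direct least-squares conic fit a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1
# to edge points; a generic baseline for (half-)ellipse estimation.
import numpy as np

def fit_conic(pts):
    """pts: (N, 2) edge points; returns conic coefficients (a, b, c, d, e)."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x * x, x * y, y * y, x, y])
    coeffs, *_ = np.linalg.lstsq(A, np.ones(len(pts)), rcond=None)
    return coeffs

# Toy usage: points on the upper half of a circle (a special ellipse).
theta = np.linspace(0.0, np.pi, 50)
pts = np.column_stack([3 + 2 * np.cos(theta), 5 + 2 * np.sin(theta)])
a, b, c, d, e = fit_conic(pts)
print(np.isclose(a, c), pts[0], pts[-1])  # circle: a == c; arc endpoints
```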


International Conference on Adaptive and Natural Computing Algorithms | 2011

Mechanisms of adaptive spatial integration in a neural model of cortical motion processing

Stefan Ringbauer; Stephan Tschechne; Heiko Neumann

In visual cortex, information is processed along a cascade of neural mechanisms that pool activations from the surround with spatially increasing receptive fields. Watching a scene of multiple moving objects leads to object boundaries on the retina that are defined by discontinuities in feature domains such as luminance or velocity. Spatial integration across these boundaries mixes distinct sources of input signals and leads to unreliable measurements. Previous work [6] proposed a luminance-gated motion integration mechanism, which does not account for the presence of discontinuities in other feature domains. Here, we propose a biologically inspired model that utilizes the low and intermediate stages of cortical motion processing, namely V1, MT, and MSTl, to detect motion by locally adapting spatial integration fields depending on motion contrast. This mechanism generalizes the concept of bilateral filtering proposed for anisotropic smoothing in image restoration in computer vision.
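
The bilateral-filtering analogy can be made explicit: when pooling motion signals, each neighbor is weighted both by spatial proximity and by how similar its velocity is to the center's, so integration stops at motion discontinuities. The kernel widths and toy flow field in the sketch below are illustrative assumptions.

```python
# Bilateral-style motion pooling: spatial Gaussian weight times a Gaussian
# on velocity difference, preserving motion discontinuities.
import numpy as np

def bilateral_pool(flow, sigma_s=2.0, sigma_v=0.5, radius=3):
    """flow: (H, W, 2) velocity field; returns discontinuity-preserving pool."""
    h, w, _ = flow.shape
    out = np.zeros_like(flow)
    for i in range(h):
        for j in range(w):
            acc, norm = np.zeros(2), 0.0
            for di in range(-radius, radius + 1):
                for dj in range(-radius, radius + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < h and 0 <= jj < w:
                        dv = np.linalg.norm(flow[ii, jj] - flow[i, j])
                        wgt = np.exp(-(di * di + dj * dj) / (2 * sigma_s ** 2)
                                     - dv * dv / (2 * sigma_v ** 2))
                        acc += wgt * flow[ii, jj]
                        norm += wgt
            out[i, j] = acc / norm
    return out

# Two objects: left half moves right, right half moves left; the motion
# boundary at the center column survives pooling.
flow = np.zeros((8, 8, 2)); flow[:, :4] = [1, 0]; flow[:, 4:] = [-1, 0]
pooled = bilateral_pool(flow)
print(pooled[4, 3], pooled[4, 4])
```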


Intelligent Environments | 2015

Towards the Separation of Rigid and Non-rigid Motions for Facial Expression Analysis

Georg Layher; Stephan Tschechne; Robert Niese; Ayoub Al-Hamadi; Heiko Neumann

In intelligent environments, computer systems do not solely serve as passive input devices waiting for user interaction but actively analyze their environment and adapt their behaviour according to changes in environmental parameters. One essential ability to achieve this goal is to analyze the mood, emotions, and dispositions a user experiences while interacting with such intelligent systems. Features that allow inferring such parameters can be extracted from auditory as well as visual sensory input streams. In the visual feature domain, facial expressions in particular are known to contain rich information about a user's emotional state and can be detected using either static and/or dynamic image features. During interaction, facial expressions are rarely performed in isolation; most of the time they co-occur with movements of the head. Thus, optical-flow-based facial features are often compromised by additional motions. Parts of the optical flow may be caused by rigid head motions, while other parts reflect deformations resulting from facial expressivity (non-rigid motions). In this work, we propose first steps towards an optical-flow-based separation of rigid head motions from non-rigid motions caused by facial expressions. We suggest that after their separation, both head movements and facial expressions can be used as a basis for recognizing a user's emotions and dispositions and thus allow a technical system to adapt effectively to the user's state.
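
One classical baseline for the separation the abstract aims at is to fit a global parametric (here affine) motion model to the flow field as the rigid head-motion component and treat the per-pixel residual as non-rigid facial deformation. The sketch below implements that generic baseline; it is not necessarily the procedure the authors propose.

```python
# Rigid/non-rigid flow separation via a global affine motion fit: the
# affine part approximates head motion, the residual facial deformation.
import numpy as np

def separate_flow(flow):
    """flow: (H, W, 2). Returns (rigid_part, nonrigid_residual)."""
    h, w, _ = flow.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    U = flow.reshape(-1, 2)
    params, *_ = np.linalg.lstsq(A, U, rcond=None)   # affine fit per component
    rigid = (A @ params).reshape(h, w, 2)
    return rigid, flow - rigid

# Toy field: global translation (head shift) plus a local mouth deformation.
flow = np.tile([0.5, 0.0], (16, 16, 1))
flow[10:12, 6:10] += [0.0, 0.8]
rigid, nonrigid = separate_flow(flow)
print(np.abs(nonrigid).max() > 0.5)   # deformation survives in the residual
```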

Collaboration


Dive into Stephan Tschechne's collaborations.
