Publication


Featured research published by Elke Braun.


Frontiers in Behavioral Neuroscience | 2012

Prototypical components of honeybee homing flight behavior depend on the visual appearance of objects surrounding the goal.

Elke Braun; Laura Dittmar; Norbert Boeddeker; Martin Egelhaaf

Honeybees use visual cues to relocate profitable food sources and their hive. What bees see while navigating depends on the appearance of the cues and on the bee’s current position, orientation, and movement relative to them. Here we analyze the detailed flight behavior during the localization of a goal surrounded by cylinders that are characterized either by a high contrast in luminance and texture or mostly by motion contrast relative to the background. By relating flight behavior to the nature of the information available from these landmarks, we aim to identify behavioral strategies that facilitate the processing of visual information during goal localization. We decompose flight behavior into prototypical movements using clustering algorithms in order to reduce the behavioral complexity. The determined prototypical movements reflect the honeybee’s saccadic flight pattern that largely separates rotational from translational movements. During phases of translational movements between fast saccadic rotations, the bees can gain information about the 3D layout of their environment from the translational optic flow. The prototypical movements reveal the prominent role of sideways and up- or downward movements, which can help bees to gather information about objects, particularly in the frontal visual field. We find that the occurrence of specific prototypes depends on the bees’ distance from the landmarks and the feeder and that changing the texture of the landmarks evokes different prototypical movements. The adaptive use of different behavioral prototypes shapes the visual input and can facilitate information processing in the bees’ visual system during local navigation.


The Journal of Experimental Biology | 2010

A syntax of hoverfly flight prototypes

Bart R. H. Geurten; Roland Kern; Elke Braun; Martin Egelhaaf

Hoverflies such as Eristalis tenax Linnaeus are known for their distinctive flight style. They can hover on the same spot for several seconds and then burst into movement in apparently any possible direction. In order to determine a quantitative and structured description of complex flight manoeuvres, we searched for a set of repeatedly occurring prototypical movements (PMs) and a set of rules for their ordering. PMs were identified by applying clustering algorithms to the translational and rotational velocities of the body of Eristalis during free-flight sequences. This approach led to nine stable and reliable PMs, and thus provided a tremendous reduction in the complexity of behavioural description. This set of PMs together with the probabilities of transition between them constitute a syntactical description of flight behaviour. The PMs themselves can be roughly segregated into fast rotational turns (saccades) and a variety of distinct translational movements (intersaccadic intervals). We interpret this segregation as reflecting an active sensing strategy which facilitates the extraction of spatial information from retinal image displacements. Detailed analysis of saccades shows that they are performed around all rotational axes individually and in all possible combinations. We found the probability of occurrence of a given saccade type to depend on parameters such as the angle between the long body axis and the direction of flight.
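The "syntax" described above, a set of prototypical movements plus the probabilities of transition between them, amounts to a first-order Markov model over PM labels. A minimal sketch of estimating such transition probabilities, using an invented toy sequence (the labels 'H', 'S', 'T' are illustrative assumptions, not the paper's nine PMs):

```python
from collections import Counter, defaultdict

def transition_probabilities(sequence):
    """Estimate the first-order 'syntax': P(next PM | current PM)
    from a sequence of prototypical-movement labels."""
    pair_counts = Counter(zip(sequence, sequence[1:]))   # count label bigrams
    start_counts = Counter(sequence[:-1])                # count each starting label
    probs = defaultdict(dict)
    for (a, b), n in pair_counts.items():
        probs[a][b] = n / start_counts[a]                # normalise per current label
    return probs

# Hypothetical labelled flight sequence: 'H' hover, 'S' saccade, 'T' translation.
seq = list("HHSTHHSTHHST")
P = transition_probabilities(seq)
# in this toy sequence a saccade 'S' is always followed by a translation 'T'
```

The resulting nested dictionary is the transition matrix of the behavioural grammar; rows sum to one by construction.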


PLOS ONE | 2010

Identifying Prototypical Components in Behaviour Using Clustering Algorithms

Elke Braun; Bart R. H. Geurten; Martin Egelhaaf

Quantitative analysis of animal behaviour is a prerequisite for understanding the task-solving strategies of animals and the underlying control mechanisms. The identification of repeatedly occurring behavioural components is thereby a key element of a structured quantitative description. However, the complexity of most behaviours makes the identification of such components a challenging problem. We propose an automatic and objective approach for determining and evaluating prototypical behavioural components. Behavioural prototypes are identified using clustering algorithms and finally evaluated with respect to their ability to represent the whole behavioural data set. The prototypes allow for a meaningful segmentation of behavioural sequences. We applied our clustering approach to identify prototypical movements of the head of blowflies during cruising flight. The results confirm the previously established saccadic gaze strategy, with the set of prototypes dividing into predominantly translational and predominantly rotational movements. The prototypes reveal additional details about the saccadic and intersaccadic flight sections that could not be unravelled before. Successful application of the proposed approach to behavioural data shows its ability to automatically identify prototypical behavioural components within a large and noisy database and to evaluate these with respect to their quality and stability. Hence, this approach might be applied to a broad range of behavioural and neural data obtained from different animals and in different contexts.
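The core idea, clustering velocity data and reading the resulting cluster centres as behavioural prototypes, can be illustrated with a minimal k-means sketch. The synthetic (translational speed, rotational speed) samples, the two-cluster setup and the deterministic initialisation below are illustrative assumptions, not the paper's actual data or pipeline:

```python
def kmeans(points, k, iters=20):
    """Minimal k-means: cluster 2-D points and return the k cluster centres,
    which play the role of 'prototypical movements' here."""
    centroids = list(points[:k])  # deterministic initialisation for the sketch
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each sample to its nearest centre (squared Euclidean distance)
            i = min(range(k),
                    key=lambda c: (p[0] - centroids[c][0]) ** 2
                                  + (p[1] - centroids[c][1]) ** 2)
            clusters[i].append(p)
        # move each centre to the mean of its assigned samples
        centroids = [(sum(p[0] for p in cl) / len(cl),
                      sum(p[1] for p in cl) / len(cl)) if cl else centroids[i]
                     for i, cl in enumerate(clusters)]
    return centroids

# Synthetic (translational speed, rotational speed) samples:
# intersaccadic intervals are translation-dominated, saccades rotation-dominated.
intersaccadic = [(1.0 + 0.1 * i, 0.05) for i in range(10)]
saccades = [(0.1, 2.0 + 0.1 * i) for i in range(10)]
prototypes = sorted(kmeans(intersaccadic + saccades, k=2))
# prototypes[0] is rotation-dominated, prototypes[1] translation-dominated
```

On such cleanly separated data the two centres recover the translational and rotational prototypes; evaluating prototype quality and stability on real, noisy data is the substantive part of the paper's method.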


International Conference on Pattern Recognition | 1998

Hybrid object recognition in image sequences

Franz Kummert; Gernot A. Fink; Gerhard Sagerer; Elke Braun

We present a hybrid approach that attaches probabilistic formalisms, such as artificial neural networks or hidden Markov models, to concepts of a semantic network for robust and efficient detection of objects. Additionally, an efficient processing strategy for image sequences is outlined which propagates the structural results of the semantic network as an expectation for the next image. This method produces results linked over time, supporting the recognition of events and actions.


International Journal of Pattern Recognition and Artificial Intelligence | 2004

From image features to symbols and vice versa: using graphs to loop data- and model-driven processing in visual assembly recognition

Christian Bauckhage; Elke Braun; Gerhard Sagerer

Graphs and graph matching are powerful mechanisms for knowledge representation, pattern recognition and machine learning. Especially in computer vision their applications are manifold. Graphs can characterize relations among image features like points or regions, but they may also represent symbolic object knowledge. Hence, graph matching can accomplish recognition tasks on different levels of abstraction. In this contribution, we demonstrate that graphs may also bridge the gap between different levels of knowledge representation. We present a system for visual assembly monitoring that integrates bottom-up and top-down strategies for recognition and automatically generates and learns graph models to recognize assembled objects. Data-driven processing is subdivided into three stages: first, elementary objects are recognized from low-level image features. Then, clusters of elementary objects are analyzed syntactically; if an assembly structure is found, it is translated into a graph that uniquely models the assembly. Finally, symbolic models like these are stored in a database so that individual assemblies can be recognized by means of graph matching. At the same time, these graphs enable top-down knowledge propagation: they are transformed into graphs which represent relations between image features and thus describe the visual appearance of the recently found assembly. Therefore, due to model-driven knowledge propagation, assemblies may subsequently be recognized from graph matching on a lower computational level, and tedious bottom-up processing becomes superfluous.
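The recognition-by-graph-matching step described above can be sketched as an isomorphism test between a stored symbolic assembly model and a graph extracted from the scene. The brute-force matcher and the toy part graphs below are illustrative assumptions; the actual system uses its own graph models and matching machinery:

```python
from itertools import permutations

def isomorphic(g1, g2):
    """Brute-force isomorphism test for small undirected graphs given as
    {node: set(neighbours)} adjacency dicts; a stand-in for the matching
    step that compares an extracted assembly graph with stored models."""
    nodes1, nodes2 = sorted(g1), sorted(g2)
    if len(nodes1) != len(nodes2):
        return False
    for perm in permutations(nodes2):
        mapping = dict(zip(nodes1, perm))
        # the mapping is an isomorphism if it carries every neighbourhood over
        if all({mapping[v] for v in g1[u]} == g2[mapping[u]] for u in g1):
            return True
    return False

# Hypothetical assembly graphs: parts as nodes, 'connected-to' relations as edges.
model = {"bolt": {"bar"}, "bar": {"bolt", "cube"}, "cube": {"bar"}}
scene = {"p1": {"p2"}, "p2": {"p1", "p3"}, "p3": {"p2"}}
# the scene cluster has the same chain structure as the stored model
```

Factorial-time matching is only viable for the small part graphs of this sketch; practical systems rely on attributed nodes and edges to prune the search.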


Pattern Recognition Letters | 1999

A multi-directional multiple-path recognition scheme for complex objects applied to the domain of a wooden toy kit

Elke Braun; Gunther Heidemann; Helge Ritter; Gerhard Sagerer

Recognition and description of complex objects require the representation and processing of various aspects and features. In this paper, we propose an architecture which combines the advantages of different paradigms. Voting and Bayesian networks enable an integrated approach to knowledge-based and probabilistic reasoning.


Flying Insects and Robots | 2009

Active Vision in Blowflies: Strategies and Mechanisms of Spatial Orientation

Martin Egelhaaf; Roland Kern; Jens Peter Lindemann; Elke Braun; Bart R. H. Geurten

With their miniature brains, blowflies are able to control highly aerobatic flight manoeuvres and, in this regard, outperform any man-made autonomous flying system. To accomplish this extraordinary performance, flies actively shape the dynamics of the image sequences on their eyes (‘optic flow’) by a specific succession of characteristic movements: they shift their gaze only from time to time by saccadic turns of body and head and keep it fixed between these saccades. Utilising the intervals of stable vision between saccades, an ensemble of motion-sensitive visual interneurons extracts from the optic flow information about different aspects of the self-motion of the animal and the spatial layout of the environment. This is possible in a computationally parsimonious way because the retinal image flow evoked by translational self-motion contains information about the spatial layout of the environment. Detection of environmental objects is even facilitated by adaptation mechanisms in the visual motion pathway. The consistency of our experimentally established hypotheses is tested by modelling the blowfly motion vision system and using this model to control the locomotion of a ‘CyberFly’ moving in virtual environments. This CyberFly is currently being integrated in a robotic platform steering in three dimensions with dynamics similar to those of blowflies.


International Conference on Computer Vision | 2001

Incorporating process knowledge into object recognition for assemblies

Elke Braun; Jannik Fritsch; Gerhard Sagerer

In this paper we present an object recognition framework that integrates several recognition paradigms and context information from the scene history to recognize the elementary parts contained in assemblies. We use a symbolic approach to detect actions based on the object changes in the scene in order to monitor the construction process. The information about the elements used to construct a new assembly serves as an additional source of information for recognition. Process knowledge is also exploited for selecting the best interpretation out of several alternatives for a single scene, which result from contradictions and uncertainties during integration of the different cues.


International Conference on Advances in Pattern Recognition | 2001

Integrating Recognition Paradigms in a Multiple-Path Architecture

Gerhard Sagerer; Christian Bauckhage; Elke Braun; Gunther Heidemann; Franz Kummert; Helge Ritter; Daniel Schlüter

Four decades of intensive research in computer vision have led to numerous computational paradigms. This is understandable, since problems like object recognition or scene description are highly complex, have many aspects and can be attacked by processing various features. In this paper we propose an architecture that combines the advantages of different paradigms in pattern recognition. Voting and Bayesian networks provide a computational framework to integrate approaches to knowledge-based and probabilistic reasoning as well as neural computations.


Revised Papers from the International Workshop on Sensor Based Intelligent Robots | 2000

Structure and Process: Learning of Visual Models and Construction Plans for Complex Objects

Gerhard Sagerer; Christian Bauckhage; Elke Braun; Jannik Fritsch; Franz Kummert; Frank Lömker; Sven Wachsmuth

Supervising robotic assembly of multi-functional objects by means of a computer vision system requires components to identify assembly operations and to recognize feasible assemblies of single objects. Thus, the structure of complex objects as well as their construction processes are of interest. If the results of both components are to be consistent, there have to be common models providing knowledge about the intended application. However, if the assembly system is to handle more than exactly specified tasks, it is practically impossible to model every possible assembly or action explicitly. The fusion of a flexible dynamic model for assemblies and a monitor for the construction process enables reliable and efficient learning and supervision. As an example, the construction of objects by aggregating wooden toy pieces is used. The system also integrates a natural speech dialog module, which provides the overall communication strategy and additionally supports decisions in cases of ambiguity and uncertainty.
