Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Florentin Wörgötter is active.

Publication


Featured research published by Florentin Wörgötter.


Brain Research Reviews | 2009

Activity-dependent structural plasticity

Markus Butz; Florentin Wörgötter; Arjen van Ooyen

Plasticity in the brain reaches far beyond a mere changing of synaptic strengths. Recent time-lapse imaging in the living brain reveals ongoing structural plasticity through the forming and breaking of synapses, motile spines, and the re-routing of axonal branches in the developing and adult brain. Some forms of structural plasticity do not follow Hebbian or anti-Hebbian paradigms of plasticity but rather appear to contribute to the homeostasis of network activity. Four decades of lesion studies have yielded a wealth of data on the mutual interdependence of neuronal activity, neurotransmitter release, neuronal morphogenesis, and network formation. Here, we review these earlier studies on structural plasticity in the context of recent experimental work. We compare spontaneous and experience-dependent structural plasticity with lesion-induced (reactive) structural plasticity occurring during development and in the adult brain. Understanding the principles of neural network reorganization at the structural level is relevant for a deeper understanding of long-term memory formation as well as for the treatment of neurological diseases such as stroke.
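
The homeostatic idea above can be made concrete with a toy simulation: each neuron forms or prunes synaptic contacts so as to drive its own activity toward a set point. The sketch below is only an illustration of that principle, not the authors' model; the network size, set point, growth rate, and dynamics are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 50        # number of neurons (assumed)
target = 0.5  # homeostatic set point for mean activity (assumed)
eps = 0.05    # synapse growth/pruning rate (assumed)

W = rng.random((N, N)) * 0.1   # synaptic weight matrix
np.fill_diagonal(W, 0.0)

def network_activity(W, steps=20):
    """Relax simple rate dynamics to a steady activity pattern."""
    x = rng.random(N)
    for _ in range(steps):
        x = np.tanh(W @ x + 0.1)   # constant background drive
    return x

for epoch in range(200):
    x = network_activity(W)
    # Homeostatic structural rule: neurons below the set point grow
    # incoming synapses, neurons above it prune them.
    W = np.clip(W + eps * np.outer(target - x, x), 0.0, 1.0)
    np.fill_diagonal(W, 0.0)

print("mean activity:", network_activity(W).mean())  # approaches the set point
```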


Computer Vision and Pattern Recognition | 2013

Voxel Cloud Connectivity Segmentation - Supervoxels for Point Clouds

Jeremie Papon; Alexey Abramov; Markus Schoeler; Florentin Wörgötter

Unsupervised over-segmentation of an image into regions of perceptually similar pixels, known as superpixels, is a widely used preprocessing step in segmentation algorithms. Superpixel methods reduce the number of regions that must be considered later by more computationally expensive algorithms, with a minimal loss of information. Nevertheless, as some information is inevitably lost, it is vital that superpixels not cross object boundaries, as such errors will propagate through later steps. Existing methods make use of projected color or depth information but do not consider the three-dimensional geometric relationships between observed data points, which can be used to prevent superpixels from crossing regions of empty space. We propose a novel over-segmentation algorithm which uses voxel relationships to produce over-segmentations that are fully consistent with the spatial geometry of the scene in three-dimensional, rather than projective, space. Enforcing the constraint that segmented regions must be spatially connected prevents label flow across semantic object boundaries that would otherwise be violated. Additionally, as the algorithm works directly in 3D space, observations from several calibrated RGB+D cameras can be segmented jointly. Experiments on a large dataset of human-annotated RGB+D images demonstrate a significant reduction in the occurrence of clusters crossing object boundaries, while maintaining speeds comparable to state-of-the-art 2D methods.
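
An implementation of this method (VCCS) ships with the Point Cloud Library as pcl::SupervoxelClustering. The Python sketch below only illustrates the key constraint the abstract describes: clusters are grown voxel by voxel through occupied, adjacent voxels, so labels can never flow across empty space. Resolutions, the seeding scheme, and the breadth-first growth order are simplifying assumptions, not the paper's algorithm.

```python
import numpy as np
from collections import deque

def voxelize(points, res):
    """Map 3D points to integer voxel coordinates at resolution `res`."""
    keys = np.floor(points / res).astype(int)
    voxels = {}
    for k, p in zip(map(tuple, keys), points):
        voxels.setdefault(k, []).append(p)
    return voxels

def supervoxel_sketch(points, res=0.05, seed_res=0.25):
    """Toy over-segmentation: flood-fill from seed voxels through the
    26-neighbourhood of *occupied* voxels only, so labels cannot cross
    regions of empty space. Disconnected voxels may stay unlabelled."""
    voxels = voxelize(points, res)
    # Seed on a coarser grid, then map seeds back into the fine grid.
    coarse = voxelize(points, seed_res)
    seeds = [tuple(int(round(c * seed_res / res)) for c in k) for k in coarse]
    seeds = [s for s in seeds if s in voxels]

    labels = {}
    frontier = deque((s, i) for i, s in enumerate(seeds))
    offsets = [(dx, dy, dz) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
               for dz in (-1, 0, 1) if (dx, dy, dz) != (0, 0, 0)]
    while frontier:  # breadth-first growth from all seeds in parallel
        v, lab = frontier.popleft()
        if v in labels:
            continue
        labels[v] = lab
        for o in offsets:
            n = (v[0] + o[0], v[1] + o[1], v[2] + o[2])
            if n in voxels and n not in labels:
                frontier.append((n, lab))
    return labels

pts = np.random.default_rng(1).random((2000, 3))
print(len(set(supervoxel_sketch(pts).values())), "supervoxels")
```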


The International Journal of Robotics Research | 2006

Fast Biped Walking with a Sensor-driven Neuronal Controller and Real-time Online Learning

Tao Geng; Bernd Porr; Florentin Wörgötter

In this paper, we present our design of and experiments on a planar biped robot under the control of a purely sensor-driven controller. The design has some special mechanical features, for example small curved feet that allow a rolling action and a properly positioned center of mass, which facilitate fast walking through exploitation of the robot's natural dynamics. Our sensor-driven controller is built with biologically inspired sensor- and motor-neuron models and does not employ any kind of position or trajectory-tracking control algorithm. Instead, it allows the biped robot to exploit its own natural dynamics during critical stages of its walking gait cycle. Owing to the interaction between the sensor-driven neuronal controller and the properly designed mechanics of the robot, the biped can realize stable dynamic walking gaits across a large domain of the neuronal parameters. In addition, this structure allows a policy-gradient reinforcement learning algorithm to tune the parameters of the sensor-driven controller in real time, during walking. In this way, RunBot can reach a relative speed of 3.5 leg lengths per second after only a few minutes of online learning, which is faster than that of any other biped robot and comparable to the fastest relative speed of human walking.
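
A minimal sketch of the two ingredients the abstract names, under stated assumptions: motor commands come directly from thresholded sensor signals (no trajectory tracking), and a perturbation-based policy gradient tunes the controller's gains and thresholds between gait cycles. The sensor channels, reward surrogate, and all constants are hypothetical; this is not RunBot's actual controller.

```python
import numpy as np

rng = np.random.default_rng(2)

def motor_command(sensors, gains, thresholds):
    """Sensor-driven 'neurons': the motor command is a thresholded,
    weighted function of raw sensor signals -- no position or
    trajectory tracking anywhere in the loop."""
    return np.tanh(gains * np.maximum(sensors - thresholds, 0.0))

def walking_speed(gains, thresholds):
    """Stand-in for a real rollout: a smooth function with a single
    maximum, so the gradient estimate has something to climb. On a
    robot this would be measured speed over a few gait cycles."""
    return -np.sum((gains - 2.0) ** 2) - np.sum((thresholds - 0.3) ** 2)

# Perturbation-based policy gradient, run "online" between gait cycles.
gains, thresholds = np.ones(4), np.full(4, 0.5)  # assumed 4 sensor channels
eta, noise_scale = 0.05, 0.1
for trial in range(300):
    ng = noise_scale * rng.standard_normal(4)
    nt = noise_scale * rng.standard_normal(4)
    delta = (walking_speed(gains + ng, thresholds + nt)
             - walking_speed(gains - ng, thresholds - nt)) / (2 * noise_scale**2)
    gains += eta * delta * ng
    thresholds += eta * delta * nt

print("learned gains:", np.round(gains, 2))            # -> near 2.0
print("learned thresholds:", np.round(thresholds, 2))  # -> near 0.3
sensors = np.array([0.6, 0.1, 0.8, 0.4])               # e.g. stretch, contact
print("motor command:", np.round(motor_command(sensors, gains, thresholds), 2))
```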


The International Journal of Robotics Research | 2011

Learning the semantics of object-action relations by observation

Eren Erdal Aksoy; Alexey Abramov; Johannes Dörr; KeJun Ning; Babette Dellen; Florentin Wörgötter

Recognizing manipulations performed by a human, and transferring and executing them on a robot, is a difficult problem. We address it in the current study by introducing a novel representation of the relations between objects at decisive time points during a manipulation. Thereby, we encode the essential changes in a visual scene in a condensed way such that a robot can recognize and learn a manipulation without prior object knowledge. To achieve this, we continuously track image segments in the video and construct a dynamic graph sequence. Topological transitions of those graphs occur whenever a spatial relation between some segments changes in a discontinuous way, and these moments are stored in a transition matrix called the semantic event chain (SEC). We demonstrate that these time points are highly descriptive for distinguishing between different manipulations. Employing simple substring search algorithms, SECs can be compared and type-similar manipulations can be recognized with high confidence. As the approach is generic, statistical learning can be used to find the archetypal SEC of a given manipulation class. The performance of the algorithm is demonstrated on a set of real videos showing hands manipulating various objects and performing different actions. In experiments with a robotic arm, we show that an SEC can be learned by observing human manipulations, transferred to a new scenario, and then reproduced by the machine.
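
The SEC idea can be sketched in a few lines: record a spatial relation per segment pair per frame, keep only the frames where a relation changes, and compare the resulting strings. The relations, the example "grasp and place" sequence, and the use of difflib.SequenceMatcher as a substring-style similarity are all illustrative assumptions, not the paper's exact encoding.

```python
from difflib import SequenceMatcher

# Relations between tracked-segment pairs per frame: 'N' = not touching,
# 'T' = touching. The frames below are a toy "pick up and put down".
frames = [
    {('hand', 'cup'): 'N', ('cup', 'table'): 'T'},
    {('hand', 'cup'): 'T', ('cup', 'table'): 'T'},   # hand grasps cup
    {('hand', 'cup'): 'T', ('cup', 'table'): 'N'},   # cup lifted
    {('hand', 'cup'): 'N', ('cup', 'table'): 'T'},   # cup put down, released
]

def semantic_event_chain(frames):
    """Keep only the decisive time points: frames where at least one
    pairwise relation changed. Each row of the SEC is the relation
    sequence for one segment pair."""
    pairs = sorted(frames[0])
    sec = {p: [frames[0][p]] for p in pairs}
    for prev, cur in zip(frames, frames[1:]):
        if any(prev[p] != cur[p] for p in pairs):    # topological transition
            for p in pairs:
                sec[p].append(cur[p])
    return {p: ''.join(v) for p, v in sec.items()}

def sec_similarity(a, b):
    """Compare two SECs row-wise by string similarity and average."""
    scores = [SequenceMatcher(None, a[p], b[p]).ratio() for p in a if p in b]
    return sum(scores) / len(scores)

sec = semantic_event_chain(frames)
print(sec)                        # e.g. {('cup', 'table'): 'TTNT', ...}
print(sec_similarity(sec, sec))   # identical chains -> 1.0
```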


PLOS Computational Biology | 2010

Self-organized criticality in developing neuronal networks.

Christian Tetzlaff; Samora Okujeni; Ulrich Egert; Florentin Wörgötter; Markus Butz

Recently, evidence has accumulated that many neural networks exhibit self-organized criticality. In this state, activity is similar across temporal scales, which is beneficial for information flow: if subcritical, activity can die out; if supercritical, epileptiform patterns may occur. Little is known about how developing networks reach and stabilize criticality. Here we monitor the development, between 13 and 95 days in vitro (DIV), of cortical cell cultures (n = 20) and find four different phases related to their morphological maturation: an initial low-activity state (≈19 DIV) is followed by a supercritical (≈20 DIV) and then a subcritical one (≈36 DIV), until the network finally reaches stable criticality (≈58 DIV). Using network modeling and mathematical analysis, we describe the dynamics of the emergent connectivity in such developing systems. Based on physiological observations, synaptic development in the model is driven by the neurons adjusting their connectivity to reach, on average, firing-rate homeostasis. We predict a specific time course for the maturation of inhibition, with strong onset and delayed pruning, and that total synaptic connectivity should be strongly linked to the relative levels of excitation and inhibition. These results demonstrate that the interplay between activity and connectivity guides developing networks into criticality, suggesting that this may be a generic and stable state of many networks in vivo and in vitro.
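
The link between homeostasis and criticality can be illustrated with a simple branching process: units that see too little activity grow connections, units that see too much prune them, and the branching parameter drifts toward the critical value of 1. This toy sketch is not the paper's model; the set point, rates, and Poisson branching are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def avalanche_sizes(branching, n_trials=500, cap=10_000):
    """Avalanche sizes in a branching process: each active unit
    activates Poisson(branching) units in the next time step."""
    sizes = []
    for _ in range(n_trials):
        active, size = 1, 1
        while active and size < cap:
            active = rng.poisson(branching * active)
            size += active
        sizes.append(size)
    return np.array(sizes)

# Homeostasis: below the activity set point, units grow connections
# (branching up); above it, they prune (branching down).
sigma, target, eps = 0.3, 60.0, 0.02     # all values are assumptions
for epoch in range(80):
    mean_size = avalanche_sizes(sigma).mean()
    sigma += eps * np.tanh((target - mean_size) / target)

# The fixed point sits near the critical value: mean size 1/(1 - sigma)
# equals the set point at sigma = 1 - 1/60, i.e. close to 1.
print("branching parameter after homeostasis:", round(sigma, 3))
```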


PLOS Computational Biology | 2007

Adaptive, fast walking in a biped robot under neuronal control and learning

Poramate Manoonpong; Tao Geng; Tomas Kulvicius; Bernd Porr; Florentin Wörgötter

Human walking is a dynamic, partly self-stabilizing process relying on the interaction of the biomechanical design with its neuronal control. The coordination of this process is a very difficult problem, and it has been suggested that it involves a hierarchy of levels, where the lower ones, e.g., interactions between muscles and the spinal cord, are largely autonomous, and higher-level (e.g., cortical) control arises only pointwise, as needed. This requires an architecture of several nested sensori-motor loops, in which the walking process provides feedback signals to the walker's sensory systems that can be used to coordinate its movements. To complicate the situation, at a maximal walking speed of more than four leg lengths per second, the cycle period available to coordinate all these loops is rather short. In this study we present a planar biped robot which uses the design principle of nested loops to combine the self-stabilizing properties of its biomechanical design with several levels of neuronal control. Specifically, we show how to adapt the control by including online learning mechanisms based on simulated synaptic plasticity. This robot can walk at high speed (>3.0 leg lengths/s), self-adapting to minor disturbances and reacting robustly to abruptly induced gait changes. At the same time, it can learn to walk on different terrains, requiring only a few learning experiences. This study shows that the tight coupling of physical with neuronal control, guided by sensory feedback from the walking pattern itself and combined with synaptic learning, may be a way forward to better understand and solve coordination problems in other complex motor tasks.
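
The "simulated synaptic plasticity" in this line of work is correlation-based learning in the spirit of ISO/ICO learning (Porr and Wörgötter): a predictive weight grows in proportion to the predictive input times the temporal derivative of the reflex signal, until the anticipatory action pre-empts the reflex. The sketch below illustrates that rule on square-pulse signals; the signal shapes, the toy plant model, and all constants are assumptions.

```python
# Minimal sketch of ICO-style correlation learning: an early predictive
# signal (e.g. a long-range sensor) arrives before a late reflex signal.
# The predictive weight w_pred grows while the reflex still fires, and
# learning stops once the reflex is pre-empted. All values are assumed.

T, dt, mu = 400, 1.0, 0.02
w_pred = 0.0

def pulse(t, onset, width):
    return 1.0 if onset <= t < onset + width else 0.0

for trial in range(50):
    prev_reflex = 0.0
    for t in range(T):
        x_pred = pulse(t, onset=100, width=60)   # early warning signal
        # Toy plant: the reflex shrinks as the anticipatory action,
        # scaled by the learned weight, takes over.
        x_reflex = pulse(t, onset=150, width=20) * max(0.0, 1.0 - 2.0 * w_pred)
        d_reflex = (x_reflex - prev_reflex) / dt
        w_pred += mu * x_pred * d_reflex         # ICO-style correlation rule
        prev_reflex = x_reflex

# Approaches 0.5, the point where the reflex is fully pre-empted.
print("learned predictive weight:", round(w_pred, 3))
```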


Robotics and Autonomous Systems | 2011

Object-action complexes: grounded abstractions of sensory-motor processes

Norbert Krüger; Christopher W. Geib; Justus H. Piater; Ronald P. A. Petrick; Mark Steedman; Florentin Wörgötter; Ales Ude; Tamim Asfour; Dirk Kraft; Damir Omrcen; Alejandro Agostini; Rüdiger Dillmann

This paper formalises Object–Action Complexes (OACs) as a basis for symbolic representations of sensory–motor experience and behaviours. OACs are designed to capture the interaction between objects and associated actions in artificial cognitive systems. This paper gives a formal definition of OACs, provides examples of their use for autonomous cognitive robots, and enumerates a number of critical learning problems in terms of OACs.
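
The paper's formal definition renders an OAC as a triple: an identifier, a prediction function T over an attribute space, and a statistical measure M of how reliable the prediction has been in practice. One possible rendering in code is sketched below; the state encoding, the grasp example, and the bookkeeping for M are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import Callable, Dict

State = Dict[str, object]   # attribute space S, as a plain dict

@dataclass
class OAC:
    """One possible rendering of the triple (id, T, M): an identifier,
    a prediction function T: S -> S, and a statistical measure M of
    the prediction's success so far."""
    oac_id: str
    predict: Callable[[State], State]   # T
    successes: int = 0                  # bookkeeping for M
    trials: int = 0

    @property
    def M(self) -> float:
        return self.successes / self.trials if self.trials else 0.0

    def update(self, before: State, after: State) -> None:
        """Compare predicted and observed outcome; update M."""
        self.trials += 1
        if self.predict(before) == after:
            self.successes += 1

# Toy grasp OAC: predicts that a graspable object ends up in the hand.
grasp = OAC("grasp", predict=lambda s: {**s, "in_hand": s["graspable"]})

before = {"graspable": True, "in_hand": False}
after  = {"graspable": True, "in_hand": True}
grasp.update(before, after)
print(grasp.M)   # 1.0 after one successful execution
```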


Frontiers in Neural Circuits | 2013

Neural control and adaptive neural forward models for insect-like, energy-efficient, and adaptable locomotion of walking machines

Poramate Manoonpong; Ulrich Parlitz; Florentin Wörgötter

Living creatures, like walking animals, have found fascinating solutions to the problem of locomotion control. Their movements give an impression of elegance, combining versatile, energy-efficient, and adaptable locomotion. During the last few decades, roboticists have tried to imitate such natural properties in artificial legged locomotion systems using different approaches, including machine learning algorithms, classical engineering control techniques, and biologically inspired control mechanisms. However, their levels of performance are still far from the natural ones. By contrast, animal locomotion mechanisms seem to depend not only on central mechanisms (central pattern generators, CPGs) and sensory feedback (afferent-based control) but also on internal forward models (efference copies), used to different degrees in different animals. Generally, CPGs organize basic rhythmic motions which are shaped by sensory feedback, while internal models are used for sensory prediction and state estimation. Following this concept, we present here an adaptive neural locomotion controller consisting of a CPG mechanism with neuromodulation and local leg-control mechanisms based on sensory feedback and adaptive neural forward models with efference copies. This neural closed-loop controller enables a walking machine to perform a multitude of different walking patterns, including insect-like leg movements and gaits as well as energy-efficient locomotion. In addition, the forward models allow the machine to autonomously adapt its locomotion to a change of terrain, loss of ground contact during the stance phase, stepping on or hitting an obstacle during the swing phase, and leg damage, and even to produce cockroach-like climbing behavior. The results presented here thus show that the employed embodied neural closed-loop system can be a powerful way of developing robust and adaptable machines.
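
CPGs of the kind used in this line of work are often realized as a two-neuron recurrent network with SO(2)-type weights (a scaled rotation matrix passed through tanh), whose oscillation frequency is set by the rotation angle; changing that angle online is one simple hook for neuromodulation. The sketch below shows only this oscillator idea; all parameter values are assumptions and this is not the paper's full controller.

```python
import numpy as np

def so2_cpg(phi, alpha=1.01, steps=300):
    """Two-neuron recurrent oscillator (SO(2)-type): the weight matrix
    is a scaled rotation, tanh keeps the orbit bounded, and the output
    is a stable rhythm whose frequency grows with phi."""
    W = alpha * np.array([[np.cos(phi),  np.sin(phi)],
                          [-np.sin(phi), np.cos(phi)]])
    x = np.array([0.1, 0.0])           # small kick to start the oscillation
    out = []
    for _ in range(steps):
        x = np.tanh(W @ x)
        out.append(x.copy())
    return np.array(out)

def cycles(sig):
    """Count full oscillation cycles via zero crossings."""
    return int(np.sum(np.diff(np.sign(sig)) != 0)) // 2

# "Neuromodulation" here simply means changing phi online, switching
# the rhythm between a slow and a faster gait frequency.
print("slow CPG cycles:", cycles(so2_cpg(phi=0.05)[:, 0]))
print("fast CPG cycles:", cycles(so2_cpg(phi=0.4)[:, 0]))
```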


Robotics and Autonomous Systems | 2009

Cognitive agents - a procedural perspective relying on the predictability of Object-Action-Complexes (OACs)

Florentin Wörgötter; Alejandro Agostini; Norbert Krüger; Natalya Shylo; Ben Porr

Embodied cognition suggests that complex cognitive traits can only arise when agents have a body situated in the world. The aspects of embodiment and situatedness are discussed here from the perspective of linear systems theory. This perspective treats bodies as dynamic, temporally variable entities which can be extended (or curtailed) at their boundaries. We show how acting agents can, for example, actively extend their body for some time by incorporating predictably behaving parts of the world, and how this affects the transfer functions. We suggest that primates have mastered this to a large degree, increasingly splitting their world into predictable and unpredictable entities. We argue that temporary body extension may have been instrumental in paving the way for the development of higher cognitive complexity, as it reliably widens the cause-effect horizon around the agent's actions. A first robot experiment is sketched to support these ideas. We then discuss the concept of Object-Action Complexes (OACs), introduced by the European PACO-PLUS consortium to emphasize the notion that, for a cognitive agent, objects and actions are inseparably intertwined. In another robot experiment we devise a semi-supervised procedure using the OAC concept to demonstrate how an agent can acquire knowledge about its world. Here the notion of predicting changes fundamentally underlies the implemented procedure, and we try to show how this concept can be used to improve the robot's inner model and behaviour. Hence, in this article we have tried to show how predictability can be used to augment the agent's body and to acquire knowledge about the external world, possibly leading to more advanced cognitive traits.
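
The linear-systems framing can be made concrete: incorporating a predictable tool into the body corresponds to composing the agent's transfer function with the tool's transfer function in series. The sketch below is a hypothetical illustration only; the first-order models and time constants are assumptions, not the paper's experiment.

```python
import numpy as np
from scipy import signal

# "Body extension" in linear-systems terms: arm and grasped tool are
# each modelled as a first-order lag; while the tool behaves
# predictably, it can be treated as part of the body, i.e. the two
# transfer functions compose in series. Time constants are assumed.
arm  = ([1.0], [0.2, 1.0])    # G_arm(s)  = 1 / (0.2 s + 1)
tool = ([1.0], [0.5, 1.0])    # G_tool(s) = 1 / (0.5 s + 1)

# Series composition: G_ext(s) = G_arm(s) * G_tool(s)
extended = signal.TransferFunction(np.polymul(arm[0], tool[0]),
                                   np.polymul(arm[1], tool[1]))

def settle_time(sys):
    """Time for the step response to first reach 95% of its final value."""
    t, y = signal.step(sys)
    return t[np.argmax(y >= 0.95 * y[-1])]

print("arm alone settles in  ~", round(settle_time(signal.TransferFunction(*arm)), 2), "s")
print("arm + tool settles in ~", round(settle_time(extended), 2), "s")
# Same steady state, slower dynamics: the extended body has a longer
# cause-effect horizon that the agent's predictions must cover.
```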


International Conference on Robotics and Automation | 2010

Categorizing object-action relations from semantic scene graphs

Eren Erdal Aksoy; Alexey Abramov; Florentin Wörgötter; Babette Dellen

In this work we introduce a novel approach for detecting spatiotemporal object-action relations, leading to both action recognition and object categorization. Semantic scene graphs are extracted from image sequences and used to find the characteristic main graphs of the action sequence via an exact graph-matching technique, thus providing an event table of the action scene which allows object-action relations to be extracted. The method is applied to several artificial and real action scenes containing limited context. The central novelty of this approach is that it is model-free and needs no a priori representations of either objects or actions. Essentially, actions are recognized without requiring prior object knowledge, and objects are categorized solely based on their exhibited role within an action sequence. The approach is thus grounded in the affordance principle, which has recently attracted much attention in robotics, and provides a way forward for trial-and-error learning of object-action relations through repeated experimentation. It may therefore be useful for recognition and categorization tasks, for example in imitation learning in developmental and cognitive robotics.
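
Two points from the abstract can be sketched directly: exact, model-free graph matching (scene graphs with anonymous segments match purely by structure) and role-based object categorization (the segment whose touching relations change the most plays the "manipulated object" role). The toy scenes and the change-counting heuristic below are assumptions, not the paper's pipeline; networkx provides the exact-isomorphism test.

```python
import networkx as nx

# Toy "semantic scene graphs": nodes are tracked, unlabelled segments;
# edges mark which segments touch. Object identity is never used.
def scene_graph(touching_pairs):
    g = nx.Graph()
    g.add_edges_from(touching_pairs)
    return g

# One decisive frame of a "grasp" event from two different videos,
# with anonymous segment ids:
frame_a = scene_graph([(1, 2)])        # hand touches object
frame_b = scene_graph([("x", "y")])    # same relational structure

print(nx.is_isomorphic(frame_a, frame_b))   # True: same event, model-free

# Role-based categorization: count how much each segment's touching
# relations change between consecutive decisive frames.
frames = [scene_graph([(1, 2)]), scene_graph([(2, 3)])]
change = {}
for g1, g2 in zip(frames, frames[1:]):
    for node in set(g1) | set(g2):
        e1 = set(g1.edges(node)) if node in g1 else set()
        e2 = set(g2.edges(node)) if node in g2 else set()
        change[node] = change.get(node, 0) + len(e1 ^ e2)
print(max(change, key=change.get))   # segment 2: present in both events
```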

Collaboration


Dive into Florentin Wörgötter's collaborations.

Top Co-Authors


Norbert Krüger

University of Southern Denmark


Eren Erdal Aksoy

Karlsruhe Institute of Technology


Jeremie Papon

University of Göttingen
