Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where Stephan K. U. Zibner is active.

Publication


Featured research published by Stephan K. U. Zibner.


IEEE Transactions on Autonomous Mental Development | 2011

Dynamic Neural Fields as Building Blocks of a Cortex-Inspired Architecture for Robotic Scene Representation

Stephan K. U. Zibner; Christian Faubel; Ioannis Iossifidis; Gregor Schöner

Based on the concepts of dynamic field theory (DFT), we present an architecture that autonomously generates scene representations by controlling gaze and attention, creating visual objects in the foreground, tracking objects, reading them into working memory, and taking into account their visibility. At the core of this architecture are three-dimensional dynamic neural fields (DNFs) that link feature to spatial information. These three-dimensional fields couple into lower-dimensional fields, which provide the links to the sensory surface and to the motor systems. We discuss how DNFs can be used as building blocks for cognitive architectures and characterize the critical bifurcations in DNFs as well as the possible coupling structures among them. In a series of robotic experiments, we demonstrate how the DNF architecture provides the core functionalities of a scene representation.
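
For readers unfamiliar with DFT, the core dynamics of a DNF is the Amari field equation; in a standard form (paraphrased from the DFT literature, not quoted from this paper):

    \tau \dot{u}(x,t) = -u(x,t) + h + s(x,t) + \int w(x - x')\, \sigma(u(x',t))\, dx'

Here u(x,t) is the activation over a feature or space dimension x, h < 0 is the resting level, s(x,t) is external input, w is an interaction kernel with local excitation and surround inhibition, and \sigma is a sigmoidal threshold function. The detection, selection, and memory instabilities the architecture exploits are bifurcations of this equation; the three-dimensional fields carry the same dynamics over combined space-feature dimensions.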


international conference on artificial neural networks | 2013

A Software Framework for Cognition, Embodiment, Dynamics, and Autonomy in Robotics: Cedar

Oliver Lomp; Stephan K. U. Zibner; Mathis Richter; Iñaki Rañó; Gregor Schöner

We present Cedar, a software framework for the implementation and simulation of embodied cognitive models based on Dynamic Field Theory (DFT). DFT is a neurally inspired theoretical framework that integrates perception, action, and cognition. Cedar captures the power of DFT in software by facilitating the process of software development for embodied cognitive systems, both artificial ones and models of human cognition. In Cedar, models can be designed through a graphical interface and tuned interactively. We demonstrate this by implementing an exemplary robotic architecture.
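
Cedar itself is a C++ framework operated through its graphical interface; as a language-agnostic illustration of the kind of model such a framework executes, here is a minimal NumPy sketch of a single one-dimensional dynamic neural field (all parameter values are illustrative and not taken from Cedar):

import numpy as np

# Minimal one-dimensional dynamic neural field (Amari dynamics), Euler-integrated.
n, dt, tau, h = 101, 1.0, 10.0, -5.0          # field size, time step, time scale, resting level
x = np.arange(n)
d = np.abs(x[:, None] - x[None, :])           # distances between field sites
# Interaction kernel: local excitation minus broader surround inhibition.
kernel = 1.0 * np.exp(-d**2 / (2 * 3.0**2)) - 0.5 * np.exp(-d**2 / (2 * 9.0**2))
sigmoid = lambda u: 1.0 / (1.0 + np.exp(-4.0 * u))

u = h * np.ones(n)                            # field starts at the resting level
stimulus = 7.0 * np.exp(-(x - 50.0)**2 / (2 * 4.0**2))   # localized sensory input
for _ in range(300):                          # relax toward a self-stabilized peak
    u += dt / tau * (-u + h + stimulus + kernel @ sigmoid(u))
print("peak at x=%d, activation %.2f" % (np.argmax(u), u.max()))

A framework like Cedar wires many such fields and nodes together, steps them all forward in real time, and exposes their parameters and activation states through the graphical interface.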


international conference on development and learning | 2011

Making a robotic scene representation accessible to feature and label queries

Stephan K. U. Zibner; Christian Faubel; Gregor Schöner

We present a neural architecture for scene representation that stores semantic information about objects in the robot's workspace. We show how this representation can be queried through low-level features such as color and size, through feature conjunctions, and through symbolic labels. This is possible by binding different feature dimensions through space and integrating these space-feature representations with an object recognition system. Queries lead to the activation of a neural representation of previously seen objects, which can then be used to drive object-oriented action. The representation is continuously linked to sensory information and autonomously updates when objects are moved or removed.
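
One way to picture the query mechanism (a schematic sketch under assumed conventions, not code from the paper; the hue values and locations are hypothetical): a feature query enters a space-feature field as a "ridge" of input along the spatial dimension at the queried feature value, so that only locations whose stored feature matches the query are pushed above threshold:

import numpy as np

# A space-feature field over (space, hue), sketched as a static activation
# landscape: two remembered objects are peaks at (space 20, hue 10, "red")
# and (space 70, hue 40, "green"). Values are illustrative.
space, hue = np.arange(100), np.arange(50)
S, H = np.meshgrid(space, hue, indexing="ij")
memory = (6.0 * np.exp(-((S - 20)**2 / 50.0 + (H - 10)**2 / 18.0))
          + 6.0 * np.exp(-((S - 70)**2 / 50.0 + (H - 40)**2 / 18.0)))

query = 3.0 * np.exp(-(H - 10)**2 / 18.0)     # "red" query: ridge along space at hue 10
h = -5.0                                      # resting level
u = h + memory + query                        # only the matching peak crosses threshold
loc = int(np.argmax(u.max(axis=1)))           # spatial read-out of the match
print("query 'red' activates space location:", loc)   # prints 20

The intersection of memory peak and query ridge is what "binding through space" buys: the spatial location of the matching object can be read out and passed on to drive action.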


international conference on development and learning | 2010

Scenes and tracking with dynamic neural fields: How to update a robotic scene representation

Stephan K. U. Zibner; Christian Faubel; Ioannis Iossifidis; Gregor Schöner; John P. Spencer

We present an architecture based on the Dynamic Field Theory for the problem of scene representation. At the core of this architecture are three-dimensional neural fields linking feature to spatial information. These three-dimensional fields are coupled to lower-dimensional fields that provide both a close link to the sensory surface and a close link to motor behavior. We highlight the updating mechanism of this architecture, both when a single object is selected and followed by the robot's head in smooth pursuit and in multi-item tracking when several items move simultaneously.


Frontiers in Neurorobotics | 2017

A Neural Dynamic Architecture for Reaching and Grasping Integrates Perception and Movement Generation and Enables On-Line Updating

Guido Knips; Stephan K. U. Zibner; Hendrik Reimann; Gregor Schöner

Reaching for objects and grasping them is a fundamental skill for any autonomous robot that interacts with its environment. Although this skill seems trivial to adults, who effortlessly pick up even objects they have never seen before, it is hard for other animals, for human infants, and for most autonomous robots. At any time during movement preparation and execution, human reaching movements are updated if the visual scene changes (with a delay of about 100 ms). The capability for online updating highlights how tightly perception, movement planning, and movement generation are integrated in humans. Here, we report on an effort to reproduce this tight integration in a neural dynamic process model of reaching and grasping that covers the complete path from visual perception to movement generation within a unified modeling framework, Dynamic Field Theory. All requisite processes are realized as time-continuous dynamical systems that model the evolution in time of neural population activation. Population-level neural processes bring about the attentional selection of objects, the estimation of object shape and pose, and the mapping of pose parameters to suitable movement parameters. Once a target object has been selected, its pose parameters couple into the neural dynamics of movement generation so that changes of pose are propagated through the architecture to update the performed movement online. Implementing the neural architecture on an anthropomorphic robot arm equipped with a Kinect sensor, we evaluate the model by grasping wooden objects. Their size, shape, and pose are estimated from a neural model of scene perception that is based on feature fields. The sequential organization of a reach and grasp act emerges from a sequence of dynamic instabilities within a neural dynamics of behavioral organization that effectively switches the neural controllers from one phase of the action to the next. Trajectory formation itself is driven by a dynamical systems version of the potential field approach. We highlight the emergent capacity for online updating by showing that a shift or rotation of the object during the reaching phase leads to the online adaptation of the movement plan and successful completion of the grasp.
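
The online-updating capacity can be illustrated with a toy version of attractor-based trajectory formation (a minimal sketch, not the paper's controller):

import numpy as np

# Toy attractor dynamics for a 2D end-effector position: the target couples
# into the dynamics continuously, so a mid-movement shift of the object is
# absorbed without any discrete replanning step.
dt, lam = 0.01, 4.0                       # time step [s], attraction rate [1/s]
x = np.array([0.0, 0.0])                  # hand position
target = np.array([1.0, 0.0])
for step in range(300):
    if step == 150:                       # the object is shifted during the reach
        target = np.array([1.0, 0.5])
    x = x + dt * lam * (target - x)       # dx/dt = lambda * (target - x)
print("final position:", np.round(x, 3))  # converges to the shifted target

Because the target enters the dynamics as a continuously coupled variable, updating is not a special operation but the default behavior of the system.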


joint ieee international conference on development and learning and epigenetic robotics | 2015

The neural dynamics of goal-directed arm movements: A developmental perspective

Stephan K. U. Zibner; Jan Tekülve; Gregor Schöner

We present a neuro-dynamic architecture for the generation of movement of the hand toward a visual target that integrates movement planning based on visual input, movement initiation and termination, the generation of the time courses of virtual trajectories of the hand in Cartesian space, and their transformation into virtual joint trajectories and muscle forces. The architecture captures properties of adult goal-directed arm movements such as bell-shaped velocity profiles and on-line updating of a movement when the target is shifted. The integrated and autonomous nature of the architecture makes it possible to study how motor performance is affected when one of the three core processes, planning, timing, and transformation from end-effector to joint space, is decalibrated to reflect earlier stages of development. We find signatures of the development of reaching such as multiple movement units and curved movement paths in the “young” model.
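
The bell-shaped velocity profiles mentioned here can be illustrated with a minimal second-order dynamics (an illustration only, not the architecture's timing model):

import numpy as np

# Critically damped second-order attractor dynamics pulled to the target:
# the resulting speed profile is smooth and unimodal, i.e., roughly bell-shaped.
dt, omega = 0.001, 10.0                   # time step [s], stiffness parameter [1/s]
x, v, target = 0.0, 0.0, 1.0
speeds = []
for _ in range(2000):
    a = omega**2 * (target - x) - 2.0 * omega * v
    v += dt * a
    x += dt * v
    speeds.append(abs(v))
print("peak speed %.2f at t=%.2f s" % (max(speeds), np.argmax(speeds) * dt))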


intelligent robots and systems | 2014

A neural dynamics architecture for grasping that integrates perception and movement generation and enables on-line updating

Guido Knips; Stephan K. U. Zibner; Hendrik Reimann; Irina Popova; Gregor Schöner

We present a neural dynamics architecture for grasping that integrates perceptual processes of scene exploration, object selection and classification, and grasp pose estimation with motor processes such as planning and controlling reach and grasp movements. Inspired by theories of human embodied cognition, the entire architecture is essentially one big dynamical system from which discrete events such as initiating and terminating reaches and grasps emerge through dynamical instabilities. Using a Kinect sensor as input, we implement the architecture on a Kuka lightweight arm with a Schunk Dextrous Hand and demonstrate grasping movements that are updated on-line when the object is shifted or rotated during movement planning or execution.
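
The idea that discrete events emerge through dynamical instabilities can be made concrete with a generic two-node sketch from the DFT repertoire (a toy illustration, not the paper's architecture; all parameters are invented):

import numpy as np

sigmoid = lambda u: 1.0 / (1.0 + np.exp(-4.0 * u))

# An intention node drives a toy reach; a condition-of-satisfaction (CoS) node
# detects goal completion and inhibits the intention, terminating the phase.
dt, h, c_self, c_inhib = 0.05, -2.0, 4.0, 6.0
u_int, u_cos = h, h                       # both nodes start at the resting level
task_input = 3.0                          # the task boosts the intention node
x, target = 0.0, 1.0                      # toy end-effector state
for _ in range(2000):
    near_goal = 4.0 if abs(target - x) < 0.05 else 0.0
    du_int = -u_int + h + c_self * sigmoid(u_int) + task_input - c_inhib * sigmoid(u_cos)
    du_cos = -u_cos + h + c_self * sigmoid(u_cos) + near_goal * sigmoid(u_int)
    u_int += dt * du_int
    u_cos += dt * du_cos
    x += dt * sigmoid(u_int) * (target - x)   # the intention gates the movement
print("reach completed and intention switched off?", abs(target - x) < 0.05 and u_int < 0)

No clock or state machine decides when the reach ends; the switch is an instability of the coupled node dynamics, triggered by the sensed goal condition.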


Joint IEEE International Conferences on Development and Learning and Epigenetic Robotics (ICDL-EpiRob) | 2014

Reaching and grasping novel objects: Using neural dynamics to integrate and organize scene and object perception with movement generation

Guido Knips; Stephan K. U. Zibner; Hendrik Reimann; Irina Popova; Gregor Schöner

We present a neural dynamics architecture for robotic grasping of novel objects. It closes the perception-action loop by integrating perceptual processes such as scene exploration, pose estimation, and shape classification with movement generation to reach and grasp a target object. Inspired by theories of human embodied cognition, this is achieved by interconnected dynamical systems, whose dynamical instabilities mark the discrete events of the grasping process. The architecture perceives the scene through a Kinect sensor and executes the grasp with a Schunk Dextrous Hand attached to a Kuka lightweight arm.


intelligent robots and systems | 2010

A neuro-dynamic object recognition architecture enhanced by foveal vision and a gaze control mechanism

Christian Faubel; Stephan K. U. Zibner

We present an extension of a neuro-dynamic object recognition system that combines bottom-up recognition of matching patterns and top-down estimation of pose parameters in a recurrent loop. The extension is an active foveal vision system, which integrates easily into the architecture and improves the recognition rate both in previous experiments on the COIL-100 database and in scenes where segmentation of objects is not trivial. Furthermore, the active component substantially increases the spatial area in which objects can be tracked. When objects move faster than visual servoing can track, catch-up saccades are autonomously generated.
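
The division of labor between smooth pursuit and catch-up saccades can be sketched as follows (illustrative control logic, not the paper's implementation; all numbers are toy values):

import numpy as np

# Rate-limited visual servoing with autonomously generated catch-up saccades
# (one-dimensional gaze angle).
dt, gain, max_rate, thresh = 0.01, 8.0, 20.0, 5.0
gaze, saccades = 0.0, 0
for step in range(1000):
    target = 40.0 * (step * dt)           # target moves faster than the servo limit
    error = target - gaze
    if abs(error) > thresh:               # pursuit falls behind: trigger a saccade
        gaze = target
        saccades += 1
    else:
        gaze += dt * np.clip(gain * error, -max_rate, max_rate)   # smooth pursuit
print("catch-up saccades generated:", saccades)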


Frontiers in Neurorobotics | 2016

Developing Dynamic Field Theory Architectures for Embodied Cognitive Systems with cedar

Oliver Lomp; Mathis Richter; Stephan K. U. Zibner; Gregor Schöner

Embodied artificial cognitive systems, such as autonomous robots or intelligent observers, connect cognitive processes to sensory and effector systems in real time. Prime candidates for such embodied intelligence are neurally inspired architectures. While components such as feed-forward neural networks are well established, designing pervasively autonomous neural architectures remains a challenge. This includes the problem of tuning the parameters of such architectures so that they deliver specified functionality under variable environmental conditions and retain these functions as the architectures are expanded. The scaling and autonomy problems are solved, in part, by dynamic field theory (DFT), a theoretical framework for the neural grounding of sensorimotor and cognitive processes. In this paper, we address how to efficiently build DFT architectures that control embodied agents and how to tune their parameters so that the desired cognitive functions emerge while such agents are situated in real environments. In DFT architectures, dynamic neural fields or nodes are assigned dynamic regimes, that is, attractor states and their instabilities, from which cognitive function emerges. Tuning thus amounts to determining values of the dynamic parameters for which the components of a DFT architecture are in the specified dynamic regime under the appropriate environmental conditions. The process of tuning is facilitated by the software framework cedar, which provides a graphical interface to build and execute DFT architectures. It makes it possible to change dynamic parameters online and to visualize the activation states of any component while the agent receives sensory inputs in real time. Using a simple example, we take the reader through the workflow of conceiving of DFT architectures, implementing them on embodied agents, tuning their parameters, and assessing performance while the system is coupled to real sensory inputs.
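
What "assigning a dynamic regime" amounts to can be sketched numerically for the simplest component, a single self-excitatory node (illustrative Python, not cedar code): tuning places the node's operating point between its detection and reverse-detection instabilities, which a parameter sweep makes visible as hysteresis:

import numpy as np

sigmoid = lambda u: 1.0 / (1.0 + np.exp(-4.0 * u))

def settle(u, s, h=-2.0, c=4.0, dt=0.1, steps=400):
    """Relax a self-excitatory node du/dt = -u + h + c*sigmoid(u) + s."""
    for _ in range(steps):
        u += dt * (-u + h + c * sigmoid(u) + s)
    return u

# Sweep the input strength up, then down, and record where the node switches:
# the two switch points differ, exposing the bistable regime between the
# detection and reverse-detection instabilities.
u, up_switch, down_switch = -2.0, None, None
for s in np.arange(0.0, 4.0, 0.05):       # increasing input
    u = settle(u, s)
    if up_switch is None and u > 0:
        up_switch = s                     # detection instability
for s in np.arange(4.0, -2.0, -0.05):     # decreasing input
    u = settle(u, s)
    if down_switch is None and u < 0:
        down_switch = s                   # reverse-detection instability
print("detection at s=%.2f, reverse detection at s=%.2f" % (up_switch, down_switch))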

Collaboration


Dive into Stephan K. U. Zibner's collaboration.

Top Co-Authors

Guido Knips
Ruhr University Bochum

Oliver Lomp
Ruhr University Bochum