Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Oliver Herbort is active.

Publications


Featured research published by Oliver Herbort.


Psychological Review | 2007

Exploiting Redundancy for Flexible Behavior: Unsupervised Learning in a Modular Sensorimotor Control Architecture

Martin V. Butz; Oliver Herbort; Joachim Hoffmann

Autonomously developing organisms face several challenges when learning reaching movements. First, motor control is learned unsupervised or self-supervised. Second, knowledge of sensorimotor contingencies is acquired in contexts in which action consequences unfold in time. Third, motor redundancies must be resolved. To solve all 3 of these problems, the authors propose a sensorimotor, unsupervised, redundancy-resolving control architecture (SURE_REACH), based on the ideomotor principle. Given a 3-degrees-of-freedom arm in a 2-dimensional environment, SURE_REACH encodes 2 spatial arm representations with neural population codes: a hand end-point coordinate space and an angular arm posture space. A posture memory solves the inverse kinematics problem by associating hand end-point neurons with neurons in posture space. An inverse sensorimotor model associates posture neurons with each other action-dependently. Together, population encoding, redundant posture memory, and the inverse sensorimotor model enable SURE_REACH to learn and represent sensorimotor grounded distance measures and to use dynamic programming to reach goals efficiently. The architecture not only solves the redundancy problem but also increases goal reaching flexibility, accounting for additional task constraints or realizing obstacle avoidance. While the spatial population codes resemble neurophysiological structures, the simulations confirm the flexibility and plausibility of the model by mimicking previously published data in arm-reaching tasks.
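The dynamic-programming idea at the core of the abstract above can be illustrated with a minimal sketch (not the authors' code; the state graph, the `gamma` discount, and the iteration cap are assumptions for illustration): goal activation spreads backwards through a learned graph of posture transitions, and reaching then greedily follows the activation gradient.

```python
import numpy as np

def distance_to_goal(n_states, transitions, goal_states, gamma=0.9):
    """Propagate goal activation through the posture graph (value iteration).

    transitions: dict mapping a posture index to the list of postures
    reachable from it in one step (a stand-in for a learned sensorimotor
    model). Returns an activation per posture that decays with distance
    from the goal, i.e. a sensorimotor-grounded distance measure.
    """
    value = np.zeros(n_states)
    value[list(goal_states)] = 1.0
    for _ in range(100):  # iterate until activation has spread everywhere
        new = value.copy()
        for s, succs in transitions.items():
            if s not in goal_states and succs:
                new[s] = gamma * max(value[t] for t in succs)
        if np.allclose(new, value):
            break
        value = new
    return value

def greedy_step(state, transitions, value):
    """Move to the successor posture with the highest goal activation."""
    return max(transitions[state], key=lambda t: value[t])
```

On a chain of postures 0..4 with goal 4, the activation decays as gamma^distance, so greedy steps from any posture walk toward the goal; task constraints such as obstacles can be modeled by pruning transitions before the propagation.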


Experimental Brain Research | 2011

Habitual and goal-directed factors in (everyday) object handling.

Oliver Herbort; Martin V. Butz

A habitual and a goal-directed system contribute to action selection in the human CNS. We examined to what extent both systems interact when selecting grasps for handling everyday objects. In Experiment 1, an upright or inverted cup had to be rotated or moved. To-be-rotated upright cups were more frequently grasped with a thumb-up grasp, which is habitually used to hold an upright cup, than inverted cups, which are not associated with a specific grasp. Additionally, grasp selection depended on the overarching goal of the movement sequence (rotation vs. transport), in line with the end-state comfort principle. This shows that the habitual system and the goal-directed system both contribute to grasp selection. Experiment 2 revealed that this orientation-dependent grasp selection was present for movements of both the dominant and the non-dominant hand. In Experiment 3, different everyday objects had to be moved or rotated. Grasp selection depended on object orientation only if different orientations of an object were associated with different habitual grasps. Additionally, grasp selection was affected by the horizontal direction of the forthcoming movement. In sum, the experiments provide evidence that the interaction between the habitual and the goal-directed system determines grasp selection in the handling of everyday objects.


Genetic and Evolutionary Computation Conference | 2008

Context-dependent predictions and cognitive arm control with XCSF

Martin V. Butz; Oliver Herbort

While John Holland has always envisioned learning classifier systems (LCSs) as cognitive systems, most work on LCSs has focused on classification, data mining, and function approximation. In this paper, we show that the XCSF classifier system can be suitably modified to control a robot system with redundant degrees of freedom, such as a robot arm. Inspired by recent research insights suggesting that sensorimotor codes are nearly ubiquitous in the brain and an essential ingredient for cognition in general, the XCSF system is modified to learn classifiers that encode piecewise linear sensorimotor structures, conditioned on prediction-relevant contextual input. In the investigated robot arm problem, we show that XCSF partitions the (contextual) posture space of the arm in such a way that accurate hand movements can be predicted given particular motor commands. Furthermore, we show that the inversion of the sensorimotor predictive structures enables accurate goal-directed closed-loop control of arm reaching movements. Besides the robot arm application, we also investigate the performance of the modified XCSF system on a set of artificial functions. All results indicate that XCSF is a useful tool for evolving problem space partitions that are maximally effective for the encoding of sensorimotor dependencies. A final discussion elaborates on the relation of the approach to actual brain structures and to cognitive psychology theories of learning and behavior.
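The piecewise linear prediction that XCSF evolves can be reduced to a toy sketch (not the actual XCSF implementation: XCSF updates its linear predictions with RLS/NLMS-style rules and evolves its conditions genetically, whereas this illustration uses fixed interval conditions and a plain delta rule):

```python
import numpy as np

class LinearClassifier:
    """One 'classifier': an interval condition plus a local linear model."""

    def __init__(self, lo, hi, dim):
        self.lo, self.hi = lo, hi       # matching interval (the condition)
        self.w = np.zeros(dim + 1)      # linear weights, incl. offset term

    def matches(self, x):
        """Condition check on the (contextual) input."""
        return self.lo <= x[0] < self.hi

    def predict(self, x):
        return self.w @ np.append(1.0, x)

    def update(self, x, y, eta=0.2):
        """Delta-rule update of the local linear prediction."""
        xa = np.append(1.0, x)
        self.w += eta * (y - self.predict(x)) * xa
```

Two such classifiers partitioning [-1, 1] at zero suffice to approximate y = |x| accurately, which is the sense in which a population of locally linear predictors covers a nonlinear sensorimotor mapping.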


Experimental Brain Research | 2010

Planning and control of hand orientation in grasping movements.

Oliver Herbort; Martin V. Butz

Humans grasp objects in a way that facilitates the intended use of the object. We examined how humans grasp a circular control knob in order to turn it in different directions and by different extents. To examine the processes involved in anticipatory planning of grasps, we manipulated advance information about the location of the control knob and the target of the knob-turn. The forearm orientation at the time of grasping depended strongly on the knob-turn, with the direction of the knob-turn having a stronger effect than the extent of the knob-turn. However, the variability of the forearm orientations after the knob-turn remained considerable. Anticipatory forearm orientations began early during the grasping movement. Advance information had no influence on the trajectory of the grasp but affected reaction times and the duration of the grasp. From the results, we conclude that (1) grasps are selected in anticipation of the upcoming knob rotation, (2) the desired hand location and forearm orientation at the time of grasping are specified before the onset of the grasp, and (3) an online programming strategy is used to schedule the preparation of the knob-turn during the execution of the grasp.


Psychological Research / Psychologische Forschung | 2012

The continuous end-state comfort effect: weighted integration of multiple biases.

Oliver Herbort; Martin V. Butz

The grasp orientation when grasping an object is frequently aligned in anticipation of the intended rotation of the object (end-state comfort effect). We analyzed grasp orientation selection in a continuous task to determine the mechanisms underlying the end-state comfort effect. Participants had to grasp a box by a circular handle, which allowed for arbitrary grasp orientations, and then had to rotate the box by various angles. Experiments 1 and 2 revealed that the direction of the rotation strongly determined grasp orientations, yet end postures still varied considerably. Experiments 3 and 4 further showed that visual stimuli and initial arm postures biased grasp orientations if the intended rotation could be easily achieved. The data show that end-state comfort as well as other factors determine grasp orientation selection. A simple mechanism that integrates multiple weighted biases can account for the data.
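The "weighted integration of multiple biases" can be illustrated generically for a circular variable such as grasp orientation (a plain circular weighted mean, not the authors' fitted model; the angles and weights below are purely illustrative):

```python
import math

def integrate_biases(biases):
    """Combine (angle_rad, weight) biases on a circular variable.

    Each bias pulls toward its preferred angle in proportion to its
    weight; the result is the direction of the weighted vector sum,
    which handles the wrap-around of angles correctly.
    """
    x = sum(w * math.cos(a) for a, w in biases)
    y = sum(w * math.sin(a) for a, w in biases)
    return math.atan2(y, x)
```

With equal weights, two biases at 0 and 90 degrees yield 45 degrees; raising one weight pulls the selected orientation toward that bias, mirroring how end-state comfort, visual stimuli, and initial posture could jointly shift the chosen grasp.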


Cognitive Processing | 2007

Explorations of anticipatory behavioral control (ABC): a report from the cognitive psychology unit of the University of Würzburg

Joachim Hoffmann; Michael P. Berner; Martin V. Butz; Oliver Herbort; Andrea Kiesel; Wilfried Kunde; Alexandra Lenhard

The report comprises recent theoretical considerations, experimental research, and simulations, all of which aim to clarify the anticipatory mechanisms of behavioral control.


PLOS ONE | 2012

Influence of Motor Planning on Distance Perception within the Peripersonal Space

Wladimir Kirsch; Oliver Herbort; Martin V. Butz; Wilfried Kunde

We examined whether movement costs, as defined by movement magnitude, have an impact on distance perception in near space. In Experiment 1, participants were given a numerical cue regarding the amplitude of a hand movement to be carried out. Before the movement was executed, the length of a visual distance had to be judged. The larger the amplitude of the concurrently prepared hand movement, the larger these visual distances were judged to be. In Experiment 2, in which numerical cues were merely memorized without concurrent movement planning, this increase of judged distance with cue size was not observed. The results of these experiments indicate that visual perception of near space is specifically affected by the costs of planned hand movements.


From Motor Learning to Interaction Learning in Robots | 2010

The SURE_REACH Model for Motor Learning and Control of a Redundant Arm: From Modeling Human Behavior to Applications in Robotics

Oliver Herbort; Martin V. Butz; Gerulf K. M. Pedersen

The recently introduced neural network SURE_REACH (sensorimotor unsupervised redundancy-resolving control architecture) models motor cortical learning and control of human reaching movements. The model learns redundant, internal body models that are highly suitable for flexibly invoking effective motor commands. The encoded redundancy is used to adapt behavior flexibly to situational constraints without the need for further learning. These adaptations to specific tasks or situations are realized by a neurally generated movement plan that adheres to various end-state or trajectory-related constraints. The movement plan can be implemented by proprioceptive or visual closed-loop control. This chapter briefly reviews the literature on computational models of motor learning and control and gives a description of SURE_REACH and its neural network implementation. Furthermore, we relate the model to human motor learning and performance and discuss its neural foundations. Finally, we apply the model to the control of a dynamic robot platform. In sum, SURE_REACH grounds highly flexible task-dependent behavior in a neural network framework for unsupervised learning. It accounts for the neural processes that underlie fundamental aspects of human behavior and is readily applicable to the control of robots.


International Conference on Development and Learning | 2007

Learning to select targets within targets in reaching tasks

Oliver Herbort; Dimitri Ognibene; Martin V. Butz; Gianluca Baldassarre

We present a developmental neural network model of motor learning and control, called RL_SURE_REACH. In a childhood phase, a motor controller for goal-directed reaching movements with a redundant arm develops unsupervised. In subsequent task-specific learning phases, the neural network acquires goal-modulation skills. These skills enable RL_SURE_REACH to master a task that was used in a psychological experiment by Trommershäuser, Maloney, and Landy (2003). This task required participants to select aimpoints within targets that maximize the likelihood of hitting a rewarded target and minimize the likelihood of accidentally hitting an adjacent penalty area. The neural network acquires the necessary skills by means of a reinforcement-learning-based modulation of the mapping from visual representations to the target representation of the motor controller. This mechanism enables the model to closely replicate the data from the original experiment. In conclusion, the effectiveness of learned actions can be significantly enhanced by fine-tuning action selection based on the combination of information about the statistical properties of the motor system with different environmental payoff scenarios.
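The aimpoint-selection principle behind the modeled task can be sketched independently of the neural network (an expected-gain calculation under Gaussian endpoint noise; the one-dimensional geometry and the reward and penalty values below are illustrative assumptions, not the original experiment's parameters):

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def hit_prob(lo, hi, aim, sigma):
    """Probability that a noisy endpoint (std sigma) lands in [lo, hi]."""
    return norm_cdf((hi - aim) / sigma) - norm_cdf((lo - aim) / sigma)

def expected_gain(aim, sigma, target=(0.0, 1.0), penalty=(-1.0, 0.0),
                  reward=100.0, loss=-500.0):
    """Expected payoff of an aimpoint given a reward and a penalty region."""
    return (reward * hit_prob(*target, aim, sigma)
            + loss * hit_prob(*penalty, aim, sigma))

def best_aim(sigma, grid=None):
    """Grid search for the aimpoint that maximizes expected gain."""
    grid = grid or [i / 100.0 for i in range(0, 101)]
    return max(grid, key=lambda a: expected_gain(a, sigma))
```

The optimal aimpoint sits inside the rewarded region, shifted away from the penalty border by an amount that depends on the motor noise, which is exactly the statistical trade-off the abstract describes.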


Vision Research | 2015

Goal-oriented gaze strategies afforded by object interaction.

Anna Belardinelli; Oliver Herbort; Martin V. Butz

Task influence has long been known to play a major role in the way our eyes scan a scene. Yet most studies focus either on visual search or on sequences of active tasks in complex real-world scenarios. Few studies have contrasted the distribution of eye fixations during viewing and grasping objects. Here we address how attention is deployed when different actions are planned on objects, in contrast to when the same objects are categorized. In this respect, we are particularly interested in the role every fixation plays in the unfolding dynamics of action control. We conducted an eye-tracking experiment in which participants were shown images of real-world objects. Subjects were either to assign the displayed objects to one of two classes (categorization task), to mimic lifting (lifting task), or to mimic opening the object (opening task). Results suggest that even on simplified, two-dimensional displays the eyes reveal the participants' intentions in an anticipatory fashion. For the active tasks, the second saccade after stimulus onset was already directed towards the central region between the two locations where the thumb and the rest of the fingers would be placed. An analysis of saliency at fixation locations showed that fixations in active tasks corresponded more closely with salient features than fixations in the passive task. We suggest that attention flexibly coordinates visual selection for information retrieval and motor planning, working as a gateway between three components, linking the task (action), the object (target), and the effector (hand) in an effective way.

Collaboration


Dive into Oliver Herbort's collaborations.

Top Co-Authors

Robert Wirth

University of Würzburg


David A. Rosenbaum

Pennsylvania State University
