Wolfram Erlhagen
University of Minho
Publications
Featured research published by Wolfram Erlhagen.
Psychological Review | 2002
Wolfram Erlhagen; Gregor Schöner
A theoretical framework for understanding movement preparation is proposed. Movement parameters are represented by activation fields, distributions of activation defined over metric spaces. The fields evolve under the influence of various sources of localized input, representing information about upcoming movements. Localized patterns of activation self-stabilize through cooperative and competitive interactions within the fields. The task environment is represented by a second class of fields, which preshape the movement parameter representation. The model accounts for a sizable body of empirical findings on movement initiation (continuous and graded nature of movement preparation, dependence on the metrics of the task, stimulus uncertainty effect, stimulus-response compatibility effects, Simon effect, precuing paradigm, and others) and suggests new ways of exploring the structure of motor representations.
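The field dynamics summarized in this abstract follow the general form of an Amari-type neural field: activation relaxes toward a resting level plus localized inputs and lateral interaction. Below is a minimal one-dimensional NumPy sketch; all parameter values (resting level, kernel shape, input amplitudes, timing) are illustrative assumptions, not the ones used in the paper.

```python
import numpy as np

n = 101                                   # grid over the metric dimension
x = np.linspace(-10.0, 10.0, n)
dx = x[1] - x[0]

def gauss(center, width, amp):
    return amp * np.exp(-(x - center) ** 2 / (2.0 * width ** 2))

# Lateral interaction: local excitation with global inhibition (assumed shape).
kernel = 2.0 * np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * 1.5 ** 2)) - 0.5

h = -2.0                                  # resting level (assumed)
u = np.full(n, h)                         # activation field
f = lambda v: 1.0 / (1.0 + np.exp(-4.0 * v))   # sigmoid output function

preshape = gauss(-3.0, 3.0, 1.0)          # broad input from the task environment
stimulus = gauss(-2.0, 1.0, 4.0)          # localized "response signal" input

tau, dt = 10.0, 1.0
for t in range(300):
    inp = preshape + (stimulus if t > 100 else 0.0)   # stimulus arrives late
    u += dt / tau * (-u + h + inp + kernel @ f(u) * dx)

peak = x[np.argmax(u)]                    # self-stabilized peak near the stimulus
```

Before the localized input arrives, the broad preshape raises the field but keeps it subthreshold; once the input switches on, cooperative-competitive interaction lets a single localized peak self-stabilize near the specified parameter value.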
Journal of Neural Engineering | 2006
Wolfram Erlhagen; Estela Bicho
This tutorial presents an architecture for autonomous robots to generate behavior in joint action tasks. To efficiently interact with another agent in solving a mutual task, a robot should be endowed with cognitive skills such as memory, decision making, action understanding and prediction. The proposed architecture is strongly inspired by our current understanding of the processing principles and the neuronal circuitry underlying these functionalities in the primate brain. As a mathematical framework, we use a coupled system of dynamic neural fields, each representing the basic functionality of neuronal populations in different brain areas. It implements goal-directed behavior in joint action as a continuous process that builds on the interpretation of observed movements in terms of the partner's action goal. We validate the architecture in two experimental paradigms: (1) a joint search task; (2) a reproduction of an observed or inferred end state of a grasping-placing sequence. We also review some of the mathematical results about dynamic neural fields that are important for the implementation work.
Neuroreport | 1998
Annette Bastian; Alexa Riehle; Wolfram Erlhagen; Gregor Schöner
Single-neuron activity was recorded in monkey motor cortex during the execution of pointing movements in six directions. The amount of prior information was manipulated by varying the range of precued directions. A distribution of neural population activation was constructed in the space of movement directions. This population representation of movement direction was preshaped by the precue. Peak location and width reflected the precued range of movement directions. From this preshaped form, the population representation evolved continuously in time and gradually in parameter space toward a more sharply peaked distribution centered on the parameter value specified by the response signal. A theoretical model of motor programming generated a similar temporal evolution of an activation field representing movement direction.
Journal of Neuroscience Methods | 1999
Wolfram Erlhagen; Annette Bastian; Dirk Jancke; Alexa Riehle; Gregor Schöner
In many cortical areas, simple stimuli or task conditions activate large populations of neurons. We hypothesize that such populations support processes of interaction within parametric representations and integration of multiple sources of input, and we propose to study these processes using distributions of population activation (DPAs) as a tool. Such distributions can be viewed as neuronal representations of continuous stimulus or task parameters. They are built from basis functions contributed by each neuron. These functions may be explicitly chosen based on tuning curves or receptive field profiles, or they may be determined by minimizing the distance between chosen target distributions and the constructed DPAs. In both cases, construction of the DPA is based on a set of reference conditions in which the stimulus or task parameters are sampled experimentally. In a second step, basis functions are kept fixed, and the DPAs are used to explore time-dependent processing, interaction, and integration of information. For instance, stimuli which simultaneously specify multiple parameter values can be used to study interactions within the parametric representation. We review an experiment in which the representation of retinal position is probed in this way, revealing fast excitatory interactions among neurons representing similar retinal positions and slower inhibitory interactions among neurons representing dissimilar retinal positions. Similarly, DPAs can be used to analyze different sources of input that are fused within a parametric representation. We review an experiment in which the representation of the direction of goal-directed arm movements in motor and premotor cortex is studied when prior and current information about upcoming movement tasks are integrated.
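The first construction described in this abstract, a DPA built from explicitly chosen tuning curves, amounts to summing each neuron's basis function weighted by its firing rate. A short sketch, where the preferred directions, firing rates, and tuning width are invented example data rather than recorded values:

```python
import numpy as np

directions = np.linspace(0.0, 360.0, 181)        # movement-direction axis (deg)

def basis(preferred, width_deg=40.0):
    # Circular Gaussian tuning curve used as the basis function (assumed shape).
    d = np.deg2rad(directions - preferred)
    return np.exp((np.cos(d) - 1.0) / np.deg2rad(width_deg) ** 2)

# Hypothetical population: six neurons with known preferred directions,
# and the firing rates they showed on some trial (spikes/s, made up).
preferred_dirs = np.array([0.0, 60.0, 120.0, 180.0, 240.0, 300.0])
rates = np.array([5.0, 30.0, 55.0, 20.0, 8.0, 4.0])

# DPA(x) = sum_i rate_i * basis_i(x), normalized for comparison across trials.
dpa = rates @ np.vstack([basis(p) for p in preferred_dirs])
dpa /= dpa.max()

estimate = directions[np.argmax(dpa)]            # peak ~ represented direction
```

The peak of the resulting distribution lies near the preferred direction of the most active neurons, slightly pulled toward neighbors with elevated rates; probing how this profile evolves over time is what the second step in the abstract refers to.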
The Journal of Physiology | 2004
Dirk Jancke; Wolfram Erlhagen; Gregor Schöner; Hubert R. Dinse
Psychophysical evidence in humans indicates that localization is different for stationary flashed and coherently moving objects. To address how the primary visual cortex represents object position we used a population approach that pools spiking activity of many neurones in cat area 17. In response to flashed stationary squares (0.4 deg) we obtained localized activity distributions in visual field coordinates, which we referred to as profiles across a ‘population receptive field’ (PRF). Here we show how motion trajectories can be derived from activity across the PRF and how the representation of moving and flashed stimuli differs in position. We found that motion was represented by peaks of population activity that followed the stimulus with a speed‐dependent lag. However, time‐to‐peak latencies were shorter by ∼16 ms compared to the population responses to stationary flashes. In addition, motion representation showed a directional bias, as latencies were more reduced for peripheral‐to‐central motion compared to the opposite direction. We suggest that a moving stimulus provides ‘preactivation’ that allows more rapid processing than for a single flash event.
Neural Networks | 2006
Rh Raymond Cuijpers; Hein T. van Schie; Mathieu Koppen; Wolfram Erlhagen; Harold Bekkering
Many of our daily activities are supported by behavioural goals that guide the selection of actions, which allow us to reach these goals effectively. Goals are considered to be important for action observation since they allow the observer to copy the goal of the action without the need to use the exact same means. The importance of being able to use different action means becomes evident when the observer and observed actor have different bodies (robots and humans) or bodily measurements (parents and children), or when the environments of actor and observer differ substantially (when an obstacle is present or absent in either environment). A selective focus on the action goals instead of the action means furthermore circumvents the need to consider the vantage point of the actor, which is consistent with recent findings that people prefer to represent the actions of others from their own individual perspective. In this paper, we use a computational approach to investigate how knowledge about action goals and means is used in action observation. We hypothesise that in action observation human agents are primarily interested in identifying the goals of the observed actor's behaviour. Behavioural cues (e.g. the way an object is grasped) may help to disambiguate the goal of the actor (e.g. whether a cup is grasped for drinking or handing it over). Recent advances in cognitive neuroscience are cited in support of the model's architecture.
Journal of the Experimental Analysis of Behavior | 2009
Armando Machado; Maria Teresa Malheiro; Wolfram Erlhagen
In recent decades, researchers have proposed a large number of theoretical models of timing. These models make different assumptions concerning how animals learn to time events and how such learning is represented in memory. However, few studies have examined these different assumptions either empirically or conceptually. For knowledge to accumulate, variation in theoretical models must be accompanied by selection of models and model ideas. To that end, we review two timing models, Scalar Expectancy Theory (SET), the dominant model in the field, and the Learning-to-Time (LeT) model, one of the few models dealing explicitly with learning. In the first part of this article, we describe how each model works in prototypical concurrent and retrospective timing tasks, identify their structural similarities, and classify their differences concerning temporal learning and memory. In the second part, we review a series of studies that examined these differences and conclude that both the memory structure postulated by SET and the state dynamics postulated by LeT are probably incorrect. In the third part, we propose a hybrid model that may improve on its parent models. The hybrid model accounts for the typical findings in fixed-interval schedules, the peak procedure, mixed fixed-interval schedules, simple and double temporal bisection, and temporal generalization tasks. In the fourth and last part, we identify seven challenges that any timing model must meet.
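The SET account reviewed here can be caricatured as a pacemaker-accumulator with scalar (multiplicative) memory noise. The toy simulation below illustrates one of its signature predictions in temporal bisection, indifference near the geometric mean of the anchors; the pacemaker rate, noise level, and ratio decision rule are simplifying assumptions for illustration, not the article's formal treatment.

```python
import random

random.seed(1)

PACEMAKER_RATE = 5.0        # pulses per second (assumed)
MEMORY_NOISE = 0.15         # scalar memory variability (assumed)

def accumulate(duration_s):
    # Pacemaker-accumulator: the count grows linearly with elapsed time.
    return PACEMAKER_RATE * duration_s

def memory_sample(duration_s):
    # Scalar property: memory noise grows in proportion to the stored value.
    return accumulate(duration_s) * random.gauss(1.0, MEMORY_NOISE)

def choose(probe_s, short_s=2.0, long_s=8.0):
    # Ratio comparison: respond "long" when the probe count exceeds the
    # geometric mean of the sampled short and long memories.
    count = accumulate(probe_s)
    short_m, long_m = memory_sample(short_s), memory_sample(long_s)
    return "long" if count / short_m > long_m / count else "short"

# With 2 s and 8 s anchors the indifference point falls near
# sqrt(2 * 8) = 4 s, so p("long") at a 4 s probe should be near 0.5.
p_long_at_4 = sum(choose(4.0) == "long" for _ in range(2000)) / 2000
```

Running the bisection at probes spanning 2 to 8 s traces out the familiar sigmoidal psychometric function whose midpoint sits at the geometric mean, which is one of the "typical findings" any hybrid model must also reproduce.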
Advances in psychology | 1997
Gregor Schöner; Klaus Kopecz; Wolfram Erlhagen
Simple motor acts, such as reaching for an object or making a saccade toward a target in a scene, involve deep and general problems: extracting and fusing from various sources of sensation the information that specifies the motor act; relating such information to coordinate frames relevant to motor behavior; shaping and stabilizing a single movement act in the face of multivalued and ambiguous sensory information. We propose a theoretical framework, within which some of these problems can be addressed. While compatible with ideas from control theory and from information processing in neural networks, the framework is aimed primarily at the processes of integration. We demonstrate the concepts by building two models of motor programming, one for goal-directed arm movements, the other for saccadic eye movements. In each case, the interaction between current sensory information and memorized information on the task environment is addressed to exemplify integration. We cover such aspects as the dependence of reaction time on number, probability, and metrics of choices, the effect of stimulus-response compatibility on reaction times, the graded and continuous evolution of motor program parameters, and the modification of the motor program in response to sudden changes in input information. The relationships between the concepts of this theoretical framework and concepts of neurophysiology as well as of cognitive science are discussed.
Frontiers in Neurorobotics | 2010
Estela Bicho; Luis Henrique Leme Louro; Wolfram Erlhagen
How do humans coordinate their intentions, goals and motor behaviors when performing joint action tasks? Recent experimental evidence suggests that resonance processes in the observer's motor system are crucially involved in our ability to understand the actions of others, to infer their goals and even to comprehend their action-related language. In this paper, we present a control architecture for human–robot collaboration that exploits this close perception-action linkage as a means to achieve more natural and efficient communication grounded in sensorimotor experiences. The architecture is formalized by a coupled system of dynamic neural fields representing a distributed network of neural populations that encode in their activation patterns goals, actions and shared task knowledge. We validate the verbal and nonverbal communication skills of the robot in a joint assembly task in which the human–robot team has to construct toy objects from their components. The experiments focus on the robot's capacity to anticipate the user's needs and to detect and communicate unexpected events that may occur during joint task execution.
Biological Cybernetics | 2003
Wolfram Erlhagen
Although the extrapolation of past perceptual history into the immediate and distant future is a fundamental phenomenon in everyday life, the underlying processing mechanisms are not well understood. A network model consisting of interacting excitatory and inhibitory cell populations coding for stimulus position is used to study the neuronal population response to a continuously moving stimulus. An adaptation mechanism is proposed that offers the possibility to control and modulate motion-induced extrapolation without changing the spatial interaction structure within the network. Using an occluder paradigm, functional advantages of an internally generated model of a moving stimulus are discussed. It is shown that the integration of such a model in processing leads to a faster and more reliable recognition of the input stream and allows for object permanence following occlusion. The modeling results are discussed in relation to recent experimental findings that show motion-induced extrapolation.
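The role of adaptation described in this abstract can be illustrated with a toy one-dimensional firing-rate field in which a slow adaptation variable suppresses recently active sites: because adaptation is strongest on the trailing side of a moving activity packet, the peak is pushed forward, reducing the tracking lag. All parameters below are made-up illustrations, not the paper's model.

```python
import numpy as np

n, dx = 200, 0.1
x = np.arange(n) * dx
# Lateral interaction: local excitation with global inhibition (assumed).
kernel = 1.2 * np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * 0.3 ** 2)) - 0.3
f = lambda v: 1.0 / (1.0 + np.exp(-5.0 * v))     # sigmoid output function

def track_lag(g_a, speed=0.01, steps=800):
    """Simulate the field and return the mean lag of the activity peak
    behind a rightward-moving stimulus (positive lag = peak trails)."""
    h, tau_u, tau_a, dt = -1.0, 5.0, 30.0, 0.5
    u, a = np.full(n, h), np.zeros(n)            # activation and adaptation
    lags = []
    for t in range(steps):
        c = 3.0 + speed * t                      # current stimulus center
        stim = 3.0 * np.exp(-(x - c) ** 2 / (2 * 0.5 ** 2))
        r = f(u)
        u += dt / tau_u * (-u + h + stim - a + kernel @ r * dx)
        a += dt / tau_a * (-a + g_a * r)         # slow, activity-driven adaptation
        if t > steps // 2:                       # measure after transients
            lags.append(c - x[np.argmax(u)])
    return float(np.mean(lags))

lag_plain = track_lag(g_a=0.0)    # no adaptation: peak trails the stimulus
lag_adapt = track_lag(g_a=1.5)    # adaptation shifts the peak forward
```

In this sketch the adaptation gain plays the role described in the abstract: it modulates how far the represented position is extrapolated without any change to the lateral interaction kernel.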