Myrka Zago
University of Rome Tor Vergata
Publications
Featured research published by Myrka Zago.
Nature Neuroscience | 2001
Joseph McIntyre; Myrka Zago; Alain Berthoz; Francesco Lacquaniti
How does the nervous system synchronize movements to catch a falling ball? According to one theory, only sensory information is used to estimate time-to-contact (TTC) with an approaching object; alternatively, implicit knowledge about physics may come into play. Here we show that astronauts initiated catching movements earlier in 0 g than in 1 g, which demonstrates that the brain uses an internal model of gravity to supplement sensory information when estimating TTC.
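To make the timing argument concrete, here is a minimal sketch (not taken from the study; the drop geometry and the constant-velocity "sensory-only" extrapolation are assumptions) comparing the remaining fall time predicted with and without an internal model of gravity:

```python
import math

G = 9.81  # gravitational acceleration at the Earth's surface (m/s^2)

def true_time_to_contact(height, speed, g=G):
    """Remaining fall time for a ball `height` metres above the hand,
    moving downward at `speed` m/s under constant acceleration g.
    Positive root of 0.5*g*t**2 + speed*t - height = 0."""
    return (-speed + math.sqrt(speed ** 2 + 2.0 * g * height)) / g

def first_order_time_to_contact(height, speed):
    """TTC extrapolated from current position and velocity only
    (no acceleration term): distance divided by current speed."""
    return height / speed

# Hypothetical example: a dropped ball observed 0.8 m above the hand,
# currently moving at about 3.13 m/s (after 0.5 m of free fall).
h = 0.8
v = math.sqrt(2.0 * G * 0.5)
print(f"TTC with a gravity model:     {true_time_to_contact(h, v):.3f} s")
print(f"first-order TTC (no gravity): {first_order_time_to_contact(h, v):.3f} s")
# Under 1 g the first-order estimate arrives too late; conversely, applying the
# gravity model to a ball that actually travels at constant velocity (as in 0 g)
# predicts contact too early, consistent with the earlier-initiated catches.
```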
Experimental Brain Research | 2009
Myrka Zago; Joseph McIntyre; Patrice Senot; Francesco Lacquaniti
Intercepting and avoiding collisions with moving objects are fundamental skills in daily life. Anticipatory behavior is required because of significant delays in transforming sensory information about target and body motion into a timed motor response. The ability to predict the kinematics and kinetics of interception or avoidance hundreds of milliseconds before the event may depend on several different sources of information and on different strategies of sensory-motor coordination. Exactly which sources of spatio-temporal information are used, and which control strategies, remain controversial issues. Indeed, these topics have been the battlefield of contrasting views on how the brain interprets visual information to guide movement. Here we attempt a synthetic overview of the vast literature on interception. We discuss in detail the behavioral and neurophysiological aspects of interception of targets falling under gravity, as this topic has received special attention in recent years. We show that visual cues alone are insufficient to predict the time and place of interception or avoidance, and they need to be supplemented by prior knowledge (or internal models) about several features of the dynamic interaction with the moving object.
The Journal of Physiology | 2012
Francesco Lacquaniti; Yuri P. Ivanenko; Myrka Zago
There is much experimental evidence for the existence of biomechanical constraints which simplify the problem of control of multi‐segment movements. In addition, it has been hypothesized that movements are controlled using a small set of basic temporal components or activation patterns, shared by several different muscles and reflecting global kinematic and kinetic goals. Here we review recent studies on human locomotion showing that muscle activity is accounted for by a combination of a few basic patterns, each one timed at a different phase of the gait cycle. Similar patterns are involved in walking and running at different speeds, walking forwards or backwards, and walking under different loading conditions. The corresponding weights of distribution to different muscles may change as a function of the condition, allowing highly flexible control. Biomechanical correlates of each activation pattern have been described, leading to the hypothesis that the co‐ordination of limb and body segments arises from the coupling of neural oscillators between each other and with limb mechanical oscillators. Muscle activations need only intervene during limited time epochs to force intrinsic oscillations of the system when energy is lost.
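The kind of decomposition described above can be illustrated with a small numerical sketch. This is not the authors' analysis pipeline; it only shows the generic idea of factorizing a matrix of muscle activation envelopes into a few basic temporal patterns and muscle-specific weights, here using non-negative matrix factorization from scikit-learn on synthetic data (the waveforms and the choice of four components are assumptions).

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Synthetic EMG envelopes: 12 muscles x 200 time samples over a gait cycle,
# built from 4 hidden Gaussian-shaped temporal patterns plus noise.
t = np.linspace(0, 1, 200)
patterns = np.stack([np.exp(-((t - c) ** 2) / (2 * 0.05 ** 2))
                     for c in (0.1, 0.35, 0.6, 0.85)])          # 4 x 200
weights = rng.random((12, 4))                                    # 12 x 4
emg = weights @ patterns + 0.02 * rng.random((12, 200))          # 12 x 200

# Factorize: EMG ~ W @ H, with W = muscle weights, H = basic temporal patterns.
model = NMF(n_components=4, init="nndsvda", max_iter=1000, random_state=0)
W = model.fit_transform(emg)   # 12 x 4  (weights of distribution to muscles)
H = model.components_          # 4 x 200 (basic activation patterns over the cycle)

reconstruction = W @ H
vaf = 1 - np.sum((emg - reconstruction) ** 2) / np.sum((emg - emg.mean()) ** 2)
print(f"variance accounted for by 4 patterns: {vaf:.3f}")
```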
Experimental Brain Research | 1999
Mauro Carrozzo; Joseph McIntyre; Myrka Zago; Francesco Lacquaniti
It has been hypothesized that the end-point position of reaching may be specified in an egocentric frame of reference. In most previous studies, however, reaching was toward a memorized target, rather than an actual target. Thus, the role played by sensorimotor transformation could not be dissociated from the role played by storage in short-term memory. In the present study the direct process of sensorimotor transformation was investigated in reaching toward continuously visible targets that need not be stored in memory. A virtual reality system was used to present visual targets in different three-dimensional (3D) locations in two different tasks, one with visual feedback of the hand and arm position (Seen Hand) and the other without such feedback (Unseen Hand). In the Seen Hand task, the axes of maximum variability and of maximum contraction converge toward the mid-point between the eyes. In the Unseen Hand task, only the maximum contraction correlates with the sight-line and the axes of maximum variability are not viewer-centered but rotate anti-clockwise around the body and the effector arm when moving from the right to the left workspace. The bulk of findings from these and previous experiments support the hypothesis of a two-stage process, with a gradual transformation from viewer-centered to body-centered and arm-centered coordinates. Retinal, extra-retinal and arm-related signals appear to be progressively combined in superior and inferior parietal areas, giving rise to egocentric representations of the end-point position of reaching.
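As a schematic reading of the proposed two-stage process, the sketch below chains a viewer-centered (eye-centered) target position through body-centered to arm-centered coordinates with homogeneous transforms. The geometry and the rigid-transform formulation are illustrative assumptions, not quantities estimated in the study.

```python
import numpy as np

def transform(rotation, translation):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# Hypothetical geometry (metres): the cyclopean eye sits 0.08 m in front of and
# 0.10 m above the trunk origin; the shoulder sits 0.20 m to the right of it.
eye_to_body = transform(np.eye(3), [0.0, 0.08, 0.10])
body_to_arm = transform(np.eye(3), [-0.20, 0.0, 0.0])

# A target seen 0.40 m straight ahead of the eyes, in homogeneous coordinates.
target_eye = np.array([0.0, 0.40, 0.0, 1.0])

target_body = eye_to_body @ target_eye   # viewer-centered -> body-centered
target_arm = body_to_arm @ target_body   # body-centered   -> arm-centered
print("body-centered:", target_body[:3])
print("arm-centered: ", target_arm[:3])
```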
Experimental Brain Research | 1999
R. Grasso; A. Peppe; F. Stratta; D. Angelini; Myrka Zago; P. Stanzione; Francesco Lacquaniti
Gait coordination was analyzed (four-camera 100 Hz ELITE system) in two groups of idiopathic Parkinson disease (PD) patients. Five patients underwent continuous infusion of apomorphine and were recorded in two different sessions (APO OFF and APO ON) on the same day. Three patients with a previous chronic electrode implantation in both internal globi pallidi (GPi) were recorded in the same experimental session with the electrodes on and off (STIM ON and STIM OFF). The orientation of both the trunk and the lower-limb segments was described with respect to the vertical in the sagittal plane. Lower-limb inter-segmental coordination was evaluated by analyzing the co-variation between thigh, shank, and foot elevation angles by means of orthogonal planar regression. At least 30 gait cycles per experimental condition were processed. We found that the trunk was bent forward in STIM OFF, whereas it was better aligned with the vertical in STIM ON in both PD groups. The legs never fully extended during the gait cycle in STIM OFF, whereas they extended before heel strike in STIM ON. The multisegmental coordination of the lower limb changed almost in parallel with the changes in trunk orientation. In STIM OFF, both the shape and the spatial orientation of the planar gait loops (thigh angle vs. shank angle vs. foot angle) differed from those of physiological locomotion, whereas in STIM ON the gait loop tended to resume features closer to the control. Switching the electrodes on and off in patients with GPi electrodes resulted in quasi-parallel changes of the trunk inclination and of the planar gait loop. The bulk of the data suggest that the basal-ganglia circuitry may be relevant in locomotion by providing an appropriate spatio-temporal framework for the control of posture and movement in a gravity-based body-centered frame of reference. Pallido-thalamic and/or pallido-mesencephalic pathways may influence the timing of the inter-segmental coordination for gait.
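The orthogonal planar regression of elevation angles mentioned above can be sketched numerically: stack the thigh, shank, and foot elevation angles over the gait cycle and fit the best plane by principal component analysis, quantifying planarity as the variance captured by the first two components. The sinusoidal waveforms below are illustrative assumptions, not patient data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic elevation angles (degrees) over one gait cycle, 100 samples:
# crude sinusoidal approximations of thigh, shank, and foot waveforms.
phase = np.linspace(0, 2 * np.pi, 100)
thigh = 20 * np.sin(phase)
shank = 35 * np.sin(phase - 0.9)
foot = 25 * np.sin(phase - 1.6)
angles = np.column_stack([thigh, shank, foot]) + rng.normal(0, 1.0, (100, 3))

# Orthogonal planar regression = PCA on the centred angle triplets.
centred = angles - angles.mean(axis=0)
_, singular_values, vt = np.linalg.svd(centred, full_matrices=False)
variance = singular_values ** 2 / np.sum(singular_values ** 2)

print("normal to the best-fitting plane:", np.round(vt[2], 3))
print(f"planarity (variance in first two PCs): {100 * variance[:2].sum():.1f}%")
```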
Journal of Neural Engineering | 2005
Myrka Zago; Francesco Lacquaniti
Prevailing views on how we time the interception of a moving object assume that the visual inputs are informationally sufficient to estimate the time-to-contact from the object's kinematics. However, there are limitations in the visual system that raise questions about the general validity of these theories. Most notably, vision is poorly sensitive to arbitrary accelerations. How then does the brain deal with the motion of objects accelerated by Earth's gravity? Here we review evidence in favor of the view that the brain makes the best estimate about target motion based on visually measured kinematics and an a priori guess about the causes of motion. According to this theory, a predictive model is used to extrapolate time-to-contact from the expected kinetics in the Earth's gravitational field.
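One simple way to read "the best estimate about target motion based on visually measured kinematics and an a priori guess about the causes of motion" is as a reliability-weighted combination of a noisy visual acceleration signal with a prior centred on Earth gravity. The sketch below illustrates that reading under assumed Gaussian noise; it is not the specific model advanced in the paper, and the variance values are arbitrary.

```python
def fuse_acceleration(a_visual, var_visual, a_prior=9.81, var_prior=1.0):
    """Inverse-variance weighted combination of a noisy visual acceleration
    estimate with a prior centred on Earth gravity. Both the Gaussian form
    and the variance values are illustrative assumptions."""
    w_visual = 1.0 / var_visual
    w_prior = 1.0 / var_prior
    return (w_visual * a_visual + w_prior * a_prior) / (w_visual + w_prior)

# Vision is poorly sensitive to acceleration, so its variance is large and the
# fused estimate is pulled strongly toward the gravity prior.
print(fuse_acceleration(a_visual=0.0, var_visual=100.0))  # ~9.71 m/s^2
# A hypothetical highly reliable visual signal would instead dominate the prior.
print(fuse_acceleration(a_visual=0.0, var_visual=0.5))    # ~3.27 m/s^2
```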
Current Opinion in Neurobiology | 2012
Francesco Lacquaniti; Yuri P. Ivanenko; Myrka Zago
Neural control of locomotion in human adults involves the generation of a small set of basic patterned commands directed to the leg muscles. The commands are generated sequentially in time during each step by neural networks located in the spinal cord, called Central Pattern Generators. This review outlines recent advances in understanding how motor commands are expressed at different stages of human development. Similar commands are found in several other vertebrates, indicating that locomotion development follows common principles of organization of the control networks. Movements show a high degree of flexibility at all stages of development, which is instrumental for learning and exploration of variable interactions with the environment.
Frontiers in Computational Neuroscience | 2013
Francesco Lacquaniti; Yuri P. Ivanenko; Andrea d'Avella; Karl E. Zelik; Myrka Zago
The identification of biological modules at the systems level often follows top-down decomposition of a task goal, or bottom-up decomposition of multidimensional data arrays into basic elements or patterns representing shared features. These approaches traditionally have been applied to mature, fully developed systems. Here we review some results from two other perspectives on modularity, namely the developmental and evolutionary perspective. There is growing evidence that modular units of development were highly preserved and recombined during evolution. We first consider a few examples of modules well identifiable from morphology. Next we consider the more difficult issue of identifying functional developmental modules. We dwell especially on modular control of locomotion to argue that the building blocks used to construct different locomotor behaviors are similar across several animal species, presumably related to ancestral neural networks of command. A recurrent theme from comparative studies is that the developmental addition of new premotor modules underlies the postnatal acquisition and refinement of several different motor behaviors in vertebrates.
The Journal of Neuroscience | 2012
Patrice Senot; Myrka Zago; Anne Le Séac'h; Mohammed Zaoui; Alain Berthoz; Francesco Lacquaniti; Joseph McIntyre
Humans are known to regulate the timing of interceptive actions by modeling, in a simplified way, Newtonian mechanics. Specifically, when intercepting an approaching ball, humans trigger their movements a bit earlier when the target arrives from above than from below. This bias occurs regardless of the ball's true kinetics, and thus appears to reflect an a priori expectation that a downward moving object will accelerate. We postulate that gravito-inertial information is used to tune visuomotor responses to match the target's most likely acceleration. Here we used the peculiar conditions of parabolic flight, where gravity's effects change every 20 s, to test this hypothesis. We found a striking reversal in the timing of interceptive responses performed in weightlessness compared with trials performed on ground, indicating a role of gravity sensing in the tuning of this response. Parallels between these observations and the properties of otolith receptors suggest that vestibular signals themselves might plausibly provide the critical input. Thus, in addition to its acknowledged importance for postural control, gaze stabilization, and spatial navigation, we propose that detecting the direction of gravity's pull plays a role in coordinating quick reactions intended to intercept a fast-moving visual target.
Frontiers in Integrative Neuroscience | 2013
Francesco Lacquaniti; Gianfranco Bosco; Iole Indovina; Barbara La Scaleia; Vincenzo Maffei; Alessandro Moscatelli; Myrka Zago
The visual system is poorly sensitive to arbitrary accelerations, but accurately detects the effects of gravity on target motion. Here we review behavioral and neuroimaging data about the neural mechanisms for dealing with object motion and egomotion under gravity. The results from several experiments show that the visual estimates of target motion under gravity depend on the combination of a prior of gravity effects with on-line visual signals on target position and velocity. These estimates are affected by vestibular inputs, and are encoded in a visual-vestibular network whose core regions lie within or around the Sylvian fissure, and are represented by the posterior insula/retroinsula/temporo-parietal junction. This network responds both to target motions coherent with gravity and to vestibular caloric stimulation in human fMRI studies. Transient inactivation of the temporo-parietal junction selectively disrupts the interception of targets accelerated by gravity.