Publications


Featured research published by Massimo Gangitano.


Experimental Brain Research | 2000

Language and motor control.

Maurizio Gentilucci; Francesca Benuzzi; Luca Bertolani; Elena Daprati; Massimo Gangitano

We investigated the possible influence of automatic word reading on processes of visuo-motor transformation. Subjects reached and grasped an object on which the following Italian words were printed: VICINO (near) or LONTANO (far) on an object either near or far from the agent (experiments 1, 2); PICCOLO (small) or GRANDE (large) on either a small or a large object (experiment 4); and ALTO (high) or BASSO (low) on either a high or a low object (experiment 5). The kinematics of the initial phase of reaching-grasping was affected by the meaning of the printed words. Namely, subjects automatically associated the meaning of the word with the corresponding property of the object and activated a reach and/or a grasp motor program influenced by the word. No effect on initial reach kinematics was observed for words related to object properties not directly involved in reach control (experiment 3). Moreover, in all the experiments, the presented words had little influence on perceptual judgement of object properties. In experiments 5–7, the effects of the Italian adjectives ALTO (high) and BASSO (low) on reaching-grasping control were compared with those of the Italian adverbs SOPRA (up) and SOTTO (down). Adjectives influenced visual analysis of target-object properties, whereas adverbs more directly influenced the control of the action. We suggest that these effects resemble the structure of a sentence, in which adjectives commonly refer to nouns and adverbs to verbs. In other words, word class and, in a broad sense, grammar influenced motor control. The results of the present study show that cognitive functions such as language can affect visuo-motor transformation. They are discussed according to the notion that a close relation exists between language and motor control, and that the frontal cortex can be involved in interactions between automatic word reading and visuo-motor transformation.


European Journal of Neuroscience | 1998

Influence of automatic word reading on motor control

Maurizio Gentilucci; Massimo Gangitano

We investigated the possible influence of automatic word reading on processes of visuo‐motor transformation. Six subjects were required to reach and grasp a rod on whose visible face the word ‘long’ or ‘short’ was printed. Word reading was not explicitly required. In order to induce subjects to visually analyse the object trial by trial, object position and size were randomly varied during the experimental session. The kinematics of the reaching component was affected by word presentation. Peak acceleration, peak velocity, and peak deceleration of the arm were higher for the word ‘long’ than for the word ‘short’. That is, during the initial movement phase subjects automatically associated the meaning of the word with the distance to be covered and activated a motor program for a farther and/or nearer object position. During the final movement phase, subjects modified the braking forces (deceleration) in order to correct the initial error. No effect of the words on the grasp component was observed. These results suggest a possible influence of cognitive functions on motor control and seem to contrast with the notion that the analyses executed in the ventral and dorsal cortical visual streams are different and independent.


Experimental Brain Research | 1997

Planning an action

Maurizio Gentilucci; Anna Negrotti; Massimo Gangitano

The motor control of a sequence of two motor acts forming an action was studied in the present experiment. The two analysed motor acts were reaching-grasping an object (first target) and placing it on a second target of the same shape and size (experiment 1). The aim was to determine whether extrinsic properties of the second target (i.e. target distance) could selectively influence the kinematics of reaching and grasping. Distance, position and size of both targets were randomly varied across the experimental session. The kinematics of the initial phase of the first motor act, that is, velocity of reaching and hand shaping of grasping, were influenced by distance of the second target. No kinematic difference was found between movements executed with and without visual control of both hand and targets. These results could be due to computation of the general program of an action that takes into account extrinsic properties of the final target. Conversely, they could depend on a visual interference effect produced by the near second target on the control of the first motor act. In order to dissociate the effects due to second-target distance from those due to visual interference, two control experiments were carried out. In the first control experiment (experiment 2) subjects executed movements directed towards spatial locations at different distances from the first target, as in experiment 1. However, the near second target was not presented and subjects were required to place the object on an arbitrary near position. Distance of the second (either real or arbitrary) target affected the reaching component of the first motor act, as in experiment 1, but not the grasp component. In the second control experiment (experiment 3), the pure visual interference effect was tested. Subjects were required to reach and grasp the object and to lift it in either the presence or the absence of a second near stimulus. No effect on the initial phase of the first motor act was observed. The results of this study suggest a dissociation in the control of reaching and grasping, concerning not only visual analysis of extrinsic properties of the immediate target but also visual analysis of the final target of the action. In other words, the notion of modularity for motor control can be extended to the construction of an entire action.


Experimental Brain Research | 1997

Tactile input of the hand and the control of reaching to grasp movements

Maurizio Gentilucci; Ivan Toni; Elena Daprati; Massimo Gangitano

The role of tactile information of the hand in the control of reaching to grasp movements was investigated. The kinematics of both reaching (or transport) and grasp components were studied in healthy subjects in two experimental conditions. In one condition (control condition) subjects were required to reach and grasp an object that could have two sizes and that could be located at two distances from the viewer. In the other condition (anaesthesia condition) the same movements were executed, but anaesthesia was applied to the subjects’ fingertips. In both conditions vision of the hand was prevented during movement. Anaesthesia affected mainly the kinematics of the first phase of grasping, that is, the finger-opening phase. This phase was lengthened and maximal finger aperture increased. In contrast, the duration of the successive phase (finger-closure) was poorly modified. The reaching component was also impaired by anaesthesia. Although the total extent of hand path and the spatial relations between the finger aperture and closure phases did not change between the two conditions, hand path variability increased. This occurred during the transport deceleration phase and after the increase in variability of finger path. In addition, the whole movement was slowed down. The results of the present experiment suggest that tactile signals at the beginning and at the end of movement can be used to compute grasp time and to optimise grasp temporal parameters. Alternatively, signals from tactile receptors can be involved in encoding the position sense of the fingers. When this input is lacking, the control of grasp and in particular that of the finger-opening phase can be impaired. Finally, the effect of the grasp impairment on the reaching component supports the notion that the coordination between reaching and grasping involves the whole temporal course of the two components.


Cognitive Brain Research | 1998

Right-handers and left-handers have different representations of their own hand

Maurizio Gentilucci; Elena Daprati; Massimo Gangitano

The visual control of our own hand when dealing with an object and the observation of interactions between other people's hands and objects can be involved in the construction of internal representations of our own hand, as well as in hand recognition processes. Therefore, a different effect on handedness recognition is expected when subjects are presented with hands holding objects with either a congruent or an incongruent type of grip. Such an experiment was carried out on right-handed and left-handed subjects. We expected that the different degree of lateralisation in motor activities observed in the two populations [J. Herron, Neuropsychology of left-handedness, Academic Press, New York, 1980] could account for the construction of different internal hand representations. As previously found [L.M. Parsons, Imagined spatial transformations of one's hands and feet, Cogn. Psychol., 19 (1987) 178-241], in order to identify handedness, subjects mentally rotated their own hand until it matched the presented one. This process was confirmatory, being preceded by an implicit visual analysis of the target hand. Presentation of hands holding objects with congruent or incongruent types of grip influenced handedness recognition at different stages in right-handed and left-handed subjects. That is, the mental rotation stage was affected in right-handed subjects, whereas the initial phase of implicit hand analysis was affected in left-handed subjects. We suggest that in handedness recognition, left-handers relied more on a pictorial hand representation, whereas right-handers relied more on a pragmatic hand representation, probably derived from experience in the control of their own movements. The use of different hand representations may be due to differential activation of temporal and premotor areas.


Neuroscience Letters | 1997

Eye position tunes the contribution of allocentric and egocentric information to target localization in human goal-directed arm movements

Maurizio Gentilucci; Elena Daprati; Massimo Gangitano; Ivan Toni

Subjects were required to point to the distant vertex of the closed and the open configurations of the Müller-Lyer illusion using either their right hand (experiment 1) or their left hand (experiment 2). In both experiments the Müller-Lyer figures were presented horizontally either in the left or in the right hemispace, and movements were executed using either foveal or peripheral vision of the target. In accordance with the illusion effect, subjects undershot and overshot the vertex location of the closed and the open configuration, respectively. The illusion effect decreased when the target was fixated and when the stimulus was positioned in the right hemispace. These results confirm the hypothesis that both egocentric and allocentric information are combined in order to encode target position in space. When movements are directed to foveal targets, the decreasing effect of allocentric cues, as shown by the decrease in the illusion effect, could be due to enhanced efficiency of the egocentric system. That is, information on eye position when the target is fixated can be used to precisely establish its spatial relations with the body. In addition, a more accurate analysis of allocentric information is hypothesized when the target is positioned in the left hemispace. In other words, our data confirm the notion that the right cerebral hemisphere is involved in space representation.


Experimental Brain Research | 1998

Visual distractors differentially interfere with the reaching and grasping components of prehension movements

Massimo Gangitano; Elena Daprati; Maurizio Gentilucci

In the present study we addressed the issue of how an object is visually isolated from surrounding cues when a reaching-grasping (prehension) movement towards it is planned. Subjects were required to reach and grasp an object presented either alone or with a distractor. In five experiments, different degrees of elaboration of the distractor were induced by varying: (1) the position of the distractor (central or peripheral); (2) the time when the distractor was suppressed (immediately or delayed, with respect to stimulus presentation); and (3) the type of distractor analysis (implicit or explicit). In addition, we tested whether the possible effects of the distractor on reaching-grasping were due to the use of an allocentric reference centered on it. This was done by comparing the effects of the distractor with those of a stimulus that was the target of a placing movement subsequent to the reaching-grasping. The results of the five experiments can be summarized as follows. The necessary condition for an interference effect on both the reaching and the grasping components was the central presentation of the distractor. When the information on the distractor could be immediately suppressed, an interference effect was observed only on the grasp component. In the case of delayed suppression, an effect was found on the reaching component. Finally, when an overt analysis of the distractor was required, the interference effect disappeared. Two main conclusions have been drawn from the results of the present study. First, comparison between properties of the target and surrounding cues is performed by two independent processes for reaching and grasping an object. The process for the grasp relies more on allocentric cues than that for the reach. Second, when surrounding stimuli are automatically analyzed during visual search of the target, the process of visuo-motor transformation can incorporate their features into the target. In contrast, overt analysis of surrounding stimuli is performed separately from that of the target. Finally, the data of the present study are discussed in support of the premotor theory of attention.


Neuropsychologia | 2000

Impaired control of an action after supplementary motor area lesion: a case study

Maurizio Gentilucci; Luca Bertolani; Francesca Benuzzi; Anna Negrotti; Giovanni Pavesi; Massimo Gangitano

The kinematics of the action formed by reaching-grasping an object and placing it on a second target was studied in a patient who suffered from an acute vascular left brain lesion, which affected the Supplementary Motor Area proper (SMA-proper) (Matelli M, Luppino G. Thalamic input to mesial and superior area 6 in the macaque monkey. Journal of Comparative Neurology 1996;372:59-87; Matelli M, Luppino G, Fogassi L, Rizzolatti G. Thalamic input to inferior area 6 and area 4 in the macaque monkey. Journal of Comparative Neurology 1989;280:468-488), and in five healthy control subjects. The reach kinematics of the controls was affected by the positions of both the reaching-grasping and the placing targets (Gentilucci M, Negrotti A, Gangitano M. Planning an action. Experimental Brain Research 1997;115:116-128). In contrast, the reach kinematics of the patient was affected only by the position of the reaching-grasping target. By comparing these results with those previously found in Parkinson's disease patients executing the same action (Gentilucci M, Negrotti A. Planning and executing an action in Parkinson's disease patients. Movement Disorders 1999;1:69-79; Gentilucci M, Negrotti A. The control of an action in Parkinson's disease. Experimental Brain Research 1999;129:269-277), we suggest that the anatomical motor circuit formed by SMA-proper (see above), the Basal Ganglia (BG) and the Thalamus (Alexander GE, Crutcher MD. Functional architecture of basal ganglia circuits: neural substrates of parallel processing. Trends in Neurosciences 1990;13:266-271; Hoover JE, Strick PL. Multiple output channels in the basal ganglia. Science 1993;259:819-821) may be involved in the control of actions: SMA-proper assembles the sequence of the action, whereas BG updates its parameters and stores them.


Consciousness and Cognition | 1998

Implicit visual analysis in handedness recognition

Maurizio Gentilucci; Elena Daprati; Massimo Gangitano

In the present study, we addressed the problem of whether hand representations, derived from the control of hand gesture, are used in handedness recognition. Pictures of hands and fingers, assuming either common or uncommon postures, were presented to right-handed subjects, who were required to judge their handedness. In agreement with previous results (Parsons, 1987, 1994; Gentilucci, Daprati, & Gangitano, 1998), subjects recognized handedness through mental movement of their own hand in order to match the posture of the presented hand. This was proved by a control experiment of physical matching. The new finding was that presentation of common finger postures affected responses differently from presentation of less common finger postures. These effects could not be attributed to mental matching movements, nor related to richness in hand-finger cues useful for handedness recognition. The results of the present study are discussed in the context of the notion that implicit visual analysis of the presented hands is performed before mental movement of one's own hand takes place (Parsons, 1987; Gentilucci et al., 1998). In this process, a hand representation acquired by experience in the control and observation of one's own and other people's hand gestures is used. We propose that such an immediate recognition mechanism belongs to the class of mental processes which are grouped under the name of intuition, that is, the processes by which situations or people's intentions are immediately understood, without conscious reasoning.


Cognitive Brain Research | 2000

Recognising a hand by grasp

Maurizio Gentilucci; Francesca Benuzzi; Luca Bertolani; Elena Daprati; Massimo Gangitano

The present study aimed to demonstrate that motor representations are used to recognise biological stimuli. In three experiments subjects were required to judge the laterality of hands and forearms presented in pictures. The postures of the hands were those assumed when holding a small, medium or large sphere. In experiment 1, the sphere held in the hand was presented, whereas in experiment 2 it was absent. In experiment 3, the same images of hands holding a sphere as in experiment 1 were presented, but without the forearm. In all experiments one finger of each hand could be absent. In experiment 1 recognition time was longer for those hand postures for which the corresponding grasping motor acts required more accuracy. This was confirmed by a control experiment (experiment 4), in which subjects actually grasped the spheres. Absence of fingers did not influence right-left hand recognition. However, the absence of the target object in experiment 2, and of the forearm in experiment 3, reduced the effects of the type of holding on hand laterality recognition. The results of the present study indicate that grasp representations are used to recognise hand laterality. In particular, the visual description of how hand and object interact in space (the opposition space [M.A. Arbib, Programs, schemas and neural networks for control of hand movements: beyond the RS framework, in: M. Jeannerod (Ed.), Attention and Performance XIII: Motor Representation and Control, Lawrence Erlbaum, Hillsdale, NJ, 1990, pp. 111-138; M.A. Arbib, T. Iberall, D. Lyons, Coordinated control programs for movements of the hand, in: A.W. Goodwin, I. Darian-Smith (Eds.), Hand Function and the Neocortex, Springer, Berlin, 1985, pp. 135-170]) and the anchoring of the hand to the agent are the features of the grasp representations used in hand-recognition processes. The data are discussed according to the more general notion that motor representations are automatically extracted in the process of intuiting situations or people's intentions. These motor representations, which are compared with those of other people, contain concrete information on the actions (the motor program) by which a situation is created and on the aim of the agents executing those actions.

Collaboration


Dive into Massimo Gangitano's collaboration.

Top Co-Authors


Elena Daprati

Centre national de la recherche scientifique


Francesca Benuzzi

University of Modena and Reggio Emilia


Ivan Toni

Radboud University Nijmegen
