Elisa De Stefani
University of Parma
Publications
Featured research published by Elisa De Stefani.
PLOS ONE | 2012
Alessandro Innocenti; Elisa De Stefani; Nicolò Francesco Bernardi; Giovanna Cristina Campione; Maurizio Gentilucci
One of the most important faculties of humans is to understand the behaviour of other conspecifics. The present study aimed at determining whether, in a social context, the request gesture and gaze direction of an individual are enough to infer his/her intention to communicate, by searching for their effects on the kinematics of another individual's arm action. In four experiments participants reached, grasped and lifted a bottle filled with orange juice in the presence of an empty glass. In experiment 1, the further presence of a conspecific producing no request with hand or gaze did not modify the kinematics of the sequence. Conversely, experiments 2 and 3 showed that the presence of a conspecific producing only a request to pour, by holding the glass with his/her right hand, or only a request to communicate, by using his/her gaze, affected the lifting and grasping of the sequence, respectively. Experiment 4 showed that hand gesture and eye contact produced simultaneously affected the entire sequence. The results suggest that the presence of both a request gesture and direct gaze produced by one individual changes the control of a motor sequence executed by another individual. We propose that a social request activates a social affordance that interferes with the control of any sequence, and that the gaze of the potential receiver who held the glass with her hand modulates the effectiveness of the manual gesture. This paradigm, if applied to individuals affected by autism spectrum disorder, could give new insight into the nature of their impairment in social interaction and communication.
Experimental Brain Research | 2012
Elisa De Stefani; Alessandro Innocenti; Nicolò Francesco Bernardi; Giovanna Cristina Campione; Maurizio Gentilucci
The present study aimed at determining whether the observation of two functionally compatible artefacts, that is, artefacts that potentially concur in achieving a specific function, automatically activates a motor programme of interaction between the two objects. To this purpose, an interference paradigm was used in which an artefact (a bottle filled with orange juice), the target of a reaching-grasping and lifting sequence, was presented alone or with a non-target object (distractor) of the same or a different semantic category, functionally compatible with the target or not. In experiment 1, the bottle was presented alone or with an artefact (a sphere) or a natural (an apple) distractor. In experiment 2, the bottle was presented with either the apple or a glass (an artefact) filled with orange juice, whereas in experiment 3, either an empty or a filled glass was presented. In the control experiment 4, we compared the kinematics of reaching-grasping and pouring with those of reaching-grasping and lifting. The kinematics of reach, grasp and lift were affected by distractor presentation. However, no difference was observed between two distractors that belonged to different semantic categories. In contrast, the presence of the empty rather than the filled glass affected the kinematics of the actual grasp. This suggests that an actual functional compatibility between target (the bottle) and distractor (the empty glass) was necessary to automatically activate a programme of interaction (i.e. pouring) between the two artefacts. This programme affected the programme actually executed (i.e. lifting). The results of the present study indicate that, in addition to affordances related to intrinsic object properties, “working affordances” related to a specific use of an artefact with another object can be activated on the basis of functional compatibility.
Experimental Brain Research | 2014
Elisa De Stefani; Alessandro Innocenti; Doriana De Marco; Marianna Busiello; Francesca Ferri; Marcello Costantini; Maurizio Gentilucci
The present experiment aimed at verifying whether the spatial alignment effect modifies the kinematic parameters of pantomimed reaching-grasping of cups located at reachable and unreachable distances. The cup’s handle could be oriented either to the right or to the left, thus inducing a grasp movement that could be either congruent or incongruent with the pantomime. The incongruence/congruence induced an increase/decrease in maximal finger aperture, which was observed when the cup was located near, but not far from, the body. This effect probably depended on the influence of the size of the cup body on pantomime control when, in the incongruent condition, the cup body was closer to the grasping hand than the handle was. Cup distance (near and far) influenced the pantomime even though the pantomime was actually executed in the same peripersonal space. Specifically, actual cup distance affected arm and hand temporal parameters as well as movement amplitudes. The results indicate that, when executing a reach-to-grasp pantomime, the affordance related to the use of the object was instantiated (and, in particular, the spatial alignment effect became effective), but only when the object could actually be reached. Cup distance (an extrinsic object property) influenced the affordance independently of the possibility of actually reaching the target.
European Journal of Neuroscience | 2014
Francesca Ferri; Marianna Busiello; Giovanna Cristina Campione; Elisa De Stefani; Alessandro Innocenti; Gian Luca Romani; Marcello Costantini; Maurizio Gentilucci
Request and emblematic gestures, despite both being communicative gestures, differ in terms of social valence. Indeed, only the former are used to initiate/maintain/terminate an actual interaction. If such a difference is at stake, a relevant social cue, i.e. eye contact, should have different impacts on the neuronal underpinnings of the two types of gesture. We measured blood oxygen level‐dependent signals, using functional magnetic resonance imaging, while participants watched videos of an actor, either blindfolded or not, performing emblems, request gestures, or meaningless control movements. A left‐lateralized network was more activated by both types of communicative gestures than by meaningless movements, regardless of the accessibility of the actor's eyes. Strikingly, when eye contact was taken into account as a factor, a right‐lateralized network was more strongly activated by emblematic gestures performed by the non‐blindfolded actor than by those performed by the blindfolded actor. Such modulation possibly reflects the integration of information conveyed by the eyes with the representation of emblems. Conversely, a wider right‐lateralized network was more strongly activated by request gestures performed by the blindfolded actor than by those performed by the non‐blindfolded actor. This probably reflects the effect of the conflict between the observed action and its associated contextual information, in which relevant social cues are missing.
Brain Topography | 2015
Maddalena Fabbri-Destro; Pietro Avanzini; Elisa De Stefani; Alessandro Innocenti; Cristina Campi; Maurizio Gentilucci
What happens if you see a person pronouncing the word “go” after having gestured “stop”? Unlike iconic gestures, which must be accompanied by verbal language in order to be unambiguously understood, symbolic gestures are so conventionalized that they can be effortlessly understood in the absence of speech. Previous studies proposed that gesture and speech belong to a unique communication system. From an electrophysiological perspective, the N400 modulation was considered the main variable indexing the interplay between two stimuli. However, while many studies tested this effect between iconic gestures and speech, little is known about the capability of an emblem to modulate the neural response to subsequently presented words. Using high-density EEG, the present study aimed at evaluating the presence of an N400 effect and its spatiotemporal dynamics, in terms of cortical activations, when emblems primed the observation of words. Participants were presented with symbolic gestures followed by a semantically congruent or incongruent verb. An N400 modulation was detected, showing larger negativity when gesture and word were incongruent. Source localization during the N400 time window evidenced the activation of different portions of the temporal cortex according to gesture-word congruence. Our data provide further evidence of how the observation of an emblem influences verbal language perception, and of how this interplay is mainly instantiated in different portions of the temporal cortex.
Frontiers in Human Neuroscience | 2013
Elisa De Stefani; Alessandro Innocenti; Claudio Secchi; Veronica Papa; Maurizio Gentilucci
The present kinematic study aimed at determining whether the observation of arm/hand gestures performed by conspecifics affected an action apparently unrelated to the gesture (i.e., reaching-grasping). In three experiments we examined the influence of different gestures on action kinematics. We also analyzed the effects on the same action of words corresponding in meaning to the gestures. In Experiment 1, the investigated variables were the type of gesture, its valence, and the actor's gaze. Participants executed the action of reaching-grasping after discriminating whether the gestures produced by a conspecific were meaningful or not. The meaningful gestures were request or symbolic gestures, and their valence was positive or negative. They were presented by the conspecific either blindfolded or not. In the control Experiment 2 we searched for effects of gaze alone, and in Experiment 3 for the effects of the same characteristics of words corresponding in meaning to the gestures and visually presented by the conspecific. Type of gesture, valence, and gaze influenced the actual action kinematics; these effects were similar to, but not the same as, those induced by words. We proposed that the signal activated a response that made the actual action faster when the gesture's valence was negative, whereas for request signals and available gaze the response interfered with the actual action more than symbolic signals and unavailable gaze did. Finally, we proposed the existence of a common circuit involved in the comprehension of gestures and words and in the activation of consequent responses to them.
PLOS ONE | 2013
Elisa De Stefani; Alessandro Innocenti; Doriana De Marco; Maurizio Gentilucci
The present study aimed at determining how actions executed by two conspecifics can be coordinated with each other, or more specifically, how the observation of different phases of a reaching-grasping action is temporally related to the execution of a movement by the observer. Participants observed postures of initial finger opening, maximal finger aperture, and final finger closing of the grasp after observing an initial hand posture. Then, they opened or closed their right thumb and index finger (experiments 1, 2 and 3). Response times decreased, whereas acceleration and velocity of the actual finger movements increased, when observing the two late phases of the grasp. In addition, the results ruled out the possibility that this effect was due to the salience of the visual stimulus when the hand was close to the target, and confirmed an effect of the hand postures themselves in addition to the apparent hand motion due to the succession of the initial hand posture and the grasp phase. In experiments 4 and 5, the observation of grasp phases also modulated foot movements and the pronunciation of syllables. Finally, in experiment 6, transcranial magnetic stimulation applied to the primary motor cortex 300 ms post-stimulus induced an increase in hand motor evoked potentials of the opponens pollicis muscle when observing the two late phases of the grasp. These data suggest that the observation of grasp phases induced a simulation that was stronger during observation of finger closing. This produced shorter response times and greater acceleration and velocity of the successive movement. In general, our data suggest the best concatenation between two movements (one observed and the other executed) when the observed (and simulated) movement was about to be accomplished. The mechanism joining the observation of a conspecific’s action with one's own movement may be a precursor of social functions. It may be at the basis of interactions between conspecifics, and related to communication between individuals.
Frontiers in Psychology | 2016
Elisa De Stefani; Doriana De Marco; Maurizio Gentilucci
Aim: Do the emotional content and meaning of sentences affect the kinematics of subsequent motor sequences? Material and Methods: Participants observed video clips of an actor pronouncing sentences expressing positive or negative emotions and meanings (related to happiness or anger in Experiment 1, and to food admiration or food disgust in Experiment 2). Then, they reached for, grasped, and placed a sugar lump on the actor’s mouth. Participants acted in response to sentences whose content could convey (1) emotion (i.e., facial expression and prosody) and meaning, (2) meaning alone, or (3) emotion alone. Within each condition, the kinematic effects of sentences expressing positive and negative emotions were compared. Results: In Experiment 1, the kinematics did not vary between positive and negative sentences either when the content was expressed by both emotion and meaning, or by meaning alone. In contrast, in the case of emotion alone, sentences with positive valence made the approach to the conspecific faster. In Experiment 2, the valence of emotions (positive for food admiration and negative for food disgust) affected the kinematics of both grasp and reach, independently of the modality. Discussion: The lack of an effect of meaning in Experiment 1 could be due to the weak relevance of the sentence meaning to the goal of the motor sequence (feeding). Experiment 2 demonstrated that this was indeed the case, because when the meaning and the consequent emotion were related to the sequence goal, they affected the kinematics. In contrast, emotion alone activated approach toward, or avoidance of, the actor according to its positive or negative valence. The data suggest a behavioral dissociation between the effects of emotion and meaning.
Frontiers in Psychology | 2015
Elisa De Stefani; Doriana De Marco; Maurizio Gentilucci
Aim: This study delineated how observing sports scenes of cooperation or competition modulated an action of interaction in expert athletes, depending on their specific sport attitude. Method: In a kinematic study, athletes were divided into two groups depending on their attitude toward teammates (cooperative or competitive). Participants observed sport scenes of cooperation and competition (basketball, soccer, water polo, volleyball, and rugby) and then reached for, picked up, and placed an object on the hand of a conspecific (a giving action). Mixed-design ANOVAs were carried out on the mean values of reaching-grasping parameters. Results: The data showed that the type of scene observed, as well as the athletes’ attitude, affected reach-to-grasp actions to give. In particular, the cooperative athletes were faster when they observed scenes of cooperation than when they observed scenes of competition. Discussion: Participants were faster when executing a giving action after observing actions of cooperation. This occurred only when they had a cooperative attitude. A match between attitude and intended action seems to be a necessary prerequisite for the observed type of scene to affect the performed action. It is possible that the observation of scenes of competition activated motor strategies that interfered with the strategies adopted by the cooperative participants to execute a cooperative (giving) sequence.
Behavioural Brain Research | 2014
Giovanna Cristina Campione; Elisa De Stefani; Alessandro Innocenti; Doriana De Marco; Patricia M. Gough; Giovanni Buccino; Maurizio Gentilucci
The present study aimed at determining whether or not the comprehension of symbolic gestures, and of corresponding-in-meaning words, makes use of cortical circuits involved in movement execution control. Participants were presented with videos of an actress producing meaningful or meaningless gestures, or pronouncing corresponding-in-meaning words or pseudo-words; they were required to judge whether the signal was meaningful or meaningless. Single-pulse TMS was applied to the forearm area of primary motor cortex 150-200 ms after the point at which the stimulus meaning could be understood. Motor evoked potentials (MEPs) were significantly greater when processing meaningless signals than in a baseline condition presenting a still and silent actress. In contrast, this was not the case for meaningful signals, whose motor activation did not differ from that for the baseline stimulus. MEPs were significantly greater for meaningless than for meaningful signals, and no significant difference was found between gesture and speech. On the basis of these results, we hypothesized that observing or listening to meaningless signals recruits motor areas, whereas this does not occur when the signals are meaningful. Overall, the data suggest that the processes related to the comprehension of symbolic gestures and communicative words do not involve the primary motor area and probably rely on brain areas involved in semantics.