Doriana De Marco
University of Parma
Publications
Featured research published by Doriana De Marco.
Experimental Brain Research | 2014
Elisa De Stefani; Alessandro Innocenti; Doriana De Marco; Marianna Busiello; Francesca Ferri; Marcello Costantini; Maurizio Gentilucci
The present experiment aimed at verifying whether the spatial alignment effect modifies kinematic parameters of pantomimed reaching-grasping of cups located at reachable and unreachable distances. The cup’s handle could be oriented either to the right or to the left, thus inducing a grasp movement that could be either congruent or incongruent with the pantomime. The incongruence/congruence induced an increase/decrease in maximal finger aperture, which was observed when the cup was located near, but not far from, the body. This effect probably depended on the influence of the size of the cup body on pantomime control when, in the incongruent condition, the cup body was closer to the grasping hand than the handle was. Cup distance (near and far) influenced the pantomime even though the pantomime was always executed in the same peripersonal space. Specifically, arm and hand temporal parameters, as well as movement amplitudes, were affected by actual cup distance. The results indicate that, when executing a reach-to-grasp pantomime, the affordance related to the use of the object was instantiated (and in particular the spatial alignment effect became effective), but only when the object could actually be reached. Cup distance (an extrinsic object property) influenced the affordance independently of the possibility of actually reaching the target.
PLOS ONE | 2013
Elisa De Stefani; Alessandro Innocenti; Doriana De Marco; Maurizio Gentilucci
The present study aimed at determining how actions executed by two conspecifics can be coordinated with each other or, more specifically, how the observation of different phases of a reaching-grasping action is temporally related to the execution of a movement by the observer. Participants observed postures of initial finger opening, maximal finger aperture, and final finger closing of grasp after observation of an initial hand posture. Then, they opened or closed their right thumb and index finger (experiments 1, 2 and 3). Response times decreased, whereas acceleration and velocity of actual finger movements increased, when observing the two late phases of grasp. In addition, the results ruled out the possibility that this effect was due to the salience of the visual stimulus when the hand was close to the target, and confirmed an effect of the hand postures themselves in addition to the apparent hand motion produced by the succession of the initial hand posture and the grasp phase. In experiments 4 and 5, the observation of grasp phases modulated foot movements and the pronunciation of syllables as well. Finally, in experiment 6, transcranial magnetic stimulation applied to primary motor cortex 300 ms post-stimulus induced an increase in hand motor evoked potentials of the opponens pollicis muscle when observing the two late phases of grasp. These data suggest that the observation of grasp phases induced a simulation that was stronger during observation of finger closing. This produced shorter response times and greater acceleration and velocity of the successive movement. In general, our data suggest that the concatenation between two movements (one observed and the other executed) is best when the observed (and simulated) movement was about to be accomplished. The mechanism joining the observation of a conspecific’s action with our own movement may be a precursor of social functions. It may lie at the basis of interactions between conspecifics and be related to communication between individuals.
Cortex | 2017
Riccardo Dalla Volta; Pietro Avanzini; Doriana De Marco; Maurizio Gentilucci; Maddalena Fabbri-Destro
Sensorimotor and affective brain systems are known to be involved in language processing. However, to date it is still debated whether this involvement is a crucial step of semantic processing or whether, on the contrary, it depends on the specific context or strategy adopted to solve the task at hand. The present electroencephalographic (EEG) study aimed at investigating which brain circuits are engaged when processing written verbs. By aligning event-related potentials (ERPs) both to the verb onset and to the motor response indexing the accomplishment of a semantic categorization task, we were able to dissociate the stimulus-related and response-related cognitive components at play. EEG signal source reconstruction showed that while the recruitment of sensorimotor fronto-parietal circuits was time-locked to action verb onset, a left temporal-parietal circuit was time-locked to task accomplishment. Crucially, a comparison of the time courses of these bottom-up and top-down cognitive components shows that frontal motor involvement precedes the task-related temporal-parietal activity. The present findings suggest that the recruitment of fronto-parietal sensorimotor circuits is independent of the specific strategy adopted to solve a semantic task and, given its temporal precedence, may provide crucial information to the brain circuits involved in the categorization task. Finally, we discuss how the present results may contribute to the clinical literature on patients affected by disorders that specifically impair the motor system.
Frontiers in Psychology | 2016
Elisa De Stefani; Doriana De Marco; Maurizio Gentilucci
Aim: Do the emotional content and meaning of sentences affect the kinematics of successive motor sequences? Materials and Methods: Participants observed video-clips of an actor pronouncing sentences expressing positive or negative emotions and meanings (related to happiness or anger in Experiment 1 and food admiration or food disgust in Experiment 2). Then, they reached for, grasped, and placed a sugar lump on the actor’s mouth. Participants acted in response to sentences whose content could convey (1) emotion (i.e., facial expression and prosody) and meaning, (2) meaning alone, or (3) emotion alone. Within each condition, the kinematic effects of sentences expressing positive and negative emotions were compared. Results: In Experiment 1, the kinematics did not vary between positive and negative sentences when the content was expressed by both emotion and meaning, or by meaning alone. In contrast, when emotion alone was conveyed, sentences with positive valence sped up the approach toward the conspecific. In Experiment 2, the valence of emotions (positive for food admiration and negative for food disgust) affected the kinematics of both grasp and reach, independently of the modality by which it was conveyed. Discussion: The lack of an effect of meaning in Experiment 1 could be due to the weak relevance of sentence meaning to the goal of the motor sequence (feeding). Experiment 2 demonstrated that this was indeed the case: when the meaning, and the consequent emotion, were related to the sequence goal, they affected the kinematics. In contrast, emotion alone activated approach toward, or avoidance of, the actor according to its positive or negative valence. The data suggest a behavioral dissociation between the effects of emotion and meaning.
Frontiers in Psychology | 2015
Elisa De Stefani; Doriana De Marco; Maurizio Gentilucci
Aim: This study examined how observing sport scenes of cooperation or competition modulated an interactive action in expert athletes, depending on their specific sport attitude. Method: In a kinematic study, athletes were divided into two groups depending on their attitude toward teammates (cooperative or competitive). Participants observed sport scenes of cooperation and competition (basketball, soccer, water polo, volleyball, and rugby) and then reached for, picked up, and placed an object on the hand of a conspecific (a giving action). Mixed-design ANOVAs were carried out on the mean values of the reach-to-grasp parameters. Results: The data showed that the type of scene observed, as well as the athletes’ attitude, affected the reach-to-grasp actions performed to give. In particular, the cooperative athletes were faster when they observed scenes of cooperation than when they observed scenes of competition. Discussion: Participants were faster at executing a giving action after observing actions of cooperation, but only when they had a cooperative attitude. A match between attitude and intended action seems to be a necessary prerequisite for the type of scene observed to affect the performed action. It is possible that the observation of scenes of competition activated motor strategies that interfered with the strategies adopted by the cooperative participants to execute a cooperative (giving) sequence.
Behavioural Brain Research | 2014
Giovanna Cristina Campione; Elisa De Stefani; Alessandro Innocenti; Doriana De Marco; Patricia M. Gough; Giovanni Buccino; Maurizio Gentilucci
The present study aimed at determining whether or not the comprehension of symbolic gestures, and of the words corresponding to them in meaning, makes use of cortical circuits involved in the control of movement execution. Participants were presented with videos of an actress producing meaningful or meaningless gestures, or pronouncing the corresponding words or pseudo-words; they were required to judge whether the signal was meaningful or meaningless. Single-pulse TMS was applied to the forearm area of primary motor cortex 150-200 ms after the point at which the stimulus meaning could be understood. MEPs were significantly greater when processing meaningless signals as compared to a baseline condition presenting a still and silent actress. In contrast, this was not the case for meaningful signals, whose motor activation did not differ from that for the baseline stimulus. MEPs were significantly greater for meaningless than meaningful signals, and no significant difference was found between gesture and speech. On the basis of these results, we hypothesized that observing or listening to meaningless signals recruits motor areas, whereas this does not occur when the signals are meaningful. Overall, the data suggest that the processes underlying the comprehension of symbolic gestures and communicative words do not involve the primary motor area and probably rely on brain areas involved in semantics.
NeuroImage | 2015
Doriana De Marco; Elisa De Stefani; Maurizio Gentilucci
The present study aimed at determining whether the processing of communicative signals (symbolic gestures and words) is always accompanied by their integration with each other and whether, if present, this integration can be considered support for the existence of a common control mechanism. Experiment 1 aimed at determining whether and how a gesture is integrated with a word. Participants were administered a semantic priming paradigm with a lexical decision task and pronounced a target word, which was preceded by a meaningful or meaningless prime gesture. When meaningful, the gesture could be either congruent or incongruent with the word meaning. The duration of prime presentation (100, 250, or 400 ms) varied randomly. Voice spectra, lip kinematics, and time to response were recorded and analyzed. Formant 1 of the voice spectrum and mean velocity of the lip kinematics increased when the prime was meaningful and congruent with the word, as compared to a meaningless gesture. In other words, parameters of voice and movement were magnified by congruence, but this occurred only when prime duration was 250 ms. Time to response to a meaningful gesture was shorter in the congruent than in the incongruent condition. Experiment 2 aimed at determining whether the mechanism of integration of a prime word with a target word is similar to that of a prime gesture with a target word. Formant 1 of the target word increased when the prime word was meaningful and congruent, as compared to a meaningless prime. The increase was, however, present for every prime word duration. Experiment 3 aimed at determining whether comprehension of the symbolic prime gesture makes use of motor simulation. Transcranial Magnetic Stimulation was delivered to left primary motor cortex 100, 250, or 500 ms after prime gesture presentation. The motor evoked potential of the first dorsal interosseous increased when stimulation occurred 100 ms post-stimulus. Thus, the gesture was understood within 100 ms and integrated with the target word within 250 ms. Experiment 4 excluded any hand motor simulation in the comprehension of the prime word. Thus, the same type of integration with a word was present for both prime gesture and prime word. It was probably subsequent to understanding of the signal, which relied on motor simulation for gestures and on direct access to semantics for words.
Neuropsychologia | 2018
Doriana De Marco; Elisa De Stefani; Diego Bernini; Maurizio Gentilucci
Background: Strong embodiment theories have claimed that the representation of action language is grounded in the sensorimotor system, which would be crucial to semantic understanding. However, there is considerable disagreement in the literature about the neural mechanisms involved in abstract (symbolic) language comprehension. Objective: In the present study, we investigated the role of motor context in the semantic processing of abstract language. We hypothesized that motor cortex excitability during abstract word comprehension could be modulated by the previous presentation of a stimulus that associated congruent motor content (i.e., a semantically related gesture) with the word. Methods and results: We administered a semantic priming paradigm in which postures of gestures (primes) were followed by semantically congruent verbal stimuli (targets: meaningful or meaningless words). Transcranial Magnetic Stimulation was delivered to left motor cortex 100, 250, and 500 ms after the presentation of each target. Results showed that motor evoked potentials of the hand muscle significantly increased for meaningful compared to meaningless words, but only in the earlier phases of semantic processing (100 and 250 ms from target onset). Conclusion: The results suggested that the gestural motor representation was integrated with the corresponding word meaning in order to accomplish (and facilitate) the lexical task. We concluded that the motor context was crucial in revealing motor system involvement during the semantic processing of abstract language. Highlights: Motor areas could be involved in processing abstract language. Abstract words were presented with congruent symbolic gestures in a lexical task. Hand MEPs increased in the earlier phase of word comprehension. Gestures facilitated word comprehension by simulating a common motor representation. The context modulated motor activation during semantic processing.
Frontiers in Psychology | 2018
Antonella Tramacere; Pier Francesco Ferrari; Maurizio Gentilucci; Valeria Giuffrida; Doriana De Marco
It is well established that the observation of emotional facial expressions induces facial mimicry responses in observers. However, how the interaction between the emotional and motor components of facial expressions can modulate the motor behavior of the perceiver is still unknown. We developed a kinematic experiment to evaluate the effect of different oro-facial expressions on the perceiver’s face movements. Participants were asked to perform two movements, i.e., lip stretching and lip protrusion, in response to the observation of four meaningful (i.e., smile, angry-mouth, kiss, and spit) and two meaningless mouth gestures. All the stimuli were characterized by different motor patterns (mouth aperture or mouth closure). Response times and kinematic parameters of the movements (amplitude, duration, and mean velocity) were recorded and analyzed. The results evidenced dissociated effects on reaction times and movement kinematics. We found shorter reaction times when a mouth movement was preceded by the observation of a meaningful and motorically congruent oro-facial gesture, in line with the facial mimicry effect. On the contrary, during execution, the perception of a smile was associated with facilitation, in terms of shorter duration and higher velocity, of the incongruent movement, i.e., lip protrusion. The same effect emerged in response to kiss and spit, which significantly facilitated the execution of lip stretching. We called this phenomenon the facial mimicry reversal effect, i.e., the overturning of the effect normally observed during facial mimicry. In general, the findings show that both the motor features and the type of emotional oro-facial gesture (conveying positive or negative valence) affect the kinematics of subsequent mouth movements at different levels: while congruent motor features facilitate a general motor response, motor execution can be sped up by gestures that are motorically incongruent with the executed movement. Moreover, the valence effect depends on the specific movement required. The results are discussed in relation to Basic Emotion Theory and the embodied cognition framework.
Frontiers in Human Neuroscience | 2017
Giuseppe Di Cesare; Elisa De Stefani; Maurizio Gentilucci; Doriana De Marco