Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Caterina Ansuini is active.

Publication


Featured research published by Caterina Ansuini.


Experimental Brain Research | 2008

An object for an action, the same object for other actions: effects on hand shaping.

Caterina Ansuini; Livia Giosa; Luca Turella; Gianmarco Altoè; Umberto Castiello

Objects can be grasped in several ways due to their physical properties, the context surrounding the object, and the goal of the grasping agent. The aim of the present study was to investigate whether the prior-to-contact grasping kinematics of the same object vary as a result of different goals of the person grasping it. Subjects were requested to reach toward and grasp a bottle filled with water, and then complete one of the following tasks: (1) Grasp it without performing any subsequent action; (2) Lift and throw it; (3) Pour the water into a container; (4) Place it accurately on a target area; (5) Pass it to another person. We measured the angular excursions at both metacarpophalangeal (MCP) and proximal interphalangeal (PIP) joints of all digits, and the abduction angles of adjacent digit pairs, by means of resistive sensors embedded in a glove. The results showed that the presence and the nature of the task to be performed following grasping affect the positioning of the fingers during the reaching phase. We contend that a one-to-one association between a sensory stimulus and a motor response does not capture all the aspects involved in grasping. The theoretical approach within which we frame our discussion considers internal models of anticipatory control which may provide a suitable explanation of our results.


The Journal of Neuroscience | 2007

Choice of Contact Points during Multidigit Grasping: Effect of Predictability of Object Center of Mass Location

Jamie R. Lukos; Caterina Ansuini; Marco Santello

It has been shown that when subjects can predict object properties [e.g., weight or center of mass (CM)], fingertip forces are appropriately scaled before the object is lifted, i.e., before somatosensory feedback can be processed. However, it is not known whether subjects, in addition to these anticipatory force mechanisms, exploit the ability to choose where digits can be placed to facilitate object manipulation. We addressed this question by asking subjects to reach and grasp an object whose CM was changed to the left, center, or right of the object in either a predictable or unpredictable manner. The only task requirement was to minimize object roll during lift. We hypothesized that subjects would modulate contact points but only when object CM location could be predicted. As expected, object roll was significantly smaller in the predictable condition. This experimental condition was also associated with statistically distinct spatial distributions of contact points as a function of object CM location but primarily when large torques had to be counteracted, i.e., for right and left CM locations. In contrast, when subjects could not anticipate CM location, a “default” distribution of contact points was used, this being statistically indistinguishable from that adopted for the center CM location in the predictable condition. We conclude that choice of contact points is integrated with anticipatory force control mechanisms to facilitate object manipulation. These results demonstrate that planning of digit placement is an important component of grasp control.


The Journal of Neuroscience | 2008

Anticipatory Control of Grasping: Independence of Sensorimotor Memories for Kinematics and Kinetics

Jamie R. Lukos; Caterina Ansuini; Marco Santello

We have recently provided evidence for anticipatory grasp control mechanisms in the kinematic domain by showing that subjects modulate digit placement on an object based on its center of mass (CM) when it can be anticipated (Lukos et al., 2007). This behavior relied on sensorimotor memories about digit contact points and forces required for optimal manipulation. We found that accurate sensorimotor memories depended on the acquisition of implicit knowledge about object properties associated with repeated manipulations of the same object. Whereas implicit knowledge of object properties is essential for anticipatory grasp control, the extent to which subjects can use explicit knowledge to accurately scale digit forces in an anticipatory manner is controversial. Additionally, it is not known whether subjects are able to use explicit knowledge of object properties for anticipatory control of contact points. We addressed this question by asking subjects to grasp and lift an object while providing explicit knowledge of object CM location as visual or verbal cues. Contact point modulation and object roll, a measure of anticipatory force control, were assessed using blocked and random CM presentations. We found that explicit knowledge of object CM enabled subjects to modulate contact points. In contrast, subjects could not minimize object roll in the random condition to the same extent as in the blocked condition when provided with a verbal or visual cue. These findings point to a dissociation in the effect of explicit knowledge of object properties on grasp kinematics versus kinetics, thus suggesting independent anticipatory processes for grasping.


The Neuroscientist | 2015

Intentions in the Brain: The Unveiling of Mister Hyde

Caterina Ansuini; Andrea Cavallo; Cesare Bertone; Cristina Becchio

Is it possible to understand the intentions of others by merely observing their movements? Current debate has been mainly focused on the role that mirror neurons and motor simulation may play in this process, with surprisingly little attention being devoted to how intentions are actually translated into movements. Here, we delineate an alternative approach to the problem of intention-from-movement understanding, which takes “action execution” rather than “action observation” as a starting point. We first consider whether and to what extent, during action execution, intentions shape movement kinematics. We then examine whether observers are sensitive to intention information conveyed by visual kinematics and can use this information to discriminate between different intentions. Finally, we consider the neural mechanisms that may contribute to intention-from-movement understanding. We argue that by reframing the relationship between intention and movement, this evidence opens new perspectives into the neurobiology of how we know other minds and predict others’ behavior.


Frontiers in Psychology | 2014

The visible face of intention: why kinematics matters

Caterina Ansuini; Andrea Cavallo; Cesare Bertone; Cristina Becchio

A key component of social understanding is the ability to read intentions from movements. But how do we discern intentions in others’ actions? What kind of intention information is actually available in the features of others’ movements? Based on the assumption that intentions are hidden away in the other person’s mind, standard theories of social cognition have mainly focused on the contribution of higher level processes. Here, we delineate an alternative approach to the problem of intention-from-movement understanding. We argue that intentions become “visible” in the surface flow of agents’ motions. Consequently, the ability to understand others’ intentions cannot be divorced from the capability to detect essential kinematics. This hypothesis has far reaching implications for how we know other minds and predict others’ behavior.


Experimental Brain Research | 2011

The effects of task and content on digit placement on a bottle

Céline Crajé; Jamie R. Lukos; Caterina Ansuini; Andrew M. Gordon; Marco Santello

In addition to hand shaping, previous studies have shown that subjects adapt placement of individual digits to object properties such as its weight and center of mass. However, the extent to which digit placement varies based on task context is unknown. In the present study, we investigated where subjects place their digits on a bottle when the upcoming task (lift versus pour) and object content (i.e., amount of liquid: empty, half, and full) were manipulated. Our results showed that subjects anticipated both the upcoming task and content by varying digit placement when grasping the bottle prior to the onset of manipulation. Specifically, subjects increased the vertical distance between the thumb and index finger for pouring but not for lifting. This larger moment arm might have been established to decrease the amount of force required to tilt the bottle. Content also affected digit placement: the digits were placed higher and were wrapped more around the bottle with increasing content. This strategy may maximize grip surface contact, and hence grasp stability. These findings extend previous research showing that grasp planning not only takes place at a macroscopic level (whole-hand position relative to an object), but also at the level of individual digit placement. This finer level of control appears to be sensitive to the expected mechanical properties of the object and how these may affect grasp stability throughout the upcoming manipulation.
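The moment-arm argument above rests on the basic torque relation F = τ/d: for a fixed tilting torque, the force required falls as the vertical thumb-index distance (the moment arm) grows. A back-of-envelope sketch with invented numbers, just to make the arithmetic concrete:

```python
# Illustration of the moment-arm argument (all numbers are hypothetical,
# not measurements from the study).
required_torque = 0.30  # torque needed to tilt the bottle, in N*m (invented)

# Force needed (F = tau / d) for a few thumb-index vertical distances.
forces = {d_cm: required_torque / (d_cm / 100.0) for d_cm in (4.0, 6.0, 8.0)}
for d_cm, force in forces.items():
    print(f"moment arm {d_cm:.0f} cm -> required force {force:.2f} N")
```

Doubling the thumb-index distance halves the force needed to produce the same tilting torque, which is the efficiency gain the abstract attributes to the larger moment arm in the pouring condition.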


PLOS ONE | 2010

When Ears Drive Hands: The Influence of Contact Sound on Reaching to Grasp

Umberto Castiello; Bruno L. Giordano; Chiara Begliomini; Caterina Ansuini; Massimo Grassi

Background
Most research on the roles of auditory information and its interaction with vision has focused on perceptual performance. Little is known about the effects of sound cues on visually-guided hand movements.

Methodology/Principal Findings
We recorded the sound produced by the fingers upon contact as participants grasped stimulus objects which were covered with different materials. Then, in a further session, the pre-recorded contact sounds were delivered to participants via headphones before or following the initiation of reach-to-grasp movements towards the stimulus objects. Reach-to-grasp movement kinematics were measured under the following conditions: (i) congruent, in which the presented contact sound and the contact sound elicited by the to-be-grasped stimulus corresponded; (ii) incongruent, in which the presented contact sound was different from that generated by the stimulus upon contact; (iii) control, in which a synthetic sound, not associated with a real event, was presented. Facilitation effects were found for congruent trials; interference effects were found for incongruent trials. In a second experiment, the upper and the lower parts of the stimulus were covered with different materials. The presented sound was always congruent with the material covering either the upper or the lower half of the stimulus. Participants consistently placed their fingers on the half of the stimulus that corresponded to the presented contact sound.

Conclusions/Significance
Altogether these findings offer a substantial contribution to the current debate about the type of object representations elicited by auditory stimuli and on the multisensory nature of the sensorimotor transformations underlying action.


PLOS ONE | 2015

Predicting object size from hand kinematics: a temporal perspective.

Caterina Ansuini; Andrea Cavallo; Atesh Koul; Marco Jacono; Yuan Yang; Cristina Becchio

Research on reach-to-grasp movements generally concentrates on kinematic values that are expressions of maxima, in particular the maximum aperture of the hand and the peak of wrist velocity. These parameters provide a snapshot description of movement kinematics at a specific time point during the reach, i.e., the maximum within a set of values, but do not allow one to investigate how hand kinematics gradually conform to target properties. The present study was designed to extend the characterization of object size effects to the temporal domain. Thus, we computed the wrist velocity and the grip aperture throughout reach-to-grasp movements aimed at large versus small objects. To provide a deeper understanding of how joint movements varied over time, we also considered the time course of finger motion relative to hand motion. Results revealed that movement parameters evolved in parallel but at different rates in relation to object size. Furthermore, a classification analysis performed using a Support Vector Machine (SVM) approach showed that kinematic features taken as a group predicted the correct target size well before contact with the object. Interestingly, some kinematic features exhibited a higher ability to discriminate the target size than others did. These findings reinforce our knowledge about the relationship between kinematics and object properties and shed new light on the quantity and quality of information available in the kinematics of a reach-to-grasp movement over time. This might have important implications for our understanding of the action-perception coupling mechanism.
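A minimal sketch of the kind of SVM decoding analysis the abstract describes, using scikit-learn on synthetic data. The feature set (grip aperture and wrist velocity sampled at a few normalized time points), the effect sizes, and all numbers are invented for illustration; they are not the study's actual features or results.

```python
# Illustrative sketch (not the authors' code): decoding target size
# (small vs. large) from reach-to-grasp kinematic features with an SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials = 200

# Hypothetical features: grip aperture and wrist velocity sampled at
# 20%, 40%, 60%, 80% of movement time (8 features per trial).
size = rng.integers(0, 2, n_trials)  # 0 = small target, 1 = large target
aperture = (40 + 25 * size[:, None] * np.linspace(0.2, 0.8, 4)
            + rng.normal(0, 5, (n_trials, 4)))   # mm; diverges over time
velocity = 800 + rng.normal(0, 50, (n_trials, 4))  # mm/s; uninformative here
X = np.hstack([aperture, velocity])

# Cross-validated decoding accuracy of the full kinematic feature set.
scores = cross_val_score(SVC(kernel="linear"), X, size, cv=5)
print(f"mean decoding accuracy: {scores.mean():.2f}")
```

Because the synthetic aperture difference grows over the reach, the early-time features discriminate worse than the late ones, mirroring the abstract's point that information accrues over time and that some features discriminate better than others.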


PLOS ONE | 2008

The Grasping Side of Odours

Federico Tubaldi; Caterina Ansuini; Roberto Tirindelli; Umberto Castiello

Background
Research on multisensory integration during natural tasks such as reach-to-grasp is still in its infancy. Crossmodal links between vision, proprioception and audition have been identified, but how olfaction contributes to the planning and control of reach-to-grasp movements has not been decisively shown. We used kinematics to explicitly test the influence of olfactory stimuli on reach-to-grasp movements.

Methodology/Principal Findings
Subjects were requested to reach towards and grasp a small or a large visual target (i.e., a precision grip, involving the opposition of index finger and thumb, for the small target, and a power grip, involving the flexion of all digits around the object, for the large target) in the absence or in the presence of an odour evoking either a small or a large object that, if grasped, would require a precision grip or a whole-hand grasp, respectively. When the type of grasp evoked by the odour did not coincide with that for the visual target, interference effects were evident on the kinematics of hand shaping and the level of synergies amongst fingers decreased. When the visual target and the object evoked by the odour required the same type of grasp, facilitation emerged and the intrinsic relations amongst individual fingers were maintained.

Conclusions/Significance
This study demonstrates that olfactory information contains highly detailed information able to elicit the planning for a reach-to-grasp movement suited to interact with the evoked object. The findings offer a substantial contribution to the current debate about the multisensory nature of the sensorimotor transformations underlying grasping.


Scientific Reports | 2016

Decoding intentions from movement kinematics

Andrea Cavallo; Atesh Koul; Caterina Ansuini; Francesca Capozzi; Cristina Becchio

How do we understand the intentions of other people? There has been a longstanding controversy over whether it is possible to understand others’ intentions by simply observing their movements. Here, we show that indeed movement kinematics can form the basis for intention detection. By combining kinematics and psychophysical methods with classification and regression tree (CART) modeling, we found that observers utilized a subset of discriminant kinematic features over the total kinematic pattern in order to detect intention from observation of simple motor acts. Intention discriminability covaried with movement kinematics on a trial-by-trial basis, and was directly related to the expression of discriminative features in the observed movements. These findings demonstrate a definable and measurable relationship between the specific features of observed movements and the ability to discriminate intention, providing quantitative evidence of the significance of movement kinematics for anticipating others’ intentional actions.
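A minimal sketch of a CART-style analysis in the spirit of the one described above, using scikit-learn's decision tree on synthetic data: the tree's feature importances reveal that it relies on a subset of discriminant kinematic features rather than the full feature set. The intention labels, feature names, and all numbers are invented for illustration, not taken from the study.

```python
# Illustrative sketch (not the authors' pipeline): a CART decision tree
# classifying intention from synthetic kinematic features.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
n = 300
intention = rng.integers(0, 2, n)  # hypothetical labels: 0 vs. 1

# Only two features carry intention information here; the remaining
# three are pure noise.
wrist_height = 100 + 15 * intention + rng.normal(0, 5, n)   # mm (invented)
grip_aperture = 60 - 10 * intention + rng.normal(0, 4, n)   # mm (invented)
noise = rng.normal(0, 1, (n, 3))
X = np.column_stack([wrist_height, grip_aperture, noise])

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, intention)
used = tree.feature_importances_  # impurity-based importance per feature
print("importance per feature:", np.round(used, 2))
```

In this toy setup nearly all of the importance concentrates on the two informative features, which is the sense in which a CART model isolates a discriminant subset of the total kinematic pattern.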

Collaboration


Dive into Caterina Ansuini's collaborations.

Top Co-Authors

Cristina Becchio | Istituto Italiano di Tecnologia

Marco Santello | Arizona State University

Atesh Koul | Istituto Italiano di Tecnologia

Jamie R. Lukos | Arizona State University

Jessica Podda | Istituto Italiano di Tecnologia