
Publication


Featured research published by Alessandra Sciutti.


Brain | 2012

Parkinson's disease accelerates age-related decline in haptic perception by altering somatosensory integration.

Juergen Konczak; Alessandra Sciutti; Laura Avanzino; Valentina Squeri; Monica Gori; Lorenzo Masia; Giovanni Abbruzzese; Giulio Sandini

This study investigated how Parkinson's disease alters haptic perception and the underlying mechanisms of somatosensory and sensorimotor integration. Changes in haptic sensitivity and acuity (the abilities to detect and to discriminate between haptic stimuli) due to Parkinson's disease were systematically quantified and contrasted with the performance of healthy older and young adults. Using a robotic force environment, virtual contours of various curvatures were presented. Participants explored these contours with their hands and indicated verbally whether they could detect or discriminate between two contours. To understand which aspects of sensory or sensorimotor integration are altered by ageing and disease, we manipulated the sensorimotor aspect of the task: the robot either guided the hand along the contour or the participant actively moved the hand. Active exploration relies on multimodal sensory and sensorimotor integration, while passive guidance only requires sensory integration of proprioceptive and tactile information. The main findings of the study are as follows: first, a decline in haptic precision can already be observed in adults before the age of 70 years. Parkinson's disease may lead to an additional decrease in haptic sensitivity well beyond the levels typically seen in middle-aged and older adults. Second, the haptic deficit in Parkinson's disease is general in nature. It becomes manifest as a decrease in sensitivity and acuity (i.e. a smaller perceivable range and a diminished ability to discriminate between two perceivable haptic stimuli). Third, thresholds during both active and passive exploration are elevated, but not significantly different from each other. That is, active exploration did not enhance the haptic deficit when compared to passive hand motion. This implies that Parkinson's disease affects early stages of somatosensory integration that ultimately have an impact on processes of sensorimotor integration. Our results suggest that the known motor problems in Parkinson's disease, generally characterized as a failure of sensorimotor integration, may in fact have a sensory origin.


International Journal of Social Robotics | 2012

Measuring Human-Robot Interaction Through Motor Resonance

Alessandra Sciutti; Ambra Bisio; Francesco Nori; Giorgio Metta; Luciano Fadiga; Thierry Pozzo; Giulio Sandini

In the last decades, the introduction of robotic devices in fields such as industry, dangerous environments, and medicine has notably improved working practices. The availability of a new generation of humanoid robots for everyday activities in human-populated environments could entail an even wider revolution. Indeed, not only domestic activities but also social behaviors will adapt to continuous interaction with a completely new kind of social agent. In the light of this scenario, it becomes crucial to design robots suited to natural cooperation with humans, and contextually to develop quantitative methods to measure human-robot interaction (HRI). Motor resonance, i.e. the activation of the observer's motor control system during action perception, has been suggested to be a key component of human social behavior, and as such is thought to play a central role in HRI. In the literature there are reports of robots that have been used as tools to understand the human brain. The aim of this review is to offer a different perspective, suggesting that human responses can become a tool to measure and improve robot interactional attitudes. In the first part of the paper the notion of motor resonance and its neurophysiological correlates are introduced. Subsequently we describe motor resonance studies on the perception of robotic agents' behavior. Finally we introduce proactive gaze and automatic imitation, two techniques adopted in human motor resonance studies, and we present the advantages that would follow their application to HRI.


Experimental Brain Research | 2012

Motor commands in children interfere with their haptic perception of objects

Monica Gori; Valentina Squeri; Alessandra Sciutti; Lorenzo Masia; Giulio Sandini; Juergen Konczak

Neural processes of sensory-motor and motor-sensory integration link perception and action, forming the basis for human interaction with the environment. Haptic perception, the ability to extract object features through action, is based on these processes. To study the development of motor-sensory integration, children judged the curvature of virtual objects after exploring them actively or while guided passively by a robot. Haptic acuity reached adult levels only in early adolescence. Unlike in adults, haptic precision in children was consistently lower during active exploration than during passive motion. Thus, the exploratory movements themselves constitute a form of noise for the developing haptic system, which younger brains cannot compensate for until mid-adolescence. Computationally, this is consistent with a noisy efference copy mechanism producing imprecise predicted sensory feedback, which compromises haptic precision in children, while the mature mechanism helps the adult brain account for the effect of self-generated motion on perception.
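The cue-combination logic behind this interpretation can be sketched numerically. The following is an illustrative toy calculation, not a model from the paper: fusing a precise afferent estimate with a noisy efference-copy prediction by simple unweighted averaging degrades precision, whereas a mature, reliability-weighted combination improves it.

```python
def naive_average_var(v1, v2):
    """Variance of the unweighted mean of two independent estimates."""
    return (v1 + v2) / 4.0

def optimal_combined_var(v1, v2):
    """Variance of the reliability-weighted (maximum-likelihood) fusion."""
    return v1 * v2 / (v1 + v2)

# Hypothetical variances: precise afferent signal, noisy prediction.
afferent_var, noisy_prediction_var = 1.0, 9.0

# Unweighted fusion with a noisy prediction hurts precision (2.5 > 1.0),
# while reliability weighting improves it (0.9 < 1.0).
naive = naive_average_var(afferent_var, noisy_prediction_var)
weighted = optimal_combined_var(afferent_var, noisy_prediction_var)
```

Under these invented numbers, a child-like system that cannot down-weight the noisy prediction ends up less precise than with afferent input alone, matching the pattern the study reports.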


Journal of Neurophysiology | 2012

Visual gravity influences arm movement planning

Alessandra Sciutti; Laurent Demougeot; Bastien Berret; Simone Toma; Giulio Sandini; Charalambos Papaxanthis; Thierry Pozzo

When submitted to a visuomotor rotation, subjects show rapid adaptation of visually guided arm reaching movements, indicated by a progressive reduction in reaching errors. In this study, we wanted to take a step forward by investigating to what extent this adaptation also implies changes in the motor plan. Until now, classical visuomotor rotation paradigms have been performed in the horizontal plane, where the reaching motor plan in general requires the same kinematics (i.e., a straight path and a symmetric velocity profile). To overcome this limitation, we considered vertical and horizontal movement directions requiring specific velocity profiles. This way, a change in the motor plan due to the visuomotor conflict would be measurable in terms of a modification in the velocity profile of the reaching movement. Ten subjects performed horizontal and vertical reaching movements while observing a rotated visual feedback of their motion. We found that adaptation to a visuomotor rotation produces a significant change in the motor plan, i.e., changes in the symmetry of velocity profiles. This suggests that the central nervous system takes visual information into account to plan a future motion, even if this causes the adoption of nonoptimal motor plans in terms of energy consumption. However, the influence of vision on arm movement planning is not fixed, but rather changes as a function of the visual orientation of the movement. Indeed, a clear influence on motion planning can be observed only when the movement is visually presented as oriented along the vertical direction. Thus vision contributes differently to the planning of arm pointing movements depending on motion orientation in space.
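Velocity-profile symmetry, the dependent measure described above, is commonly quantified as the fraction of movement time spent before the velocity peak (0.5 for a symmetric bell shape). A minimal illustrative sketch of that computation, not the authors' analysis code:

```python
def symmetry_ratio(velocity, dt=1.0):
    """Fraction of movement duration elapsed before the velocity peak.

    0.5 indicates a symmetric bell-shaped profile; smaller values
    indicate an earlier peak, i.e. an asymmetric profile.
    velocity: sampled speed values; dt: sampling interval (seconds).
    """
    peak_index = max(range(len(velocity)), key=lambda i: velocity[i])
    return (peak_index * dt) / ((len(velocity) - 1) * dt)

# A symmetric triangular profile peaks at the midpoint:
sym = symmetry_ratio([0.0, 0.5, 1.0, 0.5, 0.0])   # 0.5
# An asymmetric profile peaks early:
asym = symmetry_ratio([0.0, 1.0, 0.6, 0.3, 0.0])  # 0.25
```

With such a measure, a shift in the ratio after adaptation would indicate a change in the planned kinematics rather than a mere reduction of endpoint error.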


Experimental Brain Research | 2010

Predicted sensory feedback derived from motor commands does not improve haptic sensitivity

Alessandra Sciutti; Valentina Squeri; Monica Gori; Lorenzo Masia; Giulio Sandini; Juergen Konczak

Haptic perception is based on the integration of afferent proprioceptive and tactile signals. A further potential source of information during active touch is predicted sensory feedback (PSF) derived from a copy of efferent motor commands that give rise to the exploratory actions. There is substantial evidence that PSF is important for predicting the sensory consequences of action, but its role in perception is unknown. Theoretically, PSF leads to a higher redundancy of haptic information, which should improve sensitivity of the haptic sense. To investigate the effect of PSF on haptic precision, blindfolded subjects haptically explored the curved contour of a virtual object generated by a robotic manipulandum. They either actively moved their hand along the contour, or their hand was moved passively by the device along the same contour. In the active condition afferent sensory information and PSF were present, while in the passive condition subjects relied solely on afferent information. In each trial, two stimuli of different curvature were presented. Subjects needed to indicate which of the two was more “curved” (forced choice). For each condition, the detection and three discrimination thresholds were computed. The main finding is that absence of efference copy information did not systematically degrade haptic acuity. This indirectly implies that PSF does not aid or enhance haptic perception. We conclude that when maximum haptic sensitivity is required to explore novel objects, the perceptual system relies primarily on afferent tactile and proprioceptive information, and PSF has no added effect on the precision of the perceptual estimate.
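Detection and discrimination thresholds in such forced-choice experiments are typically read off a psychometric function. As an illustrative simplification (the paper's actual fitting procedure is not reproduced here), the threshold can be taken as the stimulus difference at which a two-alternative forced-choice observer reaches 75% correct:

```python
def threshold_75(stimulus_levels, proportion_correct):
    """Estimate a 2AFC discrimination threshold by linear interpolation.

    Returns the stimulus level at which performance crosses 75% correct.
    Assumes proportion_correct increases monotonically with level.
    """
    target = 0.75
    points = list(zip(stimulus_levels, proportion_correct))
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if y0 <= target <= y1:
            # Interpolate between the two bracketing data points.
            return x0 + (x1 - x0) * (target - y0) / (y1 - y0)
    raise ValueError("75% point not bracketed by the data")

# Hypothetical curvature differences (arbitrary units) vs. accuracy:
t = threshold_75([0.1, 0.2, 0.3, 0.4], [0.5, 0.6, 0.9, 1.0])  # 0.25
```

Comparing such thresholds between the active and passive conditions is what allows the study to ask whether predicted sensory feedback sharpens haptic precision.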


IEEE Transactions on Autonomous Mental Development | 2014

Understanding Object Weight from Human and Humanoid Lifting Actions

Alessandra Sciutti; Laura Patanè; Francesco Nori; Giulio Sandini

Humans are very good at interacting with each other. This natural ability depends, among other factors, on an implicit communication mediated by motion observation. By simple action observation we can easily infer not only the goal of an agent, but often also some “hidden” properties of the object being manipulated, such as its weight or its temperature. This implicit understanding develops early in childhood and is supposedly based on a motor repertoire common to the cooperators. In this paper, we have investigated whether and under which conditions it is possible for a humanoid robot to foster the same kind of automatic communication, focusing on the ability to provide cues about object weight through action execution. We evaluated which action properties weight estimation is based on in humans, and accordingly designed a set of simple robotic lifting behaviors. Our results show that subjects can reach a performance in weight recognition from robot observation comparable to that obtained during human observation, with no need for training. These findings suggest that it is possible to design robot behaviors that are implicitly understandable by nonexpert partners and that this approach could be a viable path toward more natural human-robot collaborations.


PLOS ONE | 2011

Direct and indirect haptic calibration of visual size judgments.

Monica Gori; Alessandra Sciutti; David C. Burr; Giulio Sandini

It has long been suspected that touch plays a fundamental role in the calibration of visual perception, and much recent evidence supports this idea. However, as the haptic exploration workspace is limited by the kinematics of the body, the contribution of haptic information to the calibration process should occur only within the region of the haptic workspace reachable by a limb (peripersonal space). To test this hypothesis we evaluated visual size perception and showed that it is indeed more accurate inside the peripersonal space. We then show that allowing subjects to touch the (unseen) stimulus after observation restores accurate size perception; the accuracy persists for some time, implying that calibration has occurred. Finally, we show that observing an actor grasp the object also produces accurate (and lasting) size perception, suggesting that the calibration can also occur indirectly by observing goal-directed actions, implicating the involvement of the “mirror system”.


Human-Robot Interaction | 2015

A Gaze-contingent Dictating Robot to Study Turn-taking

Alessandra Sciutti; Lars Schillingmann; Oskar Palinko; Yukie Nagai; Giulio Sandini

In this paper we describe a human-robot interaction scenario designed to evaluate the role of gaze as an implicit signal for turn-taking in a robotic teaching context. In particular, we propose a protocol to assess the impact of different timing strategies in a common teaching task (English dictation). The task is designed to compare the effects of a teaching behavior whose timing depends on the student's gaze with the more standard fixed-timing approach. An initial validation indicates that this scenario could represent a functional tool for investigating the positive and negative impacts that personalized timing might have on different subjects.
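The contrast between gaze-contingent and fixed timing can be made concrete with a toy simulation; all parameter names and values below are invented for illustration, not taken from the study:

```python
def dictation_time(gaze_return_times, policy="gaze",
                   fixed_delay=3.0, phrase_duration=2.0):
    """Total time (seconds) to dictate all phrases under a timing policy.

    gaze_return_times: per-phrase seconds until the student looks back
    at the robot (a proxy for having finished writing).
    'fixed' waits fixed_delay after every phrase; 'gaze' waits only
    until the student's gaze returns, capped at fixed_delay.
    """
    total = 0.0
    for t in gaze_return_times:
        total += phrase_duration
        total += min(t, fixed_delay) if policy == "gaze" else fixed_delay
    return total

# Fast writers finish early; the gaze policy reclaims that idle time.
returns = [1.0, 2.0, 4.0]
gaze_total = dictation_time(returns, "gaze")    # 12.0
fixed_total = dictation_time(returns, "fixed")  # 15.0
```

The sketch only shows the bookkeeping: the interesting empirical question in the paper is whether such personalized timing feels better or worse to different students, not just whether it is faster.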


Frontiers in Psychology | 2015

Investigating the ability to read others’ intentions using humanoid robots

Alessandra Sciutti; Caterina Ansuini; Cristina Becchio; Giulio Sandini

The ability to interact with other people hinges crucially on the possibility of anticipating how their actions will unfold. Recent evidence suggests that this skill may be grounded in the fact that we perform an action differently depending on the intention that drives it. Human observers can detect these differences and use them to predict the purpose of the action. Although intention reading from movement observation is receiving growing interest in research, the currently applied experimental paradigms have important limitations. Here, we describe a new approach to studying intention understanding that takes advantage of robots, and especially of humanoid robots. We posit that this choice may overcome the drawbacks of previous methods by guaranteeing the ideal trade-off between controllability and naturalness of the interactive scenario. Robots can indeed establish an interaction in a controlled manner, while sharing the same action space and exhibiting contingent behaviors. To conclude, we discuss the advantages of this research strategy and the aspects to be taken into consideration when attempting to define which human (and robot) motion features allow for intention reading during social interactive tasks.


Intelligent Robots and Systems | 2016

Robot reading human gaze: Why eye tracking is better than head tracking for human-robot collaboration

Oskar Palinko; Francesco Rea; Giulio Sandini; Alessandra Sciutti

Robots are poised to become our everyday companions in the near future. Still, many hurdles need to be cleared to achieve this goal. One of them is the fact that robots are still not able to perceive some important communication cues naturally used by humans, e.g. gaze. In the recent past, eye gaze in robot perception was substituted with its proxy, head orientation, and such an approach is still adopted in many applications today. In this paper we introduce performance improvements to an eye tracking system we previously developed and use it to explore whether this approximation is appropriate. More precisely, we compare the impact of eye- and head-based gaze estimation in a human-robot interaction experiment with the iCub robot and naïve subjects. We find that the possibility to exploit the richer information carried by eye gaze has a significant impact on the interaction. As a result, our eye tracking system allows for a more efficient human-robot collaboration than a comparable head tracking approach, according to both quantitative measures and subjective evaluation by the human participants.
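The gap between eye-based and head-based gaze estimation can be illustrated with a simple geometric check: whenever the eyes rotate within the head, head orientation alone misses that component of the gaze direction. A hypothetical sketch (vector names and values invented):

```python
import math

def angle_between(u, v):
    """Angle in degrees between two 3-D direction vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    # Clamp to guard against floating-point drift outside [-1, 1].
    cos = max(-1.0, min(1.0, dot / (norm_u * norm_v)))
    return math.degrees(math.acos(cos))

# Head facing straight ahead; eyes rotated 10 degrees to the side.
head = (0.0, 0.0, 1.0)
eyes = (math.sin(math.radians(10)), 0.0, math.cos(math.radians(10)))
head_proxy_error = angle_between(head, eyes)  # ~10 degrees
```

Any such eye-in-head rotation becomes a systematic error for a head-only proxy, which is why tracking the eyes directly can matter for inferring what a partner is attending to.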

Collaboration

Top co-authors of Alessandra Sciutti:

Giulio Sandini (Istituto Italiano di Tecnologia)
Francesco Rea (Istituto Italiano di Tecnologia)
Francesco Nori (Istituto Italiano di Tecnologia)
Monica Gori (Istituto Italiano di Tecnologia)
Oskar Palinko (Istituto Italiano di Tecnologia)
Giorgio Metta (Istituto Italiano di Tecnologia)
Thierry Pozzo (Istituto Italiano di Tecnologia)