Atesh Koul
Istituto Italiano di Tecnologia
Publications
Featured research published by Atesh Koul.
PLOS ONE | 2015
Caterina Ansuini; Andrea Cavallo; Atesh Koul; Marco Jacono; Yuan Yang; Cristina Becchio
Research on reach-to-grasp movements generally concentrates on kinematic values that are expressions of maxima, in particular the maximum aperture of the hand and the peak wrist velocity. These parameters provide a snapshot of movement kinematics at a specific time point during the reach, i.e., the maximum within a set of values, but do not allow one to investigate how hand kinematics gradually conform to target properties. The present study was designed to extend the characterization of object size effects to the temporal domain. We therefore computed wrist velocity and grip aperture throughout reach-to-grasp movements aimed at large versus small objects. To provide a deeper understanding of how joint movements varied over time, we also considered the time course of finger motion relative to hand motion. Results revealed that movement parameters evolved in parallel but at different rates depending on object size. Furthermore, a classification analysis using a Support Vector Machine (SVM) approach showed that kinematic features taken as a group predicted the correct target size well before contact with the object. Interestingly, some kinematic features discriminated target size better than others. These findings reinforce our knowledge of the relationship between kinematics and object properties and shed new light on the quantity and quality of information available in the kinematics of a reach-to-grasp movement over time. This may have important implications for our understanding of the action-perception coupling mechanism.
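The abstract does not include analysis code; as a rough illustration of the kind of SVM classification it describes, here is a minimal Python sketch (scikit-learn) that decodes a binary target size from kinematic features at one time point. The feature names and synthetic data are hypothetical, not the study's measurements.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical kinematic features at one normalized time point:
# wrist velocity and grip aperture for 100 trials (50 per object size).
n_trials = 100
labels = np.repeat([0, 1], n_trials // 2)  # 0 = small object, 1 = large object
wrist_velocity = rng.normal(loc=0.8 + 0.1 * labels, scale=0.15, size=n_trials)
grip_aperture = rng.normal(loc=70 + 15 * labels, scale=8.0, size=n_trials)
X = np.column_stack([wrist_velocity, grip_aperture])

# Linear SVM with feature standardization, scored by cross-validation;
# repeating this per time point would trace decoding accuracy over the reach.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, labels, cv=5)
print(f"Mean decoding accuracy: {scores.mean():.2f}")
```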
Scientific Reports | 2016
Andrea Cavallo; Atesh Koul; Caterina Ansuini; Francesca Capozzi; Cristina Becchio
How do we understand the intentions of other people? There has been a longstanding controversy over whether it is possible to understand others’ intentions simply by observing their movements. Here, we show that movement kinematics can indeed form the basis for intention detection. By combining kinematic and psychophysical methods with classification and regression tree (CART) modeling, we found that observers relied on a subset of discriminative kinematic features, rather than the total kinematic pattern, to detect intention from the observation of simple motor acts. Intention discriminability covaried with movement kinematics on a trial-by-trial basis and was directly related to the expression of discriminative features in the observed movements. These findings demonstrate a definable and measurable relationship between specific features of observed movements and the ability to discriminate intention, providing quantitative evidence of the significance of movement kinematics for anticipating others’ intentional actions.
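For readers unfamiliar with CART, a minimal sketch of this kind of modeling step (not the authors' code; features, labels, and data are hypothetical) might look like the following, where a shallow tree exposes which kinematic features carry the discriminative load:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)

# Hypothetical kinematic features per observed trial.
n_trials = 200
X = rng.normal(size=(n_trials, 3))
feature_names = ["grip_aperture", "wrist_height", "wrist_velocity"]

# Hypothetical intention labels driven mainly by two of the three features,
# mimicking a discriminative subset within the total kinematic pattern.
intention = (0.9 * X[:, 0] - 0.7 * X[:, 1]
             + rng.normal(scale=0.5, size=n_trials)) > 0

# Fit a shallow CART tree and inspect its splits and feature importances.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, intention)
print(export_text(tree, feature_names=feature_names))
print("Feature importances:", dict(zip(feature_names, tree.feature_importances_)))
```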
PLOS ONE | 2016
Atesh Koul; Andrea Cavallo; Caterina Ansuini; Cristina Becchio
Individuals show significant variations in performing a motor act. Previous studies in the action observation literature have largely ignored this ubiquitous, if often unwanted, characteristic of motor performance, assuming movement patterns to be highly similar across repetitions and individuals. In the present study, we examined the possibility that individual variations in motor style directly influence the ability to understand and predict others’ actions. To this end, we first recorded grasping movements performed with different intents and used a two-step cluster analysis to quantitatively identify ‘clusters’ of movements performed with similar movement styles (Experiment 1). Next, using videos of the same movements, we examined the influence of these styles on the ability to judge intention from action observation (Experiments 2 and 3). We found that motor styles directly influenced observers’ ability to ‘read’ others’ intentions, with some styles consistently being less ‘readable’ than others. These results provide experimental support for the significance of motor variability for action prediction, suggesting that the ability to predict what another person is likely to do next depends directly on her individual movement style.
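Two-step cluster analysis (familiar from SPSS) has no direct scikit-learn equivalent; purely as a rough analogue of grouping movements into styles, one might select the number of clusters by an information criterion and then assign each movement. This sketch uses synthetic, hypothetical style features:

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)

# Hypothetical movement-style features for 150 recorded grasps.
X = StandardScaler().fit_transform(rng.normal(size=(150, 4)))

# Rough analogue of two-step clustering: pick the number of style
# clusters by BIC over candidate Gaussian mixtures, then assign grasps.
models = [GaussianMixture(n_components=k, random_state=0).fit(X)
          for k in range(1, 6)]
best = min(models, key=lambda m: m.bic(X))
styles = best.predict(X)
print(f"Selected {best.n_components} movement-style clusters")
```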
Frontiers in Human Neuroscience | 2016
Caterina Ansuini; Andrea Cavallo; Claudio Campus; Davide Quarona; Atesh Koul; Cristina Becchio
Behavioral and neuropsychological studies suggest that real actions and pantomimed actions tap, at least in part, different neural systems. Inspired by studies showing weight attunement in real grasps, here we asked whether (and to what extent) the kinematics of pantomimed reach-to-grasp movements can reveal the weight of the pretended target. To address this question, we instructed participants (n = 15) either to grasp or to pretend to grasp two differently weighted objects, i.e., a light object and a heavy object. Using linear discriminant analysis, we then classified the weight of the target – either real or pretended – on the basis of the recorded movement patterns. Classification analysis revealed that pantomimed reach-to-grasp movements retained information about object weight, although to a lesser extent than real grasp movements. These results are discussed in relation to the mechanisms underlying the control of real and pantomimed grasping movements.
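A minimal sketch of this style of linear discriminant analysis, assuming hypothetical kinematic features and a deliberately weaker weight signal in the pantomime condition (synthetic data, not the study's recordings):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)

# Hypothetical kinematic features (e.g., peak velocity, grip aperture)
# for real vs. pantomimed grasps toward light (0) and heavy (1) objects.
n = 120
weight = np.repeat([0, 1], n // 2)
real_X = rng.normal(loc=weight[:, None] * 0.8, scale=1.0, size=(n, 4))
pantomime_X = rng.normal(loc=weight[:, None] * 0.3, scale=1.0, size=(n, 4))

# Cross-validated LDA decoding of object weight in each condition.
for label, X in [("real", real_X), ("pantomimed", pantomime_X)]:
    acc = cross_val_score(LinearDiscriminantAnalysis(), X, weight, cv=5).mean()
    print(f"{label} grasps: weight decoding accuracy = {acc:.2f}")
```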
Physics of Life Reviews | 2018
Cristina Becchio; Atesh Koul; Caterina Ansuini; Cesare Bertone; Andrea Cavallo
Is it possible to directly perceive others’ mental states? Mediating the debate between Direct Perception and Inferentialism proponents would require knowing “what counts as an inference and how to tell the difference between inferential and non-inferential processing” [1]. However, few theorists have even attempted to answer the question of what counts as inference. The consequence, as noted by Spaulding [1], is that “given that neither Inferentialists nor DSP [Direct Social Perception, Ed.] proponents specify what they mean by inference, it is hard to tell what exactly each side is affirming and denying. Thus, the debate between Inferentialism and DSP is at an impasse”. Similar considerations apply to distinguishing between what is ‘observable’ versus ‘unobservable’ [2]. The motivation for the work discussed in the target article [2] was partly to reconceptualize the notion of ‘direct perception’ to make the observability of others’ mental states empirically addressable. This resulted in the proposal to reformulate ‘direct perception’ as reflecting the conditional probability of perceiving a given mental state from the observation of certain movement features. We do not claim that this formulation resolves the issue of whether perception of others’ mental states involves inferential steps. As noted by Overgaard [3], in principle, a stimulus may contain discriminatory information about a mental state, the information may be perceptually useful, while identifying the mental state could still involve ‘inferential’ steps. This argument brings us back to the initial impasse of what counts as inference. More radically, one
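One way to make the proposed reformulation concrete, in our notation rather than the authors': index the observability of a mental state m by the conditional probability of perceiving m given observed movement features k.

```latex
% Minimal formalization (our notation, not the target article's):
% 'direct perception' of mental state m is read as the conditional
% probability of perceiving m given observed movement features k.
P(\text{perceive } m \mid \text{movement features } k)
```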
Cerebral Cortex | 2018
Atesh Koul; Andrea Cavallo; Franco Cauda; Tommaso Costa; Matteo Diano; Massimiliano Pontil; Cristina Becchio
Mirror neurons have been proposed to underlie humans’ ability to understand others’ actions and intentions. Despite two decades of research, however, the exact computational and neuronal mechanisms implicated in this ability remain unclear. In the current study, we investigated whether, in the absence of contextual cues, regions considered to be part of the human mirror neuron system represent intention from movement kinematics. A total of 21 participants observed reach-to-grasp movements, performed with either the intention to drink or the intention to pour, while undergoing functional magnetic resonance imaging. Multivoxel pattern analysis revealed successful decoding of intentions from distributed patterns of activity in a network of structures comprising the inferior parietal lobule, the superior parietal lobule, the inferior frontal gyrus, and the middle frontal gyrus. Consistent with the proposal that parietal regions play a key role in intention understanding, classifier weights were higher in the inferior parietal region. These results provide the first demonstration that putative mirror neuron regions represent subtle differences in movement kinematics to read the intention of an observed motor act.
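As a loose illustration of multivoxel pattern analysis (MVPA), a cross-validated classifier can be trained on per-trial voxel activity patterns within a region of interest. This sketch with synthetic data only illustrates the general approach, not the authors' pipeline; trial counts, voxel counts, and the embedded signal are assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(4)

# Hypothetical: 84 observation trials x 500 voxels from one ROI
# (e.g., inferior parietal lobule), with intention labels drink/pour.
n_trials, n_voxels = 84, 500
intent = np.repeat([0, 1], n_trials // 2)  # 0 = drink, 1 = pour

# A weak multivoxel signal: a subset of voxels carries intention information.
signal = np.zeros(n_voxels)
signal[:40] = 0.4
patterns = rng.normal(size=(n_trials, n_voxels)) + np.outer(intent, signal)

# Cross-validated decoding of intention from distributed voxel patterns.
acc = cross_val_score(SVC(kernel="linear"), patterns, intent, cv=6).mean()
print(f"Intention decoding accuracy in this ROI: {acc:.2f}")
```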
Behavior Research Methods | 2018
Atesh Koul; Cristina Becchio; Andrea Cavallo
Recent years have seen increased interest in machine-learning-based predictive methods for analyzing quantitative behavioral data in experimental psychology. While these methods can achieve greater sensitivity than conventional univariate techniques, they still lack an established and accessible implementation. The aim of the current work was to build an open-source R toolbox – “PredPsych” – that makes these methods readily available to all psychologists. PredPsych is a user-friendly R toolbox based on machine-learning predictive algorithms. In this paper, we present the framework of PredPsych via the analysis of a recently published multiple-subject motion capture dataset. In addition, we discuss examples of possible research questions that can be addressed with the machine-learning algorithms implemented in PredPsych but cannot easily be addressed with univariate statistical analysis. We anticipate that PredPsych will be of use to researchers with limited programming experience, not only in the field of psychology but also in clinical neuroscience, enabling computational assessment of putative bio-behavioral markers for both prognosis and diagnosis.
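PredPsych itself is an R package; purely to illustrate the style of analysis it wraps (cross-validated classification of behavioral features), here is a hypothetical Python analogue. None of these names or calls come from PredPsych's API, and the data are synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)

# Hypothetical multi-subject motion-capture summary features and labels.
n_trials = 300
X = rng.normal(size=(n_trials, 10))    # e.g., summary kinematic features
y = rng.integers(0, 2, size=n_trials)  # e.g., two experimental conditions

# Cross-validated predictive accuracy answers the multivariate question
# ("can condition be predicted from kinematics as a whole?") that
# univariate per-feature tests do not directly address.
acc = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=10).mean()
print(f"Cross-validated accuracy: {acc:.2f} (chance = 0.50)")
```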
Computer Vision and Pattern Recognition | 2017
Andrea Zunino; Jacopo Cavazza; Atesh Koul; Andrea Cavallo; Cristina Becchio; Vittorio Murino
In computer vision, video-based approaches have been widely explored for the early classification and prediction of actions or activities. However, it remains unclear whether this modality (as compared to 3D kinematics) is reliable for the prediction of human intentions, defined as the overarching goal embedded in an action sequence. Since the same action can be performed with different intentions, this problem is more challenging, yet tractable, as demonstrated by quantitative cognitive studies that exploit 3D kinematics acquired through motion capture systems. In this paper, we bridge cognitive and computer vision studies by demonstrating the effectiveness of video-based approaches for the prediction of human intentions. Specifically, we propose Intention from Motion, a new paradigm in which, without using any contextual information, we consider instantaneous grasping motor acts involving a bottle in order to forecast why the bottle has been reached (to pass it, to place it in a box, or to pour or drink the liquid inside). We process only the grasping onsets, casting intention prediction as a classification problem. Leveraging our multimodal acquisition (3D motion capture data and 2D optical videos), we compare the most commonly used 3D descriptors from cognitive studies with state-of-the-art video-based techniques. Since the two analyses achieve equivalent performance, we demonstrate that computer vision tools are effective in capturing the kinematics and in addressing the cognitive problem of human intention prediction.
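A schematic of the modality comparison the paper describes: the same classifier evaluated on 3D motion-capture descriptors and on video-derived features for the same grasping onsets. The four-way intention labels and all features here are synthetic placeholders, not the paper's descriptors.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(6)

n_clips = 200
intent = rng.integers(0, 4, size=n_clips)  # pass / place / pour / drink

# Hypothetical descriptors for the same grasping onsets:
# 3D motion-capture features vs. features extracted from 2D video.
X_mocap = rng.normal(size=(n_clips, 16)) + np.eye(4)[intent].repeat(4, axis=1)
X_video = rng.normal(size=(n_clips, 64)) + 0.9 * np.eye(4)[intent].repeat(16, axis=1)

# Same classifier, same labels, two modalities.
for name, X in [("3D kinematics", X_mocap), ("2D video", X_video)]:
    acc = cross_val_score(SVC(kernel="linear"), X, intent, cv=5).mean()
    print(f"{name}: intention prediction accuracy = {acc:.2f}")
```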
ACM Multimedia | 2017
Andrea Zunino; Jacopo Cavazza; Atesh Koul; Andrea Cavallo; Cristina Becchio; Vittorio Murino
In this paper, we address the new problem of predicting human intentions. There is neuropsychological evidence that human actions are anticipated by distinctive preparatory motor acts that are discriminative of the type of action to be performed afterwards. In other words, an actual intention can be forecast by looking at the kinematics of the immediately preceding movement. To demonstrate this in a computational and quantitative manner, we devise a new experimental setup in which, without using contextual information, we predict human intentions that all originate from the same motor act. We pose the problem as a classification task and introduce a new multimodal dataset consisting of 3D motion capture marker data and 2D video sequences, where, by analysing only very similar movements in both the training and test phases, we are able to predict the underlying intention, i.e., the future, never-observed action. We also present an extensive experimental evaluation as a baseline, customizing state-of-the-art techniques for both 3D and 2D data analysis. Finding that video processing methods yield inferior performance but carry information complementary to the 3D data sequences, we developed a 2D+3D fusion analysis in which we achieve better classification accuracies, attesting to the superiority of the multimodal approach for the context-free prediction of human intentions.
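One common form of 2D+3D fusion is late fusion: train one classifier per modality and combine their class probabilities. The sketch below averages probabilities as an illustration; the paper's actual fusion scheme may differ, and all data here are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)

n = 300
intent = rng.integers(0, 4, size=n)  # four hypothetical intentions
X3d = rng.normal(size=(n, 12)) + np.eye(4)[intent].repeat(3, axis=1)
X2d = rng.normal(size=(n, 20)) + 0.7 * np.eye(4)[intent].repeat(5, axis=1)

idx_train, idx_test = train_test_split(np.arange(n), test_size=0.3, random_state=0)

# Train one classifier per modality, then fuse their class probabilities
# on held-out trials (simple late fusion by averaging).
clf3d = LogisticRegression(max_iter=1000).fit(X3d[idx_train], intent[idx_train])
clf2d = LogisticRegression(max_iter=1000).fit(X2d[idx_train], intent[idx_train])
fused = (clf3d.predict_proba(X3d[idx_test]) + clf2d.predict_proba(X2d[idx_test])) / 2
acc = (fused.argmax(axis=1) == intent[idx_test]).mean()
print(f"2D+3D fused accuracy: {acc:.2f}")
```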
Physics of Life Reviews | 2017
Cristina Becchio; Atesh Koul; Caterina Ansuini; Cesare Bertone; Andrea Cavallo