Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Stefano Piana is active.

Publication


Featured research published by Stefano Piana.


Molecular Autism | 2016

An investigation of the ‘female camouflage effect’ in autism using a computerized ADOS-2 and a test of sex/gender differences

Agnieszka Rynkiewicz; Bjoern W. Schuller; Erik Marchi; Stefano Piana; Antonio Camurri; Amandine Lassalle; Simon Baron-Cohen

Background: Autism spectrum conditions (autism) are diagnosed more frequently in boys than in girls. Females with autism may have been under-identified, owing not only to a male-biased understanding of autism but also to females' camouflaging. The study describes a new technique that allows automated coding of a non-verbal mode of communication (gestures) and offers the possibility of objective evaluation of gestures, independent of human judgment. The EyesWeb software platform and the Kinect sensor were used during two demonstration activities of ADOS-2 (Autism Diagnostic Observation Schedule, Second Edition).

Methods: The study group consisted of 33 high-functioning Polish girls and boys with a formal diagnosis of autism or Asperger syndrome, aged 5–10, with fluent speech and average or above-average IQ, together with their parents (girls with autism, n = 16; boys with autism, n = 17). All children were assessed during two demonstration activities of Module 3 of ADOS-2, administered in Polish and coded using Polish codes. Children were also assessed with Polish versions of the Eyes and Faces Tests. Parents provided information on the author-reviewed Polish research translation of the SCQ (Social Communication Questionnaire, Current and Lifetime) and the Polish version of the AQ Child (Autism Spectrum Quotient, Child).

Results: Girls with autism tended to use gestures more vividly than boys with autism during the two demonstration activities of ADOS-2. Girls with autism made significantly more mistakes than boys with autism on the Faces Test. All children with autism had high scores on the AQ Child, which confirmed the presence of autistic traits in this group. The current communication skills of boys with autism, as reported by parents in the SCQ, were significantly better than those of girls with autism. However, both girls and boys with autism improved in social and communication abilities over the lifetime. The number of stereotypic behaviours in boys decreased significantly over life, whereas it remained at a comparable level in girls with autism.

Conclusions: High-functioning females with autism may present better in the non-verbal (gesture) mode of communication than boys with autism, which may camouflage other diagnostic features and poses a risk of under-diagnosis, or of not receiving the appropriate diagnosis, for this population. Further research is required to examine this phenomenon so that appropriate gender revisions to diagnostic assessments might be implemented.


Human-Computer Interaction | 2016

Go-with-the-Flow: Tracking, Analysis and Sonification of Movement and Breathing to Build Confidence in Activity Despite Chronic Pain

Aneesha Singh; Stefano Piana; Davide Pollarolo; Gualtiero Volpe; Giovanna Varni; Ana Tajadura-Jiménez; Amanda C. de C. Williams; Antonio Camurri; Nadia Bianchi-Berthouze

Chronic (persistent) pain (CP) affects 1 in 10 adults; clinical resources are insufficient, and anxiety about activity restricts lives. Technological aids monitor activity but lack necessary psychological support. This article proposes a new sonification framework, Go-with-the-Flow, informed by physiotherapists and people with CP. The framework proposes articulation of user-defined sonified exercise spaces (SESs) tailored to psychological needs and physical capabilities that enhance body and movement awareness to rebuild confidence in physical activity. A smartphone-based wearable device and a Kinect-based device were designed based on the framework to track movement and breathing and sonify them during physical activity. In control studies conducted to evaluate the sonification strategies, people with CP reported increased performance, motivation, awareness of movement, and relaxation with sound feedback. Home studies, a focus group, and a survey of CP patients conducted at the end of a hospital pain management session provided an in-depth understanding of how different aspects of the SESs and their calibration can facilitate self-directed rehabilitation and how the wearable version of the device can facilitate transfer of gains from exercise to feared or demanding activities in real life. We conclude by discussing the implications of our findings on the design of technology for physical rehabilitation.
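To make the core idea concrete, below is a minimal, hypothetical sketch of the kind of movement-to-sound mapping a sonified exercise space (SES) could use. The angle-to-pitch mapping, function names, and parameter ranges are illustrative assumptions, not the calibration published in the paper.

```python
# Illustrative sketch only: map a stretch angle from a wearable sensor to a
# rising pitch, so the user hears progress toward a target posture. All
# names and ranges here are hypothetical, not the published SES calibration.
import numpy as np

def stretch_to_pitch(angle_deg, target_deg=90.0,
                     f_min=220.0, f_max=440.0):
    """Map a stretch angle (0..target_deg) linearly to a frequency in Hz."""
    progress = np.clip(angle_deg / target_deg, 0.0, 1.0)
    return f_min + progress * (f_max - f_min)

# Example: a forward-reach exercise sampled at four points
for angle in (10, 45, 80, 90):
    print(f"{angle:3d} deg -> {stretch_to_pitch(angle):.1f} Hz")
```

A mapping of this shape gives continuous, calibratable feedback: the target angle can be set per user, which is the kind of tailoring to physical capability the framework describes.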


KSII Transactions on Internet and Information Systems | 2016

Adaptive Body Gesture Representation for Automatic Emotion Recognition

Stefano Piana; Alessandra Staglianò; Francesca Odone; Antonio Camurri

We present a computational model and a system for the automated recognition of emotions starting from full-body movement. Three-dimensional motion data of full-body movements are obtained either from professional optical motion-capture systems (Qualisys) or from low-cost RGB-D sensors (Kinect and Kinect2). A number of features are then automatically extracted at different levels, from kinematics of a single joint to more global expressive features inspired by psychology and humanistic theories (e.g., contraction index, fluidity, and impulsiveness). An abstraction layer based on dictionary learning further processes these movement features to increase the model generality and to deal with intraclass variability, noise, and incomplete information characterizing emotion expression in human movement. The resulting feature vector is the input for a classifier performing real-time automatic emotion recognition based on linear support vector machines. The recognition performance of the proposed model is presented and discussed, including the tradeoff between precision of the tracking measures (we compare the Kinect RGB-D sensor and the Qualisys motion-capture system) versus dimension of the training dataset. The resulting model and system have been successfully applied in the development of serious games for helping autistic children learn to recognize and express emotions by means of their full-body movement.
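As a rough illustration of the pipeline's final stage, the sketch below (not the authors' code) computes two simplified stand-ins for expressive features named in the abstract, a contraction index and a jerk-based fluidity measure, and feeds them to a linear SVM via scikit-learn. The dictionary-learning abstraction layer is omitted, and the feature definitions are assumptions.

```python
# Minimal sketch: simplified expressive features from 3D joint trajectories,
# classified with a linear SVM as in the paper's final stage. The feature
# formulas are illustrative stand-ins, not the published definitions.
import numpy as np
from sklearn.svm import LinearSVC

def contraction_index(frames):
    """Mean distance of joints from the body centroid.
    frames: array (T, J, 3) of T frames, J joints, xyz coordinates."""
    centroid = frames.mean(axis=1, keepdims=True)      # (T, 1, 3)
    return np.linalg.norm(frames - centroid, axis=2).mean()

def fluidity(frames, fps=30.0):
    """Inverse of mean jerk magnitude: smoother motion -> higher value."""
    jerk = np.diff(frames, n=3, axis=0) * fps**3       # third derivative
    return 1.0 / (1.0 + np.linalg.norm(jerk, axis=2).mean())

def features(clip):
    return [contraction_index(clip), fluidity(clip)]

def train(clips, labels):
    """clips: list of (T, J, 3) arrays; labels: one emotion class per clip."""
    X = np.array([features(c) for c in clips])
    return LinearSVC().fit(X, labels)
```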


Archive | 2013

Automated Analysis of Non-Verbal Expressive Gesture

Stefano Piana; Maurizio Mancini; Antonio Camurri; Giovanna Varni; Gualtiero Volpe

A framework and software system for real-time tracking and analysis of non-verbal expressive and emotional behavior is proposed. The objective is to design and create multimodal interactive systems for the automated analysis of emotions. The system will give audiovisual feedback to support the therapy of children affected by Autism Spectrum Conditions (ASC). The system is based on the EyesWeb XMI open software platform and on Kinect depth sensors.


Human Factors in Computing Systems | 2016

Movement Fluidity Analysis Based on Performance and Perception

Stefano Piana; Paolo Alborno; Radoslaw Niewiadomski; Maurizio Mancini; Gualtiero Volpe; Antonio Camurri

In this work we present a framework and an experimental approach to investigating human body movement qualities (i.e., the expressive components of non-verbal communication) in HCI. We first define a candidate movement quality conceptually, with the involvement of experts in the field (e.g., dancers and choreographers). Next, we collect a dataset of performances and evaluate the perception of the chosen quality. Finally, we propose a computational model to detect the presence of the quality in a movement segment and compare the outcomes of the model with the evaluation results. In this on-going work, we apply the approach to a specific movement quality: Fluidity. The proposed methods and models have several potential applications, e.g., emotion detection from full-body movement, interactive training of motor skills, and rehabilitation.
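A minimal sketch of the comparison step, under stated assumptions: a jerk-based smoothness score stands in for the paper's fluidity model, and its agreement with human perception ratings is measured by rank correlation. Neither the score nor the evaluation protocol is taken from the paper.

```python
# Hypothetical evaluation sketch: compare a simple computational fluidity
# score against mean human perception ratings, one per movement segment.
import numpy as np
from scipy.stats import spearmanr

def fluidity_score(trajectory, fps=30.0):
    """Lower mean jerk -> more fluid movement. trajectory: (T, 3) array."""
    jerk = np.diff(trajectory, n=3, axis=0) * fps**3
    return -np.linalg.norm(jerk, axis=1).mean()

def agreement(segments, ratings):
    """Rank correlation between model scores and observer ratings."""
    scores = [fluidity_score(s) for s in segments]
    rho, p = spearmanr(scores, ratings)
    return rho, p
```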


Affective Computing and Intelligent Interaction | 2013

Expressive Non-verbal Interaction in String Quartet

Donald Glowinski; Giorgio Gnecco; Stefano Piana; Antonio Camurri

The present study investigates expressive non-verbal interaction in a musical context, starting from behavioral features extracted at the individual and group level. We define four features related to head movement and direction that may help gain insight into the expressivity and cohesion of the performance. Our preliminary findings, obtained from the analysis of a string quartet recorded in ecological settings, show that these features may help distinguish between two types of performance: (a) a concert-like condition, where all musicians aim at performing at their best, and (b) a perturbed one, where the 1st violinist devises alternative interpretations of the music score without discussing them with the other musicians.
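For illustration only, here is a hypothetical group-level feature in the spirit of the study: the mean pairwise correlation of the four musicians' head-direction signals, as a rough proxy for ensemble cohesion. The paper's actual four features are not reproduced here.

```python
# Hypothetical sketch: average pairwise correlation of head-direction
# time series across a quartet, as a coarse cohesion proxy.
import numpy as np

def group_coherence(head_yaw):
    """head_yaw: array (4, T) of head-direction angles, one row per musician.
    Returns the mean pairwise Pearson correlation across the quartet."""
    corr = np.corrcoef(head_yaw)           # (4, 4) correlation matrix
    iu = np.triu_indices_from(corr, k=1)   # off-diagonal pairs only
    return corr[iu].mean()
```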


Ubiquitous Computing | 2016

A system to support the learning of movement qualities in dance: a case study on dynamic symmetry

Antonio Camurri; Corrado Canepa; Nicola Ferrari; Maurizio Mancini; Radoslaw Niewiadomski; Stefano Piana; Gualtiero Volpe; Jean-Marc Matos; Pablo Palacio; Muriel Romero

In this paper, we present (i) a computational model of Dynamic Symmetry of human movement, and (ii) a system to teach this movement quality (symmetry or asymmetry) by means of an interactive sonification exergame based on IMU sensors and the EyesWeb XMI software platform. The implemented system is available as a demo at the workshop.
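As a hedged sketch of what a symmetry measure could look like (the published Dynamic Symmetry model is not reproduced here), the snippet below compares acceleration energy from two hypothetical left/right wrist IMUs and returns an index in [0, 1].

```python
# Minimal sketch, assuming two IMUs (left/right wrist). The formula is an
# illustration, not the paper's Dynamic Symmetry model.
import numpy as np

def symmetry_index(acc_left, acc_right):
    """acc_*: arrays (T, 3) of accelerometer samples. Returns 1.0 for equal
    movement energy on both sides, values near 0 for one-sided movement."""
    e_l = np.square(acc_left).sum()
    e_r = np.square(acc_right).sum()
    return min(e_l, e_r) / max(e_l, e_r)
```

An index like this could drive the interactive sonification directly, e.g., by mapping values below a threshold to an audible cue that prompts the learner to rebalance the movement.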


Proceedings of the 3rd International Symposium on Movement and Computing | 2016

Towards a Multimodal Repository of Expressive Movement Qualities in Dance

Stefano Piana; Paolo Coletta; Simone Ghisio; Radoslaw Niewiadomski; Maurizio Mancini; Roberto Sagoleo; Gualtiero Volpe; Antonio Camurri

In this paper, we present a new multimodal repository for the analysis of expressive movement qualities in dance. First, we discuss the guidelines and methodology that we applied to create this repository. Next, we present the technical setup of the recordings and the platform for capturing synchronized audio-visual, physiological, and motion capture data. The initial content of the repository consists of about 90 minutes of short dance performances, movement sequences, and improvisations performed by four dancers, displaying three expressive qualities: Fluidity, Impulsivity, and Rigidity.


Proceedings of the 16th International Conference on Multimodal Interaction | 2014

Emotional Charades

Stefano Piana; Alessandra Staglianò; Francesca Odone; Antonio Camurri

This is a short description of the Emotional Charades serious game demo. Our goal is to focus on emotion expression through body gestures, making the players aware of the amount of affective information their bodies convey. The whole framework aims at helping children with autism to understand and express emotions. We also want to compare the performances of our automatic recognition system and the ones achieved by humans.


Proceedings of the 12th Biannual Conference on Italian SIGCHI Chapter | 2017

A multimodal corpus for technology-enhanced learning of violin playing

Gualtiero Volpe; Ksenia Kolykhalova; Erica Volta; Simone Ghisio; George Waddell; Paolo Alborno; Stefano Piana; Corrado Canepa; Rafael Ramirez-Melendez

Learning to play a musical instrument is a difficult task, mostly based on the master-apprentice model. Technologies are rarely employed and are usually restricted to audio and video recording and playback. Nevertheless, multimodal interactive systems can complement current learning and teaching practice by offering students guidance during self-study and by helping teachers and students focus on details that would otherwise be difficult to appreciate in usual audiovisual recordings. This paper introduces a multimodal corpus consisting of recordings of expert models of success, provided by four professional violin performers. The corpus is publicly available on the repoVizz platform and includes synchronized audio, video, motion capture, and physiological (EMG) data. It is the reference archive for the EU-H2020-ICT Project TELMI, an international research project investigating how we learn musical instruments from a pedagogical and scientific perspective and how to develop new interactive, assistive, self-learning, augmented-feedback, and socially-aware systems to support musical instrument learning and teaching.

Collaboration


Dive into Stefano Piana's collaborations.
