Donald Glowinski
University of Genoa
Publications
Featured research published by Donald Glowinski.
IEEE Transactions on Affective Computing | 2011
Donald Glowinski; Nele Dael; Antonio Camurri; Gualtiero Volpe; Marcello Mortillaro; Klaus R. Scherer
This paper presents a framework for the analysis of affective behavior starting from a reduced amount of visual information related to human upper-body movements. The main goal is to identify a minimal representation of emotional displays based on nonverbal gesture features. The GEMEP (Geneva Multimodal Emotion Portrayals) corpus was used to validate this framework. Twelve emotions expressed by 10 actors form the selected data set of emotion portrayals. Visual tracking of head and hand trajectories was performed from a frontal and a lateral view. Postural/shape and dynamic expressive gesture features were identified and analyzed. A feature reduction procedure was carried out, resulting in a 4D model of emotion expression that effectively classified and grouped emotions according to their valence (positive, negative) and arousal (high, low). These results show that emotionally relevant information can be detected and measured from the dynamic qualities of gesture. The framework was implemented as software modules (plug-ins) extending the EyesWeb XMI Expressive Gesture Processing Library and will be used in user-centric, networked media applications, including future mobile devices characterized by low computational resources and limited sensor systems.
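The abstract describes reducing a set of gesture features to a 4D representation. As a minimal sketch of such a feature-reduction step, the following uses PCA via SVD; the choice of PCA, the feature count, and all variable names are assumptions for illustration, not the paper's exact procedure:

```python
import numpy as np

def reduce_features(X, n_components=4):
    """Project gesture features onto their first principal components.

    X: (n_samples, n_features) matrix of expressive gesture features.
    Returns the (n_samples, n_components) reduced representation.
    """
    Xc = X - X.mean(axis=0)                      # center each feature
    # SVD of the centered data gives the principal axes in Vt's rows
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T              # scores on the top components

# Toy example: 120 portrayals described by 10 hypothetical gesture features
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 10))
Z = reduce_features(X)
print(Z.shape)  # (120, 4)
```

The 4D scores could then be inspected for clustering by valence and arousal, as the paper reports.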
Computer Vision and Pattern Recognition | 2008
Donald Glowinski; Antonio Camurri; Gualtiero Volpe; Nele Dael; Klaus R. Scherer
This paper illustrates our recent work on the analysis of expressive gesture related to the motion of the upper body (the head and the hands) in the context of emotional portrayals performed by professional actors. An experiment resulting from a multidisciplinary collaboration is presented. The experiment aims at (i) developing models and algorithms for the analysis of such expressive content and (ii) identifying which motion cues are involved in conveying the actor's expressive intentions to portray four emotions (anger, joy, relief, sadness) via a scenario approach. The paper discusses the experiment in detail with reference to related conceptual issues, the developed techniques, and the obtained results.
Neuropsychologia | 2014
Leonardo Badino; Alessandro D'Ausilio; Donald Glowinski; Antonio Camurri; Luciano Fadiga
Non-verbal group dynamics are often opaque to a formal quantitative analysis of communication flow. In this context, ensemble musicians can serve as a reliable model of expert group coordination. In fact, bodily motion is a critical component of inter-musician coordination and can thus be used as a valuable index of sensorimotor communication. Here we measured head movement kinematics of an expert quartet of musicians and, by applying Granger causality analysis, numerically described the causality patterns between participants. We found a clear positive relationship between the amount of communication and the complexity of the score segment. Furthermore, we applied temporal and dynamical changes to the musical score, known to the first violinist only. The perturbations were devised to force unidirectional communication between the leader of the quartet and the other participants. Results show that in these situations unidirectional influence from the leader decreased, implying that effective leadership may require prior sharing of information between participants. In conclusion, we could measure the amount of information flow and sensorimotor group dynamics, suggesting that the fabric of leadership is built not on exclusive knowledge of information but on sharing it.
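Granger causality asks whether the past of one time series improves prediction of another beyond the latter's own past. A minimal least-squares sketch of a pairwise test (comparing residual variances of a restricted vs. a full autoregressive model) is below; the lag order, the log-variance-ratio statistic, and the toy signals are illustrative assumptions, not the paper's analysis pipeline:

```python
import numpy as np

def granger_strength(x, y, lag=5):
    """Does y's past help predict x? Returns the log residual-variance ratio
    of the restricted (x's own past) vs. full (both pasts) model; larger
    positive values suggest y 'Granger-causes' x."""
    n = len(x)
    # Lagged design matrices: column k holds the series delayed by k+1 steps
    X_own = np.column_stack([x[lag - k - 1:n - k - 1] for k in range(lag)])
    X_oth = np.column_stack([y[lag - k - 1:n - k - 1] for k in range(lag)])
    target = x[lag:]
    def resid_var(A):
        beta, *_ = np.linalg.lstsq(A, target, rcond=None)
        return np.var(target - A @ beta)
    return np.log(resid_var(X_own) / resid_var(np.hstack([X_own, X_oth])))

# Toy check: x is driven by y's immediate past, so y -> x should dominate
rng = np.random.default_rng(1)
y = rng.normal(size=500)
x = 0.8 * np.roll(y, 1) + 0.1 * rng.normal(size=500)
x[0] = 0.0
print(granger_strength(x, y), granger_strength(y, x))
```

In a quartet setting, `x` and `y` would be head-movement kinematics of two musicians, and the pairwise strengths form the causality pattern between participants.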
Frontiers in Psychology | 2013
Donald Glowinski; Maurizio Mancini; Roddy Cowie; Antonio Camurri; Carlo Chiorri; Cian Doherty
When people perform a task as part of a joint action, their behavior is not the same as it would be if they were performing the same task alone, since it has to be adapted to facilitate shared understanding (or sometimes to prevent it). Joint performance of music offers a test bed for ecologically valid investigations of the way non-verbal behavior facilitates joint action. Here we compare the expressive movement of violinists when playing in solo and ensemble conditions. The first violinists of two string quartets (SQs), one professional and one student, were asked to play the same musical fragments in a solo condition and with the quartet. Synchronized multimodal recordings were created from the performances, using a specially developed software platform. Different patterns of head movement were observed. By quantifying them using an appropriate measure of entropy, we showed that head movements are more predictable in the quartet scenario. Rater evaluations showed that the change does not, as might be assumed, entail markedly reduced expression. Raters showed some ability to discriminate between solo and ensemble performances, but did not distinguish them in terms of emotional content or expressiveness. The data raise provocative questions about joint action in realistically complex scenarios.
HBU'12: Proceedings of the Third International Conference on Human Behavior Understanding | 2012
Maurizio Mancini; Giovanna Varni; Donald Glowinski; Gualtiero Volpe
The EU-ICT FET Project ILHAIRE aims at endowing machines with automated detection, analysis, and synthesis of laughter. This paper describes the Body Laughter Index (BLI) for automated detection of laughter, starting from the analysis of body movement captured by a video source. The BLI algorithm is described, and the index is computed on a corpus of videos. An assessment of the algorithm by means of subjective ratings is also presented. Results show that BLI can successfully distinguish between different videos of laughter, even if improvements are needed with respect to subjects' perception, multimodal fusion, cultural aspects, and generalization to a broad range of social contexts.
ACM Multimedia | 2010
Donald Glowinski; Paolo Coletta; Gualtiero Volpe; Antonio Camurri; Carlo Chiorri; Andrea Schenone
Our research focuses on ensemble musical performance, an ideal test bed for the development of models and techniques for measuring creative social interaction in an ecologically valid framework. Starting from expressive behavioral data of a string quartet, this paper addresses the application of the Multi-Scale Entropy (MSE) method to investigate dominance.
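The "multi-scale" part of MSE is a coarse-graining step: the signal is averaged over non-overlapping windows of increasing length, and an entropy measure (typically Sample Entropy) is then computed on each coarse-grained series. A minimal sketch of the coarse-graining alone, with function names chosen for illustration:

```python
import numpy as np

def coarse_grain(x, scale):
    """Non-overlapping averages of length `scale` (the multi-scale step of MSE).

    At scale 1 the series is unchanged; larger scales expose slower dynamics.
    """
    x = np.asarray(x, dtype=float)
    n = (len(x) // scale) * scale          # drop the incomplete tail window
    return x[:n].reshape(-1, scale).mean(axis=1)

print(coarse_grain(np.arange(10), 3))  # [1. 4. 7.]
```

An MSE profile is then the entropy of `coarse_grain(x, s)` for each scale `s`, which is what would be compared across musicians to study dominance.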
Art & Perception | 2014
Katie Noble; Donald Glowinski; Helen Murphy; Corinne Jola; Phil McAleer; Nikhil Darshane; Kedzie Penfield; Sandhiya Kalyanasundaram; Antonio Camurri; Frank E. Pollick
We used a combination of behavioral, computational vision and fMRI methods to examine human brain activity while viewing a 386 s video of a solo Bharatanatyam dance. A computational analysis provided us with a Motion Index (MI) quantifying the silhouette motion of the dancer throughout the dance. A behavioral analysis using 30 naive observers provided us with the time points where observers were most likely to report event boundaries where one movement segment ended and another began. These behavioral and computational data were used to interpret the brain activity of a different set of 11 naive observers who viewed the dance video while brain activity was measured using fMRI. Results showed that the Motion Index related to brain activity in a single cluster in the right Inferior Temporal Gyrus (ITG) in the vicinity of the Extrastriate Body Area (EBA). Perception of event boundaries in the video was related to the BA44 region of right Inferior Frontal Gyrus as well as extensive clusters of bilateral activity in the Inferior Occipital Gyrus which extended in the right hemisphere towards the posterior Superior Temporal Sulcus (pSTS).
Affective Computing and Intelligent Interaction | 2013
Maurizio Mancini; Jennifer Hofmann; Tracey Platt; Gualtiero Volpe; Giovanna Varni; Donald Glowinski; Willibald Ruch; Antonio Camurri
Within the EU ILHAIRE Project, researchers from several disciplines (e.g., computer science, psychology) collaborate to investigate the psychological foundations of laughter and to bring this knowledge into a form usable by new technologies (i.e., affective computing). Within this framework, in order to endow machines with laughter capabilities (encoding as well as decoding), one crucial task is an adequate description of laughter in terms of morphology. In this paper we present a working methodology towards automated full-body laughter detection: starting from expert annotations of laughter videos, we aim to identify the body features that characterize laughter.
Affective Computing and Intelligent Interaction | 2013
Donald Glowinski; Giorgio Gnecco; Stefano Piana; Antonio Camurri
The present study investigates expressive non-verbal interaction in a musical context, starting from behavioral features extracted at the individual and group level. We define four features related to head movement and direction that may help gain insight into the expressivity and cohesion of the performance. Our preliminary findings, obtained from the analysis of a string quartet recorded in ecological settings, show that these features may help distinguish between two types of performance: (a) a concert-like condition where all musicians aim to perform at their best, and (b) a perturbed one where the first violinist devises alternative interpretations of the music score without discussing them with the other musicians.
Affective Computing and Intelligent Interaction | 2011
Donald Glowinski; Maurizio Mancini
Aiming to provide a solid foundation for future affect detection applications in HCI, we propose to analyze human expressive gesture by computing movement Sample Entropy (SampEn). This method provides two main advantages: (i) it is suited to the non-linearity and non-stationarity of human movement; (ii) it allows a fine-grained analysis of the information encoded in the dynamics of movement features. A real-time application implementing the SampEn method is presented. Preliminary results obtained by computing SampEn on two expressive features, smoothness and symmetry, are provided in a video available on the web.
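SampEn is the negative log of the conditional probability that two subsequences matching for m points (within tolerance r) also match for m + 1 points. A minimal, batch (non-real-time) sketch follows; the default r = 0.2·SD and the template-counting details are common conventions assumed here, not taken from the paper:

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """SampEn(m, r) of a 1D signal: lower values indicate a more regular,
    predictable signal. Uses Chebyshev distance between templates."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()                 # conventional tolerance
    def count_matches(m):
        # All length-m templates, pairwise Chebyshev distances
        templ = np.array([x[i:i + m] for i in range(len(x) - m)])
        d = np.max(np.abs(templ[:, None] - templ[None, :]), axis=2)
        # Count matching pairs, excluding self-matches on the diagonal
        return (np.sum(d <= r) - len(templ)) / 2
    B = count_matches(m)                  # matches of length m
    A = count_matches(m + 1)              # ... that extend to length m + 1
    return -np.log(A / B)

# A smooth sine should be far more regular than white noise
t = np.linspace(0, 8 * np.pi, 300)
regular = np.sin(t)
noisy = np.random.default_rng(0).normal(size=300)
print(sample_entropy(regular), sample_entropy(noisy))
```

Applied to a sliding window over a movement feature such as smoothness or symmetry, this yields the kind of dynamics-sensitive descriptor the paper proposes.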