Publications


Featured research published by Paolo Alborno.


Human Factors in Computing Systems | 2016

Movement Fluidity Analysis Based on Performance and Perception

Stefano Piana; Paolo Alborno; Radoslaw Niewiadomski; Maurizio Mancini; Gualtiero Volpe; Antonio Camurri

In this work, we present a framework and an experimental approach to investigate human body movement qualities (i.e., the expressive components of non-verbal communication) in HCI. We first define a candidate movement quality conceptually, with the involvement of experts in the field (e.g., dancers, choreographers). Next, we collect a dataset of performances and evaluate the perception of the chosen quality. Finally, we propose a computational model to detect the presence of the quality in a movement segment and compare the outcomes of the model with the evaluation results. In this ongoing work, we apply the approach to a specific quality of movement: fluidity. The proposed methods and models may have several applications, e.g., emotion detection from full-body movement, interactive training of motor skills, and rehabilitation.
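The abstract does not name the computational model used to detect fluidity; in the movement-analysis literature, fluidity (smoothness) is commonly operationalized through jerk, the time derivative of acceleration. A minimal sketch of such a jerk-based index, with illustrative names and a hypothetical sampling rate, not the authors' implementation:

```python
import numpy as np

def fluidity_index(speed, fps=100.0):
    """Illustrative fluidity proxy: lower mean squared jerk -> more fluid.

    speed: 1-D array of limb speed samples at `fps` Hz (hypothetical input).
    Returns a score in (0, 1], higher = smoother / more fluid.
    """
    dt = 1.0 / fps
    accel = np.gradient(speed, dt)   # first derivative of speed
    jerk = np.gradient(accel, dt)    # second derivative of speed
    msj = np.mean(jerk ** 2)         # mean squared jerk
    return 1.0 / (1.0 + msj)         # monotone map to (0, 1]

# A smooth speed profile scores higher than the same profile plus noise.
t = np.linspace(0, 2 * np.pi, 200)
smooth = np.sin(t) + 1.0
noisy = smooth + 0.3 * np.random.default_rng(0).standard_normal(t.size)
```

Such an index could then be compared against the perceptual ratings, as the abstract describes for the authors' own model.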


Frontiers in Digital Humanities | 2017

Extracting coarse body movements from video in music performance: a comparison of automated computer vision techniques with motion capture data

Kelly Jakubowski; Tuomas Eerola; Paolo Alborno; Gualtiero Volpe; Antonio Camurri; Martin Clayton

The measurement and tracking of body movement within musical performances can provide valuable sources of data for studying interpersonal interaction and coordination between musicians. The continued development of tools to extract such data from video recordings will offer new opportunities to research musical movement across a diverse range of settings, including field research and other ecological contexts in which the implementation of complex motion capture systems is not feasible or affordable. Such work might also make use of the multitude of video recordings of musical performances that are already available to researchers. The present study made use of such existing data, specifically three video datasets of ensemble performances from different genres, settings, and instrumentation (a pop piano duo, three jazz duos, and a string quartet). Three computer vision techniques were applied to these video datasets, with the aim of quantifying and tracking the movements of the individual performers: frame differencing, optical flow, and kernelized correlation filters (KCF). All three techniques exhibited high correlations with motion capture data collected from the same musical performances, with median correlation (Pearson's r) values of .75 to .94. The techniques that track movement in two dimensions (optical flow and KCF) provided more accurate measures of movement than the technique that provides a single per-frame estimate of overall movement change for each performer (frame differencing). Measurements of performers' movements were also more accurate when the computer vision techniques were applied to narrowly defined regions of interest (the head) than when the same techniques were applied to larger regions (the entire upper body, above the chest or waist). Some differences in movement tracking accuracy emerged between the three video datasets, which may have been due to instrument-specific motions that resulted in occlusions of the body part of interest (e.g., a violinist's right hand occluding the head whilst tracking head movement). These results indicate that computer vision techniques can be effective in quantifying body movement from videos of musical performances, while also highlighting constraints that must be dealt with when applying such techniques in ensemble coordination research.
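Of the three techniques, frame differencing is the simplest to reproduce: summing absolute pixel changes between consecutive grayscale frames yields one movement estimate per frame transition, which can then be correlated (Pearson's r, as in the study) against a motion-capture-derived signal. A self-contained sketch on synthetic frames; NumPy arrays stand in for decoded video, and all names are illustrative:

```python
import numpy as np

def frame_differencing(frames):
    """One movement estimate per frame transition: summed absolute
    grayscale difference between consecutive frames."""
    frames = np.asarray(frames, dtype=float)
    return np.abs(np.diff(frames, axis=0)).sum(axis=(1, 2))

def pearson_r(x, y):
    """Pearson correlation coefficient between two 1-D signals."""
    return float(np.corrcoef(np.asarray(x, float), np.asarray(y, float))[0, 1])

# Synthetic video: a 10x10 bright square shifting 0-3 px right per frame.
steps = np.array([0, 2, 1, 3, 0, 1, 2, 3, 1, 0, 2, 1, 3, 2, 0])  # true shifts
frames, x = [], 0
for s in steps:
    x += int(s)
    img = np.zeros((64, 64))
    img[20:30, x:x + 10] = 1.0
    frames.append(img)

motion = frame_differencing(frames)  # 14 values, one per transition
# A shift of s pixels changes 2 * 10 * s pixels, so the estimate is
# linear in the true displacement and correlates with it.
```

On real footage the correlation is of course lower (the study reports median r of .75 to .94), since lighting, background motion, and occlusions all contribute to the pixel differences.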


Proceedings of the 4th International Conference on Movement Computing | 2017

Limbs synchronisation as a measure of movement quality in karate

Paolo Alborno; Nikolas De Giorgis; Antonio Camurri; Enrico Puppo

We present a method to compute a measure of karate movement quality from MoCap data. We start from a well-known assumption: an expert athlete is able to perform movements characterized by stable and clean postures and stances, i.e., they can complete the movements without hesitation, small noisy fluctuations, or movement ripples. To explore this hypothesis, we collected motion capture data of five athletes performing two different katas, for a total of 22 trials. The athletes have two distinct levels of skill and age: junior brown belt and senior black belt. For each trial, we compute the acceleration of the limbs (arms and legs) and carry out a multi-scale analysis to identify and extract relevant events. Such events correspond to maxima and minima of acceleration intensity (i.e., peaks of high acceleration or deceleration) that occur near the start and end points of each basic movement segment in a session of kata. Significant events are then selected, and an event-synchronisation approach is used to measure the amount of synchrony between the two arms and between the two legs. Results show that expert performers exhibit higher synchronisation than beginners, resulting in movements that observers perceive as more stable and clean.
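The paper's multi-scale analysis is not spelled out in the abstract, but the core step, extracting extrema of acceleration intensity as candidate segment-boundary events, can be illustrated at a single scale. A simplified sketch; the function name, threshold, and sampling rate are illustrative assumptions:

```python
import numpy as np

def acceleration_events(speed, fps=100.0, threshold=0.5):
    """Indices of local extrema of |acceleration| above a threshold:
    candidate start/end events of basic movement segments. This is a
    single-scale simplification of the paper's multi-scale analysis."""
    accel = np.gradient(np.asarray(speed, dtype=float), 1.0 / fps)
    mag = np.abs(accel)  # peaks of acceleration OR deceleration
    events = [i for i in range(1, len(mag) - 1)
              if mag[i] >= mag[i - 1] and mag[i] > mag[i + 1]
              and mag[i] > threshold]
    return np.array(events)

# A limb-speed profile with two sharp transitions (rest -> motion -> rest);
# events should appear near the two ramps.
speed = np.concatenate([np.zeros(50), np.linspace(0.0, 1.0, 10),
                        np.ones(50), np.linspace(1.0, 0.0, 10), np.zeros(50)])
events = acceleration_events(speed)
```

Feeding the event lists of the two arms (or the two legs) into an event-synchronisation measure then yields the synchrony score the paper uses to separate experts from beginners.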


Proceedings of the 12th Biannual Conference on Italian SIGCHI Chapter | 2017

A multimodal corpus for technology-enhanced learning of violin playing

Gualtiero Volpe; Ksenia Kolykhalova; Erica Volta; Simone Ghisio; George Waddell; Paolo Alborno; Stefano Piana; Corrado Canepa; Rafael Ramirez-Melendez

Learning to play a musical instrument is a difficult task, mostly based on the master-apprentice model. Technologies are rarely employed and are usually restricted to audio and video recording and playback. Nevertheless, multimodal interactive systems can complement current learning and teaching practice by offering students guidance during self-study and by helping teachers and students focus on details that would otherwise be difficult to appreciate from usual audiovisual recordings. This paper introduces a multimodal corpus consisting of recordings of expert models of success, provided by four professional violin performers. The corpus is publicly available on the repoVizz platform and includes synchronized audio, video, motion capture, and physiological (EMG) data. It represents the reference archive for the EU-H2020-ICT Project TELMI, an international research project investigating how we learn musical instruments from a pedagogical and scientific perspective, and how to develop new interactive, assistive, self-learning, augmented-feedback, and socially-aware systems to support musical instrument learning and teaching.


Advanced Visual Interfaces | 2016

Analysis of Intrapersonal Synchronization in Full-Body Movements Displaying Different Expressive Qualities

Paolo Alborno; Stefano Piana; Maurizio Mancini; Radoslaw Niewiadomski; Gualtiero Volpe; Antonio Camurri

Intrapersonal synchronization of limb movements is a relevant feature for assessing coordination of motor behavior. In this paper, we show that it can also distinguish between full-body movements performed with different expressive qualities, namely rigidity, fluidity, and impulsivity. For this purpose, we collected a dataset of movements performed by professional dancers and annotated the perceived movement qualities with the help of a group of experts in expressive movement analysis. We computed intrapersonal synchronization by applying the Event Synchronization algorithm to the time series of the speed of arms and hands. Results show that movements performed with different qualities display a significantly different amount of intrapersonal synchronization: impulsive movements are the most synchronized, fluid ones show the lowest values of synchronization, and rigid ones lie in between.
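The Event Synchronization algorithm (after Quian Quiroga and colleagues) scores how often events in one time series occur within a short window of events in the other. A minimal symmetric variant, simplified from the original formulation; this is an illustrative sketch, not the paper's exact code:

```python
import numpy as np

def event_synchronization(ev_x, ev_y, tau=3):
    """Symmetric event-synchronization score in [0, 1] for two lists of
    event times (sample indices). 1.0 means every event in either series
    has a counterpart in the other within +/- tau samples."""
    ev_x = np.asarray(ev_x, dtype=float)
    ev_y = np.asarray(ev_y, dtype=float)
    if ev_x.size == 0 or ev_y.size == 0:
        return 0.0
    d = np.abs(ev_x[:, None] - ev_y[None, :])          # pairwise distances
    matched = (np.count_nonzero(d.min(axis=1) <= tau)  # x events near a y event
               + np.count_nonzero(d.min(axis=0) <= tau))  # and vice versa
    return matched / (ev_x.size + ev_y.size)
```

Applied to events extracted from the speed time series of the two arms, a higher score indicates tighter intrapersonal coordination, in line with the result that impulsive movements are the most synchronized.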


Advanced Visual Interfaces | 2018

A system to support non-IT researchers in the automated analysis of human movement

Paolo Alborno; Kelly Jakubowski; Antonio Camurri; Gualtiero Volpe

Analysis of human movement data is a core topic of many research studies in human-human and human-computer interaction. While automated movement analysis is often based on the application of sophisticated computer science techniques (e.g., motion tracking from video recordings), the interdisciplinary nature of research in this area requires tools that can be used by researchers who may not have advanced computer science expertise. This paper presents a system enabling users who are not necessarily computer scientists to perform motion tracking on a dataset of video recordings. The system, consisting of a set of freely downloadable tools accessible by means of user-friendly graphical interfaces, was designed, developed, and tested in the context of a project for automated analysis of entrainment in ensemble music performance, following the needs and requirements of musicologists and psychologists.


Proceedings of the 5th International Conference on Movement and Computing | 2018

The Energy Lift: automated measurement of postural tension and energy transmission

Antonio Camurri; Gualtiero Volpe; Stefano Piana; Maurizio Mancini; Paolo Alborno; Simone Ghisio

This abstract presents a computational model and a software library for the EyesWeb XMI platform to measure a mid-level movement quality of particular importance for conveying expressivity: Postural Tension. A whole-body posture can be described by a vector containing the angles between the adjacent lines identifying the feet (the line connecting the barycentres of the two feet), knees, hips, trunk, shoulders, head, and gaze (eye direction). Postural Tension is the extent to which a movement exhibits rotation of these multiple horizontal planes, including spirals. The abstract presents a definition of this mid-level quality and describes a demonstration: the movement of a user is captured with a low-cost wearable device, and postural tension and the transmission of energy through the body are then extracted, visualized, and sonified.
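The posture vector described above, built from the orientations of body lines projected on the horizontal plane, can be sketched from 3-D joint positions. The joint layout and function names here are illustrative assumptions; the actual EyesWeb XMI library is not reproduced:

```python
import numpy as np

def horizontal_angle(left, right):
    """Orientation (radians) of the left-to-right joint line projected
    onto the horizontal plane (x, z); points are (x, y, z) with y up."""
    dx = right[0] - left[0]
    dz = right[2] - left[2]
    return np.arctan2(dz, dx)

def postural_rotation_vector(pairs):
    """Absolute angle differences between the lines of adjacent body
    levels (e.g., feet vs. knees, knees vs. hips), wrapped to [0, pi].
    Larger values indicate a more rotated, spiral-like posture.

    pairs: ordered list of (left_point, right_point) tuples, feet to head.
    """
    angles = [horizontal_angle(l, r) for l, r in pairs]
    diffs = np.abs(np.diff(angles))
    return np.minimum(diffs, 2 * np.pi - diffs)  # wrap to [0, pi]

# Aligned posture: feet, hips, shoulders all parallel -> no rotation.
aligned = [((-1, 0, 0), (1, 0, 0)),
           ((-1, 1, 0), (1, 1, 0)),
           ((-1, 2, 0), (1, 2, 0))]
# Twisted posture: shoulders rotated 45 degrees about the vertical axis.
c = np.cos(np.pi / 4)
twisted = aligned[:2] + [((-c, 2, -c), (c, 2, c))]
```

The spiral-like postures the abstract mentions would show up as large consecutive entries in this vector.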


Medical Informatics Europe | 2017

What cognitive and affective states should technology monitor to support learning?

Temitayo A. Olugbade; Luigi F. Cuturi; Giulia Cappagli; Erica Volta; Paolo Alborno; Joseph W. Newbold; Nadia Bianchi-Berthouze; Gabriel Baud-Bovy; Gualtiero Volpe; Monica Gori

This paper discusses self-efficacy, curiosity, and reflectivity as cognitive and affective states that are critical to learning but are overlooked in the context of affect-aware technology for learning. This discussion sits within the opportunities offered by the weDRAW project, which takes an embodied approach to the design of technology for supporting exploration and learning of mathematical concepts. We first review existing literature to clarify how the three states facilitate learning and how, if not supported, they may instead hinder learning. We then review the literature to understand how bodily expressions communicate these states and how technology could be used to monitor them. We conclude by presenting initial movement cues currently explored in the context of weDRAW.


Medical Informatics Europe | 2017

An open platform for full-body multisensory serious-games to teach geometry in primary school

Simone Ghisio; Erica Volta; Paolo Alborno; Monica Gori; Gualtiero Volpe

Recent results from psychophysics and developmental psychology show that children have a preferential sensory channel for learning specific concepts. In this work, we explore the possibility of developing and evaluating novel multisensory technologies for deeper learning of arithmetic and geometry. The main novelty of these technologies comes from a renewed understanding of the role of communication between sensory modalities during development, namely that specific sensory systems have specific roles in learning specific concepts. This understanding suggests that it is possible to open a new teaching/learning channel, personalized for each student based on the child's sensory skills. We present and discuss multisensory interactive technologies that exploit full-body movement interaction, including a hardware and software platform to support this approach. The platform is part of a more general framework developed in the context of the EU-ICT-H2020 weDRAW Project, which aims to develop new multimodal technologies for multisensory serious games to teach mathematics concepts in primary school.


Medical Informatics Europe | 2017

A multimodal serious-game to teach fractions in primary school

Simone Ghisio; Paolo Alborno; Erica Volta; Monica Gori; Gualtiero Volpe

Multisensory learning has long been considered a relevant pedagogical framework for education, and several authors support the use of a multisensory and kinesthetic approach in children's learning. Moreover, results from psychophysics and developmental psychology show that children have a preferential sensory channel for learning specific concepts (spatial and/or temporal), providing further evidence of the need for a multisensory approach. In this work, we present an example of a serious game for learning a particularly complicated mathematical concept: fractions. The main novelty of our proposal comes from the role played by communication between sensory modalities, in particular movement, vision, and sound. The game has been developed in the context of the EU-ICT-H2020 weDRAW Project, which aims at developing new multimodal technologies for multisensory serious games on mathematical concepts for primary school children.

Collaboration


Dive into Paolo Alborno's collaboration.

Top Co-Authors
Monica Gori

Istituto Italiano di Tecnologia


Gabriel Baud-Bovy

Istituto Italiano di Tecnologia
