Publications


Featured research published by Simone Ghisio.


New Interfaces for Musical Expression | 2007

Developing multimodal interactive systems with EyesWeb XMI

Antonio Camurri; Paolo Coletta; Giovanna Varni; Simone Ghisio

EyesWeb XMI (for eXtended Multimodal Interaction) is the new version of the well-known EyesWeb platform. It focuses on multimodality, and the main design target of this release has been to improve the ability to process and correlate multiple streams of data. The platform has been used extensively to build a set of interactive systems for performing arts applications at Festival della Scienza 2006, Genoa, Italy. The purpose of this paper is to describe the developed installations as well as the new EyesWeb features that helped in their development.


Gesture-Based Human-Computer Interaction and Simulation | 2009

Automatic Classification of Expressive Hand Gestures on Tangible Acoustic Interfaces According to Laban's Theory of Effort

Antonio Camurri; Corrado Canepa; Simone Ghisio; Gualtiero Volpe

Tangible Acoustic Interfaces (TAIs) exploit the propagation of sound in physical objects in order to localize touch positions and to analyse users' gestures on the object. Designing and developing TAIs consists of exploring how physical objects, augmented surfaces, and spaces can be transformed into tangible-acoustic embodiments of natural, seamless, unrestricted interfaces. Our research focuses on Expressive TAIs, i.e., TAIs capable of processing users' expressive gestures and providing users with natural multimodal interfaces that fully exploit expressive, emotional content. This paper presents a concrete example of the analysis of expressive gesture in TAIs: hand gestures on a TAI surface are classified according to the Space and Time dimensions of Rudolf Laban's Theory of Effort. Research started in the EU-IST Project TAI-CHI (Tangible Acoustic Interfaces for Computer-Human Interaction) and is currently continuing in the EU-ICT Project SAME (Sound and Music for Everyone, Everyday, Everywhere, Every way, www.sameproject.eu). Expressive gesture analysis and multimodal and cross-modal processing are achieved in the new EyesWeb XMI open platform (available at www.eyesweb.org) by means of a new version of the EyesWeb Expressive Gesture Processing Library.


Proceedings of the 3rd International Symposium on Movement and Computing | 2016

Towards a Multimodal Repository of Expressive Movement Qualities in Dance

Stefano Piana; Paolo Coletta; Simone Ghisio; Radoslaw Niewiadomski; Maurizio Mancini; Roberto Sagoleo; Gualtiero Volpe; Antonio Camurri

In this paper, we present a new multimodal repository for the analysis of expressive movement qualities in dance. First, we discuss the guidelines and methodology that we applied to create this repository. Next, the technical setup of the recordings and the platform for capturing synchronized audio-visual, physiological, and motion capture data are presented. The initial content of the repository consists of about 90 minutes of short dance performances, movement sequences, and improvisations performed by four dancers, displaying three expressive qualities: Fluidity, Impulsivity, and Rigidity.


Proceedings of the 12th Biannual Conference on Italian SIGCHI Chapter | 2017

A multimodal corpus for technology-enhanced learning of violin playing

Gualtiero Volpe; Ksenia Kolykhalova; Erica Volta; Simone Ghisio; George Waddell; Paolo Alborno; Stefano Piana; Corrado Canepa; Rafael Ramirez-Melendez

Learning to play a musical instrument is a difficult task, mostly based on the master-apprentice model. Technologies are rarely employed and are usually restricted to audio and video recording and playback. Nevertheless, multimodal interactive systems can complement actual learning and teaching practice, by offering students guidance during self-study and by helping teachers and students focus on details that would otherwise be difficult to appreciate from standard audiovisual recordings. This paper introduces a multimodal corpus consisting of recordings of expert models of success, provided by four professional violin performers. The corpus is publicly available on the repoVizz platform and includes synchronized audio, video, motion capture, and physiological (EMG) data. It represents the reference archive for the EU-H2020-ICT Project TELMI, an international research project investigating how we learn musical instruments from a pedagogical and scientific perspective, and how to develop new interactive, assistive, self-learning, augmented-feedback, and socially-aware systems to support musical instrument learning and teaching.


Intelligent Technologies for Interactive Entertainment | 2011

User-Centered Evaluation of the Virtual Binocular Interface

Donald Glowinski; Maurizio Mancini; Paolo Coletta; Simone Ghisio; Carlo Chiorri; Antonio Camurri; Gualtiero Volpe

This paper describes a full-body pointing interface based on mimicking the use of binoculars, the Virtual Binocular Interface. This interface is a component of the interactive installation “Viaggiatori di Sguardo”, located at Palazzo Ducale, Genova, Italy, and experienced by more than 5,000 visitors so far. This paper focuses on the evaluation of this interface.


Proceedings of the 5th International Conference on Movement and Computing | 2018

The Energy Lift: automated measurement of postural tension and energy transmission

Antonio Camurri; Gualtiero Volpe; Stefano Piana; Maurizio Mancini; Paolo Alborno; Simone Ghisio

This abstract presents a computational model and a software library for the EyesWeb XMI platform to measure a mid-level movement quality of particular importance for conveying expressivity: Postural Tension. A whole-body posture can be described by a vector containing the angles between the adjacent lines identifying the feet (the line connecting the barycentre of each foot), knees, hips, trunk, shoulders, head, and gaze (eye direction). Postural Tension is the extent to which a movement exhibits rotation of these multiple horizontal planes, including spirals. The abstract presents a definition of this mid-level quality and describes a demonstration: the movement of a user is captured with a low-cost wearable device, and postural tension and the transmission of energy through the body are then extracted, visualized, and sonified.
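The abstract describes posture as a vector of angles between adjacent horizontal body lines. As a rough illustration of the kind of computation involved (a hypothetical sketch, not the authors' actual EyesWeb XMI library; joint names and coordinates are assumptions), one can compute the orientation of each left-right joint pair in the floor plane and sum the torsion between adjacent planes:

```python
import math

def line_angle(p_left, p_right):
    """Orientation (radians) of the line through a left/right joint pair,
    projected onto the floor plane (x, z)."""
    dx = p_right[0] - p_left[0]
    dz = p_right[1] - p_left[1]
    return math.atan2(dz, dx)

def posture_vector(joints):
    """Vector of angles between adjacent body lines, from a hypothetical
    dict of (x, z) floor-plane coordinates per joint."""
    lines = [
        line_angle(joints["l_foot"], joints["r_foot"]),
        line_angle(joints["l_knee"], joints["r_knee"]),
        line_angle(joints["l_hip"], joints["r_hip"]),
        line_angle(joints["l_shoulder"], joints["r_shoulder"]),
    ]
    # Differences between adjacent line orientations: non-zero values
    # indicate torsion (one horizontal plane rotated against the next).
    return [b - a for a, b in zip(lines, lines[1:])]

def postural_tension(joints):
    """Crude scalar index: total absolute torsion across adjacent planes."""
    return sum(abs(a) for a in posture_vector(joints))
```

A squared stance yields an index near zero, while shoulders rotated against the hips (a spiral) drive it up.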


Proceedings of the 5th International Conference on Movement and Computing | 2018

Enhancing Music Learning with Smart Technologies

Rafael Ramirez; Corrado Canepa; Simone Ghisio; Ksenia Kolykhalova; Maurizio Mancini; Erica Volta; Gualtiero Volpe; Sergio Giraldo; Oscar Mayor; Alfonso Pérez; George Waddell; Aaron Williamon

Learning to play a musical instrument is a difficult task, requiring the development of sophisticated skills. Nowadays, such a learning process is mostly based on the master-apprentice model. Technologies are rarely employed and are usually restricted to audio and video recording and playback. The TELMI (Technology Enhanced Learning of Musical Instrument Performance) Project seeks to design and implement new interaction paradigms for music learning and training based on state-of-the-art multimodal (audio, image, video, and motion) technologies. The project focuses on the violin as a case study. This practice work is intended as a demo, showing MOCO attendees the results the project has obtained over two years of work. The demo simulates a setup at a higher-education music institution, where attendees with any level of previous violin experience (or even none at all) are invited to try the technologies themselves, performing basic tests of violin skill and pre-defined exercises under the guidance of the researchers involved in the project.


Medical Informatics Europe | 2017

An open platform for full-body multisensory serious-games to teach geometry in primary school

Simone Ghisio; Erica Volta; Paolo Alborno; Monica Gori; Gualtiero Volpe

Recent results from psychophysics and developmental psychology show that children have a preferential sensory channel for learning specific concepts. In this work, we explore the possibility of developing and evaluating novel multisensory technologies for deeper learning of arithmetic and geometry. The main novelty of these technologies comes from a renewed understanding of the role of communication between sensory modalities during development, namely that specific sensory systems play specific roles in learning specific concepts. This understanding suggests that it is possible to open a new teaching/learning channel, personalized for each student based on the child's sensory skills. Multisensory interactive technologies exploiting full-body movement interaction, including a hardware and software platform supporting this approach, are presented and discussed. The platform is part of a more general framework developed in the context of the EU-ICT-H2020 weDRAW Project, which aims to develop new multimodal technologies for multisensory serious games to teach mathematical concepts in primary school.


Medical Informatics Europe | 2017

A multimodal serious-game to teach fractions in primary school

Simone Ghisio; Paolo Alborno; Erica Volta; Monica Gori; Gualtiero Volpe

Multisensory learning has been considered a relevant pedagogical framework for education for a very long time, and several authors support the use of a multisensory and kinesthetic approach in children's learning. Moreover, results from psychophysics and developmental psychology show that children have a preferential sensory channel for learning specific concepts (spatial and/or temporal), providing further evidence of the need for a multisensory approach. In this work, we present an example of a serious game for learning a particularly complicated mathematical concept: fractions. The main novelty of our proposal comes from the role played by communication between sensory modalities, in particular movement, vision, and sound. The game has been developed in the context of the EU-ICT-H2020 weDRAW Project, which aims at developing new multimodal technologies for multisensory serious games on mathematical concepts for primary school children.


Intelligent Technologies for Interactive Entertainment | 2015

An open platform for full body interactive sonification exergames

Simone Ghisio; Paolo Coletta; Stefano Piana; Paolo Alborno; Gualtiero Volpe; Antonio Camurri; Ludovica Primavera; Carla Ferrari; Carla Maria Guenza; Paolo Moretti; Valeria Bergamaschi; Andrea Ravaschio

This paper addresses the use of a remote interactive platform to support home-based rehabilitation for children with motor and cognitive impairments. The interaction between user and platform is achieved through customizable full-body interactive serious games (exergames). These exergames perform real-time analysis of multimodal signals to quantify movement qualities and postural attitudes. Interactive sonification of movement is then applied to provide real-time feedback based on “aesthetic resonance” and the engagement of the children. The games also provide log-file recordings that therapists can use to assess the children's performance and the effectiveness of the games. The platform allows the games to be customized to address each child's needs. It is based on the EyesWeb XMI software, and the games are designed for home use, based on Kinect for Xbox One and simple sensors, including the 3-axis accelerometers available in low-cost Android smartphones.
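The exergames described above sonify movement in real time from sensors such as smartphone accelerometers. As a rough, hypothetical illustration of one common sonification pattern (not the project's actual EyesWeb XMI implementation; the class name and parameters are assumptions), a movement-energy feature can be derived from 3-axis acceleration, smoothed, and mapped to a sound parameter such as pitch:

```python
import math

def energy(ax, ay, az, g=9.81):
    """Movement-energy proxy: deviation of the acceleration magnitude
    from gravity (zero when the device is at rest)."""
    return abs(math.sqrt(ax * ax + ay * ay + az * az) - g)

class EnergyToPitch:
    """Map smoothed movement energy to a MIDI-style note number
    (hypothetical mapping for illustration)."""

    def __init__(self, low=48, high=84, alpha=0.2, max_energy=20.0):
        self.low, self.high = low, high   # output pitch range (MIDI notes)
        self.alpha = alpha                # exponential-smoothing factor
        self.max_energy = max_energy      # energy that reaches the top pitch
        self.smoothed = 0.0

    def step(self, ax, ay, az):
        e = energy(ax, ay, az)
        self.smoothed += self.alpha * (e - self.smoothed)  # low-pass filter
        t = min(self.smoothed / self.max_energy, 1.0)      # normalise to [0, 1]
        return round(self.low + t * (self.high - self.low))
```

At rest the mapper emits the lowest pitch; vigorous movement drives the pitch upward, giving the child immediate auditory feedback on movement intensity.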

Collaboration


Dive into Simone Ghisio's collaborations.

Top Co-Authors
Monica Gori

Istituto Italiano di Tecnologia
