Alessandro Carcangiu
University of Cagliari
Publications
Featured research published by Alessandro Carcangiu.
Proceedings of the ACM on Human-Computer Interaction | 2018
Alessandro Carcangiu; Lucio Davide Spano
The wide availability of touch-sensitive screens has fostered research in gesture recognition. The Machine Learning community has focused mainly on accuracy and robustness to noise, creating classifiers that precisely recognize gestures after they have been performed. The User Interface Engineering community, instead, has developed compositional gesture descriptions that model gestures and their sub-parts. These are suitable for building guidance systems, but they lack robust and accurate recognition support. In this paper, we establish a compromise between accuracy and the information provided by introducing G-Gene, a method for transforming compositional stroke-gesture definitions into profile Hidden Markov Models (HMMs), able to provide both good accuracy and information on gesture sub-parts. It supports online recognition without using any global feature, and it updates the information while receiving the input stream, with an accuracy useful for prototyping the interaction. We evaluated the approach in a user interface development task, showing that it requires less time and effort for creating guidance systems than common gesture classification approaches.
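The abstract above describes sequencing compositional gesture primitives into a single HMM. As a rough illustration of that sequencing idea only (the paper uses profile HMMs; the function names, state counts and probabilities below are illustrative assumptions, not taken from G-Gene), a left-to-right model per primitive can be chained by redirecting the final state of one model into the first state of the next:

```python
import numpy as np

def left_to_right_hmm(n_states, p_stay=0.5):
    """Transition matrix of a simple left-to-right HMM:
    each state either stays or advances to the next one."""
    A = np.zeros((n_states, n_states))
    for i in range(n_states - 1):
        A[i, i] = p_stay
        A[i, i + 1] = 1.0 - p_stay
    A[-1, -1] = 1.0  # absorbing final state
    return A

def sequence(A1, A2, p_bridge=0.5):
    """Compose two primitives in sequence: the final state of the
    first model hands over to the first state of the second."""
    n1, n2 = A1.shape[0], A2.shape[0]
    A = np.zeros((n1 + n2, n1 + n2))
    A[:n1, :n1] = A1
    A[n1:, n1:] = A2
    # redirect the absorbing mass of model 1 toward model 2
    A[n1 - 1, n1 - 1] = p_bridge
    A[n1 - 1, n1] = 1.0 - p_bridge
    return A

# e.g. a "stroke right" primitive followed by a "stroke up" primitive
A = sequence(left_to_right_hmm(3), left_to_right_hmm(3))
assert A.shape == (6, 6)
assert np.allclose(A.sum(axis=1), 1.0)  # rows remain stochastic
```

The composed matrix remains a valid left-to-right HMM, so the active state during decoding still indicates which sub-part of the gesture the user is performing.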
Computers & Graphics | 2018
Fabio Marco Caputo; Pietro Prebianca; Alessandro Carcangiu; Lucio Davide Spano; Andrea Giachetti
User interfaces based on mid-air gesture recognition are expected to become popular in the near future due to the increasing diffusion of virtual and mixed reality applications and smart devices. The design of this kind of interface would clearly be helped by the availability of simple and effective methods to compare short 3D trajectories, allowing fast and accurate recognition of command gestures given a few examples. This approach, quite popular in 2D touch-based interfaces with the so-called “dollar” algorithm family, has not been deeply investigated for 3D mid-air gestures. In this paper, we explore several metrics that can be used for mid-air gesture comparison and present experimental tests performed to analyze their effectiveness on practical tasks. By adopting smart choices in gesture trace processing and comparison, it was possible to obtain very good results in the retrieval and recognition of simple command gestures, from complete or even partial hand trajectories. The approach was also extended to recognize gestures characterized by both hand and finger motions and tested on a recent benchmark, reaching state-of-the-art performance.
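A minimal sketch of the “dollar”-style pipeline the abstract refers to, under assumed details (32 resampled points, centroid translation, unit scaling, average point-to-point distance, nearest-neighbour templates); the paper evaluates several metrics, and this reproduces only the general recipe:

```python
import numpy as np

def resample(points, n=32):
    """Resample a 3D polyline to n points equally spaced in arc length."""
    points = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    cum = np.concatenate([[0.0], np.cumsum(seg)])
    t = np.linspace(0.0, cum[-1], n)
    out = np.empty((n, 3))
    for d in range(3):
        out[:, d] = np.interp(t, cum, points[:, d])
    return out

def normalize(points, n=32):
    """Translate to the centroid and scale to unit size."""
    p = resample(points, n)
    p -= p.mean(axis=0)
    scale = np.linalg.norm(p, axis=1).max()
    return p / scale if scale > 0 else p

def distance(a, b):
    """Average point-to-point distance between normalized traces."""
    return np.linalg.norm(normalize(a) - normalize(b), axis=1).mean()

def classify(trace, templates):
    """Nearest neighbour over labelled template gestures."""
    return min(templates, key=lambda label: distance(trace, templates[label]))

# illustrative templates: a rightward and an upward hand stroke
line_x = [[t, 0, 0] for t in np.linspace(0, 1, 10)]
line_y = [[0, t, 0] for t in np.linspace(0, 1, 10)]
templates = {"swipe-right": line_x, "swipe-up": line_y}

noisy = [[t + 0.01, 0.005, 0] for t in np.linspace(0, 2, 15)]
assert classify(noisy, templates) == "swipe-right"
```

Because normalization removes position and scale, a longer or offset performance of the same stroke still matches its template, which is what makes the few-examples setting viable.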
International Conference on Image Analysis and Processing | 2017
Alessandro Carcangiu; Lucio Davide Spano; Giorgio Fumera; Fabio Roli
Gesture recognition approaches based on computer vision and machine learning focus mainly on recognition accuracy and robustness. Research on user interface development focuses instead on the orthogonal problem of providing guidance for performing and discovering interactive gestures, through compositional approaches that provide information on gesture sub-parts. We make a first step toward combining the advantages of both approaches. We introduce DEICTIC, a compositional and declarative gesture description model which uses basic Hidden Markov Models (HMMs) to recognize meaningful pre-defined primitives (gesture sub-parts), and uses a composition of basic HMMs to recognize complex gestures. Preliminary empirical results show that DEICTIC exhibits recognition performance similar to the “monolithic” HMMs used in state-of-the-art vision-based approaches, retaining at the same time the advantages of declarative approaches.
Engineering Interactive Computing Systems | 2016
Alessandro Carcangiu; Gianni Fenu; Lucio Davide Spano
In this paper, we introduce the MVIC pattern for creating multi-device and multimodal interfaces. We discuss the advantages of adding a new component to the MVC pattern for interfaces that must adapt to different devices and modalities. The proposed solution is based on an input model defining equivalent and complementary sequences of inputs for the same interaction. In addition, we discuss Djestit, a JavaScript library that allows developers to create multi-device and multimodal input models for web applications, applying the aforementioned pattern. The library supports the integration of multiple devices (Kinect 2, Leap Motion, touchscreens) and different modalities (gestural, vocal and touch).
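The equivalence operator mentioned above can be illustrated with a small hypothetical input model; the class names and event labels below are invented for illustration and are not the Djestit API:

```python
class Sequence:
    """Matches a fixed sequence of input events from one modality."""
    def __init__(self, *events):
        self.events = list(events)

    def matches(self, stream):
        return list(stream) == self.events

class Equivalent:
    """Equivalence operator: any of the alternative event sequences
    completes the same abstract interaction."""
    def __init__(self, *alternatives):
        self.alternatives = alternatives

    def matches(self, stream):
        return any(alt.matches(stream) for alt in self.alternatives)

# the same "confirm" interaction, performed by touch or by voice
confirm = Equivalent(
    Sequence("touch.down", "touch.up"),   # tap on a touchscreen
    Sequence("voice.ok"),                 # vocal command
)
assert confirm.matches(["touch.down", "touch.up"])
assert confirm.matches(["voice.ok"])
assert not confirm.matches(["touch.down"])
```

Because the controller only sees the abstract "confirm" interaction, the same application logic serves both modalities without device-specific branches.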
3DOR | 2016
Andrea Giachetti; Fabio Marco Caputo; Alessandro Carcangiu; Riccardo Scateni; Lucio Davide Spano
Despite the emerging importance of Virtual Reality and immersive interaction research, no papers on the application of 3D shape retrieval to this topic have been presented at recent 3D Object Retrieval workshops. In this paper we discuss how geometric processing and geometric shape retrieval methods could be extremely useful for implementing effective natural interaction systems for 3D immersive virtual environments. In particular, we discuss how reducing complex gesture recognition tasks to simple geometric retrieval ones could help solve open issues in gestural interaction. Algorithms for robust point description in trajectory data, with learning of inter-subject invariant features, could, for example, solve relevant issues of direct manipulation algorithms, and 3D object retrieval methods could be used as well to build dictionaries and implement guidance systems that maximize the usability of natural gestural interfaces.
International Journal of Human-Computer Studies | 2019
Alessandro Carcangiu; Lucio Davide Spano; Giorgio Fumera; Fabio Roli
The consumer-level devices that track the user’s gestures have eased the design and implementation of interactive applications relying on body movements as input. Gesture recognition based on computer vision and machine learning focuses mainly on accuracy and robustness. The resulting classifiers precisely label gestures after they have been performed, but they do not provide intermediate information during the execution. Human-Computer Interaction research focused instead on providing easy and effective guidance for performing and discovering interactive gestures. The compositional approaches developed for solving this problem provide information on both the whole gesture and its sub-parts, but they exploit heuristic techniques that have a low recognition accuracy. In this paper, we introduce DEICTIC, a compositional and declarative description for stroke gestures, which uses basic Hidden Markov Models (HMMs) to recognise meaningful predefined primitives (gesture sub-parts) and composes them to recognise complex gestures. It provides information for supporting gesture guidance and it reaches an accuracy comparable with state-of-the-art approaches, evaluated on two datasets from the literature. Through a developer evaluation, we show that implementing a guidance system with DEICTIC requires an effort comparable to compositional approaches, while the definition procedure and the perceived recognition accuracy are comparable to machine learning.
Proceedings of the ACM on Human-Computer Interaction | 2018
Matteo Serpi; Alessandro Carcangiu; Alessio Murru; Lucio Davide Spano
The availability of consumer-level devices for both visualising and interacting with Virtual Reality (VR) environments opens the opportunity to introduce more immersive contents and experiences, even on the web. Reaching a wider audience with VR applications in a web environment requires flexible adaptation to the different input and output devices currently available. This paper examines the required support and explores how to develop VR applications based on web technologies that can adapt to different VR devices. We summarize the main engineering challenges and describe a flexible framework for integrating and exploiting various VR devices for both input and output. Using this framework, we describe how we re-implemented four manipulation techniques from the literature to enable them within the same application, providing details on how we adapted their parts for different input and output devices such as Kinect and Leap Motion. Finally, we briefly examine the usability of the final application built with our framework.
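One common way to obtain this kind of device flexibility is an adapter layer that normalizes native device samples into a single abstract pose, so manipulation techniques are written once against the abstraction. The sketch below is a hypothetical illustration of that design choice, not the paper’s actual framework; all names and raw-sample fields are invented:

```python
from dataclasses import dataclass

@dataclass
class PointerPose:
    position: tuple   # (x, y, z) in scene coordinates
    grabbing: bool    # whether the user is currently selecting

class KinectAdapter:
    def poll(self, raw):
        # hypothetical raw sample: hand joint position + hand-closed flag
        return PointerPose(position=raw["hand"], grabbing=raw["closed"])

class LeapMotionAdapter:
    def poll(self, raw):
        # hypothetical raw sample: palm position + pinch strength
        return PointerPose(position=raw["palm"], grabbing=raw["pinch"] > 0.8)

def move_object(obj_pos, pose):
    """A manipulation technique written once against the abstraction."""
    return pose.position if pose.grabbing else obj_pos

kinect = KinectAdapter().poll({"hand": (0.1, 1.2, 0.4), "closed": True})
leap = LeapMotionAdapter().poll({"palm": (0.1, 1.2, 0.4), "pinch": 0.9})
assert move_object((0, 0, 0), kinect) == move_object((0, 0, 0), leap)
```

Swapping a device then only requires a new adapter; the manipulation techniques themselves remain untouched.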
Intelligent User Interfaces | 2017
Alessandro Carcangiu
Thanks to the newest tracking devices, users can now easily provide input through body movements. The available solutions present a mismatch: on the one hand, classifiers offer high precision, but their structure is difficult to inspect for providing feedback and feedforward. On the other hand, compositional approaches to gesture definition support decomposition, but with a low recognition precision. We introduce DEICTIC, a compositional and declarative gesture description that allows creating Hidden Markov Models (HMMs) for recognizing a gesture precisely, while providing information on its sub-components.
Eurographics Italian Chapter Conference | 2017
Marianna Saba; Fabio Sorrentino; Alessandro Muntoni; Sara Casti; Gianmarco Cherchi; Alessandro Carcangiu; Fabrizio Corda; Alessio Murru; Lucio Davide Spano; Riccardo Scateni; Ilaria Vitali; Ovidio Salvetti; Massimo Magrini; Andrea Villa; Andrea Carboni; Maria Antonietta Pascali
In this paper, we describe the design and implementation of the demonstrator for the Virtuoso project, which aims at creating seamless support for fitness and wellness activities in a tourist resort. We define the objectives of the user interface and the hardware and software setup, showing how we combined and exploited consumer-level devices to support 3D body scanning, contact-less acquisition of physical parameters, exercise guidance and operator support.