Publication


Featured research published by Nicolas D'Alessandro.


International Conference on Acoustics, Speech, and Signal Processing | 2013

A comparative study of pitch extraction algorithms on a large variety of singing sounds

Onur Babacan; Thomas Drugman; Nicolas D'Alessandro; Nathalie Henrich; Thierry Dutoit

The problem of pitch tracking has been extensively studied in the speech research community. The goal of this paper is to investigate how these techniques should be adapted to singing voice analysis, and to provide a comparative evaluation of the most representative state-of-the-art approaches. This study is carried out on a large database of annotated singing sounds with aligned EGG recordings, comprising a variety of singer categories and singing exercises. Algorithmic performance is assessed in terms of the ability to detect voicing boundaries and to accurately estimate the pitch contour. First, we evaluate the usefulness of adapting existing methods to singing voice analysis. Then we compare the accuracy of several pitch-extraction algorithms depending on singer category and laryngeal mechanism. Finally, we analyze their robustness to reverberation.
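
The paper reports voicing-boundary detection and pitch accuracy; the sketch below computes the standard scores used for such comparisons in the pitch-tracking literature (voicing decision error, gross and fine pitch error). It is a minimal illustration, not the authors' evaluation code, and the data layout (`f0_ref`, `f0_est` as frame-aligned F0 contours with 0 marking unvoiced frames) is an assumption.

```python
import numpy as np

def pitch_metrics(f0_ref, f0_est, gross_threshold=0.2):
    """Standard pitch-tracking scores on frame-aligned F0 contours.

    f0_ref, f0_est: arrays of F0 in Hz, 0 for unvoiced frames.
    gross_threshold: relative deviation counted as a gross error (20%).
    """
    f0_ref = np.asarray(f0_ref, dtype=float)
    f0_est = np.asarray(f0_est, dtype=float)
    voiced_ref = f0_ref > 0
    voiced_est = f0_est > 0

    # Voicing Decision Error: fraction of frames with a wrong voiced/unvoiced label.
    vde = np.mean(voiced_ref != voiced_est)

    # Gross Pitch Error: among frames both tracks call voiced, the fraction
    # deviating from the reference by more than the threshold.
    both = voiced_ref & voiced_est
    rel_dev = np.abs(f0_est[both] - f0_ref[both]) / f0_ref[both]
    gpe = np.mean(rel_dev > gross_threshold) if both.any() else 0.0

    # Fine Pitch Error: mean absolute relative deviation on the remaining frames.
    fine = rel_dev[rel_dev <= gross_threshold]
    fpe = np.mean(fine) if fine.size else 0.0

    return {"VDE": vde, "GPE": gpe, "FPE": fpe}
```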


Spoken Language Technology Workshop | 2012

Reactive and continuous control of HMM-based speech synthesis

Maria Astrinaki; Nicolas D'Alessandro; Benjamin Picart; Thomas Drugman; Thierry Dutoit

In this paper, we present a modified version of HTS, called performative HTS or pHTS. The objective of pHTS is to enhance the control ability and reactivity of HTS. pHTS reduces the phonetic context used for training the models and generates the speech parameters within a 2-label window. Speech waveforms are generated on the fly and the models can be reactively modified, impacting the synthesized speech with a delay of only one phoneme. It is shown that HTS and pHTS have comparable output quality. We use this new system to achieve reactive model interpolation and conduct a new test in which the degree of articulation is modified within the sentence.
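
A minimal sketch of the 2-label sliding-window idea described above: parameters for a phoneme are generated as soon as its successor label is known, and the models are re-read at each step so that external modifications take effect one phoneme later. All names here (`label_stream`, `generate_params`, `vocode`, `get_model`) are hypothetical placeholders; the actual pHTS implementation is built on HTS, not on this Python interface.

```python
from collections import deque

def reactive_synthesis(label_stream, generate_params, vocode, get_model):
    """Conceptual sketch of pHTS-style reactive generation (hypothetical API).

    label_stream:    iterator of phonetic labels arriving in real time
    generate_params: function(model, window) -> speech parameters for the
                     older label in a 2-label window
    vocode:          function(params) -> waveform chunk
    get_model:       function() -> current (possibly just modified) models
    """
    window = deque(maxlen=2)  # lookahead restricted to a 2-label window
    for label in label_stream:
        window.append(label)
        if len(window) == 2:
            # The model is re-read at every step, so external modifications
            # (e.g. interpolation weights) affect synthesis one phoneme later.
            model = get_model()
            params = generate_params(model, tuple(window))
            yield vocode(params)
```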


Human Factors in Computing Systems | 2016

Human-Centred Machine Learning

Marco Gillies; Rebecca Fiebrink; Atau Tanaka; Jérémie Garcia; Frédéric Bevilacqua; Alexis Heloir; Fabrizio Nunnari; Wendy E. Mackay; Saleema Amershi; Bongshin Lee; Nicolas D'Alessandro; Joëlle Tilmanne; Todd Kulesza; Baptiste Caramiaux

Machine learning is one of the most important and successful techniques in contemporary computer science. It involves the statistical inference of models (such as classifiers) from data. It is often conceived in a very impersonal way, with algorithms working autonomously on passively collected data. However, this viewpoint hides considerable human work of tuning the algorithms, gathering the data, and even deciding what should be modeled in the first place. Examining machine learning from a human-centered perspective includes explicitly recognizing this human work, as well as reframing machine learning workflows based on situated human working practices, and exploring the co-adaptation of humans and systems. A human-centered understanding of machine learning in its human context can lead not only to more usable machine learning tools, but to new ways of framing learning computationally. This workshop will bring together researchers to discuss these issues and suggest future research questions aimed at creating a human-centered approach to machine learning.


Content-Based Multimedia Indexing | 2009

AudioCycle: Browsing Musical Loop Libraries

Stéphane Dupont; Thomas Dubuisson; Jérôme Urbain; Raphaël Sebbe; Nicolas D'Alessandro; Christian Frisson

This paper presents AudioCycle, a prototype application for browsing through music loop libraries. AudioCycle provides the user with a graphical view where the audio extracts are visualized and organized according to their similarity in terms of musical properties, such as timbre, harmony, and rhythm. The user is able to navigate in this visual representation and listen to individual audio extracts, searching for those of interest. AudioCycle draws from a range of technologies, including audio analysis from music information retrieval research, 3D visualization, spatial auditory rendering, and audio time-scaling and pitch modification. The proposed approach builds on previously described music and audio browsers. Concepts developed here will be of interest to DJs, remixers, musicians, and soundtrack composers, as well as sound designers and foley artists. Possible extensions to multimedia libraries are also suggested.
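
The core of such a browser is mapping each loop to a point whose distance to other points reflects musical similarity. Below is a minimal, timbre-only sketch of that idea, summarizing each loop by MFCC statistics and projecting the collection to two dimensions; AudioCycle's actual feature set also covers harmony and rhythm, and the file layout (`loops/*.wav`) is an assumption.

```python
import glob

import librosa
import numpy as np
from sklearn.decomposition import PCA

def loop_map(paths, n_mfcc=13):
    """Place audio loops on a 2-D similarity map (timbre only).

    Each loop is summarized by the mean and std of its MFCCs, then the
    collection is projected to two dimensions for browsing.
    """
    feats = []
    for path in paths:
        y, sr = librosa.load(path, mono=True)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
        feats.append(np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)]))
    coords = PCA(n_components=2).fit_transform(np.array(feats))
    return dict(zip(paths, coords))

if __name__ == "__main__":
    positions = loop_map(glob.glob("loops/*.wav"))
    for path, (x, y) in positions.items():
        print(f"{path}: ({x:.2f}, {y:.2f})")
```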


IEEE International Conference on Automatic Face and Gesture Recognition | 2015

An HMM-based speech-smile synthesis system: An approach for amusement synthesis

Kevin El Haddad; Stéphane Dupont; Nicolas D'Alessandro; Thierry Dutoit

This paper presents an HMM-based speech-smile synthesis system. To this end, databases of three speech styles were recorded. This system was used to study to what extent synthesized speech-smiles (defined as Duchenne smiles in our work) and spread-lips (speech modulated by spreading the lips) communicate amusement. Our evaluation results showed that the synthesized speech-smile sentences are perceived as more amused than the spread-lips ones. An acoustic analysis of the pitch and the first two formants is also provided.


European Signal Processing Conference | 2015

MotionMachine: A new framework for motion capture signal feature prototyping

Joëlle Tilmanne; Nicolas D'Alessandro

Motion capture (mocap) is rapidly evolving and embraced by a growing community in various research areas. However, a common problem is the high dimensionality of mocap data and the difficulty of extracting and understanding meaningful features. In this paper, we propose MotionMachine, a framework for the rapid prototyping of feature sets that helps overcome the problem of mocap feature understanding through interactive visualisation of both the features and the 3D scene. The framework is designed to be flexible with respect to input data formats and to work both offline and in real time. The feature extraction modules are written in C++ so that they can be used both for visualisation within MotionMachine and for integration into end-user applications or communication with other existing software. We present two use cases in which the main features of this framework have been successfully tested.
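
To illustrate the kind of feature module such a framework prototypes, here is a minimal example: a per-frame joint speed feature computed from a 3-D position trace. MotionMachine itself is C++; this Python version, with its assumed `(n_frames, 3)` array layout, is only a sketch of the concept.

```python
import numpy as np

def joint_speed(positions, frame_rate):
    """Per-frame speed of one joint from its 3-D position trace.

    positions:  array of shape (n_frames, 3), metres
    frame_rate: capture rate in Hz
    Returns an array of shape (n_frames - 1,) in metres per second.
    """
    deltas = np.diff(positions, axis=0)  # displacement between frames
    return np.linalg.norm(deltas, axis=1) * frame_rate

# Derived features can be stacked on top of primitive ones, e.g. the
# proportion of frames where a hand moves faster than 1 m/s:
# activity = np.mean(joint_speed(right_hand, 120.0) > 1.0)
```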


Proceedings of the 3rd International Symposium on Movement and Computing | 2016

A Novel Tool for Motion Capture Database Factor Statistical Exploration

Mickaël Tits; Joëlle Tilmanne; Nicolas D'Alessandro

The recent rise of Motion Capture (MoCap) technologies provides new possibilities, but also new challenges, in human motion analysis. Indeed, the analysis of a motion database is a complex task, due to the high dimensionality of motion data and the number of independent factors that can affect movements. We addressed the first issue in some of our earlier work by developing MotionMachine, a framework that helps overcome the problem of motion data interpretation through feature extraction and interactive visualization [20]. In this paper, we address the question of the relations between movements and some of the various factors (social, psychological, physiological, etc.) that can influence them. To that end, we propose a tool for rapid factor analysis of a MoCap database. This tool allows statistical exploration of the effect of any factor in the database on motion features. As a use case of this work, we present the analysis of a database of improvised contemporary dance, demonstrating the capabilities and usefulness of our tool.
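
The underlying statistical question is whether a motion feature's distribution differs across the levels of a factor. A minimal sketch of that test, running a one-way ANOVA per feature, is given below; the paper's tool may use different statistics, and the data layout (one feature value and one factor label per recording) is an assumption.

```python
import numpy as np
from scipy.stats import f_oneway

def factor_effects(features, factor):
    """One-way ANOVA of each motion feature against one database factor.

    features: dict mapping feature name -> array of one value per recording
    factor:   array of factor labels (e.g. dance style), one per recording
    Returns {feature name: p-value}; small p-values flag features whose
    distribution differs across factor levels.
    """
    factor = np.asarray(factor)
    levels = np.unique(factor)
    results = {}
    for name, values in features.items():
        values = np.asarray(values, dtype=float)
        groups = [values[factor == lv] for lv in levels]
        results[name] = f_oneway(*groups).pvalue
    return results
```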


Human Factors in Computing Systems | 2014

Designing speech and language interactions

Cosmin Munteanu; Matt Jones; Steve Whittaker; Sharon Oviatt; Matthew P. Aylett; Gerald Penn; Stephen A. Brewster; Nicolas D'Alessandro


Proceedings of the 2014 International Workshop on Movement and Computing | 2014

Hidden Markov Model Based Real-Time Motion Recognition and Following

Thierry Ravet; Joëlle Tilmanne; Nicolas D'Alessandro


Conference of the International Speech Communication Association | 2013

A quantitative comparison of glottal closure instant estimation algorithms on a large variety of singing sounds

Onur Babacan; Thomas Drugman; Nicolas D'Alessandro; Nathalie Henrich; Thierry Dutoit

Collaboration


Dive into Nicolas D'Alessandro's collaboration.

Top Co-Authors

Johnty Wang

University of British Columbia


Sidney S. Fels

University of British Columbia


Thomas Dubuisson

Faculté polytechnique de Mons
