Damien Tardieu
University of Mons
Publications
Featured research published by Damien Tardieu.
Journal on Multimodal User Interfaces | 2010
Damien Tardieu; Xavier Siebert; Barbara Mazzarino; Ricardo Chessini; Julien Dubois; Stéphane Dupont; Giovanna Varni; Alexandra Visentin
In this article we present a system for content-based browsing of a dance video database. A set of features describing dance is proposed, quantifying both the local gestures of the dancer and the global usage of the stage. These features are used to compute similarities between recorded dance improvisations, which in turn guide the visual exploration in the browsing methods presented here. The software integrating all of these components is part of an interactive touch-screen installation and is also accessible online in association with an artistic project.
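The abstract's browsing pipeline rests on pairwise similarities between feature vectors describing each recorded improvisation. A minimal sketch of that step, assuming each recording is already reduced to a numeric feature vector (the descriptor names here, e.g. gesture and stage-usage statistics, are hypothetical, not the paper's actual feature set):

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def similarity_matrix(recordings):
    """Pairwise similarities between recordings, each given as a
    feature vector (hypothetically: local gesture statistics plus
    global stage-usage measures)."""
    return [[cosine(a, b) for b in recordings] for a in recordings]
```

A browsing interface could then rank, for any selected recording, all other recordings by their row in this matrix; the actual similarity measure used in the installation may of course differ.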
international conference on multimedia and expo | 2010
Damien Tardieu; Xavier Siebert; Stéphane Dupont; Barbara Mazzarino; B. Blumenthal
In this paper, we present an interactive installation that allows users to navigate through a collection of dance performances. The collection, recorded specifically for the project, consists of improvisations by professional dancers of any style or technique within a precise context: two minutes, a defined space, fixed lighting. We describe a dedicated user interface designed for easy and instructive navigation through the collection, using either textual tags provided by the dancers or automatically extracted features of the dance motion.
audio mostly conference | 2010
Cécile Picard; Christian Frisson; Jean Vanderdonckt; Damien Tardieu; Thierry Dutoit
This paper presents a new approach to sound composition for soundtrack composers and sound designers. We propose a usable tool for sound manipulation and composition that targets sound variety and expressive rendering. We first automatically segment audio recordings into atomic grains, which are displayed on our navigation tool according to their signal properties. To perform the synthesis, the user selects one recording as a model for the rhythmic pattern and timbre evolution, together with a set of audio grains. Our synthesis system then processes the chosen sound material to create new sound sequences, based on onset detection in the recording model and similarity measurements between the model and the selected grains. With this method we can create a large variety of sound events, such as those encountered in virtual environments or other training simulations, as well as sound sequences that can be integrated into a music composition. We present a usability-minded interface that allows sound sequences to be manipulated and tuned in a way appropriate for sound design.
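The synthesis described above combines two ingredients: onsets detected in the model recording, and a best-matching grain chosen for each onset by feature similarity. A minimal sketch under stated assumptions: the onset detector below is a naive threshold crossing on an amplitude envelope, and the grain features are hypothetical vectors, both simplified stand-ins for the paper's actual analysis:

```python
def detect_onsets(envelope, threshold=0.5):
    """Naive onset detection on an amplitude envelope: report each
    index where the envelope crosses the threshold upward.
    (A simplified stand-in for the paper's onset detector.)"""
    onsets = []
    prev = 0.0
    for i, value in enumerate(envelope):
        if prev < threshold <= value:
            onsets.append(i)
        prev = value
    return onsets

def nearest_grain(target_features, grain_features):
    """Index of the grain whose (hypothetical) feature vector is
    closest to the model frame's features, by squared Euclidean
    distance; one such grain would be placed at each onset."""
    def dist(g):
        return sum((a - b) ** 2 for a, b in zip(target_features, g))
    return min(range(len(grain_features)), key=lambda i: dist(grain_features[i]))
```

A sequencer built on these two functions would walk the detected onsets of the model recording and, at each one, trigger the selected grain whose features best match that point in the model.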
new interfaces for musical expression | 2010
Christian Frisson; Benoît Macq; Stéphane Dupont; Xavier Siebert; Damien Tardieu; Thierry Dutoit
Journal of The Audio Engineering Society | 2010
Stéphane Dupont; Christian Frisson; Xavier Siebert; Damien Tardieu
Archive | 2009
Xavier Siebert; Stéphane Dupont; Philippe Fortemps; Damien Tardieu
Journal of The Audio Engineering Society | 2014
Emmanuel Deruty; Damien Tardieu
Archive | 2009
Damien Tardieu; Ricardo Chessini; Julien Dubois; Stéphane Dupont; Sullivan Hidot; Barbara Mazzarino; Alexis Moinet; Xavier Siebert; Giovanna Varni; Alessandra Visentin
Archive | 2010
Christian Frisson; Cécile Picard; Damien Tardieu
new interfaces for musical expression | 2010
Christian Frisson; Stéphane Dupont; Xavier Siebert; Damien Tardieu; Benoît Macq