
Publication


Featured research published by Thierry Ravet.


Archive | 2009

Automatic Processing of EEG-EOG-EMG Artifacts in Sleep Stage Classification

S. Devuyst; Thierry Dutoit; Thierry Ravet; Patricia Stenuit; Myriam Kerkhofs; Etienne Stanus

In this paper, we present a series of algorithms for dealing with artifacts in electroencephalograms (EEG), electrooculograms (EOG) and electromyograms (EMG). The aim is to apply artifact correction whenever possible in order to lose a minimum of data, and to identify the remaining artifacts so as not to take them into account during sleep stage classification. Nine procedures were implemented to minimize cardiac interference and slow undulations, and to detect muscle artifacts, failing electrodes, 50/60 Hz mains interference, saturations, abrupt transitions, EOG interference and artifacts in the EOG. Detection methods were developed in the time domain as well as in the frequency domain, using adjustable parameters. A database of 20 excerpts of polysomnographic sleep recordings, annotated for artifacts by an expert, was available for developing (excerpts 1 to 10) and testing (excerpts 11 to 20) the automatic artifact detection algorithms. We obtained a global agreement rate of 96.06%, with a sensitivity of 83.67% and a specificity of 96.47%.
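The paper's nine detection procedures are not reproduced here, but the flavour of such detectors is easy to illustrate. The following is a minimal, hypothetical Python sketch (not the authors' implementation) of two of the simpler checks mentioned above: amplitude saturation, and mains interference estimated as the share of spectral power near the mains frequency. The sampling rate and the 0.5 threshold are assumptions for illustration.

```python
import numpy as np
from scipy.signal import welch

FS = 200  # sampling rate in Hz; an assumption, not taken from the paper

def is_saturated(epoch, flat_run=20):
    """Flag an epoch whose signal stays pinned at an extreme value
    for flat_run consecutive samples (a clipped amplifier)."""
    extreme = np.isclose(epoch, epoch.max()) | np.isclose(epoch, epoch.min())
    run = 0
    for hit in extreme:
        run = run + 1 if hit else 0
        if run >= flat_run:
            return True
    return False

def mains_ratio(epoch, mains_hz=50.0, bw=2.0):
    """Fraction of spectral power within +/- bw Hz of the mains frequency."""
    freqs, psd = welch(epoch, fs=FS, nperseg=min(len(epoch), 512))
    band = (freqs >= mains_hz - bw) & (freqs <= mains_hz + bw)
    return psd[band].sum() / psd.sum()

# Example: mark a 1-second epoch as artifacted if either detector fires.
epoch = np.random.randn(FS)  # stand-in for one EEG epoch
artifact = is_saturated(epoch) or mains_ratio(epoch) > 0.5
```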


international conference on multimedia and expo | 2013

Nonlinear dimensionality reduction approaches applied to music and textural sounds

Stéphane Dupont; Thierry Ravet; Cécile Picard-Limpens; Christian Frisson

Recently, various dimensionality reduction approaches have been proposed as alternatives to PCA or LDA. These improved approaches do not rely on a linearity assumption, and are hence capable of discovering more complex embeddings within different regions of the data sets. Despite their success on artificial datasets, it is not straightforward to predict which technique is the most appropriate for a given real dataset. In this paper, we empirically evaluate recent techniques on two real audio use cases: musical instrument loops used in music production and sound effects used in sound editing. ISOMAP and t-SNE are compared to PCA on a visualization task that produces a two-dimensional view. Several evaluation measures are used: classification performance, as well as trustworthiness and continuity, which assess the preservation of neighborhoods. Although PCA and ISOMAP can yield good continuity performance even locally (samples close in the original space remain close-by in the low-dimensional one), they fail to preserve the structure of the data well enough to ensure that distinct subgroups remain separate in the visualization. We show that t-SNE performs best, and can even be beneficial as a pre-processing stage for improving classification when the amount of labeled data is low.
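As a pointer to how such a comparison can be run, here is a minimal sketch using scikit-learn rather than the authors' code; the random feature matrix is a stand-in for the paper's content-based audio features (e.g. timbre descriptors), and the parameter choices are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE, trustworthiness

# Stand-in for content-based audio features; the paper's actual
# descriptors and datasets are not reproduced here.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))

X_pca = PCA(n_components=2).fit_transform(X)
X_tsne = TSNE(n_components=2, perplexity=30, init="pca",
              random_state=0).fit_transform(X)

# Trustworthiness scores how well local neighborhoods of the original
# space are preserved in the 2D view (1.0 = perfectly preserved).
for name, Y in [("PCA", X_pca), ("t-SNE", X_tsne)]:
    print(name, trustworthiness(X, Y, n_neighbors=10))
```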


conference on multimedia modeling | 2013

VideoCycle: User-Friendly Navigation by Similarity in Video Databases

Christian Frisson; Stéphane Dupont; Alexis Moinet; Cécile Picard-Limpens; Thierry Ravet; Xavier Siebert; Thierry Dutoit

VideoCycle is a candidate application for this second Video Browser Showdown challenge. VideoCycle allows interactive intra-video and inter-shot navigation with dedicated gestural controllers. MediaCycle, the framework it is built upon, provides media organization by similarity, with a modular architecture enabling most of its workflow to be performed by plugins: feature extraction, clustering, segmentation, summarization, intra-media and inter-segment visualization. MediaCycle focuses on user experience with user interfaces that can be tailored to specific use cases.
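MediaCycle itself is a C++ framework; purely to illustrate the plugin-based workflow the abstract describes, here is a hypothetical Python sketch of such an architecture. The class and stage names are inventions for illustration, not MediaCycle's API.

```python
from abc import ABC, abstractmethod

class MediaPlugin(ABC):
    """Hypothetical plugin interface: each workflow stage (feature
    extraction, clustering, segmentation, ...) is a swappable plugin."""
    @abstractmethod
    def process(self, items):
        ...

class FeatureExtractor(MediaPlugin):
    def process(self, items):
        # Stubbed descriptor computation; a real plugin would analyse media.
        return [{"item": i, "features": [float(len(i))]} for i in items]

class Pipeline:
    """Chains plugins so any stage of the workflow can be replaced."""
    def __init__(self, plugins):
        self.plugins = plugins

    def run(self, items):
        for plugin in self.plugins:
            items = plugin.process(items)
        return items

print(Pipeline([FeatureExtractor()]).run(["clip_a.mp4", "clip_b.mp4"]))
```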


Journal on Multimodal User Interfaces | 2008

Real-time motion attention and expressive gesture interfaces

Matei Mancas; Donald Glowinski; Gualtiero Volpe; Antonio Camurri; Pierre Bretéché; Jonathan Demeyer; Thierry Ravet; Paolo Coletta

This paper investigates the relationship between gestures' expressivity and the amount of attention they attract. We present a technique for quantifying behavior saliency, here understood as the capacity to capture one's attention, by the rarity of selected motion and gestural expressive features. This rarity index is based on the real-time computation of the occurrence probability of the numerical values of expressive motion features. Hence, the time instants that correspond to rare, unusual dynamic patterns of an expressive feature are singled out. In a multi-user scenario, the rarity index highlights the person in a group whose behavior differs most from the others'. In a mono-user scenario, the rarity index highlights when the expressive content of a gesture changes. These methods can be considered as preliminary steps toward context-aware expressive gesture analysis. This work has been partly carried out in the framework of the eNTERFACE 2008 workshop (Paris, France, August 2008) and is partially supported by the EU ICT SAME Project (www.sameproject.eu) and by the NUMEDIART Project (www.numediart.org).
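The abstract does not give the formula, but one common way to realise such a rarity index, sketched below as an assumption rather than the authors' method, is the self-information -log p of the current feature value under a running histogram: the lower the occurrence probability, the higher the score.

```python
import numpy as np

class RarityIndex:
    """Running histogram over one expressive feature; values with low
    occurrence probability score high.  A hypothetical sketch, not the
    authors' implementation."""
    def __init__(self, lo, hi, bins=32):
        self.edges = np.linspace(lo, hi, bins + 1)
        self.counts = np.ones(bins)  # Laplace smoothing

    def update(self, value):
        b = np.clip(np.searchsorted(self.edges, value) - 1,
                    0, len(self.counts) - 1)
        self.counts[b] += 1
        p = self.counts[b] / self.counts.sum()
        return -np.log(p)  # self-information: high when the value is rare

# Example: a sudden large "quantity of motion" stands out as rare.
idx = RarityIndex(lo=0.0, hi=1.0)
for qom in [0.10, 0.12, 0.11, 0.10, 0.90]:
    print(round(idx.update(qom), 2))
```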


9th International Summer Workshop on Multimodal Interfaces (eNTERFACE) | 2013

Reactive Statistical Mapping: Towards the Sketching of Performative Control with Data

Nicolas d’Alessandro; Joëlle Tilmanne; Maria Astrinaki; Thomas Hueber; Rasmus Dall; Thierry Ravet; Alexis Moinet; Hüseyin Çakmak; Onur Babacan; Adela Barbulescu; Valentin Parfait; Victor Huguenin; Emine Sümeyye Kalaycı; Qiong Hu

This paper presents the results of our participation in the ninth eNTERFACE workshop on multimodal user interfaces. Our target for this workshop was to bring some technologies currently used in speech recognition and synthesis to a new level, i.e. to make them the core of a new HMM-based mapping system. We investigated the idea of statistical mapping, more precisely how to use Gaussian Mixture Models and Hidden Markov Models for the realtime, reactive generation of new trajectories from input labels, and for realtime regression in a continuous-to-continuous use case. As a result, we have developed several proofs of concept, including an incremental speech synthesiser, software for exploring stylistic spaces for gait and facial motion in realtime, a reactive audiovisual laughter synthesiser, and a prototype demonstrating the realtime reconstruction of lower body gait motion strictly from upper body motion, with conservation of the stylistic properties. This project has been the opportunity to formalise HMM-based mapping, integrate several of these innovations into the Mage library, and explore the development of a realtime gesture recognition tool.
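To illustrate the continuous-to-continuous case, here is a minimal sketch of GMM-based regression: the conditional expectation of output features given input features under a joint GMM. This is the generic textbook formulation implemented with scikit-learn, not the Mage or eNTERFACE code; the toy data and component count are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_regress(gmm, x):
    """E[y | x] under a joint GMM trained on stacked [x, y] vectors."""
    dx = x.shape[-1]
    means, covs, w = gmm.means_, gmm.covariances_, gmm.weights_
    resp = np.empty(gmm.n_components)
    preds = np.empty((gmm.n_components, means.shape[1] - dx))
    for k in range(gmm.n_components):
        mx, my = means[k, :dx], means[k, dx:]
        Sxx, Syx = covs[k, :dx, :dx], covs[k, dx:, :dx]
        diff = x - mx
        # responsibility of component k for input x (unnormalised)
        resp[k] = w[k] * np.exp(-0.5 * diff @ np.linalg.solve(Sxx, diff)) \
                  / np.sqrt(np.linalg.det(2 * np.pi * Sxx))
        # conditional mean of y given x for component k
        preds[k] = my + Syx @ np.linalg.solve(Sxx, diff)
    resp /= resp.sum()
    return resp @ preds

# Joint training data: y = sin(x) plus noise; then predict y from x.
rng = np.random.default_rng(0)
x = rng.uniform(0, 2 * np.pi, (1000, 1))
y = np.sin(x) + 0.05 * rng.normal(size=x.shape)
gmm = GaussianMixture(n_components=8, covariance_type="full",
                      random_state=0).fit(np.hstack([x, y]))
print(gmm_regress(gmm, np.array([np.pi / 2])))  # should be near 1.0
```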


intelligent technologies for interactive entertainment | 2013

Medianeum: Gesture-Based Ergonomic Interaction

François Zajéga; Cécile Picard-Limpens; Julie René; Antonin Puleo; Justine Decuypere; Christian Frisson; Thierry Ravet; Matei Mancas

The proposed Medianeum system is an interactive installation that allows general audiences to explore a timeline and access multimedia content such as text, images and video.


intelligent technologies for interactive entertainment | 2013

MashtaCycle: On-Stage Improvised Audio Collage by Content-Based Similarity and Gesture Recognition

Christian Frisson; Gauthier Keyaerts; Fabien Grisard; Stéphane Dupont; Thierry Ravet; François Zajéga; Laura Colmenares Guerra; Todor Todoroff; Thierry Dutoit

In this paper we present the outline of a performance in progress. It brings together the skilled musical practice of Belgian audio collagist Gauthier Keyaerts, aka Very Mash’ta, and the realtime, content-based audio browsing capabilities of the AudioCycle and LoopJam applications developed by the remaining authors. MashtaCycle, the tool derived from AudioCycle, aids the preparation of collections of stem audio loops before performances by extracting content-based features (for instance timbre) used to position these sounds on a 2D visual map. The tool becomes an embodied on-stage instrument, based on a user interface that uses a depth-sensing camera and augmented with the public projection of the 2D map. The camera tracks the position of the artist within the sensing area to trigger sounds, similarly to the LoopJam installation. It also senses gestures from the performer, interpreted with the Full Body Interaction (FUBI) framework, allowing sound effects to be applied based on bodily movements. MashtaCycle blurs the boundary between performance and preparation, navigation and improvisation, installations and concerts.
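As an illustration of the position-to-sound triggering described above (not the MashtaCycle implementation), a tracked stage position can be snapped to the nearest loop on the 2D map with a k-d tree; the loop names, map coordinates and distance threshold are invented for the example.

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical 2D map: each loop's position would come from its
# content-based features (e.g. a timbre embedding), as in the paper.
rng = np.random.default_rng(0)
loop_positions = rng.random((64, 2))
loop_names = [f"loop_{i:02d}.wav" for i in range(64)]
tree = cKDTree(loop_positions)

def on_tracked_position(xy, max_dist=0.1):
    """Trigger the nearest loop when the performer is close enough."""
    dist, idx = tree.query(xy)
    return loop_names[idx] if dist <= max_dist else None

print(on_tracked_position(np.array([0.5, 0.5])))
```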


Proceedings of the 2014 International Workshop on Movement and Computing | 2014

Hidden Markov Model Based Real-Time Motion Recognition and Following

Thierry Ravet; Joëlle Tilmanne; Nicolas D'Alessandro


new interfaces for musical expression | 2012

LoopJam: turning the dance floor into a collaborative instrumental map.

Christian Frisson; Stéphane Dupont; Julien Leroy; Alexis Moinet; Thierry Ravet; Xavier Siebert; Thierry Dutoit


Archive | 2009

MORFACE: FACE MORPHING

Matei Mancas; Ricardo Chessini; Sullivan Hidot; Caroline Machy; Radhwan Ben Madhkour; Thierry Ravet
