Publications


Featured research published by Norbert Schnell.


GW'09 Proceedings of the 8th International Conference on Gesture in Embodied Communication and Human-Computer Interaction | 2009

Continuous realtime gesture following and recognition

Frédéric Bevilacqua; Bruno Zamborlin; Anthony Sypniewski; Norbert Schnell; Fabrice Guédy; Nicolas H. Rasamimanana

We present an HMM-based system for real-time gesture analysis. The system continuously outputs parameters describing the time progression of the gesture and its likelihood. These parameters are computed by comparing the performed gesture with stored reference gestures. The method relies on a detailed modeling of multidimensional temporal curves. Compared to standard HMM systems, the learning procedure is simplified by using prior knowledge, allowing the system to learn from a single example per class. Several applications have been developed with this system in the context of music education, music and dance performance, and interactive installations. Typically, the estimated time progression allows physical gestures to be synchronized with sound files by time-stretching or compressing audio buffers or videos.
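
As a rough illustration of the technique described above (a sketch under assumed parameters, not the authors' implementation), a left-to-right HMM can be built directly from a single recorded template, with one state per template sample; a scaled forward pass then yields both a likelihood and a time-progression estimate for each incoming frame:

    import numpy as np

    class GestureFollower:
        """Left-to-right HMM built from one template gesture.
        Each template sample becomes one state with a Gaussian
        observation model; self/next transitions model time
        stretching. Illustrative sketch only."""

        def __init__(self, template, sigma=0.1, p_stay=0.3):
            self.template = np.asarray(template, dtype=float)  # (T, D)
            self.T = len(self.template)
            self.sigma = sigma      # assumed observation noise
            self.p_stay = p_stay    # assumed self-transition probability
            self.alpha = np.zeros(self.T)
            self.alpha[0] = 1.0     # start at the template's beginning

        def step(self, frame):
            """Consume one frame; return (time_progression, likelihood)."""
            # transition: stay in a state or advance to the next one
            prior = self.p_stay * self.alpha
            prior[1:] += (1.0 - self.p_stay) * self.alpha[:-1]
            # isotropic Gaussian observation likelihood per state
            d2 = np.sum((self.template - np.asarray(frame, float)) ** 2, axis=1)
            self.alpha = prior * np.exp(-0.5 * d2 / self.sigma ** 2)
            norm = self.alpha.sum()
            if norm > 0:
                self.alpha /= norm
            progression = float(np.argmax(self.alpha)) / max(self.T - 1, 1)
            return progression, norm

The progression estimate is what can drive the time-stretching or compression of audio buffers mentioned in the abstract.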


New Interfaces for Musical Expression | 2007

Wireless sensor interface and gesture-follower for music pedagogy

Frédéric Bevilacqua; Fabrice Guédy; Norbert Schnell; Emmanuel Fléty; Nicolas Leroy

In this paper we present a complete gestural interface built to support music pedagogy. The development of this prototype involved both hardware and software components: a small wireless sensor interface including accelerometers and gyroscopes, and an analysis system enabling gesture following and recognition. A first set of experiments was conducted with teenagers in a music theory class. The preliminary results were encouraging regarding the suitability of these developments for music education.


Systems, Man, and Cybernetics | 1998

ESCHER: modeling and performing composed instruments in real-time

Marcelo M. Wanderley; Norbert Schnell; Joseph Butch Rovan

This article presents ESCHER, a sound synthesis environment based on Ircam's real-time audio environment jMax. ESCHER is a modular system providing synthesis-independent prototyping of gesturally controlled instruments by means of parameter interpolation. The system is divided into two components: a gestural controller and a synthesis engine. Mapping between the components takes place on two independent levels, coupled by an intermediate abstract parameter layer. This separation allows a flexible choice of controllers and/or sound synthesis methods.
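
A minimal sketch of the two-level mapping idea, with hypothetical names and a generic inverse-distance interpolation standing in for whatever scheme ESCHER actually used: controller data is first mapped to an abstract parameter layer, and abstract parameters then interpolate between stored synthesis presets.

    import numpy as np

    # stored abstract-parameter points and their synthesis presets
    # (values are illustrative: frequency in Hz, brightness 0..1)
    abstract_points = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
    synth_presets   = np.array([[440.0, 0.1], [880.0, 0.5], [220.0, 0.9]])

    def controller_to_abstract(ctrl):
        """Level 1: raw controller data to abstract parameters."""
        return np.clip(np.asarray(ctrl, dtype=float), 0.0, 1.0)

    def abstract_to_synthesis(abstract):
        """Level 2: interpolate synthesis presets, weighted by the
        inverse distance to each stored abstract-parameter point."""
        d = np.linalg.norm(abstract_points - abstract, axis=1)
        w = 1.0 / (d + 1e-9)
        w /= w.sum()
        return w @ synth_presets

    frequency, brightness = abstract_to_synthesis(
        controller_to_abstract([0.2, 0.7]))

Because the two mapping levels only meet at the abstract layer, either the controller side or the synthesis side can be swapped out independently, which is the flexibility the abstract describes.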


Tangible and Embedded Interaction | 2011

Modular musical objects towards embodied control of digital music

Nicolas H. Rasamimanana; Frédéric Bevilacqua; Norbert Schnell; Fabrice Guédy; Emmanuel Fléty; Côme Maestracci; Bruno Zamborlin; Jean-Louis Frechin; Uros Petrevski

We present an ensemble of tangible objects and software modules designed for musical interaction and performance. The tangible interfaces form an ensemble of connected objects communicating wirelessly. A central concept is to let users determine the final musical function of the objects, favoring customization, assembly, and repurposing. This might involve combining the wireless interfaces with existing everyday objects or musical instruments. Moreover, gesture analysis and recognition modules allow users to define their own actions and motions for the control of sound parameters. Various sound engines and interaction scenarios were built and tested. Some examples developed in a music pedagogy context are described.


GW'09 Proceedings of the 8th International Conference on Gesture in Embodied Communication and Human-Computer Interaction | 2009

Towards a gesture-sound cross-modal analysis

Baptiste Caramiaux; Frédéric Bevilacqua; Norbert Schnell

This article reports on the exploration of a method based on canonical correlation analysis (CCA) for analyzing the relationship between gesture and sound in the context of music performance and listening. This method is a first step in the design of an analysis tool for gesture-sound relationships. In this exploration we used motion capture data recorded from subjects performing free hand movements while listening to short sound examples. We assume that even though the relationship between gesture and sound may be more complex, at least part of it can be revealed and quantified by linear multivariate regression applied to the motion capture data and to audio descriptors extracted from the sound examples. After outlining the theoretical background, the article shows how the method allows pertinent reasoning about the relationship between gesture and sound by analyzing data sets recorded from multiple and individual subjects.
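
For readers unfamiliar with CCA, the following sketch shows the kind of computation involved, on synthetic stand-in data rather than the authors' motion capture recordings; the feature dimensions and their interpretations are assumptions for illustration.

    import numpy as np
    from sklearn.cross_decomposition import CCA

    rng = np.random.default_rng(0)
    n_frames = 500
    # stand-in per-frame motion features (e.g. positions, velocities)
    gesture = rng.standard_normal((n_frames, 6))
    # stand-in audio descriptors, partly driven by the gesture features
    shared = gesture[:, :2] @ rng.standard_normal((2, 2))
    audio = np.hstack([shared, rng.standard_normal((n_frames, 2))])

    # CCA finds paired linear projections of the two feature sets
    # that are maximally correlated with each other
    cca = CCA(n_components=2)
    gesture_c, audio_c = cca.fit_transform(gesture, audio)

    for k in range(2):
        r = np.corrcoef(gesture_c[:, k], audio_c[:, k])[0, 1]
        print(f"component {k}: canonical correlation = {r:.2f}")

The canonical correlations quantify how much of the gesture-sound relationship each component pair captures, which is the kind of quantification the article aims at.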


ACM Transactions on Applied Perception | 2014

The Role of Sound Source Perception in Gestural Sound Description

Baptiste Caramiaux; Frédéric Bevilacqua; Tommaso Bianco; Norbert Schnell; Olivier Houix; Patrick Susini

We investigated gesture descriptions of sound stimuli performed during a listening task. Our hypothesis is that the strategies in gestural responses depend on the level of identification of the sound source, and specifically on the identification of the action causing the sound. To validate our hypothesis, we conducted two experiments. In the first experiment, we built two corpora of sounds. The first corpus contains sounds with identifiable causal actions. The second contains sounds for which no causal actions could be identified. These corpus properties were validated through a listening test. In the second experiment, participants performed arm and hand gestures synchronously while listening to sounds taken from these corpora. Afterward, we conducted interviews asking participants to verbalize their experience while watching their own video recordings. They were questioned on their perception of the sounds they had heard and on their gestural strategies. We showed that for sounds where a causal action can be identified, participants mainly mimic the action that produced the sound. In the other case, when no action can be associated with the sound, participants trace contours related to the sound's acoustic features. We also found that inter-participant gesture variability is higher for causal sounds than for non-causal sounds. This variability demonstrates that, in the first case, participants have several ways of producing the same action, whereas in the second case, the sound features tend to make the gesture responses consistent.


Book chapter | 2011

Online Gesture Analysis and Control of Audio Processing

Frédéric Bevilacqua; Norbert Schnell; Nicolas H. Rasamimanana; Bruno Zamborlin; Fabrice Guédy

This chapter presents a general framework for gesture-controlled audio processing. The gesture parameters are assumed to be multidimensional temporal profiles obtained from movement or sound capture systems. The analysis is based on machine learning techniques that compare the incoming data flow with stored templates. The mapping procedures between gesture and audio processing include a specific method we call temporal mapping, in which the temporal evolution of the gesture input is taken into account in the mapping process. We describe an example of a possible use of the framework that we have experimented with in various contexts, including music and dance performances, music pedagogy, and installations.
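
A toy sketch of the temporal mapping idea, under the assumption that a gesture follower outputs one time-progression value (0 to 1) per analysis frame; the granular resynthesis here is a deliberately naive stand-in for a proper time-stretching algorithm.

    import numpy as np

    def temporal_mapping(progression, audio, hop=512):
        """Read grains from a sound file at positions selected by the
        gesture's own time progression, so performing the gesture
        slower or faster stretches or compresses the audio."""
        out = []
        for p in progression:
            start = int(p * (len(audio) - hop))
            out.append(audio[start:start + hop])
        return np.concatenate(out)

    # example: 200 follower frames sweeping the whole sound;
    # a slower gesture would revisit regions and stretch the audio
    sr = 44100
    audio = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
    progression = np.linspace(0.0, 1.0, 200)  # from the follower
    stretched = temporal_mapping(progression, audio)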


Human Factors in Computing Systems | 2012

The urban musical game: using sport balls as musical interfaces

Nicolas H. Rasamimanana; Frédéric Bevilacqua; Julien Bloit; Norbert Schnell; Emmanuel Fléty; Andrea Cera; Uros Petrevski; Jean-Louis Frechin

We present the Urban Musical Game, an installation using augmented sports balls to manipulate and transform an interactive music environment. The interaction is based on playing techniques, a concept borrowed from traditional musical instruments and applied here to non-musical objects.


Computer Music Journal | 1999

jMax: An Environment for Real-Time Musical Applications

François Déchelle; Riccardo Borghesi; Maurizio De Cecco; Enzo Maggi; Butch Rovan; Norbert Schnell

jMax is a new implementation of the visual programming language MAX, which is widely used for interactive musical applications. Based on a mixed Java/C architecture, jMax emphasizes portability.


International Conference on Acoustics, Speech, and Signal Processing | 2005

Training Ircam's score follower [audio to musical score alignment system]

Arshia Cont; Diemo Schwarz; Norbert Schnell

This paper describes our attempt to make the hidden Markov model (HMM) score-following system developed at Ircam sensitive to past experience, in order to obtain better real-time audio-to-score alignment for musical applications. A new observation model based on Gaussian mixtures is developed that is trainable using a learning algorithm we call automatic discriminative training. The novelty of this approach lies in the fact that, unlike classical methods for HMM training, it is not concerned with modeling the music signal itself but with correctly choosing the sequence of music events that was performed. Besides yielding better alignment, the new system's parameters can be controlled in a physically meaningful way, and the training algorithm learns different styles of music performance, as discussed in the paper.
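
The observation-model side of such a system can be sketched as follows; this is a generic Gaussian-mixture observation model fitted with scikit-learn on stand-in features, not Ircam's system or its discriminative training procedure.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(1)

    def train_event_model(aligned_frames, n_components=3):
        """Fit a GMM to the audio-feature frames observed while one
        score event (one HMM state) was being played."""
        gmm = GaussianMixture(n_components=n_components,
                              covariance_type="diag")
        gmm.fit(aligned_frames)
        return gmm

    # stand-in training data: 200 twelve-dimensional feature frames
    # aligned to a single score event in rehearsal recordings
    frames = rng.standard_normal((200, 12)) + 2.0
    event_gmm = train_event_model(frames)

    # at run time, the per-state log-likelihood of an incoming frame
    # feeds the HMM's alignment decision
    incoming = rng.standard_normal((1, 12)) + 2.0
    log_lik = event_gmm.score_samples(incoming)[0]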
