Network


Latest external collaboration at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Sylvain Le Groux is active.

Publication


Featured research published by Sylvain Le Groux.


The Engineering of Mixed Reality Systems | 2010

The eXperience Induction Machine: A New Paradigm for Mixed-Reality Interaction Design and Psychological Experimentation

Ulysses Bernardet; Sergi Bermúdez i Badia; Armin Duff; Martin Inderbitzin; Sylvain Le Groux; Jônatas Manzolli; Zenon Mathews; Anna Mura; Aleksander Väljamäe; Paul F. M. J. Verschure

The eXperience Induction Machine (XIM) is one of the most advanced mixed-reality spaces available today. XIM is an immersive space that consists of physical sensors and effectors and is conceptualized as a general-purpose infrastructure for research in the field of psychology and human–artifact interaction. In this chapter, we set out the epistemological rationale behind XIM by putting the installation in the context of psychological research. The design and implementation of XIM are based on principles and technologies of neuromorphic control. We give a detailed description of the hardware infrastructure and software architecture, including the logic of the overall behavioral control. To illustrate the approach toward psychological experimentation, we discuss a number of practical applications of XIM. These include the so-called persistent virtual community, research on the relationship between human experience and multi-modal stimulation, and an investigation of a mixed-reality social interaction paradigm.
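The chapter describes XIM as a sensed, effector-equipped space driven by a behavioral control layer. As a rough illustration of that sense-process-act structure, the sketch below shows a minimal control loop; the class names, the toy brightness rule, and the stubbed sensor and effector are invented placeholders, not the actual XIM software or its neuromorphic controller.

```python
# Illustrative sketch only: a minimal sense-process-act loop of the kind an
# instrumented mixed-reality space such as XIM might run. All names here
# (FloorSensor, LightEffector, BehaviorController) are hypothetical
# placeholders, not the actual XIM architecture.

import random
import time


class FloorSensor:
    """Stub sensor: reports how many visitors are detected and how active they are."""
    def read(self):
        return {"visitors": random.randint(0, 5), "activity": random.random()}


class LightEffector:
    """Stub effector: would drive the room lighting in a real installation."""
    def apply(self, brightness):
        print(f"lights -> brightness {brightness:.2f}")


class BehaviorController:
    """Placeholder for the behavioral control layer: maps sensed state to commands."""
    def update(self, state):
        # More visitors and more activity -> brighter room (toy rule, not XIM's).
        return min(1.0, 0.2 + 0.1 * state["visitors"] + 0.5 * state["activity"])


def control_loop(cycles=10, period_s=0.05):
    """Run the space at ~20 Hz: read sensors, decide, drive effectors."""
    sensor, light, controller = FloorSensor(), LightEffector(), BehaviorController()
    for _ in range(cycles):  # a real installation would loop indefinitely
        state = sensor.read()
        light.apply(controller.update(state))
        time.sleep(period_s)


if __name__ == "__main__":
    control_loop()
```

In the installation itself the effectors include lights, sound and projections; the toy rule above only indicates where such a sensor-to-effector mapping would sit.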


Intelligent Information Systems | 2005

Nearest-neighbor automatic sound annotation with a WordNet taxonomy

Pedro Cano; Markus Koppenberger; Sylvain Le Groux; Julien Ricard; Nicolas Wack; Perfecto Herrera

Sound engineers need to access vast collections of sound effects for their film and video productions. Sound effects providers rely on text-retrieval techniques to give access to their collections. Currently, audio content is annotated manually, which is an arduous task. Automatic annotation methods, normally fine-tuned to reduced domains such as musical instruments or limited sound-effects taxonomies, are not mature enough to label any possible sound in great detail. A general sound recognition tool would require, first, a taxonomy that represents the world and, second, thousands of classifiers, each specialized in distinguishing little details. We report experimental results on a general sound annotator. To tackle the taxonomy definition problem we use WordNet, a semantic network that organizes real-world knowledge. To overcome the need for a huge number of classifiers to distinguish many different sound classes, we use a nearest-neighbor classifier with a database of isolated sounds unambiguously linked to WordNet concepts. A 30% concept prediction rate is achieved on a database of over 50,000 sounds and over 1,600 concepts.
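A minimal sketch of the nearest-neighbor idea follows, assuming each database sound comes with a precomputed feature vector and a set of WordNet concept tags; the feature values, tags and function names below are toy placeholders rather than the paper's actual descriptors or data.

```python
# Minimal sketch of nearest-neighbor sound annotation with WordNet concepts.
# Feature extraction is out of scope: the vectors below are toy values, and
# the concept tags are written as plain synset-style strings.

import numpy as np

# Hypothetical annotated database: (feature vector, WordNet concepts).
DATABASE = [
    (np.array([0.9, 0.1, 0.3]), ["dog.n.01", "bark.n.04"]),
    (np.array([0.2, 0.8, 0.5]), ["rain.n.01", "water.n.01"]),
    (np.array([0.4, 0.4, 0.9]), ["engine.n.01", "car.n.01"]),
]


def annotate(query_features, k=1):
    """Label a query sound with the concepts of its k nearest database sounds."""
    distances = [np.linalg.norm(query_features - feats) for feats, _ in DATABASE]
    nearest = np.argsort(distances)[:k]
    concepts = []
    for idx in nearest:
        concepts.extend(DATABASE[idx][1])  # propagate the neighbors' WordNet tags
    return concepts


if __name__ == "__main__":
    # A query whose features resemble the first database entry.
    print(annotate(np.array([0.85, 0.15, 0.35])))  # -> ['dog.n.01', 'bark.n.04']
```

In the reported system the database holds tens of thousands of annotated sounds and the features are perceptually motivated audio descriptors; the sketch only shows how labels propagate from the nearest neighbors.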


New Interfaces for Musical Expression | 2007

VR-RoBoser: real-time adaptive sonification of virtual environments based on avatar behavior

Sylvain Le Groux; Jônatas Manzolli; Paul F. M. J. Verschure

Until recently, the sonification of Virtual Environments had often been reduced to its simplest expression. Too often, soundscapes and background music are predetermined, repetitive and somewhat predictable. Yet there is room for more complex and interesting sonification schemes that can improve the sensation of presence in a Virtual Environment. In this paper we propose VR-RoBoser, a system that automatically generates original background music in real time. As a test case we present the application of VR-RoBoser to a dynamic avatar that explores its environment. We show that musical events are directly and continuously generated and influenced by the behavior of the avatar in three-dimensional virtual space, producing a context-dependent sonification.
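As a minimal sketch of this kind of behavior-to-music mapping, the snippet below assumes the avatar exposes its position and speed each frame; the parameter names and the simple linear mappings are illustrative assumptions, not VR-RoBoser's published design.

```python
# Illustrative sketch: avatar behavior in 3D space driving musical parameters
# in real time. The mappings below are toy assumptions for clarity.

from dataclasses import dataclass


@dataclass
class AvatarState:
    x: float
    y: float
    z: float
    speed: float  # magnitude of the avatar's velocity


def sonification_parameters(state: AvatarState) -> dict:
    """Map avatar position and motion to musical control parameters."""
    # Horizontal position -> stereo panning (-1 left .. +1 right), assuming x in [0, 10].
    pan = max(-1.0, min(1.0, (state.x - 5.0) / 5.0))
    # Height -> register: higher avatar, higher MIDI pitch (toy linear map).
    pitch = int(48 + 3 * state.z)
    # Faster movement -> faster, denser music.
    tempo_bpm = 60 + 40 * min(state.speed, 2.0)
    return {"pan": pan, "pitch": pitch, "tempo_bpm": tempo_bpm}


if __name__ == "__main__":
    # Each frame, the music engine would read the avatar state and retarget the music.
    print(sonification_parameters(AvatarState(x=7.5, y=2.0, z=4.0, speed=1.2)))
```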


International Conference on Computer Graphics and Interactive Techniques | 2008

re(PER)curso: an interactive mixed reality chronicle

Anna Mura; Behdad Rezazadeh; Armin Duff; Jônatas Manzolli; Sylvain Le Groux; Zenon Mathews; Ulysses Bernardet; Sytse Wierenga; Sergi Bermudez; Paul F. M. J. Verschure

re(PER)curso presents an interactive mixed-reality narrative in which two human performers, a percussionist and a dancer, and a number of real-time synthetic actors, including sonification, virtual cameras and an anthropomorphic avatar, explore the confluence of the physical and the virtual dimensions underlying existence and experience (Figure 1). The synthetic components of re(PER)curso are realized with computer-generated graphics, automated moving light and stage control, video art, a synthetic music composition system called RoBoser [Manzolli and Verschure 2005], and an avatar embedded in a 3D graphic environment. The integration of all elements is realized through the multimodal mixed-reality system the eXperience Induction Machine (XIM), which is based on an earlier large-scale public exhibition called Ada [Eng 2003]. XIM is controlled through a neuromorphic system that defines all the rules of interaction and performance dynamics; as a result, the complete performance is synthesized in real time and evolves without human intervention beyond that of the two human actors on the stage. re(PER)curso is an experiment in interactive narrative and explores the potential of virtual reality and augmented feedback technologies as tools for artistic expression. It expresses a general research strategy in which the limits of advanced technologies are explored through their application in art. re(PER)curso is operated as an autonomous interactive installation that is augmented by two human performers. It is supported by a number of input devices that track and analyze the ongoing performance through cameras and microphones; controllers such as the synthetic composition engine RoBoser; and output systems that include large-scale real-time computer graphics, moving virtual and real cameras, and moving lights. Stage information obtained by the tracking systems is also projected onto the virtual world, where it modulates the avatar's behavior, allowing it to adjust its body position, posture and gaze to the physical world and to adjust properties of the virtual cameras.
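As a small illustration of that last step, stage tracking modulating the avatar's gaze, here is a hedged sketch under assumed 2D stage coordinates and a simple "look at the nearest performer" rule; both the coordinate convention and the rule are assumptions for clarity, not the installation's actual control logic.

```python
# Illustrative sketch only: tracked stage positions steering an avatar's gaze.
# The "look at the nearest performer" rule is a placeholder assumption.

import math


def gaze_angle(avatar_xy, performer_positions):
    """Return the yaw angle (radians) that points the avatar at the closest performer."""
    nearest = min(
        performer_positions,
        key=lambda p: math.dist(avatar_xy, p),  # Euclidean distance (Python 3.8+)
    )
    dx, dy = nearest[0] - avatar_xy[0], nearest[1] - avatar_xy[1]
    return math.atan2(dy, dx)


if __name__ == "__main__":
    # Avatar at stage center; percussionist and dancer positions from the tracking cameras.
    print(gaze_angle((0.0, 0.0), [(3.0, 1.0), (-2.0, 4.0)]))
```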


Journal of the Audio Engineering Society | 2004

Nearest-neighbor Generic Sound Classification with a WordNet-based Taxonomy

Pedro Cano; Markus Koppenberger; Perfecto Herrera; Sylvain Le Groux; Julien Ricard; Nicolas Wack


New Interfaces for Musical Expression | 2010

Disembodied and Collaborative Musical Interaction in the Multimodal Brain Orchestra

Sylvain Le Groux; Jônatas Manzolli; Paul F. M. J. Verschure


Archive | 2007

Interactive Sonification of the Spatial Behavior of Human and Synthetic Characters in a Mixed-Reality Environment

Sylvain Le Groux; Jônatas Manzolli


International Computer Music Conference | 2009

Situated Interactive Music System: Connecting Mind and Body Through Musical Interaction

Sylvain Le Groux; Paul F. M. J. Verschure


International Computer Music Conference | 2009

Implicit Physiological Interaction for the Generation of Affective Musical Sounds

Sylvain Le Groux; Aleksander Väljamäe; Jônatas Manzolli; Paul F. M. J. Verschure


International Conference on E-Business and Telecommunication Networks | 2004

Perceptual and semantic management of sound effects with a WordNet-based taxonomy

Pedro Cano; Markus Koppenberger; Sylvain Le Groux; Perfecto Herrera; Nicolas Wack

Collaboration


Dive into Sylvain Le Groux's collaborations.

Top Co-Authors

Jônatas Manzolli
State University of Campinas

Nicolas Wack
Pompeu Fabra University

Pedro Cano
Pompeu Fabra University

Anna Mura
Pompeu Fabra University

Armin Duff
Pompeu Fabra University