Christian Frisson
University of Mons
Publications
Featured research published by Christian Frisson.
International Journal of Multimedia Information Retrieval | 2013
Klaus Schoeffmann; David Ahlström; Werner Bailer; Claudiu Cobârzan; Frank Hopfgartner; Kevin McGuinness; Cathal Gurrin; Christian Frisson; Duy-Dinh Le; Manfred Del Fabro; Hongliang Bai; Wolfgang Weiss
The Video Browser Showdown evaluates the performance of exploratory video search tools on a common data set, in a common environment, and in the presence of an audience. The main goal of this competition is to enable researchers in the field of interactive video search to directly compare their tools at work. In this paper, we present results from the second Video Browser Showdown (VBS2013) and describe and evaluate the tools of all participating teams in detail. The evaluation results give insight into how exploratory video search tools are used and how they perform in direct comparison. Moreover, we compare the achieved performance to results from another user study in which 16 participants employed a standard video player to complete the same tasks as in VBS2013. This comparison shows that the sophisticated tools enable better performance in general, but that for some tasks common video players provide similar performance and can even outperform the expert tools. Our results highlight the need for further improvement of professional tools for interactive search in videos.
Content-Based Multimedia Indexing | 2009
Stéphane Dupont; Thomas Dubuisson; Jérôme Urbain; Raphaël Sebbe; Nicolas D'Alessandro; Christian Frisson
This paper presents AudioCycle, a prototype application for browsing through music loop libraries. AudioCycle provides the user with a graphical view where the audio extracts are visualized and organized according to their similarity in terms of musical properties, such as timbre, harmony, and rhythm. The user is able to navigate in this visual representation, and listen to individual audio extracts, searching for those of interest. AudioCycle draws from a range of technologies, including audio analysis from music information retrieval research, 3D visualization, spatial auditory rendering, audio time-scaling and pitch modification. The proposed approach extends on previously described music and audio browsers. Concepts developed here will be of interest to DJs, remixers, musicians, soundtrack composers, but also sound designers and foley artists. Possible extensions to multimedia libraries are also suggested.
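To make the similarity layout concrete, here is a minimal sketch of the kind of pipeline the abstract describes, assuming librosa and scikit-learn; the feature choices (MFCCs for timbre, chroma for harmony, tempo for rhythm) and the PCA projection are illustrative assumptions, not AudioCycle's actual feature set.

```python
# Sketch of a similarity layout for audio loops, in the spirit of AudioCycle.
# Assumed stand-ins: MFCC ~ timbre, chroma ~ harmony, tempo ~ rhythm.
import numpy as np
import librosa
from sklearn.decomposition import PCA

def describe(path):
    y, sr = librosa.load(path, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)  # timbre
    chroma = librosa.feature.chroma_stft(y=y, sr=sr).mean(axis=1)    # harmony
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)                   # rhythm
    return np.concatenate([mfcc, chroma, np.atleast_1d(tempo)])

def layout(paths):
    feats = np.stack([describe(p) for p in paths])
    feats = (feats - feats.mean(axis=0)) / (feats.std(axis=0) + 1e-9)
    return PCA(n_components=2).fit_transform(feats)  # 2-D map for browsing
```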
International Conference on Multimedia and Expo | 2013
Stéphane Dupont; Thierry Ravet; Cécile Picard-Limpens; Christian Frisson
Recently, various dimensionality reduction approaches have been proposed as alternatives to PCA or LDA. These improved approaches do not rely on a linearity assumption and are hence capable of discovering more complex embeddings within different regions of the data sets. Despite their success on artificial datasets, it is not straightforward to predict which technique is the most appropriate for a given real dataset. In this paper, we empirically evaluate recent techniques on two real audio use cases: musical instrument loops used in music production and sound effects used in sound editing. ISOMAP and t-SNE are compared to PCA in a visualization problem, where we end up with a two-dimensional view. Various evaluation measures are used: classification performance, as well as trustworthiness and continuity, which assess the preservation of neighborhoods. Although PCA and ISOMAP can yield good continuity performance even locally (samples close in the original space remain close in the low-dimensional one), they fail to preserve the structure of the data well enough to ensure that distinct subgroups remain separate in the visualization. We show that t-SNE presents the best performance, and can even be beneficial as a pre-processing stage for improving classification when the amount of labeled data is low.
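The comparison protocol can be sketched with scikit-learn, which ships PCA, Isomap, and t-SNE along with a trustworthiness measure; computing continuity as trustworthiness with the two spaces swapped is a standard trick, and the neighborhood size below is an arbitrary choice.

```python
# Project the same features with PCA, ISOMAP and t-SNE, then score
# neighborhood preservation. scikit-learn only ships trustworthiness;
# continuity is obtained by swapping the roles of the two spaces.
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap, TSNE, trustworthiness

def compare(X, n_neighbors=12):
    embeddings = {
        "PCA": PCA(n_components=2).fit_transform(X),
        "ISOMAP": Isomap(n_components=2, n_neighbors=n_neighbors).fit_transform(X),
        "t-SNE": TSNE(n_components=2, init="pca").fit_transform(X),
    }
    for name, Y in embeddings.items():
        t = trustworthiness(X, Y, n_neighbors=n_neighbors)
        c = trustworthiness(Y, X, n_neighbors=n_neighbors)  # continuity
        print(f"{name}: trustworthiness={t:.3f} continuity={c:.3f}")
```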
Conference on Multimedia Modeling | 2013
Christian Frisson; Stéphane Dupont; Alexis Moinet; Cécile Picard-Limpens; Thierry Ravet; Xavier Siebert; Thierry Dutoit
VideoCycle is a candidate application for this second Video Browser Showdown challenge. VideoCycle allows interactive intra-video and inter-shot navigation with dedicated gestural controllers. MediaCycle, the framework it is built upon, provides media organization by similarity, with a modular architecture enabling most of its workflow to be performed by plugins: feature extraction, clustering, segmentation, summarization, intra-media and inter-segment visualization. MediaCycle focuses on user experience with user interfaces that can be tailored to specific use cases.
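As a rough illustration of such a plugin architecture (the class and stage names below are invented for illustration, not MediaCycle's actual API), a pipeline can expose registration hooks per workflow stage:

```python
# Hypothetical sketch of a plugin-style media pipeline; the stage names
# mirror the abstract, but this API is invented, not MediaCycle's own.
from typing import Callable, Dict, List

class Pipeline:
    def __init__(self):
        self.stages: Dict[str, List[Callable]] = {
            "features": [], "segmentation": [], "clustering": [], "visualization": []
        }

    def register(self, stage: str, plugin: Callable):
        self.stages[stage].append(plugin)

    def run(self, media):
        data = media
        for stage in ("features", "segmentation", "clustering", "visualization"):
            for plugin in self.stages[stage]:
                data = plugin(data)  # each plugin transforms the running data
        return data
```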
EURASIP Journal on Advances in Signal Processing | 2010
Cécile Picard; Christian Frisson; François Faure; George Drettakis; Paul G. Kry
This paper presents a new approach to modal synthesis for rendering sounds of virtual objects. We propose a generic method that preserves sound variety across the surface of an object at different scales of resolution and for a variety of complex geometries. The technique performs automatic voxelization of a surface model and automatic tuning of the parameters of hexahedral finite elements, based on the distribution of material in each cell. The voxelization is performed using a sparse regular grid embedding of the object, which permits the construction of plausible lower resolution approximations of the modal model. We can compute the audible impulse response of a variety of objects. Our solution is robust and can handle nonmanifold geometries that include both volumetric and surface parts. We present a system which allows us to manipulate and tune sounding objects in an appropriate way for games, training simulations, and other interactive virtual environments.
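The final rendering stage of modal synthesis, once the finite-element analysis has produced modal frequencies, dampings, and gains, amounts to summing exponentially damped sinusoids; a numpy sketch with placeholder mode parameters:

```python
# Render a modal impulse response as a sum of damped sinusoids. The mode
# parameters would come from the finite-element analysis; the values in the
# usage line are placeholders, not output of the paper's system.
import numpy as np

def modal_impulse_response(freqs, dampings, gains, sr=44100, duration=1.0):
    t = np.arange(int(sr * duration)) / sr
    out = np.zeros_like(t)
    for f, d, g in zip(freqs, dampings, gains):
        out += g * np.exp(-d * t) * np.sin(2 * np.pi * f * t)
    return out / (np.max(np.abs(out)) + 1e-12)  # normalized to [-1, 1]

# e.g. three modes of a struck object (placeholder values)
ir = modal_impulse_response([440.0, 1210.0, 2650.0], [6.0, 9.0, 14.0], [1.0, 0.5, 0.25])
```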
Audio Mostly Conference | 2014
Christian Frisson; Stéphane Dupont; Willy Yvart; Nicolas Riche; Xavier Siebert; Thierry Dutoit
Sound designers source sounds in massive collections, heavily tagged by themselves and by sound librarians. For each query, once successive keywords have reached the limit of what they can filter out, hundreds of sounds are still left to review. AudioMetro combines a new content-based information visualization technique with instant audio feedback to facilitate this part of their workflow. We show through user evaluations by known-item search in collections of textural sounds that a default grid layout ordered by filename unexpectedly outperforms content-based similarity layouts resulting from a recent dimension reduction technique (t-distributed Stochastic Neighbor Embedding), even when complemented with content-based glyphs that emphasize local neighborhoods and cue perceptual features. We propose a solution borrowed from image browsing: a proximity grid, whose density we optimize for nearest-neighborhood preservation among the closest cells. Not only does it remove overlap, but we show through a subsequent user evaluation that it also helps to direct the search. We based our experiments on an open dataset (the OLPC sound library) for replicability.
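The proximity-grid idea can be sketched as follows: normalize the 2-D layout to grid coordinates, then greedily give each point the nearest free cell. This naive assignment is only for illustration; the paper additionally optimizes grid density for nearest-neighborhood preservation.

```python
# Snap a 2-D similarity layout onto a grid so that no two sounds overlap.
# Greedy nearest-free-cell assignment; requires rows * cols >= len(points).
import numpy as np

def proximity_grid(points, rows, cols):
    p = (points - points.min(axis=0)) / (np.ptp(points, axis=0) + 1e-12)
    targets = p * [cols - 1, rows - 1]
    cells = np.array([(x, y) for y in range(rows) for x in range(cols)], float)
    taken = np.zeros(len(cells), dtype=bool)
    assignment = {}
    for i, t in enumerate(targets):
        d = np.linalg.norm(cells - t, axis=1)
        d[taken] = np.inf               # occupied cells are no longer candidates
        j = int(np.argmin(d))
        taken[j] = True
        assignment[i] = tuple(cells[j].astype(int))
    return assignment                   # point index -> (col, row)
```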
Tangible and Embedded Interaction | 2013
Christian Frisson
This paper focuses on one aspect of the author's doctoral studies, now in their final year: designing applications for navigating (by content-based similarity) audio or video collections, and choosing between tangible and free-form interfaces depending on the use case. One goal of this work is to determine which type of gestural interface best suits each chosen use case involving navigation in media collections of audio or video elements: classifying sounds for electroacoustic music composition, derushing video, and improvising instant music through an installation that organizes and synchronizes audio loops. Prototype applications have been developed using the modular MediaCycle framework for organizing media content by similarity. We preliminarily conclude that tangible interfaces are better suited for focused expert tasks and free-form interfaces for multi-user exploratory tasks, while a combination of both can create emergent practices.
Audio Mostly Conference | 2010
Cécile Picard; Christian Frisson; Jean Vanderdonckt; Damien Tardieu; Thierry Dutoit
This paper presents a new approach to sound composition for soundtrack composers and sound designers. We propose a tool for usable sound manipulation and composition that targets sound variety and expressive rendering of the composition. We first automatically segment audio recordings into atomic grains, which are displayed in our navigation tool according to signal properties. To perform the synthesis, the user selects one recording as a model for rhythmic pattern and timbre evolution, together with a set of audio grains. Our synthesis system then processes the chosen sound material to create new sound sequences, based on onset detection in the model recording and similarity measurements between the model and the selected grains. With our method, we can create a large variety of sound events such as those encountered in virtual environments or other training simulations, as well as sound sequences that can be integrated into a music composition. We present a usability-minded interface that allows users to manipulate and tune sound sequences in a way appropriate for sound design.
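A sketch of the segment-and-resequence idea, assuming librosa for onset detection and MFCCs as the (assumed) similarity feature; the actual system's features and matching are not specified in the abstract.

```python
# Detect onsets in a "model" recording, then place at each onset the pool
# grain whose MFCC profile is closest to the model's local timbre.
import numpy as np
import librosa

def resequence(model_path, grains, sr=22050):
    y, _ = librosa.load(model_path, sr=sr, mono=True)
    onsets = librosa.onset.onset_detect(y=y, sr=sr, units="samples")
    grain_feats = [librosa.feature.mfcc(y=g, sr=sr).mean(axis=1) for g in grains]
    out = np.zeros(len(y))
    for start in onsets:
        ref = librosa.feature.mfcc(y=y[start:start + 2048], sr=sr).mean(axis=1)
        best = min(range(len(grains)),
                   key=lambda i: np.linalg.norm(grain_feats[i] - ref))
        g = grains[best][: len(out) - start]
        out[start:start + len(g)] += g   # overlay the chosen grain at the onset
    return out
```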
International Conference on Multimedia and Expo | 2009
Jérôme Urbain; Thomas Dubuisson; Stéphane Dupont; Christian Frisson; Raphaël Sebbe; Nicolas D'Alessandro
This paper presents AudioCycle, a prototype application for browsing through music loop libraries. AudioCycle provides the user with a graphical view where the audio extracts are visualized and organized according to their similarity in terms of musical properties, such as timbre, harmony, and rhythm. The user is able to navigate in this visual representation and listen to individual audio extracts, as well as query the database by providing audio examples. AudioCycle draws from a range of technologies, including audio analysis from music information retrieval research, 3D visualization, spatial auditory rendering, audio time-scaling and pitch modification. The proposed approach extends on previously described music and audio browsers. A possible extension to multimedia libraries is also suggested.
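The query-by-example facility mentioned in this version reduces, in its simplest form, to nearest-neighbour search over the same feature vectors; a sketch with scikit-learn (the abstract does not specify the actual indexing scheme):

```python
# Query-by-example over precomputed loop features, as plain nearest-neighbour
# search; a simplifying assumption, not the system's actual retrieval engine.
from sklearn.neighbors import NearestNeighbors

def build_index(features):              # features: (n_loops, n_dims) array
    return NearestNeighbors(n_neighbors=5).fit(features)

def query(index, example_feature):
    dist, idx = index.kneighbors(example_feature.reshape(1, -1))
    return list(zip(idx[0], dist[0]))   # closest loops and their distances
```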
Tangible and Embedded Interaction | 2016
Christian Frisson; Bruno Dumas
Information visualisation is the transformation of abstract data into visual, interactive representations. In this paper we present InfoPhys, a device that enables the direct, tangible manipulation of visualisations. InfoPhys makes use of a force-feedback pointing device to simulate haptic feedback while the user explores visualisations projected on top of the device. We present a use case illustrating the trends in ten years of TEI proceedings and how InfoPhys allows users to feel and manipulate these trends. The technical and software aspects of our prototype are presented, and promising improvements and avenues for future work opened up by InfoPhys are then discussed.
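One plausible way to render such haptic feedback, offered purely as a hypothetical sketch rather than InfoPhys's actual control code, is to derive a resisting force from the local gradient of the visualised scalar field, so the pointer "feels" slopes in the data:

```python
# Hypothetical haptic mapping: force opposes the local gradient of the data
# under the pointer. Field, coordinates, and stiffness are assumed inputs.
import numpy as np

def haptic_force(field, x, y, stiffness=0.5):
    gy, gx = np.gradient(field)          # field: 2-D array of data values
    fx = -stiffness * gx[y, x]           # push back against rising values
    fy = -stiffness * gy[y, x]
    return fx, fy                        # force to send to the pointing device
```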