

Publications


Featured research published by Philippe Guillotel.


IEEE Transactions on Affective Computing | 2012

Physiological-Based Affect Event Detector for Entertainment Video Applications

Julien Fleureau; Philippe Guillotel; Quan Huynh-Thu

In this paper, we propose a methodology to build a real-time affect detector dedicated to video viewing and entertainment applications. This detector combines the acquisition of traditional physiological signals, namely galvanic skin response, heart rate, and electromyogram, with supervised classification based on Gaussian processes. It aims at detecting the emotional impact of a video clip in a new way, by first identifying emotional events in the affective stream (a fast increase in the subject's excitation) and then assigning the associated binary valence (positive or negative) to each detected event. The study was conducted to be as close as possible to realistic conditions, in particular by minimizing the use of active calibrations and considering on-the-fly detection. Furthermore, the influence of each physiological modality is evaluated through three key scenarios (single-user, multi-user, and extended multi-user) that may be relevant for consumer applications. A complete description of the experimental protocol and processing steps is given. The performance of the detector is evaluated on manually labeled sequences, and its robustness is discussed for the different single- and multi-user contexts.
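As a rough sketch of the classification stage described above (a minimal illustration, not the authors' implementation: the per-event features, cluster values, and kernel settings below are all invented), a Gaussian-process classifier can assign a binary valence to feature vectors summarizing the physiological signals:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the paper's pipeline: each detected emotional event
# is summarized by a few physiological features (invented here: GSR rise,
# mean heart rate, EMG energy), and a Gaussian-process classifier assigns
# the binary valence of the event.
rng = np.random.default_rng(0)
pos = rng.normal(loc=[1.0, 75.0, 0.3], scale=0.2, size=(40, 3))  # positive events
neg = rng.normal(loc=[0.4, 68.0, 0.8], scale=0.2, size=(40, 3))  # negative events
X = np.vstack([pos, neg])
y = np.array([1] * 40 + [0] * 40)  # 1 = positive valence, 0 = negative

clf = make_pipeline(StandardScaler(),
                    GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0)))
clf.fit(X, y)

event = np.array([[0.9, 74.0, 0.35]])  # features of a newly detected event
print(clf.predict(event))              # predicted valence label
print(clf.predict_proba(event))        # class probabilities
```

In the paper, the features are derived from the galvanic skin response, heart rate, and electromyogram around each detected event; the synthetic clusters here exist purely to show the mechanics.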


Affective Computing and Intelligent Interaction | 2013

Affective Benchmarking of Movies Based on the Physiological Responses of a Real Audience

Julien Fleureau; Philippe Guillotel; Izabela Orlac

We propose here an objective study of the emotional impact of a movie on an audience. An affective benchmarking solution is introduced, making use of a low-intrusiveness measurement of the well-known electrodermal response. Dedicated processing of this biosignal produces a time-varying, normalized affective signal related to the significant excitation variations of the audience. Besides the new methodology, the originality of this paper stems from the fact that the framework was tested on real audiences during regular cinema shows and a film festival, covering five different movies and a total of 128 audience members.
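The idea of turning raw electrodermal traces into a normalized, time-varying excitation signal for a whole audience can be sketched as follows (a toy version under invented parameters; the paper's dedicated processing differs):

```python
import numpy as np

def excitation_signal(eda, fs, win_s=4.0):
    # Remove the slow tonic level with a moving average, keep only the
    # positive phasic rises, and normalize to [0, 1]. This is only a
    # sketch of the idea, not the paper's actual processing chain.
    win = max(1, int(win_s * fs))
    tonic = np.convolve(eda, np.ones(win) / win, mode="same")
    phasic = np.clip(eda - tonic, 0.0, None)
    peak = phasic.max()
    return phasic / peak if peak > 0 else phasic

fs = 8.0                          # hypothetical sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)
# Five simulated audience members, each with an excitation burst near t = 30 s.
audience = [np.exp(-((t - 30.0) ** 2) / 20.0)
            + 0.01 * np.random.default_rng(i).normal(size=t.size)
            for i in range(5)]
# Group-level affective signal: average of the normalized individual signals.
group = np.mean([excitation_signal(s, fs) for s in audience], axis=0)
print(round(float(t[np.argmax(group)]), 1))  # peak excitation near t = 30 s
```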


Virtual Reality Software and Technology | 2012

HapSeat: producing motion sensation with multiple force-feedback devices embedded in a seat

Fabien Danieau; Julien Fleureau; Philippe Guillotel; Nicolas Mollet; Anatole Lécuyer; Marc Christie

We introduce a novel way of simulating sensations of motion which does not require an expensive and cumbersome motion platform. Multiple force feedbacks are applied to the seated user's body to generate a sensation of motion during passive navigation: a set of force-feedback devices, such as mobile armrests or headrests, is arranged around a seat so that they can apply forces to the user. We have dubbed this new approach HapSeat. A proof of concept has been designed using three low-cost force-feedback devices, and two control models have been implemented. Results from a first user study suggest that subjective sensations of motion are reliably generated with either model. Our results pave the way toward a novel consumer device for motion effects based on our prototype.


IEEE Haptics Symposium | 2012

Framework for enhancing video viewing experience with haptic effects of motion

Fabien Danieau; Julien Fleureau; Audrey Cabec; Paul Kerbiriou; Philippe Guillotel; Nicolas Mollet; Marc Christie; Anatole Lécuyer

This work aims at enhancing a classical video viewing experience by introducing realistic haptic sensations in a consumer environment. More precisely, a complete framework to both produce and render the motion embedded in an audiovisual content is proposed to enhance a natural movie viewing session. We especially consider the case of first-person point-of-view audiovisual content and propose a general workflow to address this problem. The workflow includes a novel approach to capture both the motion and the video of the scene of interest, together with a haptic rendering system for generating a sensation of motion. A complete methodology to evaluate the relevance of our framework is finally proposed and demonstrates the interest of our approach.


IEEE Transactions on Haptics | 2013

Enhancing Audiovisual Experience with Haptic Feedback: A Survey on HAV

Fabien Danieau; Anatole Lécuyer; Philippe Guillotel; Julien Fleureau; Nicolas Mollet; Marc Christie

Haptic technology has been widely employed in applications ranging from teleoperation and medical simulation to art and design, including entertainment, flight simulation, and virtual reality. Today there is a growing interest among researchers in integrating haptic feedback into audiovisual systems. A new medium emerges from this effort: haptic-audiovisual (HAV) content. This paper presents the techniques, formalisms, and key results pertinent to this medium. We first review the three main stages of the HAV workflow: the production, distribution, and rendering of haptic effects. We then highlight the pressing need for evaluation techniques in this context and discuss the key challenges in the field. By building on existing technologies and tackling the specific challenges of enhancing the audiovisual experience with haptics, we believe the field offers exciting research perspectives with significant financial and societal stakes.


Eurographics | 2016

Automated cinematography with unmanned aerial vehicles

Quentin Galvane; Julien Fleureau; Francois-Louis Tariolle; Philippe Guillotel

The rise of Unmanned Aerial Vehicles and their increasing use in the cinema industry calls for the creation of dedicated tools. Though there is a range of techniques to automatically control drones for a variety of applications, none have considered the problem of producing cinematographic camera motion in real-time for shooting purposes. In this paper we present our approach to UAV navigation for autonomous cinematography. The contributions of this research are twofold: (i) we adapt virtual camera control techniques to UAV navigation; (ii) we introduce a drone-independent platform for high-level user interactions that integrates cinematographic knowledge. The results presented in this paper demonstrate the capacities of our tool to capture live movie scenes involving one or two moving actors.
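A minimal flavour of mapping high-level framing parameters to a drone pose (a hypothetical helper, not the paper's controller, which also handles dynamics, collision avoidance, and richer cinematographic constraints):

```python
import numpy as np

def place_camera(target, distance, azimuth_deg, height):
    # Toy screen-composition-to-position mapping: given a subject to film,
    # a shooting distance, an azimuth and a height, return the drone
    # position and a unit look-at direction toward the subject.
    az = np.radians(azimuth_deg)
    offset = np.array([distance * np.cos(az), distance * np.sin(az), height])
    pos = target + offset
    look = target - pos
    return pos, look / np.linalg.norm(look)

actor = np.array([5.0, 2.0, 0.0])
pos, look = place_camera(actor, distance=4.0, azimuth_deg=90.0, height=2.0)
print(pos)   # drone position, 4 m to the actor's side and 2 m up
print(look)  # unit vector pointing at the actor
```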


IEEE Transactions on Image Processing | 2013

Correspondence Map-Aided Neighbor Embedding for Image Intra Prediction

Safa Cherigui; Christine Guillemot; Dominique Thoreau; Philippe Guillotel; Patrick Pérez

This paper describes new image prediction methods based on neighbor embedding (NE) techniques. Neighbor embedding methods are used here to approximate an input block (the block to be predicted) as a linear combination of its K nearest neighbors. In order for the decoder to proceed similarly, however, the K nearest neighbors are found by computing distances between the known pixels in a causal neighborhood (called the template) of the input block and the co-located pixels in candidate patches taken from a causal window. Similarly, the weights used for the linear approximation are computed so as to best approximate the template pixels. Although efficient, these methods suffer from limitations when the template and the block to be predicted are not correlated, e.g., in non-homogeneous texture areas. To cope with these limitations, this paper introduces new image prediction methods based on NE techniques in which the K-NN search is done in two steps and aided, at the decoder, by a block correspondence map, hence the name map-aided neighbor embedding (MANE). An optimized variant of this approach, called oMANE, is also studied, and several alternatives are proposed for the K-NN search. The resulting prediction methods are shown to bring significant rate-distortion performance improvements over the H.264 Intra prediction modes (up to 44.75% rate saving at low bit rates).
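The core template-based NE mechanism the abstract builds on can be sketched as follows (a simplified single-step version on synthetic data, not the paper's two-step MANE/oMANE search):

```python
import numpy as np

def ne_predict(template, cand_templates, cand_blocks, k=4):
    # Template-based neighbor embedding: pick the k candidate patches whose
    # templates best match the input template, fit least-squares weights on
    # the template pixels, and apply the same weights to the candidate blocks.
    d = np.linalg.norm(cand_templates - template, axis=1)
    idx = np.argsort(d)[:k]                       # k nearest neighbors
    A = cand_templates[idx].T                     # (template_dim, k)
    w, *_ = np.linalg.lstsq(A, template, rcond=None)
    return cand_blocks[idx].T @ w                 # predicted block pixels

rng = np.random.default_rng(1)
cand_blocks = rng.uniform(0, 255, size=(50, 16))  # 50 candidate 4x4 blocks
M = rng.normal(size=(16, 12))
cand_templates = cand_blocks @ M                  # templates correlated with blocks
true_block = cand_blocks[7] + rng.normal(scale=1.0, size=16)
template = true_block @ M                         # only the template is "known"
pred = ne_predict(template, cand_templates, cand_blocks, k=4)
print(float(np.abs(pred - true_block).mean()))    # small prediction error
```

The synthetic linear relation between blocks and templates stands in for the spatial correlation that makes template matching work in real images; when that correlation breaks, the map-aided variants in the paper take over.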


International Conference on Acoustics, Speech, and Signal Processing | 2012

Hybrid template and block matching algorithm for image intra prediction

Safa Cherigui; Christine Guillemot; Dominique Thoreau; Philippe Guillotel; Patrick Pérez

Template matching has been shown to outperform the H.264 prediction modes for Intra video coding, thanks to better spatial prediction and no additional ancillary data to transmit. The method indeed works well when the template and the block to be predicted are highly correlated, e.g., in homogeneous image areas; however, it fails in areas with non-homogeneous textures. This paper explores the idea of a block-matching intra prediction algorithm which, thanks to a rate-distortion (RD) based decision mechanism, is naturally used in the image areas where template matching (TM) fails. This new method offers a significant coding gain over both the H.264 Intra prediction modes and template-matching-based prediction: measured with the Bjøntegaard metric, the TM-based algorithm and the proposed hybrid algorithm yield rate gains of up to 38.02% and 48.38%, respectively, at low bitrates when compared with H.264 Intra only.
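The Lagrangian mode decision at the heart of such a hybrid scheme can be illustrated as follows (the constants and predictions are invented; a real encoder derives the rate and lambda from the codec configuration):

```python
import numpy as np

def rd_choose(block, tm_pred, bm_pred, bm_side_info_bits, lam=0.1):
    # Lagrangian rate-distortion decision: template matching sends no side
    # information, while block matching must signal its displacement, so its
    # rate is higher. Pick the mode with the lower cost D + lambda * R.
    d_tm = np.sum((block - tm_pred) ** 2)
    d_bm = np.sum((block - bm_pred) ** 2)
    cost_tm = d_tm                              # rate ~ 0 extra bits
    cost_bm = d_bm + lam * bm_side_info_bits
    return ("TM", cost_tm) if cost_tm <= cost_bm else ("BM", cost_bm)

block = np.array([10.0, 12.0, 11.0, 13.0])
# In a textured area the template-based prediction is poor ...
tm_pred = np.array([40.0, 5.0, 30.0, 0.0])
# ... while an explicitly signalled block match is accurate.
bm_pred = np.array([10.5, 12.2, 10.8, 13.1])
mode, cost = rd_choose(block, tm_pred, bm_pred, bm_side_info_bits=12)
print(mode)  # -> BM
```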


International Conference on Image Processing | 2013

Optimized neighbor embeddings for single-image super-resolution

Mehmet Turkan; Dominique Thoreau; Philippe Guillotel

We describe a self-content single-image super-resolution algorithm based on multi-scale neighbor embeddings of small image patches. Given an input low-resolution patch, we gradually expand its size by relying on local geometric similarities of low- and high-resolution patch spaces under small scaling factors. We characterize the local geometry with K similar patches taken from an exemplar set, collecting the exemplar patch pairs from the input image and its appropriately rescaled versions. While ensuring local image compatibility through an optimization on K, we enforce image smoothness by patch overlapping and global consistency through an adaptive back-projection. Our experimental results show better performance in synthesizing natural-looking textures and sharp edges with fewer artifacts when compared to other methods.
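The global-consistency step mentioned at the end can be sketched with a basic, non-adaptive back-projection (box-filter downscaling and nearest-neighbor upsampling are assumptions of this toy version, not the paper's adaptive scheme):

```python
import numpy as np

def downscale(img, s=2):
    # Simple box-filter downscale by factor s (assumes divisible size).
    h, w = img.shape
    return img.reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def back_project(sr, lr, s=2, iters=20, step=1.0):
    # Iteratively correct the super-resolved image so that its downscaled
    # version matches the observed low-resolution input.
    for _ in range(iters):
        err = lr - downscale(sr, s)                     # LR-domain residual
        sr = sr + step * np.kron(err, np.ones((s, s)))  # upsample, add back
    return sr

rng = np.random.default_rng(0)
hr = rng.uniform(0, 1, size=(8, 8))
lr = downscale(hr)                     # observed low-resolution image
sr0 = np.kron(lr, np.ones((2, 2)))     # crude initial upscale
sr = back_project(sr0, lr, s=2)
print(np.abs(downscale(sr) - lr).max())  # consistency error driven toward 0
```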


IEEE MultiMedia | 2014

Toward Haptic Cinematography: Enhancing Movie Experiences with Camera-Based Haptic Effects

Fabien Danieau; Julien Fleureau; Philippe Guillotel; Nicolas Mollet; Marc Christie; Anatole Lécuyer

Haptics, the technology that brings tactile or force feedback to users, has great potential for enhancing movies and could lead to new immersive experiences. This article introduces haptic cinematography, which presents haptics as a new component of the filmmaker's toolkit. The authors propose a taxonomy of haptic effects and introduce new effects coupled with classical cinematographic motions to enhance the video-viewing experience. They propose two models to render haptic effects based on camera motions: the first makes the audience feel the motion of the camera, and the second provides haptic metaphors related to the semantics of the camera effect. Results from a user study suggest that these new effects improve the quality of experience. Filmmakers can use this new way of creating haptic effects to propose new immersive audiovisual experiences.
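The first rendering model, making the audience feel the camera's motion, can be sketched as a mapping from the camera trajectory to clamped force commands (the gains and limits below are illustrative, not the paper's values):

```python
import numpy as np

def camera_motion_forces(positions, dt, gain=1.0, f_max=1.0):
    # Differentiate the camera trajectory twice to estimate its acceleration,
    # scale it, and clamp the result to the actuator's maximum force.
    vel = np.gradient(positions, dt, axis=0)
    acc = np.gradient(vel, dt, axis=0)
    forces = gain * acc
    norm = np.linalg.norm(forces, axis=1, keepdims=True)
    scale = np.minimum(1.0, f_max / np.maximum(norm, 1e-9))
    return forces * scale  # per-frame 3D force commands, |f| <= f_max

dt = 1 / 24  # film frame rate
t = np.arange(0, 2, dt)[:, None]
# An accelerating forward dolly: position grows quadratically along x.
cam = np.hstack([t ** 2, np.zeros_like(t), np.zeros_like(t)])
f = camera_motion_forces(cam, dt, gain=0.4, f_max=1.0)
print(f[10])  # steady forward force while the camera accelerates
```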

Collaboration


Dive into Philippe Guillotel's collaborations.

Top Co-Authors

Fabien Danieau

French Institute for Research in Computer Science and Automation


Nicolas Mollet

French Institute for Research in Computer Science and Automation


Christine Guillemot

French Institute for Research in Computer Science and Automation
