Publication


Featured research published by Philippe Fuchs.


IEEE Transactions on Industrial Electronics | 2012

Visual Fatigue Reduction for Immersive Stereoscopic Displays by Disparity, Content, and Focus-Point Adapted Blur

Laure Leroy; Philippe Fuchs; Guillaume Moreau

As stereoscopic devices become widely used (immersion-based working environments, stereoscopically viewed movies, autostereoscopic screens, etc.), exposure to stereoscopic images can become lengthy, and some eyestrain can set in. We propose a method for reducing eyestrain induced by stereoscopic vision. After reviewing sources of eyestrain linked to stereoscopic vision, we focus on one of these sources: images with high-frequency content associated with large disparities. We put forward an algorithm for removing irritating high frequencies in high-disparity zones (i.e., for virtual objects appearing far from the real screen level). We elaborate on our testing protocol to establish, both objectively and subjectively, that this processing reduces eyestrain caused by stereoscopic vision, and we quantify the positive effects of the algorithm on the relief of eyestrain. Because the processing alters the visual quality of the virtual world, we also propose an adaptation of the method that removes this drawback by coupling eye tracking with the original processing to preserve visual quality at the focus point.
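The core idea, blurring high-frequency content only where disparity is large, can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the function name, comfort threshold `d_thresh`, and blur strength `max_sigma` are assumed values for the sketch.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def disparity_adapted_blur(image, disparity, d_thresh=5.0, max_sigma=3.0):
    """Attenuate high spatial frequencies where |disparity| is large.

    image:     2D float array (one stereo view, grayscale).
    disparity: 2D float array of per-pixel horizontal disparities (pixels).
    d_thresh and max_sigma are illustrative tuning parameters.
    """
    blurred = gaussian_filter(image, sigma=max_sigma)
    # Blend weight grows from 0 to 1 as disparity exceeds the comfort threshold,
    # so in-screen content stays sharp and far-from-screen content is smoothed.
    excess = np.clip((np.abs(disparity) - d_thresh) / d_thresh, 0.0, 1.0)
    return (1.0 - excess) * image + excess * blurred
```

The eye-tracking variant described in the abstract would additionally zero out `excess` in a region around the tracked gaze point, keeping full visual quality where the user is looking.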


Image and Vision Computing | 2004

An unified approach for a simultaneous and cooperative estimation of defocus blur and spatial shifts

François Deschênes; Djemel Ziou; Philippe Fuchs

This paper presents an algorithm for the cooperative and simultaneous estimation of two depth cues: defocus blur and spatial shifts (stereo disparities, two-dimensional (2D) motion, and/or zooming disparities). These cues are estimated from two images of the same scene acquired by a camera evolving in time and/or space, for which the intrinsic parameters are known. The algorithm is based on a generalized moment expansion. We show that the more blurred image can be expressed as a function of the partial derivatives of the two images, the blur difference, and the horizontal and vertical shifts; these depth cues can therefore be computed by solving a system of equations. The behavior of the algorithm is studied for constant and linear images, step edges, lines, and junctions, and the rules governing the choice of its parameters are discussed. The proposed algorithm is tested on synthetic and real images. The results obtained are accurate and dense. They confirm that defocus blur and spatial shifts (stereo disparities, 2D motion, and/or zooming disparities) can be computed simultaneously without using epipolar geometry. They thus implicitly show that the unified approach allows (1) blur estimation even if the spatial locations of corresponding pixels do not match perfectly, and (2) spatial-shift estimation even if some of the intrinsic parameters of the camera were modified during capture.
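A much-simplified version of this idea, expressing the inter-image difference as a linear combination of derivative terms and solving for the unknowns in the least-squares sense, can be sketched as follows. This is a crude first-order linearization for illustration only (shift terms from first derivatives, blur difference from the Laplacian), not the paper's generalized-moment expansion; the function name and model are assumptions of the sketch.

```python
import numpy as np

def estimate_shift_and_blur(i1, i2):
    """Fit i2 - i1 ~= u * dI/dx + v * dI/dy + db * laplacian(I) by
    least squares and return the coefficients (u, v, db).

    Note on sign convention: shifting the image content by s pixels
    to the right yields u ~= -s under this linearization.
    """
    ix = np.gradient(i1, axis=1)               # horizontal derivative
    iy = np.gradient(i1, axis=0)               # vertical derivative
    lap = np.gradient(ix, axis=1) + np.gradient(iy, axis=0)  # Laplacian
    a = np.stack([ix.ravel(), iy.ravel(), lap.ravel()], axis=1)
    b = (i2 - i1).ravel()
    (u, v, db), *_ = np.linalg.lstsq(a, b, rcond=None)
    return u, v, db
```

Solving one global system like this yields a single (u, v, db) triple; the dense estimates reported in the paper would come from solving such a system per pixel neighborhood instead.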


Handbook of Augmented Reality | 2011

New Augmented Reality Taxonomy: Technologies and Features of Augmented Environment

Olivier Hugues; Philippe Fuchs; Olivier Nannipieri

This article has a dual aim: first, to define augmented reality (AR) environments and, second, based on that definition, to propose a new taxonomy enabling these environments to be classified. After briefly reviewing existing classifications, we define AR by its purpose, i.e., to enable someone to carry out sensory-motor and cognitive activities in a new space combining the real environment and a virtual environment. We then present our functional taxonomy of AR environments, dividing them into two distinct groups. The first concerns the different functionalities enabling us to discover and understand our environment: an augmented perception of reality. The second corresponds to applications whose aim is to create an artificial environment. Finally, beyond this functional difference, we demonstrate that both types of AR can be considered to have a pragmatic purpose. The difference therefore seems to lie in whether or not each type of AR can free itself of location in time and space.


Tests and Proofs | 2012

Real-time adaptive blur for reducing eye strain in stereoscopic displays

Laure Leroy; Philippe Fuchs; Guillaume Moreau

Stereoscopic devices are widely used (immersion-based working environments, stereoscopically-viewed movies, auto-stereoscopic screens). In some instances, exposure to stereoscopic immersion techniques can be lengthy, and so eye strain sets in. We propose a method for reducing eye strain induced by stereoscopic vision. After reviewing sources of eye strain linked to stereoscopic vision, we focus on one of these sources: images with high frequency content associated with large disparities. We put forward an algorithm for removing the irritating high frequencies in high horizontal disparity zones (i.e., for virtual objects appearing far from the real screen level). We elaborate on our testing protocol to establish that our image processing method reduces eye strain caused by stereoscopic vision, both objectively and subjectively. We subsequently quantify the positive effects of our algorithm on the relief of eye strain and discuss further research perspectives.


Virtual Reality Software and Technology | 2013

A methodology to assess the acceptability of human-robot collaboration using virtual reality

Vincent Weistroffer; Alexis Paljic; Lucile Callebert; Philippe Fuchs

Robots are becoming more and more present in our everyday life: they are already used for domestic tasks and companionship activities, and soon they will assist humans and collaborate with them in their work. Human-robot collaboration has already been studied in industry, for ergonomics and efficiency purposes, but more from a safety than from an acceptability point of view. In this work, we focused on how people perceive robots in a collaboration task, and we proposed using virtual reality as a simulation environment to test different parameters by making users collaborate with virtual robots. A simple use case was implemented to compare different robot appearances and different robot movements. Questionnaires and physiological measures were used to assess the acceptability level of each condition in a user study. The results showed that the perception of robot movements depended on robot appearance and that a more anthropomorphic robot, in both its appearance and its movements, was not necessarily better accepted by users in a collaboration task. Finally, this preliminary use case was also an opportunity to confirm the relevance of such a methodology --- based on virtual reality, questionnaires, and physiological measures --- for future studies.


Presence: Teleoperators & Virtual Environments | 2002

Assistance for telepresence by stereovision-based augmented reality and interactivity in 3D space

Philippe Fuchs; Fawzi Nashashibi; D. Maman

In this paper, we describe the use of mixed reality as a new form of assistance for performing teleoperation tasks in remote scenes. We start with a brief classification of augmented reality, then describe the principle of our mixed-reality system for teleoperation. It tackles the problem of scene registration using a man-machine cooperative, multisensory vision system. The system provides the operator with powerful sensory feedback as well as appropriate tools to build (and automatically update) the geometric model of the perceived scene. We describe a new interactive approach combining image analysis and mixed-reality techniques for assisted 3D geometric and semantic modeling. Finally, we describe applications in nuclear plants, with results in 3D positioning.


Pattern Recognition | 2003

Improved Estimation of Defocus Blur and Spatial Shifts in Spatial Domain: A Homotopy-Based Approach

François Deschênes; Djemel Ziou; Philippe Fuchs

This paper presents a homotopy-based algorithm for the recovery of depth cues in the spatial domain. The algorithm specifically deals with defocus blur and spatial shifts, that is 2D motion, stereo disparities and/or zooming disparities. These cues are estimated from two images of the same scene acquired by a camera evolving in time and/or space. We show that they can be simultaneously computed by resolving a system of equations using a homotopy method. The proposed algorithm is tested using synthetic and real images. The results confirm that the use of a homotopy method leads to a dense and accurate estimation of depth cues. This approach has been integrated into an application for relief estimation from remotely sensed images.
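The general shape of a homotopy (continuation) method, deforming an easy problem with a known root into the target system while tracking the solution, can be sketched as follows. This is a generic textbook-style continuation with Newton correction, shown only to illustrate the technique; it is not the paper's specific depth-cue system, and the step counts are assumed values.

```python
import numpy as np

def homotopy_solve(f, jac, x0, steps=20, newton_iters=5):
    """Solve f(x) = 0 by continuation from the trivial problem
    f(x) - f(x0) = 0 (whose root is x0).

    The homotopy is h(x, t) = f(x) - (1 - t) * f(x0); at t = 0 the
    root is x0, at t = 1 it is a root of f. We step t from 0 to 1
    and apply a few Newton corrections at each step.
    """
    x = np.asarray(x0, dtype=float)
    f0 = f(x)
    for t in np.linspace(0.0, 1.0, steps + 1)[1:]:
        for _ in range(newton_iters):
            r = f(x) - (1.0 - t) * f0      # residual of h(x, t)
            x = x - np.linalg.solve(jac(x), r)
    return x
```

The appeal for the depth-cue problem described above is robustness: Newton's method alone can diverge from a poor initial guess, whereas the continuation keeps the iterate close to a solution path from start to finish.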


Robot and Human Interactive Communication | 2014

Assessing the acceptability of human-robot co-presence on assembly lines: A comparison between actual situations and their virtual reality counterparts

Vincent Weistroffer; Alexis Paljic; Philippe Fuchs; Olivier Hugues; Jean-Paul Chodacki; Pascal Ligot; Alexandre Morais

This paper focuses on the acceptability of human-robot collaboration in industrial environments. A use case was designed in which an operator and a robot had to work side by side on automotive assembly lines, with different levels of co-presence. The use case was implemented both in a physical situation and in a virtual one using virtual reality. A user study was conducted with operators from the automotive industry. The operators were asked to assess, through questionnaires, the acceptability of working side by side with the robot, and physiological measures (heart rate and skin conductance) were taken during the study. The results showed that working close to the robot imposed more constraints on the operators and required them to adapt to the robot. Moreover, an increase in skin conductance level was observed after working close to the robot. Although no significant difference was found in the questionnaire results between the physical and virtual situations, the increase in physiological measures was significant only in the physical situation. This suggests that virtual reality may be a good tool for assessing the acceptability of human-robot collaboration and drawing preliminary results through questionnaires, but that physical experiments remain necessary for a complete study, especially when dealing with physiological measures.


Symposium on 3D User Interfaces | 2013

User-defined gestural interaction: A study on gesture memorization

Jean-François Jégo; Alexis Paljic; Philippe Fuchs

In this paper we study the memorization of user-created gestures for 3D user interfaces (3DUI). General-public applications mostly use standardized gestures for interaction with simple content. This work is motivated by two application cases for which a standardized approach is not possible and user-specific or dedicated interfaces are therefore needed. The first is applications for people with limited sensory-motor abilities, for whom generic interaction methods may not be suitable. The second is creative-arts applications, for which gestural freedom is part of the creative process. In this work, users are asked to create gestures for a set of tasks in a specific phase prior to using the system. We propose a user study to explore the question of gesture memorization. Gestures are recorded and recognized with a Hidden Markov Model. Results show that it seems difficult to recall more than two abstract gestures. Affordances strongly improve memorization, whereas the use of colocalization has no significant effect.
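Recognizing a gesture with Hidden Markov Models typically means scoring the observed sequence against one trained model per gesture and keeping the most likely. A toy scaled forward algorithm for discrete observations is sketched below; the models, features, and training procedure of the paper are not specified here, so everything in this block is a generic illustration.

```python
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm for a discrete-observation HMM.

    obs: sequence of symbol indices.
    pi:  initial state probabilities, shape (N,).
    A:   state transition matrix, shape (N, N).
    B:   emission probabilities, shape (N, M).
    Returns log P(obs | model).
    """
    alpha = pi * B[:, obs[0]]
    log_l = np.log(alpha.sum())
    alpha = alpha / alpha.sum()          # rescale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]    # propagate, then weight by emission
        log_l += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return log_l
```

Classification is then `max` over the per-gesture models of this log-likelihood for the recorded sequence.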


Electronic Imaging | 2003

Enhancement of stereoscopic comfort by fast control of frequency content with wavelet transform

Nicolas Lemmer; Guillaume Moreau; Philippe Fuchs

As the scope of virtual reality applications including stereoscopic imaging becomes wider, it is quite clear that not every designer of a VR application considers its constraints in order to make correct use of stereo. Stereoscopic imagery, though not required, can be a useful tool for depth perception. It is possible to limit the depth of field, as shown by Perrin, who also investigated the link between the ability to fuse stereoscopic images (stereopsis) and local disparity and spatial-frequency content. We show how to extend and enhance this work, especially from the computational-complexity point of view. Wavelet theory allows us to define a local spatial frequency and, from it, a local measure of stereoscopic comfort. This measure is based on local spatial frequency and disparity, as well as on the observations made by Woepking. Local comfort estimation allows us to propose several filtering methods to enhance this comfort. The idea is to modify the images so that they satisfy a "stereoscopic comfort condition", defined as a threshold on the comfort measure. More technically, we seek to limit high-spatial-frequency content where disparity is high, thanks to fast algorithms.
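A local high-frequency measure of the kind described can be obtained from the detail bands of a one-level wavelet decomposition; combined with disparity it yields a per-pixel comfort score. The sketch below uses a hand-rolled Haar-like transform and illustrative thresholds (`f_max`, `d_scale`); the actual wavelet, normalization, and comfort thresholds of the paper are not reproduced here.

```python
import numpy as np

def haar_detail_energy(image):
    """One-level 2D Haar-like transform on 2x2 blocks: returns the local
    high-frequency energy (sum of squared detail coefficients),
    upsampled back to image resolution. Requires even dimensions."""
    a = image[0::2, 0::2]; b = image[0::2, 1::2]
    c = image[1::2, 0::2]; d = image[1::2, 1::2]
    lh = (a - b + c - d) / 4.0   # horizontal details
    hl = (a + b - c - d) / 4.0   # vertical details
    hh = (a - b - c + d) / 4.0   # diagonal details
    energy = lh**2 + hl**2 + hh**2
    return np.kron(energy, np.ones((2, 2)))  # back to full resolution

def comfort_map(image, disparity, f_max=0.05, d_scale=10.0):
    """Heuristic comfort score in [0, 1]: low where high local frequency
    energy coincides with large disparity. Thresholds are illustrative."""
    freq = haar_detail_energy(image)
    discomfort = (np.clip(freq / f_max, 0.0, 1.0)
                  * np.clip(np.abs(disparity) / d_scale, 0.0, 1.0))
    return 1.0 - discomfort
```

The filtering methods mentioned in the abstract would then attenuate high frequencies wherever this map falls below the chosen comfort threshold.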

Collaboration


Dive into Philippe Fuchs's collaborations.

Top Co-Authors
Domitile Lourdeaux

Centre national de la recherche scientifique


Simon Richir

Arts et Métiers ParisTech
