Paul Issartel
University of Paris-Sud
Publications
Featured research published by Paul Issartel.
IEEE Transactions on Visualization and Computer Graphics | 2017
Lonni Besançon; Paul Issartel; Mehdi Ammi; Tobias Isenberg
We present the design and evaluation of an interface that combines tactile and tangible paradigms for 3D visualization. While studies have demonstrated that both tactile and tangible input can be efficient for a subset of 3D manipulation tasks, we reflect here on the possibility of combining the two complementary input types. Based on a field study and follow-up interviews, we present a conceptual framework for the use of these different interaction modalities for visualization, both separately and combined, focusing on free exploration as well as precise control. We present a prototypical application of a subset of these combined mappings for fluid dynamics data visualization using a portable, position-aware device which offers both tactile input and tangible sensing. We evaluate our approach with domain experts and report on their qualitative feedback.
International Conference on Robotics and Automation | 2014
Mohamed Yassine Tsalamlal; Paul Issartel; Nizar Ouarti; Mehdi Ammi
Haptic devices are dedicated to rendering virtual tactile stimulation. A limitation of these devices is the intrusiveness of their mechanical structures, i.e., the user needs to hold or wear the device to interact with the environment. Here, we propose a new tactile device concept named HAIR. The device is composed of a computer vision system, a mechatronic device, and air jets that stimulate the skin. We designed a first prototype and conducted a preliminary experiment to validate our concept. The interface enables tactile interaction without physical contact with any material device, providing better freedom of movement and enhancing the transparency of the interaction.
Symposium on 3D User Interfaces | 2014
Paul Issartel; Florimond Guéniat; Mehdi Ammi
Manipulating slice planes is an important task for exploring volumetric datasets. Since this task is inherently 3D, it is difficult to accomplish with standard 2D input devices. Alternative interaction techniques have been proposed for direct and natural 3D manipulation of slice planes. However, they also require bulky and dedicated hardware, making them inconvenient for everyday work. To address this issue, we adapted two of these techniques for use in a portable and self-contained handheld AR environment. The first is based on a tangible slicing tool, and the other is based on a spatially aware display. In this paper, we describe our design choices and the technical challenges encountered in this implementation. We then present the results, both objective and subjective, from an evaluation of the two slicing techniques. Our study provides new insight into the usability of these techniques in a handheld AR setting.
International Symposium on Mixed and Augmented Reality | 2016
Paul Issartel; Lonni Besançon; Tobias Isenberg; Mehdi Ammi
We present a new mixed reality approach to achieve tangible object manipulation with a single, fully portable, and self-contained device. Our solution is based on the concept of a “tangible volume.” We turn a tangible object into a handheld fish-tank display. Our approach, however, goes beyond traditional fish-tank VR in that it can be viewed from all sides, and that the tangible volume represents a volume of space that can be freely manipulated within a virtual scene. This volume can be positioned onto virtual objects to directly grasp them and to manipulate them in 3D space. We investigate this concept with a user study to evaluate the intuitiveness of using a tangible volume for grasping and manipulating virtual objects. The results show that a majority of participants spontaneously understood the idea of grasping a virtual object “through” the tangible volume.
Virtual Reality Software and Technology | 2014
Paul Issartel; Florimond Guéniat; Mehdi Ammi
Exploration of volumetric data is an essential task in many scientific fields. However, the use of standard devices, such as the 2D mouse, leads to suboptimal interaction mappings. Several VR systems provide better interaction capabilities, but they remain dedicated and expensive solutions. In this work, we propose an interface that combines tangible tools and a handheld device. This configuration allows natural and full 6-DOF interaction in a convenient, fully portable and affordable system. This paper presents our design choices for this interface and associated tangible exploration techniques.
Presence: Teleoperators & Virtual Environments | 2017
Paul Issartel; Florimond Guéniat; Tobias Isenberg; Mehdi Ammi
We examine a class of techniques for 3D object manipulation on mobile devices, in which the device’s physical motion is applied to 3D objects displayed on the device itself. This “local coupling” between input and display creates specific challenges compared to manipulation techniques designed for monitor-based or immersive virtual environments. Our work focuses specifically on the mapping between device motion and object motion. We review existing manipulation techniques and introduce a formal description of the main mappings under a common notation. Based on this notation, we analyze these mappings and their properties in order to answer crucial usability questions. We first investigate how the 3D objects should move on the screen, since the screen also moves with the mobile device during manipulation. We then investigate the effects of a limited range of manipulation and present a number of solutions to overcome this constraint. This work provides a theoretical framework to better understand the properties of locally coupled 3D manipulation mappings based on mobile device motion.
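The core distinction this work formalizes, how a device motion increment is mapped onto the displayed object's orientation, can be illustrated with a toy sketch. This is not the authors' notation: `rot_x`, `rot_z`, and `apply_device_motion` are hypothetical names, and the sketch only shows that applying the same increment in the world frame (allocentric) versus the object's own frame (egocentric) yields different results once the axes differ.

```python
import numpy as np

def rot_x(angle):
    """Rotation matrix about the x axis (radians)."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0,   c,  -s],
                     [0.0,   s,   c]])

def rot_z(angle):
    """Rotation matrix about the z axis (radians)."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[  c,  -s, 0.0],
                     [  s,   c, 0.0],
                     [0.0, 0.0, 1.0]])

def apply_device_motion(obj_rot, device_rot, egocentric):
    """Apply a sensed device rotation increment to an object's orientation.

    In the locally coupled setting, the device both senses the motion and
    displays the object, so the frame in which the increment is applied
    matters:
      - allocentric: pre-multiply, i.e. rotate the object about world axes;
      - egocentric:  post-multiply, i.e. rotate the object about its own axes.
    """
    if egocentric:
        return obj_rot @ device_rot
    return device_rot @ obj_rot
```

Because rotation matrices do not commute in general, the two mappings coincide only when the object's current orientation shares an axis with the increment; otherwise the user perceives visibly different object motion for the same physical device motion.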
Symposium on Spatial User Interaction | 2016
Paul Issartel; Lonni Besançon; Florimond Guéniat; Tobias Isenberg; Mehdi Ammi
We study user preference between allocentric and egocentric 3D manipulation on mobile devices, in a configuration where the motion of the device is applied to an object displayed on the device itself. We first evaluate this preference for translations and for rotations alone, then for full 6-DOF manipulation. We also investigate the role of contextual cues by performing this experiment in different 3D scenes. Finally, we look at the specific influence of each manipulation axis. Our results provide guidelines to help interface designers select an appropriate default mapping in this locally coupled configuration.
arXiv: Human-Computer Interaction | 2016
Lonni Besançon; Paul Issartel; Mehdi Ammi; Tobias Isenberg
Journées Visu 2017 | 2017
Lonni Besançon; Paul Issartel; Mehdi Ammi; Tobias Isenberg
Archive | 2016
Paul Issartel; Lonni Besançon; Steven Franconeri