
Publications


Featured research published by Michel Pahud.


Conference on Computer Supported Cooperative Work | 2010

Three's company: understanding communication channels in three-way distributed collaboration

Anthony Tang; Michel Pahud; Kori Inkpen; Hrvoje Benko; John C. Tang; Bill Buxton

We explore the design of a system for three-way collaboration over a shared visual workspace, specifically in how to support three channels of communication: person, reference, and task-space. In two studies, we explore the implications of extending designs intended for dyadic collaboration to three-person groups, and the role of each communication channel. Our studies illustrate the utility of multiple configurations of users around a distributed workspace, and explore the subtleties of traditional notions of identity, awareness, spatial metaphor, and corporeal embodiments as they relate to three-way collaboration.


Human Factors in Computing Systems | 2010

Manual deskterity: an exploration of simultaneous pen + touch direct input

Ken Hinckley; Koji Yatani; Michel Pahud; Nicole Coddington; Jenny Rodenhouse; Andrew D. Wilson; Hrvoje Benko; Bill Buxton

Manual Deskterity is a prototype digital drafting table that supports both pen and touch input. We explore a division of labor between pen and touch that flows from natural human skill and differentiation of roles of the hands. We also explore the simultaneous use of pen and touch to support novel compound gestures.
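
To make the division of labor concrete, here is a minimal sketch in Python of how such input routing might look: touch (nonpreferred hand) holds objects, the pen (preferred hand) writes, and a pen tap while touch holds an object forms a compound gesture. The InputEvent/Scene API and the hold-plus-tap-to-copy mapping are illustrative assumptions, not the prototype's actual code.

    from dataclasses import dataclass

    @dataclass
    class InputEvent:
        device: str   # "pen" or "touch"
        kind: str     # "down", "move", or "up"
        x: float
        y: float

    class Scene:
        def hit_test(self, x, y):
            return "photo-1"                     # stub: object under the finger

        def copy(self, obj, x, y):
            print(f"copy {obj} to ({x}, {y})")   # stub

        def ink(self, x, y):
            print(f"ink at ({x}, {y})")          # stub

    class PenTouchRouter:
        def __init__(self, scene):
            self.scene = scene
            self.held = None                     # object currently held by touch

        def on_event(self, ev):
            if ev.device == "touch":
                if ev.kind == "down":
                    self.held = self.scene.hit_test(ev.x, ev.y)  # touch holds
                elif ev.kind == "up":
                    self.held = None
            elif ev.device == "pen" and ev.kind == "down":
                if self.held is not None:
                    # Compound gesture: pen acts on the touch-held object.
                    self.scene.copy(self.held, ev.x, ev.y)
                else:
                    self.scene.ink(ev.x, ev.y)   # pen alone writes/draws

    router = PenTouchRouter(Scene())
    router.on_event(InputEvent("touch", "down", 10, 10))  # hold an object
    router.on_event(InputEvent("pen", "down", 50, 60))    # pen tap -> copy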


User Interface Software and Technology | 2014

Sensing techniques for tablet+stylus interaction

Ken Hinckley; Michel Pahud; Hrvoje Benko; Pourang Irani; François Guimbretière; Marcel Gavriliu; Xiang 'Anthony' Chen; Fabrice Matulic; William Buxton; Andrew D. Wilson

We explore grip and motion sensing to afford new techniques that leverage how users naturally manipulate tablet and stylus devices during pen + touch interaction. We can detect whether the user holds the pen in a writing grip or tucked between his fingers. We can distinguish bare-handed inputs, such as drag and pinch gestures produced by the nonpreferred hand, from touch gestures produced by the hand holding the pen, which necessarily impart a detectable motion signal to the stylus. We can sense which hand grips the tablet, and determine the screen's relative orientation to the pen. By selectively combining these signals and using them to complement one another, we can tailor interaction to the context, such as by ignoring unintentional touch inputs while writing, or supporting contextually appropriate tools such as a magnifier for detailed stroke work that appears when the user pinches with the pen tucked between his fingers. These and other techniques can be used to impart new, previously unanticipated subtleties to pen + touch interaction on tablets.
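
A minimal sketch of the signal-combination idea: grip and motion readings gate how touches are interpreted. The StylusState record, the thresholds, and the result labels are illustrative assumptions, not the paper's actual classifiers.

    from dataclasses import dataclass

    @dataclass
    class StylusState:
        grip: str             # "writing", "tucked", or "none" (capacitive grip)
        motion_energy: float  # recent inertial energy measured on the stylus

    def classify_touch(stylus: StylusState, contact_area_cm2: float) -> str:
        # A large contact while the pen is in a writing grip is likely the
        # palm resting on the screen: ignore it.
        if stylus.grip == "writing" and contact_area_cm2 > 4.0:
            return "ignore_palm"
        # A touch coinciding with a motion spike on the stylus was probably
        # made by the hand holding the pen (the pen gets jostled).
        if stylus.grip in ("writing", "tucked") and stylus.motion_energy > 1.5:
            return "pen_hand_touch"   # e.g., pinch -> magnifier tool
        return "bare_hand_touch"      # e.g., ordinary pan/zoom gestures

    print(classify_touch(StylusState("writing", 0.2), 6.0))  # ignore_palm
    print(classify_touch(StylusState("tucked", 2.0), 1.0))   # pen_hand_touch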


Human-Computer Interaction with Mobile Devices and Services | 2013

Toward compound navigation tasks on mobiles via spatial manipulation

Michel Pahud; Ken Hinckley; Shamsi T. Iqbal; Abigail Sellen; Bill Buxton

We contrast the Chameleon Lens, which uses 3D movement of a mobile device held in the nonpreferred hand to support panning and zooming, with the Pinch-Flick-Drag metaphor of directly manipulating the view using multi-touch gestures. Lens-like approaches have significant potential because they can support navigation-selection, navigation-annotation, and other such compound tasks by off-loading navigation to the nonpreferred hand while the preferred hand annotates, marks a location, or draws a path on the screen. Our experimental results show that the Chameleon Lens is significantly slower than Pinch-Flick-Drag for the navigation subtask in isolation. But our studies also reveal that for navigation between a few known targets the lens performs significantly faster, that differences between the Chameleon Lens and Pinch-Flick-Drag rapidly diminish as users gain experience, and that in the context of a compound navigation-annotation task, the lens performs as well as Pinch-Flick-Drag despite its deficit for the navigation subtask itself.
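
The lens mapping itself can be sketched in a few lines: lateral device motion pans, and motion toward or away from the user zooms. The gains, axes, and clamp limits below are illustrative assumptions, not the study's implementation.

    class ChameleonLens:
        def __init__(self, pan_gain=2.0, zoom_gain=0.01):
            self.cx, self.cy = 0.0, 0.0   # viewport center in map units
            self.zoom = 1.0
            self.pan_gain, self.zoom_gain = pan_gain, zoom_gain

        def on_device_motion(self, dx_cm, dy_cm, dz_cm):
            """Displacement of the device since the last frame, in cm."""
            # Lateral movement pans; dividing by zoom keeps the on-screen
            # panning speed roughly constant at any zoom level.
            self.cx += self.pan_gain * dx_cm / self.zoom
            self.cy += self.pan_gain * dy_cm / self.zoom
            # Moving the device toward/away from the user zooms, like
            # raising or lowering a magnifying lens over a document.
            self.zoom *= 1.0 + self.zoom_gain * dz_cm
            self.zoom = max(0.25, min(self.zoom, 16.0))  # assumed limits

    lens = ChameleonLens()
    lens.on_device_motion(3.0, 0.0, 10.0)   # pan right, zoom in ~10%
    print(lens.cx, lens.cy, round(lens.zoom, 2))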


Human Factors in Computing Systems | 2012

Informal information gathering techniques for active reading

Ken Hinckley; Xiaojun Bi; Michel Pahud; Bill Buxton

GatherReader is a prototype e-reader with both pen and multi-touch input that illustrates several interesting design trade-offs to fluidly interleave content consumption behaviors (reading and flipping through pages) with information gathering and informal organization activities geared to active reading tasks. These choices include (1) relaxed precision for casual specification of scope; (2) multiple object collection via a visual clipboard; (3) flexible workflow via deferred action; and (4) complementary use of pen+touch. Our design affords active reading by limiting the transaction costs for secondary subtasks, while keeping users in the flow of the primary task of reading itself.
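
The "deferred action" and visual-clipboard choices amount to making gathering a cheap append during reading and postponing all filing decisions until later. A toy sketch under that assumption, with hypothetical names:

    class VisualClipboard:
        def __init__(self):
            self.items = []   # snippets gathered with casual, relaxed-scope strokes

        def gather(self, snippet):
            # Cheap during reading: no dialog, no naming, no filing decision,
            # so the reader stays in the flow of the primary task.
            self.items.append(snippet)

        def act_later(self, action):
            # Deferred action: decide what to do with the items after reading.
            for snippet in self.items:
                action(snippet)

    clip = VisualClipboard()
    clip.gather("Fig. 3 caption")
    clip.gather("highlighted paragraph")
    clip.act_later(print)   # e.g., export, email, or paste into notes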


Interactive Tabletops and Surfaces | 2013

TouchMover: actuated 3D touchscreen with haptic feedback

Michael J. Sinclair; Michel Pahud; Hrvoje Benko

This paper presents the design and development of a novel visual+haptic device that co-locates 3D stereo visualization, direct touch and touch force sensing with a robotically actuated display. Our actuated immersive 3D display, called TouchMover, is capable of providing 1D movement (up to 36 cm) and force feedback (up to 230 N) in a single dimension, perpendicular to the screen plane. In addition to describing the details of our design, we showcase how TouchMover allows the user to: 1) interact with 3D objects by pushing them on the screen with realistic force feedback, 2) touch and feel the contour of a 3D object, 3) explore and annotate volumetric medical images (e.g., MRI brain scans) and 4) experience different activation forces and stiffness when interacting with common 2D on-screen elements (e.g., buttons). We also contribute the results of an experiment that demonstrates the effectiveness of the haptic output of our device. Our results show that people are capable of disambiguating between 10 different 3D shapes with the same 2D footprint by touch alone, without any visual feedback (85% recognition rate, 12 participants).
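
A simple way to picture the force rendering is a spring law on the actuated screen: resistance grows with penetration past a virtual surface, capped at the actuator's limit. The function and constants below are illustrative assumptions, not the device's control code.

    def commanded_force(screen_z_cm: float, surface_z_cm: float,
                        stiffness_n_per_cm: float,
                        max_force_n: float = 230.0) -> float:
        """Spring-law force for the screen actuator (z grows away from user)."""
        penetration = surface_z_cm - screen_z_cm  # >0 once pushed past surface
        if penetration <= 0.0:
            return 0.0                            # free space: no resistance
        return min(stiffness_n_per_cm * penetration, max_force_n)

    # A rigid button and soft tissue differ only in stiffness:
    print(commanded_force(9.5, 10.0, stiffness_n_per_cm=400.0))  # 200.0 N
    print(commanded_force(9.5, 10.0, stiffness_n_per_cm=20.0))   # 10.0 N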


Pervasive and Mobile Computing | 2015

The utility of Magic Lens interfaces on handheld devices for touristic map navigation

Jens Grubert; Michel Pahud; Raphael Grasset; Dieter Schmalstieg; Hartmut Seichter

This paper investigates the utility of the Magic Lens metaphor on small-screen handheld devices for map navigation, given state-of-the-art computer vision tracking. We investigate both performance and user experience aspects. In contrast to previous studies, a semi-controlled field experiment (n = 18) in a ski resort indicated significantly longer task completion times for a Magic Lens compared to a Static Peephole interface in an information browsing task. A follow-up controlled laboratory study (n = 21) investigated the impact of workspace size on the performance and usability of both interfaces. We show that for small workspaces Static Peephole outperforms Magic Lens. As workspace size increases, performance becomes equivalent and subjective measurements indicate less demand and better usability for Magic Lens. Finally, we discuss the relevance of our findings for the application of Magic Lens interfaces to map interaction in touristic contexts.

Highlights: Investigation of Magic Lens and Static Peephole on smartphones for maps. Two experiments: a semi-controlled field experiment in a ski resort and a lab study. For A0-sized posters, Magic Lens is slower and less preferred. For larger workspace sizes, performance between interfaces is equivalent. Magic Lens interaction results in better usability for large workspaces.
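
The two interfaces differ mainly in where the viewport comes from: the Magic Lens derives it from the tracked device pose over the physical map, while the Static Peephole moves a virtual map with touch. A simplified sketch, with assumed scale factors and a flat linear mapping:

    def magic_lens_viewport(device_x_cm, device_y_cm, scale_px_per_cm=20.0,
                            screen_w=1080, screen_h=1920):
        """Viewport centered where the device physically hovers over the poster."""
        cx = device_x_cm * scale_px_per_cm
        cy = device_y_cm * scale_px_per_cm
        return (cx - screen_w / 2, cy - screen_h / 2, screen_w, screen_h)

    def static_peephole_pan(viewport, drag_dx_px, drag_dy_px):
        """Touch drag moves the virtual map under a fixed peephole."""
        x, y, w, h = viewport
        return (x - drag_dx_px, y - drag_dy_px, w, h)

    view = magic_lens_viewport(40.0, 30.0)      # hover over the poster
    print(static_peephole_pan(view, 100, 0))    # or drag the map by touch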


User Interface Software and Technology | 2015

Sensing Tablet Grasp + Micro-mobility for Active Reading

Dongwook Yoon; Ken Hinckley; Hrvoje Benko; François Guimbretière; Pourang Irani; Michel Pahud; Marcel Gavriliu

The orientation and repositioning of physical artefacts (such as paper documents) to afford shared viewing of content, or to steer the attention of others to specific details, is known as micro-mobility. But the role of grasp in micro-mobility has rarely been considered, much less sensed by devices. We therefore employ capacitive grip sensing and inertial motion to explore the design space of combined grasp + micro-mobility by considering three classes of technique in the context of active reading. Single user, single device techniques support grip-influenced behaviors such as bookmarking a page with a finger, but combine this with physical embodiment to allow flipping back to a previous location. Multiple user, single device techniques, such as passing a tablet to another user or working side-by-side on a single device, add fresh nuances of expression to co-located collaboration. And single user, multiple device techniques afford facile cross-referencing of content across devices. Founded on observations of grasp and micro-mobility, these techniques open up new possibilities for both individual and collaborative interaction with electronic documents.
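
As an illustration of how grip and inertial signals might combine into such techniques, here is a toy classifier; the sensor schema, thresholds, and behavior names are assumptions, not the paper's recognizers.

    from dataclasses import dataclass

    @dataclass
    class TabletSensors:
        grip_edges: set       # subset of {"left", "right", "top", "bottom"}
        tilt_deg: float       # tilt away from the holder, from the IMU
        thumb_on_page: bool   # thumb resting on the page edge of the screen

    def interpret(sensors: TabletSensors) -> str:
        # Hands on opposite edges plus a strong tilt suggests the tablet is
        # being passed or shown (multiple user, single device techniques).
        if {"left", "right"} <= sensors.grip_edges and sensors.tilt_deg > 30.0:
            return "handoff_or_show"
        # A thumb held on the page acts like a physical bookmark: flipping
        # back to that location becomes a candidate command.
        if sensors.thumb_on_page:
            return "bookmark_hold"
        return "ordinary_reading"

    print(interpret(TabletSensors({"left", "right"}, 40.0, False)))  # handoff_or_show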


IEEE Haptics Symposium | 2014

TouchMover 2.0 - 3D touchscreen with force feedback and haptic texture

Michael J. Sinclair; Michel Pahud; Hrvoje Benko

This paper presents the design and development of a novel visio-haptic device that co-locates 3D stereo visualization, direct touch, texture and touch force sensing with a robotically actuated display. Our actuated immersive 3D display, called TouchMover 2.0, is capable of providing 1D movement (up to 36 cm) and haptic screen force feedback (up to 230 N) in a single dimension perpendicular to the screen plane, and has the additional capability to render haptic texture cues via vibrotactile actuators attached to the touchscreen. We describe the details of our design and improvements, and showcase how TouchMover 2.0 allows the user to: 1) touch and feel the 3D contour and 2D texture of a topographic map, 2) interact with 3D objects by pushing them on the screen with realistic force feedback, and 3) intuitively explore and feel pseudo-tissue texture within volumetric data from medical imagery (e.g., MRI brain scans).
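
A common way to render sliding texture with a vibrotactile actuator is to scale drive amplitude by finger speed and the local roughness under the finger; the sketch below follows that scheme as an assumption, and the paper's actual drive model may differ.

    def vibro_amplitude(finger_speed_cm_s: float, roughness: float,
                        gain: float = 0.05, max_amp: float = 1.0) -> float:
        """roughness in [0, 1], e.g., sampled from a texture map under the finger."""
        if finger_speed_cm_s <= 0.0:
            return 0.0   # a static finger feels no sliding texture
        return min(gain * finger_speed_cm_s * roughness, max_amp)

    print(vibro_amplitude(10.0, 0.8))  # brisk stroke over rough terrain -> 0.4
    print(vibro_amplitude(10.0, 0.1))  # same stroke over smooth area -> 0.05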


Proceedings of the 2016 ACM International Conference on Interactive Surfaces and Spaces | 2016

GlassHands: Interaction Around Unmodified Mobile Devices Using Sunglasses

Jens Grubert; Eyal Ofek; Michel Pahud; Matthias Kranz; Dieter Schmalstieg

We present a novel approach for extending the input space around unmodified mobile devices. Using the built-in front-facing cameras of unmodified handheld devices, GlassHands estimates hand poses and gestures through their reflections in sunglasses, ski goggles or visors. GlassHands thereby creates an enlarged input space, rivaling input reach on large touch displays. We introduce the idea along with its technical concept and implementation. We demonstrate the feasibility and potential of our approach in several application scenarios, such as map browsing or drawing, using a set of interaction techniques previously possible only with modified mobile devices or on large touch displays. Our findings are backed by a user study.
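
The pipeline can be pictured as: detect the glasses region in the front-camera image, locate the hand inside that reflection, then unproject it into a virtual input plane around the device. The sketch below assumes a flat mirror and a linear mapping, purely to illustrate the geometry.

    def reflected_hand_to_input(hand_uv, glasses_bbox,
                                input_w_cm=60.0, input_h_cm=40.0):
        """hand_uv: hand centroid in image pixels; glasses_bbox: (x, y, w, h)
        of the detected lens region. Returns a point on a virtual input plane."""
        gx, gy, gw, gh = glasses_bbox
        # Normalize the hand position within the reflection...
        u = (hand_uv[0] - gx) / gw
        v = (hand_uv[1] - gy) / gh
        # ...mirror horizontally (reflections are flipped) and scale to the
        # enlarged input space around the device.
        return ((1.0 - u) * input_w_cm, v * input_h_cm)

    print(reflected_hand_to_input((320, 180), (280, 140, 120, 80)))  # (40.0, 20.0)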

Collaboration


Dive into Michel Pahud's collaboration.

Top Co-Author

Jens Grubert

Graz University of Technology
