Caroline Appert
University of Paris-Sud
Publications
Featured research published by Caroline Appert.
Human Factors in Computing Systems | 2009
Caroline Appert; Shumin Zhai
This paper investigates using stroke gestures as shortcuts to menu selection. We first experimentally measured the performance and ease of learning of stroke shortcuts in comparison to keyboard shortcuts when there is no mnemonic link between the shortcut and the command. While both types of shortcuts had the same level of performance with enough practice, stroke shortcuts had substantial cognitive advantages in learning and recall. With the same amount of practice, users could successfully recall more shortcuts and make fewer errors with stroke shortcuts than with keyboard shortcuts. The second half of the paper focuses on UI development support and articulates guidelines for toolkits to implement stroke shortcuts in a wide range of software applications. We illustrate how to apply these guidelines by introducing the Stroke Shortcuts Toolkit (SST) which is a library for adding stroke shortcuts to Java Swing applications with just a few lines of code.
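For context, the keyboard shortcuts that SST's stroke shortcuts are compared against are registered in Swing through a component's InputMap and ActionMap. The sketch below shows that real Swing mechanism; the stroke-shortcut call in the comment is invented for illustration and is not SST's actual API.

    import javax.swing.*;
    import java.awt.event.ActionEvent;

    // Binding a keyboard shortcut in plain Swing: a KeyStroke maps to an
    // action name in the InputMap, and the name maps to an Action in the
    // ActionMap. A stroke-shortcut toolkit aims for similarly terse code.
    public class ShortcutDemo {
        public static void main(String[] args) {
            JTextArea editor = new JTextArea(20, 60);
            editor.getInputMap(JComponent.WHEN_IN_FOCUSED_WINDOW)
                  .put(KeyStroke.getKeyStroke("control D"), "duplicate");
            editor.getActionMap().put("duplicate", new AbstractAction() {
                @Override public void actionPerformed(ActionEvent e) {
                    editor.append(editor.getText()); // demo action: duplicate text
                }
            });
            // Hypothetical stroke analogue (invented call, not SST's real API):
            // StrokeShortcuts.bind(editor, "pigtail", deleteAction);
            JFrame frame = new JFrame("Shortcut demo");
            frame.add(new JScrollPane(editor));
            frame.pack();
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setVisible(true);
        }
    }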
Human Factors in Computing Systems | 2006
Caroline Appert; Jean-Daniel Fekete
This article introduces the OrthoZoom Scroller, a novel interaction technique that improves target acquisition in very large one-dimensional spaces. The OrthoZoom Scroller requires only a mouse to perform panning and zooming in a 1D space: panning is performed along the slider dimension while zooming is performed along the orthogonal one. We present a controlled experiment showing that the OrthoZoom Scroller is about twice as fast as Speed-Dependent Automatic Zooming for pointing tasks whose index of difficulty is in the 10-30 bit range. We also present an application for browsing large textual documents with the OrthoZoom Scroller, using semantic zooming and snapping to the document structure.
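As a minimal sketch of the idea (the exact transfer function, gain, and zoom direction are assumptions, not the paper's implementation), the controller below pans with motion along the slider axis and sets the scale from the cursor's orthogonal distance to the slider, so that moving away from the slider gives coarser, faster traversal:

    // Sketch of an OrthoZoom-style controller for a 1D document.
    public class OrthoZoomSketch {
        double position = 0;                         // offset in document units
        static final double SCALE_PER_PIXEL = 1.02;  // assumed exponential gain

        // dx: pixel delta along the slider axis since the last drag event;
        // orthogonalDistance: current pixel distance from the slider track.
        void onDrag(int dx, int orthogonalDistance) {
            // Farther from the slider -> larger scale -> coarser panning.
            double scale = Math.pow(SCALE_PER_PIXEL, orthogonalDistance);
            position += dx * scale;
        }
    }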
Human Factors in Computing Systems | 2008
Emmanuel Pietriga; Caroline Appert
Focus+context techniques such as fisheye lenses are used to navigate and manipulate objects in multi-scale worlds. They provide in-place magnification of a region without requiring users to zoom the whole representation and consequently lose context. Their adoption is however hindered by usability problems, mostly due to the nature of the transition between focus and context. Existing transitions are often based on a physical metaphor (magnifying glass, fisheye, rubber sheet) and are almost always achieved through a single dimension: space. We investigate how other dimensions, namely time and translucence, can be used to achieve more efficient transitions. We present an extension to Carpendale's framework for unifying presentation space that accommodates these new dimensions. We define new lenses in that space, called Sigma lenses, and compare them to existing lenses through experiments based on a generic task: focus targeting. Results show that one new lens, the Speed-coupled flattening lens, significantly outperforms all others.
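The speed-coupled flattening behavior can be sketched as follows (the maximum magnification, speed threshold, and linear interpolation are assumptions for illustration): the lens flattens toward no magnification while the cursor moves fast, easing focus targeting, and restores full magnification when the cursor comes to rest.

    // Sketch: lens magnification as a function of smoothed cursor speed.
    public class SpeedCoupledFlattening {
        static final double MAX_MAG = 8.0;      // assumed maximum magnification
        static final double FLAT_SPEED = 400.0; // px/s at which the lens is flat
        double magnification = MAX_MAG;

        // speed: smoothed cursor speed in pixels per second.
        void update(double speed) {
            double k = Math.min(1.0, speed / FLAT_SPEED); // 0 = still, 1 = fast
            // Interpolate between full magnification (k=0) and none (k=1).
            magnification = MAX_MAG + k * (1.0 - MAX_MAG);
        }
    }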
Foundations and Trends in Human-Computer Interaction | 2012
Shumin Zhai; Per Ola Kristensson; Caroline Appert; Tue Haste Andersen; Xiang Cao
The potential for using stroke gestures to enter, retrieve and select commands and text has recently been unleashed by the popularity of touchscreen devices. This monograph provides a state-of-the-art integrative review of a body of human–computer interaction research on stroke gestures. It begins with an analysis of the design dimensions of stroke gestures as an interaction medium. The analysis classifies gestures into analogue versus abstract gestures, gestures for commands versus for symbols, gestures with different orders of complexity, visual-spatial dependent and independent gestures, and finger versus stylus drawn gestures. Gesture interfaces such as the iOS interface, the Graffiti text entry method for Palm devices, marking menus, and the SHARK/ShapeWriter word-gesture keyboard make different choices in this multi-dimensional design space. The main body of this work consists of reviewing and synthesizing some of the foundational studies in the literature on stroke gesture interaction, particularly those done by the authors in the last decade. The human performance factors covered include motor control complexity, visual and auditory feedback, and human memory capabilities in dealing with gestures. Based on these foundational studies, this review presents a set of design principles for creating stroke gesture interfaces. These include making gestures analogous to physical effects or cultural conventions, keeping gestures simple and distinct, defining stroke gestures systematically, making them self-revealing, supporting appropriate levels of chunking, and facilitating progress from visually guided performance to recall-driven performance. The overall theme is making gestures easier to learn while designing for long-term efficiency. Important system implementation issues of stroke gesture interfaces, such as gesture recognition algorithms and gesture design toolkits, are also covered in this review. The monograph ends with a few call-to-action research topics.
Human Factors in Computing Systems | 2010
Caroline Appert; Olivier Chapuis; Emmanuel Pietriga
Focus+context interfaces provide in-place magnification of a region of the display, smoothly integrating the focus of attention into its surroundings. Two representations of the data exist simultaneously at two different scales, providing an alternative to classical pan & zoom for navigating multi-scale interfaces. For many practical applications, however, the magnification range of focus+context techniques is too limited. This paper addresses this limitation by exploring the quantization problem: the mismatch between visual and motor precision in the magnified region. We introduce three new interaction techniques that solve this problem by integrating fast navigation and high-precision interaction in the magnified region. Speed couples precision to navigation speed. Key and Ring use a discrete switch between precision levels, the former through a keyboard modifier, the latter by decoupling the cursor from the lens center. We report on three experiments showing that our techniques make interacting with lenses easier while increasing the range of practical magnification factors, and that performance can be further improved by integrating speed-dependent visual behaviors.
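To make the quantization problem concrete, here is a sketch of the Key idea as reconstructed from the abstract (gain values and names are assumptions): without the modifier, cursor motion moves the lens at context scale for fast navigation; with it, motion is divided by the magnification factor so every focus pixel becomes reachable.

    // Sketch: a modifier key switches between navigation and precision mode.
    public class KeyPrecisionSwitch {
        double lensX, lensY;                      // lens center, in context pixels
        static final double MAGNIFICATION = 8.0;  // assumed lens magnification

        void onMouseDelta(double dx, double dy, boolean precisionKeyDown) {
            // In precision mode, one pixel of hand motion moves the lens by
            // 1/MAGNIFICATION context pixels, matching motor precision to the
            // visual precision of the magnified region.
            double gain = precisionKeyDown ? 1.0 / MAGNIFICATION : 1.0;
            lensX += dx * gain;
            lensY += dy * gain;
        }
    }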
People and Computers | 2005
Caroline Appert; Michel Beaudouin-Lafon; Wendy E. Mackay
This article introduces the Complexity of Interaction Sequences model (CIS). CIS describes the structure of interaction techniques, and the SimCIS simulator uses these descriptions to predict their performance in the context of an interaction sequence. The model defines the complexity of an interaction technique as a measure of its effectiveness within a given context. We used CIS to compare three interaction techniques: fixed unimanual palettes, fixed bimanual palettes, and toolglasses. The model predicts that the complexity of both kinds of palettes depends on the interaction sequence, while toolglasses are less context-dependent. CIS also predicts that fixed bimanual palettes outperform the other two techniques. We tested these predictions empirically in a controlled experiment, which confirmed them. We argue that, in order to be generalizable, experimental comparisons of interaction techniques should include the concept of context sensitivity. CIS is a step in this direction, as it helps predict the performance of interaction techniques according to the context of use.
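CIS defines its own complexity measure, which the abstract does not spell out; purely as a toy illustration of context sensitivity (all costs and names below are invented, not CIS), note how a fixed palette's cost grows with the number of tool switches in a sequence, while a toolglass that travels with the non-dominant hand pays a near-constant cost per command:

    // Toy cost model, invented for illustration; not the CIS model itself.
    public class ContextSensitivityToy {
        static double fixedPaletteCost(String[] commands) {
            double cost = 0;
            String currentTool = null;
            for (String c : commands) {
                if (!c.equals(currentTool)) {   // round trip to the palette
                    cost += 2.0;
                    currentTool = c;
                }
                cost += 1.0;                    // perform the command
            }
            return cost;
        }
        static double toolglassCost(String[] commands) {
            return commands.length * 1.25;      // constant per command
        }
        public static void main(String[] args) {
            String[] alternating = {"pen", "eraser", "pen", "eraser"};
            String[] repeated = {"pen", "pen", "pen", "pen"};
            System.out.println(fixedPaletteCost(alternating)); // 12.0
            System.out.println(fixedPaletteCost(repeated));    // 6.0
            System.out.println(toolglassCost(alternating));    // 5.0 either way
        }
    }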
Human-Computer Interaction with Mobile Devices and Services | 2013
Daniel Spelmezan; Caroline Appert; Olivier Chapuis; Emmanuel Pietriga
Virtual navigation on a mobile touchscreen is usually performed using finger gestures: drag and flick to scroll or pan, pinch to zoom. While easy to learn and perform, these gestures cause significant occlusion of the display. They also require users to explicitly switch between navigation mode and edit mode to either change the viewport's position in the document or manipulate the actual content displayed in that viewport. SidePress augments mobile devices with two continuous pressure sensors co-located on one of their sides. It provides users with generic bidirectional navigation capabilities at different levels of granularity, all seamlessly integrated to act as an alternative to traditional navigation techniques such as scrollbars, drag-and-flick, and pinch-to-zoom. We describe the hardware prototype, detail the associated interaction vocabulary for different applications, and report on two laboratory studies. The first shows that users can precisely and efficiently control SidePress; the second, that SidePress can be more efficient than drag-and-flick touch gestures when scrolling large documents.
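As a sketch of how side pressure could drive scrolling (sensor normalization, dead band, and the quadratic transfer function are assumptions; the abstract does not give the prototype's actual mapping), the two opposing sensors pick the direction and the pressure magnitude picks the rate:

    // Sketch: map two opposing pressure readings to a signed scroll velocity.
    public class PressureScroll {
        static final double DEAD_BAND = 0.05;       // ignore resting pressure
        static final double MAX_VELOCITY = 3000.0;  // px/s at full press

        // topPressure, bottomPressure: normalized readings in [0, 1].
        double scrollVelocity(double topPressure, double bottomPressure) {
            double p = topPressure - bottomPressure; // sign picks direction
            if (Math.abs(p) < DEAD_BAND) return 0;
            // Quadratic transfer: light presses scroll slowly for fine
            // positioning, hard presses scroll fast for coarse navigation.
            return Math.signum(p) * MAX_VELOCITY * p * p;
        }
    }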
Human Factors in Computing Systems | 2010
Caroline Appert; Olivier Bau
Gesture-based interfaces provide expert users with an efficient form of interaction, but they require a learning effort from novice users. To address this problem, some online guiding techniques display all available gestures in response to partial input. However, partial input recognition algorithms are scale-dependent, while most gesture recognizers support scale independence (i.e., the same shape drawn at different scales invokes the same command). We propose an algorithm for estimating the scale of any partial input in the context of a gesture recognition system and illustrate how it can be used to improve the user experience with gesture-based systems.
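The difficulty can be illustrated with a naive estimator (this is not the paper's algorithm): a scale-independent recognizer normalizes a complete gesture by its size, but a partial stroke underestimates the final size. One crude workaround, sketched below, infers the implied scale from the arc length of the partial input relative to a template prefix, given an assumed completion fraction:

    import java.awt.geom.Point2D;
    import java.util.List;

    // Naive partial-input scale estimate; illustrative only.
    public class PartialScaleSketch {
        static double pathLength(List<Point2D> pts) {
            double len = 0;
            for (int i = 1; i < pts.size(); i++)
                len += pts.get(i).distance(pts.get(i - 1));
            return len;
        }

        // unitTemplate: a template normalized to unit size; assumedFraction:
        // the guessed portion of the gesture already drawn, in (0, 1].
        static double impliedScale(List<Point2D> partial,
                                   List<Point2D> unitTemplate,
                                   double assumedFraction) {
            double prefixLen = pathLength(unitTemplate) * assumedFraction;
            // Ratio of drawn arc length to the expected prefix arc length.
            return prefixLen > 0 ? pathLength(partial) / prefixLen : 0;
        }
    }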
L'Interaction Homme-Machine | 2011
David Bonnet; Caroline Appert
This article introduces the Swiss Army Menu (SAM), a radial menu that makes a very large number of functions available on a single small touchscreen. The design of SAM relies on four different kinds of items, support for navigating hierarchies of items, and control based on small thumb movements. SAM can thus offer a set of functions so large that it would otherwise have required more widgets than could be displayed in a single viewport at once.
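A radial menu's core selection step can be sketched as follows (the slice geometry and minimum activation distance are assumptions, not SAM's actual design): the direction of a small thumb movement from the menu center determines the selected item.

    // Sketch: pick one of n radial items from a thumb-movement vector.
    public class RadialPick {
        // Returns the selected item index, or -1 if the movement is too
        // small to be intentional.
        static int pick(double dx, double dy, int n, double minDist) {
            if (Math.hypot(dx, dy) < minDist) return -1;
            double slice = 2 * Math.PI / n;
            // atan2 is in [-pi, pi]; shift so slice 0 is centered on +x.
            double a = (Math.atan2(dy, dx) + slice / 2 + 2 * Math.PI)
                       % (2 * Math.PI);
            return (int) (a / slice) % n;
        }
    }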
L'Interaction Homme-Machine | 2006
Caroline Appert; Michel Beaudouin-Lafon
This article presents SMCanvas, an extension of the Java Swing toolkit dedicated to prototyping and teaching graphical interaction. SMCanvas uses a simplified scene graph for rendering and state machines for interaction. The use of polymorphism and reification helps combine ease of use with expressive power. We describe our experience using SMCanvas with Master's-level students to program advanced interactions, and propose evaluating user interface tools with benchmarks.
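A flavor of the state-machine style SMCanvas teaches can be given with a hand-rolled equivalent (SMCanvas's real API is not reproduced here; the Node interface is a stand-in for a scene-graph node): a two-state machine implementing drag-and-drop of a picked node.

    // Generic sketch of an interaction described as a state machine.
    public class DragStateMachine {
        interface Node { void translate(double dx, double dy); } // stand-in

        enum State { IDLE, DRAGGING }
        State state = State.IDLE;
        double lastX, lastY;

        void onPress(double x, double y) {
            if (state == State.IDLE) {
                state = State.DRAGGING;
                lastX = x; lastY = y;
            }
        }
        void onMove(double x, double y, Node picked) {
            if (state == State.DRAGGING) {
                picked.translate(x - lastX, y - lastY); // move the node
                lastX = x; lastY = y;
            }
        }
        void onRelease() { state = State.IDLE; }
    }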