Publication


Featured research published by Ken Hinckley.


user interface software and technology | 2000

Sensing techniques for mobile interaction

Ken Hinckley; Jeffrey S. Pierce; Michael J. Sinclair; Eric Horvitz

We describe sensing techniques motivated by unique aspects of human-computer interaction with handheld devices in mobile settings. Special features of mobile interaction include changing orientation and position, changing venues, the use of computing as auxiliary to ongoing, real-world activities like talking to a colleague, and the general intimacy of use for such devices. We introduce and integrate a set of sensors into a handheld device, and demonstrate several new functionalities engendered by the sensors, such as recording memos when the device is held like a cell phone, switching between portrait and landscape display modes by holding the device in the desired orientation, automatically powering up the device when the user picks it up to start using it, and scrolling the display using tilt. We present an informal experiment, initial usability testing results, and user reactions to these techniques.
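
The tilt-driven portrait/landscape switching described above can be pictured as a small decision rule over the two tilt angles. The Python sketch below is a minimal illustration under assumed axis conventions, threshold, and function names; it is not the authors' implementation.

```python
def choose_orientation(tilt_lr_deg: float, tilt_fb_deg: float,
                       current: str, hysteresis_deg: float = 10.0) -> str:
    """Map two-axis tilt readings to a display orientation (illustrative).

    tilt_lr_deg: left/right tilt in degrees (positive = tilted right)
    tilt_fb_deg: forward/back tilt in degrees (positive = top tilted away)
    current:     current orientation, kept unless a new reading clearly wins
    """
    # Use whichever axis dominates; the hysteresis margin avoids flapping
    # when the device is held near 45 degrees between two orientations.
    if abs(tilt_lr_deg) > abs(tilt_fb_deg) + hysteresis_deg:
        return "landscape-right" if tilt_lr_deg > 0 else "landscape-left"
    if abs(tilt_fb_deg) > abs(tilt_lr_deg) + hysteresis_deg:
        return "portrait" if tilt_fb_deg > 0 else "portrait-upside-down"
    return current  # ambiguous reading: keep the current orientation


# Example: a device rotated clearly to the right switches to landscape.
print(choose_orientation(tilt_lr_deg=60.0, tilt_fb_deg=5.0, current="portrait"))
```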


user interface software and technology | 1994

A survey of design issues in spatial input

Ken Hinckley; Randy Pausch; John C. Goble; Neal F. Kassell

We present a survey of design issues for developing effective free-space three-dimensional (3D) user interfaces. Our survey is based upon previous work in 3D interaction, our experience in developing free-space interfaces, and our informal observations of test users. We illustrate our design issues using examples drawn from instances of 3D interfaces. For example, our first issue suggests that users have difficulty understanding three-dimensional space. We offer a set of strategies which may help users to better perceive a 3D virtual environment, including the use of spatial references, relative gesture, two-handed interaction, multisensory feedback, physical constraints, and head tracking. We describe interfaces which employ these strategies. Our major contribution is the synthesis of many scattered results, observations, and examples into a common framework. This framework should serve as a guide to researchers or systems builders who may not be familiar with design issues in spatial input. Where appropriate, we also try to identify areas in free-space 3D interaction which we see as likely candidates for additional research. An extended and annotated version of the references list for this paper is available on-line through mosaic at address http://uvacs.cs.virginia.edu/~kph2q/.


user interface software and technology | 2003

Synchronous gestures for multiple persons and computers

Ken Hinckley

This research explores distributed sensing techniques for mobile devices using synchronous gestures. These are patterns of activity, contributed by multiple users (or one user with multiple devices), which take on a new meaning when they occur together in time, or in a specific sequence in time. To explore this new area of inquiry, this work uses tablet computers augmented with touch sensors and two-axis linear accelerometers (tilt sensors). The devices are connected via an 802.11 wireless network and synchronize their time-stamped sensor data. This paper describes a few practical examples of interaction techniques using synchronous gestures such as dynamically tiling together displays by physically bumping them together, discusses implementation issues, and speculates on further possibilities for synchronous gestures.
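
As a rough sketch of the bump-detection idea, the code below pairs accelerometer spike timestamps from two devices that share a synchronized clock; two spikes falling within a small window are treated as one candidate "bump" gesture. The window size and data layout are assumptions for illustration, not values from the paper.

```python
from typing import List, Tuple

def detect_bumps(spikes_a: List[float], spikes_b: List[float],
                 window_s: float = 0.05) -> List[Tuple[float, float]]:
    """Pair timestamped accelerometer spikes from two devices.

    spikes_a, spikes_b: spike timestamps (seconds) on a shared, synchronized clock.
    window_s:           maximum time difference allowed for a 'bump' pair (assumed).
    """
    spikes_a, spikes_b = sorted(spikes_a), sorted(spikes_b)
    pairs = []
    j = 0
    for ta in spikes_a:
        # Skip device B spikes that are already too old to match this spike.
        while j < len(spikes_b) and spikes_b[j] < ta - window_s:
            j += 1
        if j < len(spikes_b) and abs(spikes_b[j] - ta) <= window_s:
            pairs.append((ta, spikes_b[j]))
            j += 1  # consume the matched spike
    return pairs

# Example: only the spikes near 2.10 s coincide, so one 'bump' pair is reported.
print(detect_bumps([0.50, 2.10], [1.00, 2.12]))
```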


user interface software and technology | 2000

Speed-dependent automatic zooming for browsing large documents

Takeo Igarashi; Ken Hinckley

We propose a navigation technique for browsing large documents that integrates rate-based scrolling with automatic zooming. The view automatically zooms out when the user scrolls rapidly so that the perceptual scrolling speed in screen space remains constant. As a result, the user can efficiently and smoothly navigate through a large document without becoming disoriented by extremely fast visual flow. By incorporating semantic zooming techniques, the user can smoothly access a global overview of the document during rate-based scrolling. We implemented several prototype systems, including a web browser, map viewer, image browser, and dictionary viewer. An informal usability study suggests that for a document browsing task, most subjects prefer automatic zooming and the technique exhibits approximately equal performance time to scroll bars, suggesting that automatic zooming is a helpful alternative to traditional scrolling when the zoomed out view provides appropriate visual cues.
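
The underlying relationship, choosing a zoom factor so that the perceived scrolling speed in screen space stays roughly constant, can be sketched as zoom = target screen speed / document scroll speed, clamped to a usable range. The constants below are illustrative assumptions, not values from the paper.

```python
def auto_zoom(scroll_speed_doc: float, target_screen_speed: float = 800.0,
              min_zoom: float = 0.05, max_zoom: float = 1.0) -> float:
    """Return a zoom factor so that scroll_speed_doc * zoom stays near
    target_screen_speed (pixels per second in screen space)."""
    if scroll_speed_doc <= 0:
        return max_zoom  # not scrolling: show the document at full size
    zoom = target_screen_speed / scroll_speed_doc
    return max(min_zoom, min(max_zoom, zoom))

# Slow scrolling keeps full zoom; fast scrolling zooms the view out.
print(auto_zoom(400.0))   # -> 1.0
print(auto_zoom(8000.0))  # -> 0.1
```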


human factors in computing systems | 2000

The Task Gallery: a 3D window manager

George G. Robertson; Maarten van Dantzich; Daniel C. Robbins; Mary Czerwinski; Ken Hinckley; Kirsten Risden; David Thiel; Vadim Gorokhovsky

The Task Gallery is a window manager that uses interactive 3D graphics to provide direct support for task management and document comparison, lacking from many systems implementing the desktop metaphor. User tasks appear as artwork hung on the walls of a virtual art gallery, with the selected task on a stage. Multiple documents can be selected and displayed side-by-side using 3D space to provide uniform and intuitive scaling. The Task Gallery hosts any Windows application, using a novel redirection mechanism that routes input and output between the 3D environment and unmodified 2D Windows applications. User studies suggest that the Task Gallery helps with task management, is enjoyable to use, and that the 3D metaphor evokes spatial memory and cognition.


advanced visual interfaces | 2004

Stitching: pen gestures that span multiple displays

Ken Hinckley; Gonzalo Ramos; François Guimbretière; Patrick Baudisch; Marc A. Smith

Stitching is a new interaction technique that allows users to combine pen-operated mobile devices with wireless networking by using pen gestures that span multiple displays. To stitch, a user starts moving the pen on one screen, crosses over the bezel, and finishes the stroke on the screen of a nearby device. Properties of each portion of the pen stroke are observed by the participating devices, synchronized via wireless network communication, and recognized as a unitary act performed by one user, thus binding together the devices. We identify the general requirements of stitching and describe a prototype photo sharing application that uses stitching to allow users to copy images from one tablet to another that is nearby, expand an image across multiple screens, establish a persistent shared workspace, or use one tablet to present images that a user selects from another tablet. We also discuss design issues that arise from proxemics, that is, the sociological implications of users collaborating in close quarters.
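
The recognition step can be pictured as matching a pen stroke that exits one screen's bezel with a stroke that enters a nearby screen's bezel shortly afterwards. The sketch below uses hypothetical event records and time tolerance, and assumes the two tablets sit side by side facing the same way; it is not the paper's implementation.

```python
from dataclasses import dataclass

@dataclass
class EdgeEvent:
    device: str  # device id
    t: float     # timestamp on a shared, synchronized clock (seconds)
    edge: str    # bezel edge the pen crossed: "left", "right", "top", or "bottom"
    kind: str    # "exit" (stroke left this screen) or "enter" (stroke arrived)

# Edges that face each other when two devices are placed side by side.
OPPOSITE = {"left": "right", "right": "left", "top": "bottom", "bottom": "top"}

def is_stitch(exit_ev: EdgeEvent, enter_ev: EdgeEvent, max_gap_s: float = 0.5) -> bool:
    """A stitch: an exit on one device followed, within max_gap_s, by an enter
    on a different device at the geometrically opposite edge."""
    return (exit_ev.kind == "exit" and enter_ev.kind == "enter"
            and exit_ev.device != enter_ev.device
            and 0.0 <= enter_ev.t - exit_ev.t <= max_gap_s
            and enter_ev.edge == OPPOSITE[exit_ev.edge])

# Example: the pen leaves tablet A's right bezel and appears at tablet B's left bezel.
a = EdgeEvent("tabletA", 10.00, "right", "exit")
b = EdgeEvent("tabletB", 10.12, "left", "enter")
print(is_stitch(a, b))  # True
```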


human factors in computing systems | 2006

Hover widgets: using the tracking state to extend the capabilities of pen-operated devices

Tovi Grossman; Ken Hinckley; Patrick Baudisch; Maneesh Agrawala; Ravin Balakrishnan

We present Hover Widgets, a new technique for increasing the capabilities of pen-based interfaces. Hover Widgets are implemented by using the pen movements above the display surface, in the tracking state. Short gestures while hovering, followed by a pen down, access the Hover Widgets, which can be used to activate localized interface widgets. By using the tracking state movements, Hover Widgets create a new command layer which is clearly distinct from the input layer of a pen interface. In a formal experiment Hover Widgets were found to be faster than a more traditional command activation technique, and also reduced errors due to divided attention.
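
One way to picture the technique: the pen's recent movement in the tracking (hover) state is matched against a short gesture template, and a pen-down at the end of the match activates the widget. The direction encoding, template, and thresholds below are illustrative assumptions rather than the paper's recognizer.

```python
def segment_directions(points, min_step=5.0):
    """Convert hover-state pen samples (x, y) into coarse move directions."""
    dirs = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = x1 - x0, y1 - y0
        if abs(dx) < min_step and abs(dy) < min_step:
            continue  # ignore jitter below the movement threshold
        if abs(dx) >= abs(dy):
            dirs.append("right" if dx > 0 else "left")
        else:
            dirs.append("down" if dy > 0 else "up")  # screen y grows downward
    # Collapse repeated directions ("right right down" -> "right down").
    return [d for i, d in enumerate(dirs) if i == 0 or d != dirs[i - 1]]

def hover_widget_triggered(hover_points, pen_down: bool,
                           template=("right", "down")) -> bool:
    """Activate the widget when the hover gesture matches and the pen touches down."""
    return pen_down and tuple(segment_directions(hover_points)) == template

# Example: a short rightward-then-downward hover stroke, followed by pen down.
trace = [(0, 0), (20, 0), (40, 0), (40, 20), (40, 40)]
print(hover_widget_triggered(trace, pen_down=True))  # True
```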


human factors in computing systems | 1999

Touch-sensing input devices

Ken Hinckley; Michael J. Sinclair

We can touch things, and our senses tell us when our hands are touching something. But most computer input devices cannot detect when the user touches or releases the device or some portion of the device. Thus, adding touch sensors to input devices offers many possibilities for novel interaction techniques. We demonstrate the TouchTrackball and the Scrolling TouchMouse, which use unobtrusive capacitance sensors to detect contact from the user's hand without requiring pressure or mechanical actuation of a switch. We further demonstrate how the capabilities of these devices can be matched to an implicit interaction technique, the On-Demand Interface, which uses the passive information captured by touch sensors to fade in or fade out portions of a display depending on what the user is doing; a second technique uses explicit, intentional interaction with touch sensors for enhanced scrolling. We present our new devices in the context of a simple taxonomy of tactile input technologies. Finally, we discuss the properties of touch-sensing as an input channel in general.
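
The On-Demand Interface behaviour, fading display elements in or out according to what the touch sensors report, can be sketched as a small per-frame opacity update. The fade rates and frame time below are assumptions for illustration, not the authors' parameters.

```python
def update_opacity(opacity: float, touching: bool, dt: float,
                   fade_in_per_s: float = 4.0, fade_out_per_s: float = 1.0) -> float:
    """Move a widget's opacity toward 1.0 while the device is touched,
    and toward 0.0 once the hand releases it; dt is the frame time in seconds."""
    if touching:
        opacity += fade_in_per_s * dt    # fade in quickly on contact
    else:
        opacity -= fade_out_per_s * dt   # fade out slowly after release
    return max(0.0, min(1.0, opacity))

# Example: a toolbar fades in over a few frames while the mouse is being held.
o = 0.0
for _ in range(10):
    o = update_opacity(o, touching=True, dt=0.033)
print(round(o, 2))  # close to 1.0
```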


ACM Transactions on Computer-Human Interaction | 1998

Two-handed virtual manipulation

Ken Hinckley; Randy Pausch; Dennis R. Proffitt; Neal F. Kassell

We discuss a two-handed user interface designed to support three-dimensional neurosurgical visualization. By itself, this system is a “point design,” an example of an advanced user interface technique. In this work, we argue that in order to understand why interaction techniques do or do not work, and to suggest possibilities for new techniques, it is important to move beyond point design and to introduce careful scientific measurement of human behavioral principles. In particular, we argue that the common-sense viewpoint that “two hands save time by working in parallel” may not always be an effective way to think about two-handed interface design because the hands do not necessarily work in parallel (there is a structure to two-handed manipulation) and because two hands do more than just save time over one hand (two hands provide the user with more information and can structure how the user thinks about a task). To support these claims, we present an interface design developed in collaboration with neurosurgeons which has undergone extensive informal usability testing, as well as a pair of formal experimental studies which investigate behavioral aspects of two-handed virtual object manipulation. Our hope is that this discussion will help others to apply the lessons in our neurosurgery application to future two-handed user interface designs.


human factors in computing systems | 2005

Experimental analysis of mode switching techniques in pen-based user interfaces

Yang Li; Ken Hinckley; Zhiwei Guan; James A. Landay

Inking and gesturing are two central tasks in pen-based user interfaces. Switching between modes for entry of uninterpreted ink and entry of gestures is required by many pen-based user interfaces. Without an appropriate mode switching technique, pen-based interactions in such situations may be inefficient and cumbersome. In this paper, we investigate five techniques for switching between ink and gesture modes in pen interfaces, including a pen-pressure based mode switching technique that allows implicit mode transition. A quantitative experimental study was conducted to evaluate the performance of these techniques. The results suggest that pressing a button with the non-preferred hand offers the fastest performance, while the technique of holding the pen still is significantly slower and more prone to error than the other techniques. Pressure, while promising, did not perform as well as the non-preferred hand button with our current implementation.
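
The pressure-based technique can be imagined as a simple classifier over the pen's normalized pressure, with the non-preferred-hand button as an explicit override. The threshold below is an illustrative assumption, not a parameter from the study.

```python
def classify_mode(pressure_samples, gesture_threshold: float = 0.75,
                  nonpreferred_button_down: bool = False) -> str:
    """Decide whether a pen stroke should be treated as ink or as a gesture.

    pressure_samples:          normalized pen pressure values in [0, 1] for the stroke
    gesture_threshold:         peak pressure above which the stroke counts as a gesture
    nonpreferred_button_down:  explicit mode switch held with the other hand
    """
    if nonpreferred_button_down:
        return "gesture"  # an explicit switch always wins
    peak = max(pressure_samples, default=0.0)
    return "gesture" if peak >= gesture_threshold else "ink"

print(classify_mode([0.2, 0.4, 0.3]))                       # 'ink'
print(classify_mode([0.3, 0.8, 0.9]))                       # 'gesture'
print(classify_mode([0.2], nonpreferred_button_down=True))  # 'gesture'
```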

Collaboration


Dive into Ken Hinckley's collaborations.

Top Co-Authors

Randy Pausch

Carnegie Mellon University
