Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jeffrey S. Pierce is active.

Publication


Featured research published by Jeffrey S. Pierce.


user interface software and technology | 2000

Sensing techniques for mobile interaction

Ken Hinckley; Jeffrey S. Pierce; Michael J. Sinclair; Eric Horvitz

We describe sensing techniques motivated by unique aspects of human-computer interaction with handheld devices in mobile settings. Special features of mobile interaction include changing orientation and position, changing venues, the use of computing as auxiliary to ongoing, real-world activities like talking to a colleague, and the general intimacy of use for such devices. We introduce and integrate a set of sensors into a handheld device, and demonstrate several new functionalities engendered by the sensors, such as recording memos when the device is held like a cell phone, switching between portrait and landscape display modes by holding the device in the desired orientation, automatically powering up the device when the user picks it up to start using it, and scrolling the display using tilt. We present an informal experiment, initial usability testing results, and user reactions to these techniques.
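The tilt-based switching between portrait and landscape modes described above can be sketched from accelerometer readings. This is an illustrative reconstruction, not the paper's implementation; the axis conventions, threshold, and hysteresis dead band are assumptions:

```python
import math

def display_orientation(ax, ay, current="portrait", threshold_deg=30.0):
    """Choose portrait or landscape from device tilt.

    ax, ay: gravity components along the screen's x and y axes, in g.
    A dead band around 45 degrees (hysteresis) keeps the display from
    flickering when the device is held near the switchover angle.
    """
    angle = abs(math.degrees(math.atan2(ax, ay)))  # 0 = upright portrait
    if angle < 45 - threshold_deg / 2:
        return "portrait"
    if angle > 45 + threshold_deg / 2:
        return "landscape"
    return current  # inside the dead band: keep the current mode
```

Holding the device upright (gravity along the y axis) yields portrait; rotating it a quarter turn yields landscape; intermediate angles leave the mode unchanged.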


interactive 3d graphics and games | 1997

Image plane interaction techniques in 3D immersive environments

Jeffrey S. Pierce; Andrew S. Forsberg; Matthew Conway; Seung Hong; Robert C. Zeleznik; Mark R. Mine

This paper presents a set of interaction techniques for use in head-tracked immersive virtual environments. With these techniques, the user interacts with the 2D projections that 3D objects in the scene make on his image plane. The desktop analog is the use of a mouse to interact with objects in a 3D scene based on their projections on the monitor screen. Participants in an immersive environment can use the techniques we discuss for object selection, object manipulation, and user navigation in virtual environments. CR Categories and Subject Descriptors: I.3.6 [Computer Graphics]: Methodology and Techniques - Interaction Techniques; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism - Virtual Reality. Additional Keywords: virtual worlds, virtual environments, navigation, selection, manipulation.
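The core idea of selecting a 3D object by its 2D image-plane projection can be sketched as follows. This is a minimal illustration under assumed conventions (eye looking down -z, objects reduced to center points), not the paper's actual system, which works in the head-tracked eye's frame:

```python
import math

def project(point, eye, focal=1.0):
    """Perspective-project a 3D point onto the viewer's image plane.

    Assumes the eye looks down -z with no rotation; a real system would
    first transform points into the head-tracked eye's coordinate frame.
    Returns None for points behind the viewer.
    """
    x, y, z = (p - e for p, e in zip(point, eye))
    if z >= 0:
        return None
    return (focal * x / -z, focal * y / -z)

def pick(objects, eye, cursor2d):
    """Select the object whose image-plane projection lies closest to a
    2D cursor, e.g. the point the user's fingertip occludes."""
    best, best_d = None, float("inf")
    for name, center in objects.items():
        p = project(center, eye)
        if p is None:
            continue
        d = math.hypot(p[0] - cursor2d[0], p[1] - cursor2d[1])
        if d < best_d:
            best, best_d = name, d
    return best
```

The same projection underlies the desktop analogy in the abstract: a mouse click selects whatever 3D object projects to the clicked pixel.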


interactive 3d graphics and games | 1999

Voodoo dolls: seamless interaction at multiple scales in virtual environments

Jeffrey S. Pierce; Brian C. Stearns; Randy Pausch

The Voodoo Dolls technique is a two-handed interaction technique for manipulating objects at a distance in immersive virtual environments. This technique addresses some limitations of existing techniques: they do not provide a lightweight method of interacting with objects of widely varying sizes, and many limit the objects that can be selected and the manipulations possible after making a selection. With the Voodoo Dolls technique, the user dynamically creates dolls: transient, hand-held copies of objects whose effects on the objects they represent are determined by the hand holding them. For simplicity, we assume a right-handed user in the following discussion. When a user holds a doll in his right hand and moves it relative to a doll in his left hand, the object represented by the doll in his right hand moves to the same position and orientation relative to the object represented by the doll in his left hand. The system scales the dolls so that the doll in the left hand is half a meter along its longest dimension and the other dolls maintain the same relative size; this allows the user to work seamlessly at multiple scales. The Voodoo Dolls technique also allows both visible and occluded objects to be selected, and provides a stationary frame of reference for working relative to moving objects.
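The scaling rule and the relative-motion mapping in the abstract can be sketched directly. This is a simplified illustration (positions only, orientation omitted; function names are made up for the sketch), not the technique's actual implementation:

```python
def doll_scale(object_size, target=0.5):
    """Uniform scale factor that makes the left-hand reference doll
    0.5 m along its longest dimension; applying the same factor to
    every other doll preserves relative sizes."""
    return target / max(object_size)

def apply_doll_motion(ref_object_pos, ref_doll_pos, doll_pos, scale):
    """Map the right-hand doll's position relative to the left-hand
    (reference) doll back to a world position for the object the
    right-hand doll represents."""
    offset = [(d - r) / scale for d, r in zip(doll_pos, ref_doll_pos)]
    return [p + o for p, o in zip(ref_object_pos, offset)]
```

Because the offset is divided by the scale, small hand motions over the dolls translate into proportionally large motions of distant or large objects, which is what lets the user work at multiple scales.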


user interface software and technology | 2004

A gesture-based authentication scheme for untrusted public terminals

Shwetak N. Patel; Jeffrey S. Pierce; Gregory D. Abowd

Powerful mobile devices with minimal I/O capabilities increase the likelihood that we will want to annex these devices to I/O resources we encounter in the local environment. This opportunistic annexing will require authentication. We present a sensor-based authentication mechanism for mobile devices that relies on physical possession instead of knowledge to set up the initial connection to a public terminal. Our solution provides a simple mechanism for shaking a device to authenticate with the public infrastructure, making few assumptions about the surrounding infrastructure while also maintaining a reasonable level of security.
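The shake gesture that triggers authentication can be detected from accelerometer data. The following is only a rough sketch under assumed parameters (threshold and peak count are invented for illustration), not the paper's detector:

```python
def is_shake(samples, threshold=1.5, min_peaks=4):
    """Detect a deliberate shake from accelerometer magnitudes (in g).

    Counts samples whose acceleration magnitude exceeds a threshold
    well above 1 g (gravity alone); a run of such peaks within the
    sample window is treated as an intentional shake gesture.
    """
    peaks = sum(1 for m in samples if m > threshold)
    return peaks >= min_peaks
```

Normal handling of the device stays near 1 g and produces few peaks, so it does not trigger the gesture.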


ACM Transactions on Computer-Human Interaction | 2005

Foreground and background interaction with sensor-enhanced mobile devices

Ken Hinckley; Jeffrey S. Pierce; Eric Horvitz; Michael J. Sinclair

Building on Buxton's foreground/background model, we discuss the importance of explicitly considering both foreground interaction and background interaction, as well as transitions between foreground and background, in the design and implementation of sensing techniques for sensor-enhanced mobile devices. Our view is that the foreground concerns deliberate user activity where the user is attending to the device, while the background is the realm of inattention or split attention, using naturally occurring user activity as an input that allows the device to infer or anticipate user needs. The five questions for sensing systems of Bellotti et al. [2002], proposed as a framework for this special issue, primarily address the foreground but neglect critical issues with background sensing. To support our perspective, we discuss a variety of foreground and background sensing techniques that we have implemented for sensor-enhanced mobile devices, such as powering on the device when the user picks it up, sensing when the user is holding the device to his ear, automatically switching between portrait and landscape display orientations depending on how the user is holding the device, and scrolling the display using tilt. We also contribute system architecture issues, such as using the foreground/background model to handle cross-talk between multiple sensor-based interaction techniques, and theoretical perspectives, such as a classification of recognition errors based on explicitly considering transitions between the foreground and background. Based on our experiences, we propose design issues and lessons learned for foreground/background sensing systems.


interactive 3d graphics and games | 1999

Toolspaces and glances: storing, accessing, and retrieving objects in 3D desktop applications

Jeffrey S. Pierce; Matthew Conway; Maarten van Dantzich; George G. Robertson

Users of 3D desktop applications perform tasks that require accessing data storage, moving objects, and navigation. These operations are typically performed using 2D GUI elements or 3D widgets. We wish to focus on interaction with 3D widgets directly in the 3D world, rather than forcing our users to repeatedly switch contexts between 2D and 3D. However, the use of 3D widgets requires a mechanism for storing, accessing, and retrieving these widgets. In this paper we present toolspaces and glances to provide this capability for 3D widgets and other objects in interactive 3D worlds. Toolspaces are storage spaces attached to the user’s virtual body; objects placed in these spaces are always accessible yet out of the user’s view until needed. Users access these toolspaces to store and retrieve objects through a type of lightweight and ephemeral navigation we call glances.


ieee virtual reality conference | 2004

Navigation with place representations and visible landmarks

Jeffrey S. Pierce; Randy Pausch

Existing navigation techniques do not scale well to large virtual worlds. We present a new technique, navigation with place representations and visible landmarks, that scales from town-sized to planet-sized worlds. Visible landmarks make distant landmarks visible and allow users to travel relative to those landmarks with a single gesture. Actual and symbolic place representations allow users to detect and travel to more distant locations with a small number of gestures. The world's semantic place hierarchy determines which visible landmarks and place representations users can see at any point in time. We present experimental results demonstrating that our technique allows users to navigate more efficiently than a modified panning and zooming WIM, completing within-place navigation tasks 22% faster and between-place tasks 38% faster on average.


ubiquitous computing | 2004

From devices to tasks: automatic task prediction for personalized appliance control

Charles Lee Isbell; Olufisayo Omojokun; Jeffrey S. Pierce

One of the driving applications of ubiquitous computing is universal appliance interaction: the ability to use arbitrary mobile devices to interact with arbitrary appliances, such as TVs, printers, and lights. Because of limited screen real estate and the plethora of devices and commands available to the user, a central problem in achieving this vision is predicting which appliances and devices the user wishes to use next in order to make interfaces for those devices available. We believe that universal appliance interaction is best supported through the deployment of appliance user interfaces (UIs) that are personalized to a user’s habits and information needs. In this paper, we suggest that, in a truly ubiquitous computing environment, the user will not necessarily think of devices as separate entities; therefore, rather than focus on which device the user may want to use next, we present a method for automatically discovering the user’s common tasks (e.g., watching a movie or surfing TV channels), predicting the task that the user wishes to engage in, and generating an appropriate interface that spans multiple devices. We have several results. We show that it is possible to discover and cluster collections of commands that represent tasks and to use history to predict the next task reliably. In fact, we show that moving from devices to tasks is not only a useful way of representing our core problem, but that it is, in fact, an easier problem to solve. Finally, we show that tasks can vary from user to user.
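History-based prediction of the next task can be illustrated with a first-order Markov model over discovered tasks. This is a minimal sketch of the general idea, not the paper's actual prediction method; the task names are invented examples:

```python
from collections import Counter, defaultdict

class TaskPredictor:
    """First-order Markov predictor over tasks.

    Counts task-to-task transitions observed in the user's history and
    predicts the most frequent successor of the current task.
    """

    def __init__(self):
        self.transitions = defaultdict(Counter)
        self.prev = None

    def observe(self, task):
        """Record one task from the user's history, in order."""
        if self.prev is not None:
            self.transitions[self.prev][task] += 1
        self.prev = task

    def predict(self, current):
        """Return the most likely next task, or None if unseen."""
        successors = self.transitions.get(current)
        if not successors:
            return None
        return successors.most_common(1)[0][0]
```

Feeding the predictor a history of task labels (e.g. clusters of appliance commands) lets a controller pre-generate the interface for the task the user is most likely to start next.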


Presence: Teleoperators & Virtual Environments | 1999

Designing A Successful HMD-Based Experience

Jeffrey S. Pierce; Randy Pausch; Christopher B. Sturgill; Kevin Christiansen

For entertainment applications, a successful virtual experience based on a head-mounted display (HMD) needs to overcome some or all of the following problems: entering a virtual world is a jarring experience, people do not naturally turn their heads or talk to each other while wearing an HMD, putting on the equipment is hard, and people do not realize when the experience is over. In the Electric Garden at SIGGRAPH 97, we presented the Mad Hatter's Tea Party, a shared virtual environment experienced by more than 1,500 SIGGRAPH attendees. We addressed these HMD-related problems with a combination of back story, see-through HMDs, virtual characters, continuity of real and virtual objects, and the layout of the physical and virtual environments.


advanced visual interfaces | 2006

Understanding the whethers, hows, and whys of divisible interfaces

Heather M. Hutchings; Jeffrey S. Pierce

Users are increasingly shifting from interacting with a single, personal computer to interacting across multiple, heterogeneous devices. We present results from a pair of studies investigating specifically how and why users might divide an application's interface across devices in private, semi-private, and public environments. Our results suggest that users are interested in dividing interfaces in all of these environments. While the types of divisions and reasons for dividing varied across users and environments, common themes were that users divided interfaces to improve interaction, to share information, and to balance usability and privacy. Based on our results, we present implications for the design of divisible interfaces.

Collaboration


Dive into Jeffrey S. Pierce's collaborations.

Top Co-Authors

Randy Pausch | Carnegie Mellon University
Brian C. Stearns | Carnegie Mellon University
Charles Lee Isbell | Georgia Institute of Technology
Gregory D. Abowd | Georgia Institute of Technology