Publications


Featured research published by Randy Pausch.


Human Factors in Computing Systems | 1995

Virtual reality on a WIM: interactive worlds in miniature

Richard W. Stoakley; Matthew Conway; Randy Pausch

This paper explores a user interface technique which augments an immersive head tracked display with a hand-held miniature copy of the virtual environment. We call this interface technique the Worlds in Miniature (WIM) metaphor. By establishing a direct relationship between life-size objects in the virtual world and miniature objects in the WIM, we can use the WIM as a tool for manipulating objects in the virtual environment. In addition to describing object manipulation, this paper explores ways in which Worlds in Miniature can act as a single unifying metaphor for such application independent interaction techniques as object selection, navigation, path planning, and visualization. The WIM metaphor naturally offers multiple points of view and multiple scales at which the user can operate, all without requiring explicit modes or commands. Informal user observation indicates that users adapt to the Worlds in Miniature metaphor quickly and that physical props are helpful in manipulating the WIM and other objects in the environment.
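
At its core, the WIM manipulation described above is a change of basis between the hand-held miniature and the full-scale world. The following Python sketch illustrates that mapping under stated assumptions; the 4x4 matrix representation and the function name are illustrative choices, not details taken from the paper.

    import numpy as np

    def world_pose_from_wim(wim_world_pose, miniature_world_pose):
        """Map the pose of a dragged miniature proxy back onto its life-size object.

        wim_world_pose       : 4x4 matrix placing the scaled-down WIM copy in the world.
        miniature_world_pose : 4x4 world-space pose of the miniature proxy being dragged.

        Because the WIM is a scaled replica, a pose expressed in the WIM's local
        frame is exactly the full-size object's pose in world coordinates, so a
        single change of basis suffices.
        """
        return np.linalg.inv(wim_world_pose) @ miniature_world_pose

The opposite composition, wim_world_pose @ full_size_pose, is what places the miniature proxies inside the WIM in the first place, which is why moving either copy can keep the two views consistent.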


ACM Transactions on Computer-Human Interaction | 2000

Past, present, and future of user interface software tools

Brad A. Myers; Scott E. Hudson; Randy Pausch

A user interface software tool helps developers design and implement the user interface. Research on past tools has had an enormous impact on today's developers; virtually all applications today are built using some form of user interface tool. In this article, we consider cases of both success and failure in past user interface tools. From these cases we extract a set of themes which can serve as lessons for future work. Using these themes, past tools can be characterized by what aspects of the user interface they addressed, their threshold and ceiling, what path of least resistance they offer, how predictable they are to use, and whether they addressed a target that became irrelevant. We believe the lessons of these past themes are particularly important now, because increasingly rapid technological changes are likely to significantly change user interfaces. We are at the dawn of an era where user interfaces are about to break out of the “desktop” box where they have been stuck for the past 15 years. The next millennium will open with an increasing diversity of user interfaces on an increasing diversity of computerized devices. These devices include hand-held personal digital assistants (PDAs), cell phones, pagers, computerized pens, computerized notepads, and various kinds of desk- and wall-size computers, as well as devices in everyday objects (such as those mounted on refrigerators, or even embedded in truck tires). The increased connectivity of computers, initially evidenced by the World Wide Web, but spreading also with technologies such as personal-area networks, will also have a profound effect on the user interface to computers. Another important force will be recognition-based user interfaces, especially speech, and camera-based vision systems. Other changes we see are an increasing need for 3D and end-user customization, programming, and scripting. All of these changes will require significant support from the underlying user interface software tools.


User Interface Software and Technology | 1994

A survey of design issues in spatial input

Ken Hinckley; Randy Pausch; John C. Goble; Neal F. Kassell

We present a survey of design issues for developing effective free-space three-dimensional (3D) user interfaces. Our survey is based upon previous work in 3D interaction, our experience in developing free-space interfaces, and our informal observations of test users. We illustrate our design issues using examples drawn from instances of 3D interfaces. For example, our first issue suggests that users have difficulty understanding three-dimensional space. We offer a set of strategies which may help users to better perceive a 3D virtual environment, including the use of spatial references, relative gesture, two-handed interaction, multisensory feedback, physical constraints, and head tracking. We describe interfaces which employ these strategies. Our major contribution is the synthesis of many scattered results, observations, and examples into a common framework. This framework should serve as a guide to researchers or systems builders who may not be familiar with design issues in spatial input. Where appropriate, we also try to identify areas in free-space 3D interaction which we see as likely candidates for additional research. An extended and annotated version of the reference list for this paper is available on-line through Mosaic at http://uvacs.cs.virginia.edu/~kph2q/.


Technical Symposium on Computer Science Education | 2003

Teaching objects-first in introductory computer science

Stephen Cooper; Wanda Dann; Randy Pausch

An objects-first strategy for teaching introductory computer science courses is receiving increased attention from CS educators. In this paper, we discuss the challenge of the objects-first strategy and present a new approach that attempts to meet this challenge. The new approach is centered on the visualization of objects and their behaviors using a 3D animation environment. Statistical data as well as informal observations are summarized to show evidence of student performance as a result of this approach. A comparison is made of the pedagogical aspects of this new approach with those of other relevant work.


International Conference on Computer Graphics and Interactive Techniques | 1997

Quantifying immersion in virtual reality

Randy Pausch; Dennis R. Proffitt; George H. Williams

Virtual Reality (VR) has generated much excitement but little formal proof that it is useful. Because VR interfaces are difficult and expensive to build, the computer graphics community needs to be able to predict which applications will benefit from VR. In this paper, we show that users with a VR interface complete a search task faster than users with a stationary monitor and a hand-based input device. We placed users in the center of the virtual room shown in Figure 1 and told them to look for camouflaged targets. VR users did not do significantly better than desktop users. However, when asked to search the room and conclude if a target existed, VR users were substantially better at determining when they had searched the entire room. Desktop users took 41% more time, re-examining areas they had already searched. We also found a positive transfer of training from VR to stationary displays and a negative transfer of training from stationary displays to VR.


User Interface Software and Technology | 1996

3D magic lenses

John Viega; Matthew Conway; George H. Williams; Randy Pausch

This work extends the metaphor of a see-through interface embodied in Magic Lenses™ to 3D environments. We present two new see-through visualization techniques: flat lenses in 3D and volumetric lenses. We discuss implementation concerns for platforms that have programmer-accessible hardware clipping planes and show several examples of each visualization technique. We also examine composition of multiple lenses in 3D environments, which strengthens the flat lens metaphor, but may have no meaningful semantics in the case of volumetric lenses.
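
The implementation note about hardware clipping planes suggests one concrete way a box-shaped volumetric lens might be expressed. The sketch below derives world-space plane equations from a lens transform; the unit-cube lens shape, the sign convention, and the function name are assumptions made for illustration rather than details from the paper.

    import numpy as np

    def box_lens_clip_planes(lens_to_world):
        """Six inward-facing clip planes for a unit-cube volumetric lens, in world space.

        lens_to_world : 4x4 transform placing the cube [-0.5, 0.5]^3 in the world.
        A homogeneous point X (column vector) is kept by a plane p when p @ X >= 0.
        """
        # Inward-facing planes (a, b, c, d) of the unit cube in lens-local coordinates.
        local_planes = np.array([
            [ 1.0, 0.0, 0.0, 0.5], [-1.0, 0.0, 0.0, 0.5],
            [ 0.0, 1.0, 0.0, 0.5], [ 0.0,-1.0, 0.0, 0.5],
            [ 0.0, 0.0, 1.0, 0.5], [ 0.0, 0.0,-1.0, 0.5],
        ])
        # Planes transform by the inverse of the point transform (acting as row vectors).
        return local_planes @ np.linalg.inv(lens_to_world)

Enabling all six planes clips rendering to the inside of the lens; drawing the unlensed scene outside the box needs extra passes because the outside is a union of half-spaces, which hints at why composing multiple volumetric lenses is harder to give clean semantics.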


ACM Transactions on Computer-Human Interaction | 1998

Two-handed virtual manipulation

Ken Hinckley; Randy Pausch; Dennis R. Proffitt; Neal F. Kassell

We discuss a two-handed user interface designed to support three-dimensional neurosurgical visualization. By itself, this system is a “point design,” an example of an advanced user interface technique. In this work, we argue that in order to understand why interaction techniques do or do not work, and to suggest possibilities for new techniques, it is important to move beyond point design and to introduce careful scientific measurement of human behavioral principles. In particular, we argue that the common-sense viewpoint that “two hands save time by working in parallel” may not always be an effective way to think about two-handed interface design, because the hands do not necessarily work in parallel (there is a structure to two-handed manipulation) and because two hands do more than just save time over one hand (two hands provide the user with more information and can structure how the user thinks about a task). To support these claims, we present an interface design developed in collaboration with neurosurgeons which has undergone extensive informal usability testing, as well as a pair of formal experimental studies which investigate behavioral aspects of two-handed virtual object manipulation. Our hope is that this discussion will help others to apply the lessons in our neurosurgery application to future two-handed user interface designs.


User Interface Software and Technology | 1997

Usability analysis of 3D rotation techniques

Ken Hinckley; Joe Tullio; Randy Pausch; Dennis R. Proffitt; Neal F. Kassell

We report results from a formal user study of interactive 3D rotation using the mouse-driven Virtual Sphere and Arcball techniques, as well as multidimensional input techniques based on magnetic orientation sensors. Multidimensional input is often assumed to allow users to work quickly, but at the cost of precision, due to the instability of the hand moving in the open air. We show that, at least for the orientation matching task used in this experiment, users can take advantage of the integrated degrees of freedom provided by multidimensional input without necessarily sacrificing precision: using multidimensional input, users completed the experimental task up to 36% faster without any statistically detectable loss of accuracy. We also report detailed observations of common usability problems when first encountering the techniques. Our observations suggest some design issues for 3D input devices. For example, the physical form factors of the 3D input device significantly influenced user acceptance of otherwise identical input sensors. The device should afford some tactile cues, so the user can feel its orientation without looking at it. In the absence of such cues, some test users were unsure of how to use the device.
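
For context on the mouse-driven baselines, the classic Arcball construction (due to Shoemake) maps each cursor position onto a virtual sphere and converts a drag into a quaternion. The sketch below is that standard textbook construction, not code from the study; the normalized window-coordinate convention is an assumption.

    import numpy as np

    def map_to_sphere(x, y, radius=1.0):
        """Project a 2D cursor position (normalized window coords) onto the Arcball sphere.
        Points that fall outside the sphere are pushed to its silhouette."""
        p = np.array([x, y, 0.0]) / radius
        d2 = p[0] ** 2 + p[1] ** 2
        if d2 <= 1.0:
            p[2] = np.sqrt(1.0 - d2)        # on the front of the sphere
        else:
            p[:2] /= np.sqrt(d2)            # on the equator
        return p

    def arcball_quaternion(x0, y0, x1, y1):
        """Quaternion (w, x, y, z) produced by dragging from (x0, y0) to (x1, y1)."""
        p0, p1 = map_to_sphere(x0, y0), map_to_sphere(x1, y1)
        q = np.array([np.dot(p0, p1), *np.cross(p0, p1)])  # (cos, axis * sin)
        return q / np.linalg.norm(q)

Because the quaternion is built directly from the dot and cross product of the two sphere points, the rotation it encodes is twice the arc between them, which is what gives Arcball its path-independent, hysteresis-free feel.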


Presence: Teleoperators & Virtual Environments | 1992

A literature survey for virtual environments: Military flight simulator visual systems and simulator sickness

Randy Pausch; Thomas Crea; Matthew Conway

Researchers in the field of virtual environments (VE), or virtual reality, surround a participant with synthetic stimuli. The flight simulator community, primarily in the U.S. military, has a great deal of experience with aircraft simulations, and VE researchers should be aware of the major results in this field. In this survey of the literature, we have especially focused on military literature that may be hard for traditional academics to locate via the standard journals. One of the authors of this paper is a military helicopter pilot himself, which was quite useful in obtaining access to many of our references. We concentrate on research that produces specific, measured results that apply to VE research. We assume no background other than basic knowledge of computer graphics, and explain simulator terms and concepts as necessary. This paper ends with an annotated bibliography of some harder-to-find research results in the field of flight simulators:

• The effects of display parameters, including field-of-view and scene complexity;
• The effect of lag in system response;
• The effect of refresh rate in graphics update;
• The existing theories on causes of simulator sickness; and
• The after-effects of simulator use.

Many of the results we cite are contradictory. Our global observation is that in flight simulator research, as in most human-computer interaction research, there are very few correct answers. Almost always, the answer to a specific question depends on the task the user was attempting to perform with the simulator.


International Conference on Computer Graphics and Interactive Techniques | 1995

Navigation and locomotion in virtual worlds via flight into hand-held miniatures

Randy Pausch; Tommy Burnette; Dan Brockway; Michael E. Weiblen

This paper describes the use of a World-in-Miniature (WIM) as a navigation and locomotion device in immersive virtual environments. The WIM is a hand-held miniature graphical representation of the virtual environment, similar to a map cube. When the user moves an object in the WIM, the object simultaneously moves in the surrounding virtual environment. When the user moves an iconic representation of himself, he moves (flies) in the virtual environment. Flying the user in the full-scale virtual world is confusing, because the user's focus of attention is in the miniature, not in the full-scale virtual world. We present the novel technique of flying the user into the miniature, providing perceptual and cognitive constancy when updating the viewpoint.
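
One plausible way to realize the "fly into the miniature" transition is to animate the eye position and the world scale together, so the miniature the user is attending to grows smoothly into the new full-scale world. The helper below is a hypothetical sketch of such an animation, not the paper's implementation; the linear position blend and geometric scale blend are assumptions.

    import numpy as np

    def fly_into_wim(eye_pos, eye_scale, target_pos, target_scale, t):
        """Blend the viewpoint from the full-scale world into the miniature.

        eye_pos, target_pos     : current and destination eye positions (3-vectors).
        eye_scale, target_scale : current and destination world scales
                                  (e.g. 1.0 at full scale, the WIM's scale at the destination).
        t                       : animation parameter in [0, 1].
        """
        pos = (1.0 - t) * np.asarray(eye_pos) + t * np.asarray(target_pos)
        scale = eye_scale * (target_scale / eye_scale) ** t  # geometric interpolation
        return pos, scale

Interpolating scale geometrically keeps the apparent rate of shrinking constant over the transition, which is one way to support the perceptual and cognitive constancy the abstract emphasizes.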

Collaboration


Dive into Randy Pausch's collaborations.

Top Co-Authors

Wanda Dann

Carnegie Mellon University

Jeffrey S. Pierce

Georgia Institute of Technology