Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Scott E. Hudson is active.

Publication


Featured research published by Scott E. Hudson.


ACM Transactions on Computer-Human Interaction | 2000

Past, present, and future of user interface software tools

Brad A. Myers; Scott E. Hudson; Randy Pausch

A user interface software tool helps developers design and implement the user interface. Research on past tools has had enormous impact on today's developers—virtually all applications today are built using some form of user interface tool. In this article, we consider cases of both success and failure in past user interface tools. From these cases we extract a set of themes which can serve as lessons for future work. Using these themes, past tools can be characterized by what aspects of the user interface they addressed, their threshold and ceiling, what path of least resistance they offer, how predictable they are to use, and whether they addressed a target that became irrelevant. We believe the lessons of these past themes are particularly important now, because increasingly rapid technological changes are likely to significantly change user interfaces. We are at the dawn of an era where user interfaces are about to break out of the “desktop” box where they have been stuck for the past 15 years. The next millennium will open with an increasing diversity of user interfaces on an increasing diversity of computerized devices. These devices include hand-held personal digital assistants (PDAs), cell phones, pagers, computerized pens, computerized notepads, and various kinds of desk- and wall-size computers, as well as devices in everyday objects (such as those mounted on refrigerators, or even embedded in truck tires). The increased connectivity of computers, initially evidenced by the World Wide Web, but spreading also with technologies such as personal-area networks, will also have a profound effect on the user interface to computers. Another important force will be recognition-based user interfaces, especially speech, and camera-based vision systems. Other changes we see are an increasing need for 3D and end-user customization, programming, and scripting. All of these changes will require significant support from the underlying user interface software tools.


Conference on Computer Supported Cooperative Work | 1996

Techniques for addressing fundamental privacy and disruption tradeoffs in awareness support systems

Scott E. Hudson; Ian E. Smith

This paper describes a fundamental dual tradeoff that occurs in systems supporting awareness for distributed work groups, and presents several specific new techniques which illustrate good compromise points within this tradeoff space. This dual tradeoff is between privacy and awareness, and between awareness and disturbance. Simply stated, the more information about yourself that leaves your work area, the more potential for awareness of you exists for your colleagues. Unfortunately, this also represents the greatest potential for intrusion on your privacy. Similarly, the more information that is received about the activities of colleagues, the more potential awareness we have of them. However, at the same time, the more information we receive, the greater the chance that the information will become a disturbance to our normal work. This dual tradeoff seems to be a fundamental one. However, by carefully examining awareness problems in the light of this tradeoff, it is possible to devise techniques which expose new points in the design space. These new points provide different types and quantities of information so that awareness can be achieved without invading the privacy of the sender, or creating a disturbance for the receiver. This paper presents four such techniques, each based on a careful selection of the information transmitted.
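
As a rough illustration of the "carefully selected, lower-fidelity information" idea described above, the sketch below publishes only a coarse, infrequent activity indicator instead of a raw audio stream. The function names, thresholds, and the injected read_mic/send callables are hypothetical assumptions; this is a generic example of trading awareness fidelity for privacy, not one of the paper's four techniques.

```python
# Minimal sketch, assuming a hypothetical read_mic()/send() interface: transmit
# only a coarse, infrequent activity summary rather than raw audio, so colleagues
# gain some awareness without a strong intrusion on the sender's privacy.
import time
from statistics import mean

def coarse_activity_level(mic_samples):
    """Collapse raw microphone energy samples into a three-level indicator."""
    level = mean(abs(s) for s in mic_samples)
    if level < 0.05:
        return "quiet"
    if level < 0.3:
        return "some activity"
    return "busy"

def publish_awareness(read_mic, send, period_s=60.0):
    """Every period_s seconds, send only the coarse indicator, never raw audio."""
    while True:
        samples = read_mic(duration_s=5.0)    # brief local sample window
        send(coarse_activity_level(samples))  # low-fidelity summary leaves the office
        time.sleep(period_s)
```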


Human Factors in Computing Systems | 1997

Making computers easier for older adults to use: area cursors and sticky icons

Aileen Worden; Neff Walker; Krishna Bharat; Scott E. Hudson

The normal effects of aging include some decline in cognitive, perceptual, and motor abilities. This can have a negative effect on the performance of a number of tasks, including basic pointing and selection tasks common to today’s graphical user interfaces. This paper describes a study of the effectiveness of two interaction techniques, area cursors and sticky icons, in improving the performance of older adults in basic selection tasks. The study described here indicates that when combined, these techniques can decrease target selection times for older adults by as much as 50% when applied to the most difficult cases (smallest selection targets). At the same time these techniques are shown not to impede performance in cases known to be problematic for related techniques (e.g., differentiation between closely spaced targets) and to provide similar but smaller benefits for younger users.
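
The two techniques lend themselves to a short sketch. Below, an area cursor is hit-tested as a square region rather than a single pixel, and a sticky icon lowers the control-display gain while the cursor is over a target. The cursor size and gain value are illustrative assumptions, not the parameters used in the study.

```python
# Illustrative sketch (not the authors' implementation) of the two techniques:
# area cursors enlarge the effective activation region, sticky icons slow the
# cursor while it is over a target so small motor slips do not exit it.
from dataclasses import dataclass

@dataclass
class Target:
    x: float
    y: float
    w: float
    h: float

CURSOR_HALF = 16    # area-cursor half-width in pixels (assumed value)
STICKY_GAIN = 0.25  # movement scale while over a target (assumed value)

def area_cursor_hit(cx, cy, t: Target) -> bool:
    """Target is selectable if it intersects the cursor's area, not just its tip."""
    return (abs(cx - (t.x + t.w / 2)) <= CURSOR_HALF + t.w / 2 and
            abs(cy - (t.y + t.h / 2)) <= CURSOR_HALF + t.h / 2)

def apply_sticky_gain(dx, dy, over_target: bool):
    """Reduce control-display gain while the cursor is inside a target."""
    g = STICKY_GAIN if over_target else 1.0
    return dx * g, dy * g
```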


ACM Transactions on Computer-Human Interaction | 2005

Predicting human interruptibility with sensors

James Fogarty; Scott E. Hudson; Christopher G. Atkeson; Daniel Avrahami; Jodi Forlizzi; Sara Kiesler; Johnny Chung Lee; Jie Yang

A person seeking another person's attention is normally able to quickly assess how interruptible the other person currently is. Such assessments allow behavior that we consider natural, socially appropriate, or simply polite. This is in sharp contrast to current computer and communication systems, which are largely unaware of the social situations surrounding their usage and the impact that their actions have on these situations. If systems could model human interruptibility, they could use this information to negotiate interruptions at appropriate times, thus improving human-computer interaction. This article presents a series of studies that quantitatively demonstrate that simple sensors can support the construction of models that estimate human interruptibility as well as people do. These models can be constructed without using complex sensors, such as vision-based techniques, and therefore their use in everyday office environments is both practical and affordable. Although currently based on a demographically limited sample, our results indicate a substantial opportunity for future research to validate these results over larger groups of office workers. Our results also motivate the development of systems that use these models to negotiate interruptions at socially appropriate times.
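
The general modeling recipe can be sketched briefly: binary features derived from simple sensors feed a standard classifier that predicts whether a person is interruptible. The feature names, toy training rows, and the choice of scikit-learn's DecisionTreeClassifier below are illustrative assumptions, not the article's actual sensor set or models.

```python
# Hedged sketch of sensor-based interruptibility modeling; features and data
# are placeholders, not the study's sensors or self-report corpus.
from sklearn.tree import DecisionTreeClassifier

FEATURES = ["someone_talking", "phone_in_use", "keyboard_active", "door_closed"]

# Each row: sensor readings over a short window; label 1 = "do not interrupt".
X = [
    [1, 0, 0, 1],
    [0, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
]
y = [1, 0, 1, 0]

model = DecisionTreeClassifier(max_depth=3).fit(X, y)

def interruptible_now(sensor_window: dict) -> bool:
    """Predict interruptibility from the latest window of simple sensor readings."""
    row = [[sensor_window[f] for f in FEATURES]]
    return bool(model.predict(row)[0] == 0)
```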


Human Factors in Computing Systems | 2003

Predicting human interruptibility with sensors: a Wizard of Oz feasibility study

Scott E. Hudson; James Fogarty; Christopher G. Atkeson; Daniel Avrahami; Jodi Forlizzi; Sara Kiesler; Johnny Chung Lee; Jie Yang

A person seeking someone else's attention is normally able to quickly assess how interruptible they are. This assessment allows for behavior we perceive as natural, socially appropriate, or simply polite. On the other hand, today's computer systems are almost entirely oblivious to the human world they operate in, and typically have no way to take into account the interruptibility of the user. This paper presents a Wizard of Oz study exploring whether, and how, robust sensor-based predictions of interruptibility might be constructed, which sensors might be most useful to such predictions, and how simple such sensors might be. The study simulates a range of possible sensors through human coding of audio and video recordings. Experience sampling is used to simultaneously collect randomly distributed self-reports of interruptibility. Based on these simulated sensors, we construct statistical models predicting human interruptibility and compare their predictions with the collected self-report data. The results of these models, although covering a demographically limited sample, are very promising, with the overall accuracy of several models reaching about 78%. Additionally, a model tuned to avoiding unwanted interruptions does so for 90% of its predictions, while retaining 75% overall accuracy.
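
The "model tuned to avoiding unwanted interruptions" can be pictured as threshold selection on a predicted probability: raise the cutoff for declaring someone interruptible until the rate of intruding on people who reported being busy falls within a budget. The sketch below is a generic illustration of that tuning step; the function, arrays, and 10% budget are assumptions, not the paper's procedure.

```python
# Minimal sketch of tuning a probability threshold to limit unwanted interruptions,
# given held-out predicted probabilities and self-reported "busy" labels.
import numpy as np

def pick_threshold(p_interruptible: np.ndarray, truly_busy: np.ndarray,
                   max_intrusion_rate: float = 0.10) -> float:
    """Return the lowest cutoff whose intrusion rate (interrupting someone who
    reported being busy) stays at or below max_intrusion_rate."""
    for thresh in np.linspace(0.5, 0.99, 50):
        would_interrupt = p_interruptible >= thresh
        intrusions = np.logical_and(would_interrupt, truly_busy).sum()
        if would_interrupt.sum() == 0 or intrusions / would_interrupt.sum() <= max_intrusion_rate:
            return float(thresh)
    return 0.99
```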


User Interface Software and Technology | 2009

Abracadabra: wireless, high-precision, and unpowered finger input for very small mobile devices

Chris Harrison; Scott E. Hudson

We present Abracadabra, a magnetically driven input technique that offers users wireless, unpowered, high-fidelity finger input for mobile devices with very small screens. By extending the input area to many times the size of the device's screen, our approach is able to offer a high C-D gain, enabling fine motor control. Additionally, screen occlusion can be reduced by moving interaction off of the display and into unused space around the device. We discuss several example applications as a proof of concept. Finally, results from our user study indicate that radial targets as small as 16 degrees can achieve greater than 92% selection accuracy, outperforming comparable radial, touch-based finger input.
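
The core sensing step can be sketched as converting a magnetometer reading into an angle around the device and hit-testing it against a radial target of a given angular width. The 16-degree width echoes the result quoted above, but the sensor interface and hit-test below are illustrative assumptions rather than the Abracadabra implementation.

```python
# Illustrative sketch (not the paper's implementation): map a 2-axis magnetometer
# reading to an angle around the device, then test it against a radial target.
import math

def finger_angle(mag_x: float, mag_y: float) -> float:
    """Angle of the magnet-bearing finger around the device, in [0, 360)."""
    return math.degrees(math.atan2(mag_y, mag_x)) % 360.0

def within_radial_target(angle_deg: float, target_center_deg: float,
                         width_deg: float = 16.0) -> bool:
    """True if the angle lies inside a radial target width_deg degrees wide."""
    diff = (angle_deg - target_center_deg + 180.0) % 360.0 - 180.0
    return abs(diff) <= width_deg / 2.0

# Example: a magnet roughly to the device's upper right selects the 45-degree target.
print(within_radial_target(finger_angle(0.7, 0.7), target_center_deg=45.0))  # True
```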


Human Factors in Computing Systems | 1997

PaperLink: a technique for hyperlinking from real paper to electronic content

Toshifumi Arai; Dietmar Aust; Scott E. Hudson

Paper is a very convenient medium for presenting information. It is familiar, flexible, portable, inexpensive, user modifiable, and offers better readability properties than existing electronic displays. However, paper displays are static and do not offer capabilities such as dynamic content, and hyperlinking that can be provided with electronic media. PaperLink is a system which augments paper documents with electronic features. PaperLink uses a highlighter pen augmented with a camera, along with simple computer vision and pattern recognition techniques, to allow a user to make marks on paper which can have associations and meaning in an electronic world, and to “pick up” printed material for use as electronic input. This paper will consider the prototype PaperLink hardware and software system, and its application to hyperlinking from paper to electronic content.
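
At its core, the linking step can be thought of as matching a captured patch of the page against previously registered marks and returning the associated electronic resource. The small lookup class below is a loose, hypothetical stand-in for PaperLink's vision and pattern-recognition pipeline; the feature vectors, distance threshold, and class name are assumptions.

```python
# Hypothetical sketch of the paper-to-electronic-content lookup: match a feature
# vector extracted from the pen camera's image against stored templates and
# return the associated resource. The matching is a placeholder for the system's
# actual computer vision and pattern recognition.
import numpy as np

class PaperLinkTable:
    def __init__(self):
        self.templates = []  # list of (feature vector, URL) pairs

    def register(self, features: np.ndarray, url: str) -> None:
        """Associate a captured mark's features with an electronic resource."""
        self.templates.append((features, url))

    def lookup(self, features: np.ndarray, max_dist: float = 0.2):
        """Return the URL of the closest stored mark, if it is close enough."""
        if not self.templates:
            return None
        dists = [np.linalg.norm(features - f) for f, _ in self.templates]
        i = int(np.argmin(dists))
        return self.templates[i][1] if dists[i] <= max_dist else None
```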


User Interface Software and Technology | 2004

Haptic pen: a tactile feedback stylus for touch screens

Johnny Chung Lee; Paul H. Dietz; Darren Leigh; William S. Yerazunis; Scott E. Hudson

In this paper we present a system for providing tactile feedback for stylus-based touch-screen displays. The Haptic Pen is a simple, low-cost device that provides individualized tactile feedback for multiple simultaneous users and can operate on large touch screens as well as ordinary surfaces. A pressure-sensitive stylus is combined with a small solenoid to generate a wide range of tactile sensations. The physical sensations generated by the Haptic Pen can be used to enhance our existing interaction with graphical user interfaces as well as to help make modern computing systems more accessible to those with visual or motor impairments.
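
One way to picture the device is as a small control loop: when tip pressure crosses a threshold, a short solenoid pulse is fired so the pen "clicks" against the surface. The loop below is a hypothetical sketch; the pressure threshold, pulse length, and the read_pressure/solenoid_on/solenoid_off callables are assumptions, not the Haptic Pen's actual firmware.

```python
# Hypothetical control loop, not the Haptic Pen firmware: fire a short solenoid
# pulse when pen-tip pressure crosses a threshold. Varying pulse length or
# repeating pulses would render different sensations (e.g., click vs. buzz).
import time

PRESS_THRESHOLD = 0.3   # normalized tip pressure (assumed value)
CLICK_PULSE_S = 0.005   # one short pulse ~ a crisp click (assumed value)

def tactile_loop(read_pressure, solenoid_on, solenoid_off):
    was_down = False
    while True:
        down = read_pressure() > PRESS_THRESHOLD
        if down and not was_down:        # pen just made contact with the surface
            solenoid_on()
            time.sleep(CLICK_PULSE_S)    # single pulse rendered at pen-down
            solenoid_off()
        was_down = down
        time.sleep(0.001)                # ~1 kHz polling of the pressure sensor
```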


Human Factors in Computing Systems | 2009

Providing dynamically changeable physical buttons on a visual display

Chris Harrison; Scott E. Hudson

Physical buttons have the unique ability to provide low-attention and vision-free interactions through their intuitive tactile cues. Unfortunately, the physicality of these interfaces makes them static, limiting the number and types of user interfaces they can support. On the other hand, touch screen technologies provide the ultimate interface flexibility, but offer no inherent tactile qualities. In this paper, we describe a technique that seeks to occupy the space between these two extremes, offering some of the flexibility of touch screens while retaining the beneficial tactile properties of physical interfaces. The outcome of our investigations is a visual display that contains deformable areas, able to produce physical buttons and other interface elements. These tactile features can be dynamically brought into and out of the interface, and otherwise manipulated under program control. The surfaces we describe provide the full dynamics of a visual display (through rear projection) as well as allowing for multitouch input (through an infrared lighting and camera setup behind the display). To illustrate the tactile capabilities of the surfaces, we describe a number of variations we uncovered in our exploration and prototyping. These go beyond simple on/off actuation and can be combined to provide a range of different possible tactile expressions. A preliminary user study indicates that our dynamic buttons perform much like physical buttons in tactile search tasks.
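
The "under program control" aspect can be sketched as a small controller that raises, recesses, or flattens named display regions by setting a pressure behind the deformable surface. The pressure-based interface, the values used, and the class below are illustrative assumptions, not the hardware described in the paper.

```python
# Illustrative controller for program-controlled deformable button regions; the
# set_pressure(region_id, kPa) callable and pressure values are assumptions.
from enum import Enum

class ButtonState(Enum):
    FLAT = 0
    RAISED = 1
    RECESSED = 2

class DeformableSurface:
    def __init__(self, set_pressure):
        self._set_pressure = set_pressure   # callable(region_id, kPa), assumed interface
        self.states = {}

    def set_state(self, region: str, state: ButtonState) -> None:
        """Bring a tactile feature into or out of the interface at runtime."""
        pressure = {ButtonState.FLAT: 0.0,
                    ButtonState.RAISED: +5.0,
                    ButtonState.RECESSED: -5.0}[state]
        self._set_pressure(region, pressure)
        self.states[region] = state
```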


User Interface Software and Technology | 2012

Printed optics: 3D printing of embedded optical elements for interactive devices

Karl D.D. Willis; Eric Brockmeyer; Scott E. Hudson; Ivan Poupyrev

We present an approach to 3D printing custom optical elements for interactive devices, called Printed Optics. Printed Optics enables sensing, display, and illumination elements to be directly embedded in the casing or mechanical structure of an interactive device. Using these elements, unique display surfaces, novel illumination techniques, custom optical sensors, and embedded optoelectronic components can be digitally fabricated for rapid, high-fidelity, highly customized interactive devices. Printed Optics is part of our long-term vision for interactive devices that are 3D printed in their entirety. In this paper we explore the possibilities for this vision afforded by fabrication of custom optical elements using today's 3D printing technology.

Collaboration


Dive into Scott E. Hudson's collaborations.

Top Co-Authors

Chris Harrison, Carnegie Mellon University
Jennifer Mankoff, Carnegie Mellon University
Jodi Forlizzi, Carnegie Mellon University
James Fogarty, University of Washington
Ian E. Smith, Georgia Institute of Technology
Julia Schwarz, Carnegie Mellon University
Roger King, University of Colorado Boulder
Robert Xiao, Carnegie Mellon University
Gary Hsieh, University of Washington