Wendy Ju
Stanford University
Publications
Featured research published by Wendy Ju.
human-robot interaction | 2011
Leila Takayama; Doug Dooley; Wendy Ju
The animation techniques of anticipation and reaction can help create robot behaviors that are human readable, such that people can figure out what the robot is doing, reasonably predict what the robot will do next, and ultimately interact with the robot in an effective way. By showing forethought before action and expressing a reaction to the task outcome (success or failure), we prototyped a set of human-robot interaction behaviors. In a 2 (forethought vs. none: between) × 2 (reaction to outcome vs. none: between) × 2 (success vs. failure task outcome: within) experiment, we tested the influences of forethought and reaction upon people's perceptions of the robot and the robot's readability. In this online video prototype experiment (N=273), we found support for the hypothesis that perceptions of robots are influenced by robots showing forethought, the task outcome (success or failure), and showing goal-oriented reactions to those task outcomes. Implications for theory and design are discussed.
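To make the forethought/reaction manipulation concrete, the following is a minimal Python sketch of how one trial could be sequenced. The paper does not publish code, so the names (Condition, run_trial) and behavior strings here are illustrative assumptions.

```python
# A minimal sketch, not the paper's code: sequencing the forethought and
# reaction manipulations for one trial. Names and behavior strings are
# illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Condition:
    forethought: bool  # show anticipatory orienting before acting (between-subjects)
    reaction: bool     # show a goal-oriented reaction afterward (between-subjects)

def run_trial(cond: Condition, task_succeeds: bool) -> list[str]:
    """Return the ordered robot behaviors for one trial (outcome is within-subjects)."""
    behaviors = []
    if cond.forethought:
        behaviors.append("orient toward target and pause (anticipation)")
    behaviors.append("execute task action")
    if cond.reaction:
        behaviors.append("express satisfaction (success)" if task_succeeds
                         else "express disappointment (failure)")
    return behaviors

# Example: the forethought + reaction condition on a failed trial.
print(run_trial(Condition(forethought=True, reaction=True), task_succeeds=False))
```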
conference on computer supported cooperative work | 2008
Wendy Ju; Brian Lee; Scott R. Klemmer
An important challenge in designing ubiquitous computing experiences is negotiating transitions between explicit and implicit interaction, such as how and when to provide users with notifications. While the paradigm of implicit interaction has important benefits, it is also susceptible to difficulties with hidden modes, unexpected action, and misunderstood intent. To address these issues, this work presents a framework for implicit interaction and applies it to the design of an interactive whiteboard application called Range. Range is a public interactive whiteboard designed to support co-located, ad-hoc meetings. It employs proximity sensing capability to proactively transition between display and authoring modes, to clear space for writing, and to cluster ink strokes. We show how the implicit interaction techniques of user reflection (how systems indicate to users what they perceive or infer), system demonstration (how systems indicate what they are doing), and override (how users can interrupt or stop a proactive system action) can prevent, mitigate, and correct errors in the whiteboard's proactive behaviors. These techniques can be generalized to improve the designs of a wide array of ubiquitous computing experiences.
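A minimal sketch of the three techniques the paper names (user reflection, system demonstration, and override) follows. This is not the Range codebase; the proximity threshold and method names are assumptions for illustration.

```python
# A minimal sketch, not the Range codebase: the three techniques for
# handling errors in proactive behavior. Threshold and names are assumptions.
import time

APPROACH_DISTANCE_M = 1.5  # assumed threshold for "user is approaching"

class ImplicitWhiteboard:
    def __init__(self):
        self.mode = "display"
        self.override_requested = False

    def reflect(self, perception: str):
        # User reflection: indicate to users what the system perceives or infers.
        print(f"[board shows] {perception}")

    def demonstrate(self, action: str):
        # System demonstration: indicate what the system is about to do,
        # slowly enough that the user has time to object.
        print(f"[board animates] {action}")
        time.sleep(0.5)

    def request_override(self):
        # Override: let the user interrupt or stop a proactive action.
        self.override_requested = True

    def on_proximity(self, distance_m: float):
        if distance_m < APPROACH_DISTANCE_M and self.mode == "display":
            self.reflect("someone is approaching the board")
            self.demonstrate("clearing space for writing")
            if not self.override_requested:
                self.mode = "authoring"

board = ImplicitWhiteboard()
board.on_proximity(1.0)  # user walks up; board transitions to authoring mode
print(board.mode)
```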
human factors in computing systems | 2001
Wendy Ju; Rebecca Hurwitz; Tilke Judd; Bonny Lee
We introduce CounterActive, an interactive kitchen cookbook that teaches people to cook. After describing the interactive system and the multimedia recipe schema, we discuss the results of early user testing and evaluation.
Design Issues | 2008
Wendy Ju; Larry Leifer
Introduction
Imagine, for a second, a doorman who behaves as automatic doors do. He does not acknowledge you when you approach or pass by. He gives no hint which door can or will open—until you wander within six feet of the door, whereupon he flings the door wide open. If you arrived after hours, you might stand in front of the doors for a while before you realize that the doors are locked, because the doorman's blank stare gives no clue. If you met such a doorman, you might suspect psychosis. And yet this behavior is typical of our day-to-day interactions not only with automatic doors, but with any number of interactive devices. Our cell phones ring loudly, even though we are clearly in a movie theatre. Our alarm clocks forget to go off if we do not set them, even if we've been getting up at the same time for years. Our computers interrupt presentations to let everyone know that a software update is available. The infiltration of computer technologies into everyday life has brought these interaction crises to a head. As Neil Gershenfeld observes, "There's a very real sense in which the things around us are infringing a new kind of right that has not needed protection until now. We're spending more and more time responding to the demands of machines." [1] These problematic interactions are symptoms of our as-yet limited sophistication in designing interactions that do not constantly demand the input or attention of the user. "Implicit interactions"—those that occur without the explicit behest or awareness of the user—will become increasingly important as human-computer interactions extend beyond the desktop computer into new arenas: arenas such as the automobile, where the driver is physically, socially, or cognitively engaged. Traditional HCI—that involving a command-based or graphical user interface-based paradigm—has focused on the realm of "explicit interactions," where the use of computers and interactive products relies on explicit input and output. The values and principles that govern good desktop computing interactions may not apply when we apply computing to the products that populate the rest of our lives.
[1] Neil Gershenfeld, When Things Start to Think (New York: Henry Holt, 1999), 102.
conference on computer supported cooperative work | 2004
Wendy Ju; Arna Ionescu; Lawrence Neeley; Terry Winograd
We have built and tested WorkspaceNavigator, which supports knowledge capture and reuse for teams engaged in unstructured, dispersed, and prolonged collaborative design activity in a dedicated physical workspace. It provides a coherent unified interface for post-facto retrieval of multiple streams of data from the work environment, including overview snapshots of the workspace, screenshots of in-space computers, whiteboard images, and digital photos of physical objects. This paper describes the design of WorkspaceNavigator and identifies key considerations for knowledge capture tools for design workspaces, which differ from those of more structured meeting or classroom environments. Iterative field tests in workspace environments for student teams in two graduate Mechanical Engineering design courses helped to identify features that augment the work of both course participants and design researchers.
international conference on persuasive technology | 2010
Wendy Ju; David Sirkin
The primary challenge for information terminals, kiosks, and incidental-use systems of all sorts is getting the "first click" from busy passersby. This paper presents two studies that investigate the role of motion and physicality in drawing people to look at and actively interact with generic information kiosks. The first study used a 2×2 factorial design, physical vs. on-screen gesturing and hand vs. arrow motion, on a kiosk deployed in two locations: a bookstore and a computer science building lobby. The second study examined the effect of physical vs. projected gesturing and included a follow-up survey. Over twice as many passersby interacted in the physical condition as in the on-screen condition in the first study, and 60% more interacted in the second. Together, these studies indicate that physical gesturing does significantly attract more looks at and use of the information kiosk, and that form affects people's impressions and interpretations of these gestures.
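As an illustration of how counts like the first study's might be analyzed, here is a sketch using a chi-square test of independence. The numbers below are invented placeholders, not the study's data.

```python
# Invented numbers for illustration only (not the study's data): testing
# whether interaction rate depends on physical vs. on-screen gesturing.
from scipy.stats import chi2_contingency

#            interacted  passed without interacting
counts = [
    [40, 160],  # physical gesturing (hypothetical)
    [18, 182],  # on-screen gesturing (hypothetical)
]

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```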
Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2015
David Miller; Annabel Sun; Mishel Johns; Hillary Page Ive; David Sirkin; Sudipto Aich; Wendy Ju
As vehicle automation proliferates, the current emphasis on preventing driver distraction needs to transition to maintaining driver availability. During automated driving, vehicle operators are likely to use brought-in devices to access entertainment and information. Do these media devices need to be controlled by the vehicle in order to manage driver attention? In a driving simulation study (N=48) investigating driver performance shortly after transitions from automated to human control, we found that participants watching videos or reading on a tablet were far less likely (6% versus 27%) to exhibit behaviors indicative of drowsiness than when overseeing the automated driving system; irrespective of the pre-driving activity, post-transition driving performance after a five-second structured handoff was not impaired. There was no significant difference in collision avoidance reaction time or minimum headway distance between the supervision and media consumption conditions, irrespective of whether messages were presented on the tablet device or only on the instrument panel, or whether there was a single- or two-stage handoff.
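The five-second structured handoff lends itself to a small sketch. Only the countdown structure comes from the abstract; the function names and notification channel below are assumptions.

```python
# A minimal sketch, assuming hypothetical names: a five-second structured
# handoff from automated to manual control.
import time

def structured_handoff(notify, seconds: int = 5):
    """Warn the driver, count down, then hand over control."""
    notify("Automation disengaging: prepare to take the wheel")
    for remaining in range(seconds, 0, -1):
        notify(f"Manual control in {remaining}s")
        time.sleep(1)
    notify("You have control")

# Example: messages could go to the instrument panel, the tablet, or both,
# loosely mirroring the study's message-placement conditions.
structured_handoff(lambda msg: print(f"[instrument panel] {msg}"))
```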
new interfaces for musical expression | 2017
Edgar Berdahl; Wendy Ju
This paper describes a new Beagle Board-based platform for teaching and practicing interaction design for musical applications. The migration from desktop and laptop computer-based sound synthesis to a compact and integrated control, computation and sound generation platform has enormous potential to widen the range of computer music instruments and installations that can be designed, and improves the portability, autonomy, extensibility and longevity of designed systems. We describe the technical features of the Satellite CCRMA platform and contrast it with personal computer-based systems used in the past as well as emerging smart phone-based platforms. The advantages and trade-offs of the new platform are considered, and some project work is described.
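On an embedded platform of this kind, a common pattern is a sensor loop that streams control data to a synthesis engine over OSC. The sketch below assumes a synthesis patch (e.g., in Pure Data) listening on port 9000 and a stand-in sensor read; neither detail comes from the paper, and python-osc is a third-party library (pip install python-osc).

```python
# Illustrative only: a sensor loop streaming control data to a synthesis
# engine over OSC. Port, OSC address, and read_sensor() are assumptions.
import time
import random
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)  # assumed synth listening port

def read_sensor() -> float:
    """Stand-in for an ADC read; returns a normalized 0..1 value."""
    return random.random()

while True:
    client.send_message("/instrument/brightness", read_sensor())
    time.sleep(0.01)  # roughly 100 Hz control rate
```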
human-robot interaction | 2015
David Sirkin; Brian K. Mok; Stephen Yang; Wendy Ju
This paper describes our approach to designing, developing behaviors for, and exploring the use of, a robotic footstool, which we named the mechanical ottoman. By approaching unsuspecting participants and attempting to get them to place their feet on the footstool, and then later attempting to break the engagement and get people to take their feet down, we sought to understand whether and how motion can be used by non-anthropomorphic robots to engage people in joint action. In several embodied design improvisation sessions, we observed a tension between people perceiving the ottoman as a living being, such as a pet, and simultaneously as a functional object, which requests that they place their feet on it—something they would not ordinarily do with a pet. In a follow-up lab study (N=20), we found that most participants did make use of the footstool, although several chose not to place their feet on it for this reason. We also found that participants who rested their feet understood a brief lift and drop movement as a request to withdraw, and formed detailed notions about the footstool’s agenda, ascribing intentions based on its movement alone.
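The two movement signals described above (approaching to invite engagement, and the brief lift-and-drop to request withdrawal) can be summarized in a short sketch; the motor interface here is a print stub and all timings are invented for illustration.

```python
# A hypothetical sketch of the ottoman's two movement signals; the motor
# interface is a print stub and the timings are invented.
import time

def motor(cmd: str):
    print(f"[ottoman] {cmd}")

def approach():
    motor("drive slowly toward the seated person")  # invitation to engage
    time.sleep(1.0)
    motor("stop within foot-resting distance")

def request_withdrawal():
    motor("lift platform briefly")  # the brief lift-and-drop that participants
    time.sleep(0.5)                 # read as a request to remove their feet
    motor("drop platform")

approach()
request_withdrawal()
```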
robot and human interactive communication | 2016
Dirk Rothenbücher; Jamy Li; David Sirkin; Brian K. Mok; Wendy Ju
How will pedestrians and bicyclists interact with autonomous vehicles when there is no human driver? In this paper, we outline a novel method for performing observational field experiments to investigate interactions with driverless cars. We provide a proof-of-concept study (N=67), conducted at a crosswalk and a traffic circle, which applies this method. In the study, participants encountered a vehicle that appeared to have no driver, but which in fact was driven by a human confederate hidden inside. We constructed a car seat costume to conceal the driver, who was specially trained to emulate an autonomous system. Data included video recordings and participant responses to post-interaction questionnaires. Pedestrians who encountered the car reported that they saw no driver, yet they managed interactions smoothly, except when the car misbehaved by moving into the crosswalk just as they were about to cross. This method is the first of its kind, and we believe that it contributes a valuable technique for safely acquiring empirical data and insights about driverless vehicle interactions. These insights can then be used to design vehicle behaviors well in advance of the broad deployment of autonomous technology.