
Publication


Featured research published by Robert J. K. Jacob.


Human Factors in Computing Systems | 2008

Reality-based interaction: a framework for post-WIMP interfaces

Robert J. K. Jacob; Audrey Girouard; Leanne M. Hirshfield; Michael S. Horn; Orit Shaer; Erin Treacy Solovey; Jamie Zigelbaum

We are in the midst of an explosion of emerging human-computer interaction techniques that redefine our understanding of both computers and interaction. We propose the notion of Reality-Based Interaction (RBI) as a unifying concept that ties together a large subset of these emerging interaction styles. Based on this concept of RBI, we provide a framework that can be used to understand, compare, and relate current paths of recent HCI research as well as to analyze specific interaction designs. We believe that viewing interaction through the lens of RBI provides insights for design and uncovers gaps or opportunities for future research.


Human Factors in Computing Systems | 1990

What you look at is what you get: eye movement-based interaction techniques

Robert J. K. Jacob

In seeking hitherto-unused methods by which users and computers can communicate, we investigate the usefulness of eye movements as a fast and convenient auxiliary user-to-computer communication mode. The barrier to exploiting this medium has not been eye-tracking technology but the study of interaction techniques that incorporate eye movements into the user-computer dialogue in a natural and unobtrusive way. This paper discusses some of the human factors and technical considerations that arise in trying to use eye movements as an input medium, describes our approach and the first eye movement-based interaction techniques that we have devised and implemented in our laboratory, and reports our experiences and observations on them.


Human Factors in Computing Systems | 2000

Evaluation of eye gaze interaction

Linda E. Sibert; Robert J. K. Jacob

Eye gaze interaction can provide a convenient and natural addition to user-computer dialogues. We have previously reported on our interaction techniques using eye gaze [10]. While our techniques seemed useful in demonstration, we now investigate their strengths and weaknesses in a controlled setting. In this paper, we present two experiments that compare an interaction technique we developed for object selection based on where a person is looking with the most commonly used selection method using a mouse. We find that our eye gaze interaction technique is faster than selection with a mouse. The results show that our algorithm, which makes use of knowledge about how the eyes behave, preserves the natural quickness of the eye. Eye gaze interaction is a reasonable addition to computer interaction and is convenient in situations where it is important to use the hands for other tasks. It is particularly beneficial for the larger screen workspaces and virtual environments of the future, and it will become increasingly practical as eye tracker technology matures.
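The selection algorithm described above exploits knowledge of eye behavior, i.e., that the eye rests in fixations between rapid saccades. A minimal dwell-time sketch of gaze-based object selection; the function name, radius, and thresholds are illustrative assumptions, not the paper's actual parameters:

```python
def select_by_dwell(gaze_samples, targets, radius=30.0, dwell_ms=150, sample_ms=10):
    """Return the first target fixated for at least dwell_ms.

    gaze_samples: list of (x, y) gaze points sampled every sample_ms.
    targets: dict mapping target name -> (x, y) center in the same coordinates.
    A gaze point "hits" a target if it falls within radius pixels of its center.
    """
    needed = dwell_ms // sample_ms   # consecutive samples required for a dwell
    run_target, run_len = None, 0
    for x, y in gaze_samples:
        hit = None
        for name, (tx, ty) in targets.items():
            if (x - tx) ** 2 + (y - ty) ** 2 <= radius ** 2:
                hit = name
                break
        if hit is not None and hit == run_target:
            run_len += 1             # fixation on the same target continues
        else:
            run_target, run_len = hit, (1 if hit else 0)
        if run_target and run_len >= needed:
            return run_target        # dwell threshold reached: select
    return None
```

Tolerating a radius around each target, rather than requiring pixel-exact gaze, is what preserves the eye's natural quickness despite tracker jitter.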


ACM Transactions on Computer-Human Interaction | 1994

Integrality and separability of input devices

Robert J. K. Jacob; Linda E. Sibert; Daniel C. McFarlane; M. Preston Mullen Jr.

Current input device taxonomies and other frameworks typically emphasize the mechanical structure of input devices. We suggest that selecting an appropriate input device for an interactive task requires looking beyond the physical structure of devices to the deeper perceptual structure of the task, the device, and the interrelationship between the perceptual structure of the task and the control properties of the device. We affirm that perception is key to understanding performance of multidimensional input devices on multidimensional tasks. We have therefore extended the theory of processing of perceptual structure to graphical interactive tasks and to the control structure of input devices. This allows us to predict task and device combinations that lead to better performance and hypothesize that performance is improved when the perceptual structure of the task matches the control structure of the device. We conducted an experiment in which subjects performed two tasks with different perceptual structures, using two input devices with correspondingly different control structures, a three-dimensional tracker and a mouse. We analyzed both speed and accuracy, as well as the trajectories generated by subjects as they used the unconstrained three-dimensional tracker to perform each task. The results support our hypothesis and confirm the importance of matching the perceptual structure of the task and the control structure of the input device.


Designing Interactive Systems | 2002

ComTouch: design of a vibrotactile communication device

Angela Chang; M. Sile O'Modhrain; Robert J. K. Jacob; Eric Gunther; Hiroshi Ishii

We describe the design of ComTouch, a device that augments remote voice communication with touch, by converting hand pressure into vibrational intensity between users in real-time. The goal of this work is to enrich inter-personal communication by complementing voice with a tactile channel. We present preliminary user studies performed on 24 people to observe possible uses of the tactile channel when used in conjunction with audio. By recording and examining both audio and tactile data, we found strong relationships between the two communication channels. Our studies show that users developed an encoding system similar to that of Morse code, as well as three original uses: emphasis, mimicry, and turn-taking. We demonstrate the potential of the tactile channel to enhance the existing voice communication channel.
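The core mechanism above is a real-time mapping from hand pressure to vibrational intensity. A minimal sketch of such a transfer function; the gamma-curve shape and all names are assumptions for illustration, not ComTouch's actual mapping:

```python
def pressure_to_vibration(pressure, max_pressure=1.0, gamma=0.5):
    """Map a hand-pressure reading to a normalized vibration intensity in [0, 1].

    Clamps the input to [0, max_pressure], normalizes it, and applies a
    gamma curve (gamma < 1 boosts light squeezes, which are perceptually
    easy to miss). The curve is a hypothetical choice, not the device's.
    """
    p = max(0.0, min(pressure, max_pressure)) / max_pressure
    return p ** gamma
```

In a real device this function would run inside the sensing loop, sending each computed intensity to the remote user's vibrotactile actuator.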


ACM Transactions on Graphics | 1986

A specification language for direct-manipulation user interfaces

Robert J. K. Jacob

A direct-manipulation user interface presents a set of visual representations on a display and a repertoire of manipulations that can be performed on any of them. Such representations might include screen buttons, scroll bars, spreadsheet cells, or flowchart boxes. Interaction techniques of this kind were first seen in interactive graphics systems; they are now proving effective in user interfaces for applications that are not inherently graphical. Although they are often easy to learn and use, these interfaces are also typically difficult to specify and program clearly. Examination of direct-manipulation interfaces reveals that they have a coroutine-like structure and, despite their surface appearance, a peculiar, highly moded dialogue. This paper introduces a specification technique for direct-manipulation interfaces based on these observations. In it, each locus of dialogue is described as a separate object with a single-thread state diagram, which can be suspended and resumed, but retains state. The objects are then combined to define the overall user interface as a set of coroutines, rather than inappropriately as a single highly regular state transition diagram. An inheritance mechanism for the interaction objects is provided to avoid repetitiveness in the specifications. A prototype implementation of a user-interface management system based on this approach is described, and example specifications are given.
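The specification idea above, each locus of dialogue as a single-thread state diagram that can be suspended and resumed while retaining state, maps naturally onto coroutines. A minimal sketch using a Python generator as one interaction object; the event and state names are illustrative, not the paper's notation:

```python
def button(name):
    """One dialogue locus as a single-thread state machine.

    Written as a generator so it can be suspended (yield) and resumed
    (send) while retaining its state, like the coroutines in the paper.
    """
    state = "idle"
    while True:
        event = yield state
        if state == "idle" and event == "press":
            state = "pressed"
        elif state == "pressed" and event == "release":
            state = "idle"      # the button fires here
        elif state == "pressed" and event == "drag_off":
            state = "idle"      # cancelled without firing

# The overall interface would be a set of such coroutines; a dispatcher
# routes each input event to the object that currently owns the dialogue.
ok = button("OK")
next(ok)             # start the object in its initial state
ok.send("press")     # suspend/resume: the object remembers where it was
```

Composing many such objects, rather than one flat state transition diagram, is what keeps the specification from exploding combinatorially.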


ACM Transactions on Computer-Human Interaction | 2005

Token+constraint systems for tangible interaction with digital information

Brygg Ullmer; Hiroshi Ishii; Robert J. K. Jacob

We identify and present a major interaction approach for tangible user interfaces based upon systems of tokens and constraints. In these interfaces, tokens are discrete physical objects which represent digital information. Constraints are confining regions that are mapped to digital operations. These are frequently embodied as structures that mechanically channel how tokens can be manipulated, often limiting their movement to a single degree of freedom. Placing and manipulating tokens within systems of constraints can be used to invoke and control a variety of computational interpretations. We discuss the properties of the token+constraint approach; consider strengths that distinguish them from other interface approaches; and illustrate the concept with eleven past and recent supporting systems. We present some of the conceptual background supporting these interfaces, and consider them in terms of Bellotti et al.'s [2002] five questions for sensing-based interaction. We believe this discussion supports token+constraint systems as a powerful and promising approach for sensing-based interaction.
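The key structural idea above is that a constraint mechanically restricts a token to a single degree of freedom, and the token's position within the constraint is interpreted computationally. A minimal sketch, with hypothetical class and parameter names, of a linear "rack" constraint whose token position maps to a parameter value:

```python
class Rack:
    """A constraint region confining a token to one degree of freedom.

    The token's position along the rack (0..length) is linearly mapped
    to a parameter in [lo, hi]. The linear mapping and all names are
    illustrative assumptions, not a specific system from the paper.
    """
    def __init__(self, length, lo, hi):
        self.length, self.lo, self.hi = length, lo, hi
        self.token_pos = None

    def place(self, token, pos):
        # Clamping models the physical channel: the token simply cannot
        # move outside the constraint, so illegal states are unreachable.
        self.token_pos = max(0.0, min(pos, self.length))
        return self.value()

    def value(self):
        return self.lo + (self.token_pos / self.length) * (self.hi - self.lo)
```

The clamp is the digital analogue of the approach's main strength: the physical structure itself communicates and enforces what manipulations are possible.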


ACM Transactions on Computer-Human Interaction | 1999

A software model and specification language for non-WIMP user interfaces

Robert J. K. Jacob; Leonidas Deligiannidis; Stephen A. Morrison

We present a software model and language for describing and programming the fine-grained aspects of interaction in a non-WIMP user interface, such as a virtual environment. Our approach is based on our view that the essence of a non-WIMP dialogue is a set of continuous relationships—most of which are temporary. The model combines a data-flow or constraint-like component for the continuous relationships with an event-based component for discrete interactions, which can enable or disable individual continuous relationships. To demonstrate our approach, we present the PMIW user interface management system for non-WIMP interactions, a set of examples running under it, a visual editor for our user interface description language, and a discussion of our implementation and our restricted use of constraints for a performance-driven interactive situation. Our goal is to provide a model and language that captures the formal structure of non-WIMP interactions in the way that various previous techniques have captured command-based, textual, and event-based styles and to suggest that using it need not compromise real-time performance.
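The two-part model above, continuous data-flow relationships switched on and off by discrete events, can be sketched in a few lines. All names here are illustrative assumptions, not PMIW's actual syntax:

```python
class Link:
    """A continuous relationship: while enabled, it keeps a destination
    variable equal to f(source), re-evaluated every frame."""
    def __init__(self, f):
        self.f, self.enabled = f, False

class Dialogue:
    """Discrete events toggle which continuous links are active;
    update() runs once per frame, like a small data-flow solver."""
    def __init__(self):
        self.links = {}   # name -> (src var, dst var, Link)
        self.vars = {}    # variable name -> current value

    def add_link(self, name, src, dst, f):
        self.links[name] = (src, dst, Link(f))

    def event(self, name, enabled):
        # The event-based component: enable/disable a continuous relationship.
        self.links[name][2].enabled = enabled

    def update(self):
        # The continuous component: maintain every active relationship.
        for src, dst, link in self.links.values():
            if link.enabled:
                self.vars[dst] = link.f(self.vars[src])

d = Dialogue()
d.vars["hand_x"] = 0.0
d.add_link("grab", "hand_x", "object_x", lambda x: x)
d.event("grab", True)      # discrete event: grabbing starts the relationship
d.vars["hand_x"] = 3.5
d.update()                 # the object tracks the hand while grabbed
```

Releasing the object is just `d.event("grab", False)`: the relationship was temporary, which is exactly the property the model is built around.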


Human Factors in Computing Systems | 2009

Comparing the use of tangible and graphical programming languages for informal science education

Michael S. Horn; Erin Treacy Solovey; R. Jordan Crouser; Robert J. K. Jacob

Much of the work done in the field of tangible interaction has focused on creating tools for learning; however, in many cases, little evidence has been provided that tangible interfaces offer educational benefits compared to more conventional interaction techniques. In this paper, we present a study comparing the use of a tangible and a graphical interface as part of an interactive computer programming and robotics exhibit that we designed for the Boston Museum of Science. In this study, we have collected observations of 260 museum visitors and conducted interviews with 13 family groups. Our results show that visitors found the tangible and the graphical systems equally easy to understand. However, with the tangible interface, visitors were significantly more likely to try the exhibit and significantly more likely to actively participate in groups. In turn, we show that regardless of the condition, involving multiple active participants leads to significantly longer interaction times. Finally, we examine the role of children and adults in each condition and present evidence that children are more actively involved in the tangible condition, an effect that seems to be especially strong for girls.


Human Factors in Computing Systems | 2002

A tangible interface for organizing information using a grid

Robert J. K. Jacob; Hiroshi Ishii; Gian Pangaro; James Patten

The task of organizing information is typically performed either by physically manipulating note cards or sticky notes or by arranging icons on a computer with a graphical user interface. We present a new tangible interface platform for manipulating discrete pieces of abstract information, which attempts to combine the benefits of each of these two alternatives into a single system. We developed interaction techniques and an example application for organizing conference papers. We assessed the effectiveness of our system by experimentally comparing it to both graphical and paper interfaces. The results suggest that our tangible interface can provide a more effective means of organizing, grouping, and manipulating data than either physical operations or graphical computer interaction alone.
