
Publication


Featured research published by Roel Vertegaal.


human factors in computing systems | 2001

Eye gaze patterns in conversations: there is more to conversational agents than meets the eyes

Roel Vertegaal; Robert Slagter; Gerrit C. van der Veer; Anton Nijholt

In multi-agent, multi-user environments, users as well as agents should have a means of establishing who is talking to whom. In this paper, we present an experiment aimed at evaluating whether gaze directional cues of users could be used for this purpose. Using an eye tracker, we measured subject gaze at the faces of conversational partners during four-person conversations. Results indicate that when someone is listening or speaking to individuals, there is indeed a high probability that the person looked at is the person listened to (p=88%) or spoken to (p=77%). We conclude that gaze is an excellent predictor of conversational attention in multiparty conversations. As such, it may form a reliable source of input for conversational systems that need to establish whom the user is speaking or listening to. We implemented our findings in FRED, a multi-agent conversational system that uses eye input to gauge which agent the user is listening or speaking to.
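As an illustration of the paper's central finding, the sketch below shows how gaze samples could be pooled to estimate a conversational addressee, in the spirit of FRED. This is a minimal sketch in Python, not the authors' implementation; the function name, region format, and 50% dwell threshold are assumptions.

```python
from collections import Counter

def estimate_addressee(gaze_samples, face_regions, min_share=0.5):
    """Guess whom the user is speaking or listening to from eye-tracker samples.

    gaze_samples -- iterable of (x, y) screen coordinates over a short window
    face_regions -- dict mapping agent name -> (left, top, right, bottom) box
    min_share    -- fraction of samples on one face required to commit
    """
    hits = Counter()
    total = 0
    for x, y in gaze_samples:
        total += 1
        for agent, (left, top, right, bottom) in face_regions.items():
            if left <= x <= right and top <= y <= bottom:
                hits[agent] += 1
                break
    if total == 0 or not hits:
        return None
    agent, count = hits.most_common(1)[0]
    return agent if count / total >= min_share else None
```

With gaze predicting the addressee at p=77-88% per the paper, a dwell-share rule over a one- or two-second window is a plausible way to turn raw gaze samples into a conversational attention signal.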


human factors in computing systems | 2005

Paper windows: interaction techniques for digital paper

David Holman; Roel Vertegaal; Mark Altosaar; Nikolaus F. Troje; Derek Johns

In this paper, we present Paper Windows, a prototype windowing environment that simulates the use of digital paper displays. By projecting windows on physical paper, Paper Windows captures the physical affordances of paper in a digital world. The system uses paper as an input device by tracking its motion and shape with a Vicon motion capture system. We discuss the design of a number of interaction techniques for manipulating information on paper displays.
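The paper describes tracking the paper's motion and shape and projecting window contents onto it. A standard way to realize the projection step is a corner-to-corner perspective warp; the sketch below uses OpenCV for illustration and is an assumption about the pipeline, not the authors' code.

```python
import cv2
import numpy as np

def project_window_onto_paper(window_img, paper_corners, projector_size):
    """Warp a window's pixels so they land on a tracked sheet of paper.

    window_img     -- window contents as an H x W x 3 uint8 array
    paper_corners  -- four (x, y) projector-space corners of the sheet,
                      ordered top-left, top-right, bottom-right, bottom-left
                      (assumed to come from the motion-capture tracker)
    projector_size -- (width, height) of the projector framebuffer
    """
    h, w = window_img.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    dst = np.float32(paper_corners)
    homography = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(window_img, homography, projector_size)
```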


human factors in computing systems | 2003

GAZE-2: conveying eye contact in group video conferencing using eye-controlled camera direction

Roel Vertegaal; Ivo Weevers; Changuk Sohn; Chris Cheung

GAZE-2 is a novel group video conferencing system that uses eye-controlled camera direction to ensure parallax-free transmission of eye contact. To convey eye contact, GAZE-2 employs a video tunnel that allows placement of cameras behind participant images on the screen. To avoid parallax, GAZE-2 automatically directs the cameras in this video tunnel using an eye tracker, selecting a single camera closest to where the user is looking for broadcast. Images of users are displayed in a virtual meeting room, and rotated towards the participant each user looks at. This way, eye contact can be conveyed to any number of users with only a single video stream per user. We empirically evaluated whether eye contact perception is affected by automated camera direction, which causes angular shifts in the transmitted images. Findings suggest camera shifts do not affect eye contact perception, and are not considered highly distracting.
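The camera-selection rule the abstract describes (broadcast the camera closest to the gaze point) can be sketched in a few lines. The hysteresis parameter below is an assumption added to debounce small gaze shifts; the paper does not specify such a rule.

```python
def select_broadcast_camera(gaze_x, cameras, current=None, hysteresis=0.1):
    """Pick the video-tunnel camera nearest the user's on-screen gaze.

    gaze_x     -- horizontal gaze position, normalized to [0, 1]
    cameras    -- dict mapping camera name -> horizontal position in [0, 1]
    current    -- camera currently on air, kept unless a clearly better one exists
    hysteresis -- minimum improvement required before switching (assumption)
    """
    best = min(cameras, key=lambda name: abs(cameras[name] - gaze_x))
    if current in cameras:
        improvement = abs(cameras[current] - gaze_x) - abs(cameras[best] - gaze_x)
        if improvement < hysteresis:
            return current  # small gaze shifts do not force a camera change
    return best
```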


human factors in computing systems | 2011

PaperPhone: understanding the use of bend gestures in mobile devices with flexible electronic paper displays

Byron Lahey; Audrey Girouard; Winslow Burleson; Roel Vertegaal

Flexible displays potentially allow for interaction styles that resemble those used in paper documents. Bending the display, e.g., to page forward, shows particular promise as an interaction technique. In this paper, we present an evaluation of the effectiveness of various bend gestures in executing a set of tasks with a flexible display. We discuss a study in which users designed bend gestures for common computing actions deployed on a smartphone-inspired flexible E Ink prototype called PaperPhone. We collected a total of 87 bend gesture pairs from ten participants and evaluated their appropriateness over twenty actions in five applications. We identified the six most frequently used bend gesture pairs out of 24 unique pairs. Results show users preferred bend gestures and bend gesture pairs that were conceptually simpler, e.g., along one axis, and less physically demanding. There was strong agreement among participants to use the same three pairs in applications: (1) side of display, up/down; (2) top corner, up/down; (3) bottom corner, up/down. For actions with a strong directional cue, we found strong consensus on the polarity of the bend gestures (e.g., navigating left is performed with an upward bend gesture, navigating right, with a downward one). This implies that bend gestures that take directional cues into account are likely more natural to users.
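As an illustration only, the three preferred pairs and the reported directional polarity (upward bend for leftward navigation, downward for rightward) could be encoded as a simple lookup table; the location labels and the corner-action assignments below are hypothetical.

```python
# Hypothetical mapping from (bend location, bend direction) to actions,
# following the three preferred pairs and the reported polarity.
BEND_ACTIONS = {
    ("side", "up"): "navigate_left",
    ("side", "down"): "navigate_right",
    ("top_corner", "up"): "zoom_in",
    ("top_corner", "down"): "zoom_out",
    ("bottom_corner", "up"): "scroll_up",
    ("bottom_corner", "down"): "scroll_down",
}

def dispatch_bend(location, direction):
    """Translate a detected bend gesture into an application action."""
    return BEND_ACTIONS.get((location, direction), "no_op")
```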


Communications of The ACM | 2008

Organic user interfaces: designing computers in any way, shape, or form

David Holman; Roel Vertegaal

Displays on real-world objects allow more realistic user interfaces.


human factors in computing systems | 2004

Using mental load for managing interruptions in physiologically attentive user interfaces

Daniel Chen; Roel Vertegaal

Today's user is surrounded by mobile appliances that continuously disrupt activities through instant message, email and phone call notifications. In this paper, we present a system that regulates notifications by such devices dynamically on the basis of direct measures of the user's mental load. We discuss a prototype Physiologically Attentive User Interface (PAUI) that measures mental load using Heart Rate Variability (HRV) signals, and motor activity using electroencephalogram (EEG) analysis. The PAUI uses this information to distinguish between 4 attentional states of the user: at rest, moving, thinking and busy. We discuss an example PAUI application in the automated regulation of notifications in a mobile cell phone appliance.
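The two binary measures yield a natural 2x2 decision rule for the four states. The sketch below assumes that mapping (high load with motion = busy, high load alone = thinking, motion alone = moving, neither = at rest) and a simple notification policy, both plausible readings of the abstract rather than the paper's exact logic.

```python
def classify_attention(hrv_load_high, eeg_motor_active):
    """Map the two physiological measures onto the four attentional states.

    hrv_load_high    -- True if HRV analysis indicates high mental load
    eeg_motor_active -- True if EEG analysis indicates motor activity
    """
    if hrv_load_high and eeg_motor_active:
        return "busy"
    if hrv_load_high:
        return "thinking"
    if eeg_motor_active:
        return "moving"
    return "at rest"

def should_deliver_notification(state):
    """Hold notifications unless the user is at rest (an assumed policy)."""
    return state == "at rest"
```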


human factors in computing systems | 2005

EyeWindows: evaluation of eye-controlled zooming windows for focus selection

David Fono; Roel Vertegaal

In this paper, we present an attentive windowing technique that uses eye tracking, rather than manual pointing, for focus window selection. We evaluated the performance of 4 focus selection techniques: eye tracking with key activation, eye tracking with automatic activation, mouse and hotkeys in a typing task with many open windows. We also evaluated a zooming windowing technique designed specifically for eye-based control, comparing its performance to that of a standard tiled windowing environment. Results indicated that eye tracking with automatic activation was, on average, about twice as fast as mouse and hotkeys. Eye tracking with key activation was about 72% faster than manual conditions, and preferred by most participants. We believe eye input performed well because it allows manual input to be provided in parallel to focus selection tasks. Results also suggested that zooming windows outperform static tiled windows by about 30%. Furthermore, this performance gain scaled with the number of windows used. We conclude that eye-controlled zooming windows with key activation provide an efficient and effective alternative to current focus window selection techniques.
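The key-activation technique reduces to a hit test at the gaze point when the activation key is pressed; the sketch below is an illustrative reconstruction with assumed data structures, not the authors' implementation.

```python
def focus_window_under_gaze(gaze_xy, windows):
    """On activation-key press, return the window to receive focus.

    gaze_xy -- (x, y) gaze position in screen coordinates
    windows -- list of (window_id, (left, top, right, bottom)), frontmost first
    """
    x, y = gaze_xy
    for window_id, (left, top, right, bottom) in windows:
        if left <= x <= right and top <= y <= bottom:
            return window_id  # a zooming UI would then enlarge this window
    return None  # gaze outside all windows: keep the current focus
```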


conference on computer supported cooperative work | 2002

Explaining effects of eye gaze on mediated group conversations: amount or synchronization?

Roel Vertegaal; Yaping Ding

We present an experiment examining effects of gaze on speech during three-person conversations. Understanding such effects is crucial for the design of teleconferencing systems and Collaborative Virtual Environments (CVEs). Previous findings suggest subjects take more turns when they experience more gaze. We evaluated whether this is because more gaze allowed them to better observe whether they were being addressed. We compared speaking behavior between two conditions: (1) in which subjects experienced gaze synchronized with conversational attention, and (2) in which subjects experienced random gaze. The amount of gaze experienced by subjects was a covariate. Results show subjects were 22% more likely to speak when gaze behavior was synchronized with conversational attention. However, covariance analysis showed these results were due to differences in amount of gaze rather than synchronization of gaze, with correlations of .62 between amount of gaze and amount of subject speech. Task performance was 46% higher when gaze was synchronized. We conclude it is advisable to use synchronized gaze models when designing CVEs, but depending on task situation, random models generating sufficient amounts of gaze may suffice.


Communications of The ACM | 2003

Interacting with groups of computers

Jeffrey S. Shell; Ted Selker; Roel Vertegaal

Attentive user interfaces (AUIs) recognize human attention in order to respect and react to how users distribute their attention in technology-laden environments.


Computers in Human Behavior | 2006

Designing for augmented attention: Towards a framework for attentive user interfaces

Roel Vertegaal; Jeffrey S. Shell; Daniel Chen; Aadil Mamuji

Attentive user interfaces are user interfaces that aim to support the user's attentional capacities. By sensing users' attention to objects and people in their everyday environment, and by treating user attention as a limited resource, these interfaces avoid today's ubiquitous patterns of interruption. Focusing upon attention as a central interaction channel allows development of more sociable methods of communication and repair with ubiquitous devices. Our methods are analogous to human turn taking in group communication. Turn taking improves the user's ability to conduct foreground processing of conversations. Attentive user interfaces bridge the gap between the foreground and periphery of user activity in a similar fashion, allowing users to move smoothly in between. We present a framework for augmenting user attention through attentive user interfaces. We propose five key properties of attentive systems: (i) to sense attention; (ii) to reason about attention; (iii) to regulate interactions; (iv) to communicate attention and (v) to augment attention.
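The five properties read naturally as an architecture contract. The abstract base class below is an illustrative rendering with invented method names, since the paper defines properties of attentive systems rather than an API.

```python
from abc import ABC, abstractmethod

class AttentiveSystem(ABC):
    """The framework's five key properties as abstract hooks (names assumed)."""

    @abstractmethod
    def sense_attention(self):
        """Detect what the user currently attends to (e.g., via eye tracking)."""

    @abstractmethod
    def reason_about_attention(self, observations):
        """Model user attention as a limited resource."""

    @abstractmethod
    def regulate_interactions(self, attention_model):
        """Decide whether to interrupt now, defer, or stay in the periphery."""

    @abstractmethod
    def communicate_attention(self):
        """Signal the system's own attention to the user, as in turn taking."""

    @abstractmethod
    def augment_attention(self, attention_model):
        """Actively support the user's attentional capacities."""
```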
