
Publication


Featured research published by Daniel Cheng.


user interface software and technology | 2005

eyeLook: using attention to facilitate mobile media consumption

Connor Dickie; Roel Vertegaal; Changuk Sohn; Daniel Cheng

One of the problems with mobile media devices is that they may distract users during critical everyday tasks, such as navigating the streets of a busy city. We addressed this issue in the design of eyeLook: a platform for attention sensitive mobile computing. eyeLook appliances use embedded low-cost eyeCONTACT sensors (ECS) to detect when the user looks at the display. We discuss two eyeLook applications, seeTV and seeTXT, that facilitate courteous media consumption in mobile contexts by using the ECS to respond to user attention. seeTV is an attentive mobile video player that automatically pauses content when the user is not looking. seeTXT is an attentive speed reading application that flashes words on the display, advancing text only when the user is looking. By making mobile media devices sensitive to actual user attention, eyeLook allows applications to gracefully transition users between consuming media and managing life.
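The pause-on-gaze-loss behavior of seeTV can be illustrated with a minimal sketch. This is illustrative only, not the published implementation; `read_eye_contact` stands in for a hypothetical poll of the eyeCONTACT sensor.

```python
class AttentiveVideoPlayer:
    """Sketch of seeTV's core loop: play only while the user is looking.

    `read_eye_contact` is a hypothetical callable returning True when the
    eye contact sensor reports that the user is looking at the display.
    """

    def __init__(self, read_eye_contact):
        self.read_eye_contact = read_eye_contact
        self.playing = False

    def tick(self):
        # Poll the sensor and pause or resume accordingly.
        looking = self.read_eye_contact()
        if looking and not self.playing:
            self.playing = True   # user looked back: resume playback
        elif not looking and self.playing:
            self.playing = False  # user looked away: pause playback
        return self.playing
```

The same sensor loop, with "advance one word" substituted for "resume playback", would model seeTXT.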


acm workshop on continuous archival and retrieval of personal experiences | 2004

Augmenting and sharing memory with eyeBlog

Connor Dickie; Roel Vertegaal; David Fono; Changuk Sohn; Daniel Chen; Daniel Cheng; Jeffrey S. Shell; Omar Aoudeh

eyeBlog is an automatic personal video recording and publishing system. It consists of ECSGlasses [1], which are a pair of glasses augmented with a wireless eye contact and glyph sensing camera, and a web application that visualizes the video from the ECSGlasses camera as chronologically delineated blog entries. The blog format allows for easy annotation, grading, cataloging and searching of video segments by the wearer or anyone else with internet access. eyeBlog reduces the editing effort of video bloggers by recording video only when something of interest is registered by the camera. Interest is determined by a combination of independent methods. For example, recording can automatically be triggered upon detection of eye contact towards the wearer of the glasses, allowing all face-to-face interactions to be recorded. Recording can also be triggered by the detection of image patterns such as glyphs in the frame of the camera. This allows the wearer to record their interactions with any object that has an associated unique marker. Finally, by pressing a button the user can manually initiate recording.


eye tracking research & application | 2004

ECSGlasses and EyePliances: using attention to open sociable windows of interaction

Jeffrey S. Shell; Roel Vertegaal; Daniel Cheng; Alexander W. Skaburskis; Changuk Sohn; A. James Stewart; Omar Aoudeh; Connor Dickie

We present ECSGlasses: wearable eye contact sensing glasses that detect human eye contact. ECSGlasses report eye contact to digital devices, appliances and EyePliances in the user's attention space. Devices use this attentional cue to engage in a more sociable process of turn taking with users. This has the potential to reduce inappropriate intrusions, and limit their disruptiveness. We describe new prototype systems, including the Attentive Messaging Service (AMS), the Attentive Hit Counter, the first person attentive camcorder eyeBlog, and an updated Attentive Cell Phone. We also discuss the potential of these devices to open new windows of interaction using attention as a communication modality. Further, we present a novel signal-encoding scheme to uniquely identify EyePliances and users wearing ECSGlasses in multiparty scenarios.


human factors in computing systems | 2005

Media eyepliances: using eye tracking for remote control focus selection of appliances

Roel Vertegaal; Aadil Mamuji; Changuk Sohn; Daniel Cheng

This paper discusses the use of eye contact sensing for focus selection operations in remote controlled media appliances. Focus selection with remote controls tends to be cumbersome as selection buttons place the remote in a device-specific modality. We addressed this issue with the design of Media EyePliances, home theatre appliances augmented with a digital eye contact sensor. An appliance is selected as the focus of remote commands by looking at its sensor. A central server subsequently routes all commands provided by remote, keyboard or voice input to the focus EyePliance. We discuss a calibration-free digital eye contact sensing technique that allows Media EyePliances to determine the user's point of gaze.
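The central-server routing idea can be sketched as a small dispatcher: looking at an appliance's sensor sets the focus, and all later commands go to whichever appliance holds it. Class and method names here are illustrative assumptions, not the paper's API.

```python
class EyePlianceRouter:
    """Sketch of a central server for Media EyePliances: eye contact
    selects the focus appliance; commands from any input modality
    (remote, keyboard, voice) are routed to the current focus."""

    def __init__(self):
        self.focus = None   # id of the appliance last looked at
        self.log = []       # (appliance_id, command) pairs, for inspection

    def on_eye_contact(self, appliance_id):
        # Looking at an appliance's eye contact sensor selects it.
        self.focus = appliance_id

    def on_command(self, command):
        # Route the command to the focus appliance, if one is selected.
        if self.focus is not None:
            self.log.append((self.focus, command))
        return self.focus
```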


human factors in computing systems | 2006

AuraOrb: social notification appliance

Mark Altosaar; Roel Vertegaal; Changuk Sohn; Daniel Cheng

One of the problems with notification appliances is that they can be distracting when providing information not of immediate interest to the user. In this paper, we present AuraOrb, an ambient notification appliance that deploys progressive turn taking techniques to minimize notification disruptions. AuraOrb uses eye contact sensing to detect user interest in an initially ambient light notification. Once detected, it displays a text message with a notification heading visible from 360 degrees. Touching the orb causes the associated message to be displayed on the user's computer screen. We performed an initial evaluation of AuraOrb's functionality using a set of heuristics tailored to ambient displays. Results of our evaluation suggest that progressive turn taking techniques allowed AuraOrb users to access notification headings with minimal impact on their focus task.


australasian computer-human interaction conference | 2006

AuraOrb: using social awareness cues in the design of progressive notification appliances

Mark Altosaar; Roel Vertegaal; Changuk Sohn; Daniel Cheng

One of the problems with notification appliances is that they can be distracting when providing information not of immediate interest to the user. In this paper, we present AuraOrb, an ambient notification appliance that deploys progressive turn taking techniques to minimize notification disruptions. AuraOrb uses social awareness cues, such as eye contact, to detect user interest in an initially ambient light notification. Once detected, it displays a text message with a notification heading visible from 360 degrees. Touching the orb causes the associated message to be displayed on the user's computer screen. When user interest is lost, AuraOrb automatically reverts to its idle state. We performed an initial evaluation of AuraOrb's functionality using a set of heuristics tailored to ambient displays. We compared progressive notification with the use of persistent ticker tape notifications and Outlook Express system tray messages for notifying the user of incoming emails. Results of our evaluation suggest that progressive turn taking techniques allowed AuraOrb users to access notification headings with minimal impact on their focus task.
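The progressive turn-taking behavior described above amounts to a small state machine: idle, ambient light, heading text on eye contact, full message on touch, and reversion to idle when interest is lost. A hedged sketch of those transitions, with state names chosen for illustration:

```python
class AuraOrbSketch:
    """Illustrative state machine for AuraOrb-style progressive
    notification; not the published implementation."""

    IDLE, AMBIENT, HEADING, MESSAGE = "idle", "ambient", "heading", "message"

    def __init__(self):
        self.state = self.IDLE

    def notify(self):
        # A new notification begins as a subtle ambient light cue.
        if self.state == self.IDLE:
            self.state = self.AMBIENT

    def eye_contact(self, looking):
        if looking and self.state == self.AMBIENT:
            self.state = self.HEADING   # escalate: show 360-degree heading
        elif not looking and self.state in (self.AMBIENT, self.HEADING):
            self.state = self.IDLE      # interest lost: revert to idle

    def touch(self):
        # Touching the orb pushes the full message to the user's screen.
        if self.state == self.HEADING:
            self.state = self.MESSAGE
```

Each escalation step requires a further, more deliberate cue from the user, which is what keeps the notification minimally disruptive.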


eye tracking research & application | 2004

An eye for an eye: a performance evaluation comparison of the LC technologies and Tobii eye trackers

Daniel Cheng; Roel Vertegaal



human factors in computing systems | 2004

Attentive display: paintings as attentive user interfaces

David Holman; Roel Vertegaal; Changuk Sohn; Daniel Cheng

In this paper we present ECS Display, a large plasma screen that tracks the user's point of gaze from a distance, without any calibration. We discuss how we applied ECS Display in the design of Attentive Art. Artworks displayed on the ECS Display respond directly to user interest by visually highlighting areas of the artwork that receive attention, and by darkening areas that receive little interest. This results in an increasingly abstract artwork that provides guidance to subsequent viewers. We believe such attentive information visualization may be applied more generally to large screen display interactions. The filtering of information on the basis of user interest allows cognitive load associated with large display visualizations to be managed dynamically.


human factors in computing systems | 2005

OverHear: augmenting attention in remote social gatherings through computer-mediated hearing

J. David Smith; Matthew Donald; Daniel Chen; Daniel Cheng; Changuk Sohn; Aadil Mamuji; David Holman; Roel Vertegaal

One of the problems with mediated communication systems is that they limit the user's ability to listen to informal conversations of others within a remote space. In what is known as the Cocktail Party phenomenon, participants in noisy face-to-face conversations are able to focus their attention on a single individual, typically the person they look at. Media spaces do not support the cues necessary to establish this attentive mechanism. We addressed this issue in our design of OverHear, a media space that augments the user's attention in remote social gatherings through computer mediated hearing. OverHear uses an eye tracker embedded in the webcam display to direct the focal point of a robotic shotgun microphone mounted in the remote space. This directional microphone is automatically pointed towards the currently observed individual, allowing the user to OverHear this person's conversations.
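The core selection step of a system like OverHear, mapping the user's gaze point to the nearest tracked person so the microphone can be aimed at them, can be sketched as follows. The coordinate representation and names are assumptions for illustration, not the paper's method.

```python
import math

def aim_microphone(gaze_xy, people):
    """Return the person nearest the user's current gaze point.

    `gaze_xy` is the (x, y) gaze position on the webcam image;
    `people` maps person identifiers to their (x, y) image positions.
    A real system would then steer the robotic shotgun microphone
    toward the selected person's location in the remote space.
    """
    def distance(pos):
        return math.hypot(pos[0] - gaze_xy[0], pos[1] - gaze_xy[1])

    return min(people, key=lambda name: distance(people[name]))
```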


Archive | 2005

Method and apparatus for calibration-free eye tracking

Roel Vertegaal; Changuk Sohn; Daniel Cheng; Victor Macfarlane
