Changuk Sohn
Queen's University
Publications
Featured research published by Changuk Sohn.
Human Factors in Computing Systems | 2003
Roel Vertegaal; Ivo Weevers; Changuk Sohn; Chris Cheung
GAZE-2 is a novel group video conferencing system that uses eye-controlled camera direction to ensure parallax-free transmission of eye contact. To convey eye contact, GAZE-2 employs a video tunnel that allows placement of cameras behind participant images on the screen. To avoid parallax, GAZE-2 automatically directs the cameras in this video tunnel using an eye tracker, selecting for broadcast the single camera closest to where the user is looking. Images of users are displayed in a virtual meeting room and rotated towards the participant each user looks at. This way, eye contact can be conveyed to any number of users with only a single video stream per user. We empirically evaluated whether eye contact perception is affected by automated camera direction, which causes angular shifts in the transmitted images. Findings suggest camera shifts do not affect eye contact perception and are not considered highly distracting.
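As a rough illustration of the camera-selection rule described above, the sketch below picks the camera closest to the user's current gaze point; the camera names, positions, and gaze coordinate are hypothetical, not values from the paper.

```python
# Hypothetical horizontal positions (in pixels) of cameras hidden
# behind the participant images in the video tunnel.
CAMERA_POSITIONS = {"cam_left": 320, "cam_center": 960, "cam_right": 1600}

def select_camera(gaze_x: float) -> str:
    """Return the camera nearest the gaze point; only its feed is broadcast."""
    return min(CAMERA_POSITIONS, key=lambda cam: abs(CAMERA_POSITIONS[cam] - gaze_x))

print(select_camera(880.0))  # -> 'cam_center'
```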
User Interface Software and Technology | 2005
John David Smith; Roel Vertegaal; Changuk Sohn
We introduce ViewPointer, a wearable eye contact sensor that detects deixis towards ubiquitous computers embedded in real-world objects. ViewPointer consists of a small wearable camera no more obtrusive than a common Bluetooth headset. ViewPointer allows any real-world object to be augmented with eye contact sensing capabilities, simply by embedding a small infrared (IR) tag. The headset camera detects when a user is looking at an infrared tag by determining whether the reflection of the tag on the cornea of the user's eye appears sufficiently central to the pupil. ViewPointer not only allows any object to become an eye contact sensing appliance, it also allows identification of users and transmission of data to the user through the object. We present a novel encoding scheme used to uniquely identify ViewPointer tags, as well as a method for transmitting URLs over tags. We present a number of application scenarios as well as an analysis of design principles. We conclude that eye contact sensing input is best used to provide context for action.
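The eye contact test described above can be illustrated with a minimal sketch: treat the user as looking at a tag when the tag's corneal reflection lies close enough to the pupil center. The coordinates and the pixel threshold below are assumptions for illustration, not the authors' parameters.

```python
from math import hypot

def is_eye_contact(pupil_xy, glint_xy, max_offset_px=6.0):
    """True if the tag's corneal reflection is near-central to the pupil."""
    dx = glint_xy[0] - pupil_xy[0]
    dy = glint_xy[1] - pupil_xy[1]
    return hypot(dx, dy) <= max_offset_px

print(is_eye_contact((312, 240), (315, 242)))  # -> True: user is looking at the tag
```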
Human Factors in Computing Systems | 2002
Roel Vertegaal; Connor Dickie; Changuk Sohn; Myron Flickner
We present a prototype attentive cell phone that uses a low-cost EyeContact sensor and speech analysis to detect whether its user is in a face-to-face conversation. We discuss how this information can be communicated to callers to allow them to employ basic social rules of interruption.
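A minimal sketch of the sensor fusion the abstract implies, assuming the phone simply conjoins the two cues; the function names and status strings below are hypothetical, not the prototype's API.

```python
def in_face_to_face_conversation(eye_contact_detected: bool,
                                 speech_detected: bool) -> bool:
    """Conjunction of the two attention cues described in the abstract."""
    return eye_contact_detected and speech_detected

def caller_status(eye_contact_detected: bool, speech_detected: bool) -> str:
    """Status a caller might see, so they can apply social rules of interruption."""
    if in_face_to_face_conversation(eye_contact_detected, speech_detected):
        return "busy: in conversation"
    return "available"

print(caller_status(True, True))   # -> 'busy: in conversation'
print(caller_status(False, True))  # -> 'available'
```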
User Interface Software and Technology | 2005
Connor Dickie; Roel Vertegaal; Changuk Sohn; Daniel Cheng
One of the problems with mobile media devices is that they may distract users during critical everyday tasks, such as navigating the streets of a busy city. We addressed this issue in the design of eyeLook: a platform for attention-sensitive mobile computing. eyeLook appliances use embedded low-cost eyeCONTACT sensors (ECS) to detect when the user looks at the display. We discuss two eyeLook applications, seeTV and seeTXT, that facilitate courteous media consumption in mobile contexts by using the ECS to respond to user attention. seeTV is an attentive mobile video player that automatically pauses content when the user is not looking. seeTXT is an attentive speed reading application that flashes words on the display, advancing text only when the user is looking. By making mobile media devices sensitive to actual user attention, eyeLook allows applications to gracefully transition users between consuming media and managing life.
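The attention gating shared by seeTV and seeTXT can be sketched as a loop that advances media only while an eye contact sensor reports attention. The sensor interface below is an assumption for illustration.

```python
def see_txt(words, sensor, flash):
    """Flash one word at a time (RSVP-style), advancing only under attention."""
    for word in words:
        while not sensor.looking():  # hold the word stream while gaze is away
            pass                     # a real loop would sleep between polls
        flash(word)

class AlwaysLooking:
    """Stand-in sensor so the sketch runs without hardware."""
    def looking(self):
        return True

see_txt(["attentive", "speed", "reading"], AlwaysLooking(), print)
```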
ACM Workshop on Continuous Archival and Retrieval of Personal Experiences | 2004
Connor Dickie; Roel Vertegaal; David Fono; Changuk Sohn; Daniel Chen; Daniel Cheng; Jeffrey S. Shell; Omar Aoudeh
eyeBlog is an automatic personal video recording and publishing system. It consists of ECSGlasses [1], which are a pair of glasses augmented with a wireless eye contact and glyph sensing camera, and a web application that visualizes the video from the ECSGlasses camera as chronologically delineated blog entries. The blog format allows for easy annotation, grading, cataloging and searching of video segments by the wearer or anyone else with internet access. eyeBlog reduces the editing effort of video bloggers by recording video only when something of interest is registered by the camera. Interest is determined by a combination of independent methods. For example, recording can automatically be triggered upon detection of eye contact towards the wearer of the glasses, allowing all face-to-face interactions to be recorded. Recording can also be triggered by the detection of image patterns such as glyphs in the frame of the camera. This allows the wearer to record their interactions with any object that has an associated unique marker. Finally, by pressing a button the user can manually initiate recording.
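A minimal sketch of eyeBlog's trigger logic as described: recording starts when any of the three independent cues fires. The boolean inputs stand in for the actual ECSGlasses detectors.

```python
def should_record(eye_contact: bool, glyph_in_frame: bool,
                  button_pressed: bool) -> bool:
    """Record on eye contact toward the wearer, a known glyph in frame,
    or a manual button press."""
    return eye_contact or glyph_in_frame or button_pressed

assert should_record(True, False, False)      # face-to-face interaction
assert should_record(False, True, False)      # tagged object in view
assert not should_record(False, False, False)
```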
Eye Tracking Research & Applications | 2004
Jeffrey S. Shell; Roel Vertegaal; Daniel Cheng; Alexander W. Skaburskis; Changuk Sohn; A. James Stewart; Omar Aoudeh; Connor Dickie
We present ECSGlasses: wearable eye contact sensing glasses that detect human eye contact. ECSGlasses report eye contact to digital devices, appliances and EyePliances in the user's attention space. Devices use this attentional cue to engage in a more sociable process of turn taking with users. This has the potential to reduce inappropriate intrusions and limit their disruptiveness. We describe new prototype systems, including the Attentive Messaging Service (AMS), the Attentive Hit Counter, the first-person attentive camcorder eyeBlog, and an updated Attentive Cell Phone. We also discuss the potential of these devices to open new windows of interaction using attention as a communication modality. Further, we present a novel signal-encoding scheme to uniquely identify EyePliances and users wearing ECSGlasses in multiparty scenarios.
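The abstract does not give the signal-encoding scheme's details, but the general idea of identifying a tag by an on/off blink pattern can be sketched as below; the preamble, ID width, and per-frame sampling model are assumptions, not the paper's actual design.

```python
PREAMBLE = [1, 1, 1, 0]  # hypothetical start-of-ID marker
ID_BITS = 8              # hypothetical identifier width

def decode_tag_id(samples):
    """Scan per-frame on/off samples for the preamble, then read the
    following ID_BITS bits as the tag's identifier."""
    for i in range(len(samples) - len(PREAMBLE) - ID_BITS + 1):
        if samples[i:i + len(PREAMBLE)] == PREAMBLE:
            bits = samples[i + len(PREAMBLE):i + len(PREAMBLE) + ID_BITS]
            return sum(b << (ID_BITS - 1 - k) for k, b in enumerate(bits))
    return None  # no complete ID found in this window

stream = [0, 0] + PREAMBLE + [0, 1, 0, 0, 0, 1, 0, 1] + [0]
print(decode_tag_id(stream))  # -> 69
```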
Human Factors in Computing Systems | 2005
Roel Vertegaal; Aadil Mamuji; Changuk Sohn; Daniel Cheng
This paper discusses the use of eye contact sensing for focus selection operations in remote controlled media appliances. Focus selection with remote controls tends to be cumbersome, as selection buttons place the remote in a device-specific modality. We addressed this issue with the design of Media EyePliances, home theatre appliances augmented with a digital eye contact sensor. An appliance is selected as the focus of remote commands by looking at its sensor. A central server subsequently routes all commands provided by remote, keyboard or voice input to the focus EyePliance. We discuss a calibration-free digital eye contact sensing technique that allows Media EyePliances to determine the user's point of gaze.
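A minimal sketch of the focus-routing behavior described above, assuming a central server that records the last appliance looked at and forwards every command to it; the appliance names and event methods are illustrative.

```python
class MediaServer:
    def __init__(self):
        self.focus = None  # no appliance selected yet

    def on_eye_contact(self, appliance: str):
        """Looking at an appliance's sensor makes it the command target."""
        self.focus = appliance

    def on_command(self, command: str) -> str:
        """Route remote, keyboard, or voice input to the focus EyePliance."""
        if self.focus is None:
            return "no focus appliance selected"
        return f"sending '{command}' to {self.focus}"

server = MediaServer()
server.on_eye_contact("television")
print(server.on_command("volume up"))  # -> sending 'volume up' to television
```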
Human Factors in Computing Systems | 2002
Roel Vertegaal; Ivo Weevers; Changuk Sohn
GAZE-2 is an attentive video conferencing system that conveys whom users are talking to by measuring whom a user looks at and then rotating his video image towards that person in a 3D meeting room. Attentive Videotunnels ensure a parallax-free image by automatically broadcasting the feed from the camera closest to where the user looks. The system allows attentive compression by reducing resolution of video and audio feeds from users that are not being looked at.
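The attentive compression idea can be sketched as a simple policy: full resolution for the participant currently being looked at, reduced resolution for everyone else. The resolution figures below are hypothetical.

```python
def stream_resolutions(participants, looked_at,
                       full=(640, 480), reduced=(160, 120)):
    """Map each participant to a send resolution based on attention."""
    return {p: (full if p == looked_at else reduced) for p in participants}

print(stream_resolutions(["alice", "bob", "carol"], looked_at="bob"))
# -> {'alice': (160, 120), 'bob': (640, 480), 'carol': (160, 120)}
```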
Human Factors in Computing Systems | 2006
Mark Altosaar; Roel Vertegaal; Changuk Sohn; Daniel Cheng
One of the problems with notification appliances is that they can be distracting when providing information not of immediate interest to the user. In this paper, we present AuraOrb, an ambient notification appliance that deploys progressive turn taking techniques to minimize notification disruptions. AuraOrb uses eye contact sensing to detect user interest in an initially ambient light notification. Once detected, it displays a text message with a notification heading visible from 360 degrees. Touching the orb causes the associated message to be displayed on the user's computer screen. We performed an initial evaluation of AuraOrb's functionality using a set of heuristics tailored to ambient displays. Results of our evaluation suggest that progressive turn taking techniques allowed AuraOrb users to access notification headings with minimal impact on their focus task.
Australasian Computer-Human Interaction Conference | 2006
Mark Altosaar; Roel Vertegaal; Changuk Sohn; Daniel Cheng
One of the problems with notification appliances is that they can be distracting when providing information not of immediate interest to the user. In this paper, we present AuraOrb, an ambient notification appliance that deploys progressive turn taking techniques to minimize notification disruptions. AuraOrb uses social awareness cues, such as eye contact, to detect user interest in an initially ambient light notification. Once detected, it displays a text message with a notification heading visible from 360 degrees. Touching the orb causes the associated message to be displayed on the user's computer screen. When user interest is lost, AuraOrb automatically reverts to its idle state. We performed an initial evaluation of AuraOrb's functionality using a set of heuristics tailored to ambient displays. We compared progressive notification with the use of persistent ticker tape notifications and Outlook Express system tray messages for notifying the user of incoming emails. Results of our evaluation suggest that progressive turn taking techniques allowed AuraOrb users to access notification headings with minimal impact on their focus task.
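AuraOrb's progressive turn taking can be sketched as a small state machine: ambient light, then a heading on eye contact, then the full message on touch, reverting to ambient when interest is lost. The state and event names paraphrase the abstract rather than the authors' implementation.

```python
TRANSITIONS = {
    ("ambient", "eye_contact"): "heading",        # user shows interest
    ("heading", "touch"): "full_message",         # message opens on screen
    ("heading", "interest_lost"): "ambient",      # revert to idle state
    ("full_message", "interest_lost"): "ambient",
}

def step(state: str, event: str) -> str:
    """Advance the notification state; ignore events with no transition."""
    return TRANSITIONS.get((state, event), state)

state = "ambient"
for event in ["eye_contact", "touch", "interest_lost"]:
    state = step(state, event)
    print(event, "->", state)
```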