Publication


Featured research published by Connor Dickie.


human factors in computing systems | 2002

Designing attentive cell phone using wearable eyecontact sensors

Roel Vertegaal; Connor Dickie; Changuk Sohn; Myron Flickner

We present a prototype attentive cell phone that uses a low-cost EyeContact sensor and speech analysis to detect whether its user is in a face-to-face conversation. We discuss how this information can be communicated to callers to allow them to employ basic social rules of interruption.
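The conversation-detection logic described above can be sketched minimally as follows. The fusion rule, function name, and signature are illustrative assumptions, not the paper's implementation:

```python
def phone_state(eye_contact: bool, speech_detected: bool) -> str:
    """Hypothetical fusion of the EyeContact sensor and speech analysis:
    treat simultaneous eye contact and speech as evidence of a
    face-to-face conversation, so callers can be notified before
    interrupting. The actual paper combines these cues; this exact
    AND rule is an assumption."""
    if eye_contact and speech_detected:
        return "in conversation"
    return "available"
```

A caller-side client could then display this state and apply social rules of interruption, e.g. deferring a ring while the callee is "in conversation".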


user interface software and technology | 2005

eyeLook: using attention to facilitate mobile media consumption

Connor Dickie; Roel Vertegaal; Changuk Sohn; Daniel Cheng

One of the problems with mobile media devices is that they may distract users during critical everyday tasks, such as navigating the streets of a busy city. We addressed this issue in the design of eyeLook: a platform for attention sensitive mobile computing. eyeLook appliances use embedded low-cost eyeCONTACT sensors (ECS) to detect when the user looks at the display. We discuss two eyeLook applications, seeTV and seeTXT, that facilitate courteous media consumption in mobile contexts by using the ECS to respond to user attention. seeTV is an attentive mobile video player that automatically pauses content when the user is not looking. seeTXT is an attentive speed reading application that flashes words on the display, advancing text only when the user is looking. By making mobile media devices sensitive to actual user attention, eyeLook allows applications to gracefully transition users between consuming media and managing life.
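The attention-gating behaviour of seeTV can be sketched as follows. The class names and polling API are assumptions for illustration, not the authors' code:

```python
class EyeContactSensor:
    """Stand-in for the low-cost eyeCONTACT sensor (ECS): reports
    whether the user is currently looking at the display."""
    def __init__(self):
        self.user_looking = False


class SeeTV:
    """Attentive video player sketch: content plays only while the
    sensor reports eye contact, and pauses otherwise."""
    def __init__(self, sensor: EyeContactSensor):
        self.sensor = sensor

    def tick(self) -> str:
        # Pause when attention is lost; resume when it returns.
        return "playing" if self.sensor.user_looking else "paused"
```

seeTXT would consume the same gaze signal, advancing flashed words only on ticks where the user is looking.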


acm workshop on continuous archival and retrieval of personal experiences | 2004

Augmenting and sharing memory with eyeBlog

Connor Dickie; Roel Vertegaal; David Fono; Changuk Sohn; Daniel Chen; Daniel Cheng; Jeffrey S. Shell; Omar Aoudeh

eyeBlog is an automatic personal video recording and publishing system. It consists of ECSGlasses [1], which are a pair of glasses augmented with a wireless eye contact and glyph sensing camera, and a web application that visualizes the video from the ECSGlasses camera as chronologically delineated blog entries. The blog format allows for easy annotation, grading, cataloging and searching of video segments by the wearer or anyone else with internet access. eyeBlog reduces the editing effort of video bloggers by recording video only when something of interest is registered by the camera. Interest is determined by a combination of independent methods. For example, recording can automatically be triggered upon detection of eye contact towards the wearer of the glasses, allowing all face-to-face interactions to be recorded. Recording can also be triggered by the detection of image patterns such as glyphs in the frame of the camera. This allows the wearer to record their interactions with any object that has an associated unique marker. Finally, by pressing a button the user can manually initiate recording.
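The abstract describes three independent recording triggers combined disjunctively; a minimal sketch of that decision (the function name and signature are assumed, not taken from the system):

```python
def should_record(eye_contact: bool, glyph_in_frame: bool,
                  button_pressed: bool) -> bool:
    """eyeBlog-style trigger sketch: record whenever any one of the
    independent interest cues fires -- eye contact toward the wearer,
    a unique glyph detected in the camera frame, or a manual button
    press -- so only moments of registered interest are captured."""
    return eye_contact or glyph_in_frame or button_pressed
```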


eye tracking research & application | 2004

ECSGlasses and EyePliances: using attention to open sociable windows of interaction

Jeffrey S. Shell; Roel Vertegaal; Daniel Cheng; Alexander W. Skaburskis; Changuk Sohn; A. James Stewart; Omar Aoudeh; Connor Dickie

We present ECSGlasses: wearable eye contact sensing glasses that detect human eye contact. ECSGlasses report eye contact to digital devices, appliances and EyePliances in the user's attention space. Devices use this attentional cue to engage in a more sociable process of turn taking with users. This has the potential to reduce inappropriate intrusions and limit their disruptiveness. We describe new prototype systems, including the Attentive Messaging Service (AMS), the Attentive Hit Counter, the first-person attentive camcorder eyeBlog, and an updated Attentive Cell Phone. We also discuss the potential of these devices to open new windows of interaction using attention as a communication modality. Further, we present a novel signal-encoding scheme to uniquely identify EyePliances and users wearing ECSGlasses in multiparty scenarios.


australasian computer-human interaction conference | 2006

LookPoint: an evaluation of eye input for hands-free switching of input devices between multiple computers

Connor Dickie; Jamie Hart; Roel Vertegaal; Alex Eiser

We present LookPoint, a system that uses eye input for switching input between multiple computing devices. LookPoint uses an eye tracker to detect which screen the user is looking at, and then automatically routes mouse and keyboard input to the computer associated with that screen. We evaluated the use of eye input for switching between three computer monitors during a typing task, comparing its performance with that of three other selection techniques: multiple keyboards, function key selection, and mouse selection. Results show that the use of eye input is 111% faster than the mouse, 75% faster than function keys, and 37% faster than the use of multiple keyboards. A user satisfaction questionnaire showed that participants also preferred the use of eye input over the other three techniques. The implications of this work are discussed, as well as future calibration-free implementations.
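The routing idea, i.e. forward keyboard events to whichever computer drives the currently attended screen, can be sketched as below. The class names and callback API are illustrative assumptions:

```python
class Computer:
    """Stand-in for one machine in the multi-computer setup; records
    the input events routed to it."""
    def __init__(self, name: str):
        self.name = name
        self.received = []

    def send_key(self, key: str) -> None:
        self.received.append(key)


class LookPointRouter:
    """LookPoint-style router sketch: an eye tracker reports which
    monitor has the user's gaze, and keystrokes are forwarded to the
    computer associated with that monitor."""
    def __init__(self, screens: dict):
        self.screens = screens        # monitor id -> Computer
        self.gaze_screen = None

    def on_gaze(self, screen_id: str) -> None:
        # Eye-tracker callback: remember the attended screen.
        self.gaze_screen = screen_id

    def on_key(self, key: str) -> None:
        # Route the keystroke to the currently attended computer.
        if self.gaze_screen is not None:
            self.screens[self.gaze_screen].send_key(key)
```

Mouse events would be routed the same way; a real system must also handle the case where the gaze falls between monitors.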


human factors in computing systems | 2011

Don't touch: social appropriateness of touch sensor placement on interactive Lumalive e-textile shirts

Sylvia H. Cheng; Connor Dickie; Andreas Hanewich-Hollatz; Roel Vertegaal; Justin Lee

In this video, we discuss the design of an e-textile shirt with an interactive Lumalive display featuring a touch-controlled image browser. To determine where to place touch sensors, we investigated which areas of the Lumalive shirt users would be comfortable touching or being touched, based on how often participants opted out of touches. For both touchers and touchees, opt-outs occurred mostly in the upper chest; overall, participants were least comfortable with touches on the upper chest, the lower abdomen, and the lower back. We conclude that the most appropriate areas for touch sensors on a shirt are the arms, shoulders, and upper back.


human factors in computing systems | 2013

A biological imperative for interaction design

Amanda Parkes; Connor Dickie

This paper presents an emerging approach to the integration of biological systems (their matter, mechanisms, and metabolisms) into models of interaction design. By bringing together conceptual visions and initial experiments of alternative bio-based approaches to sensing, display, fabrication, materiality, and energy, we seek to construct an inspirational discussion platform that approaches non-living and living matter as a continuum for computational interaction. We also discuss the emergence of the DIY bio and open-source biology movements, which allow non-biologists to gain access to the processes, tools, and infrastructure of this domain, and introduce Synbiota, an integrated, web-based platform for synthetic biology research.


human factors in computing systems | 2012

FlexCam: using thin-film flexible OLED color prints as a camera array

Connor Dickie; Nicholas Fellion; Roel Vertegaal

FlexCam is a novel compound camera platform that explores interactions with color photographic prints using thin-film flexible color displays. FlexCam augments a thin-film color Flexible Organic Light Emitting Diode (FOLED) photographic viewfinder display with an array of lenses at the back. Our prototype allows the photograph to act as a camera, exploiting the flexibility of the viewfinder as a means to dynamically reconfigure images captured by the photograph. FlexCam's flexible camera array has altered optical characteristics when flexed, allowing users to dynamically expand and contract the camera's field of view (FOV). Integrated bend sensors measure the amount of flexion in the display. The degree of flexion is used as input to software, which dynamically stitches images from the camera array and adjusts viewfinder size to reflect the virtual camera's FOV. Our prototype envisions the use of photographs as cameras in one aggregate flexible, thin-film device.
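The flexion-to-FOV mapping can be illustrated with a simple sketch. The constants and the linear mapping are assumptions for illustration, not the published calibration:

```python
# Assumed values: the paper does not report these numbers.
FLAT_FOV_DEG = 60.0    # virtual camera FOV when the print is flat
MAX_EXTRA_DEG = 60.0   # additional FOV gained at full flexion


def virtual_fov(flexion: float) -> float:
    """Map a normalized bend-sensor reading to the stitched virtual
    camera's field of view, in the spirit of FlexCam: bending the
    print spreads the lens array apart, widening the combined FOV.
    flexion is clamped to [0, 1]: 0 = flat, 1 = fully bent."""
    flexion = max(0.0, min(1.0, flexion))
    return FLAT_FOV_DEG + MAX_EXTRA_DEG * flexion
```

In the real system this value would drive both the image-stitching step and the viewfinder size shown on the FOLED display.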


human factors in computing systems | 2004

Eye contact sensing glasses for attention-sensitive wearable video blogging

Connor Dickie; Roel Vertegaal; Jeffrey S. Shell; Changuk Sohn; Daniel Cheng; Omar Aoudeh


Archive | 2008

METHODS AND SYSTEMS FOR DISPLAYING A MESSAGE IN A WIDE-SPECTRUM DISPLAY

Connor Dickie; Jeffrey S. Shell
