Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Chia-Hsun Jackie Lee is active.

Publication


Featured research published by Chia-Hsun Jackie Lee.


Human Factors in Computing Systems | 2006

Lover's cups: drinking interfaces as new communication channels

Hyemin Chung; Chia-Hsun Jackie Lee; Ted Selker

This paper shows how computer interfaces can enhance common activities and use them as communication methods between people. Here, the act of drinking serves as an input for remote communication with the support of computer interfaces. We present Lover's Cups, which enable people to share the time of drinking with someone they care about in a different place. Using a wireless connection, an otherwise ordinary pair of cups becomes a communication device, amplifying the social aspect of drinking behavior.


Intelligent User Interfaces | 2006

Augmenting kitchen appliances with a shared context using knowledge about daily events

Chia-Hsun Jackie Lee; Leonardo Bonanni; José H. Espinosa; Henry Lieberman; Ted Selker

Networking appliances can make them aware of each other, but interacting with a complex network can be difficult in itself. KitchenSense is a sensor-rich networked kitchen research platform that uses Common Sense reasoning to simplify control interfaces and augment interaction. The system's sensor net attempts to interpret people's intentions in order to create fail-soft support for safe, efficient, and aesthetic activity. By considering embedded sensor data together with daily-event knowledge, a centrally controlled system can develop a shared context across various appliances. The platform is used to evaluate augmented intelligent support of work scenarios in physical spaces.
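
The shared-context idea can be illustrated with a minimal sketch: fuse sensor events with daily-event knowledge to infer an activity that all appliances can react to. The events, rules, and activity names below are assumptions for illustration, not the published KitchenSense design.

```python
# Hypothetical KitchenSense-style context fusion: match observed sensor
# events against daily-event rules to infer the current kitchen activity.
DAILY_EVENT_RULES = {
    frozenset({"stove_on", "pan_present"}): "cooking",
    frozenset({"faucet_on", "dish_present"}): "washing_up",
}

def infer_activity(sensor_events):
    """Return the first activity whose required events are all observed."""
    observed = set(sensor_events)
    for required, activity in DAILY_EVENT_RULES.items():
        if required <= observed:          # all required events present
            return activity
    return "unknown"                      # fail-soft: no confident inference
```

The inferred activity would then be broadcast as shared context, so individual appliances need not interpret raw sensor data themselves.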


Intelligent User Interfaces | 2007

Emotionally reactive television

Chia-Hsun Jackie Lee; Chaochi Chang; Hyemin Chung; Connor Dickie; Ted Selker

When is an interface simple? When it is invisible, or when it is very obvious, even intrusive? Since television was created, watching TV has been considered a static activity. TV audiences have very limited ways to interact with a TV, such as turning it on and off, adjusting the volume, and switching channels. This paper suggests that, as technology matures, TV programs should respond socially to people, affording and accepting audiences' emotional expression. We present HiTV, an emotionally reactive TV system that uses a digitally augmented soft ball as an affect-input interface to amplify a TV program's video and audio signals. HiTV transforms the original video and audio into effects that intrigue viewers and fulfill their emotional expectations.
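
A minimal sketch of the affect-input idea: map the squeeze pressure of a soft-ball sensor to a gain applied to the program's audio/video effects. The sensor range and gain curve are assumptions for illustration, not the published HiTV design.

```python
# Hypothetical HiTV-style mapping from a soft-ball pressure reading to an
# amplification gain: a harder squeeze produces a stronger effect.
def affect_to_gain(pressure, max_pressure=1023, base_gain=1.0, max_boost=0.5):
    """Map a raw pressure reading (0..max_pressure) to a gain multiplier."""
    level = max(0.0, min(1.0, pressure / max_pressure))  # normalize to [0, 1]
    return base_gain + max_boost * level

print(affect_to_gain(0))     # no squeeze: 1.0 (signal unchanged)
print(affect_to_gain(1023))  # full squeeze: 1.5 (maximum amplification)
```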


Human Factors in Computing Systems | 2008

Shybot: friend-stranger interaction for children living with autism

Chia-Hsun Jackie Lee; Kyunghee Kim; Cynthia Breazeal; Rosalind W. Picard

This paper presents Shybot, a personal mobile robot designed to both embody and elicit reflection on shyness behaviors. Shybot is being designed to detect human presence and familiarity from face detection and proximity sensing, in order to categorize people as friends or strangers to interact with. Shybot can also reflect elements of the anxious state of its human companion through LEDs and a spinning propeller. We designed this simple social interaction to open up a new direction of intervention for children living with autism. We hope that, starting from minimal social interaction, a child with autism or a social anxiety disorder could reflect on and more deeply understand personal shyness behaviors, as a first step toward developing greater capacity for complex social interaction.
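
The friend/stranger categorization described above can be sketched as a simple rule over the two sensed cues. The thresholds and the rule itself are assumptions for illustration; the paper does not publish this logic.

```python
# Hypothetical Shybot-style categorization from face familiarity and
# proximity. familiarity is a recognition score in [0, 1]; distance_cm
# comes from the proximity sensor.
def categorize(face_detected, familiarity, distance_cm,
               familiar_threshold=0.7, near_cm=120):
    """Return 'friend', 'stranger', or 'nobody' from sensed cues."""
    if not face_detected or distance_cm > near_cm:
        return "nobody"        # no one close enough to interact with
    if familiarity >= familiar_threshold:
        return "friend"        # recognized, familiar face nearby
    return "stranger"          # unfamiliar face nearby
```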


Human Factors in Computing Systems | 2006

Attention meter: a vision-based input toolkit for interaction designers

Chia-Hsun Jackie Lee; Chiun-Yi Ian Jang; Ting-Han Daniel Chen; Jon Wetzel; Yang-Ting Bowbow Shen; Ted Selker

This paper shows how a software toolkit can allow graphic designers to make camera-based interactive environments in a short period of time without experience in user interface design or machine vision. The Attention Meter, a vision-based input toolkit, gives users an analysis of the faces found in a given image stream, including facial expression, body motion, and attentive activities. This data is fed to a text file that can be easily understood by humans and programs alike. A four-day workshop demonstrated that Flash-savvy architecture students could construct interactive spaces (e.g. TaiKer-KTV and ScreamMarket) based on body and head motions.
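
The text-file output is the toolkit's integration point: any program that can read lines can consume it. The abstract does not specify the file format, so the key=value layout below is an assumption for illustration.

```python
# Hypothetical consumer of Attention Meter-style text output: parse one
# record line such as "faces=2 motion=0.4 smiles=1" into a dictionary.
def parse_frame(line):
    """Parse a whitespace-separated 'key=value' record into a dict."""
    record = {}
    for token in line.split():
        key, _, value = token.partition("=")
        record[key] = float(value) if "." in value else int(value)
    return record

frame = parse_frame("faces=2 motion=0.4 smiles=1")
```

A Flash movie, game engine, or script could poll the file and react to fields like `faces` or `motion` without touching the vision code.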


International Conference on Computer Graphics and Interactive Techniques | 2006

Enhancing interface design using attentive interaction design toolkit

Chia-Hsun Jackie Lee; Jon Wetzel; Ted Selker

This paper shows how a software toolkit enables graphic designers to make camera-based interactive environments in a short period of time without requiring experience in user interface design or machine vision. The Attentive Interaction Design Toolkit, a vision-based input toolkit, gives users an analysis of the faces found in a given image stream, including facial expression, body motion, and attentive activities. This data is fed to a text file that can be easily understood by humans and programs alike. A four-day workshop demonstrated that Flash-savvy architecture students could construct interactive spaces (e.g. Eat-Eat-Eat, TaiKer-KTV, and ScreamMarket) based on a group of people's body and head motions.


International Journal of Architectural Computing | 2006

iSphere: A Free-Hand 3D Modeling Interface

Chia-Hsun Jackie Lee; Yuchang Hu; Ted Selker

Making 3D models should be an easy and intuitive task, like free-hand sketching. This paper presents iSphere, a 24-degree-of-freedom 3D input device. iSphere is a dodecahedron embedded with 12 capacitive sensors for pulling out and pressing in the 12 control points of a 3D geometry. It exhibits a conceptual 3D modeling approach that reduces the mental load of low-level commands. Using analog 3D manipulation, designers can work with high-level modeling concepts such as pushing or pulling 3D surfaces. Our experiment shows that iSphere saved steps in selecting control points and reviewing menus, leading to a clearer focus on what to build instead of how to build it. Novices saved significant time learning 3D manipulation by using iSphere to make conceptual models. One tradeoff, however, is the limited fidelity of iSphere's analog input mechanism.
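
The 12-sensor-to-12-control-point mapping can be sketched as follows. The displacement axis, scale, and sign convention are assumptions for illustration; the paper does not publish this code.

```python
# Hypothetical iSphere-style control: each capacitive face drives one
# control point. A signed analog reading (press-in negative, pull-out
# positive) offsets that point; z here stands in for the face normal.
def displace(control_points, face_index, reading, scale=0.01):
    """Offset one of the 12 control points by a scaled analog reading."""
    x, y, z = control_points[face_index]
    control_points[face_index] = [x, y, z + scale * reading]
    return control_points

points = [[0.0, 0.0, 0.0] for _ in range(12)]  # one point per dodecahedron face
displace(points, 3, 50)                        # pull face 3 outward
```

Because the input is analog, a gentle touch nudges the surface while a firm pull deforms it strongly, which is what supports the high-level push/pull concept.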


Intelligent User Interfaces | 2005

A framework for designing intelligent task-oriented augmented reality user interfaces

Leonardo Bonanni; Chia-Hsun Jackie Lee; Ted Selker

A task-oriented space can benefit from an augmented reality interface that layers existing tools and surfaces with useful information to make cooking easier, safer, and more efficient. To serve experienced users as well as novices, augmented reality interfaces need to adapt their modalities to the user's expertise and allow multiple ways to perform tasks. We present a framework for designing an intelligent user interface that informs and choreographs multiple tasks in a single space according to a model of tasks and users. A residential kitchen has been outfitted with systems that gather data from tools and surfaces and project multi-modal interfaces back onto those tools and surfaces. Based on user evaluations of this augmented reality kitchen, we propose a system that tailors information modalities to the spatial and temporal qualities of the task and to the expertise, location, and progress of the user. The intelligent augmented reality user interface choreographs multiple tasks in the same space at the same time.


Human Factors in Computing Systems | 2008

Lessons learned from a pilot study quantifying face contact and skin conductance in teens with asperger syndrome

Chia-Hsun Jackie Lee; Robert R. Morris; Matthew S. Goodwin; Rosalind W. Picard

This paper presents lessons learned from a preliminary study quantifying face contact and corresponding physiological reactivity in teenagers with Asperger syndrome. In order to detect face contact and physiological arousability, we created a wearable system that combines a camera with OpenCV face detection and skin conductance sensors. In this paper, we discuss issues involved in setting up experimental environments for wearable platforms to detect face contact and skin conductance levels simultaneously, and address technological, statistical, and ethical considerations for future technological interventions.


International Conference on Computer Graphics and Interactive Techniques | 2006

Lover's cups: connecting you and your love one

Chia-Hsun Jackie Lee; Hyemin Chung

Lover's Cups explores the idea of sharing the moment of drinking between a couple in different locations. Cups have been essential everyday objects for thousands of years. In this work, we digitally augment cups as communication interfaces for drinking. A pair of cups is wirelessly connected, with sip sensors and LED illumination. When your loved one takes a sip, your Lover's Cup glows, as shown in Figure 1. If both of you drink at the same time, both Lover's Cups glow in celebration of the shared intimacy between you and your loved one.
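
The glow behavior described above reduces to a small rule over the two sip sensors. The state names and the idea of an instantaneous rule are assumptions for illustration; the published cups would also need a timing window and wireless sync.

```python
# Hypothetical Lover's Cups glow rule for one cup, given the local and
# remote sip-sensor readings for the current moment.
def glow_state(my_sip, partner_sip):
    """Return this cup's LED state: 'off', 'glow', or 'celebrate'."""
    if my_sip and partner_sip:
        return "celebrate"   # simultaneous drinking: shared-intimacy glow
    if partner_sip:
        return "glow"        # partner is sipping remotely
    return "off"             # your own sip lights the partner's cup, not yours
```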

Collaboration


Dive into Chia-Hsun Jackie Lee's collaborations.

Top Co-Authors

Ted Selker, Massachusetts Institute of Technology
Hyemin Chung, Massachusetts Institute of Technology
Chaochi Chang, Massachusetts Institute of Technology
Jon Wetzel, Massachusetts Institute of Technology
Leonardo Bonanni, Massachusetts Institute of Technology
Rosalind W. Picard, Massachusetts Institute of Technology
Anna Huang, Massachusetts Institute of Technology
Connor Dickie, Massachusetts Institute of Technology
Cynthia Breazeal, Massachusetts Institute of Technology
Edward Yu-Te Shen, Massachusetts Institute of Technology