Publication


Featured research published by Elizabeth S. Kim.


International Journal of Social Robotics | 2011

The Benefits of Interactions with Physically Present Robots over Video-Displayed Agents

Wilma Bainbridge; Justin W. Hart; Elizabeth S. Kim; Brian Scassellati

This paper explores how a robot's physical presence affects human judgments of the robot as a social partner. For this experiment, participants collaborated on simple book-moving tasks with a humanoid robot that was either physically present or displayed via a live video feed. Multiple tasks individually examined the following aspects of social interaction: greetings, cooperation, trust, and personal space. Participants readily greeted and cooperated with the robot whether it was physically present or shown in live video. However, participants were more likely both to fulfill an unusual request and to afford greater personal space to the robot when it was physically present than when it was shown on live video. The same was true when the video-displayed robot's gestures were augmented with disambiguating 3-D information. Questionnaire data support these behavioral findings and also show that participants had an overall more positive interaction with the physically present robot.


Robot and Human Interactive Communication | 2008

The effect of presence on human-robot interaction

Wilma Bainbridge; Justin W. Hart; Elizabeth S. Kim; Brian Scassellati

This study explores how a robot's physical or virtual presence affects unconscious human perception of the robot as a social partner. Subjects collaborated on simple book-moving tasks with either a physically present humanoid robot or a video-displayed robot. Each task examined a single aspect of interaction: greetings, cooperation, trust, and personal space. Subjects readily greeted and cooperated with the robot in both conditions. However, subjects were more likely to fulfill an unusual instruction and to afford greater personal space to the robot in the physical condition than in the video-displayed condition. The same tendencies occurred when the virtual robot was supplemented by disambiguating 3-D information.


Human-Robot Interaction | 2012

Bridging the research gap: making HRI useful to individuals with autism

Elizabeth S. Kim; Rhea Paul; Frederick Shic; Brian Scassellati

While there is a rich history of studies involving robots and individuals with autism spectrum disorders (ASD), few of these studies have made substantial impact in the clinical research community. In this paper we first examine how differences in approach, study design, evaluation, and publication practices have hindered uptake of these research results. Based on ten years of collaboration, we suggest a set of design principles that satisfy the needs (both academic and cultural) of both the robotics and clinical autism research communities. Using these principles, we present a study that demonstrates a quantitatively measured improvement in human-human social interaction for children with ASD, effected by interaction with a robot.


Human-Robot Interaction | 2009

How people talk when teaching a robot

Elizabeth S. Kim; Dan Leyzberg; Katherine M. Tsui; Brian Scassellati

We examine affective vocalizations provided by human teachers to robotic learners. In unscripted one-on-one interactions, participants provided vocal input to a robotic dinosaur as the robot selected toy buildings to knock down. We find that (1) people vary their vocal input depending on the learner's performance history, (2) people do not wait until a robotic learner completes an action before they provide input, and (3) people naïvely and spontaneously use intensely affective vocalizations. Our findings suggest that modifications may be needed to traditional machine learning models to better fit observed human tendencies. Our observations of human behavior contradict the popular assumptions made by machine learning algorithms (in particular, reinforcement learning) that the reward function is stationary and path-independent in social learning interactions. We also propose an interaction taxonomy that describes three phases of a human teacher's vocalizations: direction, spoken before an action is taken; guidance, spoken as the learner communicates an intended action; and feedback, spoken in response to a completed action.
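The non-stationarity finding lends itself to a small illustration. Below is a minimal, hypothetical sketch (not from the paper) contrasting the stationary, path-independent reward that standard reinforcement learning assumes with a history-dependent feedback function of the kind the abstract describes, where praise intensifies after a run of failures:

```python
# A toy illustration (invented, not the paper's model) of the mismatch the
# authors describe: standard RL assumes reward depends only on the current
# state-action pair, while observed human feedback also depends on the
# learner's recent performance history.

def stationary_reward(state, action):
    """Classic RL assumption: r(s, a) is fixed and path-independent."""
    return 1.0 if action == "correct" else -1.0

def history_dependent_feedback(state, action, history):
    """Hypothetical model of the observed teacher behavior: a correct
    action after a run of failures draws stronger praise than the same
    action after a run of successes."""
    recent_failures = sum(1 for a in history[-3:] if a != "correct")
    if action == "correct":
        return 1.0 + 0.5 * recent_failures  # enthusiasm scales with struggle
    return -1.0

history = ["wrong", "wrong", "wrong"]
print(stationary_reward(None, "correct"))                    # always 1.0
print(history_dependent_feedback(None, "correct", history))  # 2.5: stronger praise
```

An agent that treats the second function as if it were the first will misread the teacher's enthusiasm as a property of the action alone, which is the modeling gap the paper points to.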


International Conference on Development and Learning | 2007

Learning to refine behavior using prosodic feedback

Elizabeth S. Kim; Brian Scassellati

We demonstrate the utility of speech prosody as a feedback mechanism in a machine learning system. We have constructed a reinforcement learning system for our humanoid robot Nico, which uses prosodic feedback to refine the parameters of a social waving behavior. We define a waving behavior to be an oscillation of Nico's elbow joint, parameterized by amplitude and frequency. Our system explores a space of amplitude and frequency values, using Q-learning to learn the wave which optimally satisfies a human tutor. To estimate tutor feedback in real time, we first segment speech from ambient noise using a maximum-likelihood voice-activation detector. We then use a k-Nearest Neighbors classifier, with k = 3, over 15 prosodic features, to estimate a binary approval/disapproval feedback signal from segmented utterances. Both our voice-activation detector and prosody classifier are trained on the speech of the individual tutor. We show that our system learns the tutor's desired wave over the course of a sequence of trial-feedback cycles. We demonstrate our learning results for a single speaker on a space of nine distinct waving behaviors.
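As a concrete illustration of the learning loop described above, here is a minimal sketch in Python. The 3x3 amplitude/frequency grid, the parameter values, and the `fake_tutor` stand-in for the prosody classifier are all assumptions chosen to match the nine-behavior space the abstract mentions, not the paper's actual implementation:

```python
import random

# Hypothetical discretization: a 3x3 grid yields the nine distinct waving
# behaviors mentioned in the abstract (the values are illustrative).
AMPLITUDES = [0.2, 0.5, 0.8]   # elbow swing, radians
FREQUENCIES = [0.5, 1.0, 1.5]  # oscillations per second
ACTIONS = [(a, f) for a in AMPLITUDES for f in FREQUENCIES]

def q_learning(get_feedback, episodes=100, alpha=0.1, epsilon=0.2):
    """Tabular Q-learning over a single-state task: each episode the robot
    performs one wave and receives a binary approval (+1) / disapproval (-1)
    signal in place of the paper's prosody classifier output."""
    q = {action: 0.0 for action in ACTIONS}
    for _ in range(episodes):
        # epsilon-greedy exploration over the nine candidate waves
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(q, key=q.get)
        reward = get_feedback(action)
        # single-state update: no successor state, so no discounted max term
        q[action] += alpha * (reward - q[action])
    return max(q, key=q.get)

# Stand-in for the tutor-trained prosody classifier: this toy tutor
# approves only a medium-amplitude, slow wave.
def fake_tutor(action):
    return 1 if action == (0.5, 0.5) else -1

print(q_learning(fake_tutor))  # converges to (0.5, 0.5) given enough episodes
```

Because there is only one state, the update collapses to a bandit-style running average of feedback per wave, which mirrors the trial-feedback cycles the abstract describes.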


Journal of Autism and Developmental Disorders | 2017

Parent-Endorsed Sex Differences in Toddlers with and without ASD: Utilizing the M-CHAT.

Roald A. Øien; Logan Hart; Synnve Schjølberg; Carla A. Wall; Elizabeth S. Kim; Anders Nordahl-Hansen; Martin Eisemann; Katarzyna Chawarska; Fred R. Volkmar; Frederick Shic

Sex differences in typical development can provide context for understanding ASD. Baron-Cohen (Trends Cogn Sci 6(6):248–254, 2002) suggested ASD could be considered an extreme expression of normal male, compared to female, phenotypic profiles. In this paper, sex-specific M-CHAT scores from N = 53,728 18-month-old toddlers, including n = 185 (32 females) with ASD, were examined. Results suggest a nuanced view of the "extreme male brain theory of autism". At an item level, almost every male versus female disadvantage in the broader population was consistent with M-CHAT vulnerabilities in ASD. However, controlling for total M-CHAT failures, this male disadvantage was more equivocal, and many classically ASD-associated features were found to be more common in non-ASD toddlers. Within ASD, females showed relative strengths in joint attention, but impairments in imitation.


IEEE Computational Intelligence Magazine | 2006

Social development [robots]

Brian Scassellati; C. Crick; K. Gold; Elizabeth S. Kim; Frederick Shic; Ganghua Sun

Most robots are designed to operate in environments that are either highly constrained (as is the case in an assembly line) or extremely hazardous (such as the surface of Mars). Machine learning has been an effective tool in both of these environments by augmenting the flexibility and reliability of robotic systems, but this is often a very difficult problem because the complexity of learning in the real world introduces very high-dimensional state spaces and applies severe penalties for mistakes. Human children are raised in environments that are just as complex (or even more so) than those typically studied in robot learning scenarios. However, the presence of parents and other caregivers radically changes the type of learning that is possible. Consciously and unconsciously, adults tailor their actions and the environment to the child. They draw attention to important aspects of a task, help in identifying the cause of errors, and generally tailor the task to the child's capabilities. Our research group builds robots that learn in the same type of supportive environment that human children have and develop skills incrementally through their interactions. Our robots interact socially with human adults using the same natural conventions that a human child would use. Our work sits at the intersection of the fields of social robotics (Fong et al., 2003; Breazeal and Scassellati, 2002) and autonomous mental development (Weng et al., 2000). Together, these two fields offer the vision of a machine that can learn incrementally, directly from humans, in the same ways that humans learn from each other. In this article, we introduce some of the challenges, goals, and applications of this research.


Autism | 2017

The relationship between autism symptoms and arousal level in toddlers with autism spectrum disorder, as measured by electrodermal activity

Emily B. Prince; Elizabeth S. Kim; Carla A. Wall; Eugenia Gisin; Matthew S. Goodwin; Elizabeth Schoen Simmons; Katarzyna Chawarska; Frederick Shic

Electrodermal activity was examined as a measure of physiological arousal within a naturalistic play context in 2-year-old toddlers (N = 27) with and without autism spectrum disorder. Toddlers with autism spectrum disorder were found to have greater increases in skin conductance level than their typical peers in response to administered play activities. In the autism spectrum disorder group, a positive relationship was observed between restrictive and repetitive behaviors and skin conductance level increases in response to mechanical toys, whereas the opposite pattern was observed for passive toys. This preliminary study is the first to examine electrodermal activity levels in toddlers with autism spectrum disorder in play-based, naturalistic settings, and it highlights the potential for electrodermal activity as a measure of individual variability within autism spectrum disorder and early development.


Eye Tracking Research & Applications | 2014

A smooth pursuit calibration technique

Feridun M. Celebi; Elizabeth S. Kim; Quan Wang; Carla A. Wall; Frederick Shic

Many different eye-tracking calibration techniques have been developed [e.g. see Talmi and Liu 1999; Zhu and Ji 2007]. A community standard is a sparse 9-point calibration that relies on the sequential presentation of known scene targets. However, sequentially fixating discrete points has been described as tedious, dull, and tiring for the eye [Bulling, Gellersen, Pfeuffer, Turner and Vidal 2013].
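For context, a regression fit over the nine fixation targets is one common way the standard calibration described above is implemented. The sketch below is an assumed, generic version (second-order polynomial features, synthetic raw-gaze data), not the method of this paper, which instead proposes smooth pursuit as an alternative to discrete fixations:

```python
import numpy as np

# Hypothetical 9-point calibration: targets on a 3x3 grid in normalized
# screen coordinates, with raw gaze features recorded at each fixation.
targets = np.array([(x, y) for y in (0.1, 0.5, 0.9)
                           for x in (0.1, 0.5, 0.9)])

def design_matrix(raw):
    """Second-order polynomial terms, a common choice for regression-based
    calibration: [1, u, v, u*v, u^2, v^2]."""
    u, v = raw[:, 0], raw[:, 1]
    return np.column_stack([np.ones_like(u), u, v, u * v, u**2, v**2])

def fit_calibration(raw_samples, targets):
    """Least-squares fit of coefficients mapping raw gaze features (u, v)
    to screen coordinates (x, y)."""
    A = design_matrix(raw_samples)
    coeffs, *_ = np.linalg.lstsq(A, targets, rcond=None)
    return coeffs  # shape (6, 2): one column of coefficients per screen axis

def apply_calibration(coeffs, raw_samples):
    return design_matrix(raw_samples) @ coeffs

# Toy data: raw features are a skewed, noisy version of the true targets.
rng = np.random.default_rng(0)
raw = targets * [1.2, 0.8] + [0.05, -0.03] + rng.normal(0, 0.005, targets.shape)
coeffs = fit_calibration(raw, targets)
print(np.abs(apply_calibration(coeffs, raw) - targets).max())  # small residual
```

The tedium the abstract cites comes from collecting the nine fixation samples, not from the fit itself; a pursuit-based method replaces those discrete fixations with samples gathered along a moving target's path.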


International Conference on Development and Learning | 2008

What prosody tells infants to believe

Elizabeth S. Kim; Kevin Gold; Brian Scassellati

We examined whether evidence for prosodic signals about shared belief can be quantitatively found within the acoustic signal of infant-directed speech. Two transcripts of infant-directed speech for infants aged 1;4 and 1;6 were labeled with distinct speaker intents to modify shared beliefs, based on Pierrehumbert and Hirschberg's theory of the meaning of prosody [1]. Acoustic predictions were made from intent labels first within a simple single-tone model that reflected only whether the speaker intended to add a word's information to the discourse (high tone, H*) or not (low tone, L*). We also predicted pitch within a more complicated five-category model that added intents to suggest a word as one of several possible alternatives (L*+H), a contrasting alternative (L+H*), or something about which the listener should make an inference (H*+L). The acoustic signal was then manually segmented and automatically classified based solely on whether the pitches at the beginning, end, and peak-intensity points of stressed syllables in salient words were closer to the utterance's pitch minimum or maximum on a log scale. Evidence supporting our intent-based pitch predictions was found for L*, H*, and L*+H accents, but not for L+H* or H*+L. No evidence was found to support the hypothesis that infant-directed speech simplifies two-tone into single-tone pitch accents.
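The binary pitch decision described above (closer to the utterance's minimum or maximum on a log scale) reduces to a comparison against the log-scale midpoint. The following sketch is a hypothetical reconstruction of that single step; the function name and the sample pitch values are invented for illustration:

```python
import math

def classify_pitch_points(pitches_hz, utterance_min_hz, utterance_max_hz):
    """Label each sampled pitch point H (high) or L (low) by whether it lies
    closer to the utterance's pitch maximum or minimum on a log scale,
    mirroring the binary tone decision described in the abstract."""
    midpoint = (math.log(utterance_min_hz) + math.log(utterance_max_hz)) / 2.0
    return ['H' if math.log(p) > midpoint else 'L' for p in pitches_hz]

# Invented example: pitch (Hz) at the beginning, peak-intensity, and end
# points of a stressed syllable, in an utterance spanning 120-320 Hz.
print(classify_pitch_points([150.0, 300.0, 140.0], 120.0, 320.0))
# -> ['L', 'H', 'L']: a low-high-low pattern across the three sample points
```

Working on a log scale reflects the roughly logarithmic character of pitch perception, so the midpoint falls at the geometric rather than arithmetic mean of the utterance's pitch range.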
