Jacqueline Kory Westlund
Massachusetts Institute of Technology
Publications
Featured research published by Jacqueline Kory Westlund.
PLOS ONE | 2015
Jacqueline Kory Westlund; Sidney D’Mello; Andrew Olney
Researchers in the cognitive and affective sciences investigate how thoughts and feelings are reflected in the bodily response systems, including peripheral physiology, facial features, and body movements. One specific question along this line of research is how cognition and affect are manifested in the dynamics of general body movements. Progress in this area can be accelerated by inexpensive, non-intrusive, portable, scalable, and easy-to-calibrate movement tracking systems. Towards this end, this paper presents and validates Motion Tracker, a simple yet effective software program that uses established computer vision techniques to estimate the amount a person moves from a video of the person engaged in a task (available for download from http://jakory.com/motion-tracker/). The system works with any commercially available camera and with existing videos, thereby affording inexpensive, non-intrusive, and potentially portable and scalable estimation of body movement. Strong between-subject correlations were obtained between Motion Tracker’s estimates of movement and body movements recorded from the seat (r = .720) and back (r = .695 for participants with higher back movement) of a chair affixed with pressure sensors while completing a 32-minute computerized task (Study 1). Within-subject cross-correlations were also strong for both the seat (r = .606) and back (r = .507). In Study 2, between-subject correlations between Motion Tracker’s movement estimates and movements recorded from an accelerometer worn on the wrist were also strong (rs = .801, .679, and .681) while people performed three brief actions (e.g., waving). Finally, in Study 3 the within-subject cross-correlation was high (r = .855) when Motion Tracker’s estimates were correlated with the movement of a person’s head as tracked with a Kinect while the person was seated at a desk. Best-practice recommendations, limitations, and planned extensions of the system are discussed.
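The core idea described above, estimating per-frame body movement from video with simple computer vision, can be illustrated by frame differencing. The sketch below is a minimal, hypothetical illustration of that general technique, not the published Motion Tracker implementation; the function name and synthetic frames are assumptions for the example.

```python
import numpy as np

def motion_estimates(frames):
    """Return one motion score per frame transition.

    frames: sequence of 2-D numpy arrays (grayscale frames).
    The score for each transition is the summed absolute pixel
    difference between consecutive frames: more displacement
    of the person in view yields a larger score.
    """
    frames = [f.astype(np.float64) for f in frames]
    return np.array([
        np.abs(curr - prev).sum()
        for prev, curr in zip(frames, frames[1:])
    ])

# Synthetic demo: a bright square (standing in for a person) that
# shifts by one pixel per frame against a black background.
frames = []
for t in range(5):
    img = np.zeros((32, 32))
    img[t:t + 8, t:t + 8] = 1.0  # the moving "person"
    frames.append(img)

scores = motion_estimates(frames)          # one score per transition, all > 0
static = motion_estimates([frames[0], frames[0]])  # identical frames score 0
```

Scores from such a pipeline could then be correlated against a reference signal (e.g., seat pressure or wrist accelerometry), which is the validation strategy the abstract reports.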
human robot interaction | 2016
Jacqueline Kory Westlund; Jin Joo Lee; Luke Plummer; Fardad Faridi; Jesse Gray; Matt Berlin; Harald Quintus-Bosz; Robert Hartmann; Mike Hess; Stacy Dyer; Kristopher dos Santos; Sigurdur Orn Adalgeirsson; Goren Gordon; Samuel Spaulding; Marayna Martinez; Madhurima Das; Maryam Archie; Sooyeon Jeong; Cynthia Breazeal
Tega is a new expressive “squash and stretch”, Android-based social robot platform, designed to enable long-term interactions with children.
Frontiers in Human Neuroscience | 2017
Jacqueline Kory Westlund; Sooyeon Jeong; Hae W. Park; Samuel Ronfard; Aradhana Adhikari; Paul L. Harris; David DeSteno; Cynthia Breazeal
Prior research with preschool children has established that dialogic or active book reading is an effective method for expanding young children’s vocabulary. In this exploratory study, we asked whether similar benefits are observed when a robot engages in dialogic reading with preschoolers. Given the established effectiveness of active reading, we also asked whether this effectiveness was critically dependent on the expressive characteristics of the robot. For approximately half the children, the robot’s active reading was expressive; the robot’s voice included a wide range of intonation and emotion (Expressive). For the remaining children, the robot read and conversed with a flat voice, which sounded similar to a classic text-to-speech engine and had little dynamic range (Flat). The robot’s movements were kept constant across conditions. We performed a verification study using Amazon Mechanical Turk (AMT) to confirm that the Expressive robot was viewed as significantly more expressive, more emotional, and less passive than the Flat robot. We invited 45 preschoolers with an average age of 5 years who were either English Language Learners (ELL), bilingual, or native English speakers to engage in the reading task with the robot. The robot narrated a story from a picture book, using active reading techniques and including a set of target vocabulary words in the narration. Children were post-tested on the vocabulary words and were also asked to retell the story to a puppet. A subset of 34 children performed a second story retelling 4–6 weeks later. Children reported liking and learning from the robot a similar amount in the Expressive and Flat conditions. However, as compared to children in the Flat condition, children in the Expressive condition were more concentrated and engaged as indexed by their facial expressions; they emulated the robot’s story more in their story retells; and they told longer stories during their delayed retelling. Furthermore, children who responded to the robot’s active reading questions were more likely to correctly identify the target vocabulary words in the Expressive condition than in the Flat condition. Taken together, these results suggest that children may benefit more from the expressive robot than from the flat robot.
robot and human interactive communication | 2016
Jacqueline Kory Westlund; Marayna Martinez; Maryam Archie; Madhurima Das; Cynthia Breazeal
The presentation or framing of a situation, such as how something or someone is introduced, can influence people's subsequent behavior. In this paper, we describe a study in which we manipulated how a robot was introduced, framing it as either a social agent or as a machine-like being. We asked whether framing the robot in these ways would influence young children's social behavior while playing a ten-minute game with the robot. We coded children's behavior during the robot interaction, including their speech, gaze, and various courteous, prosocial actions. We found several subtle differences in children's gaze behavior between conditions that may reflect children's perceptions of the robot's status as more, or less, of a social actor. In addition, more parents of children in the Social condition reported that their children acted less shy and more talkative with the robot than parents of children in the Machine condition. This study gives us insight into how the interaction context can influence how children think about and respond to social robots.
human robot interaction | 2016
Jacqueline Kory Westlund; Cynthia Breazeal
Teleoperation or Wizard-of-Oz control of social robots is commonly used in human-robot interaction (HRI) research. This is especially true for child-robot interactions, where technologies like speech recognition (which can help create autonomous interactions for adults) work less well. We propose to study young children's understanding of teleoperation, how they conceptualize social robots in a learning context, and how this affects their interactions. Children will be told about the teleoperator's presence either before or after an interaction with a social robot. We will assess children's behavior, learning, and emotions before, during, and after the interaction. Our goal is to learn whether children's knowledge about the teleoperator matters (e.g., for their trust and for learning outcomes), and if so, how and when it matters most (e.g., at what age).
human robot interaction | 2016
Jacqueline Kory Westlund; Marayna Martinez; Maryam Archie; Madhurima Das; Cynthia Breazeal
Framing or priming a situation can subtly influence how a person reacts to or thinks about the situation. In this paper, we describe a recent study and some preliminary results in which the framing of a robot is manipulated such that it is presented as a social agent or as a machine-like entity. We ask whether framing the robot in these ways influences young children's social behavior during an interaction with the robot, independent of any changes in the robot itself. Following the framing manipulation, children play a fifteen-minute game with the robot. Their behavior, such as the amount of conversation, mimicry of the robot, and various courteous, prosocial actions, will be coded and compared across conditions.
interaction design and children | 2018
Jacqueline Kory Westlund; Hae Won Park; Randi Williams; Cynthia Breazeal
Social robots are increasingly being developed for long-term interactions with children in domains such as healthcare, education, therapy, and entertainment. As such, we need to deeply understand how children's relationships with robots develop through time. However, there are few validated assessments for measuring young children's long-term relationships. In this paper, we present a pilot test of four assessments that we have adapted or created for use in this context with children aged 5-6: the Inclusion of Other in Self task, the Social-Relational Interview, the Narrative Description, and the Self-disclosure Task. We show that children can appropriately respond to these assessments with reasonably high internal reliability, and that the proposed assessments are able to capture changes in the child-robot relationship over a long-term interaction. Furthermore, we discuss gender and population differences in children's responses.
human robot interaction | 2015
Jacqueline Kory Westlund
The language skills of young children can predict their academic success in later schooling. We may be able to help more children succeed by helping them improve their early language skills: a prime time for intervention is during preschool. Furthermore, because language lives in a social, interactive, and dialogic context, ideal interventions would not only teach vocabulary, but would also engage children as active participants in meaningful dialogues. Social robots could potentially have great impact in this area. They merge the benefits of using technology -- such as accessibility, customization and easy addition of new content, and student-paced, adaptive software -- with the benefits of embodied, social agents -- such as sharing physical spaces with us, communicating in natural ways, and leveraging social presence and social cues. To this end, we developed a robotic learning/teaching companion to support children's early language development. We performed a microgenetic field study in which we took this robot to two Boston-area preschools for two months. We asked two main questions: Could a robot companion support children's long-term oral language development through play? How might children build a relationship with and construe the robot over time?
PLOS ONE | 2015
Jacqueline Kory Westlund; Sidney D’Mello; Andrew Olney
Table 1 was erroneously published twice, once as Table 1 and once as Table 2. All references to Table 2 refer to Table 1. The following information is missing from the Funding section: This research was supported by the National Science Foundation (NSF) (ITR 0325428, HCC 0834847, DRL 1235958, and Graduate Research Fellowship under Grant No. 1122374). Any opinions, findings, conclusions, or recommendations expressed are those of the author and do not reflect the views of the NSF. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
national conference on artificial intelligence | 2016
Goren Gordon; Samuel Spaulding; Jacqueline Kory Westlund; Jin Joo Lee; Luke Plummer; Marayna Martinez; Madhurima Das; Cynthia Breazeal