Naomi T. Fitter
University of Pennsylvania
Publications
Featured research published by Naomi T. Fitter.
IEEE International Conference on Robotics and Automation (ICRA) | 2013
Vivian Chu; Ian McMahon; Lorenzo Riano; Craig G. McDonald; Qin He; Jorge Martinez Perez-Tejada; Michael Arrigo; Naomi T. Fitter; John C. Nappo; Trevor Darrell; Katherine J. Kuchenbecker
Delivering on the promise of real-world robotics will require robots that can communicate with humans through natural language by learning new words and concepts through their daily experiences. Our research strives to create a robot that can learn the meaning of haptic adjectives by directly touching objects. By equipping the PR2 humanoid robot with state-of-the-art biomimetic tactile sensors that measure temperature, pressure, and fingertip deformations, we created a platform uniquely capable of feeling the physical properties of everyday objects. The robot used five exploratory procedures to touch 51 objects that were annotated by human participants with 34 binary adjective labels. We present both static and dynamic learning methods to discover the meaning of these adjectives from the labeled objects, achieving average F1 scores of 0.57 and 0.79 on a set of eight previously unfelt items.
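As a concrete illustration of the per-adjective classification and F1 evaluation described above, here is a minimal Python sketch using scikit-learn. The feature vectors, labels, and dimensions are synthetic stand-ins, not the paper's actual tactile data or learning methods.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.metrics import f1_score

    rng = np.random.default_rng(0)
    # Hypothetical tactile features: one row per object, e.g. summary
    # statistics of pressure, temperature, and deformation signals.
    X_train = rng.normal(size=(43, 32))    # 43 training objects, 32 features
    X_test = rng.normal(size=(8, 32))      # 8 previously unfelt test objects
    y_train = rng.integers(0, 2, size=43)  # binary label for one adjective
    y_test = rng.integers(0, 2, size=8)

    clf = SVC(kernel="rbf").fit(X_train, y_train)
    print("F1 on unfelt objects:", f1_score(y_test, clf.predict(X_test)))

Repeating this fit-and-score step once per adjective and averaging the resulting F1 values mirrors the evaluation protocol reported in the abstract.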
IEEE Transactions on Haptics | 2017
Rebecca P. Khurshid; Naomi T. Fitter; Elizabeth A. Fedalei; Katherine J. Kuchenbecker
The multifaceted human sense of touch is fundamental to direct manipulation, but technical challenges prevent most teleoperation systems from providing even a single modality of haptic feedback, such as force feedback. This paper postulates that ungrounded grip-force, fingertip-contact-and-pressure, and high-frequency acceleration haptic feedback will improve human performance of a teleoperated pick-and-place task. Thirty subjects used a teleoperation system consisting of a haptic device worn on the subject's right hand, a remote PR2 humanoid robot, and a Vicon motion capture system to move an object to a target location. Each subject completed the pick-and-place task 10 times under each of the eight haptic conditions obtained by turning on and off grip-force feedback, contact feedback, and acceleration feedback. To understand how object stiffness affects the utility of the feedback, half of the subjects completed the task with a flexible plastic cup, and the others used a rigid plastic block. The results indicate that the addition of grip-force feedback with gain switching enabled subjects to hold both the flexible and rigid objects more stably, and it also allowed subjects who manipulated the rigid block to hold the object more delicately and to better control the motion of the remote robot's hand. Contact feedback improved the ability of subjects who manipulated the flexible cup to move the robot's arm in space, but it deteriorated this ability for subjects who manipulated the rigid block. Contact feedback also caused subjects to hold the flexible cup less stably, but the rigid block more securely. Finally, adding acceleration feedback slightly improved the subjects' performance when setting the object down, as originally hypothesized; interestingly, it also allowed subjects to feel vibrations produced by the robot's motion, causing them to be more careful when completing the task. This study supports the utility of grip-force and high-frequency acceleration feedback in teleoperation systems and motivates further improvements to fingertip-contact-and-pressure feedback.
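The abstract does not specify how the gain switching works, so the following Python sketch is only a guess at the general idea: map the remote gripper's sensed grip force to the wearable device through a piecewise gain, amplifying small forces so delicate contact remains perceptible. Every threshold and gain value here is hypothetical.

    def grip_force_feedback(sensed_force, threshold=2.0, low_gain=0.5, high_gain=1.5):
        # Gain switching (values hypothetical): amplify small grip forces so
        # light contact is perceptible; attenuate large forces so the
        # displayed force stays within the wearable device's output range.
        gain = high_gain if sensed_force < threshold else low_gain
        return gain * sensed_force

    for force in (0.5, 1.5, 3.0, 6.0):
        print(f"{force} N sensed -> {grip_force_feedback(force):.2f} N displayed")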
IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) | 2016
Naomi T. Fitter; Katherine J. Kuchenbecker
Human friends and teammates commonly connect through handshakes, high fives, fist bumps, and other forms of hand-to-hand contact. As robots enter everyday human spaces, they will have the opportunity to join in such physical interactions, but few current robots are intended to touch humans. To begin investigating this topic, we sought to discover precisely how robots should move and react in hand-clapping games, which we define as interactions involving repeated hand-to-hand contacts between two agents. We conducted an experiment to observe seven pairs of people performing a variety of hand-clapping activities. Their recorded hand movements were accurately described by sinusoids that have a constant participant-specific maximum velocity across clapping tempos. Behaviorally, people struggled most with hand clapping at fast tempos, but they also smiled and laughed most often during fast trials. We used the human-human experiment findings to select, modify, and program a Rethink Robotics Baxter Research Robot to clap hands with a human partner. Preliminary tests have demonstrated that this robot can move like our participants and reliably detect human hand impacts through its wrist-mounted accelerometers, thereby exhibiting promise as a safe and engaging interaction partner.
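The constant-maximum-velocity sinusoid model described above pins down how clapping amplitude must scale with tempo: for x(t) = A sin(2πft), the peak hand speed is 2πfA, so holding that speed fixed means the amplitude shrinks as 1/f. A small Python illustration, using a hypothetical participant-specific peak speed of 1.2 m/s:

    import math

    def clap_amplitude(v_max, tempo_hz):
        # For x(t) = A*sin(2*pi*f*t), peak speed is 2*pi*f*A; keeping it
        # fixed at v_max across tempos makes the amplitude scale as 1/f.
        return v_max / (2 * math.pi * tempo_hz)

    for f in (1.0, 2.0, 3.0):  # claps per second
        print(f"{f} Hz -> amplitude {clap_amplitude(1.2, f):.3f} m")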
International Conference on Social Robotics (ICSR) | 2016
Naomi T. Fitter; Dylan T. Hawkes; Katherine J. Kuchenbecker
Future robots for everyday human environments will need to be capable of physical collaboration and play. We previously designed a robotic system for constant-tempo human-robot hand-clapping games. Since rhythmic timing is crucial in such interactions, we sought to endow our robot with the ability to speed up and slow down to match the human partner’s changing tempo. We tackled this goal by observing human-human entrainment, modeling human synchronization behaviors, and piloting three adaptive tempo behaviors on a Rethink Robotics Baxter Research Robot. The pilot study indicated that a fading memory difference learning timing model may perform best in future human-robot gameplay. We will use the findings of this study to improve our hand-clapping robotic system.
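The abstract names a fading memory difference learning timing model; a common form of such a model updates the robot's period estimate by a fraction of each newly observed timing difference, so older observations decay geometrically. The Python sketch below follows that generic form with a hypothetical gain; it is not taken from the paper.

    def update_period(est_period, observed_interval, alpha=0.3):
        # Fading-memory update: move the estimate toward each observed
        # inter-clap interval; alpha (hypothetical) trades responsiveness
        # against stability.
        return est_period + alpha * (observed_interval - est_period)

    period = 0.50  # initial estimate, seconds per clap
    for interval in (0.48, 0.45, 0.44, 0.43):  # partner speeding up
        period = update_period(period, interval)
        print(f"updated period estimate: {period:.3f} s")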
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) | 2016
Naomi T. Fitter; Katherine J. Kuchenbecker
All over the world, people find joy and amusement in playing hand-clapping games such as “Pat-a-cake” and “Slide.” Thus, as robots enter everyday human spaces and work together with people, we see potential for them to entertain, engage, and assist humans through cooperative clapping games. This paper explores how data recorded from a pair of commonly available inertial measurement units (IMUs) worn on a human's hands can contribute to the teaching of a hand-clapping robot. We identified representative hand-clapping activities, considered approaches to classify games, and conducted a study to record hand-clapping motion data. Analysis of data from fifteen participants indicates that support vector machines and Markov chain analysis can correctly classify 95.5% of the demonstrated hand-clapping motions (from ten discrete actions) and 92.3% of the hand-clapping game demonstrations recorded in the study. These results were calculated by withholding each participant's entire dataset for testing, so they should represent general system behavior for new users. Overall, this research lays the groundwork for a simple and efficient method that people could use to demonstrate hand-clapping games to robots.
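The evaluation protocol described above, withholding each participant's entire dataset in turn, corresponds to leave-one-subject-out cross-validation. A minimal scikit-learn sketch with synthetic stand-ins for the IMU features and labels (all dimensions hypothetical):

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

    rng = np.random.default_rng(1)
    X = rng.normal(size=(300, 24))         # one IMU feature vector per motion
    y = rng.integers(0, 10, size=300)      # 10 discrete hand-clapping actions
    groups = np.repeat(np.arange(15), 20)  # 15 participants, 20 motions each

    # Each fold withholds one participant's entire dataset for testing.
    scores = cross_val_score(SVC(), X, y, groups=groups, cv=LeaveOneGroupOut())
    print("mean leave-one-subject-out accuracy:", scores.mean())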
ACM/IEEE International Conference on Human-Robot Interaction (HRI) | 2014
Naomi T. Fitter; Katherine J. Kuchenbecker
Creating a robot that can teach humans simple interactive tasks such as high-fiving requires research at the intersection of physical human-robot interaction (PHRI) and socially assistive robotics. This paper shows how observation of natural human-human interaction can improve the design of requirements for social-physical robots and form a framework for autonomous execution of interactive physical tasks. Eleven pairs of human subjects were recruited to perform a set of high-fiving games; a magnetic motion tracker and an accelerometer were mounted to each person's hand for the duration of the experiment, and each subject completed several questionnaires about the experience. The results reveal valuable clues about the generally positive feelings of the participants and the movement of their hands during play. We discuss how we plan to use these results to create a robot that can teach humans similar high-fiving games.
International Conference on Social Robotics (ICSR) | 2016
Naomi T. Fitter; Katherine J. Kuchenbecker
Facial expressions of both humans and robots are known to communicate important social cues to human observers. Nevertheless, faces for use on the flat panel display screens of physical multi-degree-of-freedom robots have not been exhaustively studied. While surveying owners of the Rethink Robotics Baxter Research Robot to establish their interest, we designed a set of 49 Baxter faces, including seven colors (red, orange, yellow, green, blue, purple, and gray) and seven expressions (afraid, angry, disgusted, happy, neutral, sad, and surprised). Online study participants (N = 568) drawn equally from two countries (US and India) then rated photographs of a physical Baxter robot displaying randomized subsets of the faces. Face color, facial expression, and onlooker country of origin all significantly affected the perceived pleasantness and energeticness of the robot, as well as the onlooker’s feelings of safety and pleasedness, with facial expression causing the largest effects. The designed faces are available to researchers online.
ACM/IEEE International Conference on Human-Robot Interaction (HRI) | 2018
Elizabeth Cha; Naomi T. Fitter; Yunkyung Kim; Terrence Fong; Maja J. Matarić
Auditory cues facilitate situational awareness by enabling humans to infer what is happening in the nearby environment. Unlike humans, many robots do not continuously produce perceivable state-expressive sounds. In this work, we propose the use of iconic auditory signals that mimic the sounds produced by a robot's operations. In contrast to artificial sounds (e.g., beeps and whistles), these signals are primarily functional, providing information about the robot's actions and state. We analyze the effects of two variations of robot sound, tonal and broadband, on auditory localization during a human-robot collaboration task. Results from 24 participants show that both signals significantly improve auditory localization, but the broadband variation is preferred by participants. We then present a computational formulation for auditory signaling and apply it to the problem of auditory localization, using a human-subjects data collection with 18 participants to learn optimal signaling policies.
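The paper's computational formulation is not detailed in the abstract, so the Python sketch below is only one plausible reading of a signaling policy: for each robot state, choose the signal minimizing a weighted sum of expected localization error and listener annoyance. All cost values and the weight are hypothetical placeholders; the paper learns its policies from human-subjects data.

    def choose_signal(state, signals, loc_error, annoyance, weight=0.5):
        # Minimize expected localization error plus weighted annoyance cost.
        return min(signals, key=lambda s: loc_error[(state, s)] + weight * annoyance[s])

    signals = ["tonal", "broadband"]
    loc_error = {("moving", "tonal"): 0.40, ("moving", "broadband"): 0.25,
                 ("idle", "tonal"): 0.30, ("idle", "broadband"): 0.20}
    annoyance = {"tonal": 0.20, "broadband": 0.35}
    print(choose_signal("moving", signals, loc_error, annoyance))  # -> broadband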
ACM/IEEE International Conference on Human-Robot Interaction (HRI) | 2018
Naomi T. Fitter; Yasmin Chowdhury; Elizabeth Cha; Leila Takayama; Maja J. Matarić
Telepresence robots hold the potential to allow absent students to remain physically embodied and socially connected in the classroom. In this work, we investigate the effects of telepresence robot personalization on K-12 students' perceptions of the robot, perceptions of themselves, and feelings of self-presence. We conducted a between-subjects, 2-condition user study (N=24) on robot personalization. In this study, 9- to 13-year-old participants remotely completed an educational exercise using a telepresence robot. Lessons learned from this study will inform our continued work on using remote presence robots to preserve the educational and social experiences of students during extended absences from school.
Frontiers in Robotics and AI | 2018
Naomi T. Fitter; Katherine J. Kuchenbecker
Colleagues often shake hands in greeting, friends connect through high fives, and children around the world rejoice in hand-clapping games. As robots become more common in everyday human life, they will have the opportunity to join in these social-physical interactions, but few current robots are intended to touch people in friendly ways. This article describes how we enabled a Baxter Research Robot to both teach and learn bimanual hand-clapping games with a human partner. Our system monitors the user's motions via a pair of inertial measurement units (IMUs) worn on the wrists. We recorded a labeled library of 10 common hand-clapping movements from 10 participants; this dataset was used to train an SVM classifier to automatically identify hand-clapping motions from previously unseen participants with a test-set classification accuracy of 97.0%. Baxter uses these sensors and this classifier to quickly identify the motions of its human gameplay partner, so that it can join in hand-clapping games. This system was evaluated by N = 24 naïve users in an experiment that involved learning sequences of eight motions from Baxter, teaching Baxter eight-motion game patterns, and completing a free interaction period. The motion classification accuracy in this less structured setting was 85.9%, primarily due to unexpected variations in motion timing. The quantitative task performance results and qualitative participant survey responses showed that learning games from Baxter was significantly easier than teaching games to Baxter, and that the teaching role caused users to consider more teamwork aspects of the gameplay. Over the course of the experiment, people felt more understood by Baxter and became more willing to follow the example of the robot. Users felt uniformly safe interacting with Baxter, and they expressed positive opinions of Baxter and reported having fun interacting with the robot. Taken together, the results indicate that this robot achieved credible social-physical interaction with humans and that its ability to both lead and follow systematically changed the human partner's experience.
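Because Baxter must identify its partner's motions quickly during gameplay, the classifier presumably runs on short windows of streaming IMU data. The Python sketch below shows that windowed pattern in miniature with synthetic data; the window size, channel count, and training set are all hypothetical stand-ins.

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(2)
    # Stand-in training set: flattened IMU windows (20 samples x 6 channels)
    # labeled with 10 motion classes, mimicking the recorded motion library.
    clf = SVC().fit(rng.normal(size=(100, 120)), rng.integers(0, 10, size=100))

    def classify_window(imu_window, clf):
        # Flatten one (samples x channels) buffer of wrist-IMU data into a
        # single feature vector and return the predicted motion class.
        return int(clf.predict(imu_window.reshape(1, -1))[0])

    window = rng.normal(size=(20, 6))  # most recent 20 samples, 6 channels
    print("predicted motion:", classify_window(window, clf))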