
Publication


Featured research published by Stephanie Rosenthal.


Journal of Intelligent and Robotic Systems | 2012

Is Someone in this Office Available to Help Me?

Stephanie Rosenthal; Manuela M. Veloso; Anind K. Dey

Robots are increasingly autonomous in our environments, but they still must overcome limited sensing, reasoning, and actuating capabilities while completing services for humans. While some work has focused on robots that proactively request help from humans to reduce their limitations, that work often assumes that humans are supervising the robot and are always available to help. In this work, we instead investigate the feasibility of asking for help from humans in the environment who benefit from the robot's services. Unlike other human helpers who constantly monitor a robot's progress, humans in the environment are not supervisors, and a robot must proactively navigate to them to receive help. We contribute a study showing that several of our environment's occupants are willing to help our robot but, as expected, have constraints that limit their availability due to their own work schedules. Interestingly, the study further shows that an available human is not always in close proximity to the robot. We present an extended model that includes the availability of humans in the environment, and we demonstrate how a navigation planner can incorporate this information to plan paths that increase the likelihood that the robot finds an available helper when it needs one. Finally, we discuss further opportunities for the robot to adapt to and learn from the occupants over time.
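The availability-aware planning idea lends itself to a small illustration. Below is a minimal sketch, not the authors' planner: a Dijkstra search over a hypothetical building graph in which edges leading toward likely-available occupants are discounted, so the planned route detours past probable helpers. All node names, distances, and availability probabilities are invented.

```python
import heapq

# Hypothetical map: nodes are hallway waypoints and office doors.
# graph[u] = [(v, meters), ...]; availability[v] = P(occupant at v can help now).
graph = {
    "lab":      [("hall1", 5.0)],
    "hall1":    [("lab", 5.0), ("office_a", 2.0), ("hall2", 6.0)],
    "office_a": [("hall1", 2.0), ("hall2", 3.0)],
    "hall2":    [("hall1", 6.0), ("office_a", 3.0), ("goal", 4.0)],
    "goal":     [("hall2", 4.0)],
}
availability = {"lab": 0.0, "hall1": 0.0, "office_a": 0.8, "hall2": 0.0, "goal": 0.0}

def plan(start, goal, lam=1.5):
    """Dijkstra where edges into likely-available offices are discounted."""
    # lam must not exceed the shortest edge so all weights stay non-negative.
    assert lam <= min(w for edges in graph.values() for _, w in edges)
    dist, prev = {start: 0.0}, {}
    frontier = [(0.0, start)]
    while frontier:
        d, u = heapq.heappop(frontier)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, meters in graph[u]:
            cost = meters - lam * availability[v]  # reward available helpers
            if d + cost < dist.get(v, float("inf")):
                dist[v] = d + cost
                prev[v] = u
                heapq.heappush(frontier, (dist[v], v))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

print(plan("lab", "goal"))  # detours past office_a when lam is large enough
```

Keeping the discount weight `lam` below the shortest edge length keeps every edge cost non-negative, so plain Dijkstra remains valid.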


Intelligent Robots and Systems | 2012

CoBots: Collaborative robots servicing multi-floor buildings

Manuela M. Veloso; Joydeep Biswas; Brian Coltin; Stephanie Rosenthal; Thomas Kollar; Çetin Meriçli; Mehdi Samadi; Susana Brandão; Rodrigo Ventura

In this video we briefly illustrate the progress and contributions made with our mobile, indoor, service robots CoBots (Collaborative Robots), since their creation in 2009. Many researchers, present authors included, aim for autonomous mobile robots that robustly perform service tasks for humans in our indoor environments. The efforts towards this goal have been numerous and successful, and we build upon them. However, there are clearly many research challenges remaining until we can experience intelligent mobile robots that are fully functional and capable in our human environments.


International Journal of Social Robotics | 2012

Acquiring Accurate Human Responses to Robots’ Questions

Stephanie Rosenthal; Manuela M. Veloso; Anind K. Dey

In task-oriented robot domains, a human is often designated as a supervisor to monitor the robot and correct its inferences about its state during execution. However, supervision is expensive in terms of human effort. Instead, we are interested in robots asking non-supervisors in the environment for state inference help. The challenge with asking non-supervisors for help is that they may not always understand the robot's state or question and may respond inaccurately as a result. We identify four different types of state information that a robot can include to ground non-supervisors when it requests help: the context around the robot, the inferred state prediction, the prediction uncertainty, and feedback about the sensors used for predicting the robot's state. We contribute two Wizard-of-Oz user studies to test which combination of this state information increases the accuracy of non-supervisors' responses. In the first study, we consider a block-construction task and use a toy robot to study questions regarding shape recognition. In the second study, we use our real mobile robot to study questions regarding localization. In both studies, we identify the same combination of information that increases the accuracy of responses the most. We validate that our combination results in more accurate responses than a combination that a set of HRI experts predicted would be best. Finally, we discuss the appropriateness of the best combination we found for other task-driven robots.
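As a concrete illustration of the four types of grounding information, here is a hypothetical sketch of how a robot might compose them into a single question. The field names, template wording, and example values are invented, not taken from the studies.

```python
from dataclasses import dataclass

@dataclass
class StateQuestion:
    """The four grounding-information types named in the paper;
    the field names and phrasing here are illustrative only."""
    context: str          # what is around the robot
    prediction: str       # the robot's inferred state
    uncertainty: float    # confidence in that inference, 0..1
    sensor_feedback: str  # what the relevant sensors reported

    def render(self) -> str:
        return (
            f"I am {self.context}. "
            f"My sensors report {self.sensor_feedback}, "
            f"so I believe {self.prediction} "
            f"(I am {self.uncertainty:.0%} confident). "
            f"Is that correct?"
        )

q = StateQuestion(
    context="in the hallway outside office 7002",
    prediction="I am facing the kitchen door",
    uncertainty=0.62,
    sensor_feedback="a wall 1.2 m ahead and an open doorway to my left",
)
print(q.render())
```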


Robot and Human Interactive Communication | 2016

Enhancing human understanding of a mobile robot's state and actions using expressive lights

Kim Baraka; Stephanie Rosenthal; Manuela M. Veloso

In order to be successfully integrated into human-populated environments, mobile robots need to express relevant information about their state to the outside world. In particular, animated lights are a promising way to express hidden robot state information such that it is visible at a distance. In this work, we present an online study to evaluate the effect of robot communication through expressive lights on people's understanding of the robot's state and actions. In our study, we use the CoBot mobile service robot with our light interface, designed to express relevant robot information to humans. We evaluate three designed light animations on three corresponding scenarios each, for a total of nine scenarios. Our results suggest that expressive lights can play a significant role in helping people accurately hypothesize about a mobile robot's state and actions from afar when minimal contextual clues are present. We conclude that lights could be generally used as an effective non-verbal communication modality for mobile robots in the absence of, or as a complement to, other modalities.
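A light interface of this kind implies some mapping from hidden robot states to animations. The following is a hypothetical sketch of such a mapping; the states loosely mirror scenarios one might test, and all patterns, colors, and timings are invented rather than the paper's actual designs.

```python
from enum import Enum, auto

class RobotState(Enum):
    WAITING_FOR_ELEVATOR = auto()
    BLOCKED_BY_OBSTACLE = auto()
    TASK_PROGRESS = auto()

# Hypothetical animation table: (pattern, color, period in seconds).
LIGHT_ANIMATIONS = {
    RobotState.WAITING_FOR_ELEVATOR: ("pulse", "blue", 1.0),
    RobotState.BLOCKED_BY_OBSTACLE:  ("blink", "red", 0.3),
    RobotState.TASK_PROGRESS:        ("fill", "green", None),  # progress bar
}

def animate(state: RobotState, progress: float = 0.0) -> str:
    pattern, color, period = LIGHT_ANIMATIONS[state]
    if pattern == "fill":
        return f"{color} strip filled to {progress:.0%}"
    return f"{color} {pattern} every {period}s"

print(animate(RobotState.BLOCKED_BY_OBSTACLE))
print(animate(RobotState.TASK_PROGRESS, progress=0.4))
```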


Robot and Human Interactive Communication | 2016

Dynamic generation and refinement of robot verbalization

Vittorio Perera; Sai P. Selveraj; Stephanie Rosenthal; Manuela M. Veloso

With a growing number of robots performing autonomously without human intervention, it is difficult to understand what the robots experience along their routes during execution without looking at execution logs. Rather than looking through logs, our goal is for robots to respond to queries in natural language about what they experience and what routes they have chosen. We propose verbalization as the process of converting route experiences into natural language, and highlight the importance of varying verbalizations based on user preferences. We present our verbalization space, which represents the different dimensions along which verbalizations can be varied, and our algorithm for automatically generating them on our CoBot robot. We then present our study of how users can request different verbalizations in dialog. Using the study data, we learn a language model to map user dialog to the verbalization space. Finally, we demonstrate the use of the learned model within a dialog system so that any user can request information about CoBot's route experience at varying levels of detail.
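To make the idea of a verbalization space concrete, here is a small sketch assuming two dimensions of the kind the CoBot verbalization work varies (abstraction and specificity); the levels, route format, and generated phrasings are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class VerbalizationPoint:
    """A point in a hypothetical verbalization space."""
    abstraction: int  # 1 = raw coordinates ... 2+ = landmark names
    specificity: int  # 1 = summary only ... 3 = full narrative

def verbalize(route, point: VerbalizationPoint) -> str:
    if point.specificity == 1:
        return f"I completed a route with {len(route)} segments."
    steps = []
    for segment in route:
        if point.abstraction >= 2:
            steps.append(f"went past {segment['landmark']}")
        else:
            steps.append(f"moved from {segment['start']} to {segment['end']}")
    return "I " + ", then ".join(steps) + "."

route = [
    {"start": (0, 0), "end": (10, 0), "landmark": "the kitchen"},
    {"start": (10, 0), "end": (10, 8), "landmark": "office 7002"},
]
print(verbalize(route, VerbalizationPoint(abstraction=2, specificity=3)))
print(verbalize(route, VerbalizationPoint(abstraction=1, specificity=1)))
```

A learned language model would then map a user request such as "give me the short version" to a point like `VerbalizationPoint(abstraction=2, specificity=1)`.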


Human-Robot Interaction | 2013

Execution memory for grounding and coordination

Stephanie Rosenthal; Sarjoun Skaff; Manuela M. Veloso; Dan Bohus; Eric Horvitz

As robots are introduced into human environments for long periods of time, human owners and collaborators will expect them to remember shared events that occur during execution. Beyond the naturalness of having memories about recent and longer-term engagements with people, such execution memories can be important in tasks that persist over time by allowing robots to ground their dialog and to refer efficiently to previous events. In this work, we define execution memory as the capability of saving interaction event information and recalling it for later use. We divide the problem into four parts: salience filtering of sensor evidence and saving to short-term memory; archiving from short-term to long-term memory; caching from long-term back to short-term memory; and recalling memories for use in state inference and policy execution. We then provide examples of how execution memory can be used to enhance user experience with robots.
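The four parts map naturally onto a small data structure. The following is a minimal sketch that instantiates the four capabilities named in the abstract, not the authors' implementation; the thresholds, event format, and tag-based retrieval are invented.

```python
from collections import deque

class ExecutionMemory:
    """Sketch of the four parts: salience filtering into short-term memory,
    archiving to long-term memory, caching back to short-term memory,
    and recall for grounding dialog or state inference."""

    def __init__(self, salience_threshold=0.5, short_term_size=50):
        self.salience_threshold = salience_threshold
        self.short_term = deque(maxlen=short_term_size)
        self.long_term = []

    def observe(self, event: dict):
        # 1) Salience filtering: keep only events worth remembering.
        if event.get("salience", 0.0) >= self.salience_threshold:
            self.short_term.append(event)

    def archive(self):
        # 2) Archiving: move short-term events into long-term storage.
        self.long_term.extend(self.short_term)
        self.short_term.clear()

    def cache(self, topic: str):
        # 3) Caching: pull relevant long-term memories back into short-term.
        for event in self.long_term:
            if topic in event.get("tags", ()):
                self.short_term.append(event)

    def recall(self, topic: str):
        # 4) Recall: retrieve short-term memories matching a topic.
        return [e for e in self.short_term if topic in e.get("tags", ())]

m = ExecutionMemory()
m.observe({"salience": 0.9, "tags": ("elevator",), "text": "waited 3 min"})
m.archive()
m.cache("elevator")
print(m.recall("elevator"))
```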


The International Journal of Robotics Research | 2018

Natural language instructions for human–robot collaborative manipulation

Rosario Scalise; Shen Li; Henny Admoni; Stephanie Rosenthal; Siddhartha S. Srinivasa

This paper presents a dataset of natural language instructions for object reference in manipulation scenarios. It comprises 1582 individual written instructions, which were collected via online crowdsourcing. The dataset is particularly useful for researchers who work in natural language processing, human–robot interaction, and robotic manipulation. In addition to serving as a rich corpus of domain-specific language, it provides a benchmark of image–instruction pairs to be used in system evaluations and uncovers inherent challenges in tabletop object specification. Example Python code is provided for easy access to the data.
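Since the dataset ships with example Python access code, a loader might look roughly like the sketch below; the file name and column names here are assumptions, not the dataset's actual schema.

```python
import csv
from collections import defaultdict

def load_instructions(path="instructions.csv"):
    """Group crowdsourced instructions by the scene image they refer to.
    The CSV path and the 'image_id'/'instruction' columns are hypothetical."""
    pairs = defaultdict(list)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            pairs[row["image_id"]].append(row["instruction"])
    return pairs

pairs = load_instructions()
for image_id, instructions in list(pairs.items())[:3]:
    print(image_id, len(instructions), "instructions")
```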


Frontiers in Education Conference | 2006

Design Collaboration in a Distributed Environment

Stephanie Rosenthal; Susan Finger

In engineering design classes, much of the learning takes place during student team meetings, so much of it is hidden from the instructor. Our long-term goal is to capture team interactions in order to develop a better understanding of collaborative learning in engineering design. This paper reports on a pilot study designed to understand the effects of electronic collaboration tools on the design process of student design teams. In the study, all teams were given the same design problem to solve, but some used pencil and paper, some used a regular whiteboard, and some used a shared digital whiteboard. While our study was only a pilot, it hints that the results of the design process are essentially the same whether students are co-located or distributed. However, we observed that students verbalized their arguments more when separated. The students in the distributed setting spent longer on each design step because they spent more time explaining ideas to students in the other room.


Human-Robot Interaction | 2017

Natural Language Explanations in Human-Collaborative Systems

Rosario Scalise; Stephanie Rosenthal; Siddhartha S. Srinivasa

As autonomous systems and people collaborate more, it is evident that there is an increasing need for systems that are transparent and explicable. Especially in critical decision-making applications such as those employed in autonomous vehicles or in-home robotic eldercare, it is important for robots to be coherent and to articulate what decisions they are making as well as why they arrived at those decisions. While research has suggested the need for explanations for years [7], there is an increasing interest in explaining machine learning and autonomous behavior. There have been contributions in making classification systems more intelligible (e.g., [6, 10]). In robotics, there has been work in enabling agents to explain task failures [3], to generate task plans that are optimized for explanation [13], and to explain why no plan can be found to begin with [2]. Additionally, there have been contributions towards enabling robots to summarize their experiences and generate natural language descriptions of them [11, 9]. These approaches place emphasis on allowing users to specify their preferences with respect to the level of detail they desire. We also argue that natural language communication is an appealing medium for articulating decision-making for several reasons. First, as robots are increasingly being used by average non-expert users rather than computer science experts, we should aim for interaction modalities that such users are most comfortable with, including language. Second, natural language affords us the ability to provide rich descriptions and explanations of often-complex robot behavior. However, the richness of natural language also means it can be challenging to generate "good" explanations. We are interested in developing approaches to generating and evaluating natural language explanations of robot behavior in order to improve human-robot collaboration.


International Conference on Big Data | 2015

Developer toolchains for large-scale analytics: Two case studies

Stephanie Rosenthal; Scott McMillan; Matthew E. Gaston

While big data analytics continue to grow in popularity among companies and organizations, their large-scale analytic implementations are often completed by software developers with little or no formal training in machine learning or data analysis. These developers are skilled at writing code, but they lack the understanding of the data analytics process needed to carry it out efficiently or accurately. They use processes and tools that are often ad hoc and incomplete as they learn by doing. We followed a development team through two analytics development cycles and analyzed their interactions with their data and tools. In this paper, we first describe the tools the developers used and then present concrete opportunities for the big data community to create tools that empower these developers to build more accurate analytics more efficiently.

Collaboration


Dive into Stephanie Rosenthal's collaborations.

Top Co-Authors

Manuela M. Veloso (Carnegie Mellon University)
Anind K. Dey (Carnegie Mellon University)
Joydeep Biswas (Carnegie Mellon University)
Rosario Scalise (Carnegie Mellon University)
Brian Coltin (Carnegie Mellon University)
Shen Li (Carnegie Mellon University)
Rodrigo Ventura (Instituto Superior Técnico)
Susana Brandão (Instituto Superior Técnico)