Publication


Featured research published by Daniel J. Rea.


human-robot interaction | 2012

The Roomba mood ring: an ambient-display robot

Daniel J. Rea; James Everett Young; Pourang Irani

We present a robot augmented with an ambient display that communicates using a multi-color halo. We use this robot in a public café-style setting where people vote on which colors the robot will display: we ask people to select a color which “best represents their mood.” People can vote from a mobile device (e.g., smart phone or laptop) through a web interface. Thus, the robot's display is an abstract aggregate of the current mood of the room. Our research investigates how a robot with an ambient display may integrate into a space. For example, how will the robot alter how people use or perceive the environment, and how will people interact with the robot itself? In this paper we describe our initial prototype, an iRobot Roomba augmented with lights, and highlight the research questions driving our exploration, including our initial study design.
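The abstract describes a simple pipeline: mood-color votes are collected over a web interface and the robot's halo shows an aggregate of them. The paper does not say how votes are combined, so the sketch below is only a hypothetical illustration that averages the voted RGB colors; the function name, default color, and averaging rule are assumptions, not the authors' implementation.

```python
# Illustrative sketch only: the abstract does not specify how votes are
# aggregated, so this simply averages the voted RGB colors per channel.
from statistics import mean

def aggregate_mood(votes):
    """Combine (r, g, b) color votes into a single halo color by averaging channels."""
    if not votes:
        return (255, 255, 255)  # assumed default: neutral white when nobody has voted
    r = round(mean(v[0] for v in votes))
    g = round(mean(v[1] for v in votes))
    b = round(mean(v[2] for v in votes))
    return (r, g, b)

# Example: two calm (blue) votes and one energetic (red) vote
print(aggregate_mood([(0, 0, 255), (0, 0, 255), (255, 0, 0)]))  # -> (85, 0, 170)
```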


software visualization | 2014

ChronoTwigger: A Visual Analytics Tool for Understanding Source and Test Co-evolution

Barrett Ens; Daniel J. Rea; Roiy Shpaner; Hadi Hemmati; James Everett Young; Pourang Irani

Applying visual analytics to large software systems can help users comprehend the wealth of information produced by source repository mining. One concept of interest is the co-evolution of test code with source code, or how source and test files develop together over time. For example, understanding how the testing pace compares to the development pace can help test managers gauge the effectiveness of their testing strategy. A useful concept that has yet to be effectively incorporated into a co-evolution visualization is co-change. Co-change is a quantity that identifies correlations between software artifacts, and we propose using it to organize our visualization in order to enrich the analysis of co-evolution. In this paper, we create, implement, and study an interactive visual analytics tool that displays source and test file changes over time (co-evolution) while grouping files that change together (co-change). Our new technique improves the analyst's ability to infer information about the software development process and its relationship to testing. We discuss the development of our system and the results of a small pilot study with three participants. Our findings show that our visualization can lead to inferences that are not easily made using other techniques alone.
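Co-change, as used above, captures how often artifacts change together. The abstract does not give ChronoTwigger's exact formulation, so the sketch below is only a minimal illustration of the idea, counting pairs of files that appear in the same commit; the helper name and example file paths are hypothetical.

```python
# Minimal sketch of a co-change count: how often two files appear in the same
# commit. This is only meant to convey the idea of grouping files that change
# together, not ChronoTwigger's actual metric.
from collections import Counter
from itertools import combinations

def co_change_counts(commits):
    """commits: iterable of sets of file paths changed together in one commit."""
    counts = Counter()
    for files in commits:
        for a, b in combinations(sorted(files), 2):
            counts[(a, b)] += 1
    return counts

history = [
    {"src/parser.c", "test/test_parser.c"},
    {"src/parser.c", "test/test_parser.c", "src/lexer.c"},
    {"src/lexer.c"},
]
print(co_change_counts(history).most_common(2))
```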


international conference on social robotics | 2015

Check Your Stereotypes at the Door: An Analysis of Gender Typecasts in Social Human-Robot Interaction

Daniel J. Rea; Yan Wang; James Everett Young

In this paper, we provide evidence that suggests prominent gender stereotypes might not be as pronounced in human-robot interaction as may be expected based on previous research. We investigate stereotypes about people interacting with robots, such as men being more engaged, and stereotypes which may be applied to robots that have a perceived gender, such as female robots being perceived as more suitable for household duties. Through a user study, we not only fail to find support for many existing stereotypes, but our analysis suggests that if such effects exist, they may be small. This implies that interface and robot designers need to be wary of which stereotypes they bring to the table, and should understand that even stereotypes with prior experimental evidence may not manifest strongly in social human-robot interaction.


international conference on virtual rehabilitation | 2017

Robotic Mirror Game for movement rehabilitation

Shelly Levy-Tzedek; Sigal Berman; Yehuda Stiefel; Ehud Sharlin; James Everett Young; Daniel J. Rea

We present findings on applying the Mirror Game, a technique borrowed from Improvisational Theater, to human-robot interaction, with the ultimate goal of using this game in a rehabilitative physical therapy setting. In our study, participants played the mirror game with a collocated embodied physical robot, the Kinova Mico robotic arm, or with a video projection of the robot. We expected to find a strong preference for interacting with the embodied robot vs. with its screen projection. While our findings do show a preference for the physical robot condition, the virtual rendition of the robotic arm also received positive feedback from the participants. The results suggest that a virtual environment may be a reasonable substitute for an embodied system under certain conditions. Given the significant costs of using actual robots in therapy, we believe it is important to identify where simulations are sufficient and real robots may not be needed.


human-robot interaction | 2017

Movers, Shakers, and Those Who Stand Still: Visual Attention-grabbing Techniques in Robot Teleoperation

Daniel J. Rea; Stela H. Seo; Neil D. B. Bruce; James Everett Young

We designed and evaluated a series of teleoperation interface techniques that aim to draw operator attention while mitigating negative effects of interruption. Monitoring live teleoperation video feeds, for example to search for survivors in search and rescue, can be cognitively taxing, particularly for operators driving multiple robots or monitoring multiple cameras. To reduce workload, emerging computer vision techniques can automatically identify and indicate (cue) salient points of potential interest for the operator. However, it is not clear how to cue such points to a preoccupied operator (for example, whether cues would distract or hinder operators), or how the design of the cue may impact operator cognitive load, attention drawn, and primary task performance. In this paper, we detail our iterative design process for creating a range of visual attention-grabbing cues that are grounded in psychological literature on human attention, and two formal evaluations that measure attention-grabbing capability and impact on operator performance. Our results show that visually cueing on-screen points of interest does not distract operators and that operators perform poorly without the cues; we also detail how particular cue design parameters impact operator cognitive load and task performance. Specifically, full-screen cues can lower cognitive load but can increase response time; animated cues may improve accuracy but increase cognitive load. Finally, from this design process we provide tested and theoretically grounded cues for attention drawing in teleoperation.


human-robot interaction | 2018

It's All in Your Head: Using Priming to Shape an Operator's Perceptions and Behavior during Teleoperation

Daniel J. Rea; James Everett Young

Perceptions of a technology can shape the way the technology is used and adopted. Thus, in teleoperation, it is important to understand how a teleoperator's perceptions of a robot can be shaped, and whether those perceptions can impact how people drive robots. Priming, evoking activity in a person by exposing them to learned stimuli, is one way of shaping someone's perception. We investigate priming an operator's impression of a robot's physical capabilities in order to impact their perception of the robot and their teleoperation behavior; that is, we examine whether we can change operator driving behavior simply by making them believe that a robot is dangerous or safe, fast or slow, etc., without actually changing the robot's capability. Our results show that priming (with no change to robot behavior or capability) can impact operator perception of the robot and their teleoperation experience, and in some cases may impact teleoperation performance.


robot and human interactive communication | 2017

Tortoise and the Hare Robot: Slow and steady almost wins the race, but finishes more safely

Daniel J. Rea; Mahdi Rahmani Hanzaki; Neil D. B. Bruce; James Everett Young

We investigated the effects of changing the teleoperation feel of operating a robot by modifying its speed and acceleration profiles, and found that reducing a robot's maximum speed by half can reduce collisions by 32% while only increasing navigation task time by 10%. Teleoperated robots are increasingly popular for enabling people to remotely attend meetings, explore dangerous areas, or view tourist destinations. As these robots are being designed to work in crowded areas with people, obstacles, or even unpredictable debris, interfaces that support piloting them in a safe and controlled manner are important for successful teleoperation. We investigate how modifying a teleoperated robot's speed and acceleration profiles affects an operator remotely navigating through an obstacle course. Our results indicate that lower maximum speeds result in lower operator workload and fewer collisions, and are only slightly slower than other profiles with a higher maximum speed. Our results raise questions about how robot designers should think about physical robot capability design and default driving software settings, the robot control interface, and the relation of robot speed to control.
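As a rough illustration of the kind of speed and acceleration profile manipulation studied here, the hypothetical sketch below clamps an operator's commanded speed to a profile's maximum speed and acceleration per control tick; the function name, control-loop structure, and parameter values are assumptions, not the study's implementation.

```python
# Sketch of capping a teleoperated robot's speed and acceleration; a "slow"
# profile simply uses a smaller max_speed than a "fast" one.
def limit_command(requested_speed, current_speed, max_speed, max_accel, dt):
    """Clamp a requested speed to the profile's speed and acceleration limits."""
    target = max(-max_speed, min(max_speed, requested_speed))
    max_delta = max_accel * dt                       # largest speed change this tick
    delta = max(-max_delta, min(max_delta, target - current_speed))
    return current_speed + delta

fast = dict(max_speed=1.0, max_accel=0.5, dt=0.05)   # assumed example values
slow = dict(max_speed=0.5, max_accel=0.5, dt=0.05)   # half the maximum speed
print(limit_command(1.0, 0.9, **fast))  # -> 0.925
print(limit_command(1.0, 0.4, **slow))  # -> 0.425
```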


international conference on computer graphics and interactive techniques | 2015

And he built a crooked camera: a mobile visualization tool to view four-dimensional geometric objects

Nico Li; Daniel J. Rea; James Everett Young; Ehud Sharlin; Mario Costa Sousa

The limitations of human perception make it impossible to grasp four spatial dimensions simultaneously. Visualization techniques for four-dimensional (4D) geometrical shapes rely on visualizing limited projections of the true shape into lower dimensions, often hindering the viewer's ability to grasp the complete structure or to access its spatial structure from a natural 3D perspective. We propose a mobile visualization technique that enables viewers to better understand the geometry of 4D shapes, providing spatial freedom and leveraging the viewer's natural knowledge and experience of exploring 3D geometric shapes. Our prototype renders 3D intersections of the 4D object while allowing the user continuous control over the value of the fourth dimension, enabling the user to interactively browse and explore a 4D shape using a simple camera-lens-style physical zoom metaphor.
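The rendering idea above (3D intersections of a 4D object at a user-controlled value of the fourth dimension) can be illustrated with a small slicing example. The sketch below intersects a tesseract's edges with the hyperplane w = c; the choice of shape and the code structure are illustrative assumptions, not the paper's implementation.

```python
# Slice a 4D shape by intersecting its edges with the hyperplane w = c.
from itertools import product

def tesseract():
    """Unit tesseract: 16 vertices at (+/-1)^4, with edges joining vertices
    that differ in exactly one coordinate."""
    verts = list(product((-1.0, 1.0), repeat=4))
    edges = [(a, b) for i, a in enumerate(verts) for b in verts[i + 1:]
             if sum(x != y for x, y in zip(a, b)) == 1]
    return verts, edges

def slice_at_w(edges, c):
    """Return the 3D points where edges cross the hyperplane w = c."""
    points = []
    for a, b in edges:
        wa, wb = a[3], b[3]
        if (wa - c) * (wb - c) < 0:           # edge straddles the hyperplane
            t = (c - wa) / (wb - wa)          # linear interpolation parameter
            points.append(tuple(a[i] + t * (b[i] - a[i]) for i in range(3)))
    return points

_, edges = tesseract()
print(len(slice_at_w(edges, 0.0)))  # 8 points: the mid-slice of a tesseract is a cube
```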


human-agent interaction | 2015

Inspector Baxter: The Social Aspects of Integrating a Robot as a Quality Inspector in an Assembly Line

Amy Banh; Daniel J. Rea; James Everett Young; Ehud Sharlin


robot and human interactive communication | 2016

Playing the ‘trust game’ with robots: Social strategies and experiences

Roberta Cabral Mota; Daniel J. Rea; Anna Le Tran; James Everett Young; Ehud Sharlin; Mario Costa Sousa

Collaboration


Dive into Daniel J. Rea's collaborations.

Top Co-Authors

Amy Banh (University of Calgary)

Barrett Ens (University of Manitoba)