Eric McCann
University of Massachusetts Lowell
Publications
Featured research published by Eric McCann.
2013 IEEE Conference on Technologies for Practical Robot Applications (TePRA) | 2013
Eric McCann; Mikhail S. Medvedev; Daniel J. Brooks; Kate Saenko
Indoor localization is a challenging problem, especially in dynamically changing environments and in the presence of sensor errors such as odometry drift. We present a method for robustly localizing a robot in realistic indoor environments. We improve a popular probabilistic approach called Monte Carlo localization, which estimates the robot's position using depth features of the environment and is prone to errors when the topology changes (e.g., due to a moved piece of furniture). We propose a technique that improves localization by augmenting the environment with a set of QR code landmarks. Each landmark embeds information about its 3D pose relative to the world coordinate system, the same coordinate system as the map. Our algorithm detects the landmarks in images from an RGB-D camera, uses depth information to estimate their pose relative to the robot, and incorporates the resulting position evidence in a probabilistic manner. We conducted experiments on an iRobot ATRV-JR robot and show that our method is more reliable in dynamic environments than the exclusively probabilistic localization method.
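The landmark evidence described in this abstract can be folded into Monte Carlo localization as an extra particle-reweighting step. Below is a minimal Python sketch of that idea; the Particle class, the range/bearing measurement model, and the noise parameters are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch: fusing a QR-landmark observation into MCL particle weights.
# All names and parameters here are illustrative, not from the paper's code.
import math

class Particle:
    def __init__(self, x, y, theta, weight=1.0):
        self.x, self.y, self.theta = x, y, theta
        self.weight = weight

def landmark_update(particles, landmark_world_xy, measured_range,
                    measured_bearing, sigma_r=0.1, sigma_b=0.05):
    """Reweight particles by how well each explains the observed range and
    bearing to a QR landmark whose world position is decoded from the code."""
    lx, ly = landmark_world_xy
    for p in particles:
        dx, dy = lx - p.x, ly - p.y
        expected_range = math.hypot(dx, dy)
        expected_bearing = math.atan2(dy, dx) - p.theta
        # wrap the bearing error into [-pi, pi]
        bearing_err = (measured_bearing - expected_bearing + math.pi) % (2 * math.pi) - math.pi
        range_err = measured_range - expected_range
        # independent Gaussian likelihoods for range and bearing
        p.weight *= (math.exp(-range_err ** 2 / (2 * sigma_r ** 2)) *
                     math.exp(-bearing_err ** 2 / (2 * sigma_b ** 2)))
    total = sum(p.weight for p in particles) or 1e-12
    for p in particles:
        p.weight /= total
    return particles
```

Particles consistent with the decoded landmark pose keep most of their weight, so the filter recovers even when depth features alone are ambiguous.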
2011 IEEE Conference on Technologies for Practical Robot Applications | 2011
Mark Micire; Eric McCann; Munjal Desai; Katherine M. Tsui; Adam Norton; Holly A. Yanco
Robot control typically requires many physical joysticks, buttons, and switches. Taking inspiration from video game controllers, we have created a Dynamically Resizing, Ergonomic, and Multi-touch (DREAM) controller to allow for the development of a software-based operator control unit (SoftOCU). The DREAM Controller is created wherever a person places his or her hand; thus we needed to develop an algorithm for accurate hand and finger registration. Tested with a set of 405 hands from 62 users, our algorithm correctly identified 97% of the hands.
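As a rough illustration of what hand and finger registration involves, the Python sketch below classifies five simultaneous touch contacts by finding the thumb as the contact most angularly isolated around the centroid. This is a plausible heuristic offered for exposition, not the paper's published algorithm.

```python
# Illustrative heuristic for five-point hand registration; not the
# DREAM Controller's actual algorithm.
import math

def register_hand(points):
    """points: five (x, y) touch contacts from one hand.
    Returns (thumb, fingers), with fingers ordered by angle about the centroid."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)

    def angle(p):
        return math.atan2(p[1] - cy, p[0] - cx)

    by_angle = sorted(points, key=angle)
    n = len(by_angle)
    # score each contact by the angular gap to its neighbors; the thumb
    # tends to sit apart from the four clustered fingertips
    gaps = []
    for i in range(n):
        prev_gap = (angle(by_angle[i]) - angle(by_angle[i - 1])) % (2 * math.pi)
        next_gap = (angle(by_angle[(i + 1) % n]) - angle(by_angle[i])) % (2 * math.pi)
        gaps.append(prev_gap + next_gap)
    thumb = by_angle[max(range(n), key=gaps.__getitem__)]
    fingers = [p for p in by_angle if p is not thumb]
    return thumb, fingers
```

For example, `register_hand([(0, 0), (3, 9), (5, 10), (7, 9), (9, 7)])` identifies `(0, 0)` as the thumb, since it sits far from the arc formed by the other four contacts.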
Paladyn: Journal of Behavioral Robotics | 2015
Katherine M. Tsui; James M. Dalphond; Daniel J. Brooks; Mikhail S. Medvedev; Eric McCann; Jordan Allspaw; David Kontak; Holly A. Yanco
The quality of life of people with special needs, such as residents of healthcare facilities, may be improved by operating social telepresence robots that provide the ability to participate in remote activities with friends or family. However, to date, such platforms do not exist for this population. Methodology: Our research utilized an iterative, bottom-up, user-centered approach, drawing upon our assistive robotics experiences. Based on the findings of our formative user studies, we developed an augmented reality user interface for our social telepresence robot. Our user interface focuses primarily on human-human interaction and communication through video, providing support for semi-autonomous navigation. We conducted a case study (n=4) with our target population in which the robot was used to visit a remote art gallery. Results: All of the participants were able to operate the robot to explore the gallery, form opinions about the exhibits, and engage in conversation. Significance: This case study demonstrates that people from our target population can successfully engage in the active role of operating a telepresence robot.
International Journal of Intelligent Computing and Cybernetics | 2014
Katherine M. Tsui; Eric McCann; Amelia McHugh; Mikhail S. Medvedev; Holly A. Yanco; David Kontak; Jill L. Drury
Purpose – The authors believe that people with cognitive and motor impairments may benefit from using telepresence robots to engage in social activities. To date, these systems have not been designed for use by people with disabilities as the robot operators. The paper aims to discuss these issues. Design/methodology/approach – The authors conducted two formative evaluations using a participatory action design process. First, the authors conducted a focus group (n=5) to investigate how members of the target audience would want to direct a telepresence robot in a remote environment using speech. The authors then conducted a follow-on experiment in which participants (n=12) used a telepresence robot or directed a human in a scavenger hunt task. Findings – The authors collected a corpus of 312 utterances (first-hand as opposed to speculative) relating to spatial navigation. Overall, the analysis of the corpus supported several speculations put forth during the focus group. Further, it showed few statistic...
The International Journal of Robotics Research | 2017
Adam Norton; Willard Ober; Lisa Baraniecki; Eric McCann; Jean Scholtz; David Shane; Anna Skinner; Robert Watson; Holly A. Yanco
In June 2015, the Defense Advanced Research Projects Agency (DARPA) Robotics Challenge (DRC) Finals were held in Pomona, California. The DRC Finals served as the third phase of the program designed to test the capabilities of semi-autonomous, remote humanoid robots to perform disaster response tasks with degraded communications. All competition teams were responsible for developing their own interaction method to control their robot. Of the 23 teams in the competition, 20 consented to participate in this study of human–robot interaction (HRI). The evaluation team observed the consenting teams during task execution in their control rooms (with the operators), and all 23 teams were observed on the field during the public event (with the robot). A variety of data were collected both before the competition and on-site. Each participating team’s interaction methods were distilled into a set of characteristics pertaining to the robot, operator strategies, control methods, and sensor fusion. Each task was decomposed into subtasks that were classified according to the complexity of the mobility and/or manipulation actions being performed. Performance metrics were calculated regarding the number of task attempts, performance time, and critical incidents, which were then correlated to each team’s interaction methods. The results of this analysis suggest that a combination of HRI characteristics, including balancing the capabilities of the operator with those of the robot and multiple sensor fusion instances with variable reference frames, positively impacted task performance. A set of guidelines for designing HRI with remote, semi-autonomous humanoid robots is proposed based on these results.
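To make the style of analysis described above concrete, the short Python sketch below correlates a hypothetical per-team interaction characteristic with a performance metric. The data values are invented placeholders, not DRC results.

```python
# Hypothetical sketch of correlating an HRI characteristic with performance.
# All numbers below are invented placeholders, not DRC data.
from statistics import correlation  # Python 3.10+

# one entry per anonymized team: number of sensor-fusion instances in the
# operator interface, and total task performance time in minutes
fusion_instances = [1, 2, 4, 3, 5, 2]
performance_time = [55, 48, 39, 42, 35, 50]

r = correlation(fusion_instances, performance_time)
print(f"Pearson r between fusion instances and performance time: {r:.2f}")
```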
Human-Robot Interaction | 2012
Eric McCann; Sean McSheehy; Holly A. Yanco
This video demonstrates three users sharing control of eight simulated robots with a Microsoft Surface and two Apple iPads using our Multi-user Multi-touch Multi-robot Command and Control Interface. The command and control interfaces are all capable of moving their world camera through space, tasking one or more robots with a series of waypoints, and assuming manual control of a single robot for inspection of its sensors and teleoperation. They display full-screen images sent from their user's world camera, overlaid with icons that show the position and selection state of each robot in the camera's field of view, dots that indicate each robot's current destination, and rectangles that correspond to each other user's field of view. One multi-touch interface runs on a Microsoft Surface, and the others on Apple iPads; they all have the same functional capabilities, other than a few differences due to the form factor and touch sensing method used by the platforms. The Surface interface is able to interpret gestures that include more than just fingertips, such as placing both fists on the screen to make all robots stop and wait for new commands. As iPads sense touch capacitively, they do not support detection of such gestures. The Surface interface allows its user to move their world camera while simultaneously teleoperating one of the robots with our Dynamically Resizing Ergonomic and Multi-touch Controller (DREAM Controller) [1, 2]. On the iPads, however, the command and control mode and teleoperation mode are mutually exclusive. The robots are simulated in Microsoft Robotics Developer Studio. Each user's world camera has movement capabilities similar to a quad-copter. The UDP communications between users and robots are all handled by a single server that routes messages to the appropriate targets, allowing scalability of both the number of robots and users.
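A routing server of the kind mentioned at the end of this abstract can be quite small. The following Python sketch relays UDP datagrams between named endpoints; the sender/target wire format is an assumption made for illustration, not the actual protocol used in the system.

```python
# Minimal sketch of a UDP message router; the datagram layout
# b"<sender>|<target>|<payload>" is assumed for illustration.
import socket

def run_router(host="0.0.0.0", port=9000):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    endpoints = {}  # endpoint id -> last known (ip, port)
    while True:
        data, addr = sock.recvfrom(4096)
        try:
            sender, target, payload = data.split(b"|", 2)
        except ValueError:
            continue  # drop malformed datagrams
        endpoints[sender] = addr  # learn or refresh the sender's address
        dest = endpoints.get(target)
        if dest is not None:
            sock.sendto(sender + b"|" + payload, dest)
```

Because the server only forwards datagrams by id, adding another robot or operator is just another entry in the endpoint table, which is what makes the design scale in both directions.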
IEEE International Conference on Technologies for Practical Robot Applications | 2015
Daniel J. Brooks; Eric McCann; Jordan Allspaw; Mikhail S. Medvedev; Holly A. Yanco
In the field of human-robot interaction, collaborative and/or adversarial game play can be used as a testbed to evaluate theories and hypotheses in areas such as resolving problems with another agent's work and turn-taking etiquette. Such interactions are often encumbered by constraints made to allow the robot to function. This may affect interactions by impeding a participant's generalization of their interaction with the robot to similar previous interactions they have had with people. We present a checkers-playing system that, with minimal constraints, can play checkers with a human, even crowning the human's kings by placing a piece atop the appropriate checker. Our board and pieces were purchased online, and only required the addition of colored stickers on the checkers to contrast them with the board. This paper describes our system design and evaluates its performance and accuracy by playing games with twelve human players.
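Because the only instrumentation is colored stickers, piece detection reduces to color segmentation plus a grid lookup. The OpenCV sketch below illustrates that pipeline; the HSV thresholds, board origin, and square size are placeholder assumptions rather than values from the paper.

```python
# Hedged sketch of detecting stickered checkers by color; thresholds and
# board geometry are placeholder assumptions, not the paper's values.
import cv2
import numpy as np

RED_LO, RED_HI = (0, 120, 80), (10, 255, 255)       # assumed sticker color range
BLUE_LO, BLUE_HI = (100, 120, 80), (130, 255, 255)  # assumed sticker color range

def detect_pieces(bgr_image, board_origin=(50, 50), square_px=60):
    """Return {(row, col): color} for detected pieces on an 8x8 board."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    pieces = {}
    for color, (lo, hi) in (("red", (RED_LO, RED_HI)),
                            ("blue", (BLUE_LO, BLUE_HI))):
        mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            if cv2.contourArea(c) < 200:  # ignore specks
                continue
            m = cv2.moments(c)
            cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
            col = int((cx - board_origin[0]) // square_px)
            row = int((cy - board_origin[1]) // square_px)
            if 0 <= row < 8 and 0 <= col < 8:
                pieces[(row, col)] = color
    return pieces
```

Mapping sticker centroids to grid cells rather than tracking piece identity keeps the vision side simple, which fits the paper's goal of minimizing constraints on the human player.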
Archive | 2011
Eric McCann; Mark Micire; Holly A. Yanco; Adam Norton
Intelligent User Interfaces | 2011
Mark Micire; Munjal Desai; Jill L. Drury; Eric McCann; Adam Norton; Katherine M. Tsui; Holly A. Yanco
2013 IEEE Conference on Technologies for Practical Robot Applications (TePRA) | 2013
Katherine M. Tsui; Adam Norton; Daniel J. Brooks; Eric McCann; Mikhail S. Medvedev; Holly A. Yanco