Mikhail S. Medvedev
University of Massachusetts Lowell
Publications
Featured research published by Mikhail S. Medvedev.
Human-Robot Interaction | 2013
Munjal Desai; Poornima Kaniarasu; Mikhail S. Medvedev; Aaron Steinfeld; Holly A. Yanco
Prior work in human trust of autonomous robots suggests that the timing of reliability drops impacts trust and control allocation strategies. However, trust is traditionally measured post-run, thereby masking the real-time changes in trust, reducing sensitivity to factors like inertia, and subjecting the measure to biases like the primacy-recency effect. Likewise, little is known about how feedback of robot confidence interacts in real time with trust and control allocation strategies. An experiment to examine these issues showed that trust loss due to early reliability drops is masked in traditional post-run measures, that trust demonstrates inertia, and that feedback alters allocation strategies independent of trust. The implications of specific findings for the development of trust models and robot design are also discussed.
2013 IEEE Conference on Technologies for Practical Robot Applications (TePRA) | 2013
Eric McCann; Mikhail S. Medvedev; Daniel J. Brooks; Kate Saenko
Indoor localization is a challenging problem, especially in dynamically changing environments and in the presence of sensor errors such as odometry drift. We present a method for robustly localizing a robot in realistic indoor environments. We improve a popular probabilistic approach called Monte Carlo localization, which estimates the robot's position using depth features of the environment and is prone to errors when the topology changes (e.g., due to a moved piece of furniture). We propose a technique that improves localization by augmenting the environment with a set of QR code landmarks. Each landmark embeds information about its 3D pose relative to the world coordinate system, the same coordinate system as the map. Our algorithm detects the landmarks in images from an RGB-D camera, uses depth information to estimate their pose relative to the robot, and incorporates the resulting position evidence in a probabilistic manner. We conducted experiments on an iRobot ATRV-JR robot and show that our method is more reliable in dynamic environments than the exclusively probabilistic localization method.
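The abstract above describes folding landmark evidence into Monte Carlo localization in "a probabilistic manner." A minimal sketch of one standard way to do this is to reweight each particle by the likelihood of the observed landmark range and bearing, given the landmark's known map position decoded from the QR code. The Gaussian noise model, the function names, and the parameter values below are illustrative assumptions, not the paper's exact sensor model:

```python
import math

def landmark_likelihood(particle, observed_range, observed_bearing,
                        landmark_xy, sigma_r=0.1, sigma_b=0.05):
    """Likelihood of a landmark observation given a particle pose.

    particle: (x, y, theta) hypothesis in map coordinates.
    landmark_xy: known map position decoded from the QR code.
    Independent Gaussian noise on range and bearing (an assumption).
    """
    px, py, ptheta = particle
    dx, dy = landmark_xy[0] - px, landmark_xy[1] - py
    expected_range = math.hypot(dx, dy)
    expected_bearing = math.atan2(dy, dx) - ptheta
    # Wrap the bearing error into [-pi, pi] before scoring it.
    berr = (observed_bearing - expected_bearing + math.pi) % (2 * math.pi) - math.pi
    rerr = observed_range - expected_range
    return (math.exp(-rerr ** 2 / (2 * sigma_r ** 2)) *
            math.exp(-berr ** 2 / (2 * sigma_b ** 2)))

def reweight(particles, weights, observed_range, observed_bearing, landmark_xy):
    """Fold landmark evidence into MCL particle weights and renormalize."""
    new_w = [w * landmark_likelihood(p, observed_range, observed_bearing, landmark_xy)
             for p, w in zip(particles, weights)]
    total = sum(new_w) or 1e-12  # guard against all-zero weights
    return [w / total for w in new_w]
```

After this reweighting step, a standard MCL loop would resample particles in proportion to the new weights, so hypotheses inconsistent with the QR-code evidence die out even when depth features alone are ambiguous.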
Paladyn: Journal of Behavioral Robotics | 2015
Katherine M. Tsui; James M. Dalphond; Daniel J. Brooks; Mikhail S. Medvedev; Eric McCann; Jordan Allspaw; David Kontak; Holly A. Yanco
Abstract: The quality of life of people with special needs, such as residents of healthcare facilities, may be improved through operating social telepresence robots that provide the ability to participate in remote activities with friends or family. However, to date, such platforms do not exist for this population. Methodology: Our research utilized an iterative, bottom-up, user-centered approach, drawing upon our assistive robotics experiences. Based on the findings of our formative user studies, we developed an augmented reality user interface for our social telepresence robot. Our user interface focuses primarily on human-human interaction and communication through video, providing support for semi-autonomous navigation. We conducted a case study (n=4) with our target population in which the robot was used to visit a remote art gallery. Results: All of the participants were able to operate the robot to explore the gallery, form opinions about the exhibits, and engage in conversation. Significance: This case study demonstrates that people from our target population can successfully engage in the active role of operating a telepresence robot.
International Journal of Intelligent Computing and Cybernetics | 2014
Katherine M. Tsui; Eric McCann; Amelia McHugh; Mikhail S. Medvedev; Holly A. Yanco; David Kontak; Jill L. Drury
Purpose – The authors believe that people with cognitive and motor impairments may benefit from using telepresence robots to engage in social activities. To date, these systems have not been designed for use by people with disabilities as the robot operators. The paper aims to discuss these issues. Design/methodology/approach – The authors conducted two formative evaluations using a participatory action design process. First, the authors conducted a focus group (n=5) to investigate how members of the target audience would want to direct a telepresence robot in a remote environment using speech. The authors then conducted a follow-on experiment in which participants (n=12) used a telepresence robot or directed a human in a scavenger hunt task. Findings – The authors collected a corpus of 312 utterances (first hand as opposed to speculative) relating to spatial navigation. Overall, the analysis of the corpus supported several speculations put forth during the focus group. Further, it showed few statistic...
IEEE International Conference on Technologies for Practical Robot Applications | 2015
Daniel J. Brooks; Eric McCann; Jordan Allspaw; Mikhail S. Medvedev; Holly A. Yanco
In the field of human-robot interaction, collaborative and/or adversarial game play can be used as a testbed to evaluate theories and hypotheses in areas such as resolving problems with another agent's work and turn-taking etiquette. It is often the case that such interactions are encumbered by constraints made to allow the robot to function. This may affect interactions by impeding a participant's generalization of their interaction with the robot to similar previous interactions they have had with people. We present a checkers-playing system that, with minimal constraints, can play checkers with a human, even crowning the human's kings by placing a piece atop the appropriate checker. Our board and pieces were purchased online, and only required the addition of colored stickers on the checkers to contrast them with the board. This paper describes our system design and evaluates its performance and accuracy by playing games with twelve human players.
2013 IEEE Conference on Technologies for Practical Robot Applications (TePRA) | 2013
Munjal Desai; Mikhail S. Medvedev; Marynel Vázquez; Sean McSheehy; Sofia Gadea-Omelchenko; Christian Bruggeman; Aaron Steinfeld; Holly A. Yanco
When remote robots operate in unstructured environments, they are typically controlled or monitored by an operator, depending upon the available autonomy levels and the current level of reliability for the available autonomy. When multiple autonomy modes are available, the operator must determine a control allocation strategy. We conducted two sets of experiments designed to investigate how situation awareness and automation reliability affected the control strategies of the experiment participants. Poor situation awareness was found to increase the use of autonomy; however, task performance decreased even when the automation was functioning reliably, demonstrating the need to design robot interfaces that provide good situation awareness.
Human-Robot Interaction | 2012
Daniel J. Brooks; Cameron Finucane; Adam Norton; Constantine Lignos; Vasumathi Raman; Hadas Kress-Gazit; Mikhail S. Medvedev; Ian Perera; Abraham Shultz; Sean McSheehy; Mitch Marcus; Holly A. Yanco
This video shows a demonstration of a fully autonomous robot, an iRobot ATRV-JR, which can be given commands using natural language. Users type commands to the robot on a tablet computer; the commands are then parsed and processed using semantic analysis. This information is used to build a plan representing the high-level autonomous behaviors the robot should perform [2] [1]. The robot can be given commands to be executed immediately (e.g., “Search the floor for hostages.”) as well as standing orders for use over the entire run (e.g., “Let me know if you see any bombs.”). In the scenario shown in the video, the robot is asked to identify and defuse bombs, as well as to report if it finds any hostages or bad guys. Users can also query the robot through this interface. The robot conveys information to the user through text and a graphical interface on a tablet computer. The system can add icons to the map displayed and highlight areas of the map to convey concepts such as “I am here.” The video contains segments taken from a continuous 20-minute run, shown at 4× speed. This work is a demonstration of a larger project called Situation Understanding Bot Through Language and Environment (SUBTLE). For more information, see www.subtlebot.org.
Human-Robot Interaction | 2012
Munjal Desai; Mikhail S. Medvedev; Marynel Vázquez; Sean McSheehy; Sofia Gadea-Omelchenko; Christian Bruggeman; Aaron Steinfeld; Holly A. Yanco
National Conference on Artificial Intelligence | 2012
Daniel J. Brooks; Constantine Lignos; Cameron Finucane; Mikhail S. Medvedev; Ian Perera; Vasumathi Raman; Hadas Kress-Gazit; Mitch Marcus; Holly A. Yanco
2013 IEEE Conference on Technologies for Practical Robot Applications (TePRA) | 2013
Katherine M. Tsui; Adam Norton; Daniel J. Brooks; Eric McCann; Mikhail S. Medvedev; Holly A. Yanco