Publication


Featured research published by Katherine M. Tsui.


IEEE Conference on Technologies for Practical Robot Applications | 2011

Essential features of telepresence robots

Munjal Desai; Katherine M. Tsui; Holly A. Yanco; Chris Uhlik

Telepresence robots are mobile robot platforms capable of providing two-way audio and video communication. Recently, there has been a surge in companies designing telepresence robots. We conducted a series of user studies at Google in Mountain View with two different commercially available telepresence robots. Based on the data collected from these user studies, we present a set of guidelines for designing telepresence robots. These essential guidelines pertain to video, audio, user interface, physical features, and autonomous behaviors.


Interactive Tabletops and Surfaces | 2009

Analysis of natural gestures for controlling robot teams on multi-touch tabletop surfaces

Mark Micire; Munjal Desai; Amanda Courtemanche; Katherine M. Tsui; Holly A. Yanco

Multi-touch technologies hold much promise for the command and control of mobile robot teams. To improve the ease of learning and usability of these interfaces, we conducted an experiment to determine the gestures that people would naturally use, rather than the gestures they would be instructed to use in a pre-designed system. A set of 26 tasks with differing control needs was presented sequentially on a DiamondTouch to 31 participants. We found that the task of controlling robots exposed unique gesture sets and considerations not previously observed in desktop-like applications. In this paper, we present the details of these findings, a taxonomy of the gesture set, and guidelines for designing gesture sets for robot control.


Human-Robot Interaction | 2008

Development and evaluation of a flexible interface for a wheelchair mounted robotic arm

Katherine M. Tsui; Holly A. Yanco; David Kontak; Linda Beliveau

Accessibility is a challenge for people with disabilities. Differences in cognitive ability, sensory impairments, motor dexterity, behavioral skills, and social skills must be taken into account when designing interfaces for assistive devices. Flexible interfaces tuned for individuals, instead of custom-built solutions, may benefit a larger number of people. The development and evaluation of a flexible interface for controlling a wheelchair-mounted robotic arm is described in this paper. There are four versions of the interface based on input device (touch screen or joystick) and a moving or stationary shoulder camera. We describe results from an eight-week experiment conducted with representative end users who range in physical and cognitive ability.


Human-Robot Interaction | 2009

How people talk when teaching a robot

Elizabeth S. Kim; Dan Leyzberg; Katherine M. Tsui; Brian Scassellati

We examine affective vocalizations provided by human teachers to robotic learners. In unscripted one-on-one interactions, participants provided vocal input to a robotic dinosaur as the robot selected toy buildings to knock down. We find that (1) people vary their vocal input depending on the learner's performance history, (2) people do not wait until a robotic learner completes an action before they provide input, and (3) people naïvely and spontaneously use intensely affective vocalizations. Our findings suggest modifications may be needed to traditional machine learning models to better fit observed human tendencies. Our observations of human behavior contradict the popular assumptions made by machine learning algorithms (in particular, reinforcement learning) that the reward function is stationary and path-independent for social learning interactions. We also propose an interaction taxonomy that describes three phases of a human teacher's vocalizations: direction, spoken before an action is taken; guidance, spoken as the learner communicates an intended action; and feedback, spoken in response to a completed action.
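The contrast the abstract draws rests on classical reinforcement learning's assumption that a reward signal is stationary and independent of interaction history. The following minimal sketch contrasts that assumption with a toy teacher whose feedback drifts with the learner's recent performance; all names and values are illustrative assumptions, not drawn from the paper.

```python
# Illustrative sketch (not the paper's model): a stationary reward, as assumed
# by classical reinforcement learning, versus a teacher whose feedback depends
# on the learner's recent performance history. All names and values are made up.
import random

CORRECT = "knock_down_correct_building"
WRONG = "knock_down_wrong_building"


def stationary_reward(action: str) -> float:
    """Classical assumption: the same action always earns the same reward."""
    return 1.0 if action == CORRECT else -1.0


class HistoryDependentTeacher:
    """Toy teacher whose praise fades after repeated successes."""

    def __init__(self) -> None:
        self.recent_successes = 0

    def reward(self, action: str) -> float:
        success = action == CORRECT
        base = 1.0 if success else -1.0
        # Same action, different reward: feedback is damped by history.
        damping = 1.0 / (1.0 + self.recent_successes)
        self.recent_successes = self.recent_successes + 1 if success else 0
        return base * damping


if __name__ == "__main__":
    teacher = HistoryDependentTeacher()
    for trial in range(5):
        action = random.choice([CORRECT, WRONG])
        print(trial, action,
              stationary_reward(action),
              round(teacher.reward(action), 2))
```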


Applied Bionics and Biomechanics | 2011

“I want that”: Human-in-the-loop control of a wheelchair-mounted robotic arm

Katherine M. Tsui; Dae-Jin Kim; Aman Behal; David Kontak; Holly A. Yanco

Wheelchair-mounted robotic arms have been commercially available for a decade. In order to operate these robotic arms, a user must have a high level of cognitive function. Our research focuses on replacing a manufacturer-provided, menu-based interface with a vision-based system while adding autonomy to reduce the cognitive load. Instead of manual task decomposition and execution, the user explicitly designates the end goal, and the system autonomously retrieves the object. In this paper, we present the complete system which can autonomously retrieve a desired object from a shelf. We also present the results of a 15-week study in which 12 participants from our target population used our system, totaling 198 trials.
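As a rough illustration of the human-in-the-loop flow described above (the user designates the end goal; the system handles localization and retrieval), here is a minimal sketch. Every function and type name is a hypothetical placeholder, not the authors' implementation.

```python
# Hypothetical sketch of a point-and-retrieve control flow like the one
# described above. Function names are placeholders; real perception and
# arm control would replace the stubs.
from dataclasses import dataclass


@dataclass
class ObjectGoal:
    """The user's end goal: the object they indicated on the camera view."""
    pixel: tuple[int, int]  # where the user touched/clicked on the shelf image


def locate_object(goal: ObjectGoal) -> tuple[float, float, float]:
    """Stub for vision-based localization of the designated object."""
    return (0.4, 0.1, 0.9)  # placeholder 3-D position in metres


def plan_and_execute_grasp(pose: tuple[float, float, float]) -> bool:
    """Stub for arm motion planning, grasping, and retrieval."""
    return True  # placeholder success


def retrieve(goal: ObjectGoal) -> bool:
    """Autonomous side of the loop: the user never decomposes the task."""
    pose = locate_object(goal)
    return plan_and_execute_grasp(pose)


if __name__ == "__main__":
    goal = ObjectGoal(pixel=(312, 198))  # "I want that"
    print("retrieved" if retrieve(goal) else "failed")
```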


IEEE International Conference on Rehabilitation Robotics | 2007

Development of Vision-Based Navigation for a Robotic Wheelchair

Matt Bailey; Andrew Chanler; Bruce Allen Maxwell; Mark Micire; Katherine M. Tsui; Holly A. Yanco

Our environment is replete with visual cues intended to guide human navigation. For example, there are building directories at entrances and room numbers next to doors. By developing a robot wheelchair system that can interpret these cues, we will create a more robust and more usable system. This paper describes the design and development of our robot wheelchair system, called Wheeley, and its vision-based navigation system. The robot wheelchair system uses stereo vision to build a map of the environment through which it travels; this map can then be annotated with information gleaned from signs. We also describe the planned integration of an assistive robot arm to help with pushing elevator buttons and opening door handles.
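A small sketch of the kind of data structure such a system might use, an occupancy grid built from stereo depth and annotated with text read from signs, is given below. The class and method names are assumptions for illustration only, not the Wheeley codebase.

```python
# Illustrative only: an occupancy grid annotated with sign text, in the spirit
# of the map-plus-annotations design described above. Not the Wheeley code.
class AnnotatedMap:
    """Occupancy grid plus text labels gleaned from signs (e.g. room numbers)."""

    def __init__(self, width: int, height: int) -> None:
        # 0.5 = unknown; stereo depth measurements would raise or lower these.
        self.occupancy = [[0.5] * width for _ in range(height)]
        self.labels: dict[tuple[int, int], str] = {}

    def mark_obstacle(self, x: int, y: int, p: float = 0.9) -> None:
        """Record an occupied cell inferred from stereo vision."""
        self.occupancy[y][x] = p

    def annotate_sign(self, x: int, y: int, text: str) -> None:
        """Attach OCR'd sign text (a room number, a directory entry) to a cell."""
        self.labels[(x, y)] = text


if __name__ == "__main__":
    m = AnnotatedMap(width=50, height=50)
    m.mark_obstacle(10, 12)
    m.annotate_sign(10, 13, "Room 214")
    print(m.labels)  # {(10, 13): 'Room 214'}
```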


Performance Metrics for Intelligent Systems | 2008

Survey of domain-specific performance measures in assistive robotic technology

Katherine M. Tsui; Holly A. Yanco; David J. Feil-Seifer; Maja J. Matarić

Assistive robots have been developed for several domains, including autism, eldercare, intelligent wheelchairs, assistive robotic arms, external limb prostheses, and stroke rehabilitation. Work in assistive robotics can be divided into two larger research areas: technology development, where new devices, software, and interfaces are created; and clinical application, where assistive technology is applied to a given end-user population. Moving from technology development towards clinical applications is a significant challenge. Developing performance metrics for assistive robots can unveil a larger set of challenges. For example, what well-established performance measures should be used for evaluation to lend credence to a particular assistive robotic technology from a clinician's perspective? In this paper, we survey several areas of assistive robotic technology in order to demonstrate domain-specific means for evaluating the performance of an assistive robot system.


Archive | 2009

Performance Evaluation Methods for Assistive Robotic Technology

Katherine M. Tsui; David J. Feil-Seifer; Maja J. Matarić; Holly A. Yanco

Robots have been developed for several assistive technology domains, including intervention for Autism Spectrum Disorders, eldercare, and post-stroke rehabilitation. Assistive robots have also been used to promote independent living through the use of devices such as intelligent wheelchairs, assistive robotic arms, and external limb prostheses. Work in the broad field of assistive robotic technology can be divided into two major research phases: technology development, in which new devices, software, and interfaces are created; and clinical application, in which assistive technology is applied to a given end-user population. Moving from technology development towards clinical applications is a significant challenge. Developing performance metrics for assistive robots poses a related set of challenges. In this paper, we survey several areas of assistive robotic technology in order to derive and demonstrate domain-specific means for evaluating the performance of such systems. We also present two case studies of applied performance measures and a discussion regarding the ubiquity of functional performance measures across the sampled domains. Finally, we present guidelines for incorporating human performance metrics into end-user evaluations of assistive robotic technologies.


Reviews of Human Factors and Ergonomics | 2013

Design Challenges and Guidelines for Social Interaction Using Mobile Telepresence Robots

Katherine M. Tsui; Holly A. Yanco

In this chapter, we address issues related to using mobile telepresence robotics for social interaction between people in different locations. We examine this problem space from three perspectives: (a) designing for the robot user, who is in a remote location; (b) designing for people near the robot, who are interacting with the user; and (c) designing so that the conversation is not hampered by the technology. We identify and review a number of mobile telepresence robots that have been designed for such social interactions across a variety of applications, including business, education, and healthcare. Finally, we discuss areas for future research and development in mobile telepresence robots for social applications.


Human-Robot Interaction | 2010

Considering the bystander's perspective for indirect human-robot interaction

Katherine M. Tsui; Munjal Desai; Holly A. Yanco

As robots become more pervasive in society, people will find themselves actively interacting with robots, and also rushing past them without any explicit interaction. People are able to maneuver in crowded situations by speeding up or slowing down to slip in between open pockets where people are not standing or walking. Our research focuses on this indirect bystander interaction. Scholtz defines a bystander as a person who “does not explicitly interact with a robot but needs some model of robot behavior to understand the consequences of the robot's actions” and does not have formal training about the robot [1], [2]. We investigated the level of trust that a bystander has in a robotic system in a corridor passing scenario by asking people to watch short videos of such scenarios where the hallway is only wide enough to accommodate two entities (either human or robot). Our goal was to understand the bystander's mental model of how a robot should behave when passing a human, the bystander's expectation of the robot to adhere to social protocol, and the overall trust a bystander has in the robot to do the right thing.

Collaboration


Dive into Katherine M. Tsui's collaborations.

Top Co-Authors

Holly A. Yanco, University of Massachusetts Lowell
Munjal Desai, University of Massachusetts Lowell
Eric McCann, University of Massachusetts Lowell
Adam Norton, University of Massachusetts Lowell
Daniel J. Brooks, University of Massachusetts Lowell
Mark Micire, University of Massachusetts Lowell
Mikhail S. Medvedev, University of Massachusetts Lowell