Charles C. Kemp
Georgia Institute of Technology
Publication
Featured research published by Charles C. Kemp.
IEEE Robotics & Automation Magazine | 2007
Charles C. Kemp; Aaron Edsinger; Eduardo Torres-Jara
The state of the art in robotics and its future are discussed, including potential paths to the long-term vision of robots that work alongside people in homes and workplaces as useful, capable collaborators. Robot manipulation in human environments is expected to grow in the coming years as more researchers seek to create robots that actively help in people's daily lives.
Robot and Human Interactive Communication | 2007
Aaron Edsinger; Charles C. Kemp
For manipulation tasks, the transfer of objects between humans and robots is a fundamental way to coordinate activity and cooperatively perform useful work. Within this paper we demonstrate that robots and people can effectively and intuitively work together by directly handing objects to one another. First, we present experimental results demonstrating that subjects without explicit instructions or robotics expertise can successfully hand objects to a robot and take objects from a robot in response to reaching gestures. Moreover, when handing an object to the robot, subjects control the object's position and orientation to match the configuration of the robot's hand, thereby simplifying robotic grasping and offering opportunities to simplify the manipulation task. Second, we present a robotic application that relies on this form of human-robot interaction. This application enables a humanoid robot to help a user place objects on a shelf, perform bimanual insertion tasks, and hold a box within which the user can place objects. By handing appropriate objects to the robot, the human directly and intuitively controls the robot. Through this interaction, the human and robot complement one another's abilities and work together to achieve results.
International Journal of Humanoid Robotics | 2004
Rodney A. Brooks; Lijin Aryananda; Aaron Edsinger; Paul Fitzpatrick; Charles C. Kemp; Una-May O'Reilly; Eduardo Torres-Jara; Paulina Varshavskaya; Jeff Weber
We report on a dynamically balancing robot with a dexterous arm designed to operate in built-for-human environments. Our initial target task was for the robot to navigate, identify doors, open them, and proceed through them.
Human-Robot Interaction | 2008
Charles C. Kemp; Cressel D. Anderson; Hai Nguyen; Alexander J. B. Trevor; Zhe Xu
We present a novel interface for human-robot interaction that enables a human to intuitively and unambiguously select a 3D location in the world and communicate it to a mobile robot. The human points at a location of interest and illuminates it ("clicks it") with an unaltered, off-the-shelf, green laser pointer. The robot detects the resulting laser spot with an omnidirectional, catadioptric camera with a narrow-band green filter. After detection, the robot moves its stereo pan/tilt camera to look at this location and estimates the location's 3D position with respect to the robot's frame of reference. Unlike previous approaches, this interface for gesture-based pointing requires no instrumentation of the environment, makes use of a non-instrumented everyday pointing device, has low spatial error out to 3 meters, is fully mobile, and is robust enough for use in real-world applications. We demonstrate that this human-robot interface enables a person to designate a wide variety of everyday objects placed throughout a room. In 99.4% of these tests, the robot successfully looked at the designated object and estimated its 3D position with low average error. We also show that this interface can support object acquisition by a mobile manipulator. For this application, the user selects an object to be picked up from the floor by "clicking" on it with the laser pointer interface. In 90% of these trials, the robot successfully moved to the designated object and picked it up off of the floor.
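
To make the detection step concrete, the following is a minimal sketch (not the authors' implementation) of picking out a bright, green-dominant laser spot in a single camera frame. The green-dominance heuristic and brightness threshold are assumptions for illustration; the paper's system additionally relies on an optical narrow-band green filter and an omnidirectional camera, and then points a stereo pan/tilt camera at the spot to estimate its 3D position.

    # Hypothetical sketch: find a bright, green-dominant spot in one frame.
    # Assumes an optical green filter has already suppressed most other light.
    import cv2
    import numpy as np

    def detect_laser_spot(frame_bgr, min_green=200):
        """Return the (x, y) pixel of the laser spot, or None if not found."""
        f = frame_bgr.astype(np.float32)
        # Score pixels by how much green exceeds the blue and red channels.
        dominance = f[:, :, 1] - 0.5 * (f[:, :, 0] + f[:, :, 2])
        _, _, _, max_loc = cv2.minMaxLoc(dominance)
        x, y = max_loc
        if frame_bgr[y, x, 1] < min_green:
            return None  # nothing bright enough to be the laser spot
        return (x, y)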
IEEE Robotics & Automation Magazine | 2013
Tiffany L. Chen; Matei T. Ciocarlie; Steve Cousins; Phillip M. Grice; Kelsey P. Hawkins; Kaijen Hsiao; Charles C. Kemp; Chih-Hung King; Daniel A. Lazewatsky; Adam Leeper; Hai Nguyen; Andreas Paepcke; Caroline Pantofaru; William D. Smart; Leila Takayama
Assistive mobile manipulators (AMMs) have the potential to one day serve as surrogates and helpers for people with disabilities, giving them the freedom to perform tasks such as scratching an itch, picking up a cup, or socializing with their families.
IEEE-RAS International Conference on Humanoid Robots | 2006
Aaron Edsinger; Charles C. Kemp
Robots that work alongside us in our homes and workplaces could extend the time an elderly person can live at home, provide physical assistance to a worker on an assembly line, or help with household chores. In order to assist us in these ways, robots will need to successfully perform manipulation tasks within human environments. Human environments present special challenges for robot manipulation since they are complex, dynamic, uncontrolled, and difficult to perceive reliably. In this paper we present a behavior-based control system that enables a humanoid robot, Domo, to help a person place objects on a shelf. Domo is able to physically locate the shelf, socially cue a person to hand it an object, grasp the object that has been handed to it, transfer the object to the hand that is closest to the shelf, and place the object on the shelf. We use this behavior-based control system to illustrate three themes that characterize our approach to manipulation in human environments. The first theme, cooperative manipulation, refers to the advantages that can be gained by having the robot work with a person to cooperatively perform manipulation tasks. The second theme, task relevant features, emphasizes the benefits of carefully selecting the aspects of the world that are to be perceived and acted upon during a manipulation task. The third theme, let the body do the thinking, encompasses several ways in which a robot can use its body to simplify manipulation tasks.
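
As a rough illustration of the behavior-based structure described above, here is a minimal sequencer sketch. The step names and the retry protocol are illustrative assumptions, not Domo's actual architecture.

    # Hypothetical sketch: run behaviors in order, retrying each until it succeeds.
    from typing import Callable, List

    Behavior = Callable[[], bool]  # a behavior returns True on success

    def run_sequence(behaviors: List[Behavior], max_attempts: int = 100) -> bool:
        """Execute each behavior in turn; abort if one never succeeds."""
        for behavior in behaviors:
            for _ in range(max_attempts):
                if behavior():
                    break
            else:
                return False  # this behavior never succeeded
        return True

    # Placeholder steps mirroring the shelf task from the abstract:
    steps = [
        lambda: True,  # locate_shelf
        lambda: True,  # cue_person_to_hand_over_object
        lambda: True,  # grasp_offered_object
        lambda: True,  # transfer_to_hand_nearest_shelf
        lambda: True,  # place_object_on_shelf
    ]
    print("task complete:", run_sequence(steps))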
The International Journal of Robotics Research | 2013
Advait Jain; Marc D. Killpack; Aaron Edsinger; Charles C. Kemp
Clutter creates challenges for robot manipulation, including a lack of non-contact trajectories and reduced visibility for line-of-sight sensors. We demonstrate that robots can use whole-arm tactile sensing to perceive clutter and maneuver within it, while keeping contact forces low. We first present our approach to manipulation, which emphasizes the benefits of making contact across the entire manipulator and assumes the manipulator has low-stiffness actuation and tactile sensing across its entire surface. We then present a novel controller that exploits these assumptions. The controller only requires haptic sensing, handles multiple contacts, and does not need an explicit model of the environment prior to contact. It uses model predictive control with a time horizon of length one and a linear quasi-static mechanical model. In our experiments, the controller enabled a real robot and a simulated robot to reach goal locations in a variety of environments, including artificial foliage, a cinder block, and randomly generated clutter, while keeping contact forces low. While reaching, the robots performed maneuvers that included bending objects, compressing objects, sliding objects, and pivoting around objects. In simulation, whole-arm tactile sensing also outperformed per-link force-torque sensing in moderate clutter, with the relative benefits increasing with the amount of clutter.
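
As a numerical illustration of the single-step idea, the sketch below takes a greedy step toward the goal and scales it back so that a linear quasi-static prediction of the contact forces stays below a limit. The matrices, gains, and the simple line search are assumptions for illustration; the paper formulates this as model predictive control with a time horizon of one.

    # Hypothetical sketch: one greedy step under a linear quasi-static model.
    import numpy as np

    def one_step_command(x_ee, x_goal, J_ee, J_c, K_c, f_now, f_max, gain=0.1):
        """Joint increment toward the goal, scaled to respect force limits."""
        dx = gain * (x_goal - x_ee)        # desired end-effector motion
        dq = np.linalg.pinv(J_ee) @ dx     # map to a joint-space step
        df = K_c @ (J_c @ dq)              # predicted contact force change
        # Largest s in [0, 1] keeping |f_now + s*df| <= f_max elementwise.
        s = 1.0
        for fi, dfi in zip(f_now, df):
            if dfi > 1e-9:
                s = min(s, max(0.0, (f_max - fi) / dfi))
            elif dfi < -1e-9:
                s = min(s, max(0.0, (-f_max - fi) / dfi))
        return s * dq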
International Conference on Robotics and Automation | 2010
Advait Jain; Charles C. Kemp
Previously, we have presented an implementation of impedance control inspired by the Equilibrium Point Hypothesis that we refer to as equilibrium point control (EPC). We have demonstrated that EPC can enable a robot in a fixed position to robustly pull open a variety of doors and drawers, and infer their kinematics without detailed prior models.
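
In its simplest form, the equilibrium point idea can be sketched as impedance control about a commanded equilibrium posture; the gains below are placeholders, not the controller from the paper.

    # Hypothetical sketch of the equilibrium point idea as joint impedance.
    import numpy as np

    def epc_torque(q, dq, q_eq, K, D):
        """Torques pulling the arm toward equilibrium posture q_eq."""
        return K @ (q_eq - q) - D @ dq

    # Pulling open a door or drawer then reduces to slowly moving q_eq so
    # that the compliant arm follows the handle's trajectory.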
Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2012
Cory Ann Smarr; Akanksha Prakash; Jenay M. Beer; Tracy L. Mitzner; Charles C. Kemp; Wendy A. Rogers
Many older adults value their independence and prefer to age in place. Robots can be designed to assist older people with performing everyday living tasks and maintaining their independence at home. Yet, there is a scarcity of knowledge regarding older adults’ attitudes toward robots and their preferences for robot assistance. Twenty-one older adults (M = 80.25 years old, SD = 7.19) completed questionnaires and participated in structured group interviews investigating their openness to and preferences for assistance from a mobile manipulator robot. Although the older adults were generally open to robot assistance for performing home-based tasks, they were selective in their views. Older adults preferred robot assistance over human assistance for many instrumental (e.g., housekeeping, laundry, medication reminders) and enhanced activities of daily living (e.g., new learning, hobbies). However, older adults were less open to robot assistance for some activities of daily living (e.g., shaving, hair care). Results from this study provide insight into older adults’ attitudes toward robot assistance with home-based everyday living tasks.
Intelligent Robots and Systems | 2008
Hai Nguyen; Advait Jain; Cressel D. Anderson; Charles C. Kemp
We present a new behavior selection system for human-robot interaction that maps virtual buttons overlaid on the physical environment to the robot's behaviors, thereby creating a clickable world. The user clicks on a virtual button and activates the associated behavior by briefly illuminating a corresponding 3D location with an off-the-shelf green laser pointer. As we have described in previous work, the robot can detect this click and estimate its 3D location using an omnidirectional camera and a pan/tilt stereo camera. In this paper, we show that the robot can select the appropriate behavior to execute using the 3D location of the click, the context around this 3D location, and its own state. For this work, the robot performs this selection process using a cascade of classifiers. We demonstrate the efficacy of this approach with an assistive object-fetching application. Through empirical evaluation, we show that the 3D location of the click, the state of the robot, and the surrounding context are sufficient for the robot to choose the correct behavior from a set of behaviors and perform the following tasks: pick up a designated object from a floor or table, deliver an object to a designated person, place an object on a designated table, go to a designated location, and touch a designated location with its end effector.
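
A minimal sketch of such a cascade appears below; the observation fields, thresholds, and behavior names are illustrative assumptions rather than the trained classifiers from the paper.

    # Hypothetical sketch: a cascade where each stage either commits to a
    # behavior or defers to the next stage.
    from typing import Callable, List, Optional

    Classifier = Callable[[dict], Optional[str]]  # behavior name or None

    def select_behavior(obs: dict, cascade: List[Classifier]) -> str:
        for classifier in cascade:
            behavior = classifier(obs)
            if behavior is not None:
                return behavior
        return "do_nothing"

    # Example stages keyed on click location, context, and robot state:
    cascade = [
        lambda o: "deliver_object" if o["holding"] and o.get("near_person") else None,
        lambda o: "place_object" if o["holding"] and o.get("on_table") else None,
        lambda o: "pick_up_object" if o.get("object_at_click") else None,
        lambda o: "go_to_location",  # default when nothing else applies
    ]
    print(select_behavior({"holding": False, "object_at_click": True}, cascade))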