Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Kelleher Guerin is active.

Publication


Featured research published by Kelleher Guerin.


IEEE Haptics Symposium | 2012

HAPI Bands: A haptic augmented posture interface

Michele F. Rotella; Kelleher Guerin; Xingchi He; Allison M. Okamura

In the instruction of motor tasks, feedback from a teacher in the form of visual demonstration, aural directives, or physical guidance enhances student performance and facilitates motor learning. When the teacher's guidance is not available, or visual and aural cues are not appropriate, a wearable presentation of haptic feedback that mimics the teacher's touch is an alternative solution. We present HAPI Bands, a set of user-worn bands instrumented with eccentric mass motors that provide vibrotactile feedback for guidance of static pose. Joint misalignment from a target pose is corrected for 15 degrees of freedom (DOFs) of the upper body. HAPI Bands uses a low-cost range camera, the Microsoft Kinect, and related software to measure the 3D position of a user's joints in space. We developed a novel algorithm that computes 6-DOF joint pose by integrating Kinect position sensing with orientation data from body-mounted accelerometers. Accuracy of the system's sensing was measured against Optotrak data, resulting in average joint biases of approximately 2.33°, 7.13°, and 7.48° for the torso, shoulder, and elbow joints, respectively, with an average static RMS measurement error of 0.59°. In a user study, haptic feedback was found to be as effective as visual feedback in reducing endpoint error in 1-DOF movements of the torso and arm. Future work is planned to evaluate the HAPI Bands system in realistic applications and to explore guidance of dynamic motion trajectories.
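
The abstract describes fusing Kinect joint positions with body-mounted accelerometer orientation and driving eccentric mass motors from joint misalignment. A minimal sketch of that idea, assuming a simple complementary filter per joint angle and a linear error-to-intensity mapping; both are hypothetical stand-ins, not the paper's actual 6-DOF algorithm:

```python
# Illustrative fusion-and-feedback loop; weights and thresholds are
# assumptions for the sketch, not values from the paper.

ALPHA = 0.98                # weight placed on the accelerometer estimate
ERROR_THRESHOLD_DEG = 5.0   # misalignment below which no feedback fires

def fuse_joint_angle(kinect_deg: float, accel_deg: float) -> float:
    """Blend two estimates of the same joint angle (degrees)."""
    return ALPHA * accel_deg + (1.0 - ALPHA) * kinect_deg

def band_intensity(current_deg: float, target_deg: float) -> float:
    """Vibrotactile motor intensity in [0, 1]; zero inside tolerance."""
    error = abs(current_deg - target_deg)
    if error < ERROR_THRESHOLD_DEG:
        return 0.0
    return min(error / 45.0, 1.0)  # saturate at 45 degrees of error

fused = fuse_joint_angle(kinect_deg=31.0, accel_deg=29.5)
print(band_intensity(fused, target_deg=20.0))  # ~0.21, a gentle buzz
```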


International Conference on Robotics and Automation | 2015

A framework for end-user instruction of a robot assistant for manufacturing

Kelleher Guerin; Colin Lea; Chris Paxton; Gregory D. Hager

Small Manufacturing Entities (SMEs) have not incorporated robotic automation as readily as large companies due to rapidly changing product lines, complex and dexterous tasks, and the high cost of start-up. While recent low-cost robots such as the Universal Robots UR5 and Rethink Robotics Baxter are more economical and feature improved programming interfaces, our discussions with manufacturers indicate that further incorporation of robots into the manufacturing workflow is limited by the ability of these systems to generalize across tasks and handle environmental variation. Our goal is to create a system designed for small manufacturers that contains a set of capabilities useful for a wide range of tasks, is both powerful and easy to use, allows for perceptually grounded actions, and is able to accumulate, abstract, and reuse plans that have been taught. We present an extension to Behavior Trees that represents the system capabilities of a robot as a set of generalizable operations exposed to an end user for creating task plans. We implement this framework in CoSTAR, the Collaborative System for Task Automation and Recognition, and demonstrate its effectiveness with two case studies. We first perform a complex tool-based object manipulation task in a laboratory setting. We then show the deployment of our system in an SME, where we automate a machine-tending task that was not possible with current off-the-shelf robots.
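
To make the Behavior Tree idea concrete, here is a minimal sketch of a tree whose leaves are parameterized operations of the kind an end user could compose. It uses the standard SUCCESS/FAILURE tick semantics (no RUNNING state); the node names and the lambda stand-ins are illustrative, not the paper's actual extension or CoSTAR's internals:

```python
from enum import Enum
from typing import Callable, List

class Status(Enum):
    SUCCESS = 1
    FAILURE = 2

class Node:
    def tick(self) -> "Status":
        raise NotImplementedError

class Sequence(Node):
    """Ticks children in order; fails on the first child failure."""
    def __init__(self, children: List[Node]):
        self.children = children

    def tick(self) -> Status:
        for child in self.children:
            if child.tick() == Status.FAILURE:
                return Status.FAILURE
        return Status.SUCCESS

class Operation(Node):
    """A robot capability exposed to the end user as a parameterized leaf."""
    def __init__(self, name: str, fn: Callable[..., bool], **params):
        self.name, self.fn, self.params = name, fn, params

    def tick(self) -> Status:
        return Status.SUCCESS if self.fn(**self.params) else Status.FAILURE

# Hypothetical task plan composed from exposed operations.
plan = Sequence([
    Operation("MoveTo", lambda target: True, target="part_A"),
    Operation("CloseGripper", lambda width: True, width=0.02),
])
print(plan.tick())  # Status.SUCCESS
```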


International Conference on Robotics and Automation | 2017

CoSTAR: Instructing collaborative robots with behavior trees and vision

Chris Paxton; Andrew Hundt; Felix Jonathan; Kelleher Guerin; Gregory D. Hager

For collaborative robots to become useful, end users who are not robotics experts must be able to instruct them to perform a variety of tasks. With this goal in mind, we developed a system for end-user creation of robust task plans with a broad range of capabilities. CoSTAR, the Collaborative System for Task Automation and Recognition, is our winning entry in the 2016 KUKA Innovation Award competition at the Hannover Messe trade show, which this year focused on Flexible Manufacturing. CoSTAR is unique in how it creates natural abstractions that use perception to represent the world in a way users can both understand and utilize to author capable and robust task plans. Our Behavior Tree-based task editor integrates high-level information from known-object segmentation and pose estimation with spatial reasoning and robot actions to create robust task plans. We describe the cross-platform design and implementation of this system on multiple industrial robots and evaluate its suitability for a wide variety of use cases.
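
One way to picture the "perceptually grounded abstraction" the abstract mentions is a spatial predicate over estimated object poses that a task editor could attach as a Behavior Tree condition. The frame convention, pose source, and predicate below are assumptions for the sketch, not CoSTAR's actual API:

```python
from dataclasses import dataclass

@dataclass
class Pose:
    x: float  # meters, in a shared workspace frame (assumed)
    y: float
    z: float

def left_of(obj: Pose, reference: Pose, margin: float = 0.05) -> bool:
    """True if obj lies at least `margin` meters in -y of the reference."""
    return obj.y < reference.y - margin

# A segmentation/pose-estimation stage would populate this at runtime.
detected = {"block": Pose(0.40, -0.20, 0.05),
            "machine": Pose(0.40, 0.10, 0.00)}
print(left_of(detected["block"], detected["machine"]))  # True
```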


Intelligent Robots and Systems | 2014

Adjutant: A framework for flexible human-machine collaborative systems

Kelleher Guerin; Sebastian Riedel; Jonathan Bohren; Gregory D. Hager

Flexible interaction and instruction is a key enabling technology for expanding robotics into small- to medium-scale manufacturing, in-home assistance for physically disabled individuals, and robotic surgery. In these cases, performing a task manually is neither practical nor scalable, yet complete automation is cost-prohibitive or impossible. Thus, our interest is in collaborative systems that can be easily trained to work with an operator. Such a collaborative robotic system should be instructable in a generalizable way for a wide range of tasks and should adapt to new tasks gracefully with minimal retraining. At the same time, for a given task, the system should take advantage of the user interaction modalities needed to accomplish the task, subject to the constraints of the available interfaces. These ideas motivate the Adjutant framework. Adjutant supports human-robot collaborative operations across a range of user roles and robot capabilities. Adjutant models human-robot systems via sets of robot capabilities (composable high-level functions that can be specialized to specific tasks) and collaborative behaviors, which relate these capabilities to specific user interfaces or interaction paradigms. Adjutant also incorporates several methods for encapsulating reusable task information into capabilities, thus specializing them; these include tool affordances, perceptual grounding templates, and tool movement primitives. We have implemented Adjutant as a software framework in ROS and, in this paper, explore its utility for performing several real-world collaborative manufacturing tasks on an industrial robot test-bed.
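
The capability/behavior decomposition above can be sketched as two small data structures: a capability carrying reusable task information, and a collaborative behavior binding that capability to a user interface. All names, fields, and the sanding example are illustrative assumptions; the ROS implementation is not reproduced here:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class Capability:
    """A composable high-level function, specialized by reusable task info."""
    name: str
    execute: Callable[[Dict], bool]
    task_info: Dict = field(default_factory=dict)  # e.g., tool affordances

@dataclass
class CollaborativeBehavior:
    """Binds a capability to a specific user interface or interaction mode."""
    capability: Capability
    interface: str  # e.g., "teach_pendant", "gesture", "full_teleop"

    def run(self, user_input: Dict) -> bool:
        # Merge stored task information with what the operator supplies.
        return self.capability.execute({**self.capability.task_info,
                                        **user_input})

sand = Capability("sand_surface", lambda p: True,
                  task_info={"tool": "orbital_sander"})
behavior = CollaborativeBehavior(sand, interface="teach_pendant")
print(behavior.run({"region": "panel_edge"}))  # True
```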


Intelligent Robots and Systems | 2011

Sensor- and sampling-based motion planning for minimally invasive robotic exploration of osteolytic lesions

Wen P. Liu; Blake C. Lucas; Kelleher Guerin; Erion Plaku

This paper develops a sensor- and sampling-based motion planner to control a surgical robot in order to explore osteolytic lesions in orthopedic surgery. Because of the difficulty of using conventional surgical tools, such exploration is needed in minimally invasive treatments of “particle diseases,” which commonly result from material wear in total hip replacements. Since a geometric model of the osteolytic cavity is not always available, the planner relies only on a robot model that can detect collisions. As such, the planner can work in conjunction with real systems. The planner effectively combines global and local exploration: the global layer determines which regions to explore, while local exploration uses information gain to move the robot tip to positions in the region that increase coverage. Simulation experiments are conducted using a snake-like cannula robot on surgically relevant osteolytic cavities. As desired in minimally invasive treatment of osteolysis, performance is measured as the volume explored by the robot tip. The proposed method achieves an 83–92% performance rate compared to methods that require 3D models of osteolytic cavities. Comparisons to sensor-based related work (i.e., no 3D models) show significant improvements in performance.
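
A toy version of the information-gain step might score candidate tip positions by how many unexplored voxels fall within the tip's local sensing region and pick the best one. The voxel grid, cube-shaped sensing region, and candidate set below are simplifying assumptions, not the paper's planner:

```python
import numpy as np

def info_gain(explored: np.ndarray, tip_idx: tuple, radius: int) -> int:
    """Count unexplored voxels within a cube of `radius` around the tip."""
    x, y, z = tip_idx
    region = explored[max(x - radius, 0):x + radius + 1,
                      max(y - radius, 0):y + radius + 1,
                      max(z - radius, 0):z + radius + 1]
    return int((~region).sum())

def best_candidate(explored: np.ndarray, candidates, radius: int = 2):
    """Choose the candidate tip position with the highest expected gain."""
    return max(candidates, key=lambda c: info_gain(explored, c, radius))

explored = np.zeros((20, 20, 20), dtype=bool)
explored[:10] = True  # pretend half the cavity is already covered
print(best_candidate(explored, [(5, 5, 5), (15, 10, 10)]))  # (15, 10, 10)
```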


Advanced Robotics and its Social Impacts | 2011

Toward practical semi-autonomous teleoperation: Do what I intend, not what I do

Jonathan Bohren; Kelleher Guerin; Tian Xia; Gregory D. Hager; Peter Kazanzides; Louis L. Whitcomb

The discussed research trajectory could begin improving the capabilities and robustness of teleoperation immediately. Initial work related to the RRM project described in Section I-A is beginning to incorporate explicit task models that can be used to bootstrap more sophisticated approaches like those used in the Language of Surgery project. The inclusion of task and skill modeling in robotic automation, and in general the movement toward more seamless human-robot task collaboration, will have long-term effects on both the technical aspects of robotics and the industrial and societal acceptance of robotics. One of the significant limitations to the continuing adoption of robotics has been a lack of trust. This lack of trust often manifests itself in a technical form; for instance, engineers in industry will often double-check by hand the calculations or task execution plans generated by an autonomous system before allowing an action to be performed, which gives greater confidence that the robot is performing as a human operator would in the same situation.


Archive | 2013

System and Method for Sensor Fusion of Single Range Camera Data and Inertial Measurement for Motion Capture

Kelleher Guerin; Allison M. Okamura; Michele F. Rotella; Xingchi He; Jonathan Bohren


Archive | 2015

Robot control, training and collaboration in an immersive virtual reality environment

Kelleher Guerin; Gregory D. Hager


Archive | 2013

Dockable Tool Framework for Interaction with Large Scale Wall Displays

Kelleher Guerin; Gregory D. Hager


Archive | 2017

Method, Device, and Computer-Readable Medium for Mobile Device Management of Collaborative Industrial Robot

Kelleher Guerin; Gregory D. Hager

Collaboration


Dive into Kelleher Guerin's collaboration.

Top Co-Authors

Chris Paxton (Johns Hopkins University)
Xingchi He (Johns Hopkins University)
Andrew Hundt (Johns Hopkins University)
Blake C. Lucas (Johns Hopkins University)
Colin Lea (Johns Hopkins University)
Felix Jonathan (Johns Hopkins University)