
Publications


Featured research published by Chris Paxton.


International Conference on Robotics and Automation (ICRA) | 2015

A framework for end-user instruction of a robot assistant for manufacturing

Kelleher Guerin; Colin Lea; Chris Paxton; Gregory D. Hager

Small Manufacturing Entities (SMEs) have not incorporated robotic automation as readily as large companies due to rapidly changing product lines, complex and dexterous tasks, and the high cost of start-up. While recent low-cost robots such as the Universal Robots UR5 and Rethink Robotics Baxter are more economical and feature improved programming interfaces, based on our discussions with manufacturers, further incorporation of robots into the manufacturing workflow is limited by the ability of these systems to generalize across tasks and handle environmental variation. Our goal is to create a system designed for small manufacturers that contains a set of capabilities useful for a wide range of tasks, is both powerful and easy to use, allows for perceptually grounded actions, and is able to accumulate, abstract, and reuse plans that have been taught. We present an extension to Behavior Trees that allows for representing the system capabilities of a robot as a set of generalizable operations that are exposed to an end-user for creating task plans. We implement this framework in CoSTAR, the Collaborative System for Task Automation and Recognition, and demonstrate its effectiveness with two case studies. We first perform a complex tool-based object manipulation task in a laboratory setting. We then show the deployment of our system in an SME where we automate a machine tending task that was not possible with current off-the-shelf robots.
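As an illustration of the Behavior Tree control flow the abstract builds on, here is a minimal, hypothetical sketch (not the CoSTAR implementation): a Sequence ticks its children until one fails, a Selector until one succeeds, and leaf Actions wrap robot capabilities such as "detect" or "grasp".

```python
# Minimal behavior-tree sketch (illustrative only, not the CoSTAR framework).
# Every node's tick() returns "SUCCESS" or "FAILURE".

class Sequence:
    """Runs children in order; fails as soon as one child fails."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for child in self.children:
            if child.tick() == "FAILURE":
                return "FAILURE"
        return "SUCCESS"

class Selector:
    """Runs children in order; succeeds as soon as one child succeeds."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for child in self.children:
            if child.tick() == "SUCCESS":
                return "SUCCESS"
        return "FAILURE"

class Action:
    """Leaf node wrapping a robot capability (names here are hypothetical)."""
    def __init__(self, name, fn):
        self.name, self.fn = name, fn
    def tick(self):
        return "SUCCESS" if self.fn() else "FAILURE"

# Example task plan: try to grasp a part; if that fails, re-detect it first.
detected = {"part": False}
def detect():
    detected["part"] = True
    return True
def grasp():
    return detected["part"]

plan = Selector(
    Action("grasp", grasp),
    Sequence(Action("detect", detect), Action("grasp", grasp)),
)
print(plan.tick())  # first grasp fails, the detect+grasp branch succeeds -> SUCCESS
```

The appeal for end users is that plans like this compose visually from a small set of reusable operations, which is the property the abstract's extension exposes.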


International Conference on Robotics and Automation (ICRA) | 2015

An incremental approach to learning generalizable robot tasks from human demonstration

M E Amir Ghalamzan; Chris Paxton; Gregory D. Hager; Luca Bascetta

Dynamic Movement Primitives (DMPs) are a common method for learning a control policy for a task from demonstration. This control policy consists of differential equations that can create a smooth trajectory to a new goal point. However, DMPs only have a limited ability to generalize the demonstration to new environments and solve problems such as obstacle avoidance. Moreover, standard DMP learning does not cope with the noise inherent to human demonstrations. Here, we propose an approach for robot learning from demonstration that can generalize noisy task demonstrations to a new goal point and to an environment with obstacles. This strategy for robot learning from demonstration results in a control policy that incorporates different types of learning from demonstration, which correspond to different types of observational learning as outlined in developmental psychology.
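The DMP machinery the abstract refers to can be sketched in one dimension. This is an illustrative toy, not the paper's formulation: the gains K and D, the time-step-indexed forcing table (a full DMP indexes the forcing term by a canonical phase variable), and the cosine demonstration are all assumptions for the example.

```python
import math

# Toy 1-D Dynamic Movement Primitive sketch (illustrative only).
# Transformation system:  tau*z' = K*(g - y) - D*z + (g - y0)*f
K, D, tau, dt = 100.0, 20.0, 1.0, 0.01   # critically damped: D = 2*sqrt(K)

def learn_forcing(demo):
    """Invert the transformation system on a demonstrated trajectory to
    recover the forcing term, stored per time step for simplicity."""
    y0, g = demo[0], demo[-1]
    forcing, z = [], 0.0
    for i in range(len(demo) - 1):
        y = demo[i]
        z_next = (demo[i + 1] - y) / dt          # scaled velocity (tau = 1)
        z_dot = (z_next - z) / dt
        forcing.append((tau * z_dot - K * (g - y) + D * z) / (g - y0 + 1e-9))
        z = z_next
    return forcing

def rollout(forcing, y0, g, steps):
    """Integrate the DMP toward a (possibly new) goal g."""
    y, z = y0, 0.0
    traj = [y]
    for i in range(steps):
        f = forcing[i] if i < len(forcing) else 0.0  # forcing vanishes after demo
        z_dot = (K * (g - y) - D * z + (g - y0) * f) / tau
        z += z_dot * dt
        y += (z / tau) * dt
        traj.append(y)
    return traj

# Demonstrate a smooth reach from 0 to 1, then generalize to a new goal of 2.
demo = [0.5 * (1 - math.cos(math.pi * t / 100)) for t in range(101)]
table = learn_forcing(demo)
new_traj = rollout(table, y0=0.0, g=2.0, steps=200)
print(round(new_traj[-1], 2))  # converges near the new goal
```

The abstract's point is that this goal-changing property alone is not enough: plain DMPs inherit demonstration noise and cannot avoid obstacles, which is what the proposed incremental approach addresses.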


International Conference on Intelligent Robots and Systems (IROS) | 2016

Do what I want, not what I did: Imitation of skills by planning sequences of actions

Chris Paxton; Felix Jonathan; Marin Kobilarov; Gregory D. Hager

We propose a learning-from-demonstration approach for grounding actions from expert data and an algorithm for using these actions to perform a task in new environments. Our approach is based on an application of sampling-based motion planning to search through the tree of discrete, high-level actions constructed from a symbolic representation of a task. Recursive sampling-based planning is used to explore the space of possible continuous-space instantiations of these actions. We demonstrate the utility of our approach with a magnetic structure assembly task, showing that the robot can intelligently select a sequence of actions in different parts of the workspace and in the presence of obstacles. This approach can better adapt to new environments by selecting the correct high-level actions for the particular environment while taking human preferences into account.
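The discrete layer of such a search can be sketched as follows. This is a toy (the action names, preconditions, and effects are invented for illustration; the paper additionally samples continuous-space instantiations of each high-level action): symbolic actions carry preconditions and effects, and random rollouts explore the tree of action sequences until one reaches the goal.

```python
import random

# Toy sketch of searching a tree of discrete high-level actions (illustrative).
# Each action maps to (preconditions, effects) over a set of symbolic facts.
ACTIONS = {
    "pick":  (frozenset(), frozenset({"holding"})),
    "place": (frozenset({"holding"}), frozenset({"placed"})),
    "align": (frozenset({"placed"}), frozenset({"aligned"})),
}

def applicable(state, action):
    pre, _ = ACTIONS[action]
    return pre <= state            # all preconditions hold in the current state

def apply_action(state, action):
    _, eff = ACTIONS[action]
    return state | eff             # effects add facts to the state

def sample_plan(goal, max_depth=5, tries=1000, seed=0):
    """Randomly sample action sequences until one reaches the goal."""
    rng = random.Random(seed)
    for _ in range(tries):
        state, plan = frozenset(), []
        for _ in range(max_depth):
            options = [a for a in ACTIONS if applicable(state, a)]
            if not options:
                break
            a = rng.choice(options)
            state = apply_action(state, a)
            plan.append(a)
            if goal <= state:
                return plan
    return None

print(sample_plan(frozenset({"aligned"})))
```

Because "place" is only applicable once "pick" has established "holding", any sampled plan that reaches the goal respects the task's ordering constraints, which is the structure the symbolic representation provides.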


International Conference on Intelligent Robots and Systems (IROS) | 2017

Combining neural networks and tree search for task and motion planning in challenging environments

Chris Paxton; Vasumathi Raman; Gregory D. Hager; Marin Kobilarov

Task and motion planning subject to Linear Temporal Logic (LTL) specifications in complex, dynamic environments requires efficient exploration of many possible future worlds. Model-free reinforcement learning has proven successful in a number of challenging tasks, but shows poor performance on tasks that require long-term planning. In this work, we integrate Monte Carlo Tree Search with hierarchical neural net policies trained on expressive LTL specifications. We use reinforcement learning to find deep neural networks representing both low-level control policies and task-level “option policies” that achieve high-level goals. Our combined architecture generates safe and responsive motion plans that respect the LTL constraints. We demonstrate our approach in a simulated autonomous driving setting, where a vehicle must drive down a road in traffic, avoid collisions, and navigate an intersection, all while obeying rules of the road.
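The MCTS-plus-policy combination can be illustrated with a toy sketch. Everything here is a stand-in: the uniform `policy_prior` replaces the paper's learned deep option policies, and the counting game replaces the driving environment; only the search structure (selection with a prior-weighted exploration bonus, expansion, rollout, backup) matches the technique named in the abstract.

```python
import math, random

# Toy Monte Carlo Tree Search with a policy prior (illustrative only).
# Domain: start at 0, add 1 or 2 per move, hit exactly TARGET in HORIZON moves.
ACTIONS, TARGET, HORIZON = (1, 2), 6, 3

class Node:
    def __init__(self, state, depth):
        self.state, self.depth = state, depth
        self.children = {}                 # action -> child Node
        self.visits, self.value = 0, 0.0

def policy_prior(state):
    """Stand-in for a learned option policy: uniform over actions."""
    return {a: 1.0 / len(ACTIONS) for a in ACTIONS}

def ucb_select(node, c=1.4):
    """Pick the child maximizing value plus a prior-weighted exploration bonus."""
    prior = policy_prior(node.state)
    return max(node.children.items(),
               key=lambda kv: kv[1].value / (kv[1].visits + 1e-9)
               + c * prior[kv[0]] * math.sqrt(node.visits + 1) / (1 + kv[1].visits))

def rollout(state, depth, rng):
    while depth < HORIZON:
        state += rng.choice(ACTIONS)
        depth += 1
    return 1.0 if state == TARGET else 0.0

def mcts(iters=500, seed=0):
    rng = random.Random(seed)
    root = Node(0, 0)
    for _ in range(iters):
        node, path = root, [root]
        while node.depth < HORIZON:        # selection / expansion
            untried = [a for a in ACTIONS if a not in node.children]
            if untried:
                a = untried[0]
                node.children[a] = Node(node.state + a, node.depth + 1)
                node = node.children[a]
                path.append(node)
                break
            a, node = ucb_select(node)
            path.append(node)
        value = rollout(node.state, node.depth, rng)
        for n in path:                     # backup
            n.visits += 1
            n.value += value
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

print(mcts())  # only 2+2+2 reaches 6, so the best first action is 2
```

The design point the abstract makes is exactly this division of labor: the learned policies bias the search toward promising branches, while the tree search supplies the long-horizon lookahead that model-free reinforcement learning lacks.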


International Conference on Robotics and Automation (ICRA) | 2017

CoSTAR: Instructing collaborative robots with behavior trees and vision

Chris Paxton; Andrew Hundt; Felix Jonathan; Kelleher Guerin; Gregory D. Hager

For collaborative robots to become useful, end users who are not robotics experts must be able to instruct them to perform a variety of tasks. With this goal in mind, we developed a system for end-user creation of robust task plans with a broad range of capabilities. CoSTAR, the Collaborative System for Task Automation and Recognition, is our winning entry in the 2016 KUKA Innovation Award competition at the Hannover Messe trade show, which that year focused on Flexible Manufacturing. CoSTAR is unique in how it creates natural abstractions that use perception to represent the world in a way users can both understand and utilize to author capable and robust task plans. Our Behavior Tree-based task editor integrates high-level information from known object segmentation and pose estimation with spatial reasoning and robot actions to create robust task plans. We describe the cross-platform design and implementation of this system on multiple industrial robots and evaluate its suitability for a wide variety of use cases.


American Medical Informatics Association (AMIA) Annual Symposium | 2013

Developing predictive models using electronic medical records: challenges and pitfalls.

Chris Paxton; Alexandru Niculescu-Mizil; Suchi Saria


International Conference on Human-Robot Interaction (HRI) | 2016

Semi-Autonomous Telerobotic Assembly over High-Latency Networks

Jonathan Bohren; Chris Paxton; Ryan Howarth; Gregory D. Hager; Louis L. Whitcomb


arXiv: Robotics | 2018

Visual Robot Task Planning.

Chris Paxton; Yotam Barnoy; Kapil D. Katyal; Raman Arora; Gregory D. Hager


arXiv: Robotics | 2018

Training Frankenstein's Creature to Stack: HyperTree Architecture Search.

Andrew Hundt; Varun Jain; Chris Paxton; Gregory D. Hager


arXiv: Robotics | 2018

Occupancy Map Prediction Using Generative and Fully Convolutional Networks for Vehicle Navigation.

Kapil D. Katyal; Katie Popek; Chris Paxton; Joseph O. Moore; Kevin C. Wolfe; Philippe Burlina; Gregory D. Hager

Collaboration


Top co-authors of Chris Paxton, all at Johns Hopkins University:

Felix Jonathan
Andrew Hundt
Raman Arora
Colin Lea