Jim Mainprice
Worcester Polytechnic Institute
Publications
Featured research published by Jim Mainprice.
Intelligent Robots and Systems | 2013
Jim Mainprice; Dmitry Berenson
In this paper we present a framework that allows a human and a robot to perform simultaneous manipulation tasks safely in close proximity. The proposed framework is based on early prediction of the human's motion. The prediction system, which builds on previous work in the area of gesture recognition, generates a prediction of human workspace occupancy by computing the swept volume of learned human motion trajectories. The motion planner then plans robot trajectories that minimize a penetration cost in the human workspace occupancy while interleaving planning and execution. Multiple plans are computed in parallel, one for each robot task available at the current time, and the trajectory with the least cost is selected for execution. We test our framework in simulation using recorded human motions and a simulated PR2 robot. Our results show that our framework enables the robot to avoid the human while still accomplishing the robot's task, even in cases where the initial prediction of the human's motion is incorrect. We also show that taking into account the predicted human workspace occupancy in the robot's motion planner leads to safer and more efficient interactions between the user and the robot than only considering the human's current configuration.
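The occupancy-prediction and least-cost plan selection described in the abstract can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's implementation: the voxel-grid representation, bounds, and the choice of summing occupancy values along the trajectory are all assumptions.

```python
import numpy as np

def occupancy_from_trajectories(trajectories, grid_shape, bounds):
    """Build a workspace-occupancy grid from predicted human motion
    trajectories. Each trajectory is an (N, 3) array of workspace points;
    the grid counts swept voxels, normalized to [0, 1]."""
    grid = np.zeros(grid_shape)
    lo, hi = np.asarray(bounds[0]), np.asarray(bounds[1])
    shape = np.asarray(grid_shape)
    for traj in trajectories:
        idx = ((traj - lo) / (hi - lo) * shape).astype(int)
        idx = np.clip(idx, 0, shape - 1)
        for i in idx:
            grid[tuple(i)] += 1.0
    return grid / max(grid.max(), 1e-9)

def penetration_cost(robot_traj, grid, bounds):
    """Sum of predicted-occupancy values visited by the robot trajectory."""
    lo, hi = np.asarray(bounds[0]), np.asarray(bounds[1])
    shape = np.asarray(grid.shape)
    idx = ((robot_traj - lo) / (hi - lo) * shape).astype(int)
    idx = np.clip(idx, 0, shape - 1)
    return float(sum(grid[tuple(i)] for i in idx))

def select_plan(candidate_trajs, grid, bounds):
    """Among the plans computed in parallel, pick the least-cost one."""
    costs = [penetration_cost(t, grid, bounds) for t in candidate_trajs]
    return int(np.argmin(costs)), costs
```

In this sketch, a candidate trajectory that cuts through the predicted human workspace accumulates a high cost and loses to one that detours around it.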
International Conference on Robotics and Automation | 2015
Jim Mainprice; Rafi Hayne; Dmitry Berenson
To enable safe and efficient human-robot collaboration in shared workspaces, it is important for the robot to predict how a human will move when performing a task. While predicting human motion for tasks not known a priori is very challenging, we argue that single-arm reaching motions for known tasks in collaborative settings (which are especially relevant for manufacturing) are indeed predictable. Two hypotheses underlie our approach for predicting such motions: First, that the trajectory the human performs is optimal with respect to an unknown cost function, and second, that human adaptation to their partner's motion can be captured well through iterative replanning with the above cost function. The key to our approach is thus to learn a cost function which “explains” the motion of the human. To do this, we gather example trajectories from two participants performing a collaborative assembly task using motion capture. We then use Inverse Optimal Control to learn a cost function from these trajectories. Finally, we predict a human's motion for a given task by iteratively replanning a trajectory for a 23-DoF human kinematic model using the STOMP algorithm with the learned cost function in the presence of a moving collaborator. Our results suggest that our method outperforms baseline methods and generalizes well for tasks similar to those that were demonstrated.
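The cost-learning idea, "find weights under which the demonstrated trajectories are cheaper than alternatives," can be illustrated with a minimal stand-in. The two features (path length and mean jerk) and the perceptron-style weight update below are illustrative assumptions; the paper uses Inverse Optimal Control on motion-capture data, not this exact scheme.

```python
import numpy as np

def features(traj):
    """Hypothetical trajectory features: path length and mean jerk magnitude.
    traj is an (N, d) array of configurations or workspace points."""
    diffs = np.diff(traj, axis=0)
    length = np.linalg.norm(diffs, axis=1).sum()
    jerk = np.diff(traj, n=3, axis=0)
    smooth = np.linalg.norm(jerk, axis=1).mean() if len(jerk) else 0.0
    return np.array([length, smooth])

def learn_weights(demos, sample_fn, iters=10, lr=0.01):
    """Perceptron-style inverse-optimal-control update: nudge the weights
    so that each demonstration costs less (under cost = w . features) than
    a sampled alternative trajectory produced by sample_fn."""
    w = np.ones(2)
    for _ in range(iters):
        for demo in demos:
            alt = sample_fn(demo)
            # If the demo's features are smaller, the gradient step
            # increases the corresponding weights.
            w -= lr * (features(demo) - features(alt))
            w = np.maximum(w, 0.0)  # keep cost weights non-negative
    return w
```

Because human reaching demonstrations tend to be short and smooth, a learner of this kind drives up the weights on length and jerk, which is what lets the planner reproduce human-like motion.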
Intelligent Service Robotics | 2014
Calder Phillips-Grafflin; Nicholas Alunni; Halit Bener Suay; Jim Mainprice; Daniel M. Lofaro; Dmitry Berenson; Sonia Chernova; Robert W. Lindeman; Paul Y. Oh
This paper presents our progress toward a user-guided manipulation framework for high degree-of-freedom robots operating in environments with limited communication. The system we propose consists of three components: (1) a user-guided perception interface that assists the user in providing task-level commands to the robot, (2) planning algorithms that autonomously generate robot motion while obeying relevant constraints, and (3) a trajectory execution and monitoring system which detects errors in execution. We report quantitative experiments performed on these three components and qualitative experiments of the entire pipeline with the PR2 robot turning a valve for the DARPA Robotics Challenge. We also describe how the framework was ported to the Hubo2+ robot with minimal changes, which demonstrates its applicability to different types of robots.
IEEE Transactions on Robotics | 2016
Jim Mainprice; Rafi Hayne; Dmitry Berenson
To enable safe and efficient human-robot collaboration in shared workspaces, it is important for the robot to predict how a human will move when performing a task. While predicting human motion for tasks not known a priori is very challenging, we argue that single-arm reaching motions for known tasks in collaborative settings (which are especially relevant for manufacturing) are indeed predictable. Two hypotheses underlie our approach for predicting such motions: First, that the trajectory the human performs is optimal with respect to an unknown cost function, and second, that human adaptation to their partner's motion can be captured well through iterative replanning with the above cost function. The key to our approach is thus to learn a cost function that “explains” the motion of the human. To do this, we gather example trajectories from pairs of participants performing a collaborative assembly task using motion capture. We then use inverse optimal control to learn a cost function from these trajectories. Finally, we predict reaching motions from the human's current configuration to a task-space goal region by iteratively replanning a trajectory using the learned cost function. Our planning algorithm is based on the stochastic trajectory optimization for motion planning (STOMP) algorithm [1]; it plans for a 23-degree-of-freedom human kinematic model and accounts for the presence of a moving collaborator and obstacles in the environment. Our results suggest that in most cases, our method outperforms baseline methods when predicting motions. We also show that our method outperforms baselines for predicting human motion when a human and a robot share the workspace.
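The STOMP-style replanning loop mentioned above is gradient-free: it samples noisy variations of the current trajectory, weights them by the exponentiated negative cost, and averages the perturbations. The sketch below is a simplified sketch of that update rule; the noise model, temperature, and sample counts are assumptions, and the real planner works over a 23-DoF kinematic model rather than a generic array.

```python
import numpy as np

def stomp_update(traj, cost_fn, rng, n_samples=50, noise=0.1, temperature=0.1):
    """One STOMP-style update: sample noisy variants of the trajectory,
    weight them by exp(-cost), and apply the weighted-average perturbation.
    Start and goal waypoints are held fixed."""
    eps = rng.normal(0.0, noise, size=(n_samples,) + traj.shape)
    eps[:, 0] = 0.0   # do not move the start configuration
    eps[:, -1] = 0.0  # do not move the goal configuration
    costs = np.array([cost_fn(traj + e) for e in eps])
    w = np.exp(-(costs - costs.min()) / temperature)
    w /= w.sum()
    return traj + np.tensordot(w, eps, axes=1)

def replan(traj, cost_fn, iters=50, seed=0):
    """Iteratively refine a trajectory; in the paper's setting, cost_fn is
    the learned cost, re-evaluated as the human collaborator moves."""
    rng = np.random.default_rng(seed)
    for _ in range(iters):
        traj = stomp_update(traj, cost_fn, rng)
    return traj
```

Because only cost evaluations are needed, the same loop works for any learned cost function, which is what makes it a natural fit for the inverse-optimal-control setting.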
International Journal of Social Robotics | 2014
Xavier Broquère; Alberto Finzi; Jim Mainprice; Silvia Rossi; Daniel Sidobre; Mariacarla Staffa
Human-robot collaborative work requires interactive manipulation and object handover. During the execution of such tasks, the robot should monitor manipulation cues to assess the human's intentions and quickly determine the appropriate execution strategies. In this paper, we present a control architecture that combines a supervisory attentional system with a human-aware manipulation planner to support effective and safe collaborative manipulation. After detailing the approach, we present experimental results describing the system at work with different manipulation tasks (give, receive, pick, and place).
International Conference on Robotics and Automation | 2014
Nicholas Alunni; Halit Bener Suay; Calder Phillips-Grafflin; Jim Mainprice; Dmitry Berenson; Sonia Chernova; Robert W. Lindeman; Daniel M. Lofaro; Paul Y. Oh
Supervision and teleoperation of high degree-of-freedom robots is a complex task due to environmental constraints such as obstacles and limited communication, as well as task-specific requirements such as using more than one end-effector at the same time. In this work we present a supervision and teleoperation framework that allows an operator to see the surroundings of a robot in 3D, make necessary adjustments for a dual- or single-arm manipulation task, preview the task in simulation before execution, and finally execute the task on a real robot. The framework has been applied to the valve turning task of the DARPA Robotics Challenge on the PR2, Hubo2+, and DRCHubo robots.
Journal of Intelligent and Robotic Systems | 2016
Calder Phillips-Grafflin; Halit Bener Suay; Jim Mainprice; Nicholas Alunni; Daniel M. Lofaro; Dmitry Berenson; Sonia Chernova; Robert W. Lindeman; Paul Y. Oh
In this paper, we present our system design, operational procedure, testing process, field results, and lessons learned for the valve-turning task of the DARPA Robotics Challenge (DRC). We present a software framework for cooperative traded control that enables a team of operators to control a remote humanoid robot over an unreliable communication link. Our system, composed of software modules running on-board the robot and on a remote workstation, allows the operators to specify the manipulation task in a straightforward manner. In addition, we have defined an operational procedure for the operators to manage the teleoperation task, designed to improve situation awareness and expedite task completion. Our testing process, consisting of hands-on intensive testing, remote testing, and remote practice runs, demonstrates that our framework is able to perform reliably and is resilient to unreliable network conditions. We analyze our approach, field tests, and experience at the DRC Trials and discuss lessons learned which may be useful for others when designing similar systems.
Intelligent Robots and Systems | 2014
Jim Mainprice; Calder Phillips-Grafflin; Halit Bener Suay; Nicholas Alunni; Daniel M. Lofaro; Dmitry Berenson; Sonia Chernova; Robert W. Lindeman; Paul Y. Oh
In this paper, we report lessons learned through the design of a framework for teleoperating a humanoid robot to perform a manipulation task. We present a software framework for cooperative traded control that enables a team of operators to control a remote humanoid robot over an unreliable communications link. The framework produces statically-stable motion trajectories that are collision-free and respect end-effector pose constraints. After operator confirmation, these trajectories are sent over the data link for execution on the robot. Additionally, we have defined a clear operational procedure for the operators to manage the teleoperation task. We applied our system to the valve turning task in the DARPA Robotics Challenge (DRC). Our framework is able to perform reliably and is resilient to unreliable network conditions, as we demonstrate in a set of test runs performed remotely over the internet. We analyze our approach and discuss lessons learned which may be useful for others when designing such a system.
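The "plan, confirm, then send over the link" workflow hinges on validating a candidate trajectory before transmission. A minimal sketch of such a check is below; the `in_collision` and `pose_error` callbacks are assumed stand-ins for the collision checker and forward-kinematics constraint evaluation, not the framework's actual API.

```python
def validate_trajectory(waypoints, in_collision, pose_error, tol=1e-3):
    """Check a candidate trajectory before sending it over the data link:
    every waypoint must be collision-free and satisfy the end-effector
    pose constraint within tolerance. Returns (ok, first_failure_index);
    first_failure_index is -1 when the whole trajectory is valid."""
    for i, q in enumerate(waypoints):
        if in_collision(q):
            return False, i
        if pose_error(q) > tol:
            return False, i
    return True, -1
```

Reporting the index of the first failing waypoint lets the operator inspect and repair exactly the offending part of the plan before re-confirming, which matters when each round trip over the degraded link is expensive.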
Intelligent Robots and Systems | 2016
Jim Mainprice; Nathan D. Ratliff; Stefan Schaal
National Conference on Artificial Intelligence | 2014
Jim Mainprice; Dmitry Berenson