Robotics Auton. Syst. | 2019

Deep effect trajectory prediction in robot manipulation


Abstract


Imagining the consequences of one's own actions, before and during their execution, allows agents to choose actions based on their simulated outcomes and to monitor progress by comparing observed with simulated behavior. In this study, we propose a deep model that enables a robot to learn to predict the consequences of its manipulation actions from its own interaction experience with objects of various shapes. Given the top-down image of the object, the robot learns to predict the movement trajectory of the object during execution of a lever-up action performed with a screwdriver in a physics-based simulator. The prediction is realized in two stages; the system first computes a number of features from the object and then generates the complete motion trajectory of the center of mass of the object using Long Short Term Memory (LSTM) models. In the first step, we investigated the use of various feature descriptors: shape context, which encodes a distributed representation of the positions of the object boundary points; unsupervised features extracted from autoencoders; Convolutional Neural Network (CNN) based features trained jointly with the LSTMs; and finally task-specific supervised features engineered to capture the underlying dynamics of the lever-up action. The models are trained in simulation with objects of varying edge numbers and tested in both the simulated and the real world. Our deep and generic CNN-based LSTM model outperformed the predictors that use unsupervised representations such as shape descriptors or autoencoder features on the simulated test set. Additionally, it was shown to generalize well to novel object shapes that were not experienced during model training. Finally, our model was shown to perform well in predicting the consequences of lever-up actions generated by a screwdriver attached to the gripper of a real UR10 robot. We further showed that our system can predict qualitatively different trajectories of objects that roll off the table or tumble over as a result of the lever-up action.
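The abstract describes a two-stage predictor in which CNN features of the top-down object image are trained jointly with an LSTM that generates the object's center-of-mass trajectory. The sketch below is a minimal, hypothetical PyTorch illustration of such a CNN+LSTM architecture; the layer sizes, trajectory length, 3-D output format, and all names are assumptions for illustration and do not reflect the authors' exact configuration.

```python
# Hypothetical sketch of a CNN + LSTM effect-trajectory predictor.
# All layer sizes, the trajectory length, and module names are illustrative
# assumptions; they are not the configuration reported in the paper.
import torch
import torch.nn as nn

class CNNLSTMTrajectoryPredictor(nn.Module):
    def __init__(self, feature_dim=64, hidden_dim=128, traj_len=50):
        super().__init__()
        self.traj_len = traj_len
        # CNN encoder for the top-down object image (1 x 64 x 64 assumed).
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, feature_dim), nn.ReLU(),
        )
        # LSTM unrolled for traj_len steps, fed the image features at each step.
        self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
        # Map each hidden state to a 3-D center-of-mass position.
        self.head = nn.Linear(hidden_dim, 3)

    def forward(self, image):
        feats = self.encoder(image)                        # (B, feature_dim)
        seq = feats.unsqueeze(1).repeat(1, self.traj_len, 1)
        hidden, _ = self.lstm(seq)                         # (B, traj_len, hidden_dim)
        return self.head(hidden)                           # (B, traj_len, 3)

# Usage: predict CoM trajectories for a batch of top-down object images.
model = CNNLSTMTrajectoryPredictor()
images = torch.randn(8, 1, 64, 64)
trajectories = model(images)   # shape (8, 50, 3)
```

In this sketch the image features are simply repeated at every time step so that the encoder and the LSTM can be trained end to end with a trajectory regression loss; the alternative feature pipelines mentioned in the abstract (shape context, autoencoder, or engineered features) would replace the CNN encoder with a fixed feature extractor.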

Volume 119
Pages 173-184
DOI 10.1016/J.ROBOT.2019.07.003
Language English
Journal Robotics Auton. Syst.
