2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) | 2019

3D Move to See: Multi-perspective visual servoing towards the next best view within unstructured and occluded environments


Abstract


In this paper we present a novel approach, termed 3D Move to See (3DMTS), based on the principle of finding the next best view using a 3D camera array and a robotic manipulator to sample the scene from multiple perspectives. Distinct from traditional visual servoing and next-best-view approaches, the proposed method uses simultaneously captured views, scene segmentation and an objective function evaluated at each perspective to estimate, in a "single shot", a gradient representing the direction of the next best view. The method is demonstrated in simulation and on a real robot equipped with a custom 3D camera array, for the challenging scenario of robotic harvesting in a highly occluded and unstructured environment. We show on a real robotic platform that moving the eye-in-hand camera along the gradient of an objective function leads to a locally optimal view of the object of interest, even amongst occlusions. Overall, the 3DMTS approach obtained a mean increase in target size of 29.3%, compared with 9.17% for a baseline method using a single RGB-D camera. The results demonstrate qualitatively and quantitatively that 3DMTS performed better in most scenarios, yielding roughly three times the increase in target size of the baseline method. Enlarging an occluded target in the image can help robotic systems detect key object features for further manipulation tasks, such as grasping and harvesting.
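The "single shot" gradient idea described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes each camera in the array sits at a known 3D offset from a reference camera and reports a scalar objective score (e.g. the segmented target's area in its view); the function name, the least-squares formulation and the toy numbers are all hypothetical.

```python
import numpy as np

def estimate_view_gradient(offsets, scores):
    """Estimate the objective-function gradient from one simultaneous
    capture of a camera array.

    offsets: (n, 3) array of camera positions relative to the array
             centre; the first entry is the reference camera at the
             origin.
    scores:  (n,) array of objective values, one per camera (e.g.
             segmented target area in that camera's view).

    Solving the least-squares system  d_i . g ~= f_i - f_ref  over the
    outer cameras yields a gradient g whose direction points towards
    the next best view.
    """
    offsets = np.asarray(offsets, dtype=float)
    scores = np.asarray(scores, dtype=float)
    A = offsets[1:]              # displacements of the outer cameras
    b = scores[1:] - scores[0]   # objective differences vs. reference
    g, *_ = np.linalg.lstsq(A, b, rcond=None)
    return g                     # candidate step direction for the end-effector

# Toy example: four cameras, objective improves along +x only.
offsets = [[0, 0, 0], [0.05, 0, 0], [0, 0.05, 0], [0, 0, 0.05]]
scores = [0.30, 0.40, 0.30, 0.30]
g = estimate_view_gradient(offsets, scores)  # points along +x
```

In a servoing loop, the manipulator would take a small step along the normalised gradient, re-capture, and repeat until the objective stops improving, i.e. a locally optimal view is reached.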

Pages 3890-3897
DOI 10.1109/IROS40897.2019.8967918
Language English
Journal 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
