2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)

Spatiotemporal 2D Skeleton-based Image for Dynamic Gesture Recognition Using Convolutional Neural Networks


Abstract


This paper presents a dynamic gesture recognition approach using a novel spatiotemporal 2D skeleton image representation that can be fed to computationally efficient deep convolutional neural networks, for applications in human-robot interaction. Gestures are a seamless modality of human interaction and represent a potentially natural way to interact with the smart devices around us, such as robots. The contribution of this paper is a visually interpretable representation of dynamic gestures with a two-fold advantage: (i) it conveys both spatial and temporal characteristics, relying on a technique inspired by computer graphics, and (ii) it can be used with simple and efficient convolutional neural network architectures. In our representation, a 3D skeleton model is projected to a 2D camera point-of-view, preserving spatial relations, and the temporal domain is encoded through a sliding window that fuses consecutive frames into a single image, with a shading motion effect achieved by manipulating a transparency coefficient. The result is a 2D image that, when fed to simple custom-designed convolutional neural networks, yields accurate classification of dynamic gestures. Experimental results obtained with a purposely captured dataset of 6 gestures performed by 11 subjects, as well as 2 public datasets, give evidence of the strong performance of our approach compared to other methods.
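To make the fused-image construction concrete, below is a minimal sketch of the sliding-window fusion idea described in the abstract, using NumPy and OpenCV. It is an illustrative approximation, not the paper's implementation: the `fuse_window` function, the `BONES` connectivity, the window length, and the linear alpha schedule are all assumptions, and joint coordinates are assumed to be already projected to 2D pixel positions.

```python
# Sketch (not the paper's code) of the spatiotemporal fusion idea:
# consecutive 2D-projected skeleton frames are drawn into one image,
# with older frames faded via a transparency coefficient so the
# gesture leaves a shaded motion trail. Hypothetical: BONES layout,
# window length, and the linear alpha schedule.
import numpy as np
import cv2

BONES = [(0, 1), (1, 2), (2, 3), (2, 4)]  # hypothetical (parent, child) joint pairs

def fuse_window(frames_2d, size=(224, 224), window=8):
    """Fuse the last `window` frames of 2D joint coordinates into one image.

    frames_2d: sequence of (num_joints, 2) arrays of pixel coordinates,
    already projected from the 3D skeleton to the camera's point of view.
    """
    canvas = np.zeros((size[1], size[0]), dtype=np.float32)
    recent = frames_2d[-window:]
    for t, joints in enumerate(recent):
        alpha = (t + 1) / len(recent)  # newest frame opaque, oldest faintest
        layer = np.zeros_like(canvas)
        for a, b in BONES:
            p1 = tuple(map(int, joints[a]))
            p2 = tuple(map(int, joints[b]))
            cv2.line(layer, p1, p2, color=1.0, thickness=2)
        # Keep the brightest (most recent) trail wherever frames overlap.
        canvas = np.maximum(canvas, alpha * layer)
    return (canvas * 255).astype(np.uint8)

# Toy usage: a 5-joint "arm" translating rightward over 8 frames.
base = np.array([[50, 180], [80, 140], [110, 100], [140, 80], [120, 130]])
frames = [base + np.array([6 * t, 0]) for t in range(8)]
image = fuse_window(frames)  # single-channel CNN input with a motion trail
```

The resulting single image encodes where the skeleton was and in what order, which is what allows a plain 2D CNN, rather than a recurrent or 3D architecture, to classify the dynamic gesture.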

Pages 1138-1144
DOI 10.1109/RO-MAN50785.2021.9515418
Language English
Journal 2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)
