
Positioning the 5-DOF Robotic Arm using Single Stage Deep CNN Model

Abstract


In a teleoperation mechanism, surgical robots are controlled using hand gestures from a remote location. Controlling a robotic arm remotely through hand gesture recognition is a challenging computer vision problem, and recognizing hand actions in complex environments (cluttered backgrounds, lighting variation, scale variation, etc.) is difficult and time consuming. In this paper, a lightweight Convolutional Neural Network (CNN) model, Single Shot Detector (SSD) Lite MobileNet-V2, is proposed for real-time hand gesture recognition. SSD Lite variants are well suited to running hand gesture recognition applications on low-power computing devices such as the Raspberry Pi because of their small size and fast inference. The model is deployed using a camera and two Raspberry Pi controllers: Raspberry Pi controller 1 performs the hand gesture recognition and transfers the result to the cloud server, while Raspberry Pi controller 2 receives the information from the cloud and controls the robotic arm operations. The performance of the proposed model is also compared with an SSD Inception-V2 model on the MITI Hand Dataset-II (MITI HD-II). The average precision, average recall, and F1-score of the SSD Lite MobileNet-V2 and SSD Inception-V2 models are analyzed by training and testing with a learning rate of 0.0002 using the Adam optimizer. The SSD Lite MobileNet-V2 model obtained an average precision of 98.74%, compared with 99.27% for the SSD Inception-V2 model. The prediction time for the SSD Lite MobileNet-V2 model on the Raspberry Pi controller is only 0.67 s, whereas the SSD Inception-V2 model takes 1.2 s.
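
The deployment pipeline described in the abstract can be sketched roughly as follows. The snippet below is a minimal, hypothetical illustration (not the authors' code) of what Raspberry Pi controller 1 might run: it loads a TensorFlow Lite export of SSD Lite MobileNet-V2, detects the hand gesture in each camera frame, and publishes the recognized label to a cloud broker. The model file name, gesture labels, MQTT broker and topic, input size, and confidence threshold are all assumptions made for illustration; the paper does not specify the cloud transport or the gesture classes.

# Hypothetical sketch of Raspberry Pi controller 1: SSD Lite MobileNet-V2
# inference on camera frames, with the recognized gesture pushed to a
# cloud MQTT broker. Assumes a uint8-quantized TFLite model with the
# standard SSD detection post-process outputs (boxes, classes, scores, count).
import cv2
import numpy as np
import paho.mqtt.client as mqtt
from tflite_runtime.interpreter import Interpreter

MODEL_PATH = "ssdlite_mobilenet_v2_gestures.tflite"            # assumed file name
LABELS = ["open_palm", "fist", "point", "thumbs_up", "stop"]   # assumed classes
BROKER, TOPIC = "cloud.example.com", "robot/gesture"           # assumed endpoint

# Load the detector once and query its input/output tensor layout.
interpreter = Interpreter(model_path=MODEL_PATH)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
outs = interpreter.get_output_details()

client = mqtt.Client()
client.connect(BROKER, 1883)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # SSD expects a fixed-size input (300x300 for the standard SSD Lite head).
    resized = cv2.resize(frame, (300, 300))
    tensor = np.expand_dims(resized, axis=0).astype(inp["dtype"])
    interpreter.set_tensor(inp["index"], tensor)
    interpreter.invoke()
    classes = interpreter.get_tensor(outs[1]["index"])[0]
    scores = interpreter.get_tensor(outs[2]["index"])[0]
    best = int(np.argmax(scores))
    if scores[best] > 0.6:                       # assumed confidence threshold
        gesture = LABELS[int(classes[best])]
        client.publish(TOPIC, gesture)           # controller 2 subscribes to this topic

Under the same assumptions, Raspberry Pi controller 2 would subscribe to the same topic and map each received gesture label to joint commands for the 5-DOF arm, for example through PWM servo control.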

Pages 1-6
DOI 10.1109/ICBSII51839.2021.9445124
Language English
Journal 2021 Seventh International conference on Bio Signals, Images, and Instrumentation (ICBSII)
