2019 International Conference on Vision Towards Emerging Trends in Communication and Networking (ViTECoN)

Development of MEMS Sensor-Based Double Handed Gesture-To-Speech Conversion System


Abstract


Communication demands considerable effort from the vocally and hearing impaired population, who express themselves through sign language. Uninterrupted communication between the impaired and unimpaired groups of society is possible only if the unimpaired population is trained to interpret sign language. The aim of this work is to develop a double-handed gesture-to-speech conversion system that facilitates effective communication between an untrained, unimpaired listener and an impaired speaker. The proposed work involves the development of a MEMS sensor-based double-handed gesture recognition module that uses a hidden Markov model-based sign-to-text conversion system to convert the input hand gestures to bilingual text, and a hidden Markov model-based text-to-speech synthesizer to convert the corresponding text to synthetic speech. The proposed system achieves accuracies of 98% and 97% for single-handed and double-handed gestures, respectively.
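As a rough illustration of how an HMM-based sign-to-text stage of this kind could be structured (the abstract does not disclose the implementation), the sketch below trains one Gaussian HMM per gesture on 6-axis MEMS accelerometer sequences (one 3-axis stream per hand) and labels a new gesture by maximum log-likelihood. The hmmlearn library, the feature layout, the number of hidden states, and the gesture names are all assumptions, not details taken from the paper.

```python
# Hypothetical sketch: per-class Gaussian HMMs over 6-axis MEMS accelerometer
# streams, classifying a gesture by maximum log-likelihood. Feature layout,
# model sizes, and the use of hmmlearn are assumptions; the paper does not
# specify its implementation.
import numpy as np
from hmmlearn import hmm

N_FEATURES = 6                      # 3-axis accelerometer per hand, two hands (assumed)
N_STATES = 4                        # hidden states per gesture model (assumed)
GESTURES = ["hello", "thank_you"]   # placeholder gesture labels

def train_gesture_models(sequences_by_gesture):
    """Fit one GaussianHMM per gesture from lists of (T, 6) sensor sequences."""
    models = {}
    for gesture, seqs in sequences_by_gesture.items():
        X = np.vstack(seqs)                 # concatenate all training sequences
        lengths = [len(s) for s in seqs]    # per-sequence lengths for hmmlearn
        m = hmm.GaussianHMM(n_components=N_STATES, covariance_type="diag",
                            n_iter=50, random_state=0)
        m.fit(X, lengths)
        models[gesture] = m
    return models

def classify(models, seq):
    """Return the gesture whose HMM assigns the highest log-likelihood."""
    return max(models, key=lambda g: models[g].score(seq))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-in data: each gesture gets a different mean offset.
    data = {g: [rng.normal(loc=i, size=(30, N_FEATURES)) for _ in range(5)]
            for i, g in enumerate(GESTURES)}
    models = train_gesture_models(data)
    test = rng.normal(loc=1, size=(30, N_FEATURES))   # resembles "thank_you"
    print("Predicted gesture:", classify(models, test))
```

A per-class likelihood comparison is a common way to apply HMMs to isolated gesture recognition; the published system may well differ in its features, HMM topology, and training procedure.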

Pages 1-6
DOI 10.1109/ViTECoN.2019.8899435
Language English
Journal 2019 International Conference on Vision Towards Emerging Trends in Communication and Networking (ViTECoN)
