
Publication


Featured research published by M. Geetha.


International Conference on Technology for Education | 2011

Gesture Recognition for American Sign Language with Polygon Approximation

M. Geetha; Rohit Menon; Suranya Jayan; Raju James; G V V Janardhan

We propose a novel method to recognize the static-gesture symbols of the American Sign Language alphabet (A-Z). Many existing systems require special data acquisition devices such as data gloves, which are expensive and difficult to handle. Other methods, such as fingertip detection, cannot recognize letters formed with closed fingers. In our method, the boundary of the gesture image is approximated to a polygon with the Douglas-Peucker algorithm, and each edge of the polygon is assigned a Freeman chain code direction; the differences between consecutive directions form a difference chain code. We use the fingertip count along with the difference chain code sequence as the feature vector. Matching first looks for a perfect match; if none exists, substring matching is performed. The method efficiently recognizes both open- and closed-finger gestures.
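The boundary simplification and chain coding described above can be sketched roughly as follows. The tolerance value and the 8-direction Freeman coding convention are illustrative assumptions, not the paper's exact parameters:

```python
import math

def douglas_peucker(points, eps):
    """Recursively simplify a polyline: keep an interior point only if it
    lies farther than eps from the chord joining the segment endpoints."""
    if len(points) < 3:
        return points
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    norm = math.hypot(dx, dy) or 1.0
    # perpendicular distance of each interior point to the chord
    dists = [abs(dy*(x-x0) - dx*(y-y0)) / norm for x, y in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__)
    if dists[i] <= eps:
        return [points[0], points[-1]]
    left = douglas_peucker(points[:i+2], eps)
    right = douglas_peucker(points[i+1:], eps)
    return left[:-1] + right          # avoid duplicating the split point

def freeman_direction(p, q):
    """8-direction Freeman code for the edge p->q (0 = east, counter-clockwise)."""
    ang = math.atan2(q[1]-p[1], q[0]-p[0]) % (2*math.pi)
    return int(round(ang / (math.pi/4))) % 8

def difference_chain_code(polygon):
    """Direction change between consecutive polygon edges, modulo 8."""
    codes = [freeman_direction(polygon[i], polygon[i+1])
             for i in range(len(polygon)-1)]
    return [(codes[i+1]-codes[i]) % 8 for i in range(len(codes)-1)]
```

A noisy boundary such as `[(0,0),(1,0.05),(2,0),(2,1),(2,2)]` collapses to the three corner points at `eps=0.1`, after which the difference code captures only the turn at the corner.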


2013 International Conference on Emerging Trends in Communication, Control, Signal Processing and Computing Applications (C2SPCA) | 2013

A vision based dynamic gesture recognition of Indian Sign Language on Kinect based depth images

M. Geetha; C. Manjusha; P. Unnikrishnan; R. Harikrishnan

Indian Sign Language (ISL) is a visual-spatial language which conveys linguistic information using hands, arms, facial expressions, and head/body postures. Our work aims at recognizing 3D dynamic signs corresponding to ISL words. With the advent of 3D sensors such as the Microsoft Kinect camera, 3D geometric processing of images has received much attention in recent research. We have captured 3D dynamic gestures of ISL words using a Kinect camera and propose a novel method for extracting features from these gestures. While languages like American Sign Language (ASL) are widely studied, ISL was standardized only recently, and its recognition is therefore less explored. The method extracts features from the signs and converts them to the intended textual form, integrating both local and global information of the dynamic sign. A new trajectory-based feature extraction method using the concept of the Axis of Least Inertia (ALI) is proposed for global feature extraction. For local feature extraction, we propose an eigen-distance-based method using seven 3D key points extracted with the Kinect: five at the fingertips, one at the centre of the palm, and one at the lower part of the palm. As the results show, integrating 3D local features improves the performance of the system. Apart from serving as an aid for the deaf, applications of the system include a sign language tutor, an interpreter, and electronic systems that take gesture input from users.
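The Axis of Least Inertia used for global feature extraction is the principal axis of the point set, passing through its centroid. A minimal 2D sketch (the paper works with 3D Kinect trajectories) using the closed-form second moments might look like:

```python
import math

def axis_of_least_inertia(points):
    """Return the centroid and orientation angle of the axis of least
    inertia of a 2-D point set: the line through the centroid about which
    the summed squared perpendicular distances are minimal."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    # central second moments
    mxx = sum((x-cx)**2 for x, _ in points)
    myy = sum((y-cy)**2 for _, y in points)
    mxy = sum((x-cx)*(y-cy) for x, y in points)
    # closed-form orientation of the principal axis
    theta = 0.5 * math.atan2(2*mxy, mxx - myy)
    return (cx, cy), theta
```

For points spread along the line y = x, the axis comes out at 45 degrees, as expected; a gesture trajectory's ALI can then serve as a pose-normalizing reference frame.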


2013 International Conference on Emerging Trends in Communication, Control, Signal Processing and Computing Applications (C2SPCA) | 2013

Animation system for Indian Sign Language communication using LOTS notation

Ruviansh J. Raghavan; Kiran A. Prasad; Rahul Muraleedharan; M. Geetha

This paper presents an application aiding the social integration of the deaf community in India into the mainstream of society. Text entered in English is used to generate an animated gesture sequence representing its content. The application consists of three main parts: an interface that allows the user to enter words, a language processing system that converts English text to ISL format, and a virtual avatar that acts as an interpreter conveying the information at the interface. The gestures are dynamically animated using a novel method we devised to map the kinematic data for each word. After translation into ISL, each word is looked up in a database that stores its notation. This notation, called LOTS notation, represents parameters that let the system identify hand location (L), hand orientation (O) in 3D space, hand trajectory movement (T), hand shapes (S), and non-manual components such as facial expression. The animation of an input sentence is thus produced from the sequence of notations, queued in order of appearance. We also insert movement epenthesis, the inter-sign transition gesture, to avoid jitter between signs. More than a million deaf adults and around half a million deaf children in India use Indian Sign Language (ISL) as a mode of communication. This system serves as an initiative toward sign language communication in the banking domain, where the low dependence on audio supports the cause.
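One way to picture the pipeline is a record per word carrying the LOTS parameters, with the animation queue built in sentence order and an epenthesis placeholder between consecutive signs. The field names and the string-valued placeholder below are hypothetical; the paper does not publish its schema:

```python
from dataclasses import dataclass

@dataclass
class LOTSNotation:
    # Hypothetical field names mirroring the L, O, T, S parameters.
    location: str        # L: hand location in 3-D space
    orientation: str     # O: palm/hand orientation
    trajectory: str      # T: hand trajectory movement
    shape: str           # S: hand shape
    non_manual: str = "" # e.g. facial expression

def animation_queue(signs):
    """Queue each sign's notations in order of appearance, inserting a
    movement-epenthesis placeholder between consecutive signs so the
    avatar transitions smoothly instead of jumping between poses."""
    queue = []
    for i, sign in enumerate(signs):
        if i:
            queue.append("EPENTHESIS")
        queue.extend(sign)
    return queue
```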


Proceedings of the 2014 International Conference on Interdisciplinary Advances in Applied Computing | 2014

An improved Human Action Recognition system using RSD Code generation

M. Geetha; B. Anandsankar; Lakshmi S. Nair; T. Amrutha; Amith Rajeev

This paper presents a novel method for recognizing human actions from a series of video frames. It is based on the generation of an RSD (Region, Speed, Direction) code, which can recognize most common activities despite spatio-temporal variability between subjects. Most existing research focuses on the upper body or relies on hand and leg trajectories; purely trajectory-based approaches give less accurate results because action patterns vary between subjects. The RSD code combines three factors, region, speed, and direction, to detect an action, and together they give better recognition results. The proposed method is robust to occlusion, positional errors, and missing information, and its results are comparable to those of existing human action detection algorithms.
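One plausible reading of the RSD code is a per-frame-transition triple quantizing where the subject is, how fast they move, and which way. The grid size, speed threshold, and 8-way direction bins below are illustrative choices, not parameters given in the abstract:

```python
import math

def rsd_code(trajectory, grid=(3, 3), frame_size=(100, 100), speed_thresh=5.0):
    """Emit one (Region, Speed, Direction) triple per frame transition of
    a 2-D point trajectory: the region is a cell index in a coarse grid
    over the frame, speed is thresholded into slow/fast, and direction is
    quantized into 8 bins (0 = east, counter-clockwise)."""
    w, h = frame_size
    codes = []
    for (x0, y0), (x1, y1) in zip(trajectory, trajectory[1:]):
        region = int(y1 * grid[1] // h) * grid[0] + int(x1 * grid[0] // w)
        speed = math.hypot(x1 - x0, y1 - y0)
        s = "fast" if speed > speed_thresh else "slow"
        d = int(round((math.atan2(y1-y0, x1-x0) % (2*math.pi)) / (math.pi/4))) % 8
        codes.append((region, s, d))
    return codes
```

The resulting code sequence could then be matched against per-action templates, which is less sensitive to exact trajectory shape than raw point matching.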


IEEE Recent Advances in Intelligent Computational Systems | 2013

Dynamic gesture recognition of Indian sign language considering local motion of hand using spatial location of Key Maximum Curvature Points

M. Geetha; P. V. Aswathi

Sign language is the most natural mode of expression for the deaf community. Indian Sign Language (ISL) is a visual-spatial language which conveys linguistic information using hands, arms, facial expressions, and head/body postures. In this paper we propose a new method for vision-based recognition of dynamic signs corresponding to ISL words. A new key frame extraction method is proposed that is more accurate than existing methods: the frames corresponding to the Maximum Curvature Points (MCPs) of the global trajectory are taken as the key frames. The method accommodates the spatio-temporal variability that occurs when different persons perform the same gesture. We also propose a new shape feature extraction method for key frames based on the spatial location of the Key Maximum Curvature Points of the boundary. Compared with three existing methods, our method gives better performance. It considers both local and global trajectory information for recognition, and the feature extraction has proved to be scale and translation invariant.
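A simple stand-in for MCP-based key frame selection is to score each interior trajectory point by its turning angle and keep local maxima above a threshold. The threshold and the turning-angle proxy for curvature are assumptions for illustration:

```python
import math

def key_maximum_curvature_points(trajectory, angle_thresh=0.5):
    """Return frame indices where the trajectory's turning angle peaks.
    The turning angle at point i is the signed direction change between
    the incoming and outgoing segments; a local maximum of its magnitude
    above angle_thresh (radians) is taken as a Maximum Curvature Point."""
    def turn(i):
        ax, ay = (trajectory[i][0] - trajectory[i-1][0],
                  trajectory[i][1] - trajectory[i-1][1])
        bx, by = (trajectory[i+1][0] - trajectory[i][0],
                  trajectory[i+1][1] - trajectory[i][1])
        # wrap the direction change into (-pi, pi] before taking magnitude
        da = (math.atan2(by, bx) - math.atan2(ay, ax) + math.pi) % (2*math.pi) - math.pi
        return abs(da)
    angles = {i: turn(i) for i in range(1, len(trajectory) - 1)}
    return [i for i, a in angles.items()
            if a > angle_thresh
            and a >= angles.get(i-1, 0.0) and a >= angles.get(i+1, 0.0)]
```

An L-shaped hand path yields a single MCP at the corner frame, which would then be extracted as a key frame for shape analysis.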


International Conference on Advanced Computing | 2013

A Stroke Based Representation of Indian Sign Language Signs Incorporating Global and Local Motion Information

M. Geetha; P. V. Aswathi; M. R. Kaimal

Sign language is a visual gesture language used by speech-impaired people to convey their thoughts and ideas with the help of hand gestures and facial expressions. This paper presents a stroke-based representation of dynamic gestures of Indian Sign Language signs incorporating both local and global motion information. This compact representation of a gesture is analogous to the phonemic representation of speech signals. To incorporate the local motion of the hand, each stroke also contains features corresponding to the hand shape. The dynamic gesture trajectories are segmented based on Maximum Curvature Points (MCPs), selected where the trajectory changes direction; the frames corresponding to the MCPs are taken as key frames, and local features are extracted from the hand shape in those frames. Existing sign language recognition methods have scalability problems, high complexity, and a need for extensive training data. In contrast, our stroke-based representation has a less expensive training phase, since it only requires training the stroke features and stroke sequences of each word, and it also addresses the issue of scalability. We have tested our approach in the context of Indian Sign Language recognition and present the results of this study.


Multimedia Tools and Applications | 2018

A 3D stroke based representation of sign language signs using key maximum curvature points and 3D chain codes

M. Geetha; M. R. Kaimal

Sign language is a visual-spatial language used by the deaf community to convey thoughts and ideas with the help of hand gestures and facial expressions. This paper proposes a novel 3D stroke-based representation of dynamic sign language gestures incorporating both local and global motion information. The dynamic gesture trajectories are segmented into strokes, or sub-units, based on Key Maximum Curvature Points (KMCPs) of the trajectory. This representation lets us uniquely describe signs with fewer key frames. We extract 3D global features from global trajectories by encoding strokes as 3D codes: each stroke is divided into smaller units (stroke subsegment vectors, or SSVs), and each unit is assigned to one of 22 partitions of the sphere of directions. The partitions are obtained by a discretisation procedure we call an equivolumetric partition (EVP) of the sphere, and the resulting stroke codes are referred to as EVP codes. In addition to global and local hand motion, facial expressions are considered for non-manual signs so that the meaning of words is interpreted completely. In contrast to existing methods, our stroke-based representation has a less expensive training phase, since it only requires training the key stroke features and stroke sequences of each word.
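The paper does not spell out its 22-cell construction here, but one equal-solid-angle partition with exactly 22 cells is two polar caps plus 4 latitude bands of 5 longitude sectors each (2 + 4*5 = 22). The sketch below maps a direction vector to such a cell index; treat the layout as an assumption:

```python
import math

def evp_code(v, n_bands=4, sectors_per_band=5):
    """Map a 3-D direction vector to one of 22 equal-solid-angle cells:
    cell 0 and 1 are the north and south caps; the rest are latitude
    bands split into longitude sectors. Equal area follows from slicing
    the sphere at equal z-intervals (Archimedes' hat-box theorem)."""
    x, y, z = v
    r = math.sqrt(x*x + y*y + z*z)
    z /= r
    cells = 2 + n_bands * sectors_per_band          # 22
    cap = 2.0 / cells                               # z-extent of each cap
    if z > 1 - cap:
        return 0                                    # north cap
    if z < -1 + cap:
        return 1                                    # south cap
    band = int((1 - cap - z) / ((2 - 2*cap) / n_bands))
    band = min(band, n_bands - 1)                   # guard the boundary
    lon = math.atan2(y, x) % (2*math.pi)
    sector = int(lon / (2*math.pi / sectors_per_band)) % sectors_per_band
    return 2 + band * sectors_per_band + sector
```

A stroke's subsegment vectors quantized this way become a short symbol string per stroke, which is what makes the training phase cheap.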


IEEE International Conference on Electronics, Computing and Communication Technologies | 2015

Disrupted structural connectivity using diffusion tensor tractography in epilepsy

M. Geetha; Suchithra S Pillay

Human thoughts and emotions are communicated between different brain regions through pathways comprising white matter tracts. Diffusion Tensor Imaging (DTI) is a relatively new Magnetic Resonance Imaging (MRI) technique for locating white matter lesions that cannot be found with other types of clinical MRI. Fiber tracking with streamline tractography has the limitation that it cannot detect crossing or branching fibers. The Fast Marching tractography technique overcomes this limitation and detects branching fibers correctly, but it takes more time than streamline tracking. For tracking fiber pathways non-invasively, we propose an approach that combines the advantages of both techniques, Fiber Assignment by Continuous Tracking (FACT) and Fast Marching, to track fiber pathways as accurately as Fast Marching but in less time, as with FACT.


International Conference on Communications | 2014

An improved method for segmentation of point cloud using Minimum Spanning Tree

M. Geetha; Rakendu R

With the development of low-cost 3D sensing hardware such as the Kinect, three-dimensional digital images have become popular in medical diagnosis, robotics, and other fields. Image segmentation is one of the difficult tasks in image processing; the problem becomes simpler when the depth channel is added alongside height and width. The proposed algorithm uses a Minimum Spanning Tree (MST) for the segmentation of point clouds. As a preprocessing step, a first level of clustering yields groups of cluttered objects. Each cluttered group is then segmented at a finer level using an MST based on distance and surface normals: we build a weighted planar graph over each clustered cloud and construct the MST of that graph. By taking advantage of the normals, we can separate the supporting surface from the objects. The proposed method is applied to different 3D scenes and the results are discussed.
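The MST-based split can be sketched with Kruskal's algorithm: merge points along the shortest edges first, and never merge across an edge longer than a cut threshold, so the surviving forest's components are the segments. This sketch weighs edges by distance only; the paper additionally uses surface-normal differences:

```python
import math

def mst_segment(points, cut_thresh):
    """Kruskal's algorithm over the complete distance graph of a point
    set. Edges longer than cut_thresh are never merged, so the union-find
    components that remain are the segments. Returns one label per point."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        # path-halving union-find lookup
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    edges = sorted(
        (math.dist(points[i], points[j]), i, j)
        for i in range(n) for j in range(i + 1, n))
    for w, i, j in edges:
        if w > cut_thresh:
            break                      # all remaining edges are longer
        parent[find(i)] = find(j)
    roots = {}
    return [roots.setdefault(find(i), len(roots)) for i in range(n)]
```

Two tight clusters far apart come out with two distinct labels; in practice one would restrict edges to k-nearest neighbours rather than the complete graph for large clouds.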


International Conference on Communications | 2014

An improved content based image retrieval in RGBD images using Point Clouds

M. Geetha; Meera P Paul; M. R. Kaimal

A content-based image retrieval (CBIR) system helps users retrieve images based on their contents, so a reliable CBIR method must extract the important information in an image: texture, color, the shape of objects, and so on. For RGBD images, the 3D surface of the object is the most important feature. We propose a new algorithm which recognizes 3D objects using 3D surface shape features, 2D boundary shape features, and color features, and we present an efficient method for 3D object shape extraction: first- and second-order derivatives over the 3D coordinates of the point cloud detect landmark points on the surface of the RGBD object. The proposed algorithm identifies 3D surface shape features efficiently. The implementation uses the Point Cloud Library (PCL). Experimental results show that the proposed method is effective and efficient, giving a classification rate above 80% for all objects in our test data, eliminating false positives, and yielding higher retrieval accuracy.
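As a toy illustration of derivative-based landmark detection, an ordered 3D point profile can be scanned for indices where the discrete second derivative (central difference) is large. The threshold and the 1-D ordering are simplifying assumptions; the paper works on full point clouds via PCL:

```python
def landmark_points(profile, thresh=0.5):
    """Flag indices of an ordered 3-D point sequence where the discrete
    second derivative p[i-1] - 2*p[i] + p[i+1] has a large magnitude,
    i.e. where the surface profile bends sharply."""
    marks = []
    for i in range(1, len(profile) - 1):
        d2 = tuple(profile[i-1][k] - 2*profile[i][k] + profile[i+1][k]
                   for k in range(3))
        mag = sum(c*c for c in d2) ** 0.5
        if mag > thresh:
            marks.append(i)
    return marks
```

A flat profile yields no landmarks, while a bump flags the points around it; a full pipeline would use such landmarks as the shape signature for retrieval.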

Collaboration


Dive into M. Geetha's collaborations.

Top Co-Authors (all affiliated with Amrita Vishwa Vidyapeetham)

P. V. Aswathi
Amith Rajeev
B. Anandsankar
G V V Janardhan
Kiran A. Prasad
Lakshmi S. Nair
Meera P Paul
Raju James