
Publications

Featured research published by E. Kiran Kumar.


International Conference on Signal Processing | 2015

Medical image watermarking with DWT-BAT algorithm

P. V. V. Kishore; S. R. C. Kishore; E. Kiran Kumar; K. V. V. Kumar; P. Aparna

Medical images convey vital information to doctors about a patient's health. The internet transmits these medical images to remote sites around the globe, where they are inspected by specialist doctors, but transmission over an unsecured web raises authentication problems for any image data. Medical images transmitted over the internet must therefore be watermarked with patient photographs so that doctors can authenticate them. Watermarking medical images requires careful adjustment to preserve the diagnostic information while embedding the patient-image watermark. The medical image serves as the envelope (cover) image in the watermarking process and is the image visible on the network. These envelope images are watermarked with patient images in the wavelet domain, using the BAT algorithm to optimize the embedding process for peak signal-to-noise ratio (PSNR) and normalized cross-correlation (NCC) values. Both the medical envelope image and the watermark image are transformed into the wavelet domain and mixed using a scaling factor alpha, termed the embedding strength; the BAT algorithm optimizes PSNR for a particular value of alpha. Finally, the watermarked medical images are placed on the network along with the secret key used for extraction. At the receiver, the embedded watermark is extracted with the 2D DWT using the embedding-strength value obtained from the BAT algorithm. The robustness of the proposed watermarking technique is tested with various attacks on the watermarked medical images. PSNR and NCC are computed to assess the quality of the watermarked medical images and the extracted patient images.
Results are produced for three types of medical images with one patient-image watermark, using a single key and four wavelets (haar, db, symlets, bior) at four different levels.
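The embedding rule described above — wavelet-transform both images and mix them with a scaling factor alpha — can be sketched as follows. This is a minimal single-level Haar illustration in numpy, not the paper's implementation: the paper tunes alpha with the BAT algorithm and uses four wavelet families at multiple levels.

```python
import numpy as np

def haar2d(x):
    # single-level 2D Haar transform: average/difference along rows, then columns
    a = (x[:, ::2] + x[:, 1::2]) / 2.0
    d = (x[:, ::2] - x[:, 1::2]) / 2.0
    rows = np.hstack([a, d])
    a2 = (rows[::2, :] + rows[1::2, :]) / 2.0
    d2 = (rows[::2, :] - rows[1::2, :]) / 2.0
    return np.vstack([a2, d2])

def ihaar2d(c):
    # exact inverse of haar2d
    h, w = c.shape
    a2, d2 = c[:h // 2, :], c[h // 2:, :]
    rows = np.empty((h, w))
    rows[::2, :] = a2 + d2
    rows[1::2, :] = a2 - d2
    a, d = rows[:, :w // 2], rows[:, w // 2:]
    x = np.empty((h, w))
    x[:, ::2] = a + d
    x[:, 1::2] = a - d
    return x

def embed(cover, watermark, alpha):
    # additive embedding in the wavelet domain: Cw = C + alpha * W
    return ihaar2d(haar2d(cover) + alpha * haar2d(watermark))

def extract(watermarked, cover, alpha):
    # inverse of the embedding rule; needs the cover and alpha (the shared secret)
    return ihaar2d((haar2d(watermarked) - haar2d(cover)) / alpha)

def psnr(a, b):
    # quality of the watermarked image relative to the cover, in dB
    mse = np.mean((a - b) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)
```

A smaller alpha raises the PSNR of the watermarked image but makes the hidden watermark more fragile under attacks — exactly the trade-off the BAT optimization in the paper navigates.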


Advances in Multimedia | 2018

Indian Classical Dance Action Identification and Classification with Convolutional Neural Networks

P. V. V. Kishore; K. V. V. Kumar; E. Kiran Kumar; A. S. C. S. Sastry; M. Teja Kiran; D. Anil Kumar; M. V. D. Prasad

Extracting and recognizing complex human movements from unconstrained online/offline video sequences is a challenging task in computer vision. This paper proposes the classification of Indian classical dance actions using a powerful artificial intelligence tool: convolutional neural networks (CNNs). Human action recognition is performed on Indian classical dance videos from both offline (controlled recording) and online (live performances, YouTube) data. The offline data is created with ten subjects performing 200 familiar dance mudras/poses from different Indian classical dance forms under various background environments; the online dance data is collected from YouTube for ten different subjects. Each dance pose spans 60 frames of video in both cases. CNN training is performed on 8 of the samples, each consisting of multiple sets of subjects, and the remaining 2 samples are used for testing the trained CNN. Different CNN architectures were designed and tested on our data to obtain better recognition accuracy. We achieved a 93.33% recognition rate, outperforming the other classifier models reported on the same dataset.
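The convolutional building block such architectures stack — convolution, nonlinearity, pooling — can be sketched in plain numpy. The shapes and kernels here are illustrative, not the paper's architecture:

```python
import numpy as np

def conv2d(img, kernel):
    # valid-mode 2D cross-correlation: the core operation of a CNN layer
    kh, kw = kernel.shape
    h = img.shape[0] - kh + 1
    w = img.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    # elementwise nonlinearity applied after each convolution
    return np.maximum(x, 0.0)

def maxpool2(x):
    # 2x2 max pooling: halves spatial resolution, keeps the strongest response
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    x = x[:h, :w]
    return np.maximum.reduce(
        [x[0::2, 0::2], x[0::2, 1::2], x[1::2, 0::2], x[1::2, 1::2]]
    )
```

A real CNN learns many such kernels per layer by backpropagation and ends in fully connected layers that map the pooled feature maps to the 200 pose classes.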


Archive | 2018

Selfie Continuous Sign Language Recognition with Neural Network Classifier

G. Anantha Rao; P. V. V. Kishore; A. S. C. S. Sastry; D. Anil Kumar; E. Kiran Kumar

This work's objective is to bring sign language closer to real-time implementation on mobile platforms, using a video database of Indian sign language created with a mobile front camera in selfie mode. Pre-filtering, segmentation, and feature extraction on the video frames create a sign language feature space. An artificial neural network classifier is trained on the sign feature space with feed-forward nets and then tested. An ASUS smartphone with a 5-megapixel front camera captures continuous sign videos containing an average of 220 frames for 18 single-handed signs at a frame rate of 30 fps. The Sobel edge operator is strengthened with morphology and adaptive thresholding, giving a near-perfect segmentation of the hand and head portions. The word matching score (WMS) measures the performance of the proposed method, with an average WMS of around 90% for the ANN and an execution time of 0.5221 s during classification. This novel implementation brings sign language recognition to smartphones, making real-time use practical.
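The edge-based segmentation step can be sketched as Sobel gradients followed by a simple threshold. This is a minimal stand-in: the paper additionally applies morphology and proper adaptive thresholding, which are omitted here (a global-mean threshold is used instead).

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def convolve_valid(img, k):
    # naive valid-mode 2D correlation
    kh, kw = k.shape
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

def sobel_mask(img):
    # gradient magnitude thresholded against its own mean
    # (a crude stand-in for the paper's adaptive thresholding + morphology)
    gx = convolve_valid(img, SOBEL_X)
    gy = convolve_valid(img, SOBEL_Y)
    mag = np.hypot(gx, gy)
    return mag > mag.mean()
```

On a selfie frame, the resulting binary mask isolates the strong hand and head contours, from which the sign feature vector is then extracted.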


Archive | 2018

Sign Language Conversion Tool (SLCTooL) Between 30 World Sign Languages

A. S. C. S. Sastry; P. V. V. Kishore; D. Anil Kumar; E. Kiran Kumar

This paper proposes to find the similarity between sign language finger spellings of alphabets from 30 countries using computer vision and a support vector machine classifier. A database of the sign language alphabets of 30 countries is created in laboratory conditions with nine test subjects per country. Binarization of the sign images and subsequent feature extraction with histograms of oriented gradients gives a feature vector. Classification with a support vector machine provides insight into the similarity between world sign languages. The results show a similarity of 61% between Indian sign language and Bangladeshi sign language, which belong to the same continent, whereas the similarity is 11% and 7% with American and French sign languages on different continents. The overall classification rate of the multiclass support vector machine is 95% with histogram-of-oriented-gradients features, compared to other feature types. Cross-validation of the classifier is performed with the Structural Similarity Index Measure (SSIM).
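The histogram-of-oriented-gradients descriptor used above can be sketched in numpy: compute per-pixel gradient magnitude and orientation, then accumulate magnitude-weighted orientation histograms over cells. Cell size and bin count here are illustrative choices, not the paper's parameters; the resulting vectors would then feed a multiclass SVM.

```python
import numpy as np

def hog_features(img, cell=8, bins=9):
    # per-pixel gradients via central differences
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
    h, w = img.shape
    feats = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            m = mag[i:i + cell, j:j + cell].ravel()
            a = ang[i:i + cell, j:j + cell].ravel()
            # magnitude-weighted orientation histogram for this cell
            hist, _ = np.histogram(a, bins=bins, range=(0.0, 180.0), weights=m)
            hist = hist / (np.linalg.norm(hist) + 1e-6)  # L2-normalize per cell
            feats.append(hist)
    return np.concatenate(feats)
```

Normalizing each cell's histogram makes the descriptor robust to illumination changes, which is why HOG outperformed the other feature types tried in the paper.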


Archive | 2018

3D Motion Capture for Indian Sign Language Recognition (SLR)

E. Kiran Kumar; P. V. V. Kishore; A. S. C. S. Sastry; D. Anil Kumar

A 3D motion capture system is used to develop a complete 3D sign language recognition (SLR) system. This paper introduces motion capture technology and its capacity to capture human hands in 3D space. A hand template is designed with marker positions to capture different characteristics of Indian sign language. The captured 3D hand models form a dataset for Indian sign language. We show the superiority of 3D hand motion capture over 2D video capture for sign language recognition: the 3D model dataset is immune to lighting variations, motion blur, color changes, self-occlusions, and external occlusions. We conclude that a 3D-model-based sign language recognizer can provide full recognition and has the potential to develop into a complete sign language recognizer.


Multimedia Tools and Applications | 2018

Indian sign language recognition using graph matching on 3D motion captured signs

D. Anil Kumar; A. S. C. S. Sastry; P. V. V. Kishore; E. Kiran Kumar

A machine cannot easily understand and interpret three-dimensional (3D) data. In this study, we propose the use of graph matching (GM) to enable 3D motion capture for Indian sign language recognition. The sign classification and recognition problem for interpreting 3D motion signs is cast as an adaptive GM (AGM) problem. However, current models for solving an AGM problem have two major drawbacks: spatial matching can be performed only on a fixed set of frames with a fixed number of nodes, and temporal matching divides the entire 3D dataset into a fixed number of pyramids. The proposed approach solves these problems by employing interframe GM for spatial matching and multiple intraframe GM for temporal matching. To test the proposed model, a 3D sign language dataset of 200 continuous sentences was created with an eight-camera motion capture setup. The method is also validated on the 3D motion capture benchmark action datasets HDM05 and CMU. We demonstrate that our approach increases the accuracy of recognizing signs in continuous sentences.
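The graph representation behind such matching can be illustrated with a much-simplified stand-in: treat each frame's 3D joints as a complete graph weighted by pairwise distances, and compare frames by the distance between their edge-weight matrices. The paper's adaptive graph matching is considerably richer; this sketch only conveys the representation, and all function names are illustrative.

```python
import numpy as np

def adjacency(joints):
    # complete-graph edge weights: pairwise Euclidean distances between 3D joints
    diff = joints[:, None, :] - joints[None, :, :]
    return np.linalg.norm(diff, axis=-1)

def frame_match_cost(f1, f2):
    # interframe matching score: Frobenius distance between edge-weight matrices
    return np.linalg.norm(adjacency(f1) - adjacency(f2))

def nearest_sign(query_frames, templates):
    # classify a sequence by summed per-frame graph distances to each template
    costs = {name: sum(frame_match_cost(q, t) for q, t in zip(query_frames, tpl))
             for name, tpl in templates.items()}
    return min(costs, key=costs.get)
```

Because the edge weights depend only on relative joint positions, the representation is invariant to translation of the whole hand, one reason graph-based features suit motion-captured signs.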


International Journal of Engineering and Technology | 2017

SWIFT cognitive behavioral assessment model built on cognitive analytics of empirical mode internet of things

P. V. V. Kishore; Sk Azma; K Gayathri; A. S. C. S. Sastry; E. Kiran Kumar; D. Anil Kumar

This paper presents a study and analysis for predicting a person's current behaviour from their interactions with objects in the physical environment. The environment consists of a door, a chair, and a telephone with accelerometer sensors attached, connected to a computer through a Raspberry Pi IoT (Internet of Things) kit. Two other parameters used for assessment are human voice intensity and human motion, analyzed through a motion capture camera with a built-in microphone and Wi-Fi module. The dataset is a collection of accelerometer data from the chair and telephone, human interaction with the door captured through the camera, and voice samples of the word 'Hello'. These four parameters are collected from 15 test subjects in the age group 19-21 without their knowledge. We use the dataset to train and test three predominant behaviours in the chosen age group, namely excitable, assertive, and pleasant, on an artificial neural network with the backpropagation training algorithm. The overall recognition accuracy is 84.89% against the physical assessment of all test subjects by a physiatrist. This study can help individuals, doctors, and machines predict a person's current emotional state and provide feedback to shift an unpleasant behavioural state toward a pleasant one, maximizing human performance.
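A one-hidden-layer network trained with backpropagation, the classifier family used above, can be sketched in numpy. Layer sizes, learning rate, and the toy task in the usage note are illustrative only, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))  # logistic activation

def add_bias(a):
    # append a constant-1 column so each layer learns a bias term
    return np.hstack([a, np.ones((a.shape[0], 1))])

def train_mlp(X, y, hidden=4, lr=1.0, epochs=5000):
    # one hidden layer, sigmoid activations, squared-error backpropagation
    Xb = add_bias(X)
    W1 = rng.normal(0.0, 0.5, (Xb.shape[1], hidden))
    W2 = rng.normal(0.0, 0.5, (hidden + 1, y.shape[1]))
    for _ in range(epochs):
        h = add_bias(sig(Xb @ W1))                  # forward pass
        out = sig(h @ W2)
        d2 = (out - y) * out * (1.0 - out)          # output-layer delta
        d1 = (d2 @ W2[:-1].T) * h[:, :-1] * (1.0 - h[:, :-1])  # hidden delta
        W2 -= lr * h.T @ d2                         # gradient-descent updates
        W1 -= lr * Xb.T @ d1
    return W1, W2

def predict(X, W1, W2):
    return sig(add_bias(sig(add_bias(X) @ W1)) @ W2)
```

In the paper's setting, X would hold the four sensor-derived features per subject and y a one-hot code over the three behaviour classes; the sketch below the fold would train on, say, a logical-AND toy problem in exactly the same way.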


IEEE Signal Processing Letters | 2018

Training CNNs for 3-D Sign Language Recognition With Color Texture Coded Joint Angular Displacement Maps

E. Kiran Kumar; P. V. V. Kishore; A. S. C. S. Sastry; M. Teja Kiran Kumar; D. Anil Kumar


IEEE Sensors Journal | 2018

Motionlets Matching With Adaptive Kernels for 3-D Indian Sign Language Recognition

P. V. V. Kishore; D. Anil Kumar; A. S. Chandra Sekhara Sastry; E. Kiran Kumar


International Journal of Intelligent Systems and Applications | 2018

Selfie Sign Language Recognition with Convolutional Neural Networks

P. V. V. Kishore; G. Anantha Rao; E. Kiran Kumar; M. Teja Kiran Kumar; D. Anil Kumar
