C. Vasantha Lakshmi
Dayalbagh Educational Institute
Publications
Featured research published by C. Vasantha Lakshmi.
Pattern Analysis and Applications | 2004
C. Vasantha Lakshmi; C. Patvardhan
Telugu is one of the oldest and most popular languages of India, spoken by more than 66 million people, especially in South India. Not much work has been reported on the development of optical character recognition (OCR) systems for Telugu text; it is therefore an area of current research. Some characters in Telugu are made up of more than one connected symbol. Compound characters are written by associating modifiers with consonants, resulting in a huge number of possible combinations, running into hundreds of thousands. A compound character may contain one or more connected symbols. Therefore, systems developed for documents in other scripts, such as Roman, cannot be used directly for Telugu. The individual connected portions of a character or a compound character are defined as basic symbols in this paper and treated as the unit of recognition. The algorithms designed exploit special characteristics of the Telugu script for processing document images efficiently. The algorithms have been implemented to create a Telugu OCR system for printed text (TOSP). The output of TOSP is in phonetic English that can be transliterated to generate editable Telugu text. A special feature of TOSP is that it is designed to handle a large variety of sizes and multiple fonts while still providing raw OCR accuracy of nearly 98%. The phonetic English representation can also be used to develop a Telugu text-to-speech system; work is in progress in this regard.
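As an illustrative sketch only: the abstract defines each connected portion of a character as a basic symbol and treats it as the unit of recognition, which can be approximated with connected-component analysis. The TOSP pipeline itself is not described in the abstract, so the thresholding choice and minimum-area filter below are assumptions.

```python
# Illustrative sketch: extracting connected symbols ("basic symbols") from a
# scanned page with OpenCV connected-component analysis. Not the TOSP method;
# the Otsu threshold and the minimum-area filter are assumptions.
import cv2

def extract_basic_symbols(image_path, min_area=20):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Otsu binarization; ink becomes white on black for component analysis.
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    count, _, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    symbols = []
    for i in range(1, count):  # label 0 is the background
        x, y, w, h, area = stats[i]
        if area >= min_area:   # drop speckle noise
            symbols.append(((x, y, w, h), binary[y:y + h, x:x + w]))
    # Each crop is one unit of recognition; line/word ordering is a later step.
    return symbols
```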
International Journal of Computer Applications | 2012
Nitin Mishra; C. Patvardhan; C. Vasantha Lakshmi; Sarika Singh
The Tesseract OCR Engine is one of the most efficient open-source OCR engines currently available. The recent Tesseract OCR 3.01 is capable of recognizing the Hindi language, but it still needs enhancement to improve performance. Hindi recognition accuracy is quite low even for printed text, because conjunct character combinations in Hindi are not easily separable due to partial overlapping. The proposed approach solves this problem so that Devanagari conjunct characters can easily be segmented and recognized using the Tesseract OCR Engine. This paper presents a complete methodology to improve Hindi recognition accuracy. It also presents a comparison with other available Devanagari OCR engines on the basis of recognition accuracy, processing time, font variations and database size.
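For orientation, a minimal sketch of running the Tesseract engine on a Devanagari page image via pytesseract is shown below. The conjunct-segmentation preprocessing proposed in the paper is not detailed in the abstract, so the preprocess step here is only a placeholder.

```python
# Minimal sketch: Hindi recognition with Tesseract via pytesseract.
# preprocess() is a placeholder for the paper's conjunct-separation step.
import cv2
import pytesseract

def preprocess(gray):
    # Assumption: simple Otsu binarization stands in for the paper's
    # conjunct-segmentation preprocessing.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary

def recognize_hindi(image_path):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    page = preprocess(gray)
    # Requires the Hindi traineddata ("hin") to be installed for Tesseract.
    return pytesseract.image_to_string(page, lang='hin')
```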
Indian Conference on Computer Vision, Graphics and Image Processing | 2006
C. Vasantha Lakshmi; Ritu Jain; C. Patvardhan
Telugu is one of the oldest and most popular languages of India, spoken by more than 66 million people, especially in South India. Development of optical character recognition (OCR) systems for Telugu text is an area of current research. OCR of Indian scripts is much more complicated than OCR of the Roman script because of the huge number of combinations of characters and modifiers. Basic symbols are identified as the unit of recognition in Telugu script. Edge histograms are used in a feature-based recognition scheme for these basic symbols. During recognition, it is observed that, in many cases, the recognizer incorrectly outputs a very similar-looking symbol. Special logic and algorithms using simple structural features are developed that improve recognition accuracy considerably without much additional computational effort. It is shown that recognition accuracy of 98.5% can be achieved on laser-quality prints with this procedure.
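The abstract names edge histograms as the features for basic-symbol recognition; the sketch below shows one plausible edge-direction histogram built from Sobel gradients. The bin count, magnitude weighting and normalization are assumptions, not the paper's exact feature definition.

```python
# Sketch: edge-direction histogram feature for a basic-symbol image.
# Bin count, weighting and normalization are assumptions.
import cv2
import numpy as np

def edge_direction_histogram(symbol, bins=8):
    gx = cv2.Sobel(symbol, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(symbol, cv2.CV_32F, 0, 1, ksize=3)
    magnitude = np.hypot(gx, gy)
    angle = np.arctan2(gy, gx)   # gradient direction in [-pi, pi]
    mask = magnitude > 1e-3      # keep only genuine edge pixels
    hist, _ = np.histogram(angle[mask], bins=bins, range=(-np.pi, np.pi),
                           weights=magnitude[mask])
    total = hist.sum()
    return hist / total if total > 0 else hist
```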
Multimedia Tools and Applications | 2018
C. Patvardhan; Pragyesh Kumar; C. Vasantha Lakshmi
A digital image watermarking technique is proposed to hide relevant information in color digital images. The image is converted from the RGB color space to the YCbCr color space, which enables the algorithm to exploit characteristics of the Human Visual System (HVS) for embedding the watermark. The scheme embeds the watermark information using wavelet transforms and Singular Value Decomposition (SVD), and uses a Quick Response (QR) code as the watermark. The QR code is a robust code from which embedded information can be extracted even if the retrieved QR code image is distorted. Thus the proposed technique employs a judicious combination of algorithmic ideas: the YCbCr color space, transformation into the wavelet domain, SVD for selecting embedding locations, and QR codes for enhanced robustness. The proposed watermarking scheme is robust against various signal processing attacks (e.g., filtering, compression, noise addition) as well as geometric attacks (e.g., rotation, cropping). Computational experiments on a variety of cover images show that embedding a QR code is more effective than other watermarks in terms of information-carrying capacity, robustness and imperceptibility. The proposed scheme is novel and effective, as it simultaneously provides the advantages of each of the individual elements combined in this approach.
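The sketch below is a generic DWT-SVD embedding in the spirit of the combination described, not the paper's exact scheme: the choice of the luminance channel, the LL subband, the Haar wavelet and the strength alpha are assumptions, and note that OpenCV's conversion constant orders the channels as YCrCb.

```python
# Generic DWT-SVD watermark embedding sketch (assumed parameters, not the
# paper's exact scheme). Requires numpy, pywt and opencv-python.
import cv2
import numpy as np
import pywt

def embed_watermark(cover_bgr, watermark_gray, alpha=0.05):
    # OpenCV's constant converts to YCrCb channel order (Y, Cr, Cb).
    ycrcb = cv2.cvtColor(cover_bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    y = ycrcb[:, :, 0]
    LL, (LH, HL, HH) = pywt.dwt2(y, 'haar')
    # Resize the QR-code watermark to the subband size and blend its singular
    # values into those of the LL subband, scaled by alpha.
    wm = cv2.resize(watermark_gray, (LL.shape[1], LL.shape[0])).astype(np.float32)
    U, S, Vt = np.linalg.svd(LL, full_matrices=False)
    Sw = np.linalg.svd(wm, compute_uv=False)
    LL_marked = U @ np.diag(S + alpha * Sw) @ Vt
    y_marked = pywt.idwt2((LL_marked, (LH, HL, HH)), 'haar')
    ycrcb[:, :, 0] = y_marked[:y.shape[0], :y.shape[1]]
    marked = np.clip(ycrcb, 0, 255).astype(np.uint8)
    return cv2.cvtColor(marked, cv2.COLOR_YCrCb2BGR)
```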
Indian Conference on Computer Vision, Graphics and Image Processing | 2012
C. Patvardhan; A. K. Verma; C. Vasantha Lakshmi
In this paper, a new wavelet-transform-based scheme is proposed for the binarization of document images with complex backgrounds and non-uniform illumination. The proposed scheme is simple and effective and does not require manual tuning of any parameters. Binarization is accomplished by background estimation, using the low-frequency approximation coefficients of the wavelet transform of the image at an appropriate decomposition level. The estimated background is utilized for the final binarization: the input image binarized with global Otsu and the wavelet-treated image binarized with adaptive Otsu are combined to obtain the final binarized document image. The performance of the proposed scheme is tested and compared with some well-known methods of document image binarization. The results demonstrate the superior performance and relative effectiveness of the proposed scheme on various images with complex as well as non-uniformly illuminated backgrounds.
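As a rough illustration of the background-estimation idea: discarding the detail coefficients and reconstructing gives a low-frequency background, which is removed before thresholding. The decomposition level, the Haar wavelet and the single Otsu step below stand in for the paper's more involved combination of global and adaptive Otsu.

```python
# Sketch: document binarization via wavelet background estimation.
# Decomposition level and the single-Otsu combination rule are assumptions.
import cv2
import numpy as np
import pywt

def binarize_by_background(gray, level=3, wavelet='haar'):
    coeffs = pywt.wavedec2(gray.astype(np.float32), wavelet, level=level)
    # Keep only the low-frequency approximation: zero out all detail bands.
    coeffs = [coeffs[0]] + [tuple(np.zeros_like(d) for d in det)
                            for det in coeffs[1:]]
    background = pywt.waverec2(coeffs, wavelet)[:gray.shape[0], :gray.shape[1]]
    # Remove the estimated background, then threshold the flattened image.
    flattened = cv2.normalize(gray - background, None, 0, 255,
                              cv2.NORM_MINMAX).astype(np.uint8)
    _, binary = cv2.threshold(flattened, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary
```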
Pattern Recognition and Machine Intelligence | 2009
C. Vasantha Lakshmi; Sarika Singh; Ritu Jain; C. Patvardhan
A novel approach to generating skeletons of binary patterns, with a wide variety of applications including multi-font OCR, is proposed in this paper. The proposed algorithm ensures connectedness of the pattern and minimizes loss of information while capturing the essential shape characteristics. Computational tests on printed Telugu characters show that the algorithm is useful in obtaining a generalized form of the character symbols across multiple dissimilar fonts.
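The paper's own skeletonization algorithm cannot be reproduced from the abstract; as a stand-in, the sketch below uses scikit-image's standard skeletonize to illustrate reducing a binary character pattern to a one-pixel-wide, connected skeleton.

```python
# Stand-in sketch (not the paper's algorithm): skeletonizing a binary glyph
# with scikit-image to obtain a one-pixel-wide skeleton.
import numpy as np
from skimage.morphology import skeletonize

def skeleton_of_character(binary_glyph):
    # binary_glyph: 2-D array with foreground (ink) pixels > 0.
    skeleton = skeletonize(binary_glyph > 0)
    return skeleton.astype(np.uint8) * 255
```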
IEEE Region Humanitarian Technology Conference | 2016
Sukriti Paliwal; C. Vasantha Lakshmi; C. Patvardhan
Smartphones have become ubiquitous in today's world. This paper presents a simple, inexpensive and easily available approach for detecting the human heart rate in beats per minute and computing heart rate variability parameters using a smartphone camera. It uses photoplethysmography (PPG), a technique that detects fluctuations in blood volume underneath the skin surface. A video is recorded with the smartphone camera by placing the left index finger on the camera lens. PPG signals are extracted from the video and analyzed using the Fourier transform to compute heart-related parameters such as heart rate and heart rate variability. Knowledge of these parameters can be used for early detection of abnormalities and timely diagnosis of diseases, and hence for saving lives.
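A minimal sketch of the pipeline described: the mean red-channel intensity of each video frame forms the PPG signal, and the dominant frequency within a plausible heart-rate band (the 0.7-3.5 Hz limits are an assumption here) gives beats per minute.

```python
# Sketch: heart rate from a fingertip video via mean red-channel PPG and FFT.
# The frequency band limits are assumptions, not the paper's exact values.
import cv2
import numpy as np

def heart_rate_bpm(video_path, low_hz=0.7, high_hz=3.5):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    signal = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        signal.append(frame[:, :, 2].mean())  # red channel (BGR order)
    cap.release()
    signal = np.asarray(signal) - np.mean(signal)   # remove DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= low_hz) & (freqs <= high_hz)   # roughly 42-210 BPM
    dominant = freqs[band][np.argmax(spectrum[band])]
    return dominant * 60.0  # beats per minute
```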
2015 39th National Systems Conference (NSC) | 2015
C. Patvardhan; Pragyesh Kumar; C. Vasantha Lakshmi
This paper presents a robust Discrete Wavelet Transform (DWT) based color image watermarking technique using the YCbCr color model. Watermark text data is transformed into a QR code for enhanced security, so that text of varying length can be watermarked efficiently. The proposed method uses the diagonal components from the DWT of the cover and watermark images. A high watermark retrieval rate is obtained with an appropriate embedding strength. The scheme has high resistance to most watermarking attacks and retrieves the exact watermark information even from low-quality watermarked images.
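As a sketch of the first step described, turning variable-length watermark text into a QR-code image suitable for embedding, assuming the third-party qrcode package; the error-correction level and module scaling below are choices made here, not the paper's.

```python
# Sketch: watermark text to a QR-code array (assumed settings, third-party
# "qrcode" package). The resulting grayscale array can then be embedded.
import numpy as np
import qrcode

def text_to_qr_array(text, module_pixels=4):
    qr = qrcode.QRCode(error_correction=qrcode.constants.ERROR_CORRECT_H,
                       border=1)
    qr.add_data(text)
    qr.make(fit=True)
    modules = np.array(qr.get_matrix())            # True where a module is dark
    image = np.where(modules, 0, 255).astype(np.uint8)
    # Scale each module to a small block of pixels for embedding.
    return np.kron(image, np.ones((module_pixels, module_pixels), np.uint8))
```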
2015 39th National Systems Conference (NSC) | 2015
C. Vasantha Lakshmi; Sarika Singh; C. Patvardhan
Optical Character Recognition (OCR) systems are being developed, owing to their numerous applications, even for Indian scripts like Telugu that are complicated by the use of a large number of symbols. OCR systems typically store pre-computed features of the symbols to be recognized in a database. Recognition of an unknown symbol is performed by finding the symbol in the database that is nearest in feature space. Design of an appropriate database is therefore a critical step, especially when the OCR system targets recognition of numerous symbols in multiple fonts and sizes. The goal is an OCR system with small recognition time and high recognition accuracy; the naive approach of putting the features of all symbols in all fonts and sizes into the database can be counterproductive on both counts. Experimental results on text document images with multiple fonts and sizes show that the database design strategy for OCR of printed Telugu text proposed in this paper achieves both objectives. This is the first reported approach to such a database design for Telugu OCR.
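The lookup framework the abstract describes can be sketched as a precomputed feature database with nearest-neighbour matching; the paper's actual strategy for choosing which fonts and sizes populate the database is not reproduced here.

```python
# Sketch: feature database with nearest-neighbour recognition. The database
# population strategy proposed in the paper is not reproduced here.
import numpy as np

class SymbolDatabase:
    def __init__(self):
        self.features = []   # one feature vector per stored template
        self.labels = []     # phonetic label of each template

    def add(self, feature_vector, label):
        self.features.append(np.asarray(feature_vector, dtype=np.float32))
        self.labels.append(label)

    def recognize(self, feature_vector):
        query = np.asarray(feature_vector, dtype=np.float32)
        distances = [np.linalg.norm(query - f) for f in self.features]
        return self.labels[int(np.argmin(distances))]
```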
Hybrid Intelligent Systems | 2012
C. Vasantha Lakshmi; Ritu Jain; C. Patvardhan
Automatic recognition of handwritten numerals is difficult because of the huge variety of ways in which people write. Attempts in the literature employ complicated features and recognition engines to cope with this variety, but this makes the process slow. In this work, a hybrid technique is proposed to recognize handwritten Devanagari numerals quickly without sacrificing recognition accuracy. A database of 11,000 samples is created, ensuring that the samples include a variety of handwriting styles written with different writing instruments and in different colors. The features employed are density features, spline-based edge direction histogram features, and combinations thereof. The database size is reduced by clustering similar samples and keeping only one representative per cluster, and by reducing the number of features using PCA; this two-fold reduction yields a smaller database. A hybrid technique utilizing artificial neural networks (ANN), K-nearest neighbour (K-NN) and other learning methods is implemented to ensure high recognition accuracy and speed. Together these ideas provide a fast and robust scheme for recognition of handwritten Devanagari numerals with high recognition accuracy (99.40%) at a reasonable speed.
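A sketch of the two-fold reduction plus nearest-neighbour classification described, using scikit-learn; the numbers of clusters, PCA components and neighbours, as well as the exact ANN/K-NN hybrid, are assumptions made for illustration.

```python
# Sketch: reduce the training set via per-class clustering and PCA, then
# classify with K-NN. Cluster/component/neighbour counts are assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

def build_reduced_classifier(X, y, clusters_per_class=50, n_components=40):
    pca = PCA(n_components=n_components).fit(X)
    Xr = pca.transform(X)
    reps, rep_labels = [], []
    for digit in np.unique(y):
        Xc = Xr[y == digit]
        km = KMeans(n_clusters=min(clusters_per_class, len(Xc)),
                    n_init=10, random_state=0).fit(Xc)
        reps.append(km.cluster_centers_)           # one representative per cluster
        rep_labels.extend([digit] * km.n_clusters)
    knn = KNeighborsClassifier(n_neighbors=3)
    knn.fit(np.vstack(reps), rep_labels)
    return pca, knn  # apply pca.transform before knn.predict on new samples
```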