Naresh Kumar Garg
Punjab Technical University
Publications
Featured research published by Naresh Kumar Garg.
International Journal of Computer Applications | 2010
Naresh Kumar Garg; Lakhwinder Kaur; Manish Kumar
The main purpose of this paper is to present a new segmentation technique, based on a structural approach, for handwritten Hindi text. Segmentation is one of the major stages of character recognition: the handwritten text is separated into lines, lines into words, and words into characters, and errors in segmentation propagate to recognition. The performance is evaluated on handwritten data of 1380 words in 200 lines written by 15 different writers. The overall segmentation results are very promising.
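As a rough illustration of the line-then-word splitting described above (not the paper's structural technique itself), a minimal projection-profile sketch in Python might look like the following; the gap thresholds and the toy page are assumed values for demonstration only.

```python
import numpy as np

def split_on_gaps(profile, min_gap):
    """Return (start, end) index ranges of runs where profile > 0,
    splitting only at gaps of at least min_gap empty positions."""
    runs, start, empty = [], None, 0
    for i, val in enumerate(profile):
        if val > 0:
            if start is None:
                start = i
            empty = 0
        elif start is not None:
            empty += 1
            if empty >= min_gap:
                runs.append((start, i - empty + 1))
                start, empty = None, 0
    if start is not None:
        runs.append((start, len(profile)))
    return runs

def segment_text(binary_image):
    """binary_image: 2-D NumPy array, 1 = ink, 0 = background.
    Returns a list of lines, each a list of word sub-images."""
    lines = []
    for top, bottom in split_on_gaps(binary_image.sum(axis=1), min_gap=1):
        line = binary_image[top:bottom]
        # Words are separated by wider column gaps than characters.
        words = [line[:, left:right]
                 for left, right in split_on_gaps(line.sum(axis=0), min_gap=8)]
        lines.append(words)
    return lines

if __name__ == "__main__":
    # Tiny synthetic "page" with two lines; real input would be a binarized scan.
    page = np.zeros((20, 40), dtype=int)
    page[2:6, 3:13] = 1    # first word on line 1
    page[2:6, 24:32] = 1   # second word on line 1
    page[10:14, 3:18] = 1  # a word on line 2
    print([[w.shape for w in line] for line in segment_text(page)])
```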
International Conference on Information Technology: New Generations | 2010
Naresh Kumar Garg; Lakhwinder Kaur; M. K. Jindal
In this paper, we discuss a new method for line segmentation of handwritten Hindi text. The method is based on header line detection, baseline detection, and a contour-following technique. No preprocessing such as skew correction, thinning, or noise removal is applied to the data. The purpose of this paper is threefold. First, we show experimentally that the method is suitable for fluctuating or variably skewed lines of text, and that it is invariant to non-uniform skew between words within a line. Second, contour following after header line detection correctly separates some of the overlapping text lines. Third, the paper provides a brief review of text line segmentation techniques for handwritten text, which can be very useful for beginners who want to work on text line segmentation.
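A minimal sketch of the header-line step (an assumption for illustration, not the paper's exact algorithm): in Devanagari the header line (shirorekha) is usually the densest row of ink, so the peak of the horizontal projection within a line band approximates its position; the contour-following and baseline steps are omitted here.

```python
import numpy as np

def detect_header_line(line_image):
    """line_image: 2-D binary array (1 = ink) containing one text line.
    Returns the row index of the estimated header line (shirorekha),
    taken as the row with the maximum ink count."""
    profile = line_image.sum(axis=1)
    return int(np.argmax(profile))

def split_at_header(line_image):
    """Split a line band at the header line: the region above holds upper
    modifiers, the region below holds the character bodies."""
    header_row = detect_header_line(line_image)
    return line_image[:header_row], line_image[header_row + 1:]
```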
International Journal of Computer Applications | 2011
Naresh Kumar Garg; Lakhwinder Kaur; M. K. Jindal
Optical Character Recognition (OCR) is the process of recognizing handwritten or printed scanned text with the help of a computer. Segmentation is a very important stage of any text recognition system: problems in segmentation lower the segmentation rate and hence the recognition rate, while a good segmentation technique can improve it. This paper deals with the problems that occur in the segmentation of handwritten Hindi text and explains the main reasons behind some of them.
International Journal of Computer Applications | 2013
Puneet; Naresh Kumar Garg
Image binarization is an important step in Optical Character Recognition (OCR). Several binarization methods have been proposed in recent years, but there is no single best method that works for all images. The main objective of this paper is to present a study of various existing binarization algorithms and to compare their measurements. This paper can act as a guide for newcomers starting work on binarization.
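For orientation, a hedged example of two methods commonly covered in such surveys, applied with OpenCV; the choice of methods, parameters, and file names here is illustrative and not the paper's selection.

```python
import cv2

# Hypothetical input page; replace with an actual grayscale document scan.
gray = cv2.imread("page.png", cv2.IMREAD_GRAYSCALE)
if gray is None:
    raise FileNotFoundError("page.png")

# Global method: Otsu picks one threshold for the whole image.
_, otsu = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Local method: each pixel is compared to the mean of its 35x35 neighbourhood,
# which usually copes better with uneven illumination.
adaptive = cv2.adaptiveThreshold(
    gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 35, 10)

cv2.imwrite("otsu.png", otsu)
cv2.imwrite("adaptive.png", adaptive)
```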
International Conference on Information Systems | 2011
Naresh Kumar Garg; Lakhwinder Kaur; M. K. Jindal
Character recognition is an important stage of any text recognition system. In an Optical Character Recognition (OCR) system, the presence of half characters decreases the recognition rate, and because half characters touch full characters, detecting their presence is a very challenging task. In this paper, we propose a new algorithm based on structural properties of the text to segment half characters in handwritten Hindi text. Results are reported for both handwritten and printed Hindi text. The proposed algorithm achieves a segmentation accuracy of 83.02% for half characters in handwritten text and 87.5% in printed text.
Archive | 2018
Sakshi; Naresh Kumar Garg; Munish Kumar
In this paper, we explore various features and classifiers for writer identification from Gurmukhi handwriting. Identifying a writer from a piece of handwriting is a challenging pattern recognition task. The writer identification framework proposed in this paper includes several stages: image preprocessing, feature extraction, training, and classification. The framework first prepares a skeleton of each character so that meaningful information about a writer's handwriting can be extracted. The feature extraction stage incorporates several schemes, namely zoning, diagonal, transition, intersection and open end points, centroid, horizontal peak extent, vertical peak extent, parabola curve fitting, and power curve fitting based features. To assess the prominence of these features, we used four classification techniques: Naive Bayes, Decision Tree, Random Forest, and AdaBoostM1. For the experiments, we collected 49,000 samples from 70 different writers. The maximum accuracy obtained is 81.75%, with centroid features and the AdaBoostM1 classifier.
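A hypothetical sketch of one feature-plus-classifier combination from the list above: zoning features (ink density over a grid of zones) fed to an AdaBoost classifier with scikit-learn. The grid size, classifier settings, and the synthetic placeholder data are assumptions for illustration, not the paper's exact configuration.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

def zoning_features(char_image, grid=4):
    """char_image: 2-D binary array of one character (1 = ink).
    Returns the ink density of each cell in a grid x grid zoning."""
    h, w = char_image.shape
    feats = []
    for i in range(grid):
        for j in range(grid):
            zone = char_image[i * h // grid:(i + 1) * h // grid,
                              j * w // grid:(j + 1) * w // grid]
            feats.append(zone.mean() if zone.size else 0.0)
    return np.array(feats)

rng = np.random.default_rng(0)
# Placeholder data: in the paper this would be 49,000 scanned Gurmukhi characters
# labelled by their 70 writers.
character_images = [(rng.random((32, 32)) > 0.5).astype(int) for _ in range(200)]
writer_labels = rng.integers(0, 5, size=200)

X = np.array([zoning_features(img) for img in character_images])
y = writer_labels
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = AdaBoostClassifier(n_estimators=100).fit(X_train, y_train)
print("writer identification accuracy:", clf.score(X_test, y_test))
```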
Multimedia Tools and Applications | 2018
Munish Kumar; Payal Chhabra; Naresh Kumar Garg
With the growth of the web and multimedia, a substantial volume of images is created and distributed, and effectively storing and retrieving from such a large, bulky database is a major issue. Content-Based Image Retrieval (CBIR) techniques are therefore used to retrieve images from massive databases based on the desired information. In this work, we consider two local image feature extraction methods, SIFT and ORB. The Scale-Invariant Feature Transform (SIFT) is used to detect features and compute feature descriptors of an image. Oriented FAST and Rotated BRIEF (ORB) uses the FAST (Features from Accelerated Segment Test) keypoint detector and the binary BRIEF (Binary Robust Independent Elementary Features) descriptor. A K-Means clustering algorithm is also used to analyze the data, generating a number of clusters from the descriptor vectors. Locality Preserving Projection (LPP) is employed to reduce the length of the feature vector and enhance the performance of the image retrieval system. For classification, we consider two classifiers, BayesNet and K-Nearest Neighbours (K-NN). The Wang image dataset has been used for the experiments. The proposed CBIR system achieves a highest precision rate of 88.9%.
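A minimal bag-of-visual-words sketch, assuming OpenCV and scikit-learn, of the ORB-plus-K-Means portion of such a pipeline; SIFT, LPP, and the classifiers are omitted, and the vocabulary size and image paths are placeholders rather than the paper's settings.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

orb = cv2.ORB_create(nfeatures=500)

def orb_descriptors(path):
    """ORB descriptors of one image, or an empty array if none are found."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        raise FileNotFoundError(path)
    _, desc = orb.detectAndCompute(img, None)
    return desc if desc is not None else np.zeros((0, 32), dtype=np.uint8)

def build_vocabulary(descriptor_list, k=64):
    """Cluster all descriptors in the collection into k visual words."""
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit(
        np.vstack(descriptor_list).astype(np.float32))

def bovw_histogram(desc, vocab):
    """Normalised histogram of visual-word assignments: the image's feature vector."""
    words = vocab.predict(desc.astype(np.float32))
    hist, _ = np.histogram(words, bins=np.arange(vocab.n_clusters + 1))
    return hist / max(hist.sum(), 1)

# Usage (paths are placeholders for the Wang dataset images):
# descs = [orb_descriptors(p) for p in ["img001.jpg", "img002.jpg"]]
# vocab = build_vocabulary(descs)
# features = np.array([bovw_histogram(d, vocab) for d in descs])
```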
Annual ACIS International Conference on Computer and Information Science | 2015
Naresh Kumar Garg; Lakhwinder Kaur; Manish Jindal
Offline handwritten Hindi text recognition is a very challenging task. We make a novel attempt to recognize handwritten Hindi text using a segmentation-based approach. Although many efforts have been made to recognize isolated characters and words, little work has been done on recognizing offline handwritten Hindi text by segmenting sentences into lines and lines into words. The uniqueness of this approach lies in the fact that many commonly used words can be recognized once all the characters in the middle zone are recognized, even when the upper modifiers, lower modifiers, and half characters are ignored. Another advantage is that it is not word specific, i.e., any number of words can be added to the list to be recognized. Topological features are extracted programmatically using many heuristics, with particular effort devoted to the correctness of the features. Results obtained with the proposed technique are encouraging.
Neural Computing and Applications | 2018
Payal Chhabra; Naresh Kumar Garg; Munish Kumar
The number of images in digital collections keeps growing, and locating a specific image by its content in a huge database is often difficult. In this paper, a content-based image retrieval (CBIR) system is proposed to extract a feature vector from an image and retrieve images effectively by content. Two types of image feature descriptor extraction methods are considered: Oriented FAST and Rotated BRIEF (ORB) and the scale-invariant feature transform (SIFT). The ORB detector uses FAST keypoints together with a BRIEF descriptor, while SIFT analyzes images across different orientations and scales. A K-means clustering algorithm is applied to both descriptors, from which the mean of every cluster is obtained. The locality-preserving projection dimensionality reduction algorithm is used to reduce the dimensions of the image feature vector. At retrieval time, the image feature vectors stored in the image database are matched against the feature vector of the test data. The performance of the proposed work is assessed using decision tree, random forest, and MLP classifiers. Two public databases, the Wang database and the Corel database, have been considered for the experiments. The combination of ORB and SIFT feature vectors is tested on images from both databases, achieving highest precision rates of 99.53% on the Corel database and 86.20% on the Wang database.
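A hedged sketch of the descriptor-combination idea in this paper: per-image ORB and SIFT vectors are concatenated and then reduced in dimension. PCA stands in for LPP purely for illustration (LPP is not part of scikit-learn), and the array shapes are placeholders, not the paper's configuration.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Placeholders: in practice these would be the per-image cluster-mean /
# bag-of-words vectors built from ORB and SIFT descriptors respectively.
orb_vectors = rng.random((100, 64))    # 100 images, 64-D ORB-based features
sift_vectors = rng.random((100, 128))  # 100 images, 128-D SIFT-based features

# Concatenate the two descriptors, then shorten the retrieval vector.
combined = np.hstack([orb_vectors, sift_vectors])        # (100, 192)
reduced = PCA(n_components=32).fit_transform(combined)   # (100, 32)
print(reduced.shape)
```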
Multimedia Tools and Applications | 2018
Diksha Garg; Naresh Kumar Garg; Munish Kumar
In this paper, a method is proposed for enhancing underwater images, which commonly suffer from low contrast and degraded color quality. When images are captured in water rather than in air, the whole image changes: absorption, reflection, and scattering introduce contrast loss, quality degradation, and noise, so the images look hazy or blurred and one color tends to dominate. To make use of underwater resources and overcome these effects, image enhancement is required. We therefore propose a strategy for underwater image enhancement using Contrast-Limited Adaptive Histogram Equalization (CLAHE) and a percentile-based methodology, and the two are finally blended to improve the outcome. Two parameters, Root Mean Squared Error (RMSE) and entropy, are used to compare the experimental results of the proposed methodology with state-of-the-art works. The proposed system is observed to perform better than existing techniques for underwater image enhancement.
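A hypothetical sketch of the two components named above, using OpenCV: CLAHE applied to the lightness channel and a per-channel percentile stretch, blended with equal weights. The blend weights, percentile limits, and file names are illustrative assumptions, not the paper's values.

```python
import cv2
import numpy as np

def clahe_enhance(bgr, clip=2.0, grid=8):
    """Apply CLAHE to the lightness channel in LAB colour space."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=(grid, grid))
    return cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)

def percentile_stretch(bgr, low=1, high=99):
    """Stretch each channel so its low/high percentiles map to 0/255."""
    out = np.zeros_like(bgr, dtype=np.float32)
    for c in range(3):
        lo, hi = np.percentile(bgr[:, :, c], [low, high])
        out[:, :, c] = np.clip((bgr[:, :, c] - lo) * 255.0 / max(hi - lo, 1e-6), 0, 255)
    return out.astype(np.uint8)

img = cv2.imread("underwater.jpg")  # placeholder path; use a real underwater image
if img is None:
    raise FileNotFoundError("underwater.jpg")

# Blend the two enhanced versions with equal weights.
enhanced = cv2.addWeighted(clahe_enhance(img), 0.5, percentile_stretch(img), 0.5, 0)
cv2.imwrite("enhanced.jpg", enhanced)
```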