Lakhwinder Kaur
Punjabi University
Publications
Featured research published by Lakhwinder Kaur.
IEEE Region 10 Conference | 2003
Savita Gupta; Lakhwinder Kaur; R. C. Chauhan; S. C. Saxena
A novel speckle-reduction method is introduced, based on soft thresholding of the wavelet coefficients of a logarithmically transformed medical ultrasound image. The method builds on generalised Gaussian distribution (GGD) modelling of the sub-band coefficients and is a variant of the recently published BayesShrink method of Chang and Vetterli, derived in the Bayesian framework for denoising natural images. It is scale adaptive, because the parameters required for estimating the threshold depend on the scale and sub-band data. The threshold is computed as Kσ²/σx, where σ and σx are the standard deviations of the noise and of the sub-band data of the noise-free image, respectively, and K is a scale parameter. Experimental results show that the proposed method outperforms the median filter and the homomorphic Wiener filter by 29% in terms of the coefficient of correlation and by 4% in terms of the edge-preservation parameter. The values of these quantitative parameters indicate the good feature-preservation performance of the algorithm, as desired for better diagnosis in medical image processing.
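The soft-thresholding rule summarised above can be sketched in a few lines of numpy (a minimal illustration, assuming a BayesShrink-style threshold t = Kσ²/σx applied to a single detail sub-band; the function name and the noise-variance handling are illustrative, not the authors' implementation):

```python
import numpy as np

def soft_threshold(coeffs, k, sigma_noise):
    """Soft-threshold the detail coefficients of one wavelet sub-band.

    Threshold t = k * sigma_noise**2 / sigma_x, in the spirit of
    BayesShrink; k is the scale parameter from the paper.
    """
    # Estimate the noise-free signal std from the noisy sub-band:
    # var_x = max(var_y - var_noise, 0), guarded against zero.
    sigma_x = np.sqrt(max(coeffs.var() - sigma_noise**2, 1e-12))
    t = k * sigma_noise**2 / sigma_x
    # Shrink towards zero, killing coefficients below the threshold.
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)
```

In practice this would be applied per scale and sub-band of the log-transformed image, with σ estimated from the finest sub-band.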
Digital Signal Processing | 2007
Savita Gupta; Lakhwinder Kaur; R. C. Chauhan; S. C. Saxena
The paper presents a versatile wavelet-domain despeckling technique to visually enhance medical ultrasound (US) images for improved clinical diagnosis. The method uses the two-sided generalized Nakagami distribution (GND) to model the speckle wavelet coefficients, while the signal wavelet coefficients are approximated using the generalized Gaussian distribution (GGD). Combining these statistical priors with the Bayesian maximum a posteriori (MAP) criterion, thresholding/shrinkage estimators are derived for processing the wavelet coefficients of the detail subbands. Two blind speckle suppressors, named GNDThresh and GNDShrink, have been implemented and evaluated on both artificial speckle-simulated images and real US images. The experimental results demonstrate the superiority of the suggested technique, both quantitatively and qualitatively, over other competitive schemes reported in the image-denoising literature: on realistic US images the proposed method yields a signal-to-noise-ratio gain of more than 0.36 dB over the best state-of-the-art despeckling method (GenLik), 0.93 dB over the SRAD filter, 2.35 dB over the Lee filter, and 1.34 dB over the Kuan filter. The visual comparison of despeckled US images and the higher values of the quality metrics (coefficient of correlation, edge preservation index, quality index, and structural similarity index) indicate that the new method suppresses speckle noise well while preserving texture and organ surfaces. In future work, the method will be evaluated on other classes of images and through multiple-observer studies.
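The evaluation above relies on artificial speckle-simulated images and SNR gains; a common way to simulate fully developed speckle is unit-mean multiplicative Gamma noise (an assumption for illustration — the paper's own simulation protocol may differ):

```python
import numpy as np

def add_speckle(image, looks=4, seed=0):
    """Simulate multiplicative speckle on a clean image.

    Fully developed speckle in intensity format is often modelled as
    unit-mean Gamma noise; `looks` controls the noise severity.
    """
    rng = np.random.default_rng(seed)
    noise = rng.gamma(shape=looks, scale=1.0 / looks, size=image.shape)
    return image * noise

def snr_db(clean, noisy):
    """Signal-to-noise ratio in dB, for quantitative comparison."""
    err = clean - noisy
    return 10.0 * np.log10(np.sum(clean**2) / np.sum(err**2))
```

Despecklers can then be compared by the SNR gain they achieve on such simulated pairs, exactly as the dB figures above are reported.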
International Journal of Computer Applications | 2010
Naresh Kumar Garg; Lakhwinder Kaur; Manish Kumar
The main purpose of this paper is to present a new segmentation technique, based on a structural approach, for handwritten Hindi text. Segmentation is one of the major stages of character recognition: the handwritten text is separated into lines, lines into words and words into characters, and errors in segmentation propagate to recognition. Performance is evaluated on handwritten data of 1380 words in 200 lines written by 15 different writers, and the overall segmentation results are very promising.
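The line→word→character pipeline described above is typically driven by projection profiles; the following is a minimal sketch (not the authors' exact structural method), where zero-valued rows or columns of the ink profile mark the gaps between lines or words:

```python
import numpy as np

def split_on_gaps(profile):
    """Split a projection profile into [start, end) runs of ink.

    Entries of `profile` count ink pixels per row (for line
    segmentation) or per column (for word segmentation); zero
    entries are treated as gaps between segments.
    """
    runs, start = [], None
    for i, v in enumerate(profile):
        if v > 0 and start is None:
            start = i                  # a new ink run begins
        elif v == 0 and start is not None:
            runs.append((start, i))    # the run ends at this gap
            start = None
    if start is not None:
        runs.append((start, len(profile)))
    return runs
```

Summing a binary page image over columns gives the row profile for line segmentation; summing each line image over rows gives the column profile for word segmentation.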
International Conference on Information Technology: New Generations | 2010
Naresh Kumar Garg; Lakhwinder Kaur; M. K. Jindal
In this paper, we present a new method for line segmentation of handwritten Hindi text. The method is based on header-line detection, base-line detection and a contour-following technique, and no preprocessing such as skew correction, thinning or noise removal is applied to the data. The purpose of this paper is threefold. First, we show experimentally that the method is suitable for fluctuating or variably skewed lines of text, and confirm that it is invariant to non-uniform skew between the words in a line (non-uniform text-line skew). Second, contour following after header-line detection correctly separates some overlapping lines of text. Third, the paper provides a brief review of text-line segmentation techniques for handwritten text, which can be very useful for beginners who want to work on text-line segmentation.
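Header-line detection, the first step of the method above, is commonly done by locating the row of maximum ink density in a text-line image (the Devanagari shirorekha); a minimal sketch under that assumption:

```python
import numpy as np

def detect_header_line(line_img):
    """Locate the header line (shirorekha) of a Devanagari text line.

    Sketch: in a binary line image the header line is the row that
    contains the most ink pixels, since the shirorekha runs across
    the tops of most characters in the line.
    """
    ink_per_row = (line_img > 0).sum(axis=1)
    return int(np.argmax(ink_per_row))
```

Once the header row is known, contour following below it can then separate touching or overlapping components, as the paper describes.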
International Journal of Computer Applications | 2012
Avneet Kaur; Lakhwinder Kaur; Savita Gupta
The goal of this paper is to analyse and improve the performance of metrics such as the Coefficient of Correlation (CoC) and the Structural Similarity Index (SSIM) for image recognition in a real-time environment. The main novelties of the method are that it works in an uncontrolled environment and that there is no need to store multiple copies of the same image at different orientations. The values of CoC and SSIM change if images are rotated, flipped, or captured under badly or highly illuminated conditions. To increase recognition accuracy, the input test image is pre-processed. First, the discrete wavelet transform is applied to recognize images captured under bad illumination and dull lighting conditions. Second, to make the method rotation invariant, the test image is compared against the stored database image both without and with rotations in the horizontal, vertical, diagonal, reverse-diagonal and flipped directions. Recognition performance is evaluated using the recognition rate and the rejection rate. The results indicate that the recognition performance of CoC and SSIM improves with the rotations and the discrete wavelet transform. It was also observed that CoC with the proposed modifications yields better results than the state-of-the-art enhanced Principal Component Analysis and enhanced Subspace Linear Discriminant Analysis.
Keywords: Image Recognition, Discrete Wavelet Transforms, Correlation Coefficient, Structural Similarity Index Metrics.
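The rotation/flip comparison described above can be sketched by scoring the test image against the eight rotations and reflections of the stored image and keeping the best CoC (a minimal illustration for square grayscale images; the function names are ours and the DWT pre-processing step is omitted):

```python
import numpy as np

def coc(a, b):
    """Pearson coefficient of correlation between two equal-size images."""
    a = a.astype(float).ravel() - a.mean()
    b = b.astype(float).ravel() - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

def coc_invariant(test, ref):
    """Best CoC over the stored image's rotations and flips.

    Instead of storing multiple oriented copies of the database
    image, generate its 4 rotations and their mirror images on
    the fly and keep the highest correlation score.
    """
    variants = [np.rot90(ref, k) for k in range(4)]
    variants += [np.fliplr(v) for v in variants]
    return max(coc(test, v) for v in variants)
```

A rotated test image that would score poorly under plain CoC then matches its database entry with a score near 1.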
International Conference on Electronic Design | 2015
Navdeep Singh; Lakhwinder Kaur
Diabetic retinopathy occurs in patients who have suffered from diabetes for many years, and as a result their vision is affected. The effect can range from mild to severe, depending on the extent of the disease. There are two stages: the early stage, non-proliferative diabetic retinopathy (NPDR), and the later stage, proliferative diabetic retinopathy (PDR). In NPDR, various problems may occur, such as macular edema, a swelling of the central retina, and retinal ischemia, which is caused by poor blood flow. In PDR, the advanced stage, new blood vessels start growing in the retina, a process known as neovascularization. Extracting the retinal blood vessels at an early stage can be very helpful in diagnosing the severity of the disease so that treatment can follow accordingly; in later stages, treatment is not very effective. In this paper, various blood-vessel segmentation techniques are discussed, along with retinal image enhancement techniques. The techniques are evaluated on the publicly available DRIVE and STARE databases, which contain retinal images together with ground-truth images accurately marked by experts for evaluation purposes. The paper also discusses various metrics frequently used to evaluate image segmentation techniques.
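Among the evaluation metrics such surveys commonly report on DRIVE and STARE are sensitivity, specificity and accuracy of a binary vessel mask against the expert ground truth; a minimal sketch (the exact metric set used in the paper may be larger):

```python
import numpy as np

def vessel_metrics(pred, truth):
    """Sensitivity, specificity and accuracy of a binary vessel mask
    against the expert-marked ground-truth mask."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)     # vessel pixels found
    tn = np.sum(~pred & ~truth)   # background pixels correctly rejected
    fp = np.sum(pred & ~truth)    # background flagged as vessel
    fn = np.sum(~pred & truth)    # vessel pixels missed
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / pred.size
    return sens, spec, acc
```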
International Journal of Computer Applications | 2011
Naresh Kumar Garg; Lakhwinder Kaur; M. K. Jindal
Optical Character Recognition (OCR) is the process of recognizing handwritten or printed scanned text with the help of a computer. Segmentation is a very important stage of any text recognition system: problems in segmentation lower the segmentation rate and hence the recognition rate, while a good segmentation technique can improve it. This paper deals with the hazards that occur in the segmentation of handwritten Hindi text, and explains the main reasons for some of these problems.
International Conference on Information Systems | 2011
Naresh Kumar Garg; Lakhwinder Kaur; M. K. Jindal
Character recognition is an important stage of any text recognition system. In an Optical Character Recognition (OCR) system, the presence of half characters decreases the recognition rate, and because half characters touch full characters, detecting their presence is a very challenging task. In this paper, we propose a new algorithm, based on the structural properties of the text, to segment the half characters in handwritten Hindi text. Results are shown for both handwritten and printed Hindi text: the proposed algorithm achieves a segmentation accuracy of 83.02% for half characters in handwritten text and 87.5% in printed text.
Medical & Biological Engineering & Computing | 2005
Lakhwinder Kaur; R. C. Chauhan; S. C. Saxena
The paper addresses the problem of how the spatial quantisation mode and a subband-adaptive uniform scalar quantiser can be jointly optimised in the minimum description length (MDL) framework for the compression of ultrasound images. It is shown that the statistics of wavelet coefficients in medical ultrasound (US) images are better approximated by the generalised Student t-distribution. By combining these statistics with the operational rate-distortion (RD) criterion, a space-frequency quantiser (SFQ) called the MDL-SFQ was designed, which uses an efficient zero-tree quantisation technique for zeroing out tree-structured sets of wavelet coefficients and an adaptive scalar quantiser for the non-zero coefficients. The algorithm uses the statistical ‘variance of quantisation error’ to achieve bit-rates ranging from near-lossless to lossy compression. Experimental results show that the proposed coder outperforms the set partitioning in hierarchical trees (SPIHT) image coder both quantitatively and qualitatively, yielding an improved compression performance of 1.01 dB over the best zero-tree-based coder, SPIHT, at 0.25 bits per pixel when averaged over five ultrasound images.
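The scalar-quantiser component can be illustrated with a mid-tread uniform quantiser whose reconstruction error is bounded by half the step size (a simplified sketch; in the MDL-SFQ the step size and the zero-tree map are chosen jointly via the RD criterion, which is not shown here):

```python
import numpy as np

def uniform_quantize(coeffs, step):
    """Mid-tread uniform scalar quantiser.

    Returns the integer quantisation indices (what a coder would
    entropy-code) and the reconstructed coefficient values.
    """
    idx = np.round(coeffs / step).astype(int)
    return idx, idx * step
```

Smaller steps move the coder toward near-lossless operation at a higher bit-rate; larger steps (and zeroed trees) give lossy compression.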
International Journal of Computer Applications | 2012
Neelofar Sohi; Lakhwinder Kaur; Savita Gupta
The aim of this paper is to develop an efficient fuzzy c-means based segmentation algorithm to extract the tumor region from MR brain images. First, the cluster centroids are initialized through data analysis of the tumor region, which optimizes the standard fuzzy c-means algorithm. Next, reconstruction-based morphological operations are applied to enhance its performance for brain tumor extraction. The results show that simple fuzzy c-means cannot segment the region of interest properly, whereas the enhanced algorithm effectively extracts the tumor region. In comparison with existing segmentation methods, the enhanced fuzzy c-means algorithm emerges as the most effective for extracting the region of interest.
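The standard fuzzy c-means iteration that the enhanced algorithm builds on can be sketched as follows for 1-D intensities (initial centroids are taken as given; the paper's data-analysis initialisation and morphological post-processing are not reproduced):

```python
import numpy as np

def fuzzy_cmeans(x, centroids, m=2.0, n_iter=50):
    """Standard fuzzy c-means on 1-D samples (e.g. pixel intensities).

    `centroids` are the initial cluster centres; returns the final
    centres and the membership matrix (clusters x samples).
    """
    x = np.asarray(x, dtype=float).ravel()
    c = np.asarray(centroids, dtype=float).copy()
    for _ in range(n_iter):
        # Distances of every sample to every centre, guarded from zero.
        d = np.abs(x[None, :] - c[:, None]) + 1e-12
        # Membership update: u_ij = 1 / sum_k (d_ij / d_kj)^(2/(m-1)).
        u = 1.0 / np.sum(
            (d[:, None, :] / d[None, :, :]) ** (2.0 / (m - 1.0)), axis=1
        )
        # Centre update: weighted mean of the samples with weights u^m.
        w = u ** m
        c = (w @ x) / w.sum(axis=1)
    return c, u
```

Segmenting an MR slice then amounts to assigning each pixel to the cluster in which it has the highest membership.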