Mrityunjaya V. Latte
JSSATE Noida
Publication
Featured researches published by Mrityunjaya V. Latte.
Digital Signal Processing | 2006
Mrityunjaya V. Latte; Narasimha H. Ayachit; D.K. Deshpande
Abstract In this paper, a variant of the set partitioned embedded block coder (SPECK) image compression algorithm, called listless SPECK (LSK), is presented. LSK operates without lists and is suitable for a fast, simple hardware implementation. It has a fixed, predetermined memory requirement about 50% larger than that needed for the image. Instead of lists, a state table with two bits per coefficient keeps track of block coding and of the information already encoded. LSK sparsely marks selected block nodes of insignificant blocks in the state table, in such a way that large groups of predictably insignificant pixels are easily identified and skipped during coding. The image data is stored as a one-dimensional recursive zigzag array for computational efficiency and algorithmic simplicity. The performance of the proposed algorithm on standard test images is nearly the same as that of SPECK.
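In listless coders, a "one-dimensional recursive zigzag array" is typically a Z-order (Morton) linearization, under which every 2^k x 2^k block occupies a contiguous index range, so block membership can be computed by index arithmetic instead of list traversal. A minimal sketch, assuming Morton order (the paper's exact mapping may differ):

```python
def morton(row, col, nbits=16):
    """Z-order (Morton) index: interleave the bits of row and col.
    Every 2^k x 2^k block then maps to a contiguous index range,
    which is what lets a listless coder skip whole insignificant
    blocks with simple index arithmetic. `nbits` is an assumption."""
    idx = 0
    for b in range(nbits):
        idx |= ((row >> b) & 1) << (2 * b + 1)  # row bit -> odd position
        idx |= ((col >> b) & 1) << (2 * b)      # col bit -> even position
    return idx
```

For example, the 2x2 block with corners (2, 2) and (3, 3) maps to the contiguous indices 12-15.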
International Journal of Advanced Computer Science and Applications | 2011
Lalitha Y. S; Mrityunjaya V. Latte
In this paper, a hierarchical coding technique for variable-bit-rate services is developed using an embedded zero-block coding approach. The suggested approach enhances variable-rate coding through a zerotree-based block-coding architecture with context modeling, for low complexity and high performance. The proposed algorithm uses the significance state table that forms the context model to control the coding passes, achieving low memory requirements and low implementation complexity with nearly the same performance as existing coding techniques.

Keywords: image coding; embedded block coding; context modeling; multi-rate services.

I. INTRODUCTION

With the rapid development of heterogeneous services in imaging applications, future digital medical image and video coding applications face various limitations in available resources. The traditional multi-bit-stream approach to the heterogeneity issue is constrained and inefficient for multi-bit-rate applications. Multi-bit-stream coding techniques allow partial decoding at various resolution and quality levels. Several scalable coding algorithms have been proposed in the international standards over the past decade, but these methods can accommodate only relatively limited decoding properties. The rapid growth of digital imaging technology, in conjunction with an ever-expanding array of access technologies, has led to a new set of requirements for image compression algorithms. Not only are high-quality reconstructed medical images required at medium-to-low bit rates, but as the bit rate decreases, the quality of the reconstructed MRI image should also degrade gracefully. The traditional multi-bit-stream solution to widely varying user resources is both inefficient and rapidly becoming impractical. The bit-level scalable codes developed for this system allow optimal reconstruction of a medical image from an arbitrary truncation point within a single bit stream.
For progressive transmission, image browsing, medical image analysis, multimedia applications, and compatible transcoding in a digital hierarchy of multiple bit rates, obtaining the best MRI image quality in an embedded fashion, i.e. with all encoded bits compatible with the target bit rate, is a bottleneck for today's engineers. Medical images constitute huge data sets, and encoding them at a lower bit rate results in loss of data, which in turn yields very low image quality under compression. For transmission over a noisy channel, the problem becomes more severe because of the narrow bandwidth. Various algorithms have been proposed for encoding and compressing MRI image data before transmission. These algorithms show strong results on high-bandwidth systems but poor results on low-data-rate systems. The problem of transmitting MRI images over a low-bit-rate channel can be overcome if the medical image data are encoded and compressed such that the data bit rate is compatible with the available low bit rate. The embedded zerotree wavelet algorithm is an image compression algorithm that encodes bits into the bit stream in order of importance, embedding the bit stream in a hierarchical fashion.
international conference on digital image processing | 2010
Geeta Hanji; Mrityunjaya V. Latte
Removing noise from images is a challenging problem for researchers. This paper proposes a two-phase, threshold-based median filtering technique for removing salt-and-pepper impulse noise. It is implemented as a two-pass algorithm: in the first pass, corrupted pixels are detected using a min-max strategy and an adaptive working window based on the estimated noise density; the second phase is a threshold-based filtering technique that corrects the corrupted pixels with a valid median. Experimental results show that the proposed technique performs far better than many of the efficient median-based filtering techniques reported in the literature, in terms of peak signal-to-noise ratio (PSNR) and visual perception, on images corrupted by impulse noise at densities even as high as seventy percent.
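A minimal sketch of the two-pass idea, assuming a plain min-max detector on a fixed 3x3 window (the paper uses an adaptive window driven by the estimated noise density, so this is an illustration, not the paper's method):

```python
import numpy as np

def detect_and_filter(img, w=3):
    """Two-pass salt-and-pepper filter sketch.
    Pass 1: flag pixels equal to their window min or max (min-max strategy).
    Pass 2: replace flagged pixels with the median of unflagged neighbours."""
    h, wd = img.shape
    pad = w // 2
    padded = np.pad(img, pad, mode="reflect")
    noisy = np.zeros((h, wd), dtype=bool)
    out = img.copy()
    # Pass 1: detection — impulses sit at a window extreme
    for i in range(h):
        for j in range(wd):
            win = padded[i:i + w, j:j + w]
            if img[i, j] == win.min() or img[i, j] == win.max():
                noisy[i, j] = True
    # Pass 2: filtering — median of the clean (unflagged) neighbours only
    npad = np.pad(noisy, pad, mode="reflect")
    for i in range(h):
        for j in range(wd):
            if noisy[i, j]:
                win = padded[i:i + w, j:j + w]
                clean = win[~npad[i:i + w, j:j + w]]
                if clean.size:
                    out[i, j] = np.median(clean)
    return out
```

Note the min-max test alone over-flags flat regions; the adaptive window and threshold of the paper are what make the detection reliable in practice.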
Image Processing and Communications | 2011
Geeta Hanji; Mrityunjaya V. Latte
A new impulse noise detection and filtering algorithm is proposed for restoring images that are highly corrupted by impulse noise. It is based on the average absolute value of four convolutions obtained with one-dimensional Laplacian operators. The proposed algorithm can effectively remove impulse noise over a wide range of noise densities and produces better results in terms of qualitative and quantitative measures of the images, even at noise densities as high as 90%. Extensive simulations show that the proposed algorithm outperforms many of the existing switching median filters in terms of noise suppression and detail preservation.
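The detector can be sketched as follows, assuming 3-point 1-D Laplacians in the horizontal, vertical, and two diagonal directions and an illustrative threshold; the paper's exact kernel sizes and threshold are not reproduced here:

```python
import numpy as np

def impulse_map(img, threshold=120):
    """Flag a pixel as impulse noise when the average absolute response
    of four directional 1-D Laplacians exceeds a threshold.
    Kernel shape (3-point) and threshold value are assumptions."""
    f = img.astype(np.int64)
    p = np.pad(f, 1, mode="reflect")
    c = p[1:-1, 1:-1]                 # the centre pixel itself
    neighbour_sums = [
        p[1:-1, :-2] + p[1:-1, 2:],   # horizontal neighbours
        p[:-2, 1:-1] + p[2:, 1:-1],   # vertical neighbours
        p[:-2, :-2] + p[2:, 2:],      # main diagonal
        p[:-2, 2:] + p[2:, :-2],      # anti-diagonal
    ]
    # |left + right - 2*centre| is the 1-D Laplacian magnitude per direction
    responses = [np.abs(n - 2 * c) for n in neighbour_sums]
    avg = sum(responses) / 4.0
    return avg > threshold
```

Averaging the four directions (rather than taking the minimum) means a pixel lying on a thin edge still has low responses along the edge direction, which keeps edge pixels from being flagged.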
Journal of intelligent systems | 2017
Sangeeta K. Siri; Mrityunjaya V. Latte
Abstract Liver segmentation from abdominal computed tomography (CT) scan images is a complicated and challenging task, owing to the haziness of the liver pixel range, neighboring organs sharing the liver's intensity level, and the presence of noise. Segmentation is necessary in the detection, identification, analysis, and measurement of objects in CT scan images. A novel approach is proposed to meet the challenges of extracting the liver from abdominal CT scan images. The proposed approach consists of three phases: (1) preprocessing, (2) transformation of the CT scan image to a neutrosophic set, and (3) postprocessing. In preprocessing, noise in the CT scan is reduced by a median filter. A “new structure” is introduced to transform a CT scan image into the neutrosophic domain, which is expressed using three membership subsets: true subset (T), false subset (F), and indeterminacy subset (I). This transform approximately extracts the liver structure. In the postprocessing phase, a morphological operation is performed on the indeterminacy subset (I). A novel algorithm is designed to identify starting points within the liver section automatically. The fast marching method is applied at these points and grows outward to detect the accurate liver boundary. The proposed segmentation algorithm is evaluated using area- and distance-based metrics.
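For orientation, the classical neutrosophic image transform (not the paper's “new structure”, which is its own contribution) maps each pixel to a (T, I, F) triple via the normalized local mean and its deviation. A hedged sketch:

```python
import numpy as np

def to_neutrosophic(img, w=3):
    """Classical neutrosophic image transform (illustrative, not the
    paper's 'new structure'). T: normalized local mean; I: normalized
    |pixel - local mean| (indeterminacy); F = 1 - T."""
    pad = w // 2
    p = np.pad(img.astype(float), pad, mode="reflect")
    h, wd = img.shape
    gbar = np.zeros((h, wd))
    # local mean over a w x w window
    for i in range(h):
        for j in range(wd):
            gbar[i, j] = p[i:i + w, j:j + w].mean()
    T = (gbar - gbar.min()) / (gbar.max() - gbar.min() + 1e-12)
    delta = np.abs(img.astype(float) - gbar)
    I = (delta - delta.min()) / (delta.max() - delta.min() + 1e-12)
    F = 1.0 - T
    return T, I, F
```

The I subset concentrates on pixels that disagree with their local mean, which is why the paper's postprocessing (morphology, fast marching) operates on I to refine the liver boundary.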
Journal of Computer Applications in Technology | 2015
T. M. P. Rajkumar; Mrityunjaya V. Latte
A novel search algorithm called the superimposed diamond search algorithm (SDSA), based on the lifting wavelet transform (LWT), is proposed in this paper for medical image compression. Fuzzy C-means (FCM) clustering is applied to extract the region of interest (ROI) from the medical image. The MAXSHIFT method is used to scale the coefficients so that the bits associated with the ROI are placed in higher bit planes than the bits associated with the background, without requiring shape information and without calculating an ROI mask. SDSA keeps track of significant pixels of each wavelet sub-band with a hexagonal search, in left-to-right, top-to-bottom scan order. The experimental results show a better compression ratio than existing methods such as set partitioning in hierarchical trees (SPIHT) and the embedded zerotree wavelet (EZW).
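The MAXSHIFT idea can be illustrated on integer coefficients: shift the ROI up by s bits, with 2^s larger than every background magnitude, so the decoder separates ROI from background purely by magnitude and never needs the mask. Function names and the round-trip framing below are mine, not the paper's:

```python
import numpy as np

def maxshift_encode(coeffs, roi_mask):
    """Shift ROI coefficients up by s bits, where 2**s exceeds the
    largest background magnitude. All ROI bit planes then sit above
    all background bit planes (integer coefficients assumed)."""
    bg_max = int(np.abs(coeffs[~roi_mask]).max())
    s = int(np.ceil(np.log2(bg_max + 1)))   # smallest s with 2**s > bg_max
    out = coeffs.copy()
    out[roi_mask] = coeffs[roi_mask] << s
    return out, s

def maxshift_decode(coeffs, s):
    """Any coefficient with magnitude >= 2**s must belong to the ROI,
    so it is shifted back down — no shape information is needed."""
    out = coeffs.copy()
    roi = np.abs(out) >= (1 << s)
    out[roi] >>= s
    return out
```

Because ROI membership is recovered from magnitude alone, the encoder never transmits the FCM-derived mask, which is the point the abstract makes.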
Journal of intelligent systems | 2017
S. Pramod Kumar; Mrityunjaya V. Latte
Abstract The traditional segmentation methods available for pulmonary parenchyma are not accurate because most of them exclude nodules or tumors adhering to the lung pleural wall as fat. In this paper, several techniques are used across different phases, including two-dimensional (2D) optimal threshold selection and 2D reconstruction for lung parenchyma segmentation. Lung parenchyma boundaries are then repaired using an improved chain code and Bresenham pixel interconnection. The proposed segmentation-and-repair method is fully automated. Here, 21 thoracic computed tomography slices containing juxtapleural nodules and 115 lung parenchyma scans are used to verify the robustness and accuracy of the proposed method. Results are compared with the most-cited active contour methods. Empirical results show that the proposed fully automated method for segmenting lung parenchyma is more accurate: it is 100% sensitive to the inclusion of nodules/tumors adhering to the lung pleural wall, juxtapleural nodule segmentation accuracy is >98%, and lung parenchyma segmentation accuracy is >96%.
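The Bresenham pixel interconnection used for boundary repair joins two contour gap endpoints with a discrete line of connected pixels. A standard implementation (variable names are mine):

```python
def bresenham(x0, y0, x1, y1):
    """Bresenham line between two boundary gap endpoints — the pixel
    interconnection step that closes breaks in the repaired lung contour.
    Returns every (x, y) pixel on the line, endpoints included."""
    points = []
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx - dy
    while True:
        points.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 > -dy:        # step in x
            err -= dy
            x0 += sx
        if e2 < dx:         # step in y
            err += dx
            y0 += sy
    return points
```

Using integer error accumulation instead of floating-point slope is what makes the interconnection exact and fast on pixel grids.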
Journal of intelligent systems | 2017
S. Pramod Kumar; Mrityunjaya V. Latte
Abstract Lung segmentation is a fundamental requirement for computer-aided diagnosis of lung diseases. In this paper, a two-dimensional (2D) Otsu algorithm optimized by Darwinian particle swarm optimization (DPSO) and fractional-order Darwinian particle swarm optimization (FODPSO) is proposed to segment the pulmonary parenchyma from lung images obtained through computed tomography (CT) scans. The proposed method extracts pulmonary parenchyma from multi-slice CT, a preprocessing step in identifying pulmonary diseases such as emphysema, tumors, and lung cancer. Image segmentation plays a significant role in automated pulmonary disease diagnosis. Traditional 2D Otsu relies on exhaustive search, and its main disadvantages are complex computation and long processing time. In this paper, the 2D Otsu method optimized by DPSO and FODPSO is developed to reduce both. Particle swarm optimization (PSO) is widely used to speed up the computation while maintaining the same efficiency; in the proposed algorithm, PSO's tendency to become trapped in local optima is overcome. The segmentation technique is assessed and compared with the traditional 2D Otsu method, and the test results demonstrate that the proposed strategy gives better results. The algorithm is tested on the Lung Image Database Consortium image collection.
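As a baseline for the exhaustive search that DPSO/FODPSO replaces, the 1-D Otsu threshold search can be sketched as below (the paper's 2-D variant adds a local-mean axis to the histogram; this simplified sketch keeps only the between-class-variance objective):

```python
import numpy as np

def otsu_threshold(img):
    """1-D Otsu by exhaustive search: try every threshold t and keep
    the one maximizing between-class variance w0*w1*(mu0 - mu1)^2.
    This full scan over t is the costly step a PSO-style search replaces."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue                       # one class empty: skip
        mu0 = (levels[:t] * p[:t]).sum() / w0
        mu1 = (levels[t:] * p[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t
```

In the 2-D version the search space becomes 256x256 threshold pairs, which is why swarm-based search pays off: particles sample the objective instead of enumerating it.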
international conference on advanced computing | 2012
Geeta Hanji; Mrityunjaya V. Latte; N. M. Shweta
A nonlinear, decision-based algorithm for removing blotches in grayscale images corrupted by impulse noise is proposed in this paper. The algorithm is implemented in two stages. In the first stage, a decision rule based on a switching threshold is applied to the whole image to classify pixels as corrupted or uncorrupted. In the second stage, a new pixel value is estimated only for the corrupted pixels. The algorithm uses an adaptive-length window whose maximum size is 5×5, to avoid the blurring caused by large windows. However, the restricted window size renders the median operation less effective when noise is excessive, in which case the proposed algorithm automatically switches to mean filtering. The proposed algorithm is tested on different images. Its performance is analyzed quantitatively in terms of mean square error (MSE), peak signal-to-noise ratio (PSNR), image enhancement factor (IEF), and computation time, and compared with other algorithms. Extensive simulations show that the proposed algorithm removes noise effectively even at noise levels as high as 50% and preserves edges without loss, producing better results in terms of the qualitative and quantitative measures of the image.
Iete Journal of Research | 2018
S. Pramod Kumar; Mrityunjaya V. Latte
ABSTRACT Computer-aided detection and diagnosis (CAD) of lung-related diseases is helpful for early detection. Lung parenchyma segmentation is considered a prerequisite for most CAD systems. The available traditional methods for lung parenchyma segmentation are not accurate because nodules that adhere to the lung pleura are recognized as fat. This paper proposes an automated lung parenchyma segmentation for accurate detection of lung nodules, mainly juxtapleural nodules. The proposed method includes a bidirectional chain code to improve the segmentation, and a support vector machine classifier to avoid false inclusion of regions. The method is verified on various datasets to establish its robustness. This automated method provides 97% segmentation accuracy compared with ground truth obtained from experts, drastically reducing radiologist workload and intervention.