
Publication


Featured research published by Nithin Nagaraj.


IEEE Transactions on Circuits and Systems for Video Technology | 2004

Efficient, low-complexity image coding with a set-partitioning embedded block coder

William A. Pearlman; Asad Islam; Nithin Nagaraj; Amir Said

We propose an embedded, block-based, image wavelet transform coding algorithm of low complexity. It uses a recursive set-partitioning procedure to sort subsets of wavelet coefficients by maximum magnitude with respect to thresholds that are integer powers of two. It exploits two fundamental characteristics of an image transform: the well-defined hierarchical structure, and energy clustering in frequency and in space. The two partition strategies allow for versatile and efficient coding of several image transform structures, including dyadic, blocks inside subbands, wavelet packets, and the discrete cosine transform (DCT). We describe the use of this coding algorithm in several implementations, including reversible (lossless) coding and its adaptation for color images, and show extensive comparisons with other state-of-the-art coders, such as set partitioning in hierarchical trees (SPIHT) and JPEG2000. We conclude that this algorithm, in addition to being very flexible, retains all the desirable features of these algorithms and is highly competitive with them in compression efficiency.
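The recursive set-partitioning idea in the abstract can be illustrated with a minimal sketch (not the paper's implementation; the function names and plain quadtree split are illustrative): a set is significant at threshold T = 2^n if its maximum magnitude is at least T, and significant sets are split into quadrants until individual significant coefficients are isolated.

```python
def is_significant(coeffs, r0, c0, h, w, threshold):
    """True if any coefficient in the block has magnitude >= threshold."""
    return any(abs(coeffs[r][c]) >= threshold
               for r in range(r0, r0 + h)
               for c in range(c0, c0 + w))

def partition_pass(coeffs, r0, c0, h, w, threshold, found):
    """Recursively quadtree-split significant blocks down to single pixels.

    A real coder would emit a 0 bit for each insignificant set and a 1 bit
    for each split; here we only collect the significant coefficient
    positions to show the sorting behaviour.
    """
    if not is_significant(coeffs, r0, c0, h, w, threshold):
        return  # insignificant set: pruned with a single decision
    if h == 1 and w == 1:
        found.append((r0, c0))  # significant coefficient located
        return
    h2, w2 = max(h // 2, 1), max(w // 2, 1)
    for dr, dc, sh, sw in ((0, 0, h2, w2), (0, w2, h2, w - w2),
                           (h2, 0, h - h2, w2), (h2, w2, h - h2, w - w2)):
        if sh > 0 and sw > 0:
            partition_pass(coeffs, r0 + dr, c0 + dc, sh, sw, threshold, found)

coeffs = [[0, 0, 0, 0],
          [0, 34, 0, 0],
          [0, 0, 0, -18],
          [0, 0, 0, 0]]
found = []
partition_pass(coeffs, 0, 0, 4, 4, 32, found)  # threshold 2^5 = 32
```

Lowering the threshold to the next power of two (16) would also locate the coefficient of magnitude 18, which is how successive passes refine the embedded bitstream.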


Medical Imaging 2004: Image Processing | 2004

Automatic partitioning of head CTA for enabling segmentation

Srikanth Suryanarayanan; Rakesh Mullick; Yogish Mallya; Vidya Pundalik Kamath; Nithin Nagaraj

Radiologists perform a CT Angiography procedure to examine vascular structures and associated pathologies such as aneurysms. Volume rendering is used to exploit the volumetric capabilities of CT, providing complete interactive 3-D visualization. However, bone forms an occluding structure and must be segmented out. The anatomical complexity of the head creates a major challenge in the segmentation of bone and vessel. An analysis of the head volume reveals varying spatial relationships between vessel and bone that can be separated into three sub-volumes: “proximal”, “middle”, and “distal”. The “proximal” and “distal” sub-volumes contain good spatial separation between bone and vessel (carotid referenced here). Bone and vessel appear contiguous in the “middle” partition, which remains the most challenging region for segmentation. The partition algorithm is used to automatically identify these partition locations so that different segmentation methods can be developed for each sub-volume. The partition locations are computed using bone, image entropy, and sinus profiles along with a rule-based method. The algorithm is validated on 21 cases (varying volume sizes, resolution, clinical sites, pathologies) using ground truth identified visually. The algorithm is also computationally efficient, processing a 500+ slice volume in 6 seconds (about 0.01 seconds per slice), which makes it attractive for pre-processing large volumes. The partition algorithm is integrated into the segmentation workflow. Fast and simple algorithms are implemented for processing the “proximal” and “distal” partitions. Complex methods are restricted to only the “middle” partition. The partition-enabled segmentation has been successfully tested and results are shown from multiple cases.
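One ingredient named in the abstract, the image entropy profile along the slice axis, can be sketched in a few lines (a toy illustration only; the paper's rule-based partitioner and the bone/sinus profiles are not shown, and the data here is invented):

```python
import math
from collections import Counter

def slice_entropy(slice_pixels):
    """Shannon entropy (bits/pixel) of one slice's intensity histogram."""
    counts = Counter(slice_pixels)
    n = len(slice_pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Toy volume: each "slice" is a flat list of intensities. A rule-based
# partitioner could, for example, place a boundary where the entropy
# profile changes sharply between adjacent slices.
volume = [
    [0, 0, 0, 0, 0, 0, 0, 1],   # nearly uniform slice: low entropy
    [0, 1, 2, 3, 0, 1, 2, 3],   # four equiprobable intensities: 2 bits
]
profile = [slice_entropy(s) for s in volume]
```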


Visual Communications and Image Processing | 2004

Block based embedded color image and video coding

Nithin Nagaraj; William A. Pearlman; Asad Islam

The Set Partitioned Embedded bloCK coder (SPECK) has been found to perform comparably to the best-known still grayscale image coders such as EZW, SPIHT, and JPEG2000. In this paper, we first propose Color-SPECK (CSPECK), a natural extension of SPECK to handle color still images in the YUV 4:2:0 format. Extensions to other YUV formats are also possible. PSNR results indicate that CSPECK is among the best-known color coders, while the perceptual quality of reconstruction is superior to SPIHT and JPEG2000. We then propose a moving-picture coding system called Motion-SPECK with CSPECK as the core algorithm in an intra-based setting. Specifically, we demonstrate two modes of operation of Motion-SPECK, namely the constant-rate mode, where every frame is coded at the same bit-rate, and the constant-distortion mode, where we ensure the same quality for each frame. Results on well-known CIF sequences indicate that Motion-SPECK performs comparably to Motion-JPEG2000 while the visual quality of the sequence is in general superior. Both CSPECK and Motion-SPECK automatically inherit all the desirable features of SPECK, such as embeddedness, low computational complexity, highly efficient performance, fast decoding, and low dynamic memory requirements. The intended applications of Motion-SPECK would be high-end and emerging video applications such as high-quality digital video recording systems, Internet video, and medical imaging.


Medical Imaging 2004: Image Processing | 2004

Zero-distortion lossless data embedding

Nithin Nagaraj; Rakesh Mullick

All known methods of lossless or reversible data embedding suffer from two major disadvantages: 1) the embedded image suffers some distortion, however small, by the very process of embedding; and 2) a special parser (decoder) is required for the client to remove the embedded data in order to obtain the original image losslessly. We propose a novel lossless data embedding method in which both disadvantages are circumvented. Zero-distortion lossless data embedding (ZeroD-LDE) achieves zero distortion of the embedded image for all viewing purposes, while clients without any specialized parser can still recover the original image losslessly, though without direct access to the embedded data. The fact that not all gray levels are used by most images is exploited to embed data by selective lossless compression of run-lengths of zeros (or any compressible pattern). Contiguous runs of zeros are changed such that the leading zero is set to the maximum original intensity plus the run length, and the succeeding zeros are converted to the embedded data (plus the maximum original intensity), thus achieving extremely high embedding capacities. This way, the histograms of the host data and the embedded data do not overlap, and we obtain zero distortion by using the window-level setting of standard DICOM viewers. The embedded image is thus not only DICOM compatible but also visually zero-distortion and requires no clinical validation.
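The zero-run embedding rule described above can be sketched as follows (a simplified toy consistent with the abstract's description, not the paper's exact scheme; function names and the payload alphabet are assumptions, and payload symbols are kept small and nonnegative):

```python
def embed(pixels, payload, max_val):
    """Hide payload symbols in runs of zeros, ZeroD-LDE style (sketch).

    A run of k >= 2 zeros becomes the marker (max_val + k) followed by the
    next k-1 payload symbols, each shifted by max_val. All embedded values
    exceed the host maximum, so the two histograms never overlap; a viewer
    windowed to [0, max_val] shows the original image unchanged.
    """
    out, data, i = list(pixels), list(payload), 0
    while i < len(out):
        if out[i] == 0:
            j = i
            while j < len(out) and out[j] == 0:
                j += 1
            k = j - i
            if k >= 2 and len(data) >= k - 1:
                out[i] = max_val + k                  # leading zero -> marker
                for t in range(1, k):
                    out[i + t] = max_val + data.pop(0)  # zeros -> payload
            i = j
        else:
            i += 1
    return out

def extract(embedded, max_val):
    """Recover the original pixels and the payload losslessly."""
    pixels, payload, i = list(embedded), [], 0
    while i < len(pixels):
        if pixels[i] > max_val:
            k = pixels[i] - max_val              # marker stores run length
            payload.extend(pixels[i + t] - max_val for t in range(1, k))
            pixels[i:i + k] = [0] * k            # restore the zero run
            i += k
        else:
            i += 1
    return pixels, payload

original = [5, 0, 0, 0, 7, 0, 0, 9]
stego = embed(original, [3, 1, 2], max_val=9)
recovered, payload = extract(stego, max_val=9)
```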


Medical Imaging 2003: PACS and Integrated Medical Information Systems: Design and Evaluation | 2003

Block-based conditional entropy coding for medical image compression

Sriperumbudur Vangeepuram Bharath Kumar; Nithin Nagaraj; Sudipta Mukhopadhyay; Xiaofeng Xu

In this paper, we propose a block-based conditional entropy coding scheme for medical image compression using the 2-D integer Haar wavelet transform. The main motivation for pursuing conditional entropy coding is that the first-order conditional entropy is theoretically never greater than the first- and second-order entropies. We propose a sub-optimal scan order and an optimum block size for performing conditional entropy coding for various modalities. We also propose that a similar scheme can be used to obtain a sub-optimal scan order and an optimum block size for other wavelets. The proposed approach is motivated by a desire to perform better than JPEG2000 in terms of compression ratio, and we point towards a block-based conditional entropy coder with the potential to do so. Though we do not describe a complete first-order conditional entropy coder, a conditional adaptive arithmetic coder can come arbitrarily close to the theoretical conditional entropy. All the results in this paper are based on a medical image data set of various bit-depths and modalities.
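The motivating inequality H(X | X_prev) ≤ H(X) can be checked numerically on a toy scan-order sequence (an illustration of the principle, not the paper's coder; the helper names are assumptions):

```python
import math
from collections import Counter

def entropy(symbols):
    """First-order (marginal) Shannon entropy in bits/symbol."""
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n)
                for c in Counter(symbols).values())

def conditional_entropy(symbols):
    """H(X_i | X_{i-1}) estimated from adjacent pairs along the scan order."""
    pairs = list(zip(symbols, symbols[1:]))
    n = len(pairs)
    joint = Counter(pairs)                      # counts of (prev, cur)
    prev = Counter(p for p, _ in pairs)         # counts of prev alone
    return -sum((c / n) * math.log2(c / prev[p])
                for (p, _), c in joint.items())

# A predictable scan: knowing the previous symbol reduces uncertainty,
# so the conditional entropy falls below the marginal entropy.
scan = [0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1]
```

A conditional adaptive arithmetic coder driven by the (prev, cur) statistics would approach the lower, conditional figure rather than the marginal one.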


Medical Imaging 2005: Image Processing | 2005

On the use of lossless integer wavelet transforms in medical image segmentation

Nithin Nagaraj; Yogish Mallya

Recent trends in medical image processing involve computationally intensive processing techniques on large data sets, especially for 3D applications such as segmentation, registration, and volume rendering. Multi-resolution image processing techniques have been used to speed up these methods. However, the well-known techniques currently used in multi-resolution medical image processing rely on Gaussian-based or other equivalent floating-point representations that are lossy and irreversible. In this paper, we study the use of Integer Wavelet Transforms (IWT) to address the issue of lossless representation and reversible reconstruction for such medical image processing applications, while still retaining the benefits which floating-point transforms offer, such as high speed and efficient memory usage. In particular, we consider three low-complexity reversible wavelet transforms, namely the Lazy wavelet, the Haar or (1,1) wavelet, and the S+P transform, as against the Gaussian filter, for multi-resolution speed-up of an automatic bone removal algorithm for abdomen CT Angiography. Perfect-reconstruction integer wavelet filters have the ability to recover the original data set exactly at any step in the application. An additional advantage of the reversible wavelet representation is that it is suitable for lossless compression for purposes of storage, archiving, and fast retrieval. Given that even a slight loss of information in medical image processing can be detrimental to diagnostic accuracy, IWTs seem to be the ideal choice for multi-resolution based medical image segmentation algorithms. They could also be useful for other medical image processing methods.
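The Haar or (1,1) transform mentioned above (also known as the S-transform) is a two-line lifting scheme; a sketch of one decomposition level with its exact inverse (standard formulation, not code from the paper):

```python
def haar_forward(x):
    """One level of the reversible integer Haar ((1,1)/S) transform:
    s = floor((a + b) / 2) is the low-pass average, d = a - b the detail.
    Integer-only arithmetic, so no floating-point rounding loss."""
    s = [(a + b) >> 1 for a, b in zip(x[::2], x[1::2])]
    d = [a - b for a, b in zip(x[::2], x[1::2])]
    return s, d

def haar_inverse(s, d):
    """Perfect reconstruction: recovers the original integers exactly."""
    out = []
    for si, di in zip(s, d):
        a = si + ((di + 1) >> 1)
        out += [a, a - di]
    return out

row = [5, 2, 3, 3, 0, 7, 12, 9]       # even-length sample row
lowpass, detail = haar_forward(row)
```

The `s` band can drive a coarse multi-resolution pass, and because the lifting steps are reversible, the full-resolution data is recoverable at any stage.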


IEEE Region 10 Conference | 2003

A very low-complexity multi-resolution prediction-based wavelet transform method for medical image compression

Nithin Nagaraj

Wavelet-based lossless compression techniques have been popular for medical image compression due to a number of features, such as multi-resolution representation, progressive transmission, and high compression ratios. As decoding time is of paramount importance in medical applications, low-complexity wavelets are preferred for fast decoding and retrieval of data from picture archiving and communication systems (PACS), enabling quicker diagnosis and higher productivity of the physician. We propose a novel image compression system of extremely low complexity, in fact lower than the Haar wavelet, that at the same time provides higher compression ratios. The high pixel-to-pixel correlation inherent in medical images is first exploited by the application of differential pulse code modulation (DPCM), followed by a modified version of the Haar wavelet applied in an incomplete fashion. We report extensive results (first-order entropy estimates) on a large database of medical images.
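The DPCM step exploits neighbor correlation by coding prediction residuals instead of raw intensities; on smooth data the residual histogram is narrower, so its first-order entropy drops. A minimal sketch (illustrative data, not from the paper):

```python
import math
from collections import Counter
from itertools import accumulate

def entropy(vals):
    """First-order entropy estimate in bits/symbol."""
    n = len(vals)
    return -sum((c / n) * math.log2(c / n) for c in Counter(vals).values())

def dpcm(row):
    """Horizontal DPCM: store the first pixel, then left-neighbor residuals."""
    return [row[0]] + [b - a for a, b in zip(row, row[1:])]

row = [100, 101, 103, 103, 104, 107, 110, 111]   # smooth, correlated pixels
residuals = dpcm(row)

# DPCM is exactly invertible: a running sum restores the original row,
# so the step is lossless.
assert list(accumulate(residuals)) == row
```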


Medical Imaging 2003: PACS and Integrated Medical Information Systems: Design and Evaluation | 2003

Region of interest and windowing-based progressive medical image delivery using JPEG2000

Nithin Nagaraj; Sudipta Mukhopadhyay; Frederick Wilson Wheeler; Ricardo Scott Avila

An important telemedicine application is the perusal of CT scans (in digital format) from a central server housed in a healthcare enterprise, across a bandwidth-constrained network, by radiologists at remote locations for diagnostic purposes. It is generally expected that a viewing station respond to an image request by displaying the image within 1-2 seconds. Owing to limited bandwidth, it may not be possible to deliver the complete image in such a short period of time with traditional techniques. In this paper, we investigate progressive image delivery solutions using JPEG2000. An estimate of the time taken at different network bandwidths is performed to compare their relative merits. We further make use of the fact that most medical images are 12-16 bits, but would ultimately be converted to an 8-bit image via windowing for display on the monitor. We propose a windowing progressive RoI technique to exploit this, and investigate JPEG2000 RoI-based compression after applying a favorite or default window setting to the original image. Subsequent requests for different RoIs and window settings would then be processed at the server. For the windowing progressive RoI mode, we report a 50% reduction in transmission time.
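The windowing step the paper exploits, mapping a 12-16 bit raw intensity to an 8-bit display value, is the standard linear window/level operation (a generic sketch, not the paper's exact mapping; the center/width values below are illustrative):

```python
def window_level(pixel, center, width):
    """Linear window/level mapping from raw intensity to 8-bit display.
    Values below the window clip to 0, above it to 255, and values inside
    are scaled linearly across the 8-bit range."""
    lo, hi = center - width / 2, center + width / 2
    if pixel <= lo:
        return 0
    if pixel >= hi:
        return 255
    return round((pixel - lo) / (hi - lo) * 255)

# Example: a window centered at 40 with width 400 (soft-tissue-like values).
display = [window_level(p, 40, 400) for p in (-200, 40, 300)]
```

Because only 8 of the 12-16 bits survive this mapping, serving a pre-windowed image needs far less data than the full-depth original, which is the source of the reported transmission-time savings.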


Archive | 2006

Image registration system and method

Rakesh Mullick; Timothy Poston; Nithin Nagaraj


Archive | 2003

Method and apparatus for segmenting structure in CT angiography

Srikanth Suryanarayanan; Rakesh Mullick; Yogisha Mallya; Vidya Pundalik Kamath; Nithin Nagaraj

Collaboration


Dive into Nithin Nagaraj's collaborations.

Top Co-Authors


Sudipta Mukhopadhyay

Indian Institute of Technology Kharagpur


William A. Pearlman

Rensselaer Polytechnic Institute


Asad Islam

Rensselaer Polytechnic Institute
