Publication


Featured research published by Sheshadri Thiruvenkadam.


Nuclear Science Symposium and Medical Imaging Conference | 2012

Comparison of 4-class and continuous fat/water methods for whole-body, MR-based PET attenuation correction

Scott D. Wollenweber; Sonal Ambwani; Albert Henry Roger Lonn; Dattesh Shanbhag; Sheshadri Thiruvenkadam; Sandeep Suryanarayana Kaushik; Rakesh Mullick; Florian Wiesinger; Hua Qian; Gaspar Delso

The goal of this study was to compare two approaches for MR-based PET patient attenuation correction (AC) in whole-body FDG-PET imaging using a tri-modality PET/CT & MR setup. Sixteen clinical whole-body FDG patients were included in this study. Mean standardized uptake values (SUV) were measured for liver and lung volumes-of-interest for comparison. Maximum SUV values were measured in 18 FDG-avid features in ten of the patients. The AC methods compared to gold-standard CT-based AC were: segmentation of the CT (air, lung, fat, water), MR image segmentation with 4 tissue classes (air, lung, fat, water), and segmentation with air, lung, and a continuous fat/water method. Results: The magnitude of uptake value differences induced by CT-based image segmentation was similar to, but lower on average than, that found using the MR-derived AC methods. The average liver SUV difference from that found using CTAC was 1.3%, 10.4%, and 5.7% for the 4-class segmented CT, 4-class MRAC, and continuous fat/water MRAC methods, respectively. The average FDG-avid feature SUV max difference was -0.5%, 1.7%, and -1.6% for the 4-class segmented CT, 4-class MRAC, and continuous fat/water MRAC methods, respectively. Conclusion: The results demonstrated that both the 4-class and continuous fat/water AC methods provided adequate quantitation in the body, and that the continuous fat/water method was within 5.7% on average for SUV mean in liver and 1.6% on average for SUV max for FDG-avid features.
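
The two MR-derived AC strategies compared above can be pictured with a short sketch. This is a minimal illustration, not the study's implementation: the class labels, the fat-fraction map, and the 511 keV linear attenuation coefficients below are assumptions drawn from commonly quoted literature values.

    import numpy as np

    # Approximate linear attenuation coefficients at 511 keV (cm^-1);
    # these are illustrative values only, not the ones used in the study.
    MU = {"air": 0.0, "lung": 0.025, "fat": 0.086, "water": 0.096}

    def mu_map_4class(labels):
        """4-class AC: map a segmented label volume (0=air, 1=lung, 2=fat, 3=water)
        to a piecewise-constant attenuation map."""
        lut = np.array([MU["air"], MU["lung"], MU["fat"], MU["water"]])
        return lut[labels]

    def mu_map_continuous(labels, fat_fraction):
        """Continuous fat/water AC: keep discrete air/lung classes, but interpolate
        soft tissue between fat and water using a voxel-wise fat fraction in [0, 1]."""
        mu = mu_map_4class(labels).astype(float)
        soft = labels >= 2                      # fat or water voxels
        mu[soft] = (fat_fraction[soft] * MU["fat"]
                    + (1.0 - fat_fraction[soft]) * MU["water"])
        return mu

    def percent_diff(suv_method, suv_ctac):
        """Percent SUV difference against the CT-based reference, the comparison
        metric reported in the study."""
        return 100.0 * (suv_method - suv_ctac) / suv_ctac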


International Symposium on Biomedical Imaging | 2010

A region based active contour method for x-ray lung segmentation using prior shape and low level features

Pavan Annangi; Sheshadri Thiruvenkadam; Anand Raja; Hao Xu; Xiwen Sun; Ling Mao

In this work, a level set energy for segmenting the lungs from digital Posterior-Anterior (PA) chest x-ray images is presented. The primary challenge in using active contours for lung segmentation is local minima due to shading effects and the presence of strong edges from the rib cage and clavicle. We have used the availability of good contrast at the lung boundaries to extract a multi-scale set of edge/corner feature points and drive our active contour model using these features. We found that these features, when supplemented with a simple region-based data term and a shape term based on the average lung shape, are able to handle the above local-minima issues. The algorithm was tested on 1130 clinical images, giving promising results.
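
A generic form of such a region-plus-shape level-set energy is sketched below. The exact terms and weights used in the paper may differ, so treat this as an illustrative template rather than the published formulation:

    E(\phi, c_1, c_2) = \int_\Omega (I - c_1)^2 H(\phi)\,dx
                      + \int_\Omega (I - c_2)^2 \bigl(1 - H(\phi)\bigr)\,dx
                      + \lambda \int_\Omega \bigl(\phi - \phi_{\mathrm{shape}}\bigr)^2\,dx
                      + \gamma \sum_k d^2\bigl(p_k, \{\phi = 0\}\bigr)

where H is the Heaviside function, c_1 and c_2 are the mean intensities inside and outside the contour, \phi_{\mathrm{shape}} is a level-set encoding of the average lung shape, and p_k are the multi-scale edge/corner feature points whose distance d to the zero level set is penalized.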


International Symposium on Biomedical Imaging | 2010

A region-based active contour method for extraction of breast skin-line in mammograms

Sheshadri Thiruvenkadam; M. Acharyya; N. V. Neeba; S. Ranjan

In this work, we present a novel region-based active contour technique to extract the breast boundary in mammograms. Skin-line extraction in mammograms is non-trivial due to the presence of noise, intensity inhomogeneities, and scanning artifacts. In addition, weak contrast near the skin-line boundary, especially in the case of low-density breasts, makes its extraction difficult. Here, we represent the breast boundary by a smooth, parametric curve. The region-based data term of the active contour energy is based on the assumption that in a small neighborhood around the skin boundary, the intensity is piecewise constant. Further, to achieve a good initial guess, we make use of the assumption that at a large scale, the breast image is binary with a fuzzy boundary. A PDE-based energy is employed to obtain the fuzzy membership function. The smoothness term of this energy allows us to handle noise and scanning artifacts in mammograms, and allows easy extraction of a single contour, which is then input to the active contour energy. Our experiments on the MIAS database gave promising results, as validated by an experienced radiologist.
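
One common form of a PDE-based fuzzy-membership energy of this kind (a convex, relaxed two-phase segmentation, given here as an assumed illustration rather than the paper's exact functional) is:

    E(u, c_1, c_2) = \int_\Omega |\nabla u|\,dx
                   + \lambda \int_\Omega \bigl[\, u\,(I - c_1)^2 + (1 - u)\,(I - c_2)^2 \,\bigr] dx,
    \qquad u(x) \in [0, 1]

The total-variation smoothness term suppresses noise and scanning artifacts, and thresholding the coarse-scale membership u yields the single smooth contour that initializes the parametric skin-line curve.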


arXiv: Computer Vision and Pattern Recognition | 2016

Understanding the Mechanisms of Deep Transfer Learning for Medical Images

Hariharan Ravishankar; Prasad Sudhakar; Rahul Venkataramani; Sheshadri Thiruvenkadam; Pavan Annangi; Narayanan Babu; Vivek Vaidya

The ability to automatically learn task-specific feature representations has led to the huge success of deep learning methods. When large training datasets are scarce, such as in medical imaging problems, transfer learning has been very effective. In this paper, we systematically investigate the process of transferring a Convolutional Neural Network, trained on ImageNet images to perform image classification, to the kidney detection problem in ultrasound images. We study how the detection performance depends on the extent of transfer. We show that a transferred and tuned CNN can outperform a state-of-the-art feature-engineered pipeline, and that a hybridization of these two techniques achieves 20% higher performance. We also investigate the evolution of intermediate response images from our network. Finally, we compare these responses to state-of-the-art image processing filters in order to gain greater insight into how transfer learning is able to effectively manage widely varying imaging regimes.
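
A minimal sketch of this kind of transfer, written with a recent torchvision rather than the authors' original pipeline: the backbone choice (ResNet-18), the frozen layers, and the two-class kidney/background head are assumptions for illustration, not the paper's setup.

    import torch
    import torch.nn as nn
    from torchvision import models

    # Start from an ImageNet-pretrained backbone.
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

    # Freeze early layers: the "extent of transfer" is controlled by how much is frozen.
    for name, param in model.named_parameters():
        if not name.startswith(("layer4", "fc")):
            param.requires_grad = False

    # Replace the 1000-class ImageNet head with a 2-class kidney / background head.
    model.fc = nn.Linear(model.fc.in_features, 2)

    # Only the unfrozen parameters are tuned on the ultrasound data.
    optimizer = torch.optim.SGD(
        (p for p in model.parameters() if p.requires_grad),
        lr=1e-3, momentum=0.9)
    criterion = nn.CrossEntropyLoss()

Varying which layers are left trainable is one way to study how performance depends on the extent of transfer.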


Medical Image Computing and Computer-Assisted Intervention | 2017

Learning and Incorporating Shape Models for Semantic Segmentation

Hariharan Ravishankar; Rahul Venkataramani; Sheshadri Thiruvenkadam; Prasad Sudhakar; Vivek Vaidya

Semantic segmentation has been popularly addressed using fully convolutional networks (FCN), e.g. U-Net, with impressive results, and FCNs have been the forerunners in recent segmentation challenges. However, FCN approaches do not necessarily incorporate local geometry such as smoothness and shape, whereas traditional image analysis techniques have benefited greatly from them in solving segmentation and tracking problems. In this work, we address the problem of incorporating shape priors within the FCN segmentation framework. We demonstrate the utility of such a shape prior in the robust handling of scenarios such as loss of contrast and artifacts. Our experiments show approximately 5% improvement over U-Net for the challenging problem of ultrasound kidney segmentation.
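
One way to realize such a shape prior, sketched here as an assumption rather than the paper's exact architecture, is to penalize segmentation outputs that fall off a learned shape manifold, e.g. via a convolutional shape autoencoder trained on ground-truth masks:

    import torch
    import torch.nn as nn

    def shape_regularized_loss(pred_mask, gt_mask, shape_autoencoder, alpha=0.1):
        """Combine a standard segmentation loss with a shape-prior term.

        pred_mask, gt_mask: (N, 1, H, W) tensors with values in [0, 1].
        shape_autoencoder: a network pre-trained to reconstruct plausible organ
        shapes; its reconstruction of the prediction acts as the nearest valid shape.
        """
        seg_loss = nn.functional.binary_cross_entropy(pred_mask, gt_mask)

        # Project the prediction onto the learned shape space and penalize
        # the distance between the prediction and its projection.
        with torch.no_grad():
            projected = shape_autoencoder(pred_mask)
        shape_term = nn.functional.mse_loss(pred_mask, projected)

        return seg_loss + alpha * shape_term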


Medical Image Computing and Computer-Assisted Intervention | 2010

Automated interventricular septum thickness measurement from B-mode echocardiograms

Navneeth Subramanian; Dirk R. Padfield; Sheshadri Thiruvenkadam; Anand Narasimhamurthy; Sigmund Frigstad

In this work, we address the problem of automated measurement of the interventricular septum thickness, one of the key parameters in cardiology, from B-mode echocardiograms. The problem is challenging due to high levels of noise, multi-modal intensity, weak contrast due to near-field haze, and non-rigid motion of the septum across frames. We introduce a complete system for automated measurement of septum thickness from B-mode echocardiograms incorporating three main components: a 1D curve evolution algorithm using region statistics for segmenting the septum, a motion clustering method to locate the mitral valve, and a robust method to calculate the septum width from these inputs in accordance with medical standards. Our method effectively handles the challenges of such measurements and runs in near real time. Results on 57 patient recordings showed excellent agreement between the automated measurements and expert manual measurements.
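
The final thickness computation can be pictured with a deliberately simplified sketch, assuming an already segmented septum mask and a chosen measurement column; the actual system measures at a clinically defined location relative to the mitral valve and in accordance with medical standards.

    import numpy as np

    def septum_thickness_mm(septum_mask, column, pixel_spacing_mm):
        """Thickness of a binary septum mask (rows x cols) along one image column,
        taken as the extent of the segmented band in millimetres."""
        rows = np.flatnonzero(septum_mask[:, column])
        if rows.size == 0:
            return 0.0
        return (rows.max() - rows.min() + 1) * pixel_spacing_mm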


IEEE Nuclear Science Symposium | 2011

Robust motion correction for respiratory gated PET/CT using weighted averaging

K. Thielemans; Girish Gopalakrishnan; Arunabha S. Roy; V Srikrishnan; Sheshadri Thiruvenkadam; Scott D. Wollenweber; Ravindra Mohan Manjeshwar

Movement degrades image quality in PET/CT. A common strategy is to gate the PET data, reconstruct the images, register each image to a reference gate, and average the registered images (Reconstruct, Register, Average, or RRA).
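
A minimal NumPy sketch of the averaging step is given below; the form is assumed for illustration, since the paper's contribution lies in how the per-gate weights are chosen to make this step robust to noise and registration failures.

    import numpy as np

    def weighted_average(registered_gates, weights):
        """Average gated PET images after registration to the reference gate.

        registered_gates: (G, ...) array of G registered gate images.
        weights: per-gate (1-D) or per-voxel (same shape as gates) non-negative
        weights; uniform weights recover plain RRA, while down-weighting poorly
        registered gates makes the average robust.
        """
        gates = np.asarray(registered_gates, dtype=float)
        w = np.asarray(weights, dtype=float)
        if w.ndim == 1:
            # Broadcast per-gate weights across the image dimensions.
            w = w.reshape((-1,) + (1,) * (gates.ndim - 1))
        w = w / w.sum(axis=0, keepdims=True)   # normalize over gates
        return np.sum(w * gates, axis=0)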


International Conference on Information Processing in Medical Imaging | 2017

Joint Deep Learning of Foreground, Background and Shape for Robust Contextual Segmentation

Hariharan Ravishankar; Sheshadri Thiruvenkadam; Rahul Venkataramani; Vivek Vaidya

Encouraged by the success of CNNs in classification problems, CNNs are being actively applied to image-wide prediction problems such as segmentation, optic flow, reconstruction, restoration, etc. These approaches fall under the category of fully convolutional networks (FCN) and have been very successful in bringing contexts into learning for image analysis. In this work, we address the problem of segmentation from medical images. Segmentation, or object delineation, from medical images/volumes is a fundamental step for subsequent quantification tasks key to diagnosis. Semantic segmentation has been popularly addressed using FCN (e.g. U-Net) with impressive results and has been the forerunner in recent segmentation challenges. However, there are a few drawbacks of FCN approaches which recent works have tried to address. Firstly, local geometry such as smoothness and shape is not reliably captured. Secondly, the spatial context captured by FCNs, while giving the advantage of a richer representation, carries the intrinsic drawback of overfitting and is quite sensitive to appearance and shape changes. To handle the above issues, in this work we propose a hybrid of generative modeling of image formation to jointly learn the triad of foreground (F), background (B), and shape (S). Such generative modeling of F, B, and S carries the advantages of FCNs in capturing contexts. Further, we expect the approach to be useful under limited training data, to yield results that are easy to interpret, and to enable easy transfer of learning across segmentation problems. We present approximately 8% improvement over state-of-the-art FCN approaches for US kidney segmentation, while achieving comparable results on CT lung nodule segmentation.
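
A sketch of the generative decomposition described above; the compositing equation and loss weighting here are assumptions used for illustration, not the paper's exact formulation. The image is explained as shape-gated foreground plus background, and the network is trained both to reconstruct the image and to match the ground-truth shape.

    import torch
    import torch.nn as nn

    def fbs_loss(image, gt_shape, F, B, S, beta=1.0):
        """Joint foreground/background/shape loss.

        F, B: predicted foreground and background appearance maps (N, C, H, W).
        S:    predicted shape / segmentation map in [0, 1] (N, 1, H, W).
        The image is modelled as S * F + (1 - S) * B.
        """
        reconstruction = S * F + (1.0 - S) * B
        recon_loss = nn.functional.mse_loss(reconstruction, image)
        shape_loss = nn.functional.binary_cross_entropy(S, gt_shape)
        return recon_loss + beta * shape_loss

Because the reconstruction term does not require labels, this kind of decomposition is one reason such a model can remain useful when training data is limited.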


Medical Image Computing and Computer-Assisted Intervention | 2015

Robust PET Motion Correction Using Non-local Spatio-temporal Priors

Sheshadri Thiruvenkadam; K. S. Shriram; Ravindra Mohan Manjeshwar; Scott D. Wollenweber

Respiratory motion presents significant challenges for PET/CT acquisitions, potentially leading to inaccurate SUV quantitation. Non-rigid registration (NRR) of gated PET images is quite challenging due to large motion, intrinsic noise, and the need to preserve definitive features like tumors. In this work, we use non-local spatio-temporal constraints within group-wise NRR to obtain a stable framework which can work with a small number of PET gates and handle the above challenges of PET data. Additionally, we propose metrics for measuring alignment and the artifacts introduced by NRR, an issue that is rarely addressed. Our results are quantitatively compared to related works on 20 clinical PET cases.
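
A generic group-wise registration energy of this kind can be written as below; only the placement of the regularizer is sketched, as an assumption, since the specific non-local spatio-temporal prior R is the paper's contribution:

    E(\{T_g\}) = \sum_{g=1}^{G} \int_\Omega \bigl( I_g(T_g(x)) - \bar{I}(x) \bigr)^2 \, dx
               + \lambda \, R_{\mathrm{nonlocal}}\bigl(\{T_g\}\bigr),
    \qquad
    \bar{I}(x) = \frac{1}{G} \sum_{g=1}^{G} I_g(T_g(x))

where I_g are the gated PET images, T_g the per-gate deformations, and \bar{I} the implicit group mean to which all gates are jointly registered.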


Information Fusion | 2014

Guest Editorial: Special issue on medical image computing and systems

Alex Pappachen James; Sheshadri Thiruvenkadam; Joseph Suresh Paul; Michael Braun

This special issue provides a collection of papers that focus on information fusion in medical imaging to improve image quality, on applications of image fusion in medical diagnostics, and on different models and approaches for achieving image fusion. Image quality indicators, texture analysis, morphology-based studies, transform-based fusion approaches, and segmentation techniques are presented in this special issue.
