Publications


Featured research published by Vivek Vaidya.


arXiv: Computer Vision and Pattern Recognition | 2016

Understanding the Mechanisms of Deep Transfer Learning for Medical Images

Hariharan Ravishankar; Prasad Sudhakar; Rahul Venkataramani; Sheshadri Thiruvenkadam; Pavan Annangi; Narayanan Babu; Vivek Vaidya

The ability to automatically learn task-specific feature representations has led to the huge success of deep learning methods. When large training data is scarce, as in medical imaging problems, transfer learning has been very effective. In this paper, we systematically investigate the process of transferring a convolutional neural network (CNN), trained on ImageNet for image classification, to the problem of kidney detection in ultrasound images. We study how detection performance depends on the extent of transfer. We show that a transferred and tuned CNN can outperform a state-of-the-art feature-engineered pipeline, and that a hybrid of the two techniques achieves 20% higher performance. We also investigate the evolution of intermediate response images from our network. Finally, we compare these responses to state-of-the-art image processing filters in order to gain greater insight into how transfer learning is able to effectively manage widely varying imaging regimes.
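
The sketch below is a minimal illustration of the "extent of transfer" idea described above: an ImageNet-pretrained CNN is adapted to a binary kidney/non-kidney task, with a cut-off controlling how many early layers stay frozen. The VGG16 backbone, the PyTorch/torchvision framework, and the layer cut-off are illustrative assumptions, not the authors' exact setup.

    # Hedged sketch: fine-tune an ImageNet-pretrained CNN for kidney detection,
    # controlling the "extent of transfer" via how many feature layers are frozen.
    # Backbone choice (VGG16) and the cut-off are assumptions, not the paper's setup.
    import torch
    import torch.nn as nn
    from torchvision import models

    def build_transfer_model(freeze_up_to: int = 10) -> nn.Module:
        model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        # Freeze the first `freeze_up_to` feature layers; a larger value means
        # "more transfer, less tuning" of the ImageNet representation.
        for i, layer in enumerate(model.features):
            for p in layer.parameters():
                p.requires_grad = i >= freeze_up_to
        # Replace the 1000-class ImageNet head with a kidney / non-kidney head.
        model.classifier[6] = nn.Linear(model.classifier[6].in_features, 2)
        return model

    model = build_transfer_model(freeze_up_to=10)
    optimizer = torch.optim.SGD(
        (p for p in model.parameters() if p.requires_grad), lr=1e-3, momentum=0.9
    )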


Medical Image Computing and Computer Assisted Intervention | 2017

Learning and Incorporating Shape Models for Semantic Segmentation

Hariharan Ravishankar; Rahul Venkataramani; Sheshadri Thiruvenkadam; Prasad Sudhakar; Vivek Vaidya

Semantic segmentation has been popularly addressed using fully convolutional networks (FCNs, e.g. U-Net) with impressive results, and FCNs have been the forerunners in recent segmentation challenges. However, FCN approaches do not necessarily incorporate local geometry such as smoothness and shape, whereas traditional image analysis techniques have benefited greatly from them in solving segmentation and tracking problems. In this work, we address the problem of incorporating shape priors within the FCN segmentation framework. We demonstrate the utility of such a shape prior in robust handling of scenarios such as loss of contrast and artifacts. Our experiments show an approximately 5% improvement over U-Net for the challenging problem of ultrasound kidney segmentation.
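
As a rough illustration of incorporating a shape prior into an FCN loss, the sketch below adds a penalty comparing prediction and ground truth in the latent space of a small shape encoder trained on masks. The encoder architecture, its training, and the weight `lam` are assumptions; the paper's exact formulation may differ.

    # Hedged sketch: augment the per-pixel FCN loss with a shape-prior term computed
    # in the latent space of a small mask encoder. Architecture and weight `lam`
    # are illustrative assumptions, not the paper's exact shape model.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ShapeEncoder(nn.Module):
        """Maps a (soft) segmentation mask to a compact shape code."""
        def __init__(self, code_dim: int = 64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, code_dim),
            )

        def forward(self, mask):
            return self.net(mask)

    def shape_regularized_loss(logits, target, shape_enc, lam=0.5):
        # Standard per-pixel segmentation term ...
        ce = F.binary_cross_entropy_with_logits(logits, target)
        # ... plus agreement between prediction and ground truth in shape-code space.
        pred_code = shape_enc(torch.sigmoid(logits))
        gt_code = shape_enc(target)
        return ce + lam * F.mse_loss(pred_code, gt_code)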


International Symposium on Biomedical Imaging | 2016

Hybrid approach for automatic segmentation of fetal abdomen from ultrasound images using deep learning

Hariharan Ravishankar; Sahana M. Prabhu; Vivek Vaidya; Nitin Singhal

In this paper, we propose a hybrid approach combining traditional texture analysis methods with deep learning for the automatic detection and measurement of the abdominal contour in 2-D fetal ultrasound images. Following a learning-based procedure for region of interest (ROI) localization to segment the abdominal boundary, we show that convolutional neural networks (CNNs) outperform other state-of-the-art texture features and conventional classifiers on the binary classification problem of distinguishing abdomen from non-abdomen regions. However, we obtain significantly better segmentation results in identifying the best ROI containing the fetal abdomen when the predictions from the CNN are combined with those from a gradient boosting machine (GBM) using histogram of oriented gradients (HOG) features. We trained our method on a set of 70 images and tested it on a distinct set of another 70 images. We obtained a mean Dice similarity coefficient of 0.90, which shows excellent overlap with the ground truth. The mean difference in computed gestational age between our segmentation results and the ground truth is within two weeks for 90% (and within one week for 70%) of the test cases.
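
A minimal sketch of the hybrid scoring idea follows: each candidate ROI receives a CNN probability and a HOG+GBM probability, which are combined to rank ROIs. The fixed ROI size, the `cnn_predict_proba` placeholder, and the equal-weight fusion are assumptions rather than the paper's exact combination rule.

    # Hedged sketch: rank candidate ROIs by combining a CNN score with a HOG+GBM
    # score. ROIs are assumed resized to a fixed shape; `cnn_predict_proba` stands
    # in for a trained CNN, and equal-weight averaging is an assumed fusion rule.
    import numpy as np
    from skimage.feature import hog
    from sklearn.ensemble import GradientBoostingClassifier

    def hog_features(roi: np.ndarray) -> np.ndarray:
        # roi: fixed-size 2D grayscale patch.
        return hog(roi, orientations=9, pixels_per_cell=(16, 16), cells_per_block=(2, 2))

    def rank_rois(rois, cnn_predict_proba, gbm: GradientBoostingClassifier) -> int:
        scores = []
        for roi in rois:
            p_cnn = cnn_predict_proba(roi)                        # P(abdomen) from the CNN
            p_gbm = gbm.predict_proba([hog_features(roi)])[0, 1]  # P(abdomen) from HOG+GBM
            scores.append(0.5 * (p_cnn + p_gbm))
        return int(np.argmax(scores))                             # index of the best ROI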


International Symposium on Biomedical Imaging | 2017

Lung nodule detection in CT using 3D convolutional neural networks

Xiaojie Huang; Junjie Shan; Vivek Vaidya

We propose a new computer-aided detection system that uses 3D convolutional neural networks (CNNs) to detect lung nodules in low-dose computed tomography. The system leverages both a priori knowledge about lung nodules and confounding anatomical structures, and data-driven, machine-learned features and classifiers. Specifically, we generate nodule candidates using a local geometric-model-based filter and further reduce structure variability by estimating the local orientation. The nodule candidates, in the form of 3D cubes, are fed into a deep 3D convolutional neural network that is trained to differentiate nodule and non-nodule inputs. We use data augmentation techniques to generate a large number of training examples and apply regularization to avoid overfitting. On a set of 99 CT scans, the proposed system achieved state-of-the-art performance and significantly outperformed a similar hybrid system that uses conventional shallow learning. The experimental results showed the benefit of using a priori models to reduce the problem space for data-driven machine learning with complex deep neural networks. The results also showed the advantages of 3D CNNs over 2D CNNs in volumetric medical image analysis.
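
The sketch below illustrates only the candidate-classification stage: fixed-size 3D cubes around candidates go through a small 3D CNN that separates nodules from non-nodules, with flip-based augmentation as one way to enlarge the training set. The depth, filter counts, and 32-voxel cube size are assumptions, not the system's exact architecture.

    # Hedged sketch of the candidate-classification stage: a small 3D CNN labels
    # candidate cubes as nodule vs. non-nodule. Depth, filter counts, and the
    # 32-voxel cube size are assumptions.
    import torch
    import torch.nn as nn

    class Nodule3DCNN(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                nn.Conv3d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool3d(1),
            )
            self.classifier = nn.Sequential(
                nn.Flatten(), nn.Dropout(0.5),   # dropout as one form of regularization
                nn.Linear(64, 2),                # nodule vs. non-nodule
            )

        def forward(self, x):                    # x: (batch, 1, 32, 32, 32) cubes
            return self.classifier(self.features(x))

    def augment(cube: torch.Tensor) -> torch.Tensor:
        # Simple augmentation by random axis flips; cube: (1, D, H, W).
        for dim in (1, 2, 3):
            if torch.rand(1) < 0.5:
                cube = torch.flip(cube, dims=(dim,))
        return cube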


International Conference of the IEEE Engineering in Medicine and Biology Society | 2014

Improved mass detection in 3D automated breast ultrasound using region based features and multi-view information

Chuyang Ye; Vivek Vaidya; Fei Zhao

Breast cancer is one of the leading causes of cancer death among women. Early detection of breast cancer is crucial for reducing mortality rates and improving the prognosis of patients. Recently, 3D automated breast ultrasound (ABUS) has gained increasing attention for reducing subjectivity and operator dependence, and for providing 3D context of the whole breast. In this work, we propose a breast mass detection algorithm that improves voxel-based detection results by incorporating 3D region-based features and multi-view information in 3D ABUS images. Starting from the candidate mass regions produced by the voxel-based method, our approach refines the detection results in three major steps: 1) 3D mass segmentation in a geodesic active contours framework with edge points obtained by directional searching; 2) region-based single-view and multi-view feature extraction; and 3) support vector machine (SVM) classification to discriminate candidate regions as breast masses or normal background tissue. Twenty-two patients comprising 51 3D ABUS volumes with 44 breast masses were used for evaluation. The proposed approach reached sensitivities of 95%, 90%, and 70% with an average of 4.3, 3.8, and 1.6 false positives per volume, respectively. The results also indicate that multi-view information plays an important role in false positive reduction in 3D breast mass detection.
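
A minimal sketch of step 3 follows: single-view and multi-view region features are concatenated into one descriptor per candidate and classified by an SVM as mass versus normal tissue. The feature names, scaling, and RBF kernel are assumptions.

    # Hedged sketch of the SVM classification step: concatenate single-view and
    # multi-view region features and classify each candidate as mass vs. normal
    # tissue. Scaling and the RBF kernel are assumptions.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def region_descriptor(single_view_feats: np.ndarray,
                          multi_view_feats: np.ndarray) -> np.ndarray:
        # One descriptor per candidate region.
        return np.concatenate([single_view_feats, multi_view_feats])

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
    # After fitting on labelled candidates (1 = mass, 0 = normal tissue):
    #   clf.fit(X_train, y_train)
    #   p_mass = clf.predict_proba(X_candidates)[:, 1]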


International Conference on Information Processing in Medical Imaging | 2017

Joint Deep Learning of Foreground, Background and Shape for Robust Contextual Segmentation

Hariharan Ravishankar; Sheshadri Thiruvenkadam; Rahul Venkataramani; Vivek Vaidya

Encouraged by the success of CNNs in classification problems, CNNs are being actively applied to image-wide prediction problems such as segmentation, optic flow, reconstruction, and restoration. These approaches fall under the category of fully convolutional networks (FCNs) and have been very successful in bringing context into learning for image analysis. In this work, we address the problem of segmentation from medical images. Segmentation, or object delineation, from medical images/volumes is a fundamental step for subsequent quantification tasks key to diagnosis. Semantic segmentation has been popularly addressed using FCNs (e.g. U-Net) with impressive results and has been the forerunner in recent segmentation challenges. However, there are a few drawbacks of FCN approaches which recent works have tried to address. Firstly, local geometry such as smoothness and shape is not reliably captured. Secondly, the spatial context captured by FCNs, while giving the advantage of a richer representation, carries the intrinsic drawback of overfitting and is quite sensitive to appearance and shape changes. To handle these issues, we propose a hybrid approach based on generative modeling of image formation to jointly learn the triad of foreground (F), background (B), and shape (S). Such generative modeling of F, B, and S retains the advantages of FCNs in capturing context. Further, we expect the approach to be useful with limited training data, to yield results that are easy to interpret, and to enable easy transfer of learning across segmentation problems. We present an approximately 8% improvement over state-of-the-art FCN approaches for ultrasound kidney segmentation while achieving comparable results on CT lung nodule segmentation.
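
As a rough sketch of the joint F/B/S idea, the loss below combines a supervised shape term with an image-formation term in which the predicted shape composites predicted foreground and background appearances back into the input image. The MSE reconstruction term and its weighting are assumptions about the generative model, not the paper's exact objective.

    # Hedged sketch: joint learning of foreground (F), background (B) and shape (S)
    # via a composition S*F + (1-S)*B that should reconstruct the input image.
    # The MSE reconstruction term and weight `lam_recon` are assumptions.
    import torch
    import torch.nn.functional as F

    def fbs_loss(image, gt_mask, shape_logits, fg_pred, bg_pred, lam_recon=1.0):
        s = torch.sigmoid(shape_logits)                # predicted shape S in [0, 1]
        recon = s * fg_pred + (1.0 - s) * bg_pred      # generative image composition
        recon_loss = F.mse_loss(recon, image)          # image-formation consistency
        mask_loss = F.binary_cross_entropy_with_logits(shape_logits, gt_mask)
        return mask_loss + lam_recon * recon_loss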


International Symposium on Biomedical Imaging | 2014

Topological texture-based method for mass detection in breast ultrasound image

Fei Zhao; Xiaoxing Li; Soma Biswas; Rakesh Mullick; Paulo Ricardo Mendonca; Vivek Vaidya

Texture analysis plays an important role in many image processing tasks. In this work, we present a texture descriptor based on the topology of excursion sets, derived from the concept of Minkowski functionals, and evaluate its usefulness for the detection of breast masses in 2D breast ultrasound (BUS) images. The application includes three major stages: preprocessing, including candidate generation through computation of gradient concentration under a Fisher-Tippett noise model (in itself another contribution of the paper); texture feature extraction; and region classification using a Random Forests classifier. Performance of the proposed method is evaluated on 135 2D BUS images with 139 masses. Our method reaches 91% sensitivity with an average of 1.19 false detections per image, and the proposed texture feature compares favorably against the often-used grey-level co-occurrence matrices on exactly the same task.
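
The sketch below illustrates the descriptor itself: a patch is thresholded at several levels to form excursion sets, and each set contributes the three 2D Minkowski functionals (area, perimeter, Euler characteristic) to the feature vector. The threshold grid and the skimage-based estimators are assumptions about implementation detail.

    # Hedged sketch of the excursion-set texture descriptor: Minkowski functionals
    # (area, perimeter, Euler characteristic) of thresholded patches. The threshold
    # grid and the skimage estimators are implementation assumptions.
    import numpy as np
    from skimage import measure

    def minkowski_texture(patch: np.ndarray, n_thresholds: int = 8) -> np.ndarray:
        feats = []
        levels = np.linspace(patch.min(), patch.max(), n_thresholds + 2)[1:-1]
        for t in levels:
            excursion = patch >= t                          # excursion set at level t
            feats.append(float(excursion.sum()))            # area
            feats.append(measure.perimeter(excursion))      # boundary length
            feats.append(measure.euler_number(excursion))   # topology (Euler number)
        return np.asarray(feats, dtype=float)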


Proceedings of SPIE | 2011

Automated Localization of Vertebra Landmarks in MRI Images

Akshay Pai; Anand Narasimhamurthy; V.S. Veeravasarapu Rao; Vivek Vaidya

The identification of key landmark points in an MR spine image is an important step for tasks such as vertebra counting. In this paper, we propose a template-matching-based approach for automatic detection of two key landmark points, namely the second cervical vertebra (C2) and the sacrum, from sagittal MR images. The approach consists of an approximate localization of the vertebral column followed by matching with appropriate templates in order to detect/localize the landmarks. A straightforward extension of the work described here is automated classification of spine sections. It also serves as a useful building block for further automatic processing, such as extraction of regions of interest for subsequent image processing, and for aiding vertebra counting.
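
A minimal sketch of the matching step follows (not the authors' implementation): a stored C2 or sacrum template is correlated with the sagittal slice, the response is restricted to the approximately localized vertebral-column band, and the maximum gives the landmark. Template preparation and column localization are assumed to exist upstream.

    # Hedged sketch of the template-matching step: correlate a stored landmark
    # template with the sagittal slice, restrict the response to the approximate
    # vertebral-column band, and take the maximum as the landmark location.
    import numpy as np
    from skimage.feature import match_template

    def locate_landmark(sagittal_slice: np.ndarray, template: np.ndarray,
                        column_mask: np.ndarray):
        response = match_template(sagittal_slice, template, pad_input=True)
        response[~column_mask] = -1.0          # search only inside the column band
        row, col = np.unravel_index(np.argmax(response), response.shape)
        return int(row), int(col)              # e.g. the C2 or sacrum position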


Medical Imaging 2006: Visualization, Image-Guided Procedures, and Display | 2006

Volume rendering segmented data using 3D textures: a practical approach for intra-operative visualization

Navneeth Subramanian; Rakesh Mullick; Vivek Vaidya

Volume rendering has high utility in the visualization of segmented datasets. However, volume rendering of segmented labels along with the original data causes undesirable intermixing/bleeding artifacts arising from interpolation at the sharp label boundaries. This issue is further amplified in 3D texture-based volume rendering due to the inaccessibility of the interpolation stage. We present an approach that helps minimize intermixing artifacts while maintaining the high performance of 3D texture-based volume rendering, both of which are critical for intra-operative visualization. Our approach uses a 2D transfer function based classification scheme in which label distinction is achieved through an encoding that generates unique gradient values for labels. This helps ensure that labelled voxels always map to distinct regions in the 2D transfer function, irrespective of interpolation. In contrast to previously reported algorithms, our algorithm does not require multiple rendering passes and supports more than four masks. It also allows real-time modification of the colors/opacities of the segmented structures along with the original data. Additionally, these capabilities are available with minimal texture memory requirements compared with similar algorithms. Results are presented on clinical and phantom data.
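
The CPU-side sketch below illustrates only the 2D-transfer-function classification idea: each sample is classified by a lookup on (intensity, label code), so the color/opacity of an individual segmented structure can be changed by editing one column of the table in real time. The GPU encoding that keeps labels distinct under hardware interpolation is not reproduced here; the table shape and function names are assumptions.

    # Hedged CPU-side sketch of 2D-transfer-function classification only: RGBA is
    # looked up from (intensity, label code), and recoloring a structure means
    # editing one table column. The interpolation-safe GPU label encoding from the
    # paper is not reproduced; sizes and names are assumptions.
    import numpy as np

    N_INTENSITY_BINS, N_LABELS = 256, 5
    tf_table = np.zeros((N_INTENSITY_BINS, N_LABELS + 1, 4), dtype=np.float32)  # RGBA

    def set_label_appearance(label: int, rgba) -> None:
        # Real-time appearance change of one segmented structure.
        tf_table[:, label, :] = rgba

    def classify(intensity: np.ndarray, label_code: np.ndarray) -> np.ndarray:
        # intensity in [0, 1]; label_code: per-sample segmentation label (0 = none).
        i = np.clip((intensity * (N_INTENSITY_BINS - 1)).astype(int),
                    0, N_INTENSITY_BINS - 1)
        return tf_table[i, label_code]

    set_label_appearance(1, (1.0, 0.2, 0.2, 0.8))   # tint label 1 red, semi-opaque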


International Conference of the IEEE Engineering in Medicine and Biology Society | 2016

Breast lesion detection and characterization with 3D features

Arathi Sreekumari; K. S. Shriram; Vivek Vaidya

Automated breast ultrasound (ABUS) is highly effective as an adjunct technology for breast cancer screening. Automation can greatly enhance the efficiency of clinicians sifting through the large amount of data in ABUS volumes to spot lesions. We have implemented a fully automatic, generic algorithm pipeline for the detection and characterization of lesions in such 3D volumes. We compare a wide range of features for region description on their effectiveness at the dual goals of lesion detection and characterization. On multiple feature images, we compute region descriptors at lesion candidate locations, obviating the need for explicit lesion segmentation. We use a Random Forests classifier to evaluate candidate region descriptors for lesion detection. Further, we categorize true lesions as malignant or other masses (e.g. cysts). Over a database of 145 volumes with 36 biopsy-verified lesions, we achieved area under the curve (AUC) values of 92.6% for lesion detection and 89% for lesion characterization.
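
A minimal sketch of the two classification stages follows: one Random Forests model scores candidate-region descriptors for lesion detection, and a second categorizes detected lesions as malignant versus other masses. The forest sizes, 0.5 detection threshold, and feature construction are assumptions.

    # Hedged sketch of the two-stage classification: Random Forests for lesion
    # detection on candidate-region descriptors, then a second forest for
    # malignant-vs-other characterization. Sizes and threshold are assumptions.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    detector = RandomForestClassifier(n_estimators=200, random_state=0)
    characterizer = RandomForestClassifier(n_estimators=200, random_state=0)
    # Both models are assumed to be fitted on labelled training descriptors first.

    def classify_candidates(descriptors: np.ndarray):
        # descriptors: (n_candidates, n_features) region descriptors.
        p_lesion = detector.predict_proba(descriptors)[:, 1]
        detected = descriptors[p_lesion > 0.5]
        p_malignant = (characterizer.predict_proba(detected)[:, 1]
                       if len(detected) else np.empty(0))
        return p_lesion, p_malignant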
