
Publications


Featured research published by Dwarikanath Mahapatra.


Pattern Recognition | 2017

Semi-supervised learning and graph cuts for consensus based medical image segmentation

Dwarikanath Mahapatra

Medical image segmentation requires consensus ground-truth segmentations to be derived from multiple expert annotations. A novel approach is proposed that obtains consensus segmentations from experts using graph cuts (GC) and semi-supervised learning (SSL). Popular approaches use iterative Expectation Maximization (EM) to estimate the final annotation and quantify annotators' performance; such techniques risk getting trapped in local minima. We propose a self-consistency (SC) score that quantifies annotator consistency using low-level image features. SSL is used to predict missing annotations by considering global features and local image consistency. The SC score also serves as the penalty cost in a second-order Markov random field (MRF) cost function optimized using graph cuts to derive the final consensus label. Graph cuts obtain a global optimum without an iterative procedure. Experimental results on synthetic images, real data from Crohn's disease patients, and retinal images show our final segmentations to be accurate and more consistent than those of competing methods.
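A minimal sketch of the consensus idea, with a weighted per-pixel vote standing in for the paper's graph-cut/MRF optimization. The names (`self_consistency`, `consensus`) and the intensity-threshold cue are illustrative assumptions, not the authors' code:

```python
import numpy as np

def self_consistency(image, mask, thresh=0.5):
    """Fraction of pixels where an expert's annotation agrees with a crude
    low-level intensity cue; a simplified stand-in for the paper's SC score."""
    cue = (image > thresh).astype(np.uint8)
    return float((cue == mask).mean())

def consensus(image, masks):
    """Weight each expert mask by its self-consistency and take a
    weighted majority vote per pixel (stand-in for the GC/MRF step)."""
    weights = np.array([self_consistency(image, m) for m in masks])
    weights = weights / weights.sum()
    stacked = np.tensordot(weights, np.stack(masks).astype(float), axes=1)
    return (stacked >= 0.5).astype(np.uint8)

# Toy example: a bright square with three increasingly noisy expert masks.
rng = np.random.default_rng(0)
img = np.zeros((8, 8)); img[2:6, 2:6] = 1.0
truth = (img > 0.5).astype(np.uint8)
experts = [np.clip(truth + (rng.random(img.shape) < p), 0, 1).astype(np.uint8)
           for p in (0.05, 0.1, 0.4)]
fused = consensus(img, experts)
```

The consistency-based weighting down-ranks the noisiest annotator, so the fused mask tracks the reliable experts.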


Medical Image Computing and Computer-Assisted Intervention | 2017

Image Super Resolution Using Generative Adversarial Networks and Local Saliency Maps for Retinal Image Analysis

Dwarikanath Mahapatra; Behzad Bozorgtabar; Sajini Hewavitharanage; Rahil Garnavi

We propose an image super-resolution (ISR) method using generative adversarial networks (GANs) that takes a low-resolution input fundus image and generates a high-resolution super-resolved (SR) image up to a scaling factor of 16. This facilitates more accurate automated image analysis, especially for small or blurred landmarks and pathologies. Local saliency maps, which define each pixel's importance, are used to define a novel saliency loss in the GAN cost function. Experimental results show the resulting SR images have perceptual quality very close to the original images and outperform competing methods that do not weight pixels according to their importance. When used for retinal vasculature segmentation, our SR images yield accuracy levels close to those obtained with the original images.
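The saliency-weighting idea can be sketched in isolation: a reconstruction loss in which each pixel's residual is scaled by its normalized saliency, so errors at important pixels cost more. The GAN itself is omitted and `saliency_weighted_mse` is an illustrative name, not the authors' implementation:

```python
import numpy as np

def saliency_weighted_mse(sr, hr, saliency):
    """Squared error between SR and ground-truth HR images, with each
    pixel's residual scaled by its normalized saliency."""
    w = saliency / saliency.sum()
    return float((w * (sr - hr) ** 2).sum())

hr = np.ones((4, 4))
sr = hr.copy(); sr[0, 0] = 0.0                     # one reconstruction error
sal_low = np.ones((4, 4)); sal_low[0, 0] = 0.1     # error at unimportant pixel
sal_high = np.ones((4, 4)); sal_high[0, 0] = 10.0  # error at salient pixel
loss_low = saliency_weighted_mse(sr, hr, sal_low)
loss_high = saliency_weighted_mse(sr, hr, sal_high)
```

The same pixel error is penalized far more heavily when it falls on a salient region, which is what steers the generator toward clinically important structures.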


International Workshop on Machine Learning in Medical Imaging | 2016

Retinal Image Quality Classification Using Saliency Maps and CNNs

Dwarikanath Mahapatra; Pallab Kanti Roy; Suman Sedai; Rahil Garnavi

Retinal image quality assessment (IQA) algorithms use different hand-crafted features without considering the important role of the human visual system (HVS). We solve the IQA problem using the principles behind the working of the HVS. Unsupervised information from local saliency maps and supervised information from trained convolutional neural networks (CNNs) are combined to make a final decision on image quality. A novel algorithm is proposed that calculates saliency values for every image pixel at multiple scales to capture global and local image information. This extracts generalized image information in an unsupervised manner, while CNNs provide a principled approach to feature learning without the need to define hand-crafted features. The individual classification decisions are fused by weighting them according to their confidence scores. Experimental results on real datasets demonstrate the superior performance of the proposed algorithm over competing methods.
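The fusion step alone can be sketched as a confidence-weighted average of the two classifiers' probabilities, using distance from the 0.5 decision boundary as the confidence. All names are illustrative and the two probability inputs stand in for the saliency-based and CNN-based classifiers:

```python
import numpy as np

def fuse(p_saliency, p_cnn):
    """Confidence-weighted average of two 'good quality' probabilities;
    a classifier far from 0.5 gets proportionally more weight."""
    probs = np.array([p_saliency, p_cnn])
    conf = np.abs(probs - 0.5)        # how decisive each classifier is
    if conf.sum() == 0:               # both exactly undecided
        return 0.5
    w = conf / conf.sum()
    return float((w * probs).sum())

# A decisive CNN (0.9) outweighs a weakly negative saliency score (0.45).
p = fuse(0.45, 0.9)
label = "good" if p >= 0.5 else "poor"
```

This lets a confident classifier dominate while an undecided one contributes little, which is the intent of confidence-score fusion.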


Digital Image Computing: Techniques and Applications | 2016

Automatic Eye Type Detection in Retinal Fundus Image Using Fusion of Transfer Learning and Anatomical Features

Pallab Kanti Roy; Rajib Chakravorty; Suman Sedai; Dwarikanath Mahapatra; Rahil Garnavi

Retinal fundus images are mainly used by ophthalmologists to diagnose and monitor the development of retinal and systemic diseases. A number of computer-aided diagnosis (CAD) systems have been developed to automate mass screening and diagnosis of retinal diseases. The eye type (left or right eye) of a given retinal image is important metadata for a CAD system. At present, eye type is graded manually, which is time-consuming and error-prone. This article presents an automatic method for eye type detection, which can be integrated into existing retinal CAD systems to make them faster and more accurate. Our method combines transfer learning with features based on anatomical prior knowledge to maximize classification accuracy. We evaluate the proposed method on a retinal image set containing 5000 images. Our method achieves a classification accuracy of 94% (area under the receiver operating characteristic curve (AUC) = 0.990).
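A sketch of the anatomical-prior part only, under the common assumption of a standard, non-mirrored fundus view: the optic disc lies nasal to the macula, so the sign of the disc-to-macula horizontal offset separates left from right eyes. The transfer-learning branch and the fusion step are omitted; the function name and landmark inputs are illustrative:

```python
def eye_type_from_landmarks(disc_x, macula_x):
    """Right eye: optic disc appears to the right of the macula in a
    standard (non-mirrored) fundus image; left eye: disc to the left.
    x coordinates are assumed to increase rightward."""
    return "right" if disc_x > macula_x else "left"

od = eye_type_from_landmarks(disc_x=0.75, macula_x=0.5)   # disc right of macula
os_ = eye_type_from_landmarks(disc_x=0.25, macula_x=0.5)  # disc left of macula
```

In the full method such a rule would only be one feature, fused with learned CNN features to handle images where the landmarks are occluded or poorly localized.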


Medical Image Computing and Computer-Assisted Intervention | 2018

Efficient Active Learning for Image Classification and Segmentation Using a Sample Selection and Conditional Generative Adversarial Network

Dwarikanath Mahapatra; Behzad Bozorgtabar; Jean-Philippe Thiran; Mauricio Reyes

Training robust deep learning (DL) systems for medical image classification or segmentation is challenging due to the limited number of images covering different disease types and severities. We propose an active learning (AL) framework to select the most informative samples to add to the training data. We use conditional generative adversarial networks (cGANs) to generate realistic chest X-ray images with different disease characteristics by conditioning the generation on a real image sample. Informative samples to add to the training set are identified using a Bayesian neural network. Experiments show our proposed AL framework achieves state-of-the-art performance using about 35% of the full dataset, thus saving significant time and effort over conventional methods.
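The sample-selection step can be sketched on its own: a Bayesian network's predictive uncertainty is emulated here with T stochastic scoring passes, and the pool samples with the highest predictive entropy are chosen for labeling. The cGAN generator and the real classifier are omitted; all names are illustrative:

```python
import numpy as np

def select_informative(pool_probs, k):
    """pool_probs: (T, N, C) class probabilities from T stochastic passes
    over N pool samples. Rank samples by the entropy of the mean
    prediction and return indices of the k most uncertain ones."""
    mean_p = pool_probs.mean(axis=0)                        # (N, C)
    entropy = -(mean_p * np.log(mean_p + 1e-12)).sum(axis=1)
    return np.argsort(entropy)[::-1][:k]

# Toy pool: two confident samples, two maximally uncertain ones.
T, N, C = 5, 4, 2
pool = np.empty((T, N, C))
pool[:, :2] = [0.95, 0.05]      # confident predictions
pool[:, 2:] = [0.5, 0.5]        # undecided predictions
chosen = select_informative(pool, k=2)
```

The uncertain samples are exactly the ones a labeling budget is best spent on, which is the core of uncertainty-based active learning.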


arXiv: Computer Vision and Pattern Recognition | 2018

Joint Segmentation and Uncertainty Visualization of Retinal Layers in Optical Coherence Tomography Images Using Bayesian Deep Learning

Suman Sedai; Bhavna J. Antony; Dwarikanath Mahapatra; Rahil Garnavi

Optical coherence tomography (OCT) is commonly used to analyze retinal layers for the assessment of ocular diseases. In this paper, we propose a method for retinal layer segmentation and quantification of uncertainty based on Bayesian deep learning. Our method not only performs end-to-end segmentation of retinal layers but also gives a pixel-wise uncertainty measure of the segmentation output. The generated uncertainty map can be used to identify erroneously segmented image regions, which is useful in downstream analysis. We have validated our method on a dataset of 1487 images obtained from 15 subjects (OCT volumes) and compared it against state-of-the-art segmentation algorithms that do not take uncertainty into account. The proposed uncertainty-based segmentation method gives comparable or improved performance and, most importantly, is more robust against noise.
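A minimal sketch of the uncertainty estimate, assuming an MC-dropout-style Bayesian approximation: the per-pixel uncertainty is the variance of the foreground probability across T stochastic forward passes. The segmentation network is omitted and the names are illustrative:

```python
import numpy as np

def uncertainty_map(prob_maps):
    """prob_maps: (T, H, W) foreground probabilities from T stochastic
    passes. Returns (binary mean segmentation, per-pixel variance)."""
    mean = prob_maps.mean(axis=0)
    var = prob_maps.var(axis=0)
    return (mean >= 0.5).astype(np.uint8), var

# Toy volume slice: confident foreground on the left, ambiguous region right.
rng = np.random.default_rng(2)
T, H, W = 10, 6, 6
probs = np.full((T, H, W), 0.9)                  # passes agree here
probs[:, :, 3:] = rng.uniform(0, 1, (T, H, 3))   # passes disagree here
seg, var = uncertainty_map(probs)
```

High-variance pixels flag regions where the segmentation should not be trusted, which is how the uncertainty map supports downstream review.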


arXiv: Computer Vision and Pattern Recognition | 2018

Deep Multiscale Convolutional Feature Learning for Weakly Supervised Localization of Chest Pathologies in X-ray Images

Suman Sedai; Dwarikanath Mahapatra; Zongyuan Ge; Rajib Chakravorty; Rahil Garnavi

Localization of pathologies in chest X-ray images is a challenging task because of their varying sizes and appearances. We propose a novel weakly supervised method to localize chest pathologies using class-aware deep multiscale feature learning. Our method leverages intermediate feature maps from CNN layers at different stages of a deep network while training a classification model with image-level annotations of pathologies. During the training phase, a set of layer relevance weights is learned for each pathology class, and the CNN is optimized to perform pathology classification via a convex combination of feature maps from both shallow and deep layers using the learned weights. During the test phase, the predicted pathology is localized by a multiscale attention map obtained as a convex combination of the class activation maps from each stage, using the layer relevance weights learned during training. We have validated our method on 112,000 X-ray images and compared it with state-of-the-art localization methods. We experimentally demonstrate that the proposed weakly supervised method improves localization performance for small pathologies such as nodules and masses while giving comparable performance for larger pathologies, e.g., cardiomegaly.
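The test-phase combination can be sketched in isolation: class activation maps from several stages are upsampled to a common size and mixed with softmax-normalized relevance weights, giving a convex combination. The CNN and the learning of the weights are omitted; names and the nearest-neighbour upsampler are illustrative:

```python
import numpy as np

def upsample(m, size):
    """Nearest-neighbour upsampling of a square map to (size, size)."""
    f = size // m.shape[0]
    return np.kron(m, np.ones((f, f)))

def attention_map(cams, relevance_logits, size):
    """Convex combination of per-stage class activation maps using
    softmax-normalized layer relevance weights."""
    w = np.exp(relevance_logits)
    w = w / w.sum()                   # weights are positive and sum to 1
    return sum(wi * upsample(c, size) for wi, c in zip(w, cams))

shallow = np.zeros((4, 4)); shallow[1, 1] = 1.0   # fine, localized response
deep = np.zeros((2, 2)); deep[0, 0] = 1.0         # coarse response
att = attention_map([shallow, deep], np.array([0.0, 0.0]), size=8)
```

Because the shallow map retains fine spatial detail, its contribution is what sharpens the localization of small pathologies in the combined map.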


International Workshop on Machine Learning in Medical Imaging | 2018

Joint Registration and Segmentation of X-ray Images Using Generative Adversarial Networks

Dwarikanath Mahapatra; Zongyuan Ge; Suman Sedai; Rajib Chakravorty

Medical image registration and segmentation are complementary functions, and combining them can improve each other's performance. Conventional deep learning (DL) based approaches tackle the two problems separately without leveraging their mutually beneficial information. We propose a DL-based approach for joint registration and segmentation (JRS) of chest X-ray images. Generative adversarial networks (GANs) are trained to register a floating image to a reference image by combining their segmentation-map similarity with conventional feature maps. Intermediate segmentation maps from the GAN's convolution layers are used in the training stage to generate the final segmentation mask at test time. Experiments on chest X-ray images show that JRS gives better registration and segmentation performance than solving the two problems separately.
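The coupling of the two objectives can be sketched as a composite registration cost mixing intensity mismatch with segmentation-map overlap, here MSE minus a weighted Dice term. The GAN and the deformation model are omitted; the cost formulation and names are illustrative, not the authors' exact loss:

```python
import numpy as np

def dice(a, b, eps=1e-8):
    """Dice overlap between two binary masks."""
    inter = (a * b).sum()
    return float(2 * inter / (a.sum() + b.sum() + eps))

def jrs_cost(moving, reference, moving_mask, ref_mask, lam=0.5):
    """Lower is better: image mismatch minus weighted mask overlap, so
    good alignment of both intensities and segmentations is rewarded."""
    mse = float(((moving - reference) ** 2).mean())
    return mse - lam * dice(moving_mask, ref_mask)

ref = np.zeros((6, 6)); ref[2:4, 2:4] = 1.0
ref_mask = (ref > 0).astype(float)
aligned_cost = jrs_cost(ref, ref, ref_mask, ref_mask)
shifted = np.roll(ref, 2, axis=1)
shifted_cost = jrs_cost(shifted, ref, (shifted > 0).astype(float), ref_mask)
```

A perfectly aligned pair scores strictly lower than a shifted one, which is the signal a registration network would descend on.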


Medical Image Computing and Computer-Assisted Intervention | 2017

Semi-supervised Segmentation of Optic Cup in Retinal Fundus Images Using Variational Autoencoder

Suman Sedai; Dwarikanath Mahapatra; Sajini Hewavitharanage; Stefan Maetschke; Rahil Garnavi

Accurate segmentation of the optic cup and disc in retinal fundus images is essential to compute the cup-to-disc ratio, an important parameter for glaucoma assessment. The ill-defined boundary of the optic cup makes its segmentation far more challenging than that of the optic disc. Existing approaches have mainly used fully supervised learning, which requires many labeled samples to build a robust segmentation framework. In this paper, we propose a novel semi-supervised method to segment the optic cup, which can accurately localize the anatomy using a limited number of labeled samples. The proposed method leverages the inherent feature similarity of a large number of unlabeled images to train the segmentation model from a smaller number of labeled images. It first learns the parameters of a generative model from unlabeled images using a variational autoencoder. The trained generative model provides a feature embedding of the images that allows clustering of related observations in the latent feature space. We combine the feature embedding with a segmentation autoencoder, trained on the labeled images, for pixel-wise segmentation of the cup region. The main novelty of the proposed approach is the utilization of generative models for semi-supervised segmentation. Experimental results show that the proposed method successfully segments the optic cup with a small number of labeled images, and that the unsupervised feature embedding learned from unlabeled data improves segmentation accuracy. Given the challenge of accessing annotated medical images in every clinical application, the proposed framework is a key contribution applicable to segmenting different anatomies across various medical imaging modalities.
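The generative-model objective at the heart of the approach can be sketched as the standard VAE loss: reconstruction error plus the KL divergence between the encoder's Gaussian posterior N(mu, exp(log_var)) and the standard-normal prior. The encoder/decoder networks are omitted; names are illustrative:

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    """KL( N(mu, exp(log_var)) || N(0, 1) ), summed over latent dims,
    using the closed form for diagonal Gaussians."""
    return float(-0.5 * np.sum(1 + log_var - mu ** 2 - np.exp(log_var)))

def vae_loss(x, x_recon, mu, log_var):
    """Per-sample VAE objective: squared reconstruction error + KL term."""
    recon = float(((x - x_recon) ** 2).sum())
    return recon + kl_to_standard_normal(mu, log_var)

# A posterior already matching the prior contributes zero KL; shifting the
# posterior mean away from zero is penalized.
mu0, lv0 = np.zeros(4), np.zeros(4)
x = np.ones(10)
loss_prior = vae_loss(x, x, mu0, lv0)
loss_shifted = vae_loss(x, x, np.ones(4), lv0)
```

Minimizing this objective over unlabeled images is what yields the latent embedding that the labeled-data segmentation branch then builds on.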


International Symposium on Biomedical Imaging | 2017

A novel hybrid approach for severity assessment of Diabetic Retinopathy in colour fundus images

Pallab Kanti Roy; Ruwan B. Tennakoon; Khoa Cao; Suman Sedai; Dwarikanath Mahapatra; Stefan Maetschke; Rahil Garnavi

Diabetic retinopathy (DR) is one of the leading causes of blindness worldwide. Detecting DR and grading its severity is essential for disease treatment. Convolutional neural networks (CNNs) have achieved state-of-the-art performance in many visual classification tasks. In this paper, we propose to combine CNNs with dictionary-based approaches, which incorporate pathology-specific image representations into the learning framework, for improved DR severity classification. Specifically, we construct discriminative and generative pathology histograms and combine them with feature representations extracted from fully connected CNN layers. Our experimental results indicate that the proposed method improves the quadratic weighted kappa score (κ = 0.86) compared to the state-of-the-art CNN-based method (κ = 0.81).
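The evaluation metric itself is standard and can be shown concretely: quadratic weighted kappa penalizes grade disagreements by the squared distance between ordinal grades, normalized against chance agreement. This is the generic metric, not the authors' classifier:

```python
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, n_classes):
    """Quadratic weighted kappa for ordinal grades 0..n_classes-1."""
    O = np.zeros((n_classes, n_classes))          # observed grade matrix
    for t, p in zip(y_true, y_pred):
        O[t, p] += 1
    i, j = np.indices((n_classes, n_classes))
    W = (i - j) ** 2 / (n_classes - 1) ** 2       # quadratic disagreement penalty
    hist_t = O.sum(axis=1)
    hist_p = O.sum(axis=0)
    E = np.outer(hist_t, hist_p) / O.sum()        # expected matrix by chance
    return float(1 - (W * O).sum() / (W * E).sum())

perfect = quadratic_weighted_kappa([0, 1, 2, 3, 4], [0, 1, 2, 3, 4], 5)
off_by = quadratic_weighted_kappa([0, 1, 2, 3, 4], [1, 2, 3, 4, 3], 5)
```

Perfect grading scores 1.0, and predictions that are consistently off by one grade still earn substantial credit, which is why this metric suits ordinal DR severity scales.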
