Publications
Featured research published by Rajib Chakravorty.
International Symposium on Biomedical Imaging | 2016
Sergey Demyanov; Rajib Chakravorty; Mani Abedini; Alan Halpern; Rahil Garnavi
Detection of dermoscopic patterns, such as typical network and regular globules, is an important step in skin lesion analysis, and is required to compute the ABCD score commonly used for lesion type classification. In this article, we investigate the possibility of automatically detecting dermoscopic patterns using deep convolutional neural networks and other image classification algorithms. For the evaluation, we employ a dataset obtained through collaboration with the International Skin Imaging Collaboration (ISIC), comprising 211 lesions manually annotated by domain experts and yielding over 2,000 samples of each class (network and globules). Experimental results demonstrate that we can correctly classify 88% of network examples and 83% of globule examples. The best results are achieved by a convolutional neural network with 8 layers.
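To make the classification setup concrete, here is a minimal sketch of a small convolutional classifier for the two pattern classes, written in PyTorch. The layer sizes and input resolution are illustrative assumptions, not the paper's exact 8-layer architecture.

```python
# Minimal sketch of a small CNN for binary dermoscopic-pattern
# classification (network vs. globules). Layer sizes are
# illustrative assumptions, not the paper's exact 8-layer model.
import torch
import torch.nn as nn

class PatternCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = PatternCNN()
logits = model(torch.randn(4, 3, 64, 64))  # 4 RGB patches of 64x64
```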
International Symposium on Biomedical Imaging | 2017
Zongyuan Ge; Sergey Demyanov; Behzad Bozorgtabar; Mani Abedini; Rajib Chakravorty; Adrian Bowling; Rahil Garnavi
Similarity in appearance between various skin diseases often makes it challenging for clinicians to identify the type of skin condition, and diagnostic accuracy is highly reliant on the level of expertise. There is also a great degree of subjectivity and inter-/intra-observer variability in clinical practice. In this paper, we propose a method for automatic skin disease recognition that combines two different types of deep convolutional neural network features. We hold the hypothesis that it is equally important to capture global features, such as colour and lesion shape, and local features, such as local patterns within the lesion area. The proposed method leverages a deep residual network to represent global information, and a bilinear pooling technique to extract local features that differentiate between skin conditions with subtle visual differences in local regions. We have evaluated our proposed method on the MoleMap dataset with 32,195 skin images and the ISBI-2016 challenge dataset with 1,279 skin images. Without any lesion localisation or segmentation, our proposed method achieves state-of-the-art results on the large-scale MoleMap dataset with 15 disease categories and multiple imaging modalities, and compares favorably with the best method on the ISBI-2016 Melanoma challenge dataset.
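The bilinear pooling step can be sketched as follows: the outer product of local CNN features, sum-pooled over spatial positions, yields a second-order descriptor that is sensitive to subtle local patterns. The signed square-root and L2 normalisation are common practice and assumed here, not confirmed by the abstract.

```python
# Sketch of bilinear pooling over a convolutional feature map.
import torch
import torch.nn.functional as F

def bilinear_pool(feat: torch.Tensor) -> torch.Tensor:
    """feat: (B, C, H, W) feature map -> (B, C*C) descriptor."""
    b, c, h, w = feat.shape
    x = feat.reshape(b, c, h * w)                     # flatten spatial dims
    desc = torch.bmm(x, x.transpose(1, 2)) / (h * w)  # (B, C, C) outer products
    desc = desc.reshape(b, c * c)
    # signed square-root and L2 normalisation (common practice, assumed)
    desc = torch.sign(desc) * torch.sqrt(desc.abs() + 1e-8)
    return F.normalize(desc)

pooled = bilinear_pool(torch.randn(2, 64, 7, 7))  # -> (2, 4096)
```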
Medical Image Computing and Computer Assisted Intervention | 2017
Zongyuan Ge; Sergey Demyanov; Rajib Chakravorty; Adrian Bowling; Rahil Garnavi
Skin cancer is the most common cancer worldwide; among skin cancers, melanoma, the most fatal, accounts for more than 10,000 deaths annually in Australia and the United States. The 5-year survival rate for melanoma can be increased to over 90% if it is detected in its early stage. However, intrinsic visual similarity across various skin conditions makes diagnosis challenging both for clinicians and for automated classification methods. Many automated skin cancer diagnostic systems have been proposed in the literature, all of which consider solely dermoscopy images in their analysis. In reality, however, clinicians consider two imaging modalities: an initial screening using clinical photography to capture a macro view of the mole, followed by dermoscopy imaging, which visualizes morphological structures within the skin lesion. Evidence shows that these two modalities provide complementary visual features that can empower the decision-making process. In this work, we propose a novel deep convolutional neural network (DCNN) architecture along with a saliency feature descriptor to capture discriminative features of the two modalities for skin lesion classification. The proposed DCNN accepts a pair of images, the clinical and dermoscopic views of a single lesion, and is capable of learning single-modality and cross-modality representations simultaneously. Using one of the largest collected skin lesion datasets, we demonstrate that the proposed multi-modality method significantly outperforms single-modality methods on three tasks: differentiating between 15 skin diseases, distinguishing cancerous moles (3 cancer types including melanoma) from non-cancerous moles, and detecting melanoma among benign cases.
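A hypothetical two-branch layout illustrating the idea: one encoder per imaging modality, with per-modality heads plus a fused cross-modality head trained jointly. The ResNet-18 encoders and concatenation-based fusion are assumptions for illustration, not the paper's exact architecture.

```python
# Illustrative two-branch network for clinical + dermoscopic images.
# Encoders and concatenation fusion are assumptions, not the paper's design.
import torch
import torch.nn as nn
from torchvision import models

class MultiModalNet(nn.Module):
    def __init__(self, num_classes: int = 15):
        super().__init__()
        self.clinical = models.resnet18(weights=None)
        self.dermoscopic = models.resnet18(weights=None)
        feat_dim = self.clinical.fc.in_features
        self.clinical.fc = nn.Identity()      # use encoders as feature extractors
        self.dermoscopic.fc = nn.Identity()
        self.head_clin = nn.Linear(feat_dim, num_classes)
        self.head_derm = nn.Linear(feat_dim, num_classes)
        self.head_fused = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, clin_img, derm_img):
        f_c = self.clinical(clin_img)
        f_d = self.dermoscopic(derm_img)
        fused = torch.cat([f_c, f_d], dim=1)
        # single-modality and cross-modality predictions, trained jointly
        return self.head_clin(f_c), self.head_derm(f_d), self.head_fused(fused)
```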
International Conference of the IEEE Engineering in Medicine and Biology Society | 2016
Mani Abedini; Noel C. F. Codella; Rajib Chakravorty; Rahil Garnavi; David A. Gutman; Brian Helba; John R. Smith
This paper presents a robust segmentation method based on multi-scale classification to identify the lesion boundary in dermoscopic images. Our proposed method leverages a collection of classifiers trained at various resolutions to categorize each pixel as "lesion" or "surrounding skin". In the detection phase, the trained classifiers are applied to new images, and their outputs are fused at the pixel level to build probability maps that serve as lesion saliency maps. In the next step, Otsu thresholding is applied to convert the saliency maps to binary masks, which determine the borders of the lesions. We compared our proposed method with existing lesion segmentation methods from the literature on two dermoscopy data sets (International Skin Imaging Collaboration and Pedro Hispano Hospital), demonstrating the superiority of our method with a Dice coefficient of 0.91 and an accuracy of 94%.
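The fusion and thresholding steps can be sketched compactly: per-scale pixel probabilities are combined into a saliency map, and Otsu's method converts it into a binary lesion mask. Averaging as the fusion rule is an assumption here.

```python
# Sketch of probability-map fusion followed by Otsu thresholding.
# Mean fusion is an illustrative assumption.
import numpy as np
from skimage.filters import threshold_otsu

def fuse_and_threshold(prob_maps: list[np.ndarray]) -> np.ndarray:
    """prob_maps: per-scale lesion probabilities, each (H, W) in [0, 1]."""
    saliency = np.mean(np.stack(prob_maps, axis=0), axis=0)
    return saliency > threshold_otsu(saliency)  # binary lesion mask

mask = fuse_and_threshold([np.random.rand(128, 128) for _ in range(3)])
```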
International Conference of the IEEE Engineering in Medicine and Biology Society | 2016
Rajib Chakravorty; Sisi Liang; Mani Abedini; Rahil Garnavi
Asymmetry is one of the key characteristics for early diagnosis of melanoma according to medical algorithms such as ABCD and CASH. Besides shape information, cues such as irregular distribution of colors and structures within the lesion area are assessed by dermatologists to determine lesion asymmetry. Motivated by these clinical practices, we use the Kullback-Leibler divergence of color histograms and the Structural Similarity metric as measures of these irregularities. We present the performance of several classifiers using these features on the publicly available PH2 dataset. The obtained results show better asymmetry classification than reported in the available literature. Besides setting a new benchmark, the proposed technique can be used for early diagnosis of melanoma by both clinical experts and automated diagnosis systems.
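A rough sketch of the two asymmetry cues, assuming a simple split of the lesion crop into mirrored halves; the paper assesses asymmetry within the lesion area more carefully, so this is a simplification.

```python
# Sketch of the two asymmetry cues: KL divergence between colour
# histograms of the lesion halves, and SSIM between the halves.
# The vertical-axis split is a simplifying assumption.
import numpy as np
from scipy.stats import entropy
from skimage.metrics import structural_similarity

def asymmetry_features(lesion: np.ndarray) -> tuple[float, float]:
    """lesion: (H, W) grayscale lesion crop, float values in [0, 255]."""
    half = lesion.shape[1] // 2
    left = lesion[:, :half]
    right = lesion[:, -half:][:, ::-1]  # mirror the right half
    h_l, _ = np.histogram(left, bins=32, range=(0, 255), density=True)
    h_r, _ = np.histogram(right, bins=32, range=(0, 255), density=True)
    kl = entropy(h_l + 1e-8, h_r + 1e-8)  # colour-distribution divergence
    ssim = structural_similarity(left, right, data_range=255)
    return kl, ssim
```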
arXiv: Computer Vision and Pattern Recognition | 2018
Suman Sedai; Dwarikanath Mahapatra; Zongyuan Ge; Rajib Chakravorty; Rahil Garnavi
Localization of chest pathologies in chest X-ray images is a challenging task because of their varying sizes and appearances. We propose a novel weakly supervised method to localize chest pathologies using class-aware deep multiscale feature learning. Our method leverages intermediate feature maps from CNN layers at different stages of a deep network during the training of a classification model using image-level annotations of pathologies. During the training phase, a set of layer relevance weights is learned for each pathology class, and the CNN is optimized to perform pathology classification via a convex combination of feature maps from both shallow and deep layers using the learned weights. During the test phase, to localize the predicted pathology, a multiscale attention map is obtained as a convex combination of class activation maps from each stage using the layer relevance weights learned during the training phase. We have validated our method using 112,000 X-ray images and compared it with state-of-the-art localization methods. We experimentally demonstrate that the proposed weakly supervised method improves the localization of small pathologies such as nodules and masses while giving comparable performance for larger pathologies, e.g., cardiomegaly.
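The test-phase combination can be sketched as follows: class activation maps from several stages are resized and combined convexly using the learned layer relevance weights, with a softmax keeping the combination convex. The number of stages and output size below are illustrative.

```python
# Sketch of a multiscale attention map as a convex combination of
# per-stage class activation maps. Stage count and size are assumptions.
import torch
import torch.nn.functional as F

def multiscale_attention(cams: list[torch.Tensor],
                         layer_logits: torch.Tensor,
                         out_size: int = 224) -> torch.Tensor:
    """cams: per-stage maps, each (B, H_i, W_i); layer_logits: (num_stages,)."""
    weights = torch.softmax(layer_logits, dim=0)   # convex combination weights
    resized = [F.interpolate(c.unsqueeze(1), size=(out_size, out_size),
                             mode="bilinear", align_corners=False)
               for c in cams]
    stacked = torch.cat(resized, dim=1)            # (B, num_stages, H, W)
    return (weights.view(1, -1, 1, 1) * stacked).sum(dim=1)

cams = [torch.rand(2, s, s) for s in (7, 14, 28)]
attn = multiscale_attention(cams, torch.zeros(3))  # -> (2, 224, 224)
```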
International Workshop on Machine Learning in Medical Imaging | 2018
Dwarikanath Mahapatra; Zongyuan Ge; Suman Sedai; Rajib Chakravorty
Medical image registration and segmentation are complementary functions, and combining them can improve each other's performance. Conventional deep learning (DL) based approaches tackle the two problems separately, without leveraging their mutually beneficial information. We propose a DL-based approach for joint registration and segmentation (JRS) of chest X-ray images. Generative adversarial networks (GANs) are trained to register a floating image to a reference image by combining their segmentation-map similarity with conventional feature maps. Intermediate segmentation maps from the GAN's convolution layers are used in the training stage to generate the final segmentation mask at test time. Experiments on chest X-ray images show that JRS gives better registration and segmentation performance than solving the two problems separately.
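One plausible form of the joint objective, assuming L1 image similarity, a Dice term for segmentation agreement, and a standard adversarial term; the loss terms and weights are assumptions, not the paper's exact formulation.

```python
# Sketch of a joint registration-segmentation generator loss.
# Term choices and weights are illustrative assumptions.
import torch
import torch.nn.functional as F

def dice(a: torch.Tensor, b: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    inter = (a * b).sum()
    return (2 * inter + eps) / (a.sum() + b.sum() + eps)

def jrs_loss(warped_img, ref_img, warped_seg, ref_seg,
             adv_score, lam_seg=1.0, lam_adv=0.1):
    sim = F.l1_loss(warped_img, ref_img)        # image similarity
    seg = 1.0 - dice(warped_seg, ref_seg)       # segmentation-map agreement
    adv = F.binary_cross_entropy_with_logits(
        adv_score, torch.ones_like(adv_score))  # fool the discriminator
    return sim + lam_seg * seg + lam_adv * adv
```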
International Symposium on Biomedical Imaging | 2017
Sergey Demyanov; Rajib Chakravorty; Zongyuan Ge; Seyedbehzad Bozorgtabar; Michelle Pablo; Adrian Bowling; Rahil Garnavi
Neural networks are powerful tools for medical image classification and segmentation. However, existing network structures and training procedures assume that the output classes are mutually exclusive and equally important. Many medical image datasets do not satisfy these conditions. For example, some skin disease datasets contain images labelled with a coarse-grained class (such as Benign) in addition to images with fine-grained labels (such as Blue Nevus, a subclass of Benign), and a conventional neural network cannot leverage such additional data for training. Also, in clinical decision making, some classes (such as skin cancer or melanoma) often carry more importance than other lesion types. We propose a novel tree-loss function for training and fine-tuning a neural network classifier using all available labelled images. The key step is the definition of a class taxonomy tree, which describes the relations between labels. The tree can also be adjusted to reflect the desired importance of each class. These steps can be performed by a domain expert without detailed knowledge of machine learning techniques. The experiments demonstrate improved performance compared with the conventional approach, even without using additional data.
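The core taxonomy idea can be sketched in a few lines: a coarse label's probability is the sum of its fine-grained leaves' probabilities under the leaf softmax, so coarsely labelled images still contribute a training signal. The two-level taxonomy below is an illustrative assumption, not the paper's exact tree-loss.

```python
# Sketch of the taxonomy idea behind a tree loss: coarse-label
# probability = sum of its leaves' probabilities. Illustrative only.
import torch

TAXONOMY = {"Benign": [0, 1, 2], "Malignant": [3, 4]}  # leaf indices per coarse class

def tree_nll(logits: torch.Tensor, coarse_label: str) -> torch.Tensor:
    """Negative log-likelihood of a coarse label under the leaf softmax."""
    leaf_probs = torch.softmax(logits, dim=-1)
    p_coarse = leaf_probs[..., TAXONOMY[coarse_label]].sum(dim=-1)
    return -torch.log(p_coarse + 1e-8)

loss = tree_nll(torch.randn(4, 5), "Benign").mean()
```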
International Symposium on Biomedical Imaging | 2017
Behzad Bozorgtabar; Zongyuan Ge; Rajib Chakravorty; Mani Abedini; Sergey Demyanov; Rahil Garnavi
Accurate skin lesion segmentation is an important yet challenging problem in medical image analysis. It is subject to a variety of challenges, such as the significant pattern and colour diversity found within lesions, the presence of various artifacts, etc. In this paper, we present two fully convolutional networks with several side outputs that take advantage of the discriminative capability of features learned at intermediate layers, with varying resolutions and scales, for lesion segmentation. More specifically, we integrate the fine and coarse prediction scores of the side layers, which allows our framework not only to output an accurate probability map for the lesion, but also to extract fine lesion boundary details, such as fuzzy borders, further improving the segmentation. Quantitative evaluation is performed on the 2016 International Symposium on Biomedical Imaging (ISBI 2016) dataset, showing that our proposed approach compares favorably with state-of-the-art skin segmentation methods.
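A sketch of side-output fusion in the spirit described above: intermediate prediction maps are upsampled to full resolution and merged by a learned 1x1 convolution. The fusion details here are assumptions, not the paper's exact design.

```python
# Sketch of fusing fine and coarse side-output prediction maps.
# Learned 1x1-convolution fusion is an illustrative assumption.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SideOutputFusion(nn.Module):
    def __init__(self, num_sides: int = 4):
        super().__init__()
        self.fuse = nn.Conv2d(num_sides, 1, kernel_size=1)  # learned fusion weights

    def forward(self, side_maps, out_hw):
        up = [F.interpolate(m, size=out_hw, mode="bilinear",
                            align_corners=False) for m in side_maps]
        return torch.sigmoid(self.fuse(torch.cat(up, dim=1)))

sides = [torch.randn(1, 1, s, s) for s in (28, 56, 112, 224)]
prob_map = SideOutputFusion()(sides, (224, 224))  # fused lesion probability map
```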
Studies in Health Technology and Informatics | 2015
Mani Abedini; Stefan Von Cavallar; Rajib Chakravorty; Matthew Davis; Rahil Garnavi