Imon Banerjee
Stanford University
Publications
Featured research published by Imon Banerjee.
Cell Systems | 2018
Brian D. Piening; Wenyu Zhou; Kévin Contrepois; Hannes L. Röst; Gucci Jijuan Gu Urban; Tejaswini Mishra; Blake M. Hanson; Eddy J. Bautista; Shana Leopold; Christine Y. Yeh; Daniel J. Spakowicz; Imon Banerjee; Cynthia Chen; Kimberly R. Kukurba; Dalia Perelman; Colleen M. Craig; Elizabeth Colbert; Denis Salins; Shannon Rego; Sunjae Lee; Cheng Zhang; Jessica Wheeler; M. Reza Sailani; Liang Liang; Charles W. Abbott; Mark Gerstein; Adil Mardinoglu; Ulf Smith; Daniel L. Rubin; Sharon J. Pitteri
Advances in omics technologies now allow an unprecedented level of phenotyping for human diseases, including obesity, in which individual responses to excess weight are heterogeneous and unpredictable. To aid the development of a better understanding of these phenotypes, we performed a controlled longitudinal weight perturbation study combining multiple omics strategies (genomics, transcriptomics, multiple proteomics assays, metabolomics, and microbiomics) during periods of weight gain and loss in humans. Results demonstrated that: (1) weight gain is associated with the activation of strong inflammatory and hypertrophic cardiomyopathy signatures in blood; (2) although weight loss reverses some changes, a number of signatures persist, indicative of long-term physiologic changes; (3) we observed omics signatures associated with insulin resistance that may serve as novel diagnostics; and (4) specific biomolecules were highly individualized and stable in response to perturbations, potentially representing stable personalized markers. Most data are openly available and serve as a valuable resource for the community.
Computer Assisted Radiology and Surgery | 2016
Imon Banerjee; Chiara Eva Catalano; Giuseppe Patanè; Michela Spagnuolo
Purpose: While 3D patient-specific digital models are currently available, thanks to advanced medical acquisition devices, there is still a long way to go before these models can be used in clinical practice. The goal of this paper is to demonstrate how 3D patient-specific models of anatomical parts can be analysed and documented accurately with morphological information extracted automatically from the data. Part-based semantic annotation of 3D anatomical models is discussed as a basic approach for sharing and reusing knowledge among clinicians for next-generation CAD-assisted diagnosis and treatments. Methods: We have developed (1) basic services for the analysis of 3D anatomical models and (2) a methodology for the enrichment of such models with relevant descriptions and attributes, which reflect the parameters of interest for medical investigations. The proposed semantic annotation is ontology-driven and includes both descriptive and quantitative labelling. Most importantly, the developed methodology also makes it possible to identify and annotate parts of relevance of anatomical entities. Results: The computational tools for the automatic computation of qualitative and quantitative parameters have been integrated into a prototype system, the SemAnatomy3D framework, which demonstrates the functionalities needed to support effective annotation of 3D patient-specific models. From a first evaluation, SemAnatomy3D appears to be an effective tool for clinical data analysis and opens new ways to support clinical diagnosis. Conclusions: The SemAnatomy3D framework integrates several functionalities for 3D part-based annotation. The idea has been presented and discussed for the case study of rheumatoid arthritis of the carpal bones; however, the framework can be extended to support similar annotations in different clinical applications.
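As a rough illustration of ontology-driven, part-based annotation, the sketch below attaches a descriptive ontology label and a quantitative attribute to a mesh region as RDF triples. The namespaces, region IRI, class name, and surface-area property are hypothetical placeholders, not the actual SemAnatomy3D data model.

```python
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

EX = Namespace("http://example.org/semanatomy3d#")  # hypothetical app namespace
FMA = Namespace("http://purl.org/sig/ont/fma/")     # Foundational Model of Anatomy

g = Graph()
g.bind("ex", EX)
g.bind("fma", FMA)

# Descriptive label: this mesh region is an instance of an anatomical class.
region = EX["patient42/scaphoid/facet_3"]           # hypothetical region IRI
g.add((region, RDF.type, FMA["ArticularFacet"]))    # illustrative class name

# Quantitative label: a measurement computed on the 3D patient-specific model.
g.add((region, EX.surfaceAreaMm2, Literal(41.7, datatype=XSD.float)))

print(g.serialize(format="turtle"))
```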
Journal of Biomedical Informatics | 2018
Imon Banerjee; Matthew C. Chen; Matthew P. Lungren; Daniel L. Rubin
We proposed an unsupervised hybrid method - Intelligent Word Embedding (IWE) - that combines a neural embedding method with a semantic dictionary mapping technique to create a dense vector representation of unstructured radiology reports. We applied IWE to generate embeddings of chest CT radiology reports from two healthcare organizations and utilized the vector representations to semi-automate report categorization into clinically relevant categories related to the diagnosis of pulmonary embolism (PE). We benchmarked the performance against a state-of-the-art rule-based tool, PeFinder, and out-of-the-box word2vec. On the Stanford test set, the IWE model achieved an average F1 score of 0.97, whereas PeFinder scored 0.90 and the original word2vec scored 0.94. On the UPMC dataset, the IWE model's average F1 score was 0.94, while PeFinder scored 0.92 and word2vec scored 0.85. The IWE model had the lowest generalization error and the highest F1 scores. Of particular interest, the IWE model (trained on the Stanford dataset) outperformed PeFinder on the UPMC dataset, which was originally used to tailor the PeFinder model.
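To make the general recipe concrete, here is a minimal sketch that maps report tokens onto canonical dictionary concepts, embeds them with word2vec, and classifies mean-pooled report vectors. The synonym table, the toy reports, and the PE labels are invented for illustration; the published IWE pipeline is considerably more sophisticated.

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.linear_model import LogisticRegression

# Toy dictionary mapping: collapse surface forms onto canonical concepts.
SYNONYMS = {"pe": "pulmonary_embolism", "embolus": "pulmonary_embolism"}

def normalize(report):
    # Dictionary-mapping step: swap raw tokens for canonical concept names.
    return [SYNONYMS.get(tok, tok) for tok in report.lower().split()]

reports = [
    "no evidence of PE in the chest CT",
    "acute central embolus in the right main pulmonary artery",
    "lungs are clear with no embolus identified",
    "findings consistent with acute PE",
]
labels = [0, 1, 0, 1]  # hypothetical PE-negative / PE-positive annotations

tokens = [normalize(r) for r in reports]
w2v = Word2Vec(tokens, vector_size=50, min_count=1, epochs=100, seed=7)

def embed(toks):
    # Dense report vector = mean of its word vectors.
    return np.mean([w2v.wv[t] for t in toks], axis=0)

X = np.array([embed(t) for t in tokens])
clf = LogisticRegression().fit(X, labels)
print(clf.predict(X))  # sanity check on the training reports themselves
```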
Computerized Medical Imaging and Graphics | 2017
Imon Banerjee; Alexis Crawley; Mythili Bhethanabotla; Heike E. Daldrup-Link; Daniel L. Rubin
This paper presents a deep-learning-based CADx system for the differential diagnosis of the embryonal (ERMS) and alveolar (ARMS) subtypes of rhabdomyosarcoma (RMS) solely by analyzing multiparametric MR images. We formulated an automated pipeline that creates a comprehensive representation of the tumor by performing a fusion of diffusion-weighted MR scans (DWI) and gadolinium chelate-enhanced T1-weighted MR scans (MRI). Finally, we adopted a transfer learning approach in which a pre-trained deep convolutional neural network is fine-tuned on the fused images to classify the two RMS subtypes. We achieved 85% cross-validation prediction accuracy with the fine-tuned deep CNN model. Our system can be exploited to provide a fast, efficient and reproducible diagnosis of RMS subtypes with minimal human interaction. The framework offers an efficient integration of advanced image processing methods and cutting-edge deep learning techniques, and can be extended to deal with other clinical domains that involve multimodal imaging for disease diagnosis.
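The transfer-learning step can be sketched as follows: freeze an ImageNet-pre-trained backbone and fine-tune only a new two-class head. The backbone choice, hyperparameters, and dummy inputs standing in for fused DWI/T1 images are illustrative assumptions, not the published configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from an ImageNet-pre-trained backbone and replace the classifier head.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False                       # freeze pre-trained features
model.fc = nn.Linear(model.fc.in_features, 2)     # new ERMS-vs-ARMS head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of "fused" MR images.
images = torch.randn(8, 3, 224, 224)
targets = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(images), targets)
loss.backward()
optimizer.step()
print(float(loss))
```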
Archive | 2014
Imon Banerjee; Chiara Eva Catalano; Francesco Robbiano; Michela Spagnuolo
In the era of digitalization, a large amount of medical data is produced, and many activities spanning from diagnosis to simulation and from assisted surgery to patient-specific treatment and follow-up are carried out with the support of software tools. Computer-aided medicine can undoubtedly take advantage of a structured organization of the digital data involved, through the aid of knowledge and visualization technologies. In this chapter, we will survey recent approaches to the access and presentation of medical data in order to exemplify how knowledge-driven data organization may support medical activities. These approaches will be analyzed paying special attention to two different trends: we will show their potential in providing an effective visual reference and their capability of exploiting shared and structured vocabularies. Perspectives on the integration of these two trends will also be presented.
International Conference on Image Analysis and Processing | 2015
Imon Banerjee; Hamid Laga; Giuseppe Patanè; Sebastian Kurtek; Anuj Srivastava; Michela Spagnuolo
This paper discusses initial results on the generation of canonical 3D models of anatomical parts, built on real patient data. Canonical 3D models of anatomy are key elements in computer-assisted diagnosis; for instance, they can support pathology detection, semantic annotation of patient-specific 3D reconstructions, and quantification of pathological markers. Our approach is focused on carpal bones and on the elastic analysis of 3D reconstructions of these bones, which are segmented from MRI scans, represented as genus-0 triangle meshes, and parameterized on the sphere. The original method [8] relies on a set of sparse correspondences, defined as matching vertices. For medical applications, it is desirable to constrain the mean-shape generation by setting up correspondences among a larger set of anatomical landmarks, including vertices, lines, and areas. Preliminary results are discussed and future development directions are sketched.
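For intuition about what computing a "mean shape" involves, the sketch below averages corresponding 3D landmarks across bones using ordinary Procrustes alignment. This is a simplified stand-in that assumes landmark correspondences are already established; the paper's elastic, spherically parameterized analysis operates on full surfaces and is considerably more general.

```python
import numpy as np

def align(X, Y):
    # Optimal rotation of X (n x 3 centered landmarks) onto Y (Kabsch/SVD).
    U, _, Vt = np.linalg.svd(X.T @ Y)
    if np.linalg.det(U @ Vt) < 0:   # exclude reflections
        U[:, -1] *= -1
    return X @ (U @ Vt)

def mean_shape(shapes, iters=10):
    shapes = [s - s.mean(axis=0) for s in shapes]   # center each bone
    mean = shapes[0]
    for _ in range(iters):
        aligned = [align(s, mean) for s in shapes]  # align all bones to current mean
        mean = np.mean(aligned, axis=0)             # re-estimate the mean shape
    return mean

bones = [np.random.rand(50, 3) for _ in range(5)]   # stand-in carpal landmark sets
print(mean_shape(bones).shape)                      # -> (50, 3)
```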
The Visual Computer | 2016
Imon Banerjee; Asan Agibetov; Chiara Eva Catalano; Giuseppe Patanè; Michela Spagnuolo
In the digital era, patient-specific 3D models (3D-PSMs) are becoming increasingly relevant in computer-assisted diagnosis, surgery training on digital models, and implant design. While advanced imaging and reconstruction techniques can create accurate and detailed 3D models of patients' anatomy, software tools that are able to fully exploit the potential of 3D-PSMs are still far from being satisfactory. In particular, there is still a lack of integrated approaches for extracting, coding, sharing and retrieving medically relevant information from 3D-PSMs and using it concretely as a support to diagnosis and treatment. In this article, we propose the SemAnatomy3D framework, which demonstrates how the ontology-driven annotation of 3D-PSMs and of their anatomically relevant features (parts of relevance) can assist clinicians in documenting pathologies and their evolution more effectively. We exemplify the idea in the context of the diagnosis of rheumatoid arthritis of the hand district, and show how feature extraction tools and semantic 3D annotation can provide a rich characterization of anatomical landmarks (e.g., articular facets, prominent features, ligament attachments) and pathological markers (erosions, bone loss). The core contributions are an ontology-driven part-based annotation method for the 3D-PSMs and a novel automatic localization of erosions and quantification of the OMERACT RAMRIS erosion score. Finally, our results have been compared against a medical ground truth.
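As a back-of-the-envelope illustration of erosion-score quantification, the sketch below maps an eroded-volume fraction to the OMERACT RAMRIS erosion scale (0-10, one point per 10% bracket of eroded bone volume). The volumes are hypothetical and the eroded region is assumed already segmented; the paper's automatic localization step is the harder part and is not shown.

```python
import math

def ramris_erosion_score(eroded_volume_mm3, bone_volume_mm3):
    # Score 0-10, one point per 10% bracket of eroded bone volume
    # (1-10% -> 1, 11-20% -> 2, ...), per the OMERACT RAMRIS convention.
    fraction = eroded_volume_mm3 / bone_volume_mm3
    return min(10, math.ceil(fraction * 10))

# Hypothetical numbers: 230 mm^3 eroded out of a 1500 mm^3 bone (~15%) -> score 2.
print(ramris_erosion_score(230.0, 1500.0))
```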
Scientific Reports | 2018
Imon Banerjee; M.F. Gensheimer; Douglas J. Wood; Solomon Henry; Sonya Aggarwal; Daniel T. Chang; Daniel L. Rubin
We propose a deep learning model - Probabilistic Prognostic Estimates of Survival in Metastatic Cancer Patients (PPES-Met) - for estimating the short-term life expectancy (>3 months) of patients by analyzing free-text clinical notes in the electronic medical record, while maintaining the temporal visit sequence. In a single framework, we integrated semantic data mapping and a neural embedding technique to produce a text processing method that extracts relevant information from heterogeneous types of clinical notes in an unsupervised manner, and we designed a recurrent neural network to model the temporal dependency of the patient visits. The model was trained on a large dataset (10,293 patients) and validated on a separate dataset (1,818 patients). Our method achieved an area under the ROC curve (AUC) of 0.89. To provide explainability, we developed an interactive graphical tool that may improve physician understanding of the basis for the model's predictions. The high accuracy and explainability of the PPES-Met model may enable it to be used as a decision support tool to personalize metastatic cancer treatment and provide valuable assistance to physicians.
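The temporal-modeling idea can be sketched as an RNN run over per-visit note embeddings, with the final hidden state summarizing the visit history. The GRU choice, dimensions, and dummy inputs below are illustrative assumptions, not the published PPES-Met architecture.

```python
import torch
import torch.nn as nn

class VisitSequenceClassifier(nn.Module):
    def __init__(self, embed_dim=100, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(embed_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, visits):                  # (batch, n_visits, embed_dim)
        _, h = self.rnn(visits)                 # final state summarizes the visit history
        return torch.sigmoid(self.head(h[-1])) # P(survival > 3 months)

model = VisitSequenceClassifier()
notes = torch.randn(4, 12, 100)  # 4 patients x 12 visits x 100-d note embeddings
print(model(notes).shape)        # -> torch.Size([4, 1])
```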
Journal of Biomedical Semantics | 2018
Asan Agibetov; Ernesto Jiménez-Ruiz; Marta Ondrésik; Alessandro Solimando; Imon Banerjee; Giovanna Guerrini; Chiara Eva Catalano; Joaquim M. Oliveira; Giuseppe Patanè; Rui L. Reis; Michela Spagnuolo
Background: The pathogenesis of inflammatory diseases can be tracked by studying the causality relationships among the factors contributing to their development. We could, for instance, hypothesize about connections between pathogenesis outcomes and the observed conditions. To prove such causal hypotheses, we would need a full understanding of the causal relationships, and we would have to provide all the necessary evidence to support our claims. In practice, however, we might not possess all the background knowledge on the causality relationships, and we might be unable to collect all the evidence needed to prove our hypotheses. Results: In this work, we propose a methodology for translating biological knowledge on causality relationships of biological processes and their effects on conditions into a computational framework for hypothesis testing. The methodology consists of two main points: hypothesis graph construction from the formalization of the background knowledge on causality relationships, and confidence measurement in a causality hypothesis as a normalized weighted path computation in the hypothesis graph. In this framework, we can simulate the collection of evidence and assess confidence in a causality hypothesis by measuring it in proportion to the amount of available knowledge and collected evidence. Conclusions: We evaluate our methodology on a hypothesis graph that represents both the contributing factors which may cause cartilage degradation and the factors which might be caused by cartilage degradation during osteoarthritis. Hypothesis graph construction has proven to be robust to the addition of potentially contradictory information on simultaneously positive and negative effects. The obtained confidence measures for the specific causality hypotheses have been validated by our domain experts and correspond closely to their subjective assessments of confidence in the investigated hypotheses. Overall, our methodology for a shared hypothesis testing framework exhibits important properties that researchers will find useful in literature review for their experimental studies, in planning and prioritizing evidence acquisition procedures, and in testing their hypotheses with different depths of knowledge on the causal dependencies of biological processes and their effects on the observed conditions.
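One plausible reading of "confidence as a normalized weighted path" is sketched below: edge weights in [0, 1] encode evidence strength, and confidence in an end-to-end causal hypothesis is the product of weights along the connecting path, so each weakly supported link dilutes belief. The graph, node names, and weights are invented for illustration and are not the paper's osteoarthritis model.

```python
import networkx as nx

# Hypothetical fragment of a hypothesis graph; weights in [0, 1] encode
# the strength of the evidence supporting each causal link.
G = nx.DiGraph()
G.add_edge("inflammation", "protease_activity", weight=0.8)
G.add_edge("protease_activity", "cartilage_degradation", weight=0.6)
G.add_edge("cartilage_degradation", "joint_space_narrowing", weight=0.9)

def hypothesis_confidence(graph, cause, effect):
    # Confidence in the end-to-end hypothesis = product of evidence
    # weights along the path connecting cause to effect.
    path = nx.shortest_path(graph, cause, effect)
    conf = 1.0
    for u, v in zip(path, path[1:]):
        conf *= graph[u][v]["weight"]
    return conf

print(hypothesis_confidence(G, "inflammation", "joint_space_narrowing"))  # ~0.432
```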
Journal of Biomedical Informatics | 2018
Anupama Gupta; Imon Banerjee; Daniel L. Rubin
To date, the methods developed for automated extraction of information from radiology reports have been mainly rule-based or dictionary-based and, therefore, require substantial manual effort to build. Recent efforts to develop automated systems for entity detection have been undertaken, but little work has been done to automatically extract relations and their associated named entities from narrative radiology reports with accuracy comparable to rule-based methods. Our goal is to extract relations from radiology reports in an unsupervised way, without specifying prior domain knowledge. We propose a hybrid approach for information extraction that combines a dependency-based parse tree with distributed semantics to generate structured information frames about particular findings/abnormalities from free-text mammography reports. The proposed IE system obtains an F1-score of 0.94 in terms of completeness of the content in the information frames, outperforming a state-of-the-art rule-based system in this domain by a significant margin. The proposed system can be leveraged in a variety of applications, such as decision support and information retrieval, and may also easily scale to other radiology domains, since there is no need to tune the system with hand-crafted information extraction rules.
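The dependency-parse half of such a hybrid can be sketched as follows: walk the parse of a report sentence and collect a finding-centric frame from the finding token's modifiers and attached prepositional objects. The finding trigger, report text, and frame slots are toy assumptions, and the distributed-semantics step of the published system is omitted.

```python
import spacy

# Requires: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def extract_frame(text):
    # Walk the dependency parse and collect a toy finding-centric frame.
    doc = nlp(text)
    frame = {}
    for token in doc:
        if token.lemma_ == "mass":                    # hypothetical finding trigger
            frame["finding"] = token.text
            frame["modifiers"] = [c.text for c in token.children
                                  if c.dep_ == "amod"]    # e.g. spiculated
            frame["location"] = [t.text for t in token.subtree
                                 if t.dep_ == "pobj"]     # object of "in"
    return frame

report = "There is an irregular spiculated mass in the upper outer left breast."
print(extract_frame(report))
```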