Publications


Featured research published by Md. Mahmudur Rahman.


Computer-Based Medical Systems | 2009

A medical image retrieval framework in correlation enhanced visual concept feature space

Md. Mahmudur Rahman; Sameer K. Antani; George R. Thoma

This paper presents a medical image retrieval framework that uses visual concepts in a feature space employing statistical models built using a probabilistic multi-class support vector machine (SVM). The images are represented using concepts that comprise color and texture patches from local image regions in a multi-dimensional feature space. A major limitation of concept feature representation is that the structural relationship or spatial ordering between concepts is ignored. We present a feature representation scheme, the visual concept structure descriptor (VCSD), that overcomes this limitation and captures both the concept frequency, much like a color histogram, and the local spatial relationships between concepts. A probabilistic framework makes the descriptor robust against classification and quantization errors. Evaluation of the proposed image retrieval framework on a biomedical image dataset with different imaging modalities validates its benefits.
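
As a rough illustration of the idea, the sketch below concatenates a concept-frequency histogram with a co-occurrence matrix over 4-connected neighbouring patches. The patch-to-concept labelling is assumed to come from a pre-trained classifier, and the neighbourhood definition and function names are illustrative assumptions, not the paper's exact VCSD formulation.

```python
import numpy as np

def vcsd_descriptor(concept_grid, n_concepts):
    """Sketch of a visual-concept structure descriptor.

    concept_grid : 2-D array of per-patch concept labels (0..n_concepts-1),
                   assumed to come from a pre-trained multi-class classifier.
    Returns the concatenation of a concept-frequency histogram and a
    normalized co-occurrence matrix over 4-connected neighbouring patches.
    """
    grid = np.asarray(concept_grid)

    # Concept frequency part (analogous to a colour histogram).
    hist = np.bincount(grid.ravel(), minlength=n_concepts).astype(float)
    hist /= max(hist.sum(), 1.0)

    # Local spatial part: co-occurrence of horizontally/vertically adjacent concepts.
    cooc = np.zeros((n_concepts, n_concepts))
    for a, b in zip(grid[:, :-1].ravel(), grid[:, 1:].ravel()):   # horizontal pairs
        cooc[a, b] += 1
        cooc[b, a] += 1
    for a, b in zip(grid[:-1, :].ravel(), grid[1:, :].ravel()):   # vertical pairs
        cooc[a, b] += 1
        cooc[b, a] += 1
    cooc /= max(cooc.sum(), 1.0)

    return np.concatenate([hist, cooc.ravel()])

# Toy usage: a 4x4 grid of patch-level concept labels drawn from 3 concepts.
demo = vcsd_descriptor([[0, 0, 1, 2], [0, 1, 1, 2], [2, 1, 0, 0], [2, 2, 0, 1]], n_concepts=3)
print(demo.shape)  # (3 + 9,) = (12,)
```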


Document Recognition and Retrieval | 2010

Biomedical article retrieval using multimodal features and image annotations in region-based CBIR

Daekeun You; Sameer K. Antani; Dina Demner-Fushman; Md. Mahmudur Rahman; Venu Govindaraju; George R. Thoma

Biomedical images are invaluable in establishing diagnosis, acquiring technical skills, and implementing best practices in many areas of medicine. At present, images needed for instructional purposes or in support of clinical decisions appear in specialized databases and in biomedical articles, and are often not easily accessible to retrieval tools. Our goal is to automatically annotate images extracted from scientific publications with respect to their usefulness for clinical decision support and instructional purposes, and project the annotations onto images stored in databases by linking images through content-based image similarity. Authors often use text labels and pointers overlaid on figures and illustrations in the articles to highlight regions of interest (ROI). These annotations are then referenced in the caption text or figure citations in the article text. In previous research we developed two methods (a heuristic method and a dynamic time warping-based method) for localizing and recognizing such pointers on biomedical images. In this work, we add robustness to our previous efforts by using a machine learning-based approach to localizing and recognizing the pointers. Identifying these pointers assists in extracting image content from regions that are likely to be highly relevant to the discussion in the article text. Image regions can then be annotated using biomedical concepts from extracted snippets of text pertaining to images in scientific biomedical articles that are identified using the National Library of Medicine's Unified Medical Language System® (UMLS) Metathesaurus. The resulting regional annotation and extracted image content are then used as indices for biomedical article retrieval using multimodal features and region-based content-based image retrieval (CBIR) techniques. The hypothesis that such an approach would improve biomedical document retrieval is validated through experiments on an expert-marked biomedical article dataset.
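
The pointer-localization step can be pictured with the hedged sketch below: connected components of a binarized figure are described by a few shape features and classified as pointer or non-pointer with an SVM. The feature set, the synthetic training rows, and the decision rule are illustrative assumptions, not the specific method used in the paper.

```python
import numpy as np
from skimage.measure import label, regionprops
from sklearn.svm import SVC

def component_shape_features(binary_image):
    """Describe each connected component of a binarized figure with simple
    shape features; elongated, low-solidity components are arrow candidates."""
    feats, boxes = [], []
    for region in regionprops(label(binary_image)):
        feats.append([region.area, region.eccentricity,
                      region.solidity, region.extent])
        boxes.append(region.bbox)
    return np.array(feats), boxes

# Hypothetical training rows (pointer = 1, non-pointer = 0); in practice these
# would come from components cropped out of manually annotated figures.
X_train = np.array([[120, 0.98, 0.55, 0.40],
                    [110, 0.95, 0.60, 0.45],
                    [150, 0.97, 0.50, 0.35],
                    [900, 0.30, 0.95, 0.90],
                    [850, 0.25, 0.90, 0.85],
                    [700, 0.40, 0.92, 0.80]])
y_train = np.array([1, 1, 1, 0, 0, 0])
clf = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)

# Toy query figure: a crude horizontal "arrow shaft" on an empty canvas.
figure = np.zeros((20, 20), dtype=bool)
figure[5, 2:15] = True
X_test, boxes = component_shape_features(figure)
pointer_boxes = [b for b, y in zip(boxes, clf.predict(X_test)) if y == 1]
print(pointer_boxes)
```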


International Symposium on Biomedical Imaging | 2010

Retrieval and classification of ultrasound images of ovarian cysts combining texture features and histogram moments

Abu Sayeed Md. Sohail; Md. Mahmudur Rahman; Prabir Bhattacharya; Srinivasan Krishnamurthy; Sudhir P. Mudur

This paper presents an effective solution for content-based retrieval and classification of ultrasound medical images representing three types of ovarian cysts: Simple Cyst, Endometrioma, and Teratoma. Our proposed solution comprises the following: extraction of low-level ultrasound image features combining histogram moments with Gray Level Co-Occurrence Matrix (GLCM)-based statistical texture descriptors; image retrieval using a similarity model based on Gower's similarity coefficient, which measures the relevance between the query image and the target images; and use of a multiclass Support Vector Machine (SVM) for classifying the low-level ultrasound image features into their corresponding high-level categories. The efficiency of this solution for ultrasound medical image retrieval and classification has been evaluated using an in-progress database, presently consisting of 478 ultrasound ovarian images. In retrieval, the proposed solution achieves average precision above 77% and 75% over the first 20 and 40 retrieved results, respectively, and an average classification accuracy of 86.90%.
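
A minimal sketch of this kind of feature extraction and matching is shown below; it computes histogram moments and a few hand-rolled GLCM statistics and compares feature vectors with a simplified Gower-style similarity. The quantization level, the chosen statistics, and the range normalization are assumptions for illustration only, not the paper's exact configuration.

```python
import numpy as np

def glcm_features(gray, levels=8):
    """Simplified gray-level co-occurrence features (horizontal offset of 1):
    contrast, energy, and homogeneity computed from a quantized image."""
    q = (gray.astype(float) / 256 * levels).astype(int).clip(0, levels - 1)
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1
    glcm /= max(glcm.sum(), 1.0)
    i, j = np.indices(glcm.shape)
    contrast = ((i - j) ** 2 * glcm).sum()
    energy = (glcm ** 2).sum()
    homogeneity = (glcm / (1.0 + np.abs(i - j))).sum()
    return np.array([contrast, energy, homogeneity])

def histogram_moments(gray):
    """First few moments of the intensity distribution."""
    x = gray.astype(float).ravel()
    mean, std = x.mean(), x.std()
    skew = ((x - mean) ** 3).mean() / (std ** 3 + 1e-9)
    return np.array([mean, std, skew])

def gower_similarity(f1, f2, ranges):
    """Gower-style similarity: mean of per-feature similarities, each defined
    as 1 - |difference| / feature range (clipped to [0, 1])."""
    per_feature = 1.0 - np.abs(f1 - f2) / (ranges + 1e-9)
    return float(np.clip(per_feature, 0.0, 1.0).mean())

# Toy usage with three random "ultrasound" patches standing in for a collection.
rng = np.random.default_rng(0)
patches = [rng.integers(0, 256, (64, 64)) for _ in range(3)]
feats = np.array([np.concatenate([histogram_moments(p), glcm_features(p)]) for p in patches])
ranges = feats.max(axis=0) - feats.min(axis=0)   # per-feature ranges over the collection
print(gower_similarity(feats[0], feats[1], ranges))
```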


Document Recognition and Retrieval | 2011

Automatic identification of ROI in figure images toward improving hybrid (text and image) biomedical document retrieval

Daekeun You; Sameer K. Antani; Dina Demner-Fushman; Md. Mahmudur Rahman; Venu Govindaraju; George R. Thoma

Biomedical images are often referenced for clinical decision support (CDS), educational purposes, and research. They appear in specialized databases or in biomedical publications and are not meaningfully retrievable using primarily text-based retrieval systems. The task of automatically finding the images in an article that are most useful for the purpose of determining relevance to a clinical situation is quite challenging. One approach is to automatically annotate images extracted from scientific publications with respect to their usefulness for CDS. As an important step toward achieving the goal, we proposed figure image analysis for localizing pointers (arrows, symbols) to extract regions of interest (ROI) that can then be used to obtain meaningful local image content. Content-based image retrieval (CBIR) techniques can then associate local image ROIs with identified biomedical concepts in figure captions for improved hybrid (text and image) retrieval of biomedical articles. In this work we present methods that make robust our previous Markov random field (MRF)-based approach for pointer recognition and ROI extraction. These include use of Active Shape Models (ASM) to overcome problems in recognizing distorted pointer shapes and a region segmentation method for ROI extraction. We measure the performance of our methods on two criteria: (i) effectiveness in recognizing pointers in images, and (ii) improved document retrieval through use of extracted ROIs. Evaluation on three test sets shows 87% accuracy in the first criterion. Further, the quality of document retrieval using local visual features and text is shown to be better than using visual features alone.


Computerized Medical Imaging and Graphics | 2015

Literature-based biomedical image classification and retrieval

Matthew S. Simpson; Daekeun You; Md. Mahmudur Rahman; Zhiyun Xue; Dina Demner-Fushman; Sameer K. Antani; George R. Thoma

Literature-based image informatics techniques are essential for managing the rapidly increasing volume of information in the biomedical domain. Compound figure separation, modality classification, and image retrieval are three related tasks useful for enabling efficient access to the most relevant images contained in the literature. In this article, we describe approaches to these tasks and the evaluation of our methods as part of the 2013 medical track of ImageCLEF. In performing each of these tasks, the textual and visual features used to represent images are an important consideration often left unaddressed. Therefore, we also describe a gradient-based optimization strategy for determining meaningful combinations of features and apply the method to the image retrieval task. An evaluation of our optimization strategy indicates the method is capable of producing statistically significant improvements in retrieval performance. Furthermore, the results of the 2013 ImageCLEF evaluation demonstrate the effectiveness of our techniques. In particular, our text-based and mixed image retrieval methods ranked first among all the participating groups.
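
The sketch below illustrates one way such a gradient-based combination could look: fusion weights over per-channel similarity scores are learned by gradient descent on a logistic surrogate of retrieval quality. The surrogate loss, the non-negativity constraint, and the toy data are assumptions for illustration; the paper's actual objective and optimizer may differ.

```python
import numpy as np

def learn_fusion_weights(sim_matrix, relevant, lr=0.1, steps=200):
    """Gradient-based search for non-negative weights that combine several
    per-feature similarity scores into one ranking score.

    sim_matrix : array of shape (n_pairs, n_features), one row per
                 query-document pair, one column per feature channel
                 (e.g. text similarity, visual similarity, ...).
    relevant   : binary array of shape (n_pairs,), 1 if the document is
                 relevant to the query.
    Optimizes a logistic surrogate of retrieval quality rather than MAP
    directly, which is a simplifying assumption.
    """
    n_features = sim_matrix.shape[1]
    w = np.ones(n_features) / n_features
    y = 2.0 * relevant - 1.0                      # map {0,1} -> {-1,+1}
    for _ in range(steps):
        scores = sim_matrix @ w
        grad = ((-y / (1.0 + np.exp(y * scores)))[:, None] * sim_matrix).mean(axis=0)
        w = np.clip(w - lr * grad, 0.0, None)     # keep weights non-negative
        w /= max(w.sum(), 1e-9)                   # normalize to a convex combination
    return w

# Toy usage: 2 feature channels (text, visual) over a handful of pairs.
sims = np.array([[0.9, 0.2], [0.8, 0.1], [0.2, 0.7], [0.1, 0.3], [0.4, 0.4]])
rel = np.array([1, 1, 0, 0, 1])
print(learn_fusion_weights(sims, rel))
```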


MCBR-CDS'11: Proceedings of the Second MICCAI International Conference on Medical Content-Based Retrieval for Clinical Decision Support | 2011

Biomedical image retrieval using multimodal context and concept feature spaces

Md. Mahmudur Rahman; Sameer K. Antani; Dina Demner-Fushman; George R. Thoma

This paper presents a unified medical image retrieval method that integrates visual features and text keywords using multimodal classification and filtering. For content-based image search, concepts derived from visual features are modeled using support vector machine (SVM)-based classification of patches from local image regions. Text keywords from associated metadata provide the context and are indexed using the vector space model of information retrieval. The concept and context vectors are combined and trained for SVM classification at a global level for image modality (e.g., CT, MR, x-ray, etc.) detection. In this method, the probabilistic outputs from the modality categorization are used to filter images so that the search can be performed only on a candidate subset. An evaluation of the method on the ImageCLEFmed 2010 dataset of 77,000 images, with XML annotations and topics, results in a mean average precision (MAP) score of 0.1125, demonstrating the effectiveness and efficiency of the proposed multimodal framework compared to using only a single modality or no classification information.
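
A hedged sketch of the filtering idea follows: an SVM with probabilistic outputs predicts the modality of combined feature vectors, and similarity matching is restricted to database images whose probability for the query's predicted modality passes a threshold. The synthetic two-dimensional features, the threshold value, and the Euclidean ranking are illustrative stand-ins for the real concept/context vectors and similarity measure.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Synthetic stand-ins for combined "concept + context" vectors: three modality
# clusters (e.g. CT, MR, x-ray), 30 training images each.  Real vectors would
# concatenate visual-concept features with TF-IDF text features.
means = np.array([[0.0, 0.0], [3.0, 3.0], [0.0, 4.0]])
X_train = np.vstack([rng.normal(m, 0.5, size=(30, 2)) for m in means])
y_train = np.repeat([0, 1, 2], 30)                     # modality labels

modality_clf = SVC(kernel="rbf", probability=True).fit(X_train, y_train)

# Database of image feature vectors and their modality probabilities.
X_db = np.vstack([rng.normal(m, 0.5, size=(50, 2)) for m in means])
db_probs = modality_clf.predict_proba(X_db)

def filtered_search(query_vec, X_db, db_probs, threshold=0.3):
    """Restrict similarity matching to images whose probability of belonging
    to the query's predicted modality exceeds a threshold, then rank the
    survivors by Euclidean distance (a stand-in for the real similarity)."""
    q_probs = modality_clf.predict_proba(query_vec[None, :])[0]
    modality = int(np.argmax(q_probs))
    candidates = np.where(db_probs[:, modality] >= threshold)[0]
    dists = np.linalg.norm(X_db[candidates] - query_vec, axis=1)
    return candidates[np.argsort(dists)]

print(filtered_search(np.array([0.2, 3.8]), X_db, db_probs)[:5])
```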


International Journal of Multimedia Information Retrieval | 2014

Interactive cross and multimodal biomedical image retrieval based on automatic region-of-interest (ROI) identification and classification

Md. Mahmudur Rahman; Daekeun You; Matthew S. Simpson; Sameer K. Antani; Dina Demner-Fushman; George R. Thoma

In biomedical articles, authors often use annotation markers such as arrows, letters, or symbols overlaid on figures and illustrations to highlight ROIs. These annotations are then referenced and correlated with concepts in the caption text or figure citations in the article text. This association creates a bridge between the visual characteristics of important regions within an image and their semantic interpretation. Identifying these annotations assists in extracting ROIs that are likely to be highly relevant to the discussion in the article text. The aim of this work is to perform semantic search without knowing the concept keyword or the specific name of the visual pattern or appearance. We consider the problem of cross- and multimodal retrieval of images from articles, which contain both text and images. Our proposed method localizes and recognizes the annotations by utilizing a combination of rule-based and statistical image processing techniques. The image regions are then annotated for classification using biomedical concepts obtained from a glossary of imaging terms. Similar automatic ROI extraction can be applied to query images, or an ROI can be marked interactively. As a result, visual characteristics of the ROIs can be mapped to text concepts and then used to search image captions. In addition, the system can toggle the search process from a purely perceptual to a conceptual one (crossmodal) based on user feedback, or integrate both perceptual and conceptual search in a multimodal search process. The hypothesis that such approaches would improve biomedical image retrieval was validated through experiments on a biomedical article dataset of thoracic CT scans.
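
The crossmodal step can be pictured with the sketch below: an ROI descriptor is mapped to its nearest concept in a small hypothetical glossary, and that concept name is then used as a text query against figure captions. The prototype descriptors, glossary terms, and captions are invented for illustration and do not come from the paper's dataset.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical "glossary" of imaging terms with prototype ROI descriptors
# (in practice these would come from classified, annotated ROIs).
concept_names = ["ground-glass opacity", "nodule", "pleural effusion"]
concept_protos = np.array([[0.9, 0.1, 0.2],
                           [0.2, 0.8, 0.1],
                           [0.1, 0.2, 0.9]])

captions = [
    "Axial CT shows ground-glass opacity in the right upper lobe (arrow).",
    "A solitary pulmonary nodule is indicated by the arrowhead.",
    "Large left pleural effusion with adjacent atelectasis.",
]
vectorizer = TfidfVectorizer().fit(captions)
caption_matrix = vectorizer.transform(captions)

def crossmodal_search(roi_descriptor, top_k=2):
    """Map an ROI descriptor to its nearest glossary concept, then rank
    captions by TF-IDF cosine similarity to that concept's name."""
    concept = concept_names[int(np.argmin(
        np.linalg.norm(concept_protos - roi_descriptor, axis=1)))]
    query_vec = vectorizer.transform([concept])
    scores = cosine_similarity(query_vec, caption_matrix).ravel()
    return concept, [captions[i] for i in np.argsort(scores)[::-1][:top_k]]

print(crossmodal_search(np.array([0.15, 0.75, 0.2])))
```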


Computer-Based Medical Systems | 2011

Biomedical CBIR using “bag of keypoints” in a modified inverted index

Md. Mahmudur Rahman; Sameer K. Antani; George R. Thoma

This paper presents a “bag of keypoints” based medical image retrieval approach to cope with a large variety of visually different instances under the same category or modality. Keypoint similarities in the codebook are computed using a quadratic similarity measure. The codebook is implemented using a topology-preserving Self-Organizing Map (SOM), which represents images as sparse feature vectors, and an inverted index is created on top of this to facilitate efficient retrieval. In addition, to increase retrieval effectiveness, query expansion is performed by exploiting the similarities between keypoints based on analyzing the local neighborhood structure of the SOM-generated codebook. The search is thus query-specific and restricted to a sub-space spanned only by the original and expanded keypoints of the query images. A systematic evaluation of retrieval results on a collection of 5000 biomedical images of different modalities, body parts, and orientations shows a halving in computation time (efficiency) and a 10% to 15% improvement in precision at each recall level (effectiveness) when compared to using individual color, texture, and edge-related features.
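
A compact sketch of the indexing and query-expansion idea is given below. It substitutes k-means for the paper's Self-Organizing Map codebook (a stand-in, stated plainly), quantizes keypoint descriptors into visual words, builds an inverted index, and expands a query with the nearest codebook words so that only images sharing words with the expanded query are scored.

```python
import numpy as np
from collections import defaultdict
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)

# Keypoint descriptors per image (toy 8-D descriptors standing in for real ones).
images = {i: rng.normal(size=(40, 8)) for i in range(20)}

# Codebook: the paper uses a Self-Organizing Map; k-means is a stand-in here.
all_desc = np.vstack(list(images.values()))
codebook = KMeans(n_clusters=16, n_init=10, random_state=0).fit(all_desc)

# Inverted index: visual word -> {image_id: term frequency}.
index = defaultdict(lambda: defaultdict(int))
for img_id, desc in images.items():
    for w in codebook.predict(desc):
        index[w][img_id] += 1

def expanded_query_words(desc, n_extra=1):
    """Quantize query keypoints, then expand with the nearest neighbouring
    codebook words (a simplified stand-in for the SOM-topology expansion)."""
    words = set(codebook.predict(desc))
    centers = codebook.cluster_centers_
    for w in list(words):
        d = np.linalg.norm(centers - centers[w], axis=1)
        words.update(np.argsort(d)[1:1 + n_extra])
    return words

def search(desc, top_k=5):
    scores = defaultdict(float)
    for w in expanded_query_words(desc):
        for img_id, tf in index.get(w, {}).items():  # only images sharing words are touched
            scores[img_id] += tf
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

print(search(images[3]))
```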


International Symposium on Biomedical Imaging | 2011

A biomedical image retrieval framework based on classification-driven image filtering and similarity fusion

Md. Mahmudur Rahman; Sameer K. Antani; George R. Thoma

This paper presents a classification-driven biomedical image retrieval approach based on a multi-class support vector machine (SVM) that uses image filtering and similarity fusion. In this framework, the probabilistic outputs of the SVM are exploited to reduce the search space for similarity matching. In addition, the predicted category of the query image is used to weight a linear combination of similarity measures. The method is evaluated on a diverse collection of 5000 biomedical images of different modalities, body parts, and orientations and shows a halving in computation time (efficiency) and a 10% to 15% improvement in precision at each recall level (effectiveness).
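
The similarity-fusion part can be illustrated as below: the query's predicted category selects a set of per-channel fusion weights, and database images are ranked by the weighted sum of their channel similarities. The category names, the weights, and the fallback rule are hypothetical, and the database-side probability filtering is omitted here for brevity.

```python
import numpy as np

# Hypothetical per-category fusion weights for three feature channels
# (e.g. colour, texture, edge), assumed to be learned offline per category.
category_weights = {
    "x-ray": np.array([0.2, 0.5, 0.3]),
    "CT":    np.array([0.3, 0.3, 0.4]),
    "MR":    np.array([0.5, 0.2, 0.3]),
}

def fused_retrieval(query_category_probs, channel_sims, prob_threshold=0.25):
    """channel_sims: dict image_id -> per-channel similarity vector to the query.
    The query's most probable category picks the fusion weights; if no category
    is predicted confidently enough, equal weights are used as a fallback."""
    category = max(query_category_probs, key=query_category_probs.get)
    if query_category_probs[category] < prob_threshold:
        weights = np.full(3, 1.0 / 3)              # fall back to equal weights
    else:
        weights = category_weights[category]
    scores = {img: float(weights @ sims) for img, sims in channel_sims.items()}
    return sorted(scores, key=scores.get, reverse=True)

sims = {"img1": np.array([0.7, 0.2, 0.4]),
        "img2": np.array([0.3, 0.9, 0.5]),
        "img3": np.array([0.1, 0.1, 0.2])}
print(fused_retrieval({"x-ray": 0.7, "CT": 0.2, "MR": 0.1}, sims))
```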


Computer Vision and Pattern Recognition | 2010

Local concept-based medical image retrieval with correlation-enhanced similarity matching based on global analysis

Md. Mahmudur Rahman; Sameer K. Antani; George R. Thoma

A correlation-enhanced similarity matching framework for medical image retrieval is presented in a local concept-based feature space. In this framework, images are represented by vectors of concepts that comprise local color and texture patches from image regions in a multi-dimensional feature space. To generate the concept vocabularies and represent the images, statistical models are built using a probabilistic multi-class support vector machine (SVM). For the similarity search, the concept correlations in the collection as a whole are analyzed as a global thesaurus-like structure and incorporated into a similarity matching function. The proposed scheme overcomes some limitations of the “bag of concepts” model, such as the assumption of feature independence. A systematic evaluation of image retrieval on a biomedical image collection of different modalities demonstrates the advantages of the proposed retrieval framework in terms of precision-recall.
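
One way to picture correlation-enhanced matching is the sketch below: a concept correlation matrix estimated over the whole collection acts as a thesaurus, and similarity between two concept vectors is a normalized quadratic form that reduces to cosine similarity when the matrix is the identity. The correlation estimate (Pearson correlation, clipped to be non-negative) is an assumption for illustration rather than the paper's exact construction.

```python
import numpy as np

def concept_correlation_matrix(concept_vectors):
    """Build a thesaurus-like concept correlation matrix from the whole
    collection: correlations of concept occurrence across images,
    clipped to be non-negative and with unit diagonal."""
    C = np.corrcoef(np.asarray(concept_vectors), rowvar=False)
    C = np.nan_to_num(C, nan=0.0)
    np.fill_diagonal(C, 1.0)
    return np.clip(C, 0.0, 1.0)

def correlation_enhanced_similarity(x, y, C):
    """Quadratic-form similarity x^T C y, normalized; reduces to cosine
    similarity when C is the identity (i.e. when concepts are independent)."""
    num = x @ C @ y
    den = np.sqrt(x @ C @ x) * np.sqrt(y @ C @ y) + 1e-9
    return num / den

# Toy usage: 6 images described by 4-concept frequency vectors.
rng = np.random.default_rng(3)
collection = rng.random((6, 4))
C = concept_correlation_matrix(collection)
print(correlation_enhanced_similarity(collection[0], collection[1], C))
```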

Collaboration


Dive into Md. Mahmudur Rahman's collaborations.

Top Co-Authors

Sameer K. Antani (National Institutes of Health)
George R. Thoma (National Institutes of Health)
Dina Demner-Fushman (National Institutes of Health)
Matthew S. Simpson (National Institutes of Health)
L. Rodney Long (National Institutes of Health)