Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Sameer K. Antani is active.

Publication


Featured research published by Sameer K. Antani.


Pattern Recognition | 2002

A survey on the use of pattern recognition methods for abstraction, indexing and retrieval of images and video

Sameer K. Antani; Rangachar Kasturi; Ramesh Jain

The need for content-based access to image and video information from media archives has captured the attention of researchers in recent years. Research efforts have led to the development of methods that provide access to image and video data. These methods have their roots in pattern recognition. The methods are used to determine the similarity in the visual information content extracted from low level features. These features are then clustered for generation of database indices. This paper presents a comprehensive survey on the use of these pattern recognition methods which enable image and video retrieval by content. © 2002 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.


Journal of Digital Imaging | 2009

Ontology of Gaps in Content-Based Image Retrieval

Thomas Martin Deserno; Sameer K. Antani; L. Rodney Long

Content-based image retrieval (CBIR) is a promising technology to enrich the core functionality of picture archiving and communication systems (PACS). CBIR has the potential to make a strong impact in diagnostics, research, and education. Research as reported in the scientific literature, however, has not yet made significant inroads in the form of medical CBIR applications incorporated into routine clinical medicine or medical research. The cause is often attributed (without supporting analysis) to the inability of these applications to overcome the “semantic gap.” The semantic gap divides the high-level scene understanding and interpretation available with human cognitive capabilities from the low-level pixel analysis of computers, based on mathematical processing and artificial intelligence methods. In this paper, we suggest a more systematic and comprehensive view of the concept of “gaps” in medical CBIR research. In particular, we define an ontology of 14 gaps that addresses image content and features, as well as system performance and usability. In addition to these gaps, we identify seven system characteristics that impact CBIR applicability and performance. The framework we have created can be used a posteriori, to compare medical CBIR systems and approaches for specific biomedical image domains and goals, and a priori, during the design phase of a medical CBIR application, as the systematic analysis of gaps provides detailed insight for system comparison and helps to direct future research.


Computer Methods and Programs in Biomedicine | 2012

Histology image analysis for carcinoma detection and grading

Lei He; L. Rodney Long; Sameer K. Antani; George R. Thoma

This paper presents an overview of image analysis techniques in the domain of histopathology, specifically for the objective of automated carcinoma detection and classification. As in other biomedical imaging areas such as radiology, many computer-assisted diagnosis (CAD) systems have been implemented to aid histopathologists and clinicians in cancer diagnosis and research; these systems aim to significantly reduce the labor and subjectivity of traditional manual analysis of histology images. The task of automated histology image analysis is usually not simple due to the unique characteristics of histology imaging, including the variability in image preparation techniques, clinical interpretation protocols, and the complex structures and very large size of the images themselves. In this paper we discuss those characteristics, provide relevant background information about slide preparation and interpretation, and review the application of digital image processing techniques to the field of histology image analysis. In particular, emphasis is given to state-of-the-art image segmentation methods for feature extraction and disease classification. Four major carcinomas, of the cervix, prostate, breast, and lung, are selected to illustrate the functions and capabilities of existing CAD systems.


Journal of Lower Genital Tract Disease | 2009

The accuracy of colposcopic grading for detection of high-grade cervical intraepithelial neoplasia

L. Stewart Massad; Jose Jeronimo; Hormuzd A. Katki; Mark Schiffman; Sameer K. Antani; Lori A. Boardman; Peter S. Cartwright; Philip E. Castle; Charles J. Dunton; Julia C. Gage; Richard Guido; Fernando B. Guijon; Thomas J. Herzog; Warner K. Huh; Abner P. Korn; Edward R. Kost; Ramey D. Littell; Rodney Long; Jorge Morales; Leif Neve; Dennis M. O'Connor; Janet S. Rader; George F. Sawaya; Mario Sideri; Karen Smith-McCune; Mark Spitzer; Alan G. Waxman; Claudia L. Werner

Objective. To relate aspects of online colposcopic image assessment to the diagnosis of grades 2 and 3 cervical intraepithelial neoplasia (CIN 2+). Methods. To simulate colposcopic assessment, we obtained digitized cervical images at enrollment after acetic acid application from 919 women referred for equivocal or minor cytologic abnormalities into the ASCUS-LSIL Triage Study. For each, 2 randomly assigned evaluators from a pool of 20 colposcopists assessed images using a standardized tool online. We calculated the accuracy of these assessments for predicting histologic CIN 2+ over the 2 years of study. For validation, a subset of online results was compared with same-day enrollment colposcopic assessments. Results. Identifying any acetowhite lesion in images yielded high sensitivity: 93% of women with CIN 2+ had at least 1 acetowhite lesion. However, 74% of women without CIN 2+ also had acetowhitening, regardless of human papillomavirus status. The sensitivity for CIN 2+ of an online colpophotographic assessment of high-grade disease was 39%. The sensitivity for CIN 2+ of a high-grade diagnosis by Reid Index scoring was 30%, and individual Reid Index component scores had similar levels of sensitivity and specificity. The performance of online assessment was not meaningfully different from that of same-day enrollment colposcopy, suggesting that these approaches have similar utility. Conclusions. Finding acetowhite lesions identifies women with CIN 2+, but using subtler colposcopic characteristics to grade lesions is insensitive. All acetowhite lesions should be assessed with biopsy to maximize sensitivity of colposcopic diagnosis with good specificity.


IEEE Transactions on Medical Imaging | 2014

Lung Segmentation in Chest Radiographs Using Anatomical Atlases With Nonrigid Registration

Sema Candemir; Stefan Jaeger; Kannappan Palaniappan; Jonathan P. Musco; Rahul Singh; Zhiyun Xue; Alexandros Karargyris; Sameer K. Antani; George R. Thoma; Clement J. McDonald

The National Library of Medicine (NLM) is developing a digital chest X-ray (CXR) screening system for deployment in resource constrained communities and developing countries worldwide with a focus on early detection of tuberculosis. A critical component in the computer-aided diagnosis of digital CXRs is the automatic detection of the lung regions. In this paper, we present a nonrigid registration-driven robust lung segmentation method using image retrieval-based patient specific adaptive lung models that detects lung boundaries, surpassing state-of-the-art performance. The method consists of three main stages: 1) a content-based image retrieval approach for identifying training images (with masks) most similar to the patient CXR using a partial Radon transform and Bhattacharyya shape similarity measure, 2) creating the initial patient-specific anatomical model of lung shape using SIFT-flow for deformable registration of training masks to the patient CXR, and 3) extracting refined lung boundaries using a graph cuts optimization approach with a customized energy function. Our average accuracy of 95.4% on the public JSRT database is the highest among published results. A similar degree of accuracy of 94.1% and 91.7% on two new CXR datasets from Montgomery County, MD, USA, and India, respectively, demonstrates the robustness of our lung segmentation approach.
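The first stage described above, retrieving the training CXRs whose lung shapes most resemble the patient's, can be sketched in a few lines. This is a simplified illustration rather than the authors' implementation: the function names are ours, and a single horizontal (row-sum) projection stands in for the partial Radon transform, since the row-sum projection is the Radon line at angle zero.

```python
import math

def bhattacharyya_similarity(p, q):
    """Bhattacharyya coefficient between two non-negative profiles,
    normalized to unit sum: 1.0 for identical shapes, 0.0 for disjoint."""
    sp, sq = sum(p), sum(q)
    return sum(math.sqrt((a / sp) * (b / sq)) for a, b in zip(p, q))

def horizontal_projection(mask):
    """Row sums of a binary mask: one projection line of a Radon
    transform (angle 0), used as a coarse lung-shape profile."""
    return [sum(row) for row in mask]
```

A patient-specific atlas would then be built from the masks of the top-ranked training images, ranked by `bhattacharyya_similarity` between the query and training projections.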


IEEE Transactions on Medical Imaging | 2014

Automatic Tuberculosis Screening Using Chest Radiographs

Stefan Jaeger; Alexandros Karargyris; Sema Candemir; Les R. Folio; Jenifer Siegelman; Fiona M. Callaghan; Zhiyun Xue; Kannappan Palaniappan; Rahul K. Singh; Sameer K. Antani; George R. Thoma; Yi-Xiang J. Wang; Pu-Xuan Lu; Clement J. McDonald

Tuberculosis is a major health threat in many regions of the world. Opportunistic infections in immunocompromised HIV/AIDS patients and multi-drug-resistant bacterial strains have exacerbated the problem, while diagnosing tuberculosis still remains a challenge. When left undiagnosed and thus untreated, mortality rates of patients with tuberculosis are high. Standard diagnostics still rely on methods developed in the last century. They are slow and often unreliable. In an effort to reduce the burden of the disease, this paper presents our automated approach for detecting tuberculosis in conventional posteroanterior chest radiographs. We first extract the lung region using a graph cut segmentation method. For this lung region, we compute a set of texture and shape features, which enable the X-rays to be classified as normal or abnormal using a binary classifier. We measure the performance of our system on two datasets: a set collected by the tuberculosis control program of our local county's health department in the United States, and a set collected by Shenzhen Hospital, China. The proposed computer-aided diagnostic system for TB screening, which is ready for field deployment, achieves a performance that approaches the performance of human experts. We achieve an area under the ROC curve (AUC) of 87% (78.3% accuracy) for the first set, and an AUC of 90% (84% accuracy) for the second set. For the first set, we compare our system performance with the performance of radiologists. When trying not to miss any positive cases, radiologists achieve an accuracy of about 82% on this set, and their false positive rate is about half of our system's rate.
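The reported figures use the area under the ROC curve. As a reference point for that metric only (the feature extraction and classifier are not reproduced here, and the function name is ours), AUC can be computed without any library via the Mann-Whitney statistic:

```python
def roc_auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen abnormal case (label 1)
    scores higher than a randomly chosen normal case (label 0);
    ties count as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A perfect ranking yields 1.0, a random one hovers near 0.5, so the 0.87 and 0.90 figures above indicate the classifier's scores separate normal from abnormal CXRs well.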


International Journal of Medical Informatics | 2009

SPIRS: A Web-based Image Retrieval System for Large Biomedical Databases

William Hsu; Sameer K. Antani; L. Rodney Long; Leif Neve; George R. Thoma

PURPOSE With the increasing use of images in disease research, education, and clinical medicine, the need for methods that effectively archive, query, and retrieve these images by their content is underscored. This paper describes the implementation of a Web-based retrieval system called SPIRS (Spine Pathology & Image Retrieval System), which permits exploration of a large biomedical database of digitized spine X-ray images and data from a national health survey using a combination of visual and textual queries. METHODS SPIRS is a generalizable framework that consists of four components: a client applet, a gateway, an indexing and retrieval system, and a database of images and associated text data. The prototype system is demonstrated using text and imaging data collected as part of the second U.S. National Health and Nutrition Examination Survey (NHANES II). Users search the image data by providing a sketch of the vertebral outline or selecting an example vertebral image and some relevant text parameters. Pertinent pathology on the image/sketch can be annotated and weighted to indicate importance. RESULTS During the course of development, we explored different algorithms to perform functions such as segmentation, indexing, and retrieval. Each algorithm was tested individually and then implemented as part of SPIRS. To evaluate the overall system, we first tested the system's ability to return similar vertebral shapes from the database given a query shape. Initial evaluations using visual queries only (no text) have shown that the system achieves up to 68% accuracy in finding images in the database that exhibit similar abnormality type and severity. Relevance feedback mechanisms have been shown to increase accuracy by an additional 22% after three iterations.
While we primarily demonstrate this system in the context of retrieving vertebral shape, our framework has also been adapted to search a collection of 100,000 uterine cervix images to study the progression of cervical cancer. CONCLUSIONS SPIRS is automated, easily accessible, and can be integrated with other complementary information retrieval systems. The system supports the ability for users to intuitively query large amounts of imaging data by providing visual examples and text keywords and has beneficial implications in the areas of research, education, and patient care.


international conference on document analysis and recognition | 2003

Extraction of special effects caption text events from digital video

David J. Crandall; Sameer K. Antani; Rangachar Kasturi

The popularity of digital video is increasing rapidly. To help users navigate libraries of video, algorithms that automatically index video based on content are needed. One approach is to extract text appearing in video, which often reflects a scene's semantic content. This is a difficult problem due to the unconstrained nature of general-purpose video. Text can have arbitrary color, size, and orientation. Backgrounds may be complex and changing. Most work so far has made restrictive assumptions about the nature of text occurring in video. Such work is therefore not directly applicable to unconstrained, general-purpose video. In addition, most work so far has focused only on detecting the spatial extent of text in individual video frames. However, text occurring in video usually persists for several seconds. This constitutes a text event that should be entered only once in the video index. Therefore it is also necessary to determine the temporal extent of text events. This is a non-trivial problem because text may move, rotate, grow, shrink, or otherwise change over time. Such text effects are common in television programs and commercials but so far have received little attention in the literature. This paper discusses detecting, binarizing, and tracking caption text in general-purpose MPEG-1 video. Solutions are proposed for each of these problems and compared with existing work found in the literature.
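The notion of a text event, one index entry per persistent caption rather than one per frame, can be illustrated with a deliberately simplified grouping pass. This sketch assumes nearly static boxes and greedy one-to-one matching; the paper's actual tracker also copes with text that moves, rotates, grows, or shrinks. All names here are illustrative.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def group_text_events(detections, min_iou=0.3):
    """Group per-frame text boxes into temporal 'text events': a box in
    the current frame extends an open event if it overlaps that event's
    most recent box. detections is a list of per-frame box lists."""
    open_events, closed = [], []
    for frame_boxes in detections:
        still_open = []
        matched = [False] * len(frame_boxes)
        for event in open_events:
            extended = False
            for k, box in enumerate(frame_boxes):
                if not matched[k] and iou(event[-1], box) >= min_iou:
                    event.append(box)
                    matched[k] = True
                    extended = True
                    break
            (still_open if extended else closed).append(event)
        for k, box in enumerate(frame_boxes):
            if not matched[k]:
                still_open.append([box])  # a new event starts here
        open_events = still_open
    return closed + open_events
```

Each returned event spans the frames its boxes came from, so a caption persisting for several seconds produces a single index entry.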


international conference of the ieee engineering in medicine and biology society | 2011

A Learning-Based Similarity Fusion and Filtering Approach for Biomedical Image Retrieval Using SVM Classification and Relevance Feedback

Md. Mahmudur Rahman; Sameer K. Antani; George R. Thoma

This paper presents a classification-driven biomedical image retrieval framework based on image filtering and similarity fusion by employing supervised learning techniques. In this framework, the probabilistic outputs of a multiclass support vector machine (SVM) classifier as category prediction of query and database images are exploited at first to filter out irrelevant images, thereby reducing the search space for similarity matching. Images are classified at a global level according to their modalities based on different low-level, concept, and keypoint-based features. It is difficult to find a unique feature to compare images effectively for all types of queries. Hence, a query-specific adaptive linear combination of similarity matching approach is proposed by relying on the image classification and feedback information from users. Based on the prediction of a query image category, individual precomputed weights of different features are adjusted online. The prediction of the classifier may be inaccurate in some cases and a user might have a different semantic interpretation about retrieved images. Hence, the weights are finally determined by considering both precision and rank order information of each individual feature representation, taking the top retrieved images judged relevant by the users into account. As a result, the system can adapt itself to individual searches to produce query-specific results. Experiments were performed on a diverse collection of 5,000 biomedical images of different modalities, body parts, and orientations. They demonstrate the efficiency (about half the computation time compared to searching the entire collection) and effectiveness (about 10%-15% improvement in precision at each recall level) of the retrieval approach.
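The adaptive linear fusion step can be written compactly. The following is a hedged sketch with illustrative names; the paper's full scheme also folds in rank-order information and the SVM's category prediction, which are omitted here:

```python
def fuse_similarities(sim_lists, weights):
    """Query-specific linear fusion of per-feature similarity scores.
    sim_lists[f][i]: similarity of database image i under feature f.
    weights[f]: current importance of feature f for this query."""
    total = sum(weights)
    w = [x / total for x in weights]  # normalize to unit sum
    return [sum(w[f] * sim_lists[f][i] for f in range(len(sim_lists)))
            for i in range(len(sim_lists[0]))]

def precision_weight(relevant_in_top_k, k):
    """One simple feedback-driven weight: precision@k of a feature's
    own top-k results, as judged by the user."""
    return relevant_in_top_k / k
```

After each feedback round, each feature's weight can be refreshed from its precision@k and the fused scores recomputed, which is what lets the system adapt to an individual search.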


international conference of the ieee engineering in medicine and biology society | 2008

A Spine X-Ray Image Retrieval System Using Partial Shape Matching

Xiaoqian Xu; Dah-Jye Lee; Sameer K. Antani; L.R. Long

In recent years, there has been a rapid increase in the size and number of medical image collections. Thus, the development of appropriate methods for medical information retrieval is especially important. In a large collection of spine X-ray images, maintained by the National Library of Medicine, vertebral boundary shape has been determined to be relevant to pathology of interest. This paper presents an innovative partial shape matching (PSM) technique using dynamic programming (DP) for the retrieval of spine X-ray images. An improved version of this technique, called corner-guided DP, is introduced. It uses nine landmark boundary points for the DP search and improves matching speed by approximately 10 times compared to traditional DP. The retrieval accuracy and processing speed of the retrieval system based on the new corner-guided PSM method are evaluated and included in this paper.
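The dynamic-programming matching idea can be sketched as an alignment over boundary descriptor sequences (e.g. tangent angles sampled along the vertebral contour). This is a generic partial-alignment sketch under our own naming, not the corner-guided algorithm from the paper:

```python
def dp_partial_match(query, target, gap=1.0):
    """Align a query descriptor sequence against any contiguous part of
    a target sequence. Row 0 is all zeros, so skipping a target prefix
    is free; taking the minimum over the last row makes the target
    suffix free too, which is what makes the match 'partial'."""
    m, n = len(query), len(target)
    # cost[i][j]: best cost of matching query[:i] ending at target[:j]
    cost = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        cost[i][0] = i * gap  # query points unmatched before the target
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            match = abs(query[i - 1] - target[j - 1])
            cost[i][j] = min(cost[i - 1][j - 1] + match,  # match pair
                             cost[i - 1][j] + gap,        # skip query pt
                             cost[i][j - 1] + gap)        # skip target pt
    return min(cost[m])  # best alignment ending anywhere in the target
```

A query vertebra outline that appears verbatim inside a longer target boundary scores 0; retrieval then ranks database shapes by this cost. Corner guidance, per the abstract, restricts this search to nine landmark boundary points to gain the reported tenfold speedup.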

Collaboration


Dive into Sameer K. Antani's collaborations.

Top Co-Authors

George R. Thoma, National Institutes of Health
L. Rodney Long, National Institutes of Health
Dina Demner-Fushman, National Institutes of Health
Zhiyun Xue, National Institutes of Health
Md. Mahmudur Rahman, National Institutes of Health
Daekeun You, National Institutes of Health
Stefan Jaeger, National Institutes of Health
Sema Candemir, National Institutes of Health
Dah-Jye Lee, Brigham Young University