
Publications


Featured research published by George R. Thoma.


Pattern Recognition | 1994

Automated page orientation and skew angle detection for binary document images

Daniel S Le; George R. Thoma; Harry Wechsler

We describe the development and implementation of algorithms for detecting the page orientation (portrait/landscape) and the degree of skew for documents available as binary images. A new and fast approach is advanced herein whereby skew angle detection takes advantage of information found using the page orientation algorithm. Page orientation is accomplished using local analysis, while skew angle detection is implemented based on the processing of pixels of last black run-lengths of binary image objects. The experiments carried out on a variety of medical journals show the feasibility of the new approach and indicate that detection accuracy can be improved by minimizing the effects of non-textual data.
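As a rough illustration of the run-length idea (a simplified sketch, not the paper's implementation), the skew angle can be estimated by least-squares fitting a line through hypothetical "last black pixel" coordinates collected from text components:

```python
import math

def estimate_skew(last_black_pixels):
    """Estimate the skew angle in degrees by a least-squares line fit
    through the bottom-most ("last") black pixels of text components.
    `last_black_pixels` is a list of (x, y) image coordinates."""
    n = len(last_black_pixels)
    mx = sum(x for x, _ in last_black_pixels) / n
    my = sum(y for _, y in last_black_pixels) / n
    num = sum((x - mx) * (y - my) for x, y in last_black_pixels)
    den = sum((x - mx) ** 2 for x, _ in last_black_pixels)
    return math.degrees(math.atan(num / den))

# Synthetic points lying on a line tilted by 2 degrees.
pts = [(x, x * math.tan(math.radians(2.0))) for x in range(0, 100, 5)]
print(round(estimate_skew(pts), 1))  # 2.0
```

A production version would first run connected-component analysis on the binary image and discard non-textual components, which is where the paper's accuracy gains come from.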


Computer Methods and Programs in Biomedicine | 2012

Histology image analysis for carcinoma detection and grading

Lei He; L. Rodney Long; Sameer K. Antani; George R. Thoma

This paper presents an overview of the image analysis techniques in the domain of histopathology, specifically for the objective of automated carcinoma detection and classification. As in other biomedical imaging areas such as radiology, many computer-assisted diagnosis (CAD) systems have been implemented to aid histopathologists and clinicians in cancer diagnosis and research; these systems aim to significantly reduce the labor and subjectivity of traditional manual analysis of histology images. The task of automated histology image analysis is usually not simple due to the unique characteristics of histology imaging, including the variability in image preparation techniques, clinical interpretation protocols, and the complex structures and very large size of the images themselves. In this paper we discuss those characteristics, provide relevant background information about slide preparation and interpretation, and review the application of digital image processing techniques to the field of histology image analysis. In particular, emphasis is given to state-of-the-art image segmentation methods for feature extraction and disease classification. Four major carcinomas of cervix, prostate, breast, and lung are selected to illustrate the functions and capabilities of existing CAD systems.
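Many of the segmentation pipelines surveyed begin with a global threshold on stain intensity. As a minimal, generic sketch (not any specific system from the paper), Otsu's classic method picks such a threshold from a gray-level histogram by maximizing between-class variance:

```python
def otsu_threshold(histogram):
    """Otsu's method: choose the gray level that maximizes the
    between-class variance, a common first step for separating
    darkly stained nuclei from brighter background tissue."""
    total = sum(histogram)
    weighted_total = sum(i * h for i, h in enumerate(histogram))
    best_t, best_var = 0, -1.0
    w0 = cum = 0.0
    for t, h in enumerate(histogram):
        w0 += h                      # class-0 pixel count (levels <= t)
        cum += t * h                 # class-0 weighted sum
        w1 = total - w0              # class-1 pixel count
        if w0 == 0 or w1 == 0:
            continue
        m0 = cum / w0                # class means
        m1 = (weighted_total - cum) / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# Bimodal toy histogram: nuclei near level 1, background near level 6.
hist = [5, 20, 5, 0, 0, 4, 18, 6]
print(otsu_threshold(hist))  # 2
```

Real histology CAD systems operate on color-deconvolved stain channels and follow thresholding with morphological cleanup and object-level classification.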


IEEE Transactions on Medical Imaging | 2014

Lung Segmentation in Chest Radiographs Using Anatomical Atlases With Nonrigid Registration

Sema Candemir; Stefan Jaeger; Kannappan Palaniappan; Jonathan P. Musco; Rahul Singh; Zhiyun Xue; Alexandros Karargyris; Sameer K. Antani; George R. Thoma; Clement J. McDonald

The National Library of Medicine (NLM) is developing a digital chest X-ray (CXR) screening system for deployment in resource constrained communities and developing countries worldwide with a focus on early detection of tuberculosis. A critical component in the computer-aided diagnosis of digital CXRs is the automatic detection of the lung regions. In this paper, we present a nonrigid registration-driven robust lung segmentation method using image retrieval-based patient specific adaptive lung models that detects lung boundaries, surpassing state-of-the-art performance. The method consists of three main stages: 1) a content-based image retrieval approach for identifying training images (with masks) most similar to the patient CXR using a partial Radon transform and Bhattacharyya shape similarity measure, 2) creating the initial patient-specific anatomical model of lung shape using SIFT-flow for deformable registration of training masks to the patient CXR, and 3) extracting refined lung boundaries using a graph cuts optimization approach with a customized energy function. Our average accuracy of 95.4% on the public JSRT database is the highest among published results. A similar degree of accuracy of 94.1% and 91.7% on two new CXR datasets from Montgomery County, MD, USA, and India, respectively, demonstrates the robustness of our lung segmentation approach.
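The retrieval stage ranks training CXRs by the Bhattacharyya similarity of their projection profiles to the patient image. A minimal sketch of that measure (with toy 1-D profiles standing in for partial Radon transforms; not the authors' code) is:

```python
import math

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two histograms, normalized
    internally: 1.0 for identical shapes, approaching 0 for disjoint."""
    sp, sq = sum(p), sum(q)
    return sum(math.sqrt((a / sp) * (b / sq)) for a, b in zip(p, q))

# Toy projection profiles standing in for partial Radon transforms.
patient = [4, 9, 15, 9, 4]
train_a = [4, 9, 15, 9, 4]   # profile identical to the patient's
train_b = [15, 9, 4, 2, 1]   # dissimilar profile
print(round(bhattacharyya(patient, train_a), 6))  # 1.0
print(bhattacharyya(patient, train_b) < 1.0)      # True
```

In the full pipeline, the top-ranked training masks seed the SIFT-flow registration that builds the patient-specific lung model.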


IEEE Transactions on Medical Imaging | 2014

Automatic Tuberculosis Screening Using Chest Radiographs

Stefan Jaeger; Alexandros Karargyris; Sema Candemir; Les R. Folio; Jenifer Siegelman; Fiona M. Callaghan; Zhiyun Xue; Kannappan Palaniappan; Rahul K. Singh; Sameer K. Antani; George R. Thoma; Yi-Xiang J. Wang; Pu-Xuan Lu; Clement J. McDonald

Tuberculosis is a major health threat in many regions of the world. Opportunistic infections in immunocompromised HIV/AIDS patients and multi-drug-resistant bacterial strains have exacerbated the problem, while diagnosing tuberculosis still remains a challenge. When left undiagnosed and thus untreated, mortality rates of patients with tuberculosis are high. Standard diagnostics still rely on methods developed in the last century. They are slow and often unreliable. In an effort to reduce the burden of the disease, this paper presents our automated approach for detecting tuberculosis in conventional posteroanterior chest radiographs. We first extract the lung region using a graph cut segmentation method. For this lung region, we compute a set of texture and shape features, which enable the X-rays to be classified as normal or abnormal using a binary classifier. We measure the performance of our system on two datasets: a set collected by the tuberculosis control program of our local county's health department in the United States, and a set collected by Shenzhen Hospital, China. The proposed computer-aided diagnostic system for TB screening, which is ready for field deployment, achieves a performance that approaches the performance of human experts. We achieve an area under the ROC curve (AUC) of 87% (78.3% accuracy) for the first set, and an AUC of 90% (84% accuracy) for the second set. For the first set, we compare our system performance with the performance of radiologists. When trying not to miss any positive cases, radiologists achieve an accuracy of about 82% on this set, and their false positive rate is about half of our system's rate.
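The AUC figures quoted above can be computed directly from classifier scores without tracing the ROC curve, via the Mann-Whitney formulation. A small self-contained sketch (with made-up scores, purely illustrative):

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve as the Mann-Whitney U statistic:
    the probability that a randomly chosen positive (abnormal) case
    scores higher than a randomly chosen negative one; ties count half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical classifier scores for abnormal (TB) vs. normal radiographs.
abnormal = [0.9, 0.8, 0.6, 0.4]
normal = [0.7, 0.3, 0.2, 0.1]
print(auc(abnormal, normal))  # 0.875
```

The quadratic pairwise loop is fine for illustration; rank-based implementations reduce this to O(n log n) for large test sets.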


International Journal of Medical Informatics | 2009

SPIRS: A Web-based Image Retrieval System for Large Biomedical Databases

William Hsu; Sameer K. Antani; L. Rodney Long; Leif Neve; George R. Thoma

PURPOSE With the increasing use of images in disease research, education, and clinical medicine, the need for methods that effectively archive, query, and retrieve these images by their content is underscored. This paper describes the implementation of a Web-based retrieval system called SPIRS (Spine Pathology & Image Retrieval System), which permits exploration of a large biomedical database of digitized spine X-ray images and data from a national health survey using a combination of visual and textual queries.

METHODS SPIRS is a generalizable framework that consists of four components: a client applet, a gateway, an indexing and retrieval system, and a database of images and associated text data. The prototype system is demonstrated using text and imaging data collected as part of the second U.S. National Health and Nutrition Examination Survey (NHANES II). Users search the image data by providing a sketch of the vertebral outline or selecting an example vertebral image and some relevant text parameters. Pertinent pathology on the image/sketch can be annotated and weighted to indicate importance.

RESULTS During the course of development, we explored different algorithms to perform functions such as segmentation, indexing, and retrieval. Each algorithm was tested individually and then implemented as part of SPIRS. To evaluate the overall system, we first tested the system's ability to return similar vertebral shapes from the database given a query shape. Initial evaluations using visual queries only (no text) have shown that the system achieves up to 68% accuracy in finding images in the database that exhibit similar abnormality type and severity. Relevance feedback mechanisms have been shown to increase accuracy by an additional 22% after three iterations. While we primarily demonstrate this system in the context of retrieving vertebral shape, our framework has also been adapted to search a collection of 100,000 uterine cervix images to study the progression of cervical cancer.

CONCLUSIONS SPIRS is automated, easily accessible, and integratable with other complementary information retrieval systems. The system supports the ability for users to intuitively query large amounts of imaging data by providing visual examples and text keywords and has beneficial implications in the areas of research, education, and patient care.
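The core shape-retrieval step (ranking stored vertebral outlines against a query sketch) can be sketched as a nearest-neighbor search over point-wise boundary distance. This is an illustrative simplification with hypothetical data, not the SPIRS indexing code:

```python
def shape_distance(a, b):
    """Mean Euclidean distance between corresponding boundary points of
    two vertebral outlines, assumed already aligned and resampled to the
    same number of points."""
    assert len(a) == len(b)
    return sum(((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5
               for (xa, ya), (xb, yb) in zip(a, b)) / len(a)

def retrieve(query, database, k=2):
    """Return the names of the k database outlines closest to the query."""
    ranked = sorted(database, key=lambda entry: shape_distance(query, entry[1]))
    return [name for name, _ in ranked[:k]]

# A square query sketch against two toy database outlines.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
db = [("near", [(0, 0.1), (1, 0), (1, 1), (0, 1)]),
      ("far", [(0, 2), (1, 2), (2, 3), (0, 3)])]
print(retrieve(square, db, k=1))  # ['near']
```

The real system additionally weights annotated pathology regions of the outline and folds in relevance feedback across iterations.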


International Conference of the IEEE Engineering in Medicine and Biology Society | 2011

A Learning-Based Similarity Fusion and Filtering Approach for Biomedical Image Retrieval Using SVM Classification and Relevance Feedback

Md. Mahmudur Rahman; Sameer K. Antani; George R. Thoma

This paper presents a classification-driven biomedical image retrieval framework based on image filtering and similarity fusion by employing supervised learning techniques. In this framework, the probabilistic outputs of a multiclass support vector machine (SVM) classifier as category prediction of query and database images are exploited at first to filter out irrelevant images, thereby reducing the search space for similarity matching. Images are classified at a global level according to their modalities based on different low-level, concept, and keypoint-based features. It is difficult to find a unique feature to compare images effectively for all types of queries. Hence, a query-specific adaptive linear combination of similarity matching approach is proposed by relying on the image classification and feedback information from users. Based on the prediction of a query image category, individual precomputed weights of different features are adjusted online. The prediction of the classifier may be inaccurate in some cases and a user might have a different semantic interpretation about retrieved images. Hence, the weights are finally determined by considering both precision and rank order information of each individual feature representation, treating top retrieved images judged relevant by the users as ground truth. As a result, the system can adapt itself to individual searches to produce query-specific results. Experiments are performed on a diverse collection of 5,000 biomedical images of different modalities, body parts, and orientations. They demonstrate the efficiency (about half the computation time compared to search on the entire collection) and effectiveness (about 10%-15% improvement in precision at each recall level) of the retrieval approach.
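The adaptive linear-fusion step reduces to a weighted average of per-feature similarity scores, with the weight vector selected from the predicted query category. A minimal sketch with invented feature names and weights (purely illustrative, not the paper's learned values):

```python
def fuse(similarities, weights):
    """Adaptive linear fusion: combine per-feature similarity scores
    with category-specific weights. Both arguments map a feature name
    to a value; weights are normalized so the result stays in [0, 1]."""
    total = sum(weights.values())
    return sum(similarities[f] * w for f, w in weights.items()) / total

# Hypothetical per-feature similarities between a query and one image.
sims = {"color": 0.9, "texture": 0.5, "keypoint": 0.7}

# Hypothetical weight profile selected online for an X-ray query,
# where texture matters more than color.
w_xray = {"color": 0.1, "texture": 0.6, "keypoint": 0.3}
print(round(fuse(sims, w_xray), 2))  # 0.6
```

In the full framework the weights are then refined from the precision and rank order of each feature on the user-marked relevant results, so repeated feedback rounds personalize the ranking.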


Journal of Computing Science and Engineering | 2012

Design and Development of a Multimodal Biomedical Information Retrieval System

Dina Demner-Fushman; Sameer K. Antani; Matthew S. Simpson; George R. Thoma

The search for relevant and actionable information is a key to achieving clinical and research goals in biomedicine. Biomedical information exists in different forms: as text and illustrations in journal articles and other documents, in images stored in databases, and as patients’ cases in electronic health records. This paper presents ways to move beyond conventional text-based searching of these resources, by combining text and visual features in search queries and document representation. A combination of techniques and tools from the fields of natural language processing, information retrieval, and content-based image retrieval allows the development of building blocks for advanced information services. Such services enable searching by textual as well as visual queries, and retrieving documents enriched by relevant images, charts, and other illustrations from the journal literature, patient records and image databases.


Document Recognition and Retrieval | 2000

Automated labeling in document images

Jongwoo Kim; Daniel X. Le; George R. Thoma

The National Library of Medicine (NLM) is developing an automated system to produce bibliographic records for its MEDLINE® database. This system, named Medical Article Record System (MARS), employs document image analysis and understanding techniques and optical character recognition (OCR). This paper describes a key module in MARS called the Automated Labeling (AL) module, which labels all zones of interest (title, author, affiliation, and abstract) automatically. The AL algorithm is based on 120 rules that are derived from an analysis of journal page layouts and features extracted from OCR output. Experiments carried out on more than 11,000 articles in over 1,000 biomedical journals show the accuracy of this rule-based algorithm to exceed 96%.
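The flavor of such layout rules can be conveyed with a toy version: each zone carries features extracted from OCR output (position, font size, word count), and hand-written conditions assign a label. The thresholds and feature names below are invented for illustration; the actual AL module uses roughly 120 rules derived from real journal layouts:

```python
def label_zone(zone):
    """Assign a bibliographic label to a page zone using a few
    illustrative hand-crafted layout rules (hypothetical thresholds)."""
    if zone["y"] < 200 and zone["font_size"] >= 14:
        return "title"        # large type near the top of the page
    if 200 <= zone["y"] < 300 and zone["has_person_names"]:
        return "author"       # person names just below the title
    if zone["word_count"] > 100:
        return "abstract"     # the longest text block on the first page
    return "affiliation"      # fallback for remaining short zones

zones = [
    {"y": 50, "font_size": 18, "has_person_names": False, "word_count": 12},
    {"y": 230, "font_size": 10, "has_person_names": True, "word_count": 6},
    {"y": 400, "font_size": 9, "has_person_names": False, "word_count": 180},
]
print([label_zone(z) for z in zones])  # ['title', 'author', 'abstract']
```

A rule base of this style scales by ordering rules from most to least specific and falling through to a default, which is what makes per-journal layout variation tractable.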


Computer-Based Medical Systems | 2000

Use of shape models to search digitized spine X-rays

L.R. Long; George R. Thoma

We are building a biomedical information resource consisting of digitized X-ray images and associated textual data from national health surveys. This resource, the Web-based Medical Information Retrieval System, or WebMIRS, is currently in beta test. In a future WebMIRS system, we plan to have not only text and raw image data, but quantitative anatomical feature information derived from the images and capability to retrieve images based on image characteristics, either alone or in conjunction with text descriptions associated with the images. Our archive consists of data collected in the second and third National Health and Nutrition Examination Surveys (NHANES), conducted by the National Center for Health Statistics. For the NHANES II survey, the records contain information for approximately 20,000 participants. Each record contains about two thousand data points, including demographic information, answers to health questionnaires, anthropometric information, and the results of a physicians examination. In addition, approximately 10,000 cervical spine and 7,000 lumbar spine X-rays were collected. WebMIRS makes the text and images retrievable. Only raw images are returned; no quantitative or descriptive information about the images is stored in the database. We are conducting research into the problem of automatically or semi-automatically segmenting spine vertebrae in these images and determining vertebral boundaries with enough accuracy to be useful in classifying the vertebrae into categories of interest to researchers in osteoarthritis.


Storage and Retrieval for Image and Video Databases | 1997

WebMIRS: Web-based medical information retrieval system

L. Rodney Long; Stanley R. Pillemer; Reva C. Lawrence; Gin-Hua Goh; Leif Neve; George R. Thoma

At the Lister Hill National Center for Biomedical Communications, a research and development division of the National Library of Medicine (NLM), we are developing a prototype multimedia database system to provide World Wide Web access to biomedical databases. WebMIRS (Web-based Medical Information Retrieval System) will allow access to databases containing text and images and will allow database query by standard SQL, by image content, or by a combination of the two. The system is being developed in the form of Java applets, which will communicate with the Informix DBMS on an NLM Sun workstation running the Solaris operating system. The system architecture will allow access from any hardware platform that supports a Java-enabled Web browser, such as Netscape or Internet Explorer. Initial databases will include data from two national health surveys conducted by the National Center for Health Statistics (NCHS), and will include x-ray images from those surveys. In addition to describing in-house research in database access systems, this paper describes ongoing work toward querying by image content. Image content search capability will include capability to search for x-ray images similar to an input image with respect to vertebral morphometry used to characterize features such as fractures and disc space narrowing.
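Combining a standard SQL query over the survey text with an image-content criterion can be sketched as a filter-then-rank step. The sketch below uses SQLite and an invented morphometric score as a stand-in (the real system used Informix and derived vertebral measurements):

```python
import sqlite3

# Toy stand-in for the survey database.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE survey (id INTEGER, age INTEGER, disc_narrowing REAL)")
con.executemany("INSERT INTO survey VALUES (?, ?, ?)",
                [(1, 45, 0.2), (2, 67, 0.8), (3, 71, 0.7)])

# Stage 1: a standard SQL query over the text/survey data.
rows = con.execute(
    "SELECT id, disc_narrowing FROM survey WHERE age > 60").fetchall()

# Stage 2: rank the SQL hits by closeness of a hypothetical image-derived
# morphometric feature (a disc-space-narrowing score) to the query image's.
query_score = 0.78
ranked = sorted(rows, key=lambda r: abs(r[1] - query_score))
print([r[0] for r in ranked])  # [2, 3]
```

Running the cheap SQL filter first keeps the expensive image-similarity comparison restricted to a small candidate set, which is the usual design for combined text/content queries.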

Collaboration


George R. Thoma's top co-authors.

Top Co-Authors

Sameer K. Antani (National Institutes of Health)
L. Rodney Long (National Institutes of Health)
Dina Demner-Fushman (National Institutes of Health)
Zhiyun Xue (National Institutes of Health)
Daniel X. Le (National Institutes of Health)
Susan E. Hauser (National Institutes of Health)
Daekeun You (National Institutes of Health)
Stefan Jaeger (National Institutes of Health)
Sema Candemir (National Institutes of Health)