Publications

Featured research published by Khan M. Siddiqui.


Medical Image Analysis | 2013

Regression forests for efficient anatomy detection and localization in computed tomography scans

Antonio Criminisi; Ender Konukoglu; Jamie Shotton; Sayan D. Pathak; Steve White; Khan M. Siddiqui

This paper proposes a new algorithm for the efficient, automatic detection and localization of multiple anatomical structures within three-dimensional computed tomography (CT) scans. Applications include selective retrieval of patients' images from PACS systems, semantic visual navigation, and tracking radiation dose over time. The main contribution of this work is a new, continuous parametrization of the anatomy localization problem, which allows it to be addressed effectively by multi-class random regression forests. Regression forests are similar to the more popular classification forests, but trained to predict continuous, multi-variate outputs, where the training focuses on maximizing the confidence of output predictions. A single pass of our probabilistic algorithm enables the direct mapping from voxels to organ location and size. Quantitative validation is performed on a database of 400 highly variable CT scans. We show that the proposed method is more accurate and robust than techniques based on efficient multi-atlas registration and template-based nearest-neighbor detection. Due to the simplicity of the regressor's context-rich visual features and the algorithm's parallelism, these results are achieved in typical run-times of only ∼4 s on a conventional single-core machine.
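The central mechanism of the paper (voxels mapped directly to continuous, multi-variate organ bounding-box parameters by a regression forest) can be sketched with scikit-learn's generic `RandomForestRegressor` on synthetic data. This is an illustration of the idea only, not the authors' implementation; the features, targets, and forest size below are all made up.

```python
# Minimal sketch of multi-variate random regression for organ
# bounding-box localization: each "voxel" contributes context features,
# and the forest predicts a 6-vector bounding box (min/max per axis).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in: 500 voxels with 10 context features each, and a
# 6-D continuous target (x/y/z min and max of an organ bounding box).
X = rng.normal(size=(500, 10))
true_box = np.array([10.0, 20.0, 30.0, 50.0, 60.0, 70.0])
y = true_box + X[:, :6] * 2.0 + rng.normal(scale=0.5, size=(500, 6))

forest = RandomForestRegressor(n_estimators=20, random_state=0).fit(X, y)

# A single pass maps new voxels directly to box predictions; averaging
# the per-voxel votes gives the final localization estimate.
votes = forest.predict(rng.normal(size=(100, 10)))
box_estimate = votes.mean(axis=0)
print(box_estimate.shape)  # (6,)
```

The key property the abstract highlights is that the same forest predicts all box coordinates jointly in one pass, rather than running a separate detector per organ or per coordinate.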


American Journal of Roentgenology | 2011

The relative effect of vendor variability in CT perfusion results: a method comparison study.

Benjamin Zussman; Garen Boghosian; Richard Gorniak; Mark E. Olszewski; Katrina Read; Khan M. Siddiqui; Adam E. Flanders

OBJECTIVE. There are known interoperator, intraoperator, and intervendor software differences that can influence the reproducibility of quantitative CT perfusion values. The purpose of this study was to determine the relative impact of operator and software differences on CT perfusion variability.

MATERIALS AND METHODS. CT perfusion imaging data were selected for 11 patients evaluated for suspected ischemic stroke. Three radiologists each independently postprocessed the source data twice, using four different vendor software applications. Results for cerebral blood volume (CBV), cerebral blood flow (CBF), and mean transit time (MTT) were recorded for the lentiform nuclei in both hemispheres. Repeated-measures multivariate analysis of variance was used to assess differences in the means of CBV, CBF, and MTT. Bland-Altman analysis was used to assess agreement between pairs of vendors, readers, and read times.

RESULTS. Choice of vendor software, but not interoperator or intraoperator disagreement, was associated with significant variability (p < 0.001) in CBV, CBF, and MTT. The mean difference in CT perfusion values was greater for pairs of vendors than for pairs of operators.

CONCLUSION. Different vendor software applications do not generate quantitative perfusion results equivalently. Intervendor difference is, by far, the largest cause of variability in perfusion results relative to interoperator and intraoperator difference. Caution should be exercised when interpreting quantitative CT perfusion results because these values may vary considerably depending on the postprocessing software.
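The Bland-Altman analysis used in this study quantifies agreement between paired measurements as a mean difference (bias) plus 95% limits of agreement. A minimal sketch of that computation, using invented CBF numbers rather than the study's data:

```python
# Bland-Altman agreement between two hypothetical vendors' CBF outputs
# for the same cases (illustrative values, not the study's data).
import numpy as np

vendor_a = np.array([42.1, 38.5, 55.0, 47.2, 60.3, 44.8])  # CBF, mL/100 g/min
vendor_b = np.array([48.9, 41.0, 61.2, 52.5, 66.0, 50.1])

diff = vendor_a - vendor_b
mean_diff = diff.mean()                       # systematic bias between vendors
sd = diff.std(ddof=1)
loa = (mean_diff - 1.96 * sd,                 # 95% limits of agreement
       mean_diff + 1.96 * sd)
print(round(mean_diff, 2), [round(x, 2) for x in loa])
```

A large |mean_diff| between vendor pairs relative to operator pairs is exactly the pattern the study reports: intervendor disagreement dominates the variability.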


Journal of Digital Imaging | 2011

Ontology-Assisted Analysis of Web Queries to Determine the Knowledge Radiologists Seek

Daniel L. Rubin; Adam E. Flanders; Woojin Kim; Khan M. Siddiqui; Charles E. Kahn

Radiologists frequently search the Web to find information they need to improve their practice, and knowing the types of information they seek could be useful for evaluating Web resources. Our goal was to develop an automated method to categorize unstructured user queries using a controlled terminology and to infer the type of information users seek. We obtained the query logs from two commonly used Web resources for radiology. We created a computer algorithm to associate RadLex-controlled vocabulary terms with the user queries. Using the RadLex hierarchy, we determined the high-level category associated with each RadLex term to infer the type of information users were seeking. To test the hypothesis that the term category assignments to user queries are non-random, we compared the distributions of the term categories in RadLex with those in user queries using the chi-square test. Of the 29,669 unique search terms found in user queries, 15,445 (52%) could be mapped to one or more RadLex terms by our algorithm. Each query contained an average of one to two RadLex terms, and the dominant categories of RadLex terms in user queries were diseases and anatomy. While the same types of RadLex terms were predominant in both RadLex itself and user queries, the distribution of types of terms in user queries and RadLex were significantly different (p < 0.0001). We conclude that RadLex can enable processing and categorization of user queries of Web resources and enable understanding the types of information users seek from radiology knowledge resources on the Web.
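The pipeline described here (map free-text query terms to a controlled vocabulary, roll each term up to a high-level category, then chi-square-test the category distribution against the vocabulary's own) can be sketched with a toy stand-in vocabulary. The terms and categories below are invented, not RadLex itself:

```python
# Toy sketch of ontology-assisted query categorization plus the
# chi-square comparison described above (toy vocabulary, not RadLex).
from collections import Counter
from scipy.stats import chi2_contingency

term_to_category = {               # stand-in for the RadLex hierarchy roll-up
    "pneumonia": "disease", "fracture": "disease",
    "femur": "anatomy", "liver": "anatomy", "mri": "modality",
}
queries = ["pneumonia", "liver", "femur", "fracture", "liver", "ct"]

# Map queries to controlled terms; unmapped terms ("ct" here) are dropped.
mapped = [term_to_category[q] for q in queries if q in term_to_category]
query_counts = Counter(mapped)                      # categories users seek
vocab_counts = Counter(term_to_category.values())   # categories in the vocabulary

cats = sorted(set(query_counts) | set(vocab_counts))
table = [[query_counts.get(c, 0) for c in cats],
         [vocab_counts.get(c, 0) for c in cats]]
chi2, p, dof, _ = chi2_contingency(table)
print(f"mapped {len(mapped)}/{len(queries)} queries, p = {p:.3f}")
```

With real data the same contingency-table test is what supports the paper's conclusion that users' category distribution differs significantly from RadLex's own.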


Proceedings of SPIE | 2011

Robust linear registration of CT images using random regression forests

Ender Konukoglu; Antonio Criminisi; Sayan D. Pathak; Steve White; David R. Haynor; Khan M. Siddiqui

Global linear registration is a necessary first step for many different tasks in medical image analysis. Comparing longitudinal studies [1], cross-modality fusion [2], and many other applications depend heavily on the success of the automatic registration. The robustness and efficiency of this step is crucial as it affects all subsequent operations. Most common techniques cast the linear registration problem as the minimization of a global energy function based on the image intensities. Although these algorithms have proved useful, their robustness in fully automated scenarios is still an open question. In fact, the optimization step often gets caught in local minima yielding unsatisfactory results. Recent algorithms constrain the space of registration parameters by exploiting implicit or explicit organ segmentations, thus increasing robustness [4,5]. In this work we propose a novel robust algorithm for automatic global linear image registration. Our method uses random regression forests to estimate posterior probability distributions for the locations of anatomical structures - represented as axis-aligned bounding boxes [6]. These posterior distributions are later integrated in a global linear registration algorithm. The biggest advantage of our algorithm is that it does not require pre-defined segmentations or regions. Yet it yields robust registration results. We compare the robustness of our algorithm with that of the state-of-the-art Elastix toolbox [7]. Validation is performed via 1464 pair-wise registrations in a database of very diverse 3D CT images. We show that our method decreases the failure rate of the global linear registration from 12.5% (Elastix) to only 1.9%.
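One way such anatomy predictions can drive a global linear registration: once corresponding structure locations (e.g. bounding-box centers) are known in both scans, a linear transform can be estimated from the point correspondences by least squares. This is an assumed, simplified formulation for illustration, not the paper's exact algorithm (which integrates full posterior distributions, not point estimates):

```python
# Sketch: recover a global affine transform from predicted anatomical
# landmark correspondences via least squares (synthetic data).
import numpy as np

rng = np.random.default_rng(1)
src = rng.uniform(0, 100, size=(6, 3))      # predicted box centers, scan A
A_true = np.array([[1.02, 0.01, 0.00],
                   [0.00, 0.98, 0.03],
                   [0.01, 0.00, 1.01]])
t_true = np.array([5.0, -3.0, 2.0])
dst = src @ A_true.T + t_true               # same centers located in scan B

# Solve dst ≈ [src, 1] @ M for the 4x3 affine parameters M in closed form.
src_h = np.hstack([src, np.ones((6, 1))])
M, *_ = np.linalg.lstsq(src_h, dst, rcond=None)
residual = np.abs(src_h @ M - dst).max()
print(residual < 1e-6)  # exact correspondences recover the transform
```

Because the anatomy detections constrain the transform directly, this kind of initialization avoids the local minima that pure intensity-based energy minimization can fall into.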


Proceedings of SPIE | 2011

Validating Automatic Semantic Annotation of Anatomy in DICOM CT Images

Sayan D. Pathak; Antonio Criminisi; Jamie Shotton; Steve White; Bobbi Sparks; Indeera Munasinghe; Khan M. Siddiqui

In the current health-care environment, the time available for physicians to browse patients' scans is shrinking due to the rapid increase in the sheer number of images. This is further aggravated by mounting pressure to become more productive in the face of decreasing reimbursement. Hence, there is an urgent need to deliver technology which enables faster and effortless navigation through sub-volume image visualizations. Annotating image regions with semantic labels such as those derived from the RadLex ontology can vastly enhance image navigation and sub-volume visualization. This paper uses random regression forests for efficient, automatic detection and localization of anatomical structures within DICOM 3D CT scans. A regression forest is a collection of decision trees which are trained to achieve direct mapping from voxels to organ location and size in a single pass. This paper focuses on comparing automated labeling with expert-annotated ground-truth results on a database of 50 highly variable CT scans. Initial investigations show that regression-forest-derived localization errors are smaller and more robust than those achieved by state-of-the-art global registration approaches. The simplicity of the algorithm's context-rich visual features yields typical runtimes of less than 10 seconds for a 512³-voxel DICOM CT series on a single-threaded, single-core machine running multiple trees, each tree taking less than a second. Furthermore, qualitative evaluation demonstrates that using the detected organs' locations as an index into the image volume improves the efficiency of the navigational workflow in all the CT studies.


Proceedings of SPIE | 2012

Linking DICOM pixel data with radiology reports using automatic semantic annotation

Sayan D. Pathak; Woojin Kim; Indeera Munasinghe; Antonio Criminisi; Steve White; Khan M. Siddiqui

Improved access to DICOM studies for both physicians and patients is changing the ways medical imaging studies are visualized and interpreted beyond the confines of radiologists' PACS workstations. While radiologists are trained in viewing and image interpretation, a non-radiologist physician relies on the radiologist's report. Consequently, patients historically have typically been informed about their imaging findings via oral communication with their physicians, even though clinical studies have shown that patients respond to physicians' advice significantly better when they are shown their own actual data. Our previous work on automated semantic annotation of DICOM Computed Tomography (CT) images allows us to further link the radiology report with the corresponding images, enabling us to bridge the gap between the image data and the human-interpreted textual description of the corresponding imaging studies. The mapping of radiology text is facilitated by a natural language processing (NLP)-based search application. When combined with our automated semantic annotation of images, it enables navigation in large DICOM studies by clicking hyperlinked text in the radiology reports. An added advantage of using semantic annotation is the ability to render the organs at their default window/level settings, thus eliminating another barrier to image sharing and distribution. We believe such approaches would potentially enable consumers to access their imaging data and navigate them in an informed manner.
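The report-to-image linking step can be sketched as text substitution over the report: wrap each organ mention that has a semantic annotation with a hyperlink to its sub-volume and window/level preset. Everything here is hypothetical for illustration: the `annotations` data model, the `volume://` link scheme, and the example report are all invented, not the described system's format.

```python
# Hypothetical sketch: hyperlink annotated organ mentions in a report
# to their localized sub-volume and default window/level preset.
import re

# Assumed output of the semantic-annotation step (invented structure).
annotations = {
    "liver": {"slices": (120, 180), "window": 150, "level": 30},
    "spleen": {"slices": (140, 170), "window": 150, "level": 30},
}
report = "The liver is mildly enlarged. The spleen is unremarkable."

def hyperlink(report_text, annotations):
    # Wrap each annotated organ mention with a link to its sub-volume.
    def repl(match):
        a = annotations[match.group(0).lower()]
        lo, hi = a["slices"]
        return f"[{match.group(0)}](volume://slices/{lo}-{hi}?wl={a['window']}/{a['level']})"
    pattern = re.compile("|".join(annotations), re.IGNORECASE)
    return pattern.sub(repl, report_text)

print(hyperlink(report, annotations))
```

Clicking such a link in a viewer would jump to the organ's slice range with the organ-appropriate window/level already applied, which is the navigation benefit the abstract describes.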


Proceedings of SPIE | 2010

An automatic system to detect and extract texts in medical images for de-identification

Yingxuan Zhu; Prabhdeep Singh; Khan M. Siddiqui; Michael Gillam

Recently, there has been an increasing need to share medical images for research purposes. In order to respect and preserve patient privacy, most medical images are de-identified by removing protected health information (PHI) before being shared for research. Since manual de-identification is time-consuming and tedious, an automatic de-identification system is necessary and helpful for doctors to remove text from medical images. Many papers have been written on algorithms for text detection and extraction; however, little of this work has been applied to the de-identification of medical images. Since the de-identification system is designed for end-users, it should be effective, accurate, and fast. This paper proposes an automatic system to detect and extract text from medical images for de-identification purposes, while keeping the anatomic structures intact. First, because text has a marked contrast with the background, a region-variance-based algorithm is used to detect the text regions. In post-processing, geometric constraints are applied to the detected text regions to eliminate over-segmentation, e.g., lines and anatomic structures. After that, a region-based level set method is used to extract text from the detected text regions. A GUI for the prototype text detection and extraction application is implemented, which shows that our method can detect most of the text in the images. Experimental results validate that our method can detect and extract text in medical images with a 99% recall rate. Future work on this system includes algorithm improvement, performance evaluation, and computation optimization.
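The detection stage (high local intensity variance flags candidate text blocks; a geometric constraint then discards line-like over-segmentation) can be sketched on a toy image. The block size, variance threshold, and row-span rule below are assumptions for illustration; the paper's actual parameters are not given here:

```python
# Toy sketch of variance-based text-region detection with a geometric
# post-processing constraint (all thresholds assumed, not the paper's).
import numpy as np

img = np.zeros((64, 64))
# "Burned-in text": a block of high-contrast vertical stripes.
img[5:13, 5:40] = np.tile([0.0, 1.0], (8, 18))[:, :35]
# A thin bright line, standing in for an anatomic edge.
img[40, :] = 1.0

block = 8
candidates = []
for r in range(0, 64, block):
    for c in range(0, 64, block):
        patch = img[r:r + block, c:c + block]
        if patch.var() > 0.1:                 # variance threshold (assumed)
            candidates.append((r, c, patch))

# Geometric constraint: keep only blocks whose "ink" spans >= 3 rows,
# which rejects thin line-like detections.
text_blocks = [(r, c) for r, c, p in candidates
               if (p.max(axis=1) > 0).sum() >= 3]
print(len(candidates), len(text_blocks))
```

On this toy image the thin line does exceed the variance threshold but is discarded by the row-span constraint, while the striped "text" blocks survive, which mirrors the over-segmentation filtering the abstract describes (the subsequent level-set extraction step is not sketched here).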


Proceedings of SPIE | 2012

Comparative analysis of semantic localization accuracies between adult and pediatric DICOM CT images.

Sayan D. Pathak; Antonio Criminisi; Steve White; David R. Haynor; Oliver Chen; Khan M. Siddiqui

Existing literature describes a variety of techniques for semantic annotation of DICOM CT images, i.e. the automatic detection and localization of anatomical structures. Semantic annotation facilitates enhanced image navigation, linkage of DICOM image content and non-image clinical data, content-based image retrieval, and image registration. A key challenge for semantic annotation algorithms is inter-patient variability. However, while the algorithms described in published literature have been shown to cope adequately with the variability in test sets comprising adult CT scans, the problem presented by the even greater variability in pediatric anatomy has received very little attention. Most existing semantic annotation algorithms can only be extended to work on scans of both adult and pediatric patients by adapting parameters heuristically in light of patient size. In contrast, our approach, which uses random regression forests (RRF), learns an implicit model of scale variation automatically using training data. In consequence, anatomical structures can be localized accurately in both adult and pediatric CT studies without the need for parameter adaptation or additional information about patient scale. We show how the RRF algorithm is able to learn scale invariance from a combined training set containing a mixture of pediatric and adult scans. Resulting localization accuracy for both adult and pediatric data remains comparable with that obtained using RRFs trained and tested using only adult data.


Archive | 2010

Medical Image Rendering

Toby Sharp; Antonio Criminisi; Khan M. Siddiqui


Archive | 2010

Automated Image Data Processing and Visualization

Sayan D. Pathak; Antonio Criminisi; Steven James White; Liqun Fu; Khan M. Siddiqui; Toby Sharp; Ender Konukoglu; Bryan Dove; Michael Gillam

Collaboration

Khan M. Siddiqui's top co-authors.

Adam E. Flanders

Thomas Jefferson University Hospital
