Pattanasak Mongkolwat
Northwestern University
Publication
Featured research published by Pattanasak Mongkolwat.
Journal of Digital Imaging | 2010
David S. Channin; Pattanasak Mongkolwat; Vladimir Kleper; Kaustubh Supekar; Daniel L. Rubin
Image annotation and markup are at the core of medical interpretation in both the clinical and the research setting. Digital medical images are managed with the DICOM standard format. While DICOM contains a large amount of meta-data about whom, where, and how the image was acquired, DICOM says little about the content or meaning of the pixel data. An image annotation is the explanatory or descriptive information about the pixel data of an image that is generated by a human or machine observer. An image markup is the graphical symbols placed over the image to depict an annotation. While DICOM is the standard for medical image acquisition, manipulation, transmission, storage, and display, there are no standards for image annotation and markup. Many systems expect annotation to be reported verbally, while markups are stored in graphical overlays or proprietary formats, which makes both difficult to extract and compute with. The goal of the Annotation and Image Markup (AIM) project is to develop a mechanism for modeling, capturing, and serializing image annotation and markup data that can be adopted as a standard by the medical imaging community. The AIM project produces both human- and machine-readable artifacts. This paper describes the AIM information model, schemas, software libraries, and tools so as to prepare researchers and developers for their use of AIM.
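At its simplest, the AIM idea is to pair a coded or textual observation with the graphical markup that localizes it on a specific image. The sketch below, in Python with entirely hypothetical class and field names (the real model, its terminology bindings, and its cardinalities are defined by the published AIM schemas), illustrates that pairing.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Markup:
    """Graphical symbol drawn over the image (simplified, hypothetical)."""
    shape: str                          # e.g. "circle" or "polyline"
    points: List[Tuple[float, float]]   # pixel coordinates on the referenced image

@dataclass
class Annotation:
    """Descriptive statement about the pixel data (simplified, hypothetical)."""
    image_uid: str                      # identifier of the annotated image
    observation: str                    # free-text or coded finding
    markups: List[Markup] = field(default_factory=list)

# Example: one observation localized by a circular markup
ann = Annotation(
    image_uid="example.sop.instance.uid",
    observation="Mass, right upper lobe",
    markups=[Markup(shape="circle", points=[(120.0, 88.0), (152.0, 88.0)])],
)
print(ann.observation, len(ann.markups))
```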
Radiology | 2009
David S. Channin; Pattanasak Mongkolwat; Vladimir Kleper; Daniel L. Rubin
The Annotation and Image Markup project defines a standardized, semantically interoperable information model, together with storage and communication formats, for image annotation and markup.
IEEE Intelligent Systems | 2009
Daniel L. Rubin; Pattanasak Mongkolwat; Vladimir Kleper; Kaustubh Supekar; David S. Channin
The Annotation and Image Markup project makes large distributed collections of medical images in cyberspace and hospital information systems accessible using an information model of image content and ontologies. Interest in applying semantic Web technologies to the life sciences continues to accelerate. Biomedical research is increasingly an online activity as scientists combine and explore different types of data in cyberspace, putting together complementary views on problems that lead to new insights and discoveries. An e-Science paradigm is thus emerging; the biomedical community is looking for tools to help access, query, and analyze the myriad data in cyberspace. Specifically, the biomedical community is beginning to embrace technologies such as ontologies to integrate scientific knowledge, standard syntaxes and semantics to make biomedical knowledge explicit, and the semantic Web to establish virtual collaborations.
Journal of Digital Imaging | 2014
Pattanasak Mongkolwat; Vladimir Kleper; Skip Talbot; Daniel L. Rubin
Knowledge contained within in vivo imaging annotated by human experts or computer programs is typically stored as unstructured text and separated from other associated information. The National Cancer Informatics Program (NCIP) Annotation and Image Markup (AIM) Foundation information model is an evolution of the National Institutes of Health's (NIH) National Cancer Institute's (NCI) cancer Biomedical Informatics Grid (caBIG®) AIM model. The model applies to various image types created by various techniques and disciplines. It has evolved in response to feedback and changing demands from the imaging community at NCI. The foundation model serves as a base for other imaging disciplines that want to extend the type of information the model collects. The model captures physical entities and their characteristics, imaging observation entities and their characteristics, markups (two- and three-dimensional), AIM statements, calculations, image source, inferences, annotation role, task context or workflow, audit trail, AIM creator details, equipment used to create AIM instances, subject demographics, and adjudication observations. An AIM instance can be stored as a Digital Imaging and Communications in Medicine (DICOM) structured reporting (SR) object or an Extensible Markup Language (XML) document for further processing and analysis. An AIM instance consists of one or more annotations and associated markups of a single finding, along with other ancillary information in the AIM model. An annotation describes information about the meaning of pixel data in an image. A markup is a graphical drawing placed on the image that depicts a region of interest. This paper describes fundamental AIM concepts and how to use and extend AIM for various imaging disciplines.
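Because an AIM instance can be persisted either as a DICOM SR object or as an XML document, a small serialization example helps show what the XML route involves. The element names below are purely illustrative (the real AIM Foundation model defines its own schema); the sketch just turns one finding and its markup into an XML string with Python's standard library.

```python
import xml.etree.ElementTree as ET

def annotation_to_xml(image_uid, finding, markup_points):
    """Serialize one finding and its markup to XML (illustrative element names only)."""
    root = ET.Element("ImageAnnotation")
    ET.SubElement(root, "ImageReference", sopInstanceUid=image_uid)
    ET.SubElement(root, "ImagingObservation").text = finding
    markup = ET.SubElement(root, "Markup", shape="polyline")
    for x, y in markup_points:
        ET.SubElement(markup, "Point", x=str(x), y=str(y))
    return ET.tostring(root, encoding="unicode")

print(annotation_to_xml("example.sop.instance.uid", "ground-glass opacity",
                        [(10.0, 10.0), (42.0, 10.0), (42.0, 35.0)]))
```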
Radiographics | 2012
Pattanasak Mongkolwat; David S. Channin; Vladimir Kleper; Daniel L. Rubin
In a routine clinical environment or clinical trial, a case report form or structured reporting template can be used to quickly generate uniform and consistent reports. Annotation and Image Markup (AIM), a project supported by the National Cancer Institute's cancer Biomedical Informatics Grid (caBIG), can be used to collect information for a case report form or structured reporting template. AIM is designed to store, in a single information source, (a) the description of pixel data with use of markups or graphical drawings placed on the image, (b) calculation results (which may or may not be directly related to the markups), and (c) supplemental information. To facilitate the creation of AIM annotations with data entry templates, an AIM template schema and an open-source template creation application were developed to assist clinicians, image researchers, and designers of clinical trials to quickly create a set of data collection items, thereby ultimately making image information more readily accessible.
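To make the idea of a data collection template concrete, the sketch below represents a template as a named list of items with allowed answers and validates a reader's entry against it. The structure and field names are hypothetical; the actual AIM template schema is a far richer XML schema, and the published template creation application is the supported way to author templates.

```python
# Hypothetical representation of a data-collection template: a named list of
# items, each with the allowed answers a reader may choose from.
template = {
    "name": "Lung nodule follow-up",
    "items": [
        {"label": "Lesion type", "allowed": ["solid", "part-solid", "ground-glass"]},
        {"label": "Margin",      "allowed": ["smooth", "lobulated", "spiculated"]},
    ],
}

def validate_entry(template, entry):
    """Check that every template item was answered with an allowed value."""
    errors = []
    for item in template["items"]:
        answer = entry.get(item["label"])
        if answer not in item["allowed"]:
            errors.append(f"{item['label']}: {answer!r} is not an allowed answer")
    return errors

entry = {"Lesion type": "solid", "Margin": "speculated"}   # note the misspelled answer
print(validate_entry(template, entry))
```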
Journal of Digital Imaging | 2011
Rebecca Hazen; Alexander Van Esbroeck; Pattanasak Mongkolwat; David S. Channin
RadLex™, the Radiology Lexicon, is a controlled vocabulary of terms used in radiology. It was developed by the Radiological Society of North America in recognition of a lack of coverage of these radiology concepts by other lexicons. There are still additional concepts, particularly those related to imaging observations and imaging observation characteristics, that could be added to the lexicon. We used a free and open source software system to extract these terms from the medical literature. The system retrieved relevant articles from the PubMed repository and passed them through modules in the Apache Unstructured Information Management Architecture. Image observations and image observation characteristics were identified through a seven-step process. The system was run on a corpus of 1,128 journal articles. The system generated lists of 624 imaging observations and 444 imaging observation characteristics. Three domain experts evaluated the top 100 terms in each list and determined a precision of 52% and 26%, respectively, for identification of image observations and image observation characteristics. We conclude that candidate terms for inclusion in standardized lexicons may be extracted automatically from the peer-reviewed literature. These terms can then be reviewed for curation into the lexicon.
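The published pipeline retrieved PubMed articles and ran them through Apache UIMA analysis modules in a seven-step process; none of that machinery is reproduced here. As a much reduced illustration of the general idea of mining candidate phrases from text, the sketch below simply counts word bigrams across a couple of sentences.

```python
import re
from collections import Counter

# Tiny stop list for the illustration; the real system used UIMA modules and a
# seven-step filtering process rather than simple frequency counting.
STOPWORDS = {"the", "a", "an", "of", "in", "on", "with", "and", "or", "is", "was", "were"}

def candidate_terms(texts, n=2):
    """Count word n-grams as crude stand-ins for imaging-observation phrases."""
    counts = Counter()
    for text in texts:
        words = [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS]
        for i in range(len(words) - n + 1):
            counts[" ".join(words[i:i + n])] += 1
    return counts

abstracts = [
    "The chest radiograph showed a spiculated mass with pleural effusion.",
    "CT demonstrated ground glass opacity and a small pleural effusion.",
]
for term, freq in candidate_terms(abstracts).most_common(5):
    print(freq, term)
```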
International Conference of the IEEE Engineering in Medicine and Biology Society | 2009
David S. Channin; Pattanasak Mongkolwat; Vladimir Kleper; Daniel L. Rubin
An image annotation is the explanatory or descriptive information about the pixel data of an image that is generated by a human (or machine) observer. An image markup is the graphical symbols placed over the image to depict an annotation. In the majority of current clinical and research imaging practice, markup is captured in proprietary formats and annotations are referenced only in free-text radiology reports. This makes these annotations difficult to query, retrieve, and compute upon, hampering their integration into other data mining and analysis efforts. This paper describes the National Cancer Institute's cancer Biomedical Informatics Grid (caBIG) Annotation and Image Markup (AIM) project, focusing on how to use AIM to query for annotations. The AIM project delivers an information model for image annotation and markup. The model uses controlled terminologies for important concepts. All of the classes and attributes of the model have been harmonized with the other models and common data elements in use at the National Cancer Institute. The project also delivers the XML schemata necessary to instantiate AIM annotations in XML, as well as a software application for translating AIM XML into DICOM SR and HL7 CDA. Large collections of AIM annotations can be built and then queried as Grid or Web services. Using the tools of the AIM project, image annotations and their markup can be captured and stored in human- and machine-readable formats. This enables the inclusion of human image observation and inference as part of larger data mining and analysis activities.
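Because large collections of annotations are meant to be queried, a tiny example of filtering XML-serialized annotations may help. It reuses the illustrative element names from the serialization sketch earlier; a production deployment would query a Grid/Web service or a database of AIM XML or DICOM SR objects rather than in-memory strings.

```python
import xml.etree.ElementTree as ET

def images_with_observation(annotation_docs, keyword):
    """Return image UIDs whose annotation text mentions `keyword` (illustrative only)."""
    hits = []
    for doc in annotation_docs:
        root = ET.fromstring(doc)
        observation = root.findtext("ImagingObservation", default="")
        if keyword.lower() in observation.lower():
            ref = root.find("ImageReference")
            if ref is not None:
                hits.append(ref.get("sopInstanceUid"))
    return hits

docs = [
    "<ImageAnnotation><ImageReference sopInstanceUid='uid-1'/>"
    "<ImagingObservation>spiculated mass</ImagingObservation></ImageAnnotation>",
    "<ImageAnnotation><ImageReference sopInstanceUid='uid-2'/>"
    "<ImagingObservation>pleural effusion</ImagingObservation></ImageAnnotation>",
]
print(images_with_observation(docs, "mass"))   # -> ['uid-1']
```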
Journal of Digital Imaging | 2005
Pattanasak Mongkolwat; Pankit Bhalodia; James A. Gehl; David S. Channin
Verifying the integrity of DICOM files transmitted between separate archives (e.g., storage service providers, network attached storage, or storage area networks) is of critical importance. The software application described in this article retrieves a specified number of DICOM studies from two different DICOM storage applications: the primary picture archiving and communication system (PACS) and an off-site long-term archive. The system includes a query/retrieve (Q/R) module, a storage service class provider (SCP), a DICOM comparison module, and a graphical user interface. The system checks the two studies for DICOM 3.0 compliance and then verifies that the DICOM data elements and pixel data are identical. Discrepancies in the two data sets are recorded with the data elements (tag number, value representation, value length, and value field) and pixel data (pixel value and pixel location) in question. The system can be operated automatically, in batch mode, or manually to meet a wide variety of use cases. We ran this program on a 15% statistical sample of 50,000 studies (7,500 studies examined). We found 2 pixel data mismatches (resolved on retransmission) and 831 header element mismatches. We subsequently ran the program against a smaller batch of 1,000 studies, identifying no pixel data mismatches and 958 header element mismatches. Although we did not find significant issues in our limited study, given other incidents that we have experienced when moving images between systems, we conclude that it is vital to maintain an ongoing, automatic, systematic validation of DICOM transfers so as to be proactive in preventing possibly catastrophic data loss.
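A minimal sketch of the comparison step, using the pydicom library (an assumption; the paper does not state the implementation, and the original system also handled query/retrieve, storage SCP duties, and DICOM 3.0 compliance checks, none of which are shown here):

```python
import pydicom
from pydicom.tag import Tag

PIXEL_DATA_TAG = Tag(0x7FE0, 0x0010)

def compare_dicom_files(path_a, path_b):
    """Report header-element and pixel-data mismatches between two DICOM files."""
    ds_a = pydicom.dcmread(path_a)
    ds_b = pydicom.dcmread(path_b)
    mismatches = []

    # Compare every data element present in either file, except pixel data.
    for tag in sorted(set(ds_a.keys()) | set(ds_b.keys())):
        if tag == PIXEL_DATA_TAG:
            continue
        elem_a, elem_b = ds_a.get(tag), ds_b.get(tag)
        if (elem_a is None or elem_b is None
                or elem_a.VR != elem_b.VR or elem_a.value != elem_b.value):
            mismatches.append(("header", tag))

    # Compare raw pixel data bytes.
    if ds_a.get("PixelData") != ds_b.get("PixelData"):
        mismatches.append(("pixel data", PIXEL_DATA_TAG))
    return mismatches
```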
Journal of Digital Imaging | 2008
Richard Chen; Pattanasak Mongkolwat; David S. Channin
This paper describes the web-based visualization interface of RadMonitor, a platform-independent web application designed to help manage the complexity of information flow within a health care enterprise. The system eavesdrops on Health Level Seven (HL7) traffic and parses statistical operational information into a database. The information is then presented to the user as a treemap, a graphical visualization scheme that simplifies the display of hierarchical information. While RadMonitor has been implemented for the purpose of analyzing radiology operations, its XML backend allows it to be reused for virtually any other hierarchical data set.
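The treemap idea itself is easy to illustrate. The sketch below lays out nested operation counts with a simple slice-and-dice scheme (alternating horizontal and vertical subdivision); this is a generic illustration of treemapping, not the layout algorithm RadMonitor's interface actually uses.

```python
def slice_and_dice(tree, x, y, w, h, depth=0):
    """Lay out nested {name: count-or-subtree} dicts as (name, x, y, w, h) rectangles."""
    def total(node):
        return node if isinstance(node, (int, float)) else sum(total(v) for v in node.values())

    grand = total(tree)
    offset = 0.0
    for name, node in tree.items():
        frac = total(node) / grand if grand else 0.0
        if depth % 2 == 0:                       # slice horizontally at even depths
            rx, ry, rw, rh = x + offset * w, y, frac * w, h
        else:                                    # slice vertically at odd depths
            rx, ry, rw, rh = x, y + offset * h, w, frac * h
        offset += frac
        if isinstance(node, dict):
            yield from slice_and_dice(node, rx, ry, rw, rh, depth + 1)
        else:
            yield (name, rx, ry, rw, rh)

# Example: study counts per modality per scanner, laid out on a 100 x 100 canvas
counts = {"CT": {"scanner1": 30, "scanner2": 20}, "MR": {"scanner3": 50}}
for rect in slice_and_dice(counts, 0, 0, 100, 100):
    print(rect)
```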
Medical Imaging 2001: PACS and Integrated Medical Information Systems: Design and Evaluation | 2001
Sandeep S. Bhangoo; David S. Channin; Pattanasak Mongkolwat; Nicky Leung; Raymond Wu
We have developed a new mechanism for delivering image processing functions to multi-modality PACS diagnostic viewing stations. The tools are written in the Java programming language. The core visible element of this system is a graphical user interface component that moves around the image(s) like a magnifying glass. Included with the component are controls capable of manipulating the image seen within it. The client-server architecture allows image processing functions to be added and removed dynamically as they are developed. In addition to several standard image processing functions, the component provides several novel functions.
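The dynamic add/remove behavior is essentially a plugin registry. The sketch below shows that registration idea in Python (not the Java, client-server implementation the paper describes): operations register themselves by name, and the viewing component applies whichever one the user selects to the pixels under the magnifying-glass region.

```python
from typing import Callable, Dict, List

ImageOp = Callable[[List[List[int]]], List[List[int]]]   # toy signature: 2-D pixel grid in, out
REGISTRY: Dict[str, ImageOp] = {}

def register(name: str):
    """Decorator that adds an image-processing operation to the registry under `name`."""
    def wrap(fn: ImageOp) -> ImageOp:
        REGISTRY[name] = fn
        return fn
    return wrap

@register("invert")
def invert(region):
    """Invert 8-bit pixel values inside the magnified region."""
    return [[255 - p for p in row] for row in region]

def apply_op(name, region):
    """Apply a registered operation to the region under the magnifying glass."""
    return REGISTRY[name](region)

print(apply_op("invert", [[0, 128], [255, 64]]))   # [[255, 127], [0, 191]]
```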