
Publication


Featured research published by Sae Hwang.


International Conference on Image Processing | 2007

Polyp Detection in Colonoscopy Video using Elliptical Shape Feature

Sae Hwang; JungHwan Oh; Wallapak Tavanapong; Johnny Wong; P. C. De Groen

Early detection of polyps and cancers is one of the most important goals of colonoscopy. Computer-based analysis of video files using texture features, as has been proposed for polyps of the stomach and colon, has two major limitations: it uses a fixed-size analysis window and relies heavily on a training set of images for accuracy. To overcome these limitations, in this paper we propose a new technique that focuses on shape instead of texture. The proposed polyp region detection method is based on the elliptical shape that is common to nearly all small colon polyps.
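The paper's detector is not reproduced here, but the core idea of scoring a candidate region by how elliptical it is can be sketched with image moments: fit the moment-equivalent ellipse to a binary region and measure the overlap. The Jaccard-overlap scoring rule below is an illustrative assumption, not the authors' method.

```python
import numpy as np

def ellipse_shape_score(mask):
    """Overlap (Jaccard index) between a binary region and its
    moment-equivalent ellipse. Scores near 1.0 suggest an elliptical,
    polyp-like region; the scoring rule is illustrative only."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    mean = pts.mean(axis=0)
    cov = np.cov((pts - mean).T)
    # Eigen-decomposition gives the ellipse orientation; for a uniform
    # ellipse each semi-axis equals twice the standard deviation.
    evals, evecs = np.linalg.eigh(cov)
    a, b = 2.0 * np.sqrt(np.maximum(evals, 1e-12))
    # Rasterize the fitted ellipse over the image grid.
    yy, xx = np.indices(mask.shape)
    rel = np.stack([xx - mean[0], yy - mean[1]], axis=-1) @ evecs
    inside = (rel[..., 0] / a) ** 2 + (rel[..., 1] / b) ** 2 <= 1.0
    inter = np.logical_and(inside, mask > 0).sum()
    union = np.logical_or(inside, mask > 0).sum()
    return inter / union
```

A filled disk scores close to 1, while an elongated, irregular region scores much lower, so thresholding this score separates round, polyp-like candidates from folds and other structures.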


Medical Image Analysis | 2007

Informative frame classification for endoscopy video

JungHwan Oh; Sae Hwang; Jeongkyu Lee; Wallapak Tavanapong; Johnny Wong; Piet C. de Groen

Advances in video technology allow inspection, diagnosis, and treatment of the inside of the human body without scars or with only very small scars. Flexible endoscopes are used to inspect the esophagus, stomach, small bowel, colon, and airways, whereas rigid endoscopes are used for a variety of minimally invasive surgeries (e.g., laparoscopy, arthroscopy, endoscopic neurosurgery). These endoscopes come in various sizes, but all have a tiny video camera at the tip. During an endoscopic procedure, the tiny video camera generates a video signal of the interior of the human organ, which is displayed on a monitor for real-time analysis by the physician. However, many out-of-focus frames are present in endoscopy videos because current endoscopes are equipped with a single, wide-angle lens that cannot be focused. We need to distinguish the out-of-focus frames from the in-focus frames so that the information in each can be used for further automatic or semi-automatic computer-aided diagnosis (CAD). This classification can reduce the number of images to be viewed by a physician and analyzed by a CAD system. We call an out-of-focus frame a non-informative frame and an in-focus frame an informative frame. The out-of-focus frames have characteristics that are different from those of in-focus frames. In this paper, we propose two new techniques (edge-based and clustering-based) to classify video frames into two classes: informative and non-informative. However, because intensive specular reflections reduce the accuracy of the classification, we also propose a specular reflection detection technique and use the detected specular reflection information to increase the accuracy of informative frame classification. Our experimental studies indicate that precision, sensitivity, specificity, and accuracy are greater than 90% for the specular reflection detection technique and greater than 95% for the two informative frame classification techniques.
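As a rough illustration of the edge-based idea (not the paper's actual algorithm), a frame can be labeled informative when the fraction of strong-gradient pixels exceeds a threshold; the finite-difference gradient, the gradient cutoff of 20, and the 5% density threshold below are all assumptions.

```python
import numpy as np

def edge_density(gray):
    """Fraction of pixels with a strong horizontal or vertical
    intensity gradient (simple finite differences)."""
    g = gray.astype(float)
    gx = np.abs(np.diff(g, axis=1))
    gy = np.abs(np.diff(g, axis=0))
    strong = 20.0  # illustrative gradient threshold
    return (np.mean(gx > strong) + np.mean(gy > strong)) / 2.0

def is_informative(gray, min_density=0.05):
    """Classify a frame as informative (in focus) when enough edge
    structure is present; the density threshold is an assumption."""
    return edge_density(gray) >= min_density
```

An out-of-focus frame blurs edges away, so its gradient magnitudes stay small and it falls below the density threshold.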


International Conference on Management of Data | 2005

STRG-Index: spatio-temporal region graph indexing for large video databases

Jeongkyu Lee; JungHwan Oh; Sae Hwang

In this paper, we propose a new graph-based data structure and indexing method to organize and retrieve video data. Several studies have shown that a graph can be a better candidate for modeling semantically rich and complicated multimedia data. However, few methods consider the temporal feature of video data, which is a distinguishing and representative characteristic compared with other multimedia (e.g., images). In order to consider the temporal feature effectively and efficiently, we propose a new graph-based data structure called the Spatio-Temporal Region Graph (STRG). Unlike existing graph-based data structures, which provide only spatial features, the proposed STRG additionally provides temporal features, which represent temporal relationships among spatial objects. The STRG is decomposed into its subgraphs, and redundant subgraphs are eliminated to reduce the index size and search time, because the computational complexity of graph matching (subgraph isomorphism) is NP-complete. In addition, a new distance measure, called Extended Graph Edit Distance (EGED), is introduced in both non-metric and metric spaces, for matching and indexing respectively. Based on the STRG and EGED, we propose a new indexing method, STRG-Index, which is faster and more accurate because it uses a tree structure and a clustering algorithm. We compare the STRG-Index with the M-tree, a popular tree-based indexing method for multimedia data. The STRG-Index outperforms the M-tree for various query loads in terms of cost and speed.
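EGED itself is defined over region graphs in the paper; the sketch below only illustrates the underlying edit-distance dynamic program, using per-frame label sets as toy stand-ins for graphs. The cost functions (set symmetric difference for substitution, set size for insertion and deletion) are illustrative assumptions, not the paper's costs.

```python
def temporal_edit_distance(seq_a, seq_b):
    """Levenshtein-style alignment between two sequences of per-frame
    label sets. Substituting frame a for frame b costs |a ^ b|;
    inserting or deleting a frame costs its size."""
    n, m = len(seq_a), len(seq_b)
    # dp[i][j] = distance between seq_a[:i] and seq_b[:j]
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = dp[i - 1][0] + len(seq_a[i - 1])
    for j in range(1, m + 1):
        dp[0][j] = dp[0][j - 1] + len(seq_b[j - 1])
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i][j] = min(
                dp[i - 1][j - 1] + len(seq_a[i - 1] ^ seq_b[j - 1]),  # substitute
                dp[i - 1][j] + len(seq_a[i - 1]),                     # delete
                dp[i][j - 1] + len(seq_b[j - 1]),                     # insert
            )
    return dp[n][m]
```

The alignment tolerates inserted or dropped frames, which is what makes an edit distance along the temporal axis suitable for comparing video segments of different lengths.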


IEEE Transactions on Biomedical Engineering | 2009

Measuring Objective Quality of Colonoscopy

JungHwan Oh; Sae Hwang; Yu Cao; Wallapak Tavanapong; Danyu Liu; Johnny Wong; P. C. De Groen

Advances in video technology are being incorporated into today's healthcare practices. Colonoscopy is regarded as one of the most important diagnostic tools for colorectal cancer. Indeed, colonoscopy has contributed to a decline in the number of colorectal-cancer-related deaths. Although colonoscopy has become the preferred screening modality for prevention of colorectal cancer, recent data suggest that there is a significant miss rate for the detection of large polyps and cancers, and methods to investigate why this occurs are needed. To address this problem, we present a new computer-based method that analyzes a digitized video file of a colonoscopic procedure and produces a number of metrics that likely reflect the quality of the procedure. The method consists of a set of novel image-processing algorithms designed to address new technical challenges due to uncommon characteristics of videos captured during colonoscopy. As these measurements can be obtained automatically, our method enables future quality control in large-scale day-to-day medical practice, which is currently not feasible. In addition, our method can be adapted to other endoscopic procedures such as upper gastrointestinal endoscopy, enteroscopy, and bronchoscopy. Last but not least, our method may be useful to assess progress during colonoscopy training.


ACM Multimedia | 2005

Automatic measurement of quality metrics for colonoscopy videos

Sae Hwang; JungHwan Oh; Jeongkyu Lee; Yu Cao; Wallapak Tavanapong; Danyu Liu; Johnny Wong; Piet C. de Groen

Colonoscopy is the accepted screening method for detection of colorectal cancer or its precursor lesions, colorectal polyps. Indeed, colonoscopy has contributed to a decline in the number of colorectal-cancer-related deaths. However, not all cancers or large polyps are detected at the time of colonoscopy, and methods to investigate why this occurs are needed. We present a new computer-based method that allows automated measurement of a number of metrics that likely reflect the quality of the colonoscopic procedure. The method is based on analysis of a digitized video file created during colonoscopy, and produces information regarding insertion time, withdrawal time, images at the time of maximal intubation, the time and ratio of clear versus blurred or non-informative images, and a first estimate of the effort performed by the endoscopist. As these metrics can be obtained automatically, our method allows future quality control in the day-to-day medical practice setting on a large scale. In addition, our method can be adapted to other healthcare procedures. Last but not least, our method may be useful to assess progress during colonoscopy training, or as part of endoscopic skills assessment evaluations.
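The paper derives these metrics from image analysis; the toy sketch below assumes per-frame clarity labels and the maximal-intubation frame index are already known (detecting them is the hard part the paper addresses) and merely shows how the timing and ratio metrics follow from them.

```python
def colonoscopy_metrics(clear_frames, fps, cecum_frame):
    """Toy quality metrics from per-frame data.

    clear_frames: list of booleans, True if the frame is clear
                  (informative), one entry per video frame.
    fps:          frames per second of the recording.
    cecum_frame:  index of the frame at maximal intubation
                  (assumed known for this illustration).
    """
    total = len(clear_frames)
    insertion_time = cecum_frame / fps
    withdrawal_time = (total - cecum_frame) / fps
    withdrawal = clear_frames[cecum_frame:]
    clear_ratio = sum(withdrawal) / len(withdrawal) if withdrawal else 0.0
    return {
        "insertion_time_s": insertion_time,
        "withdrawal_time_s": withdrawal_time,
        "withdrawal_clear_ratio": clear_ratio,
    }
```

Withdrawal time and the clear-frame ratio during withdrawal are the clinically interesting quantities, since most inspection happens on the way out.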


ACM Multimedia | 2005

Scenario based dynamic video abstractions using graph matching

Jeongkyu Lee; JungHwan Oh; Sae Hwang

In this paper, we present scenario-based dynamic video abstractions using graph matching. Our approach has two main components: multi-level scenario generation and dynamic video abstraction. Multi-level scenarios are generated by a graph-based video segmentation and a hierarchy of the segments. Dynamic video abstractions are accomplished by accessing the generated hierarchy level by level. The first step in the proposed approach is to segment a video into shots using a Region Adjacency Graph (RAG), which expresses the spatial relationships among the segmented regions of a frame. To measure the similarity between two consecutive RAGs, we propose a new similarity measure, called the Graph Similarity Measure (GSM). Next, we construct a tree structure called a scene tree based on the correlation between the detected shots. The correlation is computed by the GSM, since it properly captures the relations between the detected shots. Multi-level scenarios, which provide various levels of video abstraction, are generated using the constructed scene tree. We provide two types of abstraction using multi-level scenarios: multi-level highlights and multi-length summarizations. Multi-level highlights are made up of the entire shots in each scenario level. To summarize a video at various lengths, we select key frames by considering temporal relationships among RAGs, computed by the GSM. We have developed a system, called the Automatic Video Analysis System (AVAS), that integrates the proposed techniques to show their effectiveness. The experimental results show that the proposed techniques are promising.
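The GSM is defined over full region adjacency graphs; as a simplified stand-in (not the paper's measure), one can compare two RAGs by the Jaccard overlap of their edge sets and declare a shot boundary when the similarity between consecutive frames drops below a threshold. The edge-set representation and the 0.5 threshold are assumptions.

```python
def rag_similarity(edges_a, edges_b):
    """Jaccard overlap between two region adjacency graphs given as
    iterables of undirected edges (pairs of region labels)."""
    ea = {frozenset(e) for e in edges_a}
    eb = {frozenset(e) for e in edges_b}
    union = ea | eb
    return len(ea & eb) / len(union) if union else 1.0

def detect_shot_boundaries(rags, threshold=0.5):
    """Indices i where frame i starts a new shot, i.e. where the
    similarity to frame i-1 falls below the threshold."""
    return [
        i for i in range(1, len(rags))
        if rag_similarity(rags[i - 1], rags[i]) < threshold
    ]
```

Within a shot the region structure changes gradually, so consecutive RAGs stay similar; a cut replaces most regions at once and the similarity collapses.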


Electronic Imaging | 2003

Blurry-frame detection and shot segmentation in colonoscopy videos

JungHwan Oh; Sae Hwang; Wallapak Tavanapong; Piet C. de Groen; Johnny Wong

Colonoscopy is an important screening procedure for colorectal cancer. During this procedure, the endoscopist visually inspects the colon. Human inspection, however, is not without error. We hypothesize that colonoscopy videos may contain additional valuable information missed by the endoscopist. Video segmentation is the first necessary step for content-based video analysis and retrieval, providing efficient access to the important images and video segments in a large colonoscopy video database. Based on the unique characteristics of colonoscopy videos, we introduce a new scheme to detect and remove blurry frames and segment the videos into shots based on their contents. Our experimental results show that the average precision and recall of the proposed scheme are over 90% for the detection of non-blurry images. The proposed method of blurry-frame detection and shot segmentation is extensible to videos captured from other endoscopic procedures such as upper gastrointestinal endoscopy, enteroscopy, cystoscopy, and laparoscopy.


International Conference on Multimedia and Expo | 2005

Clustering of Video Objects by Graph Matching

Jeongkyu Lee; JungHwan Oh; Sae Hwang

We propose a new graph-based data structure, called the spatio-temporal region graph (STRG), which can represent the content of a video sequence. Unlike existing structures, which consider mainly spatial information at the frame level of a video, the proposed STRG can additionally formulate temporal information at the video level. After an STRG is constructed from a given video sequence, it is decomposed into subgraphs called object graphs (OGs), which represent the temporal characteristics of video objects. For unsupervised learning, we cluster similar OGs into groups, which requires matching two OGs. For this graph matching, we introduce a new distance measure, called extended graph edit distance (EGED), which can handle the temporal characteristics of OGs. For the actual clustering, we use expectation maximization (EM) with EGED. The experiments were conducted on real video streams, and the results show the effectiveness and robustness of the proposed schemes.


Progress in Biomedical Optics and Imaging - Proceedings of SPIE | 2005

Informative-frame filtering in endoscopy videos

Yong Hwan An; Sae Hwang; JungHwan Oh; Jeongkyu Lee; Wallapak Tavanapong; Piet C. de Groen; Johnny Wong

Advances in video technology are being incorporated into today's healthcare practice. For example, colonoscopy is an important screening tool for colorectal cancer. Colonoscopy allows for the inspection of the entire colon and provides the ability to perform a number of therapeutic operations during a single procedure. During a colonoscopic procedure, a tiny video camera at the tip of the endoscope generates a video signal of the internal mucosa of the colon. The video data are displayed on a monitor for real-time analysis by the endoscopist. Other endoscopic procedures include upper gastrointestinal endoscopy, enteroscopy, bronchoscopy, cystoscopy, and laparoscopy. However, a significant number of out-of-focus frames are included in these videos, since current endoscopes are equipped with a single, wide-angle lens that cannot be focused. The out-of-focus frames do not hold any useful information. To reduce the burden of further processing, such as computer-aided image processing or examination by a human expert, these frames need to be removed. We call an out-of-focus frame a non-informative frame and an in-focus frame an informative frame. We propose a new technique to classify video frames into two classes, informative and non-informative, using a combination of the Discrete Fourier Transform (DFT), texture analysis, and K-means clustering. The proposed technique can evaluate the frames without any reference image and does not need any predefined threshold value. Our experimental studies indicate that it achieves over 96% on four different performance metrics (precision, sensitivity, specificity, and accuracy).
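The full DFT, texture analysis, and K-means pipeline is not reproduced here; the fragment below only illustrates the DFT intuition: blurred (non-informative) frames concentrate spectral energy near DC, so the fraction of energy outside a small low-frequency disk separates the two classes. The disk radius is an assumption.

```python
import numpy as np

def high_frequency_ratio(gray):
    """Fraction of spectral magnitude outside a small low-frequency
    disk around DC. Sharp frames score high, blurred frames low."""
    f = np.fft.fftshift(np.fft.fft2(gray.astype(float)))
    mag = np.abs(f)
    h, w = gray.shape
    yy, xx = np.indices((h, w))
    r = min(h, w) // 8  # illustrative low-frequency radius
    low = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= r * r
    total = mag.sum()
    return mag[~low].sum() / total if total else 0.0
```

In the paper's setting such spectral features would be fed, together with texture features, into K-means so that no fixed threshold needs to be chosen by hand.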


Medical Imaging 2007: Computer-Aided Diagnosis | 2007

Automatic polyp region segmentation for colonoscopy images using watershed algorithm and ellipse segmentation

Sae Hwang; JungHwan Oh; Wallapak Tavanapong; Johnny Wong; Piet C. de Groen

In the US, colorectal cancer is the second leading cause of all cancer deaths, behind lung cancer. Colorectal polyps are the precursor lesions of colorectal cancer. Therefore, early detection of polyps, and at the same time removal of these precancerous lesions, is one of the most important goals of colonoscopy. To objectively document detection and removal of colorectal polyps for quality purposes, and to facilitate real-time detection of polyps in the future, we have initiated a computer-based research program that analyzes video files created during colonoscopy. For computer-based detection of polyps, texture-based techniques have been proposed. A major limitation of the existing texture-based analytical methods is that they depend on a fixed-size analysis window. Such a fixed-size window may work for still images, but is not efficient for the analysis of colonoscopy video files, where a single polyp can have different relative sizes and color features depending on the viewing position and distance of the camera. In addition, the existing methods do not consider shape features. To overcome these problems, we propose a novel polyp region segmentation method based primarily on the elliptical shape that nearly all small polyps and many larger polyps possess. Experimental results indicate that our proposed polyp detection method achieves a sensitivity and specificity of 93% and 98%, respectively.

Collaboration


Dive into Sae Hwang's collaboration.

Top Co-Authors

JungHwan Oh (University of North Texas)
Jeongkyu Lee (University of Bridgeport)
Danyu Liu (Iowa State University)
Yu Cao (Arizona State University)
Yong Hwan An (University of Texas at Arlington)