
Publication


Featured research published by JungHwan Oh.


International Conference on Data Engineering | 2000

Multi-level multi-channel air cache designs for broadcasting in a mobile environment

Kiran Prabhakara; Kien A. Hua; JungHwan Oh

We investigate efficient ways of broadcasting data to mobile users over multiple physical channels, which cannot be coalesced into a smaller number of high-bandwidth channels. We propose the use of an MLMC (multi-level multi-channel) air cache, which can provide mobile users with data based on their popularity factor. We provide a wide range of design considerations for the server that broadcasts over the MLMC cache. We also investigate novel techniques for a mobile user to access data from the MLMC cache and show the advantages of designing the broadcasting strategy in tandem with the access behavior of the mobile users. Finally, we provide experimental results comparing the techniques we introduce.
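The popularity-driven layering described above can be sketched as follows. This is a hypothetical simplification, not the paper's exact MLMC design: hotter channels hold fewer items, so the items assigned to them reappear more often in the broadcast cycle.

```python
# Hypothetical sketch of popularity-based channel assignment (not the
# paper's exact MLMC design): hotter channels hold fewer items, so the
# items placed on them repeat more frequently in the broadcast cycle.
def assign_to_channels(items, popularity, channel_sizes):
    """items: list of ids; popularity: id -> access probability;
    channel_sizes: per-channel capacity, hottest channel first."""
    ranked = sorted(items, key=lambda i: popularity[i], reverse=True)
    channels, start = [], 0
    for size in channel_sizes:
        channels.append(ranked[start:start + size])
        start += size
    return channels

def cycle_length(channel):
    # An item on a channel of n items reappears every n broadcast slots.
    return len(channel)

items = ["a", "b", "c", "d", "e", "f"]
pop = {"a": .4, "b": .25, "c": .15, "d": .1, "e": .06, "f": .04}
channels = assign_to_channels(items, pop, [2, 4])
print(channels)                           # hottest two items share the fast channel
print([cycle_length(c) for c in channels])
```

A mobile client tuned to the first channel then sees the most popular items twice as often as items on the second channel.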


International Conference on Management of Data | 2000

Efficient and cost-effective techniques for browsing and indexing large video databases

JungHwan Oh; Kien A. Hua

We present in this paper a fully automatic content-based approach to organizing and indexing video data. Our methodology involves three steps:<ul><li>Step 1: We segment each video into shots using a Camera-Tracking technique. This process also extracts the feature vector for each shot, which consists of two statistical variances <i>Var<sup>BA</sup></i> and <i>Var<sup>OA</sup></i>. These values capture how much things are changing in the background and foreground areas of the video shot. </li><li>Step 2: For each video, we apply a fully automatic method to build a browsing hierarchy using the shots identified in Step 1. </li><li>Step 3: Using the <i>Var<sup>BA</sup></i> and <i>Var<sup>OA</sup></i> values obtained in Step 1, we build an index table to support a variance-based video similarity model. That is, video scenes/shots are retrieved based on given values of <i>Var<sup>BA</sup></i> and <i>Var<sup>OA</sup></i>. </li></ul> The above three inter-related techniques offer an integrated framework for modeling, browsing, and searching large video databases. Our experimental results indicate that they have many advantages over existing methods.
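The variance-based lookup in Step 3 can be sketched as below. The feature names VarBA and VarOA come from the abstract; the flat index scanned by Euclidean distance is our own simplification of the paper's index table.

```python
import math

# Hedged sketch of the variance-based similarity lookup: the features
# VarBA / VarOA are from the abstract, but the flat list scanned by
# Euclidean distance is an illustrative stand-in for the index table.
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def query(index, var_ba, var_oa, k=2):
    """index: list of (shot_id, var_ba, var_oa); return k nearest shots."""
    dist = lambda e: math.hypot(e[1] - var_ba, e[2] - var_oa)
    return [e[0] for e in sorted(index, key=dist)[:k]]

# Per-frame background change for a query shot, summarized as a variance.
bg_diffs = [0.2, 0.4, 0.3, 0.3]
print(round(variance(bg_diffs), 3))

index = [("shot1", 0.10, 0.80), ("shot2", 0.12, 0.75), ("shot3", 0.90, 0.05)]
print(query(index, 0.11, 0.78))   # the two shots with similar variances
```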


ACM Symposium on Applied Computing | 2007

Automatic classification of digestive organs in wireless capsule endoscopy videos

Jeongkyu Lee; JungHwan Oh; Subodh Kumar Shah; Xiaohui Yuan; Shou-Jiang Tang

Wireless Capsule Endoscopy (WCE) allows a physician to examine the entire small intestine without any surgical operation. With the miniaturization of wireless and camera technologies comes the ability to view the entire gastrointestinal tract with little effort. Although WCE is a technical breakthrough that allows us to access the entire intestine without surgery, it is reported that a medical clinician spends one to two hours assessing a WCE video, which limits the number of examinations possible and incurs considerable cost. To reduce the assessment time, it is critical to develop a technique to automatically discriminate digestive organs such as the esophagus, stomach, small intestine (i.e., duodenum, jejunum, and ileum), and colon. In this paper, we propose a novel technique to segment a WCE video into these anatomic parts based on color change pattern analysis. The basic idea is that each digestive organ has different patterns of intestinal contractions, which are quantified as the features. We present experimental results that demonstrate the effectiveness of the proposed method.
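A minimal change-point sketch conveys the flavor of segmenting by color change, though it is not the paper's method: summarize each frame by one color statistic (say, mean hue) and mark an organ boundary where a sliding-window mean shifts sharply. The window size and threshold here are illustrative.

```python
# Illustrative change-point detector (not the paper's technique):
# an organ boundary is flagged where the mean of the color signal
# over the next `win` frames departs sharply from the previous `win`.
def boundaries(color_sig, win=3, thresh=0.2):
    cuts = []
    for t in range(win, len(color_sig) - win):
        before = sum(color_sig[t - win:t]) / win
        after = sum(color_sig[t:t + win]) / win
        if abs(after - before) > thresh:
            cuts.append(t)
    return cuts

# Simulated signal: stomach-like hues, then a jump to intestine-like hues.
# Adjacent detections around the true boundary would be merged in practice.
sig = [0.30, 0.31, 0.29, 0.30, 0.70, 0.72, 0.71, 0.69]
print(boundaries(sig))
```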


International Conference on Image Processing | 2007

Polyp Detection in Colonoscopy Video using Elliptical Shape Feature

Sae Hwang; JungHwan Oh; Wallapak Tavanapong; Johnny Wong; P. C. De Groen

Early detection of polyps and cancers is one of the most important goals of colonoscopy. Computer-based analysis of video files using texture features, as has been proposed for polyps of the stomach and colon, has two major limitations: it uses a fixed-size analysis window and relies heavily on a training set of images for accuracy. To overcome these limitations, we propose in this paper a new technique focusing on shape instead of texture. The proposed polyp region detection method is based on the elliptical shape that is common to nearly all small colon polyps.


Medical Image Analysis | 2007

Informative frame classification for endoscopy video

JungHwan Oh; Sae Hwang; Jeongkyu Lee; Wallapak Tavanapong; Johnny Wong; Piet C. de Groen

Advances in video technology allow inspection, diagnosis and treatment of the inside of the human body with no or only very small scars. Flexible endoscopes are used to inspect the esophagus, stomach, small bowel, colon, and airways, whereas rigid endoscopes are used for a variety of minimally invasive surgeries (i.e., laparoscopy, arthroscopy, endoscopic neurosurgery). These endoscopes come in various sizes, but all have a tiny video camera at the tip. During an endoscopic procedure, the tiny video camera generates a video signal of the interior of the human organ, which is displayed on a monitor for real-time analysis by the physician. However, many out-of-focus frames are present in endoscopy videos because current endoscopes are equipped with a single, wide-angle lens that cannot be focused. We need to distinguish the out-of-focus frames from the in-focus frames to utilize the information of the out-of-focus and/or the in-focus frames for further automatic or semi-automatic computer-aided diagnosis (CAD). This classification can reduce the number of images to be viewed by a physician and to be analyzed by a CAD system. We call an out-of-focus frame a non-informative frame and an in-focus frame an informative frame. The out-of-focus frames have characteristics that are different from those of in-focus frames. In this paper, we propose two new techniques (edge-based and clustering-based) to classify video frames into two classes, informative and non-informative frames. However, because intensive specular reflections reduce the accuracy of the classification, we also propose a specular reflection detection technique, and use the detected specular reflection information to increase the accuracy of informative frame classification. Our experimental studies indicate that precision, sensitivity, specificity, and accuracy for the specular reflection detection technique and the two informative frame classification techniques are greater than 90% and 95%, respectively.
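One simple proxy for the edge-based idea can be sketched as follows; the gradient measure and thresholds here are our own illustrative choices, not the paper's classifier. The intuition is that an in-focus (informative) frame keeps strong gradients, while blurring collapses an out-of-focus frame's gradients toward zero.

```python
# Illustrative sketch only (thresholds are hypothetical): classify a
# frame as informative when a large enough fraction of its horizontal
# gradients are strong, i.e. the frame is in focus.
def edge_density(frame, thresh=30):
    """frame: 2D list of gray levels; fraction of strong horizontal gradients."""
    strong = total = 0
    for row in frame:
        for a, b in zip(row, row[1:]):
            total += 1
            strong += abs(a - b) > thresh
    return strong / total

def is_informative(frame, min_density=0.2):
    return edge_density(frame) >= min_density

sharp = [[0, 200, 0, 200], [200, 0, 200, 0]]            # alternating: strong edges
blurry = [[100, 110, 105, 100], [102, 104, 103, 101]]   # smooth: weak edges
print(is_informative(sharp), is_informative(blurry))
```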


ACM Multimedia | 2002

Multimedia data mining framework for raw video sequences

JungHwan Oh; Babitha Bandi

In this paper, we propose a general framework for real-time video data mining applied to raw videos (traffic videos, surveillance videos, etc.). We investigate whether existing techniques are applicable to this type of video, then introduce new techniques that are essential to process it in real time. The first step of our framework for mining raw video data is grouping input frames into a set of basic units relevant to the structure of the video. We call this unit a segment. This is one of the most important tasks, since it constructs the building blocks for the video database and for video data mining. The second step is characterizing each segment in order to cluster segments into similar groups, discover unknown knowledge, and detect interesting patterns. To do this, we extract features (motion, objects, colors, etc.) from each segment. In our framework, we focus on motion as a feature, and study how to compute and represent it for further processing. The third step of our framework is to cluster the decomposed segments into similar groups. In our clustering, we employ a multi-level hierarchical clustering approach to group segments using category and motion. Our preliminary experimental studies indicate that the proposed framework is promising.
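The multi-level grouping in the third step can be sketched like this: group segments first by category, then split each category group by motion magnitude. The motion threshold and feature encoding are illustrative assumptions, not the paper's.

```python
from collections import defaultdict

# Hedged two-level sketch of the clustering step: the first level uses
# the segment's category, the second a simple motion-magnitude split.
# The threshold (0.5) and the scalar motion feature are assumptions.
def cluster(segments, motion_split=0.5):
    """segments: list of (seg_id, category, motion in [0, 1])."""
    by_cat = defaultdict(list)
    for seg_id, cat, motion in segments:
        by_cat[cat].append((seg_id, motion))
    out = {}
    for cat, segs in by_cat.items():
        out[cat] = {
            "low_motion": [s for s, m in segs if m < motion_split],
            "high_motion": [s for s, m in segs if m >= motion_split],
        }
    return out

segs = [("s1", "traffic", 0.9), ("s2", "traffic", 0.1), ("s3", "surveillance", 0.7)]
print(cluster(segs))
```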


International Conference on Management of Data | 2005

STRG-Index: spatio-temporal region graph indexing for large video databases

Jeongkyu Lee; JungHwan Oh; Sae Hwang

In this paper, we propose a new graph-based data structure and indexing method to organize and retrieve video data. Several studies have shown that a graph can be a better candidate for modeling semantically rich and complicated multimedia data. However, few methods consider the temporal feature of video data, which is a distinguishing and representative characteristic when compared with other multimedia (i.e., images). In order to consider the temporal feature effectively and efficiently, we propose a new graph-based data structure called the Spatio-Temporal Region Graph (STRG). Unlike existing graph-based data structures, which provide only spatial features, the proposed STRG further provides temporal features, which represent temporal relationships among spatial objects. The STRG is decomposed into its subgraphs, in which redundant subgraphs are eliminated to reduce the index size and search time, because the computational complexity of graph matching (subgraph isomorphism) is NP-complete. In addition, a new distance measure, called Extended Graph Edit Distance (EGED), is introduced in both non-metric and metric spaces, for matching and indexing respectively. Based on STRG and EGED, we propose a new indexing method, STRG-Index, which is faster and more accurate because it uses a tree structure and a clustering algorithm. We compare the STRG-Index with the M-tree, a popular tree-based indexing method for multimedia data. The STRG-Index outperforms the M-tree for various query loads in terms of cost and speed.
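EGED itself is defined over spatio-temporal region graphs; as a rough, hypothetical stand-in, the classic edit-distance dynamic program it extends can be shown on the sequences of region labels two videos produce over time. The costs here are illustrative, not the paper's.

```python
# Classic edit-distance DP (the basis that EGED extends), applied to
# label sequences as a hedged stand-in for graph matching. The unit
# substitution/indel costs are illustrative assumptions.
def edit_distance(a, b, sub_cost=1, indel_cost=1):
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i * indel_cost
    for j in range(n + 1):
        d[0][j] = j * indel_cost
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(
                d[i - 1][j] + indel_cost,      # delete a[i-1]
                d[i][j - 1] + indel_cost,      # insert b[j-1]
                d[i - 1][j - 1] + (0 if a[i - 1] == b[j - 1] else sub_cost),
            )
    return d[m][n]

print(edit_distance(["car", "road", "tree"], ["car", "road", "sky"]))  # one substitution
```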


IEEE Transactions on Biomedical Engineering | 2007

Computer-Aided Detection of Diagnostic and Therapeutic Operations in Colonoscopy Videos

Yu Cao; Danyu Liu; Wallapak Tavanapong; Johnny Wong; JungHwan Oh; P. C. De Groen

Colonoscopy is an endoscopic technique that allows a physician to inspect the inside of the human colon and to perform - if deemed necessary - at the same time a number of diagnostic and therapeutic operations. In order to see the inside of the colon, a video signal of the internal mucosa of the colon is generated by a tiny video camera at the tip of the endoscope and displayed on a monitor for real-time analysis by the physician. We have captured and stored these videos in digital format and call these colonoscopy videos. Based on new algorithms for instrument detection and shot segmentation, we introduce new spatio-temporal analysis techniques to automatically identify an operation shot - a segment of visual data in a colonoscopy video that corresponds to a diagnostic or therapeutic operation. Our experiments on real colonoscopy videos demonstrate the effectiveness of the proposed approach. The proposed techniques and software are useful for 1) postprocedure review for causes of complications due to diagnostic or therapeutic operations; 2) establishment of an effective content-based retrieval system to facilitate endoscopic research and education; 3) development of a systematic approach to assess and improve the procedural skills of endoscopists.


International Conference of the IEEE Engineering in Medicine and Biology Society | 2008

Bleeding detection from capsule endoscopy videos

Balathasan Giritharan; Xiaohui Yuan; Jianguo Liu; Bill P. Buckles; JungHwan Oh; Shou-Jiang Tang

Reviewing medical videos for the presence of disease signs presents a unique problem compared to conventional image classification tasks. A learning process based on an imbalanced data set is heavily biased and tends to result in low sensitivity. In this article, we present a classification method for finding video frames that contain bleeding lesions. Our method re-balances the training samples by over-sampling the minority class and under-sampling the majority class. An SVM ensemble is then constructed using the re-balanced data over three kinds of image features. Five sets of image frames were used in our experiments, each of which contains approximately 55,000 images with a minority-to-majority class ratio of about 1:145. Our preliminary results demonstrated superior performance in sensitivity and a slight improvement in specificity.
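The re-balancing step can be sketched with plain Python; this is a stdlib-only stand-in, not the paper's SVM ensemble. We over-sample the minority class, under-sample the majority class, and let several simple classifiers vote. The "redness" feature, thresholds, and target class size are illustrative assumptions.

```python
import random
from collections import Counter

# Stdlib-only sketch of the re-balancing idea (the real method trains
# an SVM ensemble on three kinds of image features; the "redness"
# feature and threshold voters below are illustrative assumptions).
def rebalance(samples, labels, target_per_class, seed=0):
    rng = random.Random(seed)
    by_label = {}
    for x, y in zip(samples, labels):
        by_label.setdefault(y, []).append(x)
    xs, ys = [], []
    for y, group in sorted(by_label.items()):
        if len(group) >= target_per_class:              # under-sample majority
            chosen = rng.sample(group, target_per_class)
        else:                                           # over-sample minority
            chosen = [rng.choice(group) for _ in range(target_per_class)]
        xs += chosen
        ys += [y] * target_per_class
    return xs, ys

def majority_vote(classifiers, x):
    votes = Counter(clf(x) for clf in classifiers)
    return votes.most_common(1)[0][0]

# Tiny demo: feature is "redness"; label 1 = bleeding (minority class).
xs = [0.9, 0.8] + [0.1] * 20
ys = [1, 1] + [0] * 20
bx, by = rebalance(xs, ys, target_per_class=10)
print(sorted(Counter(by).items()))   # classes are now balanced
clfs = [lambda x: int(x > 0.5), lambda x: int(x > 0.6), lambda x: int(x > 0.4)]
print(majority_vote(clfs, 0.85))     # a very red frame is voted "bleeding"
```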


IEEE Transactions on Biomedical Engineering | 2009

Measuring Objective Quality of Colonoscopy

JungHwan Oh; Sae Hwang; Yu Cao; Wallapak Tavanapong; Danyu Liu; Johnny Wong; P. C. De Groen

Advances in video technology are being incorporated into today's healthcare practices. Colonoscopy is regarded as one of the most important diagnostic tools for colorectal cancer. Indeed, colonoscopy has contributed to a decline in the number of colorectal-cancer-related deaths. Although colonoscopy has become the preferred screening modality for prevention of colorectal cancer, recent data suggest that there is a significant miss rate for the detection of large polyps and cancers, and methods to investigate why this occurs are needed. To address this problem, we present a new computer-based method that analyzes a digitized video file of a colonoscopic procedure and produces a number of metrics that likely reflect the quality of the procedure. The method consists of a set of novel image-processing algorithms designed to address new technical challenges due to uncommon characteristics of videos captured during colonoscopy. As these measurements can be obtained automatically, our method enables future quality control in large-scale day-to-day medical practice, which is currently not feasible. In addition, our method can be adapted to other endoscopic procedures such as upper gastrointestinal endoscopy, enteroscopy, and bronchoscopy. Last but not least, our method may be useful to assess progress during colonoscopy training.

Collaboration


Dive into JungHwan Oh's collaboration.

Top Co-Authors

Kien A. Hua, University of Central Florida
Sae Hwang, University of Texas at Arlington
Jeongkyu Lee, University of Bridgeport
Yu Cao, Arizona State University