Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jeongkyu Lee is active.

Publication


Featured research published by Jeongkyu Lee.


Advances in Social Networks Analysis and Mining | 2012

@Phillies Tweeting from Philly? Predicting Twitter User Locations with Spatial Word Usage

Hau-Wen Chang; Dongwon Lee; Mohammed Eltaher; Jeongkyu Lee

We study the problem of predicting the home locations of Twitter users using the contents of their tweet messages. Using three probability models for locations, we compare the Gaussian Mixture Model (GMM) and Maximum Likelihood Estimation (MLE). In addition, we propose two novel unsupervised methods based on the notions of Non-Localness and Geometric-Localness to prune noisy data from tweet messages. In the experiments, our unsupervised approach improves the baselines significantly and shows results comparable to the supervised state-of-the-art method. For the 5,113 Twitter users in the test set, on average, our approach with only 250 selected local words or fewer is able to predict their home locations (within 100 miles) with an accuracy of 0.499, or achieves an average error distance of 509.3 miles at best.
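
As a loose illustration of the word-level approach, the sketch below fits a spatial Gaussian Mixture Model per word and picks the candidate city that maximizes the summed log-likelihood of a user's words. The words, coordinates, and single-component mixtures are synthetic assumptions; the paper's Non-Localness and Geometric-Localness pruning is omitted.

```python
# Minimal sketch of word-based location prediction: fit a spatial GMM per
# word, then score candidate home locations by summed log-likelihood.
# Synthetic data; the paper's actual models and heuristics differ.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical training data: (lat, lon) samples for two "local" words.
word_coords = {
    "phillies": rng.normal([39.95, -75.17], 0.5, size=(200, 2)),  # around Philadelphia
    "rockies":  rng.normal([39.74, -104.99], 0.5, size=(200, 2)), # around Denver
}

# One GMM per word models where that word tends to be tweeted from.
word_gmms = {w: GaussianMixture(n_components=1).fit(xy)
             for w, xy in word_coords.items()}

def predict_home(words, candidates):
    """Pick the candidate location maximizing the summed log-likelihood
    of the user's words (words without a trained GMM are ignored)."""
    def score(loc):
        pt = np.asarray(loc).reshape(1, -1)
        return sum(word_gmms[w].score_samples(pt)[0]
                   for w in words if w in word_gmms)
    return max(candidates, key=score)

cities = {"Philadelphia": (39.95, -75.17), "Denver": (39.74, -104.99)}
best = predict_home(["phillies", "cheesesteak"], cities.values())
print(best)  # expected to be Philadelphia's coordinates
```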


ACM Symposium on Applied Computing | 2007

Automatic classification of digestive organs in wireless capsule endoscopy videos

Jeongkyu Lee; JungHwan Oh; Subodh Kumar Shah; Xiaohui Yuan; Shou-Jiang Tang

Wireless Capsule Endoscopy (WCE) allows a physician to examine the entire small intestine without any surgical operation. With the miniaturization of wireless and camera technologies comes the ability to view the entire gastrointestinal tract with little effort. Although WCE is a technical breakthrough that allows us to access the entire intestine without surgery, a medical clinician is reported to spend one to two hours assessing a WCE video, which limits the number of examinations possible and incurs considerable cost. To reduce the assessment time, it is critical to develop a technique to automatically discriminate digestive organs such as the esophagus, stomach, small intestine (i.e., duodenum, jejunum, and ileum), and colon. In this paper, we propose a novel technique to segment a WCE video into these anatomic parts based on color change pattern analysis. The basic idea is that each digestive organ has different patterns of intestinal contractions that are quantified as the features. We present experimental results that demonstrate the effectiveness of the proposed method.
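
A rough sketch of the color-change idea under simplifying assumptions: each frame is reduced to its mean hue, and organ boundaries are placed where a windowed mean of that signal jumps. The paper's actual features are contraction patterns, not plain hue means; the frames, threshold, and window size here are synthetic.

```python
# Toy change-point detector over a per-frame mean-hue signal.
import numpy as np

def mean_hue_signal(frames):
    """frames: array (n, H, W) of per-pixel hue values. Returns per-frame mean hue."""
    return frames.reshape(len(frames), -1).mean(axis=1)

def segment_by_color_change(signal, window=15, threshold=20.0):
    """Flag frame i as an organ boundary when the mean of the next `window`
    frames differs from the mean of the previous `window` by > threshold."""
    boundaries = []
    for i in range(window, len(signal) - window):
        before = signal[i - window:i].mean()
        after = signal[i:i + window].mean()
        far_from_last = not boundaries or i - boundaries[-1] > 2 * window
        if abs(after - before) > threshold and far_from_last:
            boundaries.append(i)
    return boundaries

# Synthetic video: three "organs" with different dominant hues plus noise.
rng = np.random.default_rng(1)
hues = np.concatenate([np.full(100, 20.0), np.full(100, 90.0), np.full(100, 150.0)])
frames = hues[:, None, None] + rng.normal(0, 3, size=(300, 8, 8))

print(segment_by_color_change(mean_hue_signal(frames)))
# boundaries near frames 100 and 200 (slightly early due to the look-ahead window)
```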


Medical Image Analysis | 2007

Informative frame classification for endoscopy video

JungHwan Oh; Sae Hwang; Jeongkyu Lee; Wallapak Tavanapong; Johnny Wong; Piet C. de Groen

Advances in video technology allow inspection, diagnosis and treatment of the inside of the human body without, or with very small, scars. Flexible endoscopes are used to inspect the esophagus, stomach, small bowel, colon, and airways, whereas rigid endoscopes are used for a variety of minimally invasive surgeries (i.e., laparoscopy, arthroscopy, endoscopic neurosurgery). These endoscopes come in various sizes, but all have a tiny video camera at the tip. During an endoscopic procedure, the tiny video camera generates a video signal of the interior of the human organ, which is displayed on a monitor for real-time analysis by the physician. However, many out-of-focus frames are present in endoscopy videos because current endoscopes are equipped with a single, wide-angle lens that cannot be focused. We need to distinguish the out-of-focus frames from the in-focus frames to utilize the information of the out-of-focus and/or the in-focus frames for further automatic or semi-automatic computer-aided diagnosis (CAD). This classification can reduce the number of images to be viewed by a physician and to be analyzed by a CAD system. We call an out-of-focus frame a non-informative frame and an in-focus frame an informative frame. The out-of-focus frames have characteristics that are different from those of in-focus frames. In this paper, we propose two new techniques (edge-based and clustering-based) to classify video frames into two classes, informative and non-informative frames. However, because intense specular reflections reduce the accuracy of the classification, we also propose a specular reflection detection technique, and use the detected specular reflection information to increase the accuracy of informative frame classification. Our experimental studies indicate that precision, sensitivity, specificity, and accuracy for the specular reflection detection technique and the two informative frame classification techniques are greater than 90% and 95%, respectively.
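
A toy version of the edge-based technique only, assuming blurry (non-informative) frames have low gradient energy; the threshold and frames are made up, and the specular-reflection handling and clustering-based classifier are omitted.

```python
# Heuristic informative-frame check: threshold the mean gradient magnitude.
import numpy as np

def edge_score(frame):
    """Mean gradient magnitude of a grayscale frame (2-D array)."""
    gy, gx = np.gradient(frame.astype(float))
    return float(np.hypot(gx, gy).mean())

def is_informative(frame, threshold=5.0):
    """Frames with enough edge energy are treated as 'in focus'."""
    return edge_score(frame) > threshold

rng = np.random.default_rng(2)
sharp = rng.integers(0, 256, size=(64, 64)).astype(float)        # high-frequency content
blurred = np.full((64, 64), 128.0) + rng.normal(0, 1, (64, 64))  # nearly flat frame
print(is_informative(sharp), is_informative(blurred))  # True False
```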


International Conference on Management of Data | 2005

STRG-Index: spatio-temporal region graph indexing for large video databases

Jeongkyu Lee; JungHwan Oh; Sae Hwang

In this paper, we propose a new graph-based data structure and indexing method to organize and retrieve video data. Several studies have shown that a graph can be a better candidate for modeling semantically rich and complicated multimedia data. However, few methods consider the temporal feature of video data, which is a distinguishing and representative characteristic compared with other multimedia (i.e., images). To consider the temporal feature effectively and efficiently, we propose a new graph-based data structure called the Spatio-Temporal Region Graph (STRG). Unlike existing graph-based data structures, which provide only spatial features, the proposed STRG further provides temporal features representing temporal relationships among spatial objects. The STRG is decomposed into its subgraphs, and redundant subgraphs are eliminated to reduce the index size and search time, because the computational complexity of graph matching (subgraph isomorphism) is NP-complete. In addition, a new distance measure, called Extended Graph Edit Distance (EGED), is introduced in non-metric and metric spaces for matching and indexing, respectively. Based on the STRG and EGED, we propose a new indexing method, STRG-Index, which is faster and more accurate since it uses a tree structure and a clustering algorithm. We compare the STRG-Index with the M-tree, a popular tree-based indexing method for multimedia data. The STRG-Index outperforms the M-tree for various query loads in terms of cost and speed.
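
The EGED itself operates on graphs; as a much-simplified stand-in for the temporal matching it performs, the sketch below computes a dynamic-programming edit distance between two temporal sequences of region labels. This is illustrative only, not the paper's actual measure.

```python
# Classic edit distance between label sequences, standing in for EGED.
def edit_distance(a, b):
    """Dynamic-programming edit distance between sequences a and b."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # delete
                          d[i][j - 1] + 1,         # insert
                          d[i - 1][j - 1] + cost)  # substitute
    return d[m][n]

seq1 = ["car", "car", "person", "person", "person"]
seq2 = ["car", "person", "person", "dog"]
print(edit_distance(seq1, seq2))  # 2: one deletion, one substitution
```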


ACM Multimedia | 2005

Automatic measurement of quality metrics for colonoscopy videos

Sae Hwang; JungHwan Oh; Jeongkyu Lee; Yu Cao; Wallapak Tavanapong; Danyu Liu; Johnny Wong; Piet C. de Groen

Colonoscopy is the accepted screening method for detection of colorectal cancer or its precursor lesions, colorectal polyps. Indeed, colonoscopy has contributed to a decline in the number of colorectal cancer-related deaths. However, not all cancers or large polyps are detected at the time of colonoscopy, and methods to investigate why this occurs are needed. We present a new computer-based method that allows automated measurement of a number of metrics that likely reflect the quality of the colonoscopic procedure. The method is based on analysis of a digitized video file created during colonoscopy, and produces information regarding insertion time, withdrawal time, images at the time of maximal intubation, the time and ratio of clear versus blurred or non-informative images, and a first estimate of effort performed by the endoscopist. As these metrics can be obtained automatically, our method allows future quality control in the day-to-day medical practice setting on a large scale. In addition, our method can be adapted to other healthcare procedures. Last but not least, our method may be useful to assess progress during colonoscopy training, or as part of endoscopic skills assessment evaluations.
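
A back-of-the-envelope sketch of two of the reported metrics, insertion/withdrawal time and the clear-frame ratio, assuming the per-frame clear/blurred labels and the maximal-intubation frame are already known; the paper derives these from the video automatically, and the frame rate here is an assumption.

```python
# Summarize a procedure from per-frame labels (all inputs hypothetical).
import random

FPS = 30  # assumed frame rate

def quality_metrics(clear_flags, max_intubation_frame):
    """clear_flags: one boolean per frame (True = clear/informative)."""
    n = len(clear_flags)
    return {
        "insertion_time_s": max_intubation_frame / FPS,
        "withdrawal_time_s": (n - max_intubation_frame) / FPS,
        "clear_ratio": sum(clear_flags) / n,
    }

# Hypothetical 20-minute procedure, maximal intubation at minute 8,
# roughly 80% clear frames.
random.seed(3)
frames = [random.random() > 0.2 for _ in range(20 * 60 * FPS)]
print(quality_metrics(frames, max_intubation_frame=8 * 60 * FPS))
```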


ACM Multimedia | 2005

Scenario based dynamic video abstractions using graph matching

Jeongkyu Lee; JungHwan Oh; Sae Hwang

In this paper, we present scenario-based dynamic video abstractions using graph matching. Our approach has two main components: multi-level scenario generation and dynamic video abstraction. Multi-level scenarios are generated by graph-based video segmentation and a hierarchy of the segments. Dynamic video abstractions are accomplished by accessing the generated hierarchy level by level. The first step in the proposed approach is to segment a video into shots using the Region Adjacency Graph (RAG), which expresses spatial relationships among the segmented regions of a frame. To measure the similarity between two consecutive RAGs, we propose a new similarity measure, called the Graph Similarity Measure (GSM). Next, we construct a tree structure called a scene tree based on the correlation between the detected shots; the correlation is computed by the GSM since it properly captures the relations between the detected shots. Multi-level scenarios, which provide various levels of video abstraction, are generated using the constructed scene tree. We provide two types of abstraction using multi-level scenarios: multi-level highlights and multi-length summarizations. Multi-level highlights are composed of the entire shots at each scenario level. To summarize a video at various lengths, we select key frames by considering temporal relationships among RAGs computed by the GSM. We have developed a system, called the Automatic Video Analysis System (AVAS), by integrating the proposed techniques to show their effectiveness. The experimental results show that the proposed techniques are promising.
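
As a small illustration of the first step, the sketch below represents each frame's RAG as node and edge sets and compares consecutive frames with a Jaccard overlap; this crude score is a stand-in for the paper's GSM, which is defined differently.

```python
# Compare per-frame Region Adjacency Graphs with a simple set overlap.
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def rag_similarity(rag1, rag2):
    """rag: (nodes, edges) where nodes is a set of region labels and
    edges a set of frozensets of adjacent label pairs."""
    node_sim = jaccard(rag1[0], rag2[0])
    edge_sim = jaccard(rag1[1], rag2[1])
    return 0.5 * (node_sim + edge_sim)

frame1 = ({"sky", "field", "player"},
          {frozenset(("sky", "field")), frozenset(("field", "player"))})
frame2 = ({"sky", "field", "player"},
          {frozenset(("sky", "field")), frozenset(("field", "player"))})
frame3 = ({"studio", "anchor"}, {frozenset(("studio", "anchor"))})

print(rag_similarity(frame1, frame2))  # 1.0 -> same shot
print(rag_similarity(frame1, frame3))  # low -> likely shot boundary
```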


Knowledge Discovery and Data Mining | 2003

Real time video data mining for surveillance video streams

JungHwan Oh; Jeongkyu Lee; Sanjaykumar Kote

We extend our previous work [1] on a general framework for video data mining to further address how to mine video data using motions in video streams. To extract and characterize these motions, we use an accumulation of quantized pixel differences among all frames in a video segment. As a result, the accumulated motions of a segment are represented as a two-dimensional matrix. Further, we show how to capture the location of motions occurring in a segment using the same matrix generated for the calculation of the amount of motion. We study how to cluster the segmented pieces using the features (the amount and the location of motions) extracted with this matrix. We investigate an algorithm to determine whether a segment contains normal or abnormal events by clustering and modeling normal events, which occur most often. In addition to deciding normal or abnormal, the algorithm computes a Degree of Abnormality for a segment, which represents how distant the segment is from the existing segments associated with normal events. Our experimental studies indicate that the proposed techniques are promising.
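
The accumulated motion matrix lends itself to a direct sketch: count, per pixel, how often the inter-frame difference exceeds a quantization threshold, then summarize the segment by total motion and its centroid. The threshold and data are synthetic, and the clustering and abnormality modeling are omitted.

```python
# Segment-level motion matrix and two features derived from it.
import numpy as np

def motion_matrix(frames, threshold=10):
    """frames: array (n, H, W). Count, per pixel, how often the absolute
    inter-frame difference exceeds the quantization threshold."""
    diffs = np.abs(np.diff(frames.astype(float), axis=0))
    return (diffs > threshold).sum(axis=0)

def motion_features(matrix):
    """Summarize a segment by total motion and its weighted centroid."""
    total = matrix.sum()
    if total == 0:
        return {"amount": 0, "location": None}
    ys, xs = np.indices(matrix.shape)
    centroid = (float((ys * matrix).sum() / total),
                float((xs * matrix).sum() / total))
    return {"amount": int(total), "location": centroid}

rng = np.random.default_rng(4)
frames = rng.normal(100, 2, size=(30, 32, 32))
frames[:, 8:12, 20:24] += np.linspace(0, 500, 30)[:, None, None]  # brightening patch simulates localized motion
print(motion_features(motion_matrix(frames)))  # centroid near (9.5, 21.5)
```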


European Conference on Information Retrieval | 2010

BASIL: effective near-duplicate image detection using gene sequence alignment

Hung-sik Kim; Hau-Wen Chang; Jeongkyu Lee; Dongwon Lee

Finding near-duplicate images is a task often found in Multimedia Information Retrieval (MIR). Toward this effort, we propose a novel idea by bridging two seemingly unrelated fields: MIR and Biology. That is, we propose to use the popular gene sequence alignment algorithm in Biology, i.e., BLAST, to detect near-duplicate images. Under this idea, we study how various image features and gene sequence generation methods (using gene alphabets such as A, C, G, and T in DNA sequences) affect the accuracy and performance of detecting near-duplicate images. Our proposal, termed BLASTed Image Linkage (BASIL), is empirically validated using various real data sets. This work can be viewed as the “first” step toward bridging the MIR and Biology fields in the well-studied near-duplicate image detection problem.
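
A toy rendition of the sequence-generation idea: quantize a simple per-row image feature into the A/C/G/T alphabet and compare the resulting strings. BASIL aligns sequences with BLAST; the stdlib difflib matcher below is substituted only to keep the sketch self-contained, and the quartile-based quantization is an assumption.

```python
# Turn images into DNA-like strings, then compare the strings.
import difflib
import numpy as np

ALPHABET = "ACGT"

def image_to_sequence(gray):
    """Quantize row-mean intensities of a grayscale image into A/C/G/T,
    one letter per row, using image-relative quartile cuts."""
    vals = gray.mean(axis=1)
    cuts = np.quantile(vals, [0.25, 0.5, 0.75])
    return "".join(ALPHABET[i] for i in np.digitize(vals, cuts))

def near_duplicate_score(img1, img2):
    s1, s2 = image_to_sequence(img1), image_to_sequence(img2)
    return difflib.SequenceMatcher(None, s1, s2).ratio()  # 1.0 = identical

rng = np.random.default_rng(5)
original = rng.integers(0, 256, size=(64, 64)).astype(float)
near_dup = np.clip(original + rng.normal(0, 5, original.shape), 0, 255)  # slightly edited copy
unrelated = rng.integers(0, 256, size=(64, 64)).astype(float)

print(near_duplicate_score(original, near_dup))   # close to 1.0
print(near_duplicate_score(original, unrelated))  # noticeably lower
```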


Diagnostic and Therapeutic Endoscopy | 2012

A Review of Machine-Vision-Based Analysis of Wireless Capsule Endoscopy Video

Yingju Chen; Jeongkyu Lee

Wireless capsule endoscopy (WCE) enables a physician to diagnose a patient's digestive system without surgical procedures. However, it takes one to two hours for a gastroenterologist to examine the video. To speed up the review process, a number of analysis techniques based on machine vision have been proposed by computer science researchers. In order to train a machine to understand the semantics of an image, the image contents need to be translated into numerical form first. The numerical form of the image is known as image abstraction. The process of selecting relevant image features is often determined by the modality of the medical images and the nature of the diagnoses. For example, there are radiographic projection-based images (e.g., X-rays and PET scans), tomography-based images (e.g., MRI and CT scans), and photography-based images (e.g., endoscopy, dermatology, and microscopic histology). Each modality imposes unique image-dependent restrictions on automatic and medically meaningful image abstraction processes. In this paper, we review the current development of machine-vision-based analysis of WCE video, focusing on research that identifies specific gastrointestinal (GI) pathology and methods of shot boundary detection.


International Conference on Multimedia and Expo | 2005

Clustering of Video Objects by Graph Matching

Jeongkyu Lee; JungHwan Oh; Sae Hwang

We propose a new graph-based data structure, called the Spatio-Temporal Region Graph (STRG), which can represent the content of a video sequence. Unlike existing structures, which mainly consider spatial information at the frame level of a video, the proposed STRG additionally formulates temporal information at the video level. After an STRG is constructed from a given video sequence, it is decomposed into subgraphs called Object Graphs (OGs), which represent the temporal characteristics of video objects. For unsupervised learning, we cluster similar OGs into groups, which requires matching two OGs. For this graph matching, we introduce a new distance measure, called the Extended Graph Edit Distance (EGED), which can handle the temporal characteristics of OGs. For the actual clustering, we exploit Expectation Maximization (EM) with the EGED. The experiments have been conducted on real video streams, and their results show the effectiveness and robustness of the proposed schemes.
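
The paper runs EM with the EGED directly on object graphs; the sketch below illustrates only the EM clustering step, embedding each object as a made-up two-feature vector (speed, size) and fitting scikit-learn's EM-based Gaussian mixture.

```python
# EM clustering of object feature vectors (a simplified stand-in for
# clustering OGs with EGED).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(6)
# Two hypothetical object types: slow/large vs fast/small.
slow_large = rng.normal([1.0, 40.0], [0.3, 5.0], size=(50, 2))
fast_small = rng.normal([8.0, 10.0], [1.0, 3.0], size=(50, 2))
features = np.vstack([slow_large, fast_small])

gmm = GaussianMixture(n_components=2, random_state=0).fit(features)
labels = gmm.predict(features)
print(labels[:50].mean(), labels[50:].mean())  # two clean clusters (near 0/1 or 1/0)
```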

Collaboration


Dive into Jeongkyu Lee's collaborations.

Top Co-Authors

JungHwan Oh
University of North Texas

Sae Hwang
University of Texas at Arlington

Dongwon Lee
Pennsylvania State University

Khaled Almgren
University of Bridgeport

Yingju Chen
University of Bridgeport

Hassan Bajwa
University of Bridgeport

Hung-sik Kim
Pennsylvania State University