
Publication


Featured research published by Kyoung-Ho Choi.


Computers | 2013

A Review on Video-Based Human Activity Recognition

Shian-Ru Ke; Hoang Le Uyen Thuc; Yong-Jin Lee; Jenq-Neng Hwang; Jang-Hee Yoo; Kyoung-Ho Choi

This review article extensively surveys current progress in video-based human activity recognition. Three aspects are addressed, spanning low-level to high-level representation: core technology, human activity recognition systems, and applications. Within the core technology, three critical processing stages are discussed in depth: human object segmentation, feature extraction and representation, and activity detection and classification algorithms. Within human activity recognition systems, three main types are covered: single-person activity recognition, multiple-people interaction and crowd behavior, and abnormal activity recognition. Finally, the application domains are discussed in detail, specifically surveillance environments, entertainment environments, and healthcare systems. The survey aims to provide a comprehensive state-of-the-art review of the field, addresses several challenges associated with these systems and applications, and gives particular attention to healthcare monitoring systems.
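
The three processing stages named above can be illustrated with a deliberately simplified sketch. The differencing threshold, the area/centroid features, and the nearest-centroid classifier below are illustrative stand-ins for the families of methods the survey covers, not any specific system it reviews.

```python
import numpy as np

def segment_foreground(frame, background, thresh=30):
    """Stage 1: label pixels that differ from a background model as foreground."""
    return np.abs(frame.astype(int) - background.astype(int)) > thresh

def extract_features(mask):
    """Stage 2: represent the silhouette by its area ratio and centroid (toy features)."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return np.zeros(3)
    return np.array([mask.mean(), xs.mean(), ys.mean()])

def classify(feature, centroids):
    """Stage 3: nearest-centroid activity label."""
    dists = {label: np.linalg.norm(feature - c) for label, c in centroids.items()}
    return min(dists, key=dists.get)
```

Real systems replace each stage with far richer models (e.g., learned features and sequential classifiers), but the segmentation-features-classification structure is the same.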


Signal Processing Systems | 2001

Hidden Markov Model Inversion for Audio-to-Visual Conversion in an MPEG-4 Facial Animation System

Kyoung-Ho Choi; Ying Luo; Jenq-Neng Hwang

The MPEG-4 standard allows the composition of natural or synthetic video with facial animation. Based on this standard, an animated face can be inserted into natural or synthetic video to create new virtual working environments such as virtual meetings or virtual collaborative environments. For these applications, audio-to-visual conversion techniques can be used to generate a talking face that is synchronized with the voice. In this paper, we address the audio-to-visual conversion problem by introducing a novel Hidden Markov Model Inversion (HMMI) method. In training audio-visual HMMs, the model parameters {λav} are chosen to optimize a criterion such as maximum likelihood. In inverting audio-visual HMMs, the visual parameters that optimize a criterion are found given the speech and the model parameters {λav}. Using the proposed HMMI technique, an animated talking face can be synchronized with audio and driven realistically. A virtual conference system named VIRTUAL-FACE, which combines the HMMI technique with the MPEG-4 standard, is introduced to illustrate the role of HMMI in MPEG-4 facial animation applications.
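
The inversion idea can be sketched in a toy form: once a state is decoded from the audio, the visual parameter that maximizes a joint-Gaussian emission likelihood has a closed form (the conditional mean of visual given audio). The two-state model and its numbers below are invented for illustration and are not the paper's actual audio-visual HMM.

```python
import numpy as np

def decode_state(audio, state_audio_means):
    """Pick the state whose audio mean is closest to the observation
    (a crude stand-in for Viterbi decoding over an HMM)."""
    return int(np.argmin([abs(audio - m) for m in state_audio_means]))

def invert_visual(audio, state, params):
    """E[v | a, state] for a 2-D joint Gaussian emission: the visual value
    maximizing the state's audio-visual likelihood given the audio.
    params[state] = (mu_a, mu_v, cov_va, cov_aa)."""
    mu_a, mu_v, cov_va, cov_aa = params[state]
    return mu_v + cov_va / cov_aa * (audio - mu_a)
```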


International Conference on Consumer Electronics | 2011

A motion and similarity-based fake detection method for biometric face recognition systems

Younghwan Kim; Jang-Hee Yoo; Kyoung-Ho Choi

In this paper, a motion- and similarity-based fake detection algorithm is presented for biometric face recognition systems. First, an input video is segmented into foreground and background regions. Second, similarity is measured between the current background region, i.e., the region excluding the face and upper body, and an original background region recorded at an initialization stage. Third, a background motion index is calculated to indicate the amount of motion in the background region relative to the motion in the foreground region. By combining the similarity measure with the background motion index, a fake video can be detected robustly with a regular USB camera.
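
The decision logic can be sketched as two cues combined by a rule: a genuine session should show a background that matches the enrolled reference and stays still relative to the subject. The correlation measure, ratio definition, and thresholds below are plausible assumptions for illustration, not the paper's calibrated values.

```python
import numpy as np

def background_similarity(bg_now, bg_ref):
    """Normalized correlation between the current and reference backgrounds."""
    a = bg_now.ravel().astype(float); b = bg_ref.ravel().astype(float)
    a -= a.mean(); b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 1.0

def background_motion_index(bg_motion, fg_motion, eps=1e-6):
    """Ratio of background motion to foreground motion."""
    return bg_motion / (fg_motion + eps)

def is_fake(sim, bmi, sim_thresh=0.9, bmi_thresh=0.5):
    """Flag a replayed video: the background either changed (low similarity)
    or moves together with the foreground (high motion index)."""
    return sim < sim_thresh or bmi > bmi_thresh
```

A replayed screen recording tends to fail both tests at once, which is why combining the cues makes the decision robust.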


International Geoscience and Remote Sensing Symposium | 2003

MPEG-7 metadata for video-based GIS applications

Tae-Hyun Hwang; Kyoung-Ho Choi; In-Hak Joo; Jong-Hun Lee

In this paper, we present an MPEG-7 metadata scheme designed for a video-based Geographic Information System (VideoGIS). Recently, VideoGIS has been developed for the integrated management of spatial data such as maps, 3D graphics, still images, and video, providing better and more realistic information about spatial objects. The main functions of VideoGIS are to calculate the 3D geographic position of a spatial object and to retrieve the images and video that correspond to that object. The proposed MPEG-7 metadata scheme targets emerging Location-Based Services (LBS), e.g., tourist information and restaurant reservation services, which are becoming increasingly popular in mobile environments.


Journal of Intelligent Transportation Systems | 2010

Methods to Detect Road Features for Video-Based In-Vehicle Navigation Systems

Kyoung-Ho Choi; Soon Young Park; Seong Hoon Kim; Kisung Lee; Jeong Ho Park; Seong Ik Cho; Jong Hyun Park

Understanding road features, such as the position and color of lane markings, in live video captured from a moving vehicle is essential for building video-based car navigation systems. In this article, the authors present a framework to detect road features in two difficult situations: (a) ambiguous road surface conditions (i.e., damaged roads and lane markings occluded by other vehicles) and (b) poor illumination conditions (e.g., backlight or sunset). Furthermore, to determine which lane the driver is in, the authors present a Bayesian network (BN) model, which is necessary to support more sophisticated navigation services such as recommending a lane change at an appropriate time before turning left or right at the next intersection. In the proposed BN approach, evidence from (1) a computer vision engine (e.g., lane-color detection) and (2) a navigation database (e.g., the total number of lanes) is fused to decide the lane number more accurately. Extensive simulation results indicate that the proposed methods are both robust and effective in detecting road features for a video-based car navigation system.
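
The fusion step can be sketched as simple Bayesian updating: the map database supplies a prior over lane numbers and the vision engine supplies a likelihood, and the posterior is their normalized product. The toy probabilities below are assumptions; the paper's actual BN structure and conditional tables are not reproduced here.

```python
def fuse_lane_posterior(prior, likelihood):
    """Bayesian fusion over lane numbers: posterior ∝ prior × likelihood.
    `prior` comes from the navigation database (e.g., lane count),
    `likelihood` from the vision evidence (e.g., lane-color detection)."""
    post = {lane: prior[lane] * likelihood[lane] for lane in prior}
    z = sum(post.values())
    return {lane: p / z for lane, p in post.items()}
```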


Vehicular Technology Conference | 2003

A bimodal approach for GPS and IMU integration for land vehicle applications

Seong-Baek Kim; Seung-Yong Lee; Ji-Hoon Choi; Kyoung-Ho Choi; Byung-Tae Jang

In this paper, we present a novel approach to integrating a low-cost inertial measurement unit (IMU) and a differential global positioning system (DGPS) for land vehicle applications. Since the use of high-performance inertial navigation systems (INS) is restricted by their high price and government regulation, low-cost IMUs are generally used for most land vehicle applications. However, a low-cost IMU accumulates large positioning and attitude errors in a very short time due to the poor quality of its inertial sensor assembly. When GPS information is available, a good navigation solution can be obtained by incorporating DGPS data; when the GPS signal is unavailable, however, a reliable result cannot be provided for land vehicle navigation. To overcome this limitation, we present a bimodal approach for integrating the IMU and DGPS that takes advantage of position and orientation data calculated from CCD images using photogrammetry and stereo-vision techniques. The position and orientation data from the photogrammetric approach are fed back into the Kalman filter to compensate for IMU errors and improve performance. Experimental results demonstrate the robustness of the proposed method, which can provide accurate position and attitude information for extended periods without GPS assistance.
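
The feedback loop described above follows the standard Kalman predict/correct cycle: the IMU drives the prediction, and an external position fix (from DGPS when available, otherwise from the vision-derived estimate) drives the correction. The matrices below are a minimal textbook sketch for a 1-D position/velocity state, not the paper's actual filter design.

```python
import numpy as np

def kalman_predict(x, P, F, Q):
    """Propagate the state and covariance through the motion model (IMU step)."""
    return F @ x, F @ P @ F.T + Q

def kalman_update(x, P, z, H, R):
    """Correct the state with an external position fix (DGPS or the
    photogrammetric/stereo-vision estimate during GPS outages)."""
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ (z - H @ x)             # corrected state
    P = (np.eye(len(x)) - K @ H) @ P    # corrected covariance
    return x, P
```

The key point in the paper's scheme is that the update step still runs during GPS outages, with the vision-derived fix bounding the IMU drift.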


International Conference on Acoustics, Speech, and Signal Processing | 2004

A scalable VideoGIS system for GPS-guided vehicles

Qiang Liu; Kyoung-Ho Choi; Jaejun Yoo; Jenq-Neng Hwang

A VideoGIS system aims at combining georeferenced video with traditional geographic information to provide a more comprehensive understanding of a spatial location. Video data have been combined with geographic information in several projects to facilitate a better understanding of the spatial objects of interest. We present an ongoing VideoGIS project in which scalable georeferenced video and geographic information (GI) are transmitted to GPS-guided vehicles. The hypermedia, which contain cross-referenced video and GI, are organized in a scalable (layered) fashion. A remote user can request, through 3G mobile devices, rich information related to the objects of interest, while the system adapts to heterogeneous network conditions and other factors such as display size and CPU processing power.
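
The adaptation step of a layered scheme can be sketched as a greedy layer selection: the base layer is always sent first, and enhancement layers are added while the cumulative bitrate fits the measured bandwidth. This is a generic illustration of scalable delivery under assumed per-layer bitrates, not the project's actual rate-control policy.

```python
def select_layers(layer_bitrates, bandwidth):
    """Greedily add layers (base first) while the cumulative rate fits.
    `layer_bitrates` is ordered base layer -> enhancement layers."""
    chosen, total = [], 0.0
    for i, rate in enumerate(layer_bitrates):
        if total + rate > bandwidth:
            break
        chosen.append(i)
        total += rate
    return chosen
```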


Vehicular Technology Conference | 2004

An advanced approach for navigation and image sensor integration for land vehicle navigation

Seong-Baek Kim; Seung-Yong Lee; Tae-Hyun Hwang; Kyoung-Ho Choi

Over the last couple of decades, demand for navigation technology has been increasing, and the technology has expanded into several application areas, including automated car navigation and mobile mapping systems. However, there has been little research on the integration of GPS, IMU, and image sensors such as CCD and video cameras. This paper aims at aiding inertial navigation sensors with image sensor and distance measurements, even in environments without GPS assistance. Using an extended Kalman filtering technique, a novel integration scheme, a multi-modal approach, is presented for fusing various sensors, such as IMU, GPS, odometer, and CCD cameras, to improve positioning accuracy. Experimental results show that the proposed multi-modal approach can enhance positioning accuracy during GPS outages.


IEEE Transactions on Signal Processing | 2004

Constrained optimization for audio-to-visual conversion

Kyoung-Ho Choi; Jenq-Neng Hwang

We have developed a new audio-to-visual conversion algorithm that uses a constrained optimization approach to take advantage of the dynamics of mouth movements. Based on facial muscle analysis, the dynamics of mouth movements are modeled, and constraints are derived from the model. The constraints are used to estimate visual parameters from speech within a framework of hidden Markov model (HMM)-based visual parameter estimation. To solve the constrained optimization problem, our implementation uses the Lagrangian approach to transform the constrained problem into an unconstrained one. The proposed method is tested in various noisy environments to demonstrate its robustness and correctness. It compares favorably with the mixture-based HMM method, which also uses audio-visual HMMs and finds optimal estimates based on a joint audio-visual probability distribution. The proposed algorithm can estimate optimal visual parameters while satisfying the constraints and avoiding performance degradation in noisy environments.
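
The Lagrangian transformation can be shown on a toy problem: fitting two values to targets subject to a linear equality constraint. Setting the gradient of L = f + λ·g to zero and solving for λ turns the constrained problem into closed-form algebra. The specific objective and constraint here are an invented minimal example, not the paper's mouth-dynamics constraints.

```python
def constrained_least_squares(a, b, c):
    """Minimize (x - a)^2 + (y - b)^2 subject to x + y = c.
    Stationarity of L = (x-a)^2 + (y-b)^2 + lam*(x + y - c) gives
    x = a - lam/2 and y = b - lam/2; substituting into the constraint
    fixes lam = a + b - c."""
    lam = a + b - c
    return a - lam / 2, b - lam / 2
```

The same pattern scales up: the HMM likelihood plays the role of the quadratic objective, and the dynamics constraints play the role of x + y = c.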


Multimedia Signal Processing | 1999

Baum-Welch hidden Markov model inversion for reliable audio-to-visual conversion

Kyoung-Ho Choi; Jenq-Neng Hwang

In this paper, a novel audio-to-visual conversion method is presented. Many multimedia applications, such as videophones, videoconferencing, man-machine interfaces, language dubbing, and character animation in virtual reality, require techniques for synchronizing audio and video in a synthesized talking-head sequence. For these applications, it is necessary to reliably estimate accurate mouth (visual) movements from the corresponding speech (audio) data. The hidden Markov model inversion (HMMI) technique, originally introduced for robust speech recognition, is extended in this paper to the audio-visual feature space. Based on the Baum-Welch HMMI method, reliable visual parameters are extracted given speech data only. Our preliminary simulation results show that the estimated visual parameters match the true visual parameters smoothly as well as accurately. The proposed estimation technique can be combined with video coding and graphics techniques for other multimedia applications.

Collaboration


An overview of Kyoung-Ho Choi's collaborations.

Top Co-Authors

Soonyoung Park

Mokpo National University


Tae-Hyun Hwang

Electronics and Telecommunications Research Institute


Do Hyun Kim

Electronics and Telecommunications Research Institute


In-Hak Joo

Electronics and Telecommunications Research Institute


Jang-Hee Yoo

Electronics and Telecommunications Research Institute


Jong-Hyun Park

Electronics and Telecommunications Research Institute


Seong Ik Cho

Electronics and Telecommunications Research Institute


Sunghoon Kim

Mokpo National University


Jaejun Yoo

Electronics and Telecommunications Research Institute
