
Publication


Featured research published by Kyhyun Um.


Sensors | 2014

Calibration between Color Camera and 3D LIDAR Instruments with a Polygonal Planar Board

Yoonsu Park; Seok Min Yun; Chee Sun Won; Kyungeun Cho; Kyhyun Um; Sungdae Sim

Calibration between a color camera and 3D Light Detection and Ranging (LIDAR) equipment is an essential process for data fusion. The goal of this paper is to improve the calibration accuracy between a camera and a 3D LIDAR. In particular, we are interested in calibrating a low-resolution 3D LIDAR with a relatively small number of vertical sensors. Our goal is achieved by employing a new methodology for the calibration board, which exploits 2D-3D correspondences. The 3D corresponding points are estimated from the laser points scanned on a polygonal planar board with adjacent sides. Since the lengths of the adjacent sides are known, we can estimate each vertex of the board as the meeting point of two projected sides of the polygonal board. The vertices estimated from the range data and those detected in the color image serve as the corresponding points for the calibration. Experiments using a low-resolution LIDAR with 32 sensors show robust results.
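The vertex-estimation step described above, intersecting the two fitted side lines of the polygonal board, can be sketched as follows. This is a minimal illustration in 2D; the point data, helper names, and tolerances are hypothetical, not from the paper:

```python
import numpy as np

def fit_line(points):
    """Fit a 2D line to noisy points via total least squares (PCA).
    Returns a point on the line and a unit direction vector."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # The principal direction of the scatter is the line direction.
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[0]

def intersect_lines(p1, d1, p2, d2):
    """Intersection of the lines p1 + t*d1 and p2 + s*d2."""
    # Solve [d1, -d2] [t, s]^T = p2 - p1 for the parameters.
    a = np.column_stack((d1, -d2))
    t, _ = np.linalg.solve(a, p2 - p1)
    return p1 + t * d1

# Noisy laser hits on two adjacent sides of the board (made-up data).
side_a = [(0.0, 0.01), (1.0, -0.01), (2.0, 0.0)]   # roughly y = 0
side_b = [(2.0, 0.02), (2.01, 1.0), (1.99, 2.0)]   # roughly x = 2
vertex = intersect_lines(*fit_line(side_a), *fit_line(side_b))
print(vertex)  # close to the corner (2, 0)
```

Fitting each side to all of its scanned points before intersecting averages out per-point range noise, which is what makes the vertex usable as a calibration correspondence.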


Sensors | 2012

A Development Architecture for Serious Games Using BCI (Brain Computer Interface) Sensors

Yunsick Sung; Kyungeun Cho; Kyhyun Um

Games that use brainwaves via brain–computer interface (BCI) devices to improve brain functions are known as BCI serious games. Because developing BCI serious games is difficult, various BCI engines and authoring tools are required to reduce development time and cost. However, it is also desirable to reduce the amount of technical knowledge of brain functions and BCI devices needed by game developers. Moreover, a systematic BCI serious game development process is required. In this paper, we present a methodology for the development of BCI serious games. We describe the architecture, authoring tools, and development process of the proposed methodology, and apply it to the development of a game for patients with mild cognitive impairment as an example. This application demonstrates that BCI serious games can be developed on the basis of expert-verified theories.


Sensors | 2012

Intuitive Terrain Reconstruction Using Height Observation-Based Ground Segmentation and 3D Object Boundary Estimation

Wei Song; Kyungeun Cho; Kyhyun Um; Chee Sun Won; Sungdae Sim

Mobile robot operators must make rapid decisions based on information about the robot's surrounding environment. This means that terrain modeling and photorealistic visualization are required for the remote operation of mobile robots. We have produced a voxel map and textured mesh from the 2D and 3D datasets collected by a robot's array of sensors, but some upper parts of objects lie beyond the sensors' measurement range, and these parts are missing from the reconstruction, yielding an incomplete terrain model. To solve this problem, we present a new ground segmentation method to detect non-ground data in the reconstructed voxel map. Our method uses height histograms to estimate the ground height range, and a Gibbs-Markov random field model to refine the segmentation results. To reconstruct a complete terrain model of the 3D environment, we develop a 3D boundary estimation method for non-ground objects. We apply a boundary detection technique to the 2D image, before estimating and refining the actual height values of the non-ground vertices in the reconstructed textured mesh. Our proposed methods were tested in an outdoor environment in which trees and buildings were not completely sensed. Our results show that ground segmentation is faster than data sensing, which is necessary for a real-time approach. In addition, the parts of objects that were not sensed are accurately recovered, retrieving their real-world appearance.
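The height-histogram step of the ground segmentation above can be sketched as follows. This shows only the histogram-based ground-range estimate; the paper additionally refines the labels with a Gibbs-Markov random field, and the bin size, tolerance, and sample heights here are illustrative assumptions:

```python
import numpy as np

def segment_ground(heights, bin_size=0.2, tolerance=0.3):
    """Estimate the ground height from a histogram of point heights
    and label each point as ground (True) or non-ground (False)."""
    h = np.asarray(heights, dtype=float)
    bins = np.arange(h.min(), h.max() + bin_size, bin_size)
    counts, edges = np.histogram(h, bins=bins)
    # The most populated bin is assumed to contain the ground plane.
    ground_bin = counts.argmax()
    ground_height = 0.5 * (edges[ground_bin] + edges[ground_bin + 1])
    return np.abs(h - ground_height) <= tolerance

# Mostly flat ground near z = 0 with a few elevated object points.
z = [0.0, 0.05, -0.1, 0.1, 0.02, 1.5, 1.7, 2.0]
mask = segment_ground(z)
print(mask)  # ground points True, object points False
```

The histogram pass is cheap (one scan over the points), which is consistent with the paper's claim that segmentation runs faster than data acquisition.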


Journal of Information Processing Systems | 2012

A Framework for Processing Brain Waves Used in a Brain-computer Interface

Yunsick Sung; Kyungeun Cho; Kyhyun Um

Recently, methodologies for developing brain-computer interface (BCI) games have been actively researched. The existing general frameworks for processing brain waves do not provide the functions required to develop BCI games, so developing them is difficult and time-consuming. Effective BCI game development therefore requires a BCI game framework, and such a framework should provide functions to generate discrete values, events, and converted waves, taking into account the differences between users' brain waves and their BCI devices. In this paper, BCI game frameworks for processing brain waves for BCI games are proposed, together with a variety of processes for converting the measured brain waves so that they can be applied to games. In an experiment, the proposed frameworks were applied to a BCI game for visual perception training, and it was verified that the time required for BCI game development was reduced when the proposed framework was applied.
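The conversion functions the abstract calls for, turning a continuous brain-wave signal into discrete values and game events, can be sketched as one minimal processing stage. The threshold, level count, and event name below are hypothetical, not from the paper:

```python
from dataclasses import dataclass, field

@dataclass
class BrainWaveProcessor:
    """Minimal sketch of one framework stage: converting a normalized
    attention signal (0.0 to 1.0) into discrete levels and game events."""
    threshold: float = 0.6
    levels: int = 4
    events: list = field(default_factory=list)

    def process(self, sample: float) -> int:
        # Quantize the normalized signal into a discrete level.
        level = min(int(sample * self.levels), self.levels - 1)
        # Emit an event when the signal crosses the trigger threshold.
        if sample >= self.threshold:
            self.events.append(("FOCUS_EVENT", level))
        return level

proc = BrainWaveProcessor()
levels = [proc.process(s) for s in [0.1, 0.45, 0.7, 0.95]]
print(levels, proc.events)  # [0, 1, 2, 3] with two FOCUS_EVENTs
```

Isolating quantization and event generation behind one interface is the kind of separation that lets game developers consume BCI input without knowing the device details.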


International Conference on Intelligent Pervasive Computing | 2007

Gaussian Distribution for NPC Character in Real-Life Simulation

Kyungeun Cho; Wei Song; Kyhyun Um

This paper describes a real-life behavior framework for simulation games based on a Probabilistic State Machine (PSM) with a Gaussian random distribution. Using dynamic environment information, an NPC can autonomously generate behavior plans associated with a defined FSM. After the planning process, we apply a Gaussian probabilistic function to simulate real-life actions in the time and spatial domains. The expected value of the distribution is estimated during the behavior planning process, and the variance is determined by the NPC's personality, in order to realize real-life behavior simulation. We evaluate the framework and the Gaussian PSM on a restaurant simulation game. Furthermore, we offer some suggestions for enhancing the emotion engine in behavior planning for virtual reality.
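The Gaussian timing model described above, mean from the behavior planner, variance from NPC personality, can be sketched as follows. The parameter values and function names are illustrative assumptions:

```python
import random

def sample_action_time(expected, personality_variance, rng=None):
    """Draw an action's execution time from a Gaussian whose mean comes
    from behavior planning and whose variance reflects NPC personality.
    Times are clamped at zero since durations cannot be negative."""
    rng = rng or random.Random()
    return max(0.0, rng.gauss(expected, personality_variance ** 0.5))

rng = random.Random(42)
# A calm NPC (small variance) vs. an erratic one (large variance),
# both planned to take about 5 seconds for the same action.
calm = [sample_action_time(5.0, 0.1, rng) for _ in range(1000)]
erratic = [sample_action_time(5.0, 4.0, rng) for _ in range(1000)]
print(sum(calm) / len(calm))  # both cluster near the planned 5.0 s
```

Both NPCs honor the planner's expected duration on average, but the erratic one's timing spreads far more widely, which is the personality effect the abstract describes.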


Lecture Notes in Computer Science | 2004

Human Action Recognition by Inference of Stochastic Regular Grammars

Kyungeun Cho; Hyung-Je Cho; Kyhyun Um

In this paper, we present a new method of recognizing human actions by inference of stochastic grammars, for the purpose of automatically analyzing the nonverbal actions of human beings. We apply the principle that a human action can be defined as a combination of multiple articulation movements. We measure and quantize each articulation movement in 3D and represent it as two sets of 4-connected chain codes for the xy and zy projection planes, so that the movements are suitable for the stochastic grammar inference method. The recognition method was tested using 900 actions of the human upper body. The results show a comparatively successful recognition rate of 93.8% over 8 action types of the head and 84.9% over 60 action types of the upper body.
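The 4-connected chain-code representation used above can be sketched as follows. The symbol assignment (0 = +x, 1 = +y, 2 = -x, 3 = -y) is a common convention; the paper's exact coding and the sample trajectory are assumptions:

```python
def chain_code_4(points):
    """Encode a quantized 2D trajectory as a 4-connected chain code.
    Each consecutive pair of points must differ by exactly one unit
    step along one axis; diagonal steps raise a KeyError."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = x1 - x0, y1 - y0
        codes.append({(1, 0): 0, (0, 1): 1, (-1, 0): 2, (0, -1): 3}[(dx, dy)])
    return codes

# Quantized xy-plane projection of an articulation movement (made up).
xy_track = [(0, 0), (1, 0), (1, 1), (1, 2), (0, 2)]
print(chain_code_4(xy_track))  # [0, 1, 1, 2]
```

Encoding each projection plane (xy and zy) as such a symbol string is what makes the movement consumable by a grammar-inference algorithm, which operates on sequences over a finite alphabet.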


Statistical and Scientific Database Management | 1999

2D+ string: a spatial metadata to reason topological and directional relationships

Bowon Kim; Kyhyun Um

Spatial relationships have gained increasing attention in spatial databases. Reasoning about spatial relationships is very expensive since it requires massive geometric computation. In particular, reading and comparing information to reason about directional and topological relationships can be repeated when, as in previous works, the two are processed separately. To avoid this repetition and improve the performance of spatial reasoning, we propose a novel metadata representation scheme, the 2D+ (two-dimensional plus) string, which contains direction and topology information between objects in a picture, together with a generation method for the string, inference rules for reasoning about directional and topological relationships, and their application to reasoning. Our analysis of reasoning performance shows that it depends on the amount of information stored in the 2D+ string, which is less than in previous works. Simplicity and expressive power are the main advantages of the 2D+ string.
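The kind of per-axis directional information such a metadata string encodes can be illustrated with bounding boxes. This is a hypothetical sketch of the directional component only (the 2D+ string also carries topology), and the relation names are not the paper's notation:

```python
def direction(a, b):
    """Coarse directional relationship of object b relative to object a,
    given axis-aligned bounding boxes (x0, y0, x1, y1)."""
    (ax0, ay0, ax1, ay1), (bx0, by0, bx1, by1) = a, b
    # Compare interval endpoints per axis; overlap means no clear side.
    ew = "east" if bx0 >= ax1 else "west" if bx1 <= ax0 else "same-x"
    ns = "north" if by0 >= ay1 else "south" if by1 <= ay0 else "same-y"
    return ew, ns

# Object B lies entirely to the east and north of object A.
print(direction((0, 0, 2, 2), (3, 3, 5, 4)))  # ('east', 'north')
```

Precomputing and storing such per-axis relations is what lets a reasoner answer directional queries by string lookup instead of repeating geometric computation per query.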


The Scientific World Journal | 2014

Sloped terrain segmentation for autonomous drive using sparse 3D point cloud.

Seoungjae Cho; Jonghyun Kim; Warda Ikram; Kyungeun Cho; Young-Sik Jeong; Kyhyun Um; Sungdae Sim

A ubiquitous environment for road travel that uses wireless networks requires the minimization of data exchange between vehicles. An algorithm that can segment the ground in real time is necessary to obtain location data between vehicles simultaneously executing autonomous drive. This paper proposes a framework for segmenting the ground in real time using a sparse three-dimensional (3D) point cloud acquired from undulating terrain. A sparse 3D point cloud can be acquired by scanning the geography using light detection and ranging (LiDAR) sensors. For efficient ground segmentation, 3D point clouds are quantized in units of volume pixels (voxels) and overlapping data is eliminated. We reduce the nonoverlapping voxels to two dimensions by implementing a lowermost heightmap. The ground area is determined on the basis of the number of voxels in each voxel group. We execute ground segmentation in real time by proposing an approach that minimizes the comparisons between neighboring voxels. Furthermore, we experimentally verify that ground segmentation can be executed in about 19.31 ms per frame.
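The quantization and lowermost-heightmap steps above can be sketched as follows. The voxel size and sample cloud are illustrative assumptions, not values from the paper:

```python
import math

def lowermost_heightmap(points, voxel_size=0.5):
    """Quantize a sparse 3D point cloud into voxels (duplicate points
    collapse into the same cell) and keep only the lowest occupied
    voxel per (x, y) column, reducing the data to a 2D heightmap."""
    heightmap = {}
    for x, y, z in points:
        ix, iy, iz = (math.floor(v / voxel_size) for v in (x, y, z))
        key = (ix, iy)
        # Keep the lowermost occupied voxel in each vertical column.
        if key not in heightmap or iz < heightmap[key]:
            heightmap[key] = iz
    return heightmap

cloud = [(0.1, 0.1, 0.0), (0.2, 0.3, 0.1),  # same voxel column, ground
         (0.1, 0.2, 1.6),                    # object above that column
         (1.2, 0.1, 0.05)]
hm = lowermost_heightmap(cloud)
print(hm)  # {(0, 0): 0, (2, 0): 0}
```

Collapsing each column to its lowest voxel discards overhanging object points early, so the subsequent ground/non-ground decision works on a much smaller 2D structure, which is what makes the real-time budget achievable.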


International Journal of Distributed Sensor Networks | 2014

Traversable Ground Surface Segmentation and Modeling for Real-Time Mobile Mapping

Wei Song; Seoungjae Cho; Kyungeun Cho; Kyhyun Um; Chee Sun Won; Sungdae Sim

A remote vehicle operator must quickly decide on motion and path, so rapid and intuitive feedback of the real environment is vital for effective control. This paper presents a real-time traversable ground surface segmentation and intuitive representation system for the remote operation of mobile robots. First, a terrain model using a voxel-based flag map is proposed for incrementally registering large-scale point clouds in real time. Subsequently, a ground segmentation method with a Gibbs-Markov random field (Gibbs-MRF) model is applied to detect ground data in the reconstructed terrain. Finally, we generate a textured mesh for ground surface representation by mapping the triangles in the terrain mesh onto the captured video images. To speed up the computation, we program a graphics processing unit (GPU) to process large-scale datasets in parallel. Our proposed methods were tested in an outdoor environment. The results show that ground data is segmented effectively and that the ground surface is represented intuitively.


IEEE International Conference on Dependable, Autonomic and Secure Computing | 2013

Gesture-Based NUI Application for Real-Time Path Modification

Hongzhe Liu; Yulong Xi; Wei Song; Kyhyun Um; Kyungeun Cho

Since the birth of the Natural User Interface (NUI) concept, the NUI has become widely used. NUI-based applications have grown rapidly, particularly those using gestures, which have come to occupy a pivotal place in technology; the ever-popular smartphone is one of the best examples. Recently, video conferencing has also begun adopting gesture-based NUIs with augmented reality (AR) technology. The NUI and AR have greatly enriched and facilitated the human experience. Path planning has also been a popular research topic. Traditional path planning uses automatic navigation to solve problems and cannot practically interact with people; its algorithms are computationally complex, and in certain extenuating circumstances automatic real-time processing is much less efficient than human path modification. Considering such circumstances, we present a solution that employs NUI technology for 3D path modification in real time. In our proposed solution, users can manually operate and edit their own paths; the core method is based on 3D point detection to change paths. We conducted a simulation experiment on city path modification. The experiment showed that the computer can accurately identify a valid gesture and that gestures can effectively change the path. Among other applications, this solution can be used in virtual military maps and car navigation.

Collaboration


Dive into Kyhyun Um's collaboration.

Top Co-Authors

Kyung-Eun Cho

North China University of Technology


Sungdae Sim

Agency for Defense Development
