Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Kyungeun Cho is active.

Publication


Featured research published by Kyungeun Cho.


Sensors | 2014

Calibration between Color Camera and 3D LIDAR Instruments with a Polygonal Planar Board

Yoonsu Park; Seok Min Yun; Chee Sun Won; Kyungeun Cho; Kyhyun Um; Sungdae Sim

Calibration between color camera and 3D Light Detection And Ranging (LIDAR) equipment is an essential process for data fusion. The goal of this paper is to improve the calibration accuracy between a camera and a 3D LIDAR. In particular, we are interested in calibrating a low resolution 3D LIDAR with a relatively small number of vertical sensors. Our goal is achieved by employing a new methodology for the calibration board, which exploits 2D-3D correspondences. The 3D corresponding points are estimated from the scanned laser points on the polygonal planar board with adjacent sides. Since the lengths of adjacent sides are known, we can estimate the vertices of the board as a meeting point of two projected sides of the polygonal board. The estimated vertices from the range data and those detected from the color image serve as the corresponding points for the calibration. Experiments using a low-resolution LIDAR with 32 sensors show robust results.
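To illustrate the vertex-estimation idea, here is a minimal 2D sketch (not the paper's code): an edge line is fitted to the scanned points of each board side by least squares, and the board vertex is recovered as the intersection of two adjacent edge lines. The function names and sample points are illustrative.

```python
import numpy as np

def fit_line(points):
    """Least-squares line fit: returns a point on the line and a unit direction."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # The principal direction of the centered points gives the line direction.
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[0]

def intersect_lines(p1, d1, p2, d2):
    """Intersection of two 2D lines given in point-direction form."""
    # Solve p1 + t*d1 = p2 + s*d2 for t.
    a = np.array([[d1[0], -d2[0]], [d1[1], -d2[1]]])
    b = np.array([p2[0] - p1[0], p2[1] - p1[1]])
    t, _ = np.linalg.solve(a, b)
    return p1 + t * d1

# Samples along two adjacent board edges whose lines meet at (1, 1).
edge_a = [(0.0, 1.0), (0.4, 1.0), (0.8, 1.0)]   # horizontal edge
edge_b = [(1.0, 0.0), (1.0, 0.5), (1.0, 0.9)]   # vertical edge
pa, da = fit_line(edge_a)
pb, db = fit_line(edge_b)
vertex = intersect_lines(pa, da, pb, db)
```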


Sensors | 2012

A Development Architecture for Serious Games Using BCI (Brain Computer Interface) Sensors

Yunsick Sung; Kyungeun Cho; Kyhyun Um

Games that use brainwaves, acquired via brain–computer interface (BCI) devices, to improve brain functions are known as BCI serious games. Because BCI serious games are difficult to develop, various BCI engines and authoring tools are required to reduce development time and cost. It is also desirable to reduce the amount of technical knowledge of brain functions and BCI devices needed by game developers, and a systematic BCI serious game development process is required. In this paper, we present a methodology for the development of BCI serious games. We describe the architecture, authoring tools, and development process of the proposed methodology, and apply it to the development of a game for patients with mild cognitive impairment as an example. This application demonstrates that BCI serious games can be developed on the basis of expert-verified theories.


Sensors | 2012

Intuitive Terrain Reconstruction Using Height Observation-Based Ground Segmentation and 3D Object Boundary Estimation

Wei Song; Kyungeun Cho; Kyhyun Um; Chee Sun Won; Sungdae Sim

Mobile robot operators must make rapid decisions based on information about the robot’s surrounding environment. This means that terrain modeling and photorealistic visualization are required for the remote operation of mobile robots. We have produced a voxel map and textured mesh from the 2D and 3D datasets collected by a robot’s array of sensors, but some upper parts of objects are beyond the sensors’ measurements and these parts are missing in the terrain reconstruction result. This result is an incomplete terrain model. To solve this problem, we present a new ground segmentation method to detect non-ground data in the reconstructed voxel map. Our method uses height histograms to estimate the ground height range, and a Gibbs-Markov random field model to refine the segmentation results. To reconstruct a complete terrain model of the 3D environment, we develop a 3D boundary estimation method for non-ground objects. We apply a boundary detection technique to the 2D image, before estimating and refining the actual height values of the non-ground vertices in the reconstructed textured mesh. Our proposed methods were tested in an outdoor environment in which trees and buildings were not completely sensed. Our results show that the time required for ground segmentation is faster than that for data sensing, which is necessary for a real-time approach. In addition, those parts of objects that were not sensed are accurately recovered to retrieve their real-world appearances.
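The height-histogram step can be sketched roughly as follows (a simplified stand-in: the paper's Gibbs-Markov random field refinement is omitted, and all names and parameters are illustrative):

```python
import numpy as np

def segment_ground(points, bin_size=0.2, margin=0.3):
    """Split a point cloud into ground / non-ground using a height histogram.

    Assumes the ground is the most populated height bin; the GMRF
    refinement from the paper is omitted in this sketch.
    """
    z = np.asarray(points)[:, 2]
    edges = np.arange(z.min(), z.max() + bin_size, bin_size)
    hist, _ = np.histogram(z, bins=edges)
    peak = np.argmax(hist)                     # most populated height bin
    ground_h = edges[peak] + bin_size / 2      # estimated ground height
    return np.abs(z - ground_h) <= margin      # True where a point is ground

# Flat ground at z ~ 0 with a tall object sticking up.
pts = np.array([[x * 0.1, 0.0, 0.0] for x in range(50)] +
               [[1.0, 1.0, h * 0.5] for h in range(1, 6)])
mask = segment_ground(pts)
```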


Biodata Mining | 2016

Adaptive swarm cluster-based dynamic multi-objective synthetic minority oversampling technique algorithm for tackling binary imbalanced datasets in biomedical data classification

Jinyan Li; Simon Fong; Yunsick Sung; Kyungeun Cho; Raymond K. Wong; Kelvin K. L. Wong

Background: An imbalanced dataset is defined as a training dataset that has imbalanced proportions of data in the interesting and uninteresting classes. Often in biomedical applications, samples from the stimulating class are rare in a population, such as medical anomalies, positive clinical tests, and particular diseases. Although the target samples in the primitive dataset are small in number, inducing a classification model over such training data leads to poor prediction performance due to insufficient training on the minority class.

Results: In this paper, we use a novel class-balancing method named adaptive swarm cluster-based dynamic multi-objective synthetic minority oversampling technique (ASCB_DmSMOTE) to solve this imbalanced dataset problem, which is common in biomedical applications. The proposed method combines under-sampling and over-sampling in a swarm optimisation algorithm, adaptively selecting suitable parameters for the rebalancing algorithm to find the best solution. Compared with other versions of the SMOTE algorithm, significant improvements, including higher accuracy and credibility, are observed with ASCB_DmSMOTE.

Conclusions: Our proposed method tactfully combines two rebalancing techniques. It reasonably re-allocates the majority class and dynamically optimises the two parameters of SMOTE to synthesise a reasonable scale of the minority class for each clustered sub-imbalanced dataset. The proposed method ultimately outperforms conventional methods and attains higher credibility with even greater accuracy of the classification model.
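For context, the classic SMOTE step that ASCB_DmSMOTE builds on can be sketched as follows (basic SMOTE only, not the authors' adaptive swarm variant; names and data are illustrative):

```python
import random

def smote_samples(minority, k=2, n_new=4, seed=0):
    """Basic SMOTE: synthesise minority points by interpolating between a
    minority sample and one of its k nearest minority-class neighbours."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        base = rng.choice(minority)
        # k nearest neighbours of `base` within the minority class
        neighbours = sorted(
            (p for p in minority if p is not base),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(base, p)),
        )[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(base, nb)))
    return synthetic

minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
new_pts = smote_samples(minority)
```

Each synthetic point lies on a segment between two existing minority samples, so the minority region is densified without simply duplicating points.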


IEEE Transactions on Automation Science and Engineering | 2015

Persim 3D: Context-Driven Simulation and Modeling of Human Activities in Smart Spaces

Jae Woong Lee; Seoungjae Cho; Sirui Liu; Kyungeun Cho; Sumi Helal

Automated understanding and recognition of human activities and behaviors in a smart space (e.g., a smart house) is of paramount importance to many critical human-centered applications. Recognized activities are the input to the pervasive computer (the smart space), which intelligently interacts with users to maintain the application's goal, be it assistance, safety, child development, entertainment, or another goal. Research in this area is fascinating but severely lacks adequate validation, which often relies on datasets that contain sensory data representing the activities. Providing adequate datasets that can be used in a large variety of spaces, for different user groups, and for different goals is very challenging, due to the prohibitive cost and the human capital needed to instrument physical spaces and to recruit human subjects to perform the activities and generate data. Simulation of human activities in smart spaces has therefore emerged as an alternative approach to bridge this deficit. Traditional event-driven approaches have been proposed; however, the complexity of human activity simulation proved challenging to these initial simulation efforts. In this paper, we present Persim 3D, an alternative context-driven approach to simulating human activities that is capable of supporting complex activity scenarios. We present the context-activity-action nexus and show how our approach combines modeling and visualization of actions with context and activity simulation. We present the Persim 3D architecture and algorithms, and describe a detailed validation study of our approach to verify the accuracy and realism of the simulation output (datasets and visualizations) and the scalability of the human effort needed to simulate complex scenarios with Persim 3D. We show positive and promising results that validate our approach.


Multimedia Tools and Applications | 2015

Real-time terrain reconstruction using 3D flag map for point clouds

Wei Song; Kyungeun Cho

Mobile robot operators need to make quick decisions based on information about the robot’s surrounding environment. This study proposes a graphics processing unit (GPU)-based terrain modeling system for large-scale LiDAR (Light Detection And Ranging) dataset visualization using a voxel map and a textured mesh. A 3D flag map is proposed for incrementally registering large-scale point clouds in a terrain model in real time. The sensed 3D point clouds are quantized into regular 3D grids that are allocated in the GPU memory to remove redundant spatial and temporal points. Subsequently, the sensed vertices are segmented as ground and non-ground classes. The ground indices are rendered using a textured mesh to represent the ground surface, and the non-ground indices, using a colored voxel map by a particle rendering method. The proposed approach was tested using a mobile robot equipped with a LiDAR sensor, video camera, GPS receiver, and gyroscope. The simulation was evaluated through a test in an outdoor environment containing trees and buildings, demonstrating the real-time visualization performance of the proposed method in a large-scale environment.
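The flag-map idea of discarding redundant points per voxel can be sketched as follows (a CPU-side simplification of what the paper does in GPU memory; names and parameters are illustrative):

```python
def register_points(points, voxel_size=0.5, flags=None):
    """Quantise incoming points into a voxel grid; a flag set marks voxels
    already registered so redundant points from later scans are discarded
    (a simplified stand-in for the paper's GPU-resident 3D flag map)."""
    if flags is None:
        flags = set()
    kept = []
    for x, y, z in points:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        if key not in flags:        # first point seen in this voxel
            flags.add(key)
            kept.append((x, y, z))
    return kept, flags

scan1 = [(0.1, 0.1, 0.1), (0.2, 0.1, 0.1), (1.0, 0.0, 0.0)]
kept1, flags = register_points(scan1)
scan2 = [(0.15, 0.12, 0.1), (2.0, 2.0, 0.0)]   # first point hits an occupied voxel
kept2, flags = register_points(scan2, flags=flags)
```

Carrying `flags` across calls mimics the incremental registration of successive sensor frames into one terrain model.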


Journal of Information Processing Systems | 2012

A Framework for Processing Brain Waves Used in a Brain-computer Interface

Yunsick Sung; Kyungeun Cho; Kyhyun Um

Recently, methodologies for developing brain-computer interface (BCI) games have been actively researched. The existing general framework for processing brain waves does not provide the functions required to develop BCI games; thus, developing BCI games is difficult and requires a large amount of time. Effective BCI game development requires a BCI game framework, which should provide functions to generate discrete values, events, and converted waves, taking into account the differences between users' brain waves and their BCI devices. In this paper, BCI game frameworks for processing brain waves for BCI games are proposed, along with a variety of processes for converting measured brain waves so they can be applied to games. In an experiment, the proposed frameworks were applied to a BCI game for visual perception training. Furthermore, it was verified that the time required for BCI game development was reduced when the proposed framework was applied.
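As a rough illustration of the kind of conversion functions such a framework provides (discrete values and events derived from a raw value stream), assuming a hypothetical attention score as input:

```python
def to_events(signal, threshold=60, levels=(0, 50, 100)):
    """Convert a raw brainwave-derived value stream (e.g. an attention
    score) into discrete levels and threshold-crossing game events.
    The threshold and level values here are hypothetical."""
    discrete, events = [], []
    above = False
    for i, v in enumerate(signal):
        # Snap each raw value to the nearest discrete level.
        discrete.append(min(levels, key=lambda l: abs(l - v)))
        # Emit events when the value crosses the threshold.
        if v >= threshold and not above:
            events.append(("focus_start", i))
        elif v < threshold and above:
            events.append(("focus_end", i))
        above = v >= threshold
    return discrete, events

attention = [20, 35, 70, 80, 65, 40, 30]
discrete, events = to_events(attention)
```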


international conference on intelligent pervasive computing | 2007

Gaussian Distribution for NPC Character in Real-Life Simulation

Kyungeun Cho; Wei Song; Kyhyun Um

This paper describes a real-life behavior framework for simulation games based on a Probabilistic State Machine (PSM) with a Gaussian random distribution. Using dynamic environment information, an NPC can generate behavior plans autonomously in association with a defined FSM. After the planning process, we apply a Gaussian probabilistic function for real-life action simulation in the time and spatial domains. The expected value of the distribution is estimated during the behavior planning process, and the variance is determined by the NPC's personality, in order to realize real-life behavior simulation. We evaluate the framework and the Gaussian PSM in a restaurant simulation game. Furthermore, we give some suggestions for enhancing the emotion engine for behavior planning in virtual reality.
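The core sampling step of the Gaussian PSM can be sketched as follows (a minimal illustration: the mean comes from the behavior planner, the variance from the NPC's personality; names and values are hypothetical):

```python
import random

def sample_action_time(expected_time, personality_variance, seed=None):
    """Sample when an NPC performs a planned action: the mean is the
    planner's expected time, the variance reflects the NPC's personality."""
    rng = random.Random(seed)
    return rng.gauss(expected_time, personality_variance ** 0.5)

# A calm NPC (low variance) versus an erratic one (high variance),
# both planned to act around t = 10.0.
calm = [sample_action_time(10.0, 0.1, seed=i) for i in range(100)]
erratic = [sample_action_time(10.0, 4.0, seed=i) for i in range(100)]
```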


Symmetry | 2017

3D Reconstruction Framework for Multiple Remote Robots on Cloud System

Phuong Minh Chu; Seoungjae Cho; Simon Fong; Yong Woon Park; Kyungeun Cho

This paper proposes a cloud-based framework that optimizes the three-dimensional (3D) reconstruction of multiple types of sensor data captured from multiple remote robots. A working environment using multiple remote robots requires massive amounts of data processing in real-time, which cannot be achieved using a single computer. In the proposed framework, reconstruction is carried out in cloud-based servers via distributed data processing. Consequently, users do not need to consider computing resources even when utilizing multiple remote robots. The sensors’ bulk data are transferred to a master server that divides the data and allocates the processing to a set of slave servers. Thus, the segmentation and reconstruction tasks are implemented in the slave servers. The reconstructed 3D space is created by fusing all the results in a visualization server, and the results are saved in a database that users can access and visualize in real-time. The results of the experiments conducted verify that the proposed system is capable of providing real-time 3D scenes of the surroundings of remote robots.
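The master/slave division can be sketched as follows (threads stand in for the paper's distributed servers, and the chunk-processing stub is illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

def split_frame(points, n_workers):
    """Master side: divide one sensor frame into near-equal chunks."""
    size = (len(points) + n_workers - 1) // n_workers
    return [points[i:i + size] for i in range(0, len(points), size)]

def process_chunk(chunk):
    """Slave side: stand-in for per-chunk segmentation/reconstruction."""
    return [(x, y, z, "ground" if z < 0.2 else "object") for x, y, z in chunk]

# One frame of 10 points, alternating between ground level and 1 m height.
frame = [(i * 0.1, 0.0, 0.0 if i % 2 else 1.0) for i in range(10)]
chunks = split_frame(frame, n_workers=3)
with ThreadPoolExecutor(max_workers=3) as pool:
    # Fusion step: concatenate the per-chunk results, as the
    # visualization server would merge slave outputs.
    fused = [p for part in pool.map(process_chunk, chunks) for p in part]
```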


Human-centric Computing and Information Sciences | 2017

A 3D localisation method in indoor environments for virtual reality applications

Wei Song; Liying Liu; Yifei Tian; Guodong Sun; Simon Fong; Kyungeun Cho

Virtual Reality (VR) has recently experienced rapid development for human–computer interactions. Users wearing VR headsets gain an immersive experience when interacting with a 3-dimensional (3D) world. We utilise a light detection and ranging (LiDAR) sensor to detect a 3D point cloud from the real world. To match the scale between a virtual environment and a user’s real world, this paper develops a boundary wall detection method using the Hough transform algorithm. A connected-component-labelling (CCL) algorithm is applied to classify the Hough space into several distinguishable blocks that are segmented using a threshold. The four largest peaks among the segmented blocks are extracted as the parameters of the wall plane. The virtual environment is scaled to the size of the real environment. In order to synchronise the position of the user and his/her avatar in the virtual world, a wireless Kinect network is proposed for user localisation. Multiple Kinects are mounted in an indoor environment to sense the user’s information from different viewpoints. The proposed method supports the omnidirectional detection of the user’s position and gestures. To verify the performance of our proposed system, we developed a VR game using several Kinects and a Samsung Gear VR device.
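The Hough-based wall detection can be sketched as follows (a minimal voting implementation without the paper's CCL-based peak segmentation; names and sample points are illustrative):

```python
import math

def hough_peaks(points, n_peaks=2, n_theta=180, rho_step=0.1):
    """Vote 2D points into (theta, rho) Hough space and return the
    strongest cells, which correspond to dominant lines (e.g. walls)."""
    votes = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            cell = (t, round(rho / rho_step))
            votes[cell] = votes.get(cell, 0) + 1
    ranked = sorted(votes.items(), key=lambda kv: -kv[1])
    return [(math.pi * t / n_theta, r * rho_step)
            for (t, r), _ in ranked[:n_peaks]]

# Points on two perpendicular walls: x = 2 and y = 3.
pts = [(2.0, y * 0.5) for y in range(10)] + [(x * 0.5, 3.0) for x in range(10)]
peaks = hough_peaks(pts)
```

The two returned peaks recover the wall lines at theta = 0, rho = 2 and theta = pi/2, rho = 3.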

Collaboration


Dive into Kyungeun Cho's collaboration.

Top Co-Authors
Wei Song

North China University of Technology


Sungdae Sim

Agency for Defense Development


Yong Woon Park

Agency for Defense Development


Raymond K. Wong

University of New South Wales
