
Publication


Featured research published by Seoungjae Cho.


IEEE Transactions on Automation Science and Engineering | 2015

Persim 3D: Context-Driven Simulation and Modeling of Human Activities in Smart Spaces

Jae Woong Lee; Seoungjae Cho; Sirui Liu; Kyungeun Cho; Sumi Helal

Automated understanding and recognition of human activities and behaviors in a smart space (e.g., a smart house) is of paramount importance to many critical human-centered applications. Recognized activities are the input to the pervasive computer (the smart space), which intelligently interacts with the users to maintain the application's goal, be it assistance, safety, child development, entertainment, or another goal. Research in this area is fascinating but severely lacks adequate validation, which often relies on datasets that contain sensory data representing the activities. Providing adequate datasets that can be used in a large variety of spaces, for different user groups, and aiming at different goals is very challenging. This is due to the prohibitive cost and the human capital needed to instrument physical spaces and to recruit human subjects to perform the activities and generate data. Simulation of human activities in smart spaces has therefore emerged as an alternative approach to bridge this deficit. Traditional event-driven approaches have been proposed. However, the complexity of human activity simulation proved to be challenging for these initial simulation efforts. In this paper, we present Persim 3D, an alternative context-driven approach to simulating human activities capable of supporting complex activity scenarios. We present the context-activity-action nexus and show how our approach combines modeling and visualization of actions with context and activity simulation. We present the Persim 3D architecture and algorithms, and describe a detailed validation study of our approach to verify the accuracy and realism of the simulation output (datasets and visualizations) and the scalability of the human effort in using Persim 3D to simulate complex scenarios. We show positive and promising results that validate our approach.
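
The abstract names a context-activity-action nexus without spelling out its data model. As a purely illustrative sketch (all class and field names below are invented, not taken from Persim 3D), such a nexus could be expressed as activities composed of actions whose preconditions and effects operate on a shared context:

```python
# Hypothetical sketch of a context-activity-action nexus; the class and field
# names are invented for illustration and are not taken from Persim 3D.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

Context = Dict[str, object]          # e.g. {"location": "kitchen", "stove": "off"}

@dataclass
class Action:
    name: str
    precondition: Callable[[Context], bool]   # may the action fire in this context?
    effect: Callable[[Context], Context]      # how the action changes the context

@dataclass
class Activity:
    name: str
    actions: List[Action] = field(default_factory=list)

    def simulate(self, context: Context) -> Context:
        """Run the activity's actions in order, skipping those whose
        preconditions do not hold in the evolving context."""
        for action in self.actions:
            if action.precondition(context):
                context = action.effect(context)
        return context

# Example: a tiny context-driven "prepare tea" activity.
turn_on_stove = Action(
    "turn_on_stove",
    precondition=lambda c: c.get("location") == "kitchen",
    effect=lambda c: {**c, "stove": "on"},
)
prepare_tea = Activity("prepare_tea", [turn_on_stove])
print(prepare_tea.simulate({"location": "kitchen", "stove": "off"}))
```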


Symmetry | 2017

3D Reconstruction Framework for Multiple Remote Robots on Cloud System

Phuong Minh Chu; Seoungjae Cho; Simon Fong; Yong Woon Park; Kyungeun Cho

This paper proposes a cloud-based framework that optimizes the three-dimensional (3D) reconstruction of multiple types of sensor data captured from multiple remote robots. A working environment using multiple remote robots requires massive amounts of data processing in real time, which cannot be achieved using a single computer. In the proposed framework, reconstruction is carried out in cloud-based servers via distributed data processing. Consequently, users do not need to consider computing resources even when utilizing multiple remote robots. The sensors’ bulk data are transferred to a master server that divides the data and allocates the processing to a set of slave servers. Thus, the segmentation and reconstruction tasks are implemented in the slave servers. The reconstructed 3D space is created by fusing all the results in a visualization server, and the results are saved in a database that users can access and visualize in real time. The results of the experiments conducted verify that the proposed system is capable of providing real-time 3D scenes of the surroundings of remote robots.
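
As a rough illustration of the master-side split-and-dispatch step described above, the following sketch divides one sensor frame into chunks and farms them out to local worker processes standing in for the slave servers; the chunking policy and the per-worker task are assumptions, not the paper's implementation:

```python
# Minimal sketch of dividing a sensor frame and dispatching chunks to workers,
# assuming the frame is an N x 3 point array; local processes stand in for the
# slave servers, and the per-chunk task is a placeholder voxel downsample.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def reconstruct_chunk(points: np.ndarray) -> np.ndarray:
    """Placeholder per-slave work: voxel-downsample the chunk."""
    voxel = 0.1
    keys = np.unique(np.floor(points / voxel), axis=0)
    return keys * voxel

def master_dispatch(frame: np.ndarray, num_slaves: int = 4) -> np.ndarray:
    chunks = np.array_split(frame, num_slaves)        # divide the bulk data
    with ProcessPoolExecutor(max_workers=num_slaves) as pool:
        results = list(pool.map(reconstruct_chunk, chunks))
    return np.vstack(results)                         # fuse results for visualization

if __name__ == "__main__":
    frame = np.random.rand(100_000, 3) * 10.0         # synthetic LiDAR-like frame
    fused = master_dispatch(frame)
    print(fused.shape)
```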


The Scientific World Journal | 2014

Sloped terrain segmentation for autonomous drive using sparse 3D point cloud

Seoungjae Cho; Jonghyun Kim; Warda Ikram; Kyungeun Cho; Young-Sik Jeong; Kyhyun Um; Sungdae Sim

A ubiquitous environment for road travel that uses wireless networks requires the minimization of data exchange between vehicles. An algorithm that can segment the ground in real time is necessary to obtain location data between vehicles simultaneously performing autonomous driving. This paper proposes a framework for segmenting the ground in real time using a sparse three-dimensional (3D) point cloud acquired from undulating terrain. A sparse 3D point cloud can be acquired by scanning the geography using light detection and ranging (LiDAR) sensors. For efficient ground segmentation, 3D point clouds are quantized in units of volume pixels (voxels) and overlapping data is eliminated. We reduce the nonoverlapping voxels to two dimensions by implementing a lowermost heightmap. The ground area is determined on the basis of the number of voxels in each voxel group. We execute ground segmentation in real time by proposing an approach that minimizes the comparisons between neighboring voxels. Furthermore, we experimentally verify that ground segmentation can be executed in about 19.31 ms per frame.
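
A minimal sketch of the voxel quantization and lowermost heightmap idea described above is given below; the voxel size, the neighbor-comparison rule, and the synthetic data are illustrative assumptions rather than the paper's parameters:

```python
# Rough sketch of voxel quantization followed by a lowermost heightmap,
# assuming a LiDAR frame as an N x 3 array; voxel size and slope threshold
# are illustrative values, not the paper's parameters.
import numpy as np

def lowermost_heightmap(points: np.ndarray, voxel: float = 0.2):
    """Quantize points into voxels, drop duplicates, and keep only the
    lowest occupied voxel in each (x, y) column."""
    vox = np.unique(np.floor(points / voxel).astype(int), axis=0)
    heightmap = {}
    for x, y, z in vox:
        key = (x, y)
        if key not in heightmap or z < heightmap[key]:
            heightmap[key] = z
    return heightmap

def ground_mask(heightmap, max_step: int = 1):
    """Label a column as ground if its height differs from an already-labelled
    ground neighbour by at most `max_step` voxels (a simplified region grow)."""
    if not heightmap:
        return set()
    seed = min(heightmap, key=heightmap.get)           # start from the lowest column
    ground, frontier = {seed}, [seed]
    while frontier:
        cx, cy = frontier.pop()
        for nx, ny in ((cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)):
            if (nx, ny) in heightmap and (nx, ny) not in ground:
                if abs(heightmap[(nx, ny)] - heightmap[(cx, cy)]) <= max_step:
                    ground.add((nx, ny))
                    frontier.append((nx, ny))
    return ground

if __name__ == "__main__":
    pts = np.random.rand(50_000, 3) * [40, 40, 2]      # synthetic gently sloped terrain
    hm = lowermost_heightmap(pts)
    print(len(ground_mask(hm)), "ground columns out of", len(hm))
```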


International Journal of Distributed Sensor Networks | 2014

Traversable Ground Surface Segmentation and Modeling for Real-Time Mobile Mapping

Wei Song; Seoungjae Cho; Kyungeun Cho; Kyhyun Um; Chee Sun Won; Sungdae Sim

A remote vehicle operator must quickly decide on motion and path. Thus, rapid and intuitive feedback of the real environment is vital for effective control. This paper presents a real-time traversable ground surface segmentation and intuitive representation system for the remote operation of a mobile robot. First, a terrain model using a voxel-based flag map is proposed for incrementally registering large-scale point clouds in real time. Subsequently, a ground segmentation method with a Gibbs-Markov random field (Gibbs-MRF) model is applied to detect ground data in the reconstructed terrain. Finally, we generate a textured mesh for ground surface representation by mapping the triangles in the terrain mesh onto the captured video images. To speed up the computation, we program a graphics processing unit (GPU) to implement the proposed system for large-scale datasets in parallel. Our proposed methods were tested in an outdoor environment. The results show that ground data is segmented effectively and the ground surface is represented intuitively.
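
The voxel-based flag map used for incremental registration can be illustrated with a short sketch like the one below; the voxel size is an assumed value, and the Gibbs-MRF ground labelling and GPU parallelization from the paper are not reproduced here:

```python
# Sketch of an incremental voxel "flag map": each new scan only contributes
# points whose voxel has not been seen before, which is one simple way to
# register large-scale clouds incrementally. Voxel size is illustrative.
import numpy as np

class VoxelFlagMap:
    def __init__(self, voxel: float = 0.2):
        self.voxel = voxel
        self.flags = set()          # occupied voxel keys seen so far
        self.points = []            # registered representative points

    def register_scan(self, scan: np.ndarray) -> int:
        """Add one scan; return how many new voxels it contributed."""
        added = 0
        keys = np.floor(scan / self.voxel).astype(int)
        for key, point in zip(map(tuple, keys), scan):
            if key not in self.flags:
                self.flags.add(key)
                self.points.append(point)
                added += 1
        return added

if __name__ == "__main__":
    terrain = VoxelFlagMap()
    for _ in range(5):                          # five overlapping synthetic scans
        scan = np.random.rand(20_000, 3) * 30.0
        print("new voxels:", terrain.register_scan(scan))
```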


Multimedia Tools and Applications | 2017

Real-time single camera natural user interface engine development

Wei Song; Xingquan Cai; Yulong Xi; Seoungjae Cho; Kyungeun Cho

Natural user interfaces (NUIs) provide human-computer interaction (HCI) with natural and intuitive operation interfaces, such as human gestures and voice. We have developed a real-time NUI engine architecture using a web camera as a means of implementing NUI applications. The system captures video via the web camera and implements real-time image processing using graphics processing unit (GPU) programming. This paper describes the architecture of the engine and its real-virtual environment interaction methods, such as foreground segmentation and hand gesture recognition. These methods are implemented using GPU programming in order to realize real-time image processing for HCI. To verify the efficacy of our proposed NUI engine, we utilized it, together with the DirectX SDK, in the development and implementation of several mixed reality games and touch-less operation applications. Our results confirm that the methods implemented by the engine operate in real time and the interactive operations are intuitive.
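
One of the interaction methods named above, foreground segmentation from a web camera, can be approximated with OpenCV's stock background subtractor as in the sketch below; this is a CPU stand-in for the paper's GPU implementation, not the authors' code:

```python
# Minimal foreground-segmentation sketch using OpenCV's built-in background
# subtractor on a web camera; a CPU stand-in for the GPU pipeline in the paper.
import cv2

def run(camera_index: int = 0):
    cap = cv2.VideoCapture(camera_index)
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)                     # per-frame foreground mask
        foreground = cv2.bitwise_and(frame, frame, mask=mask)
        cv2.imshow("foreground", foreground)
        if cv2.waitKey(1) & 0xFF == ord("q"):              # press 'q' to quit
            break
    cap.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    run()
```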


Future Generation Computer Systems | 2017

Simulation framework of ubiquitous network environments for designing diverse network robots

Seoungjae Cho; Simon Fong; Yong Woon Park; Kyungeun Cho

Smart homes provide residents with services that offer convenience using sensor networks and a variety of ubiquitous instruments. Network robots based on such networks can perform direct services for these residents. Information from various ubiquitous instruments and sensors located in smart homes is shared with network robots. These robots effectively help residents in their daily routine by accessing this information. However, the development of network robots in an actual environment requires significant time, space, labor, and money. A network robot that has not been fully developed may cause physical damage in unexpected situations. In this paper, we propose a framework that allows the design and simulation of network robot avatars and a variety of smart homes in a virtual environment to address the above problems. This framework activates a network robot avatar based on information obtained from various sensors mounted in the smart home; these sensors identify the daily routine of the human avatar residing in the smart home. Algorithms that include reinforcement learning and action planning are integrated to enable the network robot avatar to serve the human avatar. Further, this paper develops a network robot simulator to verify whether the network robot functions effectively using the framework.


Neurocomputing | 2016

Automatic agent generation for IoT-based smart house simulator

Wonsik Lee; Seoungjae Cho; Phuong Minh Chu; Hoang Vu; Sumi Helal; Wei Song; Young-Sik Jeong; Kyungeun Cho

In order to evaluate the quality of Internet of Things (IoT) environments in smart houses, large datasets containing interactions between people and ubiquitous environments are essential for hardware and software testing. Both testing and simulation require a substantial amount of time and volunteer resources. Consequently, the ability to simulate these ubiquitous environments has recently increased in importance. In order to create an easy-to-use simulator for designing ubiquitous environments, we propose a simulator and autonomous agent generator that simulates human activity in smart houses. The simulator provides a three-dimensional (3D) graphical user interface (GUI) that enables spatial configuration, along with virtual sensors that simulate actual sensors. In addition, the simulator provides an artificial intelligence agent that automatically interacts with virtual smart houses using a motivation-driven behavior planning method. The virtual sensors are designed to detect the states of the smart house and its living agents. The sensed datasets simulate long-term interaction results for ubiquitous computing researchers, reducing the testing costs associated with smart house architecture evaluation.
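
The motivation-driven behavior planning mentioned above could, for illustration, look like the following sketch, in which the agent keeps a set of growing motivations and picks the behavior that relieves the strongest one; the motivation names, rates, and behaviors are invented, not taken from the paper:

```python
# Hypothetical sketch of motivation-driven behavior selection: motivations
# drift upward over time and the agent chooses the behavior that relieves the
# currently strongest one. All names and rates are invented for illustration.
import random

MOTIVATIONS = {"hunger": 0.0, "fatigue": 0.0, "boredom": 0.0}
BEHAVIORS = {                     # behavior -> (motivation it reduces, by how much)
    "cook_meal": ("hunger", 0.8),
    "sleep": ("fatigue", 0.9),
    "watch_tv": ("boredom", 0.6),
}

def step(motivations):
    # Motivations grow over time (the sensor-observable internal state).
    for key in motivations:
        motivations[key] = min(1.0, motivations[key] + random.uniform(0.0, 0.2))
    # Pick the behavior targeting the strongest motivation.
    target = max(motivations, key=motivations.get)
    behavior, (motivation, relief) = next(
        (name, spec) for name, spec in BEHAVIORS.items() if spec[0] == target
    )
    motivations[motivation] = max(0.0, motivations[motivation] - relief)
    return behavior

if __name__ == "__main__":
    for _ in range(5):
        print(step(MOTIVATIONS), MOTIVATIONS)
```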


International Journal of Distributed Sensor Networks | 2013

Human-Robot Interaction Learning Using Demonstration-Based Learning and Q-Learning in a Pervasive Sensing Environment

Yunsick Sung; Seoungjae Cho; Kyhyun Um; Young-Sik Jeong; Simon Fong; Kyungeun Cho

Because robots can move toward humans and provide services at any location, a pervasive sensing environment can deliver diverse kinds of services through robots regardless of where the humans are. For various services, robots need to learn accurate motor primitives such as walking and grabbing objects. However, learning motor primitives in a pervasive sensing environment is very time consuming. Several previous studies have considered robots learning motor primitives and interacting with humans in virtual environments. When a robot learns motor primitives only from observation, a disadvantage is that motor primitives that cannot be observed by the robot cannot be defined. In this paper, we develop a novel interaction learning approach based on a virtual environment. The motor primitives are defined by manipulating a robot directly using demonstration-based learning. In addition, the robot applies Q-learning to learn interactions with humans. In our experiments, the proposed method generated motor primitives intuitively, and the amount of movement required by a virtual human was reduced by about 25% after applying the generated motor primitives.
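
To make the Q-learning step concrete, the sketch below runs tabular Q-learning on a toy one-dimensional "approach the human" task; the states, rewards, and hyperparameters are invented stand-ins for the paper's interaction-learning setup:

```python
# Tabular Q-learning sketch on a toy 1-D "approach the human" task, standing in
# for the paper's interaction learning; states, rewards, and hyperparameters
# are invented for illustration.
import random
from collections import defaultdict

ACTIONS = [-1, +1]                      # step away from / toward the human
GOAL, N_STATES = 9, 10                  # the human stands at cell 9 on a 10-cell line
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

q = defaultdict(float)                  # (state, action) -> estimated value

def choose(state):
    """Epsilon-greedy action selection."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(state, a)])

for episode in range(500):
    state = 0
    while state != GOAL:
        action = choose(state)
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == GOAL else -0.01
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = next_state

print("learned action at cell 0:", max(ACTIONS, key=lambda a: q[(0, a)]))
```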


Archive | 2014

Intuitive NUI for Controlling Virtual Objects Based on Hand Movements

Yeji Kim; Sohyun Sim; Seoungjae Cho; Woon-woo Lee; Young-Sik Jeong; Kyung-Eun Cho; Kyhyun Um

Natural user interfaces (NUIs) have attracted considerable research attention, and they are increasingly being applied in various fields. NUIs afford more intuitive user inputs than existing UIs. This study proposes an approach to recognizing arm motions for intuitive control of an object in a virtual environment. A Kinect sensor is used for the virtual object control system. For an experimental test, a virtual room is created and the Smart Interior NUI is implemented for arranging furniture.


Archive | 2014

Design and Implementation of a Web Camera-Based Natural User Interface Engine

Wei Song; Yulong Xi; Warda Ikram; Seoungjae Cho; Kyungeun Cho; Kyhyun Um

Natural User Interfaces (NUIs) are a novel way to provide Human-Computer Interaction (HCI) with natural and intuitive operation interfaces, such as human gestures and voice. This paper proposes a real-time NUI engine architecture that uses a web camera as an inexpensive means of implementing NUI applications. The engine integrates the OpenCV library, the CUDA toolkit, and the DirectX SDK. We utilize the OpenCV library to capture video via the web camera, implement real-time image processing using Graphics Processing Unit (GPU) programming, and present the NUI applications using the DirectX SDK; for example, to embed a 3D object in a captured scene and play sounds. To verify the efficacy of our proposed NUI engine, we utilized it in the development and implementation of several mixed reality games and touch-less operation applications. Our results confirm that the engine's methods run in real time and the interactive operations are intuitive.

Collaboration


Top co-authors of Seoungjae Cho and their affiliations.

Yong Woon Park, Agency for Defense Development
Sungdae Sim, Agency for Defense Development
Wei Song, North China University of Technology
Kiho Kwak, Agency for Defense Development