
Publication


Featured research published by Phuong Minh Chu.


Symmetry | 2017

3D Reconstruction Framework for Multiple Remote Robots on Cloud System

Phuong Minh Chu; Seoungjae Cho; Simon Fong; Yong Woon Park; Kyungeun Cho

This paper proposes a cloud-based framework that optimizes the three-dimensional (3D) reconstruction of multiple types of sensor data captured from multiple remote robots. A working environment using multiple remote robots requires massive amounts of data processing in real-time, which cannot be achieved using a single computer. In the proposed framework, reconstruction is carried out in cloud-based servers via distributed data processing. Consequently, users do not need to consider computing resources even when utilizing multiple remote robots. The sensors’ bulk data are transferred to a master server that divides the data and allocates the processing to a set of slave servers. Thus, the segmentation and reconstruction tasks are implemented in the slave servers. The reconstructed 3D space is created by fusing all the results in a visualization server, and the results are saved in a database that users can access and visualize in real-time. The results of the experiments conducted verify that the proposed system is capable of providing real-time 3D scenes of the surroundings of remote robots.
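The abstract describes a master server that splits each frame's bulk sensor data across a set of slave servers. Below is a minimal sketch of that partitioning step, assuming each frame arrives as a flat list of (x, y, z) points; the function name and even-chunk policy are illustrative, not the authors' implementation.

```python
# Minimal sketch of master-side frame partitioning (illustrative names).

def partition_frame(points, num_slaves):
    """Split one sensor frame into near-equal chunks, one per slave server."""
    chunk = (len(points) + num_slaves - 1) // num_slaves  # ceiling division
    return [points[i * chunk:(i + 1) * chunk] for i in range(num_slaves)]

if __name__ == "__main__":
    frame = [(float(i), 0.0, 0.0) for i in range(10)]  # fake 10-point frame
    for sid, part in enumerate(partition_frame(frame, 3)):
        print(f"slave {sid} receives {len(part)} points")  # 4, 4, 2
```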


Neurocomputing | 2016

Automatic agent generation for IoT-based smart house simulator

Wonsik Lee; Seoungjae Cho; Phuong Minh Chu; Hoang Vu; Sumi Helal; Wei Song; Young-Sik Jeong; Kyungeun Cho

In order to evaluate the quality of Internet of Things (IoT) environments in smart houses, large datasets containing interactions between people and ubiquitous environments are essential for hardware and software testing. Both testing and simulation require a substantial amount of time and volunteer resources. Consequently, the ability to simulate these ubiquitous environments has recently increased in importance. In order to create an easy-to-use simulator for designing ubiquitous environments, we propose a simulator and autonomous agent generator that simulates human activity in smart houses. The simulator provides a three-dimensional (3D) graphical user interface (GUI) that enables spatial configuration, along with virtual sensors that simulate actual sensors. In addition, the simulator provides an artificial intelligence agent that automatically interacts with virtual smart houses using a motivation-driven behavior planning method. The virtual sensors are designed to detect the states of the smart house and its living agents. The sensed datasets simulate long-term interaction results for ubiquitous computing researchers, reducing the testing costs associated with smart house architecture evaluation.
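The motivation-driven behavior planning method can be illustrated with a toy loop that always services the most urgent drive. The motivation names, growth rates, and reset-on-action rule below are invented for illustration; the paper's planner is more elaborate.

```python
# Toy motivation-driven behavior selection: scalar urgencies grow over time
# and the agent performs the action tied to the most urgent one.
# All motivations and rates here are invented.

def select_behavior(motivations):
    """Return the action associated with the most urgent motivation."""
    return max(motivations, key=motivations.get)

motivations = {"hunger": 0.2, "fatigue": 0.6, "hygiene": 0.1}
growth = {"hunger": 0.3, "fatigue": 0.1, "hygiene": 0.05}

for step in range(3):
    action = select_behavior(motivations)
    print(f"t={step}: agent performs '{action}'")
    motivations[action] = 0.0            # acting satisfies that drive
    for m in motivations:                # all drives keep accumulating
        motivations[m] += growth[m]
```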


Multimedia Tools and Applications | 2018

Convergent application for trace elimination of dynamic objects from accumulated lidar point clouds

Phuong Minh Chu; Seoungjae Cho; Sungdae Sim; Kiho Kwak; Kyungeun Cho

In this paper, a convergent multimedia application for filtering traces of dynamic objects from accumulated point cloud data is presented. First, a fast ground segmentation algorithm is designed by dividing each frame's data into small groups. Each group is a vertical line limited by two points. The first point is orthogonally projected from the sensor's position to the ground. The second is a point in the outermost data circle. Two voxel maps are employed to save information on the previous and current frames. The position and occupancy status of each voxel are considered for detecting the voxels containing past data of moving objects. To increase detection accuracy, the trace data are sought only in the nonground group. Typically, verifying the intersection between a line segment and a voxel is repeated numerous times, which is time-consuming. To increase the speed, a method is proposed that relies on the three-dimensional Bresenham's line algorithm. Experiments were conducted, and the results showed the effectiveness of the proposed filtering system. With both static and moving sensors, the system immediately eliminated trace data while maintaining other static data, operating three times faster than the sensor rate.
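The reported speed-up relies on the three-dimensional Bresenham's line algorithm, which enumerates the voxels along a ray instead of testing every line-voxel intersection. Below is a self-contained sketch of that traversal over integer voxel coordinates; this is the textbook algorithm, not the authors' code.

```python
# 3D Bresenham traversal: all integer voxels visited from p0 to p1.

def bresenham_3d(p0, p1):
    x, y, z = p0
    x1, y1, z1 = p1
    dx, dy, dz = abs(x1 - x), abs(y1 - y), abs(z1 - z)
    sx = 1 if x1 > x else -1
    sy = 1 if y1 > y else -1
    sz = 1 if z1 > z else -1
    voxels = [(x, y, z)]
    if dx >= dy and dx >= dz:            # x is the driving axis
        e1, e2 = 2 * dy - dx, 2 * dz - dx
        while x != x1:
            x += sx
            if e1 > 0: y += sy; e1 -= 2 * dx
            if e2 > 0: z += sz; e2 -= 2 * dx
            e1 += 2 * dy; e2 += 2 * dz
            voxels.append((x, y, z))
    elif dy >= dx and dy >= dz:          # y is the driving axis
        e1, e2 = 2 * dx - dy, 2 * dz - dy
        while y != y1:
            y += sy
            if e1 > 0: x += sx; e1 -= 2 * dy
            if e2 > 0: z += sz; e2 -= 2 * dy
            e1 += 2 * dx; e2 += 2 * dz
            voxels.append((x, y, z))
    else:                                # z is the driving axis
        e1, e2 = 2 * dy - dz, 2 * dx - dz
        while z != z1:
            z += sz
            if e1 > 0: y += sy; e1 -= 2 * dz
            if e2 > 0: x += sx; e2 -= 2 * dz
            e1 += 2 * dy; e2 += 2 * dx
            voxels.append((x, y, z))
    return voxels

print(bresenham_3d((0, 0, 0), (5, 3, 2)))  # 6 voxels from origin to target
```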


The Journal of Supercomputing | 2016

A collaborative client participant fusion system for realistic remote conferences

Wei Song; Mingyun Wen; Yulong Xi; Phuong Minh Chu; Hoang Vu; Shokh-Jakhon Kayumiy; Kyungeun Cho

Remote conferencing systems provide a shared environment where people in different locations can communicate and collaborate in real time. Currently, remote video conferencing systems present separate video images of the individual participants. To achieve a more realistic conference experience, we enhance video conferencing by integrating the remote images into a shared virtual environment. This paper proposes a collaborative client participant fusion system using a real-time foreground segmentation method. In each client system, the foreground pixels are extracted from the participant images using a feedback background modeling method. Because the segmentation results often contain noise and holes caused by adverse environmental lighting conditions and substandard camera resolution, a Markov Random Field model is applied in the morphological operations of dilation and erosion. This foreground segmentation refining process is implemented using graphics processing unit programming, to facilitate real-time image processing. Subsequently, segmented foreground pixels are transmitted to a server, which fuses the remote images of the participants into a shared virtual environment. The fused conference scene is represented by a realistic holographic projection.
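A simplified stand-in for the segmentation-and-refinement stage is sketched below, assuming grayscale frames as NumPy arrays. Plain morphological closing via scipy.ndimage replaces the paper's Markov Random Field model and GPU implementation.

```python
# Background differencing plus morphological cleanup (a stand-in for the
# paper's MRF-based refinement; threshold and iteration counts are invented).
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def extract_foreground(frame, background, thresh=25):
    """Threshold the frame-background difference, then close small holes."""
    mask = np.abs(frame.astype(int) - background.astype(int)) > thresh
    mask = binary_dilation(mask, iterations=2)  # bridge gaps and fill holes
    mask = binary_erosion(mask, iterations=2)   # restore the object outline
    return mask

rng = np.random.default_rng(0)
bg = rng.integers(0, 40, (120, 160)).astype(np.uint8)
frame = bg.copy()
frame[40:80, 60:100] = 200                      # synthetic "participant"
print(extract_foreground(frame, bg).sum(), "foreground pixels")
```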


Journal of Sensors | 2015

Automated Space Classification for Network Robots in Ubiquitous Environments

Jiwon Choi; Seoungjae Cho; Phuong Minh Chu; Hoang Vu; Kyhyun Um; Kyungeun Cho

Network robots provide services to users in smart spaces while being connected to ubiquitous instruments through wireless networks in ubiquitous environments. For more effective behavior planning of network robots, it is necessary to reduce the state space by recognizing a smart space as a set of spaces. This paper proposes a space classification algorithm based on automatic graph generation and naive Bayes classification. The proposed algorithm first filters spaces in order of priority using automatically generated graphs, thereby minimizing the number of tasks that need to be predefined by a human. The filtered spaces then induce the final space classification result using naive Bayes space classification. The results of experiments conducted using virtual agents in virtual environments indicate that the performance of the proposed algorithm is better than that of conventional naive Bayes space classification.
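The naive Bayes stage can be illustrated with a toy classifier over counts of sensor events observed in each space. The spaces, events, and training data below are invented, and the graph-based filtering step is omitted.

```python
# Toy naive Bayes space classification with Laplace smoothing.
import math
from collections import Counter

train = {                                # invented training observations
    "kitchen":  ["stove", "fridge", "stove", "sink"],
    "bathroom": ["sink", "shower", "sink"],
}

def classify(events, train, alpha=1.0):
    vocab = {e for evs in train.values() for e in evs}
    best, best_lp = None, -math.inf
    for space, evs in train.items():
        counts, total = Counter(evs), len(evs)
        lp = math.log(1.0 / len(train))  # uniform prior over spaces
        for e in events:                 # smoothed per-event likelihoods
            lp += math.log((counts[e] + alpha) / (total + alpha * len(vocab)))
        if lp > best_lp:
            best, best_lp = space, lp
    return best

print(classify(["sink", "shower"], train))  # -> bathroom
```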


Symmetry | 2018

Multimedia System for Real-Time Photorealistic Nonground Modeling of 3D Dynamic Environment for Remote Control System

Phuong Minh Chu; Seoungjae Cho; Sungdae Sim; Kiho Kwak; Kyungeun Cho

Nowadays, unmanned ground vehicles (UGVs) are widely used for many applications. UGVs have sensors including multi-channel laser sensors, two-dimensional (2D) cameras, Global Positioning System receivers, and inertial measurement units (GPS–IMU). Multi-channel laser sensors and 2D cameras are installed to collect information regarding the environment surrounding the vehicle. Moreover, the GPS–IMU system is used to determine the position, acceleration, and velocity of the vehicle. This paper proposes a fast and effective method for modeling nonground scenes using multiple types of sensor data captured through a remote-controlled robot. The multi-channel laser sensor returns a point cloud in each frame. We separated the point clouds into ground and nonground areas before modeling the three-dimensional (3D) scenes. The ground part was used to create a dynamic triangular mesh based on the height map and vehicle position. The modeling of nonground parts in dynamic environments including moving objects is more challenging than modeling of ground parts. In the first step, we applied our object segmentation algorithm to divide nonground points into separate objects. Next, an object tracking algorithm was implemented to detect dynamic objects. Subsequently, nonground objects other than large dynamic ones, such as cars, were separated into two groups: surface objects and non-surface objects. We employed colored particles to model the non-surface objects. To model the surface and large dynamic objects, we used two dynamic projection panels to generate 3D meshes. In addition, we applied two processes to optimize the modeling result. First, we removed any trace of the moving objects, and collected the points on the dynamic objects in previous frames. Next, these points were merged with the nonground points in the current frame. We also applied sliding-window and near-point projection techniques to fill the holes in the meshes. Finally, we applied texture mapping using 2D images captured by three cameras installed on the front of the robot. The results of the experiments prove that our nonground modeling method can model photorealistic, real-time 3D scenes around a remote-controlled robot.
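Of the many steps above, the ground meshing is easy to sketch in isolation: given a regular height map, emit two triangles per grid cell. The grid spacing and vertex layout below are illustrative assumptions, not the paper's mesh format.

```python
# Height map to triangle mesh: two triangles per grid cell (illustrative).

def heightmap_to_triangles(hmap, cell=1.0):
    """Emit triangles as tuples of three (x, y, z) vertices."""
    tris = []
    for r in range(len(hmap) - 1):
        for c in range(len(hmap[0]) - 1):
            p00 = (c * cell, r * cell, hmap[r][c])
            p10 = ((c + 1) * cell, r * cell, hmap[r][c + 1])
            p01 = (c * cell, (r + 1) * cell, hmap[r + 1][c])
            p11 = ((c + 1) * cell, (r + 1) * cell, hmap[r + 1][c + 1])
            tris += [(p00, p10, p11), (p00, p11, p01)]  # split cell in two
    return tris

hmap = [[0.0, 0.1], [0.2, 0.15]]                 # one cell of fake terrain
print(len(heightmap_to_triangles(hmap)), "triangles")  # 2
```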


International Conference on Multisensor Fusion and Integration for Intelligent Systems | 2017

Fast point cloud segmentation based on flood-fill algorithm

Phuong Minh Chu; Seoungjae Cho; Yong Woon Park; Kyungeun Cho

In this study, we propose a fast and effective approach, based on the flood-fill algorithm, for segmenting a 3D point cloud acquired by a 3D multi-channel laser range sensor into distinct objects. First, we divide the point cloud into two groups: ground and nonground points. Next, we segment clusters in each scanline dataset from the group of nonground points. Each scanline cluster is joined with other scanline clusters using the flood-fill algorithm. In this manner, each group of scanline clusters represents an object in the 3D environment. Finally, we obtain each object separately. Experiments show that our method can segment objects accurately and in real time.
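A toy version of the cluster-joining step: assuming each scanline's clusters have already been extracted as (start, end) column intervals, a breadth-first flood fill merges vertically adjacent, horizontally overlapping clusters into objects. The data and interval representation are invented for illustration.

```python
# Flood-fill joining of per-scanline clusters into objects.
from collections import deque

scanlines = [[(0, 3), (7, 9)], [(1, 4), (8, 9)], [(2, 3)]]  # invented clusters

def overlaps(a, b):
    return a[0] <= b[1] and b[0] <= a[1]

def flood_fill_objects(scanlines):
    nodes = [(r, i) for r, row in enumerate(scanlines) for i in range(len(row))]
    seen, objects = set(), []
    for start in nodes:
        if start in seen:
            continue
        queue, component = deque([start]), []
        seen.add(start)
        while queue:                     # BFS over neighboring scanlines
            r, i = queue.popleft()
            component.append((r, i))
            for r2 in (r - 1, r + 1):
                if 0 <= r2 < len(scanlines):
                    for j, iv in enumerate(scanlines[r2]):
                        if (r2, j) not in seen and overlaps(scanlines[r][i], iv):
                            seen.add((r2, j))
                            queue.append((r2, j))
        objects.append(component)
    return objects

print(len(flood_fill_objects(scanlines)), "objects")  # 2: left and right blobs
```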


International Conference on Multisensor Fusion and Integration for Intelligent Systems | 2017

Real-time 3D scene modeling using dynamic billboard for remote robot control systems

Phuong Minh Chu; Seoungjae Cho; Hieu Trong Nguyen; Sungdae Sim; Kiho Kwak; Kyungeun Cho

In this paper, a method for modeling three-dimensional scenes from a Lidar point cloud, together with a billboard calibration approach for remote mobile robot control applications, is presented as a combined two-step approach. First, by projecting a local three-dimensional point cloud onto a two-dimensional coordinate system, we obtain a list of colored points. Based on this list, we apply a proposed ground segmentation algorithm to separate ground and non-ground areas. From the ground part, a dynamic triangular mesh is created by means of a height map and the vehicle position. The non-ground part is divided into small groups. Then, a local voxel map is applied for modeling each group. As a result, all the inner surfaces are eliminated. Second, for billboard calibration, we implement three stages in each frame. In the first stage, an average ground point is estimated at the billboard location. In the second stage, the distortion angle is calculated. In the final stage, the billboard is updated for each frame to correspond to the terrain gradient.
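A rough sketch of the per-frame billboard update follows, assuming ground points near the billboard are available as (x, y, z) tuples with z up. The neighborhood radius and the two-sample slope estimate are loose simplifications of the paper's distortion-angle calculation.

```python
# Estimate a billboard tilt angle from nearby ground points (simplified).
import math

def billboard_tilt(ground_pts, bx, by, radius=2.0):
    """Tilt (radians) aligning the billboard with the local terrain slope."""
    near = [p for p in ground_pts
            if (p[0] - bx) ** 2 + (p[1] - by) ** 2 <= radius ** 2]
    front = [p[2] for p in near if p[1] <= by]   # mean height on each side
    back = [p[2] for p in near if p[1] > by]
    if not front or not back:
        return 0.0                               # too few samples: no tilt
    dz = sum(back) / len(back) - sum(front) / len(front)
    return math.atan2(dz, radius)                # slope along the y axis

pts = [(0, -1, 0.0), (0, 1, 0.4), (1, -1, 0.1), (1, 1, 0.5)]
print(round(billboard_tilt(pts, 0.5, 0.0), 3))   # ~0.197 rad uphill tilt
```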


Archive | 2017

A Ground Segmentation Method Based on Gradient Fields for 3D Point Clouds

Hoang Vu; Hieu Trong Nguyen; Phuong Minh Chu; Seoungjae Cho; Kyungeun Cho

In order to navigate in an unknown environment, autonomous robots must distinguish traversable ground regions from impassable obstacles. Thus, ground segmentation is a crucial step for handling this issue. This study proposes a new ground segmentation method combining two different techniques: gradient threshold segmentation and mean height evaluation. Ground regions near the center of the sensor are segmented using the gradient threshold technique, while sparse regions are segmented using mean height evaluation. The main contribution of this study is a new ground segmentation algorithm that can be applied to various 3D point clouds. The processing time is acceptable and allows real-time processing of sensor data.
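The gradient threshold technique admits a compact sketch: walk outward along one sensor ray and compare the slope between consecutive returns against a fixed bound. The threshold value and the once-obstacle-always-obstacle propagation below are illustrative simplifications.

```python
# Gradient-threshold ground labeling along one ray of (distance, height) pairs.
import math

def label_ground(ray, max_slope_deg=10.0):
    """Label each return ground (True) or obstacle (False)."""
    max_slope = math.tan(math.radians(max_slope_deg))
    labels = [True]                      # nearest return assumed ground
    for (d0, h0), (d1, h1) in zip(ray, ray[1:]):
        slope = abs(h1 - h0) / max(d1 - d0, 1e-6)
        labels.append(labels[-1] and slope <= max_slope)
    return labels

ray = [(2.0, 0.00), (3.0, 0.05), (4.0, 0.60), (5.0, 0.65)]
print(label_ground(ray))                 # [True, True, False, False]
```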


International Journal of Advanced Robotic Systems | 2017

Adaptive ground segmentation method for real-time mobile robot control

Hoang Vu; Hieu Trong Nguyen; Phuong Minh Chu; Weiqiang Zhang; Seoungjae Cho; Yong Woon Park; Kyungeun Cho

For an autonomous mobile robot operating in an unknown environment, distinguishing obstacles from the traversable ground region is an essential step in determining whether the robot can traverse the area. Ground segmentation thus plays a critical role in autonomous mobile robot navigation in challenging environments, especially in real time. In this article, a ground segmentation method is proposed that combines three techniques: gradient threshold, adaptive break point detection, and mean height evaluation. Based on three-dimensional (3D) point clouds obtained from a Velodyne HDL-32E sensor, and by exploiting the structure of a two-dimensional reference image, the 3D data are represented as a graph data structure. This process serves both as a preprocessing step and as a means of visualizing very large datasets, segmenting mobile-generated data, and building maps of the area. Various types of 3D data, such as ground regions near the sensor center, uneven regions, and sparse regions, need to be represented and segmented. For the ground regions, we apply the gradient threshold technique for segmentation. We address the uneven regions using adaptive break points. Finally, for the sparse regions, we segment the ground by using a mean height evaluation.
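Of the three techniques, adaptive break point detection has a compact standard form: flag a break wherever the gap between consecutive returns exceeds a bound that grows with range. The sketch below follows a common adaptive-breakpoint heuristic and is not necessarily the authors' exact formulation.

```python
# Adaptive break-point detection on consecutive ranges from one laser ring.
import math

def break_points(ranges, ang_step_deg=0.16, lam_deg=10.0):
    """Indices where the jump to the previous return exceeds an adaptive bound."""
    dphi = math.radians(ang_step_deg)    # sensor angular resolution (assumed)
    lam = math.radians(lam_deg)          # worst-case incidence angle (assumed)
    breaks = []
    for i in range(1, len(ranges)):
        # The allowed gap grows with range: distant returns on one surface
        # are legitimately farther apart than nearby ones.
        dmax = ranges[i - 1] * math.sin(dphi) / math.sin(lam - dphi)
        if abs(ranges[i] - ranges[i - 1]) > dmax:
            breaks.append(i)
    return breaks

print(break_points([10.0, 10.02, 10.05, 14.0, 14.01]))  # [3]: new surface
```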

Collaboration


Dive into Phuong Minh Chu's collaborations.

Top Co-Authors

Kiho Kwak
Agency for Defense Development

Sungdae Sim
Agency for Defense Development

Yong Woon Park
Agency for Defense Development

Wei Song
North China University of Technology