Jaebyung Park
Chonbuk National University
Publications
Featured research published by Jaebyung Park.
Intelligent Service Robotics | 2008
Kong-Woo Lee; Jaebyung Park; Beom Hee Lee
In this paper, we propose a new localization algorithm based on a hybrid trilateration algorithm for obtaining an accurate position of a robot in intelligent space. The proposed algorithm also estimates the position of the moving robot with an extended Kalman filter, taking time synchronization and the robot's velocity into account. To realize the localization system, we employ several smart sensors: beacons mounted on the ceiling of the intelligent space and a listener attached to the robot. Finally, simulation results show the feasibility and effectiveness of the proposed localization algorithm compared with existing trilateration algorithms.
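As a rough illustration of the range-based positioning step that such algorithms build on (not the paper's hybrid method or its EKF), the following Python sketch solves the standard trilateration problem by linearizing the range equations; the beacon layout and ranges are made up for the example.

```python
import numpy as np

def trilaterate(beacons, ranges):
    """Least-squares trilateration: estimate a 2-D position from
    ranges to known beacon positions. beacons: (n, 2), ranges: (n,)."""
    # Subtracting the first range equation from the others turns
    # ||p - b_i||^2 = r_i^2 into a linear system A p = c.
    b0, r0 = beacons[0], ranges[0]
    A = 2.0 * (beacons[1:] - b0)
    c = (r0**2 - ranges[1:]**2
         + np.sum(beacons[1:]**2, axis=1) - np.sum(b0**2))
    pos, *_ = np.linalg.lstsq(A, c, rcond=None)
    return pos

beacons = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0], [4.0, 3.0]])
true_pos = np.array([1.5, 1.0])
ranges = np.linalg.norm(beacons - true_pos, axis=1)
print(trilaterate(beacons, ranges))  # ~ [1.5, 1.0]
```

In the paper's setting, such a geometric fix would serve as the measurement feeding the extended Kalman filter's correction step.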
Journal of Institute of Control, Robotics and Systems | 2015
Jungkil Park; Jaebyung Park
In this paper, an object recognition method based on the depth information from the RGB-D camera Xtion is proposed for an indoor mobile robot. First, the RANdom SAmple Consensus (RANSAC) algorithm is applied to the point cloud obtained from the RGB-D camera to detect and remove the floor points. Next, the remaining point cloud is partitioned into per-object point clouds by the k-means clustering method, and the normal vector at each point is obtained using a k-d tree search. The obtained normal vectors are classified by a trained multi-layer perceptron into 18 classes and used as features for object recognition. To distinguish one object from another, the similarity between them is measured using the Levenshtein distance. To verify the effectiveness and feasibility of the proposed object recognition method, experiments are carried out with several similar boxes.
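A minimal sketch of the first two preprocessing stages using Open3D, under the assumption that the dominant plane in the indoor scene is the floor; the file name and thresholds are illustrative, and the clustering and MLP stages are omitted.

```python
import open3d as o3d

# Point cloud from the RGB-D sensor (file name is a placeholder).
pcd = o3d.io.read_point_cloud("scene.pcd")

# RANSAC plane fit: treat the largest plane as the floor and drop it.
plane, floor_idx = pcd.segment_plane(distance_threshold=0.02,
                                     ransac_n=3, num_iterations=1000)
objects = pcd.select_by_index(floor_idx, invert=True)

# Per-point normals from k-d tree neighborhoods; the paper quantizes
# these directions into 18 classes as features for the perceptron.
objects.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamKNN(knn=30))
```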
Robot and Human Interactive Communication | 2013
Seongkeun Kwak; Ji-Hyoung Ryu; Kil-To Chong; Jaebyung Park
In a pipeline-type UV sterilizer, the UV lamps are located inside the conduit, so it is hard to tell from the outside whether a lamp is working. A control panel is therefore usually installed outside to monitor and control the UV lamps and the cleaning wiper. However, setting up such a panel not only incurs installation cost but also requires additional space. In this paper, an interface using a smart device is proposed to monitor and control a pipeline-type UV sterilizer. It is more convenient and economical than the existing control panel. It is also extensible: further sensors and cleaning motions for the UV lamp guard can be added to maintain sterilizing power.
Journal of the Korea Academia-Industrial cooperation Society | 2013
Byeung-Jun Park; Ji-Hyoung Ryu; Jaebyung Park
In this paper, an integrated control system is proposed for automatic control and remote monitoring of a pipeline-type UV sterilizer. The proposed system can control the cleaning wiper in the sterilizer with various cleaning motions and periodically check the contamination level of the UV lamps with the UV power sensors. Therefore, sterilizer repair and maintenance can be done more effectively. In addition, the control system, based on an open-source processor, can communicate with external smart devices via Bluetooth and thus wirelessly exchange control commands and sensor data. Furthermore, the system can flexibly cope with changes in cleaning motions and sensors since its firmware can be wirelessly upgraded using the smart device. Consequently, the proposed system is suitable for constructing a smart sewage treatment system in small towns.
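The abstract does not give the command protocol, so the following Python sketch only illustrates the general shape of such a Bluetooth serial command loop; the port name and the WIPE/UVLEVEL commands are invented for the example.

```python
import serial  # pyserial; a paired Bluetooth link shows up as a serial port

PORT = "/dev/rfcomm0"  # placeholder device path

def run_wiper(pattern):
    print(f"running wiper motion: {pattern}")  # stand-in for motor control

def read_uv_power():
    return 0.87  # stand-in for a UV power sensor reading

with serial.Serial(PORT, 9600, timeout=1) as link:
    while True:
        cmd = link.readline().decode().strip()
        if cmd.startswith("WIPE "):      # e.g. "WIPE oscillate"
            run_wiper(cmd.split(maxsplit=1)[1])
        elif cmd == "UVLEVEL":           # report lamp contamination level
            link.write(f"{read_uv_power():.2f}\n".encode())
```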
Journal of Electronic Imaging | 2012
Woonchul Ham; Jaebyung Park; Enkhbaatar Tumenjargal; Luubaatar Badarch; Hyeokjae Kwon
This paper presents a new model of a complementary metal-oxide-semiconductor (CMOS) camera using combinations of several pinhole camera models, and its validity is verified using synthesized stereo images based on OpenGL software. Our embedded three-dimensional (3-D) image capturing hardware system consists of five motor controllers and two CMOS camera modules based on an S3C6410 processor. An optimal alignment for capturing nine segment images, each with its own convergence plane, is implemented using a PI controller based on measures of alignment and sharpness. A new synthesizing fusion of the optimized nine segment images is proposed for the best 3-D depth perception. Based on the experimental results for the disparity values in each of the nine segments, the multi-segment method proposed in this paper improves the perception of 3-D depth in stereo images.
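For reference, the baseline disparity computation that such depth-perception experiments rest on can be sketched with OpenCV's block matcher; this is standard stereo matching, not the paper's nine-segment fusion, and the image files are placeholders.

```python
import cv2

# Rectified grayscale stereo pair (file names are placeholders).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block-matching disparity; StereoBM returns 16.4 fixed-point values,
# hence the division by 16.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype("float32") / 16.0
```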
Human Robot Interaction | 2018
WonHyong Lee; Jaebyung Park; Chung Hyuk Park
This paper presents a novel acceptability study of a tele-assistive robotic nurse for human-robot collaboration in medical environments. We designed a telepresence robotic platform with a multi-modal interaction framework that provides haptic tele-operative control, gesture control, and voice commands for the robotic system to achieve a mobile manipulation goal. We then performed a pilot user study in which participants experienced the robotic control scenarios, and analyzed the acceptability of our robotic system in terms of usability and safety criteria. The paper presents the analysis of surveys from 11 participants and concludes that gesture-based control is the most difficult for users, while voice-based control is easier but does not assure safety.
International Conference on Ubiquitous Robots and Ambient Intelligence | 2015
Jungkil Park; Jaebyung Park
A ball tracking method for a billiard assistant system is proposed, based on detecting the ball and the direction of the user's cue. The captured image is processed in the following steps. First, the image is binarized to extract the table, the ball, and the cue by separating channels in the HSV color model. Morphological operations are then applied to reduce noise before the proposed tracking method is used. Finally, the method detects the ball by geometric prediction based on the cue's direction and position.
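A compact sketch of the segmentation steps in OpenCV; the HSV bounds are illustrative values for a red ball, not the paper's calibration, and the geometric cue-direction prediction is not reproduced.

```python
import cv2
import numpy as np

frame = cv2.imread("table.png")  # placeholder frame from the camera
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Binarize by thresholding in HSV channels (bounds are examples).
mask = cv2.inRange(hsv, np.array([0, 120, 80]), np.array([10, 255, 255]))

# Morphological opening removes speckle noise before tracking.
kernel = np.ones((5, 5), np.uint8)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

# Ball position from the largest remaining blob.
cnts, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if cnts:
    (x, y), r = cv2.minEnclosingCircle(max(cnts, key=cv2.contourArea))
    print(f"ball at ({x:.0f}, {y:.0f}), radius {r:.0f} px")
```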
International Conference on Ubiquitous Robots and Ambient Intelligence | 2014
Sungmin Lee; Jungkil Park; Jaebyung Park
The tip-over of a mobile robot on a given terrain can be analyzed using a support polygon, which is determined by the contact points between the robot and the ground. If the intersection of the extended line of the resultant force acting on the robot with the ground plane lies inside the support polygon, tip-over does not occur; otherwise, it does. However, tip-over analysis with a support polygon must consider not only the position of the mobile robot but also its orientation: tip-over on a given terrain cannot be analyzed in advance unless the robot's orientation on the terrain is determined beforehand. Thus, in this paper, the support polygon is approximated by its inscribed circle, defined as the support inscribed circle. Since the boundary of this circle is equidistant from its center, tip-over can be analyzed without considering the orientation of the robot. The tip-over terrain is then analyzed by considering the support inscribed circle together with the acceleration of the mobile robot. Finally, the tip-over of a four-wheeled mobile robot, the ERP-42, is analyzed using the support inscribed circle.
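The test described above reduces to a point-in-circle check once the support polygon is replaced by its inscribed circle; the following sketch assumes a known circle center and radius, with made-up numbers for the example.

```python
import numpy as np

def tips_over(com, force, center, radius):
    """Orientation-free tip-over test: intersect the line of the
    resultant force (through the center of mass `com`, direction
    `force`) with the ground plane z = 0 and check whether the hit
    point falls outside the support inscribed circle."""
    com, force = np.asarray(com, float), np.asarray(force, float)
    t = -com[2] / force[2]            # line parameter at z = 0
    hit = com[:2] + t * force[:2]     # intersection on the ground plane
    return np.linalg.norm(hit - np.asarray(center, float)) > radius

# Gravity alone through a centered mass: stable.
print(tips_over([0, 0, 0.5], [0, 0, -9.8], [0, 0], 0.3))    # False
# Add a strong lateral acceleration: the line exits the circle.
print(tips_over([0, 0, 0.5], [8.0, 0, -9.8], [0, 0], 0.3))  # True
```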
International Conference on Ubiquitous Robots and Ambient Intelligence | 2014
Jungkil Park; Sungmin Lee; Jaebyung Park
In this paper, we identify the problems of existing wall estimation methods that use a 2D sensor and propose a reliable wall estimation method using an RGB-D camera. First, a point cloud is obtained from the RGB-D camera, and a surface normal is calculated at each point. Next, planes such as the floor and the right and left walls are classified by thresholding the angle between each normal vector and the gravity direction. The right and left planes can then be used as reference walls for wall following. Since a relatively long wall segment can be obtained from just one frame of the depth image, the wall is estimated reliably and rapidly.
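A small numpy sketch of the angle-thresholding step; the 20-degree threshold and the gravity direction along -z are assumptions for illustration.

```python
import numpy as np

def classify_normals(normals, angle_deg=20.0):
    """Label each point from the angle between its unit normal and
    gravity: near-parallel normals belong to the floor, near-
    perpendicular ones to walls. normals: (n, 3) unit vectors."""
    gravity = np.array([0.0, 0.0, -1.0])
    cos = np.abs(normals @ gravity)
    labels = np.full(len(normals), "other", dtype=object)
    labels[cos > np.cos(np.radians(angle_deg))] = "floor"
    labels[cos < np.sin(np.radians(angle_deg))] = "wall"
    return labels

print(classify_normals(np.array([[0.0, 0.0, 1.0],     # floor point
                                 [1.0, 0.0, 0.0]])))  # wall point
```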
International Conference on Ubiquitous Robots and Ambient Intelligence | 2013
Sungmin Lee; Jaebyung Park
This paper proposes an iterative measurement method for precise welding-plane detection for automatic welding robots, based on principal component analysis (PCA). The robot has a laser scanner on its wrist and can detect many points on the plane through rotational motion of the wrist. By applying PCA to the detected points, the equation of the welding plane can be obtained. Since the plane is detected most accurately when its normal vector is parallel to the central axis of the laser scanner, the manipulator is iteratively controlled to align the central axis with the normal vector. Through this iterative measurement process, a precise plane equation is obtained. To verify the proposed method, experiments with a prototype of the rotational laser scanner are carried out.
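The PCA step itself is compact: the plane normal is the eigenvector of the point covariance with the smallest eigenvalue. A minimal numpy sketch follows (the points are synthetic; the iterative re-alignment control is not reproduced).

```python
import numpy as np

def fit_plane_pca(points):
    """PCA plane fit: the smallest-variance direction of the scanned
    points is the plane normal; the centroid lies on the plane.
    points: (n, 3) array. Returns (normal, d) with n . x + d = 0."""
    centroid = points.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov((points - centroid).T))
    normal = eigvecs[:, 0]            # eigenvalues come back ascending
    return normal, -normal @ centroid

# Noisy synthetic points near the plane z = 0.1x + 0.2.
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(200, 2))
z = 0.1 * xy[:, 0] + 0.2 + 0.01 * rng.standard_normal(200)
normal, d = fit_plane_pca(np.column_stack([xy, z]))
# The angle between `normal` and the scanner's central axis would drive
# the iterative re-alignment described above.
print(normal, d)
```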