Kwang-soo Kim
Hanbat National University
Publications
Featured research published by Kwang-soo Kim.
international conference on control, automation and systems | 2010
Muhammad Latif Anjum; Jaehong Park; Wonsang Hwang; Hyun-il Kwon; Jong-hyeon Kim; Changhun Lee; Kwang-soo Kim; Dong-il Dan Cho
This paper presents a sensor-data-fusion method using an Unscented Kalman Filter (UKF) to implement an accurate localization system for mobile robots. Integrating data from various sensors with an efficient sensor fusion algorithm is required to achieve continuous and accurate localization of mobile robots. We use data from a low-cost accelerometer, a gyroscope, and encoders to obtain robot motion information. The UKF, used as an efficient sensor fusion algorithm, is an advanced filtering technique which outperforms the widely used Extended Kalman Filter (EKF) in many applications. The system is able to compensate for slip errors by switching between two different UKF models built for the slip and no-slip cases. Since the accelerometer error accumulates over time because of the double integration, the accelerometer data are used only in the slip model of the UKF. The results obtained from the UKF sensor fusion algorithm are compared with the results from an accurate laser distance sensor. The experimental results show that the system is able to accurately track the motion of the robot in various motion scenarios, including the scenario where the robot's encoder data are not reliable due to slip.
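As a rough illustration of the unscented machinery this kind of filter builds on, the sketch below propagates an uncertain unicycle pose through a nonlinear motion model with sigma points (the UKF prediction step). The motion model, the simplified single weight set, and all numeric values are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def sigma_points(mean, cov, alpha=1.0, kappa=0.0):
    """Generate 2n+1 sigma points and a simplified single weight set."""
    n = len(mean)
    lam = alpha ** 2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)
    pts = [mean] + [mean + S[:, i] for i in range(n)] \
                 + [mean - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    w[0] = lam / (n + lam)
    return np.array(pts), w

def motion_model(x, v, omega, dt):
    """Unicycle model: state x = [px, py, heading]."""
    px, py, th = x
    return np.array([px + v * dt * np.cos(th),
                     py + v * dt * np.sin(th),
                     th + omega * dt])

# Propagate an uncertain pose through the nonlinear model (UKF predict).
mean = np.array([0.0, 0.0, 0.1])
cov = np.diag([0.01, 0.01, 0.005])
pts, w = sigma_points(mean, cov)
prop = np.array([motion_model(p, v=0.5, omega=0.2, dt=0.1) for p in pts])
new_mean = w @ prop
diff = prop - new_mean
new_cov = diff.T @ (diff * w[:, None])
print(new_mean, np.diag(new_cov))
```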
ieee/sice international symposium on system integration | 2011
Tae-il Kim; Wook Bahn; Chang-hun Lee; Tae-jae Lee; Muhammad Muneeb Shaikh; Kwang-soo Kim
This paper presents a vision-tracking system for mobile robots, which tracks a moving target based on robot motion and stereo vision information. The proposed system controls pan and tilt actuators attached to a stereo camera, using data from a gyroscope, robot wheel encoders, pan and tilt actuator encoders, and the stereo camera. Using this system, the stereo camera always faces the moving target. The developed system calculates the angles of the pan and tilt actuators by estimating the position of the target relative to the position of the robot. The developed system estimates the target position using the robot motion information and the stereo vision information. The movement of the robot is modeled as a frame transformation consisting of a rotation and a translation. The developed system calculates the rotation from 3-axis gyroscope data and the translation from robot wheel encoder data. The proposed system measures the position of the target relative to the robot by combining the encoder data of the pan and tilt actuators with the disparity map of the stereo vision. The inevitable mismatch of the data, which arises from the asynchrony of the multiple sensors, is prevented by the proposed system, which compensates for the communication latency and the computation time. The experimental results show that the developed system achieves excellent tracking performance in several motion scenarios, including combinations of straight and curved paths and the climbing of slopes.
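A minimal sketch of the pointing geometry involved: converting a target position expressed in the robot frame into pan and tilt commands. The frame convention (x forward, y left, z up) and the camera height are assumed values for illustration.

```python
import math

def pan_tilt_from_target(target_xyz_robot, cam_height=0.3):
    """Return (pan, tilt) in radians that point the camera at a target
    given in the robot frame (x forward, y left, z up)."""
    x, y, z = target_xyz_robot
    pan = math.atan2(y, x)                    # rotate about the vertical axis
    horiz = math.hypot(x, y)                  # ground-plane distance
    tilt = math.atan2(z - cam_height, horiz)  # elevate toward the target
    return pan, tilt

print(pan_tilt_from_target((2.0, 0.5, 0.6)))  # target 2 m ahead, slightly left
```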
international conference on control, automation and systems | 2010
Wonsang Hwang; Jaehong Park; Hyun-il Kwon; Muhammad Latif Anjum; Jong-hyeon Kim; Changhun Lee; Kwang-soo Kim; Dong-il Dan Cho
The vision tracking system in this paper estimates the robot position relative to a target and rotates the camera towards the target. To estimate the position of the mobile robot, the system combines information from an accelerometer, a gyroscope, two encoders, and a vision sensor. The encoders can provide fairly accurate robot position information, but the encoder data are not reliable when the robot wheels slip. Accelerometer data can provide the robot position information even when the wheels are slipping, but long-term position estimation is difficult because the integration accumulates errors arising from bias and noise. To overcome the drawbacks of each method mentioned above, the proposed system uses data fusion with two Kalman filters and a slip detector. One Kalman filter is for the slip case, and the other is for the no-slip case. Each Kalman filter uses a different sensor combination for estimating the robot motion. The slip detector compares the data from the accelerometer with the data from the encoders and decides whether a slip condition has occurred. Based on the decision of the slip detector, the system chooses one of the outputs of the two Kalman filters, which is subsequently used to calculate the camera angle of the vision tracking system. The vision tracking system is implemented on a two-wheeled robot. To evaluate the tracking and recognition performance of the implemented system, experiments are performed for various robot motion scenarios in various environments.
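The slip detector's core comparison could look like the toy sketch below, which differentiates the encoder velocity and compares it against the accelerometer reading; the threshold, signal names, and numbers are illustrative assumptions, not the paper's values.

```python
# Toy slip detector: flag slip when the acceleration implied by the
# encoders disagrees with the accelerometer (threshold is assumed).
def detect_slip(enc_vel_prev, enc_vel, imu_accel, dt, threshold=0.5):
    """Return True when encoder-derived acceleration and the measured
    acceleration differ by more than `threshold` m/s^2."""
    enc_accel = (enc_vel - enc_vel_prev) / dt
    return abs(enc_accel - imu_accel) > threshold

# Wheels report constant speed while the IMU feels a hard deceleration:
print(detect_slip(0.50, 0.50, -2.0, dt=0.01))  # True -> use the slip filter
```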
intelligent robots and systems | 2010
Jaehong Park; Wonsang Hwang; Hyun-il Kwon; Jong-hyeon Kim; Chang-hun Lee; M. Latif Anjum; Kwang-soo Kim; Dong-il Dan Cho
This paper introduces a high-performance vision tracking system for mobile robots using sensor data fusion. For mobile robots, it is difficult to collect continuous vision information because of the robot's motion. To solve this problem, the proposed vision tracking system estimates the robot's position relative to a target and rotates the camera towards the target. This concept is derived from the human eye reflex mechanism known as the Vestibulo-Ocular Reflex (VOR), which compensates for head motion. This concept for tracking the target results in much higher performance, compared with the conventional method that rotates the camera using only vision information. The proposed system does not require heavy computing loads to process image data and can track the target continuously, even during vision occlusion. The robot motion information is estimated using data from an accelerometer, a gyroscope, and encoders. This multi-sensor data fusion is achieved using a Kalman filter. The proposed vision tracking system is implemented on a two-wheeled robot. The experimental results show that the proposed system achieves excellent tracking and recognition performance in various motion scenarios, including scenarios where the camera is temporarily blocked from the target.
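In the VOR spirit described above, the camera can be counter-rotated by the integrated gyro rate between (comparatively rare) vision corrections. The sketch below shows that control structure; the function names, gain, and rates are assumptions for illustration.

```python
# VOR-style camera stabilization sketch: counter-rotate the camera by
# the robot's measured rotation; occasionally trim with vision.
def stabilize(camera_angle, gyro_rate, dt, vision_error=None, k=0.5):
    """camera_angle is relative to the robot body (rad). The gyro term
    cancels the robot's rotation; vision_error, when available, trims
    the accumulated drift with a proportional correction."""
    camera_angle -= gyro_rate * dt
    if vision_error is not None:
        camera_angle += k * vision_error
    return camera_angle

angle = 0.0
for _ in range(100):                 # robot turns at 0.3 rad/s for 1 s
    angle = stabilize(angle, gyro_rate=0.3, dt=0.01)
print(angle)                         # ~ -0.3: world-frame gaze unchanged
```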
ieee/sice international symposium on system integration | 2011
Muhammad Muneeb Shaikh; Wook Bahn; Changhun Lee; Tae-Il Kim; Tae-Jae Lee; Kwang-soo Kim; Dong-il Dan Cho
This paper introduces a vision tracking system for a mobile robot using an Unscented Kalman Filter (UKF). The proposed system accurately estimates the position and orientation of the mobile robot by integrating information received from encoders, inertial sensors, and active beacons. These position and orientation estimates are used to rotate the camera towards the target during robot motion. The UKF, used as an efficient sensor fusion algorithm, is an advanced filtering technique which reduces the position and orientation errors of the sensors. The designed system compensates for slip error by switching between two different UKF models, designed for the slip and no-slip cases, respectively. A slip detector detects the slip condition by comparing the data from the accelerometer and the encoders, and selects the corresponding UKF model as the output of the system. The experimental results show that the proposed system is able to locate the robot with significantly reduced position errors and to successfully track the target in various environments and robot motion scenarios.
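One standard way to turn active-beacon ranges into a position fix, offered here purely as a generic sketch (the abstract does not specify the beacon solver used), is linearized least-squares trilateration:

```python
import numpy as np

def trilaterate(beacons, ranges):
    """Solve for 2-D position from >=3 beacon positions and ranges by
    linearizing the range equations against the first beacon."""
    b0, r0 = beacons[0], ranges[0]
    A, y = [], []
    for bi, ri in zip(beacons[1:], ranges[1:]):
        A.append(2 * (bi - b0))                      # 2(bi - b0) . x = ...
        y.append(r0**2 - ri**2 + bi @ bi - b0 @ b0)  # squared-range difference
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(y), rcond=None)
    return pos

beacons = [np.array(p, float) for p in [(0, 0), (4, 0), (0, 4)]]
truth = np.array([1.0, 2.0])
ranges = [np.linalg.norm(truth - b) for b in beacons]
print(trilaterate(beacons, ranges))   # ~ [1. 2.]
```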
intelligent robots and systems | 2010
Hyun-il Kwon; Jaehong Park; Wonsang Hwang; Jong-hyeon Kim; Chang-hun Lee; M. Latif Anjum; Kwang-soo Kim; Dong-il Dan Cho
This paper presents a vision tracking system that achieves high recognition performance under dynamic circumstances, using a fuzzy logic controller. The main concept of the proposed system is based on the vestibulo-ocular reflex (VOR) and the opto-kinetic reflex (OKR) of the human eye. To realize the VOR concept, MEMS inertial sensors and encoders are used for robot motion detection. This concept turns the camera towards a selected target, counteracting the robot motion. Based on the OKR concept, the targeting errors are periodically compensated using vision information. The fuzzy logic controller uses sensor data fusion to detect slip or collision occurrences. To calculate the heading angle of the camera accurately, the output of the fuzzy logic controller and the vision information from the camera are combined using an extended Kalman filter. The proposed vision tracking system is implemented on a mobile robot and evaluated experimentally, measuring tracking and recognition success rates. The developed system achieves excellent tracking and recognition performance during slip or collision occurrences under dynamic circumstances.
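As a toy illustration of the fuzzy idea (the membership shape and breakpoints below are invented for the example, not taken from the paper), a sensor mismatch can be mapped to a graded degree of slip rather than a hard yes/no decision:

```python
# Toy fuzzy membership: map the encoder/accelerometer mismatch (m/s^2)
# to a degree of slip in [0, 1] via a saturating ramp.
def slip_degree(mismatch, low=0.2, high=1.0):
    """0 below `low`, 1 above `high`, linear in between (values assumed)."""
    m = abs(mismatch)
    return min(1.0, max(0.0, (m - low) / (high - low)))

print(slip_degree(0.1), slip_degree(0.6), slip_degree(2.0))  # 0.0 0.5 1.0
```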
international conference on robotics and automation | 2011
Wook Bahn; Jaehong Park; Chang-hun Lee; Tae-il Kim; Tae-jae Lee; Muhammad Muneeb Shaikh; Kwang-soo Kim; Dong-il Dan Cho
This paper presents a vision-tracking system for a mobile robot, using robot motion and stereo vision data. The mobile robot has an actuator module which pans and tilts an integrated stereo camera. The proposed system controls the actuator to maintain the line of sight of the stereo camera towards a stationary target by using the robot motion data. The robot motion data are obtained from a gyroscope and the encoders of the mobile robot. The stereo vision data from the camera are used to compensate for errors in the motion measurements and to prevent long-term error accumulation. This vision-based compensation is used only when the robot stops or moves slowly, because long vision processing times can cause the control loop to overrun. The proposed system is experimentally evaluated while the robot moves along a trajectory.
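The scheduling idea, running the slow vision correction only near standstill so the fast control cycle never misses its deadline, might be gated as in this sketch; the speed threshold and names are assumptions:

```python
# Gate the slow vision-based correction by robot speed so image
# processing never runs inside a fast control cycle (values assumed).
def run_cycle(wheel_speed, gyro_angle, vision_fix, speed_thresh=0.05):
    """Fast path: dead-reckoned angle only. Slow path (robot nearly
    stopped): also apply the vision correction to cancel drift."""
    if abs(wheel_speed) < speed_thresh:
        return gyro_angle + vision_fix()     # may take tens of ms
    return gyro_angle                        # keep the loop fast

print(run_cycle(0.00, 0.10, lambda: -0.02))  # 0.08: corrected
print(run_cycle(0.40, 0.10, lambda: -0.02))  # 0.10: vision skipped
```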
IFAC Proceedings Volumes | 2009
Eun Sub Shim; Wonsang Hwang; Muhammad Latif Anjum; Hyun-Seok Kim; Kwang Suk Park; Kwang-soo Kim; Dong-il Dan Cho
This paper addresses a stable vision system for an indoor mobile robot using encoder information. The proposed system uses encoder signals to calculate the robot motion and rotates the camera in order to keep the target fixed in the image frame during locomotion. Since the calculation of the camera rotation angle is based on the integration of encoder data over time, error accumulates without bound. To prevent this error accumulation, vision sensor signals are used periodically to compensate for the errors. The resulting standard deviations of the rotated camera angle error were 0.94°, 0.32°, and 1.55° for rotation, translation, and combined rotation-translation motion, while the robot heading angles had standard deviations of 23.27°, 16.56°, and 48.53°, respectively. However, the system shows performance degradation on slippery ground. In our experiments, the errors tend to increase as the friction coefficient between the ground and the robot wheels decreases. Future work should overcome this problem using additional sensors, such as inertial measurement units (IMUs), that can detect the correct robot motion even in the presence of slip disturbances.
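The encoder-only motion calculation underlying this approach is, in essence, differential-drive dead reckoning. A generic sketch follows; the wheel track and per-step encoder distances are assumed example values, not the paper's platform parameters.

```python
# Differential-drive dead reckoning from wheel encoders.
import math

def update_pose(pose, d_left, d_right, track=0.30):
    """Integrate one encoder step; pose = (x, y, heading in rad)."""
    x, y, th = pose
    d = 0.5 * (d_left + d_right)        # forward travel of the midpoint
    dth = (d_right - d_left) / track    # heading change from wheel difference
    return (x + d * math.cos(th + 0.5 * dth),
            y + d * math.sin(th + 0.5 * dth),
            th + dth)

pose = (0.0, 0.0, 0.0)
for _ in range(100):                    # gentle left arc, 100 encoder steps
    pose = update_pose(pose, 0.010, 0.011)
print(pose)
```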
Journal of Neural Engineering | 2017
Myounghwan Choi; JungRyul Ahn; Dae Jin Park; Sangmin Lee; Kwang-soo Kim; Dong-il Dan Cho; Solomon S. Senok; Kyo-in Koo; Yong Sook Goo
OBJECTIVE: Direct stimulation of retinal ganglion cells in degenerate retinas by implanting epi-retinal prostheses is a recognized strategy for restoration of visual perception in patients with retinitis pigmentosa or age-related macular degeneration. Elucidating the best stimulus-response paradigms in the laboratory using multielectrode arrays (MEA) is complicated by the fact that the short-latency spikes (within 10 ms) elicited by direct retinal ganglion cell (RGC) stimulation are obscured by the stimulus artifact generated by the electrical stimulator.
APPROACH: We developed an artifact subtraction algorithm based on topographic prominence discrimination, wherein the duration of prominences within the stimulus artifact is used as a strategy for identifying the artifact for subtraction and clarifying the obfuscated spikes, which are then quantified using standard thresholding.
MAIN RESULTS: We found that the prominence-discrimination-based filters perform creditably in simulation conditions by successfully isolating randomly inserted spikes in the presence of simple and even complex residual artifacts. We also show that the algorithm successfully isolated short-latency spikes in an MEA-based recording from degenerate mouse retinas, where the amplitude and frequency characteristics of the stimulus artifact vary according to the distance of the recording electrode from the stimulating electrode. By ROC analysis of false positive and false negative first-spike detection rates in a dataset of one hundred and eight RGCs from four retinal patches, we found that the performance of our algorithm is comparable to that of a generally used artifact subtraction filter algorithm which uses a strategy of local polynomial approximation (SALPA).
SIGNIFICANCE: We conclude that the application of topographic prominence discrimination is a valid and useful method for subtraction of stimulation artifacts with variable amplitudes and shapes. We propose that our algorithm may be used stand-alone or as a supplement to other artifact subtraction algorithms like SALPA.
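In the same spirit as the prominence-based discrimination described above, the sketch below separates a broad stimulus-artifact bump from a narrow spike by peak width on a synthetic trace. The waveform, thresholds, and sampling rate are all illustrative assumptions, not the published algorithm.

```python
# Sketch: classify peaks as artifact-like or spike-like by the width of
# their topographic prominence (all values here are assumptions).
import numpy as np
from scipy.signal import find_peaks

fs = 25_000                                           # assumed sampling rate, Hz
t = np.arange(0, 0.02, 1 / fs)
artifact = 150 * np.exp(-((t - 0.002) / 1e-3) ** 2)   # broad stimulus artifact
spike = 60 * np.exp(-((t - 0.006) / 8e-5) ** 2)       # narrow RGC spike
trace = artifact + spike + np.random.default_rng(0).normal(0, 2, t.size)

peaks, props = find_peaks(trace, prominence=20, width=0)
for idx, width in zip(peaks, props["widths"]):
    label = "artifact-like" if width > 20 else "spike-like"  # width in samples
    print(f"t = {idx / fs * 1e3:.2f} ms: {label} (width {width:.1f})")
```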
conference of the industrial electronics society | 2011
Changhun Lee; Jaehong Park; Wook Bahn; Tae-Il Kim; Tae-Jae Lee; M. Muneeb Shaikh; Kwang-soo Kim; Dong-il Dan Cho
This paper presents a vision tracking system for a moving robot that provides continuous tracking of another moving robot. The purpose of the proposed system is to control the line of sight of the camera mounted on the targeting robot so that it tracks the targeted robot, using the position information of both robots. To estimate the positions of both robots, the proposed system uses two encoders and a gyroscope for the targeting robot and active beacons for the targeted robot. The proposed system computes the relative position between the two robots and aligns the camera to track the targeted robot. The proposed system is experimentally evaluated for various scenarios, including occlusion of the targeted robot and temporary illumination changes. The experimental results show that the proposed system is able to track the moving robot well, even without visual information.
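A minimal sketch of the alignment geometry: the camera pan that aims the targeting robot's camera at the targeted robot's estimated position. The frame convention and names are assumptions for illustration.

```python
import math

def pan_to_target(own_pose, target_xy):
    """own_pose = (x, y, heading in rad); returns the camera pan,
    relative to the robot body, that points at target_xy."""
    x, y, th = own_pose
    bearing_world = math.atan2(target_xy[1] - y, target_xy[0] - x)
    pan = bearing_world - th
    return math.atan2(math.sin(pan), math.cos(pan))  # wrap to [-pi, pi]

# Robot at the origin heading 45 deg; target dead ahead on that heading:
print(pan_to_target((0.0, 0.0, math.pi / 4), (2.0, 2.0)))  # ~0.0
```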