Publication


Featured research published by Yonghoon Ji.


Intelligent Service Robotics | 2015

RGB-D SLAM using vanishing point and door plate information in corridor environment

Yonghoon Ji; Atsushi Yamashita; Hajime Asama

This paper proposes a novel approach to an RGB-D simultaneous localization and mapping (SLAM) system that simultaneously uses the vanishing point and door plates in a corridor environment for navigation. To increase the stability of the SLAM process in such an environment, the vanishing point and door plates are utilized as landmarks for extended Kalman filter-based SLAM (EKF SLAM). The vanishing point is a semi-global, unique feature usually observed in the corridor frontage, and a door plate carries strong signature information (i.e., the room number) for the data association process. Using these reliable features as landmarks therefore maintains the stability of the SLAM process. A dense 3D map is represented by an octree structure that includes room number information, which serves as useful semantic information. The experimental results showed that the proposed scheme yields better performance than previous SLAM systems.
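As a minimal sketch of how a reliable landmark observation corrects the state in EKF SLAM (scalar form for brevity; the paper's filter operates on the full robot pose and landmark map, and all variable names here are illustrative assumptions):

```python
def ekf_update(mu, sigma2, z, h, r2, H=1.0):
    """Scalar EKF measurement update.

    mu, sigma2 : prior state mean and variance
    z          : actual measurement (e.g., bearing to a door plate)
    h          : measurement predicted from the prior state
    r2         : measurement noise variance (small for reliable
                 landmarks such as the vanishing point or door plates)
    H          : measurement Jacobian (scalar here)
    """
    K = sigma2 * H / (H * H * sigma2 + r2)  # Kalman gain
    mu_new = mu + K * (z - h)               # correct with innovation
    sigma2_new = (1.0 - K * H) * sigma2     # shrink uncertainty
    return mu_new, sigma2_new
```

With equal prior and noise variances the update splits the difference between prediction and measurement; as the noise variance shrinks (a highly reliable landmark), the corrected state moves almost entirely to the measurement, which is the stabilizing effect the paper exploits.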


International Conference on Robotics and Automation | 2015

Fuzzy based traversability analysis for a mobile robot on rough terrain

Yusuke Tanaka; Yonghoon Ji; Atsushi Yamashita; Hajime Asama

We present a novel rough-terrain traversability analysis method for mobile robot navigation, focusing on the scenario of mobile robot operation in a disaster environment with limited sensor data. The robot builds a map in real time and analyzes the traversability of its surrounding terrain. The proposed method is based on fuzzy inference so that it can handle uncertainties in the sensor data. Two values associated with terrain traversability, roughness and slope, are calculated from an elevation map built by a laser range finder mounted on the mobile robot. These two values are input to the fuzzy inference system, which outputs the traversability. Based on this traversability, a vector field histogram (VFH) is generated, and the mobile robot's course is determined according to the VFH. We demonstrated our algorithm in an artificial environment. The experimental results showed that the mobile robot was able to reach the target position safely while avoiding untraversable areas.
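The pipeline above (roughness and slope from the elevation map, fuzzy inference, traversability score) can be sketched as follows; the membership partitions and rule weights are illustrative assumptions, not the paper's tuned values:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def traversability(roughness, slope_deg):
    """Fuzzy inference combining roughness in [0, 1] and slope [deg]
    into a traversability score in [0, 1] (1 = easily traversable)."""
    # Membership degrees for each input (illustrative partitions).
    r_low  = tri(roughness, -0.5, 0.0, 0.5)
    r_high = tri(roughness, 0.3, 1.0, 1.7)
    s_low  = tri(slope_deg, -20.0, 0.0, 20.0)
    s_high = tri(slope_deg, 10.0, 30.0, 50.0)
    # Rule base: fully traversable only if roughness AND slope are low.
    w_trav  = min(r_low, s_low)           # rule 1 -> output 1.0
    w_risky = max(min(r_low, s_high),     # rule 2 -> output 0.4
                  min(r_high, s_low))
    w_block = min(r_high, s_high)         # rule 3 -> output 0.0
    # Weighted-average defuzzification.
    num = w_trav * 1.0 + w_risky * 0.4 + w_block * 0.0
    den = w_trav + w_risky + w_block
    return num / den if den > 0 else 0.0
```

Flat, smooth cells score near 1.0, rough steep cells near 0.0; in the paper these per-cell scores then feed the VFH that steers the robot.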


International Conference on Robotics and Automation | 2017

Lane-Change Detection Based on Vehicle-Trajectory Prediction

Hanwool Woo; Yonghoon Ji; Hitoshi Kono; Yusuke Tamura; Yasuhide Kuroda; Takashi Sugano; Yasunori Yamamoto; Atsushi Yamashita; Hajime Asama

We propose a new detection method that predicts a vehicle's trajectory and uses it to detect lane changes of surrounding vehicles. According to previous research, more than 90% of car crashes are caused by human error, and lane changes are the main factor. Therefore, if a lane change can be detected before a vehicle crosses the centerline, accident rates will decrease. Previously reported detection methods suffer from frequent false alarms caused by zigzag driving, which can make users distrust driving safety support systems. Most zigzag driving results from aborting a lane change due to the presence of adjacent vehicles in the next lane. Our approach reduces false alarms by applying trajectory prediction to assess the possibility of a crash with adjacent vehicles when the target vehicle attempts to change lanes, and reflecting this assessment in the lane-change detection result. We used a traffic dataset with more than 500 lane changes and confirmed that the proposed method considerably improves detection performance.


2016 11th France-Japan & 9th Europe-Asia Congress on Mechatronics (MECATRONICS) /17th International Conference on Research and Education in Mechatronics (REM) | 2016

3-D reconstruction of underwater objects using arbitrary acoustic views

Seungchul Kwak; Yonghoon Ji; Atsushi Yamashita; Hajime Asama

This paper presents a method for 3-D measurement of underwater objects using acoustic cameras. The ability to measure underwater objects in 3-D from arbitrary acoustic views is a major advantage for grasping underwater situations. Robots such as autonomous underwater vehicles (AUVs) and remotely operated underwater vehicles (ROVs) are expected to carry acoustic cameras for underwater investigations, especially in turbid or deep environments. Acoustic cameras are among the most powerful sensors for acquiring underwater information because their visibility is not limited by turbidity. Furthermore, their sensing area covers a wide range, which is often a limitation of traditional sonar sensors. However, 3-D reconstruction systems using acoustic images from arbitrary views have not been established despite their undeniable worth. In this paper, we propose a novel approach that enables 3-D measurement of underwater objects from arbitrary viewpoints. This approach contributes to establishing a methodology for 3-D shape reconstruction systems in which the correspondences between feature points on each acoustic image are described. The experimental results indicate not only the validity of the proposed approach, but also that the methodology achieves superior performance in estimating 3-D information of underwater objects.


Conference on Automation Science and Engineering | 2015

Automatic calibration and trajectory reconstruction of mobile robot in camera sensor network

Yonghoon Ji; Atsushi Yamashita; Hajime Asama

To operate mobile robots in an intelligent space such as a distributed camera sensor network, pre-calibration of all environmental cameras (i.e., determining the absolute pose of each camera) is an essential but extremely tedious task. The optimization problem of camera calibration with a mobile robot has been intensively studied in the past. However, most existing solutions are limited in that they can estimate only three degrees of freedom (DOF) (x, y, yaw) under restrictive assumptions. In this paper, we propose a novel method that achieves both trajectory reconstruction of a mobile robot and calibration of the complete 6DOF (x, y, z, roll, pitch, yaw) external parameters of distributed cameras by utilizing easily obtainable grid map information of the environment as prior information. In addition, a novel two-way observation model is proposed. The map information and the two-way observation model help find a global minimum (i.e., the 6DOF camera parameters and robot trajectory) of an objective function containing many local minima. We evaluate the proposed method in a simulation environment with a virtual camera network of up to 10 cameras and in a real environment with a mobile robot in a wireless camera network. The results demonstrate that the proposed framework can successfully estimate the 6DOF camera parameters and the target trajectory.
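One term of such a calibration objective can be sketched as a bearing residual between what a camera observed and what its current pose estimate predicts (a planar 3DOF subset shown for brevity; the paper estimates the full 6DOF pose, and this formulation is an illustrative assumption):

```python
import math

def camera_residual(cam_pose, robot_xy, observed_bearing):
    """One residual of the calibration objective: the difference between
    the bearing at which the camera actually observed the robot and the
    bearing predicted from the current camera-pose estimate."""
    cx, cy, cyaw = cam_pose          # camera x, y, yaw (subset of 6DOF)
    rx, ry = robot_xy                # robot position from the trajectory
    predicted = math.atan2(ry - cy, rx - cx) - cyaw
    err = observed_bearing - predicted
    # Wrap to (-pi, pi] so the objective stays continuous in yaw.
    return math.atan2(math.sin(err), math.cos(err))
```

Summing squared residuals of this kind over all cameras and time steps gives a nonlinear least-squares objective; the many local minima the abstract mentions arise from the angle wrapping and the coupling between camera poses and the robot trajectory.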


International Conference on Environment and Electrical Engineering | 2015

Development of acoustic camera-imaging simulator based on novel model

Seungchul Kwak; Yonghoon Ji; Atsushi Yamashita; Hajime Asama

Acquiring accurate image information from underwater environments is essential for underwater tasks such as maintenance, inspection, trajectory estimation, target recognition, and SLAM (simultaneous localization and mapping). In this respect, an acoustic camera is a superior device because it can provide images with more accurate details than optical cameras, even in turbid water. However, the structure of an acoustic image is very dissimilar to that of an optical image: the acoustic camera provides sequences of distance and azimuth angle data, while the elevation angle data are missing. Consequently, it is difficult to generate simulated acoustic images, although simulating realistic images is vital for verifying the performance of proposed algorithms while saving cost and time. This paper describes the acoustic principles and imaging geometry model of the acoustic camera, which greatly contribute to developing a simulator for acoustic images. Compared with a state-of-the-art technique, our approach demonstrates superior performance in representing simulated acoustic images in full gray scale. We evaluate the images produced by the developed simulator by comparing them with real acoustic images. The results indicate that the proposed simulator can generate realistic virtual acoustic images based on these principles.
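The range-azimuth geometry described above can be sketched as follows (the coordinate convention here, x forward, y lateral, z up, is an assumption for illustration):

```python
import math

def project_to_acoustic_image(x, y, z):
    """Project a 3-D point in the acoustic camera frame onto the
    (range, azimuth) image. The elevation angle phi is what the
    sensor discards, so a single image cannot recover z."""
    r = math.sqrt(x * x + y * y + z * z)       # range to the point
    theta = math.atan2(y, x)                   # azimuth angle
    phi = math.asin(z / r) if r > 0 else 0.0   # elevation (lost)
    return r, theta, phi
```

Two points that differ only in elevation, e.g. (4, 0, 3) and (5, 0, 0), map to the same (range, azimuth) pixel; this ambiguity is exactly why simulating and interpreting acoustic images requires the dedicated geometry model the paper develops.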


International Conference on Multisensor Fusion and Integration for Intelligent Systems | 2016

Acoustic camera-based 3D measurement of underwater objects through automated extraction and association of feature points

Yonghoon Ji; Seungchul Kwak; Atsushi Yamashita; Hajime Asama

This paper presents a novel scheme for the three-dimensional (3D) reconstruction of underwater objects using multiple acoustic views, based on geometric and image processing approaches. Underwater tasks such as maintenance, ship hull inspection, and harbor surveillance require accurate underwater information, and 3D reconstructed information greatly contributes to a better understanding of the underwater environment. Acoustic cameras are the most suitable sensors because they provide acoustic images with more accurate details than other sensors, even in turbid water. To enable 3D measurement, feature points in each acoustic image must first be extracted and associated. In a previous study, we proposed a 3D measurement method, but it was limited by the assumption of complete correspondence information between feature points. The new methodology establishes a 3D measurement model by extracting feature points and automatically determining correspondences between them through the application of geometric constraints. The results of a real experiment demonstrated that the proposed framework can automatically perform 3D measurement of underwater objects.


International Conference on Control, Automation and Systems | 2016

Lane-changing feature extraction using multisensor integration

Hanwool Woo; Yonghoon Ji; Hitoshi Kono; Yusuke Tamura; Yasuhide Kuroda; Takashi Sugano; Yasunori Yamamoto; Atsushi Yamashita; Hajime Asama

We propose a feature extraction method for lane changes of other traffic participants. According to previous research, over 90% of car crashes are caused by human error, and lane changes are the main factor. Therefore, if an intelligent system can predict a lane change and alert the driver before another vehicle crosses the center line, it can contribute to reducing the accident rate. The main contribution of this work is a feature extraction method using a multisensor system that consists of a position sensor and a laser scanner with lane-marking information. For predicting lane changes of other traffic participants, the most effective features are the lateral position and velocity with respect to the center line. We installed the sensor system on the primary vehicle and measured the positions of other traffic participants while the primary vehicle drove on a highway. From the measurement data, we extracted the distance to the center line and the lateral velocity of other vehicles as features. We confirmed that our feature extraction method is sufficiently accurate for lane change prediction.
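As a minimal sketch of the feature computation, assuming a time series of center-line-relative positions is already available (the paper's multisensor fusion that produces those positions is the substantive part and is not shown):

```python
def lateral_features(lateral_positions, dt):
    """Return the latest signed lateral distance to the center line [m]
    and the lateral velocity [m/s], estimated by a backward finite
    difference over samples taken every dt seconds."""
    d = lateral_positions[-1]
    v = (lateral_positions[-1] - lateral_positions[-2]) / dt
    return d, v
```

A shrinking distance together with a sustained negative (lane-ward) velocity is the signature a lane-change predictor looks for.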


International Conference on Multisensor Fusion and Integration for Intelligent Systems | 2017

3D reconstruction of line features using multi-view acoustic images in underwater environment

Ngoc Trung Mai; Hanwool Woo; Yonghoon Ji; Yusuke Tamura; Atsushi Yamashita; Hajime Asama

To understand the underwater environment, it is essential to use sensing methodologies able to perceive the three-dimensional (3D) structure of the explored site. Sonar sensors are commonly employed in underwater exploration. This paper presents a novel methodology for retrieving 3D information of underwater objects. The proposed solution employs an acoustic camera, which represents the next generation of sonar sensors, to extract and track lines on underwater objects, which serve as visual features for the image processing algorithm. In this work, we concentrate on artificial underwater environments, such as dams and bridges. In these structured environments, line segments are preferred over point features, as they represent structural information more effectively. We also developed a method for automatic extraction and correspondence matching of line features. Our approach enables 3D measurement of underwater objects from arbitrary viewpoints based on an extended Kalman filter (EKF). The probabilistic method allows computing the 3D reconstruction of underwater objects even in the presence of uncertainty in the control input of the camera's movements. Experiments were performed in real environments, and the results showed the effectiveness and accuracy of the proposed solution.


Systems, Man and Cybernetics | 2016

Dynamic potential-model-based feature for lane change prediction

Hanwool Woo; Yonghoon Ji; Hitoshi Kono; Yusuke Tamura; Yasuhide Kuroda; Takashi Sugano; Yasunori Yamamoto; Atsushi Yamashita; Hajime Asama

We propose a prediction method for lane changes of other vehicles. According to previous research, over 90% of car crashes are caused by human error, and lane changes are the main factor. Therefore, if an intelligent system can predict a lane change and alert the driver before another vehicle crosses the center line, it can contribute to reducing the accident rate. The main contribution of this work is a new feature describing the relationship of a vehicle to adjacent vehicles. We represent the new feature using a dynamic characteristic potential field whose distribution changes depending on the relative number of adjacent vehicles. The new feature covers numerous situations in which lane changes are made, and adding it can be expected to improve prediction performance. We trained the prediction model and evaluated its performance using a real traffic dataset with over 900 lane changes, confirming that the proposed method outperforms previous methods in terms of both accuracy and prediction time.
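A minimal sketch of such a repulsive potential, assuming Gaussian contributions over the longitudinal gaps to adjacent vehicles (the paper's dynamic potential field is more elaborate, and `sigma` here is an illustrative parameter, not a value from the paper):

```python
import math

def lane_change_potential(gaps, sigma=10.0):
    """Sum of Gaussian repulsive potentials from adjacent vehicles in
    the target lane. `gaps` holds signed longitudinal gaps [m] to each
    adjacent vehicle; a higher value means a riskier lane change."""
    return sum(math.exp(-(g * g) / (2.0 * sigma * sigma)) for g in gaps)
```

An empty target lane yields zero potential, a vehicle alongside yields the maximum contribution, and the value decays smoothly with distance, so the feature varies continuously with the surrounding traffic configuration.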
