Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Hongshan Yu is active.

Publication


Featured research published by Hongshan Yu.


Mathematical Problems in Engineering | 2014

Identification of Nonlinear Dynamic Systems Using Hammerstein-Type Neural Network

Hongshan Yu; Jinzhu Peng; Yandong Tang

The Hammerstein model has been widely applied to the identification of nonlinear systems. In this paper, a Hammerstein-type neural network (HTNN) is derived to formulate the well-known Hammerstein model. The HTNN consists of a nonlinear static gain in cascade with a linear dynamic part. First, the Lipschitz criterion for order determination is derived. Second, the backpropagation algorithm for updating the network weights is presented, and a stability analysis is given. Finally, simulation results show that the HTNN identification approach achieves good identification performance.
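As a rough illustration of the model class the HTNN mirrors, the sketch below simulates a Hammerstein system as a static polynomial nonlinearity in cascade with a second-order linear ARX block; the polynomial coefficients and ARX parameters are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Minimal sketch of the Hammerstein structure: a static nonlinearity f(.)
# in cascade with a linear dynamic block. The polynomial nonlinearity and
# second-order ARX dynamics below are assumptions for illustration only.

def static_nonlinearity(u, c=(1.0, 0.5, -0.1)):
    """Memoryless polynomial gain f(u) = c0*u + c1*u^2 + c2*u^3."""
    return c[0] * u + c[1] * u**2 + c[2] * u**3

def hammerstein_response(u, a=(0.6, -0.2), b=(0.8, 0.3)):
    """Pass v = f(u) through linear ARX dynamics:
    y[k] = a1*y[k-1] + a2*y[k-2] + b1*v[k-1] + b2*v[k-2]."""
    v = static_nonlinearity(u)
    y = np.zeros_like(u)
    for k in range(2, len(u)):
        y[k] = (a[0] * y[k - 1] + a[1] * y[k - 2]
                + b[0] * v[k - 1] + b[1] * v[k - 2])
    return y

u = np.random.uniform(-1, 1, 500)   # excitation signal
y = hammerstein_response(u)          # data an identification network would be trained to fit
```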


international symposium on neural networks | 2007

Neural Network-Based Robust Tracking Control for Nonholonomic Mobile Robot

Jinzhu Peng; Yaonan Wang; Hongshan Yu

A robust tracking controller with bound estimation based on a neural network is proposed to deal with the unknown factors of a nonholonomic mobile robot, such as model uncertainties and external disturbances. The neural network approximates the uncertainty terms, and its interconnection weights can be tuned online. A robust controller is designed to compensate for the approximation error, and an adaptive estimation algorithm is employed to estimate the bound of that error. The stability of the proposed controller is proven with a Lyapunov function. The proposed neural network-based robust tracking controller can overcome the uncertainties and disturbances, and the simulation results demonstrate that the proposed method has good robustness.
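The sketch below illustrates the general structure of such a controller under assumed gains and a scalar tracking error: an RBF network whose weights are tuned online approximates the lumped uncertainty, and a robust term uses an adaptively estimated error bound. It is not the paper's controller for the nonholonomic robot model.

```python
import numpy as np

# Minimal sketch of an adaptive neural tracking controller: an RBF network
# approximates the lumped uncertainty, its weights are tuned online, and a
# robust term with an adaptively estimated bound compensates the
# approximation error. Gains, basis centers, and the scalar error signal
# are illustrative assumptions, not the paper's design.

centers = np.linspace(-2.0, 2.0, 7)          # RBF centers (assumed)
W = np.zeros(len(centers))                    # network weights
rho_hat = 0.0                                 # estimated error bound
gamma_w, gamma_rho, k_e = 5.0, 0.5, 2.0       # adaptation/feedback gains (assumed)

def rbf(x, width=0.5):
    """Gaussian basis functions evaluated at state x."""
    return np.exp(-((x - centers) ** 2) / (2.0 * width ** 2))

def control_step(e, x, dt=0.01):
    """One update of the tracking controller for scalar tracking error e."""
    global W, rho_hat
    phi = rbf(x)
    u_nn = W @ phi                             # uncertainty estimate
    u_robust = rho_hat * np.sign(e)            # bound-based compensation
    u = -k_e * e - u_nn - u_robust             # total control action
    W += gamma_w * phi * e * dt                # online weight tuning
    rho_hat += gamma_rho * abs(e) * dt         # adaptive bound estimation
    return u
```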


Sensors | 2014

Obstacle classification and 3D measurement in unstructured environments based on ToF cameras

Hongshan Yu; Jiang Zhu; Yaonan Wang; Wenyan Jia; Mingui Sun; Yandong Tang

Inspired by the human 3D visual perception system, we present an obstacle detection and classification method based on Time-of-Flight (ToF) cameras for robotic navigation in unstructured environments. The ToF camera provides 3D sensing by capturing an image along with per-pixel 3D space information. Based on this valuable feature and human knowledge of navigation, the proposed method first removes from the scene irrelevant regions which do not affect robot movement. In the second step, regions of interest are detected and clustered as possible obstacles using both the 3D information and the intensity image obtained by the ToF camera. Then, a multiple relevance vector machine (RVM) classifier is designed to classify obstacles into four possible classes based on terrain traversability and the geometrical features of the obstacles. Finally, experimental results in various unstructured environments are presented to verify the robustness and performance of the proposed approach. We have found that, compared with existing obstacle recognition methods, the new approach is more accurate and efficient.
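A minimal sketch of the three-stage pipeline is given below, with assumed thresholds and geometric features; since scikit-learn provides no relevance vector machine, a support vector classifier is named as a stand-in for the paper's multiple-RVM classifier.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.svm import SVC

# Sketch of the pipeline: remove irrelevant regions, cluster candidate
# obstacles, describe them geometrically, classify. Thresholds, features,
# and the SVM stand-in (instead of the paper's RVM) are assumptions.

def detect_obstacles(points_xyz, ground_z=0.05):
    """points_xyz: (N, 3) ToF point cloud in the robot frame (metres)."""
    # 1) Remove regions irrelevant to robot movement (here: near-ground points).
    candidates = points_xyz[points_xyz[:, 2] > ground_z]
    # 2) Cluster the remaining points into possible obstacles.
    labels = DBSCAN(eps=0.15, min_samples=20).fit_predict(candidates)
    clusters = [candidates[labels == k] for k in set(labels) if k != -1]
    # 3) Describe each cluster by simple geometric features (height, footprint).
    feats = np.array([[c[:, 2].max() - c[:, 2].min(),   # height
                       c[:, 0].max() - c[:, 0].min(),   # extent in x
                       c[:, 1].max() - c[:, 1].min()]   # extent in y
                      for c in clusters])
    return clusters, feats

# A classifier (SVM stand-in) would be trained offline on labelled feature
# vectors for the four traversability classes:
# clf = SVC().fit(train_feats, train_labels); clf.predict(feats)
```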


Sensors | 2017

Automatic Camera Calibration Using Active Displays of a Virtual Pattern

Lei Tan; Yaonan Wang; Hongshan Yu; Jiang Zhu

Camera calibration plays a critical role in 3D computer vision tasks. The most commonly used calibration method utilizes a planar checkerboard and can be performed nearly automatically. However, it requires the user to move either the camera or the checkerboard during the capture step. This manual operation is time consuming and makes the calibration results unstable. To solve these problems caused by manual operation, this paper presents a fully automatic camera calibration method using a virtual pattern instead of a physical one. The virtual pattern is actively transformed and displayed on a screen so that the control points of the pattern can be uniformly observed in the camera view. The proposed method estimates the camera parameters from point correspondences between 2D image points and the virtual pattern. The camera and the screen are fixed during the whole process; therefore, the proposed method does not require any manual operations. The performance of the proposed method is evaluated through experiments on both synthetic and real data. Experimental results show that the proposed method achieves stable results with accuracy comparable to the standard method by Zhang.
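The parameter-estimation step can be sketched with OpenCV's standard planar calibration, assuming a list of frames captured while the virtual pattern is displayed; the pattern size and control-point spacing below are illustrative assumptions.

```python
import numpy as np
import cv2

# Minimal sketch of the calibration step, assuming a list of captured
# images of the displayed virtual pattern. Pattern size and control-point
# spacing are assumed values; the estimation itself is the standard planar
# (Zhang-style) calibration provided by OpenCV.

def calibrate_from_frames(frames, pattern_size=(9, 6), spacing=0.02):
    """frames: list of BGR images showing the displayed virtual pattern."""
    # Planar control points of the virtual pattern (z = 0).
    objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern_size[0],
                           0:pattern_size[1]].T.reshape(-1, 2) * spacing

    obj_points, img_points, image_size = [], [], None
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        image_size = gray.shape[::-1]
        found, corners = cv2.findChessboardCorners(gray, pattern_size)
        if found:
            obj_points.append(objp)
            img_points.append(corners)

    # Standard planar calibration on the collected 2D-3D correspondences.
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, image_size, None, None)
    return K, dist, rms
```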


chinese automation congress | 2013

Registration and fusion for ToF camera and 2D camera reading

Hongshan Yu; Ke Zhao; Yaonan Wang; Luo Kan; Mingui Sun; Wenyan Jia

Time-of-Flight (ToF) cameras have become a competitive alternative to traditional distance sensing techniques such as laser or stereo vision, as they deliver grayscale images and 3D information simultaneously at high frame rates. However, the low resolution of ToF cameras limits their use for accurate segmentation or classification. This paper presents a fast and robust solution to combine the 3D information of a ToF camera with a high-resolution color image. First, we set up a 2D/3D stereo camera with a fixed spatial relation and similar fields of view. Based on the characteristics of the ToF camera and the principles of stereo vision, the 3D information from the ToF camera is registered with the high-resolution color image by matching the color image with the ToF grayscale image. This method has a very low computational cost, and the matching accuracy is determined only by the physical parameters of the 2D/3D stereo camera system, without additional computational error. Experimental results demonstrate the feasibility, efficiency, and accuracy of the proposed algorithm.
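The registration idea can be sketched as a per-pixel reprojection: each ToF pixel with a depth value is back-projected to 3D, transformed into the color camera frame through the fixed extrinsics of the rig, and projected into the high-resolution color image. The intrinsics and extrinsics in the sketch are assumed to come from a prior calibration, not from the paper.

```python
import numpy as np

# Minimal sketch of depth-to-color registration for a fixed 2D/3D rig.
# K_tof, K_color, R, t are assumed to be known from a prior calibration.

def register_tof_to_color(depth, K_tof, K_color, R, t):
    """depth: (H, W) ToF depth map in metres. Returns per-pixel colour-image
    coordinates aligned with the ToF pixel grid."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N

    # Back-project ToF pixels to 3D points in the ToF camera frame.
    pts_tof = np.linalg.inv(K_tof) @ pix * depth.reshape(1, -1)

    # Rigid transform into the colour camera frame, then perspective projection.
    pts_color = R @ pts_tof + t.reshape(3, 1)
    proj = K_color @ pts_color
    uv_color = proj[:2] / proj[2:3]
    return uv_color.T.reshape(H, W, 2)
```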


international symposium on neural networks | 2007

An Occupancy Grids Building Method with Sonar Sensors Based on Improved Neural Network Model

Hongshan Yu; Yaonan Wang; Jinzhu Peng

This paper presents an improved neural network model that interprets sonar readings to build occupancy grids for a mobile robot. The proposed model interprets each sensor reading in the context of its spatial neighbors and the relevant successive history readings simultaneously. Consequently, the presented method greatly weakens the effects of multiple reflections and specular reflection. The output of the neural network is a probability vector over three possible states (empty, occupied, uncertain) for each cell. For sensor reading integration, the three state probabilities of each cell are updated by the Bayesian update formula, and the final state of the cell is determined by the max-min principle. Experiments performed in a lab environment show that the occupancy map built by the proposed approach is more consistent, accurate, and robust than that of the traditional method, while still running in real time.
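The fusion step can be sketched as follows, under assumed update and decision rules: each cell stores a probability vector over (empty, occupied, uncertain), new network outputs are fused by a Bayes-style product update, and the final state is chosen by a max rule with an uncertainty fallback standing in for the paper's max-min principle.

```python
import numpy as np

# Sketch of three-state cell fusion. The product update, normalisation,
# and decision margin are assumptions, not the paper's exact formulas.

STATES = ("empty", "occupied", "uncertain")

def bayes_update(prior, likelihood):
    """Fuse the stored cell vector with a new (empty, occupied, uncertain)
    estimate produced by the interpretation network."""
    posterior = np.asarray(prior) * np.asarray(likelihood)
    return posterior / posterior.sum()

def decide_state(p, margin=0.1):
    """Pick the dominant state unless the top two probabilities are too
    close, in which case the cell stays uncertain."""
    order = np.argsort(p)
    if p[order[-1]] - p[order[-2]] < margin:
        return "uncertain"
    return STATES[order[-1]]

cell = np.array([1 / 3, 1 / 3, 1 / 3])                 # uninformed prior
for reading in ([0.2, 0.7, 0.1], [0.25, 0.65, 0.1]):   # successive network outputs
    cell = bayes_update(cell, reading)
print(decide_state(cell))                              # -> "occupied"
```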


Neurocomputing | 2018

Methods and datasets on semantic segmentation: A review

Hongshan Yu; Zhengeng Yang; Lei Tan; Yaonan Wang; Wei Sun; Mingui Sun; Yandong Tang

Semantic segmentation, also called scene labeling, refers to the process of assigning a semantic label (e.g., car, people, road) to each pixel of an image. It is an essential data processing step for robots and other unmanned systems to understand the surrounding scene. Despite decades of effort, semantic segmentation remains a very challenging task due to the large variations in natural scenes. In this paper, we provide a systematic review of recent advances in this field. In particular, three categories of methods are reviewed and compared: those based on hand-engineered features, learned features, and weakly supervised learning. In addition, we describe a number of popular datasets aimed at facilitating the development of new segmentation algorithms. To demonstrate the advantages and disadvantages of different semantic segmentation models, we conduct a series of comparisons between them, together with in-depth discussion of the results. Finally, the review concludes by discussing future directions and challenges in this important field of research.


chinese automation congress | 2013

Fast and robust frontier line segment extracting method based on FCM for robot exploration

Hongshan Yu; Yuan Zhang; Yaonan Wang; Jiang Zhu

Accessible frontiers are an important factor for autonomous exploration by mobile robots. This paper presents a fast and robust frontier line segment extraction method based on the fuzzy c-means (FCM) clustering algorithm for robot exploration. First, the proposed method divides the robot's local occupancy map into sub-regions of the same size. Next, the characteristics of exploration frontiers in an occupancy grid map are analyzed, and the optimal number of FCM cluster centers in each sub-region is defined. Line segments corresponding to exploration frontiers are then calculated at the sub-region level using the fuzzy c-means algorithm, which reduces the computational burden. Following these steps, line segment merging, endpoint extension, and line exclusion are performed to obtain more accurate frontier segment parameters at the global level. Finally, the effectiveness of the proposed method is verified by experimental results in a lab environment.
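A minimal sketch of the per-sub-region step is shown below: frontier cells are grouped with a plain fuzzy c-means implementation and a line segment is fitted to each cluster. The cluster count, fuzzifier, and toy frontier points are illustrative assumptions.

```python
import numpy as np

# Sketch of frontier segment extraction in one sub-region: cluster the
# frontier-cell coordinates with fuzzy c-means, then fit a segment to
# each cluster. All numeric settings here are assumptions.

def fuzzy_c_means(points, c=2, m=2.0, iters=50):
    """Plain fuzzy c-means on (N, 2) frontier-cell coordinates."""
    n = len(points)
    u = np.random.dirichlet(np.ones(c), size=n)        # membership matrix
    for _ in range(iters):
        um = u ** m
        centers = (um.T @ points) / um.sum(axis=0)[:, None]
        d = np.linalg.norm(points[:, None, :] - centers[None], axis=2) + 1e-9
        u = 1.0 / (d ** (2 / (m - 1)))                  # standard FCM membership
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

def fit_segment(pts):
    """Least-squares line through a cluster, clipped to its extent."""
    direction = np.linalg.svd(pts - pts.mean(axis=0))[2][0]
    proj = (pts - pts.mean(axis=0)) @ direction
    return (pts.mean(axis=0) + proj.min() * direction,
            pts.mean(axis=0) + proj.max() * direction)

frontier_cells = np.array([[0, 0], [1, 0.1], [2, 0.0], [5, 4], [5, 5], [5.1, 6]])
centers, u = fuzzy_c_means(frontier_cells, c=2)
for k in range(2):
    cluster = frontier_cells[u.argmax(axis=1) == k]
    if len(cluster) >= 2:
        print("frontier segment:", fit_segment(cluster))
```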


international symposium on neural networks | 2012

Robot navigation based on fuzzy behavior controller

Hongshan Yu; Jiang Zhu; Yaonan Wang; Miao Hu; Yuan Zhang

This paper presents a robot navigation method based on fuzzy inference and behavior control. Stroll, Avoiding, Goal-reaching, Escape, and Correct behaviors are defined for robot navigation, and the scheme for each behavior is described in detail. Furthermore, fuzzy rules are used to switch among these behaviors in real time for the best robot performance. Experiments on five navigation tasks in two different environments were conducted on a Pioneer 2-DXE mobile robot. The results show that the proposed method is robust and efficient in different environments.
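The switching idea can be sketched as follows, with an invented rule base and membership breakpoints (the paper's actual rules are not reproduced here): fuzzy memberships of the sensed obstacle distance and goal distance drive simple rules, and the behavior whose rule fires most strongly is executed.

```python
# Sketch of fuzzy behavior switching. Membership breakpoints, the rule
# set, and the Stroll down-weighting are assumptions for illustration.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def select_behavior(obstacle_dist, goal_dist):
    near = tri(obstacle_dist, 0.0, 0.0, 0.8)        # obstacle close
    clear = tri(obstacle_dist, 0.5, 3.0, 3.0)       # path essentially clear
    goal_close = tri(goal_dist, 0.0, 0.0, 1.5)

    rules = {
        "Escape":        tri(obstacle_dist, 0.0, 0.0, 0.25),   # nearly touching
        "Avoiding":      near,
        "Goal-reaching": min(clear, 1.0 - goal_close),
        "Correct":       min(clear, goal_close),               # adjust near goal
        "Stroll":        min(1.0 - near, 1.0 - clear) * 0.5,   # weak default wander
    }
    return max(rules, key=rules.get)

print(select_behavior(obstacle_dist=0.4, goal_dist=3.0))  # -> "Avoiding"
```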


Archive | 2006

Defective goods automatic sorting method and equipment for high-speed automated production line

Yaonan Wang; Hongshan Yu; Wei Wang; Haoping Chen; Yangguo Li; Jinzhu Peng; Liangjiang Liu; Hui Zhang; Yankun Jiang

Collaboration


Dive into Hongshan Yu's collaborations.

Top Co-Authors

Yandong Tang, Chinese Academy of Sciences
Mingui Sun, University of Pittsburgh
Wenyan Jia, University of Pittsburgh