Publications


Featured research published by Cang Ye.


International Conference on Robotics and Automation | 2002

Characterization of a 2D laser scanner for mobile robot obstacle negotiation

Cang Ye; Johann Borenstein

This paper presents a characterization study of the Sick LMS 200 laser scanner. A number of parameters that may potentially affect sensing performance, such as operation time, data transfer rate, target surface properties, and incidence angle, are investigated. A probabilistic range measurement model is built from the experimental results. The paper also analyzes the mixed pixels problem of the scanner.
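
The paper's probabilistic range model is derived from its measured data; purely as an illustration of the general form such a model can take, the sketch below simulates a Gaussian range model with a fixed bias and standard deviation and recovers those parameters from samples. The bias and sigma values are placeholders, not the paper's fitted numbers.

```python
import numpy as np

rng = np.random.default_rng(0)

def range_reading(true_distance_m, bias_m=0.005, sigma_m=0.01):
    """One reading from a Gaussian range model p(r | d) = N(d + bias, sigma^2).

    bias_m and sigma_m are placeholder values, not the parameters
    identified in the paper's experiments.
    """
    return rng.normal(true_distance_m + bias_m, sigma_m)

# Simulate 1000 readings of a target at 4.0 m and recover the model parameters.
readings = np.array([range_reading(4.0) for _ in range(1000)])
print(f"estimated bias = {readings.mean() - 4.0:.4f} m, "
      f"estimated sigma = {readings.std(ddof=1):.4f} m")
```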


Systems, Man and Cybernetics | 2003

A fuzzy controller with supervised learning assisted reinforcement learning algorithm for obstacle avoidance

Cang Ye; Nelson Hon Ching Yung; Danwei Wang

Fuzzy logic systems are promising for efficient obstacle avoidance. However, it is difficult to maintain the correctness, consistency, and completeness of a fuzzy rule base constructed and tuned by a human expert. A reinforcement learning method is capable of learning the fuzzy rules automatically. However, it incurs a heavy learning phase and may result in an insufficiently learned rule base due to the curse of dimensionality. In this paper, we propose a neural fuzzy system with mixed coarse-learning and fine-learning phases. In the first phase, a supervised learning method is used to determine the membership functions for the input and output variables simultaneously. After sufficient training, fine learning is applied, which employs a reinforcement learning algorithm to fine-tune the membership functions for the output variables. To ensure sufficient learning, a new learning method based on a modification of Sutton and Barto's model is proposed to strengthen exploration. Through this two-step tuning approach, the mobile robot is able to perform collision-free navigation. To deal with the difficulty of acquiring a large amount of highly consistent training data for supervised learning, we develop a virtual environment (VE) simulator, which provides both desktop virtual environment (DVE) and immersive virtual environment (IVE) visualization. By having a skilled human operator drive a mobile robot in the virtual environment (DVE/IVE), training data are readily obtained and used to train the neural fuzzy system.
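
As a rough, hedged sketch of the two-phase tuning schedule (coarse supervised learning followed by reinforcement fine-tuning), the toy code below tunes a plain linear controller rather than the paper's neural fuzzy system; the expert policy, reward function, and hill-climbing update are invented stand-ins for the actual membership-function tuning.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the neural fuzzy controller: a linear map from sensor
# readings to a steering command. The real system tunes fuzzy membership
# functions; this placeholder only illustrates the two-phase schedule.
weights = rng.normal(size=3)

def controller(obs):
    return float(weights @ obs)

# Phase 1: coarse supervised learning from operator demonstrations gathered
# in a virtual environment. Here the "operator" is a hidden linear policy.
expert = np.array([0.4, -0.2, 0.1])
lr = 0.05
for _ in range(500):
    obs = rng.normal(size=3)
    error = controller(obs) - float(expert @ obs)   # supervised error
    weights -= lr * error * obs                     # plain LMS update

# Phase 2: fine learning with a reinforcement signal. Crude stochastic
# hill-climbing stands in for the actor-critic style update derived from
# Sutton and Barto's model in the paper.
def reward(w):
    return -float(np.sum((w - np.array([0.5, -0.25, 0.15])) ** 2))

for _ in range(500):
    candidate = weights + rng.normal(scale=0.02, size=3)   # exploration
    if reward(candidate) > reward(weights):
        weights = candidate                                # keep improvement

print("weights after coarse + fine tuning:", np.round(weights, 3))
```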


Systems, Man and Cybernetics | 1999

An intelligent mobile vehicle navigator based on fuzzy logic and reinforcement learning

Nelson Hon Ching Yung; Cang Ye

In this paper, an alternative training approach to the EEM-based training method is presented and a fuzzy reactive navigation architecture is described. The new training method is 270 times faster in learning speed and incurs only 4% of the learning cost of the EEM method. It also offers very reliable convergence of learning, a very high proportion of learned rules (98.8%), and high adaptability. Using the rule base learned with the new method, the proposed fuzzy reactive navigator fuses the obstacle-avoidance and goal-seeking behaviours to determine its control actions, with adaptability achieved with the aid of an environment evaluator. A comparison of this navigator using the rule bases obtained from the new training method and from the EEM method shows that the new navigator guarantees a solution and that its solution is more acceptable.


Proceedings of SPIE | 2009

Characterization of the Hokuyo URG-04LX laser rangefinder for mobile robot obstacle negotiation

Yoichi Okubo; Cang Ye; Johann Borenstein

This paper presents a characterization study of the Hokuyo URG-04LX scanning laser rangefinder (LRF). The Hokuyo LRF is similar in function to the Sick LRF, which has been the de facto standard range sensor for mobile robot obstacle avoidance and mapping applications for the last decade. Problems with the Sick LRF are its relatively large size, weight, and power consumption, which limit its use to relatively large mobile robots. The Hokuyo LRF is substantially smaller and lighter and consumes less power, and it is therefore more suitable for small mobile robots. The question is whether it performs as well as the Sick LRF in typical mobile robot applications. In 2002, two of the authors of the present paper published a characterization study of the Sick LRF. For the present paper we used the exact same test apparatus and test procedures as in the 2002 paper, but this time to characterize the Hokuyo LRF. As a result, we are in the unique position of being able to provide not only a detailed characterization study of the Hokuyo LRF, but also a comparison of the Hokuyo LRF with the Sick LRF under identical test conditions. Among the tested characteristics are sensitivity to a variety of target surface properties and incidence angles, which may potentially affect the sensing performance. We also discuss the performance of the Hokuyo LRF with regard to the mixed pixels problem associated with LRFs. Lastly, the present paper provides a calibration model for improving the accuracy of the Hokuyo LRF.
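
The calibration model reported in the paper is specific to the URG-04LX; the sketch below only illustrates the general idea of fitting a linear range correction r_true ≈ a·r_measured + b by least squares, assuming one has pairs of measured and ground-truth distances. The data points and the linear form are assumptions made for illustration.

```python
import numpy as np

# Hypothetical (measured, ground-truth) range pairs in metres; real
# calibration data would come from a test apparatus like the one in the paper.
measured = np.array([0.52, 1.03, 1.55, 2.06, 2.58, 3.09])
truth    = np.array([0.50, 1.00, 1.50, 2.00, 2.50, 3.00])

# Least-squares fit of truth ≈ a * measured + b.
A = np.column_stack([measured, np.ones_like(measured)])
(a, b), *_ = np.linalg.lstsq(A, truth, rcond=None)

def calibrate(r):
    """Apply the fitted linear correction to a raw range reading."""
    return a * r + b

print(f"a = {a:.4f}, b = {b:.4f}, corrected(2.06) = {calibrate(2.06):.3f} m")
```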


IEEE Transactions on Robotics | 2004

A novel filter for terrain mapping with laser rangefinders

Cang Ye; Johann Borenstein

This paper introduces a novel filter for terrain mapping with a two-dimensional laser rangefinder. The filter, called the certainty-assisted spatial (CAS) filter, uses physical constraints on motion continuity and spatial continuity to identify corrupted pixels and missing data in an elevation map. The filter removes the corrupted pixels, fills in the missing data, and leaves the uncorrupted pixels intact so as to preserve the details of a terrain map. Our extensive indoor and outdoor mapping experiments show the CAS filter's superior performance in erroneous data reduction and map detail preservation over conventional filters.
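
The CAS filter combines certainty information with both motion-continuity and spatial-continuity constraints; as a rough illustration of the spatial-continuity idea alone, the sketch below flags elevation-map cells that deviate strongly from their neighbourhood median and fills missing cells from that median. The grid, threshold, and neighbourhood rule are assumptions, not the paper's filter.

```python
import numpy as np

def spatial_continuity_filter(elev, jump_thresh=0.15):
    """Flag cells whose elevation jumps by more than `jump_thresh` (metres)
    relative to the median of their 3x3 neighbourhood, and fill NaN cells
    from that median. A toy stand-in for one ingredient of the CAS filter."""
    out = elev.copy()
    rows, cols = elev.shape
    for r in range(rows):
        for c in range(cols):
            neigh = elev[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2].ravel()
            neigh = neigh[~np.isnan(neigh)]
            if neigh.size == 0:
                continue
            med = np.median(neigh)
            if np.isnan(elev[r, c]) or abs(elev[r, c] - med) > jump_thresh:
                out[r, c] = med        # replace corrupted / missing cell
    return out

# Example: a flat 5x5 patch with one spike and one missing cell.
grid = np.zeros((5, 5))
grid[2, 2] = 1.0          # corrupted pixel
grid[1, 3] = np.nan       # missing data
print(np.round(spatial_continuity_filter(grid), 2))
```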


IEEE Transactions on Systems, Man, and Cybernetics | 2014

NCC-RANSAC: A Fast Plane Extraction Method for 3-D Range Data Segmentation

Xiangfei Qian; Cang Ye

This paper presents a new plane extraction (PE) method based on the random sample consensus (RANSAC) approach. The generic RANSAC-based PE algorithm may over-extract a plane, and it may fail in the case of a multistep scene where the RANSAC procedure results in multiple inlier patches that form a slanted plane straddling the steps. The CC-RANSAC PE algorithm successfully overcomes the latter limitation if the inlier patches are separate. However, it fails if the inlier patches are connected. A typical scenario is a stairway with a stair wall, where the RANSAC plane-fitting procedure results in inlier patches in the tread, riser, and stair-wall planes; these patches are connected and together form a single plane. The proposed method, called normal-coherence CC-RANSAC (NCC-RANSAC), performs a normal coherence check on all data points of the inlier patches and removes the data points whose normal directions contradict that of the fitted plane. This process results in separate inlier patches, each of which is treated as a candidate plane. A recursive plane clustering process is then executed to grow each of the candidate planes until all planes are extracted in their entirety. The RANSAC plane-fitting and recursive plane clustering processes are repeated until no more planes are found. A probabilistic model is introduced to predict the success probability of the NCC-RANSAC algorithm and is validated with real data from a 3-D time-of-flight camera, the SwissRanger SR4000. Experimental results demonstrate that the proposed method extracts more accurate planes with less computational time than the existing RANSAC-based methods.
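
A minimal sketch of the normal-coherence test described above, assuming per-point normals are already available: inlier points whose normals disagree with the fitted plane's normal beyond an angular threshold are removed, leaving separate patches. The threshold and the synthetic stair data are illustrative, and the recursive plane-clustering stage of NCC-RANSAC is omitted.

```python
import numpy as np

def normal_coherence_filter(points, point_normals, plane_normal,
                            angle_thresh_deg=30.0):
    """Keep only inlier points whose surface normals agree with the fitted
    plane's normal within `angle_thresh_deg`. The threshold and the way
    normals are estimated are assumptions made for this sketch."""
    plane_n = plane_normal / np.linalg.norm(plane_normal)
    n = point_normals / np.linalg.norm(point_normals, axis=1, keepdims=True)
    cos_ang = np.abs(n @ plane_n)          # |cos| so flipped normals still count
    keep = cos_ang >= np.cos(np.radians(angle_thresh_deg))
    return points[keep], keep

# Example: points from a horizontal tread (normals ~ +z) mixed with points
# from a vertical riser (normals ~ +x); a plane fitted with normal +z should
# keep only the tread points.
tread = np.random.default_rng(1).uniform(size=(50, 3))
tread[:, 2] = 0.0
riser = np.random.default_rng(2).uniform(size=(50, 3))
riser[:, 0] = 0.0
pts = np.vstack([tread, riser])
nrm = np.vstack([np.tile([0.0, 0.0, 1.0], (50, 1)),
                 np.tile([1.0, 0.0, 0.0], (50, 1))])
kept, mask = normal_coherence_filter(pts, nrm, np.array([0.0, 0.0, 1.0]))
print(f"kept {mask.sum()} of {len(pts)} points")
```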


Systems, Man and Cybernetics | 2000

A novel behavior fusion method for the navigation of mobile robots

Cang Ye; Danwei Wang

This paper presents a novel behavior fusion method for the navigation of an autonomous mobile vehicle in unknown environments. The proposed navigator consists of an Obstacle Avoider (OA), a Goal Seeker (GS), and a Navigation Supervisor (NS). The fuzzy actions inferred by the OA and the GS are weighted by the NS using local and global environmental information and fused through fuzzy set operations to produce a command action, from which the final crisp action is determined by defuzzification. Simulations show that the navigator is able to perform successful navigation tasks in various unknown environments, produces smooth actions, and exhibits exceptionally good robustness to sensor noise.
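
A minimal sketch of the fusion idea under simple assumptions: each behavior outputs a fuzzy set over steering commands, the supervisor weights the two sets, and centroid defuzzification yields the crisp command. The triangular membership functions, supervisor weights, and steering universe are invented for illustration and are not the paper's design.

```python
import numpy as np

# Candidate steering commands (degrees); the universe of discourse is assumed.
steer = np.linspace(-45.0, 45.0, 181)

def triangular(x, centre, width):
    """Triangular membership function used only for this illustration."""
    return np.clip(1.0 - np.abs(x - centre) / width, 0.0, None)

# Fuzzy output of each behavior: the OA prefers steering away from an
# obstacle on the right, the GS prefers steering toward a goal on the left.
mu_oa = triangular(steer, centre=-25.0, width=20.0)   # obstacle avoider
mu_gs = triangular(steer, centre=10.0, width=25.0)    # goal seeker

# Navigation supervisor weights (in reality derived from local/global context).
w_oa, w_gs = 0.7, 0.3

# Fusion by weighted fuzzy-set union, then centroid defuzzification.
mu_fused = np.maximum(w_oa * mu_oa, w_gs * mu_gs)
crisp = float(np.sum(steer * mu_fused) / np.sum(mu_fused))
print(f"crisp steering command: {crisp:.1f} deg")
```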


Intelligent Robots and Systems | 2009

Extraction of planar features from Swissranger SR-3000 Range Images by a clustering method using Normalized Cuts

GuruPrasad M. Hegde; Cang Ye

This paper describes a new approach to extracting planar features from 3D range data captured by a range imaging sensor, the SwissRanger SR-3000. The focus of this work is to segment vertical and horizontal planes from range images of indoor environments. The method first enhances a range image by using surface normal information. It then partitions the Normal-Enhanced Range Image (NERI) into a number of segments using the Normalized Cuts (N-Cuts) algorithm. A least-squares plane is fitted to each segment, and the fitting error is used to determine whether the segment is planar. Each resulting planar segment is labeled as vertical or horizontal based on the normal of its least-squares plane, and neighboring vertical or horizontal segments are merged. Through this region-growing process, the vertical and horizontal planes are extracted from the range data. The proposed method has many applications in navigating mobile robots in indoor environments.
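
A minimal sketch of the plane-fitting and labeling step under stated assumptions: a least-squares plane is fitted to a segment via SVD, the RMS fitting error decides planarity, and the alignment of the plane normal with the gravity axis decides vertical versus horizontal. The thresholds and the assumption that +z is the gravity direction are illustrative; the NERI construction and N-Cuts segmentation are omitted.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a point cloud via SVD.
    Returns (centroid, unit normal, RMS fitting error)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    rms = np.sqrt(np.mean(((points - centroid) @ normal) ** 2))
    return centroid, normal, rms

def label_segment(points, planar_rms_thresh=0.02, axis_cos_thresh=0.9):
    """Classify a segment as horizontal, vertical, oblique, or non-planar.
    Thresholds are illustrative; the gravity direction is assumed to be +z."""
    _, normal, rms = fit_plane(points)
    if rms > planar_rms_thresh:
        return "non-planar"
    if abs(normal[2]) > axis_cos_thresh:          # normal near the z axis
        return "horizontal"
    if abs(normal[2]) < 1.0 - axis_cos_thresh:    # normal near the ground plane
        return "vertical"
    return "oblique"

# Example: a noisy floor patch should be labelled horizontal.
rng = np.random.default_rng(0)
floor = np.column_stack([rng.uniform(size=200), rng.uniform(size=200),
                         0.005 * rng.normal(size=200)])
print(label_segment(floor))
```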


Proceedings of SPIE | 2010

A Visual Odometry Method Based on the SwissRanger SR4000

Cang Ye; Michael Bruch

This paper presents a pose estimation method based on a 3D camera, the SwissRanger SR4000. The proposed method estimates the camera's ego-motion by using the intensity and range data produced by the camera. It detects SIFT (Scale-Invariant Feature Transform) features in one intensity image and matches them to those in the next intensity image. The resulting 3D data point pairs are used to compute the least-squares rotation and translation matrices, from which the attitude and position changes between the two image frames are determined. Because the method uses feature descriptors to perform feature matching, it works well with large image motion between two frames without the need for a spatial correlation search. Due to the SR4000's consistent accuracy in depth measurement, the proposed method may achieve better pose estimation accuracy than a stereovision-based approach. Another advantage of the proposed method is that the range data of the SR4000 are complete and can therefore be used for obstacle avoidance/negotiation. This makes it possible to navigate a mobile robot using a single perception sensor. In this paper, we validate the idea of the pose estimation method and characterize its pose estimation performance.
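
The least-squares rotation and translation from matched 3D point pairs can be computed with the standard SVD-based (Kabsch) alignment; the sketch below shows only that step, on synthetic correspondences. SIFT detection, descriptor matching, and outlier handling are omitted, and the data are not from the SR4000.

```python
import numpy as np

def rigid_transform_3d(src, dst):
    """Least-squares rotation R and translation t such that dst ~ R @ src + t,
    computed with the standard SVD (Kabsch) method from matched 3D points."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# Synthetic check: rotate/translate a random cloud and recover the motion.
rng = np.random.default_rng(0)
pts = rng.uniform(size=(30, 3))
angle = np.radians(5.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.10, -0.05, 0.02])
moved = pts @ R_true.T + t_true
R_est, t_est = rigid_transform_3d(pts, moved)
print("rotation error:", np.max(np.abs(R_est - R_true)))
print("translation error:", np.max(np.abs(t_est - t_true)))
```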


international conference on robotics and automation | 2009

Robust edge extraction for SwissRanger SR-3000 range images

Cang Ye; GuruPrasad M. Hegde

This paper presents a new method for extracting object edges from range images obtained by a 3D range imaging sensor, the SwissRanger SR-3000. In the range image preprocessing stage, the method enhances object edges by using surface normal information, and it employs the Hough Transform to detect straight-line features in the Normal-Enhanced Range Image (NERI). Due to noise in the sensor's range data, a NERI contains corrupted object surfaces that may result in unwanted edges and greatly encumber the extraction of linear features. To alleviate this problem, a Singular Value Decomposition (SVD) filter is developed to smooth object surfaces. The efficacy of the edge extraction method is validated by experiments in various environments.
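
The paper's SVD filter is designed for smoothing object surfaces in the NERI; as a loose illustration of the general idea of SVD-based smoothing, the sketch below reconstructs a noisy synthetic range image from its largest singular values. The rank choice and the test image are assumptions, not the paper's filter design.

```python
import numpy as np

def svd_smooth(image, rank=5):
    """Low-rank reconstruction of a 2-D range image: keep the `rank` largest
    singular values and discard the rest, which suppresses pixel noise while
    preserving the dominant surface structure."""
    U, s, Vt = np.linalg.svd(image, full_matrices=False)
    s[rank:] = 0.0
    return U @ np.diag(s) @ Vt

# Example: a synthetic ramp-shaped range image corrupted with noise.
rng = np.random.default_rng(0)
ramp = np.outer(np.linspace(1.0, 2.0, 64), np.ones(64))
noisy = ramp + 0.05 * rng.normal(size=ramp.shape)
smoothed = svd_smooth(noisy, rank=2)
print(f"noise RMS before: {np.sqrt(np.mean((noisy - ramp) ** 2)):.4f}, "
      f"after: {np.sqrt(np.mean((smoothed - ramp) ** 2)):.4f}")
```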

Collaboration


Dive into Cang Ye's collaborations.

Top Co-Authors

GuruPrasad M. Hegde, University of Arkansas at Little Rock
Soonhac Hong, University of Arkansas at Little Rock
Xiangfei Qian, University of Arkansas at Little Rock
Amirhossein Tamjidi, University of Arkansas at Little Rock
Danwei Wang, Nanyang Technological University
Ashley Stroupe, Jet Propulsion Laboratory
He Zhang, University of Arkansas at Little Rock
Edward Tunstel, Johns Hopkins University