Naigong Yu
Beijing University of Technology
Publications
Featured research published by Naigong Yu.
Chinese Control and Decision Conference | 2015
Naigong Yu; Panna Jiao; Yuling Zheng
LeNet5 is a kind of Convolutional Neural Network (CNN) that has been used for handwritten digit recognition. To improve its recognition rate, this article presents an improved LeNet5 in which the last two layers of the original structure are replaced with a Support Vector Machine (SVM) classifier: LeNet5 acts as a trainable feature extractor and the SVM as the recognizer. To accelerate convergence, the stochastic diagonal Levenberg-Marquardt algorithm is introduced to train the network. A series of experiments on the MNIST digit database evaluates the proposed method's performance. The results show that the method outperforms both SVMs and the original LeNet5, and converges faster during training.
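The hybrid structure described above (a convolutional feature extractor feeding an SVM) can be sketched in miniature. This is an illustrative toy, not the paper's implementation: hand-set edge kernels with global max pooling stand in for the trained LeNet5 layers, a hinge-loss SGD stands in for the SVM solver, and the two synthetic "digit" classes are an assumption made for self-containment.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_pool_features(img, kernels):
    """Toy 'trainable feature extractor': valid 2-D convolution with each
    kernel, ReLU, then global max pooling -> one feature per kernel."""
    h, w = img.shape
    feats = []
    for k in kernels:
        kh, kw = k.shape
        resp = np.array([[np.sum(img[i:i + kh, j:j + kw] * k)
                          for j in range(w - kw + 1)]
                         for i in range(h - kh + 1)])
        feats.append(np.maximum(resp, 0).max())
    return np.array(feats)

def train_linear_svm(X, y, epochs=200, lr=0.05, lam=0.01):
    """Linear SVM trained by hinge-loss subgradient descent; y in {-1, +1}."""
    w = np.zeros(X.shape[1]); b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) < 1:          # margin violated
                w += lr * (yi * xi - lam * w)
                b += lr * yi
            else:
                w -= lr * lam * w              # regularization only
    return w, b

# Two synthetic 8x8 "digit" classes: vertical vs. horizontal stroke + noise.
def sample(cls):
    img = 0.1 * rng.random((8, 8))
    if cls == 0:
        img[:, 3] += 1.0   # vertical bar
    else:
        img[3, :] += 1.0   # horizontal bar
    return img

kernels = [np.array([[1.0], [1.0], [1.0]]),   # vertical edge detector
           np.array([[1.0, 1.0, 1.0]])]       # horizontal edge detector
X = np.array([conv_pool_features(sample(c % 2), kernels) for c in range(40)])
y = np.array([1 if c % 2 == 0 else -1 for c in range(40)])
w, b = train_linear_svm(X, y)
acc = np.mean(np.sign(X @ w + b) == y)
```

The extractor reduces each image to a short feature vector that the SVM separates linearly, mirroring the division of labor in the paper.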
World Congress on Intelligent Control and Automation | 2006
Xiaogang Ruan; Liang Liu; Naigong Yu; Mingxiao Ding
Based on the feedback error learning (FEL) method and Michael et al.'s work on dynamical state estimation, a new model of the motor control system is proposed to overcome the drawback arising from time delay. In the new model, the supervisory signal derives from both the output of a Kalman estimator and the feedback motor command, and this combined signal provides the instructive information used to train the forward neural network in the cerebellar cortex. The effectiveness of the proposed model is demonstrated by simulation experiments on an inverted pendulum.
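The Kalman estimator that supplies part of the supervisory signal can be illustrated in its simplest scalar form. This is a generic sketch, not the paper's estimator: the constant latent state, noise variances, and gains below are assumptions chosen only to show how filtering reduces the effect of noisy, delayed observations.

```python
import numpy as np

rng = np.random.default_rng(1)

def kalman_1d(zs, q=1e-3, r=0.5 ** 2):
    """Scalar Kalman filter for x_k = x_{k-1} + w (var q),
    z_k = x_k + v (var r).  Returns the filtered estimates."""
    x, p = 0.0, 1.0
    xs = []
    for z in zs:
        p += q                # predict: process noise grows uncertainty
        k = p / (p + r)       # Kalman gain
        x += k * (z - x)      # update with the innovation
        p *= (1 - k)
        xs.append(x)
    return np.array(xs)

true = np.ones(200)                           # constant latent state
zs = true + 0.5 * rng.standard_normal(200)    # noisy observations
est = kalman_1d(zs)
err_raw = np.mean(np.abs(zs[50:] - true[50:]))
err_kf = np.mean(np.abs(est[50:] - true[50:]))
```

After the initial transient, the filtered estimate tracks the state far more closely than the raw measurements, which is why it makes a better training signal than the delayed feedback alone.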
World Congress on Intelligent Control and Automation | 2006
Fangfang Zhang; Junfei Qiao; Chaobin Liu; Naigong Yu; Xiaogang Ruan
A cellular automata model of a sequencing batch reactor was constructed according to the biological mechanism of activated sludge processes. Simulations were run with practical parameters. The results indicate that the model replicates the complex evolution of the microorganisms during the reaction time and describes well the removal of biochemical oxygen demand and the growth of activated sludge. The model, whose validity is confirmed against the Eckenfelder formula, simulates activated sludge processes visually and demonstrates the advantages of modeling wastewater treatment with cellular automata. It offers a basic method for modeling similar complex systems, while also capturing characteristics of complex systems such as diversity, randomness, uncertainty, and strong nonlinearity.
World Congress on Intelligent Control and Automation | 2004
Naigong Yu; Xiaogang Ruan
To simulate the biomass growth process in penicillin fermentation, a cellular automata model of batch penicillin fermentation biomass growth is established. The model uses a three-dimensional cellular automaton as its growth space and a Moore-type neighborhood as its cell neighborhood. The transition rules are derived from the mechanism and the dynamic differential-equation model of a penicillin batch fermentation process. Each cell represents a single penicillin-producing bacterium (or a specific number of them) and can take on various states. Simulation results show that the model reproduces the biomass growth process described by the dynamic differential-equation model. This makes it possible to model, simulate, and control the penicillin fermentation process using cellular automata.
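The ingredients named above (a 3-D grid, a Moore neighborhood, and growth-oriented transition rules) can be sketched as follows. This is a minimal toy, not the paper's model: the division probability, substrate bookkeeping, and grid size are assumptions, and cell states are reduced to live/empty.

```python
import numpy as np

rng = np.random.default_rng(2)

def moore_neighbor_count(grid):
    """Number of live Moore neighbors (26-neighborhood) per 3-D cell;
    np.roll makes the boundary periodic, which is fine for a sketch."""
    count = np.zeros_like(grid)
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                if dx == dy == dz == 0:
                    continue
                count += np.roll(grid, (dx, dy, dz), axis=(0, 1, 2))
    return count

def step(grid, substrate, p_div=0.3):
    """One transition: an empty cell with a live Moore neighbor becomes
    live with probability p_div while substrate remains; substrate is
    consumed in proportion to the live biomass."""
    n = moore_neighbor_count(grid)
    birth = ((grid == 0) & (n >= 1)
             & (rng.random(grid.shape) < p_div) & (substrate > 0))
    grid = (grid | birth).astype(int)
    substrate = max(substrate - grid.sum() * 1e-4, 0.0)
    return grid, substrate

grid = np.zeros((12, 12, 12), dtype=int)
grid[6, 6, 6] = 1                 # a single inoculum cell
substrate = 1.0
history = [grid.sum()]
for _ in range(10):
    grid, substrate = step(grid, substrate)
    history.append(grid.sum())
```

The live-cell count in `history` grows monotonically from the inoculum, giving the qualitative sigmoid-like biomass curve such models aim to reproduce.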
Automatic Control and Computer Sciences | 2017
Jia Lin; Xiaogang Ruan; Naigong Yu; Jianxian Cai
This paper proposes a novel moving-hand segmentation approach using skin color, grayscale, depth, and motion cues for gesture recognition. The approach does not rely on restrictive assumptions and can handle hand-over-face occlusion. First, an online updated skin color histogram (OUSCH) model is built to robustly represent skin color; second, according to the variance of the grayscale and depth optical flow, a motion region of interest (MRoI) is adaptively extracted to locate the moving body part (MBP) and reduce the impact of noise; then, Harris-Affine corners that satisfy skin color and adaptive motion constraints are adopted as skin seed points in the MRoI; next, the seed points are grown into a candidate hand region using skin color, depth, and motion criteria; finally, boundary depth gradient, skeleton extraction, and shortest-path search are employed to segment the moving hand region from the candidate region. Experimental results demonstrate that the proposed approach accurately segments moving hand regions under different situations, especially when the face is occluded by a hand, and achieves higher segmentation accuracy than other state-of-the-art approaches.
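The seed-growing step, where skin, motion, and depth cues jointly gate region growth, can be sketched on a toy frame. This is not the paper's pipeline: the 8x8 arrays, the depth tolerance, and the static skin-colored "face" background are assumptions chosen to show why the motion and depth gates keep the face out of the hand region.

```python
import numpy as np
from collections import deque

def grow_region(seeds, skin, depth, motion, depth_tol=0.1):
    """Grow a candidate hand region from seed pixels: a 4-neighbor is
    added if it is skin-colored, moving, and close in depth to the pixel
    it was reached from (the three cues of the paper, in toy form)."""
    h, w = skin.shape
    region = np.zeros((h, w), dtype=bool)
    q = deque(seeds)
    for s in seeds:
        region[s] = True
    while q:
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not region[ny, nx]:
                if (skin[ny, nx] and motion[ny, nx]
                        and abs(depth[ny, nx] - depth[y, x]) < depth_tol):
                    region[ny, nx] = True
                    q.append((ny, nx))
    return region

# Toy frame: a moving "hand" at rows 2..5, cols 2..5 in front of a static
# skin-colored "face" background (hand-over-face situation).
skin = np.ones((8, 8), dtype=bool)          # everything is skin-colored
motion = np.zeros((8, 8), dtype=bool)
motion[2:6, 2:6] = True                     # only the hand moves
depth = np.ones((8, 8))
depth[2:6, 2:6] = 0.5                       # the hand is nearer
region = grow_region([(3, 3)], skin, depth, motion)
```

Even though every pixel passes the skin test, growth stops exactly at the hand's boundary because the face fails the motion gate, which is the intuition behind solving hand-over-face occlusion.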
Sensors | 2016
Jia Lin; Xiaogang Ruan; Naigong Yu; Yee-Hong Yang
Noise and fixed empirical motion constraints hamper the extraction of distinctive spatiotemporal features from one or a few samples per gesture class. To tackle these problems, an adaptive local spatiotemporal feature (ALSTF) using fused RGB-D data is proposed. First, motion regions of interest (MRoIs) are adaptively extracted using grayscale and depth velocity variance information to greatly reduce the impact of noise. Then, corners are taken as keypoints if their depth and their grayscale and depth velocities satisfy several adaptive local constraints in each MRoI. With further noise filtering, an accurate and sufficient number of keypoints is obtained within the desired moving body parts (MBPs). Finally, four kinds of multiple descriptors are calculated and combined in extended gradient and motion spaces to represent the appearance and motion features of gestures. Experimental results on the ChaLearn gesture, CAD-60, and MSRDailyActivity3D datasets demonstrate that the proposed feature achieves higher performance than published state-of-the-art approaches under the one-shot learning setting and comparable accuracy under leave-one-out cross validation.
Chinese Control and Decision Conference | 2015
Jia Lin; Xiaogang Ruan; Naigong Yu; Ruoyan Wei
To satisfy the distinctive feature extraction requirement of one-shot-learning gesture recognition for mobile robot control, an improved three-dimensional local sparse motion scale-invariant feature transform (3D SMoSIFT) feature descriptor is proposed, which fuses RGB-D video. First, gray, depth, and optical-flow pyramids are built as the scale space for each gray frame (converted from the RGB frame) and depth frame. Then, interest regions are extracted according to the variance of the optical flow, calculated in the horizontal and vertical directions. Subsequently, corners are extracted only within each interest region as interest points, and the gray and depth optical-flow information is used jointly to detect robust keypoints around the motion pattern in the scale space. Finally, SIFT descriptors are calculated in 3D gradient space and 3D motion space. The improved feature descriptor has been evaluated with a bag-of-features model on the one-shot-learning ChaLearn Gesture Dataset. Experiments demonstrate that the proposed method distinctly improves gesture recognition accuracy; the improved 3D SMoSIFT descriptor surpasses other spatiotemporal feature descriptors and is comparable to state-of-the-art approaches.
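The interest-region step, extracting regions where the optical-flow variance is high, can be sketched on synthetic flow. This toy is not the paper's method: the block size, threshold, and synthetic flow field are assumptions, and it only shows the variance test that localizes moving parts.

```python
import numpy as np

def motion_regions(flow, block=4, var_thresh=0.05):
    """Mark block-sized interest regions whose optical-flow variance
    (computed over the horizontal and vertical components separately,
    then summed) exceeds a threshold -- a toy version of the variance
    test used to localize moving body parts."""
    h, w, _ = flow.shape
    mask = np.zeros((h // block, w // block), dtype=bool)
    for i in range(h // block):
        for j in range(w // block):
            patch = flow[i * block:(i + 1) * block,
                         j * block:(j + 1) * block]
            var = patch[..., 0].var() + patch[..., 1].var()
            mask[i, j] = var > var_thresh
    return mask

rng = np.random.default_rng(3)
flow = 0.01 * rng.standard_normal((16, 16, 2))    # near-static background
flow[4:8, 4:8, 0] += rng.standard_normal((4, 4))  # one moving 4x4 patch
mask = motion_regions(flow)
```

Only the block containing genuine motion survives the threshold; keypoint detection can then be restricted to that region, which is how the variance test suppresses background noise.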
World Congress on Intelligent Control and Automation | 2014
Naigong Yu; Huanzhao Chen; Ti Li; Lin Wang
Spatial recognition is one of the neural functions of the rat hippocampus, in which head direction cells, grid cells, and place cells play a major role. Information about direction, distance, and location is integrated in the grid cells; place cells receive input from grid cell firing and construct the brain's representation of the spatial environment. This representation has been shown to be the foundation of the cognitive map. Spatial representations in the hippocampus emerge as rat pups first begin to explore the environment outside the nest, and they develop with age. This paper reviews and discusses the basis of these neural functions, including their biological properties, developmental process, modeling work, and some applications of this mechanism in agents.
World Congress on Intelligent Control and Automation | 2012
Xiaogang Ruan; Jun Li; Feng Xu; Naigong Yu
Aiming at remote teleoperation of a mobile robot for search and rescue, especially in complex environments, this paper presents a vision-based mobile robot remote control system. After validation in physical simulation, the system was also successfully transplanted to a physical self-balancing mobile robot, together with a callback-style remote-control instruction set. The result combines the self-balancing robot's balance control with motion control, a vision system, and a coordinated mechanical control system.
International Symposium on Neural Networks | 2006
Liang Liu; Naigong Yu; Mingxiao Ding; Xiaogang Ruan
Motivated by recent physiological and anatomical evidence, a new feedback error learning scheme is proposed for tracking in a motor control system. In the scheme, a model of the cerebellar cortex serves as the feedforward controller; specifically, a neural network and an estimator are adopted in the cerebellar cortex model, which can predict the future state and eliminate faults caused by time delay. The new scheme was then used to control an inverted pendulum, and simulation results show that it learns to control the inverted pendulum for tracking successfully.
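The core idea of feedback error learning, using the feedback controller's command as the teaching signal for the feedforward controller so that the feedback contribution shrinks over time, can be sketched on a trivially simple plant. This is not the paper's cerebellar model: the scalar first-order plant, gains, and learning rate below are assumptions, and a single adaptive weight stands in for the neural network.

```python
# Plant: x_{k+1} = a*x_k + b*u_k (parameters unknown to the learner).
a, b = 0.8, 0.5
kp = 2.0             # fixed feedback (reflex) gain

def run_episode(w, ref=1.0, steps=50):
    """One tracking episode.  The feedforward command is w * ref; the
    feedback command kp*(ref - x) serves as the teaching (error) signal
    that adapts w -- the core of feedback error learning."""
    x = 0.0
    fb_total = 0.0
    for _ in range(steps):
        u_fb = kp * (ref - x)       # feedback command = error signal
        u = w * ref + u_fb          # feedforward + feedback
        w += 0.01 * u_fb * ref      # FEL update: drive u_fb toward zero
        x = a * x + b * u
        fb_total += abs(u_fb)
    return w, fb_total

w = 0.0
w, fb_first = run_episode(w)        # untrained feedforward
for _ in range(30):
    w, fb_last = run_episode(w)     # feedforward gradually takes over
```

As the feedforward weight converges, the accumulated feedback effort per episode drops sharply: control shifts from the slow feedback loop to the learned feedforward path, which is the mechanism the cerebellar model exploits.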