Publication


Featured research published by Ruibin Feng.


IEEE Transactions on Neural Networks | 2017

A Regularizer Approach for RBF Networks Under the Concurrent Weight Failure Situation

Chi Sing Andrew Leung; Wai Yan Wan; Ruibin Feng

Many existing results on fault-tolerant algorithms focus on the single-fault-source situation, where a trained network is affected by one kind of weight failure. In practice, a trained network may be affected by multiple kinds of weight failure. This paper first studies how open weight faults and multiplicative weight noise degrade the performance of radial basis function (RBF) networks. Afterward, we define an objective function for training fault-tolerant RBF networks. Based on the objective function, we develop two learning algorithms, one for batch mode and one for online mode. In addition, the convergence conditions of our online algorithm are investigated. Finally, we develop a formula to estimate the test set error of faulty networks trained with our approach. This formula helps us optimize tuning parameters such as the RBF width.
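
As a concrete illustration of the regularizer idea, the following NumPy sketch trains batch-mode weights under multiplicative weight noise alone, using the ridge-like objective ||y - Hw||^2 + sigma_b2 * w^T diag(H^T H) w; this objective form, the toy data, and all parameter values are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def rbf_design_matrix(X, centers, width):
    """Gaussian RBF design matrix: H[i, j] = exp(-||x_i - c_j||^2 / width)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / width)

def fault_tolerant_batch_weights(H, y, sigma_b2):
    """Batch-mode weights under multiplicative weight noise of variance
    sigma_b2: the expected training error adds a noise-dependent penalty,
    giving a ridge-like closed-form solution."""
    G = H.T @ H
    R = sigma_b2 * np.diag(np.diag(G))   # noise-induced regularizer
    return np.linalg.solve(G + R, H.T @ y)

# Toy usage: fit y = sin(x) with 10 RBF nodes and weight-noise variance 0.01.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(100, 1))
y = np.sin(X[:, 0])
centers = np.linspace(-3, 3, 10)[:, None]
H = rbf_design_matrix(X, centers, width=1.0)
w = fault_tolerant_batch_weights(H, y, sigma_b2=0.01)
```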


IEEE Transactions on Neural Networks | 2017

Lagrange Programming Neural Network for Nondifferentiable Optimization Problems in Sparse Approximation

Ruibin Feng; Chi-Sing Leung; Anthony G. Constantinides; Wen-Jun Zeng

The major limitation of the Lagrange programming neural network (LPNN) approach is that the objective function and the constraints must be twice differentiable. Since sparse approximation involves nondifferentiable functions, the original LPNN approach is not suitable for recovering sparse signals. This paper proposes a new formulation of the LPNN approach based on the concept of the locally competitive algorithm (LCA). Unlike the classical LCA, which can solve unconstrained optimization problems only, the proposed LPNN approach can also solve constrained optimization problems. Two problems in sparse approximation are considered: basis pursuit (BP) and constrained BP denoising (CBPDN). We propose two LPNN models, namely BP-LPNN and CBPDN-LPNN, to solve these problems. For both models, we show that the equilibrium points of the models coincide with the optimal solutions of the corresponding problems, and that the equilibrium points are stable. Simulations are carried out to verify the effectiveness of the two models.
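
The hidden-state idea can be sketched in discrete time for the BP problem, min ||x||_1 subject to Phi x = b: the output x is a soft threshold of a hidden state u, while Lagrange multipliers push the constraint residual to zero. The specific dynamics, step size, and iteration count below are illustrative Euler-integration choices, not the paper's exact analog model.

```python
import numpy as np

def soft_threshold(u, lam=1.0):
    """LCA output nonlinearity: elementwise soft threshold."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def bp_lpnn_sketch(Phi, b, steps=20000, dt=0.01):
    """Euler simulation of LCA-style hidden-state dynamics for basis
    pursuit: min ||x||_1 subject to Phi @ x = b."""
    u = np.zeros(Phi.shape[1])        # hidden states
    lam_ = np.zeros(Phi.shape[0])     # Lagrange multipliers
    for _ in range(steps):
        x = soft_threshold(u)
        u += dt * (x - u - Phi.T @ lam_)   # descent on the Lagrangian
        lam_ += dt * (Phi @ x - b)         # ascent enforcing Phi x = b
    return soft_threshold(u)

# Toy usage: recover a 3-sparse vector from 30 random measurements.
rng = np.random.default_rng(1)
Phi = rng.standard_normal((30, 100)) / np.sqrt(30)
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [2.0, -1.5, 1.0]
x_hat = bp_lpnn_sketch(Phi, Phi @ x_true)
```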


Neural Processing Letters | 2015

GPU Accelerated Self-Organizing Map for High Dimensional Data

Yi Xiao; Ruibin Feng; Zi-Fa Han; Chi-Sing Leung

The self-organizing map (SOM) model is an effective technique applicable in a wide range of areas, such as pattern recognition and image processing. In the SOM model, the most time-consuming procedure is the training process, which consists of two time-consuming parts. The first is the calculation of the Euclidean distances between training vectors and codevectors. The second is the update of the codevectors under the predefined neighborhood structure. This paper proposes a graphics processing unit (GPU) algorithm that accelerates both parts using the graphics rendering ability of GPUs. The distance calculation is implemented as matrix multiplication with a compute shader, while the update process is treated as a point-rendering process with a vertex shader and a fragment shader. Experimental results show that our algorithm runs much faster than previous CUDA implementations, especially for large neighborhoods. In addition, our method can handle large codebook sizes and high-dimensional data.
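
The identity behind the matrix-multiplication formulation is ||x - c||^2 = ||x||^2 - 2 x.c + ||c||^2, so the cross term, the dominant cost, becomes one matrix product. The NumPy sketch below mirrors that structure on the CPU (it is not the shader code); the Gaussian neighborhood and the sequential update are illustrative choices.

```python
import numpy as np

def som_train_pass(X, codebook, grid, sigma, lr):
    """One SOM pass: X (N, d) training vectors, codebook (M, d)
    codevectors, grid (M, 2) map coordinates of the units."""
    # ||x - c||^2 = ||x||^2 - 2 x.c + ||c||^2: the x.c term is a single
    # matrix product, which is the part the paper maps onto the GPU.
    d2 = (X**2).sum(1)[:, None] - 2 * X @ codebook.T + (codebook**2).sum(1)[None, :]
    bmu = d2.argmin(axis=1)                  # best-matching unit per vector
    for i, x in enumerate(X):
        g2 = ((grid - grid[bmu[i]])**2).sum(1)
        h = np.exp(-g2 / (2 * sigma**2))     # Gaussian neighborhood weights
        codebook += lr * h[:, None] * (x - codebook)
    return codebook

# Toy usage: a 5x5 map trained on 200 random 2-D points.
rng = np.random.default_rng(5)
gy, gx = np.mgrid[0:5, 0:5]
grid = np.column_stack([gy.ravel(), gx.ravel()]).astype(float)
codebook = som_train_pass(rng.standard_normal((200, 2)),
                          rng.standard_normal((25, 2)), grid,
                          sigma=1.5, lr=0.1)
```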


Neurocomputing | 2017

Properties and learning algorithms for faulty RBF networks with coexistence of weight and node failures

Ruibin Feng; Zi-Fa Han; Wai Yan Wan; Chi-Sing Leung

Although there are many fault-tolerant algorithms for neural networks, they usually focus on one kind of weight failure or node failure only. This paper first proposes a unified fault model for describing the concurrent weight and node failure situation, where open weight faults, open node faults, weight noise, and node noise can occur in a network at the same time. Afterward, we analyze the training set error of radial basis function (RBF) networks under this concurrent failure situation. Based on this analysis, we define an objective function for tolerating concurrent weight and node failures. We then develop two learning algorithms, one for batch mode learning and one for online mode learning. Furthermore, for online mode learning, we derive the convergence conditions for two cases: fixed learning rate and adaptive learning rate.
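
The unified fault model lends itself to a Monte-Carlo check of how a trained network degrades; the sketch below injects all four fault types at once, with fault rates, noise variances, and the toy least-squares network all being illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def faulty_rbf_output(H, w, p_weight=0.05, p_node=0.05,
                      sigma_w=0.1, sigma_n=0.1):
    """One faulty realization under the unified fault model: open weight
    faults, open node faults, multiplicative weight noise, and node
    noise, all injected at the same time."""
    w_f = w * (rng.random(w.size) >= p_weight)         # open weight faults
    w_f *= 1 + sigma_w * rng.standard_normal(w.size)   # weight noise
    H_f = H * (rng.random(w.size) >= p_node)[None, :]  # open node faults
    H_f *= 1 + sigma_n * rng.standard_normal(H.shape)  # node noise
    return H_f @ w_f

# Monte-Carlo faulty-error estimate for a toy least-squares network.
H = np.abs(rng.standard_normal((50, 8)))
y = rng.standard_normal(50)
w = np.linalg.lstsq(H, y, rcond=None)[0]
err = np.mean([np.mean((y - faulty_rbf_output(H, w)) ** 2)
               for _ in range(200)])
```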


IEEE Transactions on Neural Networks | 2015

Properties and Performance of Imperfect Dual Neural Network-Based k-WTA Networks

Ruibin Feng; Chi-Sing Leung; John Sum; Yi Xiao

The dual neural network (DNN)-based k-winner-take-all (k-WTA) model is an effective approach for finding the k largest of n inputs. Its major assumption is that the threshold logic units (TLUs) can be implemented perfectly. However, when differential bipolar pairs are used to implement TLUs, the transfer function of the TLUs is a logistic function. This brief studies the properties of the DNN-k-WTA model under this imperfect situation. We prove that, given any initial state, the network settles at the unique equilibrium point. In addition, the energy function of the model is revealed. Based on the energy function, we propose an efficient method to study the model's performance when the inputs are drawn from continuous distributions. Furthermore, for uniformly distributed inputs, we derive a formula to estimate the probability that the model produces the correct outputs. Finally, for the case where the minimum separation \(\Delta_{\min}\) of the inputs is given, we prove that if the gain of the activation function is greater than \(\frac{1}{4\Delta_{\min}}\max\left(\ln 2n,\; 2\ln\frac{1-\epsilon}{\epsilon}\right)\), then the network produces the correct outputs, with winner outputs greater than \(1-\epsilon\) and loser outputs less than \(\epsilon\), where \(\epsilon\) is a threshold less than 0.5.
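
At equilibrium the model reduces to a one-dimensional condition: find the threshold y such that the logistic outputs sum to k. The bisection sketch below is a numerical stand-in for the analog dynamics, useful for checking how the gain affects output correctness; the gain value is illustrative.

```python
import numpy as np

def dnn_kwta_logistic(u, k, gain):
    """Equilibrium outputs of a DNN-kWTA network with logistic TLUs:
    bisect for the scalar y with sum_i g(gain * (u_i - y)) = k, then
    read out the logistic activations."""
    def g(y):
        # Clip the exponent to avoid overflow warnings at high gain.
        return 1.0 / (1.0 + np.exp(np.clip(-gain * (u - y), -50.0, 50.0)))
    lo, hi = u.min() - 1.0, u.max() + 1.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if g(mid).sum() > k:
            lo = mid       # too many winners: raise the threshold
        else:
            hi = mid
    return g(0.5 * (lo + hi))

# Toy usage: select the k = 3 largest of 10 inputs; winners are near 1.
rng = np.random.default_rng(3)
out = dnn_kwta_logistic(rng.random(10), k=3, gain=200.0)
```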


Cognitive Computation | 2014

k

Ruibin Feng; Yi Xiao; Chi-Sing Leung; Peter Wai Ming Tsang; John Sum

As the concept of artificial neural networks is based on the mechanism of the human brain, a trained artificial neural network should exhibit a certain amount of fault tolerance. In this paper, we propose a fault-tolerant learning method for training radial basis function (RBF) networks that may contain coexisting stuck-at-zero and stuck-at-one node faults. First, we provide a formulation for evaluating the mean square error of faulty RBF networks. Next, an objective function, together with an algorithm for training fault-tolerant RBF networks, is developed. Subsequently, we derive a mean prediction error (MPE) formula to estimate the test set error of faulty RBF networks. With the MPE formula, we can estimate the RBF width that leads to near-optimal fault tolerance. Finally, simulations are conducted to demonstrate the feasibility of our method, as well as its agreement with the theoretical results.
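
A Monte-Carlo counterpart of the faulty-network error evaluation can be sketched directly: draw stuck-at-zero and stuck-at-one node patterns at fault rates p0 and p1 and average the resulting errors. The analytical MPE formula in the paper avoids this sampling; the rates and toy network here are illustrative.

```python
import numpy as np

def stuck_at_fault_mse(H, w, y, p0, p1, trials=500, rng=None):
    """Average MSE of an RBF network whose hidden nodes are stuck at
    zero with probability p0 or stuck at one with probability p1."""
    rng = rng or np.random.default_rng(0)
    errs = []
    for _ in range(trials):
        r = rng.random(w.size)
        Hf = H.copy()
        Hf[:, r < p0] = 0.0                      # stuck-at-zero nodes
        Hf[:, (r >= p0) & (r < p0 + p1)] = 1.0   # stuck-at-one nodes
        errs.append(np.mean((y - Hf @ w) ** 2))
    return float(np.mean(errs))

# Example: a toy network with 5% stuck-at-zero and 2% stuck-at-one nodes.
rng = np.random.default_rng(6)
H = np.abs(rng.standard_normal((40, 6)))
w = rng.standard_normal(6)
print(stuck_at_fault_mse(H, w, H @ w, p0=0.05, p1=0.02, rng=rng))
```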


International Conference on Neural Information Processing | 2013

WTA Networks

Yi Xiao; Ruibin Feng; Chi-Sing Leung; Pui Fai Sum

The spherical K-means algorithm is frequently used in high-dimensional data clustering. Although there are some GPU algorithms for K-means training, their implementations suffer from a large amount of data transfer between the CPU and the GPU, and from a large number of rendering passes. By utilizing the random write ability of vertex shaders, we can reduce these overheads. However, this vertex shader based approach can handle low-dimensional data only. This paper presents a GPU-based training implementation of spherical K-means for high-dimensional data. We utilize geometry shaders, which can generate new vertices, to handle high-dimensional data.
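
For reference, the iteration being accelerated looks as follows on the CPU: unit-normalize the data, assign each vector to the centroid with the largest cosine similarity (one matrix product), then re-estimate and re-normalize the centroids. This is a plain NumPy rendering of standard spherical K-means, not the shader implementation.

```python
import numpy as np

def spherical_kmeans(X, k, iters=50, rng=None):
    """Standard spherical K-means on unit vectors; the similarity step
    X @ C.T is the matrix product the GPU implementation targets."""
    rng = rng or np.random.default_rng(0)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    C = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        assign = (X @ C.T).argmax(axis=1)     # nearest centroid by cosine
        for j in range(k):
            members = X[assign == j]
            if len(members):
                c = members.sum(axis=0)
                C[j] = c / np.linalg.norm(c)  # re-project onto the sphere
    return C, assign

# Toy usage: cluster 300 random 20-D unit vectors into k = 4 groups.
rng = np.random.default_rng(7)
C, labels = spherical_kmeans(rng.standard_normal((300, 20)), k=4, rng=rng)
```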


Neural Computing and Applications | 2018

An Improved Fault-Tolerant Objective Function and Learning Algorithm for Training the Radial Basis Function Neural Network

Hao Wang; Ching Man Lee; Ruibin Feng; Chi-Sing Leung

This paper addresses analog optimization for nondifferentiable functions. The Lagrange programming neural network (LPNN) approach provides a systematic way to build analog neural networks for handling constrained optimization problems. However, its drawback is that it cannot handle nondifferentiable functions. In compressive sampling, one of the optimization problems is the least absolute shrinkage and selection operator (LASSO), where the constraint is nondifferentiable. This paper uses the hidden-state concept from the local competition algorithm to formulate an analog model for the LASSO problem, so the nondifferentiability limitation of the LPNN can be overcome. Under some conditions, at equilibrium, the network leads to the optimal solution of the LASSO. We also prove that these equilibrium points are stable. A simulation study illustrates that the proposed analog model and the traditional digital method have similar mean squared error performance.
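
The hidden-state mechanism is easiest to see in the classical LCA for the unconstrained LASSO form min 0.5||b - Phi x||^2 + lam ||x||_1: the state u evolves smoothly while the output is its soft threshold, so no gradient is ever taken at a nondifferentiable point. The paper embeds this idea in a constrained LPNN model; the Euler discretization below is only an illustrative sketch of the LCA dynamics.

```python
import numpy as np

def lca_lasso(Phi, b, lam=0.1, steps=5000, dt=0.05):
    """Euler-discretized LCA dynamics for the LASSO: the output x is a
    soft threshold of the hidden state u, which follows smooth dynamics."""
    u = np.zeros(Phi.shape[1])
    for _ in range(steps):
        x = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)
        u += dt * (Phi.T @ (b - Phi @ x) - u + x)   # smooth in u throughout
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

# Toy usage: recover a 2-sparse vector from noisy random measurements.
rng = np.random.default_rng(8)
Phi = rng.standard_normal((40, 120)) / np.sqrt(40)
x0 = np.zeros(120)
x0[[10, 60]] = [1.0, -2.0]
x_hat = lca_lasso(Phi, Phi @ x0 + 0.01 * rng.standard_normal(40))
```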


Cognitive Computation | 2018

GPU Accelerated Spherical K-Means Training

Hao Wang; Ruibin Feng; Andrew Chi-Sing Leung; Kim Fung Tsang

There are two interesting properties of the human brain. One is its massively interconnected structure. The other is that humans can handle outlier data effectively; for instance, a human is able to recognize an object in an image corrupted by non-Gaussian noise. Artificial neural networks are a biologically inspired technique, and from the structural point of view, many neural network models have massively interconnected structures. Since the traditional analog neural network approach cannot handle an \(l_1\)-norm-like objective function, it cannot be used to handle outlier data. This paper proposes two neural network models for the robust source localization problem in the time-of-arrival (TOA) model. Our development is based on the Lagrange programming neural network (LPNN) approach. To alleviate the influence of outliers, this paper introduces an \(l_1\)-norm objective function. However, in the traditional LPNN approach, the constraints and the objective function must be differentiable. We devise two methods to handle the nondifferentiable \(l_1\)-norm term. The first method introduces an approximation to replace the \(l_1\)-norm term. The second uses the concept of hidden state from the locally competitive algorithm (LCA) to avoid computing the gradient vector at nondifferentiable points. We also establish the local stability of the two proposed models. Simulations show that our proposed methods are capable of handling outliers, and their error performance is better than that of many existing TOA algorithms.
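
To see what the l1 objective buys, the sketch below minimizes sum_i |d_i - ||x - p_i||| by plain subgradient descent, a simple numerical stand-in for the paper's analog LPNN models; the anchor layout, step size, and outlier magnitude are illustrative.

```python
import numpy as np

def robust_toa_l1(anchors, d, steps=3000, lr=0.01):
    """Subgradient descent on the l1 TOA objective
    sum_i |d_i - ||x - p_i|||, which damps the influence of outliers."""
    x = anchors.mean(axis=0)                      # start at the centroid
    for _ in range(steps):
        diff = x - anchors                        # (m, 2) offsets to anchors
        r = np.linalg.norm(diff, axis=1) + 1e-12  # current range estimates
        g = (np.sign(r - d)[:, None] * diff / r[:, None]).sum(axis=0)
        x -= lr * g                               # subgradient step
    return x

# Toy usage: 6 anchors; one measured range carries a gross outlier.
rng = np.random.default_rng(4)
anchors = rng.uniform(-10, 10, size=(6, 2))
src = np.array([1.0, 2.0])
d = np.linalg.norm(anchors - src, axis=1)
d[0] += 15.0                                      # outlier measurement
est = robust_toa_l1(anchors, d)                   # stays close to src
```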


International Conference on Neural Information Processing | 2016

An analog neural network approach for the least absolute shrinkage and selection operator problem

Hao Wang; Ruibin Feng; Chi-Sing Leung

One of the traditional models for finding the location of a mobile source is the time-of-arrival (TOA) model. It usually assumes that the measurement noise follows a Gaussian distribution. In practice, however, outliers are difficult to avoid. This paper proposes an \(l_1\)-norm based objective function for alleviating the influence of outliers. Afterwards, we utilize the Lagrange programming neural network (LPNN) framework for the position estimation. As the framework requires that its objective function and constraints be twice differentiable, we introduce an approximation for the \(l_1\)-norm term in our LPNN formulation. Simulation results show that the proposed algorithm is very robust.
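
The approximation referred to here can be sketched by replacing |s| with the differentiable sqrt(s^2 + eps), whose derivative s / sqrt(s^2 + eps) acts as a smooth sign function; the eps value and the use of plain gradient descent (instead of the analog LPNN dynamics) are illustrative choices.

```python
import numpy as np

def smoothed_l1_toa(anchors, d, eps=1e-3, steps=3000, lr=0.01):
    """TOA localization with |s| approximated by sqrt(s^2 + eps), making
    the objective differentiable as the LPNN framework requires."""
    x = anchors.mean(axis=0)
    for _ in range(steps):
        diff = x - anchors
        r = np.linalg.norm(diff, axis=1) + 1e-12
        s = r - d
        # d/ds sqrt(s^2 + eps) = s / sqrt(s^2 + eps): a smooth sign().
        g = ((s / np.sqrt(s**2 + eps))[:, None] * diff / r[:, None]).sum(axis=0)
        x -= lr * g
    return x
# Usage mirrors the subgradient sketch above: smoothed_l1_toa(anchors, d).
```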

Collaboration


Dive into Ruibin Feng's collaborations.

Top Co-Authors

Chi-Sing Leung (City University of Hong Kong)
Hao Wang (City University of Hong Kong)
Zi-Fa Han (City University of Hong Kong)
Yi Xiao (City University of Hong Kong)
John Sum (National Chung Hsing University)
Hing Cheung So (City University of Hong Kong)
Wai Yan Wan (City University of Hong Kong)
Ching Man Lee (City University of Hong Kong)
Kai-Tat Ng (City University of Hong Kong)
Peter Wai Ming Tsang (City University of Hong Kong)