Publications


Featured research published by Zi-Fa Han.


Neurocomputing | 2017

Properties and learning algorithms for faulty RBF networks with coexistence of weight and node failures

Ruibin Feng; Zi-Fa Han; Wai Yan Wan; Chi-Sing Leung

Although there are many fault-tolerant algorithms for neural networks, they usually focus on only one kind of weight or node failure. This paper first proposes a unified fault model for describing the concurrent weight and node failure situation, in which open weight faults, open node faults, weight noise, and node noise can occur in a network at the same time. Afterwards, we analyze the training set error of radial basis function (RBF) networks under the concurrent weight and node failure situation. Based on this analysis, we define an objective function for tolerating the concurrent weight and node failure situation. We then develop two learning algorithms, one for batch mode learning and one for online mode learning. Furthermore, for the online mode learning algorithm, we derive the convergence conditions for two cases: fixed learning rate and adaptive learning rate.
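As a rough illustration of the batch mode case, a fault-tolerant objective of this kind typically reduces to a regularized least-squares problem whose regularization strength depends on the assumed fault statistics. The sketch below is not the paper's algorithm: the Gaussian basis, the centers, and the scalar regularizer `lam` are all illustrative assumptions.

```python
import numpy as np

def rbf_design(X, centers, width):
    # Gaussian RBF design matrix: Phi[i, j] = exp(-||x_i - c_j||^2 / width)
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / width)

def fault_tolerant_fit(X, y, centers, width, lam):
    # Batch mode solution of a regularized least-squares objective:
    # minimize ||y - Phi w||^2 + lam ||w||^2, where lam stands in for a
    # fault-dependent regularizer (an illustrative choice, not the paper's).
    Phi = rbf_design(X, centers, width)
    M = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(M), Phi.T @ y)

# Toy usage: fit a noisy sine curve with 10 Gaussian centers.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, (50, 1))
y = np.sin(np.pi * X[:, 0]) + 0.05 * rng.standard_normal(50)
centers = np.linspace(-1.0, 1.0, 10)[:, None]
w = fault_tolerant_fit(X, y, centers, width=0.1, lam=0.01)
print(w.shape)
```

Larger values of `lam` trade training accuracy for robustness to perturbed weights, which is the qualitative effect a fault-tolerant regularizer aims for.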


Neural Processing Letters | 2017

Sparse and Truncated Nuclear Norm Based Tensor Completion

Zi-Fa Han; Chi-Sing Leung; Longting Huang; Hing Cheung So

One of the main difficulties in tensor completion is the calculation of the tensor rank. Recently, a tensor nuclear norm, equal to the weighted sum of the matrix nuclear norms of all unfoldings of the tensor, was proposed to address this issue. However, in this approach all the singular values are minimized simultaneously, so the tensor rank may not be well approximated. In addition, many existing algorithms ignore the structural information of the tensor. This paper presents a tensor completion algorithm based on the proposed tensor truncated nuclear norm, which is superior to the traditional tensor nuclear norm. Furthermore, to preserve the structural information, a sparse regularization term, defined in the transform domain, is added to the objective function. Experimental results show that the proposed algorithm outperforms several state-of-the-art tensor completion schemes.
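A building block commonly used with truncated nuclear norms is a singular-value shrinkage step that leaves the largest r singular values untouched, so that the dominant structure is not penalized. The sketch below illustrates that idea on a single matrix; the target rank `r` and threshold `tau` are illustrative choices, and this is not the paper's full algorithm.

```python
import numpy as np

def truncated_svt(M, r, tau):
    # Shrinkage step for a truncated nuclear norm: the r largest
    # singular values are kept intact, and only the remaining ones
    # are soft-thresholded by tau.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = s.copy()
    s[r:] = np.maximum(s[r:] - tau, 0.0)
    return (U * s) @ Vt

# Toy usage: denoise a nearly rank-2 matrix.
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 6))  # exact rank 2
A_noisy = A + 0.01 * rng.standard_normal((6, 6))
A_hat = truncated_svt(A_noisy, r=2, tau=0.5)
print(np.linalg.matrix_rank(A_hat, tol=1e-8))
```

Because the small singular values here come only from the noise, thresholding them recovers a low-rank estimate while the two dominant singular values pass through unchanged.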


Neurocomputing | 2015

Online training and its convergence for faulty networks with multiplicative weight noise

Zi-Fa Han; Ruibin Feng; Wai Yan Wan; Chi-Sing Leung

A recent article showed that the objective function of the online weight noise injection algorithm is not equal to the training set error of faulty radial basis function (RBF) networks under the weight noise situation (Ho et al., 2010). Hence the online weight noise injection algorithm is not able to optimize the training set error of faulty networks with multiplicative weight noise. This paper proposes an online learning algorithm to tolerate multiplicative weight noise. Two learning rate cases, namely fixed learning rate and adaptive learning rate, are investigated. For the fixed learning rate case, we show that if the learning rate satisfies $\mu < 2/(\sigma_b^2 + \max_i \|\boldsymbol{\phi}(x_i)\|^2)$, then the online algorithm converges, where the $x_i$'s are the training input vectors, $\sigma_b^2$ is the variance of the multiplicative weight noise, $\boldsymbol{\phi}(x_i) = [\phi_1(x_i), \ldots, \phi_M(x_i)]^T$, and $\phi_j(\cdot)$ is the output of the $j$-th RBF node. In addition, as $\mu \to 0$, the trained weight vector tends to the optimal solution. For the adaptive learning rate case, let the learning rates $\{\mu_k\}$ be a decreasing sequence with $\lim_{k \to \infty} \mu_k = 0$, where $k$ is the index of learning cycles. We prove that if $\sum_{k=1}^{\infty} \mu_k = \infty$ and $\sum_{k=1}^{\infty} \mu_k^2 < \infty$, then the weight vector converges to the optimal solution. Our simulation results show that the performance of the proposed algorithm is better than that of conventional online approaches, such as online weight decay and online weight noise injection.
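The adaptive learning rate conditions quoted above are the classical Robbins-Monro conditions; the schedule $\mu_k = 1/k$ satisfies both sums. The sketch below shows an online update using that schedule. The correction term involving `sigma_b2` is a schematic stand-in for the regularizing effect of multiplicative weight noise, not the paper's exact update rule.

```python
import numpy as np

def online_train(Phi, y, sigma_b2, n_epochs=200):
    # Online LMS-style update with decreasing learning rate mu_k = 1/k,
    # which satisfies sum_k mu_k = inf and sum_k mu_k^2 < inf (the
    # adaptive learning rate conditions in the abstract).  The sigma_b2
    # term sketches a fault-aware correction (an assumption here).
    N, M = Phi.shape
    w = np.zeros(M)
    k = 0
    for _ in range(n_epochs):
        for i in range(N):
            k += 1
            mu = 1.0 / k
            e = y[i] - Phi[i] @ w
            w += mu * (e * Phi[i] - sigma_b2 * (Phi[i] ** 2) * w)
    return w

# Toy usage: recover weights from noiseless RBF-style activations.
rng = np.random.default_rng(2)
Phi = np.exp(-rng.uniform(0.0, 4.0, (100, 8)))  # stand-in node outputs
w_true = rng.standard_normal(8)
y = Phi @ w_true
w = online_train(Phi, y, sigma_b2=0.01)
print(np.mean((y - Phi @ w) ** 2))
```

With a fixed rate instead of $1/k$, the same loop converges only when $\mu$ stays below the stability bound quoted in the abstract.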


international conference on neural information processing | 2015

Non-Line-of-Sight Mitigation via Lagrange Programming Neural Networks in TOA-Based Localization

Zi-Fa Han; Chi-Sing Leung; Hing Cheung So; John Sum; Anthony G. Constantinides

A common measurement model for locating a mobile source is time-of-arrival (TOA). However, when non-line-of-sight (NLOS) bias errors exist, they can seriously degrade the estimation accuracy. This paper formulates the problem of estimating a mobile source position under the NLOS situation as a nonlinear constrained optimization problem. Afterwards, we apply the concept of Lagrange programming neural networks (LPNNs) to solve the problem. To improve the stability at the equilibrium point, we add an augmented term to the LPNN objective function. Simulation results show that the proposed method provides much more robust estimation performance.
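The LPNN idea can be illustrated on a toy equality-constrained problem: variable neurons perform gradient descent on the Lagrangian while a Lagrange neuron performs gradient ascent, and an augmented quadratic penalty on the constraint helps stabilize the equilibrium. The problem below (projecting a point onto a line) is purely illustrative, not the TOA localization formulation.

```python
import numpy as np

def lpnn_solve(a, steps=20000, dt=1e-3, C=1.0):
    # Euler-discretized LPNN-style dynamics for the toy problem
    #   min ||x - a||^2   s.t.   h(x) = x1 + x2 - 1 = 0.
    # Augmented Lagrangian: L = ||x - a||^2 + lam*h(x) + (C/2)*h(x)^2.
    # The C-term mirrors the augmented term the abstract adds for
    # stability at the equilibrium point.
    x = np.zeros(2)
    lam = 0.0
    for _ in range(steps):
        h = x[0] + x[1] - 1.0
        grad_x = 2.0 * (x - a) + (lam + C * h) * np.ones(2)
        x = x - dt * grad_x      # variable neurons: descend on L
        lam = lam + dt * h       # Lagrange neuron: ascend on L
    return x

# Toy usage: the optimum is the projection of a onto the line x1 + x2 = 1.
x = lpnn_solve(np.array([2.0, 0.0]))
print(x)
```

At convergence the constraint residual vanishes and the state settles at the constrained minimizer, here the projection point (1.5, -0.5).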


international conference on neural information processing | 2014

Tensor Completion Based on Structural Information

Zi-Fa Han; Ruibin Feng; Longting Huang; Yi Xiao; Chi-Sing Leung; Hing Cheung So

In tensor completion, one of the challenges is the calculation of the tensor rank. Recently, a tensor nuclear norm, a weighted sum of the matrix nuclear norms of all unfoldings, has been proposed to address this difficulty. However, in the matrix nuclear norm based approach, all the singular values are minimized simultaneously, so the rank may not be well approximated. This paper presents a tensor completion algorithm based on the concept of the matrix truncated nuclear norm, which is superior to the traditional matrix nuclear norm. Since most existing tensor completion algorithms do not consider the structural information of the tensor, we add an additional term to the objective function so that we can exploit the spatial regularity in the tensor data. Simulation results show that our proposed algorithm outperforms several state-of-the-art tensor/matrix completion algorithms.
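For concreteness, the mode-n unfoldings that these tensor nuclear norms are built from can be computed as below; the equal 1/3 weights in the final sum are an illustrative choice, not a value taken from the paper.

```python
import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: the mode-n fibers of the tensor become the
    # columns of a matrix of shape (T.shape[mode], -1).
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

# Toy usage: a 2 x 3 x 4 tensor has three unfoldings.
T = np.arange(24.0).reshape(2, 3, 4)
for mode in range(3):
    print(mode, unfold(T, mode).shape)

# Equal-weight tensor nuclear norm over all unfoldings
# (sum of each unfolding's singular values, weighted 1/3 each).
tnn = sum(np.linalg.svd(unfold(T, m), compute_uv=False).sum()
          for m in range(3)) / 3.0
print(tnn)
```

Each unfolding exposes the tensor's correlations along one mode, which is why penalizing the matrix rank of every unfolding serves as a surrogate for the (hard-to-compute) tensor rank.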


international conference on neural information processing | 2014

Online Learning for Faulty RBF Networks with the Concurrent Fault

Wai Yan Wan; Chi-Sing Leung; Zi-Fa Han; Ruibin Feng

Although there are some batch mode learning algorithms for handling the weight fault situation, there are few results on online learning for this situation. Besides, a recent article showed that the objective function of the weight fault injection algorithm is not equal to the training error of faulty radial basis function (RBF) networks. This paper proposes an online learning algorithm for handling faulty RBF networks with two types of weight failure. We prove that the trained weight vector converges to the batch mode solution. Our experimental results show that the convergence behavior of the proposed algorithm is better than that of the conventional online weight decay algorithm.


arXiv:eess.SP | 2018

Robust MIMO Radar Target Localization based on Lagrange Programming Neural Network

Hao Wang; Chi-Sing Leung; Hing Cheung So; Junli Liang; Ruibin Feng; Zi-Fa Han


arXiv:eess.IV | 2018

Robust Real-time Ellipse Fitting Based on Lagrange Programming Neural Network and Locally Competitive Algorithm

Hao Wang; Chi-Sing Leung; Hing Cheung So; Junli Liang; Ruibin Feng; Zi-Fa Han


arXiv: Learning | 2018

l0-norm Based Centers Selection for Failure Tolerant RBF Networks

Hao Wang; Chi-Sing Leung; Hing Cheung So; Ruibin Feng; Zi-Fa Han


IEEE Transactions on Neural Networks | 2018

Augmented Lagrange Programming Neural Network for Localization Using Time-Difference-of-Arrival Measurements

Zi-Fa Han; Chi-Sing Leung; Hing Cheung So; Anthony G. Constantinides

Collaboration


Zi-Fa Han's top co-authors:

Chi-Sing Leung (City University of Hong Kong)
Ruibin Feng (City University of Hong Kong)
Hing Cheung So (City University of Hong Kong)
Hao Wang (City University of Hong Kong)
Wai Yan Wan (City University of Hong Kong)
Longting Huang (City University of Hong Kong)
Junli Liang (Northwestern Polytechnical University)
Yi Xiao (City University of Hong Kong)
John Sum (National Chung Hsing University)