
Publication


Featured research published by Dianhui Wang.


International Symposium on Neural Networks | 2005

Protein sequence classification using extreme learning machine

Dianhui Wang; Guang-Bin Huang

Traditionally, two protein sequences are classified into the same class if they show high homology in terms of feature patterns extracted through sequence alignment algorithms. These algorithms compare an unseen protein sequence with all the identified protein sequences and return the highest-scoring matches. Because protein sequence databases are very large, exhaustive comparison against all existing protein sequences is very time consuming. There is therefore a need for an improved classification system that identifies protein sequences effectively. In this paper, a recently developed machine learning algorithm, the extreme learning machine (ELM), is used to classify protein sequences from ten super-family classes downloaded from a public domain database. A comparative study of system performance is conducted between ELM and the main conventional neural network classifier, the backpropagation (BP) network. Results show that ELM needs up to four orders of magnitude less training time than the BP network, and its classification accuracy is also higher. For a given network architecture, ELM has no control parameters (e.g., stopping criteria, learning rate, number of learning epochs) to tune manually and can be implemented easily.
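
For readers unfamiliar with the technique, the sketch below illustrates the basic ELM recipe the abstract relies on: hidden-layer weights are drawn at random and fixed, and only the output weights are solved in closed form. All names, sizes, and the synthetic data are illustrative assumptions, not the paper's experimental setup.

```python
# Minimal ELM sketch: random hidden layer, least-squares output weights.
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, Y, n_hidden=200):
    """Fit output weights by least squares; hidden weights stay random."""
    d = X.shape[1]
    W = rng.standard_normal((d, n_hidden))       # random input weights (fixed)
    b = rng.standard_normal(n_hidden)            # random hidden biases (fixed)
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    beta = np.linalg.lstsq(H, Y, rcond=None)[0]  # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

# Toy usage with synthetic data standing in for sequence feature vectors.
X = rng.standard_normal((100, 20))
Y = np.eye(3)[rng.integers(0, 3, size=100)]      # one-hot class labels
W, b, beta = elm_fit(X, Y)
pred = elm_predict(X, W, b, beta)
```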


Information Sciences | 2015

Distributed learning for Random Vector Functional-Link networks

Simone Scardapane; Dianhui Wang; Massimo Panella; Aurelio Uncini

This paper aims to develop distributed learning algorithms for Random Vector Functional-Link (RVFL) networks, where training data are distributed under a decentralized information structure. Two algorithms are proposed, using Decentralized Average Consensus (DAC) and Alternating Direction Method of Multipliers (ADMM) strategies, respectively. Both work in a fully distributed fashion and require no coordination from a central agent during the learning process. For distributed learning, the goal is to build a common learner model that optimizes system performance over the whole set of local data. In this work, it is assumed that all stations know the initial input-layer weights, that the output weights of local RVFL networks can be shared only through communication channels among neighboring nodes, and that local datasets are kept strictly private. The proposed learning algorithms are evaluated on five benchmark datasets. Experimental results with comparisons show that the DAC-based algorithm performs favorably in terms of effectiveness, efficiency, and computational complexity, followed by the ADMM-based algorithm, which achieves promising accuracy but a higher computational burden.
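
The sketch below illustrates the decentralized-average-consensus idea underlying the DAC-based algorithm: each node computes local RVFL output weights and then repeatedly averages them with its neighbors, without any central coordinator. The ring topology, mixing rule, and sizes are assumptions for illustration only.

```python
# Decentralized average consensus over local RVFL output-weight vectors.
import numpy as np

rng = np.random.default_rng(1)

n_nodes, n_hidden = 4, 10
# Local output weights, e.g. obtained from local least-squares solves.
beta_local = [rng.standard_normal(n_hidden) for _ in range(n_nodes)]

# Undirected ring topology: node i exchanges messages only with i-1 and i+1.
neighbors = {i: [(i - 1) % n_nodes, (i + 1) % n_nodes] for i in range(n_nodes)}

beta = [b.copy() for b in beta_local]
for _ in range(50):                                  # consensus iterations
    new_beta = []
    for i in range(n_nodes):
        msgs = [beta[j] for j in neighbors[i]]       # neighbor exchanges only
        new_beta.append((beta[i] + sum(msgs)) / (1 + len(msgs)))
    beta = new_beta

# All nodes converge towards the average of the local solutions.
print(np.allclose(beta[0], np.mean(beta_local, axis=0), atol=1e-3))
```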


Neurocomputing | 2013

Evolutionary extreme learning machine ensembles with size control

Dianhui Wang; Monther Alhamdoosh

Ensemble learning aims to improve the generalization power and reliability of learner models through sampling and optimization techniques. It has been shown that an ensemble constructed from a selective collection of base learners compares favorably with one built from the full pool. However, effective construction of such an ensemble from a given learner pool is still an open problem. This paper presents an evolutionary approach for constituting extreme learning machine (ELM) ensembles. The proposed algorithm employs model diversity as the fitness function to direct the selection of base learners, and produces an optimal solution with ensemble size control. A comprehensive comparison is carried out, where the basic ELM is used to generate a set of neural networks and 12 benchmark regression datasets are employed in simulations. The reported results demonstrate that the proposed method outperforms other ensembling techniques, including simple averaging, bagging, and AdaBoost, in terms of both effectiveness and efficiency.
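
As a rough illustration of selective ensembling with size control, the toy sketch below searches over 0/1 selection masks of base-learner predictions with a simple genetic-style loop. Note that the paper uses model diversity as the fitness function; here a plain validation-error-plus-size-penalty fitness stands in, and all names and constants are assumptions.

```python
# Toy genetic-style selection of a size-controlled ELM ensemble.
import numpy as np

rng = np.random.default_rng(2)

n_models, n_val = 20, 200
y_val = rng.standard_normal(n_val)
# Predictions of each base learner on a validation set (synthetic stand-ins).
preds = y_val + 0.3 * rng.standard_normal((n_models, n_val))

def fitness(mask):
    if mask.sum() == 0:
        return np.inf
    ens = preds[mask.astype(bool)].mean(axis=0)   # simple-average ensemble
    mse = np.mean((ens - y_val) ** 2)
    return mse + 0.001 * mask.sum()               # size-control penalty

pop = rng.integers(0, 2, size=(30, n_models))     # population of selection masks
for _ in range(100):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[:10]]        # keep the fittest masks
    children = parents[rng.integers(0, 10, size=20)].copy()
    flips = rng.random(children.shape) < 0.05     # mutation: random bit flips
    children[flips] ^= 1
    pop = np.vstack([parents, children])

best = pop[np.argmin([fitness(m) for m in pop])]
print("selected base learners:", np.flatnonzero(best))
```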


Neurocomputing | 2011

A new robust training algorithm for a class of single-hidden layer feedforward neural networks

Zhihong Man; Kevin Lee; Dianhui Wang; Zhenwei Cao; Chunyan Miao

A robust training algorithm for a class of single-hidden layer feedforward neural networks (SLFNs) with linear nodes and an input tapped-delay-line memory is developed in this paper. To remove the effects of input disturbances and reduce both the structural and empirical risks of the SLFN, the input weights are assigned such that the hidden layer acts as a pre-processor, and the output weights are then trained to minimize the weighted sum of squared output errors together with the weighted sum of squared output weights. The performance of an SLFN-based signal classifier trained with the proposed robust algorithm is studied in the simulation section to show the effectiveness and efficiency of the new scheme.
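
The output-weight step described above amounts to a weighted ridge regression on the hidden-layer outputs. A minimal sketch follows, with illustrative symbols c (per-sample error weights) and lam (weight-decay coefficient); these are assumptions, not the paper's notation.

```python
# Weighted ridge regression for the output weights of a fixed hidden layer.
import numpy as np

rng = np.random.default_rng(3)

H = rng.standard_normal((50, 10))   # hidden-layer outputs (pre-processed inputs)
y = rng.standard_normal(50)         # desired outputs
c = np.ones(50)                     # per-sample error weights
lam = 0.1                           # weight-decay coefficient

# beta = (H^T C H + lam I)^{-1} H^T C y, with C = diag(c)
A = H.T @ (c[:, None] * H) + lam * np.eye(H.shape[1])
beta = np.linalg.solve(A, H.T @ (c * y))
```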


Neurocomputing | 2006

Letters: A neuro-fuzzy approach for diagnosis of antibody deficiency syndrome

Joon S. Lim; Dianhui Wang; Yong-soo Kim; Sudhir Gupta

This paper presents a neuro-fuzzy approach for the diagnosis of antibody deficiency syndrome, using a new neuro-fuzzy network with fuzzy activation functions (FAFs) in the hidden layer. The FAFs, which capture essential information about pattern distributions, can be constructed adaptively from training examples. To improve generalization capability and reduce model complexity, a heuristic feature selection method is proposed based on measuring the size of the non-overlapped areas of the FAFs. The effectiveness of the proposed techniques is investigated on an immunology clinical data set collected from the University of California, Irvine (UCI) immunology laboratory.


IEEE Transactions on Fuzzy Systems | 2011

Extraction and Adaptation of Fuzzy Rules for Friction Modeling and Control Compensation

Yong-Fu Wang; Dianhui Wang; Tianyou Chai

Modeling of friction forces has been a challenging task in mechanical engineering. Parameterized approaches to friction modeling struggle to achieve satisfactory performance because of nonlinearity and uncertainties in dynamical systems. This paper aims to develop adaptive fuzzy friction models using data-mining techniques and system theory. The main technical contributions are twofold: extraction of fuzzy rules to formulate a static fuzzy friction model, and adaptation of this fuzzy friction model using Lyapunov stability theory in conjunction with control compensation for a typical motion dynamics. The proposed framework demonstrates a successful application of adaptive data-mining techniques in engineering. A single-degree-of-freedom mechanical system is employed as an experimental model in simulation studies. Results show that the proposed fuzzy friction model is promising for the design of uncertain mechanical control systems.


Neural Networks | 2016

A decentralized training algorithm for Echo State Networks in distributed big data applications

Simone Scardapane; Dianhui Wang; Massimo Panella

The current big data deluge requires innovative solutions for performing efficient inference on large, heterogeneous amounts of information. Apart from the known challenges deriving from high volume and velocity, real-world big data applications may impose additional technological constraints, including the need for a fully decentralized training architecture. While several alternatives exist for training feed-forward neural networks in such a distributed setting, less attention has been devoted to the case of decentralized training of recurrent neural networks (RNNs). In this paper, we propose such an algorithm for a class of RNNs known as Echo State Networks. The algorithm is based on the well-known Alternating Direction Method of Multipliers optimization procedure. It is formulated only in terms of local exchanges between neighboring agents, without reliance on a coordinating node. Additionally, it does not require the communication of training patterns, which is a crucial component in realistic big data implementations. Experimental results on large scale artificial datasets show that it compares favorably with a fully centralized implementation, in terms of speed, efficiency and generalization accuracy.
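
A compact sketch of consensus ADMM for a linear readout, in the spirit of the decentralized training described above, is given below. In the paper the global averaging step is itself realized through neighbor exchanges; here it is shown as a plain mean for brevity, and all names and constants are illustrative.

```python
# Consensus ADMM: local ridge solves plus agreement on a shared readout z.
import numpy as np

rng = np.random.default_rng(4)

n_agents, n_feat, rho, lam = 4, 8, 1.0, 0.1
# Each agent holds local reservoir states H_i and targets y_i (synthetic here).
data = [(rng.standard_normal((40, n_feat)), rng.standard_normal(40))
        for _ in range(n_agents)]

w = [np.zeros(n_feat) for _ in range(n_agents)]   # local readouts
u = [np.zeros(n_feat) for _ in range(n_agents)]   # scaled dual variables
z = np.zeros(n_feat)                              # shared consensus readout

for _ in range(100):
    for i, (H, y) in enumerate(data):             # local ridge solves
        A = H.T @ H + rho * np.eye(n_feat)
        w[i] = np.linalg.solve(A, H.T @ y + rho * (z - u[i]))
    # Consensus update (shown as a global mean for brevity).
    z = rho * sum(wi + ui for wi, ui in zip(w, u)) / (2 * lam + n_agents * rho)
    for i in range(n_agents):                     # dual ascent
        u[i] += w[i] - z
```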


IEEE Transactions on Neural Networks | 2012

Global Convergence of Online BP Training With Dynamic Learning Rate

Rui Zhang; Zongben Xu; Guang-Bin Huang; Dianhui Wang

The online backpropagation (BP) training procedure has been extensively explored in scientific research and engineering applications. One of the main factors affecting the performance of the online BP training is the learning rate. This paper proposes a new dynamic learning rate which is based on the estimate of the minimum error. The global convergence theory of the online BP training procedure with the proposed learning rate is further studied. It is proved that: 1) the error sequence converges to the global minimum error; and 2) the weight sequence converges to a fixed point at which the error function attains its global minimum. The obtained global convergence theory underlies the successful applications of the online BP training procedure. Illustrative examples are provided to support the theoretical analysis.
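
One classical way to tie the learning rate to an estimate of the minimum error is a Polyak-type step size; the sketch below illustrates that idea on a simple online least-squares problem. It is an illustration of the general idea only, not the specific rate or convergence construction of the paper.

```python
# Online gradient updates with an error-estimate-dependent (Polyak-type) step.
import numpy as np

rng = np.random.default_rng(5)

X = rng.standard_normal((200, 5))
y = X @ rng.standard_normal(5)        # realizable targets, so the minimum error is 0
w = np.zeros(5)
E_min = 0.0                           # estimate of the minimum error

for t in range(200):                  # online (per-sample) updates
    x_t, y_t = X[t], y[t]
    err = x_t @ w - y_t
    E = 0.5 * err ** 2                # instantaneous error
    grad = err * x_t
    g2 = grad @ grad
    if g2 > 1e-12:
        eta = (E - E_min) / g2        # dynamic, error-dependent learning rate
        w -= eta * grad
```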


Wiley Interdisciplinary Reviews-Data Mining and Knowledge Discovery | 2017

Randomness in neural networks: an overview

Simone Scardapane; Dianhui Wang

Neural networks, as powerful tools for data mining and knowledge engineering, can learn from data to build feature-based classifiers and nonlinear predictive models. Training neural networks involves the optimization of nonconvex objective functions, and usually the learning process is costly and infeasible for applications associated with data streams. A possible, albeit counterintuitive, alternative is to randomly assign a subset of the networks' weights so that the resulting optimization task can be formulated as a linear least-squares problem. This methodology can be applied to both feedforward and recurrent networks, and similar techniques can be used to approximate kernel functions. Many experimental results indicate that such randomized models can reach sound performance compared with fully adaptable ones, with a number of favorable benefits, including (1) simplicity of implementation, (2) faster learning with less human intervention, and (3) the possibility of leveraging the overall body of linear regression and classification algorithms (e.g., ℓ1-norm minimization for obtaining sparse formulations). These properties make this class of neural networks attractive and valuable to the data mining community, particularly for handling large-scale data mining in real time. However, the literature in the field is extremely vast and fragmented, with many results being reintroduced multiple times under different names. This overview aims to provide a self-contained, uniform introduction to the different ways in which randomization can be applied to the design of neural networks and kernel functions. A clear exposition of the basic framework underlying all these approaches helps to clarify innovative lines of research, open problems, and, most importantly, foster the exchange of well-known results across different communities. WIREs Data Mining Knowl Discov 2017, 7:e1200. doi: 10.1002/widm.1200
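
As one concrete instance of the randomization techniques surveyed, the sketch below uses random Fourier features to approximate an RBF kernel, reducing kernel models to linear least squares on randomized features. Dimensions and the bandwidth gamma are illustrative choices.

```python
# Random Fourier features approximating the Gaussian (RBF) kernel.
import numpy as np

rng = np.random.default_rng(6)

def random_fourier_features(X, n_feat=500, gamma=1.0):
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_feat))  # random frequencies
    b = rng.uniform(0, 2 * np.pi, size=n_feat)                  # random phases
    return np.sqrt(2.0 / n_feat) * np.cos(X @ W + b)

X = rng.standard_normal((100, 3))
Z = random_fourier_features(X)
# Z @ Z.T approximates the kernel matrix exp(-gamma * ||x - x'||^2).
```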


Information Sciences | 2015

A probabilistic learning algorithm for robust modeling using neural networks with random weights

Feilong Cao; Hailiang Ye; Dianhui Wang

Robust modeling approaches have received considerable attention owing to their practical value in dealing with outliers in data. This paper proposes a probabilistic robust learning algorithm for neural networks with random weights (NNRWs) to improve modeling performance. The robust NNRW model is trained by optimizing a hybrid regularization loss function informed by the sparsity of outliers and compressive sensing theory. The well-known expectation-maximization (EM) algorithm is employed to implement the proposed algorithm under some assumptions on the noise distribution. Experimental results on function approximation as well as UCI data sets for regression and classification demonstrate that the proposed algorithm is promising, with good potential for real-world applications.
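
A rough sketch of the flavor of robust refitting described above: the output weights of an NNRW are re-estimated by iteratively down-weighting likely outliers, an EM-like alternation. The Gaussian-style weighting rule and all constants below are assumptions for illustration, not the paper's exact loss or algorithm.

```python
# Iteratively re-weighted output-weight estimation for an NNRW with outliers.
import numpy as np

rng = np.random.default_rng(7)

H = rng.standard_normal((100, 15))          # random-weight hidden-layer outputs
beta_true = rng.standard_normal(15)
y = H @ beta_true + 0.05 * rng.standard_normal(100)
y[:5] += 5.0                                # inject a few gross outliers

beta = np.linalg.lstsq(H, y, rcond=None)[0]
for _ in range(20):
    r = y - H @ beta
    sigma2 = np.median(r ** 2) + 1e-8
    w = np.exp(-r ** 2 / (2 * sigma2))      # E-step-like weights (inliers near 1)
    A = H.T @ (w[:, None] * H) + 1e-3 * np.eye(15)
    beta = np.linalg.solve(A, H.T @ (w * y))   # M-step: weighted ridge refit
```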

Collaboration


Dive into Dianhui Wang's collaboration.

Top Co-Authors

Tianyou Chai (Northeastern University)

Yong-Fu Wang (Northeastern University)

Feilong Cao (China Jiliang University)

Elizabeth Chang (University of New South Wales)

Weitao Li (Hefei University of Technology)

Yajun Zhang (Northeastern University)

Kevin Lee (Swinburne University of Technology)