Md. Faijul Amin
University of Fukui
Publications
Featured research published by Md. Faijul Amin.
Neurocomputing | 2009
Md. Faijul Amin; Kazuyuki Murase
This paper presents a complex-valued neuron (CVN) model for real-valued classification problems, introducing two new activation functions. In this CVN model, each real-valued input is encoded into a phase between 0 and π of a complex number of unit magnitude, and multiplied by a complex-valued weight. The weighted sum of inputs is then fed to an activation function. Both of the proposed activation functions map complex values into real values, and their role is to divide the net-input (weighted sum) space into multiple regions representing the classes of input patterns. Gradient-based learning rules are derived for each of the activation functions. The ability of such a CVN is discussed and tested on two-class problems, such as two- and three-input Boolean problems and symmetry detection in binary sequences. We show here that the CVN with either activation function can form proper boundaries for these linear and nonlinear problems. For solving n-class problems, a complex-valued neural network (CVNN) consisting of n CVNs is also studied, where the neuron exhibiting the largest output among all the neurons determines the output class. We tested such single-layered CVNNs on several real-world benchmark problems. The results show that the classification ability of the single-layered CVNN on unseen data is comparable to that of a conventional real-valued neural network (RVNN) having one hidden layer. Moreover, convergence of the CVNN is much faster than that of the RVNN in most cases.
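As an illustration of the encoding step described above, here is a minimal NumPy sketch of a CVN forward pass. It is a sketch under stated assumptions, not the paper's code: the squared-magnitude activation is only an illustrative stand-in for the paper's two proposed activation functions, and the weights are arbitrary.

```python
import numpy as np

def cvn_forward(x, w, bias):
    # Phase encoding: each real input in [0, 1] becomes a point on the
    # unit circle with phase in [0, pi], i.e. a unit-magnitude complex number.
    encoded = np.exp(1j * np.pi * np.asarray(x, dtype=float))
    net = np.dot(w, encoded) + bias          # complex-valued net input
    # Illustrative real-valued activation: squared magnitude of the net input
    # (a stand-in for the paper's two proposed activations).
    return np.abs(net) ** 2

w = np.array([0.3 - 0.8j, -0.5 + 0.2j])      # arbitrary complex weights
bias = 0.1 + 0.1j
print(cvn_forward([0, 1], w, bias))          # e.g. a two-input Boolean pattern
```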
Neurocomputing | 2009
Md. Faijul Amin; Md. Monirul Islam; Kazuyuki Murase
This paper presents ensemble approaches for single-layered complex-valued neural networks (CVNNs) to solve real-valued classification problems. Each component CVNN of an ensemble uses a recently proposed activation function for its complex-valued neurons (CVNs). A gradient-descent based learning algorithm was used to train the component CVNNs. We applied two ensemble methods, negative correlation learning and bagging, to create the ensembles. Experimental results on a number of real-world benchmark problems showed a substantial performance improvement of the ensembles over individual single-layered CVNN classifiers. Furthermore, the generalization performance was nearly equivalent to that obtained by ensembles of real-valued multilayer neural networks.
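A minimal sketch of the bagging side of this approach, under stated assumptions: `train_cvnn` and `predict` are hypothetical stand-ins for training and evaluating one single-layered CVNN component, and component predictions are integer class labels; only the resampling and voting logic is shown.

```python
import numpy as np

def bagging_ensemble(X, y, train_cvnn, n_components=10, seed=0):
    """Train each component on a bootstrap resample of the training data."""
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(n_components):
        idx = rng.integers(0, len(X), size=len(X))   # sample with replacement
        models.append(train_cvnn(X[idx], y[idx]))    # hypothetical trainer
    return models

def ensemble_predict(models, predict, X):
    """Combine components by majority vote over integer class labels."""
    votes = np.stack([predict(m, X) for m in models])  # (n_components, n_samples)
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```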
systems man and cybernetics | 2009
Md. Monirul Islam; Md. Faijul Amin; Xin Yao; Kazuyuki Murase
The generalization ability of artificial neural networks (ANNs) depends greatly on their architectures. Constructive algorithms provide an attractive, automatic way of determining a near-optimal ANN architecture for a given problem. Several such algorithms have been proposed in the literature and have shown their effectiveness. This paper presents a new constructive algorithm (NCA) for automatically determining ANN architectures. Unlike most previous studies on determining ANN architectures, NCA emphasizes both architectural adaptation and functional adaptation in its architecture determination process. It uses a constructive approach to determine the number of hidden layers in an ANN and the number of neurons in each hidden layer. To achieve functional adaptation, NCA trains the hidden neurons in the ANN using different training sets, created by employing a concept similar to that used in boosting algorithms. The purpose of using different training sets is to encourage hidden neurons to learn different parts or aspects of the training data, so that the ANN can learn the whole training data in a better way. In this paper, the convergence and computational issues of NCA are analytically studied. The computational complexity of NCA is found to be O(W × P_t × τ), where W is the number of weights in the ANN, P_t is the number of training examples, and τ is the number of training epochs. This complexity has the same order as that required by the backpropagation learning algorithm for training a fixed ANN architecture. A set of eight classification and two approximation benchmark problems was used to evaluate the performance of NCA. The experimental results show that NCA can produce ANN architectures with fewer hidden neurons and better generalization ability than existing constructive and nonconstructive algorithms.
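A generic constructive loop in the spirit described above, as a simplified sketch: `make_ann`, `train`, and `val_error` are hypothetical placeholders, and NCA's actual adaptation criteria and boosting-style training-set creation are more involved than this.

```python
def constructive_search(make_ann, train, val_error, max_layers=3, max_hidden=20):
    """Grow hidden layers/neurons while validation error keeps improving."""
    layers = [1]                              # start: one hidden layer, one neuron
    best_arch, best_err = list(layers), float("inf")
    while True:
        ann = train(make_ann(layers))         # train the candidate architecture
        err = val_error(ann)
        if err < best_err:                    # candidate generalizes better
            best_arch, best_err = list(layers), err
            if layers[-1] < max_hidden:
                layers[-1] += 1               # add a neuron to the current layer
                continue
        if len(layers) < max_layers:
            layers.append(1)                  # no gain: open a new hidden layer
        else:
            return best_arch, best_err        # nothing left to grow
```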
international conference on neural information processing | 2011
Md. Faijul Amin; Muhammad Ilias Amin; Ahmed Yarub H. Al-Nuaimi; Kazuyuki Murase
Complex-valued neural networks (CVNNs) bring in nonholomorphic functions in two ways: (i) through their loss functions and (ii) through the widely used activation functions. The derivatives of such functions are defined in Wirtinger calculus. In this paper, we derive two popular algorithms, gradient descent and the Levenberg-Marquardt (LM) algorithm, for parameter optimization in feedforward CVNNs using Wirtinger calculus, which is simpler than the conventional derivation that treats the problem in the real domain. While deriving the LM algorithm, we solve and use the result of a least squares problem in the complex domain, $\min_{\mathbf{z}} \|\mathbf{b} - (\mathbf{A}\mathbf{z} + \mathbf{B}\mathbf{z}^*)\|$, which is more general than $\min_{\mathbf{z}} \|\mathbf{b} - \mathbf{A}\mathbf{z}\|$. Computer simulation results show that, as in the real-valued case, the complex LM algorithm provides much faster learning with higher accuracy than the complex gradient descent algorithm.
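One way to check this least squares subproblem numerically, as a minimal NumPy sketch (not the paper's derivation): rewrite $\min_{\mathbf{z}} \|\mathbf{b} - (\mathbf{A}\mathbf{z} + \mathbf{B}\mathbf{z}^*)\|$ as an equivalent real least squares problem in $x = \operatorname{Re}(\mathbf{z})$ and $y = \operatorname{Im}(\mathbf{z})$, using $\mathbf{A}\mathbf{z} + \mathbf{B}\mathbf{z}^* = (\mathbf{A}+\mathbf{B})x + i(\mathbf{A}-\mathbf{B})y$. The random $\mathbf{A}$, $\mathbf{B}$, $\mathbf{b}$ below are placeholder data.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 8, 3
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
B = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
b = rng.standard_normal(m) + 1j * rng.standard_normal(m)

# A z + B conj(z) = P x + Q y, with P = A + B and Q = 1j * (A - B)
P, Q = A + B, 1j * (A - B)
M = np.block([[P.real, Q.real],
              [P.imag, Q.imag]])                # real (2m x 2n) system matrix
rhs = np.concatenate([b.real, b.imag])

xy, *_ = np.linalg.lstsq(M, rhs, rcond=None)    # real least squares solve
z = xy[:n] + 1j * xy[n:]                        # recover the complex minimizer
print(np.linalg.norm(b - (A @ z + B @ z.conj())))  # residual of the complex problem
```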
international conference on neural information processing | 2011
Abdul Rahman Hafiz; Md. Faijul Amin; Kazuyuki Murase
Neural Networks | 2012
Md. Faijul Amin; Ramaswamy Savitha; Muhammad Ilias Amin; Kazuyuki Murase
ieee international conference on fuzzy systems | 2012
Pintu Chandra Shill; Md. Faijul Amin; M. A. H. Akhand; Kazuyuki Murase
international symposium on neural networks | 2008
Md. Monirul Islam; Md. Faijul Amin; Suman Ahmmed; Kazuyuki Murase
international symposium on neural networks | 2011
Md. Faijul Amin; Ramaswamy Savitha; Muhammad Ilias Amin; Kazuyuki Murase
ieee international conference on fuzzy systems | 2011
Pintu Chandra Shill; Kishore Kumar Pal; Md. Faijul Amin; Kazuyuki Murase