
Publications


Featured research published by Bogdan M. Wilamowski.


IEEE Transactions on Neural Networks | 2010

Improved Computation for Levenberg–Marquardt Training

Bogdan M. Wilamowski; Hao Yu

The improved computation presented in this paper aims to optimize the neural network learning process using the Levenberg-Marquardt (LM) algorithm. The quasi-Hessian matrix and gradient vector are computed directly, without Jacobian matrix multiplication and storage, which solves the memory-limitation problem of LM training. Because the quasi-Hessian matrix is symmetric, only the elements of its upper (or lower) triangular array need to be calculated. Training speed therefore improves significantly, both because a smaller array is stored in memory and because fewer operations are needed to compute the quasi-Hessian matrix. The gains in memory and time efficiency are especially pronounced when training with large numbers of patterns.
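The core idea — accumulating the quasi-Hessian Q = JᵀJ and gradient g = Jᵀe one pattern at a time, so the full Jacobian is never stored — can be sketched in NumPy. This is a minimal illustration of that accumulation scheme; the function names and the use of full (rather than triangular-only) outer products are my simplifications, not the authors' code:

```python
import numpy as np

def accumulate_quasi_hessian(jacobian_rows, errors):
    """Accumulate Q = J^T J and g = J^T e one pattern at a time,
    so the full P x N Jacobian is never held in memory."""
    n = len(jacobian_rows[0])
    Q = np.zeros((n, n))   # quasi-Hessian, symmetric
    g = np.zeros(n)        # gradient vector
    for j_p, e_p in zip(jacobian_rows, errors):
        j_p = np.asarray(j_p)
        # Only the upper triangle really needs computing; symmetry
        # gives the other half for free (full outer() shown for clarity).
        Q += np.outer(j_p, j_p)
        g += j_p * e_p
    return Q, g

def lm_step(Q, g, mu):
    """One Levenberg-Marquardt weight update: dw = -(Q + mu*I)^-1 g."""
    n = Q.shape[0]
    return -np.linalg.solve(Q + mu * np.eye(n), g)
```

Each `j_p` is the Jacobian row for a single pattern, so memory use stays O(N²) in the number of weights regardless of how many training patterns there are.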


IEEE Transactions on Industrial Electronics | 2011

Advantages of Radial Basis Function Networks for Dynamic System Design

Hao Yu; Tiantian Xie; Stanislaw Paszczynski; Bogdan M. Wilamowski

Radial basis function (RBF) networks offer easy design, good generalization, strong tolerance to input noise, and online learning ability. These properties make RBF networks well suited to designing flexible control systems. This paper reviews different approaches to designing and training RBF networks. A recently developed algorithm for building compact RBF networks and training them efficiently is introduced. Finally, several problems are used to test the main properties of RBF networks, including their generalization ability, tolerance to input noise, and online learning ability. RBF networks are also compared with traditional neural networks and fuzzy inference systems.
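For context, the forward pass of a Gaussian RBF network of the kind reviewed here is only a few lines; this sketch (the function name and single-output form are my assumptions, not from the paper) computes the output as a weighted sum of Gaussian kernels centered at the hidden units:

```python
import numpy as np

def rbf_forward(x, centers, widths, weights, bias=0.0):
    """Forward pass of a Gaussian RBF network:
    y = bias + sum_k w_k * exp(-||x - c_k||^2 / (2 * s_k^2))."""
    d2 = np.sum((centers - x) ** 2, axis=1)   # squared distance to each center
    phi = np.exp(-d2 / (2.0 * widths ** 2))   # Gaussian hidden activations
    return bias + phi @ weights
```

Design of such a network then amounts to choosing the number of centers, their locations, and the widths — which is what the training algorithms surveyed in the paper address.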


IEEE Industrial Electronics Magazine | 2009

Neural network architectures and learning algorithms

Bogdan M. Wilamowski

Neural networks are the topic of this paper. Neural networks are very powerful nonlinear signal processors, but the results obtained are often far from satisfactory. The purpose of this article is to evaluate the reasons for these frustrations and to show how to make neural networks successful. The main challenges of neural network applications are: (1) Which neural network architectures should be used? (2) How large should a neural network be? (3) Which learning algorithms are most suitable? The multilayer perceptron (MLP) architecture is unfortunately the preferred neural network topology of most researchers. It is the oldest neural network architecture, and it is compatible with all training software. However, the MLP topology is less powerful than other topologies such as the bridged multilayer perceptron (BMLP), where connections across layers are allowed. The error back-propagation (EBP) algorithm is the most popular learning algorithm, but it is very slow and seldom gives adequate results. The EBP training process requires 100-1,000 times more iterations than more advanced algorithms such as the Levenberg-Marquardt (LM) or neuron-by-neuron (NBN) algorithms. Most importantly, the EBP algorithm is not only slow but often unable to find solutions for close-to-optimum neural networks. The paper describes and compares several learning algorithms.


international symposium on neural networks | 2001

An algorithm for fast convergence in training neural networks

Bogdan M. Wilamowski; Serdar Iplikci; Okyay Kaynak; Mehmet Önder Efe

In this work, two modifications of the Levenberg-Marquardt (LM) algorithm for feedforward neural networks are studied. One modification is made to the performance index, while the other concerns the calculation of gradient information. The modified algorithm converges faster than the standard LM method, is less computationally intensive, and requires less memory. The performance of the algorithm has been checked on several example problems.


IEEE Transactions on Industrial Informatics | 2012

Selection of Proper Neural Network Sizes and Architectures—A Comparative Study

David K. Hunter; Hao Yu; Michael S. Pukish; Janusz Kolbusz; Bogdan M. Wilamowski

One of the major difficulties facing researchers using neural networks is the selection of the proper size and topology of the networks. The problem is even more complex because often when the neural network is trained to very small errors, it may not respond properly for patterns not used in the training process. A partial solution proposed to this problem is to use the least possible number of neurons along with a large number of training patterns. The discussion consists of three main parts: first, different learning algorithms, including the Error Back Propagation (EBP) algorithm, the Levenberg-Marquardt (LM) algorithm, and the recently developed Neuron-by-Neuron (NBN) algorithm, are discussed and compared based on several benchmark problems; second, the efficiency of different network topologies, including traditional Multilayer Perceptron (MLP) networks, Bridged Multilayer Perceptron (BMLP) networks, and Fully Connected Cascade (FCC) networks, is evaluated by both theoretical analysis and experimental results; third, the generalization issue is discussed to illustrate the importance of choosing the proper size of neural networks.


IEEE Transactions on Industrial Electronics | 2008

Computing Gradient Vector and Jacobian Matrix in Arbitrarily Connected Neural Networks

Bogdan M. Wilamowski; Nicholas J. Cotton; Okyay Kaynak; Günhan Dündar

This paper describes a new algorithm with neuron-by-neuron computation methods for the gradient vector and the Jacobian matrix. The algorithm can handle networks with arbitrarily connected neurons. The training speed is comparable with the Levenberg-Marquardt algorithm, which is currently considered by many to be the fastest algorithm for neural network training. More importantly, it is shown that computing the Jacobian, which is required for second-order algorithms, has a computational complexity similar to that of computing the gradient for first-order learning methods. The new algorithm is implemented in the newly developed software, Neural Network Trainer, which has the unique capability of handling arbitrarily connected networks. Such networks with connections across layers can be more efficient than commonly used multilayer perceptron networks.
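The distinction drawn here — first-order methods need only the gradient g = Jᵀe, while second-order methods need the Jacobian J itself — can be made concrete on a toy residual model. The linear model and names below are illustrative assumptions, not the paper's arbitrarily connected network:

```python
import numpy as np

# Illustrative residual model: e(w) = A @ w - t (a stand-in for a
# network's per-pattern errors as a function of the weight vector w).
A = np.array([[1.0, 2.0], [3.0, -1.0], [0.5, 0.5]])
t = np.array([1.0, 0.0, 2.0])

def residuals(w):
    """One error value per training pattern."""
    return A @ w - t

def jacobian(w):
    """J[p, i] = d e_p / d w_i; constant for this linear model.
    Second-order (LM-style) methods need this full matrix."""
    return A

def gradient(w):
    """g = J^T e, the only quantity first-order (EBP-style) methods need,
    since the SSE cost is E(w) = 0.5 * sum(e_p^2)."""
    return jacobian(w).T @ residuals(w)
```

An LM step would use `jacobian(w)` directly, while a gradient-descent step uses only `gradient(w)` — the paper's point is that in its neuron-by-neuron scheme the two computations cost about the same.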


IEEE Transactions on Neural Networks | 2010

Neural Network Learning Without Backpropagation

Bogdan M. Wilamowski; Hao Yu

The method introduced in this paper allows the training of arbitrarily connected neural networks; therefore, more powerful neural network architectures with connections across layers can be trained efficiently. The proposed method also simplifies neural network training by using forward-only computation instead of the traditionally used forward and backward computation.
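The forward-only idea — computing derivatives during the forward sweep rather than in a separate backward pass — can be illustrated on a toy chain network. This sketch propagates d(output)/d(input) forward alongside the activations; the paper's actual method propagates neuron-to-neuron gains to obtain weight derivatives in arbitrarily connected networks, so this is only a stand-in:

```python
import numpy as np

def forward_only_tanh_chain(ws, x):
    """Propagate both the value and d(output)/d(input) in a single forward
    sweep through the chain y = tanh(w_n * ... * tanh(w_1 * x)).
    No backward pass is needed."""
    y, dy = x, 1.0
    for w in ws:
        net = w * y
        dnet = w * dy                # chain rule, applied forward
        y = np.tanh(net)
        dy = (1.0 - y ** 2) * dnet   # tanh'(net) = 1 - tanh(net)^2
    return y, dy
```

The derivative is available the moment the output is, which is what makes a forward-only scheme attractive for architectures where a clean layer-by-layer backward pass is awkward.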


IEEE Industrial Electronics Magazine | 2007

Major challenges of IEEE Transactions on Industrial Electronics [My View]

Bogdan M. Wilamowski

We are observing a significant increase of interest in the area of industrial electronics. The number of manuscripts submitted to IES conferences is doubling every two years: in 2004 we received fewer than 2,000 papers, and in 2006 we received 4,096. During the last couple of years we created two new publications, IEEE Transactions on Industrial Informatics and IEEE Industrial Electronics Magazine, but this has not reduced the pressure on IEEE Transactions on Industrial Electronics (TIE). In 2006 we received 1,286 new manuscripts. By the middle of April 2007 we had already received 520 new manuscripts, and we expect about 1,600 by the end of the year. Assuming a 25% acceptance rate, these 1,600 manuscripts will result in 400 accepted papers. In 2006 we published exactly 200 papers in 1,974 pages. On 1 January 2007 we had a backlog of 330 accepted papers not yet published. This means that in 2007 we may need room for 730 papers (about 6,500 pages), but our approved IEEE budget allows only 2,000 pages for 2007. There is no good solution, but in order to resolve the issue a number of decisions have been made.
■ IES AdCom, during its November 2006 meeting, allocated an additional US$170,000 to increase the limit of printed pages in 2007 to 3,500.
■ IES AdCom, during its March 2007 meeting, approved 4,500 pages for 2008.
■ We negotiated with IEEE to be able to publish our papers on IEEE Xplore several months ahead of the printing schedule. The drawback of this approach is that papers in an issue will appear in random order, because page numbers will be assigned as soon as IEEE receives the final manuscripts.
■ The following priorities were established to eliminate the backlog:
• the highest printing priority is given to up-to-date manuscripts with references to recently published work;
• shorter papers receive higher priority, so that more papers can be published;
• higher priority is given to manuscripts within the main scope of TIE.
The scope of TIE is printed on the journal cover and on the journal Web page (http://tie.ieee-ies.org/tie/). This description is not detailed enough, and many subjects overlap with other journals. Moreover, it is sometimes not easy to define the scope of a manuscript. One way to evaluate it objectively is to check whether its references include papers previously published in TIE. The "scope issue" also has another dimension: the Society is being accused of accepting manuscripts for TIE that were already rejected elsewhere. This is not a fair statement, because TIE is similar to other IEEE transactions. Most of the journals reject about 70% of submitted manuscripts, and this pile of already-rejected papers is growing. Some of these rejected manuscripts are resubmitted to other journals. This is a headache not only for TIE but also for other journals. We have considered sharing our database with other transactions, such as IEEE Transactions on Industry Applications or IEEE Transactions on Power Electronics, but unfortunately we cannot do that without jeopardizing authors' rights to confidentiality. Therefore, we have to rely on the following indicators typical of such manuscripts:
■ the subject is on the borderline of the TIE scope;
■ there are not enough citations to work previously published in TIE;
■ there are no recent references (within 18 months);
■ the manuscript is not formatted correctly;
■ there are no citations to IES conferences;
■ the authors are not members of the Industrial Electronics Society.
These indicators can only be used as warning signs and, of course, do not by themselves warrant rejection of a manuscript. The current review criteria include the following questions: What are the chances of the manuscript being cited in the future? Does the manuscript clearly describe its accomplishments? How significant is the contribution?


conference of the industrial electronics society | 2002

Fuzzy system based maximum power point tracking for PV system

Bogdan M. Wilamowski; Xiangli Li


international symposium on neural networks | 2001

Implementing a fuzzy system on a field programmable gate array

Michael McKenna; Bogdan M. Wilamowski

Fuzzy controllers are traditionally implemented in a microprocessor, and they produce relatively rough control surfaces. The purpose of this work is to implement a fuzzy control system in an FPGA and make the resulting control surface as smooth as possible. FPGAs allow designers to create large designs, test them, and make modifications easily and quickly. This approach uses a new weighted-average concept to keep the fuzzy lookup table small while still allowing large inputs: the three or four most significant bits of each input determine the address in the lookup table, and a weighted average is performed using the remaining bits to eliminate roughness.
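The weighted-average lookup scheme described above can be sketched in software. This Python stand-in (the function name and the bilinear-interpolation reading of "weighted average" are my assumptions, not the authors' FPGA design) uses the high-order bits of each 8-bit input as the table address and the low-order bits as interpolation weights:

```python
def fuzzy_lookup(x, y, table, low_bits=4):
    """Approximate a control surface with a small lookup table addressed by
    the top bits of each input; the remaining low bits bilinearly
    interpolate between neighboring table entries to smooth the surface."""
    mask = (1 << low_bits) - 1
    xi, xf = x >> low_bits, x & mask   # address bits, fraction bits
    yi, yf = y >> low_bits, y & mask
    # Fetch the four surrounding table entries (edges clamped).
    n = len(table) - 1
    x1, y1 = min(xi + 1, n), min(yi + 1, n)
    c00, c10 = table[xi][yi], table[x1][yi]
    c01, c11 = table[xi][y1], table[x1][y1]
    # Weighted average using the low-order bits as interpolation weights;
    # in hardware these are shifts, adds, and one small multiply per axis.
    scale = 1 << low_bits
    top = c00 * (scale - xf) + c10 * xf
    bot = c01 * (scale - xf) + c11 * xf
    return (top * (scale - yf) + bot * yf) / (scale * scale)
```

With `low_bits=4`, an 8-bit input needs only a 16x16 table, yet the interpolation yields a smooth surface instead of the staircase a raw lookup would produce.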

Collaboration


Dive into Bogdan M. Wilamowski's collaboration network.

Top Co-Authors


Janusz Kolbusz

Rzeszów University of Technology


Milos Manic

Virginia Commonwealth University
